Multi-step optimization strategy for fuel-optimal orbital transfer of low-thrust spacecraft
NASA Astrophysics Data System (ADS)
Rasotto, M.; Armellin, R.; Di Lizia, P.
2016-03-01
An effective method for the design of fuel-optimal transfers in two- and three-body dynamics is presented. The optimal control problem is formulated using the calculus of variations and primer vector theory. This leads to a multi-point boundary value problem (MPBVP), characterized by complex inner constraints and a discontinuous thrust profile. The first issue is addressed by embedding the MPBVP in a parametric optimization problem, which allows the set of transversality constraints to be simplified. The second is solved by representing the discontinuous control with a smooth function depending on a continuation parameter. The resulting trajectory optimization method can handle different intermediate conditions, and no a priori knowledge of the control structure is required. Test cases in both two- and three-body dynamics show the capability of the method in solving complex trajectory design problems.
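The smoothing-by-continuation idea can be illustrated with a minimal sketch (the logistic form and the ρ schedule below are illustrative assumptions, not the authors' exact formulation): a discontinuous bang-bang throttle u ∈ {0, 1}, driven by the sign of a switching function S, is replaced by a sigmoid that sharpens as the continuation parameter ρ → 0.

```python
import math

def smoothed_throttle(S, rho):
    """Smooth stand-in for a bang-bang throttle: u -> 1 where the
    switching function S < 0 (thrust on) and u -> 0 where S > 0
    (coast), converging to the discontinuous profile as rho -> 0."""
    return 1.0 / (1.0 + math.exp(S / rho))

# Continuation: solve the smoothed problem for a large rho, then
# re-solve with each smaller rho, warm-starting from the previous fix.
for rho in (1.0, 0.1, 0.01, 0.001):
    u_on, u_off = smoothed_throttle(-0.5, rho), smoothed_throttle(0.5, rho)
```

Each re-solve starts from the previous solution, so the solver never faces the discontinuous profile directly.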
An Optimal Schedule for Urban Road Network Repair Based on the Greedy Algorithm
Lu, Guangquan; Xiong, Ying; Wang, Yunpeng
2016-01-01
Scheduling the recovery of an urban road network disrupted by rainstorms, snow, and other severe weather, traffic incidents, and other daily events is essential. However, limited studies have investigated this problem. We fill this research gap by proposing an optimal schedule for urban road network repair with limited repair resources, based on the greedy algorithm. Following the basic concept of the greedy algorithm, critical links are given priority in repair. In this study, the critical link for the current network is defined as the damaged link whose restoration minimizes the ratio of the system-wide travel time of the resulting network to that of the worst (fully damaged) network. We re-evaluate the importance of the damaged links after each repair is completed; that is, the critical-link ranking changes along with the repair process because of the interactions among links. Repairing the most critical link for each specific network state yields the optimal schedule. The algorithm can quickly obtain an optimal schedule even when the road network is large, because the greedy approach reduces computational complexity. We prove that the greedy algorithm obtains the optimal solution to this problem in theory. The algorithm is also demonstrated on the Sioux Falls network. The problem discussed in this paper is highly significant for urban road network restoration. PMID:27768732
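The repair-scheduling loop described above can be sketched as follows (the toy travel-time model is hypothetical; a real implementation would evaluate system-wide travel time with a traffic assignment model):

```python
def greedy_repair_schedule(damaged_links, travel_time):
    """Greedy schedule: at each step repair the critical link, i.e. the
    damaged link whose restoration minimizes the ratio of the repaired
    network's system-wide travel time to that of the worst (fully
    damaged) network, then re-evaluate the remaining links."""
    worst = travel_time(frozenset())          # nothing repaired yet
    repaired, remaining, schedule = set(), set(damaged_links), []
    while remaining:
        critical = min(remaining,
                       key=lambda l: travel_time(frozenset(repaired | {l})) / worst)
        repaired.add(critical)
        remaining.remove(critical)
        schedule.append(critical)
    return schedule

# Toy model: each repaired link independently saves some travel time.
savings = {"a": 50, "b": 10, "c": 30}
tt = lambda repaired: 100 - sum(savings[l] for l in repaired)
schedule = greedy_repair_schedule(savings, tt)  # -> ['a', 'c', 'b']
```

Because the ranking is recomputed after every repair, link interactions (absent from the toy model) would automatically reorder later steps.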
Faries, Kaitlyn M; Kressel, Lucas L; Dylla, Nicholas P; Wander, Marc J; Hanson, Deborah K; Holten, Dewey; Laible, Philip D; Kirmaier, Christine
2016-02-01
Using high-throughput methods for mutagenesis, protein isolation and charge-separation functionality, we have assayed 40 Rhodobacter capsulatus reaction center (RC) mutants for their P(+)QB(-) yield (P is a dimer of bacteriochlorophylls and Q is a ubiquinone) as produced using the normally inactive B-side cofactors BB and HB (where B is a bacteriochlorophyll and H is a bacteriopheophytin). Two sets of mutants explore all possible residues at M131 (M polypeptide, native residue Val near HB) in tandem with either a fixed His or a fixed Asn at L181 (L polypeptide, native residue Phe near BB). A third set of mutants explores all possible residues at L181 with a fixed Glu at M131 that can form a hydrogen bond to HB. For each set of mutants, the results of a rapid millisecond screening assay that probes the yield of P(+)QB(-) are compared among that set and to the other mutants reported here or previously. For a subset of eight mutants, the rate constants and yields of the individual B-side electron transfer processes are determined via transient absorption measurements spanning 100 fs to 50 μs. The resulting ranking of mutants for their yield of P(+)QB(-) from ultrafast experiments is in good agreement with that obtained from the millisecond screening assay, further validating the efficient, high-throughput screen for B-side transmembrane charge separation. Results from mutants that individually show progress toward optimization of P(+)HB(-)→P(+)QB(-) electron transfer or initial P*→P(+)HB(-) conversion highlight unmet challenges of optimizing both processes simultaneously. PMID:26658355
Sampling optimization for printer characterization by greedy search.
Morovic, Ján; Arnabat, Jordi; Richard, Yvan; Albarrán, Angel
2010-10-01
Printer color characterization, e.g., in the form of an ICC output profile or another proprietary mechanism linking printer RGB/CMYK inputs to resulting colorimetry, is fundamental to a printing system delivering output that is acceptable to its recipients. Due to the inherently nonlinear and complex relationship between a printing system's inputs and the resulting color output, color characterization typically requires a large sample of printer inputs (e.g., RGB/CMYK) and corresponding color measurements of printed output. Simple sampling techniques here lead to inefficiency and a low return for increases in sampling density. While effective solutions to this problem have been proposed very recently, they either do not exploit the full possibilities of the 3-D/4-D space being sampled or they make assumptions about the underlying relationship being sampled. The approach presented here makes no assumptions beyond those inherent in the subsequent tessellation and interpolation applied to the resulting samples. Instead, the tradeoff is the great computational cost of the initial optimization, which, however, only needs to be performed during the printing system's engineering and is transparent to its end users. Results show a significant reduction in the number of samples needed to match a given level of color accuracy.
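The abstract does not spell out the optimization itself; purely as a generic illustration of greedy sample placement in a 3-D/4-D input space (not the authors' method), a farthest-point scheme spreads samples so each new one maximizes its distance to those already chosen:

```python
def farthest_point_samples(candidates, k, dist):
    """Greedy farthest-point sampling: seed with the first candidate,
    then repeatedly add the candidate whose nearest chosen sample is
    farthest away, spreading samples over the space."""
    chosen = [candidates[0]]
    while len(chosen) < k:
        best = max((c for c in candidates if c not in chosen),
                   key=lambda c: min(dist(c, s) for s in chosen))
        chosen.append(best)
    return chosen

# 1-D toy; RGB/CMYK candidates would be 3-/4-tuples with Euclidean dist.
picks = farthest_point_samples([0, 1, 2, 10], 3, lambda a, b: abs(a - b))
```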
A Guiding Evolutionary Algorithm with Greedy Strategy for Global Optimization Problems.
Cao, Leilei; Xu, Lihong; Goodman, Erik D
2016-01-01
A Guiding Evolutionary Algorithm (GEA) with greedy strategy for global optimization problems is proposed. Inspired by Particle Swarm Optimization, the Genetic Algorithm, and the Bat Algorithm, the GEA was designed to retain some advantages of each method while avoiding some disadvantages. In contrast to the usual Genetic Algorithm, each individual in GEA is crossed with the current global best one instead of a randomly selected individual. The current best individual served as a guide to attract offspring to its region of genotype space. Mutation was added to offspring according to a dynamic mutation probability. To increase the capability of exploitation, a local search mechanism was applied to new individuals according to a dynamic probability of local search. Experimental results show that GEA outperformed the other three typical global optimization algorithms with which it was compared. PMID:27293421
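The guide-toward-best crossover can be sketched as follows (arithmetic blending and Gaussian mutation are assumptions for illustration; the paper's exact operators may differ):

```python
import random

def guided_crossover(individual, global_best, rng):
    """Blend each gene toward the current global best, so offspring are
    attracted to the best individual's region of genotype space."""
    a = rng.random()
    return [a * g + (1.0 - a) * b for g, b in zip(individual, global_best)]

def mutate(individual, p_mut, scale, rng):
    """Gene-wise Gaussian mutation applied with (dynamic) probability p_mut."""
    return [g + rng.gauss(0.0, scale) if rng.random() < p_mut else g
            for g in individual]

rng = random.Random(0)
child = guided_crossover([0.0, 10.0], [1.0, 2.0], rng)
child = mutate(child, p_mut=0.1, scale=0.5, rng=rng)
```

With arithmetic blending, every offspring gene lies between the parent's gene and the global best's, which is what pulls the population toward the guide.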
Small-Tip-Angle Spokes Pulse Design Using Interleaved Greedy and Local Optimization Methods
Grissom, William A.; Khalighi, Mohammad-Mehdi; Sacolick, Laura I.; Rutt, Brian K.; Vogel, Mika W.
2013-01-01
Current spokes pulse design methods can be grouped into methods based either on sparse approximation or on iterative local (gradient descent-based) optimization of the transverse-plane spatial frequency locations visited by the spokes. These two classes of methods have complementary strengths and weaknesses: sparse approximation-based methods perform an efficient search over a large swath of candidate spatial frequency locations but most are incompatible with off-resonance compensation, multifrequency designs, and target phase relaxation, while local methods can accommodate off-resonance and target phase relaxation but are sensitive to initialization and suboptimal local cost function minima. This article introduces a method that interleaves local iterations, which optimize the radiofrequency pulses, target phase patterns, and spatial frequency locations, with a greedy method to choose new locations. Simulations and experiments at 3 and 7 T show that the method consistently produces single- and multifrequency spokes pulses with lower flip angle inhomogeneity compared to current methods. PMID:22392822
Near-Optimal Multi-user Greedy Bit-Loading for Digital Subscriber Lines
NASA Astrophysics Data System (ADS)
McKinley, Alastair; Marshall, Alan
This work presents a new algorithm for Dynamic Spectrum Management (DSM) in Digital Subscriber Lines. Previous approaches have achieved high performance by attempting to directly solve or approximate the multiuser spectrum optimisation problem. These methods suffer from a high or intractable computational complexity for even a moderate number of DSL lines. A new method is proposed that is a heuristic extension of the single-user greedy algorithm to the multi-user case. The new algorithm incorporates a novel cost function that penalises crosstalk as well as considering the usefulness of a tone. Previous work demonstrated the performance of the new algorithm in simple 2-user scenarios. In this work we present new results that demonstrate the performance of the algorithm in larger DSL bundles. Simulation results show that the new method achieves results within a few percent of the optimal solution for these scenarios.
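The single-user greedy bit-loading that the multi-user heuristic extends can be sketched as follows (unit SNR gap and unit noise are simplifying assumptions, so the incremental power to go from b to b+1 bits on a tone with gain g is 2**b / g):

```python
def greedy_bit_loading(gains, total_power, max_bits=15):
    """Single-user greedy bit-loading: repeatedly add one bit to the
    tone with the smallest incremental power cost until the next
    addition would exceed the power budget."""
    bits = [0] * len(gains)
    power = 0.0
    while True:
        # Incremental power (2**(b+1) - 2**b) / g = 2**b / g per tone.
        costs = [(2 ** bits[i]) / gains[i] if bits[i] < max_bits else float("inf")
                 for i in range(len(gains))]
        i = min(range(len(gains)), key=costs.__getitem__)
        if power + costs[i] > total_power:
            return bits
        power += costs[i]
        bits[i] += 1

loading = greedy_bit_loading([1.0, 4.0], total_power=2.0)  # -> [1, 2]
```

The multi-user extension in the paper replaces this cost with one that also penalises the crosstalk each bit induces on other lines.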
Zhang, Huaguang; Wei, Qinglai; Luo, Yanhong
2008-08-01
In this paper, we aim to solve the infinite-time optimal tracking control problem for a class of discrete-time nonlinear systems using the greedy heuristic dynamic programming (HDP) iteration algorithm. A new type of performance index is defined, because solving this kind of tracking problem with the existing performance indexes is very difficult, if not impossible. Via system transformation, the optimal tracking problem is transformed into an optimal regulation problem, and the greedy HDP iteration algorithm is then introduced to deal with the regulation problem, with rigorous convergence analysis. Three neural networks are used to approximate the performance index, compute the optimal control policy, and model the nonlinear system, facilitating the implementation of the greedy HDP iteration algorithm. An example is given to demonstrate the validity of the proposed optimal tracking control scheme.
NASA Technical Reports Server (NTRS)
Manacher, G. K.; Zobrist, A. L.
1979-01-01
The paper addresses the problem of how to find the Greedy Triangulation (GT) efficiently in the average case. It is noted that it remains open whether there exists an efficient approximation algorithm to the Optimum Triangulation. It is first shown how, in the worst case, the GT may be obtained in time O(n³) and space O(n). Attention is then given to how the algorithm may be slightly modified to produce a time O(n²), space O(n) solution in the average case. Finally, it is mentioned that Gilbert has found a worst-case solution using totally different techniques that requires space O(n²) and time O(n² log n).
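The basic cubic-flavored GT construction, sorting candidate edges by length and accepting each edge that crosses none already accepted, can be sketched as:

```python
from itertools import combinations

def segments_cross(p, q, r, s):
    """True if open segments pq and rs properly intersect; segments
    sharing an endpoint never block a greedy edge."""
    def orient(a, b, c):
        return (b[0]-a[0])*(c[1]-a[1]) - (b[1]-a[1])*(c[0]-a[0])
    if len({p, q, r, s}) < 4:
        return False
    d1, d2 = orient(p, q, r), orient(p, q, s)
    d3, d4 = orient(r, s, p), orient(r, s, q)
    return (d1 * d2 < 0) and (d3 * d4 < 0)

def greedy_triangulation(points):
    """Greedy triangulation: examine candidate edges in order of
    increasing length and keep each one that crosses no previously
    accepted edge (naive crossing test, hence the cubic flavor)."""
    edges = sorted(combinations(points, 2),
                   key=lambda e: (e[0][0]-e[1][0])**2 + (e[0][1]-e[1][1])**2)
    accepted = []
    for p, q in edges:
        if not any(segments_cross(p, q, r, s) for r, s in accepted):
            accepted.append((p, q))
    return accepted
```

On the unit square this keeps the four sides and the shorter-sorted diagonal, rejecting the diagonal that would cross it.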
Arabi Jeshvaghani, R.; Zohdi, H.; Shahverdi, H.R.; Bozorg, M.; Hadavi, S.M.M.
2012-11-15
Multi-step heat treatments, comprising high-temperature forming (150 °C/24 h plus 190 °C for several minutes) followed by low-temperature forming (120 °C for 24 h), are developed for creep age forming of 7075 aluminum alloy to decrease springback and exfoliation corrosion susceptibility without a reduction in tensile properties. The results show that the multi-step heat treatment gives low springback and the best combination of exfoliation corrosion resistance and tensile strength. The lower springback is attributed to dislocation recovery and greater stress relaxation at the higher temperature. Transmission electron microscopy observations show that corrosion resistance is improved due to the enlargement of the size and inter-particle distance of the grain-boundary precipitates. Furthermore, the high strength is related to the uniform distribution of ultrafine η′ precipitates within the grains. - Highlights: • Creep age forming developed for manufacturing of aircraft wing panels from aluminum alloy. • A good combination of properties with minimal springback is required in this component. • This requirement can be met through appropriate heat treatments. • Multi-step cycles developed in creep age forming of AA7075 to improve springback and properties. • Results indicate simultaneous enhancement of properties and shape accuracy (lower springback).
Automatic Synthesis Of Greedy Programs
NASA Astrophysics Data System (ADS)
Bhansali, Sanjay; Miriyala, Kanth; Harandi, Mehdi T.
1989-03-01
This paper describes a knowledge based approach to automatically generate Lisp programs using the Greedy method of algorithm design. The system's knowledge base is composed of heuristics for recognizing problems amenable to the Greedy method and knowledge about the Greedy strategy itself (i.e., rules for local optimization, constraint satisfaction, candidate ordering and candidate selection). The system has been able to generate programs for a wide variety of problems including the job-scheduling problem, the 0-1 knapsack problem, the minimal spanning tree problem, and the problem of arranging files on tape to minimize access time. For the special class of problems called matroids, the synthesized program provides optimal solutions, whereas for most other problems the solutions are near-optimal.
48 CFR 15.202 - Advisory multi-step process.
Code of Federal Regulations, 2011 CFR
2011-10-01
... 48 Federal Acquisition Regulations System 1 2011-10-01 2011-10-01 false Advisory multi-step... Information 15.202 Advisory multi-step process. (a) The agency may publish a presolicitation notice (see 5.204... participate in the acquisition. This process should not be used for multi-step acquisitions where it...
Coutu, Diane L
2003-02-01
Americans are outraged at the greediness of Wall Street analysts, dot-com entrepreneurs, and, most of all, chief executive officers. How could Tyco's Dennis Kozlowski use company funds to throw his wife a million-dollar birthday bash on an Italian island? How could Enron's Ken Lay sell thousands of shares of his company's once high-flying stock just before it crashed, leaving employees with nothing? Even America's most popular domestic guru, Martha Stewart, is suspected of having her hand in the cookie jar. To some extent, our outrage may be justified, writes HBR senior editor Diane Coutu. And yet, it's easy to forget that just a couple years ago these same people were lauded as heroes. Many Americans wanted nothing more, in fact, than to emulate them, to share in their fortunes. Indeed, we spent an enormous amount of time talking and thinking about double-digit returns, IPOs, day trading, and stock options. It could easily be argued that it was public indulgence in corporate money lust that largely created the mess we're now in. It's time to take a hard look at greed, both in its general form and in its peculiarly American incarnation, says Coutu. If Federal Reserve Board chairman Alan Greenspan was correct in telling Congress that "infectious greed" contaminated U.S. business, then we need to try to understand its causes--and how the average American may have contributed to it. Why did so many of us fall prey to greed? With a deep, almost reflexive trust in the free market, are Americans somehow greedier than other peoples? And as we look at the wreckage from the 1990s, can we be sure it won't happen again? PMID:12577651
48 CFR 15.202 - Advisory multi-step process.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 48 Federal Acquisition Regulations System 1 2010-10-01 2010-10-01 false Advisory multi-step... Information 15.202 Advisory multi-step process. (a) The agency may publish a presolicitation notice (see 5.204... submitted and the criteria that will be used in making the initial evaluation. Information sought may...
Stochastic seismic inversion using greedy annealed importance sampling
NASA Astrophysics Data System (ADS)
Xue, Yang; Sen, Mrinal K.
2016-10-01
A global optimization method called very fast simulated annealing (VFSA) has been applied to seismic inversion. Here we address some of the limitations of VFSA by developing a new stochastic inference method, named greedy annealed importance sampling (GAIS). GAIS combines VFSA with greedy importance sampling (GIS), which uses a greedy search in the important regions located by VFSA, in order to attain fast convergence and provide unbiased estimation. We demonstrate the performance of GAIS with application to seismic inversion of field post- and pre-stack datasets. The results indicate that GAIS can improve the lateral continuity of the inverted impedance profiles and provide better estimates of uncertainties than VFSA alone. Thus this new hybrid of global and local optimization can be applied in seismic reservoir characterization and reservoir monitoring for accurate estimation of reservoir models and their uncertainties.
Synthesis of Greedy Algorithms Using Dominance Relations
NASA Technical Reports Server (NTRS)
Nedunuri, Srinivas; Smith, Douglas R.; Cook, William R.
2010-01-01
Greedy algorithms exploit problem structure and constraints to achieve linear-time performance. Yet there is still no completely satisfactory way of constructing greedy algorithms. For example, the Greedy Algorithm of Edmonds depends upon translating a problem into an algebraic structure called a matroid, but the existence of such a translation can be as hard to determine as the existence of a greedy algorithm itself. An alternative characterization of greedy algorithms is in terms of dominance relations, a well-known algorithmic technique used to prune search spaces. We demonstrate a process by which dominance relations can be methodically derived for a number of greedy algorithms, including activity selection, and prefix-free codes. By incorporating our approach into an existing framework for algorithm synthesis, we demonstrate that it could be the basis for an effective engineering method for greedy algorithms. We also compare our approach with other characterizations of greedy algorithms.
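Activity selection, one of the examples cited, makes the dominance argument concrete: among pairwise choices, the activity finishing earliest dominates because choosing it can never exclude an option the alternative keeps. A minimal sketch:

```python
def activity_selection(activities):
    """Classic greedy activity selection: sort by finish time and keep
    every activity that starts no earlier than the last chosen finish.
    Dominance relation: an earlier finish never rules out any activity
    that a later finish would have allowed."""
    chosen, last_finish = [], float("-inf")
    for start, finish in sorted(activities, key=lambda a: a[1]):
        if start >= last_finish:
            chosen.append((start, finish))
            last_finish = finish
    return chosen

acts = [(1, 4), (3, 5), (0, 6), (5, 7), (3, 9), (5, 9),
        (6, 10), (8, 11), (8, 12), (2, 14), (12, 16)]
best = activity_selection(acts)  # -> [(1, 4), (5, 7), (8, 11), (12, 16)]
```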
Info-Greedy Sequential Adaptive Compressed Sensing
NASA Astrophysics Data System (ADS)
Braun, Gabor; Pokutta, Sebastian; Xie, Yao
2015-06-01
We present an information-theoretic framework for sequential adaptive compressed sensing, Info-Greedy Sensing, where measurements are chosen to maximize the extracted information conditioned on the previous measurements. We show that the widely used bisection approach is Info-Greedy for a family of $k$-sparse signals by connecting compressed sensing and blackbox complexity of sequential query algorithms, and present Info-Greedy algorithms for Gaussian and Gaussian Mixture Model (GMM) signals, as well as ways to design sparse Info-Greedy measurements. Numerical examples demonstrate the good performance of the proposed algorithms using simulated and real data: Info-Greedy Sensing shows significant improvement over random projection for signals with sparse and low-rank covariance matrices, and adaptivity brings robustness when there is a mismatch between the assumed and the true distributions.
DEFORMATION DEPENDENT TUL MULTI-STEP DIRECT MODEL
WIENKE,H.; CAPOTE, R.; HERMAN, M.; SIN, M.
2007-04-22
The Multi-Step Direct (MSD) module TRISTAN in the nuclear reaction code EMPIRE has been extended in order to account for nuclear deformation. The new formalism was tested in calculations of neutron emission spectra from the ²³²Th(n,xn) reaction. These calculations include vibration-rotational Coupled Channels (CC) for the inelastic scattering to low-lying collective levels, "deformed" MSD with quadrupole deformation for inelastic scattering to the continuum, Multi-Step Compound (MSC), and Hauser-Feshbach with advanced treatment of the fission channel. Prompt fission neutrons were also calculated. The comparison with experimental data shows clear improvement over the "spherical" MSD calculations and the JEFF-3.1 and JENDL-3.3 evaluations.
Multi-step pancreatic carcinogenesis and its clinical implications.
Sakorafas, G H; Tsiotou, A G
1999-12-01
The poor prognosis of pancreatic cancer relates mainly to its delayed diagnosis. It has been repeatedly shown that earlier diagnosis of pancreatic cancer is associated with a better outcome. Molecular diagnostic methods (mainly detection of K-ras mutations in pure pancreatic or duodenal juice, on specimens obtained by percutaneous fine-needle aspirations or in stool specimens) can achieve earlier diagnosis in selected subgroups of patients, such as patients with chronic pancreatitis (especially hereditary), adults with recent onset of non-insulin-dependent diabetes mellitus and patients with some inherited disorders that predispose to the development of pancreatic cancer. There is increasing evidence that pancreatic carcinogenesis is a multi-step phenomenon. Screening procedures for precursor lesions in these selected subgroups of patients may reduce the incidence and mortality from pancreatic cancer.
Research on processing medicinal herbs with multi-steps infrared macro-fingerprint method
NASA Astrophysics Data System (ADS)
Yu, Lu; Sun, Su-Qin; Fan, Ke-Feng; Zhou, Qun; Noda, Isao
2005-11-01
Applying rapid and effective methods to the study of medicinal herbs, a representative complicated mixture system, is a current focus for analysts. The functions of non-processed and processed medicinal herbs differ greatly, so controlling the processing procedure is highly important to guarantee the curative effect. At present, the conventional criteria for processing are based on personal sensory experience; there is no scientific and objective benchmark. In this article, we take Rehmannia as an example, conducting a systematic study of the process of braising Rehmannia with yellow wine using the multi-step infrared (IR) macro-fingerprint method. The method combines three steps: conventional Fourier transform infrared spectroscopy (FT-IR), second-derivative spectroscopy, and two-dimensional infrared (2D-IR) correlation spectroscopy. Based on the changes in the different types of IR spectra during the process, we can infer the optimal end-point of processing Rehmannia and the main transformations that occur. The result provides a scientific explanation of the traditional recipe based on sensory experience: the end-point product is "dark as night and sweet as malt sugar". In conclusion, the multi-step IR macro-fingerprint method, which is rapid and reasonable, can play an important role in controlling the processing of medicinal herbs.
A simple greedy algorithm for reconstructing pedigrees.
Cowell, Robert G
2013-02-01
This paper introduces a simple greedy algorithm for searching for high likelihood pedigrees using micro-satellite (STR) genotype information on a complete sample of related individuals. The core idea behind the algorithm is not new, but it is believed that putting it into a greedy search setting, and specifically the application to pedigree learning, is novel. The algorithm does not require age or sex information, but this information can be incorporated if desired. The algorithm is applied to human and non-human genetic data and in a simulation study. PMID:23164633
Collecting reliable clades using the Greedy Strict Consensus Merger
Böcker, Sebastian
2016-01-01
Supertree methods combine a set of phylogenetic trees into a single supertree. Similar to supermatrix methods, these methods provide a way to reconstruct larger parts of the Tree of Life, potentially evading the computational complexity of phylogenetic inference methods such as maximum likelihood. The supertree problem can be formalized in different ways, to cope with contradictory information in the input. Many supertree methods have been developed. Some of them solve NP-hard optimization problems like the well-known Matrix Representation with Parsimony, while others have polynomial worst-case running time but work in a greedy fashion (FlipCut). Both can profit from a set of clades that are already known to be part of the supertree. The Superfine approach shows how the Greedy Strict Consensus Merger (GSCM) can be used as preprocessing to find these clades. We introduce different scoring functions for the GSCM, a randomization, and a combination thereof, to help the GSCM find more clades. This, in turn, improves the resolution of the GSCM supertree. We find these modifications increase the number of true positive clades by 18% compared to the currently used Overlap scoring. PMID:27375971
Larson, Steven M., MD (P.I.); Cheung, Nai-Kong, MD, PhD (Co-P.I.)
2009-09-21
The four specific aims of this project are: (1) optimization of multi-step targeting (MST) to increase tumor uptake; (2) antigen heterogeneity; (3) characterization and reduction of renal uptake; and (4) validation in vivo of optimized MST-targeted therapy. This project focused on optimizing multi-step immune targeting strategies for the treatment of cancer. Two multi-step targeting constructs were explored during this funding period: (1) anti-Tag-72 and (2) anti-GD2.
An Experimental Method for the Active Learning of Greedy Algorithms
ERIC Educational Resources Information Center
Velazquez-Iturbide, J. Angel
2013-01-01
Greedy algorithms constitute an apparently simple algorithm design technique, but their learning goals are not simple to achieve. We present a didactic method aimed at promoting active learning of greedy algorithms. The method is focused on the concept of the selection function, and is based on explicit learning goals. It mainly consists of an…
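The selection-function viewpoint the method centers on fits a generic greedy schema, sketched here with hypothetical callback names:

```python
def greedy(candidates, select, feasible, is_solution):
    """Generic greedy schema: repeatedly pick the candidate preferred
    by the selection function, keep it only if feasible, and stop once
    a complete solution has been built (or candidates run out)."""
    solution, pool = [], list(candidates)
    while pool and not is_solution(solution):
        best = select(pool)
        pool.remove(best)
        if feasible(solution, best):
            solution.append(best)
    return solution

# Coin change as an instance: the selection function is "largest coin".
coins = [25, 25, 10, 10, 10, 5, 5, 1, 1, 1]
change = greedy(coins, max,
                feasible=lambda sol, c: sum(sol) + c <= 63,
                is_solution=lambda sol: sum(sol) == 63)
```

Swapping only the `select` callback turns the same skeleton into a different greedy algorithm, which is exactly the pedagogical point of isolating the selection function.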
Greedy Hypervolume Subset Selection in Low Dimensions.
Guerreiro, Andreia P; Fonseca, Carlos M; Paquete, Luís
2016-01-01
Given a nondominated point set of size n and a suitable reference point, the Hypervolume Subset Selection Problem (HSSP) consists of finding a subset of size k ≤ n that maximizes the hypervolume indicator. It arises in connection with multiobjective selection and archiving strategies, as well as Pareto-front approximation postprocessing for visualization and/or interaction with a decision maker. Efficient algorithms to solve the HSSP are available only for the 2-dimensional case, achieving a time complexity of [Formula: see text]. In contrast, the best upper bound available for higher dimensions is [Formula: see text]. Since the hypervolume indicator is a monotone submodular function, the HSSP can be approximated to a factor of 1 − 1/e using a greedy strategy. In this article, greedy [Formula: see text]-time algorithms for the HSSP in 2 and 3 dimensions are proposed, matching the complexity of current exact algorithms for the 2-dimensional case, and considerably improving upon recent complexity results for this approximation problem.
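For the 2-dimensional minimization case, the greedy strategy can be sketched directly (naively recomputing the hypervolume at every step, so this illustrates the idea rather than the article's optimized algorithms):

```python
def hypervolume_2d(points, ref):
    """Area dominated by a set of mutually nondominated 2-D points
    (minimization) w.r.t. reference point ref; sorting by ascending x
    puts such a front in descending-y order."""
    pts = sorted(points)
    area = 0.0
    for (x, y), nxt_x in zip(pts, [p[0] for p in pts[1:]] + [ref[0]]):
        area += (nxt_x - x) * (ref[1] - y)
    return area

def greedy_hssp(points, k, ref):
    """Greedy HSSP: repeatedly add the point whose inclusion yields the
    largest hypervolume; submodularity gives the (1 - 1/e) guarantee."""
    chosen = []
    for _ in range(k):
        rest = [p for p in points if p not in chosen]
        chosen.append(max(rest, key=lambda p: hypervolume_2d(chosen + [p], ref)))
    return chosen
```

On the front {(1, 3), (2, 2), (3, 1)} with reference (4, 4), the first greedy pick is the knee point (2, 2), whose individual contribution (area 4) beats the extremes (area 3 each).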
Li, Zhengbang; Zhang, Wei; Pan, Dongdong; Li, Qizhai
2016-01-01
Principal component analysis (PCA) is a useful tool for identifying important linear combinations of correlated variables in multivariate analysis, and it has been applied to detect associations between genetic variants and human complex diseases of interest. How to choose an adequate number of principal components (PCs) to represent the original system optimally is a key issue for PCA. Note that traditional PCA, using only a few top PCs while discarding the others, might significantly lose power in genetic association studies if all the PCs contain non-ignorable signals. To make full use of the information in all PCs, Aschard and colleagues recently proposed a multi-step combined-PCs method (named mCPC), which performs well especially when several traits are highly correlated. However, the power superiority of mCPC has only been illustrated by simulation; its theoretical power performance has not yet been studied. In this work, we investigate theoretical properties of mCPC and further propose a novel and efficient strategy to combine PCs. Extensive simulation results confirm that the proposed method is more robust than existing procedures. A real data application detecting the association between the gene TRAF1-C5 and rheumatoid arthritis further shows the good performance of the proposed procedure. PMID:27189724
Diffusive behavior of a greedy traveling salesman
NASA Astrophysics Data System (ADS)
Lipowski, Adam; Lipowska, Dorota
2011-06-01
Using Monte Carlo simulations, we examine the diffusive properties of the greedy algorithm in the d-dimensional traveling salesman problem. Our results show that for d=3 and 4 the average squared distance from the origin
Surface Modified Particles By Multi-Step Addition And Process For The Preparation Thereof
Cook, Ronald Lee; Elliott, Brian John; Luebben, Silvia DeVito; Myers, Andrew William; Smith, Bryan Matthew
2006-01-17
The present invention relates to a new class of surface modified particles and to a multi-step surface modification process for the preparation of the same. The multi-step surface functionalization process involves two or more reactions to produce particles that are compatible with various host systems and/or to provide the particles with particular chemical reactivities. The initial step comprises the attachment of a small organic compound to the surface of the inorganic particle. The subsequent steps attach additional compounds to the previously attached organic compounds through organic linking groups.
A Greedy reassignment algorithm for the PBS minimum monitor unit constraint
NASA Astrophysics Data System (ADS)
Lin, Yuting; Kooy, Hanne; Craft, David; Depauw, Nicolas; Flanz, Jacob; Clasie, Benjamin
2016-06-01
Proton pencil beam scanning (PBS) treatment plans are made of numerous unique spots of different weights. These weights are optimized by the treatment planning systems and sometimes fall below the deliverable threshold set by the treatment delivery system. The purpose of this work is to investigate a Greedy reassignment algorithm to mitigate the effects of these low-weight pencil beams. The algorithm is applied during post-processing to the optimized plan to generate deliverable plans for the treatment delivery system. The Greedy reassignment method developed in this work deletes the smallest-weight spot in the entire field, reassigns its weight to its nearest neighbor(s), and repeats until all spots are above the minimum monitor unit (MU) constraint. Its performance was evaluated using plans collected from 190 patients (496 fields) treated at our facility. The Greedy reassignment method was compared against two other post-processing methods. The evaluation criterion was the γ-index pass rate, which compares the pre-processed and post-processed dose distributions. A planning metric was developed to predict the impact of post-processing on treatment plans for various treatment planning, machine, and dose tolerance parameters. For fields with a pass rate of 90 ± 1%, the planning metric has a standard deviation equal to 18% of the centroid value, showing that the planning metric and γ-index pass rate are correlated for the Greedy reassignment algorithm. Using a 3rd-order polynomial fit to the data, the Greedy reassignment method has a 1.8 times better planning metric at a 90% pass rate compared with the other post-processing methods. As the planning metric and pass rate are correlated, the planning metric could provide an aid for selecting parameters during treatment planning, or even during facility design, in order to yield acceptable pass rates. More facilities are starting to implement PBS and some have spot sizes (one standard deviation) smaller than 5
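The reassignment rule in the abstract (delete the smallest-weight spot, move its weight to the nearest neighbor, repeat until every spot meets the minimum MU) can be sketched minimally as follows. Spot positions are simplified to one dimension here, and the interface is an assumption for illustration, not the authors' implementation:

```python
def greedy_reassign(spots, min_mu):
    """Greedy reassignment sketch: spots is a list of (position, weight)
    pairs. While the smallest-weight spot is below min_mu, delete it and
    add its weight to the nearest remaining spot; total weight is conserved."""
    spots = [list(s) for s in spots]
    while len(spots) > 1:
        i = min(range(len(spots)), key=lambda j: spots[j][1])
        if spots[i][1] >= min_mu:
            break                          # every spot is now deliverable
        pos, w = spots.pop(i)
        # nearest remaining neighbor inherits the deleted weight
        j = min(range(len(spots)), key=lambda j: abs(spots[j][0] - pos))
        spots[j][1] += w
    return [tuple(s) for s in spots]
```

A real field would use 2-D spot positions and could split the weight among several neighbors, as the "(s)" in "nearest neighbor(s)" suggests.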
Chun, Sung-woo; Kim, Daehong; Kwon, Jihun; Kim, Bongho; Choi, Seonjun; Lee, Seung-Beck
2012-04-01
We have demonstrated the fabrication of sub-30 nm magnetic tunnel junctions (MTJs) with perpendicular magnetic anisotropy. The multi-step ion beam etching (IBE) process, performed for 18 min between 45° and 30° at 500 V combined ion supply voltage, resulted in a 55 nm tall MTJ with a 28 nm diameter. We used a negative-tone electron beam resist as the hard mask, which maintained its lateral dimension during the IBE, allowing almost vertical pillar side profiles. The measurement results showed a tunnel magneto-resistance ratio of 13% at 1 kΩ junction resistance. With further optimization of the IBE energy and the multi-step etching process, it will be possible to fabricate perpendicularly oriented MTJs for future sub-30 nm non-volatile magnetic memory applications.
Ultra-fast consensus of discrete-time multi-agent systems with multi-step predictive output feedback
NASA Astrophysics Data System (ADS)
Zhang, Wenle; Liu, Jianchang
2016-04-01
This article addresses the ultra-fast consensus problem of high-order discrete-time multi-agent systems based on a unified consensus framework. A novel multi-step predictive output mechanism is proposed under a directed communication topology containing a spanning tree. By predicting the outputs of a network several steps ahead and adding this information to the consensus protocol, it is shown that the asymptotic convergence factor is improved by a power of q + 1 compared with the routine consensus. The difficult problem of selecting the optimal control gain is solved by introducing a variable called the convergence step. In addition, ultra-fast formation achievement is studied on the basis of this new consensus protocol. Finally, ultra-fast consensus with respect to a reference model and robust consensus are discussed. Some simulations are performed to illustrate the effectiveness of the theoretical results.
Trinh, Philip; Ball, Cameron; Fu, Elain; Yager, Paul
2016-01-01
Most laboratory assays take advantage of multi-step protocols to achieve high performance, but conventional paper-based tests (e.g., lateral flow tests) are generally limited to assays that can be carried out in a single fluidic step. We have developed two-dimensional paper networks (2DPNs) that use materials from lateral flow tests but reconfigure them to enable programming of multi-step reagent delivery sequences. The 2DPN uses multiple converging fluid inlets to control the arrival time of each fluid at a detection zone or reaction zone, and it requires a method to disconnect each fluid source in a correspondingly timed sequence. Here, we present a method that allows programmed disconnection of the fluid sources required for multi-step delivery. A 2DPN with legs of different lengths is inserted into a shared buffer well, and the dropping fluid surface disconnects each leg in a programmable sequence. This approach could enable multi-step laboratory assays to be converted into simple point-of-care devices that have high performance yet remain easy to use. PMID:22037591
Mechanical and Metallurgical Evolution of Stainless Steel 321 in a Multi-step Forming Process
NASA Astrophysics Data System (ADS)
Anderson, M.; Bridier, F.; Gholipour, J.; Jahazi, M.; Wanjara, P.; Bocher, P.; Savoie, J.
2016-04-01
This paper examines the metallurgical evolution of AISI Stainless Steel 321 (SS 321) during multi-step forming, a process that involves cycles of deformation with intermediate heat treatment steps. The multi-step forming process was simulated by implementing interrupted uniaxial tensile testing experiments. Evolution of the mechanical properties as well as the microstructural features, such as twins and textures of the austenite and martensite phases, was studied as a function of the multi-step forming process. The characteristics of the Strain-Induced Martensite (SIM) were also documented for each deformation step and intermediate stress relief heat treatment. The results indicated that the intermediate heat treatments considerably increased the formability of SS 321. Texture analysis showed that the effect of the intermediate heat treatment on the austenite was minor and led to partial recrystallization, while deformation was observed to reinforce the crystallographic texture of austenite. For the SIM, an Olson-Cohen-type equation was identified to analytically predict its formation during the multi-step forming process. The generated SIM was textured and weakened with increasing deformation.
Use of Chiral Oxazolidinones for a Multi-Step Synthetic Laboratory Module
ERIC Educational Resources Information Center
Betush, Matthew P.; Murphree, S. Shaun
2009-01-01
Chiral oxazolidinone chemistry is used as a framework for an advanced multi-step synthesis lab. The cost-effective and robust preparation of chiral starting materials is presented, as well as the use of chiral auxiliaries in a synthesis scheme that is appropriate for students currently in the second semester of the organic sequence. (Contains 1…
Automated reassembly of file fragmented images using greedy algorithms.
Memon, Nasir; Pal, Anandabrata
2006-02-01
The problem of restoring deleted files from a scattered set of fragments arises often in digital forensics. File fragmentation is a regular occurrence in hard disks, memory cards, and other storage media. As a result, a forensic analyst examining a disk may encounter many fragments of deleted digital files, but is unable to determine the proper sequence of fragments to rebuild the files. In this paper, we investigate the specific case where digital images are heavily fragmented and there is no file table information by which a forensic analyst can ascertain the correct fragment order to reconstruct each image. The image reassembly problem is formulated as a k-vertex disjoint graph problem and reassembly is then done by finding an optimal ordering of fragments. We provide techniques for comparing fragments and describe several algorithms for image reconstruction based on greedy heuristics. Finally, we provide experimental results showing that images can be reconstructed with high accuracy even when there are thousands of fragments and multiple images involved.
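The greedy heuristic described above can be illustrated as follows. The fragment representation (lists of pixel rows) and the boundary metric (sum of absolute differences across the seam) are simplifying assumptions for the sketch, not the paper's actual comparison techniques or its k-vertex disjoint graph formulation:

```python
def boundary_cost(a, b):
    """Mismatch between the last row of fragment a and the first row of b,
    as a sum of absolute pixel differences; low cost suggests adjacency."""
    return sum(abs(x - y) for x, y in zip(a[-1], b[0]))

def greedy_reassemble(header, fragments):
    """Greedy reconstruction sketch: starting from the known header
    fragment, repeatedly append the unused fragment whose boundary
    matches the current tail most cheaply."""
    order = [header]
    pool = list(fragments)
    while pool:
        best = min(pool, key=lambda f: boundary_cost(order[-1], f))
        pool.remove(best)
        order.append(best)
    return order
```

With many interleaved images, one such chain would be grown per detected header, which is where the disjoint-path formulation in the paper comes in.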
Greedy learning of binary latent trees.
Harmeling, Stefan; Williams, Christopher K I
2011-06-01
Inferring latent structures from observations helps to model and possibly also understand underlying data generating processes. A rich class of latent structures is the latent trees, i.e., tree-structured distributions involving latent variables where the visible variables are leaves. These are also called hierarchical latent class (HLC) models. Zhang and Kocka proposed a search algorithm for learning such models in the spirit of Bayesian network structure learning. While such an approach can find good solutions, it can be computationally expensive. As an alternative, we investigate two greedy procedures: the BIN-G algorithm determines both the structure of the tree and the cardinality of the latent variables in a bottom-up fashion. The BIN-A algorithm first determines the tree structure using agglomerative hierarchical clustering, and then determines the cardinality of the latent variables as for BIN-G. We show that even when restricting ourselves to binary trees, we obtain HLC models of comparable quality to Zhang's solutions (in terms of cross-validated log-likelihood), while being generally faster to compute. This claim is validated by a comprehensive comparison on several data sets. Furthermore, we demonstrate that our methods are able to estimate interpretable latent structures on real-world data with a large number of variables. By applying our method to a restricted version of the 20 newsgroups data, we find that these models are related to topic models, and on data from the PASCAL Visual Object Classes (VOC) 2007 challenge, we show how such tree-structured models help us understand how objects co-occur in images. For reproducibility of all experiments in this paper, all code and data sets (or links to data) are available at http://people.kyb.tuebingen.mpg.de/harmeling/code/ltt-1.4.tar.
Teaching multi-step math skills to adults with disabilities via video prompting.
Kellems, Ryan O; Frandsen, Kaitlyn; Hansen, Blake; Gabrielsen, Terisa; Clarke, Brynn; Simons, Kalee; Clements, Kyle
2016-11-01
The purpose of this study was to evaluate the effectiveness of teaching multi-step math skills to nine adults with disabilities in an 18-21 post-high school transition program using a video prompting intervention package. The dependent variable was the percentage of steps completed correctly. The independent variable was the video prompting intervention, which involved several multi-step math calculation skills: (a) calculating a tip (15%), (b) calculating item unit prices, and (c) adjusting a recipe for more or fewer people. Results indicated a functional relationship between the video prompting intervention package and the percentage of steps completed correctly. Eight of the nine adults showed significant gains immediately after receiving the video prompting intervention. PMID:27589151
Region-based multi-step optic disk and cup segmentation from color fundus image
NASA Astrophysics Data System (ADS)
Xiao, Di; Lock, Jane; Manresa, Javier Moreno; Vignarajan, Janardhan; Tay-Kearney, Mei-Ling; Kanagasingam, Yogesan
2013-02-01
Retinal optic cup-disk-ratio (CDR) is one of the important indicators of glaucomatous neuropathy. In this paper, we propose a novel multi-step 4-quadrant thresholding method for optic disk segmentation and a multi-step temporal-nasal segmenting method for optic cup segmentation, based on blood-vessel-inpainted HSL lightness images and green images. The performance of the proposed methods was evaluated on a group of color fundus images and compared with manual outlining results from two experts. Dice scores of the detected disk and cup regions between the automatic and manual results were computed and compared. Vertical CDRs were also compared among the three results. The preliminary experiment demonstrated the robustness of the method for automatic optic disk and cup segmentation and its potential value for clinical application.
NASA Astrophysics Data System (ADS)
Mitran, T. L.; Melchert, O.; Hartmann, A. K.
2013-12-01
The main characteristics of biased greedy random walks (BGRWs) on two-dimensional lattices with real-valued quenched disorder on the lattice edges are studied. Here the disorder allows for negative edge weights. In previous studies, considering the negative-weight percolation (NWP) problem, this was shown to change the universality class of the existing, static percolation transition. In the presented study, four different types of BGRWs and an algorithm based on the ant colony optimization heuristic were considered. Regarding the BGRWs, the precise configurations of the lattice walks constructed during the numerical simulations were influenced by two parameters: a disorder parameter ρ that controls the amount of negative edge weights on the lattice and a bias strength B that governs the drift of the walkers along a certain lattice direction. The random walks are “greedy” in the sense that the local optimal choice of the walker is to preferentially traverse edges with a negative weight (associated with a net gain of “energy” for the walker). Here, the pivotal observable is the probability that, after termination, a lattice walk exhibits a total negative weight, which is here considered as percolating. The behavior of this observable as a function of ρ for different bias strengths B is put under scrutiny. Upon tuning ρ, the probability to find such a feasible lattice walk increases from zero to 1. This is the key feature of the percolation transition in the NWP model. Here, we address the question of how well the transition point ρc, resulting from numerically exact and “static” simulations in terms of the NWP model, can be resolved using simple dynamic algorithms that have only local information available, one of the basic questions in the physics of glassy systems.
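A toy version of such a walk might look like the following. The paper studies four specific BGRW variants whose precise update rules differ from this one, so treat the dynamics below (greedily take any incident negative edge, otherwise drift in +x with probability B) as an assumed, minimal illustration of the two control parameters ρ and B:

```python
import random

def bgrw(steps, rho, bias, seed=0):
    """Minimal biased greedy random walk on Z^2 with quenched edge
    disorder: each edge weight is drawn once, negative with probability
    rho. The walker greedily prefers negative edges; otherwise it moves
    in +x with probability `bias`, else to a uniformly random neighbor.
    Returns the final position and the accumulated edge weight."""
    rng = random.Random(seed)
    weights = {}                       # quenched disorder, drawn on demand
    def weight(a, b):
        key = (min(a, b), max(a, b))
        if key not in weights:
            w = rng.uniform(0.0, 1.0)
            weights[key] = -w if rng.random() < rho else w
        return weights[key]
    pos, total = (0, 0), 0.0
    for _ in range(steps):
        x, y = pos
        nbrs = [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]
        neg = [n for n in nbrs if weight(pos, n) < 0]
        if neg:                        # greedy: traverse a negative edge
            nxt = rng.choice(neg)
        elif rng.random() < bias:      # otherwise drift along +x
            nxt = (x + 1, y)
        else:
            nxt = rng.choice(nbrs)
        total += weight(pos, nxt)
        pos = nxt
    return pos, total
```

The percolation-style observable of the abstract would then be the fraction of disorder realizations for which `total` ends up negative.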
Multi-Step Deep Reactive Ion Etching Fabrication Process for Silicon-Based Terahertz Components
NASA Technical Reports Server (NTRS)
Jung-Kubiak, Cecile (Inventor); Reck, Theodore (Inventor); Chattopadhyay, Goutam (Inventor); Perez, Jose Vicente Siles (Inventor); Lin, Robert H. (Inventor); Mehdi, Imran (Inventor); Lee, Choonsup (Inventor); Cooper, Ken B. (Inventor); Peralta, Alejandro (Inventor)
2016-01-01
A multi-step silicon etching process has been developed to fabricate silicon-based terahertz (THz) waveguide components. This technique provides precise dimensional control across multiple etch depths with batch processing capabilities. Nonlinear and passive components such as mixers and multipliers waveguides, hybrids, OMTs and twists have been fabricated and integrated into a small silicon package. This fabrication technique enables a wafer-stacking architecture to provide ultra-compact multi-pixel receiver front-ends in the THz range.
Photon Production through Multi-step Processes Important in Nuclear Fluorescence Experiments
Hagmann, C; Pruet, J
2006-10-26
The authors present calculations describing the production of photons through multi-step processes occurring when a beam of gamma rays interacts with a macroscopic material. These processes involve the creation of energetic electrons through Compton scattering, photo-absorption and pair production, the subsequent scattering of these electrons, and the creation of energetic photons occurring as these electrons are slowed through Bremsstrahlung emission. Unlike single Compton collisions, during which an energetic photon that is scattered through a large angle loses most of its energy, these multi-step processes result in a sizable flux of energetic photons traveling at large angles relative to an incident photon beam. These multi-step processes are also a key background in experiments that measure nuclear resonance fluorescence by shining photons on a thin foil and observing the spectrum of back-scattered photons. Effective cross sections describing the production of backscattered photons are presented in a tabular form that allows simple estimates of backgrounds expected in a variety of experiments. Incident photons with energies between 0.5 MeV and 8 MeV are considered. These calculations of effective cross sections may be useful for those designing NRF experiments or systems that detect specific isotopes in well-shielded environments through observation of resonance fluorescence.
Estimating unique soil hydraulic parameters for sandy media from multi-step outflow experiments
NASA Astrophysics Data System (ADS)
Il Hwang, Sang; Powers, Susan E.
Estimating unique soil hydraulic parameters is required to provide input for numerical models simulating transient water flow in the vadose zone. In this paper, we analyze the capability of six soil hydraulic functions to provide unique parameter sets for sandy soils from multi-step outflow data. Initial parameter estimates and experimental boundary conditions were explored to determine their effect on the uniqueness of soil hydraulic functions. Of the hydraulic functions tested, the lognormal distribution-Mualem (LDM) function provided the best performance and a unique solution for error-free numerically generated multi-step outflow data. For experimental multi-step outflow data with inherent measurement errors, the LDM function again showed better performance and uniqueness than the van Genuchten-Mualem and Gardner-Mualem functions. In experiments with different boundary conditions, the LDM function provided the best fitting ability, resulting in unique parameter sets when the intrinsic permeability (k) was fixed at its measured value. The experiment that had a greater number of pneumatic pressure steps, thereby causing a lower flow rate, provided better fitting ability and more unique solutions than faster experiments.
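As a toy illustration of the inverse-estimation idea above, the sketch below fits the standard van Genuchten retention curve (with the Mualem constraint m = 1 - 1/n) to synthetic θ(h) data by grid search. Function names and the grid-search approach are my assumptions; a real multi-step outflow analysis inverts a transient flow model rather than fitting the retention curve directly:

```python
import numpy as np

def van_genuchten(h, theta_r, theta_s, alpha, n):
    """van Genuchten water-retention function with m = 1 - 1/n."""
    m = 1.0 - 1.0 / n
    return theta_r + (theta_s - theta_r) / (1.0 + (alpha * np.abs(h)) ** n) ** m

def fit_retention(h, theta, theta_r, theta_s, alphas, ns):
    """Crude inverse estimation: pick (alpha, n) from candidate grids by
    minimizing the sum of squared residuals against observed theta(h)."""
    best, best_err = None, np.inf
    for a in alphas:
        for n in ns:
            err = np.sum((van_genuchten(h, theta_r, theta_s, a, n) - theta) ** 2)
            if err < best_err:
                best, best_err = (a, n), err
    return best
```

Non-uniqueness, the central concern of the paper, shows up here as several (alpha, n) pairs yielding nearly identical residuals.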
Contaminant source and release history identification in groundwater: a multi-step approach.
Gzyl, G; Zanini, A; Frączek, R; Kura, K
2014-02-01
The paper presents a new multi-step approach aiming at source identification and release history estimation. The new approach consists of three steps: performing integral pumping tests, identifying sources, and recovering the release history by means of a geostatistical approach. The present paper shows the results obtained from the application of the approach within a complex case study in Poland in which several areal sources were identified. The investigated site is situated in the vicinity of a former chemical plant in southern Poland in the city of Jaworzno in the valley of the Wąwolnica River; the plant has been in operation since the First World War producing various chemicals. From an environmental point of view the most relevant activity was the production of pesticides, especially lindane. The application of the multi-step approach enabled a significant increase in the knowledge of contamination at the site. Some suspected contamination sources have been proven to have minor effect on the overall contamination. Other suspected sources have been proven to have key significance. Some areas not taken into consideration previously have now been identified as key sources. The method also enabled estimation of the magnitude of the sources, and a list of priority reclamation actions will be drawn up as a result. The multi-step approach has proven to be effective and may be applied to other complicated contamination cases. Moreover, the paper shows the capability of the geostatistical approach to manage a complex real case study.
[Study on the identification of ganoderma by multi-steps infrared macro-fingerprint method].
Chen, Xiao-kang; Huang, Dong-lan; Sun, Su-qin; Cao, Jia-jia; Wang, Shao-ling
2010-01-01
Ganoderma lucidum, Ganoderma atrum, Ganoderma tsugae Murr., and Ganoderma lipsiense can be discriminated and identified using the multi-steps infrared macro-fingerprint method. The 1D-IR spectra, based on the peak intensities at 1153 and 1078 cm(-1), which are the fingerprint characteristic peaks of glucoside compounds, show that their glucoside content was in the order: Ganoderma lucidum > Ganoderma atrum > Ganoderma tsugae Murr. > Ganoderma lipsiense. Generally, the second-derivative IR spectra can clearly enhance the spectral resolution. In the range of 1600-1720 cm(-1), the position and sharpness of the characteristic peaks were very different, which proves that their amino acid and peptide compounds were different. In the 2D-IR spectra, all four have the same autopeak at 1100 cm(-1), which is the autopeak of glucoside, but Ganoderma lucidum had 4 autopeaks, while Ganoderma atrum, Ganoderma tsugae Murr., and Ganoderma lipsiense had 5, 4, and 5 autopeaks, respectively; the strongest autopeaks of the four species were at 1040, 1139, 1140, and 1134 cm(-1), respectively. The multi-steps infrared macro-fingerprint identification testified that the contents of glucoside compounds and amino acid and peptide compounds in these four kinds of ganoderma are different. This proves that the multi-steps infrared macro-fingerprint method can be used to analyze and distinguish Ganoderma lucidum, Ganoderma atrum, Ganoderma tsugae Murr., and Ganoderma lipsiense.
Application of an Aided System to Multi-Step Deep Drawing Process in the Brass Pieces Manufacturing
NASA Astrophysics Data System (ADS)
Javier Ramírez, Francisco; Domingo, Rosario
2009-11-01
In general, manufacturing pieces by deep drawing requires operations carried out in several phases, which extends the time and cost of the process. Determining the material, considering shape, dimensions, mechanical characteristics, etc., can lead to overestimating the required quantities, with a consequent increase in manufacturing costs. Furthermore, improving processes while simultaneously reducing costs provides a company a higher profit in competitive markets. Thus, this paper introduces an aided system that allows the technological design of multi-step deep drawing processes by optimizing both the initial material and the costs associated with the process, and presents its application to brass pieces, in particular the CuZn30 alloy (UNS C26000). The aided system considers the technological constraints of the process and pursues a reduction in manufacturing times by means of process optimization and fitting. The results show that this system provides, in each stage of the process, a homogeneous distribution of the drawing coefficient, thickness reduction, required force, and height of the piece, as well as a saving in time.
Multi-step motion planning: Application to free-climbing robots
NASA Astrophysics Data System (ADS)
Bretl, Timothy Wolfe
This dissertation addresses the problem of planning the motion of a multi-limbed robot to "free-climb" vertical rock surfaces. Free-climbing relies on natural features and friction (such as holes or protrusions) rather than special fixtures or tools. It requires strength, but more importantly it requires deliberate reasoning: not only must the robot decide how to adjust its posture to reach the next feature without falling, it must plan an entire sequence of steps, where each one might have future consequences. This process of reasoning is called multi-step planning. A multi-step planning framework is presented for computing non-gaited, free-climbing motions. This framework derives from an analysis of a free-climbing robot's configuration space, which can be decomposed into constraint manifolds associated with each state of contact between the robot and its environment. An understanding of the adjacency between manifolds motivates a two-stage strategy that uses a candidate sequence of steps to direct the subsequent search for motions. Three algorithms are developed to support the framework. The first algorithm reduces the amount of time required to plan each potential step, a large number of which must be considered over an entire multi-step search. It extends the probabilistic roadmap (PRM) approach based on an analysis of the interaction between balance and the topology of closed kinematic chains. The second algorithm addresses a problem with the PRM approach, that it is unable to distinguish challenging steps (which may be critical) from impossible ones. This algorithm detects impossible steps explicitly, using automated algebraic inference and machine learning. The third algorithm provides a fast constraint checker (on which the PRM approach depends), in particular a test of balance at the initially unknown number of sampled configurations associated with each step. It is a method of incremental precomputation, fast because it takes advantage of the sample
Impact of user influence on information multi-step communication in a micro-blog
NASA Astrophysics Data System (ADS)
Wu, Yue; Hu, Yong; He, Xiao-Hai; Deng, Ken
2014-06-01
User influence is generally considered one of the most critical factors affecting information cascade spreading. Based on this common assumption, this paper proposes a theoretical model to examine user influence on multi-step information communication in a micro-blog. The multi-step communication of information is divided into first-step and non-first-step, and user influence is classified into five dimensions. Actual data from the Sina micro-blog are collected to construct the model using a structural equation approach based on the Partial Least Squares (PLS) technique. Our experimental results indicate that the number-of-fans and authority dimensions significantly impact first-step communication of information. Leader rank has a positive impact on both first-step and non-first-step communication. Moreover, global centrality and weight of friends are positively related to non-first-step communication of information, whereas authority is found to have much less relation to it.
Lin, Shih-Wei; Ying, Kuo-Ching; Wan, Shu-Yen
2014-01-01
Berth allocation is the forefront operation performed when ships arrive at a port and is a critical task in container port optimization. Minimizing the time ships spend at berths constitutes an important objective of berth allocation problems. This study focuses on the discrete dynamic berth allocation problem (discrete DBAP), which aims to minimize total service time, and proposes an iterated greedy (IG) algorithm to solve it. The proposed IG algorithm is tested on three benchmark problem sets. Experimental results show that the proposed IG algorithm can obtain optimal solutions for all test instances of the first and second problem sets and outperforms the best-known solutions for 35 out of 90 test instances of the third problem set.
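The destruction/construction cycle of an iterated greedy algorithm for a discrete berth-allocation instance can be sketched as follows. The instance encoding (arrival and handling times per ship, one service sequence per berth) and all parameter choices are illustrative assumptions, not the paper's exact formulation:

```python
import random

def total_service_time(assignment, arrival, handling):
    """assignment: one ship sequence per berth. A ship's service time is
    its completion time minus its arrival time (waiting + handling)."""
    total = 0
    for berth in assignment:
        t = 0
        for ship in berth:
            start = max(t, arrival[ship])
            t = start + handling[ship]
            total += t - arrival[ship]
    return total

def iterated_greedy(arrival, handling, berths, iters=200, d=2, seed=0):
    """IG sketch: repeatedly remove d random ships (destruction) and
    greedily reinsert each at the cheapest position (construction),
    keeping the best assignment found."""
    rng = random.Random(seed)
    ships = sorted(range(len(arrival)), key=arrival.__getitem__)
    best = [ships[i::berths] for i in range(berths)]      # seed solution
    def insert_best(sol, ship):
        top, top_cost = None, None
        for b in range(len(sol)):
            for pos in range(len(sol[b]) + 1):
                cand = [list(s) for s in sol]
                cand[b].insert(pos, ship)
                c = total_service_time(cand, arrival, handling)
                if top_cost is None or c < top_cost:
                    top, top_cost = cand, c
        return top
    for _ in range(iters):
        cand = [list(s) for s in best]
        removed = rng.sample([s for b in cand for s in b], d)
        cand = [[s for s in b if s not in removed] for b in cand]
        for ship in removed:
            cand = insert_best(cand, ship)
        if total_service_time(cand, arrival, handling) <= \
           total_service_time(best, arrival, handling):
            best = cand
    return best, total_service_time(best, arrival, handling)
```

Accepting ties (`<=`) lets the search drift across equal-cost solutions, a common IG design choice to escape plateaus.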
2014-01-01
Berth allocation is the forefront operation performed when ships arrive at a port and is a critical task in container port optimization. Minimizing the time ships spend at berths constitutes an important objective of berth allocation problems. This study focuses on the discrete dynamic berth allocation problem (discrete DBAP), which aims to minimize total service time, and proposes an iterated greedy (IG) algorithm to solve it. The proposed IG algorithm is tested on three benchmark problem sets. Experimental results show that the proposed IG algorithm can obtain optimal solutions for all test instances of the first and second problem sets and outperforms the best-known solutions for 35 out of 90 test instances of the third problem set. PMID:25295295
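The iterated greedy mechanism the abstract describes (destroy part of the current solution, then greedily rebuild it) can be sketched for a toy single-berth version of the total-service-time objective. This is a minimal illustration, not the authors' implementation: the instance, the destruction size `d`, and the best-insertion reconstruction are all illustrative assumptions.

```python
import random

def total_service_time(seq, arrival, handling):
    """Total service (flow) time when ships are served in the given sequence
    at a single berth: waiting plus handling for each ship."""
    t, total = 0, 0
    for s in seq:
        t = max(t, arrival[s]) + handling[s]  # finish time of ship s
        total += t - arrival[s]               # its service time
    return total

def iterated_greedy(ships, arrival, handling, d=2, iters=200, seed=0):
    """Toy iterated greedy: destruct d ships, reconstruct by best insertion."""
    rng = random.Random(seed)
    # initial greedy solution: serve ships in order of arrival
    cur = sorted(ships, key=lambda s: arrival[s])
    best, best_cost = cur[:], total_service_time(cur, arrival, handling)
    for _ in range(iters):
        removed = rng.sample(cur, d)                     # destruction phase
        partial = [s for s in cur if s not in removed]
        for s in removed:                                # construction phase
            cost, i = min((total_service_time(partial[:i] + [s] + partial[i:],
                                              arrival, handling), i)
                          for i in range(len(partial) + 1))
            partial.insert(i, s)
        cost = total_service_time(partial, arrival, handling)
        if cost <= best_cost:
            best, best_cost = partial[:], cost
        cur = partial
    return best, best_cost
```

The real discrete DBAP assigns ships to multiple berths; the single-berth sequencing above only shows the destruct/reconstruct loop that IG is built on.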
Avoiding Greediness in Cooperative Peer-to-Peer Networks
NASA Astrophysics Data System (ADS)
Brust, Matthias R.; Ribeiro, Carlos H. C.; Mesit, Jaruwan
In peer-to-peer networks, peers simultaneously play the roles of client and server. Since the introduction of the first file-sharing protocols, peer-to-peer networking has grown to account for more than 35% of all internet traffic, with an ever-increasing tendency. A common file-sharing protocol that accounts for most peer-to-peer traffic is the BitTorrent protocol. Although based on cooperative principles, in practice it is doomed to fail if peers behave greedily. In this work-in-progress paper, we model the protocol by introducing a game named Tit-for-Tat Network Termination (T4TNT) that provides an instructive perspective on the greediness problem of the BitTorrent protocol. Simulations conducted under this model indicate that greediness can be reduced solely by manipulating the underlying peer-to-peer topology.
2013-01-01
Background A fundamental issue in systems biology is how to design simplified mathematical models for describing the dynamics of complex biochemical reaction systems. A key question is how to use simplified reactions to describe the chemical events of multi-step reactions, which are ubiquitous in biochemistry and biophysics. To address this issue, a widely used approach in the literature is to use a one-step reaction to represent the multi-step chemical events. In recent years, a number of modelling methods have been designed to improve the accuracy of the one-step reaction method, including the use of reactions with time delay. However, our recent research results suggested that there are still deviations between the dynamics of delayed reactions and that of multi-step reactions. Therefore, more sophisticated modelling methods are needed to describe complex biological systems accurately and efficiently. Results This work designs a two-variable model to simplify the chemical events of multi-step reactions. In addition to the total molecule number of a species, we introduce a new concept regarding the location of molecules in the multi-step reactions, which serves as the second variable representing the system dynamics. We then propose a simulation algorithm to compute the probability of the firing of the last-step reaction in the multi-step events. This probability function is evaluated using a deterministic model of ordinary differential equations and a stochastic model in the framework of the stochastic simulation algorithm. The efficiency of the proposed two-variable model is demonstrated by the realization of the mRNA degradation process based on experimentally measured data. Conclusions Numerical results suggest that the proposed two-variable model produces predictions that match the multi-step chemical reactions very well. The successful realization of the mRNA degradation dynamics indicates that the proposed method is a promising approach to
Digital multi-step phase-shifting profilometry for three-dimensional ballscrew surface imaging
NASA Astrophysics Data System (ADS)
Liu, Cheng-Yang; Yen, Tzu-Ping
2016-05-01
A digital multi-step phase-shifting profilometry method for three-dimensional (3-D) ballscrew surface imaging is presented. The 3-D digital imaging system is capable of capturing fringe pattern images. The straight fringe patterns generated by software in the computer are projected onto the ballscrew surface by a DLP projector. The distorted fringe patterns are captured by a CCD camera at different detecting directions for the reconstruction algorithms. The seven-step phase-shifting algorithm and a quality-guided path unwrapping algorithm are used to calculate the absolute phase at each pixel position. A 3-D calibration method is used to obtain the relationship between the absolute phase map and the ballscrew shape. The angular dependence of 3-D shape imaging for ballscrews is analyzed and characterized. The experimental results may provide a novel, fast, and high-accuracy imaging system to inspect the surface features of ballscrews without length limitation for the automated optical inspection industry.
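The N-step phase-shifting computation at the heart of such systems can be sketched as follows. The formula assumes equally spaced phase shifts and the fringe model I_n = A + B*cos(phi + 2*pi*n/N); the seven-step case in the abstract corresponds to N = 7. The sign convention is one common choice and may differ between implementations.

```python
import math

def n_step_phase(intensities):
    """Wrapped phase from N equally spaced phase-shifted fringe intensities.
    Assumes I_n = A + B*cos(phi + 2*pi*n/N) with shifts delta_n = 2*pi*n/N."""
    N = len(intensities)
    # numerator/denominator of the classic least-squares phase estimator
    num = -sum(I * math.sin(2 * math.pi * n / N) for n, I in enumerate(intensities))
    den = sum(I * math.cos(2 * math.pi * n / N) for n, I in enumerate(intensities))
    return math.atan2(num, den)  # wrapped phase in (-pi, pi]

# synthetic check: recover a known phase from seven simulated intensities
phi_true = 1.234
I = [5.0 + 2.0 * math.cos(phi_true + 2 * math.pi * n / 7) for n in range(7)]
phi = n_step_phase(I)
```

In a full profilometry pipeline this per-pixel wrapped phase would then be passed to a quality-guided unwrapping step, which the sketch omits.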
A Multi-Step Assessment Scheme for Seismic Network Site Selection in Densely Populated Areas
NASA Astrophysics Data System (ADS)
Plenkers, Katrin; Husen, Stephan; Kraft, Toni
2015-10-01
We developed a multi-step assessment scheme for improved site selection during seismic network installation in densely populated areas. Site selection is a complex process where different aspects (seismic background noise, geology, and financing) have to be taken into account. In order to improve this process, we developed a step-wise approach that allows quantifying the quality of a site by using, in addition to expert judgement and test measurements, two weighting functions as well as reference stations. Our approach ensures that the recording quality aimed for is reached and makes different sites quantitatively comparable to each other. Last but not least, it is an easy way to document the decision process, because all relevant parameters are listed, quantified, and weighted.
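The idea of quantifying site quality with weighting functions and a reference station can be illustrated with a toy scoring function. The criteria, weights, and the 20 dB noise-normalization span are entirely hypothetical; the paper's actual weighting functions are not reproduced here.

```python
def site_quality(noise_db, geology, cost, ref_noise_db,
                 weights=(0.5, 0.3, 0.2)):
    """Toy site-quality score: a weighted sum of normalized criteria.
    geology and cost are already mapped to [0, 1] (higher is better);
    the noise criterion is judged relative to a quiet reference station,
    losing all credit once the site is 20 dB noisier than the reference."""
    w_noise, w_geo, w_cost = weights
    noise_score = max(0.0, min(1.0, 1.0 - (noise_db - ref_noise_db) / 20.0))
    return w_noise * noise_score + w_geo * geology + w_cost * cost
```

Scoring every candidate on the same scale is what makes different sites quantitatively comparable, which is the point of the assessment scheme described above.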
The solution of Parrondo’s games with multi-step jumps
NASA Astrophysics Data System (ADS)
Saakian, David B.
2016-04-01
We consider the general case of Parrondo’s games, when there is a finite probability to stay in the current state as well as multi-step jumps. We introduce a modification of the model: the transition probabilities between different games depend on the choice of the game in the previous round. We calculate the rate of capital growth as well as the variance of the distribution, following large deviation theory. The modified model allows higher capital growth rates than in standard Parrondo games for the range of parameters considered in the key articles about these games, and positive capital growth is possible for a much wider regime of parameters of the model.
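The paradoxical behavior underlying Parrondo's games can be reproduced with a short simulation of the classic single-step version: game A is a slightly losing coin, game B branches on the capital modulo 3, and randomly switching between them yields positive growth. The parameters (eps = 0.005, the 1/10 and 3/4 branch probabilities) are the standard textbook choice, not taken from this paper, which generalizes to multi-step jumps and history-dependent game choice.

```python
import random

def play(strategy, steps=200_000, eps=0.005, seed=1):
    """Simulate the capital of a Parrondo player.
    strategy(t, capital) returns 'A' or 'B' for round t."""
    rng = random.Random(seed)
    capital = 0
    for t in range(steps):
        if strategy(t, capital) == 'A':
            p = 0.5 - eps                                  # game A: biased coin
        else:
            p = (0.10 - eps) if capital % 3 == 0 else (0.75 - eps)  # game B
        capital += 1 if rng.random() < p else -1
    return capital

only_b = play(lambda t, c: 'B')          # game B alone loses on average
switcher = random.Random(42)
mixed = play(lambda t, c: switcher.choice('AB'))  # random switching wins
```

A stationary-distribution calculation for these parameters gives a drift of about -0.0087 per step for game B alone and about +0.0157 for uniform random switching, which the simulation reproduces.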
Cross-cultural adaptation of instruments assessing breastfeeding determinants: a multi-step approach
2014-01-01
Background Cross-cultural adaptation is a necessary process for effectively using existing instruments in other cultural and language settings. The process of cross-cultural adaptation, including translation, of existing instruments is considered a critical step in establishing a meaningful instrument for use in another setting. Using a multi-step approach is considered best practice for achieving cultural and semantic equivalence of the adapted version. We aimed to ensure the content validity of our instruments in the cultural context of KwaZulu-Natal, South Africa. Methods The Iowa Infant Feeding Attitudes Scale, the Breastfeeding Self-Efficacy Scale-Short Form and additional items comprise our consolidated instrument, which was cross-culturally adapted using a multi-step approach during August 2012. Cross-cultural adaptation was achieved through steps to maintain content validity and attain semantic equivalence in the target version. Specifically, Lynn's recommendation to apply an item-level content validity index score was followed. The revised instrument was translated and back-translated. To ensure semantic equivalence, Brislin's back-translation approach was utilized, followed by a committee review to address any discrepancies that emerged from translation. Results Our consolidated instrument was adapted to be culturally relevant and translated to yield more reliable and valid results for use in our larger research study to measure infant feeding determinants effectively in our target cultural context. Conclusions Undertaking rigorous steps to ensure effective cross-cultural adaptation increases our confidence that the conclusions we draw from our self-report instrument(s) will be stronger. In this way, our aim of achieving strong cross-cultural adaptation of our consolidated instruments was met while also providing a clear framework for other researchers choosing to utilize existing instruments for work in other cultural, geographic and population
SMG: Fast scalable greedy algorithm for influence maximization in social networks
NASA Astrophysics Data System (ADS)
Heidari, Mehdi; Asadpour, Masoud; Faili, Hesham
2015-02-01
Influence maximization is the problem of finding the k most influential nodes in a social network. Much work has been done in two different categories: greedy approaches and heuristic approaches. Greedy approaches achieve better influence spread but lower scalability on large networks. Heuristic approaches are scalable and fast, but not for all types of networks. Improving the scalability of the greedy approach is still an open and active issue. In this work we present a fast greedy algorithm called State Machine Greedy that improves on existing algorithms by reducing calculations in two parts: (1) counting the traversed nodes in the estimate-propagation procedure, and (2) Monte-Carlo graph construction in the simulation of diffusion. The results show that our method achieves a large speed improvement over existing greedy approaches.
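The baseline greedy approach that SMG accelerates can be sketched as follows: repeatedly add the node with the largest marginal spread, estimating spread by Monte-Carlo simulation of the independent cascade model. The graph format, propagation probability `p`, and number of runs are illustrative assumptions; the paper's contribution is making exactly this loop cheaper, which the sketch does not reproduce.

```python
import random

def simulate_ic(graph, seeds, rng, p=0.2):
    """One independent-cascade run; returns the number of activated nodes.
    graph maps a node to the list of its out-neighbors."""
    active, frontier = set(seeds), list(seeds)
    while frontier:
        nxt = []
        for u in frontier:
            for v in graph.get(u, []):
                if v not in active and rng.random() < p:  # each edge fires once
                    active.add(v)
                    nxt.append(v)
        frontier = nxt
    return len(active)

def greedy_influence(graph, k, runs=200, seed=0):
    """Greedy seed selection: add the node with the best estimated spread."""
    rng = random.Random(seed)
    chosen = []
    nodes = set(graph) | {v for vs in graph.values() for v in vs}
    for _ in range(k):
        best, best_spread = None, -1.0
        for cand in nodes - set(chosen):
            spread = sum(simulate_ic(graph, chosen + [cand], rng)
                         for _ in range(runs)) / runs
            if spread > best_spread:
                best, best_spread = cand, spread
        chosen.append(best)
    return chosen
```

The nested simulation loop is what makes plain greedy slow on large networks and motivates the reduced-calculation bookkeeping described in the abstract.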
Reinforced recurrent neural networks for multi-step-ahead flood forecasts
NASA Astrophysics Data System (ADS)
Chen, Pin-An; Chang, Li-Chiu; Chang, Fi-John
2013-08-01
Considering that true values are not available at every time step in an online learning algorithm for multi-step-ahead (MSA) forecasts, an MSA reinforced real-time recurrent learning algorithm for recurrent neural networks (R-RTRL NN) is proposed. The main merit of the proposed method is that it repeatedly adjusts model parameters with current information, including the latest observed values and the model's outputs, to enhance the reliability and forecast accuracy of the method. The sequential formulation of the R-RTRL NN is derived. To demonstrate its reliability and effectiveness, the proposed R-RTRL NN is implemented to make 2-, 4- and 6-step-ahead forecasts for a well-known benchmark chaotic time series and a reservoir flood inflow series in northern Taiwan. For comparison purposes, three comparative neural networks (two dynamic and one static) were implemented. Numerical and experimental results indicate that the R-RTRL NN not only achieves superior performance to the comparative networks but also significantly improves the precision of MSA forecasts for both the chaotic time series and the reservoir inflow case during typhoon events, with effective mitigation of the time-lag problem.
Shutdown Dose Rate Analysis Using the Multi-Step CADIS Method
Ibrahim, Ahmad M.; Peplow, Douglas E.; Peterson, Joshua L.; Grove, Robert E.
2015-01-01
The Multi-Step Consistent Adjoint Driven Importance Sampling (MS-CADIS) hybrid Monte Carlo (MC)/deterministic radiation transport method was proposed to speed up the shutdown dose rate (SDDR) neutron MC calculation using an importance function that represents the neutron importance to the final SDDR. This work applied the MS-CADIS method to the ITER SDDR benchmark problem. The MS-CADIS method was also used to calculate the SDDR uncertainty resulting from uncertainties in the MC neutron calculation and to determine the degree of undersampling in SDDR calculations because of the limited ability of the MC method to tally detailed spatial and energy distributions. The analysis that used the ITER benchmark problem compared the efficiency of the MS-CADIS method to the traditional approach of using global MC variance reduction techniques for speeding up SDDR neutron MC calculation. Compared to the standard Forward-Weighted-CADIS (FW-CADIS) method, the MS-CADIS method increased the efficiency of the SDDR neutron MC calculation by 69%. The MS-CADIS method also increased the fraction of nonzero scoring mesh tally elements in the space-energy regions of high importance to the final SDDR.
Multi-step process for concentrating magnetic particles in waste sludges
Watson, J.L.
1990-07-10
This invention involves a multi-step, multi-force process for dewatering sludges which have high concentrations of magnetic particles, such as waste sludges generated during steelmaking. This series of processing steps involves (1) mixing a chemical flocculating agent with the sludge; (2) allowing the particles to aggregate under non-turbulent conditions; (3) subjecting the mixture to a magnetic field which will pull the magnetic aggregates in a selected direction, causing them to form a compacted sludge; (4) preferably, decanting the clarified liquid from the compacted sludge; and (5) using filtration to convert the compacted sludge into a cake having a very high solids content. Steps 2 and 3 should be performed simultaneously. This reduces the treatment time and increases the extent of flocculation and the effectiveness of the process. As partially formed aggregates with active flocculating groups are pulled through the mixture by the magnetic field, they will contact other particles and form larger aggregates. This process can increase the solids concentration of steelmaking sludges in an efficient and economic manner, thereby accomplishing either of two goals: (a) it can convert hazardous wastes into economic resources for recycling as furnace feed material, or (b) it can dramatically reduce the volume of waste material which must be disposed. 7 figs.
Exact free vibration of multi-step Timoshenko beam system with several attachments
NASA Astrophysics Data System (ADS)
Farghaly, S. H.; El-Sayed, T. A.
2016-05-01
This paper deals with the analysis of the natural frequencies and mode shapes of an axially loaded multi-step Timoshenko beam combined system carrying several attachments. The influence of the system design and the proposed sub-system non-dimensional parameters on the combined system characteristics is the major part of this investigation. The effects of the material properties, rotary inertia and shear deformation of the beam system for each span are included. The end masses are elastically supported against rotation and translation at an offset point from the point of attachment. A sub-system having two degrees of freedom is located at the beam ends and at any of the intermediate stations and acts as a support and/or a suspension. The boundary conditions of the ordinary differential equation governing the lateral deflections and slope due to bending of the beam system, including the shear force term due to the sub-system, have been formulated. Exact global coefficient matrices for the combined modal frequencies, the modal shapes and the discrete sub-system have been derived. Based on these formulae, detailed parametric studies of the combined system are carried out. The applied mathematical model is valid for a wide range of applications, especially in the mechanical, naval and structural engineering fields.
ERIC Educational Resources Information Center
Cuenca-Carlino, Yojanna; Freeman-Green, Shaqwana; Stephenson, Grant W.; Hauth, Clara
2016-01-01
Six middle school students identified as having a specific learning disability or at risk for mathematical difficulties were taught how to solve multi-step equations by using the self-regulated strategy development (SRSD) model of instruction. A multiple-probe-across-pairs design was used to evaluate instructional effects. Instruction was provided…
KMeans greedy search hybrid algorithm for biclustering gene expression data.
Das, Shyama; Idicula, Sumam Mary
2010-01-01
Microarray technology demands the development of algorithms capable of extracting novel and useful patterns such as biclusters. A bicluster is a submatrix of the gene expression data matrix such that the genes show highly correlated activities across all conditions in the submatrix. A measure called Mean Squared Residue (MSR) is used to evaluate the coherence of rows and columns within the submatrix. In this paper, a KMeans greedy search hybrid algorithm is developed for finding biclusters from gene expression data. The algorithm has two steps. In the first step, high-quality bicluster seeds are generated using the KMeans clustering algorithm. In the second step, these seeds are enlarged by adding more genes and conditions using a greedy strategy. Here, the objective is to find biclusters of maximum size with an MSR value lower than a given threshold. The biclusters obtained by this algorithm on both benchmark datasets are of high quality. The statistical significance and biological relevance of the biclusters are verified using the gene ontology database.
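The MSR measure and the greedy enlargement step can be sketched directly from their definitions. The seed-growing loop below is a simplified stand-in for the paper's second phase (it tries rows and columns in index order rather than by best gain), and the KMeans seeding phase is omitted.

```python
def msr(matrix, rows, cols):
    """Mean squared residue of the bicluster (rows, cols):
    average of (a_ij - rowmean_i - colmean_j + mean)^2 over the submatrix."""
    sub = [[matrix[i][j] for j in cols] for i in rows]
    n, m = len(rows), len(cols)
    row_mean = [sum(r) / m for r in sub]
    col_mean = [sum(sub[i][j] for i in range(n)) / n for j in range(m)]
    all_mean = sum(row_mean) / n
    return sum((sub[i][j] - row_mean[i] - col_mean[j] + all_mean) ** 2
               for i in range(n) for j in range(m)) / (n * m)

def greedy_grow(matrix, rows, cols, delta):
    """Greedily add rows/columns to a seed while MSR stays below delta."""
    changed = True
    while changed:
        changed = False
        for i in range(len(matrix)):
            if i not in rows and msr(matrix, rows + [i], cols) <= delta:
                rows.append(i); changed = True
        for j in range(len(matrix[0])):
            if j not in cols and msr(matrix, rows, cols + [j]) <= delta:
                cols.append(j); changed = True
    return rows, cols
```

A perfectly additive submatrix (every entry is a row effect plus a column effect) has MSR zero, which is why MSR below a small threshold signals coherent expression patterns.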
Greedy adaptive walks on a correlated fitness landscape.
Park, Su-Chan; Neidhart, Johannes; Krug, Joachim
2016-05-21
We study adaptation of a haploid asexual population on a fitness landscape defined over binary genotype sequences of length L. We consider greedy adaptive walks in which the population moves to the fittest among all single mutant neighbors of the current genotype until a local fitness maximum is reached. The landscape is of the rough mount Fuji type, which means that the fitness value assigned to a sequence is the sum of a random and a deterministic component. The random components are independent and identically distributed random variables, and the deterministic component varies linearly with the distance to a reference sequence. The deterministic fitness gradient c is a parameter that interpolates between the limits of an uncorrelated random landscape (c=0) and an effectively additive landscape (c→∞). When the random fitness component is chosen from the Gumbel distribution, explicit expressions for the distribution of the number of steps taken by the greedy walk are obtained, and it is shown that the walk length varies non-monotonically with the strength of the fitness gradient when the starting point is sufficiently close to the reference sequence. Asymptotic results for general distributions of the random fitness component are obtained using extreme value theory, and it is found that the walk length attains a non-trivial limit for L→∞, different from its values for c=0 and c=∞, if c is scaled with L in an appropriate combination.
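The greedy adaptive walk on a rough Mount Fuji landscape can be simulated in a few lines: each genotype's fitness is a deterministic gradient term plus a cached random term, and the walk flips whichever single bit gives the fittest neighbor until no neighbor improves. For simplicity the random component below is Gaussian rather than the Gumbel distribution analyzed in the paper, and the reference sequence is the all-zeros string; both are illustrative assumptions.

```python
import random

def greedy_walk(L, c, seed=0):
    """Length of a greedy adaptive walk on a rough-Mount-Fuji landscape
    over {0,1}^L: fitness(g) = -c * d(g, ref) + eta(g), ref = all zeros,
    eta iid per genotype (Gaussian here; the paper uses Gumbel)."""
    rng = random.Random(seed)
    eta = {}
    def fitness(g):
        if g not in eta:
            eta[g] = rng.gauss(0.0, 1.0)   # cached so each genotype is fixed
        return -c * bin(g).count('1') + eta[g]
    g = (1 << L) - 1                        # start at the antipode of ref
    steps = 0
    while True:
        best = max(range(L), key=lambda i: fitness(g ^ (1 << i)))
        if fitness(g ^ (1 << best)) <= fitness(g):
            return steps                    # local maximum: walk terminates
        g ^= 1 << best
        steps += 1
```

In the strong-gradient limit (large c) every step must decrease the distance to the reference, so a walk started at the antipode takes exactly L steps; at c = 0 the landscape is uncorrelated and walks are typically much shorter, matching the limits discussed above.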
Two- and multi-step annealing of cereal starches in relation to gelatinization.
Shi, Yong-Cheng
2008-02-13
Two- and multi-step annealing experiments were designed to determine how much gelatinization temperature of waxy rice, waxy barley, and wheat starches could be increased without causing a decrease in gelatinization enthalpy or a decline in X-ray crystallinity. A mixture of starch and excess water was heated in a differential scanning calorimeter (DSC) pan to a specific temperature and maintained there for 0.5-48 h. The experimental approach was first to anneal a starch at a low temperature so that the gelatinization temperature of the starch was increased without causing a decrease in gelatinization enthalpy. The annealing temperature was then raised, but still was kept below the onset gelatinization temperature of the previously annealed starch. When a second- or third-step annealing temperature was high enough, it caused a decrease in crystallinity, even though the holding temperature remained below the onset gelatinization temperature of the previously annealed starch. These results support that gelatinization is a nonequilibrium process and that dissociation of double helices is driven by the swelling of amorphous regions. Small-scale starch slurry annealing was also performed and confirmed the annealing results conducted in DSC pans. A three-phase model of a starch granule, a mobile amorphous phase, a rigid amorphous phase, and a crystalline phase, was used to interpret the annealing results. Annealing seems to be an interplay between a more efficient packing of crystallites in starch granules and swelling of plasticized amorphous regions. There is always a temperature ceiling that can be used to anneal a starch without causing a decrease in crystallinity. That temperature ceiling is starch-specific, dependent on the structure of a starch, and is lower than the original onset gelatinization of a starch.
Aguiar, F C; Segurado, P; Urbanič, G; Cambra, J; Chauvin, C; Ciadamidaro, S; Dörflinger, G; Ferreira, J; Germ, M; Manolaki, P; Minciardi, M R; Munné, A; Papastergiadou, E; Ferreira, M T
2014-04-01
This paper presents a new methodological approach to the problem of intercalibrating national river-quality methods when a common metric is lacking and most of the countries share the same Water Framework Directive (WFD) assessment method. We provide recommendations for similar work in the future concerning the assessment of ecological accuracy and highlight the importance of good common ground in making the scientific work beyond the intercalibration feasible. The approach presented here was applied to the highly seasonal rivers of the Mediterranean Geographical Intercalibration Group for the Biological Quality Element Macrophytes. The Mediterranean river-macrophyte group involved seven countries and two assessment methods with similar data acquisition and assessment concepts: the Macrophyte Biological Index for Rivers (IBMR) for Cyprus, France, Greece, Italy, Portugal and Spain, and the River Macrophyte Index (RMI) for Slovenia. The database included 318 sites, of which 78 were considered benchmarks. Boundary harmonization was performed for the common WFD assessment method (all countries except Slovenia) using the median of the Good/Moderate and High/Good boundaries of all countries. Then, whenever possible, the Slovenian method (RMI) was computed for the entire database. The IBMR was also computed for the Slovenian sites and was regressed against the RMI in order to check the relatedness of the methods (R(2)=0.45; p<0.00001) and to convert RMI boundaries onto the IBMR scale. The boundary bias of the RMI was computed by direct comparison of classifications and the median boundary values following boundary harmonization. The average absolute class difference after harmonization is 26%, and the percentage of classifications differing by half a quality class is also small (16.4%). This multi-step approach to the intercalibration was endorsed by the WFD Regulatory Committee.
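The two numerical ingredients of the harmonization described above, taking the median of national class boundaries and converting one index's boundaries onto another's scale via regression, can be sketched as follows. The boundary values in the usage check are invented for illustration; they are not the IBMR/RMI boundaries from the study.

```python
def harmonized_boundary(national_boundaries):
    """Median of the national class boundaries (e.g. Good/Moderate)
    expressed on the common assessment scale."""
    s = sorted(national_boundaries)
    n = len(s)
    return s[n // 2] if n % 2 else (s[n // 2 - 1] + s[n // 2]) / 2

def linreg(x, y):
    """Ordinary least squares fit y ~ a + b*x; returns (a, b).
    Used here to map boundaries from one index scale onto another."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    return my - b * mx, b
```

With a fitted (a, b) from paired RMI/IBMR site scores, an RMI boundary r would be expressed on the IBMR scale as a + b * r, which is the conversion step the abstract describes.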
Detection of Heterogeneous Small Inclusions by a Multi-Step MUSIC Method
NASA Astrophysics Data System (ADS)
Solimene, Raffaele; Dell'Aversano, Angela; Leone, Giovanni
2014-05-01
In this contribution the problem of detecting and localizing scatterers with small (in terms of wavelength) cross sections by collecting their scattered field is addressed. The problem is dealt with for a two-dimensional, scalar configuration where the background is a two-layered cylindrical medium. More in detail, while the scattered field data are taken in the outermost layer, the inclusions are embedded within the inner layer. Moreover, the case of heterogeneous inclusions (i.e., having different scattering coefficients) is addressed. As a pertinent applicative context, we identify the problem of diagnosing concrete pillars in order to detect and locate rebars, ducts and other small inhomogeneities that can populate the interior of the pillar. The nature of the inclusions influences the scattering coefficients: for example, the field scattered by rebars is stronger than that due to ducts. Accordingly, it is expected that the more weakly scattering inclusions can be difficult to detect, as their scattered fields tend to be overwhelmed by those of the strong scatterers. In order to circumvent this problem, in this contribution a multi-step MUltiple SIgnal Classification (MUSIC) detection algorithm is adopted [1]. In particular, the first stage aims at detecting rebars. Once the rebars have been detected, their positions are exploited to update the Green's function and to subtract the scattered field due to their presence. The procedure is repeated until all the inclusions are detected. The analysis is conducted by numerical experiments for a multi-view/multi-static single-frequency configuration, and the synthetic data are generated by an FDTD forward solver. Acknowledgement This work benefited from networking activities carried out within the EU funded COST Action TU1208 "Civil Engineering Applications of Ground Penetrating Radar." [1] R. Solimene, A. Dell'Aversano and G. Leone, "MUSIC algorithms for rebar detection," J. of Geophysics and Engineering, vol. 10, pp. 1
Cook, Ronald Lee; Elliott, Brian John; Luebben, Silvia DeVito; Myers, Andrew William; Smith, Bryan Matthew
2005-05-03
A new class of surface modified particles and a multi-step Michael-type addition surface modification process for the preparation of the same is provided. The multi-step Michael-type addition surface modification process involves two or more reactions to compatibilize particles with various host systems and/or to provide the particles with particular chemical reactivities. The initial step comprises the attachment of a small organic compound to the surface of the inorganic particle. The subsequent steps attach additional compounds to the previously attached organic compounds through reactive organic linking groups. Specifically, these reactive groups are activated carbon-carbon pi bonds and carbon and non-carbon nucleophiles that react via Michael or Michael-type additions.
Method to Improve Indium Bump Bonding via Indium Oxide Removal Using a Multi-Step Plasma Process
NASA Technical Reports Server (NTRS)
Greer, H. Frank (Inventor); Jones, Todd J. (Inventor); Vasquez, Richard P. (Inventor); Hoenk, Michael E. (Inventor); Dickie, Matthew R. (Inventor); Nikzad, Shouleh (Inventor)
2012-01-01
A process for removing indium oxide from indium bumps in a flip-chip structure to reduce contact resistance, by a multi-step plasma treatment. A first plasma treatment of the indium bumps with an argon, methane and hydrogen plasma reduces indium oxide, and a second plasma treatment with an argon and hydrogen plasma removes residual organics. The multi-step plasma process for removing indium oxide from the indium bumps is more effective in reducing the oxide, and yet does not require the use of halogens, does not change the bump morphology, does not attack the bond pad material or under-bump metallization layers, and creates no new mechanisms for open circuits.
Automated multi-step purification protocol for Angiotensin-I-Converting-Enzyme (ACE).
Eisele, Thomas; Stressler, Timo; Kranz, Bertolt; Fischer, Lutz
2012-12-12
Highly purified proteins are essential for the investigation of the functional and biochemical properties of proteins. The purification of a protein requires several steps, which are often time-consuming. In our study, the Angiotensin-I-Converting-Enzyme (ACE; EC 3.4.15.1) was solubilised from pig lung without additional detergents, which are commonly used, under mild alkaline conditions in a Tris-HCl buffer (50mM, pH 9.0) for 48h. An automation of the ACE purification was performed using a multi-step protocol in less than 8h, resulting in a purified protein with a specific activity of 37Umg(-1) (purification factor 308) and a yield of 23.6%. The automated ACE purification used an ordinary fast-protein-liquid-chromatography (FPLC) system equipped with two additional switching valves. These switching valves were needed for the buffer stream inversion and for the connection of the Superloop™ used for the protein parking. Automated ACE purification was performed using four combined chromatography steps, including two desalting procedures. The purification methods contained two hydrophobic interaction chromatography steps, a Cibacron 3FG-A chromatography step and a strong anion exchange chromatography step. The purified ACE was characterised by sodium dodecyl sulphate-polyacrylamide gel electrophoresis (SDS-PAGE) and native-PAGE. The estimated monomer size of the purified glycosylated ACE was determined to be ∼175kDa by SDS-PAGE, with the dimeric form at ∼330kDa as characterised by a native PAGE using a novel activity staining protocol. For the activity staining, the tripeptide l-Phe-Gly-Gly was used as the substrate. The ACE cleaved the dipeptide Gly-Gly, releasing the l-Phe to be oxidised with l-amino acid oxidase. Combined with peroxidase and o-dianisidine, the generated H(2)O(2) stained a brown coloured band. This automated purification protocol can be easily adapted to be used with other protein purification tasks. PMID:23217308
Greedy replica exchange algorithm for heterogeneous computing grids.
Lockhart, Christopher; O'Connor, James; Armentrout, Steven; Klimov, Dmitri K
2015-09-01
Replica exchange molecular dynamics (REMD) has become a valuable tool in studying complex biomolecular systems. However, its application on distributed computing grids is limited by the heterogeneity of this environment. In this study, we propose a REMD implementation referred to as greedy REMD (gREMD) suitable for computations on heterogeneous grids. To decentralize replica management, gREMD utilizes a precomputed schedule of exchange attempts between temperatures. Our comparison of gREMD against standard REMD suggests four main conclusions. First, gREMD accelerates grid REMD simulations by as much as 40 %. Second, gREMD increases CPU utilization rates in grid REMD by up to 60 %. Third, we argue that gREMD is expected to maintain approximately constant CPU utilization rates and simulation wall-clock times with the increase in the number of replicas. Finally, we show that gREMD correctly implements the REMD algorithm and reproduces the conformational ensemble of a short peptide sampled in our previous standard REMD simulations. We believe that gREMD can find its place in large-scale REMD simulations on heterogeneous computing grids. PMID:26311229
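The precomputed exchange schedule that lets gREMD decentralize replica management can be sketched as follows: rounds alternate between even and odd neighboring temperature pairs, and each attempted swap is accepted by the standard replica-exchange Metropolis criterion. This is a structural sketch under those assumptions, not the gREMD implementation; the energies and inverse temperatures in the usage check are invented.

```python
import math
import random

def make_schedule(n_replicas, n_rounds):
    """Precomputed exchange schedule: alternate even/odd neighbor pairs,
    so every replica knows its partners in advance without a central manager."""
    sched = []
    for r in range(n_rounds):
        start = r % 2
        sched.append([(i, i + 1) for i in range(start, n_replicas - 1, 2)])
    return sched

def attempt_exchanges(energies, betas, schedule_round, rng):
    """Metropolis acceptance for replica exchange between temperature pairs:
    accept with probability min(1, exp(-(beta_i - beta_j)(E_j - E_i)))."""
    swapped = []
    for i, j in schedule_round:
        delta = (betas[i] - betas[j]) * (energies[j] - energies[i])
        if delta <= 0 or rng.random() < math.exp(-delta):
            energies[i], energies[j] = energies[j], energies[i]
            swapped.append((i, j))
    return swapped
```

Because the schedule is fixed ahead of time, each pair of replicas can rendezvous independently, which is what makes the scheme tolerant of the uneven replica progress seen on heterogeneous grids.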
NASA Astrophysics Data System (ADS)
Lai, Zuliang; Xu, Peng; Wu, Peiyi
2009-01-01
Multi-step infrared spectroscopic methods, including conventional Fourier transform infrared spectroscopy (FT-IR), second-derivative spectroscopy and two-dimensional infrared (2D-IR) correlation spectroscopy, have proved effective for examining complicated mixture systems such as Chinese herbal medicines. This paper investigates the effect of flowering on the pharmaceutical components of Cistanche tubulosa using the multi-step infrared spectroscopic method. Power-spectrum analysis is applied to improve the resolution of the 2D-IR contour maps, revealing many more details of overlapped peaks. According to the FT-IR and second-derivative spectra, the peak at 1732 cm⁻¹, assigned to C=O, is stronger in the stem before flowering than after, while more C=O groups are found in the top after flowering. The root spectra change considerably over the course of flowering, with many peaks shifting or disappearing. Seven peaks in the stem spectra, assigned to different kinds of glycoside components, are distinguished by power spectra in the range of 900-1200 cm⁻¹. The results provide a scientific explanation for the traditional experience that flowering consumes the pharmaceutical components in the stem and that the seeds absorb some nutrients from the stem after flowering. In conclusion, the multi-step infrared spectroscopic method combined with power spectra is a promising approach for investigating the flowering process of C. tubulosa and discriminating among the various parts of the herbal medicine.
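The resolution gain from second-derivative spectroscopy can be demonstrated numerically: two Gaussian bands that merge into a single broad absorption feature reappear as two separate minima in the second derivative. The band positions and widths below are illustrative assumptions, not the paper's data.

```python
import numpy as np

# Two overlapped Gaussian components (illustrative C=O-region positions);
# their sum shows a single apparent maximum, but the second derivative
# exhibits two distinct minima near the true component centers.
wn = np.linspace(1600, 1860, 2601)                  # wavenumber axis, cm^-1
band = (np.exp(-(((wn - 1715) / 26) ** 2)) +
        np.exp(-(((wn - 1745) / 26) ** 2)))         # merged doublet
d2 = np.gradient(np.gradient(band, wn), wn)         # second derivative

apparent_peak = wn[np.argmax(band)]                 # single merged maximum
left = wn[np.argmin(np.where(wn < 1730, d2, np.inf))]
right = wn[np.argmin(np.where(wn >= 1730, d2, np.inf))]
```

Here the raw band peaks only at the midpoint of the doublet, while the two second-derivative minima fall close to the underlying component centers, which is why the second-derivative step helps assign overlapped peaks.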
Multi-Step Ka/Ka Dichroic Plate with Rounded Corners for NASA's 34m Beam Waveguide Antenna
NASA Technical Reports Server (NTRS)
Veruttipong, Watt; Khayatian, Behrouz; Hoppe, Daniel; Long, Ezra
2013-01-01
A multi-step Ka/Ka dichroic plate Frequency Selective Surface (FSS) structure is designed, manufactured and tested for use in NASA's Deep Space Network (DSN) 34m Beam Waveguide (BWG) antennas. The proposed design eases manufacturing and can handle the increase in transmit power (reflected off the FSS) of the DSN BWG antennas from 20 kW to 100 kW. The dichroic is designed using HFSS, and the results agree well with measured data, considering the manufacturing tolerances that could be achieved on the dichroic.
Li, Zizheng; Gao, Jinsong; Yang, Haigui; Wang, Tongtong; Wang, Xiaoyi
2015-09-01
Echelle grating ruling is generally performed on a thick Al film, so the preparation of high-quality, large-area thick Al films is one of the most important factors in realizing a high-performance, large-size echelle grating. In this paper, we propose a novel multi-step deposition process to improve the quality of thick Al films. Compared with the traditional single-step deposition process, the multi-step process effectively suppresses the growth of large grains, resulting in low surface roughness and high internal compactness of the thick Al films. The differences between the single- and multi-step deposition processes are discussed in detail. Using the multi-step deposition process, we prepared high-quality, large-area Al films more than 10 μm thick on a 520 mm × 420 mm neoceramic glass substrate.
Wang, Huilin; Wang, Mingjun; Tan, Hao; Li, Yuan; Zhang, Ziding; Song, Jiangning
2014-01-01
X-ray crystallography is the primary approach to solve the three-dimensional structure of a protein. However, a major bottleneck of this method is the failure of multi-step experimental procedures to yield diffraction-quality crystals, including sequence cloning, protein material production, purification, crystallization and ultimately, structural determination. Accordingly, prediction of the propensity of a protein to successfully undergo these experimental procedures based on the protein sequence may help narrow down laborious experimental efforts and facilitate target selection. A number of bioinformatics methods based on protein sequence information have been developed for this purpose. However, our knowledge on the important determinants of propensity for a protein sequence to produce high diffraction-quality crystals remains largely incomplete. In practice, most of the existing methods display poorer performance when evaluated on larger and updated datasets. To address this problem, we constructed an up-to-date dataset as the benchmark, and subsequently developed a new approach termed ‘PredPPCrys’ using the support vector machine (SVM). Using a comprehensive set of multifaceted sequence-derived features in combination with a novel multi-step feature selection strategy, we identified and characterized the relative importance and contribution of each feature type to the prediction performance of five individual experimental steps required for successful crystallization. The resulting optimal candidate features were used as inputs to build the first-level SVM predictor (PredPPCrys I). Next, prediction outputs of PredPPCrys I were used as the input to build second-level SVM classifiers (PredPPCrys II), which led to significantly enhanced prediction performance. Benchmarking experiments indicated that our PredPPCrys method outperforms most existing procedures on both up-to-date and previous datasets. In addition, the predicted crystallization targets of
Zhu, Hailin; Yang, Jijun; Wan, Qiang; Lin, Liwei; Liao, Jiali; Yang, Yuanyou; Liu, Ning
2015-11-01
Using a multi-step deposition approach, we develop a homogeneous multilayered (HM) structure strategy to enrich the grain boundaries (GBs) of sputtered W films. In comparison with a single-layered film, the GB density of the HM W film is easily controllable: when the film modulation period t_m is decreased from 160 nm to 7 nm, the GB density gradually increases from 0.065 nm⁻¹ to 0.275 nm⁻¹ without changing the phase structure of the films. Accordingly, the film's electrical resistivity and mechanical hardness, both related to the GBs, change from 40.1 μΩ·cm to 75.3 μΩ·cm and from 12.1 GPa to 16.2 GPa, respectively. Detailed analysis shows that the formation of the HM structure is related to the temperature evolution of the growing film surface during the multi-step sputtering process. This study provides a general engineering approach to enriching film interfaces and allows for the development of thin films with novel microstructures.
NASA Astrophysics Data System (ADS)
Zhang, Dong; Chen, Yangkang; Huang, Weilin; Gan, Shuwei
2016-10-01
Multichannel singular spectrum analysis (MSSA) is an effective approach for simultaneous seismic data reconstruction and denoising. MSSA uses truncated singular value decomposition (TSVD) to decompose the noisy signal into a signal subspace and a noise subspace, and a weighted projection-onto-convex-sets (POCS)-like method to reconstruct the missing data in an appropriately constructed block Hankel matrix at each frequency slice. However, some residual noise remains in the signal subspace, due to two major factors: the deficiency of traditional TSVD and the observed noisy data that is iteratively re-inserted during the weighted POCS-like iterations. In this paper, we first extend the recently proposed damped MSSA (DMSSA) for random noise attenuation, which is more powerful in distinguishing between signal and noise, to simultaneous reconstruction and denoising. Then, combined with DMSSA, we propose a multi-step strategy, named multi-step damped MSSA (MS-DMSSA), to efficiently reduce the noise inserted during the POCS-like iterations and thus improve the final performance of simultaneous reconstruction and denoising. Application of the MS-DMSSA approach to 3D synthetic and field seismic data demonstrates better performance than the conventional MSSA approach.
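The damping idea at the core of DMSSA can be illustrated on a plain matrix: keep the leading singular values as in TSVD, but attenuate them so that noise leaking into the signal subspace is suppressed. The damping form below is a simple illustrative choice, not the published DMSSA operator, and the Hankel/frequency-slice machinery is omitted.

```python
import numpy as np

# Sketch of a damped truncated SVD (assumed damping form, after the DMSSA
# idea): keep the leading `rank` singular values and scale each by
# 1 - (s_min/s_i)^K, so weak (noise-dominated) components are attenuated
# more strongly than the dominant signal component.
def damped_tsvd(M, rank, K=2):
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    s_t = s[:rank]
    damp = 1.0 - (s_t[-1] ** K) / (s_t ** K)   # damping weights in [0, 1)
    return (U[:, :rank] * (s_t * damp)) @ Vt[:rank, :]

rng = np.random.default_rng(0)
clean = np.outer(rng.standard_normal(20), rng.standard_normal(15))  # rank-1 signal
noisy = clean + 0.1 * rng.standard_normal((20, 15))
den = damped_tsvd(noisy, rank=3)
```

On this toy rank-1 signal the damped reconstruction is closer to the clean matrix than the noisy input, which is the property MS-DMSSA exploits at every POCS-like iteration.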
Lautenschlager, Karin; Hwang, Chiachi; Ling, Fangqiong; Liu, Wen-Tso; Boon, Nico; Köster, Oliver; Egli, Thomas; Hammes, Frederik
2014-10-01
Indigenous bacterial communities are essential for biofiltration processes in drinking water treatment systems. In this study, we examined the microbial community composition and abundance of three different biofilter types (rapid sand, granular activated carbon, and slow sand filters) and their respective effluents in a full-scale, multi-step treatment plant (Zürich, CH). Detailed analysis of organic carbon degradation underpinned biodegradation as the primary function of the biofilter biomass. The biomass was present at concentrations of 2-5 × 10¹⁵ cells/m³ in all filters but was phylogenetically, enzymatically and metabolically diverse. Based on 16S rRNA gene-based 454 pyrosequencing analysis of microbial community composition, similar microbial taxa (predominantly Proteobacteria, Planctomycetes, Acidobacteria, Bacteroidetes, Nitrospira and Chloroflexi) were present in all biofilters and in their respective effluents, but the ratio of microbial taxa differed in each filter type. This change was also reflected in the cluster analysis, which revealed a 50-60% change in microbial community composition between the different filter types. This study documents the direct influence of the filter biomass on the microbial community composition of the final drinking water, particularly when the water is distributed without post-disinfection. The results provide new insights into the complexity of the indigenous bacteria colonizing drinking water systems, especially in the different biofilters of a multi-step treatment plant.
NASA Astrophysics Data System (ADS)
Chang, Fi-John; Tsai, Meng-Jung
2016-04-01
Accurate multi-step-ahead inflow forecasting during typhoon periods is extremely crucial for real-time reservoir flood control. We propose a spatio-temporal lumping of radar rainfall for modeling inflow forecasts to mitigate time-lag problems and improve forecasting accuracy. Spatial aggregation of radar cells is made based on the sub-catchment partitioning obtained from the Self-Organizing Map (SOM), and flood forecasting is then made by Adaptive Neuro Fuzzy Inference System (ANFIS) models coupled with a 2-staged Gamma Test (2-GT) procedure that identifies the optimal non-trivial rainfall inputs. The Shihmen Reservoir in northern Taiwan is used as a case study. The results show that the proposed methods can, in general, precisely make 1- to 4-hour-ahead forecasts, and the lag time between predicted and observed flood peaks can be mitigated. The constructed ANFIS models, with only two fuzzy if-then rules, can effectively categorize inputs into two levels (i.e. high and low) and provide an insightful view of the rainfall-runoff process, demonstrating their capability in modeling the complex rainfall-runoff process. In addition, the confidence level of forecasts with acceptable error reaches as high as 97% at horizon t+1 and 77% at horizon t+4, which evidently promotes model reliability and leads to better decisions on real-time reservoir operation during typhoon events.
MAP Support Detection for Greedy Sparse Signal Recovery Algorithms in Compressive Sensing
NASA Astrophysics Data System (ADS)
Lee, Namyoon
2016-10-01
A reliable support detection is essential for a greedy algorithm to reconstruct a sparse signal accurately from compressed and noisy measurements. This paper proposes a novel support detection method for greedy algorithms, referred to as "maximum a posteriori (MAP) support detection". Unlike existing support detection methods that identify the support indices with the largest correlation magnitude per iteration, the proposed method selects them with the largest likelihood ratios computed under the true and null support hypotheses, simultaneously exploiting the distributions of the sensing matrix, the sparse signal, and the noise. Leveraging this technique, MAP-Matching Pursuit (MAP-MP) is first presented to show the advantages of the proposed support detection method, and a sufficient condition for perfect signal recovery is derived for the case where the sparse signal is binary. Subsequently, a set of iterative greedy algorithms, called MAP-generalized Orthogonal Matching Pursuit (MAP-gOMP), MAP-Compressive Sampling Matching Pursuit (MAP-CoSaMP), and MAP-Subspace Pursuit (MAP-SP), are presented to demonstrate the applicability of the proposed support detection method to existing greedy algorithms. Empirical results show that the proposed greedy algorithms with highly reliable support detection can be better, faster, and easier to implement than basis pursuit via linear programming.
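The structure being modified is easiest to see in plain orthogonal matching pursuit, where the per-iteration support detection is a single replaceable rule. The sketch below uses the classical max-|correlation| rule as the default; the paper's MAP variant would substitute a likelihood-ratio score for `select`, which is not reproduced here.

```python
import numpy as np

# Minimal OMP with the per-iteration support-detection rule factored out.
# The default `select` is the classical max-|correlation| choice; a MAP-style
# rule would replace it with a likelihood-ratio statistic.
def omp(A, y, k, select=None):
    if select is None:
        select = lambda residual: np.argmax(np.abs(A.T @ residual))
    support, r = [], y.copy()
    for _ in range(k):
        support.append(int(select(r)))              # support detection step
        As = A[:, support]
        x_s, *_ = np.linalg.lstsq(As, y, rcond=None)  # least-squares refit
        r = y - As @ x_s                              # update residual
    x = np.zeros(A.shape[1])
    x[support] = x_s
    return x, sorted(support)

rng = np.random.default_rng(1)
A = rng.standard_normal((60, 120))
A /= np.linalg.norm(A, axis=0)                     # unit-norm columns
x_true = np.zeros(120); x_true[[7, 33, 90]] = [2.0, -1.5, 1.0]
x_hat, supp = omp(A, A @ x_true, k=3)
```

In the noiseless case above, the correlation rule already recovers the true support; the MAP statistic is designed to keep that reliability when noise and signal priors matter.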
A sub-space greedy search method for efficient Bayesian Network inference.
Zhang, Qing; Cao, Yong; Li, Yong; Zhu, Yanming; Sun, Samuel S M; Guo, Dianjing
2011-09-01
Bayesian networks (BNs) have been successfully used to infer the regulatory relationships of genes from microarray datasets. However, one major limitation of the BN approach is its computational cost, because the calculation time grows more than exponentially with the dimension of the dataset. In this paper, we propose a sub-space greedy search method for efficient Bayesian network inference. In particular, this method limits the greedy search space by selecting only gene pairs with higher partial correlation coefficients. Using both synthetic and real data, we demonstrate that the proposed method achieves results comparable with the standard greedy search method while saving ∼50% of the computational time. We believe the sub-space search method can be widely used for efficient BN inference in systems biology.
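The pre-filtering step can be sketched with first-order partial correlations: a gene pair enters the greedy search space only if its correlation survives controlling for each other gene. The threshold and the toy data are assumptions for illustration; the paper's exact filtering criterion may differ.

```python
import numpy as np

# Sketch of the sub-space idea: keep only gene pairs whose first-order
# partial correlation (controlling for each other gene in turn) stays above
# a threshold, and hand only those pairs to the greedy BN search.
def partial_corr(r, x, y, z):
    """First-order partial correlation of x and y controlling for z."""
    num = r[x, y] - r[x, z] * r[y, z]
    den = np.sqrt((1 - r[x, z] ** 2) * (1 - r[y, z] ** 2))
    return num / den

def candidate_pairs(data, threshold=0.5):
    """data: samples x genes; return pairs surviving the filter."""
    r = np.corrcoef(data, rowvar=False)
    n = r.shape[0]
    pairs = []
    for x in range(n):
        for y in range(x + 1, n):
            others = [z for z in range(n) if z not in (x, y)]
            if all(abs(partial_corr(r, x, y, z)) > threshold for z in others):
                pairs.append((x, y))
    return pairs

rng = np.random.default_rng(2)
g0 = rng.standard_normal(500)
g1 = g0 + 0.1 * rng.standard_normal(500)   # strongly coupled to g0
g2 = rng.standard_normal(500)              # independent gene
pairs = candidate_pairs(np.column_stack([g0, g1, g2]))
```

Only the genuinely coupled pair survives, so the greedy search then scores edges over a much smaller candidate set, which is where the reported ~50% time saving comes from.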
Convex dynamics: Unavoidable difficulties in bounding some greedy algorithms
NASA Astrophysics Data System (ADS)
Nowicki, Tomasz; Tresser, Charles
2004-03-01
A greedy algorithm for scheduling and digital printing with inputs in a convex polytope, and vertices of this polytope as successive outputs, has recently been proven to be bounded for any convex polytope in any dimension. This boundedness property follows readily from the existence of some invariant region for a dynamical system equivalent to the algorithm, which is what one proves. While the proof, and some constructions of invariant regions that can be made to depend on a single parameter, are reasonably simple for convex polygons in the plane, the proof of boundedness gets quite complicated in dimension three and above. We show here that such complexity is somehow justified by proving that the most natural generalization of the construction that works for polygons does not work in any dimension above two, even if we allow for as many parameters as there are faces. We first prove that some polytopes in dimension greater than two admit no invariant region to which they are combinatorially equivalent. We then modify these examples to get polytopes such that no invariant region can be obtained by pushing out the borders of the half spaces that intersect to form the polytope. We also show that another mechanism prevents some simplices (the simplest polytopes in any dimension) from admitting invariant regions to which they would be similar. By contrast in dimension two, one can always get an invariant region by pushing these borders far enough in some correlated way; for instance, pushing all borders by the same distance builds an invariant region for any polygon if the push is at a distance big enough for that polygon. To motivate the examples that we provide, we discuss briefly the bifurcations of polyhedra associated with pushing half spaces in parallel to themselves. In dimension three, the elementary codimension one bifurcation resembles the unfolding of the elementary degenerate singularity for codimension one foliations on surfaces. As the subject of this
NASA Astrophysics Data System (ADS)
Tommerup, So/ren; Endelt, Benny; Nielsen, Karl Brian
2013-12-01
This paper investigates the process control possibilities offered by a new tool concept for adaptive blank holder force (BHF) distribution. The investigation concerns the concept's application to a multi-step deep drawing process, exemplified by the NUMISHEET2014 benchmark 2: Springback of draw-redraw pan. An actuator system is used in which several cavities are embedded into the blank holder plate. By independently controlling the pressure of the hydraulic fluid in these cavities, a controlled deflection of the blank holder plate surface can be achieved, whereby the distribution of the BHF can be controlled. Using design of experiments, a full 3-level factorial experiment is conducted with respect to the cavity pressures, and the effects and interactions are evaluated.
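A full 3-level factorial over the cavity pressures is just the Cartesian product of the pressure levels across cavities. The sketch below enumerates such a design; the number of cavities and the pressure values are hypothetical, not the benchmark's settings.

```python
from itertools import product

# Enumerate a full 3-level factorial design over cavity pressures
# (cavity count and pressure levels are illustrative assumptions).
levels = {"low": 5.0, "mid": 10.0, "high": 15.0}   # MPa, hypothetical
n_cavities = 3
runs = [dict(zip(range(n_cavities), combo))
        for combo in product(levels.values(), repeat=n_cavities)]
```

With 3 cavities at 3 levels each this yields 3³ = 27 experimental runs, from which main effects and interactions of the cavity pressures can be estimated.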
Kinahan, David J; Kearney, Sinéad M; Dimov, Nikolay; Glynn, Macdara T; Ducrée, Jens
2014-07-01
The centrifugal "lab-on-a-disc" concept has proven to have great potential for process integration of bioanalytical assays, in particular where ease-of-use, ruggedness, portability, fast turn-around time and cost efficiency are of paramount importance. Yet, as all liquids residing on the disc are exposed to the same centrifugal field, an inherent challenge of these systems remains the automation of multi-step, multi-liquid sample processing and subsequent detection. In order to orchestrate the underlying bioanalytical protocols, an ample palette of rotationally and externally actuated valving schemes has been developed. While excelling in flow control, externally actuated valves require interaction with peripheral instrumentation, thus compromising the conceptual simplicity of the centrifugal platform. In turn, for rotationally controlled schemes, such as common capillary burst valves, typical manufacturing tolerances tend to limit the number of consecutive laboratory unit operations (LUOs) that can be automated on a single disc. In this paper, a major advancement on recently established dissolvable film (DF) valving is presented; for the very first time, a liquid handling sequence can be controlled in response to the completion of the preceding liquid transfer event, i.e. completely independently of external stimuli or changes in the disc rotation speed. The basic, event-triggered valve configuration is further adapted to leverage conditional, large-scale process integration. First, we demonstrate a fluidic network on a disc encompassing 10 discrete valving steps, including logical relationships such as an AND-conditional as well as serial and parallel flow control. Then we present a disc which is capable of implementing common laboratory unit operations such as metering and selective routing of flows. Finally, as a pilot study, these functions are integrated on a single disc to automate a common, multi-step lab protocol for the extraction of total RNA from
A multi-step system for screening and localization of hard exudates in retinal images
NASA Astrophysics Data System (ADS)
Bopardikar, Ajit S.; Bhola, Vishal; Raghavendra, B. S.; Narayanan, Rangavittal
2012-03-01
The number of people affected by diabetes mellitus worldwide is increasing at an alarming rate, so monitoring the diabetic condition and its effects on the human body is of great importance. Of particular interest is diabetic retinopathy (DR), a result of prolonged, unchecked diabetes that affects the visual system. DR is a leading cause of blindness throughout the world: at any point in time, 25-44% of people with diabetes are afflicted by DR. Automation of the screening and monitoring process for DR is therefore essential for efficient utilization of healthcare resources and optimal treatment of the affected individuals. Such automation would use retinal images and detect the presence of specific artifacts such as hard exudates, hemorrhages and soft exudates (that may appear in the image) to gauge the severity of DR. In this paper, we focus on the detection of hard exudates. We propose a two-step system consisting of a screening step, which classifies retinal images as normal or abnormal based on the presence of hard exudates, and a detection step, which localizes these artifacts in an abnormal retinal image. The proposed screening step automatically detects the presence of hard exudates with high sensitivity and positive predictive value (PPV). The detection/localization step uses a k-means-based clustering approach to localize hard exudates in the retinal image; suitable feature vectors are chosen based on their ability to isolate hard exudates while minimizing false detections. The algorithm was tested on a benchmark dataset (DIARETDB1) and was seen to provide superior performance compared with existing methods. The two-step process described in this paper can be embedded in a tele-ophthalmology system to aid speedy detection and diagnosis of the severity of DR.
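The localization step can be illustrated with a toy k-means clustering of pixel values: bright exudate-like pixels and background separate into two clusters. This uses raw intensity as the only feature and a synthetic image; the paper's feature vectors are richer, so treat this purely as a sketch of the clustering mechanism.

```python
import numpy as np

# Toy k-means (pure NumPy) clustering pixel intensities into a bright
# (exudate-like) cluster and a background cluster.
def kmeans_1d(values, k=2, iters=50, seed=0):
    rng = np.random.default_rng(seed)
    centers = rng.choice(values, size=k, replace=False).astype(float)
    for _ in range(iters):
        labels = np.argmin(np.abs(values[:, None] - centers[None, :]), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = values[labels == j].mean()
    return labels, centers

img = np.full((8, 8), 0.2)
img[2:4, 5:7] = 0.9                        # a bright "exudate" patch
labels, centers = kmeans_1d(img.ravel())
bright = int(np.argmax(centers))           # cluster with the higher center
mask = (labels == bright).reshape(img.shape)
```

The resulting mask isolates exactly the bright patch; in the real system this mask would then be filtered against the chosen feature vectors to reject false detections.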
GreedEx: A Visualization Tool for Experimentation and Discovery Learning of Greedy Algorithms
ERIC Educational Resources Information Center
Velazquez-Iturbide, J. A.; Debdi, O.; Esteban-Sanchez, N.; Pizarro, C.
2013-01-01
Several years ago we presented an experimental, discovery-learning approach to the active learning of greedy algorithms. This paper presents GreedEx, a visualization tool developed to support this didactic method. The paper states the design goals of GreedEx, makes explicit the major design decisions adopted, and describes its main characteristics…
Multi-step reaction mechanism for F atom interactions with organosilicate glass and SiO x films
NASA Astrophysics Data System (ADS)
Mankelevich, Yuri A.; Voronina, Ekaterina N.; Rakhimova, Tatyana V.; Palov, Alexander P.; Lopaev, Dmitry V.; Zyryanov, Sergey M.; Baklanov, Mikhail R.
2016-09-01
An ab initio approach with the density functional theory (DFT) method was used to study F atom interactions with organosilicate glass (OSG)-based low-k dielectric films. Because of the complexity and significant modifications of the OSG surface structure during the interaction with radicals and etching, a variety of reactions between the surface groups and thermal F atoms can happen. For OSG film etching and damage, we propose a multi-step mechanism based on DFT static and dynamic simulations, which is consistent with the previously reported experimental observations. The important part of the proposed mechanism is the formation of pentavalent Si atoms on the OSG surface due to a quasi-chemisorption of the incident F atoms. The revealed mechanism of F atom incorporation into the OSG matrix explains the experimentally observed phenomena of fast fluorination without significant modification of the chemical structure. We demonstrate that the pentavalent Si states induce the weakening of adjacent Si-O bonds and their breaking under F atom flux. The calculated results allow us to propose a set of elementary chemical reactions of successive removal of CH3 and CH2 groups and fluorinated SiO x matrix etching.
NASA Astrophysics Data System (ADS)
Migiyama, Go; Sugimura, Atsuhiko; Osa, Atsushi; Miike, Hidetoshi
Digital cameras have recently been advancing rapidly in technical capability. However, a shot image can differ from the sight image generated when the same scenery is seen with the naked eye: images of scenes with a wide dynamic range show blown-out highlights and crushed blacks, problems that hardly arise in the sight image. These artifacts are caused by the difference in dynamic range between the image sensor installed in a digital camera (such as a CCD or CMOS sensor) and the human visual system; the dynamic range of the shot image is narrower than that of the sight image. To solve this problem, we propose an automatic method for deciding an effective exposure range based on the superposition of edges, and we integrate multi-step exposure images using this method. In addition, we attempt to erase pseudo-edges using a process that blends exposure values. As a result, we obtain a pseudo wide dynamic range image automatically.
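The edge-superposition criterion can be sketched as follows: an exposure is useful where detail is neither blown out nor crushed, so each exposure is scored by the new edge pixels it contributes to a superposed edge map, and exposures are kept while that contribution is significant. The thresholds and the synthetic two-exposure scene are illustrative assumptions, not the authors' parameters.

```python
import numpy as np

# Sketch: score each exposure by the edge pixels it adds to the superposed
# edge map; crushed or blown-out regions contribute no edges.
def edge_map(img, thresh=0.05):
    gy, gx = np.gradient(img)
    return np.hypot(gx, gy) > thresh

def select_exposures(images, gain=0.01):
    covered = np.zeros(images[0].shape, dtype=bool)
    chosen = []
    for i, img in enumerate(images):
        new = edge_map(img) & ~covered
        if new.mean() > gain:              # exposure adds enough new edges
            chosen.append(i)
            covered |= edge_map(img)
    return chosen, covered

scene = np.full((64, 64), 0.2); scene[:, 32:] = 4.0   # dark and bright halves
scene[20:30, 10:20] = 0.6                  # detail in the dark half
scene[20:30, 40:50] = 8.0                  # detail in the bright half
short = np.clip(scene * 0.12, 0.0, 1.0)    # dark-half detail crushed
long_ = np.clip(scene * 2.0, 0.0, 1.0)     # bright-half detail blown out
chosen, covered = select_exposures([short, long_])
```

Neither exposure alone covers both patches' edges, so both are selected; their superposed edge map spans the full effective exposure range that the integration step would then fuse.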
Segmenting the Femoral Head and Acetabulum in the Hip Joint Automatically Using a Multi-Step Scheme
NASA Astrophysics Data System (ADS)
Wang, Ji; Cheng, Yuanzhi; Fu, Yili; Zhou, Shengjun; Tamura, Shinichi
We describe a multi-step approach for automatic segmentation of the femoral head and the acetabulum in the hip joint from three-dimensional (3D) CT images. Our segmentation method consists of the following steps: 1) construction of the valley-emphasized image by subtracting valleys from the original images; 2) initial segmentation of the bone regions by using conventional techniques including the initial threshold and binary morphological operations from the valley-emphasized image; 3) further segmentation of the bone regions by using the iterative adaptive classification with the initial segmentation result; 4) detection of the rough bone boundaries based on the segmented bone regions; 5) 3D reconstruction of the bone surface using the rough bone boundaries obtained in step 4) by a network of triangles; 6) correction of all vertices of the 3D bone surface based on the normal direction of vertices; 7) adjustment of the bone surface based on the corrected vertices. We evaluated our approach on 35 CT patient data sets. Our experimental results show that our segmentation algorithm is more accurate and robust against noise than other conventional approaches for automatic segmentation of the femoral head and the acetabulum. Average root-mean-square (RMS) distance from manual reference segmentations created by experienced users was approximately 0.68 mm (in-plane resolution of the CT data).
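Steps 1) and 2) can be sketched in 2D with pure NumPy: valleys are extracted with a grayscale closing (dilation then erosion), subtracted from the image to deepen the dark gap between adjacent bones, and the result is thresholded. The 3x3 window, toy intensities and threshold are illustrative assumptions, not the paper's parameters.

```python
import numpy as np

# Sketch of valley emphasis: valley = closing(img) - img, then subtract the
# valley from the image so the narrow joint space stays below the threshold.
def dilate(img):
    p = np.pad(img, 1, mode="edge")
    return np.max([p[i:i + img.shape[0], j:j + img.shape[1]]
                   for i in range(3) for j in range(3)], axis=0)

def erode(img):
    p = np.pad(img, 1, mode="edge")
    return np.min([p[i:i + img.shape[0], j:j + img.shape[1]]
                   for i in range(3) for j in range(3)], axis=0)

def valley_emphasize(img):
    valley = erode(dilate(img)) - img      # grayscale closing minus original
    return img - valley

img = np.full((16, 16), 100.0)
img[4:12, 2:7] = 200.0                     # "femoral head" block
img[4:12, 9:14] = 200.0                    # "acetabulum" block
img[4:12, 7:9] = 160.0                     # narrow joint-space valley
enhanced = valley_emphasize(img)
mask = enhanced > 150.0                    # initial bone segmentation
```

A plain threshold at 150 would merge the two blocks through the 160-valued gap; after valley emphasis the gap drops to 120 and the two bones separate, which is the point of step 1).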
Ragazzi, M; Rada, E C
2012-10-01
In the sector of municipal solid waste management, the debate on the performance of conventional and novel thermo-chemical technologies is still relevant. When a plant must be constructed, decision makers often select a technology prior to analyzing the local environmental impact of the available options, as this type of study is generally developed after the design of the plant has been carried out. Additionally, the literature lacks comparative analyses of the contributions to local air pollution from different technologies. The present study offers a multi-step approach, based on pollutant emission factors and atmospheric dilution coefficients, for a local comparative analysis. With this approach it is possible to check whether some assumptions about the advantages of the novel thermo-chemical technologies, in terms of local direct impact on air quality, hold for municipal solid waste treatment. The selected processes concern combustion, gasification and pyrolysis, alone or in combination. The pollutants considered are both carcinogenic and non-carcinogenic. A case study is presented concerning the location of a plant in an alpine region and its contribution to local air pollution. Results show that the differences among technologies are smaller than expected. The performance of each technology is discussed in detail. PMID:22795304
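The screening arithmetic behind such a comparison is compact: an emission factor times throughput gives an emission rate, which multiplied by an atmospheric dilution coefficient gives the local ground-level concentration contribution. All numbers below are hypothetical placeholders, not values from the study.

```python
# Illustrative screening arithmetic (all values hypothetical): local
# ground-level concentration contribution = emission rate x dilution
# coefficient, compared across candidate thermal treatment technologies.
def local_contribution(emission_factor_mg_per_t, throughput_t_per_h,
                       dilution_s_per_m3):
    emission_rate_mg_per_s = emission_factor_mg_per_t * throughput_t_per_h / 3600.0
    return emission_rate_mg_per_s * dilution_s_per_m3   # mg/m^3

# Hypothetical NOx emission factors for two treatment options, mg per tonne
options = {"combustion": 800.0, "gasification+combustion": 500.0}
conc = {name: local_contribution(ef, throughput_t_per_h=20.0,
                                 dilution_s_per_m3=5e-7)
        for name, ef in options.items()}
```

Because the same throughput and dilution coefficient apply to co-located options, the comparison reduces to the ratio of emission factors, which is why the study's technology differences can turn out smaller than expected once realistic factors are used.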
Multiwavelength Observations of a Slow Rise, Multi-Step X1.6 Flare and the Associated Eruption
NASA Astrophysics Data System (ADS)
Yurchyshyn, V.
2015-12-01
Using multi-wavelength observations, we studied a slow-rise, multi-step X1.6 flare that began on November 7, 2014 as a localized eruption of core fields inside a δ-sunspot and later engulfed the entire active region. The flare was associated with the formation of two systems of post-eruption arcades (PEAs) and several J-shaped flare ribbons showing extremely fine details and irreversible changes in the photospheric magnetic fields, and it was accompanied by a fast and wide coronal mass ejection. Data from the Solar Dynamics Observatory and the IRIS spacecraft, along with ground-based data from the New Solar Telescope (NST), present evidence that i) the flare and the eruption were directly triggered by a flux emergence that occurred inside the δ-sunspot at the boundary between two umbrae; ii) this event represents an example of the in-situ formation of an unstable flux rope, observed only in hot AIA channels (131 and 94 Å) and LASCO C2 coronagraph images; and iii) the global PEA system spanned the entire active region and was due to global-scale reconnection occurring at heights of about one solar radius, indicating the global spatial and temporal scale of the eruption.
A multi-step reaction model for ignition of fully-dense Al-CuO nanocomposite powders
NASA Astrophysics Data System (ADS)
Stamatis, D.; Ermoline, A.; Dreizin, E. L.
2012-12-01
A multi-step reaction model is developed to describe heterogeneous processes occurring upon heating of an Al-CuO nanocomposite material prepared by arrested reactive milling. The reaction model couples a previously derived Cabrera-Mott oxidation mechanism describing initial, low-temperature processes and an aluminium oxidation model including formation of different alumina polymorphs at increased film thicknesses and higher temperatures. The reaction model is tuned using traces measured by differential scanning calorimetry. Ignition is studied for thin powder layers and individual particles using, respectively, heated filament (heating rates of 10³-10⁴ K s⁻¹) and laser ignition (heating rate ∼10⁶ K s⁻¹) experiments. The developed heterogeneous reaction model predicts a sharp temperature increase, which can be associated with ignition, when the laser power approaches the experimental ignition threshold. In experiments, particles ignited by the laser beam are observed to explode, indicating a substantial gas release accompanying ignition. For the heated filament experiments, the model predicts exothermic reactions at the temperatures at which ignition is observed experimentally; however, strong thermal contact between the metal filament and powder prevents the model from predicting the thermal runaway. It is suggested that oxygen gas release from decomposing CuO, as observed from particles exploding upon ignition in the laser beam, disrupts the thermal contact between the powder and filament; this phenomenon must be included in the filament ignition model to enable prediction of the temperature runaway.
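The thermal-runaway behavior described above can be illustrated with a minimal single-step Arrhenius self-heating sketch (the paper's multi-step Cabrera-Mott and polymorph kinetics are far richer; every parameter value below is an illustrative placeholder, not the fitted Al-CuO kinetics):

```python
import math

def simulate_heating(beta, q=2.0e6, A=1.0e9, Ea=1.2e5, c=900.0,
                     T0=300.0, dt=1e-6, t_end=0.01):
    """Explicit-Euler integration of dT/dt = beta + (q*A/c)*exp(-Ea/(R*T)).

    beta is the external heating rate (K/s); a sharp excursion past 2500 K
    is taken as ignition. All parameters are illustrative placeholders.
    """
    R = 8.314
    T, t = T0, 0.0
    while t < t_end:
        dTdt = beta + (q * A / c) * math.exp(-Ea / (R * T))
        T += dTdt * dt
        t += dt
        if T > 2500.0:
            return t          # runaway reached: report ignition delay
    return None               # no runaway within the simulated window

t_laser = simulate_heating(1.0e6)     # laser-like heating rate: runs away
t_filament = simulate_heating(1.0e3)  # filament-like rate: no runaway here
```

At the laser-like rate the exponential self-heating term overtakes the external heating and the temperature diverges within a millisecond; at the lower rate no runaway occurs within the short simulated window of this toy model.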
Choi, Jong-Cheol; Doh, Junsang
2012-12-01
A new method for the high-throughput study of cell spreading dynamics is devised using multi-step microscopy projection photolithography based on a cell-friendly photoresist. By releasing a large number of rounded cells in single-cell arrays and monitoring their spreading dynamics by interference reflection microscopy, a large amount of cell spreading data can be acquired in a single experiment.
ERIC Educational Resources Information Center
Mechling, Linda C.; Ayres, Kevin M.; Bryant, Kathryn J.; Foster, Ashley L.
2014-01-01
The current study evaluated a relatively new video-based procedure, continuous video modeling (CVM), to teach multi-step cleaning tasks to high school students with moderate intellectual disability. CVM in contrast to video modeling and video prompting allows repetition of the video model (looping) as many times as needed while the user completes…
NASA Astrophysics Data System (ADS)
Werisch, Stefan; Lennartz, Franz; Bieberle, Andre
2013-04-01
Dynamic Multi-Step Outflow (MSO) experiments serve to estimate the parameters of soil hydraulic functions such as the Mualem-van Genuchten model. The soil hydraulic parameters are derived, using inverse modeling techniques, from outflow records and corresponding matric potential measurements, commonly from a single tensiometer. We modified the experimental setup to allow simultaneous measurement of the matric potential with three tensiometers and of the water content with a high-resolution gamma-ray densitometry measurement system (Bieberle et al., 2007; Hampel et al., 2007). Different combinations of the measured time series were used for the estimation of effective soil hydraulic properties, representing different degrees of information about the "hydraulic reality" of the sample. The inverse modeling task was solved with the multimethod search algorithm AMALGAM (Vrugt et al., 2007) in combination with the Hydrus1D model (Šimúnek et al., 2008). Subsequently, the resulting effective soil hydraulic parameters allow simulation of the MSO experiment and comparison of model results with observations. The results show that the information of a single tensiometer together with the outflow record yields a set of effective soil hydraulic parameters producing overall good agreement between simulation and observation at the location of that tensiometer. Significantly deviating results are obtained for the other tensiometer positions with this parameter set. Including more information, such as additional matric potential measurements with the corresponding water contents, in the optimization procedure leads to different, more representative hydraulic parameters which improve the overall agreement significantly. These findings indicate that more information about the soil hydraulic state variables in space and time is necessary to obtain effective soil hydraulic properties of soil core samples.
NASA Astrophysics Data System (ADS)
Chang, Fi-John; Chen, Pin-An; Lu, Ying-Ray; Huang, Eric; Chang, Kai-Yao
2014-09-01
Urban flood control is a crucial task, which commonly faces fast-rising peak flows resulting from urbanization. To mitigate future flood damages, it is imperative to construct an accurate on-line model to forecast inundation levels during flood periods. The Yu-Cheng Pumping Station located in Taipei City of Taiwan is selected as the study area. Firstly, historical hydrologic data are fully explored by statistical techniques to identify the time span of rainfall affecting the rise of the water level in the floodwater storage pond (FSP) at the pumping station. Secondly, effective factors (rainfall stations) that significantly affect the FSP water level are extracted by the Gamma test (GT). Thirdly, one static artificial neural network (ANN) (backpropagation neural network-BPNN) and two dynamic ANNs (Elman neural network-Elman NN; nonlinear autoregressive network with exogenous inputs-NARX network) are used to construct multi-step-ahead FSP water level forecast models through two scenarios: scenario I adopts rainfall and FSP water level data as model inputs, while scenario II adopts only rainfall data as model inputs. The results demonstrate that the GT can efficiently identify the effective rainfall stations as important inputs to the three ANNs; the recurrent connections from the output layer (NARX network) impose more effect on the output than those of the hidden layer (Elman NN) do; and the NARX network performs the best in real-time forecasting. The NARX network produces coefficients of efficiency within 0.9-0.7 (scenario I) and 0.7-0.5 (scenario II) in the testing stages for 10-60-min-ahead forecasts, respectively. This study suggests that the proposed NARX models can be valuable and beneficial to the government authority for urban flood control.
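Multi-step-ahead forecasting with a NARX-style model amounts to recursive one-step prediction, feeding each predicted water level back as an input for the next step. A minimal sketch follows; the lag structure and coefficients are purely illustrative, not fitted to the Yu-Cheng data:

```python
def narx_step(levels, rain, a=(0.7, 0.2), b=(0.05, 0.03)):
    """One-step prediction from the two most recent levels and rainfalls.
    Coefficients a (autoregressive) and b (exogenous) are illustrative."""
    return a[0]*levels[-1] + a[1]*levels[-2] + b[0]*rain[-1] + b[1]*rain[-2]

def forecast(levels, rain_series, horizon):
    """Multi-step-ahead forecast: each prediction is fed back as history."""
    hist = list(levels)
    out = []
    for t in range(horizon):
        y = narx_step(hist, rain_series[:len(levels) + t])
        out.append(y)
        hist.append(y)        # recursive: predicted level becomes an input
    return out

# Two observed levels, known/forecast rainfall, 3 steps ahead
preds = forecast([1.0, 1.1], [2.0, 2.0, 1.0, 0.0, 0.0], horizon=3)
```

The recursive feedback is what distinguishes multi-step-ahead forecasting from repeated one-step forecasting with observed inputs: errors compound with the horizon, which is consistent with the lower efficiency coefficients reported at longer lead times.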
NASA Astrophysics Data System (ADS)
Guo, Xiao-Xi; Hu, Wei; Liu, Yuan; Sun, Su-Qin; Gu, Dong-Chen; He, Helen; Xu, Chang-Hua; Wang, Xi-Chang
2016-02-01
BPO is often added to wheat flour as a flour improver, but its excessive use and edibility are of increasing concern. A multi-step IR macro-fingerprinting was employed to identify BPO in wheat flour and unveil its changes during storage. BPO contained in wheat flour (< 3.0 mg/kg) was difficult to identify from infrared spectra, with correlation coefficients between wheat flour and wheat flour samples containing BPO all close to 0.98. By applying second derivative spectroscopy, obvious differences between wheat flour and wheat flour containing BPO before and after storage in the range of 1500-1400 cm⁻¹ were disclosed. The peak at 1450 cm⁻¹, which belonged to BPO, was blue-shifted to 1453 cm⁻¹ (1455), which belonged to benzoic acid, after one week of storage, indicating that BPO changed into benzoic acid during storage. Moreover, when using two-dimensional correlation infrared spectroscopy (2DCOS-IR) to track changes of BPO in wheat flour (0.05 mg/g) within one week, the intensities of auto-peaks at 1781 cm⁻¹ and 669 cm⁻¹, which belonged to BPO and benzoic acid, respectively, changed inversely, indicating that BPO was decomposed into benzoic acid. Another auto-peak at 1767 cm⁻¹, which does not belong to benzoic acid, was also rising simultaneously. By heating perturbation treatment of BPO in wheat flour based on 2DCOS-IR and spectral subtraction analysis, it was found that BPO in wheat flour not only decomposed into benzoic acid and benzoate, but also produced other deleterious substances, e.g., benzene. This study offers a promising, time-saving method with minimal pretreatment to identify BPO in wheat flour and its chemical products during storage in a holistic manner.
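The inverse intensity changes tracked by 2DCOS-IR show up as negative synchronous cross-peaks. A minimal sketch of the synchronous 2D correlation spectrum (plain covariance of the dynamic spectra over the perturbation series, per Noda's formalism) is:

```python
def synchronous_2dcos(spectra):
    """Synchronous 2D correlation map: covariance of mean-centered
    ('dynamic') intensities over the perturbation (e.g. storage-time) series.
    spectra is a list of intensity vectors, one per perturbation step."""
    n, nw = len(spectra), len(spectra[0])
    mean = [sum(s[j] for s in spectra) / n for j in range(nw)]
    dyn = [[s[j] - mean[j] for j in range(nw)] for s in spectra]
    return [[sum(dyn[t][i] * dyn[t][j] for t in range(n)) / (n - 1)
             for j in range(nw)] for i in range(nw)]

# Two hypothetical bands changing inversely over storage (one species
# decomposing, the other forming): the cross-peak is negative.
phi = synchronous_2dcos([[3.0, 1.0], [2.0, 2.0], [1.0, 3.0]])
```

A negative off-diagonal element phi[i][j] signals that bands i and j vary in opposite directions, which is exactly the evidence cited above for BPO decomposing into benzoic acid.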
Greedy heuristic algorithm for solving series of eee components classification problems*
NASA Astrophysics Data System (ADS)
Kazakovtsev, A. L.; Antamoshkin, A. N.; Fedosov, V. V.
2016-04-01
Algorithms based on agglomerative greedy heuristics demonstrate precise and stable results for clustering problems based on k-means and p-median models. Such algorithms are successfully implemented in the production of specialized EEE components for use in space systems, which includes testing each EEE device and detecting homogeneous production batches of the EEE components from the test results using p-median models. In this paper, the authors propose a new version of the genetic algorithm with the greedy agglomerative heuristic which allows solving series of problems. Such an algorithm is useful for solving the k-means and p-median clustering problems when the number of clusters is unknown. Computational experiments on real data show that the precision of the results decreases only insignificantly in comparison with the initial genetic algorithm for solving a single problem.
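A greedy agglomerative heuristic of the kind described, shown here for k-means-style SSE on one-dimensional data, merges at each step the pair of clusters whose union increases total within-cluster SSE the least. The data are hypothetical batch measurements, not real EEE test results:

```python
import itertools

def sse(cluster):
    """Within-cluster sum of squared deviations from the mean."""
    m = sum(cluster) / len(cluster)
    return sum((x - m) ** 2 for x in cluster)

def greedy_agglomerative(points, k):
    """Greedily merge the cheapest pair of clusters until k remain."""
    clusters = [[p] for p in points]        # start: one cluster per point
    while len(clusters) > k:
        best = None
        for i, j in itertools.combinations(range(len(clusters)), 2):
            cost = sse(clusters[i] + clusters[j]) - sse(clusters[i]) - sse(clusters[j])
            if best is None or cost < best[0]:
                best = (cost, i, j)
        _, i, j = best
        clusters[i] = clusters[i] + clusters[j]
        del clusters[j]                      # i < j, so index i is unaffected
    return clusters

cl = greedy_agglomerative([1.0, 1.2, 5.0, 5.1, 9.0], k=2)
```

Because the merge sequence yields valid clusterings for every intermediate cluster count, the same run serves the case where the number of clusters (here, of homogeneous batches) is unknown in advance.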
Simultaneous Greedy Analysis Pursuit for compressive sensing of multi-channel ECG signals.
Avonds, Yurrit; Liu, Yipeng; Van Huffel, Sabine
2014-01-01
This paper addresses compressive sensing for multi-channel ECG. Compared to the traditional sparse signal recovery approach, which decomposes the signal into the product of a dictionary and a sparse vector, the recently developed cosparse approach exploits sparsity of the product of an analysis matrix and the original signal. We apply the cosparse Greedy Analysis Pursuit (GAP) algorithm for compressive sensing of ECG signals. Moreover, to reduce processing time, the classical single-channel GAP is generalized to a multi-channel GAP algorithm, which simultaneously reconstructs multiple signals with similar support. Numerical experiments show that the proposed method outperforms the classical sparse multi-channel greedy algorithms in terms of accuracy and the single-channel cosparse approach in terms of processing speed.
A Subspace Pursuit–based Iterative Greedy Hierarchical Solution to the Neuromagnetic Inverse Problem
Babadi, Behtash; Obregon-Henao, Gabriel; Lamus, Camilo; Hämäläinen, Matti S.; Brown, Emery N.; Purdon, Patrick L.
2013-01-01
Magnetoencephalography (MEG) is an important non-invasive method for studying activity within the human brain. Source localization methods can be used to estimate spatiotemporal activity from MEG measurements with high temporal resolution, but the spatial resolution of these estimates is poor due to the ill-posed nature of the MEG inverse problem. Recent developments in source localization methodology have emphasized temporal as well as spatial constraints to improve source localization accuracy, but these methods can be computationally intense. Solutions emphasizing spatial sparsity hold tremendous promise, since the underlying neurophysiological processes generating MEG signals are often sparse in nature, whether in the form of focal sources, or distributed sources representing large-scale functional networks. Recent developments in the theory of compressed sensing (CS) provide a rigorous framework to estimate signals with sparse structure. In particular, a class of CS algorithms referred to as greedy pursuit algorithms can provide both high recovery accuracy and low computational complexity. Greedy pursuit algorithms are difficult to apply directly to the MEG inverse problem because of the high-dimensional structure of the MEG source space and the high spatial correlation in MEG measurements. In this paper, we develop a novel greedy pursuit algorithm for sparse MEG source localization that overcomes these fundamental problems. This algorithm, which we refer to as the Subspace Pursuit-based Iterative Greedy Hierarchical (SPIGH) inverse solution, exhibits very low computational complexity while achieving very high localization accuracy. We evaluate the performance of the proposed algorithm using comprehensive simulations, as well as the analysis of human MEG data during spontaneous brain activity and somatosensory stimuli. These studies reveal substantial performance gains provided by the SPIGH algorithm in terms of computational complexity, localization accuracy
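The greedy pursuit idea underlying solvers like SPIGH (select the atom best correlated with the residual, subtract its contribution, repeat) can be sketched in its simplest form, matching pursuit on a toy orthonormal dictionary standing in for MEG lead fields; SPIGH itself adds subspace selection and a hierarchical source-space structure not shown here:

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def matching_pursuit(y, atoms, iters=2):
    """Greedy pursuit: repeatedly pick the unit-norm atom most correlated
    with the residual and subtract its contribution from the residual."""
    residual = list(y)
    coeffs = {}
    for _ in range(iters):
        k = max(range(len(atoms)), key=lambda j: abs(dot(residual, atoms[j])))
        c = dot(residual, atoms[k])
        coeffs[k] = coeffs.get(k, 0.0) + c
        residual = [r - c * a for r, a in zip(residual, atoms[k])]
    return coeffs, residual

# Toy orthonormal dictionary; the measurement is sparse in it
atoms = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
y = [0.0, 3.0, 1.0]                  # only atoms 1 and 2 are active
coeffs, res = matching_pursuit(y, atoms)
```

Each iteration costs one pass over the dictionary, which is the source of the low computational complexity claimed for greedy pursuit methods relative to convex-optimization solvers.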
Kew, William; Mitchell, John B O
2015-09-01
The application of Machine Learning to cheminformatics is a large and active field of research, but there exist few papers which discuss whether ensembles of different Machine Learning methods can improve upon the performance of their component methodologies. Here we investigated a variety of methods, including kernel-based, tree, linear, neural networks, and both greedy and linear ensemble methods. These were all tested against a standardised methodology for regression with data relevant to the pharmaceutical development process. This investigation focused on QSPR problems within drug-like chemical space. We aimed to investigate which methods perform best, and how the 'wisdom of crowds' principle can be applied to ensemble predictors. It was found that no single method performs best for all problems, but that a dynamic, well-structured ensemble predictor would perform very well across the board, usually providing an improvement in performance over the best single method. Its use of weighting factors allows the greedy ensemble to acquire a bigger contribution from the better performing models, and this helps the greedy ensemble generally to outperform the simpler linear ensemble. Choice of data preprocessing methodology was found to be crucial to performance of each method too.
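The weighted greedy ensemble described above can be sketched as Caruana-style forward selection with replacement: repeatedly add the model that most lowers validation RMSE, so better-performing models accumulate larger weights. The models and validation data below are hypothetical, not the paper's QSPR models:

```python
def rmse(pred, truth):
    return (sum((p - t) ** 2 for p, t in zip(pred, truth)) / len(truth)) ** 0.5

def greedy_ensemble(model_preds, truth, rounds=10):
    """Forward selection with replacement: each round, add the model whose
    inclusion gives the lowest ensemble RMSE; counts act as weights."""
    counts = [0] * len(model_preds)
    ens = [0.0] * len(truth)
    for r in range(1, rounds + 1):
        def with_model(m):
            # running mean of the r selected predictions so far
            return [(e * (r - 1) + p) / r for e, p in zip(ens, model_preds[m])]
        best = min(range(len(model_preds)), key=lambda m: rmse(with_model(m), truth))
        counts[best] += 1
        ens = with_model(best)
    return counts

# Hypothetical validation predictions from three regression models
truth = [1.0, 2.0, 3.0]
preds = [[1.1, 2.1, 3.1], [0.5, 2.5, 2.5], [1.0, 1.9, 3.0]]
weights = greedy_ensemble(preds, truth, rounds=5)
```

Selection with replacement is what lets the greedy ensemble weight a strong model more heavily than a simple linear (equal-weight) average would, matching the advantage reported above.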
GreedyPlus: An Algorithm for the Alignment of Interface Interaction Networks
Law, Brian; Bader, Gary D.
2015-01-01
The increasing ease and accuracy of protein-protein interaction detection has resulted in the ability to map the interactomes of multiple species. We now have an opportunity to compare species to better understand how interactomes evolve. As DNA and protein sequence alignment algorithms were required for comparative genomics, network alignment algorithms are required for comparative interactomics. A number of network alignment methods have been developed for protein-protein interaction networks, where proteins are represented as vertices linked by edges if they interact. Recently, protein interactions have been mapped at the level of amino acid positions, which can be represented as an interface-interaction network (IIN), where vertices represent binding sites, such as protein domains and short sequence motifs. However, current algorithms are not designed to align these networks and generally fail to do so in practice. We present a greedy algorithm, GreedyPlus, for IIN alignment, combining data from diverse sources, including network, protein and binding site properties, to identify putative orthologous relationships between interfaces in available worm and yeast data. GreedyPlus is fast and simple, allowing for easy customization of behaviour, yet still capable of generating biologically meaningful network alignments. PMID:26165520
Emergence of social cohesion in a model society of greedy, mobile individuals.
Roca, Carlos P; Helbing, Dirk
2011-07-12
Human wellbeing in modern societies relies on social cohesion, which can be characterized by high levels of cooperation and a large number of social ties. Both features, however, are frequently challenged by individual self-interest. In fact, the stability of social and economic systems can suddenly break down as the recent financial crisis and outbreaks of civil wars illustrate. To understand the conditions for the emergence and robustness of social cohesion, we simulate the creation of public goods among mobile agents, assuming that behavioral changes are determined by individual satisfaction. Specifically, we study a generalized win-stay-lose-shift learning model, which is only based on previous experience and rules out greenbeard effects that would allow individuals to guess future gains. The most noteworthy aspect of this model is that it promotes cooperation in social dilemma situations despite very low information requirements and without assuming imitation, a shadow of the future, reputation effects, signaling, or punishment. We find that moderate greediness favors social cohesion by a coevolution between cooperation and spatial organization, additionally showing that those cooperation-enforcing levels of greediness can be evolutionarily selected. However, a maladaptive trend of increasing greediness, although enhancing individuals' returns in the beginning, eventually causes cooperation and social relationships to fall apart. Our model is, therefore, expected to shed light on the long-standing problem of the emergence and stability of cooperative behavior.
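The win-stay-lose-shift rule at the core of the model can be sketched in a few lines. This is a simplification that keeps only the satisfaction test on a binary action; in the paper, dissatisfied agents can also shift by migrating, which is omitted here:

```python
def wsls_update(action, payoff, aspiration):
    """Win-stay-lose-shift: keep the current action if the payoff met the
    aspiration level (the agent is satisfied), otherwise switch."""
    if payoff >= aspiration:
        return action                                           # win: stay
    return "defect" if action == "cooperate" else "cooperate"   # lose: shift

# A satisfied cooperator keeps cooperating; a dissatisfied one defects.
kept = wsls_update("cooperate", payoff=3.0, aspiration=2.0)
switched = wsls_update("cooperate", payoff=1.0, aspiration=2.0)
```

The aspiration level plays the role of "greediness" in the abstract: raising it makes agents harder to satisfy, so they shift more often, which is the lever behind the cooperation-enforcing versus maladaptive regimes discussed above.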
NASA Astrophysics Data System (ADS)
Shen, M. J.; Wang, X. J.; Ying, T.; Zhang, M. F.; Wu, K.
2016-08-01
The 15 vol.% micron SiC particle (SiCp)-reinforced AZ31B magnesium matrix composite (AZ31B-SiCp) prepared with semisolid stirring-assisted ultrasonic vibration was subjected to a multi-step process. The influence of the multi-step processing route on the microstructure and mechanical properties of the AZ31B-SiCp was investigated. For comparison, the monolithic AZ31B alloy was also processed under the same conditions. The results showed that the grain sizes of the AZ31B alloy and the AZ31B-SiCp decreased gradually as the number of processing steps increased. Compared with the AZ31B-SiCp, the grain size of the AZ31B alloy was much larger, and its grain size distribution was inhomogeneous under the same processing conditions. The particles of the AZ31B-SiCp were dispersed uniformly through the multi-step processing. Moreover, the tensile properties of the materials improved gradually with each additional processing step. In particular, the strength of the AZ31B-SiCp and the ductility of the AZ31B alloy improved significantly according to the room-temperature tensile test results.
An Improved Greedy Search Algorithm for the Development of a Phonetically Rich Speech Corpus
NASA Astrophysics Data System (ADS)
Zhang, Jin-Song; Nakamura, Satoshi
An efficient way to develop large-scale speech corpora is to collect phonetically rich ones that have high coverage of phonetic contextual units. The sentence set, usually called the minimum set, should have a small text size in order to reduce the collection cost. It can be selected by a greedy search algorithm from a large mother text corpus. With the inclusion of more and more phonetic contextual effects, the number of different phonetic contextual units increases dramatically, making the search nontrivial. In order to improve the search efficiency, we previously proposed a so-called least-to-most-ordered greedy search based on the conventional algorithms. This paper evaluates these algorithms in order to show their different characteristics. The experimental results showed that the least-to-most-ordered methods successfully achieved smaller objective sets in significantly less computation time, compared with the conventional ones. This algorithm has already been applied to the development of a number of speech corpora, including a large-scale phonetically rich Chinese speech corpus, ATRPTH, which played an important role in developing our multi-language translation system.
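The sentence-selection step is essentially greedy set cover: at each iteration, pick the sentence that covers the most still-uncovered phonetic units. A minimal sketch with hypothetical sentences and diphone sets (the least-to-most ordering and contextual weighting of the paper are not shown):

```python
def greedy_cover(sentences, units):
    """Greedy set cover: repeatedly pick the sentence covering the most
    still-uncovered phonetic units until all units are covered."""
    uncovered = set(units)
    chosen = []
    while uncovered:
        best = max(sentences, key=lambda s: len(uncovered & sentences[s]))
        if not uncovered & sentences[best]:
            break                    # remaining units occur in no sentence
        chosen.append(best)
        uncovered -= sentences[best]
    return chosen

# Hypothetical sentences mapped to the diphones they contain
sents = {
    "s1": {"ab", "bc", "cd"},
    "s2": {"cd", "de"},
    "s3": {"de", "ef", "ab"},
}
order = greedy_cover(sents, {"ab", "bc", "cd", "de", "ef"})
```

The cost of each iteration is a scan over all candidate sentences, which is why the ordering heuristics evaluated in the paper matter once the unit inventory grows to rich contextual units.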
A Fast Greedy Sparse Method of Current Sources Reconstruction for Ventricular Torsion Detection
NASA Astrophysics Data System (ADS)
Bing, Lu; Jiang, Shiqin; Chen, Mengpei; Zhao, Chen; Grönemeyer, D.; Hailer, B.; Van Leeuwen, P.
2015-09-01
A fast greedy sparse (FGS) method of cardiac equivalent current source reconstruction is developed for non-invasive detection and quantitative analysis of individual left ventricular torsion. The cardiac magnetic field inverse problem is solved based on a distributed source model. The analysis of real 61-channel magnetocardiogram (MCG) data demonstrates that one or two dominant current sources with larger strengths can be identified efficiently by the FGS algorithm. Then, left ventricular torsion during systole is examined on the basis of the x, y and z coordinate curves and angle changes of the reconstructed dominant current sources. The method is non-invasive and visual, with higher sensitivity and resolution. It may enable the clinical detection of cardiac systolic and ejection dysfunction.
MotifMiner: A Table Driven Greedy Algorithm for DNA Motif Mining
NASA Astrophysics Data System (ADS)
Seeja, K. R.; Alam, M. A.; Jain, S. K.
DNA motif discovery is a much-explored problem in functional genomics. This paper describes a table-driven greedy algorithm for discovering regulatory motifs in the promoter sequences of co-expressed genes. The proposed algorithm searches both DNA strands for common patterns or motifs. The inputs to the algorithm are a set of promoter sequences, the motif length and the minimum Information Content. The algorithm generates subsequences of the given length from the shortest input promoter sequence. It stores these subsequences and their reverse complements in a table. It then searches the remaining sequences for good matches to these subsequences. The Information Content score is used to measure the goodness of the motifs. The algorithm has been tested with synthetic and real data, and the results are promising. The algorithm could discover meaningful motifs from the muscle-specific regulatory sequences.
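The two ingredients named above, the Information Content score and the reverse-complement table for searching both strands, can be sketched as follows (log2 scoring against a uniform 0.25 background is a common convention and an assumption here, not a detail stated in the abstract):

```python
import math

def information_content(motif_instances):
    """Information content of aligned motif instances, in bits, against a
    uniform background (0.25 per base). Higher means better conserved."""
    n = len(motif_instances)
    length = len(motif_instances[0])
    ic = 0.0
    for pos in range(length):
        for base in "ACGT":
            count = sum(1 for m in motif_instances if m[pos] == base)
            if count:
                f = count / n
                ic += f * math.log2(f / 0.25)
    return ic

def reverse_complement(seq):
    """Reverse complement, used to search the opposite DNA strand."""
    comp = {"A": "T", "T": "A", "C": "G", "G": "C"}
    return "".join(comp[b] for b in reversed(seq))

ic = information_content(["ACGT", "ACGT", "ACGT"])  # fully conserved 4-mer
```

A perfectly conserved position contributes 2 bits, so a fully conserved 4-mer scores 8 bits; the minimum-IC input parameter of the algorithm is a threshold on exactly this quantity.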
Price, Jeffery R; Aykac, Deniz; Hunn, John D; Kercher, Andrew K
2007-01-01
We describe new image analysis developments in support of the U.S. Department of Energy's (DOE) Advanced Gas Reactor (AGR) Fuel Development and Qualification Program. We previously reported a non-iterative, Bayesian approach for locating the boundaries of different particle layers in cross-sectional imagery. That method, however, had to be initialized by manual preprocessing where a user must select two points in each image, one indicating the particle center and the other indicating the first layer interface. Here, we describe a technique designed to eliminate the manual preprocessing and provide full automation. With a low resolution image, we use 'EdgeFlow' to approximate the layer boundaries with circular templates. Multiple snakes are initialized to these circles and deformed using a greedy Bayesian strategy that incorporates coupling terms as well as a priori information on the layer thicknesses and relative contrast. We show results indicating the effectiveness of the proposed method.
2011-01-01
Background Position-specific priors (PSPs) have been used with success to boost EM and Gibbs sampler-based motif discovery algorithms. PSP information has been computed from different sources, including orthologous conservation, DNA duplex stability, and nucleosome positioning. The use of prior information has not yet been explored in the context of combinatorial algorithms. Moreover, priors have been used only independently, and the gain of combining priors from different sources has not yet been studied. Results We extend RISOTTO, a combinatorial algorithm for motif discovery, by post-processing its output with a greedy procedure that uses prior information. PSPs from different sources are combined into a scoring criterion that guides the greedy search procedure. The resulting method, called GRISOTTO, was evaluated over 156 yeast TF ChIP-chip sequence-sets commonly used to benchmark prior-based motif discovery algorithms. Results show that GRISOTTO is at least as accurate as twelve other state-of-the-art approaches for the same task, even without combining priors. Furthermore, by considering combined priors, GRISOTTO is considerably more accurate than the state-of-the-art approaches for the same task. We also show that PSPs improve GRISOTTO's ability to retrieve motifs from mouse ChIP-seq data, indicating that the proposed algorithm can be applied to data from a different technology and for a higher eukaryote. Conclusions The conclusions of this work are twofold. First, post-processing the output of combinatorial algorithms by incorporating prior information leads to a very efficient and effective motif discovery method. Second, combining priors from different sources is even more beneficial than considering them separately. PMID:21513505
NASA Astrophysics Data System (ADS)
Lin, Chun-Cheng; Tang, Jian-Fu; Su, Hsiu-Hsien; Hong, Cheng-Shong; Huang, Chih-Yu; Chu, Sheng-Yuan
2016-06-01
The multi-step resistive switching (RS) behavior of a unipolar Pt/Li0.06Zn0.94O/Pt resistive random access memory (RRAM) device is investigated. It is found that the RRAM device exhibits normal, 2-, 3-, and 4-step RESET behaviors under different compliance currents. The transport mechanism within the device is investigated by means of current-voltage curves, in-situ transmission electron microscopy, and electrochemical impedance spectroscopy. It is shown that the ion transport mechanism is dominated by Ohmic behavior under low electric fields and the Poole-Frenkel emission effect (normal RS behavior) or Li+ ion diffusion (2-, 3-, and 4-step RESET behaviors) under high electric fields.
Chen, Chunhui; Chen, Chuansheng; Moyzis, Robert; Stern, Hal; He, Qinghua; Li, He; Li, Jin; Zhu, Bi; Dong, Qi
2011-01-01
Traditional behavioral genetic studies (e.g., twin, adoption studies) have shown that human personality has moderate to high heritability, but recent molecular behavioral genetic studies have failed to identify quantitative trait loci (QTL) with consistent effects. The current study adopted a multi-step approach (ANOVA followed by multiple regression and permutation) to assess the cumulative effects of multiple QTLs. Using a system-level (dopamine system) genetic approach, we investigated a personality trait deeply rooted in the nervous system (the Highly Sensitive Personality, HSP). 480 healthy Chinese college students were given the HSP scale and genotyped for 98 representative polymorphisms in all major dopamine neurotransmitter genes. In addition, two environment factors (stressful life events and parental warmth) that have been implicated for their contributions to personality development were included to investigate their relative contributions as compared to genetic factors. In Step 1, using ANOVA, we identified 10 polymorphisms that made statistically significant contributions to HSP. In Step 2, the main effects and interactions of these polymorphisms were assessed using multiple regression. This model accounted for 15% of the variance of HSP (p<0.001). Recent stressful life events accounted for an additional 2% of the variance. Finally, permutation analyses ascertained the probability of obtaining these findings by chance to be very low, p ranging from 0.001 to 0.006. Dividing these loci by the subsystems of dopamine synthesis, degradation/transport, receptor and modulation, we found that the modulation and receptor subsystems made the most significant contribution to HSP. The results of this study demonstrate the utility of a multi-step neuronal system-level approach in assessing genetic contributions to individual differences in human behavior. It can potentially bridge the gap between the high heritability estimates based on traditional behavioral genetics
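The final permutation step can be sketched with stdlib Python; the data and the single-predictor model below are illustrative stand-ins, not the study's 98-polymorphism regression:

```python
import random

def pearson(x, y):
    """Pearson correlation coefficient, stdlib only."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

def permutation_p(x, y, trials=2000, seed=0):
    """Empirical p-value: how often does shuffling the phenotype give
    an R^2 at least as large as the observed one?"""
    rng = random.Random(seed)
    observed = pearson(x, y) ** 2
    ys = list(y)
    hits = 0
    for _ in range(trials):
        rng.shuffle(ys)
        if pearson(x, ys) ** 2 >= observed:
            hits += 1
    return (hits + 1) / (trials + 1)  # add-one so p is never exactly 0

# Toy data: a cumulative genotype score and a trait score.
genotype = [0, 0, 1, 1, 1, 2, 2, 2, 2, 2, 0, 1]
trait = [1.0, 1.2, 2.1, 1.9, 2.3, 3.0, 2.8, 3.2, 3.1, 2.9, 1.1, 2.0]
print(permutation_p(genotype, trait))  # small p: association unlikely by chance
```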
Vaisocherová-Lísalová, Hana; Víšová, Ivana; Ermini, Maria Laura; Špringer, Tomáš; Song, Xue Chadtová; Mrázek, Jan; Lamačová, Josefína; Scott Lynn, N; Šedivák, Petr; Homola, Jiří
2016-06-15
Recent outbreaks of foodborne illnesses have shown that foodborne bacterial pathogens present a significant threat to public health, resulting in an increased need for technologies capable of fast and reliable screening of food commodities. The optimal method of pathogen detection in foods should: (i) be rapid, specific, and sensitive; (ii) require minimum sample preparation; and (iii) be robust and cost-effective, thus enabling use in the field. Here we report the use of an SPR biosensor based on ultra-low fouling and functionalizable poly(carboxybetaine acrylamide) (pCBAA) brushes for the rapid and sensitive detection of bacterial pathogens in crude food samples utilizing a three-step detection assay. We studied both the surface resistance to fouling and the functional capabilities of these brushes with respect to each step of the assay, namely: (I) incubation of the sensor with crude food samples, resulting in the capture of bacteria by antibodies immobilized to the pCBAA coating, (II) binding of secondary biotinylated antibody (Ab2) to previously captured bacteria, and (III) binding of streptavidin-coated gold nanoparticles to the biotinylated Ab2 in order to enhance the sensor response. We also investigated the effects of the brush thickness on the biorecognition capabilities of the gold-grafted functionalized pCBAA coatings. We demonstrate that pCBAA exhibits, compared to standard low-fouling OEG-based alkanethiolate self-assembled monolayers, superior resistance both to fouling from complex food samples and to the non-specific binding of S-AuNPs. We further demonstrate that an SPR biosensor based on a pCBAA brush with a thickness as low as 20 nm was capable of detecting E. coli O157:H7 and Salmonella sp. in complex hamburger and cucumber samples with extraordinary sensitivity and specificity. The limits of detection for the two bacteria in cucumber and hamburger extracts were determined to be 57 CFU/mL and 17 CFU/mL for E. coli and 7.4 × 10
NASA Astrophysics Data System (ADS)
Ma, Hui; Zhou, Haijun
2011-05-01
In this brief report we explore the energy landscapes of two spin glass models using a greedy single-spin flipping process, Gmax. The ground-state energy density of the random maximum two-satisfiability problem is efficiently approached by Gmax. The achieved energy density e(t) decreases with the evolution time t as e(t) − e(∞) = h (log10 t)^(−z), with a small prefactor h and a scaling coefficient z > 1, indicating an energy landscape with deep and rugged funnel-shaped regions. For the ±J Viana-Bray spin glass model, however, the greedy single-spin dynamics quickly becomes trapped in a locally minimal region of the energy landscape.
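A minimal sketch of a Gmax-style greedy single-spin-flip descent, run here on a small random ±J instance rather than the models studied in the paper:

```python
import random

def energy(spins, couplings):
    """E = -sum over coupled pairs of J_ab * s_a * s_b."""
    return -sum(j * spins[a] * spins[b] for (a, b), j in couplings.items())

def greedy_descent(n, couplings, seed=0):
    """Repeatedly flip the single spin giving the largest energy drop;
    stop when no flip lowers the energy (a local minimum)."""
    rng = random.Random(seed)
    spins = [rng.choice([-1, 1]) for _ in range(n)]
    start = energy(spins, couplings)
    while True:
        best_delta, best_i = 0, None
        for i in range(n):
            # local field on spin i; flipping it changes E by 2 * s_i * h_i
            h = sum(j * (spins[b] if a == i else spins[a])
                    for (a, b), j in couplings.items() if i in (a, b))
            delta = 2 * spins[i] * h
            if delta < best_delta:
                best_delta, best_i = delta, i
        if best_i is None:
            return spins, start
        spins[best_i] *= -1

# Random +-J couplings on a small complete graph.
rng = random.Random(1)
n = 12
couplings = {(a, b): rng.choice([-1, 1])
             for a in range(n) for b in range(a + 1, n)}
final, e0 = greedy_descent(n, couplings)
print(energy(final, couplings), "<=", e0)
```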
Shi, Junwei; Cao, Xu; Liu, Fei; Zhang, Bin; Luo, Jianwen; Bai, Jing
2013-03-01
Fluorescence molecular tomography (FMT) is a promising imaging modality that enables three-dimensional visualization of fluorescent targets in vivo in small animals. L2-norm regularization methods are usually used for the severely ill-posed FMT problem. However, the smoothing effects caused by these methods result in a continuous distribution that lacks high-frequency edge-type features and hence limits the resolution of FMT. In this paper, the sparsity in FMT reconstruction results is exploited via compressed sensing (CS). First, in order to ensure the feasibility of CS for the FMT inverse problem, truncated singular value decomposition (TSVD) conversion is implemented for the measurement matrix of the FMT problem. Then, as a greedy algorithm, an ameliorated stagewise orthogonal matching pursuit with gradually shrunk thresholds and a specific halting condition is developed for the FMT inverse problem. To evaluate the proposed algorithm, we compared it with a TSVD method based on L2-norm regularization in numerical simulation and phantom experiments. The results show that the proposed algorithm can obtain higher spatial resolution and higher signal-to-noise ratio compared with the TSVD method.
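The stagewise greedy idea (admit all atoms whose residual correlation clears a threshold, re-fit, then shrink the threshold) can be sketched as follows; this toy uses an orthonormal DCT dictionary and is not the paper's FMT-specific implementation:

```python
import math

def lstsq(cols, y):
    """Least squares on the selected columns via the normal equations
    (naive Gaussian elimination; fine for small, well-posed systems)."""
    k, m = len(cols), len(y)
    G = [[sum(cols[i][t] * cols[j][t] for t in range(m)) for j in range(k)]
         for i in range(k)]
    b = [sum(cols[i][t] * y[t] for t in range(m)) for i in range(k)]
    for p in range(k):
        for r in range(p + 1, k):
            f = G[r][p] / G[p][p]
            for c in range(p, k):
                G[r][c] -= f * G[p][c]
            b[r] -= f * b[p]
    x = [0.0] * k
    for p in range(k - 1, -1, -1):
        x[p] = (b[p] - sum(G[p][j] * x[j] for j in range(p + 1, k))) / G[p][p]
    return x

def stomp(A, y, thresh=0.9, shrink=0.7, tol=1e-10, max_stages=20):
    """Stagewise OMP: each stage admits every unused atom whose residual
    correlation clears `thresh * max`, re-fits on the grown support, and
    then shrinks the threshold so weaker atoms can enter later."""
    m, n = len(A), len(A[0])
    cols = [[A[r][j] for r in range(m)] for j in range(n)]
    support, coeffs, residual = [], [], list(y)
    for _ in range(max_stages):
        corr = [abs(sum(cols[j][t] * residual[t] for t in range(m)))
                for j in range(n)]
        top = max(corr)
        if top < tol:
            break
        support += [j for j in range(n)
                    if j not in support and corr[j] >= thresh * top]
        coeffs = lstsq([cols[j] for j in support], y)
        residual = [y[t] - sum(cols[j][t] * c for j, c in zip(support, coeffs))
                    for t in range(m)]
        thresh *= shrink
        if sum(r * r for r in residual) < tol:
            break
    return support, coeffs

m = 8
# Orthonormal DCT-II dictionary (m x m).
A = [[(1 / m) ** 0.5 if j == 0 else
      (2 / m) ** 0.5 * math.cos(math.pi * (t + 0.5) * j / m)
      for j in range(m)] for t in range(m)]
x_true = [0.0] * m
x_true[2], x_true[5] = 1.5, -1.0
y = [sum(A[t][j] * x_true[j] for j in range(m)) for t in range(m)]
support, coeffs = stomp(A, y)
print(sorted(support))  # → [2, 5]
```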
Greedy data transportation scheme with hard packet deadlines for wireless ad hoc networks.
Lee, HyungJune
2014-01-01
We present a greedy data transportation scheme with hard packet deadlines for ad hoc sensor networks of stationary nodes and multiple mobile nodes with scheduled trajectory paths and arrival times. In the proposed routing strategy, each stationary ad hoc node en route decides whether to relay a packet to the next shortest-path stationary node toward the destination or to a passing-by mobile node that will carry it closer to the destination. We aim to utilize mobile nodes to minimize the total routing cost as long as the selected route can satisfy the end-to-end packet deadline. We evaluate our proposed routing algorithm in terms of routing cost, packet delivery ratio, packet delivery time, and usability of mobile nodes, based on network-level simulations. Simulation results show that our proposed algorithm fully exploits the remaining time until the packet deadline, turning it into networking benefits: a reduced overall routing cost and improved packet delivery performance. Also, we demonstrate that the routing scheme guarantees packet delivery with hard deadlines, contributing to QoS improvement in various network services.
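The per-hop relay decision described above can be sketched as a deadline-constrained cost comparison; the cost model and numbers are hypothetical, not the paper's simulation setup:

```python
def choose_relay(deadline_left, static_hop, mobile_hop):
    """Each option is (delivery_time, routing_cost). Prefer the cheaper
    mobile carry only when it still meets the hard packet deadline;
    otherwise fall back to the shortest-path stationary relay, and
    drop the packet if even that cannot meet the deadline."""
    m_time, m_cost = mobile_hop
    s_time, s_cost = static_hop
    if m_time <= deadline_left and m_cost < s_cost:
        return "mobile"
    return "static" if s_time <= deadline_left else "drop"

# Plenty of slack: the cheap mobile carry wins.
print(choose_relay(deadline_left=30, static_hop=(5, 10), mobile_hop=(20, 2)))
# Tight deadline: only the fast stationary path qualifies.
print(choose_relay(deadline_left=8, static_hop=(5, 10), mobile_hop=(20, 2)))
```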
Cai, Chuangjian; Zhang, Lin; Cai, Wenjuan; Zhang, Dong; Lv, Yanlu; Luo, Jianwen
2016-01-01
In order to improve the spatial resolution of time-domain (TD) fluorescence molecular lifetime tomography (FMLT), an accelerated nonlinear orthogonal matching pursuit (ANOMP) algorithm is proposed. As a kind of nonlinear greedy sparsity-constrained methods, ANOMP can find an approximate solution of L0 minimization problem. ANOMP consists of two parts, i.e., the outer iterations and the inner iterations. Each outer iteration selects multiple elements to expand the support set of the inverse lifetime based on the gradients of a mismatch error. The inner iterations obtain an intermediate estimate based on the support set estimated in the outer iterations. The stopping criterion for the outer iterations is based on the stability of the maximum reconstructed values and is robust for problems with targets at different edge-to-edge distances (EEDs). Phantom experiments with two fluorophores at different EEDs and in vivo mouse experiments demonstrate that ANOMP can provide high quantification accuracy, even if the EED is relatively small, and high resolution. PMID:27446648
Computer-Assisted Test Assembly Using Optimization Heuristics.
ERIC Educational Resources Information Center
Leucht, Richard M.
1998-01-01
Presents a variation of a "greedy" algorithm that can be used in test-assembly problems. The algorithm, the normalized weighted absolute-deviation heuristic, selects items to have a locally optimal fit to a moving set of average criterion values. Demonstrates application of the model. (SLD)
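The core of such a greedy heuristic, picking at each step the item whose criterion values best match a moving set of targets, can be sketched as follows (a simplified two-criterion stand-in, not the published NWADH):

```python
def nwad(item, targets, weights):
    """Normalized weighted absolute deviation of an item's criterion
    values from the current per-criterion targets."""
    return (sum(w * abs(v - t) for v, t, w in zip(item, targets, weights))
            / sum(weights))

def assemble(pool, test_len, targets, weights):
    """Greedy test assembly: at each step take the unused item with the
    locally optimal (smallest) deviation from the moving targets."""
    chosen, remaining = [], list(pool)
    for _ in range(test_len):
        best = min(remaining, key=lambda it: nwad(it, targets, weights))
        chosen.append(best)
        remaining.remove(best)
        # 'Moving' targets: steer the next pick to offset the running mean.
        k = len(chosen)
        targets = [t + (t - sum(it[c] for it in chosen) / k)
                   for c, t in enumerate(targets)]
    return chosen

# Items described by (difficulty, discrimination); target a medium test.
pool = [(0.2, 0.9), (0.5, 1.0), (0.8, 1.1), (0.45, 1.2), (0.55, 0.8)]
test = assemble(pool, test_len=3, targets=[0.5, 1.0], weights=[1.0, 1.0])
print(test)
```

Note how the second and third picks deliberately bracket the target means, so the assembled test averages back toward (0.5, 1.0).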
NASA Astrophysics Data System (ADS)
Hatta, Kohei; Nakajima, Yohei; Isoda, Erika; Itoh, Mariko; Yamamoto, Tamami
The brain is one of the most complicated structures in nature. Zebrafish is a useful model to study the development of the vertebrate brain, because it is transparent at the early embryonic stage and develops rapidly outside of the body. We made a series of transgenic zebrafish expressing green fluorescent protein-related molecules, for example Kaede and KikGR, whose green fluorescence can be irreversibly converted to red upon irradiation with ultraviolet (UV) or violet light, and Dronpa, whose green fluorescence is eliminated with strong blue light but can be reactivated upon irradiation with UV or violet light. We have recently shown that the infrared laser-evoked gene operator (IR-LEGO), which causes a focused heat shock, can locally induce these fluorescent proteins and other genes. Neural cell migration and axonal pattern formation in the living brain could be visualized by this technique. We can also express channelrhodopsin-2 (ChR2), a photoactivatable cation channel, or Natronomonas pharaonis halorhodopsin (NpHR), a photoactivatable chloride ion pump, locally in the nervous system by IR. The behaviors of these animals can then be controlled by activating or silencing the local neurons with light. This novel strategy is useful in discovering neurons and circuits responsible for a wide variety of animal behaviors. We proposed to call this method ‘multi-stepped optogenetics’.
Grain refinement in an AlZnMgCuTi alloy by intensive melt shearing: A multi-step nucleation mechanism
NASA Astrophysics Data System (ADS)
Li, H. T.; Xia, M.; Jarry, Ph.; Scamans, G. M.; Fan, Z.
2011-01-01
Direct chill (DC) cast ingots of wrought Al alloys conventionally require the deliberate addition of a grain refiner to provide a uniform as-cast microstructure for the optimisation of both mechanical properties and processability. Grain refiner additions have been in widespread industrial use for more than half a century. Intensive melt shearing can provide grain refinement without the need for a specific grain refiner addition for both magnesium and aluminium based alloys. In this paper we present experimental evidence of the grain refinement in an experimental wrought aluminium alloy achieved by intensive melt shearing in the liquid state prior to solidification. The mechanisms for high shear induced grain refinement are correlated with the evolution of oxides in alloys. The oxides present in liquid aluminium alloys, normally as oxide films and clusters, can be effectively dispersed by intensive shearing and then provide effective sites for the heterogeneous nucleation of the Al3Ti phase. As a result, Al3Ti particles with a narrower size distribution and hence improved efficiency as active nucleation sites of α-aluminium grains are responsible for the achieved significant grain refinement. This is termed a multi-step nucleation mechanism.
Marquette, Ian; Quesne, Christiane
2014-11-15
Type III multi-step rationally extended harmonic oscillator and radial harmonic oscillator potentials, characterized by a set of k integers m_1, m_2, ..., m_k, such that m_1 < m_2 < ... < m_k with m_i even (resp. odd) for i odd (resp. even), are considered. The state-adding and state-deleting approaches to these potentials in a supersymmetric quantum mechanical framework are combined to construct new ladder operators. The eigenstates of the Hamiltonians are shown to separate into m_k + 1 infinite-dimensional unitary irreducible representations of the corresponding polynomial Heisenberg algebras. These ladder operators are then used to build a higher-order integral of motion for seven new infinite families of superintegrable two-dimensional systems separable in Cartesian coordinates. The finite-dimensional unitary irreducible representations of the polynomial algebras of such systems are directly determined from the ladder operator action on the constituent one-dimensional Hamiltonian eigenstates and provide an algebraic derivation of the superintegrable systems' whole spectrum including the level total degeneracies.
Very low-pressure (VLP) CVD growth of high quality γ-Al2O3 films on silicon by multi-step process
NASA Astrophysics Data System (ADS)
Tan, Liwen; Zan, Yude; Wang, Jun; Wang, Qiyuan; Yu, Yuanhuan; Wang, Shurui; Liu, Zhongli; Lin, Lanying
2002-03-01
γ-Al2O3 films were grown on Si (100) substrates using the sources of TMA (Al(CH3)3) and O2 by very low-pressure chemical vapor deposition. The effects of temperature control on the crystalline quality, surface morphology, uniformity and dielectric properties were investigated. It has been found that the γ-Al2O3 film prepared at a temperature of 1000°C has a good crystalline quality, but the surface morphology, uniformity and dielectric properties were poor due to the etching reaction between O2 and the Si substrate in the initial growth stage. However, under a temperature-varied multi-step process the properties of the γ-Al2O3 film were improved. The films have a mirror-like surface and the dielectric properties were superior to those of films grown under a single-step process. The uniformity of the γ-Al2O3 films for a 2-in epi-wafer was <5%, which is better than that reported elsewhere. In order to improve the crystalline quality, the γ-Al2O3 films were annealed for 1 h in an O2 atmosphere.
NASA Astrophysics Data System (ADS)
Prados, A. I.; Gupta, P.; Mehta, A. V.; Schmidt, C.; Blevins, B.; Carleton-Hug, A.; Barbato, D.
2014-12-01
NASA's Applied Remote Sensing Training Program (ARSET), http://arset.gsfc.nasa.gov, within NASA's Applied Sciences Program, has been providing applied remote sensing training since 2008. The goals of the program are to develop the technical and analytical skills necessary to utilize NASA resources for decision support, and to help end-users navigate the vast data resources freely available. We discuss our multi-step approach to improving access to and use of NASA satellite and model data for air quality, water resources, disaster, and land management. The program has reached over 1600 participants worldwide using a combined online and interactive approach. We will discuss lessons learned as well as best practices and success stories in improving the use of NASA Earth Science resources archived at multiple data centers by end-users in the private and public sectors. ARSET's program evaluation method for improving the program and assessing the benefits of trainings to U.S. and international organizations will also be described.
NASA Astrophysics Data System (ADS)
Xu, Rong; Sun, Suqin; Zhu, Weicheng; Xu, Changhua; Liu, Yougang; Shen, Liang; Shi, Yue; Chen, Jun
2014-07-01
The genus Cistanche generally has four species in China, including C. deserticola (CD), C. tubulosa (CT), C. salsa (CS) and C. sinensis (CSN), among which CD and CT are official herbal sources of Cistanche Herba (CH). To clarify the sources of CH and ensure the clinical efficacy and safety, a multi-step IR macro-fingerprint method was developed to analyze and evaluate the ethanol extracts of the four species. Through this method, the four species were distinctively distinguished, and the main active components, phenylethanoid glycosides (PhGs), were estimated rapidly according to the fingerprint features in the original IR spectra, second derivative spectra, correlation coefficients and 2D-IR correlation spectra. The exclusive IR fingerprints in the spectra, including the positions, shapes and numbers of peaks, indicated that the constituents of CD were the most abundant, and CT had the highest level of PhGs. The results deduced by some macroscopic features in the IR fingerprint were in agreement with the HPLC fingerprint of PhGs from the four species, but it should be noted that the IR provided more chemical information than HPLC. In conclusion, with the advantages of high resolution, cost-effectiveness and speed, the macroscopic IR fingerprint method should be a promising analytical technique for discriminating extremely similar herbal medicines, monitoring and tracing the constituents of different extracts and even for quality control of complex systems such as TCM.
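Two ingredients of the macro-fingerprint, second-derivative spectra and correlation coefficients, have simple numerical forms; a stdlib-only sketch on toy band shapes (not the authors' software):

```python
def second_derivative(spectrum):
    """Central-difference second derivative; sharpens overlapping bands."""
    return [spectrum[i - 1] - 2 * spectrum[i] + spectrum[i + 1]
            for i in range(1, len(spectrum) - 1)]

def correlation(a, b):
    """Pearson correlation between two spectra of equal length."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a)
    vb = sum((y - mb) ** 2 for y in b)
    return cov / (va * vb) ** 0.5

# Toy "spectra": a reference band, a scaled copy, and a shifted band.
ref     = [0.0, 0.1, 0.4, 1.0, 0.4, 0.1, 0.0, 0.0]
scaled  = [0.0, 0.2, 0.8, 2.0, 0.8, 0.2, 0.0, 0.0]   # same shape
shifted = [0.0, 0.0, 0.1, 0.4, 1.0, 0.4, 0.1, 0.0]   # band moved

print(round(correlation(ref, scaled), 3))   # ~1.0: same profile
print(round(correlation(ref, shifted), 3))  # lower: different profile
```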
NASA Astrophysics Data System (ADS)
Zeb Gul, Jahan; Yang, Bong-Su; Yang, Young Jin; Chang, Dong Eui; Choi, Kyung Hyun
2016-11-01
Soft bots have the expedient ability of adopting intricate postures and fitting in complex shapes compared to mechanical robots. This paper presents a unique in situ UV curing three-dimensional (3D) printed multi-material tri-legged soft bot with spider mimicked multi-step dynamic forward gait using commercial bio metal filament (BMF) as an actuator. The printed soft bot can produce controllable forward motion in response to external signals. The fundamental properties of BMF, including output force, contractions at different frequencies, initial loading rate, and displacement-rate are verified. The tri-pedal soft bot CAD model is designed inspired by spider’s legged structure and its locomotion is assessed by simulating strain and displacement using finite element analysis. A customized rotational multi-head 3D printing system assisted with multiple wavelength’s curing lasers is used for in situ fabrication of tri-pedal soft-bot using two flexible materials (epoxy and polyurethane) in three layered steps. The size of tri-pedal soft-bot is 80 mm in diameter and each pedal’s width and depth is 5 mm × 5 mm respectively. The maximum forward speed achieved is 2.7 mm s‑1 @ 5 Hz with input voltage of 3 V and 250 mA on a smooth surface. The fabricated tri-pedal soft bot proved its power efficiency and controllable locomotion at three input signal frequencies (1, 2, 5 Hz).
Xiong, Hanzhen; Li, Qiulian; Chen, Ruichao; Liu, Shaoyan; Lin, Qiongyan; Xiong, Zhongtang; Jiang, Qingping; Guo, Linlang
2016-01-01
We aimed to identify endometrioid endometrial carcinoma (EEC)-related gene signatures using a multi-step miRNA-mRNA regulatory network construction approach. Pathway analysis showed that 61 genes were enriched on many carcinoma-related pathways. Among the 14 highest scoring gene signatures, six genes had previously been associated with endometrial carcinoma. By qRT-PCR and next generation sequencing, we found that a gene signature (CPEB1) was significantly down-regulated in EEC tissues, which may be caused by hsa-miR-183-5p up-regulation. In addition, our literature surveys suggested that CPEB1 may play an important role in EEC pathogenesis by regulating the EMT/p53 pathway. The miRNA-mRNA network is worthy of further investigation with respect to the regulatory mechanisms of miRNAs in EEC. CPEB1 appeared to be a tumor suppressor in EEC. Our results provided valuable guidance for the functional study at the cellular level, as well as the EEC mouse models. PMID:27271671
Macías, Francisco; Caraballo, Manuel A; Rötting, Tobias S; Pérez-López, Rafael; Nieto, José Miguel; Ayora, Carlos
2012-09-01
Complete metal removal from highly polluted acid mine drainage was attained by the use of a pilot multi-step passive remediation system. The remediation strategy employed can conceptually be subdivided into a first section, where complete trivalent metal removal was achieved by a previously tested limestone-based passive remediation technology, followed by a novel reactive substrate (caustic magnesia powder dispersed in a wood shavings matrix) achieving total divalent metal precipitation. This MgO step was capable of abating high concentrations of Zn together with Mn, Cd, Co and Ni to below the recommended limits for drinking water. A reactive transport model anticipates that 1 m(3) of MgO-DAS (1 m thick × 1 m(2) section) would be able to treat a flow of 0.5 L/min of highly acidic water (total acidity of 788 mg/L CaCO(3)) for more than 3 years. PMID:22819882
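The quoted treatment capacity can be cross-checked with back-of-the-envelope arithmetic (our own check, not a figure from the paper): at 0.5 L/min for 3 years and 788 mg/L acidity, roughly 620 kg of CaCO3-equivalent acidity pass through each cubic metre of substrate.

```python
flow_l_per_min = 0.5        # treatable flow per m^3 of MgO-DAS
acidity_mg_per_l = 788.0    # total acidity as CaCO3 equivalent
minutes_3yr = 3 * 365 * 24 * 60

litres = flow_l_per_min * minutes_3yr
acidity_kg = litres * acidity_mg_per_l / 1e6
print(round(litres), "L treated,", round(acidity_kg), "kg CaCO3-eq neutralized")
```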
Flores, Glenn
2002-07-01
Cinematic depictions of physicians potentially can affect public expectations and the patient-physician relationship, but little attention has been devoted to portrayals of physicians in movies. The objective of the study was the analysis of cinematic depictions of physicians to determine common demographic attributes of movie physicians, major themes, and whether portrayals have changed over time. All movies released on videotape with physicians as main characters and readily available to the public were viewed in their entirety. Data were collected on physician characteristics, diagnoses, and medical accuracy, and dialogue concerning physicians was transcribed. The results showed that in the 131 films, movie physicians were significantly more likely to be male (p < 0.00001), White (p < 0.00001), and < 40 years of age (p < 0.009). The proportion of women and minority film physicians has declined steadily in recent decades. Movie physicians are most commonly surgeons (33%), psychiatrists (26%), and family practitioners (18%). Physicians were portrayed negatively in 44% of movies, and since the 1960s positive portrayals declined while negative portrayals increased. Physicians frequently are depicted as greedy, egotistical, uncaring, and unethical, especially in recent films. Medical inaccuracies occurred in 27% of films. Compassion and idealism were common in early physician movies but are increasingly scarce in recent decades. A recurrent theme is the "mad scientist," the physician-researcher that values research more than patients' welfare. Portrayals of physicians as egotistical and materialistic have increased, whereas sexism and racism have waned. Movies from the past two decades have explored critical issues surrounding medical ethics and managed care. We conclude that negative cinematic portrayals of physicians are on the rise, which may adversely affect patient expectations and the patient-physician relationship. Nevertheless, films about physicians can
Magmatically Greedy Reararc Volcanoes of the N. Tofua Segment of the Tonga Arc
NASA Astrophysics Data System (ADS)
Rubin, K. H.; Embley, R. W.; Arculus, R. J.; Lupton, J. E.
2013-12-01
Volcanism along the northernmost Tofua Arc is enigmatic because edifices along the arc's volcanic front are mostly magmatically anemic, despite the very high convergence rate of the Pacific Plate with this section of the Tonga Arc. However, just westward of the arc front, in terrain generally thought of as part of the adjacent NE Lau Backarc Basin, lies a series of very active volcanoes and volcanic features, including the large submarine caldera Niuatahi (aka volcano 'O'), a large composite dacite lava flow terrain not obviously associated with any particular volcanic edifice, and the Mata volcano group, a series of 9 small elongate volcanoes in an extensional basin at the extreme NE corner of the Lau Basin. These three volcanic terrains do not sit on arc-perpendicular cross chains. Collectively, these volcanic features appear to be receiving a large proportion of the magma flux from the sub-Tonga/Lau mantle wedge, in effect 'stealing' this magma flux from the arc front. A second occurrence of such magma 'capture' from the arc front occurs in an area just to the south, on the southernmost portion of the Fonualei Spreading Center. Erupted compositions at these 'magmatically greedy' volcanoes are consistent with high slab-derived fluid input into the wedge (particularly trace element abundances and volatile contents, e.g., see Lupton abstract this session). It is unclear how long-lived a feature this is, but the very presence of such hyperactive and areally-dispersed volcanism behind the arc front implies these volcanoes are not in fact part of any focused spreading/rifting in the Lau Backarc Basin, and should be thought of as 'reararc volcanoes'. Possible tectonic factors contributing to this unusually productive reararc environment are the high rate of convergence, the cold slab, the highly disorganized extension in the adjacent backarc, and the tear in the subducting plate just north of the Tofua Arc.
NASA Astrophysics Data System (ADS)
Yang, J.-S.; Yu, S.-P.; Liu, G.-M.
2013-07-01
In order to increase the accuracy of serial-propagated long-range multi-step-ahead (MSA) prediction, which has high practical value but is difficult to carry out because of huge error accumulation, a novel wavelet-NN hybrid model, CDW-NN, combining continuous and discrete wavelet transforms (CWT and DWT) and neural networks (NN), is designed as the MSA predictor for effective long-term forecasting of hydrological signals. Through the application of 12 types of hybrid and pure models to 1096-day estuarine river stage forecasting, the different forecast performances and the superiorities of the CDW-NN model with the corresponding driving mechanisms are discussed, and one type of CDW-NN model (CDW-NF), which uses neuro-fuzzy as the forecast submodel, has been proven to be the most effective MSA predictor for accuracy enhancement over the full 1096-day long-term forecast. The special superiority of the CDW-NF model lies in the CWT-based methodology, which determines the 15- and 28-day prior data series as model inputs by revealing the significant short-time periodicities involved in estuarine river stage signals. Compared with conventional single-step-ahead-based long-term forecast models, the CWT-based hybrid models broaden the prediction range of each forecast step from 1 day to 15 days, thus reducing the overall number of forecasting iterations from 1096 to 74 and finally creating a significant decrease in error accumulation. In addition, the combination of the advantages of the DWT method and the neuro-fuzzy system also benefits filtering the noisy dynamics from the model inputs and enhancing the simulation and forecast ability for the complex hydro-system.
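The reduction from 1096 to 74 iterations follows directly from the widened per-step horizon:

```python
import math

horizon_days = 1096  # total forecast horizon from the study

def iterations(step_days):
    """Serial-propagated forecasting: each iteration predicts a block of
    step_days days, which is fed back as input for the next iteration."""
    return math.ceil(horizon_days / step_days)

print(iterations(1))   # single-step-ahead: 1096 iterations
print(iterations(15))  # 15-day CWT-based blocks: 74 iterations
```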
Abdelwahab, Siddig Ibarhim; El-Setohy, Maged; Alsharqi, Abdalla; Elsanosy, Rashad; Mohammed, Umar Yagoub
2016-01-01
Smoking is responsible for the deaths of a substantial number of people and increases the likelihood of cancer and cardiovascular disease. Although data have shown high prevalence rates of cigarette smoking in Saudi Arabia, relatively little is known about its broader scope. The objectives of this study were to investigate the socio-demographic factors, patterns of use, and cessation behavior associated with smoking in the Kingdom of Saudi Arabia (KSA). The study used a cross-sectional, multi-step sampling design. Residents (N=1,497; aged 15 years and older) were recruited from seven administrative areas in southwest Saudi Arabia. A pretested questionnaire was used to obtain data on participants' cigarette smoking, including daily use, age, education, income, marital status, and employment status. The current study is the first of its kind to gather data on the cessation behavior of Saudi subjects. With the exception of the 1.5% who were female, all respondents were male. The majority of respondents were married, had a university-level education, were employed, and were younger than 34 years old. The same trends were observed in the smokers' subsample. The current prevalence of cigarette smoking was 49.2%, and 65.7% of smokers had begun smoking before 18 years of age. The mean daily use among smokers was 7.98 cigarettes (SD=4.587). More than 50% of the study sample had tried at least once to quit smoking, whereas 42% of the participating smokers never had. About 25% of the respondents were willing to consider quitting smoking in the future. Modeling of cigarette smoking suggested that the most significant independent predictors of smoking behavior were geographic area, gender, marital status, education, employment, and age. Considerable variation in smoking prevalence was noted in relation to participant sociodemographics. The findings point to the need for control and intervention programs in the Saudi community.
Carver, Charles S.; Scheier, Michael F.; Segerstrom, Suzanne C.
2010-01-01
Optimism is an individual difference variable that reflects the extent to which people hold generalized favorable expectancies for their future. Higher levels of optimism have been related prospectively to better subjective well-being in times of adversity or difficulty (i.e., controlling for previous well-being). Consistent with such findings, optimism has been linked to higher levels of engagement coping and lower levels of avoidance, or disengagement, coping. There is evidence that optimism is associated with taking proactive steps to protect one's health, whereas pessimism is associated with health-damaging behaviors. Consistent with such findings, optimism is also related to indicators of better physical health. The energetic, task-focused approach that optimists take to goals also relates to benefits in the socioeconomic world. Some evidence suggests that optimism relates to more persistence in educational efforts and to higher later income. Optimists also appear to fare better than pessimists in relationships. Although there are instances in which optimism fails to convey an advantage, and instances in which it may convey a disadvantage, those instances are relatively rare. In sum, the behavioral patterns of optimists appear to provide models of living for others to learn from. PMID:20170998
Greedy feature selection for glycan chromatography data with the generalized Dirichlet distribution
2013-01-01
Background Glycoproteins are involved in a diverse range of biochemical and biological processes. Changes in protein glycosylation are believed to occur in many diseases, particularly during cancer initiation and progression. The identification of biomarkers for human disease states is becoming increasingly important, as early detection is key to improving survival and recovery rates. To this end, the serum glycome has been proposed as a potential source of biomarkers for different types of cancers. High-throughput hydrophilic interaction liquid chromatography (HILIC) technology for glycan analysis allows for the detailed quantification of the glycan content in human serum. However, the experimental data from this analysis is compositional by nature. Compositional data are subject to a constant-sum constraint, which restricts the sample space to a simplex. Statistical analysis of glycan chromatography datasets should account for their unusual mathematical properties. As the volume of glycan HILIC data being produced increases, there is a considerable need for a framework to support appropriate statistical analysis. Proposed here is a methodology for feature selection in compositional data. The principal objective is to provide a template for the analysis of glycan chromatography data that may be used to identify potential glycan biomarkers. Results A greedy search algorithm, based on the generalized Dirichlet distribution, is carried out over the feature space to search for the set of “grouping variables” that best discriminate between known group structures in the data, modelling the compositional variables using beta distributions. The algorithm is applied to two glycan chromatography datasets. Statistical classification methods are used to test the ability of the selected features to differentiate between known groups in the data. Two well-known methods are used for comparison: correlation-based feature selection (CFS) and recursive partitioning (rpart). CFS
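The greedy forward search described above can be sketched generically; the scoring function here is a toy stand-in for the paper's generalized-Dirichlet discrimination score, and all names are illustrative:

```python
def greedy_select(features, score, k):
    """Greedy forward search: repeatedly add the feature that most
    improves the scoring function of the current subset."""
    selected = []
    for _ in range(k):
        best = max((f for f in features if f not in selected),
                   key=lambda f: score(selected + [f]))
        if score(selected + [best]) <= score(selected):
            break  # no remaining feature improves the score; stop early
        selected.append(best)
    return selected

# Toy score: reward subsets covering distinct "signal" features,
# with a small penalty per feature to discourage bloat.
signal = {"g1", "g4", "g7"}
score = lambda s: len(signal & set(s)) - 0.01 * len(s)
print(greedy_select(["g1", "g2", "g3", "g4", "g5", "g6", "g7"], score, 5))
# ['g1', 'g4', 'g7']
```

The greedy search is not guaranteed to find the globally best subset, but it reduces an exponential search over subsets to a quadratic number of score evaluations.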
Saethre, Eirik; Stadler, Jonathan
2013-03-01
As clinical trial research increasingly permeates sub-Saharan Africa, tales of purposeful HIV infection, blood theft, and other harmful outcomes are widely reported by participants and community members. Examining responses to the Microbicide Development Programme 301-a randomized, double-blind, placebo-controlled microbicide trial-we investigate the ways in which these accounts embed medical research within postcolonial contexts. We explore three popular narratives circulating around the Johannesburg trial site: malicious whites killing participants and selling their blood, greedy women enrolling in the trial solely for financial gain, and virtuous volunteers attempting to ensure their health and aid others through trial participation. We argue that trial participants and community members transform medical research into a meaningful tool that alternately affirms, debates, and challenges contemporary social relations. PMID:23674325
Kalidindi, Kiran; Bowman, Howard
2007-08-01
An important component of decision making is evaluating the expected result of a choice, using past experience. The way past experience is used to predict future rewards and punishments can have profound effects on decision making. The aim of this study is to further understand the possible role played by the ventromedial prefrontal cortex in decision making, using results from the Iowa Gambling Task (IGT). A number of theories in the literature offer potential explanations for the underlying cause of the deficit(s) found in bilateral ventromedial prefrontal lesion (VMF) patients on the IGT. An error-driven epsilon-greedy reinforcement learning method was found to produce a good match to both human normative and VMF patient groups from a number of studies. The model supports the theory that the VMF patients are less strategic (more explorative), which could be due to a working memory deficit, and are more reactive than healthy controls. This last aspect seems consistent with a 'myopia' for future consequences.
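A minimal sketch of an error-driven epsilon-greedy learner of the kind fitted to IGT choices; the learning rate, deck values, and function names are illustrative assumptions, not the study's fitted parameters:

```python
import random

def epsilon_greedy(q_values, epsilon, rng=random):
    """Pick the highest-valued deck with probability 1-epsilon;
    otherwise explore a deck uniformly at random."""
    if rng.random() < epsilon:
        return rng.randrange(len(q_values))
    return max(range(len(q_values)), key=q_values.__getitem__)

def update(q_values, deck, reward, alpha=0.1):
    """Error-driven update: move the estimate toward the observed payoff."""
    q_values[deck] += alpha * (reward - q_values[deck])

q = [0.0, 0.0, 0.0, 0.0]   # one value estimate per IGT deck
update(q, 2, 10.0)
print(q[2])  # 1.0
```

In this framing, a larger epsilon corresponds to the "less strategic, more explorative" behavior the model attributes to the VMF patient group.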
Zhu, Chuan; Zhang, Sai; Han, Guangjie; Jiang, Jinfang; Rodrigues, Joel J. P. C.
2016-01-01
A mobile sink is widely used for data collection in wireless sensor networks. It can avoid 'hot spot' problems, but the energy consumption caused by multihop transmission is still inefficient in real-time application scenarios. In this paper, a greedy scanning data collection strategy (GSDCS) is proposed, focusing on reducing routing energy consumption by shortening the total length of the routing paths. We propose that the mobile sink adjust its trajectory dynamically according to changes in the network, instead of following a predetermined trajectory or a random walk: the mobile sink determines which area has the most source nodes and then moves toward that area. The benefit of GSDCS is that most source nodes no longer need to upload sensory data over long distances. Especially in event-driven application scenarios, when the event area changes, the mobile sink can move to the new event area where most source nodes are currently located, saving energy. Analytical and simulation results show that, compared with existing work, GSDCS performs better in specific application scenarios. PMID:27608022
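The core greedy decision, steering the sink toward the area currently holding the most source nodes, can be sketched as follows; the area representation and constant-speed movement model are illustrative assumptions, not the paper's exact formulation:

```python
def densest_area(areas):
    """Return the area id holding the most active source nodes."""
    return max(areas, key=areas.get)

def sink_step(sink_pos, target_pos, speed=1.0):
    """Move the sink one step of length `speed` toward the target point."""
    dx, dy = target_pos[0] - sink_pos[0], target_pos[1] - sink_pos[1]
    dist = (dx * dx + dy * dy) ** 0.5
    if dist <= speed:
        return target_pos  # close enough: arrive this step
    return (sink_pos[0] + speed * dx / dist, sink_pos[1] + speed * dy / dist)

areas = {"A": 3, "B": 11, "C": 5}   # source-node counts per area
print(densest_area(areas))          # B
```

Re-evaluating `densest_area` each round is what lets the trajectory track a moving event area instead of following a fixed tour.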
NASA Astrophysics Data System (ADS)
Kosiel, K.; Kubacka-Traczyk, J.; Sankowska, I.; Szerling, A.; Gutowski, P.; Bugajski, M.
2012-09-01
To establish highly controllable, optimal growth conditions, multi-step interrupted-growth MBE processes were performed to deposit a series of GaAs/Al0.45Ga0.55As QCL structures. Additional calibrations of the MBE system were carried out during the designed growth interruptions. This approach was combined with a relatively low growth rate for the active-region layers in order to suppress the negative effects of elemental flux instabilities. As a result, the fabricated QCL structures yielded devices operating with a peak optical power of ~12 mW at room temperature, a better result than was obtained for comparable structures deposited at a constant growth rate with only initial calibrations performed just before epitaxy of the overall structure.
Li, Cong; Li, Hui; Sun, Jin; Zhang, XinYue; Shi, Jinsong; Xu, Zhenghong
2016-08-01
Hydroxylation of dehydroepiandrosterone (DHEA) to 3β,7α,15α-trihydroxy-5-androstene-17-one (7α,15α-diOH-DHEA) by Colletotrichum lini ST-1 is an essential step in the synthesis of many steroidal drugs, but low DHEA concentrations and low 7α,15α-diOH-DHEA production remain pressing problems in industry. In this study, a significant improvement of the 7α,15α-diOH-DHEA yield was achieved in a 5-L stirred fermenter with 15 g/L DHEA. To maintain a sufficient quantity of glucose for the bioconversion, 15 g/L glucose was fed at 18 h, which increased the 7α,15α-diOH-DHEA yield and dry cell weight by 17.7% and 30.9%, respectively. Moreover, a multi-step DHEA addition strategy was established to diminish DHEA toxicity to C. lini, raising the 7α,15α-diOH-DHEA yield to 53.0%. Further, a novel strategy integrating glucose feeding with multi-step DHEA addition increased the product yield to 66.6%, the highest reported 7α,15α-diOH-DHEA production in a 5-L stirred fermenter, while shortening the conversion course to 44 h. This strategy may provide a practical way to enhance the 7α,15α-diOH-DHEA yield in the pharmaceutical industry.
An Improved Particle Swarm Optimization for Traveling Salesman Problem
NASA Astrophysics Data System (ADS)
Liu, Xinmei; Su, Jinrong; Han, Yan
Because particle swarm optimization is prone to becoming trapped in local minima, an improved particle swarm optimization algorithm is proposed. The algorithm draws on the greedy algorithm to initialize the particle swarm, and two swarms are optimized synchronously. Crossover and mutation operators from genetic algorithms are introduced into the new algorithm to realize the sharing of information among swarms. We test the algorithm on Traveling Salesman Problem instances with 14 and 30 nodes. The results show that the algorithm escapes local minima earlier and has a high convergence speed and convergence ratio.
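One common way to seed a swarm greedily for the TSP is a nearest-neighbor tour; this sketch is a generic illustration of that idea, not necessarily the authors' exact initializer:

```python
def nearest_neighbor_tour(dist, start=0):
    """Greedy tour construction: always visit the closest unvisited city."""
    n = len(dist)
    tour, unvisited = [start], set(range(n)) - {start}
    while unvisited:
        last = tour[-1]
        nxt = min(unvisited, key=lambda c: dist[last][c])
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

# 4-city symmetric instance (distances are illustrative).
d = [[0, 2, 9, 10],
     [2, 0, 6, 4],
     [9, 6, 0, 3],
     [10, 4, 3, 0]]
print(nearest_neighbor_tour(d))  # [0, 1, 3, 2]
```

Seeding particles with such tours starts the swarm near reasonable solutions, so the crossover and mutation operators refine good tours rather than random permutations.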
Yang, Jin; Liu, Fagui; Cao, Jianneng; Wang, Liangming
2016-01-01
Mobile sinks can achieve load balancing and energy-consumption balancing across wireless sensor networks (WSNs). However, the frequent change of the paths between source nodes and the sinks caused by sink mobility introduces significant overhead in terms of energy and packet delays. To enhance the network performance of WSNs with mobile sinks (MWSNs), we present an efficient routing strategy, formulated as an optimization problem, that employs the particle swarm optimization (PSO) algorithm to build the optimal routing paths. However, conventional PSO is insufficient for solving discrete routing optimization problems. Therefore, a novel greedy discrete particle swarm optimization with memory (GMDPSO) is put forward to address this problem. In the GMDPSO, the particle position and velocity of traditional PSO are redefined for the discrete MWSN scenario, and the particle updating rule is reconsidered based on the subnetwork topology of MWSNs. In addition, by improving greedy forwarding routing, a greedy search strategy is designed to drive particles to better positions quickly, and the search history is memorized to accelerate convergence. Simulation results demonstrate that our new protocol significantly improves robustness and adapts to rapid topological changes with multiple mobile sinks, while efficiently reducing communication overhead and energy consumption. PMID:27428971
Johnson, Gary E.; Khan, Fenton; Ploskey, Gene R.; Hughes, James S.; Fischer, Eric S.
2010-08-18
The goal of the study was to optimize performance of the fixed-location hydroacoustic systems at Lookout Point Dam (LOP) and the acoustic imaging system at Cougar Dam (CGR) by determining deployment and data acquisition methods that minimized structural, electrical, and acoustic interference. The general approach was a multi-step process from mount design to final system configuration. The optimization effort resulted in successful deployments of hydroacoustic equipment at LOP and CGR.
Multi-step contrast sensitivity gauge
Quintana, Enrico C; Thompson, Kyle R; Moore, David G; Heister, Jack D; Poland, Richard W; Ellegood, John P; Hodges, George K; Prindville, James E
2014-10-14
An X-ray contrast sensitivity gauge is described herein. The contrast sensitivity gauge comprises a plurality of steps of varying thicknesses. Each step in the gauge includes a plurality of recesses of differing depths, wherein the depths are a function of the thickness of their respective step. An X-ray image of the gauge is analyzed to determine a contrast-to-noise ratio of a detector employed to generate the image.
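The contrast-to-noise ratio extracted from the gauge image can be computed from region statistics; a minimal sketch, where the mean/standard-deviation inputs are assumed to come from a separate image-region measurement step:

```python
def contrast_to_noise(signal_mean, background_mean, background_std):
    """CNR of an image feature: mean contrast against the background,
    normalized by the background noise level."""
    return abs(signal_mean - background_mean) / background_std

# Illustrative pixel statistics for one recess and its surrounding step.
print(contrast_to_noise(120.0, 100.0, 5.0))  # 4.0
```

The shallowest recess that still yields a CNR above a chosen threshold indicates the detector's contrast sensitivity at that step thickness.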
Smusz, Sabina; Mordalski, Stefan; Witek, Jagna; Rataj, Krzysztof; Kafel, Rafał; Bojarski, Andrzej J
2015-04-27
Molecular docking, despite its undeniable usefulness in computer-aided drug design protocols and the increasing sophistication of tools used in the prediction of ligand-protein interaction energies, is still connected with a problem of effective results analysis. In this study, a novel protocol for the automatic evaluation of numerous docking results is presented, being a combination of Structural Interaction Fingerprints and Spectrophores descriptors, machine-learning techniques, and multi-step results analysis. Such an approach takes into consideration the performance of a particular learning algorithm (five machine learning methods were applied), the performance of the docking algorithm itself, the variety of conformations returned from the docking experiment, and the receptor structure (homology models were constructed on five different templates). Evaluation using compounds active toward 5-HT6 and 5-HT7 receptors, as well as additional analysis carried out for beta-2 adrenergic receptor ligands, proved that the methodology is a viable tool for supporting virtual screening protocols, enabling proper discrimination between active and inactive compounds.
Reitz, Rolf D.; Choi, Dae; Liu, Yi.; RempleEwert, Bret H.; Foster, David.; Miles, Paul; Tao, Feng
2005-01-01
Low-temperature combustion concepts that utilize cooled EGR, early/retarded injection, high swirl ratios, and modest compression ratios have recently received considerable attention. To understand the combustion and, in particular, the soot formation process under these operating conditions, a modeling study was carried out using the KIVA-3V code with an improved phenomenological soot model. This multi-step soot model includes particle inception, surface growth, surface oxidation, and particle coagulation. Additional models include a piston-ring crevice model, the KH/RT spray breakup model, a droplet wall impingement model, a wall heat transfer model, and the RNG k-ε turbulence model. The Shell model was used to simulate the ignition process, and a laminar-and-turbulent characteristic time combustion model was used for the post-ignition combustion process. A low-load (IMEP=3 bar) operating condition was considered and the predicted in-cylinder pressures and heat release rates were compared with measurements. Predicted soot mass, soot particle size, soot number density distributions and other relevant quantities are presented and discussed. The effects of variable EGR rate (0-68%), injection pressure (600-1200 bar), and injection timing were studied. The predictions demonstrate that both EGR and retarded injection are beneficial for reducing NOx emissions, although the former has a more pronounced effect. Additionally, higher soot emissions are typically predicted for the higher EGR rates. However, when the EGR rate exceeds a critical value (over 65% in this study), the soot emissions decrease. Reduced soot emissions are also predicted when higher injection pressures or retarded injection timings are employed. The reduction in soot with retarded injection is less than what is observed experimentally, however.
Optimal interdiction of unreactive Markovian evaders
Hagberg, Aric; Pan, Feng; Gutfraind, Alex
2009-01-01
The interdiction problem arises in a variety of areas including military logistics, infectious disease control, and counter-terrorism. In the typical formulation of network interdiction, the task of the interdictor is to find a set of edges in a weighted network such that the removal of those edges would increase the cost to an evader of traveling on a path through the network. Our work is motivated by cases in which the evader has incomplete information about the network or lacks planning time or computational power; e.g., when authorities set up roadblocks to catch bank robbers, the criminals do not know all the roadblock locations or the best path to use for their escape. We introduce a model of network interdiction in which the motion of one or more evaders is described by Markov processes on a network and the evaders are assumed not to react to interdiction decisions. The interdiction objective is to find a set of nodes, of size at most B, that maximizes the probability of capturing the evaders. We prove that, like the classical formulation, this interdiction problem is NP-hard. But unlike the classical problem, our interdiction problem is submodular, and the optimal solution can be approximated within a factor of 1-1/e using a greedy algorithm. Additionally, we exploit submodularity to introduce a priority evaluation strategy that speeds up the greedy algorithm by orders of magnitude. Taken together, these results bring closer the goal of finding realistic solutions to the interdiction problem on global-scale networks.
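The greedy algorithm with the (1 - 1/e) guarantee operates on any monotone submodular objective; a generic sketch, using a toy path-coverage surrogate for the capture probability rather than the paper's exact objective:

```python
def greedy_submodular(ground_set, f, budget):
    """Greedy maximization of a monotone submodular set function f
    under a cardinality budget. Achieves at least a (1 - 1/e) fraction
    of the optimal value (Nemhauser-Wolsey-Fisher bound)."""
    chosen = set()
    for _ in range(budget):
        gains = {x: f(chosen | {x}) - f(chosen)
                 for x in ground_set - chosen}
        best = max(gains, key=gains.get)
        if gains[best] <= 0:
            break  # no node adds marginal value
        chosen.add(best)
    return chosen

# Toy surrogate: which evader paths each candidate node would cover.
paths_covered = {1: {"p1"}, 2: {"p2", "p3"}, 3: {"p1", "p2"}}
f = lambda s: len(set().union(*(paths_covered[x] for x in s)))
chosen = greedy_submodular({1, 2, 3}, f, budget=2)
print(chosen, f(chosen))
```

The priority-evaluation speedup mentioned in the abstract exploits the same submodularity: a node's marginal gain can only shrink as the chosen set grows, so stale gains serve as upper bounds that let most re-evaluations be skipped.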
Ant system: optimization by a colony of cooperating agents.
Dorigo, M; Maniezzo, V; Colorni, A
1996-01-01
An analogy with the way ant colonies function has suggested the definition of a new computational paradigm, which we call ant system (AS). We propose it as a viable new approach to stochastic combinatorial optimization. The main characteristics of this model are positive feedback, distributed computation, and the use of a constructive greedy heuristic. Positive feedback accounts for rapid discovery of good solutions, distributed computation avoids premature convergence, and the greedy heuristic helps find acceptable solutions in the early stages of the search process. We apply the proposed methodology to the classical traveling salesman problem (TSP), and report simulation results. We also discuss parameter selection and the early setups of the model, and compare it with tabu search and simulated annealing using TSP. To demonstrate the robustness of the approach, we show how the ant system (AS) can be applied to other optimization problems like the asymmetric traveling salesman, the quadratic assignment and the job-shop scheduling. Finally, we discuss the salient characteristics of the AS: global data-structure revision, distributed communication, and probabilistic transitions.
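The probabilistic transition rule at the heart of AS weights pheromone intensity against heuristic visibility; a minimal sketch of the standard rule, with illustrative alpha/beta values:

```python
def transition_probs(pheromone, visibility, allowed, alpha=1.0, beta=2.0):
    """AS transition rule: P(j) is proportional to tau[j]^alpha * eta[j]^beta
    over the cities still allowed (not yet visited)."""
    weights = {j: (pheromone[j] ** alpha) * (visibility[j] ** beta)
               for j in allowed}
    total = sum(weights.values())
    return {j: w / total for j, w in weights.items()}

tau = {1: 0.5, 2: 0.5}   # pheromone on the edges to cities 1 and 2
eta = {1: 1.0, 2: 0.5}   # visibility = 1/distance (illustrative values)
p = transition_probs(tau, eta, allowed={1, 2})
print(p)  # city 1 is 4x more likely: {1: 0.8, 2: 0.2}
```

With equal pheromone, the beta exponent makes the closer city dominate; as pheromone accumulates on good tours, the alpha term takes over, which is the positive-feedback mechanism the abstract describes.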
Image-driven mesh optimization
Lindstrom, P; Turk, G
2001-01-05
We describe a method of improving the appearance of a low vertex count mesh in a manner that is guided by rendered images of the original, detailed mesh. This approach is motivated by the fact that greedy simplification methods often yield meshes that are poorer than what can be represented with a given number of vertices. Our approach relies on edge swaps and vertex teleports to alter the mesh connectivity, and uses the downhill simplex method to simultaneously improve vertex positions and surface attributes. Note that this is not a simplification method--the vertex count remains the same throughout the optimization. At all stages of the optimization the changes are guided by a metric that measures the differences between rendered versions of the original model and the low vertex count mesh. This method creates meshes that are geometrically faithful to the original model. Moreover, the method takes into account more subtle aspects of a model such as surface shading or whether cracks are visible between two interpenetrating parts of the model.
Approximating random quantum optimization problems
NASA Astrophysics Data System (ADS)
Hsu, B.; Laumann, C. R.; Läuchli, A. M.; Moessner, R.; Sondhi, S. L.
2013-06-01
We report a cluster of results regarding the difficulty of finding approximate ground states to typical instances of the quantum satisfiability problem k-body quantum satisfiability (k-QSAT) on large random graphs. As an approximation strategy, we optimize the solution space over “classical” product states, which in turn introduces a novel autonomous classical optimization problem, PSAT, over a space of continuous degrees of freedom rather than discrete bits. Our central results are (i) the derivation of a set of bounds and approximations in various limits of the problem, several of which we believe may be amenable to a rigorous treatment; (ii) a demonstration that an approximation based on a greedy algorithm borrowed from the study of frustrated magnetism performs well over a wide range in parameter space, and its performance reflects the structure of the solution space of random k-QSAT. Simulated annealing exhibits metastability in similar “hard” regions of parameter space; and (iii) a generalization of belief propagation algorithms introduced for classical problems to the case of continuous spins. This yields both approximate solutions, as well as insights into the free energy “landscape” of the approximation problem, including a so-called dynamical transition near the satisfiability threshold. Taken together, these results allow us to elucidate the phase diagram of random k-QSAT in a two-dimensional energy-density-clause-density space.
A Heuristic Optimal Discrete Bit Allocation Algorithm for Margin Maximization in DMT Systems
NASA Astrophysics Data System (ADS)
Zhu, Li-Ping; Yao, Yan; Zhou, Shi-Dong; Dong, Shi-Wei
2007-12-01
A heuristic optimal discrete bit allocation algorithm is proposed for solving the margin maximization problem in discrete multitone (DMT) systems. Starting from an initial equal-power-assignment bit distribution, the proposed algorithm employs a multistaged bit rate allocation scheme to meet the target rate. If the total bit rate is far from the target rate, a multiple-bits loading procedure is used to obtain a bit allocation close to the target rate. When close to the target rate, a parallel bit-loading procedure is used to achieve the target rate; this is computationally more efficient than the conventional greedy bit-loading algorithm. Finally, the target bit distribution is checked: if it is efficient, it is also the optimal solution; otherwise, the optimal bit distribution can be obtained with only a few bit swaps. Simulation results using the standard asymmetric digital subscriber line (ADSL) test loops show that the proposed algorithm is efficient for practical DMT transmissions.
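The conventional greedy bit-loading baseline mentioned above assigns one bit at a time to the subchannel where it costs the least incremental power; a minimal sketch under a standard incremental-power model (the gap factor gamma and channel gains are illustrative assumptions):

```python
def greedy_bit_loading(gains, target_bits, gamma=1.0):
    """Classic greedy loading: repeatedly add the next bit on the
    subchannel with the smallest incremental power cost."""
    bits = [0] * len(gains)
    # Incremental power to go from b to b+1 bits on channel i:
    # dP = gamma * (2**(b+1) - 2**b) / g_i = gamma * 2**b / g_i
    for _ in range(target_bits):
        i = min(range(len(gains)),
                key=lambda i: gamma * (2 ** bits[i]) / gains[i])
        bits[i] += 1
    return bits

# Two subchannels; the first has 4x the gain, so it absorbs all bits
# until its per-bit cost catches up with the weaker channel's.
print(greedy_bit_loading([4.0, 1.0], target_bits=3))  # [3, 0]
```

Because this baseline performs one full search per bit, its cost grows with the target rate, which is why the paper's multiple-bits and parallel loading stages are cheaper.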
A variable multi-step method for transient heat conduction
NASA Technical Reports Server (NTRS)
Smolinski, Patrick
1991-01-01
A variable explicit time integration algorithm is developed for unsteady diffusion problems. The algorithm uses nodal partitioning and allows the nodal groups to be updated with different time steps. The stability of the algorithm is analyzed using energy methods and critical time steps are found in terms of element eigenvalues with no restrictions on element types. Several numerical examples are given to illustrate the accuracy of the method.
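The critical time step for an explicit scheme is set by the element eigenvalues; for 1-D heat conduction the familiar FTCS stability limit gives a concrete feel for why different nodal groups can advance with different steps. The formula below is the textbook 1-D limit, used as an illustration, not the paper's general eigenvalue bound:

```python
def critical_time_step(dx, alpha):
    """Explicit 1-D heat conduction stability limit: dt <= dx**2 / (2*alpha),
    where dx is the element size and alpha the thermal diffusivity."""
    return dx * dx / (2.0 * alpha)

# Nodal partitioning: each node group advances with its own step.
fine_dt = critical_time_step(0.01, 1e-4)    # finely meshed group
coarse_dt = critical_time_step(0.04, 1e-4)  # coarsely meshed group
print(coarse_dt / fine_dt)  # 16.0 (coarse nodes can take 16x larger steps)
```

Letting the coarse group take larger steps, as the abstract's partitioned algorithm does, avoids forcing the whole mesh to march at the smallest element's critical step.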
Information processing in multi-step signaling pathways
NASA Astrophysics Data System (ADS)
Ganesan, Ambhi; Hamidzadeh, Archer; Zhang, Jin; Levchenko, Andre
Information processing in complex signaling networks is limited by a high degree of variability in the abundance and activity of biochemical reactions (biological noise) operating in living cells. In this context, it is particularly surprising that many signaling pathways found in eukaryotic cells are composed of long chains of biochemical reactions, which are expected to be subject to accumulating noise and delayed signal processing. Here, we challenge the notion that signaling pathways are insulated chains, and rather view them as parts of extensively branched networks, which can benefit from a low degree of interference between signaling components. We further establish conditions under which this pathway organization would limit noise accumulation, and provide evidence for this type of signal processing in an experimental model of a calcium-activated MAPK cascade. These results address the long-standing problem of diverse organization and structure of signaling networks in live cells.
Multi-step dielectrophoresis for separation of particles.
Aldaeus, Fredrik; Lin, Yuan; Amberg, Gustav; Roeraade, Johan
2006-10-27
A new concept for separation of particles based on repetitive dielectrophoretic trapping and release in a flow system is proposed. Calculations using the finite element method have been performed to envision the particle behavior and the separation effectiveness of the proposed method. As a model system, polystyrene beads in deionized water and a micro-flow channel with arrays of interdigitated electrodes have been used. Results show that the resolution increases as a direct function of the number of trap-and-release steps, and that a difference in size will have a larger influence on the separation than a difference in other dielectrophoretic properties. About 200 trap-and-release steps would be required to separate particles with a size difference of 0.2%. The enhanced separation power of dielectrophoresis with multiple steps could be of great importance, not only for fractionation of particles with small differences in size, but also for measuring changes in surface conductivity, or for separations based on combinations of difference in size and dielectric properties.
An Automated, Multi-Step Monte Carlo Burnup Code System.
TRELLUE, HOLLY R.
2003-07-14
Version 02 MONTEBURNS Version 2 calculates coupled neutronic/isotopic results for nuclear systems and produces a large number of criticality and burnup results based on various material feed/removal specifications, power(s), and time intervals. MONTEBURNS is a fully automated tool that links the LANL MCNP Monte Carlo transport code with a radioactive decay and burnup code. Highlights on changes to Version 2 are listed in the transmittal letter. Along with other minor improvements in MONTEBURNS Version 2, the option was added to use CINDER90 instead of ORIGEN2 as the depletion/decay part of the system. CINDER90 is a multi-group depletion code developed at LANL and is not currently available from RSICC. This MONTEBURNS release was tested with various combinations of CCC-715/MCNPX 2.4.0, CCC-710/MCNP5, CCC-700/MCNP4C, CCC-371/ORIGEN2.2, ORIGEN2.1 and CINDER90. Perl is required software and is not included in this distribution. MCNP, ORIGEN2, and CINDER90 are not included.
Improving IMRT-plan quality with MLC leaf position refinement post plan optimization
Niu, Ying; Zhang, Guowei; Berman, Barry L.; Parke, William C.; Yi, Byongyong; Yu, Cedric X.
2012-08-15
Purpose: In intensity-modulated radiation therapy (IMRT) planning, reducing the pencil-beam size may lead to a significant improvement in dose conformity, but also increase the time needed for the dose calculation and plan optimization. The authors develop and evaluate a postoptimization refinement (POpR) method, which makes fine adjustments to the multileaf collimator (MLC) leaf positions after plan optimization, enhancing the spatial precision and improving the plan quality without a significant impact on the computational burden. Methods: The authors' POpR method is implemented using a commercial treatment planning system based on direct aperture optimization. After an IMRT plan is optimized using pencil beams with regular pencil-beam step size, a greedy search is conducted by looping through all of the involved MLC leaves to see if moving the MLC leaf in or out by half of a pencil-beam step size will improve the objective function value. The half-sized pencil beams, which are used for updating dose distribution in the greedy search, are derived from the existing full-sized pencil beams without need for further pencil-beam dose calculations. A benchmark phantom case and a head-and-neck (HN) case are studied for testing the authors' POpR method. Results: Using a benchmark phantom and a HN case, the authors have verified that their POpR method can be an efficient technique in the IMRT planning process. Effectiveness of POpR is confirmed by noting significant improvements in objective function values. Dosimetric benefits of POpR are comparable to those of using a finer pencil-beam size from the optimization start, but with far less computation and time. Conclusions: The POpR is a feasible and practical method to significantly improve IMRT-plan quality without compromising the planning efficiency.
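The greedy half-step search described above amounts to a coordinate-wise local refinement. A minimal sketch with a toy objective standing in for the plan objective function (leaf positions become plain coordinates; no dose calculation or pencil-beam model is included):

```python
def greedy_half_step_refine(x, objective, step, sweeps=3):
    """Greedy local refinement in the spirit of POpR: after coarse optimization
    at resolution `step`, loop over each coordinate (a stand-in for an MLC
    leaf position) and accept a +/- step/2 move whenever it lowers the
    objective. Toy sketch, not the clinical implementation."""
    x = list(x)
    best = objective(x)
    for _ in range(sweeps):
        improved = False
        for i in range(len(x)):
            for delta in (+step / 2, -step / 2):
                trial = x[:]
                trial[i] += delta
                val = objective(trial)
                if val < best:
                    x, best, improved = trial, val, True
        if not improved:
            break
    return x, best

# Quadratic toy objective with optimum at (0.4, -0.4); coarse grid step 1.0:
obj = lambda v: (v[0] - 0.4) ** 2 + (v[1] + 0.4) ** 2
x, best = greedy_half_step_refine([0.0, 0.0], obj, step=1.0)
print(x, best)   # refines onto the half-step grid point nearest the optimum
```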
The optimization of the orbital Hohmann transfer
NASA Astrophysics Data System (ADS)
El Mabsout, Badaoui; Kamel, Osman M.; Soliman, Adel S.
2009-10-01
There are four distinct bi-impulsive configurations for the generalized Hohmann orbit transfer. In this case the terminal orbits as well as the transfer orbit are elliptic and coplanar. The elements a1, e1 of the initial orbit and the semi-major axis a2 of the terminal orbit are uniquely given quantities. The optimization is carried out with respect to the independent parameter eT, the eccentricity of the transfer orbit. We determine the minimum rocket-fuel expenditure by applying the ordinary calculus condition for minimization to |ΔVA|+|ΔVB|=S. We describe the steps of the optimization procedure in detail. We constructed the variation table of S(eT), which proves that S(eT) is a decreasing function of eT in the admissible interval [e,e]. Our analysis leads to the fact that e2=1 for eT=e, i.e. the final orbit is a parabolic trajectory.
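For reference, the degenerate circular-to-circular case of this transfer admits a closed form. A sketch using the vis-viva equation (these are the classical Hohmann formulas; the generalized elliptic problem treated above reduces to this when e1 = e2 = 0):

```python
import math

MU_EARTH = 3.986004418e14   # Earth's gravitational parameter, m^3/s^2

def hohmann_dv(r1, r2, mu=MU_EARTH):
    """Total delta-v of the classical circular-to-circular Hohmann transfer.
    Speeds follow from the vis-viva equation v^2 = mu*(2/r - 1/a), with the
    transfer ellipse's semi-major axis a = (r1 + r2)/2."""
    dv1 = math.sqrt(mu / r1) * (math.sqrt(2 * r2 / (r1 + r2)) - 1)
    dv2 = math.sqrt(mu / r2) * (1 - math.sqrt(2 * r1 / (r1 + r2)))
    return dv1 + dv2

total = hohmann_dv(6678e3, 42164e3)   # ~300 km LEO to geostationary radius
print(round(total), "m/s")            # roughly 3.9 km/s in total
```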
A global optimization paradigm based on change of measures
Sarkar, Saikat; Roy, Debasish; Vasu, Ram Mohan
2015-01-01
A global optimization framework, COMBEO (Change Of Measure Based Evolutionary Optimization), is proposed. An important aspect in the development is a set of derivative-free additive directional terms, obtainable through a change of measures en route to the imposition of any stipulated conditions aimed at driving the realized design variables (particles) to the global optimum. The generalized setting offered by the new approach also enables several basic ideas, used with other global search methods such as the particle swarm or the differential evolution, to be rationally incorporated in the proposed set-up via a change of measures. The global search may be further aided by imparting to the directional update terms additional layers of random perturbations such as ‘scrambling’ and ‘selection’. Depending on the precise choice of the optimality conditions and the extent of random perturbation, the search can be readily rendered either greedy or more exploratory. As numerically demonstrated, the new proposal appears to provide for a more rational, more accurate and, in some cases, a faster alternative to many available evolutionary optimization schemes. PMID:26587268
A mathematical programming approach to stochastic and dynamic optimization problems
Bertsimas, D.
1994-12-31
We propose three ideas for constructing optimal or near-optimal policies: (1) for systems for which we have an exact characterization of the performance space we outline an adaptive greedy algorithm that gives rise to indexing policies (we illustrate this technique in the context of indexable systems); (2) we use integer programming to construct policies from the underlying descriptions of the performance space (we illustrate this technique in the context of polling systems); (3) we use linear control over polyhedral regions to solve deterministic versions for this class of problems. This approach gives interesting insights for the structure of the optimal policy (we illustrate this idea in the context of multiclass queueing networks). The unifying theme in the paper is the thesis that better formulations lead to deeper understanding and better solution methods. Overall the proposed approach for stochastic and dynamic optimization parallels efforts of the mathematical programming community in the last fifteen years to develop sharper formulations (polyhedral combinatorics and more recently nonlinear relaxations) and leads to new insights ranging from a complete characterization and new algorithms for indexable systems to tight lower bounds and new algorithms with provable a posteriori guarantees for their suboptimality for polling systems, multiclass queueing and loss networks.
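A simple static instance of an indexing policy is Smith's rule for minimizing weighted completion time on a single machine; the sketch below (an illustration, not from the paper) shows the index-ordering idea in its most elementary form:

```python
def smith_rule(jobs):
    """Index (priority) rule: sort jobs by weight/processing-time, descending.
    Provably optimal for the deterministic problem 1 || sum w_j C_j; a static
    cousin of the indexing policies derived for stochastic systems."""
    return sorted(jobs, key=lambda j: j[1] / j[0], reverse=True)

def weighted_completion(schedule):
    """Sum of w_j * C_j when jobs run back-to-back in the given order."""
    t = total = 0
    for p, w in schedule:
        t += p
        total += w * t
    return total

jobs = [(3, 1), (1, 4), (2, 2)]   # (processing_time, weight)
best = smith_rule(jobs)
print(best, weighted_completion(best))   # [(1, 4), (2, 2), (3, 1)] 16
```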
Optimizing spread dynamics on graphs by message passing
NASA Astrophysics Data System (ADS)
Altarelli, F.; Braunstein, A.; Dall'Asta, L.; Zecchina, R.
2013-09-01
Cascade processes are responsible for many important phenomena in natural and social sciences. Simple models of irreversible dynamics on graphs, in which nodes activate depending on the state of their neighbors, have been successfully applied to describe cascades in a large variety of contexts. Over the past decades, much effort has been devoted to understanding the typical behavior of the cascades arising from initial conditions extracted at random from some given ensemble. However, the problem of optimizing the trajectory of the system, i.e. of identifying appropriate initial conditions to maximize (or minimize) the final number of active nodes, is still considered to be practically intractable, with the only exception being models that satisfy a sort of diminishing returns property called submodularity. Submodular models can be approximately solved by means of greedy strategies, but by definition they lack cooperative characteristics which are fundamental in many real systems. Here we introduce an efficient algorithm based on statistical physics for the optimization of trajectories in cascade processes on graphs. We show that for a wide class of irreversible dynamics, even in the absence of submodularity, the spread optimization problem can be solved efficiently on large networks. Analytic and algorithmic results on random graphs are complemented by the solution of the spread maximization problem on a real-world network (the Epinions consumer reviews network).
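The submodular baseline mentioned above is the Monte Carlo greedy algorithm for influence maximization. A minimal sketch under the independent-cascade model (the graph, activation probability, and sample counts are illustrative):

```python
import random

def simulate_ic(graph, seeds, p, rng):
    """One run of the independent-cascade model: each newly active node gets a
    single chance to activate each inactive neighbour with probability p."""
    active, frontier = set(seeds), list(seeds)
    while frontier:
        nxt = []
        for u in frontier:
            for v in graph.get(u, []):
                if v not in active and rng.random() < p:
                    active.add(v)
                    nxt.append(v)
        frontier = nxt
    return len(active)

def greedy_seeds(graph, k, p=0.3, runs=400, seed=0):
    """Greedy seed selection (the submodular baseline): repeatedly add the
    node with the largest Monte Carlo estimate of marginal spread."""
    rng = random.Random(seed)
    chosen = []
    for _ in range(k):
        best_node, best_gain = None, -1.0
        for v in graph:
            if v in chosen:
                continue
            est = sum(simulate_ic(graph, chosen + [v], p, rng)
                      for _ in range(runs)) / runs
            if est > best_gain:
                best_node, best_gain = v, est
        chosen.append(best_node)
    return chosen

# Toy network: a hub (node 0) with leaves plus a pendant pair, undirected:
g = {0: [1, 2, 3, 4], 1: [0], 2: [0], 3: [0], 4: [0, 5], 5: [4]}
print(greedy_seeds(g, 1))   # the hub is the natural first seed
```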
Optimal stimulus scheduling for active estimation of evoked brain networks
NASA Astrophysics Data System (ADS)
Kafashan, MohammadMehdi; Ching, ShiNung
2015-12-01
Objective. We consider the problem of optimal probing to learn connections in an evoked dynamic network. Such a network, in which each edge measures an input-output relationship between sites in sensor/actuator-space, is relevant to emerging applications in neural mapping and neural connectivity estimation. Approach. We show that the problem of scheduling nodes to a probe (i.e., stimulate) amounts to a problem of optimal sensor scheduling. Main results. By formulating the evoked network in state-space, we show that the solution to the greedy probing strategy has a convenient form and, under certain conditions, is optimal over a finite horizon. We adopt an expectation maximization technique to update the state-space parameters in an online fashion and demonstrate the efficacy of the overall approach in a series of detailed numerical examples. Significance. The proposed method provides a principled means to actively probe time-varying connections in neuronal networks. The overall method can be implemented in real time and is particularly well-suited to applications in stimulation-based cortical mapping in which the underlying network dynamics are changing over time.
NASA Astrophysics Data System (ADS)
Guthier, C. V.; Aschenbrenner, K. P.; Müller, R.; Polster, L.; Cormack, R. A.; Hesser, J. W.
2016-08-01
This paper demonstrates that optimization strategies derived from the field of compressed sensing (CS) improve computational performance in inverse treatment planning (ITP) for high-dose-rate (HDR) brachytherapy. Following an approach applied to low-dose-rate brachytherapy, we developed a reformulation of the ITP problem with the same mathematical structure as standard CS problems. Two greedy methods, derived from hard thresholding and subspace pursuit, are presented and their performance is compared to state-of-the-art ITP solvers. Applied to clinical prostate brachytherapy plans, they achieve a speed-up by a factor of 56-350 compared to state-of-the-art methods. Based on a Wilcoxon signed rank test, the novel method statistically significantly decreases the final objective function value (p < 0.01). The optimization times were below one second, and thus planning can be considered real-time capable. The novel CS-inspired strategy enables real-time ITP for HDR brachytherapy including catheter optimization. The generated plans are either clinically equivalent or show a better performance with respect to dosimetric measures.
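Of the two greedy families named above, hard thresholding is the easier to sketch. A minimal iterative-hard-thresholding loop on a toy system (the identity sensing matrix below is only a stand-in; in ITP the matrix would be a dose-influence matrix, and the sparse vector the dwell-time solution):

```python
def hard_threshold(x, k):
    """Keep the k largest-magnitude entries of x, zero the rest."""
    idx = sorted(range(len(x)), key=lambda i: abs(x[i]), reverse=True)[:k]
    keep = set(idx)
    return [xi if i in keep else 0.0 for i, xi in enumerate(x)]

def iht(A, y, k, iters=50, step=1.0):
    """Iterative hard thresholding: x <- H_k(x + step * A^T (y - A x)).
    Pure-Python matvec sketch of the greedy CS family; the subspace-pursuit
    variant mentioned above is omitted for brevity."""
    m, n = len(A), len(A[0])
    x = [0.0] * n
    for _ in range(iters):
        r = [y[i] - sum(A[i][j] * x[j] for j in range(n)) for i in range(m)]
        g = [sum(A[i][j] * r[i] for i in range(m)) for j in range(n)]
        x = hard_threshold([x[j] + step * g[j] for j in range(n)], k)
    return x

# Trivially well-conditioned toy system with a 2-sparse signal:
A = [[1.0 if i == j else 0.0 for j in range(5)] for i in range(5)]
y = [0.0, 3.0, 0.0, -1.5, 0.0]
print(iht(A, y, k=2))   # recovers [0.0, 3.0, 0.0, -1.5, 0.0]
```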
A nested partitions framework for beam angle optimization in intensity-modulated radiation therapy.
D'Souza, Warren D; Zhang, Hao H; Nazareth, Daryl P; Shi, Leyuan; Meyer, Robert R
2008-06-21
Coupling beam angle optimization with dose optimization in intensity-modulated radiation therapy (IMRT) increases the size and complexity of an already large-scale combinatorial optimization problem. We have developed a novel algorithm, nested partitions (NP), that is capable of finding suitable beam angle sets by guiding the dose optimization process. NP is a metaheuristic that is flexible enough to guide the search of a heuristic or deterministic dose optimization algorithm. The NP method adaptively samples from the entire feasible region, or search space, and coordinates the sampling effort with a systematic partitioning of the feasible region at successive iterations, concentrating the search in promising subsets. We used a 'warm-start' approach by initiating NP with beam angle samples derived from an integer programming (IP) model. In this study, we describe our implementation of the NP framework with a commercial optimization algorithm. We compared the NP framework with equi-spaced beam angle selection, the IP method, greedy heuristic and random sampling heuristic methods. The results of the NP approach were evaluated using two clinical cases (head and neck and whole pelvis) involving the primary tumor and nodal volumes. Our results show that NP produces better quality solutions than the alternative considered methods. PMID:18523351
NASA Technical Reports Server (NTRS)
Laird, Philip
1992-01-01
We distinguish static and dynamic optimization of programs: whereas static optimization modifies a program before runtime and is based only on its syntactical structure, dynamic optimization is based on the statistical properties of the input source and examples of program execution. Explanation-based generalization is a commonly used dynamic optimization method, but its effectiveness as a speedup-learning method is limited, in part because it fails to separate the learning process from the program transformation process. This paper describes a dynamic optimization technique called a learn-optimize cycle that first uses a learning element to uncover predictable patterns in the program execution and then uses an optimization algorithm to map these patterns into beneficial transformations. The technique has been used successfully for dynamic optimization of pure Prolog.
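The learn-optimize cycle can be caricatured as profiling followed by specialization. A hedged sketch (the frequency-counting "learner" and cached fast path below are illustrative, not the paper's Prolog machinery):

```python
from collections import Counter

def learn_optimize(fn, observed_inputs, hot_k=1):
    """Learn-optimize cycle sketch: the learning element records which inputs
    recur during execution; the optimization element then builds a cached fast
    path for the hot cases, leaving cold inputs on the original code path."""
    freq = Counter(observed_inputs)                        # learn: observe the input source
    hot = {x: fn(x) for x, _ in freq.most_common(hot_k)}   # optimize: precompute hot cases
    def optimized(x):
        return hot[x] if x in hot else fn(x)
    return optimized

f = lambda n: sum(range(n))          # stand-in for an expensive program
fast = learn_optimize(f, [10, 10, 10, 3])
print(fast(10), fast(3))             # 45 3
```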
Multiple Object Tracking Using K-Shortest Paths Optimization.
Berclaz, Jérôme; Fleuret, François; Türetken, Engin; Fua, Pascal
2011-09-01
Multi-object tracking can be achieved by detecting objects in individual frames and then linking detections across frames. Such an approach can be made very robust to the occasional detection failure: If an object is not detected in a frame but is in previous and following ones, a correct trajectory will nevertheless be produced. By contrast, a false-positive detection in a few frames will be ignored. However, when dealing with a multiple target problem, the linking step results in a difficult optimization problem in the space of all possible families of trajectories. This is usually dealt with by sampling or greedy search based on variants of Dynamic Programming which can easily miss the global optimum. In this paper, we show that reformulating that step as a constrained flow optimization results in a convex problem. We take advantage of its particular structure to solve it using the k-shortest paths algorithm, which is very fast. This new approach is far simpler formally and algorithmically than existing techniques and lets us demonstrate excellent performance in two very different contexts.
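The per-track dynamic-programming baseline that the k-shortest-paths formulation generalizes can be sketched as a min-cost path through per-frame detections (single target, illustrative squared-distance costs):

```python
def link_detections(frames, cost):
    """Dynamic-programming linking of one trajectory across frames:
    frames[t] is a list of detection positions at time t; cost(a, b) scores a
    transition. This is the single-target DP baseline; the constrained-flow
    formulation above solves the multi-target problem globally."""
    # best[t][i] = (cumulative cost, backpointer) ending at detection i, frame t
    best = [[(0.0, None)] * len(frames[0])]
    for t in range(1, len(frames)):
        layer = []
        for det in frames[t]:
            prev = min(range(len(frames[t - 1])),
                       key=lambda j: best[-1][j][0] + cost(frames[t - 1][j], det))
            layer.append((best[-1][prev][0] + cost(frames[t - 1][prev], det), prev))
        best.append(layer)
    # Backtrack from the cheapest endpoint
    i = min(range(len(best[-1])), key=lambda j: best[-1][j][0])
    track = [i]
    for t in range(len(frames) - 1, 0, -1):
        i = best[t][i][1]
        track.append(i)
    return list(reversed(track))

frames = [[(0, 0), (5, 5)], [(1, 0), (9, 9)], [(2, 0)]]
dist2 = lambda a, b: (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2
print(link_detections(frames, dist2))   # indices of the smooth track: [0, 0, 0]
```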
NASA Astrophysics Data System (ADS)
Shaltev, M.
2016-02-01
The search for continuous gravitational waves in a wide parameter space at a fixed computing cost is most efficiently done with semicoherent methods, e.g., StackSlide, due to the prohibitive computing cost of the fully coherent search strategies. Prix and Shaltev [Phys. Rev. D 85, 084010 (2012)] have developed a semianalytic method for finding optimal StackSlide parameters at a fixed computing cost under ideal data conditions, i.e., gapless data and a constant noise floor. In this work, we consider more realistic conditions by allowing for gaps in the data and changes in the noise level. We show how the sensitivity optimization can be decoupled from the data selection problem. To find optimal semicoherent search parameters, we apply a numerical optimization using as an example the semicoherent StackSlide search. We also describe three different data selection algorithms. Thus, the outcome of the numerical optimization consists of the optimal search parameters and the selected data set. We first test the numerical optimization procedure under ideal conditions and show that we can reproduce the results of the analytical method. Then we gradually relax the conditions on the data and find that a compact data selection algorithm yields higher sensitivity compared to a greedy data selection procedure.
NASA Astrophysics Data System (ADS)
Bai, Peng; Jeon, Mi Young; Ren, Limin; Knight, Chris; Deem, Michael W.; Tsapatsis, Michael; Siepmann, J. Ilja
2015-01-01
Zeolites play numerous important roles in modern petroleum refineries and have the potential to advance the production of fuels and chemical feedstocks from renewable resources. The performance of a zeolite as separation medium and catalyst depends on its framework structure. To date, 213 framework types have been synthesized and >330,000 thermodynamically accessible zeolite structures have been predicted. Hence, identification of optimal zeolites for a given application from the large pool of candidate structures is attractive for accelerating the pace of materials discovery. Here we identify, through a large-scale, multi-step computational screening process, promising zeolite structures for two energy-related applications: the purification of ethanol from fermentation broths and the hydroisomerization of alkanes with 18-30 carbon atoms encountered in petroleum refining. These results demonstrate that predictive modelling and data-driven science can now be applied to solve some of the most challenging separation problems involving highly non-ideal mixtures and highly articulated compounds.
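The multi-step screening idea, cheap filters first and expensive evaluations only on survivors, can be sketched generically (the descriptors, thresholds, and structure names below are hypothetical, not from the study):

```python
def multistep_screen(candidates, stages):
    """Multi-step screening sketch: `stages` is an ordered list of
    (predicate, label) pairs, assumed sorted by increasing evaluation cost,
    so costly stages only run on survivors of the cheap ones."""
    pool = list(candidates)
    for keep, label in stages:
        pool = [c for c in pool if keep(c)]
    return pool

# Hypothetical zeolite descriptors: (name, pore diameter in angstroms, uptake score)
zeolites = [("Z1", 4.0, 0.2), ("Z2", 5.5, 0.9), ("Z3", 6.1, 0.4), ("Z4", 5.8, 0.8)]
stages = [
    (lambda z: z[1] > 5.0, "pore-size filter"),        # cheap geometric screen
    (lambda z: z[2] > 0.5, "simulated-uptake screen"), # costly simulation proxy
]
print([z[0] for z in multistep_screen(zeolites, stages)])   # ['Z2', 'Z4']
```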
Hoffmann, Thomas J.; Zhan, Yiping; Kvale, Mark N.; Hesselson, Stephanie E.; Gollub, Jeremy; Iribarren, Carlos; Lu, Yontao; Mei, Gangwu; Purdy, Matthew M.; Quesenberry, Charles; Rowell, Sarah; Shapero, Michael H.; Smethurst, David; Somkin, Carol P.; Van den Eeden, Stephen K.; Walter, Larry; Webster, Teresa; Whitmer, Rachel A.; Finn, Andrea; Schaefer, Catherine; Kwok, Pui-Yan; Risch, Neil
2012-01-01
Four custom Axiom genotyping arrays were designed for a genome-wide association (GWA) study of 100,000 participants from the Kaiser Permanente Research Program on Genes, Environment and Health. The array optimized for individuals of European race/ethnicity was previously described. Here we detail the development of three additional microarrays optimized for individuals of East Asian, African American, and Latino race/ethnicity. For these arrays, we decreased redundancy of high-performing SNPs to increase SNP capacity. The East Asian array was designed using greedy pairwise SNP selection. However, removing SNPs from the target set based on imputation coverage is more efficient than pairwise tagging. Therefore, we developed a novel hybrid SNP selection method for the African American and Latino arrays utilizing rounds of greedy pairwise SNP selection, followed by removal from the target set of SNPs covered by imputation. The arrays provide excellent genome-wide coverage and are valuable additions for large-scale GWA studies. PMID:21903159
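Greedy pairwise tagging is structurally a set-cover greedy. A minimal sketch (the SNP names and coverage sets are hypothetical; real selection would score pairs by an r² threshold, and the hybrid method above additionally removes imputation-covered targets between rounds):

```python
def greedy_tag_selection(coverage, targets):
    """Greedy pairwise tagging as set cover: `coverage` maps each candidate
    tag SNP to the set of target SNPs it captures; repeatedly pick the tag
    covering the most still-uncovered targets."""
    uncovered = set(targets)
    chosen = []
    while uncovered:
        tag = max(coverage, key=lambda t: len(coverage[t] & uncovered))
        gain = coverage[tag] & uncovered
        if not gain:
            break                      # remaining targets are untaggable
        chosen.append(tag)
        uncovered -= gain
    return chosen

cov = {"rsA": {1, 2, 3}, "rsB": {3, 4}, "rsC": {4, 5}}
print(greedy_tag_selection(cov, {1, 2, 3, 4, 5}))   # ['rsA', 'rsC']
```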
A Bayesian optimization approach for wind farm power maximization
NASA Astrophysics Data System (ADS)
Park, Jinkyoo; Law, Kincho H.
2015-03-01
The objective of this study is to develop a model-free optimization algorithm to improve the total wind farm power production in a cooperative game framework. Conventionally, for a given wind condition, an individual wind turbine maximizes its own power production without taking into consideration the conditions of other wind turbines. Under this greedy control strategy, the wake formed by the upstream wind turbine, due to the reduced wind speed and the increased turbulence intensity inside the wake, would affect and lower the power productions of the downstream wind turbines. To increase the overall wind farm power production, researchers have proposed cooperative wind turbine control approaches to coordinate the actions that mitigate the wake interference among the wind turbines and thus increase the total wind farm power production. This study explores the use of a data-driven optimization approach to identify the optimum coordinated control actions in real time using a limited amount of data. Specifically, we propose the Bayesian Ascent (BA) method, which combines the strengths of Bayesian optimization and trust region optimization algorithms. Using Gaussian Process regression, BA requires only a small number of data points to model the complex target system. Furthermore, due to the trust-region constraint on the sampling procedure, BA tends to increase the target value and converge toward the optimum. Simulation studies using analytical functions show that the BA method can achieve an almost monotone increase in a target value with rapid convergence. BA is also implemented and tested in a laboratory setting to maximize the total power using two scaled wind turbine models.
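The trust-region half of the BA idea can be sketched without the Gaussian-process surrogate. The sketch below substitutes uniform random sampling for the GP-guided proposal, so it captures only the sampling-constraint flavor (improvements-only acceptance inside an adaptive region), not the data efficiency of the real method:

```python
import random

def trust_region_ascent(f, x0, radius=1.0, iters=200, seed=1):
    """Stripped-down trust-region ascent: propose points only inside a region
    around the incumbent and accept only improvements, so the target value is
    monotone non-decreasing. BA would pick the proposal with a GP surrogate;
    here we sample uniformly to keep the sketch dependency-free."""
    rng = random.Random(seed)
    x, best = list(x0), f(x0)
    for _ in range(iters):
        trial = [xi + rng.uniform(-radius, radius) for xi in x]
        val = f(trial)
        if val > best:                            # accept only improvements
            x, best = trial, val
            radius = min(radius * 1.2, 2.0)       # expand on success
        else:
            radius = max(radius * 0.95, 1e-3)     # shrink on failure
    return x, best

f = lambda v: -(v[0] - 1.0) ** 2 - (v[1] + 2.0) ** 2   # maximum at (1, -2)
x, best = trust_region_ascent(f, [0.0, 0.0])
print(x, best)
```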
Carver, Charles S; Scheier, Michael F
2014-06-01
Optimism is a cognitive construct (expectancies regarding future outcomes) that also relates to motivation: optimistic people exert effort, whereas pessimistic people disengage from effort. Study of optimism began largely in health contexts, finding positive associations between optimism and markers of better psychological and physical health. Physical health effects likely occur through differences in both health-promoting behaviors and physiological concomitants of coping. Recently, the scientific study of optimism has extended to the realm of social relations: new evidence indicates that optimists have better social connections, partly because they work harder at them. In this review, we examine the myriad ways this trait can benefit an individual, and our current understanding of the biological basis of optimism.
General optimization technique for high-quality community detection in complex networks
NASA Astrophysics Data System (ADS)
Sobolevsky, Stanislav; Campari, Riccardo; Belyi, Alexander; Ratti, Carlo
2014-07-01
Recent years have witnessed the development of a large body of algorithms for community detection in complex networks. Most of them are based upon the optimization of objective functions, among which modularity is the most common, though a number of alternatives have been suggested in the scientific literature. We present here an effective general search strategy for the optimization of various objective functions for community detection purposes. When applied to modularity, on both real-world and synthetic networks, our search strategy substantially outperforms the best existing algorithms in terms of final scores of the objective function. In terms of execution time for modularity optimization, this approach also outperforms most of the alternatives in the literature, with the exception of the fastest but usually less efficient greedy algorithms. Networks of up to 30,000 nodes can be analyzed in time spans ranging from minutes to a few hours on average workstations, making our approach readily applicable to tasks not limited by strict time constraints but requiring the quality of partitioning to be as high as possible. Some examples are presented in order to demonstrate how this quality could be affected by even relatively small changes in the modularity score, stressing the importance of optimization accuracy.
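Modularity itself is a short formula. A direct evaluation sketch, useful for checking any partition such search strategies produce:

```python
def modularity(adj, communities):
    """Newman-Girvan modularity
    Q = (1/2m) * sum_ij [A_ij - k_i*k_j/(2m)] * delta(c_i, c_j),
    the objective most commonly optimized by community-detection algorithms.
    `adj` maps each node to its set of neighbours (undirected, unweighted)."""
    m2 = sum(len(nbrs) for nbrs in adj.values())    # this equals 2m
    deg = {v: len(nbrs) for v, nbrs in adj.items()}
    comm = {v: c for c, nodes in enumerate(communities) for v in nodes}
    q = 0.0
    for i in adj:
        for j in adj:
            if comm[i] == comm[j]:
                a_ij = 1.0 if j in adj[i] else 0.0
                q += a_ij - deg[i] * deg[j] / m2
    return q / m2

# Two triangles joined by a single bridge edge:
adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2, 4, 5}, 4: {3, 5}, 5: {3, 4}}
print(modularity(adj, [{0, 1, 2}, {3, 4, 5}]))   # 5/14, about 0.357
```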
Selecting training inputs via greedy rank covering
Buchsbaum, A.L.; Santen, J.P.H. van
1996-12-31
We present a general method for selecting a small set of training inputs, the observations of which will suffice to estimate the parameters of a given linear model. We exemplify the algorithm in terms of predicting segmental duration of phonetic-segment feature vectors in a text-to-speech synthesizer, but the algorithm will work for any linear model and its associated domain.
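Greedy rank covering for a linear model can be sketched as picking, at each step, the candidate input with the largest component orthogonal to the span of those already selected (a Gram-Schmidt version of the idea; the feature vectors below are toy stand-ins for phonetic-segment features):

```python
def greedy_rank_cover(candidates, tol=1e-9):
    """Greedily pick feature vectors until they span the space: at each step
    take the candidate with the largest residual norm after projecting out
    the span of the already-chosen inputs. Observing the model's outputs on
    a full-rank selection suffices to estimate its parameters."""
    basis, chosen = [], []
    def residual(v):
        r = list(v)
        for b in basis:                            # project out chosen span
            coef = sum(ri * bi for ri, bi in zip(r, b))
            r = [ri - coef * bi for ri, bi in zip(r, b)]
        return r
    while True:
        scored = [(sum(ri * ri for ri in residual(v)), i)
                  for i, v in enumerate(candidates) if i not in chosen]
        if not scored:
            break
        norm2, i = max(scored)
        if norm2 < tol:
            break                                  # remaining inputs add no rank
        r = residual(candidates[i])
        n = norm2 ** 0.5
        basis.append([ri / n for ri in r])
        chosen.append(i)
    return chosen

cands = [[1.0, 0.0, 0.0], [2.0, 0.0, 0.0], [0.0, 3.0, 0.0], [1.0, 1.0, 1.0]]
print(greedy_rank_cover(cands))   # [2, 1, 3]: a full-rank subset, largest first
```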
NASA Technical Reports Server (NTRS)
Macready, William; Wolpert, David
2005-01-01
We demonstrate a new framework for analyzing and controlling distributed systems, by solving constrained optimization problems with an algorithm based on that framework. The framework is an information-theoretic extension of conventional full-rationality game theory to allow bounded-rational agents. The associated optimization algorithm is a game in which agents control the variables of the optimization problem. They do this by jointly minimizing a Lagrangian of (the probability distribution of) their joint state. The updating of the Lagrange parameters in that Lagrangian is a form of automated annealing, one that focuses the multi-agent system on the optimal pure strategy. We present computer experiments for the k-SAT constraint satisfaction problem and for unconstrained minimization of NK functions.
Practical optimization of Steiner trees via the cavity method
NASA Astrophysics Data System (ADS)
Braunstein, Alfredo; Muntoni, Anna
2016-07-01
The optimization version of the cavity method for single instances, called Max-Sum, has been applied in the past to the minimum Steiner tree problem on graphs and variants. Max-Sum has been shown experimentally to give asymptotically optimal results on certain types of weighted random graphs, and to give good solutions in short computation times for some types of real networks. However, the hypotheses behind the formulation and the cavity method itself substantially limit the class of instances on which the approach gives good results (or even converges). Moreover, in the standard model formulation, the diameter of the tree solution is limited by a predefined bound that affects both computation time and convergence properties. In this work we describe two main enhancements to the Max-Sum equations that enable it to cope with optimization of real-world instances. First, we develop an alternative 'flat' model formulation that allows the relevant configuration space to be reduced substantially, making the approach feasible on instances with large solution diameter, in particular when the number of terminal nodes is small. Second, we propose an integration between Max-Sum and three greedy heuristics. This integration turns Max-Sum into a highly competitive self-contained algorithm, in which a feasible solution is given at each step of the iterative procedure. Part of this development participated in the 2014 DIMACS Challenge on Steiner problems, and we report the results here. The performance of the proposed approach in the challenge was highly satisfactory: it maintained a small gap to the best bound in most cases, and obtained the best results on several instances in two different categories. We also present several improvements with respect to the version of the algorithm that participated in the competition, including new best solutions for some of the instances of the challenge.
A Globally Optimal Particle Tracking Technique for Stereo Imaging Velocimetry Experiments
NASA Technical Reports Server (NTRS)
McDowell, Mark
2008-01-01
An important phase of any Stereo Imaging Velocimetry experiment is particle tracking. Particle tracking seeks to identify and characterize the motion of individual particles entrained in a fluid or air experiment. We analyze a cylindrical chamber filled with water and seeded with density-matched particles. In every four-frame sequence, we identify a particle track by assigning a unique track label for each camera image. The conventional approach to particle tracking is to use an exhaustive tree-search method utilizing greedy algorithms to reduce search times. However, these types of algorithms are not optimal due to a cascade effect of incorrect decisions upon adjacent tracks. We examine the use of a guided evolutionary neural net with simulated annealing to arrive at a globally optimal assignment of tracks. The net is guided both by the minimization of the search space through the use of prior limiting assumptions about valid tracks and by a strategy which seeks to avoid high-energy intermediate states which can trap the net in a local minimum. A stochastic search algorithm is used in place of back-propagation of error to further reduce the chance of being trapped in an energy well. Global optimization is achieved by minimizing an objective function, which includes both track smoothness and particle-image utilization parameters. In this paper we describe our model and present our experimental results. We compare our results with a nonoptimizing, predictive tracker and obtain an average increase in valid track yield of 27 percent.
Guimarães, Dayan Adionel; Sakai, Lucas Jun; Alberti, Antonio Marcos; de Souza, Rausley Adriano Amaral
2016-01-01
In this paper, a simple and flexible method for increasing the lifetime of fixed or mobile wireless sensor networks is proposed. Based on past residual energy information reported by the sensor nodes, the sink node or another central node dynamically optimizes the communication activity levels of the sensor nodes to save energy without sacrificing the data throughput. The activity levels are defined to represent portions of time or time-frequency slots in a frame, during which the sensor nodes are scheduled to communicate with the sink node to report sensory measurements. Besides node mobility, it is considered that sensors' batteries may be recharged via a wireless power transmission or equivalent energy harvesting scheme, bringing to the optimization problem an even more dynamic character. We report largely increased lifetimes over the non-optimized network, and comparable or even larger lifetime improvements with respect to an idealized greedy algorithm that uses both the real-time channel state and the residual energy information. PMID:27657075
Wen-Chiao Lin; Humberto E. Garcia; Tae-Sic Yoo
2011-06-01
Diagnosers for keeping track of the occurrences of special events in the framework of unreliable, partially observed discrete-event dynamical systems were developed in previous work. This paper considers observation platforms consisting of sensors that provide partial and unreliable observations and of diagnosers that analyze them. Diagnosers in observation platforms typically perform better as the sensors providing the observations become more costly or numerous. This paper proposes a methodology for finding an observation platform that achieves an optimal balance between cost and performance, while satisfying given observability requirements and constraints. Since this problem is generally computationally hard in the framework considered, an observation platform optimization algorithm is utilized that uses two greedy heuristics, one myopic and another based on projected performances. These heuristics are executed sequentially in order to find the best observation platforms. The developed algorithm is then applied to an observation platform optimization problem for a multi-unit-operation system. Results show that improved observation platforms can be found that may significantly reduce the observation platform cost but still yield acceptable performance for correctly inferring the occurrences of special events.
Sejnowski, Terrence J.; Poizner, Howard; Lynch, Gary; Gepshtein, Sergei; Greenspan, Ralph J.
2014-01-01
Human performance approaches that of an ideal observer and optimal actor in some perceptual and motor tasks. These optimal abilities depend on the capacity of the cerebral cortex to store an immense amount of information and to flexibly make rapid decisions. However, behavior only approaches these limits after a long period of learning while the cerebral cortex interacts with the basal ganglia, an ancient part of the vertebrate brain that is responsible for learning sequences of actions directed toward achieving goals. Progress has been made in understanding the algorithms used by the brain during reinforcement learning, which is an online approximation of dynamic programming. Humans also make plans that depend on past experience by simulating different scenarios, which is called prospective optimization. The same brain structures in the cortex and basal ganglia that are active online during optimal behavior are also active offline during prospective optimization. The emergence of general principles and algorithms for goal-directed behavior has consequences for the development of autonomous devices in engineering applications. PMID:25328167
Lee, John R.
1975-01-01
Optimal fluoridation has been defined as that fluoride exposure which confers maximal cariostasis with minimal toxicity and its values have been previously determined to be 0.5 to 1 mg per day for infants and 1 to 1.5 mg per day for an average child. Total fluoride ingestion and urine excretion were studied in Marin County, California, children in 1973 before municipal water fluoridation. Results showed fluoride exposure to be higher than anticipated and fulfilled previously accepted criteria for optimal fluoridation. Present and future water fluoridation plans need to be reevaluated in light of total environmental fluoride exposure. PMID:1130041
Kreitler, Jason; Stoms, David M; Davis, Frank W
2014-01-01
Quantitative methods of spatial conservation prioritization have traditionally been applied to issues in conservation biology and reserve design, though their use in other types of natural resource management is growing. The utility maximization problem is one form of a covering problem where multiple criteria can represent the expected social benefits of conservation action. This approach allows flexibility with a problem formulation that is more general than typical reserve design problems, though the solution methods are very similar. However, few studies have addressed optimization in utility maximization problems for conservation planning, and the effect of solution procedure is largely unquantified. Therefore, this study mapped five criteria describing elements of multifunctional agriculture to determine a hypothetical conservation resource allocation plan for agricultural land conservation in the Central Valley of CA, USA. We compared solution procedures within the utility maximization framework to determine the difference between an open source integer programming approach and a greedy heuristic, and find gains from optimization of up to 12%. We also model land availability for conservation action as a stochastic process and determine the decline in total utility compared to the globally optimal set using both solution algorithms. Our results are comparable to other studies illustrating the benefits of optimization for different conservation planning problems, and highlight the importance of maximizing the effectiveness of limited funding for conservation and natural resource management.
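The comparison the study makes, a benefit/cost greedy heuristic against an exact integer-programming solution under a budget, can be reproduced on a toy instance where exhaustive search stands in for the IP solver (the parcel names, costs and utilities below are hypothetical):

```python
from itertools import combinations

def greedy_plan(items, budget):
    """Benefit/cost greedy: take affordable items in decreasing utility/cost order."""
    plan, utility, spent = [], 0, 0
    ranked = sorted(items, key=lambda n: items[n][1] / items[n][0], reverse=True)
    for name in ranked:
        cost, util = items[name]
        if spent + cost <= budget:
            plan.append(name)
            spent += cost
            utility += util
    return plan, utility

def optimal_plan(items, budget):
    """Exhaustive search over all subsets, standing in for the IP solver."""
    best, best_util = [], 0
    for r in range(len(items) + 1):
        for combo in combinations(items, r):
            cost = sum(items[n][0] for n in combo)
            util = sum(items[n][1] for n in combo)
            if cost <= budget and util > best_util:
                best, best_util = list(combo), util
    return best, best_util

# parcels as name: (cost, utility); greedy is lured by the cheap high-ratio parcel
parcels = {"p1": (1, 3), "p2": (10, 12), "p3": (2, 2)}
g_plan, g_util = greedy_plan(parcels, budget=10)   # picks p1, p3 -> utility 5
o_plan, o_util = optimal_plan(parcels, budget=10)  # picks p2 alone -> utility 12
```

On this contrived instance the greedy shortfall is large; the study's real instances show more modest but still material gains (up to 12%) from exact optimization.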
Optimal space-time attacks on system state estimation under a sparsity constraint
NASA Astrophysics Data System (ADS)
Lu, Jingyang; Niu, Ruixin; Han, Puxiao
2016-05-01
System state estimation in the presence of an adversary that injects false information into sensor readings has attracted much attention in wide application areas, such as target tracking with compromised sensors, secure monitoring of dynamic electric power systems, secure driverless cars, and radar tracking and detection in the presence of jammers. From a malicious adversary's perspective, the optimal strategy for attacking a multi-sensor dynamic system over sensors and over time is investigated. It is assumed that the system defender can perfectly detect the attacks and identify and remove sensor data once they are corrupted by false information injected by the adversary. With this in mind, the adversary's goal is to maximize the covariance matrix of the system state estimate by the end of the attack period, under a sparse attack constraint such that the adversary can only attack the system a few times over time and over sensors. The sparsity assumption is due to the adversary's limited resources and his/her intention to reduce the chance of being detected by the system defender. This becomes an integer programming problem, and its optimal solution by exhaustive search is intractable, with a prohibitive complexity, especially for a system with a large number of sensors and a large number of time steps. Several suboptimal solutions, such as those based on greedy search and dynamic programming, are proposed to find the attack strategies. Examples and numerical results are provided in order to illustrate the effectiveness and the reduced computational complexities of the proposed attack strategies.
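A scalar toy version of this setting: because the defender detects and discards corrupted readings, attacking a (time, sensor) slot simply deletes that Kalman update, so the adversary can greedily choose the deletions that most inflate the terminal estimation variance. The dynamics, noise levels and budget below are illustrative assumptions, not the paper's model:

```python
def final_variance(attacked, T=5, sensors=2, a=1.0, Q=0.2, R=1.0, P0=1.0):
    """Terminal variance of a scalar Kalman filter when the measurements in
    `attacked` (a set of (t, s) pairs) are detected and discarded."""
    P = P0
    for t in range(T):
        P = a * a * P + Q                 # prediction step
        for s in range(sensors):
            if (t, s) not in attacked:    # surviving measurement updates P
                P = P * R / (P + R)
    return P

def greedy_attack(budget, T=5, sensors=2):
    """Greedily pick the slot whose removal most increases the final variance."""
    attacked = set()
    slots = [(t, s) for t in range(T) for s in range(sensors)]
    for _ in range(budget):
        best = max((c for c in slots if c not in attacked),
                   key=lambda c: final_variance(attacked | {c}))
        attacked.add(best)
    return attacked

baseline = final_variance(set())
attack = greedy_attack(budget=3)
```

Each greedy step re-runs the filter for every remaining candidate, mirroring the paper's point that greedy search trades a small loss in optimality for a large drop in complexity versus the exhaustive integer program.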
Adaptive tracking and compensation of laser spot based on ant colony optimization
NASA Astrophysics Data System (ADS)
Yang, Lihong; Ke, Xizheng; Bai, Runbing; Hu, Qidi
2009-05-01
Atmospheric absorption, scattering and turbulence affect a laser signal propagating through the atmospheric channel, producing laser-spot twinkling, beam drift and spot break-up. These phenomena seriously degrade the stability and reliability of a laser-spot receiving system. In order to reduce the influence of atmospheric turbulence, we adopt optimal-control ideas from the field of artificial intelligence and propose a novel adaptive-optics control technique: model-free optimized adaptive control. We analyze the theory of low-order wave-front errors, employ an adaptive optical system to correct them, and design the corresponding adaptive control structure. The core of the controller is the ant colony algorithm, which is characterized by positive feedback, distributed computation and greedy heuristic search. Ant-colony optimization of the adaptive optical phase compensation is simulated. Simulation results show that the algorithm can effectively control the laser energy distribution, improve the laser beam quality, and enhance the signal-to-noise ratio of the received signal.
NASA Astrophysics Data System (ADS)
Handels, Heinz; Ross, Th; Kreusch, J.; Wolff, H. H.; Poeppl, S. J.
1998-06-01
A new approach to computer-supported recognition of melanoma and naevocytic naevi based on high-resolution skin surface profiles is presented. Profiles are generated by sampling an area of 4 x 4 mm^2 at a lateral resolution of 125 sample points per mm and a vertical resolution of 0.1 micrometers with a laser profilometer. With image analysis algorithms, Haralick's texture parameters, Fourier features and features based on fractal analysis are extracted. In order to improve classification performance, a subsequent feature selection process is applied to determine the best possible subset of features. Genetic algorithms are optimized for the feature selection process, and results of different approaches are compared. As the quality measure for feature subsets, the error rate of the nearest neighbor classifier estimated with the leaving-one-out method is used. In comparison to heuristic strategies and greedy algorithms, genetic algorithms show the best results for the feature selection problem. After feature selection, several architectures of feed-forward neural networks with error back-propagation are evaluated. Classification performance of the neural classifier is optimized using different topologies, learning parameters and pruning algorithms. The best neural classifier achieved an error rate of 4.5% and was found after network pruning. The best result overall, an error rate of 2.3%, was obtained with the nearest neighbor classifier.
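The subset-quality measure used in the study, the leave-one-out error rate of a nearest neighbor classifier restricted to a candidate feature subset, is easy to state directly (toy two-feature data below; the real features are Haralick, Fourier and fractal descriptors):

```python
def loo_1nn_error(X, y, feats):
    """Leave-one-out error of a 1-NN classifier that only sees the
    feature indices in `feats` (squared Euclidean distance)."""
    errors = 0
    for i in range(len(X)):
        best_j, best_d = None, float("inf")
        for j in range(len(X)):
            if i == j:          # leave sample i out of its own neighbor search
                continue
            d = sum((X[i][f] - X[j][f]) ** 2 for f in feats)
            if d < best_d:
                best_d, best_j = d, j
        errors += y[best_j] != y[i]
    return errors / len(X)

# feature 0 separates the classes; feature 1 is pure noise
X = [(0.0, 3.1), (0.1, -2.0), (0.2, 1.0), (1.0, 3.0), (1.1, -2.1), (1.2, 0.9)]
y = [0, 0, 0, 1, 1, 1]
good = loo_1nn_error(X, y, feats=(0,))  # 0.0
bad = loo_1nn_error(X, y, feats=(1,))   # 1.0
```

A genetic algorithm, as in the paper, would search over bit-strings selecting `feats` with this error rate as the fitness to minimize.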
NASA Technical Reports Server (NTRS)
Patterson, Michael J.; Mohajeri, Kayhan
1991-01-01
The preliminary results of a test program to optimize a neutralizer design for 30 cm xenon ion thrusters are discussed. The impact of neutralizer geometry, neutralizer axial location, and local magnetic fields on neutralizer performance is discussed. The effect of neutralizer performance on overall thruster performance is quantified, for thruster operation in the 0.5-3.2 kW power range. Additionally, these data are compared to data published for other north-south stationkeeping (NSSK) and primary propulsion xenon ion thruster neutralizers.
SIAM conference on optimization
Not Available
1992-05-10
Abstracts are presented of 63 papers on the following topics: large-scale optimization, interior-point methods, algorithms for optimization, problems in control, network optimization methods, and parallel algorithms for optimization problems.
AMMOS: Automated Molecular Mechanics Optimization tool for in silico Screening
Pencheva, Tania; Lagorce, David; Pajeva, Ilza; Villoutreix, Bruno O; Miteva, Maria A
2008-01-01
Background Virtual or in silico ligand screening combined with other computational methods is one of the most promising methods to search for new lead compounds, thereby greatly assisting the drug discovery process. Despite considerable progress made in virtual screening methodologies, available computer programs do not easily address problems such as: structural optimization of compounds in a screening library, receptor flexibility/induced-fit, and accurate prediction of protein-ligand interactions. It has been shown that structural optimization of chemical compounds and post-docking optimization in multi-step structure-based virtual screening approaches help to further improve the overall efficiency of the methods. To address some of these points, we developed the program AMMOS for refining both the 3D structures of the small molecules present in chemical libraries and the predicted receptor-ligand complexes, allowing partial to full atom flexibility through molecular mechanics optimization. Results The program AMMOS carries out an automatic procedure that allows for the structural refinement of compound collections and energy minimization of protein-ligand complexes using the open source program AMMP. The performance of our package was evaluated by comparing the structures of small chemical entities minimized by AMMOS with those minimized with the Tripos and MMFF94s force fields. Next, AMMOS was used for full flexible minimization of protein-ligand complexes obtained from a multi-step virtual screening. Enrichment studies of the selected pre-docked complexes containing 60% of the initially added inhibitors were carried out with or without final AMMOS minimization on two protein targets having different binding pocket properties. AMMOS was able to improve the enrichment after the pre-docking stage with 40 to 60% of the initially added active compounds found in the top 3% to 5% of the entire compound collection. Conclusion The open source AMMOS
Mdluli, Thembi; Buzzard, Gregery T.; Rundell, Ann E.
2015-01-01
This model-based design of experiments (MBDOE) method determines the input magnitudes of experimental stimuli to apply and the associated measurements that should be taken to optimally constrain the uncertain dynamics of a biological system under study. The ideal global solution for this experiment design problem is generally computationally intractable because of parametric uncertainties in the mathematical model of the biological system. Others have addressed this issue by limiting the solution to a local estimate of the model parameters. Here we present an approach that is independent of the local parameter constraint. This approach is made computationally efficient and tractable by the use of: (1) sparse grid interpolation that approximates the biological system dynamics, (2) representative parameters that uniformly represent the data-consistent dynamical space, and (3) probability weights of the represented experimentally distinguishable dynamics. Our approach identifies data-consistent representative parameters using sparse grid interpolants, constructs the optimal input sequence from a greedy search, and defines the associated optimal measurements using a scenario tree. We explore the optimality of this MBDOE algorithm using a 3-dimensional Hes1 model and a 19-dimensional T-cell receptor model. The 19-dimensional T-cell model also demonstrates the MBDOE algorithm's scalability to higher dimensions. In both cases, the dynamical uncertainty region that bounds the trajectories of the target system states was reduced by as much as 86% and 99%, respectively, after completing the designed experiments in silico. Our results suggest that for resolving dynamical uncertainty, the ability to design an input sequence paired with its associated measurements is particularly important when limited by the number of measurements. PMID:26379275
NASA Astrophysics Data System (ADS)
Allahverdyan, Armen E.; Hovhannisyan, Karen; Mahler, Guenter
2010-05-01
We study a refrigerator model which consists of two n-level systems interacting via a pulsed external field. Each system couples to its own thermal bath at temperatures Th and Tc, respectively (θ ≡ Tc/Th < 1). The refrigerator functions in two steps: thermally isolated interaction between the systems driven by the external field, and isothermal relaxation back to equilibrium. There is a complementarity between the power of heat transfer from the cold bath and the efficiency: the latter vanishes when the former is maximized, and vice versa. A reasonable compromise is achieved by optimizing the product of the heat power and efficiency over the Hamiltonian of the two systems. The efficiency is then found to be bounded from below by ζCA = 1/√(1−θ) − 1 (an analog of the Curzon-Ahlborn efficiency), besides being bounded from above by the Carnot efficiency ζC = 1/(1−θ) − 1. The lower bound is reached in the equilibrium limit θ → 1. The Carnot bound is reached (for a finite power and a finite amount of heat transferred per cycle) for ln n ≫ 1. If the above maximization is constrained by assuming homogeneous energy spectra for both systems, the efficiency is bounded from above by ζCA and converges to it for n ≫ 1.
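A quick numerical sanity check of the two bounds, with θ = Tc/Th and the square-root form of the Curzon-Ahlborn-like bound as commonly quoted for this model (a sketch, not the paper's derivation):

```python
import math

def zeta_CA(theta):
    """Curzon-Ahlborn-like lower bound: 1/sqrt(1 - theta) - 1."""
    return 1.0 / math.sqrt(1.0 - theta) - 1.0

def zeta_C(theta):
    """Carnot upper bound, i.e. the ideal refrigerator COP Tc/(Th - Tc)."""
    return 1.0 / (1.0 - theta) - 1.0

# the gap between the bounds closes only in the equilibrium limit theta -> 1
gaps = [zeta_C(t) - zeta_CA(t) for t in (0.1, 0.3, 0.5, 0.7, 0.9)]
```

For example, at θ = 0.75 the bounds are ζCA = 1 and ζC = 3, so the optimized efficiency is pinned between them with a factor-of-three spread.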
A Hybrid Optimization Framework with POD-based Order Reduction and Design-Space Evolution Scheme
NASA Astrophysics Data System (ADS)
Ghoman, Satyajit S.
The main objective of this research is to develop an innovative multi-fidelity multi-disciplinary design, analysis and optimization suite that integrates certain solution generation codes and newly developed innovative tools to improve the overall optimization process. The research performed herein is divided into two parts: (1) the development of an MDAO framework by integration of variable fidelity physics-based computational codes, and (2) enhancements to such a framework by incorporating innovative features extending its robustness. The first part of this dissertation describes the development of a conceptual Multi-Fidelity Multi-Strategy and Multi-Disciplinary Design Optimization Environment (M3DOE), in the context of aircraft wing optimization. M3DOE provides the user a capability to optimize configurations with a choice of (i) the level of fidelity desired, (ii) the use of a single-step or multi-step optimization strategy, and (iii) a combination of a series of structural and aerodynamic analyses. The modularity of M3DOE allows it to be a part of other inclusive optimization frameworks. M3DOE is demonstrated within the context of shape and sizing optimization of the wing of a Generic Business Jet aircraft. Two different optimization objectives, viz. dry weight minimization and cruise range maximization, are studied by conducting one low-fidelity and two high-fidelity optimization runs to demonstrate the application scope of M3DOE. The second part of this dissertation describes the development of an innovative hybrid optimization framework that extends the robustness of M3DOE by employing a proper orthogonal decomposition-based design-space order reduction scheme combined with the evolutionary algorithm technique. The POD method of extracting dominant modes from an ensemble of candidate configurations is used for the design-space order reduction. The snapshot of candidate population is updated iteratively using the evolutionary algorithm technique of
Optimal algorithms for haplotype assembly from whole-genome sequence data
He, Dan; Choi, Arthur; Pipatsrisawat, Knot; Darwiche, Adnan; Eskin, Eleazar
2010-01-01
Motivation: Haplotype inference is an important step for many types of analyses of genetic variation in the human genome. Traditional approaches for obtaining haplotypes involve collecting genotype information from a population of individuals and then applying a haplotype inference algorithm. The development of high-throughput sequencing technologies allows for an alternative strategy to obtain haplotypes by combining sequence fragments. The problem of 'haplotype assembly' is the problem of assembling the two haplotypes for a chromosome given the collection of such fragments, or reads, and their locations in the haplotypes, which are pre-determined by mapping the reads to a reference genome. Errors in reads significantly increase the difficulty of the problem, and it has been shown that the problem is NP-hard even for reads of length 2. Existing greedy and stochastic algorithms are not guaranteed to find the optimal solutions for the haplotype assembly problem. Results: In this article, we propose a dynamic programming algorithm that is able to assemble the haplotypes optimally with time complexity O(m × 2^k × n), where m is the number of reads, k is the length of the longest read and n is the total number of SNPs in the haplotypes. We also reduce the haplotype assembly problem to the maximum satisfiability problem, which can often be solved optimally even when k is large. Taking advantage of the efficiency of our algorithm, we perform simulation experiments demonstrating that the assembly of haplotypes using reads of length typical of the current sequencing technologies is not practical. However, we demonstrate that the combination of this approach and the traditional haplotype phasing approaches allows us to practically construct haplotypes containing both common and rare variants. Contact: danhe@cs.ucla.edu PMID:20529904
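The objective being optimized is minimum error correction: each read pays its Hamming distance to the nearer of the two complementary haplotypes. For a handful of SNPs, exhaustive search over all 2^n haplotypes makes this concrete (toy reads; the paper's dynamic program achieves O(m × 2^k × n) instead of this 2^n enumeration):

```python
from itertools import product

def assemble(reads, n):
    """Exhaustive minimum-error-correction assembly over n SNPs.
    Each read is a list of (snp_index, allele) pairs; the second
    haplotype is always the complement of the first."""
    def mismatches(read, hap):
        return sum(1 for pos, allele in read if hap[pos] != allele)

    best_h, best_cost = None, float("inf")
    for h in product((0, 1), repeat=n):
        comp = tuple(1 - a for a in h)
        # charge each read its distance to whichever haplotype fits better
        cost = sum(min(mismatches(r, h), mismatches(r, comp)) for r in reads)
        if cost < best_cost:
            best_h, best_cost = h, cost
    return best_h, best_cost

# reads sampled from haplotypes (0,1,0)/(1,0,1), plus one read with an error
reads = [[(0, 0), (1, 1)], [(1, 1), (2, 0)], [(0, 1), (1, 0)],
         [(1, 0), (2, 1)], [(0, 0), (2, 1)]]
hap, cost = assemble(reads, n=3)  # cost 1: only the erroneous read pays
```

A greedy assembler can be misled by the erroneous read; the exhaustive (or DP) formulation is what guarantees the minimum-correction solution.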
Modeling the Auto-Ignition of Biodiesel Blends with a Multi-Step Model
Toulson, Dr. Elisa; Allen, Casey M; Miller, Dennis J; McFarlane, Joanna; Schock, Harold; Lee, Tonghun
2011-01-01
There is growing interest in using biodiesel in place of or in blends with petrodiesel in diesel engines; however, biodiesel oxidation chemistry is complicated to directly model and existing surrogate kinetic models are very large, making them computationally expensive. The present study describes a method for predicting the ignition behavior of blends of n-heptane and methyl butanoate, fuels whose blends have been used in the past as a surrogate for biodiesel. The autoignition is predicted using a multistep (8-step) model in order to reduce computational time and make this a viable tool for implementation into engine simulation codes. A detailed reaction mechanism for n-heptane-methyl butanoate blends was used as a basis for validating the multistep model results. The ignition delay trends predicted by the multistep model for the n-heptane-methyl butanoate blends matched well with that of the detailed CHEMKIN model for the majority of conditions tested.
Simulation of multi-steps thermal transition in 2D spin-crossover nanoparticles
NASA Astrophysics Data System (ADS)
Jureschi, Catalin-Maricel; Pottier, Benjamin-Louis; Linares, Jorge; Richard Dahoo, Pierre; Alayli, Yasser; Rotaru, Aurelian
2016-04-01
We have used an Ising like model to study the thermal behavior of a 2D spin crossover (SCO) system embedded in a matrix. The interaction parameter between edge SCO molecules and its local environment was included in the standard Ising like model as an additional term. The influence of the system's size and the ratio between the number of edge molecules and the other molecules were also discussed.
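A minimal Metropolis sketch of such an Ising-like SCO model (kB = 1; the field term (Δ − T ln g)/2 encodes the ligand-field gap Δ and the high-spin degeneracy g; all parameter values, and the omission of the paper's extra edge-molecule/matrix interaction term, are illustrative assumptions):

```python
import math
import random

def hs_fraction(T, L=8, J=15.0, delta=150.0, g=50.0, sweeps=200, seed=1):
    """High-spin fraction of an Ising-like spin-crossover model
    H = sum_i h*s_i - J*sum_<ij> s_i*s_j with h = (delta - T*ln(g))/2,
    s_i = +1 (high spin) or -1 (low spin), sampled by Metropolis."""
    rng = random.Random(seed)
    spins = [[-1] * L for _ in range(L)]       # start fully low-spin
    h = (delta - T * math.log(g)) / 2.0
    for _ in range(sweeps * L * L):
        i, j = rng.randrange(L), rng.randrange(L)
        s = spins[i][j]
        nn = (spins[(i + 1) % L][j] + spins[(i - 1) % L][j]
              + spins[i][(j + 1) % L] + spins[i][(j - 1) % L])
        dE = -2.0 * s * (h - J * nn)           # energy change if s flips
        if dE <= 0 or rng.random() < math.exp(-dE / T):
            spins[i][j] = -s
    n_hs = sum(s == 1 for row in spins for s in row)
    return n_hs / (L * L)

low = hs_fraction(T=10.0)    # well below T_1/2 = delta/ln(g): low-spin phase
high = hs_fraction(T=100.0)  # well above T_1/2: high-spin phase
```

The temperature-dependent field is what produces the thermal low-spin to high-spin transition around T_1/2 = Δ/ln g, the quantity the paper studies as a function of system size and edge-molecule ratio.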
Weiss Brennan, Claire V; Walck, Scott D; Swab, Jeffrey J
2014-12-01
A new technique for the preparation of heavily cracked, heavily damaged, brittle materials for examination in a transmission electron microscope (TEM) is described in detail. In this study, cross-sectional TEM samples were prepared from indented silicon carbide (SiC) bulk ceramics, although this technique could also be applied to other brittle and/or multiphase materials. During TEM sample preparation, milling-induced damage must be minimized, since in studying deformation mechanisms, it would be difficult to distinguish deformation-induced cracking from cracking occurring due to the sample preparation. The samples were prepared using a site-specific, two-step ion milling sequence accompanied by epoxy vacuum infiltration into the cracks. This technique allows the heavily cracked, brittle ceramic material to stay intact during sample preparation and also helps preserve the true microstructure of the cracked area underneath the indent. Some preliminary TEM results are given and discussed in regards to deformation studies in ceramic materials. This sample preparation technique could be applied to other cracked and/or heavily damaged materials, including geological materials, archaeological materials, fatigued materials, and corrosion samples.
A multi-step solvent-free mechanochemical route to indium(iii) complexes.
Wang, Jingyi; Ganguly, Rakesh; Yongxin, Li; Díaz, Jesus; Soo, Han Sen; García, Felipe
2016-05-10
Mechanochemistry is well-established in the solid-phase synthesis of inorganic materials but has rarely been employed for molecular syntheses. In recent years, there has been nascent interest in 'greener' synthetic methods with less solvent, higher yields, and shorter reaction times being especially appealing to the fine chemicals and inorganic catalyst industries. Herein, we demonstrate that main-group indium(iii) complexes featuring bis(imino)acenaphthene (BIAN) ligands are readily accessible through a mechanochemical milling approach. The synthetic methodology reported herein not only bypasses the use of large solvent quantities and transition metal reagents for ligand synthesis, but also reduces reaction times dramatically. These new main-group complexes exhibit the potential to be reduced to indium(i) compounds, which may be employed as photosensitizers in organic catalyses and functional materials. PMID:27112317
Multi-Step Attack Detection via Bayesian Modeling under Model Parameter Uncertainty
ERIC Educational Resources Information Center
Cole, Robert
2013-01-01
Organizations in all sectors of business have become highly dependent upon information systems for the conduct of business operations. Of necessity, these information systems are designed with many points of ingress, points of exposure that can be leveraged by a motivated attacker seeking to compromise the confidentiality, integrity or…
Multi-step process control and characterization of scanning probe lithography
NASA Astrophysics Data System (ADS)
Peterson, C. A.; Ruskell, T. G.; Pyle, J. L.; Workman, R. K.; Yao, X.; Hunt, J. P.; Sarid, D.; Parks, H. G.; Vermeire, B.
An atomic force microscope with a conducting tip (CT-AFM) was used to fabricate and characterize nanometer scale lines of (1) silicon oxide and (2) silicon nitride on H-terminated n-type silicon (100) wafers. In process (1), a negative bias was applied to the tip of the CT-AFM system and the resulting electric field caused electrolysis of ambient water vapor and local oxidation of the silicon surface. In addition, the accompanying current was detected by a sub-pA current amplifier. In process (2), the presence of a nitrogen atmosphere containing a small partial pressure of ammonia resulted in the local nitridation of the surface. The CT-AFM system was also used to locate and study the dielectric properties of the silicon-oxide lines as well as copper islands buried under 20 nm of silicon dioxide. A computer-controlled feedback system and raster scanning of the sample produced simultaneous topographic and Fowler-Nordheim tunneling maps of the structures under study. Detailed aspects of nanolithography and local-probe Fowler-Nordheim characterization using a CT-AFM will be discussed.
The Synthesis of 2-acetyl-1,4-naphthoquinone: A Multi-step Synthesis.
ERIC Educational Resources Information Center
Green, Ivan R.
1982-01-01
Outlines 2 procedures for synthesizing 2-acetyl-1,4-naphthoquinone to compare relative merits of the two pathways. The major objective of the exercise is to demonstrate that certain factors should be considered when selecting a pathway for synthesis including availability of starting materials, cost of reagents, number of steps involved,…
Multi-step shot noise spectrum induced by a local large spin
NASA Astrophysics Data System (ADS)
Niu, Peng-Bin; Shi, Yun-Long; Sun, Zhu; Nie, Yi-Hang
2015-12-01
We use the non-equilibrium Green’s function method to analyze the shot noise spectrum of an artificial single molecular magnet (ASMM) model in the strong spin-orbit coupling limit in the sequential tunneling regime, focusing mainly on the effects of the local large spin. In the linear response regime, the shot noise shows 2S + 1 peaks and is strongly spin-dependent. In the nonlinear response regime, one can observe 2S + 1 steps in the shot noise and Fano factor. In these steps one can see a significant enhancement effect due to the spin-dependent multi-channel process of the local large spin, which reduces electron correlations. Project supported by the National Natural Science Foundation of China (Grant Nos. 11504210, 11504211, 11504212, 11274207, 11274208, 11174115, and 11325417), the Key Program of the Ministry of Education of China (Grant No. 212018), the Scientific and Technological Project of Shanxi Province, China (Grant No. 2015031002-2), the Natural Science Foundation of Shanxi Province, China (Grant Nos. 2013011007-2 and 2013021010-5), and the Outstanding Innovative Teams of Higher Learning Institutions of Shanxi Province, China.
Study on ann-based multi-step prediction model of short-term climatic variation
NASA Astrophysics Data System (ADS)
Jin, Long; Ju, Weimin; Miao, Qilong
2000-03-01
In the context of 1905-1995 series from Nanjing and Hangzhou, a study is undertaken of establishing a predictive model of annual mean temperature for the coming decade (1996-2005) over the Changjiang (Yangtze River) delta region, through the mean generating function and an artificial neural network in combination. Results show that the established model yields a mean absolute error of 0.45°C for annual mean temperature over 10 yearly independent samples (1986-1995), and that the difference between the mean predictions and related measurements is 0.156°C. The developed model is found superior to a mean generating function regression model in both historical data fitting and independent sample prediction.
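The recursive multi-step idea underlying such models (feed each one-step prediction back in as the input for the next step) can be sketched in Python. Here a simple AR(1) least-squares fit stands in for the paper's mean generating function/neural network combination, and the series is a synthetic placeholder, not the Nanjing/Hangzhou data:

```python
import random

def fit_ar1(series):
    """Least-squares AR(1) fit with intercept: x[t+1] ~ a + b*x[t]
    (a linear stand-in for the paper's neural network)."""
    xs, ys = series[:-1], series[1:]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    a = my - b * mx
    return a, b

def forecast_multi_step(series, steps):
    """Recursive multi-step forecast: each one-step prediction becomes
    the input for the next step."""
    a, b = fit_ar1(series)
    x = series[-1]
    out = []
    for _ in range(steps):
        x = a + b * x
        out.append(x)
    return out

# Synthetic annual-mean-temperature-like series (trend plus noise; illustrative only).
rng = random.Random(0)
series = [15.0 + 0.01 * t + 0.1 * rng.gauss(0, 1) for t in range(90)]
forecast = forecast_multi_step(series, steps=10)
print(forecast)
```

Note that errors compound across steps in such recursive schemes, which is why independent-sample validation, as in the abstract, matters.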
The Multi-Step CADIS method for shutdown dose rate calculations and uncertainty propagation
Ibrahim, Ahmad M.; Peplow, Douglas E.; Grove, Robert E.; Peterson, Joshua L.; Johnson, Seth R.
2015-12-01
Shutdown dose rate (SDDR) analysis requires (a) a neutron transport calculation to estimate neutron flux fields, (b) an activation calculation to compute radionuclide inventories and associated photon sources, and (c) a photon transport calculation to estimate final SDDR. In some applications, accurate full-scale Monte Carlo (MC) SDDR simulations are needed for very large systems with massive amounts of shielding materials. However, these simulations are impractical because calculation of space- and energy-dependent neutron fluxes throughout the structural materials is needed to estimate distribution of radioisotopes causing the SDDR. Biasing the neutron MC calculation using an importance function is not simple because it is difficult to explicitly express the response function, which depends on subsequent computational steps. Furthermore, the typical SDDR calculations do not consider how uncertainties in MC neutron calculation impact SDDR uncertainty, even though MC neutron calculation uncertainties usually dominate SDDR uncertainty.
SOPRA: Scaffolding algorithm for paired reads via statistical optimization
2010-01-01
Background High throughput sequencing (HTS) platforms produce gigabases of short read (<100 bp) data per run. While these short reads are adequate for resequencing applications, de novo assembly of moderate size genomes from such reads remains a significant challenge. These limitations could be partially overcome by utilizing mate pair technology, which provides pairs of short reads separated by a known distance along the genome. Results We have developed SOPRA, a tool designed to exploit the mate pair/paired-end information for assembly of short reads. The main focus of the algorithm is selecting a sufficiently large subset of simultaneously satisfiable mate pair constraints to achieve a balance between the size and the quality of the output scaffolds. Scaffold assembly is presented as an optimization problem for variables associated with vertices and with edges of the contig connectivity graph. Vertices of this graph are individual contigs with edges drawn between contigs connected by mate pairs. Similar graph problems have been invoked in the context of shotgun sequencing and scaffold building for previous generation of sequencing projects. However, given the error-prone nature of HTS data and the fundamental limitations from the shortness of the reads, the ad hoc greedy algorithms used in the earlier studies are likely to lead to poor quality results in the current context. SOPRA circumvents this problem by treating all the constraints on equal footing for solving the optimization problem, the solution itself indicating the problematic constraints (chimeric/repetitive contigs, etc.) to be removed. The process of solving and removing of constraints is iterated till one reaches a core set of consistent constraints. For SOLiD sequencer data, SOPRA uses a dynamic programming approach to robustly translate the color-space assembly to base-space. For assessing the quality of an assembly, we report the no-match/mismatch error rate as well as the rates of various
NASA Astrophysics Data System (ADS)
Marec, J. P.
The optimization of rendezvous and transfer orbits is introduced. Optimal transfer is defined and propulsion system modeling is outlined. Parameter optimization, including the Hohmann transfer, is discussed. Optimal transfer in general, uniform, and central gravitational fields is covered. Interplanetary rendezvous is treated.
NASA Astrophysics Data System (ADS)
Siepmann, J. Ilja; Bai, Peng; Tsapatsis, Michael; Knight, Chris; Deem, Michael W.
2015-03-01
Zeolites play numerous important roles in modern petroleum refineries and have the potential to advance the production of fuels and chemical feedstocks from renewable resources. The performance of a zeolite as separation medium and catalyst depends on its framework structure and the type or location of active sites. To date, 213 framework types have been synthesized and >330000 thermodynamically accessible zeolite structures have been predicted. Hence, identification of optimal zeolites for a given application from the large pool of candidate structures is attractive for accelerating the pace of materials discovery. Here we identify, through a large-scale, multi-step computational screening process, promising zeolite structures for two energy-related applications: the purification of ethanol beyond the ethanol/water azeotropic concentration in a single separation step from fermentation broths and the hydroisomerization of alkanes with 18-30 carbon atoms encountered in petroleum refining. These results demonstrate that predictive modeling and data-driven science can now be applied to solve some of the most challenging separation problems involving highly non-ideal mixtures and highly articulated compounds. Financial support from the Department of Energy Office of Basic Energy Sciences, Division of Chemical Sciences, Geosciences and Biosciences under Award DE-FG02-12ER16362 is gratefully acknowledged.
Optimization of composite structures
NASA Technical Reports Server (NTRS)
Stroud, W. J.
1982-01-01
Structural optimization is introduced and examples which illustrate potential problems associated with optimized structures are presented. Optimized structures may have very low load carrying ability for an off design condition. They tend to have multiple modes of failure occurring simultaneously and can, therefore, be sensitive to imperfections. Because composite materials provide more design variables than do metals, they allow for more refined tailoring and more extensive optimization. As a result, optimized composite structures can be especially susceptible to these problems.
NASA Astrophysics Data System (ADS)
Sima, Aleksandra Anna; Bonaventura, Xavier; Feixas, Miquel; Sbert, Mateu; Howell, John Anthony; Viola, Ivan; Buckley, Simon John
2013-03-01
Photorealistic 3D models are used for visualization, interpretation and spatial measurement in many disciplines, such as cultural heritage, archaeology and geoscience. Using modern image- and laser-based 3D modelling techniques, it is normal to acquire more data than is finally used for 3D model texturing, as images may be acquired from multiple positions, with large overlap, or with different cameras and lenses. Such redundant image sets require sorting to restrict the number of images, increasing the processing efficiency and realism of models. However, selection of image subsets optimized for texturing purposes is an example of complex spatial analysis. Manual selection may be challenging and time-consuming, especially for models of rugose topography, where the user must account for occlusions and ensure coverage of all relevant model triangles. To address this, this paper presents a framework for computer-aided image geometry analysis and subset selection for optimizing texture quality in photorealistic models. The framework was created to offer algorithms for candidate image subset selection, whilst supporting refinement of subsets in an intuitive and visual manner. Automatic image sorting was implemented using algorithms originating in computer science and information theory, and variants of these were compared using multiple 3D models and covering image sets, collected for geological applications. The image subsets provided by the automatic procedures were compared to manually selected sets and their suitability for 3D model texturing was assessed. Results indicate that the automatic sorting algorithms are a promising alternative to manual methods. An algorithm based on a greedy solution to the weighted set-cover problem provided image sets closest to the quality and size of the manually selected sets. The improved automation and more reliable quality indicators make the photorealistic model creation workflow more accessible for application experts
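The best-performing selector above is described as a greedy solution to the weighted set-cover problem. A minimal Python sketch of that classic greedy rule follows; the image names, visible-triangle sets, and costs are invented stand-ins for the paper's visibility analysis:

```python
def greedy_weighted_set_cover(universe, subsets, costs):
    """Greedy approximation to weighted set cover: repeatedly pick the
    subset with the lowest cost per newly covered element.

    universe: set of items to cover (e.g. model triangle ids)
    subsets:  dict name -> set of covered items (e.g. per-image visibility)
    costs:    dict name -> positive cost (e.g. inverse image quality)
    """
    uncovered = set(universe)
    chosen = []
    while uncovered:
        best, best_ratio = None, float("inf")
        for name, items in subsets.items():
            newly_covered = len(items & uncovered)
            if newly_covered == 0:
                continue
            ratio = costs[name] / newly_covered
            if ratio < best_ratio:
                best, best_ratio = name, ratio
        if best is None:
            raise ValueError("universe cannot be covered by the given subsets")
        chosen.append(best)
        uncovered -= subsets[best]
    return chosen

# Toy example: six model triangles, four candidate images (hypothetical data).
triangles = {1, 2, 3, 4, 5, 6}
visible = {"imgA": {1, 2, 3}, "imgB": {3, 4}, "imgC": {4, 5, 6}, "imgD": {1, 6}}
cost = {"imgA": 1.0, "imgB": 1.0, "imgC": 1.0, "imgD": 1.0}
picked = greedy_weighted_set_cover(triangles, visible, cost)
print(picked)
```

The greedy rule gives the well-known logarithmic approximation guarantee for set cover, which is usually adequate for selecting compact image subsets.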
Particle Swarm Optimization Toolbox
NASA Technical Reports Server (NTRS)
Grant, Michael J.
2010-01-01
The Particle Swarm Optimization Toolbox is a library of evolutionary optimization tools developed in the MATLAB environment. The algorithms contained in the library include a genetic algorithm (GA), a single-objective particle swarm optimizer (SOPSO), and a multi-objective particle swarm optimizer (MOPSO). Development focused on both the SOPSO and MOPSO. A GA was included mainly for comparison purposes, and the particle swarm optimizers appeared to perform better for a wide variety of optimization problems. All algorithms are capable of performing unconstrained and constrained optimization. The particle swarm optimizers are capable of performing single and multi-objective optimization. The SOPSO and MOPSO algorithms are based on swarming theory and bird-flocking patterns to search the trade space for the optimal solution or optimal trade in competing objectives. The MOPSO generates Pareto fronts for objectives that are in competition. A GA, based on Darwinian evolutionary theory, is also included in the library. The GA consists of individuals that form a population in the design space. The population mates to form offspring at new locations in the design space. These offspring contain traits from both parents, and the algorithm relies on this combination of traits to produce, ideally, a better solution than either parent. As the algorithm progresses, individuals that hold these optimal traits will emerge as the optimal solutions. Due to the generic design of all optimization algorithms, each algorithm interfaces with a user-supplied objective function. This function serves as a "black box" to the optimizers; its only purpose is to evaluate solutions provided by the optimizers. Hence, the user-supplied function can be numerical simulations, analytical functions, etc., since the specific detail of this function is of no concern to the optimizer. These algorithms were originally developed to support entry
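A single-objective particle swarm optimizer of the kind the toolbox describes can be sketched compactly. This is a generic textbook PSO that treats the objective as a black box, not the toolbox's MATLAB implementation; all parameter values below are assumptions:

```python
import random

def pso(objective, bounds, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal single-objective PSO: each particle tracks its personal best,
    and the swarm tracks a global best, both of which attract the velocities."""
    rng = random.Random(seed)
    dim = len(bounds)
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [objective(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                # Clamp positions to the feasible box.
                pos[i][d] = min(max(pos[i][d] + vel[i][d], bounds[d][0]), bounds[d][1])
            val = objective(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# The objective is a black box to the optimizer; here a simple sphere function.
best, best_val = pso(lambda x: sum(v * v for v in x), [(-5.0, 5.0)] * 3)
print(best_val)
```

Because each particle's objective evaluation is independent within an iteration, the loop over particles parallelizes naturally, matching the abstract's point about black-box objective functions.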
Ridzal, Danis
2007-03-01
Aristos is a Trilinos package for nonlinear continuous optimization, based on full-space sequential quadratic programming (SQP) methods. Aristos is specifically designed for the solution of large-scale constrained optimization problems in which the linearized constraint equations require iterative (i.e. inexact) linear solver techniques. Aristos' unique feature is an efficient handling of inexactness in linear system solves. Aristos currently supports the solution of equality-constrained convex and nonconvex optimization problems. It has been used successfully in the area of PDE-constrained optimization, for the solution of nonlinear optimal control, optimal design, and inverse problems.
Investigations into the Optimization of Multi-Source Strength Brachytherapy Treatment Procedures
D. L. Henderson; S. Yoo; B.R. Thomadsen
2002-09-30
The goal of this project is to investigate the use of multi-strength and multi-specie radioactive sources in permanent prostate implant brachytherapy. In order to fulfill the requirement for an optimal dose distribution, the prescribed dose should be delivered to the target in a nearly uniform dose distribution while simultaneously sparing sensitive structures. The treatment plan should use a small number of needles and sources while satisfying the treatment requirements. The hypothesis for the use of multi-strength and/or multi-specie sources is that a better treatment plan, using fewer sources and needles, could be obtained than with treatment plans using single-strength sources, reducing the overall number of sources used for treatment. We employ a recently developed greedy algorithm based on the adjoint concept as the optimization search engine. The algorithm utilizes an 'adjoint ratio', which provides a means of ranking source positions, as the pseudo-objective function. It has been shown that the greedy algorithm can solve the optimization problem efficiently and arrives at a clinically acceptable solution in less than 10 seconds. Our study was inconclusive; that is, no combination of sources clearly stood out from the others and could therefore be considered the preferred set of sources for treatment planning. Source strengths of 0.2 mCi (low), 0.4 mCi (medium), and 0.6 mCi (high) of {sup 125}I in four different combinations were used for the multi-strength source study. The combination of high- and medium-strength sources achieved a more uniform target dose distribution due to fewer source implants, whereas the combination of low- and medium-strength sources achieved better sparing of sensitive tissues, including that of the single-strength 0.4 mCi base case. {sup 125}I at 0.4 mCi and {sup 192}Ir at 0.12 mCi and 0.25 mCi source strengths were used for the multi-specie source study. This study also proved inconclusive. Treatment plans using a
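The greedy, re-ranked selection loop the abstract describes (score candidate positions by a ratio pseudo-objective, place the best, then re-score the remainder) can be sketched generically. The benefit/penalty callables and the toy one-dimensional numbers below are purely illustrative assumptions, not a dosimetry or adjoint model:

```python
def greedy_source_selection(candidates, benefit, penalty, n_sources):
    """Greedy ranking by a benefit/penalty ratio (the flavour of the paper's
    'adjoint ratio'): after each placement, the remaining candidates are
    re-scored via callables that may depend on the sources already chosen."""
    chosen = []
    remaining = list(candidates)
    for _ in range(n_sources):
        best = max(remaining, key=lambda c: benefit(c, chosen) / penalty(c, chosen))
        chosen.append(best)
        remaining.remove(best)
    return chosen

# Toy 1-D illustration: favour positions far from already-placed sources,
# and penalize proximity to a 'sensitive structure' at x = 5 (made-up numbers).
cands = list(range(10))
ben = lambda c, ch: 1.0 + (min(abs(c - s) for s in ch) if ch else 5.0)
pen = lambda c, ch: 1.0 + 0.2 * max(0, 3 - abs(c - 5))
plan = greedy_source_selection(cands, ben, pen, n_sources=3)
print(plan)
```

The key property, as in the abstract, is that the ranking changes after every placement, because each pick alters the scores of the remaining candidates.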
Multidisciplinary Optimization for Aerospace Using Genetic Optimization
NASA Technical Reports Server (NTRS)
Pak, Chan-gi; Hahn, Edward E.; Herrera, Claudia Y.
2007-01-01
In support of the ARMD guidelines, NASA's Dryden Flight Research Center is developing a multidisciplinary design and optimization tool. This tool will leverage existing tools and practices, and allow the easy integration and adoption of new state-of-the-art software. Optimization has made its way into many mainstream applications. For example, NASTRAN(TradeMark) has its solution sequence 200 for Design Optimization, and MATLAB(TradeMark) has an Optimization Toolbox. Other packages, such as the ZAERO(TradeMark) aeroelastic panel code and the CFL3D(TradeMark) Navier-Stokes solver, have no built-in optimizer. The goal of the tool development is to generate a central executive capable of using disparate software packages in a cross-platform network environment so as to quickly perform optimization and design tasks in a cohesive, streamlined manner. A provided figure (Figure 1) shows a typical set of tools and their relation to the central executive. Optimization can take place within each individual tool, or in a loop between the executive and the tool, or both.
NASA Astrophysics Data System (ADS)
Marec, J. P.
Techniques for the optimization (in terms of minimal mass loss) of spacecraft trajectories are developed. The optimal transfer is defined; a model of the propulsion system is presented; the two-impulse Hohmann transfer between coplanar circular orbits is shown to be the optimal trajectory for that case; and the problems of optimal transfer in general, uniform, and central gravitational fields are analyzed. A number of specific cases are examined and illustrated with diagrams and graphs.
McGuire-Snieckus, Rebecca
2014-04-01
Optimism is generally accepted by psychiatrists, psychologists and other caring professionals as a feature of mental health. Interventions typically rely on cognitive-behavioural tools to encourage individuals to 'stop negative thought cycles' and to 'challenge unhelpful thoughts'. However, evidence suggests that most individuals have persistent biases of optimism and that excessive optimism is not conducive to mental health. How helpful is it to facilitate optimism in individuals who are likely to exhibit biases of optimism already? By locating the cause of distress at the individual level and 'unhelpful' cognitions, does this minimise wider systemic social and economic influences on mental health?
Mikhalevich, V.S.; Sergienko, I.V.; Zadiraka, V.K.; Babich, M.D.
1994-11-01
This article examines some topics of optimization of computations, which have been discussed at 25 seminar-schools and symposia organized by the V.M. Glushkov Institute of Cybernetics of the Ukrainian Academy of Sciences since 1969. We describe the main directions in the development of computational mathematics and present some of our own results that reflect a certain design conception of speed-optimal and accuracy-optimal (or nearly optimal) algorithms for various classes of problems, as well as a certain approach to optimization of computer computations.
Optimization of parameterized lightpipes
NASA Astrophysics Data System (ADS)
Koshel, R. John
2007-01-01
Parameterization via the bend locus curve allows optimization of single-spherical-bend lightpipes. It takes into account the bend radii, the bend ratio, allowable volume, thickness, and other terms. Parameterization of the lightpipe allows the inclusion of a constrained optimizer to maximize performance of the lightpipe. The simplex method is used for optimization. The standard and optimal simplex methods are used to maximize the standard Lambertian transmission of the lightpipe. A second case presents analogous results when the ray-sample weighted, peak-to-average irradiance uniformity is included with the static Lambertian transmission. These results are compared to a study of the constrained merit space. Results show that both optimizers can locate the optimal solution, but the optimal simplex method accomplishes such with a reduced number of ray-trace evaluations.
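The simplex method referred to here is the derivative-free Nelder-Mead simplex (not the linear-programming simplex). A compact sketch follows, with a made-up two-parameter merit function standing in for the lightpipe transmission model and a soft penalty standing in for the constraints; every numeric value is an illustrative assumption:

```python
import math

def nelder_mead(f, x0, step=0.5, iters=200):
    """Compact Nelder-Mead simplex minimizer (reflection, expansion,
    inside contraction, shrink) for low-dimensional problems."""
    n = len(x0)
    # Initial simplex: x0 plus one perturbed vertex per dimension.
    simplex = [list(x0)] + [[x0[j] + (step if j == i else 0.0) for j in range(n)]
                            for i in range(n)]
    vals = [f(x) for x in simplex]
    for _ in range(iters):
        order = sorted(range(n + 1), key=lambda i: vals[i])
        simplex = [simplex[i] for i in order]
        vals = [vals[i] for i in order]
        centroid = [sum(x[j] for x in simplex[:-1]) / n for j in range(n)]
        worst = simplex[-1]
        refl = [2 * centroid[j] - worst[j] for j in range(n)]
        fr = f(refl)
        if fr < vals[0]:
            exp_pt = [3 * centroid[j] - 2 * worst[j] for j in range(n)]
            fe = f(exp_pt)
            simplex[-1], vals[-1] = (exp_pt, fe) if fe < fr else (refl, fr)
        elif fr < vals[-2]:
            simplex[-1], vals[-1] = refl, fr
        else:
            contr = [0.5 * (centroid[j] + worst[j]) for j in range(n)]
            fc = f(contr)
            if fc < vals[-1]:
                simplex[-1], vals[-1] = contr, fc
            else:  # shrink all vertices toward the best one
                for i in range(1, n + 1):
                    simplex[i] = [0.5 * (simplex[i][j] + simplex[0][j]) for j in range(n)]
                    vals[i] = f(simplex[i])
    i = min(range(n + 1), key=lambda i: vals[i])
    return simplex[i], vals[i]

# Hypothetical 'lightpipe' merit: maximize transmission (minimize its negative),
# with a soft penalty keeping both bend parameters inside [0.5, 2.0].
def merit(x):
    penalty = sum(max(0.0, 0.5 - v) + max(0.0, v - 2.0) for v in x) * 100.0
    transmission = math.exp(-((x[0] - 1.2) ** 2 + (x[1] - 0.8) ** 2))
    return -transmission + penalty

x, v = nelder_mead(merit, [1.0, 1.0])
print(x, v)
```

Because each merit evaluation in the real problem is a full ray trace, the abstract's point about minimizing the number of evaluations is exactly why a derivative-free simplex search is attractive here.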
NASA Technical Reports Server (NTRS)
Venter, Gerhard; Sobieszczanski-Sobieski, Jaroslaw
2002-01-01
The purpose of this paper is to show how the search algorithm known as particle swarm optimization performs. Here, particle swarm optimization is applied to structural design problems, but the method has a much wider range of possible applications. The paper's new contributions are improvements to the particle swarm optimization algorithm and conclusions and recommendations as to the utility of the algorithm. Results of numerical experiments for both continuous and discrete applications are presented in the paper. The results indicate that the particle swarm optimization algorithm does locate the constrained minimum design in continuous applications with very good precision, albeit at a much higher computational cost than that of a typical gradient-based optimizer. However, the true potential of particle swarm optimization is primarily in applications with discrete and/or discontinuous functions and variables. Additionally, particle swarm optimization has the potential of efficient computation with very large numbers of concurrently operating processors.
Integrated controls design optimization
Lou, Xinsheng; Neuschaefer, Carl H.
2015-09-01
A control system (207) for optimizing a chemical looping process of a power plant includes an optimizer (420), an income algorithm (230), a cost algorithm (225), and chemical looping process models. The process models are used to predict the process outputs from process input variables. Some of the process input and output variables are related to the income of the plant, and some others are related to the cost of the plant operations. The income algorithm (230) provides an income input to the optimizer (420) based on a plurality of input parameters (215) of the power plant. The cost algorithm (225) provides a cost input to the optimizer (420) based on a plurality of output parameters (220) of the power plant. The optimizer (420) determines an optimized operating parameter solution based on at least one of the income input and the cost input, and supplies the optimized operating parameter solution to the power plant.
Supercomputer optimizations for stochastic optimal control applications
NASA Technical Reports Server (NTRS)
Chung, Siu-Leung; Hanson, Floyd B.; Xu, Huihuang
1991-01-01
Supercomputer optimizations for a computational method of solving stochastic, multibody, dynamic programming problems are presented. The computational method is valid for a general class of optimal control problems that are nonlinear, multibody dynamical systems, perturbed by general Markov noise in continuous time, i.e., nonsmooth Gaussian as well as jump Poisson random white noise. Optimization techniques for vector multiprocessors or vectorizing supercomputers include advanced data structures, loop restructuring, loop collapsing, blocking, and compiler directives. These advanced computing techniques and supercomputing hardware help alleviate Bellman's curse of dimensionality in dynamic programming computations, by permitting the solution of large multibody problems. Possible applications include lumped flight dynamics models for uncertain environments, such as large scale and background random aerospace fluctuations.
Wheeler, Ward C
2003-08-01
The problem of determining the minimum cost hypothetical ancestral sequences for a given cladogram is known to be NP-complete (Wang and Jiang, 1994). Traditionally, point estimations of hypothetical ancestral sequences have been used to gain heuristic, upper bounds on cladogram cost. These include procedures with such diverse approaches as non-additive optimization of multiple sequence alignment, direct optimization (Wheeler, 1996), and fixed-state character optimization (Wheeler, 1999). A method is proposed here which, by extending fixed-state character optimization, replaces the estimation process with a search. This form of optimization examines a diversity of potential state solutions for cost-efficient hypothetical ancestral sequences and can result in greatly more parsimonious cladograms. Additionally, such an approach can be applied to other NP-complete phylogenetic optimization problems such as genomic break-point analysis. PMID:14531408
NASA Technical Reports Server (NTRS)
Wheeler, Ward C.
2003-01-01
The problem of determining the minimum cost hypothetical ancestral sequences for a given cladogram is known to be NP-complete (Wang and Jiang, 1994). Traditionally, point estimations of hypothetical ancestral sequences have been used to gain heuristic, upper bounds on cladogram cost. These include procedures with such diverse approaches as non-additive optimization of multiple sequence alignment, direct optimization (Wheeler, 1996), and fixed-state character optimization (Wheeler, 1999). A method is proposed here which, by extending fixed-state character optimization, replaces the estimation process with a search. This form of optimization examines a diversity of potential state solutions for cost-efficient hypothetical ancestral sequences and can result in greatly more parsimonious cladograms. Additionally, such an approach can be applied to other NP-complete phylogenetic optimization problems such as genomic break-point analysis. c2003 The Willi Hennig Society. Published by Elsevier Science (USA). All rights reserved.
Analog neural nonderivative optimizers.
Teixeira, M M; Zak, S H
1998-01-01
Continuous-time neural networks for solving convex nonlinear unconstrained programming problems without using gradient information of the objective function are proposed and analyzed. Thus, the proposed networks are nonderivative optimizers. First, networks for optimizing objective functions of one variable are discussed. Then, an existing one-dimensional optimizer is analyzed, and a new line search optimizer is proposed. It is shown that the proposed optimizer network is robust in the sense that it has disturbance rejection property. The network can be implemented easily in hardware using standard circuit elements. The one-dimensional net is used as a building block in multidimensional networks for optimizing objective functions of several variables. The multidimensional nets implement a continuous version of the coordinate descent method.
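The architecture described (a one-dimensional nonderivative optimizer used as the building block of a coordinate descent scheme) has a direct software analogue. In this sketch, golden-section search plays the role of the 1-D optimizer, and the convex test function is invented for illustration:

```python
import math

def golden_section(f, lo, hi, tol=1e-6):
    """Derivative-free 1-D minimizer (golden-section search), the software
    analogue of the paper's one-dimensional optimizer network."""
    inv_phi = (math.sqrt(5) - 1) / 2
    a, b = lo, hi
    c, d = b - inv_phi * (b - a), a + inv_phi * (b - a)
    fc, fd = f(c), f(d)
    while b - a > tol:
        if fc < fd:
            b, d, fd = d, c, fc
            c = b - inv_phi * (b - a)
            fc = f(c)
        else:
            a, c, fc = c, d, fd
            d = a + inv_phi * (b - a)
            fd = f(d)
    return (a + b) / 2

def coordinate_descent(f, x0, lo=-10.0, hi=10.0, sweeps=50):
    """Minimize f by repeated 1-D searches along each coordinate in turn."""
    x = list(x0)
    for _ in range(sweeps):
        for i in range(len(x)):
            x[i] = golden_section(lambda t: f(x[:i] + [t] + x[i + 1:]), lo, hi)
    return x

# Convex quadratic with coupled variables; exact minimizer is (1.6, -2.4).
f = lambda x: (x[0] - 1) ** 2 + (x[1] + 2) ** 2 + 0.5 * x[0] * x[1]
x = coordinate_descent(f, [0.0, 0.0])
print(x)
```

As in the networks of the paper, no gradient information is ever used; only objective-value comparisons drive both the 1-D search and the outer loop.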
Zhou, Zhi; de Bedout, Juan Manuel; Kern, John Michael; Biyik, Emrah; Chandra, Ramu Sharat
2013-01-22
A system for optimizing customer utility usage in a utility network of customer sites, each having one or more utility devices, where customer site information is communicated between each of the customer sites and an optimization server having software for optimizing customer utility usage over one or more networks, including private and public networks. A customer site model for each of the customer sites is generated based upon the customer site information, and the customer utility usage is optimized based upon the customer site information and the customer site model. The optimization server can be hosted by an external source or within the customer site. In addition, the optimization processing can be partitioned between the customer site and an external source.
Homotopy optimization methods for global optimization.
Dunlavy, Daniel M.; O'Leary, Dianne P. (University of Maryland, College Park, MD)
2005-12-01
We define a new method for global optimization, the Homotopy Optimization Method (HOM). This method differs from previous homotopy and continuation methods in that its aim is to find a minimizer for each of a set of values of the homotopy parameter, rather than to follow a path of minimizers. We define a second method, called HOPE, by allowing HOM to follow an ensemble of points obtained by perturbation of previous ones. We relate this new method to standard methods such as simulated annealing and show under what circumstances it is superior. We present results of extensive numerical experiments demonstrating performance of HOM and HOPE.
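The HOM idea can be sketched as follows: minimize the homotopy function h(x, lam) = (1 - lam)·g(x) + lam·f(x) for a sequence of lam values from 0 to 1, warm-starting each solve from the previous minimizer. The gradient-descent inner solver, the double-well target f, and the easy function g below are assumptions for illustration, not the paper's implementation.

```python
def num_grad(f, x, h=1e-6):
    """Central-difference numerical gradient."""
    g = []
    for i in range(len(x)):
        xp = x[:]; xp[i] += h
        xm = x[:]; xm[i] -= h
        g.append((f(xp) - f(xm)) / (2 * h))
    return g

def local_min(f, x, lr=0.05, iters=2000):
    """Plain gradient descent; stands in for any local minimizer."""
    for _ in range(iters):
        g = num_grad(f, x)
        x = [xi - lr * gi for xi, gi in zip(x, g)]
    return x

def hom(f, g, x0, steps=20):
    """Homotopy Optimization Method sketch: track a minimizer of
    h(x, lam) = (1 - lam)*g(x) + lam*f(x) as lam steps from 0 to 1."""
    x = local_min(g, x0)                      # easy problem first
    for k in range(1, steps + 1):
        lam = k / steps
        h_fun = lambda v, lam=lam: (1 - lam) * g(v) + lam * f(v)
        x = local_min(h_fun, x)               # warm start from previous
    return x

# Double-well target with global minimum near x = -1.04 (illustrative).
f = lambda v: (v[0] ** 2 - 1.0) ** 2 + 0.3 * v[0]
# Easy convex function whose minimizer is known.
g = lambda v: (v[0] + 1.0) ** 2
x_star = hom(f, g, [2.0])
```

HOPE would extend this loop by carrying an ensemble of perturbed iterates at each lam instead of a single point.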
Structural optimization using optimality criteria methods
NASA Technical Reports Server (NTRS)
Khot, N. S.; Berke, L.
1984-01-01
Optimality criteria methods take advantage of such concepts as those of statically determinate or indeterminate structures, and certain variational principles of structural dynamics, to develop efficient algorithms for the sizing of structures that are subjected to stiffness-related constraints. Some of the methods and iterative strategies developed over the last decade for calculating the Lagrange multipliers in stress- and displacement-limited problems, as well as for satisfying the appropriate optimality criterion, are discussed. The application of these methods is illustrated by solving problems with stress and displacement constraints.
Conceptual design optimization study
NASA Technical Reports Server (NTRS)
Hollowell, S. J.; Beeman, E. R., II; Hiyama, R. M.
1990-01-01
The feasibility of applying multilevel functional decomposition and optimization techniques to conceptual design of advanced fighter aircraft was investigated. Applying the functional decomposition techniques to the conceptual design phase appears to be feasible. The initial implementation of the modified design process will optimize wing design variables. A hybrid approach, combining functional decomposition techniques for generation of aerodynamic and mass properties linear sensitivity derivatives with existing techniques for sizing mission performance and optimization, is proposed.
Smoothers for Optimization Problems
NASA Technical Reports Server (NTRS)
Arian, Eyal; Ta'asan, Shlomo
1996-01-01
We present a multigrid one-shot algorithm, and a smoothing analysis, for the numerical solution of optimal control problems which are governed by an elliptic PDE. The analysis provides a simple tool to determine a smoothing minimization process which is essential for multigrid application. Numerical results include optimal control of boundary data using different discretization schemes and an optimal shape design problem in 2D with Dirichlet boundary conditions.
Control and optimization system
Xinsheng, Lou
2013-02-12
A system for optimizing a power plant includes a chemical loop having an input for receiving an input parameter (270) and an output for outputting an output parameter (280), a control system operably connected to the chemical loop and having a multiple controller part (230) comprising a model-free controller. The control system receives the output parameter (280), optimizes the input parameter (270) based on the received output parameter (280), and outputs an optimized input parameter (270) to the input of the chemical loop to control a process of the chemical loop in an optimized manner.
Stenholm, Ake; Holmström, Sara; Hjärthag, Sandra; Lind, Ola
2012-01-01
Trace-level analysis of alkylphenol polyethoxylates (APEOs) in wastewater containing sludge requires the prior removal of contaminants and preconcentration. In this study, the effects on optimal work-up procedures of the types of alkylphenols present, their degree of ethoxylation, the biofilm wastewater treatment and the sample matrix were investigated for these purposes. The sampling spot for APEO-containing specimens from an industrial wastewater treatment plant was optimized, including the addition of a box surrounding the tubing outlet carrying the wastewater, to prevent sedimented sludge from contaminating the collected samples. Following these changes, the sampling precision (in terms of dry matter content) at a point just under the tubing leading from the biofilm reactors was 0.7% RSD. The findings were applied to develop a work-up procedure for use prior to a high-performance liquid chromatography-fluorescence detection analysis method capable of quantifying nonylphenol polyethoxylates (NPEOs) and poorly investigated dinonylphenol polyethoxylates (DNPEOs) at low microg L(-1) concentrations in effluents from non-activated sludge biofilm reactors. The selected multi-step work-up procedure includes lyophilization and pressurized fluid extraction (PFE) followed by strong ion exchange solid phase extraction (SPE). The yields of the combined procedure, according to tests with NP10EO-spiked effluent from a wastewater treatment plant, were in the 62-78% range. PMID:22519096
Optimizing qubit phase estimation
NASA Astrophysics Data System (ADS)
Chapeau-Blondeau, François
2016-08-01
The theory of quantum state estimation is exploited here to investigate the most efficient strategies for this task, especially targeting a complete picture identifying optimal conditions in terms of Fisher information, quantum measurement, and associated estimator. The approach is specified to estimation of the phase of a qubit in a rotation around an arbitrary given axis, equivalent to estimating the phase of an arbitrary single-qubit quantum gate, both in noise-free and then in noisy conditions. In noise-free conditions, we establish the possibility of defining an optimal quantum probe, optimal quantum measurement, and optimal estimator together capable of achieving the ultimate best performance uniformly for any unknown phase. With arbitrary quantum noise, we show that in general the optimal solutions are phase dependent and require adaptive techniques for practical implementation. However, for the important case of the depolarizing noise, we again establish the possibility of a quantum probe, quantum measurement, and estimator uniformly optimal for any unknown phase. In this way, for qubit phase estimation, without and then with quantum noise, we characterize the phase-independent optimal solutions when they generally exist, and also identify the complementary conditions where the optimal solutions are phase dependent and only adaptively implementable.
Optimal Limited Contingency Planning
NASA Technical Reports Server (NTRS)
Meuleau, Nicolas; Smith, David E.
2003-01-01
For a given problem, the optimal Markov policy over a finite horizon is a conditional plan containing a potentially large number of branches. However, there are applications where it is desirable to strictly limit the number of decision points and branches in a plan. This raises the question of how one goes about finding optimal plans containing only a limited number of branches. In this paper, we present an any-time algorithm for optimal k-contingency planning. It is the first optimal algorithm for limited contingency planning that is not an explicit enumeration of possible contingent plans. By modelling the problem as a partially observable Markov decision process, it implements the Bellman optimality principle and prunes the solution space. We present experimental results of applying this algorithm to some simple test cases.
Algorithms for bilevel optimization
NASA Technical Reports Server (NTRS)
Alexandrov, Natalia; Dennis, J. E., Jr.
1994-01-01
General multilevel nonlinear optimization problems arise in design of complex systems and can be used as a means of regularization for multi-criteria optimization problems. Here, for clarity in displaying our ideas, we restrict ourselves to general bi-level optimization problems, and we present two solution approaches. Both approaches use a trust-region globalization strategy, and they can be easily extended to handle the general multilevel problem. We make no convexity assumptions, but we do assume that the problem has a nondegenerate feasible set. We consider necessary optimality conditions for the bi-level problem formulations and discuss results that can be extended to obtain multilevel optimization formulations with constraints at each level.
NASA Astrophysics Data System (ADS)
Dharmaseelan, Anoop; Adistambha, Keyne D.
2015-05-01
Fuel cost accounts for 40 percent of the operating cost of an airline. Fuel cost can be minimized by planning a flight on optimized routes. The routes can be optimized by searching for the best connections based on the cost function defined by the airline. The algorithm most commonly used to optimize route search is Dijkstra's, which produces a static result and takes a relatively long time to search. This paper experiments with a new algorithm to optimize route search that combines the principles of simulated annealing and genetic algorithms. The experimental route search results presented are shown to be computationally fast and accurate compared with timings from a genetic algorithm. The new algorithm is well suited to the random routing feature that is highly sought by many regional operators.
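For reference, the Dijkstra baseline the paper compares against can be sketched on a toy cost graph; the network and edge costs below are hypothetical, not the airline's data.

```python
import heapq

def dijkstra(graph, src, dst):
    """Least-cost path; graph maps node -> list of (neighbor, cost)."""
    dist = {src: 0.0}
    prev = {}
    pq = [(0.0, src)]
    seen = set()
    while pq:
        d, u = heapq.heappop(pq)
        if u in seen:
            continue
        seen.add(u)
        if u == dst:
            break
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(pq, (nd, v))
    # Reconstruct the path by walking predecessors back from dst.
    path, node = [], dst
    while node != src:
        path.append(node)
        node = prev[node]
    path.append(src)
    return list(reversed(path)), dist[dst]

# Hypothetical regional network; weights stand in for fuel-based costs.
net = {
    "A": [("B", 4.0), ("C", 2.0)],
    "C": [("B", 1.0), ("D", 7.0)],
    "B": [("D", 3.0)],
}
route, cost = dijkstra(net, "A", "D")
```

The result is deterministic for a fixed cost function, which is the "static" behavior the paper's stochastic hybrid is meant to improve on.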
Optimal TCSC placement for optimal power flow
NASA Astrophysics Data System (ADS)
Lakdja, Fatiha; Zohra Gherbi, Fatima; Berber, Redouane; Boudjella, Houari
2012-11-01
Very few publications have focused on the mathematical modeling of Flexible Alternating Current Transmission Systems (FACTS) devices in optimal power flow analysis. A Thyristor Controlled Series Capacitor (TCSC) model has been proposed, and the model has been implemented in a successive QP. The mathematical models for TCSC have been established, and the Optimal Power Flow (OPF) problem with these FACTS devices is solved by Newton's method. This article employs the Newton-based OPF-TCSC solver of the MATLAB simulator, so it is essential to understand the development of OPF and the suitability of Newton-based algorithms for solving the OPF-TCSC problem. The proposed concept was tested and validated with TCSC in a twenty-six-bus test system. Results show that when TCSC is used to relieve congestion in the system, the investment in TCSC can be recovered, supporting a new and original idea of integration.
Optimal control computer programs
NASA Technical Reports Server (NTRS)
Kuo, F.
1992-01-01
The solution of the optimal control problem, even with low-order dynamical systems, can usually strain the analytical ability of most engineers. The understanding of this subject matter would therefore be greatly enhanced if a software package existed that could simulate simple generic problems. Surprisingly, despite a great abundance of commercially available control software, few, if any, packages address optimal control in its most generic form. The purpose of this paper is, therefore, to present a simple computer program that performs simulations of optimal control problems that arise from the first necessary condition and Pontryagin's maximum principle.
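As a minimal illustration of simulating an optimal control problem from the first necessary condition, consider minimizing the integral of u²/2 over [0, 1] for x' = u with x(0) = 0 and x(1) = 1. Pontryagin's conditions give u = -p with a constant costate p, and a shooting method on the unknown initial costate recovers the analytic answer u = 1 (p = -1). This toy problem is an assumption for illustration, not the paper's software.

```python
def shoot(p0, steps=1000):
    """Forward-integrate x' = u = -p with p' = 0 (Pontryagin's
    conditions for min integral of u^2/2 dt); return x(1)."""
    x, p, dt = 0.0, p0, 1.0 / steps
    for _ in range(steps):
        u = -p          # stationarity of the Hamiltonian: dH/du = u + p = 0
        x += u * dt
    return x

def solve_bvp(target=1.0, lo=-10.0, hi=10.0, tol=1e-10):
    """Bisect on the initial costate until the terminal state hits target.
    Here x(1) = -p0, which is decreasing in p0."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if shoot(mid) < target:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

p0 = solve_bvp()   # analytic answer: p = -1, hence u = +1
```

Richer problems replace the hand-derived control law and the bisection with numerical integration of the full state-costate system and a multidimensional root finder, but the structure is the same.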
Thermophotovoltaic Array Optimization
SBurger; E Brown; K Rahner; L Danielson; J Openlander; J Vell; D Siganporia
2004-07-29
A systematic approach to thermophotovoltaic (TPV) array design and fabrication was used to optimize the performance of a 192-cell TPV array. The systematic approach began with cell selection criteria that ranked cells and then matched cell characteristics to maximize power output. Following cell selection, optimization continued with an array packaging design and fabrication techniques that introduced negligible electrical interconnect resistance and minimal parasitic losses while maintaining original cell electrical performance. This paper describes the cell selection and packaging aspects of array optimization as applied to fabrication of a 192-cell array.
2014-05-13
ROL provides interfaces to and implementations of algorithms for gradient-based unconstrained and constrained optimization. ROL can be used to optimize the response of any client simulation code that evaluates scalar-valued response functions. If the client code can provide gradient information for the response function, ROL will take advantage of it, resulting in faster runtimes. ROL's interfaces are matrix-free; in other words, ROL only uses evaluations of scalar-valued and vector-valued functions. ROL can be used to solve optimal design problems and inverse problems based on a variety of simulation software.
Contingency contractor optimization.
Gearhart, Jared Lee; Adair, Kristin Lynn; Jones, Katherine A.; Bandlow, Alisa; Durfee, Justin David.; Jones, Dean A.; Martin, Nathaniel; Detry, Richard Joseph; Nanco, Alan Stewart; Nozick, Linda Karen
2013-10-01
The goal of Phase 3 of the OSD ATL Contingency Contractor Optimization (CCO) project is to create an engineering prototype of a tool for the contingency contractor element of total force planning during the Support for Strategic Analysis (SSA). An optimization model was developed to determine the optimal mix of military, Department of Defense (DoD) civilians, and contractors that accomplishes a set of user-defined mission requirements at the lowest possible cost while honoring resource limitations and manpower use rules. An additional feature allows the model to capture the variability of the Total Force Mix when there is uncertainty in mission requirements.
Contingency contractor optimization.
Gearhart, Jared Lee; Adair, Kristin Lynn; Jones, Katherine A.; Bandlow, Alisa; Detry, Richard Joseph; Durfee, Justin David.; Jones, Dean A.; Martin, Nathaniel; Nanco, Alan Stewart; Nozick, Linda Karen
2013-06-01
The goal of Phase 3 of the OSD ATL Contingency Contractor Optimization (CCO) project is to create an engineering prototype of a tool for the contingency contractor element of total force planning during the Support for Strategic Analysis (SSA). An optimization model was developed to determine the optimal mix of military, Department of Defense (DoD) civilians, and contractors that accomplishes a set of user-defined mission requirements at the lowest possible cost while honoring resource limitations and manpower use rules. An additional feature allows the model to capture the variability of the Total Force Mix when there is uncertainty in mission requirements.
NASA Astrophysics Data System (ADS)
Furniss, S. G.
1989-10-01
While an SSTO with airbreathing propulsion for initial acceleration may greatly reduce future payload launch costs, such vehicles exhibit extreme sensitivity to design assumptions; the process of vehicle optimization is, accordingly, a difficult one. Attention is presently given to the role in optimization of the design mission, fuselage geometry, and the means employed to furnish adequate pitch and directional control. The requirements influencing wing design and scaling are also discussed. The Saenger and Hotol designs are the illustrative cases noted in this generalizing consideration of the SSTO-optimization process.
Library for Nonlinear Optimization
2001-10-09
OPT++ is a C++ object-oriented library for nonlinear optimization. It incorporates an improved implementation of an existing capability and two new algorithmic capabilities based on existing journal articles and freely available software.
Alicia Hofler; Pavel Evtushenko
2007-07-03
Injector gun design is an iterative process where the designer optimizes a few nonlinearly interdependent beam parameters to achieve the required beam quality for a particle accelerator. Few tools exist to automate the optimization process and thoroughly explore the parameter space. The challenging beam requirements of new accelerator applications such as light sources and electron cooling devices drive the development of RF and SRF photo injectors. A genetic algorithm (GA) has been successfully used to optimize DC photo injector designs at Cornell University [1] and Jefferson Lab [2]. We propose to apply GA techniques to the design of RF and SRF gun injectors. In this paper, we report on the initial phase of the study where we model and optimize a system that has been benchmarked with beam measurements and simulation.
General shape optimization capability
NASA Technical Reports Server (NTRS)
Chargin, Mladen K.; Raasch, Ingo; Bruns, Rudolf; Deuermeyer, Dawson
1991-01-01
A method is described for calculating shape sensitivities, within MSC/NASTRAN, in a simple manner without resort to external programs. The method uses natural design variables to define the shape changes in a given structure. Once the shape sensitivities are obtained, the shape optimization process is carried out in a manner similar to property optimization processes. The capability of this method is illustrated by two examples: the shape optimization of a cantilever beam with holes, loaded by a point load at the free end (with the shape of the holes and the thickness of the beam selected as the design variables), and the shape optimization of a connecting rod subjected to several different loading and boundary conditions.
A. S. Hofler; P. Evtushenko; M. Krasilnikov
2007-08-01
Injector gun design is an iterative process where the designer optimizes a few nonlinearly interdependent beam parameters to achieve the required beam quality for a particle accelerator. Few tools exist to automate the optimization process and thoroughly explore the parameter space. The challenging beam requirements of new accelerator applications such as light sources and electron cooling devices drive the development of RF and SRF photo injectors. RF and SRF gun design is further complicated because the bunches are space charge dominated and require additional emittance compensation. A genetic algorithm has been successfully used to optimize DC photo injector designs for Cornell* and Jefferson Lab**, and we propose studying how the genetic algorithm techniques can be applied to the design of RF and SRF gun injectors. In this paper, we report on the initial phase of the study where we model and optimize gun designs that have been benchmarked with beam measurements and simulation.
Topology optimized microbioreactors.
Schäpper, Daniel; Lencastre Fernandes, Rita; Lantz, Anna Eliasson; Okkels, Fridolin; Bruus, Henrik; Gernaey, Krist V
2011-04-01
This article presents the fusion of two hitherto unrelated fields--microbioreactors and topology optimization. The basis for this study is a rectangular microbioreactor with homogeneously distributed immobilized brewer's yeast cells (Saccharomyces cerevisiae) that produce a recombinant protein. Topology optimization is then used to change the spatial distribution of cells in the reactor in order to optimize for maximal product flow out of the reactor. This distribution accounts for potentially negative effects of, for example, by-product inhibition. We show that the theoretical improvement in productivity is at least fivefold compared with the homogeneous reactor. The improvements obtained by applying topology optimization are largest where either nutrition is scarce or inhibition effects are pronounced.
TOOLKIT FOR ADVANCED OPTIMIZATION
2000-10-13
The TAO project focuses on the development of software for large-scale optimization problems. TAO uses an object-oriented design to create a flexible toolkit with a strong emphasis on the reuse of external tools where appropriate. Our design enables bi-directional connection to lower-level linear algebra support (for example, parallel sparse matrix data structures) as well as higher-level application frameworks. The Toolkit for Advanced Optimization (TAO) is aimed at the solution of large-scale optimization problems on high-performance architectures. Our main goals are portability, performance, scalable parallelism, and an interface independent of the architecture. TAO is suitable for both single-processor and massively parallel architectures. The current version of TAO has algorithms for unconstrained and bound-constrained optimization.
Modeling using optimization routines
NASA Technical Reports Server (NTRS)
Thomas, Theodore
1995-01-01
Modeling using mathematical optimization is a design tool used in magnetic suspension system development. MATLAB software is used to calculate minimum cost while satisfying other desired constraints. The parameters to be measured are programmed into mathematical equations. MATLAB calculates answers for each set of inputs; the inputs cover the boundary limits of the design. A Magnetic Suspension System using Electromagnets Mounted in a Planar Array is a design system that makes use of optimization modeling.
Kawase, Mitsuhiro
2009-11-22
The zipped file contains a directory of data and routines used in the NNMREC turbine depth optimization study (Kawase et al., 2011), and calculation results thereof. For further info, please contact Mitsuhiro Kawase at kawase@uw.edu. Reference: Mitsuhiro Kawase, Patricia Beba, and Brian Fabien (2011), Finding an Optimal Placement Depth for a Tidal In-Stream Conversion Device in an Energetic, Baroclinic Tidal Channel, NNMREC Technical Report.
NASA Astrophysics Data System (ADS)
Wecker, Dave; Hastings, Matthew B.; Troyer, Matthias
2016-08-01
We study a variant of the quantum approximate optimization algorithm [E. Farhi, J. Goldstone, and S. Gutmann, arXiv:1411.4028] with a slightly different parametrization and a different objective: rather than looking for a state which approximately solves an optimization problem, our goal is to find a quantum algorithm that, given an instance of the maximum 2-satisfiability problem (MAX-2-SAT), will produce a state with high overlap with the optimal state. Using a machine learning approach, we chose a "training set" of instances and optimized the parameters to produce a large overlap for the training set. We then tested these optimized parameters on a larger instance set. As a training set, we used a subset of the hard instances studied by Crosson, Farhi, C. Y.-Y. Lin, H.-H. Lin, and P. Shor (CFLLS) (arXiv:1401.7320). When tested, on the full set, the parameters that we find produce a significantly larger overlap than the optimized annealing times of CFLLS. Testing on other random instances from 20 to 28 bits continues to show improvement over annealing, with the improvement being most notable on the hardest instances. Further tests on instances of MAX-3-SAT also showed improvement on the hardest instances. This algorithm may be a possible application for near-term quantum computers with limited coherence times.
Thermoacoustic Refrigerator's Stack Optimization
NASA Astrophysics Data System (ADS)
El-Fawal, Mawahib Hassan; Mohd-Ghazali, Normah; Yaacob, Mohd. Shafik; Darus, Amer Nordin
2010-06-01
The standing-wave thermoacoustic refrigerator, which uses sound generation to transfer heat, has developed rapidly during the past four decades. It is regarded as a new, promising, and environmentally benign alternative to conventional vapor-compression refrigerators, although it is not yet competitive in terms of the coefficient of performance (COP). Thus the aim of this paper is to enhance the thermoacoustic refrigerator's stack performance through optimization. A computational optimization procedure for thermoacoustic stack design was fully developed. The procedure was designed to achieve the optimal coefficient of performance based on most of the design and operating parameters. Cooling load and acoustic power governing equations were set assuming linear thermoacoustic theory. The method of Lagrange multipliers was used as the optimization tool to solve the governing equations. Numerical results of the developed design procedure are presented. The results showed that the stack design parameters are the most significant parameters for the optimal overall performance. The coefficient of performance obtained increases by about 48.8% over the published experimental optimization methods. The results are in good agreement with past established studies.
Cyclone performance and optimization
Leith, D.
1990-09-15
The objectives of this project are: to characterize the gas flow pattern within cyclones, to revise the theory for cyclone performance on the basis of these findings, and to design and test cyclones whose dimensions have been optimized using revised performance theory. This work is important because its successful completion will aid in the technology for combustion of coal in pressurized, fluidized beds. This quarter, an empirical model for predicting pressure drop across a cyclone was developed through a statistical analysis of pressure drop data for 98 cyclone designs. The model is shown to perform better than the pressure drop models of First (1950), Alexander (1949), Barth (1956), Stairmand (1949), and Shepherd-Lapple (1940). This model is used with the efficiency model of Iozia and Leith (1990) to develop an optimization curve which predicts the minimum pressure drop and the dimension ratios of the optimized cyclone for a given aerodynamic cut diameter, d{sub 50}. The effect of variation in cyclone height, cyclone diameter, and flow on the optimization curve is determined. The optimization results are used to develop a design procedure for optimized cyclones. 37 refs., 10 figs., 4 tabs.
Regularizing portfolio optimization
NASA Astrophysics Data System (ADS)
Still, Susanne; Kondor, Imre
2010-07-01
The optimization of large portfolios displays an inherent instability due to estimation error. This poses a fundamental problem, because solutions that are not stable under sample fluctuations may look optimal for a given sample, but are, in effect, very far from optimal with respect to the average risk. In this paper, we approach the problem from the point of view of statistical learning theory. The occurrence of the instability is intimately related to over-fitting, which can be avoided using known regularization methods. We show how regularized portfolio optimization with the expected shortfall as a risk measure is related to support vector regression. The budget constraint dictates a modification. We present the resulting optimization problem and discuss the solution. The L2 norm of the weight vector is used as a regularizer, which corresponds to a diversification 'pressure'. This means that diversification, besides counteracting downward fluctuations in some assets by upward fluctuations in others, is also crucial because it improves the stability of the solution. The approach we provide here allows for the simultaneous treatment of optimization and diversification in one framework that enables the investor to trade off between the two, depending on the size of the available dataset.
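The stabilizing effect of an L2 regularizer can be sketched with closed-form minimum-variance weights, w proportional to (Σ + ηI)⁻¹·1 under a budget constraint, rather than the paper's expected-shortfall/support-vector formulation; the covariance numbers below are illustrative, not market data.

```python
def solve(A, b):
    """Gaussian elimination with partial pivoting for small dense systems."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def min_var_weights(cov, ridge=0.0):
    """Minimize w'Σw + ridge*||w||^2 subject to sum(w) = 1;
    closed form: w proportional to (Σ + ridge*I)^(-1) 1."""
    n = len(cov)
    A = [[cov[i][j] + (ridge if i == j else 0.0) for j in range(n)]
         for i in range(n)]
    raw = solve(A, [1.0] * n)
    s = sum(raw)
    return [r / s for r in raw]

# Noisy sample covariance (illustrative numbers).
cov = [[0.040, 0.006, 0.012],
       [0.006, 0.090, 0.022],
       [0.012, 0.022, 0.160]]
w_plain = min_var_weights(cov)             # tends to concentrate weight
w_reg = min_var_weights(cov, ridge=0.05)   # diversification "pressure"
```

Increasing the ridge parameter pulls the weights toward the equal-weight portfolio, which is the diversification pressure described in the abstract.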
Optimization of Metronidazole Emulgel
Rao, Monica; Sukre, Girish; Aghav, Sheetal; Kumar, Manmeet
2013-01-01
The purpose of the present study was to develop and optimize the emulgel system for MTZ (metronidazole), a poorly water-soluble drug. The pseudoternary phase diagrams were developed for various microemulsion formulations composed of Capmul 908 P, Acconon MC8-2, and propylene glycol. The emulgel was optimized using a three-factor, two-level (2³) factorial design; the independent variables selected were Capmul 908 P, surfactant mixture (Acconon MC8-2), and gelling agent, and the dependent variables (responses) were the cumulative amount of drug permeated across the dialysis membrane in 24 h (Y1) and spreadability (Y2). Mathematical equations and response surface plots were used to relate the dependent and independent variables. The regression equations were generated for responses Y1 and Y2. The statistical validity of the polynomials was established, and optimized formulation factors were selected. Validation of the optimization study with 3 confirmatory runs indicated a high degree of prognostic ability of response surface methodology. The emulgel system of MTZ was developed and optimized using the 2³ factorial design and could provide an effective treatment against topical infections. PMID:26555982
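A two-level factorial analysis of the kind used above can be sketched generically as follows; the coded -1/+1 levels are standard, but the synthetic response values are illustrative assumptions, not the emulgel data.

```python
from itertools import product

def factorial_design(k):
    """All 2^k runs with coded factor levels -1/+1."""
    return list(product((-1, 1), repeat=k))

def main_effects(runs, y):
    """Main effect of factor j = mean(y at +1) - mean(y at -1)."""
    k = len(runs[0])
    effects = []
    for j in range(k):
        hi = [yi for r, yi in zip(runs, y) if r[j] == 1]
        lo = [yi for r, yi in zip(runs, y) if r[j] == -1]
        effects.append(sum(hi) / len(hi) - sum(lo) / len(lo))
    return effects

# Hypothetical responses for the 2^3 = 8 runs: a linear model
# y = 10 + 3a - 2b + 0.5c stands in for a measured response.
runs = factorial_design(3)
y = [10 + 3 * a - 2 * b + 0.5 * c for a, b, c in runs]
eff = main_effects(runs, y)
```

Each main effect is twice the corresponding regression coefficient in coded units, which is how the regression equations for Y1 and Y2 are built from the design runs.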
1998-07-01
GenOpt is a generic optimization program for nonlinear, constrained optimization. For evaluating the objective function, any simulation program that communicates over text files can be coupled to GenOpt without code modification. No analytic properties of the objective function are used by GenOpt. Optimization algorithms and numerical methods can be implemented in a library and shared among users. GenOpt offers an interface between the optimization algorithm and its kernel to make the implementation of new algorithms fast and easy. Different algorithms of constrained and unconstrained minimization can be added to a library. Algorithms for approximating derivatives and performing line searches will be implemented. The objective function is evaluated as a black-box function by an external simulation program. The kernel of GenOpt deals with data I/O, result storage and reporting, the interface to the external simulation program, and error handling. An abstract optimization class offers methods to interface the GenOpt kernel and the optimization algorithm library.
Optimization of Heat Exchangers
Ivan Catton
2010-10-01
The objective of this research is to develop tools to design and optimize heat exchangers (HE) and compact heat exchangers (CHE) for intermediate-loop heat transport systems found in the very high temperature reactor (VHTR) and other Generation IV designs by addressing heat transfer surface augmentation and conjugate modeling. To optimize a heat exchanger, a fast-running model must be created that allows multiple designs to be compared quickly. To model a heat exchanger, volume averaging theory (VAT) is used. VAT allows the conservation of mass, momentum, and energy to be solved point by point in a three-dimensional computer model of a heat exchanger. The end product of this project is a computer code that can predict an optimal configuration for a heat exchanger given only a few constraints (input fluids, size, cost, etc.). Because the VAT computer code can model heat exchanger characteristics (pumping power, temperatures, and cost) more quickly than traditional CFD or experiments, every geometric parameter can be optimized simultaneously. Using design of experiments (DOE) and genetic algorithms (GA) to optimize the results of the computer code will improve heat exchanger design.
NASA Technical Reports Server (NTRS)
Demmel, J.; Lafferriere, G.
1989-01-01
Consideration is given to the problem of optimal force distribution among three point fingers holding a planar object. A scheme that reduces the nonlinear optimization problem to an easily solved generalized eigenvalue problem is proposed. This scheme generalizes and simplifies results of Ji and Roth (1988). The generalizations include all possible geometric arrangements and extensions to three dimensions and to the case of variable coefficients of friction. For the two-dimensional case with constant coefficients of friction, it is proved that, except for some special cases, the optimal grasping forces (in the sense of minimizing the dependence on friction) are those for which the angles with the corresponding normals are all equal (in absolute value).
Optimal symmetric flight studies
NASA Technical Reports Server (NTRS)
Weston, A. R.; Menon, P. K. A.; Bilimoria, K. D.; Cliff, E. M.; Kelley, H. J.
1985-01-01
Several topics in optimal symmetric flight of airbreathing vehicles are examined. In one study, an approximation scheme designed for onboard real-time energy management of climb-dash is developed, and calculations for a high-performance aircraft are presented. In another, a vehicle model intermediate in complexity between energy and point-mass models is explored, and some quirks in optimal flight characteristics peculiar to the model are uncovered. In yet another study, energy-modelling procedures are re-examined with a view to stretching the range of validity of the zeroth-order approximation by special choice of state variables. In a final study, time-fuel tradeoffs in cruise-dash are examined for the consequences of nonconvexities appearing in the classical steady cruise-dash model. Two appendices provide retrospective looks at two early publications on energy modelling and related optimal control theory.
McMordie Stoughton, Kate; Duan, Xiaoli; Wendel, Emily M.
2013-08-26
This technology evaluation was prepared by Pacific Northwest National Laboratory on behalf of the U.S. Department of Energy’s Federal Energy Management Program (FEMP). The technology evaluation assesses techniques for optimizing reverse osmosis (RO) systems to increase RO system performance and water efficiency. This evaluation provides a general description of RO systems, the influence of RO systems on water use, and key areas where RO systems can be optimized to reduce water and energy consumption. The evaluation is intended to help facility managers at Federal sites understand the basic concepts of the RO process and system optimization options, enabling them to make informed decisions during the system design process for either new projects or recommissioning of existing equipment. This evaluation is focused on commercial-sized RO systems generally treating more than 80 gallons per hour.
Johnson, E.A.; Leung, C.; Schira, J.J.
1983-03-01
A closed loop timing optimization control for an internal combustion engine closed about the instantaneous rotational velocity of the engine's crankshaft is disclosed herein. The optimization control computes from the instantaneous rotational velocity of the engine's crankshaft, a signal indicative of the angle at which the crankshaft has a maximum rotational velocity for the torque impulses imparted to the engine's crankshaft by the burning of an air/fuel mixture in each of the engine's combustion chambers and generates a timing correction signal for each of the engine's combustion chambers. The timing correction signals, applied to the engine timing control, modifies the time at which the ignition signal, injection signals or both are generated such that the rotational velocity of the engine's crankshaft has a maximum value at a predetermined angle for each torque impulse generated optimizing the conversion of the combustion energy to rotational torque.
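The correction described in the patent abstract can be sketched as a pure signal-processing step: given samples of crankshaft velocity versus crank angle for one torque impulse, locate the angle of peak velocity and compare it with the predetermined target angle. The sampled waveform and target angle below are invented for illustration.

```python
import math

# Invented samples: crank angle (degrees) and instantaneous rotational
# velocity over one torque impulse; the true signal comes from a sensor.
angles = list(range(0, 91, 5))
velocities = [100.0 + 10.0 * math.sin(math.radians(2 * a)) for a in angles]

target_angle = 35.0  # predetermined angle for peak velocity (invented)

# Angle at which the measured velocity peaks (here 45 deg, since
# sin(2a) peaks at a = 45).
peak_angle = angles[velocities.index(max(velocities))]

# Timing correction: advance or retard spark/injection so the peak
# moves toward the target angle.
correction = target_angle - peak_angle
```

A real controller would apply this correction per cylinder and re-estimate the peak on every combustion event, closing the loop.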
Fuzzy logic controller optimization
Sepe, Jr., Raymond B; Miller, John Michael
2004-03-23
A method is provided for optimizing a rotating induction machine system fuzzy logic controller. The fuzzy logic controller has at least one input and at least one output. Each input accepts a machine system operating parameter. Each output produces at least one machine system control parameter. The fuzzy logic controller generates each output based on at least one input and on fuzzy logic decision parameters. Optimization begins by obtaining a set of data relating each control parameter to at least one operating parameter for each machine operating region. A model is constructed for each machine operating region based on the machine operating region data obtained. The fuzzy logic controller is simulated with at least one created model in a feedback loop from a fuzzy logic output to a fuzzy logic input. Fuzzy logic decision parameters are optimized based on the simulation.
Optimization of Combinatorial Mutagenesis
NASA Astrophysics Data System (ADS)
Parker, Andrew S.; Griswold, Karl E.; Bailey-Kellogg, Chris
Protein engineering by combinatorial site-directed mutagenesis evaluates a portion of the sequence space near a target protein, seeking variants with improved properties (stability, activity, immunogenicity, etc.). In order to improve the hit-rate of beneficial variants in such mutagenesis libraries, we develop methods to select optimal positions and corresponding sets of the mutations that will be used, in all combinations, in constructing a library for experimental evaluation. Our approach, OCoM (Optimization of Combinatorial Mutagenesis), encompasses both degenerate oligonucleotides and specified point mutations, and can be directed accordingly by requirements of experimental cost and library size. It evaluates the quality of the resulting library by one- and two-body sequence potentials, averaged over the variants. To ensure that it is not simply recapitulating extant sequences, it balances the quality of a library with an explicit evaluation of the novelty of its members. We show that, despite dealing with a combinatorial set of variants, in our approach the resulting library optimization problem is actually isomorphic to single-variant optimization. By the same token, this means that the two-body sequence potential results in an NP-hard optimization problem. We present an efficient dynamic programming algorithm for the one-body case and a practically-efficient integer programming approach for the general two-body case. We demonstrate the effectiveness of our approach in designing libraries for three different case study proteins targeted by previous combinatorial libraries - a green fluorescent protein, a cytochrome P450, and a beta lactamase. We found that OCoM worked quite efficiently in practice, requiring only 1 hour even for the massive design problem of selecting 18 mutations to generate 10⁷ variants of a 443-residue P450. We demonstrate the general ability of OCoM in enabling the protein engineer to explore and evaluate trade-offs between quality and
NASA Astrophysics Data System (ADS)
Klesh, Andrew T.
This dissertation studies optimal exploration, defined as the collection of information about given objects of interest by a mobile agent (the explorer) using imperfect sensors. The key aspects of exploration are kinematics (which determine how the explorer moves in response to steering commands), energetics (which determine how much energy is consumed by motion and maneuvers), informatics (which determine the rate at which information is collected) and estimation (which determines the states of the objects). These aspects are coupled by the steering decisions of the explorer. We seek to improve exploration by finding trade-offs amongst these couplings and the components of exploration: the Mission, the Path and the Agent. A comprehensive model of exploration is presented that, on one hand, accounts for these couplings and, on the other hand, is simple enough to allow analysis. This model is utilized to pose and solve several exploration problems where an objective function is to be minimized. Specific functions to be considered are the mission duration and the total energy. These exploration problems are formulated as optimal control problems and necessary conditions for optimality are obtained in the form of two-point boundary value problems. An analysis of these problems reveals characteristics of optimal exploration paths. Several regimes are identified for the optimal paths, including the Watchtower, Solar and Drag regimes, and several non-dimensional parameters are derived that determine the appropriate regime of travel. The so-called Power Ratio is shown to predict the qualitative features of the optimal paths, provide a metric to evaluate an aircraft's design and determine an aircraft's capability for flying perpetually. Optimal exploration system drivers are identified that provide perspective as to the importance of these various regimes of flight. A bank-to-turn solar-powered aircraft flying at constant altitude on Mars is used as a specific platform for
Distributed Optimization System
Hurtado, John E.; Dohrmann, Clark R.; Robinett, III, Rush D.
2004-11-30
A search system and method for controlling multiple agents to optimize an objective using distributed sensing and cooperative control. The search agent can be one or more physical agents, such as a robot, and can be software agents for searching cyberspace. The objective can be: chemical sources, temperature sources, radiation sources, light sources, evaders, trespassers, explosive sources, time dependent sources, time independent sources, function surfaces, maximization points, minimization points, and optimal control of a system such as a communication system, an economy, a crane, and a multi-processor computer.
Terascale Optimal PDE Simulations
David Keyes
2009-07-28
The Terascale Optimal PDE Solvers (TOPS) Integrated Software Infrastructure Center (ISIC) was created to develop and implement algorithms and support scientific investigations performed by DOE-sponsored researchers. These simulations often involve the solution of partial differential equations (PDEs) on terascale computers. The TOPS Center researched, developed and deployed an integrated toolkit of open-source, optimal complexity solvers for the nonlinear partial differential equations that arise in many DOE application areas, including fusion, accelerator design, global climate change and reactive chemistry. The algorithms created as part of this project were also designed to reduce current computational bottlenecks by orders of magnitude on terascale computers, enabling scientific simulation on a scale heretofore impossible.
NASA Astrophysics Data System (ADS)
Ouaknin, Gaddiel; Laachi, Nabil; Delaney, Kris; Fredrickson, Glenn; Gibou, Frederic
2016-03-01
Directed self-assembly using block copolymers for positioning vertical interconnect access in integrated circuits relies on the proper shape of a confined domain in which polymers will self-assemble into the targeted design. Finding that shape, i.e., solving the inverse problem, is currently mainly based on trial and error approaches. We introduce a level-set based algorithm that makes use of a shape optimization strategy coupled with self-consistent field theory to solve the inverse problem in an automated way. It is shown that optimal shapes are found for different targeted topologies with accurate placement and distances between the different components.
Optimal Quantum Phase Estimation
Dorner, U.; Smith, B. J.; Lundeen, J. S.; Walmsley, I. A.; Demkowicz-Dobrzanski, R.; Banaszek, K.; Wasilewski, W.
2009-01-30
By using a systematic optimization approach, we determine quantum states of light with definite photon number leading to the best possible precision in optical two-mode interferometry. Our treatment takes into account the experimentally relevant situation of photon losses. Our results thus reveal the benchmark for precision in optical interferometry. Although this boundary is generally worse than the Heisenberg limit, we show that the obtained precision beats the standard quantum limit, thus leading to a significant improvement compared to classical interferometers. We furthermore discuss alternative states and strategies to the optimized states which are easier to generate at the cost of only slightly lower precision.
Space-vehicle trajectories - Optimization
NASA Astrophysics Data System (ADS)
Marec, J. P.
The application of control-theory optimization techniques to the motion of powered vehicles in space is discussed in an analytical review. Problems addressed include the definition of optimal orbital transfer; propulsion-system modeling; parametric optimization and the Hohmann transfer; optimal transfer in general, uniform, and central gravitational fields; and interplanetary rendezvous. Typical numerical results are presented in graphs and briefly characterized.
Colas, Cyril; Garcia, Patrice; Popot, Marie-Agnès; Bonnaire, Yves; Bouchonnet, Stéphane
2008-02-01
Solid-phase extraction cartridges among those usually used for screening in horse doping analyses are tested to optimize the extraction of harpagoside (HS), harpagide (HG), and 8-para-coumaroyl harpagide (8PCHG) from plasma and urine. Extracts are analyzed by liquid chromatography coupled with multi-step tandem mass spectrometry. The extraction process retained for plasma applies BondElut PPL cartridges and provides extraction recoveries between 91% and 93%, with RSD values between 8 and 13% at 0.5 ng/mL. Two different procedures are needed to extract analytes from urine. HS and 8PCHG are extracted using AbsElut Nexus cartridges, with recoveries of 85% and 77%, respectively (RSD between 7% and 19%). The extraction of HG involves the use of two cartridges: BondElut PPL and BondElut C18 HF, with recovery of 75% and RSD between 14% and 19%. The applicability of the extraction methods is determined on authentic equine plasma and urine samples after harpagophytum or harpagoside administration. PMID:18366880
Kong, Xiangqian; Qin, Jie; Li, Zeng; Vultur, Adina; Tong, Linjiang; Feng, Enguang; Rajan, Geena; Liu, Shien; Lu, Junyan; Liang, Zhongjie; Zheng, Mingyue; Zhu, Weiliang; Jiang, Hualiang; Herlyn, Meenhard; Liu, Hong; Marmorstein, Ronen; Luo, Cheng
2012-01-01
Oncogenic mutations in critical nodes of cellular signaling pathways have been associated with tumorigenesis and progression. The B-Raf protein kinase, a key hub in the canonical MAPK signaling cascade, is mutated in a broad range of human cancers and especially in malignant melanoma. The most prevalent B-RafV600E mutant exhibits elevated kinase activity and results in constitutive activation of the MAPK pathway, thus making it a promising drug target for cancer therapy. Herein, we described the development of novel B-RafV600E selective inhibitors via multi-step virtual screening and hierarchical hit optimization. Nine hit compounds with low micromolar IC50 values were identified as B-RafV600E inhibitors through virtual screening. Subsequent scaffold-based analogue searching and medicinal chemistry efforts significantly improved both the inhibitor potency and oncogene selectivity. In particular, compounds 22f and 22q possess nanomolar IC50 values with selectivity for B-RafV600E in vitro and exclusive cytotoxicity against B-RafV600E harboring cancer cells. PMID:22875039
Toward Optimal Transport Networks
NASA Technical Reports Server (NTRS)
Alexandrov, Natalia; Kincaid, Rex K.; Vargo, Erik P.
2008-01-01
Strictly evolutionary approaches to improving the air transport system, a highly complex network of interacting systems, no longer suffice in the face of demand that is projected to double or triple in the near future. Thus evolutionary approaches should be augmented with active design methods. The ability to actively design, optimize and control a system presupposes the existence of predictive modeling and reasonably well-defined functional dependences between the controllable variables of the system and objective and constraint functions for optimization. Following recent advances in the studies of the effects of network topology structure on dynamics, we investigate the performance of dynamic processes on transport networks as a function of the first nontrivial eigenvalue of the network's Laplacian, which, in turn, is a function of the network's connectivity and modularity. The last two characteristics can be controlled and tuned via optimization. We consider design optimization problem formulations. We have developed a flexible simulation of network topology coupled with flows on the network for use as a platform for computational experiments.
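The "first nontrivial eigenvalue of the network's Laplacian" is the algebraic connectivity λ₂: it is zero exactly when the network is disconnected and grows as connectivity improves. A minimal sketch (the 3-node graphs are arbitrary toy examples, not the paper's transport networks):

```python
import numpy as np

def algebraic_connectivity(adj):
    """Second-smallest eigenvalue of the graph Laplacian L = D - A."""
    adj = np.asarray(adj, dtype=float)
    laplacian = np.diag(adj.sum(axis=1)) - adj
    eigvals = np.linalg.eigvalsh(laplacian)  # sorted ascending
    return eigvals[1]

# Path graph on 3 nodes (0 - 1 - 2): Laplacian spectrum {0, 1, 3},
# so the first nontrivial eigenvalue is 1.
path3 = [[0, 1, 0],
         [1, 0, 1],
         [0, 1, 0]]
lam2 = algebraic_connectivity(path3)
```

Adding the missing edge to form a triangle raises λ₂ from 1 to 3, which is the sense in which connectivity "can be controlled and tuned via optimization".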
ERIC Educational Resources Information Center
Homan, Michael; Worley, Penny
This course syllabus describes methods for optimizing online searching, using as an example searching on the National Library of Medicine (NLM) online system. Four major activities considered are the online interview, query analysis and search planning, online interaction, and post-search analysis. Within the context of these activities, concepts…
NASA Astrophysics Data System (ADS)
Huang, Siendong
2009-11-01
The nonlocality of quantum states on a bipartite system \mathcal{A}+\mathcal{B} is tested by comparing probabilistic outcomes of two local observables of different subsystems. For a fixed observable A of the subsystem \mathcal{A}, its optimal approximate double A' of the other system \mathcal{B} is defined such that the probabilistic outcomes of A' are almost similar to those of the fixed observable A. The case of σ-finite standard von Neumann algebras is considered and the optimal approximate double A' of an observable A is explicitly determined. The connection between optimal approximate doubles and quantum correlations is explained. Inspired by quantum states with perfect correlation, like Einstein-Podolsky-Rosen states and Bohm states, the nonlocality power of an observable A for general quantum states is defined as the similarity that the outcomes of A look like the properties of the subsystem \mathcal{B} corresponding to A'. As an application of optimal approximate doubles, the maximal Bell correlation of a pure entangled state on \mathcal{B}(\mathbb{C}^2)\otimes\mathcal{B}(\mathbb{C}^2) is found explicitly.
Optimization of digital designs
NASA Technical Reports Server (NTRS)
Whitaker, Sterling R. (Inventor); Miles, Lowell H. (Inventor)
2009-01-01
An application specific integrated circuit is optimized by translating a first representation of its digital design to a second representation. The second representation includes multiple syntactic expressions that admit a representation of a higher-order function of base Boolean values. The syntactic expressions are manipulated to form a third representation of the digital design.
Fourier Series Optimization Opportunity
ERIC Educational Resources Information Center
Winkel, Brian
2008-01-01
This note discusses the introduction of Fourier series as an immediate application of optimization of a function of more than one variable. Specifically, it is shown how the study of Fourier series can be motivated to enrich a multivariable calculus class. This is done through discovery learning and use of technology wherein students build the…
ERIC Educational Resources Information Center
Cody, Martin L.
1974-01-01
Discusses the optimality of natural selection, ways of testing for optimum solutions to problems of time- or energy-allocation in nature, optimum patterns in spatial distribution and diet breadth, and how best to travel over a feeding area so that food intake is maximized. (JR)
Optimal ciliary beating patterns
NASA Astrophysics Data System (ADS)
Vilfan, Andrej; Osterman, Natan
2011-11-01
We introduce a measure for energetic efficiency of single or collective biological cilia. We define the efficiency of a single cilium as Q²/P, where Q is the volume flow rate of the pumped fluid and P is the dissipated power. For ciliary arrays, we define it as (ρQ)²/(ρP), with ρ denoting the surface density of cilia. We then numerically determine the optimal beating patterns according to this criterion. For a single cilium optimization leads to curly, somewhat counterintuitive patterns. But when looking at a densely ciliated surface, the optimal patterns become remarkably similar to what is observed in microorganisms like Paramecium. The optimal beating pattern then consists of a fast effective stroke and a slow sweeping recovery stroke. Metachronal waves lead to a significantly higher efficiency than synchronous beating. Efficiency also increases with an increasing density of cilia up to the point where crowding becomes a problem. We finally relate the pumping efficiency of cilia to the swimming efficiency of a spherical microorganism and show that the experimentally estimated efficiency of Paramecium is surprisingly close to the theoretically possible optimum.
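The two efficiency measures above collapse into one scaling relation: (ρQ)²/(ρP) = ρ·Q²/P, so at fixed per-cilium flow and power the collective efficiency grows linearly with density (until crowding breaks the fixed-Q assumption). A sketch with invented numbers; Q, P, and ρ are arbitrary illustrative values, not measured data.

```python
def single_efficiency(Q, P):
    """Efficiency of a single cilium: Q^2 / P."""
    return Q**2 / P

def array_efficiency(rho, Q, P):
    """Efficiency of a ciliary array: (rho*Q)^2 / (rho*P)."""
    return (rho * Q) ** 2 / (rho * P)

# Arbitrary illustrative values: volume flow rate Q, dissipated
# power P, surface density rho of cilia.
Q, P, rho = 2.0, 5.0, 10.0

single = single_efficiency(Q, P)          # 0.8
collective = array_efficiency(rho, Q, P)  # 8.0 = rho * single
```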
Optimizing Conferencing Freeware
ERIC Educational Resources Information Center
Baggaley, Jon; Klaas, Jim; Wark, Norine; Depow, Jim
2005-01-01
The increasing range of options provided by two popular conferencing freeware products, "Yahoo Messenger" and "MSN Messenger," are discussed. Each tool contains features designed primarily for entertainment purposes, which can be customized for use in online education. This report provides suggestions for optimizing the educational potential of…
Accelerating Lead Compound Optimization.
Poh, Alissa
2016-04-01
Chemists at The Scripps Research Institute in La Jolla, CA, and Pfizer's La Jolla Laboratories have devised a new way to rapidly synthesize strained-ring structures, which are increasingly favored to optimize potential drugs. With this method, strain-release amination, Pfizer researchers were able to produce sufficient quantities of a particular structure they needed to evaluate a promising cancer drug candidate.
ERIC Educational Resources Information Center
Simmons, Joseph P.; Massey, Cade
2012-01-01
Is optimism real, or are optimistic forecasts just cheap talk? To help answer this question, we investigated whether optimistic predictions persist in the face of large incentives to be accurate. We asked National Football League football fans to predict the winner of a single game. Roughly half (the partisans) predicted a game involving their…
ERIC Educational Resources Information Center
Rebilas, Krzysztof
2013-01-01
Consider a skier who goes down a takeoff ramp, attains a speed V, and jumps, attempting to land as far as possible down the hill below (Fig. 1). At the moment of takeoff the angle between the skier's velocity and the horizontal is α. What is the optimal angle α that makes the jump the longest possible for the fixed magnitude of the…
Optimization of Systran System.
ERIC Educational Resources Information Center
Toma, Peter P.; And Others
This report describes an optimization phase of the SYSTRAN (System Translation) machine translation technique. The most distinctive characteristic of SYSTRAN is the absence of pre-editing; the program reads tapes containing raw and unedited Russian texts, carries out dictionary and table lookups, performs all syntactic analysis procedures, and…
Optimization in Cardiovascular Modeling
NASA Astrophysics Data System (ADS)
Marsden, Alison L.
2014-01-01
Fluid mechanics plays a key role in the development, progression, and treatment of cardiovascular disease. Advances in imaging methods and patient-specific modeling now reveal increasingly detailed information about blood flow patterns in health and disease. Building on these tools, there is now an opportunity to couple blood flow simulation with optimization algorithms to improve the design of surgeries and devices, incorporating more information about the flow physics in the design process to augment current medical knowledge. In doing so, a major challenge is the need for efficient optimization tools that are appropriate for unsteady fluid mechanics problems, particularly for the optimization of complex patient-specific models in the presence of uncertainty. This article reviews the state of the art in optimization tools for virtual surgery, device design, and model parameter identification in cardiovascular flow and mechanobiology applications. In particular, it reviews trade-offs between traditional gradient-based methods and derivative-free approaches, as well as the need to incorporate uncertainties. Key future challenges are outlined, which extend to the incorporation of biological response and the customization of surgeries and devices for individual patients.
Optimal GENCO bidding strategy
NASA Astrophysics Data System (ADS)
Gao, Feng
Electricity industries worldwide are undergoing a period of profound upheaval. The conventional vertically integrated mechanism is being replaced by a competitive market environment. Generation companies have incentives to apply novel technologies to lower production costs, for example Combined Cycle units. Economic dispatch with Combined Cycle units becomes a non-convex optimization problem, which is difficult if not impossible to solve by conventional methods. Several techniques are proposed here: Mixed Integer Linear Programming, a hybrid method, as well as Evolutionary Algorithms. Evolutionary Algorithms share a common mechanism, a stochastic search at each generation. The stochastic property makes evolutionary algorithms robust and adaptive enough to solve a non-convex optimization problem. This research implements GA, EP, and PS algorithms for economic dispatch with Combined Cycle units, and makes a comparison with classical Mixed Integer Linear Programming. The electricity market equilibrium model not only helps the Independent System Operator/Regulator analyze market performance and market power, but also provides Market Participants the ability to build optimal bidding strategies based on microeconomic analysis. Supply Function Equilibrium (SFE) is attractive compared to traditional models. This research identifies a proper SFE model, which can be applied to a multiple-period situation. The equilibrium condition using discrete-time optimal control is then developed for fuel resource constraints. Finally, the research discusses the issues of multiple equilibria and mixed strategies, which are caused by the transmission network. Additionally, an advantage of the proposed model for merchant transmission planning is discussed. A market simulator is a valuable training and evaluation tool to assist sellers, buyers, and regulators to understand market performance and make better decisions. A traditional optimization model may not be enough to consider the distributed
(Too) optimistic about optimism: the belief that optimism improves performance.
Tenney, Elizabeth R; Logg, Jennifer M; Moore, Don A
2015-03-01
A series of experiments investigated why people value optimism and whether they are right to do so. In Experiments 1A and 1B, participants prescribed more optimism for someone implementing decisions than for someone deliberating, indicating that people prescribe optimism selectively, when it can affect performance. Furthermore, participants believed optimism improved outcomes when a person's actions had considerable, rather than little, influence over the outcome (Experiment 2). Experiments 3 and 4 tested the accuracy of this belief; optimism improved persistence, but it did not improve performance as much as participants expected. Experiments 5A and 5B found that participants overestimated the relationship between optimism and performance even when their focus was not on optimism exclusively. In summary, people prescribe optimism when they believe it has the opportunity to improve the chance of success; unfortunately, people may be overly optimistic about just how much optimism can do.
NASA Astrophysics Data System (ADS)
Spagnolie, Saverio E.; Lauga, Eric
2010-03-01
Motile eukaryotic cells propel themselves in viscous fluids by passing waves of bending deformation down their flagella. An infinitely long flagellum achieves a hydrodynamically optimal low-Reynolds number locomotion when the angle between its local tangent and the swimming direction remains constant along its length. Optimal flagella therefore adopt the shape of a helix in three dimensions (smooth) and that of a sawtooth in two dimensions (nonsmooth). Physically, biological organisms (or engineered microswimmers) must expend internal energy in order to produce the waves of deformation responsible for the motion. Here we propose a physically motivated derivation of the optimal flagellum shape. We determine analytically and numerically the shape of the flagellar wave which leads to the fastest swimming for a given appropriately defined energetic expenditure. Our novel approach is to define an energy which includes not only the work against the surrounding fluid, but also (1) the energy stored elastically in the bending of the flagellum, (2) the energy stored elastically in the internal sliding of the polymeric filaments which are responsible for the generation of the bending waves (microtubules), and (3) the viscous dissipation due to the presence of an internal fluid. This approach regularizes the optimal sawtooth shape for two-dimensional deformation at the expense of a small loss in hydrodynamic efficiency. The optimal waveforms of finite-size flagella are shown to depend on a competition between rotational motions and bending costs, and we observe a surprising bias toward half-integer wave numbers. Their final hydrodynamic efficiencies are above 6%, significantly larger than those of swimming cells, therefore indicating available room for further biological tuning.
An optimal structural design algorithm using optimality criteria
NASA Technical Reports Server (NTRS)
Taylor, J. E.; Rossow, M. P.
1976-01-01
An algorithm for optimal design is given which incorporates several of the desirable features of both mathematical programming and optimality criteria, while avoiding some of the undesirable features. The algorithm proceeds by approaching the optimal solution through the solutions of an associated set of constrained optimal design problems. The solutions of the constrained problems are recognized at each stage through the application of optimality criteria based on energy concepts. Two examples are described in which the optimal member size and layout of a truss is predicted, given the joint locations and loads.
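The paper's energy-based criteria are not reproduced here, but the flavor of optimality-criteria methods can be sketched with the classical stress-ratio resizing rule for a statically determinate truss: each member area is rescaled by the ratio of its stress to the allowable stress until the criterion |σᵢ| = σ_allow holds. The member forces and allowable stress below are hypothetical values for illustration.

```python
# Stress-ratio resizing: a simple optimality-criteria-style iteration.
# For a statically determinate truss the member forces do not change
# with member areas, so the update a_i <- a_i * |sigma_i| / sigma_allow
# converges to the fully stressed design a_i = |F_i| / sigma_allow.

forces = [1000.0, -750.0, 500.0]   # hypothetical member forces (N)
sigma_allow = 100.0                # hypothetical allowable stress (N/mm^2)
areas = [1.0, 1.0, 1.0]            # initial member areas (mm^2)

for _ in range(20):  # iterate until the optimality criterion holds
    stresses = [f / a for f, a in zip(forces, areas)]
    areas = [a * abs(s) / sigma_allow for a, s in zip(areas, stresses)]

# Every member now sits exactly at the allowable stress.
```

In the determinate case the rule converges in one step; the interest of optimality-criteria methods, as in the paper, lies in indeterminate structures where forces redistribute as sizes change.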
Optimization of combinatorial mutagenesis.
Parker, Andrew S; Griswold, Karl E; Bailey-Kellogg, Chris
2011-11-01
Protein engineering by combinatorial site-directed mutagenesis evaluates a portion of the sequence space near a target protein, seeking variants with improved properties (e.g., stability, activity, immunogenicity). In order to improve the hit-rate of beneficial variants in such mutagenesis libraries, we develop methods to select optimal positions and corresponding sets of the mutations that will be used, in all combinations, in constructing a library for experimental evaluation. Our approach, OCoM (Optimization of Combinatorial Mutagenesis), encompasses both degenerate oligonucleotides and specified point mutations, and can be directed accordingly by requirements of experimental cost and library size. It evaluates the quality of the resulting library by one- and two-body sequence potentials, averaged over the variants. To ensure that it is not simply recapitulating extant sequences, it balances the quality of a library with an explicit evaluation of the novelty of its members. We show that, despite dealing with a combinatorial set of variants, in our approach the resulting library optimization problem is actually isomorphic to single-variant optimization. By the same token, this means that the two-body sequence potential results in an NP-hard optimization problem. We present an efficient dynamic programming algorithm for the one-body case and a practically-efficient integer programming approach for the general two-body case. We demonstrate the effectiveness of our approach in designing libraries for three different case study proteins targeted by previous combinatorial libraries--a green fluorescent protein, a cytochrome P450, and a beta lactamase. We found that OCoM worked quite efficiently in practice, requiring only 1 hour even for the massive design problem of selecting 18 mutations to generate 10⁷ variants of a 443-residue P450. We demonstrate the general ability of OCoM in enabling the protein engineer to explore and evaluate trade-offs between quality and
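The one-body dynamic program mentioned above can be sketched as follows. A one-body potential is additive over positions, while the library size is the product of the per-position mutation-set sizes, so a DP can sweep positions while tracking the best attainable score for each feasible library size. The positions, candidate sets, and scores below are invented toy data, not OCoM's actual potentials.

```python
# Toy OCoM-style one-body DP: choose one mutation set per position to
# maximize the total (additive) one-body score, subject to a cap on
# library size, which is the PRODUCT of the chosen set sizes.

# (set_size, score) options per position -- invented toy data.
options = [
    [(1, 0.2), (2, 0.9), (3, 1.0)],   # position 1
    [(1, 0.5), (2, 0.6)],             # position 2
]
size_cap = 4

# dp maps an attainable library size -> best total score so far.
dp = {1: 0.0}
for pos in options:
    nxt = {}
    for size, score in dp.items():
        for set_size, set_score in pos:
            new_size = size * set_size
            if new_size <= size_cap:
                cand = score + set_score
                if cand > nxt.get(new_size, float("-inf")):
                    nxt[new_size] = cand
    dp = nxt

best = max(dp.values())
```

Only library sizes that are products of set sizes ever appear as keys, which keeps the table small; the two-body case loses this independence between positions, which is why it becomes NP-hard.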
Optimal Electric Utility Expansion
1989-10-10
SAGE-WASP is designed to find the optimal generation expansion policy for an electrical utility system. New units can be automatically selected from a user-supplied list of expansion candidates which can include hydroelectric and pumped storage projects. The existing system is modeled. The calculational procedure takes into account user restrictions to limit generation configurations to an area of economic interest. The optimization program reports whether the restrictions acted as a constraint on the solution. All expansion configurations considered are required to pass a user-supplied reliability criterion. The discount rate and escalation rate are treated separately for each expansion candidate and for each fuel type. All expenditures are separated into local and foreign accounts, and a weighting factor can be applied to foreign expenditures.
Cyclone performance and optimization
Leith, D.
1990-06-15
The objectives of this project are: to characterize the gas flow pattern within cyclones, to revise the theory for cyclone performance on the basis of these findings, and to design and test cyclones whose dimensions have been optimized using revised performance theory. This work is important because its successful completion will aid in the technology for combustion of coal in pressurized, fluidized beds. During the past quarter, we have nearly completed modeling work that employs the flow field measurements made during the past six months. In addition, we have begun final work using the results of this project to develop improved design methods for cyclones. This work involves optimization using the Iozia-Leith efficiency model and the Dirgo pressure drop model. This work will be completed this summer. 9 figs.
NEMO Oceanic Model Optimization
NASA Astrophysics Data System (ADS)
Epicoco, I.; Mocavero, S.; Murli, A.; Aloisio, G.
2012-04-01
NEMO is an oceanic model used by the climate community for stand-alone or coupled experiments. Its parallel implementation, based on MPI, limits the exploitation of emerging computational infrastructures at peta- and exascale, due to the weight of communications. As a case study we considered the MFS configuration developed at INGV, with a resolution of 1/16° tailored to the Mediterranean Basin. The work focuses on the analysis of the code on the MareNostrum cluster and on the optimization of critical routines. The first performance analysis of the model aimed at establishing how much the computational performance is influenced by the GPFS file system or the local disks, and which is the best domain decomposition. The results highlight that the exploitation of local disks can reduce the wall clock time by up to 40% and that the best performance is achieved with a 2D decomposition when the local domain has a square shape. A deeper performance analysis highlights that the obc_rad, dyn_spg and tra_adv routines are the most time consuming. The obc_rad routine implements the evaluation of the open boundaries and was the first to be optimized. Its communication pattern has been redesigned: before the introduction of the optimizations all processes were involved in the communication, but only the processes on the boundaries hold data that must actually be exchanged. Moreover, the data along the vertical levels are packed and sent with a single MPI_Send invocation. The overall efficiency increases compared with the original version, as does the parallel speed-up. The execution time was reduced by about 33.81%. The second phase of optimization involved the SOR solver routine, implementing the Red-Black Successive-Over-Relaxation method, where the high frequency of data exchange among processes accounts for most of the overall communication time. The number of communication is
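The observation that a square local domain performs best can be reproduced with a toy decomposition chooser (illustrative Python, not NEMO code; the halo perimeter is used as a proxy for per-process message volume):

```python
def best_decomposition(nx, ny, nprocs):
    """Pick the 2D processor grid (px, py) that minimizes the halo
    perimeter of each local subdomain. For a fixed local area, a
    square-ish subdomain minimizes the perimeter, hence the halo
    exchange volume."""
    best = None
    for px in range(1, nprocs + 1):
        if nprocs % px:
            continue  # only exact factorizations of the process count
        py = nprocs // px
        lx = -(-nx // px)  # ceil division: local subdomain extents
        ly = -(-ny // py)
        perim = 2 * (lx + ly)
        if best is None or perim < best[0]:
            best = (perim, px, py)
    return best[1], best[2]
```

For a 1024x1024 grid on 16 processes the 4x4 grid (256x256 local domains) beats the 1x16 and 2x8 slab-like layouts.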
Córdova, Natalia; Yee, Debbie; Barto, Andrew G.; Niv, Yael; Botvinick, Matthew M.
2014-01-01
Human behavior has long been recognized to display hierarchical structure: actions fit together into subtasks, which cohere into extended goal-directed activities. Arranging actions hierarchically has well established benefits, allowing behaviors to be represented efficiently by the brain, and allowing solutions to new tasks to be discovered easily. However, these payoffs depend on the particular way in which actions are organized into a hierarchy, the specific way in which tasks are carved up into subtasks. We provide a mathematical account for what makes some hierarchies better than others, an account that allows an optimal hierarchy to be identified for any set of tasks. We then present results from four behavioral experiments, suggesting that human learners spontaneously discover optimal action hierarchies. PMID:25122479
Optimizing Thomson's jumping ring
NASA Astrophysics Data System (ADS)
Tjossem, Paul J. H.; Brost, Elizabeth C.
2011-04-01
The height to which rings will jump in a Thomson jumping ring apparatus is the central question posed by this popular lecture demonstration. We develop a simple time-averaged inductive-phase-lag model for the dependence of the jump height on the ring material, its mass, and temperature and apply it to measurements of the jump height for a set of rings made by slicing copper and aluminum alloy pipe into varying lengths. The data confirm a peak jump height that grows, narrows, and shifts to smaller optimal mass when the rings are cooled to 77 K. The model explains the ratio of the cooled/warm jump heights for a given ring, the reduction in optimal mass as the ring is cooled, and the shape of the mass resonance. The ring that jumps the highest is found to have a characteristic resistance equal to the inductive reactance of the set of rings.
Heliostat cost optimization study
NASA Astrophysics Data System (ADS)
von Reeken, Finn; Weinrebe, Gerhard; Keck, Thomas; Balz, Markus
2016-05-01
This paper presents a methodology for a heliostat cost optimization study. First different variants of small, medium sized and large heliostats are designed. Then the respective costs, tracking and optical quality are determined. For the calculation of optical quality a structural model of the heliostat is programmed and analyzed using finite element software. The costs are determined based on inquiries and from experience with similar structures. Eventually the levelised electricity costs for a reference power tower plant are calculated. Before each annual simulation run the heliostat field is optimized. Calculated LCOEs are then used to identify the most suitable option(s). Finally, the conclusions and findings of this extensive cost study are used to define the concept of a new cost-efficient heliostat called `Stellio'.
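The final ranking step rests on a levelised-cost calculation; a minimal sketch (illustrative Python using a textbook capital-recovery-factor formula; the paper's plant and field model is far more detailed):

```python
def lcoe(capex, opex_per_year, annual_energy_mwh, discount_rate, lifetime_years):
    """Simplified levelised cost of electricity: annualize CAPEX with a
    capital recovery factor, add annual O&M, divide by annual energy.
    discount_rate must be > 0 for the CRF formula to be defined."""
    r, n = discount_rate, lifetime_years
    crf = r * (1 + r) ** n / ((1 + r) ** n - 1)  # capital recovery factor
    return (capex * crf + opex_per_year) / annual_energy_mwh
```

Comparing heliostat variants then reduces to evaluating this figure for each design's cost and annual field output.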
Combinatorial optimization games
Deng, X.; Ibaraki, Toshihide; Nagamochi, Hiroshi
1997-06-01
We introduce a general integer programming formulation for a class of combinatorial optimization games, which immediately allows us to improve the algorithmic result for finding imputations in the core (an important solution concept in cooperative game theory) of the network flow game on simple networks by Kalai and Zemel. An interesting result is a general theorem that the core for this class of games is nonempty if and only if a related linear program has an integer optimal solution. We study the properties for this mathematical condition to hold for several interesting problems, and apply them to resolve algorithmic and complexity issues for their cores along the following lines: decide whether the core is empty; if the core is nonempty, find an imputation in the core; given an imputation x, test whether x is in the core. We also explore the properties of totally balanced games in this succinct formulation of cooperative games.
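The third question, testing whether a given imputation x lies in the core, can be brute-forced for small games by checking every coalition constraint (illustrative Python; feasible only for few players, consistent with the paper's point that these questions are hard in general):

```python
from itertools import chain, combinations

def in_core(x, v, players):
    """Check whether payoff vector x is in the core of the game (players, v):
    x must be efficient (payoffs sum to v(grand coalition)) and no
    coalition S may improve on it (sum of x over S >= v(S)).
    v maps frozensets of players to coalition values."""
    grand = frozenset(players)
    if abs(sum(x[p] for p in players) - v[grand]) > 1e-9:
        return False  # not efficient
    subsets = chain.from_iterable(combinations(players, r)
                                  for r in range(1, len(players) + 1))
    return all(sum(x[p] for p in S) >= v[frozenset(S)] - 1e-9
               for S in subsets)
```

For a symmetric 3-player game with v = 0 on singletons, 0.5 on pairs and 1 on the grand coalition, the equal split lies in the core while giving everything to one player does not.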
Goldman, A. J.
2006-01-01
Dr. Christoph Witzgall, the honoree of this Symposium, can count among his many contributions to applied mathematics and mathematical operations research a body of widely-recognized work on the optimal location of facilities. The present paper offers to non-specialists a sketch of that field and its evolution, with emphasis on areas most closely related to Witzgall’s research at NBS/NIST. PMID:27274920
Hydraulic fracture design optimization
Lee, Tae-Soo; Advani, S.H.
1992-01-01
This research and development investigation, sponsored by US DOE and the oil and gas industry, extends previously developed hydraulic fracture geometry models and applied energy related characteristic time concepts towards the optimal design and control of hydraulic fracture geometries. The primary objective of this program is to develop rational criteria, by examining the associated energy rate components during the hydraulic fracture evolution, for the formulation of stimulation treatment design along with real-time fracture configuration interpretation and control.
Trajectory Optimization: OTIS 4
NASA Technical Reports Server (NTRS)
Riehl, John P.; Sjauw, Waldy K.; Falck, Robert D.; Paris, Stephen W.
2010-01-01
The latest release of the Optimal Trajectories by Implicit Simulation (OTIS4) allows users to simulate and optimize aerospace vehicle trajectories. With OTIS4, one can seamlessly generate optimal trajectories and parametric vehicle designs simultaneously. New features also allow OTIS4 to solve non-aerospace continuous time optimal control problems. The inputs and outputs of OTIS4 have been updated extensively from previous versions. Inputs now make use of object-oriented constructs, including one called a metastring. Metastrings use a greatly improved calculator and common nomenclature to reduce the user's workload. They allow for more flexibility in specifying vehicle physical models, boundary conditions, and path constraints. The OTIS4 calculator supports common mathematical functions, Boolean operations, and conditional statements. This allows users to define their own variables for use as outputs, constraints, or objective functions. The user-defined outputs can directly interface with other programs, such as spreadsheets, plotting packages, and visualization programs. Internally, OTIS4 has more explicit and implicit integration procedures, including high-order collocation methods, the pseudo-spectral method, and several variations of multiple shooting. Users may switch easily between the various methods. Several unique numerical techniques, such as automated variable scaling and implicit integration grid refinement, support the integration methods. OTIS4 is also significantly more user friendly than previous versions. The installation process is nearly identical on various platforms, including Microsoft Windows, Apple OS X, and Linux operating systems. Cross-platform scripts also help make the execution of OTIS and post-processing of data easier. OTIS4 is supplied free by NASA and is subject to ITAR (International Traffic in Arms Regulations) restrictions. Users must have a Fortran compiler, and a Python interpreter is highly recommended.
Optimal Centroid Position Estimation
Candy, J V; McClay, W A; Awwal, A S; Ferguson, S W
2004-07-23
The alignment of high energy laser beams for potential fusion experiments demand high precision and accuracy by the underlying positioning algorithms. This paper discusses the feasibility of employing online optimal position estimators in the form of model-based processors to achieve the desired results. Here we discuss the modeling, development, implementation and processing of model-based processors applied to both simulated and actual beam line data.
Optimizing parallel reduction operations
Denton, S.M.
1995-06-01
A parallel program consists of sets of concurrent and sequential tasks. Often, a reduction (such as array sum) sequentially combines values produced by a parallel computation. Because reductions occur so frequently in otherwise parallel programs, they are good candidates for optimization. Since reductions may introduce dependencies, most languages separate computation and reduction. The Sisal functional language is unique in that reduction is a natural consequence of loop expressions; the parallelism is implicit in the language. Unfortunately, the original language supports only seven reduction operations. To generalize these expressions, the Sisal 90 definition adds user-defined reductions at the language level. Applicable optimizations depend upon the mathematical properties of the reduction. Compilation and execution speed, synchronization overhead, memory use and maximum size influence the final implementation. This paper (1) Defines reduction syntax and compares with traditional concurrent methods; (2) Defines classes of reduction operations; (3) Develops analysis of classes for optimized concurrency; (4) Incorporates reductions into Sisal 1.2 and Sisal 90; (5) Evaluates performance and size of the implementations.
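A reduction class with associativity admits a balanced-tree evaluation whose dependence chain is logarithmic rather than linear; a minimal sketch (illustrative Python, not Sisal):

```python
def tree_reduce(op, values):
    """Pairwise (tree) reduction of an associative operator. Each level
    halves the number of partial results, so the sequential dependence
    chain is O(log n) instead of O(n); the pairs within a level are
    independent and could run concurrently. Correct only if op is
    associative."""
    vals = list(values)
    if not vals:
        raise ValueError("empty reduction")
    while len(vals) > 1:
        paired = [op(vals[i], vals[i + 1])
                  for i in range(0, len(vals) - 1, 2)]
        if len(vals) % 2:
            paired.append(vals[-1])  # odd element carries to the next level
        vals = paired
    return vals[0]
```

Array sum, product, and max all fall into this class; a non-associative user-defined operator would instead force a sequential left fold.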
Flood Bypass Capacity Optimization
NASA Astrophysics Data System (ADS)
Siclari, A.; Hui, R.; Lund, J. R.
2015-12-01
Large river flows can damage adjacent flood-prone areas by exceeding river channel and levee capacities. Particularly large floods are difficult to contain in leveed river banks alone. Flood bypasses often can efficiently reduce flood risks: excess river flow is diverted over a weir into bypasses, where it incurs much less damage and cost. Additional benefits of bypasses include ecosystem protection, agriculture, groundwater recharge and recreation. Constructing or expanding a bypass incurs costs in land purchases, easements, and levee setbacks. Accounting for such benefits and costs, this study develops a simple mathematical model for optimizing flood bypass capacity using benefit-cost and risk analysis. Application to the Yolo Bypass, an existing bypass along the Sacramento River in California, estimates the optimal capacity that economically reduces flood damage and increases various benefits, especially for agriculture. Land availability is likely to limit bypass expansion; compensation for landowners could relax such limitations. Other economic values could affect the optimal results, as shown by sensitivity analysis on the major parameters. By including land geography in the model, the locations of promising capacity expansions can be identified.
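The benefit-cost trade-off can be illustrated with a toy capacity search (hypothetical numbers, not the Yolo Bypass model): choose the capacity that minimizes expected flood damage plus expansion cost minus co-benefits.

```python
def optimal_capacity(capacities, expected_damage, expansion_cost, co_benefits):
    """Benefit-cost sketch: pick the bypass capacity minimizing
    total cost = expected flood damage + expansion cost - co-benefits
    (agriculture, recharge, recreation). All inputs are dicts keyed by
    candidate capacity."""
    def total(c):
        return expected_damage[c] + expansion_cost[c] - co_benefits[c]
    return min(capacities, key=total)
```

In practice each term would itself come from risk analysis (damage expectations over a flood frequency curve) rather than a lookup table.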
NASA Technical Reports Server (NTRS)
Vanderplaats, Garrett; Townsend, James C. (Technical Monitor)
2002-01-01
The purpose of this research under the NASA Small Business Innovative Research program was to develop algorithms and associated software to solve very large nonlinear, constrained optimization tasks. Key issues included efficiency, reliability, memory, and gradient calculation requirements. This report describes the general optimization problem, ten candidate methods, and detailed evaluations of four candidates. The algorithm chosen for final development is a modern recreation of a 1960s external penalty function method that uses very limited computer memory and computational time. Although of lower efficiency, the new method can solve problems orders of magnitude larger than current methods. The resulting BIGDOT software has been demonstrated on problems with 50,000 variables and about 50,000 active constraints. For unconstrained optimization, it has solved a problem in excess of 135,000 variables. The method includes a technique for solving discrete variable problems that finds a "good" design, although a theoretical optimum cannot be guaranteed. It is very scalable in that the number of function and gradient evaluations does not change significantly with increased problem size. Test cases are provided to demonstrate the efficiency and reliability of the methods and software.
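The exterior penalty idea behind the chosen method can be sketched in a few lines: constraint violations are folded into the objective as a quadratic penalty, and only the penalized function is minimized, so memory use stays close to that of unconstrained optimization (illustrative Python with a fixed penalty weight and step size; BIGDOT's actual algorithm is far more elaborate):

```python
def exterior_penalty_minimize(grad_f, gs, grad_gs, x0,
                              r=10.0, step=0.01, iters=2000):
    """Gradient descent on phi(x) = f(x) + r * sum(max(0, g_i(x))**2)
    for constraints g_i(x) <= 0. Only gradients are needed; violated
    constraints contribute 2*r*g_i(x)*grad g_i(x) to the search
    direction. The minimizer approaches the constrained optimum from
    the infeasible side as r grows."""
    x = list(x0)
    for _ in range(iters):
        grad = list(grad_f(x))
        for g, dg in zip(gs, grad_gs):
            viol = max(0.0, g(x))
            if viol > 0.0:
                for j, d in enumerate(dg(x)):
                    grad[j] += 2.0 * r * viol * d
        x = [xi - step * gi for xi, gi in zip(x, grad)]
    return x
```

For f(x) = x^2 subject to x >= 1, the penalized minimizer is x = r/(1+r), slightly inside the infeasible region, illustrating why penalty methods trade exactness for simplicity and scale.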
NASA Astrophysics Data System (ADS)
Wymant, Chris
2012-12-01
In supersymmetric models, a large average stop mass MS is well known to both boost the lightest Higgs boson mass mh and also make radiative electroweak symmetry breaking unnaturally tuned. The case of “maximal mixing,” where the stop trilinear mixing term At is set to give At²/MS² = 6, allows the stops to be as light as possible for a given mh. Here we make the distinction between minimal MS and optimal naturalness, showing that the latter occurs for less-than-maximal mixing. Lagrange-constrained optimization reveals that the two coincide closely in the Minimal Supersymmetric Standard Model (MSSM)—optimally we have 5
Structural optimization of framed structures using generalized optimality criteria
NASA Technical Reports Server (NTRS)
Kolonay, R. M.; Venkayya, Vipperla B.; Tischler, V. A.; Canfield, R. A.
1989-01-01
The application of a generalized optimality criteria to framed structures is presented. The optimality conditions, Lagrangian multipliers, resizing algorithm, and scaling procedures are all represented as a function of the objective and constraint functions along with their respective gradients. The optimization of two plane frames under multiple loading conditions subject to stress, displacement, generalized stiffness, and side constraints is presented. These results are compared to those found by optimizing the frames using a nonlinear mathematical programming technique.
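A simple special case of such resizing rules is the classical stress-ratio update, A_new = A_old·(σ/σ_allow) (illustrative Python; the paper's generalized criteria also fold Lagrange multipliers for displacement and stiffness constraints into the recurrence):

```python
def stress_ratio_resize(areas, member_forces, allowable_stress):
    """One pass of the stress-ratio resizing rule for truss members:
    each area is scaled by its stress ratio sigma/sigma_allow, driving
    every member toward the allowable stress. For a statically
    determinate truss (fixed member forces) this converges in one step."""
    new_areas = []
    for A, F in zip(areas, member_forces):
        sigma = abs(F) / A          # current member stress
        new_areas.append(A * sigma / allowable_stress)
    return new_areas
```

For indeterminate frames the member forces change with the areas, so the update is iterated with a reanalysis between passes, which is where the Lagrangian multiplier and scaling machinery of the generalized criteria enters.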
Sun, Bo; Yu, XiangHui; Yin, Yuhe; Liu, Xintao; Wu, Yongge; Chen, Yan; Zhang, Xizhen; Jiang, Chunlai; Kong, Wei
2013-09-01
The demand for pharmaceutical-grade plasmid DNA in vaccine applications and gene therapy has been increasing in recent years. In the present study, a process consisting of alkaline lysis, tangential flow filtration, purification by anion exchange chromatography, hydrophobic interaction chromatography and size exclusion chromatography was developed. The final product met the requirements for pharmaceutical-grade plasmid DNA. The chromosomal DNA content was <1 μg/mg plasmid DNA, and RNA was not detectable by agarose gel electrophoresis. Moreover, the protein content was <2 μg/mg plasmid DNA, and the endotoxin content was <10 EU/mg plasmid DNA. The process was scaled up to yield 800 mg of pharmaceutical-grade plasmid DNA from approximately 2 kg of bacterial cell paste. The overall yield of the final plasmid DNA reached 48%. Therefore, we have established a rapid and efficient production process for pharmaceutical-grade plasmid DNA.
Liu, Chang; Wang, Xin; Chen, Yuhuang; Hao, Huijing; Li, Xu; Liang, Junrong; Duan, Ran; Li, Chuchu; Zhang, Jing; Shao, Shihe; Jing, Huaiqi
2016-01-01
In many gram-negative bacilli, AmpD plays a key role in both the cell wall-recycling pathway and β-lactamase regulation; inactivation of ampD causes the accumulation of 1,6-anhydromuropeptides and results in ampC overproduction. In Yersinia enterocolitica, the regulation of ampC expression may also rely on the ampR-ampC system; the role of AmpD in this species is still unknown. In this study, three AmpD homologs (AmpD1, AmpD2, and AmpD3) have been identified in the complete sequence of strain Y. enterocolitica subsp. palearctica 105.5R(r). To understand the role of the three AmpD homologs, several mutant strains were constructed and analyzed, and a rare ampC regulation mechanism was observed: the low-efficiency AmpD2 and AmpD3 cooperate with the high-efficiency AmpD1 in three-level regulation of ampC expression. Enterobacteriaceae were previously thought to regulate ampC expression in two steps; three-step regulation had been observed only in Pseudomonas aeruginosa. Here we report for the first time that the enterobacterium Y. enterocolitica can also possess a three-step stepwise regulation mechanism, regulating ampC expression precisely. PMID:27588018
Properties of Electron-Beam Irradiated CuInSe2 Layers by Multi-Step Sputtering Method.
Kim, Chae-Woong; Kim, Jin Hyeok; Jeong, Chaehwan
2015-10-01
Typically, CuInSe2 (CIS) based thin films for photovoltaic devices are deposited by co-evaporation or by deposition of the metals, followed by treatment in a selenium environment. This article describes CIS films that are instead deposited by DC and RF magnetron sputtering from binary Cu2Se and In2Se3 targets without the supply of selenium. As a novel method, electron beam annealing was used for crystallization of Cu2Se/In2Se3 stacked precursors. The surface and cross-sectional morphology and the compositional ratio of the CIS films were investigated to confirm the possibility of crystallization without any added selenium. Our work demonstrates that the e-beam annealing method can be a good candidate for the rapid crystallization of Cu-In-Se sputtered precursors.
Jayaram, Smitha; Kapoor, Sabeeta; Dharmesh, Shylaja M
2015-06-25
Corn pectic polysaccharide (COPP) inhibited galectin-3 mediated hemagglutination at Minimum Inhibitory Concentration (MIC) of 4.08 μg/mL as opposed to citrus pectin (25 μg/mL), a well known galectin-3 inhibitor and lactose (4.16 μg/mL)--sugar specific to galectin-3. COPP effectively (72%) inhibited invasion and metastasis in experimental animals. In vivo results were substantiated by modulation of cancer specific markers such as galectin-3, which is a key molecule for initiation of metastatic cascade, vascular endothelial growth factor (VEGF) that enhances angiogenesis, matrix metalloproteinases 2 and 9 that are required for invasion, NF-κB, a transcription factor for proliferative potency of tumor cells and a phosphoglucoisomerase (PGI), the activity of which favors cancer cell growth. Structural characterization studies indicate the active component (relatively less acidic, 0.05 M ammonium carbonate, 160 kDa fraction) which showed antimetastatic potency in vitro with MIC of 0.09 μg/mL, and ∼ 45 fold increase in the activity when compared to that of COPP. Gas liquid chromatographic analysis indicated the presence of rhamnose (1%), arabinose (20%), xylose (3%), mannose (4%), galactose (54%) and uronic acid (10%) in different proportions. However, correlative data attributed galectin-3 inhibitory activity to enhanced levels of arabinose and galactose. FTIR, HPLC and NMR spectroscopic analysis further highlights that COPP is an arabinogalactan with methyl/ethyl esters. It is therefore suggested that the blockade of galectin-3 mediated lung metastasis appears to be a result of an inhibition of mixed functions induced during metastasis. The data signifies the importance of dietary carbohydrate as cancer-preventive agent. Although pectin digestibility and absorption are issues of concern, promising in vivo data provides evidence for the cancer preventive property of corn. 
The present study reveals for the first time a new component of corn, i.e.,--corn pectin with cancer preventive activity apart from corn starch that has been in wide use for multipurpose health benefits.
Vanotti, Matias B; Millner, Patricia D; Hunt, Patrick G; Ellison, Aprel Q
2005-01-01
Concern has greatly increased about the potential for contamination of water, food, and air by pathogens present in manure. We evaluated pathogen reduction in liquid swine manure in a multi-stage treatment system where first the solids and liquid are separated with polymer, followed by biological nitrogen (N) removal using nitrification and denitrification, and then phosphorus (P) extraction through lime precipitation. Each step of the treatment system was analyzed for Salmonella and microbial indicators of fecal contamination (total coliforms, fecal coliforms, and enterococci). Before treatment, mean concentrations of Salmonella, total coliforms, fecal coliforms, and enterococci were 3.89, 6.79, 6.23 and 5.73 log(10) colony forming units (cfu)/ml, respectively. The flushed manure contained 10,590 mg/l TSS, 8270 mg/l COD, 688 mg/l TKN and 480 mg/l TP, which were reduced >98% by the treatment system. Results showed a consistent trend in reduction of pathogens and microbial indicators as a result of each step in the treatment system. Solid-liquid separation decreased their concentrations by 0.5-1 log(10). Additional biological N removal treatment with alternating anoxic and oxic conditions achieved a higher reduction with average removals of 2.4 log(10) for Salmonella and 4.1-4.5 log(10) for indicator microbes. Subsequent P treatment decreased concentration of Salmonella and pathogen indicators to undetectable level (<0.3 log(10) cfu/ml) due to elevated process pH (10.3). Our results indicate that nitrification/denitrification treatment after solids separation is very effective in reducing pathogens in liquid swine manure and that the phosphorus removal step via alkaline calcium precipitation produces a sanitized effluent which may be important for biosecurity reasons. PMID:15381218
Lloyd-Price, Jason; Tran, Huy; Ribeiro, Andre S.
2016-01-01
Transcription kinetics is limited by its initiation steps, which differ between promoters and with intra- and extracellular conditions. Regulation of these steps allows tuning both the rate and stochasticity of RNA production. We used time-lapse, single-RNA microscopy measurements in live Escherichia coli to study how the rate-limiting steps in initiation of the Plac/ara-1 promoter change with temperature and induction scheme. For this, we compared detailed stochastic models fit to the empirical data in the maximum-likelihood sense using statistical methods. Using this analysis, we found that temperature affects the rate-limiting steps unequally, as nonlinear changes in the closed complex formation suffice to explain the differences in transcription dynamics between conditions. Meanwhile, a similar analysis of the PtetA promoter revealed that it has a different rate-limiting step configuration, with temperature regulating different steps. Finally, we used the derived models to explore a possible cause for why the identified steps are preferred as the main source of behavior modifications with temperature: we find that transcription dynamics is either insensitive or responds reciprocally to changes in the other steps. Our results suggest that different promoters employ different rate-limiting step patterns that control not only their rate and variability, but also their sensitivity to environmental changes. PMID:27792724
ERIC Educational Resources Information Center
de Souza, Rebecca; Dauner, Kim Nichols; Goei, Ryan; LaCaille, Lara; Kotowski, Michael R.; Schultz, Jennifer Feenstra; LaCaille, Rick; Versnik Nowak, Amy L.
2014-01-01
Background: Obesity prevention efforts typically involve changing eating and exercise behaviors as well as the physical and social environment in which those behaviors occur. Due to existing social networks, worksites are a logical choice for implementing such interventions. Purpose: This article describes the development and implementation of a…
Pietrzykowski, Andrzej Z.; Spijker, Sabine
2014-01-01
Malfunction of synaptic plasticity in different brain regions, including the amygdala, plays a role in the impulse control deficits that are characteristic of several psychiatric disorders, such as ADHD, schizophrenia, depression and addiction. Previously, we discovered a locus for impulsivity (Impu1) containing the neuregulin 3 (Nrg3) gene, whose level of expression determines levels of inhibitory control. MicroRNAs (miRNAs) are potent regulators of gene expression, and have recently emerged as important factors contributing to the development of psychiatric disorders. However, their role in impulsivity, as well as in the control of Nrg3 expression or malfunction of the amygdala, is not well established. Here, we used the GeneNetwork database of BXD mice to search for traits correlated with impulsivity, using an overrepresentation analysis to filter for biologically meaningful traits. We determined that inhibitory control was significantly correlated with expression of miR-190b, -28a, -340, -219a, and -491 in the amygdala, and that the overrepresented correlated traits showed a specific pattern of coregulation with these miRNAs. A bioinformatics analysis identified that miR-190b, by targeting an Nrg3-related network, could affect synaptic plasticity in the amygdala, targeting both impulsive and compulsive traits. Moreover, miR-28a, -340, -219a, and possibly -491 could act on synaptic function by determining the balance between neuronal outgrowth and differentiation. We propose that these miRNAs are attractive candidates for the regulation of amygdala synaptic plasticity, possibly during development but also in maintaining the impulsive phenotype. These results can help us to better understand the mechanisms of synaptic dysregulation in psychiatric disorders. PMID:25561905
Evidence of Multi-step Nucleation Leading to Various Crystallization Pathways from an Fe-O-Al Melt
Wang, G. C.; Wang, Q.; Li, S. L.; Ai, X. G.; Fan, C. G.
2014-01-01
The crystallization process from a solution begins with nucleation, which determines the structure and size of the resulting crystals. Further understanding of multi-pathway crystallizations from solution through two-step nucleation mechanisms is needed. This study uses density functional theory to probe the thermodynamic properties of alumina clusters at high temperature and reveals the thermodynamic relationship between these clusters and the saturation levels of dissolved oxygen and aluminum in an Fe–O–Al melt. Based on the thermodynamics of cluster formation and the experimental evidence for both excess oxygen in the Fe-O-Al melt and for alumina with a polycrystalline structure in solidified iron, we demonstrate that the appearance of various types of clusters that depends on the saturation ratio determines the nucleation steps that lead to the various crystallization pathways. Such mechanisms may also be important in nucleation and crystallization from solution. PMID:24866413
Wang, Yue-Si; Li, Xue; Yao, Li; Zhao, Ya-Nan; Pan, Yue-Peng
2009-09-15
In order to understand variations in the pH and chemical composition of precipitation in Beijing, 5 precipitation events in the summer of 2007 were sampled sequentially over time, and the variations of pH, EC and water-soluble ions such as NH4+, SO4(2-) and NO3(-) were analyzed. The results showed that pH was 5.70 +/- 0.73 at the beginning (before ~35 min), when the precipitation was not acidic, but in the steady period (after ~35 min) pH was 4.35 +/- 0.56 and the rain water was actually acidic. Meanwhile, pH, EC and ion concentrations decreased rapidly as the rain continued; 10-45 min later they became nearly constant, and the concentrations of water-soluble ions had decreased by 50%-90% compared with the beginning. The pollutant load in the precipitation of Aug. 1, whose rain-bearing air mass came from the northwest, was low, with NH4+, SO4(2-) and NO3- concentrations of 65.4, 23.9 and 117.3 microeq/L, respectively, while the load in the precipitation of Aug. 6, whose air mass came from the south, was high, with concentrations of 310.8, 95.7 and 249.8 microeq/L. Precipitation was also much more acidic when the air mass came from the south than from the northwest. The increasing load of fine particles from photochemical reactions in Beijing summers suggests that precipitation will become more and more acidic.
Analysis of Multi-step Forming of Metallic Bipolar Plate for MCFC Using Various Shapes of Preforms
NASA Astrophysics Data System (ADS)
Lee, Chang-Hwan; Ryu, Seung-Min; Yang, Dong-Yol; Kang, Dong-Woo; Chang, In-Gab; Lee, Tae-Won
2010-06-01
The metallic bipolar plates of a molten carbonate fuel cell (MCFC) consist of a shielded slot plate and a center plate. Among these, the shielded slot plate (the current collector) supports the Membrane Electrode Assembly (MEA) mechanically. The anode gases and the cathode gases pass through a space between individual slot patterns. The catalysts are located in the upper part of the shielded slot plate. Therefore, triple phase boundaries can be generated, and carbonate ions can act as the mobile charge carrier for the MCFC. Due to these properties, the shielded slot plate should have a sheared corrugated pattern. In order to form a sheared corrugated pattern, a slitting process is required during the first stage of the forming process. However, it is not possible to obtain a high aspect ratio in a sheared corrugated trapezoidal pattern due to the plastic strain concentration on the upper round region of the pattern. Therefore, additional forming processes are required to form a high aspect-ratio pattern. For example, two additional processes, a "stretching process using a preform" and a "final forming process", can be carried out subsequent to the first slitting process. Before the final forming process, a stretching process, which forms an intermediate shape (preform), can make the strain distribution more uniform. Hence, various preform shapes were evaluated by using FEM simulation employing simplified boundary conditions. Finally, experiments involving microscopic and macroscopic observations using the proposed preform shape were conducted to characterize the formability of the sheared corrugated pattern. The numerical simulations were found to be in good agreement with the experimental results.
A Novel Multi-step Virtual Screening for the Identification of Human and Mouse mPGES-1 Inhibitors.
Corso, G; Alisi, M A; Cazzolla, N; Coletta, I; Furlotti, G; Garofalo, B; Mangano, G; Mancini, F; Vitiello, M; Ombrato, Rosella
2016-09-01
We present here the development of a novel virtual screening protocol combining structure-based and ligand-based drug design approaches for the identification of mouse mPGES-1 inhibitors. We used the existing 3D structural data of the murine enzyme to hypothesize the inhibitors' binding mode, which was the starting point for docking simulations, shape screening, and pharmacophore hypothesis screening. The protocol allowed the identification of 16 mouse mPGES-1 inhibitors with low micromolar activity, which, notably, also inhibit the human enzyme in the same concentration range. The inhibitors' predicted binding mode is expected to form the basis for the rational design of new potent dual-species inhibitors of human and murine mPGES-1. PMID:27546040
Noble, Daniel; Kenna, Margaret A; Dix, Melissa; Skibbens, Robert V; Unal, Elçin; Guacci, Vincent
2006-11-01
Sister chromatid cohesion is established during S phase and maintained until anaphase. The cohesin complex (Mcd1p/Scc1p, Smc1p, Smc3p, and Irr1p/Scc3p in budding yeast) serves a structural role, as it is required at all times when cohesion exists. Pds5p colocalizes temporally and spatially with cohesin on chromosomes but is thought to serve as a regulator of cohesion maintenance during mitosis. In contrast, Ctf7p/Eco1p is required during S phase for establishment but is not required during mitosis. Here we provide genetic and biochemical evidence that the pathways of cohesion establishment and maintenance are intimately linked. Our results show that ctf7 and pds5 mutants are synthetically lethal. Moreover, over-expression of either CTF7 or PDS5 exhibits reciprocal suppression of the other mutant's temperature sensitivity. The suppression by CTF7 is specific for pds5 mutants, as CTF7 over-expression increases the temperature sensitivity of an mcd1 mutant but has no effect on smc1 or smc3 mutants. Three additional findings provide new insights into the process of cohesion establishment. First, over-expression of ctf7 alleles deficient in acetylase activity exhibits significantly reduced suppression of the pds5 mutant but exacerbated toxicity to the mcd1 mutant. Second, using chromosome spreads and chromatin immunoprecipitation, we find that either cohesin complex or Pds5p chromosomal localization is altered in ctf7 mutants. Finally, biochemical analysis reveals that Ctf7p and Pds5p coimmunoprecipitate, physically linking these regulators of cohesion establishment and maintenance. We propose a model whereby Ctf7p and Pds5p cooperate to facilitate efficient establishment by mediating changes in the cohesin complex on chromosomes after its deposition. PMID:17102636
Cruickshank-Quinn, Charmion; Quinn, Kevin D.; Powell, Roger; Yang, Yanhui; Armstrong, Michael; Mahaffey, Spencer; Reisdorph, Richard; Reisdorph, Nichole
2014-01-01
Metabolomics is an emerging field which enables profiling of samples from living organisms in order to obtain insight into biological processes. A vital aspect of metabolomics is sample preparation, where inconsistent techniques generate unreliable results. The sample preparation technique presented here encompasses protein precipitation, liquid-liquid extraction, and solid-phase extraction as a means of fractionating metabolites into four distinct classes. Improved enrichment of low-abundance molecules, with a resulting increase in sensitivity, is obtained, ultimately yielding more confident identification of molecules. This technique has been applied to plasma, bronchoalveolar lavage fluid, and cerebrospinal fluid samples with volumes as low as 50 µl. Samples can be used for multiple downstream applications; for example, the pellet resulting from protein precipitation can be stored for later analysis. The supernatant from that step undergoes liquid-liquid extraction using water and a strong organic solvent to separate the hydrophilic and hydrophobic compounds. Once fractionated, the hydrophilic layer can be processed for later analysis or discarded if not needed. The hydrophobic fraction is further treated with a series of solvents during three solid-phase extraction steps to separate it into fatty acids, neutral lipids, and phospholipids. This allows the technician the flexibility to choose which class of compounds is preferred for analysis. It also aids in more reliable metabolite identification, since some knowledge of chemical class exists. PMID:25045913
Liu, Chang; Wang, Xin; Chen, Yuhuang; Hao, Huijing; Li, Xu; Liang, Junrong; Duan, Ran; Li, Chuchu; Zhang, Jing; Shao, Shihe; Jing, Huaiqi
2016-01-01
In many gram-negative bacilli, AmpD plays a key role in both the cell wall-recycling pathway and β-lactamase regulation; inactivation of ampD causes the accumulation of 1,6-anhydromuropeptides and results in ampC overproduction. In Yersinia enterocolitica, the regulation of ampC expression may also rely on the ampR-ampC system, but the role of AmpD in this species is still unknown. In this study, three AmpD homologs (AmpD1, AmpD2, and AmpD3) were identified in the complete sequence of strain Y. enterocolitica subsp. palearctica 105.5R(r). To understand the role of the three AmpD homologs, several mutant strains were constructed and analyzed, revealing a rare ampC regulation mechanism: the low-efficiency AmpD2 and AmpD3 cooperate with the high-efficiency AmpD1 in a three-level regulation of ampC expression. Enterobacteriaceae were previously thought to regulate ampC expression in two steps; three-step regulation had been observed only in Pseudomonas aeruginosa. Here we report for the first time that the enterobacterium Y. enterocolitica can also possess a three-step stepwise regulation mechanism that precisely regulates ampC expression. PMID:27588018
NASA Astrophysics Data System (ADS)
Badrzadeh, Honey; Sarukkalige, Ranjan; Jayawardena, A. W.
2013-12-01
Discrete wavelet transform was applied to decompose ANN and ANFIS inputs.
A novel WNF approach with subtractive clustering was applied for flow forecasting.
Forecasting was performed 1-5 steps ahead using multivariate inputs.
Forecasting accuracy for peak values and at longer lead times was significantly improved.
SU-C-BRF-07: A Pattern Fusion Algorithm for Multi-Step Ahead Prediction of Surrogate Motion
Zawisza, I; Yan, H; Yin, F
2014-06-15
Purpose: To ensure that tumor motion remains within the radiation field during high-dose and high-precision radiosurgery, real-time imaging and surrogate monitoring are employed. These methods provide real-time tumor/surrogate motion, but no future information is available. In order to anticipate future tumor/surrogate motion and track target location precisely, an algorithm is developed and investigated for estimating surrogate motion multiple steps ahead. Methods: The study utilized a one-dimensional surrogate motion signal divided into three components: (a) a training component containing the primary data from the first frame to the beginning of the input subsequence; (b) an input subsequence component of the surrogate signal used as input to the prediction algorithm; (c) an output subsequence component comprising the remaining signal, used as the known output of the prediction algorithm for validation. The prediction algorithm consists of three major steps: (1) extracting subsequences from the training component which best match the input subsequence according to a given criterion; (2) calculating weighting factors from these best-matched subsequences; (3) collecting the parts that follow these subsequences and combining them with the assigned weighting factors to form the output. The prediction algorithm was examined for several patients, and its performance is assessed based on the correlation between prediction and known output. Results: Respiratory motion data was collected for 20 patients using the RPM system. The output subsequence is the last 50 samples (∼2 seconds) of a surrogate signal, and the input subsequence was the 100 frames (∼3 seconds) prior to the output subsequence. Based on the analysis of the correlation coefficient between predicted and known output subsequences, the average correlation is 0.9644±0.0394 and 0.9789±0.0239 for the equal-weighting and relative-weighting strategies, respectively.
Conclusion: Preliminary results indicate that the prediction algorithm is effective in estimating surrogate motion multiple steps in advance. The relative-weighting method shows better prediction accuracy than the equal-weighting method. More parameters of this algorithm are under investigation.
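A minimal sketch of the three-step scheme above (best-match extraction, weighting, fusion of the parts that follow the matches); the Euclidean match criterion, window lengths, and inverse-distance weights are illustrative assumptions, not the authors' exact choices:

```python
import numpy as np

def pattern_fusion_predict(train, query, horizon, k=3):
    """Predict `horizon` future samples of `query` by fusing the
    continuations of the k training subsequences that best match it.
    Weights are inversely proportional to the match distance
    (a simple stand-in for the paper's relative-weighting scheme)."""
    m = len(query)
    # step 1: score every candidate window that leaves room for a continuation
    starts = range(len(train) - m - horizon + 1)
    dists = np.array([np.linalg.norm(train[s:s+m] - query) for s in starts])
    best = np.argsort(dists)[:k]                 # k best-matched subsequences
    # step 2: weighting factors from the match distances
    w = 1.0 / (dists[best] + 1e-12)
    w /= w.sum()
    # step 3: collect the parts that follow each match and fuse them
    futures = np.array([train[s+m:s+m+horizon] for s in best])
    return futures.T @ w

# synthetic "respiratory" surrogate: a periodic signal
t = np.arange(0, 60, 0.1)
sig = np.sin(2 * np.pi * t / 4.0)
train, query, truth = sig[:500], sig[500:530], sig[530:550]
pred = pattern_fusion_predict(train, query, horizon=20)
print(np.corrcoef(pred, truth)[0, 1])  # near 1 for a periodic signal
```

For a strictly periodic signal the best matches are exact, so the fused prediction reproduces the true continuation almost perfectly; real surrogate signals would degrade gracefully with the match quality.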
Hu, Xichan; Liu, Jinqiang; Jun, Hyun-IK; Kim, Jin-Kwang; Qiao, Feng
2016-01-01
Tightly controlled recruitment of telomerase, a low-abundance enzyme, to telomeres is essential for regulated telomere synthesis. Recent studies in human cells revealed that a patch of amino acids in the shelterin component TPP1, called the TEL-patch, is essential for recruiting telomerase to telomeres. However, how TEL-patch—telomerase interaction integrates into the overall orchestration of telomerase regulation at telomeres is unclear. In fission yeast, Tel1ATM/Rad3ATR-mediated phosphorylation of shelterin component Ccq1 during late S phase is involved in telomerase recruitment through promoting the binding of Ccq1 to a telomerase accessory protein Est1. Here, we identify the TEL-patch in Tpz1TPP1, mutations of which lead to decreased telomeric association of telomerase, similar to the phosphorylation-defective Ccq1. Furthermore, we find that telomerase action at telomeres requires formation and resolution of an intermediate state, in which the cell cycle-dependent Ccq1-Est1 interaction is coupled to the TEL-patch—Trt1 interaction, to achieve temporally regulated telomerase elongation of telomeres. DOI: http://dx.doi.org/10.7554/eLife.15470.001 PMID:27253066
Structural optimization of large structural systems by optimality criteria methods
NASA Technical Reports Server (NTRS)
Berke, Laszlo
1992-01-01
The fundamental concepts of the optimality criteria method of structural optimization are presented. The effect of the separability properties of the objective and constraint functions on the optimality criteria expressions is emphasized. The single constraint case is treated first, followed by the multiple constraint case with a more complex evaluation of the Lagrange multipliers. Examples illustrate the efficiency of the method.
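The single-constraint case admits a closed-form optimality-criteria solution. The sketch below is illustrative, not Berke's implementation: it assumes a single displacement constraint with member forces independent of areas (a statically determinate structure), so minimizing weight sum(c_i*A_i) subject to sum(b_i/A_i) = delta_max gives the stationarity condition A_i = sqrt(lambda*b_i/c_i), with the Lagrange multiplier fixed by the constraint:

```python
import math, random

def oc_single_constraint(b, c, delta_max):
    """Optimality-criteria solution of  min sum(c_i*A_i)
    s.t. sum(b_i/A_i) = delta_max, where b_i = F_i*f_i*L_i/E and
    c_i = rho_i*L_i (hypothetical separable truss data).
    Stationarity: c_i - lambda*b_i/A_i**2 = 0 => A_i = sqrt(lambda*b_i/c_i)."""
    s = sum(math.sqrt(bi * ci) for bi, ci in zip(b, c))
    lam_sqrt = s / delta_max          # fixes lambda from the active constraint
    return [math.sqrt(bi / ci) * lam_sqrt for bi, ci in zip(b, c)]

random.seed(0)
b = [random.uniform(0.5, 2.0) for _ in range(5)]
c = [random.uniform(0.5, 2.0) for _ in range(5)]
A = oc_single_constraint(b, c, delta_max=1.0)

# the constraint is exactly active at the optimum
print(abs(sum(bi / ai for bi, ai in zip(b, A)) - 1.0) < 1e-9)
# any uniform design scaled to just satisfy the constraint is heavier
A0 = [sum(b)] * 5                     # uniform areas meeting delta_max = 1
assert sum(ci * ai for ci, ai in zip(c, A)) <= sum(ci * a0 for ci, a0 in zip(c, A0)) + 1e-12
```

The weight advantage over the uniform design follows from the Cauchy-Schwarz inequality, which is the essence of why the optimality-criteria resizing rule is efficient for separable problems.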
Optimal Temporal Risk Assessment
Balci, Fuat; Freestone, David; Simen, Patrick; deSouza, Laura; Cohen, Jonathan D.; Holmes, Philip
2011-01-01
Time is an essential feature of most decisions, because the reward earned from decisions frequently depends on the temporal statistics of the environment (e.g., on whether decisions must be made under deadlines). Accordingly, evolution appears to have favored a mechanism that predicts intervals in the seconds to minutes range with high accuracy on average, but significant variability from trial to trial. Importantly, the subjective sense of time that results is sufficiently imprecise that maximizing rewards in decision-making can require substantial behavioral adjustments (e.g., accumulating less evidence for a decision in order to beat a deadline). Reward maximization in many daily decisions therefore requires optimal temporal risk assessment. Here, we review the temporal decision-making literature, conduct secondary analyses of relevant published datasets, and analyze the results of a new experiment. The paper is organized in three parts. In the first part, we review literature and analyze existing data suggesting that animals take account of their inherent behavioral variability (their “endogenous timing uncertainty”) in temporal decision-making. In the second part, we review literature that quantitatively demonstrates nearly optimal temporal risk assessment with sub-second and supra-second intervals using perceptual tasks (with humans and mice) and motor timing tasks (with humans). We supplement this section with original research that tested human and rat performance on a task that requires finding the optimal balance between two time-dependent quantities for reward maximization. This optimal balance in turn depends on the level of timing uncertainty. Corroborating the reviewed literature, humans and rats exhibited nearly optimal temporal risk assessment in this task. In the third section, we discuss the role of timing uncertainty in reward maximization in two-choice perceptual decision-making tasks and review literature that implicates timing uncertainty
Peng, Ting; Sun, Xiaochun; Mumm, Rita H
2014-01-01
Multiple trait integration (MTI) is a multi-step process of converting an elite variety/hybrid for value-added traits (e.g. transgenic events) through backcross breeding. From a breeding standpoint, MTI involves four steps: single event introgression, event pyramiding, trait fixation, and version testing. This study explores the feasibility of marker-aided backcross conversion of a target maize hybrid for 15 transgenic events in the light of the overall goal of MTI of recovering equivalent performance in the finished hybrid conversion along with reliable expression of the value-added traits. Using the results to optimize single event introgression (Peng et al. Optimized breeding strategies for multiple trait integration: I. Minimizing linkage drag in single event introgression. Mol Breed, 2013) which produced single event conversions of recurrent parents (RPs) with ≤8 cM of residual non-recurrent parent (NRP) germplasm with ~1 cM of NRP germplasm in the 20 cM regions flanking the event, this study focused on optimizing process efficiency in the second and third steps in MTI: event pyramiding and trait fixation. Using computer simulation and probability theory, we aimed to (1) fit an optimal breeding strategy for pyramiding of eight events into the female RP and seven in the male RP, and (2) identify optimal breeding strategies for trait fixation to create a 'finished' conversion of each RP homozygous for all events. In addition, next-generation seed needs were taken into account for a practical approach to process efficiency. Building on work by Ishii and Yonezawa (Optimization of the marker-based procedures for pyramiding genes from multiple donor lines: I. Schedule of crossing between the donor lines. Crop Sci 47:537-546, 2007a), a symmetric crossing schedule for event pyramiding was devised for stacking eight (seven) events in a given RP. Options for trait fixation breeding strategies considered selfing and doubled haploid approaches to achieve homozygosity
Ames Optimized TCA Configuration
NASA Technical Reports Server (NTRS)
Cliff, Susan E.; Reuther, James J.; Hicks, Raymond M.
1999-01-01
Configuration design at Ames was carried out with the SYN87-SB (single block) Euler code using a 193 x 49 x 65 C-H grid. The Euler solver is coupled to the constrained (NPSOL) and the unconstrained (QNMDIF) optimization packages. Since the single block grid is able to model only wing-body configurations, the nacelle/diverter effects were included in the optimization process by SYN87's option to superimpose the nacelle/diverter interference pressures on the wing. These interference pressures were calculated using the AIRPLANE code. AIRPLANE is an Euler solver that uses an unstructured tetrahedral mesh and is capable of computations about arbitrary complete configurations. In addition, the buoyancy effects of the nacelle/diverters were also included in the design process by imposing the pressure field obtained during the design process onto the triangulated surfaces of the nacelle/diverter mesh generated by AIRPLANE. The interference pressures and nacelle buoyancy effects are added to the final forces after each flow field calculation. Full details of the (recently enhanced) ghost nacelle capability are given in a related talk. The pseudo nacelle corrections were greatly improved during this design cycle. During the Ref H and Cycle 1 design activities, the nacelles were only translated and pitched. In the cycle 2 design effort the nacelles can translate vertically and pitch to accommodate the changes in the lower surface geometry. The diverter heights (between their leading and trailing edges) were modified during design as the shape of the lower wing changed, with the drag of the diverter changing accordingly. Both adjoint and finite difference gradients were used during optimization. The adjoint-based gradients were found to give good direction in the design space for configurations near the starting point, but as the design approached a minimum, the finite difference gradients were found to be more accurate. Use of finite difference gradients was limited by the
Multiobjective optimization of temporal processes.
Song, Zhe; Kusiak, Andrew
2010-06-01
This paper presents a dynamic predictive-optimization framework of a nonlinear temporal process. Data-mining (DM) and evolutionary strategy algorithms are integrated in the framework for solving the optimization model. DM algorithms learn dynamic equations from the process data. An evolutionary strategy algorithm is then applied to solve the optimization problem guided by the knowledge extracted by the DM algorithm. The concept presented in this paper is illustrated with the data from a power plant, where the goal is to maximize the boiler efficiency and minimize the limestone consumption. This multiobjective optimization problem can be either transformed into a single-objective optimization problem through preference aggregation approaches or into a Pareto-optimal optimization problem. The computational results have shown the effectiveness of the proposed optimization framework. PMID:19900853
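The preference-aggregation route described above can be sketched in miniature. Everything below is hypothetical: the quadratic surrogates stand in for the data-mined boiler model, and a bare (1+1) evolution strategy stands in for the paper's evolutionary algorithm; the weight, names, and parameter values are assumptions for illustration only:

```python
import random

def scalarize(f1, f2, w=0.7):
    # preference aggregation: a weighted sum turns the two objectives
    # (maximize efficiency, minimize limestone use) into a single one
    return w * f1 - (1 - w) * f2

def es_maximize(obj, x0, sigma=0.5, iters=3000):
    """Minimal (1+1) evolution strategy: mutate, keep the candidate if better."""
    x, best = x0, obj(x0)
    for _ in range(iters):
        cand = [xi + random.gauss(0, sigma) for xi in x]
        val = obj(cand)
        if val > best:
            x, best = cand, val
    return x, best

# toy surrogates standing in for the dynamic equations learned by data mining
eff  = lambda x: -(x[0] - 1.0) ** 2 - (x[1] - 2.0) ** 2   # "boiler efficiency"
lime = lambda x: (x[0] - 2.0) ** 2 + (x[1] - 1.0) ** 2    # "limestone use"

random.seed(0)
x, v = es_maximize(lambda x: scalarize(eff(x), lime(x)), [0.0, 0.0])
print([round(xi, 2) for xi in x])
```

For these surrogates the scalarized optimum sits at the weighted compromise point (1.3, 1.7) between the two objectives' individual optima; sweeping the weight w traces out a set of Pareto-optimal compromises.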
Combinatorial optimization in foundry practice
NASA Astrophysics Data System (ADS)
Antamoshkin, A. N.; Masich, I. S.
2016-04-01
The multicriteria mathematical model of foundry production capacity planning is suggested in the paper. The model is produced in terms of pseudo-Boolean optimization theory. Different search optimization methods were used to solve the obtained problem.
Taking Stock of Unrealistic Optimism
Shepperd, James A.; Klein, William M. P.; Waters, Erika A.; Weinstein, Neil D.
2015-01-01
Researchers have used terms such as unrealistic optimism and optimistic bias to refer to concepts that are similar but not synonymous. Drawing from three decades of research, we critically discuss how researchers define unrealistic optimism and we identify four types that reflect different measurement approaches: unrealistic absolute optimism at the individual and group level and unrealistic comparative optimism at the individual and group level. In addition, we discuss methodological criticisms leveled against research on unrealistic optimism and note that the criticisms are primarily relevant to only one type—the group form of unrealistic comparative optimism. We further clarify how the criticisms are not nearly as problematic even for unrealistic comparative optimism as they might seem. Finally, we note boundary conditions on the different types of unrealistic optimism and reflect on five broad questions that deserve further attention. PMID:26045714
ERIC Educational Resources Information Center
Reivich, Karen
2010-01-01
Dictionary definitions of optimism encompass two related concepts. The first of these is a hopeful disposition or a conviction that good will ultimately prevail. The second, broader conception of optimism refers to the belief, or the inclination to believe, that the world is the best of all possible worlds. In psychological research, optimism has…
Optimal Test Construction. Research Report.
ERIC Educational Resources Information Center
Veldkamp, Bernard P.
This paper discusses optimal test construction, which deals with the selection of items from a pool to construct a test that performs optimally with respect to the objective of the test and simultaneously meets all test specifications. Optimal test construction problems can be formulated as mathematical decision models. Algorithms and heuristics…
Metacognitive Control and Optimal Learning
ERIC Educational Resources Information Center
Son, Lisa K.; Sethi, Rajiv
2006-01-01
The notion of optimality is often invoked informally in the literature on metacognitive control. We provide a precise formulation of the optimization problem and show that optimal time allocation strategies depend critically on certain characteristics of the learning environment, such as the extent of time pressure, and the nature of the uptake…
A Primer on Unrealistic Optimism
Shepperd, James A.; Waters, Erika; Weinstein, Neil D.; Klein, William M. P.
2014-01-01
People display unrealistic optimism in their predictions for countless events, believing that their personal future outcomes will be more desirable than can possibly be true. We summarize the vast literature on unrealistic optimism by focusing on four broad questions: What is unrealistic optimism; when does it occur; why does it occur; and what are its consequences. PMID:26089606
Optimality criteria: A basis for multidisciplinary design optimization
NASA Astrophysics Data System (ADS)
Venkayya, V. B.
1989-01-01
This paper presents a generalization of what is frequently referred to in the literature as the optimality criteria approach in structural optimization. This generalization includes a unified presentation of the optimality conditions, the Lagrangian multipliers, and the resizing and scaling algorithms in terms of the sensitivity derivatives of the constraint and objective functions. The by-product of this generalization is the derivation of a set of simple nondimensional parameters which provides significant insight into the behavior of the structure as well as the optimization algorithm. A number of important issues, such as active and passive variables, constraints, and three types of linking, are discussed in the context of the present derivation of the optimality criteria approach. The formulation as presented in this paper brings multidisciplinary optimization within the purview of this extremely efficient optimality criteria approach.
Multicriteria VMAT optimization
Craft, David; McQuaid, Dualta; Wala, Jeremiah; Chen, Wei; Salari, Ehsan; Bortfeld, Thomas
2012-02-15
Purpose: To make the planning of volumetric modulated arc therapy (VMAT) faster and to explore the tradeoffs between planning objectives and delivery efficiency. Methods: A convex multicriteria dose optimization problem is solved for an angular grid of 180 equi-spaced beams. This allows the planner to navigate the ideal dose distribution Pareto surface and select a plan of desired target coverage versus organ at risk sparing. The selected plan is then made VMAT deliverable by a fluence map merging and sequencing algorithm, which combines neighboring fluence maps based on a similarity score and then delivers the merged maps together, simplifying delivery. Successive merges are made as long as the dose distribution quality is maintained. The complete algorithm is called VMERGE. Results: VMERGE is applied to three cases: a prostate, a pancreas, and a brain. In each case, the selected Pareto-optimal plan is matched almost exactly with the VMAT merging routine, resulting in a high quality plan delivered with a single arc in less than 5 min on average. Conclusions: VMERGE offers significant improvements over existing VMAT algorithms. The first is the multicriteria planning aspect, which greatly speeds up planning time and allows the user to select the plan, which represents the most desirable compromise between target coverage and organ at risk sparing. The second is the user-chosen epsilon-optimality guarantee of the final VMAT plan. Finally, the user can explore the tradeoff between delivery time and plan quality, which is a fundamental aspect of VMAT that cannot be easily investigated with current commercial planning systems.
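The merging step of VMERGE can be sketched as a greedy loop over neighboring fluence maps. The cosine similarity score and the weighted averaging below are illustrative stand-ins for the paper's similarity score and dose-quality check, and the toy maps are invented:

```python
import numpy as np

def merge_maps(maps, threshold=0.9):
    """Greedily combine the adjacent pair of fluence maps with the highest
    similarity score until no pair exceeds the threshold."""
    maps = [m.astype(float) for m in maps]
    weights = [1.0] * len(maps)   # how many original maps each entry holds

    def sim(a, b):
        # cosine similarity as a simple stand-in for the paper's score
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

    while len(maps) > 1:
        scores = [sim(maps[i], maps[i + 1]) for i in range(len(maps) - 1)]
        i = int(np.argmax(scores))
        if scores[i] < threshold:
            break                                  # quality would degrade: stop
        w1, w2 = weights[i], weights[i + 1]
        maps[i] = (w1 * maps[i] + w2 * maps[i + 1]) / (w1 + w2)   # merged map
        weights[i] = w1 + w2
        del maps[i + 1]; del weights[i + 1]
    return maps

# three nearly identical neighboring maps plus one very different map
m = [np.array([1.0, 2.0, 3.0]), np.array([1.1, 2.0, 3.0]),
     np.array([1.0, 2.1, 3.0]), np.array([9.0, 0.0, 0.0])]
print(len(merge_maps(m)))  # → 2: the similar trio merges, the outlier survives
```

In the real algorithm each successive merge is accepted only while the delivered dose distribution stays within the user-chosen epsilon of the Pareto-optimal plan; the threshold here is a crude proxy for that check.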
Optimizing your reception area.
Lachter, Jesse; Raldow, Ann; Molin, Niki
2012-01-01
Through the optimization of reception areas (waiting rooms), physicians can improve the medical experiences of their patients. A qualitative investigation identified issues relevant to improving the quality of the reception area and was used to develop a thorough questionnaire. Most patients were satisfied with accessibility, reception area conditions, and the performance of doctors and nurses. The main reasons for dissatisfaction were remediable. No correlations were found between patient satisfaction and age, sex, or religion. A 36-item checklist for satisfaction with reception areas is offered as a useful tool for health quality self-assessment.
Constructing optimal entanglement witnesses
NASA Astrophysics Data System (ADS)
Chruściński, Dariusz; Pytel, Justyna; Sarbicki, Gniewomir
2009-12-01
We provide a class of indecomposable entanglement witnesses. In the 4×4 case, it reproduces the well-known Breuer-Hall witness. We prove that these witnesses are optimal and atomic, i.e., they are able to detect the “weakest” quantum entanglement encoded into states with positive partial transposition. Equivalently, we provide a construction of indecomposable atomic maps in the algebra of 2k×2k complex matrices. It is shown that their structural physical approximations give rise to entanglement breaking channels. This result supports a recent conjecture by Korbicz et al. [Phys. Rev. A 78, 062105 (2008)].
Optimization of radiation protection
Lochard, J.
1981-07-01
The practical and theoretical problems raised by the optimization of radiological protection merit a review of decision-making methods, their relevance, and the way in which they are used in order to better determine what role they should play in the decision-making process. Following a brief summary of the theoretical background of the cost-benefit analysis, we examine the methodological choices implicit in the model presented in the International Commission on Radiological Protection Publication No. 26 and, particularly, the consequences of the theory that the level of radiation protection, the benefits, and the production costs of an activity can be treated separately.
Design Optimization Toolkit: Users' Manual
Aguilo Valentin, Miguel Alejandro
2014-07-01
The Design Optimization Toolkit (DOTk) is a stand-alone C++ software package intended to solve complex design optimization problems. The DOTk software package provides a range of solution methods suited for gradient/nongradient-based optimization, large-scale constrained optimization, and topology optimization. DOTk was designed to have a flexible user interface to allow easy access to DOTk solution methods from external engineering software packages. This inherent flexibility makes DOTk minimally intrusive to other engineering software packages. As part of this flexibility, the DOTk software package provides an easy-to-use MATLAB interface that enables users to call DOTk solution methods directly from the MATLAB command window.
An Improved Cockroach Swarm Optimization
Obagbuwa, I. C.; Adewumi, A. O.
2014-01-01
A hunger component is introduced to the existing cockroach swarm optimization (CSO) algorithm to improve its searching ability and population diversity. The original CSO was modelled with three components: chase-swarming, dispersion, and ruthless behavior; an additional hunger component, modelled using a partial differential equation (PDE) method, is included in this paper. An improved cockroach swarm optimization (ICSO) is proposed. The performance of the proposed algorithm is tested on well-known benchmarks and compared with the existing CSO, modified cockroach swarm optimization (MCSO), roach infestation optimization (RIO), and hungry roach infestation optimization (HRIO). The comparison results show clearly that the proposed algorithm outperforms the existing algorithms. PMID:24959611
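A bare-bones sketch of the swarm update on a sphere benchmark, keeping only the chase-swarming and dispersion components (the ruthless and hunger components of the paper are omitted); all parameter values and the `cso_minimize` interface are illustrative assumptions:

```python
import random

def cso_minimize(f, dim, n=20, iters=300, step=0.5, bounds=(-5.0, 5.0)):
    """Minimal cockroach-swarm-style search: individuals chase the global
    best (chase-swarming) with occasional random walks (dispersion)."""
    lo, hi = bounds
    X = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n)]
    g = min(X, key=f)                       # global best so far
    for _ in range(iters):
        for i in range(n):
            # chase-swarming: move a random fraction of the way toward g
            X[i] = [xi + step * random.random() * (gi - xi)
                    for xi, gi in zip(X[i], g)]
            # dispersion: small random walk to maintain diversity
            if random.random() < 0.1:
                X[i] = [xi + random.gauss(0, 0.1) for xi in X[i]]
        g = min(X + [g], key=f)             # keep the best ever seen
    return g

random.seed(2)
sphere = lambda x: sum(xi * xi for xi in x)
best = cso_minimize(sphere, dim=3)
print(round(sphere(best), 4))
```

On the sphere function the swarm contracts onto the best individual while dispersion keeps probing its neighborhood, driving the objective close to zero; the hunger component in the paper exists precisely to prevent this contraction from stagnating on harder landscapes.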
Synthesizing optimal waste blends
Narayan, V.; Diwekar, W.M.; Hoza, M.
1996-10-01
Vitrification of tank wastes to form glass is a technique that will be used for the disposal of high-level waste at Hanford. Process and storage economics show that minimizing the total number of glass logs produced is the key to keeping cost as low as possible. The amount of glass produced can be reduced by blending of the wastes. The optimal way to combine the tanks to minimize the volume of glass can be determined from a discrete blend calculation. However, this problem results in a combinatorial explosion as the number of tanks increases. Moreover, the property constraints make this problem highly nonconvex, where many algorithms get trapped in local minima. In this paper the authors examine the use of different combinatorial optimization approaches to solve this problem. A two-stage approach using a combination of simulated annealing and nonlinear programming (NLP) is developed. The results of different methods such as the heuristics approach based on human knowledge and judgment, the mixed integer nonlinear programming (MINLP) approach with GAMS, and branch and bound with a lower bound derived from the structure of the given blending problem are compared with this coupled simulated annealing and NLP approach.
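The simulated-annealing stage can be illustrated on a toy discrete-blend problem. The sketch below is a generic annealing loop over tank-to-blend assignments with an invented single-property glass model; the real Hanford problem has many coupled, nonconvex property constraints, so every number here is hypothetical:

```python
import math, random

def anneal(assign, cost, move, T0=1.0, alpha=0.995, steps=4000):
    """Generic simulated annealing: accept worse moves with
    probability exp(-dE/T), cooling the temperature geometrically."""
    cur, cur_c = assign, cost(assign)
    best, best_c = cur, cur_c
    T = T0
    for _ in range(steps):
        nxt = move(cur)
        d = cost(nxt) - cur_c
        if d < 0 or random.random() < math.exp(-d / T):
            cur, cur_c = nxt, cur_c + d
            if cur_c < best_c:
                best, best_c = cur, cur_c
        T *= alpha
    return best, best_c

# toy model: 8 tanks with a single "sodium" fraction; the glass volume of a
# blend grows as its mean fraction strays from a target window
sodium = [0.05, 0.30, 0.10, 0.25, 0.08, 0.28, 0.12, 0.22]

def glass_volume(assign, n_blends=2):
    total = 0.0
    for b in range(n_blends):
        tanks = [sodium[i] for i, a in enumerate(assign) if a == b]
        if not tanks:
            return float('inf')        # an empty blend is not allowed
        mean = sum(tanks) / len(tanks)
        total += len(tanks) * (1.0 + 10.0 * abs(mean - 0.175))
    return total

def move(assign):
    a = list(assign)
    a[random.randrange(len(a))] ^= 1   # reassign one tank to the other blend
    return a

random.seed(1)
best, vol = anneal([i % 2 for i in range(8)], glass_volume, move)
print(vol)
```

For this toy data a perfect 4/4 split puts both blends exactly at the 0.175 target, giving the minimum volume of 8.0; in the paper's two-stage approach the annealing would hand such a discrete assignment to an NLP solver for the continuous blend refinement.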
Diffusion with optimal resetting
NASA Astrophysics Data System (ADS)
Evans, Martin R.; Majumdar, Satya N.
2011-10-01
We consider the mean time to absorption by an absorbing target of a diffusive particle with the addition of a process whereby the particle is reset to its initial position with rate r. We consider several generalizations of the model of Evans and Majumdar (2011 Phys. Rev. Lett. 106 160601): (i) a space-dependent resetting rate r(x); (ii) resetting to a random position z drawn from a resetting distribution P(z); and (iii) a spatial distribution for the absorbing target P_T(x). As an example of (i) we show that the introduction of a non-resetting window around the initial position can reduce the mean time to absorption provided that the initial position is sufficiently far from the target. We address the problem of optimal resetting, that is, minimizing the mean time to absorption for a given target distribution. For an exponentially decaying target distribution centred at the origin we show that a transition in the optimal resetting distribution occurs as the target distribution narrows.
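For the original Evans-Majumdar model (constant rate r, resetting to the initial position x0, absorbing target at the origin), the mean time to absorption has the closed form T(r) = (exp(x0*sqrt(r/D)) - 1)/r, which makes the existence of an optimal resetting rate easy to see numerically:

```python
import math

def mfpt(r, x0=1.0, D=1.0):
    """Mean first-passage time to the origin for a 1D diffusing particle
    started (and reset) at x0 with constant resetting rate r:
    T(r) = (exp(sqrt(r/D)*x0) - 1) / r  (Evans & Majumdar, 2011)."""
    a = math.sqrt(r / D) * x0
    return (math.exp(a) - 1.0) / r

# scan for the optimal constant resetting rate: too little resetting lets
# excursions wander off, too much keeps restarting promising ones
rates = [0.1 * k for k in range(1, 100)]
best = min(rates, key=mfpt)
print(round(best, 1))  # → 2.5
```

This matches the known optimum at sqrt(r/D)*x0 ≈ 1.5936, i.e. r* ≈ 2.54 for x0 = D = 1; without resetting (r → 0) the mean absorption time diverges.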
Bower, Stanley
2011-12-31
A 5.0L V8 twin-turbocharged direct injection engine was designed, built, and tested for the purpose of assessing the fuel economy and performance in the F-Series pickup of the Dual Fuel engine concept and of an E85 optimized FFV engine. Additionally, production 3.5L gasoline turbocharged direct injection (GTDI) EcoBoost engines were converted to Dual Fuel capability and used to evaluate the cold start emissions and fuel system robustness of the Dual Fuel engine concept. Project objectives were: to develop a roadmap to demonstrate a minimized fuel economy penalty for an F-Series FFV truck with a highly boosted, high compression ratio spark ignition engine optimized to run with ethanol fuel blends up to E85; to reduce FTP 75 energy consumption by 15% - 20% compared to an equally powered vehicle with a current production gasoline engine; and to meet ULEV emissions, with a stretch target of ULEV II / Tier II Bin 4. All project objectives were met or exceeded.
MAGEE,GLEN I.
2000-08-03
Computers transfer data in a number of different ways. Whether through a serial port, a parallel port, over a modem, over an ethernet cable, or internally from a hard disk to memory, some data will be lost. To compensate for that loss, numerous error detection and correction algorithms have been developed. One of the most common error correction codes is the Reed-Solomon code, which is a special subset of BCH (Bose-Chaudhuri-Hocquenghem) linear cyclic block codes. In the AURA project, an unmanned aircraft sends the data it collects back to earth so it can be analyzed during flight and possible flight modifications made. To counter possible data corruption during transmission, the data is encoded using a multi-block Reed-Solomon implementation with a possibly shortened final block. In order to maximize the amount of data transmitted, it was necessary to reduce the computation time of a Reed-Solomon encoding to three percent of the processor's time. To achieve such a reduction, many code optimization techniques were employed. This paper outlines the steps taken to reduce the processing time of a Reed-Solomon encoding and the insight into modern optimization techniques gained from the experience.
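The abstract does not list the AURA encoder's actual optimizations, but a standard speedup for Reed-Solomon arithmetic is replacing bitwise GF(2^8) multiplication with log/antilog table lookups. A sketch follows; the primitive polynomial 0x11d is an assumption (the common choice), not necessarily the project's.

```python
# Build log/antilog tables for GF(256) over the primitive polynomial 0x11d.
PRIM = 0x11d
EXP = [0] * 512          # doubled so log-sums need no modulo
LOG = [0] * 256
x = 1
for i in range(255):
    EXP[i] = x
    LOG[x] = i
    x <<= 1
    if x & 0x100:        # reduce modulo the primitive polynomial
        x ^= PRIM
for i in range(255, 512):
    EXP[i] = EXP[i - 255]

def gf_mul_slow(a, b):
    """Reference: bitwise carry-less multiply with polynomial reduction."""
    p = 0
    while b:
        if b & 1:
            p ^= a
        b >>= 1
        a <<= 1
        if a & 0x100:
            a ^= PRIM
    return p

def gf_mul_fast(a, b):
    """Optimized: a*b = antilog(log a + log b), two lookups and an add."""
    if a == 0 or b == 0:
        return 0
    return EXP[LOG[a] + LOG[b]]
```

The table version turns the inner loop of each encoding step into constant-time lookups, the kind of change that makes the percent-level CPU budgets mentioned above attainable.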
Optimal Synchronizability of Bearings
NASA Astrophysics Data System (ADS)
Araújo, N. A. M.; Seybold, H.; Baram, R. M.; Herrmann, H. J.; Andrade, J. S., Jr.
2013-02-01
Bearings are mechanical dissipative systems that, when perturbed, relax toward a synchronized (bearing) state. Here we find that bearings can be perceived as physical realizations of complex networks of oscillators with asymmetrically weighted couplings. Accordingly, these networks can exhibit optimal synchronization properties through fine-tuning of the local interaction strength as a function of node degree [Motter, Zhou, and Kurths, Phys. Rev. E 71, 016116 (2005)]. We show that, in analogy, the synchronizability of bearings can be maximized by counterbalancing the number of contacts and the inertia of their constituting rotor disks through the mass-radius relation m ~ r^α, with an optimal exponent α = α* which converges to unity for a large number of rotors. Under this condition, and regardless of the presence of a long-tailed distribution of disk radii composing the mechanical system, the average participation per disk is maximized and the energy dissipation rate is homogeneously distributed among elementary rotors.
Polynomial optimization techniques for activity scheduling. Optimization based prototype scheduler
NASA Technical Reports Server (NTRS)
Reddy, Surender
1991-01-01
Polynomial optimization techniques for activity scheduling (optimization based prototype scheduler) are presented in the form of the viewgraphs. The following subject areas are covered: agenda; need and viability of polynomial time techniques for SNC (Space Network Control); an intrinsic characteristic of SN scheduling problem; expected characteristics of the schedule; optimization based scheduling approach; single resource algorithms; decomposition of multiple resource problems; prototype capabilities, characteristics, and test results; computational characteristics; some features of prototyped algorithms; and some related GSFC references.
Particle swarm optimization for complex nonlinear optimization problems
NASA Astrophysics Data System (ADS)
Alexandridis, Alex; Famelis, Ioannis Th.; Tsitouras, Charalambos
2016-06-01
This work presents the application of a technique belonging to evolutionary computation, namely particle swarm optimization (PSO), to complex nonlinear optimization problems. To be more specific, a PSO optimizer is setup and applied to the derivation of Runge-Kutta pairs for the numerical solution of initial value problems. The effect of critical PSO operational parameters on the performance of the proposed scheme is thoroughly investigated.
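As an illustration of the PSO machinery described above, a minimal global-best PSO might look like the sketch below. It is applied here to a toy sphere function rather than the paper's Runge-Kutta pair design objective; all operational parameters (inertia w, acceleration coefficients c1, c2) are conventional defaults, not the values the authors tuned.

```python
import random

def pso(f, dim, bounds, n_particles=30, iters=300, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimize f over a box using global-best particle swarm optimization."""
    rng = random.Random(seed)
    lo, hi = bounds
    X = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    V = [[0.0] * dim for _ in range(n_particles)]
    P = [x[:] for x in X]                       # personal best positions
    pbest = [f(x) for x in X]
    g = min(range(n_particles), key=lambda i: pbest[i])
    G, gbest = P[g][:], pbest[g]                # global best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                # inertia + cognitive pull (personal best) + social pull (global best)
                V[i][d] = (w * V[i][d] + c1 * r1 * (P[i][d] - X[i][d])
                           + c2 * r2 * (G[d] - X[i][d]))
                X[i][d] = min(max(X[i][d] + V[i][d], lo), hi)
            fx = f(X[i])
            if fx < pbest[i]:
                pbest[i], P[i] = fx, X[i][:]
                if fx < gbest:
                    gbest, G = fx, X[i][:]
    return G, gbest
```

For the actual application, f would evaluate the error coefficients of a candidate Runge-Kutta pair rather than a benchmark function.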
Ultimate open pit stochastic optimization
NASA Astrophysics Data System (ADS)
Marcotte, Denis; Caron, Josiane
2013-02-01
Classical open pit optimization (maximum closure problem) is made on block estimates, without directly considering the block grades uncertainty. We propose an alternative approach of stochastic optimization. The stochastic optimization is taken as the optimal pit computed on the block expected profits, rather than expected grades, computed from a series of conditional simulations. The stochastic optimization generates, by construction, larger ore and waste tonnages than the classical optimization. Contrary to the classical approach, the stochastic optimization is conditionally unbiased for the realized profit given the predicted profit. A series of simulated deposits with different variograms are used to compare the stochastic approach, the classical approach and the simulated approach that maximizes expected profit among simulated designs. Profits obtained with the stochastic optimization are generally larger than the classical or simulated pit. The main factor controlling the relative gain of stochastic optimization compared to classical approach and simulated pit is shown to be the information level as measured by the boreholes spacing/range ratio. The relative gains of the stochastic approach over the classical approach increase with the treatment costs but decrease with mining costs. The relative gains of the stochastic approach over the simulated pit approach increase both with the treatment and mining costs. At early stages of an open pit project, when uncertainty is large, the stochastic optimization approach appears preferable to the classical approach or the simulated pit approach for fair comparison of the values of alternative projects and for the initial design and planning of the open pit.
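The core distinction above — optimizing on expected block profits rather than on the profit of expected grades — can be shown on a single block. Because the ore/waste decision makes profit a nonlinear (convex) function of grade, the two inputs differ. The price, costs, and grade distribution below are hypothetical.

```python
import random

def block_profit(grade, price=10.0, mining_cost=2.0, treat_cost=4.0):
    """Profit of one block: mining is always paid; the block is treated
    as ore only if revenue covers the treatment cost (else sent to waste)."""
    revenue = price * grade
    return max(revenue - treat_cost, 0.0) - mining_cost

rng = random.Random(42)
# stand-in for conditional simulations of one block's grade
sims = [max(0.0, rng.gauss(0.45, 0.25)) for _ in range(5000)]

mean_grade = sum(sims) / len(sims)
profit_of_mean = block_profit(mean_grade)                         # classical input
expected_profit = sum(block_profit(g) for g in sims) / len(sims)  # stochastic input
```

By Jensen's inequality the expected profit exceeds the profit of the expected grade whenever the grade uncertainty straddles the cutoff, which is why the stochastic pit differs from the classical one.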
Richard, Morgiane; Fryett, Matthew; Miller, Samantha; Booth, Ian; Grebogi, Celso; Moura, Alessandro
2012-01-01
DNA within cells is subject to damage from various sources. Organisms have evolved a number of mechanisms to repair DNA damage. The activity of repair enzymes carries its own risk, however, because the repair of two nearby lesions may lead to the breakup of DNA and result in cell death. We propose a mathematical theory of the damage and repair process in the important scenario where lesions are caused in bursts. We use this model to show that there is an optimal level of repair enzymes within cells, which yields the best cellular response to damage. This optimal level is explained as the best trade-off between fast repair and a low probability of causing double-stranded breaks. We derive our results analytically and test them using stochastic simulations, and compare our predictions with current biological knowledge. PMID:21945337
Optimality in Data Assimilation
NASA Astrophysics Data System (ADS)
Nearing, Grey; Yatheendradas, Soni
2016-04-01
It costs a lot more to develop and launch an earth-observing satellite than it does to build a data assimilation system. As such, we propose that it is important to understand the efficiency of our assimilation algorithms at extracting information from remote sensing retrievals. To address this, we propose that it is necessary to adopt a completely general definition of "optimality" that explicitly acknowledges all differences between the parametric constraints of our assimilation algorithm (e.g., Gaussianity, partial linearity, Markovian updates) and the true nature of the environmental system and observing system. In fact, it is not only possible, but incredibly straightforward, to measure the optimality (in this more general sense) of any data assimilation algorithm as applied to any intended model or natural system. We measure the information content of remote sensing data conditional on the fact that we are already running a model and then measure the actual information extracted by data assimilation. The ratio of the two is an efficiency metric, and optimality is defined as occurring when the data assimilation algorithm is perfectly efficient at extracting information from the retrievals. We measure the information content of the remote sensing data in a way that, unlike triple collocation, does not rely on any a priori presumed relationship (e.g., linear) between the retrieval and the ground truth; however, like triple collocation, it is insensitive to the spatial mismatch between point-based measurements and grid-scale retrievals. This theory and method are therefore suitable for use with both dense and sparse validation networks. Additionally, the method we propose is *constructive* in the sense that it provides guidance on how to improve data assimilation systems. All data assimilation strategies can be reduced to approximations of Bayes' law, and we measure the fractions of total information loss that are due to individual assumptions or approximations in the
Cyclone performance and optimization
Leith, D.
1989-06-15
The objectives of this project are: to characterize the gas flow pattern within cyclones, to revise the theory for cyclone performance on the basis of these findings, and to design and test cyclones whose dimensions have been optimized using the revised performance theory. This work is important because its successful completion will aid in the technology for combustion of coal in pressurized, fluidized beds. We have now received all the equipment necessary for the flow visualization studies described over the last two progress reports, and have begun more detailed studies of the gas flow pattern within cyclones, as detailed below. We have also begun studies of the effect of particle concentration on cyclone performance. This work is critical to the application of our results to commercial operations.
DENSE MEDIA CYCLONE OPTIMIZATION
Gerald H. Luttrell
2002-01-14
During the past quarter, float-sink analyses were completed for four of seven circuits evaluated in this project. According to the commercial laboratory, the analyses for the remaining three sites will be finished by mid February 2002. In addition, it was necessary to repeat several of the float-sink tests to resolve problems identified during the analysis of the experimental data. In terms of accomplishments, a website is being prepared to distribute project findings and software to the public. This site will include (i) an operators manual for HMC operation and maintenance (already available in hard copy), (ii) an expert system software package for evaluating and optimizing HMC performance (in development), and (iii) a spreadsheet-based process model for plant designers (in development). Several technology transfer activities were also carried out, including the publication of project results in proceedings and the training of plant operators via workshops.
Desalination Plant Optimization
Wilson, J. V.
1992-10-01
MSF21 and VTE21 perform design and costing calculations for multistage flash evaporator (MSF) and multieffect vertical tube evaporator (VTE) desalination plants. An optimization capability is available, if desired. The MSF plant consists of a recovery section, reject section, brine heater, and associated buildings and equipment. Operating costs and direct and indirect capital costs for plant, buildings, site, and intakes are calculated. Computations are based on the first and last stages of each section and a typical middle recovery stage. As a result, the program runs rapidly but does not give stage by stage parameters. The VTE plant consists of vertical tube effects, multistage flash preheater, condenser, and brine heater and associated buildings and equipment. Design computations are done for each vertical tube effect, but preheater computations are based on the first and last stages and a typical middle stage.
Optimized nanoporous materials.
Braun, Paul V.; Langham, Mary Elizabeth; Jacobs, Benjamin W.; Ong, Markus D.; Narayan, Roger J.; Pierson, Bonnie E.; Gittard, Shaun D.; Robinson, David B.; Ham, Sung-Kyoung; Chae, Weon-Sik; Gough, Dara V.; Wu, Chung-An Max; Ha, Cindy M.; Tran, Kim L.
2009-09-01
Nanoporous materials have maximum practical surface areas for electrical charge storage; every point in an electrode is within a few atoms of an interface at which charge can be stored. Metal-electrolyte interfaces make best use of surface area in porous materials. However, ion transport through long, narrow pores is slow. We seek to understand and optimize the tradeoff between capacity and transport. Modeling and measurements of nanoporous gold electrodes have allowed us to determine design principles, including the fact that these materials can deplete salt from the electrolyte, increasing resistance. We have developed fabrication techniques to demonstrate architectures inspired by these principles that may overcome identified obstacles. A key concept is that electrodes should be as close together as possible; this is likely to involve an interpenetrating pore structure. However, this may prove extremely challenging to fabricate at the finest scales; a hierarchically porous structure can be a worthy compromise.
DENSE MEDIA CYCLONE OPTIMIZATION
Gerald H. Luttrell
2002-04-11
The test data obtained from the Baseline Assessment that compares the performance of the density tracers to that of different sizes of coal particles is now complete. The experimental results show that the tracer data can indeed be used to accurately predict HMC performance. The following conclusions were drawn: (i) the tracer curve is slightly sharper than the curve for the coarsest size fraction of coal (probably due to the greater resolution of the tracer technique), (ii) the Ep increases with decreasing coal particle size, and (iii) the Ep values are not excessively large for well-maintained HMC circuits. The major problems discovered were associated with improper apex-to-vortex finder ratios and particle hang-up due to media segregation. Only one plant yielded test data that were typical of a fully optimized level of performance.
Gogol, Manfred
2015-08-01
Stress is a stimulus or incident that exerts an exogenic or endogenic influence on an organism and elicits a biological and/or psychological adaptation from the organism. Stressors can be differentiated by their temporal impact (e.g. acute, chronic or acute on chronic), strength and quality. The consequences of stress exposure and adaptation can be measured at the cellular level and as (sub)clinical manifestations, and this process can be biologically seen as a continuum. Over the course of life there is an accumulation of stress incidents resulting in a diminution of the capability for adaptation and repair mechanisms. By means of various interventions it is possible to improve the individual capability for adaptation, but it is currently not possible to definitively disentangle alterations due to ageing from the development of diseases. As a consequence the term "healthy ageing" should be replaced by the concept of "optimal ageing". PMID:26208575
Optimal Foraging by Zooplankton
NASA Astrophysics Data System (ADS)
Garcia, Ricardo; Moss, Frank
2007-03-01
We describe experiments with several species of the zooplankton, Daphnia, while foraging for food. They move in sequences: hop-pause-turn-hop etc. While we have recorded hop lengths, hop times, pause times and turning angles, our focus is on histograms representing the distributions of the turning angles. We find that different species, including adults and juveniles, move with similar turning angle distributions described by exponential functions. Random walk simulations and a theory based on active Brownian particles indicate a maximum in food gathering efficiency at an optimal width of the turning angle distribution. Foraging takes place within a fixed size food patch during a fixed time. We hypothesize that the exponential distributions were selected for survival over evolutionary time scales.
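The hop-pause-turn foraging walk described above can be sketched as a simple simulation. The double-exponential (Laplace) turning-angle distribution matches the exponential histograms mentioned in the abstract, but the patch size, hop count, and crude edge handling below are illustrative assumptions, not the authors' model.

```python
import random, math

def cells_visited(sigma, hops=400, hop_len=1.0, patch=20.0, seed=0):
    """Count distinct unit cells visited by a hop-turn walker whose
    turning angles are double-exponentially distributed with scale
    sigma, confined to a square food patch (fixed time = fixed hops)."""
    rng = random.Random(seed)
    x = y = 0.0
    heading = rng.uniform(-math.pi, math.pi)
    seen = set()
    for _ in range(hops):
        # exponential magnitude with a random sign -> Laplace turning angle
        turn = rng.expovariate(1.0 / sigma) * rng.choice((-1, 1))
        heading += turn
        x += hop_len * math.cos(heading)
        y += hop_len * math.sin(heading)
        # crude confinement to the patch boundary
        x = max(-patch, min(patch, x))
        y = max(-patch, min(patch, y))
        seen.add((int(x), int(y)))
    return len(seen)
```

Sweeping sigma and averaging over seeds gives a coverage-versus-turning-width curve of the kind the random-walk simulations in the paper use to locate the optimal distribution width.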
NASA Astrophysics Data System (ADS)
Rebilas, Krzysztof
2013-02-01
Consider a skier who goes down a takeoff ramp, attains a speed V, and jumps, attempting to land as far as possible down the hill below (Fig. 1). At the moment of takeoff the angle between the skier's velocity and the horizontal is α. What is the optimal angle α that makes the jump the longest possible for the fixed magnitude of the velocity V? Of course, in practice, this is a very sophisticated problem; the skier's range depends on a variety of complex factors in addition to V and α. However, if we ignore these and assume the jumper is in free fall between the takeoff ramp and the landing point below, the problem becomes an exercise in kinematics that is suitable for introductory-level students. The solution is presented here.
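For the idealized free-fall case the article describes, with the landing hill modeled as a straight slope inclined at angle β below the horizontal, the standard kinematics result can be sketched as follows (this derivation may differ in presentation from the article's solution):

```latex
% Range along a slope inclined at \beta below the horizontal,
% for takeoff speed V at angle \alpha above the horizontal:
R(\alpha) = \frac{2V^{2}\cos\alpha\,\sin(\alpha+\beta)}{g\cos^{2}\beta}
% Setting dR/d\alpha = 0 gives \cos(2\alpha+\beta) = 0, hence
\alpha^{*} = \frac{\pi}{4} - \frac{\beta}{2}
```

so the optimal takeoff angle drops below 45° as the landing slope steepens, and reduces to the familiar 45° for a horizontal landing surface (β = 0).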
Public optimism towards nanomedicine
Bottini, Massimo; Rosato, Nicola; Gloria, Fulvia; Adanti, Sara; Corradino, Nunziella; Bergamaschi, Antonio; Magrini, Andrea
2011-01-01
Background Previous benefit–risk perception studies and social experiences have clearly demonstrated that any emerging technology platform that ignores benefit–risk perception by citizens might jeopardize its public acceptability and further development. The aim of this survey was to investigate the Italian judgment on nanotechnology and which demographic and heuristic variables were most influential in shaping public perceptions of the benefits and risks of nanotechnology. Methods In this regard, we investigated the role of four demographic (age, gender, education, and religion) and one heuristic (knowledge) predisposing factors. Results The present study shows that gender, education, and knowledge (but not age and religion) influenced the Italian perception of how nanotechnology will (positively or negatively) affect some areas of everyday life in the next twenty years. Furthermore, the picture that emerged from our study is that Italian citizens, despite minimal familiarity with nanotechnology, showed optimism towards nanotechnology applications, especially those related to health and medicine (nanomedicine). The high regard for nanomedicine was tied to the perception of risks associated with environmental and societal implications (division among social classes and increased public expenses) rather than health issues. However, more highly educated people showed greater concern for health issues but this did not decrease their strong belief about the benefits that nanotechnology would bring to medical fields. Conclusion The results reported here suggest that public optimism towards nanomedicine appears to justify increased scientific effort and funding for medical applications of nanotechnology. It also obligates toxicologists, politicians, journalists, entrepreneurs, and policymakers to establish a more responsible dialog with citizens regarding the nature and implications of this emerging technology platform. PMID:22267931
Optimal packings of superballs
NASA Astrophysics Data System (ADS)
Jiao, Y.; Stillinger, F. H.; Torquato, S.
2009-04-01
Dense hard-particle packings are intimately related to the structure of low-temperature phases of matter and are useful models of heterogeneous materials and granular media. Most studies of the densest packings in three dimensions have considered spherical shapes, and it is only more recently that nonspherical shapes (e.g., ellipsoids) have been investigated. Superballs (whose shapes are defined by |x1|^{2p} + |x2|^{2p} + |x3|^{2p} ≤ 1) provide a versatile family of convex particles (p ≥ 0.5) with both cubic-like and octahedral-like shapes as well as concave particles (0 < p < 0.5).
optimal ones. The maximal packing density as a function of p is nonanalytic at the sphere point (p = 1) and increases dramatically as p moves away from unity. Two more nontrivial nonanalytic behaviors occur at p*_c = 1.1509… and p*_o = ln 3/ln 4 = 0.7924… for "cubic" and "octahedral" superballs, respectively, where different Bravais lattice packings possess the same densities. The packing characteristics determined by the broken rotational symmetry of superballs are similar to but richer than their two-dimensional "superdisk" counterparts [Y. Jiao et al., Phys. Rev. Lett. 100, 245504 (2008)] and are distinctly different from those of ellipsoid packings. Our candidate optimal superball packings provide a starting point to quantify the equilibrium phase behavior of superball systems, which should deepen our understanding of the statistical thermodynamics of nonspherical-particle systems.
OPTIMAL NETWORK TOPOLOGY DESIGN
NASA Technical Reports Server (NTRS)
Yuen, J. H.
1994-01-01
This program was developed as part of a research study on the topology design and performance analysis for the Space Station Information System (SSIS) network. It uses an efficient algorithm to generate candidate network designs (consisting of subsets of the set of all network components) in increasing order of their total costs, and checks each design to see if it forms an acceptable network. This technique gives the true cost-optimal network, and is particularly useful when the network has many constraints and not too many components. It is intended that this new design technique consider all important performance measures explicitly and take into account the constraints due to various technical feasibilities. In the current program, technical constraints are taken care of by the user properly forming the starting set of candidate components (e.g. nonfeasible links are not included). As subsets are generated, they are tested to see if they form an acceptable network by checking that all requirements are satisfied. Thus the first acceptable subset encountered gives the cost-optimal topology satisfying all given constraints. The user must sort the set of "feasible" link elements in increasing order of their costs. The program prompts the user for the following information for each link: 1) cost, 2) connectivity (number of stations connected by the link), and 3) the stations connected by that link. Unless instructed to stop, the program generates all possible acceptable networks in increasing order of their total costs. The program is written only to generate topologies that are simply connected. Tests on reliability, delay, and other performance measures are discussed in the documentation, but have not been incorporated into the program. This program is written in PASCAL for interactive execution and has been implemented on an IBM PC series computer operating under PC DOS. The disk contains source code only. This program was developed in 1985.
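The enumeration strategy described — candidate designs generated in increasing order of total cost, with the first acceptable one being cost-optimal — can be sketched as a best-first search over link subsets using a heap. The connectivity check below is one possible acceptability test; the actual program's requirements are richer, and all names here are illustrative.

```python
import heapq

def cheapest_acceptable(links, costs, stations, acceptable):
    """Enumerate subsets of links in nondecreasing total cost (best-first
    search over a heap) and return the first subset that passes the
    acceptability test. links: list of (station_a, station_b) pairs;
    costs: parallel list of link costs (sorted internally)."""
    order = sorted(range(len(links)), key=lambda i: costs[i])
    heap = [(0.0, ())]                      # (total cost, tuple of sorted indices)
    while heap:
        total, ids = heapq.heappop(heap)
        subset = [links[order[i]] for i in ids]
        if acceptable(subset, stations):
            return total, subset            # first hit is cost-optimal
        start = ids[-1] + 1 if ids else 0
        for j in range(start, len(order)):  # extend only past the last index
            heapq.heappush(heap, (total + costs[order[j]], ids + (j,)))
    return None

def connected(subset, stations):
    """Example acceptability test: the chosen links connect all stations
    (union-find with path halving)."""
    parent = {s: s for s in stations}
    def find(s):
        while parent[s] != s:
            parent[s] = parent[parent[s]]
            s = parent[s]
        return s
    for a, b in subset:
        parent[find(a)] = find(b)
    return len({find(s) for s in stations}) == 1
```

Each subset is generated exactly once from its prefix, and since prefixes never cost more than their extensions, the heap yields subsets in nondecreasing total cost, so the first acceptable subset is guaranteed optimal.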
Optimal inverse functions created via population-based optimization.
Jennings, Alan L; Ordóñez, Raúl
2014-06-01
Finding optimal inputs for a multiple-input, single-output system is taxing for a system operator. Population-based optimization is used to create sets of functions that produce a locally optimal input based on a desired output. An operator or higher level planner could use one of the functions in real time. For the optimization, each agent in the population uses the cost and output gradients to take steps lowering the cost while maintaining their current output. When an agent reaches an optimal input for its current output, additional agents are generated in the output gradient directions. The new agents then settle to the local optima for the new output values. The set of associated optimal points forms an inverse function, via spline interpolation, from a desired output to an optimal input. In this manner, multiple locally optimal functions can be created. These functions are naturally clustered in input and output spaces allowing for a continuous inverse function. The operator selects the best cluster over the anticipated range of desired outputs and adjusts the set point (desired output) while maintaining optimality. This reduces the demand from controlling multiple inputs, to controlling a single set point with no loss in performance. Results are demonstrated on a sample set of functions and on a robot control problem. PMID:24235281
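The construction above — agents that lower the cost while holding their output fixed, whose optimal points are then interpolated into an inverse function — can be sketched on a toy two-input, single-output system. The cost x1² + (x2−1)² and output x1 + x2 are illustrative choices, not the paper's test functions.

```python
def optimal_input(y, steps=500, lr=0.05):
    """One 'agent': gradient steps that lower the cost while holding the
    output x1 + x2 = y fixed, by projecting the cost gradient onto the
    null space of the output gradient (1, 1)."""
    x1, x2 = y, 0.0                   # feasible start: output already equals y
    for _ in range(steps):
        g1, g2 = 2 * x1, 2 * (x2 - 1)  # gradient of cost x1^2 + (x2 - 1)^2
        d = (g1 - g2) / 2.0            # component along the (1, -1) level-set direction
        x1 -= lr * d
        x2 += lr * d                   # output x1 + x2 is preserved exactly
    return x1, x2

# table of (desired output -> locally optimal input) pairs
ys = [i * 0.5 for i in range(-4, 5)]
table = [optimal_input(y) for y in ys]

def inverse(y):
    """Piecewise-linear interpolation through the set of optimal points
    (standing in for the paper's spline interpolation)."""
    for k in range(len(ys) - 1):
        if ys[k] <= y <= ys[k + 1]:
            t = (y - ys[k]) / (ys[k + 1] - ys[k])
            a, b = table[k], table[k + 1]
            return (a[0] + t * (b[0] - a[0]), a[1] + t * (b[1] - a[1]))
    raise ValueError("y outside tabulated range")
```

The operator then controls the single set point y; for this quadratic toy problem the tabulated optima lie exactly on the analytic solution x1 = (y−1)/2, x2 = (y+1)/2.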
Four-body trajectory optimization
NASA Technical Reports Server (NTRS)
Pu, C. L.; Edelbaum, T. N.
1974-01-01
A comprehensive optimization program has been developed for computing fuel-optimal trajectories between the earth and a point in the sun-earth-moon system. It presents methods for generating fuel-optimal two-impulse trajectories which may originate at the earth or a point in space, and fuel-optimal three-impulse trajectories between two points in space. The extrapolation of the state vector and the computation of the state transition matrix are accomplished by the Stumpff-Weiss method. The cost and constraint gradients are computed analytically in terms of the terminal state and the state transition matrix. The 4-body Lambert problem is solved by using the Newton-Raphson method. An accelerated gradient projection method is used to optimize a 2-impulse trajectory with terminal constraint. Davidon's variance method is used both in the accelerated gradient projection method and in the outer loop of a 3-impulse trajectory optimization problem.
The Structural Optimization of Trees
NASA Astrophysics Data System (ADS)
Mattheck, C.; Bethge, K.
1998-01-01
Optimization methods are presented for engineering design based on the axiom of uniform stress. The principle of adaptive growth, which biological structures use to minimize stress concentrations, has been incorporated into a computer-aided optimization (CAO) procedure. Computer-aided optimization offers the advantage of three-dimensional optimization for the purpose of designing more fatigue-resistant components without mathematical sophistication. Another method, called computer-aided internal optimization (CAIO), optimizes the performance of fiber-composite materials by aligning the fiber distribution with the force flow, again mimicking the structure of trees. The lines of force flow, so-called principal stress trajectories, are not subject to shear stresses. Avoiding shear stresses in technical components can lead to an increase in maximum load capacity. With a new testing device, strength distributions in trees can be determined and explained on the basis of a new mechanical wood model.
Optimal management strategies in variable environments: Stochastic optimal control methods
Williams, B.K.
1985-01-01
Dynamic optimization was used to investigate the optimal defoliation of salt desert shrubs in north-western Utah. Management was formulated in the context of optimal stochastic control theory, with objective functions composed of discounted or time-averaged biomass yields. Climatic variability and community patterns of salt desert shrublands make the application of stochastic optimal control both feasible and necessary. A primary production model was used to simulate shrub responses and harvest yields under a variety of climatic regimes and defoliation patterns. The simulation results then were used in an optimization model to determine optimal defoliation strategies. The latter model encodes an algorithm for finite state, finite action, infinite discrete time horizon Markov decision processes. Three questions were addressed: (i) What effect do changes in weather patterns have on optimal management strategies? (ii) What effect does the discounting of future returns have? (iii) How do the optimal strategies perform relative to certain fixed defoliation strategies? An analysis was performed for the three shrub species, winterfat (Ceratoides lanata), shadscale (Atriplex confertifolia) and big sagebrush (Artemisia tridentata). In general, the results indicate substantial differences among species in optimal control strategies, which are associated with differences in physiological and morphological characteristics. Optimal policies for big sagebrush varied less with variation in climate, reserve levels and discount rates than did either shadscale or winterfat. This was attributed primarily to the overwintering of photosynthetically active tissue and to metabolic activity early in the growing season. Optimal defoliation of shadscale and winterfat generally was more responsive to differences in plant vigor and climate, reflecting the sensitivity of these species to utilization and replenishment of carbohydrate reserves. Similarities could be seen in the influence of both
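The optimization model described — a finite-state, finite-action, infinite-horizon discounted Markov decision process — is commonly solved by value iteration. A generic sketch follows, with a hypothetical two-state plant-vigor example; the states, actions, rewards, and transition probabilities are invented for illustration and are not taken from the study.

```python
def value_iteration(P, R, gamma=0.95, tol=1e-8):
    """P[s][a][s2]: transition probability; R[s][a]: expected reward.
    Returns the optimal value function and a greedy policy for an
    infinite-horizon discounted MDP."""
    nS, nA = len(R), len(R[0])
    V = [0.0] * nS
    while True:
        Q = [[R[s][a] + gamma * sum(P[s][a][t] * V[t] for t in range(nS))
              for a in range(nA)] for s in range(nS)]
        newV = [max(row) for row in Q]
        if max(abs(newV[s] - V[s]) for s in range(nS)) < tol:
            policy = [max(range(nA), key=lambda a: Q[s][a]) for s in range(nS)]
            return newV, policy
        V = newV

# Hypothetical example: state 0 = low vigor, state 1 = high vigor;
# action 0 = rest, action 1 = defoliate (harvest).
P = [[[0.2, 0.8], [1.0, 0.0]],   # from low:  resting recovers, harvesting keeps it low
     [[0.0, 1.0], [0.8, 0.2]]]   # from high: resting stays high, harvesting degrades
R = [[0.0, 0.1],                 # low vigor:  rest yields 0, harvest yields little
     [0.0, 1.0]]                 # high vigor: harvest yields the full return
V, policy = value_iteration(P, R)
```

For these invented numbers the optimal policy rests the depleted state and harvests the vigorous one, the qualitative pattern the study attributes to carbohydrate-reserve dynamics.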
GAPS IN SUPPORT VECTOR OPTIMIZATION
STEINWART, INGO; HUSH, DON; SCOVEL, CLINT; LIST, NICOLAS
2007-01-29
We show that the stopping criteria used in many support vector machine (SVM) algorithms working on the dual can be interpreted as primal optimality bounds which in turn are known to be important for the statistical analysis of SVMs. To this end we revisit the duality theory underlying the derivation of the dual and show that in many interesting cases primal optimality bounds are the same as known dual optimality bounds.
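The primal-bound interpretation can be made concrete: run any dual solver, map the dual iterate α back to a primal point w(α), and the duality gap upper-bounds the primal suboptimality. A toy sketch for a bias-free soft-margin SVM follows; projected gradient ascent stands in for the SMO-type solvers the paper analyzes, and the data are invented.

```python
def svm_dual_gap(X, y, C=1.0, lr=0.01, iters=4000):
    """Projected-gradient ascent on the (bias-free) soft-margin SVM dual,
    then evaluate the primal objective at w(alpha); the duality gap
    primal - dual is a certificate of primal optimality."""
    n, d = len(X), len(X[0])
    K = [[sum(X[i][k] * X[j][k] for k in range(d)) for j in range(n)]
         for i in range(n)]
    a = [0.0] * n
    for _ in range(iters):
        for i in range(n):
            # gradient of the dual objective in coordinate i, clipped to [0, C]
            grad = 1.0 - y[i] * sum(a[j] * y[j] * K[i][j] for j in range(n))
            a[i] = min(C, max(0.0, a[i] + lr * grad))
    w = [sum(a[i] * y[i] * X[i][k] for i in range(n)) for k in range(d)]
    margins = [y[i] * sum(w[k] * X[i][k] for k in range(d)) for i in range(n)]
    primal = 0.5 * sum(wk * wk for wk in w) + C * sum(max(0.0, 1.0 - m)
                                                      for m in margins)
    dual = sum(a) - 0.5 * sum(a[i] * y[i] * a[j] * y[j] * K[i][j]
                              for i in range(n) for j in range(n))
    return primal, dual, primal - dual
```

By weak duality the gap is nonnegative for any feasible α, so "stop when the gap is below ε" is exactly a primal optimality bound of the kind discussed above.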
Optimal Reconfiguration of Tetrahedral Formations
NASA Technical Reports Server (NTRS)
Huntington, Geoffrey; Rao, Anil V.; Hughes, Steven P.
2004-01-01
The problem of minimum-fuel formation reconfiguration for the Magnetospheric Multi-Scale (MMS) mission is studied. This reconfiguration trajectory optimization problem can be posed as a nonlinear optimal control problem. In this research, this optimal control problem is solved using a spectral collocation method called the Gauss pseudospectral method. The objective of this research is to provide highly accurate minimum-fuel solutions to the MMS formation reconfiguration problem and to gain insight into the underlying structure of fuel-optimal trajectories.
Structural Optimization in automotive design
NASA Technical Reports Server (NTRS)
Bennett, J. A.; Botkin, M. E.
1984-01-01
Although mathematical structural optimization has been an active research area for twenty years, there has been relatively little penetration into the design process. Experience indicates that often this is due to the traditional layout-analysis design process. In many cases, optimization efforts have been outgrowths of analysis groups which are themselves appendages to the traditional design process. As a result, optimization is often introduced into the design process too late to have a significant effect because many potential design variables have already been fixed. A series of examples are given to indicate how structural optimization has been effectively integrated into the design process.
Structural optimization by multilevel decomposition
NASA Technical Reports Server (NTRS)
Sobieszczanski-Sobieski, J.; James, B.; Dovi, A.
1983-01-01
A method is described for decomposing an optimization problem into a set of subproblems and a coordination problem which preserves coupling between the subproblems. The method is introduced as a special case of multilevel, multidisciplinary system optimization, and its algorithm is fully described for two-level optimization of structures assembled of finite elements of arbitrary type. Numerical results are given for an example of a framework to show that the decomposition method converges and yields results comparable to those obtained without decomposition. It is pointed out that optimization by decomposition should reduce the design time by allowing groups of engineers, using different computers, to work concurrently on the same large problem.
Stochastic Optimization of Complex Systems
Birge, John R.
2014-03-20
This project focused on methodologies for the solution of stochastic optimization problems based on relaxation and penalty methods, Monte Carlo simulation, parallel processing, and inverse optimization. The main results of the project were the development of a convergent method for the solution of models that include expectation constraints as in equilibrium models, improvement of Monte Carlo convergence through the use of a new method of sample batch optimization, the development of new parallel processing methods for stochastic unit commitment models, and the development of improved methods in combination with parallel processing for incorporating automatic differentiation methods into optimization.
NASA Astrophysics Data System (ADS)
Inanloo, B.
2011-12-01
The Caspian Sea is considered to be the largest inland body of water in the world, located between the Caucasus Mountains and Central Asia. It has been a source of contentious international conflict among the five littoral states that now border it: Azerbaijan, Iran, Kazakhstan, Russia, and Turkmenistan. The conflict over the legal status of this international body of water arose in the aftermath of the breakup of the Soviet Union in 1991. Since then the parties have been negotiating without reaching agreement on either the ownership of the waters or the oil and natural gas beneath them. The number of stakeholders involved, the unusual characteristics of the Caspian Sea with respect to its classification as a lake or a sea, and the large number of external parties interested in its valuable resources have made this conflict complex and unique. This paper applies methods for finding allocation schemes that share the Caspian Sea and its resources fairly and efficiently, while accounting for the acceptability and stability of the selected solution. Although several allocation methods exist for such problems, most seek a socially optimal solution that satisfies a majority of criteria or decision makers; in practice, especially in multi-nation problems, such a solution is not necessarily stable or acceptable to all parties. Hence, a method is needed that considers the stability and acceptability of solutions in order to find a solution with a high chance of being agreed upon. Application of distance-based methods to the Caspian Sea conflict provides policy insights useful for finding solutions that can resolve the dispute. In this study we use methods such as Goal Programming and Compromise Programming, and, to address stability, the logic of the Power Index is used to find a division rule that is stable for the negotiators. The results of this study show that the
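The distance-based ranking the abstract describes can be sketched in a few lines. Everything below (scheme names, criteria, weights, and scores) is a hypothetical illustration, not data from the study; the sketch ranks allocation alternatives by their weighted, normalized L_p distance to the ideal point, which is the core idea of Compromise Programming:

```python
def compromise_ranking(alternatives, weights, p=2):
    """Rank alternatives by weighted, normalized L_p distance to the
    ideal point (per-criterion best), as in Compromise Programming."""
    n_crit = len(weights)
    scores = list(alternatives.values())
    ideal = [max(s[i] for s in scores) for i in range(n_crit)]
    worst = [min(s[i] for s in scores) for i in range(n_crit)]

    def dist(a):
        total = 0.0
        for i in range(n_crit):
            span = (ideal[i] - worst[i]) or 1.0   # normalize; avoid divide-by-zero
            total += (weights[i] * (ideal[i] - a[i]) / span) ** p
        return total ** (1.0 / p)

    return sorted(alternatives, key=lambda name: dist(alternatives[name]))

# Hypothetical scores for three allocation schemes on three criteria
# (equity, efficiency, stability) -- illustrative numbers only.
schemes = {
    "median_line": (0.70, 0.80, 0.50),
    "equal_share": (0.90, 0.50, 0.70),
    "condominium": (0.65, 0.60, 0.80),
}
ranking = compromise_ranking(schemes, weights=(1.0, 1.0, 1.0))
```

The scheme closest to the ideal point ranks first; unequal weights would encode the negotiators' priorities.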
RNA based evolutionary optimization.
Schuster, P
1993-12-01
Evolutionary optimization of two-letter sequences is thus more difficult than optimization in the world of natural RNA sequences with four bases. This fact might explain the usage of four bases in the genetic language of nature. Finally we study the mapping from RNA sequences into secondary structures and explore the topology of RNA shape space. We find that 'neutral paths' connecting neighbouring sequences with identical structures go very frequently through entire sequence space. Sequences folding into common structures are found everywhere in sequence space. (ABSTRACT TRUNCATED AT 400 WORDS)
Acoustic Radiation Optimization Using the Particle Swarm Optimization Algorithm
NASA Astrophysics Data System (ADS)
Jeon, Jin-Young; Okuma, Masaaki
The present paper describes a fundamental study on structural bending design to reduce noise using a new evolutionary population-based heuristic algorithm called the particle swarm optimization algorithm (PSOA). The particle swarm optimization algorithm is a parallel evolutionary computation technique proposed by Kennedy and Eberhart in 1995. This algorithm is based on the social behavior models for bird flocking, fish schooling and other models investigated by zoologists. Optimal structural design problems to reduce noise are highly nonlinear, so that most conventional methods are difficult to apply. The present paper investigates the applicability of PSOA to such problems. Optimal bending design of a vibrating plate using PSOA is performed in order to minimize noise radiation. PSOA can be effectively applied to such nonlinear acoustic radiation optimization.
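A minimal sketch of the PSO update rule described above, applied to a simple stand-in objective rather than an acoustic-radiation model; the parameter values are typical defaults, not those used in the paper:

```python
import random

def pso(f, dim, bounds, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal particle swarm optimizer (Kennedy & Eberhart, 1995)."""
    rng = random.Random(seed)
    lo, hi = bounds
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                   # each particle's best position
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]  # swarm's best position
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                # inertia + cognitive pull (own best) + social pull (swarm best)
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            v = f(pos[i])
            if v < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], v
                if v < gbest_val:
                    gbest, gbest_val = pos[i][:], v
    return gbest, gbest_val

# Stand-in objective (sphere function); a real design problem would
# evaluate radiated sound power from a structural model instead.
best, val = pso(lambda x: sum(xi * xi for xi in x), dim=3, bounds=(-5.0, 5.0))
```

Because the objective is only ever sampled, the same loop applies unchanged to the highly nonlinear objectives the abstract mentions.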
Optimized System Identification
NASA Technical Reports Server (NTRS)
Juang, Jer-Nan; Longman, Richard W.
1999-01-01
In system identification, one usually cares most about finding a model whose outputs are as close as possible to the true system outputs when the same input is applied to both. However, most system identification algorithms do not minimize this output error. Often they minimize model equation error instead, as in typical least-squares fits using a finite-difference model, and it is seen here that this distinction is significant. Here, we develop a set of system identification algorithms that minimize output error for multi-input/multi-output and multi-input/single-output systems. This is done with sequential quadratic programming iterations on the nonlinear least-squares problems, with an eigendecomposition to handle indefinite second partials. This optimization minimizes a nonlinear function of many variables, and hence can converge to local minima. To handle this problem, we start the iterations from the OKID (Observer/Kalman Identification) algorithm result. Not only has OKID proved very effective in practice, it minimizes an output error of an observer which has the property that as the data set gets large, it converges to minimizing the criterion of interest here. Hence, it is a particularly good starting point for the nonlinear iterations here. Examples show that the methods developed here eliminate the bias that is often observed using any system identification methods of either over-estimating or under-estimating the damping of vibration modes in lightly damped structures.
Sweeping Jet Optimization Studies
NASA Technical Reports Server (NTRS)
Melton, LaTunia Pack; Koklu, Mehti; Andino, Marlyn; Lin, John C.; Edelman, Louis
2016-01-01
Progress on experimental efforts to optimize sweeping jet actuators for active flow control (AFC) applications with large adverse pressure gradients is reported. Three sweeping jet actuator configurations, with the same orifice size but different internal geometries, were installed on the flap shoulder of an unswept, NACA 0015 semi-span wing to investigate how the output produced by a sweeping jet interacts with the separated flow and the mechanisms by which the flow separation is controlled. For this experiment, the flow separation was generated by deflecting the wing's 30% chord trailing edge flap to produce an adverse pressure gradient. Steady and unsteady pressure data, Particle Image Velocimetry data, and force and moment data were acquired to assess the performance of the three actuator configurations. The actuator with the largest jet deflection angle, at the pressure ratios investigated, was the most efficient at controlling flow separation on the flap of the model. Oil flow visualization studies revealed that the flow field controlled by the sweeping jets was more three-dimensional than expected. The results presented also show that the actuator spacing was appropriate for the pressure ratios examined.
Optimal Phase Oscillatory Network
NASA Astrophysics Data System (ADS)
Follmann, Rosangela
2013-03-01
Important topics such as preventive detection of epidemics, collective self-organization, information flow, and systemic robustness in clusters are typical examples of processes that can be studied in the context of the theory of complex networks. This emerging field, which has recently attracted much interest, involves the synchronization of dynamical systems associated with the nodes, or vertices, of a network. Studies have shown that synchronization in oscillatory networks depends not only on the individual dynamics of each element, but also on the topology of the connections as well as on the properties of the interactions among these elements. Moreover, the response of the network to small damages, caused at strategic points, can enhance the global performance of the whole network. In this presentation we explore an optimal phase oscillatory network altered by an additional term in the coupling function. The application to an associative-memory network shows improvement in correct information retrieval as well as an increase in storage capacity. The inclusion of small deviations on the nodes, when solutions are attracted to a false state, results in additional enhancement of the performance of the associative-memory network. Supported by FAPESP - Sao Paulo Research Foundation, grant number 2012/12555-4
Cyclone performance and optimization
Leith, D.
1989-03-15
The objectives of this project are: to characterize the gas flow pattern within cyclones, to revise the theory for cyclone performance on the basis of these findings, and to design and test cyclones whose dimensions have been optimized using the revised performance theory. This work is important because its successful completion will aid in the technology for combustion of coal in pressurized, fluidized beds. This quarter, we have been hampered somewhat by slow delivery of the bubble generation system and arc lighting system placed on order last fall. This equipment is necessary to map the flow field within cyclones using the techniques described in last quarter's report. Using the bubble generator, we completed this quarter a study of the "natural length" of cyclones of 18 different configurations, each configuration operated at five different gas flows. Results suggest that the equation by Alexander for natural length is incorrect; natural length as measured with the bubble generation system is always below the bottom of the cyclones regardless of the cyclone configuration or gas flow, within the limits of the experimental cyclones tested. This finding is important because natural length is a term in equations used to predict cyclone efficiency. 1 tab.
Powers, Tom
2013-09-01
This work describes preliminary results of a new software tool that allows one to vary parameters and understand the effects on the optimized cost of construction plus 10 years of operations of an SRF linac, the associated cryogenic facility, and controls, where operations includes the cost of the electrical utilities but not the labor or other costs. It derives from collaborative work done several years ago with staff from the Accelerator Science and Technology Centre, Daresbury, UK, while they were in the process of developing a conceptual design for the New Light Source project.[1] The initial goal was to convert a spreadsheet format to a graphical interface with the ability to sweep different parameter sets. The tools also allow one to compare the costs of the different facets of the machine design and operations so as to better understand the tradeoffs. The work was first published in an ICFA Beam Dynamics Newsletter.[2] More recent additions to the software include the ability to save and restore input parameters as well as to adjust the Qo versus E parameters in order to explore the potential cost savings associated with doing so. Additionally, program changes now allow one to model the costs associated with a linac that makes use of an energy-recovery mode of operation.
Induction technology optimization code
Caporaso, G.J.; Brooks, A.L.; Kirbie, H.C.
1992-08-21
A code has been developed to evaluate relative costs of induction accelerator driver systems for relativistic klystrons. The code incorporates beam generation, transport and pulsed power system constraints to provide an integrated design tool. The code generates an injector/accelerator combination which satisfies the top level requirements and all system constraints once a small number of design choices have been specified (rise time of the injector voltage and aspect ratio of the ferrite induction cores, for example). The code calculates dimensions of accelerator mechanical assemblies and values of all electrical components. Cost factors for machined parts, raw materials and components are applied to yield a total system cost. These costs are then plotted as a function of the two design choices to enable selection of an optimum design based on various criteria. The Induction Technology Optimization Study (ITOS) was undertaken to examine viable combinations of a linear induction accelerator and a relativistic klystron (RK) for high power microwave production. It is proposed that microwaves from the RK will power a high-gradient accelerator structure for linear collider development. Previous work indicates that the RK will require a nominal 3-MeV, 3-kA electron beam with a 100-ns flat top. The proposed accelerator-RK combination will be a high average power system capable of sustained microwave output at a 300-Hz pulse repetition frequency. The ITOS code models many combinations of injector, accelerator, and pulse power designs that will supply an RK with the beam parameters described above.
Naclerio, R M
1998-12-01
Full and accurate diagnosis of allergic rhinitis is important as a basis for treatment decisions, as many nasal disorders have similar signs and symptoms. Optimal allergen avoidance is the starting point of treatment, so causative allergens need to be identified. Oral antihistamines are effective in relieving the majority of symptoms of allergic rhinitis and allergic conjunctivitis, but provide only partial relief from nasal congestion. Topical alpha-adrenergic decongestants help to relieve congestion, but prolonged use leads to rhinitis medicamentosa. Systemic decongestants are less effective than topical agents and their use is limited by systemic and central side-effects. The value of leukotriene antagonists has yet to be fully evaluated. Intranasal ipratropium bromide helps to control watery secretions, and an aerosol may be more effective than an aqueous solution. Topical glucocorticosteroids, such as triamcinolone, are the most potent and effective agents available for treating allergic rhinitis. The available evidence indicates that there is very little systemic absorption. Sodium cromoglycate is effective in allergic rhinitis, though less so than topical steroids, and has the least adverse effects among the antiallergic agents. Immunotherapy can be effective and may be indicated in individuals who cannot avoid the causative allergen. Special considerations apply to the treatment of allergic rhinitis in elderly or pregnant patients. Finally, patients with long-standing allergic conditions should be re-assessed regularly.
Boiler modeling optimizes sootblowing
Piboontum, S.J.; Swift, S.M.; Conrad, R.S.
2005-10-01
Controlling the cleanliness and limiting the fouling and slagging of heat transfer surfaces are absolutely necessary to optimize boiler performance. The traditional way to clean heat-transfer surfaces is by sootblowing using air, steam, or water at regular intervals. But with the advent of fuel-switching strategies, such as switching to PRB coal to reduce a plant's emissions, the control of heating surface cleanliness has become more problematic for many owners of steam generators. Boiler modeling can help solve that problem. The article describes Babcock & Wilcox's Powerclean modeling system which consists of heating surface models that produce real-time cleanliness indexes. The Heat Transfer Manager (HTM) program is the core of the system, which can be used on any make or model of boiler. A case study is described to show how the system was successfully used at the 1,350 MW Unit 2 of the American Electric Power's Rockport Power Plant in Indiana. The unit fires a blend of eastern bituminous and Powder River Basin coal. 5 figs.
Industrial cogeneration optimization program
Not Available
1980-01-01
The purpose of this program was to identify up to 10 good near-term opportunities for cogeneration in 5 major energy-consuming industries which produce food, textiles, paper, chemicals, and refined petroleum; select, characterize, and optimize cogeneration systems for these identified opportunities to achieve maximum energy savings for minimum investment using currently available components of cogenerating systems; and to identify technical, institutional, and regulatory obstacles hindering the use of industrial cogeneration systems. The analysis methods used and results obtained are described. Plants with fuel demands from 100,000 Btu/h to 3 x 10^6 Btu/h were considered. It was concluded that the major impediments to industrial cogeneration are financial, e.g., high capital investment and high charges by electric utilities during short-term cogeneration facility outages. In the plants considered an average energy savings from cogeneration of 15 to 18% compared to separate generation of process steam and electric power was calculated. On a national basis for the 5 industries considered, this extrapolates to saving 1.3 to 1.6 quads per yr or between 630,000 to 750,000 bbl/d of oil. Properly applied, federal activity can do much to realize a substantial fraction of this potential by lowering the barriers to cogeneration and by stimulating wider implementation of this technology. (LCL)
Optimizing management of glycaemia.
Chatterjee, Sudesna; Khunti, Kamlesh; Davies, Melanie J
2016-06-01
The global epidemic of type 2 diabetes (T2DM) continues largely unabated due to an increasingly sedentary lifestyle and obesogenic environment. A cost-effective patient-centred approach, incorporating glucose-lowering therapy and modification of cardiovascular risk factors, could help prevent the inevitable development and progression of macrovascular and microvascular complications. Glycaemic optimization requires patient structured education, self-management and empowerment, and psychological support along with early and proactive use of glucose-lowering therapies, which should be delivered in a system of care as shown by the Chronic Care Model. From diagnosis, intensive glycaemic control and individualised care is aimed at reducing complications. In older people, the goal is maintaining quality of life and minimizing morbidity, especially as overtreatment increases hypoglycaemia risk. Maintaining durable glycaemic control is challenging and complex to achieve without hypoglycaemia, weight gain and other significant adverse effects. Overcoming patient and physician barriers can help ensure adequate treatment initiation and intensification. Cardiovascular safety studies with newer glucose-lowering agents are now mandatory, with a sodium-glucose co-transporter-2 inhibitor (empagliflozin) and two glucagon-like peptide-1 receptor agonists (liraglutide and semaglutide) being the first to demonstrate superior CV outcomes compared with placebo. PMID:27432074
Query Evaluation: Strategies and Optimizations.
ERIC Educational Resources Information Center
Turtle, Howard; Flood, James
1995-01-01
Discusses two query evaluation strategies used in large text retrieval systems: (1) term-at-a-time; and (2) document-at-a-time. Describes optimization techniques that can reduce query evaluation costs. Presents simulation results that compare the performance of these optimization techniques when applied to natural language query evaluation. (JMV)
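The two strategies can be contrasted on a toy inverted index; the index contents and weights below are illustrative assumptions, not from the article. Term-at-a-time accumulates partial document scores one posting list at a time, while document-at-a-time scores each document completely before moving to the next:

```python
from heapq import nlargest

# Toy inverted index: term -> posting list of (doc_id, term weight),
# sorted by doc_id. Contents are illustrative only.
index = {
    "query": [(1, 0.5), (2, 0.2), (4, 0.7)],
    "optimization": [(2, 0.6), (3, 0.4), (4, 0.1)],
}

def term_at_a_time(terms, k):
    """Process one posting list at a time, keeping partial scores
    for every candidate document in an accumulator table."""
    acc = {}
    for t in terms:
        for doc, w in index.get(t, []):
            acc[doc] = acc.get(doc, 0.0) + w
    return nlargest(k, acc.items(), key=lambda kv: kv[1])

def document_at_a_time(terms, k):
    """Score each document completely before moving to the next,
    so no accumulator table is needed."""
    postings = [index.get(t, []) for t in terms]
    doc_ids = sorted({d for plist in postings for d, _ in plist})
    scored = []
    for d in doc_ids:
        s = 0.0
        for plist in postings:
            for doc, w in plist:
                if doc == d:
                    s += w
        scored.append((d, s))
    return nlargest(k, scored, key=lambda kv: kv[1])

top_taat = term_at_a_time(["query", "optimization"], k=2)
top_daat = document_at_a_time(["query", "optimization"], k=2)
```

Both strategies return the same ranking; the optimizations the article surveys (e.g. stopping early on low-scoring candidates) prune work inside these loops.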
Aerodynamic design using numerical optimization
NASA Technical Reports Server (NTRS)
Murman, E. M.; Chapman, G. T.
1983-01-01
The procedure of using numerical optimization methods coupled with computational fluid dynamic (CFD) codes for the development of an aerodynamic design is examined. Several approaches that replace wind tunnel tests, develop pressure distributions and derive designs, or fulfill preset design criteria are presented. The method of Aerodynamic Design by Numerical Optimization (ADNO) is described and illustrated with examples.
Supply-Chain Optimization Template
NASA Technical Reports Server (NTRS)
Quiett, William F.; Sealing, Scott L.
2009-01-01
The Supply-Chain Optimization Template (SCOT) is an instructional guide for identifying, evaluating, and optimizing (including re-engineering) aerospace- oriented supply chains. The SCOT was derived from the Supply Chain Council s Supply-Chain Operations Reference (SCC SCOR) Model, which is more generic and more oriented toward achieving a competitive advantage in business.
Optimizing Medical Kits for Spaceflight
NASA Technical Reports Server (NTRS)
Keenan, A. B.; Foy, Millennia; Myers, G.
2014-01-01
The Integrated Medical Model (IMM) is a probabilistic model that estimates medical event occurrences and mission outcomes for different mission profiles. IMM simulation outcomes describing the impact of medical events on the mission may be used to optimize the allocation of resources in medical kits. Efficient allocation of medical resources, subject to certain mass and volume constraints, is crucial to ensuring the best outcomes of in-flight medical events. We implement a new approach to this medical kit optimization problem. METHODS: We frame medical kit optimization as a modified knapsack problem and implement an algorithm utilizing a dynamic programming technique. Using this algorithm, optimized medical kits were generated for 3 different mission scenarios with the goal of minimizing the probability of evacuation and maximizing the Crew Health Index (CHI) for each mission subject to mass and volume constraints. Simulation outcomes using these kits were also compared to outcomes using kits optimized with the previous approach. RESULTS: Under all optimization priorities, the optimized medical kits generated by the algorithm described here resulted in predicted mission outcomes that more closely approached the unlimited-resource scenario for Crew Health Index (CHI) than did the previous implementation. Furthermore, the approach described here improves upon the previous one in reducing the probability of evacuation when that is the optimization priority. CONCLUSIONS: This algorithm provides an efficient, effective means to objectively allocate medical resources for spaceflight missions using the Integrated Medical Model.
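A minimal sketch of the knapsack framing, assuming integer mass units and a single capacity constraint (the actual IMM optimization also handles volume and probabilistic outcomes); the item masses and benefit scores are hypothetical:

```python
def optimize_kit(items, capacity):
    """0/1 knapsack by dynamic programming: maximize total benefit
    subject to an integer mass budget. items = [(mass, benefit), ...]."""
    # best[c] = best achievable benefit with capacity c
    best = [0.0] * (capacity + 1)
    choice = [[False] * (capacity + 1) for _ in items]
    for i, (mass, benefit) in enumerate(items):
        # iterate capacity downward so each item is used at most once
        for c in range(capacity, mass - 1, -1):
            if best[c - mass] + benefit > best[c]:
                best[c] = best[c - mass] + benefit
                choice[i][c] = True
    # backtrack to recover which items were selected
    selected, c = [], capacity
    for i in range(len(items) - 1, -1, -1):
        if choice[i][c]:
            selected.append(i)
            c -= items[i][0]
    return best[capacity], sorted(selected)

# Hypothetical medical-kit items: (mass units, benefit score)
items = [(3, 4.0), (2, 3.0), (4, 5.0), (1, 1.5)]
value, chosen = optimize_kit(items, capacity=6)
```

The "modified" knapsack in the abstract replaces the simple benefit score with simulated mission-outcome metrics such as CHI.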
Optimal Distinctiveness Signals Membership Trust.
Leonardelli, Geoffrey J; Loyd, Denise Lewin
2016-07-01
According to optimal distinctiveness theory, sufficiently small minority groups are associated with greater membership trust, even among members otherwise unknown, because the groups are seen as optimally distinctive. This article elaborates on the prediction's motivational and cognitive processes and tests whether sufficiently small minorities (defined by relative size; for example, 20%) are associated with greater membership trust relative to mere minorities (45%), and whether such trust is a function of optimal distinctiveness. Two experiments, examining observers' perceptions of minority and majority groups and using minimal groups and (in Experiment 2) a trust game, revealed greater membership trust in minorities than majorities. In Experiment 2, participants also preferred joining minorities over more powerful majorities. Both effects occurred only when minorities were 20% rather than 45%. In both studies, perceptions of optimal distinctiveness mediated effects. Discussion focuses on the value of relative size and optimal distinctiveness, and when membership trust manifests. PMID:27140657
Optimized layout generator for microgyroscope
NASA Astrophysics Data System (ADS)
Tay, Francis E.; Li, Shifeng; Logeeswaran, V. J.; Ng, David C.
2000-10-01
This paper presents an optimized out-of-plane microgyroscope layout generator using AutoCAD R14 and MS Excel as a first attempt at automating the design of resonant micro-inertial sensors. A two-degree-of-freedom lumped-parameter model of the out-of-plane microgyroscope was chosen as the synthesis topology. An analytical model of open-loop operation has been derived for the gyroscope performance characteristics. Functional performance parameters such as sensitivity are ensured to be satisfied while simultaneously optimizing a design objective such as minimum area. A single algorithm optimizes the microgyroscope dimensions while simultaneously maximizing or minimizing the objective functions: maximum sensitivity and minimum area. The multi-criteria objective function and optimization methodology were implemented using the Generalized Reduced Gradient algorithm. For data conversion a DXF-to-GDS converter was used. The optimized theoretical design performance parameters show good agreement with finite element analysis.
Optimal dynamic detection of explosives
Moore, David Steven; Mcgrane, Shawn D; Greenfield, Margo T; Scharff, R J; Rabitz, Herschel A; Roslund, J
2009-01-01
The detection of explosives is a notoriously difficult problem, especially at stand-off distances, due to their (generally) low vapor pressure, environmental and matrix interferences, and packaging. We are exploring optimal dynamic detection to exploit the best capabilities of recent advances in laser technology and recent discoveries in optimal shaping of laser pulses for control of molecular processes to significantly enhance the standoff detection of explosives. The core of the ODD-Ex technique is the introduction of optimally shaped laser pulses to simultaneously enhance sensitivity of explosives signatures while reducing the influence of noise and the signals from background interferents in the field (increase selectivity). These goals are being addressed by operating in an optimal nonlinear fashion, typically with a single shaped laser pulse inherently containing within it coherently locked control and probe sub-pulses. With sufficient bandwidth, the technique is capable of intrinsically providing orthogonal broad spectral information for data fusion, all from a single optimal pulse.
Risk modelling in portfolio optimization
NASA Astrophysics Data System (ADS)
Lam, W. H.; Jaaman, Saiful Hafizah Hj.; Isa, Zaidi
2013-09-01
Risk management is very important in portfolio optimization. The mean-variance model has been used in portfolio optimization to minimize the investment risk. The objective of the mean-variance model is to minimize the portfolio risk and achieve the target rate of return. Variance is used as risk measure in the mean-variance model. The purpose of this study is to compare the portfolio composition as well as performance between the optimal portfolio of mean-variance model and equally weighted portfolio. Equally weighted portfolio means the proportions that are invested in each asset are equal. The results show that the portfolio composition of the mean-variance optimal portfolio and equally weighted portfolio are different. Besides that, the mean-variance optimal portfolio gives better performance because it gives higher performance ratio than the equally weighted portfolio.
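The mean-variance optimization described here can be sketched as an equality-constrained quadratic program. The asset returns and covariances below are hypothetical, short selling is allowed for simplicity, and the KKT linear system is the standard textbook treatment rather than necessarily the procedure used in the study:

```python
def solve(A, b):
    """Gaussian elimination with partial pivoting for small dense systems."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def mean_variance_weights(mu, cov, target):
    """Minimize portfolio variance w'Σw subject to w·mu = target and
    sum(w) = 1, via the KKT linear system (short selling allowed)."""
    n = len(mu)
    K = [[0.0] * (n + 2) for _ in range(n + 2)]
    for i in range(n):
        for j in range(n):
            K[i][j] = 2.0 * cov[i][j]
        K[i][n] = K[n][i] = mu[i]          # return-constraint multiplier
        K[i][n + 1] = K[n + 1][i] = 1.0    # budget-constraint multiplier
    rhs = [0.0] * n + [target, 1.0]
    return solve(K, rhs)[:n]

# Hypothetical three-asset example (expected returns and covariance matrix)
mu = [0.08, 0.12, 0.15]
cov = [[0.04, 0.01, 0.00],
       [0.01, 0.09, 0.02],
       [0.00, 0.02, 0.16]]
w = mean_variance_weights(mu, cov, target=0.11)
var_opt = sum(w[i] * cov[i][j] * w[j] for i in range(3) for j in range(3))
w_eq = [1.0 / 3] * 3
var_eq = sum(w_eq[i] * cov[i][j] * w_eq[j] for i in range(3) for j in range(3))
```

On this toy instance the optimized weights meet the target return with lower variance than the equally weighted portfolio, mirroring the study's comparison.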
Hansborough, L.; Hamm, R.; Stovall, J.; Swenson, D.
1980-01-01
PIGMI (Pion Generator for Medical Irradiations) is a compact linear proton accelerator design, optimized for pion production and cancer treatment use in a hospital environment. Technology developed during a four-year PIGMI Prototype experimental program allows the design of smaller, less expensive, and more reliable proton linacs. A new type of low-energy accelerating structure, the radio-frequency quadrupole (RFQ), has been tested; it produces an exceptionally good-quality beam and allows the use of a simple 30-kV injector. Average axial electric-field gradients of over 9 MV/m have been demonstrated in a drift-tube linac (DTL) structure. Experimental work is underway to test the disk-and-washer (DAW) structure, another new type of accelerating structure for use in the high-energy coupled-cavity linac (CCL). Sufficient experimental and developmental progress has been made to closely define an actual PIGMI. It will consist of a 30-kV injector, an RFQ linac to a proton energy of 2.5 MeV, a DTL linac to 125 MeV, and a CCL linac to the final energy of 650 MeV. The total length of the accelerator is 133 meters. The RFQ and DTL will be driven by a single 440-MHz klystron; the CCL will be driven by six 1320-MHz klystrons. The peak beam current is 28 mA. The beam pulse length is 60 µs at a 60-Hz repetition rate, resulting in a 100-µA average beam current. The total cost of the accelerator is estimated to be approx. $10 million.
Optimality criteria solution strategies in multiple constraint design optimization
NASA Technical Reports Server (NTRS)
Levy, R.; Parzynski, W.
1981-01-01
Procedures and solution strategies are described to solve the conventional structural optimization problem using the Lagrange multiplier technique. The multipliers, obtained through solution of an auxiliary nonlinear optimization problem, lead to optimality criteria to determine the design variables. It is shown that this procedure is essentially equivalent to an alternative formulation using a dual method Lagrangian function objective. Although mathematical formulations are straight-forward, successful applications and computational efficiency depend upon execution procedure strategies. Strategies examined, with application examples, include selection of active constraints, move limits, line search procedures, and side constraint boundaries.
Optimizing WFIRST Coronagraph Science
NASA Astrophysics Data System (ADS)
Macintosh, Bruce
We propose an in-depth scientific investigation that will define how the WFIRST coronagraphic instrument will discover and characterize nearby planetary systems and how it will use observations of planets and disks to probe the diversity of their compositions, dynamics, and formation. Given the enormous diversity of known planetary systems, it is not enough to optimize a coronagraph mission plan for the characterization of solar system analogs. Instead, we must design a mission to characterize a wide variety of planets, from gas and ice giant planets at a range of separations to mid-sized planets with no analogs in our solar system. We must consider updated planet distributions based on the results of the Kepler mission, long-term radial velocity (RV) surveys, and updated luminosity distributions of exo-zodiacal dust from interferometric thermal infrared surveys of nearby stars. The properties of all these objects must be informed by our best models of planets and disks, and the process of using WFIRST observations to measure fundamental planetary properties such as composition must derive from rigorous methods. Our team brings a great depth of expertise to inform and accomplish these and all of the other tasks enumerated in the SIT proposal call. We will perform end-to-end modeling that starts with model spectra of planets and images of disks, simulates WFIRST data using these models, accounts for geometries of specific star/planet/disk systems, and incorporates detailed instrument performance models. We will develop and implement data analysis techniques to extract well-calibrated astrophysical signals from complex data, and propose observing plans that maximize the mission's scientific yield. We will work with the community to build observing programs and target lists, inform them of WFIRST's capabilities, and supply simulated scientific observations for data challenges. Our work will be informed by the experience we have gained from building and observing with
Optimal Protocols and Optimal Transport in Stochastic Thermodynamics
NASA Astrophysics Data System (ADS)
Aurell, Erik; Mejía-Monasterio, Carlos; Muratore-Ginanneschi, Paolo
2011-06-01
Thermodynamics of small systems has become an important field of statistical physics. Such systems are driven out of equilibrium by a control, and the question naturally arises of how such a control can be optimized. We show that optimization problems in small system thermodynamics are solved by (deterministic) optimal transport, for which very efficient numerical methods have been developed, and of which there are applications in cosmology, fluid mechanics, logistics, and many other fields. We show, in particular, that minimizing expected heat released or work done during a nonequilibrium transition in finite time is solved by the Burgers equation and mass transport by the Burgers velocity field. Our contribution hence considerably extends the range of solvable optimization problems in small system thermodynamics.
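The mapping from optimal protocols to deterministic transport can be sketched in the standard Benamou-Brenier form of optimal transport; the notation and one-dimensional setting below are assumptions for brevity, not taken from the paper:

```latex
% Minimize expected dissipation over densities \rho and velocity fields v:
\min_{\rho,\,v} \int_0^{t_f}\!\!\int \rho(x,t)\,v(x,t)^{2}\,dx\,dt
\quad\text{subject to}\quad \partial_t\rho + \partial_x(\rho v) = 0,
% with the initial and final densities \rho(\cdot,0) and \rho(\cdot,t_f) fixed.
% The optimizing velocity field is force-free, i.e. it solves the inviscid
% Burgers equation, and mass is carried along its characteristics:
\partial_t v + v\,\partial_x v = 0, \qquad \dot{x}(t) = v(x(t),t).
```

In this form the stochastic control problem reduces to a deterministic mass-transport computation, which is the solvability claim of the abstract.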
Aircraft technology portfolio optimization using ant colony optimization
NASA Astrophysics Data System (ADS)
Villeneuve, Frederic J.; Mavris, Dimitri N.
2012-11-01
Technology portfolio selection is a combinatorial optimization problem often faced with a large number of combinations and technology incompatibilities. The main research question addressed in this article is to determine if Ant Colony Optimization (ACO) is better suited than Genetic Algorithms (GAs) and Simulated Annealing (SA) for technology portfolio optimization when incompatibility constraints between technologies are present. Convergence rate, capability to find optima, and efficiency in handling of incompatibilities are the three criteria of comparison. The application problem consists of finding the best technology portfolio from 29 aircraft technologies. The results show that ACO and GAs converge faster and find optima more easily than SA, and that ACO can optimize portfolios with technology incompatibilities without using penalty functions. This latter finding paves the way for more use of ACO when the number of constraints increases, such as in the technology and concept selection for complex engineering systems.
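As a hedged illustration of how ACO can handle incompatibility constraints without penalty functions, the sketch below simply prunes incompatible technologies from each ant's candidate list as the portfolio grows. The additive per-technology objective and all parameter names are assumptions made for the sketch, not details of the study:

```python
import random

def aco_portfolio(values, incompatible, n_ants=20, iters=100, rho=0.1, q0=2.0):
    """Pick a set of technologies maximizing total value while never
    including an incompatible pair.  Infeasible items are pruned from each
    ant's candidate list, so no penalty function is needed.

    `values` maps item -> value (hypothetical additive objective; a real
    study would score whole portfolios with an engineering model), and
    `incompatible` is a set of frozenset pairs.
    """
    items = list(values)
    tau = {i: 1.0 for i in items}              # pheromone per technology
    best_set, best_val = set(), 0.0
    for _ in range(iters):
        for _ant in range(n_ants):
            chosen, total, cand = set(), 0.0, items[:]
            while cand:
                # Sample proportionally to pheromone * heuristic value.
                weights = [tau[i] * values[i] for i in cand]
                pick = random.choices(cand, weights=weights)[0]
                chosen.add(pick)
                total += values[pick]
                # Constraint handling: drop the pick and anything it clashes with.
                cand = [i for i in cand
                        if i != pick and frozenset((i, pick)) not in incompatible]
            if total > best_val:
                best_set, best_val = set(chosen), total
        # Evaporate pheromone, then reinforce the best portfolio found so far.
        for i in items:
            tau[i] = (1 - rho) * tau[i] + (rho * q0 if i in best_set else 0.0)
    return best_set, best_val
```

Because infeasible combinations are never constructed, the search spends all of its evaluations on feasible portfolios, which is the advantage the abstract attributes to ACO as constraints multiply.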
A Novel Particle Swarm Optimization Algorithm for Global Optimization
Wang, Chun-Feng; Liu, Kui
2016-01-01
Particle Swarm Optimization (PSO) is a recently developed optimization method that has attracted the interest of researchers in various areas due to its simplicity and effectiveness, and many variants have been proposed. In this paper, a novel Particle Swarm Optimization algorithm is presented, in which each particle uses the information of its best neighbor and of the best particle of the entire population in the current iteration. Meanwhile, to avoid premature convergence, an abandonment mechanism is used. Furthermore, to improve the global convergence speed of our algorithm, a chaotic search is applied to the best solution of the current iteration. To verify the performance of our algorithm, standard test functions have been employed. The experimental results show that the algorithm is much more robust and efficient than some existing Particle Swarm Optimization algorithms. PMID:26955387
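For readers unfamiliar with the baseline such variants build on, a minimal global-best PSO is sketched below. This is the standard algorithm, not the paper's variant; the inertia and acceleration coefficients are conventional defaults, not values from the study:

```python
import random

def pso(f, dim, bounds, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5):
    """Minimize f over a box using a baseline global-best PSO."""
    lo, hi = bounds
    pos = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                  # each particle's best position
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]  # swarm's best position
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                # Inertia + pull toward personal best + pull toward global best.
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(max(pos[i][d] + vel[i][d], lo), hi)
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val
```

The paper's modifications (best-neighbor information, abandonment, chaotic local search) are all refinements layered onto this update loop.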
Social Emotional Optimization Algorithm for Nonlinear Constrained Optimization Problems
NASA Astrophysics Data System (ADS)
Xu, Yuechun; Cui, Zhihua; Zeng, Jianchao
The nonlinear programming problem is an important branch of operational research and has been successfully applied to various real-life problems. In this paper, a new approach called the social emotional optimization algorithm (SEOA) is used to solve this problem; it is a new swarm intelligence technique that simulates human behavior guided by emotion. Simulation results show that the social emotional optimization algorithm proposed in this paper is effective and efficient for nonlinear constrained programming problems.
Metabolism at evolutionary optimal states.
Rabbers, Iraes; van Heerden, Johan H; Nordholt, Niclas; Bachmann, Herwig; Teusink, Bas; Bruggeman, Frank J
2015-01-01
Metabolism is generally required for cellular maintenance and for the generation of offspring under conditions that support growth. The rates, yields (efficiencies), adaptation time and robustness of metabolism are therefore key determinants of cellular fitness. For biotechnological applications and our understanding of the evolution of metabolism, it is necessary to figure out how the functional system properties of metabolism can be optimized, via adjustments of the kinetics and expression of enzymes, and by rewiring metabolism. The trade-offs that can occur during such optimizations then indicate fundamental limits to evolutionary innovations and bioengineering. In this paper, we review several theoretical and experimental findings about mechanisms for metabolic optimization. PMID:26042723
MPQC: Performance Analysis and Optimization
Sarje, Abhinav; Williams, Samuel; Bailey, David
2012-11-30
MPQC (Massively Parallel Quantum Chemistry) is a widely used computational quantum chemistry code. It is capable of performing a number of computations commonly occurring in quantum chemistry. In order to achieve better performance of MPQC, in this report we present a detailed performance analysis of this code. We then perform loop and memory access optimizations, and measure performance improvements by comparing the performance of the optimized code with that of the original MPQC code. We observe that the optimized MPQC code achieves a significant improvement in the performance through a better utilization of vector processing and memory hierarchies.
Adaptive approximation models in optimization
Voronin, A.N.
1995-05-01
The paper proposes a method for optimization of functions of several variables that substantially reduces the number of objective function evaluations compared to traditional methods. The method is based on the property of iterative refinement of approximation models of the optimand function in approximation domains that contract to the extremum point. It does not require subjective specification of the starting point, step length, or other parameters of the search procedure. The method is designed for efficient optimization of unimodal functions of several (not more than 10-15) variables and can be applied to find the global extremum of polymodal functions and also for optimization of scalarized forms of vector objective functions.
An Efficient Chemical Reaction Optimization Algorithm for Multiobjective Optimization.
Bechikh, Slim; Chaabani, Abir; Ben Said, Lamjed
2015-10-01
Recently, a new metaheuristic called chemical reaction optimization was proposed. This search algorithm, inspired by chemical reactions launched during collisions, inherits several features from other metaheuristics such as simulated annealing and particle swarm optimization. This fact has made it, nowadays, one of the most powerful search algorithms in solving mono-objective optimization problems. In this paper, we propose a multiobjective variant of chemical reaction optimization, called nondominated sorting chemical reaction optimization, in an attempt to exploit chemical reaction optimization features in tackling problems involving multiple conflicting criteria. Since our approach is based on nondominated sorting, one of the main contributions of this paper is the proposal of a new quasi-linear average time complexity quick nondominated sorting algorithm; thereby making our multiobjective algorithm efficient from a computational cost viewpoint. The experimental comparisons against several other multiobjective algorithms on a variety of benchmark problems involving various difficulties show the effectiveness and the efficiency of this multiobjective version in providing a well-converged and well-diversified approximation of the Pareto front.
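The nondominated sorting step at the core of such multiobjective methods can be illustrated with the standard O(MN²) procedure of Deb et al. (NSGA-II); the paper's contribution is a quasi-linear average-case replacement for exactly this routine. The sketch below is the classic version, for minimization objectives:

```python
def fast_nondominated_sort(points):
    """Partition objective vectors (minimization) into successive Pareto
    fronts, following the standard O(M*N^2) NSGA-II procedure."""
    n = len(points)
    dominates = lambda p, q: (all(a <= b for a, b in zip(p, q))
                              and any(a < b for a, b in zip(p, q)))
    dominated_by = [0] * n                   # how many points dominate i
    dominated_set = [[] for _ in range(n)]   # indices that i dominates
    fronts = [[]]
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            if dominates(points[i], points[j]):
                dominated_set[i].append(j)
            elif dominates(points[j], points[i]):
                dominated_by[i] += 1
        if dominated_by[i] == 0:
            fronts[0].append(i)              # nondominated: first front
    while fronts[-1]:
        nxt = []
        for i in fronts[-1]:
            for j in dominated_set[i]:
                dominated_by[j] -= 1
                if dominated_by[j] == 0:     # only front-(k) points remain
                    nxt.append(j)
        fronts.append(nxt)
    return fronts[:-1]
```

The quadratic pairwise-comparison loop is the computational bottleneck the paper's quick nondominated sorting algorithm is designed to avoid.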
Optimal interdiction of unreactive Markovian evaders
Gutfraind, Alexander; Hagberg, Aric; Pan, Feng
2008-01-01
The network interdiction problem arises in a wide variety of areas, including military logistics, infectious disease control, and counter-terrorism. In the classical formulation one is given a weighted network G(N, E), and the task is to find b nodes (or edges) whose removal would maximally increase the least-cost path from a source node s to a target node t. In practical applications, G represents a transportation or activity network; node/edge removal is done by an agent, the 'interdictor', against another agent, the 'evader', who wants to traverse G from s to t along the least-cost route. Our work is motivated by cases in which both agents have bounded rationality: e.g., when the authorities set up road blocks to catch bank robbers, neither party can plot its actions with full information about the other. We introduce a novel model of network interdiction in which the motion of (possibly) several evaders is described by a Markov process on G. We further suppose that the evaders do not respond to interdiction decisions because of time, knowledge or computational constraints. We prove that this interdiction problem is NP-hard, like the classical formulation, but unlike the classical problem the objective function is submodular. This implies that the solution can be approximated within 1-1/e using a greedy algorithm. Exploiting submodularity again, we demonstrate that a 'priority' (or 'lazy') evaluation algorithm can improve performance by orders of magnitude. Taken together, the results bring realistic solutions to the interdiction problem on global-scale networks closer.
Optimality principles in sensorimotor control.
Todorov, Emanuel
2004-09-01
The sensorimotor system is a product of evolution, development, learning and adaptation-which work on different time scales to improve behavioral performance. Consequently, many theories of motor function are based on 'optimal performance': they quantify task goals as cost functions, and apply the sophisticated tools of optimal control theory to obtain detailed behavioral predictions. The resulting models, although not without limitations, have explained more empirical phenomena than any other class. Traditional emphasis has been on optimizing desired movement trajectories while ignoring sensory feedback. Recent work has redefined optimality in terms of feedback control laws, and focused on the mechanisms that generate behavior online. This approach has allowed researchers to fit previously unrelated concepts and observations into what may become a unified theoretical framework for interpreting motor function. At the heart of the framework is the relationship between high-level goals, and the real-time sensorimotor control strategies most suitable for accomplishing those goals.
Montenegro-Johnson, Thomas D; Lauga, Eric
2014-06-01
Propulsion at microscopic scales is often achieved through propagating traveling waves along hairlike organelles called flagella. Taylor's two-dimensional swimming sheet model is frequently used to provide insight into problems of flagellar propulsion. We derive numerically the large-amplitude wave form of the two-dimensional swimming sheet that yields optimum hydrodynamic efficiency: the ratio of the squared swimming speed to the rate-of-working of the sheet against the fluid. Using the boundary element method, we show that the optimal wave form is a front-back symmetric regularized cusp that is 25% more efficient than the optimal sine wave. This optimal two-dimensional shape is smooth, qualitatively different from the kinked form of Lighthill's optimal three-dimensional flagellum, not predicted by small-amplitude theory, and different from the smooth circular-arc-like shape of active elastic filaments. PMID:25019709
Energy Criteria for Resource Optimization
ERIC Educational Resources Information Center
Griffith, J. W.
1973-01-01
Resource optimization in building design is based on the total system over its expected useful life. Alternative environmental systems can be evaluated in terms of resource costs and goal effectiveness. (Author/MF)
Data Understanding Applied to Optimization
NASA Technical Reports Server (NTRS)
Buntine, Wray; Shilman, Michael
1998-01-01
The goal of this research is to explore and develop software for supporting visualization and data analysis of search and optimization. Optimization is an ever-present problem in science. The theory of NP-completeness implies that such problems can only be resolved by increasingly smarter problem-specific knowledge, possibly for use in some general-purpose algorithms. Visualization and data analysis offer an opportunity to accelerate our understanding of key computational bottlenecks in optimization and to automatically tune aspects of the computation for specific problems. We will prototype systems to demonstrate how data understanding can be successfully applied to problems characteristic of NASA's key science optimization tasks, such as central tasks for parallel processing, spacecraft scheduling, and data transmission from a remote satellite.
Nonlinear optimization for stochastic simulations.
Johnson, Michael M.; Yoshimura, Ann S.; Hough, Patricia Diane; Ammerlahn, Heidi R.
2003-12-01
This report describes research targeting development of stochastic optimization algorithms and their application to mission-critical optimization problems in which uncertainty arises. The first section of this report covers the enhancement of the Trust Region Parallel Direct Search (TRPDS) algorithm to address stochastic responses and the incorporation of the algorithm into the OPT++ optimization library. The second section describes the Weapons of Mass Destruction Decision Analysis Center (WMD-DAC) suite of systems analysis tools and motivates the use of stochastic optimization techniques in such non-deterministic simulations. The third section details a batch programming interface designed to facilitate criteria-based or algorithm-driven execution of system-of-system simulations. The fourth section outlines the use of the enhanced OPT++ library and batch execution mechanism to perform systems analysis and technology trade-off studies in the WMD detection and response problem domain.
Two concepts of therapeutic optimism.
Jansen, Lynn A
2011-09-01
Researchers and ethicists have long been concerned about the expectations for direct medical benefit expressed by participants in early phase clinical trials. Early work on the issue considered the possibility that participants misunderstand the purpose of clinical research or that they are misinformed about the prospects for medical benefit from these trials. Recently, however, attention has turned to the possibility that research participants are simply expressing optimism or hope about their participation in these trials. The ethical significance of this therapeutic optimism remains unclear. This paper argues that there are two distinct phenomena that can be associated with the term 'therapeutic optimism'-one is ethically benign and the other is potentially worrisome. Distinguishing these two phenomena is crucial for understanding the nature and ethical significance of therapeutic optimism. The failure to draw a distinction between these phenomena also helps to explain why different writers on the topic often speak past one another.
Techniques for shuttle trajectory optimization
NASA Technical Reports Server (NTRS)
Edge, E. R.; Shieh, C. J.; Powers, W. F.
1973-01-01
The application of recently developed function-space Davidon-type techniques to the shuttle ascent trajectory optimization problem is discussed along with an investigation of the recently developed PRAXIS algorithm for parameter optimization. At the outset of this analysis, the major deficiency of the function-space algorithms was their potential storage problems. Since most previous analyses of the methods were with relatively low-dimension problems, no storage problems were encountered. However, in shuttle trajectory optimization, storage is a problem, and this problem was handled efficiently. Topics discussed include: the shuttle ascent model and the development of the particular optimization equations; the function-space algorithms; the operation of the algorithm and typical simulations; variable final-time problem considerations; and a modification of Powell's algorithm.
Habitat Design Optimization and Analysis
NASA Technical Reports Server (NTRS)
SanSoucie, Michael P.; Hull, Patrick V.; Tinker, Michael L.
2006-01-01
Long-duration surface missions to the Moon and Mars will require habitats for the astronauts. The materials chosen for the habitat walls play a direct role in the protection against the harsh environments found on the surface. Choosing the best materials, their configuration, and the amount required is extremely difficult due to the immense size of the design region. Advanced optimization techniques are necessary for habitat wall design. Standard optimization techniques are not suitable for problems with such large search spaces; therefore, a habitat design optimization tool utilizing genetic algorithms has been developed. Genetic algorithms use a "survival of the fittest" philosophy, where the most fit individuals are more likely to survive and reproduce. This habitat design optimization tool is a multi-objective formulation of structural analysis, heat loss, radiation protection, and meteoroid protection. This paper presents the research and development of this tool.
Optimal solar sail planetocentric trajectories
NASA Technical Reports Server (NTRS)
Sackett, L. L.
1977-01-01
The analysis of the solar sail planetocentric optimal trajectory problem is described. A computer program was produced to calculate optimal trajectories for a limited performance analysis. A square sail model is included and some consideration is given to a heliogyro sail model. Orbit to a subescape point and orbit to orbit transfer are considered. Trajectories about the four inner planets can be calculated and shadowing, oblateness, and solar motion may be included. Equinoctial orbital elements are used to avoid the classical singularities, and the method of averaging is applied to increase computational speed. Solution of the two-point boundary value problem which arises from the application of optimization theory is accomplished with a Newton procedure. Time optimal trajectories are emphasized, but a penalty function has been considered to prevent trajectories which intersect a planet's surface.
NASA Astrophysics Data System (ADS)
Tarpine, Ryan; Lam, Fumei; Istrail, Sorin
We present results on two classes of problems. The first result addresses the long-standing open problem of finding unifying principles for Linkage Disequilibrium (LD) measures in population genetics (Lewontin 1964 [10], Hedrick 1987 [8], Devlin and Risch 1995 [5]). Two desirable properties have been proposed in the extensive literature on this topic, and the mutual consistency between these properties has remained at the heart of statistical and algorithmic difficulties with haplotype and genome-wide association study analysis. The first axiom is (1) The ability to extend LD measures to multiple loci as a conservative extension of pairwise LD. All widely used LD measures are pairwise measures. Despite significant attempts, it is not clear how to naturally extend these measures to multiple loci, leading to a "curse of the pairwise". The second axiom is (2) The Interpretability of Intermediate Values. In this paper, we resolve this mutual consistency problem by introducing a new LD measure, directed informativeness $\overrightarrow{I}$ (the directed graph theoretic counterpart of the informativeness measure introduced by Halldorsson et al. [6]) and show that it satisfies both of the above axioms. We also show the maximum informative subset of tagging SNPs based on $\overrightarrow{I}$ can be computed exactly in polynomial time for realistic genome-wide data. Furthermore, we present polynomial time algorithms for optimal genome-wide tagging SNPs selection for a number of commonly used LD measures, under the bounded neighborhood assumption for linked pairs of SNPs. One problem in the area is the search for a quality measure for tagging SNPs selection that unifies the LD-based methods such as LD-select (implemented in Tagger, de Bakker et al. 2005 [4], Carlson et al. 2004 [3]) and the information-theoretic ones such as informativeness. We show that the objective function of the LD-select algorithm is the Minimal Dominating Set (MDS) on $r^2$-SNP graphs and show that we can
Geometric optimization of thermal systems
NASA Astrophysics Data System (ADS)
Alebrahim, Asad Mansour
2000-10-01
The work in chapter 1 extends to three dimensions and to convective heat transfer the constructal method of minimizing the thermal resistance between a volume and one point. In the first part, the heat flow mechanism is conduction, and the heat generating volume is occupied by low conductivity material (k0) and high conductivity inserts (kp) that are shaped as constant-thickness disks mounted on a common stem of kp material. In the second part the interstitial spaces once occupied by k0 material are bathed by forced convection. The internal and external geometric aspect ratios of the elemental volume and the first assembly are optimized numerically subject to volume constraints. Chapter 2 presents the constrained thermodynamic optimization of a cross-flow heat exchanger with ram air on the cold side, which is used in the environmental control systems of aircraft. Optimized geometric features such as the ratio of channel spacings and flow lengths are reported. It is found that the optimized features are relatively insensitive to changes in other physical parameters of the installation and relatively insensitive to the additional irreversibility due to discharging the ram-air stream into the atmosphere, emphasizing the robustness of the thermodynamic optimum. In chapter 3 the problem of maximizing exergy extraction from a hot stream by distributing streams over a heat transfer surface is studied. In the first part, the cold stream is compressed in an isothermal compressor, expanded in an adiabatic turbine, and discharged into the ambient. In the second part, the cold stream is compressed in an adiabatic compressor. Both designs are optimized with respect to the capacity-rate imbalance of the counter-flow and the pressure ratio maintained by the compressor. This study shows the tradeoff between simplicity and increased performance, and outlines the path for further conceptual work on the extraction of exergy from a hot stream that is being cooled gradually. The aim
Numerical Optimization Using Computer Experiments
NASA Technical Reports Server (NTRS)
Trosset, Michael W.; Torczon, Virginia
1997-01-01
Engineering design optimization often gives rise to problems in which expensive objective functions are minimized by derivative-free methods. We propose a method for solving such problems that synthesizes ideas from the numerical optimization and computer experiment literatures. Our approach relies on kriging known function values to construct a sequence of surrogate models of the objective function that are used to guide a grid search for a minimizer. Results from numerical experiments on a standard test problem are presented.
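The surrogate-guided loop the abstract describes (fit a model to known function values, search the model, spend expensive evaluations only where the model points) can be sketched in one dimension. Note the paper uses kriging surrogates and a grid/pattern search; the quadratic surrogate through the three best points below is a simplifying stand-in, and all parameter names are assumptions:

```python
def surrogate_minimize(f, lo, hi, n_init=5, rounds=15):
    """Minimize an expensive 1-D objective via a cheap surrogate model.

    Each round fits a parabola through the three best evaluated points
    (standing in for a kriging surrogate) and evaluates the true objective
    only at the parabola's minimizer, clipped to [lo, hi].
    """
    xs = [lo + (hi - lo) * i / (n_init - 1) for i in range(n_init)]
    pts = [(x, f(x)) for x in xs]
    for _ in range(rounds):
        pts.sort(key=lambda p: p[1])
        (x0, y0), (x1, y1), (x2, y2) = pts[:3]
        # Coefficients of the interpolating parabola a*x^2 + b*x + c.
        denom = (x0 - x1) * (x0 - x2) * (x1 - x2)
        a = (x2 * (y1 - y0) + x1 * (y0 - y2) + x0 * (y2 - y1)) / denom
        b = (x2 ** 2 * (y0 - y1) + x1 ** 2 * (y2 - y0) + x0 ** 2 * (y1 - y2)) / denom
        if a <= 0:
            break                      # surrogate has no interior minimum
        x_new = min(max(-b / (2 * a), lo), hi)
        if any(abs(x_new - x) < 1e-12 for x, _ in pts):
            break                      # proposed point already evaluated
        pts.append((x_new, f(x_new)))  # the one expensive evaluation per round
    return min(pts, key=lambda p: p[1])
```

The design point is the same as in the paper: the expensive objective is sampled only where the surrogate predicts progress, so the evaluation budget, not the search itself, dominates the cost.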
Central Plateau Remediation Optimization Study
Bergman, T. B.; Stefanski, L. D.; Seeley, P. N.; Zinsli, L. C.; Cusack, L. J.
2012-09-19
The Central Plateau remediation optimization study was conducted to develop an optimal sequence of remediation activities implementing the CERCLA decision on the Central Plateau. The study defines a sequence of activities that results in an effective use of resources from a strategic perspective when considering equipment procurement and staging, workforce mobilization/demobilization, workforce leveling, workforce skill-mix, and other remediation/disposition project execution parameters.
Methods to optimize selective hyperthermia
NASA Astrophysics Data System (ADS)
Cowan, Thomas M.; Bailey, Christopher A.; Liu, Hong; Chen, Wei R.
2003-07-01
Laser immunotherapy, a novel therapy for breast cancer, utilizes selective photothermal interaction to raise the temperature of tumor tissue above the cell damage threshold. Photothermal interaction is achieved with intratumoral injection of a laser absorbing dye followed by non-invasive laser irradiation. When tumor heating is used in combination with immunoadjuvant to stimulate an immune response, anti-tumor immunity can be achieved. In our study, gelatin phantom simulations were used to optimize therapy parameters such as laser power, laser beam radius, and dye concentration to achieve maximum heating of target tissue with the minimum heating of non-targeted tissue. An 805-nm diode laser and indocyanine green (ICG) were used to achieve selective photothermal interactions in a gelatin phantom. Spherical gelatin phantoms containing ICG were used to simulate the absorption-enhanced target tumors, which were embedded inside gelatin without ICG to simulate surrounding non-targeted tissue. Different laser powers and dye concentrations were used to treat the gelatin phantoms. The temperature distributions in the phantoms were measured, and the data were used to determine the optimal parameters used in selective hyperthermia (laser power and dye concentration for this case). The method involves an optimization coefficient, which is proportional to the difference between temperatures measured in targeted and non-targeted gel. The coefficient is also normalized by the difference between the most heated region of the target gel and the least heated region. A positive optimization coefficient signifies a greater temperature increase in targeted gelatin when compared to non-targeted gelatin, and therefore, greater selectivity. Comparisons were made between the optimization coefficients for varying laser powers in order to demonstrate the effectiveness of this method in finding an optimal parameter set. Our experimental results support the proposed use of an optimization
SWOC: Spectral Wavelength Optimization Code
NASA Astrophysics Data System (ADS)
Ruchti, G. R.
2016-06-01
SWOC (Spectral Wavelength Optimization Code) determines the wavelength ranges that provide the optimal amount of information to achieve the required science goals for a spectroscopic study. It computes a figure-of-merit for different spectral configurations using a user-defined list of spectral features, and, utilizing a set of flux-calibrated spectra, determines the spectral regions showing the largest differences among the spectra.
Optimal BLS: Optimizing transit-signal detection for Keplerian dynamics
NASA Astrophysics Data System (ADS)
Ofir, Aviv
2015-08-01
Transit surveys, both ground- and space-based, have already accumulated a large number of light curves that span several years. We optimize the search for transit signals for both detection and computational efficiencies by assuming that the searched systems can be described by Keplerian orbits, and propagating the effects of different system parameters to the detection parameters. Importantly, we mainly consider the information content of the transit signal and not any specific algorithm - and use BLS (Kovács, Zucker, & Mazeh 2002) just as a specific example. We show that the frequency information content of the light curve is primarily determined by the duty cycle of the transit signal, and thus the optimal frequency sampling is found to be cubic and not linear. Further optimization is achieved by considering duty-cycle dependent binning of the phased light curve. By using the (standard) BLS, one is either fairly insensitive to long-period planets or less sensitive to short-period planets and computationally slower by a significant factor of ~330 (for a 3 yr long dataset). We also show how the physical system parameters, such as the host star's size and mass, directly affect transit detection. This understanding can then be used to optimize the search for every star individually. By considering Keplerian dynamics explicitly rather than implicitly, one can optimally search the transit signal parameter space. The presented Optimal BLS enhances the detectability of both very short and very long period planets, while allowing such searches to be done with much reduced resources and time. The Matlab/Octave source code for Optimal BLS is made available.
Unrealistic Optimism: East and West?
Joshi, Mary Sissons; Carter, Wakefield
2013-01-01
Following Weinstein’s (1980) pioneering work, many studies established that people have an optimistic bias concerning future life events. At first, the bulk of research was conducted using populations in North America and Northern Europe, the optimistic bias was thought of as universal, and little attention was paid to cultural context. However, construing unrealistic optimism as a form of self-enhancement, some researchers noted that it was far less common in East Asian cultures. The current study extends enquiry to a different non-Western culture. Two hundred and eighty-seven middle aged and middle income participants (200 in India, 87 in England) rated 11 positive and 11 negative events in terms of the chances of each event occurring in “their own life,” and the chances of each event occurring in the lives of “people like them.” Comparative optimism was shown for bad events, with Indian participants showing higher levels of optimism than English participants. The position regarding comparative optimism for good events was more complex. In India those of higher socioeconomic status (SES) were optimistic, while those of lower SES were on average pessimistic. Overall, English participants showed neither optimism nor pessimism for good events. The results, whose clinical relevance is discussed, suggest that the expression of unrealistic optimism is shaped by an interplay of culture and socioeconomic circumstance. PMID:23407689
Efficient computation of optimal actions.
Todorov, Emanuel
2009-07-14
Optimal choice of actions is a fundamental problem relevant to fields as diverse as neuroscience, psychology, economics, computer science, and control engineering. Despite this broad relevance the abstract setting is similar: we have an agent choosing actions over time, an uncertain dynamical system whose state is affected by those actions, and a performance criterion that the agent seeks to optimize. Solving problems of this kind remains hard, in part, because of overly generic formulations. Here, we propose a more structured formulation that greatly simplifies the construction of optimal control laws in both discrete and continuous domains. An exhaustive search over actions is avoided and the problem becomes linear. This yields algorithms that outperform Dynamic Programming and Reinforcement Learning, and thereby solve traditional problems more efficiently. Our framework also enables computations that were not possible before: composing optimal control laws by mixing primitives, applying deterministic methods to stochastic systems, quantifying the benefits of error tolerance, and inferring goals from behavioral data via convex optimization. Development of a general class of easily solvable problems tends to accelerate progress--as linear systems theory has done, for example. Our framework may have similar impact in fields where optimal choice of actions is relevant.
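The "problem becomes linear" claim refers to Todorov's linearly solvable MDPs, where the exponentiated cost-to-go (the desirability z = exp(-v)) satisfies a linear fixed-point equation z = exp(-q) · P z under the passive dynamics P. A minimal sketch on a hypothetical five-state random walk (the states, costs, and dynamics below are ours, chosen for illustration):

```python
import math

def desirability(q, P, terminal, iters=2000):
    """Fixed-point iteration for z(s) = exp(-q(s)) * sum_t P[s][t] * z(t)
    on non-terminal states; terminal states have z = exp(-q)."""
    n = len(q)
    z = [1.0] * n
    for s in terminal:
        z[s] = math.exp(-q[s])
    for _ in range(iters):
        for s in range(n):
            if s not in terminal:
                z[s] = math.exp(-q[s]) * sum(P[s][t] * z[t] for t in range(n))
    return z

# Hypothetical 5-state chain: passive dynamics drift left/right with equal
# probability, state 4 is a zero-cost goal, interior states cost 0.1.
q = [0.1, 0.1, 0.1, 0.1, 0.0]
P = [
    [0.5, 0.5, 0.0, 0.0, 0.0],  # state 0 reflects onto itself/right
    [0.5, 0.0, 0.5, 0.0, 0.0],
    [0.0, 0.5, 0.0, 0.5, 0.0],
    [0.0, 0.0, 0.5, 0.0, 0.5],
    [0.0, 0.0, 0.0, 0.0, 1.0],  # absorbing goal
]
z = desirability(q, P, terminal={4})
v = [-math.log(zi) for zi in z]  # cost-to-go recovered from desirability
```

The cost-to-go v decreases monotonically toward the goal, obtained without any search over actions, which is the point of the linear formulation.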
Optimized quadrature surface coil designs
Kumar, Ananda; Bottomley, Paul A.
2008-01-01
Background Quadrature surface MRI/MRS detectors comprised of circular loop and figure-8 or butterfly-shaped coils offer improved signal-to-noise-ratios (SNR) compared to single surface coils, and reduced power and specific absorption rates (SAR) when used for MRI excitation. While the radius of the optimum loop coil for performing MRI at depth d in a sample is known, the optimum geometry for figure-8 and butterfly coils is not. Materials and methods The geometries of figure-8 and square butterfly detector coils that deliver the optimum SNR are determined numerically by the electromagnetic method of moments. Figure-8 and loop detectors are then combined to create SNR-optimized quadrature detectors whose theoretical and experimental SNR performance are compared with a novel quadrature detector comprised of a strip and a loop, and with two overlapped loops optimized for the same depth at 3 T. The quadrature detection efficiency and local SAR during transmission for the three quadrature configurations are analyzed and compared. Results The SNR-optimized figure-8 detector has loop radius r8 ∼ 0.6d, so r8/r0 ∼ 1.3 in an optimized quadrature detector at 3 T. The optimized butterfly coil has side length ∼ d and crossover angle of ≥ 150° at the center. Conclusions These new design rules for figure-8 and butterfly coils optimize their performance as linear and quadrature detectors. PMID:18057975
Optimal lattice-structured materials
Messner, Mark C.
2016-07-09
This paper describes a method for optimizing the mesostructure of lattice-structured materials. These materials are periodic arrays of slender members resembling efficient, lightweight macroscale structures like bridges and frame buildings. Current additive manufacturing technologies can assemble lattice structures with length scales ranging from nanometers to millimeters. Previous work demonstrates that lattice materials have excellent stiffness- and strength-to-weight scaling, outperforming natural materials. However, there are currently no methods for producing optimal mesostructures that consider the full space of possible 3D lattice topologies. The inverse homogenization approach for optimizing the periodic structure of lattice materials requires a parameterized, homogenized material model describing the response of an arbitrary structure. This work develops such a model, starting with a method for describing the long-wavelength, macroscale deformation of an arbitrary lattice. The work combines the homogenized model with a parameterized description of the total design space to generate a parameterized model. Finally, the work describes an optimization method capable of producing optimal mesostructures. Several examples demonstrate the optimization method. One of these examples produces an elastically isotropic, maximally stiff structure, here called the isotruss, that arguably outperforms the anisotropic octet truss topology.
Pyomo : Python Optimization Modeling Objects.
Siirola, John; Laird, Carl Damon; Hart, William Eugene; Watson, Jean-Paul
2010-11-01
The Python Optimization Modeling Objects (Pyomo) package [1] is an open source tool for modeling optimization applications within Python. Pyomo provides an object-oriented approach to optimization modeling, and it can be used to define symbolic problems, create concrete problem instances, and solve these instances with standard solvers. While Pyomo provides a capability that is commonly associated with algebraic modeling languages such as AMPL, AIMMS, and GAMS, Pyomo's modeling objects are embedded within a full-featured high-level programming language with a rich set of supporting libraries. Pyomo leverages the capabilities of the Coopr software library [2], which integrates Python packages (including Pyomo) for defining optimizers, modeling optimization applications, and managing computational experiments. A central design principle within Pyomo is extensibility. Pyomo is built upon a flexible component architecture [3] that allows users and developers to readily extend the core Pyomo functionality. Through these interface points, extensions and applications can have direct access to an optimization model's expression objects. This facilitates the rapid development and implementation of new modeling constructs as well as high-level solution strategies (e.g. using decomposition- and reformulation-based techniques). In this presentation, we will give an overview of the Pyomo modeling environment and model syntax, and present several extensions to the core Pyomo environment, including support for Generalized Disjunctive Programming (Coopr GDP), Stochastic Programming (PySP), a generic Progressive Hedging solver [4], and a tailored implementation of Benders decomposition.
Optimal control of motorsport differentials
NASA Astrophysics Data System (ADS)
Tremlett, A. J.; Massaro, M.; Purdy, D. J.; Velenis, E.; Assadian, F.; Moore, A. P.; Halley, M.
2015-12-01
Modern motorsport limited slip differentials (LSD) have evolved to become highly adjustable, allowing the torque bias that they generate to be tuned in the corner entry, apex and corner exit phases of typical on-track manoeuvres. The task of finding the optimal torque bias profile under such varied vehicle conditions is complex. This paper presents a nonlinear optimal control method which is used to find the minimum time optimal torque bias profile through a lane change manoeuvre. The results are compared to traditional open and fully locked differential strategies, in addition to considering related vehicle stability and agility metrics. An investigation into how the optimal torque bias profile changes with reduced track-tyre friction is also included in the analysis. The optimal LSD profile was shown to give a performance gain over its locked differential counterpart in key areas of the manoeuvre where a quick direction change is required. The methodology proposed can be used to find both optimal passive LSD characteristics and as the basis of a semi-active LSD control algorithm.
Displacement based multilevel structural optimization
NASA Technical Reports Server (NTRS)
Striz, Alfred G.
1995-01-01
Multidisciplinary design optimization (MDO) is expected to play a major role in the competitive transportation industries of tomorrow, i.e., in the design of aircraft and spacecraft, of high speed trains, boats, and automobiles. All of these vehicles require maximum performance at minimum weight to keep fuel consumption low and conserve resources. Here, MDO can deliver mathematically based design tools to create systems with optimum performance subject to the constraints of disciplines such as structures, aerodynamics, controls, etc. Although some applications of MDO are beginning to surface, the key to a widespread use of this technology lies in the improvement of its efficiency. This aspect is investigated here for the MDO subset of structural optimization, i.e., for the weight minimization of a given structure under size, strength, and displacement constraints. Specifically, finite element based multilevel optimization of structures (here, statically indeterminate trusses and beams for proof of concept) is performed. In the system level optimization, the design variables are the coefficients of assumed displacement functions, and the load unbalance resulting from the solution of the stiffness equations is minimized. Constraints are placed on the deflection amplitudes and the weight of the structure. In the subsystems level optimizations, the weight of each element is minimized under the action of stress constraints, with the cross sectional dimensions as design variables. This approach is expected to prove very efficient, especially for complex structures, since the design task is broken down into a large number of small and efficiently handled subtasks, each with only a small number of variables. This partitioning will also allow for the use of parallel computing, first, by sending the system and subsystems level computations to two different processors, ultimately, by performing all subsystems level optimizations in a massively parallel manner on separate processors.
A novel metaheuristic for continuous optimization problems: Virus optimization algorithm
NASA Astrophysics Data System (ADS)
Liang, Yun-Chia; Rodolfo Cuevas Juarez, Josue
2016-01-01
A novel metaheuristic for continuous optimization problems, named the virus optimization algorithm (VOA), is introduced and investigated. VOA is an iterative, population-based method that imitates the behaviour of viruses attacking a living cell. The number of viruses grows at each replication and is controlled by an immune system (a so-called 'antivirus') to prevent the explosive growth of the virus population. The viruses are divided into two classes (strong and common) to balance the exploitation and exploration effects. The performance of the VOA is validated through a set of eight benchmark functions, which are also subject to rotation and shifting effects to test its robustness. Extensive comparisons were conducted with over 40 well-known metaheuristic algorithms and their variations, such as artificial bee colony, artificial immune system, differential evolution, evolutionary programming, evolutionary strategy, genetic algorithm, harmony search, invasive weed optimization, memetic algorithm, particle swarm optimization and simulated annealing. The results showed that the VOA is a viable solution for continuous optimization.
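The mechanics described above (replication, strong vs. common classes, antivirus culling) can be sketched schematically. This is our own illustrative loop on the sphere benchmark, with arbitrary step sizes and population limits, not the published VOA:

```python
import random

def sphere(x):
    """Benchmark objective: sum of squares, minimum 0 at the origin."""
    return sum(v * v for v in x)

def voa_sketch(f, dim=5, pop=20, n_strong=5, iters=200, limit=100, seed=1):
    """Schematic virus-optimization loop (illustrative, not the published VOA).

    Strong viruses (the current best) replicate more and take small
    exploitation steps; common viruses take large exploration steps.  An
    'antivirus' culls the population back to `pop` members whenever its
    size exceeds `limit`, preventing explosive growth.
    """
    rng = random.Random(seed)
    viruses = [[rng.uniform(-5.0, 5.0) for _ in range(dim)] for _ in range(pop)]
    for _ in range(iters):
        viruses.sort(key=f)
        offspring = []
        for rank, v in enumerate(viruses):
            n_kids = 2 if rank < n_strong else 1   # strong viruses replicate more
            step = 0.1 if rank < n_strong else 1.0  # ...and exploit locally
            for _ in range(n_kids):
                offspring.append([c + rng.gauss(0.0, step) for c in v])
        viruses += offspring
        if len(viruses) > limit:   # antivirus: keep only the fittest
            viruses.sort(key=f)
            viruses = viruses[:pop]
    return min(viruses, key=f)
```

Because the cull always retains the best individuals, the best objective value is non-increasing over iterations.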
Schedule path optimization for adiabatic quantum computing and optimization
NASA Astrophysics Data System (ADS)
Zeng, Lishan; Zhang, Jun; Sarovar, Mohan
2016-04-01
Adiabatic quantum computing and optimization have garnered much attention recently as possible models for achieving a quantum advantage over classical approaches to optimization and other special purpose computations. Both techniques are probabilistic in nature, and the minimum gap between the ground state and first excited state of the system during evolution is a major factor in determining the success probability. In this work we investigate a strategy for increasing the minimum gap and success probability by introducing intermediate Hamiltonians that modify the evolution path between initial and final Hamiltonians. We focus on an optimization problem relevant to recent hardware implementations and present numerical evidence for the existence of a purely local intermediate Hamiltonian that achieves the optimum performance in terms of pushing the minimum gap to one of the end points of the evolution. As part of this study we develop a convex optimization formulation of the search for optimal adiabatic schedules that makes this computation more tractable, and which may be of independent interest. We further study the effectiveness of random intermediate Hamiltonians on the minimum gap and success probability, and empirically find that random Hamiltonians have a significant probability of increasing the success probability, but only by a modest amount.
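The effect of an intermediate Hamiltonian on the minimum gap can be seen already in a single-qubit toy model (our own illustration, not the hardware-relevant problem of the paper): for H = aX + bZ the eigenvalues are ±sqrt(a² + b²), so the gap along the schedule can be scanned directly.

```python
import math

def min_gap(c=0.0, n=2001):
    """Minimum spectral gap along the schedule of the single-qubit toy
    Hamiltonian H(s) = (1 - s) X + s Z + c * s * (1 - s) X.

    For H = a X + b Z the eigenvalues are +/- sqrt(a**2 + b**2), so the
    instantaneous gap is 2 * sqrt(a**2 + b**2); we scan s on a grid.
    """
    gaps = []
    for k in range(n):
        s = k / (n - 1)
        a = (1.0 - s) * (1.0 + c * s)  # X coefficient, incl. intermediate term
        b = s                          # Z coefficient
        gaps.append(2.0 * math.sqrt(a * a + b * b))
    return min(gaps)
```

Here a positive c strengthens the X coefficient mid-schedule, raising the minimum gap above the bare value of sqrt(2) and moving its location toward an end point of the evolution, in the spirit of the path-modification strategy described above.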
Optimal singular control with applications to trajectory optimization
NASA Technical Reports Server (NTRS)
Vinh, N. X.
1977-01-01
A comprehensive discussion of the problem of singular control is presented. Singular control enters an optimal trajectory when the so-called switching function vanishes identically over a finite time interval. Using the concept of domain of maneuverability, the problem of optimal switching is analyzed. Criteria for the optimal direction of switching are presented. The switching, or junction, between nonsingular and singular subarcs is examined in detail. Several theorems concerning the necessary, and also sufficient, conditions for smooth junction are presented. The concepts of quasi-linear control and linearized control are introduced. They are designed for the purpose of obtaining approximate solutions for the difficult Euler-Lagrange type of optimal control in the case where the control is nonlinear.
Noncooperatively optimized tolerance: decentralized strategic optimization in complex systems.
Vorobeychik, Yevgeniy; Mayo, Jackson R; Armstrong, Robert C; Ruthruff, Joseph R
2011-09-01
We introduce noncooperatively optimized tolerance (NOT), a game theoretic generalization of highly optimized tolerance (HOT), which we illustrate in the forest fire framework. As the number of players increases, NOT retains features of HOT, such as robustness and self-dissimilar landscapes, but also develops features of self-organized criticality. The system retains considerable robustness even as it becomes fractured, due in part to emergent cooperation between players, and at the same time exhibits increasing resilience against changes in the environment, giving rise to intermediate regimes where the system is robust to a particular distribution of adverse events, yet not very fragile to changes. PMID:21981540
Four-body trajectory optimization. [fuel optimal computer programs
NASA Technical Reports Server (NTRS)
Pu, C. L.; Edelbaum, T. N.
1975-01-01
The two methods which are suitable for use in a 4-body trajectory optimization program are both multiconic methods. They include an approach due to Wilson (1970) and to Byrnes and Hooper (1970) and a procedure developed by Stumpff and Weiss (1968). The various steps in a trajectory optimization program are discussed, giving attention to variable step integration, the correction of errors by quadrature formulas, questions of two-impulse transfer, three-impulse transfer, and two examples which illustrate the implementation of the computational approaches.
Optimization of dish solar collectors
NASA Technical Reports Server (NTRS)
Jaffe, L. D.
1983-01-01
Methods for optimizing parabolic dish solar collectors and the consequent effects of various optical, thermal, mechanical, and cost variables are examined. The most important performance optimization is adjusting the receiver aperture to maximize collector efficiency. Other parameters that can be adjusted to optimize efficiency include focal length, and, if a heat engine is used, the receiver temperature. The efficiency maxima associated with focal length and receiver temperature are relatively broad; it may, accordingly, be desirable to design somewhat away from the maxima. Performance optimization is sensitive to the slope and specularity errors of the concentrator. Other optical and thermal variables affecting optimization are the reflectance and blocking factor of the concentrator, the absorptance and losses of the receiver, and, if a heat engine is used, the shape of the engine efficiency versus temperature curve. Performance may sometimes be improved by use of an additional optical element (a secondary concentrator) or a receiver window if the errors of the primary concentrator are large or the receiver temperature is high. Previously announced in STAR as N83-19224
Optimal design of solidification processes
NASA Technical Reports Server (NTRS)
Dantzig, Jonathan A.; Tortorelli, Daniel A.
1991-01-01
An optimal design algorithm is presented for the analysis of general solidification processes, and is demonstrated for the growth of GaAs crystals in a Bridgman furnace. The system is optimal in the sense that the prespecified temperature distribution in the solidifying materials is obtained to maximize product quality. The optimization uses traditional numerical programming techniques which require the evaluation of cost and constraint functions and their sensitivities. The finite element method is incorporated to analyze the crystal solidification problem, evaluate the cost and constraint functions, and compute the sensitivities. These techniques are demonstrated in the crystal growth application by determining an optimal furnace wall temperature distribution to obtain the desired temperature profile in the crystal, and hence to maximize the crystal's quality. Several numerical optimization algorithms are studied to determine the proper convergence criteria, effective 1-D search strategies, appropriate forms of the cost and constraint functions, etc. In particular, we incorporate the conjugate gradient and quasi-Newton methods for unconstrained problems. The efficiency and effectiveness of each algorithm is presented in the example problem.
Large deviations and portfolio optimization
NASA Astrophysics Data System (ADS)
Sornette, Didier
Risk control and optimal diversification constitute a major focus in the finance and insurance industries as well as, more or less consciously, in our everyday life. We present a discussion of the characterization of risks and of the optimization of portfolios that starts from a simple illustrative model and ends with a general functional integral formulation. A major point is that risk, usually thought of as one-dimensional in the conventional mean-variance approach, has to be addressed by the full distribution of losses. Furthermore, the time-horizon of the investment is shown to play a major role. We show the importance of accounting for large fluctuations and use the theory of Cramér for large deviations in this context. We first treat a simple model with a single risky asset that exemplifies the distinction between the average return and the typical return and the role of large deviations in multiplicative processes, and the different optimal strategies for the investors depending on their size. We then analyze the case of assets whose price variations are distributed according to exponential laws, a situation that is found to describe daily price variations reasonably well. Several portfolio optimization strategies are presented that aim at controlling large risks. We end by extending the standard mean-variance portfolio optimization theory, first within the quasi-Gaussian approximation and then using a general formulation for non-Gaussian correlated assets in terms of the formalism of functional integrals developed in the field theory of critical phenomena.
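The distinction between the average and the typical return in a multiplicative process can be checked with a short simulation (the step factors below are arbitrary illustrative values): a factor of 1.5 or 0.6 with equal probability has mean 1.05, so the ensemble average grows, yet E[log r] = (log 1.5 + log 0.6)/2 < 0, so the typical path decays.

```python
import random

def wealth_paths(n_paths=2000, n_steps=200, up=1.5, down=0.6, seed=0):
    """Simulate a multiplicative process: each step multiplies wealth by
    `up` or `down` with equal probability (illustrative values only).

    Mean step factor 0.5 * (1.5 + 0.6) = 1.05, so the ensemble average
    grows; but E[log r] = 0.5 * (log 1.5 + log 0.6) < 0, so the typical
    (median) final wealth collapses toward zero.
    """
    rng = random.Random(seed)
    finals = []
    for _ in range(n_paths):
        w = 1.0
        for _ in range(n_steps):
            w *= up if rng.random() < 0.5 else down
        finals.append(w)
    return finals
```

The sample mean is dominated by a handful of rare lucky paths while the median collapses, which is exactly why the full distribution of losses, not a single moment, matters.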
Pareto optimal pairwise sequence alignment.
DeRonne, Kevin W; Karypis, George
2013-01-01
Sequence alignment using evolutionary profiles is a commonly employed tool when investigating a protein. Many profile-profile scoring functions have been developed for use in such alignments, but there has not yet been a comprehensive study of Pareto optimal pairwise alignments for combining multiple such functions. We show that the problem of generating Pareto optimal pairwise alignments has an optimal substructure property, and develop an efficient algorithm for generating Pareto optimal frontiers of pairwise alignments. All possible sets of two, three, and four profile scoring functions are used from a pool of 11 functions and applied to 588 pairs of proteins in the ce_ref data set. The performance of the best objective combinations on ce_ref is also evaluated on an independent set of 913 protein pairs extracted from the BAliBASE RV11 data set. Our dynamic-programming-based heuristic approach produces approximated Pareto optimal frontiers of pairwise alignments that contain comparable alignments to those on the exact frontier, but on average in less than 1/58th the time in the case of four objectives. Our results show that the Pareto frontiers contain alignments whose quality is better than the alignments obtained by single objectives. However, the task of identifying a single high-quality alignment among those in the Pareto frontier remains challenging.
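At the core of any Pareto-frontier construction is the dominance test over objective vectors. A minimal brute-force filter (our own quadratic-time illustration, not the paper's dynamic-programming algorithm) might look like:

```python
def pareto_frontier(points):
    """Return the non-dominated (Pareto optimal) score vectors.

    A point p dominates q when p is at least as good as q in every
    objective and strictly better in at least one.  This brute-force
    filter is quadratic in the number of points.
    """
    def dominates(p, q):
        return (all(a >= b for a, b in zip(p, q))
                and any(a > b for a, b in zip(p, q)))
    return [p for p in points if not any(dominates(q, p) for q in points)]
```

For alignments scored under several profile-profile functions, each point would be the tuple of scores of one candidate alignment, and the frontier is the set no single-objective search can improve on in every objective at once.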
Theory of Optimal Human Motion
NASA Astrophysics Data System (ADS)
Chan, Albert Loongtak
1990-01-01
This thesis presents optimal theories for punching and running. The first is a theory of the optimal karate punch in terms of the duration and the speed of the punch. This theory is solved and compared with experimental data. The theory incorporates the force vs velocity equation (Hill's eq.) and Wilkie's equation for elbow flexion in determining the optimal punch. The time T and the final speed of the punch are dependent on a few physiological parameters for arm muscles. The theoretical punch agrees fairly well with our experiments and other independent experiments. Second, a theory of optimal running is presented, solved and compared with world track records. The theory is similar to Keller's theory for running (1973) except that the power consumed by a runner is assumed to be proportional to the runner's speed v, P = Hv, whereas Keller took P = constant. There are differential equations for velocity and energy, two initial conditions and two constraint inequalities, involving a total of four free parameters. Optimal control techniques are used to solve this problem and minimize the running time T given the race distance D. The resultant predicted times T agree well with the records and the parameter values are consistent with independent physiological measurements.
On optimal velocity during cycling.
Maroński, R
1994-02-01
This paper focuses on the solution of two problems related to cycling. One is to determine the velocity as a function of distance which minimizes the cyclist's energy expenditure in covering a given distance in a set time. The other is to determine the velocity as a function of the distance which minimizes time for fixed energy expenditure. To solve these problems, an equation of motion for the cyclist riding over arbitrary terrain is written using Newton's second law. This equation is used to evaluate either energy expenditure or time, and the minimization problems are solved using an optimal control formulation in conjunction with the method of Miele [Optimization Techniques with Applications to Aerospace Systems, pp. 69-98 (1962) Academic Press, New York]. Solutions to both optimal control problems are the same. The solutions are illustrated through two examples. In one example where the relative wind velocity is zero, the optimal cruising velocity is constant regardless of terrain. In the second, where the relative wind velocity fluctuates, the optimal cruising velocity varies.
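The equation of motion the author writes from Newton's second law can be sketched for the simplest case of flat terrain and still air, with hypothetical rider parameters: m dv/dt = P/v − c v², whose steady state is the cruising speed (P/c)^(1/3).

```python
def cruise(power=300.0, drag=0.3, mass=80.0, v0=1.0, dt=0.1, steps=1200):
    """Euler integration of m * dv/dt = P / v - c * v**2: a cyclist
    delivering constant power P on flat terrain in still air, against a
    quadratic aerodynamic drag force c * v**2 (parameters hypothetical).

    The speed relaxes to the steady cruising value (P / c)**(1/3); with
    these numbers, (300 / 0.3)**(1/3) = 10 m/s.
    """
    v = v0
    for _ in range(steps):
        v += dt * (power / v - drag * v * v) / mass
    return v
```

That the speed settles to a constant cruising value in still air is consistent with the paper's first example, where the optimal cruising velocity is constant regardless of terrain when the relative wind velocity is zero.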
Machine Translation Evaluation and Optimization
NASA Astrophysics Data System (ADS)
Dorr, Bonnie; Olive, Joseph; McCary, John; Christianson, Caitlin
The evaluation of machine translation (MT) systems is a vital field of research, both for determining the effectiveness of existing MT systems and for optimizing the performance of MT systems. This part describes a range of different evaluation approaches used in the GALE community and introduces evaluation protocols and methodologies used in the program. We discuss the development and use of automatic, human, task-based and semi-automatic (human-in-the-loop) methods of evaluating machine translation, focusing on the use of the human-mediated translation error rate (HTER) as the evaluation standard used in GALE. We discuss the workflow associated with the use of this measure, including post editing, quality control, and scoring. We document the evaluation tasks, data, protocols, and results of recent GALE MT Evaluations. In addition, we present a range of different approaches for optimizing MT systems on the basis of different measures. We outline the requirements and specific problems when using different optimization approaches and describe how the characteristics of different MT metrics affect the optimization. Finally, we describe novel recent and ongoing work on the development of fully automatic MT evaluation metrics that have the potential to substantially improve the effectiveness of evaluation and optimization of MT systems.
Optimizing Stellarators for Turbulent Transport
H. E. Mynick, N. Pomphrey, and P. Xanthopoulos
2010-05-27
Up to now, the term "transport-optimized" stellarators has meant optimized to minimize neoclassical transport, while the task of also mitigating turbulent transport, usually the dominant transport channel in such designs, has not been addressed, due to the complexity of plasma turbulence in stellarators. Here, we demonstrate that stellarators can also be designed to mitigate their turbulent transport, by making use of two powerful numerical tools not available until recently, namely gyrokinetic codes valid for 3D nonlinear simulations, and stellarator optimization codes. A first proof-of-principle configuration is obtained, reducing the level of ion temperature gradient turbulent transport from the NCSX baseline design by a factor of about 2.5.
Fuel consumption in optimal control
NASA Technical Reports Server (NTRS)
Redmond, Jim; Silverberg, Larry
1992-01-01
A method has been developed for comparing three optimal control strategies based on fuel consumption. A general cost function minimization procedure was developed by applying two theorems associated with convex sets. Three cost functions associated with control saturation, pseudofuel, and absolute fuel are introduced and minimized. The first two cost functions led to the bang-bang and continuous control strategies, and the minimization of absolute fuel led to an impulsive strategy. The three control strategies were implemented on two elementary systems and a comparison of fuel consumption was made. The impulse control strategy consumes significantly less fuel than the continuous and bang-bang control strategies. This comparison suggests a potential for fuel savings in higher-order systems using impulsive control strategies. However, since exact solutions to fuel-optimal control for large-order systems are difficult if not impossible to achieve, the alternative is to develop near-optimal control strategies.
Optimal randomized scheduling by replacement
Saias, I.
1996-05-01
In the replacement scheduling problem, a system is composed of n processors drawn from a pool of p. The processors can become faulty while in operation and faulty processors never recover. A report is issued whenever a fault occurs. This report states only the existence of a fault but does not indicate its location. Based on this report, the scheduler can reconfigure the system and choose another set of n processors. The system operates satisfactorily as long as, upon report of a fault, the scheduler chooses n non-faulty processors. We provide a randomized protocol maximizing the expected number of faults the system can sustain before the occurrence of a crash. The optimality of the protocol is established by considering a closely related dual optimization problem. The game-theoretic technical difficulties that we solve in this paper are very general and encountered whenever proving the optimality of a randomized algorithm in parallel and distributed computation.
Optimality, reduction and collective motion
Justh, Eric W.; Krishnaprasad, P. S.
2015-01-01
The planar self-steering particle model of agents in a collective gives rise to dynamics on the N-fold direct product of SE(2), the rigid motion group in the plane. Assuming a connected, undirected graph of interaction between agents, we pose a family of symmetric optimal control problems with a coupling parameter capturing the strength of interactions. The Hamiltonian system associated with the necessary conditions for optimality is reducible to a Lie–Poisson dynamical system possessing interesting structure. In particular, the strong coupling limit reveals additional (hidden) symmetry, beyond the manifest one used in reduction: this enables explicit integration of the dynamics, and demonstrates the presence of a ‘master clock’ that governs all agents to steer identically. For finite coupling strength, we show that special solutions exist with steering controls proportional across the collective. These results suggest that optimality principles may provide a framework for understanding imitative behaviours observed in certain animal aggregations. PMID:27547087
Integrated solar energy system optimization
NASA Astrophysics Data System (ADS)
Young, S. K.
1982-11-01
The computer program SYSOPT, intended as a tool for optimizing the subsystem sizing, performance, and economics of integrated wind and solar energy systems, is presented. The modular structure of the methodology additionally allows simulations when the solar subsystems are combined with conventional technologies, e.g., a utility grid. Hourly energy/mass flow balances are computed for interconnection points, yielding optimized sizing and time-dependent operation of various subsystems. The program requires meteorological data, such as insolation, diurnal and seasonal variations, and wind speed at the hub height of a wind turbine, all of which can be taken from simulations like the TRNSYS program. Examples are provided for optimization of a solar-powered (wind turbine and parabolic trough-Rankine generator) desalinization plant, and a design analysis for a solar powered greenhouse.
Genetic Optimization of Optical Nanoantennas
NASA Astrophysics Data System (ADS)
Forestiere, Carlo; Pasquale, Alyssa; Capretti, Antonio; Lee, Sylvanus; Miano, Giovanni; Tamburrino, Antonello; Dal Negro, Luca
2012-02-01
Metal nanostructures can act as plasmonic nanoantennas (PNAs) due to their unique ability to concentrate light over sub-wavelength spatial regions. However, engineering the optimum PNA in terms of a given quality factor or objective function is a challenging task. We propose a novel design strategy for PNAs that couples a genetic algorithm (GA) optimization tool to the analytical multi-particle Mie theory. The positions and radii of metallic nanosphere clusters are found by requiring maximum electric field enhancement at a given focus point. Within the optimization process we introduced several constraints in order to guarantee the physical realizability of the tailored nanostructure with electron-beam lithography (EBL). Our GA optimization results unveil the central role of radiative coupling in the design of PNAs and open up new exciting pathways in the engineering of metal nanostructures. Samples were fabricated using EBL, and surface-enhanced Raman scattering measurements were performed, confirming the theoretical predictions.
Excitation optimization for damage detection
Bement, Matthew T; Bewley, Thomas R
2009-01-01
A technique is developed to answer the important question: 'Given limited system response measurements and ever-present physical limits on the level of excitation, what excitation should be provided to a system to make damage most detectable?' Specifically, a method is presented for optimizing excitations that maximize the sensitivity of output measurements to perturbations in damage-related parameters estimated with an extended Kalman filter. This optimization is carried out in a computationally efficient manner using adjoint-based optimization and causes the innovations term in the extended Kalman filter to be larger in the presence of estimation errors, which leads to a better estimate of the damage-related parameters in question. The technique is demonstrated numerically on a nonlinear 2 DOF system, where a significant improvement in the damage-related parameter estimation is observed.
Optimal segmentation and packaging process
Kostelnik, Kevin M.; Meservey, Richard H.; Landon, Mark D.
1999-01-01
A process for improving packaging efficiency uses three-dimensional, computer-simulated models with various optimization algorithms to determine the optimal segmentation process and packaging configurations based on constraints including container limitations. The present invention is applied to a process for decontaminating, decommissioning (D&D), and remediating a nuclear facility involving the segmentation and packaging of contaminated items in waste containers in order to minimize the number of cuts, maximize packaging density, and reduce worker radiation exposure. A three-dimensional, computer-simulated facility model of the contaminated items is created. The contaminated items are differentiated. The optimal locations, orientations, and sequence of the segmentation and packaging of the contaminated items are determined using the simulated model, the algorithms, and various constraints including container limitations. The cut locations and orientations are transposed to the simulated model. The contaminated items are then actually segmented and packaged. The segmentation and packaging may be simulated beforehand. In addition, the contaminated items may be cataloged and recorded.
Accelerating optimization by tracing valley
NASA Astrophysics Data System (ADS)
Li, Qing-Xiao; He, Rong-Qiang; Lu, Zhong-Yi
2016-06-01
We propose an algorithm to accelerate optimization when an objective function locally resembles a long narrow valley. In such a case, a conventional optimization algorithm usually wanders with too many tiny steps in the valley. The new algorithm approximates the valley bottom locally by a parabola that is obtained by fitting a set of successive points generated recently by a conventional optimization method. Then large steps are taken along the parabola, accompanied by fine adjustment to trace the valley bottom. The effectiveness of the new algorithm has been demonstrated by accelerating the Newton trust-region minimization method and the Levenberg-Marquardt method on the nonlinear fitting problem in exact diagonalization dynamical mean-field theory and on the classic minimization problem of the Rosenbrock function. A many-fold speedup was achieved for both problems, demonstrating the high efficiency of the new algorithm.
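The valley-tracing step described in this abstract can be sketched as follows. This is an illustrative Python reconstruction, not the authors' code: the quadratic fit over recent 2-D iterates and the fixed stride length are assumptions.

```python
import numpy as np

def valley_step(points, stride=5.0):
    """Fit a parabola y = a*x^2 + b*x + c through recent 2-D iterates
    (x_i, y_i) and extrapolate a large step along the fitted curve.
    Illustrative sketch of the valley-tracing idea only."""
    pts = np.asarray(points, dtype=float)
    x, y = pts[:, 0], pts[:, 1]
    a, b, c = np.polyfit(x, y, 2)         # parabola approximating the valley bottom
    x_new = x[-1] + stride                # large step along the valley direction
    y_new = a * x_new**2 + b * x_new + c  # stay on the fitted parabola
    return np.array([x_new, y_new])

# Usage: iterates from a conventional optimizer crawling along the valley y = x^2
trail = [(0.0, 0.0), (1.0, 1.0), (2.0, 4.0), (3.0, 9.0)]
nxt = valley_step(trail, stride=5.0)  # jumps well ahead of the last tiny step
```

In a full implementation the extrapolated point would then be refined ("fine adjustment") by the underlying conventional optimizer before the next fit.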
Optimal shapes for best draining
NASA Astrophysics Data System (ADS)
Sherwood, J. D.
2009-11-01
The container shape that minimizes the volume of draining fluid remaining on the walls of the container after it has been emptied from its base is determined. The film of draining fluid is assumed to wet the walls of the container, and is sufficiently thin so that its curvature may be neglected. Surface tension is ignored. The initial value problem for the thickness of a film of Newtonian fluid is studied, and is shown to lead asymptotically to a similarity solution. From this, and from equivalent solutions for power-law fluids, the volume of the residual film is determined. The optimal container shape is not far from hemispherical, to minimize the surface area, but has a conical base to promote draining. The optimal shape for an axisymmetric mixing vessel, with a hole at the center of its base for draining, is also optimal when inverted, in the manner of a washed wine glass left upside down to drain.
Maneuver Optimization through Simulated Annealing
NASA Astrophysics Data System (ADS)
de Vries, W.
2011-09-01
We developed an efficient method for satellite maneuver optimization. It is based on a Monte Carlo (MC) approach in combination with Simulated Annealing. The former component enables us to consider all trajectories possible given the current satellite position and its available thrust, while the latter ensures that we reliably find the globally optimal solution. Furthermore, this optimization setup is eminently scalable. It runs efficiently on the current multi-core generation of desktop computers, but is equally at home on massively parallel high performance computers (HPC). The baseline method for desktops uses a modified two-body propagator that includes the lunar gravitational force and corrects for nodal and apsidal precession. For the HPC environment, on the other hand, we can include all the necessary components for a full force-model propagation: higher gravitational moments, atmospheric drag, solar radiation pressure, etc. A typical optimization scenario involves an initial orbit and a destination orbit or trajectory, a time period under consideration, and an available amount of thrust. After selecting a particular optimization goal (e.g., least amount of fuel, shortest maneuver), the program determines when to burn, in what direction, and by what amount. Since we consider all possible trajectories, we are not constrained to any particular transfer method (e.g., Hohmann transfers). Indeed, in some cases gravitational slingshots around the Earth turn out to be the best solution. The paper describes our approach in detail, its complement of optimizations for single- and multi-burn sequences, and some in-depth examples. In particular, we highlight an example where the method is used to analyze a sequence of maneuvers after the fact, and showcase its utility as a planning and analysis tool for future maneuvers.
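The simulated-annealing core of such a method can be sketched generically. The Python below is hypothetical: the toy cost function stands in for a maneuver objective such as fuel use, and the real tool couples the acceptance rule to an orbital propagator, which is omitted here.

```python
import math
import random

def simulated_annealing(cost, x0, step=0.5, t0=1.0, cooling=0.95,
                        iters=2000, seed=0):
    """Generic simulated-annealing minimizer: random Monte Carlo proposals
    accepted by a temperature-controlled Metropolis rule. Illustrative only."""
    rng = random.Random(seed)
    x, fx, t = x0, cost(x0), t0
    best_x, best_f = x, fx
    for _ in range(iters):
        cand = x + rng.uniform(-step, step)  # random trial perturbation
        fc = cost(cand)
        # always accept downhill moves; accept uphill moves with
        # Boltzmann probability exp(-delta/T), allowing escapes from local minima
        if fc < fx or rng.random() < math.exp((fx - fc) / max(t, 1e-12)):
            x, fx = cand, fc
            if fx < best_f:
                best_x, best_f = x, fx
        t *= cooling  # cool the annealing schedule
    return best_x, best_f

# Usage on a toy one-parameter "maneuver cost" with minimum cost 2.0 at x = 1
f = lambda x: (x - 1.0) ** 2 + 2.0
x_opt, f_opt = simulated_annealing(f, x0=4.0)
```

In a maneuver-optimization setting, `x` would be a vector of burn times, directions, and magnitudes, and `cost` would run the propagator and score the resulting trajectory.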
Interaction prediction optimization in multidisciplinary design optimization problems.
Meng, Debiao; Zhang, Xiaoling; Huang, Hong-Zhong; Wang, Zhonglai; Xu, Huanwei
2014-01-01
The distributed strategy of Collaborative Optimization (CO) is suitable for large-scale engineering systems. However, CO struggles to converge when the coupled dimensions are high. Furthermore, the discipline objectives cannot be considered in each discipline optimization problem. In this paper, one large-scale systems control strategy, the interaction prediction method (IPM), is introduced to enhance CO. IPM was originally used to control subsystems and coordinate the production process in large-scale systems. We combine the strategy of IPM with CO and propose the Interaction Prediction Optimization (IPO) method to solve MDO problems. As a hierarchical strategy, IPO has a system level and a subsystem level. The interaction design variables (including shared design variables and linking design variables) are handled at the system level and assigned to the subsystem level as design parameters. Each discipline objective is considered and optimized at the subsystem level simultaneously. The values of the design variables are exchanged between the system level and the subsystem level. The compatibility constraints are replaced with enhanced compatibility constraints to reduce the dimension of the design variables in the compatibility constraints. Two examples are presented to show the potential application of IPO for MDO.
Optimal sensor placement in structural health monitoring using discrete optimization
NASA Astrophysics Data System (ADS)
Sun, Hao; Büyüköztürk, Oral
2015-12-01
The objective of optimal sensor placement (OSP) is to obtain a sensor layout that gives as much information of the dynamic system as possible in structural health monitoring (SHM). The process of OSP can be formulated as a discrete minimization (or maximization) problem with the sensor locations as the design variables, conditional on the constraint of a given sensor number. In this paper, we propose a discrete optimization scheme based on the artificial bee colony algorithm to solve the OSP problem after first transforming it into an integer optimization problem. A modal assurance criterion-oriented objective function is investigated to measure the utility of a sensor configuration in the optimization process based on the modal characteristics of a reduced order model. The reduced order model is obtained using an iterated improved reduced system technique. The constraint is handled by a penalty term added to the objective function. Three examples, including a 27-bar truss bridge, a 21-storey building at the MIT campus and the 610 m high Canton Tower, are investigated to test the applicability of the proposed algorithm to OSP. In addition, the proposed OSP algorithm is experimentally validated on a physical laboratory structure which is a three-storey, two-bay steel frame instrumented with triaxial accelerometers. Results indicate that the proposed method is efficient and can be potentially used in OSP in practical SHM.
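The penalty-handled discrete objective can be illustrated with a toy stand-in. In this hypothetical Python sketch, a determinant-based information measure and an exhaustive subset search replace the paper's MAC-oriented objective and artificial bee colony search; the mode-shape matrix and all names are invented for illustration.

```python
import itertools
import numpy as np

def placement_objective(phi, sensors, n_required, penalty=1e3):
    """Score a candidate sensor set: determinant of the Fisher information of
    the rows of the mode-shape matrix at the chosen locations, minus a penalty
    when the sensor-count constraint is violated (constraint-as-penalty, as in
    the paper; the information measure itself is a simplified stand-in)."""
    rows = phi[list(sensors), :]
    fisher = rows.T @ rows                 # mode-shape information at these sensors
    score = float(np.linalg.det(fisher))
    if len(sensors) != n_required:
        score -= penalty * abs(len(sensors) - n_required)
    return score

def best_placement(phi, n_sensors):
    """Exhaustive search over sensor subsets (feasible only for tiny models;
    the paper uses an artificial bee colony search for realistic sizes)."""
    candidates = itertools.combinations(range(phi.shape[0]), n_sensors)
    return max(candidates, key=lambda s: placement_objective(phi, s, n_sensors))

# Toy mode-shape matrix: 5 candidate locations, 2 modes
phi = np.array([[1.0, 0.0],
                [0.0, 1.0],
                [0.1, 0.1],
                [0.9, 0.1],
                [0.1, 0.9]])
chosen = best_placement(phi, 2)  # picks the two most mutually informative rows
```

The search correctly prefers locations 0 and 1, whose mode-shape rows are orthogonal and therefore jointly most informative.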
Is optimism optimal? Functional causes of apparent behavioural biases.
Houston, Alasdair I; Trimmer, Pete C; Fawcett, Tim W; Higginson, Andrew D; Marshall, James A R; McNamara, John M
2012-02-01
We review the use of the terms 'optimism' and 'pessimism' to characterize particular types of behaviour in non-human animals. Animals can certainly behave as though they are optimistic or pessimistic with respect to specific motivations, as documented by an extensive range of examples in the literature. However, in surveying such examples we find that these terms are often poorly defined and are liable to lead to confusion. Furthermore, when considering behaviour within the framework of optimal decision theory using appropriate currencies, it is often misleading to describe animals as optimistic or pessimistic. There are two common misunderstandings. First, some apparent cases of biased behaviour result from misidentifying the currencies and pay-offs the animals should be maximising. Second, actions that do not maximise short-term pay-offs have sometimes been described as optimistic or pessimistic when in fact they are optimal in the long term; we show how such situations can be understood from the perspective of bandit models. Rather than describing suboptimal, unrealistic behaviour, the terms optimism and pessimism are better restricted to informal usage. Our review highlights the importance of choosing the relevant currency when attempting to predict the action of natural selection.
Optimal flow for brown trout: Habitat - prey optimization.
Fornaroli, Riccardo; Cabrini, Riccardo; Sartori, Laura; Marazzi, Francesca; Canobbio, Sergio; Mezzanotte, Valeria
2016-10-01
The correct definition of ecosystem needs is essential in order to guide policy and management strategies to optimize the increasing use of freshwater by human activities. Commonly, the assessment of the optimal or minimum flow rates needed to preserve ecosystem functionality has been done by habitat-based models that define a relationship between in-stream flow and habitat availability for various species of fish. We propose a new approach for the identification of optimal flows using the limiting factor approach and the evaluation of basic ecological relationships, considering the appropriate spatial scale for different organisms. We developed density-environment relationships for three different life stages of brown trout that show the limiting effects of hydromorphological variables at habitat scale. In our analyses, we found that the factors limiting the densities of trout were water velocity, substrate characteristics and refugia availability. For all the life stages, the selected models considered simultaneously two variables and implied that higher velocities provided a less suitable habitat, regardless of other physical characteristics and with different patterns. We used these relationships within habitat based models in order to select a range of flows that preserve most of the physical habitat for all the life stages. We also estimated the effect of varying discharge flows on macroinvertebrate biomass and used the obtained results to identify an optimal flow maximizing habitat and prey availability. PMID:27320735
Optimal singular control with applications to trajectory optimization
NASA Technical Reports Server (NTRS)
Vinh, N. X.
1979-01-01
The switching conditions are expressed explicitly in terms of the derivatives of the Hamiltonians at the two ends of the switching. A new expression of the Kelley-Contensou necessary condition for the optimality of a singular arc is given. Some examples illustrating the application of the theory are presented.
Thermodynamic Metrics and Optimal Paths
Sivak, David; Crooks, Gavin
2012-05-08
A fundamental problem in modern thermodynamics is how a molecular-scale machine performs useful work, while operating away from thermal equilibrium without excessive dissipation. To this end, we derive a friction tensor that induces a Riemannian manifold on the space of thermodynamic states. Within the linear-response regime, this metric structure controls the dissipation of finite-time transformations, and bestows optimal protocols with many useful properties. We discuss the connection to the existing thermodynamic length formalism, and demonstrate the utility of this metric by solving for optimal control parameter protocols in a simple nonequilibrium model.
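The central objects of this framework can be summarized compactly (notation: $\lambda$ are the control parameters, $X_i = -\partial H/\partial\lambda_i$ their conjugate forces, $\beta$ the inverse temperature). This is a hedged sketch consistent with the abstract, not a substitute for the paper's derivation:

```latex
% Linear-response dissipation governed by the friction tensor \zeta(\lambda):
\langle W_{\mathrm{ex}} \rangle
  = \int_0^{\tau} \dot{\lambda}^{T}\, \zeta(\lambda)\, \dot{\lambda}\, \mathrm{d}t ,
\qquad
\zeta_{ij}(\lambda)
  = \beta \int_0^{\infty}
    \langle \delta X_i(t')\, \delta X_j(0) \rangle_{\lambda}\, \mathrm{d}t' .

% Thermodynamic length and the bound saturated by optimal protocols,
% which move at constant speed in the metric \zeta:
\mathcal{L} = \int_0^{\tau}
  \sqrt{\dot{\lambda}^{T}\, \zeta(\lambda)\, \dot{\lambda}}\; \mathrm{d}t ,
\qquad
\langle W_{\mathrm{ex}} \rangle \ge \frac{\mathcal{L}^{2}}{\tau} .
```

The inequality follows from Cauchy-Schwarz, and equality holds for protocols traversing the manifold at constant thermodynamic speed, which is the "useful property" of optimal protocols referred to above.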
An optimal repartitioning decision policy
NASA Technical Reports Server (NTRS)
Nicol, D. M.; Reynolds, P. F., Jr.
1986-01-01
A central problem in parallel processing is the determination of an effective partitioning of workload to processors. The effectiveness of any given partition depends on the stochastic nature of the workload. The problem of determining when and if the stochastic behavior of the workload has changed enough to warrant the calculation of a new partition is treated. The problem is modeled as a Markov decision process, and an optimal decision policy is derived. Quantification of this policy is usually intractable. A heuristic policy which performs nearly optimally is investigated empirically. The results suggest that the detection of change is the predominant issue in this problem.
Design optimization of transonic airfoils
NASA Technical Reports Server (NTRS)
Joh, C.-Y.; Grossman, B.; Haftka, R. T.
1991-01-01
Numerical optimization procedures were considered for the design of airfoils in transonic flow based on the transonic small disturbance (TSD) and Euler equations. A sequential approximation optimization technique was implemented with an accurate approximation of the wave drag based on Nixon's coordinate straining approach. A modification of the Euler surface boundary conditions was implemented in order to efficiently compute design sensitivities without remeshing the grid. Two effective design procedures producing converged designs in approximately 10 global iterations were developed: one interchanging the roles of the objective function and constraint, and one performing direct lift maximization with move limits fixed as absolute values of the design variables.
Distributed optimization system and method
Hurtado, John E.; Dohrmann, Clark R.; Robinett, III, Rush D.
2003-06-10
A search system and method for controlling multiple agents to optimize an objective using distributed sensing and cooperative control. The search agents can be one or more physical agents, such as robots, or software agents for searching cyberspace. The objective can be: chemical sources, temperature sources, radiation sources, light sources, evaders, trespassers, explosive sources, time dependent sources, time independent sources, function surfaces, maximization points, minimization points, and optimal control of a system such as a communication system, an economy, a crane, and a multi-processor computer.
Configuration optimization of space structures
NASA Technical Reports Server (NTRS)
Felippa, Carlos; Crivelli, Luis A.; Vandenbelt, David
1991-01-01
The objective is to develop a computer aid for the conceptual/initial design of aerospace structures, allowing configurations and shape to be a priori design variables. The topics are presented in viewgraph form and include the following: Kikuchi's homogenization method; a classical shape design problem; homogenization method steps; a 3D mechanical component design example; forming a homogenized finite element; a 2D optimization problem; treatment of volume inequality constraint; algorithms for the volume inequality constraint; objective function derivatives--taking advantage of design locality; stiffness variations; variations of potential; and schematics of the optimization problem.
Adaptive critics for dynamic optimization.
Kulkarni, Raghavendra V; Venayagamoorthy, Ganesh Kumar
2010-06-01
A novel action-dependent adaptive critic design (ACD) is developed for dynamic optimization. The proposed combination of a particle swarm optimization-based actor and a neural network critic is demonstrated through dynamic sleep scheduling of wireless sensor motes for wildlife monitoring. The objective of the sleep scheduler is to dynamically adapt the sleep duration to node's battery capacity and movement pattern of animals in its environment in order to obtain snapshots of the animal on its trajectory uniformly. Simulation results show that the sleep time of the node determined by the actor critic yields superior quality of sensory data acquisition and enhanced node longevity. PMID:20223635
Computational optimization and biological evolution.
Goryanin, Igor
2010-10-01
Modelling and optimization principles have become key concepts in many biological areas, especially in biochemistry. Definitions of objective function, fitness and co-evolution, although they differ between biology and mathematics, are similar in a general sense. Although successful in fitting models to experimental data and in some biochemical predictions, optimization and evolutionary computations should be developed further to make more accurate real-life predictions, and to deal not only with one organism in isolation, but also with communities of symbiotic and competing organisms. One of the future goals will be to explain and predict evolution not only for organisms in shake flasks or fermenters, but for real competitive multispecies environments.
CHP Installed Capacity Optimizer Software
2004-11-30
The CHP Installed Capacity Optimizer is a Microsoft Excel spreadsheet application that determines the most economic amount of capacity of distributed generation and thermal utilization equipment (e.g., absorption chillers) to install for any user-defined set of load and cost data. Installing the optimum amount of capacity is critical to the life-cycle economic viability of a distributed generation/cooling heat and power (CHP) application. Using advanced optimization algorithms, the software accesses the loads, utility tariffs, equipment costs, etc., and provides to the user the most economic amount of system capacity to install.
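The underlying sizing decision can be illustrated with a hedged toy model: pick, from a discrete candidate list, the capacity with the best simple life-cycle net benefit. The cost and savings curves below are invented for illustration; the actual tool optimizes against hourly load and tariff data.

```python
import math

def best_capacity(capacities, capital_cost, annual_savings, years=10):
    """Pick the installed capacity (kW) maximizing a simple life-cycle net
    benefit: years * annual_savings(kW) - capital_cost(kW). Hypothetical
    stand-in for the spreadsheet's optimization over loads and tariffs."""
    def net_benefit(kw):
        return years * annual_savings(kw) - capital_cost(kw)
    return max(capacities, key=net_benefit)

# Toy curves: capital cost grows linearly with capacity, while savings
# saturate once capacity exceeds the site's usable load
caps = [100, 200, 300, 400, 500]
capital = lambda kw: 1200.0 * kw                              # $/kW installed
savings = lambda kw: 90_000.0 * (1 - math.exp(-kw / 250.0))   # $/year
best = best_capacity(caps, capital, savings)
```

The saturating-savings curve is what makes an interior optimum exist: beyond the site load, extra capacity adds capital cost without adding savings.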
Optimal Retirement with Increasing Longevity
Bloom, David E.; Canning, David; Moore, Michael
2014-01-01
We develop an optimizing life-cycle model of retirement with perfect capital markets. We show that longer healthy life expectancy usually leads to later retirement, but with an elasticity less than unity. We calibrate our model using data from the US and find that, over the last century, the effect of rising incomes, which promote early retirement, has dominated the effect of rising lifespans. Our model predicts continuing declines in the optimal retirement age, despite rising life expectancy, provided the rate of real wage growth remains as high as in the last century. PMID:24954970
Enhancing Polyhedral Relaxations for Global Optimization
ERIC Educational Resources Information Center
Bao, Xiaowei
2009-01-01
During the last decade, global optimization has attracted a lot of attention due to the increased practical need for obtaining global solutions and the success in solving many global optimization problems that were previously considered intractable. In general, the central question of global optimization is to find an optimal solution to a given…
Research on optimization-based design
NASA Astrophysics Data System (ADS)
Balling, R. J.; Parkinson, A. R.; Free, J. C.
1989-04-01
Research on optimization-based design is discussed. Illustrative examples are given for cases involving continuous optimization with discrete variables and optimization with tolerances. Approximation of computationally expensive and noisy functions, electromechanical actuator/control system design using decomposition and application of knowledge-based systems and optimization for the design of a valve anti-cavitation device are among the topics covered.
Modular optimization code package: MOZAIK
NASA Astrophysics Data System (ADS)
Bekar, Kursat B.
This dissertation addresses the development of a modular optimization code package, MOZAIK, for geometric shape optimization problems in nuclear engineering applications. MOZAIK's first mission, determining the optimal shape of the D2O moderator tank for the current and new beam tube configurations for the Penn State Breazeale Reactor's (PSBR) beam port facility, is used to demonstrate its capabilities and test its performance. MOZAIK was designed as a modular optimization sequence including three primary independent modules: the initializer, the physics and the optimizer, each having a specific task. By using fixed interface blocks among the modules, the code attains its two most important characteristics: generic form and modularity. The benefit of this modular structure is that the contents of the modules can be switched depending on the requirements of accuracy, computational efficiency, or compatibility with the other modules. Oak Ridge National Laboratory's discrete ordinates transport code TORT was selected as the transport solver in the physics module of MOZAIK, and two different optimizers, Min-max and Genetic Algorithms (GA), were implemented in the optimizer module of the code package. A distributed memory parallelism was also applied to MOZAIK via MPI (Message Passing Interface) to execute the physics module concurrently on a number of processors for various states in the same search. Moreover, dynamic scheduling was enabled to enhance load balance among the processors while running MOZAIK's physics module thus improving the parallel speedup and efficiency. In this way, the total computation time consumed by the physics module is reduced by a factor close to M, where M is the number of processors. This capability also encourages the use of MOZAIK for shape optimization problems in nuclear applications because many traditional codes related to radiation transport do not have parallel execution capability. A set of computational models based on the
Shape Optimization of Swimming Sheets
Wilkening, J.; Hosoi, A.E.
2005-03-01
The swimming behavior of a flexible sheet which moves by propagating deformation waves along its body was first studied by G. I. Taylor in 1951. In addition to being of theoretical interest, this problem serves as a useful model of the locomotion of gastropods and various micro-organisms. Although the mechanics of swimming via wave propagation has been studied extensively, relatively little work has been done to define or describe optimal swimming by this mechanism. We pursue this objective for a sheet that is separated from a rigid substrate by a thin film of viscous Newtonian fluid. Using a lubrication approximation to model the dynamics, we derive the relevant Euler-Lagrange equations to optimize swimming speed and efficiency. The optimization equations are solved numerically using two different schemes: a limited memory BFGS method that uses cubic splines to represent the wave profile, and a multi-shooting Runge-Kutta approach that uses the Levenberg-Marquardt method to vary the parameters of the equations until the constraints are satisfied. The former approach is less efficient but generalizes nicely to the non-lubrication setting. For each optimization problem we obtain a one parameter family of solutions that becomes singular in a self-similar fashion as the parameter approaches a critical value. We explore the validity of the lubrication approximation near this singular limit by monitoring higher order corrections to the zeroth order theory and by comparing the results with finite element solutions of the full Stokes equations.
Optimal timing in biological processes
Williams, B.K.; Nichols, J.D.
1984-01-01
A general approach for obtaining solutions to a class of biological optimization problems is provided. The general problem is one of determining the appropriate time to take some action, when the action can be taken only once during some finite time frame. The approach can also be extended to cover a number of other problems involving animal choice (e.g., mate selection, habitat selection). Returns (assumed to index fitness) are treated as random variables with time-specific distributions, and can be either observable or unobservable at the time action is taken. In the case of unobservable returns, the organism is assumed to base decisions on some ancillary variable that is associated with returns. Optimal policies are derived for both situations and their properties are discussed. Various extensions are also considered, including objective functions based on functions of returns other than the mean; nonmonotonic relationships between the observable variable and returns; possible death of the organism before action is taken; and discounting of future returns. A general feature of the optimal solutions for many of these problems is that an organism should be very selective (i.e., should act only when returns or expected returns are relatively high) at the beginning of the time frame and should become less and less selective as time progresses. An example of the application of optimal timing to a problem involving the timing of bird migration is discussed, and a number of other examples for which the approach is applicable are described.
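The backward-induction logic behind the decreasing-selectivity result can be sketched for the simplest case of observable, i.i.d. returns. This toy model (a discrete return distribution and one mandatory action) is an assumption for illustration, not the paper's general formulation:

```python
def stopping_thresholds(outcomes, probs, n_periods):
    """Backward induction for the one-shot timing problem: at each period the
    organism observes a return drawn from a known distribution and may act at
    most once; it must act by the final period. It should act whenever the
    observed return beats the expected value of waiting. Returns the value of
    waiting at each period, which serves as the acceptance threshold."""
    mean = sum(o * p for o, p in zip(outcomes, probs))
    v_next = mean                       # at the last period, act regardless
    thresholds = [v_next]
    for _ in range(n_periods - 1):
        # value of still holding the option one period earlier: E[max(X, v_next)]
        v_next = sum(max(o, v_next) * p for o, p in zip(outcomes, probs))
        thresholds.append(v_next)
    thresholds.reverse()                # thresholds[t] = value of waiting at period t
    return thresholds

# Uniform returns over {1, 2, 3, 4} and four opportunities to act:
th = stopping_thresholds([1, 2, 3, 4], [0.25] * 4, n_periods=4)
# th = [3.4375, 3.25, 3.0, 2.5]: the acceptance bar falls over time
```

The monotonically decreasing thresholds reproduce the qualitative result stated above: be very selective early, and progressively less selective as the deadline approaches.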