Illumination system design with multi-step optimization
NASA Astrophysics Data System (ADS)
Magarill, Simon; Cassarly, William J.
2015-08-01
Automatic optimization algorithms can be used when designing illumination systems. For systems with many design variables, optimization using an adjustable set of variables at different steps of the process can provide different local minima. We present a few examples of implementing a multi-step optimization method. We have found that this approach can sometimes lead to more efficient solutions. In this paper we illustrate the effectiveness of using a commercially available optimization algorithm with a slightly modified procedure.
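The commercial optimizer used by the authors is not specified in code form, but the multi-step idea of optimizing with an adjustable set of variables at each step can be sketched generically. The following is a minimal illustration (the quadratic test function, the step schedule, and the golden-section helper are assumptions for illustration, not the authors' illumination merit function):

```python
def line_search_1d(f, x, i, lo=-10.0, hi=10.0, iters=60):
    """Golden-section search on coordinate i, all other coordinates frozen."""
    phi = (5 ** 0.5 - 1) / 2
    a, b = lo, hi
    for _ in range(iters):
        c, d = b - phi * (b - a), a + phi * (b - a)
        if f(x[:i] + [c] + x[i + 1:]) < f(x[:i] + [d] + x[i + 1:]):
            b = d
        else:
            a = c
    return (a + b) / 2

def multi_step_optimize(f, x0, steps):
    """Each step frees only a subset of variable indices and optimizes them
    one at a time; different subsets at different steps can reach different
    local minima than optimizing all variables at once."""
    x = list(x0)
    for free_vars in steps:
        for i in free_vars:
            x[i] = line_search_1d(f, x, i)
    return x
```

For example, a three-step schedule might first tune variables 0 and 1, then variable 2, then revisit all three.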
Multi-step optimization strategy for fuel-optimal orbital transfer of low-thrust spacecraft
NASA Astrophysics Data System (ADS)
Rasotto, M.; Armellin, R.; Di Lizia, P.
2016-03-01
An effective method for the design of fuel-optimal transfers in two- and three-body dynamics is presented. The optimal control problem is formulated using the calculus of variations and primer vector theory. This leads to a multi-point boundary value problem (MPBVP), characterized by complex inner constraints and a discontinuous thrust profile. The first issue is addressed by embedding the MPBVP in a parametric optimization problem, thus allowing a simplification of the set of transversality constraints. The second problem is solved by representing the discontinuous control function by a smooth function depending on a continuation parameter. The resulting trajectory optimization method can deal with different intermediate conditions, and no a priori knowledge of the control structure is required. Test cases in both two- and three-body dynamics show the capability of the method in solving complex trajectory design problems.
Efficient modularity optimization by multistep greedy algorithm and vertex mover refinement.
Schuetz, Philipp; Caflisch, Amedeo
2008-04-01
Identifying strongly connected substructures in large networks provides insight into their coarse-grained organization. Several approaches based on the optimization of a quality function, e.g., the modularity, have been proposed. We present here a multistep extension of the greedy algorithm (MSG) that allows the merging of more than one pair of communities at each iteration step. The essential idea is to prevent the premature condensation into few large communities. Upon convergence of the MSG a simple refinement procedure called "vertex mover" (VM) is used for reassigning vertices to neighboring communities to improve the final modularity value. With an appropriate choice of the step width, the combined MSG-VM algorithm is able to find solutions of higher modularity than those reported previously. The multistep extension does not alter the scaling of computational cost of the greedy algorithm. PMID:18517695
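The multistep greedy (MSG) idea can be sketched minimally as follows: instead of merging a single best community pair per iteration, up to `step_width` disjoint positive-gain pairs are merged at once, which helps prevent premature condensation into a few large communities. This sketch recomputes modularity naively and omits the vertex-mover refinement, so it illustrates the principle rather than the authors' optimized implementation:

```python
from itertools import combinations

def modularity(partition, edges, m, deg):
    """Q = sum over communities of (intra_edges/m - (deg_sum/(2m))^2)."""
    q = 0.0
    for comm in partition:
        intra = sum(1 for u, v in edges if u in comm and v in comm)
        dsum = sum(deg[u] for u in comm)
        q += intra / m - (dsum / (2 * m)) ** 2
    return q

def msg_greedy(nodes, edges, step_width=2):
    """Multistep greedy: merge up to `step_width` disjoint community pairs
    (positive modularity gain, joined by at least one edge) per iteration."""
    m = len(edges)
    deg = {n: 0 for n in nodes}
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    parts = [frozenset([n]) for n in nodes]
    while True:
        q0 = modularity(parts, edges, m, deg)
        gains = []
        for a, b in combinations(parts, 2):
            if not any((u in a and v in b) or (u in b and v in a)
                       for u, v in edges):
                continue  # only consider communities joined by an edge
            trial = [c for c in parts if c is not a and c is not b] + [a | b]
            gains.append((modularity(trial, edges, m, deg) - q0, a, b))
        gains.sort(key=lambda t: -t[0])
        merged, used = [], set()
        for g, a, b in gains:
            if g <= 1e-12 or len(merged) == step_width:
                break
            if a in used or b in used:
                continue  # keep the merged pairs disjoint within one step
            merged.append((a, b))
            used.update((a, b))
        if not merged:
            return parts, q0
        for a, b in merged:
            parts = [c for c in parts if c is not a and c is not b] + [a | b]
```

On a graph of two triangles joined by a single bridge edge, the algorithm recovers the two triangles as communities.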
Faries, Kaitlyn M; Kressel, Lucas L; Dylla, Nicholas P; Wander, Marc J; Hanson, Deborah K; Holten, Dewey; Laible, Philip D; Kirmaier, Christine
2016-02-01
Using high-throughput methods for mutagenesis, protein isolation and charge-separation functionality, we have assayed 40 Rhodobacter capsulatus reaction center (RC) mutants for their P(+)QB(-) yield (P is a dimer of bacteriochlorophylls and Q is a ubiquinone) as produced using the normally inactive B-side cofactors BB and HB (where B is a bacteriochlorophyll and H is a bacteriopheophytin). Two sets of mutants explore all possible residues at M131 (M polypeptide, native residue Val near HB) in tandem with either a fixed His or a fixed Asn at L181 (L polypeptide, native residue Phe near BB). A third set of mutants explores all possible residues at L181 with a fixed Glu at M131 that can form a hydrogen bond to HB. For each set of mutants, the results of a rapid millisecond screening assay that probes the yield of P(+)QB(-) are compared among that set and to the other mutants reported here or previously. For a subset of eight mutants, the rate constants and yields of the individual B-side electron transfer processes are determined via transient absorption measurements spanning 100 fs to 50 μs. The resulting ranking of mutants for their yield of P(+)QB(-) from ultrafast experiments is in good agreement with that obtained from the millisecond screening assay, further validating the efficient, high-throughput screen for B-side transmembrane charge separation. Results from mutants that individually show progress toward optimization of P(+)HB(-)→P(+)QB(-) electron transfer or initial P*→P(+)HB(-) conversion highlight unmet challenges of optimizing both processes simultaneously. PMID:26658355
A Guiding Evolutionary Algorithm with Greedy Strategy for Global Optimization Problems
Cao, Leilei; Xu, Lihong; Goodman, Erik D.
2016-01-01
A Guiding Evolutionary Algorithm (GEA) with greedy strategy for global optimization problems is proposed. Inspired by Particle Swarm Optimization, the Genetic Algorithm, and the Bat Algorithm, the GEA was designed to retain some advantages of each method while avoiding some disadvantages. In contrast to the usual Genetic Algorithm, each individual in GEA is crossed with the current global best one instead of a randomly selected individual. The current best individual served as a guide to attract offspring to its region of genotype space. Mutation was added to offspring according to a dynamic mutation probability. To increase the capability of exploitation, a local search mechanism was applied to new individuals according to a dynamic probability of local search. Experimental results show that GEA outperformed the other three typical global optimization algorithms with which it was compared. PMID:27293421
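The GEA components named above (crossover with the current global best, a dynamic mutation probability, and a probabilistic local search) can be sketched on a toy objective. The parameter values and probability schedules below are illustrative assumptions, not the paper's settings:

```python
import random

def sphere(x):
    """Toy objective: minimized at the origin."""
    return sum(xi * xi for xi in x)

def gea_minimize(f, dim=5, pop_size=20, iters=200, bounds=(-5.0, 5.0), seed=1):
    rng = random.Random(seed)
    lo, hi = bounds
    pop = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(pop_size)]
    best = min(pop, key=f)
    for t in range(iters):
        p_mut = 0.5 * (1 - t / iters)  # dynamic mutation probability (decays)
        p_local = t / iters            # local search grows more likely
        new_pop = []
        for ind in pop:
            # crossover with the current global best, not a random mate
            a = rng.random()
            child = [a * b + (1 - a) * x for b, x in zip(best, ind)]
            # dynamic mutation
            child = [x + rng.gauss(0, 0.5) if rng.random() < p_mut else x
                     for x in child]
            # occasional local search: small perturbation, kept only if better
            if rng.random() < p_local:
                trial = [x + rng.gauss(0, 0.05) for x in child]
                if f(trial) < f(child):
                    child = trial
            # greedy survivor selection
            new_pop.append(child if f(child) < f(ind) else ind)
        pop = new_pop
        best = min(pop + [best], key=f)
    return best, f(best)
```

With the guide pulling offspring toward the incumbent best, the population contracts quickly around the optimum on this smooth unimodal function.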
Pugatch, Rami
2015-01-01
Bacterial self-replication is a complex process composed of many de novo synthesis steps catalyzed by a myriad of molecular processing units, e.g., the transcription–translation machinery, metabolic enzymes, and the replisome. Successful completion of all production tasks requires a schedule—a temporal assignment of each of the production tasks to its respective processing units that respects ordering and resource constraints. Most intracellular growth processes are well characterized. However, the manner in which they are coordinated under the control of a scheduling policy is not well understood. When fast replication is favored, a schedule that minimizes the completion time is desirable. However, if resources are scarce, it is typically computationally hard to find such a schedule, in the worst case. Here, we show that optimal scheduling naturally emerges in cellular self-replication. Optimal doubling time is obtained by maintaining a sufficiently large inventory of intermediate metabolites and processing units required for self-replication and additionally requiring that these processing units be “greedy,” i.e., not idle if they can perform a production task. We calculate the distribution of doubling times of such optimally scheduled self-replicating factories, and find it has a universal form—log-Frechet, not sensitive to many microscopic details. Analyzing two recent datasets of Escherichia coli growing in a stationary medium, we find excellent agreement between the observed doubling-time distribution and the predicted universal distribution, suggesting E. coli is optimally scheduling its replication. Greedy scheduling appears as a simple generic route to optimal scheduling when speed is the optimization criterion. Other criteria such as efficiency require more elaborate scheduling policies and tighter regulation. PMID:25675498
Arabi Jeshvaghani, R.; Zohdi, H.; Shahverdi, H.R.; Bozorg, M.; Hadavi, S.M.M.
2012-11-15
A multi-step heat treatment, comprising high-temperature forming (150 °C for 24 h plus 190 °C for several minutes) and subsequent low-temperature forming (120 °C for 24 h), is developed in creep age forming of 7075 aluminum alloy to decrease springback and exfoliation corrosion susceptibility without a reduction in tensile properties. The results show that the multi-step heat treatment gives low springback and the best combination of exfoliation corrosion resistance and tensile strength. The lower springback is attributed to dislocation recovery and greater stress relaxation at the higher temperature. Transmission electron microscopy observations show that corrosion resistance is improved due to the enlargement of the size and inter-particle distance of the grain-boundary precipitates. Furthermore, the achievement of high strength is related to the uniform distribution of ultrafine η′ precipitates within grains. - Highlights: • Creep age forming developed for manufacturing of aircraft wing panels from aluminum alloy. • A good combination of properties with minimal springback is required in this component. • This requirement can be improved through appropriate heat treatments. • Multi-step cycles developed in creep age forming of AA7075 to improve springback and properties. • Results indicate simultaneous enhancement of the properties and shape accuracy (lower springback).
NASA Astrophysics Data System (ADS)
Hanin, Leonid; Zaider, Marco
2014-08-01
We revisit a long-standing problem of optimization of fractionated radiotherapy and solve it in considerable generality under the following three assumptions only: (1) repopulation of clonogenic cancer cells between radiation exposures follows a linear birth-and-death Markov process; (2) clonogenic cancer cells do not interact with each other; and (3) the dose response function s(D) is decreasing and logarithmically concave. Optimal schedules of fractionated radiation identified in this work can be described by the following ‘greedy’ principle: give the maximum possible dose as soon as possible. This means that upper bounds on the total dose and the dose per fraction reflecting limitations on the damage to normal tissue, along with a lower bound on the time between successive fractions of radiation, determine the optimal radiation schedules completely. Results of this work lead to a new paradigm of dose delivery which we term optimal biologically-based adaptive boosting (OBBAB). It amounts to (a) subdividing the target into regions that are homogeneous with respect to the maximum total dose and maximum dose per fraction allowed by the anatomy and biological properties of the normal tissue within (or adjacent to) the region in question and (b) treating each region with an individual optimal schedule determined by these constraints. The fact that different regions may be treated to different total dose and dose per fraction means that the number of fractions may also vary between regions. Numerical evidence suggests that OBBAB produces significantly larger tumor control probability than the corresponding conventional treatments.
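Setting aside the stochastic repopulation model, the ‘greedy’ principle itself reduces to a simple schedule generator: deliver full-size fractions at the minimum allowed spacing until the total-dose bound is met, with any remainder in the final fraction. A sketch under these simplifying assumptions (the constraint values below are arbitrary examples):

```python
def greedy_schedule(total_dose, max_fraction_dose, min_interval):
    """'Give the maximum possible dose as soon as possible':
    maximal fractions spaced by the minimum allowed interval, with any
    remainder delivered in the final fraction. Returns (time, dose) pairs."""
    schedule, delivered, t = [], 0.0, 0.0
    while delivered < total_dose - 1e-12:
        d = min(max_fraction_dose, total_dose - delivered)
        schedule.append((t, d))
        delivered += d
        t += min_interval
    return schedule
```

For example, a 70 Gy total with an 8 Gy per-fraction cap and one-day spacing yields eight full fractions plus a final 6 Gy fraction.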
Chaswal, V.; Yoo, S.; Thomadsen, B. R.; Henderson, D. L.
2007-02-15
The goals of interstitial implant brachytherapy include delivery of the target dose in a uniform manner while sparing sensitive structures, and minimizing the number of needles and sources. We investigated the use of a multi-species source arrangement ({sup 192}Ir with {sup 125}I) for treatment in interstitial prostate brachytherapy. The algorithm utilizes an 'adjoint ratio', which provides a means of ranking source positions and is the criterion for the Greedy Heuristic optimization. Three cases were compared, each using 0.4 mCi {sup 125}I seeds: case I is the base case using {sup 125}I alone, case II uses 0.12 mCi {sup 192}Ir seeds mixed with {sup 125}I, and case III uses 0.25 mCi {sup 192}Ir mixed with {sup 125}I. Both multi-species cases result in lower exposure of the urethra and central prostate region. Compared with the base case, the exposure to the rectum and normal tissue increases by a significant amount for case III as compared with the increase in case II, signifying the effect of slower dose falloff rate of higher energy gammas of {sup 192}Ir in the tissue. The number of seeds and needles decreases in both multi-species cases, with case III requiring fewer seeds and needles than case II. Further, the effect of {sup 192}Ir on uniformity was investigated using the 0.12 mCi {sup 192}Ir seeds in multi-species implants. An increase in uniformity was observed with an increase in the number of 0.12 mCi {sup 192}Ir seeds implanted. The effects of prostate size on the evaluation parameters for multi-species implants were investigated using 0.12 mCi {sup 192}Ir and 0.4 mCi {sup 125}I, and an acceptable treatment plan with increased uniformity was obtained.
Coutu, Diane L
2003-02-01
Americans are outraged at the greediness of Wall Street analysts, dot-com entrepreneurs, and, most of all, chief executive officers. How could Tyco's Dennis Kozlowski use company funds to throw his wife a million-dollar birthday bash on an Italian island? How could Enron's Ken Lay sell thousands of shares of his company's once high-flying stock just before it crashed, leaving employees with nothing? Even America's most popular domestic guru, Martha Stewart, is suspected of having her hand in the cookie jar. To some extent, our outrage may be justified, writes HBR senior editor Diane Coutu. And yet, it's easy to forget that just a couple years ago these same people were lauded as heroes. Many Americans wanted nothing more, in fact, than to emulate them, to share in their fortunes. Indeed, we spent an enormous amount of time talking and thinking about double-digit returns, IPOs, day trading, and stock options. It could easily be argued that it was public indulgence in corporate money lust that largely created the mess we're now in. It's time to take a hard look at greed, both in its general form and in its peculiarly American incarnation, says Coutu. If Federal Reserve Board chairman Alan Greenspan was correct in telling Congress that "infectious greed" contaminated U.S. business, then we need to try to understand its causes--and how the average American may have contributed to it. Why did so many of us fall prey to greed? With a deep, almost reflexive trust in the free market, are Americans somehow greedier than other peoples? And as we look at the wreckage from the 1990s, can we be sure it won't happen again? PMID:12577651
Synthesis of Greedy Algorithms Using Dominance Relations
NASA Technical Reports Server (NTRS)
Nedunuri, Srinivas; Smith, Douglas R.; Cook, William R.
2010-01-01
Greedy algorithms exploit problem structure and constraints to achieve linear-time performance. Yet there is still no completely satisfactory way of constructing greedy algorithms. For example, the Greedy Algorithm of Edmonds depends upon translating a problem into an algebraic structure called a matroid, but the existence of such a translation can be as hard to determine as the existence of a greedy algorithm itself. An alternative characterization of greedy algorithms is in terms of dominance relations, a well-known algorithmic technique used to prune search spaces. We demonstrate a process by which dominance relations can be methodically derived for a number of greedy algorithms, including activity selection, and prefix-free codes. By incorporating our approach into an existing framework for algorithm synthesis, we demonstrate that it could be the basis for an effective engineering method for greedy algorithms. We also compare our approach with other characterizations of greedy algorithms.
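Activity selection, one of the problems mentioned above, admits the classic earliest-finish-time greedy. The sketch below shows that standard algorithm, not the dominance-relation synthesis framework itself:

```python
def select_activities(intervals):
    """Earliest-finish-time greedy for activity selection: sort by end time,
    then take each activity compatible with the last one chosen."""
    chosen, last_end = [], float("-inf")
    for start, end in sorted(intervals, key=lambda iv: iv[1]):
        if start >= last_end:
            chosen.append((start, end))
            last_end = end
    return chosen
```

The greedy choice is justified by a dominance argument: an activity finishing earliest cannot be worse than any alternative, since whatever follows the alternative also fits after it.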
Multi-step avalanche chambers for final experiment E605
NASA Astrophysics Data System (ADS)
Hubbard, J. R.; Coutrakon, G.; Cribier, M.; Mangeot, Ph.; Martin, H.; Mullié, J.; Palanque, S.; Pelle, J.
1980-10-01
Physical processes in multi-step avalanche chambers, detector properties, and difficulties in operation are discussed. Advantages of multi-step chambers over classical MWPCs for specific experimental problems encountered in experiment E605 (high-flux environment and Cherenkov imaging) are described. Some details of detector design are presented.
Multi-step wrought processing of TiAl-based alloys
Fuchs, G.E.
1997-04-01
Wrought processing will likely be needed for fabrication of a variety of TiAl-based alloy structural components. Laboratory and development work has usually relied on one-step forging to produce test material. Attempts to scale-up TiAl-based alloy processing has indicated that multi-step wrought processing is necessary. The purpose of this study was to examine potential multi-step processing routes, such as two-step isothermal forging and extrusion + isothermal forging. The effects of processing (I/M versus P/M), intermediate recrystallization heat treatments and processing route on the tensile and creep properties of Ti-48Al-2Nb-2Cr alloys were examined. The results of the testing were then compared to samples from the same heats of materials processed by one-step routes. Finally, by evaluating the effect of processing on microstructure and properties, optimized and potentially lower cost processing routes could be identified.
Multi-Step Ahead Predictions for Critical Levels in Physiological Time Series.
ElMoaqet, Hisham; Tilbury, Dawn M; Ramachandran, Satya Krishna
2016-07-01
Standard modeling and evaluation methods have been classically used in analyzing engineering dynamical systems where the fundamental problem is to minimize the (mean) error between the real and predicted systems. Although these methods have been applied to multi-step ahead predictions of physiological signals, it is often more important to predict clinically relevant events than just to match these signals. Adverse clinical events, which occur after a physiological signal breaches a clinically defined critical threshold, are a popular class of such events. This paper presents a framework for multi-step ahead predictions of critical levels of abnormality in physiological signals. First, a performance metric is presented for evaluating multi-step ahead predictions. Then, this metric is used to identify personalized models optimized with respect to predictions of critical levels of abnormality. To address the paucity of adverse events, weighted support vector machines and cost-sensitive learning are used to optimize the proposed framework with respect to statistical metrics that can take into account the relative rarity of such events. PMID:27244754
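The shift from pointwise error to event prediction can be illustrated with a simple confusion-matrix scorer over threshold breaches. This toy metric is an illustration only; it does not reproduce the paper's weighted-SVM framework or its specific performance metric:

```python
def critical_event_metrics(y_true, y_pred, threshold):
    """Score predictions by whether they flag threshold breaches (events),
    not by how closely they match the signal values."""
    tp = fp = fn = tn = 0
    for yt, yp in zip(y_true, y_pred):
        actual, predicted = yt >= threshold, yp >= threshold
        if actual and predicted:
            tp += 1
        elif predicted:
            fp += 1
        elif actual:
            fn += 1
        else:
            tn += 1
    sens = tp / (tp + fn) if tp + fn else 0.0
    spec = tn / (tn + fp) if tn + fp else 0.0
    return {"sensitivity": sens, "specificity": spec}
```

Because adverse events are rare, sensitivity and specificity (rather than mean error) expose whether a model trades missed events for overall fit.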
DEFORMATION DEPENDENT TUL MULTI-STEP DIRECT MODEL
WIENKE,H.; CAPOTE, R.; HERMAN, M.; SIN, M.
2007-04-22
The Multi-Step Direct (MSD) module TRISTAN in the nuclear reaction code EMPIRE has been extended in order to account for nuclear deformation. The new formalism was tested in calculations of neutron emission spectra from the ²³²Th(n,xn) reaction. These calculations include vibration-rotational Coupled Channels (CC) for the inelastic scattering to low-lying collective levels, "deformed" MSD with quadrupole deformation for inelastic scattering to the continuum, Multi-Step Compound (MSC) and Hauser-Feshbach with advanced treatment of the fission channel. Prompt fission neutrons were also calculated. The comparison with experimental data shows clear improvement over the "spherical" MSD calculations and the JEFF-3.1 and JENDL-3.3 evaluations.
Hypersonic flow over a multi-step afterbody
NASA Astrophysics Data System (ADS)
Menezes, V.; Kumar, S.; Maruta, K.; Reddy, K. P. J.; Takayama, K.
2005-12-01
Effect of a multi-step base on the total drag of a missile shaped body was studied in a shock tunnel at a hypersonic Mach number of 5.75. Total drag over the body was measured using a single component accelerometer force balance. Experimental results indicated a reduction of 8% in total drag over the body with a multi-step base in comparison with the base-line (model with a flat base) configuration. The flow fields around the above bodies were simulated using a 2-D axisymmetric Navier Stokes solver and the simulated results on total drag were compared with the measured results. The simulated flow field pictures give an insight into the involved flow physics.
The convergence of the greedy algorithm with respect to the Haar system in the space L_p(0,1)
Livshits, Evgenii D
2010-02-28
The approximation properties of the X-greedy algorithm in the space L_p(0,1) are studied. For 1 < p < 2, estimates for the rate of convergence of the X-greedy algorithm with respect to the Haar system are obtained that are close to optimal. Bibliography: 18 titles.
Hierarchical multi-step organization during viral capsid assembly.
Lampel, Ayala; Varenik, Maxim; Regev, Oren; Gazit, Ehud
2015-12-01
Formation of the HIV-1 core by the association of capsid proteins is a critical, not fully understood, step in the viral life cycle. Understanding the early stages of the mechanism may improve treatment opportunities. Here, spectroscopic analysis (opacity) is used to follow the kinetics of capsid protein assembly, which shows three stages: a lag phase, followed by a linear increase stage and terminated by a plateau. Adding pre-incubated capsid proteins at the start of the lag phase shortens it and increases the rate of assembly at the linear stage, demonstrating autoacceleration and cooperative assembly. Cryogenic transmission electron microscopy is used to probe structural evolution at these three stages. At the beginning of the lag phase, short tubular assemblies are found alongside micron long tubes. Their elongation continues all throughout the lag phase, at the end of which tubes start to assemble into bundles. Based on these results, we suggest a multi-step self-assembly process including fast nucleation and elongation followed by tubes packing into arrays. PMID:26497114
Collecting reliable clades using the Greedy Strict Consensus Merger
Böcker, Sebastian
2016-01-01
Supertree methods combine a set of phylogenetic trees into a single supertree. Similar to supermatrix methods, these methods provide a way to reconstruct larger parts of the Tree of Life, potentially evading the computational complexity of phylogenetic inference methods such as maximum likelihood. The supertree problem can be formalized in different ways, to cope with contradictory information in the input. Many supertree methods have been developed. Some of them solve NP-hard optimization problems like the well-known Matrix Representation with Parsimony, while others have polynomial worst-case running time but work in a greedy fashion (FlipCut). Both can profit from a set of clades that are already known to be part of the supertree. The Superfine approach shows how the Greedy Strict Consensus Merger (GSCM) can be used as preprocessing to find these clades. We introduce different scoring functions for the GSCM, a randomization, as well as a combination thereof to improve the GSCM to find more clades. This helps, in turn, to improve the resolution of the GSCM supertree. We find these modifications increase the number of true positive clades by 18% compared to the currently used Overlap scoring. PMID:27375971
P.I. Steven M. Larson MD Co P.I. Nai-Kong Cheung MD, Ph.D.
2009-09-21
The 4 specific aims of this project are: (1) Optimization of MST to increase tumor uptake; (2) Antigen heterogeneity; (3) Characterization and reduction of renal uptake; and (4) Validation in vivo of optimized MST targeted therapy. This proposal focused on optimizing multistep immune targeting strategies for the treatment of cancer. Two multi-step targeting constructs were explored during this funding period: (1) anti-Tag-72 and (2) anti-GD2.
Efficient greedy algorithms for economic manpower shift planning
NASA Astrophysics Data System (ADS)
Nearchou, A. C.; Giannikos, I. C.; Lagodimos, A. G.
2015-01-01
Consideration is given to the economic manpower shift planning (EMSP) problem, an NP-hard capacity planning problem appearing in various industrial settings including the packing stage of production in process industries and maintenance operations. EMSP aims to determine the manpower needed in each available workday shift of a given planning horizon so as to complete a set of independent jobs at minimum cost. Three greedy heuristics are presented for the EMSP solution. These practically constitute adaptations of an existing algorithm for a simplified version of EMSP which had shown excellent performance in terms of solution quality and speed. Experimentation shows that the new algorithms perform very well in comparison to the results obtained by both the CPLEX optimizer and an existing metaheuristic. Statistical analysis is deployed to rank the algorithms in terms of their solution quality and to identify the effects that critical planning factors may have on their relative efficiency.
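As a rough illustration of a greedy heuristic for shift planning, the sketch below covers a workload by filling shifts in order of cost per man-hour. The model (per-shift worker cost, hours per worker, and worker cap) is a simplification invented for illustration, not the EMSP formulation or the heuristics of the paper:

```python
def plan_shifts(workload_hours, shifts):
    """Cheapest-rate-first greedy. `shifts` maps shift name ->
    (cost_per_worker, hours_per_worker, max_workers). Returns the number
    of workers per shift and any uncovered hours."""
    plan, remaining = {}, workload_hours
    order = sorted(shifts, key=lambda s: shifts[s][0] / shifts[s][1])
    for name in order:
        cost, hours, cap = shifts[name]
        if remaining <= 0:
            break
        workers = min(cap, -(-remaining // hours))  # ceiling division
        plan[name] = workers
        remaining -= workers * hours
    return plan, max(remaining, 0)
```

With 100 hours of work, a cheap day shift fills first and a pricier night shift covers the rest.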
Mitran, T L; Melchert, O; Hartmann, A K
2013-12-01
The main characteristics of biased greedy random walks (BGRWs) on two-dimensional lattices with real-valued quenched disorder on the lattice edges are studied. Here the disorder allows for negative edge weights. In previous studies, considering the negative-weight percolation (NWP) problem, this was shown to change the universality class of the existing, static percolation transition. In the presented study, four different types of BGRWs and an algorithm based on the ant colony optimization heuristic were considered. Regarding the BGRWs, the precise configurations of the lattice walks constructed during the numerical simulations were influenced by two parameters: a disorder parameter ρ that controls the amount of negative edge weights on the lattice and a bias strength B that governs the drift of the walkers along a certain lattice direction. The random walks are "greedy" in the sense that the local optimal choice of the walker is to preferentially traverse edges with a negative weight (associated with a net gain of "energy" for the walker). Here, the pivotal observable is the probability that, after termination, a lattice walk exhibits a total negative weight, which is here considered as percolating. The behavior of this observable as a function of ρ for different bias strengths B is put under scrutiny. Upon tuning ρ, the probability to find such a feasible lattice walk increases from zero to 1. This is the key feature of the percolation transition in the NWP model. Here, we address the question of how well the transition point ρ(c), resulting from numerically exact and "static" simulations in terms of the NWP model, can be resolved using simple dynamic algorithms that have only local information available, one of the basic questions in the physics of glassy systems. PMID:24483380
An Experimental Method for the Active Learning of Greedy Algorithms
ERIC Educational Resources Information Center
Velazquez-Iturbide, J. Angel
2013-01-01
Greedy algorithms constitute an apparently simple algorithm design technique, but their learning goals are not simple to achieve. We present a didactic method aimed at promoting active learning of greedy algorithms. The method is focused on the concept of the selection function, and is based on explicit learning goals. It mainly consists of an…
Li, Zhengbang; Zhang, Wei; Pan, Dongdong; Li, Qizhai
2016-01-01
Principal component analysis (PCA) is a useful tool to identify important linear combination of correlated variables in multivariate analysis and has been applied to detect association between genetic variants and human complex diseases of interest. How to choose adequate number of principal components (PCs) to represent the original system in an optimal way is a key issue for PCA. Note that the traditional PCA, only using a few top PCs while discarding the other PCs, might significantly lose power in genetic association studies if all the PCs contain non-ignorable signals. In order to make full use of information from all PCs, Aschard and his colleagues have proposed a multi-step combined PCs method (named mCPC) recently, which performs well especially when several traits are highly correlated. However, the power superiority of mCPC has just been illustrated by simulation, while the theoretical power performance of mCPC has not been studied yet. In this work, we attempt to investigate theoretical properties of mCPC and further propose a novel and efficient strategy to combine PCs. Extensive simulation results confirm that the proposed method is more robust than existing procedures. A real data application to detect the association between gene TRAF1-C5 and rheumatoid arthritis further shows good performance of the proposed procedure. PMID:27189724
Diffusive behavior of a greedy traveling salesman
NASA Astrophysics Data System (ADS)
Lipowski, Adam; Lipowska, Dorota
2011-06-01
Using Monte Carlo simulations we examine the diffusive properties of the greedy algorithm in the d-dimensional traveling salesman problem. Our results show that for d=3 and 4 the average squared distance from the origin
A greedy-navigator approach to navigable city plans
NASA Astrophysics Data System (ADS)
Lee, Sang Hoon; Holme, Petter
2013-01-01
We use a set of four theoretical navigability indices for street maps to investigate the shape of the resulting street networks, if they are grown by optimizing these indices. The indices compare the performance of simulated navigators (having partial information about the surroundings, like humans in many real situations) to the performance of optimally navigating individuals. We show that our simple greedy shortcut construction strategy generates emergent structures that differ from real road networks, but are not inconceivable. The resulting city plans, for all navigation indices, share common qualitative properties such as the tendency for triangular blocks to appear, while the more quantitative features, such as degree distributions and clustering, are characteristically different depending on the type of metrics and routing strategies. We show that it is the type of metrics used which determines the overall shapes characterized by structural heterogeneity, but the routing schemes contribute to more subtle details of locality, which is more emphasized in the case of unrestricted connections where edge crossing is allowed.
A Greedy reassignment algorithm for the PBS minimum monitor unit constraint
NASA Astrophysics Data System (ADS)
Lin, Yuting; Kooy, Hanne; Craft, David; Depauw, Nicolas; Flanz, Jacob; Clasie, Benjamin
2016-06-01
Proton pencil beam scanning (PBS) treatment plans are made of numerous unique spots of different weights. These weights are optimized by the treatment planning system and sometimes fall below the deliverable threshold set by the treatment delivery system. The purpose of this work is to investigate a Greedy reassignment algorithm to mitigate the effects of these low-weight pencil beams. The algorithm is applied during post-processing to the optimized plan to generate deliverable plans for the treatment delivery system. The Greedy reassignment method developed in this work deletes the smallest-weight spot in the entire field, reassigns its weight to its nearest neighbor(s), and repeats until all spots are above the minimum monitor unit (MU) constraint. Its performance was evaluated using plans collected from 190 patients (496 fields) treated at our facility. The Greedy reassignment method was compared against two other post-processing methods. The evaluation criterion was the γ-index pass rate, which compares the pre-processed and post-processed dose distributions. A planning metric was developed to predict the impact of post-processing on treatment plans for various treatment planning, machine, and dose tolerance parameters. For fields with a pass rate of 90 ± 1%, the planning metric has a standard deviation equal to 18% of the centroid value, showing that the planning metric and γ-index pass rate are correlated for the Greedy reassignment algorithm. Using a 3rd-order polynomial fit to the data, the Greedy reassignment method has a 1.8 times better planning metric at a 90% pass rate compared to the other post-processing methods. As the planning metric and pass rate are correlated, the planning metric could provide an aid for choosing parameters during treatment planning, or even during facility design, in order to yield acceptable pass rates. More facilities are starting to implement PBS and some have spot sizes (one standard deviation) smaller than 5
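The reassignment loop described above can be sketched in a few lines; this is a minimal illustration under assumed data structures (spots as (x, y, weight) triples, a single nearest neighbor rather than the paper's "neighbor(s)"), not the clinical implementation evaluated in the paper.

```python
# Minimal sketch of the Greedy reassignment post-processing step:
# repeatedly delete the smallest-weight spot below the minimum MU
# threshold and give its weight to the nearest remaining neighbor.
# The (x, y, weight) spot representation is an assumption.
import math

def greedy_reassign(spots, min_mu):
    """spots: list of (x, y, weight); returns spots all >= min_mu."""
    spots = [list(s) for s in spots]
    while len(spots) > 1:
        smallest = min(spots, key=lambda s: s[2])
        if smallest[2] >= min_mu:
            break  # every remaining spot is deliverable
        spots.remove(smallest)
        # reassign the deleted weight to the nearest remaining spot
        nearest = min(spots, key=lambda s: math.hypot(s[0] - smallest[0],
                                                      s[1] - smallest[1]))
        nearest[2] += smallest[2]
    return [tuple(s) for s in spots]

field = [(0, 0, 0.5), (1, 0, 3.0), (5, 5, 4.0)]
print(greedy_reassign(field, 1.0))  # → [(1, 0, 3.5), (5, 5, 4.0)]
```

Note that total field weight is conserved: each deleted spot's MU is added to a surviving neighbor, which is why the post-processed dose stays close to the optimized one.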
On the origin of multi-step spin transition behaviour in 1D nanoparticles
NASA Astrophysics Data System (ADS)
Chiruta, Daniel; Jureschi, Catalin-Maricel; Linares, Jorge; Dahoo, Pierre Richard; Garcia, Yann; Rotaru, Aurelian
2015-09-01
To investigate the spin state switching mechanism in spin crossover (SCO) nanoparticles, special attention is given to three-step thermally induced SCO behavior in 1D chains. An additional term is included in the standard Ising-like Hamiltonian to account for the border interaction between SCO molecules and their local environment. It is shown that this additional interaction, together with the short-range interaction, drives the multi-step thermal hysteretic behavior in 1D SCO systems. The relation between a polymeric matrix and this particular multi-step SCO phenomenon is discussed accordingly. Finally, the environmental influence as a function of the SCO system's size is analyzed as well.
Surface Modified Particles By Multi-Step Addition And Process For The Preparation Thereof
Cook, Ronald Lee; Elliott, Brian John; Luebben, Silvia DeVito; Myers, Andrew William; Smith, Bryan Matthew
2006-01-17
The present invention relates to a new class of surface modified particles and to a multi-step surface modification process for the preparation of the same. The multi-step surface functionalization process involves two or more reactions to produce particles that are compatible with various host systems and/or to provide the particles with particular chemical reactivities. The initial step comprises the attachment of a small organic compound to the surface of the inorganic particle. The subsequent steps attach additional compounds to the previously attached organic compounds through organic linking groups.
Chun, Sung-woo; Kim, Daehong; Kwon, Jihun; Kim, Bongho; Choi, Seonjun; Lee, Seung-Beck
2012-04-01
We have demonstrated the fabrication of sub-30 nm magnetic tunnel junctions (MTJs) with perpendicular magnetic anisotropy. The multi-step ion beam etching (IBE) process, performed for 18 min at angles between 45° and 30° with a combined ion supply voltage of 500 V, resulted in a 55 nm tall MTJ with a 28 nm diameter. We used a negative-tone electron beam resist as the hard mask, which maintained its lateral dimension during the IBE, allowing almost vertical pillar side profiles. The measurement results showed a tunnel magneto-resistance ratio of 13% at 1 kΩ junction resistance. With further optimization of the IBE energy and the multi-step etching process, it will be possible to fabricate perpendicularly oriented MTJs for future sub-30 nm non-volatile magnetic memory applications.
Ultra-fast consensus of discrete-time multi-agent systems with multi-step predictive output feedback
NASA Astrophysics Data System (ADS)
Zhang, Wenle; Liu, Jianchang
2016-04-01
This article addresses the ultra-fast consensus problem of high-order discrete-time multi-agent systems based on a unified consensus framework. A novel multi-step predictive output mechanism is proposed under a directed communication topology containing a spanning tree. By predicting the outputs of a network several steps ahead and adding this information into the consensus protocol, it is shown that the asymptotic convergence factor is improved by a power of q + 1 compared to the routine consensus. The difficult problem of selecting the optimal control gain is solved by introducing a variable called the convergence step. In addition, ultra-fast formation achievement is studied on the basis of this new consensus protocol. Finally, ultra-fast consensus with respect to a reference model and robust consensus are discussed. Simulations are performed to illustrate the effectiveness of the theoretical results.
Mechanical and Metallurgical Evolution of Stainless Steel 321 in a Multi-step Forming Process
NASA Astrophysics Data System (ADS)
Anderson, M.; Bridier, F.; Gholipour, J.; Jahazi, M.; Wanjara, P.; Bocher, P.; Savoie, J.
2016-04-01
This paper examines the metallurgical evolution of AISI Stainless Steel 321 (SS 321) during multi-step forming, a process that involves cycles of deformation with intermediate heat treatment steps. The multi-step forming process was simulated by implementing interrupted uniaxial tensile testing experiments. The evolution of the mechanical properties as well as microstructural features, such as twins and the textures of the austenite and martensite phases, was studied as a function of the multi-step forming process. The characteristics of the Strain-Induced Martensite (SIM) were also documented for each deformation step and intermediate stress-relief heat treatment. The results indicated that the intermediate heat treatments considerably increased the formability of SS 321. Texture analysis showed that the effect of the intermediate heat treatment on the austenite was minor and led to partial recrystallization, while deformation was observed to reinforce the crystallographic texture of the austenite. For the SIM, an Olson-Cohen-type equation was identified to analytically predict its formation during the multi-step forming process. The generated SIM was textured, and this texture weakened with increasing deformation.
Multi-step routes of capuchin monkeys in a laser pointer traveling salesman task.
Howard, Allison M; Fragaszy, Dorothy M
2014-09-01
Prior studies have claimed that nonhuman primates plan their routes multiple steps in advance. However, a recent reexamination of multi-step route planning in nonhuman primates indicated that there is no evidence for planning more than one step ahead. We tested multi-step route planning in capuchin monkeys using a pointing device to "travel" to distal targets while stationary. This device enabled us to determine whether capuchins distinguish the spatial relationship between goals and themselves and spatial relationships between goals and the laser dot, allocentrically. In Experiment 1, two subjects were presented with identical food items in Near-Far (one item nearer to subject) and Equidistant (both items equidistant from subject) conditions with a laser dot visible between the items. Subjects moved the laser dot to the items using a joystick. In the Near-Far condition, one subject demonstrated a bias for items closest to self but the other subject chose efficiently. In the second experiment, subjects retrieved three food items in similar Near-Far and Equidistant arrangements. Both subjects preferred food items nearest the laser dot and showed no evidence of multi-step route planning. We conclude that these capuchins do not make choices on the basis of multi-step look ahead strategies. PMID:24700520
Use of Chiral Oxazolidinones for a Multi-Step Synthetic Laboratory Module
ERIC Educational Resources Information Center
Betush, Matthew P.; Murphree, S. Shaun
2009-01-01
Chiral oxazolidinone chemistry is used as a framework for an advanced multi-step synthesis lab. The cost-effective and robust preparation of chiral starting materials is presented, as well as the use of chiral auxiliaries in a synthesis scheme that is appropriate for students currently in the second semester of the organic sequence. (Contains 1…
NASA Astrophysics Data System (ADS)
Mitran, T. L.; Melchert, O.; Hartmann, A. K.
2013-12-01
The main characteristics of biased greedy random walks (BGRWs) on two-dimensional lattices with real-valued quenched disorder on the lattice edges are studied. Here, the disorder allows for negative edge weights. In previous studies considering the negative-weight percolation (NWP) problem, this was shown to change the universality class of the existing, static percolation transition. In the present study, four different types of BGRWs and an algorithm based on the ant colony optimization heuristic were considered. Regarding the BGRWs, the precise configurations of the lattice walks constructed during the numerical simulations were influenced by two parameters: a disorder parameter ρ that controls the amount of negative edge weights on the lattice and a bias strength B that governs the drift of the walkers along a certain lattice direction. The random walks are “greedy” in the sense that the locally optimal choice of the walker is to preferentially traverse edges with a negative weight (associated with a net gain of “energy” for the walker). Here, the pivotal observable is the probability that, after termination, a lattice walk exhibits a total negative weight, which is here considered as percolating. The behavior of this observable as a function of ρ for different bias strengths B is put under scrutiny. Upon tuning ρ, the probability of finding such a feasible lattice walk increases from zero to one. This is the key feature of the percolation transition in the NWP model. Here, we address the question of how well the transition point ρc, resulting from numerically exact and “static” simulations in terms of the NWP model, can be resolved using simple dynamic algorithms that have only local information available, one of the basic questions in the physics of glassy systems.
Teaching multi-step math skills to adults with disabilities via video prompting.
Kellems, Ryan O; Frandsen, Kaitlyn; Hansen, Blake; Gabrielsen, Terisa; Clarke, Brynn; Simons, Kalee; Clements, Kyle
2016-11-01
The purpose of this study was to evaluate the effectiveness of teaching multi-step math skills to nine adults with disabilities in an 18-21 post-high school transition program using a video prompting intervention package. The dependent variable was the percentage of steps completed correctly. The independent variable was the video prompting intervention, which involved several multi-step math calculation skills: (a) calculating a tip (15%), (b) calculating item unit prices, and (c) adjusting a recipe for more or fewer people. Results indicated a functional relationship between the video prompting intervention package and the percentage of steps completed correctly. Eight of the nine adults showed significant gains immediately after receiving the video prompting intervention. PMID:27589151
A nonparametric method of multi-step ahead forecasting in diffusion processes
NASA Astrophysics Data System (ADS)
Yamamura, Mariko; Shoji, Isao
2010-06-01
This paper provides a nonparametric model for multi-step ahead forecasting in diffusion processes. The model is constructed from the local linear model with a Gaussian kernel. The paper provides simulation studies to evaluate its multi-step ahead forecasting performance by comparison with the global linear model, showing that the nonparametric model forecasts better than the global linear model. The paper also conducts empirical analyses of forecasting using intraday data of the Japanese stock price index and a time series of heart rates. The results show that forecasting performance does not differ much for the Japanese stock price index, but that the nonparametric model performs significantly better in the analysis of the heart rates.
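The approach in the abstract above can be illustrated with a toy sketch: a Gaussian-kernel local linear fit gives a one-step-ahead prediction, and iterating it (feeding each forecast back in as the next conditioning point) yields an indirect multi-step-ahead forecast. The bandwidth, toy data, and estimator details here are illustrative assumptions, not the paper's specification.

```python
# Hedged sketch of multi-step-ahead forecasting with a local linear
# model and Gaussian kernel, applied recursively.
import numpy as np

def local_linear_step(series, x0, h=1.0):
    """One-step-ahead prediction: kernel-weighted local linear fit at x0."""
    x = np.asarray(series[:-1], dtype=float)   # predictors x_t
    y = np.asarray(series[1:], dtype=float)    # responses  x_{t+1}
    sw = np.sqrt(np.exp(-0.5 * ((x - x0) / h) ** 2))  # sqrt of kernel weights
    X = np.column_stack([np.ones_like(x), x - x0])
    # weighted least squares; the local intercept is the fitted value at x0
    beta, *_ = np.linalg.lstsq(X * sw[:, None], y * sw, rcond=None)
    return beta[0]

def forecast(series, steps, h=1.0):
    """Indirect multi-step-ahead forecast: iterate the one-step model."""
    path, x0 = [], series[-1]
    for _ in range(steps):
        x0 = local_linear_step(series, x0, h)
        path.append(x0)
    return path

data = [0.5 * t for t in range(20)]              # exactly linear toy series
print([round(v, 4) for v in forecast(data, 3)])  # → [10.0, 10.5, 11.0]
```

On this exactly linear toy series the local fit recovers the one-step map exactly, so the iterated forecast continues the trend; on nonlinear data the recursion is where the paper's error accumulation concerns arise.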
Region-based multi-step optic disk and cup segmentation from color fundus image
NASA Astrophysics Data System (ADS)
Xiao, Di; Lock, Jane; Manresa, Javier Moreno; Vignarajan, Janardhan; Tay-Kearney, Mei-Ling; Kanagasingam, Yogesan
2013-02-01
The retinal optic cup-to-disk ratio (CDR) is one of the important indicators of glaucomatous neuropathy. In this paper, we propose a novel multi-step 4-quadrant thresholding method for optic disk segmentation and a multi-step temporal-nasal segmenting method for optic cup segmentation, based on blood-vessel-inpainted HSL lightness images and green-channel images. The performance of the proposed methods was evaluated on a group of color fundus images and compared with manual outlining results from two experts. Dice scores of the detected disk and cup regions between the automatic and manual results were computed and compared. Vertical CDRs were also compared among the three results. The preliminary experiment has demonstrated the robustness of the method for automatic optic disk and cup segmentation and its potential value for clinical application.
Contaminant source and release history identification in groundwater: a multi-step approach.
Gzyl, G; Zanini, A; Frączek, R; Kura, K
2014-02-01
The paper presents a new multi-step approach aiming at source identification and release history estimation. The new approach consists of three steps: performing integral pumping tests, identifying sources, and recovering the release history by means of a geostatistical approach. The present paper shows the results obtained from the application of the approach to a complex case study in Poland in which several areal sources were identified. The investigated site is situated in the vicinity of a former chemical plant in southern Poland, in the city of Jaworzno in the valley of the Wąwolnica River; the plant has been in operation since the First World War, producing various chemicals. From an environmental point of view the most relevant activity was the production of pesticides, especially lindane. The application of the multi-step approach enabled a significant increase in the knowledge of contamination at the site. Some suspected contamination sources have been shown to have a minor effect on the overall contamination, while other suspected sources have proven to be of key significance. Some areas not previously taken into consideration have now been identified as key sources. The method also enabled estimation of the magnitude of the sources, and a list of priority reclamation actions will be drawn up as a result. The multi-step approach has proven to be effective and may be applied to other complicated contamination cases. Moreover, the paper shows the capability of the geostatistical approach to manage a complex real case study. PMID:24365394
Laboratory investigation of borehole breakouts and Multi-step failure model
NASA Astrophysics Data System (ADS)
Ruan, Xiao-Ping; Mao, Ji-Zheng; Cui, Zhan-Tao
1993-05-01
Based on our experiments on borehole breakouts with a group of sandstone samples described in this paper, a multi-step failure model of borehole breakouts is proposed to quantitatively explain the relationship between the cross-sectional shape of borehole breakouts and the state of crustal stress. In this model, borehole spalling is related not only to the state of stress at a single point but also to the state of stress in its neighboring area. The comparison between the experimental results of borehole breakouts and the calculated results shows good agreement.
Multi-step motion planning: Application to free-climbing robots
NASA Astrophysics Data System (ADS)
Bretl, Timothy Wolfe
This dissertation addresses the problem of planning the motion of a multi-limbed robot to "free-climb" vertical rock surfaces. Free-climbing relies on natural features and friction (such as holes or protrusions) rather than special fixtures or tools. It requires strength, but more importantly it requires deliberate reasoning: not only must the robot decide how to adjust its posture to reach the next feature without falling, it must plan an entire sequence of steps, where each one might have future consequences. This process of reasoning is called multi-step planning. A multi-step planning framework is presented for computing non-gaited, free-climbing motions. This framework derives from an analysis of a free-climbing robot's configuration space, which can be decomposed into constraint manifolds associated with each state of contact between the robot and its environment. An understanding of the adjacency between manifolds motivates a two-stage strategy that uses a candidate sequence of steps to direct the subsequent search for motions. Three algorithms are developed to support the framework. The first algorithm reduces the amount of time required to plan each potential step, a large number of which must be considered over an entire multi-step search. It extends the probabilistic roadmap (PRM) approach based on an analysis of the interaction between balance and the topology of closed kinematic chains. The second algorithm addresses a problem with the PRM approach, that it is unable to distinguish challenging steps (which may be critical) from impossible ones. This algorithm detects impossible steps explicitly, using automated algebraic inference and machine learning. The third algorithm provides a fast constraint checker (on which the PRM approach depends), in particular a test of balance at the initially unknown number of sampled configurations associated with each step. It is a method of incremental precomputation, fast because it takes advantage of the sample
Lin, Shih-Wei; Ying, Kuo-Ching; Wan, Shu-Yen
2014-01-01
Berth allocation is the forefront operation performed when ships arrive at a port and is a critical task in container port optimization. Minimizing the time ships spend at berths constitutes an important objective of berth allocation problems. This study focuses on the discrete dynamic berth allocation problem (discrete DBAP), which aims to minimize total service time, and proposes an iterated greedy (IG) algorithm to solve it. The proposed IG algorithm is tested on three benchmark problem sets. Experimental results show that the proposed IG algorithm can obtain optimal solutions for all test instances of the first and second problem sets and outperforms the best-known solutions for 35 out of 90 test instances of the third problem set. PMID:25295295
Quasi-greedy triangulations approximating the minimum weight triangulation
Levcopoulos, C.; Krznaric, D.
1996-12-31
This paper settles the following two open problems: (1) What is the worst-case approximation ratio between the greedy and the minimum weight triangulation? (2) Is there a polynomial time algorithm that always produces a triangulation whose length is within a constant factor from the minimum? The answer to the first question is that the known Ω(√n) lower bound is tight. The second question is answered in the affirmative by using a slight modification of an O(n log n) algorithm for the greedy triangulation. We also derive some other interesting results. For example, we show that a constant-factor approximation of the minimum weight convex partition can be obtained within the same time bounds.
Multi-step loading/unloading experiments that challenge constitutive models of glassy polymers
NASA Astrophysics Data System (ADS)
Caruthers, James; Medvedev, Grigori
2014-03-01
The mechanical response of glassy polymers depends on thermal and deformation history, and the resulting relaxation phenomena remain a significant challenge for constitutive modeling. For strain-controlled experiments the stress response is measured during loading/unloading ramps and at constant strain. By judiciously combining these basic steps, a set of multi-step experiments has been designed to challenge existing constitutive models for glassy polymers. A particular example is the "stress memory" experiment, i.e. loading through yield, unloading to zero stress, and holding at the final strain, where the subsequent evolution of the stress exhibits an overshoot. The observed dependence of the overshoot on the loading strain rate cannot be explained by models in which the relaxation time is a function of stress or strain. Another discriminating multi-step history experiment involves strain accumulation, to test the common assumption that the phenomenon of strain hardening is caused by a purely elastic contribution to stress. Experimental results will be presented for a low-Tg epoxy system, and the data will be used to critically analyze the predictions of both traditional viscoelastic/viscoplastic constitutive models and a recently developed Stochastic Constitutive Model.
Lutz, Barry; Liang, Tinny; Fu, Elain; Ramachandran, Sujatha; Kauffman, Peter; Yager, Paul
2013-01-01
Lateral flow tests (LFTs) are an ingenious format for rapid and easy-to-use diagnostics, but they are fundamentally limited to assay chemistries that can be reduced to a single chemical step. In contrast, most laboratory diagnostic assays rely on multiple timed steps carried out by a human or a machine. Here, we use dissolvable sugar applied to paper to create programmable flow delays and present a paper network topology that uses these time delays to program automated multi-step fluidic protocols. Solutions of sucrose at different concentrations (10-70% of saturation) were added to paper strips and dried to create fluidic time delays spanning minutes to nearly an hour. A simple folding card format employing sugar delays was shown to automate a four-step fluidic process initiated by a single user activation step (folding the card); this device was used to perform a signal-amplified sandwich immunoassay for a diagnostic biomarker for malaria. The cards are capable of automating multi-step assay protocols normally used in laboratories, but in a rapid, low-cost, and easy-to-use format. PMID:23685876
A new indirect multi-step-ahead prediction model for a long-term hydrologic prediction
NASA Astrophysics Data System (ADS)
Cheng, Chun-Tian; Xie, Jing-Xin; Chau, Kwok-Wing; Layeghifard, Mehdi
2008-10-01
A dependable long-term hydrologic prediction is essential to the planning, design and management of water resources. A three-stage indirect multi-step-ahead prediction model, which combines dynamic spline interpolation with a multilayer adaptive time-delay neural network (ATNN), is proposed in this study for long-term hydrologic prediction. In the first two stages, a group of spline interpolation and dynamic extraction units are utilized to amplify the effect of observations in order to decrease the error accumulation and propagation caused by previous predictions. In the last stage, variable time delays and weights are dynamically regulated by the ATNN, and the output of the ATNN is obtained as a multi-step-ahead prediction. We use two examples to illustrate the effectiveness of the proposed model. One example is the sunspot time series, a well-known nonlinear and non-Gaussian benchmark that is often used to evaluate the effectiveness of nonlinear models. The other is a case study of long-term hydrologic prediction using the monthly discharge data from the Manwan Hydropower Plant in Yunnan Province, China. Application results show that the proposed method is feasible and effective.
Iterated greedy graph coloring and the coloring landscape
Culberson, J.
1994-12-31
The Iterated Greedy (IG) graph coloring algorithm uses the greedy, or simple sequential, graph coloring algorithm repeatedly to obtain ever better colorings. On each iteration, the permutation presented to the greedy algorithm is generated so that the vertices of the independent sets identified in the previous coloring are adjacent in the permutation. It is trivial to prove that this ensures that the new coloring will use no more colors than the previous coloring. On random graphs the algorithm does not perform as well as TABU or semi-exhaustive independent set approaches. It does offer some improvements when combined with these. On k-colorable graphs it seems quite effective, and offers a robustness over a wide range of k, n, p values the other algorithms seem not to have. In particular, evidence indicates that one setting of parameters seems to be "near best" over most of these classes. Evidence also indicates that graphs in the classes we consider that are harder for this algorithm are also more difficult for TABU and semi-exhaustive independent set approaches. Thus, the number of iterations required gives a natural measure of difficulty of the graphs, independent of machine characteristics and many details of implementation.
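The core IG idea described above, rerunning greedy coloring on permutations that keep each previous color class contiguous so the color count can never increase, can be sketched as follows. The class-shuffling strategy here is one simple choice among the reorderings Culberson studies, not his tuned parameter setting.

```python
# Sketch of Iterated Greedy (IG) coloring: greedily color vertices in a
# permutation order, then build the next permutation by concatenating
# the previous color classes, which provably never increases the count.
import random

def greedy_color(adj, order):
    """Sequential greedy coloring; returns {vertex: color}."""
    color = {}
    for v in order:
        used = {color[u] for u in adj[v] if u in color}
        c = 0
        while c in used:
            c += 1
        color[v] = c
    return color

def iterated_greedy(adj, iters=50, seed=0):
    rng = random.Random(seed)
    best = greedy_color(adj, list(adj))
    for _ in range(iters):
        classes = {}
        for v, c in best.items():
            classes.setdefault(c, []).append(v)
        groups = list(classes.values())
        rng.shuffle(groups)  # reorder classes; members stay adjacent
        cand = greedy_color(adj, [v for g in groups for v in g])
        if max(cand.values()) <= max(best.values()):
            best = cand
    return best

# 5-cycle: chromatic number 3
adj = {0: [1, 4], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3, 0]}
col = iterated_greedy(adj)
print(max(col.values()) + 1)  # number of colors used → 3
```

Because every vertex in a former color class sees no conflict from its own class, greedy on the concatenated classes reuses at most the old number of colors, which is the invariant the abstract calls trivial to prove.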
SMG: Fast scalable greedy algorithm for influence maximization in social networks
NASA Astrophysics Data System (ADS)
Heidari, Mehdi; Asadpour, Masoud; Faili, Hesham
2015-02-01
Influence maximization is the problem of finding the k most influential nodes in a social network. Much work has been done in two different categories: greedy approaches and heuristic approaches. The greedy approaches achieve better influence spread but lower scalability on large networks. The heuristic approaches are scalable and fast, but not for all types of networks. Improving the scalability of the greedy approach is still an open problem. In this work we present a fast greedy algorithm called State Machine Greedy that improves on existing algorithms by reducing calculations in two parts: (1) counting the traversed nodes in the estimate-propagation procedure, and (2) Monte Carlo graph construction in the simulation of diffusion. The results show that our method gives a large speed improvement over the existing greedy approaches.
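For context, the baseline that SMG accelerates is Monte Carlo greedy seed selection under the independent cascade (IC) model, which can be sketched as follows. The toy graph, propagation probability, and run counts are illustrative assumptions, and none of SMG's state-machine bookkeeping is shown.

```python
# Baseline greedy influence maximization: pick k seeds, each time adding
# the node with the largest Monte Carlo estimate of marginal spread
# under the Independent Cascade (IC) model.
import random

def ic_spread(adj, seeds, p, rng):
    """Simulate one IC cascade; return the number of activated nodes."""
    active, frontier = set(seeds), list(seeds)
    while frontier:
        nxt = []
        for u in frontier:
            for v in adj[u]:
                if v not in active and rng.random() < p:
                    active.add(v)
                    nxt.append(v)
        frontier = nxt
    return len(active)

def greedy_im(adj, k, p=0.2, runs=200, seed=1):
    rng = random.Random(seed)
    seeds = set()
    for _ in range(k):
        def gain(v):  # average spread with v added to the seed set
            return sum(ic_spread(adj, seeds | {v}, p, rng)
                       for _ in range(runs)) / runs
        seeds.add(max((v for v in adj if v not in seeds), key=gain))
    return seeds

adj = {0: [1, 2, 3], 1: [0], 2: [0], 3: [0], 4: [5], 5: [4]}
print(greedy_im(adj, 2))  # the hub 0 plus one node from {4, 5}
```

The repeated cascade simulations inside `gain` are exactly the cost that SMG's reduced traversal counting and cheaper Monte Carlo graph construction target.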
Variation of nanopore diameter along porous anodic alumina channels by multi-step anodization.
Lee, Kwang Hong; Lim, Xin Yuan; Wai, Kah Wing; Romanato, Filippo; Wong, Chee Cheong
2011-02-01
In order to form tapered nanocapillaries, we investigated a method to vary the nanopore diameter along the porous anodic alumina (PAA) channels using multi-step anodization. By anodizing the aluminum in either single acid (H3PO4) or multi-acid (H2SO4, oxalic acid and H3PO4) with increasing or decreasing voltage, the diameter of the nanopore along the PAA channel can be varied systematically corresponding to the applied voltages. The pore size along the channel can be enlarged or shrunken in the range of 20 nm to 200 nm. Structural engineering of the template along the film growth direction can be achieved by deliberately designing a suitable voltage and electrolyte together with anodization time. PMID:21456152
Digital multi-step phase-shifting profilometry for three-dimensional ballscrew surface imaging
NASA Astrophysics Data System (ADS)
Liu, Cheng-Yang; Yen, Tzu-Ping
2016-05-01
A digital multi-step phase-shifting profilometry method for three-dimensional (3-D) ballscrew surface imaging is presented. The 3-D digital imaging system is capable of capturing fringe pattern images. Straight fringe patterns generated by software are projected onto the ballscrew surface by a DLP projector. The distorted fringe patterns are captured by a CCD camera at different detecting directions for the reconstruction algorithms. The seven-step phase-shifting algorithm and the quality-guided path unwrapping algorithm are used to calculate the absolute phase at each pixel position. A 3-D calibration method is used to obtain the relationship between the absolute phase map and the ballscrew shape. The angular dependence of 3-D shape imaging for ballscrews is analyzed and characterized. The experimental results may provide a novel, fast, and high-accuracy imaging system to inspect the surface features of ballscrews without length limitation for the automated optical inspection industry.
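The seven-step algorithm mentioned above is an instance of the standard N-step phase-shifting formula for the wrapped phase; a minimal sketch, assuming the usual textbook model of equally spaced 2π/N shifts (the symbols A and B are not taken from this paper):

```python
import numpy as np

def n_step_phase(images):
    """N-step phase shifting for images[n] = A + B*cos(phi + 2*pi*n/N).
    Returns the wrapped phase phi in (-pi, pi], element-wise."""
    n = len(images)
    shifts = 2 * np.pi * np.arange(n) / n
    num = sum(img * np.sin(d) for img, d in zip(images, shifts))
    den = sum(img * np.cos(d) for img, d in zip(images, shifts))
    # sum I_n sin(d_n) = -(N*B/2) sin(phi), sum I_n cos(d_n) = (N*B/2) cos(phi)
    return np.arctan2(-num, den)
```

With N = 7 this is the seven-step variant; the result is still wrapped, so a quality-guided path unwrapping pass (as in the paper) is needed before the phase-to-shape calibration.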
The solution of Parrondo’s games with multi-step jumps
NASA Astrophysics Data System (ADS)
Saakian, David B.
2016-04-01
We consider the general case of Parrondo’s games, when there is a finite probability to stay in the current state as well as multi-step jumps. We introduce a modification of the model: the transition probabilities between different games depend on the choice of the game in the previous round. We calculate the rate of capital growth as well as the variance of the distribution, following large deviation theory. The modified model allows higher capital growth rates than in standard Parrondo games for the range of parameters considered in the key articles about these games, and positive capital growth is possible for a much wider regime of parameters of the model.
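For orientation, the classic two-game Parrondo setup (not the multi-step-jump generalization analyzed in the paper) can be simulated directly. The coin probabilities below are the standard textbook values and the bias ε = 0.005 is an assumed illustrative choice.

```python
import random

def play(game, capital, rng, eps=0.005):
    """One round of the classic Parrondo pair: game A is a slightly
    losing coin flip; game B's coin depends on capital mod 3."""
    if game == 'A':
        p = 0.5 - eps
    elif capital % 3 == 0:
        p = 0.1 - eps           # bad coin
    else:
        p = 0.75 - eps          # good coin
    return capital + (1 if rng.random() < p else -1)

def average_growth(strategy, rounds=500_000, seed=1):
    """Mean capital gain per round; strategy(t, capital, rng) -> 'A'/'B'."""
    rng = random.Random(seed)
    capital = 0
    for t in range(rounds):
        capital = play(strategy(t, capital, rng), capital, rng)
    return capital / rounds
```

Played alone, A and B each lose on average, yet alternating them at random wins (roughly +0.015 per round for ε = 0.005); this is the paradox whose capital growth rate and variance the paper computes for the multi-step-jump generalization.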
A Multi-Step Assessment Scheme for Seismic Network Site Selection in Densely Populated Areas
NASA Astrophysics Data System (ADS)
Plenkers, Katrin; Husen, Stephan; Kraft, Toni
2015-10-01
We developed a multi-step assessment scheme for improved site selection during seismic network installation in densely populated areas. Site selection is a complex process where different aspects (seismic background noise, geology, and financing) have to be taken into account. In order to improve this process, we developed a step-wise approach that allows quantifying the quality of a site by using, in addition to expert judgement and test measurements, two weighting functions as well as reference stations. Our approach ensures that the recording quality aimed for is reached and makes different sites quantitatively comparable to each other. Last but not least, it is an easy way to document the decision process, because all relevant parameters are listed, quantified, and weighted.
Star sub-pixel centroid calculation based on multi-step minimum energy difference method
NASA Astrophysics Data System (ADS)
Wang, Duo; Han, YanLi; Sun, Tengfei
2013-09-01
The star centroid plays a vital role in celestial navigation. Star images taken during daytime have a low SNR because of the strong sky background; the star targets are nearly submerged in the background, which makes centroid localization difficult. Traditional methods such as the moment method and the weighted centroid method are simple but have large errors, especially at low SNR; Gaussian fitting has high positioning accuracy but is computationally complex. Based on an analysis of the energy distribution in star images, a localization method for star target centroids based on a multi-step minimum energy difference is proposed. The method uses linear superposition to narrow the centroid area, then applies interpolation within that narrowed area to segment the pixels. Exploiting the symmetry of the stellar energy distribution, it tentatively takes the current pixel as the centroid position and computes the difference between the sums of energy on the two symmetric sides of the current pixel (here, in the transverse and longitudinal directions) over an equal step length (set to 9 in this paper, though it can be chosen according to conditions); the centroid position in each direction is taken where the minimum difference appears. Validation on simulated star images and comparison with several traditional methods show that the positioning accuracy of the method reaches 0.001 pixel and that it calculates centroids well under low-SNR conditions. Applied to a star map acquired in the near-infrared band at a fixed observation site during daytime, and compared against the known positions of the stars, the multi-step minimum energy difference method achieves better accuracy.
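A one-dimensional toy version of the symmetric energy-difference search can illustrate the idea. The interpolation factor, the coarse narrowing via the brightest pixel, and the window size are illustrative choices (the paper works in 2-D on real star images and uses a step length of 9 pixels):

```python
import numpy as np

def energy_difference_centroid(profile, upsample=100, half_width=9):
    """Narrow to the brightest pixel, then search +/- 1 pixel on an
    interpolated sub-pixel grid for the position where the energies
    summed over half_width to the left and to the right are most
    nearly equal (minimum absolute difference)."""
    x = np.arange(len(profile))
    step = 1.0 / upsample
    fine_x = np.arange(0.0, len(profile) - 1 + step / 2, step)
    fine = np.interp(fine_x, x, profile)       # sub-pixel interpolation
    w = half_width * upsample
    cum = np.concatenate([[0.0], np.cumsum(fine)])
    left = cum[w:-w] - cum[:-2 * w]            # energy over [i-w, i)
    right = cum[2 * w:] - cum[w:-w]            # energy over [i, i+w)
    peak = int(np.argmax(fine))                # coarse centroid area
    j0 = max(0, peak - upsample - w)
    j1 = min(len(left), peak + upsample - w)
    j = int(np.argmin(np.abs(left - right)[j0:j1])) + j0
    return fine_x[j + w]
```

For a symmetric stellar profile the left and right window energies balance exactly at the centroid, which is why the minimum of the difference marks the sub-pixel position.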
Cross-cultural adaptation of instruments assessing breastfeeding determinants: a multi-step approach
2014-01-01
Background Cross-cultural adaptation is a necessary process to effectively use existing instruments in other cultural and language settings. The process of cross-culturally adapting existing instruments, including translation, is considered a critical step in establishing a meaningful instrument for use in another setting. Using a multi-step approach is considered best practice in achieving cultural and semantic equivalence of the adapted version. We aimed to ensure the content validity of our instruments in the cultural context of KwaZulu-Natal, South Africa. Methods The Iowa Infant Feeding Attitudes Scale, the Breastfeeding Self-Efficacy Scale-Short Form and additional items comprise our consolidated instrument, which was cross-culturally adapted using a multi-step approach during August 2012. Cross-cultural adaptation was achieved through steps to maintain content validity and attain semantic equivalence in the target version. Specifically, Lynn’s recommendation to apply an item-level content validity index score was followed. The revised instrument was translated and back-translated. To ensure semantic equivalence, Brislin’s back-translation approach was utilized, followed by committee review to address any discrepancies that emerged from translation. Results Our consolidated instrument was adapted to be culturally relevant and translated to yield more reliable and valid results for use in our larger research study to measure infant feeding determinants effectively in our target cultural context. Conclusions Undertaking rigorous steps to effectively ensure cross-cultural adaptation increases our confidence that the conclusions we make based on our self-report instrument(s) will be stronger. In this way, our aim to achieve strong cross-cultural adaptation of our consolidated instruments was achieved while also providing a clear framework for other researchers choosing to utilize existing instruments for work in other cultural, geographic and population settings.
Multi-step regionalization technique and regional model validation for climate studies
NASA Astrophysics Data System (ADS)
Argüeso, D.; Hidalgo-Muñoz, J. M.; Calandria-Hernández, D.; Gámiz-Fortis, S. R.; Esteban-Parra, M. J.; Castro-Díez, Y.
2010-09-01
A regionalization procedure is proposed to define affinity regions in Andalusia (Southern Spain) with regard to maximum and minimum temperature and precipitation, in order to validate a regional climate model (WRF). In situ observations are not suitable for model validation unless they are somehow upscaled. Therefore, a regionalization methodology was adopted to overcome the representation error that arises from the spatial-scale disagreement between site-specific observations and model outputs. An observational daily dataset comprising 412 rain gauges and 120 maximum and minimum temperature series all over Andalusia was used. The observations covered a 10-year period ranging from 1990 to 1999 with no more than 10% of missing values. The original dataset, composed of 716 series for precipitation and 243 for temperature, was employed to fill the gaps using a correlation method. Precipitation and temperature have been processed separately using the multi-step regionalization methodology, which comprises three main stages. Firstly, an S-mode Principal Component Analysis (PCA) was applied to the correlation matrix obtained from daily values to retain the principal modes of variability and discard possible information redundancy. Secondly, rotated normalized loadings were used to classify the stations via an agglomerative Clustering Analysis (CA) method to set the number of regions and the centroids associated with those regions. Finally, using the centroids calculated in the previous step, and once the appropriate number of regions was identified, a non-hierarchical k-means algorithm was applied to obtain the definitive climate division of Andalusia. The combination of methods attempts to take advantage of their benefits and eliminate their shortcomings when used individually. This multi-step methodology achieves a noticeable reduction of subjectivity in the regionalization process. Furthermore, it is a methodology based only on the data analyzed to perform the regionalization, with no
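The three-stage pipeline can be sketched compactly. This is a minimal stand-in, not the authors' procedure: a farthest-point seeding replaces the agglomerative step that fixes the centroids, rotation of the loadings is omitted, and all names are illustrative.

```python
import numpy as np

def regionalize(data, n_modes=3, k=4, iters=50):
    """Sketch on a (time x stations) array: (1) S-mode PCA of the
    station correlation matrix, (2) centroid seeding, (3) k-means
    on the retained loadings; returns one region label per station."""
    corr = np.corrcoef(data, rowvar=False)
    vals, vecs = np.linalg.eigh(corr)              # ascending order
    top = np.argsort(vals)[::-1][:n_modes]
    loadings = vecs[:, top] * np.sqrt(vals[top])   # one row per station
    # farthest-point initialization of the k centroids
    idx = [0]
    for _ in range(1, k):
        d = np.linalg.norm(loadings[:, None] - loadings[idx][None], axis=2)
        idx.append(int(d.min(axis=1).argmax()))
    centroids = loadings[idx].copy()
    for _ in range(iters):                         # Lloyd iterations
        d = np.linalg.norm(loadings[:, None] - centroids[None], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = loadings[labels == j].mean(axis=0)
    return labels
```

Stations driven by the same underlying signal end up with similar loading vectors, so clustering the loadings groups them into affinity regions.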
Greedy adaptive walks on a correlated fitness landscape.
Park, Su-Chan; Neidhart, Johannes; Krug, Joachim
2016-05-21
We study adaptation of a haploid asexual population on a fitness landscape defined over binary genotype sequences of length L. We consider greedy adaptive walks in which the population moves to the fittest among all single mutant neighbors of the current genotype until a local fitness maximum is reached. The landscape is of the rough mount Fuji type, which means that the fitness value assigned to a sequence is the sum of a random and a deterministic component. The random components are independent and identically distributed random variables, and the deterministic component varies linearly with the distance to a reference sequence. The deterministic fitness gradient c is a parameter that interpolates between the limits of an uncorrelated random landscape (c=0) and an effectively additive landscape (c→∞). When the random fitness component is chosen from the Gumbel distribution, explicit expressions for the distribution of the number of steps taken by the greedy walk are obtained, and it is shown that the walk length varies non-monotonically with the strength of the fitness gradient when the starting point is sufficiently close to the reference sequence. Asymptotic results for general distributions of the random fitness component are obtained using extreme value theory, and it is found that the walk length attains a non-trivial limit for L→∞, different from its values for c=0 and c=∞, if c is scaled with L in an appropriate combination. PMID:26953649
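The model lends itself to a direct simulation. The sketch below reconstructs it from the description above (Gumbel random components, linear deterministic part, greedy single-mutant moves); caching the random component on first visit is an implementation convenience, and the function names are assumptions.

```python
import numpy as np

def greedy_walk_length(L=10, c=1.0, seed=0):
    """Greedy adaptive walk on a Rough-Mount-Fuji landscape:
    F(g) = -c * d(g, ref) + eta(g), eta ~ i.i.d. Gumbel, ref = 00...0.
    Start at the antipodal genotype; move to the fittest single-mutant
    neighbor until no neighbor is fitter; return the step count."""
    rng = np.random.default_rng(seed)
    cache = {}
    def fitness(g):
        if g not in cache:                 # draw eta on first visit
            cache[g] = -c * sum(g) + rng.gumbel()
        return cache[g]
    g = (1,) * L                           # antipode of the reference
    steps = 0
    while True:
        nbrs = [g[:i] + (1 - g[i],) + g[i + 1:] for i in range(L)]
        best = max(nbrs, key=fitness)
        if fitness(best) <= fitness(g):
            return steps                   # local fitness maximum
        g, steps = best, steps + 1
```

In the strong-gradient limit the walk marches straight to the reference sequence in L steps, while for c = 0 it stops at the first local optimum of the uncorrelated landscape, typically after only a few steps, consistent with the non-monotonic dependence on c discussed in the abstract.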
Multi-step deformations - a stringent test for constitutive models for polymer glasses
NASA Astrophysics Data System (ADS)
Medvedev, Grigori; Caruthers, James
A number of constitutive models have been proposed to describe mechanical behavior of polymer glasses, where the focus has been on the stress-strain curve observed in a constant strain rate deformation. The stress-strain curve possesses several prominent features, including yield, post-yield softening, flow, and hardening, which have proven challenging to predict. As a result, both viscoplastic and nonlinear viscoelastic constitutive models have become quite intricate, where a new mechanism is invoked for each bend of the stress-strain curve. We demonstrate on several examples that when the models are used to describe the multi-step deformations vs. the more common single strain rate deformation, they produce responses that are qualitatively incorrect, revealing the existing models to be parameterizations of a single-step curve. A recently developed stochastic constitutive model has fewer problems than the traditional viscoelastic/viscoplastic models, but it also has difficulties. The implications for the mechanics and physics of glassy polymers will be discussed.
Reinforced recurrent neural networks for multi-step-ahead flood forecasts
NASA Astrophysics Data System (ADS)
Chen, Pin-An; Chang, Li-Chiu; Chang, Fi-John
2013-08-01
Because true values are not available at every time step in an online learning algorithm for multi-step-ahead (MSA) forecasts, an MSA reinforced real-time recurrent learning algorithm for recurrent neural networks (R-RTRL NN) is proposed. The main merit of the proposed method is that it repeatedly adjusts the model parameters with current information, including the latest observed values and the model's outputs, to enhance the reliability and forecast accuracy of the proposed method. The sequential formulation of the R-RTRL NN is derived. To demonstrate its reliability and effectiveness, the proposed R-RTRL NN is implemented to make 2-, 4- and 6-step-ahead forecasts for a well-known benchmark chaotic time series and a reservoir flood inflow series in northern Taiwan. For comparison, three neural networks (two dynamic and one static) were implemented. Numerical and experimental results indicate that the R-RTRL NN not only achieves superior performance to the comparative networks but also significantly improves the precision of MSA forecasts for both the chaotic time series and the reservoir inflow case during typhoon events, with effective mitigation of the time-lag problem.
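The "adjust online, then forecast recursively" loop at the heart of the scheme can be illustrated with a linear stand-in. This is not the R-RTRL recurrent network: a normalized-LMS update of an AR predictor replaces the recurrent learning rule, and all parameters below are illustrative.

```python
import numpy as np

def online_msa_forecast(series, horizon=4, order=3, lr=0.5):
    """At each time t: (1) one normalized-LMS update of an AR(order)
    predictor using the newest observation, then (2) a recursive
    multi-step-ahead forecast from t, feeding predictions back in.
    Returns an array of shape (steps, horizon)."""
    w = np.zeros(order)
    preds = []
    for t in range(order, len(series) - horizon):
        x = series[t - order:t][::-1]          # newest value first
        err = series[t] - w @ x                # online parameter update
        w += lr * err * x / (x @ x + 1e-9)
        window = list(series[t - order + 1:t + 1][::-1])
        fc = []
        for _ in range(horizon):               # recursive MSA forecast
            y = float(w @ np.array(window[:order]))
            fc.append(y)
            window.insert(0, y)
        preds.append(fc)
    return np.array(preds)
```

The recursion makes each forecast depend on earlier forecasts, which is exactly why re-adjusting the parameters with every new observation, as R-RTRL does, matters for multi-step accuracy.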
Exact free vibration of multi-step Timoshenko beam system with several attachments
NASA Astrophysics Data System (ADS)
Farghaly, S. H.; El-Sayed, T. A.
2016-05-01
This paper deals with the analysis of the natural frequencies, mode shapes of an axially loaded multi-step Timoshenko beam combined system carrying several attachments. The influence of system design and the proposed sub-system non-dimensional parameters on the combined system characteristics are the major part of this investigation. The effect of material properties, rotary inertia and shear deformation of the beam system for each span are included. The end masses are elastically supported against rotation and translation at an offset point from the point of attachment. A sub-system having two degrees of freedom is located at the beam ends and at any of the intermediate stations and acts as a support and/or a suspension. The boundary conditions of the ordinary differential equation governing the lateral deflections and slope due to bending of the beam system including the shear force term, due to the sub-system, have been formulated. Exact global coefficient matrices for the combined modal frequencies, the modal shape and for the discrete sub-system have been derived. Based on these formulae, detailed parametric studies of the combined system are carried out. The applied mathematical model is valid for wide range of applications especially in mechanical, naval and structural engineering fields.
Shutdown Dose Rate Analysis Using the Multi-Step CADIS Method
Ibrahim, Ahmad M.; Peplow, Douglas E.; Peterson, Joshua L.; Grove, Robert E.
2015-01-01
The Multi-Step Consistent Adjoint Driven Importance Sampling (MS-CADIS) hybrid Monte Carlo (MC)/deterministic radiation transport method was proposed to speed up the shutdown dose rate (SDDR) neutron MC calculation using an importance function that represents the neutron importance to the final SDDR. This work applied the MS-CADIS method to the ITER SDDR benchmark problem. The MS-CADIS method was also used to calculate the SDDR uncertainty resulting from uncertainties in the MC neutron calculation and to determine the degree of undersampling in SDDR calculations because of the limited ability of the MC method to tally detailed spatial and energy distributions. The analysis that used the ITER benchmark problem compared the efficiency of the MS-CADIS method to the traditional approach of using global MC variance reduction techniques for speeding up SDDR neutron MC calculation. Compared to the standard Forward-Weighted-CADIS (FW-CADIS) method, the MS-CADIS method increased the efficiency of the SDDR neutron MC calculation by 69%. The MS-CADIS method also increased the fraction of nonzero scoring mesh tally elements in the space-energy regions of high importance to the final SDDR.
Multi-step process for concentrating magnetic particles in waste sludges
Watson, J.L.
1990-07-10
This invention involves a multi-step, multi-force process for dewatering sludges which have high concentrations of magnetic particles, such as waste sludges generated during steelmaking. This series of processing steps involves (1) mixing a chemical flocculating agent with the sludge; (2) allowing the particles to aggregate under non-turbulent conditions; (3) subjecting the mixture to a magnetic field which will pull the magnetic aggregates in a selected direction, causing them to form a compacted sludge; (4) preferably, decanting the clarified liquid from the compacted sludge; and (5) using filtration to convert the compacted sludge into a cake having a very high solids content. Steps 2 and 3 should be performed simultaneously. This reduces the treatment time and increases the extent of flocculation and the effectiveness of the process. As partially formed aggregates with active flocculating groups are pulled through the mixture by the magnetic field, they will contact other particles and form larger aggregates. This process can increase the solids concentration of steelmaking sludges in an efficient and economic manner, thereby accomplishing either of two goals: (a) it can convert hazardous wastes into economic resources for recycling as furnace feed material, or (b) it can dramatically reduce the volume of waste material which must be disposed. 7 figs.
Detection of Heterogeneous Small Inclusions by a Multi-Step MUSIC Method
NASA Astrophysics Data System (ADS)
Solimene, Raffaele; Dell'Aversano, Angela; Leone, Giovanni
2014-05-01
In this contribution the problem of detecting and localizing scatterers with small (in terms of wavelength) cross sections by collecting their scattered field is addressed. The problem is dealt with for a two-dimensional and scalar configuration where the background is given as a two-layered cylindrical medium. More in detail, while scattered field data are taken in the outermost layer, inclusions are embedded within the inner layer. Moreover, the case of heterogeneous inclusions (i.e., having different scattering coefficients) is addressed. As a pertinent applicative context we identify the diagnosis of concrete pillars, in order to detect and locate rebars, ducts and other small inhomogeneities that can populate the interior of the pillar. The nature of the inclusions influences the scattering coefficients. For example, the field scattered by rebars is stronger than the one due to ducts. Accordingly, it is expected that the more weakly scattering inclusions can be difficult to detect, as their scattered fields tend to be overwhelmed by those of strong scatterers. In order to circumvent this problem, in this contribution a multi-step MUltiple SIgnal Classification (MUSIC) detection algorithm is adopted [1]. In particular, the first stage aims at detecting rebars. Once rebars have been detected, their positions are exploited to update the Green's function and to subtract the scattered field due to their presence. The procedure is repeated until all the inclusions are detected. The analysis is conducted by numerical experiments for a multi-view/multi-static single-frequency configuration, and the synthetic data are generated by an FDTD forward solver. Acknowledgement This work benefited from networking activities carried out within the EU funded COST Action TU1208 "Civil Engineering Applications of Ground Penetrating Radar." [1] R. Solimene, A. Dell'Aversano and G. Leone, "MUSIC algorithms for rebar detection," J. of Geophysics and Engineering, vol. 10, pp. 1
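The core of each MUSIC stage fits in a few lines. The sketch below uses free-space steering vectors and a synthetic single-scatterer geometry for illustration; the paper's version uses the layered-medium Green's function, updated after each detection stage.

```python
import numpy as np

def music_pseudospectrum(K, steering, n_scatterers):
    """Project test steering vectors onto the noise subspace of the
    multistatic data matrix K; the pseudospectrum peaks where a test
    point coincides with a scatterer location."""
    U, _, _ = np.linalg.svd(K)
    noise = U[:, n_scatterers:]                 # noise-subspace basis
    proj = noise.conj().T @ steering            # (M - n) x n_points
    return 1.0 / np.sum(np.abs(proj) ** 2, axis=0)
```

At a true scatterer position the steering vector lies in the signal subspace, so its noise-subspace projection is nearly zero and the pseudospectrum spikes; subtracting the strong scatterers' contribution before re-running the step is what lets the weaker inclusions emerge.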
A Greedy Double Auction Mechanism for Grid Resource Allocation
NASA Astrophysics Data System (ADS)
Ding, Ding; Luo, Siwei; Gao, Zhan
To improve resource utilization and satisfy more users, a Greedy Double Auction Mechanism (GDAM) is proposed to allocate resources in grid environments. GDAM trades resources at discriminatory prices instead of a uniform price, reflecting the variance in requirements for profits and quantities. Moreover, GDAM applies different auction rules to different cases: over-demand, over-supply, and equilibrium of demand and supply. As a new mechanism for grid resource allocation, GDAM is proved to be strategy-proof, economically efficient, weakly budget-balanced and individually rational. Simulation results also confirm that GDAM outperforms the traditional mechanism on both total trade amount and user satisfaction percentage, especially as more users are involved in the auction market.
NASA Technical Reports Server (NTRS)
Dupnick, E.; Wiggins, D.
1980-01-01
The functional specifications, functional design and flow, and the program logic of the GREEDY computer program are described. The GREEDY program is a submodule of the Scheduling Algorithm for Mission Planning and Logistics Evaluation (SAMPLE) program and has been designed as a continuation of the shuttle Mission Payloads (MPLS) program. The MPLS uses input payload data to form a set of feasible payload combinations; from these, GREEDY selects a subset of combinations (a traffic model) so that all payloads can be included without redundancy. The program also provides the user a tutorial option so that the user can choose an alternate traffic model in case a particular traffic model is unacceptable.
Method to Improve Indium Bump Bonding via Indium Oxide Removal Using a Multi-Step Plasma Process
NASA Technical Reports Server (NTRS)
Greer, H. Frank (Inventor); Jones, Todd J. (Inventor); Vasquez, Richard P. (Inventor); Hoenk, Michael E. (Inventor); Dickie, Matthew R. (Inventor); Nikzad, Shouleh (Inventor)
2012-01-01
A process for removing indium oxide from indium bumps in a flip-chip structure to reduce contact resistance, by a multi-step plasma treatment. A first plasma treatment of the indium bumps with an argon, methane and hydrogen plasma reduces indium oxide, and a second plasma treatment with an argon and hydrogen plasma removes residual organics. The multi-step plasma process for removing indium oxide from the indium bumps is more effective in reducing the oxide, and yet does not require the use of halogens, does not change the bump morphology, does not attack the bond pad material or under-bump metallization layers, and creates no new mechanisms for open circuits.
Cook, Ronald Lee; Elliott, Brian John; Luebben, Silvia DeVito; Myers, Andrew William; Smith, Bryan Matthew
2005-05-03
A new class of surface modified particles and a multi-step Michael-type addition surface modification process for the preparation of the same is provided. The multi-step Michael-type addition surface modification process involves two or more reactions to compatibilize particles with various host systems and/or to provide the particles with particular chemical reactivities. The initial step comprises the attachment of a small organic compound to the surface of the inorganic particle. The subsequent steps attach additional compounds to the previously attached organic compounds through reactive organic linking groups. Specifically, these reactive groups are activated carbon-carbon pi bonds and carbon and non-carbon nucleophiles that react via Michael or Michael-type additions.
Automated multi-step purification protocol for Angiotensin-I-Converting-Enzyme (ACE).
Eisele, Thomas; Stressler, Timo; Kranz, Bertolt; Fischer, Lutz
2012-12-12
Highly purified proteins are essential for the investigation of the functional and biochemical properties of proteins. The purification of a protein requires several steps, which are often time-consuming. In our study, the Angiotensin-I-Converting-Enzyme (ACE; EC 3.4.15.1) was solubilised from pig lung without additional detergents, which are commonly used, under mild alkaline conditions in a Tris-HCl buffer (50 mM, pH 9.0) for 48 h. An automation of the ACE purification was performed using a multi-step protocol in less than 8 h, resulting in a purified protein with a specific activity of 37 U mg^(-1) (purification factor 308) and a yield of 23.6%. The automated ACE purification used an ordinary fast-protein-liquid-chromatography (FPLC) system equipped with two additional switching valves. These switching valves were needed for the buffer stream inversion and for the connection of the Superloop™ used for the protein parking. Automated ACE purification was performed using four combined chromatography steps, including two desalting procedures. The purification methods contained two hydrophobic interaction chromatography steps, a Cibacron 3FG-A chromatography step and a strong anion exchange chromatography step. The purified ACE was characterised by sodium dodecyl sulphate-polyacrylamide gel electrophoresis (SDS-PAGE) and native-PAGE. The estimated monomer size of the purified glycosylated ACE was determined to be ∼175 kDa by SDS-PAGE, with the dimeric form at ∼330 kDa as characterised by a native PAGE using a novel activity staining protocol. For the activity staining, the tripeptide l-Phe-Gly-Gly was used as the substrate. The ACE cleaved the dipeptide Gly-Gly, releasing the l-Phe to be oxidised with l-amino acid oxidase. Combined with peroxidase and o-dianisidine, the generated H2O2 stained a brown coloured band. This automated purification protocol can be easily adapted to be used with other protein purification tasks. PMID:23217308
A controlled greedy supervised approach for co-reference resolution on clinical text.
Chowdhury, Md Faisal Mahbub; Zweigenbaum, Pierre
2013-06-01
Identification of co-referent entity mentions inside text has significant importance for other natural language processing (NLP) tasks (e.g. event linking). However, this task, known as co-reference resolution, remains a complex problem, partly because of the confusion over different evaluation metrics and partly because the well-researched existing methodologies do not perform well on new domains such as clinical records. This paper presents a variant of the influential mention-pair model for co-reference resolution. Using a series of linguistically and semantically motivated constraints, the proposed approach controls generation of less-informative/sub-optimal training and test instances. Additionally, the approach also introduces some aggressive greedy strategies in chain clustering. The proposed approach has been tested on the official test corpus of the recently held i2b2/VA 2011 challenge. It achieves an unweighted average F1 score of 0.895, calculated from multiple evaluation metrics (MUC, B(3) and CEAF scores). These results are comparable to the best systems of the challenge. What makes our proposed system distinct is that it also achieves high average F1 scores for each individual chain type (Test: 0.897, Person: 0.852, PROBLEM: 0.855, TREATMENT: 0.884). Unlike other works, it obtains good scores for each of the individual metrics rather than being biased towards a particular metric. PMID:23562650
Halim, Amanatuzzakiah Abdul; Szita, Nicolas; Baganz, Frank
2013-12-01
The concept of de novo metabolic engineering through novel synthetic pathways offers new directions for multi-step enzymatic synthesis of complex molecules. This has been complemented by recent progress in performing enzymatic reactions using immobilized enzyme microreactors (IEMR). This work is concerned with the construction of de novo designed enzyme pathways in a microreactor synthesizing chiral molecules. An interesting compound, commonly used as a building block in several pharmaceutical syntheses, is a single diastereoisomer of 2-amino-1,3,4-butanetriol (ABT). This chiral amino alcohol can be synthesized from simple achiral substrates using two enzymes, transketolase (TK) and transaminase (TAm). Here we describe the development of an IEMR using His6-tagged TK and TAm immobilized onto Ni-NTA agarose beads and packed into tubes to enable multi-step enzyme reactions. The kinetic parameters of both enzymes were first determined using single IEMRs evaluated by a kinetic model developed for packed-bed reactors. The Km(app) for both enzymes appeared to be flow-rate dependent, while the turnover number kcat was reduced 3-fold compared to solution-phase TK and TAm reactions. For the multi-step enzyme reaction, single IEMRs were cascaded in series, whereby the first enzyme, TK, catalyzed a model reaction of lithium hydroxypyruvate (HPA) and glycolaldehyde (GA) to L-erythrulose (ERY), and the second unit of the IEMR with immobilized TAm converted ERY into ABT using (S)-α-methylbenzylamine (MBA) as the amine donor. With an initial substrate concentration mixture of 60 mM (HPA and GA each) and 6 mM (MBA), the coupled reaction reached approximately 83% conversion in 20 min at the lowest flow rate. The ability to synthesize a chiral pharmaceutical intermediate, ABT, in a relatively short time shows this IEMR system to be a powerful tool for the construction and evaluation of de novo pathways as well as for the determination of enzyme kinetics. PMID:24055435
Multi-Step Ka/Ka Dichroic Plate with Rounded Corners for NASA's 34m Beam Waveguide Antenna
NASA Technical Reports Server (NTRS)
Veruttipong, Watt; Khayatian, Behrouz; Hoppe, Daniel; Long, Ezra
2013-01-01
A multi-step Ka/Ka dichroic plate Frequency Selective Surface (FSS) structure is designed, manufactured and tested for use in NASA's Deep Space Network (DSN) 34m Beam Waveguide (BWG) antennas. The proposed design allows ease of manufacturing and the ability to handle the increase in transmit power (reflected off the FSS) of the DSN BWG antennas from 20 kW to 100 kW. The dichroic was designed using HFSS, and the results agree well with measured data, considering the manufacturing tolerances that could be achieved on the dichroic.
GreedyMAX-type Algorithms for the Maximum Independent Set Problem
NASA Astrophysics Data System (ADS)
Borowiecki, Piotr; Göring, Frank
The maximum independent set problem for a simple graph G = (V,E) is to find the largest subset of pairwise nonadjacent vertices. The problem is known to be NP-hard and it is also hard to approximate. In this article we introduce a non-negative integer valued function p defined on the vertex set V(G), called a potential function of a graph G, while P(G) = max_{v ∈ V(G)} p(v) is called the potential of G. For any graph P(G) ≤ Δ(G), where Δ(G) is the maximum degree of G. Moreover, Δ(G) - P(G) may be arbitrarily large. The potential of a vertex gives closer insight into the properties of its neighborhood, which leads to the definition of the family of GreedyMAX-type algorithms, having the classical GreedyMAX algorithm as their origin. We establish a lower bound of 1/(P + 1) for the performance ratio of GreedyMAX-type algorithms, which compares favorably with the bound of 1/(Δ + 1) known to hold for GreedyMAX. The cardinality of an independent set generated by any GreedyMAX-type algorithm is at least ∑_{v ∈ V(G)} 1/(p(v) + 1), which strengthens the bounds of Turán and Caro-Wei stated in terms of vertex degrees.
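A minimal sketch of the classical GreedyMAX heuristic that the article builds on (repeatedly delete a maximum-degree vertex; the surviving vertices are independent), together with the degree-based Caro-Wei lower bound that the potential-function bound strengthens. The example graph is ours, not from the article:

```python
def greedy_max_independent_set(adj):
    """GreedyMAX: repeatedly delete a maximum-degree vertex until no
    edges remain; the surviving vertices form an independent set."""
    adj = {v: set(nbrs) for v, nbrs in adj.items()}  # work on a copy
    while any(adj[v] for v in adj):
        v = max(adj, key=lambda u: len(adj[u]))
        for u in adj[v]:
            adj[u].discard(v)
        del adj[v]
    return set(adj)

def caro_wei_bound(adj):
    """Caro-Wei lower bound on the independence number: sum of 1/(deg(v)+1)."""
    return sum(1.0 / (len(nbrs) + 1) for nbrs in adj.values())

# Small example: a 5-cycle with the chord 1-3
g = {1: {2, 3, 5}, 2: {1, 3}, 3: {1, 2, 4}, 4: {3, 5}, 5: {1, 4}}
ind = greedy_max_independent_set(g)
bound = caro_wei_bound(g)
```

For this graph the returned set is independent and its size respects the Caro-Wei bound, as the article's stronger potential-based bound guarantees a fortiori.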
Lautenschlager, Karin; Hwang, Chiachi; Ling, Fangqiong; Liu, Wen-Tso; Boon, Nico; Köster, Oliver; Egli, Thomas; Hammes, Frederik
2014-10-01
Indigenous bacterial communities are essential for biofiltration processes in drinking water treatment systems. In this study, we examined the microbial community composition and abundance of three different biofilter types (rapid sand, granular activated carbon, and slow sand filters) and their respective effluents in a full-scale, multi-step treatment plant (Zürich, Switzerland). Detailed analysis of organic carbon degradation underpinned biodegradation as the primary function of the biofilter biomass. The biomass was present in concentrations ranging between 2-5 × 10^15 cells/m^3 in all filters but was phylogenetically, enzymatically and metabolically diverse. Based on 16S rRNA gene-based 454 pyrosequencing analysis of microbial community composition, similar microbial taxa (predominantly Proteobacteria, Planctomycetes, Acidobacteria, Bacteroidetes, Nitrospira and Chloroflexi) were present in all biofilters and in their respective effluents, but the ratio of microbial taxa differed in each filter type. This change was also reflected in the cluster analysis, which revealed a difference of 50-60% in microbial community composition between the different filter types. This study documents the direct influence of the filter biomass on the microbial community composition of the final drinking water, particularly when the water is distributed without post-disinfection. The results provide new insights into the complexity of indigenous bacteria colonizing drinking water systems, especially in the different biofilters of a multi-step treatment plant. PMID:24937356
NASA Astrophysics Data System (ADS)
Chang, Fi-John; Tsai, Meng-Jung
2016-04-01
Accurate multi-step-ahead inflow forecasting during typhoon periods is extremely crucial for real-time reservoir flood control. We propose a spatio-temporal lumping of radar rainfall for modeling inflow forecasts to mitigate time-lag problems and improve forecasting accuracy. Spatial aggregation of radar cells is made based on the sub-catchment partitioning obtained from the Self-Organizing Map (SOM), and then flood forecasting is made by Adaptive Neuro Fuzzy Inference System (ANFIS) models coupled with a 2-staged Gamma Test (2-GT) procedure that identifies the optimal non-trivial rainfall inputs. The Shihmen Reservoir in northern Taiwan is used as a case study. The results show that the proposed methods can, in general, precisely make 1- to 4-hour-ahead forecasts, and the lag time between predicted and observed flood peaks can be mitigated. The constructed ANFIS models with only two fuzzy if-then rules can effectively categorize inputs into two levels (i.e. high and low) and provide an insightful view of the rainfall-runoff process, which demonstrates their capability in modeling the complex rainfall-runoff process. In addition, the confidence level of forecasts with acceptable error can reach as high as 97% at horizon t+1 and 77% at horizon t+4, which demonstrates the models' reliability and supports better decisions on real-time reservoir operation during typhoon events.
Convex dynamics: Unavoidable difficulties in bounding some greedy algorithms
NASA Astrophysics Data System (ADS)
Nowicki, Tomasz; Tresser, Charles
2004-03-01
A greedy algorithm for scheduling and digital printing with inputs in a convex polytope, and vertices of this polytope as successive outputs, has recently been proven to be bounded for any convex polytope in any dimension. This boundedness property follows readily from the existence of some invariant region for a dynamical system equivalent to the algorithm, which is what one proves. While the proof, and some constructions of invariant regions that can be made to depend on a single parameter, are reasonably simple for convex polygons in the plane, the proof of boundedness gets quite complicated in dimension three and above. We show here that such complexity is somehow justified by proving that the most natural generalization of the construction that works for polygons does not work in any dimension above two, even if we allow for as many parameters as there are faces. We first prove that some polytopes in dimension greater than two admit no invariant region to which they are combinatorially equivalent. We then modify these examples to get polytopes such that no invariant region can be obtained by pushing out the borders of the half spaces that intersect to form the polytope. We also show that another mechanism prevents some simplices (the simplest polytopes in any dimension) from admitting invariant regions to which they would be similar. By contrast in dimension two, one can always get an invariant region by pushing these borders far enough in some correlated way; for instance, pushing all borders by the same distance builds an invariant region for any polygon if the push is at a distance big enough for that polygon. To motivate the examples that we provide, we discuss briefly the bifurcations of polyhedra associated with pushing half spaces in parallel to themselves. In dimension three, the elementary codimension one bifurcation resembles the unfolding of the elementary degenerate singularity for codimension one foliations on surfaces. As the subject of this
NASA Astrophysics Data System (ADS)
Tommerup, Søren; Endelt, Benny; Nielsen, Karl Brian
2013-12-01
This paper investigates process control possibilities obtained from a new tool concept for adaptive blank holder force (BHF) distribution. The investigation concerns the concept's application to a multi-step deep drawing process, exemplified by the NUMISHEET 2014 benchmark 2: Springback of draw-redraw pan. An actuator system in which several cavities are embedded into the blank holder plate is used. By independently controlling the pressure of hydraulic fluid in these cavities, a controlled deflection of the blank holder plate surface can be achieved, whereby the distribution of the BHF can be controlled. Using design of experiments, a full 3-level factorial experiment is conducted with respect to the cavity pressures, and the effects and interactions are evaluated.
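Enumerating the runs of such a full factorial design takes only a few lines; the three-cavity layout and the pressure levels below are illustrative assumptions, not the benchmark's actual actuator geometry:

```python
from itertools import product

def full_factorial(levels_per_factor):
    """Enumerate every run of a full factorial design.
    levels_per_factor maps factor name -> list of its levels."""
    names = list(levels_per_factor)
    return [dict(zip(names, combo))
            for combo in product(*(levels_per_factor[n] for n in names))]

# Hypothetical: three cavity pressures, each at 3 levels (MPa)
design = full_factorial({
    "cavity_1": [5, 10, 15],
    "cavity_2": [5, 10, 15],
    "cavity_3": [5, 10, 15],
})
```

Three factors at three levels give 3^3 = 27 runs, matching the size of a full 3-level factorial experiment over three cavity pressures.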
Dauber, Eva-Maria; Kratzer, Adelgunde; Neuhuber, Franz; Parson, Walther; Klintschar, Michael; Bär, Walter; Mayr, Wolfgang R
2012-05-01
Well defined estimates of mutation rates are a prerequisite for the use of short tandem repeat (STR) loci in relationship testing. We investigated 65 isolated genetic inconsistencies, which were observed within 50,796 allelic transfers at 23 STR loci (ACTBP2 (SE33), CD4, CSF1PO, F13A1, F13B, FES, FGA, vWA, TH01, TPOX, D2S1338, D3S1358, D5S818, D7S820, D8S1132, D8S1179, D12S391, D13S317, D16S539, D17S976, D18S51, D19S433, D21S11) in Caucasoid families residing in Austria and Switzerland. Sequencing data of repeat and flanking regions and the median of all theoretically possible mutational steps provided valuable information for characterizing the mutational events with regard to parental origin, change of repeat number (mutational step size) and direction of mutation (losses and gains of repeats). Apart from predominant single-step mutations, including one case with a double genetic inconsistency, two double-step and two apparent four-step mutations could be identified. More losses than gains of repeats and more mutations originating from the paternal than the maternal lineage were observed (31 losses, 22 gains, 12 losses or gains; 47 paternal and 11 maternal mutations, with 7 of unclear parental origin). The mutation rate in the paternal germline was 3.3 times higher than in the maternal germline. The results of our study show that, apart from the vast majority of single-step mutations, rare multi-step mutations can be observed. Therefore, the interpretation of mutational events should not rigidly be restricted to the shortest possible mutational step, because rare but true multi-step mutations can easily be overlooked if haplotype analysis is not possible. PMID:21873136
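The conventional shortest-step assignment that the authors caution against applying rigidly can be expressed in one line: the inferred step size is the smallest repeat-count difference between the child's inconsistent allele and any parental allele. The alleles below are hypothetical, for illustration only:

```python
def minimal_step(child_allele, parent_alleles):
    """Smallest repeat-count change that could turn one parental allele
    into the child's inconsistent allele (the shortest-step assumption
    that can mask true multi-step mutations)."""
    return min(abs(child_allele - a) for a in parent_alleles)

# Hypothetical STR locus: parents carry 14/16 and 15/17 repeats,
# and the child shows a 13-repeat allele inconsistent with both.
step = minimal_step(13, [14, 16, 15, 17])
```

Under this convention a 13-repeat child allele is scored as a single-step loss from the 14-repeat parental allele, even though a rarer multi-step event from another allele cannot be excluded without haplotype analysis.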
Kinahan, David J; Kearney, Sinéad M; Dimov, Nikolay; Glynn, Macdara T; Ducrée, Jens
2014-07-01
The centrifugal "lab-on-a-disc" concept has proven to have great potential for process integration of bioanalytical assays, in particular where ease of use, ruggedness, portability, fast turn-around time and cost efficiency are of paramount importance. Yet, as all liquids residing on the disc are exposed to the same centrifugal field, an inherent challenge of these systems remains the automation of multi-step, multi-liquid sample processing and subsequent detection. In order to orchestrate the underlying bioanalytical protocols, an ample palette of rotationally and externally actuated valving schemes has been developed. While excelling in the level of flow control, externally actuated valves require interaction with peripheral instrumentation, thus compromising the conceptual simplicity of the centrifugal platform. In turn, for rotationally controlled schemes, such as common capillary burst valves, typical manufacturing tolerances tend to limit the number of consecutive laboratory unit operations (LUOs) that can be automated on a single disc. In this paper, a major advancement on recently established dissolvable film (DF) valving is presented; for the very first time, a liquid handling sequence can be controlled in response to the completion of a preceding liquid transfer event, i.e. completely independently of external stimulus or changes in the speed of disc rotation. The basic, event-triggered valve configuration is further adapted to leverage conditional, large-scale process integration. First, we demonstrate a fluidic network on a disc encompassing 10 discrete valving steps including logical relationships such as an AND-conditional as well as serial and parallel flow control. Then we present a disc which is capable of implementing common laboratory unit operations such as metering and selective routing of flows. Finally, as a pilot study, these functions are integrated on a single disc to automate a common, multi-step lab protocol for the extraction of total RNA from
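The event-triggered cascade can be caricatured as a dependency graph in which a valve opens only once the transfers it waits on have completed; an AND-conditional valve simply lists two prerequisites. This is an illustrative abstraction of the control logic only, not the paper's microfluidic design, and the valve names are invented:

```python
def run_sequence(valves):
    """Fire event-triggered valves in rounds: a valve opens only when all
    events it depends on have completed, and its opening emits its own
    completion event (no external actuation or spin-speed changes)."""
    completed = set()
    opened = []
    progress = True
    while progress:
        progress = False
        for name, deps in valves.items():
            if name not in opened and all(d in completed for d in deps):
                opened.append(name)
                completed.add(name)
                progress = True
    return opened

# Hypothetical network: v3 is an AND-conditional on v1 and v2
valves = {"v1": [], "v2": ["v1"], "v3": ["v1", "v2"]}
order = run_sequence(valves)
```

The sequence resolves itself purely from completion events, mirroring how each dissolvable-film valve is triggered by the preceding liquid transfer rather than by external stimulus.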
A multi-step system for screening and localization of hard exudates in retinal images
NASA Astrophysics Data System (ADS)
Bopardikar, Ajit S.; Bhola, Vishal; Raghavendra, B. S.; Narayanan, Rangavittal
2012-03-01
The number of people being affected by diabetes mellitus worldwide is increasing at an alarming rate. Monitoring of the diabetic condition and its effects on the human body is therefore of great importance. Of particular interest is diabetic retinopathy (DR), which is a result of prolonged, unchecked diabetes and affects the visual system. DR is a leading cause of blindness throughout the world. At any point in time, 25-44% of people with diabetes are afflicted by DR. Automation of the screening and monitoring process for DR is therefore essential for efficient utilization of healthcare resources and optimized treatment of the affected individuals. Such automation would use retinal images and detect the presence of specific artifacts such as hard exudates, hemorrhages and soft exudates (that may appear in the image) to gauge the severity of DR. In this paper, we focus on the detection of hard exudates. We propose a two-step system that consists of a screening step that classifies retinal images as normal or abnormal based on the presence of hard exudates, and a detection stage that localizes these artifacts in an abnormal retinal image. The proposed screening step automatically detects the presence of hard exudates with a high sensitivity and positive predictive value (PPV). The detection/localization step uses a k-means based clustering approach to localize hard exudates in the retinal image. Suitable feature vectors are chosen based on their ability to isolate hard exudates while minimizing false detections. The algorithm was tested on a benchmark dataset (DIARETDB1) and was seen to provide superior performance compared to existing methods. The two-step process described in this paper can be embedded in a tele-ophthalmology system to aid speedy detection and diagnosis of the severity of DR.
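A k-means based localization step of this kind can be sketched on scalar pixel intensities, where bright exudate-like pixels separate from the darker background into their own cluster. The intensity values and the choice k=2 below are illustrative assumptions, not the paper's actual feature vectors:

```python
def kmeans_1d(values, k, iters=20):
    """Plain k-means on scalar features (e.g. pixel intensities):
    assign each value to its nearest centroid, then recompute centroids."""
    srt = sorted(values)
    # spread the initial centroids over the sorted value range (k >= 2)
    centroids = [srt[i * (len(srt) - 1) // (k - 1)] for i in range(k)]
    clusters = [[] for _ in centroids]
    for _ in range(iters):
        clusters = [[] for _ in centroids]
        for v in values:
            i = min(range(len(centroids)), key=lambda j: abs(v - centroids[j]))
            clusters[i].append(v)
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return centroids, clusters

# Hypothetical intensities: dark retinal background vs bright exudates
pixels = [10, 12, 11, 13, 200, 210, 205]
centroids, clusters = kmeans_1d(pixels, k=2)
```

The three bright pixels end up in one cluster and the four background pixels in the other, which is the grouping a localization stage would then refine with richer feature vectors.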
GreedEx: A Visualization Tool for Experimentation and Discovery Learning of Greedy Algorithms
ERIC Educational Resources Information Center
Velazquez-Iturbide, J. A.; Debdi, O.; Esteban-Sanchez, N.; Pizarro, C.
2013-01-01
Several years ago we presented an experimental, discovery-learning approach to the active learning of greedy algorithms. This paper presents GreedEx, a visualization tool developed to support this didactic method. The paper states the design goals of GreedEx, makes explicit the major design decisions adopted, and describes its main characteristics…
The Greedy Little Boy Teacher's Manual [With Units for Levels A and B].
ERIC Educational Resources Information Center
Otto, Dale; George, Larry
The Center for the Study of Migrant and Indian Education has recognized the need to develop special materials to improve the non-Indian's understanding of the differences he observes in his Indian classmates and to promote a better understanding by American Indian children of their unique cultural heritage. The Greedy Little Boy is a traditional…
Marcon, Magda; Keller, Daniel; Wurnig, Moritz C; Eberhardt, Christian; Weiger, Markus; Eberli, Daniel; Boss, Andreas
2016-07-01
The separation and quantification of collagen-bound water (CBW) and pore water (PW) components of the cortical bone signal are important because of their different contributions to bone mechanical properties. Ultrashort TE (UTE) imaging can be used to exploit the transverse relaxation from CBW and PW, allowing their quantification. We tested, for the first time, the feasibility of UTE measurements in mice for the separation and quantification of the transverse relaxation of CBW and PW in vivo using three different approaches for T2* determination. UTE sequences were acquired at 4.7 T in six mice with 10 different TEs (50-5000 μs). The transverse relaxation times of CBW (T2*cbw) and PW (T2*pw) and the CBW fraction (bwf) were computed using a mono-exponential (i), a standard bi-exponential (ii) and a new multi-step bi-exponential (iii) approach. Regions of interest were drawn at multiple levels of the femur and vertebral body cortical bone for each mouse. The sum of the normalized squared residuals (Res) and the homogeneity of variance were tested to compare the different methods. In the femur, approach (i) yielded a mean T2* ± standard deviation (SD) of 657 ± 234 μs. With approach (ii), T2*cbw, T2*pw and bwf were 464 ± 153 μs, 15777 ± 10864 μs and 57.6 ± 9.9%, respectively. For approach (iii), T2*cbw, T2*pw and bwf were 387 ± 108 μs, 7534 ± 2765 μs and 42.5 ± 6.2%, respectively. Similar values were obtained from vertebral bodies. Res with approach (ii) was lower than with the two other approaches (p < 0.007), but the T2*pw and bwf variance was lower with approach (iii) than with approach (ii) (p < 0.048). We demonstrated that the separation and quantification of cortical bone water components with UTE sequences is feasible in vivo in mouse models. The direct bi-exponential approach exhibited the best approximation to the measured signal curve with the lowest residuals; however, the newly
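One way to read the "multi-step bi-exponential" idea is: fit the slow pore-water component on the long-TE tail first, subtract it, and then fit the fast collagen-bound component on the short TEs. The sketch below applies that two-step scheme to synthetic decay data with made-up T2* values and amplitudes; it is not the authors' exact fitting procedure:

```python
import math

def loglinear_fit(tes, sig):
    """Least-squares fit of ln(S) = ln(A) - TE/T2*, returning (A, T2*)."""
    ys = [math.log(s) for s in sig]
    n = len(tes)
    mx, my = sum(tes) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(tes, ys)) / \
            sum((x - mx) ** 2 for x in tes)
    return math.exp(my - slope * mx), -1.0 / slope

# Synthetic bi-exponential decay (all values hypothetical, in microseconds)
tes = [50, 100, 200, 400, 800, 1600, 3200, 5000]
t2_fast_true, t2_slow_true, a_fast_true, a_slow_true = 400.0, 8000.0, 0.6, 0.4
sig = [a_fast_true * math.exp(-t / t2_fast_true)
       + a_slow_true * math.exp(-t / t2_slow_true) for t in tes]

# Step 1: fit the slow component on the long-TE tail (fast pool has decayed)
tail = [t for t in tes if t >= 1600]
a_s, t2_s = loglinear_fit(tail, [sig[tes.index(t)] for t in tail])

# Step 2: subtract it and fit the fast component on the short TEs
resid = [s - a_s * math.exp(-t / t2_s) for t, s in zip(tes, sig)]
head = [(t, r) for t, r in zip(tes, resid) if t <= 400 and r > 0]
a_f, t2_f = loglinear_fit([t for t, _ in head], [r for _, r in head])
```

The recovered time constants land near the true fast and slow values, illustrating why a stepwise decomposition can stabilize the poorly conditioned direct bi-exponential fit.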
NASA Astrophysics Data System (ADS)
Migiyama, Go; Sugimura, Atsuhiko; Osa, Atsushi; Miike, Hidetoshi
Digital cameras have been advancing rapidly. However, the captured image often differs from the sight image generated when the same scenery is seen with the naked eye: photographs of wide-dynamic-range scenes contain blown-out highlights and crushed blacks, problems that hardly arise in human vision. These artifacts are caused by the difference in dynamic range between the image sensor installed in a digital camera (such as a CCD or CMOS sensor) and the human visual system; the dynamic range of the captured image is narrower than that of the sight image. In order to solve this problem, we propose an automatic method to decide an effective exposure range based on the superposition of edges, and we integrate multi-step exposure images using this method. In addition, we attempt to erase pseudo-edges using a process that blends exposure values. As a result, we obtain a pseudo wide dynamic range image automatically.
Segmenting the Femoral Head and Acetabulum in the Hip Joint Automatically Using a Multi-Step Scheme
NASA Astrophysics Data System (ADS)
Wang, Ji; Cheng, Yuanzhi; Fu, Yili; Zhou, Shengjun; Tamura, Shinichi
We describe a multi-step approach for automatic segmentation of the femoral head and the acetabulum in the hip joint from three-dimensional (3D) CT images. Our segmentation method consists of the following steps: 1) construction of the valley-emphasized image by subtracting valleys from the original images; 2) initial segmentation of the bone regions by using conventional techniques including the initial threshold and binary morphological operations from the valley-emphasized image; 3) further segmentation of the bone regions by using the iterative adaptive classification with the initial segmentation result; 4) detection of the rough bone boundaries based on the segmented bone regions; 5) 3D reconstruction of the bone surface using the rough bone boundaries obtained in step 4) by a network of triangles; 6) correction of all vertices of the 3D bone surface based on the normal direction of vertices; 7) adjustment of the bone surface based on the corrected vertices. We evaluated our approach on 35 CT patient data sets. Our experimental results show that our segmentation algorithm is more accurate and robust against noise than other conventional approaches for automatic segmentation of the femoral head and the acetabulum. Average root-mean-square (RMS) distance from manual reference segmentations created by experienced users was approximately 0.68 mm (in-plane resolution of the CT data).
Multiwavelength Observations of a Slow Rise, Multi-Step X1.6 Flare and the Associated Eruption
NASA Astrophysics Data System (ADS)
Yurchyshyn, V.
2015-12-01
Using multi-wavelength observations we studied a slow rise, multi-step X1.6 flare that began on November 7, 2014 as a localized eruption of core fields inside a δ-sunspot and later engulfed the entire active region. This flare event was associated with the formation of two systems of post eruption arcades (PEAs) and several J-shaped flare ribbons showing extremely fine details and irreversible changes in the photospheric magnetic fields, and it was accompanied by a fast and wide coronal mass ejection. Data from the Solar Dynamics Observatory and IRIS spacecraft, along with ground based data from the New Solar Telescope (NST), present evidence that i) the flare and the eruption were directly triggered by a flux emergence that occurred inside a δ-sunspot at the boundary between two umbrae; ii) this event represented an example of the in-situ formation of an unstable flux rope observed only in hot AIA channels (131 and 94 Å) and LASCO C2 coronagraph images; iii) the global PEA system spanned the entire AR and was due to global scale reconnection occurring at heights of about one solar radius, indicating the global spatial and temporal scale of the eruption.
Blaum, K; Geppert, C; Schreiber, W G; Hengstler, J G; Müller, P; Nörtershäuser, W; Wendt, K; Bushaw, B A
2002-04-01
The application of high-resolution multi-step resonance ionization mass spectrometry (RIMS) to the trace determination of the rare earth element gadolinium is described. Utilizing three-step resonant excitation into an autoionizing level, both isobaric and isotopic selectivities of >10^7 were attained. An overall detection efficiency of approximately 10^-7 and an isotope-specific detection limit of 1.5 × 10^9 atoms have been demonstrated. When targeting the major isotope ^158Gd, this corresponds to a total Gd detection limit of 1.6 pg. Additionally, linear response has been demonstrated over a dynamic range of six orders of magnitude. The method has been used to determine the Gd content in various normal and tumor tissue samples, taken from a laboratory mouse shortly after injection of gadolinium diethylenetriaminepentaacetic acid dimeglumine (Gd-DTPA), which is used as a contrast agent for magnetic resonance imaging (MRI). The RIMS results show Gd concentrations that vary by more than two orders of magnitude (0.07-11.5 μg mL^-1) depending on the tissue type. This variability is similar to that observed in MRI scans that depict Gd-DTPA content in the mouse prior to dissection, and illustrates the potential for quantitative trace analysis in microsamples of biomedical materials. PMID:12012186
NASA Astrophysics Data System (ADS)
Shimizu, M.; Yamada, T.; Sasaki, K.; Takada, A.; Nomura, H.; Iguchi, F.; Yugami, H.
2015-04-01
Controlling the thermal radiation spectra of materials is one of the promising ways to advance energy system efficiency. It is well known that the thermal radiation spectrum can be controlled through the introduction of periodic surface microstructures. Herein, a method for the large-area fabrication of periodic microstructures based on multi-step wet etching is described. The method consists of three main steps, i.e., resist mask fabrication via photolithography, electrochemical wet etching, and side wall protection. Using this method, high-aspect micro-holes (0.82 aspect ratio) arrayed with hexagonal symmetry were fabricated on a stainless steel substrate. The conventional wet etching process method typically provides an aspect ratio of 0.3. The optical absorption peak attributed to the fabricated micro-hole array appeared at 0.8 μm, and the peak absorbance exceeded 0.8 for the micro-holes with a 0.82 aspect ratio. While argon plasma etching in a vacuum chamber was used in the present study for the formation of the protective layer, atmospheric plasma etching should be possible and will expand the applicability of this new method for the large-area fabrication of high-aspect materials.
Multi-step reaction mechanism for F atom interactions with organosilicate glass and SiOx films
NASA Astrophysics Data System (ADS)
Mankelevich, Yuri A.; Voronina, Ekaterina N.; Rakhimova, Tatyana V.; Palov, Alexander P.; Lopaev, Dmitry V.; Zyryanov, Sergey M.; Baklanov, Mikhail R.
2016-09-01
An ab initio approach with the density functional theory (DFT) method was used to study F atom interactions with organosilicate glass (OSG)-based low-k dielectric films. Because of the complexity and significant modifications of the OSG surface structure during the interaction with radicals and etching, a variety of reactions between the surface groups and thermal F atoms can occur. For OSG film etching and damage, we propose a multi-step mechanism based on DFT static and dynamic simulations, which is consistent with the previously reported experimental observations. The important part of the proposed mechanism is the formation of pentavalent Si atoms on the OSG surface due to a quasi-chemisorption of the incident F atoms. The revealed mechanism of F atom incorporation into the OSG matrix explains the experimentally observed phenomena of fast fluorination without significant modification of the chemical structure. We demonstrate that the pentavalent Si states induce the weakening of adjacent Si–O bonds and their breaking under F atom flux. The calculated results allow us to propose a set of elementary chemical reactions of successive removal of CH3 and CH2 groups and fluorinated SiOx matrix etching.
A multi-step reaction model for ignition of fully-dense Al-CuO nanocomposite powders
NASA Astrophysics Data System (ADS)
Stamatis, D.; Ermoline, A.; Dreizin, E. L.
2012-12-01
A multi-step reaction model is developed to describe heterogeneous processes occurring upon heating of an Al-CuO nanocomposite material prepared by arrested reactive milling. The reaction model couples a previously derived Cabrera-Mott oxidation mechanism describing initial, low temperature processes and an aluminium oxidation model including the formation of different alumina polymorphs at increased film thicknesses and higher temperatures. The reaction model is tuned using traces measured by differential scanning calorimetry. Ignition is studied for thin powder layers and individual particles using, respectively, heated filament (heating rates of 10^3-10^4 K s^-1) and laser ignition (heating rate ~10^6 K s^-1) experiments. The developed heterogeneous reaction model predicts a sharp temperature increase, which can be associated with ignition, when the laser power approaches the experimental ignition threshold. In experiments, particles ignited by the laser beam are observed to explode, indicating a substantial gas release accompanying ignition. For the heated filament experiments, the model predicts exothermic reactions at the temperatures at which ignition is observed experimentally; however, strong thermal contact between the metal filament and powder prevents the model from predicting the thermal runaway. It is suggested that oxygen gas release from decomposing CuO, as observed from particles exploding upon ignition in the laser beam, disrupts the thermal contact between the powder and filament; this phenomenon must be included in the filament ignition model to enable prediction of the temperature runaway.
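The runaway behavior such a model predicts can be caricatured with a single-step Arrhenius source term integrated by explicit Euler: at a laser-like heating rate the self-heating term eventually dominates and the temperature diverges, while at a filament-like rate it does not within the same time window. The kinetic constants here are invented for illustration and are not taken from the paper's multi-step Al-CuO model:

```python
import math

def heat_particle(heating_rate, t_end, dt=1e-6):
    """Euler integration of dT/dt = heating_rate + A*exp(-Ea/(R*T)):
    external heating plus a lumped single-step Arrhenius self-heating term."""
    R = 8.314      # J/(mol K)
    Ea = 150e3     # J/mol, hypothetical activation energy
    A = 1e12       # K/s,   hypothetical lumped pre-exponential factor
    T, t = 300.0, 0.0
    while t < t_end and T < 3000.0:   # cap the runaway at 3000 K
        T += dt * (heating_rate + A * math.exp(-Ea / (R * T)))
        t += dt
    return T, t

# Laser-like heating (~1e6 K/s) runs away; filament-like (~1e3 K/s) does not
T_fast, t_fast = heat_particle(1e6, t_end=5e-3)
T_slow, _ = heat_particle(1e3, t_end=5e-3)
```

The contrast between the two rates loosely mirrors the paper's observation that the laser case shows a sharp predicted temperature rise while the filament case, moderated by thermal contact, does not.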
Ragazzi, M; Rada, E C
2012-10-01
In the sector of municipal solid waste management the debate on the performances of conventional and novel thermo-chemical technologies is still relevant. When a plant must be constructed, decision makers often select a technology prior to analyzing the local environmental impact of the available options, as this type of study is generally developed once the design of the plant has been carried out. Additionally, in the literature there is a lack of comparative analyses of the contributions to local air pollution from different technologies. The present study offers a multi-step approach, based on pollutant emission factors and atmospheric dilution coefficients, for a local comparative analysis. With this approach it is possible to check whether some assumptions related to the advantages of the novel thermochemical technologies, in terms of local direct impact on air quality, can be applied to municipal solid waste treatment. The selected processes concern combustion, gasification and pyrolysis, alone or in combination. The pollutants considered are both carcinogenic and non-carcinogenic. A case study is presented concerning the location of a plant in an alpine region and its contribution to the local air pollution. Results show that differences among technologies are smaller than expected. The performance of each technology is discussed in detail. PMID:22795304
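At its core, this kind of comparison scales each technology's pollutant emission factors by site-specific atmospheric dilution coefficients to obtain a local concentration contribution. The factors and coefficients below are arbitrary placeholders with made-up units, not the study's data:

```python
def local_impact(emission_factors, dilution):
    """Local concentration contribution per technology and pollutant:
    emission factor scaled by the site's atmospheric dilution coefficient."""
    return {tech: {p: ef * dilution[p] for p, ef in factors.items()}
            for tech, factors in emission_factors.items()}

# Hypothetical emission factors (g per tonne of waste) and dilution terms
factors = {
    "combustion":   {"NOx": 800.0, "PCDD_F": 5e-8},
    "gasification": {"NOx": 500.0, "PCDD_F": 4e-8},
}
dilution = {"NOx": 1e-7, "PCDD_F": 1e-7}  # illustrative site coefficients
impact = local_impact(factors, dilution)
```

Ranking technologies on these products, pollutant by pollutant, is the comparison step; with similar dilution terms the differences track the emission factors, which is consistent with the study's finding that the technologies differ less than expected.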
ERIC Educational Resources Information Center
Mechling, Linda C.; Ayres, Kevin M.; Bryant, Kathryn J.; Foster, Ashley L.
2014-01-01
The current study evaluated a relatively new video-based procedure, continuous video modeling (CVM), to teach multi-step cleaning tasks to high school students with moderate intellectual disability. CVM, in contrast to video modeling and video prompting, allows repetition of the video model (looping) as many times as needed while the user completes…
NASA Astrophysics Data System (ADS)
Werisch, Stefan; Lennartz, Franz; Bieberle, Andre
2013-04-01
Dynamic Multi-Step Outflow (MSO) experiments serve to estimate the parameters of soil hydraulic functions such as the Mualem-van Genuchten model. The soil hydraulic parameters are derived from outflow records and corresponding matric potential measurements, commonly from a single tensiometer, using inverse modeling techniques. We modified the experimental set-up to allow for simultaneous measurements of the matric potential with three tensiometers and of the water content using a high-resolution gamma-ray densiometry measurement system (Bieberle et al., 2007; Hampel et al., 2007). Different combinations of the measured time series were used for the estimation of effective soil hydraulic properties, representing different degrees of information about the "hydraulic reality" of the sample. The inverse modeling task was solved with the multimethod search algorithm AMALGAM (Vrugt et al., 2007) in combination with the Hydrus1D model (Šimúnek et al., 2008). Subsequently, the resulting effective soil hydraulic parameters allow the simulation of the MSO experiment and the comparison of model results with observations. The results show that the information of a single tensiometer together with the outflow record results in a set of effective soil hydraulic parameters producing an overall good agreement between the simulation and the observation for the location of that tensiometer. Significantly deviating results are obtained for the other tensiometer positions using this parameter set. Including more information, such as additional matric potential measurements with the corresponding water contents, in the optimization procedure leads to different, more representative hydraulic parameters which improve the overall agreement significantly. These findings indicate that more information about the soil hydraulic state variables in space and time is necessary to obtain effective soil hydraulic properties of soil core samples. Bieberle, A., Kronenberg, J., Schleicher, E
Greedy heuristic algorithm for solving series of EEE components classification problems
NASA Astrophysics Data System (ADS)
Kazakovtsev, A. L.; Antamoshkin, A. N.; Fedosov, V. V.
2016-04-01
Algorithms based on agglomerative greedy heuristics demonstrate precise and stable results for clustering problems based on k-means and p-median models. Such algorithms are successfully employed in the production of specialized EEE components for use in space systems, which includes testing each EEE device and detecting homogeneous production batches of the EEE components from the test results using p-median models. In this paper, the authors propose a new version of the genetic algorithm with the greedy agglomerative heuristic which allows solving a series of problems. Such an algorithm is useful for solving the k-means and p-median clustering problems when the number of clusters is unknown. Computational experiments on real data show that the precision of the result decreases only insignificantly in comparison with the initial genetic algorithm for solving a single problem.
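The flavour of a greedy agglomerative heuristic for k-means can be sketched as follows: start from an oversized set of candidate centres (for example, the union of two parents' centre sets inside a genetic algorithm) and greedily delete the centre whose removal increases the total squared error least, until the target number of centres remains. This is a simplified reading of the general approach, not the authors' exact procedure.

```python
import numpy as np

def greedy_agglomerative_kmeans(points, init_centers, k):
    """Greedily shrink an excess set of centers down to k by repeatedly
    dropping the center whose removal least increases the total SSE."""
    centers = list(init_centers)

    def sse(cs):
        # squared distance of every point to its nearest center, summed
        d = np.linalg.norm(points[:, None, :] - np.asarray(cs)[None, :, :], axis=2)
        return float((d.min(axis=1) ** 2).sum())

    while len(centers) > k:
        best_i, best_cost = None, None
        for i in range(len(centers)):
            trial = centers[:i] + centers[i + 1:]
            cost = sse(trial)
            if best_cost is None or cost < best_cost:
                best_i, best_cost = i, cost
        centers.pop(best_i)
    return np.asarray(centers), sse(centers)
```

In a full implementation each deletion would be followed by a local k-means refinement of the surviving centres.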
A Subspace Pursuit–based Iterative Greedy Hierarchical Solution to the Neuromagnetic Inverse Problem
Babadi, Behtash; Obregon-Henao, Gabriel; Lamus, Camilo; Hämäläinen, Matti S.; Brown, Emery N.; Purdon, Patrick L.
2013-01-01
Magnetoencephalography (MEG) is an important non-invasive method for studying activity within the human brain. Source localization methods can be used to estimate spatiotemporal activity from MEG measurements with high temporal resolution, but the spatial resolution of these estimates is poor due to the ill-posed nature of the MEG inverse problem. Recent developments in source localization methodology have emphasized temporal as well as spatial constraints to improve source localization accuracy, but these methods can be computationally intense. Solutions emphasizing spatial sparsity hold tremendous promise, since the underlying neurophysiological processes generating MEG signals are often sparse in nature, whether in the form of focal sources, or distributed sources representing large-scale functional networks. Recent developments in the theory of compressed sensing (CS) provide a rigorous framework to estimate signals with sparse structure. In particular, a class of CS algorithms referred to as greedy pursuit algorithms can provide both high recovery accuracy and low computational complexity. Greedy pursuit algorithms are difficult to apply directly to the MEG inverse problem because of the high-dimensional structure of the MEG source space and the high spatial correlation in MEG measurements. In this paper, we develop a novel greedy pursuit algorithm for sparse MEG source localization that overcomes these fundamental problems. This algorithm, which we refer to as the Subspace Pursuit-based Iterative Greedy Hierarchical (SPIGH) inverse solution, exhibits very low computational complexity while achieving very high localization accuracy. We evaluate the performance of the proposed algorithm using comprehensive simulations, as well as the analysis of human MEG data during spontaneous brain activity and somatosensory stimuli. These studies reveal substantial performance gains provided by the SPIGH algorithm in terms of computational complexity, localization accuracy
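The core subspace pursuit iteration on which SPIGH builds can be sketched in a few lines: expand the current support with the K columns best correlated with the residual, solve a least-squares problem on the enlarged support, then prune back to the K largest coefficients. This is the generic compressed-sensing algorithm, not the hierarchical MEG-specific extension, and the fixed iteration count is a simplification of the usual residual-based stopping rule.

```python
import numpy as np

def subspace_pursuit(Phi, y, K, n_iter=10):
    """Generic subspace pursuit for y ≈ Phi @ x with x K-sparse."""
    n = Phi.shape[1]
    support = np.sort(np.argsort(np.abs(Phi.T @ y))[-K:])  # initial support
    for _ in range(n_iter):
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        r = y - Phi[:, support] @ coef                      # current residual
        # expand with the K columns most correlated with the residual
        cand = np.union1d(support, np.argsort(np.abs(Phi.T @ r))[-K:])
        b, *_ = np.linalg.lstsq(Phi[:, cand], y, rcond=None)
        support = np.sort(cand[np.argsort(np.abs(b))[-K:]])  # prune back to K
    x = np.zeros(n)
    coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
    x[support] = coef
    return x
```

The least-squares projections are what distinguish subspace pursuit from simpler matching-pursuit variants and give it its backtracking ability.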
GREEDI---The computerization of the DOE/DOD environmental data bank
Adams, C R; Kephart, E M
1988-01-01
One of the major responsibilities of Sandia National Laboratories is to develop shock and vibration specifications for system mechanical, electrical, and pyrotechnic components. The data required to generate these specifications are collected from finite element analyses, from laboratory simulation experiments with hardware, and from environmental tests. The production of the component specifications requires the analysis, comparison, and continual updating of these data. Sandia National Laboratories has also maintained the DOE/DOD Environmental Data Bank for over 25 years to assist in its shock and vibration efforts as well as to maintain data for several other types of environments. A means of facilitating shared access to engineering analysis data and providing an integrated environment to perform shock and vibration data analysis tasks was required. An interactive computer code and database system named GREEDI (a Graphical Resource for an Engineering Environmental Database Implementation) was developed and implemented. This transformed the DOE/DOD Environmental Data Bank from a card index system into an easily accessed computerized engineering database tool that can manage data in digitized form. GREEDI was created by interconnecting the SPEEDI (Sandia Partitioned Engineering Environmental Database Implementation) code, and the GRAFAID code, an interactive X-Y data analysis tool. An overview of the GREEDI software system is presented. 10 refs.
Emergence of social cohesion in a model society of greedy, mobile individuals.
Roca, Carlos P; Helbing, Dirk
2011-07-12
Human wellbeing in modern societies relies on social cohesion, which can be characterized by high levels of cooperation and a large number of social ties. Both features, however, are frequently challenged by individual self-interest. In fact, the stability of social and economic systems can suddenly break down as the recent financial crisis and outbreaks of civil wars illustrate. To understand the conditions for the emergence and robustness of social cohesion, we simulate the creation of public goods among mobile agents, assuming that behavioral changes are determined by individual satisfaction. Specifically, we study a generalized win-stay-lose-shift learning model, which is only based on previous experience and rules out greenbeard effects that would allow individuals to guess future gains. The most noteworthy aspect of this model is that it promotes cooperation in social dilemma situations despite very low information requirements and without assuming imitation, a shadow of the future, reputation effects, signaling, or punishment. We find that moderate greediness favors social cohesion by a coevolution between cooperation and spatial organization, additionally showing that those cooperation-enforcing levels of greediness can be evolutionarily selected. However, a maladaptive trend of increasing greediness, although enhancing individuals' returns in the beginning, eventually causes cooperation and social relationships to fall apart. Our model is, therefore, expected to shed light on the long-standing problem of the emergence and stability of cooperative behavior. PMID:21709245
Pishva, Ehsan; Drukker, Marjan; Viechtbauer, Wolfgang; Decoster, Jeroen; Collip, Dina; van Winkel, Ruud; Wichers, Marieke; Jacobs, Nele; Thiery, Evert; Derom, Catherine; Geschwind, Nicole; van den Hove, Daniel; Lataster, Tineke; Myin-Germeys, Inez; van Os, Jim
2014-01-01
Recent human and animal studies suggest that epigenetic mechanisms mediate the impact of environment on development of mental disorders. Therefore, we hypothesized that polymorphisms in epigenetic-regulatory genes impact stress-induced emotional changes. A multi-step, multi-sample gene-environment interaction analysis was conducted to test whether 31 single nucleotide polymorphisms (SNPs) in epigenetic-regulatory genes, i.e. three DNA methyltransferase genes DNMT1, DNMT3A, DNMT3B, and methylenetetrahydrofolate reductase (MTHFR), moderate emotional responses to stressful and pleasant stimuli in daily life as measured by Experience Sampling Methodology (ESM). In the first step, main and interactive effects were tested in a sample of 112 healthy individuals. Significant associations in this discovery sample were then investigated in a population-based sample of 434 individuals for replication. SNPs showing significant effects in both the discovery and replication samples were subsequently tested in three other samples of: (i) 85 unaffected siblings of patients with psychosis, (ii) 110 patients with psychotic disorders, and (iii) 126 patients with a history of major depressive disorder. Multilevel linear regression analyses showed no significant association between SNPs and negative affect or positive affect. No SNPs moderated the effect of pleasant stimuli on positive affect. Three SNPs of DNMT3A (rs11683424, rs1465764, rs1465825) and one SNP of MTHFR (rs1801131) moderated the effect of stressful events on negative affect. Only rs11683424 of DNMT3A showed consistent directions of effect in the majority of the five samples. These data provide the first evidence that emotional responses to daily life stressors may be moderated by genetic variation in the genes involved in the epigenetic machinery. PMID:24967710
Gajos, Katarzyna; Petrou, Panagiota; Budkowski, Andrzej; Awsiuk, Kamil; Bernasik, Andrzej; Misiakos, Konstantinos; Rysz, Jakub; Raptis, Ioannis; Kakabakos, Sotirios
2015-02-21
Three multi-step multi-molecular approaches using the biotin-streptavidin system to contact-print DNA arrays on SiO2 surfaces modified with (3-glycidoxypropyl)trimethoxysilane are examined after each deposition/reaction step by atomic force microscopy, X-ray photoelectron spectroscopy and time of flight secondary ion mass spectrometry. Surface modification involves the spotting of preformed conjugates of biotinylated oligonucleotides with streptavidin onto surfaces coated with biotinylated bovine serum albumin b-BSA (approach I) or the spotting of biotinylated oligonucleotides onto a streptavidin coating, the latter prepared through a reaction with immobilized b-BSA (approach II) or direct adsorption (approach III). AFM micrographs, quantified by autocorrelation and height histogram parameters (e.g. roughness), reveal uniform coverage after each modification step with distinct nanostructures after the reaction of biotinylated BSA with streptavidin or of a streptavidin conjugate with biotinylated oligonucleotides. XPS relates the immobilization of biomolecules with covalent binding to the epoxy-silanized surface. Protein coverage, estimated from photoelectron attenuation, shows that regarding streptavidin the highest and the lowest immobilization efficiency is achieved by following approaches I and III, respectively, as confirmed by TOF-SIMS microanalysis. The size of the DNA spot reflects the contact radius of the printed droplet and increases with protein coverage (and roughness) prior to the spotting, as epoxy-silanized surfaces are hardly hydrophilic. Representative TOF-SIMS images show sub-millimeter spots: uniform for approach I, doughnut-like (with a small non-zero minimum) for approach II, both with coffee-rings or peak-shaped for approach III. Spot features, originating from pinned contact lines and DNA surface binding and revealed by complementary molecular distributions (all material, DNA, streptavidin, BSA, epoxy, SiO2), indicate two modes of droplet
Multi-Step Fibrinogen Binding to the Integrin αIIbβ3 Detected Using Force Spectroscopy
Litvinov, Rustem I.; Bennett, Joel S.; Weisel, John W.; Shuman, Henry
2005-01-01
The regulated ability of integrin αIIbβ3 to bind fibrinogen plays a crucial role in platelet aggregation and hemostasis. We have developed a model system based on laser tweezers, enabling us to measure specific rupture forces needed to separate single receptor-ligand complexes. First of all, we performed a thorough and statistically representative analysis of nonspecific protein-protein binding versus specific αIIbβ3-fibrinogen interactions in combination with experimental evidence for single-molecule measurements. The rupture force distribution of purified αIIbβ3 and fibrinogen, covalently attached to underlying surfaces, ranged from ∼20 to 150 pN. This distribution could be fit with a sum of an exponential curve for weak to moderate (20–60 pN) forces, and a Gaussian curve for strong (>60 pN) rupture forces that peaked at 80–90 pN. The interactions corresponding to these rupture force regimes differed in their susceptibility to αIIbβ3 antagonists or Mn2+, an αIIbβ3 activator. Varying the surface density of fibrinogen changed the total binding probability linearly >3.5-fold but did not affect the shape of the rupture force distribution, indicating that the measurements represent single-molecule binding. The yield strength of αIIbβ3-fibrinogen interactions was independent of the loading rate (160–16,000 pN/s), whereas their binding probability markedly correlated with the duration of contact. The aggregate of data provides evidence for complex multi-step binding/unbinding pathways of αIIbβ3 and fibrinogen revealed at the single-molecule level. PMID:16040750
Guo, Xiao-Xi; Hu, Wei; Liu, Yuan; Sun, Su-Qin; Gu, Dong-Chen; He, Helen; Xu, Chang-Hua; Wang, Xi-Chang
2016-02-01
BPO is often added to wheat flour as a flour improver, but its excessive use and its effect on edibility are of increasing concern. A multi-step IR macro-fingerprinting was employed to identify BPO in wheat flour and unveil its changes during storage. BPO contained in wheat flour (<3.0 mg/kg) was difficult to identify by infrared spectra, with correlation coefficients between wheat flour and wheat flour samples containing BPO all close to 0.98. By applying second derivative spectroscopy, obvious differences between wheat flour and wheat flour containing BPO before and after storage in the range of 1500-1400 cm(-1) were disclosed. The peak at 1450 cm(-1), which belonged to BPO, was blue-shifted to 1453 cm(-1) (1455 cm(-1)), which belonged to benzoic acid, after one week of storage, indicating that BPO changed into benzoic acid during storage. Moreover, when using two-dimensional correlation infrared spectroscopy (2DCOS-IR) to track changes of BPO in wheat flour (0.05 mg/g) within one week, the intensities of auto-peaks at 1781 cm(-1) and 669 cm(-1), which belonged to BPO and benzoic acid, respectively, changed inversely, indicating that BPO was decomposed into benzoic acid. Another auto-peak at 1767 cm(-1), which does not belong to benzoic acid, was also rising simultaneously. By heating perturbation treatment of BPO in wheat flour based on 2DCOS-IR and spectral subtraction analysis, it was found that BPO in wheat flour not only decomposed into benzoic acid and benzoate, but also produced other deleterious substances, e.g., benzene. This study offers a promising, time-saving method with minimal pretreatment to identify BPO in wheat flour and its chemical products during storage in a holistic manner. PMID:26519920
NASA Astrophysics Data System (ADS)
Brochero, D.; Anctil, F.; Gagné, C.
2011-03-01
An uncertainty cascade model applied to streamflow forecasting seeks to evaluate the different sources of uncertainty in the complex rainfall-runoff process. The current trend focuses on the combination of Meteorological Ensemble Prediction Systems (MEPS) and hydrological model(s). However, the number of members of such a hydrological ensemble prediction system (HEPS) may rapidly increase to a level that may not be operationally sustainable. This article evaluates a 94% simplification of an initial 800-member HEPS, forcing 16 lumped rainfall-runoff models with the European Centre for Medium-Range Weather Forecasts (ECMWF) MEPS. More specifically, it tests the time (local) and space (regional) generalization ability of the simplified 50-member HEPS obtained using a methodology that combines four main aspects: (i) optimizing information of the short-length series using k-fold cross-validation, (ii) implementing a backward greedy selection technique, (iii) guiding the selection with a linear combination of diversified scores, and (iv) formulating combination case studies at the cross-validation stage. At the local level, the transferability of the 9th-day member selection was proven for the other 8 forecast horizons at an 82% success rate. At the regional level, a good performance was also achieved when the 50-member HEPS was applied to a neighbouring catchment within the same cluster. Diversity, defined as hydrological model complementarities addressing different aspects of a forecast, was identified as the critical factor for proper selection applications.
NASA Astrophysics Data System (ADS)
Yang, Cui-Li; Tang, Kit-Sang
2011-12-01
By considering the eigenratio of the Laplacian matrix as the synchronizability measure, this paper presents an efficient method to enhance the synchronizability of undirected and unweighted networks via rewiring. The rewiring method combines the use of tabu search and a local greedy algorithm so that an effective search of solutions can be achieved. As demonstrated in the simulation results, the performance of the proposed approach outperforms the existing methods for a large variety of initial networks, both in terms of speed and quality of solutions.
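The synchronizability measure used here, the Laplacian eigenratio λ_N/λ_2, is straightforward to compute directly (a smaller ratio means a more synchronizable network); the tabu-search rewiring itself is not reproduced in this sketch.

```python
import numpy as np

def eigenratio(adj):
    """Eigenratio lambda_N / lambda_2 of the graph Laplacian L = D - A;
    smaller values indicate better synchronizability."""
    adj = np.asarray(adj, float)
    lap = np.diag(adj.sum(axis=1)) - adj
    lam = np.linalg.eigvalsh(lap)   # ascending eigenvalues; lam[0] ~ 0
    return lam[-1] / lam[1]

# The complete graph K4 is optimally synchronizable (ratio 1), while the
# 4-cycle is worse (ratio 2): the kind of gap a rewiring method tries to close.
K4 = np.ones((4, 4)) - np.eye(4)
C4 = np.array([[0, 1, 0, 1], [1, 0, 1, 0], [0, 1, 0, 1], [1, 0, 1, 0]])
```

A greedy rewiring step would evaluate this ratio for every candidate single-edge rewiring and keep the one that reduces it most, with tabu search used to escape local minima.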
An Improved Greedy Search Algorithm for the Development of a Phonetically Rich Speech Corpus
NASA Astrophysics Data System (ADS)
Zhang, Jin-Song; Nakamura, Satoshi
An efficient way to develop large-scale speech corpora is to collect phonetically rich ones that have high coverage of phonetic contextual units. The sentence set, usually called the minimum set, should have a small text size in order to reduce the collection cost. It can be selected by a greedy search algorithm from a large mother text corpus. With the inclusion of more and more phonetic contextual effects, the number of different phonetic contextual units increases dramatically, making the search a non-trivial issue. In order to improve the search efficiency, we previously proposed a least-to-most-ordered greedy search based on the conventional algorithms. This paper evaluates these algorithms in order to show their different characteristics. The experimental results showed that the least-to-most-ordered methods achieved smaller objective sets at significantly less computation time than the conventional ones. This algorithm has already been applied to the development of a number of speech corpora, including a large-scale phonetically rich Chinese speech corpus, ATRPTH, which played an important role in developing our multi-language translation system.
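The conventional greedy baseline that the least-to-most-ordered variant improves on can be sketched as follows: repeatedly add the sentence covering the most still-uncovered units. The sentence and unit representations here are illustrative assumptions, not the paper's data structures.

```python
def greedy_sentence_select(sentences, target_units):
    """Conventional greedy cover: repeatedly add the sentence that covers the
    most still-uncovered phonetic units until every unit is covered.
    `sentences` is a list of (text, set_of_units) pairs."""
    uncovered = set(target_units)
    chosen = []
    while uncovered:
        text, units = max(sentences, key=lambda s: len(uncovered & s[1]))
        if not uncovered & units:
            break  # remaining units cannot be covered by any sentence
        chosen.append(text)
        uncovered -= units
    return chosen
```

The least-to-most ordering of the paper changes which units drive the search first (rarest units before common ones), which is what yields the reported speed and size gains.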
Greedy Set Cover Field Selection for Multi-object Spectroscopy in C++ MPI
NASA Astrophysics Data System (ADS)
Stenborg, T. N.
2015-09-01
Multi-object spectrographs allow efficient observation of clustered targets. Observational programs of many targets not encompassed within a telescope's field of view, however, require multiple pointings. Here, a greedy set cover algorithmic approach to efficient field selection in such a scenario is examined. The goal of this approach is not to minimize the total number of pointings needed to cover a given target set, but rather maximize the observational return for a restricted number of pointings. Telescope field of view and maximum targets per field are input parameters, allowing algorithm application to observation planning for the current range of active multi-object spectrographs (e.g. the 2dF/AAOmega, Fiber Large Array Multi Element Spectrograph, Fiber Multi-Object Spectrograph, Hectochelle, Hectospec and Hydra systems), and for any future systems. A parallel version of the algorithm is implemented with the message passing interface, facilitating execution on both shared and distributed memory systems.
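The serial core of such a field-selection scheme can be sketched as a greedy maximum-coverage loop: with a fixed budget of pointings, repeatedly centre a field where it captures the most still-unobserved targets, capped at the instrument's fibre limit. Treating targets as 2-D points with a Euclidean field of view, and centring fields on targets, are simplifying assumptions of this sketch; the MPI parallelization is omitted.

```python
import numpy as np

def greedy_fields(targets, fov_radius, max_per_field, n_pointings):
    """Greedy max-coverage field selection for a limited number of pointings.
    Returns a list of (centre_target_index, covered_target_indices)."""
    pts = np.asarray(targets, float)
    remaining = set(range(len(pts)))
    plan = []
    for _ in range(n_pointings):
        best_centre, best_cover = None, []
        for c in sorted(remaining):
            idx = sorted(remaining)
            d = np.linalg.norm(pts[idx] - pts[c], axis=1)
            # targets inside the field of view, nearest first
            in_fov = [t for t, dist in sorted(zip(idx, d), key=lambda p: p[1])
                      if dist <= fov_radius]
            cover = in_fov[:max_per_field]  # fibre/slit limit per field
            if len(cover) > len(best_cover):
                best_centre, best_cover = c, cover
        if not best_cover:
            break
        plan.append((best_centre, best_cover))
        remaining -= set(best_cover)
    return plan
```

Because the budget of pointings is fixed, this maximizes observational return rather than minimizing the number of pointings, matching the goal stated in the abstract.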
NASA Astrophysics Data System (ADS)
Shigeta, Takemi; Young, D. L.; Liu, Chein-Shan
2012-08-01
The mixed boundary value problem of the Laplace equation is considered. The method of fundamental solutions (MFS) approximates the exact solution to the Laplace equation by a linear combination of independent fundamental solutions with different source points. The accuracy of the numerical solution depends on the distribution of source points. In this paper, a weighted greedy QR decomposition (GQRD) is proposed to choose significant source points by introducing a weighting parameter. An index called the average degree of approximation is defined to show the efficiency of the proposed method. Numerical experiments show that the numerical solution tends to be more accurate when the average degree of approximation is larger, and that the proposed method can yield more accurate solutions with fewer source points than the conventional GQRD.
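The greedy QR idea, ranking candidate source points by how much new information their columns contribute, can be sketched without the weighting as a pivoted Gram-Schmidt column selection; the weighting parameter that distinguishes the proposed method is omitted in this sketch.

```python
import numpy as np

def greedy_column_select(A, k):
    """Greedy (pivoted-QR-style) selection of k columns of A: at each step
    pick the column with the largest norm after projecting out the span of
    the columns already chosen."""
    R = A.astype(float).copy()
    chosen = []
    for _ in range(k):
        j = int(np.argmax(np.linalg.norm(R, axis=0)))
        chosen.append(j)
        q = R[:, j] / np.linalg.norm(R[:, j])  # assumes k <= rank(A)
        R = R - np.outer(q, q @ R)             # deflate the chosen direction
    return chosen
```

In the MFS setting the columns of A are the fundamental solutions evaluated on the boundary, so the selected columns correspond to the most significant source points.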
A Fast Greedy Sparse Method of Current Sources Reconstruction for Ventricular Torsion Detection
NASA Astrophysics Data System (ADS)
Bing, Lu; Jiang, Shiqin; Chen, Mengpei; Zhao, Chen; Grönemeyer, D.; Hailer, B.; Van Leeuwen, P.
2015-09-01
A fast greedy sparse (FGS) method of cardiac equivalent current source reconstruction is developed for non-invasive detection and quantitative analysis of individual left ventricular torsion. The cardiac magnetic field inverse problem is solved based on a distributed source model. The analysis of real 61-channel magnetocardiogram (MCG) data demonstrates that one or two dominant current sources with larger strength can be identified efficiently by the FGS algorithm. The left ventricular torsion during systole is then examined on the basis of the x, y and z coordinate curves and angle changes of the reconstructed dominant current sources. The advantages of this method are that it is non-invasive and visual, with higher sensitivity and resolution. It may enable the clinical detection of cardiac systolic and ejection dysfunction.
MotifMiner: A Table Driven Greedy Algorithm for DNA Motif Mining
NASA Astrophysics Data System (ADS)
Seeja, K. R.; Alam, M. A.; Jain, S. K.
DNA motif discovery is a much explored problem in functional genomics. This paper describes a table-driven greedy algorithm for discovering regulatory motifs in the promoter sequences of co-expressed genes. The proposed algorithm searches both DNA strands for common patterns or motifs. The inputs to the algorithm are a set of promoter sequences, the motif length and a minimum Information Content. The algorithm generates subsequences of the given length from the shortest input promoter sequence. It stores these subsequences and their reverse complements in a table. It then searches the remaining sequences for good matches to these subsequences. The Information Content score is used to measure the goodness of the motifs. The algorithm has been tested with synthetic and real data, and the results are promising. The algorithm could discover meaningful motifs from the muscle-specific regulatory sequences.
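The table-driven scheme can be sketched as below: enumerate all length-L windows of the shortest sequence (and their reverse complements) into a table, then score each candidate against the remaining sequences. Note that the goodness score here is a simple total mismatch count rather than the Information Content used in the paper, so this is a structural sketch only.

```python
COMP = str.maketrans("ACGT", "TGCA")

def revcomp(s):
    """Reverse complement of a DNA string."""
    return s.translate(COMP)[::-1]

def motif_candidates(sequences, L):
    """Table-driven greedy sketch: build a table of length-L windows of the
    shortest sequence plus their reverse complements, then return the
    candidate with the fewest total mismatches across the other sequences.
    Assumes every sequence has length >= L."""
    shortest = min(sequences, key=len)
    others = [s for s in sequences if s is not shortest]
    table = set()
    for i in range(len(shortest) - L + 1):
        w = shortest[i:i + L]
        table.add(w)
        table.add(revcomp(w))

    def matches(motif, seq):
        # best (minimum-mismatch) alignment of motif against seq
        return min(sum(a != b for a, b in zip(motif, seq[i:i + L]))
                   for i in range(len(seq) - L + 1))

    scored = {m: sum(matches(m, s) for s in others) for m in table}
    return min(scored, key=scored.get)
```

Replacing the mismatch count with an Information Content score over the aligned occurrences would recover the paper's selection criterion.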
2011-01-01
Background Position-specific priors (PSPs) have been used with success to boost EM and Gibbs sampler-based motif discovery algorithms. PSP information has been computed from different sources, including orthologous conservation, DNA duplex stability, and nucleosome positioning. Prior information has not yet been exploited in the context of combinatorial algorithms. Moreover, priors have been used only independently, and the gain of combining priors from different sources has not yet been studied. Results We extend RISOTTO, a combinatorial algorithm for motif discovery, by post-processing its output with a greedy procedure that uses prior information. PSPs from different sources are combined into a scoring criterion that guides the greedy search procedure. The resulting method, called GRISOTTO, was evaluated over 156 yeast TF ChIP-chip sequence-sets commonly used to benchmark prior-based motif discovery algorithms. Results show that GRISOTTO is at least as accurate as twelve other state-of-the-art approaches for the same task, even without combining priors. Furthermore, by considering combined priors, GRISOTTO is considerably more accurate than the state-of-the-art approaches for the same task. We also show that PSPs improve GRISOTTO's ability to retrieve motifs from mouse ChIP-seq data, indicating that the proposed algorithm can be applied to data from a different technology and for a higher eukaryote. Conclusions The conclusions of this work are twofold. First, post-processing the output of combinatorial algorithms by incorporating prior information leads to a very efficient and effective motif discovery method. Second, combining priors from different sources is even more beneficial than considering them separately. PMID:21513505
Death of the (traveling) salesman: primates do not show clear evidence of multi-step route planning.
Janson, Charles
2014-05-01
Several comparative studies have linked larger brain size to a fruit-eating diet in primates and other animals. The general explanation for this correlation is that fruit is a complex resource base, consisting of many discrete patches of many species, each with distinct nutritional traits, the production of which changes predictably both within and between seasons. Using this information to devise optimal spatial foraging strategies is among the most difficult problems to solve in all of mathematics, a version of the famous Traveling Salesman Problem. Several authors have suggested that primates might use their large brains and complex cognition to plan foraging strategies that approximate optimal solutions to this problem. Three empirical studies have examined how captive primates move when confronted with the simplest version of the problem: a spatial array of equally valuable goals. These studies have all concluded that the subjects remember many food source locations and show very efficient travel paths; some authors also inferred that the subjects may plan their movements based on considering combinations of three or more future goals at a time. This analysis re-examines critically the claims of planned movement sequences from the evidence presented. The efficiency of observed travel paths is largely consistent with use of the simplest of foraging rules, such as visiting the nearest unused "known" resource. Detailed movement sequences by test subjects are most consistent with a rule that mentally sums spatial information from all unused resources in a given trial into a single "gravity" measure that guides movements to one destination at a time. PMID:23934927
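The "nearest unused known resource" rule that the analysis finds sufficient to explain the observed travel paths is trivially simple to state in code; the coordinates are illustrative.

```python
import math

def nearest_resource_path(start, resources):
    """Simplest foraging rule: always travel to the nearest
    not-yet-visited known resource."""
    pos, todo, path = start, list(resources), []
    while todo:
        nxt = min(todo, key=lambda r: math.dist(pos, r))
        path.append(nxt)
        todo.remove(nxt)
        pos = nxt
    return path
```

The "gravity" rule discussed in the abstract would instead sum spatial pulls from all unused resources into a single heading before choosing one destination at a time.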
Zhang, Wenle; Liu, Jianchang; Wang, Honghai
2015-09-01
This paper deals with the ultra-fast formation control problem of high-order discrete-time multi-agent systems. Using local neighbor-error knowledge, a novel ultra-fast protocol with multi-step predictive information and a self-feedback term is proposed. The asymptotic convergence factor is improved by a power of q+1 compared to the routine protocol. To some extent, the ultra-fast algorithm overcomes the influence of the communication topology on the convergence speed. Furthermore, some sufficient conditions are given. These conditions decouple the design of the synchronizing gains from the detailed graph properties, and explicitly reveal how the agent dynamics and the communication graph jointly affect the ultra-fast formation ability. Finally, some simulations are provided to illustrate the effectiveness of the theoretical results. PMID:26051965
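For contrast with the ultra-fast protocol, the routine first-order neighbour-error protocol x(t+1) = x(t) - eps * L x(t) can be simulated in a few lines; the multi-step predictive and self-feedback terms of the proposed protocol, and the high-order agent dynamics, are not reproduced in this sketch.

```python
import numpy as np

def consensus_run(adj, x0, eps, steps):
    """Routine first-order consensus on neighbour errors:
    x(t+1) = x(t) - eps * L @ x(t), with L = D - A the graph Laplacian.
    Converges to the average of x0 when eps < 2 / lambda_max(L)."""
    adj = np.asarray(adj, float)
    lap = np.diag(adj.sum(axis=1)) - adj
    x = np.asarray(x0, float)
    for _ in range(steps):
        x = x - eps * (lap @ x)
    return x
```

The convergence factor of this routine protocol is governed by the Laplacian spectrum; the paper's multi-step predictive terms raise that factor to the power q+1, which is the source of the "ultra-fast" speed-up.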
NASA Astrophysics Data System (ADS)
Dyar, Scott M.; Smeigh, Amanda L.; Karlen, Steven D.; Young, Ryan M.; Wasielewski, Michael R.
2015-06-01
The excited state and redox properties of a new bi-functional perylene redox chromophore, 2,3-dihydro-1-azabenzo[cd]perylene (DABP), are described. Perylene has been widely used in electron donor-acceptor molecules in fields ranging from artificial photosynthesis to molecular spintronics. However, attaching multiple redox components to perylene to carry out multi-step electron transfer reactions often produces hard to separate regioisomers, which complicate data analysis. The use of DABP provides a strategy to retain the electronic properties of perylene, yet eliminate regioisomers. Ultrafast photo-initiated single- and two-step electron transfer reactions in three linear electron donor-acceptor systems incorporating DABP are described to illustrate its utility.
NASA Astrophysics Data System (ADS)
Lin, Chun-Cheng; Tang, Jian-Fu; Su, Hsiu-Hsien; Hong, Cheng-Shong; Huang, Chih-Yu; Chu, Sheng-Yuan
2016-06-01
The multi-step resistive switching (RS) behavior of a unipolar Pt/Li0.06Zn0.94O/Pt resistive random access memory (RRAM) device is investigated. It is found that the RRAM device exhibits normal, 2-, 3-, and 4-step RESET behaviors under different compliance currents. The transport mechanism within the device is investigated by means of current-voltage curves, in-situ transmission electron microscopy, and electrochemical impedance spectroscopy. It is shown that the ion transport mechanism is dominated by Ohmic behavior under low electric fields and the Poole-Frenkel emission effect (normal RS behavior) or Li+ ion diffusion (2-, 3-, and 4-step RESET behaviors) under high electric fields.
Zhao, Huiying; Xu, Jin; Ghebrezadik, Helen; Hylands, Peter J
2015-10-10
Ginseng, mainly Asian ginseng and American ginseng, is the most widely consumed herbal product in the world. However, the existing quality control method is not adequate: adulteration is often seen in the market. In this study, 31 batches of ginseng from Chinese stores were analyzed using (1)H NMR metabolite profiles together with multi-step principal component analysis. The most abundant metabolites, sugars, were excluded from the NMR spectra after the first principal component analysis, in order to reveal differences contributed by less abundant metabolites. For the first time, robust, distinctive and representative differences of Asian ginseng from American ginseng were found, and the key metabolites responsible were identified as sucrose, glucose, arginine, choline, 2-oxoglutarate, and malate. Differences between wild and cultivated ginseng were identified as ginsenosides. A substitute cultivated American ginseng was noticed. These results demonstrated that the combination of (1)H NMR and PCA is effective for the quality control of ginseng. PMID:26037159
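The multi-step PCA procedure (run PCA, drop the variables that dominate the first component, re-run) can be sketched on synthetic data; the 100 "buckets", the 10-column sugar region, and all magnitudes below are invented stand-ins for the NMR profiles:

```python
import numpy as np

rng = np.random.default_rng(0)
# 40 hypothetical samples x 100 spectral buckets; buckets 0-9 mimic
# dominant sugar signals, the rest carry subtler metabolite variation
X = rng.normal(0.0, 0.1, (40, 100))
X[:, :10] += rng.normal(0.0, 5.0, (40, 1))   # sugars dominate the variance

def pc1_loadings(X):
    """Loadings of the first principal component via SVD of centered data."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Vt[0]

# Step 1: first PCA; PC1 loadings concentrate on the sugar buckets
dominant = np.argsort(np.abs(pc1_loadings(X)))[-10:]

# Step 2: exclude the dominant buckets and repeat PCA on the remainder,
# revealing structure from the less abundant metabolites
X_reduced = np.delete(X, dominant, axis=1)
pc1_refined = pc1_loadings(X_reduced)
```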
Erdemir, Ugur; Sancakli, Hande Sar; Yildiz, Esra
2012-01-01
Objectives: The objective of this in vitro study was to evaluate the surface roughness and micro-hardness of three novel resin composites containing nanoparticles after polishing with one-step and conventional multi-step polishing systems. Methods: A total of 126 specimens (10 × 2 mm) were prepared in a metal mold using three nano-composites (Filtek Supreme XT, Ceram-X, and Grandio), 21 specimens of each resin composite for both tests (n=63 for each test). Following light curing, seven specimens from each group received no polishing treatment and served as controls for both tests. The specimens were randomly polished using PoGo and Sof-Lex systems for 30 seconds after being wet-ground with 1200-grit silicon carbide paper. The mean surface roughness of each polished specimen was determined with a profilometer. The microhardness was determined using a Vickers hardness measuring instrument with a 200-g load and a 15-second dwell time. The data were analyzed using the Kruskal-Wallis test and the post hoc Dunn's multiple comparison test at a significance level of .05. Results: Among all materials, the smoothest surfaces were obtained under a matrix strip (control) (P<.05). There were no statistically significant differences among polishing systems in the resin composites for surface roughness (P>.05). The lowest hardness values for the three resin composites were obtained with a matrix strip, and there was a statistically significant difference compared with other polishing systems (P<.05), whereas no statistically significant differences were observed between the polishing systems (P>.05). Conclusion: The current one-step polishing system appears to be as effective as multi-step systems and may be preferable for polishing resin composite restorations. PMID:22509124
Vaisocherová-Lísalová, Hana; Víšová, Ivana; Ermini, Maria Laura; Špringer, Tomáš; Song, Xue Chadtová; Mrázek, Jan; Lamačová, Josefína; Scott Lynn, N; Šedivák, Petr; Homola, Jiří
2016-06-15
Recent outbreaks of foodborne illnesses have shown that foodborne bacterial pathogens present a significant threat to public health, resulting in an increased need for technologies capable of fast and reliable screening of food commodities. The optimal method of pathogen detection in foods should: (i) be rapid, specific, and sensitive; (ii) require minimum sample preparation; and (iii) be robust and cost-effective, thus enabling use in the field. Here we report the use of an SPR biosensor based on ultra-low fouling and functionalizable poly(carboxybetaine acrylamide) (pCBAA) brushes for the rapid and sensitive detection of bacterial pathogens in crude food samples utilizing a three-step detection assay. We studied both the surface resistance to fouling and the functional capabilities of these brushes with respect to each step of the assay, namely: (I) incubation of the sensor with crude food samples, resulting in the capture of bacteria by antibodies immobilized to the pCBAA coating, (II) binding of secondary biotinylated antibody (Ab2) to previously captured bacteria, and (III) binding of streptavidin-coated gold nanoparticles to the biotinylated Ab2 in order to enhance the sensor response. We also investigated the effects of the brush thickness on the biorecognition capabilities of the gold-grafted functionalized pCBAA coatings. We demonstrate that pCBAA, compared to standard low-fouling OEG-based alkanethiolate self-assembled monolayers, exhibits superior surface resistance regarding both fouling from complex food samples as well as the non-specific binding of S-AuNPs. We further demonstrate that an SPR biosensor based on a pCBAA brush with a thickness as low as 20 nm was capable of detecting E. coli O157:H7 and Salmonella sp. in complex hamburger and cucumber samples with extraordinary sensitivity and specificity. The limits of detection for the two bacteria in cucumber and hamburger extracts were determined to be 57 CFU/mL and 17 CFU/mL for E. coli and 7.4 × 10
NASA Astrophysics Data System (ADS)
Ma, Hui; Zhou, Haijun
2011-05-01
In this brief report we explore the energy landscapes of two spin glass models using a greedy single-spin flipping process, Gmax. The ground-state energy density of the random maximum two-satisfiability problem is efficiently approached by Gmax. The achieved energy density e(t) decreases with the evolution time t as e(t) - e(∞) = h (log10 t)^(-z), with a small prefactor h and a scaling coefficient z > 1, indicating an energy landscape with deep and rugged funnel-shaped regions. For the ±J Viana-Bray spin glass model, however, the greedy single-spin dynamics quickly gets trapped in a local minimal region of the energy landscape.
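A generic Gmax-style greedy single-spin flipping descent can be sketched as follows (the dense ±J instance, its size, and the seed are invented for illustration; the paper's models and sizes differ):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 30
# Hypothetical dense ±J couplings (symmetric, zero diagonal)
J = rng.choice([-1.0, 1.0], size=(n, n))
J = np.triu(J, 1)
J = J + J.T

s = rng.choice([-1.0, 1.0], size=n)       # random initial configuration
E0 = -0.5 * s @ J @ s                     # E = -(1/2) s^T J s

# Greedy descent: always flip the single spin with the largest energy drop;
# flipping spin i changes the energy by dE_i = 2 s_i (J s)_i
while True:
    dE = 2.0 * s * (J @ s)
    i = int(np.argmin(dE))
    if dE[i] >= 0:                        # no flip lowers the energy: local minimum
        break
    s[i] = -s[i]

E_final = -0.5 * s @ J @ s
```

The loop terminates because the energy strictly decreases at every flip and is bounded below on a finite configuration space.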
Computer-Assisted Test Assembly Using Optimization Heuristics.
ERIC Educational Resources Information Center
Leucht, Richard M.
1998-01-01
Presents a variation of a "greedy" algorithm that can be used in test-assembly problems. The algorithm, the normalized weighted absolute-deviation heuristic, selects items to have a locally optimal fit to a moving set of average criterion values. Demonstrates application of the model. (SLD)
Greedy data transportation scheme with hard packet deadlines for wireless ad hoc networks.
Lee, HyungJune
2014-01-01
We present a greedy data transportation scheme with hard packet deadlines in ad hoc sensor networks of stationary nodes and multiple mobile nodes with scheduled trajectory paths and arrival times. In the proposed routing strategy, each stationary ad hoc node en route decides whether to relay a packet to a shortest-path stationary node toward the destination or to a passing-by mobile node that will carry it closer to the destination. We aim to utilize mobile nodes to minimize the total routing cost as long as the selected route can satisfy the end-to-end packet deadline. We evaluate our proposed routing algorithm in terms of routing cost, packet delivery ratio, packet delivery time, and usability of mobile nodes based on network-level simulations. Simulation results show that our proposed algorithm fully exploits the remaining time until the packet deadline, turning it into the networking benefits of reduced overall routing cost and improved packet delivery performance. We also demonstrate that the routing scheme guarantees packet delivery with hard deadlines, contributing to QoS improvement in various network services. PMID:25258736
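The per-hop choice described above amounts to a small greedy rule; a schematic sketch in which the function name, the cost model, and all numbers are hypothetical:

```python
def choose_next_hop(stationary_cost, mobile_option, time_to_deadline):
    """Greedy per-packet decision at a stationary node.

    mobile_option is (carry_time, carry_cost) for a scheduled passing-by
    mobile node, or None when no mobile node is available. The mobile
    carry is preferred only while the packet deadline can still be met.
    """
    if mobile_option is not None:
        carry_time, carry_cost = mobile_option
        if carry_time <= time_to_deadline and carry_cost < stationary_cost:
            return "mobile"
    return "stationary"

# A cheaper mobile carry that still meets the deadline is chosen
decision_mobile = choose_next_hop(5.0, (3.0, 2.0), 10.0)    # → "mobile"
# A mobile node arriving too late is skipped for the shortest-path relay
decision_late = choose_next_hop(5.0, (12.0, 2.0), 10.0)     # → "stationary"
```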
Cai, Chuangjian; Zhang, Lin; Cai, Wenjuan; Zhang, Dong; Lv, Yanlu; Luo, Jianwen
2016-01-01
In order to improve the spatial resolution of time-domain (TD) fluorescence molecular lifetime tomography (FMLT), an accelerated nonlinear orthogonal matching pursuit (ANOMP) algorithm is proposed. As a nonlinear greedy sparsity-constrained method, ANOMP can find an approximate solution of the L0 minimization problem. ANOMP consists of two parts: the outer iterations and the inner iterations. Each outer iteration selects multiple elements to expand the support set of the inverse lifetime based on the gradients of a mismatch error. The inner iterations obtain an intermediate estimate based on the support set estimated in the outer iterations. The stopping criterion for the outer iterations is based on the stability of the maximum reconstructed values and is robust for problems with targets at different edge-to-edge distances (EEDs). Phantom experiments with two fluorophores at different EEDs and in vivo mouse experiments demonstrate that ANOMP can provide high quantification accuracy, even if the EED is relatively small, and high resolution. PMID:27446648
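ANOMP itself is nonlinear, but its outer/inner structure (greedy support growth, then a refit on the current support) mirrors classic orthogonal matching pursuit; a linear OMP sketch with an invented sensing matrix and sparse target:

```python
import numpy as np

def omp(A, y, k):
    """Classic orthogonal matching pursuit for y ≈ A x with k-sparse x."""
    m, n = A.shape
    support, r = [], y.copy()
    x_s = np.zeros(0)
    for _ in range(k):
        # Outer step: grow the support where the residual correlates most
        j = int(np.argmax(np.abs(A.T @ r)))
        if j not in support:
            support.append(j)
        # Inner step: least-squares refit on the current support
        x_s, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        r = y - A[:, support] @ x_s
    x = np.zeros(n)
    x[support] = x_s
    return x

rng = np.random.default_rng(2)
A = rng.normal(size=(60, 120))
A /= np.linalg.norm(A, axis=0)              # unit-norm columns
x_true = np.zeros(120)
x_true[[5, 40, 99]] = [2.0, -1.5, 3.0]      # hypothetical 3-sparse target
y = A @ x_true
x_hat = omp(A, y, 3)
```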
NASA Astrophysics Data System (ADS)
Prados, A. I.; Gupta, P.; Mehta, A. V.; Schmidt, C.; Blevins, B.; Carleton-Hug, A.; Barbato, D.
2014-12-01
NASA's Applied Remote Sensing Training Program (ARSET), http://arset.gsfc.nasa.gov, within NASA's Applied Sciences Program, has been providing applied remote sensing training since 2008. The goals of the program are to develop the technical and analytical skills necessary to utilize NASA resources for decision support, and to help end-users navigate the vast data resources freely available. We discuss our multi-step approach to improving access to and use of NASA satellite and model data for air quality, water resources, disaster, and land management. The program has reached over 1600 participants worldwide using a combined online and interactive approach. We will discuss lessons learned as well as best practices and success stories in improving the use of NASA Earth Science resources archived at multiple data centers by end-users in the private and public sectors. ARSET's program evaluation method for improving the program and assessing the benefits of trainings to U.S. and international organizations will also be described.
Marquette, Ian; Quesne, Christiane
2014-11-15
Type III multi-step rationally extended harmonic oscillator and radial harmonic oscillator potentials, characterized by a set of k integers m_1, m_2, ..., m_k, such that m_1 < m_2 < ... < m_k with m_i even (resp. odd) for i odd (resp. even), are considered. The state-adding and state-deleting approaches to these potentials in a supersymmetric quantum mechanical framework are combined to construct new ladder operators. The eigenstates of the Hamiltonians are shown to separate into m_k + 1 infinite-dimensional unitary irreducible representations of the corresponding polynomial Heisenberg algebras. These ladder operators are then used to build a higher-order integral of motion for seven new infinite families of superintegrable two-dimensional systems separable in Cartesian coordinates. The finite-dimensional unitary irreducible representations of the polynomial algebras of such systems are directly determined from the ladder operator action on the constituent one-dimensional Hamiltonian eigenstates and provide an algebraic derivation of the whole spectrum of the superintegrable systems, including the total level degeneracies.
Xiong, Hanzhen; Li, Qiulian; Chen, Ruichao; Liu, Shaoyan; Lin, Qiongyan; Xiong, Zhongtang; Jiang, Qingping; Guo, Linlang
2016-01-01
We aimed to identify endometrioid endometrial carcinoma (EEC)-related gene signatures using a multi-step miRNA-mRNA regulatory network construction approach. Pathway analysis showed that 61 genes were enriched on many carcinoma-related pathways. Among the 14 highest-scoring gene signatures, six genes had previously been shown to be associated with endometrial carcinoma. By qRT-PCR and next-generation sequencing, we found that a gene signature (CPEB1) was significantly down-regulated in EEC tissues, which may be caused by hsa-miR-183-5p up-regulation. In addition, our literature surveys suggested that CPEB1 may play an important role in EEC pathogenesis by regulating the EMT/p53 pathway. The miRNA-mRNA network is worthy of further investigation with respect to the regulatory mechanisms of miRNAs in EEC. CPEB1 appeared to be a tumor suppressor in EEC. Our results provide valuable guidance for functional studies at the cellular level, as well as for EEC mouse models. PMID:27271671
Grain refinement in an AlZnMgCuTi alloy by intensive melt shearing: A multi-step nucleation mechanism
NASA Astrophysics Data System (ADS)
Li, H. T.; Xia, M.; Jarry, Ph.; Scamans, G. M.; Fan, Z.
2011-01-01
Direct chill (DC) cast ingots of wrought Al alloys conventionally require the deliberate addition of a grain refiner to provide a uniform as-cast microstructure for the optimisation of both mechanical properties and processability. Grain refiner additions have been in widespread industrial use for more than half a century. Intensive melt shearing can provide grain refinement without the need for a specific grain refiner addition for both magnesium- and aluminium-based alloys. In this paper we present experimental evidence of the grain refinement in an experimental wrought aluminium alloy achieved by intensive melt shearing in the liquid state prior to solidification. The mechanisms for high-shear-induced grain refinement are correlated with the evolution of oxides in alloys. The oxides present in liquid aluminium alloys, normally as oxide films and clusters, can be effectively dispersed by intensive shearing and then provide effective sites for the heterogeneous nucleation of the Al3Ti phase. As a result, Al3Ti particles with a narrower size distribution, and hence improved efficiency as active nucleation sites of α-aluminium grains, are responsible for the achieved significant grain refinement. This is termed a multi-step nucleation mechanism.
NASA Astrophysics Data System (ADS)
Xu, Rong; Sun, Suqin; Zhu, Weicheng; Xu, Changhua; Liu, Yougang; Shen, Liang; Shi, Yue; Chen, Jun
2014-07-01
The genus Cistanche generally has four species in China, including C. deserticola (CD), C. tubulosa (CT), C. salsa (CS) and C. sinensis (CSN), among which CD and CT are official herbal sources of Cistanche Herba (CH). To clarify the sources of CH and ensure clinical efficacy and safety, a multi-step IR macro-fingerprint method was developed to analyze and evaluate the ethanol extracts of the four species. Through this method, the four species were clearly distinguished, and the main active components, phenylethanoid glycosides (PhGs), were estimated rapidly according to the fingerprint features in the original IR spectra, second derivative spectra, correlation coefficients and 2D-IR correlation spectra. The exclusive IR fingerprints in the spectra, including the positions, shapes and numbers of peaks, indicated that the constituents of CD were the most abundant, and CT had the highest level of PhGs. The results deduced from macroscopic features of the IR fingerprints were in agreement with the HPLC fingerprint of PhGs from the four species, but it should be noted that IR provided more chemical information than HPLC. In conclusion, with the advantages of high resolution, cost-effectiveness and speed, the macroscopic IR fingerprint method should be a promising analytical technique for discriminating extremely similar herbal medicines, monitoring and tracing the constituents of different extracts, and even for quality control of complex systems such as TCM.
NASA Astrophysics Data System (ADS)
Hatta, Kohei; Nakajima, Yohei; Isoda, Erika; Itoh, Mariko; Yamamoto, Tamami
The brain is one of the most complicated structures in nature. Zebrafish is a useful model to study development of the vertebrate brain, because it is transparent at early embryonic stages and it develops rapidly outside of the body. We made a series of transgenic zebrafish expressing green fluorescent protein-related molecules, for example, Kaede and KikGR, whose green fluorescence can be irreversibly converted to red upon irradiation with ultraviolet (UV) or violet light, and Dronpa, whose green fluorescence is eliminated with strong blue light but can be reactivated upon irradiation with UV or violet light. We have recently shown that the infrared laser-evoked gene operator (IR-LEGO), which causes a focused heat shock, can locally induce these fluorescent proteins and other genes. Neural cell migration and axonal pattern formation in the living brain could be visualized by this technique. We can also express channelrhodopsin-2 (ChR2), a photoactivatable cation channel, or Natronomonas pharaonis halorhodopsin (NpHR), a photoactivatable chloride ion pump, locally in the nervous system by IR. Then, the behaviors of these animals can be controlled by activating or silencing the local neurons with light. This novel strategy is useful in discovering neurons and circuits responsible for a wide variety of animal behaviors. We propose to call this method ‘multi-stepped optogenetics’.
Magmatically Greedy Reararc Volcanoes of the N. Tofua Segment of the Tonga Arc
NASA Astrophysics Data System (ADS)
Rubin, K. H.; Embley, R. W.; Arculus, R. J.; Lupton, J. E.
2013-12-01
Volcanism along the northernmost Tofua Arc is enigmatic because edifices of the arc's volcanic front are mostly magmatically anemic, despite the very high convergence rate of the Pacific Plate with this section of the Tonga Arc. However, just westward of the arc front, in terrain generally thought of as part of the adjacent NE Lau Backarc Basin, lies a series of very active volcanoes and volcanic features, including the large submarine caldera Niuatahi (aka volcano 'O'), a large composite dacite lava flow terrain not obviously associated with any particular volcanic edifice, and the Mata volcano group, a series of 9 small elongate volcanoes in an extensional basin at the extreme NE corner of the Lau Basin. These three volcanic terrains do not sit on arc-perpendicular cross chains. Collectively, these volcanic features appear to be receiving a large proportion of the magma flux from the sub-Tonga/Lau mantle wedge, in effect 'stealing' this magma flux from the arc front. A second occurrence of such magma 'capture' from the arc front occurs in an area just to the south, on the southernmost portion of the Fonualei Spreading Center. Erupted compositions at these 'magmatically greedy' volcanoes are consistent with high slab-derived fluid input into the wedge (particularly trace element abundances and volatile contents; e.g., see Lupton abstract, this session). It is unclear how long-lived a feature this is, but the very presence of such hyperactive and areally dispersed volcanism behind the arc front implies these volcanoes are not in fact part of any focused spreading/rifting in the Lau Backarc Basin, and should be thought of as 'reararc volcanoes'. Possible tectonic factors contributing to this unusually productive reararc environment are the high rate of convergence, the cold slab, the highly disorganized extension in the adjacent backarc, and the tear in the subducting plate just north of the Tofua Arc.
Flores, Glenn
2002-07-01
Cinematic depictions of physicians potentially can affect public expectations and the patient-physician relationship, but little attention has been devoted to portrayals of physicians in movies. The objective of the study was the analysis of cinematic depictions of physicians to determine common demographic attributes of movie physicians, major themes, and whether portrayals have changed over time. All movies released on videotape with physicians as main characters and readily available to the public were viewed in their entirety. Data were collected on physician characteristics, diagnoses, and medical accuracy, and dialogue concerning physicians was transcribed. The results showed that in the 131 films, movie physicians were significantly more likely to be male (p < 0.00001), White (p < 0.00001), and < 40 years of age (p < 0.009). The proportion of women and minority film physicians has declined steadily in recent decades. Movie physicians are most commonly surgeons (33%), psychiatrists (26%), and family practitioners (18%). Physicians were portrayed negatively in 44% of movies, and since the 1960s positive portrayals declined while negative portrayals increased. Physicians frequently are depicted as greedy, egotistical, uncaring, and unethical, especially in recent films. Medical inaccuracies occurred in 27% of films. Compassion and idealism were common in early physician movies but are increasingly scarce in recent decades. A recurrent theme is the "mad scientist," the physician-researcher that values research more than patients' welfare. Portrayals of physicians as egotistical and materialistic have increased, whereas sexism and racism have waned. Movies from the past two decades have explored critical issues surrounding medical ethics and managed care. We conclude that negative cinematic portrayals of physicians are on the rise, which may adversely affect patient expectations and the patient-physician relationship. Nevertheless, films about physicians can
A greedy-based multiquadric method for LiDAR-derived ground data reduction
NASA Astrophysics Data System (ADS)
Chen, Chuanfa; Yan, Changqing; Cao, Xuewei; Guo, Jinyun; Dai, Honglei
2015-04-01
A new greedy-based multiquadric method (MQ-G) has been developed to perform LiDAR-derived ground data reduction by selecting a certain amount of significant terrain points from the raw dataset to keep the accuracy of the constructed DEMs as high as possible, while maximally retaining terrain features. In the process of MQ-G, the significant terrain points were selected with an iterative process. First, the points with the maximum and minimum elevations were selected as the initial significant points. Next, a smoothing MQ was employed to perform an interpolation with the selected critical points. Then, the importance of all candidate points was assessed by interpolation error (i.e. the absolute difference between the interpolated and actual elevations). Lastly, the most significant point in the current iteration was selected and used for point selection in the next iteration. The process was repeated until the number of selected points reached a pre-set level or no point was found to have the interpolation error exceeding a user-specified accuracy tolerance. In order to avoid the huge computing cost, a new technique was presented to quickly solve the systems of MQ equations in the global interpolation process, and then the global MQ was replaced with the local one when a certain amount of critical points were selected. Four study sites with different morphologies (i.e. flat, undulating, hilly and mountainous terrains) were respectively employed to comparatively analyze the performances of MQ-G and the classical data selection methods including maximum z-tolerance (Max-Z) and the random method for reducing LiDAR-derived ground datasets. Results show that irrespective of the number of selected critical points and terrain characteristics, MQ-G is always more accurate than the other methods for DEM construction. Moreover, MQ-G has a better ability of preserving terrain feature lines, especially for the undulating and hilly terrains.
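The MQ-G selection loop can be sketched in one dimension with a plain multiquadric interpolant; the terrain profile, shape parameter c, tolerance, and point cap below are invented, and the paper's fast global solver and global-to-local switch are omitted:

```python
import numpy as np

def mq(r, c=0.5):
    """Multiquadric radial basis function."""
    return np.sqrt(r**2 + c**2)

def fit(xs, zs):
    """Solve for MQ weights that interpolate (xs, zs) exactly."""
    Phi = mq(np.abs(xs[:, None] - xs[None, :]))
    return np.linalg.solve(Phi, zs)

def predict(xs, w, xq):
    return mq(np.abs(xq[:, None] - xs[None, :])) @ w

# Hypothetical 1-D "terrain" profile standing in for LiDAR ground points
x = np.linspace(0.0, 10.0, 200)
z = np.sin(x) + 0.3 * np.sin(3.1 * x)

# Greedy loop: seed with the elevation extremes, then repeatedly add the
# point whose current interpolation error is largest
sel = [int(np.argmax(z)), int(np.argmin(z))]
tol = 0.05           # user-specified accuracy tolerance (assumed)
while True:
    w = fit(x[sel], z[sel])
    err = np.abs(predict(x[sel], w, x) - z)
    worst = int(np.argmax(err))
    if err[worst] < tol or len(sel) >= 40:   # tolerance met or point cap hit
        break
    sel.append(worst)
```

Already-selected points are interpolated exactly, so their error is near zero and they are never reselected; the loop stops at the tolerance or the pre-set point budget.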
NASA Astrophysics Data System (ADS)
Yang, J.-S.; Yu, S.-P.; Liu, G.-M.
2013-12-01
In order to increase the accuracy of serial-propagated long-range multi-step-ahead (MSA) prediction, which has high practical value but also great implementation difficulty because of huge error accumulation, a novel wavelet neural network hybrid model, CDW-NN, combining continuous and discrete wavelet transforms (CWT and DWT) and neural networks (NNs), is designed as the MSA predictor for the effective long-term forecast of hydrological signals. By the application of 12 types of hybrid and pure models in estuarine 1096-day river stage forecasting, the different forecast performances and the superiorities of the CDW-NN model with corresponding driving mechanisms are discussed. One type of CDW-NN model, CDW-NF, which uses neuro-fuzzy as the forecast submodel, has proven to be the most effective MSA predictor, providing prominent accuracy enhancement over the whole 1096-day long-term forecast. The special superiority of the CDW-NF model lies in the CWT-based methodology, which determines the 15-day and 28-day prior data series as model inputs by revealing the significant short-time periodicities involved in estuarine river stage signals. Compared with conventional single-step-ahead-based long-term forecast models, the CWT-based hybrid models broaden the prediction range in each forecast step from 1 day to 15 days, thus reducing the overall number of forecasting iterations from 1096 to 74 and significantly decreasing error accumulation. In addition, combining the advantages of the DWT method and the neuro-fuzzy system also helps filter the noisy dynamics in the model inputs and enhances the simulation and forecast ability for the complex hydro-system.
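The iteration-count claim is simple bookkeeping: a 15-day-ahead predictor covers the 1096-day horizon in ⌈1096/15⌉ = 74 steps instead of 1096 single-step iterations, which is where the reduced error accumulation comes from:

```python
import math

horizon_days = 1096
block_days = 15                     # CWT-determined forecast range per step

steps_single = horizon_days                         # recursive 1-day steps
steps_multi = math.ceil(horizon_days / block_days)  # 15-day blocks → 74 steps
```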
NASA Astrophysics Data System (ADS)
Yang, J.-S.; Yu, S.-P.; Liu, G.-M.
2013-07-01
In order to increase the accuracy of serial-propagated long-range multi-step-ahead (MSA) prediction, which has high practical value but is difficult to implement because of huge error accumulation, a novel wavelet-NN hybrid model, CDW-NN, combining continuous and discrete wavelet transforms (CWT and DWT) and neural networks (NN), is designed as the MSA predictor for effective long-term forecast of hydrological signals. By the application of 12 types of hybrid and pure models in estuarine 1096-day river stage series forecasting, different forecast performances and the superiorities of the CDW-NN model with corresponding driving mechanisms are discussed, and one type of CDW-NN model (CDW-NF), which uses neuro-fuzzy as the forecast submodel, has proven to be the most effective MSA predictor for the accuracy enhancement in the overall 1096-day long-term forecast. The special superiority of the CDW-NF model lies in the CWT-based methodology, which determines the 15- and 28-day prior data series as model inputs by revealing the significant short-time periodicities involved in estuarine river stage signals. Compared with conventional single-step-ahead-based long-term forecast models, the CWT-based hybrid models broaden the prediction range in each forecast step from 1 day to 15 days, thus reducing the overall number of forecasting iterations from 1096 to 74 and finally creating a significant decrease of error accumulation. In addition, combining the advantages of the DWT method and the neuro-fuzzy system also benefits the filtering of noisy dynamics in the model inputs and enhances the simulation and forecast ability of the complex hydro-system.
Fan, Jian Ping; Kalia, Priya; Di Silvio, Lucy; Huang, Jie
2014-03-01
A multi-step sol-gel process was employed to synthesize bioactive glass (BG) nanoparticles. Transmission electron microscopy (TEM) revealed that the BG nanoparticles were spherical and ranged from 30 to 60 nm in diameter. In vitro reactivity of the BG nanoparticles was tested in phosphate buffered saline (PBS), Tris buffer (TRIS), simulated body fluid (SBF), and Dulbecco's modified Eagle's medium (DMEM), in comparison with similar-sized hydroxyapatite (HA) and silicon-substituted HA (SiHA) nanoparticles. Bioactivity of the BG nanoparticles was confirmed through Fourier transform infrared spectroscopy (FTIR) analysis. It was found that bone-like apatite was formed after immersion in SBF at 7 days. Solutions containing BG nanoparticles were slightly more alkaline than those with HA and SiHA, suggesting that the more rapid apatite formation on BG was related to solution-mediated dissolution. A primary human osteoblast (HOB) cell model was used to evaluate biological responses to BG nanoparticles. The lactate dehydrogenase (LDH) cytotoxicity assay showed that HOB cells were not adversely affected by the BG nanoparticles throughout the 7-day test period. Interestingly, MTS assay results showed an enhancement in cell proliferation in the presence of BG when compared to HA and SiHA nanoparticles. In particular, significantly higher (p<0.05) alkaline phosphatase (ALP) activity of HOB cells was found in cultures containing BG nanoparticles, suggesting that cell differentiation might be promoted by BG. Real-time quantitative PCR (qPCR) analysis further confirmed this finding, as a significantly higher level of RUNX2 gene expression was recorded in cells cultured in the presence of BG nanoparticles when compared to those with HA and SiHA. PMID:24433905
Boolean methods of optimization over independence systems
Hulme, B.L.
1983-01-01
This paper presents both a direct and an iterative method of solving the combinatorial optimization problem associated with any independence system. The methods use Boolean algebraic computations to produce solutions. In addition, the iterative method employs a version of the greedy algorithm both to compute upper bounds on the optimum value and to produce the additional circuits needed at every stage. The methods are extensions of those used to solve a problem of fire protection at nuclear reactor power plants.
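The generic greedy method over an independence system can be sketched directly; instantiated with "independent = acyclic edge set" (a graphic matroid, with a made-up edge list) it reduces to Kruskal's maximum-weight spanning forest algorithm:

```python
def greedy_max_weight(elements, weight, is_independent):
    """Generic greedy over an independence system: scan elements by
    decreasing weight, keeping each one that preserves independence."""
    chosen = []
    for e in sorted(elements, key=weight, reverse=True):
        if is_independent(chosen + [e]):
            chosen.append(e)
    return chosen

# Instance: edges of a small hypothetical graph, weighted; independence
# = acyclicity, checked with a tiny union-find
edges = [("a", "b", 4), ("b", "c", 2), ("a", "c", 5), ("c", "d", 1)]

def acyclic(es):
    parent = {}
    def find(v):
        parent.setdefault(v, v)
        while parent[v] != v:
            v = parent[v]
        return v
    for u, v, _ in es:
        ru, rv = find(u), find(v)
        if ru == rv:          # joining two vertices already connected: a cycle
            return False
        parent[ru] = rv
    return True

forest = greedy_max_weight(edges, lambda e: e[2], acyclic)
# → [("a", "c", 5), ("a", "b", 4), ("c", "d", 1)]; ("b", "c", 2) closes a cycle
```

For matroids (such as this graphic one) the greedy output is optimal; for general independence systems it only gives bounds, which is how the paper uses it.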
2012-01-01
Background Various antigen-specific immunoassays are available for the serological diagnosis of autoimmune bullous diseases. However, a spectrum of different tissue-based and monovalent antigen-specific assays is required to establish the diagnosis. BIOCHIP mosaics consisting of different antigen substrates allow polyvalent immunofluorescence (IF) tests and provide antibody profiles in a single incubation. Methods Slides for indirect IF were prepared, containing BIOCHIPS with the following test substrates in each reaction field: monkey esophagus, primate salt-split skin, antigen dots of tetrameric BP180-NC16A as well as desmoglein 1-, desmoglein 3-, and BP230gC-expressing human HEK293 cells. This BIOCHIP mosaic was probed using a large panel of sera from patients with pemphigus vulgaris (PV, n = 65), pemphigus foliaceus (PF, n = 50), bullous pemphigoid (BP, n = 42), and non-inflammatory skin diseases (n = 97) as well as from healthy blood donors (n = 100). Furthermore, to evaluate the usability in routine diagnostics, 454 consecutive sera from patients with suspected immunobullous disorders were prospectively analyzed in parallel using a) the IF BIOCHIP mosaic and b) a panel of single antibody assays as commonly used by specialized centers. Results Using the BIOCHIP mosaic, sensitivities of the desmoglein 1-, desmoglein 3-, and NC16A-specific substrates were 90%, 98.5% and 100%, respectively. BP230 was recognized by 54% of the BP sera. Specificities ranged from 98.2% to 100% for all substrates. In the prospective study, a high agreement was found between the results obtained by the BIOCHIP mosaic and the single test panel for the diagnosis of BP, PV, PF, and sera without serum autoantibodies (Cohen’s κ between 0.88 and 0.97). Conclusions The BIOCHIP mosaic contains sensitive and specific substrates for the indirect IF diagnosis of BP, PF, and PV. Its diagnostic accuracy is comparable with the conventional multi-step approach. The highly
Hierarchical models and iterative optimization of hybrid systems
NASA Astrophysics Data System (ADS)
Rasina, Irina V.; Baturina, Olga V.; Nasatueva, Soelma N.
2016-06-01
A class of hybrid control systems based on a two-level discrete-continuous model is considered. The concept of this model was proposed and developed in preceding works as a concretization of the general multi-step system with related optimality conditions. A new iterative optimization procedure for such systems is developed, based on localization of the global optimality conditions via contraction of the control set.
Saethre, Eirik; Stadler, Jonathan
2013-03-01
As clinical trial research increasingly permeates sub-Saharan Africa, tales of purposeful HIV infection, blood theft, and other harmful outcomes are widely reported by participants and community members. Examining responses to the Microbicide Development Programme 301 (a randomized, double-blind, placebo-controlled microbicide trial), we investigate the ways in which these accounts embed medical research within postcolonial contexts. We explore three popular narratives circulating around the Johannesburg trial site: malicious whites killing participants and selling their blood, greedy women enrolling in the trial solely for financial gain, and virtuous volunteers attempting to ensure their health and aid others through trial participation. We argue that trial participants and community members transform medical research into a meaningful tool that alternately affirms, debates, and challenges contemporary social relations. PMID:23674325
Zhu, Chuan; Zhang, Sai; Han, Guangjie; Jiang, Jinfang; Rodrigues, Joel J P C
2016-01-01
Mobile sinks are widely used for data collection in wireless sensor networks. They can avoid 'hot spot' problems, but the energy consumption caused by multihop transmission is still inefficient in real-time application scenarios. In this paper, a greedy scanning data collection strategy (GSDCS) is proposed, and we focus on how to reduce routing energy consumption by shortening the total length of routing paths. We propose that the mobile sink adjust its trajectory dynamically according to changes in the network, instead of following a predetermined trajectory or a random walk. The mobile sink determines which area has more source nodes, then moves toward this area. The benefit of GSDCS is that most source nodes no longer need to upload sensory data over long distances. Especially in event-driven application scenarios, when the event area changes, the mobile sink can move to the new event area where most source nodes are currently located; hence energy can be saved. Analytical and simulation results show that, compared with existing work, GSDCS performs better in specific application scenarios. PMID:27608022
A GREEDY METHOD FOR RECONSTRUCTING POLYCRYSTALS FROM THREE-DIMENSIONAL X-RAY DIFFRACTION DATA.
Kulshreshth, Arun K; Alpers, Andreas; Herman, Gabor T; Knudsen, Erik; Rodek, Lajos; Poulsen, Henning F
2009-02-01
An iterative search method is proposed for obtaining orientation maps inside polycrystals from three-dimensional X-ray diffraction (3DXRD) data. In each step, detector pixel intensities are calculated by a forward model based on the current estimate of the orientation map. The pixel at which the experimentally measured value most exceeds the simulated one is identified. This difference can only be reduced by changing the current estimate at a location from a relatively small subset of all possible locations in the estimate and, at each such location, an increase at the identified pixel can only be achieved by changing the orientation in only a few possible ways. The method selects the location/orientation pair indicated as best by a function that measures data consistency combined with prior information on orientation maps. The superiority of the method to a previously published forward projection Monte Carlo optimization is demonstrated on simulated data. PMID:20126520
Melnik, Eva; Bruck, Roman; Hainberger, Rainer; Lämmerhofer, Michael
2011-08-12
The process of surface functionalization involving silanization, biotinylation, and streptavidin bonding as a platform for biospecific ligand immobilization was optimized for thin-film polyimide spin-coated silicon wafers, in which the polyimide film serves as a waveguiding layer in evanescent wave photonic biosensors. This type of optical sensor makes great demands on the materials involved as well as on the layer properties, such as the optical quality, the layer thickness, and the surface roughness. In this work we realized the binding of 3-mercaptopropyl trimethoxysilane on an oxygen-plasma-activated polyimide surface, followed by subsequent derivatization of the reactive thiol groups with maleimide-PEG(2)-biotin and immobilization of streptavidin. The progress of the functionalization was monitored using different fluorescence labels for optimization of the chemical derivatization steps. Further, X-ray photoelectron spectroscopy and atomic force microscopy were utilized for the characterization of the modified surface. These established analytical methods allowed information to be derived such as the chemical composition of the surface, the surface coverage of immobilized streptavidin, and parameters of the surface roughness. The proposed functionalization protocol furnished a surface density of 144 fmol mm(-2) streptavidin with good reproducibility (13.9% RSD, n=10) and without inflicting damage to the surface. This surface modification was applied to polyimide-based Mach-Zehnder interferometer (MZI) sensors to realize a real-time measurement of streptavidin binding, validating the functionality of the MZI biosensor. Subsequently, this streptavidin surface was employed to immobilize biotinylated single-stranded DNA and utilized for monitoring of selective DNA hybridization. These results proved the usability of polyimide-based evanescent photonic devices for biosensing applications. PMID:21704776
Li, Cong; Li, Hui; Sun, Jin; Zhang, XinYue; Shi, Jinsong; Xu, Zhenghong
2016-08-01
Hydroxylation of dehydroepiandrosterone (DHEA) to 3β,7α,15α-trihydroxy-5-androstene-17-one (7α,15α-diOH-DHEA) by Colletotrichum lini ST-1 is an essential step in the synthesis of many steroidal drugs, while low DHEA concentrations and low 7α,15α-diOH-DHEA yields remain pressing problems in industry. In this study, a significant improvement of the 7α,15α-diOH-DHEA yield in a 5-L stirred fermenter with 15 g/L DHEA was achieved. To maintain a sufficient quantity of glucose for the bioconversion, 15 g/L glucose was fed at 18 h; the 7α,15α-diOH-DHEA yield and dry cell weight were thereby increased by 17.7% and 30.9%, respectively. Moreover, a multi-step DHEA addition strategy was established to diminish DHEA toxicity to C. lini, and the 7α,15α-diOH-DHEA yield rose to 53.0%. Further, a novel strategy integrating glucose feeding with multi-step addition of DHEA was carried out, and the product yield increased to 66.6%, the highest reported 7α,15α-diOH-DHEA production in a 5-L stirred fermenter. Meanwhile, the conversion time was shortened to 44 h. This strategy provides a possible route to enhancing the 7α,15α-diOH-DHEA yield in the pharmaceutical industry. PMID:27094679
Yang, Jin; Liu, Fagui; Cao, Jianneng; Wang, Liangming
2016-01-01
Mobile sinks can achieve load balancing and energy-consumption balancing across wireless sensor networks (WSNs). However, the frequent change of the paths between source nodes and the sinks caused by sink mobility introduces significant overhead in terms of energy and packet delays. To enhance the network performance of WSNs with mobile sinks (MWSNs), we present an efficient routing strategy, which is formulated as an optimization problem and employs the particle swarm optimization (PSO) algorithm to build the optimal routing paths. However, conventional PSO is insufficient for discrete routing optimization problems. Therefore, a novel greedy discrete particle swarm optimization with memory (GMDPSO) is put forward to address this problem. In the GMDPSO, the particle position and velocity of traditional PSO are redefined for the discrete MWSN scenario. The particle updating rule is also reconsidered, based on the subnetwork topology of MWSNs. In addition, by improving greedy forwarding routing, a greedy search strategy is designed to drive particles to better positions quickly. Furthermore, the search history is memorized to accelerate convergence. Simulation results demonstrate that our new protocol significantly improves robustness and adapts to rapid topological changes with multiple mobile sinks, while efficiently reducing the communication overhead and energy consumption. PMID:27428971
Combinatorial optimization methods for disassembly line balancing
NASA Astrophysics Data System (ADS)
McGovern, Seamus M.; Gupta, Surendra M.
2004-12-01
Disassembly takes place in remanufacturing, recycling, and disposal, with a line being the best choice for automation. The disassembly line balancing problem seeks a sequence that minimizes the number of workstations, ensures similar idle times, and is feasible. Finding the optimal balance is computationally intensive due to factorial growth. Combinatorial optimization methods hold promise for providing solutions to the disassembly line balancing problem, which is proven to belong to the class of NP-complete problems. Ant colony optimization, genetic algorithm, and H-K metaheuristics are presented and compared, along with a greedy/hill-climbing heuristic hybrid. A numerical study is performed to illustrate the implementation and compare performance. Conclusions drawn include the consistent generation of optimal or near-optimal solutions, the ability to preserve precedence, the speed of the techniques, and their practicality due to ease of implementation.
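The abstract does not spell out the greedy/hill-climbing hybrid, but the greedy half of such line-balancing heuristics is commonly a "longest feasible task first" rule: fill the current workstation with the longest task whose precedences are met and whose time still fits, and open a new station when nothing fits. A minimal sketch under those assumptions (all names and the tie-breaking rule are illustrative, not from the paper):

```python
def greedy_balance(tasks, times, precedence, cycle_time):
    """Greedy assignment of disassembly tasks to workstations.

    tasks: list of task ids; times: task -> duration;
    precedence: task -> set of prerequisite tasks;
    cycle_time: maximum total time per workstation.
    Assumes every single task fits within cycle_time.
    """
    done, stations = set(), []
    remaining = set(tasks)
    while remaining:
        station, load = [], 0.0
        progressed = True
        while progressed:
            progressed = False
            # tasks whose prerequisites are done and that still fit
            ready = [t for t in remaining
                     if precedence.get(t, set()) <= done
                     and load + times[t] <= cycle_time]
            if ready:
                t = max(ready, key=lambda t: times[t])  # longest first
                station.append(t)
                load += times[t]
                done.add(t)
                remaining.remove(t)
                progressed = True
        if not station:
            raise ValueError("no feasible task fits: check times <= cycle_time")
        stations.append(station)
    return stations
```

Precedence is preserved by construction, which matches one of the evaluation criteria the abstract lists; a hill-climbing pass would then perturb the resulting sequence to reduce idle-time imbalance.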
Johnson, Gary E.; Khan, Fenton; Ploskey, Gene R.; Hughes, James S.; Fischer, Eric S.
2010-08-18
The goal of the study was to optimize performance of the fixed-location hydroacoustic systems at Lookout Point Dam (LOP) and the acoustic imaging system at Cougar Dam (CGR) by determining deployment and data acquisition methods that minimized structural, electrical, and acoustic interference. The general approach was a multi-step process from mount design to final system configuration. The optimization effort resulted in successful deployments of hydroacoustic equipment at LOP and CGR.
Multi-step contrast sensitivity gauge
Quintana, Enrico C; Thompson, Kyle R; Moore, David G; Heister, Jack D; Poland, Richard W; Ellegood, John P; Hodges, George K; Prindville, James E
2014-10-14
An X-ray contrast sensitivity gauge is described herein. The contrast sensitivity gauge comprises a plurality of steps of varying thicknesses. Each step in the gauge includes a plurality of recesses of differing depths, wherein the depths are a function of the thickness of their respective step. An X-ray image of the gauge is analyzed to determine a contrast-to-noise ratio of a detector employed to generate the image.
Smusz, Sabina; Mordalski, Stefan; Witek, Jagna; Rataj, Krzysztof; Kafel, Rafał; Bojarski, Andrzej J
2015-04-27
Molecular docking, despite its undeniable usefulness in computer-aided drug design protocols and the increasing sophistication of tools used in the prediction of ligand-protein interaction energies, is still connected with a problem of effective results analysis. In this study, a novel protocol for the automatic evaluation of numerous docking results is presented, being a combination of Structural Interaction Fingerprints and Spectrophores descriptors, machine-learning techniques, and multi-step results analysis. Such an approach takes into consideration the performance of a particular learning algorithm (five machine learning methods were applied), the performance of the docking algorithm itself, the variety of conformations returned from the docking experiment, and the receptor structure (homology models were constructed on five different templates). Evaluation using compounds active toward 5-HT6 and 5-HT7 receptors, as well as additional analysis carried out for beta-2 adrenergic receptor ligands, proved that the methodology is a viable tool for supporting virtual screening protocols, enabling proper discrimination between active and inactive compounds. PMID:25806997
Optimal interdiction of unreactive Markovian evaders
Hagberg, Aric; Pan, Feng; Gutfraind, Alex
2009-01-01
The interdiction problem arises in a variety of areas including military logistics, infectious disease control, and counter-terrorism. In the typical formulation of network interdiction, the task of the interdictor is to find a set of edges in a weighted network such that the removal of those edges would increase the cost to an evader of traveling on a path through the network. Our work is motivated by cases in which the evader has incomplete information about the network or lacks planning time or computational power; e.g., when authorities set up roadblocks to catch bank robbers, the criminals do not know all the roadblock locations or the best path to use for their escape. We introduce a model of network interdiction in which the motion of one or more evaders is described by Markov processes on a network and the evaders are assumed not to react to interdiction decisions. The interdiction objective is to find a node or set of nodes, of size at most B, that maximizes the probability of capturing the evaders. We prove that, similar to the classical formulation, this interdiction problem is NP-hard. But unlike the classical problem, our interdiction problem is submodular, and the optimal solution can be approximated within 1-1/e using a greedy algorithm. Additionally, we exploit submodularity to introduce a priority evaluation strategy that speeds up the greedy algorithm by orders of magnitude. Taken together, the results bring closer the goal of finding realistic solutions to the interdiction problem on global-scale networks.
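The 1-1/e factor invoked above is the classical guarantee for greedy maximization of a monotone submodular set function (Nemhauser, Wolsey, and Fisher). A minimal generic sketch, with `f` a hypothetical stand-in for the capture probability rather than the paper's actual model:

```python
def greedy_submodular(candidates, f, budget):
    """Greedy maximization of a monotone submodular set function f.

    Repeatedly adds the candidate with the largest marginal gain; for
    monotone submodular f this achieves a (1 - 1/e) approximation.
    """
    chosen = set()
    for _ in range(budget):
        best, best_gain = None, 0.0
        for c in candidates - chosen:
            gain = f(chosen | {c}) - f(chosen)  # marginal gain of adding c
            if gain > best_gain:
                best, best_gain = c, gain
        if best is None:  # no candidate improves the objective
            break
        chosen.add(best)
    return chosen
```

The "priority evaluation strategy" of the abstract is in the same spirit as the well-known lazy-greedy idea: because submodularity makes marginal gains non-increasing, cached gains kept in a priority queue only need re-evaluation at the top, avoiding most objective evaluations.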
Extremal Optimization for p-Spin Models
NASA Astrophysics Data System (ADS)
Falkner, Stefan; Boettcher, Stefan
2012-02-01
It was shown recently that finding ground states in the 3-spin model on a two-dimensional triangular lattice poses an NP-hard problem [1]. We use the extremal optimization (EO) heuristic [2] to explore ground state energies and finite-size scaling corrections [3]. EO predicts the thermodynamic ground state energy with high accuracy, based on the observation that finite-size corrections appear to decay purely with system size. Just as found in 3-spin models on r-regular graphs, there are no noticeable anomalous corrections to these energies. Interestingly, the results are sufficiently accurate to detect alternating patterns in the energies when the lattice size L is divisible by 6. Although ground states seem very prolific and might seem easy to obtain with simple greedy algorithms, our tests show significant improvement in the data with EO. [1] PRE 83 (2011) 046709, [2] PRL 86 (2001) 5211, [3] S. Boettcher and S. Falkner (in preparation).
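The EO heuristic of [2] is often run in its τ-EO form: rank the variables by local fitness, pick one with probability decaying as a power of its rank, and flip it unconditionally while remembering the best configuration seen. The sketch below uses a 1D Ising ring purely as an illustrative stand-in for the paper's 3-spin triangular lattice; the model, τ value, and step count are all assumptions:

```python
import random

def tau_eo_ising(J, n, tau=1.4, steps=2000):
    """tau-EO sketch on a 1D Ising ring of n spins with uniform coupling J."""
    s = [random.choice([-1, 1]) for _ in range(n)]

    def energy(s):
        return -sum(J * s[i] * s[(i + 1) % n] for i in range(n))

    best, best_e = s[:], energy(s)
    for _ in range(steps):
        # local fitness: how well each spin satisfies its two bonds
        fit = [J * s[i] * (s[i - 1] + s[(i + 1) % n]) for i in range(n)]
        order = sorted(range(n), key=lambda i: fit[i])  # worst fitness first
        # pick rank k (1-based) with probability proportional to k**(-tau)
        weights = [(k + 1) ** -tau for k in range(n)]
        i = random.choices(order, weights=weights)[0]
        s[i] = -s[i]  # unconditional flip: EO never rejects a move
        e = energy(s)
        if e < best_e:
            best, best_e = s[:], e
    return best, best_e
```

Because moves are never rejected, EO keeps fluctuating instead of freezing in a local minimum, which is the property the abstract credits for beating simple greedy searches.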
Image-driven mesh optimization
Lindstrom, P; Turk, G
2001-01-05
We describe a method of improving the appearance of a low vertex count mesh in a manner that is guided by rendered images of the original, detailed mesh. This approach is motivated by the fact that greedy simplification methods often yield meshes that are poorer than what can be represented with a given number of vertices. Our approach relies on edge swaps and vertex teleports to alter the mesh connectivity, and uses the downhill simplex method to simultaneously improve vertex positions and surface attributes. Note that this is not a simplification method; the vertex count remains the same throughout the optimization. At all stages of the optimization the changes are guided by a metric that measures the differences between rendered versions of the original model and the low vertex count mesh. This method creates meshes that are geometrically faithful to the original model. Moreover, the method takes into account more subtle aspects of a model such as surface shading or whether cracks are visible between two interpenetrating parts of the model.
Approximating random quantum optimization problems
NASA Astrophysics Data System (ADS)
Hsu, B.; Laumann, C. R.; Läuchli, A. M.; Moessner, R.; Sondhi, S. L.
2013-06-01
We report a cluster of results regarding the difficulty of finding approximate ground states to typical instances of the quantum satisfiability problem k-body quantum satisfiability (k-QSAT) on large random graphs. As an approximation strategy, we optimize the solution space over “classical” product states, which in turn introduces a novel autonomous classical optimization problem, PSAT, over a space of continuous degrees of freedom rather than discrete bits. Our central results are (i) the derivation of a set of bounds and approximations in various limits of the problem, several of which we believe may be amenable to a rigorous treatment; (ii) a demonstration that an approximation based on a greedy algorithm borrowed from the study of frustrated magnetism performs well over a wide range in parameter space, and its performance reflects the structure of the solution space of random k-QSAT. Simulated annealing exhibits metastability in similar “hard” regions of parameter space; and (iii) a generalization of belief propagation algorithms introduced for classical problems to the case of continuous spins. This yields both approximate solutions, as well as insights into the free energy “landscape” of the approximation problem, including a so-called dynamical transition near the satisfiability threshold. Taken together, these results allow us to elucidate the phase diagram of random k-QSAT in a two-dimensional energy-density-clause-density space.
Methods for optimizing the structure alphabet sequences of proteins.
Dong, Qi-wen; Wang, Xiao-long; Lin, Lei
2007-11-01
Protein structure prediction based on fragment assembly has made great progress in recent years, and local protein structure prediction is receiving increased attention. One essential step of local protein structure prediction is that three-dimensional conformations must be compressed into a one-dimensional series of letters from a structural alphabet. The traditional method assigns each structure fragment the structure-alphabet letter with the best local structure similarity; however, such a locally optimal structure alphabet sequence is not guaranteed to produce the globally optimal structure. This study presents two efficient methods that try to find the optimal structure alphabet sequence, which can model native structures as accurately as possible. First, a 28-letter structure alphabet is derived by clustering fragments in Cartesian space with a fragment length of seven residues. The average quantization error of the 28 letters is 0.82 Å in terms of root mean square deviation. Then, two efficient methods are presented to encode protein structures into series of structure-alphabet letters: a greedy algorithm and a dynamic programming algorithm. They are tested on the PDB database using the structure alphabet developed in Cartesian coordinate space (our structure alphabet) and in torsion angle space (the PB structure alphabet), respectively. The experimental results show that these two methods can find approximately optimal structure alphabet sequences by searching a small fraction of the modeling space. The traditional local-optimization method achieves a 26.27 Å root mean square deviation between the reconstructed structures and the native ones, while the modeling accuracy is improved to 3.28 Å by the greedy algorithm. The results are helpful for local protein structure prediction. PMID:17493604
NASA Astrophysics Data System (ADS)
Brochero, D.; Anctil, F.; Gagné, C.
2012-04-01
In input selection (or feature selection), modellers are interested in identifying the k of the d dimensions that provide the most information. In hydrology, this problem is particularly relevant when dealing with temporally and spatially distributed data such as radar rainfall estimates or meteorological ensemble forecasts. The most common approaches for input determination of artificial neural networks (ANNs) in water resources are cross-correlation, heuristics, embedding window analysis (chaos theory), and sensitivity analyses. We resorted here to Forward Greedy Selection (FGS), a sensitivity analysis, for identifying the inputs that maximize the performance of ANN forecasting. The forecasting model consists of a pool of ANNs with different structures, initial weights, and training data subsets. The stacked ANN model was set up through the joint use of stop training and a special type of boosting for regression known as AdaBoost.RT. Several ANNs are then used in series, each one exploiting, with incremental probability, data with relative estimation error higher than a pre-set threshold value. The global estimate is then obtained from the aggregation of the estimates of the models (here the median value). Two schemes are compared here, which differ in their input type. The first scheme looks at lagged radar rainfall estimates averaged over the entire catchment (the average scenario), while the second deals with the spatial variation fields of the radar rainfall estimates (the distributed scenario). Results lead to three major findings. First, the stacked ANN response outperforms the best single ANN (as in many other reports). Second, a positive gain in the test subset of around 20%, when compared to the average scenario, is observed in the distributed scenario. However, the most important result of the selection process is the final structure of the inputs, which for the distributed scenario clearly outlines the areas with the greatest impact on forecasting in terms of the
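Forward Greedy Selection itself admits a compact sketch: starting from an empty input set, repeatedly add the candidate input whose inclusion most improves a user-supplied score. Here `score` is a placeholder for the cross-validated forecast skill of the retrained ANN pool; the function names and additive toy score in the test are assumptions for illustration:

```python
def forward_greedy_selection(features, score, k):
    """Forward greedy (sequential forward) input selection.

    features: candidate input identifiers
    score:    callable evaluating a feature subset (higher is better),
              e.g. cross-validated skill of a model trained on that subset
    k:        number of inputs to select
    """
    selected = []
    remaining = list(features)
    for _ in range(k):
        best, best_score = None, float('-inf')
        for f in remaining:
            s = score(selected + [f])  # retrain/evaluate with f added
            if s > best_score:
                best, best_score = f, s
        selected.append(best)
        remaining.remove(best)
    return selected
```

Each round costs one model evaluation per remaining candidate, so FGS scans only O(k*d) of the 2^d subsets; the trade-off is that, like any greedy scheme, it can miss inputs that are only informative jointly.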
Optimal transport on supply-demand networks
NASA Astrophysics Data System (ADS)
Chen, Yu-Han; Wang, Bing-Hong; Zhao, Li-Chao; Zhou, Changsong; Zhou, Tao
2010-06-01
In the literature, transport networks are usually treated as homogeneous networks, that is, every node has the same function, simultaneously providing and requiring resources. However, some real networks, such as power grids and supply chain networks, show a far different scenario in which nodes are classified into two categories: supply nodes provide some kinds of services, while demand nodes require them. In this paper, we propose a general transport model for these supply-demand networks, associated with a criterion to quantify their transport capacities. In a supply-demand network with heterogeneous degree distribution, its transport capacity strongly depends on the locations of supply nodes. We therefore design a simulated annealing algorithm to find the near-optimal configuration of supply nodes, which remarkably enhances the transport capacity compared with a random configuration and outperforms the degree target algorithm, the betweenness target algorithm, and the greedy method. This work provides a starting point for systematically analyzing and optimizing transport dynamics on supply-demand networks.
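A simulated annealing search over supply-node placements can be sketched as follows. The swap move and the `capacity` callback are generic stand-ins for the paper's transport-capacity criterion, and the temperature schedule is an assumed choice:

```python
import math
import random

def anneal_supply_nodes(nodes, n_supply, capacity, steps=5000, t0=1.0, cooling=0.999):
    """Simulated annealing over supply-node configurations.

    Move: swap one supply node for a non-supply node.
    Accept uphill moves always, downhill moves with Boltzmann probability.
    capacity(config) is the objective to maximize.
    """
    config = set(random.sample(list(nodes), n_supply))
    best, best_val = set(config), capacity(config)
    t = t0
    for _ in range(steps):
        out = random.choice(list(config))
        inn = random.choice([n for n in nodes if n not in config])
        new = (config - {out}) | {inn}
        delta = capacity(new) - capacity(config)
        if delta >= 0 or random.random() < math.exp(delta / t):
            config = new
            if capacity(config) > best_val:
                best, best_val = set(config), capacity(config)
        t *= cooling  # geometric cooling schedule
    return best, best_val
```

The occasional acceptance of capacity-reducing swaps is what lets the search escape the local optima that trap the greedy and degree/betweenness target baselines mentioned in the abstract.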
Improving IMRT-plan quality with MLC leaf position refinement post plan optimization
Niu, Ying; Zhang, Guowei; Berman, Barry L.; Parke, William C.; Yi, Byongyong; Yu, Cedric X.
2012-01-01
Purpose: In intensity-modulated radiation therapy (IMRT) planning, reducing the pencil-beam size may lead to a significant improvement in dose conformity, but also increases the time needed for dose calculation and plan optimization. The authors develop and evaluate a postoptimization refinement (POpR) method, which makes fine adjustments to the multileaf collimator (MLC) leaf positions after plan optimization, enhancing the spatial precision and improving the plan quality without a significant impact on the computational burden. Methods: The authors’ POpR method is implemented using a commercial treatment planning system based on direct aperture optimization. After an IMRT plan is optimized using pencil beams with a regular pencil-beam step size, a greedy search is conducted by looping through all of the involved MLC leaves to see if moving the MLC leaf in or out by half of a pencil-beam step size will improve the objective function value. The half-sized pencil beams, which are used for updating the dose distribution in the greedy search, are derived from the existing full-sized pencil beams without the need for further pencil-beam dose calculations. A benchmark phantom case and a head-and-neck (HN) case are studied for testing the authors’ POpR method. Results: Using the benchmark phantom and the HN case, the authors have verified that their POpR method can be an efficient technique in the IMRT planning process. Effectiveness of POpR is confirmed by noting significant improvements in objective function values. Dosimetric benefits of POpR are comparable to those of using a finer pencil-beam size from the optimization start, but with far less computation and time. Conclusions: POpR is a feasible and practical method to significantly improve IMRT-plan quality without compromising planning efficiency. PMID:22894437
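Stripped of the dosimetry, the POpR search is a greedy coordinate refinement: sweep over all adjustable positions, keep any half-step move that lowers the objective, and repeat until no single move helps. A generic sketch with plain coordinates standing in for MLC leaf positions (the function names are illustrative, not from the paper):

```python
def greedy_refine(x, objective, step):
    """Greedy post-optimization refinement by fixed-size coordinate moves.

    x:         list of adjustable positions (MLC leaves in the POpR setting)
    objective: callable to minimize
    step:      move size (half the pencil-beam step size in the paper)
    Sweeps until no single +/- step move improves the objective.
    """
    x = list(x)
    improved = True
    while improved:
        improved = False
        for i in range(len(x)):
            base = objective(x)
            for delta in (step, -step):
                x[i] += delta
                if objective(x) < base:
                    improved = True
                    break  # keep this move and go to the next position
                x[i] -= delta  # revert a non-improving move
    return x
```

Each accepted move strictly lowers the objective, so the sweep terminates; the paper's efficiency comes from evaluating each trial move with precomputed half-sized pencil beams rather than a fresh dose calculation.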
An Automated, Multi-Step Monte Carlo Burnup Code System.
TRELLUE, HOLLY R.
2003-07-14
Version 02 MONTEBURNS Version 2 calculates coupled neutronic/isotopic results for nuclear systems and produces a large number of criticality and burnup results based on various material feed/removal specifications, power(s), and time intervals. MONTEBURNS is a fully automated tool that links the LANL MCNP Monte Carlo transport code with a radioactive decay and burnup code. Highlights on changes to Version 2 are listed in the transmittal letter. Along with other minor improvements in MONTEBURNS Version 2, the option was added to use CINDER90 instead of ORIGEN2 as the depletion/decay part of the system. CINDER90 is a multi-group depletion code developed at LANL and is not currently available from RSICC. This MONTEBURNS release was tested with various combinations of CCC-715/MCNPX 2.4.0, CCC-710/MCNP5, CCC-700/MCNP4C, CCC-371/ORIGEN2.2, ORIGEN2.1 and CINDER90. Perl is required software and is not included in this distribution. MCNP, ORIGEN2, and CINDER90 are not included.
Information processing in multi-step signaling pathways
NASA Astrophysics Data System (ADS)
Ganesan, Ambhi; Hamidzadeh, Archer; Zhang, Jin; Levchenko, Andre
Information processing in complex signaling networks is limited by a high degree of variability in the abundance and activity of biochemical reactions (biological noise) operating in living cells. In this context, it is particularly surprising that many signaling pathways found in eukaryotic cells are composed of long chains of biochemical reactions, which are expected to be subject to accumulating noise and delayed signal processing. Here, we challenge the notion that signaling pathways are insulated chains, and rather view them as parts of extensively branched networks, which can benefit from a low degree of interference between signaling components. We further establish conditions under which this pathway organization would limit noise accumulation, and provide evidence for this type of signal processing in an experimental model of a calcium-activated MAPK cascade. These results address the long-standing problem of diverse organization and structure of signaling networks in live cells.
A variable multi-step method for transient heat conduction
NASA Technical Reports Server (NTRS)
Smolinski, Patrick
1991-01-01
A variable explicit time integration algorithm is developed for unsteady diffusion problems. The algorithm uses nodal partitioning and allows the nodal groups to be updated with different time steps. The stability of the algorithm is analyzed using energy methods and critical time steps are found in terms of element eigenvalues with no restrictions on element types. Several numerical examples are given to illustrate the accuracy of the method.
Multi-step heater deployment in a subsurface formation
Mason, Stanley Leroy
2012-04-03
A method for installing a horizontal or inclined subsurface heater includes placing a heating section of a heater in a horizontal or inclined section of a wellbore with an installation tool. The tool is uncoupled from the heating section. A lead-in section is mechanically and electrically coupled to the heating section of the heater. The lead-in section is located in an angled or vertical section of the wellbore.
An Automated, Multi-Step Monte Carlo Burnup Code System.
Energy Science and Technology Software Center (ESTSC)
2003-07-14
Version 02 MONTEBURNS Version 2 calculates coupled neutronic/isotopic results for nuclear systems and produces a large number of criticality and burnup results based on various material feed/removal specifications, power(s), and time intervals. MONTEBURNS is a fully automated tool that links the LANL MCNP Monte Carlo transport code with a radioactive decay and burnup code. Highlights on changes to Version 2 are listed in the transmittal letter. Along with other minor improvements in MONTEBURNS Version 2, the option was added to use CINDER90 instead of ORIGEN2 as the depletion/decay part of the system. CINDER90 is a multi-group depletion code developed at LANL and is not currently available from RSICC. This MONTEBURNS release was tested with various combinations of CCC-715/MCNPX 2.4.0, CCC-710/MCNP5, CCC-700/MCNP4C, CCC-371/ORIGEN2.2, ORIGEN2.1 and CINDER90. Perl is required software and is not included in this distribution. MCNP, ORIGEN2, and CINDER90 are not included.
48 CFR 15.202 - Advisory multi-step process.
Code of Federal Regulations, 2010 CFR
2010-10-01
... offerors to submit information that allows the Government to advise the offerors about their potential to... concept, past performance, and limited pricing information). At a minimum, the notice shall contain..., notwithstanding the advice provided by the Government in response to their submissions, they may participate...
Peloid Mud: a multi-step maturation analysis
NASA Astrophysics Data System (ADS)
Redolfi, M.
2013-12-01
The aim of this work is to understand the processes involved in the maturation of artificial peloid muds commonly used in thermal spas. I prepared a standard protocol for analysis: XRD, chemical analysis, heat capacity and heavy-metal sequential extraction. I also prepared 12 artificial peloid muds following the procedure described in Veniale et al. (2004), mixing natural thermal waters from the Lazio region with a common clay, also collected in the Lazio region, at a ratio of 1:1 by weight, and kept the muds in a sealed box at 40 °C for the whole maturation process without remixing. Each peloid mud was sampled at one, three and six months of maturation, dried at 60 °C and milled for analysis. The results at one, three and six months were compared for each mud to identify the main differences in parameters at the different maturation times.
A global optimization paradigm based on change of measures.
Sarkar, Saikat; Roy, Debasish; Vasu, Ram Mohan
2015-07-01
A global optimization framework, COMBEO (Change Of Measure Based Evolutionary Optimization), is proposed. An important aspect in the development is a set of derivative-free additive directional terms, obtainable through a change of measures en route to the imposition of any stipulated conditions aimed at driving the realized design variables (particles) to the global optimum. The generalized setting offered by the new approach also enables several basic ideas, used with other global search methods such as the particle swarm or the differential evolution, to be rationally incorporated in the proposed set-up via a change of measures. The global search may be further aided by imparting to the directional update terms additional layers of random perturbations such as 'scrambling' and 'selection'. Depending on the precise choice of the optimality conditions and the extent of random perturbation, the search can be readily rendered either greedy or more exploratory. As numerically demonstrated, the new proposal appears to provide for a more rational, more accurate and, in some cases, a faster alternative to many available evolutionary optimization schemes. PMID:26587268
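As a rough sketch of the kind of derivative-free, additive directional update the abstract describes (this is a generic illustration, not the COMBEO algorithm; the step size, noise level and search bounds are arbitrary choices):

```python
import random

def directional_search(f, dim, n_particles=20, iters=200, step=0.5, noise=0.1, seed=0):
    """Generic derivative-free search: each particle receives an additive
    directional term pointing at the current best particle, plus a random
    perturbation that keeps the search exploratory."""
    rng = random.Random(seed)
    pts = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    best = list(min(pts, key=f))          # copy: particles mutate in place
    for _ in range(iters):
        for p in pts:
            for d in range(dim):
                p[d] += step * (best[d] - p[d]) + rng.gauss(0, noise)
        cand = min(pts, key=f)
        if f(cand) < f(best):
            best = list(cand)
    return best
```

Tuning `step` and `noise` trades off greediness against exploration, mirroring the abstract's remark that the search can be rendered either greedy or more exploratory.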
Optimal stimulus scheduling for active estimation of evoked brain networks
NASA Astrophysics Data System (ADS)
Kafashan, MohammadMehdi; Ching, ShiNung
2015-12-01
Objective. We consider the problem of optimal probing to learn connections in an evoked dynamic network. Such a network, in which each edge measures an input-output relationship between sites in sensor/actuator-space, is relevant to emerging applications in neural mapping and neural connectivity estimation. Approach. We show that the problem of scheduling nodes to a probe (i.e., stimulate) amounts to a problem of optimal sensor scheduling. Main results. By formulating the evoked network in state-space, we show that the solution to the greedy probing strategy has a convenient form and, under certain conditions, is optimal over a finite horizon. We adopt an expectation maximization technique to update the state-space parameters in an online fashion and demonstrate the efficacy of the overall approach in a series of detailed numerical examples. Significance. The proposed method provides a principled means to actively probe time-varying connections in neuronal networks. The overall method can be implemented in real time and is particularly well-suited to applications in stimulation-based cortical mapping in which the underlying network dynamics are changing over time.
Optimizing spread dynamics on graphs by message passing
NASA Astrophysics Data System (ADS)
Altarelli, F.; Braunstein, A.; Dall'Asta, L.; Zecchina, R.
2013-09-01
Cascade processes are responsible for many important phenomena in natural and social sciences. Simple models of irreversible dynamics on graphs, in which nodes activate depending on the state of their neighbors, have been successfully applied to describe cascades in a large variety of contexts. Over the past decades, much effort has been devoted to understanding the typical behavior of the cascades arising from initial conditions extracted at random from some given ensemble. However, the problem of optimizing the trajectory of the system, i.e. of identifying appropriate initial conditions to maximize (or minimize) the final number of active nodes, is still considered to be practically intractable, with the only exception being models that satisfy a sort of diminishing returns property called submodularity. Submodular models can be approximately solved by means of greedy strategies, but by definition they lack cooperative characteristics which are fundamental in many real systems. Here we introduce an efficient algorithm based on statistical physics for the optimization of trajectories in cascade processes on graphs. We show that for a wide class of irreversible dynamics, even in the absence of submodularity, the spread optimization problem can be solved efficiently on large networks. Analytic and algorithmic results on random graphs are complemented by the solution of the spread maximization problem on a real-world network (the Epinions consumer reviews network).
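The greedy strategy mentioned above for submodular models can be written down directly; this is the textbook greedy for monotone submodular coverage (a stand-in for spread maximization), not the message-passing algorithm the paper introduces:

```python
def greedy_max_coverage(candidate_sets, k):
    """Classic greedy for a submodular coverage objective: repeatedly pick
    the set whose addition covers the most still-uncovered elements.
    For monotone submodular objectives this guarantees a (1 - 1/e)
    approximation to the optimum."""
    covered = set()
    chosen = []
    for _ in range(k):
        best = max(candidate_sets, key=lambda s: len(set(s) - covered))
        if not set(best) - covered:
            break                      # no remaining marginal gain
        chosen.append(best)
        covered |= set(best)
    return chosen, covered
```

The paper's point is precisely that many real cascade dynamics are *not* submodular, so this guarantee does not apply and other machinery (message passing) is needed.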
Guthier, C V; Aschenbrenner, K P; Müller, R; Polster, L; Cormack, R A; Hesser, J W
2016-08-21
This paper demonstrates that optimization strategies derived from the field of compressed sensing (CS) improve computational performance in inverse treatment planning (ITP) for high-dose-rate (HDR) brachytherapy. Following an approach applied to low-dose-rate brachytherapy, we developed a reformulation of the ITP problem with the same mathematical structure as standard CS problems. Two greedy methods, derived from hard thresholding and subspace pursuit, are presented and their performance is compared to state-of-the-art ITP solvers. Applied to clinical prostate brachytherapy plans, the new methods achieve a speed-up by a factor of 56-350 compared to state-of-the-art methods. Based on a Wilcoxon signed rank test, the novel method yields a statistically significant decrease in the final objective function value (p < 0.01). The optimization times were below one second and thus planning can be considered real-time capable. The novel CS-inspired strategy enables real-time ITP for HDR brachytherapy including catheter optimization. The generated plans are either clinically equivalent or show better performance with respect to dosimetric measures. PMID:27435044
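The hard-thresholding family of greedy CS solvers the paper builds on can be illustrated with plain iterative hard thresholding; a minimal sketch assuming a dense matrix with spectral norm at most 1, not the authors' dose-optimization formulation:

```python
def iht(A, y, k, iters=100):
    """Iterative hard thresholding for  min ||y - Ax||  s.t.  ||x||_0 <= k.
    A is a list of rows; pure-Python dense linear algebra for illustration.
    Step size 1 assumes the spectral norm of A is at most 1."""
    m, n = len(A), len(A[0])
    x = [0.0] * n
    for _ in range(iters):
        # residual r = y - A x
        r = [y[i] - sum(A[i][j] * x[j] for j in range(n)) for i in range(m)]
        # gradient step g = x + A^T r
        g = [x[j] + sum(A[i][j] * r[i] for i in range(m)) for j in range(n)]
        # hard threshold: keep the k largest-magnitude entries
        keep = set(sorted(range(n), key=lambda j: -abs(g[j]))[:k])
        x = [g[j] if j in keep else 0.0 for j in range(n)]
    return x
```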
MEC--a near-optimal online reinforcement learning algorithm for continuous deterministic systems.
Zhao, Dongbin; Zhu, Yuanheng
2015-02-01
This paper proposes the first probably approximately correct (PAC) algorithm for continuous deterministic systems that does not rely on any knowledge of the system dynamics. It combines the state aggregation technique and the efficient exploration principle, and makes efficient use of online observed samples. We use a grid to partition the continuous state space into cells in which samples are saved. A near-upper Q operator is defined to produce a near-upper Q function using the samples in each cell. The corresponding greedy policy effectively balances exploration and exploitation. Through rigorous analysis, we prove a polynomial bound on the number of non-optimal actions executed by our algorithm. After finitely many steps, the final policy is near optimal in the PAC framework. The implementation requires no knowledge of the system and has low computational complexity. Simulation studies confirm that it achieves better performance than other similar PAC algorithms. PMID:25474812
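The combination of grid-based state aggregation and optimistic value initialization can be illustrated on a toy deterministic chain; this is a simplified sketch of the general idea, not the proposed algorithm, and all constants are arbitrary:

```python
def grid_q_learning(n_cells=10, gamma=0.9, episodes=200, max_steps=100):
    """State aggregation + optimism: a 1D state is aggregated into grid
    cells; per-cell Q values start at the optimistic upper bound
    1/(1 - gamma), so the greedy policy explores untried cells before
    settling on the optimal route to the terminal goal cell."""
    actions = [1, -1]                       # move right / left one cell
    q = [[1.0 / (1.0 - gamma)] * 2 for _ in range(n_cells)]
    goal = n_cells - 1
    for _ in range(episodes):
        cell = 0
        for _ in range(max_steps):
            a = 0 if q[cell][0] >= q[cell][1] else 1
            nxt = min(max(cell + actions[a], 0), n_cells - 1)
            if nxt == goal:
                q[cell][a] = 1.0            # terminal reward
                break
            q[cell][a] = gamma * max(q[nxt])  # exact backup (deterministic)
            cell = nxt
    return q
```

Because the dynamics are deterministic, each backup is exact, and optimism alone drives the exploration that a PAC analysis would bound.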
A nested partitions framework for beam angle optimization in intensity-modulated radiation therapy.
D'Souza, Warren D; Zhang, Hao H; Nazareth, Daryl P; Shi, Leyuan; Meyer, Robert R
2008-06-21
Coupling beam angle optimization with dose optimization in intensity-modulated radiation therapy (IMRT) increases the size and complexity of an already large-scale combinatorial optimization problem. We have developed a novel algorithm, nested partitions (NP), that is capable of finding suitable beam angle sets by guiding the dose optimization process. NP is a metaheuristic that is flexible enough to guide the search of a heuristic or deterministic dose optimization algorithm. The NP method adaptively samples from the entire feasible region, or search space, and coordinates the sampling effort with a systematic partitioning of the feasible region at successive iterations, concentrating the search in promising subsets. We used a 'warm-start' approach by initiating NP with beam angle samples derived from an integer programming (IP) model. In this study, we describe our implementation of the NP framework with a commercial optimization algorithm. We compared the NP framework with equi-spaced beam angle selection, the IP method, greedy heuristic and random sampling heuristic methods. The results of the NP approach were evaluated using two clinical cases (head and neck and whole pelvis) involving the primary tumor and nodal volumes. Our results show that NP produces better quality solutions than the alternative considered methods. PMID:18523351
NASA Technical Reports Server (NTRS)
Laird, Philip
1992-01-01
We distinguish static and dynamic optimization of programs: whereas static optimization modifies a program before runtime and is based only on its syntactical structure, dynamic optimization is based on the statistical properties of the input source and examples of program execution. Explanation-based generalization is a commonly used dynamic optimization method, but its effectiveness as a speedup-learning method is limited, in part because it fails to separate the learning process from the program transformation process. This paper describes a dynamic optimization technique called a learn-optimize cycle that first uses a learning element to uncover predictable patterns in the program execution and then uses an optimization algorithm to map these patterns into beneficial transformations. The technique has been used successfully for dynamic optimization of pure Prolog.
NASA Astrophysics Data System (ADS)
Shaltev, M.
2016-02-01
The search for continuous gravitational waves in a wide parameter space at a fixed computing cost is most efficiently done with semicoherent methods, e.g., StackSlide, due to the prohibitive computing cost of the fully coherent search strategies. Prix and Shaltev [Phys. Rev. D 85, 084010 (2012)] have developed a semianalytic method for finding optimal StackSlide parameters at a fixed computing cost under ideal data conditions, i.e., gapless data and a constant noise floor. In this work, we consider more realistic conditions by allowing for gaps in the data and changes in the noise level. We show how the sensitivity optimization can be decoupled from the data selection problem. To find optimal semicoherent search parameters, we apply a numerical optimization using as an example the semicoherent StackSlide search. We also describe three different data selection algorithms. Thus, the outcome of the numerical optimization consists of the optimal search parameters and the selected data set. We first test the numerical optimization procedure under ideal conditions and show that we can reproduce the results of the analytical method. Then we gradually relax the conditions on the data and find that a compact data selection algorithm yields higher sensitivity compared to a greedy data selection procedure.
Guo, Chengan; Yang, Qingshan
2015-07-01
Finding the optimal solution to the constrained l0 -norm minimization problems in the recovery of compressive sensed signals is an NP-hard problem and it usually requires intractable combinatorial searching operations for getting the global optimal solution, unless using other objective functions (e.g., the l1 norm or lp norm) for approximate solutions or using greedy search methods for locally optimal solutions (e.g., the orthogonal matching pursuit type algorithms). In this paper, a neurodynamic optimization method is proposed to solve the l0 -norm minimization problems for obtaining the global optimum using a recurrent neural network (RNN) model. For the RNN model, a group of modified Gaussian functions are constructed and their sum is taken as the objective function for approximating the l0 norm and for optimization. The constructed objective function sets up a convexity condition under which the neurodynamic system is guaranteed to obtain the globally convergent optimal solution. An adaptive adjustment scheme is developed for improving the performance of the optimization algorithm further. Extensive experiments are conducted to test the proposed approach in this paper and the output results validate the effectiveness of the new method. PMID:25122603
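The smoothing idea behind the objective, approximating the l0 norm by a sum of inverted Gaussians, can be written down directly; a small sketch of the surrogate function only (the RNN dynamics and the convexity condition are beyond this illustration):

```python
import math

def l0_gaussian(x, sigma):
    """Smooth surrogate of the l0 norm: each coordinate contributes
    1 - exp(-x_j^2 / (2 sigma^2)), which tends to 1 for |x_j| >> sigma
    and to 0 for x_j = 0, so the sum approaches ||x||_0 as sigma -> 0."""
    return sum(1.0 - math.exp(-v * v / (2.0 * sigma * sigma)) for v in x)
```

Shrinking `sigma` during optimization (annealing) tightens the approximation, which is the usual trade-off with such surrogates: small `sigma` is accurate but nearly flat away from the axes.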
Hoffmann, Thomas J.; Zhan, Yiping; Kvale, Mark N.; Hesselson, Stephanie E.; Gollub, Jeremy; Iribarren, Carlos; Lu, Yontao; Mei, Gangwu; Purdy, Matthew M.; Quesenberry, Charles; Rowell, Sarah; Shapero, Michael H.; Smethurst, David; Somkin, Carol P.; Van den Eeden, Stephen K.; Walter, Larry; Webster, Teresa; Whitmer, Rachel A.; Finn, Andrea; Schaefer, Catherine; Kwok, Pui-Yan; Risch, Neil
2012-01-01
Four custom Axiom genotyping arrays were designed for a genome-wide association (GWA) study of 100,000 participants from the Kaiser Permanente Research Program on Genes, Environment and Health. The array optimized for individuals of European race/ethnicity was previously described. Here we detail the development of three additional microarrays optimized for individuals of East Asian, African American, and Latino race/ethnicity. For these arrays, we decreased redundancy of high-performing SNPs to increase SNP capacity. The East Asian array was designed using greedy pairwise SNP selection. However, removing SNPs from the target set based on imputation coverage is more efficient than pairwise tagging. Therefore, we developed a novel hybrid SNP selection method for the African American and Latino arrays utilizing rounds of greedy pairwise SNP selection, followed by removal from the target set of SNPs covered by imputation. The arrays provide excellent genome-wide coverage and are valuable additions for large-scale GWA studies. PMID:21903159
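Greedy pairwise tag-SNP selection is essentially greedy set cover over an LD "tagging" relation; a schematic sketch with hypothetical SNP names, not the production array-design pipeline:

```python
def greedy_tag_selection(tags, targets):
    """tags: dict snp -> set of target SNPs it captures (e.g. r^2 above a
    threshold, including itself).  Greedily pick the SNP tagging the most
    still-uncovered targets until all are covered or no SNP adds coverage."""
    uncovered = set(targets)
    picked = []
    while uncovered:
        snp = max(tags, key=lambda s: len(tags[s] & uncovered))
        gain = tags[snp] & uncovered
        if not gain:
            break
        picked.append(snp)
        uncovered -= gain
    return picked
```

The hybrid method described above alternates rounds of this kind of selection with removal of targets already covered by imputation, shrinking `targets` between rounds.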
NASA Astrophysics Data System (ADS)
Bai, Peng; Jeon, Mi Young; Ren, Limin; Knight, Chris; Deem, Michael W.; Tsapatsis, Michael; Siepmann, J. Ilja
2015-01-01
Zeolites play numerous important roles in modern petroleum refineries and have the potential to advance the production of fuels and chemical feedstocks from renewable resources. The performance of a zeolite as separation medium and catalyst depends on its framework structure. To date, 213 framework types have been synthesized and >330,000 thermodynamically accessible zeolite structures have been predicted. Hence, identification of optimal zeolites for a given application from the large pool of candidate structures is attractive for accelerating the pace of materials discovery. Here we identify, through a large-scale, multi-step computational screening process, promising zeolite structures for two energy-related applications: the purification of ethanol from fermentation broths and the hydroisomerization of alkanes with 18-30 carbon atoms encountered in petroleum refining. These results demonstrate that predictive modelling and data-driven science can now be applied to solve some of the most challenging separation problems involving highly non-ideal mixtures and highly articulated compounds.
Extraction of Optimal Spectral Bands Using Hierarchical Band Merging Out of Hyperspectral Data
NASA Astrophysics Data System (ADS)
Le Bris, A.; Chehata, N.; Briottet, X.; Paparoditis, N.
2015-08-01
Spectral optimization consists in identifying the most relevant band subset for a specific application. It is a way to reduce the huge dimensionality of hyperspectral data and can be applied to design specific superspectral sensors dedicated to particular land cover applications. Spectral optimization includes both band selection and band extraction. On the one hand, band selection aims at selecting an optimal band subset (according to a relevance criterion) among the bands of a hyperspectral data set, using automatic feature selection algorithms. On the other hand, band extraction defines the most relevant spectral bands by optimizing both their position along the spectrum and their width. The approach presented in this paper first builds a hierarchy of groups of adjacent bands, according to a relevance criterion that decides which adjacent bands must be merged. Then, band selection is performed at the different levels of this hierarchy. Two approaches are proposed to achieve this task: a greedy one, and a new adaptation of an incremental feature selection algorithm to this hierarchy of merged bands.
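The bottom-up construction of the band hierarchy can be sketched with a toy relevance criterion; here adjacent groups are merged when their mean values are closest, which merely stands in for the paper's criterion:

```python
def merge_adjacent_bands(band_means, n_groups):
    """Bottom-up merging: start with one group per band and repeatedly merge
    the two *adjacent* groups with the closest means until n_groups remain.
    Returns the list of (start, end) band-index ranges."""
    groups = [[i, i, m] for i, m in enumerate(band_means)]  # [start, end, mean]
    while len(groups) > n_groups:
        # adjacent pair with the smallest difference of group means
        j = min(range(len(groups) - 1),
                key=lambda i: abs(groups[i][2] - groups[i + 1][2]))
        a, b = groups[j], groups[j + 1]
        na, nb = a[1] - a[0] + 1, b[1] - b[0] + 1
        merged_mean = (a[2] * na + b[2] * nb) / (na + nb)
        groups[j:j + 2] = [[a[0], b[1], merged_mean]]
    return [(g[0], g[1]) for g in groups]
```

Stopping the merging at different values of `n_groups` yields the levels of the hierarchy at which band selection can then be run.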
NASA Astrophysics Data System (ADS)
Vecherin, Sergey N.; Wilson, D. Keith; Pettit, Chris L.
2010-04-01
Determination of an optimal configuration (numbers, types, and locations) of a sensor network is an important practical problem. In most applications, complex signal propagation effects and inhomogeneous coverage preferences lead to an optimal solution that is highly irregular and nonintuitive. The general optimization problem can be strictly formulated as a binary linear programming problem. Due to the combinatorial nature of this problem, however, its strict solution requires significant computational resources (NP-complete class of complexity) and is unobtainable for large spatial grids of candidate sensor locations. For this reason, a greedy algorithm for approximate solution was recently introduced [S. N. Vecherin, D. K. Wilson, and C. L. Pettit, "Optimal sensor placement with terrain-based constraints and signal propagation effects," Unattended Ground, Sea, and Air Sensor Technologies and Applications XI, SPIE Proc. Vol. 7333, paper 73330S (2009)]. Here further extensions to the developed algorithm are presented to include such practical needs and constraints as sensor availability, coverage by multiple sensors, and wireless communication of the sensor information. Both communication and detection are considered in a probabilistic framework. Communication signal and signature propagation effects are taken into account when calculating probabilities of communication and detection. Comparison of approximate and strict solutions on reduced-size problems suggests that the approximate algorithm yields quick and good solutions, which thus justifies using that algorithm for full-size problems. Examples of three-dimensional outdoor sensor placement are provided using a terrain-based software analysis tool.
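The greedy alternative to the exact binary-program formulation can be sketched for a probabilistic-coverage objective; the detection probabilities here are illustrative, with all propagation effects abstracted into the `p_detect` table:

```python
def greedy_sensor_placement(p_detect, n_sensors):
    """p_detect[s][t]: probability that candidate sensor s detects an event
    at target point t.  Greedy: repeatedly add the sensor that most
    increases the total detection probability sum_t (1 - prod_s (1 - p))."""
    n_targets = len(p_detect[0])
    miss = [1.0] * n_targets          # running prod of (1 - p) per target
    chosen = []
    remaining = set(range(len(p_detect)))
    for _ in range(min(n_sensors, len(p_detect))):
        def gain(s):
            return sum(miss[t] * p_detect[s][t] for t in range(n_targets))
        best = max(remaining, key=gain)
        if gain(best) <= 0:
            break
        chosen.append(best)
        remaining.discard(best)
        for t in range(n_targets):
            miss[t] *= 1 - p_detect[best][t]
    return chosen
```

This marginal-gain loop is why such greedy placements run quickly even when the exact binary linear program is NP-complete in the number of candidate locations.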
Inversion and fast optimization using computational intelligence with applications to geoacoustics
NASA Astrophysics Data System (ADS)
Thompson, Benjamin Berry
With a sufficiently complex underwater acoustic model, one may produce an arbitrarily accurate reconstruction of acoustic energy propagation in any specified underwater environment. Problems arise, however, when these acoustic emulations are required in a timely manner. When many realizations of the acoustic model are required over a short period of time, model complexity prohibits any kind of fast execution of such an algorithm. Two approaches may be applied to increasing the speed of any such iterative technique: first, one may attempt to simplify or speed up the model; second, one may attempt to reduce the number of times the complex model must be executed. In this dissertation, we take both approaches for two distinct, unsolved problems in the area of geoacoustics: inversion of acoustic models for bottom parameter acquisition, and sonobuoy placement for optimal sonar coverage of a desired area; as we will see, both may be phrased as optimization problems. The primary focus of this work, however, is specifically on the use of computational intelligence to reduce the execution time of these optimization algorithms, including a very effective greedy algorithm for the placement of sonobuoys, which executes in time orders of magnitude lower than direct optimization techniques.
General optimization technique for high-quality community detection in complex networks
NASA Astrophysics Data System (ADS)
Sobolevsky, Stanislav; Campari, Riccardo; Belyi, Alexander; Ratti, Carlo
2014-07-01
Recent years have witnessed the development of a large body of algorithms for community detection in complex networks. Most of them are based upon the optimization of objective functions, among which modularity is the most common, though a number of alternatives have been suggested in the scientific literature. We present here an effective general search strategy for the optimization of various objective functions for community detection purposes. When applied to modularity, on both real-world and synthetic networks, our search strategy substantially outperforms the best existing algorithms in terms of final scores of the objective function. In terms of execution time for modularity optimization this approach also outperforms most of the alternatives present in the literature, with the exception of the fastest but usually less efficient greedy algorithms. Networks of up to 30,000 nodes can be analyzed in time spans ranging from minutes to a few hours on average workstations, making our approach readily applicable to tasks not limited by strict time constraints but requiring the quality of partitioning to be as high as possible. Some examples are presented in order to demonstrate how this quality can be affected by even relatively small changes in the modularity score, stressing the importance of optimization accuracy.
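Since the methods discussed optimize modularity, it helps to see the objective itself; a minimal computation of Newman modularity for a partition of an undirected graph (the search strategy is not shown):

```python
def modularity(edges, communities):
    """Newman modularity Q = sum_c (e_c / m - (d_c / 2m)^2) of a partition.
    edges: list of undirected (u, v) pairs; communities: dict node -> label."""
    m = len(edges)
    deg = {}
    for u, v in edges:
        deg[u] = deg.get(u, 0) + 1
        deg[v] = deg.get(v, 0) + 1
    internal = {}   # number of edges inside each community (e_c)
    for u, v in edges:
        if communities[u] == communities[v]:
            c = communities[u]
            internal[c] = internal.get(c, 0) + 1
    tot = {}        # sum of degrees per community (d_c)
    for node, d in deg.items():
        c = communities[node]
        tot[c] = tot.get(c, 0) + d
    return sum(internal.get(c, 0) / m - (tot[c] / (2 * m)) ** 2 for c in tot)
```

For two triangles joined by a single edge, splitting at the bridge gives Q = 5/14, and even small changes to the partition lower this score, which is the sensitivity the abstract refers to.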
Carver, Charles S.; Scheier, Michael F.
2014-01-01
Optimism is a cognitive construct (expectancies regarding future outcomes) that also relates to motivation: optimistic people exert effort, whereas pessimistic people disengage from effort. Study of optimism began largely in health contexts, finding positive associations between optimism and markers of better psychological and physical health. Physical health effects likely occur through differences in both health-promoting behaviors and physiological concomitants of coping. Recently, the scientific study of optimism has extended to the realm of social relations: new evidence indicates that optimists have better social connections, partly because they work harder at them. In this review, we examine the myriad ways this trait can benefit an individual, and our current understanding of the biological basis of optimism. PMID:24630971
Large-scale optimal sensor array management for target tracking
NASA Astrophysics Data System (ADS)
Tharmarasa, Ratnasingham; Kirubarajan, Thiagalingam; Hernandez, Marcel L.
2004-01-01
Large-scale sensor array management has applications in a number of target tracking problems. For example, in ground target tracking, hundreds or even thousands of unattended ground sensors (UGS) may be dropped over a large surveillance area. At any one time it may then only be possible to utilize a very small number of the available sensors at the fusion center because of bandwidth limitations. A similar situation may arise in tracking sea surface or underwater targets using a large number of sonobuoys. The general problem is then to select a subset of the available sensors in order to optimize tracking performance. The Posterior Cramer-Rao Lower Bound (PCRLB), which quantifies the obtainable accuracy of target state estimation, is used as the basis for network management. In a practical scenario with even hundreds of sensors, the number of possible sensor combinations would make it impossible to enumerate all possibilities in real-time. Efficient local (or greedy) search techniques must then be used to make the computational load manageable. In this paper we introduce an efficient search strategy for selecting a subset of the sensor array for use during each sensor change interval in multi-target tracking. Simulation results illustrating the performance of the sensor array manager are also presented.
Selecting training inputs via greedy rank covering
Buchsbaum, A.L.; Santen, J.P.H. van
1996-12-31
We present a general method for selecting a small set of training inputs, the observations of which will suffice to estimate the parameters of a given linear model. We exemplify the algorithm in terms of predicting segmental duration of phonetic-segment feature vectors in a text-to-speech synthesizer, but the algorithm will work for any linear model and its associated domain.
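The rank-covering idea can be sketched as a greedy scan that keeps only those inputs which raise the rank of the design matrix, so that the linear model's parameters become estimable from a small training set; a pure-Python illustration with a simple elimination-based rank routine (a real system would use QR or SVD):

```python
def rank(rows):
    """Matrix rank via Gaussian elimination with a small pivot tolerance."""
    a = [list(r) for r in rows]
    r = 0
    for c in range(len(a[0]) if a else 0):
        piv = next((i for i in range(r, len(a)) if abs(a[i][c]) > 1e-9), None)
        if piv is None:
            continue
        a[r], a[piv] = a[piv], a[r]
        for i in range(len(a)):
            if i != r and abs(a[i][c]) > 1e-9:
                f = a[i][c] / a[r][c]
                a[i] = [x - f * y for x, y in zip(a[i], a[r])]
        r += 1
    return r

def greedy_rank_cover(candidates, n_params):
    """Keep a candidate input (its feature row) only if it increases the
    rank of the design matrix; stop once all n_params are estimable."""
    chosen = []
    for row in candidates:
        if rank(chosen + [row]) > rank(chosen):
            chosen.append(row)
        if len(chosen) == n_params:
            break
    return chosen
```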
NASA Technical Reports Server (NTRS)
Macready, William; Wolpert, David
2005-01-01
We demonstrate a new framework for analyzing and controlling distributed systems, by solving constrained optimization problems with an algorithm based on that framework. The framework is an information-theoretic extension of conventional full-rationality game theory to allow bounded-rational agents. The associated optimization algorithm is a game in which agents control the variables of the optimization problem. They do this by jointly minimizing a Lagrangian of (the probability distribution of) their joint state. The updating of the Lagrange parameters in that Lagrangian is a form of automated annealing, one that focuses the multi-agent system on the optimal pure strategy. We present computer experiments for the k-sat constraint satisfaction problem and for unconstrained minimization of NK functions.
Practical optimization of Steiner trees via the cavity method
NASA Astrophysics Data System (ADS)
Braunstein, Alfredo; Muntoni, Anna
2016-07-01
The optimization version of the cavity method for single instances, called Max-Sum, has been applied in the past to the minimum Steiner tree problem on graphs and variants. Max-Sum has been shown experimentally to give asymptotically optimal results on certain types of weighted random graphs, and to give good solutions in short computation times for some types of real networks. However, the hypotheses behind the formulation and the cavity method itself limit substantially the class of instances on which the approach gives good results (or even converges). Moreover, in the standard model formulation, the diameter of the tree solution is limited by a predefined bound that affects both computation time and convergence properties. In this work we describe two main enhancements to the Max-Sum equations to be able to cope with optimization of real-world instances. First, we develop an alternative ‘flat’ model formulation that allows the relevant configuration space to be reduced substantially, making the approach feasible on instances with large solution diameter, in particular when the number of terminal nodes is small. Second, we propose an integration between Max-Sum and three greedy heuristics. This integration allows Max-Sum to be transformed into a highly competitive self-contained algorithm, in which a feasible solution is given at each step of the iterative procedure. Part of this development participated in the 2014 DIMACS Challenge on Steiner problems, and we report the results here. The performance of the proposed approach on the challenge was highly satisfactory: it maintained a small gap to the best bound in most cases, and obtained the best results on several instances in two different categories. We also present several improvements with respect to the version of the algorithm that participated in the competition, including new best solutions for some of the instances of the challenge.
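For context on the greedy heuristics mentioned in this abstract, a common baseline is the shortest-path Steiner heuristic: grow a tree from one terminal and repeatedly attach the closest remaining terminal along a shortest path. This is a generic sketch (assuming a connected graph given as nested dicts of edge weights), not necessarily one of the paper's three heuristics:

```python
import heapq

def dijkstra(adj, src):
    """Shortest paths from src; adj maps node -> {neighbor: weight}."""
    dist, prev, pq = {src: 0.0}, {}, [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float('inf')):
            continue  # stale queue entry
        for v, w in adj[u].items():
            nd = d + w
            if nd < dist.get(v, float('inf')):
                dist[v], prev[v] = nd, u
                heapq.heappush(pq, (nd, v))
    return dist, prev

def steiner_heuristic(adj, terminals):
    """Greedy shortest-path Steiner heuristic: start the tree at one
    terminal, then repeatedly connect the next terminal to the nearest
    node already in the tree."""
    terminals = list(terminals)
    tree, edges = {terminals[0]}, set()
    for t in terminals[1:]:
        dist, prev = dijkstra(adj, t)           # paths from t outward
        entry = min(tree, key=lambda v: dist.get(v, float('inf')))
        v = entry                                # walk entry -> ... -> t
        while v != t:
            u = prev[v]
            edges.add(frozenset((u, v)))
            tree.add(v)
            v = u
        tree.add(t)
    return tree, edges
```

On metric instances this simple heuristic is a classical 2-approximation; the point of the paper is that Max-Sum combined with such heuristics does considerably better in practice.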
Wen-Chiao Lin; Humberto E. Garcia; Tae-Sic Yoo
2011-06-01
Diagnosers for keeping track of the occurrences of special events in the framework of unreliable, partially observed discrete-event dynamical systems were developed in previous work. This paper considers observation platforms consisting of sensors that provide partial and unreliable observations and of diagnosers that analyze them. Diagnosers in observation platforms typically perform better as the sensors providing the observations become more costly or increase in number. This paper proposes a methodology for finding an observation platform that achieves an optimal balance between cost and performance, while satisfying given observability requirements and constraints. Since this problem is generally computationally hard in the framework considered, an observation platform optimization algorithm is utilized that uses two greedy heuristics, one myopic and another based on projected performances. These heuristics are sequentially executed in order to find the best observation platforms. The developed algorithm is then applied to an observation platform optimization problem for a multi-unit-operation system. Results show that improved observation platforms can be found that may significantly reduce the observation platform cost but still yield acceptable performance for correctly inferring the occurrences of special events.
A Globally Optimal Particle Tracking Technique for Stereo Imaging Velocimetry Experiments
NASA Technical Reports Server (NTRS)
McDowell, Mark
2008-01-01
An important phase of any Stereo Imaging Velocimetry experiment is particle tracking. Particle tracking seeks to identify and characterize the motion of individual particles entrained in a fluid or air experiment. We analyze a cylindrical chamber filled with water and seeded with density-matched particles. In every four-frame sequence, we identify a particle track by assigning a unique track label for each camera image. The conventional approach to particle tracking is to use an exhaustive tree-search method utilizing greedy algorithms to reduce search times. However, these types of algorithms are not optimal due to a cascade effect of incorrect decisions upon adjacent tracks. We examine the use of a guided evolutionary neural net with simulated annealing to arrive at a globally optimal assignment of tracks. The net is guided both by the minimization of the search space through the use of prior limiting assumptions about valid tracks and by a strategy which seeks to avoid high-energy intermediate states which can trap the net in a local minimum. A stochastic search algorithm is used in place of back-propagation of error to further reduce the chance of being trapped in an energy well. Global optimization is achieved by minimizing an objective function, which includes both track smoothness and particle-image utilization parameters. In this paper we describe our model and present our experimental results. We compare our results with a nonoptimizing, predictive tracker and obtain an average increase in valid track yield of 27 percent.
Sejnowski, Terrence J.; Poizner, Howard; Lynch, Gary; Gepshtein, Sergei; Greenspan, Ralph J.
2014-01-01
Human performance approaches that of an ideal observer and optimal actor in some perceptual and motor tasks. These optimal abilities depend on the capacity of the cerebral cortex to store an immense amount of information and to flexibly make rapid decisions. However, behavior only approaches these limits after a long period of learning while the cerebral cortex interacts with the basal ganglia, an ancient part of the vertebrate brain that is responsible for learning sequences of actions directed toward achieving goals. Progress has been made in understanding the algorithms used by the brain during reinforcement learning, which is an online approximation of dynamic programming. Humans also make plans that depend on past experience by simulating different scenarios, which is called prospective optimization. The same brain structures in the cortex and basal ganglia that are active online during optimal behavior are also active offline during prospective optimization. The emergence of general principles and algorithms for goal-directed behavior has consequences for the development of autonomous devices in engineering applications. PMID:25328167
Lee, John R.
1975-01-01
Optimal fluoridation has been defined as that fluoride exposure which confers maximal cariostasis with minimal toxicity and its values have been previously determined to be 0.5 to 1 mg per day for infants and 1 to 1.5 mg per day for an average child. Total fluoride ingestion and urine excretion were studied in Marin County, California, children in 1973 before municipal water fluoridation. Results showed fluoride exposure to be higher than anticipated and fulfilled previously accepted criteria for optimal fluoridation. Present and future water fluoridation plans need to be reevaluated in light of total environmental fluoride exposure. PMID:1130041
Tyteca, Eva; Vanderlinden, Kim; Favier, Maxime; Clicq, David; Cabooter, Deirdre; Desmet, Gert
2014-09-01
Linear gradient programs are very frequently used in reversed phase liquid chromatography to enhance the selectivity compared to isocratic separations. Multi-linear gradient programs, on the other hand, are only scarcely used, despite their intrinsically larger separation power. Because the gradient-conformity of the latest generation of instruments has greatly improved, a renewed interest in more complex multi-segment gradient liquid chromatography can be expected in the future, raising the need for better performing gradient design algorithms. We explored the possibilities of a new type of multi-segment gradient optimization algorithm, the so-called "one-segment-per-group-of-components" optimization strategy. In this gradient design strategy, the slope is adjusted after the elution of each individual component of the sample, letting the retention properties of the different analytes auto-guide the course of the gradient profile. Applying this method experimentally to four randomly selected test samples, the separation time could on average be reduced by about 40% compared to the best single linear gradient. Moreover, the newly proposed approach performed equally well or better than the multi-segment optimization mode of a commercial software package. Carrying out an extensive in silico study, the experimentally observed advantage could also be generalized over a statistically significant number of different 10- and 20-component samples. In addition, the newly proposed gradient optimization approach enables much faster searches than the traditional multi-step gradient design methods. PMID:25039066
NASA Technical Reports Server (NTRS)
Vanderplaats, G. N.; Chen, Xiang; Zhang, Ning-Tian
1988-01-01
The use of formal numerical optimization methods for the design of gears is investigated. To achieve this, computer codes were developed for the analysis of spur gears and spiral bevel gears. These codes calculate the life, dynamic load, bending strength, surface durability, gear weight and size, and various geometric parameters. It is necessary to calculate all such important responses because they all represent competing requirements in the design process. The codes developed here were written in subroutine form and coupled to the COPES/ADS general purpose optimization program. This code allows the user to define the optimization problem at the time of program execution. Typical design variables include face width, number of teeth, and diametral pitch. The user is free to choose any calculated response as the design objective to minimize or maximize and may impose lower and upper bounds on any calculated responses. Typical examples include life maximization with limits on dynamic load, stress, weight, etc., or minimization of weight subject to limits on life, dynamic load, etc. The research codes were written in modular form for easy expansion and so that they could be combined to create a multiple-reduction optimization capability in the future.
Fixed-Point Optimization of Atoms and Density in DFT.
Marks, L D
2013-06-11
I describe an algorithm for simultaneous fixed-point optimization (mixing) of the density and atomic positions in Density Functional Theory calculations which is approximately twice as fast as conventional methods, is robust, and requires minimal to no user intervention or input. The underlying numerical algorithm differs from ones previously proposed in a number of aspects and is an autoadaptive hybrid of standard Broyden methods. To understand how the algorithm works in terms of the underlying quantum mechanics, the concept of algorithmic greed for different Broyden methods is introduced, leading to the conclusion that the first Broyden method is optimal if a linear model holds, and the second if a linear model is a poor approximation. How this relates to the algorithm is discussed in terms of electronic phase transitions during a self-consistent run, which result in discontinuous changes in the Jacobian. This leads to the need for a nongreedy algorithm when the charge density crosses phase boundaries, as well as a greedy algorithm within a given phase. An ansatz for selecting the algorithm structure is introduced based upon requiring the extrapolated component of the curvature condition to have projected positive eigenvalues. The general convergence of the fixed-point methods is briefly discussed in terms of the dielectric response and elastic waves using known results for quasi-Newton methods. The analysis indicates that both should show sublinear dependence with system size, depending more upon the number of different chemical environments than upon the number of atoms, consistent with the performance of the algorithm and prior literature. This is followed by details of the algorithm, ranging from preconditioning to trust region control. A number of results are shown, finishing up with a discussion of some of the many open questions. PMID:26583869
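As a baseline for contrast with the Broyden mixing described in this abstract, the simplest fixed-point scheme is damped linear mixing, x ← x + α(F(x) − x). A sketch of that baseline only (the paper's multisecant Broyden method is considerably faster and more robust):

```python
def fixed_point_mix(F, x0, alpha=0.3, tol=1e-10, max_iter=500):
    """Damped ('linear mixing') fixed-point iteration for F(x) = x.

    Takes a step of size alpha toward F(x) at each iteration; small alpha
    trades speed for stability, which is the trade-off Broyden-type
    mixers remove by building up Jacobian information.
    Returns (approximate fixed point, iterations used).
    """
    x = x0
    for i in range(max_iter):
        fx = F(x)
        if abs(fx - x) < tol:   # residual small enough: converged
            return x, i
        x = x + alpha * (fx - x)
    return x, max_iter
```

For example, iterating F = cos from x0 = 1.0 converges to the Dottie number (the unique real fixed point of cosine) in a few dozen steps.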
NASA Astrophysics Data System (ADS)
Mugunthan, P.; Shoemaker, C. A.; Regis, R. G.
2003-12-01
Heuristic and function approximation optimization methods were applied in calibrating biological and biokinetic parameters for a computationally expensive groundwater bioremediation model of engineered reductive dechlorination of chlorinated ethenes. Multi-species groundwater bioremediation models that use Monod-type kinetics are often not amenable to traditional derivative-based optimization due to stiff biokinetic equations. The performance of three heuristic methods, Stochastic Greedy Search (GS), Real Genetic Algorithm (RGA), and Derandomized Evolution Strategy (DES), and of Function Approximation Optimization based on Radial Basis Functions (FA-RBF) was compared on three-dimensional hypothetical and field problems. GS was implemented so as to perform a more global search. Optimization results on the hypothetical problem indicated that FA-RBF performed statistically significantly better than the heuristic evolutionary algorithms at a 10% significance level. Further, this particular implementation of GS performed well and proved superior to RGA. These heuristic methods and FA-RBF, with the exception of RGA, were applied to calibrate biological and biokinetic parameters using treatability test data for enhanced bioremediation at a Naval Air Station in Alameda Point, CA. All three methods performed well and identified similar solutions. The approximate simulation times for the hypothetical and real problems were 7 min and 2.5 hours, respectively. Calibration of such computationally expensive models by heuristic and function approximation methods appears promising.
Stoms, David M.; Davis, Frank W.
2014-01-01
Quantitative methods of spatial conservation prioritization have traditionally been applied to issues in conservation biology and reserve design, though their use in other types of natural resource management is growing. The utility maximization problem is one form of a covering problem where multiple criteria can represent the expected social benefits of conservation action. This approach allows flexibility with a problem formulation that is more general than typical reserve design problems, though the solution methods are very similar. However, few studies have addressed optimization in utility maximization problems for conservation planning, and the effect of solution procedure is largely unquantified. Therefore, this study mapped five criteria describing elements of multifunctional agriculture to determine a hypothetical conservation resource allocation plan for agricultural land conservation in the Central Valley of CA, USA. We compared solution procedures within the utility maximization framework to determine the difference between an open source integer programming approach and a greedy heuristic, and find gains from optimization of up to 12%. We also model land availability for conservation action as a stochastic process and determine the decline in total utility compared to the globally optimal set using both solution algorithms. Our results are comparable to other studies illustrating the benefits of optimization for different conservation planning problems, and highlight the importance of maximizing the effectiveness of limited funding for conservation and natural resource management. PMID:25538868
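The greedy heuristic benchmarked against integer programming in budgeted utility-maximization problems of this kind is typically ratio-greedy selection: fund parcels in order of utility per unit cost until the budget runs out. A toy sketch with hypothetical parcel data (the study's actual criteria and solver are richer than this):

```python
def greedy_budget(parcels, budget):
    """Pick parcels by utility-per-cost ratio under a budget.

    `parcels` maps a parcel name to (utility, cost). This is the classic
    greedy heuristic for budgeted selection; an integer-programming solver
    would find the exact optimum, which is where the study's reported
    gains from optimization (up to 12%) come from.
    """
    chosen, spent, utility = [], 0.0, 0.0
    ranked = sorted(parcels.items(),
                    key=lambda kv: kv[1][0] / kv[1][1], reverse=True)
    for name, (u, c) in ranked:
        if spent + c <= budget:   # fund it only if it still fits
            chosen.append(name)
            spent += c
            utility += u
    return chosen, utility
```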
Sparse and optimal acquisition design for diffusion MRI and beyond
Koay, Cheng Guan; Özarslan, Evren; Johnson, Kevin M.; Meyerand, M. Elizabeth
2012-01-01
Purpose: Diffusion magnetic resonance imaging (MRI) in combination with functional MRI promises a whole new vista for scientists to investigate noninvasively the structural and functional connectivity of the human brain—the human connectome, which had heretofore been out of reach. As with other imaging modalities, diffusion MRI data are inherently noisy and its acquisition time-consuming. Further, a faithful representation of the human connectome that can serve as a predictive model requires a robust and accurate data-analytic pipeline. The focus of this paper is on one of the key segments of this pipeline—in particular, the development of a sparse and optimal acquisition (SOA) design for diffusion MRI multiple-shell acquisition and beyond. Methods: The authors propose a novel optimality criterion for sparse multiple-shell acquisition and quasimultiple-shell designs in diffusion MRI and a novel and effective semistochastic and moderately greedy combinatorial search strategy with simulated annealing to locate the optimum design or configuration. The goal of the optimality criteria is threefold: first, to maximize uniformity of the diffusion measurements in each shell, which is equivalent to maximal incoherence in angular measurements; second, to maximize coverage of the diffusion measurements around each radial line to achieve maximal incoherence in radial measurements for multiple-shell acquisition; and finally, to ensure maximum uniformity of diffusion measurement directions in the limiting case when all the shells are coincidental as in the case of a single-shell acquisition. The approach taken in evaluating the stability of various acquisition designs is based on the condition number and the A-optimal measure of the design matrix. Results: Even though the number of distinct configurations for a given set of diffusion gradient directions is very large in general—e.g., on the order of 10^232 for a set of 144 diffusion gradient directions, the proposed search
Finding Near-Optimal Groups of Epidemic Spreaders in a Complex Network
Moores, Geoffrey; Shakarian, Paulo; Macdonald, Brian; Howard, Nicholas
2014-01-01
In this paper, we present algorithms to find near-optimal sets of epidemic spreaders in complex networks. We extend the notion of local-centrality, a centrality measure previously shown to correspond with a node's ability to spread an epidemic, to sets of nodes by introducing combinatorial local centrality. Though we prove that finding a set of nodes that maximizes this new measure is NP-hard, good approximations are available. We show that a strictly greedy approach obtains the best approximation ratio unless P = NP and then formulate a modified version of this approach that leverages qualities of the network to achieve a faster runtime while maintaining this theoretical guarantee. We perform an experimental evaluation on samples from several different network structures which demonstrate that our algorithm maximizes combinatorial local centrality and consistently chooses the most effective set of nodes to spread infection under the SIR model, relative to selecting the top nodes using many common centrality measures. We also demonstrate that the optimized algorithm we develop scales effectively. PMID:24694693
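The strictly greedy approach with the classic (1 − 1/e)-style guarantee for monotone submodular objectives can be sketched with a simple one-hop-coverage proxy. Note the proxy objective is an assumption for illustration; the paper's combinatorial local centrality is a different, more refined measure:

```python
def greedy_spreaders(adj, k):
    """Greedily pick k seed nodes maximizing one-hop coverage.

    `adj` maps a node to its set of neighbors. Coverage (number of nodes
    that are a seed or adjacent to one) is monotone and submodular, so the
    greedy rule below enjoys the standard (1 - 1/e) approximation
    guarantee for this proxy objective.
    """
    covered, seeds = set(), []
    for _ in range(k):
        best, best_gain = None, -1
        for v in adj:
            if v in seeds:
                continue
            gain = len(({v} | adj[v]) - covered)  # marginal new coverage
            if gain > best_gain:
                best, best_gain = v, gain
        seeds.append(best)
        covered |= {best} | adj[best]
    return seeds, covered
```

On a star plus a separate pair, greedy first takes the hub, then a node from the uncovered pair, exactly the diminishing-returns behavior the guarantee relies on.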
Adaptive tracking and compensation of laser spot based on ant colony optimization
NASA Astrophysics Data System (ADS)
Yang, Lihong; Ke, Xizheng; Bai, Runbing; Hu, Qidi
2009-05-01
Atmospheric absorption, scattering, and turbulence cause laser-spot twinkling, beam drift, and spot break-up when a laser signal propagates through the atmospheric channel. These phenomena seriously affect the stability and reliability of the laser-spot receiving system. To reduce the influence of atmospheric turbulence, we adopt optimal-control ideas from the field of artificial intelligence and propose a novel adaptive optics control technique: model-free optimized adaptive control. We analyze low-order wave-front error theory, employ an adaptive optical system to correct the errors, and design the corresponding adaptive system structure. The core control algorithm is an ant colony algorithm, which is characterized by positive feedback, distributed computation, and greedy heuristic search. Ant colony optimization of the adaptive optical phase compensation is simulated. Simulation results show that the algorithm can effectively control the laser energy distribution, improve beam quality, and enhance the signal-to-noise ratio of the received signal.
NASA Astrophysics Data System (ADS)
Handels, Heinz; Ross, Th; Kreusch, J.; Wolff, H. H.; Poeppl, S. J.
1998-06-01
A new approach to computer supported recognition of melanoma and naevocytic naevi based on high resolution skin surface profiles is presented. Profiles are generated by sampling an area of 4 × 4 mm² at a resolution of 125 sample points per mm with a laser profilometer at a vertical resolution of 0.1 micrometers. With image analysis algorithms, Haralick's texture parameters, Fourier features, and features based on fractal analysis are extracted. In order to improve classification performance, a subsequent feature selection process is applied to determine the best possible subset of features. Genetic algorithms are optimized for the feature selection process, and results of different approaches are compared. As quality measure for feature subsets, the error rate of the nearest neighbor classifier estimated with the leaving-one-out method is used. In comparison to heuristic strategies and greedy algorithms, genetic algorithms show the best results for the feature selection problem. After feature selection, several architectures of feed forward neural networks with error back-propagation are evaluated. Classification performance of the neural classifier is optimized using different topologies, learning parameters and pruning algorithms. The best neural classifier achieved an error rate of 4.5% and was found after network pruning. The best result overall, an error rate of 2.3%, was obtained with the nearest neighbor classifier.
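A genetic algorithm for feature-subset selection, of the general kind used in this study, can be sketched as follows. The GA operators here are generic textbook choices, and the toy fitness function in the usage stands in for the paper's leave-one-out nearest-neighbour error rate:

```python
import random

def ga_feature_select(n_features, fitness, pop_size=30, gens=40,
                      p_mut=0.05, seed=0):
    """Minimal genetic algorithm over feature-subset bit-masks.

    Tournament selection, one-point crossover, bit-flip mutation; returns
    the fittest mask in the final population. A generic sketch, not the
    tuned GA of the paper.
    """
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_features)]
           for _ in range(pop_size)]

    def tournament():
        a, b = rng.sample(pop, 2)
        return a if fitness(a) >= fitness(b) else b

    for _ in range(gens):
        nxt = []
        while len(nxt) < pop_size:
            p1, p2 = tournament(), tournament()
            cut = rng.randrange(1, n_features)      # one-point crossover
            child = p1[:cut] + p2[cut:]
            child = [bit ^ (rng.random() < p_mut)   # bit-flip mutation
                     for bit in child]
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)
```

With a toy fitness that rewards a hidden relevant subset and penalizes everything else, the population reliably drifts toward masks containing mostly the relevant features.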
Tippetts, Tyler J; Warner, Phillip B; Kukhareva, Polina V; Shields, David E; Staes, Catherine J; Kawamoto, Kensaku
2015-01-01
Given the close relationship between clinical decision support (CDS) and quality measurement (QM), it has been proposed that a standards-based CDS Web service could be leveraged to enable QM. Benefits of such a CDS-QM framework include semantic consistency and implementation efficiency. However, earlier research has identified execution performance as a critical barrier when CDS-QM is applied to large populations. Here, we describe challenges encountered and solutions devised to optimize CDS-QM execution performance. Through these optimizations, the CDS-QM execution time was reduced by approximately three orders of magnitude, such that approximately 370,000 patient records can now be evaluated for 22 quality measure groups in less than 5 hours (approximately 2 milliseconds per measure group per patient). Several key optimization methods were identified, with the most impact achieved through population-based retrieval of relevant data, multi-step data staging, and parallel processing. These optimizations have enabled CDS-QM to be operationally deployed at an enterprise level. PMID:26958259
NASA Technical Reports Server (NTRS)
Patterson, Michael J.; Mohajeri, Kayhan
1991-01-01
The preliminary results of a test program to optimize a neutralizer design for 30 cm xenon ion thrusters are discussed. The impact of neutralizer geometry, neutralizer axial location, and local magnetic fields on neutralizer performance is discussed. The effect of neutralizer performance on overall thruster performance is quantified, for thruster operation in the 0.5-3.2 kW power range. Additionally, these data are compared to data published for other north-south stationkeeping (NSSK) and primary propulsion xenon ion thruster neutralizers.
Sklarz, Shlomo E.; Tannor, David J.; Khaneja, Navin
2004-05-01
We study the problem of optimal control of dissipative quantum dynamics. Although under most circumstances dissipation leads to an increase in entropy (or a decrease in purity) of the system, there is an important class of problems for which dissipation with external control can decrease the entropy (or increase the purity) of the system. An important example is laser cooling. In such systems, there is an interplay of the Hamiltonian part of the dynamics, which is controllable, and the dissipative part of the dynamics, which is uncontrollable. The strategy is to control the Hamiltonian portion of the evolution in such a way that the dissipation causes the purity of the system to increase rather than decrease. The goal of this paper is to find the strategy that leads to maximal purity at the final time. Under the assumption that Hamiltonian control is complete and arbitrarily fast, we provide a general framework by which to calculate optimal cooling strategies. These assumptions lead to a great simplification, in which the control problem can be reformulated in terms of the spectrum of eigenvalues of ρ, rather than ρ itself. By combining this formulation with the Hamilton-Jacobi-Bellman theorem we are able to obtain an equation for the globally optimal cooling strategy in terms of the spectrum of the density matrix. For the three-level Λ system, we provide a complete analytic solution for the optimal cooling strategy. For this system it is found that the optimal strategy does not exploit system coherences and is a 'greedy' strategy, in which the purity is increased maximally at each instant.
SIAM conference on optimization
Not Available
1992-05-10
Abstracts are presented of 63 papers on the following topics: large-scale optimization, interior-point methods, algorithms for optimization, problems in control, network optimization methods, and parallel algorithms for optimization problems.
Mdluli, Thembi; Buzzard, Gregery T; Rundell, Ann E
2015-09-01
This model-based design of experiments (MBDOE) method determines the input magnitudes of an experimental stimuli to apply and the associated measurements that should be taken to optimally constrain the uncertain dynamics of a biological system under study. The ideal global solution for this experiment design problem is generally computationally intractable because of parametric uncertainties in the mathematical model of the biological system. Others have addressed this issue by limiting the solution to a local estimate of the model parameters. Here we present an approach that is independent of the local parameter constraint. This approach is made computationally efficient and tractable by the use of: (1) sparse grid interpolation that approximates the biological system dynamics, (2) representative parameters that uniformly represent the data-consistent dynamical space, and (3) probability weights of the represented experimentally distinguishable dynamics. Our approach identifies data-consistent representative parameters using sparse grid interpolants, constructs the optimal input sequence from a greedy search, and defines the associated optimal measurements using a scenario tree. We explore the optimality of this MBDOE algorithm using a 3-dimensional Hes1 model and a 19-dimensional T-cell receptor model. The 19-dimensional T-cell model also demonstrates the MBDOE algorithm's scalability to higher dimensions. In both cases, the dynamical uncertainty region that bounds the trajectories of the target system states were reduced by as much as 86% and 99% respectively after completing the designed experiments in silico. Our results suggest that for resolving dynamical uncertainty, the ability to design an input sequence paired with its associated measurements is particularly important when limited by the number of measurements. PMID:26379275
NASA Astrophysics Data System (ADS)
Allahverdyan, Armen E.; Hovhannisyan, Karen; Mahler, Guenter
2010-05-01
We study a refrigerator model which consists of two n-level systems interacting via a pulsed external field. Each system couples to its own thermal bath at temperatures Th and Tc, respectively (θ ≡ Tc/Th < 1). The refrigerator functions in two steps: thermally isolated interaction between the systems driven by the external field, and isothermal relaxation back to equilibrium. There is a complementarity between the power of heat transfer from the cold bath and the efficiency: the latter vanishes when the former is maximized and vice versa. A reasonable compromise is achieved by optimizing the product of the heat power and efficiency over the Hamiltonian of the two systems. The efficiency is then found to be bounded from below by ζCA = 1/√(1-θ) - 1 (an analog of the Curzon-Ahlborn efficiency), besides being bounded from above by the Carnot efficiency ζC = 1/(1-θ) - 1. The lower bound is reached in the equilibrium limit θ → 1. The Carnot bound is reached (for a finite power and a finite amount of heat transferred per cycle) for ln n ≫ 1. If the above maximization is constrained by assuming homogeneous energy spectra for both systems, the efficiency is bounded from above by ζCA and converges to it for n ≫ 1.
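Written out in standard notation, the two efficiency bounds discussed in this abstract are (a reconstruction of the plain-text formulas; treat the exact forms as an assumption, since square roots are easily lost in text extraction):

```latex
\zeta_{\mathrm{CA}} \;=\; \frac{1}{\sqrt{1-\theta}} - 1
\;\;\le\;\; \zeta \;\;\le\;\;
\zeta_{\mathrm{C}} \;=\; \frac{1}{1-\theta} - 1 \;=\; \frac{\theta}{1-\theta},
\qquad \theta \equiv T_c/T_h < 1 .
```

The right-hand expression is the familiar Carnot coefficient of performance for a refrigerator, Tc/(Th − Tc), consistent with the Carnot bound quoted in the abstract.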
AMMOS: Automated Molecular Mechanics Optimization tool for in silico Screening
Pencheva, Tania; Lagorce, David; Pajeva, Ilza; Villoutreix, Bruno O; Miteva, Maria A
2008-01-01
Background Virtual or in silico ligand screening combined with other computational methods is one of the most promising approaches to search for new lead compounds, thereby greatly assisting the drug discovery process. Despite considerable progress made in virtual screening methodologies, available computer programs do not easily address problems such as: structural optimization of compounds in a screening library, receptor flexibility/induced fit, and accurate prediction of protein-ligand interactions. It has been shown that structural optimization of chemical compounds and post-docking optimization in multi-step structure-based virtual screening approaches help to further improve the overall efficiency of the methods. To address some of these points, we developed the program AMMOS for refining both the 3D structures of the small molecules present in chemical libraries and the predicted receptor-ligand complexes, by allowing partial to full atom flexibility through molecular mechanics optimization. Results The program AMMOS carries out an automatic procedure that allows for the structural refinement of compound collections and energy minimization of protein-ligand complexes using the open source program AMMP. The performance of our package was evaluated by comparing the structures of small chemical entities minimized by AMMOS with those minimized with the Tripos and MMFF94s force fields. Next, AMMOS was used for fully flexible minimization of protein-ligand complexes obtained from a multi-step virtual screening. Enrichment studies of the selected pre-docked complexes containing 60% of the initially added inhibitors were carried out with or without final AMMOS minimization on two protein targets having different binding pocket properties. AMMOS was able to improve the enrichment after the pre-docking stage, with 40 to 60% of the initially added active compounds found in the top 3% to 5% of the entire compound collection. Conclusion The open source AMMOS
Optimization and scale-up of a fluid bed tangential spray rotogranulation process.
Bouffard, J; Dumont, H; Bertrand, F; Legros, R
2007-04-20
The production of pellets in the pharmaceutical industry generally involves multi-step processing: (1) mixing, (2) wet granulation, (3) spheronization and (4) drying. While extrusion-spheronization processes have been popular because of their simplicity, fluid-bed rotogranulation (FBRG) is now being considered as an alternative, since it offers the advantages of combining the different steps into one processing unit, thus reducing processing time and material handling. This work aimed at the development of a FBRG process for the production of pellets in a 4.5-l Glatt GCPG1 tangential spray rotoprocessor and its optimization using factorial design. The factors considered were: (1) rotor disc velocity, (2) gap air pressure, (3) air flow rate, (4) binder spray rate and (5) atomization pressure. The pellets were characterized for their physical properties by measuring size distribution, roundness and flow properties. The results indicated that: pellet mean particle size is negatively affected by air flow rate and rotor plate speed, while binder spray rate has a positive effect on size; pellet flow properties are enhanced by operating with increased air flow rate and worsened with increased binder spray rate. Multiple regression analysis enabled the identification of an optimal operating window for production of acceptable pellets. Scale-up of these operating conditions was tested in a 30-l Glatt GPCG15 FBRG. PMID:17166677
Optimization of propranolol HCl release kinetics from press coated sustained release tablets.
Ali, Adel Ahmed; Ali, Ahmed Mahmoud
2013-01-01
Press-coated sustained release tablets offer a valuable, cheap and easily manufactured alternative to the highly expensive, multi-step manufacture and filling of coated beads. In this study, propranolol HCl press-coated tablets were prepared using hydroxypropylmethylcellulose (HPMC) as the tablet coating material together with carbopol 971P and compressol as release modifiers. The prepared formulations were optimized for zero-order release using an artificial neural network program (INForm, Intelligensys Ltd, North Yorkshire, UK). Typical zero-order release kinetics with an extended release profile for more than 12 h were obtained. The most important variables considered by the program in optimizing formulations were the type and proportion of the polymer mixture in the coat layer and the distribution ratio of drug between core and coat. The key elements found were: incorporation of 31-38% of the drug in the coat, and fixing the amount of polymer in the coat at not less than 50% of the coat layer. Optimum zero-order release kinetics (linear regression r2 = 0.997 and Peppas model n value > 0.80) were obtained when 2.5-10% carbopol and 25-42.5% compressol were incorporated into the 50% HPMC coat layer. PMID:22582904
A Hybrid Optimization Framework with POD-based Order Reduction and Design-Space Evolution Scheme
NASA Astrophysics Data System (ADS)
Ghoman, Satyajit S.
The main objective of this research is to develop an innovative multi-fidelity multi-disciplinary design, analysis and optimization suite that integrates certain solution generation codes and newly developed innovative tools to improve the overall optimization process. The research performed herein is divided into two parts: (1) the development of an MDAO framework by integration of variable-fidelity physics-based computational codes, and (2) enhancements to such a framework by incorporating innovative features extending its robustness. The first part of this dissertation describes the development of a conceptual Multi-Fidelity Multi-Strategy and Multi-Disciplinary Design Optimization Environment (M3DOE), in the context of aircraft wing optimization. M3DOE provides the user a capability to optimize configurations with a choice of (i) the level of fidelity desired, (ii) the use of a single-step or multi-step optimization strategy, and (iii) a combination of a series of structural and aerodynamic analyses. The modularity of M3DOE allows it to be a part of other inclusive optimization frameworks. M3DOE is demonstrated within the context of shape and sizing optimization of the wing of a Generic Business Jet aircraft. Two different optimization objectives, viz. dry-weight minimization and cruise-range maximization, are studied by conducting one low-fidelity and two high-fidelity optimization runs to demonstrate the application scope of M3DOE. The second part of this dissertation describes the development of an innovative hybrid optimization framework that extends the robustness of M3DOE by employing a proper orthogonal decomposition-based design-space order reduction scheme combined with the evolutionary algorithm technique. The POD method of extracting dominant modes from an ensemble of candidate configurations is used for the design-space order reduction. The snapshot of candidate population is updated iteratively using evolutionary algorithm technique of
Temporal variability of the optimal monitoring setup assessed using information theory
NASA Astrophysics Data System (ADS)
Fahle, Marcus; Hohenbrink, Tobias L.; Dietrich, Ottfried; Lischeid, Gunnar
2015-09-01
Hydrology is rich in methods that use information theory to evaluate monitoring networks. Yet most existing studies use only the available data set as a whole, which neglects the intra-annual variability of the hydrological system. In this paper, we demonstrate how this variability can be considered by extending monitoring evaluation to subsets of the available data. To this end, we separately evaluated time windows of fixed length, which were shifted through the data set, and successively extended time windows. We used basic information theory measures and a greedy ranking algorithm based on the criterion of maximum information/minimum redundancy. The network investigated monitored surface water and groundwater levels at quarter-hourly intervals and was located at an artificially drained lowland site in the Spreewald region in north-east Germany. The results revealed that some of the monitoring stations were of value permanently while others were needed only temporarily. The prevailing meteorological conditions, particularly the amount of precipitation, affected the degree of similarity between the water levels measured. The hydrological system tended to act more individually during periods of no or little rainfall. The optimal monitoring setup, its stability, and the monitoring effort necessary were influenced by the meteorological forcing. Altogether, the methodology presented can help achieve a monitoring network design that has a more even performance or that best covers the conditions of interest (e.g., floods or droughts).
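The greedy maximum-information/minimum-redundancy ranking described above can be sketched as follows. This is an illustrative reconstruction, not the paper's exact criterion: the histogram discretization, bin count, and the score "entropy minus summed mutual information with already-selected stations" are all assumptions.

```python
import numpy as np

def entropy(x, bins=10):
    # Shannon entropy (bits) of a series discretized into histogram bins
    p, _ = np.histogram(x, bins=bins)
    p = p[p > 0] / p.sum()
    return -(p * np.log2(p)).sum()

def joint_entropy(x, y, bins=10):
    # Joint Shannon entropy (bits) from a 2D histogram
    p, _, _ = np.histogram2d(x, y, bins=bins)
    p = p[p > 0] / p.sum()
    return -(p * np.log2(p)).sum()

def greedy_rank(series, k):
    """Rank up to k stations: at each step pick the station whose own
    entropy (information) minus its mutual information with the stations
    already selected (redundancy) is largest."""
    remaining = set(series)
    selected = []
    while remaining and len(selected) < k:
        def score(s):
            info = entropy(series[s])
            red = sum(entropy(series[s]) + entropy(series[t])
                      - joint_entropy(series[s], series[t])
                      for t in selected)   # mutual information terms
            return info - red
        best = max(remaining, key=score)
        selected.append(best)
        remaining.remove(best)
    return selected
```

With two identical water-level records and one independent record, the independent station is ranked second even though its own entropy is lower, because the duplicate is almost fully redundant.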
Optimizing Site Selection in Urban Areas in Northern Switzerland
NASA Astrophysics Data System (ADS)
Plenkers, K.; Kraft, T.; Bethmann, F.; Husen, S.; Schnellmann, M.
2012-04-01
There is a need to observe weak seismic events (M<2) in areas close to potential nuclear-waste repositories or nuclear power plants, in order to analyze the underlying seismo-tectonic processes and estimate their seismic hazard. We are therefore densifying the existing Swiss Digital Seismic Network in northern Switzerland with 20 additional stations. The new network, which will be in operation by the end of 2012, aims at observing seismicity in northern Switzerland with a completeness of M_c = 1.0 and a location error < 0.5 km in epicenter and < 2 km in focal depth. Monitoring of weak seismic events in this region is challenging, because the area of interest is densely populated and the geology is dominated by the Swiss molasse basin. An optimal network design and a thoughtful choice of station sites are therefore mandatory. To support decision making, we developed a step-wise approach to find the optimum network configuration. Our approach is based on standard network optimization techniques regarding the localization error. As a new feature, our approach uses an ambient noise model to compute expected signal-to-noise ratios for a given site. The ambient noise model uses information on land use and major infrastructure such as highways and train lines. We ran a series of network optimizations with an increasing number of stations until the requirements regarding localization error and magnitude of completeness were met. The resulting network geometry serves as input for the site selection. Site selection is done using a newly developed multi-step assessment scheme that takes into account local noise level, geology, infrastructure, and the costs necessary to realize the station. The assessment scheme weights the different parameters, and the most promising sites are identified. In a first step, all potential sites are classified based on information from topographic maps and site inspection. In a second step, local noise conditions are measured at selected sites. We
Multi-off-grid methods in multi-step integration of ordinary differential equations
NASA Technical Reports Server (NTRS)
Beaudet, P. R.
1974-01-01
Description of methods of solving first- and second-order systems of differential equations in which all derivatives are evaluated at off-grid locations in order to circumvent the Dahlquist stability limitation on the order of on-grid methods. The proposed multi-off-grid methods require off-grid state predictors for the evaluation of the n derivatives at each step. Progressing forward in time, the off-grid states are predicted using a linear combination of back on-grid state values and off-grid derivative evaluations. A comparison is made between the proposed multi-off-grid methods and the corresponding Adams and Cowell on-grid integration techniques in integrating systems of ordinary differential equations, showing a significant reduction in the error at larger step sizes in the case of the multi-off-grid integrator.
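For contrast with the off-grid methods described, here is a minimal sketch of a conventional on-grid two-step integrator of the Adams family, the baseline the report compares against. The Euler bootstrap for the first step is an implementation choice of this sketch, not something specified in the report.

```python
def adams_bashforth2(f, y0, t0, h, n):
    """Two-step Adams-Bashforth method for y' = f(t, y):
        y_{k+1} = y_k + h*(3/2 f_k - 1/2 f_{k-1}),
    an on-grid multi-step scheme (all derivatives evaluated at grid
    points).  The first step is bootstrapped with a single Euler step.
    Returns the list [y_0, y_1, ..., y_n]."""
    ys = [y0, y0 + h * f(t0, y0)]            # Euler bootstrap
    fs = [f(t0, y0), f(t0 + h, ys[1])]
    for k in range(1, n):
        y_next = ys[k] + h * (1.5 * fs[k] - 0.5 * fs[k - 1])
        ys.append(y_next)
        fs.append(f(t0 + (k + 1) * h, y_next))
    return ys
```

On the test problem y' = -y, y(0) = 1, the second-order scheme lands close to e^(-1) at t = 1 with step size h = 0.1; the off-grid methods of the report aim to keep this kind of accuracy at larger step sizes.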
Multi-step Loading of Human Minichromosome Maintenance Proteins in Live Human Cells*
Symeonidou, Ioanna-Eleni; Kotsantis, Panagiotis; Roukos, Vassilis; Rapsomaniki, Maria-Anna; Grecco, Hernán E.; Bastiaens, Philippe; Taraviras, Stavros; Lygerou, Zoi
2013-01-01
Once-per-cell cycle replication is regulated through the assembly onto chromatin of multisubunit protein complexes that license DNA for a further round of replication. Licensing consists of the loading of the hexameric MCM2–7 complex onto chromatin during G1 phase and is dependent on the licensing factor Cdt1. In vitro experiments have suggested a two-step binding mode for minichromosome maintenance (MCM) proteins, with transient initial interactions converted to stable chromatin loading. Here, we assess MCM loading in live human cells using an in vivo licensing assay on the basis of fluorescence recovery after photobleaching of GFP-tagged MCM protein subunits through the cell cycle. We show that, in telophase, MCM2 and MCM4 maintain transient interactions with chromatin, exhibiting kinetics similar to Cdt1. These are converted to stable interactions from early G1 phase. The immobile fraction of MCM2 and MCM4 increases during G1 phase, suggestive of reiterative licensing. In late G1 phase, a large fraction of MCM proteins are loaded onto chromatin, with maximal licensing observed just prior to S phase onset. Fluorescence loss in photobleaching experiments show subnuclear concentrations of MCM-chromatin interactions that differ as G1 phase progresses and do not colocalize with sites of DNA synthesis in S phase. PMID:24158436
Multi-Step Attack Detection via Bayesian Modeling under Model Parameter Uncertainty
ERIC Educational Resources Information Center
Cole, Robert
2013-01-01
Organizations in all sectors of business have become highly dependent upon information systems for the conduct of business operations. Of necessity, these information systems are designed with many points of ingress, points of exposure that can be leveraged by a motivated attacker seeking to compromise the confidentiality, integrity or…
A multi-step solvent-free mechanochemical route to indium(iii) complexes.
Wang, Jingyi; Ganguly, Rakesh; Yongxin, Li; Díaz, Jesus; Soo, Han Sen; García, Felipe
2016-05-10
Mechanochemistry is well established in the solid-phase synthesis of inorganic materials but has rarely been employed for molecular syntheses. In recent years, there has been nascent interest in 'greener' synthetic methods, with less solvent, higher yields, and shorter reaction times being especially appealing to the fine chemicals and inorganic catalyst industries. Herein, we demonstrate that main-group indium(iii) complexes featuring bis(imino)acenaphthene (BIAN) ligands are readily accessible through a mechanochemical milling approach. The synthetic methodology reported herein not only bypasses the use of large solvent quantities and transition metal reagents for ligand synthesis, but also reduces reaction times dramatically. These new main-group complexes exhibit the potential to be reduced to indium(i) compounds, which may be employed as photosensitizers in organic catalysis and functional materials. PMID:27112317
Use of DBMS in Multi-step Information Systems for LANDSAT
NASA Technical Reports Server (NTRS)
Noll, C. E.
1984-01-01
Data are obtained by the thematic mapper on LANDSAT 4 in seven bands and are telemetered and electronically recorded at a ground station, where the data must be geometrically and radiometrically corrected before a photographic image is produced. Current system characteristics for processing this information are described, including the menu for data products reports. The tracking system provides up-to-date and complete information and requires that production stages adhere to the inherent DBMS structure. The concept can be applied to any procedure requiring status information.
Modeling the Auto-Ignition of Biodiesel Blends with a Multi-Step Model
Toulson, Dr. Elisa; Allen, Casey M; Miller, Dennis J; McFarlane, Joanna; Schock, Harold; Lee, Tonghun
2011-01-01
There is growing interest in using biodiesel in place of, or in blends with, petrodiesel in diesel engines; however, biodiesel oxidation chemistry is complicated to model directly, and existing surrogate kinetic models are very large, making them computationally expensive. The present study describes a method for predicting the ignition behavior of blends of n-heptane and methyl butanoate, whose blends have been used in the past as a surrogate for biodiesel. The autoignition is predicted using a multistep (8-step) model in order to reduce computational time and make this a viable tool for implementation into engine simulation codes. A detailed reaction mechanism for n-heptane-methyl butanoate blends was used as a basis for validating the multistep model results. The ignition delay trends predicted by the multistep model for the n-heptane-methyl butanoate blends matched well with those of the detailed CHEMKIN model for the majority of conditions tested.
The Multi-Step CADIS method for shutdown dose rate calculations and uncertainty propagation
Ibrahim, Ahmad M.; Peplow, Douglas E.; Grove, Robert E.; Peterson, Joshua L.; Johnson, Seth R.
2015-12-01
Shutdown dose rate (SDDR) analysis requires (a) a neutron transport calculation to estimate neutron flux fields, (b) an activation calculation to compute radionuclide inventories and associated photon sources, and (c) a photon transport calculation to estimate final SDDR. In some applications, accurate full-scale Monte Carlo (MC) SDDR simulations are needed for very large systems with massive amounts of shielding materials. However, these simulations are impractical because calculation of space- and energy-dependent neutron fluxes throughout the structural materials is needed to estimate distribution of radioisotopes causing the SDDR. Biasing the neutron MC calculation using an importance function is not simple because it is difficult to explicitly express the response function, which depends on subsequent computational steps. Furthermore, the typical SDDR calculations do not consider how uncertainties in MC neutron calculation impact SDDR uncertainty, even though MC neutron calculation uncertainties usually dominate SDDR uncertainty.
Multi-step EMG Classification Algorithm for Human-Computer Interaction
NASA Astrophysics Data System (ADS)
Ren, Peng; Barreto, Armando; Adjouadi, Malek
A three-electrode human-computer interaction system, based on digital processing of the Electromyogram (EMG) signal, is presented. This system can effectively help disabled individuals paralyzed from the neck down to interact with computers or communicate with people through computers using point-and-click graphic interfaces. The three electrodes are placed on the right frontalis, the left temporalis and the right temporalis muscles in the head, respectively. The signal processing algorithm used translates the EMG signals during five kinds of facial movements (left jaw clenching, right jaw clenching, eyebrows up, eyebrows down, simultaneous left & right jaw clenching) into five corresponding types of cursor movements (left, right, up, down and left-click), to provide basic mouse control. The classification strategy is based on three principles: the EMG energy of one channel is typically larger than the others during one specific muscle contraction; the spectral characteristics of the EMG signals produced by the frontalis and temporalis muscles during different movements are different; the EMG signals from adjacent channels typically have correlated energy profiles. The algorithm is evaluated on 20 pre-recorded EMG signal sets, using Matlab simulations. The results show that this method provides improvements and is more robust than other previous approaches.
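The first classification principle above (one channel's EMG energy dominating during a specific contraction) can be illustrated with a toy sketch. The channel-to-movement mapping, the dominance ratio, and the labels below are hypothetical placeholders; the spectral and cross-channel-correlation criteria of the full algorithm are omitted.

```python
import numpy as np

# Hypothetical mapping for this sketch:
# channel 0 = right frontalis  -> eyebrow movements
# channel 1 = left temporalis  -> left jaw clench
# channel 2 = right temporalis -> right jaw clench

def channel_energies(window):
    """Mean squared amplitude per channel for one analysis window
    (window shape: channels x samples)."""
    return (window ** 2).mean(axis=1)

def classify(window, ratio=2.0):
    """Toy energy-dominance classifier: if one channel's energy exceeds
    the runner-up by `ratio`, attribute the movement to that channel; if
    the top two channels are comparably active (both temporalis in a
    real setup), call it a simultaneous clench; otherwise rest."""
    e = channel_energies(window)
    order = np.argsort(e)[::-1]          # channels by descending energy
    if e[order[0]] > ratio * e[order[1]]:
        return ['eyebrows', 'left jaw clench', 'right jaw clench'][order[0]]
    if e[order[1]] > ratio * e[order[2]]:
        return 'both jaws clench'
    return 'rest'
```

In the full system, the "eyebrows" case is further split into up/down using the spectral differences between frontalis and temporalis activity that the abstract mentions.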
Multi-step shot noise spectrum induced by a local large spin
NASA Astrophysics Data System (ADS)
Niu, Peng-Bin; Shi, Yun-Long; Sun, Zhu; Nie, Yi-Hang
2015-12-01
We use the non-equilibrium Green's function method to analyze the shot noise spectrum of an artificial single-molecule magnet (ASMM) model in the strong spin-orbit coupling limit in the sequential tunneling regime, mainly focusing on the effects of the local large spin. In the linear response regime, the shot noise shows 2S + 1 peaks and is strongly spin-dependent. In the nonlinear response regime, one can observe 2S + 1 steps in the shot noise and Fano factor. In these steps one can see a significant enhancement effect due to the spin-dependent multi-channel process of the local large spin, which reduces electron correlations. Project supported by the National Natural Science Foundation of China (Grant Nos. 11504210, 11504211, 11504212, 11274207, 11274208, 11174115, and 11325417), the Key Program of the Ministry of Education of China (Grant No. 212018), the Scientific and Technological Project of Shanxi Province, China (Grant No. 2015031002-2), the Natural Science Foundation of Shanxi Province, China (Grant Nos. 2013011007-2 and 2013021010-5), and the Outstanding Innovative Teams of Higher Learning Institutions of Shanxi Province, China.
Simulation of multi-steps thermal transition in 2D spin-crossover nanoparticles
NASA Astrophysics Data System (ADS)
Jureschi, Catalin-Maricel; Pottier, Benjamin-Louis; Linares, Jorge; Richard Dahoo, Pierre; Alayli, Yasser; Rotaru, Aurelian
2016-04-01
We have used an Ising-like model to study the thermal behavior of a 2D spin crossover (SCO) system embedded in a matrix. The interaction parameter between edge SCO molecules and their local environment was included in the standard Ising-like model as an additional term. The influence of the system size, and of the ratio between the number of edge molecules and the number of interior molecules, is also discussed.
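A minimal Metropolis sketch of an Ising-like SCO model of this kind follows. The Hamiltonian form is the standard one for such models (fictitious spin s = ±1 for high/low spin, ligand-field/degeneracy term (Δ − kBT ln g)/2), but all parameter values and the edge-coupling term L standing in for the matrix interaction are illustrative assumptions, not the paper's fitted values.

```python
import math
import random

def metropolis_sco(n=10, J=15.0, delta=100.0, g=50.0, L=5.0,
                   T=50.0, sweeps=200, seed=1):
    """Metropolis simulation of an Ising-like spin-crossover model on an
    n x n lattice with periodic neighbours: fictitious spin s = +1 (high
    spin) or -1 (low spin), effective field h = (delta - T*ln g)/2 (units
    with kB = 1), and an extra -L*s term on edge rows/columns standing in
    for the molecule-matrix interaction.  Returns the high-spin fraction."""
    random.seed(seed)
    s = [[-1] * n for _ in range(n)]           # start fully low-spin
    h = 0.5 * (delta - T * math.log(g))        # ligand field vs. degeneracy

    def delta_e(i, j):
        # energy change for flipping s[i][j] under
        # H = -J sum_<ij> s_i s_j + h sum_i s_i - L sum_edge s_i
        nb = sum(s[(i + di) % n][(j + dj) % n]
                 for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)))
        edge = L if i in (0, n - 1) or j in (0, n - 1) else 0.0
        return 2.0 * s[i][j] * (J * nb - h + edge)

    for _ in range(sweeps):
        for _ in range(n * n):
            i, j = random.randrange(n), random.randrange(n)
            d = delta_e(i, j)
            if d <= 0 or random.random() < math.exp(-d / T):
                s[i][j] = -s[i][j]
    return sum(v == 1 for row in s for v in row) / (n * n)
```

With these illustrative parameters the high-spin fraction switches from near 0 at low temperature to near 1 at high temperature; reproducing the multi-step transitions studied in the paper would require its actual parameter set and system sizes.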
NASA Astrophysics Data System (ADS)
Siepmann, J. Ilja; Bai, Peng; Tsapatsis, Michael; Knight, Chris; Deem, Michael W.
2015-03-01
Zeolites play numerous important roles in modern petroleum refineries and have the potential to advance the production of fuels and chemical feedstocks from renewable resources. The performance of a zeolite as separation medium and catalyst depends on its framework structure and the type or location of active sites. To date, 213 framework types have been synthesized and >330000 thermodynamically accessible zeolite structures have been predicted. Hence, identification of optimal zeolites for a given application from the large pool of candidate structures is attractive for accelerating the pace of materials discovery. Here we identify, through a large-scale, multi-step computational screening process, promising zeolite structures for two energy-related applications: the purification of ethanol beyond the ethanol/water azeotropic concentration in a single separation step from fermentation broths and the hydroisomerization of alkanes with 18-30 carbon atoms encountered in petroleum refining. These results demonstrate that predictive modeling and data-driven science can now be applied to solve some of the most challenging separation problems involving highly non-ideal mixtures and highly articulated compounds. Financial support from the Department of Energy Office of Basic Energy Sciences, Division of Chemical Sciences, Geosciences and Biosciences under Award DE-FG02-12ER16362 is gratefully acknowledged.
Particle Swarm Optimization Toolbox
NASA Technical Reports Server (NTRS)
Grant, Michael J.
2010-01-01
The Particle Swarm Optimization Toolbox is a library of evolutionary optimization tools developed in the MATLAB environment. The algorithms contained in the library include a genetic algorithm (GA), a single-objective particle swarm optimizer (SOPSO), and a multi-objective particle swarm optimizer (MOPSO). Development focused on both the SOPSO and MOPSO. A GA was included mainly for comparison purposes, and the particle swarm optimizers appeared to perform better for a wide variety of optimization problems. All algorithms are capable of performing unconstrained and constrained optimization. The particle swarm optimizers are capable of performing single- and multi-objective optimization. The SOPSO and MOPSO algorithms are based on swarming theory and bird-flocking patterns to search the trade space for the optimal solution or the optimal trade in competing objectives. The MOPSO generates Pareto fronts for objectives that are in competition. A GA, based on Darwinian evolutionary theory, is also included in the library. The GA consists of individuals that form a population in the design space. The population mates to form offspring at new locations in the design space. These offspring contain traits from both parents. The algorithm relies on this combination of parental traits to provide solutions improved over either original parent. As the algorithm progresses, individuals that hold these optimal traits will emerge as the optimal solutions. Due to the generic design of all optimization algorithms, each algorithm interfaces with a user-supplied objective function. This function serves as a "black box" to the optimizers: its only purpose is to evaluate solutions provided by the optimizers. Hence, the user-supplied function can be a numerical simulation, an analytical function, etc., since the specific detail of this function is of no concern to the optimizer. These algorithms were originally developed to support entry
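The swarm update underlying a single-objective PSO of the kind described can be sketched as follows (in Python rather than MATLAB, for self-containment). The inertia and acceleration coefficients are common textbook defaults, not values taken from the toolbox, and the bound handling (simple clamping) is an assumption.

```python
import random

def pso(f, bounds, n_particles=30, iters=100, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal single-objective PSO: each particle keeps a velocity with an
    inertia term (w), a cognitive pull toward its own best position (c1),
    and a social pull toward the swarm's best position (c2).  Minimizes f
    over box bounds [(lo, hi), ...].  Returns (best_position, best_value)."""
    random.seed(seed)
    dim = len(bounds)
    xs = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vs = [[0.0] * dim for _ in range(n_particles)]
    pbest = [x[:] for x in xs]
    pbest_val = [f(x) for x in xs]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vs[i][d] = (w * vs[i][d]
                            + c1 * r1 * (pbest[i][d] - xs[i][d])
                            + c2 * r2 * (gbest[d] - xs[i][d]))
                # clamp positions to the search box
                xs[i][d] = min(max(xs[i][d] + vs[i][d], bounds[d][0]), bounds[d][1])
            val = f(xs[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = xs[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = xs[i][:], val
    return gbest, gbest_val
```

As in the toolbox, the objective is a black box: the optimizer only ever calls `f` on candidate positions, so `f` could equally wrap a numerical simulation.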
Energy Science and Technology Software Center (ESTSC)
2007-03-01
Aristos is a Trilinos package for nonlinear continuous optimization, based on full-space sequential quadratic programming (SQP) methods. Aristos is specifically designed for the solution of large-scale constrained optimization problems in which the linearized constraint equations require iterative (i.e. inexact) linear solver techniques. Aristos' unique feature is an efficient handling of inexactness in linear system solves. Aristos currently supports the solution of equality-constrained convex and nonconvex optimization problems. It has been used successfully in the area of PDE-constrained optimization, for the solution of nonlinear optimal control, optimal design, and inverse problems.
Investigations into the Optimization of Multi-Source Strength Brachytherapy Treatment Procedures
D. L. Henderson; S. Yoo; B.R. Thomadsen
2002-09-30
The goal of this project is to investigate the use of multi-strength and multi-specie radioactive sources in permanent prostate implant brachytherapy. In order to fulfill the requirement for an optimal dose distribution, the prescribed dose should be delivered to the target in a nearly uniform dose distribution while simultaneously sparing sensitive structures. The treatment plan should use a small number of needles and sources while satisfying the treatment requirements. The hypothesis for the use of multi-strength and/or multi-specie sources is that a better treatment plan could be obtained, using fewer sources and needles, than with treatment plans using single-strength sources, reducing the overall number of sources used for treatment. We employ a recently developed greedy algorithm based on the adjoint concept as the optimization search engine. The algorithm utilizes an "adjoint ratio", which provides a means of ranking source positions, as the pseudo-objective function. It has been shown that the greedy algorithm can solve the optimization problem efficiently and arrive at a clinically acceptable solution in less than 10 seconds. Our study was inconclusive; that is, there was no combination of sources that clearly stood out from the others and could therefore be considered the preferred set of sources for treatment planning. Source strengths of 0.2 mCi (low), 0.4 mCi (medium), and 0.6 mCi (high) of ¹²⁵I in four different combinations were used for the multi-strength source study. The combination of high- and medium-strength sources achieved a more uniform target dose distribution due to fewer source implants, whereas the combination of low- and medium-strength sources achieved better sparing of sensitive tissues, including relative to the single-strength 0.4 mCi base case. ¹²⁵I at 0.4 mCi and ¹⁹²Ir at 0.12 mCi and 0.25 mCi source strengths were used for the multi-specie source study. This study also proved inconclusive. Treatment plans using a
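The greedy ranking idea (scoring each candidate source position with a pseudo-objective and repeatedly picking the best) can be sketched generically. The scoring functions below are placeholders: they do not implement the adjoint transport calculation, and the ratio shown is only shaped like an "adjoint ratio" (benefit to target over penalty to sensitive structures).

```python
def greedy_select(candidates, target_gain, sensitive_cost, needed):
    """Rank candidate source positions by a pseudo-objective
    target_gain(c) / (1 + sensitive_cost(c)) and keep the top `needed`.
    Because the placeholder scores do not change as sources are placed,
    a single sort is equivalent to repeated greedy argmax; the real
    algorithm re-ranks after each placement as dose accumulates."""
    scored = sorted(candidates,
                    key=lambda c: target_gain(c) / (1.0 + sensitive_cost(c)),
                    reverse=True)
    return scored[:needed]
```

Even this stripped-down version shows why the approach is fast: each placement is a ranking pass over candidate positions rather than a full re-optimization.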
Multidisciplinary Optimization for Aerospace Using Genetic Optimization
NASA Technical Reports Server (NTRS)
Pak, Chan-gi; Hahn, Edward E.; Herrera, Claudia Y.
2007-01-01
In support of the ARMD guidelines, NASA's Dryden Flight Research Center is developing a multidisciplinary design and optimization tool. This tool will leverage existing tools and practices, and allow the easy integration and adoption of new state-of-the-art software. Optimization has made its way into many mainstream applications. For example, NASTRAN(TradeMark) has its solution sequence 200 for design optimization, and MATLAB(TradeMark) has an Optimization Toolbox. Other packages, such as the ZAERO(TradeMark) aeroelastic panel code and the CFL3D(TradeMark) Navier-Stokes solver, have no built-in optimizer. The goal of the tool development is to generate a central executive capable of using disparate software packages in a cross-platform network environment so as to quickly perform optimization and design tasks in a cohesive, streamlined manner. A provided figure (Figure 1) shows a typical set of tools and their relation to the central executive. Optimization can take place within each individual tool, in a loop between the executive and the tool, or both.
Optimization of aerospace structures
NASA Technical Reports Server (NTRS)
Keith, Theo G., Jr.; Patnaik, Surya N.
1994-01-01
Research carried out is grouped under two topics: (1) Design Optimization, and (2) Integrated Force Method of Analysis. The Design Optimization research topics are: singularity alleviation to enhance structural optimization methods; computer-based design capability extended through substructure synthesis; and optimality criteria that provide optimum designs for a select class of structural problems. The Integrated Force Method of Analysis research topic is a boundary compatibility formulation that improves stress analysis of shell structures. Brief descriptions of the four topics are appended.
Cyclone performance and optimization
Leith, D.
1990-12-15
An empirical model for predicting pressure drop across a cyclone, developed by Dirgo (1988), is presented. The model was developed through a statistical analysis of pressure drop data for 98 cyclone designs. This model is used with the efficiency model of Iozia and Leith (1990) to develop an optimization curve which predicts the minimum pressure drop and the dimension ratios of the optimized cyclone for a given aerodynamic cut diameter, d{sub 50}. The effect of variation in cyclone height, cyclone diameter, and flow rate on the optimization is determined. The optimization results are used to develop a design procedure for optimized cyclones. 33 refs., 10 figs., 4 tabs.
McGuire-Snieckus, Rebecca
2014-01-01
Optimism is generally accepted by psychiatrists, psychologists and other caring professionals as a feature of mental health. Interventions typically rely on cognitive-behavioural tools to encourage individuals to ‘stop negative thought cycles’ and to ‘challenge unhelpful thoughts’. However, evidence suggests that most individuals have persistent biases of optimism and that excessive optimism is not conducive to mental health. How helpful is it to facilitate optimism in individuals who are likely to exhibit biases of optimism already? By locating the cause of distress at the individual level and ‘unhelpful’ cognitions, does this minimise wider systemic social and economic influences on mental health? PMID:25237497
Integrated controls design optimization
Lou, Xinsheng; Neuschaefer, Carl H.
2015-09-01
A control system (207) for optimizing a chemical looping process of a power plant includes an optimizer (420), an income algorithm (230), a cost algorithm (225), and chemical looping process models. The process models are used to predict the process outputs from process input variables. Some of the process input and output variables are related to the income of the plant; others are related to the cost of the plant's operations. The income algorithm (230) provides an income input to the optimizer (420) based on a plurality of input parameters (215) of the power plant. The cost algorithm (225) provides a cost input to the optimizer (420) based on a plurality of output parameters (220) of the power plant. The optimizer (420) determines an optimized operating parameter solution based on at least one of the income input and the cost input, and supplies the optimized operating parameter solution to the power plant.
NASA Technical Reports Server (NTRS)
Venter, Gerhard; Sobieszczanski-Sobieski, Jaroslaw
2002-01-01
The purpose of this paper is to show how the search algorithm known as particle swarm optimization performs. Here, particle swarm optimization is applied to structural design problems, but the method has a much wider range of possible applications. The paper's new contributions are improvements to the particle swarm optimization algorithm and conclusions and recommendations as to the utility of the algorithm. Results of numerical experiments for both continuous and discrete applications are presented in the paper. The results indicate that the particle swarm optimization algorithm does locate the constrained minimum design in continuous applications with very good precision, albeit at a much higher computational cost than that of a typical gradient-based optimizer. However, the true potential of particle swarm optimization is primarily in applications with discrete and/or discontinuous functions and variables. Additionally, particle swarm optimization has the potential for efficient computation with very large numbers of concurrently operating processors.
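As a point of reference for the algorithm this abstract discusses, a baseline (unimproved) particle swarm optimizer for a continuous, unconstrained problem can be sketched as follows. The inertia and acceleration coefficients `w`, `c1`, `c2` are conventional defaults, not values from the paper, and the paper's constraint handling and discrete-variable improvements are not shown.

```python
import random

def pso(f, bounds, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5):
    """Minimize f over a box defined by bounds = [(lo, hi), ...] using a
    basic particle swarm: each particle is pulled toward its personal
    best and the swarm's global best, with inertia w."""
    dim = len(bounds)
    pos = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]

    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                # Clamp each coordinate back into its box.
                pos[i][d] = min(max(pos[i][d] + vel[i][d], bounds[d][0]), bounds[d][1])
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val
```

Note that every particle's update is independent within an iteration, which is why the abstract highlights the method's suitability for very large numbers of concurrently operating processors.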
Supercomputer optimizations for stochastic optimal control applications
NASA Technical Reports Server (NTRS)
Chung, Siu-Leung; Hanson, Floyd B.; Xu, Huihuang
1991-01-01
Supercomputer optimizations for a computational method of solving stochastic, multibody, dynamic programming problems are presented. The computational method is valid for a general class of optimal control problems that are nonlinear, multibody dynamical systems, perturbed by general Markov noise in continuous time, i.e., nonsmooth Gaussian as well as jump Poisson random white noise. Optimization techniques for vector multiprocessors or vectorizing supercomputers include advanced data structures, loop restructuring, loop collapsing, blocking, and compiler directives. These advanced computing techniques and supercomputing hardware help alleviate Bellman's curse of dimensionality in dynamic programming computations, by permitting the solution of large multibody problems. Possible applications include lumped flight dynamics models for uncertain environments, such as large scale and background random aerospace fluctuations.
NASA Technical Reports Server (NTRS)
Wheeler, Ward C.
2003-01-01
The problem of determining the minimum cost hypothetical ancestral sequences for a given cladogram is known to be NP-complete (Wang and Jiang, 1994). Traditionally, point estimations of hypothetical ancestral sequences have been used to gain heuristic, upper bounds on cladogram cost. These include procedures with such diverse approaches as non-additive optimization of multiple sequence alignment, direct optimization (Wheeler, 1996), and fixed-state character optimization (Wheeler, 1999). A method is proposed here which, by extending fixed-state character optimization, replaces the estimation process with a search. This form of optimization examines a diversity of potential state solutions for cost-efficient hypothetical ancestral sequences and can result in greatly more parsimonious cladograms. Additionally, such an approach can be applied to other NP-complete phylogenetic optimization problems such as genomic break-point analysis. © 2003 The Willi Hennig Society. Published by Elsevier Science (USA). All rights reserved.
Zhou, Zhi; de Bedout, Juan Manuel; Kern, John Michael; Biyik, Emrah; Chandra, Ramu Sharat
2013-01-22
A system for optimizing customer utility usage in a utility network of customer sites, each having one or more utility devices, where customer site information is communicated between each of the customer sites and an optimization server having software for optimizing customer utility usage, over one or more networks, including private and public networks. A customer site model for each of the customer sites is generated based upon the customer site information, and the customer utility usage is optimized based upon the customer site information and the customer site model. The optimization server can be hosted by an external source or within the customer site. In addition, the optimization processing can be partitioned between the customer site and an external source.
Homotopy optimization methods for global optimization.
Dunlavy, Daniel M.; O'Leary, Dianne P.
2005-12-01
We define a new method for global optimization, the Homotopy Optimization Method (HOM). This method differs from previous homotopy and continuation methods in that its aim is to find a minimizer for each of a set of values of the homotopy parameter, rather than to follow a path of minimizers. We define a second method, called HOPE, by allowing HOM to follow an ensemble of points obtained by perturbation of previous ones. We relate this new method to standard methods such as simulated annealing and show under what circumstances it is superior. We present results of extensive numerical experiments demonstrating performance of HOM and HOPE.
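The idea described here, minimizing at each of a sequence of homotopy parameter values while warm-starting from the previous minimizer, can be sketched for a one-dimensional problem. The linear blend `h(x) = (1 - lam) * f_easy(x) + lam * f_target(x)` and the crude gradient-descent inner solver are illustrative assumptions, not the paper's formulation, and the HOPE ensemble extension is not shown.

```python
def grad_descent(h, x, steps=200, lr=0.1, eps=1e-6):
    """Crude 1-D local minimizer: gradient descent with a
    central-difference numerical derivative."""
    for _ in range(steps):
        g = (h(x + eps) - h(x - eps)) / (2 * eps)
        x -= lr * g
    return x

def hom(f_target, f_easy, x0, n_steps=10):
    """Homotopy Optimization Method sketch (1-D): at each homotopy
    parameter lam in [0, 1], minimize the blended objective
    h(x) = (1 - lam) * f_easy(x) + lam * f_target(x),
    warm-starting each solve from the previous minimizer."""
    x = x0
    for k in range(n_steps + 1):
        lam = k / n_steps
        x = grad_descent(lambda t, lam=lam: (1 - lam) * f_easy(t) + lam * f_target(t), x)
    return x
```

Unlike classical path-following continuation, each step here is a full local minimization at a fixed parameter value, which is the distinction the abstract draws.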
Structural optimization using optimality criteria methods
NASA Technical Reports Server (NTRS)
Khot, N. S.; Berke, L.
1984-01-01
Optimality criteria methods take advantage of concepts such as those of statically determinate or indeterminate structures, and certain variational principles of structural dynamics, to develop efficient algorithms for the sizing of structures that are subjected to stiffness-related constraints. Some of the methods and iterative strategies developed over the last decade for calculating the Lagrange multipliers in stress- and displacement-limited problems, as well as for satisfying the appropriate optimality criterion, are discussed. The application of these methods is illustrated by solving problems with stress and displacement constraints.
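For the stress-limited case, the simplest optimality criterion is the stress-ratio (fully stressed design) update, sketched below for a statically determinate truss whose member forces do not change with sizing. The two-member example and allowable stress are illustrative values; the Lagrange-multiplier treatment of displacement constraints discussed in the abstract is not shown.

```python
def stress_ratio_resize(areas, member_stress, sigma_allow, iters=20):
    """Stress-ratio optimality criterion: scale each member's area by
    the ratio of its current stress to the allowable stress, driving
    the design toward a fully stressed (minimum-weight) state.

    member_stress: callable mapping a list of areas to member stresses.
    """
    a = list(areas)
    for _ in range(iters):
        s = member_stress(a)
        a = [ai * si / sigma_allow for ai, si in zip(a, s)]
    return a
```

For a determinate structure the update converges in one iteration (stress is simply force over area); indeterminate structures redistribute forces as members are resized, which is where the iterative strategies surveyed in the paper come in.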