Science.gov

Sample records for optimization multi-step greedy

  1. Optimal multi-step collocation: application to the space-wise approach for GOCE data analysis

    NASA Astrophysics Data System (ADS)

    Reguzzoni, Mirko; Tselfes, Nikolaos

    2009-01-01

    Collocation is widely used in physical geodesy. Its application requires solving systems with a dimension equal to the number of observations, which causes numerical problems when many observations are available. To overcome this drawback, tailored step-wise techniques are usually applied. An example of these step-wise techniques is the space-wise approach to the GOCE mission data processing. The original idea of this approach was to implement a two-step procedure, which consists of first predicting gridded values at satellite altitude by collocation and then deriving the geo-potential spherical harmonic coefficients by numerical integration. The idea was generalized to a multi-step iterative procedure by introducing a time-wise Wiener filter to reduce the highly correlated observation noise. Recent studies have shown how to optimize the original two-step procedure, while the theoretical optimization of the full multi-step procedure is investigated in this work. An iterative operator is derived so that the final estimated spherical harmonic coefficients are optimal with respect to the Wiener-Kolmogorov principle, as if they were estimated by a direct collocation. The logical scheme used to derive this optimal operator can be applied not only in the case of the space-wise approach but, in general, for any case of step-wise collocation. Several numerical tests based on simulated realistic GOCE data are performed. The results show that adding a pre-processing time-wise filter to the two-step procedure of data gridding and spherical harmonic analysis is useful, in the sense that the accuracy of the estimated geo-potential coefficients is improved. This happens because, in its practical implementation, the gridding is made by collocation over local patches of data, while the observation noise has a time-correlation so long that it cannot be treated inside the patch size. Therefore, the multi-step operator, which is in theory equivalent to the two-step operator and to the
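
    For orientation, the Wiener-Kolmogorov (collocation) estimate that the multi-step operator is designed to reproduce can be written, in generic notation rather than the authors' own, as

        \hat{x} = C_{xy}\,(C_{yy} + C_{nn})^{-1}\,y,

    where y is the observation vector, \hat{x} the predicted signal (gridded values or spherical harmonic coefficients), C_{xy} and C_{yy} signal covariance matrices, and C_{nn} the observation noise covariance. The optimality claimed in the abstract means that the iterative multi-step estimate coincides with this direct solution.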

  2. Optimization of a Multi-Step Procedure for Isolation of Chicken Bone Collagen

    PubMed Central

    2015-01-01

    Chicken bone is not adequately utilized despite its high nutritional value and protein content. Although not a common raw material, chicken bone can be used in many different ways besides the manufacture of collagen products. In this study, a multi-step procedure was optimized to isolate chicken bone collagen with higher yield and quality for the manufacture of collagen products. The chemical composition of chicken bone was 2.9% nitrogen, corresponding to about 15.6% protein, 9.5% fat, 14.7% mineral and 57.5% moisture. The aim was to minimize protein loss while separating the largest possible amount of visible impurities, non-collagen proteins, minerals and fats. Treatments under optimum conditions removed 57.1% of fats and 87.5% of minerals with respect to their initial concentrations. Meanwhile, 18.6% of protein and 14.9% of hydroxyproline were lost, suggesting that a selective separation of non-collagen components and isolation of collagen were achieved. A significant part of the impurities was selectively removed and over 80% of the original collagen was preserved during the treatments. PMID:26761863

  3. Multi-step optimization strategy for fuel-optimal orbital transfer of low-thrust spacecraft

    NASA Astrophysics Data System (ADS)

    Rasotto, M.; Armellin, R.; Di Lizia, P.

    2016-03-01

    An effective method for the design of fuel-optimal transfers in two- and three-body dynamics is presented. The optimal control problem is formulated using the calculus of variations and primer vector theory. This leads to a multi-point boundary value problem (MPBVP), characterized by complex inner constraints and a discontinuous thrust profile. The first issue is addressed by embedding the MPBVP in a parametric optimization problem, thus allowing a simplification of the set of transversality constraints. The second problem is solved by representing the discontinuous control function by a smooth function depending on a continuation parameter. The resulting trajectory optimization method can deal with different intermediate conditions, and no a priori knowledge of the control structure is required. Test cases in both the two- and three-body dynamics show the capability of the method in solving complex trajectory design problems.
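
    One plausible way to realize the smooth representation of the discontinuous control mentioned above (the paper's exact smoothing function may differ) is a sigmoid regularization of the bang-bang law driven by the switching function S:

        u(S; \rho) = \frac{u_{\max}}{1 + e^{S/\rho}},

    so that u \to u_{\max} where S < 0 and u \to 0 where S > 0 as the continuation parameter \rho \to 0^{+}, recovering the discontinuous thrust profile in the limit.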

  4. An Optimal Schedule for Urban Road Network Repair Based on the Greedy Algorithm.

    PubMed

    Lu, Guangquan; Xiong, Ying; Ding, Chuan; Wang, Yunpeng

    2016-01-01

    The scheduling of urban road network recovery after rainstorms, snow and other severe weather, traffic incidents, and other daily events is essential, yet few studies have investigated this problem. We fill this research gap by proposing an optimal schedule for urban road network repair with limited repair resources based on the greedy algorithm. Following the basic concept of the greedy algorithm, critical links are given priority in repair. In this study, the critical link for the current network is defined as the damaged link whose restoration minimizes the ratio of the system-wide travel time of the current network to that of the worst (fully damaged) network. We re-evaluate the importance of the damaged links after each repair is completed; that is, the critical-link ranking changes along with the repair process because of the interaction among links. We repair the most critical link for the specific network state based on the greedy algorithm to obtain the optimal schedule. The algorithm can quickly obtain an optimal schedule even for a large road network because the greedy approach reduces computational complexity. We prove that the problem can be solved optimally by the greedy algorithm in theory, and we demonstrate the algorithm on the Sioux Falls network. The problem discussed in this paper is highly significant for urban road network restoration.

  5. An Optimal Schedule for Urban Road Network Repair Based on the Greedy Algorithm

    PubMed Central

    Lu, Guangquan; Xiong, Ying; Wang, Yunpeng

    2016-01-01

    The scheduling of urban road network recovery after rainstorms, snow and other severe weather, traffic incidents, and other daily events is essential, yet few studies have investigated this problem. We fill this research gap by proposing an optimal schedule for urban road network repair with limited repair resources based on the greedy algorithm. Following the basic concept of the greedy algorithm, critical links are given priority in repair. In this study, the critical link for the current network is defined as the damaged link whose restoration minimizes the ratio of the system-wide travel time of the current network to that of the worst (fully damaged) network. We re-evaluate the importance of the damaged links after each repair is completed; that is, the critical-link ranking changes along with the repair process because of the interaction among links. We repair the most critical link for the specific network state based on the greedy algorithm to obtain the optimal schedule. The algorithm can quickly obtain an optimal schedule even for a large road network because the greedy approach reduces computational complexity. We prove that the problem can be solved optimally by the greedy algorithm in theory, and we demonstrate the algorithm on the Sioux Falls network. The problem discussed in this paper is highly significant for urban road network restoration. PMID:27768732
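
    A minimal Python sketch of the greedy repair loop described in these two records, assuming a caller-supplied travel_time function and a hypothetical with_repaired method (both names are illustrative, not from the paper):

        def greedy_repair_schedule(network, damaged_links, travel_time):
            """Order repairs greedily: at each step restore the link whose
            repair minimizes the ratio of current system-wide travel time
            to that of the worst (fully damaged) network."""
            worst = travel_time(network)  # all damaged links still broken
            schedule, remaining = [], set(damaged_links)
            while remaining:
                # re-evaluate link criticality for the current network state
                best = min(remaining,
                           key=lambda l: travel_time(network.with_repaired(l)) / worst)
                network = network.with_repaired(best)
                remaining.remove(best)
                schedule.append(best)
            return schedule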

  6. Layout Optimization Method for Magnetic Circuit using Multi-step Utilization of Genetic Algorithm Combined with Design Space Reduction

    NASA Astrophysics Data System (ADS)

    Okamoto, Yoshifumi; Tominaga, Yusuke; Sato, Shuji

    Layout optimization using the ON-OFF state of magnetic material in finite elements is one of the most attractive tools in the initial conceptual and practical design of electrical machinery. Heuristic algorithms based on random search allow engineers to define general-purpose objectives; however, they require many iterations of finite element analysis, and it is difficult to reach a practical solution free of island and void distributions using direct search methods such as simulated annealing (SA) or the genetic algorithm (GA). This paper presents a layout optimization method based on the GA. The proposed method arrives at a practical solution by means of multi-step utilization of the GA, and the convergence speed is considerably improved by combining it with a reduction process for the design space.

  7. Greedy Criterion in Orthogonal Greedy Learning.

    PubMed

    Xu, Lin; Lin, Shaobo; Zeng, Jinshan; Liu, Xia; Fang, Yi; Xu, Zongben

    2017-02-23

    Orthogonal greedy learning (OGL) is a stepwise learning scheme that starts by selecting a new atom from a specified dictionary via the steepest gradient descent (SGD) and then builds the estimator through orthogonal projection. In this paper, we show that SGD is not the only possible greedy criterion and introduce a new one, called the 'δ-greedy threshold', for learning. Based on this new greedy criterion, we derive a straightforward termination rule for OGL. Our theoretical study shows that the new learning scheme achieves the existing (almost) optimal learning rate of OGL. Numerical experiments are also provided to show that the new scheme achieves almost optimal generalization performance while requiring less computation than OGL.
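
    A minimal numpy sketch of orthogonal greedy learning with a correlation-threshold stopping rule (a stand-in for the paper's δ-greedy criterion, whose exact form is not reproduced here); columns of D are assumed to be unit-norm atoms:

        import numpy as np

        def orthogonal_greedy(D, y, delta=1e-2, max_atoms=50):
            """Pick the atom most correlated with the residual, refit by
            orthogonal projection, stop when the best correlation drops
            below delta times the residual norm."""
            selected, coef = [], np.zeros(0)
            r = y.astype(float).copy()
            for _ in range(max_atoms):
                corr = D.T @ r
                j = int(np.argmax(np.abs(corr)))
                if np.abs(corr[j]) < delta * np.linalg.norm(r):
                    break
                selected.append(j)
                coef, *_ = np.linalg.lstsq(D[:, selected], y, rcond=None)
                r = y - D[:, selected] @ coef
            return selected, coef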

  8. Optimizing multi-step B-side charge separation in photosynthetic reaction centers from Rhodobacter capsulatus

    SciTech Connect

    Faries, Kaitlyn M.; Kressel, Lucas L.; Dylla, Nicholas P.; Wander, Marc J.; Hanson, Deborah K.; Holten, Dewey; Laible, Philip D.; Kirmaier, Christine

    2016-02-01

    Using high-throughput methods for mutagenesis, protein isolation and charge-separation functionality, we have assayed 40 Rhodobacter capsulatus reaction center (RC) mutants for their P+ QB- yield (P is a dimer of bacteriochlorophylls and Q is a ubiquinone) as produced using the normally inactive B-side cofactors BB and HB (where B is a bacteriochlorophyll and H is a bacteriopheophytin). Two sets of mutants explore all possible residues at M131 (M polypeptide, native residue Val near HB) in tandem with either a fixed His or a fixed Asn at L181 (L polypeptide, native residue Phe near BB). A third set of mutants explores all possible residues at L181 with a fixed Glu at M131 that can form a hydrogen bond to HB. For each set of mutants, the results of a rapid millisecond screening assay that probes the yield of P+ QB- are compared among that set and to the other mutants reported here or previously. For a subset of eight mutants, the rate constants and yields of the individual B-side electron transfer processes are determined via transient absorption measurements spanning 100 fs to 50 μs. The resulting ranking of mutants for their yield of P+ QB- from ultrafast experiments is in good agreement with that obtained from the millisecond screening assay, further validating the efficient, high-throughput screen for B-side transmembrane charge separation. Results from mutants that individually show progress toward optimization of P+ HB- → P+ QB- electron transfer or initial P* → P+ HB- conversion highlight unmet challenges of optimizing both processes simultaneously.

  9. Optimal Fusion Estimation with Multi-Step Random Delays and Losses in Transmission

    PubMed Central

    Caballero-Águila, Raquel; Hermoso-Carazo, Aurora; Linares-Pérez, Josefa

    2017-01-01

    This paper is concerned with the optimal fusion estimation problem in networked stochastic systems with bounded random delays and packet dropouts, which unavoidably occur during the data transmission in the network. The measured outputs from each sensor are perturbed by random parameter matrices and white additive noises, which are cross-correlated between the different sensors. Least-squares fusion linear estimators including filter, predictor and fixed-point smoother, as well as the corresponding estimation error covariance matrices, are designed via the innovation analysis approach. The proposed recursive algorithms depend on the delay probabilities at each sampling time, but do not need to know whether a particular measurement is delayed or not. Moreover, knowledge of the signal evolution model is not required, as the algorithms need only the first and second order moments of the processes involved. Some of the practical situations covered by the proposed system model with random parameter matrices are analyzed, and the influence of the delays on the estimation accuracy is examined in a numerical example. PMID:28524112

  10. A Guiding Evolutionary Algorithm with Greedy Strategy for Global Optimization Problems

    PubMed Central

    Cao, Leilei; Xu, Lihong; Goodman, Erik D.

    2016-01-01

    A Guiding Evolutionary Algorithm (GEA) with greedy strategy for global optimization problems is proposed. Inspired by Particle Swarm Optimization, the Genetic Algorithm, and the Bat Algorithm, the GEA was designed to retain some advantages of each method while avoiding some disadvantages. In contrast to the usual Genetic Algorithm, each individual in GEA is crossed with the current global best one instead of a randomly selected individual. The current best individual served as a guide to attract offspring to its region of genotype space. Mutation was added to offspring according to a dynamic mutation probability. To increase the capability of exploitation, a local search mechanism was applied to new individuals according to a dynamic probability of local search. Experimental results show that GEA outperformed the other three typical global optimization algorithms with which it was compared. PMID:27293421

  11. A Guiding Evolutionary Algorithm with Greedy Strategy for Global Optimization Problems.

    PubMed

    Cao, Leilei; Xu, Lihong; Goodman, Erik D

    2016-01-01

    A Guiding Evolutionary Algorithm (GEA) with greedy strategy for global optimization problems is proposed. Inspired by Particle Swarm Optimization, the Genetic Algorithm, and the Bat Algorithm, the GEA was designed to retain some advantages of each method while avoiding some disadvantages. In contrast to the usual Genetic Algorithm, each individual in GEA is crossed with the current global best one instead of a randomly selected individual. The current best individual served as a guide to attract offspring to its region of genotype space. Mutation was added to offspring according to a dynamic mutation probability. To increase the capability of exploitation, a local search mechanism was applied to new individuals according to a dynamic probability of local search. Experimental results show that GEA outperformed the other three typical global optimization algorithms with which it was compared.
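
    A toy Python sketch of the GEA loop described in these two records; the crossover-with-best step and the dynamic probabilities follow the abstract, while the particular schedules and step sizes are illustrative guesses:

        import numpy as np

        def gea(f, dim, pop=30, iters=200, seed=0):
            rng = np.random.default_rng(seed)
            X = rng.uniform(-5.0, 5.0, (pop, dim))
            fit = np.array([f(x) for x in X])
            for t in range(iters):
                best = X[np.argmin(fit)].copy()
                p_mut = 0.5 * (1.0 - t / iters)      # dynamic mutation probability
                p_loc = 0.1 + 0.4 * t / iters        # dynamic local-search probability
                for i in range(pop):
                    a = rng.random(dim)
                    child = a * X[i] + (1.0 - a) * best   # cross with the global best
                    if rng.random() < p_mut:
                        child += rng.normal(0.0, 0.5, dim)
                    if rng.random() < p_loc:              # greedy local refinement
                        trial = child + rng.normal(0.0, 0.05, dim)
                        if f(trial) < f(child):
                            child = trial
                    if f(child) < fit[i]:                 # keep only improvements
                        X[i], fit[i] = child, f(child)
            i = int(np.argmin(fit))
            return X[i], fit[i]

        # usage: gea(lambda x: float(np.sum(x ** 2)), dim=10)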

  12. Small-tip-angle spokes pulse design using interleaved greedy and local optimization methods.

    PubMed

    Grissom, William A; Khalighi, Mohammad-Mehdi; Sacolick, Laura I; Rutt, Brian K; Vogel, Mika W

    2012-11-01

    Current spokes pulse design methods can be grouped into methods based either on sparse approximation or on iterative local (gradient descent-based) optimization of the transverse-plane spatial frequency locations visited by the spokes. These two classes of methods have complementary strengths and weaknesses: sparse approximation-based methods perform an efficient search over a large swath of candidate spatial frequency locations but most are incompatible with off-resonance compensation, multifrequency designs, and target phase relaxation, while local methods can accommodate off-resonance and target phase relaxation but are sensitive to initialization and suboptimal local cost function minima. This article introduces a method that interleaves local iterations, which optimize the radiofrequency pulses, target phase patterns, and spatial frequency locations, with a greedy method to choose new locations. Simulations and experiments at 3 and 7 T show that the method consistently produces single- and multifrequency spokes pulses with lower flip angle inhomogeneity compared to current methods.

  13. Uncovering the community structure in signed social networks based on greedy optimization

    NASA Astrophysics Data System (ADS)

    Chen, Yan; Yan, Jiaqi; Yang, Yu; Chen, Junhua

    2017-05-01

    Signed relationships have recently been adopted in the modeling of many complicated systems. The relations among the entities of such systems are complicated and multifarious and cannot be captured by positive links alone, so signed networks have become more and more common in the study of social networks, where community structure is significant. In this paper, to identify communities in signed networks, we develop a new greedy algorithm that takes both the signs and the density of links into account. The core of the algorithm is an initialization procedure for signed modularity together with the corresponding update rules. In particular, we employ the "Asymmetric and Constrained Belief Evolution" procedure to estimate the optimal number of communities. The experimental results show that the algorithm runs well and, more specifically, is very efficient for medium-sized networks, both dense and sparse.

  14. Greedy scheduling of cellular self-replication leads to optimal doubling times with a log-Frechet distribution.

    PubMed

    Pugatch, Rami

    2015-02-24

    Bacterial self-replication is a complex process composed of many de novo synthesis steps catalyzed by a myriad of molecular processing units, e.g., the transcription-translation machinery, metabolic enzymes, and the replisome. Successful completion of all production tasks requires a schedule: a temporal assignment of each of the production tasks to its respective processing units that respects ordering and resource constraints. Most intracellular growth processes are well characterized. However, the manner in which they are coordinated under the control of a scheduling policy is not well understood. When fast replication is favored, a schedule that minimizes the completion time is desirable. However, if resources are scarce, it is typically computationally hard to find such a schedule, in the worst case. Here, we show that optimal scheduling naturally emerges in cellular self-replication. Optimal doubling time is obtained by maintaining a sufficiently large inventory of intermediate metabolites and processing units required for self-replication and additionally requiring that these processing units be "greedy," i.e., not idle if they can perform a production task. We calculate the distribution of doubling times of such optimally scheduled self-replicating factories, and find it has a universal form, log-Frechet, which is not sensitive to many microscopic details. Analyzing two recent datasets of Escherichia coli growing in a stationary medium, we find excellent agreement between the observed doubling-time distribution and the predicted universal distribution, suggesting E. coli is optimally scheduling its replication. Greedy scheduling appears as a simple generic route to optimal scheduling when speed is the optimization criterion. Other criteria such as efficiency require more elaborate scheduling policies and tighter regulation.

  15. A fast, space-efficient average-case algorithm for the 'Greedy' Triangulation of a point set, and a proof that the Greedy Triangulation is not approximately optimal

    NASA Technical Reports Server (NTRS)

    Manacher, G. K.; Zobrist, A. L.

    1979-01-01

    The paper addresses the problem of how to find the Greedy Triangulation (GT) efficiently in the average case. It is noted that it remains open whether there exists an efficient approximation algorithm to the Optimum Triangulation. It is first shown how, in the worst case, the GT may be obtained in time O(n^3) and space O(n). Attention is then given to how the algorithm may be slightly modified to produce a time O(n^2), space O(n) solution in the average case. Finally, it is mentioned that Gilbert has found a worst-case solution using totally different techniques that requires space O(n^2) and time O(n^2 log n).
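
    The worst-case O(n^3)-style construction is short enough to sketch directly; this naive Python version considers every candidate segment in increasing length order and keeps it if it does not cross a previously kept segment (points as (x, y) tuples in general position; collinear degeneracies are ignored for brevity):

        from itertools import combinations

        def greedy_triangulation(pts):
            def cross(o, a, b):
                return (a[0]-o[0])*(b[1]-o[1]) - (a[1]-o[1])*(b[0]-o[0])
            def crosses(p, q, r, s):
                if len({p, q, r, s}) < 4:
                    return False  # sharing an endpoint is not a crossing
                return (cross(p, q, r) > 0) != (cross(p, q, s) > 0) and \
                       (cross(r, s, p) > 0) != (cross(r, s, q) > 0)
            segs = sorted(combinations(pts, 2),
                          key=lambda e: (e[0][0]-e[1][0])**2 + (e[0][1]-e[1][1])**2)
            kept = []
            for p, q in segs:
                if not any(crosses(p, q, r, s) for r, s in kept):
                    kept.append((p, q))
            return kept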

  16. Automatic Synthesis Of Greedy Programs

    NASA Astrophysics Data System (ADS)

    Bhansali, Sanjay; Miriyala, Kanth; Harandi, Mehdi T.

    1989-03-01

    This paper describes a knowledge based approach to automatically generate Lisp programs using the Greedy method of algorithm design. The system's knowledge base is composed of heuristics for recognizing problems amenable to the Greedy method and knowledge about the Greedy strategy itself (i.e., rules for local optimization, constraint satisfaction, candidate ordering and candidate selection). The system has been able to generate programs for a wide variety of problems including the job-scheduling problem, the 0-1 knapsack problem, the minimal spanning tree problem, and the problem of arranging files on tape to minimize access time. For the special class of problems called matroids, the synthesized program provides optimal solutions, whereas for most other problems the solutions are near-optimal.

  17. Exploring Maps with Greedy Navigators

    NASA Astrophysics Data System (ADS)

    Lee, Sang Hoon; Holme, Petter

    2012-03-01

    During the last decade of network research focusing on structural and dynamical properties of networks, the role of network users has been more or less underestimated from the bird's-eye view of the global perspective. In this era of smartphones equipped with global positioning systems, however, a user's ability to access local geometric information and find efficient pathways on networks plays a crucial role, more so than knowledge of the globally optimal pathways. We present a simple greedy spatial navigation strategy as a probe to explore spatial networks. These greedy navigators use directional information in every move they take, avoiding dead-end traps by using their memory of previous routes. We suggest that centrality measures have to be modified to incorporate the navigators' behavior, and present the intriguing effect of navigators' greediness where removing some edges may actually enhance the routing efficiency, which is reminiscent of Braess's paradox. In addition, using samples of road structures in large cities around the world, it is shown that the navigability measure we define reflects unique structural properties, which are not easy to predict from other topological characteristics. In this respect, we believe that our routing scheme significantly moves the routing problem on networks one step closer to reality, incorporating the inevitable incompleteness of navigators' information.

  18. Optimal schedules of fractionated radiation therapy by way of the greedy principle: biologically-based adaptive boosting

    NASA Astrophysics Data System (ADS)

    Hanin, Leonid; Zaider, Marco

    2014-08-01

    We revisit a long-standing problem of optimization of fractionated radiotherapy and solve it in considerable generality under the following three assumptions only: (1) repopulation of clonogenic cancer cells between radiation exposures follows a linear birth-and-death Markov process; (2) clonogenic cancer cells do not interact with each other; and (3) the dose response function s(D) is decreasing and logarithmically concave. Optimal schedules of fractionated radiation identified in this work can be described by the following 'greedy' principle: give the maximum possible dose as soon as possible. This means that upper bounds on the total dose and the dose per fraction reflecting limitations on the damage to normal tissue, along with a lower bound on the time between successive fractions of radiation, determine the optimal radiation schedules completely. Results of this work lead to a new paradigm of dose delivery which we term optimal biologically-based adaptive boosting (OBBAB). It amounts to (a) subdividing the target into regions that are homogeneous with respect to the maximum total dose and maximum dose per fraction allowed by the anatomy and biological properties of the normal tissue within (or adjacent to) the region in question and (b) treating each region with an individual optimal schedule determined by these constraints. The fact that different regions may be treated to different total dose and dose per fraction means that the number of fractions may also vary between regions. Numerical evidence suggests that OBBAB produces significantly larger tumor control probability than the corresponding conventional treatments.
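
    Because the greedy principle makes the schedule depend only on the three constraints, its construction is elementary; a Python sketch with abstract units (parameter names are illustrative, not from the paper):

        def greedy_schedule(total_dose_cap, dose_per_fraction_cap, min_gap):
            """Deliver the maximum allowed dose as soon as allowed."""
            schedule, t, delivered = [], 0.0, 0.0
            while delivered < total_dose_cap:
                d = min(dose_per_fraction_cap, total_dose_cap - delivered)
                schedule.append((t, d))   # (time of fraction, dose)
                delivered += d
                t += min_gap
            return schedule

        # usage: greedy_schedule(60.0, 2.0, 1.0) -> 30 fractions of 2.0, one per time unit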

  19. I was greedy, too.

    PubMed

    Coutu, Diane L

    2003-02-01

    Americans are outraged at the greediness of Wall Street analysts, dot-com entrepreneurs, and, most of all, chief executive officers. How could Tyco's Dennis Kozlowski use company funds to throw his wife a million-dollar birthday bash on an Italian island? How could Enron's Ken Lay sell thousands of shares of his company's once high-flying stock just before it crashed, leaving employees with nothing? Even America's most popular domestic guru, Martha Stewart, is suspected of having her hand in the cookie jar. To some extent, our outrage may be justified, writes HBR senior editor Diane Coutu. And yet, it's easy to forget that just a couple years ago these same people were lauded as heroes. Many Americans wanted nothing more, in fact, than to emulate them, to share in their fortunes. Indeed, we spent an enormous amount of time talking and thinking about double-digit returns, IPOs, day trading, and stock options. It could easily be argued that it was public indulgence in corporate money lust that largely created the mess we're now in. It's time to take a hard look at greed, both in its general form and in its peculiarly American incarnation, says Coutu. If Federal Reserve Board chairman Alan Greenspan was correct in telling Congress that "infectious greed" contaminated U.S. business, then we need to try to understand its causes--and how the average American may have contributed to it. Why did so many of us fall prey to greed? With a deep, almost reflexive trust in the free market, are Americans somehow greedier than other peoples? And as we look at the wreckage from the 1990s, can we be sure it won't happen again?

  20. Synthesis of Greedy Algorithms Using Dominance Relations

    NASA Technical Reports Server (NTRS)

    Nedunuri, Srinivas; Smith, Douglas R.; Cook, William R.

    2010-01-01

    Greedy algorithms exploit problem structure and constraints to achieve linear-time performance. Yet there is still no completely satisfactory way of constructing greedy algorithms. For example, the Greedy Algorithm of Edmonds depends upon translating a problem into an algebraic structure called a matroid, but the existence of such a translation can be as hard to determine as the existence of a greedy algorithm itself. An alternative characterization of greedy algorithms is in terms of dominance relations, a well-known algorithmic technique used to prune search spaces. We demonstrate a process by which dominance relations can be methodically derived for a number of greedy algorithms, including activity selection, and prefix-free codes. By incorporating our approach into an existing framework for algorithm synthesis, we demonstrate that it could be the basis for an effective engineering method for greedy algorithms. We also compare our approach with other characterizations of greedy algorithms.

  1. Greedy algorithms in disordered systems

    NASA Astrophysics Data System (ADS)

    Duxbury, P. M.; Dobrin, R.

    1999-08-01

    We discuss search, minimal path and minimal spanning tree algorithms and their applications to disordered systems. Greedy algorithms solve these problems exactly, and are related to extremal dynamics in physics. Minimal cost path (Dijkstra) and minimal cost spanning tree (Prim) algorithms provide extremal dynamics for a polymer in a random medium (the KPZ universality class) and invasion percolation (without trapping) respectively.
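
    As a concrete example of the greedy/extremal correspondence, Prim's algorithm grows the minimal spanning tree by always invading the cheapest boundary bond, which is exactly the invasion-percolation growth rule when bond strengths are random; a compact Python sketch:

        import heapq

        def prim_mst(adj):
            """adj: {node: {neighbor: weight}}; returns tree edges (u, v, w)."""
            start = next(iter(adj))
            seen = {start}
            heap = [(w, start, v) for v, w in adj[start].items()]
            heapq.heapify(heap)
            tree = []
            while heap and len(seen) < len(adj):
                w, u, v = heapq.heappop(heap)
                if v in seen:
                    continue          # greedy: only the cheapest boundary bond
                seen.add(v)
                tree.append((u, v, w))
                for x, wx in adj[v].items():
                    if x not in seen:
                        heapq.heappush(heap, (wx, v, x))
            return tree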

  2. Droplet-based microsystem for multi-step bioreactions.

    PubMed

    Wang, Fang; Burns, Mark A

    2010-06-01

    A droplet-based microfluidic platform was used to perform on-chip droplet generation, merging and mixing for applications in multi-step reactions and assays. Submicroliter-sized droplets can be produced separately from three identical droplet-generation channels and merged together in a single chamber. Three different mixing strategies were used for mixing the merged droplet. For pure diffusion, the reagents were mixed in approximately 10 min. Using flow around the stationary droplet to induce circulatory flow within the droplet, the mixing time was decreased to approximately one minute. The shortest mixing time (10 s) was obtained with bidirectional droplet motion between the chamber and channel, and optimization could result in a total time of less than 1 s. We also tested this on-chip droplet generation and manipulation platform using a two-step thermal cycled bioreaction: nested TaqMan PCR. With the same concentration of template DNA, the two-step reaction in a well-mixed merged droplet shows a cycle threshold of approximately 6 cycles earlier than that in the diffusively mixed droplet, and approximately 40 cycles earlier than the droplet-based regular (single-step) TaqMan PCR.

  3. Multi-step biocatalytic depolymerization of lignin.

    PubMed

    Picart, Pere; Liu, Haifeng; Grande, Philipp M; Anders, Nico; Zhu, Leilei; Klankermayer, Jürgen; Leitner, Walter; Domínguez de María, Pablo; Schwaneberg, Ulrich; Schallmey, Anett

    2017-08-01

    Lignin is a biomass-derived aromatic polymer that has been identified as a potential renewable source of aromatic chemicals and other valuable compounds. The valorization of lignin, however, represents a great challenge due to its high inherent functionalization, which complicates the identification of chemical routes for its selective depolymerization. In this work, an in vitro biocatalytic depolymerization process is presented and applied to lignin samples obtained from beech wood through OrganoCat pretreatment, resulting in a mixture of lignin-derived aromatic monomers. The reported biocracking route comprises, first, a laccase-mediator system to specifically oxidize the Cα hydroxyl group in the β-O-4 structure of lignin. Subsequently, selective ether cleavage of the oxidized β-O-4 linkages is achieved with β-etherases and a glutathione lyase. The combined enzymatic approach yielded an oily fraction of low-molecular-mass aromatic compounds, comprising coniferylaldehyde and other guaiacyl and syringyl units, as well as some larger (soluble) fractions. Upon further optimization, the reported biocatalytic route may open a valuable approach to lignin processing and valorization under mild reaction conditions.

  4. Multi-step wrought processing of TiAl-based alloys

    SciTech Connect

    Fuchs, G.E.

    1997-04-01

    Wrought processing will likely be needed for fabrication of a variety of TiAl-based alloy structural components. Laboratory and development work has usually relied on one-step forging to produce test material. Attempts to scale up TiAl-based alloy processing have indicated that multi-step wrought processing is necessary. The purpose of this study was to examine potential multi-step processing routes, such as two-step isothermal forging and extrusion + isothermal forging. The effects of processing (I/M versus P/M), intermediate recrystallization heat treatments and processing route on the tensile and creep properties of Ti-48Al-2Nb-2Cr alloys were examined. The results of the testing were then compared to samples from the same heats of materials processed by one-step routes. Finally, by evaluating the effect of processing on microstructure and properties, optimized and potentially lower-cost processing routes could be identified.

  5. 48 CFR 15.202 - Advisory multi-step process.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... process. 15.202 Section 15.202 Federal Acquisition Regulations System FEDERAL ACQUISITION REGULATION... Information 15.202 Advisory multi-step process. (a) The agency may publish a presolicitation notice (see 5.204... concept, past performance, and limited pricing information). At a minimum, the notice shall...

  6. 48 CFR 15.202 - Advisory multi-step process.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... process. 15.202 Section 15.202 Federal Acquisition Regulations System FEDERAL ACQUISITION REGULATION... Information 15.202 Advisory multi-step process. (a) The agency may publish a presolicitation notice (see 5.204... concept, past performance, and limited pricing information). At a minimum, the notice shall...

  7. 48 CFR 15.202 - Advisory multi-step process.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... process. 15.202 Section 15.202 Federal Acquisition Regulations System FEDERAL ACQUISITION REGULATION... Information 15.202 Advisory multi-step process. (a) The agency may publish a presolicitation notice (see 5.204... concept, past performance, and limited pricing information). At a minimum, the notice shall...

  8. 48 CFR 15.202 - Advisory multi-step process.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... process. 15.202 Section 15.202 Federal Acquisition Regulations System FEDERAL ACQUISITION REGULATION... Information 15.202 Advisory multi-step process. (a) The agency may publish a presolicitation notice (see 5.204... concept, past performance, and limited pricing information). At a minimum, the notice shall...

  9. On Stable Marriages and Greedy Matchings

    SciTech Connect

    Manne, Fredrik; Naim, Md; Lerring, Hakon; Halappanavar, Mahantesh

    2016-12-11

    Research on stable marriage problems has a long and mathematically rigorous history, while that of exploiting greedy matchings in combinatorial scientific computing is a younger and less developed research field. In this paper we consider the relationships between these two areas. In particular we show that several problems related to computing greedy matchings can be formulated as stable marriage problems and as a consequence several recently proposed algorithms for computing greedy matchings are in fact special cases of well known algorithms for the stable marriage problem. However, in terms of implementations and practical scalable solutions on modern hardware, the greedy matching community has made considerable progress. We show that due to the strong relationship between these two fields many of these results are also applicable for solving stable marriage problems.

  10. Power transmission coefficients for multi-step index optical fibres.

    PubMed

    Aldabaldetreku, Gotzon; Zubia, Joseba; Durana, Gaizka; Arrue, Jon

    2006-02-20

    The aim of the present paper is to provide a single analytical expression of the power transmission coefficient for leaky rays in multi-step index (MSI) fibres. This expression is valid for all tunnelling and refracting rays and allows us to evaluate numerically the power attenuation along an MSI fibre of an arbitrary number of layers. We validate our analysis by comparing the results obtained for limit cases of MSI fibres with those corresponding to step-index (SI) and graded-index (GI) fibres. We also make a similar comparison between this theoretical expression and the use of the WKB solutions of the scalar wave equation.

  11. Microwaves in drug discovery and multi-step synthesis.

    PubMed

    Alexandre, François-René; Domon, Lisianne; Frère, Stéphane; Testard, Alexandra; Thiéry, Valérie; Besson, Thierry

    2003-01-01

    The utility of microwaves in drug discovery and multi-step synthesis is presented, with the aim of describing our strategy. These studies are connected with our work on the synthesis of original heterocyclic compounds with potential pharmaceutical value. Reactions in the presence of solvent and under solvent-free conditions can be realised under a variety of conditions; selected results are given for some of these and, where available, compared with results obtained under the same solvent-free conditions but with classical heating.

  12. A simple greedy algorithm for reconstructing pedigrees.

    PubMed

    Cowell, Robert G

    2013-02-01

    This paper introduces a simple greedy algorithm for searching for high likelihood pedigrees using micro-satellite (STR) genotype information on a complete sample of related individuals. The core idea behind the algorithm is not new, but it is believed that putting it into a greedy search setting, and specifically the application to pedigree learning, is novel. The algorithm does not require age or sex information, but this information can be incorporated if desired. The algorithm is applied to human and non-human genetic data and in a simulation study.

  13. Multi-step prediction of physiological tremor for robotics applications.

    PubMed

    Veluvolu, K C; Tatinati, S; Hong, S M; Ang, W T

    2013-01-01

    The real-time performance of surgical robotic devices depends mainly on the phase delay introduced by sensors and by the filtering process. A phase delay of 16-20 ms is unavoidable in these robotic procedures due to the hardware low-pass filter in the sensors and the pre-filtering required in later stages of cancellation. To overcome this phase delay, we employ multi-step prediction with the band-limited multiple Fourier linear combiner (BMFLC) and autoregressive (AR) methods. Results show that, in the presence of phase delay, the overall accuracy of tremor estimation is improved by 60% compared to single-step prediction methods. Experimental results with the proposed methods for 1-DOF tremor estimation highlight the improvement.
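
    A minimal numpy sketch of the AR part of such a scheme: fit an autoregressive model by least squares and iterate one-step predictions to look several steps ahead (the paper's BMFLC component and its tuning are not reproduced here):

        import numpy as np

        def ar_multistep(x, order=6, steps=3):
            """Predict `steps` samples ahead of the 1-D series x (numpy array)."""
            X = np.column_stack([x[i:len(x) - order + i] for i in range(order)])
            a, *_ = np.linalg.lstsq(X, x[order:], rcond=None)
            hist = list(x[-order:])
            preds = []
            for _ in range(steps):
                nxt = float(np.dot(a, hist[-order:]))  # one-step prediction
                preds.append(nxt)
                hist.append(nxt)                       # feed prediction back in
            return preds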

  14. Research on processing medicinal herbs with multi-steps infrared macro-fingerprint method.

    PubMed

    Yu, Lu; Sun, Su-Qin; Fan, Ke-Feng; Zhou, Qun; Noda, Isao

    2005-11-01

    Applying rapid and effective methods to the study of medicinal herbs, a representative complicated mixture system, is a current focus for analysts. The functions of non-processed and processed medicinal herbs differ greatly, so controlling the processing procedure is highly important to guarantee the curative effect. Conventional processing criteria are based almost entirely on personal sensory experience; there is no scientific, objective benchmark. In this article, we take Rehmannia as an example and conduct a systematic study of the process of braising Rehmannia with yellow wine using the multi-steps infrared (IR) macro-fingerprint method. The method combines three steps: conventional Fourier transform infrared spectroscopy (FT-IR), second derivative spectroscopy, and two-dimensional infrared (2D-IR) correlation spectroscopy. Based on the changes in the different types of IR spectra during the process, we can infer the optimal end-point of processing Rehmannia and the main transformations during the process. The result provides a scientific explanation of the traditional sensory recipe: the end-point product is "dark as night and sweet as malt sugar". In conclusion, the multi-steps IR macro-fingerprint method, which is rapid and reasonable, can play an important role in controlling the processing of medicinal herbs.

  15. Suboptimal greedy power allocation schemes for discrete bit loading.

    PubMed

    Al-Hanafy, Waleed; Weiss, Stephan

    2013-01-01

    We consider low cost discrete bit loading based on greedy power allocation (GPA) under the constraints of total transmit power budget, target BER, and maximum permissible QAM modulation order. Compared to the standard GPA, which is optimal in terms of maximising the data throughput, three suboptimal schemes are proposed, which perform GPA on subsets of subchannels only. These subsets are created by considering the minimum SNR boundaries of QAM levels for a given target BER. We demonstrate how these schemes can significantly reduce the computational complexity required for power allocation, particularly in the case of a large number of subchannels. Two of the proposed algorithms can achieve near optimal performance including a transfer of residual power between subsets at the expense of a very small extra cost. By simulations, we show that the two near optimal schemes, while greatly reducing complexity, perform best in two separate and distinct SNR regions.
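
    The standard greedy bit-loading step that such schemes accelerate can be sketched in a few lines of Python: always give the next bit to the subchannel whose incremental power cost is smallest, under a gap-approximation power model (the model and parameters here are generic, not the paper's):

        import heapq

        def greedy_bit_loading(gains, power_budget, gamma=1.0, b_max=10):
            """gains: per-subchannel SNR gains; returns (bits, power used)."""
            bits = [0] * len(gains)
            def inc(i):  # extra power to go from bits[i] to bits[i] + 1
                return gamma * (2 ** (bits[i] + 1) - 2 ** bits[i]) / gains[i]
            heap = [(inc(i), i) for i in range(len(gains))]
            heapq.heapify(heap)
            used = 0.0
            while heap:
                cost, i = heapq.heappop(heap)
                if used + cost > power_budget:
                    break                 # even the cheapest next bit is unaffordable
                used += cost
                bits[i] += 1
                if bits[i] < b_max:
                    heapq.heappush(heap, (inc(i), i))
            return bits, used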

  16. Suboptimal Greedy Power Allocation Schemes for Discrete Bit Loading

    PubMed Central

    2013-01-01

    We consider low cost discrete bit loading based on greedy power allocation (GPA) under the constraints of total transmit power budget, target BER, and maximum permissible QAM modulation order. Compared to the standard GPA, which is optimal in terms of maximising the data throughput, three suboptimal schemes are proposed, which perform GPA on subsets of subchannels only. These subsets are created by considering the minimum SNR boundaries of QAM levels for a given target BER. We demonstrate how these schemes can significantly reduce the computational complexity required for power allocation, particularly in the case of a large number of subchannels. Two of the proposed algorithms can achieve near optimal performance including a transfer of residual power between subsets at the expense of a very small extra cost. By simulations, we show that the two near optimal schemes, while greatly reducing complexity, perform best in two separate and distinct SNR regions. PMID:24501578

  17. Multistep greedy algorithm identifies community structure in real-world and computer-generated networks

    NASA Astrophysics Data System (ADS)

    Schuetz, Philipp; Caflisch, Amedeo

    2008-08-01

    We have recently introduced a multistep extension of the greedy algorithm for modularity optimization. The extension is based on the idea that merging l pairs of communities (l>1) at each iteration prevents premature condensation into few large communities. Here, an empirical formula is presented for the choice of the step width l that generates partitions with (close to) optimal modularity for 17 real-world and 1100 computer-generated networks. Furthermore, an in-depth analysis of the communities of two real-world networks (the metabolic network of the bacterium E. coli and the graph of coappearing words in the titles of papers coauthored by Martin Karplus) provides evidence that the partition obtained by the multistep greedy algorithm is superior to the one generated by the original greedy algorithm not only with respect to modularity, but also according to objective criteria. In other words, the multistep extension of the greedy algorithm reduces the danger of getting trapped in local optima of modularity and generates more reasonable partitions.
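
    A compact (and deliberately unoptimized) Python sketch of the multistep idea: per iteration, compute modularity gains for all community pairs connected by at least one edge and merge the l best non-overlapping pairs instead of just one:

        from collections import defaultdict

        def multistep_greedy(edges, l=2):
            """edges: list of (u, v); returns {node: community label}."""
            nodes = {u for e in edges for u in e}
            comm = {u: u for u in nodes}
            m = len(edges)
            deg = defaultdict(int)
            for u, v in edges:
                deg[u] += 1
                deg[v] += 1
            while True:
                between, cdeg = defaultdict(int), defaultdict(int)
                for u, v in edges:
                    if comm[u] != comm[v]:
                        between[tuple(sorted((comm[u], comm[v])))] += 1
                for u in nodes:
                    cdeg[comm[u]] += deg[u]
                gains = sorted(((e / m - cdeg[a] * cdeg[b] / (2.0 * m * m), a, b)
                                for (a, b), e in between.items()), reverse=True)
                merged, used = 0, set()
                for dq, a, b in gains:
                    if dq <= 0 or merged == l:
                        break
                    if a in used or b in used:
                        continue              # keep the l merges disjoint
                    for u in nodes:
                        if comm[u] == b:
                            comm[u] = a       # merge community b into a
                    used.update((a, b))
                    merged += 1
                if merged == 0:
                    return comm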

  18. Efficient greedy algorithms for economic manpower shift planning

    NASA Astrophysics Data System (ADS)

    Nearchou, A. C.; Giannikos, I. C.; Lagodimos, A. G.

    2015-01-01

    Consideration is given to the economic manpower shift planning (EMSP) problem, an NP-hard capacity planning problem appearing in various industrial settings including the packing stage of production in process industries and maintenance operations. EMSP aims to determine the manpower needed in each available workday shift of a given planning horizon so as to complete a set of independent jobs at minimum cost. Three greedy heuristics are presented for the EMSP solution. These practically constitute adaptations of an existing algorithm for a simplified version of EMSP which had shown excellent performance in terms of solution quality and speed. Experimentation shows that the new algorithms perform very well in comparison to the results obtained by both the CPLEX optimizer and an existing metaheuristic. Statistical analysis is deployed to rank the algorithms in terms of their solution quality and to identify the effects that critical planning factors may have on their relative efficiency.

  19. DE-FG02-05ER64001 Overcoming the hurdles of multi-step targeting (MST) for effective radioimmunotherapy of solid tumors

    SciTech Connect

    Larson, Steven M. (P.I.); Cheung, Nai-Kong (Co P.I.)

    2009-09-21

    The 4 specific aims of this project are: (1) Optimization of MST to increase tumor uptake; (2) Antigen heterogeneity; (3) Characterization and reduction of renal uptake; and (4) Validation in vivo of optimized MST targeted therapy. This proposal focussed upon optimizing multistep immune targeting strategies for the treatment of cancer. Two multi-step targeting constructs were explored during this funding period: (1) anti-Tag-72 and (2) anti-GD2.

  20. Greedy Hypervolume Subset Selection in Low Dimensions.

    PubMed

    Guerreiro, Andreia P; Fonseca, Carlos M; Paquete, Luís

    2016-01-01

    Given a nondominated point set of size n and a suitable reference point, the Hypervolume Subset Selection Problem (HSSP) consists of finding a subset of size k ≤ n that maximizes the hypervolume indicator. It arises in connection with multiobjective selection and archiving strategies, as well as Pareto-front approximation postprocessing for visualization and/or interaction with a decision maker. Efficient algorithms to solve the HSSP are available only for the 2-dimensional case, while the best upper bounds available for higher dimensions are considerably weaker. Since the hypervolume indicator is a monotone submodular function, the HSSP can be approximated to a factor of (1 - 1/e) using a greedy strategy. In this article, greedy polynomial-time algorithms for the HSSP in 2 and 3 dimensions are proposed, matching the time complexity of the current exact algorithms for the 2-dimensional case, and considerably improving upon recent complexity results for this approximation problem.
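
    The incremental greedy selection is easy to state (though far from the paper's complexity): repeatedly add the point that most increases the hypervolume of the chosen subset. A naive 2-D Python sketch under a minimization convention, assuming the input points are (x, y) tuples forming a nondominated set dominated by the reference point:

        def hv2d(pts, ref):
            """2-D hypervolume of a nondominated set w.r.t. ref (minimization)."""
            pts = sorted(pts)                   # x ascending => y descending
            area = 0.0
            for (x, y), nxt in zip(pts, pts[1:] + [(ref[0], None)]):
                area += (ref[1] - y) * (nxt[0] - x)
            return area

        def greedy_hssp(points, k, ref):
            chosen = []
            for _ in range(k):
                best = max((p for p in points if p not in chosen),
                           key=lambda p: hv2d(chosen + [p], ref))
                chosen.append(best)
            return chosen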

  1. An Experimental Method for the Active Learning of Greedy Algorithms

    ERIC Educational Resources Information Center

    Velazquez-Iturbide, J. Angel

    2013-01-01

    Greedy algorithms constitute an apparently simple algorithm design technique, but their learning goals are not simple to achieve. We present a didactic method aimed at promoting active learning of greedy algorithms. The method is focused on the concept of selection function, and is based on explicit learning goals. It mainly consists of an…

  2. Diffusive behavior of a greedy traveling salesman

    NASA Astrophysics Data System (ADS)

    Lipowski, Adam; Lipowska, Dorota

    2011-06-01

    Using Monte Carlo simulations we examine the diffusive properties of the greedy algorithm in the d-dimensional traveling salesman problem. Our results show that for d=3 and 4 the average squared distance from the origin is proportional to the number of steps t. In the d=2 case such a scaling is modified with some logarithmic corrections, which might suggest that d=2 is the critical dimension of the problem. The distribution of lengths also shows marked differences between d=2 and d>2 versions. A simple strategy adopted by the salesman might resemble strategies chosen by some foraging and hunting animals, for which anomalous diffusive behavior has recently been reported and interpreted in terms of Lévy flights. Our results suggest that broad and Lévy-like distributions in such systems might appear due to dimension-dependent properties of a search space.
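
    The simulation itself is simple to reproduce; a Python sketch of the greedy tour that records the squared displacement from the origin after each step:

        import numpy as np

        def greedy_tour_r2(n=500, dim=2, seed=0):
            rng = np.random.default_rng(seed)
            cities = rng.random((n, dim))
            current, unvisited, origin = 0, set(range(1, n)), cities[0]
            r2 = []
            while unvisited:
                here = cities[current]
                current = min(unvisited,  # greedy: hop to the nearest unvisited city
                              key=lambda j: float(np.sum((cities[j] - here) ** 2)))
                unvisited.remove(current)
                r2.append(float(np.sum((cities[current] - origin) ** 2)))
            return r2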

  3. Link community detection by non-negative matrix factorization with multi-step similarities

    NASA Astrophysics Data System (ADS)

    Tang, Xianchao; Yang, Guoqing; Xu, Tao; Feng, Xia; Wang, Xiao; Li, Qiannan; Liu, Yanbei

    2016-11-01

    Uncovering community structures is a fundamental and important problem in the analysis of complex networks. While most methods focus on identifying node communities, recent works have shown the intuition behind, and the advantages of, detecting link communities in networks. In this paper, we propose a non-negative matrix factorization (NMF) based method to detect link community structures. Traditional NMF-based methods mainly use the adjacency matrix as the representation of the network topology, but the adjacency matrix only captures the relationships between immediate neighbors, not those between non-neighbor nodes. This may greatly reduce the information extracted from the network topology and thus lead to unsatisfactory results. Here, we address this by introducing multi-step similarities using a graph random walk approach, so that similarities between non-neighbor nodes can be captured. Meanwhile, in order to reduce the impact of self-similarities (similarities of nodes with themselves) and increase the importance of similarities between distinct nodes, we add a penalty term to our objective function. An efficient optimization scheme for the objective function is then derived. Finally, we test the proposed method on both synthetic and real networks. Experimental results demonstrate the effectiveness of the proposed approach.
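
    A small numpy sketch of the similarity construction (the factorization itself can then be done with any NMF routine): average the first few powers of the random-walk transition matrix so non-neighbor nodes acquire nonzero similarity. The equal weighting and the zeroed diagonal are illustrative stand-ins for the paper's exact weighting and penalty term:

        import numpy as np

        def multistep_similarity(A, steps=3):
            """A: symmetric nonnegative adjacency matrix (no isolated nodes)."""
            P = A / A.sum(axis=1, keepdims=True)   # random-walk transitions
            S, Pk = np.zeros_like(P, dtype=float), np.eye(len(A))
            for _ in range(steps):
                Pk = Pk @ P
                S += Pk
            S = (S + S.T) / 2.0        # symmetrize for factorization
            np.fill_diagonal(S, 0.0)   # damp self-similarities
            return S / steps

    The resulting S is nonnegative and can be fed to a factorization routine such as sklearn.decomposition.NMF to obtain community membership factors.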

  4. Comparison of microbial community shifts in two parallel multi-step drinking water treatment processes.

    PubMed

    Xu, Jiajiong; Tang, Wei; Ma, Jun; Wang, Hong

    2017-04-11

    Drinking water treatment processes remove undesirable chemicals and microorganisms from source water, which is vital to public health protection. The purpose of this study was to investigate the effects of treatment processes and configuration on the microbiome by comparing microbial community shifts in two series of different treatment processes operated in parallel within a full-scale drinking water treatment plant (DWTP) in Southeast China. Illumina sequencing of 16S rRNA genes of water samples demonstrated little effect of coagulation/sedimentation and pre-oxidation steps on bacterial communities, in contrast to dramatic and concurrent microbial community shifts during ozonation, granular activated carbon treatment, sand filtration, and disinfection for both series. A large number of unique operational taxonomic units (OTUs) at these four treatment steps further illustrated their strong shaping power towards the drinking water microbial communities. Interestingly, multidimensional scaling analysis revealed tight clustering of biofilm samples collected from different treatment steps, with Nitrospira, the nitrite-oxidizing bacteria, noted at higher relative abundances in biofilm compared to water samples. Overall, this study provides a snapshot of the step-to-step evolution of microbial communities in multi-step drinking water treatment systems, and the results provide insight into the control and manipulation of the drinking water microbiome via optimization of DWTP design and operation.

  5. A greedy global search algorithm for connecting unstable periodic orbits with low energy cost.

    NASA Astrophysics Data System (ADS)

    Tsirogiannis, G. A.; Markellos, V. V.

    2013-10-01

    A method for space mission trajectory design is presented in the form of a greedy global search algorithm. It uses invariant manifolds of unstable periodic orbits and its main advantage is that it performs a global search for the suitable legs of the invariant manifolds to be connected for a preliminary transfer design, as well as the appropriate points of the legs for maneuver application. The designed indirect algorithm bases the greedy choice on the optimality conditions that are assumed for the theoretical minimum transfer cost of a spacecraft when using invariant manifolds. The method is applied to a test case space mission design project in the Earth-Moon system and is found to compare favorably with previous techniques applied to the same project.

  6. Greedy bases in rank 2 quantum cluster algebras

    PubMed Central

    Lee, Kyungyong; Li, Li; Rupel, Dylan; Zelevinsky, Andrei

    2014-01-01

    We identify a quantum lift of the greedy basis for rank 2 coefficient-free cluster algebras. Our main result is that our construction does not depend on the choice of initial cluster, that it builds all cluster monomials, and that it produces bar-invariant elements. We also present several conjectures related to this quantum greedy basis and the triangular basis of Berenstein and Zelevinsky. PMID:24982182

  7. Multi-step plasma etching process for development of highly photosensitive InSb mid-IR FPAs

    NASA Astrophysics Data System (ADS)

    Seok, Chulkyun; Choi, Minkyung; Yang, In-Sang; Park, Sehun; Park, Yongjo; Yoon, Euijoon

    2014-06-01

    Reactive ion beam etching (RIBE) with CH4/H2/Ar or Cl2/Ar and ion beam etching (IBE) with Ar have been widely used for indium-containing compound semiconductors such as InAs, InP and InSb. To improve the performance of InSb FPAs, reducing ion-induced defects and surface roughness is one of the key issues. To find an optimized plasma etching method for the fabrication of InSb devices, conventional plasma etching processes were comparatively investigated. RIBE of InSb was observed to generate residual by-products such as carbides and chlorides that cause device degradation. On the other hand, a very smooth surface was obtained by etching with N2; however, the etch rate of N2 etching was too slow for application to device fabrication. As an alternative way to solve these problems, a multi-step plasma etching process for InSb, combining Ar etching and N2 etching, was developed. By gradually increasing the N2 gas flow during the etching process, the plasma damage that roughens the surface was reduced, and consequently a surface almost as smooth as that obtained by N2 RIE could be achieved. Furthermore, Raman analysis of the InSb surface after plasma etching clearly indicated that the multi-step etching process is an effective approach to reducing ion-induced surface damage.

  8. A Greedy reassignment algorithm for the PBS minimum monitor unit constraint.

    PubMed

    Lin, Yuting; Kooy, Hanne; Craft, David; Depauw, Nicolas; Flanz, Jacob; Clasie, Benjamin

    2016-06-21

    Proton pencil beam scanning (PBS) treatment plans are made of numerous unique spots of different weights. These weights are optimized by the treatment planning systems, and sometimes fall below the deliverable threshold set by the treatment delivery system. The purpose of this work is to investigate a Greedy reassignment algorithm to mitigate the effects of these low-weight pencil beams. The algorithm is applied during post-processing to the optimized plan to generate deliverable plans for the treatment delivery system. The Greedy reassignment method developed in this work deletes the smallest-weight spot in the entire field, reassigns its weight to its nearest neighbor(s), and repeats until all spots are above the minimum monitor unit (MU) constraint. Its performance was evaluated using plans collected from 190 patients (496 fields) treated at our facility. The Greedy reassignment method was compared against two other post-processing methods. The evaluation criterion was the γ-index pass rate comparing the pre-processed and post-processed dose distributions. A planning metric was developed to predict the impact of post-processing on treatment plans for various treatment planning, machine, and dose tolerance parameters. For fields with a pass rate of 90 ± 1% the planning metric has a standard deviation equal to 18% of the centroid value, showing that the planning metric and γ-index pass rate are correlated for the Greedy reassignment algorithm. Using a 3rd-order polynomial fit to the data, the Greedy reassignment method has a 1.8 times better planning metric at 90% pass rate compared to the other post-processing methods. As the planning metric and pass rate are correlated, the planning metric could provide an aid for choosing parameters during treatment planning, or even during facility design, in order to yield acceptable pass rates. More facilities are starting to implement PBS and some have spot sizes (one standard deviation) smaller than 5

  11. Starvation dynamics of a greedy forager

    NASA Astrophysics Data System (ADS)

    Bhat, U.; Redner, S.; Bénichou, O.

    2017-07-01

    We investigate the dynamics of a greedy forager that moves by random walking in an environment where each site initially contains one unit of food. Upon encountering a food-containing site, the forager eats all the food there and can subsequently hop an additional S steps without food before starving to death. Upon encountering an empty site, the forager goes hungry and comes one time unit closer to starvation. We investigate the new feature of forager greed: if the forager has a choice between hopping to an empty site or to a food-containing site in its nearest neighborhood, it hops preferentially towards food. If the neighboring sites all contain food or are all empty, the forager hops equiprobably to one of these neighbors. Paradoxically, the lifetime of the forager can depend non-monotonically on greed, and the sense of the non-monotonicity is opposite in one and two dimensions. Even more unexpectedly, the forager lifetime in one dimension is substantially enhanced when the greed is negative; here the forager tends to avoid food in its local neighborhood. We also determine the average amount of food consumed at the instant when the forager starves. We present analytic, heuristic, and numerical results to elucidate these intriguing phenomena.
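
    A minimal 1D simulation sketch of the dynamics described above; the parametrization of greed g as a hop probability (1 + g)/2 toward food, the ring size, and the averaging loop are illustrative assumptions:

    ```python
    import random

    def forager_lifetime(S, greed, L=201, rng=random):
        """Simulate one greedy forager on a 1D ring of L sites, each starting
        with one unit of food; returns the number of steps before starvation.
        greed in [-1, 1]: positive values bias hops toward food, negative away.
        """
        food = [True] * L
        pos = L // 2
        food[pos] = False          # the starting site is eaten immediately
        hunger, steps = 0, 0
        while hunger < S:
            left, right = (pos - 1) % L, (pos + 1) % L
            fl, fr = food[left], food[right]
            if fl != fr:           # exactly one neighbor holds food: apply greed
                toward = left if fl else right
                away = right if fl else left
                pos = toward if rng.random() < (1 + greed) / 2.0 else away
            else:                  # both or neither hold food: hop equiprobably
                pos = rng.choice([left, right])
            steps += 1
            if food[pos]:
                food[pos] = False
                hunger = 0         # a meal resets the time to starvation
            else:
                hunger += 1
        return steps

    random.seed(1)
    print(sum(forager_lifetime(S=20, greed=0.5) for _ in range(200)) / 200)
    ```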

  12. Adaptive Greedy Dictionary Selection for Web Media Summarization.

    PubMed

    Cong, Yang; Liu, Ji; Sun, Gan; You, Quanzeng; Li, Yuncheng; Luo, Jiebo

    2017-01-01

    Initializing an effective dictionary is an indispensable step for sparse representation. In this paper, we focus on the dictionary selection problem, with the objective of selecting a compact subset of basis vectors from the original training data instead of learning a new dictionary matrix as dictionary learning models do. We first design a new dictionary selection model via the ℓ2,0 norm. For model optimization, we propose two methods: one is the standard forward-backward greedy algorithm, which is not suitable for large-scale problems; the other is based on the gradient cues at each forward iteration and speeds up the process dramatically. In comparison with state-of-the-art dictionary selection models, our model is not only more effective and efficient but can also control the sparsity. To evaluate the performance of our new model, we select two practical web media summarization problems: 1) we build a new data set consisting of around 500 users, 3000 albums, and 1 million images, and achieve effective assisted albuming based on our model; and 2) by formulating the video summarization problem as a dictionary selection issue, we employ our model to extract keyframes from a video sequence in a more flexible way. Generally, our model outperforms the state-of-the-art methods in both of these two tasks.
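
    As a rough illustration of the selection idea (not the authors' ℓ2,0 model or its gradient-accelerated variant), a plain forward greedy subset selection that picks training columns minimizing the least-squares reconstruction error:

    ```python
    import numpy as np

    def greedy_dictionary_select(X, k):
        """Forward greedy selection of k columns of X (training samples) that
        best reconstruct all of X in the least-squares sense, a simplified
        stand-in for the forward step of a forward-backward selection scheme."""
        n = X.shape[1]
        selected = []
        for _ in range(k):
            best_j, best_err = None, np.inf
            for j in range(n):
                if j in selected:
                    continue
                D = X[:, selected + [j]]
                # Least-squares reconstruction of every sample from the subset.
                coeffs, *_ = np.linalg.lstsq(D, X, rcond=None)
                err = np.linalg.norm(X - D @ coeffs)
                if err < best_err:
                    best_j, best_err = j, err
            selected.append(best_j)
        return selected

    rng = np.random.default_rng(0)
    X = rng.normal(size=(20, 30))
    print(greedy_dictionary_select(X, k=3))  # indices of the chosen basis columns
    ```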

  13. On the origin of multi-step spin transition behaviour in 1D nanoparticles

    NASA Astrophysics Data System (ADS)

    Chiruta, Daniel; Jureschi, Catalin-Maricel; Linares, Jorge; Dahoo, Pierre Richard; Garcia, Yann; Rotaru, Aurelian

    2015-09-01

    To investigate the spin state switching mechanism in spin crossover (SCO) nanoparticles, special attention is given to three-step thermally induced SCO behavior in 1D chains. An additional term is included in the standard Ising-like Hamiltonian to account for the border interaction between SCO molecules and their local environment. It is shown that this additional interaction, together with the short-range interaction, drives the multi-step thermal hysteretic behavior in 1D SCO systems. The relation between a polymeric matrix and this particular multi-step SCO phenomenon is discussed accordingly. Finally, the environmental influence on the SCO system's size is analyzed as well.
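
    A schematic form of such a Hamiltonian, assuming the common Ising-like SCO notation with fictitious spins σi = ±1, degeneracy ratio g, ligand-field gap Δ, short-range coupling J, and an added border coupling L (the abstract does not give the authors' exact expression):

    ```latex
    % Ising-like SCO Hamiltonian with an added border term (schematic form);
    % sigma_i = +1 (high spin) or -1 (low spin).
    \mathcal{H} \;=\; \frac{\Delta - k_{B} T \ln g}{2} \sum_{i} \sigma_{i}
        \;-\; J \sum_{\langle i,j \rangle} \sigma_{i} \sigma_{j}
        \;-\; L \sum_{i \in \text{border}} \sigma_{i}
    ```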

  14. Surface Modified Particles By Multi-Step Addition And Process For The Preparation Thereof

    DOEpatents

    Cook, Ronald Lee; Elliott, Brian John; Luebben, Silvia DeVito; Myers, Andrew William; Smith, Bryan Matthew

    2006-01-17

    The present invention relates to a new class of surface modified particles and to a multi-step surface modification process for the preparation of the same. The multi-step surface functionalization process involves two or more reactions to produce particles that are compatible with various host systems and/or to provide the particles with particular chemical reactivities. The initial step comprises the attachment of a small organic compound to the surface of the inorganic particle. The subsequent steps attach additional compounds to the previously attached organic compounds through organic linking groups.

  15. Experimental study on multi-step creep properties of rat skins.

    PubMed

    Chen, Gang; Cui, Shibo; You, Lin; Li, Yan; Mei, Yun-Hui; Chen, Xu

    2015-06-01

    Tension, single-step creep, and multi-step creep of rat skins at room temperature were studied experimentally. We studied the effects of the loading histories of high stress creep, low stress creep, and stress relaxation on multi-step creep. The microstructure of rat skins after the prescribed tests was observed microscopically with the help of standard hematoxylin and eosin (H&E) staining, and the void ratios were analyzed. The loading histories of high stress creep, low stress creep, and stress relaxation have a significant influence on multi-step creep. We found that the creep strain and its rate in the steady-state stage, as well as the creep-fatigue life of rat skins, are sensitive to the creep stress. Low stress creep after a loading history of high stress creep is characterized by a recovery of strain and a zero strain rate. Both the loading history of low stress creep and stress relaxation act as a recovery in multi-step creep, and they are driven by the same mechanism in the creep strain and the void ratio of rat skins. The loading history whose sequence is, successively, low stress creep, stress relaxation, and high stress creep helps to obtain the largest creep strain at the lowest void ratio. Copyright © 2015 Elsevier Ltd. All rights reserved.

  16. Two-dimensional Paper Networks: programmable fluidic disconnects for multi-step processes in shaped paper

    PubMed Central

    Trinh, Philip; Ball, Cameron; Fu, Elain; Yager, Paul

    2016-01-01

    Most laboratory assays take advantage of multi-step protocols to achieve high performance, but conventional paper-based tests (e.g., lateral flow tests) are generally limited to assays that can be carried out in a single fluidic step. We have developed two-dimensional paper networks (2DPNs) that use materials from lateral flow tests but reconfigure them to enable programming of multi-step reagent delivery sequences. The 2DPN uses multiple converging fluid inlets to control the arrival time of each fluid at a detection zone or reaction zone, and it requires a method to disconnect each fluid source in a correspondingly timed sequence. Here, we present a method that allows programmed disconnection of the fluid sources required for multi-step delivery. A 2DPN with legs of different lengths is inserted into a shared buffer well, and the dropping fluid surface disconnects each leg in a programmable sequence. This approach could enable multi-step laboratory assays to be converted into simple point-of-care devices that have high performance yet remain easy to use. PMID:22037591

  17. Use of Chiral Oxazolidinones for a Multi-Step Synthetic Laboratory Module

    ERIC Educational Resources Information Center

    Betush, Matthew P.; Murphree, S. Shaun

    2009-01-01

    Chiral oxazolidinone chemistry is used as a framework for an advanced multi-step synthesis lab. The cost-effective and robust preparation of chiral starting materials is presented, as well as the use of chiral auxiliaries in a synthesis scheme that is appropriate for students currently in the second semester of the organic sequence. (Contains 1…

  18. Collaborative Activities for Solving Multi-Step Problems in General Chemistry

    ERIC Educational Resources Information Center

    Tortajada-Genaro, Luis Antonio

    2014-01-01

    Learning to solve multi-step problems is a relevant aim in chemical education for engineering students. In these questions, after analyzing the initial data, complex reasoning and an elaborate mathematical procedure are needed to achieve the correct numerical answer. However, many students are able to effectively use algorithms even with a…

  19. Multi-step routes of capuchin monkeys in a laser pointer traveling salesman task.

    PubMed

    Howard, Allison M; Fragaszy, Dorothy M

    2014-09-01

    Prior studies have claimed that nonhuman primates plan their routes multiple steps in advance. However, a recent reexamination of multi-step route planning in nonhuman primates indicated that there is no evidence for planning more than one step ahead. We tested multi-step route planning in capuchin monkeys using a pointing device to "travel" to distal targets while stationary. This device enabled us to determine whether capuchins distinguish the spatial relationship between goals and themselves and spatial relationships between goals and the laser dot, allocentrically. In Experiment 1, two subjects were presented with identical food items in Near-Far (one item nearer to subject) and Equidistant (both items equidistant from subject) conditions with a laser dot visible between the items. Subjects moved the laser dot to the items using a joystick. In the Near-Far condition, one subject demonstrated a bias for items closest to self but the other subject chose efficiently. In the second experiment, subjects retrieved three food items in similar Near-Far and Equidistant arrangements. Both subjects preferred food items nearest the laser dot and showed no evidence of multi-step route planning. We conclude that these capuchins do not make choices on the basis of multi-step look ahead strategies.

  1. Mechanical and Metallurgical Evolution of Stainless Steel 321 in a Multi-step Forming Process

    NASA Astrophysics Data System (ADS)

    Anderson, M.; Bridier, F.; Gholipour, J.; Jahazi, M.; Wanjara, P.; Bocher, P.; Savoie, J.

    2016-04-01

    This paper examines the metallurgical evolution of AISI Stainless Steel 321 (SS 321) during multi-step forming, a process that involves cycles of deformation with intermediate heat treatment steps. The multi-step forming process was simulated by implementing interrupted uniaxial tensile testing experiments. The evolution of the mechanical properties as well as of microstructural features, such as twins and the textures of the austenite and martensite phases, was studied as a function of the multi-step forming process. The characteristics of the Strain-Induced Martensite (SIM) were also documented for each deformation step and intermediate stress relief heat treatment. The results indicated that the intermediate heat treatments considerably increased the formability of SS 321. Texture analysis showed that the effect of the intermediate heat treatment on the austenite was minor and led to partial recrystallization, while deformation was observed to reinforce the crystallographic texture of the austenite. For the SIM, an Olson-Cohen-type equation was identified to analytically predict its formation during the multi-step forming process. The generated SIM was textured and weakened with increasing deformation.
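
    The Olson-Cohen relation referred to above is commonly written as follows (textbook form; the abstract does not report the fitted parameter values):

    ```latex
    % Olson-Cohen sigmoidal law for the strain-induced martensite fraction f
    % as a function of plastic strain eps; alpha governs shear-band formation,
    % beta the nucleation probability at shear-band intersections, and n is a
    % fixed exponent.
    f_{\mathrm{SIM}}(\varepsilon) \;=\;
        1 - \exp\!\left\{-\beta \left[1 - \exp(-\alpha\,\varepsilon)\right]^{n}\right\}
    ```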

  2. Biased and greedy random walks on two-dimensional lattices with quenched randomness: The greedy ant within a disordered environment

    NASA Astrophysics Data System (ADS)

    Mitran, T. L.; Melchert, O.; Hartmann, A. K.

    2013-12-01

    The main characteristics of biased greedy random walks (BGRWs) on two-dimensional lattices with real-valued quenched disorder on the lattice edges are studied. Here the disorder allows for negative edge weights. In previous studies considering the negative-weight percolation (NWP) problem, this was shown to change the universality class of the existing, static percolation transition. In the present study, four different types of BGRWs and an algorithm based on the ant colony optimization heuristic were considered. Regarding the BGRWs, the precise configurations of the lattice walks constructed during the numerical simulations were influenced by two parameters: a disorder parameter ρ that controls the amount of negative edge weights on the lattice and a bias strength B that governs the drift of the walkers along a certain lattice direction. The random walks are “greedy” in the sense that the locally optimal choice of the walker is to preferentially traverse edges with a negative weight (associated with a net gain of “energy” for the walker). Here, the pivotal observable is the probability that, after termination, a lattice walk exhibits a total negative weight, which is here considered as percolating. The behavior of this observable as a function of ρ for different bias strengths B is put under scrutiny. Upon tuning ρ, the probability of finding such a feasible lattice walk increases from zero to one. This is the key feature of the percolation transition in the NWP model. Here, we address the question of how well the transition point ρc, resulting from numerically exact and “static” simulations in terms of the NWP model, can be resolved using simple dynamic algorithms that have only local information available, one of the basic questions in the physics of glassy systems.
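
    A sketch of one walker step under these rules; the paper studies four BGRW variants, so the exact preference and bias rules below (negative edges always preferred, drift weight 1 + B in the +x direction, hash-based quenched weights) are one plausible reading, not the authors' definitions:

    ```python
    import random

    def bgrw_step(pos, edge_weight, B, rng=random):
        """One step of a biased greedy random walk on Z^2 (sketch).
        The walker greedily prefers negative (energy-lowering) edges,
        with an extra drift of strength B along +x."""
        x, y = pos
        neighbors = [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]
        negative = [v for v in neighbors if edge_weight(pos, v) < 0]
        candidates = negative if negative else neighbors
        # Candidate moves in the +x direction get relative weight (1 + B).
        probs = [(1 + B) if v[0] > x else 1.0 for v in candidates]
        r, acc = rng.random() * sum(probs), 0.0
        for v, p in zip(candidates, probs):
            acc += p
            if r <= acc:
                return v
        return candidates[-1]

    # Quenched disorder: deterministic per-edge weights, negative with
    # probability rho = 0.3 (an illustrative value of the disorder parameter).
    def w(u, v):
        key = (min(u, v), max(u, v))
        return -1.0 if random.Random(hash(key)).random() < 0.3 else 1.0

    random.seed(2)
    pos = (0, 0)
    for _ in range(5):
        pos = bgrw_step(pos, w, B=0.5)
    print(pos)
    ```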

  3. Seismic signal time-frequency analysis based on multi-directional window using greedy strategy

    NASA Astrophysics Data System (ADS)

    Chen, Yingpin; Peng, Zhenming; Cheng, Zhuyuan; Tian, Lin

    2017-08-01

    The Wigner-Ville distribution (WVD) is an important time-frequency analysis technique with a high energy concentration, used in seismic signal processing. However, it is interfered with by many cross terms. To suppress the cross terms of the WVD while keeping the concentration of its high energy distribution, an adaptive multi-directional filtering window in the ambiguity domain is proposed. Starting from the relationship between the Cohen distribution and the Gabor transform, and combining the greedy strategy with the rotational invariance property of the fractional Fourier transform, the proposed approach extends the one-dimensional, one-directional optimal window function of the optimal fractional Gabor transform (OFrGT) to a two-dimensional, multi-directional window in the ambiguity domain. In this way, the multi-directional window matches the main auto terms of the WVD more precisely. Using the greedy strategy, the proposed window takes into account the optimal and other suboptimal directions, which also solves a problem of the OFrGT, called the local concentration phenomenon, encountered with multi-component signals. Experiments on different types of both signal models and real seismic signals reveal that the proposed window can overcome the drawbacks of the WVD and the OFrGT mentioned above. Finally, the proposed method is applied to a seismic signal's spectral decomposition. The results show that the proposed method can explore the spatial distribution of a reservoir more precisely.

  4. Teaching multi-step math skills to adults with disabilities via video prompting.

    PubMed

    Kellems, Ryan O; Frandsen, Kaitlyn; Hansen, Blake; Gabrielsen, Terisa; Clarke, Brynn; Simons, Kalee; Clements, Kyle

    2016-11-01

    The purpose of this study was to evaluate the effectiveness of teaching multi-step math skills to nine adults with disabilities in an 18-21 post-high school transition program using a video prompting intervention package. The dependent variable was the percentage of steps completed correctly. The independent variable was the video prompting intervention, which covered several multi-step math calculation skills: (a) calculating a tip (15%), (b) calculating item unit prices, and (c) adjusting a recipe for more or fewer people. Results indicated a functional relationship between the video prompting intervention package and the percentage of steps completed correctly. Eight of the nine adults showed significant gains immediately after receiving the video prompting intervention.
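
    The three targeted skills reduce to short arithmetic chains; a throwaway illustration of the calculations themselves (all values made up):

    ```python
    def tip(total, rate=0.15):
        """Step 1: multiply the bill by the tip rate; step 2: round to cents."""
        return round(total * rate, 2)

    def unit_price(price, quantity):
        """Divide the shelf price by the package quantity."""
        return price / quantity

    def scale_recipe(amounts, servings_from, servings_to):
        """Multiply each ingredient amount by the ratio of servings."""
        factor = servings_to / servings_from
        return {name: qty * factor for name, qty in amounts.items()}

    print(tip(42.80))                                    # 6.42
    print(unit_price(3.49, 12))                          # about 0.29 per unit
    print(scale_recipe({"flour_g": 250, "eggs": 2}, 4, 6))
    ```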

  5. Region-based multi-step optic disk and cup segmentation from color fundus image

    NASA Astrophysics Data System (ADS)

    Xiao, Di; Lock, Jane; Manresa, Javier Moreno; Vignarajan, Janardhan; Tay-Kearney, Mei-Ling; Kanagasingam, Yogesan

    2013-02-01

    The retinal optic cup-to-disk ratio (CDR) is one of the important indicators of glaucomatous neuropathy. In this paper, we propose a novel multi-step 4-quadrant thresholding method for optic disk segmentation and a multi-step temporal-nasal segmenting method for optic cup segmentation, based on blood-vessel-inpainted HSL lightness images and green-channel images. The performance of the proposed methods was evaluated on a group of color fundus images and compared with manual outlining results from two experts. Dice scores between the automatically detected and manually outlined disk and cup regions were computed and compared. Vertical CDRs were also compared among the three results. The preliminary experiment has demonstrated the robustness of the method for automatic optic disk and cup segmentation and its potential value for clinical application.
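
    For the CDR itself, a minimal sketch of how a vertical cup-to-disk ratio can be read off binary segmentation masks (the paper's thresholding pipeline is not reproduced here):

    ```python
    import numpy as np

    def vertical_cdr(disk_mask, cup_mask):
        """Vertical cup-to-disk ratio from binary segmentation masks:
        the ratio of the vertical extents of the cup and disk regions."""
        def vertical_extent(mask):
            rows = np.flatnonzero(mask.any(axis=1))
            return rows.max() - rows.min() + 1
        return vertical_extent(cup_mask) / vertical_extent(disk_mask)

    # Toy masks: a 40-pixel-tall disk containing a 16-pixel-tall cup -> CDR 0.4.
    disk = np.zeros((100, 100), dtype=bool)
    cup = np.zeros((100, 100), dtype=bool)
    disk[30:70, 40:60] = True
    cup[42:58, 45:55] = True
    print(vertical_cdr(disk, cup))  # 0.4
    ```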

  6. Content-based image retrieval using greedy routing

    NASA Astrophysics Data System (ADS)

    Don, Anthony; Hanusse, Nicolas

    2008-01-01

    In this paper, we propose a new concept for browsing and searching in large collections of content-based indexed images. Our approach is inspired by greedy routing algorithms used in distributed networks. We define a navigation graph, called a navgraph, whose vertices represent images. The edges of the navgraph are computed according to a similarity measure between indexed images. The resulting graph can be seen as an ad-hoc network of images in which a greedy routing algorithm can be applied for retrieval purposes. A request for a target image consists of a walk in the navigation graph using a greedy approach: starting from an arbitrary vertex/image, the neighbors of the current vertex are presented to the user, who iteratively selects the vertex that is most similar to the target. We present the navgraph construction and prove its efficiency for greedy routing. We also propose a specific content descriptor that we compare to the MPEG-7 Color Layout Descriptor. Experimental results with test users show the usability of this approach.
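
    A minimal sketch of the greedy walk on a navgraph, with a scoring function standing in for the user's interactive "most similar to the target" choice:

    ```python
    def greedy_route(navgraph, score, start):
        """Walk the navgraph greedily: at each vertex, hop to the neighbor
        with the highest score (a stand-in for the user's judgement of
        similarity to the target); stop at a local maximum."""
        current = start
        while True:
            best = max(navgraph[current], key=score)
            if score(best) <= score(current):
                return current  # no neighbor looks closer to the target
            current = best

    # Toy run: vertices are 1-D "features"; the walk starts at 0 and
    # greedily approaches the target image 9.
    graph = {v: [u for u in (v - 1, v + 1, v + 3) if 0 <= u <= 9]
             for v in range(10)}
    target = 9
    score = lambda v: -abs(v - target)
    print(greedy_route(graph, score, start=0))  # reaches 9
    ```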

  7. Intrinsic Micromechanism of Multi-step Structural Transformation in MnNi Shape Memory Alloys

    NASA Astrophysics Data System (ADS)

    Cui, Shushan; Wan, Jianfeng; Rong, Yonghua; Zhang, Jihua

    2017-03-01

    Simulation of the multi-step transformation of cubic matrix → multi-variant tetragonal domain → orthorhombic domain was realized by phase-field method. The intrinsic micromechanism of the second-step transformation in MnNi alloys was studied. It was found that the orthorhombic variant originated from the tetragonal variant with similar orientation, and bar-shaped orthorhombic phase firstly occurred around the interface of twinning bands. The second-step transformation resulted in localized variation of internal stress.

  9. Multi-Step Deep Reactive Ion Etching Fabrication Process for Silicon-Based Terahertz Components

    NASA Technical Reports Server (NTRS)

    Jung-Kubiak, Cecile (Inventor); Reck, Theodore (Inventor); Chattopadhyay, Goutam (Inventor); Perez, Jose Vicente Siles (Inventor); Lin, Robert H. (Inventor); Mehdi, Imran (Inventor); Lee, Choonsup (Inventor); Cooper, Ken B. (Inventor); Peralta, Alejandro (Inventor)

    2016-01-01

    A multi-step silicon etching process has been developed to fabricate silicon-based terahertz (THz) waveguide components. This technique provides precise dimensional control across multiple etch depths with batch processing capabilities. Nonlinear and passive components such as mixers and multipliers, waveguides, hybrids, OMTs and twists have been fabricated and integrated into a small silicon package. This fabrication technique enables a wafer-stacking architecture to provide ultra-compact multi-pixel receiver front-ends in the THz range.

  10. A multi-step peptidolytic cascade for amino acid recovery in chloroplasts.

    PubMed

    Teixeira, Pedro F; Kmiec, Beata; Branca, Rui M M; Murcha, Monika W; Byzia, Anna; Ivanova, Aneta; Whelan, James; Drag, Marcin; Lehtiö, Janne; Glaser, Elzbieta

    2017-01-01

    Plastids (including chloroplasts) are subcellular sites for a plethora of proteolytic reactions, required in functions ranging from protein biogenesis to quality control. Here we show that peptides generated from pre-protein maturation within chloroplasts of Arabidopsis thaliana are degraded to amino acids by a multi-step peptidolytic cascade consisting of oligopeptidases and aminopeptidases, effectively allowing the recovery of single amino acids within these organelles.

  11. Contaminant source and release history identification in groundwater: A multi-step approach

    NASA Astrophysics Data System (ADS)

    Gzyl, G.; Zanini, A.; Frączek, R.; Kura, K.

    2014-02-01

    The paper presents a new multi-step approach aimed at source identification and release history estimation. The approach consists of three steps: performing integral pumping tests, identifying sources, and recovering the release history by means of a geostatistical approach. The present paper shows the results obtained from the application of the approach to a complex case study in Poland in which several areal sources were identified. The investigated site is situated in the vicinity of a former chemical plant in southern Poland, in the city of Jaworzno in the valley of the Wąwolnica River; the plant has been in operation since the First World War, producing various chemicals. From an environmental point of view the most relevant activity was the production of pesticides, especially lindane. The application of the multi-step approach enabled a significant increase in the knowledge of contamination at the site. Some suspected contamination sources have been proven to have a minor effect on the overall contamination, while other suspected sources have been proven to have key significance. Some areas not previously taken into consideration have now been identified as key sources. The method also enabled estimation of the magnitude of the sources, and a list of priority reclamation actions will be drawn up as a result. The multi-step approach has proven to be effective and may be applied to other complicated contamination cases. Moreover, the paper shows the capability of the geostatistical approach to manage a complex real case study.

  12. Photon Production through Multi-step Processes Important in Nuclear Fluorescence Experiments

    SciTech Connect

    Hagmann, C; Pruet, J

    2006-10-26

    The authors present calculations describing the production of photons through multi-step processes occurring when a beam of gamma rays interacts with a macroscopic material. These processes involve the creation of energetic electrons through Compton scattering, photo-absorption and pair production, the subsequent scattering of these electrons, and the creation of energetic photons occurring as these electrons are slowed through Bremsstrahlung emission. Unlike single Compton collisions, during which an energetic photon that is scattered through a large angle loses most of its energy, these multi-step processes result in a sizable flux of energetic photons traveling at large angles relative to an incident photon beam. These multi-step processes are also a key background in experiments that measure nuclear resonance fluorescence by shining photons on a thin foil and observing the spectrum of back-scattered photons. Effective cross sections describing the production of backscattered photons are presented in a tabular form that allows simple estimates of backgrounds expected in a variety of experiments. Incident photons with energies between 0.5 MeV and 8 MeV are considered. These calculations of effective cross sections may be useful for those designing NRF experiments or systems that detect specific isotopes in well-shielded environments through observation of resonance fluorescence.

  13. Optimized multi-step NMR-crystallography approach for structural characterization of a stable quercetin solvate.

    PubMed

    Filip, Xenia; Miclaus, Maria; Martin, Flavia; Filip, Claudiu; Grosu, Ioana Georgeta

    2017-01-31

    Herein we report the preparation and solid-state structural investigation of the 1,4-dioxane-quercetin solvate. NMR crystallography methods were employed for crystal structure determination of the solvate from microcrystalline powder. The stability of the compound relative to other reported quercetin solvates is discussed and found to be in perfect agreement with the hydrogen-bonding networks/supramolecular architectures formed in each case. It is also clearly shown that NMR crystallography is an ideal analytical tool in cases where hydrogen-bonding networks need to be determined with high accuracy.

  14. Effective Iterated Greedy Algorithm for Flow-Shop Scheduling Problems with Time lags

    NASA Astrophysics Data System (ADS)

    ZHAO, Ning; YE, Song; LI, Kaidian; CHEN, Siyu

    2017-03-01

    The flow shop scheduling problem with time lags is a practical scheduling problem that has attracted many studies. The permutation problem (PFSP with time lags) has received much attention, but the non-permutation problem (non-PFSP with time lags) seems to be neglected. With the aim of minimizing the makespan while satisfying the time lag constraints, efficient algorithms for the PFSP and non-PFSP problems are proposed: an iterated greedy algorithm for the permutation case (IGTLP) and an iterated greedy algorithm for the non-permutation case (IGTLNP). The proposed algorithms are verified using well-known simple and complex instances of permutation and non-permutation problems with various time lag ranges. The permutation results indicate that the proposed IGTLP can reach a near-optimal solution within roughly 11% of the computational time of a traditional GA approach. The non-permutation results indicate that the proposed IG can reach nearly the same solution within less than 1% of the computational time of a traditional GA approach. The proposed research combines the PFSP and non-PFSP with minimal and maximal time lag considerations, which provides an interesting viewpoint for industrial implementation.
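
    A sketch of the basic iterated greedy loop for the permutation case, assuming a plain permutation flow shop makespan objective (the paper's minimal/maximal time-lag constraints would be enforced inside the evaluation):

    ```python
    import random

    def makespan(seq, p):
        """Completion time of the last job in a permutation flow shop.
        p[j][m] = processing time of job j on machine m. (Time-lag
        feasibility checks from the paper would be added here.)"""
        m = len(p[0])
        c = [0.0] * m
        for j in seq:
            for k in range(m):
                c[k] = max(c[k], c[k - 1] if k else 0.0) + p[j][k]
        return c[-1]

    def iterated_greedy(p, iters=200, d=2, rng=random):
        """Basic iterated greedy: remove d random jobs, greedily reinsert
        each at its best position, keep the sequence if the makespan improves."""
        n = len(p)
        best = list(range(n))
        best_val = makespan(best, p)
        for _ in range(iters):
            seq = best[:]
            removed = [seq.pop(rng.randrange(len(seq))) for _ in range(d)]
            for j in removed:  # greedy NEH-style reinsertion
                pos = min(range(len(seq) + 1),
                          key=lambda i: makespan(seq[:i] + [j] + seq[i:], p))
                seq.insert(pos, j)
            val = makespan(seq, p)
            if val < best_val:
                best, best_val = seq, val
        return best, best_val

    random.seed(3)
    p = [[random.randint(1, 9) for _ in range(3)] for _ in range(8)]
    print(iterated_greedy(p))
    ```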

  15. Identification of feedback loops in neural networks based on multi-step Granger causality.

    PubMed

    Dong, Chao-Yi; Shin, Dongkwan; Joo, Sunghoon; Nam, Yoonkey; Cho, Kwang-Hyun

    2012-08-15

    Feedback circuits are crucial network motifs, ubiquitously found in many intra- and inter-cellular regulatory networks, and they also act as basic building blocks for inducing synchronized bursting behaviors in neural network dynamics. Therefore, the system-level identification of feedback circuits using time-series measurements is critical to understanding the underlying regulatory mechanism of synchronized bursting behaviors. The Multi-Step Granger Causality Method (MSGCM) was developed to identify feedback loops embedded in biological networks using time-series experimental measurements. Based on multivariate time-series analysis, MSGCM uses a modified Wald test to infer the existence of multi-step Granger causality between a pair of network nodes. A significant bi-directional multi-step Granger causality between two nodes indicates the existence of a feedback loop. This new identification method resolves the drawback of the previous non-causal impulse response component method, which was only applicable to networks containing no co-regulatory forward path. MSGCM also significantly improves the ratio of correctly identified feedback loops. In this study, the MSGCM was tested using synthetic pulsed neural network models and in vitro cultured rat neural networks recorded with multi-electrode arrays. As a result, we found a large number of feedback loops in the in vitro cultured neural networks with apparent synchronized oscillation, indicating a close relationship between synchronized oscillatory bursting behavior and underlying feedback loops. The MSGCM is an efficient method to investigate feedback loops embedded in in vitro cultured neural networks. The identified feedback loop motifs are considered an important design principle responsible for the synchronized bursting behavior in neural networks.
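
    As a rough single-step stand-in for the idea (the paper's MSGCM uses a modified Wald test of multi-step causality, which also copes with co-regulatory forward paths), pairwise Granger F-tests in both directions can flag a candidate feedback pair:

    ```python
    import numpy as np
    from statsmodels.tsa.stattools import grangercausalitytests

    def is_feedback_pair(x, y, maxlag=3, alpha=0.01):
        """Flag a candidate feedback loop between two time series: run a
        standard Granger F-test in each direction and declare feedback
        when both are significant. (Simplification of the MSGCM idea.)"""
        def p_value(target, source):
            data = np.column_stack([target, source])  # tests source -> target
            res = grangercausalitytests(data, maxlag=maxlag, verbose=False)
            return min(res[lag][0]["ssr_ftest"][1] for lag in res)
        return p_value(y, x) < alpha and p_value(x, y) < alpha

    # Toy coupled system with mutual delayed influence: feedback expected.
    rng = np.random.default_rng(0)
    n = 500
    x = np.zeros(n); y = np.zeros(n)
    for t in range(2, n):
        x[t] = 0.6 * x[t - 1] + 0.3 * y[t - 2] + rng.normal(scale=0.1)
        y[t] = 0.5 * y[t - 1] + 0.3 * x[t - 1] + rng.normal(scale=0.1)
    print(is_feedback_pair(x, y))  # True for this mutually coupled pair
    ```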

  17. Avoiding Greediness in Cooperative Peer-to-Peer Networks

    NASA Astrophysics Data System (ADS)

    Brust, Matthias R.; Ribeiro, Carlos H. C.; Mesit, Jaruwan

    In peer-to-peer networks, peers simultaneously play the roles of client and server. Since the introduction of the first file-sharing protocols, peer-to-peer networking has come to cause more than 35% of all internet traffic, with an ever-increasing tendency. A common file-sharing protocol that occupies most of the peer-to-peer traffic is the BitTorrent protocol. Although based on cooperative principles, in practice it is doomed to fail if peers behave greedily. In this work-in-progress paper, we model the protocol by introducing a game named Tit-for-Tat Network Termination (T4TNT) that offers an interesting approach to the greediness problem of the BitTorrent protocol. Simulations conducted under this model indicate that greediness can be reduced by solely manipulating the underlying peer-to-peer topology.

  19. Minimizing the total service time of discrete dynamic berth allocation problem by an iterated greedy heuristic.

    PubMed

    Lin, Shih-Wei; Ying, Kuo-Ching; Wan, Shu-Yen

    2014-01-01

    Berth allocation is the forefront operation performed when ships arrive at a port and is a critical task in container port optimization. Minimizing the time ships spend at berths constitutes an important objective of berth allocation problems. This study focuses on the discrete dynamic berth allocation problem (discrete DBAP), which aims to minimize total service time, and proposes an iterated greedy (IG) algorithm to solve it. The proposed IG algorithm is tested on three benchmark problem sets. Experimental results show that the proposed IG algorithm can obtain optimal solutions for all test instances of the first and second problem sets and outperforms the best-known solutions for 35 out of 90 test instances of the third problem set.

  20. Multi-step motion planning: Application to free-climbing robots

    NASA Astrophysics Data System (ADS)

    Bretl, Timothy Wolfe

    This dissertation addresses the problem of planning the motion of a multi-limbed robot to "free-climb" vertical rock surfaces. Free-climbing relies on natural features and friction (such as holes or protrusions) rather than special fixtures or tools. It requires strength, but more importantly it requires deliberate reasoning: not only must the robot decide how to adjust its posture to reach the next feature without falling, it must plan an entire sequence of steps, where each one might have future consequences. This process of reasoning is called multi-step planning. A multi-step planning framework is presented for computing non-gaited, free-climbing motions. This framework derives from an analysis of a free-climbing robot's configuration space, which can be decomposed into constraint manifolds associated with each state of contact between the robot and its environment. An understanding of the adjacency between manifolds motivates a two-stage strategy that uses a candidate sequence of steps to direct the subsequent search for motions. Three algorithms are developed to support the framework. The first algorithm reduces the amount of time required to plan each potential step, a large number of which must be considered over an entire multi-step search. It extends the probabilistic roadmap (PRM) approach based on an analysis of the interaction between balance and the topology of closed kinematic chains. The second algorithm addresses a problem with the PRM approach, that it is unable to distinguish challenging steps (which may be critical) from impossible ones. This algorithm detects impossible steps explicitly, using automated algebraic inference and machine learning. The third algorithm provides a fast constraint checker (on which the PRM approach depends), in particular a test of balance at the initially unknown number of sampled configurations associated with each step. It is a method of incremental precomputation, fast because it takes advantage of the sample

  1. PRE-ADAMO: a multi-step approach for the identification of life on Mars

    NASA Astrophysics Data System (ADS)

    Brucato, J. R.; Vázquez, L.; Rotundi, A.; Cataldo, F.; Palomba, E.; Saladino, R.; di Mauro, E.; Baratta, G.; Barbier, B.; Battaglia, R.; Colangeli, L.; Costanzo, G.; Crestini, C.; della Corte, V.; Mazzotta Epifani, E.; Esposito, F.; Ferrini, G.; Gómez Elvira, J.; Isola, M.; Keheyan, Y.; Leto, G.; Martinez Frias, J.; Mennella, V.; Negri, R.; Palumbo, M. E.; Palumbo, P.; Strazzulla, G.; Falciani, P.; Adami, G.; Guizzo, G. P.; Campiotti, S.

    2004-03-01

    It is of paramount importance to detect traces of life on the Martian surface. Organic molecules are highly polar and, if present on Mars, need to be extracted from a dust sample, separated, concentrated, processed and analysed by an appropriate apparatus. PRE-ADAMO (PRebiotic Experiment - Activity of Dust And bioMolecules Observation) is a multi-step approach for the identification of possible polar substances present on Mars. It was proposed as an instrument of the Pasteur payload for the ESA (European Space Agency) ExoMars rover mission. The main scientific objectives and the experimental approach of PRE-ADAMO are presented here.

  2. Analysis of intrinsic coupling loss in multi-step index optical fibres.

    PubMed

    Aldabaldetreku, Gotzon; Durana, Gaizka; Zubia, Joseba; Arrue, Jon; Jiménez, Felipe; Mateo, Javier

    2005-05-02

    The main goal of the present paper is to provide a comprehensive analysis of the intrinsic coupling loss for multi-step index (MSI) fibres and to compare it with those obtained for step- and graded-index fibres. We investigate the effects of tolerances in each waveguide parameter typical of standard manufacturing processes by carrying out several simulations using the ray-tracing method. The results obtained will help us identify the most critical waveguide variations, to which fibre manufacturers will have to pay closer attention in order to achieve lower coupling losses.

  3. Synergy between chemo- and bio-catalysts in multi-step transformations.

    PubMed

    Caiazzo, Aldo; Garcia, Paula M L; Wever, Ron; van Hest, Jan C M; Rowan, Alan E; Reek, Joost N H

    2009-07-21

    Cascade synthetic pathways, which allow multi-step conversions to take place in one reaction vessel, are crucial for the development of biomimetic, highly efficient new methods of chemical synthesis. Theoretically, the complexity introduced by combining processes could lead to an improvement of the overall process; however, it is the current general belief that it is more efficient to run processes separately. Inspired by natural cascade procedures we successfully combined a lipase catalyzed amidation with palladium catalyzed coupling reactions, simultaneously carried out on the same molecule. Unexpectedly, the bio- and chemo-catalyzed processes show synergistic behaviour, highlighting the complexity of multi-catalyst systems.

  4. Greedy Wavelet Projections are Bounded on BV (Preprint)

    DTIC Science & Technology

    2003-10-30

    Let BV = BV(IRd) be the space of functions of bounded variation on IRd with d ≥ 2, and let ψλ, λ ∈ ∆, be a wavelet basis of compactly supported functions normalized in BV. Keywords: greedy approximation, functions of bounded variation, thresholding, bounded projections.

  5. A multi-step method for material decomposition in spectral computed tomography

    NASA Astrophysics Data System (ADS)

    Fredette, Nathaniel R.; Lewis, Cale E.; Das, Mini

    2017-03-01

    When using a photon counting detector for material decomposition problems, a major issue is the low count rate per energy bin, which may lead to high image noise with compromised contrast and accuracy. A multi-step algorithmic method of material decomposition is proposed for spectral computed tomography (CT), where the problem is formulated as a series of simpler and dose-efficient decompositions rather than solved simultaneously. A simple domain of four materials (water, hydroxyapatite, iodine and gold) was explored. The results showed an improvement in accuracy with low noise over a similar method where the materials were decomposed simultaneously. In the multi-step approach, for the same acquired energy bin data, the problem is reformulated in each step with a decreasing number of energy bins (resulting in higher count levels per bin) and unknowns in each step. This offers flexibility in the choice of energy bins for each material type. Our results are preliminary but show promise and the potential to tackle challenging decomposition tasks. The complete work will include a detailed analysis of this approach and experimental data with more complex mixtures.

  6. Adaptive multi-step Full Waveform Inversion based on Waveform Mode Decomposition

    NASA Astrophysics Data System (ADS)

    Hu, Yong; Han, Liguo; Xu, Zhuo; Zhang, Fengjiao; Zeng, Jingwen

    2017-04-01

    Full Waveform Inversion (FWI) can be used to build high-resolution velocity models, but there are still many challenges in processing seismic field data. The most difficult problem is how to recover the long-wavelength components of subsurface velocity models when the seismic data lack low-frequency information and long offsets. To solve this problem, we propose to use the Waveform Mode Decomposition (WMD) method to reconstruct low-frequency information for FWI and obtain a smooth model, so that the initial-model dependence of FWI can be reduced. In this paper, we use the adjoint-state method to calculate the gradient for Waveform Mode Decomposition Full Waveform Inversion (WMDFWI). Through illustrative numerical examples, we show that the low-frequency information reconstructed by the WMD method is very reliable. WMDFWI, in combination with the adaptive multi-step inversion strategy, can obtain more faithful and accurate final inversion results. Numerical examples show that even if the initial velocity model is far from the true model and low-frequency information is lacking, we can still obtain good inversion results with the WMD method. Numerical anti-noise tests show that the adaptive multi-step inversion strategy for WMDFWI has a strong ability to resist Gaussian noise. The WMD method is promising for land seismic FWI, because it can reconstruct low-frequency information, lower the dominant frequency in the adjoint source, and resist noise.

  7. A multi-step kinetic model for substrate assimilation and bacterial growth: application to benzene biodegradation.

    PubMed

    Bordel, S; Muñoz, R; Díaz, L F; Villaverde, S

    2007-08-01

    A multi-step kinetic model based on the concept of the synthesizing unit (SU) was developed to describe benzene biodegradation by Pseudomonas putida F1. The model presented here considers substrate arrival rates to the SU rather than concentrations, and provided a reasonably good fit of the experimentally determined dynamics of both catechol and biomass concentrations. It is based on very general assumptions and can be applied to any process that accumulates metabolic intermediates. Conventional growth models considering a single step can be regarded as a particular case of this multi-step model. Despite the merits of this model, its applicability strongly depends on knowledge of the complex induction-repression and inhibition mechanisms governing the different catabolic steps of the degradation pathway, which in most cases are difficult to elucidate experimentally and/or to model mathematically. In this particular case, repression of benzene oxidation by catechol and self-inhibition of catechol transformation were experimentally confirmed and considered in the simulation, resulting in a good fit (relative average error of 6%) to the experimental data. (c) 2007 Wiley Periodicals, Inc.

  8. MS-DOCK: accurate multiple conformation generator and rigid docking protocol for multi-step virtual ligand screening.

    PubMed

    Sauton, Nicolas; Lagorce, David; Villoutreix, Bruno O; Miteva, Maria A

    2008-04-10

    The number of protein targets with a known or predicted three-dimensional structure and of drug-like chemical compounds is growing rapidly, and so is the need for new therapeutic compounds or chemical probes. Performing flexible structure-based virtual screening computations on thousands of targets with millions of molecules is intractable for most laboratories, nor is it indeed desirable. Since shape complementarity is of primary importance for most protein-ligand interactions, we have developed a tool/protocol based on rigid-body docking to select compounds that fit well into binding sites. Here we present an efficient multiple-conformation rigid-body docking approach, MS-DOCK, which is based on the program DOCK. This approach can be used as the first step of a multi-stage docking/scoring protocol. First, we developed and validated the Multiconf-DOCK tool, which generates several conformers per input ligand. Then, each generated conformer (bioactives and 37970 decoys) was docked rigidly using DOCK6 with our optimized protocol into seven different receptor binding sites. MS-DOCK was able to significantly reduce the size of the initial input library for all seven targets, thereby facilitating subsequent, more CPU-demanding flexible docking procedures. MS-DOCK can easily be used for the generation of multi-conformer libraries and for shape-based filtering within a multi-step structure-based screening protocol in order to shorten computation times.

  9. SMG: Fast scalable greedy algorithm for influence maximization in social networks

    NASA Astrophysics Data System (ADS)

    Heidari, Mehdi; Asadpour, Masoud; Faili, Hesham

    2015-02-01

    Influence maximization is the problem of finding the k most influential nodes in a social network. Much work has been done in two different categories: greedy approaches and heuristic approaches. The greedy approaches achieve better influence spread but lower scalability on large networks. The heuristic approaches are scalable and fast, but not for all types of networks. Improving the scalability of the greedy approach is still an open and hot issue. In this work we present a fast greedy algorithm called State Machine Greedy (SMG) that improves on existing algorithms by reducing calculations in two parts: (1) counting the traversing nodes in the estimate-propagation procedure, and (2) Monte-Carlo graph construction in the simulation of diffusion. The results show that our method yields a large speedup over the existing greedy approaches.
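
    For reference, the plain greedy baseline that such methods accelerate: repeatedly add the node with the largest Monte-Carlo-estimated spread under an independent cascade model (a sketch; SMG's state-machine bookkeeping is not shown):

    ```python
    import random

    def simulate_ic(graph, seeds, p, rng):
        """One independent-cascade run; returns the number of activated nodes."""
        active, frontier = set(seeds), list(seeds)
        while frontier:
            nxt = []
            for u in frontier:
                for v in graph.get(u, ()):
                    if v not in active and rng.random() < p:
                        active.add(v)
                        nxt.append(v)
            frontier = nxt
        return len(active)

    def greedy_im(graph, k, p=0.1, runs=200, rng=random):
        """Plain greedy influence maximization: add, one node at a time, the
        node whose inclusion maximizes the Monte-Carlo-estimated spread."""
        nodes = set(graph) | {v for vs in graph.values() for v in vs}
        seeds = []
        for _ in range(k):
            def gain(u):
                return sum(simulate_ic(graph, seeds + [u], p, rng)
                           for _ in range(runs)) / runs
            seeds.append(max(nodes - set(seeds), key=gain))
        return seeds

    random.seed(4)
    g = {0: [1, 2, 3], 1: [4], 2: [4, 5], 3: [6], 4: [7], 5: [7], 6: [8]}
    print(greedy_im(g, k=2))
    ```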

  10. Efficient Monte Carlo and greedy heuristic for the inference of stochastic block models

    NASA Astrophysics Data System (ADS)

    Peixoto, Tiago P.

    2014-01-01

    We present an efficient algorithm for the inference of stochastic block models in large networks. The algorithm can be used as an optimized Markov chain Monte Carlo (MCMC) method, with a fast mixing time and a much reduced susceptibility to getting trapped in metastable states, or as a greedy agglomerative heuristic, with an almost linear O(N ln² N) complexity, where N is the number of nodes in the network, independent of the number of blocks being inferred. We show that the heuristic is capable of delivering results which are indistinguishable from the more exact and numerically expensive MCMC method in many artificial and empirical networks, despite being much faster. The method is entirely unbiased towards any specific mixing pattern, and in particular it does not favor assortative community structures.

  11. Variation of nanopore diameter along porous anodic alumina channels by multi-step anodization.

    PubMed

    Lee, Kwang Hong; Lim, Xin Yuan; Wai, Kah Wing; Romanato, Filippo; Wong, Chee Cheong

    2011-02-01

    In order to form tapered nanocapillaries, we investigated a method to vary the nanopore diameter along porous anodic alumina (PAA) channels using multi-step anodization. By anodizing the aluminum in either a single acid (H3PO4) or multiple acids (H2SO4, oxalic acid and H3PO4) with increasing or decreasing voltage, the diameter of the nanopore along the PAA channel can be varied systematically in correspondence with the applied voltages. The pore size along the channel can be enlarged or shrunk in the range of 20 nm to 200 nm. Structural engineering of the template along the film growth direction can be achieved by deliberately designing a suitable voltage and electrolyte together with the anodization time.

  12. Conjugate symplecticity of second-order linear multi-step methods

    NASA Astrophysics Data System (ADS)

    Feng, Quan-Dong; Jiao, Yan-Dong; Tang, Yi-Fa

    2007-06-01

    We review the two different approaches to the symplecticity of linear multi-step methods (LMSMs), by Eirola and Sanz-Serna, Ge and Feng, and by Feng and Tang, Hairer and Leone, respectively, and give a numerical example comparing these two approaches. We prove that in the conjugate relation Ψ = B ∘ Φ ∘ B^{-1}, with Φ and Ψ being LMSMs, if Φ is symplectic, then the B-series error expansions of Φ, Ψ and B are equal to those of the trapezoid, mid-point and Euler forward schemes up to a parameter θ (completely the same when θ = 1), respectively; this also partially solves a problem due to Hairer. In particular, we indicate that the second-order symmetric leap-frog scheme Z_2 = Z_0 + 2τ J^{-1} ∇H(Z_1) cannot be conjugate-symplectic via another LMSM.

  13. The solution of Parrondo’s games with multi-step jumps

    NASA Astrophysics Data System (ADS)

    Saakian, David B.

    2016-04-01

    We consider the general case of Parrondo’s games, where there is a finite probability of staying in the current state as well as multi-step jumps. We introduce a modification of the model: the transition probabilities between different games depend on the choice of the game in the previous round. We calculate the rate of capital growth as well as the variance of the distribution, following large deviation theory. The modified model allows higher capital growth rates than standard Parrondo games for the range of parameters considered in the key articles about these games, and positive capital growth is possible for a much wider regime of parameters of the model.
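
    For orientation, the classic single-step, two-game Parrondo setup that this work generalizes (standard textbook parameters; the paper's staying probabilities and multi-step jumps are not modeled here):

    ```python
    import random

    def play(game, capital, eps, rng):
        """Classic Parrondo games: A is a slightly losing coin flip; B uses a
        bad coin when capital is divisible by 3 and a good coin otherwise."""
        if game == "A":
            p = 0.5 - eps
        else:
            p = (0.10 - eps) if capital % 3 == 0 else (0.75 - eps)
        return capital + (1 if rng.random() < p else -1)

    def average_gain(strategy, rounds=100_000, eps=0.005, seed=5):
        rng = random.Random(seed)
        capital = 0
        for _ in range(rounds):
            capital = play(strategy(rng), capital, eps, rng)
        return capital / rounds

    print(average_gain(lambda r: "A"))             # typically < 0: A alone loses
    print(average_gain(lambda r: "B"))             # typically < 0: B alone loses
    print(average_gain(lambda r: r.choice("AB")))  # > 0: random mixing wins
    ```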

  14. Real-Time Multi-Step View Reconstruction for a Virtual Teleconference System

    NASA Astrophysics Data System (ADS)

    Lei, B. J.; Hendriks, E. A.

    2002-12-01

    We propose a real-time multi-step view reconstruction algorithm and we tune its implementation to a virtual teleconference application. Theoretical motivations and practical implementation issues of the algorithm are detailed. The proposed algorithm can be used to reconstruct novel views at arbitrary poses (position and orientation) in a way that is geometrically valid. The algorithm is applied to a virtual teleconference system. In this system, we show that it can provide high-quality nearby virtual views that are comparable with the real perceived view. We experimentally show that, due to the modular approach, a real-time implementation is feasible. Finally, it is proved that it is possible to seamlessly integrate the proposed view reconstruction approach with other parts of the teleconference system. This integration can speed up the virtual view reconstruction.

  15. A Multi-Step Assessment Scheme for Seismic Network Site Selection in Densely Populated Areas

    NASA Astrophysics Data System (ADS)

    Plenkers, Katrin; Husen, Stephan; Kraft, Toni

    2015-10-01

    We developed a multi-step assessment scheme for improved site selection during seismic network installation in densely populated areas. Site selection is a complex process where different aspects (seismic background noise, geology, and financing) have to be taken into account. In order to improve this process, we developed a step-wise approach that allows quantifying the quality of a site by using, in addition to expert judgement and test measurements, two weighting functions as well as reference stations. Our approach ensures that the recording quality aimed for is reached and makes different sites quantitatively comparable to each other. Last but not least, it is an easy way to document the decision process, because all relevant parameters are listed, quantified, and weighted.

  16. Cross-cultural adaptation of instruments assessing breastfeeding determinants: a multi-step approach

    PubMed Central

    2014-01-01

    Background Cross-cultural adaptation is a necessary process for effectively using existing instruments in other cultural and language settings. The process of cross-culturally adapting existing instruments, including their translation, is considered a critical step in establishing a meaningful instrument for use in another setting. Using a multi-step approach is considered best practice for achieving cultural and semantic equivalence of the adapted version. We aimed to ensure the content validity of our instruments in the cultural context of KwaZulu-Natal, South Africa. Methods The Iowa Infant Feeding Attitudes Scale, the Breastfeeding Self-Efficacy Scale-Short Form and additional items comprise our consolidated instrument, which was cross-culturally adapted using a multi-step approach during August 2012. Cross-cultural adaptation was achieved through steps to maintain content validity and attain semantic equivalence in the target version. Specifically, Lynn’s recommendation to apply an item-level content validity index score was followed. The revised instrument was translated and back-translated. To ensure semantic equivalence, Brislin’s back-translation approach was utilized, followed by a committee review to address any discrepancies that emerged from translation. Results Our consolidated instrument was adapted to be culturally relevant and translated to yield more reliable and valid results for use in our larger research study to measure infant feeding determinants effectively in our target cultural context. Conclusions Undertaking rigorous steps to effectively ensure cross-cultural adaptation increases our confidence that the conclusions we make based on our self-report instrument(s) will be stronger. In this way, our aim to achieve strong cross-cultural adaptation of our consolidated instruments was achieved while also providing a clear framework for other researchers choosing to utilize existing instruments for work in other cultural, geographic and population

  17. A Dimensionality Reduction-Based Multi-Step Clustering Method for Robust Vessel Trajectory Analysis.

    PubMed

    Li, Huanhuan; Liu, Jingxian; Liu, Ryan Wen; Xiong, Naixue; Wu, Kefeng; Kim, Tai-Hoon

    2017-08-04

    The Shipboard Automatic Identification System (AIS) is crucial for navigation safety and maritime surveillance, and data mining and pattern analysis of AIS information have attracted considerable attention in terms of both basic research and practical applications. Clustering of spatio-temporal AIS trajectories can be used to identify abnormal patterns and mine customary route data for transportation safety; the capacities of navigation safety and maritime traffic monitoring could thus be enhanced correspondingly. However, trajectory clustering is often sensitive to undesirable outliers and is essentially more complex than traditional point clustering. To overcome this limitation, a multi-step trajectory clustering method is proposed in this paper for robust AIS trajectory clustering. In the first step, Dynamic Time Warping (DTW), a similarity measurement method, is introduced to measure the distances between different trajectories. The calculated distances, inversely proportional to the similarities, constitute a distance matrix in the second step. Principal Component Analysis (PCA), a widely used dimensionality reduction method, is then exploited to decompose the obtained distance matrix. In particular, the top k principal components with a cumulative contribution rate above 95% are extracted by PCA, and the number of centers k is chosen accordingly. The k centers are found by an improved automatic center-selection algorithm. In the last step, the improved center clustering algorithm with k clusters is applied to the distance matrix to obtain the final AIS trajectory clustering results. To improve the accuracy of the proposed multi-step clustering algorithm, an automatic algorithm for choosing the k clusters is developed according to the similarity distance. Numerous experiments on realistic AIS trajectory datasets in the bridge area waterway and Mississippi River have been implemented to compare our proposed method with
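
    The pipeline described above (pairwise DTW distances assembled into a matrix, then PCA with a 95% cumulative-contribution rule to suggest k) can be sketched compactly. The following is a minimal illustration, not the authors' implementation: the toy random-walk tracks and the use of the retained component count as k are assumptions, and the paper's improved center-selection and center-clustering steps are not reproduced.

```python
import numpy as np

def dtw(a, b):
    """Dynamic Time Warping distance between two trajectories (n x 2 arrays)."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# Toy random-walk tracks stand in for AIS trajectories (an assumption).
rng = np.random.default_rng(0)
tracks = [np.cumsum(rng.normal(size=(50, 2)), axis=0) for _ in range(8)]

# Steps 1-2: pairwise DTW distances assembled into a distance matrix.
N = len(tracks)
dist = np.zeros((N, N))
for i in range(N):
    for j in range(i + 1, N):
        dist[i, j] = dist[j, i] = dtw(tracks[i], tracks[j])

# Step 3: PCA on the distance matrix; the number of components needed to
# reach a 95% cumulative contribution rate suggests the cluster count k.
centered = dist - dist.mean(axis=0)
s = np.linalg.svd(centered, compute_uv=False)
ratio = s**2 / np.sum(s**2)
k = int(np.searchsorted(np.cumsum(ratio), 0.95)) + 1
print("suggested number of clusters k =", k)
```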

  18. A Dimensionality Reduction-Based Multi-Step Clustering Method for Robust Vessel Trajectory Analysis

    PubMed Central

    Liu, Jingxian; Wu, Kefeng

    2017-01-01

    The Shipboard Automatic Identification System (AIS) is crucial for navigation safety and maritime surveillance, and data mining and pattern analysis of AIS information have attracted considerable attention in terms of both basic research and practical applications. Clustering of spatio-temporal AIS trajectories can be used to identify abnormal patterns and mine customary route data for transportation safety; the capacities of navigation safety and maritime traffic monitoring could thus be enhanced correspondingly. However, trajectory clustering is often sensitive to undesirable outliers and is essentially more complex than traditional point clustering. To overcome this limitation, a multi-step trajectory clustering method is proposed in this paper for robust AIS trajectory clustering. In the first step, Dynamic Time Warping (DTW), a similarity measurement method, is introduced to measure the distances between different trajectories. The calculated distances, inversely proportional to the similarities, constitute a distance matrix in the second step. Principal Component Analysis (PCA), a widely used dimensionality reduction method, is then exploited to decompose the obtained distance matrix. In particular, the top k principal components with a cumulative contribution rate above 95% are extracted by PCA, and the number of centers k is chosen accordingly. The k centers are found by an improved automatic center-selection algorithm. In the last step, the improved center clustering algorithm with k clusters is applied to the distance matrix to obtain the final AIS trajectory clustering results. To improve the accuracy of the proposed multi-step clustering algorithm, an automatic algorithm for choosing the k clusters is developed according to the similarity distance. Numerous experiments on realistic AIS trajectory datasets in the bridge area waterway and Mississippi River have been implemented to compare our proposed method with

  19. Star sub-pixel centroid calculation based on multi-step minimum energy difference method

    NASA Astrophysics Data System (ADS)

    Wang, Duo; Han, YanLi; Sun, Tengfei

    2013-09-01

    The star centroid plays a vital role in celestial navigation. Star images acquired during daytime have a low SNR because of the strong sky background, and the star targets are nearly submerged in the background, which makes centroid localization difficult. Traditional methods such as the moment method and the weighted centroid method are simple but have large errors, especially at low SNR; Gaussian fitting has high positioning accuracy but is computationally expensive. Based on an analysis of the energy distribution in star images, a localization method for star target centroids based on a multi-step minimum energy difference is proposed. The method first uses linear superposition to narrow down the centroid area and then subdivides the pixels in this area by interpolation. Exploiting the symmetry of the stellar energy distribution, each pixel is tentatively assumed to be the centroid, and the difference between the sums of the energy on the two sides of it along a symmetric direction (in this paper, the transverse and longitudinal directions), taken over an equal step length (9 in this paper, chosen according to the imaging conditions), is computed; the centroid position along that direction is the one at which the minimum difference appears, and the other directions are handled in the same way. Validation on simulated star images and comparison with several traditional methods show that the positioning accuracy of the method reaches 0.001 pixel and that it performs well at low SNR. The method was also applied to a star map acquired in the near-infrared band at a fixed observation site during daytime; comparison of the results with the known positions of the stars shows that the multi-step minimum energy difference method achieves a better
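
    A minimal sketch of the symmetric energy-difference idea described above, assuming an axis-wise search on an interpolated profile; the collapse onto the two axes, the interpolation factor, and the interpretation of the step length 9 (here, in interpolated samples) are illustrative assumptions rather than the paper's exact procedure.

```python
import numpy as np

def axis_centroid(profile, half_len):
    """Index minimizing |left energy sum - right energy sum| around it."""
    best, best_diff = 0, np.inf
    for c in range(half_len, len(profile) - half_len):
        left = profile[c - half_len:c].sum()
        right = profile[c + 1:c + 1 + half_len].sum()
        if abs(left - right) < best_diff:
            best, best_diff = c, abs(left - right)
    return best

def centroid(img, upsample=10, half_len=9):
    # Linear superposition: collapse the image onto each axis.
    rows, cols = img.sum(axis=1), img.sum(axis=0)
    # Subdivide pixels by interpolation before the fine search.
    fr = np.interp(np.arange(0, len(rows) - 1, 1 / upsample),
                   np.arange(len(rows)), rows)
    fc = np.interp(np.arange(0, len(cols) - 1, 1 / upsample),
                   np.arange(len(cols)), cols)
    return (axis_centroid(fr, half_len) / upsample,
            axis_centroid(fc, half_len) / upsample)

# Synthetic star centered at (12.3, 20.7) on a weak noisy background.
y, x = np.mgrid[0:32, 0:32]
rng = np.random.default_rng(5)
img = np.exp(-((y - 12.3)**2 + (x - 20.7)**2) / 4.0) + 0.01 * rng.random((32, 32))
print(centroid(img))   # approximately (12.3, 20.7)
```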

  20. Influence of multi-step washing using Na2EDTA, oxalic acid and phosphoric acid on metal fractionation and spectroscopy characteristics from contaminated soil.

    PubMed

    Wei, Meng; Chen, Jiajun

    2016-11-01

    A multi-step soil washing test using a typical chelating agent (Na2EDTA), an organic acid (oxalic acid), and an inorganic weak acid (phosphoric acid) was conducted to remediate soil contaminated with heavy metals near an arsenic mining area. The aim of the test was to improve the heavy metal removal efficiency and to investigate its influence on metal fractionation and the spectroscopic characteristics of the contaminated soil. XRD and FT-IR spectral analyses indicated that the order of the washing steps was critical for the removal efficiencies of the metal fractions and for their bioavailability and potential mobility, owing to the different dissolution levels of the mineral fractions and the inter-transformation of metal fractions. The optimal soil washing options were identified as the Na2EDTA-phosphoric-oxalic acid (EPO) and phosphoric-oxalic acid-Na2EDTA (POE) sequences because of their high removal efficiencies (approximately 45 % for arsenic and 88 % for cadmium) and the minimal harmful effects, as determined by the mobility and bioavailability of the remaining heavy metals based on the metal stability (I_R) and the modified redistribution index ([Formula: see text]).

  1. Near-Oracle Performance Guarantees for Greedy-Like Methods

    NASA Astrophysics Data System (ADS)

    Giryes, Raja; Elad, Michael

    2010-09-01

    In this paper, an analysis of greedy-like methods is presented. These methods include the Subspace Pursuit (SP), Compressive Sampling Matching Pursuit (CoSaMP) and Iterative Hard Thresholding (IHT) algorithms. The proposed analysis is based on the Restricted Isometry Property (RIP), establishing a near-oracle performance guarantee for each of these techniques. The signal is assumed to be corrupted by additive random white Gaussian noise and to have a K-sparse representation with respect to a known dictionary D. The results for the three algorithms are of the same type but use different constants and impose different requirements on the cardinality of the sparse representation.
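
    Of the three algorithms analyzed, IHT is the simplest to sketch. The following minimal NumPy version follows the abstract's model of a noisy observation of a K-sparse representation over a known dictionary D; the step-size rule, dimensions and noise level are illustrative assumptions.

```python
import numpy as np

def iht(y, D, K, iters=200, step=None):
    """Iterative Hard Thresholding: gradient step, then keep the K largest entries."""
    if step is None:
        step = 1.0 / np.linalg.norm(D, ord=2) ** 2   # safe gradient step size
    x = np.zeros(D.shape[1])
    for _ in range(iters):
        x = x + step * D.T @ (y - D @ x)             # gradient update
        small = np.argsort(np.abs(x))[:-K]           # all but the K largest
        x[small] = 0.0                               # hard thresholding H_K(x)
    return x

rng = np.random.default_rng(1)
n, m, K = 64, 256, 5
D = rng.normal(size=(n, m)) / np.sqrt(n)             # random dictionary
x_true = np.zeros(m)
x_true[rng.choice(m, size=K, replace=False)] = rng.normal(size=K)
y = D @ x_true + 0.01 * rng.normal(size=n)           # noisy K-sparse observation
print("recovery error:", np.linalg.norm(iht(y, D, K) - x_true))
```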

  2. Deep greedy learning under thermal variability in full diurnal cycles

    NASA Astrophysics Data System (ADS)

    Rauss, Patrick; Rosario, Dalton

    2017-08-01

    We study the generalization and scalability behavior of a deep belief network (DBN) applied to a challenging long-wave infrared hyperspectral dataset, consisting of radiance from several manmade and natural materials within a fixed site located 500 m from an observation tower. The collections cover multiple full diurnal cycles and include different atmospheric conditions. Using complementary priors, a DBN uses a greedy algorithm that can learn deep, directed belief networks one layer at a time, provided the top two layers form an undirected associative memory. The greedy algorithm initializes a slower learning procedure, which fine-tunes the weights using a contrastive version of the wake-sleep algorithm. After fine-tuning, a network with three hidden layers forms a very good generative model of the joint distribution of spectral data and their labels, despite significant data variability between and within classes due to environmental and temperature variation occurring within and between full diurnal cycles. We argue, however, that more questions than answers are raised regarding the generalization capacity of these deep nets through experiments aimed at investigating their training and augmented learning behavior.
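
    The greedy layer-wise stage itself is easy to illustrate: each layer is trained on the representation produced by the layers below it. The sketch below uses scikit-learn's BernoulliRBM as a stand-in building block and omits the contrastive wake-sleep fine-tuning described above; the toy binary data and layer sizes are assumptions.

```python
import numpy as np
from sklearn.neural_network import BernoulliRBM

# Toy binary data stand in for (preprocessed) hyperspectral samples.
rng = np.random.default_rng(0)
X = (rng.random((200, 64)) > 0.5).astype(float)

# Greedy layer-wise stage: train one RBM at a time, each on the hidden
# activations of the previous layer.
layers, inp = [], X
for n_hidden in (32, 16, 8):
    rbm = BernoulliRBM(n_components=n_hidden, learning_rate=0.05,
                       n_iter=20, random_state=0)
    rbm.fit(inp)
    layers.append(rbm)
    inp = rbm.transform(inp)      # representation fed to the next layer

print("deepest representation shape:", inp.shape)   # (200, 8)
```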

  3. Multi-Step Bidirectional NDR Characteristics in Si/Si1-xGex/Si DHBTs and Their Temperature Dependence

    NASA Astrophysics Data System (ADS)

    Xu, D. X.; Shen, G. D.; Willander, M.; Hansson, G. V.

    1988-11-01

    Novel bidirectional negative differential resistance (NDR) phenomena have been observed at room temperature in strained-base n-Si/p-Si1-xGex/n-Si double heterojunction bipolar transistors (DHBTs). A strong and symmetric bidirectional NDR modulated by the base bias, together with a multi-step characteristic in the collector current I_C vs. emitter-collector bias voltage V_CE, was obtained in devices with a very thin base. The temperature dependence of the NDR and the multi-step I_C-V_CE characteristics has been measured to identify the possible transport mechanism. The physical origins of these phenomena are discussed.

  4. A Novel Molten Salt Reactor Concept to Implement the Multi-Step Time-Scheduled Transmutation Strategy

    SciTech Connect

    Csom, Gyula; Feher, Sandor; Szieberthj, Mate

    2002-07-01

    Nowadays the molten salt reactor (MSR) concept seems to be reviving as one of the most promising systems for the realization of transmutation. In molten salt reactors and subcritical systems, the fuel and the material to be transmuted circulate dissolved in a molten salt. The main advantage of this reactor type is the possibility of continuous feed and reprocessing of the fuel. In the present paper a novel molten salt reactor concept is introduced and its transmutation capabilities are studied. The goal is the development of a transmutation technique, along with a device implementing it, that yields higher transmutation efficiencies than the known procedures and thus results in radioactive waste whose load on the environment is reduced in both magnitude and duration. The procedure is multi-step time-scheduled transmutation, in which transformation is done in several consecutive steps of different neutron flux and spectrum. In the new MSR concept, named 'multi-region' MSR (MRMSR), the primary circuit is made up of a few separate loops, in which salt-fuel mixtures of different compositions are circulated. The loop sections constituting the core region are coupled only neutronically and thermally. This new concept makes possible the utilization of the spatial dependence of the spectrum as well as the advantageous features of liquid fuel, such as the possibility of continuous chemical processing. In order to compare a 'conventional' MSR and the proposed MRMSR in terms of efficiency, preliminary calculational results are shown. Further calculations to find the optimal implementation of this new concept and to highlight its other advantageous features are ongoing. (authors)

  5. A multi-step transversal linearization (MTL) method in non-linear structural dynamics

    NASA Astrophysics Data System (ADS)

    Roy, D.; Kumar, Rajesh

    2005-10-01

    An implicit family of multi-step transversal linearization (MTL) methods is proposed for efficient and numerically stable integration of nonlinear oscillators of interest in structural dynamics. The presently developed method is a multi-step extension and further generalization of the locally transversal linearization (LTL) method proposed earlier by Roy (Proceedings of the Royal Society of London A 457 (2001) 539-566), Roy and Ramachandra (Journal of Sound and Vibration 41 (2001a) 653-679; International Journal for Numerical Methods in Engineering 51 (2001b) 203-224) and Roy (International Journal for Numerical Methods in Engineering 61 (2004) 764). The MTL-based linearization is achieved through a non-unique replacement of the nonlinear part of the vector field by a conditionally linear interpolating expansion of known accuracy, whose coefficients contain the discretized state variables defined at a set of grid points. In the process, the nonlinear part of the vector field becomes a conditionally determinable equivalent forcing function, and the MTL-based linearized differential equations become explicitly integrable. Based on the linearized solution, a set of algebraic constraint equations is formed so that transversal intersections of the linearized and nonlinearized solution manifolds occur at the multiple grid points. The discretized state vectors are thus found as the zeros of the constraint equations. Simple error estimates for the displacement and velocity vectors are provided and, in particular, it is shown that the formal accuracy of the MTL methods as a function of the time step-size depends only on the error of replacement of the nonlinear part of the vector field. Presently, only two different polynomial-based interpolation schemes are employed for transversal linearization, viz. Taylor-like interpolation and Lagrangian interpolation. While the Taylor-like interpolation leads to numerical ill-conditioning as the order of
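
    The construction can be restated compactly in the notation of a generic first-order system; the symbols below are assumed for illustration and are not necessarily the paper's.

```latex
% For a nonlinear oscillator written in state form \dot{z} = L z + N(z, t),
% the MTL step replaces the nonlinear part over one multi-step window
% t_1 < t_2 < ... < t_m by an interpolating expansion through the (as yet
% unknown) grid states z_j = z(t_j):
\[
\dot{z} \;\approx\; L\,z \;+\; \sum_{j=1}^{m} \phi_j(t)\, N(z_j, t_j),
\]
% where the \phi_j are Taylor-like or Lagrangian basis polynomials. The
% right-hand side is now conditionally linear, so it integrates explicitly;
% the z_j are then recovered as zeros of the algebraic constraints that force
% the linearized and nonlinear solution manifolds to intersect transversally
% at the grid points.
```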

  6. Multi-step process for concentrating magnetic particles in waste sludges

    DOEpatents

    Watson, J.L.

    1990-07-10

    This invention involves a multi-step, multi-force process for dewatering sludges which have high concentrations of magnetic particles, such as waste sludges generated during steelmaking. This series of processing steps involves (1) mixing a chemical flocculating agent with the sludge; (2) allowing the particles to aggregate under non-turbulent conditions; (3) subjecting the mixture to a magnetic field which will pull the magnetic aggregates in a selected direction, causing them to form a compacted sludge; (4) preferably, decanting the clarified liquid from the compacted sludge; and (5) using filtration to convert the compacted sludge into a cake having a very high solids content. Steps 2 and 3 should be performed simultaneously. This reduces the treatment time and increases the extent of flocculation and the effectiveness of the process. As partially formed aggregates with active flocculating groups are pulled through the mixture by the magnetic field, they will contact other particles and form larger aggregates. This process can increase the solids concentration of steelmaking sludges in an efficient and economic manner, thereby accomplishing either of two goals: (a) it can convert hazardous wastes into economic resources for recycling as furnace feed material, or (b) it can dramatically reduce the volume of waste material which must be disposed. 7 figs.

  7. Multi-step process for concentrating magnetic particles in waste sludges

    DOEpatents

    Watson, John L.

    1990-01-01

    This invention involves a multi-step, multi-force process for dewatering sludges which have high concentrations of magnetic particles, such as waste sludges generated during steelmaking. This series of processing steps involves (1) mixing a chemical flocculating agent with the sludge; (2) allowing the particles to aggregate under non-turbulent conditions; (3) subjecting the mixture to a magnetic field which will pull the magnetic aggregates in a selected direction, causing them to form a compacted sludge; (4) preferably, decanting the clarified liquid from the compacted sludge; and (5) using filtration to convert the compacted sludge into a cake having a very high solids content. Steps 2 and 3 should be performed simultaneously. This reduces the treatment time and increases the extent of flocculation and the effectiveness of the process. As partially formed aggregates with active flocculating groups are pulled through the mixture by the magnetic field, they will contact other particles and form larger aggregates. This process can increase the solids concentration of steelmaking sludges in an efficient and economic manner, thereby accomplishing either of two goals: (a) it can convert hazardous wastes into economic resources for recycling as furnace feed material, or (b) it can dramatically reduce the volume of waste material which must be disposed.

  8. Mid-infrared supercontinuum generation in chalcogenide multi-step index fibers with normal chromatic dispersion

    NASA Astrophysics Data System (ADS)

    Nagasaka, K.; Tong, Hoang Tuan; Liu, Lai; Matsumoto, Morio; Tezuka, Hiroshige; Suzuki, Takenobu; Ohishi, Yasutake

    2017-02-01

    We experimentally demonstrate mid-infrared supercontinuum (SC) generation in chalcogenide multi-step index fibers (MSIF) pumped by a femtosecond laser. The fabricated chalcogenide MSIF is composed of a high refractive index core (C1) in the center, which is enclosed by a lower refractive index core layer (C2) and an outer cladding. This fiber structure is advantageous for tailoring the chromatic dispersion with greater freedom and for keeping the effective mode area small at long wavelengths. The high refractive index core, low refractive index core, and outer cladding materials are As2Se3, AsSe2 and As2S5, respectively. When the diameters of C1 and C2 are 7.8 and 30 μm, respectively, the zero-dispersion wavelength (ZDW) of the fiber is 12.5 μm. The chromatic dispersion profile is near-zero and flattened within +/-20 ps/km/nm over the wavelength range from 4 to 17 μm, and a broad normal dispersion region is obtained at wavelengths shorter than the ZDW. In practice, a 2.8 cm long fiber is pumped at 10 μm using a femtosecond laser with a pulse width of 200 fs. SC generation extending from 2 to 14 μm is obtained, with most of the spectrum lying in the normal dispersion region of the fiber. These results are promising for highly coherent mid-infrared SC generation.

  9. Shutdown Dose Rate Analysis Using the Multi-Step CADIS Method

    SciTech Connect

    Ibrahim, Ahmad M.; Peplow, Douglas E.; Peterson, Joshua L.; Grove, Robert E.

    2015-01-01

    The Multi-Step Consistent Adjoint Driven Importance Sampling (MS-CADIS) hybrid Monte Carlo (MC)/deterministic radiation transport method was proposed to speed up the shutdown dose rate (SDDR) neutron MC calculation using an importance function that represents the neutron importance to the final SDDR. This work applied the MS-CADIS method to the ITER SDDR benchmark problem. The MS-CADIS method was also used to calculate the SDDR uncertainty resulting from uncertainties in the MC neutron calculation and to determine the degree of undersampling in SDDR calculations because of the limited ability of the MC method to tally detailed spatial and energy distributions. The analysis that used the ITER benchmark problem compared the efficiency of the MS-CADIS method to the traditional approach of using global MC variance reduction techniques for speeding up SDDR neutron MC calculation. Compared to the standard Forward-Weighted-CADIS (FW-CADIS) method, the MS-CADIS method increased the efficiency of the SDDR neutron MC calculation by 69%. The MS-CADIS method also increased the fraction of nonzero scoring mesh tally elements in the space-energy regions of high importance to the final SDDR.
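
    For context, the standard CADIS quantities that MS-CADIS generalizes can be written as follows; these are textbook forms with assumed notation, not equations taken from this paper.

```latex
% With phase-space coordinate P, true source q(P), and adjoint (importance)
% flux \phi^\dagger(P) computed for the response of interest, the response is
\[
R \;=\; \int q(P)\,\phi^\dagger(P)\, dP ,
\]
% and CADIS biases the source and sets the statistical weights consistently:
\[
\hat{q}(P) \;=\; \frac{q(P)\,\phi^\dagger(P)}{R}, \qquad
w(P) \;=\; \frac{R}{\phi^\dagger(P)} .
\]
% MS-CADIS differs in the choice of \phi^\dagger for the neutron step: it is
% constructed to represent importance to the final shutdown dose rate,
% i.e. the adjoint of the downstream activation/photon calculation.
```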

  10. Complex network analysis of brain functional connectivity under a multi-step cognitive task

    NASA Astrophysics Data System (ADS)

    Cai, Shi-Min; Chen, Wei; Liu, Dong-Bai; Tang, Ming; Chen, Xun

    2017-01-01

    Functional brain network has been widely studied to understand the relationship between brain organization and behavior. In this paper, we aim to explore the functional connectivity of brain network under a multi-step cognitive task involving consecutive behaviors, and further understand the effect of behaviors on the brain organization. The functional brain networks are constructed based on a high spatial and temporal resolution fMRI dataset and analyzed via complex network based approach. We find that at voxel level the functional brain network shows robust small-worldness and scale-free characteristics, while its assortativity and rich-club organization are slightly restricted to the order of behaviors performed. More interestingly, the functional connectivity of brain network in activated ROIs strongly correlates with behaviors and is obviously restricted to the order of behaviors performed. These empirical results suggest that the brain organization has the generic properties of small-worldness and scale-free characteristics, and its diverse functional connectivity emerging from activated ROIs is strongly driven by these behavioral activities via the plasticity of brain.
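
    The small-world claim is conventionally quantified by comparing clustering and characteristic path length against a degree-matched random graph. A minimal sketch follows; the Watts-Strogatz toy graph stands in for a thresholded voxel-level correlation network, which is an assumption for illustration.

```python
import networkx as nx

# Toy stand-in for a thresholded functional connectivity graph.
G = nx.watts_strogatz_graph(n=200, k=8, p=0.1, seed=1)

# Degree-matched Erdos-Renyi null model (keep its largest component).
R = nx.erdos_renyi_graph(n=200, p=8 / 199, seed=1)
if not nx.is_connected(R):
    R = R.subgraph(max(nx.connected_components(R), key=len)).copy()

C, C_rand = nx.average_clustering(G), nx.average_clustering(R)
L, L_rand = (nx.average_shortest_path_length(G),
             nx.average_shortest_path_length(R))

# sigma > 1: clustering well above random at comparable path length.
sigma = (C / C_rand) / (L / L_rand)
print(f"C={C:.3f}  L={L:.2f}  small-world sigma={sigma:.2f}")
```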

  11. Automating multi-step paper-based assays using integrated layering of reagents.

    PubMed

    Jahanshahi-Anbuhi, Sana; Kannan, Balamurali; Pennings, Kevin; Monsur Ali, M; Leung, Vincent; Giang, Karen; Wang, Jingyun; White, Dawn; Li, Yingfu; Pelton, Robert H; Brennan, John D; Filipe, Carlos D M

    2017-02-28

    We describe a versatile and simple method to perform sequential reactions on paper analytical devices by stacking dry pullulan films on paper, where each film contains one or more reagents or acts as a delay layer. Exposing the films to an aqueous solution of the analyte leads to sequential dissolution of the films in a temporally controlled manner followed by diffusive mixing of the reagents, so that sequential reactions can be performed. The films can be easily arranged for lateral flow assays or for spot tests (reactions take place sequentially in the z-direction). We have tested the general feasibility of the approach using three different model systems to demonstrate different capabilities: 1) pH ramping from low to high and high to low to demonstrate timing control; 2) rapid ready-to-use two-step Simon's assays on paper for detection of drugs of abuse utilizing a 2-layer stack containing two different reagents to demonstrate the ability to perform assays in the z-direction; and 3) sequential cell lysing and colorimetric detection of an intracellular bacterial enzyme, to demonstrate the ability of the method to perform sample preparation and analysis in the form of a spot assay. Overall, these studies demonstrate the potential of stacked pullulan films as useful components to enable multi-step assays on simple paper-based devices.

  12. Michaelis-Menten kinetics in shear flow: Similarity solutions for multi-step reactions.

    PubMed

    Ristenpart, W D; Stone, H A

    2012-03-01

    Models for chemical reaction kinetics typically assume well-mixed conditions, in which chemical compositions change in time but are uniform in space. In contrast, many biological and microfluidic systems of interest involve non-uniform flows where gradients in flow velocity dynamically alter the effective reaction volume. Here, we present a theoretical framework for characterizing multi-step reactions that occur when an enzyme or enzymatic substrate is released from a flat solid surface into a linear shear flow. Similarity solutions are developed for situations where the reactions are sufficiently slow compared to a convective time scale, allowing a regular perturbation approach to be employed. For the specific case of Michaelis-Menten reactions, we establish that the transversally averaged concentration of product scales with the distance x downstream as x^(5/3). We generalize the analysis to n-step reactions, and we discuss the implications for designing new microfluidic kinetic assays to probe the effect of flow on biochemical processes.
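
    In standard Michaelis-Menten notation (assumed here; the paper's symbols may differ), the kinetics and the quoted downstream scaling read:

```latex
% Local Michaelis-Menten production rate for enzyme concentration c_E and
% substrate concentration c_S:
\[
\left.\frac{\partial c_P}{\partial t}\right|_{\mathrm{rxn}}
  \;=\; \frac{k_{\mathrm{cat}}\, c_E\, c_S}{K_M + c_S} ,
\]
% and, for release into a linear shear flow u(y) = \dot{\gamma}\, y with the
% reaction slow relative to convection, the similarity analysis quoted above
% gives a transversally averaged product concentration growing downstream as
\[
\bar{c}_P(x) \;\sim\; x^{5/3} .
\]
```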

  13. Shutdown Dose Rate Analysis Using the Multi-Step CADIS Method

    DOE PAGES

    Ibrahim, Ahmad M.; Peplow, Douglas E.; Peterson, Joshua L.; ...

    2015-01-01

    The Multi-Step Consistent Adjoint Driven Importance Sampling (MS-CADIS) hybrid Monte Carlo (MC)/deterministic radiation transport method was proposed to speed up the shutdown dose rate (SDDR) neutron MC calculation using an importance function that represents the neutron importance to the final SDDR. This work applied the MS-CADIS method to the ITER SDDR benchmark problem. The MS-CADIS method was also used to calculate the SDDR uncertainty resulting from uncertainties in the MC neutron calculation and to determine the degree of undersampling in SDDR calculations because of the limited ability of the MC method to tally detailed spatial and energy distributions. The analysis that used the ITER benchmark problem compared the efficiency of the MS-CADIS method to the traditional approach of using global MC variance reduction techniques for speeding up SDDR neutron MC calculation. Compared to the standard Forward-Weighted-CADIS (FW-CADIS) method, the MS-CADIS method increased the efficiency of the SDDR neutron MC calculation by 69%. The MS-CADIS method also increased the fraction of nonzero scoring mesh tally elements in the space-energy regions of high importance to the final SDDR.

  14. Exact free vibration of multi-step Timoshenko beam system with several attachments

    NASA Astrophysics Data System (ADS)

    Farghaly, S. H.; El-Sayed, T. A.

    2016-05-01

    This paper deals with the analysis of the natural frequencies and mode shapes of an axially loaded multi-step Timoshenko beam combined system carrying several attachments. The influence of system design and the proposed sub-system non-dimensional parameters on the combined system characteristics is the major part of this investigation. The effects of material properties, rotary inertia and shear deformation of the beam system are included for each span. The end masses are elastically supported against rotation and translation at an offset point from the point of attachment. A sub-system having two degrees of freedom is located at the beam ends and at any of the intermediate stations and acts as a support and/or a suspension. The boundary conditions of the ordinary differential equation governing the lateral deflections and slope due to bending of the beam system, including the shear force term due to the sub-system, have been formulated. Exact global coefficient matrices for the combined modal frequencies, the modal shape and for the discrete sub-system have been derived. Based on these formulae, detailed parametric studies of the combined system are carried out. The applied mathematical model is valid for a wide range of applications, especially in the mechanical, naval and structural engineering fields.

  15. Stochastic online appointment scheduling of multi-step sequential procedures in nuclear medicine.

    PubMed

    Pérez, Eduardo; Ntaimo, Lewis; Malavé, César O; Bailey, Carla; McCormack, Peter

    2013-12-01

    The increased demand for medical diagnosis procedures has been recognized as one of the contributors to the rise of health care costs in the U.S. in the last few years. Nuclear medicine is a subspecialty of radiology that uses advanced technology and radiopharmaceuticals for the diagnosis and treatment of medical conditions. Procedures in nuclear medicine require the use of radiopharmaceuticals, are multi-step, and have to be performed under strict time window constraints. These characteristics make the scheduling of patients and resources in nuclear medicine challenging. In this work, we derive a stochastic online scheduling algorithm for patient and resource scheduling in nuclear medicine departments that takes into account the time constraints imposed by the decay of the radiopharmaceuticals and the stochastic nature of the system when scheduling patients. We report on a computational study of the new methodology applied to a real clinic, using both patient and clinic performance measures. The results show that, by improving the way limited resources are managed at the clinic, the new method schedules about 600 more patients per year on average than the scheduling policy previously used in practice. The new methodology finds the best start time and resources to be used for each appointment. Furthermore, it decreases patient waiting time for an appointment by about two days on average.

  16. Michaelis-Menten kinetics in shear flow: Similarity solutions for multi-step reactions

    PubMed Central

    Ristenpart, W. D.; Stone, H. A.

    2012-01-01

    Models for chemical reaction kinetics typically assume well-mixed conditions, in which chemical compositions change in time but are uniform in space. In contrast, many biological and microfluidic systems of interest involve non-uniform flows where gradients in flow velocity dynamically alter the effective reaction volume. Here, we present a theoretical framework for characterizing multi-step reactions that occur when an enzyme or enzymatic substrate is released from a flat solid surface into a linear shear flow. Similarity solutions are developed for situations where the reactions are sufficiently slow compared to a convective time scale, allowing a regular perturbation approach to be employed. For the specific case of Michaelis-Menten reactions, we establish that the transversally averaged concentration of product scales with the distance x downstream as x^(5/3). We generalize the analysis to n-step reactions, and we discuss the implications for designing new microfluidic kinetic assays to probe the effect of flow on biochemical processes. PMID:22662093

  17. Multi-step sequential batch two-phase anaerobic composting of food waste.

    PubMed

    Shin, H S; Han, S K; Song, Y C; Lee, C Y

    2001-03-01

    This study was conducted to evaluate a newly devised process called MUlti-step Sequential batch Two-phase Anaerobic Composting (MUSTAC). The MUSTAC process consisted of several leaching beds for hydrolysis, acidification and post-treatment, and a UASB reactor for methane recovery. This process for treating food waste was developed as a high-rate anaerobic composting technique based on the rate-limiting step approach. Rumen microorganisms were inoculated to improve the low efficiency of acidogenic fermentation. Both two-phase anaerobic digestion and sequential batch operation were used to control environmental constraints on anaerobic degradation. The MUSTAC process demonstrated excellent performance, achieving a large reduction in volatile solids (VS) (84.7%) and high methane conversion efficiency (84.4%) at high organic loading rates (10.8 kg VS m^-3 d^-1) with a short SRT (10 days). The methane yield was 0.27 m^3 kg^-1 VS, while the methane gas production rate was 2.27 m^3 m^-3 d^-1. The output from the post-treatment, which was produced in the same acidogenic fermenter without troublesome material transfer, could be used as a soil amendment. The main advantages of the MUSTAC process were simple operation and high efficiency. The MUSTAC process proved stable, reliable and effective in resource recovery as well as waste stabilization.

  18. Self-Regulated Strategy Development Instruction for Teaching Multi-Step Equations to Middle School Students Struggling in Math

    ERIC Educational Resources Information Center

    Cuenca-Carlino, Yojanna; Freeman-Green, Shaqwana; Stephenson, Grant W.; Hauth, Clara

    2016-01-01

    Six middle school students identified as having a specific learning disability or at risk for mathematical difficulties were taught how to solve multi-step equations by using the self-regulated strategy development (SRSD) model of instruction. A multiple-probe-across-pairs design was used to evaluate instructional effects. Instruction was provided…

  19. A Greedy Double Auction Mechanism for Grid Resource Allocation

    NASA Astrophysics Data System (ADS)

    Ding, Ding; Luo, Siwei; Gao, Zhan

    To improve resource utilization and satisfy more users, a Greedy Double Auction Mechanism (GDAM) is proposed to allocate resources in grid environments. GDAM trades resources at discriminatory prices instead of a uniform price, reflecting the variance in requirements for profits and quantities. Moreover, GDAM applies different auction rules to different cases: over-demand, over-supply, and equilibrium of demand and supply. As a new mechanism for grid resource allocation, GDAM is proved to be strategy-proof, economically efficient, weakly budget-balanced and individually rational. Simulation results also confirm that GDAM outperforms the traditional mechanism on both the total trade amount and the user satisfaction percentage, especially as more users are involved in the auction market.
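
    A minimal sketch of a greedy double-auction match with discriminatory (per-pair) pricing is shown below. The midpoint pricing rule and the handling of partial quantities are illustrative assumptions; GDAM's specific rules for the over-demand and over-supply cases are more involved.

```python
# Hedged sketch of a greedy double-auction match (illustrative only).
def greedy_double_auction(bids, asks):
    """bids/asks: lists of (agent_id, price, quantity). Returns trades at
    discriminatory prices (here: midpoint of each matched bid/ask pair)."""
    bids = sorted(bids, key=lambda b: -b[1])      # highest bidder first
    asks = sorted(asks, key=lambda a: a[1])       # cheapest seller first
    trades, i, j = [], 0, 0
    while i < len(bids) and j < len(asks) and bids[i][1] >= asks[j][1]:
        qty = min(bids[i][2], asks[j][2])
        price = (bids[i][1] + asks[j][1]) / 2     # per-pair (discriminatory) price
        trades.append((bids[i][0], asks[j][0], qty, price))
        bids[i] = (bids[i][0], bids[i][1], bids[i][2] - qty)
        asks[j] = (asks[j][0], asks[j][1], asks[j][2] - qty)
        if bids[i][2] == 0: i += 1
        if asks[j][2] == 0: j += 1
    return trades

print(greedy_double_auction(
    bids=[("u1", 10, 3), ("u2", 8, 2)],
    asks=[("p1", 6, 2), ("p2", 9, 4)]))
```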

  20. Greedy Successive Anchorization for Localizing Machine Type Communication Devices

    PubMed Central

    Imtiaz Ul Haq, Mian; Kim, Dongwoo

    2016-01-01

    Localization of machine type communication (MTC) devices is essential for various types of location-based applications. In this paper, we investigate a distributed localization problem in noisy networks, where an estimated position of blind MTC machines (BMs) is obtained by using noisy measurements of the distance between a BM and anchor machines (AMs). We allow positioned BMs also to work as anchors, referred to as virtual AMs (VAMs) in this paper. VAMs usually have greater position errors than (original) AMs and, if used as anchors, the error propagates through the whole network. However, VAMs are necessary, especially when many BMs are distributed over a large area with an insufficient number of AMs. To overcome the error propagation, we propose a greedy successive anchorization process (GSAP). A round of GSAP consists of two consecutive steps. In the first step, a greedy selection of anchors among AMs and VAMs is done, by which GSAP considers only the three anchors that most pertain to localization accuracy. In the second step, each BM that can select three anchors in its neighborhood determines its location with a proposed distributed localization algorithm. Iterative rounds of GSAP terminate when every BM in the network finds its location. To examine the performance of GSAP, a root mean square error (RMSE) metric is used and the corresponding Cramér-Rao lower bound (CRLB) is provided. By numerical investigation, the RMSE performance of GSAP is shown to be better than that of existing localization methods with and without an anchor selection method, and mostly close to the CRLB. PMID:27983576
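
    One GSAP-style round can be sketched as greedy anchor selection followed by a linearized least-squares position fix; the error-score bookkeeping for virtual anchors and the toy geometry are assumptions for illustration, not the paper's exact algorithm.

```python
import numpy as np

def trilaterate(anchors, dists):
    """Linearized least-squares 2-D position fix from three or more anchors."""
    (x1, y1), d1 = anchors[0], dists[0]
    A, b = [], []
    for (xi, yi), di in zip(anchors[1:], dists[1:]):
        A.append([2 * (xi - x1), 2 * (yi - y1)])
        b.append(d1 ** 2 - di ** 2 + xi ** 2 - x1 ** 2 + yi ** 2 - y1 ** 2)
    sol, *_ = np.linalg.lstsq(np.array(A, float), np.array(b, float), rcond=None)
    return sol

rng = np.random.default_rng(2)
# Anchor pool: (position, accumulated error score); original AMs score 0.
pool = [((0.0, 0.0), 0.0), ((10.0, 0.0), 0.0), ((0.0, 10.0), 0.0)]
blind = [(3.0, 4.0), (7.0, 8.0)]

for true_pos in blind:
    chosen = sorted(pool, key=lambda a: a[1])[:3]        # greedy: most reliable 3
    pts = [p for p, _ in chosen]
    d = [np.hypot(true_pos[0] - x, true_pos[1] - y) + rng.normal(0, 0.05)
         for x, y in pts]                                # noisy ranging
    est = trilaterate(pts, d)
    # Positioned BM joins the pool as a virtual anchor with a worse score.
    pool.append(((est[0], est[1]), max(e for _, e in chosen) + 0.05))
    print("estimated", np.round(est, 2), "true", true_pos)
```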

  1. Multi-layered greedy network-growing algorithm: extension of greedy network-growing algorithm to multi-layered networks.

    PubMed

    Kamimura, Ryotaro

    2004-02-01

    In this paper, we extend our greedy network-growing algorithm to multi-layered networks. With multi-layered networks, we can solve many complex problems that single-layered networks fail to solve. In addition, the network-growing algorithm is used in conjunction with teacher-directed learning that produces appropriate outputs without computing errors between targets and outputs. Thus, the present algorithm is a very efficient network-growing algorithm. The new algorithm was applied to three problems: the famous vertical-horizontal lines detection problem, a medical data problem and a road classification problem. In all these cases, experimental results confirmed that the method could solve problems that single-layered networks failed to. In addition, information maximization makes it possible to extract salient features in input patterns.

  2. Mouse Embryonic Stem Cells Inhibit Murine Cytomegalovirus Infection through a Multi-Step Process

    PubMed Central

    Kawasaki, Hideya; Kosugi, Isao; Arai, Yoshifumi; Iwashita, Toshihide; Tsutsui, Yoshihiro

    2011-01-01

    In humans, cytomegalovirus (CMV) is the most significant infectious cause of intrauterine infections that cause congenital anomalies of the central nervous system. Currently, it is not known how this process is affected by the timing of infection and the susceptibility of early-gestational-period cells. Embryonic stem (ES) cells are more resistant to CMV than most other cell types, although the mechanism responsible for this resistance is not well understood. Using a plaque assay and evaluation of immediate-early 1 mRNA and protein expression, we found that mouse ES cells were resistant to murine CMV (MCMV) at the point of transcription. In ES cells infected with MCMV, treatment with forskolin and trichostatin A did not confer full permissiveness to MCMV. In ES cultures infected with elongation factor-1α (EF-1α) promoter-green fluorescent protein (GFP) recombinant MCMV at a multiplicity of infection of 10, less than 5% of cells were GFP-positive, despite the fact that ES cells have relatively high EF-1α promoter activity. Quantitative PCR analysis of the MCMV genome showed that ES cells allow approximately 20-fold less MCMV DNA to enter the nucleus than mouse embryonic fibroblasts (MEFs) do, and that this inhibition occurs in a multi-step manner. In situ hybridization revealed that ES cell nuclei have significantly less MCMV DNA than MEF nuclei. This appears to be facilitated by the fact that ES cells express less heparan sulfate, β1 integrin, and vimentin, and have fewer nuclear pores, than MEF. This may reduce the ability of MCMV to attach to and enter through the cellular membrane, translocate to the nucleus, and cross the nuclear membrane in pluripotent stem cells (ES/induced pluripotent stem cells). The results presented here provide perspective on the relationship between CMV susceptibility and cell differentiation. PMID:21407806

  3. Stable hydrogen isotopic analysis of nanomolar molecular hydrogen by automatic multi-step gas chromatographic separation.

    PubMed

    Komatsu, Daisuke D; Tsunogai, Urumu; Kamimura, Kanae; Konno, Uta; Ishimura, Toyoho; Nakagawa, Fumiko

    2011-11-15

    We have developed a new automated analytical system that employs a continuous-flow isotope ratio mass spectrometer to determine the stable hydrogen isotopic composition (δD) of nanomolar quantities of molecular hydrogen (H2) in an air sample. This method improves on previous methods to attain simpler and lower-cost analyses, especially by avoiding the use of expensive or special devices, such as a Toepler pump, a cryogenic refrigerator, and a special evacuation system to keep the temperature of a coolant under reduced pressure. Instead, the system allows H2 purification from the air matrix via automatic multi-step gas chromatographic separation using both liquid nitrogen (77 K) and liquid nitrogen + ethanol (158 K) coolants under 1 atm pressure. The analytical precision of the δD determination using the developed method was better than 4‰ for >5 nmol injections (250 mL STP for a 500 ppbv air sample) and better than 15‰ for 1 nmol injections, regardless of the δD value, within 1 h per sample analysis. Using the developed system, the δD values of H2 can be quantified for atmospheric samples as well as samples of representative sources and sinks, including those containing small quantities of H2, such as H2 in soil pores or aqueous environments, for which there is currently little δD data available. As an example of such trace H2 analyses, we report here the isotope fractionations during H2 uptake by soils in a static chamber. The δD values of H2 in these H2-depleted environments can be useful in constraining the budgets of atmospheric H2 by applying an isotope mass balance model.

  4. Two- and multi-step annealing of cereal starches in relation to gelatinization.

    PubMed

    Shi, Yong-Cheng

    2008-02-13

    Two- and multi-step annealing experiments were designed to determine how much gelatinization temperature of waxy rice, waxy barley, and wheat starches could be increased without causing a decrease in gelatinization enthalpy or a decline in X-ray crystallinity. A mixture of starch and excess water was heated in a differential scanning calorimeter (DSC) pan to a specific temperature and maintained there for 0.5-48 h. The experimental approach was first to anneal a starch at a low temperature so that the gelatinization temperature of the starch was increased without causing a decrease in gelatinization enthalpy. The annealing temperature was then raised, but still was kept below the onset gelatinization temperature of the previously annealed starch. When a second- or third-step annealing temperature was high enough, it caused a decrease in crystallinity, even though the holding temperature remained below the onset gelatinization temperature of the previously annealed starch. These results support that gelatinization is a nonequilibrium process and that dissociation of double helices is driven by the swelling of amorphous regions. Small-scale starch slurry annealing was also performed and confirmed the annealing results conducted in DSC pans. A three-phase model of a starch granule, a mobile amorphous phase, a rigid amorphous phase, and a crystalline phase, was used to interpret the annealing results. Annealing seems to be an interplay between a more efficient packing of crystallites in starch granules and swelling of plasticized amorphous regions. There is always a temperature ceiling that can be used to anneal a starch without causing a decrease in crystallinity. That temperature ceiling is starch-specific, dependent on the structure of a starch, and is lower than the original onset gelatinization of a starch.

  5. Comparability of river quality assessment using macrophytes: a multi-step procedure to overcome biogeographical differences.

    PubMed

    Aguiar, F C; Segurado, P; Urbanič, G; Cambra, J; Chauvin, C; Ciadamidaro, S; Dörflinger, G; Ferreira, J; Germ, M; Manolaki, P; Minciardi, M R; Munné, A; Papastergiadou, E; Ferreira, M T

    2014-04-01

    This paper presents a new methodological approach to the problem of intercalibrating national river quality methods when a common metric is lacking and most of the countries share the same Water Framework Directive (WFD) assessment method. We provide recommendations for similar work in the future concerning the assessment of ecological accuracy and highlight the importance of good common ground in making the scientific work beyond the intercalibration feasible. The approach presented herein was applied to highly seasonal rivers of the Mediterranean Geographical Intercalibration Group for the Biological Quality Element Macrophytes. The Mediterranean Group of river macrophytes involved seven countries and two assessment methods with similar data acquisition and assessment concepts: the Macrophyte Biological Index for Rivers (IBMR) for Cyprus, France, Greece, Italy, Portugal and Spain, and the River Macrophyte Index (RMI) for Slovenia. The database included 318 sites, of which 78 were considered benchmarks. The boundary harmonization was performed for the common WFD assessment method (all countries except Slovenia) using the median of the Good/Moderate and High/Good boundaries of all countries. Then, whenever possible, the Slovenian method, RMI, was computed for the entire database. The IBMR was also computed for the Slovenian sites and was regressed against RMI in order to check the relatedness of the methods (R^2 = 0.45; p < 0.00001) and to convert RMI boundaries into the IBMR scale. The boundary bias of RMI was computed using direct comparison of classifications and the median boundary values following boundary harmonization. The average absolute class difference after harmonization is 26%, and the percentage of classifications differing by half of a quality class is also small (16.4%). This multi-step approach to the intercalibration was endorsed by the WFD Regulatory Committee. © 2013 Elsevier B.V. All rights reserved.
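
    The boundary-conversion step can be illustrated with a toy linear regression: fit IBMR against RMI on shared sites, then push the RMI class boundaries through the fitted line. The scores and boundaries below are invented for illustration, not the intercalibration data.

```python
import numpy as np

# Invented paired scores on shared sites (illustrative only).
rmi = np.array([0.35, 0.48, 0.61, 0.70, 0.82, 0.90])    # RMI (Slovenia)
ibmr = np.array([7.1, 8.0, 9.4, 10.2, 11.5, 12.1])      # IBMR, same sites

slope, intercept = np.polyfit(rmi, ibmr, 1)              # least-squares line
for name, b in [("Good/Moderate", 0.60), ("High/Good", 0.80)]:
    print(f"RMI {name} boundary {b:.2f} -> IBMR {slope * b + intercept:.2f}")
```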

  6. Detection of Heterogeneous Small Inclusions by a Multi-Step MUSIC Method

    NASA Astrophysics Data System (ADS)

    Solimene, Raffaele; Dell'Aversano, Angela; Leone, Giovanni

    2014-05-01

    In this contribution the problem of detecting and localizing scatterers with small (in terms of wavelength) cross sections by collecting their scattered field is addressed. The problem is dealt with for a two-dimensional scalar configuration in which the background is a two-layered cylindrical medium: scattered field data are taken in the outermost layer, while the inclusions are embedded within the inner layer. Moreover, the case of heterogeneous inclusions (i.e., having different scattering coefficients) is addressed. As a pertinent applicative context we identify the problem of diagnosing concrete pillars in order to detect and locate rebars, ducts and other small inhomogeneities that can populate the interior of the pillar. The nature of the inclusions influences the scattering coefficients; for example, the field scattered by rebars is stronger than that due to ducts. Accordingly, the more weakly scattering inclusions can be difficult to detect, as their scattered fields tend to be overwhelmed by those of the strong scatterers. To circumvent this problem, a multi-step MUltiple SIgnal Classification (MUSIC) detection algorithm is adopted in this contribution [1]. In particular, the first stage aims at detecting rebars. Once the rebars have been detected, their positions are exploited to update the Green's function and to subtract the scattered field due to their presence. The procedure is repeated until all the inclusions are detected. The analysis is conducted by numerical experiments for a multi-view/multi-static single-frequency configuration, with synthetic data generated by an FDTD forward solver. Acknowledgement: This work benefited from networking activities carried out within the EU funded COST Action TU1208 "Civil Engineering Applications of Ground Penetrating Radar." [1] R. Solimene, A. Dell'Aversano and G. Leone, "MUSIC algorithms for rebar detection," J. of Geophysics and Engineering, vol. 10, pp. 1
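
    For orientation, a generic single-frequency MUSIC pseudospectrum for a uniform linear array is sketched below; the paper's layered-medium Green's function and multi-view/multi-static geometry are not reproduced. The multi-step scheme described above would wrap this in a detect, update-Green's-function, subtract, repeat loop.

```python
import numpy as np

def music_spectrum(R, steering, n_sources, grid):
    """MUSIC pseudospectrum from covariance R and a steering-vector function."""
    eigval, eigvec = np.linalg.eigh(R)          # ascending eigenvalues
    En = eigvec[:, :-n_sources]                 # noise subspace
    return np.array([1.0 / np.linalg.norm(En.conj().T @ steering(t)) ** 2
                     for t in grid])

# Uniform linear array, half-wavelength spacing (illustrative geometry).
M, d = 8, 0.5
steer = lambda t: np.exp(-2j * np.pi * d * np.arange(M) * np.sin(t))

rng = np.random.default_rng(3)
angles = [0.3, -0.5]                            # true source directions (rad)
A = np.column_stack([steer(t) for t in angles])
S = rng.normal(size=(2, 500)) + 1j * rng.normal(size=(2, 500))
noise = 0.1 * (rng.normal(size=(M, 500)) + 1j * rng.normal(size=(M, 500)))
X = A @ S + noise
R = X @ X.conj().T / 500                        # sample covariance

grid = np.linspace(-np.pi / 2, np.pi / 2, 361)
p = music_spectrum(R, steer, n_sources=2, grid=grid)
print("strongest peak near", round(float(grid[np.argmax(p)]), 3), "rad")
```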

  7. Scheduling Algorithm for Mission Planning and Logistics Evaluation (SAMPLE). Volume 3: The GREEDY algorithm

    NASA Technical Reports Server (NTRS)

    Dupnick, E.; Wiggins, D.

    1980-01-01

    The functional specifications, functional design and flow, and the program logic of the GREEDY computer program are described. The GREEDY program is a submodule of the Scheduling Algorithm for Mission Planning and Logistics Evaluation (SAMPLE) program and has been designed as a continuation of the shuttle Mission Payloads (MPLS) program. The MPLS uses input payload data to form a set of feasible payload combinations; from these, GREEDY selects a subset of combinations (a traffic model) so that all payloads can be included without redundancy. The program also provides the user with a tutorial option for choosing an alternate traffic model in case a particular traffic model is unacceptable.
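
    The selection logic summarized above resembles a greedy set-packing pass. The sketch below is inferred from this summary, not from the GREEDY program listing: repeatedly pick the feasible combination that adds the most not-yet-flown payloads, rejecting any combination that would re-fly one, until every payload is covered.

```python
# Hedged sketch of a GREEDY-style traffic-model pass (logic assumed).
def build_traffic_model(payloads, combos):
    covered, model = set(), []
    while covered != payloads:
        # Only combinations disjoint from what is already flown (no redundancy).
        candidates = [c for c in combos if not (set(c) & covered)]
        if not candidates:
            break                            # remaining payloads need new combos
        best = max(candidates, key=len)      # greedy: most new payloads
        model.append(best)
        covered |= set(best)
    return model

payloads = {"P1", "P2", "P3", "P4", "P5"}
combos = [("P1", "P2"), ("P2", "P3"), ("P3", "P4", "P5"), ("P1",)]
print(build_traffic_model(payloads, combos))
```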

  8. Surface Modified Particles By Multi-Step Michael-Type Addition And Process For The Preparation Thereof

    DOEpatents

    Cook, Ronald Lee; Elliott, Brian John; Luebben, Silvia DeVito; Myers, Andrew William; Smith, Bryan Matthew

    2005-05-03

    A new class of surface modified particles and a multi-step Michael-type addition surface modification process for the preparation of the same is provided. The multi-step Michael-type addition surface modification process involves two or more reactions to compatibilize particles with various host systems and/or to provide the particles with particular chemical reactivities. The initial step comprises the attachment of a small organic compound to the surface of the inorganic particle. The subsequent steps attach additional compounds to the previously attached organic compounds through reactive organic linking groups. Specifically, these reactive groups are activated carbon-carbon pi bonds and carbon and non-carbon nucleophiles that react via Michael or Michael-type additions.

  9. Comparison of surface roughness of nanofilled and nanohybrid composite resins after polishing with a multi-step technique

    NASA Astrophysics Data System (ADS)

    Itanto, B. S. H.; Usman, M.; Margono, A.

    2017-08-01

    To compare the surface roughness of nanofilled and nanohybrid composite resins after polishing using a multi-step technique, 40 composite resin specimens were divided into two groups (20 nanofilled specimens using Filtek Z350 XT [group A] and 20 nanohybrid specimens using Filtek Z250 XT [group B]), prepared, and then polished. After immersion in artificial saliva for 24 hours, the surface roughness was measured with a surface roughness tester. The mean surface roughness (± standard deviation) of group A was 0.0967 ± 0.0174 μm, while that of group B was 0.1217 ± 0.0244 μm. Statistically (at p = 0.05), there was a significant difference between the two groups. The surface roughness of a nanofilled composite resin after polishing with a multi-step technique is better than that of a nanohybrid composite resin.

  10. Method to Improve Indium Bump Bonding via Indium Oxide Removal Using a Multi-Step Plasma Process

    NASA Technical Reports Server (NTRS)

    Greer, H. Frank (Inventor); Jones, Todd J. (Inventor); Vasquez, Richard P. (Inventor); Hoenk, Michael E. (Inventor); Dickie, Matthew R. (Inventor); Nikzad, Shouleh (Inventor)

    2012-01-01

    A process for removing indium oxide from indium bumps in a flip-chip structure to reduce contact resistance, by a multi-step plasma treatment. A first plasma treatment of the indium bumps with an argon, methane and hydrogen plasma reduces indium oxide, and a second plasma treatment with an argon and hydrogen plasma removes residual organics. The multi-step plasma process for removing indium oxide from the indium bumps is more effective in reducing the oxide, and yet does not require the use of halogens, does not change the bump morphology, does not attack the bond pad material or under-bump metallization layers, and creates no new mechanisms for open circuits.

  11. Automated multi-step purification protocol for Angiotensin-I-Converting-Enzyme (ACE).

    PubMed

    Eisele, Thomas; Stressler, Timo; Kranz, Bertolt; Fischer, Lutz

    2012-12-12

    Highly purified proteins are essential for the investigation of the functional and biochemical properties of proteins. The purification of a protein requires several steps, which are often time-consuming. In our study, the Angiotensin-I-Converting-Enzyme (ACE; EC 3.4.15.1) was solubilised from pig lung without the additional detergents that are commonly used, under mild alkaline conditions in a Tris-HCl buffer (50 mM, pH 9.0) for 48 h. The ACE purification was automated using a multi-step protocol requiring less than 8 h, resulting in a purified protein with a specific activity of 37 U mg^-1 (purification factor 308) and a yield of 23.6%. The automated ACE purification used an ordinary fast-protein-liquid-chromatography (FPLC) system equipped with two additional switching valves, which were needed for buffer stream inversion and for connection of the Superloop™ used for protein parking. The automated purification comprised four combined chromatography steps, including two desalting procedures: two hydrophobic interaction chromatography steps, a Cibacron 3FG-A chromatography step and a strong anion exchange chromatography step. The purified ACE was characterised by sodium dodecyl sulphate-polyacrylamide gel electrophoresis (SDS-PAGE) and native PAGE. The estimated monomer size of the purified glycosylated ACE was determined to be ~175 kDa by SDS-PAGE, with the dimeric form at ~330 kDa as characterised by native PAGE using a novel activity staining protocol. For the activity staining, the tripeptide L-Phe-Gly-Gly was used as the substrate. The ACE cleaved the dipeptide Gly-Gly, releasing L-Phe to be oxidised with L-amino acid oxidase. Combined with peroxidase and o-dianisidine, the generated H2O2 stained a brown coloured band. This automated purification protocol can be easily adapted for use with other protein purification tasks. Copyright © 2012 Elsevier B.V. All rights

  12. Hippocampal-prefrontal theta phase synchrony in planning of multi-step actions based on memory retrieval.

    PubMed

    Ishino, Seiya; Takahashi, Susumu; Ogawa, Masaaki; Sakurai, Yoshio

    2017-02-23

    Planning of multi-step actions based on the retrieval of acquired information is essential for efficient foraging. The hippocampus (HPC) and prefrontal cortex (PFC) may play critical roles in this process. However, in rodents, many studies investigating such roles utilized T-maze tasks that only require one-step actions (i.e., selection of one of two alternatives), in which memory retrieval and selection of an action based on the retrieval cannot be clearly differentiated. In monkeys, PFC has been suggested to be involved in planning of multi-step actions; however, the synchrony between HPC and PFC has not been evaluated. To address the combined role of the regions in planning of multi-step actions, we introduced a task in rats that required three successive nose-poke responses to three sequentially illuminated nose-poke holes. During the task, local field potentials (LFP) and spikes from hippocampal CA1 and medial PFC (mPFC) were simultaneously recorded. The position of the first hole indicated whether the following two holes would be presented in a predictable sequence or not. During the first nose-poke period, phase synchrony of LFPs in the theta range (4-10 Hz) between the regions was not different between predictable and unpredictable trials. However, only in trials of predictable sequences, the magnitude of theta phase synchrony during the first nose-poke period was negatively correlated with latency of the two-step ahead nose-poke response. Our findings point to the HPC-mPFC theta phase synchrony as a key mechanism underlying planning of multi-step actions based on memory retrieval rather than the retrieval itself. This article is protected by copyright. All rights reserved.
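
    Theta phase synchrony of the kind analyzed here is commonly quantified with a phase-locking value (PLV): band-pass to 4-10 Hz, extract instantaneous phases via the Hilbert transform, and average the phase-difference phasors. Below is a minimal sketch on toy LFPs; the PLV metric is a standard choice, not necessarily the authors' exact measure.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def theta_plv(x, y, fs, band=(4.0, 10.0)):
    """Phase-locking value between two LFP channels in the theta band."""
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    phx = np.angle(hilbert(filtfilt(b, a, x)))
    phy = np.angle(hilbert(filtfilt(b, a, y)))
    return np.abs(np.mean(np.exp(1j * (phx - phy))))   # 1 = perfect locking

# Toy LFPs: shared 7 Hz rhythm plus independent noise.
fs = 1000
t = np.arange(0, 5, 1 / fs)
rng = np.random.default_rng(4)
hpc = np.sin(2 * np.pi * 7 * t) + 0.5 * rng.normal(size=t.size)
mpfc = np.sin(2 * np.pi * 7 * t + 0.8) + 0.5 * rng.normal(size=t.size)
print(f"theta PLV = {theta_plv(hpc, mpfc, fs):.2f}")   # near 1 for this toy pair
```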

  13. Multi-Stepping Solution to Linear Two Point Boundary Value Problems in Missile Integrated Control

    DTIC Science & Technology

    2005-08-01

    S. S. Vaddi, P. K. Menon, and G. D. Sweriduk (Optimal Synthesis Inc., Palo Alto, CA 94303) and E. J. Ohlmeyer (Naval Surface Warfare Center)

  14. Introducing TreeCollapse: a novel greedy algorithm to solve the cophylogeny reconstruction problem

    PubMed Central

    2014-01-01

    Background: Cophylogeny mapping is used to uncover deep coevolutionary associations between two or more phylogenetic histories at a macro-coevolutionary scale. As cophylogeny mapping is NP-hard, this technique relies heavily on heuristics to solve all but the most trivial cases. One notable approach utilises a metaheuristic to search only a subset of the exponential number of fixed node orderings possible for the phylogenetic histories in question. This is of particular interest as it is the only known heuristic that guarantees biologically feasible solutions. This has enabled research to focus on larger coevolutionary systems, such as coevolutionary associations between figs and their pollinator wasps, including over 200 taxa. Although able to converge on solutions for problem instances of this size, a reduction from the current cubic running time is required to handle larger systems, such as Wolbachia and their insect hosts. Results: Rather than solving this underlying problem optimally, this work presents a greedy algorithm called TreeCollapse, which uses common topological patterns to recover an approximation of the coevolutionary history where the internal node ordering is fixed. This approach offers a significant speed-up compared with previous methods, running in linear time. The algorithm has been applied to over 100 well-known coevolutionary systems, converging on Pareto optimal solutions in over 68% of test cases, even where in some cases the Pareto optimal solution had not previously been recoverable. Further, while TreeCollapse applies a local search technique, it can guarantee that solutions are biologically feasible, making it the fastest method that can provide such a guarantee. Conclusion: As a result, we argue that the newly proposed algorithm is a valuable addition to the field of coevolutionary research. Not only does it offer a significantly faster method to estimate the cost of cophylogeny mappings but by using this approach, in conjunction with

  15. Multi-steps infrared spectroscopic characterization of the effect of flowering on medicinal value of Cistanche tubulosa

    NASA Astrophysics Data System (ADS)

    Lai, Zuliang; Xu, Peng; Wu, Peiyi

    2009-01-01

    Multi-steps infrared spectroscopic methods, including conventional Fourier transform infrared spectroscopy (FT-IR), second derivative spectroscopy and two-dimensional infrared (2D-IR) correlation spectroscopy, have been proved to be effective methods to examine complicated mixture systems such as Chinese herbal medicine. The focus of this paper is the investigation of the effect of flowering on the pharmaceutical components of Cistanche tubulosa by using the multi-steps infrared spectroscopic method. Power-spectrum analysis is applied to improve the resolution of 2D-IR contour maps, and many more details of overlapped peaks are detected. According to the results of FT-IR and second derivative spectra, the peak at 1732 cm⁻¹ assigned to C=O is stronger before flowering than after flowering in the stem, while more C=O groups are found in the top after flowering. The spectra of the root change considerably in the process of flowering, as many peaks shift and disappear after flowering. Seven peaks in the spectra of the stem, which are assigned to different kinds of glycoside components, are distinguished by power spectra in the range of 900-1200 cm⁻¹. The results provide a scientific explanation for the traditional experience that flowering consumes the pharmaceutical components in the stem and that the seeds absorb some nutrients of the stem after flowering. In conclusion, the multi-steps infrared spectroscopic method combined with power spectra is a promising method to investigate the flowering process of C. tubulosa and discriminate various parts of the herbal medicine.

  16. Multi-Step Ka/Ka Dichroic Plate with Rounded Corners for NASA's 34m Beam Waveguide Antenna

    NASA Technical Reports Server (NTRS)

    Veruttipong, Watt; Khayatian, Behrouz; Hoppe, Daniel; Long, Ezra

    2013-01-01

    A multi-step Ka/Ka dichroic plate Frequency Selective Surface (FSS) structure is designed, manufactured and tested for use in NASA's Deep Space Network (DSN) 34m Beam Waveguide (BWG) antennas. The proposed design allows ease of manufacturing and the ability to handle the increased transmit power (reflected off the FSS) of the DSN BWG antennas, from 20 kW to 100 kW. The dichroic is designed using HFSS, and the results agree well with measured data considering the manufacturing tolerances that could be achieved on the dichroic.

  18. Investigation and comparison of analytical, numerical, and experimentally measured coupling losses for multi-step index optical fibers.

    PubMed

    Aldabaldetreku, Gotzon; Durana, Gaizka; Zubia, Joseba; Arrue, Jon; Poisel, Hans; Losada, María

    2005-05-30

    The aim of the present paper is to provide a comprehensive analysis of the coupling losses in multi-step index (MSI) fibres. Their light power acceptance properties are investigated to obtain the corresponding analytical expressions taking into account longitudinal, transverse, and angular misalignments. For this purpose, a uniform power distribution is assumed. In addition, we perform several experimental measurements and computer simulations in order to calculate the coupling losses for two different MSI polymer optical fibres (MSI-POFs). These results serve to validate the theoretical expressions we have obtained.

  19. Rapid on-chip multi-step (bio)chemical procedures in continuous flow--manoeuvring particles through co-laminar reagent streams.

    PubMed

    Peyman, Sally A; Iles, Alexander; Pamme, Nicole

    2008-03-14

    We introduce a novel and extremely versatile microfluidic platform in which tedious multi-step biochemical processes can be performed in continuous flow within a fraction of the time required for conventional methods.

  20. GreedyMAX-type Algorithms for the Maximum Independent Set Problem

    NASA Astrophysics Data System (ADS)

    Borowiecki, Piotr; Göring, Frank

    The maximum independent set problem for a simple graph G = (V,E) is to find the largest subset of pairwise nonadjacent vertices. The problem is known to be NP-hard and it is also hard to approximate. Within this article we introduce a non-negative integer-valued function p defined on the vertex set V(G), called a potential function of a graph G, while P(G) = max_{v ∈ V(G)} p(v) is called the potential of G. For any graph, P(G) ≤ Δ(G), where Δ(G) is the maximum degree of G; moreover, Δ(G) - P(G) may be arbitrarily large. The potential of a vertex lets us get a closer insight into the properties of its neighborhood, which leads to the definition of the family of GreedyMAX-type algorithms having the classical GreedyMAX algorithm as their origin. We establish a lower bound 1/(P + 1) for the performance ratio of GreedyMAX-type algorithms, which compares favorably with the bound 1/(Δ + 1) known to hold for GreedyMAX. The cardinality of an independent set generated by any GreedyMAX-type algorithm is at least Σ_{v ∈ V(G)} 1/(p(v)+1), which strengthens the bounds of Turán and Caro-Wei stated in terms of vertex degrees.
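    As a concrete illustration, the sketch below implements the classical GreedyMAX heuristic (repeatedly delete a maximum-degree vertex until the graph has no edges; the surviving vertices form an independent set) together with the degree-based Caro-Wei bound Σ_v 1/(d(v)+1) that the paper's potential-based bound strengthens. The paper's potential function p itself is not reproduced here; the toy graph and function names are illustrative assumptions.

```python
from typing import Dict, Set

def greedy_max_independent_set(adj: Dict[int, Set[int]]) -> Set[int]:
    """Classical GreedyMAX: repeatedly delete a vertex of maximum degree
    until no edges remain; the surviving vertices are independent."""
    adj = {v: set(nbrs) for v, nbrs in adj.items()}  # work on a copy
    while any(adj[v] for v in adj):
        v = max(adj, key=lambda u: len(adj[u]))      # a max-degree vertex
        for u in adj[v]:
            adj[u].discard(v)
        del adj[v]
    return set(adj)

def caro_wei_bound(adj: Dict[int, Set[int]]) -> float:
    """Degree form of the lower bound: sum over v of 1/(d(v)+1)."""
    return sum(1.0 / (len(nbrs) + 1) for nbrs in adj.values())

# a 5-cycle with one pendant vertex attached
g = {0: {1, 4}, 1: {0, 2}, 2: {1, 3}, 3: {2, 4}, 4: {3, 0, 5}, 5: {4}}
s = greedy_max_independent_set(g)
assert all(u not in g[v] for v in s for u in s if u != v)  # independence check
print(s, "size", len(s), ">=", round(caro_wei_bound(g), 2))
```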

  1. Fabrication of different pore shapes by multi-step etching technique in ion-irradiated PET membranes

    NASA Astrophysics Data System (ADS)

    Mo, D.; Liu, J. D.; Duan, J. L.; Yao, H. J.; Latif, H.; Cao, D. L.; Chen, Y. H.; Zhang, S. X.; Zhai, P. F.; Liu, J.

    2014-08-01

    A method for the fabrication of different pore shapes in polyethylene terephthalate (PET)-based track-etched membranes (TEMs) is reported. A multi-step etching technique involving etchant variation and track annealing was applied to fabricate different pore shapes in PET membranes. PET foils of 12-μm thickness were irradiated with Bi ions (kinetic energy 9.5 MeV/u, fluence 10⁶ ions/cm²) at the Heavy Ion Research Facility (HIRFL, Lanzhou). The cross-sections of fundamental pore shapes (cylinder, cone, and double cone) were analyzed. Funnel-shaped and pencil-shaped pores were obtained using a two-step etching process. Track annealing was carried out in air at 180 °C for 120 min. After track annealing, the selectivity of the etching process decreased, which resulted in isotropic etching in subsequent etching steps. Rounded-cylinder and rounded-cone shapes were obtained by introducing a track-annealing step in the etching process. Cup and spherical funnel-shaped pores were fabricated using three- and four-step etching processes, respectively. The described multi-step etching technique provides a controllable method to fabricate new pore shapes in TEMs. Introduction of a variety of pore shapes may improve the separation properties of TEMs and enrich the series of TEM products.

  2. Abundance and composition of indigenous bacterial communities in a multi-step biofiltration-based drinking water treatment plant.

    PubMed

    Lautenschlager, Karin; Hwang, Chiachi; Ling, Fangqiong; Liu, Wen-Tso; Boon, Nico; Köster, Oliver; Egli, Thomas; Hammes, Frederik

    2014-10-01

    Indigenous bacterial communities are essential for biofiltration processes in drinking water treatment systems. In this study, we examined the microbial community composition and abundance of three different biofilter types (rapid sand, granular activated carbon, and slow sand filters) and their respective effluents in a full-scale, multi-step treatment plant (Zürich, CH). Detailed analysis of organic carbon degradation underpinned biodegradation as the primary function of the biofilter biomass. The biomass was present in concentrations ranging between 2-5 × 10¹⁵ cells/m³ in all filters but was phylogenetically, enzymatically and metabolically diverse. Based on 16S rRNA gene-based 454 pyrosequencing analysis of microbial community composition, similar microbial taxa (predominantly Proteobacteria, Planctomycetes, Acidobacteria, Bacteroidetes, Nitrospira and Chloroflexi) were present in all biofilters and in their respective effluents, but the ratio of microbial taxa was different in each filter type. This change was also reflected in the cluster analysis, which revealed a change of 50-60% in microbial community composition between the different filter types. This study documents the direct influence of the filter biomass on the microbial community composition of the final drinking water, particularly when the water is distributed without post-disinfection. The results provide new insights into the complexity of indigenous bacteria colonizing drinking water systems, especially in the different biofilters of a multi-step treatment plant.

  3. A nonlinear spatio-temporal lumping of radar rainfall for modeling multi-step-ahead inflow forecasts by data-driven techniques

    NASA Astrophysics Data System (ADS)

    Chang, Fi-John; Tsai, Meng-Jung

    2016-04-01

    Accurate multi-step-ahead inflow forecasting during typhoon periods is extremely crucial for real-time reservoir flood control. We propose a spatio-temporal lumping of radar rainfall for modeling inflow forecasts to mitigate time-lag problems and improve forecasting accuracy. Spatial aggregation of radar cells is made based on the sub-catchment partitioning obtained from the Self-Organizing Map (SOM), and flood forecasting is then made by Adaptive Neuro Fuzzy Inference System (ANFIS) models coupled with a two-staged Gamma Test (2-GT) procedure that identifies the optimal non-trivial rainfall inputs. The Shihmen Reservoir in northern Taiwan is used as a case study. The results show that the proposed methods can, in general, precisely make 1- to 4-hour-ahead forecasts, and the lag time between predicted and observed flood peaks can be mitigated. The constructed ANFIS models with only two fuzzy if-then rules can effectively categorize inputs into two levels (i.e. high and low) and provide an insightful perspective on the rainfall-runoff process, demonstrating their capability to model the complex rainfall-runoff process. In addition, the confidence level of forecasts with acceptable error reaches as high as 97% at horizon t+1 and 77% at horizon t+4, which evidently promotes model reliability and leads to better decisions on real-time reservoir operation during typhoon events.

  4. A greedy strategy for finding motifs from yes-no examples.

    PubMed

    Tateishi, E; Miyano, S

    1996-01-01

    We define a motif as an expression Z1.Z2...Zn with sets Z1, Z2,..., Zn of strings in a specified family Ω called the type. This notion can capture most of the motifs in PROSITE as well as regular pattern languages. A greedy strategy is developed for finding such motifs, with ambiguity, just from positive and negative examples by exploiting a probabilistic argument. This paper concentrates on describing the idea of the greedy algorithm with its underlying theory. Experimental results on splicing sites and E. coli promoters are also presented.

  5. Effects of Stroke on Ipsilesional End-Effector Kinematics in a Multi-Step Activity of Daily Living

    PubMed Central

    Gulde, Philipp; Hughes, Charmayne Mary Lee; Hermsdörfer, Joachim

    2017-01-01

    Background: Stroke frequently impairs activities of daily living (ADL) and deteriorates the function of the contra- as well as the ipsilesional limbs. In order to analyze alterations of higher motor control unaffected by paresis or sensory loss, the kinematics of ipsilesional upper limb movements in patients with stroke has previously been analyzed during prehensile movements and simple tool use actions. By contrast, motion recording of multi-step ADL is rare and patient-control comparisons for movement kinematics are largely lacking. Especially in clinical research, objective quantification of complex externally valid tasks can improve the assessment of neurological impairments. Methods: In this preliminary study we employed three-dimensional motion recording and applied kinematic analysis in a multi-step ADL (tea-making). The trials were examined with respect to errors and sub-action structure, durations, path lengths (PLs), peak velocities, relative activity (RA) and smoothness. In order to check for specific burdens, the sub-actions of the task were extracted and compared. To examine the feasibility of the approach, we determined the behavioral and kinematic metrics of the (ipsilesional) unimanual performance of seven chronic stroke patients (64 ± 11 years, 3 with right and 4 with left brain damage (LBD), 2 with signs of apraxia, variable severity of paresis) and compared the results with data from 14 neurologically healthy age-matched control participants (70 ± 7 years). Results: T-tests revealed that, while the quantity and structure of the task's sub-actions were similar, the analysis of end-effector kinematics detected clear group differences in the associated parameters. Specifically, trial duration (TD) was increased (Cohen's d = 1.77), while the RA (Cohen's d = 1.72) and the parameters of peak velocities (Cohen's d = 1.49/1.97) were decreased in the patient group. Analysis of the task's sub-actions with repeated measures analysis of variance (rmANOVA) revealed

  6. Using multi-step proposal distribution for improved MCMC convergence in Bayesian network structure learning.

    PubMed

    Larjo, Antti; Lähdesmäki, Harri

    2015-12-01

    Bayesian networks have become popular for modeling probabilistic relationships between entities. As their structure can also be given a causal interpretation about the studied system, they can be used to learn, for example, regulatory relationships of genes or proteins in biological networks and pathways. Inference of the Bayesian network structure is complicated by the size of the model structure space, necessitating the use of optimization methods or sampling techniques, such as Markov chain Monte Carlo (MCMC) methods. However, convergence of MCMC chains is in many cases slow and can become an even harder issue as the dataset size grows. We show here how to improve convergence in the Bayesian network structure space by using an adjustable proposal distribution with the possibility to propose a wide range of steps in the structure space, and demonstrate improved network structure inference by analyzing phosphoprotein data from the human primary T cell signaling network.
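    The abstract describes an adjustable proposal that can take wide, multi-edge steps through the structure space. The sketch below is a minimal, generic version of that idea, not the authors' implementation: a Metropolis-Hastings sampler over DAG adjacency matrices whose proposal toggles a geometric number of edges, scored with a standard linear-Gaussian BIC. All function names, parameter values, and the toy data are assumptions.

```python
import numpy as np

def is_dag(A):
    """Acyclicity check: repeatedly peel off a source node (Kahn's algorithm)."""
    alive = np.ones(len(A), dtype=bool)
    while alive.any():
        indeg = A[alive][:, alive].sum(axis=0)
        if not (indeg == 0).any():
            return False        # no source left among surviving nodes: a cycle
        alive[np.where(alive)[0][np.argmax(indeg == 0)]] = False
    return True

def bic_score(A, X):
    """BIC of a linear-Gaussian network: regress each node on its parents."""
    n, d = X.shape
    score = 0.0
    for j in range(d):
        parents = np.where(A[:, j])[0]
        P = np.column_stack([np.ones(n)] + [X[:, p] for p in parents])
        beta, *_ = np.linalg.lstsq(P, X[:, j], rcond=None)
        sigma2 = max(np.mean((X[:, j] - P @ beta) ** 2), 1e-12)
        score += -0.5 * n * np.log(sigma2) - 0.5 * (len(parents) + 1) * np.log(n)
    return score

def mcmc_structure(X, n_iter=2000, p_more=0.4, seed=0):
    """Metropolis-Hastings over DAGs; the proposal toggles a geometric number
    of edges, so wider multi-edge steps through the structure space occur."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    A = np.zeros((d, d), dtype=int)
    cur = bic_score(A, X)
    best = (cur, A.copy())
    for _ in range(n_iter):
        B = A.copy()
        k = rng.geometric(1 - p_more)       # k >= 1; usually 1, sometimes more
        for _ in range(k):
            i, j = rng.choice(d, size=2, replace=False)
            B[i, j] ^= 1                    # toggle one directed edge
        if not is_dag(B):
            continue                        # reject moves that leave DAG space
        new = bic_score(B, X)
        if np.log(rng.random()) < new - cur:  # symmetric proposal
            A, cur = B, new
            if cur > best[0]:
                best = (cur, A.copy())
    return best

# toy data generated from the chain 0 -> 1 -> 2
rng = np.random.default_rng(1)
x0 = rng.normal(size=400)
x1 = 0.8 * x0 + 0.3 * rng.normal(size=400)
x2 = -0.7 * x1 + 0.3 * rng.normal(size=400)
score, A_hat = mcmc_structure(np.column_stack([x0, x1, x2]))
print(score, A_hat, sep="\n")
```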

  7. Teaching multi-step requesting and social communication to two children with autism spectrum disorders with three AAC options.

    PubMed

    van der Meer, Larah; Kagohara, Debora; Roche, Laura; Sutherland, Dean; Balandin, Susan; Green, Vanessa A; O'Reilly, Mark F; Lancioni, Giulio E; Marschik, Peter B; Sigafoos, Jeff

    2013-09-01

    The present study involved comparing the acquisition of multi-step requesting and social communication across three AAC options: manual signing (MS), picture exchange (PE), and speech-generating devices (SGDs). Preference for each option was also assessed. The participants were two children with autism spectrum disorders (ASD) who had previously been taught to use each option to request preferred items. Intervention was implemented in an alternating-treatments design. During baseline, participants demonstrated low levels of correct communicative responding. With intervention, both participants learned the target responses (two- and three-step requesting responses, greetings, answering questions, and social etiquette responses) to varying levels of proficiency with each communication option. One participant demonstrated a preference for using the SGD and the other preferred PE. The importance of examining preferences for using one AAC option over others is discussed.

  8. Deterministic multi-step rotation of magnetic single-domain state in Nickel nanodisks using multiferroic magnetoelastic coupling

    NASA Astrophysics Data System (ADS)

    Sohn, Hyunmin; Liang, Cheng-yen; Nowakowski, Mark E.; Hwang, Yongha; Han, Seungoh; Bokor, Jeffrey; Carman, Gregory P.; Candler, Robert N.

    2017-10-01

    We demonstrate deterministic multi-step rotation of a magnetic single-domain (SD) state in Nickel nanodisks using the multiferroic magnetoelastic effect. Ferromagnetic Nickel nanodisks are fabricated on a piezoelectric Lead Zirconate Titanate (PZT) substrate, surrounded by patterned electrodes. With the application of a voltage between opposing electrode pairs, we generate anisotropic in-plane strains that reshape the magnetic energy landscape of the Nickel disks, reorienting magnetization toward a new easy axis. By applying a series of voltages sequentially to adjacent electrode pairs, circulating in-plane anisotropic strains are applied to the Nickel disks, deterministically rotating a SD state in the Nickel disks by increments of 45°. The rotation of the SD state is numerically predicted by a fully-coupled micromagnetic/elastodynamic finite element analysis (FEA) model, and the predictions are experimentally verified with magnetic force microscopy (MFM). This experimental result will provide a new pathway to develop energy efficient magnetic manipulation techniques at the nanoscale.

  9. Multi-step excitation energy transfer engineered in genetic fusions of natural and synthetic light-harvesting proteins.

    PubMed

    Mancini, Joshua A; Kodali, Goutham; Jiang, Jianbing; Reddy, Kanumuri Ramesh; Lindsey, Jonathan S; Bryant, Donald A; Dutton, P Leslie; Moser, Christopher C

    2017-02-01

    Synthetic proteins designed and constructed from first principles with minimal reference to the sequence of any natural protein have proven robust and extraordinarily adaptable for engineering a range of functions. Here for the first time we describe the expression and genetic fusion of a natural photosynthetic light-harvesting subunit with a synthetic protein designed for light energy capture and multi-step transfer. We demonstrate excitation energy transfer from the bilin of the CpcA subunit (phycocyanin α subunit) of the cyanobacterial photosynthetic light-harvesting phycobilisome to synthetic four-helix-bundle proteins accommodating sites that specifically bind a variety of selected photoactive tetrapyrroles positioned to enhance energy transfer by relay. The examination of combinations of different bilin, chlorin and bacteriochlorin cofactors has led to identification of the preconditions for directing energy from the bilin light-harvesting antenna into synthetic protein-cofactor constructs that can be customized for light-activated chemistry in the cell.

  10. Event-triggered logical flow control for comprehensive process integration of multi-step assays on centrifugal microfluidic platforms.

    PubMed

    Kinahan, David J; Kearney, Sinéad M; Dimov, Nikolay; Glynn, Macdara T; Ducrée, Jens

    2014-07-07

    The centrifugal "lab-on-a-disc" concept has proven to have great potential for process integration of bioanalytical assays, in particular where ease-of-use, ruggedness, portability, fast turn-around time and cost efficiency are of paramount importance. Yet, as all liquids residing on the disc are exposed to the same centrifugal field, an inherent challenge of these systems remains the automation of multi-step, multi-liquid sample processing and subsequent detection. In order to orchestrate the underlying bioanalytical protocols, an ample palette of rotationally and externally actuated valving schemes has been developed. While excelling with the level of flow control, externally actuated valves require interaction with peripheral instrumentation, thus compromising the conceptual simplicity of the centrifugal platform. In turn, for rotationally controlled schemes, such as common capillary burst valves, typical manufacturing tolerances tend to limit the number of consecutive laboratory unit operations (LUOs) that can be automated on a single disc. In this paper, a major advancement on recently established dissolvable film (DF) valving is presented; for the very first time, a liquid handling sequence can be controlled in response to completion of preceding liquid transfer event, i.e. completely independent of external stimulus or changes in speed of disc rotation. The basic, event-triggered valve configuration is further adapted to leverage conditional, large-scale process integration. First, we demonstrate a fluidic network on a disc encompassing 10 discrete valving steps including logical relationships such as an AND-conditional as well as serial and parallel flow control. Then we present a disc which is capable of implementing common laboratory unit operations such as metering and selective routing of flows. Finally, as a pilot study, these functions are integrated on a single disc to automate a common, multi-step lab protocol for the extraction of total RNA from

  11. GreedEx: A Visualization Tool for Experimentation and Discovery Learning of Greedy Algorithms

    ERIC Educational Resources Information Center

    Velazquez-Iturbide, J. A.; Debdi, O.; Esteban-Sanchez, N.; Pizarro, C.

    2013-01-01

    Several years ago we presented an experimental, discovery-learning approach to the active learning of greedy algorithms. This paper presents GreedEx, a visualization tool developed to support this didactic method. The paper states the design goals of GreedEx, makes explicit the major design decisions adopted, and describes its main characteristics…

  12. The Greedy Little Boy Teacher's Manual [With Units for Levels A and B].

    ERIC Educational Resources Information Center

    Otto, Dale; George, Larry

    The Center for the Study of Migrant and Indian Education has recognized the need to develop special materials to improve the non-Indian's understanding of the differences he observes in his Indian classmates and to promote a better understanding by American Indian children of their unique cultural heritage. The Greedy Little Boy is a traditional…

  14. Design of a new automated multi-step outflow test apparatus

    NASA Astrophysics Data System (ADS)

    Figueras, J.; Gribb, M. M.; McNamara, J. P.

    2006-12-01

    Modeling flow and transport in the vadose zone requires knowledge of the soil hydraulic properties. Laboratory studies involving vadose zone soils typically include use of the multistep outflow method (MSO), which can provide information about wetting and drying soil-moisture and hydraulic conductivity curves from a single test. However, manual MSO testing is time consuming and measurement errors can be easily introduced. A computer-automated system has been designed to allow convenient measurement of soil-water characteristic curves. Computer-controlled solenoid valves are used to regulate the pressure inside Tempe cells to drain soil samples, and outflow volumes are measured with a pressure transducer. The electronic components of the system are controlled using LabVIEW software. This system has been optimized for undisturbed core samples. System performance has been evaluated by comparing results from undisturbed samples subjected first to manual MSO testing and then automated testing. The automated and manual MSO tests yielded similar drying soil-water characteristic curves. These curves are further compared to in-situ measurements and those obtained using pedotransfer functions for a semi-arid watershed.

  15. Genetic model of multi-step breast carcinogenesis involving the epithelium and stroma: clues to tumour-microenvironment interactions.

    PubMed

    Kurose, K; Hoshaw-Woodard, S; Adeyinka, A; Lemeshow, S; Watson, P H; Eng, C

    2001-09-01

    Although numerous studies have reported that high frequencies of loss of heterozygosity (LOH) at various chromosomal arms have been identified in breast cancer, differential LOH in the neoplastic epithelial and surrounding stromal compartments has not been well examined. Using laser capture microdissection, which enables separation of neoplastic epithelium from surrounding stroma, we microdissected each compartment of 41 sporadic invasive adenocarcinomas of the breast. Frequent LOH was identified in both neoplastic epithelial and/or stromal compartments, ranging from 25 to 69% in the neoplastic epithelial cells, and from 17 to 61% in the surrounding stromal cells, respectively. The great majority of markers showed a higher frequency of LOH in the neoplastic epithelial compartment than in the stroma, suggesting that LOH in neoplastic epithelial cells might precede LOH in surrounding stromal cells. Furthermore, we sought to examine pair-wise associations of particular genetic alterations in either epithelial or stromal compartments. Seventeen pairs of markers showed statistically significant associations. We also propose a genetic model of multi-step carcinogenesis for the breast involving the epithelial and stromal compartments and note that genetic alterations occur in the epithelial compartments as the earlier steps followed by LOH in the stromal compartments. Our study strongly suggests that interactions between breast epithelial and stromal compartments might play a critical role in breast carcinogenesis and several genetic alterations in both epithelial and stromal compartments are required for breast tumour growth and progression.

  16. Dempster-Shafer regression for multi-step-ahead time-series prediction towards data-driven machinery prognosis

    NASA Astrophysics Data System (ADS)

    Niu, Gang; Yang, Bo-Suk

    2009-04-01

    Predicting a sequence of future values of a time series using the descriptors observed in the past can be regarded as the cornerstone of data-driven machinery prognosis. The purpose of this paper is to develop a novel data-driven machinery prognosis strategy for industrial application. First, the collected time-series degradation features are reconstructed based on the theorem of Takens, where the reconstruction parameters, delay time and embedding dimension, are selected by the C-C method and the false nearest neighbor method, respectively. Next, the Dempster-Shafer regression technique is developed to perform the task of time-series prediction. Moreover, the strategy of iterated multi-step-ahead prediction is discussed to keep track of the rapid variation of time-series signals during the data monitoring process in an industrial plant. The proposed scheme is validated using condition monitoring data of a methane compressor to predict the degradation trend. Experimental results show that the proposed methods have a low error rate; hence, the approach can be regarded as an effective tool for data-driven machinery prognosis applications.
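    The two algorithmic ingredients named above, Takens delay reconstruction and iterated multi-step-ahead prediction, can be sketched compactly. The snippet below substitutes a plain k-nearest-neighbour regressor for the paper's Dempster-Shafer regression, which is not reproduced here; the embedding parameters and test series are illustrative assumptions.

```python
import numpy as np

def embed(series, dim, tau):
    """Takens delay reconstruction: rows are [x_t, x_{t-tau}, ...,
    x_{t-(dim-1)tau}], each paired with the next value x_{t+1} as target."""
    start = (dim - 1) * tau
    X = np.array([[series[t - k * tau] for k in range(dim)]
                  for t in range(start, len(series) - 1)])
    y = series[start + 1:]
    return X, y

def knn_predict(X, y, q, k=5):
    """Plain k-nearest-neighbour regression, standing in for the paper's
    Dempster-Shafer regressor."""
    idx = np.argsort(np.linalg.norm(X - q, axis=1))[:k]
    return y[idx].mean()

def iterated_forecast(series, horizon, dim=3, tau=2, k=5):
    """Iterated multi-step-ahead prediction: each one-step forecast is fed
    back into the delay vector to reach the next horizon."""
    X, y = embed(series, dim, tau)
    buf = list(series)
    preds = []
    for _ in range(horizon):
        q = np.array([buf[-1 - j * tau] for j in range(dim)])
        yhat = knn_predict(X, y, q, k)
        preds.append(float(yhat))
        buf.append(yhat)          # the forecast becomes part of the history
    return preds

t = np.arange(400)
x = np.sin(0.2 * t) + 0.05 * np.random.default_rng(0).normal(size=t.size)
print(np.round(iterated_forecast(x, horizon=6), 3))
```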

  17. Multi-step formation of a hemifusion diaphragm for vesicle fusion revealed by all-atom molecular dynamics simulations.

    PubMed

    Tsai, Hui-Hsu Gavin; Chang, Che-Ming; Lee, Jian-Bin

    2014-06-01

    Membrane fusion is essential for intracellular trafficking and virus infection, but the molecular mechanisms underlying the fusion process remain poorly understood. In this study, we employed all-atom molecular dynamics simulations to investigate the membrane fusion mechanism using vesicle models which were pre-bound by inter-vesicle Ca²⁺-lipid clusters to approximate Ca²⁺-catalyzed fusion. Our results show that the formation of the hemifusion diaphragm for vesicle fusion is a multi-step event. This result contrasts with the assumptions made in most continuum models. The neighboring hemifused states are separated by an energy barrier on the energy landscape. The hemifusion diaphragm is much thinner than the planar lipid bilayers. The thinning of the hemifusion diaphragm during its formation results in the opening of a fusion pore for vesicle fusion. This work provides new insights into the formation of the hemifusion diaphragm and thus increases understanding of the molecular mechanism of membrane fusion. This article is part of a Special Issue entitled: Membrane Structure and Function: Relevance in the Cell's Physiology, Pathology and Therapy.

  19. Anisotropic multi-step etching for large-area fabrication of surface microstructures on stainless steel to control thermal radiation.

    PubMed

    Shimizu, M; Yamada, T; Sasaki, K; Takada, A; Nomura, H; Iguchi, F; Yugami, H

    2015-04-01

    Controlling the thermal radiation spectra of materials is one of the promising ways to advance energy system efficiency. It is well known that the thermal radiation spectrum can be controlled through the introduction of periodic surface microstructures. Herein, a method for the large-area fabrication of periodic microstructures based on multi-step wet etching is described. The method consists of three main steps, i.e., resist mask fabrication via photolithography, electrochemical wet etching, and side wall protection. Using this method, high-aspect micro-holes (0.82 aspect ratio) arrayed with hexagonal symmetry were fabricated on a stainless steel substrate. The conventional wet etching process method typically provides an aspect ratio of 0.3. The optical absorption peak attributed to the fabricated micro-hole array appeared at 0.8 μm, and the peak absorbance exceeded 0.8 for the micro-holes with a 0.82 aspect ratio. While argon plasma etching in a vacuum chamber was used in the present study for the formation of the protective layer, atmospheric plasma etching should be possible and will expand the applicability of this new method for the large-area fabrication of high-aspect materials.

  20. Multiwavelength Observations of a Slow Rise, Multi-Step X1.6 Flare and the Associated Eruption

    NASA Astrophysics Data System (ADS)

    Yurchyshyn, V.

    2015-12-01

    Using multi-wavelength observations we studied a slow-rise, multi-step X1.6 flare that began on November 7, 2014 as a localized eruption of core fields inside a δ-sunspot and later engulfed the entire active region. This flare event was associated with formation of two systems of post eruption arcades (PEAs) and several J-shaped flare ribbons showing extremely fine details, irreversible changes in the photospheric magnetic fields, and it was accompanied by a fast and wide coronal mass ejection. Data from the Solar Dynamics Observatory and IRIS spacecraft, along with ground-based data from the New Solar Telescope (NST), present evidence that i) the flare and the eruption were directly triggered by a flux emergence that occurred inside a δ-sunspot at the boundary between two umbrae; ii) this event represented an example of in-situ formation of an unstable flux rope observed only in hot AIA channels (131 and 94 Å) and LASCO C2 coronagraph images; and iii) the global PEA system spanned the entire active region and was due to global-scale reconnection occurring at heights of about one solar radius, indicating the global spatial and temporal scale of the eruption.

  1. Trace Determination of Gadolinium in Biomedical Samples by Diode Laser-Based Multi-Step Resonance Ionization Mass Spectrometry

    SciTech Connect

    Blaum, K.; Geppert, C. H.; Schreiber, W. G.; Hengstler, J.; Müller, P.; Nörtershäuser, W.; Wendt, K.; Bushaw, B. A.

    2002-01-01

    We report on the application of high-resolution multi-step resonance ionization mass spectrometry (RIMS) to the trace determination of the rare earth element gadolinium. Utilizing three-step resonant excitation into an autoionizing level, we attain both isobaric and isotopic selectivity of >10⁷. An overall detection efficiency of ~10⁻⁷ and an isotope-specific detection limit of 1.5×10⁹ atoms have been demonstrated. When targeting the major isotope ¹⁵⁸Gd, this corresponds to a total Gd detection limit of 1.6 pg. Additionally, linear response has been demonstrated over a dynamic range of six orders of magnitude. The method has been used to determine the Gd content in various normal and tumor tissue samples, taken from a laboratory mouse shortly after injection of Gd-DTPA, which is used as a contrast agent for magnetic resonance imaging (MRI). The RIMS results show Gd concentrations that vary by more than two orders of magnitude depending on the tissue type. This variability is similar to that observed in MRI scans that depict Gd-DTPA content in the mouse prior to dissection, and illustrates the potential for quantitative trace analysis in microsamples of biomedical materials.
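    The quoted figures are internally consistent, as a quick back-of-the-envelope check confirms: converting the isotope-specific limit of 1.5×10⁹ atoms of ¹⁵⁸Gd into a total gadolinium mass, assuming the standard natural ¹⁵⁸Gd abundance of about 24.8% and the molar mass of natural Gd.

```python
# Converting the isotope-specific limit (1.5e9 atoms of 158Gd) into a total
# Gd mass; abundance and molar mass are standard values assumed here.
N_A = 6.022e23             # Avogadro constant, atoms per mol
atoms_158 = 1.5e9          # detection limit for 158Gd, from the abstract
abundance_158 = 0.248      # natural abundance of 158Gd (~24.8%)
molar_mass_gd = 157.25     # g/mol, natural gadolinium

total_atoms = atoms_158 / abundance_158
mass_pg = total_atoms * molar_mass_gd / N_A * 1e12
print(f"total Gd detection limit: {mass_pg:.2f} pg")   # ~1.6 pg, as quoted
```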

  2. Very-short-term wind power prediction by a hybrid model with single- and multi-step approaches

    NASA Astrophysics Data System (ADS)

    Mohammed, E.; Wang, S.; Yu, J.

    2017-05-01

    Very-short-term wind power prediction (VSTWPP) plays an essential role in the operation of electric power systems. This paper aims at improving and applying a hybrid method of VSTWPP based on historical data. The hybrid method combines multiple linear regression and least squares (MLR&LS) and is intended to reduce prediction errors. The predicted values are obtained through two sub-processes: 1) transform the time-series data of actual wind power into a power ratio, and then predict the power ratio; 2) use the predicted power ratio to predict the wind power. Besides, the proposed method includes two prediction approaches: single-step prediction (SSP) and multi-step prediction (MSP). The predictions are tested comparatively against an auto-regressive moving average (ARMA) model in terms of predicted values and errors. The validity of the proposed hybrid method is confirmed by error analysis using the probability density function (PDF), mean absolute percent error (MAPE) and mean square error (MSE). Meanwhile, comparison of the correlation coefficients between the actual and predicted values for different prediction times and windows confirms that the MSP approach using the hybrid model is the most accurate compared with the SSP approach and ARMA. The MLR&LS method is accurate and promising for solving problems in WPP.
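    A minimal sketch of the two sub-processes, under simplifying assumptions: the power series is converted to a power ratio (power divided by an assumed installed capacity), a linear autoregression is fitted by least squares as a stand-in for the paper's MLR&LS combination, and MSP is realised by iterating the one-step model (steps=1 gives SSP). The capacity, lag order, and synthetic series are assumptions.

```python
import numpy as np

def fit_ar_ls(r, p=4):
    """Least-squares fit of a linear autoregression on the power ratio:
    r_t ~ c + a_1 r_{t-1} + ... + a_p r_{t-p}."""
    n = len(r)
    X = np.column_stack([np.ones(n - p)] +
                        [r[p - k:n - k] for k in range(1, p + 1)])
    beta, *_ = np.linalg.lstsq(X, r[p:], rcond=None)
    return beta

def forecast(r, beta, steps):
    """steps=1 is single-step prediction (SSP); larger values give multi-step
    prediction (MSP) by feeding predicted ratios back into the model."""
    p = len(beta) - 1
    hist = list(r)
    out = []
    for _ in range(steps):
        x = np.array([1.0] + [hist[-k] for k in range(1, p + 1)])
        r_hat = float(x @ beta)
        out.append(r_hat)
        hist.append(r_hat)
    return out

capacity = 50.0                                # MW, assumed installed capacity
rng = np.random.default_rng(0)
power = 25 + 10 * np.sin(np.arange(300) / 10) + rng.normal(0, 1, 300)
ratio = power / capacity                       # sub-process 1: power ratio
beta = fit_ar_ls(ratio, p=4)
mw = [capacity * r for r in forecast(ratio, beta, steps=4)]  # sub-process 2
print(np.round(mw, 2))
```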

  3. Gadolinium trace determination in biomedical samples by diode-laser-based multi-step resonance ionization mass spectrometry

    NASA Astrophysics Data System (ADS)

    Geppert, Ch.; Blaum, K.; Diel, S.; Müller, P.; Schreiber, W. G.; Wendt, K.

    2001-08-01

    Diode-laser-based multi-step resonance ionization mass spectrometry (RIMS), which has been developed primarily for ultra-trace analysis of long-lived radioactive isotopes, has been adapted for application to elements within the sequence of the rare earths. First investigations concern Gd isotopes. Here high suppression of isobars, as provided by RIMS, is mandatory. Using a three-step resonant excitation scheme into an autoionizing state, which has been the subject of preparatory spectroscopic investigations, a high efficiency of >1×10⁻⁶ and good isobaric selectivity of >10⁷ were realized. Additionally, the linearity of the method has been demonstrated over six orders of magnitude. Avoiding contaminations from the titanium carrier foil resulted in a suppression of background of more than one order of magnitude and a correspondingly low detection limit of 4×10⁹ atoms, equivalent to 1 pg of Gd. The technique has been applied for trace determination of the Gd content in animal tissue. Biomedical microsamples were analyzed shortly after injection of Gd-chelate, which is used as the primary contrast medium for magnetic resonance imaging (MRI) in biomedical investigations. Correlated in-vivo magnetic resonance images have been taken. The RIMS measurements show high reproducibility as well as good precision, and contribute new insight into the distribution and kinetics of Gd within different healthy and cancerous tissues.

  4. A multi-step reaction model for ignition of fully-dense Al-CuO nanocomposite powders

    NASA Astrophysics Data System (ADS)

    Stamatis, D.; Ermoline, A.; Dreizin, E. L.

    2012-12-01

    A multi-step reaction model is developed to describe heterogeneous processes occurring upon heating of an Al-CuO nanocomposite material prepared by arrested reactive milling. The reaction model couples a previously derived Cabrera-Mott oxidation mechanism describing initial, low-temperature processes and an aluminium oxidation model including formation of different alumina polymorphs at increased film thicknesses and higher temperatures. The reaction model is tuned using traces measured by differential scanning calorimetry. Ignition is studied for thin powder layers and individual particles using, respectively, heated filament (heating rates of 10³-10⁴ K s⁻¹) and laser ignition (heating rate ∼10⁶ K s⁻¹) experiments. The developed heterogeneous reaction model predicts a sharp temperature increase, which can be associated with ignition, when the laser power approaches the experimental ignition threshold. In experiments, particles ignited by the laser beam are observed to explode, indicating a substantial gas release accompanying ignition. For the heated filament experiments, the model predicts exothermic reactions at the temperatures at which ignition is observed experimentally; however, strong thermal contact between the metal filament and powder prevents the model from predicting the thermal runaway. It is suggested that oxygen gas release from decomposing CuO, as observed from particles exploding upon ignition in the laser beam, disrupts the thermal contact between the powder and filament; this phenomenon must be included in the filament ignition model to enable prediction of the temperature runaway.

  5. Segmenting the Femoral Head and Acetabulum in the Hip Joint Automatically Using a Multi-Step Scheme

    NASA Astrophysics Data System (ADS)

    Wang, Ji; Cheng, Yuanzhi; Fu, Yili; Zhou, Shengjun; Tamura, Shinichi

    We describe a multi-step approach for automatic segmentation of the femoral head and the acetabulum in the hip joint from three-dimensional (3D) CT images. Our segmentation method consists of the following steps: 1) construction of the valley-emphasized image by subtracting valleys from the original images; 2) initial segmentation of the bone regions by using conventional techniques, including initial thresholding and binary morphological operations, on the valley-emphasized image; 3) further segmentation of the bone regions by using iterative adaptive classification with the initial segmentation result; 4) detection of the rough bone boundaries based on the segmented bone regions; 5) 3D reconstruction of the bone surface from the rough bone boundaries obtained in step 4) by a network of triangles; 6) correction of all vertices of the 3D bone surface based on the normal direction of vertices; 7) adjustment of the bone surface based on the corrected vertices. We evaluated our approach on 35 CT patient data sets. Our experimental results show that our segmentation algorithm is more accurate and robust against noise than other conventional approaches for automatic segmentation of the femoral head and the acetabulum. The average root-mean-square (RMS) distance from manual reference segmentations created by experienced users was approximately 0.68 mm (the in-plane resolution of the CT data).
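    Steps 1) and 2) of the scheme can be sketched with standard image operations. The reading of "valley-emphasized" as a grayscale-closing bottom-hat subtraction, and all thresholds and structuring elements, are assumptions here; steps 3)-7), the adaptive classification and surface reconstruction, are omitted.

```python
import numpy as np
from scipy import ndimage

def segment_bone(ct_slice, threshold):
    """Steps 1) and 2) of the scheme, simplified to a single 2D slice."""
    # Step 1: valleys taken as the grayscale bottom-hat (closing minus image);
    # subtracting them deepens the thin gap between head and acetabulum.
    valleys = ndimage.grey_closing(ct_slice, size=(5, 5)) - ct_slice
    emphasized = ct_slice.astype(float) - valleys
    # Step 2: initial threshold plus binary morphological cleaning.
    mask = ndimage.binary_opening(emphasized > threshold,
                                  structure=np.ones((3, 3)))
    # Keep the two largest connected components as candidate bone regions.
    labels, n = ndimage.label(mask)
    sizes = ndimage.sum(mask, labels, range(1, n + 1))
    keep = 1 + np.argsort(sizes)[-2:]
    return np.isin(labels, keep)

# synthetic example: two bright "bones" separated by a thin dark valley
yy, xx = np.mgrid[:100, :100]
img = 1000.0 * (((yy - 40) ** 2 + (xx - 40) ** 2) < 300)
img += 1000.0 * (((yy - 70) ** 2 + (xx - 70) ** 2) < 300)
print(segment_bone(img, threshold=500).sum(), "bone pixels")
```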

  6. The independent use of self-instructions for the acquisition of untrained multi-step tasks for individuals with an intellectual disability: A review of the literature.

    PubMed

    Smith, Katie A; Shepley, Sally B; Alexander, Jennifer L; Ayres, Kevin M

    2015-05-01

    Systematic instruction on multi-step tasks (e.g., cooking, vocational skills, personal hygiene) is common for individuals with an intellectual disability. Unfortunately, when individuals with disabilities turn 22 years old, they no longer receive services in the public school system in most states, and systematic instruction often ends (Bouck, 2012). Rather than focusing instructional time on teacher-delivered training in the acquisition of specific multi-step tasks, teaching individuals with disabilities a pivotal skill, such as using self-instructional strategies, may be a more meaningful use of time. By learning self-instruction strategies that focus on generalization, individuals with disabilities can continue acquiring novel multi-step tasks in post-secondary settings and remediate skills that are lost over time. This review synthesizes the past 30 years of research related to generalized self-instruction to learn multi-step tasks, provides information about the types of self-instructional materials used and the ways in which participants received training to use them, and concludes with implications for practitioners and recommendations for future research.

  7. Continuous Video Modeling to Assist with Completion of Multi-Step Home Living Tasks by Young Adults with Moderate Intellectual Disability

    ERIC Educational Resources Information Center

    Mechling, Linda C.; Ayres, Kevin M.; Bryant, Kathryn J.; Foster, Ashley L.

    2014-01-01

    The current study evaluated a relatively new video-based procedure, continuous video modeling (CVM), to teach multi-step cleaning tasks to high school students with moderate intellectual disability. CVM in contrast to video modeling and video prompting allows repetition of the video model (looping) as many times as needed while the user completes…

  8. A new greedy randomised adaptive search procedure for Multiple Sequence Alignment.

    PubMed

    Layeb, Abdesslem; Selmane, Marwa; Elhoucine, Maroua Bencheikh

    2013-01-01

    Multiple Sequence Alignment (MSA) is one of the most challenging tasks in bioinformatics. It consists of aligning several sequences to show the fundamental relationships and common characteristics between a set of protein or nucleic acid sequences; the problem has been shown to be NP-complete when the number of sequences is >2. In this paper, a new incomplete algorithm based on a Greedy Randomised Adaptive Search Procedure (GRASP) is presented to deal with the MSA problem. The first GRASP phase is a new greedy algorithm based on a new random progressive method and a hybrid global/local algorithm. The second phase is an adaptive refinement method based on consensus alignment. The obtained results are very encouraging and show the feasibility and effectiveness of the proposed approach.
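    The GRASP skeleton itself, a greedy randomised construction from a restricted candidate list followed by local search, is generic; a compact version is sketched below on a toy 0/1 knapsack rather than on alignments. The paper's MSA-specific construction and consensus-based refinement operators are not reproduced, and all names and data are assumptions.

```python
import random

def construct(items, capacity, alpha, rng):
    """Greedy-randomised construction: pick randomly from a restricted
    candidate list (RCL) of the best value/weight ratios."""
    sol, room = set(), capacity
    cand = [i for i, (v, w) in enumerate(items) if w <= room]
    while cand:
        ranked = sorted(cand, key=lambda i: items[i][0] / items[i][1],
                        reverse=True)
        rcl = ranked[:max(1, int(alpha * len(ranked)))]
        i = rng.choice(rcl)
        sol.add(i)
        room -= items[i][1]
        cand = [j for j in cand if j not in sol and items[j][1] <= room]
    return sol

def local_search(sol, items, capacity):
    """First-improvement refinement: swap a chosen item for a better one."""
    improved = True
    while improved:
        improved = False
        room = capacity - sum(items[i][1] for i in sol)
        for i in list(sol):
            for j in range(len(items)):
                if (j not in sol and items[j][1] <= room + items[i][1]
                        and items[j][0] > items[i][0]):
                    sol.remove(i)
                    sol.add(j)
                    improved = True
                    break
            if improved:
                break
    return sol

def grasp(items, capacity, iters=200, alpha=0.3, seed=0):
    """The two GRASP phases: randomised greedy construction, then refinement."""
    rng = random.Random(seed)
    best, best_val = None, -1
    for _ in range(iters):
        sol = local_search(construct(items, capacity, alpha, rng),
                           items, capacity)
        val = sum(items[i][0] for i in sol)
        if val > best_val:
            best, best_val = sol, val
    return best, best_val

items = [(10, 5), (8, 4), (6, 3), (5, 2), (12, 7)]   # (value, weight)
print(grasp(items, capacity=10))
```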

  9. Greedy heuristic algorithm for solving series of EEE components classification problems

    NASA Astrophysics Data System (ADS)

    Kazakovtsev, A. L.; Antamoshkin, A. N.; Fedosov, V. V.

    2016-04-01

    Algorithms based on the agglomerative greedy heuristic demonstrate precise and stable results for clustering problems based on k-means and p-median models. Such algorithms are successfully implemented in the production of specialized EEE components for use in space systems, which includes testing each EEE device and detecting homogeneous production batches of the EEE components from the test results using p-median models. In this paper, the authors propose a new version of the genetic algorithm with the greedy agglomerative heuristic which allows solving series of problems. Such an algorithm is useful for solving the k-means and p-median clustering problems when the number of clusters is unknown. Computational experiments on real data show that the preciseness of the result decreases insignificantly in comparison with the initial genetic algorithm for solving a single problem.
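    A simplified stand-alone version of the greedy agglomerative idea can be sketched as follows: start from many centres and greedily delete the centre whose removal (after brief Lloyd refinement) degrades the k-means objective least, which yields a solution for every k in one pass. This is a sketch of the heuristic's spirit, not the authors' genetic algorithm; all parameters and the toy data are assumptions.

```python
import numpy as np

def lloyd(X, centers, iters=8):
    """A few Lloyd refinement steps; returns centres and assignments."""
    for _ in range(iters):
        assign = np.argmin(((X[:, None, :] - centers[None]) ** 2).sum(-1), axis=1)
        for c in range(len(centers)):
            pts = X[assign == c]
            if len(pts):
                centers[c] = pts.mean(axis=0)
    assign = np.argmin(((X[:, None, :] - centers[None]) ** 2).sum(-1), axis=1)
    return centers, assign

def sse(X, centers, assign):
    """k-means objective: total squared distance to assigned centres."""
    return float(((X - centers[assign]) ** 2).sum())

def greedy_agglomerative_series(X, k_max, k_min=1, seed=0):
    """Start with k_max centres and greedily delete the centre whose removal
    hurts the objective least, recording a solution for every k."""
    rng = np.random.default_rng(seed)
    centers, assign = lloyd(X, X[rng.choice(len(X), k_max, replace=False)].copy())
    solutions = {k_max: sse(X, centers, assign)}
    while len(centers) > k_min:
        best = None
        for drop in range(len(centers)):
            trial, a = lloyd(X, np.delete(centers, drop, axis=0), iters=3)
            err = sse(X, trial, a)
            if best is None or err < best[0]:
                best = (err, trial)
        err, centers = best
        solutions[len(centers)] = err
    return solutions

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(m, 0.3, (50, 2)) for m in ([0, 0], [3, 3], [0, 3])])
for k, err in sorted(greedy_agglomerative_series(X, k_max=6).items()):
    print(k, round(err, 1))     # the error curve flattens near the true k=3
```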

  10. A Subspace Pursuit–based Iterative Greedy Hierarchical Solution to the Neuromagnetic Inverse Problem

    PubMed Central

    Babadi, Behtash; Obregon-Henao, Gabriel; Lamus, Camilo; Hämäläinen, Matti S.; Brown, Emery N.; Purdon, Patrick L.

    2013-01-01

    Magnetoencephalography (MEG) is an important non-invasive method for studying activity within the human brain. Source localization methods can be used to estimate spatiotemporal activity from MEG measurements with high temporal resolution, but the spatial resolution of these estimates is poor due to the ill-posed nature of the MEG inverse problem. Recent developments in source localization methodology have emphasized temporal as well as spatial constraints to improve source localization accuracy, but these methods can be computationally intense. Solutions emphasizing spatial sparsity hold tremendous promise, since the underlying neurophysiological processes generating MEG signals are often sparse in nature, whether in the form of focal sources, or distributed sources representing large-scale functional networks. Recent developments in the theory of compressed sensing (CS) provide a rigorous framework to estimate signals with sparse structure. In particular, a class of CS algorithms referred to as greedy pursuit algorithms can provide both high recovery accuracy and low computational complexity. Greedy pursuit algorithms are difficult to apply directly to the MEG inverse problem because of the high-dimensional structure of the MEG source space and the high spatial correlation in MEG measurements. In this paper, we develop a novel greedy pursuit algorithm for sparse MEG source localization that overcomes these fundamental problems. This algorithm, which we refer to as the Subspace Pursuit-based Iterative Greedy Hierarchical (SPIGH) inverse solution, exhibits very low computational complexity while achieving very high localization accuracy. We evaluate the performance of the proposed algorithm using comprehensive simulations, as well as the analysis of human MEG data during spontaneous brain activity and somatosensory stimuli. These studies reveal substantial performance gains provided by the SPIGH algorithm in terms of computational complexity, localization accuracy
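    The greedy pursuit family the abstract builds on is exemplified by Subspace Pursuit (Dai and Milenkovic), sketched below for a generic sparse recovery problem. The hierarchical, MEG-specific structure of SPIGH itself is not reproduced; the test matrix and sparsity level are assumptions.

```python
import numpy as np

def subspace_pursuit(Phi, y, K, max_iter=20):
    """Subspace Pursuit: merge the current support with the K columns most
    correlated with the residual, solve least squares on the merged set,
    keep the K largest coefficients, and stop when the residual stalls."""
    support = np.argsort(np.abs(Phi.T @ y))[-K:]
    coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
    resid = y - Phi[:, support] @ coef
    for _ in range(max_iter):
        extra = np.argsort(np.abs(Phi.T @ resid))[-K:]
        merged = np.union1d(support, extra)
        coef, *_ = np.linalg.lstsq(Phi[:, merged], y, rcond=None)
        keep = merged[np.argsort(np.abs(coef))[-K:]]
        coef_k, *_ = np.linalg.lstsq(Phi[:, keep], y, rcond=None)
        new_resid = y - Phi[:, keep] @ coef_k
        if np.linalg.norm(new_resid) >= np.linalg.norm(resid):
            break                     # residual stopped shrinking
        support, resid = keep, new_resid
    x = np.zeros(Phi.shape[1])
    coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
    x[support] = coef
    return x

rng = np.random.default_rng(0)
Phi = rng.normal(size=(60, 200)) / np.sqrt(60)
x_true = np.zeros(200)
x_true[[5, 50, 120]] = [1.0, -2.0, 1.5]
x_hat = subspace_pursuit(Phi, Phi @ x_true, K=3)
print(np.nonzero(x_hat)[0], np.round(x_hat[np.nonzero(x_hat)], 2))
```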

  11. A transition from using multi-step procedures to a fully integrated system for performing extracorporeal photopheresis: A comparison of costs and efficiencies.

    PubMed

    Azar, Nabih; Leblond, Veronique; Ouzegdouh, Maya; Button, Paul

    2017-04-17

    The Pitié Salpêtrière Hospital Hemobiotherapy Department, Paris, France, has been providing extracorporeal photopheresis (ECP) since November 2011, and started using the Therakos® CELLEX® fully integrated system in 2012. This report summarizes our single-center experience of transitioning from multi-step ECP procedures to the fully integrated ECP system, considering the capacity and cost implications. The total number of ECP procedures performed 2011-2015 was derived from department records. The time taken to complete a single ECP treatment using a multi-step technique and the fully integrated system at our department was assessed. Resource costs (2014 €) were obtained for materials and calculated for personnel time required. Time-driven activity-based costing methods were applied to provide a cost comparison. The number of ECP treatments per year increased from 225 (2012) to 727 (2015). A single multi-step procedure took 270 min, compared to 120 min for the fully integrated system. The total calculated per-session cost of performing ECP using the multi-step procedure was greater than with the CELLEX® system (€1,429.37 and €1,264.70 per treatment, respectively). For hospitals considering a transition from multi-step procedures to fully integrated methods for ECP where cost may be a barrier, time-driven activity-based costing should be utilized to gain a more comprehensive understanding of the full benefit that such a transition offers. The example from our department confirmed that there were not just cost and time savings: the time efficiencies gained with CELLEX® allow for more patient treatments per year.
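    The reported figures translate directly into per-treatment savings; the small calculation below reproduces them, with the scaling to the 2015 workload added here purely for illustration.

```python
# Per-treatment and (illustrative) annual savings implied by the reported data.
multi_step_min, multi_step_eur = 270, 1429.37
integrated_min, integrated_eur = 120, 1264.70
sessions_2015 = 727

eur_saved = multi_step_eur - integrated_eur          # per treatment
hours_saved = (multi_step_min - integrated_min) / 60
print(f"per treatment: {eur_saved:.2f} EUR, {hours_saved:.1f} h saved")
print(f"at the 2015 workload: {eur_saved * sessions_2015:,.0f} EUR, "
      f"{hours_saved * sessions_2015:,.0f} h per year")
```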

  12. GreedyPlus: An Algorithm for the Alignment of Interface Interaction Networks.

    PubMed

    Law, Brian; Bader, Gary D

    2015-07-13

    The increasing ease and accuracy of protein-protein interaction detection has resulted in the ability to map the interactomes of multiple species. We now have an opportunity to compare species to better understand how interactomes evolve. As DNA and protein sequence alignment algorithms were required for comparative genomics, network alignment algorithms are required for comparative interactomics. A number of network alignment methods have been developed for protein-protein interaction networks, where proteins are represented as vertices linked by edges if they interact. Recently, protein interactions have been mapped at the level of amino acid positions, which can be represented as an interface-interaction network (IIN), where vertices represent binding sites, such as protein domains and short sequence motifs. However, current algorithms are not designed to align these networks and generally fail to do so in practice. We present a greedy algorithm, GreedyPlus, for IIN alignment, combining data from diverse sources, including network, protein and binding site properties, to identify putative orthologous relationships between interfaces in available worm and yeast data. GreedyPlus is fast and simple, allowing for easy customization of behaviour, yet still capable of generating biologically meaningful network alignments.

  13. Greedy and Linear Ensembles of Machine Learning Methods Outperform Single Approaches for QSPR Regression Problems.

    PubMed

    Kew, William; Mitchell, John B O

    2015-09-01

    The application of Machine Learning to cheminformatics is a large and active field of research, but few papers discuss whether ensembles of different Machine Learning methods can improve upon the performance of their component methodologies. Here we investigated a variety of methods, including kernel-based, tree, linear, and neural network methods, as well as both greedy and linear ensemble methods. These were all tested against a standardised methodology for regression with data relevant to the pharmaceutical development process. This investigation focused on QSPR problems within drug-like chemical space. We aimed to investigate which methods perform best, and how the 'wisdom of crowds' principle can be applied to ensemble predictors. It was found that no single method performs best for all problems, but that a dynamic, well-structured ensemble predictor would perform very well across the board, usually providing an improvement in performance over the best single method. Its use of weighting factors allows the greedy ensemble to acquire a bigger contribution from the better-performing models, which helps the greedy ensemble generally outperform the simpler linear ensemble. Choice of data preprocessing methodology was found to be crucial to the performance of each method, too.
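    The weighted greedy ensemble idea can be illustrated with a selection-with-replacement sketch in the style of Caruana's ensemble selection: models that keep reducing validation RMSE are picked repeatedly, and their selection counts become weights. This is a generic sketch, not the paper's exact procedure; the toy models and data are assumptions.

```python
import numpy as np

def greedy_ensemble(preds, y, rounds=50):
    """Greedy selection with replacement: repeatedly add the model whose
    inclusion most reduces validation RMSE; selection counts act as the
    ensemble's weighting factors."""
    n_models = preds.shape[0]
    counts = np.zeros(n_models, dtype=int)
    running = np.zeros_like(y, dtype=float)   # sum of selected predictions
    for _ in range(rounds):
        total = counts.sum()
        rmse = [np.sqrt(np.mean(((running + preds[m]) / (total + 1) - y) ** 2))
                for m in range(n_models)]
        best = int(np.argmin(rmse))
        counts[best] += 1
        running += preds[best]
    return counts / counts.sum()              # normalised weights

# toy validation set: three models of the same target with growing noise
rng = np.random.default_rng(0)
y = rng.normal(size=200)
preds = np.stack([y + rng.normal(0, s, 200) for s in (0.3, 0.6, 1.2)])
print(np.round(greedy_ensemble(preds, y), 2))  # better models earn more weight
```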

  14. Estimation of Effective Soil Hydraulic Properties Using Data From High Resolution Gamma Densiometry and Tensiometers of Multi-Step-Outflow Experiments

    NASA Astrophysics Data System (ADS)

    Werisch, Stefan; Lennartz, Franz; Bieberle, Andre

    2013-04-01

    Dynamic Multi-Step Outflow (MSO) experiments serve to estimate the parameters of soil hydraulic functions such as the Mualem-van Genuchten model. The soil hydraulic parameters are derived, using inverse modeling techniques, from outflow records and corresponding matric potential measurements, commonly from a single tensiometer. We modified the experimental set-up to allow simultaneous measurement of the matric potential with three tensiometers and of the water content using a high-resolution gamma-ray densiometry measurement system (Bieberle et al., 2007; Hampel et al., 2007). Different combinations of the measured time series were used for the estimation of effective soil hydraulic properties, representing different degrees of information about the "hydraulic reality" of the sample. The inverse modeling task was solved with the multimethod search algorithm AMALGAM (Vrugt et al., 2007) in combination with the Hydrus1D model (Šimúnek et al., 2008). The resulting effective soil hydraulic parameters then allow simulation of the MSO experiment and comparison of model results with observations. The results show that the information from a single tensiometer together with the outflow record results in a set of effective soil hydraulic parameters producing an overall good agreement between simulation and observation at the location of that tensiometer. Significantly deviating results are obtained for the other tensiometer positions using this parameter set. Inclusion of more information, such as additional matric potential measurements with the corresponding water contents, within the optimization procedure leads to different, more representative hydraulic parameters which improve the overall agreement significantly. These findings indicate that more information about the soil hydraulic state variables in space and time is necessary to obtain effective soil hydraulic properties of soil core samples. Bieberle, A., Kronenberg, J., Schleicher, E

  16. Rapid determination and chemical change tracking of benzoyl peroxide in wheat flour by multi-step IR macro-fingerprinting.

    PubMed

    Guo, Xiao-Xi; Hu, Wei; Liu, Yuan; Sun, Su-Qin; Gu, Dong-Chen; He, Helen; Xu, Chang-Hua; Wang, Xi-Chang

    2016-02-05

    BPO is often added to wheat flour as a flour improver, but its excessive use and its edibility are of increasing concern. A multi-step IR macro-fingerprinting method was employed to identify BPO in wheat flour and unveil its changes during storage. BPO contained in wheat flour (<3.0 mg/kg) was difficult to identify from infrared spectra, with correlation coefficients between wheat flour and wheat flour samples containing BPO all close to 0.98. By applying second derivative spectroscopy, obvious differences between wheat flour and wheat flour containing BPO before and after storage were disclosed in the range of 1500-1400 cm⁻¹. The peak at 1450 cm⁻¹, which belonged to BPO, was blue-shifted to 1453 cm⁻¹ (1455 cm⁻¹), belonging to benzoic acid, after one week of storage, indicating that BPO changed into benzoic acid during storage. Moreover, when two-dimensional correlation infrared spectroscopy (2DCOS-IR) was used to track changes of BPO in wheat flour (0.05 mg/g) within one week, the intensities of auto-peaks at 1781 cm⁻¹ and 669 cm⁻¹, which belonged to BPO and benzoic acid, respectively, changed inversely, indicating that BPO decomposed into benzoic acid. Another auto-peak at 1767 cm⁻¹, which does not belong to benzoic acid, was rising simultaneously. By heating perturbation treatment of BPO in wheat flour based on 2DCOS-IR and spectral subtraction analysis, it was found that BPO in wheat flour not only decomposed into benzoic acid and benzoate but also produced other deleterious substances, e.g., benzene. This study offers a promising, time-saving method with minimal pretreatment to identify BPO in wheat flour and its chemical products during storage in a holistic manner.

  17. Epigenetic Genes and Emotional Reactivity to Daily Life Events: A Multi-Step Gene-Environment Interaction Study

    PubMed Central

    Pishva, Ehsan; Drukker, Marjan; Viechtbauer, Wolfgang; Decoster, Jeroen; Collip, Dina; van Winkel, Ruud; Wichers, Marieke; Jacobs, Nele; Thiery, Evert; Derom, Catherine; Geschwind, Nicole; van den Hove, Daniel; Lataster, Tineke; Myin-Germeys, Inez; van Os, Jim

    2014-01-01

    Recent human and animal studies suggest that epigenetic mechanisms mediate the impact of environment on development of mental disorders. Therefore, we hypothesized that polymorphisms in epigenetic-regulatory genes impact stress-induced emotional changes. A multi-step, multi-sample gene-environment interaction analysis was conducted to test whether 31 single nucleotide polymorphisms (SNPs) in epigenetic-regulatory genes, i.e. the three DNA methyltransferase genes DNMT1, DNMT3A and DNMT3B, and methylenetetrahydrofolate reductase (MTHFR), moderate emotional responses to stressful and pleasant stimuli in daily life as measured by Experience Sampling Methodology (ESM). In the first step, main and interactive effects were tested in a sample of 112 healthy individuals. Significant associations in this discovery sample were then investigated in a population-based sample of 434 individuals for replication. SNPs showing significant effects in both the discovery and replication samples were subsequently tested in three other samples: (i) 85 unaffected siblings of patients with psychosis, (ii) 110 patients with psychotic disorders, and (iii) 126 patients with a history of major depressive disorder. Multilevel linear regression analyses showed no significant association between the SNPs and negative or positive affect. No SNPs moderated the effect of pleasant stimuli on positive affect. Three SNPs of DNMT3A (rs11683424, rs1465764, rs1465825) and one SNP of MTHFR (rs1801131) moderated the effect of stressful events on negative affect. Only rs11683424 of DNMT3A showed consistent directions of effect in the majority of the 5 samples. These data provide the first evidence that emotional responses to daily life stressors may be moderated by genetic variation in the genes involved in the epigenetic machinery. PMID:24967710

  18. Evaluation and optimisation of phenomenological multi-step soot model for spray combustion under diesel engine-like operating conditions

    NASA Astrophysics Data System (ADS)

    Pang, Kar Mun; Jangi, Mehdi; Bai, Xue-Song; Schramm, Jesper

    2015-05-01

    In this work, a two-dimensional computational fluid dynamics study of an n-heptane combustion event and the associated soot formation process in a constant volume combustion chamber is reported. The key interest here is to evaluate the sensitivity of the chemical kinetics and the submodels of a semi-empirical soot model in predicting the associated events. Numerical computation is performed using an open-source code, and a chemistry coordinate mapping approach is used to expedite the calculation. A library consisting of various phenomenological multi-step soot models is constructed and integrated with the spray combustion solver. Prior to the soot modelling, combustion simulations are carried out. Numerical results show that the ignition delay times and lift-off lengths exhibit good agreement with the experimental measurements across a wide range of operating conditions, apart from the cases with ambient temperature lower than 850 K. The variation of the soot precursor production with respect to the change of ambient oxygen levels qualitatively agrees with that of the conceptual models when the skeletal n-heptane mechanism is integrated with a reduced pyrene chemistry. Subsequently, a comprehensive sensitivity analysis is carried out to appraise the existing soot formation and oxidation submodels. It is revealed that the soot formation is captured when the surface growth rate is calculated using a square root function of the soot specific surface area and when a pressure-dependent model constant is considered. An optimised soot model is then proposed based on the knowledge gained through this exercise. With the optimised model implemented, the simulated soot onset and transport phenomena before reaching quasi-steady state agree reasonably well with the experimental observation. The variations of spatial soot distribution and of soot mass produced at oxygen molar fractions ranging from 10.0 to 21.0%, for both low and high density conditions, are also reproduced.

  19. Epigenetic genes and emotional reactivity to daily life events: a multi-step gene-environment interaction study.

    PubMed

    Pishva, Ehsan; Drukker, Marjan; Viechtbauer, Wolfgang; Decoster, Jeroen; Collip, Dina; van Winkel, Ruud; Wichers, Marieke; Jacobs, Nele; Thiery, Evert; Derom, Catherine; Geschwind, Nicole; van den Hove, Daniel; Lataster, Tineke; Myin-Germeys, Inez; van Os, Jim; Rutten, Bart P F; Kenis, Gunter

    2014-01-01

    Recent human and animal studies suggest that epigenetic mechanisms mediate the impact of environment on development of mental disorders. Therefore, we hypothesized that polymorphisms in epigenetic-regulatory genes impact stress-induced emotional changes. A multi-step, multi-sample gene-environment interaction analysis was conducted to test whether 31 single nucleotide polymorphisms (SNPs) in epigenetic-regulatory genes, i.e. the three DNA methyltransferase genes DNMT1, DNMT3A and DNMT3B, and methylenetetrahydrofolate reductase (MTHFR), moderate emotional responses to stressful and pleasant stimuli in daily life as measured by Experience Sampling Methodology (ESM). In the first step, main and interactive effects were tested in a sample of 112 healthy individuals. Significant associations in this discovery sample were then investigated in a population-based sample of 434 individuals for replication. SNPs showing significant effects in both the discovery and replication samples were subsequently tested in three other samples: (i) 85 unaffected siblings of patients with psychosis, (ii) 110 patients with psychotic disorders, and (iii) 126 patients with a history of major depressive disorder. Multilevel linear regression analyses showed no significant association between the SNPs and negative or positive affect. No SNPs moderated the effect of pleasant stimuli on positive affect. Three SNPs of DNMT3A (rs11683424, rs1465764, rs1465825) and one SNP of MTHFR (rs1801131) moderated the effect of stressful events on negative affect. Only rs11683424 of DNMT3A showed consistent directions of effect in the majority of the 5 samples. These data provide the first evidence that emotional responses to daily life stressors may be moderated by genetic variation in the genes involved in the epigenetic machinery.

  20. Multi-step Monte Carlo calculations applied to nuclear reactor instrumentation - source definition and renormalization to physical values

    SciTech Connect

    Radulovic, Vladimir; Barbot, Loic; Fourmentel, Damien; Villard, Jean-Francois; Snoj, Luka; Zerovnik, Gasper; Trkov, Andrej

    2015-07-01

    Significant efforts have been made over the last few years in the French Alternative Energies and Atomic Energy Commission (CEA) to adopt multi-step Monte Carlo calculation schemes in the investigation and interpretation of the response of nuclear reactor instrumentation detectors (e.g., miniature ionization chambers (MICs) and self-powered neutron or gamma detectors (SPNDs and SPGDs)). The first step consists of the calculation of the primary data, i.e., evaluation of the neutron and gamma flux levels and spectra in the environment where the detector is located, using a computational model of the complete nuclear reactor core and its surroundings. These data are subsequently used to define sources for the following calculation steps, in which only a model of the detector under investigation is used. This approach enables calculations with satisfactory statistical uncertainties (of the order of a few %) within regions which are very small in size (the typical volume of which is of the order of 1 mm³). The main drawback of a calculation scheme as described above is that perturbation effects on the radiation conditions caused by the detectors themselves are not taken into account. Depending on the detector, the nuclear reactor and the irradiation position, the perturbation in the neutron flux as primary data may reach 10 to 20%. A further issue is whether the model used in the second step calculations yields physically representative results. This is generally not the case, as significant deviations may arise, depending on the source definition. In particular, as presented in the paper, the injudicious use of special options aimed at increasing the computation efficiency (e.g., reflective boundary conditions) may introduce unphysical bias in the calculated flux levels and distortions in the spectral shapes. This paper presents examples of the issues described above related to a case study on the interpretation of the signal from different types of SPNDs, which
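
    The "source definition" step described above can be illustrated with a toy sampling routine: a minimal sketch, assuming the step-1 flux tally is available as a simple energy histogram (the bin edges, tally values, and uniform-within-bin sampling below are illustrative assumptions, not the CEA scheme):

```python
import numpy as np

def sample_source(energy_edges, flux_tally, n_particles, seed=0):
    """Step-2 source definition: sample particle energies from the
    normalized step-1 flux spectrum (a histogram tally)."""
    rng = np.random.default_rng(seed)
    p = np.asarray(flux_tally, dtype=float)
    p /= p.sum()                              # normalize the tally to a pdf
    bins = rng.choice(len(p), size=n_particles, p=p)
    lo, hi = energy_edges[bins], energy_edges[bins + 1]
    return rng.uniform(lo, hi)                # uniform energy within each bin

edges = np.logspace(-9, 1, 11)                # 10 hypothetical energy bins (MeV)
tally = [5, 9, 7, 4, 2, 1, 1, 2, 3, 1]        # hypothetical step-1 flux tally
print(sample_source(edges, tally, n_particles=5))
```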

  1. Comparison between stochastic and machine learning methods for hydrological multi-step ahead forecasting: All forecasts are wrong!

    NASA Astrophysics Data System (ADS)

    Papacharalampous, Georgia; Tyralis, Hristos; Koutsoyiannis, Demetris

    2017-04-01

    Machine learning (ML) is considered to be a promising approach to hydrological processes forecasting. We conduct a comparison between several stochastic and ML point estimation methods by performing large-scale computational experiments based on simulations. The purpose is to provide generalized results, while the respective comparisons in the literature are usually based on case studies. The stochastic methods used include simple methods and models from the frequently used families of Autoregressive Moving Average (ARMA), Autoregressive Fractionally Integrated Moving Average (ARFIMA) and Exponential Smoothing models. The ML methods used are Random Forests (RF), Support Vector Machines (SVM) and Neural Networks (NN). The comparison refers to the multi-step ahead forecasting properties of the methods. A total of 20 methods are used, of which 9 are ML methods. 12 simulation experiments are performed, each using 2000 simulated time series of 310 observations. The time series are simulated using stochastic processes from the families of ARMA and ARFIMA models. Each time series is split into a fitting set (first 300 observations) and a testing set (last 10 observations). The comparative assessment of the methods is based on 18 metrics that quantify the methods' performance according to several criteria related to the accurate forecasting of the testing set, the capturing of its variation and the correlation between the testing and forecasted values. The most important outcome of this study is that there is no uniformly better or worse method. However, there are methods that are regularly better or worse than others with respect to specific metrics. It appears that, although a general ranking of the methods is not possible, their classification based on their similar or contrasting performance in the various metrics is possible to some extent. Another important conclusion is that more sophisticated methods do not necessarily provide better forecasts.
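
    The experimental design described here (fit on the first 300 observations, multi-step forecasts over the last 10) can be sketched with a minimal example that pits one simple stochastic method against a naive baseline; the AR(1) fit and the ARMA(1,1) simulator below are illustrative stand-ins for the paper's much larger set of methods and metrics:

```python
import numpy as np

def simulate_arma11(n, phi=0.6, theta=0.3, seed=0):
    """Simulate an ARMA(1,1) series: x_t = phi*x_{t-1} + e_t + theta*e_{t-1}."""
    rng = np.random.default_rng(seed)
    e = rng.standard_normal(n)
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = phi * x[t - 1] + e[t] + theta * e[t - 1]
    return x

def ar1_multistep(train, horizon):
    """Least-squares AR(1) fit followed by recursive multi-step forecasting."""
    x0, x1 = train[:-1], train[1:]
    phi = np.dot(x0 - x0.mean(), x1 - x1.mean()) / np.dot(x0 - x0.mean(), x0 - x0.mean())
    c = x1.mean() - phi * x0.mean()
    preds, last = [], train[-1]
    for _ in range(horizon):
        last = c + phi * last                 # feed forecasts back in
        preds.append(last)
    return np.array(preds)

series = simulate_arma11(310)
train, test = series[:300], series[300:]      # fitting / testing split
for name, fc in [("AR(1)", ar1_multistep(train, 10)),
                 ("mean", np.full(10, train.mean()))]:
    rmse = np.sqrt(np.mean((fc - test) ** 2))
    print(f"{name:6s} RMSE over the 10-step horizon: {rmse:.3f}")
```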

  2. Robust Nonlinear Regression: A Greedy Approach Employing Kernels With Application to Image Denoising

    NASA Astrophysics Data System (ADS)

    Papageorgiou, George; Bouboulis, Pantelis; Theodoridis, Sergios

    2017-08-01

    We consider the task of robust non-linear regression in the presence of both inlier noise and outliers. Assuming that the unknown non-linear function belongs to a Reproducing Kernel Hilbert Space (RKHS), our goal is to estimate the set of the associated unknown parameters. Due to the presence of outliers, common techniques such as Kernel Ridge Regression (KRR) or Support Vector Regression (SVR) turn out to be inadequate. Instead, we employ sparse modeling arguments to explicitly model and estimate the outliers, adopting a greedy approach. The proposed robust scheme, i.e., the Kernel Greedy Algorithm for Robust Denoising (KGARD), is inspired by the classical Orthogonal Matching Pursuit (OMP) algorithm. Specifically, the proposed method alternates between a KRR task and an OMP-like selection step. Theoretical results concerning the identification of the outliers are provided. Moreover, KGARD is compared against other cutting-edge methods, where its performance is evaluated via a set of experiments with various types of noise. Finally, the proposed robust estimation framework is applied to the task of image denoising, and its enhanced performance in the presence of outliers is demonstrated.
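
    A minimal sketch of the alternating scheme the abstract describes, assuming a Gaussian kernel and a simplified update that absorbs the largest residual as an outlier estimate (the published KGARD re-estimates the sparse outlier vector jointly; all parameter values here are arbitrary):

```python
import numpy as np

def kgard_sketch(X, y, gamma=1.0, lam=0.1, n_outliers=5):
    """Alternate a kernel ridge regression (KRR) step with an OMP-like
    greedy selection of the sample with the largest residual."""
    n = len(y)
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-gamma * d2)                   # Gaussian (RBF) kernel matrix
    support, u = [], np.zeros(n)              # outlier indices and estimates
    for _ in range(n_outliers + 1):
        alpha = np.linalg.solve(K + lam * np.eye(n), y - u)   # KRR step
        r = y - u - K @ alpha                 # current residuals
        j = int(np.argmax(np.abs(r)))         # OMP-like selection step
        if j in support:
            break
        support.append(j)
        u[j] += r[j]                          # treat that residual as an outlier
    return alpha, support

rng = np.random.default_rng(1)
X = rng.uniform(-3, 3, size=(60, 1))
y = np.sinc(X[:, 0]) + 0.05 * rng.standard_normal(60)
y[[5, 17, 42]] += 3.0                         # inject gross outliers
_, flagged = kgard_sketch(X, y)
print("flagged outlier indices:", sorted(flagged))
```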

  3. Incremental approach for radial basis functions mesh deformation with greedy algorithm

    NASA Astrophysics Data System (ADS)

    Selim, Mohamed M.; Koomullil, Roy P.; Shehata, Ahmed S.

    2017-07-01

    Mesh deformation is an important element of any fluid-structure interaction simulation. In this article, a new methodology is presented for the deformation of volume meshes using incremental radial basis function (RBF) based interpolation. A greedy algorithm is used to select a small subset of the surface nodes iteratively. Two incremental approaches are introduced to solve the RBF system of equations: (1) a block matrix inversion based approach and (2) a modified LU decomposition approach. The incremental approach decreases the computational complexity of solving the system of equations within each greedy algorithm iteration from O(n³) to O(n²). Results are presented from an accuracy study using specified deformations on a 2D surface. Mesh deformations for bending and twisting of a 3D rectangular supercritical wing have been demonstrated. Outcomes showed that the incremental approaches reduce the CPU time by up to 67% as compared to a traditional RBF matrix solver. Finally, the proposed mesh deformation approach was integrated within a fluid-structure interaction solver for investigating flow-induced cantilever beam vibration.
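
    The O(n³) to O(n²) saving comes from updating the inverse of the RBF interpolation matrix incrementally as the greedy loop appends one control point per iteration. A sketch of the block-matrix-inversion variant, assuming a Gaussian basis and a scalar displacement field (the solver in the article handles vector displacements and full volume meshes):

```python
import numpy as np

def rbf(r, eps=1.0):
    """Gaussian radial basis function."""
    return np.exp(-(eps * r) ** 2)

def block_inverse_update(Ainv, b, c):
    """Inverse of [[A, b], [b^T, c]] from A^{-1} via the Schur complement,
    an O(k^2) update instead of an O(k^3) re-factorization."""
    s = c - b @ Ainv @ b                      # scalar Schur complement
    Ab = Ainv @ b
    return np.block([[Ainv + np.outer(Ab, Ab) / s, -Ab[:, None] / s],
                     [-Ab[None, :] / s, np.array([[1.0 / s]])]])

def greedy_rbf_select(pts, disp, n_select=10):
    """Greedily add the surface node with the worst interpolation error."""
    chosen = [0]
    Ainv = np.array([[1.0 / rbf(0.0)]])
    for _ in range(n_select - 1):
        P = pts[chosen]
        Phi = rbf(np.linalg.norm(pts[:, None] - P[None], axis=2))
        w = Ainv @ disp[chosen]               # RBF weights from current inverse
        err = np.abs(Phi @ w - disp)          # error at every surface node
        j = int(np.argmax(err))
        b = rbf(np.linalg.norm(P - pts[j], axis=1))
        Ainv = block_inverse_update(Ainv, b, rbf(0.0))
        chosen.append(j)
    return chosen

rng = np.random.default_rng(0)
pts = rng.uniform(0, 1, size=(200, 2))        # 2D surface nodes
disp = np.sin(3 * pts[:, 0]) * pts[:, 1]      # prescribed deformation
print("selected control nodes:", greedy_rbf_select(pts, disp))
```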

  4. Greedy rule generation from discrete data and its use in neural network rule extraction.

    PubMed

    Odajima, Koichi; Hayashi, Yoichi; Tianxia, Gong; Setiono, Rudy

    2008-09-01

    This paper proposes a GRG (Greedy Rule Generation) algorithm, a new method for generating classification rules from a data set with discrete attributes. The algorithm is "greedy" in the sense that at every iteration, it searches for the best rule to generate. The criteria for the best rule include the number of samples and the size of subspaces that it covers, as well as the number of attributes in the rule. This method is employed for extracting rules from neural networks that have been trained and pruned for solving classification problems. The classification rules are extracted from the neural networks using the standard decompositional approach. Neural networks with one hidden layer are trained and the proposed GRG algorithm is applied to their discretized hidden unit activation values. Our experimental results show that neural network rule extraction with the GRG method produces rule sets that are accurate and concise. Application of GRG directly on three medical data sets with discrete attributes also demonstrates its effectiveness for rule generation.
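
    A toy sketch of the greedy loop the abstract outlines: at each iteration, keep the single attribute-value rule covering the most not-yet-covered samples of one class. The real GRG criterion also weighs the size of the covered subspace and the number of attributes per rule; the data below stand in for discretized hidden-unit activations of a pruned network:

```python
from collections import Counter

def greedy_rules(samples, labels):
    """Greedy rule generation over discrete attributes: each rule is one
    (attribute, value) test predicting a class."""
    uncovered = set(range(len(samples)))
    rules = []
    while uncovered:
        counts = Counter()
        for i in uncovered:
            for a, v in enumerate(samples[i]):
                counts[(a, v, labels[i])] += 1
        (a, v, cls), _ = counts.most_common(1)[0]   # best rule this iteration
        rules.append((a, v, cls))
        uncovered -= {i for i in uncovered
                      if samples[i][a] == v and labels[i] == cls}
    return rules

samples = [(0, 1), (0, 1), (0, 0), (1, 0), (1, 1), (1, 0)]   # toy discrete data
labels = ["A", "A", "A", "B", "B", "B"]
for a, v, cls in greedy_rules(samples, labels):
    print(f"IF attr{a} == {v} THEN class {cls}")
```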

  5. Greedy Set Cover Field Selection for Multi-object Spectroscopy in C++ MPI

    NASA Astrophysics Data System (ADS)

    Stenborg, T. N.

    2015-09-01

    Multi-object spectrographs allow efficient observation of clustered targets. Observational programs of many targets not encompassed within a telescope's field of view, however, require multiple pointings. Here, a greedy set cover algorithmic approach to efficient field selection in such a scenario is examined. The goal of this approach is not to minimize the total number of pointings needed to cover a given target set, but rather to maximize the observational return for a restricted number of pointings. Telescope field of view and maximum targets per field are input parameters, allowing algorithm application to observation planning for the current range of active multi-object spectrographs (e.g. the 2dF/AAOmega, Fiber Large Array Multi Element Spectrograph, Fiber Multi-Object Spectrograph, Hectochelle, Hectospec and Hydra systems), and for any future systems. A parallel version of the algorithm is implemented with the message passing interface, facilitating execution on both shared and distributed memory systems.
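
    The "restricted number of pointings" goal makes this a budgeted maximum-coverage problem rather than classical set cover. A serial sketch (the published implementation is parallelized with MPI in C++; here candidate fields are simply centered on targets, and the per-field cap truncates arbitrarily rather than by fiber geometry):

```python
import numpy as np

def select_fields(targets, fov_radius, max_per_field, n_pointings):
    """Greedy field selection: maximize targets observed within a fixed
    number of pointings, honoring a per-field target (fiber) cap."""
    remaining = set(range(len(targets)))
    pointings = []
    for _ in range(n_pointings):
        best_center, best_cover = None, np.array([], dtype=int)
        for c in remaining:                   # candidate fields centered on targets
            idx = np.fromiter(remaining, dtype=int)
            d = np.linalg.norm(targets[idx] - targets[c], axis=1)
            cover = idx[d <= fov_radius][:max_per_field]
            if len(cover) > len(best_cover):
                best_center, best_cover = c, cover
        if best_center is None or len(best_cover) == 0:
            break
        pointings.append((targets[best_center], best_cover))
        remaining -= set(best_cover.tolist())
    return pointings

rng = np.random.default_rng(2)
targets = rng.uniform(0, 10, size=(300, 2))   # toy flat-sky coordinates (degrees)
plan = select_fields(targets, fov_radius=1.0, max_per_field=40, n_pointings=5)
covered = sum(len(c) for _, c in plan)
print(f"{covered} of {len(targets)} targets covered in {len(plan)} pointings")
```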

  6. A Bi-objective Model Inspired Greedy Algorithm for Test Suite Minimization

    NASA Astrophysics Data System (ADS)

    Parsa, Saeed; Khalilian, Alireza

    Regression testing is a critical activity which occurs during the maintenance stage of the software lifecycle. However, it requires large numbers of test cases to assure the attainment of a certain degree of quality. As a result, test suite sizes may grow significantly. To address this issue, Test Suite Reduction techniques have been proposed. However, suite size reduction may lead to significant loss of fault detection efficacy. To deal with this problem, a greedy algorithm is presented in this paper. This algorithm attempts to select the test case which satisfies the maximum number of testing requirements while having minimum overlap in requirements coverage with other test cases. In order to evaluate the proposed algorithm, experiments have been conducted on the Siemens suite and the Space program. The results demonstrate the effectiveness of the proposed algorithm, retaining the fault detection capability of the suites while achieving significant suite size reduction.
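
    A toy sketch of the bi-objective selection rule: score each candidate test case by the number of requirements it newly satisfies minus its coverage overlap with requirements already covered (the exact weighting in the paper may differ):

```python
def minimize_suite(tests):
    """Greedy test-suite reduction; `tests` maps a test-case id to the set
    of testing requirements it satisfies."""
    all_reqs = set().union(*tests.values())
    covered, selected = set(), []
    while covered != all_reqs:
        candidates = [t for t in tests if t not in selected]
        if not candidates:
            break
        best = max(candidates,
                   key=lambda t: len(tests[t] - covered) - len(tests[t] & covered))
        if not tests[best] - covered:
            break                             # remaining tests add no coverage
        selected.append(best)
        covered |= tests[best]
    return selected

suite = {"t1": {1, 2, 3}, "t2": {3, 4}, "t3": {4, 5, 6}, "t4": {1, 6}}
print("reduced suite:", minimize_suite(suite))   # e.g. ['t1', 't3']
```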

  7. Automatic Characterization of Cross-section Coated Particle Nuclear Fuel using Greedy Coupled Bayesian Snakes

    SciTech Connect

    Price, Jeffery R; Aykac, Deniz; Hunn, John D; Kercher, Andrew K

    2007-01-01

    We describe new image analysis developments in support of the U.S. Department of Energy's (DOE) Advanced Gas Reactor (AGR) Fuel Development and Qualification Program. We previously reported a non-iterative, Bayesian approach for locating the boundaries of different particle layers in cross-sectional imagery. That method, however, had to be initialized by manual preprocessing where a user must select two points in each image, one indicating the particle center and the other indicating the first layer interface. Here, we describe a technique designed to eliminate the manual preprocessing and provide full automation. With a low resolution image, we use 'EdgeFlow' to approximate the layer boundaries with circular templates. Multiple snakes are initialized to these circles and deformed using a greedy Bayesian strategy that incorporates coupling terms as well as a priori information on the layer thicknesses and relative contrast. We show results indicating the effectiveness of the proposed method.

  8. GRISOTTO: A greedy approach to improve combinatorial algorithms for motif discovery with prior knowledge

    PubMed Central

    2011-01-01

    Background Position-specific priors (PSPs) have been used with success to boost EM and Gibbs sampler-based motif discovery algorithms. PSP information has been computed from different sources, including orthologous conservation, DNA duplex stability, and nucleosome positioning. Prior information has not yet been exploited in the context of combinatorial algorithms. Moreover, priors have been used only independently, and the gain of combining priors from different sources has not yet been studied. Results We extend RISOTTO, a combinatorial algorithm for motif discovery, by post-processing its output with a greedy procedure that uses prior information. PSPs from different sources are combined into a scoring criterion that guides the greedy search procedure. The resulting method, called GRISOTTO, was evaluated over 156 yeast TF ChIP-chip sequence-sets commonly used to benchmark prior-based motif discovery algorithms. Results show that GRISOTTO is at least as accurate as twelve other state-of-the-art approaches for the same task, even without combining priors. Furthermore, by considering combined priors, GRISOTTO is considerably more accurate than the state-of-the-art approaches for the same task. We also show that PSPs improve GRISOTTO's ability to retrieve motifs from mouse ChIP-seq data, indicating that the proposed algorithm can be applied to data from a different technology and for a higher eukaryote. Conclusions The conclusions of this work are twofold. First, post-processing the output of combinatorial algorithms by incorporating prior information leads to a very efficient and effective motif discovery method. Second, combining priors from different sources is even more beneficial than considering them separately. PMID:21513505

  9. Carrier transport properties of MoS2 field-effect transistors produced by multi-step chemical vapor deposition method

    NASA Astrophysics Data System (ADS)

    Heo, S.; Hayakawa, R.; Wakayama, Y.

    2017-01-01

    We report the transistor properties of MoS2 thin films formed with a multi-step chemical vapor deposition (CVD) method. The established multi-step CVD technique has four steps: MoO3 thermal evaporation, annealing for MoO3 crystallization, sulfurization, and post-annealing. We found that the MoS2 transistor properties were greatly affected by the post-annealing temperature (TPA). The films worked as ambipolar transistors below TPA = 1000 °C. Meanwhile, the transistor operation transitioned from ambipolar to n-type transport at a TPA of 1000 °C. X-ray photoelectron spectroscopy measurements revealed that the films annealed below 1000 °C had sulfur-rich compositions (S/Mo > 2). The excess S atoms were reduced by elevating the annealing temperature, producing an almost stoichiometric composition (S/Mo = 2) at 1000 °C. These results indicate that excess sulfur is responsible for the ambipolar operation by acting as acceptors that generate holes. Moreover, the high-temperature annealing at 1000 °C had another distinct effect, i.e., it improved the crystallinity of the MoS2 films. The electron mobility consequently reached 0.20 ± 0.12 cm²/(V·s).

  10. Application of the quality by design approach to the drug substance manufacturing process of an Fc fusion protein: towards a global multi-step design space.

    PubMed

    Eon-duval, Alex; Valax, Pascal; Solacroup, Thomas; Broly, Hervé; Gleixner, Ralf; Strat, Claire L E; Sutter, James

    2012-10-01

    The article describes how Quality by Design principles can be applied to the drug substance manufacturing process of an Fc fusion protein. First, the quality attributes of the product were evaluated for their potential impact on safety and efficacy using risk management tools. Similarly, process parameters that have a potential impact on critical quality attributes (CQAs) were also identified through a risk assessment. Critical process parameters were then evaluated for their impact on CQAs, individually and in interaction with each other, using multivariate design of experiment techniques during the process characterisation phase. The global multi-step Design Space, defining operational limits for the entire drug substance manufacturing process so as to ensure that the drug substance quality targets are met, was devised using predictive statistical models developed during the characterisation study. The validity of the global multi-step Design Space was then confirmed by performing the entire process, from cell bank thawing to final drug substance, at its limits during the robustness study: the quality of the final drug substance produced under different conditions was verified against predefined targets. An adaptive strategy was devised whereby the Design Space can be adjusted to the quality of the input material to ensure reliable drug substance quality. Finally, all the data obtained during the process described above, together with data generated during additional validation studies as well as manufacturing data, were used to define the control strategy for the drug substance manufacturing process using a risk assessment methodology.

  11. Long-term memory-based control of attention in multi-step tasks requires working memory: evidence from domain-specific interference.

    PubMed

    Foerster, Rebecca M; Carbone, Elena; Schneider, Werner X

    2014-01-01

    Evidence for long-term memory (LTM)-based control of attention has been found during the execution of highly practiced multi-step tasks. However, does LTM directly control attention, or are working memory (WM) processes involved? In the present study, this question was investigated with a dual-task paradigm. Participants executed either a highly practiced visuospatial sensorimotor task (speed stacking) or a verbal task (high-speed poem reciting), while maintaining visuospatial or verbal information in WM. Results revealed unidirectional and domain-specific interference. Neither speed stacking nor high-speed poem reciting was influenced by WM retention. Stacking disrupted the retention of visuospatial locations, but did not modify memory performance of verbal material (letters). Reciting reduced the retention of verbal material substantially, whereas it affected the memory performance of visuospatial locations to a smaller degree. We suggest that the selection of task-relevant information from LTM for the execution of overlearned multi-step tasks recruits domain-specific WM.

  12. Metabolomic quality control of commercial Asian ginseng, and cultivated and wild American ginseng using (1)H NMR and multi-step PCA.

    PubMed

    Zhao, Huiying; Xu, Jin; Ghebrezadik, Helen; Hylands, Peter J

    2015-10-10

    Ginseng, mainly Asian ginseng and American ginseng, is the most widely consumed herbal product in the world. However, the existing quality control method is not adequate: adulteration is often seen in the market. In this study, 31 batches of ginseng from Chinese stores were analyzed using (1)H NMR metabolite profiles together with multi-step principal component analysis (PCA). The most abundant metabolites, sugars, were excluded from the NMR spectra after the first principal component analysis, in order to reveal differences contributed by less abundant metabolites. For the first time, robust, distinctive and representative differences of Asian ginseng from American ginseng were found, and the key metabolites responsible were identified as sucrose, glucose, arginine, choline, 2-oxoglutarate and malate. Differences between wild and cultivated ginseng were identified as ginsenosides. A substitution with cultivated American ginseng was noticed. These results demonstrate that the combination of (1)H NMR and PCA is effective for the quality control of ginseng.
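
    The multi-step PCA idea, sketched on synthetic spectra: run PCA once, mask the dominant "sugar" region, and re-run PCA so that less abundant metabolites drive the separation. The bin ranges and magnitudes below are hypothetical:

```python
import numpy as np

def pca_scores(X, n_components=2):
    """PCA via SVD on mean-centered data; returns the component scores."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T

rng = np.random.default_rng(3)
n_samples, n_bins = 31, 500                   # 31 batches, binned NMR spectra
X = rng.standard_normal((n_samples, n_bins))
X[:, 150:250] += 20 * rng.standard_normal((n_samples, 1))   # dominant sugar bins

step1 = pca_scores(X)                         # separation driven by sugars
mask = np.ones(n_bins, dtype=bool)
mask[150:250] = False                         # exclude the sugar region
step2 = pca_scores(X[:, mask])                # subtler metabolites now dominate
print("step-1 score spread:", np.ptp(step1, axis=0))
print("step-2 score spread:", np.ptp(step2, axis=0))
```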

  13. Ultra-fast formation control of high-order discrete-time multi-agent systems based on multi-step predictive mechanism.

    PubMed

    Zhang, Wenle; Liu, Jianchang; Wang, Honghai

    2015-09-01

    This paper deals with the ultra-fast formation control problem of high-order discrete-time multi-agent systems. Using the local neighbor-error knowledge, a novel ultra-fast protocol with multi-step predictive information and a self-feedback term is proposed. The asymptotic convergence factor is improved by a power of q+1 compared to the routine protocol. To some extent, the ultra-fast algorithm overcomes the influence of the communication topology on the convergence speed. Furthermore, some sufficient conditions are given herein. These conditions decouple the design of the synchronizing gains from the detailed graph properties, and explicitly reveal how the agent dynamics and the communication graph jointly affect the ultra-fast formation ability. Finally, some simulations are worked out to illustrate the effectiveness of our theoretical results.

  14. Multi-step resistive switching behavior of Li-doped ZnO resistance random access memory device controlled by compliance current

    SciTech Connect

    Lin, Chun-Cheng; Tang, Jian-Fu; Su, Hsiu-Hsien; Hong, Cheng-Shong; Huang, Chih-Yu; Chu, Sheng-Yuan

    2016-06-28

    The multi-step resistive switching (RS) behavior of a unipolar Pt/Li₀.₀₆Zn₀.₉₄O/Pt resistive random access memory (RRAM) device is investigated. It is found that the RRAM device exhibits normal, 2-, 3-, and 4-step RESET behaviors under different compliance currents. The transport mechanism within the device is investigated by means of current-voltage curves, in-situ transmission electron microscopy, and electrochemical impedance spectroscopy. It is shown that the ion transport mechanism is dominated by Ohmic behavior under low electric fields and the Poole-Frenkel emission effect (normal RS behavior) or Li⁺ ion diffusion (2-, 3-, and 4-step RESET behaviors) under high electric fields.

  15. Improvement in magnetic and microwave absorption properties of nano-Fe3O4@CFs composites using a modified multi-step EPD process

    NASA Astrophysics Data System (ADS)

    Movassagh-Alanagh, Farid; Bordbar Khiabani, Aidin; Salimkhani, Hamed

    2017-10-01

    In this research, the structural, magnetic and microwave absorption properties of multifunctional nano-Fe3O4@carbon fibers (CFs) composites in the frequency range of 8.2-18 GHz were investigated. The nano-Fe3O4 particles (30 nm) were successfully prepared using a co-precipitation method. These particles were then deposited on CFs using a conventional and a modified multi-step electrophoretic deposition (EPD) process to investigate the contributing effects of coating uniformity on the magnetic and microwave absorption properties. Magnetic property measurements showed that the coercivity (Hc), saturation magnetization (Ms) and residual magnetization (Mrs) values of the CFs coated using the conventional EPD process were 531.4 Oe, 15.2 emu/g and 5.2 emu/g, respectively, while for the CFs coated with the modified EPD process these values were 167.8 Oe, 33.1 emu/g and 1.7 emu/g, respectively. It was found that with the conventional EPD process, a maximum reflection loss (RL) of -9.87 dB was obtained for the composite containing 20 wt.% nano-Fe3O4@CFs and 80 wt.% epoxy-resin with a thickness of 2 mm, while with the modified multi-step EPD process, owing to the enhanced quality of the deposited coating, the maximum RL improved by approximately 0.64 dB, reaching -10.51 dB with an effective absorption bandwidth of about 4 GHz for a similar sample with the same thickness and the same weight ratio of nano-Fe3O4@CFs to epoxy 828.

  16. Contributions of Dopamine-Related Genes and Environmental Factors to Highly Sensitive Personality: A Multi-Step Neuronal System-Level Approach

    PubMed Central

    Chen, Chunhui; Chen, Chuansheng; Moyzis, Robert; Stern, Hal; He, Qinghua; Li, He; Li, Jin; Zhu, Bi; Dong, Qi

    2011-01-01

    Traditional behavioral genetic studies (e.g., twin, adoption studies) have shown that human personality has moderate to high heritability, but recent molecular behavioral genetic studies have failed to identify quantitative trait loci (QTL) with consistent effects. The current study adopted a multi-step approach (ANOVA followed by multiple regression and permutation) to assess the cumulative effects of multiple QTLs. Using a system-level (dopamine system) genetic approach, we investigated a personality trait deeply rooted in the nervous system (the Highly Sensitive Personality, HSP). 480 healthy Chinese college students were given the HSP scale and genotyped for 98 representative polymorphisms in all major dopamine neurotransmitter genes. In addition, two environmental factors (stressful life events and parental warmth) that have been implicated for their contributions to personality development were included to investigate their relative contributions as compared to genetic factors. In Step 1, using ANOVA, we identified 10 polymorphisms that made statistically significant contributions to HSP. In Step 2, these polymorphisms' main effects and interactions were assessed using multiple regression. This model accounted for 15% of the variance of HSP (p<0.001). Recent stressful life events accounted for an additional 2% of the variance. Finally, permutation analyses ascertained the probability of obtaining these findings by chance to be very low, p ranging from 0.001 to 0.006. Dividing these loci by the subsystems of dopamine synthesis, degradation/transport, receptor and modulation, we found that the modulation and receptor subsystems made the most significant contribution to HSP. The results of this study demonstrate the utility of a multi-step neuronal system-level approach in assessing genetic contributions to individual differences in human behavior. It can potentially bridge the gap between the high heritability estimates based on traditional behavioral genetics
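
    A schematic version of such a multi-step screen (the study itself applies ANOVA, multiple regression, and permutation to real genotypes): step 1 is a per-polymorphism ANOVA filter, step 2 a multiple regression with the retained loci, step 3 a permutation test of the model fit. All data below are simulated:

```python
import numpy as np
from scipy import stats

def multi_step_qtl(snps, trait, alpha=0.05, n_perm=500, seed=0):
    """Step 1: per-polymorphism ANOVA screen. Step 2: multiple regression
    with the retained loci. Step 3: permutation test of the model R^2."""
    rng = np.random.default_rng(seed)

    def r_squared(G, y):
        X = np.column_stack([np.ones(len(y)), G])
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        return 1 - (y - X @ beta).var() / y.var()

    keep = [j for j in range(snps.shape[1])
            if stats.f_oneway(*(trait[snps[:, j] == g]
                                for g in (0, 1, 2))).pvalue < alpha]
    obs = r_squared(snps[:, keep], trait)
    null = [r_squared(snps[:, keep], rng.permutation(trait))
            for _ in range(n_perm)]
    return keep, obs, float(np.mean(np.array(null) >= obs))

rng = np.random.default_rng(5)
snps = rng.integers(0, 3, size=(480, 98)).astype(float)   # 98 loci, 480 subjects
trait = 0.4 * snps[:, 7] - 0.3 * snps[:, 42] + rng.standard_normal(480)
keep, r2, p = multi_step_qtl(snps, trait)
print(f"retained loci: {keep}, model R^2 = {r2:.3f}, permutation p = {p:.3f}")
```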

  17. Prediction-based Termination Rule for Greedy Learning with Massive Data.

    PubMed

    Xu, Chen; Lin, Shaobo; Fang, Jian; Li, Runze

    2016-04-01

    The appearance of massive data has become increasingly common in contemporary scientific research. When sample size n is huge, classical learning methods become computationally costly for the regression purpose. Recently, the orthogonal greedy algorithm (OGA) has been revitalized as an efficient alternative in the context of kernel-based statistical learning. In a learning problem, accurate and fast prediction is often of interest. This makes an appropriate termination crucial for OGA. In this paper, we propose a new termination rule for OGA via investigating its predictive performance. The proposed rule is conceptually simple and convenient for implementation, which suggests an [Formula: see text] number of essential updates in an OGA process. It therefore provides an appealing route to conduct efficient learning for massive data. With a sample dependent kernel dictionary, we show that the proposed method is strongly consistent with an [Formula: see text] convergence rate to the oracle prediction. The promising performance of the method is supported by both simulation and real data examples.
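
    A sketch of an orthogonal greedy algorithm with a simple prediction-based stop: halt when the residual norm stops improving by a relative tolerance. The paper's actual rule prescribes a specific number of essential updates (the "[Formula: see text]" above), so the criterion below is only an illustrative stand-in:

```python
import numpy as np

def oga(D, y, tol=1e-3, max_steps=50):
    """Orthogonal greedy algorithm: pick the dictionary column most
    correlated with the residual, refit by least squares, and stop when
    the predictive gain becomes negligible."""
    selected, residual = [], y.copy()
    prev = np.linalg.norm(residual)
    for _ in range(max_steps):
        j = int(np.argmax(np.abs(D.T @ residual)))   # greedy selection
        if j not in selected:
            selected.append(j)
        coef, *_ = np.linalg.lstsq(D[:, selected], y, rcond=None)
        residual = y - D[:, selected] @ coef          # orthogonal refit
        cur = np.linalg.norm(residual)
        if prev - cur < tol * prev:                   # termination rule
            break
        prev = cur
    return selected, coef

rng = np.random.default_rng(4)
D = rng.standard_normal((200, 100))
D /= np.linalg.norm(D, axis=0)                        # unit-norm atoms
truth = np.zeros(100)
truth[[3, 40, 77]] = [2.0, -1.5, 1.0]
y = D @ truth + 0.01 * rng.standard_normal(200)
sel, _ = oga(D, y)
print("selected atoms:", sorted(sel))                 # expect 3, 40, 77
```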

  18. Unsupervised quantification of abdominal fat from CT images using Greedy Snakes

    NASA Astrophysics Data System (ADS)

    Agarwal, Chirag; Dallal, Ahmed H.; Arbabshirani, Mohammad R.; Patel, Aalpen; Moore, Gregory

    2017-02-01

    Adipose tissue has been associated with adverse consequences of obesity. Total adipose tissue (TAT) is divided into subcutaneous adipose tissue (SAT) and visceral adipose tissue (VAT). Intra-abdominal fat (VAT), located inside the abdominal cavity, is a major factor for the classic obesity related pathologies. Since direct measurement of visceral and subcutaneous fat is not trivial, substitute metrics like waist circumference (WC) and body mass index (BMI) are used in clinical settings to quantify obesity. Abdominal fat can be assessed effectively using CT or MRI, but manual fat segmentation is rather subjective and time-consuming. Hence, an automatic and accurate quantification tool for abdominal fat is needed. The goal of this study is to extract TAT, VAT and SAT fat from abdominal CT in a fully automated unsupervised fashion using energy minimization techniques. We applied a four step framework consisting of 1) initial body contour estimation, 2) approximation of the body contour, 3) estimation of inner abdominal contour using Greedy Snakes algorithm, and 4) voting, to segment the subcutaneous and visceral fat. We validated our algorithm on 952 clinical abdominal CT images (from 476 patients with a very wide BMI range) collected from various radiology departments of Geisinger Health System. To our knowledge, this is the first study of its kind on such a large and diverse clinical dataset. Our algorithm obtained a 3.4% error for VAT segmentation compared to manual segmentation. These personalized and accurate measurements of fat can complement traditional population health driven obesity metrics such as BMI and WC.

  19. A dedicated greedy pursuit algorithm for sparse spectral representation of music sound

    NASA Astrophysics Data System (ADS)

    Rebollo-Neira, Laura; Aggarwal, Gagan

    2016-10-01

    A dedicated algorithm for sparse spectral representation of music sound is presented. The goal is to enable the representation of a piece of music signal, as a linear superposition of as few spectral components as possible. A representation of this nature is said to be sparse. In the present context sparsity is accomplished by greedy selection of the spectral components, from an overcomplete set called a dictionary. The proposed algorithm is tailored to be applied with trigonometric dictionaries. Its distinctive feature is that it avoids the need for the actual construction of the whole dictionary, by implementing the required operations via the Fast Fourier Transform. The achieved sparsity is theoretically equivalent to that rendered by the Orthogonal Matching Pursuit method. The contribution of the proposed dedicated implementation is to extend the applicability of the standard Orthogonal Matching Pursuit algorithm, by reducing its storage and computational demands. The suitability of the approach for producing sparse spectral models is illustrated by comparison with the traditional method, in the line of the Short Time Fourier Transform, involving only the corresponding orthonormal trigonometric basis.
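
    The core trick can be sketched directly: with a discrete Fourier dictionary, the inner products of the residual with every atom are obtained from a single FFT per iteration, so the dictionary matrix is never built; only the few selected atoms are materialized for the orthogonal refit. This is a minimal stand-in for the paper's trigonometric (sine/cosine) dictionaries:

```python
import numpy as np

def omp_fourier(y, n_atoms):
    """OMP with an implicit DFT dictionary: correlations with all complex
    exponential atoms come from one FFT, avoiding dictionary storage."""
    n = len(y)
    residual, selected = y.astype(complex), []
    for _ in range(n_atoms):
        corr = np.fft.fft(residual)           # residual vs. every atom at once
        k = int(np.argmax(np.abs(corr)))
        if k not in selected:
            selected.append(k)
        # Explicit orthogonal projection onto the selected atoms only.
        A = np.exp(2j * np.pi * np.outer(np.arange(n), selected) / n)
        coef, *_ = np.linalg.lstsq(A, y.astype(complex), rcond=None)
        residual = y - (A @ coef).real        # y is real; conjugate bins pair up
    return selected, coef

t = np.arange(512)
y = np.sin(2 * np.pi * 7 * t / 512) + 0.5 * np.cos(2 * np.pi * 31 * t / 512)
sel, _ = omp_fourier(y, n_atoms=4)
print("selected frequency bins:", sorted(sel))   # expect 7, 31 and conjugates
```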

  20. Greedy data transportation scheme with hard packet deadlines for wireless ad hoc networks.

    PubMed

    Lee, HyungJune

    2014-01-01

    We present a greedy data transportation scheme with hard packet deadlines in ad hoc sensor networks of stationary nodes and multiple mobile nodes with scheduled trajectory paths and arrival times. In the proposed routing strategy, each stationary ad hoc node en route decides whether to relay via a shortest-path stationary node toward the destination or via a passing-by mobile node that will carry the packet closer to the destination. We aim to utilize mobile nodes to minimize the total routing cost as long as the selected route can satisfy the end-to-end packet deadline. We evaluate our proposed routing algorithm in terms of routing cost, packet delivery ratio, packet delivery time, and usability of mobile nodes based on network-level simulations. Simulation results show that our proposed algorithm fully exploits the remaining time till the packet deadline, turning it into the networking benefits of reduced overall routing cost and improved packet delivery performance. Also, we demonstrate that the routing scheme guarantees packet delivery with hard deadlines, contributing to QoS improvement in various network services.
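
    The per-hop choice described can be reduced to a small decision rule: hand off to the passing-by mobile node when the slack before the packet deadline permits the cheaper carry, otherwise take the shortest-path stationary hop. All costs and arrival-time estimates below are hypothetical:

```python
def choose_next_hop(now, deadline, static_hop, mobile_carry):
    """Greedy per-hop decision under a hard packet deadline.

    static_hop:   (cost, eta) of forwarding via the shortest-path neighbor
    mobile_carry: (cost, eta) of handing off to a scheduled mobile node,
                  or None if no mobile node passes by
    Returns the cheapest option that still meets the deadline, or None.
    """
    options = [("static", *static_hop)]
    if mobile_carry is not None:
        options.append(("mobile", *mobile_carry))
    feasible = [(cost, eta, name) for name, cost, eta in options
                if now + eta <= deadline]
    return min(feasible)[2] if feasible else None

# The mobile carry is cheaper but slower; it wins only when slack allows.
print(choose_next_hop(0.0, 10.0, static_hop=(5.0, 2.0), mobile_carry=(1.0, 8.0)))  # mobile
print(choose_next_hop(0.0, 5.0, static_hop=(5.0, 2.0), mobile_carry=(1.0, 8.0)))   # static
```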

  1. PicXAA-R: Efficient structural alignment of multiple RNA sequences using a greedy approach

    PubMed Central

    2011-01-01

    Background Accurate and efficient structural alignment of non-coding RNAs (ncRNAs) has attracted more and more attention as recent studies unveiled the significance of ncRNAs in living organisms. While Sankoff-style structural alignment algorithms cannot efficiently handle multiple sequences, progressive schemes are mostly used to reduce the complexity. However, this idea tends to propagate early-stage errors throughout the entire process, thereby degrading the quality of the final alignment. For multiple protein sequence alignment, we have recently proposed PicXAA, which constructs an accurate alignment in a non-progressive fashion. Results Here, we propose PicXAA-R as an extension to PicXAA for greedy structural alignment of ncRNAs. PicXAA-R efficiently captures both folding information within each sequence and local similarities between sequences. It uses a set of probabilistic consistency transformations to improve the posterior base-pairing and base alignment probabilities using the information of all sequences in the alignment. Using a graph-based scheme, we greedily build up the structural alignment from sequence regions with high base-pairing and base alignment probabilities. Conclusions Several experiments on datasets with different characteristics confirm that PicXAA-R is one of the fastest algorithms for structural alignment of multiple RNAs and it consistently yields accurate alignment results, especially for datasets with locally similar sequences. PicXAA-R source code is freely available at: http://www.ece.tamu.edu/~bjyoon/picxaa/. PMID:21342569

  2. A dedicated greedy pursuit algorithm for sparse spectral representation of music sound.

    PubMed

    Rebollo-Neira, Laura; Aggarwal, Gagan

    2016-10-01

    A dedicated algorithm for sparse spectral representation of music sound is presented. The goal is to enable the representation of a piece of music signal as a linear superposition of as few spectral components as possible, without affecting the quality of the reproduction. A representation of this nature is said to be sparse. In the present context sparsity is accomplished by greedy selection of the spectral components, from an overcomplete set called a dictionary. The proposed algorithm is tailored to be applied with trigonometric dictionaries. Its distinctive feature is that it avoids the need for the actual construction of the whole dictionary, by implementing the required operations via the fast Fourier transform. The achieved sparsity is theoretically equivalent to that rendered by the orthogonal matching pursuit (OMP) method. The contribution of the proposed dedicated implementation is to extend the applicability of the standard OMP algorithm, by reducing its storage and computational demands. The suitability of the approach for producing sparse spectral representation is illustrated by comparison with the traditional method, in the line of the short time Fourier transform, involving only the corresponding orthonormal trigonometric basis.

  3. Greedy Data Transportation Scheme with Hard Packet Deadlines for Wireless Ad Hoc Networks

    PubMed Central

    Lee, HyungJune

    2014-01-01

    We present a greedy data transportation scheme with hard packet deadlines in ad hoc sensor networks of stationary nodes and multiple mobile nodes with scheduled trajectory paths and arrival times. In the proposed routing strategy, each stationary ad hoc node en route decides whether to relay via a shortest-path stationary node toward the destination or via a passing-by mobile node that will carry the packet closer to the destination. We aim to utilize mobile nodes to minimize the total routing cost as long as the selected route can satisfy the end-to-end packet deadline. We evaluate our proposed routing algorithm in terms of routing cost, packet delivery ratio, packet delivery time, and usability of mobile nodes based on network-level simulations. Simulation results show that our proposed algorithm fully exploits the remaining time till the packet deadline, turning it into the networking benefits of reduced overall routing cost and improved packet delivery performance. Also, we demonstrate that the routing scheme guarantees packet delivery with hard deadlines, contributing to QoS improvement in various network services. PMID:25258736

  4. A Multi-Step Pathway Connecting Short Sleep Duration to Daytime Somnolence, Reduced Attention, and Poor Academic Performance: An Exploratory Cross-Sectional Study in Teenagers

    PubMed Central

    Perez-Lloret, Santiago; Videla, Alejandro J.; Richaudeau, Alba; Vigo, Daniel; Rossi, Malco; Cardinali, Daniel P.; Perez-Chada, Daniel

    2013-01-01

    Background: A multi-step causality pathway connecting short sleep duration to daytime somnolence and sleepiness, leading to reduced attention and poor academic performance as the final result, can be envisaged. However, this hypothesis has never been explored. Objective: To explore consecutive correlations between sleep duration, daytime somnolence, attention levels, and academic performance in a sample of school-aged teenagers. Methods: We carried out a survey assessing sleep duration and daytime somnolence using the Pediatric Daytime Sleepiness Scale (PDSS). Sleep duration variables included weekdays' total sleep time, usual bedtimes, and the absolute weekday-to-weekend sleep time difference. Attention was assessed by the d2 test and by the coding subtest from the WISC-IV scale. Academic performance was obtained from literature and math grades. Structural equation modeling was used to assess the independent relationships between these variables, while controlling for confounding effects of other variables, in one single model. Standardized regression weights (SWR) for relationships between these variables are reported. Results: The study sample included 1,194 teenagers (mean age: 15 years; range: 13-17 y). Sleep duration was inversely associated with daytime somnolence (SWR = -0.36, p < 0.01), while sleepiness was negatively associated with attention (SWR = -0.13, p < 0.01). Attention scores correlated positively with academic results (SWR = 0.18, p < 0.01). Daytime somnolence correlated negatively with academic achievements (SWR = -0.16, p < 0.01). The model offered an acceptable fit according to usual measures (RMSEA = 0.0548, CFI = 0.874, NFI = 0.838). A Sobel test confirmed that short sleep duration influenced attention through daytime somnolence (p < 0.02), which in turn influenced academic achievements through reduced attention (p < 0.002). Conclusions: Poor academic achievements correlated with reduced attention, which in turn was related to daytime somnolence. Somnolence
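
    The Sobel test mentioned in the Results is a closed-form check on an indirect (mediated) effect. A minimal sketch using the reported path coefficients with hypothetical standard errors (the abstract does not report the actual SEs):

```python
import math

def sobel_test(a, se_a, b, se_b):
    """Sobel test of the indirect effect a*b, where a and b are the two
    path coefficients (X -> mediator, mediator -> Y) and se_a, se_b their
    standard errors."""
    z = (a * b) / math.sqrt(b**2 * se_a**2 + a**2 * se_b**2)
    p = math.erfc(abs(z) / math.sqrt(2))      # two-sided normal p-value
    return z, p

# Hypothetical SEs for the sleep duration -> somnolence -> attention path.
z, p = sobel_test(a=-0.36, se_a=0.05, b=-0.13, se_b=0.04)
print(f"Sobel z = {z:.2f}, two-sided p = {p:.4f}")
```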

  5. Mad scientists, compassionate healers, and greedy egotists: the portrayal of physicians in the movies.

    PubMed Central

    Flores, Glenn

    2002-01-01

    Cinematic depictions of physicians potentially can affect public expectations and the patient-physician relationship, but little attention has been devoted to portrayals of physicians in movies. The objective of the study was the analysis of cinematic depictions of physicians to determine common demographic attributes of movie physicians, major themes, and whether portrayals have changed over time. All movies released on videotape with physicians as main characters and readily available to the public were viewed in their entirety. Data were collected on physician characteristics, diagnoses, and medical accuracy, and dialogue concerning physicians was transcribed. The results showed that in the 131 films, movie physicians were significantly more likely to be male (p < 0.00001), White (p < 0.00001), and < 40 years of age (p < 0.009). The proportion of women and minority film physicians has declined steadily in recent decades. Movie physicians are most commonly surgeons (33%), psychiatrists (26%), and family practitioners (18%). Physicians were portrayed negatively in 44% of movies, and since the 1960s positive portrayals declined while negative portrayals increased. Physicians frequently are depicted as greedy, egotistical, uncaring, and unethical, especially in recent films. Medical inaccuracies occurred in 27% of films. Compassion and idealism were common in early physician movies but are increasingly scarce in recent decades. A recurrent theme is the "mad scientist," the physician-researcher that values research more than patients' welfare. Portrayals of physicians as egotistical and materialistic have increased, whereas sexism and racism have waned. Movies from the past two decades have explored critical issues surrounding medical ethics and managed care. We conclude that negative cinematic portrayals of physicians are on the rise, which may adversely affect patient expectations and the patient-physician relationship. Nevertheless, films about physicians can

  6. Magmatically Greedy Reararc Volcanoes of the N. Tofua Segment of the Tonga Arc

    NASA Astrophysics Data System (ADS)

    Rubin, K. H.; Embley, R. W.; Arculus, R. J.; Lupton, J. E.

    2013-12-01

    Volcanism along the northernmost Tofua Arc is enigmatic because the edifices of the arc's volcanic front are mostly magmatically anemic, despite the very high convergence rate of the Pacific Plate with this section of the Tonga Arc. However, just westward of the arc front, in terrain generally thought of as part of the adjacent NE Lau Backarc Basin, lie a series of very active volcanoes and volcanic features, including the large submarine caldera Niuatahi (aka volcano 'O'), a large composite dacite lava flow terrain not obviously associated with any particular volcanic edifice, and the Mata volcano group, a series of 9 small elongate volcanoes in an extensional basin at the extreme NE corner of the Lau Basin. These three volcanic terrains do not sit on arc-perpendicular cross chains. Collectively, these volcanic features appear to be receiving a large proportion of the magma flux from the sub-Tonga/Lau mantle wedge, in effect 'stealing' this magma flux from the arc front. A second occurrence of such magma 'capture' from the arc front occurs in an area just to the south, on the southernmost portion of the Fonualei Spreading Center. Erupted compositions at these 'magmatically greedy' volcanoes are consistent with high slab-derived fluid input into the wedge (particularly trace element abundances and volatile contents, e.g., see Lupton abstract this session). It is unclear how long-lived a feature this is, but the very presence of such hyperactive and areally-dispersed volcanism behind the arc front implies these volcanoes are not in fact part of any focused spreading/rifting in the Lau Backarc Basin, and should be thought of as 'reararc volcanoes'. Possible tectonic factors contributing to this unusually productive reararc environment are the high rate of convergence, the cold slab, the highly disorganized extension in the adjacent backarc, and the tear in the subducting plate just north of the Tofua Arc.

  7. Multi-Stepped Optogenetics: A Novel Strategy to Analyze Neural Network Formation and Animal Behaviors by Photo-Regulation of Local Gene Expression, Fluorescent Color and Neural Excitation

    NASA Astrophysics Data System (ADS)

    Hatta, Kohei; Nakajima, Yohei; Isoda, Erika; Itoh, Mariko; Yamamoto, Tamami

    The brain is one of the most complicated structures in nature. Zebrafish is a useful model to study the development of the vertebrate brain, because it is transparent at early embryonic stages and develops rapidly outside of the body. We made a series of transgenic zebrafish expressing green fluorescent protein (GFP)-related molecules, for example, Kaede and KikGR, whose green fluorescence can be irreversibly converted to red upon irradiation with ultraviolet (UV) or violet light, and Dronpa, whose green fluorescence is eliminated with strong blue light but can be reactivated upon irradiation with UV or violet light. We have recently shown that the infrared laser evoked gene operator (IR-LEGO), which causes a focused heat shock, can locally induce these fluorescent proteins and other genes. Neural cell migration and axonal pattern formation in the living brain can be visualized by this technique. We can also express channelrhodopsin-2 (ChR2), a photoactivatable cation channel, or Natronomonas pharaonis halorhodopsin (NpHR), a photoactivatable chloride ion pump, locally in the nervous system by IR. The behaviors of these animals can then be controlled by activating or silencing the local neurons with light. This novel strategy is useful in discovering neurons and circuits responsible for a wide variety of animal behaviors. We proposed to call this method 'multi-stepped optogenetics'.

  8. Multi-step infrared macro-fingerprint features of ethanol extracts from different Cistanche species in China combined with HPLC fingerprint

    NASA Astrophysics Data System (ADS)

    Xu, Rong; Sun, Suqin; Zhu, Weicheng; Xu, Changhua; Liu, Yougang; Shen, Liang; Shi, Yue; Chen, Jun

    2014-07-01

    The genus Cistanche generally has four species in China, including C. deserticola (CD), C. tubulosa (CT), C. salsa (CS) and C. sinensis (CSN), among which CD and CT are the official herbal sources of Cistanche Herba (CH). To clarify the sources of CH and ensure clinical efficacy and safety, a multi-step IR macro-fingerprint method was developed to analyze and evaluate the ethanol extracts of the four species. Through this method, the four species were distinctively distinguished, and the main active components, phenylethanoid glycosides (PhGs), were estimated rapidly according to the fingerprint features in the original IR spectra, second derivative spectra, correlation coefficients and 2D-IR correlation spectra. The exclusive IR fingerprints in the spectra, including the positions, shapes and numbers of peaks, indicated that the constituents of CD were the most abundant, and that CT had the highest level of PhGs. The results deduced from the macroscopic features of the IR fingerprints were in agreement with the HPLC fingerprint of PhGs from the four species, but it should be noted that IR provided more chemical information than HPLC. In conclusion, with the advantages of high resolution, cost-effectiveness and speed, the macroscopic IR fingerprint method should be a promising analytical technique for discriminating extremely similar herbal medicines, monitoring and tracing the constituents of different extracts, and even for quality control of complex systems such as TCM.

  9. Combined state-adding and state-deleting approaches to type III multi-step rationally extended potentials: Applications to ladder operators and superintegrability

    SciTech Connect

    Marquette, Ian; Quesne, Christiane

    2014-11-15

    Type III multi-step rationally extended harmonic oscillator and radial harmonic oscillator potentials, characterized by a set of k integers m_1, m_2, …, m_k, such that m_1 < m_2 < ⋯ < m_k with m_i even (resp. odd) for i odd (resp. even), are considered. The state-adding and state-deleting approaches to these potentials in a supersymmetric quantum mechanical framework are combined to construct new ladder operators. The eigenstates of the Hamiltonians are shown to separate into m_k + 1 infinite-dimensional unitary irreducible representations of the corresponding polynomial Heisenberg algebras. These ladder operators are then used to build a higher-order integral of motion for seven new infinite families of superintegrable two-dimensional systems separable in Cartesian coordinates. The finite-dimensional unitary irreducible representations of the polynomial algebras of such systems are directly determined from the ladder operator action on the constituent one-dimensional Hamiltonian eigenstates, and provide an algebraic derivation of the whole spectrum of the superintegrable systems, including the total degeneracies of the levels.
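
    The ladder-operator structure referred to can be summarized, in one common convention (not necessarily the authors' exact normalization), by the defining relations of a polynomial Heisenberg algebra:

```latex
% H is the Hamiltonian; L^{\pm} are the ladder operators obtained by
% combining the state-adding and state-deleting SUSY transformations.
[H, L^{\pm}] = \pm \lambda\, L^{\pm}, \qquad
L^{+} L^{-} = P(H), \qquad
L^{-} L^{+} = P(H + \lambda)
```

    where P is a polynomial in H; its zeros mark the lowest states of the separate ladders of eigenstates, which is how the spectrum splits into the m_k + 1 unitary irreducible representations mentioned above.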

  10. Can the Ordered Multi-Stepping Over Hoop test be useful for predicting fallers among older people? A preliminary 1 year cohort study.

    PubMed

    Tsutsumimoto, Kota; Doi, Takehiko; Misu, Shogo; Ono, Rei; Hirata, Soichiro

    2013-08-01

    To prevent falls among older people, we developed a new fall-risk assessment, the "Ordered Multi-Stepping Over Hoop (OMO)" test. The aims of this preliminary study were to investigate the association of the OMO with cognitive and physical function and to investigate whether the OMO could predict incidents of falling. Fifty-nine community-dwelling older people (mean age = 88.0 ± 0.87, female = 49) were recruited. We assessed cognitive and physical function, including the OMO test, at baseline and monitored participants' falls during a 12-month follow-up period. We investigated whether the OMO was associated with cognitive function, physical function, and incidents of falling. To investigate whether the OMO could predict incidents of falling, a receiver operating characteristic (ROC) analysis was conducted. The OMO time in fallers was significantly slower than that in non-fallers, and slower OMO times correlated significantly with lower physical function and executive function. The area under the ROC curve for the OMO was 0.71 (p < 0.05); times above 21.9 s identified those more likely to fall. The OMO time was correlated with cognitive function, physical function, and incidents of falling. Our preliminary study indicates that the OMO may help distinguish fallers from non-fallers among older people as effectively as other tests.

  11. In situ UV curable 3D printing of multi-material tri-legged soft bot with spider mimicked multi-step forward dynamic gait

    NASA Astrophysics Data System (ADS)

    Zeb Gul, Jahan; Yang, Bong-Su; Yang, Young Jin; Chang, Dong Eui; Choi, Kyung Hyun

    2016-11-01

    Soft bots have the expedient ability to adopt intricate postures and fit into complex shapes compared with mechanical robots. This paper presents a unique in situ UV-curing three-dimensional (3D) printed multi-material tri-legged soft bot with a spider-mimicking multi-step dynamic forward gait, using commercial bio metal filament (BMF) as an actuator. The printed soft bot can produce controllable forward motion in response to external signals. The fundamental properties of BMF, including output force, contractions at different frequencies, initial loading rate, and displacement rate, are verified. The tri-pedal soft bot CAD model is designed with inspiration from the spider's legged structure, and its locomotion is assessed by simulating strain and displacement using finite element analysis. A customized rotational multi-head 3D printing system assisted by multiple-wavelength curing lasers is used for in situ fabrication of the tri-pedal soft bot from two flexible materials (epoxy and polyurethane) in three layered steps. The tri-pedal soft bot is 80 mm in diameter, and each pedal is 5 mm wide and 5 mm deep. The maximum forward speed achieved is 2.7 mm/s at 5 Hz with an input of 3 V and 250 mA on a smooth surface. The fabricated tri-pedal soft bot demonstrated power efficiency and controllable locomotion at three input signal frequencies (1, 2, and 5 Hz).

  12. A Multi-Step miRNA-mRNA Regulatory Network Construction Approach Identifies Gene Signatures Associated with Endometrioid Endometrial Carcinoma

    PubMed Central

    Xiong, Hanzhen; Li, Qiulian; Chen, Ruichao; Liu, Shaoyan; Lin, Qiongyan; Xiong, Zhongtang; Jiang, Qingping; Guo, Linlang

    2016-01-01

    We aimed to identify endometrioid endometrial carcinoma (EEC)-related gene signatures using a multi-step miRNA-mRNA regulatory network construction approach. Pathway analysis showed that 61 genes were enriched in many carcinoma-related pathways. Among the 14 highest-scoring gene signatures, six genes had previously been shown to be associated with endometrial carcinoma. By qRT-PCR and next-generation sequencing, we found that one gene signature (CPEB1) was significantly down-regulated in EEC tissues, which may be caused by hsa-miR-183-5p up-regulation. In addition, our literature surveys suggested that CPEB1 may play an important role in EEC pathogenesis by regulating the EMT/p53 pathway. The miRNA-mRNA network is worthy of further investigation with respect to the regulatory mechanisms of miRNAs in EEC. CPEB1 appeared to act as a tumor suppressor in EEC. Our results provide valuable guidance for functional studies at the cellular level, as well as in EEC mouse models. PMID:27271671

  13. A multi-step approach to improving NASA Earth Science data access and use for decision support through online and hands-on training

    NASA Astrophysics Data System (ADS)

    Prados, A. I.; Gupta, P.; Mehta, A. V.; Schmidt, C.; Blevins, B.; Carleton-Hug, A.; Barbato, D.

    2014-12-01

    NASA's Applied Remote Sensing Training Program (ARSET), http://arset.gsfc.nasa.gov, within NASA's Applied Sciences Program, has been providing applied remote sensing training since 2008. The goals of the program are to develop the technical and analytical skills necessary to utilize NASA resources for decision support, and to help end-users navigate the vast data resources freely available. We discuss our multi-step approach to improving access to and use of NASA satellite and model data for air quality, water resources, disaster, and land management. The program has reached over 1600 participants worldwide using a combined online and interactive approach. We discuss lessons learned, as well as best practices and success stories, in improving the use of NASA Earth Science resources archived at multiple data centers by end-users in the private and public sectors. ARSET's program evaluation method for improving the program and assessing the benefits of trainings to U.S. and international organizations is also described.

  14. The multi-step phosphorelay mechanism of unorthodox two-component systems in E. coli realizes ultrasensitivity to stimuli while maintaining robustness to noises.

    PubMed

    Kim, Jeong-Rae; Cho, Kwang-Hyun

    2006-12-01

    E. coli has two-component systems composed of histidine kinase proteins and response regulator proteins. For a given extracellular stimulus, a histidine kinase senses the stimulus, autophosphorylates and then passes the phosphates to the cognate response regulators. The histidine kinase in an orthodox two-component system has only one histidine domain where the autophosphorylation occurs, but a histidine kinase in some unusual two-component systems (unorthodox two-component systems) has two histidine domains and one aspartate domain. So, the unorthodox two-component systems have more complex phosphorelay mechanisms than orthodox two-component systems. In general, the two-component systems are required to promptly respond to external stimuli for survival of E. coli. In this respect, the complex multi-step phosphorelay mechanism seems to be disadvantageous, but there are several unorthodox two-component systems in E. coli. In this paper, we investigate the reason why such unorthodox two-component systems are present in E. coli. For this purpose, we have developed simplified mathematical models of both orthodox and unorthodox two-component systems and analyzed their dynamical characteristics through extensive computer simulations. We have finally revealed that the unorthodox two-component systems realize ultrasensitive responses to external stimuli and also more robust responses to noises than the orthodox two-component systems.
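
    The relay mechanics sketched above lend themselves to a small simulation. Below is a toy mass-action model of a linear multi-step phosphorelay, intended only as a scaffold for experimenting with chain length versus dose response; the chain lengths, rate constants, and the treatment of each upstream donor as catalytic are illustrative assumptions, not the models developed in the paper.

      # Toy linear phosphorelay integrated by forward Euler (all names and
      # rate constants are illustrative assumptions).
      def steady_state_output(stimulus, steps, k_relay=1.0, k_dephos=0.5,
                              dt=0.01, t_end=200.0):
          """Phosphorylated fraction of the terminal site after relaxation."""
          phos = [0.0] * steps
          for _ in range(int(t_end / dt)):
              rates = []
              for i in range(steps):
                  # site 0 is driven by the stimulus; each later site is fed
                  # by the phosphorylated fraction of its upstream neighbour
                  inflow = stimulus if i == 0 else k_relay * phos[i - 1]
                  rates.append(inflow * (1.0 - phos[i]) - k_dephos * phos[i])
              for i in range(steps):
                  phos[i] += dt * rates[i]
          return phos[-1]

      # compare a short (orthodox-like) and a long (unorthodox-like) relay
      for s in (0.01, 0.1, 1.0, 10.0):
          print(s, round(steady_state_output(s, steps=2), 3),
                round(steady_state_output(s, steps=4), 3))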

  15. Cryopreservation of encapsulated liver spheroids using a cryogen-free cooler: high functional recovery using a multi-step cooling profile.

    PubMed

    Massie, I; Selden, C; Morris, J; Hodgson, H; Fuller, B

    2011-01-01

    Acute liver failure has high mortality with unpredictable onset. A bioartificial liver, comprising alginate-encapsulated HepG2 spheroids, could temporarily replace liver function but must be cryopreservable. For clinical use, the contamination risks posed by liquid coolants during cryopreservation and storage should be minimized. A cryogen-free cooler was therefore compared with nitrogen vapour-controlled cryopreservation of alginate-encapsulated liver cell spheroids (AELS). AELS were cooled using a multi-step, slow-cooling profile in 12% v/v Me2SO in Celsior and stored in liquid nitrogen; temperatures were recorded throughout, the AELS were assayed at 24, 48 and 72 hours post-warming, and the results were compared with unfrozen control values. Viability was assessed by fluorescent staining and quantified using image analysis; cell numbers were quantified using nuclear counts, and cell function using albumin synthesis. The cryogen-free cooler executed the cooling profile as intended, apart from one step that required a rapid cooling ramp. Viability, cell numbers and function were similarly decreased in both cryopreserved groups, to about 90 percent, 70 percent and 65 percent of the control values, respectively. This technology offers a clinical alternative to liquid nitrogen-coolant cryopreservation.

  16. From highly polluted Zn-rich acid mine drainage to non-metallic waters: implementation of a multi-step alkaline passive treatment system to remediate metal pollution.

    PubMed

    Macías, Francisco; Caraballo, Manuel A; Rötting, Tobias S; Pérez-López, Rafael; Nieto, José Miguel; Ayora, Carlos

    2012-09-01

    Complete metal removal from highly polluted acid mine drainage was attained by the use of a pilot multi-step passive remediation system. The remediation strategy can conceptually be subdivided into a first section, where complete trivalent metal removal was achieved with a previously tested limestone-based passive remediation technology, followed by a novel reactive substrate (caustic magnesia powder dispersed in a wood-shavings matrix) that achieved total divalent metal precipitation. This MgO step was capable of abating high concentrations of Zn, together with Mn, Cd, Co and Ni, to below the recommended limits for drinking water. A reactive transport model predicts that 1 m³ of MgO-DAS (1 m thick × 1 m² section) would be able to treat a flow of 0.5 L/min of highly acidic water (total acidity of 788 mg/L CaCO₃) for more than 3 years.

  17. Molecularly imprinted polymers for triazine herbicides prepared by multi-step swelling and polymerization method. Their application to the determination of methylthiotriazine herbicides in river water.

    PubMed

    Sambe, Haruyo; Hoshina, Kaori; Haginaka, Jun

    2007-06-08

    Uniformly sized, molecularly imprinted polymers (MIPs) for atrazine, ametryn and irgarol were prepared by a multi-step swelling and polymerization method using ethylene glycol dimethacrylate as a cross-linker and methacrylic acid (MAA), 2-(trifluoromethyl)acrylic acid (TFMAA) or 4-vinylpyridine as a functional monomer. The MIP for atrazine prepared using MAA showed good molecular recognition of chlorotriazine herbicides, while the MIPs for ametryn and irgarol prepared using TFMAA showed excellent molecular recognition of methylthiotriazine herbicides. A restricted-access-media molecularly imprinted polymer (RAM-MIP) for irgarol was prepared, followed by in situ hydrophilic surface modification using glycerol dimethacrylate and glycerol monomethacrylate as hydrophilic monomers. The RAM-MIP was applied to the selective pretreatment and enrichment of the methylthiotriazine herbicides simetryn, ametryn and prometryn in river water, followed by their separation and UV detection via column-switching HPLC. The calibration graphs of these compounds showed good linearity in the range of 50-500 pg/mL (r > 0.999) with a 100 mL loading of river water sample. The quantitation limits of simetryn, ametryn and prometryn were 50 pg/mL, and the detection limits were 25 pg/mL. The recoveries of simetryn, ametryn and prometryn at 50 pg/mL were 101%, 95.6% and 95.1%, respectively. The method was successfully applied to the simultaneous determination of simetryn, ametryn and prometryn in river water.

  18. Inter-Greedy technique for fusion of different carotid segmentation boundaries leading to high-performance IMT measurement.

    PubMed

    Molinari, Filippo; Zeng, Guang; Suri, Jasjit S

    2010-01-01

    User-based estimation of the intima-media thickness (IMT) of carotid arteries introduces subjectivity into decision support systems that use IMT as a cardiovascular risk marker. For automated computer-based decision support, we previously developed segmentation strategies along three main lines: (a) a signal processing approach combined with snakes and fuzzy K-means (CULEXsa), (b) an integrated approach based on seed and line detection followed by probability-based connectivity and classification (CALEXsa), and (c) a morphological approach with watershed transform and fitting (WS). We extended this concept by fusing the merits of these multiple boundaries in the so-called Inter-Greedy (IG) approach. Starting from the technique with the lowest overall system error (the snake-based one), we iteratively swapped the vertices of the lumen-intima/media-adventitia (LI/MA) profiles until the overall distance with respect to the ground truth was minimized. The fused boundary was the IG boundary. The mean error of the Inter-Greedy technique (evaluated on 200 images) was 0.32 ± 0.44 pixel (20.0 ± 27.5 μm) for the LI boundary (a 33.3% ± 5.6% improvement over the best-performing initial technique) and 0.21 ± 0.34 pixel (13.1 ± 21.3 μm) for the MA boundary (a 32.3% ± 6.7% improvement). The IMT measurement error for the IG method was 0.74 ± 0.75 pixel (46.3 ± 46.9 μm), a 43.5% ± 2.4% improvement.

  19. Greedy feature selection for glycan chromatography data with the generalized Dirichlet distribution

    PubMed Central

    2013-01-01

    Background Glycoproteins are involved in a diverse range of biochemical and biological processes. Changes in protein glycosylation are believed to occur in many diseases, particularly during cancer initiation and progression. The identification of biomarkers for human disease states is becoming increasingly important, as early detection is key to improving survival and recovery rates. To this end, the serum glycome has been proposed as a potential source of biomarkers for different types of cancers. High-throughput hydrophilic interaction liquid chromatography (HILIC) technology for glycan analysis allows for the detailed quantification of the glycan content in human serum. However, the experimental data from this analysis is compositional by nature. Compositional data are subject to a constant-sum constraint, which restricts the sample space to a simplex. Statistical analysis of glycan chromatography datasets should account for their unusual mathematical properties. As the volume of glycan HILIC data being produced increases, there is a considerable need for a framework to support appropriate statistical analysis. Proposed here is a methodology for feature selection in compositional data. The principal objective is to provide a template for the analysis of glycan chromatography data that may be used to identify potential glycan biomarkers. Results A greedy search algorithm, based on the generalized Dirichlet distribution, is carried out over the feature space to search for the set of “grouping variables” that best discriminate between known group structures in the data, modelling the compositional variables using beta distributions. The algorithm is applied to two glycan chromatography datasets. Statistical classification methods are used to test the ability of the selected features to differentiate between known groups in the data. Two well-known methods are used for comparison: correlation-based feature selection (CFS) and recursive partitioning (rpart). CFS
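
    The greedy search at the core of this framework can be outlined generically, as below. This is a minimal sketch: the scoring function is a placeholder argument, whereas the paper scores candidate "grouping variables" under a generalized Dirichlet/beta model, which is not reproduced here.

      # Generic greedy forward selection: repeatedly add the feature whose
      # inclusion most improves the (caller-supplied) score.
      def greedy_forward_selection(features, score, max_features=None):
          selected, best_score = [], float("-inf")
          remaining = list(features)
          while remaining and (max_features is None
                               or len(selected) < max_features):
              candidate_score, candidate = max(
                  (score(selected + [f]), f) for f in remaining)
              if candidate_score <= best_score:
                  break                      # no feature improves the score
              selected.append(candidate)
              remaining.remove(candidate)
              best_score = candidate_score
          return selected, best_score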

  20. Optimism

    PubMed Central

    Carver, Charles S.; Scheier, Michael F.; Segerstrom, Suzanne C.

    2010-01-01

    Optimism is an individual difference variable that reflects the extent to which people hold generalized favorable expectancies for their future. Higher levels of optimism have been related prospectively to better subjective well-being in times of adversity or difficulty (i.e., controlling for previous well-being). Consistent with such findings, optimism has been linked to higher levels of engagement coping and lower levels of avoidance, or disengagement, coping. There is evidence that optimism is associated with taking proactive steps to protect one's health, whereas pessimism is associated with health-damaging behaviors. Consistent with such findings, optimism is also related to indicators of better physical health. The energetic, task-focused approach that optimists take to goals also relates to benefits in the socioeconomic world. Some evidence suggests that optimism relates to more persistence in educational efforts and to higher later income. Optimists also appear to fare better than pessimists in relationships. Although there are instances in which optimism fails to convey an advantage, and instances in which it may convey a disadvantage, those instances are relatively rare. In sum, the behavioral patterns of optimists appear to provide models of living for others to learn from. PMID:20170998

  1. Patterns of Use, Cessation Behavior and Socio-Demographic Factors Associated with Smoking in Saudi Arabia: a Cross-Sectional Multi-Step Study.

    PubMed

    Abdelwahab, Siddig Ibarhim; El-Setohy, Maged; Alsharqi, Abdalla; Elsanosy, Rashad; Mohammed, Umar Yagoub

    2016-01-01

    Smoking is responsible for the deaths of a substantial number of people and increases the likelihood of cancer and cardiovascular disease. Although data have shown high prevalence rates of cigarette smoking in Saudi Arabia, relatively little is known about its broader scope. The objectives of this study were to investigate the socio-demographic factors, patterns of use and cessation behavior associated with smoking in Saudi Arabia (KSA). The study utilized a cross-sectional, multi-step sampling design. Residents (N=1,497; aged 15 years and older) were recruited from seven administrative areas in Southwest Saudi Arabia. A pretested questionnaire was used to obtain data on participants' cigarette smoking, including daily use, age, education, income, marital status and employment status. The current study is the first of its kind to gather data on the cessation behavior of Saudi subjects. With the exception of 1.5% who were female, all respondents were male. The majority of respondents were married, had a university-level education, were employed, and were younger than 34 years old. The same trends were observed in the subsample of smokers. The current prevalence of cigarette smoking was 49.2%, and 65.7% of smokers had started smoking before 18 years of age. The mean daily use among smokers was 7.98 cigarettes (SD=4.587). More than 50% of the study sample had tried at least once to quit smoking, whereas 42% of participating smokers had never tried to quit. About 25% of respondents were willing to consider quitting smoking in the future. Modeling of cigarette smoking suggested that the most significant independent predictors of smoking behavior were geographic area, gender, marital status, education, occupation and age. Considerable variation in smoking prevalence was noted in relation to participant sociodemographics. The findings indicate the need for control and intervention programs in the Saudi community.

  2. A multi-step pathway connecting short sleep duration to daytime somnolence, reduced attention, and poor academic performance: an exploratory cross-sectional study in teenagers.

    PubMed

    Perez-Lloret, Santiago; Videla, Alejandro J; Richaudeau, Alba; Vigo, Daniel; Rossi, Malco; Cardinali, Daniel P; Perez-Chada, Daniel

    2013-05-15

    A multi-step causal pathway can be envisaged connecting short sleep duration to daytime somnolence and sleepiness, leading to reduced attention and, ultimately, poor academic performance. However, this hypothesis has never been explored. Our aim was to explore consecutive correlations between sleep duration, daytime somnolence, attention levels, and academic performance in a sample of school-aged teenagers. We carried out a survey assessing sleep duration and daytime somnolence using the Pediatric Daytime Sleepiness Scale (PDSS). Sleep duration variables included weekdays' total sleep time, usual bedtimes, and the absolute weekday-to-weekend sleep-time difference. Attention was assessed by the d2 test and by the coding subtest of the WISC-IV scale. Academic performance was obtained from literature and math grades. Structural equation modeling was used to assess the independent relationships between these variables, while controlling for the confounding effects of other variables, in a single model. Standardized regression weights (SRW) for the relationships between these variables are reported. The study sample included 1,194 teenagers (mean age: 15 years; range: 13-17 years). Sleep duration was inversely associated with daytime somnolence (SRW = -0.36, p < 0.01), while sleepiness was negatively associated with attention (SRW = -0.13, p < 0.01). Attention scores correlated positively with academic results (SRW = 0.18, p < 0.01), and daytime somnolence correlated negatively with academic achievement (SRW = -0.16, p < 0.01). The model offered an acceptable fit according to the usual measures (RMSEA = 0.0548, CFI = 0.874, NFI = 0.838). A Sobel test confirmed that short sleep duration influenced attention through daytime somnolence (p < 0.02), which in turn influenced academic achievement through reduced attention (p < 0.002). Poor academic achievement correlated with reduced attention, which in turn was related to daytime somnolence, and somnolence correlated with short sleep duration.

  3. Serological diagnosis of autoimmune bullous skin diseases: prospective comparison of the BIOCHIP mosaic-based indirect immunofluorescence technique with the conventional multi-step single test strategy.

    PubMed

    van Beek, Nina; Rentzsch, Kristin; Probst, Christian; Komorowski, Lars; Kasperkiewicz, Michael; Fechner, Kai; Bloecker, Inga M; Zillikens, Detlef; Stöcker, Winfried; Schmidt, Enno

    2012-08-09

    Various antigen-specific immunoassays are available for the serological diagnosis of autoimmune bullous diseases. However, a spectrum of different tissue-based and monovalent antigen-specific assays is required to establish the diagnosis. BIOCHIP mosaics consisting of different antigen substrates allow polyvalent immunofluorescence (IF) tests and provide antibody profiles in a single incubation. Slides for indirect IF were prepared containing BIOCHIPs with the following test substrates in each reaction field: monkey esophagus, primate salt-split skin, antigen dots of tetrameric BP180-NC16A, as well as desmoglein 1-, desmoglein 3-, and BP230gC-expressing human HEK293 cells. This BIOCHIP mosaic was probed using a large panel of sera from patients with pemphigus vulgaris (PV, n=65), pemphigus foliaceus (PF, n=50), bullous pemphigoid (BP, n=42), and non-inflammatory skin diseases (n=97), as well as from healthy blood donors (n=100). Furthermore, to evaluate its usability in routine diagnostics, 454 consecutive sera from patients with suspected immunobullous disorders were prospectively analyzed in parallel using a) the IF BIOCHIP mosaic and b) a panel of single antibody assays as commonly used by specialized centers. Using the BIOCHIP mosaic, the sensitivities of the desmoglein 1-, desmoglein 3-, and NC16A-specific substrates were 90%, 98.5% and 100%, respectively. BP230 was recognized by 54% of the BP sera. Specificities ranged from 98.2% to 100% for all substrates. In the prospective study, high agreement was found between the results obtained by the BIOCHIP mosaic and the single test panel for the diagnosis of BP, PV, PF, and sera without autoantibodies (Cohen's κ between 0.88 and 0.97). The BIOCHIP mosaic contains sensitive and specific substrates for the indirect IF diagnosis of BP, PF, and PV. Its diagnostic accuracy is comparable with that of the conventional multi-step approach. The highly standardized and practical BIOCHIP mosaic will facilitate the serological diagnosis of autoimmune bullous diseases.

  4. Evaluation of glycodendron and synthetically-modified dextran clearing agents for multi-step targeting of radioisotopes for molecular imaging and radioimmunotherapy

    PubMed Central

    Cheal, Sarah M.; Yoo, Barney; Boughdad, Sarah; Punzalan, Blesida; Yang, Guangbin; Dilhas, Anna; Torchon, Geralda; Pu, Jun; Axworthy, Don B.; Zanzonico, Pat; Ouerfelli, Ouathek; Larson, Steven M.

    2014-01-01

    A series of N-acetylgalactosamine dendrons (NAG-dendrons) and dextrans bearing biotin moieties were compared for their ability to complex with and sequester the circulating bispecific anti-tumor antibody (scFv4)-streptavidin (SA) fusion protein (scFv4-SA) in vivo, to improve tumor-to-normal-tissue concentration ratios for targeted radioimmunotherapy and diagnosis. Specifically, a total of five NAG-dendrons employing a common synthetic scaffold structure containing 4, 8, 16, or 32 carbohydrate residues and a single biotin moiety were prepared (NAGB), and for comparative purposes, a biotinylated dextran with an average molecular weight (MW) of 500 kD was synthesized from amino-dextran (DEXB). One of the NAGB compounds, CA16, has been investigated in humans; our aim was to determine whether other NAGB analogs (e.g. CA8 or CA4) were bioequivalent to CA16 and/or better suited as MST reagents. In vivo studies included dynamic positron-emission tomography (PET) imaging of 124I-labelled scFv4-SA clearance and dual-label biodistribution studies following multi-step targeting (MST) directed at subcutaneous (s.c.) human colon adenocarcinoma xenografts in mice. The MST protocol consists of three injections: first, a bispecific single-chain antibody specific for the tumor-associated glycoprotein TAG-72, genetically fused with SA (scFv4-SA); second, CA16 or another clearing agent; and third, radiolabeled biotin. We observed using PET imaging of 124I-labelled scFv4-SA clearance that the spatial arrangement of the ligands conjugated to NAG (i.e. biotin) can impact the binding to the antibody in circulation and the subsequent liver uptake of the NAG-antibody complex. Also, NAGB CA32-LC or CA16-LC can be utilized during MST to achieve tumor-to-blood ratios and absolute tumor uptake comparable to those seen previously with CA16. Finally, DEXB was equally effective as NAGB CA32-LC at lowering scFv4-SA in circulation, but at the expense of reduced absolute tumor uptake of radiolabeled biotin. PMID:24219178

  5. Serological diagnosis of autoimmune bullous skin diseases: Prospective comparison of the BIOCHIP mosaic-based indirect immunofluorescence technique with the conventional multi-step single test strategy

    PubMed Central

    2012-01-01

    Background Various antigen-specific immunoassays are available for the serological diagnosis of autoimmune bullous diseases. However, a spectrum of different tissue-based and monovalent antigen-specific assays is required to establish the diagnosis. BIOCHIP mosaics consisting of different antigen substrates allow polyvalent immunofluorescence (IF) tests and provide antibody profiles in a single incubation. Methods Slides for indirect IF were prepared, containing BIOCHIPs with the following test substrates in each reaction field: monkey esophagus, primate salt-split skin, antigen dots of tetrameric BP180-NC16A, as well as desmoglein 1-, desmoglein 3-, and BP230gC-expressing human HEK293 cells. This BIOCHIP mosaic was probed using a large panel of sera from patients with pemphigus vulgaris (PV, n = 65), pemphigus foliaceus (PF, n = 50), bullous pemphigoid (BP, n = 42), and non-inflammatory skin diseases (n = 97), as well as from healthy blood donors (n = 100). Furthermore, to evaluate the usability in routine diagnostics, 454 consecutive sera from patients with suspected immunobullous disorders were prospectively analyzed in parallel using a) the IF BIOCHIP mosaic and b) a panel of single antibody assays as commonly used by specialized centers. Results Using the BIOCHIP mosaic, the sensitivities of the desmoglein 1-, desmoglein 3-, and NC16A-specific substrates were 90%, 98.5% and 100%, respectively. BP230 was recognized by 54% of the BP sera. Specificities ranged from 98.2% to 100% for all substrates. In the prospective study, high agreement was found between the results obtained by the BIOCHIP mosaic and the single test panel for the diagnosis of BP, PV, PF, and sera without autoantibodies (Cohen’s κ between 0.88 and 0.97). Conclusions The BIOCHIP mosaic contains sensitive and specific substrates for the indirect IF diagnosis of BP, PF, and PV. Its diagnostic accuracy is comparable with that of the conventional multi-step approach. The highly standardized and practical BIOCHIP mosaic will facilitate the serological diagnosis of autoimmune bullous diseases.

  6. Hierarchical models and iterative optimization of hybrid systems

    SciTech Connect

    Rasina, Irina V.; Baturina, Olga V.; Nasatueva, Soelma N.

    2016-06-08

    A class of hybrid control systems based on a two-level discrete-continuous model is considered. The concept of this model was proposed and developed in preceding works as a concretization of the general multi-step system with related optimality conditions. A new iterative optimization procedure for such systems is developed, based on localization of the global optimality conditions via contraction of the control set.

  7. An iterated greedy algorithm for the single-machine total weighted tardiness problem with sequence-dependent setup times

    NASA Astrophysics Data System (ADS)

    Deng, Guanlong; Gu, Xingsheng

    2014-03-01

    This article presents an enhanced iterated greedy (EIG) algorithm that searches both insert and swap neighbourhoods for the single-machine total weighted tardiness problem with sequence-dependent setup times. Novel elimination rules and speed-ups are proposed for the swap move, reducing its computational expense enough to make the swap neighbourhood worthwhile. Moreover, a new perturbation operator is designed as a substitute for the existing destruction and construction procedures, to prevent the search from being trapped in local optima. To validate the proposed algorithm, computational experiments are conducted on a benchmark set from the literature. The results show that the EIG outperforms the existing state-of-the-art algorithms for the considered problem.
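
    As a reference point for what EIG enhances, a baseline iterated greedy loop for this problem can be sketched as follows. This is a minimal illustration, not the EIG algorithm: the destruction size, the acceptance rule, and the convention that s[j][j] acts as the initial setup of job j are assumptions made for the example.

      import random

      def twt(seq, p, w, d, s):
          """Total weighted tardiness of `seq` (processing times p, weights
          w, due dates d, setup matrix s; s[j][j] is job j's initial setup)."""
          t, total, prev = 0.0, 0.0, None
          for j in seq:
              t += (s[prev][j] if prev is not None else s[j][j]) + p[j]
              total += w[j] * max(0.0, t - d[j])
              prev = j
          return total

      def best_insertion(partial, job, p, w, d, s):
          cands = [partial[:k] + [job] + partial[k:]
                   for k in range(len(partial) + 1)]
          return min(cands, key=lambda q: twt(q, p, w, d, s))

      def iterated_greedy(jobs, p, w, d, s, n_iter=500, d_size=3, seed=0):
          rng = random.Random(seed)
          cur, best = list(jobs), list(jobs)
          for _ in range(n_iter):
              removed = rng.sample(cur, min(d_size, len(cur)))  # destruction
              seq = [j for j in cur if j not in removed]
              for j in removed:                         # greedy construction
                  seq = best_insertion(seq, j, p, w, d, s)
              if twt(seq, p, w, d, s) <= twt(cur, p, w, d, s):  # accept
                  cur = seq
              if twt(cur, p, w, d, s) < twt(best, p, w, d, s):
                  best = cur[:]
          return best

      jobs = [0, 1, 2, 3]
      p = [3, 2, 4, 1]; w = [1, 2, 1, 3]; d = [4, 6, 8, 3]
      s = [[1] * 4 for _ in range(4)]
      print(iterated_greedy(jobs, p, w, d, s))

    EIG replaces the destruction/construction pair above with its perturbation operator and adds an insert-plus-swap local search with elimination rules after each reconstruction.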

  8. A Greedy Scanning Data Collection Strategy for Large-Scale Wireless Sensor Networks with a Mobile Sink

    PubMed Central

    Zhu, Chuan; Zhang, Sai; Han, Guangjie; Jiang, Jinfang; Rodrigues, Joel J. P. C.

    2016-01-01

    Mobile sinks are widely used for data collection in wireless sensor networks. They can avoid 'hot spot' problems, but the energy consumed by multihop transmission remains inefficient in real-time application scenarios. In this paper, a greedy scanning data collection strategy (GSDCS) is proposed, focusing on reducing routing energy consumption by shortening the total length of the routing paths. We propose that the mobile sink adjust its trajectory dynamically according to changes in the network, instead of following a predetermined trajectory or a random walk: the mobile sink determines which area has more source nodes and moves toward that area. The benefit of GSDCS is that most source nodes no longer need to upload sensory data over long distances. Especially in event-driven application scenarios, when the event area changes, the mobile sink can move to the new event area where most source nodes are currently located, saving energy. Analytical and simulation results show that, compared with existing work, GSDCS performs better in specific application scenarios. PMID:27608022

  9. Multi-step surface functionalization of polyimide based evanescent wave photonic biosensors and application for DNA hybridization by Mach-Zehnder interferometer.

    PubMed

    Melnik, Eva; Bruck, Roman; Hainberger, Rainer; Lämmerhofer, Michael

    2011-08-12

    The process of surface functionalization involving silanization, biotinylation and streptavidin bonding, as a platform for biospecific ligand immobilization, was optimized for thin-film polyimide spin-coated silicon wafers, in which the polyimide film serves as the waveguiding layer of evanescent-wave photonic biosensors. This type of optical sensor makes great demands on the materials involved as well as on the layer properties, such as optical quality, layer thickness and surface roughness. In this work we realized the binding of 3-mercaptopropyltrimethoxysilane to an oxygen-plasma-activated polyimide surface, followed by derivatization of the reactive thiol groups with maleimide-PEG2-biotin and immobilization of streptavidin. The progress of the functionalization was monitored using different fluorescence labels to optimize the chemical derivatization steps. Further, X-ray photoelectron spectroscopy and atomic force microscopy were used to characterize the modified surface. These established analytical methods yielded information such as the chemical composition of the surface, the surface coverage with immobilized streptavidin, and parameters of the surface roughness. The proposed functionalization protocol furnished a surface density of 144 fmol/mm² streptavidin with good reproducibility (13.9% RSD, n=10) and without inflicting damage on the surface. This surface modification was applied to polyimide-based Mach-Zehnder interferometer (MZI) sensors to realize real-time measurement of streptavidin binding, validating the functionality of the MZI biosensor. Subsequently, this streptavidin surface was employed to immobilize biotinylated single-stranded DNA and used to monitor selective DNA hybridization. This proved the usability of polyimide-based evanescent photonic devices for biosensing applications.

  10. Production of 7α,15α-diOH-DHEA from dehydroepiandrosterone by Colletotrichum lini ST-1 through integrating glucose-feeding with multi-step substrate addition strategy.

    PubMed

    Li, Cong; Li, Hui; Sun, Jin; Zhang, XinYue; Shi, Jinsong; Xu, Zhenghong

    2016-08-01

    Hydroxylation of dehydroepiandrosterone (DHEA) to 3β,7α,15α-trihydroxy-5-androstene-17-one (7α,15α-diOH-DHEA) by Colletotrichum lini ST-1 is an essential step in the synthesis of many steroidal drugs, but the low tolerable DHEA concentration and low 7α,15α-diOH-DHEA yield are urgent problems for industry. In this study, a significant improvement of the 7α,15α-diOH-DHEA yield was achieved in a 5-L stirred fermenter at 15 g/L DHEA. To maintain a sufficient quantity of glucose for the bioconversion, 15 g/L glucose was fed at 18 h, increasing the 7α,15α-diOH-DHEA yield and dry cell weight by 17.7% and 30.9%, respectively. Moreover, a multi-step DHEA addition strategy was established to diminish DHEA toxicity to C. lini, raising the 7α,15α-diOH-DHEA yield to 53.0%. Finally, a novel strategy integrating glucose feeding with multi-step addition of DHEA was carried out, and the product yield increased to 66.6%, the highest reported 7α,15α-diOH-DHEA production in a 5-L stirred fermenter, while the conversion time was shortened to 44 h. This strategy offers a possible way of enhancing the 7α,15α-diOH-DHEA yield in the pharmaceutical industry.

  11. Discrete Particle Swarm Optimization Routing Protocol for Wireless Sensor Networks with Multiple Mobile Sinks

    PubMed Central

    Yang, Jin; Liu, Fagui; Cao, Jianneng; Wang, Liangming

    2016-01-01

    Mobile sinks can achieve load balancing and energy-consumption balancing across wireless sensor networks (WSNs). However, the frequent changes of the paths between source nodes and the sinks caused by sink mobility introduce significant overhead in terms of energy and packet delays. To enhance the network performance of WSNs with mobile sinks (MWSNs), we present an efficient routing strategy, formulated as an optimization problem, that employs the particle swarm optimization (PSO) algorithm to build optimal routing paths. However, conventional PSO is ill-suited to discrete routing optimization problems. Therefore, a novel greedy discrete particle swarm optimization with memory (GMDPSO) is put forward to address this problem. In the GMDPSO, the particle position and velocity of traditional PSO are redefined for the discrete MWSN scenario, and the particle updating rule is reworked around the subnetwork topology of MWSNs. Besides, by improving greedy forwarding routing, a greedy search strategy is designed to drive particles to better positions quickly. Furthermore, the search history is memorized to accelerate convergence. Simulation results demonstrate that our new protocol significantly improves robustness and adapts to rapid topological changes with multiple mobile sinks, while efficiently reducing communication overhead and energy consumption. PMID:27428971
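
    The greedy forwarding that GMDPSO builds on can be stated in a few lines: each node hands the packet to the neighbour geographically closest to the sink's current position, failing at routing voids. The coordinates and radio range below are illustrative assumptions, with none of GMDPSO's particle machinery.

      import math

      def dist(a, b):
          return math.hypot(a[0] - b[0], a[1] - b[1])

      def greedy_route(src, sink, positions, radio_range):
          """Node path from `src` toward the coordinate `sink`, or None if
          the packet hits a void (no neighbour is closer to the sink)."""
          path, current = [src], src
          while dist(positions[current], sink) > radio_range:
              neighbours = [n for n in positions if n != current
                            and dist(positions[n], positions[current]) <= radio_range]
              nxt = min(neighbours, key=lambda n: dist(positions[n], sink),
                        default=None)
              if nxt is None or dist(positions[nxt], sink) >= dist(positions[current], sink):
                  return None          # void: greedy forwarding is stuck
              path.append(nxt)
              current = nxt
          return path

      positions = {"a": (0, 0), "b": (4, 1), "c": (8, 2), "d": (12, 2)}
      print(greedy_route("a", (14.0, 2.0), positions, radio_range=5.0))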

  12. Combinatorial optimization methods for disassembly line balancing

    NASA Astrophysics Data System (ADS)

    McGovern, Seamus M.; Gupta, Surendra M.

    2004-12-01

    Disassembly takes place in remanufacturing, recycling, and disposal, with a line being the best choice for automation. The disassembly line balancing problem seeks a sequence that minimizes the number of workstations, ensures similar idle times, and is feasible. Finding the optimal balance is computationally intensive due to the factorial growth of the search space. Combinatorial optimization methods hold promise for the disassembly line balancing problem, which is proven to belong to the class of NP-complete problems. Ant colony optimization, genetic algorithm, and H-K metaheuristics are presented and compared, along with a greedy/hill-climbing heuristic hybrid. A numerical study is performed to illustrate the implementation and compare performance. Conclusions drawn include the consistent generation of optimal or near-optimal solutions, the ability to preserve precedence, the speed of the techniques, and their practicality due to ease of implementation.
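
    As a concrete reference point for the greedy/hill-climbing hybrid mentioned above, a plain constructive greedy for line balancing can be sketched as follows. The task data, the longest-task-first tie-break, and the assumption that every task fits within the cycle time are illustrative; the paper's precedence-preserving metaheuristics are far more elaborate.

      def greedy_balance(times, preds, cycle_time):
          """times: {task: duration}; preds: {task: set of prerequisites}.
          Assumes every duration <= cycle_time. Returns station task lists."""
          done, stations = set(), []
          while len(done) < len(times):
              load, station = 0.0, []
              while True:
                  # tasks whose predecessors are done and that still fit
                  ready = [t for t in times if t not in done
                           and preds[t] <= done
                           and load + times[t] <= cycle_time]
                  if not ready:
                      break
                  t = max(ready, key=lambda t: times[t])  # longest first
                  station.append(t)
                  done.add(t)
                  load += times[t]
              stations.append(station)
          return stations

      times = {"A": 4, "B": 3, "C": 5, "D": 2, "E": 4}
      preds = {"A": set(), "B": {"A"}, "C": {"A"}, "D": {"B", "C"}, "E": {"D"}}
      print(greedy_balance(times, preds, cycle_time=8))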

  13. First Experience with Clinical-Grade [18F]FPP(RGD)2: An Automated Multi-step Radiosynthesis for Clinical PET Studies

    PubMed Central

    Chin, Frederick T.; Shen, Bin; Liu, Shuanglong; Berganos, Rhona A.; Chang, Edwin; Mittra, Erik; Chen, Xiaoyuan; Gambhir, Sanjiv S.

    2013-01-01

    Purpose A reliable and routine process to introduce a new 18F-labeled dimeric RGD-peptide tracer ([18F]FPP(RGD)2) for noninvasive imaging of αvβ3 expression in tumors needed to be developed so that the tracer could be evaluated for the first time in man. Clinical-grade [18F]FPP(RGD)2 was screened in mice prior to our first pilot study in humans. Procedures [18F]FPP(RGD)2 was synthesized by coupling 4-nitrophenyl-2-[18F]fluoropropionate ([18F]NPE) with the dimeric RGD-peptide (PEG3-c(RGDyK)2). Imaging studies with [18F]FPP(RGD)2 in normal mice and a healthy human volunteer were carried out using small-animal and clinical PET scanners, respectively. Results Through optimization of each radiosynthetic step, [18F]FPP(RGD)2 was obtained with RCYs of 16.9±2.7% (n=8, EOB) and a specific radioactivity of 114±72 GBq/μmol (3.08±1.95 Ci/μmol; n=8, EOB) after 170 min of radiosynthesis. In our mouse studies, high radioactivity uptake was only observed in the kidneys and bladder with the clinical-grade tracer. The favorable biodistribution of [18F]FPP(RGD)2 in human studies, with low background signal in the head, neck, and thorax, showed the potential applications of this RGD-peptide tracer for detecting and monitoring tumor growth and metastasis. Conclusions A reliable, routine, and automated radiosynthesis of clinical-grade [18F]FPP(RGD)2 was established. PET imaging in a healthy human volunteer illustrates that [18F]FPP(RGD)2 possesses desirable pharmacokinetic properties for clinical noninvasive imaging of αvβ3 expression. Further imaging studies using [18F]FPP(RGD)2 in patient volunteers are now under active investigation. PMID:21400112

  14. First experience with clinical-grade [¹⁸F]FPP(RGD)₂: an automated multi-step radiosynthesis for clinical PET studies.

    PubMed

    Chin, Frederick T; Shen, Bin; Liu, Shuanglong; Berganos, Rhona A; Chang, Edwin; Mittra, Erik; Chen, Xiaoyuan; Gambhir, Sanjiv S

    2012-02-01

    A reliable and routine process to introduce a new ¹⁸F-labeled dimeric RGD-peptide tracer ([¹⁸F]FPP(RGD)₂) for noninvasive imaging of α(v)β₃ expression in tumors needed to be developed so that the tracer could be evaluated for the first time in man. Clinical-grade [¹⁸F]FPP(RGD)₂ was screened in mice prior to our first pilot study in humans. [¹⁸F]FPP(RGD)₂ was synthesized by coupling 4-nitrophenyl-2-[¹⁸F]fluoropropionate ([¹⁸F]NPE) with the dimeric RGD-peptide (PEG₃-c(RGDyK)₂). Imaging studies with [¹⁸F]FPP(RGD)₂ in normal mice and a healthy human volunteer were carried out using small-animal and clinical PET scanners, respectively. Through optimization of each radiosynthetic step, [¹⁸F]FPP(RGD)₂ was obtained with RCYs of 16.9 ± 2.7% (n = 8, EOB) and a specific radioactivity of 114 ± 72 GBq/μmol (3.08 ± 1.95 Ci/μmol; n = 8, EOB) after 170 min of radiosynthesis. In our mouse studies, high radioactivity uptake was only observed in the kidneys and bladder with the clinical-grade tracer. The favorable biodistribution of [¹⁸F]FPP(RGD)₂ in human studies, with low background signal in the head, neck, and thorax, showed the potential applications of this RGD-peptide tracer for detecting and monitoring tumor growth and metastasis. A reliable, routine, and automated radiosynthesis of clinical-grade [¹⁸F]FPP(RGD)₂ was established. PET imaging in a healthy human volunteer illustrates that [¹⁸F]FPP(RGD)₂ possesses desirable pharmacokinetic properties for clinical noninvasive imaging of α(v)β₃ expression. Further imaging studies using [¹⁸F]FPP(RGD)₂ in patient volunteers are now under active investigation.

  15. Optimization of Hydroacoustic Equipment Deployments at Lookout Point and Cougar Dams, Willamette Valley Project, 2010

    SciTech Connect

    Johnson, Gary E.; Khan, Fenton; Ploskey, Gene R.; Hughes, James S.; Fischer, Eric S.

    2010-08-18

    The goal of the study was to optimize performance of the fixed-location hydroacoustic systems at Lookout Point Dam (LOP) and the acoustic imaging system at Cougar Dam (CGR) by determining deployment and data acquisition methods that minimized structural, electrical, and acoustic interference. The general approach was a multi-step process from mount design to final system configuration. The optimization effort resulted in successful deployments of hydroacoustic equipment at LOP and CGR.

  16. Multi-step contrast sensitivity gauge

    DOEpatents

    Quintana, Enrico C; Thompson, Kyle R; Moore, David G; Heister, Jack D; Poland, Richard W; Ellegood, John P; Hodges, George K; Prindville, James E

    2014-10-14

    An X-ray contrast sensitivity gauge is described herein. The contrast sensitivity gauge comprises a plurality of steps of varying thicknesses. Each step in the gauge includes a plurality of recesses of differing depths, wherein the depths are a function of the thickness of their respective step. An X-ray image of the gauge is analyzed to determine a contrast-to-noise ratio of a detector employed to generate the image.
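
    The image analysis the gauge supports reduces to a contrast-to-noise computation of the following kind, sketched here with synthetic data and illustrative region choices rather than the patent's procedure:

      import numpy as np

      def cnr(image, recess_mask, step_mask):
          """CNR = |mean(recess) - mean(step)| / std(step)."""
          recess, step = image[recess_mask], image[step_mask]
          return abs(recess.mean() - step.mean()) / step.std()

      rng = np.random.default_rng(0)
      img = rng.normal(100.0, 2.0, (64, 64))      # noisy flat background
      img[20:30, 20:30] += 5.0                    # a synthetic "recess"
      recess = np.zeros_like(img, dtype=bool); recess[20:30, 20:30] = True
      step = np.zeros_like(img, dtype=bool); step[40:60, 40:60] = True
      print(round(float(cnr(img, recess, step)), 2))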

  17. LC-MS analysis of polyclonal human anti-Neu5Gc Xeno-autoantibodies IgG subclass and partial sequence using multi-step IVIG affinity purification and multi-enzymatic digestion

    PubMed Central

    Lu, Qiaozhen; Padler-Karavani, Vered; Yu, Hai; Chen, Xi; Wu, Shiaw-Lin; Varki, Ajit; Hancock, William S.

    2014-01-01

    Human polyclonal IgG antibodies directed against the non-human sialic acid N-glycolylneuraminic acid (Neu5Gc) are potential biomarkers and mechanistic contributors to cancer and other diseases associated with chronic inflammation. Using a sialoglycan microarray, we screened the binding patterns of such antibodies (anti-Neu5Gc IgG) in several samples of clinically approved human IVIG (IgG). These results were used to select an appropriate sample for multi-step affinity purification of the xeno-autoantibody fraction. The sample was then analyzed via our multi-enzyme digestion procedure followed by nanoLC coupled to LTQ-FTMS. We used characteristic and unique peptide sequences to determine the IgG subclass distribution, providing direct evidence that all four IgG subclasses can be generated during a xeno-autoantibody immune response to carbohydrate Neu5Gc antigens. Furthermore, we obtained significant sequence coverage of both the constant and variable regions. The approach described here therefore provides a way to characterize these clinically significant antibodies, helping to understand their origins and significance. PMID:22390546

  18. Multi-step cure kinetic model of ultra-thin glass fiber epoxy prepreg exhibiting both autocatalytic and diffusion-controlled regimes under isothermal and dynamic-heating conditions

    NASA Astrophysics Data System (ADS)

    Kim, Ye Chan; Min, Hyunsung; Hong, Sungyong; Wang, Mei; Sun, Hanna; Park, In-Kyung; Choi, Hyouk Ryeol; Koo, Ja Choon; Moon, Hyungpil; Kim, Kwang J.; Suhr, Jonghwan; Nam, Jae-Do

    2017-08-01

    As packaging technologies are required to reduce the assembly area of the substrate, thin composite laminate substrates demand extremely high performance in material properties such as the coefficient of thermal expansion (CTE) and stiffness. Accordingly, thermosetting resin systems, which combine multiple fillers, monomers and/or catalysts in thermoset-based glass fiber prepregs, are extremely complicated, and their rheological properties depend closely on the temperature cycles used for cure. For process control of these complex systems, a reliable kinetic model is usually required that can handle complex thermal cycles comprising both isothermal and dynamic-heating segments. In this study, an ultra-thin prepreg with highly loaded silica beads and glass fibers in an epoxy/amine resin system was investigated as a model system through isothermal and dynamic-heating experiments. The maximum degree of cure was obtained as a function of temperature. The curing kinetics of the model prepreg system exhibited a multi-step reaction and a limited conversion as a function of the isothermal curing temperature, as often observed in epoxy cure systems because of the rate-determining diffusion of polymer chain growth. The modified kinetic equation accurately described the isothermal behavior and the beginning of the dynamic-heating behavior by integrating the obtained maximum degree of cure into the kinetic model.
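
    A common way to write such a modified model, sketched here as an assumption since the abstract does not give the exact functional form, is a Kamal-type autocatalytic rate law whose conversion limit is capped by the temperature-dependent maximum degree of cure α_max(T):

      \frac{d\alpha}{dt} = \left(k_1 + k_2\,\alpha^{m}\right)\bigl(\alpha_{\max}(T) - \alpha\bigr)^{n},
      \qquad k_i = A_i \exp\!\left(-\frac{E_i}{RT}\right)

    Replacing the usual (1 - α)^n factor with (α_max(T) - α)^n is one standard device for making the reaction stall at the diffusion-limited conversion observed under isothermal cure.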

  19. Priority areas for anuran conservation using biogeographical data: a comparison of greedy, rarity, and simulated annealing algorithms to define reserve networks in cerrado.

    PubMed

    Diniz-Filho, J A F; Bini, L M; Bastos, R P; Vieira, C M; Vieira, L C G

    2005-05-01

    Spatial patterns of biodiversity variation at a regional scale are rarely taken into account when a natural reserve is to be established, despite the many methods available for determining them. In this paper, we used occurrence data for 105 species of Anura (Amphibia) in the cerrado region of central Brazil to create a regional system of potential areas preserving all regional diversity, using three different algorithms to establish reserve networks: greedy, rarity, and simulated annealing algorithms. Based on complementarity, these generated networks of 10, 12, and 8 regions, respectively, widely distributed in the biome and encompassing various Brazilian states. Although the purpose of these algorithms is to find a small number of regions in which all species are represented at least once, the results showed that 67.6%, 76.2%, and 69.5% of the species were represented in two or more regions in the three networks. Simulated annealing produced the smallest network, but it left out three species (one endemic). On the other hand, while the greedy algorithm produced a smaller solution, the rarity-based algorithm ensured that more species were represented more than once, which can be advantageous given the high levels of habitat loss in the cerrado. Although usually coarse, these macro-scale approaches can provide overall guidelines for conservation and are useful in focusing more local and effective conservation efforts, which is especially important for a taxonomic group such as anurans, for which quick and drastic population declines have been reported throughout the world.
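
    The greedy complementarity algorithm referred to above is the classic set-cover heuristic: at each step, select the region contributing the most species not yet represented. The sketch below uses illustrative species-by-region data, not the anuran dataset.

      def greedy_reserve(regions):
          """regions: {region: set of species present}. Greedily pick
          regions until every species is represented at least once."""
          uncovered = set().union(*regions.values())
          chosen = []
          while uncovered:
              best = max(regions, key=lambda r: len(regions[r] & uncovered))
              gain = regions[best] & uncovered
              if not gain:
                  break
              chosen.append(best)
              uncovered -= gain
          return chosen

      regions = {
          "R1": {"sp1", "sp2", "sp3"},
          "R2": {"sp3", "sp4"},
          "R3": {"sp4", "sp5", "sp6"},
          "R4": {"sp1", "sp6"},
      }
      print(greedy_reserve(regions))   # ['R1', 'R3'] covers all six species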

  20. Optimal interdiction of unreactive Markovian evaders

    SciTech Connect

    Hagberg, Aric; Pan, Feng; Gutfraind, Alex

    2009-01-01

    The interdiction problem arises in a variety of areas, including military logistics, infectious disease control, and counter-terrorism. In the typical formulation of network interdiction, the task of the interdictor is to find a set of edges in a weighted network such that the removal of those edges would increase the cost to an evader of traveling on a path through the network. Our work is motivated by cases in which the evader has incomplete information about the network or lacks planning time or computational power; e.g., when authorities set up roadblocks to catch bank robbers, the criminals do not know all the roadblock locations or the best path to use for their escape. We introduce a model of network interdiction in which the motion of one or more evaders is described by Markov processes on a network and the evaders are assumed not to react to interdiction decisions. The interdiction objective is to find a node or set of nodes, of size at most B, that maximizes the probability of capturing the evaders. We prove that, as in the classical formulation, this interdiction problem is NP-hard. But unlike the classical problem, our interdiction problem is submodular, and the optimal solution can be approximated within a factor of 1-1/e using a greedy algorithm. Additionally, we exploit submodularity to introduce a priority evaluation strategy that speeds up the greedy algorithm by orders of magnitude. Taken together, these results bring closer the goal of finding realistic solutions to the interdiction problem on global-scale networks.
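
    The priority evaluation strategy exploits a standard property of monotone submodular objectives: marginal gains can only shrink as the solution grows, so stale gains cached in a priority queue serve as upper bounds, and only the popped candidate needs re-evaluation (the well-known lazy-greedy device). The sketch below uses toy coverage as a stand-in for the paper's capture-probability objective.

      import heapq

      def lazy_greedy(items, gain, budget):
          """gain(item, chosen) -> marginal gain of adding item to chosen."""
          chosen = []
          heap = [(-gain(i, chosen), 0, i) for i in items]  # (neg gain, round, item)
          heapq.heapify(heap)
          while heap and len(chosen) < budget:
              neg, rnd, item = heapq.heappop(heap)
              if rnd == len(chosen):        # gain is current: safe to take
                  chosen.append(item)
              else:                         # stale: re-evaluate, push back
                  heapq.heappush(heap, (-gain(item, chosen), len(chosen), item))
          return chosen

      coverage = {"n1": {1, 2, 3}, "n2": {3, 4}, "n3": {4, 5, 6, 7}, "n4": {1, 7}}
      def covered(chosen):
          return set().union(*(coverage[c] for c in chosen)) if chosen else set()
      def gain(i, chosen):
          return len(coverage[i] - covered(chosen))
      print(lazy_greedy(list(coverage), gain, budget=2))    # ['n3', 'n1']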

  1. Ecological studies on the greedy scale, Hemiberlisia rapax (Comstock) (Homoptera: Diaspididae) on pear trees in burg El-Arab area, Alexandria, Egypt.

    PubMed

    Mesbah, H A; Moursi Khadiga, S; Mourad, A K; Abdel-Razak Soad, I

    2008-01-01

    The greedy scale, Hemiberlisia rapax (Comstock), causes economic damage on pear trees grown under irrigation in the Burg El-Arab area (50 km west of Alexandria). The infestation rate of H. rapax reached its first maximum from August to October, and its second from January to March. The first and highest peak of the insect population occurred during September and October, the second during January and February, and the third in April, in both of the two successive seasons studied. Statistical analysis was performed to relate the weather factors (mean daily temperature, daily relative humidity, wind speed, and dew point) to the population activity of H. rapax. The immature stages had two peaks of fluctuation, during October to November and July to August. The adult females reached their maximum rates during the winter and spring months. Adult males appeared in late March in small numbers. The insect was parasitized by Aphytis diaspidis (Howard) (Hymenoptera: Aphelinidae), with maximum numbers in June and July. This parasitoid had three overlapping generations throughout the year: the first in September-October, the second from March to May, and the third from July to September.

  2. Ant system: optimization by a colony of cooperating agents.

    PubMed

    Dorigo, M; Maniezzo, V; Colorni, A

    1996-01-01

    An analogy with the way ant colonies function has suggested the definition of a new computational paradigm, which we call the ant system (AS). We propose it as a viable new approach to stochastic combinatorial optimization. The main characteristics of this model are positive feedback, distributed computation, and the use of a constructive greedy heuristic. Positive feedback accounts for rapid discovery of good solutions, distributed computation avoids premature convergence, and the greedy heuristic helps find acceptable solutions in the early stages of the search process. We apply the proposed methodology to the classical traveling salesman problem (TSP) and report simulation results. We also discuss parameter selection and the early setups of the model, and compare it with tabu search and simulated annealing on the TSP. To demonstrate the robustness of the approach, we show how the ant system can be applied to other optimization problems, such as the asymmetric traveling salesman problem, quadratic assignment, and job-shop scheduling. Finally, we discuss the salient characteristics of the AS: global data-structure revision, distributed communication, and probabilistic transitions.
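
    The AS scheme for the TSP can be condensed into a short sketch: ants construct tours probabilistically from pheromone and inverse-distance weights, and pheromone is then evaporated and deposited in proportion to tour quality. The parameter values below are illustrative, not the settings studied in the paper.

      import math, random

      def ant_system(coords, n_ants=10, n_iters=50, alpha=1.0, beta=3.0,
                     rho=0.5, Q=1.0, seed=0):
          rng = random.Random(seed)
          n = len(coords)
          d = [[math.dist(coords[i], coords[j]) or 1e-9 for j in range(n)]
               for i in range(n)]
          tau = [[1.0] * n for _ in range(n)]          # pheromone trails
          best_tour, best_len = None, float("inf")
          for _ in range(n_iters):
              tours = []
              for _ in range(n_ants):
                  tour = [rng.randrange(n)]
                  while len(tour) < n:                 # build tour city by city
                      i = tour[-1]
                      cand = [j for j in range(n) if j not in tour]
                      w = [tau[i][j] ** alpha * (1.0 / d[i][j]) ** beta
                           for j in cand]
                      tour.append(rng.choices(cand, weights=w)[0])
                  length = sum(d[tour[k]][tour[(k + 1) % n]] for k in range(n))
                  tours.append((length, tour))
                  if length < best_len:
                      best_len, best_tour = length, tour
              for i in range(n):                       # evaporation
                  for j in range(n):
                      tau[i][j] *= (1.0 - rho)
              for length, tour in tours:               # deposit on used edges
                  for k in range(n):
                      a, b = tour[k], tour[(k + 1) % n]
                      tau[a][b] += Q / length
                      tau[b][a] += Q / length
          return best_tour, best_len

      pts = [(0, 0), (1, 0), (1, 1), (0, 1), (0.5, 1.5)]
      print(ant_system(pts))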

  3. Shape optimization techniques for musical instrument design

    NASA Astrophysics Data System (ADS)

    Henrique, Luis; Antunes, Jose; Carvalho, Joao S.

    2002-11-01

    The design of musical instruments is still mostly based on empirical knowledge and costly experimentation. One interesting improvement is the shape optimization of resonating components, given a number of constraints (allowed parameter ranges, shape smoothness, etc.), so that vibrations occur at specified modal frequencies. Each admissible geometrical configuration generates an error between the computed eigenfrequencies and the target set. Typically, error surfaces present many local minima, corresponding to suboptimal designs. This difficulty can be overcome using global optimization techniques, such as simulated annealing. However, these methods are greedy in terms of the number of function evaluations required, so the computational effort can be unacceptable when complex problems, such as bell optimization, are tackled. Those issues are addressed in this paper, and a method for improving optimization procedures is proposed. Instead of using the local geometric parameters as search variables, the system geometry is modeled in terms of truncated series of orthogonal space functions, and optimization is performed on their amplitude coefficients. Fourier series and orthogonal polynomials are typical such functions. This technique considerably reduces the number of search variables, and has the potential for significant computational savings in complex problems. It is illustrated by optimizing the shapes of both current and uncommon marimba bars.

  4. Approximating random quantum optimization problems

    NASA Astrophysics Data System (ADS)

    Hsu, B.; Laumann, C. R.; Läuchli, A. M.; Moessner, R.; Sondhi, S. L.

    2013-06-01

    We report a cluster of results regarding the difficulty of finding approximate ground states of typical instances of the k-body quantum satisfiability problem (k-QSAT) on large random graphs. As an approximation strategy, we optimize over the space of “classical” product states, which in turn introduces a novel autonomous classical optimization problem, PSAT, over a space of continuous degrees of freedom rather than discrete bits. Our central results are (i) the derivation of a set of bounds and approximations in various limits of the problem, several of which we believe may be amenable to a rigorous treatment; (ii) a demonstration that an approximation based on a greedy algorithm borrowed from the study of frustrated magnetism performs well over a wide range in parameter space, with its performance reflecting the structure of the solution space of random k-QSAT (simulated annealing exhibits metastability in similar “hard” regions of parameter space); and (iii) a generalization of belief propagation algorithms introduced for classical problems to the case of continuous spins. This yields both approximate solutions and insights into the free energy “landscape” of the approximation problem, including a so-called dynamical transition near the satisfiability threshold. Taken together, these results allow us to elucidate the phase diagram of random k-QSAT in a two-dimensional energy-density-clause-density space.

  5. Image-driven mesh optimization

    SciTech Connect

    Lindstrom, P; Turk, G

    2001-01-05

    We describe a method of improving the appearance of a low vertex count mesh in a manner that is guided by rendered images of the original, detailed mesh. This approach is motivated by the fact that greedy simplification methods often yield meshes that are poorer than what can be represented with a given number of vertices. Our approach relies on edge swaps and vertex teleports to alter the mesh connectivity, and uses the downhill simplex method to simultaneously improve vertex positions and surface attributes. Note that this is not a simplification method--the vertex count remains the same throughout the optimization. At all stages of the optimization the changes are guided by a metric that measures the differences between rendered versions of the original model and the low vertex count mesh. This method creates meshes that are geometrically faithful to the original model. Moreover, the method takes into account more subtle aspects of a model such as surface shading or whether cracks are visible between two interpenetrating parts of the model.

  6. An energy-based perturbation and a taboo strategy for improving the searching ability of stochastic structural optimization methods

    NASA Astrophysics Data System (ADS)

    Cheng, Longjiu; Cai, Wensheng; Shao, Xueguang

    2005-03-01

    An energy-based perturbation and a new taboo strategy are proposed for structural optimization and applied to a benchmark problem, i.e., the optimization of Lennard-Jones (LJ) clusters. It is proved that the energy-based perturbation is much better than the traditional random perturbation both in convergence speed and searching ability when it is combined with a simple greedy method. By tabooing the most wide-spread funnel instead of the visited solutions, the hit rate of other funnels can be significantly improved. Global minima of LJ clusters of up to 200 atoms are found with high efficiency.
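
    A minimal sketch of the combination described here, under two assumptions: the energy-based perturbation is taken to mean relocating the atom with the highest site energy, and the greedy method accepts a move only if the locally minimized energy improves. The cluster size and all numeric settings are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)
N = 13                                       # LJ13; known minimum is -44.3268

def lj_energy(flat):
    p = flat.reshape(N, 3)
    d = np.linalg.norm(p[:, None] - p[None, :], axis=-1)
    r = d[np.triu_indices(N, 1)]
    return float(np.sum(4.0 * (r**-12 - r**-6)))

def site_energies(p):
    d = np.linalg.norm(p[:, None] - p[None, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    return np.sum(4.0 * (d**-12 - d**-6), axis=1)

pos = rng.random((N, 3)) * 3.0               # crude random start
res = minimize(lj_energy, pos.ravel(), method="L-BFGS-B")
best, pos = res.fun, res.x.reshape(N, 3)

for step in range(200):
    trial = pos.copy()
    worst = int(np.argmax(site_energies(trial)))  # energy-based: move worst atom
    trial[worst] = trial.mean(axis=0) + rng.normal(scale=0.8, size=3)
    r = minimize(lj_energy, trial.ravel(), method="L-BFGS-B")
    if r.fun < best:                         # greedy: accept only improvements
        best, pos = r.fun, r.x.reshape(N, 3)

print(f"best LJ{N} energy found: {best:.4f}")
```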

  7. Hybrid Self-Adaptive Evolution Strategies Guided by Neighborhood Structures for Combinatorial Optimization Problems.

    PubMed

    Coelho, V N; Coelho, I M; Souza, M J F; Oliveira, T A; Cota, L P; Haddad, M N; Mladenovic, N; Silva, R C P; Guimarães, F G

    2016-01-01

    This article presents an Evolution Strategy (ES)-based algorithm, designed to self-adapt its mutation operators, guiding the search through the solution space using a Self-Adaptive Reduced Variable Neighborhood Search procedure. Given the individual-specific local search operators, the proposed population-based approach also fits into the context of Memetic Algorithms. The proposed variant uses the Greedy Randomized Adaptive Search Procedure with different greedy parameters for generating its initial population, providing an interesting exploration-exploitation balance. To validate the proposal, this framework is applied to solve three different NP-hard combinatorial optimization problems: an Open-Pit-Mining Operational Planning Problem with dynamic allocation of trucks, an Unrelated Parallel Machine Scheduling Problem with Setup Times, and the calibration of a hybrid fuzzy model for Short-Term Load Forecasting. Computational results point out the convergence of the proposed model and highlight its ability to combine move operations from distinct neighborhood structures throughout the optimization. The results gathered and reported in this article represent collective evidence of the performance of the method in challenging combinatorial optimization problems from different application domains. The proposed evolution strategy demonstrates an ability to adapt the strength of the mutation disturbance during the generations of its evolution process. The effectiveness of the proposal motivates the application of this novel evolutionary framework for solving other combinatorial optimization problems.

  8. Occluded object imaging via optimal camera selection

    NASA Astrophysics Data System (ADS)

    Yang, Tao; Zhang, Yanning; Tong, Xiaomin; Ma, Wenguang; Yu, Rui

    2013-12-01

    High-performance occluded object imaging in cluttered scenes is a significantly challenging task for many computer vision applications. Recently, camera-array synthetic aperture imaging has been shown to be an effective way of seeing objects through occlusion. However, the imaging quality of the occluded object is often significantly degraded by the shadows of the foreground occluder. Although some works have been presented to label the foreground occluder via object segmentation or 3D reconstruction, these methods fail in the case of complicated occluders and severe occlusion. In this paper, we present a novel optimal camera selection algorithm to solve the above problem. The main characteristics of this algorithm include: (1) Instead of synthetic aperture imaging, we formulate the occluded object imaging problem as an optimal camera selection and mosaicking problem. To the best of our knowledge, our proposed method is the first one for occluded object mosaicking. (2) A greedy optimization framework is presented to propagate the visibility information among various depth focus planes. (3) A multiple-label energy minimization formulation is designed in each plane to select the optimal camera. The energy is estimated in the synthetic aperture image volume and integrates multi-view intensity consistency, the previous visibility property, and camera view smoothness, and is minimized via graph cuts. We compare our method with state-of-the-art synthetic aperture imaging algorithms, and extensive experimental results with qualitative and quantitative analysis demonstrate the effectiveness and superiority of our approach.

  9. Nonconvex compressed sensing by nature-inspired optimization algorithms.

    PubMed

    Liu, Fang; Lin, Leping; Jiao, Licheng; Li, Lingling; Yang, Shuyuan; Hou, Biao; Ma, Hongmei; Yang, Li; Xu, Jinghuan

    2015-05-01

    The ℓ0-regularized problem in compressed sensing reconstruction is nonconvex, with NP-hard computational complexity. Methods available for such problems fall into one of two types: greedy pursuit methods and thresholding methods, which are characterized by suboptimal fast search strategies. Nature-inspired algorithms for combinatorial optimization are known for their efficient global search strategies and superior performance on nonconvex and nonlinear problems. In this paper, we study and propose nonconvex compressed sensing for natural images by nature-inspired optimization algorithms. Measurements are obtained by block-based compressed sampling, and an overcomplete ridgelet dictionary is introduced for image blocks. An atom of this dictionary is identified by its direction, scale, and shift parameters. Among these, the direction parameter is important for adapting to directional regularity, so we propose a two-stage reconstruction scheme (TS_RS) based on nature-inspired optimization algorithms. In the first reconstruction stage, we design a genetic algorithm for a class of image blocks to acquire an estimate of the atomic combinations in all directions; in the second reconstruction stage, we adopt a clonal selection algorithm to search for better atomic combinations, in the sub-dictionary produced by the first stage, further over the scale and shift parameters. In TS_RS, to reduce the uncertainty and instability of the reconstruction problems, we adopt novel and flexible heuristic search strategies, which include delicately designed initialization, operators, and evaluation methods. The experimental results show the efficiency and stability of the proposed TS_RS of nature-inspired algorithms, which outperforms classic greedy and thresholding methods.
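
    For reference, the greedy pursuit baseline mentioned above can be stated in a few lines; this is a generic orthogonal matching pursuit on synthetic data, not the paper's two-stage scheme.

```python
import numpy as np

def omp(A, b, k):
    """Orthogonal matching pursuit: greedy baseline for the l0 problem."""
    residual, support = b.copy(), []
    for _ in range(k):
        # Greedy step: pick the atom most correlated with the residual.
        j = int(np.argmax(np.abs(A.T @ residual)))
        if j not in support:
            support.append(j)
        # Re-fit all selected atoms by least squares; update the residual.
        coef, *_ = np.linalg.lstsq(A[:, support], b, rcond=None)
        residual = b - A[:, support] @ coef
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x

rng = np.random.default_rng(3)
A = rng.normal(size=(64, 256))
A /= np.linalg.norm(A, axis=0)               # unit-norm atoms
x_true = np.zeros(256)
x_true[rng.choice(256, 5, replace=False)] = rng.normal(size=5)
x_hat = omp(A, A @ x_true, k=5)
print(np.linalg.norm(x_hat - x_true))        # near zero on easy instances
```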

  10. Optimally Stopped Optimization

    NASA Astrophysics Data System (ADS)

    Vinci, Walter; Lidar, Daniel A.

    2016-11-01

    We combine the fields of heuristic optimization and optimal stopping. We propose a strategy for benchmarking randomized optimization algorithms that minimizes the expected total cost for obtaining a good solution with an optimal number of calls to the solver. To do so, rather than letting the objective function alone define a cost to be minimized, we introduce a further cost-per-call of the algorithm. We show that this problem can be formulated using optimal stopping theory. The expected cost is a flexible figure of merit for benchmarking probabilistic solvers that can be computed when the optimal solution is not known and that avoids the biases and arbitrariness that affect other measures. The optimal stopping formulation of benchmarking directly leads to a real-time optimal-utilization strategy for probabilistic optimizers with practical impact. We apply our formulation to benchmark simulated annealing on a class of maximum-2-satisfiability (MAX2SAT) problems. We also compare the performance of a D-Wave 2X quantum annealer to the Hamze-Freitas-Selby (HFS) solver, a specialized classical heuristic algorithm designed for low-tree-width graphs. On a set of frustrated-loop instances with planted solutions defined on up to N =1098 variables, the D-Wave device is 2 orders of magnitude faster than the HFS solver, and, modulo known caveats related to suboptimal annealing times, exhibits identical scaling with problem size.
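
    A toy rendering of the figure of merit, under simplifying assumptions: each solver call has a fixed cost, the solver's output is an i.i.d. random objective value, and one stops after R restarts, keeping the best value. Sweeping R exposes the cost-minimizing stopping point.

```python
import numpy as np

rng = np.random.default_rng(4)

def solver_run():
    # Stand-in for one call to a randomized solver: an objective value.
    return rng.gamma(shape=3.0, scale=1.0)

cost_per_call = 0.2
# Monte Carlo: 2000 independent experiments of 50 solver calls each.
samples = np.array([[solver_run() for _ in range(50)] for _ in range(2000)])

# Expected total cost = E[best-of-R objective] + R * cost_per_call.
for R in (1, 2, 5, 10, 20, 50):
    expected = samples[:, :R].min(axis=1).mean() + cost_per_call * R
    print(f"R={R:3d}  expected total cost = {expected:.3f}")
```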

  11. Optimal transport on supply-demand networks.

    PubMed

    Chen, Yu-Han; Wang, Bing-Hong; Zhao, Li-Chao; Zhou, Changsong; Zhou, Tao

    2010-06-01

    In the literature, transport networks are usually treated as homogeneous networks, that is, every node has the same function, simultaneously providing and requiring resources. However, some real networks, such as power grids and supply chain networks, show a far different scenario in which nodes are classified into two categories: supply nodes provide some kinds of services, while demand nodes require them. In this paper, we propose a general transport model for these supply-demand networks, associated with a criterion to quantify their transport capacities. In a supply-demand network with a heterogeneous degree distribution, the transport capacity strongly depends on the locations of the supply nodes. We therefore design a simulated annealing algorithm to find a near-optimal configuration of supply nodes, which remarkably enhances the transport capacity compared with a random configuration and outperforms the degree target algorithm, the betweenness target algorithm, and the greedy method. This work provides a starting point for systematically analyzing and optimizing transport dynamics on supply-demand networks.
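
    A minimal sketch of the simulated annealing placement, with an assumed toy capacity proxy (mean hop distance from each demand node to its nearest supply node) in place of the paper's transport criterion.

```python
import math
import random

import networkx as nx

random.seed(5)
G = nx.barabasi_albert_graph(100, 2, seed=5)   # heterogeneous degrees
n_supply = 5

def cost(supply):
    # Toy proxy: mean hop distance from demand nodes to the nearest supply.
    dist = {s: nx.single_source_shortest_path_length(G, s) for s in supply}
    return sum(min(dist[s][v] for s in supply)
               for v in G if v not in supply) / (len(G) - len(supply))

supply = set(random.sample(list(G), n_supply))
cur, T = cost(supply), 1.0
for step in range(2000):
    out_node = random.choice(sorted(supply))          # move one supply node
    in_node = random.choice([v for v in G if v not in supply])
    cand = (supply - {out_node}) | {in_node}
    c = cost(cand)
    # Metropolis rule: always accept improvements, sometimes accept worse.
    if c < cur or random.random() < math.exp((cur - c) / T):
        supply, cur = cand, c
    T *= 0.998                                        # geometric cooling

print(sorted(supply), round(cur, 3))
```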

  12. Offshore wind farm layout optimization

    NASA Astrophysics Data System (ADS)

    Elkinton, Christopher Neil

    Offshore wind energy technology is maturing in Europe and is poised to make a significant contribution to the U.S. energy production portfolio. Building on the knowledge the wind industry has gained to date, this dissertation investigates the influences of different site conditions on offshore wind farm micrositing---the layout of individual turbines within the boundaries of a wind farm. For offshore wind farms, these conditions include, among others, the wind and wave climates, water depths, and soil conditions at the site. An analysis tool has been developed that is capable of estimating the cost of energy (COE) from offshore wind farms. For this analysis, the COE has been divided into several modeled components: major costs (e.g. turbines, electrical interconnection, maintenance, etc.), energy production, and energy losses. By treating these component models as functions of site-dependent parameters, the analysis tool can investigate the influence of these parameters on the COE. Some parameters result in simultaneous increases of both energy and cost. In these cases, the analysis tool was used to determine the value of the parameter that yielded the lowest COE and, thus, the best balance of cost and energy. The models have been validated and generally compare favorably with existing offshore wind farm data. The analysis technique was then paired with optimization algorithms to form a tool with which to design offshore wind farm layouts for which the COE was minimized. Greedy heuristic and genetic optimization algorithms have been tuned and implemented. The use of these two algorithms in series has been shown to produce the best, most consistent solutions. The influences of site conditions on the COE have been studied further by applying the analysis and optimization tools to the initial design of a small offshore wind farm near the town of Hull, Massachusetts. The results of an initial full-site analysis and optimization were used to constrain the boundaries of

  13. Improving IMRT-plan quality with MLC leaf position refinement post plan optimization.

    PubMed

    Niu, Ying; Zhang, Guowei; Berman, Barry L; Parke, William C; Yi, Byongyong; Yu, Cedric X

    2012-08-01

    In intensity-modulated radiation therapy (IMRT) planning, reducing the pencil-beam size may lead to a significant improvement in dose conformity, but also increase the time needed for the dose calculation and plan optimization. The authors develop and evaluate a postoptimization refinement (POpR) method, which makes fine adjustments to the multileaf collimator (MLC) leaf positions after plan optimization, enhancing the spatial precision and improving the plan quality without a significant impact on the computational burden. The authors' POpR method is implemented using a commercial treatment planning system based on direct aperture optimization. After an IMRT plan is optimized using pencil beams with regular pencil-beam step size, a greedy search is conducted by looping through all of the involved MLC leaves to see if moving the MLC leaf in or out by half of a pencil-beam step size will improve the objective function value. The half-sized pencil beams, which are used for updating dose distribution in the greedy search, are derived from the existing full-sized pencil beams without need for further pencil-beam dose calculations. A benchmark phantom case and a head-and-neck (HN) case are studied for testing the authors' POpR method. Using a benchmark phantom and an HN case, the authors have verified that their POpR method can be an efficient technique in the IMRT planning process. Effectiveness of POpR is confirmed by noting significant improvements in objective function values. Dosimetric benefits of POpR are comparable to those of using a finer pencil-beam size from the optimization start, but with far less computation and time. The POpR is a feasible and practical method to significantly improve IMRT-plan quality without compromising the planning efficiency.
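
    The greedy sweep can be sketched as follows; the plan objective is replaced by a hypothetical quadratic stand-in, since a real implementation would evaluate dose through the treatment planning system.

```python
import numpy as np

rng = np.random.default_rng(6)
n_leaves = 40
step = 1.0                                    # regular pencil-beam step size
target = rng.normal(size=n_leaves)            # stand-in for the planning goal
leaf = np.round(rng.normal(size=n_leaves))    # optimized positions on the grid

def objective(pos):
    # Stand-in for the plan objective; a real system evaluates dose here.
    return float(np.sum((pos - target) ** 2))

improved, f_cur = True, objective(leaf)
while improved:                               # POpR-style greedy sweep
    improved = False
    for i in range(n_leaves):
        for delta in (-step / 2, step / 2):   # move leaf in or out half a step
            trial = leaf.copy()
            trial[i] += delta
            f_new = objective(trial)
            if f_new < f_cur:                 # keep the move only if it helps
                leaf, f_cur = trial, f_new
                improved = True

print(f"final objective: {f_cur:.4f}")
```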

  14. Improving IMRT-plan quality with MLC leaf position refinement post plan optimization

    PubMed Central

    Niu, Ying; Zhang, Guowei; Berman, Barry L.; Parke, William C.; Yi, Byongyong; Yu, Cedric X.

    2012-01-01

    Purpose: In intensity-modulated radiation therapy (IMRT) planning, reducing the pencil-beam size may lead to a significant improvement in dose conformity, but also increase the time needed for the dose calculation and plan optimization. The authors develop and evaluate a postoptimization refinement (POpR) method, which makes fine adjustments to the multileaf collimator (MLC) leaf positions after plan optimization, enhancing the spatial precision and improving the plan quality without a significant impact on the computational burden. Methods: The authors’ POpR method is implemented using a commercial treatment planning system based on direct aperture optimization. After an IMRT plan is optimized using pencil beams with regular pencil-beam step size, a greedy search is conducted by looping through all of the involved MLC leaves to see if moving the MLC leaf in or out by half of a pencil-beam step size will improve the objective function value. The half-sized pencil beams, which are used for updating dose distribution in the greedy search, are derived from the existing full-sized pencil beams without need for further pencil-beam dose calculations. A benchmark phantom case and a head-and-neck (HN) case are studied for testing the authors’ POpR method. Results: Using a benchmark phantom and a HN case, the authors have verified that their POpR method can be an efficient technique in the IMRT planning process. Effectiveness of POpR is confirmed by noting significant improvements in objective function values. Dosimetric benefits of POpR are comparable to those of using a finer pencil-beam size from the optimization start, but with far less computation and time. Conclusions: The POpR is a feasible and practical method to significantly improve IMRT-plan quality without compromising the planning efficiency. PMID:22894437

  15. Improving IMRT-plan quality with MLC leaf position refinement post plan optimization

    SciTech Connect

    Niu Ying; Zhang Guowei; Berman, Barry L.; Parke, William C.; Yi Byongyong; Yu, Cedric X.

    2012-08-15

    Purpose: In intensity-modulated radiation therapy (IMRT) planning, reducing the pencil-beam size may lead to a significant improvement in dose conformity, but also increase the time needed for the dose calculation and plan optimization. The authors develop and evaluate a postoptimization refinement (POpR) method, which makes fine adjustments to the multileaf collimator (MLC) leaf positions after plan optimization, enhancing the spatial precision and improving the plan quality without a significant impact on the computational burden. Methods: The authors' POpR method is implemented using a commercial treatment planning system based on direct aperture optimization. After an IMRT plan is optimized using pencil beams with regular pencil-beam step size, a greedy search is conducted by looping through all of the involved MLC leaves to see if moving the MLC leaf in or out by half of a pencil-beam step size will improve the objective function value. The half-sized pencil beams, which are used for updating dose distribution in the greedy search, are derived from the existing full-sized pencil beams without need for further pencil-beam dose calculations. A benchmark phantom case and a head-and-neck (HN) case are studied for testing the authors' POpR method. Results: Using a benchmark phantom and a HN case, the authors have verified that their POpR method can be an efficient technique in the IMRT planning process. Effectiveness of POpR is confirmed by noting significant improvements in objective function values. Dosimetric benefits of POpR are comparable to those of using a finer pencil-beam size from the optimization start, but with far less computation and time. Conclusions: The POpR is a feasible and practical method to significantly improve IMRT-plan quality without compromising the planning efficiency.

  16. An Automated, Multi-Step Monte Carlo Burnup Code System.

    SciTech Connect

    TRELLUE, HOLLY R.

    2003-07-14

    Version 02 MONTEBURNS Version 2 calculates coupled neutronic/isotopic results for nuclear systems and produces a large number of criticality and burnup results based on various material feed/removal specifications, power(s), and time intervals. MONTEBURNS is a fully automated tool that links the LANL MCNP Monte Carlo transport code with a radioactive decay and burnup code. Highlights on changes to Version 2 are listed in the transmittal letter. Along with other minor improvements in MONTEBURNS Version 2, the option was added to use CINDER90 instead of ORIGEN2 as the depletion/decay part of the system. CINDER90 is a multi-group depletion code developed at LANL and is not currently available from RSICC. This MONTEBURNS release was tested with various combinations of CCC-715/MCNPX 2.4.0, CCC-710/MCNP5, CCC-700/MCNP4C, CCC-371/ORIGEN2.2, ORIGEN2.1 and CINDER90. Perl is required software and is not included in this distribution. MCNP, ORIGEN2, and CINDER90 are not included.

  17. Multi-step heater deployment in a subsurface formation

    DOEpatents

    Mason, Stanley Leroy [Allen, TX

    2012-04-03

    A method for installing a horizontal or inclined subsurface heater includes placing a heating section of a heater in a horizontal or inclined section of a wellbore with an installation tool. The tool is uncoupled from the heating section. A lead-in section is mechanically and electrically coupled to the heating section of the heater. The lead-in section is located in an angled or vertical section of the wellbore.

  18. Detonation Diffraction in a Multi-Step Channel

    DTIC Science & Technology

    2010-12-01

    The available record is an excerpt of the report's front matter, covering Rankine-Hugoniot gas dynamic relations and the Zel'dovich-von Neumann-Doring (ZND) one-dimensional wave structure. Basic detonation models include the Chapman-Jouguet and ZND models; their main differences are tabulated in the report, which also presents Chapman-Jouguet tangency solutions.

  19. Information processing in multi-step signaling pathways

    NASA Astrophysics Data System (ADS)

    Ganesan, Ambhi; Hamidzadeh, Archer; Zhang, Jin; Levchenko, Andre

    Information processing in complex signaling networks is limited by a high degree of variability in the abundance and activity of biochemical reactions (biological noise) operating in living cells. In this context, it is particularly surprising that many signaling pathways found in eukaryotic cells are composed of long chains of biochemical reactions, which are expected to be subject to accumulating noise and delayed signal processing. Here, we challenge the notion that signaling pathways are insulated chains, and rather view them as parts of extensively branched networks, which can benefit from a low degree of interference between signaling components. We further establish conditions under which this pathway organization would limit noise accumulation, and provide evidence for this type of signal processing in an experimental model of a calcium-activated MAPK cascade. These results address the long-standing problem of diverse organization and structure of signaling networks in live cells.

  20. Multi-Step Production of a Diphoton Resonance

    SciTech Connect

    Dobrescu, Bogdan A.; Fox, Patrick J.; Kearney, John

    2016-05-27

    Assuming that the mass peak at 750 GeV reported by the ATLAS and CMS Collaborations is due to a spin-0 particle that decays into two photons, we present two weakly-coupled renormalizable models that lead to different production mechanisms. In one model, a scalar particle produced through gluon fusion decays into the diphoton particle and a light, long-lived pseudoscalar. In another model, a $Z'$ boson produced from the annihilation of a strange-antistrange quark pair undergoes a cascade decay that leads to the diphoton particle and two sterile neutrinos. We show that various kinematic distributions may differentiate these models from the canonical model where the diphoton particle is directly produced in gluon fusion.

  1. Multi-step soil washing to remove contaminants from soil

    SciTech Connect

    Skriba, M.C.

    1993-12-31

    The advantages of the soil washing approach for removing contaminants from soils are discussed. This report also describes two cases in which uranium and plutonium are dispersed in soils. Removal efficiencies are described.

  2. A variable multi-step method for transient heat conduction

    NASA Technical Reports Server (NTRS)

    Smolinski, Patrick

    1991-01-01

    A variable explicit time integration algorithm is developed for unsteady diffusion problems. The algorithm uses nodal partitioning and allows the nodal groups to be updated with different time steps. The stability of the algorithm is analyzed using energy methods and critical time steps are found in terms of element eigenvalues with no restrictions on element types. Several numerical examples are given to illustrate the accuracy of the method.

  3. A Particle Swarm Optimization-Based Approach with Local Search for Predicting Protein Folding.

    PubMed

    Yang, Cheng-Hong; Lin, Yu-Shiun; Chuang, Li-Yeh; Chang, Hsueh-Wei

    2017-10-01

    The hydrophobic-polar (HP) model is commonly used for predicting protein folding structures and hydrophobic interactions. This study developed a particle swarm optimization (PSO)-based algorithm combined with local search algorithms; specifically, the high exploration PSO (HEPSO) algorithm (which can execute global search processes) was combined with three local search algorithms (hill-climbing algorithm, greedy algorithm, and Tabu table), yielding the proposed HE-L-PSO algorithm. By using 20 known protein structures, we evaluated the performance of the HE-L-PSO algorithm in predicting protein folding in the HP model. The proposed HE-L-PSO algorithm exhibited favorable performance in predicting both short and long amino acid sequences with high reproducibility and stability, compared with seven reported algorithms. The HE-L-PSO algorithm yielded optimal solutions for all predicted protein folding structures. All HE-L-PSO-predicted protein folding structures possessed a hydrophobic core that is similar to normal protein folding.

  4. A global optimization paradigm based on change of measures

    PubMed Central

    Sarkar, Saikat; Roy, Debasish; Vasu, Ram Mohan

    2015-01-01

    A global optimization framework, COMBEO (Change Of Measure Based Evolutionary Optimization), is proposed. An important aspect in the development is a set of derivative-free additive directional terms, obtainable through a change of measures en route to the imposition of any stipulated conditions aimed at driving the realized design variables (particles) to the global optimum. The generalized setting offered by the new approach also enables several basic ideas, used with other global search methods such as the particle swarm or the differential evolution, to be rationally incorporated in the proposed set-up via a change of measures. The global search may be further aided by imparting to the directional update terms additional layers of random perturbations such as ‘scrambling’ and ‘selection’. Depending on the precise choice of the optimality conditions and the extent of random perturbation, the search can be readily rendered either greedy or more exploratory. As numerically demonstrated, the new proposal appears to provide for a more rational, more accurate and, in some cases, a faster alternative to many available evolutionary optimization schemes. PMID:26587268

  5. Optimizing spread dynamics on graphs by message passing

    NASA Astrophysics Data System (ADS)

    Altarelli, F.; Braunstein, A.; Dall'Asta, L.; Zecchina, R.

    2013-09-01

    Cascade processes are responsible for many important phenomena in natural and social sciences. Simple models of irreversible dynamics on graphs, in which nodes activate depending on the state of their neighbors, have been successfully applied to describe cascades in a large variety of contexts. Over the past decades, much effort has been devoted to understanding the typical behavior of the cascades arising from initial conditions extracted at random from some given ensemble. However, the problem of optimizing the trajectory of the system, i.e. of identifying appropriate initial conditions to maximize (or minimize) the final number of active nodes, is still considered to be practically intractable, with the only exception being models that satisfy a sort of diminishing returns property called submodularity. Submodular models can be approximately solved by means of greedy strategies, but by definition they lack cooperative characteristics which are fundamental in many real systems. Here we introduce an efficient algorithm based on statistical physics for the optimization of trajectories in cascade processes on graphs. We show that for a wide class of irreversible dynamics, even in the absence of submodularity, the spread optimization problem can be solved efficiently on large networks. Analytic and algorithmic results on random graphs are complemented by the solution of the spread maximization problem on a real-world network (the Epinions consumer reviews network).

  6. Feature selection for optimized skin tumor recognition using genetic algorithms.

    PubMed

    Handels, H; Ross, T; Kreusch, J; Wolff, H H; Pöppl, S J

    1999-07-01

    In this paper, a new approach to computer-supported diagnosis of skin tumors in dermatology is presented. High-resolution skin surface profiles are analyzed to automatically recognize malignant melanomas and nevocytic nevi (moles). In the first step, several types of features are extracted by 2D image analysis methods characterizing the structure of skin surface profiles: texture features based on cooccurrence matrices, Fourier features, and fractal features. Then, feature selection algorithms are applied to determine suitable feature subsets for the recognition process. Feature selection is described as an optimization problem, and several approaches, including heuristic strategies, greedy algorithms, and genetic algorithms, are compared. As a quality measure for feature subsets, the classification rate of the nearest-neighbor classifier, computed with the leaving-one-out method, is used. Genetic algorithms show the best results. Finally, neural networks with error back-propagation as the learning paradigm are trained using the selected feature sets. Different network topologies, learning parameters, and pruning algorithms are investigated to optimize the classification performance of the neural classifiers. With the optimized recognition system, a classification performance of 97.7% is achieved.
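
    A compact sketch of the genetic-algorithm stage, using the quality measure described above (leave-one-out classification rate of a 1-nearest-neighbor classifier) on synthetic data; population size, rates, and operators are illustrative choices.

```python
import numpy as np
from scipy.spatial.distance import cdist
from sklearn.datasets import make_classification

rng = np.random.default_rng(7)
X, y = make_classification(n_samples=60, n_features=20, n_informative=4,
                           random_state=7)

def fitness(mask):
    # Quality measure: leave-one-out 1-NN classification rate.
    if not mask.any():
        return 0.0
    D = cdist(X[:, mask], X[:, mask])
    np.fill_diagonal(D, np.inf)        # exclude each sample from its own vote
    return float(np.mean(y[D.argmin(axis=1)] == y))

pop = rng.random((20, 20)) < 0.5       # population of binary feature masks
for gen in range(15):
    scores = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(scores)[::-1][:10]]      # truncation selection
    children = []
    for _ in range(10):
        a, b = parents[rng.integers(10)], parents[rng.integers(10)]
        cut = int(rng.integers(1, 20))
        child = np.concatenate([a[:cut], b[cut:]])    # one-point crossover
        flip = rng.random(20) < 0.05                  # bit-flip mutation
        children.append(np.where(flip, ~child, child))
    pop = np.vstack([parents] + children)

best = pop[int(np.argmax([fitness(ind) for ind in pop]))]
print("selected features:", np.flatnonzero(best))
```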

  7. Optimal stimulus scheduling for active estimation of evoked brain networks

    NASA Astrophysics Data System (ADS)

    Kafashan, MohammadMehdi; Ching, ShiNung

    2015-12-01

    Objective. We consider the problem of optimal probing to learn connections in an evoked dynamic network. Such a network, in which each edge measures an input-output relationship between sites in sensor/actuator-space, is relevant to emerging applications in neural mapping and neural connectivity estimation. Approach. We show that the problem of scheduling which nodes to probe (i.e., stimulate) amounts to a problem of optimal sensor scheduling. Main results. By formulating the evoked network in state-space, we show that the solution to the greedy probing strategy has a convenient form and, under certain conditions, is optimal over a finite horizon. We adopt an expectation maximization technique to update the state-space parameters in an online fashion and demonstrate the efficacy of the overall approach in a series of detailed numerical examples. Significance. The proposed method provides a principled means to actively probe time-varying connections in neuronal networks. The overall method can be implemented in real time and is particularly well-suited to applications in stimulation-based cortical mapping in which the underlying network dynamics are changing over time.
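
    A common concrete form of greedy sensor/probe scheduling, shown here as an assumption-laden sketch: each candidate probe contributes an information matrix, and the next probe is the one maximizing the log-determinant of the accumulated information. The paper's state-space and EM machinery are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(8)
n_nodes, n_select, dim = 12, 4, 6
# Hypothetical setup: probing node i yields observations with matrix H[i],
# contributing H[i].T @ H[i] to the Fisher information.
H = rng.normal(size=(n_nodes, 2, dim))

selected, info = [], 0.1 * np.eye(dim)        # prior information
for _ in range(n_select):
    gains = []
    for i in range(n_nodes):
        if i in selected:
            gains.append(-np.inf)
            continue
        trial = info + H[i].T @ H[i]
        gains.append(np.linalg.slogdet(trial)[1])   # log-det criterion
    best = int(np.argmax(gains))                    # greedy choice
    selected.append(best)
    info = info + H[best].T @ H[best]

print("probe schedule:", selected)
```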

  8. Real-time inverse high-dose-rate brachytherapy planning with catheter optimization by compressed sensing-inspired optimization strategies.

    PubMed

    Guthier, C V; Aschenbrenner, K P; Müller, R; Polster, L; Cormack, R A; Hesser, J W

    2016-08-21

    This paper demonstrates that optimization strategies derived from the field of compressed sensing (CS) improve computational performance in inverse treatment planning (ITP) for high-dose-rate (HDR) brachytherapy. Following an approach applied to low-dose-rate brachytherapy, we developed a reformulation of the ITP problem with the same mathematical structure as standard CS problems. Two greedy methods, derived from hard thresholding and subspace pursuit, are presented, and their performance is compared to state-of-the-art ITP solvers. Applied to clinical prostate brachytherapy plans, the proposed methods achieve a speed-up by a factor of 56-350 compared to state-of-the-art methods. Based on a Wilcoxon signed rank test, the novel method statistically significantly decreases the final objective function value (p < 0.01). The optimization times were below one second, and thus planning can be considered real-time capable. The novel CS-inspired strategy enables real-time ITP for HDR brachytherapy including catheter optimization. The generated plans are either clinically equivalent or show better performance with respect to dosimetric measures.
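
    One of the greedy methods named above, hard thresholding, has a well-known generic form (iterative hard thresholding); the sketch below applies it to synthetic sparse recovery rather than to a dose-influence matrix.

```python
import numpy as np

def iht(A, b, k, iters=200, step=None):
    """Iterative hard thresholding for min ||Ax - b|| s.t. ||x||_0 <= k."""
    if step is None:
        step = 1.0 / np.linalg.norm(A, 2) ** 2    # safe gradient step size
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        g = x + step * A.T @ (b - A @ x)          # gradient step
        keep = np.argsort(np.abs(g))[-k:]          # hard threshold: top-k
        x = np.zeros_like(x)
        x[keep] = g[keep]
    return x

rng = np.random.default_rng(9)
A = rng.normal(size=(80, 200)) / np.sqrt(80)
x_true = np.zeros(200)
x_true[rng.choice(200, 8, replace=False)] = 1.0
x_hat = iht(A, A @ x_true, k=8)
print(np.linalg.norm(x_hat - x_true))
```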

  9. Real-time inverse high-dose-rate brachytherapy planning with catheter optimization by compressed sensing-inspired optimization strategies

    NASA Astrophysics Data System (ADS)

    Guthier, C. V.; Aschenbrenner, K. P.; Müller, R.; Polster, L.; Cormack, R. A.; Hesser, J. W.

    2016-08-01

    This paper demonstrates that optimization strategies derived from the field of compressed sensing (CS) improve computational performance in inverse treatment planning (ITP) for high-dose-rate (HDR) brachytherapy. Following an approach applied to low-dose-rate brachytherapy, we developed a reformulation of the ITP problem with the same mathematical structure as standard CS problems. Two greedy methods, derived from hard thresholding and subspace pursuit, are presented, and their performance is compared to state-of-the-art ITP solvers. Applied to clinical prostate brachytherapy plans, the proposed methods achieve a speed-up by a factor of 56-350 compared to state-of-the-art methods. Based on a Wilcoxon signed rank test, the novel method statistically significantly decreases the final objective function value (p < 0.01). The optimization times were below one second, and thus planning can be considered real-time capable. The novel CS-inspired strategy enables real-time ITP for HDR brachytherapy including catheter optimization. The generated plans are either clinically equivalent or show better performance with respect to dosimetric measures.

  10. Ant colony optimization with selective evaluation for feature selection in character recognition

    NASA Astrophysics Data System (ADS)

    Oh, Il-Seok; Lee, Jin-Seon

    2010-01-01

    This paper analyzes the size characteristics of the character recognition domain with the aim of developing a feature selection algorithm adequate for the domain. Based on the results, we further analyze the timing requirements of three popular feature selection algorithms: the greedy algorithm, the genetic algorithm, and ant colony optimization (ACO). For a rigorous timing analysis, we adopt the concept of an atomic operation. We propose a novel scheme called selective evaluation to improve the convergence of ACO. The scheme cuts down the computational load by excluding the evaluation of unnecessary or less promising candidate solutions. The scheme is realizable in ACO thanks to a valuable source of information, the pheromone trail, which helps identify those solutions. Experimental results showed that ACO with selective evaluation was promising both in timing requirements and recognition performance.

  11. Application of central composite design for optimization of two-stage forming process using ultra-thin ferritic stainless steel

    NASA Astrophysics Data System (ADS)

    Bong, Hyuk Jong; Barlat, Frédéric; Lee, Jinwoo; Lee, Myoung-Gyu; Kim, Jong Hee

    2016-03-01

    A two-stage forming process for manufacturing the micro-channels of a bipolar plate, a component of a proton exchange membrane fuel cell, was optimized. The sheet materials were ultra-thin ferritic stainless steel (FSS) sheets with thicknesses of 0.1 and 0.075 mm. For successful micro-channel forming in the two-stage approach, three process variables of the first stage were selected: punch radius, die radius, and forming depth. In this study, the effect of the three process variables on the formability of ultra-thin FSSs was investigated by finite element (FE) simulations, experiments, and the central composite design (CCD) method. The optimum forming process designed by the CCD showed good agreement with those obtained by experiments and FE simulations. The newly adopted optimization tool, CCD, was found to be very useful for optimizing process parameters in multi-step sheet metal forming processes.
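
    For reference, the coded design points of a rotatable CCD for three factors (here standing in for punch radius, die radius, and forming depth) can be generated directly; the axial distance alpha follows the usual rotatability rule.

```python
import itertools

import numpy as np

def central_composite(k, n_center=4):
    """Coded design points of a rotatable CCD for k factors."""
    # Two-level full factorial corner points.
    factorial = np.array(list(itertools.product([-1.0, 1.0], repeat=k)))
    # Rotatability: alpha = (number of factorial points) ** (1/4).
    alpha = (2 ** k) ** 0.25
    axial = np.vstack([v * alpha * np.eye(k)[i]
                       for i in range(k) for v in (-1.0, 1.0)])
    center = np.zeros((n_center, k))
    return np.vstack([factorial, axial, center])

# Three coded process variables: punch radius, die radius, forming depth.
design = central_composite(3)
print(design.shape)   # 8 factorial + 6 axial + 4 center = (18, 3)
print(design)
```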

  12. A new sparse optimization scheme for simultaneous beam angle and fluence map optimization in radiotherapy planning

    NASA Astrophysics Data System (ADS)

    Liu, Hongcheng; Dong, Peng; Xing, Lei

    2017-08-01

    ℓ2,1-minimization-based sparse optimization was employed to solve the beam angle optimization (BAO) problem in intensity-modulated radiation therapy (IMRT) planning. The technique approximates the exact BAO formulation with efficiently computable convex surrogates, leading to plans that are inferior to those attainable with recently proposed gradient-based greedy schemes. In this paper, we alleviate the nontrivial inconsistencies between the ℓ2,1-based formulations and the exact BAO model by proposing a new sparse optimization framework based on the most recent developments in group variable selection. We propose the incorporation of the group-folded concave penalty (gFCP) as a substitution for the ℓ2,1-minimization framework. The new formulation is then solved by a variation of an existing gradient method. The performance of the proposed scheme is evaluated by both plan quality and computational efficiency using three IMRT cases: a coplanar prostate case, a coplanar head-and-neck case, and a noncoplanar liver case. Involved in the evaluation are two alternative schemes: the ℓ2,1-minimization approach and the gradient norm method (GNM). The gFCP-based scheme outperforms both counterpart approaches. In particular, gFCP generates better plans than those obtained using ℓ2,1-minimization for all three cases with a comparable computation time. As compared to the GNM, the gFCP improves both the plan quality and computational efficiency. The proposed gFCP-based scheme provides a promising framework for BAO and promises to improve both planning time and plan quality.

  13. Multiple Object Tracking Using K-Shortest Paths Optimization.

    PubMed

    Berclaz, Jérôme; Fleuret, François; Türetken, Engin; Fua, Pascal

    2011-09-01

    Multi-object tracking can be achieved by detecting objects in individual frames and then linking detections across frames. Such an approach can be made very robust to the occasional detection failure: If an object is not detected in a frame but is in previous and following ones, a correct trajectory will nevertheless be produced. By contrast, a false-positive detection in a few frames will be ignored. However, when dealing with a multiple target problem, the linking step results in a difficult optimization problem in the space of all possible families of trajectories. This is usually dealt with by sampling or greedy search based on variants of Dynamic Programming which can easily miss the global optimum. In this paper, we show that reformulating that step as a constrained flow optimization results in a convex problem. We take advantage of its particular structure to solve it using the k-shortest paths algorithm, which is very fast. This new approach is far simpler formally and algorithmically than existing techniques and lets us demonstrate excellent performance in two very different contexts.
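
    The k-shortest paths primitive itself is readily available; the sketch below runs Yen-style enumeration on a toy directed graph via networkx, leaving aside the construction of the tracking flow network from detections.

```python
import itertools

import networkx as nx

# Toy directed graph standing in for a tracking flow network.
G = nx.DiGraph()
G.add_weighted_edges_from([
    ("s", "a", 1.0), ("s", "b", 2.0), ("a", "c", 1.0),
    ("b", "c", 1.0), ("a", "t", 4.0), ("c", "t", 1.0), ("b", "t", 5.0),
])

# Enumerate the k shortest s-t paths (Yen's algorithm under the hood).
k = 3
paths = itertools.islice(
    nx.shortest_simple_paths(G, "s", "t", weight="weight"), k)
for p in paths:
    print(p, nx.path_weight(G, p, weight="weight"))
```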

  14. Dynamic Optimization

    NASA Technical Reports Server (NTRS)

    Laird, Philip

    1992-01-01

    We distinguish static and dynamic optimization of programs: whereas static optimization modifies a program before runtime and is based only on its syntactical structure, dynamic optimization is based on the statistical properties of the input source and examples of program execution. Explanation-based generalization is a commonly used dynamic optimization method, but its effectiveness as a speedup-learning method is limited, in part because it fails to separate the learning process from the program transformation process. This paper describes a dynamic optimization technique called a learn-optimize cycle that first uses a learning element to uncover predictable patterns in the program execution and then uses an optimization algorithm to map these patterns into beneficial transformations. The technique has been used successfully for dynamic optimization of pure Prolog.

  15. Adaptive feature selection using v-shaped binary particle swarm optimization

    PubMed Central

    Dong, Hongbin; Zhou, Xiurong

    2017-01-01

    Feature selection is an important preprocessing method in machine learning and data mining. This process can be used not only to reduce the amount of data to be analyzed but also to build models with stronger interpretability based on fewer features. Traditional feature selection methods evaluate the dependency and redundancy of features separately, which leads to a lack of measurement of their combined effect. Moreover, a greedy search considers only the optimization of the current round and thus cannot be a global search. To evaluate the combined effect of different subsets in the entire feature space, an adaptive feature selection method based on V-shaped binary particle swarm optimization is proposed. In this method, the fitness function is constructed using the correlation information entropy. Feature subsets are regarded as individuals in a population, and the feature space is searched using V-shaped binary particle swarm optimization. The above procedure overcomes the hard constraint on the number of features, enables the combined evaluation of each subset as a whole, and improves the search ability of conventional binary particle swarm optimization. The proposed algorithm is an adaptive method with respect to the number of feature subsets. The experimental results show the advantages of optimizing the feature subsets using the V-shaped transfer function and confirm the effectiveness and efficiency of the feature subsets obtained under different classifiers. PMID:28358850
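
    The V-shaped transfer rule at the heart of the method maps velocity magnitude to a bit-flip probability. The sketch below uses |tanh(v)| as the transfer function and a toy mask-recovery objective in place of the paper's correlation-information-entropy fitness; all coefficients are illustrative.

```python
import numpy as np

rng = np.random.default_rng(10)
n_particles, n_bits = 15, 20
target = (rng.random(n_bits) < 0.4).astype(int)   # toy hidden feature mask

def fitness(bits):
    return int(np.sum(bits == target))            # stand-in objective

X = (rng.random((n_particles, n_bits)) < 0.5).astype(int)
V = np.zeros((n_particles, n_bits))
pbest, pbest_f = X.copy(), np.array([fitness(x) for x in X])
gbest = pbest[pbest_f.argmax()].copy()

for it in range(100):
    r1, r2 = rng.random(V.shape), rng.random(V.shape)
    V = 0.7 * V + 1.5 * r1 * (pbest - X) + 1.5 * r2 * (gbest - X)
    # V-shaped transfer function: |tanh(v)| is the probability to flip a bit.
    flip = rng.random(V.shape) < np.abs(np.tanh(V))
    X = np.where(flip, 1 - X, X)
    f = np.array([fitness(x) for x in X])
    better = f > pbest_f
    pbest[better], pbest_f[better] = X[better], f[better]
    gbest = pbest[pbest_f.argmax()].copy()

print("best match:", pbest_f.max(), "of", n_bits)
```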

  16. Detecting community structure in complex networks using an interaction optimization process

    NASA Astrophysics Data System (ADS)

    Kim, Paul; Kim, Sangwook

    2017-01-01

    Most complex networks contain community structures. Detecting these community structures is important for understanding and controlling the networks. Most community detection methods use network topology and edge density to identify optimal communities; however, these methods have a high computational complexity and are sensitive to network forms and types. To address these problems, in this paper, we propose an algorithm that uses an interaction optimization process to detect community structures in complex networks. This algorithm efficiently searches the candidates of optimal communities by optimizing the interactions of the members within each community based on the concept of greedy optimization. During this process, each candidate is evaluated using an interaction-based community model. This model quickly and accurately measures the difference between the quantity and quality of intra- and inter-community interactions. We test our algorithm on several benchmark networks with known community structures that include diverse communities detected by other methods. Additionally, after applying our algorithm to several real-world complex networks, we compare our algorithm with other methods. We find that the structure quality and coverage results achieved by our algorithm surpass those of the other methods.

  17. Optimizing the StackSlide setup and data selection for continuous-gravitational-wave searches in realistic detector data

    NASA Astrophysics Data System (ADS)

    Shaltev, M.

    2016-02-01

    The search for continuous gravitational waves in a wide parameter space at a fixed computing cost is most efficiently done with semicoherent methods, e.g., StackSlide, due to the prohibitive computing cost of the fully coherent search strategies. Prix and Shaltev [Phys. Rev. D 85, 084010 (2012)] have developed a semianalytic method for finding optimal StackSlide parameters at a fixed computing cost under ideal data conditions, i.e., gapless data and a constant noise floor. In this work, we consider more realistic conditions by allowing for gaps in the data and changes in the noise level. We show how the sensitivity optimization can be decoupled from the data selection problem. To find optimal semicoherent search parameters, we apply a numerical optimization using as an example the semicoherent StackSlide search. We also describe three different data selection algorithms. Thus, the outcome of the numerical optimization consists of the optimal search parameters and the selected data set. We first test the numerical optimization procedure under ideal conditions and show that we can reproduce the results of the analytical method. Then we gradually relax the conditions on the data and find that a compact data selection algorithm yields higher sensitivity compared to a greedy data selection procedure.

  18. Design and coverage of high throughput genotyping arrays optimized for individuals of East Asian, African American, and Latino race/ethnicity using imputation and a novel hybrid SNP selection algorithm.

    PubMed

    Hoffmann, Thomas J; Zhan, Yiping; Kvale, Mark N; Hesselson, Stephanie E; Gollub, Jeremy; Iribarren, Carlos; Lu, Yontao; Mei, Gangwu; Purdy, Matthew M; Quesenberry, Charles; Rowell, Sarah; Shapero, Michael H; Smethurst, David; Somkin, Carol P; Van den Eeden, Stephen K; Walter, Larry; Webster, Teresa; Whitmer, Rachel A; Finn, Andrea; Schaefer, Catherine; Kwok, Pui-Yan; Risch, Neil

    2011-12-01

    Four custom Axiom genotyping arrays were designed for a genome-wide association (GWA) study of 100,000 participants from the Kaiser Permanente Research Program on Genes, Environment and Health. The array optimized for individuals of European race/ethnicity was previously described. Here we detail the development of three additional microarrays optimized for individuals of East Asian, African American, and Latino race/ethnicity. For these arrays, we decreased redundancy of high-performing SNPs to increase SNP capacity. The East Asian array was designed using greedy pairwise SNP selection. However, removing SNPs from the target set based on imputation coverage is more efficient than pairwise tagging. Therefore, we developed a novel hybrid SNP selection method for the African American and Latino arrays utilizing rounds of greedy pairwise SNP selection, followed by removal from the target set of SNPs covered by imputation. The arrays provide excellent genome-wide coverage and are valuable additions for large-scale GWA studies. Copyright © 2011 Elsevier Inc. All rights reserved.
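
    The greedy pairwise step can be viewed as set cover: repeatedly choose the SNP that tags the most still-uncovered SNPs at r^2 above a threshold. The sketch below uses a synthetic correlated genotype panel; real designs would use population LD data.

```python
import numpy as np

rng = np.random.default_rng(11)
n_snps, r2_threshold = 120, 0.8

# Toy genotype panel with blocks of correlated SNPs.
base = rng.normal(size=(200, n_snps // 4))
geno = np.repeat(base, 4, axis=1) + 0.4 * rng.normal(size=(200, n_snps))
r2 = np.corrcoef(geno.T) ** 2            # pairwise r^2 between SNPs

covered = np.zeros(n_snps, dtype=bool)
tags = []
while not covered.all():
    # Greedy step: SNP tagging the most currently uncovered SNPs.
    gain = (r2 >= r2_threshold)[:, ~covered].sum(axis=1)
    best = int(np.argmax(gain))
    tags.append(best)
    covered |= r2[best] >= r2_threshold  # a SNP always tags itself (r^2 = 1)

print(f"{len(tags)} tag SNPs cover all {n_snps} targets")
```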

  19. Discovery of optimal zeolites for challenging separations and chemical transformations using predictive materials modeling.

    PubMed

    Bai, Peng; Jeon, Mi Young; Ren, Limin; Knight, Chris; Deem, Michael W; Tsapatsis, Michael; Siepmann, J Ilja

    2015-01-21

    Zeolites play numerous important roles in modern petroleum refineries and have the potential to advance the production of fuels and chemical feedstocks from renewable resources. The performance of a zeolite as separation medium and catalyst depends on its framework structure. To date, 213 framework types have been synthesized and >330,000 thermodynamically accessible zeolite structures have been predicted. Hence, identification of optimal zeolites for a given application from the large pool of candidate structures is attractive for accelerating the pace of materials discovery. Here we identify, through a large-scale, multi-step computational screening process, promising zeolite structures for two energy-related applications: the purification of ethanol from fermentation broths and the hydroisomerization of alkanes with 18-30 carbon atoms encountered in petroleum refining. These results demonstrate that predictive modelling and data-driven science can now be applied to solve some of the most challenging separation problems involving highly non-ideal mixtures and highly articulated compounds.

  20. Discovery of optimal zeolites for challenging separations and chemical transformations using predictive materials modeling

    NASA Astrophysics Data System (ADS)

    Bai, Peng; Jeon, Mi Young; Ren, Limin; Knight, Chris; Deem, Michael W.; Tsapatsis, Michael; Siepmann, J. Ilja

    2015-01-01

    Zeolites play numerous important roles in modern petroleum refineries and have the potential to advance the production of fuels and chemical feedstocks from renewable resources. The performance of a zeolite as separation medium and catalyst depends on its framework structure. To date, 213 framework types have been synthesized and >330,000 thermodynamically accessible zeolite structures have been predicted. Hence, identification of optimal zeolites for a given application from the large pool of candidate structures is attractive for accelerating the pace of materials discovery. Here we identify, through a large-scale, multi-step computational screening process, promising zeolite structures for two energy-related applications: the purification of ethanol from fermentation broths and the hydroisomerization of alkanes with 18-30 carbon atoms encountered in petroleum refining. These results demonstrate that predictive modelling and data-driven science can now be applied to solve some of the most challenging separation problems involving highly non-ideal mixtures and highly articulated compounds.

  1. A Bayesian optimization approach for wind farm power maximization

    NASA Astrophysics Data System (ADS)

    Park, Jinkyoo; Law, Kincho H.

    2015-03-01

    The objective of this study is to develop a model-free optimization algorithm to improve the total wind farm power production in a cooperative game framework. Conventionally, for a given wind condition, an individual wind turbine maximizes its own power production without taking into consideration the conditions of other wind turbines. Under this greedy control strategy, the wake formed by the upstream wind turbine, due to the reduced wind speed and the increased turbulence intensity inside the wake, would affect and lower the power productions of the downstream wind turbines. To increase the overall wind farm power production, researchers have proposed cooperative wind turbine control approaches to coordinate the actions that mitigate the wake interference among the wind turbines and thus increase the total wind farm power production. This study explores the use of a data-driven optimization approach to identify the optimum coordinated control actions in real time using a limited amount of data. Specifically, we propose the Bayesian Ascent (BA) method, which combines the strengths of Bayesian optimization and trust region optimization algorithms. Using Gaussian process regression, BA requires only a small number of data points to model the complex target system. Furthermore, due to the use of a trust region constraint on the sampling procedure, BA tends to increase the target value and converge to a neighborhood of the optimum. Simulation studies using analytical functions show that the BA method can achieve an almost monotone increase in a target value with rapid convergence. BA is also implemented and tested in a laboratory setting to maximize the total power using two scaled wind turbine models.
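
    A rough sketch of the two ingredients, under stated assumptions: a Gaussian process surrogate (scikit-learn), an upper-confidence acquisition, and candidate sampling restricted to a trust region around the incumbent. The one-dimensional "farm power" function is a toy stand-in, not a wake model.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(12)

def farm_power(a):
    # Toy stand-in for total farm power versus a single control action.
    return float(np.sin(3.0 * a) + 0.5 * a)

X = [[0.2], [0.8]]                           # initial control actions
y = [farm_power(x[0]) for x in X]

trust_radius = 0.3
for it in range(15):
    gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.3),
                                  normalize_y=True).fit(X, y)
    x_best = X[int(np.argmax(y))][0]
    # Candidates restricted to a trust region around the incumbent.
    cand = np.clip(x_best + trust_radius * rng.uniform(-1, 1, (200, 1)), 0, 2)
    mu, sd = gp.predict(cand, return_std=True)
    x_next = cand[int(np.argmax(mu + 1.0 * sd))]     # UCB acquisition
    X.append(list(x_next))
    y.append(farm_power(x_next[0]))

print(f"best action {X[int(np.argmax(y))][0]:.3f}, power {max(y):.3f}")
```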

  2. Selecting training inputs via greedy rank covering

    SciTech Connect

    Buchsbaum, A.L.; Santen, J.P.H. van

    1996-12-31

    We present a general method for selecting a small set of training inputs, the observations of which will suffice to estimate the parameters of a given linear model. We exemplify the algorithm in terms of predicting segmental duration of phonetic-segment feature vectors in a text-to-speech synthesizer, but the algorithm will work for any linear model and its associated domain.

  3. Tighten after Relax: Minimax-Optimal Sparse PCA in Polynomial Time

    PubMed Central

    Wang, Zhaoran; Lu, Huanran; Liu, Han

    2014-01-01

    We provide statistical and computational analysis of sparse Principal Component Analysis (PCA) in high dimensions. The sparse PCA problem is highly nonconvex in nature. Consequently, though its global solution attains the optimal statistical rate of convergence, such solution is computationally intractable to obtain. Meanwhile, although its convex relaxations are tractable to compute, they yield estimators with suboptimal statistical rates of convergence. On the other hand, existing nonconvex optimization procedures, such as greedy methods, lack statistical guarantees. In this paper, we propose a two-stage sparse PCA procedure that attains the optimal principal subspace estimator in polynomial time. The main stage employs a novel algorithm named sparse orthogonal iteration pursuit, which iteratively solves the underlying nonconvex problem. However, our analysis shows that this algorithm only has desired computational and statistical guarantees within a restricted region, namely the basin of attraction. To obtain the desired initial estimator that falls into this region, we solve a convex formulation of sparse PCA with early stopping. Under an integrated analytic framework, we simultaneously characterize the computational and statistical performance of this two-stage procedure. Computationally, our procedure converges at the rate of 1∕t within the initialization stage, and at a geometric rate within the main stage. Statistically, the final principal subspace estimator achieves the minimax-optimal statistical rate of convergence with respect to the sparsity level s*, dimension d and sample size n. Our procedure motivates a general paradigm of tackling nonconvex statistical learning problems with provable statistical guarantees. PMID:25620858

  4. General optimization technique for high-quality community detection in complex networks

    NASA Astrophysics Data System (ADS)

    Sobolevsky, Stanislav; Campari, Riccardo; Belyi, Alexander; Ratti, Carlo

    2014-07-01

    Recent years have witnessed the development of a large body of algorithms for community detection in complex networks. Most of them are based upon the optimization of objective functions, among which modularity is the most common, though a number of alternatives have been suggested in the scientific literature. We present here an effective general search strategy for the optimization of various objective functions for community detection purposes. When applied to modularity, on both real-world and synthetic networks, our search strategy substantially outperforms the best existing algorithms in terms of final scores of the objective function. In terms of execution time for modularity optimization, this approach also outperforms most of the alternatives present in the literature, with the exception of the fastest but usually less efficient greedy algorithms. Networks of up to 30000 nodes can be analyzed in time spans ranging from minutes to a few hours on average workstations, making our approach readily applicable to tasks not limited by strict time constraints but requiring the quality of partitioning to be as high as possible. Some examples are presented in order to demonstrate how this quality can be affected by even relatively small changes in the modularity score, stressing the importance of optimization accuracy.
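
    For comparison with such search strategies, the fast greedy baseline mentioned above is available off the shelf; the snippet below runs CNM-style greedy agglomeration and scores the resulting partition by modularity.

```python
import networkx as nx
from networkx.algorithms.community import (greedy_modularity_communities,
                                           modularity)

# Karate club: a standard benchmark with known community structure.
G = nx.karate_club_graph()
communities = greedy_modularity_communities(G)   # CNM greedy agglomeration
print(f"{len(communities)} communities, "
      f"modularity = {modularity(G, communities):.4f}")
for c in communities:
    print(sorted(c))
```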

  5. Selective Optimization

    DTIC Science & Technology

    2015-07-06

    When passed to commercial optimization solvers, these problems typically exhibit extremely poor performance. We develop a variety of effective model and algorithm enhancement techniques for this class of problems, including strengthened formulations and algorithmic techniques which perform significantly better than standard MIP approaches.

  6. Optical network unit placement in Fiber-Wireless (FiWi) access network by Moth-Flame optimization algorithm

    NASA Astrophysics Data System (ADS)

    Singh, Puja; Prakash, Shashi

    2017-07-01

    Hybrid wireless-optical broadband access network (WOBAN), or Fiber-Wireless (FiWi), is the integration of a wireless access network and an optical network. This hybrid multi-domain network combines the advantages of the wireless and optical domains and serves the demands of technology-savvy users. FiWi exhibits cost effectiveness, robustness, flexibility, high capacity, reliability, and self-organization. The Optical Network Unit (ONU) placement problem in FiWi contributes to simplifying the network design and enhances performance in terms of cost efficiency and increased throughput. Several individual-based algorithms, such as Simulated Annealing (SA) and Tabu Search, have been suggested for ONU placement, but these algorithms suffer from premature convergence (trapping in local optima). The present research work undertakes the deployment of FiWi and proposes a novel nature-inspired heuristic paradigm called the Moth-Flame optimization (MFO) algorithm for the placement of multiple optical network units. MFO is a population-based algorithm, and population-based algorithms are better at avoiding local optima. The simulation results are compared with those of the existing Greedy and Simulated Annealing algorithms for optimizing the positions of ONUs. To the best of our knowledge, the MFO algorithm has been used for the first time in this domain; moreover, it has been able to provide very promising and competitive results. The performance of the MFO algorithm has been analyzed by varying the 'b' parameter. The MFO algorithm results in faster convergence than the existing Greedy and SA strategies and returns a lower value of the overall cost function. The results also exhibit the dependence of the objective function on the distribution of wireless users.

  7. Dispositional optimism.

    PubMed

    Carver, Charles S; Scheier, Michael F

    2014-06-01

    Optimism is a cognitive construct (expectancies regarding future outcomes) that also relates to motivation: optimistic people exert effort, whereas pessimistic people disengage from effort. Study of optimism began largely in health contexts, finding positive associations between optimism and markers of better psychological and physical health. Physical health effects likely occur through differences in both health-promoting behaviors and physiological concomitants of coping. Recently, the scientific study of optimism has extended to the realm of social relations: new evidence indicates that optimists have better social connections, partly because they work harder at them. In this review, we examine the myriad ways this trait can benefit an individual, and our current understanding of the biological basis of optimism. Copyright © 2014 Elsevier Ltd. All rights reserved.

  8. Dispositional Optimism

    PubMed Central

    Carver, Charles S.; Scheier, Michael F.

    2014-01-01

    Optimism is a cognitive construct (expectancies regarding future outcomes) that also relates to motivation: optimistic people exert effort, whereas pessimistic people disengage from effort. Study of optimism began largely in health contexts, finding positive associations between optimism and markers of better psychological and physical health. Physical health effects likely occur through differences in both health-promoting behaviors and physiological concomitants of coping. Recently, the scientific study of optimism has extended to the realm of social relations: new evidence indicates that optimists have better social connections, partly because they work harder at them. In this review, we examine the myriad ways this trait can benefit an individual, and our current understanding of the biological basis of optimism. PMID:24630971

  9. Practical optimization of Steiner trees via the cavity method

    NASA Astrophysics Data System (ADS)

    Braunstein, Alfredo; Muntoni, Anna

    2016-07-01

    The optimization version of the cavity method for single instances, called Max-Sum, has been applied in the past to the minimum Steiner tree problem on graphs and its variants. Max-Sum has been shown experimentally to give asymptotically optimal results on certain types of weighted random graphs, and to give good solutions in short computation times for some types of real networks. However, the hypotheses behind the formulation and the cavity method itself substantially limit the class of instances on which the approach gives good results (or even converges). Moreover, in the standard model formulation, the diameter of the tree solution is limited by a predefined bound that affects both computation time and convergence properties. In this work we describe two main enhancements to the Max-Sum equations so that they can cope with optimization of real-world instances. First, we develop an alternative ‘flat’ model formulation that allows the relevant configuration space to be reduced substantially, making the approach feasible on instances with large solution diameter, in particular when the number of terminal nodes is small. Second, we propose an integration between Max-Sum and three greedy heuristics. This integration turns Max-Sum into a highly competitive self-contained algorithm, in which a feasible solution is given at each step of the iterative procedure. Part of this development participated in the 2014 DIMACS Challenge on Steiner problems, and we report the results here. The performance of the proposed approach in the challenge was highly satisfactory: it maintained a small gap to the best bound in most cases, and obtained the best results on several instances in two different categories. We also present several improvements with respect to the version of the algorithm that participated in the competition, including new best solutions for some of the instances of the challenge.
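
    As background for the greedy heuristics mentioned above, one classic member of that family (a Takahashi-Matsuyama-style construction, shown here on a toy grid, not the Max-Sum algorithm itself) grows a tree by repeatedly attaching the nearest remaining terminal:

    ```python
    import networkx as nx

    def greedy_steiner(G, terminals):
        tree = {terminals[0]}                  # start the tree at one terminal
        edges = set()
        remaining = set(terminals) - tree
        while remaining:
            dist, path = nx.multi_source_dijkstra(G, tree)   # shortest paths from the tree
            t = min(remaining, key=dist.get)                 # closest remaining terminal
            p = path[t]
            edges.update(zip(p, p[1:]))                      # attach it along that path
            tree.update(p)
            remaining -= tree
        return edges

    G = nx.grid_2d_graph(5, 5)                 # unit-weight toy instance
    print(sorted(greedy_steiner(G, [(0, 0), (4, 4), (0, 4)])))
    ```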

  10. Distributed Optimization

    NASA Technical Reports Server (NTRS)

    Macready, William; Wolpert, David

    2005-01-01

    We demonstrate a new framework for analyzing and controlling distributed systems, by solving constrained optimization problems with an algorithm based on that framework. The framework is an information-theoretic extension of conventional full-rationality game theory that allows bounded rational agents. The associated optimization algorithm is a game in which agents control the variables of the optimization problem. They do this by jointly minimizing a Lagrangian of (the probability distribution of) their joint state. The updating of the Lagrange parameters in that Lagrangian is a form of automated annealing, one that focuses the multi-agent system on the optimal pure strategy. We present computer experiments for the k-sat constraint satisfaction problem and for unconstrained minimization of NK functions.

  11. Selecting Observation Platforms for Optimized Anomaly Detectability under Unreliable Partial Observations

    SciTech Connect

    Wen-Chiao Lin; Humberto E. Garcia; Tae-Sic Yoo

    2011-06-01

    Diagnosers for keeping track of the occurrences of special events in the framework of unreliable partially observed discrete-event dynamical systems were developed in previous work. This paper considers observation platforms consisting of sensors that provide partial and unreliable observations and of diagnosers that analyze them. Diagnosers in observation platforms typically perform better as the sensors providing the observations become more costly or increase in number. This paper proposes a methodology for finding an observation platform that achieves an optimal balance between cost and performance, while satisfying given observability requirements and constraints. Since this problem is generally computationally hard in the framework considered, an observation platform optimization algorithm is utilized that uses two greedy heuristics, one myopic and another based on projected performances. These heuristics are executed sequentially in order to find the best observation platforms. The developed algorithm is then applied to an observation platform optimization problem for a multi-unit-operation system. Results show that improved observation platforms can be found that may significantly reduce the observation platform cost but still yield acceptable performance for correctly inferring the occurrences of special events.
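
    The myopic heuristic can be illustrated by a generic budgeted greedy selection; the sensor names, costs, and toy coverage objective below are hypothetical, not the paper's observability model:

    ```python
    def greedy_platform(sensors, perf, budget):
        """sensors: name -> cost; perf: callable(set of names) -> float."""
        chosen, spent = set(), 0.0
        while True:
            base = perf(chosen)
            best, best_ratio = None, 0.0
            for name, cost in sensors.items():
                if name in chosen or spent + cost > budget:
                    continue
                gain = perf(chosen | {name}) - base        # myopic marginal gain
                if gain / cost > best_ratio:
                    best, best_ratio = name, gain / cost
            if best is None:                               # no affordable improvement left
                return chosen
            chosen.add(best)
            spent += sensors[best]

    # Toy usage: performance = number of events covered by the chosen sensors.
    covers = {"a": {1, 2}, "b": {2}, "c": {1, 2, 3}}
    perf = lambda chosen: float(len(set().union(*(covers[s] for s in chosen)))) if chosen else 0.0
    print(greedy_platform({"a": 2.0, "b": 1.0, "c": 3.0}, perf, budget=5.0))  # {'a', 'c'}
    ```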

  12. A noisy self-organizing neural network with bifurcation dynamics for combinatorial optimization.

    PubMed

    Kwok, Terence; Smith, Kate A

    2004-01-01

    The self-organizing neural network (SONN) for solving general "0-1" combinatorial optimization problems (COPs) is studied in this paper, with the aim of overcoming existing limitations in convergence and solution quality. This is achieved by incorporating two main features: an efficient weight normalization process exhibiting bifurcation dynamics, and neurons with additive noise. The SONN is studied both theoretically and experimentally by using the N-queen problem as an example to demonstrate and explain the dependence of optimization performance on annealing schedules and other system parameters. An equilibrium model of the SONN with neuronal weight normalization is derived, which explains observed bands of high feasibility in the normalization parameter space in terms of bifurcation dynamics of the normalization process, and provides insights into the roles of different parameters in the optimization process. Under certain conditions, this dynamical systems view of the SONN reveals cascades of period-doubling bifurcations to chaos occurring in multidimensional space with the annealing temperature as the bifurcation parameter. A strange attractor in the two-dimensional (2-D) case is also presented. Furthermore, by adding random noise to the cost potentials of the network nodes, it is demonstrated that unwanted oscillations between symmetrical and "greedy" nodes can be sufficiently reduced, resulting in higher solution quality and feasibility.

  13. Increasing the Lifetime of Mobile WSNs via Dynamic Optimization of Sensor Node Communication Activity

    PubMed Central

    Guimarães, Dayan Adionel; Sakai, Lucas Jun; Alberti, Antonio Marcos; de Souza, Rausley Adriano Amaral

    2016-01-01

    In this paper, a simple and flexible method for increasing the lifetime of fixed or mobile wireless sensor networks is proposed. Based on past residual energy information reported by the sensor nodes, the sink node or another central node dynamically optimizes the communication activity levels of the sensor nodes to save energy without sacrificing the data throughput. The activity levels are defined to represent portions of time or time-frequency slots in a frame, during which the sensor nodes are scheduled to communicate with the sink node to report sensory measurements. Besides node mobility, it is considered that sensors’ batteries may be recharged via a wireless power transmission or equivalent energy harvesting scheme, bringing to the optimization problem an even more dynamic character. We report greatly increased lifetimes relative to the non-optimized network, and comparable or even larger lifetime improvements with respect to an idealized greedy algorithm that uses both the real-time channel state and the residual energy information. PMID:27657075

  14. Optimized Structure of the Traffic Flow Forecasting Model With a Deep Learning Approach.

    PubMed

    Yang, Hao-Fan; Dillon, Tharam S; Chen, Yi-Ping Phoebe

    2016-07-20

    Forecasting accuracy is an important issue for successful intelligent traffic management, especially in the domain of traffic efficiency and congestion reduction. The dawning of the big data era brings opportunities to greatly improve prediction accuracy. In this paper, we propose a novel model, the stacked autoencoder Levenberg-Marquardt model, a deep neural network architecture aimed at improving forecasting accuracy. The proposed model is designed using the Taguchi method to develop an optimized structure and to learn traffic flow features through layer-by-layer feature granulation with a greedy layerwise unsupervised learning algorithm. It is applied to real-world data collected from the M6 freeway in the U.K. and is compared with three existing traffic predictors. To the best of our knowledge, this is the first time that an optimized structure of the traffic flow forecasting model with a deep learning approach is presented. The evaluation results demonstrate that the proposed model with an optimized structure has superior performance in traffic flow forecasting.
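
    Greedy layer-wise unsupervised pretraining, as used above, can be sketched with a minimal numpy autoencoder stack; the layer sizes, learning rate, and sigmoid/linear choices are illustrative rather than the paper's Taguchi-designed configuration:

    ```python
    import numpy as np

    def train_autoencoder(X, hidden, epochs=200, lr=0.1, seed=0):
        rng = np.random.default_rng(seed)
        n, d = X.shape
        W1, b1 = rng.normal(0, 0.1, (d, hidden)), np.zeros(hidden)
        W2, b2 = rng.normal(0, 0.1, (hidden, d)), np.zeros(d)
        sig = lambda z: 1.0 / (1.0 + np.exp(-z))
        for _ in range(epochs):
            H = sig(X @ W1 + b1)                 # encode
            err = (H @ W2 + b2) - X              # linear decode, reconstruction error
            dW2, db2 = H.T @ err / n, err.mean(0)
            dH = err @ W2.T * H * (1 - H)        # backprop through the sigmoid
            dW1, db1 = X.T @ dH / n, dH.mean(0)
            W1 -= lr * dW1; b1 -= lr * db1; W2 -= lr * dW2; b2 -= lr * db2
        return W1, b1

    def greedy_pretrain(X, layer_sizes):
        """Train each autoencoder on the codes produced by the previous one."""
        feats, weights = X, []
        for h in layer_sizes:
            W, b = train_autoencoder(feats, h)
            weights.append((W, b))
            feats = 1.0 / (1.0 + np.exp(-(feats @ W + b)))
        return weights   # stack these layers, then fine-tune the whole network

    X = np.random.default_rng(0).random((200, 16))
    print([W.shape for W, _ in greedy_pretrain(X, [8, 4])])   # [(16, 8), (8, 4)]
    ```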

  15. A Globally Optimal Particle Tracking Technique for Stereo Imaging Velocimetry Experiments

    NASA Technical Reports Server (NTRS)

    McDowell, Mark

    2008-01-01

    An important phase of any Stereo Imaging Velocimetry experiment is particle tracking. Particle tracking seeks to identify and characterize the motion of individual particles entrained in a fluid or air experiment. We analyze a cylindrical chamber filled with water and seeded with density-matched particles. In every four-frame sequence, we identify a particle track by assigning a unique track label for each camera image. The conventional approach to particle tracking is to use an exhaustive tree-search method utilizing greedy algorithms to reduce search times. However, these types of algorithms are not optimal due to a cascade effect of incorrect decisions upon adjacent tracks. We examine the use of a guided evolutionary neural net with simulated annealing to arrive at a globally optimal assignment of tracks. The net is guided both by the minimization of the search space through the use of prior limiting assumptions about valid tracks and by a strategy which seeks to avoid high-energy intermediate states which can trap the net in a local minimum. A stochastic search algorithm is used in place of back-propagation of error to further reduce the chance of being trapped in an energy well. Global optimization is achieved by minimizing an objective function which includes both track smoothness and particle-image utilization parameters. In this paper we describe our model and present our experimental results. We compare our results with a non-optimizing, predictive tracker and obtain an average increase in valid track yield of 27 percent.

  16. Prospective Optimization

    PubMed Central

    Sejnowski, Terrence J.; Poizner, Howard; Lynch, Gary; Gepshtein, Sergei; Greenspan, Ralph J.

    2014-01-01

    Human performance approaches that of an ideal observer and optimal actor in some perceptual and motor tasks. These optimal abilities depend on the capacity of the cerebral cortex to store an immense amount of information and to flexibly make rapid decisions. However, behavior only approaches these limits after a long period of learning while the cerebral cortex interacts with the basal ganglia, an ancient part of the vertebrate brain that is responsible for learning sequences of actions directed toward achieving goals. Progress has been made in understanding the algorithms used by the brain during reinforcement learning, which is an online approximation of dynamic programming. Humans also make plans that depend on past experience by simulating different scenarios, which is called prospective optimization. The same brain structures in the cortex and basal ganglia that are active online during optimal behavior are also active offline during prospective optimization. The emergence of general principles and algorithms for goal-directed behavior has consequences for the development of autonomous devices in engineering applications. PMID:25328167

  17. Prospective Optimization.

    PubMed

    Sejnowski, Terrence J; Poizner, Howard; Lynch, Gary; Gepshtein, Sergei; Greenspan, Ralph J

    2014-05-01

    Human performance approaches that of an ideal observer and optimal actor in some perceptual and motor tasks. These optimal abilities depend on the capacity of the cerebral cortex to store an immense amount of information and to flexibly make rapid decisions. However, behavior only approaches these limits after a long period of learning while the cerebral cortex interacts with the basal ganglia, an ancient part of the vertebrate brain that is responsible for learning sequences of actions directed toward achieving goals. Progress has been made in understanding the algorithms used by the brain during reinforcement learning, which is an online approximation of dynamic programming. Humans also make plans that depend on past experience by simulating different scenarios, which is called prospective optimization. The same brain structures in the cortex and basal ganglia that are active online during optimal behavior are also active offline during prospective optimization. The emergence of general principles and algorithms for goal-directed behavior has consequences for the development of autonomous devices in engineering applications.

  18. Optimal Fluoridation

    PubMed Central

    Lee, John R.

    1975-01-01

    Optimal fluoridation has been defined as that fluoride exposure which confers maximal cariostasis with minimal toxicity and its values have been previously determined to be 0.5 to 1 mg per day for infants and 1 to 1.5 mg per day for an average child. Total fluoride ingestion and urine excretion were studied in Marin County, California, children in 1973 before municipal water fluoridation. Results showed fluoride exposure to be higher than anticipated and fulfilled previously accepted criteria for optimal fluoridation. Present and future water fluoridation plans need to be reevaluated in light of total environmental fluoride exposure. PMID:1130041

  19. Mesh Optimization

    DTIC Science & Technology

    1994-01-01

    AD-A277 644. Mesh Optimization, Technical Report # 93-01-01. Hugues Hoppe, Tony DeRose, Tom Duchamp, John McDonald, and Werner Stuetzle.

  20. Robust and fast nonlinear optimization of diffusion MRI microstructure models.

    PubMed

    Harms, R L; Fritz, F J; Tobisch, A; Goebel, R; Roebroeck, A

    2017-07-15

    Advances in biophysical multi-compartment modeling for diffusion MRI (dMRI) have gained popularity because of greater specificity than DTI in relating the dMRI signal to underlying cellular microstructure. A large range of these diffusion microstructure models have been developed, and each of the popular models comes with its own, often different, optimization algorithm, noise model and initialization strategy to estimate its parameter maps. Since data fit, accuracy and precision are hard to verify, this creates additional challenges to comparability and generalization of results from diffusion microstructure models. In addition, non-linear optimization is computationally expensive, leading to very long run times, which can be prohibitive in large group or population studies. In this technical note we investigate the performance of several optimization algorithms and initialization strategies over a few of the most popular diffusion microstructure models, including NODDI and CHARMED. We evaluate whether a single well-performing optimization approach exists that could be applied to many models and would equate both run time and fit aspects. All models, algorithms and strategies were implemented on the Graphics Processing Unit (GPU) to remove run time constraints, with which we achieve whole-brain dataset fits in seconds to minutes. We then evaluated fit, accuracy, precision and run time for models of differing complexity against three common optimization algorithms and three parameter initialization strategies. Variability of the achieved quality of fit in actual data was evaluated on ten subjects from each of two population studies with different acquisition protocols. We find that optimization algorithms and multi-step optimization approaches have a considerable influence on performance and stability over subjects and over acquisition protocols. The gradient-free Powell conjugate-direction algorithm was found to outperform other common algorithms in terms of
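
    A toy version of this kind of comparison, using SciPy's derivative-free Powell method against two alternatives on a standard non-convex test function (not the dMRI models themselves), looks like this:

    ```python
    import numpy as np
    from scipy.optimize import minimize

    def rosenbrock(x):                      # classic non-convex test function
        return float(np.sum(100.0 * (x[1:] - x[:-1] ** 2) ** 2 + (1 - x[:-1]) ** 2))

    x0 = np.zeros(5)
    for method in ("Powell", "Nelder-Mead", "L-BFGS-B"):
        res = minimize(rosenbrock, x0, method=method)
        print(f"{method:12s} f = {res.fun:.3e} after {res.nfev} evaluations")
    ```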

  1. Forging tool shape optimization using pseudo inverse approach and adaptive incremental approach

    NASA Astrophysics Data System (ADS)

    Halouani, A.; Meng, F. J.; Li, Y. M.; Labergère, C.; Abbès, B.; Lafon, P.; Guo, Y. Q.

    2013-05-01

    This paper presents a simplified finite element method called "Pseudo Inverse Approach" (PIA) for tool shape design and optimization in multi-step cold forging processes. The approach is based on the knowledge of the final part shape. Some intermediate configurations are introduced and corrected by using a free surface method to consider the deformation paths without contact treatment. A robust direct algorithm of plasticity is implemented by using the equivalent stress notion and tensile curve. Numerical tests have shown that the PIA is very fast compared to the incremental approach. The PIA is used in an optimization procedure to automatically design the shapes of the preform tools. Our objective is to find the optimal preforms which minimize the equivalent plastic strain and punch force. The preform shapes are defined by B-Spline curves. A simulated annealing algorithm is adopted for the optimization procedure. The forging results obtained by the PIA are compared to those obtained by the incremental approach to show the efficiency and accuracy of the PIA.
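
    The simulated annealing loop used for the preform search can be sketched generically; the cooling schedule, neighborhood move, and toy objective below are placeholders, not the paper's settings:

    ```python
    import math, random

    def simulated_annealing(cost, x0, neighbor, t0=1.0, cooling=0.995, iters=5000):
        x, fx = x0, cost(x0)
        best, fbest, t = x, fx, t0
        for _ in range(iters):
            y = neighbor(x)
            fy = cost(y)
            # always accept downhill moves; accept uphill moves with Boltzmann probability
            if fy < fx or random.random() < math.exp((fx - fy) / t):
                x, fx = y, fy
                if fx < fbest:
                    best, fbest = x, fx
            t *= cooling
        return best, fbest

    best, fbest = simulated_annealing(
        cost=lambda x: (x - 3.0) ** 2,                      # stand-in objective
        x0=0.0,
        neighbor=lambda x: x + random.uniform(-0.5, 0.5))   # stand-in for a B-spline perturbation
    print(best, fbest)
    ```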

  2. Enhanced selectivity and search speed for method development using one-segment-per-component optimization strategies.

    PubMed

    Tyteca, Eva; Vanderlinden, Kim; Favier, Maxime; Clicq, David; Cabooter, Deirdre; Desmet, Gert

    2014-09-05

    Linear gradient programs are very frequently used in reversed-phase liquid chromatography to enhance the selectivity compared to isocratic separations. Multi-linear gradient programs, on the other hand, are only scarcely used, despite their intrinsically larger separation power. Because the gradient-conformity of the latest generation of instruments has greatly improved, a renewed interest in more complex multi-segment gradient liquid chromatography can be expected in the future, raising the need for better-performing gradient design algorithms. We explored the possibilities of a new type of multi-segment gradient optimization algorithm, the so-called "one-segment-per-group-of-components" optimization strategy. In this gradient design strategy, the slope is adjusted after the elution of each individual component of the sample, letting the retention properties of the different analytes auto-guide the course of the gradient profile. Applying this method experimentally to four randomly selected test samples, the separation time could on average be reduced by about 40% compared to the best single linear gradient. Moreover, the newly proposed approach performed equally well or better than the multi-segment optimization mode of a commercial software package. Carrying out an extensive in silico study, the experimentally observed advantage could also be generalized over a statistically significant number of different 10- and 20-component samples. In addition, the newly proposed gradient optimization approach enables much faster searches than the traditional multi-step gradient design methods. Copyright © 2014 Elsevier B.V. All rights reserved.

  3. Gear optimization

    NASA Technical Reports Server (NTRS)

    Vanderplaats, G. N.; Chen, Xiang; Zhang, Ning-Tian

    1988-01-01

    The use of formal numerical optimization methods for the design of gears is investigated. To achieve this, computer codes were developed for the analysis of spur gears and spiral bevel gears. These codes calculate the life, dynamic load, bending strength, surface durability, gear weight and size, and various geometric parameters. It is necessary to calculate all such important responses because they all represent competing requirements in the design process. The codes developed here were written in subroutine form and coupled to the COPES/ADS general purpose optimization program. This code allows the user to define the optimization problem at the time of program execution. Typical design variables include face width, number of teeth and diametral pitch. The user is free to choose any calculated response as the design objective to minimize or maximize and may impose lower and upper bounds on any calculated responses. Typical examples include life maximization with limits on dynamic load, stress, weight, etc., or minimization of weight subject to limits on life, dynamic load, etc. The research codes were written in modular form for easy expansion and so that they could be combined to create a multiple-reduction optimization capability in the future.

  4. Research on Operation Strategy for Bundled Wind-thermal Generation Power Systems Based on Two-Stage Optimization Model

    NASA Astrophysics Data System (ADS)

    Sun, Congcong; Wang, Zhijie; Liu, Sanming; Jiang, Xiuchen; Sheng, Gehao; Liu, Tianyu

    2017-05-01

    Wind power has the advantages of being clean and non-polluting, and the development of bundled wind-thermal generation power systems (BWTGSs) is one of the important means to improve the wind power accommodation rate and implement the “clean alternative” on the generation side. A two-stage optimization strategy for BWTGSs considering wind speed forecasting results and load characteristics is proposed. By taking short-term wind speed forecasting results on the generation side and load characteristics on the demand side into account, a two-stage optimization model for BWTGSs is formulated. Using the environmental benefit index of BWTGSs as the objective function and supply-demand balance and generator operation as the constraints, the first-stage optimization model is developed with the chance-constrained programming theory. Using the operation cost of BWTGSs as the objective function, the second-stage optimization model is developed with the greedy algorithm. An improved PSO algorithm is employed to solve the model, and numerical tests verify the effectiveness of the proposed strategy.

  5. Optimization in the utility maximization framework for conservation planning: a comparison of solution procedures in a study of multifunctional agriculture.

    PubMed

    Kreitler, Jason; Stoms, David M; Davis, Frank W

    2014-01-01

    Quantitative methods of spatial conservation prioritization have traditionally been applied to issues in conservation biology and reserve design, though their use in other types of natural resource management is growing. The utility maximization problem is one form of a covering problem where multiple criteria can represent the expected social benefits of conservation action. This approach allows flexibility with a problem formulation that is more general than typical reserve design problems, though the solution methods are very similar. However, few studies have addressed optimization in utility maximization problems for conservation planning, and the effect of solution procedure is largely unquantified. Therefore, this study mapped five criteria describing elements of multifunctional agriculture to determine a hypothetical conservation resource allocation plan for agricultural land conservation in the Central Valley of CA, USA. We compared solution procedures within the utility maximization framework to determine the difference between an open source integer programming approach and a greedy heuristic, and find gains from optimization of up to 12%. We also model land availability for conservation action as a stochastic process and determine the decline in total utility compared to the globally optimal set using both solution algorithms. Our results are comparable to other studies illustrating the benefits of optimization for different conservation planning problems, and highlight the importance of maximizing the effectiveness of limited funding for conservation and natural resource management.
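
    A toy instance shows why exact optimization can beat a greedy heuristic in budgeted selection problems of this kind (all numbers invented):

    ```python
    from itertools import combinations

    parcels = [(10, 6), (6, 4), (6, 4)]      # (utility, cost) per parcel
    budget = 8

    def greedy(parcels, budget):
        util, spent, picked = 0, 0, []
        for i in sorted(range(len(parcels)), key=lambda i: -parcels[i][0] / parcels[i][1]):
            u, c = parcels[i]
            if spent + c <= budget:
                picked.append(i); spent += c; util += u
        return util, picked

    def exact(parcels, budget):
        best = (0, ())
        for r in range(len(parcels) + 1):
            for combo in combinations(range(len(parcels)), r):
                if sum(parcels[i][1] for i in combo) <= budget:
                    best = max(best, (sum(parcels[i][0] for i in combo), combo))
        return best

    print("greedy:", greedy(parcels, budget))   # (10, [0])
    print("exact: ", exact(parcels, budget))    # (12, (1, 2)), a 20% gain here
    ```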

  6. Optimization in the utility maximization framework for conservation planning: a comparison of solution procedures in a study of multifunctional agriculture

    USGS Publications Warehouse

    Kreitler, Jason R.; Stoms, David M.; Davis, Frank W.

    2014-01-01

    Quantitative methods of spatial conservation prioritization have traditionally been applied to issues in conservation biology and reserve design, though their use in other types of natural resource management is growing. The utility maximization problem is one form of a covering problem where multiple criteria can represent the expected social benefits of conservation action. This approach allows flexibility with a problem formulation that is more general than typical reserve design problems, though the solution methods are very similar. However, few studies have addressed optimization in utility maximization problems for conservation planning, and the effect of solution procedure is largely unquantified. Therefore, this study mapped five criteria describing elements of multifunctional agriculture to determine a hypothetical conservation resource allocation plan for agricultural land conservation in the Central Valley of CA, USA. We compared solution procedures within the utility maximization framework to determine the difference between an open source integer programming approach and a greedy heuristic, and find gains from optimization of up to 12%. We also model land availability for conservation action as a stochastic process and determine the decline in total utility compared to the globally optimal set using both solution algorithms. Our results are comparable to other studies illustrating the benefits of optimization for different conservation planning problems, and highlight the importance of maximizing the effectiveness of limited funding for conservation and natural resource management.

  7. Multidisciplinary optimization

    SciTech Connect

    Dennis, J.; Lewis, R.M.; Cramer, E.J.; Frank, P.M.; Shubin, G.R.

    1994-12-31

    This talk will use aeroelastic design and reservoir characterization as examples to introduce some approaches to MDO, or Multidisciplinary Optimization. This problem arises especially in engineering design, where it is considered of paramount importance in today's competitive global business climate. It is interesting to an optimizer because the constraints involve coupled dissimilar systems of parameterized partial differential equations, each arising from a different discipline, like structural analysis, computational fluid dynamics, etc. Usually, these constraints are accessible only through pde solvers rather than through algebraic residual calculations as we are used to having. Thus, just finding a multidisciplinary feasible point is a daunting task. Many such problems have discrete-variable disciplines, multiple objectives, and other challenging features. After discussing some interesting practical features of the design problem, we will give some standard ways to formulate the problem as well as some novel ways that lend themselves to divide-and-conquer parallelism.

  8. Numerical Optimization

    DTIC Science & Technology

    1992-12-01

    steady-state fluid flow through porous media. Some of these problems can be formulated as a variational inequality after an ingenious transformation...constrained optimization problems, we describe two new solution methods which resulted from the research. The first is a continuous "inexact" method for solving systems of nonlinear equations and complementarity problems (along the lines of the DAFNE Method), and the second is a continuous

  9. Optimization methods for decision making in disease prevention and epidemic control.

    PubMed

    Deng, Yan; Shen, Siqian; Vorobeychik, Yevgeniy

    2013-11-01

    This paper investigates problems of disease prevention and epidemic control (DPEC), in which we optimize two sets of decisions: (i) vaccinating individuals and (ii) closing locations, given respective budgets, with the goal of minimizing the expected number of infected individuals after intervention. The spread of diseases is inherently stochastic due to the uncertainty about disease transmission and human interaction. We use a bipartite graph to represent individuals' propensities to visit a set of locations, and formulate two integer nonlinear programming models to optimize the choices of individuals to vaccinate and locations to close. Our first model assumes that if a location is closed, its visitors stay in a safe location and will not visit other locations. Our second model incorporates compensatory behavior by assuming multiple behavioral groups, always visiting the most preferred locations that remain open. The paper develops algorithms based on a greedy strategy, dynamic programming, and integer programming, and compares their computational efficacy and solution quality. We test problem instances derived from daily behavior patterns of 100 randomly chosen individuals (corresponding to 195 locations) in Portland, Oregon, and provide policy insights regarding the use of the two DPEC models. Copyright © 2013 Elsevier Inc. All rights reserved.

  10. Optimal space-time attacks on system state estimation under a sparsity constraint

    NASA Astrophysics Data System (ADS)

    Lu, Jingyang; Niu, Ruixin; Han, Puxiao

    2016-05-01

    System state estimation in the presence of an adversary that injects false information into sensor readings has attracted much attention in wide application areas, such as target tracking with compromised sensors, secure monitoring of dynamic electric power systems, secure driverless cars, and radar tracking and detection in the presence of jammers. From a malicious adversary's perspective, the optimal strategy for attacking a multi-sensor dynamic system over sensors and over time is investigated. It is assumed that the system defender can perfectly detect the attacks and identify and remove sensor data once they are corrupted by false information injected by the adversary. With this in mind, the adversary's goal is to maximize the covariance matrix of the system state estimate by the end of the attack period, under a sparse attack constraint such that the adversary can only attack the system a few times over time and over sensors. The sparsity assumption is due to the adversary's limited resources and his/her intention to reduce the chance of being detected by the system defender. This becomes an integer programming problem, and finding its optimal solution by exhaustive search is intractable with a prohibitive complexity, especially for a system with a large number of sensors and over a large number of time steps. Several suboptimal solutions, such as those based on greedy search and dynamic programming, are proposed to find the attack strategies. Examples and numerical results are provided in order to illustrate the effectiveness and the reduced computational complexities of the proposed attack strategies.

  11. Challenges and Solutions in Optimizing Execution Performance of a Clinical Decision Support-Based Quality Measurement (CDS-QM) Framework.

    PubMed

    Tippetts, Tyler J; Warner, Phillip B; Kukhareva, Polina V; Shields, David E; Staes, Catherine J; Kawamoto, Kensaku

    2015-01-01

    Given the close relationship between clinical decision support (CDS) and quality measurement (QM), it has been proposed that a standards-based CDS Web service could be leveraged to enable QM. Benefits of such a CDS-QM framework include semantic consistency and implementation efficiency. However, earlier research has identified execution performance as a critical barrier when CDS-QM is applied to large populations. Here, we describe challenges encountered and solutions devised to optimize CDS-QM execution performance. Through these optimizations, the CDS-QM execution time was reduced by approximately three orders of magnitude, such that approximately 370,000 patient records can now be evaluated for 22 quality measure groups in less than 5 hours (approximately 2 milliseconds per measure group per patient). Several key optimization methods were identified, with the most impact achieved through population-based retrieval of relevant data, multi-step data staging, and parallel processing. These optimizations have enabled CDS-QM to be operationally deployed at an enterprise level.

  12. Optimal pipelining

    NASA Technical Reports Server (NTRS)

    Dubey, Pradeep K.; Flynn, Michael J.

    1990-01-01

    An effort is made to characterize the tradeoffs and overheads limiting the speedup potential theoretically projected for pipeline-incorporating computer architectures, using a mathematical model of the roles played by the various parameters. Pipeline optimization proceeds by a partitioning of the pipeline into an optimum number of segments so that maximization of throughput is obtained. Inferences are drawn from the model, and potential improvements to it are identified. Substantial agreement is obtained with Kunkel and Smith's (1986) CRAY-1S simulations of pipelining.
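
    A sketch of the kind of throughput model involved, under assumed notation (total logic delay T, per-stage latch overhead S, and a fraction b of instructions paying a full flush of a k-stage pipeline); this is a textbook-style derivation, not necessarily the paper's exact formulation:

    ```latex
    % Average time per instruction for a k-stage pipeline (notation assumed):
    \[
      \tau(k) = \left(\frac{T}{k} + S\right)\bigl(1 + b\,(k-1)\bigr)
    \]
    % Setting d\tau/dk = 0 gives the throughput-optimal number of segments:
    \[
      k_{\mathrm{opt}} = \sqrt{\frac{T\,(1-b)}{S\,b}}
    \]
    % so latch overhead (S) and hazard frequency (b) bound useful pipeline depth.
    ```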

  13. [SIAM conference on optimization

    SciTech Connect

    Not Available

    1992-05-10

    Abstracts are presented of 63 papers on the following topics: large-scale optimization, interior-point methods, algorithms for optimization, problems in control, network optimization methods, and parallel algorithms for optimization problems.

  14. [SIAM conference on optimization

    SciTech Connect

    Not Available

    1992-05-10

    Abstracts are presented of 63 papers on the following topics: large-scale optimization, interior-point methods, algorithms for optimization, problems in control, network optimization methods, and parallel algorithms for optimization problems.

  15. Efficient Optimization of Stimuli for Model-Based Design of Experiments to Resolve Dynamical Uncertainty

    PubMed Central

    Mdluli, Thembi; Buzzard, Gregery T.; Rundell, Ann E.

    2015-01-01

    This model-based design of experiments (MBDOE) method determines the input magnitudes of the experimental stimuli to apply and the associated measurements that should be taken to optimally constrain the uncertain dynamics of a biological system under study. The ideal global solution for this experiment design problem is generally computationally intractable because of parametric uncertainties in the mathematical model of the biological system. Others have addressed this issue by limiting the solution to a local estimate of the model parameters. Here we present an approach that is independent of the local parameter constraint. This approach is made computationally efficient and tractable by the use of: (1) sparse grid interpolation that approximates the biological system dynamics, (2) representative parameters that uniformly represent the data-consistent dynamical space, and (3) probability weights of the represented experimentally distinguishable dynamics. Our approach identifies data-consistent representative parameters using sparse grid interpolants, constructs the optimal input sequence from a greedy search, and defines the associated optimal measurements using a scenario tree. We explore the optimality of this MBDOE algorithm using a 3-dimensional Hes1 model and a 19-dimensional T-cell receptor model. The 19-dimensional T-cell model also demonstrates the MBDOE algorithm’s scalability to higher dimensions. In both cases, the dynamical uncertainty region that bounds the trajectories of the target system states was reduced by as much as 86% and 99%, respectively, after completing the designed experiments in silico. Our results suggest that for resolving dynamical uncertainty, the ability to design an input sequence paired with its associated measurements is particularly important when limited by the number of measurements. PMID:26379275

  16. Efficient Optimization of Stimuli for Model-Based Design of Experiments to Resolve Dynamical Uncertainty.

    PubMed

    Mdluli, Thembi; Buzzard, Gregery T; Rundell, Ann E

    2015-09-01

    This model-based design of experiments (MBDOE) method determines the input magnitudes of the experimental stimuli to apply and the associated measurements that should be taken to optimally constrain the uncertain dynamics of a biological system under study. The ideal global solution for this experiment design problem is generally computationally intractable because of parametric uncertainties in the mathematical model of the biological system. Others have addressed this issue by limiting the solution to a local estimate of the model parameters. Here we present an approach that is independent of the local parameter constraint. This approach is made computationally efficient and tractable by the use of: (1) sparse grid interpolation that approximates the biological system dynamics, (2) representative parameters that uniformly represent the data-consistent dynamical space, and (3) probability weights of the represented experimentally distinguishable dynamics. Our approach identifies data-consistent representative parameters using sparse grid interpolants, constructs the optimal input sequence from a greedy search, and defines the associated optimal measurements using a scenario tree. We explore the optimality of this MBDOE algorithm using a 3-dimensional Hes1 model and a 19-dimensional T-cell receptor model. The 19-dimensional T-cell model also demonstrates the MBDOE algorithm's scalability to higher dimensions. In both cases, the dynamical uncertainty region that bounds the trajectories of the target system states was reduced by as much as 86% and 99%, respectively, after completing the designed experiments in silico. Our results suggest that for resolving dynamical uncertainty, the ability to design an input sequence paired with its associated measurements is particularly important when limited by the number of measurements.

  17. Optimal refrigerator.

    PubMed

    Allahverdyan, Armen E; Hovhannisyan, Karen; Mahler, Guenter

    2010-05-01

    We study a refrigerator model which consists of two n-level systems interacting via a pulsed external field. Each system couples to its own thermal bath at temperatures T_h and T_c, respectively (θ ≡ T_c/T_h < 1). The refrigerator functions in two steps: thermally isolated interaction between the systems driven by the external field, and isothermal relaxation back to equilibrium. There is a complementarity between the power of heat transfer from the cold bath and the efficiency: the latter vanishes when the former is maximized and vice versa. A reasonable compromise is achieved by optimizing the product of the heat-power and efficiency over the Hamiltonian of the two systems. The efficiency is then found to be bounded from below by ζ_CA = 1/√(1-θ) - 1 (an analog of the Curzon-Ahlborn efficiency), besides being bounded from above by the Carnot efficiency ζ_C = 1/(1-θ) - 1. The lower bound is reached in the equilibrium limit θ → 1. The Carnot bound is reached (for a finite power and a finite amount of heat transferred per cycle) for ln n ≫ 1. If the above maximization is constrained by assuming homogeneous energy spectra for both systems, the efficiency is bounded from above by ζ_CA and converges to it for n ≫ 1.

  18. Optimal refrigerator

    NASA Astrophysics Data System (ADS)

    Allahverdyan, Armen E.; Hovhannisyan, Karen; Mahler, Guenter

    2010-05-01

    We study a refrigerator model which consists of two n-level systems interacting via a pulsed external field. Each system couples to its own thermal bath at temperatures T_h and T_c, respectively (θ ≡ T_c/T_h < 1). The refrigerator functions in two steps: thermally isolated interaction between the systems driven by the external field, and isothermal relaxation back to equilibrium. There is a complementarity between the power of heat transfer from the cold bath and the efficiency: the latter vanishes when the former is maximized and vice versa. A reasonable compromise is achieved by optimizing the product of the heat-power and efficiency over the Hamiltonian of the two systems. The efficiency is then found to be bounded from below by ζ_CA = 1/√(1-θ) - 1 (an analog of the Curzon-Ahlborn efficiency), besides being bounded from above by the Carnot efficiency ζ_C = 1/(1-θ) - 1. The lower bound is reached in the equilibrium limit θ → 1. The Carnot bound is reached (for a finite power and a finite amount of heat transferred per cycle) for ln n ≫ 1. If the above maximization is constrained by assuming homogeneous energy spectra for both systems, the efficiency is bounded from above by ζ_CA and converges to it for n ≫ 1.

  19. Optimization and scale-up of a fluid bed tangential spray rotogranulation process.

    PubMed

    Bouffard, J; Dumont, H; Bertrand, F; Legros, R

    2007-04-20

    The production of pellets in the pharmaceutical industry generally involves multi-step processing: (1) mixing, (2) wet granulation, (3) spheronization and (4) drying. While extrusion-spheronization processes have been popular because of their simplicity, fluid-bed rotogranulation (FBRG) is now being considered as an alternative, since it offers the advantage of combining the different steps into one processing unit, thus reducing processing time and material handling. This work aimed at the development of an FBRG process for the production of pellets in a 4.5-l Glatt GCPG1 tangential spray rotoprocessor and its optimization using factorial design. The factors considered were: (1) rotor disc velocity, (2) gap air pressure, (3) air flow rate, (4) binder spray rate and (5) atomization pressure. The pellets were characterized for their physical properties by measuring size distribution, roundness and flow properties. The results indicated that pellet mean particle size is negatively affected by air flow rate and rotor plate speed, while binder spray rate has a positive effect on size; pellet flow properties are enhanced by operating with increased air flow rate and worsened with increased binder spray rate. Multiple regression analysis enabled the identification of an optimal operating window for the production of acceptable pellets. Scale-up of these operating conditions was tested in a 30-l Glatt GPCG15 FBRG.

  20. Optimization of propranolol HCl release kinetics from press coated sustained release tablets.

    PubMed

    Ali, Adel Ahmed; Ali, Ahmed Mahmoud

    2013-01-01

    Press-coated sustained release tablets offer a valuable, cheap and easy-to-manufacture alternative to the highly expensive, multi-step manufacture and filling of coated beads. In this study, propranolol HCl press-coated tablets were prepared using hydroxypropylmethylcellulose (HPMC) as the tablet coating material together with carbopol 971P and compressol as release modifiers. The prepared formulations were optimized for zero-order release using an artificial neural network program (INForm, Intelligensys Ltd, North Yorkshire, UK). Typical zero-order release kinetics with an extended release profile for more than 12 h were obtained. The most important variables considered by the program in optimizing the formulations were the type and proportion of the polymer mixture in the coat layer and the distribution ratio of drug between core and coat. The key elements found were incorporation of 31-38% of the drug in the coat and fixing the amount of polymer in the coat to be not less than 50% of the coat layer. Optimum zero-order release kinetics (linear regression r2 = 0.997 and Peppas model n value > 0.80) were obtained when 2.5-10% carbopol and 25-42.5% compressol were incorporated into the 50% HPMC coat layer.

  1. A Hybrid Optimization Framework with POD-based Order Reduction and Design-Space Evolution Scheme

    NASA Astrophysics Data System (ADS)

    Ghoman, Satyajit S.

    The main objective of this research is to develop an innovative multi-fidelity multi-disciplinary design, analysis and optimization suite that integrates certain solution generation codes and newly developed innovative tools to improve the overall optimization process. The research performed herein is divided into two parts: (1) the development of an MDAO framework by integration of variable-fidelity physics-based computational codes, and (2) enhancements to such a framework by incorporating innovative features extending its robustness. The first part of this dissertation describes the development of a conceptual Multi-Fidelity Multi-Strategy and Multi-Disciplinary Design Optimization Environment (M3DOE), in the context of aircraft wing optimization. M3DOE provides the user a capability to optimize configurations with a choice of (i) the level of fidelity desired, (ii) the use of a single-step or multi-step optimization strategy, and (iii) a combination of a series of structural and aerodynamic analyses. The modularity of M3DOE allows it to be a part of other inclusive optimization frameworks. M3DOE is demonstrated within the context of shape and sizing optimization of the wing of a Generic Business Jet aircraft. Two different optimization objectives, viz. dry weight minimization and cruise range maximization, are studied by conducting one low-fidelity and two high-fidelity optimization runs to demonstrate the application scope of M3DOE. The second part of this dissertation describes the development of an innovative hybrid optimization framework that extends the robustness of M3DOE by employing a proper orthogonal decomposition-based design-space order reduction scheme combined with the evolutionary algorithm technique. The POD method of extracting dominant modes from an ensemble of candidate configurations is used for the design-space order reduction. The snapshot of candidate population is updated iteratively using the evolutionary algorithm technique of

  2. Optimal design of measurement settings for quantum-state-tomography experiments

    NASA Astrophysics Data System (ADS)

    Li, Jun; Huang, Shilin; Luo, Zhihuang; Li, Keren; Lu, Dawei; Zeng, Bei

    2017-09-01

    Quantum state tomography is an indispensable but costly part of many quantum experiments. Typically, it requires measurements to be carried out in a number of different settings on a fixed experimental setup. The collected data are often informationally overcomplete, with the amount of information redundancy depending on the particular set of measurement settings chosen. This raises a question about how one should optimally take data so that the number of measurement settings necessary can be reduced. Here, we cast this problem in terms of integer programming. For a given experimental setup, standard integer-programming algorithms allow us to find the minimum set of readout operations that can realize a target tomographic task. We apply the method to certain basic and practical state-tomographic problems in nuclear-magnetic-resonance experimental systems. The results show that considerably fewer readout operations can be found using our technique than by using the previous greedy search strategy. Therefore, our method could be helpful for simplifying measurement schemes to minimize the experimental effort.
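
    The reduction to integer programming can be illustrated with a toy covering model; the settings and measurements below are hypothetical, and the open-source PuLP library stands in for whatever solver the authors used:

    ```python
    # Choose the fewest readout settings that cover all required measurements.
    import pulp

    measurements = {"m1", "m2", "m3", "m4"}
    settings = {"s1": {"m1", "m2"}, "s2": {"m2", "m3"},
                "s3": {"m3", "m4"}, "s4": {"m1", "m4"}}   # setting -> what it realizes

    prob = pulp.LpProblem("min_settings", pulp.LpMinimize)
    x = {s: pulp.LpVariable(s, cat="Binary") for s in settings}
    prob += pulp.lpSum(x.values())                        # minimize settings used
    for m in measurements:                                # every measurement covered
        prob += pulp.lpSum(x[s] for s, c in settings.items() if m in c) >= 1
    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    print([s for s in settings if x[s].value() == 1])     # e.g. ['s1', 's3']
    ```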

  3. Temporal variability of the optimal monitoring setup assessed using information theory

    NASA Astrophysics Data System (ADS)

    Fahle, Marcus; Hohenbrink, Tobias L.; Dietrich, Ottfried; Lischeid, Gunnar

    2015-09-01

    Hydrology is rich in methods that use information theory to evaluate monitoring networks. Yet in most existing studies, only the available data set as a whole is used, which neglects the intraannual variability of the hydrological system. In this paper, we demonstrate how this variability can be considered by extending monitoring evaluation to subsets of the available data. Therefore, we separately evaluated time windows of fixed length, which were shifted through the data set, and successively extended time windows. We used basic information theory measures and a greedy ranking algorithm based on the criterion of maximum information/minimum redundancy. The network investigated monitored surface and groundwater levels at quarter-hourly intervals and was located at an artificially drained lowland site in the Spreewald region in north-east Germany. The results revealed that some of the monitoring stations were of value permanently while others were needed only temporally. The prevailing meteorological conditions, particularly the amount of precipitation, affected the degree of similarity between the water levels measured. The hydrological system tended to act more individually during periods of no or little rainfall. The optimal monitoring setup, its stability, and the monitoring effort necessary were influenced by the meteorological forcing. Altogether, the methodology presented can help achieve a monitoring network design that has a more even performance or covers the conditions of interest (e.g., floods or droughts) best.
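
    A minimal sketch of a maximum-information/minimum-redundancy greedy ranking on binned series follows; the pairwise conditional-entropy approximation, the bin count, and the data are invented, not the authors' exact criterion:

    ```python
    import numpy as np

    def entropy(x, bins=16):
        p, _ = np.histogram(x, bins=bins)
        p = p[p > 0] / p.sum()
        return float(-(p * np.log2(p)).sum())

    def joint_entropy(x, y, bins=16):
        p = np.histogram2d(x, y, bins=bins)[0]
        p = p[p > 0] / p.sum()
        return float(-(p * np.log2(p)).sum())

    def greedy_rank(series):
        remaining = set(series)
        first = max(remaining, key=lambda s: entropy(series[s]))
        ranked, remaining = [first], remaining - {first}
        while remaining:
            # information a candidate adds beyond the closest already-ranked station,
            # approximated pairwise as min over ranked c of H(candidate, c) - H(c)
            nxt = max(remaining, key=lambda s: min(
                joint_entropy(series[s], series[c]) - entropy(series[c]) for c in ranked))
            ranked.append(nxt)
            remaining.discard(nxt)
        return ranked

    rng = np.random.default_rng(1)
    base = rng.normal(size=500)
    series = {"A": base, "B": base + 0.1 * rng.normal(size=500), "C": rng.normal(size=500)}
    print(greedy_rank(series))   # C should outrank the redundant member of the A/B pair
    ```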

  4. Multiple Satellite Trajectory Optimization

    DTIC Science & Technology

    2004-12-01

    The driving principle used to solve optimal control problems was first formalized by the Soviet...methods and processes of solving optimal control problems, this section will demonstrate how the formulations work as expected. Once coded, the

  5. Deterministic Direct Aperture Optimization Using Multiphase Piecewise Constant Segmentation

    NASA Astrophysics Data System (ADS)

    Nguyen, Dan Minh

    Purpose: Direct Aperture Optimization (DAO) attempts to incorporate machine constraints in the inverse optimization to eliminate the post-processing steps in fluence map optimization (FMO) that degrade plan quality. Current commercial DAO methods utilize a stochastic or greedy approach to search a small aperture solution space. In this study, we propose a novel deterministic direct aperture optimization that integrates the segmentation of the fluence map in the optimization problem using the multiphase piecewise constant Mumford-Shah formulation. Methods: The deterministic DAO problem was formulated to include an L2-norm dose fidelity term to penalize differences between the projected dose and the prescribed dose, an anisotropic total variation term to promote piecewise continuity in the fluence maps, and the multiphase piecewise constant Mumford-Shah function to partition the fluence into pairwise discrete segments. A proximal-class, first-order primal-dual solver was implemented to solve the large-scale optimization problem, and an alternating module strategy was implemented to update fluence and delivery segments. Three patients of varying complexity (one glioblastoma multiforme (GBM) patient, one lung (LNG) patient, and one bilateral head and neck (H&N) patient with 3 PTVs) were selected to test the new DAO method. For comparison, a popular and successful approach to DAO known as simulated annealing (a stochastic approach) was replicated. Each patient was planned using the Mumford-Shah based DAO (DAOMS) and the simulated annealing based DAO (DAOSA). PTV coverage, PTV homogeneity (D95/D5), and OAR sparing were assessed for each plan. In addition, high dose spillage, defined as the 50% isodose volume divided by the tumor volume, as well as conformity, defined as the van't Riet conformation number, were evaluated. Results: DAOMS achieved essentially the same OAR doses compared with the DAOSA plans for the GBM case. The average difference of OAR Dmax and Dmean between the

  6. Parameter optimization of a hydrologic model in a snow-dominated basin using a modular Python framework

    NASA Astrophysics Data System (ADS)

    Volk, J. M.; Turner, M. A.; Huntington, J. L.; Gardner, M.; Tyler, S.; Sheneman, L.

    2016-12-01

    Many distributed models that simulate watershed hydrologic processes require a collection of multi-dimensional parameters as input, some of which need to be calibrated before the model can be applied. The Precipitation Runoff Modeling System (PRMS) is a physically-based and spatially distributed hydrologic model that contains a considerable number of parameters that often need to be calibrated. Modelers can also benefit from uncertainty analysis of these parameters. To meet these needs, we developed a modular framework in Python to conduct PRMS parameter optimization, uncertainty analysis, interactive visual inspection of parameters and outputs, and other common modeling tasks. Here we present results for multi-step calibration of sensitive parameters controlling solar radiation, potential evapo-transpiration, and streamflow in a PRMS model that we applied to the snow-dominated Dry Creek watershed in Idaho. We also demonstrate how our modular approach enables the user to use a variety of parameter optimization and uncertainty methods or easily define their own, such as Monte Carlo random sampling, uniform sampling, or even optimization methods such as the downhill simplex method or its commonly used, more robust counterpart, shuffled complex evolution.
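
    The calibration idea can be sketched with a toy two-parameter model fitted by the downhill simplex (Nelder-Mead) method from SciPy; the rainfall/flow data and the one-bucket model are invented stand-ins for PRMS:

    ```python
    import numpy as np
    from scipy.optimize import minimize

    rain = np.array([0.0, 5.0, 12.0, 3.0, 0.0, 8.0])
    observed = np.array([0.1, 1.2, 3.5, 1.0, 0.3, 2.2])   # hypothetical flows

    def simulate(params):
        k, storage = params
        flows = []
        for p in rain:
            storage += p
            q = k * storage            # linear-reservoir outflow
            storage -= q
            flows.append(q)
        return np.array(flows)

    def rmse(params):
        return float(np.sqrt(np.mean((simulate(params) - observed) ** 2)))

    res = minimize(rmse, x0=[0.3, 1.0], method="Nelder-Mead")
    print("calibrated (k, initial storage):", res.x, " RMSE:", res.fun)
    ```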

  7. SOPRA: Scaffolding algorithm for paired reads via statistical optimization

    PubMed Central

    2010-01-01

    Background: High throughput sequencing (HTS) platforms produce gigabases of short read (<100 bp) data per run. While these short reads are adequate for resequencing applications, de novo assembly of moderate-size genomes from such reads remains a significant challenge. These limitations could be partially overcome by utilizing mate pair technology, which provides pairs of short reads separated by a known distance along the genome. Results: We have developed SOPRA, a tool designed to exploit the mate pair/paired-end information for assembly of short reads. The main focus of the algorithm is selecting a sufficiently large subset of simultaneously satisfiable mate pair constraints to achieve a balance between the size and the quality of the output scaffolds. Scaffold assembly is presented as an optimization problem for variables associated with vertices and with edges of the contig connectivity graph. Vertices of this graph are individual contigs, with edges drawn between contigs connected by mate pairs. Similar graph problems have been invoked in the context of shotgun sequencing and scaffold building for previous generations of sequencing projects. However, given the error-prone nature of HTS data and the fundamental limitations from the shortness of the reads, the ad hoc greedy algorithms used in the earlier studies are likely to lead to poor quality results in the current context. SOPRA circumvents this problem by treating all the constraints on an equal footing for solving the optimization problem, the solution itself indicating the problematic constraints (chimeric/repetitive contigs, etc.) to be removed. The process of solving and removing constraints is iterated until one reaches a core set of consistent constraints. For SOLiD sequencer data, SOPRA uses a dynamic programming approach to robustly translate the color-space assembly to base-space. For assessing the quality of an assembly, we report the no-match/mismatch error rate as well as the rates of various

  8. Optimizing Site Selection in Urban Areas in Northern Switzerland

    NASA Astrophysics Data System (ADS)

    Plenkers, K.; Kraft, T.; Bethmann, F.; Husen, S.; Schnellmann, M.

    2012-04-01

    There is a need to observe weak seismic events (M<2) in areas close to potential nuclear-waste repositories or nuclear power plants, in order to analyze the underlying seismo-tectonic processes and estimate their seismic hazard. We are therefore densifying the existing Swiss Digital Seismic Network in northern Switzerland with 20 additional stations. The new network, which will be in operation by the end of 2012, aims at observing seismicity in northern Switzerland with a completeness of M_c=1.0 and a location error < 0.5 km in epicenter and < 2 km in focal depth. Monitoring of weak seismic events in this region is challenging, because the area of interest is densely populated and geology is dominated by the Swiss molasse basin. An optimal network design and a thoughtful choice of station sites are, therefore, mandatory. To help with decision making, we developed a step-wise approach to find the optimum network configuration. Our approach is based on standard network optimization techniques regarding the localization error. As a new feature, our approach uses an ambient noise model to compute expected signal-to-noise ratios for a given site. The ambient noise model uses information on land use and major infrastructures such as highways and train lines. We ran a series of network optimizations with an increasing number of stations until the requirements regarding localization error and magnitude of completeness were met. The resulting network geometry serves as input for the site selection. Site selection is done by using a newly developed multi-step assessment scheme that takes into account local noise level, geology, infrastructure, and costs necessary to realize the station. The assessment scheme weights the different parameters, and the most promising sites are identified. In a first step, all potential sites are classified based on information from topographic maps and site inspection. In a second step, local noise conditions are measured at selected sites. We
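
    The weighting scheme is not given in the abstract; a toy version of the multi-step site assessment might look like the following, with invented criteria, weights, and normalized scores:

```python
# Hedged sketch: criteria, weights, and normalized scores are invented; the
# paper's actual multi-step scheme also interleaves classification and
# on-site noise measurements.
weights = {"noise": 0.4, "geology": 0.3, "infrastructure": 0.2, "cost": 0.1}

# Scores normalized to [0, 1]; higher is better (cost is scored as cheapness).
candidates = {
    "site_A": {"noise": 0.9, "geology": 0.7, "infrastructure": 0.5, "cost": 0.8},
    "site_B": {"noise": 0.6, "geology": 0.9, "infrastructure": 0.8, "cost": 0.4},
    "site_C": {"noise": 0.4, "geology": 0.5, "infrastructure": 0.9, "cost": 0.9},
}

def score(site):
    return sum(weights[k] * site[k] for k in weights)

for name in sorted(candidates, key=lambda s: score(candidates[s]), reverse=True):
    print(f"{name}: {score(candidates[name]):.2f}")
```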

  9. Simulation of multi-steps thermal transition in 2D spin-crossover nanoparticles

    NASA Astrophysics Data System (ADS)

    Jureschi, Catalin-Maricel; Pottier, Benjamin-Louis; Linares, Jorge; Richard Dahoo, Pierre; Alayli, Yasser; Rotaru, Aurelian

    2016-04-01

    We have used an Ising-like model to study the thermal behavior of a 2D spin-crossover (SCO) system embedded in a matrix. The interaction parameter between edge SCO molecules and their local environment was included in the standard Ising-like model as an additional term. The influence of the system's size and of the ratio between the number of edge molecules and the number of remaining molecules is also discussed.

  10. Computation of Growth Rates of Random Sequences with Multi-step Memory

    NASA Astrophysics Data System (ADS)

    Zhang, Chenfei; Lan, Yueheng

    2013-02-01

    We extend the generating function approach to the computation of the growth rate of random Fibonacci sequences with long memory. Functional iteration equations are obtained and their general form is conjectured and proved, based on which an asymptotic representation of the growth rate is obtained. The validity of both the derived and the conjectured formulas is verified by comparison with Monte Carlo simulation. A numerical scheme for the functional iteration is designed and implemented successfully.
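
    For the memory-one (classical random Fibonacci) case, the Monte Carlo baseline the authors compare against can be sketched as follows; renormalizing the pair vector keeps the running product from overflowing:

```python
import numpy as np

def growth_rate_mc(n_steps=200_000, seed=0):
    """Monte Carlo estimate of gamma = lim (1/n) ln|x_n| for the random
    Fibonacci recurrence x_{n+1} = x_n +/- x_{n-1} (i.i.d. signs, prob 1/2)."""
    rng = np.random.default_rng(seed)
    v = np.array([1.0, 1.0])              # the pair (x_{n-1}, x_n)
    log_sum = 0.0
    for _ in range(n_steps):
        v = np.array([v[1], v[1] + rng.choice((-1.0, 1.0)) * v[0]])
        norm = np.linalg.norm(v)
        log_sum += np.log(norm)           # accumulate growth, then renormalize
        v /= norm
    return log_sum / n_steps

# exp(gamma) should approach ~1.13198824 (Viswanath's constant).
print(np.exp(growth_rate_mc()))
```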

  11. Multi-off-grid methods in multi-step integration of ordinary differential equations

    NASA Technical Reports Server (NTRS)

    Beaudet, P. R.

    1974-01-01

    Description of methods of solving first- and second-order systems of differential equations in which all derivatives are evaluated at off-grid locations in order to circumvent the Dahlquist stability limitation on the order of on-grid methods. The proposed multi-off-grid methods require off-grid state predictors for the evaluation of the n derivatives at each step. Progressing forward in time, the off-grid states are predicted using a linear combination of back on-grid state values and off-grid derivative evaluations. A comparison is made between the proposed multi-off-grid methods and the corresponding Adams and Cowell on-grid integration techniques in integrating systems of ordinary differential equations, showing a significant reduction in the error at larger step sizes in the case of the multi-off-grid integrator.
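
    The multi-off-grid predictors themselves are not reproduced here; for reference, the on-grid two-step Adams scheme of the kind used as the comparison baseline can be written as:

```python
import numpy as np

def adams_bashforth2(f, t0, y0, h, n_steps):
    """Two-step on-grid Adams-Bashforth integrator, bootstrapped with one
    Euler step; representative of the on-grid methods compared against."""
    t, y = t0, np.asarray(y0, dtype=float)
    f_prev = f(t, y)
    y = y + h * f_prev                                 # Euler bootstrap
    t += h
    for _ in range(n_steps - 1):
        f_curr = f(t, y)
        y = y + h * (1.5 * f_curr - 0.5 * f_prev)      # AB2 update
        f_prev = f_curr
        t += h
    return t, y

# Example: y' = -y, y(0) = 1; exact solution is exp(-t).
t_end, y_end = adams_bashforth2(lambda t, y: -y, 0.0, 1.0, 0.01, 100)
print(float(y_end), np.exp(-t_end))
```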

  12. A Multi-Step Simulation Approach Toward Secure Fault Tolerant System Evaluation

    DTIC Science & Technology

    2010-01-01

    Level Dependability Analysis”, IEEE Transactions on Computers, v.46 n.1, p.60-74, January 1997 [11] B. Meyer, Object-Oriented Software Construction...Prentice Hall, 1988 [12] E. N. (Mootaz) Elnozahy, Lorenzo Alvisi, Yi-Min Wang, David B. Johnson, “A survey of rollback-recovery protocols in

  13. Multi-Step Attack Detection via Bayesian Modeling under Model Parameter Uncertainty

    ERIC Educational Resources Information Center

    Cole, Robert

    2013-01-01

    Organizations in all sectors of business have become highly dependent upon information systems for the conduct of business operations. Of necessity, these information systems are designed with many points of ingress, points of exposure that can be leveraged by a motivated attacker seeking to compromise the confidentiality, integrity or…

  14. A multi-step method for partial eigenvalue assignment problem of high order control systems

    NASA Astrophysics Data System (ADS)

    Liu, Hao; Xu, Jiajia

    2017-09-01

    In this paper, we consider the partial eigenvalue assignment problem for high-order control systems. Based on the orthogonality relations, we propose a new method for solving this problem, by which the undesired eigenvalues are moved to desired values while the remaining eigenvalues are kept unchanged. Using the inverse of the Cauchy matrix, we give the solvability condition and the explicit solutions to this problem. Numerical examples show that our method is effective.
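
    The Cauchy-matrix construction is not reproduced here; the sketch below only sets up the kind of high-order (quadratic) eigenvalue problem whose spectrum such a method would partially reassign, using the standard companion linearization and arbitrary test matrices:

```python
import numpy as np

# Eigenvalues of the quadratic pencil (lam^2 M + lam C + K) x = 0 via the
# standard companion linearization; M, C, K are arbitrary test matrices.
n = 3
rng = np.random.default_rng(1)
M, C, K = np.eye(n), rng.normal(size=(n, n)), rng.normal(size=(n, n))

A = np.block([[np.zeros((n, n)), np.eye(n)],
              [-np.linalg.solve(M, K), -np.linalg.solve(M, C)]])
print(np.sort_complex(np.linalg.eigvals(A)))   # the 2n eigenvalues of the pencil
```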

  15. The p27Kip1 Tumor Suppressor and Multi-Step Tumorigenesis

    DTIC Science & Technology

    2001-08-01

    completion of the Celera mouse genome. Sequenced IPCR clones were searched against the Celera database and clones that fell within the same Celera ...in all of the lymphomas containing XPC-1 insertions. There is significant sequence conservation between the murine XPC-1 locus and the syntenic human ...Xq26 region, and sequences homologous to A1464896 and the cloned insertion sites are present in the human Xq26 region with spacing quite similar to

  16. Multi-step regulation of interferon induction by hepatitis C virus.

    PubMed

    Oshiumi, Hiroyuki; Funami, Kenji; Aly, Hussein H; Matsumoto, Misako; Seya, Tsukasa

    2013-04-01

    Acute hepatitis C virus (HCV) infection evokes several distinct innate immune responses in the host, but the virus usually propagates by circumventing these responses. Although a replication intermediate double-stranded RNA is produced in infected cells, type I interferon (IFN) induction and immediate cell death are largely blocked in infected cells. In vitro studies suggested that type I and III IFNs are mainly produced in HCV-infected hepatocytes if the MAVS pathway is functional, and dysfunction of this pathway may lead to cellular permissiveness to HCV replication and production. Cellular immunity, including natural killer cell activation and antigen-specific CD8 T-cell proliferation, occurs following innate immune activation in response to HCV, but is often ineffective for eradication of HCV. Constitutive dsRNA stimulation differs in output from type I IFN therapy, which has been an authentic therapy for patients with HCV. Host innate immune responses to HCV RNA/proteins may be associated with progressive hepatic fibrosis and carcinogenesis once persistent HCV infection is established in opposition to the IFN system. Hence, innate RNA sensing exerts pivotal functions against HCV genome replication and host pathogenesis through modulation of the IFN system. Molecules participating in the RIG-I and Toll-like receptor 3 pathways are the main targets for HCV, disabling the anti-viral functions of these IFN-inducing molecules. We discuss the mechanisms that abolish type I and type III IFN production in HCV-infected cells, which may contribute to understanding the mechanism of virus persistence and resistance to IFN therapy.

  17. Bond strength of multi-step and simplified-step systems.

    PubMed

    Tjan, A H; Castelnuovo, J; Liu, P

    1996-12-01

    To measure and compare the in vitro shear bond strength (SBS) of the following three pairs of multi- and simplified-step dentin bonding systems: OptiBond vs. OptiBond FL, All-Bond 2 vs. One-Step, and Tenure vs. Tenure Quik. 60 extracted human mandibular molars were sectioned perpendicular to the long axis 1 mm above the CEJ to expose the dentin bonding surface. After being wet-ground to 600 grit with SiC abrasive papers, rinsed and dried, the teeth were individually mounted in phenolic rings with epoxy resin, and randomly assigned into six equal groups of 10 each. The dentin surfaces were treated with the above mentioned dentin bonding systems, and a gelatin cylinder filled with resin composite (Pertac-Hybrid) was directly bonded to each pretreated surface. After 7-day storage in 37 degrees C water followed by thermocycling, the specimens were shear tested to failure on an Instron machine. Data were analyzed by independent t-tests, one-way ANOVA, and Duncan's Multiple Comparison tests at alpha = 0.05. Except for the pair Tenure/Tenure Quik, the differences between the pairs All-Bond 2/One-Step and OptiBond/OptiBond FL were statistically significant with All-Bond 2 and OptiBond FL yielding higher shear bond strength (P < 0.05). Findings of this study indicated that OptiBond FL was the only simplified-step system showing improved bond strength.
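
    The raw SBS values are not given in the abstract; the statistical pipeline (independent t-test per pair, one-way ANOVA across groups) can be reproduced in SciPy as sketched below with invented data. Duncan's multiple-comparison test is not in SciPy, so a post-hoc test from another package would be needed for that step:

```python
import numpy as np
from scipy import stats

# Invented SBS values (MPa) for one multi-/simplified-step pair.
multi_step = np.array([22.1, 24.3, 23.5, 25.0, 21.8, 23.9, 24.7, 22.6, 23.2, 24.1])
simplified = np.array([18.4, 19.9, 17.6, 20.3, 18.8, 19.2, 17.9, 20.1, 18.5, 19.4])

t_stat, p_pair = stats.ttest_ind(multi_step, simplified)     # per-pair t-test
f_stat, p_all = stats.f_oneway(multi_step, simplified)       # across groups
print(f"t = {t_stat:.2f} (p = {p_pair:.4f}); F = {f_stat:.2f} (p = {p_all:.4f})")
```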

  18. Multi-step control of muscle diversity by Hox proteins in the Drosophila embryo

    PubMed Central

    Enriquez, Jonathan; Boukhatmi, Hadi; Dubois, Laurence; Philippakis, Anthony A.; Bulyk, Martha L.; Michelson, Alan M.; Crozatier, Michèle; Vincent, Alain

    2010-01-01

    Hox transcription factors control many aspects of animal morphogenetic diversity. The segmental pattern of Drosophila larval muscles shows stereotyped variations along the anteroposterior body axis. Each muscle is seeded by a founder cell and the properties specific to each muscle reflect the expression by each founder cell of a specific combination of ‘identity’ transcription factors. Founder cells originate from asymmetric division of progenitor cells specified at fixed positions. Using the dorsal DA3 muscle lineage as a paradigm, we show here that Hox proteins play a decisive role in establishing the pattern of Drosophila muscles by controlling the expression of identity transcription factors, such as Nautilus and Collier (Col), at the progenitor stage. High-resolution analysis, using newly designed intron-containing reporter genes to detect primary transcripts, shows that the progenitor stage is the key step at which segment-specific information carried by Hox proteins is superimposed on intrasegmental positional information. Differential control of col transcription by the Antennapedia and Ultrabithorax/Abdominal-A paralogs is mediated by separate cis-regulatory modules (CRMs). Hox proteins also control the segment-specific number of myoblasts allocated to the DA3 muscle. We conclude that Hox proteins both regulate and contribute to the combinatorial code of transcription factors that specify muscle identity and act at several steps during the muscle-specification process to generate muscle diversity. PMID:20056681

  19. Visible signatures of the multi-step transition to a beam-plasma-discharge

    NASA Technical Reports Server (NTRS)

    Hallinan, T. J.; Leinbach, H.; Bernstein, W.

    1982-01-01

    Observations are presented of the beam-plasma-discharge (BPD) at pressures below 4 x 10^-6 Torr, which show that there are three abrupt transitions in the beam-plasma interactions. The low-current A1 state (the basic beam with its noded configuration) is dominated by direct collisional ionization of the background gas. In the A2 state (the noded beam surrounded by a weak halo), the ionization is supplemented by another mechanism which perhaps involves cyclotron interactions. The B and C states are distinctly separate forms of the BPD which involve a convective redistribution of beam energy as indicated by the changes in the beam nodes. The B and C states are also found to involve enhancements of between 20 and 100 in the power dissipation by ionization. Thus, in a 20 m pathlength, it is found that approximately 1-4 percent of the beam power is dissipated by the ionization associated with the BPD.

  20. The Multi-Step CADIS method for shutdown dose rate calculations and uncertainty propagation

    SciTech Connect

    Ibrahim, Ahmad M.; Peplow, Douglas E.; Grove, Robert E.; Peterson, Joshua L.; Johnson, Seth R.

    2015-12-01

    Shutdown dose rate (SDDR) analysis requires (a) a neutron transport calculation to estimate neutron flux fields, (b) an activation calculation to compute radionuclide inventories and associated photon sources, and (c) a photon transport calculation to estimate final SDDR. In some applications, accurate full-scale Monte Carlo (MC) SDDR simulations are needed for very large systems with massive amounts of shielding materials. However, these simulations are impractical because calculation of space- and energy-dependent neutron fluxes throughout the structural materials is needed to estimate distribution of radioisotopes causing the SDDR. Biasing the neutron MC calculation using an importance function is not simple because it is difficult to explicitly express the response function, which depends on subsequent computational steps. Furthermore, the typical SDDR calculations do not consider how uncertainties in MC neutron calculation impact SDDR uncertainty, even though MC neutron calculation uncertainties usually dominate SDDR uncertainty.

  1. The Multi-Step CADIS method for shutdown dose rate calculations and uncertainty propagation

    DOE PAGES

    Ibrahim, Ahmad M.; Peplow, Douglas E.; Grove, Robert E.; ...

    2015-12-01

    Shutdown dose rate (SDDR) analysis requires (a) a neutron transport calculation to estimate neutron flux fields, (b) an activation calculation to compute radionuclide inventories and associated photon sources, and (c) a photon transport calculation to estimate final SDDR. In some applications, accurate full-scale Monte Carlo (MC) SDDR simulations are needed for very large systems with massive amounts of shielding materials. However, these simulations are impractical because calculation of space- and energy-dependent neutron fluxes throughout the structural materials is needed to estimate distribution of radioisotopes causing the SDDR. Biasing the neutron MC calculation using an importance function is not simple because it is difficult to explicitly express the response function, which depends on subsequent computational steps. Furthermore, the typical SDDR calculations do not consider how uncertainties in MC neutron calculation impact SDDR uncertainty, even though MC neutron calculation uncertainties usually dominate SDDR uncertainty.

  2. Multi-step inhibition explains HIV-1 protease inhibitor pharmacodynamics and resistance.

    PubMed

    Rabi, S Alireza; Laird, Gregory M; Durand, Christine M; Laskey, Sarah; Shan, Liang; Bailey, Justin R; Chioma, Stanley; Moore, Richard D; Siliciano, Robert F

    2013-09-01

    HIV-1 protease inhibitors (PIs) are among the most effective antiretroviral drugs. They are characterized by highly cooperative dose-response curves that are not explained by current pharmacodynamic theory. An unresolved problem affecting the clinical use of PIs is that patients who fail PI-containing regimens often have virus that lacks protease mutations, in apparent violation of fundamental evolutionary theory. Here, we show that these unresolved issues can be explained through analysis of the effects of PIs on distinct steps in the viral life cycle. We found that PIs do not affect virion release from infected cells but block entry, reverse transcription, and post-reverse transcription steps. The overall dose-response curves could be reconstructed by combining the curves for each step using the Bliss independence principle, showing that independent inhibition of multiple distinct steps in the life cycle generates the highly cooperative dose-response curves that make these drugs uniquely effective. Approximately half of the inhibitory potential of PIs is manifest at the entry step, likely reflecting interactions between the uncleaved Gag and the cytoplasmic tail (CT) of the Env protein. Sequence changes in the CT alone, which are ignored in current clinical tests for PI resistance, conferred PI resistance, providing an explanation for PI failure without resistance.
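
    The Bliss reconstruction described above is easy to state in code. The sketch below combines hypothetical per-step median-effect (Hill) curves into an overall dose-response; the parameter values are illustrative, not the paper's fits:

```python
import numpy as np

def hill(dose, ic50, m):
    """Fractional inhibition of one life-cycle step (median-effect model)."""
    return dose**m / (dose**m + ic50**m)

def bliss_combined(dose, steps):
    """Bliss independence: f_total = 1 - prod_i (1 - f_i)."""
    unaffected = np.ones_like(dose, dtype=float)
    for ic50, m in steps:
        unaffected *= 1.0 - hill(dose, ic50, m)
    return 1.0 - unaffected

# Illustrative parameters for entry, reverse transcription, and a post-RT
# step (slope 1 each); not the paper's fitted values.
steps = [(1.0, 1.0), (2.0, 1.0), (1.5, 1.0)]
doses = np.logspace(-2, 2, 5)
print(bliss_combined(doses, steps))
```

    Combining several steps with individual slope 1 yields an overall curve much steeper than any single Hill curve, reproducing the highly cooperative dose-response behavior described above.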

  3. Multi-step process control and characterization of scanning probe lithography

    NASA Astrophysics Data System (ADS)

    Peterson, C. A.; Ruskell, T. G.; Pyle, J. L.; Workman, R. K.; Yao, X.; Hunt, J. P.; Sarid, D.; Parks, H. G.; Vermeire, B.

    An atomic force microscope with a conducting tip (CT-AFM) was used to fabricate and characterize nanometer scale lines of (1) silicon oxide and (2) silicon nitride on H-terminated n-type silicon (100) wafers. In process (1), a negative bias was applied to the tip of the CT-AFM system and the resulting electric field caused electrolysis of ambient water vapor and local oxidation of the silicon surface. In addition, the accompanying current was detected by a sub-pA current amplifier. In process (2), the presence of a nitrogen atmosphere containing a small partial pressure of ammonia resulted in the local nitridation of the surface. The CT-AFM system was also used to locate and study the dielectric properties of the silicon-oxide lines as well as copper islands buried under 20 nm of silicon dioxide. A computer-controlled feedback system and raster scanning of the sample produced simultaneous topographic and Fowler-Nordheim tunneling maps of the structures under study. Detailed aspects of nanolithography and local-probe Fowler-Nordheim characterization using a CT-AFM will be discussed.

  4. [Successful multi-step management of developmental heart defects after intrauterine diagnosis].

    PubMed

    Hartyánszky, I; Kádár, K; Oprea, V; Palik, I; Sápi, E; Prodán, Z; Bodor, G; Mihályi, S

    1997-03-23

    At the 28th week of gestation, a conotruncal malformation with ventricular septal defect was diagnosed by fetal echocardiography. Postnatal echocardiographic and angiocardiographic examinations confirmed the diagnosis of conotruncal malformation (pulmonary atresia, ventricular septal defect, patent ductus arteriosus, aortopulmonary collateral arteries). Unifocalization (age: 11 months) and total correction with an aortic homograft (age: 7 years) were performed. To our knowledge, our case is the first in which an intrauterine diagnosis of complex congenital heart disease was confirmed after delivery and managed successfully with two-stage surgery.

  5. Use of DBMS in Multi-step Information Systems for LANDSAT

    NASA Technical Reports Server (NTRS)

    Noll, C. E.

    1984-01-01

    Data are obtained by the thematic mapper on LANDSAT 4 in seven bands and are telemetered and electronically recorded at a ground station, where the data must be geometrically and radiometrically corrected before a photographic image is produced. Current system characteristics for processing this information are described, including the menu for data products reports. The tracking system provides up-to-date and complete information and requires that production stages adhere to the inherent DBMS structure. The concept can be applied to any procedures requiring status information.

  6. A multi-step assembly process: drawing, flanging and hemming of metallic sheets

    NASA Astrophysics Data System (ADS)

    Manach, P. Y.; Le Maoût, N.; Thuillier, S.

    2010-06-01

    This paper presents hemming tests on complex geometries, combining curved surfaces and radii of curvature in the plane. The samples are firstly prestrained in order to obtain a strain history prior to flanging and hemming. The choice of the sample geometries as well as prior plastic strains is based on a survey of current geometries hemmed in automotive doors. A device has been designed to hem these samples both by classical and roll-hemming processes and to allow a comparison between both technologies. Roll-in, which characterizes the change of geometry of the hemmed zone between flanging and hemming, and loads are obtained during this multistep process. Results show that roll-in observed in roll-hemming is lower than in classical hemming and that its evolution greatly differs between the two processes. The analysis of the results on different samples shows that it is difficult to establish rules on the variation of other parameters in such a complex multistep process and that it requires an intensive use of numerical simulation.

  7. Modeling the Auto-Ignition of Biodiesel Blends with a Multi-Step Model

    SciTech Connect

    Toulson, Dr. Elisa; Allen, Casey M; Miller, Dennis J; McFarlane, Joanna; Schock, Harold; Lee, Tonghun

    2011-01-01

    There is growing interest in using biodiesel in place of or in blends with petrodiesel in diesel engines; however, biodiesel oxidation chemistry is complicated to directly model and existing surrogate kinetic models are very large, making them computationally expensive. The present study describes a method for predicting the ignition behavior of blends of n-heptane and methyl butanoate, fuels whose blends have been used in the past as a surrogate for biodiesel. The autoignition is predicted using a multistep (8-step) model in order to reduce computational time and make this a viable tool for implementation into engine simulation codes. A detailed reaction mechanism for n-heptane-methyl butanoate blends was used as a basis for validating the multistep model results. The ignition delay trends predicted by the multistep model for the n-heptane-methyl butanoate blends matched well with that of the detailed CHEMKIN model for the majority of conditions tested.
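
    The 8-step mechanism itself is not given in the abstract; the sketch below illustrates the general approach with an even simpler one-step global Arrhenius model, integrating a constant-volume thermal-ignition system and reading off the ignition delay as the time of maximum temperature rise (all constants are invented):

```python
import numpy as np
from scipy.integrate import solve_ivp

# One-step global Arrhenius model with invented constants (not the 8-step
# mechanism): adiabatic, constant-volume thermal ignition.
A, Ea, R = 1.0e9, 1.5e5, 8.314       # pre-exponential (1/s), J/mol, J/(mol K)
q, T0 = 2000.0, 950.0                # adiabatic temperature rise (K), T_init (K)

def rhs(t, y):
    c, T = y                         # unburned fuel fraction, temperature
    rate = A * c * np.exp(-Ea / (R * T))
    return [-rate, q * rate]

sol = solve_ivp(rhs, [0.0, 1.0], [1.0, T0], method="LSODA",
                rtol=1e-8, dense_output=True)
t = np.linspace(0.0, sol.t[-1], 100_000)
T = sol.sol(t)[1]
tau = t[np.argmax(np.gradient(T, t))]    # ignition delay = time of max dT/dt
print(f"ignition delay ~ {tau:.4g} s")
```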

  8. Deciphering the multi-step degradation mechanisms of carbonate-based electrolyte in Li batteries

    NASA Astrophysics Data System (ADS)

    Gachot, Gregory; Grugeon, Sylvie; Armand, Michel; Pilard, Serge; Guenot, Pierre; Tarascon, Jean-Marie; Laruelle, Stephane

    Electrolytes are crucial to the safety and long life of Li-ion batteries; however, the understanding of their degradation mechanisms is still sketchy. Here we report on the nature and formation of organic/inorganic degradation products generated at low potential in a lithium-based cell using cyclic and linear carbonate-based electrolyte mixtures. The global formation mechanism of ethylene oxide oligomers produced from EC/DMC (1/1 w/w)-LiPF6 salt (1 M) electrolyte decomposition is proposed and then mimicked via chemical tests. Each intermediary product structure/formula/composition is identified by means of combined NMR, FTIR and high-resolution mass spectrometry (ESI-HRMS) analysis. The key role played by lithium methoxide as initiator of the electrolyte degradation is evidenced, but more importantly we isolated for the first time lithium methyl carbonate as a side product of the chemical formation of the ethylene oxide oligomers. The same degradation mechanism was found to hold for another cyclic and linear carbonate-based electrolyte, EC/DEC (1/1 w/w)-LiPF6 salt (1 M). Such findings have important implications for the choice of chemical additives for developing highly performing electrolytes.

  9. Multi-step usage of in vivo models during rational drug design and discovery.

    PubMed

    Williams, Charles H; Hong, Charles C

    2011-01-01

    In this article we propose a systematic development method for rational drug design while reviewing paradigms in industry and emerging techniques and technologies in the field. Although the process of drug development today has been accelerated by the emergence of computational methodologies, it remains a herculean challenge requiring exorbitant resources, and it often fails to yield clinically viable results. The current paradigm of target-based drug design is often misguided and tends to yield compounds that have poor absorption, distribution, metabolism, excretion, and toxicology (ADMET) properties. Therefore, an in vivo organism-based approach allowing for a multidisciplinary inquiry into potent and selective molecules is an excellent place to begin rational drug design. We will review how organisms like the zebrafish and Caenorhabditis elegans can not only be starting points, but can be used at various steps of the drug development process, from target identification to pre-clinical trial models. This systems biology-based approach, paired with the power of computational biology, genetics, and developmental biology, provides a methodological framework to avoid the pitfalls of traditional target-based drug design.

  10. Successive magnetic transitions and multi-step magnetization in GdBC

    NASA Astrophysics Data System (ADS)

    Matsumoto, Akinori; Muramoto, Akihiro; Noguchi, Satoru

    2003-05-01

    We report the results of magnetization measurements on a GdBC single crystal using a pulsed-magnet system up to 30 T and a SQUID magnetometer up to 5 T. The magnetization for the b-axis at 4.2 K shows three steps at 1, 5 and 15 T, becoming saturated above 23 T. The saturation moment is almost 7 μB/Gd. The temperature dependence of the step fields is obtained for all axes. These results imply that GdBC undergoes successive antiferromagnetic transitions with complex magnetic structures in spite of the simple spin system of Gd3+.

  11. Multi-step shot noise spectrum induced by a local large spin

    NASA Astrophysics Data System (ADS)

    Niu, Peng-Bin; Shi, Yun-Long; Sun, Zhu; Nie, Yi-Hang

    2015-12-01

    We use the non-equilibrium Green's function method to analyze the shot noise spectrum of an artificial single-molecule magnet (ASMM) model in the strong spin-orbit coupling limit in the sequential tunneling regime, mainly focusing on the effects of the local large spin. In the linear response regime, the shot noise shows 2S + 1 peaks and is strongly spin-dependent. In the nonlinear response regime, one can observe 2S + 1 steps in the shot noise and Fano factor. In these steps one can see a significant enhancement effect due to the spin-dependent multi-channel process of the local large spin, which reduces electron correlations. Project supported by the National Natural Science Foundation of China (Grant Nos. 11504210, 11504211, 11504212, 11274207, 11274208, 11174115, and 11325417), the Key Program of the Ministry of Education of China (Grant No. 212018), the Scientific and Technological Project of Shanxi Province, China (Grant No. 2015031002-2), the Natural Science Foundation of Shanxi Province, China (Grant Nos. 2013011007-2 and 2013021010-5), and the Outstanding Innovative Teams of Higher Learning Institutions of Shanxi Province, China.

  12. Disintegration of protein microbubbles in presence of acid and surfactants: a multi-step process.

    PubMed

    Rovers, Tijs A M; Sala, Guido; van der Linden, Erik; Meinders, Marcel B J

    2015-08-28

    The stability of protein microbubbles against addition of acid or surfactants was investigated. When these compounds were added, the microbubbles first released the encapsulated air. Subsequently, the protein shell completely disintegrated into nanometer-sized particles. The decrease in the number of intact microbubbles could be well described with the Weibull distribution. This distribution is based on two parameters, which suggests that two phenomena are responsible for the fracture of the microbubble shell. The microbubble shell is first weakened. Subsequently, the weakened protein shell fractures randomly. The probability of fracture turned out to be exponentially proportional to the concentration of acid and surfactant. A higher decay rate and a lower average breaking time were observed at higher acid or surfactant concentrations. For different surfactants, different decay rates were observed. The fact that the microbubble shell was ultimately disintegrated into nanometer-sized particles upon addition of acid or surfactants indicates that the interactions in the shell are non-covalent and most probably hydrophobic. After acid addition, the time at which the complete disintegration of the shell was observed coincided with the time of complete microbubble decay (release of air), while in the case of surfactant addition, there was a significant time gap between complete microbubble decay and complete shell disintegration.
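
    A minimal version of the Weibull decay analysis, with synthetic data in place of the measured microbubble counts, might read:

```python
import math
import numpy as np
from scipy.optimize import curve_fit

def weibull_survival(t, lam, k):
    """Fraction of microbubbles still intact at time t (two-parameter Weibull)."""
    return np.exp(-(t / lam) ** k)

# Synthetic counts standing in for the measured decay after surfactant addition.
t = np.linspace(0.1, 60.0, 30)
rng = np.random.default_rng(3)
frac_intact = weibull_survival(t, 20.0, 2.5) + rng.normal(0, 0.01, t.size)

(lam, k), _ = curve_fit(weibull_survival, t, frac_intact, p0=(10.0, 1.0))
mean_break = lam * math.gamma(1.0 + 1.0 / k)       # Weibull mean breaking time
print(f"scale = {lam:.1f}, shape = {k:.2f}, mean breaking time = {mean_break:.1f}")
```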

  13. Multi-step EMG Classification Algorithm for Human-Computer Interaction

    NASA Astrophysics Data System (ADS)

    Ren, Peng; Barreto, Armando; Adjouadi, Malek

    A three-electrode human-computer interaction system, based on digital processing of the Electromyogram (EMG) signal, is presented. This system can effectively help disabled individuals paralyzed from the neck down to interact with computers or communicate with people through computers using point-and-click graphic interfaces. The three electrodes are placed on the right frontalis, the left temporalis and the right temporalis muscles in the head, respectively. The signal processing algorithm used translates the EMG signals during five kinds of facial movements (left jaw clenching, right jaw clenching, eyebrows up, eyebrows down, simultaneous left & right jaw clenching) into five corresponding types of cursor movements (left, right, up, down and left-click), to provide basic mouse control. The classification strategy is based on three principles: the EMG energy of one channel is typically larger than the others during one specific muscle contraction; the spectral characteristics of the EMG signals produced by the frontalis and temporalis muscles during different movements are different; the EMG signals from adjacent channels typically have correlated energy profiles. The algorithm is evaluated on 20 pre-recorded EMG signal sets, using Matlab simulations. The results show that this method provides improvements and is more robust than other previous approaches.
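
    A toy version of the first classification principle (dominant channel energy), with invented channel names, threshold, and synthetic data, might look like:

```python
import numpy as np

# Invented channel names, threshold, and synthetic data; real use would add
# the spectral and cross-channel correlation features described above.
CHANNELS = ("right_frontalis", "left_temporalis", "right_temporalis")

def classify(window, ratio=1.5):
    """Label a window (n_samples x 3) by its energy-dominant channel."""
    energy = np.mean(np.square(window), axis=0)       # mean-square per channel
    order = np.argsort(energy)[::-1]
    if energy[order[0]] > ratio * energy[order[1]]:
        return CHANNELS[order[0]] + "_dominant"       # single-muscle movement
    return "combined_or_rest"                         # e.g. both temporalis active

rng = np.random.default_rng(7)
window = rng.normal(0.0, [0.2, 1.0, 0.2], size=(256, 3))  # left temporalis active
print(classify(window))
```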

  14. A multi-step model for the origin of E3 (enstatite) chondrites

    NASA Astrophysics Data System (ADS)

    Hutson, Melinda; Ruzicka, Alex

    2000-05-01

    It appears that the mineralogy and chemical properties of type 3 enstatite chondrites could have been established by fractionation processes (removal of a refractory component, and depletion of water) in the solar nebula, and by equilibration with nebular gas at low-to-intermediate temperatures (~700-950 K). We describe a model for the origin of type 3 enstatite chondrites that for the first time can simultaneously account for the mineral abundances, bulk chemistry, and phase compositions of these chondrites, by the operation of plausible processes in the solar nebula. This model, which assumes a representative nebular gas pressure of 10^-5 bar, entails three steps: (1) initial removal of 56% of the equilibrium condensed phases in a system of solar composition at 1270 K; (2) an average loss of 80-85% water vapor in the remaining gas; and (3) two different closure temperatures for the condensed phases. The first step involves a "refractory-element fractionation" and is needed to account for the overall major-element composition of enstatite chondrites, assuming an initial system with a solar composition. The second step, water-vapor depletion, is needed to stabilize Si-bearing metal, oldhamite, and niningerite, which are characteristic minerals of the enstatite chondrites. Variations in closure temperatures are suggested by the way in which the bulk chemistry and mineral assemblages of predicted condensates change with temperature, and how these parameters correlate with the observations of enstatite chondrites. In general, most phases in type 3 enstatite chondrites appear to have ceased equilibrating with nebular gas at ~900-950 K, except for Fe-metal, which continued to partially react with nebular gas to temperatures as low as ~700 K.

  15. Enhancing multi-step quantum state tomography by PhaseLift

    NASA Astrophysics Data System (ADS)

    Lu, Yiping; Zhao, Qing

    2017-09-01

    Multi-photon systems have been studied by many groups; however, the biggest challenge faced is that the number of copies of an unknown state is limited and far from sufficient for detecting quantum entanglement. The difficulty of preparing copies of the state is even more serious for quantum state tomography. One possible way to solve this problem is to use adaptive quantum state tomography, which obtains a preliminary density matrix in a first step and revises it in a second step. In order to improve the performance of adaptive quantum state tomography, we develop a new distribution scheme of samples and extend the procedure to three steps, that is, we correct the estimate once again based on the density matrix obtained in the traditional adaptive quantum state tomography. Our numerical results show that the mean square error of the density matrix reconstructed by our new method is improved from the level of 10^-4 to 10^-9 for several tested states. In addition, PhaseLift is also applied to reduce the required storage space of the measurement operators.

  16. Multi-step approach in a complex case of Cushing's syndrome and medullary thyroid carcinoma.

    PubMed

    Parenti, G; Nassi, R; Silvestri, S; Bianchi, S; Valeri, A; Manca, G; Mangiafico, S; Ammannati, F; Serio, M; Mannelli, M; Peri, A

    2006-02-01

    The diagnosis of Cushing's syndrome (CS) may sometimes be cumbersome. In particular, in ACTH-dependent CS it may be difficult to distinguish between the presence of an ACTH-secreting pituitary adenoma and ectopic ACTH and/or CRH secretion. In such instances, the etiology of CS may remain unknown despite an extensive diagnostic workup, and the best therapeutic option for each patient has to be determined. We report here the case of a 54-yr-old man affected by ACTH-dependent CS in association with a left adrenal adenoma and medullary thyroid carcinoma (MTC). He presented with clinical features and laboratory indexes of hypercortisolism associated with elevated levels of calcitonin. Ectopic CS due to MTC was reported previously. In our case hypercortisolism persisted after surgical treatment of MTC. A thorough diagnostic assessment was performed in order to define the etiology of CS. He was subjected to basal and dynamic hormonal evaluation, including bilateral inferior petrosal sinus sampling. Extensive imaging evaluation was also performed. Overall, the laboratory data together with the results of radiological procedures suggested that CS might be due to inappropriate CRH secretion. However, the source of CRH secretion in this patient remained unknown. It was then decided to remove the left adenomatous adrenal gland. Cortisol level fell and has remained within the normal range nine months after surgery. This case well depicts the complexity of the diagnostic workup that is sometimes needed to correctly diagnose and treat CS, and suggests that monolateral adrenalectomy may represent, at least temporarily, a reasonable therapeutic option in occult ACTH-dependent hypercortisolism.

  17. A multi-step transmission electron microscopy sample preparation technique for cracked, heavily damaged, brittle materials.

    PubMed

    Weiss Brennan, Claire V; Walck, Scott D; Swab, Jeffrey J

    2014-12-01

    A new technique for the preparation of heavily cracked, heavily damaged, brittle materials for examination in a transmission electron microscope (TEM) is described in detail. In this study, cross-sectional TEM samples were prepared from indented silicon carbide (SiC) bulk ceramics, although this technique could also be applied to other brittle and/or multiphase materials. During TEM sample preparation, milling-induced damage must be minimized, since in studying deformation mechanisms, it would be difficult to distinguish deformation-induced cracking from cracking occurring due to the sample preparation. The samples were prepared using a site-specific, two-step ion milling sequence accompanied by epoxy vacuum infiltration into the cracks. This technique allows the heavily cracked, brittle ceramic material to stay intact during sample preparation and also helps preserve the true microstructure of the cracked area underneath the indent. Some preliminary TEM results are given and discussed with regard to deformation studies in ceramic materials. This sample preparation technique could be applied to other cracked and/or heavily damaged materials, including geological materials, archaeological materials, fatigued materials, and corrosion samples.

  18. Optimal Battery Charging, Part 1: Minimizing Time-to-Charge, Energy Loss, and Temperature Rise for OCV-Resistance Battery Model

    DTIC Science & Technology

    2015-02-18

    Ni/MH batteries, J. Power Sources (September 2004) 180-185. [21] T. Ikeya, N. Sawada, S. Takagi, J. Murakami, K. Kobayashi, et al., Multi-step constant...1998) 101-107. [22] T. Ikeya, N. Sawada, J. Murakami, K. Kobayashi, et al., Multi-step constant-current charging method for an electric vehicle

  19. Discovery of optimal zeolites for challenging separations and chemical conversions through predictive materials modeling

    NASA Astrophysics Data System (ADS)

    Siepmann, J. Ilja; Bai, Peng; Tsapatsis, Michael; Knight, Chris; Deem, Michael W.

    2015-03-01

    Zeolites play numerous important roles in modern petroleum refineries and have the potential to advance the production of fuels and chemical feedstocks from renewable resources. The performance of a zeolite as separation medium and catalyst depends on its framework structure and the type or location of active sites. To date, 213 framework types have been synthesized and >330000 thermodynamically accessible zeolite structures have been predicted. Hence, identification of optimal zeolites for a given application from the large pool of candidate structures is attractive for accelerating the pace of materials discovery. Here we identify, through a large-scale, multi-step computational screening process, promising zeolite structures for two energy-related applications: the purification of ethanol beyond the ethanol/water azeotropic concentration in a single separation step from fermentation broths and the hydroisomerization of alkanes with 18-30 carbon atoms encountered in petroleum refining. These results demonstrate that predictive modeling and data-driven science can now be applied to solve some of the most challenging separation problems involving highly non-ideal mixtures and highly articulated compounds. Financial support from the Department of Energy Office of Basic Energy Sciences, Division of Chemical Sciences, Geosciences and Biosciences under Award DE-FG02-12ER16362 is gratefully acknowledged.

  1. An Optimal Cure Process to Minimize Residual Void and Optical Birefringence for a LED Silicone Encapsulant

    PubMed Central

    Song, Min-Jae; Kim, Kwon-Hee; Yoon, Gil-Sang; Park, Hyung-Pil; Kim, Heung-Kyu

    2014-01-01

    Silicone resin has recently attracted great attention as a high-power Light Emitting Diode (LED) encapsulant material due to its good thermal stability and optical properties. In general, the abrupt curing reaction of the silicone resin for the LED encapsulant during the curing process induces a reduction in the mechanical and optical properties of the LED product due to the generation of residual voids and moisture, birefringence, and residual stress in the final formation. In order to prevent such an abrupt curing reaction, the reduction of residual voids and birefringence of the silicone resin was observed experimentally by introducing multi-step cure processes, while the residual stress was calculated by conducting a finite element analysis that coupled the heat of the cure reaction and cure shrinkage. The results of experiment and analysis showed that the three-step curing process reduced the residual voids, birefringence, and residual stress the most, with similar tendencies. Through such experimentation and finite element analysis, the study was able to confirm that optimization of the LED encapsulant packaging process is possible. PMID:28788666

  2. Optimization of composite structures

    NASA Technical Reports Server (NTRS)

    Stroud, W. J.

    1982-01-01

    Structural optimization is introduced and examples which illustrate potential problems associated with optimized structures are presented. Optimized structures may have very low load-carrying ability for an off-design condition. They tend to have multiple modes of failure occurring simultaneously and can, therefore, be sensitive to imperfections. Because composite materials provide more design variables than do metals, they allow for more refined tailoring and more extensive optimization. As a result, optimized composite structures can be especially susceptible to these problems.

  3. Particle Swarm Optimization Toolbox

    NASA Technical Reports Server (NTRS)

    Grant, Michael J.

    2010-01-01

    The Particle Swarm Optimization Toolbox is a library of evolutionary optimization tools developed in the MATLAB environment. The algorithms contained in the library include a genetic algorithm (GA), a single-objective particle swarm optimizer (SOPSO), and a multi-objective particle swarm optimizer (MOPSO). Development focused on both the SOPSO and MOPSO. A GA was included mainly for comparison purposes, and the particle swarm optimizers appeared to perform better for a wide variety of optimization problems. All algorithms are capable of performing unconstrained and constrained optimization. The particle swarm optimizers are capable of performing single and multi-objective optimization. The SOPSO and MOPSO algorithms are based on swarming theory and bird-flocking patterns to search the trade space for the optimal solution or optimal trade in competing objectives. The MOPSO generates Pareto fronts for objectives that are in competition. A GA, based on Darwin evolutionary theory, is also included in the library. The GA consists of individuals that form a population in the design space. The population mates to form offspring at new locations in the design space. These offspring contain traits from both of the parents. The algorithm is based on this combination of traits from parents to hopefully provide an improved solution than either of the original parents. As the algorithm progresses, individuals that hold these optimal traits will emerge as the optimal solutions. Due to the generic design of all optimization algorithms, each algorithm interfaces with a user-supplied objective function. This function serves as a "black-box" to the optimizers in which the only purpose of this function is to evaluate solutions provided by the optimizers. Hence, the user-supplied function can be numerical simulations, analytical functions, etc., since the specific detail of this function is of no concern to the optimizer. These algorithms were originally developed to support entry
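
    The toolbox itself is MATLAB-based and is not reproduced here; as a language-neutral illustration of the core single-objective update (inertia plus cognitive and social pulls), a minimal global-best PSO might read:

```python
import numpy as np

def pso(f, lo, hi, n_particles=30, n_iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal single-objective PSO with a global-best topology."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(lo, hi, (n_particles, lo.size))    # positions
    v = np.zeros_like(x)                               # velocities
    pbest = x.copy()
    pbest_val = np.apply_along_axis(f, 1, x)
    g = pbest[np.argmin(pbest_val)].copy()             # global best
    for _ in range(n_iters):
        r1, r2 = rng.random((2, *x.shape))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        val = np.apply_along_axis(f, 1, x)
        better = val < pbest_val
        pbest[better], pbest_val[better] = x[better], val[better]
        g = pbest[np.argmin(pbest_val)].copy()
    return g, pbest_val.min()

# The objective is an opaque "black-box" callable, as described above.
best_x, best_f = pso(lambda z: float(np.sum(z * z)), np.full(5, -5.0), np.full(5, 5.0))
print(best_x, best_f)
```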

  4. Aristos Optimization Package

    SciTech Connect

    Ridzal, Danis

    2007-03-01

    Aristos is a Trilinos package for nonlinear continuous optimization, based on full-space sequential quadratic programming (SQP) methods. Aristos is specifically designed for the solution of large-scale constrained optimization problems in which the linearized constraint equations require iterative (i.e. inexact) linear solver techniques. Aristos' unique feature is an efficient handling of inexactness in linear system solves. Aristos currently supports the solution of equality-constrained convex and nonconvex optimization problems. It has been used successfully in the area of PDE-constrained optimization, for the solution of nonlinear optimal control, optimal design, and inverse problems.

  5. Multidisciplinary Optimization for Aerospace Using Genetic Optimization

    NASA Technical Reports Server (NTRS)

    Pak, Chan-gi; Hahn, Edward E.; Herrera, Claudia Y.

    2007-01-01

    In support of the ARMD guidelines, NASA's Dryden Flight Research Center is developing a multidisciplinary design and optimization tool. This tool will leverage existing tools and practices, and allow the easy integration and adoption of new state-of-the-art software. Optimization has made its way into many mainstream applications. For example, NASTRAN(TradeMark) has its solution sequence 200 for Design Optimization, and MATLAB(TradeMark) has an Optimization Toolbox. Other packages, such as the ZAERO(TradeMark) aeroelastic panel code and the CFL3D(TradeMark) Navier-Stokes solver, have no built-in optimizer. The goal of the tool development is to generate a central executive capable of using disparate software packages in a cross-platform network environment so as to quickly perform optimization and design tasks in a cohesive, streamlined manner. A provided figure (Figure 1) shows a typical set of tools and their relation to the central executive. Optimization can take place within each individual tool, or in a loop between the executive and the tool, or both.

  6. Optimal probabilistic search

    SciTech Connect

    Lokutsievskiy, Lev V

    2011-05-31

    This paper is concerned with the optimal search for an object at rest whose exact position in n-dimensional space is unknown. A necessary condition for optimality of a trajectory is obtained. An explicit form of a differential equation for an optimal trajectory is found while searching over R-strongly convex sets. An existence theorem is also established. Bibliography: 8 titles.

  7. Technical Note: A novel leaf sequencing optimization algorithm which considers previous underdose and overdose events for MLC tracking radiotherapy

    SciTech Connect

    Wisotzky, Eric E-mail: eric.wisotzky@ipk.fraunhofer.de; O’Brien, Ricky; Keall, Paul J.

    2016-01-15

    Purpose: Multileaf collimator (MLC) tracking radiotherapy is complex as the beam pattern needs to be modified due to the planned intensity modulation as well as the real-time target motion. The target motion cannot be planned; therefore, the modified beam pattern differs from the original plan and the MLC sequence needs to be recomputed online. Current MLC tracking algorithms use a greedy heuristic in that they optimize for a given time, but ignore past errors. To overcome this problem, the authors have developed and improved an algorithm that minimizes large underdose and overdose regions. Additionally, previous underdose and overdose events are taken into account to avoid regions with high quantity of dose events. Methods: The authors improved the existing MLC motion control algorithm by introducing a cumulative underdose/overdose map. This map represents the actual projection of the planned tumor shape and logs occurring dose events at each specific regions. These events have an impact on the dose cost calculation and reduce recurrence of dose events at each region. The authors studied the improvement of the new temporal optimization algorithm in terms of the L1-norm minimization of the sum of overdose and underdose compared to not accounting for previous dose events. For evaluation, the authors simulated the delivery of 5 conformal and 14 intensity-modulated radiotherapy (IMRT)-plans with 7 3D patient measured tumor motion traces. Results: Simulations with conformal shapes showed an improvement of L1-norm up to 8.5% after 100 MLC modification steps. Experiments showed comparable improvements with the same type of treatment plans. Conclusions: A novel leaf sequencing optimization algorithm which considers previous dose events for MLC tracking radiotherapy has been developed and investigated. Reductions in underdose/overdose are observed for conformal and IMRT delivery.
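
    The authors' algorithm is not reproduced here; the 1-D toy below only illustrates the cumulative-map idea: each step picks a reachable leaf opening whose underdose/overdose cost is inflated in regions that have already accumulated dose events (geometry, motion, and weights are invented):

```python
import numpy as np

WIDTH, STEPS, MAX_LEAF_TRAVEL = 40, 50, 2
cum_map = np.zeros(WIDTH)                    # cumulative dose-event map

def planned_aperture(step):
    left = 12 + int(3 * np.sin(step / 6.0))  # hypothetical planned shape + motion
    return left, left + 10

leaf_l, leaf_r = planned_aperture(0)
for step in range(STEPS):
    tgt_l, tgt_r = planned_aperture(step)
    target = np.zeros(WIDTH); target[tgt_l:tgt_r] = 1.0
    best, best_cost = None, np.inf
    # Leaves can move at most MAX_LEAF_TRAVEL bixels per step.
    for l in range(leaf_l - MAX_LEAF_TRAVEL, leaf_l + MAX_LEAF_TRAVEL + 1):
        for r in range(leaf_r - MAX_LEAF_TRAVEL, leaf_r + MAX_LEAF_TRAVEL + 1):
            if not (0 <= l < r <= WIDTH):
                continue
            delivered = np.zeros(WIDTH); delivered[l:r] = 1.0
            err = np.abs(target - delivered)          # underdose + overdose
            cost = np.sum(err * (1.0 + cum_map))      # penalize repeat-event sites
            if cost < best_cost:
                best, best_cost = (l, r), cost
    leaf_l, leaf_r = best
    delivered = np.zeros(WIDTH); delivered[leaf_l:leaf_r] = 1.0
    cum_map += np.abs(target - delivered)             # log this step's dose events

print("total L1 error:", cum_map.sum())
```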

  8. Global optimal eBURST analysis of multilocus typing data using a graphic matroid approach.

    PubMed

    Francisco, Alexandre P; Bugalho, Miguel; Ramirez, Mário; Carriço, João A

    2009-05-18

    Multilocus Sequence Typing (MLST) is a frequently used typing method for the analysis of the clonal relationships among strains of several clinically relevant microbial species. MLST is based on the sequence of housekeeping genes that result in each strain having a distinct numerical allelic profile, which is abbreviated to a unique identifier: the sequence type (ST). The relatedness between two strains can then be inferred by the differences between allelic profiles. For a more comprehensive analysis of the possible patterns of evolutionary descent, a set of rules was proposed and implemented in the eBURST algorithm. These rules allow the division of a data set into several clusters of related strains, dubbed clonal complexes, by implementing a simple model of clonal expansion and diversification. Within each clonal complex, the rules identify which links between STs correspond to the most probable pattern of descent. However, the eBURST algorithm is not globally optimized, which can result in links within the clonal complexes that violate the rules proposed. Here, we present a globally optimized implementation of the eBURST algorithm - goeBURST. The search for a global optimal solution led to the formalization of the problem as a graphic matroid, for which greedy algorithms that provide an optimal solution exist. Several public data sets of MLST data were tested and differences between the two implementations were found and are discussed for five bacterial species: Enterococcus faecium, Streptococcus pneumoniae, Burkholderia pseudomallei, Campylobacter jejuni and Neisseria spp. A novel feature implemented in goeBURST is the representation of the level of tiebreak rule reached before deciding if a link should be drawn, which can be used to visually evaluate the reliability of the represented hypothetical pattern of descent. goeBURST is a globally optimized implementation of the eBURST algorithm that identifies alternative patterns of descent for several
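
    The graphic matroid connection means a greedy edge-by-edge construction (Kruskal's algorithm) is provably optimal. A minimal sketch on toy allelic profiles, with goeBURST's full tiebreak rules reduced to a simple deterministic sort, might read:

```python
from itertools import combinations

# Toy allelic profiles (ST -> 7-locus profile); the distance between two STs
# is the number of differing loci.  goeBURST's real tiebreaks (SLV/DLV
# counts, ST frequency, ID) are reduced here to a deterministic ordering.
profiles = {
    1: (1, 1, 1, 1, 1, 1, 1),
    2: (1, 1, 1, 1, 1, 1, 2),
    3: (1, 1, 1, 1, 1, 2, 2),
    4: (1, 3, 1, 1, 1, 1, 1),
    5: (2, 3, 4, 1, 1, 1, 1),
}

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

parent = {st: st for st in profiles}
def find(x):
    while parent[x] != x:
        parent[x] = parent[parent[x]]        # path compression
        x = parent[x]
    return x

edges = sorted((hamming(profiles[a], profiles[b]), a, b)
               for a, b in combinations(profiles, 2))
forest = []
for w, a, b in edges:                        # Kruskal: greedy on a graphic matroid
    ra, rb = find(a), find(b)
    if ra != rb:
        parent[ra] = rb
        forest.append((a, b, w))
print(forest)                                # optimal spanning-forest links
```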

  9. Aircraft configuration optimization including optimized flight profiles

    NASA Technical Reports Server (NTRS)

    Mccullers, L. A.

    1984-01-01

    The Flight Optimization System (FLOPS) is an aircraft configuration optimization program developed for use in conceptual design of new aircraft and in the assessment of the impact of advanced technology. The modular makeup of the program is illustrated. It contains modules for preliminary weights estimation, preliminary aerodynamics, detailed mission performance, takeoff and landing, and execution control. An optimization module is used to drive the overall design and in defining optimized profiles in the mission performance. Propulsion data, usually received from engine manufacturers, are used in both the mission performance and the takeoff and landing analyses. Although executed as a single in-core program, the modules are stored separately so that the user may select the appropriate modules (e.g., fighter weights versus transport weights) or leave out modules that are not needed.

  10. Hope, optimism and delusion

    PubMed Central

    McGuire-Snieckus, Rebecca

    2014-01-01

    Optimism is generally accepted by psychiatrists, psychologists and other caring professionals as a feature of mental health. Interventions typically rely on cognitive-behavioural tools to encourage individuals to ‘stop negative thought cycles’ and to ‘challenge unhelpful thoughts’. However, evidence suggests that most individuals have persistent biases of optimism and that excessive optimism is not conducive to mental health. How helpful is it to facilitate optimism in individuals who are likely to exhibit biases of optimism already? By locating the cause of distress at the individual level and ‘unhelpful’ cognitions, does this minimise wider systemic social and economic influences on mental health? PMID:25237497

  11. Optimization of computations

    SciTech Connect

    Mikhalevich, V.S.; Sergienko, I.V.; Zadiraka, V.K.; Babich, M.D.

    1994-11-01

    This article examines some topics of optimization of computations, which have been discussed at 25 seminar-schools and symposia organized by the V.M. Glushkov Institute of Cybernetics of the Ukrainian Academy of Sciences since 1969. We describe the main directions in the development of computational mathematics and present some of our own results that reflect a certain design conception of speed-optimal and accuracy-optimal (or nearly optimal) algorithms for various classes of problems, as well as a certain approach to optimization of computer computations.

  12. Integrated controls design optimization

    DOEpatents

    Lou, Xinsheng; Neuschaefer, Carl H.

    2015-09-01

    A control system (207) for optimizing a chemical looping process of a power plant includes an optimizer (420), an income algorithm (230), a cost algorithm (225), and chemical looping process models. The process models are used to predict the process outputs from process input variables. Some of the process input and output variables are related to the income of the plant, and some others are related to the cost of the plant operations. The income algorithm (230) provides an income input to the optimizer (420) based on a plurality of input parameters (215) of the power plant. The cost algorithm (225) provides a cost input to the optimizer (420) based on a plurality of output parameters (220) of the power plant. The optimizer (420) determines an optimized operating parameter solution based on at least one of the income input and the cost input, and supplies the optimized operating parameter solution to the power plant.

  13. Particle Swarm Optimization

    NASA Technical Reports Server (NTRS)

    Venter, Gerhard; Sobieszczanski-Sobieski Jaroslaw

    2002-01-01

    The purpose of this paper is to show how the search algorithm known as particle swarm optimization performs. Here, particle swarm optimization is applied to structural design problems, but the method has a much wider range of possible applications. The paper's new contributions are improvements to the particle swarm optimization algorithm and conclusions and recommendations as to the utility of the algorithm. Results of numerical experiments for both continuous and discrete applications are presented in the paper. The results indicate that the particle swarm optimization algorithm does locate the constrained minimum design in continuous applications with very good precision, albeit at a much higher computational cost than that of a typical gradient-based optimizer. However, the true potential of particle swarm optimization is primarily in applications with discrete and/or discontinuous functions and variables. Additionally, particle swarm optimization has the potential of efficient computation with very large numbers of concurrently operating processors.

  15. Supercomputer optimizations for stochastic optimal control applications

    NASA Technical Reports Server (NTRS)

    Chung, Siu-Leung; Hanson, Floyd B.; Xu, Huihuang

    1991-01-01

    Supercomputer optimizations for a computational method of solving stochastic, multibody, dynamic programming problems are presented. The computational method is valid for a general class of optimal control problems that are nonlinear, multibody dynamical systems, perturbed by general Markov noise in continuous time, i.e., nonsmooth Gaussian as well as jump Poisson random white noise. Optimization techniques for vector multiprocessors or vectorizing supercomputers include advanced data structures, loop restructuring, loop collapsing, blocking, and compiler directives. These advanced computing techniques and supercomputing hardware help alleviate Bellman's curse of dimensionality in dynamic programming computations, by permitting the solution of large multibody problems. Possible applications include lumped flight dynamics models for uncertain environments, such as large scale and background random aerospace fluctuations.
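
    As a toy illustration of the loop restructuring the abstract refers to, the sketch below vectorizes the Bellman backup of value iteration over all states and actions at once on a made-up finite Markov decision process; it is the NumPy analogue of the idea, not the paper's multibody solver.

    ```python
    import numpy as np

    # Hypothetical finite MDP: transition tensor P[a, s, s'] and reward R[a, s].
    rng = np.random.default_rng(1)
    S, A, gamma = 200, 4, 0.95
    P = rng.random((A, S, S)); P /= P.sum(axis=2, keepdims=True)
    R = rng.random((A, S))

    V = np.zeros(S)
    for _ in range(500):                      # value iteration
        Q = R + gamma * (P @ V)               # shape (A, S): every state at once
        V_new = Q.max(axis=0)                 # greedy Bellman backup
        if np.max(np.abs(V_new - V)) < 1e-8:
            break
        V = V_new
    policy = Q.argmax(axis=0)                 # greedy action per state
    ```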

  16. Greedy Learning of Graphical Models with Small Girth

    DTIC Science & Technology

    2013-01-01

    ...with the Department of Electrical and Computer Engineering, The University of Texas at Austin, USA. Emails: avik@utexas.edu, sanghavi@mail.utexas.edu ... 61, pp. 401-425, 1996. [7] A. Dobra, C. Hans, B. Jones, J. R. Nevins, G. Yao, and M. West, "Sparse graphical models for exploring gene expression data" ...

  17. Fitness landscapes, memetic algorithms, and greedy operators for graph bipartitioning.

    PubMed

    Merz, P; Freisleben, B

    2000-01-01

    The fitness landscape of the graph bipartitioning problem is investigated by performing a search space analysis for several types of graphs. The analysis shows that the structure of the search space is significantly different for the types of instances studied. Moreover, with increasing epistasis (the amount of gene interaction in the representation of a solution in an evolutionary algorithm), the number of local minima for one type of instance decreases and, thus, the search becomes easier. We suggest that characteristics other than high epistasis might have a greater influence on the hardness of a problem. To understand these characteristics, the notion of a dependency graph describing gene interactions is introduced. In particular, the local structure and the regularity of the dependency graph seem to be important for the performance of an algorithm, and in fact, algorithms that exploit these properties perform significantly better than those that do not. It will be shown that a simple hybrid multi-start local search exploiting locality in the structure of the graphs is able to find optimum or near-optimum solutions very quickly. However, if the problem size increases or the graphs become unstructured, a memetic algorithm (a genetic algorithm incorporating local search) is shown to be much more effective.
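
    A minimal sketch of the multi-start local search idea, assuming a balanced bipartition and first-improvement pairwise swaps with full cut recomputation; the paper's memetic algorithm adds a genetic layer on top of such a local search.

    ```python
    import random

    def cut_size(edges, side):
        """Number of edges crossing the bipartition."""
        return sum(side[u] != side[v] for u, v in edges)

    def multistart_local_search(n, edges, starts=10, seed=0):
        rng = random.Random(seed)
        best_side, best_cut = None, float("inf")
        for _ in range(starts):
            side = [0] * (n // 2) + [1] * (n - n // 2)
            rng.shuffle(side)                      # random balanced start
            cut = cut_size(edges, side)
            improved = True
            while improved:                        # greedy first-improvement swaps
                improved = False
                for u in range(n):
                    for v in range(u + 1, n):
                        if side[u] == side[v]:
                            continue
                        side[u], side[v] = side[v], side[u]
                        new_cut = cut_size(edges, side)
                        if new_cut < cut:
                            cut, improved = new_cut, True
                        else:
                            side[u], side[v] = side[v], side[u]  # undo the swap
            if cut < best_cut:
                best_side, best_cut = side[:], cut
        return best_side, best_cut

    # Tiny example: a 6-node ring has an optimal balanced cut of size 2.
    edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (5, 0)]
    side, cut = multistart_local_search(6, edges)
    ```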

  18. Face sketch synthesis via sparse representation-based greedy search.

    PubMed

    Zhang, Shengchuan; Gao, Xinbo; Wang, Nannan; Li, Jie; Zhang, Mingjin

    2015-08-01

    Face sketch synthesis has wide applications in digital entertainment and law enforcement. Although there is much research on face sketch synthesis, most existing algorithms cannot handle nonfacial factors, such as hair style, hairpins, and glasses, if these factors are absent from the training set. In addition, previous methods only work under well-controlled conditions and fail on images whose backgrounds and sizes differ from those of the training set. To this end, this paper presents a novel method that combines both the similarity between different image patches and prior knowledge to synthesize face sketches. Given training photo-sketch pairs, the proposed method learns a photo patch feature dictionary from the training photo patches and replaces the photo patches with their sparse coefficients during the searching process. For a test photo patch, we first obtain its sparse coefficient via the learnt dictionary and then search its nearest neighbors (candidate patches) among all training photo patches using the sparse coefficients. After purifying the nearest neighbors with prior knowledge, the final sketch corresponding to the test photo can be obtained by Bayesian inference. The contributions of this paper are as follows: 1) we relax the nearest neighbor search area from a local region to the whole image without much additional computation and 2) our method can produce nonfacial factors that are not contained in the training set, is robust against image backgrounds, and can even ignore the alignment and size of test photos. Our experimental results show that the proposed method outperforms several state-of-the-art methods in terms of perceptual and objective metrics.
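
    The following sketch illustrates only the search step, under strong simplifications: a random stand-in for the learnt patch dictionary, a hand-rolled orthogonal matching pursuit for the sparse coefficients, and cosine similarity between codes to rank candidate patches. The prior-knowledge purification and Bayesian inference stages are omitted.

    ```python
    import numpy as np

    def omp(D, y, k=5):
        """Orthogonal matching pursuit: k-sparse code of y over dictionary D
        (unit-norm atoms in columns)."""
        residual, idx = y.copy(), []
        for _ in range(k):
            j = int(np.argmax(np.abs(D.T @ residual)))   # most correlated atom
            if j not in idx:
                idx.append(j)
            coef, *_ = np.linalg.lstsq(D[:, idx], y, rcond=None)
            residual = y - D[:, idx] @ coef
        code = np.zeros(D.shape[1])
        code[idx] = coef
        return code

    # Hypothetical data: rows are vectorized photo patches; D stands in for a
    # learnt patch feature dictionary.
    rng = np.random.default_rng(0)
    train_patches = rng.random((500, 64))
    D = rng.random((64, 128))
    D /= np.linalg.norm(D, axis=0)                       # unit-norm atoms

    train_codes = np.array([omp(D, p) for p in train_patches])

    def nearest_candidates(test_patch, m=10):
        """Whole-image greedy search: rank all training patches by code similarity."""
        c = omp(D, test_patch)
        sims = train_codes @ c / (np.linalg.norm(train_codes, axis=1)
                                  * np.linalg.norm(c) + 1e-12)
        return np.argsort(-sims)[:m]                     # candidate patch indices
    ```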

  19. GPSR: Greedy Perimeter Stateless Routing for Wireless Networks

    DTIC Science & Technology

    2005-01-01

    ...use simulation parameters identical to a subset of those used by Broch et al. [4]. Our simulations are for networks of 50, 112, and 200 nodes with ... as they are the most demanding of a routing algorithm. Broch et al. also simulated 300-, 600-, and 900-second pause times, perhaps in large part ... sending nodes. Each CBR flow sends at 2 Kbps, and uses 64-byte packets. Broch et al. simulated a wider range of flow counts (10, 20, and 30 flows); we...
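
    The greedy mode of GPSR forwards a packet to the neighbor geographically closest to the destination and falls back to perimeter routing at a local maximum. A minimal sketch of that forwarding decision, with made-up coordinates:

    ```python
    import math

    def greedy_next_hop(node_pos, neighbors, dest):
        """Greedy geographic forwarding: hand the packet to the neighbor closest
        to the destination, or signal a local maximum (where GPSR switches to
        perimeter mode)."""
        dist = lambda a, b: math.hypot(a[0] - b[0], a[1] - b[1])
        if not neighbors:
            return None
        best = min(neighbors, key=lambda n: dist(neighbors[n], dest))
        if dist(neighbors[best], dest) >= dist(node_pos, dest):
            return None   # no neighbor makes progress: use perimeter routing
        return best

    # Node at the origin, three radio neighbors, destination at (10, 0).
    neighbors = {"a": (3, 1), "b": (2, -4), "c": (-1, 0)}
    print(greedy_next_hop((0, 0), neighbors, (10, 0)))   # -> 'a'
    ```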

  20. Optimization and optimal statistics in neuroscience

    NASA Astrophysics Data System (ADS)

    Brookings, Ted

    Complex systems have certain common properties, with power-law statistics being nearly ubiquitous. Despite this commonality, we show that a variety of mechanisms can be responsible for complexity, illustrated by the example of a lattice on a Cayley tree. Because of this, analysis must probe more deeply than merely looking for power laws; the details of the dynamics must be examined. We show how optimality, a frequently overlooked source of complexity, can produce typical features such as power laws, and describe inherent trade-offs in optimal systems, such as performance versus robustness to rare disturbances. When applied to biological systems such as the nervous system, optimality is particularly appropriate because so many systems have an identifiable purpose. We show that the "grid cells" in rats are extremely efficient in storing position information. Assuming the system to be optimal allows us to describe the number and organization of grid cells. Analyzing systems from an optimality perspective provides insights that permit description of features that would otherwise be difficult to observe. Likewise, careful analysis of complex systems requires diligent avoidance of assumptions that are unnecessary or unsupported. Attributing unwarranted meaning to ambiguous features, or assuming the existence of a priori constraints, may quickly lead to faulty results. By eschewing unwarranted and unnecessary assumptions about the distribution of neural activity and instead carefully integrating information from EEG and fMRI, we are able to dramatically improve the quality of source localization. Thus maintaining a watchful eye towards principles of optimality, while avoiding unnecessary statistical assumptions, is an effective theoretical approach to neuroscience.

  1. Search-based optimization.

    PubMed

    Wheeler, Ward C

    2003-08-01

    The problem of determining the minimum cost hypothetical ancestral sequences for a given cladogram is known to be NP-complete (Wang and Jiang, 1994). Traditionally, point estimations of hypothetical ancestral sequences have been used to gain heuristic, upper bounds on cladogram cost. These include procedures with such diverse approaches as non-additive optimization of multiple sequence alignment, direct optimization (Wheeler, 1996), and fixed-state character optimization (Wheeler, 1999). A method is proposed here which, by extending fixed-state character optimization, replaces the estimation process with a search. This form of optimization examines a diversity of potential state solutions for cost-efficient hypothetical ancestral sequences and can result in greatly more parsimonious cladograms. Additionally, such an approach can be applied to other NP-complete phylogenetic optimization problems such as genomic break-point analysis. © 2003 The Willi Hennig Society. Published by Elsevier Science (USA). All rights reserved.
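
    The dynamic-programming core of the fixed-state idea can be sketched compactly: given a fixed pool of candidate sequences and a cladogram, a Sankoff-style recursion scores the tree when every hypothetical ancestor must take one of the pool states. This is an illustration of the principle, not Wheeler's implementation; the search proposed in the paper would additionally vary the state pool itself.

    ```python
    def edit_dist(a, b):
        """Levenshtein distance between two sequences (rolling-array DP)."""
        m, n = len(a), len(b)
        d = list(range(n + 1))
        for i in range(1, m + 1):
            prev, d[0] = d[0], i
            for j in range(1, n + 1):
                prev, d[j] = d[j], min(d[j] + 1, d[j - 1] + 1,
                                       prev + (a[i - 1] != b[j - 1]))
        return d[n]

    def fixed_state_cost(tree, leaf_seq, states):
        """Minimum cladogram cost when every internal node must take one of the
        fixed candidate `states`. Nodes are ('leaf', name) or ('node', left, right)."""
        k = len(states)
        cost = [[edit_dist(a, b) for b in states] for a in states]
        def down(node):
            if node[0] == 'leaf':
                seq = leaf_seq[node[1]]
                return [0 if s == seq else float('inf') for s in states]
            lvec, rvec = down(node[1]), down(node[2])
            return [min(lvec[j] + cost[i][j] for j in range(k)) +
                    min(rvec[j] + cost[i][j] for j in range(k)) for i in range(k)]
        return min(down(tree))

    states = ["ACGT", "ACGA", "AGGA"]          # fixed candidate state pool
    leaves = {"t1": "ACGT", "t2": "ACGA", "t3": "AGGA"}
    tree = ('node', ('leaf', 't1'), ('node', ('leaf', 't2'), ('leaf', 't3')))
    print(fixed_state_cost(tree, leaves, states))  # -> 2
    ```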

  2. Energy optimization system

    DOEpatents

    Zhou, Zhi; de Bedout, Juan Manuel; Kern, John Michael; Biyik, Emrah; Chandra, Ramu Sharat

    2013-01-22

    A system for optimizing customer utility usage in a utility network of customer sites, each having one or more utility devices, where customer site information is communicated between each of the customer sites and an optimization server having software for optimizing customer utility usage over one or more networks, including private and public networks. A customer site model for each of the customer sites is generated based upon the customer site information, and the customer utility usage is optimized based upon the customer site information and the customer site model. The optimization server can be hosted by an external source or within the customer site. In addition, the optimization processing can be partitioned between the customer site and an external source.

  3. Search-based optimization

    NASA Technical Reports Server (NTRS)

    Wheeler, Ward C.

    2003-01-01

    The problem of determining the minimum cost hypothetical ancestral sequences for a given cladogram is known to be NP-complete (Wang and Jiang, 1994). Traditionally, point estimations of hypothetical ancestral sequences have been used to gain heuristic, upper bounds on cladogram cost. These include procedures with such diverse approaches as non-additive optimization of multiple sequence alignment, direct optimization (Wheeler, 1996), and fixed-state character optimization (Wheeler, 1999). A method is proposed here which, by extending fixed-state character optimization, replaces the estimation process with a search. This form of optimization examines a diversity of potential state solutions for cost-efficient hypothetical ancestral sequences and can result in greatly more parsimonious cladograms. Additionally, such an approach can be applied to other NP-complete phylogenetic optimization problems such as genomic break-point analysis. © 2003 The Willi Hennig Society. Published by Elsevier Science (USA). All rights reserved.

  4. Analysis of flavonoids from lotus (Nelumbo nucifera) leaves using high performance liquid chromatography/photodiode array detector tandem electrospray ionization mass spectrometry and an extraction method optimized by orthogonal design.

    PubMed

    Chen, Sha; Wu, Ben-Hong; Fang, Jin-Bao; Liu, Yan-Ling; Zhang, Hao-Hao; Fang, Lin-Chuan; Guan, Le; Li, Shao-Hua

    2012-03-02

    The extraction protocol for flavonoids from lotus (Nelumbo nucifera) leaves was optimized through an orthogonal design. Among solvent, solvent:tissue ratio, extraction time, and temperature, the solvent was the most important factor. The highest yield of flavonoids was achieved with 70% methanol-water and a solvent:tissue ratio of 30:1 at 4 °C for 36 h. The optimized analytical method for HPLC was a multi-step gradient elution using 0.5% formic acid (A) and CH₃CN containing 0.1% formic acid (B), at a flow rate of 0.6 mL/min. Using this optimized method, thirteen flavonoids were simultaneously separated and identified by high performance liquid chromatography coupled with photodiode array detection/electrospray ionization mass spectrometry (HPLC/DAD/ESI-MS(n)). Five of the bioactive compounds are reported in lotus leaves for the first time. The flavonoid content of the leaves of three representative cultivars was assessed under the optimized extraction and HPLC analytical conditions, and the seed-producing cultivar 'Baijianlian' had the highest flavonoid content compared with rhizome-producing 'Zhimahuoulian' and wild floral cultivar 'Honglian'. Copyright © 2012 Elsevier B.V. All rights reserved.

  5. Homotopy optimization methods for global optimization.

    SciTech Connect

    Dunlavy, Daniel M.; O'Leary, Dianne P. (University of Maryland, College Park, MD)

    2005-12-01

    We define a new method for global optimization, the Homotopy Optimization Method (HOM). This method differs from previous homotopy and continuation methods in that its aim is to find a minimizer for each of a set of values of the homotopy parameter, rather than to follow a path of minimizers. We define a second method, called HOPE, by allowing HOM to follow an ensemble of points obtained by perturbation of previous ones. We relate this new method to standard methods such as simulated annealing and show under what circumstances it is superior. We present results of extensive numerical experiments demonstrating performance of HOM and HOPE.
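
    A minimal sketch of the HOM idea as described: define h(x, t) = (1 - t) g(x) + t f(x) for an easy surrogate g and the hard target f, then find a minimizer for each value of the homotopy parameter t, warm-starting from the previous minimizer. The functions and schedule below are illustrative; HOPE would track an ensemble of perturbed points instead of a single one.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    f = lambda x: float(np.sum(x**2) + 10 * np.sum(1 - np.cos(2 * np.pi * x)))  # hard, multimodal
    g = lambda x: float(np.sum(x**2))                                           # easy convex surrogate

    x = np.full(4, 2.5)                        # starting point for h(., 0)
    for t in np.linspace(0.0, 1.0, 21):        # deform g into f step by step
        h = lambda x, t=t: (1 - t) * g(x) + t * f(x)
        x = minimize(h, x, method="BFGS").x    # warm start from previous minimizer
    print(x, f(x))                             # near the global minimum at 0
    ```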

  6. Optimal outpatient appointment scheduling.

    PubMed

    Kaandorp, Guido C; Koole, Ger

    2007-09-01

    In this paper optimal outpatient appointment scheduling is studied. A local search procedure is derived that converges to the optimal schedule with a weighted average of expected waiting times of patients, idle time of the doctor and tardiness (lateness) as objective. No-shows are allowed to happen. For certain combinations of parameters the well-known Bailey-Welch rule is found to be the optimal appointment schedule.
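
    A minimal sketch of such a local search, assuming a simplified clinic model (exponential service times, independent no-shows, a 5-minute slot grid) and a Monte-Carlo estimate of the weighted objective; the paper's exact procedure and its convergence guarantee are not reproduced here.

    ```python
    import random

    def objective(schedule, day_end=240, p_show=0.9, mean_service=20,
                  w_wait=1.0, w_idle=0.5, w_tard=2.0, sims=400, seed=1):
        """Monte-Carlo estimate of weighted patient waiting + doctor idle + tardiness."""
        rng = random.Random(seed)
        total = 0.0
        for _ in range(sims):
            clock = wait = idle = 0.0
            for t in schedule:
                if rng.random() > p_show:            # patient is a no-show
                    continue
                if clock < t:
                    idle += t - clock
                    clock = t
                wait += clock - t
                clock += rng.expovariate(1.0 / mean_service)
            total += (w_wait * wait + w_idle * idle
                      + w_tard * max(0.0, clock - day_end))
        return total / sims

    def local_search(n=8, slot=5, day_end=240):
        """Greedy first-improvement search over appointment times on a 5-min grid."""
        sched = [i * day_end // n for i in range(n)]   # start from an even spread
        best, improved = objective(sched), True
        while improved:
            improved = False
            for i in range(n):
                for d in (-slot, slot):
                    t = min(max(sched[i] + d, 0), day_end)
                    cand = sorted(sched[:i] + [t] + sched[i + 1:])
                    val = objective(cand)
                    if val < best - 1e-9:
                        sched, best, improved = cand, val, True
        return sched, best

    print(local_search())
    ```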

  7. Conceptual design optimization study

    NASA Technical Reports Server (NTRS)

    Hollowell, S. J.; Beeman, E. R., II; Hiyama, R. M.

    1990-01-01

    The feasibility of applying multilevel functional decomposition and optimization techniques to conceptual design of advanced fighter aircraft was investigated. Applying the functional decomposition techniques to the conceptual design phase appears to be feasible. The initial implementation of the modified design process will optimize wing design variables. A hybrid approach, combining functional decomposition techniques for generation of aerodynamic and mass properties linear sensitivity derivatives with existing techniques for sizing mission performance and optimization, is proposed.

  8. Control and optimization system

    DOEpatents

    Xinsheng, Lou

    2013-02-12

    A system for optimizing a power plant includes a chemical loop having an input for receiving an input parameter (270) and an output for outputting an output parameter (280), a control system operably connected to the chemical loop and having a multiple controller part (230) comprising a model-free controller. The control system receives the output parameter (280), optimizes the input parameter (270) based on the received output parameter (280), and outputs an optimized input parameter (270) to the input of the chemical loop to control a process of the chemical loop in an optimized manner.

  9. Strategies for selecting optimal sampling and work-up procedures for analysing alkylphenol polyethoxylates in effluents from non-activated sludge biofilm reactors.

    PubMed

    Stenholm, Ake; Holmström, Sara; Hjärthag, Sandra; Lind, Ola

    2012-01-01

    Trace-level analysis of alkylphenol polyethoxylates (APEOs) in wastewater containing sludge requires the prior removal of contaminants and preconcentration. In this study, the effects on optimal work-up procedures of the types of alkylphenols present, their degree of ethoxylation, the biofilm wastewater treatment and the sample matrix were investigated for these purposes. The sampling spot for APEO-containing specimens from an industrial wastewater treatment plant was optimized, including a box surrounding the tubing outlet carrying the wastewater to prevent sedimented sludge from contaminating the collected samples. Following these changes, the sampling precision (in terms of dry matter content) at a point just under the tubing leading from the biofilm reactors was 0.7% RSD. The findings were applied to develop a work-up procedure for use prior to a high-performance liquid chromatography-fluorescence detection analysis method capable of quantifying nonylphenol polyethoxylates (NPEOs) and poorly investigated dinonylphenol polyethoxylates (DNPEOs) at low microg L(-1) concentrations in effluents from non-activated sludge biofilm reactors. The selected multi-step work-up procedure includes lyophilization and pressurized fluid extraction (PFE) followed by strong ion exchange solid phase extraction (SPE). The yields of the combined procedure, according to tests with NP10EO-spiked effluent from a wastewater treatment plant, were in the 62-78% range.

  10. Implementing optimal thinning strategies

    Treesearch

    Kurt H. Riitters; J. Douglas Brodie

    1984-01-01

    Optimal thinning regimes for achieving several management objectives were derived from two stand-growth simulators by dynamic programming. Residual mean tree volumes were then plotted against stand density management diagrams. The results supported the use of density management diagrams for comparing, checking, and implementing the results of optimization analyses.

  11. Elastic swimming I: Optimization

    NASA Astrophysics Data System (ADS)

    Lauga, Eric; Yu, Tony; Hosoi, Anette

    2006-03-01

    We consider the problem of swimming at low Reynolds number by oscillating an elastic filament in a viscous liquid, as investigated by Wiggins and Goldstein (1998, Phys Rev Lett). In this first part of the study, we characterize the optimal forcing conditions of the swimming strategy and its optimal geometrical characteristics.

  12. Optimal synchronization in space.

    PubMed

    Brede, Markus

    2010-02-01

    In this Rapid Communication we investigate spatially constrained networks that realize optimal synchronization properties. After arguing that spatial constraints can be imposed by limiting the amount of "wire" available to connect nodes distributed in space, we use numerical optimization methods to construct networks that realize different trade-offs between optimal synchronization and spatial constraints. Over a large range of parameters such optimal networks are found to have a link length distribution characterized by power-law tails P(l) ∝ l^(-α), with exponents α increasing as the networks become more constrained in space. It is also shown that the optimal networks, which constitute a particular type of small-world network, are characterized by the presence of nodes of distinctly larger than average degree around which long-distance links are centered.
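
    A minimal numerical sketch in the spirit of the abstract: hill-climbing over single-link rewirings that improve the Laplacian eigenratio (a standard synchronizability measure) while a total wire-length budget imposes the spatial constraint. Sizes and the budget are illustrative placeholders.

    ```python
    import numpy as np

    def eigenratio(A):
        """Synchronizability measure lambda_N / lambda_2 of the graph Laplacian."""
        L = np.diag(A.sum(1)) - A
        lam = np.sort(np.linalg.eigvalsh(L))
        return lam[-1] / lam[1] if lam[1] > 1e-9 else np.inf   # inf if disconnected

    def wire_length(A, pos):
        d = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=2)
        iu = np.triu_indices_from(A, 1)
        return float((A[iu] * d[iu]).sum())

    rng = np.random.default_rng(0)
    n, budget = 30, 40.0
    pos = rng.random((n, 2))                     # nodes scattered in the unit square
    A = np.zeros((n, n))
    for i in range(n):                           # connected ring as a starting point
        A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1.0

    best = eigenratio(A)
    for _ in range(5000):                        # hill climb over single-link toggles
        i, j = rng.integers(n, size=2)
        if i == j:
            continue
        B = A.copy()
        B[i, j] = B[j, i] = 1.0 - B[i, j]        # add or remove one link
        if wire_length(B, pos) > budget:         # spatial ("wire") constraint
            continue
        r = eigenratio(B)
        if r < best:                             # accept only improvements
            A, best = B, r
    ```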

  13. Optimal Limited Contingency Planning

    NASA Technical Reports Server (NTRS)

    Meuleau, Nicolas; Smith, David E.

    2003-01-01

    For a given problem, the optimal Markov policy over a finite horizon is a conditional plan containing a potentially large number of branches. However, there are applications where it is desirable to strictly limit the number of decision points and branches in a plan. This raises the question of how one goes about finding optimal plans containing only a limited number of branches. In this paper, we present an any-time algorithm for optimal k-contingency planning. It is the first optimal algorithm for limited contingency planning that is not an explicit enumeration of possible contingent plans. By modelling the problem as a partially observable Markov decision process, it implements the Bellman optimality principle and prunes the solution space. We present experimental results of applying this algorithm to some simple test cases.

  14. Algorithms for bilevel optimization

    NASA Technical Reports Server (NTRS)

    Alexandrov, Natalia; Dennis, J. E., Jr.

    1994-01-01

    General multilevel nonlinear optimization problems arise in design of complex systems and can be used as a means of regularization for multi-criteria optimization problems. Here, for clarity in displaying our ideas, we restrict ourselves to general bi-level optimization problems, and we present two solution approaches. Both approaches use a trust-region globalization strategy, and they can be easily extended to handle the general multilevel problem. We make no convexity assumptions, but we do assume that the problem has a nondegenerate feasible set. We consider necessary optimality conditions for the bi-level problem formulations and discuss results that can be extended to obtain multilevel optimization formulations with constraints at each level.

  15. Optimal synchronization in space

    NASA Astrophysics Data System (ADS)

    Brede, Markus

    2010-02-01

    In this Rapid Communication we investigate spatially constrained networks that realize optimal synchronization properties. After arguing that spatial constraints can be imposed by limiting the amount of “wire” available to connect nodes distributed in space, we use numerical optimization methods to construct networks that realize different trade-offs between optimal synchronization and spatial constraints. Over a large range of parameters such optimal networks are found to have a link length distribution characterized by power-law tails P(l) ∝ l^(-α), with exponents α increasing as the networks become more constrained in space. It is also shown that the optimal networks, which constitute a particular type of small-world network, are characterized by the presence of nodes of distinctly larger than average degree around which long-distance links are centered.

  16. Contingency contractor optimization.

    SciTech Connect

    Gearhart, Jared Lee; Adair, Kristin Lynn; Jones, Katherine A.; Bandlow, Alisa; Durfee, Justin David.; Jones, Dean A.; Martin, Nathaniel; Detry, Richard Joseph; Nanco, Alan Stewart; Nozick, Linda Karen

    2013-10-01

    The goal of Phase 3 of the OSD ATL Contingency Contractor Optimization (CCO) project is to create an engineering prototype of a tool for the contingency contractor element of total force planning during the Support for Strategic Analysis (SSA). An optimization model was developed to determine the optimal mix of military, Department of Defense (DoD) civilians, and contractors that accomplishes a set of user-defined mission requirements at the lowest possible cost while honoring resource limitations and manpower use rules. An additional feature allows the model to understand the variability of the Total Force Mix when there is uncertainty in mission requirements.
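
    The core of such a force-mix model can be sketched as a small linear program: minimize total cost subject to mission-capability requirements and manpower ceilings. All coefficients below are invented for illustration and are not the CCO tool's data.

    ```python
    from scipy.optimize import linprog

    cost = [120.0, 90.0, 150.0]            # annual cost per person: mil, civ, ctr

    # Each mission row: capability contributed per person of each type.
    A_req = [[1.0, 0.8, 1.0],              # mission 1
             [0.5, 1.0, 0.7]]              # mission 2
    b_req = [300.0, 200.0]                 # required capability levels

    res = linprog(c=cost,
                  A_ub=[[-a for a in row] for row in A_req],  # A x >= b  ->  -A x <= -b
                  b_ub=[-b for b in b_req],
                  bounds=[(0, 250), (0, 200), (0, None)],     # manpower ceilings
                  method="highs")
    mil, civ, ctr = res.x                  # optimal force mix (if feasible)
    ```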

  17. Contingency contractor optimization.

    SciTech Connect

    Gearhart, Jared Lee; Adair, Kristin Lynn; Jones, Katherine A.; Bandlow, Alisa; Detry, Richard Joseph; Durfee, Justin David.; Jones, Dean A.; Martin, Nathaniel; Nanco, Alan Stewart; Nozick, Linda Karen

    2013-06-01

    The goal of Phase 3 of the OSD ATL Contingency Contractor Optimization (CCO) project is to create an engineering prototype of a tool for the contingency contractor element of total force planning during the Support for Strategic Analysis (SSA). An optimization model was developed to determine the optimal mix of military, Department of Defense (DoD) civilians, and contractors that accomplishes a set of user-defined mission requirements at the lowest possible cost while honoring resource limitations and manpower use rules. An additional feature allows the model to understand the variability of the Total Force Mix when there is uncertainty in mission requirements.

  18. Optimal Linear Control.

    DTIC Science & Technology

    1979-12-01

    Optimal Linear Control. C. A. Harvey, M. G. Safonov, G. Stein, J. C. Doyle. Honeywell Systems & Research Center, 2600 Ridgway Parkway, Minneapolis. ... Characterizations of optimal linear controls have been derived, from which guides for selecting the structure of the control system and the weights in...

  19. Rapid Optimization Library

    SciTech Connect

    Denis Ridzal, Drew Kouri

    2014-05-13

    ROL provides interfaces to and implementations of algorithms for gradient-based unconstrained and constrained optimization. ROL can be used to optimize the response of any client simulation code that evaluates scalar-valued response functions. If the client code can provide gradient information for the response function, ROL will take advantage of it, resulting in faster runtimes. ROL's interfaces are matrix-free; in other words, ROL uses only evaluations of scalar-valued and vector-valued functions. ROL can be used to solve optimal design problems and inverse problems based on a variety of simulation software.

  1. Optimal control computer programs

    NASA Technical Reports Server (NTRS)

    Kuo, F.

    1992-01-01

    The solution of the optimal control problem, even with low-order dynamical systems, can usually strain the analytical ability of most engineers. The understanding of this subject matter, therefore, would be greatly enhanced if a software package existed that could simulate simple generic problems. Surprisingly, despite a great abundance of commercially available control software, few, if any, packages address optimal control in its most generic form. The purpose of this paper is, therefore, to present a simple computer program that will perform simulations of optimal control problems that arise from the first-order necessary conditions and Pontryagin's maximum principle.

  2. Optimal domain decomposition strategies

    NASA Technical Reports Server (NTRS)

    Yoon, Yonghyun; Soni, Bharat K.

    1995-01-01

    The primary interest of the authors is in the area of grid generation, in particular, optimal domain decomposition about realistic configurations. A grid generation procedure with optimal blocking strategies has been developed to generate multi-block grids for a circular-to-rectangular transition duct. The focus of this study is the domain decomposition which optimizes solution algorithm/block compatibility based on geometrical complexities as well as the physical characteristics of flow field. The progress realized in this study is summarized in this paper.

  3. Optimal quantum pumps.

    PubMed

    Avron, J E; Elgart, A; Graf, G M; Sadun, L

    2001-12-03

    We study adiabatic quantum pumps on time scales that are short relative to the cycle of the pump. In this regime the pump is characterized by the matrix of energy shift which we introduce as the dual to Wigner's time delay. The energy shift determines the charge transport, the dissipation, the noise, and the entropy production. We prove a general lower bound on dissipation in a quantum channel and define optimal pumps as those that saturate the bound. We give a geometric characterization of optimal pumps and show that they are noiseless and transport integral charge in a cycle. Finally we discuss an example of an optimal pump related to the Hall effect.

  4. Thermophotovoltaic Array Optimization

    SciTech Connect

    S Burger; E Brown; K Rahner; L Danielson; J Openlander; J Vell; D Siganporia

    2004-07-29

    A systematic approach to thermophotovoltaic (TPV) array design and fabrication was used to optimize the performance of a 192-cell TPV array. The systematic approach began with cell selection criteria that ranked cells and then matched cell characteristics to maximize power output. Following cell selection, optimization continued with an array packaging design and fabrication techniques that introduced negligible electrical interconnect resistance and minimal parasitic losses while maintaining original cell electrical performance. This paper describes the cell selection and packaging aspects of array optimization as applied to fabrication of a 192-cell array.

  5. Spares-optimized model

    NASA Technical Reports Server (NTRS)

    Cain, A. W.; Paulin, R. E.

    1979-01-01

    Computerized spares optimization for the Space Shuttle Project comprises an analytical process for developing spares quantification and budget forecasts. The model, which assesses the risk associated with recommended spares quantities, is an economical way to determine the best mix of a large number of spare types.

  6. Center for Parallel Optimization.

    DTIC Science & Technology

    1996-03-19

    A new optimization-based approach to improving generalization in machine learning has been proposed and computationally validated on simple linear models as well as on highly nonlinear systems such as neural networks.

  7. Flyby Geometry Optimization Tool

    NASA Technical Reports Server (NTRS)

    Karlgaard, Christopher D.

    2007-01-01

    The Flyby Geometry Optimization Tool is a computer program for computing trajectories and trajectory-altering impulsive maneuvers for spacecraft used in radio relay of scientific data to Earth from an exploratory airplane flying in the atmosphere of Mars.

  8. General shape optimization capability

    NASA Technical Reports Server (NTRS)

    Chargin, Mladen K.; Raasch, Ingo; Bruns, Rudolf; Deuermeyer, Dawson

    1991-01-01

    A method is described for calculating shape sensitivities, within MSC/NASTRAN, in a simple manner without resort to external programs. The method uses natural design variables to define the shape changes in a given structure. Once the shape sensitivities are obtained, the shape optimization process is carried out in a manner similar to property optimization processes. The capability of this method is illustrated by two examples: the shape optimization of a cantilever beam with holes, loaded by a point load at the free end (with the shape of the holes and the thickness of the beam selected as the design variables), and the shape optimization of a connecting rod subjected to several different loading and boundary conditions.

  9. RF Gun Optimization Study

    SciTech Connect

    Alicia Hofler; Pavel Evtushenko

    2007-07-03

    Injector gun design is an iterative process where the designer optimizes a few nonlinearly interdependent beam parameters to achieve the required beam quality for a particle accelerator. Few tools exist to automate the optimization process and thoroughly explore the parameter space. The challenging beam requirements of new accelerator applications such as light sources and electron cooling devices drive the development of RF and SRF photo injectors. A genetic algorithm (GA) has been successfully used to optimize DC photo injector designs at Cornell University [1] and Jefferson Lab [2]. We propose to apply GA techniques to the design of RF and SRF gun injectors. In this paper, we report on the initial phase of the study where we model and optimize a system that has been benchmarked with beam measurements and simulation.

  10. RF Gun Optimization Study

    SciTech Connect

    A. S. Hofler; P. Evtushenko; M. Krasilnikov

    2007-08-01

    Injector gun design is an iterative process where the designer optimizes a few nonlinearly interdependent beam parameters to achieve the required beam quality for a particle accelerator. Few tools exist to automate the optimization process and thoroughly explore the parameter space. The challenging beam requirements of new accelerator applications such as light sources and electron cooling devices drive the development of RF and SRF photo injectors. RF and SRF gun design is further complicated because the bunches are space charge dominated and require additional emittance compensation. A genetic algorithm has been successfully used to optimize DC photo injector designs for Cornell* and Jefferson Lab**, and we propose studying how the genetic algorithm techniques can be applied to the design of RF and SRF gun injectors. In this paper, we report on the initial phase of the study where we model and optimize gun designs that have been benchmarked with beam measurements and simulation.
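
    A minimal sketch of the genetic-algorithm loop described in these two records, with a smooth made-up stand-in for the simulated beam quality; in the actual studies the fitness of each candidate gun design is evaluated by beam-dynamics simulation.

    ```python
    import random

    # Hypothetical objective: "emittance" as a function of three gun parameters
    # (phase, field, solenoid strength). Purely illustrative.
    def emittance(p):
        phase, field, sol = p
        return ((phase - 30) ** 2 / 100 + (field - 60) ** 2 / 400
                + (sol - 0.2) ** 2 * 50)

    BOUNDS = [(0, 90), (20, 120), (0.0, 0.5)]

    def ga(pop_size=40, gens=60, mut=0.2, seed=0):
        rng = random.Random(seed)
        rand_ind = lambda: [rng.uniform(lo, hi) for lo, hi in BOUNDS]
        pop = [rand_ind() for _ in range(pop_size)]
        for _ in range(gens):
            pop.sort(key=emittance)
            parents = pop[: pop_size // 2]                  # truncation selection
            children = []
            while len(children) < pop_size - len(parents):
                a, b = rng.sample(parents, 2)
                child = [(x + y) / 2 for x, y in zip(a, b)]  # blend crossover
                for k, (lo, hi) in enumerate(BOUNDS):        # Gaussian mutation
                    if rng.random() < mut:
                        child[k] = min(max(child[k] + rng.gauss(0, 0.1 * (hi - lo)), lo), hi)
                children.append(child)
            pop = parents + children                         # elitist replacement
        return min(pop, key=emittance)

    best = ga()
    ```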

  11. Optimizing influenza vaccine distribution.

    PubMed

    Medlock, Jan; Galvani, Alison P

    2009-09-25

    The criteria to assess public health policies are fundamental to policy optimization. Using a model parametrized with survey-based contact data and mortality data from influenza pandemics, we determined optimal vaccine allocation for five outcome measures: deaths, infections, years of life lost, contingent valuation, and economic costs. We find that optimal vaccination is achieved by prioritization of schoolchildren and adults aged 30 to 39 years. Schoolchildren are most responsible for transmission, and their parents serve as bridges to the rest of the population. Our results indicate that consideration of age-specific transmission dynamics is paramount to the optimal allocation of influenza vaccines. We also found that previous and new recommendations from the U.S. Centers for Disease Control and Prevention both for the novel swine-origin influenza and, particularly, for seasonal influenza, are suboptimal for all outcome measures.

  12. Optimal operational conditions for the electrochemical regeneration of a soil washing EDTA solution.

    PubMed

    Cesaro, Raffaele; Esposito, Giovanni

    2009-02-01

    The present research deals with the optimization of the operating parameters (cathode replacement time, hydraulic retention time, current intensity and pH) of an electrochemical process aimed at the regeneration of a soil washing EDTA solution used for heavy metal extraction from a natural contaminated soil (excavated from Bellolampo, Palermo, Italy), which was heavily polluted with Cu (59 261.0 mg kg(-1)), Pb (14 178.1 mg kg(-1)) and Zn (14 084.9 mg kg(-1)). The electrolytic regeneration of the exhausted washing solution was performed in a laboratory-scale electrolytic cell with 50 ml cathodic and anodic chambers divided by a cation exchange membrane. Experiments II and III showed maximum Cu and Zn removal efficiencies from the EDTA solution of 99.2+/-0.2 and 31.5+/-9.3%, respectively, when a current intensity of 0.25 A and a hydraulic retention time of 60 min were applied to the electrolytic cell, while the maximum Pb removal efficiency of 70.9+/-4.6% was obtained with a current intensity of 1.25 A and a hydraulic retention time of 60 min. During Experiment I the overall heavy metal removal efficiency was stable and close to 90% up to 20 h, but decreased to below 80% after 40 h, indicating significant saturation of the cathode graphite bed between 20 and 40 h. The capability of the regenerated EDTA solution to treat heavy metal polluted soils was tested in further experiments applying both a single-step and a multi-step washing treatment procedure. In particular, the latter showed that heavy metal extraction from the soil can be increased over subsequent washing steps, with Cu, Pb and Zn total removal efficiencies of 52.6, 100.0 and 41.3%, respectively.

  13. Introduction: optimization in networks.

    PubMed

    Motter, Adilson E; Toroczkai, Zoltan

    2007-06-01

    The recent surge in the network modeling of complex systems has set the stage for a new era in the study of fundamental and applied aspects of optimization in collective behavior. This Focus Issue presents an extended view of the state of the art in this field and includes articles from a large variety of domains in which optimization manifests itself, including physical, biological, social, and technological networked systems.

  14. Optimized Bolted Joint

    NASA Technical Reports Server (NTRS)

    Hart-Smith, L. J.; Bunin, B. L.; Watts, D. J.

    1986-01-01

    Computer technique aids joint optimization. Load-sharing between fasteners in multirow bolted composite joints computed by nonlinear-analysis computer program. Input to analysis was load-deflection data from 180 specimens tested as part of program to develop technology of structural joints for advanced transport aircraft. Bolt design optimization technique applicable to major joints in composite materials for primary and secondary structures and generally applicable for metal joints as well.

  15. Modeling using optimization routines

    NASA Technical Reports Server (NTRS)

    Thomas, Theodore

    1995-01-01

    Modeling using mathematical optimization dynamics is a design tool used in magnetic suspension system development. MATLAB (software) is used to calculate minimum cost and other desired constraints. The parameters to be measured are programmed into mathematical equations. MATLAB will calculate answers for each set of inputs; inputs cover the boundary limits of the design. A Magnetic Suspension System using Electromagnets Mounted in a Planar Array is a design system that makes use of optimization modeling.

  16. Depth Optimization Study

    SciTech Connect

    Kawase, Mitsuhiro

    2009-11-22

    The zipped file contains a directory of data and routines used in the NNMREC turbine depth optimization study (Kawase et al., 2011), and calculation results thereof. For further info, please contact Mitsuhiro Kawase at kawase@uw.edu. Reference: Mitsuhiro Kawase, Patricia Beba, and Brian Fabien (2011), Finding an Optimal Placement Depth for a Tidal In-Stream Conversion Device in an Energetic, Baroclinic Tidal Channel, NNMREC Technical Report.

  17. Optimization Of Simulated Trajectories

    NASA Technical Reports Server (NTRS)

    Brauer, Garry L.; Olson, David W.; Stevenson, Robert

    1989-01-01

    Program To Optimize Simulated Trajectories (POST) provides ability to target and optimize trajectories of point-mass powered or unpowered vehicle operating at or near rotating planet. Used successfully to solve wide variety of problems in mechanics of atmospheric flight and transfer between orbits. Generality of program demonstrated by its capability to simulate up to 900 distinct trajectory phases, including generalized models of planets and vehicles. VAX version written in FORTRAN 77 and CDC version in FORTRAN V.

  18. Training a quantum optimizer

    NASA Astrophysics Data System (ADS)

    Wecker, Dave; Hastings, Matthew B.; Troyer, Matthias

    2016-08-01

    We study a variant of the quantum approximate optimization algorithm [E. Farhi, J. Goldstone, and S. Gutmann, arXiv:1411.4028] with a slightly different parametrization and a different objective: rather than looking for a state which approximately solves an optimization problem, our goal is to find a quantum algorithm that, given an instance of the maximum 2-satisfiability problem (MAX-2-SAT), will produce a state with high overlap with the optimal state. Using a machine learning approach, we chose a "training set" of instances and optimized the parameters to produce a large overlap for the training set. We then tested these optimized parameters on a larger instance set. As a training set, we used a subset of the hard instances studied by Crosson, Farhi, C. Y.-Y. Lin, H.-H. Lin, and P. Shor (CFLLS) (arXiv:1401.7320). When tested, on the full set, the parameters that we find produce a significantly larger overlap than the optimized annealing times of CFLLS. Testing on other random instances from 20 to 28 bits continues to show improvement over annealing, with the improvement being most notable on the hardest instances. Further tests on instances of MAX-3-SAT also showed improvement on the hardest instances. This algorithm may be a possible application for near-term quantum computers with limited coherence times.

  1. Cyclone performance and optimization

    SciTech Connect

    Leith, D.

    1990-09-15

    The objectives of this project are: to characterize the gas flow pattern within cyclones, to revise the theory for cyclone performance on the basis of these findings, and to design and test cyclones whose dimensions have been optimized using the revised performance theory. This work is important because its successful completion will aid in the technology for combustion of coal in pressurized, fluidized beds. This quarter, an empirical model for predicting pressure drop across a cyclone was developed through a statistical analysis of pressure drop data for 98 cyclone designs. The model is shown to perform better than the pressure drop models of First (1950), Alexander (1949), Barth (1956), Stairmand (1949), and Shepherd-Lapple (1940). This model is used with the efficiency model of Iozia and Leith (1990) to develop an optimization curve which predicts the minimum pressure drop and the dimension ratios of the optimized cyclone for a given aerodynamic cut diameter, d50. The effect of variation in cyclone height, cyclone diameter, and flow on the optimization curve is determined. The optimization results are used to develop a design procedure for optimized cyclones. 37 refs., 10 figs., 4 tabs.

  2. Regularizing portfolio optimization

    NASA Astrophysics Data System (ADS)

    Still, Susanne; Kondor, Imre

    2010-07-01

    The optimization of large portfolios displays an inherent instability due to estimation error. This poses a fundamental problem, because solutions that are not stable under sample fluctuations may look optimal for a given sample, but are, in effect, very far from optimal with respect to the average risk. In this paper, we approach the problem from the point of view of statistical learning theory. The occurrence of the instability is intimately related to over-fitting, which can be avoided using known regularization methods. We show how regularized portfolio optimization with the expected shortfall as a risk measure is related to support vector regression. The budget constraint dictates a modification. We present the resulting optimization problem and discuss the solution. The L2 norm of the weight vector is used as a regularizer, which corresponds to a diversification 'pressure'. This means that diversification, besides counteracting downward fluctuations in some assets by upward fluctuations in others, is also crucial because it improves the stability of the solution. The approach we provide here allows for the simultaneous treatment of optimization and diversification in one framework that enables the investor to trade off between the two, depending on the size of the available dataset.
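
    For the simpler minimum-variance case (the paper works with expected shortfall), the effect of the L2 regularizer can be shown in closed form: penalizing ||w||^2 under the budget constraint 1'w = 1 gives w proportional to (Sigma + eta*I)^(-1) 1, which shrinks the sample covariance toward the identity and stabilizes the weights. A small sketch with synthetic returns:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    T, N = 60, 40                                  # few observations, many assets
    returns = rng.normal(0.001, 0.02, (T, N))
    Sigma = np.cov(returns, rowvar=False)          # noisy sample covariance

    def min_var_weights(Sigma, eta):
        """Minimize w' Sigma w + eta ||w||^2 subject to sum(w) = 1 (closed form)."""
        ones = np.ones(Sigma.shape[0])
        w = np.linalg.solve(Sigma + eta * np.eye(len(ones)), ones)
        return w / w.sum()                         # enforce the budget constraint

    w_raw = min_var_weights(Sigma, 0.0)            # unstable for small T relative to N
    w_reg = min_var_weights(Sigma, 0.05)           # regularized: a diversification pressure
    print(np.abs(w_raw).max(), np.abs(w_reg).max())
    ```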

  3. Shape optimization and CAD

    NASA Technical Reports Server (NTRS)

    Rasmussen, John

    1990-01-01

    Structural optimization has attracted attention since the days of Galileo. Olhoff and Taylor have produced an excellent overview of the classical research within this field. However, the interest in structural optimization has increased greatly during the last decade due to the advent of reliable general numerical analysis methods and the computer power necessary to use them efficiently. This has created the possibility of developing general numerical systems for shape optimization. Several authors, e.g., Esping; Braibant & Fleury; Bennet & Botkin; Botkin, Yang, and Bennet; and Stanton, have published practical and successful applications of general optimization systems. Ding and Homlein have produced extensive overviews of available systems. Furthermore, a number of commercial optimization systems based on well-established finite element codes have been introduced. Systems like ANSYS, IDEAS, OASIS, and NISAOPT are widely known examples. In parallel to this development, the technology of computer aided design (CAD) has gained a large influence on the design process of mechanical engineering. The CAD technology has already lived through a rapid development driven by the drastically growing capabilities of digital computers. However, the systems of today are still considered as being only the first generation of a long row of computer integrated manufacturing (CIM) systems. These systems to come will offer an integrated environment for design, analysis, and fabrication of products of almost any character. Thus, the CAD system could be regarded as simply a database for geometrical information equipped with a number of tools with the purpose of helping the user in the design process. Among these tools are facilities for structural analysis and optimization as well as present standard CAD features like drawing, modeling, and visualization tools. The state of the art of structural optimization is that a large amount of mathematical and mechanical techniques are

  4. On optimal Bayes detection

    SciTech Connect

    Nielsen, P.

    1991-08-12

    The following is intended to be a short introduction to the design and analysis of a Bayes-optimal detector, and Middleton's Locally Optimum Bayes Detector (LOBD). The relationship between these two detectors is clarified. There are three examples of varying complexity included to illustrate the design of these detectors. The final example illustrates the difficulty involved in choosing the bias function for the LOBD. For the examples, the corrupting noise is Gaussian. This allows for a relatively easy solution to the optimal and the LOBD structures. As will be shown, for Bayes detection, the threshold is determined by the costs associated with making a decision and the a priori probabilities of each hypothesis. The threshold of the test cannot be set by simulation. One will notice that the optimal Bayes detector and the LOBD look very much like the Neyman-Pearson optimal and locally optimal detectors respectively. In the latter cases though, the threshold is set by a constraint on the false alarm probability. Note that this allows the threshold to be set by simulation.

  5. On optimal Bayes detection

    SciTech Connect

    Nielsen, P. (Arizona Univ., Tucson, AZ, Dept. of Electrical and Computer Engineering)

    1991-08-12

    The following is intended to be a short introduction to the design and analysis of a Bayes-optimal detector, and Middleton's Locally Optimum Bayes Detector (LOBD). The relationship between these two detectors is clarified. There are three examples of varying complexity included to illustrate the design of these detectors. The final example illustrates the difficulty involved in choosing the bias function for the LOBD. For the examples, the corrupting noise is Gaussian. This allows for a relatively easy solution to the optimal and the LOBD structures. As will be shown, for Bayes detection, the threshold is determined by the costs associated with making a decision and the a priori probabilities of each hypothesis. The threshold of the test cannot be set by simulation. One will notice that the optimal Bayes detector and the LOBD look very much like the Neyman-Pearson optimal and locally optimal detectors respectively. In the latter cases though, the threshold is set by a constraint on the false alarm probability. Note that this allows the threshold to be set by simulation.
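
    A worked toy example of the point made in both records, for a known mean shift in Gaussian noise: the likelihood-ratio threshold comes directly from the costs and priors, with no simulation involved. All numbers below are illustrative.

    ```python
    import math

    # H0: x ~ N(0, s^2), H1: x ~ N(m, s^2). C_ij is the cost of deciding
    # hypothesis i when j is true; pi0, pi1 are the prior probabilities.
    m, s = 1.0, 1.0
    pi0, pi1 = 0.7, 0.3
    C00, C11 = 0.0, 0.0
    C10, C01 = 1.0, 5.0            # false-alarm cost, miss cost

    # Bayes test: decide H1 when the likelihood ratio exceeds this threshold.
    tau = (pi0 * (C10 - C00)) / (pi1 * (C01 - C11))

    def decide(x):
        # Likelihood ratio p(x|H1)/p(x|H0) for equal-variance Gaussians.
        lr = math.exp((m * x - m**2 / 2) / s**2)
        return 1 if lr > tau else 0

    print(tau, decide(0.2), decide(1.5))
    ```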

  6. Optimization of Metronidazole Emulgel

    PubMed Central

    Rao, Monica; Sukre, Girish; Aghav, Sheetal; Kumar, Manmeet

    2013-01-01

    The purpose of the present study was to develop and optimize an emulgel system for MTZ (metronidazole), a poorly water-soluble drug. Pseudoternary phase diagrams were developed for various microemulsion formulations composed of Capmul 908 P, Acconon MC8-2, and propylene glycol. The emulgel was optimized using a three-factor, two-level factorial design; the independent variables selected were Capmul 908 P, surfactant mixture (Acconon MC8-2), and gelling agent, and the dependent variables (responses) were the cumulative amount of drug permeated across the dialysis membrane in 24 h (Y1) and spreadability (Y2). Mathematical equations and response surface plots were used to relate the dependent and independent variables. Regression equations were generated for responses Y1 and Y2. The statistical validity of the polynomials was established, and optimized formulation factors were selected. Validation of the optimization study with 3 confirmatory runs indicated a high degree of prognostic ability of response surface methodology. An emulgel system of MTZ was developed and optimized using a 2(3) factorial design and could provide an effective treatment against topical infections. PMID:26555982
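
    The regression step of such a two-level, three-factor design can be sketched in a few lines: fit the intercept, main effects, and two-way interactions by least squares on coded (-1/+1) factor levels. The responses below are invented solely to show the mechanics.

    ```python
    import numpy as np

    # Full 2^3 design in coded units; y would be the measured response
    # (e.g. 24-h drug permeation) at each factor combination.
    X_coded = np.array([[a, b, c] for a in (-1, 1) for b in (-1, 1) for c in (-1, 1)])
    y = np.array([55., 61., 58., 66., 52., 59., 57., 70.])   # hypothetical responses

    # Design matrix: intercept, main effects, two-way interactions.
    A, B, C = X_coded.T
    D = np.column_stack([np.ones(8), A, B, C, A * B, A * C, B * C])
    coef, *_ = np.linalg.lstsq(D, y, rcond=None)
    terms = ["1", "A", "B", "C", "AB", "AC", "BC"]
    print(dict(zip(terms, np.round(coef, 3))))               # fitted regression equation
    ```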

  7. Optimized Kernel Entropy Components.

    PubMed

    Izquierdo-Verdiguier, Emma; Laparra, Valero; Jenssen, Robert; Gomez-Chova, Luis; Camps-Valls, Gustau

    2016-02-25

    This brief addresses two main issues of the standard kernel entropy component analysis (KECA) algorithm: the optimization of the kernel decomposition and the optimization of the Gaussian kernel parameter. KECA roughly reduces to a sorting of the importance of kernel eigenvectors by entropy instead of variance, as in kernel principal component analysis. In this brief, we propose an extension of the KECA method, named optimized KECA (OKECA), that directly extracts the optimal features retaining most of the data entropy by compacting the information in very few features (often in just one or two). The proposed method produces features with higher expressive power. In particular, it is based on the independent component analysis framework and introduces an extra rotation to the eigendecomposition, which is optimized via gradient-ascent search. This maximum entropy preservation suggests that OKECA features are more efficient than KECA features for density estimation. In addition, a critical issue in both methods is the selection of the kernel parameter, since it critically affects the resulting performance. Here, we analyze the most common kernel length-scale selection criteria. The results of both methods are illustrated on different synthetic and real problems. Results show that OKECA returns projections with more expressive power than KECA, that the most successful rule for estimating the kernel parameter is based on maximum likelihood, and that OKECA is more robust to the selection of the length-scale parameter in kernel density estimation.
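
    A minimal sketch of the entropy-ranked decomposition that plain KECA performs (OKECA's extra optimized rotation is omitted): eigenpairs of an RBF kernel matrix are kept according to their Renyi-entropy contribution rather than their variance. The data and kernel width are placeholders.

    ```python
    import numpy as np

    def keca_features(X, n_components=2, sigma=1.0):
        d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
        K = np.exp(-d2 / (2 * sigma ** 2))                  # RBF kernel matrix
        lam, E = np.linalg.eigh(K)                          # eigenpairs (ascending)
        entropy = lam * (E.sum(axis=0)) ** 2                # Renyi entropy contributions
        order = np.argsort(entropy)[::-1][:n_components]    # top entropy, not top variance
        return E[:, order] * np.sqrt(np.clip(lam[order], 0, None))

    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 5))
    Z = keca_features(X, n_components=2)                    # entropy-preserving features
    ```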

  8. Optimization of Heat Exchangers

    SciTech Connect

    Ivan Catton

    2010-10-01

    The objective of this research is to develop tools to design and optimize heat exchangers (HE) and compact heat exchangers (CHE) for intermediate-loop heat transport systems found in the very high temperature reactor (VHTR) and other Generation IV designs by addressing heat transfer surface augmentation and conjugate modeling. To optimize a heat exchanger, a fast-running model must be created that allows multiple designs to be compared quickly. To model a heat exchanger, volume averaging theory (VAT) is used. VAT allows the conservation of mass, momentum and energy to be solved point by point in a 3-dimensional computer model of a heat exchanger. The end product of this project is a computer code that can predict an optimal configuration for a heat exchanger given only a few constraints (input fluids, size, cost, etc.). Because the VAT computer code can model the characteristics (pumping power, temperatures, and cost) of heat exchangers more quickly than traditional CFD or experiment, every geometric parameter can be optimized simultaneously. Using design of experiments (DOE) and genetic algorithms (GA) to optimize the results of the computer code will improve heat exchanger design.

  9. A linearly approximated iterative Gaussian decomposition method for waveform LiDAR processing

    NASA Astrophysics Data System (ADS)

    Mountrakis, Giorgos; Li, Yuguang

    2017-07-01

    Full-waveform LiDAR (FWL) decomposition results often act as the basis for key LiDAR-derived products, for example canopy height, biomass and carbon pool estimation, leaf area index calculation and under canopy detection. To date, the prevailing method for FWL product creation is the Gaussian Decomposition (GD) based on a non-linear Levenberg-Marquardt (LM) optimization for Gaussian node parameter estimation. GD follows a "greedy" approach that may leave weak nodes undetected, merge multiple nodes into one or separate a noisy single node into multiple ones. In this manuscript, we propose an alternative decomposition method called Linearly Approximated Iterative Gaussian Decomposition (LAIGD method). The novelty of the LAIGD method is that it follows a multi-step "slow-and-steady" iterative structure, where new Gaussian nodes are quickly discovered and adjusted using a linear fitting technique before they are forwarded for a non-linear optimization. Two experiments were conducted, one using real full-waveform data from NASA's land, vegetation, and ice sensor (LVIS) and another using synthetic data containing different number of nodes and degrees of overlap to assess performance in variable signal complexity. LVIS data revealed considerable improvements in RMSE (44.8% lower), RSE (56.3% lower) and rRMSE (74.3% lower) values compared to the benchmark GD method. These results were further confirmed with the synthetic data. Furthermore, the proposed multi-step method reduces execution times in half, an important consideration as there are plans for global coverage with the upcoming Global Ecosystem Dynamics Investigation LiDAR sensor on the International Space Station.
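
    A minimal sketch of the iterative structure described above, not the authors' code: each new Gaussian node is initialized by a linear fit (a parabola through the log of the residual near its peak) and all nodes are then refined jointly by non-linear least squares; the stopping rule and window size are illustrative.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def gaussians(t, *p):                       # sum of Gaussians, p = (A, mu, sig) * k
        y = np.zeros_like(t)
        for A, mu, sig in zip(p[0::3], p[1::3], p[2::3]):
            y = y + A * np.exp(-0.5 * ((t - mu) / sig) ** 2)
        return y

    def decompose(t, w, max_nodes=5, noise=0.02):
        params = []
        for _ in range(max_nodes):
            r = w - gaussians(t, *params)       # residual waveform
            i = int(np.argmax(r))
            if r[i] < 3 * noise:                # stop at the noise floor
                break
            sl = slice(max(i - 3, 0), i + 4)    # samples around the peak
            y = np.log(np.clip(r[sl], 1e-9, None))
            a, b, c = np.polyfit(t[sl], y, 2)   # a log-Gaussian is a parabola
            if a >= 0:
                break
            sig = np.sqrt(-1 / (2 * a))         # recover Gaussian parameters
            mu = -b / (2 * a)
            A = np.exp(c - b**2 / (4 * a))
            params += [A, mu, sig]
            params = list(curve_fit(gaussians, t, w, p0=params)[0])  # joint refine
        return np.array(params).reshape(-1, 3)

    t = np.linspace(0, 50, 400)
    w = gaussians(t, 1.0, 18, 2.5, 0.6, 27, 3.0) \
        + np.random.default_rng(2).normal(0, 0.02, t.size)
    print(decompose(t, w))                      # recovers the two nodes
    ```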

  11. Optimally combined confidence limits

    NASA Astrophysics Data System (ADS)

    Janot, P.; Le Diberder, F.

    1998-02-01

    An analytical and optimal procedure to combine statistically independent sets of confidence levels on a quantity is presented. This procedure does not impose any constraint on the methods followed by each analysis to derive its own limit. It incorporates the a priori statistical power of each of the analyses to be combined, in order to optimize the overall sensitivity. It can, in particular, be used to combine the mass limits obtained by several analyses searching for the Higgs boson in different decay channels, with different selection efficiencies, mass resolutions and expected backgrounds. It can also be used to combine the mass limits obtained by several experiments (e.g. ALEPH, DELPHI, L3 and OPAL, at LEP 2) independently of the method followed by each of these experiments to derive their own limit. A method to derive the limit set by one analysis is also presented, along with an unbiased prescription to optimize the expected mass limit under the no-signal hypothesis.
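
    For orientation only: the classical Fisher rule below is one standard way to combine independent confidence levels; unlike the optimal procedure of this paper, it does not weight the inputs by their a priori statistical power.

        import numpy as np
        from scipy.stats import chi2

        def fisher_combine(cl_values):
            # Under the no-signal hypothesis each CL is uniform on [0, 1],
            # so -2 * sum(log CL_i) follows chi-squared with 2k dof.
            stat = -2.0 * np.sum(np.log(cl_values))
            return chi2.sf(stat, df=2 * len(cl_values))

        print(fisher_combine([0.08, 0.12, 0.30]))  # combined confidence level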

  12. Optimal Composite Curing System

    NASA Astrophysics Data System (ADS)

    Handel, Paul; Guerin, Daniel

    The Optimal Composite Curing System (OCCS) is an intelligent control system which incorporates heat transfer and resin kinetic models coupled with expert knowledge. It controls the curing of epoxy-impregnated composites, preventing part overheating while maintaining the maximum cure heat-up rate. This results in a significant reduction in total cure time over standard methods. The system uses a cure process model, operating in real time, to determine optimal cure profiles for tool/part configurations of varying thermal characteristics. These profiles indicate the heating and cooling necessary to ensure a complete cure of each part in the autoclave in the minimum amount of time. The system coordinates these profiles to determine an optimal cure profile for a batch of thermally variant parts. Using process-specified rules for proper autoclave operation, OCCS automatically controls the cure process, implementing the prescribed cure while monitoring the operation of the autoclave equipment.

  13. Reverse Osmosis Optimization

    SciTech Connect

    2013-08-01

    This technology evaluation was prepared by Pacific Northwest National Laboratory on behalf of the U.S. Department of Energy’s Federal Energy Management Program (FEMP). The technology evaluation assesses techniques for optimizing reverse osmosis (RO) systems to increase RO system performance and water efficiency. This evaluation provides a general description of RO systems, the influence of RO systems on water use, and key areas where RO systems can be optimized to reduce water and energy consumption. The evaluation is intended to help facility managers at Federal sites understand the basic concepts of the RO process and system optimization options, enabling them to make informed decisions during the system design process for either new projects or recommissioning of existing equipment. This evaluation is focused on commercial-sized RO systems generally treating more than 80 gallons per hour.

  14. Reverse Osmosis Optimization

    SciTech Connect

    McMordie Stoughton, Kate; Duan, Xiaoli; Wendel, Emily M.

    2013-08-26

    This technology evaluation was prepared by Pacific Northwest National Laboratory on behalf of the U.S. Department of Energy’s Federal Energy Management Program (FEMP). The technology evaluation assesses techniques for optimizing reverse osmosis (RO) systems to increase RO system performance and water efficiency. This evaluation provides a general description of RO systems, the influence of RO systems on water use, and key areas where RO systems can be optimized to reduce water and energy consumption. The evaluation is intended to help facility managers at Federal sites understand the basic concepts of the RO process and system optimization options, enabling them to make informed decisions during the system design process for either new projects or recommissioning of existing equipment. This evaluation is focused on commercial-sized RO systems generally treating more than 80 gallons per hour.

  15. Optimizing turning for locomotion

    NASA Astrophysics Data System (ADS)

    Burton, Lisa; Hatton, Ross; Choset, Howie; Hosoi, A. E.

    2012-02-01

    Speed and efficiency are common and often adequate metrics to compare locomoting systems. These metrics, however, fail to account for a system's ability to turn, a key component of its ability to move through a confined environment and an important factor in optimal motion planning. To explore turning strokes for a locomoting system, we develop a kinematic model to relate a system's shape configuration to its external velocity. We exploit this model to visualize the dynamics of the system and determine optimal strokes for multiple systems, including low Reynolds number swimmers and biological systems dominated by inertia. Understanding how shape configurations are related to external velocities enables a better understanding of biological and man-made systems. Using these tools, we can justify biological system motion and determine optimal shape configurations for robots to maneuver through difficult environments.
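
    The kind of kinematic model described above maps shape velocity to body velocity through a shape-dependent matrix (a local connection). The sketch below integrates a circular stroke through a hypothetical two-joint connection, ignoring body-frame rotation for simplicity; the matrix entries are invented.

        import numpy as np

        def A(r):
            # Hypothetical 2x2 local connection for a two-joint swimmer:
            # body velocity = A(shape) @ shape velocity.
            return np.array([[np.cos(r[0]), 0.1],
                             [0.1, np.sin(r[1])]])

        ts = np.linspace(0.0, 2.0 * np.pi, 2000)
        disp = np.zeros(2)
        for t0, t1 in zip(ts[:-1], ts[1:]):
            r = np.array([np.cos(t0), np.sin(t0)])      # circular stroke in shape space
            rdot = np.array([-np.sin(t0), np.cos(t0)])  # its shape velocity
            disp += A(r) @ rdot * (t1 - t0)              # accumulate body displacement
        print(disp)  # net displacement per stroke cycle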

  16. Optimal three finger grasps

    NASA Technical Reports Server (NTRS)

    Demmel, J.; Lafferriere, G.

    1989-01-01

    Consideration is given to the problem of optimal force distribution among three point fingers holding a planar object. A scheme that reduces the nonlinear optimization problem to an easily solved generalized eigenvalue problem is proposed. This scheme generalizes and simplifies results of Ji and Roth (1988). The generalizations include all possible geometric arrangements and extensions to three dimensions and to the case of variable coefficients of friction. For the two-dimensional case with constant coefficients of friction, it is proved that, except for some special cases, the optimal grasping forces (in the sense of minimizing the dependence on friction) are those for which the angles with the corresponding normals are all equal (in absolute value).

  17. Fuzzy logic controller optimization

    DOEpatents

    Sepe, Jr., Raymond B; Miller, John Michael

    2004-03-23

    A method is provided for optimizing a rotating induction machine system fuzzy logic controller. The fuzzy logic controller has at least one input and at least one output. Each input accepts a machine system operating parameter. Each output produces at least one machine system control parameter. The fuzzy logic controller generates each output based on at least one input and on fuzzy logic decision parameters. Optimization begins by obtaining a set of data relating each control parameter to at least one operating parameter for each machine operating region. A model is constructed for each machine operating region based on the machine operating region data obtained. The fuzzy logic controller is simulated with at least one created model in a feedback loop from a fuzzy logic output to a fuzzy logic input. Fuzzy logic decision parameters are optimized based on the simulation.

  18. Optimization and phenotype allocation.

    PubMed

    Jost, Jürgen; Wang, Ying

    2014-01-01

    We study the phenotype allocation problem for the stochastic evolution of a multitype population in a random environment. Our underlying model is a multitype Galton–Watson branching process in a random environment. In the multitype branching model, different types denote different phenotypes of offspring, and offspring distributions denote the allocation strategies. Two possible optimization targets are considered: the long-term growth rate of the population conditioned on nonextinction, and the extinction probability of the lineage. In a simple and biologically motivated case, we derive an explicit formula for the long-term growth rate using the random Perron–Frobenius theorem, and we give an approximation to the extinction probability by a method similar to that developed by Wilkinson. Then we obtain the optimal strategies that maximize the long-term growth rate or minimize the approximate extinction probability, respectively, in a numerical example. It turns out that different optimality criteria can lead to different strategies.
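
    The long-term growth rate in a random environment can be approximated numerically as the top Lyapunov exponent of a random product of mean-offspring matrices (the quantity the random Perron-Frobenius theorem controls). The two environment matrices below are hypothetical, with environments drawn independently and equiprobably.

        import numpy as np

        rng = np.random.default_rng(1)
        # Mean-offspring matrices for two environments (hypothetical numbers);
        # entry [i, j] is the expected type-j offspring of a type-i parent.
        M = [np.array([[1.2, 0.3], [0.2, 0.9]]),
             np.array([[0.7, 0.1], [0.4, 1.1]])]

        def growth_rate(mats, n=100_000):
            # Top Lyapunov exponent of the random matrix product, approximating
            # the long-term growth rate of the population.
            v, acc = np.ones(2) / 2.0, 0.0
            for _ in range(n):
                v = v @ mats[rng.integers(len(mats))]
                s = v.sum()
                acc += np.log(s)
                v /= s
            return acc / n

        print(growth_rate(M))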

  19. Mathematical Optimization Techniques

    NASA Technical Reports Server (NTRS)

    Bellman, R. (Editor)

    1963-01-01

    The papers collected in this volume were presented at the Symposium on Mathematical Optimization Techniques held in the Santa Monica Civic Auditorium, Santa Monica, California, on October 18-20, 1960. The objective of the symposium was to bring together, for the purpose of mutual education, mathematicians, scientists, and engineers interested in modern optimization techniques. Some 250 persons attended. The techniques discussed included recent developments in linear, integer, convex, and dynamic programming as well as the variational processes surrounding optimal guidance, flight trajectories, statistical decisions, structural configurations, and adaptive control systems. The symposium was sponsored jointly by the University of California, with assistance from the National Science Foundation, the Office of Naval Research, the National Aeronautics and Space Administration, and The RAND Corporation, through Air Force Project RAND.

  20. Discrete Variational Optimal Control

    NASA Astrophysics Data System (ADS)

    Jiménez, Fernando; Kobilarov, Marin; Martín de Diego, David

    2013-06-01

    This paper develops numerical methods for optimal control of mechanical systems in the Lagrangian setting. It extends the theory of discrete mechanics to enable the solutions of optimal control problems through the discretization of variational principles. The key point is to solve the optimal control problem as a variational integrator of a specially constructed higher dimensional system. The developed framework applies to systems on tangent bundles, Lie groups, and underactuated and nonholonomic systems with symmetries, and can approximate either smooth or discontinuous control inputs. The resulting methods inherit the preservation properties of variational integrators and result in numerically robust and easily implementable algorithms. Several theoretical examples and a practical one, the control of an underwater vehicle, illustrate the application of the proposed approach.
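
    As a minimal flavor of the approach, the sketch below integrates a planar pendulum with a midpoint discrete Lagrangian, solving the discrete Euler-Lagrange equation for each new configuration; the paper's machinery for Lie groups, underactuation and nonholonomic constraints is not attempted here.

        import numpy as np
        from scipy.optimize import brentq

        g, L, h = 9.81, 1.0, 0.01  # gravity, pendulum length, step size

        def del_residual(q0, q1, q2):
            # Discrete Euler-Lagrange equation D2 Ld(q0,q1) + D1 Ld(q1,q2) = 0
            # for the midpoint discrete Lagrangian of a pendulum.
            grav = 0.5 * h * (g / L) * (np.sin(0.5 * (q0 + q1))
                                        + np.sin(0.5 * (q1 + q2)))
            return (q1 - q0) / h - (q2 - q1) / h - grav

        q = [0.5, 0.5]  # two initial angles encode position and velocity
        for _ in range(1000):
            q0, q1 = q[-2], q[-1]
            # Solve the implicit step for the next configuration q2.
            q.append(brentq(lambda x: del_residual(q0, q1, x), q1 - 1.0, q1 + 1.0))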

  1. Dose optimization tool

    NASA Astrophysics Data System (ADS)

    Amir, Ornit; Braunstein, David; Altman, Ami

    2003-05-01

    A dose optimization tool for CT scanners is presented using patient raw data to calculate noise. The tool uses a single patient image which is modified to simulate various lower doses. Dose optimization is carried out without extra measurements by interactively visualizing the dose-induced changes in this image. This tool can be used either offline, on existing image(s), or, as a prerequisite for dose optimization for the specific patient, during the patient's clinical study. The algorithm of low-dose simulation consists of reconstructing two images from a single measurement and using those images to create the various lower-dose images. This algorithm enables fast simulation of various low-dose (mAs) images on a real patient image.
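
    One plausible reading of the two-image idea (not necessarily the authors' exact algorithm) is to average the pair for the signal and use their half-difference as a noise realization, scaled under a white quantum-noise assumption:

        import numpy as np

        def simulate_low_dose(img_a, img_b, dose_fraction):
            # img_a, img_b: images reconstructed from two halves of one
            # measurement; their half-difference is a noise realization.
            mean = 0.5 * (img_a + img_b)
            noise = 0.5 * (img_a - img_b)
            # Assuming white quantum noise with variance ~ 1/dose, reaching a
            # fraction f of the original dose needs extra noise with standard
            # deviation sqrt(1/f - 1) relative to the full-dose noise level.
            return mean + np.sqrt(1.0 / dose_fraction - 1.0) * noise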

  2. Optimal symmetric flight studies

    NASA Technical Reports Server (NTRS)

    Weston, A. R.; Menon, P. K. A.; Bilimoria, K. D.; Cliff, E. M.; Kelley, H. J.

    1985-01-01

    Several topics in optimal symmetric flight of airbreathing vehicles are examined. In one study, an approximation scheme designed for onboard real-time energy management of climb-dash is developed and calculations for a high-performance aircraft presented. In another, a vehicle model intermediate in complexity between energy and point-mass models is explored and some quirks in optimal flight characteristics peculiar to the model uncovered. In yet another study, energy-modelling procedures are re-examined with a view to stretching the range of validity of zeroth-order approximation by special choice of state variables. In a final study, time-fuel tradeoffs in cruise-dash are examined for the consequences of nonconvexities appearing in the classical steady cruise-dash model. Two appendices provide retrospective looks at two early publications on energy modelling and related optimal control theory.

  3. Optimality in neuromuscular systems.

    PubMed

    Theodorou, Evangelos; Valero-Cuevas, Francisco J

    2010-01-01

    We provide an overview of optimal control methods for nonlinear neuromuscular systems and discuss their limitations. Moreover, we extend current optimal control methods to neuromuscular models with realistically numerous musculotendons, as most prior work is limited to torque-driven systems. Recent work on computational motor control has explored the use of control theory and estimation as a conceptual tool to understand the underlying computational principles of neuromuscular systems. After all, successful biological systems regularly meet conditions for stability, robustness and performance for multiple classes of complex tasks. Among a variety of control theory frameworks proposed to explain this, stochastic optimal control has become a dominant framework, to the point of being a standard computational technique to reproduce kinematic trajectories of reaching movements (see [12]). In particular, we demonstrate the application of optimal control to a neuromuscular model of the index finger with all seven musculotendons performing a tapping task. Our simulations include 1) a muscle model that includes force-length and force-velocity characteristics; 2) an anatomically plausible biomechanical model of the index finger that includes a tendinous network for the extensor mechanism; and 3) a contact model that is based on a nonlinear spring-damper attached at the end effector of the index finger. We demonstrate that it is feasible to apply optimal control to systems with realistically large state vectors and conclude that, while optimal control is an adequate formalism for creating computational models of neuro-musculoskeletal systems, there remain important challenges and limitations that need to be considered and overcome, such as contact transitions, the curse of dimensionality, and constraints on states and controls.

  4. Optimal Quantum Phase Estimation

    SciTech Connect

    Dorner, U.; Smith, B. J.; Lundeen, J. S.; Walmsley, I. A.; Demkowicz-Dobrzanski, R.; Banaszek, K.; Wasilewski, W.

    2009-01-30

    By using a systematic optimization approach, we determine quantum states of light with definite photon number leading to the best possible precision in optical two-mode interferometry. Our treatment takes into account the experimentally relevant situation of photon losses. Our results thus reveal the benchmark for precision in optical interferometry. Although this boundary is generally worse than the Heisenberg limit, we show that the obtained precision beats the standard quantum limit, thus leading to a significant improvement compared to classical interferometers. We furthermore discuss alternative states and strategies to the optimized states which are easier to generate at the cost of only slightly lower precision.

  5. Distributed Optimization System

    DOEpatents

    Hurtado, John E.; Dohrmann, Clark R.; Robinett, III, Rush D.

    2004-11-30

    A search system and method for controlling multiple agents to optimize an objective using distributed sensing and cooperative control. The search agent can be one or more physical agents, such as a robot, and can be software agents for searching cyberspace. The objective can be: chemical sources, temperature sources, radiation sources, light sources, evaders, trespassers, explosive sources, time dependent sources, time independent sources, function surfaces, maximization points, minimization points, and optimal control of a system such as a communication system, an economy, a crane, and a multi-processor computer.

  6. Terascale Optimal PDE Simulations

    SciTech Connect

    David Keyes

    2009-07-28

    The Terascale Optimal PDE Solvers (TOPS) Integrated Software Infrastructure Center (ISIC) was created to develop and implement algorithms and support scientific investigations performed by DOE-sponsored researchers. These simulations often involve the solution of partial differential equations (PDEs) on terascale computers. The TOPS Center researched, developed and deployed an integrated toolkit of open-source, optimal complexity solvers for the nonlinear partial differential equations that arise in many DOE application areas, including fusion, accelerator design, global climate change and reactive chemistry. The algorithms created as part of this project were also designed to reduce current computational bottlenecks by orders of magnitude on terascale computers, enabling scientific simulation on a scale heretofore impossible.

  7. Neural wiring optimization.

    PubMed

    Cherniak, Christopher

    2012-01-01

    Combinatorial network optimization theory concerns minimization of connection costs among interconnected components in systems such as electronic circuits. As an organization principle, similar wiring minimization can be observed at various levels of nervous systems, invertebrate and vertebrate, including primate, from placement of the entire brain in the body down to the subcellular level of neuron arbor geometry. In some cases, the minimization appears either perfect, or as good as can be detected with current methods. One question such best-of-all-possible-brains results raise is: what is the map of such optimization, and does it have a distinct neural domain?

  8. Optimized solar module design

    NASA Technical Reports Server (NTRS)

    Santala, T.; Sabol, R.; Carbajal, B. G.

    1978-01-01

    The minimum cost per unit of power output from flat plate solar modules can most likely be achieved through efficient packaging of higher efficiency solar cells. This paper outlines a module optimization method which is broadly applicable, and illustrates the potential results achievable from a specific high efficiency tandem junction (TJ) cell. A mathematical model is used to assess the impact of various factors influencing the encapsulated cell and packing efficiency. The optimization of the packing efficiency is demonstrated. The effect of encapsulated cell and packing efficiency on the module add-on cost is shown in a nomograph form.

  9. Optimization of dental implantation

    NASA Astrophysics Data System (ADS)

    Dol, Aleksandr V.; Ivanov, Dmitriy V.

    2017-02-01

    Modern dentistry cannot exist without dental implantation. This work is devoted to the study of the "bone-implant" system and to the optimization of dental prosthesis installation. Modern non-invasive methods such as MRI and 3D scanning, as well as numerical calculations and 3D prototyping, allow optimization of all stages of dental prosthetics. An integrated approach to the planning of implant surgery can significantly reduce the risk of complications in the first few days after treatment, and throughout the period of operation of the prosthesis.

  10. Multidisciplinary design and optimization

    NASA Technical Reports Server (NTRS)

    Sobieszczanski-Sobieski, Jaroslaw

    1992-01-01

    Mutual couplings among the mathematical models of physical phenomena and parts of a system such as an aircraft complicate the design process because each contemplated design change may have a far reaching consequence throughout the system. This paper outlines techniques for computing these influences as system design derivatives useful to both judgmental and formal optimization purposes. The techniques facilitate decomposition of the design process into smaller, more manageable tasks and they form a methodology that can easily fit into existing engineering optimizations and incorporate their design tools.
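
    For two coupled subsystems, computing such system design derivatives reduces to solving a small linear system in the coupling Jacobian (the Global Sensitivity Equations); the partial-derivative values below are assumed numbers for illustration.

        import numpy as np

        # Hypothetical coupled subsystems y1 = f1(x, y2), y2 = f2(x, y1).
        # GSE: (I - J) dy/dx = df/dx, with J holding the cross-subsystem
        # partials evaluated at the converged system state.
        J = np.array([[0.0, 0.3],
                      [0.5, 0.0]])       # assumed coupling partials
        dfdx = np.array([1.2, -0.4])     # assumed partials w.r.t. design variable x
        dydx = np.linalg.solve(np.eye(2) - J, dfdx)
        print(dydx)  # system design derivatives for judgmental or formal optimization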

  11. Optimal exploration systems

    NASA Astrophysics Data System (ADS)

    Klesh, Andrew T.

    This dissertation studies optimal exploration, defined as the collection of information about given objects of interest by a mobile agent (the explorer) using imperfect sensors. The key aspects of exploration are kinematics (which determine how the explorer moves in response to steering commands), energetics (which determine how much energy is consumed by motion and maneuvers), informatics (which determine the rate at which information is collected) and estimation (which determines the states of the objects). These aspects are coupled by the steering decisions of the explorer. We seek to improve exploration by finding trade-offs amongst these couplings and the components of exploration: the Mission, the Path and the Agent. A comprehensive model of exploration is presented that, on one hand, accounts for these couplings and, on the other hand, is simple enough to allow analysis. This model is utilized to pose and solve several exploration problems where an objective function is to be minimized. Specific functions to be considered are the mission duration and the total energy. These exploration problems are formulated as optimal control problems, and necessary conditions for optimality are obtained in the form of two-point boundary value problems. An analysis of these problems reveals characteristics of optimal exploration paths. Several regimes are identified for the optimal paths, including the Watchtower, Solar and Drag regimes, and several non-dimensional parameters are derived that determine the appropriate regime of travel. The so-called Power Ratio is shown to predict the qualitative features of the optimal paths, provide a metric to evaluate an aircraft's design and determine an aircraft's capability for flying perpetually. Optimal exploration system drivers are identified that provide perspective as to the importance of these various regimes of flight. A bank-to-turn solar-powered aircraft flying at constant altitude on Mars is used as a specific platform for

  12. Optimization of solid-phase extraction for the liquid chromatography-mass spectrometry analysis of harpagoside, 8-para-coumaroyl harpagide, and harpagide in equine plasma and urine.

    PubMed

    Colas, Cyril; Garcia, Patrice; Popot, Marie-Agnès; Bonnaire, Yves; Bouchonnet, Stéphane

    2008-02-01

    Solid-phase extraction cartridges among those usually used for screening in horse doping analyses are tested to optimize the extraction of harpagoside (HS), harpagide (HG), and 8-para-coumaroyl harpagide (8PCHG) from plasma and urine. Extracts are analyzed by liquid chromatography coupled with multi-step tandem mass spectrometry. The extraction process retained for plasma applies BondElut PPL cartridges and provides extraction recoveries between 91% and 93%, with RSD values between 8 and 13% at 0.5 ng/mL. Two different procedures are needed to extract analytes from urine. HS and 8PCHG are extracted using AbsElut Nexus cartridges, with recoveries of 85% and 77%, respectively (RSD between 7% and 19%). The extraction of HG involves the use of two cartridges: BondElut PPL and BondElut C18 HF, with recovery of 75% and RSD between 14% and 19%. The applicability of the extraction methods is determined on authentic equine plasma and urine samples after harpagophytum or harpagoside administration.

  13. Optimizing Conferencing Freeware

    ERIC Educational Resources Information Center

    Baggaley, Jon; Klaas, Jim; Wark, Norine; Depow, Jim

    2005-01-01

    The increasing range of options provided by two popular conferencing freeware products, "Yahoo Messenger" and "MSN Messenger," are discussed. Each tool contains features designed primarily for entertainment purposes, which can be customized for use in online education. This report provides suggestions for optimizing the educational potential of…

  14. Optimal Ski Jump

    ERIC Educational Resources Information Center

    Rebilas, Krzysztof

    2013-01-01

    Consider a skier who goes down a takeoff ramp, attains a speed "V", and jumps, attempting to land as far as possible down the hill below (Fig. 1). At the moment of takeoff the angle between the skier's velocity and the horizontal is [alpha]. What is the optimal angle [alpha] that makes the jump the longest possible for the fixed magnitude of the…

  15. Optimal Periodic Control Theory.

    DTIC Science & Technology

    1980-08-01

    are control variables. For many aircraft, this energy state space produces a hodograph which is not convex. The physical explanation for this is that... convexity in the hodograph and preserve an "optimal" steady-state cruise, Schultz and Zagalsky [6] revised the energy state model so that altitude becomes a

  16. Optimization in Ecology

    ERIC Educational Resources Information Center

    Cody, Martin L.

    1974-01-01

    Discusses the optimality of natural selection, ways of testing for optimum solutions to problems of time- or energy-allocation in nature, optimum patterns in spatial distribution and diet breadth, and how best to travel over a feeding area so that food intake is maximized. (JR)

  17. Optimal ciliary beating patterns

    NASA Astrophysics Data System (ADS)

    Vilfan, Andrej; Osterman, Natan

    2011-11-01

    We introduce a measure for the energetic efficiency of single or collective biological cilia. We define the efficiency of a single cilium as Q²/P, where Q is the volume flow rate of the pumped fluid and P is the dissipated power. For ciliary arrays, we define it as (ρQ)²/(ρP), with ρ denoting the surface density of cilia. We then numerically determine the optimal beating patterns according to this criterion. For a single cilium, optimization leads to curly, somewhat counterintuitive patterns. But when looking at a densely ciliated surface, the optimal patterns become remarkably similar to what is observed in microorganisms like Paramecium. The optimal beating pattern then consists of a fast effective stroke and a slow sweeping recovery stroke. Metachronal waves lead to a significantly higher efficiency than synchronous beating. Efficiency also increases with an increasing density of cilia up to the point where crowding becomes a problem. We finally relate the pumping efficiency of cilia to the swimming efficiency of a spherical microorganism and show that the experimentally estimated efficiency of Paramecium is surprisingly close to the theoretically possible optimum.

  18. Optimal Ski Jump

    ERIC Educational Resources Information Center

    Rebilas, Krzysztof

    2013-01-01

    Consider a skier who goes down a takeoff ramp, attains a speed "V", and jumps, attempting to land as far as possible down the hill below (Fig. 1). At the moment of takeoff the angle between the skier's velocity and the horizontal is [alpha]. What is the optimal angle [alpha] that makes the jump the longest possible for the fixed magnitude of the…

  19. Optimizing Computer Technology Integration

    ERIC Educational Resources Information Center

    Dillon-Marable, Elizabeth; Valentine, Thomas

    2006-01-01

    The purpose of this study was to better understand what optimal computer technology integration looks like in adult basic skills education (ABSE). One question guided the research: How is computer technology integration best conceptualized and measured? The study used the Delphi method to map the construct of computer technology integration and…

  20. Is Optimism Real?

    ERIC Educational Resources Information Center

    Simmons, Joseph P.; Massey, Cade

    2012-01-01

    Is optimism real, or are optimistic forecasts just cheap talk? To help answer this question, we investigated whether optimistic predictions persist in the face of large incentives to be accurate. We asked National Football League football fans to predict the winner of a single game. Roughly half (the partisans) predicted a game involving their…

  1. Optimization in Ecology

    ERIC Educational Resources Information Center

    Cody, Martin L.

    1974-01-01

    Discusses the optimality of natural selection, ways of testing for optimum solutions to problems of time- or energy-allocation in nature, optimum patterns in spatial distribution and diet breadth, and how best to travel over a feeding area so that food intake is maximized. (JR)

  2. Numerical-Optimization Program

    NASA Technical Reports Server (NTRS)

    Vanderplaats, Garret N.

    1991-01-01

    Automated Design Synthesis (ADS) computer program is general-purpose numerical-optimization program for design engineering. Provides wide range of options for solution of constrained and unconstrained function minimization problems. Suitable for such applications as minimum-weight design. Written in FORTRAN 77.

  3. Optimal Training Systems STTR

    DTIC Science & Technology

    2005-08-15

    instance-based modeling; Human Performance on the NMD Feedback Task. ...optimal model, and therefore allows us to explore a normative modeling-based tutoring approach. In this task, trainees allocated some number of ground... add-on of refinements based on current research can enhance training. The NMD task is particularly appropriate for this purpose because framing effects

  4. Optimal Facility-Location.

    PubMed

    Goldman, A J

    2006-01-01

    Dr. Christoph Witzgall, the honoree of this Symposium, can count among his many contributions to applied mathematics and mathematical operations research a body of widely-recognized work on the optimal location of facilities. The present paper offers to non-specialists a sketch of that field and its evolution, with emphasis on areas most closely related to Witzgall's research at NBS/NIST.

  5. Fourier Series Optimization Opportunity

    ERIC Educational Resources Information Center

    Winkel, Brian

    2008-01-01

    This note discusses the introduction of Fourier series as an immediate application of optimization of a function of more than one variable. Specifically, it is shown how the study of Fourier series can be motivated to enrich a multivariable calculus class. This is done through discovery learning and use of technology wherein students build the…

  6. Optimization in Cardiovascular Modeling

    NASA Astrophysics Data System (ADS)

    Marsden, Alison L.

    2014-01-01

    Fluid mechanics plays a key role in the development, progression, and treatment of cardiovascular disease. Advances in imaging methods and patient-specific modeling now reveal increasingly detailed information about blood flow patterns in health and disease. Building on these tools, there is now an opportunity to couple blood flow simulation with optimization algorithms to improve the design of surgeries and devices, incorporating more information about the flow physics in the design process to augment current medical knowledge. In doing so, a major challenge is the need for efficient optimization tools that are appropriate for unsteady fluid mechanics problems, particularly for the optimization of complex patient-specific models in the presence of uncertainty. This article reviews the state of the art in optimization tools for virtual surgery, device design, and model parameter identification in cardiovascular flow and mechanobiology applications. In particular, it reviews trade-offs between traditional gradient-based methods and derivative-free approaches, as well as the need to incorporate uncertainties. Key future challenges are outlined, which extend to the incorporation of biological response and the customization of surgeries and devices for individual patients.

  7. Is Optimism Real?

    ERIC Educational Resources Information Center

    Simmons, Joseph P.; Massey, Cade

    2012-01-01

    Is optimism real, or are optimistic forecasts just cheap talk? To help answer this question, we investigated whether optimistic predictions persist in the face of large incentives to be accurate. We asked National Football League football fans to predict the winner of a single game. Roughly half (the partisans) predicted a game involving their…

  8. Optimization of digital designs

    NASA Technical Reports Server (NTRS)

    Whitaker, Sterling R. (Inventor); Miles, Lowell H. (Inventor)

    2009-01-01

    An application specific integrated circuit is optimized by translating a first representation of its digital design to a second representation. The second representation includes multiple syntactic expressions that admit a representation of a higher-order function of base Boolean values. The syntactic expressions are manipulated to form a third representation of the digital design.

  9. Optimization and Discrete Mathematics

    DTIC Science & Technology

    2012-03-06

    Another complex network: the protein-protein interaction map of H. pylori. Find substructures via fractional optimization (S. Butenko, Texas A&M): H. pylori, largest 2-club; yeast, largest 2

  10. Toward Optimal Transport Networks

    NASA Technical Reports Server (NTRS)

    Alexandrov, Natalia; Kincaid, Rex K.; Vargo, Erik P.

    2008-01-01

    Strictly evolutionary approaches to improving the air transport system (a highly complex network of interacting systems) no longer suffice in the face of demand that is projected to double or triple in the near future. Thus evolutionary approaches should be augmented with active design methods. The ability to actively design, optimize and control a system presupposes the existence of predictive modeling and reasonably well-defined functional dependences between the controllable variables of the system and the objective and constraint functions for optimization. Following recent advances in the studies of the effects of network topology structure on dynamics, we investigate the performance of dynamic processes on transport networks as a function of the first nontrivial eigenvalue of the network's Laplacian, which, in turn, is a function of the network's connectivity and modularity. The last two characteristics can be controlled and tuned via optimization. We consider design optimization problem formulations. We have developed a flexible simulation of network topology coupled with flows on the network for use as a platform for computational experiments.
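
    The first nontrivial Laplacian eigenvalue mentioned above (the algebraic connectivity) is straightforward to compute for a candidate topology; the 5-node network below is hypothetical.

        import numpy as np

        # Adjacency matrix of a small hypothetical route network.
        A = np.array([[0, 1, 1, 0, 0],
                      [1, 0, 1, 1, 0],
                      [1, 1, 0, 0, 1],
                      [0, 1, 0, 0, 1],
                      [0, 0, 1, 1, 0]], dtype=float)

        Lap = np.diag(A.sum(axis=1)) - A   # graph Laplacian L = D - A
        eigvals = np.linalg.eigvalsh(Lap)  # ascending; eigvals[0] is ~0
        print(eigvals[1])  # first nontrivial eigenvalue (algebraic connectivity)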

  11. Fourier Series Optimization Opportunity

    ERIC Educational Resources Information Center

    Winkel, Brian

    2008-01-01

    This note discusses the introduction of Fourier series as an immediate application of optimization of a function of more than one variable. Specifically, it is shown how the study of Fourier series can be motivated to enrich a multivariable calculus class. This is done through discovery learning and use of technology wherein students build the…

  12. Orbital-Maneuver-Sequence Optimization

    DTIC Science & Technology

    1985-12-01

    optimization computer program and applied it to the generation of optimal co-orbital attack-maneuver sequences and to the generation of optimal evasions... maneuver-sequence-optimization computer programs can be improved by a general restructuring and streamlining and the addition of various features. It is believed that with further development and systematic testing the programs have potential for real-time generation of optimal maneuver sequences in an

  13. Optimal GENCO bidding strategy

    NASA Astrophysics Data System (ADS)

    Gao, Feng

    Electricity industries worldwide are undergoing a period of profound upheaval. The conventional vertically integrated mechanism is being replaced by a competitive market environment. Generation companies have incentives to apply novel technologies to lower production costs, for example Combined Cycle units. Economic dispatch with Combined Cycle units becomes a non-convex optimization problem, which is difficult if not impossible to solve by conventional methods. Several techniques are proposed here: Mixed Integer Linear Programming, a hybrid method, as well as Evolutionary Algorithms. Evolutionary Algorithms share a common mechanism, stochastic searching per generation. The stochastic property makes evolutionary algorithms robust and adaptive enough to solve a non-convex optimization problem. This research implements GA, EP, and PS algorithms for economic dispatch with Combined Cycle units, and makes a comparison with classical Mixed Integer Linear Programming. The electricity market equilibrium model not only helps the Independent System Operator/Regulator analyze market performance and market power, but also provides Market Participants with the ability to build optimal bidding strategies based on microeconomic analysis. Supply Function Equilibrium (SFE) is attractive compared to traditional models. This research identifies a proper SFE model, which can be applied to a multi-period situation. The equilibrium condition using discrete-time optimal control is then developed for fuel resource constraints. Finally, the research discusses the issues of multiple equilibria and mixed strategies, which are caused by the transmission network. Additionally, an advantage of the proposed model for merchant transmission planning is discussed. A market simulator is a valuable training and evaluation tool to assist sellers, buyers, and regulators to understand market performance and make better decisions. A traditional optimization model may not be enough to consider the distributed
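
    To give a flavor of the mixed-integer formulation, the sketch below solves a two-unit commitment-and-dispatch toy problem with SciPy's milp; the costs and limits are invented, and a real combined-cycle dispatch carries many more constraints (mode transitions, ramping, configurations).

        import numpy as np
        from scipy.optimize import Bounds, LinearConstraint, milp

        # Two units (invented data): marginal cost, no-load cost, pmin, pmax.
        a, f = [20.0, 35.0], [100.0, 40.0]
        pmin, pmax = [50.0, 10.0], [200.0, 80.0]
        D = 120.0  # demand in MW; variables are [p1, p2, u1, u2]

        A = np.array([[1, 1, 0, 0],            # p1 + p2 = D
                      [1, 0, -pmax[0], 0],     # p1 <= pmax1 * u1
                      [0, 1, 0, -pmax[1]],     # p2 <= pmax2 * u2
                      [-1, 0, pmin[0], 0],     # p1 >= pmin1 * u1
                      [0, -1, 0, pmin[1]]])    # p2 >= pmin2 * u2
        lc = LinearConstraint(A, [D] + [-np.inf] * 4, [D] + [0.0] * 4)
        res = milp(c=np.array(a + f), constraints=lc,
                   integrality=np.array([0, 0, 1, 1]),   # u1, u2 are binary
                   bounds=Bounds([0, 0, 0, 0], [pmax[0], pmax[1], 1, 1]))
        print(res.x)  # least-cost commitment and dispatch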

  14. (Too) optimistic about optimism: the belief that optimism improves performance.

    PubMed

    Tenney, Elizabeth R; Logg, Jennifer M; Moore, Don A

    2015-03-01

    A series of experiments investigated why people value optimism and whether they are right to do so. In Experiments 1A and 1B, participants prescribed more optimism for someone implementing decisions than for someone deliberating, indicating that people prescribe optimism selectively, when it can affect performance. Furthermore, participants believed optimism improved outcomes when a person's actions had considerable, rather than little, influence over the outcome (Experiment 2). Experiments 3 and 4 tested the accuracy of this belief; optimism improved persistence, but it did not improve performance as much as participants expected. Experiments 5A and 5B found that participants overestimated the relationship between optimism and performance even when their focus was not on optimism exclusively. In summary, people prescribe optimism when they believe it has the opportunity to improve the chance of success-unfortunately, people may be overly optimistic about just how much optimism can do. PsycINFO Database Record (c) 2015 APA, all rights reserved.

  15. The optimal elastic flagellum

    NASA Astrophysics Data System (ADS)

    Spagnolie, Saverio E.; Lauga, Eric

    2010-03-01

    Motile eukaryotic cells propel themselves in viscous fluids by passing waves of bending deformation down their flagella. An infinitely long flagellum achieves a hydrodynamically optimal low-Reynolds number locomotion when the angle between its local tangent and the swimming direction remains constant along its length. Optimal flagella therefore adopt the shape of a helix in three dimensions (smooth) and that of a sawtooth in two dimensions (nonsmooth). Physically, biological organisms (or engineered microswimmers) must expend internal energy in order to produce the waves of deformation responsible for the motion. Here we propose a physically motivated derivation of the optimal flagellum shape. We determine analytically and numerically the shape of the flagellar wave which leads to the fastest swimming for a given appropriately defined energetic expenditure. Our novel approach is to define an energy which includes not only the work against the surrounding fluid, but also (1) the energy stored elastically in the bending of the flagellum, (2) the energy stored elastically in the internal sliding of the polymeric filaments which are responsible for the generation of the bending waves (microtubules), and (3) the viscous dissipation due to the presence of an internal fluid. This approach regularizes the optimal sawtooth shape for two-dimensional deformation at the expense of a small loss in hydrodynamic efficiency. The optimal waveforms of finite-size flagella are shown to depend on a competition between rotational motions and bending costs, and we observe a surprising bias toward half-integer wave numbers. Their final hydrodynamic efficiencies are above 6%, significantly larger than those of swimming cells, therefore indicating available room for further biological tuning.

  16. An optimal structural design algorithm using optimality criteria

    NASA Technical Reports Server (NTRS)

    Taylor, J. E.; Rossow, M. P.

    1976-01-01

    An algorithm for optimal design is given which incorporates several of the desirable features of both mathematical programming and optimality criteria, while avoiding some of the undesirable features. The algorithm proceeds by approaching the optimal solution through the solutions of an associated set of constrained optimal design problems. The solutions of the constrained problems are recognized at each stage through the application of optimality criteria based on energy concepts. Two examples are described in which the optimal member size and layout of a truss is predicted, given the joint locations and loads.
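
    The flavor of an optimality-criteria iteration can be caricatured with the classic stress-ratio resizing rule, which drives each member toward its allowable stress; the loads and allowable below are hypothetical, and a statically indeterminate truss would be re-analyzed inside the loop.

        import numpy as np

        sigma_allow = 150e6               # allowable stress, Pa (hypothetical)
        forces = np.array([80e3, 45e3])   # member axial forces, N (hypothetical)
        areas = np.ones(2) * 1e-4         # initial cross-sections, m^2

        for _ in range(20):
            stress = forces / areas
            # Stress-ratio resizing: scale each area so the member moves
            # toward its allowable (the fully stressed optimality criterion).
            areas = areas * np.abs(stress) / sigma_allow
            # A redundant truss would be re-analyzed here to update forces.
        print(areas)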

  17. Combinatorial optimization games

    SciTech Connect

    Deng, X.; Ibaraki, Toshihide; Nagamochi, Hiroshi

    1997-06-01

    We introduce a general integer programming formulation for a class of combinatorial optimization games, which immediately allows us to improve the algorithmic result for finding imputations in the core (an important solution concept in cooperative game theory) of the network flow game on simple networks by Kalai and Zemel. An interesting result is a general theorem that the core for this class of games is nonempty if and only if a related linear program has an integer optimal solution. We study the properties for this mathematical condition to hold for several interesting problems, and apply them to resolve algorithmic and complexity issues for their cores along the following lines: decide whether the core is empty; if it is not, find an imputation in the core; given an imputation x, test whether x is in the core. We also explore the properties of totally balanced games in this succinct formulation of cooperative games.
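
    Testing whether the core is empty, as in the first question above, reduces to a linear feasibility problem; the 3-player characteristic function below is hypothetical.

        import numpy as np
        from itertools import combinations
        from scipy.optimize import linprog

        players = (0, 1, 2)
        v = {(0,): 1, (1,): 1, (2,): 1,          # hypothetical coalition values
             (0, 1): 4, (0, 2): 4, (1, 2): 4, (0, 1, 2): 9}

        # x is in the core iff x(N) = v(N) and x(S) >= v(S) for every coalition S.
        A_ub, b_ub = [], []
        for r in (1, 2):
            for S in combinations(players, r):
                A_ub.append([-float(i in S) for i in players])  # -x(S) <= -v(S)
                b_ub.append(-v[S])
        res = linprog(c=[0, 0, 0], A_ub=A_ub, b_ub=b_ub,
                      A_eq=[[1, 1, 1]], b_eq=[v[players]],
                      bounds=[(None, None)] * 3)
        print(res.status == 0, res.x)  # feasible <=> the core is nonempty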

  18. Optimality of Gaussian Discord

    NASA Astrophysics Data System (ADS)

    Pirandola, Stefano; Spedalieri, Gaetana; Braunstein, Samuel L.; Cerf, Nicolas J.; Lloyd, Seth

    2014-10-01

    In this Letter we exploit the recently solved conjecture on the bosonic minimum output entropy to show the optimality of Gaussian discord, so that the computation of quantum discord for bipartite Gaussian states can be restricted to local Gaussian measurements. We prove such optimality for a large family of Gaussian states, including all two-mode squeezed thermal states, which are the most typical Gaussian states realized in experiments. Our family also includes other types of Gaussian states and spans their entire set in a suitable limit where they become Choi matrices of Gaussian channels. As a result, we completely characterize the quantum correlations possessed by some of the most important bosonic states in quantum optics and quantum information.

  19. Cyclone performance and optimization

    SciTech Connect

    Leith, D.

    1990-06-15

    The objectives of this project are: to characterize the gas flow pattern within cyclones, to revise the theory for cyclone performance on the basis of these findings, and to design and test cyclones whose dimensions have been optimized using revised performance theory. This work is important because its successful completion will aid in the technology for combustion of coal in pressurized, fluidized beds. During the past quarter, we have nearly completed modeling work that employs the flow field measurements made during the past six months. In addition, we have begun final work using the results of this project to develop improved design methods for cyclones. This work involves optimization using the Iozia-Leith efficiency model and the Dirgo pressure drop model. This work will be completed this summer. 9 figs.

  20. The Optimal Prediction method

    SciTech Connect

    Burin des Roziers, Thibaut

    1999-08-01

    The purpose of this work is to test and show how well the numerical method called Optimal Prediction works. The method is relatively new and only a few experiments have been made with it. The authors first ran a series of simple tests to see how the method behaves. In order to gain a better understanding of the method, they then reproduced one of the main experiments on Optimal Prediction, done by Kupferman. Once they obtained the same results that Kupferman had, they changed a few parameters to see how dependent the method is on these parameters. In this paper, they present all the tests they made, the results they obtained, and what they concluded about the method. Before discussing the experiments, they explain what the Optimal Prediction method is and how it works; this is done in the first section of the paper.

  1. NEMO Oceanic Model Optimization

    NASA Astrophysics Data System (ADS)

    Epicoco, I.; Mocavero, S.; Murli, A.; Aloisio, G.

    2012-04-01

    NEMO is an oceanic model used by the climate community for stand-alone or coupled experiments. Its parallel implementation, based on MPI, limits the exploitation of emerging computational infrastructures at peta- and exascale, due to the weight of communications. As a case study we considered the MFS configuration developed at INGV, with a resolution of 1/16° tailored to the Mediterranean Basin. The work is focused on the analysis of the code on the MareNostrum cluster and on the optimization of critical routines. The first performance analysis of the model aimed at establishing how much the computational performance is influenced by the GPFS file system or the local disks, and which is the best domain decomposition. The results highlight that the exploitation of local disks can reduce the wall clock time by up to 40% and that the best performance is achieved with a 2D decomposition when the local domain has a square shape. A deeper performance analysis highlights that the obc_rad, dyn_spg and tra_adv routines are the most time-consuming. The obc_rad routine implements the evaluation of the open boundaries, and it was the first routine to be optimized. The communication pattern implemented in the obc_rad routine has been redesigned. Before the introduction of the optimizations all processes were involved in the communication, but only the processes on the boundaries have actual data to be exchanged, and only the data on the boundaries must be exchanged. Moreover, the data along the vertical levels are "packed" and sent with only one MPI_send invocation. The overall efficiency increases compared with the original version, as does the parallel speed-up. The execution time was reduced by about 33.81%. The second phase of optimization involved the SOR solver routine, implementing the Red-Black Successive-Over-Relaxation method. The high frequency of data exchange among processes represents the largest part of the overall communication time. The number of communication is
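
    The packing optimization described above, exchanging all vertical levels of a boundary strip in one message rather than level by level, looks roughly like this mpi4py sketch; the array shapes and the two-rank setup are illustrative, not NEMO's actual decomposition.

        # Run with: mpirun -n 2 python pack_exchange.py
        import numpy as np
        from mpi4py import MPI

        comm = MPI.COMM_WORLD
        rank = comm.Get_rank()
        peer = 1 - rank                 # two-rank toy setup
        nlev, nbnd = 50, 128            # vertical levels x boundary points

        field = np.full((nlev, nbnd), float(rank))
        recv = np.empty_like(field)
        # Pack every vertical level of the boundary strip into a single
        # contiguous buffer and exchange it in one message, instead of
        # issuing one send per level.
        comm.Sendrecv(np.ascontiguousarray(field), dest=peer,
                      recvbuf=recv, source=peer)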

  2. Heliostat cost optimization study

    NASA Astrophysics Data System (ADS)

    von Reeken, Finn; Weinrebe, Gerhard; Keck, Thomas; Balz, Markus

    2016-05-01

    This paper presents a methodology for a heliostat cost optimization study. First different variants of small, medium sized and large heliostats are designed. Then the respective costs, tracking and optical quality are determined. For the calculation of optical quality a structural model of the heliostat is programmed and analyzed using finite element software. The costs are determined based on inquiries and from experience with similar structures. Eventually the levelised electricity costs for a reference power tower plant are calculated. Before each annual simulation run the heliostat field is optimized. Calculated LCOEs are then used to identify the most suitable option(s). Finally, the conclusions and findings of this extensive cost study are used to define the concept of a new cost-efficient heliostat called `Stellio'.

  3. Optimal Behavioral Hierarchy

    PubMed Central

    Córdova, Natalia; Yee, Debbie; Barto, Andrew G.; Niv, Yael; Botvinick, Matthew M.

    2014-01-01

    Human behavior has long been recognized to display hierarchical structure: actions fit together into subtasks, which cohere into extended goal-directed activities. Arranging actions hierarchically has well established benefits, allowing behaviors to be represented efficiently by the brain, and allowing solutions to new tasks to be discovered easily. However, these payoffs depend on the particular way in which actions are organized into a hierarchy, the specific way in which tasks are carved up into subtasks. We provide a mathematical account for what makes some hierarchies better than others, an account that allows an optimal hierarchy to be identified for any set of tasks. We then present results from four behavioral experiments, suggesting that human learners spontaneously discover optimal action hierarchies. PMID:25122479

  4. Optimization of Anguilliform Swimming

    NASA Astrophysics Data System (ADS)

    Kern, Stefan; Koumoutsakos, Petros

    2006-03-01

    Anguilliform swimming is investigated by 3D computer simulations coupling the dynamics of an undulating eel-like body with the surrounding viscous fluid flow. The body is self-propelled and, in contrast to previous computational studies of swimming, the motion pattern is not prescribed a priori but obtained by an evolutionary optimization procedure. Two different objective functions are used to characterize swimming efficiency and maximum swimming velocity with limited input power. The found optimal motion patterns represent two distinct swimming modes corresponding to migration, and burst swimming, respectively. The results support the hypothesis from observations of real animals that eels can modify their motion pattern generating wakes that reflect their propulsive mode. Unsteady drag and thrust production of the swimming body are thoroughly analyzed by recording the instantaneous fluid forces acting on partitions of the body surface.

  5. EAF Management Optimization

    NASA Astrophysics Data System (ADS)

    Costoiu, M.; Ioana, A.; Semenescu, A.; Marcu, D.

    2016-11-01

    The article presents the main advantages of the electric arc furnace (EAF): it makes a great contribution to reintroducing significant quantities of reusable metallic materials into the economic circuit, it constitutes an important part of Primary Materials and Energy Recovery (PMER), it offers good productivity and a good quality/price ratio, and it allows the development of a wide variety of classes and types of steels, including special and high-alloy steels. The paper also presents some important developments of the electric arc furnace: the vacuum electric arc furnace and artificial-intelligence expert systems for pollution control in steelworks. Another important aspect presented in the article is an original block diagram for optimizing the EAF management system. This scheme is based on an original objective function (criterion function) represented by the price/quality ratio. The article also presents an original block diagram for optimizing the control system of the EAF. Many principles were used in designing this concept of an EAF management system.

  6. Optimal Electric Utility Expansion

    SciTech Connect

    1989-10-10

    SAGE-WASP is designed to find the optimal generation expansion policy for an electrical utility system. New units can be automatically selected from a user-supplied list of expansion candidates which can include hydroelectric and pumped storage projects. The existing system is modeled. The calculational procedure takes into account user restrictions to limit generation configurations to an area of economic interest. The optimization program reports whether the restrictions acted as a constraint on the solution. All expansion configurations considered are required to pass a user supplied reliability criterion. The discount rate and escalation rate are treated separately for each expansion candidate and for each fuel type. All expenditures are separated into local and foreign accounts, and a weighting factor can be applied to foreign expenditures.

  7. Topology optimized permanent magnet systems

    NASA Astrophysics Data System (ADS)

    Bjørk, R.; Bahl, C. R. H.; Insinga, A. R.

    2017-09-01

    Topology optimization of permanent magnet systems consisting of permanent magnets, high permeability iron and air is presented. An implementation of topology optimization for magnetostatics is discussed and three examples are considered. The Halbach cylinder is topology optimized with iron and an increase of 15% in magnetic efficiency is shown. A topology optimized structure to concentrate a homogeneous field is shown to increase the magnitude of the field by 111%. Finally, a permanent magnet with alternating high and low field regions is topology optimized and a Λcool figure of merit of 0.472 is reached, which is an increase of 100% compared to a previous optimized design.

  8. Trajectory Optimization: OTIS 4

    NASA Technical Reports Server (NTRS)

    Riehl, John P.; Sjauw, Waldy K.; Falck, Robert D.; Paris, Stephen W.

    2010-01-01

    The latest release of the Optimal Trajectories by Implicit Simulation (OTIS4) allows users to simulate and optimize aerospace vehicle trajectories. With OTIS4, one can seamlessly generate optimal trajectories and parametric vehicle designs simultaneously. New features also allow OTIS4 to solve non-aerospace continuous-time optimal control problems. The inputs and outputs of OTIS4 have been updated extensively from previous versions. Inputs now make use of object-oriented constructs, including one called a metastring. Metastrings use a greatly improved calculator and common nomenclature to reduce the user's workload. They allow for more flexibility in specifying vehicle physical models, boundary conditions, and path constraints. The OTIS4 calculator supports common mathematical functions, Boolean operations, and conditional statements. This allows users to define their own variables for use as outputs, constraints, or objective functions. The user-defined outputs can directly interface with other programs, such as spreadsheets, plotting packages, and visualization programs. Internally, OTIS4 has more explicit and implicit integration procedures, including high-order collocation methods, the pseudo-spectral method, and several variations of multiple shooting. Users may switch easily between the various methods. Several unique numerical techniques, such as automated variable scaling and implicit integration grid refinement, support the integration methods. OTIS4 is also significantly more user-friendly than previous versions. The installation process is nearly identical on various platforms, including Microsoft Windows, Apple OS X, and Linux operating systems. Cross-platform scripts also help make the execution of OTIS and post-processing of data easier. OTIS4 is supplied free by NASA and is subject to ITAR (International Traffic in Arms Regulations) restrictions. Users must have a Fortran compiler, and a Python interpreter is highly recommended.

  9. Optimized lithium oxyhalide cells

    NASA Astrophysics Data System (ADS)

    Kilroy, W. P.; Schlaikjer, C.; Polsonetti, P.; Jones, M.

    1993-04-01

    Lithium thionyl chloride cells were optimized with respect to electrolyte and carbon cathode composition. Wound 'C-size' cells with various mixtures of Chevron acetylene black with Ketjenblack EC-300J and containing various concentrations of LiAlCl4 and derivatives, LiGaCl4, and mixtures of SOCl2 and SO2Cl2 were evaluated as a function of discharge rate, temperature, and storage condition.

  10. Optimal Natural Frames

    DTIC Science & Technology

    2010-04-01

    reduced Hamiltonian (4.5) is conserved, as is the Casimir c = (1/2)(μ₁² + μ₂² + μ₃²) (4.9). Conservation of h and c implies that 2(c − h) = μ₃² is... and the two Casimir functions c₁ = (1/2)(μ₄² + μ₅² + μ₆²), c₂ = μ₁μ₆ + μ₂μ₅ + μ₃μ₄ (4.19). As in the optimal control problem on SO(3

  11. HOMER® Micropower Optimization Model

    SciTech Connect

    Lilienthal, P.

    2005-01-01

    NREL has developed the HOMER micropower optimization model. The model can analyze all of the available small power technologies individually and in hybrid configurations to identify least-cost solutions to energy requirements. This capability is valuable to a diverse set of energy professionals and applications. NREL has actively supported its growing user base and developed training programs around the model. These activities are helping to grow the global market for solar technologies.
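
    HOMER's core operation is a search over candidate component sizings, ranking each feasible hybrid configuration by lifetime cost. A toy sketch of that pattern in Python (illustrative costs and a one-day energy balance; hypothetical numbers, not HOMER's dispatch or economics):

      # Least-cost hybrid configuration search in the spirit of HOMER's
      # enumeration. All prices, sizes, and the energy balance are toy
      # assumptions for illustration.
      import itertools

      load_kwh_per_day = 10.0
      sun_hours = 5.0                          # equivalent full-sun hours/day
      pv_sizes = [0, 1, 2, 3, 4]               # kW
      gen_sizes = [0, 1, 2]                    # kW, may run up to 24 h/day
      cost = {"pv_per_kw": 700, "gen_per_kw": 400, "fuel_per_kwh": 0.30}
      years, days = 20, 365

      best = None
      for pv, gen in itertools.product(pv_sizes, gen_sizes):
          pv_kwh = pv * sun_hours                       # daily PV energy
          deficit = max(load_kwh_per_day - pv_kwh, 0.0)
          if deficit > gen * 24:                        # generator can't cover gap
              continue
          capital = pv * cost["pv_per_kw"] + gen * cost["gen_per_kw"]
          fuel = deficit * cost["fuel_per_kwh"] * days * years
          total = capital + fuel
          if best is None or total < best[0]:
              best = (total, pv, gen)

      print("least-cost config: %d kW PV + %d kW gen, lifetime cost $%.0f"
            % (best[1], best[2], best[0]))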

  12. Hydrodynamic Design Optimization Tool

    DTIC Science & Technology

    2011-08-01

    appreciated. The authors would also like to thank David Walden and Francis Noblesse of Code 50 for being instrumental in defining this project, Wesley ... and efficiently during the early stage of the design process. The Computational Fluid Dynamics (CFD) group at George Mason University has an ... specific design constraints. In order to apply a CFD-based tool to the hydrodynamic design optimization of ship hull forms, an initial hull form is

  13. Force Method Optimization.

    DTIC Science & Technology

    1980-02-01

    The resulting problem is non-linear, but the use of a linear programming stage is effective in ... programming techniques reached what was effectively a computational stalemate, the development of optimality criteria methods in the early 70s appeared to ... constraints. In addition, the incorporation of stress and fabricational constraints is effectively based upon the FSD (fully stressed design) method. Work has been carried on by a
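
    The FSD method this fragment refers to resizes each member by its stress ratio until the structure is fully stressed. A minimal sketch on a toy two-bar parallel structure (illustrative numbers, not the report's test problems); note how the understressed member is driven down to minimum gauge:

      # Fully stressed design (FSD) resizing: scale each member area by its
      # stress ratio, with a minimum-gauge bound, until areas stop changing.
      # Toy structure: two parallel elastic bars of different lengths sharing
      # an axial load. Material and load values are illustrative assumptions.
      E = 200e9                     # Young's modulus, Pa
      P = 1.0e5                     # applied load, N
      sigma_a = 250e6               # allowable stress, Pa
      A_min = 1e-6                  # minimum gauge area, m^2
      L = [1.0, 2.0]                # bar lengths, m
      A = [1e-4, 1e-4]              # starting areas, m^2

      for _ in range(100):
          k = [E * A[i] / L[i] for i in range(2)]       # axial stiffnesses
          delta = P / sum(k)                            # common end displacement
          sigma = [E * delta / L[i] for i in range(2)]  # member stresses
          A_new = [max(A[i] * sigma[i] / sigma_a, A_min) for i in range(2)]
          if max(abs(A_new[i] - A[i]) / A[i] for i in range(2)) < 1e-10:
              break
          A = A_new                                     # FSD update: a <- a*sigma/sigma_a

      print("areas (m^2):", A)
      print("stress ratios:", [s / sigma_a for s in sigma])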

  14. Optimal Centroid Position Estimation

    SciTech Connect

    Candy, J V; McClay, W A; Awwal, A S; Ferguson, S W

    2004-07-23

    The alignment of high-energy laser beams for potential fusion experiments demands high precision and accuracy from the underlying positioning algorithms. This paper discusses the feasibility of employing online optimal position estimators in the form of model-based processors to achieve the desired results. Here we discuss the modeling, development, implementation and processing of model-based processors applied to both simulated and actual beam-line data.
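
    The "model-based processors" here are, in essence, recursive state estimators; a scalar Kalman filter is the simplest instance. A minimal sketch for a toy 1-D centroid that drifts as a random walk and is observed through noisy measurements (illustrative noise levels, not the paper's beam-line model):

      # Scalar Kalman filter estimating a drifting beam centroid from noisy
      # position measurements. Process and measurement variances are toy values.
      import numpy as np

      rng = np.random.default_rng(1)
      n, q, r = 200, 1e-4, 1e-2       # steps, process var, measurement var
      truth = np.cumsum(rng.normal(0, np.sqrt(q), n))   # drifting centroid
      meas = truth + rng.normal(0, np.sqrt(r), n)       # noisy observations

      x, P = 0.0, 1.0                 # state estimate and its variance
      est = np.empty(n)
      for k in range(n):
          P += q                      # predict: random-walk model inflates variance
          K = P / (P + r)             # Kalman gain
          x += K * (meas[k] - x)      # update with the measurement innovation
          P *= (1 - K)
          est[k] = x

      print("raw meas RMSE:", np.sqrt(np.mean((meas - truth)**2)))
      print("filtered RMSE:", np.sqrt(np.mean((est - truth)**2)))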

  15. Optimal Facility-Location

    PubMed Central

    Goldman, A. J.

    2006-01-01

    Dr. Christoph Witzgall, the honoree of this Symposium, can count among his many contributions to applied mathematics and mathematical operations research a body of widely-recognized work on the optimal location of facilities. The present paper offers to non-specialists a sketch of that field and its evolution, with emphasis on areas most closely related to Witzgall’s research at NBS/NIST. PMID:27274920
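
    A classic problem from this field is the 1-median (Fermat-Weber) location: place one facility to minimize total weighted distance to customer sites. A textbook sketch using Weiszfeld's iteration (a generic illustration, not drawn from Witzgall's papers):

      # Weiszfeld's iteration for the 1-median facility location problem.
      # Customer sites and weights are randomly generated toy data.
      import numpy as np

      rng = np.random.default_rng(2)
      pts = rng.uniform(0, 10, size=(30, 2))     # customer sites
      w = np.ones(len(pts))                      # unit demand weights

      x = pts.mean(axis=0)                       # start at the centroid
      for _ in range(200):
          d = np.linalg.norm(pts - x, axis=1)
          d = np.maximum(d, 1e-12)               # guard against division by zero
          x_new = (w / d) @ pts / np.sum(w / d)  # Weiszfeld update
          if np.linalg.norm(x_new - x) < 1e-9:
              break
          x = x_new

      print("optimal facility location:", x)
      print("total weighted distance:", np.sum(w * np.linalg.norm(pts - x, axis=1)))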

  16. DARPA DICE Manufacturing Optimization

    DTIC Science & Technology

    1994-01-01

    concurrently in the product and process domains. The system will support Design for Manufacturing and Assembly (DFMA) with a set of tools to model manufacturing processes and manage tradeoffs across ... Glossary: DFMA, Design for Manufacturing and Assembly; DICE, DARPA Initiative in Concurrent Engineering; MO, Manufacturing Optimization; MSD, Missile Systems Division

  17. Singularity in structural optimization

    NASA Technical Reports Server (NTRS)

    Patnaik, S. N.; Guptill, J. D.; Berke, L.

    1993-01-01

    The conditions under which global and local singularities may arise in structural optimization are examined. Examples of these singularities are presented, and a framework is given within which the singularities can be recognized. It is shown, in particular, that singularities can be identified through the analysis of stress-displacement relations together with compatibility conditions or the displacement-stress relations derived by the integrated force method of structural analysis. Methods of eliminating the effects of singularities are suggested and illustrated numerically.

  18. Center for Parallel Optimization

    DTIC Science & Technology

    1993-09-30

    34, University of Wisconsin Computer Sciences Technical Report #998, 1991, to appear, Linear Algebra and Its Applications. 29. K.P. Bennett & O.L. ... Robust linear programming discrimination of two linearly inseparable sets, Optimization Methods and Software 1, 1992, 23-34. 4. M.C. Ferris and O.L. ... variational inequality problems. Linear Algebra and Its Applications 174, 1992, 153-164. 9. O.L. Mangasarian and R.R. Meyer, Proceedings of the
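
    The Bennett & Mangasarian entry cited above formulates discrimination of two linearly inseparable point sets as a linear program minimizing the average violation of each class's side of a separating plane. A sketch of that formulation on toy data (the LP follows the cited paper; the code and data are our illustration):

      # Robust linear programming (RLP) discrimination: find w, gamma so that
      # A w >= gamma + 1 - y and B w <= gamma - 1 + z, minimizing the average
      # per-point violations y/m + z/k. Toy 2-D Gaussian data.
      import numpy as np
      from scipy.optimize import linprog

      rng = np.random.default_rng(3)
      A = rng.normal([0, 0], 1.0, size=(40, 2))     # class A points
      B = rng.normal([3, 3], 1.0, size=(40, 2))     # class B points
      m, k, d = len(A), len(B), 2

      # variables: [w (d), gamma (1), y (m), z (k)]
      c = np.concatenate([np.zeros(d + 1), np.full(m, 1/m), np.full(k, 1/k)])

      # A w - gamma >= 1 - y   ->  -A w + gamma - y <= -1
      # B w - gamma <= -1 + z  ->   B w - gamma - z <= -1
      ub1 = np.hstack([-A, np.ones((m, 1)), -np.eye(m), np.zeros((m, k))])
      ub2 = np.hstack([B, -np.ones((k, 1)), np.zeros((k, m)), -np.eye(k)])
      A_ub = np.vstack([ub1, ub2])
      b_ub = -np.ones(m + k)
      bounds = [(None, None)] * (d + 1) + [(0, None)] * (m + k)

      res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
      w, gamma = res.x[:d], res.x[d]
      print("separating plane: w =", w, " gamma =", gamma)
      print("average violation:", res.fun)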

  19. Fault Tolerant Optimal Control.

    DTIC Science & Technology

    1982-08-01

    since the cost to be minimized in (D.2.3) increases with $x_k$ (for fixed $x_{s_k}$). When we have ... 22, pp. 236-239. 69. D.D. Sworder and L.L. Choi (1976): Stationary Cost Densities for Optimally Controlled Stochastic Systems, IEEE Trans. Automatic

  20. Optimize acid gas removal

    SciTech Connect

    Nicholas, D.M.; Wilkins, J.T.

    1983-09-01

    Innovative design of physical solvent plants for acid gas removal can materially reduce both installation and operating costs. A review of the design considerations for one physical solvent process (Selexol) points to numerous arrangements with potential for improvement. These are evaluated for a specific case in four combinations that identify an optimum for the case in question but, more importantly, illustrate a mechanism for carrying out such optimization elsewhere.