Sample records for length minimization problem

  1. On the Minimal Length Uncertainty Relation and the Foundations of String Theory

    DOE PAGES

    Chang, Lay Nam; Lewis, Zachary; Minic, Djordje; ...

    2011-01-01

    We review our work on the minimal length uncertainty relation as suggested by perturbative string theory. We discuss simple phenomenological implications of the minimal length uncertainty relation and then argue that the combination of the principles of quantum theory and general relativity allows for a dynamical energy-momentum space. We discuss the implications of this for the problem of vacuum energy and the foundations of nonperturbative string theory.

  2. Combinatorial algorithms for design of DNA arrays.

    PubMed

    Hannenhalli, Sridhar; Hubbell, Earl; Lipshutz, Robert; Pevzner, Pavel A

    2002-01-01

    Optimal design of DNA arrays requires the development of algorithms with two-fold goals: reducing the effects caused by unintended illumination (border length minimization problem) and reducing the complexity of masks (mask decomposition problem). We describe algorithms that reduce the number of rectangles in mask decomposition by 20-30% as compared to a standard array design, under the assumption that the arrangement of oligonucleotides on the array is fixed. These algorithms produce provably optimal solutions for all studied real instances of array design. We also address the difficult problem of finding an arrangement that minimizes the border length, and propose a new idea, threading, that significantly reduces the border length as compared to standard designs.
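    The border length objective above can be made concrete with a small sketch. It assumes probes are modeled as fixed-length binary synthesis schedules and that the border cost between adjacent cells is their Hamming distance (a simplification of real array design; the probe set and grid size are illustrative):

```python
import random
from itertools import product

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def border_length(grid):
    # Total border length: Hamming distance summed over all
    # horizontally and vertically adjacent probe pairs.
    rows, cols = len(grid), len(grid[0])
    total = 0
    for r in range(rows):
        for c in range(cols):
            if c + 1 < cols:
                total += hamming(grid[r][c], grid[r][c + 1])
            if r + 1 < rows:
                total += hamming(grid[r][c], grid[r + 1][c])
    return total

def as_grid(probes, rows, cols):
    return [probes[r * cols:(r + 1) * cols] for r in range(rows)]

# Toy instance: 64 "probes" modeled as 6-bit synthesis schedules.
probes = list(product((0, 1), repeat=6))
sorted_cost = border_length(as_grid(sorted(probes), 8, 8))
random.seed(1)
shuffled = probes[:]
random.shuffle(shuffled)
random_cost = border_length(as_grid(shuffled, 8, 8))
print(sorted_cost, random_cost)  # the structured ordering beats the random one
```

    Under these assumptions, placing similar probes next to each other already cuts the border length well below a random arrangement; the threading idea in the record refines how a good one-dimensional ordering is wound onto the two-dimensional grid.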

  3. Energy levels of one-dimensional systems satisfying the minimal length uncertainty relation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bernardo, Reginald Christian S., E-mail: rcbernardo@nip.upd.edu.ph; Esguerra, Jose Perico H., E-mail: jesguerra@nip.upd.edu.ph

    2016-10-15

    The standard approach to calculating the energy levels for quantum systems satisfying the minimal length uncertainty relation is to solve an eigenvalue problem involving a fourth- or higher-order differential equation in quasiposition space. It is shown that the problem can be reformulated so that the energy levels of these systems can be obtained by solving only a second-order quasiposition eigenvalue equation. Through this formulation the energy levels are calculated for the following potentials: particle in a box, harmonic oscillator, Pöschl–Teller well, Gaussian well, and double-Gaussian well. For the particle in a box, the second-order quasiposition eigenvalue equation is a second-order differential equation with constant coefficients. For the harmonic oscillator, Pöschl–Teller well, Gaussian well, and double-Gaussian well, a method that involves using Wronskians has been used to solve the second-order quasiposition eigenvalue equation. It is observed for all of these quantum systems that the introduction of a nonzero minimal length uncertainty induces a positive shift in the energy levels. It is shown that the calculation of energy levels in systems satisfying the minimal length uncertainty relation is not limited to a small number of problems like particle in a box and the harmonic oscillator but can be extended to a wider class of problems involving potentials such as the Pöschl–Teller and Gaussian wells.

  4. Minimizing distortion and internal forces in truss structures by simulated annealing

    NASA Technical Reports Server (NTRS)

    Kincaid, Rex K.

    1989-01-01

    Inaccuracies in the length of members and the diameters of joints of large truss reflector backup structures may produce unacceptable levels of surface distortion and member forces. However, if the member lengths and joint diameters can be measured accurately, it is possible to configure the members and joints so that the root-mean-square (rms) surface error and/or rms member forces are minimized. Following Greene and Haftka (1989), it is assumed that the force vector f is linearly proportional to the member length errors e(sub M) of dimension NMEMB (the number of members) and joint errors e(sub J) of dimension NJOINT (the number of joints), and that the best-fit displacement vector d is a linear function of f. Let NNODES denote the number of positions on the surface of the truss where error influences are measured. The solution of the problem is discussed. To classify this problem, it was compared to a similar combinatorial optimization problem. In particular, when only the member length errors are considered, minimizing d(sup 2)(sub rms) is equivalent to the quadratic assignment problem, a well-known NP-complete problem in the operations research literature. Hence minimizing d(sup 2)(sub rms) is also an NP-complete problem. The focus of the research is the development of a simulated annealing algorithm to reduce d(sup 2)(sub rms). The appeal of this technique lies in its recent success on a variety of NP-complete combinatorial optimization problems, including the quadratic assignment problem. A physical analogy for simulated annealing is the way liquids freeze and crystallize. All computational experiments were done on a MicroVAX. The two-interchange heuristic is very fast but produces widely varying results. The two-and-three-interchange heuristic provides less variability in the final objective function values but runs much more slowly. Simulated annealing produced the best objective function values for every starting configuration and was faster than the two-and-three-interchange heuristic.
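    The reduction to the quadratic assignment problem suggests a compact sketch. The one below runs simulated annealing with two-interchange moves on a generic QAP instance; the flow and distance matrices are illustrative stand-ins, not the truss influence matrices of the paper:

```python
import math
import random

def qap_cost(perm, flow, dist):
    # Quadratic assignment objective: sum of flow[i][j] * dist[perm[i]][perm[j]].
    n = len(perm)
    return sum(flow[i][j] * dist[perm[i]][perm[j]]
               for i in range(n) for j in range(n))

def anneal(flow, dist, t0=10.0, cooling=0.995, steps=20000, seed=0):
    rng = random.Random(seed)
    n = len(flow)
    perm = list(range(n))
    rng.shuffle(perm)
    cost = qap_cost(perm, flow, dist)
    best, best_cost = perm[:], cost
    t = t0
    for _ in range(steps):
        i, j = rng.sample(range(n), 2)           # two-interchange move
        perm[i], perm[j] = perm[j], perm[i]
        new = qap_cost(perm, flow, dist)
        # Metropolis rule: always accept improvements, sometimes accept worse.
        if new < cost or rng.random() < math.exp((cost - new) / t):
            cost = new
            if cost < best_cost:
                best, best_cost = perm[:], cost
        else:
            perm[i], perm[j] = perm[j], perm[i]  # undo the rejected swap
        t *= cooling                              # geometric cooling schedule
    return best, best_cost

# Illustrative instance: random "flows" between 6 items on a line of locations.
rng = random.Random(42)
n = 6
flow = [[0 if i == j else rng.randint(1, 9) for j in range(n)] for i in range(n)]
dist = [[abs(i - j) for j in range(n)] for i in range(n)]
best, best_cost = anneal(flow, dist)
print(best, best_cost)
```

    Geometric cooling with the Metropolis acceptance rule lets the search escape local minima that defeat a pure two-interchange descent, which matches the record's comparison of the heuristics.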

  5. Asymptotically optimum multialternative sequential procedures for discernment of processes minimizing average length of observations

    NASA Astrophysics Data System (ADS)

    Fishman, M. M.

    1985-01-01

    The problem of multialternative sequential discernment of processes is formulated in terms of conditionally optimum procedures minimizing the average length of observations, without any probabilistic assumptions about any one occurring process, rather than in terms of Bayes procedures minimizing the average risk. The problem is to find the procedure that will transform inequalities into equalities. The problem is formulated for various models of signal observation and data processing: (1) discernment of signals from background interference by a multichannel system; (2) discernment of pulse sequences with unknown time delay; (3) discernment of harmonic signals with unknown frequency. An asymptotically optimum sequential procedure is constructed which compares the statistics of the likelihood ratio with the mean-weighted likelihood ratio and estimates the upper bound for conditional average lengths of observations. This procedure is shown to remain valid as the upper bound for the probability of erroneous partial solutions decreases approaching zero and the number of hypotheses increases approaching infinity. It also remains valid under certain special constraints on the probability such as a threshold. A comparison with a fixed-length procedure reveals that this sequential procedure decreases the length of observations to one quarter, on the average, when the probability of erroneous partial solutions is low.

  6. Causal Set Approach to a Minimal Invariant Length

    NASA Astrophysics Data System (ADS)

    Raut, Usha

    2007-04-01

    Any attempt to quantize gravity would necessarily introduce a minimal observable length scale of the order of the Planck length. This conclusion is based on several different studies and thought experiments and appears to be an inescapable feature of all quantum gravity theories, irrespective of the method used to quantize gravity. Over the last few years there has been growing concern that such a minimal length might lead to a contradiction with the basic postulates of special relativity, in particular the Lorentz-Fitzgerald contraction. A few years ago, Rovelli et al. attempted to reconcile an invariant minimal length with Special Relativity, using the framework of loop quantum gravity. However, the inherently canonical formalism of the loop quantum approach is plagued by a variety of problems, many brought on by the separation of space and time coordinates. In this paper we use a completely different approach. Using the framework of the causal set paradigm, along with a statistical measure of closeness between Lorentzian manifolds, we re-examine the issue of introducing a minimal observable length that is not at odds with the postulates of Special Relativity.

  7. A two-stage path planning approach for multiple car-like robots based on PH curves and a modified harmony search algorithm

    NASA Astrophysics Data System (ADS)

    Zeng, Wenhui; Yi, Jin; Rao, Xiao; Zheng, Yun

    2017-11-01

    In this article, collision-avoidance path planning for multiple car-like robots with variable motion is formulated as a two-stage optimization problem minimizing both the total length of all paths and the task's completion time. Accordingly, a new approach based on Pythagorean Hodograph (PH) curves and a Modified Harmony Search algorithm is proposed to solve the two-stage path-planning problem subject to kinematic constraints such as velocity, acceleration, and minimum turning radius. First, a method of path planning based on PH curves for a single robot is proposed. Second, a mathematical model of the two-stage path-planning problem for multiple car-like robots with variable motion subject to kinematic constraints is constructed, in which the first stage minimizes the total length of all paths and the second stage minimizes the task's completion time. Finally, a modified harmony search algorithm is applied to solve the two-stage optimization problem. A set of experiments demonstrates the effectiveness of the proposed approach.

  8. Minimal Solutions to the Box Problem

    ERIC Educational Resources Information Center

    Chuang, Jer-Chin

    2009-01-01

    The "box problem" from introductory calculus seeks to maximize the volume of a tray formed by folding a strictly rectangular sheet from which identical squares have been cut from each corner. In posing such questions, one would like to choose integral side-lengths for the sheet so that the excised squares have rational or integral side-length.…

  9. Minimum energy information fusion in sensor networks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chapline, G

    1999-05-11

    In this paper we consider how to organize the sharing of information in a distributed network of sensors and data processors so as to provide explanations for sensor readings with minimal expenditure of energy. We point out that the Minimum Description Length principle provides an approach to information fusion that is more naturally suited to energy minimization than traditional Bayesian approaches. In addition, we show that for networks consisting of a large number of identical sensors, Kohonen self-organization provides an exact solution to the problem of combining the sensor outputs into minimal description length explanations.

  10. Minimizing distortion and internal forces in truss structures by simulated annealing

    NASA Technical Reports Server (NTRS)

    Kincaid, Rex K.; Padula, Sharon L.

    1990-01-01

    Inaccuracies in the length of members and the diameters of joints of large space structures may produce unacceptable levels of surface distortion and internal forces. Here, two discrete optimization problems are formulated, one to minimize surface distortion (DSQRMS) and the other to minimize internal forces (FSQRMS). Both of these problems are based on the influence matrices generated by a small-deformation linear analysis. Good solutions are obtained for DSQRMS and FSQRMS through the use of a simulated annealing heuristic.

  11. Exact solution for the optimal neuronal layout problem.

    PubMed

    Chklovskii, Dmitri B

    2004-10-01

    Evolution perfected brain design by maximizing its functionality while minimizing the costs associated with building and maintaining it. The assumption that brain functionality is specified by neuronal connectivity, implemented by costly biological wiring, leads to the following optimal design problem: for a given neuronal connectivity, find a spatial layout of neurons that minimizes the wiring cost. Unfortunately, this problem is difficult to solve because the number of possible layouts is often astronomically large. We argue that the wiring cost may scale as wire length squared, reducing the optimal layout problem to a constrained minimization of a quadratic form. For biologically plausible constraints, this problem has exact analytical solutions, which give reasonable approximations to actual layouts in the brain. These solutions make the inverse problem of inferring neuronal connectivity from neuronal layout more tractable.

  12. On the heteroclinic connection problem for multi-well gradient systems

    NASA Astrophysics Data System (ADS)

    Zuniga, Andres; Sternberg, Peter

    2016-10-01

    We revisit the existence problem of heteroclinic connections in R^N associated with Hamiltonian systems involving potentials W : R^N → R having several global minima. Under very mild assumptions on W we present a simple variational approach to first find geodesics minimizing the length of curves joining any two of the potential wells, where length is computed with respect to a degenerate metric having conformal factor √W. Then we show that when such a minimizing geodesic avoids passing through other wells of the potential at intermediate times, it gives rise to a heteroclinic connection between the two wells. This work improves upon the approach of [22] and represents a more geometric alternative to the approaches of e.g. [5,10,14,17] for finding such connections.

  13. Finding Minimal Addition Chains with a Particle Swarm Optimization Algorithm

    NASA Astrophysics Data System (ADS)

    León-Javier, Alejandro; Cruz-Cortés, Nareli; Moreno-Armendáriz, Marco A.; Orantes-Jiménez, Sandra

    Addition chains of minimal length are the basic building block for the optimal computation of finite field exponentiations, which has very important applications in the areas of error-correcting codes and cryptography. However, obtaining the shortest addition chain for a given exponent is an NP-hard problem. In this work we propose the adaptation of a Particle Swarm Optimization algorithm to deal with this problem. Our proposal is tested on several exponents whose addition chains are considered hard to find. We obtained very promising results.
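    The objects involved are easy to illustrate: the binary (square-and-multiply) method yields a valid but generally non-minimal addition chain, the baseline that heuristics like the proposed PSO aim to beat. The helper functions below are our own sketch, not the paper's algorithm:

```python
def binary_chain(e):
    # Addition chain for exponent e from the binary (square-and-multiply)
    # method: a doubling for each bit, plus an addition for each set bit
    # after the leading one.
    chain = [1]
    for bit in bin(e)[3:]:               # bits after the leading 1
        chain.append(chain[-1] * 2)      # squaring step
        if bit == '1':
            chain.append(chain[-1] + 1)  # multiplication step
    return chain

def is_addition_chain(chain, e):
    # Each element after the first must be the sum of two earlier elements
    # (possibly the same element twice), and the chain must end at e.
    if chain[0] != 1 or chain[-1] != e:
        return False
    for k, v in enumerate(chain[1:], start=1):
        if not any(v - a in chain[:k] for a in chain[:k]):
            return False
    return True

chain = binary_chain(15)
print(chain, len(chain) - 1)  # [1, 2, 3, 6, 7, 14, 15] -> 6 additions
```

    For e = 15 the binary method needs 6 additions, while a shortest chain such as 1, 2, 3, 6, 12, 15 needs only 5; that gap is exactly what heuristic search over chains tries to close.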

  14. Traveling salesman problem, conformal invariance, and dense polymers.

    PubMed

    Jacobsen, J L; Read, N; Saleur, H

    2004-07-16

    We propose that the statistics of the optimal tour in the planar random Euclidean traveling salesman problem is conformally invariant on large scales. This is exhibited in the power-law behavior of the probabilities for the tour to zigzag repeatedly between two regions, and in subleading corrections to the length of the tour. The universality class should be the same as for dense polymers and minimal spanning trees. The conjectures for the length of the tour on a cylinder are tested numerically.

  15. Multidimensional spectral load balancing

    DOEpatents

    Hendrickson, Bruce A.; Leland, Robert W.

    1996-12-24

    A method of and apparatus for graph partitioning involving the use of a plurality of eigenvectors of the Laplacian matrix of the graph of the problem for which load balancing is desired. The invention is particularly useful for optimizing parallel computer processing of a problem and for minimizing total pathway lengths of integrated circuits in the design stage.
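    A minimal sketch of the spectral idea, assuming an unweighted graph and using only a single eigenvector (the classic bisection special case; the patented method employs a plurality of Laplacian eigenvectors):

```python
import numpy as np

def spectral_bisection(adj):
    # Fiedler-vector bisection: split at the median of the eigenvector
    # belonging to the second-smallest eigenvalue of the graph Laplacian.
    a = np.asarray(adj, dtype=float)
    lap = np.diag(a.sum(axis=1)) - a
    vals, vecs = np.linalg.eigh(lap)   # eigenvalues in ascending order
    fiedler = vecs[:, 1]
    return fiedler <= np.median(fiedler)

# Two 4-cliques joined by one edge: the cut should fall on the bridge.
n = 8
adj = np.zeros((n, n), dtype=int)
for block in (range(0, 4), range(4, 8)):
    for i in block:
        for j in block:
            if i != j:
                adj[i, j] = 1
adj[3, 4] = adj[4, 3] = 1
part = spectral_bisection(adj)
print(part)  # one clique marked True, the other False
```

    Splitting at the median keeps the two parts balanced, which is the load-balancing requirement; using several eigenvectors, as in the invention, generalizes this to partitions into more than two parts.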

  16. Association Fields via Cuspless Sub-Riemannian Geodesics in SE(2).

    PubMed

    Duits, R; Boscain, U; Rossi, F; Sachkov, Y

    To model the association fields that underlie perceptual organization (gestalt) in psychophysics, we consider the problem P curve of minimizing [Formula: see text] for a planar curve having fixed initial and final positions and directions. Here κ(s) is the curvature of the curve, with free total length ℓ. This problem comes from a model of the geometry of vision due to Petitot (in J. Physiol. Paris 97:265-309, 2003; Math. Inf. Sci. Humaines 145:5-101, 1999) and Citti & Sarti (in J. Math. Imaging Vis. 24(3):307-326, 2006). In previous work we proved that the range [Formula: see text] of the exponential map of the underlying geometric problem formulated on SE(2) consists of precisely those end conditions (x fin, y fin, θ fin) that can be connected by a globally minimizing geodesic starting at the origin (x in, y in, θ in) = (0,0,0). From the applied imaging point of view it is relevant to analyze the sub-Riemannian geodesics and [Formula: see text] in detail. In this article we (i) show that [Formula: see text] is contained in the half space x ≥ 0 and that (0, y fin) ≠ (0,0) is reached with angle π; (ii) show that the boundary [Formula: see text] consists of endpoints of minimizers either starting or ending in a cusp; (iii) analyze and plot the cones of reachable angles θ fin per spatial endpoint (x fin, y fin); (iv) relate the endings of association fields to [Formula: see text] and compute the length towards a cusp; (v) analyze the exponential map both with the common arc-length parametrization t in the sub-Riemannian manifold [Formula: see text] and with spatial arc-length parametrization s in the plane [Formula: see text], where, surprisingly, s-parametrization simplifies the exponential map, the curvature formulas, the cusp surface, and the boundary value problem; (vi) present a novel efficient algorithm solving the boundary value problem; (vii) show that sub-Riemannian geodesics solve Petitot's circle bundle model (cf. Petitot in J. Physiol. Paris 97:265-309, 2003); and (viii) show a clear similarity between association field lines and sub-Riemannian geodesics.

  17. Online matching with queueing dynamics.

    DOT National Transportation Integrated Search

    2016-12-01

    We consider a variant of the multiarmed bandit problem where jobs queue for service, and service rates of different servers may be unknown. We study algorithms that minimize queue-regret: the (expected) difference between the queue-lengths obtained b...

  18. Using Ant Colony Optimization for Routing in VLSI Chips

    NASA Astrophysics Data System (ADS)

    Arora, Tamanna; Moses, Melanie

    2009-04-01

    Rapid advances in VLSI technology have increased the number of transistors that fit on a single chip to about two billion. A frequent problem in the design of such high-performance, high-density VLSI layouts is that of routing the wires that connect such large numbers of components. Most wire-routing problems are computationally hard. The quality of any routing algorithm is judged by the extent to which it satisfies routing constraints and design objectives. Some of the broader design objectives include minimizing the total routed wire length and minimizing the total capacitance induced in the chip, both of which serve to minimize the power consumed by the chip. Ant Colony Optimization (ACO) algorithms provide a multi-agent framework for combinatorial optimization by combining memory, stochastic decisions, and strategies of collective and distributed learning by ant-like agents. This paper applies ACO to the NP-hard problem of finding optimal routes for interconnect routing on VLSI chips. The constraints on interconnect routing are used by the ants as heuristics that guide their search process. We found that ACO algorithms were able to successfully incorporate multiple constraints and route interconnects on a suite of benchmark chips. On average, the algorithm routed with a total wire length 5.5% less than that of other established routing algorithms.

  19. Traveling salesman problem with a center.

    PubMed

    Lipowski, Adam; Lipowska, Dorota

    2005-06-01

    We study a traveling salesman problem where the path is optimized with a cost function that includes its length L as well as a certain measure C of its distance from the geometrical center of the graph. Using simulated annealing (SA) we show that such a problem has a transition point that separates two phases differing in the scaling behavior of L and C, in efficiency of SA, and in the shape of minimal paths.

  20. Improving Hospital-wide Patient Scheduling Decisions by Clinical Pathway Mining.

    PubMed

    Gartner, Daniel; Arnolds, Ines V; Nickel, Stefan

    2015-01-01

    Recent research has highlighted the need for solving hospital-wide patient scheduling problems. In inpatient scheduling, patient activities have to be scheduled on scarce hospital resources such that temporal relations between activities (e.g. for recovery times) are ensured. Common objectives include, among others, the minimization of the length of stay (LOS). In this paper, we consider a hospital-wide patient scheduling problem with LOS minimization based on uncertain clinical pathways. We approach the problem in three stages: first, we learn the most likely clinical pathways using a sequential pattern mining approach; second, we provide a mathematical model for patient scheduling; and finally, we combine the two approaches. In an experimental study carried out using real-world data, we show that our approach outperforms baseline approaches on two metrics.

  1. The Edge-Disjoint Path Problem on Random Graphs by Message-Passing.

    PubMed

    Altarelli, Fabrizio; Braunstein, Alfredo; Dall'Asta, Luca; De Bacco, Caterina; Franz, Silvio

    2015-01-01

    We present a message-passing algorithm to solve a series of edge-disjoint path problems on graphs, based on the zero-temperature cavity equations. Edge-disjoint path problems are important in the general context of routing, which can be defined by incorporating under a unique framework both traffic optimization and total path length minimization. The computation of the cavity equations can be performed efficiently by exploiting a mapping of a generalized edge-disjoint path problem on a star graph onto a weighted maximum matching problem. We perform extensive numerical simulations on random graphs of various types to test the performance both in terms of path length minimization and maximization of the number of accommodated paths. In addition, we test the performance on benchmark instances on various graphs by comparison with state-of-the-art algorithms and results found in the literature. Our message-passing algorithm always outperforms the others in terms of the number of accommodated paths when considering nontrivial instances (otherwise it gives the same trivial results). Remarkably, the largest improvement in performance with respect to the other methods employed is found in the case of benchmarks with meshes, where the validity hypothesis behind message-passing is expected to worsen. In these cases, even though the exact message-passing equations do not converge, by introducing a reinforcement parameter to force convergence towards a suboptimal solution, we were able to always outperform the other algorithms, with a peak of 27% performance improvement in terms of accommodated paths. On random graphs, we numerically observe two separated regimes: one in which all paths can be accommodated and one in which this is not possible. We also investigate the behavior of both the number of paths to be accommodated and their minimum total length.
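    For contrast with the message-passing approach, the edge-disjoint constraint itself can be illustrated by a naive greedy baseline (our sketch, not the paper's algorithm): route each demand along a BFS shortest path, then delete the edges it used.

```python
from collections import deque

def shortest_path(adj, src, dst):
    # BFS shortest path; adj maps each node to a set of neighbors.
    prev = {src: None}
    q = deque([src])
    while q:
        u = q.popleft()
        if u == dst:
            path = []
            while u is not None:
                path.append(u)
                u = prev[u]
            return path[::-1]
        for v in adj[u]:
            if v not in prev:
                prev[v] = u
                q.append(v)
    return None  # dst unreachable with the remaining edges

def greedy_edge_disjoint(edges, pairs):
    # Route each (src, dst) pair along a shortest path, then delete the
    # used edges so later paths are edge-disjoint from earlier ones.
    adj = {}
    for u, v in edges:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    routed = []
    for s, t in pairs:
        path = shortest_path(adj, s, t)
        if path is None:
            continue                      # this pair cannot be accommodated
        routed.append(path)
        for a, b in zip(path, path[1:]):  # consume the path's edges
            adj[a].discard(b)
            adj[b].discard(a)
    return routed

# 4-cycle with a chord: both pairs can be routed edge-disjointly.
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
paths = greedy_edge_disjoint(edges, [(0, 2), (1, 3)])
print(paths)
```

    The greedy order matters and can block later pairs; that coupling between demands is precisely what the cavity-equation approach handles globally.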

  2. The Edge-Disjoint Path Problem on Random Graphs by Message-Passing

    PubMed Central

    2015-01-01

    We present a message-passing algorithm to solve a series of edge-disjoint path problems on graphs, based on the zero-temperature cavity equations. Edge-disjoint path problems are important in the general context of routing, which can be defined by incorporating under a unique framework both traffic optimization and total path length minimization. The computation of the cavity equations can be performed efficiently by exploiting a mapping of a generalized edge-disjoint path problem on a star graph onto a weighted maximum matching problem. We perform extensive numerical simulations on random graphs of various types to test the performance both in terms of path length minimization and maximization of the number of accommodated paths. In addition, we test the performance on benchmark instances on various graphs by comparison with state-of-the-art algorithms and results found in the literature. Our message-passing algorithm always outperforms the others in terms of the number of accommodated paths when considering nontrivial instances (otherwise it gives the same trivial results). Remarkably, the largest improvement in performance with respect to the other methods employed is found in the case of benchmarks with meshes, where the validity hypothesis behind message-passing is expected to worsen. In these cases, even though the exact message-passing equations do not converge, by introducing a reinforcement parameter to force convergence towards a suboptimal solution, we were able to always outperform the other algorithms, with a peak of 27% performance improvement in terms of accommodated paths. On random graphs, we numerically observe two separated regimes: one in which all paths can be accommodated and one in which this is not possible. We also investigate the behavior of both the number of paths to be accommodated and their minimum total length. PMID:26710102

  3. A genetic algorithm used for solving one optimization problem

    NASA Astrophysics Data System (ADS)

    Shipacheva, E. N.; Petunin, A. A.; Berezin, I. M.

    2017-12-01

    A problem of minimizing the length of the idle run of a cutting tool during the cutting of sheet materials into shaped blanks is discussed. This problem arises during the preparation of control programs for computerized numerical control (CNC) machines. A discrete model of the problem is analogous in its setting to the generalized travelling salesman problem, with constraints in the form of precedence conditions determined by the technological features of cutting. A variant of a genetic algorithm for solving this problem is described. The effect of the parameters of the developed algorithm on the solution of the constrained problem is investigated.

  4. A heuristic for suffix solutions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bilgory, A.; Gajski, D.D.

    1986-01-01

    The suffix problem has appeared in solutions of recurrence systems for parallel and pipelined machines and, more recently, in the design of gate and silicon compilers. In this paper the authors present two algorithms. The first algorithm generates parallel suffix solutions with minimum cost for a given length, time delay, availability of initial values, and fanout. This algorithm generates a minimal solution for any length n and any depth in the range log₂ n to n. The second algorithm reduces the size of the solutions generated by the first algorithm.
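    The suffix problem named above can be illustrated with a recursive-doubling sketch that computes all suffix combinations under an associative operator in about log₂ n data-parallel steps rather than n - 1 serial ones. This shows the problem being solved, not the authors' minimal-cost constructions:

```python
def suffix_scan(values, op):
    # Suffix computation by recursive doubling: after the step with a given
    # stride, slot i holds the op-combination of values[i : i + 2*stride]
    # (clipped at the end of the list).
    n = len(values)
    result = list(values)
    stride = 1
    while stride < n:
        # One "parallel" step: all reads happen before any write.
        result = [op(result[i], result[i + stride]) if i + stride < n
                  else result[i]
                  for i in range(n)]
        stride *= 2
    return result

vals = [3, 1, 4, 1, 5, 9, 2, 6]
print(suffix_scan(vals, lambda a, b: a + b))  # [31, 28, 27, 23, 22, 17, 8, 6]
print(suffix_scan(vals, max))                 # [9, 9, 9, 9, 9, 9, 6, 6]
```

    Each while-iteration corresponds to one level of a parallel suffix circuit; the cost/depth/fanout trade-offs the record optimizes arise when these levels are mapped onto gates.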

  5. Some applications of the most general form of the higher-order GUP with minimal length uncertainty and maximal momentum

    NASA Astrophysics Data System (ADS)

    Shababi, Homa; Chung, Won Sang

    2018-04-01

    In this paper, using the new type of D-dimensional nonperturbative Generalized Uncertainty Principle (GUP), which predicts both a minimal length uncertainty and a maximal observable momentum, we first obtain the maximally localized states and express their connection to [P. Pedram, Phys. Lett. B 714, 317 (2012)]. Then, in the context of our proposed GUP and using the generalized Schrödinger equation, we solve some important problems including the particle in a box and the one-dimensional hydrogen atom. Next, applying the modified Bohr-Sommerfeld quantization, we obtain the energy spectra of the quantum harmonic oscillator and the quantum bouncer. Finally, as an example, we investigate some statistical properties of a free particle, including the partition function and internal energy, in the presence of the mentioned GUP.

  6. Minimal entropy probability paths between genome families.

    PubMed

    Ahlbrandt, Calvin; Benson, Gary; Casey, William

    2004-05-01

    We develop a metric for probability distributions with applications to biological sequence analysis. Our distance metric is obtained by minimizing a functional defined on the class of paths over probability measures on N categories. The underlying mathematical theory is connected to a constrained problem in the calculus of variations. The solution presented is a numerical solution, which approximates the true solution in a set of cases called rich paths, where none of the components of the path is zero. The functional to be minimized is motivated by entropy considerations, reflecting the idea that nature might efficiently carry out mutations of genome sequences in such a way that the increase in entropy involved in the transformation is as small as possible. We characterize sequences by frequency profiles or probability vectors; in the case of DNA, N is 4 and the components of the probability vector are the frequencies of occurrence of each of the bases A, C, G and T. Given two probability vectors a and b, we define a distance function as the infimum of path integrals of the entropy function H(p) over all admissible paths p(t), 0 ≤ t ≤ 1, with p(t) a probability vector such that p(0) = a and p(1) = b. If the probability paths p(t) are parameterized as y(s) in terms of arc length s and the optimal path is smooth with arc length L, then smooth and "rich" optimal probability paths may be numerically estimated by a hybrid method: iterating Newton's method on solutions of a two-point boundary value problem, with unknown distance L between the abscissas, for the Euler-Lagrange equations resulting from a multiplier rule for the constrained optimization problem, together with linear regression to improve the arc-length estimate L. Matlab code for these numerical methods is provided, which works only for "rich" optimal probability vectors. These methods motivate the definition of an elementary distance function which is easier and faster to calculate, works on non-rich vectors, and involves neither variational theory nor differential equations, yet is a better approximation of the minimal entropy path distance than the distance ‖b − a‖₂. We compute minimal entropy distance matrices for examples of DNA myostatin genes and amino-acid sequences across several species. Output tree dendrograms for our minimal entropy metric are compared with dendrograms based on BLAST and BLAST identity scores.

  7. Massive neutrinos and the pancake theory of galaxy formation

    NASA Technical Reports Server (NTRS)

    Schaeffer, R.; Silk, J.

    1984-01-01

    Three problems encountered by the pancake theory of galaxy formation in a massive neutrino-dominated universe are discussed. A nonlinear model for pancakes is shown to reconcile the data with the predicted coherence length and velocity field, and minimal predictions are given of the contribution from the large-scale matter distribution.

  8. Fast Algorithms for Designing Unimodular Waveform(s) With Good Correlation Properties

    NASA Astrophysics Data System (ADS)

    Li, Yongzhe; Vorobyov, Sergiy A.

    2018-03-01

    In this paper, we develop new fast and efficient algorithms for designing single/multiple unimodular waveforms/codes with good auto- and cross-correlation or weighted correlation properties, which are highly desired in radar and communication systems. The waveform design is based on the minimization of the integrated sidelobe level (ISL) and weighted ISL (WISL) of the waveforms. As the corresponding optimization problems can quickly grow to large scale as the code length and the number of waveforms increase, the main issue becomes the development of fast large-scale optimization techniques. A further difficulty is that the corresponding optimization problems are non-convex while the required accuracy is high. Therefore, we formulate the ISL and WISL minimization problems as non-convex quartic optimization problems in the frequency domain, and then simplify them into quadratic problems by utilizing the majorization-minimization technique, which is one of the basic techniques for addressing large-scale and/or non-convex optimization problems. While designing our fast algorithms, we identify and exploit inherent algebraic structures in the objective functions to rewrite them in quartic forms and, in the case of WISL minimization, to derive an additional alternative quartic form which allows the quartic-quadratic transformation to be applied. Our algorithms are applicable to large-scale unimodular waveform design problems, as they are proved to have lower or comparable computational burden (analyzed theoretically) and faster convergence speed (confirmed by comprehensive simulations) than the state-of-the-art algorithms. In addition, the waveforms designed by our algorithms demonstrate better correlation properties than their counterparts.

  9. The checkpoint ordering problem

    PubMed Central

    Hungerländer, P.

    2017-01-01

    Abstract We suggest a new variant of a row layout problem: find an ordering of n departments with given lengths such that the total weighted sum of their distances to a given checkpoint is minimized. The Checkpoint Ordering Problem (COP) is of both theoretical and practical interest. It has several applications and is conceptually related to some well-studied combinatorial optimization problems, namely the Single-Row Facility Layout Problem, the Linear Ordering Problem, and a variant of parallel machine scheduling. In this paper we study the complexity of the COP and its special cases. The general version of the COP with an arbitrary but fixed number of checkpoints is NP-hard in the weak sense. We propose both a dynamic programming algorithm and an integer linear programming approach for the COP. Our computational experiments indicate that the COP is hard to solve in practice. While the run time of the dynamic programming algorithm strongly depends on the lengths of the departments, the integer linear programming approach is able to solve instances with up to 25 departments to optimality. PMID:29170574
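For tiny instances the single-checkpoint objective can be brute-forced over all orderings, which makes the problem statement concrete. This sketch is not the paper's dynamic program; the convention of packing departments consecutively from position 0 is an assumption for illustration:

```python
from itertools import permutations

def cop_bruteforce(lengths, weights, checkpoint):
    """Brute-force the single-checkpoint COP: departments are packed
    consecutively starting at position 0 (assumed convention), and the
    cost is the weighted sum of distances from each department's
    center to the checkpoint."""
    best_cost, best_order = float("inf"), None
    n = len(lengths)
    for order in permutations(range(n)):
        pos, cost = 0.0, 0.0
        for i in order:
            center = pos + lengths[i] / 2.0
            cost += weights[i] * abs(center - checkpoint)
            pos += lengths[i]
        if cost < best_cost:
            best_cost, best_order = cost, order
    return best_cost, best_order

# Two departments of lengths 1 and 3, unit weights, checkpoint at 0:
# placing the short department first puts both centers closer.
print(cop_bruteforce([1.0, 3.0], [1.0, 1.0], 0.0))
```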

  10. Quantum scattering in one-dimensional systems satisfying the minimal length uncertainty relation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bernardo, Reginald Christian S., E-mail: rcbernardo@nip.upd.edu.ph; Esguerra, Jose Perico H., E-mail: jesguerra@nip.upd.edu.ph

    In quantum gravity theories, when the scattering energy is comparable to the Planck energy the Heisenberg uncertainty principle breaks down and is replaced by the minimal length uncertainty relation. In this paper, the consequences of the minimal length uncertainty relation for one-dimensional quantum scattering are studied using an approach involving a recently proposed second-order differential equation. An exact analytical expression for the tunneling probability through a locally periodic rectangular potential barrier system is obtained. Results show that the existence of a non-zero minimal length uncertainty tends to shift the resonant tunneling energies in the positive direction. Scattering through a locally periodic potential composed of double-rectangular potential barriers shows that the first band of resonant tunneling energies widens for minimal length cases when the double-rectangular potential barrier is symmetric but narrows when the double-rectangular potential barrier is asymmetric. A numerical solution which exploits the use of Wronskians is used to calculate the transmission probabilities through the Pöschl–Teller well, Gaussian barrier, and double-Gaussian barrier. Results show that the probability of passage through the Pöschl–Teller well and the Gaussian barrier is smaller in the minimal length cases than in the non-minimal length case. For the double-Gaussian barrier, the probability of passage for energies that are more positive than the resonant tunneling energy is larger in the minimal length cases than in the non-minimal length case. The approach is exact and applicable to many types of scattering potential.
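For orientation, the baseline that the paper's minimal-length corrections shift is the standard quantum-mechanical transmission through a single rectangular barrier. A minimal sketch of that textbook result (no minimal-length correction included; units with ħ = 2m = 1 are an assumption for simplicity):

```python
import math

def transmission(E, V0, a):
    """Transmission probability through a rectangular barrier of
    height V0 and width a, for 0 < E < V0, in units where
    hbar = 2m = 1 (so wave numbers are square roots of energies).
    Standard result: T = 1 / (1 + V0^2 sinh^2(kappa a) / (4 E (V0 - E)))."""
    kappa = math.sqrt(V0 - E)
    return 1.0 / (1.0 + (V0 ** 2 * math.sinh(kappa * a) ** 2)
                  / (4.0 * E * (V0 - E)))

print(transmission(1.0, 4.0, 1.5))
```

A zero-width barrier transmits perfectly, and any finite barrier below the classical threshold transmits with 0 < T < 1.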

  11. A wire length minimization approach to ocular dominance patterns in mammalian visual cortex

    NASA Astrophysics Data System (ADS)

    Chklovskii, Dmitri B.; Koulakov, Alexei A.

    2000-09-01

    The primary visual area (V1) of the mammalian brain is a thin sheet of neurons. Because each neuron is dominated by either the right or the left eye, one can treat V1 as a binary mixture of neurons. The spatial arrangement of neurons dominated by different eyes is known as the ocular dominance (OD) pattern. We propose a theory for OD patterns based on the premise that they are evolutionary adaptations that minimize the length of intra-cortical connections. Thus, the existing OD patterns are obtained by solving a wire length minimization problem. We divide all the neurons into two classes: right- and left-eye dominated. We find that if the number of connections each neuron makes with neurons of its own class differs from the number it makes with the other class, the segregation of neurons into monocular regions indeed reduces the wire length. The shape of the regions depends on the relative number of neurons in the two classes. If both classes are equally represented, we find that the optimal OD pattern consists of alternating stripes. If one class is less numerous than the other, the optimal OD pattern consists of patches of the underrepresented (ipsilateral) eye-dominated neurons surrounded by neurons of the other class. We predict the transition from stripes to patches when the fraction of neurons dominated by the ipsilateral eye is about 40%. This prediction agrees with the data in macaque and Cebus monkeys. Our theory can be applied to other binary cortical systems.

  12. Reconcile Planck-scale discreteness and the Lorentz-Fitzgerald contraction

    NASA Astrophysics Data System (ADS)

    Rovelli, Carlo; Speziale, Simone

    2003-03-01

    A Planck-scale minimal observable length appears in many approaches to quantum gravity. It is sometimes argued that this minimal length might conflict with Lorentz invariance, because a boosted observer can see the minimal length further Lorentz contracted. We show that this is not the case within loop quantum gravity. In loop quantum gravity the minimal length (more precisely, minimal area) does not appear as a fixed property of geometry, but rather as the minimal (nonzero) eigenvalue of a quantum observable. The boosted observer can see the same observable spectrum, with the same minimal area. What changes continuously in the boost transformation is not the value of the minimal length: it is the probability distribution of seeing one or the other of the discrete eigenvalues of the area. We discuss several difficulties associated with boosts and area measurement in quantum gravity. We compute the transformation of the area operator under a local boost, propose an explicit expression for the generator of local boosts, and give the conditions under which its action is unitary.

  13. Automatic phase control in solar power satellite systems

    NASA Technical Reports Server (NTRS)

    Lindsey, W. C.; Kantak, A. V.

    1978-01-01

    Various approaches to the problem of generating, maintaining, and distributing a coherent reference phase signal over a large area are suggested, mathematically modeled, and analyzed with respect to their ability to minimize phase build-up, beam diffusion, beam-steering phase jitter, and cable length, and to maximize power transfer efficiency. In addition, phase control configurations are suggested which alleviate the need for layout symmetry.

  14. Distributed genetic algorithms for the floorplan design problem

    NASA Technical Reports Server (NTRS)

    Cohoon, James P.; Hegde, Shailesh U.; Martin, Worthy N.; Richards, Dana S.

    1991-01-01

    Designing a VLSI floorplan calls for arranging a given set of modules in the plane to minimize the weighted sum of area and wire-length measures. A method of solving the floorplan design problem using distributed genetic algorithms is presented. Distributed genetic algorithms, based on the paleontological theory of punctuated equilibria, offer a conceptual modification to traditional genetic algorithms. Experimental results on several problem instances demonstrate the efficacy of this method and indicate its advantages over other methods such as simulated annealing. The method has performed better than the simulated annealing approach, both in terms of the average cost of the solutions found and the best-found solution, in almost all the problem instances tried.
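The evolutionary idea behind such placement methods can be illustrated on a toy 1-D version of the wire-length objective. This is a much-simplified, mutation-only sketch, not the paper's distributed punctuated-equilibria GA; the instance (a cycle of four connected modules plus two free ones) is invented for illustration:

```python
import random

def wirelength(pos, nets):
    """Total 1-D wire length of point-to-point nets, where pos[m]
    is the slot index assigned to module m."""
    return sum(abs(pos[u] - pos[v]) for u, v in nets)

def evolve(n_modules, nets, pop_size=20, generations=200, seed=0):
    """Mutation-only evolutionary search for a 1-D placement:
    individuals are permutations (module -> slot), the fittest half
    survives each generation, and children are swap-mutated copies."""
    rng = random.Random(seed)
    pop = []
    for _ in range(pop_size):
        p = list(range(n_modules))
        rng.shuffle(p)
        pop.append(p)
    for _ in range(generations):
        pop.sort(key=lambda p: wirelength(p, nets))
        survivors = pop[: pop_size // 2]
        children = []
        for p in survivors:
            c = p[:]
            i, j = rng.randrange(n_modules), rng.randrange(n_modules)
            c[i], c[j] = c[j], c[i]  # swap mutation
            children.append(c)
        pop = survivors + children
    return min(pop, key=lambda p: wirelength(p, nets))

# Four nets forming a cycle over modules 0-3; modules 4 and 5 are free.
nets = [(0, 1), (1, 2), (2, 3), (0, 3)]
best = evolve(6, nets)
print(best, wirelength(best, nets))
```

The distributed variant in the paper runs several such populations in parallel with occasional migration, which is what the punctuated-equilibria analogy refers to.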

  15. A Sharp methodology for VLSI layout

    NASA Astrophysics Data System (ADS)

    Bapat, Shekhar

    1993-01-01

    The layout problem for VLSI circuits is recognized as a very difficult problem and has traditionally been decomposed into the several seemingly independent sub-problems of placement, global routing, and detailed routing. Although this structure achieves a reduction in programming complexity, it is also typically accompanied by a reduction in solution quality. Most current placement research recognizes that the separation is artificial, and that the placement and routing problems should ideally be solved in tandem. We propose a new interconnection model, Sharp, and an associated partitioning algorithm. The Sharp interconnection model uses a partitioning shape that roughly resembles the musical sharp sign and makes extensive use of pre-computed rectilinear Steiner trees. The model is designed to generate strategic routing information along with the partitioning results. Additionally, the Sharp model also generates estimates of the routing congestion. We also propose the Sharp layout heuristic, which solves the layout problem in its entirety. The Sharp layout heuristic makes extensive use of the Sharp partitioning model. The use of precomputed Steiner tree forms enables the method to accurately model net characteristics. For example, the Steiner tree forms can model both the length of a net and, more importantly, its route. In fact, the tree forms are also appropriate for modeling the timing delays of nets. The Sharp heuristic works to minimize both the total layout area, by minimizing total net length (thus reducing the total wiring area), and the congestion imbalances in the various channels (thus reducing the unused or wasted channel area). Our heuristic uses circuit element movements amongst the different partitioning blocks and selection of alternate minimal Steiner tree forms to achieve this goal. The objective function for the algorithm can readily be modified to include other important circuit constraints such as propagation delays.
The layout technique first computes a very high-level approximation of the layout solution (i.e., the positions of the circuit elements and the associated net routes). The approximate solution is then alternately refined with respect to the objective function. The technique creates well-defined sub-problems and offers intermediary steps that can be solved in parallel, as well as a parallel mechanism to merge the sub-problem solutions.

  16. Cost minimizing of cutting process for CNC thermal and water-jet machines

    NASA Astrophysics Data System (ADS)

    Tavaeva, Anastasia; Kurennov, Dmitry

    2015-11-01

    This paper deals with the optimization of the cutting process for CNC thermal and water-jet machines. The accuracy with which the objective function parameters of the optimization problem can be calculated is investigated. The paper shows that the working tool path speed is not a constant value; it depends on several parameters described in the paper. Relations for the working tool path speed as a function of the number of NC program frames, the length of a straight cut, and the part configuration are presented. Based on the results obtained, correction coefficients for the working tool speed are defined. Additionally, the optimization problem may be solved using a mathematical model that takes into account the additional restrictions of thermal cutting (choice of piercing and output tool points, precedence conditions, thermal deformations). The second part of the paper considers non-standard cutting techniques, which may reduce cutting cost and time compared with standard cutting techniques, and examines the effectiveness of their application. Future research directions are indicated at the end of the paper.

  17. Angular momentum and Zeeman effect in the presence of a minimal length based on the Kempf-Mann-Mangano algebra

    NASA Astrophysics Data System (ADS)

    Khosropour, B.

    2016-07-01

    In this work, we consider a D-dimensional (β, β′)-two-parameter deformed Heisenberg algebra, which was introduced by Kempf et al. The angular-momentum operator in the presence of a minimal length scale based on the Kempf-Mann-Mangano algebra is obtained in the special case of β′ = 2β up to first order in the deformation parameter β. It is shown that each of the components of the modified angular-momentum operator commutes with the modified operator L². We find the magnetostatic field in the presence of a minimal length. The Zeeman effect in the deformed space is studied, and Landé's formula for the energy shift in the presence of a minimal length is obtained. We estimate an upper bound on the isotropic minimal length.

  18. A restricted Steiner tree problem is solved by Geometric Method II

    NASA Astrophysics Data System (ADS)

    Lin, Dazhi; Zhang, Youlin; Lu, Xiaoxu

    2013-03-01

    The minimum Steiner tree problem has a wide range of applications, in areas such as transportation systems, communication networks, pipeline design, and VLSI. Unfortunately, the computational complexity of the problem is NP-hard, so it is common to consider restricted special cases. In this paper, we first put forward a restricted Steiner tree problem in which the fixed vertices lie on the same side of a line L, and a vertex on L must be found such that the length of the tree is minimal. By the definition and complexity of the Steiner tree problem, this restricted problem is also NP-complete. In Part I, we considered the restricted Steiner tree problem with two fixed vertices. Here we naturally consider the case of three fixed vertices, again using the geometric method to solve the problem.

  19. On the geodetic applications of simultaneous range-differencing to LAGEOS

    NASA Technical Reports Server (NTRS)

    Pablis, E. C.

    1982-01-01

    The possibility of improving the accuracy of geodetic results by using simultaneously observed ranges to Lageos, in a differencing mode, from pairs of stations was studied. Simulation tests show that model errors can be effectively minimized by simultaneous range differencing (SRD) for a rather broad class of network-satellite pass configurations. The methods of least squares approximation with monomials and Chebyshev polynomials are compared with cubic spline interpolation. Analysis of three types of orbital biases (radial, along-track, and across-track) shows that radial biases are the ones most efficiently minimized in the SRD mode. The degree to which the other two can be minimized depends on the type of parameters under estimation and the geometry of the problem. Sensitivity analyses of the SRD observations show that for baseline length estimation the most useful data are those collected in a direction parallel to the baseline and at a low elevation. Estimating individual baseline lengths with respect to an assumed but fixed orbit not only decreases the cost but further reduces the effects of model biases on the results, as compared with a network solution. Analogous results and conclusions are obtained for the estimates of the coordinates of the pole.

  20. Simulated annealing with restart strategy for the blood pickup routing problem

    NASA Astrophysics Data System (ADS)

    Yu, V. F.; Iswari, T.; Normasari, N. M. E.; Asih, A. M. S.; Ting, H.

    2018-04-01

    This study develops a simulated annealing heuristic with restart strategy (SA_RS) for solving the blood pickup routing problem (BPRP). BPRP minimizes the total length of the routes for blood bag collection between a blood bank and a set of donation sites, each associated with a time window constraint that must be observed. The proposed SA_RS is implemented in C++ and tested on benchmark instances of the vehicle routing problem with time windows to verify its performance. The algorithm is then tested on some newly generated BPRP instances and the results are compared with those obtained by CPLEX. Experimental results show that the proposed SA_RS heuristic effectively solves BPRP.
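A generic simulated-annealing-with-restart loop of the kind described here can be sketched on a small tour-length objective. This is an illustrative sketch, not the paper's SA_RS (no time windows; 2-swap moves, geometric cooling, and the reheat-from-best restart rule are assumptions):

```python
import math, random

def route_length(route, pts):
    """Length of the closed tour visiting pts in the given order."""
    return sum(math.dist(pts[route[i]], pts[route[(i + 1) % len(route)]])
               for i in range(len(route)))

def sa_restart(pts, restarts=3, iters=2000, t0=1.0, cooling=0.995, seed=0):
    """Simulated annealing with a simple restart strategy: each
    restart reheats to t0 from the best-so-far tour and anneals
    again using random 2-swap moves."""
    rng = random.Random(seed)
    n = len(pts)
    best = list(range(n))
    rng.shuffle(best)
    best_len = route_length(best, pts)
    for _ in range(restarts):
        cur, cur_len, t = best[:], best_len, t0
        for _ in range(iters):
            i, j = rng.sample(range(n), 2)
            cand = cur[:]
            cand[i], cand[j] = cand[j], cand[i]
            cand_len = route_length(cand, pts)
            # accept improvements always, worse moves with Boltzmann prob.
            if cand_len < cur_len or rng.random() < math.exp((cur_len - cand_len) / t):
                cur, cur_len = cand, cand_len
                if cur_len < best_len:
                    best, best_len = cur[:], cur_len
            t *= cooling
    return best, best_len

# Six sites on a 2x3 unit grid; the optimal closed tour has length 6.
pts = [(0, 0), (0, 1), (1, 1), (1, 0), (2, 0), (2, 1)]
tour, length = sa_restart(pts)
print(length)
```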

  1. Alternative mathematical programming formulations for FSS synthesis

    NASA Technical Reports Server (NTRS)

    Reilly, C. H.; Mount-Campbell, C. A.; Gonsalvez, D. J. A.; Levis, C. A.

    1986-01-01

    A variety of mathematical programming models and two solution strategies are suggested for the problem of allocating orbital positions to (synthesizing) satellites in the Fixed Satellite Service. Mixed integer programming and almost linear programming formulations are presented in detail for each of two objectives: (1) positioning satellites as closely as possible to specified desired locations, and (2) minimizing the total length of the geostationary arc allocated to the satellites whose positions are to be determined. Computational results for mixed integer and almost linear programming models, with the objective of positioning satellites as closely as possible to their desired locations, are reported for three six-administration test problems and a thirteen-administration test problem.

  2. Adaptive Window Zero-Crossing-Based Instantaneous Frequency Estimation

    NASA Astrophysics Data System (ADS)

    Sekhar, S. Chandra; Sreenivas, TV

    2004-12-01

    We address the problem of estimating instantaneous frequency (IF) of a real-valued constant amplitude time-varying sinusoid. Estimation of polynomial IF is formulated using the zero-crossings of the signal. We propose an algorithm to estimate nonpolynomial IF by local approximation using a low-order polynomial, over a short segment of the signal. This involves the choice of window length to minimize the mean square error (MSE). The optimal window length found by directly minimizing the MSE is a function of the higher-order derivatives of the IF which are not available a priori. However, an optimum solution is formulated using an adaptive window technique based on the concept of intersection of confidence intervals. The adaptive algorithm enables minimum MSE-IF (MMSE-IF) estimation without requiring a priori information about the IF. Simulation results show that the adaptive window zero-crossing-based IF estimation method is superior to fixed window methods and is also better than adaptive spectrogram and adaptive Wigner-Ville distribution (WVD)-based IF estimators for different signal-to-noise ratio (SNR).
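The starting point of such zero-crossing methods is that, for a constant-frequency sinusoid, the crossing count over a window directly estimates the frequency. A minimal fixed-window sketch (the baseline, not the paper's adaptive intersection-of-confidence-intervals estimator; the 440 Hz test tone is an illustrative input):

```python
import numpy as np

def zc_frequency(x, fs):
    """Estimate the (constant) frequency of a real sinusoid from its
    zero-crossing count: f ~ crossings / (2 * duration), since a
    sinusoid crosses zero twice per period."""
    signs = np.sign(x)
    crossings = np.count_nonzero(np.diff(signs))
    duration = len(x) / fs
    return crossings / (2.0 * duration)

fs = 8000.0
t = np.arange(0.0, 1.0, 1.0 / fs)
x = np.sin(2 * np.pi * 440.0 * t)
print(zc_frequency(x, fs))
```

For a time-varying IF, the same count is applied over short windows; the paper's contribution is choosing each window's length adaptively to balance bias against variance.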

  3. Microneedles for Transdermal Biosensing: Current Picture and Future Direction.

    PubMed

    Ventrelli, Letizia; Marsilio Strambini, Lucanos; Barillaro, Giuseppe

    2015-12-09

    A novel trend is rapidly emerging in the use of microneedles, which are a miniaturized replica of hypodermic needles with length-scales of hundreds of micrometers, aimed at the transdermal biosensing of analytes of clinical interest, e.g., glucose, biomarkers, and others. Transdermal biosensing via microneedles offers remarkable opportunities for moving biosensing technologies and biochips from research laboratories to real-field applications, and envisages easy-to-use point-of-care microdevices with pain-free, minimally invasive, and minimal-training features that are very attractive for both developed and emerging countries. In addition to this, microneedles for transdermal biosensing offer a unique possibility for the development of biochips provided with end-effectors for their interaction with the biological system under investigation. Direct and efficient collection of the biological sample to be analyzed will then become feasible in situ at the same length-scale of the other biochip components by minimally trained personnel and in a minimally invasive fashion. This would eliminate the need for blood extraction using hypodermic needles and reduce, in turn, related problems, such as patient infections, sample contaminations, analysis artifacts, etc. The aim here is to provide a thorough and critical analysis of state-of-the-art developments in this novel research trend, and to bridge the gap between microneedles and biosensors. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  4. Minimal Polynomial Method for Estimating Parameters of Signals Received by an Antenna Array

    NASA Astrophysics Data System (ADS)

    Ermolaev, V. T.; Flaksman, A. G.; Elokhin, A. V.; Kuptsov, V. V.

    2018-01-01

    The effectiveness of the projection minimal polynomial method for solving the problem of determining the number of sources of signals acting on an antenna array (AA) with an arbitrary configuration and their angular directions has been studied. The method proposes estimating the degree of the minimal polynomial of the correlation matrix (CM) of the input process in the AA on the basis of a statistically validated root-mean-square criterion. Special attention is paid to the case of the ultrashort sample of the input process when the number of samples is considerably smaller than the number of AA elements, which is important for multielement AAs. It is shown that the proposed method is more effective in this case than methods based on the AIC (Akaike's Information Criterion) or minimum description length (MDL) criterion.

  5. On the Duffin-Kemmer-Petiau equation with linear potential in the presence of a minimal length

    NASA Astrophysics Data System (ADS)

    Chargui, Yassine

    2018-04-01

    We point out an erroneous handling in the literature regarding solutions of the (1 + 1)-dimensional Duffin-Kemmer-Petiau equation with linear potentials in the context of quantum mechanics with minimal length. Furthermore, using Brau's approach, we present a perturbative treatment of the effect of the minimal length on bound-state solutions when a Lorentz-scalar linear potential is applied.

  6. Lagrangian Formulation of a Magnetostatic Field in the Presence of a Minimal Length Scale Based on the Kempf Algebra

    NASA Astrophysics Data System (ADS)

    Moayedi, S. K.; Setare, M. R.; Khosropour, B.

    2013-11-01

    In the 1990s, Kempf and his collaborators Mangano and Mann introduced a D-dimensional (β, β′)-two-parameter deformed Heisenberg algebra which leads to an isotropic minimal length (ΔX_i)_min = ħ√(Dβ + β′), ∀ i ∈ {1, 2, ..., D}. In this work, the Lagrangian formulation of a magnetostatic field in three spatial dimensions (D = 3) described by the Kempf algebra is presented in the special case of β′ = 2β up to first order in β. We show that at the classical level there is a similarity between magnetostatics in the presence of a minimal length scale (modified magnetostatics) and the magnetostatic sector of the Abelian Lee-Wick model in three spatial dimensions. The integral form of Ampère's law and the energy density of a magnetostatic field in the modified magnetostatics are obtained. The Biot-Savart law in the modified magnetostatics is also found. By studying the effect of minimal length corrections to the gyromagnetic moment of the muon, we conclude that the upper bound on the isotropic minimal length scale in three spatial dimensions is 4.42×10⁻¹⁹ m. The relationship between magnetostatics with a minimal length and the Gaete-Spallucci nonlocal magnetostatics [J. Phys. A: Math. Theor. 45, 065401 (2012)] is investigated.
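With the special case β′ = 2β used in this work and D = 3, the quoted isotropic minimal length follows by direct substitution into the Kempf-Mann-Mangano expression:

```latex
(\Delta X_i)_{\min} = \hbar\sqrt{D\beta + \beta'}
  \;\xrightarrow{\;D = 3,\ \beta' = 2\beta\;}\;
  \hbar\sqrt{3\beta + 2\beta} = \hbar\sqrt{5\beta},
  \qquad \forall\, i \in \{1, 2, 3\}.
```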

  7. Dirac δ -function potential in quasiposition representation of a minimal-length scenario

    NASA Astrophysics Data System (ADS)

    Gusson, M. F.; Gonçalves, A. Oakes O.; Francisco, R. O.; Furtado, R. G.; Fabris, J. C.; Nogueira, J. A.

    2018-03-01

    A minimal-length scenario can be considered as an effective description of quantum gravity effects. In quantum mechanics the introduction of a minimal length can be accomplished through a generalization of Heisenberg's uncertainty principle. In this scenario, state eigenvectors of the position operator are no longer physical states, and the representation in momentum space or a representation in a quasiposition space must be used. In this work, we solve the Schrödinger equation with a Dirac δ-function potential in quasiposition space. We calculate the bound state energy and the coefficients of reflection and transmission for the scattering states. We show that leading corrections are of order of the minimal length, O(√β), and that the coefficients of reflection and transmission are no longer the same for the Dirac delta well and barrier as in ordinary quantum mechanics. Furthermore, assuming that the equivalence of the 1s state energy of the hydrogen atom and the bound state energy of the Dirac δ-function potential in the one-dimensional case is kept in a minimal-length scenario, we also find that the leading correction term for the ground state energy of the hydrogen atom is of the order of the minimal length, and Δx_min ≤ 10⁻²⁵ m.

  8. Fundamental differences between optimization code test problems in engineering applications

    NASA Technical Reports Server (NTRS)

    Eason, E. D.

    1984-01-01

    The purpose here is to suggest that there is at least one fundamental difference between the problems used for testing optimization codes and the problems that engineers often need to solve; in particular, the level of precision that can be practically achieved in the numerical evaluation of the objective function, derivatives, and constraints. This difference affects the performance of optimization codes, as illustrated by two examples. Two classes of optimization problem were defined. Class One functions and constraints can be evaluated to a high precision that depends primarily on the word length of the computer. Class Two functions and/or constraints can only be evaluated to a moderate or a low level of precision for economic or modeling reasons, regardless of the computer word length. Optimization codes have not been adequately tested on Class Two problems. There are very few Class Two test problems in the literature, while there are literally hundreds of Class One test problems. The relative performance of two codes may be markedly different for Class One and Class Two problems. Less sophisticated direct search type codes may be less likely to be confused or to waste many function evaluations on Class Two problems. The analysis accuracy and minimization performance are related in a complex way that probably varies from code to code. On a problem where the analysis precision was varied over a range, the simple Hooke and Jeeves code was more efficient at low precision while the Powell code was more efficient at high precision.

  9. A 3/2-Approximation Algorithm for Multiple Depot Multiple Traveling Salesman Problem

    NASA Astrophysics Data System (ADS)

    Xu, Zhou; Rodrigues, Brian

    As an important extension of the classical traveling salesman problem (TSP), the multiple depot multiple traveling salesman problem (MDMTSP) is to minimize the total length of a collection of tours for multiple vehicles to serve all the customers, where each vehicle must start or stay at its distinct depot. Due to the gap between the existing best approximation ratios for the TSP and for the MDMTSP in literature, which are 3/2 and 2, respectively, it is an open question whether or not a 3/2-approximation algorithm exists for the MDMTSP. We have partially addressed this question by developing a 3/2-approximation algorithm, which runs in polynomial time when the number of depots is a constant.

  10. Entropy of Vaidya Black Hole on Apparent Horizon with Minimal Length Revisited

    NASA Astrophysics Data System (ADS)

    Tang, Hao; Wu, Bin; Sun, Cheng-yi; Song, Yu; Yue, Rui-hong

    2018-03-01

    By considering the generalized uncertainty principle, the degrees of freedom near the apparent horizon of Vaidya black hole are calculated with the thin film model. The result shows that a cut-off can be introduced naturally rather than taking by hand. Furthermore, if the minimal length is chosen to be a specific value, the statistical entropy will satisfy the conventional area law at the horizon, which might reveal some deep things of the minimal length.

  12. Optimizing homeostatic cell renewal in hierarchical tissues

    PubMed Central

    Fider, Nicole A.

    2018-01-01

    In order to maintain homeostasis, mature cells removed from the top compartment of hierarchical tissues have to be replenished by means of differentiation and self-renewal events happening in the more primitive compartments. As each cell division is associated with a risk of mutation, cell division patterns have to be optimized, in order to minimize or delay the risk of malignancy generation. Here we study this optimization problem, focusing on the role of division tree length, that is, the number of layers of cells activated in response to the loss of terminally differentiated cells, which is related to the balance between differentiation and self-renewal events in the compartments. Using both analytical methods and stochastic simulations in a metapopulation-style model, we find that shorter division trees are advantageous if the objective is to minimize the total number of one-hit mutants in the cell population. Longer division trees on the other hand minimize the accumulation of two-hit mutants, which is a more likely evolutionary goal given the key role played by tumor suppressor genes in cancer initiation. While division tree length is the most important property determining mutant accumulation, we also find that increasing the size of primitive compartments helps to delay two-hit mutant generation. PMID:29447149

  13. An Interactive Life Cycle Cost Forecasting Tool

    DTIC Science & Technology

    1990-03-01

    AFIT/GOR/ENS/90M-17. A tool was developed for Monte Carlo... and B. Note that this is for a given configuration. The E represents effectiveness and is equated to some function of the quantity of systems A and B... purchased. Either strategy, maximizing effectiveness or minimizing cost, leads to some type of cost comparison among the proposed systems. The problem

  14. The In-Transit Vigilant Covering Tour Problem of Routing Unmanned Ground Vehicles

    DTIC Science & Technology

    2012-08-01

    of vertices in both vertex sets V and W, rather than exclusively in the vertex set V. A metaheuristic algorithm which follows the Greedy Randomized... window (VRPTW) approach, with the application of a Java-encoded metaheuristic, was used [O'Rourke et al., 2001] for the dynamic routing of UAVs. Harder et... minimize both of the two conflicting objectives, tour length and coverage distance, via a multi-objective evolutionary algorithm. This approach avoids a

  15. Minimal Length Scale Scenarios for Quantum Gravity.

    PubMed

    Hossenfelder, Sabine

    2013-01-01

    We review the question of whether the fundamental laws of nature limit our ability to probe arbitrarily short distances. First, we examine what insights can be gained from thought experiments for probes of shortest distances, and summarize what can be learned from different approaches to a theory of quantum gravity. Then we discuss some models that have been developed to implement a minimal length scale in quantum mechanics and quantum field theory. These models have entered the literature as the generalized uncertainty principle or the modified dispersion relation, and have allowed the study of the effects of a minimal length scale in quantum mechanics, quantum electrodynamics, thermodynamics, black-hole physics and cosmology. Finally, we touch upon the question of ways to circumvent the manifestation of a minimal length scale in short-distance physics.

  16. An Active Heater Control Concept to Meet IXO Type Mirror Module Thermal-Structural Distortion Requirement

    NASA Technical Reports Server (NTRS)

    Choi, Michael

    2013-01-01

Flight mirror assemblies (FMAs) of large telescopes, such as the International X-ray Observatory (IXO), have very stringent thermal-structural distortion requirements. The spatial temperature gradient requirement within an FMA could be as small as 0.05 C. Conventionally, heaters and thermistors are attached to the stray light baffle (SLB), and centralized heater controllers (i.e., heater controller boards located in a large electronics box) are used. Due to the large number of heater harnesses, accommodating and routing them is extremely difficult, and the total harness length/mass is very large. This innovation uses a thermally conductive pre-collimator to accommodate heaters and a distributed heater controller approach. It minimizes the harness length and mass, and reduces the problem of routing and accommodating the harnesses. Heaters and thermistors are attached to a short (4.67 cm) aluminum portion of the pre-collimator, which is thermally coupled to the SLB. Heaters, which have a very small heater power density, and thermistors are attached to the exterior of all the mirror module walls. The major portion (23.4 cm) of the pre-collimator for the middle and outer modules is made of thin, non-conductive material. It minimizes the view factors from the FMA and the heated portion of the pre-collimator to space. It also minimizes heat conduction from one end of the FMA to the other. Small, multi-channel heater controllers, which have adjustable set points and internal redundancy, are used. They are mounted to the mechanical support structure members adjacent to each module. The IXO FMA, which is 3.3 m in diameter, is an example of a large telescope. If the heater controller boards are centralized, routing and accommodating heater harnesses is extremely difficult. This innovation has the following advantages. It minimizes the length/mass of the heater harness between the heater controllers and heater circuits. It reduces the problem of routing and accommodating the harness on the FMA. It reduces the risk of X-ray attenuation caused by the heater harness. Its adjustable set point capability eliminates the need for survival heater circuits. The operating mode heater circuits can also be used as survival heater circuits. In the non-operating mode, a lower set point is used.

  17. Integrated optimization of unmanned aerial vehicle task allocation and path planning under steady wind.

    PubMed

    Luo, He; Liang, Zhengzheng; Zhu, Moning; Hu, Xiaoxuan; Wang, Guoqiang

    2018-01-01

Wind has a significant effect on the control of fixed-wing unmanned aerial vehicles (UAVs), resulting in changes in their ground speed and direction, which has an important influence on the results of integrated optimization of UAV task allocation and path planning. The objective of this integrated optimization problem changes from minimizing flight distance to minimizing flight time. In this study, the Euclidean distance between any two targets is expanded to the Dubins path length, considering the minimum turning radius of fixed-wing UAVs. According to the vector relationship between wind speed, UAV airspeed, and UAV ground speed, a method is proposed to calculate the flight time of a UAV between targets. On this basis, a variable-speed Dubins path vehicle routing problem (VS-DP-VRP) model is established with the purpose of minimizing the time required for UAVs to visit all the targets and return to the starting point. A genetic algorithm with purpose-designed crossover and mutation operators is used to solve the model; the results show that it provides an effective UAV task allocation and path planning solution under steady wind.
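The vector relationship between wind, airspeed, and ground speed determines the flight time along a track: the airspeed vector plus the wind vector must sum to a velocity pointing along the desired track. The sketch below, under the simplifying assumption of a straight leg between two targets (the paper uses Dubins paths, of which straight segments are only one piece; function names are illustrative), solves the resulting wind-triangle quadratic for the ground speed:

```python
import math

def ground_speed(airspeed, wind, track):
    """Ground speed g along the unit track vector, from the wind-triangle
    relation |g*track - wind| = airspeed, i.e. the quadratic
    g^2 - 2 g (track . wind) + |wind|^2 - airspeed^2 = 0."""
    dw = track[0] * wind[0] + track[1] * wind[1]
    disc = dw * dw + airspeed**2 - (wind[0]**2 + wind[1]**2)
    if disc < 0:
        raise ValueError("airspeed too low to hold this track in this wind")
    return dw + math.sqrt(disc)

def flight_time(p, q, airspeed, wind):
    """Time to fly straight from p to q at constant airspeed under steady wind."""
    dx, dy = q[0] - p[0], q[1] - p[1]
    dist = math.hypot(dx, dy)
    track = (dx / dist, dy / dist)
    return dist / ground_speed(airspeed, wind, track)
```

With zero wind the flight time reduces to distance over airspeed; a headwind along the track lowers the ground speed and lengthens the leg, which is what shifts the optimization objective from distance to time.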

  18. Integrated optimization of unmanned aerial vehicle task allocation and path planning under steady wind

    PubMed Central

    Liang, Zhengzheng; Zhu, Moning; Hu, Xiaoxuan; Wang, Guoqiang

    2018-01-01

Wind has a significant effect on the control of fixed-wing unmanned aerial vehicles (UAVs), resulting in changes in their ground speed and direction, which has an important influence on the results of integrated optimization of UAV task allocation and path planning. The objective of this integrated optimization problem changes from minimizing flight distance to minimizing flight time. In this study, the Euclidean distance between any two targets is expanded to the Dubins path length, considering the minimum turning radius of fixed-wing UAVs. According to the vector relationship between wind speed, UAV airspeed, and UAV ground speed, a method is proposed to calculate the flight time of a UAV between targets. On this basis, a variable-speed Dubins path vehicle routing problem (VS-DP-VRP) model is established with the purpose of minimizing the time required for UAVs to visit all the targets and return to the starting point. A genetic algorithm with purpose-designed crossover and mutation operators is used to solve the model; the results show that it provides an effective UAV task allocation and path planning solution under steady wind. PMID:29561888

  19. Analytical solution of Schrödinger equation in minimal length formalism for trigonometric potential using hypergeometry method

    NASA Astrophysics Data System (ADS)

    Nurhidayati, I.; Suparmi, A.; Cari, C.

    2018-03-01

The Schrödinger equation has been extended by applying the minimal length formalism to a trigonometric potential. The wave function and energy spectra, which describe the behavior of subatomic particles, were obtained using the hypergeometric method. The results show that the energy increases with both the minimal length parameter and the potential parameter. The energies were calculated numerically using MATLAB.

  20. Vibration suppression for large scale adaptive truss structures using direct output feedback control

    NASA Technical Reports Server (NTRS)

    Lu, Lyan-Ywan; Utku, Senol; Wada, Ben K.

    1993-01-01

    In this article, the vibration control of adaptive truss structures, where the control actuation is provided by length adjustable active members, is formulated as a direct output feedback control problem. A control method named Model Truncated Output Feedback (MTOF) is presented. The method allows the control feedback gain to be determined in a decoupled and truncated modal space in which only the critical vibration modes are retained. The on-board computation required by MTOF is minimal; thus, the method is favorable for the applications of vibration control of large scale structures. The truncation of the modal space inevitably introduces spillover effect during the control process. In this article, the effect is quantified in terms of active member locations, and it is shown that the optimal placement of active members, which minimizes the spillover effect (and thus, maximizes the control performance) can be sought. The problem of optimally selecting the locations of active members is also treated.

  1. Site-directed protein recombination as a shortest-path problem.

    PubMed

    Endelman, Jeffrey B; Silberg, Jonathan J; Wang, Zhen-Gang; Arnold, Frances H

    2004-07-01

Protein function can be tuned using laboratory evolution, in which one rapidly searches through a library of proteins for the properties of interest. In site-directed recombination, n crossovers are chosen in an alignment of p parents to define a set of p(n + 1) peptide fragments. These fragments are then assembled combinatorially to create a library of p^(n+1) proteins. We have developed a computational algorithm to enrich these libraries in folded proteins while maintaining an appropriate level of diversity for evolution. For a given set of parents, our algorithm selects crossovers that minimize the average energy of the library, subject to constraints on the length of each fragment. This problem is equivalent to finding the shortest path between nodes in a network, for which the global minimum can be found efficiently. Our algorithm has a running time of O(N^3 p^2 + N^2 n) for a protein of length N. Adjusting the constraints on fragment length generates a set of optimized libraries with varying degrees of diversity. By comparing these optima for different sets of parents, we rapidly determine which parents yield the lowest energy libraries.

  2. Lossless quantum data compression with exponential penalization: an operational interpretation of the quantum Rényi entropy.

    PubMed

    Bellomo, Guido; Bosyk, Gustavo M; Holik, Federico; Zozor, Steeve

    2017-11-07

    Based on the problem of quantum data compression in a lossless way, we present here an operational interpretation for the family of quantum Rényi entropies. In order to do this, we appeal to a very general quantum encoding scheme that satisfies a quantum version of the Kraft-McMillan inequality. Then, in the standard situation, where one is intended to minimize the usual average length of the quantum codewords, we recover the known results, namely that the von Neumann entropy of the source bounds the average length of the optimal codes. Otherwise, we show that by invoking an exponential average length, related to an exponential penalization over large codewords, the quantum Rényi entropies arise as the natural quantities relating the optimal encoding schemes with the source description, playing an analogous role to that of von Neumann entropy.
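For intuition, the classical counterparts of the quantities in this record are easy to compute; the quantum scheme is analogous but operates on codeword states. A hedged sketch of the classical versions (the Kraft-McMillan sum, the exponential "Campbell" average length, and the Rényi entropy; function names are illustrative):

```python
import math

def kraft_sum(lengths, D=2):
    """Kraft-McMillan sum: a uniquely decodable D-ary code satisfies sum <= 1."""
    return sum(D**(-l) for l in lengths)

def exp_avg_length(probs, lengths, t):
    """Exponential average length (1/t) * log2(sum_i p_i * 2^(t*l_i)),
    which penalizes long codewords; as t -> 0 it recovers sum_i p_i * l_i."""
    return (1.0 / t) * math.log2(sum(p * 2**(t * l) for p, l in zip(probs, lengths)))

def renyi_entropy(probs, alpha):
    """Rényi entropy H_alpha; in Campbell's classical result it bounds the
    optimal exponential average length for t = (1 - alpha) / alpha."""
    return math.log2(sum(p**alpha for p in probs)) / (1 - alpha)
```

As in the abstract, the standard (t -> 0) average is governed by the Shannon/von Neumann entropy, while the exponentially penalized average brings in the Rényi family.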

  3. Dendritic and Axonal Wiring Optimization of Cortical GABAergic Interneurons.

    PubMed

    Anton-Sanchez, Laura; Bielza, Concha; Benavides-Piccione, Ruth; DeFelipe, Javier; Larrañaga, Pedro

    2016-10-01

    The way in which a neuronal tree expands plays an important role in its functional and computational characteristics. We aimed to study the existence of an optimal neuronal design for different types of cortical GABAergic neurons. To do this, we hypothesized that both the axonal and dendritic trees of individual neurons optimize brain connectivity in terms of wiring length. We took the branching points of real three-dimensional neuronal reconstructions of the axonal and dendritic trees of different types of cortical interneurons and searched for the minimal wiring arborization structure that respects the branching points. We compared the minimal wiring arborization with real axonal and dendritic trees. We tested this optimization problem using a new approach based on graph theory and evolutionary computation techniques. We concluded that neuronal wiring is near-optimal in most of the tested neurons, although the wiring length of dendritic trees is generally nearer to the optimum. Therefore, wiring economy is related to the way in which neuronal arborizations grow irrespective of the marked differences in the morphology of the examined interneurons.
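A simple baseline for the "minimal wiring arborization" over fixed branching points is the Euclidean minimum spanning tree; the paper's own optimization uses graph theory and evolutionary computation and can allow richer structures, so this sketch is only an illustrative building block (an MST connects the given points without adding junctions, so a Steiner-like tree could be shorter):

```python
import math

def mst_length(points):
    """Total edge length of a Euclidean minimum spanning tree over 3-D
    branching points, computed with Prim's algorithm."""
    n = len(points)
    in_tree = [False] * n
    best = [math.inf] * n   # cheapest connection of each point to the tree
    best[0] = 0.0
    total = 0.0
    for _ in range(n):
        u = min((i for i in range(n) if not in_tree[i]), key=lambda i: best[i])
        in_tree[u] = True
        total += best[u]
        for v in range(n):
            if not in_tree[v]:
                best[v] = min(best[v], math.dist(points[u], points[v]))
    return total
```

Comparing such a minimal wiring length with the measured total length of a reconstructed axonal or dendritic tree gives a wiring-economy ratio of the kind discussed in the record.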

  4. Quantum Gravitational Corrections to the Real Klein-Gordon Field in the Presence of a Minimal Length

    NASA Astrophysics Data System (ADS)

    Moayedi, S. K.; Setare, M. R.; Moayeri, H.

    2010-09-01

The (D+1)-dimensional (β, β')-two-parameter Lorentz-covariant deformed algebra introduced by Quesne and Tkachuk (J. Phys. A: Math. Gen. 39, 10909, 2006) leads to a nonzero minimal uncertainty in position (minimal length). The Klein-Gordon equation in a (3+1)-dimensional space-time described by the Quesne-Tkachuk Lorentz-covariant deformed algebra is studied in the case where β' = 2β, up to first order in the deformation parameter β. It is shown that the modified Klein-Gordon equation, which contains a fourth-order derivative of the wave function, describes two massive particles with different masses. We have shown that physically acceptable mass states can only exist for β < 1/(8m^2 c^2), which leads to an isotropic minimal length in the interval 10^-17 m < (ΔX_i)_0 < 10^-15 m. Finally, we have shown that the above estimation of minimal length is in good agreement with the results obtained in previous investigations.

  5. Factors in hospice patients' length of stay.

    PubMed

    Frantz, T T; Lawrence, J C; Somov, P G; Somova, M J

    1999-01-01

Many hospice patients are referred comparatively late in the course of their disease progression, minimizing the time available for services to the patient, caregivers, and families. Untimely referrals can create organizational, clinical, and emotional problems for all involved; a better understanding of the factors related to length of stay (LOS) in hospice is therefore necessary. This study investigated the relationship between LOS and selected variables. There were significant differences in LOS by diagnosis, physician type, and referral source. No significant differences were found in LOS by gender or insurance type. Factors related to LOS can assist hospices in identifying those patients more likely to have longer stays. Additionally, administrators may tailor their programs to meet the needs of the individual hospice.

  6. Optimization of multimagnetometer systems on a spacecraft

    NASA Technical Reports Server (NTRS)

    Neubauer, F. M.

    1975-01-01

    The problem of optimizing the position of magnetometers along a boom of given length to yield a minimized total error is investigated. The discussion is limited to at most four magnetometers, which seems to be a practical limit due to weight, power, and financial considerations. The outlined error analysis is applied to some illustrative cases. The optimal magnetometer locations, for which the total error is minimum, are computed for given boom length, instrument errors, and very conservative magnetic field models characteristic for spacecraft with only a restricted or ineffective magnetic cleanliness program. It is shown that the error contribution by the magnetometer inaccuracy is increased as the number of magnetometers is increased, whereas the spacecraft field uncertainty is diminished by an appreciably larger amount.

  7. Large-scale evidence of dependency length minimization in 37 languages

    PubMed Central

    Futrell, Richard; Mahowald, Kyle; Gibson, Edward

    2015-01-01

    Explaining the variation between human languages and the constraints on that variation is a core goal of linguistics. In the last 20 y, it has been claimed that many striking universals of cross-linguistic variation follow from a hypothetical principle that dependency length—the distance between syntactically related words in a sentence—is minimized. Various models of human sentence production and comprehension predict that long dependencies are difficult or inefficient to process; minimizing dependency length thus enables effective communication without incurring processing difficulty. However, despite widespread application of this idea in theoretical, empirical, and practical work, there is not yet large-scale evidence that dependency length is actually minimized in real utterances across many languages; previous work has focused either on a small number of languages or on limited kinds of data about each language. Here, using parsed corpora of 37 diverse languages, we show that overall dependency lengths for all languages are shorter than conservative random baselines. The results strongly suggest that dependency length minimization is a universal quantitative property of human languages and support explanations of linguistic variation in terms of general properties of human information processing. PMID:26240370
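The quantity at issue here is easy to state concretely: the dependency length of a sentence is the summed distance between each word and its syntactic head. A sketch assuming CoNLL-style head annotation (the corpus study's actual random baselines are more carefully controlled, e.g. preserving projectivity; this is only the naive version):

```python
import random

def dependency_length(heads):
    """Total dependency length: sum of |position of word - position of head|.
    heads[i] is the 1-based index of word (i+1)'s head, 0 for the root."""
    return sum(abs(h - (i + 1)) for i, h in enumerate(heads) if h != 0)

def random_baseline(heads, trials=1000, seed=0):
    """Average dependency length over uniformly random reorderings of the
    words, keeping the dependency tree fixed -- a naive random baseline."""
    rng = random.Random(seed)
    n = len(heads)
    total = 0.0
    for _ in range(trials):
        order = list(range(1, n + 1))
        rng.shuffle(order)
        pos = {w: p for p, w in enumerate(order, start=1)}
        total += sum(abs(pos[h] - pos[i + 1])
                     for i, h in enumerate(heads) if h != 0)
    return total / trials
```

For a real corpus sentence, the finding reported above corresponds to the observed `dependency_length` falling below the random baseline on average.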

  8. Convergence of neural networks for programming problems via a nonsmooth Lojasiewicz inequality.

    PubMed

    Forti, Mauro; Nistri, Paolo; Quincampoix, Marc

    2006-11-01

This paper considers a class of neural networks (NNs) for solving linear programming (LP) problems, convex quadratic programming (QP) problems, and nonconvex QP problems where an indefinite quadratic objective function is subject to a set of affine constraints. The NNs are characterized by constraint neurons modeled by ideal diodes with vertical segments in their characteristic, which enable the implementation of an exact penalty method. A new method is exploited to address convergence of trajectories, based on a nonsmooth Lojasiewicz inequality for the generalized gradient vector field describing the NN dynamics. The method makes it possible to prove that each forward trajectory of the NN has finite length, and as a consequence converges toward a singleton. Furthermore, by means of a quantitative evaluation of the Lojasiewicz exponent at the equilibrium points, the following results on the convergence rate of trajectories are established: (1) for nonconvex QP problems, each trajectory is either exponentially convergent, or convergent in finite time, toward a singleton belonging to the set of constrained critical points; (2) for convex QP problems, the same result as in (1) holds; moreover, the singleton belongs to the set of global minimizers; and (3) for LP problems, each trajectory converges in finite time to a singleton belonging to the set of global minimizers. These results, which improve previous results obtained via the Lyapunov approach, hold independently of the nature of the set of equilibrium points, and in particular even when the NN possesses infinitely many nonisolated equilibrium points.

  9. Implementation of pattern generation algorithm in forming Gilmore and Gomory model for two dimensional cutting stock problem

    NASA Astrophysics Data System (ADS)

    Octarina, Sisca; Radiana, Mutia; Bangun, Putra B. J.

    2018-01-01

The two-dimensional cutting stock problem (CSP) is the problem of determining cutting patterns for a set of stock with standard length and width to fulfill the demand for items. Cutting patterns were determined so as to minimize the usage of stock. This research implemented a pattern generation algorithm to formulate the Gilmore and Gomory model of the two-dimensional CSP. The constraints of the Gilmore and Gomory model ensure that the strips cut in the first stage are used in the second stage. The Branch and Cut method was used to obtain the optimal solution. The results show that many pattern combinations arise when the optimal cutting patterns of the first stage are combined with those of the second stage.

  10. Final Report - High-Order Spectral Volume Method for the Navier-Stokes Equations On Unstructured Tetrahedral Grids

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Z J

    2012-12-06

The overriding objective for this project is to develop an efficient and accurate method for capturing strong discontinuities and fine smooth flow structures of disparate length scales with unstructured grids, and demonstrate its potential for problems relevant to DOE. More specifically, we plan to achieve the following objectives: 1. Extend the SV method to three dimensions, and develop a fourth-order accurate SV scheme for tetrahedral grids. Optimize the SV partition by minimizing a form of the Lebesgue constant. Verify the order of accuracy using scalar conservation laws with an analytical solution; 2. Extend the SV method to the Navier-Stokes equations for the simulation of viscous flow problems. Two promising approaches to compute the viscous fluxes will be tested and analyzed; 3. Parallelize the 3D viscous SV flow solver using domain decomposition and message passing. Optimize the cache performance of the flow solver by designing data structures that minimize data access times; 4. Demonstrate the SV method with a wide range of flow problems including both discontinuities and complex smooth structures. The objectives remain the same as those outlined in the original proposal. We anticipate no technical obstacles in meeting these objectives.

  11. Simultaneous multislice refocusing via time optimal control.

    PubMed

    Rund, Armin; Aigner, Christoph Stefan; Kunisch, Karl; Stollberger, Rudolf

    2018-02-09

    Joint design of minimum duration RF pulses and slice-selective gradient shapes for MRI via time optimal control with strict physical constraints, and its application to simultaneous multislice imaging. The minimization of the pulse duration is cast as a time optimal control problem with inequality constraints describing the refocusing quality and physical constraints. It is solved with a bilevel method, where the pulse length is minimized in the upper level, and the constraints are satisfied in the lower level. To address the inherent nonconvexity of the optimization problem, the upper level is enhanced with new heuristics for finding a near global optimizer based on a second optimization problem. A large set of optimized examples shows an average temporal reduction of 87.1% for double diffusion and 74% for turbo spin echo pulses compared to power independent number of slices pulses. The optimized results are validated on a 3T scanner with phantom measurements. The presented design method computes minimum duration RF pulse and slice-selective gradient shapes subject to physical constraints. The shorter pulse duration can be used to decrease the effective echo time in existing echo-planar imaging or echo spacing in turbo spin echo sequences. © 2018 International Society for Magnetic Resonance in Medicine.

  12. Fingerprints selection for topological localization

    NASA Astrophysics Data System (ADS)

    Popov, Vladimir

    2017-07-01

Problems of visual navigation are extensively studied in contemporary robotics. In particular, we can mention different problems of visual landmark selection: the selection of a minimal set of visual landmarks, the selection of partially distinguishable guards, and the placement of visual landmarks. In this paper, we consider one-dimensional color panoramas. Such panoramas can be used for creating fingerprints. Fingerprints give us unique identifiers for visually distinct locations by recovering statistically significant features. Fingerprints can be used as visual landmarks for the solution of various problems of mobile robot navigation. In this paper, we consider a method for the automatic generation of fingerprints. In particular, we consider the bounded Post correspondence problem and its applications to consensus fingerprints and topological localization. We propose an efficient approach to solving the bounded Post correspondence problem: an explicit reduction from the decision version of the problem to the satisfiability problem. We present the results of computational experiments for different satisfiability algorithms. In robotic experiments, we consider the average accuracy of reaching the target point for different route lengths and types of fingerprints.

  13. Optimization of municipal solid waste collection and transportation routes.

    PubMed

    Das, Swapan; Bhattacharyya, Bidyut Kr

    2015-09-01

Optimization of municipal solid waste (MSW) collection and transportation through source separation has become one of the major concerns in MSW management system design, because existing MSW management systems suffer from high collection and transportation costs. Waste sources are generally scattered throughout a city in a heterogeneous way, which increases waste collection and transportation costs in the waste management system. A shortest-route collection and transportation strategy can therefore effectively reduce these costs. In this paper, we propose an optimal MSW collection and transportation scheme that focuses on the problem of minimizing the length of each waste collection and transportation route. We first formulate the MSW collection and transportation problem as a mixed integer program. Moreover, we propose a heuristic solution for the waste collection and transportation problem that can provide an optimal way for waste collection and transportation. Extensive simulations and real testbed results show that the proposed solution can significantly improve MSW performance. Results show that the proposed scheme is able to reduce the total waste collection path length by more than 30%. Copyright © 2015 Elsevier Ltd. All rights reserved.
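One simple constructive heuristic for shortening a collection route is the greedy nearest-neighbor rule: start at the depot, repeatedly visit the nearest unserved collection point, and return. This sketch is only illustrative of the route-length-minimization idea; it is not the authors' mixed-integer formulation or their heuristic, and the names are hypothetical:

```python
import math

def route_length(route, pts):
    """Total Euclidean length of a route given as a list of point indices."""
    return sum(math.dist(pts[route[i]], pts[route[i + 1]])
               for i in range(len(route) - 1))

def nearest_neighbor_route(pts, depot=0):
    """Greedy collection route: from the depot, always visit the nearest
    unserved collection point, then close the tour back at the depot."""
    unvisited = set(range(len(pts))) - {depot}
    route = [depot]
    while unvisited:
        nxt = min(unvisited, key=lambda j: math.dist(pts[route[-1]], pts[j]))
        route.append(nxt)
        unvisited.remove(nxt)
    route.append(depot)
    return route
```

Greedy construction gives a quick feasible route that local-improvement steps (or an exact mixed-integer solver, as in the paper) can then shorten further.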

  14. A strategy for reducing turnaround time in design optimization using a distributed computer system

    NASA Technical Reports Server (NTRS)

    Young, Katherine C.; Padula, Sharon L.; Rogers, James L.

    1988-01-01

There is a need to explore methods for reducing the lengthy computer turnaround or clock time associated with engineering design problems. Different strategies can be employed to reduce this turnaround time. One strategy is to run validated analysis software on a network of existing smaller computers so that portions of the computation can be done in parallel. This paper focuses on the implementation of this method using two types of problems. The first type is a traditional structural design optimization problem, which is characterized by a simple data flow and a complicated analysis. The second type of problem uses an existing computer program designed to study multilevel optimization techniques. This problem is characterized by complicated data flow and a simple analysis. The paper shows that distributed computing can be a viable means of reducing computational turnaround time for engineering design problems that lend themselves to decomposition. Parallel computing can be accomplished with minimal cost in terms of hardware and software.

  15. Acoustic transducer apparatus with reduced thermal conduction

    NASA Technical Reports Server (NTRS)

    Lierke, Ernst G. (Inventor); Leung, Emily W. (Inventor); Bhat, Balakrishna T. (Inventor)

    1990-01-01

    A horn is described for transmitting sound from a transducer to a heated chamber containing an object which is levitated by acoustic energy while it is heated to a molten state, which minimizes heat transfer to thereby minimize heating of the transducer, minimize temperature variation in the chamber, and minimize loss of heat from the chamber. The forward portion of the horn, which is the portion closest to the chamber, has holes that reduce its cross-sectional area to minimize the conduction of heat along the length of the horn, with the entire front portion of the horn being rigid and having an even front face to efficiently transfer high frequency acoustic energy to fluid in the chamber. In one arrangement, the horn has numerous rows of holes extending perpendicular to the length of horn, with alternate rows extending perpendicular to one another to form a sinuous path for the conduction of heat along the length of the horn.

  16. Detection of ɛ-ergodicity breaking in experimental data—A study of the dynamical functional sensibility

    NASA Astrophysics Data System (ADS)

    Loch-Olszewska, Hanna; Szwabiński, Janusz

    2018-05-01

The ergodicity breaking phenomenon has long attracted the interest of scientists trying to uncover its biological and chemical origins. Unfortunately, testing ergodicity in real-life data can be challenging, as sample paths are often too short to approximate their asymptotic behaviour. In this paper, the authors analyze the minimal lengths of empirical trajectories needed to claim ɛ-ergodicity, based on two commonly used variants of an autoregressive fractionally integrated moving average model. The dependence of the dynamical functional on the parameters of the process is studied. The problem of choosing a proper ɛ for ɛ-ergodicity testing is discussed with respect to, especially, the variation of the innovation process and the data sample length, and illustrated with two real-life examples.

  17. Detection of ε-ergodicity breaking in experimental data-A study of the dynamical functional sensibility.

    PubMed

    Loch-Olszewska, Hanna; Szwabiński, Janusz

    2018-05-28

The ergodicity breaking phenomenon has long attracted the interest of scientists trying to uncover its biological and chemical origins. Unfortunately, testing ergodicity in real-life data can be challenging, as sample paths are often too short to approximate their asymptotic behaviour. In this paper, the authors analyze the minimal lengths of empirical trajectories needed to claim ε-ergodicity, based on two commonly used variants of an autoregressive fractionally integrated moving average model. The dependence of the dynamical functional on the parameters of the process is studied. The problem of choosing a proper ε for ε-ergodicity testing is discussed with respect to, especially, the variation of the innovation process and the data sample length, and illustrated with two real-life examples.

  18. Quantum theory of the generalised uncertainty principle

    NASA Astrophysics Data System (ADS)

    Bruneton, Jean-Philippe; Larena, Julien

    2017-04-01

We significantly extend previous works on the Hilbert space representations of the generalized uncertainty principle (GUP) in 3 + 1 dimensions of the form [X_i, P_j] = i F_{ij}, where F_{ij} = f(P^2) δ_{ij} + g(P^2) P_i P_j for any functions f and g. However, we restrict our study to the case of commuting X's. We focus in particular on the symmetries of the theory, and on the minimal length that emerges in some cases. We first show that, at the algebraic level, there exists an unambiguous mapping between the GUP with a deformed quantum algebra and a quadratic Hamiltonian, and a standard Heisenberg algebra of operators with an aquadratic Hamiltonian, provided the boost sector of the symmetries is modified accordingly. The theory can also be mapped to a completely standard quantum mechanics with standard symmetries, but with momentum-dependent position operators. Next, we investigate the Hilbert space representations of these algebraically equivalent models, and focus specifically on whether they exhibit a minimal length. We carry out the functional analysis of the various operators involved, and show that the appearance of a minimal length critically depends on the relationship between the generators of translations and the physical momenta. In particular, because this relationship is preserved by the algebraic mapping presented in this paper, when a minimal length is present in the standard GUP, it is also present in the corresponding aquadratic Hamiltonian formulation, despite the perfectly standard algebra of this model. In general, a minimal length requires bounded generators of translations, i.e. a specific kind of quantization of space, and this depends on the precise shape of the function f defined previously. This result provides an elegant and unambiguous classification of which universal quantum gravity corrections lead to the emergence of a minimal length.

  19. Friction pull plug welding: chamfered heat sink pull plug design

    NASA Technical Reports Server (NTRS)

    Coletta, Edmond R. (Inventor); Cantrell, Mark A. (Inventor)

    2002-01-01

Friction Pull Plug Welding (FPPW) is a solid state repair process for defects up to one inch in length, requiring only single-sided tooling (OSL) for use on flight hardware. Experimental data have shown that the mass of the plug remaining above the top of the plate surface after a weld is completed (the plug heat sink) affects the bonding at the plug top. A minimized heat sink ensures complete bonding of the plug to the plate at the plug top. However, with a minimal heat sink, three major problems can arise: the entire plug could be pulled through the plate hole, the central portion of the plug could separate along grain boundaries, or the plug top hat could separate from the body. The Chamfered Heat Sink Pull Plug Design allows for complete bonding along the ISL interface through an outside-diameter minimal-mass heat sink, while maintaining enough central mass in the plug to prevent plug pull-through, central separation, and plug top hat separation.

  20. Collective motion in prolate γ-rigid nuclei within minimal length concept via a quantum perturbation method

    NASA Astrophysics Data System (ADS)

    Chabab, M.; El Batoul, A.; Lahbas, A.; Oulne, M.

    2018-05-01

    Based on the minimal length concept inspired by Heisenberg algebra, a closed analytical formula is derived for the energy spectrum of the prolate γ-rigid Bohr-Mottelson Hamiltonian of nuclei, within a quantum perturbation method (QPM), by considering a scaled Davidson potential in the β shape variable. In the resulting solution, called X(3)-D-ML, the ground state and the first β-band are studied as functions of the free parameters. Introducing the minimal length concept within a QPM makes the model a flexible and powerful approach for describing nuclear collective excitations of a variety of vibrational-like nuclei. The introduction of scaling parameters in the Davidson potential enables a physical minimum of the potential to be obtained, in contrast with previous works. The analysis of the corrected wave function, as well as the probability density distribution, shows that the minimal length parameter has a physical upper bound.

  1. Interface stability in a slowly rotating low-gravity tank: Theory

    NASA Technical Reports Server (NTRS)

    Gans, R. F.; Leslie, F. W.

    1986-01-01

    The equilibrium configuration of a bubble in a rotating liquid confined by flat axial boundaries (baffles) is found. The maximum baffle spacing assuring bubble confinement is bounded from above by the natural length of a bubble in an infinite medium under the same conditions. Effects of nonzero contact angle are minimal. The problem of dynamic stability is posed. It can be solved in the limit of rapid rotation, for which the bubble is a long cylinder. Instability is to axisymmetric perturbations; nonaxisymmetric perturbations are stable. The stability criterion agrees with earlier results.

  2. Ultra-low-loss tapered optical fibers with minimal lengths

    NASA Astrophysics Data System (ADS)

    Nagai, Ryutaro; Aoki, Takao

    2014-11-01

    We design and fabricate ultra-low-loss tapered optical fibers (TOFs) with minimal lengths. We first optimize variations of the torch scan length using the flame-brush method for fabricating TOFs with taper angles that satisfy the adiabaticity criteria. We accordingly fabricate TOFs with optimal shapes and compare their transmission to TOFs with a constant taper angle and TOFs with an exponential shape. The highest transmission measured for TOFs with an optimal shape is in excess of 99.7 % with a total TOF length of only 23 mm, whereas TOFs with a constant taper angle of 2 mrad reach 99.6 % transmission for a 63 mm TOF length.

  3. A Rational Approach to Determine Minimum Strength Thresholds in Novel Structural Materials

    NASA Technical Reports Server (NTRS)

    Schur, Willi W.; Bilen, Canan; Sterling, Jerry

    2003-01-01

    Design of safe and survivable structures requires the availability of guaranteed minimum strength thresholds for structural materials to enable a meaningful comparison of strength requirement and available strength. This paper develops a procedure for determining such a threshold, with a desired degree of confidence, for structural materials with little or no industrial experience. The problem arose in attempting to use a new, highly weight-efficient structural load tendon material to achieve a lightweight super-pressure balloon. The developed procedure applies to lineal (one-dimensional) structural elements. One important aspect of the formulation is that it extrapolates to expected probability distributions for long-length specimen samples from a hypothesized probability distribution obtained from a shorter-length specimen sample. The use of the developed procedure is illustrated using both real and simulated data.

  4. Linear Matrix Inequality Method for a Quadratic Performance Index Minimization Problem with a class of Bilinear Matrix Inequality Conditions

    NASA Astrophysics Data System (ADS)

    Tanemura, M.; Chida, Y.

    2016-09-01

    Many control system design problems are expressed as a performance index minimization under BMI conditions. A minimization problem expressed with LMIs, however, can be solved easily because of the convexity of LMIs. Therefore, many researchers have studied transforming a variety of control design problems into convex minimization problems expressed with LMIs. This paper proposes an LMI method for a quadratic performance index minimization problem with a class of BMI conditions. The minimization problem treated in this paper includes design problems such as state-feedback gain design for switched systems. The effectiveness of the proposed method is verified through a state-feedback gain design for switched systems and a numerical simulation using the designed feedback gains.
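The record above contrasts BMI-constrained problems with easily solvable LMIs. As a minimal, generic illustration of LMI feasibility (not the paper's method), the Lyapunov inequality AᵀP + PA ≺ 0, P ≻ 0 can be certified for a fixed stable A by solving the corresponding Lyapunov equation; the matrix A below is a made-up example.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# The simplest control LMI is the Lyapunov inequality A^T P + P A < 0 with
# P > 0, which certifies stability of dx/dt = A x.  For a fixed stable A,
# a feasible P is obtained by solving the Lyapunov *equation*
#     A^T P + P A = -I,
# and checking P > 0 then verifies the LMI.  Convexity is what makes such
# feasibility problems (and LMI-expressed minimizations) tractable.
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])              # stable: eigenvalues -1 and -2
P = solve_continuous_lyapunov(A.T, -np.eye(2))
residual = A.T @ P + P @ A + np.eye(2)    # should be numerically ~0
eigs_P = np.linalg.eigvalsh(P)            # should all be positive
print(eigs_P)
```
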

  5. Overuse of helicopter transport in the minimally injured: A health care system problem that should be corrected.

    PubMed

    Vercruysse, Gary A; Friese, Randall S; Khalil, Mazhar; Ibrahim-Zada, Irada; Zangbar, Bardiya; Hashmi, Ammar; Tang, Andrew; O'Keeffe, Terrence; Kulvatunyou, Narong; Green, Donald J; Gries, Lynn; Joseph, Bellal; Rhee, Peter M

    2015-03-01

    Mortality benefit has been demonstrated for trauma patients transported via helicopter, but at great cost. This study identified patients who did not benefit from helicopter transport to our facility and demonstrates the potential cost savings had they been transported instead by ground. We performed a 6-year (2007-2013) retrospective analysis of all trauma patients presenting to our center. Patients with a known mode of transfer were included in the study. Patients with missing data and those who were dead on arrival were excluded. Patients were then dichotomized into helicopter transfer and ground transfer groups. A subanalysis was performed between minimally injured patients (ISS < 5) in the two groups after propensity score matching for demographics, injury severity parameters, and admission vital parameters. Groups were then compared for hospital and emergency department length of stay, early discharge, and mortality. Of 5,202 transferred patients, 18.9% (981) were transferred via helicopter and 76.7% (3,992) via ground transport. Helicopter-transferred patients had longer hospital (p = 0.001) and intensive care unit (p = 0.001) stays. There was no difference in mortality between the groups (p = 0.6). On subanalysis of minimally injured patients, there was no difference in hospital length of stay (p = 0.1) or early discharge (p = 0.6) between the helicopter transfer and ground transfer groups. The average helicopter transfer cost at our center was $18,000, totaling $4,860,000 for the 270 minimally injured helicopter-transferred patients. Nearly one third of patients transported by helicopter were minimally injured. Policies to identify patients who do not benefit from helicopter transport should be developed. A significant reduction in transport cost can be made by judicious selection of patients. Educating the physicians who call for transport, and identifying alternate means of transportation, would be both safe and financially beneficial to our system. Epidemiologic study, level III. Therapeutic study, level IV.

  6. Attitude control of the LACE satellite: A gravity gradient stabilized spacecraft

    NASA Technical Reports Server (NTRS)

    Ivory, J. E.; Campion, R. E.; Bakeris, D. F.

    1993-01-01

    The Low-power Atmospheric Compensation Experiment (LACE) satellite was launched in February 1990 by the Naval Research Laboratory. The spacecraft's pitch and roll are maintained with a gravity gradient boom and a magnetic damper. There are two other booms with much smaller tip masses, one in the velocity direction (lead boom) of variable length and the other in the opposite direction (balance boom) also of variable length. In addition, the system uses a momentum wheel with its axis perpendicular to the plane of the orbit to control yaw and keep these booms in the orbital plane. The primary LACE experiment requires that the lead boom be moved to lengths varying from 4.6 m to 45.7 m. This and other onboard experiments require that the spacecraft attitude remain within tight constraints while operating. The problem confronting the satellite operators was to move the lead boom without inducing a net spacecraft attitude disturbance. A description of a method used to change the length of the lead boom while minimizing the disturbance to the attitude of the spacecraft is given. Deadbeating to dampen pitch oscillations has also been accomplished by maneuvering either the lead or balance boom and is discussed.

  7. Physics on the Smallest Scales: An Introduction to Minimal Length Phenomenology

    ERIC Educational Resources Information Center

    Sprenger, Martin; Nicolini, Piero; Bleicher, Marcus

    2012-01-01

    Many modern theories which try to unify gravity with the Standard Model of particle physics, such as e.g. string theory, propose two key modifications to the commonly known physical theories: the existence of additional space dimensions; the existence of a minimal length distance or maximal resolution. While extra dimensions have received a wide…

  8. Modelling DC responses of 3D complex fracture networks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Beskardes, Gungor Didem; Weiss, Chester Joseph

    Here, the determination of the geometrical properties of fractures plays a critical role in many engineering problems to assess the current hydrological and mechanical states of geological media and to predict their future states. However, numerical modeling of geoelectrical responses in realistic fractured media has been challenging due to the explosive computational cost imposed by the explicit discretizations of fractures at multiple length scales, which often brings about a tradeoff between computational efficiency and geologic realism. Here, we use the hierarchical finite element method to model electrostatic response of realistically complex 3D conductive fracture networks with minimal computational cost.

  9. Modelling DC responses of 3D complex fracture networks

    DOE PAGES

    Beskardes, Gungor Didem; Weiss, Chester Joseph

    2018-03-01

    Here, the determination of the geometrical properties of fractures plays a critical role in many engineering problems to assess the current hydrological and mechanical states of geological media and to predict their future states. However, numerical modeling of geoelectrical responses in realistic fractured media has been challenging due to the explosive computational cost imposed by the explicit discretizations of fractures at multiple length scales, which often brings about a tradeoff between computational efficiency and geologic realism. Here, we use the hierarchical finite element method to model electrostatic response of realistically complex 3D conductive fracture networks with minimal computational cost.

  10. Gender and age related differences in foot morphology.

    PubMed

    Tomassoni, Daniele; Traini, Enea; Amenta, Francesco

    2014-12-01

    This study assessed age-related changes in foot morphology with a view to developing appropriate footwear, with particular reference to the elderly. Anatomical parameters such as foot length, circumference and height, and ankle length, circumference and height were assessed in a sample of males (n=577) and females (n=528) divided into three age groups: young-adult (20-25 years), adult (35-55 years), and old (65-70 years) individuals. In terms of gender differences, in young-adult individuals the only sex-related morphological difference observed was a significantly shorter foot length in females. In adult subjects the morphological parameters investigated were significantly lower in females even after normalization for foot length. In old individuals, no differences in the parameters were found after normalization for foot length. Comparative analysis of morphometric data between young-adult and adult individuals revealed that the instep length was smaller in adults, whereas the opposite was observed for great toe and medial foot arch height. Ankle length was greater in adult than in young-adult individuals, whereas ankle circumference and height were smaller. In old vs adult individuals, foot circumference showed the most relevant age-related differences. Foot anatomy presents specific characteristics at different ages of life, and the ideal footwear should take these characteristics into account. This is true primarily for the elderly, to minimize the risk of falls or of other problems related to inappropriate footwear. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  11. Custom map projections for regional groundwater models

    USGS Publications Warehouse

    Kuniansky, Eve L.

    2017-01-01

    For regional groundwater flow models (areas greater than 100,000 km²), an improper choice of map projection parameters can introduce model error in boundary conditions that depend on area (recharge or evapotranspiration simulated by applying a rate to the cell area from the model discretization) and on length (rivers simulated with a head-dependent flux boundary). Smaller model areas can use local map coordinates, such as State Plane (United States) or Universal Transverse Mercator (correct zone), without introducing large errors. Map projections vary in order to preserve one or more of the following properties: area, shape, distance (length), or direction. Numerous map projections have been developed for different purposes, as all four properties cannot be preserved simultaneously. Preservation of area and length is most critical for groundwater models. The Albers equal-area conic projection with custom standard parallels, selected by dividing the north-south extent by 6 and placing the standard parallels one sixth of that span inside the southern and northern limits, preserves both area and length for continental areas at mid-latitudes oriented east-west. Custom map projection parameters can also minimize area and length error in non-ideal projections. One must also use consistent vertical and horizontal datums for all geographic data. The generalized polygon for the Floridan aquifer system study area (306,247.59 km²) is used to provide quantitative examples of the effect of map projections on length and area under different projections and parameter choices. Use of an improper map projection is one model construction problem that is easily avoided.
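The "1/6 rule" for custom Albers standard parallels described above is simple arithmetic; the sketch below encodes it, with an illustrative latitude extent that is not taken from the record.

```python
# Sketch of the "1/6 rule" for choosing Albers equal-area conic standard
# parallels from a model domain's latitude extent: place each standard
# parallel one sixth of the north-south span inside the domain edges.
def albers_standard_parallels(lat_south: float, lat_north: float):
    span = lat_north - lat_south
    sp1 = lat_south + span / 6.0   # lower standard parallel
    sp2 = lat_north - span / 6.0   # upper standard parallel
    return sp1, sp2

# Hypothetical model domain spanning 24°N to 36°N
sp1, sp2 = albers_standard_parallels(24.0, 36.0)
print(sp1, sp2)  # prints: 26.0 34.0
```

These two values would then be supplied as the projection's standard-parallel parameters alongside consistent horizontal and vertical datums.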

  12. Improved Synthesis and In Vitro Evaluation of an Aptamer Ribosomal Toxin Conjugate

    PubMed Central

    Kelly, Linsley; Kratschmer, Christina; Maier, Keith E.; Yan, Amy C.

    2016-01-01

    Delivery of toxins, such as the ricin A chain, Pseudomonas exotoxin, and gelonin, using antibodies has had some success in inducing specific toxicity in cancer treatments. However, these antibody-toxin conjugates, called immunotoxins, can be bulky, difficult to express, and may induce an immune response upon in vivo administration. We previously reported delivery of a recombinant variant of gelonin (rGel) by the full-length prostate-specific membrane antigen (PSMA) binding aptamer, A9, to potentially circumvent some of these problems. Here, we report a streamlined approach to generating aptamer-rGel conjugates utilizing a chemically synthesized minimized form of the A9 aptamer. Unlike the full-length A9 aptamer, this minimized variant can be chemically synthesized with a 5′ terminal thiol. This facilitates the large scale synthesis and generation of aptamer toxin conjugates linked by a reducible disulfide linkage. Using this approach, we generated aptamer-toxin conjugates and evaluated their binding specificity and toxicity. On PSMA(+) LNCaP prostate cancer cells, the A9.min-rGel conjugate demonstrated an IC50 of ∼60 nM. Additionally, we performed a stability analysis of this conjugate in mouse serum where the conjugate displayed a t1/2 of ∼4 h, paving the way for future in vivo experiments. PMID:27228412

  13. Concentric Tube Robot Design and Optimization Based on Task and Anatomical Constraints

    PubMed Central

    Bergeles, Christos; Gosline, Andrew H.; Vasilyev, Nikolay V.; Codd, Patrick J.; del Nido, Pedro J.; Dupont, Pierre E.

    2015-01-01

    Concentric tube robots are catheter-sized continuum robots that are well suited for minimally invasive surgery inside confined body cavities. These robots are constructed from sets of pre-curved superelastic tubes and are capable of assuming complex 3D curves. The family of 3D curves that the robot can assume depends on the number, curvatures, lengths and stiffnesses of the tubes in its tube set. The robot design problem involves solving for a tube set that will produce the family of curves necessary to perform a surgical procedure. At a minimum, these curves must enable the robot to smoothly extend into the body and to manipulate tools over the desired surgical workspace while respecting anatomical constraints. This paper introduces an optimization framework that utilizes procedure- or patient-specific image-based anatomical models along with surgical workspace requirements to generate robot tube set designs. The algorithm searches for designs that minimize robot length and curvature and for which all paths required for the procedure consist of stable robot configurations. Two mechanics-based kinematic models are used. Initial designs are sought using a model assuming torsional rigidity. These designs are then refined using a torsionally compliant model. The approach is illustrated with clinically relevant examples from neurosurgery and intracardiac surgery. PMID:26380575

  14. Finding local genome rearrangements.

    PubMed

    Simonaitis, Pijus; Swenson, Krister M

    2018-01-01

    The double cut and join (DCJ) model of genome rearrangement is well studied due to its mathematical simplicity and its power to account for the many events that transform gene order. These studies have mostly been devoted to understanding minimum-length scenarios transforming one genome into another. In this paper we search instead for rearrangement scenarios that minimize the number of rearrangements whose breakpoints are unlikely according to some biological criterion. One such criterion has recently become accessible due to the advent of the Hi-C experiment, facilitating the study of 3D spatial distance between breakpoint regions. We establish a link between the minimum number of unlikely rearrangements required by a scenario and the problem of finding a maximum edge-disjoint cycle packing on a certain transformed version of the adjacency graph. This link leads to a 3/2-approximation as well as an exact integer linear programming formulation for our problem, which we prove to be NP-complete. We also present experimental results on fruit flies, showing that Hi-C data are informative when used as a criterion for rearrangements. A new variant of the weighted DCJ distance problem is addressed that ignores scenario length in its objective function. A solution to this problem provides a lower bound on the number of unlikely moves necessary when transforming one gene order into another. This lower bound aids the study of rearrangement scenarios with respect to chromatin structure, and could eventually be used in the design of a fixed-parameter algorithm with a more general objective function.

  15. ON THE THEORY AND PROCEDURE FOR CONSTRUCTING A MINIMAL-LENGTH, AREA-CONSERVING FREQUENCY POLYGON FROM GROUPED DATA.

    ERIC Educational Resources Information Center

    CASE, C. MARSTON

    THIS PAPER IS CONCERNED WITH GRAPHIC PRESENTATION AND ANALYSIS OF GROUPED OBSERVATIONS. IT PRESENTS A METHOD AND SUPPORTING THEORY FOR THE CONSTRUCTION OF AN AREA-CONSERVING, MINIMAL LENGTH FREQUENCY POLYGON CORRESPONDING TO A GIVEN HISTOGRAM. TRADITIONALLY, THE CONCEPT OF A FREQUENCY POLYGON CORRESPONDING TO A GIVEN HISTOGRAM HAS REFERRED TO THAT…

  16. Geodesic active fields--a geometric framework for image registration.

    PubMed

    Zosso, Dominique; Bresson, Xavier; Thiran, Jean-Philippe

    2011-05-01

    In this paper we present a novel geometric framework called geodesic active fields for general image registration. In image registration, one looks for the underlying deformation field that best maps one image onto another. This is a classic ill-posed inverse problem, which is usually solved by adding a regularization term. Here, we propose a multiplicative coupling between the registration term and the regularization term, which turns out to be equivalent to embedding the deformation field in a weighted minimal surface problem. The deformation field is then driven by a minimization flow toward a harmonic map corresponding to the solution of the registration problem. This proposed approach shares close similarities with the well-known geodesic active contours model in image segmentation, where the segmentation term (the edge detector function) is likewise coupled with the regularization term (the length functional) via multiplication. In fact, our geometric model is the exact mathematical generalization to vector fields of the weighted length problem for curves and surfaces introduced by Caselles, Kimmel, and Sapiro. The energy of the deformation field is measured with the Polyakov energy weighted by a suitable image distance, borrowed from standard registration models. We investigate three different weighting functions: the squared error and the approximated absolute error for monomodal images, and the local joint entropy for multimodal images. As compared to specialized state-of-the-art methods tailored for specific applications, our geometric framework offers important contributions. First, our general formulation works on any parametrizable, smooth and differentiable surface, including nonflat and multiscale images. In the latter case, multiscale images are registered at all scales simultaneously, and the relations between space and scale are intrinsically accounted for. Second, this method is, to the best of our knowledge, the first reparametrization-invariant registration method introduced in the literature. Third, the multiplicative coupling between the registration term, i.e. local image discrepancy, and the regularization term naturally results in a data-dependent tuning of the regularization strength. Finally, by choosing the metric on the deformation field one can freely interpolate between classic Gaussian and more interesting anisotropic, TV-like regularization.
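A hedged sketch of the weighted Polyakov energy mentioned above (notation assumed, not taken from the record: deformation field u with components u^i over domain Ω, image-mismatch weight w, induced metric g on the parameter domain, embedding-space metric h):

```latex
E[u] \;=\; \int_\Omega w \,\sqrt{\det g}\;\, g^{\mu\nu}\,
\partial_\mu u^{i}\,\partial_\nu u^{j}\, h_{ij}\; \mathrm{d}x
```

Setting w ≡ 1 recovers the plain Polyakov (harmonic-map) energy; making w an image-distance term is what couples registration and regularization multiplicatively.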

  17. Path optimization with limited sensing ability

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kang, Sung Ha, E-mail: kang@math.gatech.edu; Kim, Seong Jun, E-mail: skim396@math.gatech.edu; Zhou, Haomin, E-mail: hmzhou@math.gatech.edu

    2015-10-15

    We propose a computational strategy to find the optimal path for a mobile sensor with limited coverage to traverse a cluttered region. The goal is to find one of the shortest feasible paths to achieve the complete scan of the environment. We pose the problem in the level set framework, and first consider a related question of placing multiple stationary sensors to obtain full surveillance of the environment. By connecting the stationary locations using the nearest neighbor strategy, we form the initial guess for the path planning problem of the mobile sensor. Then the path is optimized by reducing its length, via solving a system of ordinary differential equations (ODEs), while maintaining the complete scan of the environment. Furthermore, we use intermittent diffusion, which converts the ODEs into stochastic differential equations (SDEs), to find an optimal path whose length is globally minimal. To improve the computation efficiency, we introduce two techniques, one to remove redundant connecting points to reduce the dimension of the system, and the other to deal with the entangled path so the solution can escape local traps. Numerical examples are shown to illustrate the effectiveness of the proposed method.

  18. The perception of minimal structures: performance on open and closed versions of visually presented Euclidean travelling salesperson problems.

    PubMed

    Vickers, Douglas; Bovet, Pierre; Lee, Michael D; Hughes, Peter

    2003-01-01

    The planar Euclidean version of the travelling salesperson problem (TSP) requires finding a tour of minimal length through a two-dimensional set of nodes. Despite the computational intractability of the TSP, people can produce rapid, near-optimal solutions to visually presented versions of such problems. To explain this, MacGregor et al (1999, Perception 28 1417-1428) have suggested that people use a global-to-local process, based on a perceptual tendency to organise stimuli into convex figures. We review the evidence for this idea and propose an alternative, local-to-global hypothesis, based on the detection of least distances between the nodes in an array. We present the results of an experiment in which we examined the relationships between three objective measures and performance measures of optimality and response uncertainty in tasks requiring participants to construct a closed tour or an open path. The data are not well accounted for by a process based on the convex hull. In contrast, results are generally consistent with a locally focused process based initially on the detection of nearest-neighbour clusters. Individual differences are interpreted in terms of a hierarchical process of constructing solutions, and the findings are related to a more general analysis of the role of nearest neighbours in the perception of structure and motion.
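The local-to-global account above starts from nearest-neighbour links; a toy sketch of such a heuristic (illustrative only, not the authors' perceptual model, with invented node coordinates) is:

```python
import math
from itertools import permutations

def nearest_neighbour_tour(nodes):
    """Greedy local-to-global heuristic: repeatedly step to the closest
    unvisited node, then close the tour."""
    unvisited = list(nodes[1:])
    tour = [nodes[0]]
    while unvisited:
        last = tour[-1]
        nxt = min(unvisited, key=lambda p: math.dist(last, p))
        unvisited.remove(nxt)
        tour.append(nxt)
    return tour

def tour_length(tour):
    """Total length of the closed tour through the given node order."""
    return sum(math.dist(tour[i], tour[(i + 1) % len(tour)])
               for i in range(len(tour)))

pts = [(0, 0), (1, 0), (4, 1), (1, 2), (0, 3)]
greedy = tour_length(nearest_neighbour_tour(pts))
# Exhaustive optimum for comparison (feasible only for tiny instances)
best = min(tour_length([pts[0]] + list(p)) for p in permutations(pts[1:]))
print(greedy, best)   # greedy is near, but not always at, the optimum
```
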

  19. Advantageous new conic cannula for spine cement injection.

    PubMed

    González, Sergio Gómez; Vlad, María Daniela; López, José López; Aguado, Enrique Fernández

    2014-09-01

    Experimental study to characterize the influence of the cannula geometry on both the pressure drop and the cement flow velocity established along the cannula. To investigate how the new experimental cannula geometries can affect the extravertebral injection pressure and the velocity profiles established along the cannula during the injection process. The vertebroplasty procedure is used to treat vertebral compression fractures. Vertebra infiltration is favored by the use of suitable: (1) syringes or injector devices; (2) polymer or ceramic bone cements; and (3) cannulas. However, the clinical use of ceramic bone cement has been limited due to press-filtering problems. Thus, new approaches concerning the cannula geometry are needed to minimize the press-filtering of calcium phosphate-based bone cements and thereby broaden their possible applications. Straight, conic, and combined conic-straight new cannulas with different proximal and distal length and diameter ratios were drawn with computer-assisted design software. The new geometries were analyzed theoretically by: (1) the Hagen-Poiseuille law; and (2) computational fluid dynamics. Several experimental models were manufactured and tested for extrusion in order to confirm and extend the theoretical results. The results confirm that the totally conic cannula model, having a proximal to distal diameter ratio of 2, requires the lowest injection pressure. Furthermore, its velocity profile showed no discontinuity at all along the cannula length, compared with other known combined proximal and distal straight cannulas, where a discontinuity was produced at the proximal-distal transition zone. The conclusion is that the conic cannulas: (a) further reduce the extravertebral pressure during the injection process; (b) show optimum fluid flow velocity profiles to minimize press-filtering problems, especially when ceramic cements are used; and (c) can be easily manufactured.
In this sense, the new conic cannulas should favor the use of calcium phosphate bone cements in the spine. N/A.

  20. Finding the optimal lengths for three branches at a junction.

    PubMed

    Woldenberg, M J; Horsfield, K

    1983-09-21

    This paper presents an exact analytical solution to the problem of locating the junction point between three branches so that the sum of the total costs of the branches is minimized. When the cost per unit length of each branch is known, the angles between each pair of branches can be deduced following reasoning first introduced to biology by Murray. Assuming the outer ends of each branch are fixed, the location of the junction and the length of each branch are then deduced using plane geometry and trigonometry. The model has applications in determining the optimal cost of a branch or branches at a junction. Comparing the optimal to the actual cost of a junction is a new way to compare cost models for goodness of fit to actual junction geometry. It is an unambiguous measure and is superior to comparing observed and optimal angles between each daughter and the parent branch. We present data for 199 junctions in the pulmonary arteries of two human lungs. For the branches at each junction we calculated the best-fitting value of x from the relationship flow ∝ (radius)^x. We found that the value of x determined whether a junction was best fitted by a surface, volume, drag or power minimization model. While economy of explanation casts doubt that four models operate simultaneously, we found that optimality may still operate, since the angle to the major daughter is less than the angle to the minor daughter. Perhaps optimality combined with a space-filling branching pattern governs the branching geometry of the pulmonary artery.
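The junction-location problem above (minimize Σᵢ wᵢ · lengthᵢ over the junction position, with the three branch endpoints fixed) is a weighted Fermat-point problem. The paper solves it analytically via the Murray-type angle conditions; the sketch below instead uses a Weiszfeld-style fixed-point iteration, with made-up endpoint coordinates and unit costs.

```python
import math

def optimal_junction(endpoints, costs, iters=500):
    """Weiszfeld-style iteration for the point J minimizing
    sum_i costs[i] * |J - endpoints[i]|.  Numerical sketch only;
    the paper derives the junction from the branch angles instead."""
    # Start from the centroid of the endpoints
    x = sum(p[0] for p in endpoints) / len(endpoints)
    y = sum(p[1] for p in endpoints) / len(endpoints)
    for _ in range(iters):
        num_x = num_y = den = 0.0
        for (px, py), w in zip(endpoints, costs):
            d = math.hypot(x - px, y - py) or 1e-12  # guard divide-by-zero
            num_x += w * px / d
            num_y += w * py / d
            den += w / d
        x, y = num_x / den, num_y / den
    return x, y

# Hypothetical example: three branch ends with equal unit cost per length,
# for which the optimum is the classical Fermat point (120° angles).
ends = [(0.0, 0.0), (2.0, 0.0), (1.0, 2.0)]
jx, jy = optimal_junction(ends, [1.0, 1.0, 1.0])
print(jx, jy)
```

At the optimum the weighted unit vectors toward the three endpoints sum to zero, which is exactly the angle condition the paper exploits.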

  1. Approximate solution of the p-median minimization problem

    NASA Astrophysics Data System (ADS)

    Il'ev, V. P.; Il'eva, S. D.; Navrotskaya, A. A.

    2016-09-01

    A version of the facility location problem (the well-known p-median minimization problem) and its generalization—the problem of minimizing a supermodular set function—are studied. These problems are NP-hard, and they are approximately solved by a gradient algorithm that is a discrete analog of the steepest descent algorithm. A priori bounds on the worst-case behavior of the gradient algorithm for the problems under consideration are obtained. As a consequence, a bound on the performance guarantee of the gradient algorithm for the p-median minimization problem in terms of the production and transportation cost matrix is obtained.
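The gradient algorithm above is described only as a discrete analog of steepest descent; one common concrete reading for the p-median problem, sketched here under that assumption with a made-up distance matrix, greedily closes facilities one at a time:

```python
def p_median_cost(medians, dist):
    """Sum over clients (rows) of the distance to the nearest open median."""
    return sum(min(row[j] for j in medians) for row in dist)

def greedy_p_median(dist, p):
    """Discrete steepest-descent sketch for p-median minimization: start
    with every site open, then repeatedly close the site whose removal
    increases the cost least, until p sites remain.  Illustrative only;
    the paper analyzes worst-case bounds for such gradient algorithms."""
    medians = set(range(len(dist[0])))
    while len(medians) > p:
        drop = min(medians,
                   key=lambda j: p_median_cost(medians - {j}, dist))
        medians.remove(drop)
    return medians

# Hypothetical 4-client x 4-site distance matrix
dist = [[0, 5, 9, 2],
        [5, 0, 3, 7],
        [9, 3, 0, 4],
        [2, 7, 4, 0]]
sol = greedy_p_median(dist, 2)
print(sol, p_median_cost(sol, dist))
```
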

  2. Length and elasticity of side reins affect rein tension at trot.

    PubMed

    Clayton, Hilary M; Larson, Britt; Kaiser, LeeAnn J; Lavagnino, Michael

    2011-06-01

    This study investigated the horse's contribution to tension in the reins. The experimental hypotheses were that tension in side reins (1) increases biphasically in each trot stride, (2) changes inversely with rein length, and (3) changes with elasticity of the reins. Eight riding horses trotted in hand at consistent speed in a straight line wearing a bit and bridle and three types of side reins (inelastic, stiff elastic, compliant elastic) were evaluated in random order at long, neutral, and short lengths. Strain gauge transducers (240 Hz) measured minimal, maximal and mean rein tension, rate of loading and impulse. The effects of rein type and length were evaluated using ANOVA with Bonferroni post hoc tests. Rein tension oscillated in a regular pattern with a peak during each diagonal stance phase. Within each rein type, minimal, maximal and mean tensions were higher with shorter reins. At neutral or short lengths, minimal tension increased and maximal tension decreased with elasticity of the reins. Short, inelastic reins had the highest maximal tension and rate of loading. Since the tension variables respond differently to rein elasticity at different lengths, it is recommended that a set of variables representing different aspects of rein tension should be reported. Copyright © 2010 Elsevier Ltd. All rights reserved.

  3. The traveling salesman problem in surgery: economy of motion for the FLS Peg Transfer task.

    PubMed

    Falcone, John L; Chen, Xiaotian; Hamad, Giselle G

    2013-05-01

    In the Peg Transfer task in the Fundamentals of Laparoscopic Surgery (FLS) curriculum, six peg objects are sequentially transferred in a bimanual fashion using laparoscopic instruments across a pegboard and back. There are over 268 trillion ways of completing this task. In the setting of many possibilities, the traveling salesman problem is one where the objective is to solve for the shortest distance traveled through a fixed number of points. The goal of this study is to apply the traveling salesman problem to find the shortest two-dimensional path length for this task. A database platform was used with permutation application output to generate all of the single-direction solutions of the FLS Peg Transfer task. A brute-force search was performed using nested Boolean operators and database equations to calculate the overall two-dimensional distances for the efficient and inefficient solutions. The solutions were found by evaluating peg object transfer distances and distances between transfers for the nondominant and dominant hands. For the 518,400 unique single-direction permutations, the mean total two-dimensional peg object travel distance was 33.3 ± 1.4 cm. The range in distances was from 30.3 to 36.5 cm. There were 1,440 (0.28 %) of 518,400 efficient solutions with the minimized peg object travel distance of 30.3 cm. There were 8 (0.0015 %) of 518,400 solutions in the final solution set that minimized the distance of peg object transfer and minimized the distance traveled between peg transfers. Peg objects moved 12.7 cm (17.4 %) less in the efficient solutions compared to the inefficient solutions. The traveling salesman problem can be applied to find efficient solutions for surgical tasks. The eight solutions to the FLS Peg Transfer task are important for any examinee taking the FLS curriculum and for certification by the American Board of Surgery.
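
    The brute-force search described above can be sketched in miniature: enumerate every single-direction visiting order over a small set of 2-D points and keep the shortest total path. The toy 2x3 "pegboard" coordinates below are illustrative only, not the actual FLS peg layout or transfer rules.

```python
import itertools
import math

def path_length(points, order):
    """Total 2-D distance travelled when visiting the points in order."""
    return sum(math.dist(points[a], points[b])
               for a, b in zip(order, order[1:]))

def brute_force_shortest(points):
    """Brute-force search: enumerate every single-direction visiting
    order and keep the one with minimal total travel distance."""
    orders = itertools.permutations(range(len(points)))
    best = min(orders, key=lambda o: path_length(points, o))
    return best, path_length(points, best)

# Toy 2x3 "pegboard" with unit spacing (NOT the real FLS coordinates).
pegs = [(0, 0), (1, 0), (2, 0), (0, 1), (1, 1), (2, 1)]
order, shortest = brute_force_shortest(pegs)
```

    For six points this is only 720 permutations; the 518,400 permutations of the real task are still tractable by the same exhaustive strategy, which is why the study could identify the provably shortest solutions.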

  4. Shape optimization of self-avoiding curves

    NASA Astrophysics Data System (ADS)

    Walker, Shawn W.

    2016-04-01

    This paper presents a softened notion of proximity (or self-avoidance) for curves. We then derive a sensitivity result, based on shape differential calculus, for the proximity. This is combined with a gradient-based optimization approach to compute three-dimensional, parameterized curves that minimize the sum of an elastic (bending) energy and a proximity energy that maintains self-avoidance by a penalization technique. Minimizers are computed by a sequential-quadratic-programming (SQP) method where the bending energy and proximity energy are approximated by a finite element method. We then apply this method to two problems. First, we simulate adsorbed polymer strands that are constrained to be bound to a surface and be (locally) inextensible. This is a basic model of semi-flexible polymers adsorbed onto a surface (a current topic in material science). Several examples of minimizing curve shapes on a variety of surfaces are shown. An advantage of the method is that it can be much faster than using molecular dynamics for simulating polymer strands on surfaces. Second, we apply our proximity penalization to the computation of ideal knots. We present a heuristic scheme, utilizing the SQP method above, for minimizing rope-length and apply it in the case of the trefoil knot. Applications of this method could be for generating good initial guesses to a more accurate (but expensive) knot-tightening algorithm.

  5. Rotary drum separator system

    NASA Technical Reports Server (NTRS)

    Barone, Michael R. (Inventor); Murdoch, Karen (Inventor); Scull, Timothy D. (Inventor); Fort, James H. (Inventor)

    2009-01-01

    A rotary phase separator system generally includes a step-shaped rotary drum separator (RDS) and a motor assembly. The aspect ratio of the stepped drum minimizes power for both the accumulating and pumping functions. The accumulator section of the RDS has a relatively small diameter to minimize power losses within an axial length to define significant volume for accumulation. The pumping section of the RDS has a larger diameter to increase pumping head but has a shorter axial length to minimize power losses. The motor assembly drives the RDS at a low speed for separating and accumulating and a higher speed for pumping.

  6. One-dimensional Gromov minimal filling problem

    NASA Astrophysics Data System (ADS)

    Ivanov, Alexandr O.; Tuzhilin, Alexey A.

    2012-05-01

    The paper is devoted to a new branch in the theory of one-dimensional variational problems with branching extremals, the investigation of one-dimensional minimal fillings introduced by the authors. On the one hand, this problem is a one-dimensional version of a generalization of Gromov's minimal fillings problem to the case of stratified manifolds. On the other hand, this problem is interesting in itself and also can be considered as a generalization of another classical problem, the Steiner problem on the construction of a shortest network connecting a given set of terminals. Besides the statement of the problem, we discuss several properties of the minimal fillings and state several conjectures. Bibliography: 38 titles.

  7. Nuclear radiation problems, unmanned thermionic reactor ion propulsion spacecraft

    NASA Technical Reports Server (NTRS)

    Mondt, J. F.; Sawyer, C. D.; Nakashima, A.

    1972-01-01

    A nuclear thermionic reactor as the electric power source for an electric propulsion spacecraft introduces a nuclear radiation environment that affects the spacecraft configuration, the use and location of electrical insulators, and the science experiments. The spacecraft is conceptually configured to minimize the nuclear shield weight by: (1) using a spacecraft with a large length-to-diameter ratio; (2) eliminating piping penetrations through the shield; and (3) using the mercury propellant as a gamma shield. Since the alumina material is damaged by the high nuclear radiation environment in the reactor, it is desirable to locate the alumina insulator outside the reflector or to develop a more radiation-resistant insulator.

  8. Use of implantable prostheses for the treatment of urinary incontinence and impotence.

    PubMed

    Kaufman, J J; Raz, S

    1975-08-01

    Silicone-Silastic implants to restore continence and potency have been used in one hundred twenty and twenty-five patients, respectively, and in eight patients a combined anti-impotence and anti-incontinence operation has been performed. The results have been gratifying: the complication rate has been minimal, with fewer than five patients in our series having infection and a draining perineal sinus after the incontinence implant, and no patient has developed delayed problems with the penile implants. Because of the design of the penile implants, fracture is extremely unlikely to occur, and the rods can be replaced if necessary because of inadequate length or asymmetry.

  9. Black hole complementarity with the generalized uncertainty principle in Gravity's Rainbow

    NASA Astrophysics Data System (ADS)

    Gim, Yongwan; Um, Hwajin; Kim, Wontae

    2018-02-01

    When gravitation is combined with quantum theory, the Heisenberg uncertainty principle could be extended to the generalized uncertainty principle accompanying a minimal length. To see how the generalized uncertainty principle works in the context of black hole complementarity, we calculate the required energy to duplicate information for the Schwarzschild black hole. It shows that the duplication of information is not allowed and black hole complementarity is still valid even assuming the generalized uncertainty principle. On the other hand, the generalized uncertainty principle with the minimal length could lead to a modification of the conventional dispersion relation in light of Gravity's Rainbow, where the minimal length is also invariant as well as the speed of light. Revisiting the gedanken experiment, we show that the no-cloning theorem for black hole complementarity can be made valid in the regime of Gravity's Rainbow on a certain combination of parameters.

  10. Minimum length Pb/SCIN detector for efficient cosmic ray identification

    NASA Technical Reports Server (NTRS)

    Snyder, H. David

    1989-01-01

    A study was made of the performance of a minimal length cosmic ray shower detector that would be light enough for space flight and would provide efficient identification of positrons and protons. Cosmic ray positrons are mainly produced in the decay chain pion → muon → positron, and they provide a measure of the matter density traversed by primary protons. Present positron flux measurements are consistent with the Leaky Box and Halo models for sources of cosmic rays. Abundant protons in the space environment are a significant source of background that would wash out the positron signal. Protons and positrons produce very distinctive showers of particles when they enter matter; many studies have been published on their behavior in large calorimeter detectors. The challenge is to determine the minimal material necessary (minimal calorimeter depth) for positive particle identification. The primary instrument for the investigation is the Monte Carlo code GEANT, a library of programs from CERN that can be used to model experimental geometry, detector responses and particle interaction processes. The use of the Monte Carlo approach is crucial since statistical fluctuations in shower shape are significant. Studies conducted during the 1988 summer program showed that straightforward approaches to the problem achieved 85 to 90 percent correct identification, but left a residue of 10 to 15 percent misidentified particles. This percentage improved to a few percent when multiple shower-cut criteria were applied to the data. This summer, the same study was extended to employ several physical and statistical methods of identifying the response of the calorimeter, and the efficiency of the optimal shower cuts for off-normal incidence particles was determined.

  11. Minimal investment risk of a portfolio optimization problem with budget and investment concentration constraints

    NASA Astrophysics Data System (ADS)

    Shinzato, Takashi

    2017-02-01

    In the present paper, the minimal investment risk for a portfolio optimization problem with imposed budget and investment concentration constraints is considered using replica analysis. Since the minimal investment risk is influenced by the investment concentration constraint (as well as the budget constraint), it is intuitive that the minimal investment risk for the problem with an investment concentration constraint can be larger than that without the constraint (that is, with only the budget constraint). Moreover, a numerical experiment shows the effectiveness of our proposed analysis. In contrast, the standard operations research approach failed to identify accurately the minimal investment risk of the portfolio optimization problem.

  12. Noisy covariance matrices and portfolio optimization

    NASA Astrophysics Data System (ADS)

    Pafka, S.; Kondor, I.

    2002-05-01

    According to recent findings [Bouchaud; Stanley], empirical covariance matrices deduced from financial return series contain such a high amount of noise that, apart from a few large eigenvalues and the corresponding eigenvectors, their structure can essentially be regarded as random. In [Bouchaud], e.g., it is reported that about 94% of the spectrum of these matrices can be fitted by that of a random matrix drawn from an appropriately chosen ensemble. In view of the fundamental role of covariance matrices in the theory of portfolio optimization as well as in industry-wide risk management practices, we analyze the possible implications of this effect. Simulation experiments with matrices having a structure such as described in [Bouchaud; Stanley] lead us to the conclusion that in the context of the classical portfolio problem (minimizing the portfolio variance under linear constraints) noise has relatively little effect. To leading order the solutions are determined by the stable, large eigenvalues, and the displacement of the solution (measured in variance) due to noise is rather small: depending on the size of the portfolio and on the length of the time series, it is of the order of 5 to 15%. The picture is completely different, however, if we attempt to minimize the variance under non-linear constraints, like those that arise e.g. in the problem of margin accounts or in international capital adequacy regulation. In these problems the presence of noise leads to a serious instability and a high degree of degeneracy of the solutions.
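
    The "classical portfolio problem" mentioned above, minimizing portfolio variance under the single budget constraint sum(w) = 1, has a well-known closed-form solution w* = Σ⁻¹1 / (1ᵀΣ⁻¹1). A minimal sketch with an invented toy covariance matrix (not market data):

```python
import numpy as np

def min_variance_weights(cov):
    """Global minimum-variance portfolio under the budget constraint
    sum(w) = 1 (no other constraints): w* = S^-1 1 / (1' S^-1 1)."""
    ones = np.ones(cov.shape[0])
    w = np.linalg.solve(cov, ones)
    return w / w.sum()

# Illustrative 3-asset covariance matrix (invented numbers).
cov = np.array([[0.04, 0.01, 0.00],
                [0.01, 0.09, 0.02],
                [0.00, 0.02, 0.16]])
w = min_variance_weights(cov)
var_opt = w @ cov @ w                              # optimal variance
var_eq = np.full(3, 1/3) @ cov @ np.full(3, 1/3)   # equal-weight variance
```

    In the noisy-covariance setting studied in the abstract, it is this solve against Σ that transmits estimation noise into the weights; the article's point is that under linear constraints the resulting displacement is modest.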

  13. Optimal Paths in Gliding Flight

    NASA Astrophysics Data System (ADS)

    Wolek, Artur

    Underwater gliders are robust and long endurance ocean sampling platforms that are increasingly being deployed in coastal regions. This new environment is characterized by shallow waters and significant currents that can challenge the mobility of these efficient (but traditionally slow moving) vehicles. This dissertation aims to improve the performance of shallow water underwater gliders through path planning. The path planning problem is formulated for a dynamic particle (or "kinematic car") model. The objective is to identify the path which satisfies specified boundary conditions and minimizes a particular cost. Several cost functions are considered. The problem is addressed using optimal control theory. The length scales of interest for path planning are within a few turn radii. First, an approach is developed for planning minimum-time paths, for a fixed speed glider, that are sub-optimal but are guaranteed to be feasible in the presence of unknown time-varying currents. Next the minimum-time problem for a glider with speed controls, that may vary between the stall speed and the maximum speed, is solved. Last, optimal paths that minimize change in depth (equivalently, maximize range) are investigated. Recognizing that path planning alone cannot overcome all of the challenges associated with significant currents and shallow waters, the design of a novel underwater glider with improved capabilities is explored. A glider with a pneumatic buoyancy engine (allowing large, rapid buoyancy changes) and a cylindrical moving mass mechanism (generating large pitch and roll moments) is designed, manufactured, and tested to demonstrate potential improvements in speed and maneuverability.

  14. Comparison of transtibial amputee and non-amputee biomechanics during a common turning task.

    PubMed

    Segal, Ava D; Orendurff, Michael S; Czerniecki, Joseph M; Schoen, Jason; Klute, Glenn K

    2011-01-01

    The biomechanics of amputee turning gait has been minimally studied, in spite of its integral relationship with the more complex gait required for household or community ambulation. This study compares the biomechanics of unilateral transtibial amputees and non-amputees completing a common turning task. Full body gait analysis was completed for subjects walking at comparable self-selected speeds around a 1m radius circular path. Peak internal and external rotation moments of the hip, knee and ankle, mediolateral ground reaction impulse (ML GRI), peak effective limb length, and stride length were compared across conditions (non-amputee, amputee prosthetic limb, amputee sound limb). Amputees showed decreased internal rotation moments at the prosthetic limb hip and knee compared to non-amputees, perhaps as a protective mechanism to minimize stress on the residual limb. There was also an increase in amputee sound limb hip external rotation moment in early stance compared to non-amputees, which may be a compensation for the decrease in prosthetic limb internal rotation moment during late stance of the prior step. ML GRI was decreased for the amputee inside limb compared to non-amputee, possibly to minimize the body's acceleration in the direction of the turn. Amputees also exhibited a shorter inside limb stride length compared to non-amputees. Both decreased ML GRI and stride length indicate a COM that is more centered over the base of support, which may minimize the risk of falling. Finally, a longer effective limb length was found for the amputee inside limb turning, possibly due to excessive trunk shift. Published by Elsevier B.V.

  15. Percutaneous Repair Technique for Acute Achilles Tendon Rupture with Assistance of Kirschner Wire.

    PubMed

    He, Ze-yang; Chai, Ming-xiang; Liu, Yue-ju; Zhang, Xiao-ran; Zhang, Tao; Song, Lian-xin; Ren, Zhi-xin; Wu, Xi-rui

    2015-11-01

    The aim of this study is to introduce a self-designed, minimally invasive technique for repairing an acute Achilles tendon rupture percutaneously. Compared with traditional open repair, the new technique provides obvious advantages: minimized operation-related lesions, fewer wound complications, and a higher healing rate. However, a percutaneous technique without direct vision may be criticized for insufficient anastomosis of the Achilles tendon and may also lead to lengthening of the Achilles tendon and a reduction in the strength of the gastrocnemius. To address these potential problems, we have improved our technique with a percutaneous Kirschner wire leverage process before suturing, which can effectively recover the length of the Achilles tendon and ensure the broken ends are in tight contact. With this improvement, we have great confidence that the technique will become a treatment of choice for acute Achilles tendon ruptures. © 2015 Chinese Orthopaedic Association and Wiley Publishing Asia Pty Ltd.

  16. Optimization of municipal solid waste collection and transportation routes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Das, Swapan, E-mail: swapan2009sajal@gmail.com; Bhattacharyya, Bidyut Kr., E-mail: bidyut53@yahoo.co.in

    2015-09-15

    Highlights: • Profitable integrated solid waste management system. • Optimal municipal waste collection scheme between the sources and waste collection centres. • Optimal path calculation between waste collection centres and transfer stations. • Optimal waste routing between the transfer stations and processing plants. - Abstract: Optimization of municipal solid waste (MSW) collection and transportation through source separation has become one of the major concerns in MSW management system design, because existing MSW management systems suffer from high collection and transportation costs. Generally, the waste sources in a city are scattered throughout it in a heterogeneous way, which increases waste collection and transportation costs in the waste management system. Therefore, a shortest waste collection and transportation strategy can effectively reduce these costs. In this paper, we propose an optimal MSW collection and transportation scheme that focuses on minimizing the length of each waste collection and transportation route. We first formulate the MSW collection and transportation problem as a mixed integer program. Moreover, we propose a heuristic solution for the waste collection and transportation problem that can provide an optimal way for waste collection and transportation. Extensive simulations and real testbed results show that the proposed solution can significantly improve MSW performance. Results show that the proposed scheme is able to reduce the total waste collection path length by more than 30%.
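
    As a generic illustration of route-length minimization (not the authors' mixed-integer formulation or their specific heuristic), a simple nearest-neighbour sketch for a single collection route starting and ending at a depot:

```python
import math

def nearest_neighbour_route(depot, stops):
    """Greedy route heuristic: from the current location always drive
    to the nearest unvisited stop, then return to the depot."""
    route, current, total = [depot], depot, 0.0
    remaining = list(stops)
    while remaining:
        nxt = min(remaining, key=lambda s: math.dist(current, s))
        total += math.dist(current, nxt)
        remaining.remove(nxt)
        route.append(nxt)
        current = nxt
    total += math.dist(current, depot)  # close the tour
    route.append(depot)
    return route, total
```

    A heuristic like this gives no optimality guarantee, which is why the paper pairs its heuristic with a mixed integer program that defines the exact problem being approximated.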

  17. A best on-line algorithm for single machine scheduling the equal length jobs with the special chain precedence and delivery time

    NASA Astrophysics Data System (ADS)

    Gu, Cunchang; Mu, Yundong

    2013-03-01

    In this paper, we consider a single machine on-line scheduling problem with special chain precedence and delivery times. All jobs arrive over time. Chain chain_i arrives at time r_i, and it is known beforehand that the processing and delivery times of the jobs on a chain satisfy one special condition: if job J(i)_j is the predecessor of job J(i)_k on chain chain_i, then p(i)_j = p(i)_k = p >= q_j >= q_k, i = 1, 2, ..., n, where p_j and q_j denote the processing time and the delivery time of job J_j, respectively. Obviously, if an arriving job has no chain precedence, the length of the corresponding chain is 1. The objective is to minimize the time by which all jobs have been delivered. We provide an on-line algorithm with a competitive ratio of √2, and this result is the best possible.
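
    The objective "minimize the time by which all jobs have been delivered" is the usual max over jobs of completion time plus delivery time. A minimal sketch of evaluating that objective for a given processing sequence (this is only the cost evaluation, not the paper's on-line algorithm):

```python
def delivered_by(jobs):
    """Time by which all jobs are delivered when processed in list order.
    Each job is a tuple (r, p, q): release time, processing time,
    delivery time.  The machine processes one job at a time; a job's
    delivery starts as soon as its processing ends."""
    t = 0       # current machine time
    latest = 0  # latest delivery completion so far
    for r, p, q in jobs:
        t = max(t, r) + p  # wait for the release, then process
        latest = max(latest, t + q)
    return latest
```

    An on-line algorithm must commit to a sequence as chains arrive, without knowing future arrivals; the competitive ratio compares its `delivered_by` value against the offline optimum over all instances.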

  18. Minimization of the Effects of Secondary Reactions on Turbine Film Cooling in a Fuel Rich Environment

    DTIC Science & Technology

    2014-06-02

    [List of tables, recovered excerpt: ... the instrumentation block; Table 4.1: Flame Length Results; Table 4.2: Five Row Flame Lengths, Blowing Ratio Sweep; Table 4.3: Five Row Flame Lengths, Equivalence Ratio Sweep; Table 4.4: Five Row - Wall Absorption Parameter.]

  19. Modeling of electrical and mesoscopic circuits at quantum nanoscale from heat momentum operator

    NASA Astrophysics Data System (ADS)

    El-Nabulsi, Rami Ahmad

    2018-04-01

    We develop a new method to study electrical circuits at quantum nanoscale by introducing a heat momentum operator which reproduces quantum effects similar to those obtained in Suykens's nonlocal-in-time kinetic energy approach for the case of reversible motion. The series expansion of the heat momentum operator is similar to the momentum operator obtained in the framework of minimal length phenomenologies characterized by the deformation of Heisenberg algebra. The quantization of both LC and mesoscopic circuits revealed a number of motivating features like the emergence of a generalized uncertainty relation and a minimal charge similar to those obtained in the framework of minimal length theories. Additional features were obtained and discussed accordingly.

  20. Computer game as a tool for training the identification of phonemic length.

    PubMed

    Pennala, Riitta; Richardson, Ulla; Ylinen, Sari; Lyytinen, Heikki; Martin, Maisa

    2014-12-01

    Computer-assisted training of Finnish phonemic length was conducted with 7-year-old Russian-speaking second-language learners of Finnish. Phonemic length plays a different role in these two languages. The training included game activities with two- and three-syllable word and pseudo-word minimal pairs with prototypical vowel durations. The lowest accuracy scores were recorded for two-syllable words. Accuracy scores were higher for the minimal pairs with larger rather than smaller differences in duration. Accuracy scores were lower for long duration than for short duration. The ability to identify quantity degree was generalized to stimuli used in the identification test in two of the children. Ideas for improving the game are introduced.

  1. Penile reconstruction with bilateral superficial circumflex iliac artery perforator (SCIP) flaps.

    PubMed

    Koshima, Isao; Nanba, Yuzaburo; Nagai, Atsushi; Nakatsuka, Mikiya; Sato, Toshiki; Kuroda, Shigetosi

    2006-04-01

    The free radial forearm flap is a very common material for penile reconstruction. Its major problems are donor-site morbidity, with a large depressed scar after skin grafting; urethral fistula due to insufficiency of the suture line for the urethra; and the need for microvascular anastomosis. A new method using combined bilateral island SCIP flaps for the urethra and penis was developed for gender identity disorder (GID) patients. The advantages of this method are minimal donor-site morbidity with a concealed donor scar, and possible one-stage reconstruction of a longer urethra, up to 22 cm in length, without insufficiency, even for GID female-to-male patients. A disadvantage is poor sensory recovery.

  2. Adaptive treatment-length optimization in spatiobiologically integrated radiotherapy

    NASA Astrophysics Data System (ADS)

    Ajdari, Ali; Ghate, Archis; Kim, Minsun

    2018-04-01

    Recent theoretical research on spatiobiologically integrated radiotherapy has focused on optimization models that adapt fluence-maps to the evolution of tumor state, for example, cell densities, as observed in quantitative functional images acquired over the treatment course. We propose an optimization model that adapts the length of the treatment course as well as the fluence-maps to such imaged tumor state. Specifically, after observing the tumor cell densities at the beginning of a session, the treatment planner solves a group of convex optimization problems to determine an optimal number of remaining treatment sessions, and a corresponding optimal fluence-map for each of these sessions. The objective is to minimize the total number of tumor cells remaining (TNTCR) at the end of this proposed treatment course, subject to upper limits on the biologically effective dose delivered to the organs-at-risk. This fluence-map is administered in future sessions until the next image is available, and then the number of sessions and the fluence-map are re-optimized based on the latest cell density information. We demonstrate via computer simulations on five head-and-neck test cases that such adaptive treatment-length and fluence-map planning reduces the TNTCR and increases the biological effect on the tumor while employing shorter treatment courses, as compared to only adapting fluence-maps and using a pre-determined treatment course length based on one-size-fits-all guidelines.

  3. Energy Expenditure of Trotting Gait Under Different Gait Parameters

    NASA Astrophysics Data System (ADS)

    Chen, Xian-Bao; Gao, Feng

    2017-07-01

    Robots driven by batteries are clean, quiet, and can work indoors or in space. However, battery endurance is a great problem. A new energy-saving gait-parameter design strategy to extend the working hours of a quadruped robot is proposed. A dynamic model of the robot is established to estimate and analyze the energy expenditure during trotting. Given a trotting speed, optimal stride frequency and stride length can minimize the energy expenditure. However, the relationship between the speed and the optimal gait parameters is nonlinear, which is difficult for practical application. Therefore, a simplified gait parameter design method for energy saving is proposed. A critical trotting speed of the quadruped robot is found and can be used to decide the gait parameters. When the robot is travelling slower than this speed, it is better to keep a constant stride length and change the cycle period. When the robot is travelling faster than this speed, it is better to keep a constant cycle period and change the stride length. Simulations and experiments on the quadruped robot show that by using the proposed gait parameter design approach, the energy expenditure can be reduced by about 54% compared with a 100 mm stride length at a speed of 500 mm/s. In general, an energy expenditure model based on the gait parameters of the quadruped robot is built, and a trotting gait parameter design approach for energy saving is proposed.
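
    The critical-speed rule above can be sketched as a simple parameter selector, using the relation speed = stride_length / cycle_period. The constants below are illustrative placeholders, not the values identified for the robot in the paper:

```python
def gait_parameters(speed, critical_speed, low_stride=0.1, high_period=0.5):
    """Simplified rule from the abstract: below the critical trotting
    speed keep the stride length fixed and vary the cycle period; above
    it keep the cycle period fixed and vary the stride length.
    Assumes speed = stride_length / period; constants are placeholders."""
    if speed <= critical_speed:
        stride = low_stride          # hold stride length constant
        period = stride / speed      # vary the cycle period
    else:
        period = high_period         # hold cycle period constant
        stride = speed * period      # vary the stride length
    return stride, period
```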

  4. The optimum spanning catenary cable

    NASA Astrophysics Data System (ADS)

    Wang, C. Y.

    2015-03-01

    A heavy cable spans two points in space. There exists an optimum cable length such that the maximum tension is minimized. If the two end points are at the same level, the optimum length is 1.258 times the distance between the ends. The optimum lengths for end points of different heights are also found.
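
    The quoted factor 1.258 can be reproduced from the standard catenary equations for equal-height supports: with u = d/(2a), the support tension is proportional to a·cosh(u), minimizing it gives coth(u) = u, and the optimal length-to-span ratio is L/d = sinh(u)/u. A sketch of that derivation (assuming a uniform heavy cable and equal-height supports):

```python
import math

def optimal_catenary_ratio():
    """For a uniform heavy cable y = a*cosh(x/a) spanning a horizontal
    distance d between equal-height supports, the support tension is
    proportional to a*cosh(d/(2*a)).  Setting its derivative in a to
    zero yields coth(u) = u with u = d/(2*a), and the optimal
    length-to-span ratio is L/d = sinh(u)/u.  Solve by bisection."""
    lo, hi = 1.0, 2.0  # coth(1) - 1 > 0 and coth(2) - 2 < 0
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if math.cosh(mid) / math.sinh(mid) - mid > 0:
            lo = mid
        else:
            hi = mid
    u = 0.5 * (lo + hi)
    return math.sinh(u) / u

ratio = optimal_catenary_ratio()  # approximately 1.258
```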

  5. Explaining the length threshold of polyglutamine aggregation

    NASA Astrophysics Data System (ADS)

    De Los Rios, Paolo; Hafner, Marc; Pastore, Annalisa

    2012-06-01

    The existence of a length threshold, of about 35 residues, above which polyglutamine repeats can give rise to aggregation and to pathologies, is one of the hallmarks of polyglutamine neurodegenerative diseases such as Huntington’s disease. The reason why such a minimal length exists at all has remained one of the main open issues in research on the molecular origins of such classes of diseases. Following the seminal proposals of Perutz, most research has focused on the hunt for a special structure, attainable only above the minimal length, able to trigger aggregation. Such a structure has remained elusive and there is growing evidence that it might not exist at all. Here we review some basic polymer and statistical physics facts and show that the existence of a threshold is compatible with the modulation that the repeat length imposes on the association and dissociation rates of polyglutamine polypeptides to and from oligomers. In particular, their dramatically different functional dependence on the length rationalizes the very presence of a threshold and hints at the cellular processes that might be at play, in vivo, to prevent aggregation and the consequent onset of the disease.

  6. Algorithms for Heterogeneous, Multiple Depot, Multiple Unmanned Vehicle Path Planning Problems

    DOE PAGES

    Sundar, Kaarthik; Rathinam, Sivakumar

    2016-12-26

    Unmanned vehicles, both aerial and ground, are being used in several monitoring applications to collect data from a set of targets. This article addresses a problem where a group of heterogeneous aerial or ground vehicles with different motion constraints, located at distinct depots, visit a set of targets. The vehicles may also be equipped with different sensors, and therefore a given target may not be visitable by every vehicle. The objective is to find an optimal path for each vehicle, starting and ending at its respective depot, such that each target is visited at least once by some vehicle, the vehicle–target constraints are satisfied, and the sum of the lengths of the paths of all the vehicles is minimized. Two variants of this problem are formulated (one for ground vehicles and another for aerial vehicles) as mixed-integer linear programs, and a branch-and-cut algorithm is developed to compute an optimal solution to each of the variants. Computational results show that optimal solutions for problems involving 100 targets and 5 vehicles can be obtained within 300 seconds on average, further corroborating the effectiveness of the proposed approach.

  7. Designing Waveform Sets with Good Correlation and Stopband Properties for MIMO Radar via the Gradient-Based Method

    PubMed Central

    Tang, Liang; Zhu, Yongfeng; Fu, Qiang

    2017-01-01

    Waveform sets with good correlation and/or stopband properties have received extensive attention and been widely used in multiple-input multiple-output (MIMO) radar. In this paper, we aim at designing unimodular waveform sets with good correlation and stopband properties. To formulate the problem, we construct two criteria to measure the correlation and stopband properties and then establish an unconstrained problem in the frequency domain. After deducing the phase gradient and the step size, an efficient gradient-based algorithm with monotonicity is proposed to minimize the objective function directly. For the design problem without considering the correlation weights, we develop a simplified algorithm, which only requires a few fast Fourier transform (FFT) operations and is more efficient. Because both of the algorithms can be implemented via the FFT operations and the Hadamard product, they are computationally efficient and can be used to design waveform sets with a large waveform number and waveform length. Numerical experiments show that the proposed algorithms can provide better performance than the state-of-the-art algorithms in terms of the computational complexity. PMID:28468308

  8. Designing Waveform Sets with Good Correlation and Stopband Properties for MIMO Radar via the Gradient-Based Method.

    PubMed

    Tang, Liang; Zhu, Yongfeng; Fu, Qiang

    2017-05-01

    Waveform sets with good correlation and/or stopband properties have received extensive attention and been widely used in multiple-input multiple-output (MIMO) radar. In this paper, we aim at designing unimodular waveform sets with good correlation and stopband properties. To formulate the problem, we construct two criteria to measure the correlation and stopband properties and then establish an unconstrained problem in the frequency domain. After deducing the phase gradient and the step size, an efficient gradient-based algorithm with monotonicity is proposed to minimize the objective function directly. For the design problem without considering the correlation weights, we develop a simplified algorithm, which only requires a few fast Fourier transform (FFT) operations and is more efficient. Because both of the algorithms can be implemented via the FFT operations and the Hadamard product, they are computationally efficient and can be used to design waveform sets with a large waveform number and waveform length. Numerical experiments show that the proposed algorithms can provide better performance than the state-of-the-art algorithms in terms of the computational complexity.

  9. Inverse problems with nonnegative and sparse solutions: algorithms and application to the phase retrieval problem

    NASA Astrophysics Data System (ADS)

    Quy Muoi, Pham; Nho Hào, Dinh; Sahoo, Sujit Kumar; Tang, Dongliang; Cong, Nguyen Huu; Dang, Cuong

    2018-05-01

    In this paper, we study a gradient-type method and a semismooth Newton method for minimization problems in regularizing inverse problems with nonnegative and sparse solutions. We propose a special penalty functional forcing the minimizers of regularized minimization problems to be nonnegative and sparse, and then we apply the proposed algorithms to a practical problem. The strong convergence of the gradient-type method and the local superlinear convergence of the semismooth Newton method are proven. Then, we use these algorithms for the phase retrieval problem and illustrate their efficiency in numerical examples, particularly in the practical problem of optical imaging through scattering media, where all the noise from the experiment is present.
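    For intuition, here is a minimal proximal-gradient sketch of the kind of iteration a gradient-type method for this problem class refines. This toy uses a plain nonnegative l1 penalty, not the paper's special penalty functional: for x ≥ 0 the l1 term is linear, so the prox step is a shifted projection onto the nonnegative orthant.

```python
import numpy as np

def nonneg_sparse_pg(A, b, alpha, steps=2000):
    """Proximal gradient for min 0.5*||Ax - b||^2 + alpha*sum(x), x >= 0."""
    t = 1.0 / np.linalg.norm(A, 2) ** 2    # step = 1 / Lipschitz constant of gradient
    x = np.zeros(A.shape[1])
    for _ in range(steps):
        g = A.T @ (A @ x - b)              # gradient of the data-fit term
        x = np.maximum(x - t * g - t * alpha, 0.0)  # prox: shift and project
    return x

# Recover a sparse nonnegative signal from noiseless linear measurements
rng = np.random.default_rng(1)
A = rng.standard_normal((40, 20))
x_true = np.zeros(20)
x_true[[2, 7, 11]] = [1.0, 2.0, 0.5]
x_hat = nonneg_sparse_pg(A, A @ x_true, alpha=0.01)
```

    A semismooth Newton method, as studied in the paper, replaces this first-order iteration with generalized-derivative Newton steps and attains local superlinear convergence.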

  10. Assembling Precise Truss Structures With Minimal Stresses

    NASA Technical Reports Server (NTRS)

    Sword, Lee F.

    1996-01-01

    Improved method of assembling precise truss structures involves use of simple devices. Tapered pins that fit in tapered holes indicate deviations from prescribed lengths. Method both helps to ensure precision of finished structures and minimizes residual stresses within structures.

  11. Thermal Stability of Al2O3/Silicone Composites as High-Temperature Encapsulants

    NASA Astrophysics Data System (ADS)

    Yao, Yiying

    Underwater gliders are robust and long endurance ocean sampling platforms that are increasingly being deployed in coastal regions. This new environment is characterized by shallow waters and significant currents that can challenge the mobility of these efficient (but traditionally slow moving) vehicles. This dissertation aims to improve the performance of shallow water underwater gliders through path planning. The path planning problem is formulated for a dynamic particle (or "kinematic car") model. The objective is to identify the path which satisfies specified boundary conditions and minimizes a particular cost. Several cost functions are considered. The problem is addressed using optimal control theory. The length scales of interest for path planning are within a few turn radii. First, an approach is developed for planning minimum-time paths, for a fixed speed glider, that are sub-optimal but are guaranteed to be feasible in the presence of unknown time-varying currents. Next the minimum-time problem for a glider with speed controls, that may vary between the stall speed and the maximum speed, is solved. Last, optimal paths that minimize change in depth (equivalently, maximize range) are investigated. Recognizing that path planning alone cannot overcome all of the challenges associated with significant currents and shallow waters, the design of a novel underwater glider with improved capabilities is explored. A glider with a pneumatic buoyancy engine (allowing large, rapid buoyancy changes) and a cylindrical moving mass mechanism (generating large pitch and roll moments) is designed, manufactured, and tested to demonstrate potential improvements in speed and maneuverability.

  12. Action-minimizing solutions of the one-dimensional N-body problem

    NASA Astrophysics Data System (ADS)

    Yu, Xiang; Zhang, Shiqing

    2018-05-01

    We supplement the following result of C. Marchal on the Newtonian N-body problem: A path minimizing the Lagrangian action functional between two given configurations is always a true (collision-free) solution when the dimension d of the physical space R^d satisfies d≥2. The focus of this paper is on the fixed-ends problem for the one-dimensional Newtonian N-body problem. We prove that a path minimizing the action functional in the set of paths that join two given configurations and preserve the same order at all times is always a true (collision-free) solution. Considering the one-dimensional N-body problem with equal masses, we prove that (i) collision instants are isolated for a path minimizing the action functional between two given configurations, (ii) if the particles at the two endpoints have the same order, then the path minimizing the action functional is always a true (collision-free) solution and (iii) when the particles at the two endpoints have different orders, although every path must contain collisions, there are at most N! - 1 collisions for any action-minimizing path.
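    For reference, the action functional being minimized is the standard Lagrangian action of the Newtonian N-body problem (with the gravitational constant set to 1):

```latex
\mathcal{A}(\gamma) \;=\; \int_{0}^{T} \Bigg( \sum_{i=1}^{N} \frac{m_i}{2}\,\lvert \dot{x}_i(t)\rvert^{2}
  \;+\; \sum_{1 \le i < j \le N} \frac{m_i m_j}{\lvert x_i(t) - x_j(t)\rvert} \Bigg)\, dt ,
```

    minimized over paths γ joining the two prescribed configurations at t = 0 and t = T.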

  13. Charge and energy minimization in electrical/magnetic stimulation of nervous tissue

    NASA Astrophysics Data System (ADS)

    Jezernik, Sašo; Sinkjaer, Thomas; Morari, Manfred

    2010-08-01

    In this work we address the problem of stimulating nervous tissue with the minimal necessary energy at reduced/minimal charge. Charge minimization is related to a valid safety concern (avoidance and reduction of stimulation-induced tissue and electrode damage). Energy minimization plays a role in battery-driven electrical or magnetic stimulation systems (increased lifetime, repetition rates, reduction of power requirements, thermal management). Extensive new theoretical results are derived by employing an optimal control theory framework. These results include derivation of the optimal electrical stimulation waveform for a mixed energy/charge minimization problem, derivation of the charge-balanced energy-minimal electrical stimulation waveform, solutions of a pure charge minimization problem with and without a constraint on the stimulation amplitude, and derivation of the energy-minimal magnetic stimulation waveform. Depending on the set stimulus pulse duration, energy and charge reductions of up to 80% are deemed possible. Results are verified in simulations with an active, mammalian-like nerve fiber model.

  14. Charge and energy minimization in electrical/magnetic stimulation of nervous tissue.

    PubMed

    Jezernik, Saso; Sinkjaer, Thomas; Morari, Manfred

    2010-08-01

    In this work we address the problem of stimulating nervous tissue with the minimal necessary energy at reduced/minimal charge. Charge minimization is related to a valid safety concern (avoidance and reduction of stimulation-induced tissue and electrode damage). Energy minimization plays a role in battery-driven electrical or magnetic stimulation systems (increased lifetime, repetition rates, reduction of power requirements, thermal management). Extensive new theoretical results are derived by employing an optimal control theory framework. These results include derivation of the optimal electrical stimulation waveform for a mixed energy/charge minimization problem, derivation of the charge-balanced energy-minimal electrical stimulation waveform, solutions of a pure charge minimization problem with and without a constraint on the stimulation amplitude, and derivation of the energy-minimal magnetic stimulation waveform. Depending on the set stimulus pulse duration, energy and charge reductions of up to 80% are deemed possible. Results are verified in simulations with an active, mammalian-like nerve fiber model.

  15. Power Allocation Based on Data Classification in Wireless Sensor Networks

    PubMed Central

    Wang, Houlian; Zhou, Gongbo

    2017-01-01

    Limited node energy in wireless sensor networks is a crucial factor which affects the monitoring of equipment operation and working conditions in coal mines. In addition, due to heterogeneous nodes and different data acquisition rates, the number of packets arriving in a queue network can differ, which may lead to some queue lengths reaching the maximum value earlier than others. In order to tackle these two problems, an optimal power allocation strategy based on classified data is proposed in this paper. Arriving data is classified into dissimilar classes depending on the number of arriving packets. The problem is formulated as a Lyapunov drift optimization with the objective of minimizing the weighted sum of average power consumption and average data class. As a result, a suboptimal distributed algorithm that requires no knowledge of system statistics is presented. The simulations, conducted in the perfect channel state information (CSI) case and the imperfect CSI case, reveal that the utility can be pushed arbitrarily close to optimal by increasing the parameter V, but with a corresponding growth in the average delay, and that the other tunable parameter W and the classification method inside the utility function can trade power optimality for an increased average data class. The above results show that data in a high class has priority to be processed over data in a low class, and that energy consumption can be minimized by this resource allocation strategy. PMID:28498346
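    The V-delay trade-off described above is characteristic of Lyapunov drift-plus-penalty control. A single-queue sketch (the paper's algorithm handles multiple data classes and CSI; the rate curve and power levels below are assumptions):

```python
def dpp_power(Q, V, powers, mu):
    """Drift-plus-penalty rule for one queue: choose the power p minimizing
    V*p - Q*mu(p), where mu(p) is the service rate at power p and Q is the
    current backlog. Larger V weights average power more heavily, at the
    cost of larger backlogs (hence delay)."""
    return min(powers, key=lambda p: V * p - Q * mu(p))

mu = lambda p: p ** 0.5          # an assumed concave rate-power curve
levels = [0.0, 1.0, 2.0, 4.0]
idle_choice = dpp_power(0, 1.0, levels, mu)    # empty queue: save power
busy_choice = dpp_power(100, 1.0, levels, mu)  # long queue: serve fast
```

    The rule needs only the current backlog Q, which is why such policies require no knowledge of system statistics.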

  16. Optimizing communication satellites payload configuration with exact approaches

    NASA Astrophysics Data System (ADS)

    Stathakis, Apostolos; Danoy, Grégoire; Bouvry, Pascal; Talbi, El-Ghazali; Morelli, Gianluigi

    2015-12-01

    The satellite communications market is competitive and rapidly evolving. The payload, which is in charge of applying frequency conversion and amplification to the signals received from Earth before their retransmission, is made of various components. These include reconfigurable switches that permit the re-routing of signals based on market demand or because of some hardware failure. In order to meet modern requirements, the size and the complexity of current communication payloads are increasing significantly. Consequently, the optimal payload configuration, which was previously done manually by the engineers with the use of computerized schematics, is now becoming a difficult and time consuming task. Efficient optimization techniques are therefore required to find the optimal set(s) of switch positions to optimize some operational objective(s). In order to tackle this challenging problem for the satellite industry, this work proposes two Integer Linear Programming (ILP) models. The first one is single-objective and focuses on the minimization of the length of the longest channel path, while the second one is bi-objective and additionally aims at minimizing the number of switch changes in the payload switch matrix. Experiments are conducted on a large set of instances of realistic payload sizes using the CPLEX® solver and two well-known exact multi-objective algorithms. Numerical results demonstrate the efficiency and limitations of the ILP approach on this real-world problem.
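    The first model's objective has the usual min-max structure, which is linearized with an auxiliary variable in the standard way (a generic sketch, not the paper's exact formulation; x denotes the binary switch-position variables and ℓ_c(x) the resulting length of channel path c):

```latex
\min_{x,\,z} \; z \qquad \text{subject to} \qquad \ell_c(x) \le z \;\; \forall c, \qquad x \in \{0,1\}^{n} .
```

    Minimizing z then minimizes the length of the longest channel path, and the second objective (number of switch changes) can be added as a second criterion on x.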

  17. An information geometric approach to least squares minimization

    NASA Astrophysics Data System (ADS)

    Transtrum, Mark; Machta, Benjamin; Sethna, James

    2009-03-01

    Parameter estimation by nonlinear least squares minimization is a ubiquitous problem that has an elegant geometric interpretation: all possible parameter values induce a manifold embedded within the space of data. The minimization problem is then to find the point on the manifold closest to the origin. The standard algorithm for minimizing sums of squares, the Levenberg-Marquardt algorithm, also has geometric meaning. When the standard algorithm fails to efficiently find accurate fits to the data, geometric considerations suggest improvements. Problems involving large numbers of parameters, such as often arise in biological contexts, are notoriously difficult. We suggest an algorithm based on geodesic motion that may offer improvements over the standard algorithm for a certain class of problems.
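    For context, the standard algorithm referred to here interpolates between Gauss-Newton and gradient descent through a damping parameter. A minimal sketch of textbook Levenberg-Marquardt (not the geodesic-motion variant the abstract proposes):

```python
import numpy as np

def levenberg_marquardt(residual, jacobian, theta, steps=50, lam=1e-3):
    """Damped Gauss-Newton: solve (J^T J + lam*I) d = -J^T r each step,
    shrinking lam when a step reduces the sum of squares and growing it
    otherwise (simple textbook lambda schedule)."""
    for _ in range(steps):
        r, J = residual(theta), jacobian(theta)
        d = np.linalg.solve(J.T @ J + lam * np.eye(len(theta)), -J.T @ r)
        if np.sum(residual(theta + d) ** 2) < np.sum(r ** 2):
            theta, lam = theta + d, lam / 3   # accept: move toward Gauss-Newton
        else:
            lam *= 3                          # reject: move toward gradient descent
    return theta

# Fit y = exp(-k*t) to noiseless data; the exact answer is k = 1.5
t = np.linspace(0, 2, 20)
y = np.exp(-1.5 * t)
res = lambda th: np.exp(-th[0] * t) - y
jac = lambda th: (-t * np.exp(-th[0] * t)).reshape(-1, 1)
k = levenberg_marquardt(res, jac, np.array([0.3]))
```

    In the geometric picture of the abstract, the lam*I term tilts the step away from the Gauss-Newton direction, which is what geodesic-based corrections aim to improve for ill-conditioned, many-parameter fits.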

  18. Semismooth Newton method for gradient constrained minimization problem

    NASA Astrophysics Data System (ADS)

    Anyyeva, Serbiniyaz; Kunisch, Karl

    2012-08-01

    In this paper we treat a gradient-constrained minimization problem, a particular case of which is the elasto-plastic torsion problem. In order to obtain a numerical approximation to the solution, we have developed an algorithm in an infinite-dimensional space framework using the concept of generalized (Newton) differentiation. Regularization was applied to approximate the problem by an unconstrained minimization problem and to make the pointwise maximum function Newton differentiable. Using the semismooth Newton method, a continuation method was developed in function space. For the numerical implementation, the variational equations at the Newton steps are discretized using the finite element method.

  19. Replica analysis for the duality of the portfolio optimization problem

    NASA Astrophysics Data System (ADS)

    Shinzato, Takashi

    2016-11-01

    In the present paper, the primal-dual problem consisting of the investment risk minimization problem and the expected return maximization problem in the mean-variance model is discussed using replica analysis. As a natural extension of the investment risk minimization problem under only a budget constraint that we analyzed in a previous study, we herein consider a primal-dual problem in which the investment risk minimization problem with budget and expected return constraints is regarded as the primal problem, and the expected return maximization problem with budget and investment risk constraints is regarded as the dual problem. With respect to these optimal problems, we analyze a quenched disordered system involving both of these optimization problems using the approach developed in statistical mechanical informatics and confirm that both optimal portfolios can possess the primal-dual structure. Finally, the results of numerical simulations are shown to validate the effectiveness of the proposed method.
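    For a fixed finite instance, the primal problem has a closed-form solution, which makes the primal-dual structure concrete. A small sketch (an equality-constrained quadratic program solved via its KKT system; notation is assumed, not taken from the paper, which instead analyzes the large-N limit with replicas):

```python
import numpy as np

def min_risk_weights(C, mu, budget, target_return):
    """Minimize w^T C w / 2 subject to sum(w) = budget and mu.w = target_return
    by solving the KKT linear system [[C, A^T], [A, 0]] [w; lam] = [0; b]."""
    n = len(mu)
    A = np.vstack([np.ones(n), mu])                       # constraint matrix
    K = np.block([[C, A.T], [A, np.zeros((2, 2))]])
    rhs = np.concatenate([np.zeros(n), [budget, target_return]])
    return np.linalg.solve(K, rhs)[:n]

# A small assumed covariance matrix and expected-return vector
C = np.array([[0.04, 0.01, 0.00, 0.00],
              [0.01, 0.05, 0.01, 0.00],
              [0.00, 0.01, 0.06, 0.02],
              [0.00, 0.00, 0.02, 0.08]])
mu = np.array([0.05, 0.08, 0.10, 0.12])
w = min_risk_weights(C, mu, budget=1.0, target_return=0.09)
```

    The dual problem swaps the roles: maximize mu.w subject to budget and risk constraints, and the paper confirms that the two optimal portfolios share this primal-dual structure.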

  20. Replica analysis for the duality of the portfolio optimization problem.

    PubMed

    Shinzato, Takashi

    2016-11-01

    In the present paper, the primal-dual problem consisting of the investment risk minimization problem and the expected return maximization problem in the mean-variance model is discussed using replica analysis. As a natural extension of the investment risk minimization problem under only a budget constraint that we analyzed in a previous study, we herein consider a primal-dual problem in which the investment risk minimization problem with budget and expected return constraints is regarded as the primal problem, and the expected return maximization problem with budget and investment risk constraints is regarded as the dual problem. With respect to these optimal problems, we analyze a quenched disordered system involving both of these optimization problems using the approach developed in statistical mechanical informatics and confirm that both optimal portfolios can possess the primal-dual structure. Finally, the results of numerical simulations are shown to validate the effectiveness of the proposed method.

  1. Correlation between the norm and the geometry of minimal networks

    NASA Astrophysics Data System (ADS)

    Laut, I. L.

    2017-05-01

    The paper is concerned with the inverse problem of the minimal Steiner network problem in a normed linear space. Namely, given a normed space in which all minimal networks are known for any finite point set, the problem is to describe all the norms on this space for which the minimal networks are the same as for the original norm. We survey the available results and prove that in the plane a rotund differentiable norm determines a distinctive set of minimal Steiner networks. In a two-dimensional space with rotund differentiable norm the coordinates of interior vertices of a nondegenerate minimal parametric network are shown to vary continuously under small deformations of the boundary set, and the turn direction of the network is determined. Bibliography: 15 titles.

  2. Analysis of single ion channel data incorporating time-interval omission and sampling

    PubMed Central

    The, Yu-Kai; Timmer, Jens

    2005-01-01

    Hidden Markov models are widely used to describe single channel currents from patch-clamp experiments. The inevitable anti-aliasing filter limits the time resolution of the measurements and therefore the standard hidden Markov model is not adequate anymore. The notion of time-interval omission has been introduced where brief events are not detected. The developed, exact solutions to this problem do not take into account that the measured intervals are limited by the sampling time. In this case the dead-time that specifies the minimal detectable interval length is not defined unambiguously. We show that a wrong choice of the dead-time leads to considerably biased estimates and present the appropriate equations to describe sampled data. PMID:16849220

  3. Combining gait optimization with passive system to increase the energy efficiency of a humanoid robot walking movement

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pereira, Ana I.; ALGORITMI, University of Minho; Lima, José

    There are several approaches to Humanoid robot gait planning. This problem presents a large number of unknown parameters that must be found to make the humanoid robot walk. Optimization in simulation models can be used to find the gait based on several criteria, such as energy minimization, acceleration, and step length, among others. The energy consumption can also be reduced with elastic elements coupled to each joint. The presented paper addresses an optimization method, Stretched Simulated Annealing, that runs in an accurate and stable simulation model to find the optimal gait combined with elastic elements. Final results demonstrate that optimization is a valid gait planning technique.

  4. Optimal routing and buffer allocation for a class of finite capacity queueing systems

    NASA Technical Reports Server (NTRS)

    Towsley, Don; Sparaggis, Panayotis D.; Cassandras, Christos G.

    1992-01-01

    The problem of routing jobs to K parallel queues with identical exponential servers and unequal finite buffer capacities is considered. Routing decisions are taken by a controller which has buffering space available to it and may delay routing of a customer to a queue. Using ideas from weak majorization, it is shown that the shorter nonfull queue delayed (SNQD) policy minimizes both the total number of customers in the system at any time and the number of customers that are rejected by that time. The SNQD policy always delays routing decisions as long as all servers are busy. Only when all the buffers at the controller are occupied is a customer routed to the queue with the shortest queue length that is not at capacity. Moreover, it is shown that, if a fixed number of buffers is to be distributed among the K queues, then the optimal allocation scheme is the one in which the difference between the maximum and minimum queue capacities is minimized, i.e., becomes either 0 or 1.
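    A simplified reading of the SNQD decision rule (a sketch of the policy's structure only; the paper's model also conditions the delay on server busy periods):

```python
def snqd_route(controller_buffer, controller_capacity, queues, capacities):
    """Return the index of the queue to route to, or None to keep delaying.
    Hold the customer at the controller while space remains; once the
    controller is full, send one customer to the shortest non-full queue."""
    if controller_buffer < controller_capacity:
        return None  # delay the routing decision
    candidates = [i for i, (q, c) in enumerate(zip(queues, capacities)) if q < c]
    if not candidates:
        return None  # every queue is at capacity; keep waiting
    return min(candidates, key=lambda i: queues[i])

delayed = snqd_route(1, 3, [2, 0, 5], [5, 5, 5])  # controller has space: delay
routed = snqd_route(3, 3, [2, 0, 5], [5, 5, 5])   # controller full: shortest non-full
```

    The buffer-allocation result quoted above says that, given a fixed total number of buffers, capacities should be as balanced as possible (maximum and minimum differing by at most one).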

  5. Enhancing plant productivity while suppressing biofilm growth in a windowfarm system using beneficial bacteria and ultraviolet irradiation.

    PubMed

    Lee, Seungjun; Ge, Chongtao; Bohrerova, Zuzana; Grewal, Parwinder S; Lee, Jiyoung

    2015-07-01

    Common problems in a windowfarm system (a vertical and indoor hydroponic system) are phytopathogen infections in plants and excessive buildup of biofilms. The objectives of this study were (i) to promote plant health by making plants more resistant to infection by using beneficial biosurfactant-producing Pseudomonas chlororaphis around the roots and (ii) to minimize biofilm buildup by ultraviolet (UV) irradiation of the water reservoir, thereby extending the lifespan of the whole system with minimal maintenance. Pseudomonas chlororaphis-treated lettuce grew significantly better than nontreated lettuce, as indicated by enhancement of color, mass, length, and number of leaves per head (p < 0.05). The death rate of the lettuce was reduced by ∼ 50% when the lettuce was treated with P. chlororaphis. UV irradiation reduced the bacteria (4 log reduction) and algae (4 log reduction) in the water reservoirs and water tubing systems. Introduction of P. chlororaphis into the system promoted plant growth and reduced damage caused by the plant pathogen Pythium ultimum. UV irradiation of the water reservoir reduced algal and biofilm growth and extended the lifespan of the system.

  6. The classical limit of minimal length uncertainty relation: revisit with the Hamilton-Jacobi method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Guo, Xiaobo; Wang, Peng; Yang, Haitang, E-mail: guoxiaobo@swust.edu.cn, E-mail: pengw@scu.edu.cn, E-mail: hyanga@scu.edu.cn

    2016-05-01

    The existence of a minimum measurable length could deform not only the standard quantum mechanics but also classical physics. The effects of the minimal length on classical orbits of particles in a gravitation field have been investigated before, using the deformed Poisson bracket or Schwarzschild metric. In this paper, we first use the Hamilton-Jacobi method to derive the deformed equations of motion in the context of Newtonian mechanics and general relativity. We then employ them to study the precession of planetary orbits, deflection of light, and time delay in radar propagation. We also set limits on the deformation parameter by comparing our results with the observational measurements. Finally, comparison with results from previous papers is given at the end of this paper.

  7. The Benefit of Modified Rehabilitation and Minimally Invasive Techniques in Total Hip Replacement

    PubMed Central

    Lilikakis, Anastasios K; Gillespie, Beryl; Villar, Richard N

    2008-01-01

    INTRODUCTION We wished to assess whether an intensive rehabilitation regimen alone, or one combined with modified anaesthetic and surgical techniques, can change the speed of rehabilitation or the length of hospital stay after total hip replacement. PATIENTS AND METHODS We compared 44 patients who had followed a traditional care pathway with 38 patients who had rehabilitated under a new rehabilitation protocol, and with 40 patients who had also received modified, minimally invasive techniques. The speed of rehabilitation was measured in terms of three specific milestones accomplished on the day after surgery. RESULTS We found a statistically significant improvement in the day after surgery on which each activity was possible. The length of hospital stay was reduced from 6.5 days to 5.4 days to 4.1 days, a difference which was also statistically significant. CONCLUSIONS The data support the view that a new rehabilitation protocol alone can reduce the length of hospital stay and hasten rehabilitation. The combination of modified anaesthetic and minimally invasive surgical techniques with the new rehabilitation regimen can further improve short-term outcome after total hip replacement. PMID:18634739

  8. Analysis of multipath channel fading techniques in wireless communication systems

    NASA Astrophysics Data System (ADS)

    Mahender, Kommabatla; Kumar, Tipparti Anil; Ramesh, K. S.

    2018-04-01

    Multipath fading occurs in any environment where there is multipath propagation and some movement of elements within the radio communications system. This may include the radio transmitter or receiver position, or the elements that give rise to the reflections. The multipath fading can often be relatively deep, i.e. the signals fade completely away, whereas at other times the fading may not cause the signal to fall below a useable strength. Multipath fading may also cause distortion to the radio signal. As the various paths that can be taken by the signals vary in length, the signal transmitted at a particular instant will arrive at the receiver over a spread of times. This can cause problems with phase distortion and inter-symbol interference when data transmissions are made. As a result, it may be necessary to incorporate features within the radio communications system that enable the effects of these problems to be minimized. This paper analyses the effects of various types of multipath fading in wireless transmission systems.

  9. Placement of clock gates in time-of-flight optoelectronic circuits

    NASA Astrophysics Data System (ADS)

    Feehrer, John R.; Jordan, Harry F.

    1995-12-01

    Time-of-flight synchronized optoelectronic circuits capitalize on the highly controllable delays of optical waveguides. Circuits have no latches; synchronization is achieved by adjustment of the lengths of waveguides that connect circuit elements. Clock gating and pulse stretching are used to restore timing and power. A functional circuit requires that every feedback loop contain at least one clock gate to prevent cumulative timing drift and power loss. A designer specifies an ideal circuit, which contains no or very few clock gates. To make the circuit functional, we must identify locations in which to place clock gates. Because clock gates are expensive, add area, and increase delay, a minimal set of locations is desired. We cast this problem in graph-theoretical form as the minimum feedback edge set problem and solve it by using an adaptation of an algorithm proposed in 1966 [IEEE Trans. Circuit Theory CT-13, 399 (1966)]. We discuss a computer-aided-design implementation of the algorithm that reduces computational complexity and demonstrate it on a set of circuits.
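    The abstract casts clock-gate placement as a minimum feedback edge set problem. As an illustration of the underlying graph idea only (not the 1966 algorithm adapted in the paper), DFS back edges always form a valid, though not necessarily minimum, feedback edge set: every directed cycle contains at least one of them, so gating those edges breaks every feedback loop.

```python
def feedback_edges(adj):
    """Return the DFS back edges of a directed graph given as an adjacency
    dict. Removing them leaves the graph acyclic, so they cover every cycle."""
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {u: WHITE for u in adj}
    back = []

    def dfs(u):
        color[u] = GRAY
        for v in adj[u]:
            if color[v] == GRAY:
                back.append((u, v))      # edge closing a cycle
            elif color[v] == WHITE:
                dfs(v)
        color[u] = BLACK

    for u in adj:
        if color[u] == WHITE:
            dfs(u)
    return back

# One feedback loop 0 -> 1 -> 2 -> 0 plus a tail 2 -> 3
loop_edges = feedback_edges({0: [1], 1: [2], 2: [0, 3], 3: []})
```

    Finding a truly minimum feedback edge set in a directed graph is NP-hard in general, which is why the paper's computer-aided-design implementation focuses on reducing computational complexity.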

  10. Survey Page Length and Progress Indicators: What Are Their Relationships to Item Nonresponse?

    ERIC Educational Resources Information Center

    Bowman, Nicholas A.; Herzog, Serge; Sarraf, Shimon; Tukibayeva, Malika

    2014-01-01

    The popularity of online student surveys has been associated with greater item nonresponse. This chapter presents research aimed at exploring what factors might help minimize item nonresponse, such as altering online survey page length and using progress indicators.

  11. Randomizer for High Data Rates

    NASA Technical Reports Server (NTRS)

    Garon, Howard; Sank, Victor J.

    2018-01-01

    NASA as well as a number of other space agencies now recognize that the current recommended CCSDS randomizer used for telemetry (TM) is too short. When multiple applications of the PN8 Maximal Length Sequence (MLS) are required in order to fully cover a channel access data unit (CADU), spectral problems in the form of elevated spurious discretes (spurs) appear. Originally the randomizer was called a bit transition generator (BTG) precisely because it was thought that its primary value was to insure sufficient bit transitions to allow the bit/symbol synchronizer to lock and remain locked. We, NASA, have shown that the old BTG concept is a limited view of the real value of the randomizer sequence and that the randomizer also aids in signal acquisition as well as minimizing the potential for false decoder lock. Under the guidelines we considered here there are multiple maximal length sequences under GF(2) which appear attractive in this application. Although there may be mitigating reasons why another MLS sequence could be selected, one sequence in particular possesses a combination of desired properties which offsets it from the others.
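    For illustration, a Fibonacci LFSR over GF(2) with primitive feedback taps produces a maximal length sequence. The taps below (8, 6, 5, 4 — a standard maximal tap set for an 8-bit register) are an example only, not the CCSDS PN8 polynomial:

```python
def lfsr_mls(taps, nbits, seed=1):
    """Fibonacci LFSR over GF(2): output the MSB, shift left, and feed back
    the XOR of the tapped bits (taps are 1-indexed bit positions). With a
    primitive feedback polynomial the period is 2**nbits - 1."""
    state, out = seed, []
    mask = (1 << nbits) - 1
    for _ in range(2 ** nbits - 1):
        out.append((state >> (nbits - 1)) & 1)   # emit the MSB
        fb = 0
        for t in taps:
            fb ^= (state >> (t - 1)) & 1
        state = ((state << 1) | fb) & mask
    return out

pn = lfsr_mls([8, 6, 5, 4], 8)   # one full period of an 8-bit MLS
```

    One period of an n-bit MLS is balanced up to a single bit (128 ones versus 127 zeros for n = 8), which is part of what guarantees the bit transitions the original BTG view cared about.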

  12. Nanowire decorated, ultra-thin, single crystalline silicon for photovoltaic devices.

    PubMed

    Aurang, Pantea; Turan, Rasit; Unalan, Husnu Emrah

    2017-10-06

    Reducing silicon (Si) wafer thickness in the photovoltaic industry has always been demanded for lowering the overall cost. Further benefits such as short collection lengths and improved open circuit voltages can also be achieved by Si thickness reduction. However, the problem with thin films is poor light absorption. One way to decrease optical losses in photovoltaic devices is to minimize the front side reflection. This approach can be applied to front contacted ultra-thin crystalline Si solar cells to increase the light absorption. In this work, homojunction solar cells were fabricated using ultra-thin and flexible single crystal Si wafers. A metal assisted chemical etching method was used for the nanowire (NW) texturization of ultra-thin Si wafers to compensate weak light absorption. A relative improvement of 56% in the reflectivity was observed for ultra-thin Si wafers with the thickness of 20 ± 0.2 μm upon NW texturization. NW length and top contact optimization resulted in a relative enhancement of 23% ± 5% in photovoltaic conversion efficiency.

  13. Planning energy-efficient bipedal locomotion on patterned terrain

    NASA Astrophysics Data System (ADS)

    Zamani, Ali; Bhounsule, Pranav A.; Taha, Ahmad

    2016-05-01

    Energy-efficient bipedal walking is essential in realizing practical bipedal systems. However, current energy-efficient bipedal robots (e.g., passive-dynamics-inspired robots) are limited to walking at a single speed and step length. The objective of this work is to address this gap by developing a method of synthesizing energy-efficient bipedal locomotion on patterned terrain consisting of stepping stones using energy-efficient primitives. A model of Cornell Ranger (a passive-dynamics inspired robot) is utilized to illustrate our technique. First, an energy-optimal trajectory control problem for a single step is formulated and solved. The solution minimizes the Total Cost Of Transport (TCOT is defined as the energy used per unit weight per unit distance travelled) subject to various constraints such as actuator limits, foot scuffing, joint kinematic limits, ground reaction forces. The outcome of the optimization scheme is a table of TCOT values as a function of step length and step velocity. Next, we parameterize the terrain to identify the location of the stepping stones. Finally, the TCOT table is used in conjunction with the parameterized terrain to plan an energy-efficient stepping strategy.
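    The final planning step, combining a TCOT table with the stepping-stone locations, is naturally a dynamic-programming computation over stone positions. A toy sketch (the cost curve is hypothetical, and the paper's table is also indexed by step velocity, omitted here):

```python
import math

def plan_steps(stones, tcot):
    """Minimum total energy cost to walk from stones[0] to stones[-1],
    choosing which stones to step on. tcot(L) is the cost of transport for
    a step of length L, or None if that step length is infeasible."""
    best = [math.inf] * len(stones)
    best[0] = 0.0
    for j in range(1, len(stones)):
        for i in range(j):
            L = stones[j] - stones[i]
            c = tcot(L)
            if c is not None:
                best[j] = min(best[j], best[i] + c * L)  # cost = TCOT * distance
    return best[-1]

# Hypothetical TCOT curve: cheapest near a 0.5 m step; steps above 0.7 m infeasible
def tcot(L):
    return None if L > 0.7 else (L - 0.5) ** 2 + 0.1

cost = plan_steps([0.0, 0.3, 0.6, 1.0], tcot)
```

    Here the planner skips the 0.3 m stone and takes a 0.6 m then a 0.4 m step, since both lie near the minimum of the assumed TCOT curve.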

  14. A minimal dissipation type-based classification in irreversible thermodynamics and microeconomics

    NASA Astrophysics Data System (ADS)

    Tsirlin, A. M.; Kazakov, V.; Kolinko, N. A.

    2003-10-01

    We formulate the problem of finding classes of kinetic dependencies in irreversible thermodynamic and microeconomic systems for which minimal dissipation processes belong to the same type. We show that this problem is an inverse optimal control problem and solve it. The commonality of this problem in irreversible thermodynamics and microeconomics is emphasized.

  15. Minimization In Digital Design As A Meta-Planning Problem

    NASA Astrophysics Data System (ADS)

    Ho, William P. C.; Wu, Jung-Gen

    1987-05-01

In our model-based expert system for automatic digital system design, we formalize the design process into three subprocesses - compiling high-level behavioral specifications into primitive behavioral operations, grouping primitive operations into behavioral functions, and grouping functions into modules. Consideration of design minimization explicitly controls decision-making in the last two subprocesses. Design minimization, a key task in the automatic design of digital systems, is complicated by the high degree of interaction among the time sequence and content of design decisions. In this paper, we present an AI approach which directly addresses these interactions and their consequences by modeling the minimization problem as a planning problem, and the management of design decision-making as a meta-planning problem.

  16. Corrected black hole thermodynamics in Damour-Ruffini’s method with generalized uncertainty principle

    NASA Astrophysics Data System (ADS)

    Zhou, Shiwei; Chen, Ge-Rui

Recently, some approaches to quantum gravity indicate that a minimal measurable length l_p ~ 10^-35 m should be considered; a direct implication of the minimal measurable length is the generalized uncertainty principle (GUP). Taking the effect of GUP into account, Hawking radiation of massless scalar particles from a Schwarzschild black hole is investigated by the use of Damour-Ruffini's method. The original Klein-Gordon equation is modified. It is found that the corrected Hawking temperature is related to the energy of the emitting particles. Some discussions appear in the last section.

  17. Graph cuts for curvature based image denoising.

    PubMed

    Bae, Egil; Shi, Juan; Tai, Xue-Cheng

    2011-05-01

    Minimization of total variation (TV) is a well-known method for image denoising. Recently, the relationship between TV minimization problems and binary MRF models has been much explored. This has resulted in some very efficient combinatorial optimization algorithms for the TV minimization problem in the discrete setting via graph cuts. To overcome limitations, such as staircasing effects, of the relatively simple TV model, variational models based upon higher order derivatives have been proposed. The Euler's elastica model is one such higher order model of central importance, which minimizes the curvature of all level lines in the image. Traditional numerical methods for minimizing the energy in such higher order models are complicated and computationally complex. In this paper, we will present an efficient minimization algorithm based upon graph cuts for minimizing the energy in the Euler's elastica model, by simplifying the problem to that of solving a sequence of easy graph representable problems. This sequence has connections to the gradient flow of the energy function, and converges to a minimum point. The numerical experiments show that our new approach is more effective in maintaining smooth visual results while preserving sharp features better than TV models.
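The total-variation idea this record builds on can be shown with a one-dimensional sketch: plain gradient descent on a smoothed TV energy 0.5*||x - y||^2 + lam*TV(x). This is only the simple first-order baseline, not the paper's graph-cut algorithm or the elastica model; the step signal, `lam`, step size, and smoothing `eps` are illustrative assumptions.

```python
import numpy as np

def tv_denoise_1d(y, lam=0.5, step=0.05, iters=1000, eps=1e-2):
    """Gradient descent on 0.5*||x - y||^2 + lam * sum_i sqrt(d_i^2 + eps),
    where d_i = x[i+1] - x[i] (a smoothed total-variation term)."""
    x = y.copy()
    for _ in range(iters):
        d = np.diff(x)
        g = d / np.sqrt(d * d + eps)          # derivative of smoothed |d|
        # gradient of the TV term w.r.t. each sample x[k]
        tv_grad = np.concatenate(([-g[0]], g[:-1] - g[1:], [g[-1]]))
        x = x - step * ((x - y) + lam * tv_grad)
    return x

rng = np.random.default_rng(0)
clean = np.concatenate([np.zeros(50), np.ones(50)])    # signal with one edge
y = clean + 0.1 * rng.standard_normal(100)             # noisy observation
x = tv_denoise_1d(y)
```

TV denoising suppresses the noise while keeping the jump sharp; the staircasing on smooth ramps is the limitation that motivates the higher-order elastica model discussed in the abstract.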

  18. Enhancing quantum annealing performance for the molecular similarity problem

    NASA Astrophysics Data System (ADS)

    Hernandez, Maritza; Aramon, Maliheh

    2017-05-01

    Quantum annealing is a promising technique which leverages quantum mechanics to solve hard optimization problems. Considerable progress has been made in the development of a physical quantum annealer, motivating the study of methods to enhance the efficiency of such a solver. In this work, we present a quantum annealing approach to measure similarity among molecular structures. Implementing real-world problems on a quantum annealer is challenging due to hardware limitations such as sparse connectivity, intrinsic control error, and limited precision. In order to overcome the limited connectivity, a problem must be reformulated using minor-embedding techniques. Using a real data set, we investigate the performance of a quantum annealer in solving the molecular similarity problem. We provide experimental evidence that common practices for embedding can be replaced by new alternatives which mitigate some of the hardware limitations and enhance its performance. Common practices for embedding include minimizing either the number of qubits or the chain length and determining the strength of ferromagnetic couplers empirically. We show that current criteria for selecting an embedding do not improve the hardware's performance for the molecular similarity problem. Furthermore, we use a theoretical approach to determine the strength of ferromagnetic couplers. Such an approach removes the computational burden of the current empirical approaches and also results in hardware solutions that can benefit from simple local classical improvement. Although our results are limited to the problems considered here, they can be generalized to guide future benchmarking studies.

  19. The crack and wedging problem for an orthotropic strip

    NASA Technical Reports Server (NTRS)

    Cinar, A.; Erdogan, F.

    1982-01-01

The plane elasticity problem for an orthotropic strip containing a crack parallel to its boundaries is considered. The problem is formulated under general mixed mode loading conditions. The stress intensity factors depend on two dimensionless orthotropic constants only. For the crack problem the results are given for a single crack and two collinear cracks. The calculated results show that of the two orthotropic constants the influence of the stiffness ratio on the stress intensity factors is much more significant than that of the shear parameter. The problem of wedging the strip open with a rigid rectangular wedge is also considered: for small wedge lengths, continuous contact is maintained along the wedge-strip interface; at a certain critical wedge length the separation starts at the midsection of the wedge, and the length of the separation zone increases rapidly with increasing wedge length.

  20. Hybrid minimally invasive esophagectomy for cancer: impact on postoperative inflammatory and nutritional status.

    PubMed

    Scarpa, M; Cavallin, F; Saadeh, L M; Pinto, E; Alfieri, R; Cagol, M; Da Roit, A; Pizzolato, E; Noaro, G; Pozza, G; Castoro, C

    2016-11-01

The purpose of this case-control study was to evaluate the impact of hybrid minimally invasive esophagectomy for cancer on surgical stress response and nutritional status. All 34 consecutive patients undergoing hybrid minimally invasive esophagectomy for cancer at our surgical unit between 2008 and 2013 were retrospectively compared with 34 patients undergoing esophagectomy with open gastric tubulization (open), matched for neoadjuvant therapy, pathological stage, gender and age. Demographic data, tumor features and postoperative course (including quality of life and systemic inflammatory and nutritional status) were compared. Postoperative course was similar in terms of complication rate. Length of stay in intensive care unit was shorter in patients undergoing hybrid minimally invasive esophagectomy (P = 0.002). On the first postoperative day, patients undergoing hybrid minimally invasive esophagectomy had lower C-reactive protein levels (P = 0.001) and white cell blood count (P = 0.05), and higher albumin serum level (P = 0.001). In this group, albumin remained higher at the third (P = 0.06) and seventh (P = 0.008) postoperative days, and C-reactive protein was lower at the third postoperative day (P = 0.04). Hybrid minimally invasive esophagectomy significantly improved the systemic inflammatory and catabolic response to surgical trauma, contributing to a shorter length of stay in intensive care unit. © 2015 International Society for Diseases of the Esophagus.

  1. Time estimation as a secondary task to measure workload: Summary of research

    NASA Technical Reports Server (NTRS)

    Hart, S. G.; Mcpherson, D.; Loomis, L. L.

    1978-01-01

Actively produced intervals of time were found to increase in length and variability, whereas retrospectively produced intervals decreased in length although they also increased in variability with the addition of a variety of flight-related tasks. If pilots counted aloud while making a production, however, the impact of concurrent activity was minimized, at least for the moderately demanding primary tasks that were selected. The effects of feedback on estimation accuracy and consistency were greatly enhanced if a counting or tapping production technique was used. This compares with the minimal effect that feedback had when no overt timekeeping technique was used. Actively made verbal estimates of intervals decreased in length as the amount and complexity of activities performed during the interval were increased. Retrospectively made verbal estimates, however, increased in length as the amount and complexity of activities performed during the interval were increased.

  2. Influence of a source line position on results of EM observations applied to the diagnostics of underground heating system pipelines in urban area

    NASA Astrophysics Data System (ADS)

    Vetrov, A.

    2009-05-01

The condition of underground structures, communication lines and supply systems in cities has to be monitored periodically in order to prevent breakage, which can result in serious accidents, especially in urban areas. Underground structures made of steel, such as the pipelines widely used for water, gas and heat supply, are at the greatest risk of damage. Ensuring pipeline survivability requires rapid and inexpensive monitoring of pipeline condition, and induced electromagnetic methods of geophysics can provide such diagnostics. The highly developed surface in urban areas is one of the factors hampering the application of electromagnetic diagnostic methods. The main problem is finding an appropriate place for the source line and electrodes on a limited surface area, and their optimal position relative to the observation path, so as to minimize their influence on the observed data. The author performed a number of diagnostic experiments on an underground heating system pipeline using different positions of the source line and electrodes. The experiments were made on a 200 meter section above a pipeline buried 2 meters deep. The admissible length of the source line and the angle between the source line and the observation path were determined: for the experimental conditions and accuracy, the minimal length of the source line was 30 meters, and the maximum admissible angular departure from the perpendicular position was 30 degrees. The work was undertaken in cooperation with the diagnostics company DIsSO, Saint Petersburg, Russia.

  3. Graph cuts via l1 norm minimization.

    PubMed

    Bhusnurmath, Arvind; Taylor, Camillo J

    2008-10-01

    Graph cuts have become an increasingly important tool for solving a number of energy minimization problems in computer vision and other fields. In this paper, the graph cut problem is reformulated as an unconstrained l1 norm minimization that can be solved effectively using interior point methods. This reformulation exposes connections between the graph cuts and other related continuous optimization problems. Eventually the problem is reduced to solving a sequence of sparse linear systems involving the Laplacian of the underlying graph. The proposed procedure exploits the structure of these linear systems in a manner that is easily amenable to parallel implementations. Experimental results obtained by applying the procedure to graphs derived from image processing problems are provided.
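The l1 view of a graph cut can be made concrete on a toy instance: the minimum s-t cut equals the minimum of the weighted l1 energy sum_e w_e*|x_i - x_j| over node potentials with x_s = 1, x_t = 0, and this relaxation has an integral optimum. The sketch below solves that LP with `scipy.optimize.linprog` on an invented 4-node graph (it is not the interior-point procedure of the paper, just the underlying reformulation).

```python
import numpy as np
from scipy.optimize import linprog

# Min s-t cut as an l1-norm LP: min sum_e w_e |x_i - x_j|, x_s = 1, x_t = 0.
edges = [(0, 1, 3.0), (0, 2, 4.0), (1, 3, 5.0), (2, 3, 2.0), (1, 2, 1.0)]
n, s, t = 4, 0, 3

free = [v for v in range(n) if v not in (s, t)]   # unknown node potentials
idx = {v: k for k, v in enumerate(free)}
m = len(edges)
nv = len(free) + m                                # potentials + one slack per edge

c = np.zeros(nv)
A, b = [], []
for e, (i, j, w) in enumerate(edges):
    c[len(free) + e] = w                          # objective: sum w_e * t_e
    for (p, q) in ((i, j), (j, i)):               # t_e >= x_p - x_q (both signs)
        row = np.zeros(nv)
        rhs = 0.0
        if p in idx:
            row[idx[p]] += 1
        else:
            rhs -= 1.0 if p == s else 0.0         # fixed x_s = 1 moves to rhs
        if q in idx:
            row[idx[q]] -= 1
        else:
            rhs += 1.0 if q == s else 0.0
        row[len(free) + e] = -1
        A.append(row)
        b.append(rhs)

bounds = [(0, 1)] * len(free) + [(0, None)] * m
res = linprog(c, A_ub=np.array(A), b_ub=np.array(b), bounds=bounds)
cut_value = res.fun                               # equals the min cut weight
```

For this graph the minimum cut separates {s, 2} from {1, t} with weight 6, which the LP recovers exactly; thresholding any fractional optimum at any level in (0, 1) yields a cut of the same weight.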

  4. Robust penalty method for structural synthesis

    NASA Technical Reports Server (NTRS)

    Kamat, M. P.

    1983-01-01

    The Sequential Unconstrained Minimization Technique (SUMT) offers an easy way of solving nonlinearly constrained problems. However, this algorithm frequently suffers from the need to minimize an ill-conditioned penalty function. An ill-conditioned minimization problem can be solved very effectively by posing the problem as one of integrating a system of stiff differential equations utilizing concepts from singular perturbation theory. This paper evaluates the robustness and the reliability of such a singular perturbation based SUMT algorithm on two different problems of structural optimization of widely separated scales. The report concludes that whereas conventional SUMT can be bogged down by frequent ill-conditioning, especially in large scale problems, the singular perturbation SUMT has no such difficulty in converging to very accurate solutions.
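The SUMT idea the abstract starts from - replacing a constrained problem by a sequence of unconstrained penalized problems - can be sketched on a toy equality-constrained quadratic (the problem, penalty schedule, and solver choice below are illustrative, not the structural examples of the paper). The exact solution of min x^2 + y^2 subject to x + y = 1 is x = y = 1/2.

```python
import numpy as np
from scipy.optimize import minimize

def penalized(v, mu):
    """Exterior quadratic penalty: objective + mu * (constraint violation)^2."""
    x, y = v
    return x ** 2 + y ** 2 + mu * (x + y - 1.0) ** 2

v = np.zeros(2)
for mu in [1.0, 10.0, 100.0, 1000.0, 10000.0]:
    # warm-start each unconstrained solve from the previous penalty level
    v = minimize(penalized, v, args=(mu,)).x
```

The growing penalty weight `mu` is exactly what makes the later subproblems ill-conditioned, which is the difficulty the singular-perturbation reformulation in this paper is designed to avoid.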

  5. New Polyazine-Bridged RuII,RhIII and RuII,RhI Supramolecular Photocatalysts for Water Reduction to Hydrogen Applicable for Solar Energy Conversion and Mechanistic Investigation of the Photocatalytic Cycle

    NASA Astrophysics Data System (ADS)

    Zhou, Rongwei

    Underwater gliders are robust and long endurance ocean sampling platforms that are increasingly being deployed in coastal regions. This new environment is characterized by shallow waters and significant currents that can challenge the mobility of these efficient (but traditionally slow moving) vehicles. This dissertation aims to improve the performance of shallow water underwater gliders through path planning. The path planning problem is formulated for a dynamic particle (or "kinematic car") model. The objective is to identify the path which satisfies specified boundary conditions and minimizes a particular cost. Several cost functions are considered. The problem is addressed using optimal control theory. The length scales of interest for path planning are within a few turn radii. First, an approach is developed for planning minimum-time paths, for a fixed speed glider, that are sub-optimal but are guaranteed to be feasible in the presence of unknown time-varying currents. Next the minimum-time problem for a glider with speed controls, that may vary between the stall speed and the maximum speed, is solved. Last, optimal paths that minimize change in depth (equivalently, maximize range) are investigated. Recognizing that path planning alone cannot overcome all of the challenges associated with significant currents and shallow waters, the design of a novel underwater glider with improved capabilities is explored. A glider with a pneumatic buoyancy engine (allowing large, rapid buoyancy changes) and a cylindrical moving mass mechanism (generating large pitch and roll moments) is designed, manufactured, and tested to demonstrate potential improvements in speed and maneuverability.

  6. Minimally Invasive Surgical Pulmonary Embolectomy: A Potential Alternative to Conventional Sternotomy.

    PubMed

    Pasrija, Chetan; Shah, Aakash; Sultanik, Elliot; Rouse, Michael; Ghoreishi, Mehrdad; Bittle, Gregory J; Boulos, Francesca; Griffith, Bartley P; Kon, Zachary N

    Surgical pulmonary embolectomy has gained increasing popularity over the past decade with multiple series reporting excellent outcomes in the treatment of submassive pulmonary embolism. However, a significant barrier to the broader adoption of surgical pulmonary embolectomy remains the large incision and long recovery after a full sternotomy. We report the safety and efficacy of using a minimally invasive approach to surgical pulmonary embolectomy. All consecutive patients undergoing surgical pulmonary embolectomy for a submassive pulmonary embolism (2015-2017) were reviewed. Patients were stratified as conventional or minimally invasive. The minimally invasive approach included a 5- to 7-cm skin incision with upper hemisternotomy to the third intercostal space. The primary outcomes were in-hospital and 90-day survival. Thirty patients (conventional = 20, minimally invasive = 10) were identified. Operative time was similar between the two groups, but cardiopulmonary bypass time was significantly longer in the minimally invasive group (58 vs 94 minutes, P = 0.04). While ventilator time and intensive care unit length of stay were similar between groups, hospital length of stay was 4.5 days shorter in the minimally invasive group, and there was a trend toward less blood product use. In-hospital and 90-day survival was 100%. Within the minimally invasive cohort, median right ventricular dysfunction at discharge was none-mild and no patient experienced postoperative renal failure, deep sternal wound infection, sepsis, or stroke. Minimally invasive surgical pulmonary embolectomy appears to be a feasible approach in the treatment of patients with a submassive pulmonary embolism. A larger, prospective analysis comparing this modality with conventional surgical pulmonary embolectomy may be warranted.

7. L∞ Variational Problems with Running Costs and Constraints

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aronsson, G., E-mail: gunnar.aronsson@liu.se; Barron, E. N., E-mail: enbarron@math.luc.edu

    2012-02-15

Various approaches are used to derive the Aronsson-Euler equations for L∞ calculus of variations problems with constraints. The problems considered involve holonomic, nonholonomic, isoperimetric, and isosupremic constraints on the minimizer. In addition, we derive the Aronsson-Euler equation for the basic L∞ problem with a running cost and then consider properties of an absolute minimizer. Many open problems are introduced for further study.

  8. Concurrent optimization of material spatial distribution and material anisotropy repartition for two-dimensional structures

    NASA Astrophysics Data System (ADS)

    Ranaivomiarana, Narindra; Irisarri, François-Xavier; Bettebghor, Dimitri; Desmorat, Boris

    2018-04-01

An optimization methodology is proposed to concurrently find the material spatial distribution and the material anisotropy repartition for orthotropic, linear, elastic two-dimensional membrane structures. The shape of the structure is parameterized by a density variable that determines the presence or absence of material. The polar method is used to parameterize a general orthotropic material by the invariants of its elasticity tensor under change of frame. A global structural stiffness maximization problem, written as a compliance minimization problem, is treated, and a volume constraint is applied. The compliance minimization can be recast as a double minimization of complementary energy. An extension of the alternate directions algorithm is proposed to solve this double minimization problem. The algorithm iterates between local minimizations in each element of the structure and global minimizations. Thanks to the polar method, the local minimizations are solved explicitly, providing analytical solutions. The global minimizations are performed with finite element calculations. The method is shown to be straightforward and efficient. Concurrent optimization of the density and anisotropy distribution of a cantilever beam and a bridge are presented.

  9. Blow-up behavior of ground states for a nonlinear Schrödinger system with attractive and repulsive interactions

    NASA Astrophysics Data System (ADS)

    Guo, Yujin; Zeng, Xiaoyu; Zhou, Huan-Song

    2018-01-01

    We consider a nonlinear Schrödinger system arising in a two-component Bose-Einstein condensate (BEC) with attractive intraspecies interactions and repulsive interspecies interactions in R2. We get ground states of this system by solving a constrained minimization problem. For some kinds of trapping potentials, we prove that the minimization problem has a minimizer if and only if the attractive interaction strength ai (i = 1 , 2) of each component of the BEC system is strictly less than a threshold a*. Furthermore, as (a1 ,a2) ↗ (a* ,a*), the asymptotical behavior for the minimizers of the minimization problem is discussed. Our results show that each component of the BEC system concentrates at a global minimum of the associated trapping potential.
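The "ground state as a constrained minimizer" idea can be shown in miniature: minimize a quadratic energy x^T A x over the unit sphere ||x|| = 1 by projected gradient descent. Here A is a discrete 1-D kinetic-energy (Laplacian) operator standing in for the BEC energy functional (which also has trapping and nonlinear interaction terms not reproduced here); the constrained minimizer is the lowest eigenvector and the minimum energy is the smallest eigenvalue, 2 - 2*cos(pi/7) for this 6-point operator.

```python
import numpy as np

n = 6
# tridiagonal (Dirichlet) Laplacian as a stand-in "energy" operator
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)

x = np.ones(n) / np.sqrt(n)        # feasible starting point on the sphere
for _ in range(2000):
    x = x - 0.05 * (A @ x)         # gradient step on the quadratic energy
    x /= np.linalg.norm(x)         # project back onto the constraint set

ground_energy = x @ A @ x          # converges to the smallest eigenvalue
```

The normalize-after-each-step structure mirrors the normalized gradient flows commonly used for the mass-constrained minimization problems of this type; the threshold phenomenon in the paper concerns when such a minimizer exists at all.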

  10. Steiner trees and spanning trees in six-pin soap films

    NASA Astrophysics Data System (ADS)

    Dutta, Prasun; Khastgir, S. Pratik; Roy, Anushree

    2010-02-01

    The problem of finding minimum (local as well as absolute) path lengths joining given points (or terminals) on a plane is known as the Steiner problem. The Steiner problem arises in finding the minimum total road length joining several towns and cities. We study the Steiner tree problem using six-pin soap films. Experimentally, we observe spanning trees as well as Steiner trees partly by varying the pin diameter. We propose a possibly exact expression for the length of a spanning tree or a Steiner tree, which fails mysteriously in certain cases.
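The three-pin case can be computed directly: the Steiner (Fermat) point of a triangle of pins minimizes the total distance to the pins, which is what the soap film realizes, and Weiszfeld iteration finds it. The right triangle below is an illustrative choice (all angles below 120 degrees, so the Steiner point is interior); for it the Steiner tree length is sqrt(2 + sqrt(3)) ≈ 1.932, shorter than the best spanning tree of length 2.

```python
import numpy as np

pins = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])  # three pins

p = pins.mean(axis=0)                                  # start at the centroid
for _ in range(200):
    d = np.linalg.norm(pins - p, axis=1)               # distances to the pins
    w = 1.0 / d
    p = (pins * w[:, None]).sum(axis=0) / w.sum()      # Weiszfeld update

steiner_len = np.linalg.norm(pins - p, axis=1).sum()   # Steiner tree length
spanning_len = 2.0        # best spanning tree uses the two unit-length legs
```

The six-pin films of the paper generalize this: the film can settle into either a spanning-tree-like or a Steiner-tree-like local minimum, which is why varying the pin diameter selects between them.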

  11. The crack and wedging problem for an orthotropic strip

    NASA Technical Reports Server (NTRS)

    Cinar, A.; Erdogan, F.

    1983-01-01

The plane elasticity problem for an orthotropic strip containing a crack parallel to its boundaries is considered. The problem is formulated under general mixed mode loading conditions. The stress intensity factors depend on two dimensionless orthotropic constants only. For the crack problem the results are given for a single crack and two collinear cracks. The calculated results show that of the two orthotropic constants the influence of the stiffness ratio on the stress intensity factors is much more significant than that of the shear parameter. The problem of wedging the strip open with a rigid rectangular wedge is also considered: for small wedge lengths, continuous contact is maintained along the wedge-strip interface; at a certain critical wedge length the separation starts at the midsection of the wedge, and the length of the separation zone increases rapidly with increasing wedge length. Previously announced in STAR as N82-26707

  12. Finite-element grid improvement by minimization of stiffness matrix trace

    NASA Technical Reports Server (NTRS)

    Kittur, Madan G.; Huston, Ronald L.; Oswald, Fred B.

    1989-01-01

    A new and simple method of finite-element grid improvement is presented. The objective is to improve the accuracy of the analysis. The procedure is based on a minimization of the trace of the stiffness matrix. For a broad class of problems this minimization is seen to be equivalent to minimizing the potential energy. The method is illustrated with the classical tapered bar problem examined earlier by Prager and Masur. Identical results are obtained.
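The trace-minimization criterion can be demonstrated on a two-element tapered bar (the linear taper and unit modulus below are illustrative assumptions, not the paper's data). Each bar element of stiffness k contributes 2k to the trace of the assembled matrix, so the trace is an easily computed function of the interior node position; for this taper the trace-minimizing node location is x = sqrt(2) - 1, biased toward the thin end where element stiffness varies fastest.

```python
import numpy as np

E, L = 1.0, 1.0

def area(x):
    return 1.0 + x              # cross-section tapers linearly along the bar

def stiffness_trace(xm):
    """Trace of the assembled stiffness matrix with the interior node at xm."""
    tr = 0.0
    for (a, b) in ((0.0, xm), (xm, L)):
        k = E * area(0.5 * (a + b)) / (b - a)   # element stiffness E*A_mid/h
        tr += 2.0 * k                            # each element adds 2k to trace
    return tr

# brute-force the trace-minimizing interior node position
xs = np.linspace(0.05, 0.95, 1801)
best = xs[np.argmin([stiffness_trace(x) for x in xs])]
```

Setting the derivative of the trace to zero gives 2*xm^2 = (1 - xm)^2, i.e. xm = 1/(1 + sqrt(2)) = sqrt(2) - 1 ≈ 0.414, which the grid search recovers.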

  13. Finite-element grid improvement by minimization of stiffness matrix trace

    NASA Technical Reports Server (NTRS)

    Kittur, Madan G.; Huston, Ronald L.; Oswald, Fred B.

    1987-01-01

    A new and simple method of finite-element grid improvement is presented. The objective is to improve the accuracy of the analysis. The procedure is based on a minimization of the trace of the stiffness matrix. For a broad class of problems this minimization is seen to be equivalent to minimizing the potential energy. The method is illustrated with the classical tapered bar problem examined earlier by Prager and Masur. Identical results are obtained.

  14. Minimizing the Total Service Time of Discrete Dynamic Berth Allocation Problem by an Iterated Greedy Heuristic

    PubMed Central

    2014-01-01

    Berth allocation is the forefront operation performed when ships arrive at a port and is a critical task in container port optimization. Minimizing the time ships spend at berths constitutes an important objective of berth allocation problems. This study focuses on the discrete dynamic berth allocation problem (discrete DBAP), which aims to minimize total service time, and proposes an iterated greedy (IG) algorithm to solve it. The proposed IG algorithm is tested on three benchmark problem sets. Experimental results show that the proposed IG algorithm can obtain optimal solutions for all test instances of the first and second problem sets and outperforms the best-known solutions for 35 out of 90 test instances of the third problem set. PMID:25295295
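The destroy-and-rebuild structure of an iterated greedy heuristic can be sketched on a toy discrete berth allocation instance (the arrival times, handling times, and IG parameters below are invented for illustration; this is the generic IG scheme, not the paper's tuned algorithm). Ships assigned to a berth are served in arrival order, and the objective is total service time, i.e. waiting plus handling.

```python
import random

ARRIVAL = [0, 1, 2, 3, 5, 6, 8, 9]                  # ship arrival times
HANDLE = [[4, 6], [5, 3], [3, 5], [6, 4],           # handling time of ship i
          [4, 4], [2, 6], [5, 3], [3, 2]]           # at berth b (2 berths)
BERTHS = 2

def total_service_time(assign):                     # assign: {ship: berth}
    free = [0.0] * BERTHS                           # next free time per berth
    total = 0.0
    for ship in sorted(assign, key=lambda i: ARRIVAL[i]):
        b = assign[ship]
        start = max(free[b], ARRIVAL[ship])
        free[b] = start + HANDLE[ship][b]
        total += free[b] - ARRIVAL[ship]            # waiting + handling
    return total

def greedy_insert(assign, ships):
    for ship in ships:                              # place each ship where it
        costs = [total_service_time({**assign, ship: b}) for b in range(BERTHS)]
        assign[ship] = min(range(BERTHS), key=costs.__getitem__)
    return assign

def iterated_greedy(iters=200, destroy=3, seed=1):
    rng = random.Random(seed)
    best = greedy_insert({}, range(len(ARRIVAL)))   # initial greedy solution
    for _ in range(iters):
        trial = dict(best)
        for ship in rng.sample(sorted(trial), destroy):   # destruction phase
            del trial[ship]
        missing = sorted(set(range(len(ARRIVAL))) - set(trial))
        trial = greedy_insert(trial, missing)             # reconstruction
        if total_service_time(trial) <= total_service_time(best):
            best = trial                                  # acceptance test
    return best

best = iterated_greedy()
```

The IG loop can never do worse than the initial greedy assignment, and the repeated partial destruction is what lets it escape the greedy construction's myopic choices.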

  15. An approach of traffic signal control based on NLRSQP algorithm

    NASA Astrophysics Data System (ADS)

    Zou, Yuan-Yang; Hu, Yu

    2017-11-01

This paper presents a linear program model with linear complementarity constraints (LPLCC) to solve the traffic signal optimization problem. The objective of the model is to minimize the weighted total queue length at the end of each cycle. A combination algorithm based on nonlinear least-squares regression and sequential quadratic programming (NLRSQP) is then proposed, by which a local optimal solution can be obtained. Furthermore, four numerical experiments are carried out to study how the initial solution of the algorithm should be set so that a better local optimal solution is obtained more quickly. In particular, the results of the numerical experiments show that the model is effective for different arrival rates and weight factors, and that the lower the initial solution, the better the local optimal solution that can be obtained.

  16. Bilevel formulation of a policy design problem considering multiple objectives and incomplete preferences

    NASA Astrophysics Data System (ADS)

    Hawthorne, Bryant; Panchal, Jitesh H.

    2014-07-01

    A bilevel optimization formulation of policy design problems considering multiple objectives and incomplete preferences of the stakeholders is presented. The formulation is presented for Feed-in-Tariff (FIT) policy design for decentralized energy infrastructure. The upper-level problem is the policy designer's problem and the lower-level problem is a Nash equilibrium problem resulting from market interactions. The policy designer has two objectives: maximizing the quantity of energy generated and minimizing policy cost. The stakeholders decide on quantities while maximizing net present value and minimizing capital investment. The Nash equilibrium problem in the presence of incomplete preferences is formulated as a stochastic linear complementarity problem and solved using expected value formulation, expected residual minimization formulation, and the Monte Carlo technique. The primary contributions in this article are the mathematical formulation of the FIT policy, the extension of computational policy design problems to multiple objectives, and the consideration of incomplete preferences of stakeholders for policy design problems.

  17. New latent heat storage system with nanoparticles for thermal management of electric vehicles

    NASA Astrophysics Data System (ADS)

    Javani, N.; Dincer, I.; Naterer, G. F.

    2014-12-01

    In this paper, a new passive thermal management system for electric vehicles is developed. A latent heat thermal energy storage with nanoparticles is designed and optimized. A genetic algorithm method is employed to minimize the length of the heat exchanger tubes. The results show that even the optimum length of a shell and tube heat exchanger becomes too large to be employed in a vehicle. This is mainly due to the very low thermal conductivity of phase change material (PCM) which fills the shell side of the heat exchanger. A carbon nanotube (CNT) and PCM mixture is then studied where the probability of nanotubes in a series configuration is defined as a deterministic design parameter. Various heat transfer rates, ranging from 300 W to 600 W, are utilized to optimize battery cooling options in the heat exchanger. The optimization results show that smaller tube diameters minimize the heat exchanger length. Furthermore, finned tubes lead to a higher heat exchanger length due to more heat transfer resistance. By increasing the CNT concentration, the optimum length of the heat exchanger decreases and makes the improved thermal management system a more efficient and competitive with air and liquid thermal management systems.
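A real-coded genetic algorithm of the kind used here to minimize tube length can be sketched as follows. The objective is a stand-in L(d) - a heat-transfer term that grows as the tube diameter d shrinks plus a penalty term that grows with d - since the paper's heat-exchanger model and constants are not reproduced; the population size, operators, and bounds are likewise illustrative.

```python
import random

def tube_length(d):
    """Stand-in objective: required length vs tube diameter d (invented)."""
    return 1.0 / d + 50.0 * d ** 2

def ga_minimize(f, lo, hi, pop_size=30, gens=60, seed=0):
    rng = random.Random(seed)
    pop = [rng.uniform(lo, hi) for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=f)
        parents = pop[: pop_size // 2]                  # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            child = 0.5 * (a + b)                       # arithmetic crossover
            child += rng.gauss(0.0, 0.05 * (hi - lo))   # Gaussian mutation
            children.append(min(hi, max(lo, child)))
        pop = parents + children                        # elitist replacement
    return min(pop, key=f)

best_d = ga_minimize(tube_length, 0.05, 1.0)
```

For this stand-in objective the analytic minimizer is d = 0.01^(1/3) ≈ 0.215; keeping the parent half of the population each generation makes the best-so-far value monotone, a common GA design choice for engineering sizing problems like this one.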

  18. Gap-minimal systems of notations and the constructible hierarchy

    NASA Technical Reports Server (NTRS)

    Lucian, M. L.

    1972-01-01

    If a constructibly countable ordinal alpha is a gap ordinal, then the order type of the set of index ordinals smaller than alpha is exactly alpha. The gap ordinals are the only points of discontinuity of a certain ordinal-valued function. The notion of gap minimality for well ordered systems of notations is defined, and the existence of gap-minimal systems of notations of arbitrarily large constructibly countable length is established.

  19. Multiple Ordinal Regression by Maximizing the Sum of Margins

    PubMed Central

    Hamsici, Onur C.; Martinez, Aleix M.

    2016-01-01

    Human preferences are usually measured using ordinal variables. A system whose goal is to estimate the preferences of humans and their underlying decision mechanisms requires to learn the ordering of any given sample set. We consider the solution of this ordinal regression problem using a Support Vector Machine algorithm. Specifically, the goal is to learn a set of classifiers with common direction vectors and different biases correctly separating the ordered classes. Current algorithms are either required to solve a quadratic optimization problem, which is computationally expensive, or are based on maximizing the minimum margin (i.e., a fixed margin strategy) between a set of hyperplanes, which biases the solution to the closest margin. Another drawback of these strategies is that they are limited to order the classes using a single ranking variable (e.g., perceived length). In this paper, we define a multiple ordinal regression algorithm based on maximizing the sum of the margins between every consecutive class with respect to one or more rankings (e.g., perceived length and weight). We provide derivations of an efficient, easy-to-implement iterative solution using a Sequential Minimal Optimization procedure. We demonstrate the accuracy of our solutions in several datasets. In addition, we provide a key application of our algorithms in estimating human subjects’ ordinal classification of attribute associations to object categories. We show that these ordinal associations perform better than the binary one typically employed in the literature. PMID:26529784

  20. Job stress, unwinding and drinking in transit operators.

    PubMed

    Delaney, William P; Grube, Joel W; Greiner, Birgit; Fisher, June M; Ragland, David R

    2002-07-01

    This study tests the spillover model of the effects of work stress on after-work drinking, using the variable "length of time to unwind" as a mediator. A total of 1,974 transit operators were contacted and 1,553 (79%) of them participated in a personal interview. Complete data on the variables in this analysis were available for 1,208 respondents (84% men). Using latent variable structural equation modeling, a model was tested that predicted that daily job problems, skipped meals and less social support from supervisor would increase alcohol consumption through the mediator, length of time to unwind and relax after work. Increased alcohol consumption was, in turn, hypothesized to increase drinking problems. As predicted, skipped meals and daily job problems increased length of time to unwind and had an indirect positive relationship with overall drinking, even when controlling for drinking norms and demographic variables. Overall drinking was positively associated with drinking problems. Supervisor support at work, however, did not significantly influence length of time to unwind. Difficulty unwinding (longer time to unwind) did not have direct effects on drinking problems; however, indirect effects through overall drinking were observed. These results provide preliminary support for the mediating role of length of time to unwind and relax after work in a spillover model of the stress-drinking relationship. This research introduces a new mediator and empirical links between job problems, length of time to unwind, drinking and drinking problems, which ground more substantively the domains of work stress and alcohol consumption.

  1. Decomposing Large Inverse Problems with an Augmented Lagrangian Approach: Application to Joint Inversion of Body-Wave Travel Times and Surface-Wave Dispersion Measurements

    NASA Astrophysics Data System (ADS)

    Reiter, D. T.; Rodi, W. L.

    2015-12-01

    Constructing 3D Earth models through the joint inversion of large geophysical data sets presents numerous theoretical and practical challenges, especially when diverse types of data and model parameters are involved. Among the challenges are the computational complexity associated with large data and model vectors and the need to unify differing model parameterizations, forward modeling methods and regularization schemes within a common inversion framework. The challenges can be addressed in part by decomposing the inverse problem into smaller, simpler inverse problems that can be solved separately, providing one knows how to merge the separate inversion results into an optimal solution of the full problem. We have formulated an approach to the decomposition of large inverse problems based on the augmented Lagrangian technique from optimization theory. As commonly done, we define a solution to the full inverse problem as the Earth model minimizing an objective function motivated, for example, by a Bayesian inference formulation. Our decomposition approach recasts the minimization problem equivalently as the minimization of component objective functions, corresponding to specified data subsets, subject to the constraints that the minimizing models be equal. A standard optimization algorithm solves the resulting constrained minimization problems by alternating between the separate solution of the component problems and the updating of Lagrange multipliers that serve to steer the individual solution models toward a common model solving the full problem. We are applying our inversion method to the reconstruction of the crust and upper-mantle seismic velocity structure across Eurasia. Data for the inversion comprise a large set of P and S body-wave travel times and fundamental and first-higher mode Rayleigh-wave group velocities.
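
    The alternating scheme described above can be sketched on a toy consensus problem. This is an illustration of the augmented-Lagrangian (ADMM-style) decomposition idea only, not the authors' seismic inversion code; the quadratic component misfits, penalty parameter and iteration count are assumptions:

```python
import numpy as np

# Two component objectives f_i(x) = (x - a_i)^2 stand in for the
# body-wave and surface-wave misfits; consensus constraints x_i = z
# are enforced via augmented-Lagrangian multiplier updates.
a = np.array([1.0, 3.0])   # hypothetical component minimizers
rho = 1.0                  # penalty parameter (assumed)
x = np.zeros(2)            # component models
u = np.zeros(2)            # scaled Lagrange multipliers
z = 0.0                    # common (consensus) model

for _ in range(100):
    # separate solution of each component problem:
    # argmin_x (x - a_i)^2 + (rho/2)(x - z + u_i)^2
    x = (2 * a + rho * (z - u)) / (2 + rho)
    # merge step: update the common model
    z = np.mean(x + u)
    # multiplier update steers components toward consensus
    u = u + x - z

print(z)   # tends to the minimizer of the summed objective, (a1 + a2)/2 = 2
```

    Each component problem is solved on its own, and the multiplier updates pull the component models toward the common model z, which converges to the minimizer of the full (summed) objective.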

  2. Problem of quality assurance during metal constructions welding via robotic technological complexes

    NASA Astrophysics Data System (ADS)

    Fominykh, D. S.; Rezchikov, A. F.; Kushnikov, V. A.; Ivashchenko, V. A.; Bogomolov, A. S.; Filimonyuk, L. Yu; Dolinina, O. N.; Kushnikov, O. V.; Shulga, T. E.; Tverdokhlebov, V. A.

    2018-05-01

    The problem of minimizing the probability of critical combinations of events that lead to a loss of welding quality in robotic process automation is examined. The problem is formulated, and models and algorithms for its solution are developed. The problem is solved by minimizing a criterion characterizing the losses caused by defective products. Solving the problem may enhance the quality and accuracy of the operations performed and reduce the losses caused by defective products.

  3. Incidence of cerebrovascular accidents in patients undergoing minimally invasive valve surgery.

    PubMed

    LaPietra, Angelo; Santana, Orlando; Mihos, Christos G; DeBeer, Steven; Rosen, Gerald P; Lamas, Gervasio A; Lamelas, Joseph

    2014-07-01

    Minimally invasive valve surgery has been associated with increased cerebrovascular complications. Our objective was to evaluate the incidence of cerebrovascular accidents in patients undergoing minimally invasive valve surgery. We retrospectively reviewed all the minimally invasive valve surgery performed at our institution from January 2009 to June 2012. The operative times, lengths of stay, postoperative complications, and mortality were analyzed. A total of 1501 consecutive patients were identified. The mean age was 73 ± 13 years, and 808 patients (54%) were male. Of the 1501 patients, 206 (13.7%) had a history of a cerebrovascular accident, and 225 (15%) had undergone previous heart surgery. The procedures performed were 617 isolated aortic valve replacements (41.1%), 658 isolated mitral valve operations (43.8%), 6 tricuspid valve repairs (0.4%), 216 double valve surgeries (14.4%), and 4 triple valve surgeries (0.3%). Femoral cannulation was used in 1359 patients (90.5%) and central cannulation in 142 (9.5%). In 1392 patients (92.7%), the aorta was clamped, and in 109 (7.3%), the surgery was performed with the heart fibrillating. The median aortic crossclamp and cardiopulmonary bypass times were 86 minutes (interquartile range [IQR], 70-107) and 116 minutes (IQR, 96-143), respectively. The median intensive care unit length of stay was 47 hours (IQR, 29-74), and the median postoperative hospital length of stay was 7 days (IQR, 5-10). A total of 23 cerebrovascular accidents (1.53%) and 38 deaths (2.53%) had occurred at 30 days postoperatively. Minimally invasive valve surgery was associated with an acceptable stroke rate, regardless of the cannulation technique. Copyright © 2014 The American Association for Thoracic Surgery. Published by Mosby, Inc. All rights reserved.

  4. Orthography-Induced Length Contrasts in the Second Language Phonological Systems of L2 Speakers of English: Evidence from Minimal Pairs.

    PubMed

    Bassetti, Bene; Sokolović-Perović, Mirjana; Mairano, Paolo; Cerni, Tania

    2018-06-01

    Research shows that the orthographic forms ("spellings") of second language (L2) words affect speech production in L2 speakers. This study investigated whether English orthographic forms lead L2 speakers to produce English homophonic word pairs as phonological minimal pairs. Targets were 33 orthographic minimal pairs, that is to say homophonic words that would be pronounced as phonological minimal pairs if orthography affects pronunciation. Word pairs contained the same target sound spelled with one letter or two, such as the /n/ in finish and Finnish (both /'fɪnɪʃ/ in Standard British English). To test for effects of length and type of L2 exposure, we compared Italian instructed learners of English, Italian-English late bilinguals with lengthy naturalistic exposure, and English natives. A reading-aloud task revealed that Italian speakers of English L2 produce two English homophonic words as a minimal pair distinguished by different consonant or vowel length, for instance producing the target /'fɪnɪʃ/ with a short [n] or a long [nː] to reflect the number of consonant letters in the spelling of the words finish and Finnish. Similar effects were found on the pronunciation of vowels, for instance in the orthographic pair scene-seen (both /siːn/). Naturalistic exposure did not reduce orthographic effects, as effects were found both in learners and in late bilinguals living in an English-speaking environment. It appears that the orthographic form of L2 words can result in the establishment of a phonological contrast that does not exist in the target language. Results have implications for models of L2 phonological development.

  5. Quadratic Optimization in the Problems of Active Control of Sound

    NASA Technical Reports Server (NTRS)

    Loncaric, J.; Tsynkov, S. V.; Bushnell, Dennis M. (Technical Monitor)

    2002-01-01

    We analyze the problem of suppressing the unwanted component of a time-harmonic acoustic field (noise) on a predetermined region of interest. The suppression is rendered by active means, i.e., by introducing additional acoustic sources, called controls, that generate the appropriate anti-sound. Previously, we have obtained general solutions for active controls in both the continuous and discrete formulations of the problem. We have also obtained optimal solutions that minimize the overall absolute acoustic source strength of the active control sources. These optimal solutions happen to be particular layers of monopoles on the perimeter of the protected region. Mathematically, minimization of acoustic source strength is equivalent to minimization in the sense of L(sub 1). By contrast, in the current paper we formulate and study optimization problems that involve quadratic functions of merit. Specifically, we minimize the L(sub 2) norm of the control sources, and we consider both unconstrained and constrained minimization. The unconstrained L(sub 2) minimization is certainly the easiest problem to address numerically. On the other hand, the constrained approach allows one to analyze sophisticated geometries. In a special case, we compare our finite-difference optimal solutions to the continuous optimal solutions obtained previously using a semi-analytic technique. We also show that the optima obtained in the sense of L(sub 2) differ drastically from those obtained in the sense of L(sub 1).
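
    The unconstrained L2 case can be illustrated with a small linear-algebra sketch. The discretization below is made up (A, f and the sizes are illustrative only; the paper's actual operators come from the acoustics): A maps control-source strengths to the field on the protected region, and the minimum-norm solution of A g = -f is the discrete analogue of the L2-optimal control.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical discretization: A maps m control-source strengths to the
# acoustic field at n points of the protected region; f is the unwanted
# (noise) component there.
n, m = 5, 8
A = rng.standard_normal((n, m))
f = rng.standard_normal(n)

# Among all controls g with A g = -f (underdetermined, m > n), the
# pseudoinverse returns the one of minimal Euclidean (L2) norm.
g = -np.linalg.pinv(A) @ f

print(np.linalg.norm(A @ g + f))  # residual field on the protected region: ~0
```

    The same minimum-norm solution is returned by `np.linalg.lstsq` for underdetermined systems; constrained variants would add the geometric restrictions discussed in the abstract.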

  6. Shortest multiple disconnected path for the analysis of entanglements in two- and three-dimensional polymeric systems

    NASA Astrophysics Data System (ADS)

    Kröger, Martin

    2005-06-01

    We present an algorithm which returns a shortest path and the related number of entanglements for a given configuration of a polymeric system in 2 or 3 dimensions. Rubinstein and Helfand, and later Everaers et al., introduced a concept to extract primitive paths for dense polymeric melts made of linear chains (a multiple disconnected multibead 'path'), where each primitive path is defined as a path connecting the (space-fixed) ends of a polymer under the constraint of non-interpenetration (excluded volume) between primitive paths of different chains, such that the multiple disconnected path fulfills a minimization criterion. The present algorithm uses geometrical operations and provides a model-independent, efficient, approximate solution to this challenging problem. Primitive paths are treated as 'infinitely' thin (we further allow for finite thickness to model excluded volume), tensionless lines rather than multibead chains; excluded volume is taken into account without a force law. The present implementation allows one to construct a shortest multiple disconnected path (SP) for 2D systems (a polymeric chain within spherical obstacles) and an optimal SP for 3D systems (a collection of polymeric chains). The number of entanglements is then simply obtained from the SP either as the number of interior kinks, or from the average length of a line segment. Further, information about the structure and potentially also the dynamics of entanglements is immediately available from the SP. We apply the method to study the 'concentration' dependence of the degree of entanglement in phantom chain systems. Program summary. Title of program: Z. Catalogue number: ADVG. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADVG. Program obtainable from: CPC Program Library, Queen's University of Belfast, N. 
    Ireland. Computers for which the program is designed and others on which it has been tested: Silicon Graphics (Irix), Sun (Solaris), PC (Linux). Operating systems under which the program has been tested: UNIX, Linux. Programming language used: ANSI Fortran 77 and Fortran 90. Memory required to execute with typical data: 1 MByte. No. of lines in distributed program, including test data, etc.: 10 660. No. of bytes in distributed program, including test data, etc.: 119 551. Distribution format: tar.gz. Nature of physical problem: The problem is to obtain primitive paths substantiating a shortest multiple disconnected path (SP) for a given polymer configuration (chains of particles, with or without additional single particles as obstacles for the 2D case). Primitive paths are here defined as in [M. Rubinstein, E. Helfand, J. Chem. Phys. 82 (1985) 2477; R. Everaers, S.K. Sukumaran, G.S. Grest, C. Svaneborg, A. Sivasubramanian, K. Kremer, Science 303 (2004) 823] as the shortest line (path) respecting 'topological' constraints (from neighboring polymers or point obstacles) between the ends of polymers. There is a unique solution for the 2D case. For the 3D case it is unique if we construct a primitive path of a single chain embedded within fixed line obstacles [J.S.B. Mitchell, Geometric shortest paths and network optimization, in: J.-R. Sack, J. Urrutia (Eds.), Handbook of Computational Geometry, Elsevier, Amsterdam, 2000, pp. 633-701]. For a large 3D configuration made of several chains, 'short' means the Euclidean shortest multiple disconnected path (SP), where primitive paths are constructed for all chains simultaneously. While the latter problem, in general, does not possess a unique solution, the algorithm must return a locally optimal solution, robust against minor displacements of the disconnected path and chain re-labeling. 
    The problem is solved if the number of kinks (or entanglements Z), explicitly deduced from the SP, is quite insensitive to the exact conformation of the SP, which allows one to estimate Z with a small error. Method of solution: Primitive paths are constructed from the given polymer configuration (a non-shortest multiple disconnected path, including obstacles, if present) by first replacing each polymer contour by a line with a number of 'kinks' (beads, nodes) and 'segments' (edges). To obtain primitive paths, defined to be uncrossable by any other objects (neighboring primitive paths, line or point obstacles), the algorithm minimizes the length of all primitive paths consecutively, until a final minimum Euclidean length of the SP is reached. Fast geometric operations rather than dynamical methods are used to minimize the contour lengths of the primitive paths. Neighbor lists are used to keep track of potentially intersecting segments of other chains. Periodic boundary conditions are employed. A small finite line thickness is used in order to make sure that entanglements are not 'lost' due to the finite precision of the representation of numbers. Restrictions on the complexity of the problem: For a single chain embedded within fixed line or point obstacles, the algorithm returns the exact SP. For more complex problems, the algorithm returns a locally optimal SP. Except for exotic, probably rare, configurations it turns out that different locally optimal SPs possess an essentially identical number of nodes. In general, the problem of constructing the SP is known to be NP-hard [J.S.B. Mitchell, Geometric shortest paths and network optimization, in: J.-R. Sack, J. Urrutia (Eds.), Handbook of Computational Geometry, Elsevier, Amsterdam, 2000, pp. 633-701], and we offer a solution which should suffice to analyze physical problems and which gives an estimate of the precision and uniqueness of the result (from a standard deviation obtained by varying the parameter cyclicswitch). 
    The program is not restricted to handling systems for which segment lengths of the SP exceed half the box size. Typical running time: Typical running times are approximately two orders of magnitude shorter than those needed for a corresponding molecular dynamics approach, and scale mostly linearly with system size. We provide a benchmark table.
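
    A heavily simplified 2D sketch of the length-minimization idea follows (node removal only, point obstacles, one chain). The published algorithm Z additionally handles finite thickness between mutually uncrossable chains and periodic boundaries, so this is an illustration of the concept, not a reimplementation:

```python
import math

def seg_point_dist(p, q, o):
    """Distance from point o to segment pq."""
    px, py = p; qx, qy = q; ox, oy = o
    dx, dy = qx - px, qy - py
    L2 = dx * dx + dy * dy
    t = 0.0 if L2 == 0 else max(0.0, min(1.0, ((ox - px) * dx + (oy - py) * dy) / L2))
    cx, cy = px + t * dx, py + t * dy
    return math.hypot(ox - cx, oy - cy)

def shorten(path, obstacles, thickness=0.1):
    """Toy 'rubber band' shortening: drop an interior node whenever the
    shortcut between its neighbours stays clear of every point obstacle."""
    path = list(path)
    changed = True
    while changed:
        changed = False
        i = 1
        while i < len(path) - 1:
            clear = all(seg_point_dist(path[i - 1], path[i + 1], o) > thickness
                        for o in obstacles)
            if clear:
                del path[i]          # shortcut is admissible: remove the kink
                changed = True
            else:
                i += 1
    return path

# no obstacles: the path contracts to a straight line (zero interior kinks)
p = shorten([(0, 0), (0.3, 0.8), (0.7, 0.9), (1, 0)], obstacles=[])
print(len(p) - 2)   # 0
# an obstacle close to the chord blocks the shortcut: one kink survives,
# i.e. one 'entanglement' in the sense of the abstract
p = shorten([(0, 0), (0.5, 0.8), (1, 0)], obstacles=[(0.5, 0.05)])
print(len(p) - 2)   # 1
```

    Counting the interior kinks of the shortened path is the same readout of the entanglement number Z that the abstract describes for the full SP.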

  7. Energy minimization on manifolds for docking flexible molecules

    PubMed Central

    Mirzaei, Hanieh; Zarbafian, Shahrooz; Villar, Elizabeth; Mottarella, Scott; Beglov, Dmitri; Vajda, Sandor; Paschalidis, Ioannis Ch.; Vakili, Pirooz; Kozakov, Dima

    2015-01-01

    In this paper we extend a recently introduced rigid body minimization algorithm, defined on manifolds, to the problem of minimizing the energy of interacting flexible molecules. The goal is to integrate moving the ligand in six dimensional rotational/translational space with internal rotations around rotatable bonds within the two molecules. We show that adding rotational degrees of freedom to the rigid moves of the ligand results in an overall optimization search space that is a manifold to which our manifold optimization approach can be extended. The effectiveness of the method is shown for three different docking problems of increasing complexity. First we minimize the energy of fragment-size ligands with a single rotatable bond as part of a protein mapping method developed for the identification of binding hot spots. Second, we consider energy minimization for docking a flexible ligand to a rigid protein receptor, an approach frequently used in existing methods. In the third problem we account for flexibility in both the ligand and the receptor. Results show that minimization using the manifold optimization algorithm is substantially more efficient than minimization using a traditional all-atom optimization algorithm while producing solutions of comparable quality. In addition to the specific problems considered, the method is general enough to be used in a large class of applications such as docking multidomain proteins with flexible hinges. The code is available under open source license (at http://cluspro.bu.edu/Code/Code_Rigtree.tar), and with minimal effort can be incorporated into any molecular modeling package. PMID:26478722
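
    The manifold idea can be illustrated for the rotational part alone. This is a toy sketch, not the authors' code: the objective (aligning a rotated vector with a target direction), step size, and Rodrigues-formula retraction are all assumptions chosen to show why iterates stay exactly on the manifold.

```python
import numpy as np

def rodrigues(w):
    """Rotation matrix exp([w]_x) via Rodrigues' formula."""
    th = np.linalg.norm(w)
    if th < 1e-12:
        return np.eye(3)
    k = w / th
    K = np.array([[0, -k[2], k[1]],
                  [k[2], 0, -k[0]],
                  [-k[1], k[0], 0]])
    return np.eye(3) + np.sin(th) * K + (1 - np.cos(th)) * (K @ K)

# Minimize ||R v - t||^2 over R in SO(3) by moving along the manifold:
# every iterate is an exact rotation, the key property that motivates
# manifold methods for rigid-body moves in docking.
v = np.array([1.0, 0.0, 0.0])
t = np.array([0.0, 1.0, 0.0])
R = np.eye(3)
step = 0.5
for _ in range(200):
    u = R @ v
    w = np.cross(u, t)            # descent direction in the tangent space
    R = rodrigues(step * w) @ R   # retraction: stay on SO(3)

print(np.linalg.norm(R @ v - t))  # ~0, and R remains orthogonal throughout
```

    In contrast, an unconstrained update of the nine matrix entries would leave SO(3) at every step and require re-orthogonalization.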

  8. Fast Algorithms for Earth Mover’s Distance Based on Optimal Transport and L1 Type Regularization I

    DTIC Science & Technology

    2016-09-01

    ...in which EMD can be reformulated as a familiar homogeneous degree-1 regularized minimization. The new minimization problem is very similar to problems ... EMD, which is also named the Monge problem or the Wasserstein metric, plays a central role in many applications, including image processing and computer vision.

  9. The Moderating Role of Genetics: The Effect of Length of Hospitalization on Children’s Internalizing and Externalizing Behaviors

    PubMed Central

    Benish-Weisman, Maya; Kerem, Eitan; Knafo-Noam, Ariel; Belsky, Jay

    2015-01-01

    The study considered individual differences in children’s ability to adjust to hospitalization and found the length of hospitalization to be related to adaptive psychological functioning for some children. Applying the theoretical framework of three competing models of gene-X-environment interactions (diathesis–stress, differential susceptibility, and vantage sensitivity), the study examined the moderating effect of genetics (DRD4) on the relationship between the length of hospitalization and internalizing and externalizing problems. Mothers reported on children’s hospitalization background and conduct problems (externalizing) and emotional symptoms (internalizing), using subscales of the 25-item Strength and Difficulties Questionnaire (1). Data on both hospitalization and genetics were available for 65 children, 57% of whom were females, with an average age of 61.4 months (SD = 2.3). The study found length of hospitalization did not predict emotional and behavior problems per se, but the interaction with genetics was significant; the length of hospitalization was related to diminished levels of internalizing and externalizing problems only for children with the 7R allele (the sensitive variant). The vantage sensitivity model best accounted for how the length of hospitalization and genetics related to children’s internalizing and externalizing problems. PMID:26347661

  10. Comparison of Two Methods for Estimating Adjustable One-Point Cane Length in Community-Dwelling Older Adults.

    PubMed

    Camara, Camila Thais Pinto; de Freitas, Sandra Maria Sbeghen Ferreira; de Lima, Waléria Paixão; Lima, Camila Astolphi; Amorim, César Ferreira; Perracini, Monica Rodrigues

    2017-01-01

    Our aim was to estimate the inter-observer reliability, test-retest reliability, anthropometric and biomechanical adequacy, and minimal detectable change when measuring the length of single-point adjustable canes in community-dwelling older adults. The study included 112 participants: men and women, aged 60 years and over, who were attending an outpatient community health centre. An exploratory study design was used. Participants underwent two assessments within the same day by two independent observers, and by the same observer at an interval of 15-45 days. Two measures were used to establish the length of a single-point adjustable cane: the distance from the distal wrist crease to the floor (WF) and the distance from the top of the greater trochanter of the femur to the floor (TF). Each individual was fitted according to these two measures, and the elbow flexion angle was measured. Inter-observer reliability and test-retest reliability were high for both the TF (ICC(3,1) = 0.918 and ICC(2,1) = 0.935) and WF measures (ICC(3,1) = 0.967 and ICC(2,1) = 0.960). Only 1% of the individuals kept an elbow flexion angle within the standard recommendation of 30° ± 10° when the cane length was determined by the TF measure, compared with 30% of the participants when the cane was determined by the WF measure. The minimal detectable cane length change was 2.2 cm. Our results suggest that, even though both measures are reliable, cane length determined by the WF distance is more appropriate for keeping the elbow flexion angle within the standard recommendation. The minimal detectable change corresponds to approximately one hole in the cane adjustment. Copyright © 2015 John Wiley & Sons, Ltd.
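
    For context, one common convention for the minimal-detectable-change statistic is MDC95 = 1.96 × √2 × SEM, with SEM = SD × √(1 − ICC). Whether the study used exactly this formula, and the SD value below, are assumptions for illustration:

```python
import math

def mdc95(sd, icc):
    """Minimal detectable change at 95% confidence under the common
    convention MDC95 = 1.96 * sqrt(2) * SEM, SEM = SD * sqrt(1 - ICC).
    (Assumed convention; the paper may compute it differently.)"""
    sem = sd * math.sqrt(1.0 - icc)
    return 1.96 * math.sqrt(2.0) * sem

# illustrative numbers only: a hypothetical SD of 4.0 cm with ICC = 0.96
print(round(mdc95(sd=4.0, icc=0.96), 2))   # -> 2.22
```

    A perfectly reliable measure (ICC = 1) gives MDC = 0; lower reliability inflates the change a clinician must observe before it exceeds measurement noise.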

  11. Smoothed low rank and sparse matrix recovery by iteratively reweighted least squares minimization.

    PubMed

    Lu, Canyi; Lin, Zhouchen; Yan, Shuicheng

    2015-02-01

    This paper presents a general framework for solving low-rank and/or sparse matrix minimization problems, which may involve multiple nonsmooth terms. The iteratively reweighted least squares (IRLS) method is a fast solver, which smooths the objective function and minimizes it by alternately updating the variables and their weights. However, the traditional IRLS can only solve sparse-only or low-rank-only minimization problems with a squared loss or an affine constraint. This paper generalizes IRLS to solve joint/mixed low-rank and sparse minimization problems, which are essential formulations for many tasks. As a concrete example, we solve the Schatten-p norm and l2,q-norm regularized low-rank representation problem by IRLS, and theoretically prove that the derived solution is a stationary point (globally optimal if p,q ≥ 1). Our convergence proof of IRLS is more general than previous ones, which depend on the special properties of the Schatten-p norm and l2,q-norm. Extensive experiments on both synthetic and real data sets demonstrate that our IRLS is much more efficient.
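
    The IRLS idea can be sketched for the sparse (l1-type) case alone: minimize ||x||_1 subject to Ax = b by alternating a weighted least-squares solve with a weight update (a FOCUSS-style iteration; the paper's joint low-rank plus sparse solver generalizes this scheme, and the problem sizes below are made up):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 6))
x_true = np.zeros(6)
x_true[2] = 1.5                            # 1-sparse ground truth
b = A @ x_true

x = np.linalg.lstsq(A, b, rcond=None)[0]   # start from the min-norm solution
eps = 1.0
for _ in range(40):
    # inverse weights: smoothed |x_i|; each step solves
    # minimize sum_i x_i^2 / (|x_i| + eps)  subject to  A x = b
    D = np.diag(np.abs(x) + eps)
    x = D @ A.T @ np.linalg.solve(A @ D @ A.T, b)
    eps = max(eps * 0.5, 1e-6)             # gradually sharpen the smoothing

print(int(np.argmax(np.abs(x))))           # dominant entry (index 2 when the
                                           # sparse support is recovered)
```

    Each iterate stays exactly feasible; the reweighting progressively concentrates the mass on the true support, which is the mechanism the abstract's smoothed solver exploits.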

  12. Multiobjective GAs, quantitative indices, and pattern classification.

    PubMed

    Bandyopadhyay, Sanghamitra; Pal, Sankar K; Aruna, B

    2004-10-01

    The concept of multiobjective optimization (MOO) has been integrated with variable-length chromosomes for the development of a nonparametric genetic classifier which can overcome problems faced by single-objective classifiers, such as overfitting/overlearning and ignoring smaller classes. The classifier can efficiently approximate any kind of linear and/or nonlinear class boundaries of a data set using an appropriate number of hyperplanes. While designing the classifier, the aim is to simultaneously minimize the number of misclassified training points and the number of hyperplanes, and to maximize the product of class-wise recognition scores. The concepts of a validation set (in addition to training and test sets) and a validation functional are introduced in the multiobjective classifier for selecting a solution from the set of nondominated solutions provided by the MOO algorithm. This genetic classifier incorporates elitism and some domain-specific constraints in the search process, and is called the CEMOGA-Classifier (constrained elitist multiobjective genetic algorithm based classifier). Two new quantitative indices, namely, purity and minimal spacing, are developed for evaluating the performance of different MOO techniques. These are used, along with classification accuracy, the required number of hyperplanes and the computation time, to compare the CEMOGA-Classifier with other related ones.
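
    The nondominated set from which the validation step selects can be computed in a few lines. This is a generic MOO helper (all objectives minimized), not the CEMOGA code, and the example objective pairs are made up:

```python
# Extract the nondominated (Pareto-optimal) solutions when every
# objective is to be minimized.
def nondominated(points):
    def dominates(a, b):
        return (all(x <= y for x, y in zip(a, b)) and
                any(x < y for x, y in zip(a, b)))
    return [p for p in points
            if not any(dominates(q, p) for q in points if q is not p)]

# hypothetical objective vectors: (misclassified points, hyperplane count)
sols = [(10, 3), (7, 5), (12, 2), (7, 4), (9, 9)]
print(sorted(nondominated(sols)))   # [(7, 4), (10, 3), (12, 2)]
```

    Indices such as the paper's purity and minimal spacing are then evaluated on exactly this kind of nondominated front.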

  13. Averaging of random walks and shift-invariant measures on a Hilbert space

    NASA Astrophysics Data System (ADS)

    Sakbaev, V. Zh.

    2017-06-01

    We study random walks in a Hilbert space H and representations using them of solutions of the Cauchy problem for differential equations whose initial conditions are numerical functions on H. We construct a finitely additive analogue of the Lebesgue measure: a nonnegative finitely additive measure λ that is defined on a minimal subset ring of an infinite-dimensional Hilbert space H containing all infinite-dimensional rectangles with absolutely converging products of the side lengths and is invariant under shifts and rotations in H. We define the Hilbert space H of equivalence classes of complex-valued functions on H that are square integrable with respect to a shift-invariant measure λ. Using averaging of the shift operator in H over random vectors in H with a distribution given by a one-parameter semigroup (with respect to convolution) of Gaussian measures on H, we define a one-parameter semigroup of contracting self-adjoint transformations on H, whose generator is called the diffusion operator. We obtain a representation of solutions of the Cauchy problem for the Schrödinger equation whose Hamiltonian is the diffusion operator.

  14. Optimum Design of a Ceramic Tensile Creep Specimen Using a Finite Element Method

    PubMed Central

    Wang, Z.; Chiang, C. K.; Chuang, T.-J.

    1997-01-01

    An optimization procedure for designing a ceramic tensile creep specimen to minimize stress concentration is carried out using a finite element method. The effects of pin loading and the specimen geometry are considered in the stress distribution calculations. A growing contact zone between the pin and the specimen has been incorporated into the problem solution scheme as the load is increased to its full value. The optimization procedures are performed for the specimen, and all design variables, including pinhole location, pinhole diameter, head width, neck radius, and gauge length, are determined based on a set of constraints imposed on the problem. In addition, for the purpose of assessing the possibility of delayed failure outside the gauge section, power-law creep in the tensile specimen is considered in the analysis. Using a particular grade of advanced ceramics as an example, it is found that if the specimen is not designed properly, significant creep deformation and stress redistribution may occur in the head of the specimen, resulting in undesirable (delayed) head failure of the specimen during the creep test. PMID:27805126

  15. Optimal placement of water-lubricated rubber bearings for vibration reduction of flexible multistage rotor systems

    NASA Astrophysics Data System (ADS)

    Liu, Shibing; Yang, Bingen

    2017-10-01

    Flexible multistage rotor systems with water-lubricated rubber bearings (WLRBs) have a variety of engineering applications. Filling a technical gap in the literature, this effort proposes a method of optimal bearing placement that minimizes the vibration amplitude of a WLRB-supported flexible rotor system with a minimum number of bearings. In the development, a new model of WLRBs and a distributed transfer function formulation are used to define a mixed continuous-and-discrete optimization problem. To deal with the case of an uncertain number of WLRBs in rotor design, a virtual bearing method is devised. Solution of the optimization problem by a real-coded genetic algorithm yields the locations and lengths of the water-lubricated rubber bearings, by which the prescribed operational requirements for the rotor system are satisfied. The proposed method is applicable either to the preliminary design of a new rotor system whose number of bearings is not known beforehand, or to the redesign of an existing rotor system with a given number of bearings. Numerical examples show that the proposed optimal bearing placement is efficient, accurate and versatile in different design cases.
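
    A minimal real-coded GA of the kind the paper employs can be sketched as follows. The "vibration amplitude" objective below is a made-up quadratic stand-in for the transfer-function rotor model, the optimal bearing locations 0.25 and 0.75 along a unit-length rotor are invented, and all GA settings are assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

def amplitude(x):
    # toy surrogate for the rotor's vibration amplitude as a function of
    # two bearing locations (hypothetical optimum at [0.25, 0.75])
    return (x[0] - 0.25) ** 2 + (x[1] - 0.75) ** 2

def real_coded_ga(f, dim=2, pop=40, gens=150):
    P = rng.random((pop, dim))                    # bearing locations in [0, 1]
    for _ in range(gens):
        fit = np.array([f(p) for p in P])
        order = np.argsort(fit)
        elite = P[order[:2]].copy()               # elitism: keep the best two
        children = [*elite]
        while len(children) < pop:
            i, j = rng.integers(pop, size=2)      # binary tournament
            a = P[i] if fit[i] < fit[j] else P[j]
            i, j = rng.integers(pop, size=2)
            b = P[i] if fit[i] < fit[j] else P[j]
            w = rng.random(dim)                   # arithmetic (blend) crossover
            child = w * a + (1 - w) * b
            child += rng.normal(0, 0.02, dim)     # Gaussian mutation
            children.append(np.clip(child, 0, 1))
        P = np.array(children)
    fit = np.array([f(p) for p in P])
    return P[np.argmin(fit)]

best = real_coded_ga(amplitude)
print(np.round(best, 2))                          # near [0.25, 0.75]
```

    In the paper the fitness evaluation would instead call the distributed transfer function rotor model, with virtual bearings handling the unknown bearing count.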

  16. Banach spaces that realize minimal fillings

    NASA Astrophysics Data System (ADS)

    Bednov, B. B.; Borodin, P. A.

    2014-04-01

    It is proved that a real Banach space realizes minimal fillings for all its finite subsets (a shortest network spanning a fixed finite subset always exists and has the minimum possible length) if and only if it is a predual of L_1. The spaces L_1 are characterized in terms of Steiner points (medians). Bibliography: 25 titles.
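
    As a Euclidean illustration of the Steiner-point/median notion (an aside, not the Banach-space result itself), the point minimizing the summed distance to a finite set can be approximated by Weiszfeld's fixed-point iteration:

```python
import numpy as np

def weiszfeld(points, iters=200):
    """Geometric median: the point minimizing the summed Euclidean
    distance to the sample, via Weiszfeld's fixed-point iteration."""
    P = np.asarray(points, dtype=float)
    x = P.mean(axis=0)
    for _ in range(iters):
        d = np.linalg.norm(P - x, axis=1)
        if np.any(d < 1e-12):        # iterate landed on a data point
            break
        w = 1.0 / d
        x = (P * w[:, None]).sum(axis=0) / w.sum()
    return x

# for an equilateral triangle the median coincides with the centroid
tri = [(0.0, 0.0), (1.0, 0.0), (0.5, np.sqrt(3) / 2)]
print(weiszfeld(tri))   # -> approximately (0.5, 0.289)
```

    In the spaces characterized by the paper, the analogous minimizer (a Steiner point/median in the Banach norm) always exists for every finite subset.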

  17. The detection and stabilisation of limit cycle for deterministic finite automata

    NASA Astrophysics Data System (ADS)

    Han, Xiaoguang; Chen, Zengqiang; Liu, Zhongxin; Zhang, Qing

    2018-04-01

    In this paper, the topological structure properties of deterministic finite automata (DFA), under the framework of the semi-tensor product of matrices, are investigated. First, the dynamics of DFA are converted into a new algebraic form, as a discrete-time linear system, by means of Boolean algebra. Using this algebraic description, an approach to calculating the limit cycles of different lengths is given. Second, we present two fundamental concepts, namely, the domain of attraction of a limit cycle and the prereachability set. Based on the prereachability set, an explicit method for calculating the domain of attraction of a limit cycle is completely characterised. Third, we define the globally attractive limit cycle, and then a necessary and sufficient condition for verifying whether all state trajectories of a DFA enter a given limit cycle in a finite number of transitions is given. Fourth, the problem of whether a DFA can be stabilised to a limit cycle by a state feedback controller is discussed. Criteria for limit-cycle stabilisation are established. All state feedback controllers which implement the minimal-length trajectories from each state to the limit cycle are obtained using the proposed algorithm. Finally, an illustrative example is presented to show the theoretical results.
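
    For intuition: once the input is fixed, the DFA dynamics reduce to a map on a finite state set, and limit cycles together with their domains of attraction can be enumerated directly. This plain-Python sketch (with an invented six-state example) illustrates the concepts; the paper instead works with the semi-tensor-product matrix form:

```python
# Enumerate limit cycles and domains of attraction of a finite map.
def limit_cycles_and_basins(step, states):
    cycles, basin = [], {}
    for s in states:
        seen, path = {}, []
        x = s
        while x not in seen and x not in basin:
            seen[x] = len(path)
            path.append(x)
            x = step(x)
        if x in seen:                        # closed a new cycle
            cyc = frozenset(path[seen[x]:])
            if cyc not in cycles:
                cycles.append(cyc)
            target = cyc
        else:                                # joined an already-classified orbit
            target = basin[x]
        for y in path:
            basin[y] = target                # domain of attraction
    return cycles, basin

# states 0..5 with 0->1->2->3->1 (cycle {1,2,3}), 4->4, 5->4
f = {0: 1, 1: 2, 2: 3, 3: 1, 4: 4, 5: 4}.get
cycles, basin = limit_cycles_and_basins(f, range(6))
print([sorted(c) for c in cycles])                             # [[1, 2, 3], [4]]
print(sorted(s for s in range(6) if basin[s] == cycles[0]))    # [0, 1, 2, 3]
```

    Since every finite deterministic map eventually cycles, each state belongs to exactly one domain of attraction; a cycle is globally attractive precisely when its basin is the whole state set.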

  18. The analytic solution of the firm's cost-minimization problem with box constraints and the Cobb-Douglas model

    NASA Astrophysics Data System (ADS)

    Bayón, L.; Grau, J. M.; Ruiz, M. M.; Suárez, P. M.

    2012-12-01

    One of the most well-known problems in the field of Microeconomics is the Firm's Cost-Minimization Problem. In this paper we establish the analytical expression for the cost function using the Cobb-Douglas model and considering maximum constraints for the inputs. Moreover we prove that it belongs to the class C1.
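
    For reference, the classical interior solution that the paper extends to box constraints can be sketched as follows (two inputs and standard notation assumed; the paper's contribution is the case where maximum constraints on the inputs bind):

```latex
% Interior (no box constraints) Cobb-Douglas cost minimization:
\min_{x_1, x_2 \ge 0} \; w_1 x_1 + w_2 x_2
\quad \text{s.t.} \quad x_1^{\alpha} x_2^{\beta} = y .
% The first-order conditions give  x_2 = (\beta w_1 / \alpha w_2)\, x_1 ,
% and substituting into the constraint yields the cost function
C(w_1, w_2, y) \;=\; (\alpha + \beta)
\left(\frac{w_1}{\alpha}\right)^{\frac{\alpha}{\alpha+\beta}}
\left(\frac{w_2}{\beta}\right)^{\frac{\beta}{\alpha+\beta}}
y^{\frac{1}{\alpha+\beta}} ,
% which is C^1 in (w_1, w_2, y) on the interior of its domain.
```

    When a box constraint on an input binds, the constrained input is fixed at its bound and the minimization proceeds over the remaining input, which is what produces the piecewise analytical expression studied in the paper.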

  19. Walkway Length Determination for Steady State Walking in Young and Older Adults

    ERIC Educational Resources Information Center

    Macfarlane, Pamela A.; Looney, Marilyn A.

    2008-01-01

    The primary purpose of this study was to determine acceleration (AC) and deceleration (DC) distances that would accommodate young and older adults walking at their preferred and fast speeds. A secondary purpose was to determine the minimal walkway length needed to record six steady state (SS) steps (three full gait cycles) for younger and older…

  20. Electrophoresis of semiflexible heteropolymers and the ``hydrodynamic Kuhn length''

    NASA Astrophysics Data System (ADS)

    Chubynsky, Mykyta V.; Slater, Gary W.

    Semiflexible polymers, such as DNA, are rodlike for short lengths and coil-like for long lengths. For purely geometric properties, such as the end-to-end distance, the crossover between these two behaviors occurs when the polymer length is on the order of the Kuhn length. On the other hand, for the hydrodynamic friction coefficient it is easy to see by comparing the expressions for a rod and a coil that the crossover should occur at the polymer length, termed by us the hydrodynamic Kuhn length, which is larger than the ordinary Kuhn length by a logarithmic factor that can be quite significant. We show that for the problem of electrophoresis of a heteropolymer consisting of several blocks of (in general) different stiffnesses, both of these length scales can be important depending on the details of the problem.

  1. Measurement and Comparison of Mechanical Properties of Nitinol Stents

    NASA Astrophysics Data System (ADS)

    Hanus, Josef; Zahora, Jiri

    2005-01-01

    The self-expandable Nitinol stents or stentgrafts are typically used for minimally invasive treatment of stenoses and aneurysms in the cardiovascular system. Minimal traumatisation of the patient and shorter hospitalization times are typical advantages of these methods. More than ten years of experience has also yielded important information about the performance of stents in interaction with the biological system and the possible problems related to it. Leakage or shifting of the stent are typical disadvantages, which can be traced, among other causes, to the construction of the stent. The problem is that the mechanical properties, dimensions and dynamical properties of the stent do not exactly correspond to the properties of the vessel, or generally of the tissue, where the stent is introduced. The measurement, description and comparison of the relations between the mechanical properties of stents and tissues is one possible way to minimize these disadvantages. The original computer-controlled measuring system we developed allows the measurement of the mechanical properties of stents, the measurement of strain-stress curves, and the simulation of the interaction of stent and vessel under exactly defined hemodynamic conditions. We measured and compared the mechanical parameters of different self-expandable Nitinol stents, which differed in geometry (radius and length), in type of construction (number of branches and rise of winding) and in the diameter of the wire used. The results of the measurements confirmed the theoretical assumption that the diameter of the Nitinol wire significantly influences the rigidity and the compressibility of the stent. A compromise must be found between the required rigidity of the stent and the minimal size of the delivery system. 
    The exact description of the relation between the mechanical properties and the geometry and construction of the stents makes it possible to design a stent to fit the patient, and this approach is expected to improve the efficiency of treatment. The results of the measurements are also necessary for the design and identification of the parameters of models of the stents.

  2. Genetic similarity between Taenia solium cysticerci collected from the two distant endemic areas in North and North East India.

    PubMed

    Sharma, Monika; Devi, Kangjam Rekha; Sehgal, Rakesh; Narain, Kanwar; Mahanta, Jagadish; Malla, Nancy

    2014-01-01

    Taenia solium taeniasis/cysticercosis is a major public health problem in developing countries. This study reports a genotypic analysis of T. solium cysticerci collected from two endemic areas of North (Chandigarh) and North East India (Dibrugarh), based on sequencing of the mitochondrial cytochrome c oxidase subunit 1 (cox1) gene. Variation in the cox1 sequences of samples collected from these two geographical regions, located 2585 km apart, was minimal. Alignment of the nucleotide sequences with different species of Taenia showed similarity with the Asian genotype of T. solium. Among 50 isolates, 6 variant nucleotide positions (0.37% of the total length) were detected. These results suggest that the T. solium populations in these geographical areas are homogeneous. Copyright © 2013 Elsevier B.V. All rights reserved.

  3. High Performance Magnetic Bearings for Aero Applications

    NASA Technical Reports Server (NTRS)

    Allaire, P. E.; Knospe, C. R.; Williams, R. D.; Lewis, D. W.; Barrett, L. E.; Maslen, E. H.; Humphris, R. R.

    1997-01-01

    Several previous annual reports were written and numerous papers published on the topics of this grant; that work is not repeated here. Only the work completed in the final year of the grant is presented in this final report. The final-year effort concentrated on power loss measurements in magnetic bearing rotors. The effect of rotor power losses in magnetic bearings is very important for many applications. In some cases, these losses must be minimized to maximize the length of time the rotating machine can operate on a fixed energy or power supply. Examples include aircraft gas turbine engines, space devices, and energy storage flywheels. In other applications, the heat generated by the magnetic bearing must be removed. Excessive heating can be a significant problem in machines as diverse as large compressors, electric motors, textile spindles, and artificial heart pumps.

  4. Minimization of the root of a quadratic functional under a system of affine equality constraints with application to portfolio management

    NASA Astrophysics Data System (ADS)

    Landsman, Zinoviy

    2008-10-01

    We present an explicit closed-form solution of the problem of minimizing the root of a quadratic functional subject to a system of affine constraints. The result generalizes Z. Landsman, Minimization of the root of a quadratic functional under an affine equality constraint, J. Comput. Appl. Math. (2007, in press), where the optimization problem was solved under only one linear constraint. This is of interest for significant problems in financial economics, as well as for classes of feasibility and optimization problems which frequently occur in tomography and other fields. The results are illustrated on the problem of optimal portfolio selection, and the particular case in which the expected return of the portfolio is certain is discussed.
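For the purely quadratic special case, the structure of such closed-form solutions is easy to sketch: since the square root is monotone, minimizing the root of a positive quadratic form subject to Bx = c has the same minimizer as the constrained quadratic form itself, which a Lagrange-multiplier (KKT) linear system yields directly. The sketch below is an illustration of that special case only, not Landsman's general functional; the function name is hypothetical.

```python
import numpy as np

def min_quadratic_affine(Sigma, B, c):
    """Minimize x' Sigma x subject to B x = c via Lagrange multipliers.
    Because sqrt is monotone, the same x also minimizes sqrt(x' Sigma x).
    Stationarity gives 2*Sigma x + B' lam = 0, feasibility gives B x = c,
    so we solve the KKT system [2*Sigma  B'; B  0] [x; lam] = [0; c]."""
    n, m = Sigma.shape[0], B.shape[0]
    K = np.block([[2.0 * Sigma, B.T],
                  [B, np.zeros((m, m))]])
    rhs = np.concatenate([np.zeros(n), c])
    sol = np.linalg.solve(K, rhs)
    return sol[:n]  # discard the multipliers
```

In the portfolio illustration, Sigma plays the role of the covariance matrix and the rows of B encode budget and expected-return constraints; a minimum-variance portfolio under a budget constraint alone is the equally weighted one when Sigma is the identity.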

  5. Wave envelope technique for multimode wave guide problems

    NASA Technical Reports Server (NTRS)

    Hariharan, S. I.; Sudharsanan, S. I.

    1986-01-01

    A fast method for solving wave guide problems is proposed. In particular, the guide is considered to be inhomogeneous, allowing propagation of higher-order modes. Such problems have been handled successfully for acoustic wave propagation with a single mode and finite length. This paper extends the concept to electromagnetic wave guides with several modes and infinite length. The method is described and results of computations are presented.

  6. Minimally Invasive Thumb-sized Pterional Craniotomy for Surgical Clip Ligation of Unruptured Anterior Circulation Aneurysms.

    PubMed

    Deshaies, Eric M; Villwock, Mark R; Singla, Amit; Toshkezi, Gentian; Padalino, David J

    2015-08-11

    Less invasive surgical approaches for intracranial aneurysm clipping may reduce length of hospital stay, surgical morbidity, and treatment cost, and improve patient outcomes. We present our experience with a minimally invasive pterional approach for anterior circulation aneurysms performed in a major tertiary cerebrovascular center and compare the results with an age-matched dataset from the Nationwide Inpatient Sample (NIS). From August 2008 to December 2012, 22 elective aneurysm clippings on patients ≤55 years of age were performed by the same dual fellowship-trained cerebrovascular/endovascular neurosurgeon. One patient (4.5%) experienced transient post-operative complications. Eighteen of the 22 patients returned for follow-up imaging, and there were no recurrences over an average follow-up of 22 months. A search of the NIS database from 2008 to 2010, also restricted to patients ≤55 years of age, yielded 1,341 hospitalizations for surgical clip ligation of unruptured cerebral aneurysms. Inpatient length of stay and hospital charges at our institution using the minimally invasive thumb-sized pterional technique were nearly half those in the NIS (length of stay: 3.2 vs. 5.7 days; hospital charges: $52,779 vs. $101,882). The minimally invasive thumb-sized pterional craniotomy allows good exposure of unruptured small and medium-sized supraclinoid anterior circulation aneurysms. Cerebrospinal fluid drainage from key subarachnoid cisterns and constant bimanual microsurgical techniques avoid the need for retractors, which can cause contusions, localized venous infarctions, and post-operative cerebral edema at the retractor sites. This set of techniques has afforded our patients a shorter hospital stay at a lower cost compared to the national average.

  7. Deterministic methods for multi-control fuel loading optimization

    NASA Astrophysics Data System (ADS)

    Rahman, Fariz B. Abdul

    We have developed a multi-control fuel loading optimization code for pressurized water reactors based on deterministic methods. The objective is to flatten the fuel burnup profile, which maximizes overall energy production. The optimal control problem is formulated using the method of Lagrange multipliers and the direct adjoining approach for treatment of the inequality power peaking constraint. The optimality conditions are derived for a multi-dimensional multi-group optimal control problem via calculus of variations. Due to the Hamiltonian having a linear control, our optimal control problem is solved using the gradient method to minimize the Hamiltonian and a Newton step formulation to obtain the optimal control. We are able to satisfy the power peaking constraint during depletion with the control at beginning of cycle (BOC) by building the proper burnup path forward in time and utilizing the adjoint burnup to propagate the information back to the BOC. Our test results show that we are able to achieve our objective and satisfy the power peaking constraint during depletion using either the fissile enrichment or burnable poison as the control. Our fuel loading designs show an increase of 7.8 equivalent full power days (EFPDs) in cycle length compared with 517.4 EFPDs for the AP600 first cycle.

  8. Route selection by rats and humans in a navigational traveling salesman problem.

    PubMed

    Blaser, Rachel E; Ginchansky, Rachel R

    2012-03-01

    Spatial cognition is typically examined in non-human animals from the perspective of learning and memory. For this reason, spatial tasks are often constrained by the time necessary for training or the capacity of the animal's short-term memory. A spatial task with limited learning and memory demands could allow for more efficient study of some aspects of spatial cognition. The traveling salesman problem (TSP), used to study human visuospatial problem solving, is a simple task with modifiable learning and memory requirements. In the current study, humans and rats were characterized in a navigational version of the TSP. Subjects visited each of 10 baited targets in any sequence from a set starting location. Unlike similar experiments, the roles of learning and memory were purposely minimized; all targets were perceptually available, no distracters were used, and each configuration was tested only once. The task yielded a variety of behavioral measures, including target revisits and omissions, route length, and frequency of transitions between each pair of targets. Both humans and rats consistently chose routes that were more efficient than chance, but less efficient than optimal, and generally less efficient than routes produced by the nearest-neighbor strategy. We conclude that the TSP is a useful and flexible task for the study of spatial cognition in human and non-human animals.
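The nearest-neighbor strategy against which the routes were benchmarked can be sketched in a few lines: from the current position, always visit the closest unvisited target next. This is a generic illustration (function names and the planar-coordinate representation are assumptions, not the authors' analysis code).

```python
import math

def nearest_neighbor_route(start, targets):
    """Greedy nearest-neighbor heuristic for a navigational TSP:
    repeatedly move to the closest unvisited target."""
    route = [start]
    remaining = list(targets)
    current = start
    while remaining:
        nxt = min(remaining, key=lambda p: math.dist(current, p))
        remaining.remove(nxt)
        route.append(nxt)
        current = nxt
    return route

def route_length(route):
    """Total Euclidean length of a route given as a list of points."""
    return sum(math.dist(a, b) for a, b in zip(route, route[1:]))
```

Comparing `route_length` of an observed route against the nearest-neighbor route (and against random and optimal orderings) reproduces the kind of efficiency comparison described above.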

  9. DQM: Decentralized Quadratically Approximated Alternating Direction Method of Multipliers

    NASA Astrophysics Data System (ADS)

    Mokhtari, Aryan; Shi, Wei; Ling, Qing; Ribeiro, Alejandro

    2016-10-01

    This paper considers decentralized consensus optimization problems where nodes of a network have access to different summands of a global objective function. Nodes cooperate to minimize the global objective by exchanging information with neighbors only. A decentralized version of the alternating directions method of multipliers (DADMM) is a common method for solving this category of problems. DADMM exhibits linear convergence rate to the optimal objective but its implementation requires solving a convex optimization problem at each iteration. This can be computationally costly and may result in large overall convergence times. The decentralized quadratically approximated ADMM algorithm (DQM), which minimizes a quadratic approximation of the objective function that DADMM minimizes at each iteration, is proposed here. The consequent reduction in computational time is shown to have minimal effect on convergence properties. Convergence still proceeds at a linear rate with a guaranteed constant that is asymptotically equivalent to the DADMM linear convergence rate constant. Numerical results demonstrate advantages of DQM relative to DADMM and other alternatives in a logistic regression problem.

  10. Minimal norm constrained interpolation. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Irvine, L. D.

    1985-01-01

    In computational fluid dynamics and in CAD/CAM, a physical boundary is usually known only discretely and most often must be approximated. An acceptable approximation preserves the salient features of the data such as convexity and concavity. In this dissertation, a smooth interpolant which is locally concave where the data are concave and locally convex where the data are convex is described. The interpolant is found by posing and solving a minimization problem whose solution is a piecewise cubic polynomial. The problem is solved indirectly by using the Peano kernel theorem to recast it into an equivalent minimization problem having the second derivative of the interpolant as its solution. This approach leads to a nonlinear system of equations. It is shown that Newton's method is an exceptionally attractive and efficient method for solving this nonlinear system. Examples of shape-preserving interpolants, as well as convergence results obtained using Newton's method, are shown. A FORTRAN program to compute these interpolants is listed. The problem of computing the interpolant of minimal norm from a convex cone in a normed dual space is also discussed. An extension of de Boor's work on minimal-norm unconstrained interpolation is presented.
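The Newton iteration used to solve such nonlinear systems can be sketched generically (a minimal illustration, not the dissertation's FORTRAN code; the function names are assumptions): at each step, solve the linearized system J(x) s = F(x) and update x ← x − s.

```python
import numpy as np

def newton_system(F, J, x0, tol=1e-10, max_iter=50):
    """Generic Newton's method for F(x) = 0, with Jacobian J(x).
    Each iteration solves the linear system J(x) s = F(x) and
    takes the full step x <- x - s."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        step = np.linalg.solve(J(x), F(x))
        x = x - step
        if np.linalg.norm(step) < tol:  # converged: step is tiny
            break
    return x
```

Near a root with a nonsingular Jacobian, this iteration converges quadratically, which is what makes it attractive for the interpolation systems described above.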

  11. libFLASM: a software library for fixed-length approximate string matching.

    PubMed

    Ayad, Lorraine A K; Pissis, Solon P P; Retha, Ahmad

    2016-11-10

    Approximate string matching is the problem of finding all factors of a given text that are at a distance at most k from a given pattern. Fixed-length approximate string matching is the problem of finding all factors of a text of length n that are at a distance at most k from any factor of length ℓ of a pattern of length m. There exist bit-vector techniques to solve the fixed-length approximate string matching problem in time [Formula: see text] and space [Formula: see text] under the edit and Hamming distance models, where w is the size of the computer word; as such these techniques are independent of the distance threshold k and the alphabet size. Fixed-length approximate string matching is a generalisation of approximate string matching and, hence, has numerous direct applications in computational molecular biology and elsewhere. We present and make available libFLASM, a free open-source C++ software library for solving fixed-length approximate string matching under both the edit and the Hamming distance models. Moreover, we describe how fixed-length approximate string matching is applied to solve real problems by incorporating libFLASM into established applications for multiple circular sequence alignment as well as single and structured motif extraction. Specifically, we describe how it can be used to improve the accuracy of multiple circular sequence alignment in terms of the inferred likelihood-based phylogenies; and we also describe how it is used to efficiently find motifs in molecular sequences representing regulatory or functional regions. A comparison of the library's performance with other algorithms shows that it is competitive, especially as the distance threshold increases. Fixed-length approximate string matching is a generalisation of the classic approximate string matching problem. We present libFLASM, a free open-source C++ software library for solving fixed-length approximate string matching.
The extensive experimental results presented here suggest that other applications could benefit from using libFLASM, and thus further maintenance and development of libFLASM is desirable.
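The problem definition can be made concrete with a brute-force reference implementation under the Hamming distance: report every pair of positions (i, j) such that the length-ℓ factor of the text starting at i is within distance k of the length-ℓ factor of the pattern starting at j. This naive O(nmℓ) sketch is for clarity only; libFLASM itself uses far faster bit-vector techniques, and the function name here is hypothetical.

```python
def flasm_hamming(text, pattern, ell, k):
    """Brute-force fixed-length approximate string matching (Hamming).
    Returns all (i, j) with Hamming(text[i:i+ell], pattern[j:j+ell]) <= k."""
    hits = []
    for i in range(len(text) - ell + 1):
        for j in range(len(pattern) - ell + 1):
            # count mismatching positions between the two length-ell factors
            d = sum(a != b for a, b in zip(text[i:i+ell], pattern[j:j+ell]))
            if d <= k:
                hits.append((i, j))
    return hits
```

Setting ℓ = len(pattern) recovers ordinary approximate string matching, which is the sense in which the fixed-length problem generalises the classic one.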

  12. Association between Hyperglycaemia with Neurological Outcomes Following Severe Head Trauma.

    PubMed

    Khajavikhan, Javaher; Vasigh, Aminolah; Kokhazade, Taleb; Khani, Ali

    2016-04-01

    Head trauma (HT) is a major cause of death and disability and an important public health problem. HT is also a main cause of hyperglycaemia, which can increase mortality. The aim of this study was to assess the correlation between hyperglycaemia and neurological outcomes following severe Traumatic Brain Injury (TBI). This descriptive, correlational study was carried out at the Imam Khomeini Hospital affiliated with Ilam University of Medical Sciences, Ilam, Iran, during March 2014-March 2015 on patients with severe TBI. Data were collected from the patient records on mortality, Intensive Care Unit (ICU) length of stay, hospital length of stay, admission GCS score, Injury Severity Score (ISS), mechanical ventilation, Ventilation Associated Pneumonia (VAP) and Acute Respiratory Distress Syndrome (ARDS). Random Blood Sugar (RBS) level on admission was recorded. Patients with diabetes mellitus were excluded, to minimize the overlap between acute stress hyperglycaemia and diabetic hyperglycaemia. Thirty-four (40%) of the patients were admitted with hyperglycaemia (RBS ≥ 200 mg/dl) over the study period. The mortality rate, length of ICU stay, length of hospital stay, ISS, and incidence of VAP & ARDS in patients with RBS ≥ 200 mg/dl were significantly higher than in patients with RBS < 200 mg/dl (p<0.05, p<0.001). A significant correlation was found between RBS and GCS on arrival, length of ICU stay, length of hospital stay, ISS, mechanical ventilation and VAP & ARDS (p<0.05, p<0.001). RBS is a predictive factor for ISS (p<0.05, OR: 1.36), GCS (p<0.001, OR: 1.69), mechanical ventilation (p<0.05, OR: 1.27), VAP & ARDS (p<0.001, OR: 1.68), length of ICU stay (p<0.001, OR: 1.87) and length of hospital stay (p<0.05, OR: 1.24). Hyperglycaemia after severe TBI (RBS ≥ 200 mg/dl) is associated with poor outcome. It can be a predictive factor for mortality, ICU stay, GCS on arrival, VAP & ARDS, hospital stay and ISS.
Management of hyperglycaemia with an insulin protocol in cases with values >200 mg/dl is critical to improving the outcome of patients with TBI.

  13. Approximate error conjugation gradient minimization methods

    DOEpatents

    Kallman, Jeffrey S

    2013-05-21

    In one embodiment, a method includes selecting a subset of rays from a set of all rays to use in an error calculation for a constrained conjugate gradient minimization problem, calculating an approximate error using the subset of rays, and calculating a minimum in a conjugate gradient direction based on the approximate error. In another embodiment, a system includes a processor for executing logic, logic for selecting a subset of rays from a set of all rays to use in an error calculation for a constrained conjugate gradient minimization problem, logic for calculating an approximate error using the subset of rays, and logic for calculating a minimum in a conjugate gradient direction based on the approximate error. In other embodiments, computer program products, methods, and systems are described capable of using approximate error in constrained conjugate gradient minimization problems.
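The idea can be sketched for a least-squares problem whose rows play the role of rays. In this hedged illustration (not the patented embodiment; the normal-equations formulation and all names are assumptions), the conjugate-gradient iteration itself is standard, and the sampled ray subset enters only in the cheap approximate-error convergence check.

```python
import numpy as np

def cg_subset_error(A, b, ray_idx, iters=50, tol=1e-10):
    """Conjugate gradient for min ||Ax - b||^2, where each row of A is
    one 'ray'. The convergence check evaluates the residual only over
    the rays in ray_idx, approximating the full error cheaply."""
    # Normal equations: (A^T A) x = A^T b, SPD for full-column-rank A.
    M, rhs = A.T @ A, A.T @ b
    x = np.zeros(A.shape[1])
    r = rhs - M @ x
    p = r.copy()
    for _ in range(iters):
        Mp = M @ p
        alpha = (r @ r) / (p @ Mp)   # exact line minimum along p
        x += alpha * p
        r_new = r - alpha * Mp
        # approximate error: squared residual over the sampled rays only
        approx_err = np.sum((A[ray_idx] @ x - b[ray_idx]) ** 2)
        if approx_err < tol:
            break
        beta = (r_new @ r_new) / (r @ r)
        p = r_new + beta * p
        r = r_new
    return x
```

In tomography-scale problems the point of such sampling is that the full set of rays is enormous, so even the error evaluation is worth approximating.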

  14. Active and passive shielding design optimization and technical solutions for deep sensitivity hard x-ray focusing telescopes

    NASA Astrophysics Data System (ADS)

    Malaguti, G.; Pareschi, G.; Ferrando, P.; Caroli, E.; Di Cocco, G.; Foschini, L.; Basso, S.; Del Sordo, S.; Fiore, F.; Bonati, A.; Lesci, G.; Poulsen, J. M.; Monzani, F.; Stevoli, A.; Negri, B.

    2005-08-01

    The 10-100 keV region of the electromagnetic spectrum contains the potential for a dramatic improvement in our understanding of a number of key problems in high energy astrophysics. A deep inspection of the universe in this band is on the other hand still lacking because of the demanding sensitivity (fraction of μCrab in the 20-40 keV for 1 Ms integration time) and imaging (≈ 15" angular resolution) requirements. The mission ideas currently being proposed are based on long focal length, grazing incidence, multi-layer optics, coupled with focal plane detectors with few hundreds μm spatial resolution capability. The required large focal lengths, ranging between 8 and 50 m, can be realized by means of extendable optical benches (as foreseen e.g. for the HEXITSAT, NEXT and NuSTAR missions) or formation flight scenarios (e.g. Simbol-X and XEUS). While the final telescope design will require a detailed trade-off analysis between all the relevant parameters (focal length, plate scale value, angular resolution, field of view, detector size, and sensitivity degradation due to detector dead area and telescope vignetting), extreme attention must be dedicated to the background minimization. In this respect, key issues are represented by the passive baffling system, which in case of large focal lengths requires particular design assessments, and by the active/passive shielding geometries and materials. In this work, the result of a study of the expected background for a hard X-ray telescope is presented, and its implication on the required sensitivity, together with the possible implementation design concepts for active and passive shielding in the framework of future satellite missions, are discussed.

  15. Consistent cosmology with Higgs thermal inflation in a minimal extension of the MSSM

    NASA Astrophysics Data System (ADS)

    Hindmarsh, Mark; Jones, D. R. Timothy

    2013-03-01

    We consider a class of supersymmetric inflation models, in which minimal gauged F-term hybrid inflation is coupled renormalisably to the minimal supersymmetric standard model (MSSM), with no extra ingredients; we call this class the ``minimal hybrid inflationary supersymmetric standard model'' (MHISSM). The singlet inflaton couples to the Higgs as well as the waterfall fields, supplying the Higgs μ-term. We show how such models can exit inflation to a vacuum characterised by large Higgs vevs, whose vacuum energy is controlled by supersymmetry-breaking. The true ground state is reached after an intervening period of thermal inflation along the Higgs flat direction, which has important consequences for the cosmology of the F-term inflation scenario. The scalar spectral index is reduced, with a value of approximately 0.976 in the case where the inflaton potential is dominated by the 1-loop radiative corrections. The reheat temperature following thermal inflation is about 109 GeV, which solves the gravitino overclosure problem. A Higgs condensate reduces the cosmic string mass per unit length, rendering it compatible with the Cosmic Microwave Background constraints without tuning the inflaton coupling. With the minimal U(1)' gauge symmetry in the inflation sector, where one of the waterfall fields generates a right-handed neutrino mass, we investigate the Higgs thermal inflation scenario in three popular supersymmetry-breaking schemes: AMSB, GMSB and the CMSSM, focusing on the implications for the gravitino bound. In AMSB enough gravitinos can be produced to account for the observed dark matter abundance through decays into neutralinos. In GMSB we find an upper bound on the gravitino mass of about a TeV, while in the CMSSM the thermally generated gravitinos are sub-dominant. 
When Big Bang Nucleosynthesis constraints are taken into account, the unstable gravitinos of AMSB and the CMSSM must have a mass O(10) TeV or greater, while in GMSB we find an upper bound on the gravitino mass of O(1) TeV.

  16. On multiple crack identification by ultrasonic scanning

    NASA Astrophysics Data System (ADS)

    Brigante, M.; Sumbatyan, M. A.

    2018-04-01

    The present work develops an approach that reduces operator equations arising in engineering problems to the problem of minimizing a discrepancy functional. For this minimization, an algorithm of random global search is proposed, which is related to genetic algorithms. The efficiency of the method is demonstrated by solving the problem of simultaneous identification of several linear cracks forming an array in an elastic medium using circular ultrasonic scanning.

  17. Development of sinkholes resulting from man's activities in the Eastern United States

    USGS Publications Warehouse

    Newton, John G.

    1987-01-01

    Alternatives that allow avoiding or minimizing sinkhole hazards are most numerous when a problem or potential problem is recognized during site evaluation. The number of alternatives declines after the beginning of site development. Where sinkhole development is predictable, zoning of land use can minimize hazards.

  18. Minimization principles for the coupled problem of Darcy-Biot-type fluid transport in porous media linked to phase field modeling of fracture

    NASA Astrophysics Data System (ADS)

    Miehe, Christian; Mauthe, Steffen; Teichtmeister, Stephan

    2015-09-01

    This work develops new minimization and saddle point principles for the coupled problem of Darcy-Biot-type fluid transport in porous media at fracture. It shows that the quasi-static problem of elastically deforming, fluid-saturated porous media is related to a minimization principle for the evolution problem. This two-field principle determines the rate of deformation and the fluid mass flux vector. It provides a canonically compact model structure, where the stress equilibrium and the inverse Darcy's law appear as the Euler equations of a variational statement. A Legendre transformation of the dissipation potential relates the minimization principle to a characteristic three field saddle point principle, whose Euler equations determine the evolutions of deformation and fluid content as well as Darcy's law. A further geometric assumption results in modified variational principles for a simplified theory, where the fluid content is linked to the volumetric deformation. The existence of these variational principles underlines inherent symmetries of Darcy-Biot theories of porous media. This can be exploited in the numerical implementation by the construction of time- and space-discrete variational principles, which fully determine the update problems of typical time stepping schemes. Here, the proposed minimization principle for the coupled problem is advantageous with regard to a new unconstrained stable finite element design, while space discretizations of the saddle point principles are constrained by the LBB condition. The variational principles developed provide the most fundamental approach to the discretization of nonlinear fluid-structure interactions, showing symmetric systems in algebraic update procedures. They also provide an excellent starting point for extensions towards more complex problems. This is demonstrated by developing a minimization principle for a phase field description of fracture in fluid-saturated porous media. 
It is designed for an incorporation of alternative crack driving forces, such as a convenient criterion in terms of the effective stress. The proposed setting provides a modeling framework for the analysis of complex problems such as hydraulic fracture. This is demonstrated by a spectrum of model simulations.

  19. Sorting signed permutations by short operations.

    PubMed

    Galvão, Gustavo Rodrigues; Lee, Orlando; Dias, Zanoni

    2015-01-01

    During evolution, global mutations may alter the order and the orientation of the genes in a genome. Such mutations are referred to as rearrangement events, or simply operations. In unichromosomal genomes, the most common operations are reversals, which are responsible for reversing the order and orientation of a sequence of genes, and transpositions, which are responsible for switching the location of two contiguous portions of a genome. The problem of computing the minimum sequence of operations that transforms one genome into another - which is equivalent to the problem of sorting a permutation into the identity permutation - is a well-studied problem that finds application in comparative genomics. There are a number of works concerning this problem in the literature, but they generally do not take into account the length of the operations (i.e. the number of genes affected by the operations). Since it has been observed that short operations are prevalent in the evolution of some species, algorithms that efficiently solve this problem in the special case of short operations are of interest. In this paper, we investigate the problem of sorting a signed permutation by short operations. More precisely, we study four flavors of this problem: (i) the problem of sorting a signed permutation by reversals of length at most 2; (ii) the problem of sorting a signed permutation by reversals of length at most 3; (iii) the problem of sorting a signed permutation by reversals and transpositions of length at most 2; and (iv) the problem of sorting a signed permutation by reversals and transpositions of length at most 3. We present polynomial-time solutions for problems (i) and (iii), a 5-approximation for problem (ii), and a 3-approximation for problem (iv). Moreover, we show that the expected approximation ratio of the 5-approximation algorithm is not greater than 3 for random signed permutations with more than 12 elements. 
Finally, we present experimental results that show that the approximation ratios of the approximation algorithms cannot be smaller than 3. In particular, this means that the approximation ratio of the 3-approximation algorithm is tight.
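The short operations studied here are easy to state concretely on a signed permutation: a length-1 reversal flips the sign of one element, and a length-2 reversal swaps two neighbours while flipping both signs, since reversing the segment (a, b) yields (-b, -a). The greedy sketch below sorts any signed permutation with these two operations; it is an illustration of the operation model only, not one of the paper's algorithms, and its operation count is not guaranteed minimal.

```python
def sort_by_short_reversals(perm):
    """Sort a signed permutation (e.g. [3, -1, 2]) into the identity
    using only length-1 reversals (flip one sign) and length-2
    reversals (swap neighbours, flipping both signs).
    Returns the sorted list and the operation log."""
    p = list(perm)
    ops = []
    for target in range(1, len(p) + 1):
        # locate the element with absolute value `target` ...
        i = next(j for j, v in enumerate(p) if abs(v) == target)
        # ... and bubble it left into position with length-2 reversals
        while i > target - 1:
            p[i-1], p[i] = -p[i], -p[i-1]   # (a, b) -> (-b, -a)
            ops.append(("rev2", i - 1))
            i -= 1
    # clean up remaining negative signs with length-1 reversals
    for i, v in enumerate(p):
        if v < 0:
            p[i] = -v
            ops.append(("rev1", i))
    return p, ops
```

Replaying the logged operations on the input permutation reproduces the identity, which makes the sketch convenient for checking the operation model against small examples.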

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kolda, Christopher

    In this talk, I review recent work on using a generalization of the Next-to-Minimal Supersymmetric Standard Model (NMSSM), called the Singlet-extended Minimal Supersymmetric Standard Model (SMSSM), to raise the mass of the Standard Model-like Higgs boson without requiring extremely heavy top squarks or large stop mixing. In so doing, this model solves the little hierarchy problem of the minimal model (MSSM), at the expense of leaving the μ-problem of the MSSM unresolved. This talk is based on work published in Refs. [1, 2, 3].

  1. Does finite-temperature decoding deliver better optima for noisy Hamiltonians?

    NASA Astrophysics Data System (ADS)

    Ochoa, Andrew J.; Nishimura, Kohji; Nishimori, Hidetoshi; Katzgraber, Helmut G.

    The minimization of an Ising spin-glass Hamiltonian is an NP-hard problem. Because many problems across disciplines can be mapped onto this class of Hamiltonian, novel efficient computing techniques are highly sought after. The recent development of quantum annealing machines promises to minimize these difficult problems more efficiently. However, the inherent noise found in these analog devices makes the minimization procedure difficult. While the machine might be working correctly, it might be minimizing a different Hamiltonian due to the inherent noise. This means that, in general, the ground-state configuration that correctly minimizes a noisy Hamiltonian might not minimize the noise-less Hamiltonian. Inspired by rigorous results that the energy of the noise-less ground-state configuration is equal to the expectation value of the energy of the noisy Hamiltonian at the (nonzero) Nishimori temperature [J. Phys. Soc. Jpn., 62, 40132930 (1993)], we numerically study the decoding probability of the original noise-less ground state with noisy Hamiltonians in two space dimensions, as well as the D-Wave Inc. Chimera topology. Our results suggest that thermal fluctuations might be beneficial during the optimization process in analog quantum annealing machines.

  2. Entropy of a (1+1)-dimensional charged black hole to all orders in the Planck length

    NASA Astrophysics Data System (ADS)

    Kim, Yong-Wan; Park, Young-Jai

    2013-02-01

    We study the statistical entropy of a scalar field on the (1+1)-dimensional Maxwell-dilaton background without an artificial cutoff by considering corrections to all orders in the Planck length obtained from a generalized uncertainty principle applied to the quantum state density. In contrast to the previous results for d ≥ 3 dimensional cases, we obtain an unadjustable entropy due to the independence of the minimal length, which plays the role of an adjustable parameter. However, this entropy is still proportional to the Bekenstein-Hawking entropy.
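The generalized uncertainty principle underlying such minimal-length corrections is commonly written, to lowest order in the deformation parameter β (conventions differ between papers; the abstract above keeps all orders in the Planck length, of which this is the leading truncation):

```latex
\Delta x \,\Delta p \;\ge\; \frac{\hbar}{2}\Bigl(1 + \beta\,(\Delta p)^2\Bigr),
\qquad
(\Delta x)_{\min} \;=\; \hbar\sqrt{\beta},
```

so the minimal observable length $(\Delta x)_{\min}$ is set by $\beta$; the entropy result above concerns the regime where this minimal length drops out.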

  3. Extended length microchannels for high density high throughput electrophoresis systems

    DOEpatents

    Davidson, James C.; Balch, Joseph W.

    2000-01-01

    High throughput electrophoresis systems which provide extended well-to-read distances on smaller substrates, thus compacting the overall systems. The electrophoresis systems utilize a high density array of microchannels for electrophoresis analysis with extended read lengths. The microchannel geometry can be used individually or in conjunction to increase the effective length of a separation channel while minimally impacting the packing density of channels. One embodiment uses sinusoidal microchannels, while another embodiment uses plural microchannels interconnected by a via. The extended channel systems can be applied to virtually any type of channel confined chromatography.

  4. Cut-To-Length Harvesting of Short Rotation Eucalyptus at Simpson Tehama Fiber Farm

    Treesearch

    Bruce R. Hartsough; David J. Cooper

    1999-01-01

    A system consisting of a cut-to-length harvester, forwarder, mobile chipper and chip screen was tested in a 7-year-old plantation. Three levels of debarking effort by the harvester (minimal, partial and full), and two levels of screening (with and without) were evaluated. The harvester had the lowest production rate and highest cost of the system elements. Harvester...

  5. Flagellar Synchronization Is a Simple Alternative to Cell Cycle Synchronization for Ciliary and Flagellar Studies

    PubMed Central

    Dutta, Soumita

    2017-01-01

    ABSTRACT The unicellular green alga Chlamydomonas reinhardtii is an ideal model organism for studies of ciliary function and assembly. In assays for biological and biochemical effects of various factors on flagellar structure and function, synchronous culture is advantageous for minimizing variability. Here, we have characterized a method in which 100% synchronization is achieved with respect to flagellar length but not with respect to the cell cycle. The method requires inducing flagellar regeneration by amputation of the entire cell population and limiting regeneration time. This results in a maximally homogeneous distribution of flagellar lengths at 3 h postamputation. We found that time-limiting new protein synthesis during flagellar synchronization limits variability in the unassembled pool of limiting flagellar protein and variability in flagellar length without affecting the range of cell volumes. We also found that long- and short-flagella mutants that regenerate normally require longer and shorter synchronization times, respectively. By minimizing flagellar length variability using a simple method requiring only hours and no changes in media, flagellar synchronization facilitates the detection of small changes in flagellar length resulting from both chemical and genetic perturbations in Chlamydomonas. This method increases our ability to probe the basic biology of ciliary size regulation and related disease etiologies. IMPORTANCE Cilia and flagella are highly conserved antenna-like organelles that are found in nearly all mammalian cell types. They perform sensory and motile functions contributing to numerous physiological and developmental processes. Defects in their assembly and function are implicated in a wide range of human diseases ranging from retinal degeneration to cancer. Chlamydomonas reinhardtii is an algal model system for studying mammalian cilium formation and function.
Here, we report a simple synchronization method that allows detection of small changes in ciliary length by minimizing variability in the population. We find that this method alters the key relationship between cell size and the amount of protein accumulated for flagellar growth. This provides a rapid alternative to traditional methods of cell synchronization for uncovering novel regulators of cilia. PMID:28289724

  6. Flagellar Synchronization Is a Simple Alternative to Cell Cycle Synchronization for Ciliary and Flagellar Studies.

    PubMed

    Dutta, Soumita; Avasthi, Prachee

    2017-01-01

    The unicellular green alga Chlamydomonas reinhardtii is an ideal model organism for studies of ciliary function and assembly. In assays for biological and biochemical effects of various factors on flagellar structure and function, synchronous culture is advantageous for minimizing variability. Here, we have characterized a method in which 100% synchronization is achieved with respect to flagellar length but not with respect to the cell cycle. The method requires inducing flagellar regeneration by amputation of the entire cell population and limiting regeneration time. This results in a maximally homogeneous distribution of flagellar lengths at 3 h postamputation. We found that time-limiting new protein synthesis during flagellar synchronization limits variability in the unassembled pool of limiting flagellar protein and variability in flagellar length without affecting the range of cell volumes. We also found that long- and short-flagella mutants that regenerate normally require longer and shorter synchronization times, respectively. By minimizing flagellar length variability using a simple method requiring only hours and no changes in media, flagellar synchronization facilitates the detection of small changes in flagellar length resulting from both chemical and genetic perturbations in Chlamydomonas. This method increases our ability to probe the basic biology of ciliary size regulation and related disease etiologies. IMPORTANCE Cilia and flagella are highly conserved antenna-like organelles that are found in nearly all mammalian cell types. They perform sensory and motile functions contributing to numerous physiological and developmental processes. Defects in their assembly and function are implicated in a wide range of human diseases ranging from retinal degeneration to cancer. Chlamydomonas reinhardtii is an algal model system for studying mammalian cilium formation and function.
Here, we report a simple synchronization method that allows detection of small changes in ciliary length by minimizing variability in the population. We find that this method alters the key relationship between cell size and the amount of protein accumulated for flagellar growth. This provides a rapid alternative to traditional methods of cell synchronization for uncovering novel regulators of cilia.

  7. Distance majorization and its applications.

    PubMed

    Chi, Eric C; Zhou, Hua; Lange, Kenneth

    2014-08-01

    The problem of minimizing a continuously differentiable convex function over an intersection of closed convex sets is ubiquitous in applied mathematics. It is particularly interesting when it is easy to project onto each separate set, but nontrivial to project onto their intersection. Algorithms based on Newton's method such as the interior point method are viable for small to medium-scale problems. However, modern applications in statistics, engineering, and machine learning are posing problems with potentially tens of thousands of parameters or more. We revisit this convex programming problem and propose an algorithm that scales well with dimensionality. Our proposal is an instance of a sequential unconstrained minimization technique and revolves around three ideas: the majorization-minimization principle, the classical penalty method for constrained optimization, and quasi-Newton acceleration of fixed-point algorithms. The performance of our distance majorization algorithms is illustrated in several applications.
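
    A minimal sketch of the three ingredients named above (majorization-minimization, the classical penalty ramp, and cheap per-set projections), applied to the hypothetical problem of minimizing 0.5*||x - a||^2 over the intersection of a unit ball and a half-plane. The sets, constants, and schedule are illustrative, not taken from the paper.

```python
def project_ball(x, radius=1.0):
    """Projection onto the ball of given radius centered at the origin."""
    norm = (x[0] ** 2 + x[1] ** 2) ** 0.5
    if norm <= radius:
        return list(x)
    return [radius * v / norm for v in x]

def project_halfplane(x, lo=0.5):
    """Projection onto the half-plane {x : x[0] >= lo}."""
    return [max(x[0], lo), x[1]]

def distance_majorize(a, projections, rho=1.0, outer=30, inner=50):
    """Minimize 0.5*||x - a||^2 over an intersection of convex sets by
    majorizing each squared-distance penalty with the distance to the
    current iterate's projection, then ramping the penalty weight rho."""
    x = list(a)
    m = len(projections)
    for _ in range(outer):
        for _ in range(inner):
            ps = [proj(x) for proj in projections]
            # closed-form minimizer of the quadratic surrogate:
            # (x - a) + rho * sum_i (x - p_i) = 0
            x = [(a[d] + rho * sum(p[d] for p in ps)) / (1 + rho * m)
                 for d in range(len(a))]
        rho *= 2.0  # classical penalty-method ramp
    return x

a = [2.0, 0.0]
x = distance_majorize(a, [project_ball, project_halfplane])
# x approaches (1, 0), the point of the intersection closest to a
```

    The quadratic objective is chosen so the surrogate minimizer has a closed form; for a general smooth convex f, each inner step would instead take a (quasi-)Newton step on the surrogate, as the paper describes.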

  8. Cognitive radio adaptation for power consumption minimization using biogeography-based optimization

    NASA Astrophysics Data System (ADS)

    Qi, Pei-Han; Zheng, Shi-Lian; Yang, Xiao-Niu; Zhao, Zhi-Jin

    2016-12-01

    Adaptation is one of the key capabilities of cognitive radio, which focuses on how to adjust the radio parameters to optimize the system performance based on the knowledge of the radio environment and its capability and characteristics. In this paper, we consider the cognitive radio adaptation problem for power consumption minimization. The problem is formulated as a constrained power consumption minimization problem, and the biogeography-based optimization (BBO) is introduced to solve this optimization problem. A novel habitat suitability index (HSI) evaluation mechanism is proposed, in which both the power consumption minimization objective and the quality of services (QoS) constraints are taken into account. The results show that under different QoS requirement settings corresponding to different types of services, the algorithm can minimize power consumption while still maintaining the QoS requirements. Comparison with particle swarm optimization (PSO) and cat swarm optimization (CSO) reveals that BBO works better, especially at the early stage of the search, which means that the BBO is a better choice for real-time applications. Project supported by the National Natural Science Foundation of China (Grant No. 61501356), the Fundamental Research Funds of the Ministry of Education, China (Grant No. JB160101), and the Postdoctoral Fund of Shaanxi Province, China.
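
    The abstract's optimizer can be illustrated with a generic biogeography-based optimization loop. This is textbook BBO on an unconstrained toy objective, not the paper's HSI formulation with QoS constraints; all parameter values are invented.

```python
import random

def bbo_minimize(f, dim, bounds, pop_size=20, gens=80, seed=1, pmut=0.05):
    """Generic biogeography-based optimization: better habitats tend to
    emigrate solution features, worse habitats tend to immigrate them."""
    rng = random.Random(seed)
    lo, hi = bounds
    pop = [[rng.uniform(lo, hi) for _ in range(dim)]
           for _ in range(pop_size)]
    best = min(pop, key=f)[:]
    for _ in range(gens):
        pop.sort(key=f)                                # rank: index 0 is best
        n = len(pop)
        lam = [(i + 1) / n for i in range(n)]          # immigration rates
        mu = [1.0 - l for l in lam]                    # emigration rates
        new_pop = [ind[:] for ind in pop]
        for i in range(n):
            for d in range(dim):
                if rng.random() < lam[i]:
                    # roulette-wheel pick of an emigrating habitat
                    j = rng.choices(range(n), weights=mu)[0]
                    new_pop[i][d] = pop[j][d]
                if rng.random() < pmut:
                    new_pop[i][d] = rng.uniform(lo, hi)  # mutation
        new_pop[0] = best[:]                           # elitism
        pop = new_pop
        cand = min(pop, key=f)
        if f(cand) < f(best):
            best = cand[:]
    return best

sphere = lambda x: sum(v * v for v in x)               # stand-in objective
start = bbo_minimize(sphere, dim=2, bounds=(-5.0, 5.0), gens=0, seed=3)
tuned = bbo_minimize(sphere, dim=2, bounds=(-5.0, 5.0), gens=80, seed=3)
```

    With elitism, the incumbent never worsens, so the tuned solution is at least as good as the initial best; the paper additionally folds the power objective and QoS constraints into the habitat suitability index.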

  9. Stochastic derivative-free optimization using a trust region framework

    DOE PAGES

    Larson, Jeffrey; Billups, Stephen C.

    2016-02-17

    This study presents a trust region algorithm to minimize a function f when one has access only to noise-corrupted function values f̄. The model-based algorithm dynamically adjusts its step length, taking larger steps when the model and function agree and smaller steps when the model is less accurate. The method does not require the user to specify a fixed pattern of points used to build local models and does not repeatedly sample points. If f is sufficiently smooth and the noise is independent and identically distributed with mean zero and finite variance, we prove that our algorithm produces iterates such that the corresponding function gradients converge in probability to zero. Finally, we present a prototype of our algorithm that, while simplistic in its management of previously evaluated points, solves benchmark problems in fewer function evaluations than do existing stochastic approximation methods.
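
    A 1-D toy version of the idea, with the trust-region radius both spacing the model points and capping the step, and the agreement ratio driving the radius update. This is a simplified sketch under invented settings, not the authors' algorithm.

```python
import random

def trust_region_noisy(f, x0, delta=1.0, iters=60,
                       delta_min=0.05, delta_max=4.0):
    """Minimize a noisy 1-D function: build a quadratic model from values
    spaced by the trust radius, step to its minimizer inside the region,
    and grow or shrink the region by how well model and function agree."""
    x = x0
    for _ in range(iters):
        f0, fp, fm = f(x), f(x + delta), f(x - delta)
        g = (fp - fm) / (2.0 * delta)           # model slope
        b = (fp - 2.0 * f0 + fm) / delta ** 2   # model curvature
        s = -g / b if b > 1e-12 else (-delta if g > 0 else delta)
        s = max(-delta, min(delta, s))          # stay inside the region
        pred = -(g * s + 0.5 * b * s * s)       # predicted decrease
        actual = f0 - f(x + s)                  # noisy observed decrease
        rho = actual / pred if pred > 0 else 0.0
        if rho > 0.75:
            delta = min(delta_max, 2.0 * delta)  # model agrees: larger steps
        elif rho < 0.25:
            delta = max(delta_min, delta / 2.0)  # model poor: smaller steps
        if rho > 0:
            x += s
    return x

rng = random.Random(42)
noisy = lambda x: x * x + rng.gauss(0.0, 1e-3)   # true minimizer at 0
xstar = trust_region_noisy(noisy, x0=3.0)
```

    Because the model points move and respace with the radius, no fixed sampling stencil is prescribed, echoing the property highlighted in the abstract.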

  10. Betting Decision Under Break-Streak Pattern: Evidence from Casino Gaming.

    PubMed

    Fong, Lawrence Hoc Nang; So, Amy Siu Ian; Law, Rob

    2016-03-01

    Cognitive bias is prevalent among gamblers, especially those with gambling problems. Grounded in heuristics theories, this study contributes to the literature by examining a cognitive bias triggered by the break-streak pattern in the casino setting. We postulate that gamblers tend to bet on the latest outcome when there is a break-streak pattern. Moreover, three determinants of the betting decision under break-streak pattern, including the streak length of the alternative outcome, the frequency of the latest outcome, and gender, were identified and examined in this study. A non-participatory observational study was conducted among the Cussec gamblers in a casino in Macao. An analysis of 1229 bets confirms our postulation, particularly when the streak of the alternative outcome is long, the latest outcome is frequent, and the gamblers are females. The findings provide meaningful implications for casino management and public policymakers regarding the minimization of gambling harm.

  11. Singular spectrum analysis in nonlinear dynamics, with applications to paleoclimatic time series

    NASA Technical Reports Server (NTRS)

    Vautard, R.; Ghil, M.

    1989-01-01

    Two dimensions of a dynamical system given by experimental time series are distinguished. Statistical dimension gives a theoretical upper bound for the minimal number of degrees of freedom required to describe the attractor up to the accuracy of the data, taking into account sampling and noise problems. The dynamical dimension is the intrinsic dimension of the attractor and does not depend on the quality of the data. Singular Spectrum Analysis (SSA) provides estimates of the statistical dimension. SSA also describes the main physical phenomena reflected by the data. It gives adaptive spectral filters associated with the dominant oscillations of the system and clarifies the noise characteristics of the data. SSA is applied to four paleoclimatic records. The principal climatic oscillations and the regime changes in their amplitude are detected. About 10 degrees of freedom are statistically significant in the data. Large noise and insufficient sample length do not allow reliable estimates of the dynamical dimension.

  12. Automatic Summarization as a Combinatorial Optimization Problem

    NASA Astrophysics Data System (ADS)

    Hirao, Tsutomu; Suzuki, Jun; Isozaki, Hideki

    We derived the oracle summary with the highest ROUGE score that can be achieved by integrating sentence extraction with sentence compression from the reference abstract. The analysis of the oracle revealed that summarization systems have to assign an appropriate compression rate to each sentence in the document. In accordance with this observation, this paper proposes a summarization method formulated as combinatorial optimization: selecting the set of sentences that maximizes the sum of the sentence scores from a pool consisting of the sentences at various compression rates, subject to length constraints. The score of a sentence is defined by its compression rate, content words, and positional information. The parameters for the compression rates and positional information are optimized by minimizing the loss between the scores of the oracles and those of the candidates. The results obtained from the TSC-2 corpus showed that our method outperformed previous systems with statistical significance.
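
    The selection step described above is a multiple-choice knapsack: at most one compression variant per sentence, total length within budget, total score maximized. A small dynamic-programming sketch with invented scores and lengths:

```python
def select_sentences(groups, budget):
    """Multiple-choice knapsack: groups[i] lists (length, score) variants
    of sentence i (one per compression rate); pick at most one variant
    per sentence so the total length fits the budget and the total score
    is maximal."""
    best = {0: (0.0, [])}  # used length -> (best score, picked variants)
    for i, variants in enumerate(groups):
        new = dict(best)
        for used, (score, picked) in best.items():
            for v, (length, s) in enumerate(variants):
                u = used + length
                if u <= budget and (u not in new or new[u][0] < score + s):
                    new[u] = (score + s, picked + [(i, v)])
        best = new
    return max(best.values())

groups = [
    [(10, 3.0), (6, 2.0)],   # sentence 0: full vs compressed version
    [(8, 2.5), (5, 1.5)],    # sentence 1
    [(7, 1.0)],              # sentence 2: only the full version
]
score, picks = select_sentences(groups, budget=15)  # best total score 4.5
```

    The DP keeps, for each exact used length, the best attainable score, which is enough because any optimal selection passes through one of those states.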

  13. Implications of Minimizing Trauma During Conventional Cochlear Implantation

    PubMed Central

    Carlson, Matthew L.; Driscoll, Colin L. W.; Gifford, René H.; Service, Geoffrey J.; Tombers, Nicole M.; Hughes-Borst, Becky J.; Neff, Brian A.; Beatty, Charles W.

    2014-01-01

    Objective To describe the relationship between implantation-associated trauma and postoperative speech perception scores among adult and pediatric patients undergoing cochlear implantation using conventional length electrodes and minimally traumatic surgical techniques. Study Design Retrospective chart review (2002–2010). Setting Tertiary academic referral center. Patients All subjects with significant preoperative low-frequency hearing (≤70 dB HL at 250 Hz) who underwent cochlear implantation with a newer generation implant electrode (Nucleus Contour Advance, Advanced Bionics HR90K [1J and Helix], and Med El Sonata standard H array) were reviewed. Intervention(s) Preimplant and postimplant audiometric thresholds and speech recognition scores were recorded using the electronic medical record. Main Outcome Measure(s) Postimplantation pure tone threshold shifts were used as a surrogate measure for extent of intracochlear injury and correlated with postoperative speech perception scores. Results Between 2002 and 2010, 703 cochlear implant (CI) operations were performed. Data from 126 implants were included in the analysis. The mean preoperative low-frequency pure-tone average was 55.4 dB HL. Hearing preservation was observed in 55% of patients. Patients with hearing preservation were found to have significantly higher postoperative speech perception performance in the cochlear implantation-only condition than those who lost all residual hearing. Conclusion Conservation of acoustic hearing after conventional length cochlear implantation is unpredictable but remains a realistic goal. The combination of improved technology and refined surgical technique may allow for conservation of some residual hearing in more than 50% of patients. Germane to the conventional length CI recipient with substantial hearing loss, minimizing trauma allows for improved speech perception in the electric condition. 
These findings support the use of minimally traumatic techniques in all CI recipients, even those destined for electric-only stimulation. PMID:21659922

  14. Minimally Invasive Thumb-sized Pterional Craniotomy for Surgical Clip Ligation of Unruptured Anterior Circulation Aneurysms

    PubMed Central

    Deshaies, Eric M; Villwock, Mark R; Singla, Amit; Toshkezi, Gentian; Padalino, David J

    2015-01-01

    Less invasive surgical approaches for intracranial aneurysm clipping may reduce length of hospital stay, surgical morbidity, treatment cost, and improve patient outcomes. We present our experience with a minimally invasive pterional approach for anterior circulation aneurysms performed in a major tertiary cerebrovascular center and compare the results with an aged matched dataset from the Nationwide Inpatient Sample (NIS). From August 2008 to December 2012, 22 elective aneurysm clippings on patients ≤55 years of age were performed by the same dual fellowship-trained cerebrovascular/endovascular neurosurgeon. One patient (4.5%) experienced transient post-operative complications. 18 of 22 patients returned for follow-up imaging and there were no recurrences through an average duration of 22 months. A search in the NIS database from 2008 to 2010, also for patients aged ≤55 years of age, yielded 1,341 hospitalizations for surgical clip ligation of unruptured cerebral aneurysms. Inpatient length of stay and hospital charges at our institution using the minimally invasive thumb-sized pterional technique were nearly half that of NIS (length of stay: 3.2 vs 5.7 days; hospital charges: $52,779 vs. $101,882). The minimally invasive thumb-sized pterional craniotomy allows good exposure of unruptured small and medium-sized supraclinoid anterior circulation aneurysms. Cerebrospinal fluid drainage from key subarachnoid cisterns and constant bimanual microsurgical techniques avoid the need for retractors which can cause contusions, localized venous infarctions, and post-operative cerebral edema at the retractor sites. Utilizing this set of techniques has afforded our patients with a shorter hospital stay at a lower cost compared to the national average. PMID:26325337

  15. Optimal trajectories of aircraft and spacecraft

    NASA Technical Reports Server (NTRS)

    Miele, A.

    1990-01-01

    Work done on algorithms for the numerical solutions of optimal control problems and their application to the computation of optimal flight trajectories of aircraft and spacecraft is summarized. General considerations on the calculus of variations, optimal control, numerical algorithms, and applications of these algorithms to real-world problems are presented. The sequential gradient-restoration algorithm (SGRA) is examined for the numerical solution of optimal control problems of the Bolza type. Both the primal formulation and the dual formulation are discussed. Aircraft trajectories, in particular the application of the dual sequential gradient-restoration algorithm (DSGRA) to the determination of optimal flight trajectories in the presence of windshear, are described. Both take-off trajectories and abort landing trajectories are discussed. Take-off trajectories are optimized by minimizing the peak deviation of the absolute path inclination from a reference value. Abort landing trajectories are optimized by minimizing the peak drop of altitude from a reference value. The survival capability of an aircraft in a severe windshear is discussed, and the optimal trajectories are found to be superior to both constant pitch trajectories and maximum angle of attack trajectories. Spacecraft trajectories, in particular the application of the primal sequential gradient-restoration algorithm (PSGRA) to the determination of optimal flight trajectories for aeroassisted orbital transfer, are examined. Both the coplanar case and the noncoplanar case are discussed within the frame of three problems: minimization of the total characteristic velocity; minimization of the time integral of the square of the path inclination; and minimization of the peak heating rate. The solution of the second problem is called the nearly-grazing solution, and its merits are pointed out as a useful engineering compromise between energy requirements and aerodynamic heating requirements.

  16. Does Day of Surgery Affect Hospital Length of Stay and Charges Following Minimally Invasive Transforaminal Lumbar Interbody Fusion?

    PubMed

    Hijji, Fady Y; Narain, Ankur S; Haws, Brittany E; Khechen, Benjamin; Kudaravalli, Krishna T; Yom, Kelly H; Singh, Kern

    2018-06-01

    Retrospective Cohort. To determine if an association exists between surgery day and length of stay or hospital costs after minimally invasive transforaminal lumbar interbody fusion (MIS TLIF). Length of inpatient stay after orthopedic procedures has been identified as a primary cost driver, and previous research has focused on determining risk factors for prolonged length of stay. In the arthroplasty literature, surgery performed later in the week has been identified as a predictor of increased length of stay. However, no such investigation has been performed for MIS TLIF. A surgical registry of patients undergoing MIS TLIF between 2008 and 2016 was retrospectively reviewed. Patients were grouped based on day of surgery, with groups including early surgery and late surgery. Day of surgery group was tested for an association with demographics and perioperative variables using the Student t test or χ² analysis. Day of surgery group was then tested for an association with direct hospital costs using multivariate linear regression. In total, 438 patients were analyzed; 51.8% were in the early surgery group, and 48.2% were in the late surgery group. There were no differences in demographics between groups. There were no differences between groups with regard to operative time, intraoperative blood loss, length of stay, or discharge day. Finally, there were no differences in total hospital charges between early and late surgery groups (P=0.247). The specific day on which a MIS TLIF procedure occurs is not associated with differences in length of inpatient stay or total hospital costs. This suggests that the postoperative course after MIS TLIF procedures is not affected by the differences in hospital staffing that occur on the weekend compared with weekdays.

  17. Number Partitioning via Quantum Adiabatic Computation

    NASA Technical Reports Server (NTRS)

    Smelyanskiy, Vadim N.; Toussaint, Udo

    2002-01-01

    We study both analytically and numerically the complexity of the adiabatic quantum evolution algorithm applied to random instances of combinatorial optimization problems. We use as an example the NP-complete set partition problem and obtain an asymptotic expression for the minimal gap separating the ground and excited states of a system during the execution of the algorithm. We show that for computationally hard problem instances the size of the minimal gap scales exponentially with the problem size. This result is in qualitative agreement with the direct numerical simulation of the algorithm for small instances of the set partition problem. We describe the statistical properties of the optimization problem that are responsible for the exponential behavior of the algorithm.
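
    The underlying optimization target is easy to state: split a set of numbers into two subsets whose sums are as close as possible. A brute-force check, feasible only for small n, which is exactly why the exponentially small gap matters for larger instances:

```python
from itertools import product

def min_partition_diff(nums):
    """Smallest |sum(A) - sum(B)| over all two-way partitions of nums.
    Exhaustive 2^n search; the adiabatic algorithm targets this optimum."""
    return min(abs(sum(n if keep else -n for n, keep in zip(nums, signs)))
               for signs in product([True, False], repeat=len(nums)))

perfect = min_partition_diff([3, 1, 1, 2, 2, 1])   # 0: {3, 2} vs {1, 1, 2, 1}
odd_sum = min_partition_diff([2, 3, 4, 8])         # 1: {8} vs {2, 3, 4}
```

    An odd total sum forces a difference of at least 1, as in the second example; random instances with large, high-precision numbers are the computationally hard regime studied in the abstract.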

  18. Exact recovery of sparse multiple measurement vectors by [Formula: see text]-minimization.

    PubMed

    Wang, Changlong; Peng, Jigen

    2018-01-01

    The joint sparse recovery problem is a generalization of the single measurement vector problem widely studied in compressed sensing. It aims to recover a set of jointly sparse vectors, i.e., those that have nonzero entries concentrated at a common location. Meanwhile, [Formula: see text]-minimization subject to matrices is widely used in a large number of algorithms designed for this problem, i.e., [Formula: see text]-minimization [Formula: see text] The main contributions of this paper are two theoretical results about this technique. The first proves that in every multiple system of linear equations there exists a constant [Formula: see text] such that the original unique sparse solution can also be recovered from a minimization in [Formula: see text] quasi-norm subject to matrices whenever [Formula: see text]. The second gives an analytic expression for such [Formula: see text]. Finally, we display the results of one example to confirm the validity of our conclusions, and we use numerical experiments to show that our results increase the efficiency of algorithms designed for [Formula: see text]-minimization.

  19. Numerical Optimization Using Computer Experiments

    NASA Technical Reports Server (NTRS)

    Trosset, Michael W.; Torczon, Virginia

    1997-01-01

    Engineering design optimization often gives rise to problems in which expensive objective functions are minimized by derivative-free methods. We propose a method for solving such problems that synthesizes ideas from the numerical optimization and computer experiment literatures. Our approach relies on kriging known function values to construct a sequence of surrogate models of the objective function that are used to guide a grid search for a minimizer. Results from numerical experiments on a standard test problem are presented.
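
    As a simplified stand-in for the kriging surrogate, the sketch below guides the search with a quadratic fit through the three best evaluated points, spending expensive evaluations only at the surrogate's minimizer. The objective and bounds are invented for the example.

```python
def expensive(x):
    return (x - 1.3) ** 2   # stands in for a costly black-box simulation

def surrogate_search(f, lo, hi, budget=10):
    """Guide the search with a quadratic surrogate through the three best
    evaluated points; spend f-evaluations only at surrogate minimizers."""
    xs = [lo, 0.5 * (lo + hi), hi]
    ys = [f(x) for x in xs]
    while len(xs) < budget:
        (b, fb), (a, fa), (c, fc) = sorted(zip(xs, ys),
                                           key=lambda p: p[1])[:3]
        den = (b - a) * (fb - fc) - (b - c) * (fb - fa)
        if abs(den) < 1e-12:
            break                       # surrogate is degenerate
        num = (b - a) ** 2 * (fb - fc) - (b - c) ** 2 * (fb - fa)
        x_new = min(max(b - 0.5 * num / den, lo), hi)
        if min(abs(x_new - x) for x in xs) < 1e-8:
            break                       # surrogate proposes a known point
        xs.append(x_new)
        ys.append(f(x_new))
    return min(zip(xs, ys), key=lambda p: p[1])[0]

x_best = surrogate_search(expensive, 0.0, 4.0)
```

    A kriging surrogate, as in the paper, would additionally model uncertainty away from sampled points and combine the surrogate with a grid search; the loop structure of fit, propose, evaluate, refit is the same.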

  20. Indexing a sequence for mapping reads with a single mismatch.

    PubMed

    Crochemore, Maxime; Langiu, Alessio; Rahman, M Sohel

    2014-05-28

    Mapping reads against a genome sequence is an interesting and useful problem in computational molecular biology and bioinformatics. In this paper, we focus on the problem of indexing a sequence for mapping reads with a single mismatch. We first focus on a simpler problem where the length of the pattern is given beforehand during the data structure construction. This version of the problem is interesting in its own right in the context of next generation sequencing. In the sequel, we show how to solve the more general problem. In both cases, our algorithm can construct an efficient data structure in O(n log^(1+ε) n) time and space and can answer subsequent queries in O(m log log n + K) time. Here, n is the length of the sequence, m is the length of the read, 0 < ε < 1, and K is the optimal output size.
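
    The paper's index achieves the stated bounds with more sophisticated data structures; as a much simpler illustration of single-mismatch mapping for a fixed read length, the pigeonhole principle says a 1-mismatch occurrence must match one half of the read exactly, so hashing both halves of every window yields candidates to verify:

```python
from collections import defaultdict

def build_index(text, m):
    """Hash both halves of every length-m window: by the pigeonhole
    principle, an occurrence with <= 1 mismatch matches one half exactly."""
    half = m // 2
    index = defaultdict(list)
    for i in range(len(text) - m + 1):
        index[(0, text[i:i + half])].append(i)
        index[(1, text[i + half:i + m])].append(i)
    return index

def map_read(text, index, read):
    """Report all positions where read occurs with at most one mismatch."""
    m, half = len(read), len(read) // 2
    hits = set()
    for key in ((0, read[:half]), (1, read[half:])):
        for i in index.get(key, []):
            window = text[i:i + m]
            if sum(a != b for a, b in zip(window, read)) <= 1:
                hits.add(i)
    return sorted(hits)

text = "ACGTACGTGACGT"
index = build_index(text, 4)               # pattern length fixed up front
positions = map_read(text, index, "ACGT")  # [0, 4, 9]
```

    Like the paper's simpler variant, this index must know the read length at construction time; the general version in the paper removes that restriction.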

  1. Scheduling with non-decreasing deterioration jobs and variable maintenance activities on a single machine

    NASA Astrophysics Data System (ADS)

    Zhang, Xingong; Yin, Yunqiang; Wu, Chin-Chia

    2017-01-01

    There is a situation found in many manufacturing systems, such as steel rolling mills, fire fighting, or single-server cycle-queues, where a job that is processed later consumes more time than that same job when processed earlier. Machine maintenance can counteract this worsening of processing conditions: after a maintenance activity, the machine is restored. The maintenance duration is a positive and non-decreasing differentiable convex function of the total processing times of the jobs between maintenance activities. Motivated by this observation, the makespan and the total completion time minimization problems in the scheduling of jobs with non-decreasing rates of job processing time on a single machine are considered in this article. It is shown that both the makespan and the total completion time minimization problems are NP-hard in the strong sense when the number of maintenance activities is arbitrary, while the makespan minimization problem is NP-hard in the ordinary sense when the number of maintenance activities is fixed. If the deterioration rates of the jobs are identical and the maintenance duration is a linear function of the total processing times of the jobs between maintenance activities, then this article shows that the group balance principle is satisfied for the makespan minimization problem. Furthermore, two polynomial-time algorithms are presented for solving the makespan problem and the total completion time problem under identical deterioration rates, respectively.
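
    A toy simulation of the model's moving parts, assuming linear deterioration and a linear maintenance-duration function. All rates here are invented; the article's results concern complexity and optimal policies, not this particular instance.

```python
def makespan(jobs, maint_after, rate=0.3, a=1.0, c=0.2):
    """Makespan under linear deterioration: a job started when the machine
    has run for time t since its last maintenance takes p + rate * t.
    After each job index in maint_after, a maintenance of duration
    a + c * (work since last repair) restores the machine."""
    clock = 0.0   # total elapsed time, including maintenance
    t = 0.0       # machine usage since the last maintenance
    for k, p in enumerate(jobs):
        dur = p + rate * t
        clock += dur
        t += dur
        if k in maint_after:
            clock += a + c * t    # maintenance grows with accumulated usage
            t = 0.0
    return clock

no_maint = makespan([5, 5, 5, 5], maint_after=set())
with_maint = makespan([5, 5, 5, 5], maint_after={1})   # repair mid-sequence
```

    With these rates, one mid-sequence maintenance shortens the makespan (26.3 versus about 30.9 time units), illustrating the trade-off between maintenance downtime and slowed processing that the scheduling problems formalize.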

  2. Design optimization of transmitting antennas for weakly coupled magnetic induction communication systems

    PubMed Central

    2017-01-01

    This work focuses on the design of transmitting coils in weakly coupled magnetic induction communication systems. We propose several optimization methods that reduce the active, reactive and apparent power consumption of the coil. These problems are formulated as minimization problems, in which the power consumed by the transmitting coil is minimized, under the constraint of providing a required magnetic field at the receiver location. We develop efficient numeric and analytic methods to solve the resulting problems, which are of high dimension, and in certain cases non-convex. For the objective of minimal reactive power an analytic solution for the optimal current distribution in flat disc transmitting coils is provided. This problem is extended to general three-dimensional coils, for which we develop an expression for the optimal current distribution. Considering the objective of minimal apparent power, a method is developed to reduce the computational complexity of the problem by transforming it to an equivalent problem of lower dimension, allowing a quick and accurate numeric solution. These results are verified experimentally by testing a number of coil geometries. The results obtained allow reduced power consumption and increased performance in magnetic induction communication systems. Specifically, for wideband systems, an optimal design of the transmitter coil reduces the peak instantaneous power provided by the transmitter circuitry, and thus reduces its size, complexity and cost. PMID:28192463

  3. Enucleation of facial sebaceous cyst by creating a minimal elliptical incision through a keratin-filled orifice.

    PubMed

    Chen, Wei-Liang

    2016-12-01

    A facial sebaceous cyst is a common benign epithelial tumor; surgical excision is frequently performed but may cause obvious scarring and be esthetically troubling. This study evaluated the clinical outcomes of eleven patients whose facial sebaceous cysts were enucleated through minimal elliptical incisions created at a keratin-filled orifice. We treated nine male and two female patients aged 25-52 years. The mean cyst size was 1.85 × 1.56 cm. All cysts were successfully enucleated. The mean elliptical wound length was 0.93 cm (range, 0.8-1.1 cm), and the mean operative time was 15.2 min. The mean follow-up duration was 41.5 months. No recurrence was noted, and all patients were very satisfied with their esthetic outcomes. We found no evidence of wound infection, or nerve or vascular injury. Enucleation of facial sebaceous cysts via a minimal elliptical incision through the keratin-filled orifice was associated with high-level patient satisfaction, and the method is safe and useful for treating facial epidermoid cysts. © 2016 Wiley Periodicals, Inc.

  4. NEWSUMT: A FORTRAN program for inequality constrained function minimization, users guide

    NASA Technical Reports Server (NTRS)

    Miura, H.; Schmit, L. A., Jr.

    1979-01-01

    A computer program written in FORTRAN subroutine form for the solution of linear and nonlinear constrained and unconstrained function minimization problems is presented. The algorithm is the sequential unconstrained minimization technique, using Newton's method for the unconstrained minimizations. The use of NEWSUMT and the definition of all parameters are described.
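
    The SUMT idea behind NEWSUMT can be sketched in a few lines: fold each inequality constraint into an interior penalty, minimize the penalized function with Newton's method, then shrink the penalty weight. This is a 1-D illustration with an invented objective, not the NEWSUMT code itself (which uses an extended penalty and handles many constraints).

```python
def sumt(fp, fpp, g, gp, gpp, x, r=1.0, shrink=10.0, outer=8):
    """Interior-penalty SUMT for one inequality constraint g(x) <= 0:
    repeatedly minimize phi(x) = f(x) - r/g(x) by Newton's method,
    shrinking the penalty weight r after each unconstrained solve."""
    for _ in range(outer):
        for _ in range(50):  # Newton iterations on phi
            d1 = fp(x) + r * gp(x) / g(x) ** 2                   # phi'
            d2 = fpp(x) + r * (gpp(x) / g(x) ** 2
                               - 2.0 * gp(x) ** 2 / g(x) ** 3)   # phi''
            step = -d1 / d2
            while g(x + step) >= 0:      # halve to stay strictly feasible
                step *= 0.5
            x += step
            if abs(step) < 1e-12:
                break
        r /= shrink
    return x

# minimize (x - 3)^2 subject to x - 1 <= 0; constrained optimum is x = 1
x_opt = sumt(fp=lambda x: 2.0 * (x - 3.0),
             fpp=lambda x: 2.0,
             g=lambda x: x - 1.0,
             gp=lambda x: 1.0,
             gpp=lambda x: 0.0,
             x=0.0)
```

    As r shrinks, the unconstrained minimizers approach the constraint boundary from the feasible side, which is the behavior the sequence of unconstrained minimizations relies on.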

  5. Minimizing the Diameter of a Network Using Shortcut Edges

    NASA Astrophysics Data System (ADS)

    Demaine, Erik D.; Zadimoghaddam, Morteza

    We study the problem of minimizing the diameter of a graph by adding k shortcut edges, for speeding up communication in an existing network design. We develop constant-factor approximation algorithms for different variations of this problem. We also show how to improve the approximation ratios using resource augmentation to allow more than k shortcut edges. We observe a close relation between the single-source version of the problem, where we want to minimize the largest distance from a given source vertex, and the well-known k-median problem. First we show that our constant-factor approximation algorithms for the general case solve the single-source problem within a constant factor. Then, using a linear-programming formulation for the single-source version, we find a (1 + ɛ)-approximation using O(k log n) shortcut edges. To show the tightness of our result, we prove that any (3/2 − ɛ)-approximation for the single-source version must use Ω(k log n) shortcut edges, assuming P ≠ NP.
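
    A naive greedy heuristic for the same problem (repeatedly connect the endpoints of a current longest shortest path) makes the setting concrete. Note this is only an illustration, not the constant-factor approximation algorithm of the paper.

```python
from collections import deque

def bfs_dist(adj, s):
    """Unweighted shortest-path distances from s via breadth-first search."""
    dist = {s: 0}
    q = deque([s])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def diameter(adj):
    """Return (diameter, u, v) for some pair realizing the diameter."""
    best = (0, None, None)
    for s in adj:
        d = bfs_dist(adj, s)
        v, dv = max(d.items(), key=lambda kv: kv[1])
        if dv > best[0]:
            best = (dv, s, v)
    return best

def add_shortcuts(adj, k):
    """Greedy heuristic: connect the endpoints of a current longest
    shortest path, k times, then report the resulting diameter."""
    for _ in range(k):
        _, u, v = diameter(adj)
        adj[u].add(v)
        adj[v].add(u)
    return diameter(adj)[0]

path = {i: set() for i in range(8)}       # 8-node path graph
for i in range(7):
    path[i].add(i + 1)
    path[i + 1].add(i)
d0 = diameter(path)[0]                    # 7 before any shortcut
d2 = add_shortcuts(path, k=2)
```

    On the 8-node path, the first shortcut closes the path into a cycle and halves the diameter; the example also shows why greedy is only a heuristic, since its second shortcut here fails to reduce the diameter further.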

  6. Tailoring Multicomponent Writing Interventions: Effects of Coupling Self-Regulation and Transcription Training.

    PubMed

    Limpo, Teresa; Alves, Rui A

    Writing proficiency is heavily based on acquisition and development of self-regulation and transcription skills. The present study examined the effects of combining transcription training with a self-regulation intervention (self-regulated strategy development [SRSD]) in Grade 2 (ages 7-8). Forty-three students receiving self-regulation plus transcription (SRSD+TR) intervention were compared with 37 students receiving a self-regulation only (SRSD only) intervention and 39 students receiving the standard language arts curriculum. Compared with control instruction, SRSD instruction-with or without transcription training-resulted in more complex plans; longer, better, and more complete stories; and the effects transferred to story written recall. Transcription training produced an incremental effect on students' composing skills. In particular, the SRSD+TR intervention increased handwriting fluency, spelling accuracy for inconsistent words, planning and story completeness, writing fluency, clause length, and burst length. Compared with the SRSD-only intervention, the SRSD+TR intervention was particularly effective in raising the writing quality of poorer writers. This pattern of findings suggests that students benefit from writing instruction coupling self-regulation and transcription training from very early on. This seems to be a promising instructional approach not only to ameliorate all students' writing ability and prevent future writing problems but also to minimize struggling writers' difficulties and support them in mastering writing.

  7. Characterization of compressed earth blocks using low frequency guided acoustic waves.

    PubMed

    Ben Mansour, Mohamed; Ogam, Erick; Fellah, Z E A; Soukaina Cherif, Amel; Jelidi, Ahmed; Ben Jabrallah, Sadok

    2016-05-01

    The objective of this work was to analyze the influence of compaction pressure on the intrinsic acoustic parameters (porosity, tortuosity, air-flow resistivity, and viscous and thermal characteristic lengths) of compressed earth blocks through their identification by solving an inverse acoustic wave transmission problem. A low frequency acoustic pipe (60-6000 Hz, length 22 m, internal diameter 3.4 cm) was used for the experimental characterization of the samples. The parameters were identified by minimizing the difference between the transmission coefficient data obtained in the pipe and those from an analytical interaction model in which the compressed earth blocks were considered as having rigid frames. The viscous and thermal effects in the pores were accounted for by employing the Johnson-Champoux-Allard-Lafarge model. The results obtained by inversion for high-density compressed earth blocks showed some discordance between model and experiment, especially in the high frequency limit of the acoustic characteristics studied. This was a consequence of the high compaction pressure, which rendered the blocks highly resistive and thus degraded the signal-to-noise ratios of the transmitted waves. The results showed that the airflow resistivity was very sensitive to the degree of compaction pressure used to form the blocks.

  8. Using Excel To Study The Relation Between Protein Dihedral Angle Omega And Backbone Length

    NASA Astrophysics Data System (ADS)

    Shew, Christopher; Evans, Samari; Tao, Xiuping

    How can uninitiated undergraduate students be involved in computational biophysics research? We made use of Microsoft Excel to carry out calculations of bond lengths, bond angles, and dihedral angles of proteins. Specifically, we studied the protein backbone dihedral angle omega by examining how its distribution varies with the backbone length. It turns out Excel is a respectable tool for this task: an ordinary current-day desktop or laptop can handle the calculations for midsized proteins in just seconds. Care has to be taken to enter the formulas for the spreadsheet column after column to minimize the computing load. Supported in part by NSF Grant #1238795.
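    The dihedral calculation behind those spreadsheet formulas can be sketched in a few lines of Python (our own illustrative code, not the Excel workbook from the abstract). For the backbone omega angle, the four atoms would be CA(i), C(i), N(i+1), and CA(i+1):

```python
import numpy as np

def dihedral(p0, p1, p2, p3):
    """Signed dihedral angle (degrees) defined by four 3-D points."""
    p0, p1, p2, p3 = (np.asarray(p, dtype=float) for p in (p0, p1, p2, p3))
    b1, b2, b3 = p1 - p0, p2 - p1, p3 - p2
    n1 = np.cross(b1, b2)                     # normal of the first plane
    n2 = np.cross(b2, b3)                     # normal of the second plane
    m1 = np.cross(n1, b2 / np.linalg.norm(b2))
    return np.degrees(np.arctan2(np.dot(m1, n2), np.dot(n1, n2)))

# Planar zigzag (trans) configuration -> omega = 180 degrees
omega_trans = dihedral((0, 1, 0), (0, 0, 0), (1, 0, 0), (1, -1, 0))
# Planar cis configuration -> omega = 0 degrees
omega_cis = dihedral((0, 1, 0), (0, 0, 0), (1, 0, 0), (1, 1, 0))
print(omega_trans, omega_cis)
```

The peptide bond's partial double-bond character is why omega clusters near 180° (trans), which is what makes its distribution worth examining.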

  9. Easy way to determine quantitative spatial resolution distribution for a general inverse problem

    NASA Astrophysics Data System (ADS)

    An, M.; Feng, M.

    2013-12-01

    The computation of the spatial resolution of a solution is nontrivial and can be more difficult than solving the inverse problem itself. Most geophysical studies, except for tomographic studies, almost uniformly neglect the calculation of a practical spatial resolution. In seismic tomography studies, a qualitative resolution length can be indicated via visual inspection of the restoration of a synthetic structure (e.g., checkerboard tests). An effective strategy for obtaining a quantitative resolution length is to calculate Backus-Gilbert resolution kernels (also referred to as a resolution matrix) by matrix operation. However, not all resolution matrices can provide resolution length information, and the computation of a resolution matrix is often difficult for very large inverse problems. A new class of resolution matrices, called statistical resolution matrices (An, 2012, GJI), can be directly determined via a simple one-parameter nonlinear inversion performed on limited pairs of random synthetic models and their inverse solutions. The procedure is restricted to the forward/inversion processes used in the real inverse problem and is independent of the degree of inverse skill used in the solution inversion. Spatial resolution lengths can be given directly during the inversion. Tests on 1D/2D/3D model inversions demonstrated that this simple method is valid at least for general linear inverse problems.

  10. Associations between social vulnerabilities and psychosocial problems in European children. Results from the IDEFICS study.

    PubMed

    Iguacel, Isabel; Michels, Nathalie; Fernández-Alvira, Juan M; Bammann, Karin; De Henauw, Stefaan; Felső, Regina; Gwozdz, Wencke; Hunsberger, Monica; Reisch, Lucia; Russo, Paola; Tornaritis, Michael; Thumann, Barbara Franziska; Veidebaum, Toomas; Börnhorst, Claudia; Moreno, Luis A

    2017-09-01

    The effect of socioeconomic inequalities on children's mental health remains unclear. This study aims to explore the cross-sectional and longitudinal associations between social vulnerabilities and psychosocial problems, and the association between accumulation of vulnerabilities and psychosocial problems. 5987 children aged 2-9 years from eight European countries were assessed at baseline and 2-year follow-up. Two different instruments were employed to assess children's psychosocial problems: the KINDL (Questionnaire for Measuring Health-Related Quality of Life in Children and Adolescents) was used to evaluate children's well-being and the Strengths and Difficulties Questionnaire (SDQ) was used to evaluate children's internalising problems. Vulnerable groups were defined as follows: children whose parents had minimal social networks, children from non-traditional families, children of migrant origin or children with unemployed parents. Logistic mixed-effects models were used to assess the associations between social vulnerabilities and psychosocial problems. After adjusting for classical socioeconomic and lifestyle indicators, children whose parents had minimal social networks were at greater risk of presenting internalising problems at baseline and follow-up (OR 1.53, 99% CI 1.11-2.11). The highest risk for psychosocial problems was found in children whose status changed from traditional families at T0 to non-traditional families at T1 (OR 1.60, 99% CI 1.07-2.39) and whose parents had minimal social networks at both time points (OR 1.97, 99% CI 1.26-3.08). Children with one or more vulnerabilities accumulated were at a higher risk of developing psychosocial problems at baseline and follow-up. Therefore, policy makers should implement measures to strengthen the social support for parents with a minimal social network.

  11. Matrix Interdiction Problem

    NASA Astrophysics Data System (ADS)

    Kasiviswanathan, Shiva Prasad; Pan, Feng

    In the matrix interdiction problem, a real-valued matrix and an integer k are given. The objective is to remove a set of k matrix columns that minimizes the sum of the row values in the residual matrix, where the value of a row is defined to be the largest entry in that row. This combinatorial problem is closely related to the bipartite network interdiction problem, which can be applied to minimize the probability that an adversary can successfully smuggle weapons. After introducing the matrix interdiction problem, we study its computational complexity. We show that the matrix interdiction problem is NP-hard and that there exists a constant γ such that it is even NP-hard to approximate this problem within an n^γ additive factor. We also present an algorithm for this problem that achieves an (n − k) multiplicative approximation ratio.
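    For intuition on the objective (and only as a check on small instances, since the problem is NP-hard and this does not scale), it can be evaluated exactly by brute force over all k-subsets of columns. The code below is our own illustrative sketch, not the approximation algorithm from the paper:

```python
from itertools import combinations

def matrix_interdiction_bruteforce(M, k):
    """Return (best set of k columns to remove, resulting sum of row maxima)."""
    n_cols = len(M[0])
    best_cols, best_val = None, float("inf")
    for cols in combinations(range(n_cols), k):
        removed = set(cols)
        kept = [j for j in range(n_cols) if j not in removed]
        # Row value = largest surviving entry; objective = sum over rows.
        val = sum(max(row[j] for j in kept) for row in M)
        if val < best_val:
            best_cols, best_val = cols, val
    return best_cols, best_val

M = [[5, 1, 2],
     [1, 6, 2],
     [2, 1, 7]]
cols, val = matrix_interdiction_bruteforce(M, k=1)
print(cols, val)  # removing column 2 leaves row maxima 5 + 6 + 2 = 13
```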

  12. Innovative trident fixation technique for allograft knee arthrodesis for high-grade osteosarcoma around the knee.

    PubMed

    Su, Alvin W; Chen, Wei-Ming; Chen, Cheng-Fong; Chen, Tain-Hsiung

    2009-11-01

    Reconstruction for osteosarcoma around the knee after wide resection faces the challenges of a large bone defect and future limb length discrepancy in skeletally immature patients. Modern prosthetic reconstruction may provide good results, but its longevity may be of concern and it may not be affordable in certain communities. Allograft knee arthrodesis still has its role in light of bone stock preservation and cost-effectiveness. We developed an innovative trident fixation technique utilizing three Steinmann pins to minimize limb length inequality without jeopardizing knee fusion stability. Twelve patients were enrolled. The mean age was 11.5 (10-13) years. Two had high-grade osteosarcoma in the proximal tibia and the others in the distal femur. Two patients died of oncological disease. The median follow-up of the 10 disease-free patients was 47 (41-60) months. All allograft-host bone junctions healed uneventfully without major complications except one allograft fracture. The average limb length discrepancy was 1.45 (1.0-2.1) cm at latest follow-up. This straightforward technique was successful in knee arthrodesis with minimized limb length inequality. Accordingly, in light of bone stock preservation and longevity for young children, it may be a surgical alternative for malignant bone tumors around the knee.

  13. String tightening as a self-organizing phenomenon.

    PubMed

    Banerjee, Bonny

    2007-09-01

    The phenomenon of self-organization has been of special interest to the neural network community throughout the last couple of decades. In this paper, we study a variant of the self-organizing map (SOM) that models the phenomenon of self-organization of the particles forming a string when the string is tightened from one or both of its ends. The proposed variant, called the string tightening self-organizing neural network (STON), can be used to solve certain practical problems, such as computation of shortest homotopic paths, smoothing paths to avoid sharp turns, computation of convex hull, etc. These problems are of considerable interest in computational geometry, robotics path-planning, artificial intelligence (AI) (diagrammatic reasoning), very large scale integration (VLSI) routing, and geographical information systems. Given a set of obstacles and a string with two fixed terminal points in a 2-D space, the STON model continuously tightens the given string until the unique shortest configuration in terms of the Euclidean metric is reached. The STON minimizes the total length of a string on convergence by dynamically creating and selecting feature vectors in a competitive manner. Proof of correctness of this anytime algorithm and experimental results obtained by its deployment have been presented in the paper.

  14. Perception of English palatal codas by Korean speakers of English

    NASA Astrophysics Data System (ADS)

    Yeon, Sang-Hee

    2003-04-01

    This study examined the perception of English palatal codas by Korean speakers of English to determine whether perception problems are the source of production problems. In particular, the study first looked at the possible first-language effect on the perception of English palatal codas. Second, a possible perceptual source of vowel epenthesis after English palatal codas was investigated. In addition, individual factors, such as length of residence, TOEFL score, gender, and academic status, were compared to determine whether they affected the varying degree of perception accuracy. Eleven adult Korean speakers of English as well as three native speakers of English participated in the study. Three sets of a perception test involving identification of minimally different English pseudo- or real words were carried out. The results showed that, first, the Korean speakers perceived the English codas significantly worse than the Americans. Second, the study supported the idea that Koreans perceived an extra /i/ after the final affricates due to final release. Finally, none of the individual factors explained the varying degree of perceptual accuracy. In particular, TOEFL scores and the perception test scores did not have any statistically significant association.

  15. How to pass information and deliver energy to a network of implantable devices within the human body.

    PubMed

    Sun, Mingui; Hackworth, Steven A; Tang, Zhide; Gilbert, Gary; Cardin, Sylvain; Sclabassi, Robert J

    2007-01-01

    It has been envisioned that a body network can be built to collect data from, and transport information to, implanted miniature devices at multiple sites within the human body. Currently, two problems of utmost importance remain unsolved: 1) how to link information between a pair of implants at a distance, and 2) how to provide electric power to these implants, allowing them to function and communicate. In this paper, we present new solutions to these problems by minimizing the intra-body communication distances. We show that, based on a study of human anatomy, the maximum distance from the body surface to the deepest point inside the body is approximately 15 cm. This finding provides an upper bound for the lengths of communication pathways required to reach the body's interior. We also show that these pathways do not have to cross any joints within the body. In order to implement the envisioned body network, we present the design of a new device, called an energy pad. This small, lightweight device can easily interface with the skin to perform data communication with, and supply power to, miniature implants.

  16. [TECHNIQUES IN MITRAL VALVE REPAIR VIA A MINIMALLY INVASIVE APPROACH].

    PubMed

    Ito, Toshiaki

    2016-03-01

    In mitral valve repair via a minimally invasive approach, resection of the leaflet is technically demanding compared with that in the standard approach. For resection and suture repair of the posterior leaflet, premarking of incision lines is recommended for precise resection. As an alternative to resection and suture, the leaflet-folding technique is also recommended. For correction of prolapse of the anterior leaflet, neochordae placement with the loop technique is easy to perform. Premeasurement with transesophageal echocardiography or intraoperative measurement using a replica of artificial chordae is useful to determine the appropriate length of the loops. Fine-tuning of the length of neochordae is possible by adding a secondary fixation point on the leaflet if the loop is too long. If the loop is too short, a CV5 Gore-Tex suture can be passed through the loop and loosely tied several times to stack the knots, with subsequent fixation to the edge of the leaflet. Finally, skill in the mitral valve replacement technique is necessary as a back-up for surgeons who perform minimally invasive mitral valve repair.

  17. Focal length hysteresis of a double-liquid lens based on electrowetting

    NASA Astrophysics Data System (ADS)

    Peng, Runling; Wang, Dazhen; Hu, Zhiwei; Chen, Jiabi; Zhuang, Songlin

    2013-02-01

    In this paper, an extended Young equation especially suited for an ideal cylindrical double-liquid variable-focus lens is derived by means of an energy minimization method. Based on the extended Young equation, a kind of focal length hysteresis effect is introduced into the double-liquid variable-focus lens. Such an effect can be explained theoretically by adding a force of friction to the tri-phase contact line. Theoretical analysis shows that the focal length at a particular voltage can be different depending on whether the applied voltage is increasing or decreasing, that is, there is a focal length hysteresis effect. Moreover, the focal length at a particular voltage must be larger when the voltage is rising than when it is dropping. These conclusions are also verified by experiments.

  18. Early and late outcomes of 1000 minimally invasive aortic valve operations.

    PubMed

    Tabata, Minoru; Umakanthan, Ramanan; Cohn, Lawrence H; Bolman, Ralph Morton; Shekar, Prem S; Chen, Frederick Y; Couper, Gregory S; Aranki, Sary F

    2008-04-01

    Minimal access cardiac valve surgery is increasingly utilized. We report our 11-year experience with minimally invasive aortic valve surgery. From 07/96 to 12/06, 1005 patients underwent minimally invasive aortic valve surgery. Early and late outcomes were analyzed. Median patient age was 68 years (range: 24-95), 179 patients (18%) were 80 years or older, 130 patients (13%) had reoperative aortic valve surgery, 86 (8.4%) had aortic root replacement, 62 (6.1%) had concomitant ascending aortic replacement, and 26 (2.6%) had percutaneous coronary intervention on the day of surgery (hybrid procedure). Operative mortality was 1.9% (19/1005). The incidences of deep sternal wound infection, pneumonia and reoperation for bleeding were 0.5% (5/1005), 1.3% (13/1005) and 2.4% (25/1005), respectively. Median length of stay was 6 days and 733 patients (72%) were discharged home. Actuarial survival was 91% at 5 years and 88% at 10 years. In the subgroup of the elderly (> or =80 years), operative mortality was 1.7% (3/179), median length of stay was 8 days and 66 patients (37%) were discharged home. Actuarial survival at 5 years was 84%. There was a significant decreasing trend in cardiopulmonary bypass time, the incidence of bleeding, and operative mortality over time. Minimal access approaches in aortic valve surgery are safe and feasible with excellent outcomes. Aortic root replacement, ascending aortic replacement, and reoperative surgery can be performed with these approaches. These procedures are particularly well-tolerated in the elderly.

  19. On the nullspace of TLS multi-station adjustment

    NASA Astrophysics Data System (ADS)

    Sterle, Oskar; Kogoj, Dušan; Stopar, Bojan; Kregar, Klemen

    2018-07-01

    In this article we present an analytic treatment of TLS multi-station least-squares adjustment with the main focus on the datum problem. In contrast to previously published research, the datum problem is theoretically analyzed and solved, with the solution based on the derivation of the nullspace of the mathematical model. The importance of solving the datum problem lies in a complete description of TLS multi-station adjustment solutions as a set of all minimally constrained least-squares solutions. On the basis of the known nullspace, the estimable parameters are described and the geometric interpretation of all minimally constrained least-squares solutions is presented. Finally, a simulated example is used to analyze the results of TLS multi-station minimally constrained and inner-constrained least-squares adjustment solutions.

  20. Method of grid generation

    DOEpatents

    Barnette, Daniel W.

    2002-01-01

    The present invention provides a method of grid generation that uses the geometry of the problem space and the governing relations to generate a grid. The method can generate a grid with minimized discretization errors, and with minimal user interaction. The method of the present invention comprises assigning grid cell locations so that, when the governing relations are discretized using the grid, at least some of the discretization errors are substantially zero. Conventional grid generation is driven by the problem space geometry; grid generation according to the present invention is driven by problem space geometry and by governing relations. The present invention accordingly can provide two significant benefits: more efficient and accurate modeling since discretization errors are minimized, and reduced cost grid generation since less human interaction is required.

  1. Minimum Bayes risk image correlation

    NASA Technical Reports Server (NTRS)

    Minter, T. C., Jr.

    1980-01-01

    In this paper, the problem of designing a matched filter for image correlation will be treated as a statistical pattern recognition problem. It is shown that, by minimizing a suitable criterion, a matched filter can be estimated which approximates the optimum Bayes discriminant function in a least-squares sense. It is well known that the use of the Bayes discriminant function in target classification minimizes the Bayes risk, which in turn directly minimizes the probability of a false fix. A fast Fourier implementation of the minimum Bayes risk correlation procedure is described.
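    The Bayes-risk filter estimation itself is not reproduced here, but the fast Fourier correlation step it relies on can be sketched in a few lines of NumPy (an illustrative example with our own toy signal, not the paper's procedure): circular cross-correlation is the inverse FFT of one spectrum times the conjugate of the other, and the peak location recovers the template offset.

```python
import numpy as np

def fft_correlate(signal, template):
    """Circular cross-correlation via the FFT; the peak index is the offset."""
    S = np.fft.fft(signal)
    T = np.fft.fft(template, n=len(signal))   # zero-pad template to signal length
    return np.real(np.fft.ifft(S * np.conj(T)))

# Embed a small pattern at offset 10 in an otherwise zero signal.
pattern = np.array([1.0, 3.0, 2.0, 1.0])
signal = np.zeros(64)
signal[10:14] = pattern

corr = fft_correlate(signal, pattern)
offset = int(np.argmax(corr))
print(offset)  # 10
```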

  2. Distance majorization and its applications

    PubMed Central

    Chi, Eric C.; Zhou, Hua; Lange, Kenneth

    2014-01-01

    The problem of minimizing a continuously differentiable convex function over an intersection of closed convex sets is ubiquitous in applied mathematics. It is particularly interesting when it is easy to project onto each separate set, but nontrivial to project onto their intersection. Algorithms based on Newton’s method such as the interior point method are viable for small to medium-scale problems. However, modern applications in statistics, engineering, and machine learning are posing problems with potentially tens of thousands of parameters or more. We revisit this convex programming problem and propose an algorithm that scales well with dimensionality. Our proposal is an instance of a sequential unconstrained minimization technique and revolves around three ideas: the majorization-minimization principle, the classical penalty method for constrained optimization, and quasi-Newton acceleration of fixed-point algorithms. The performance of our distance majorization algorithms is illustrated in several applications. PMID:25392563
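    A minimal sketch of the distance-penalty idea on a toy instance (our own code under stated assumptions, not the authors' implementation): minimize ½‖x − a‖² over the intersection of the unit ball and a half-space. Each MM step majorizes every squared distance by the squared distance to the projection at the current iterate, giving a closed-form update; the penalty weight is increased on a classical schedule.

```python
import numpy as np

def proj_ball(x):                      # projection onto the unit ball
    n = np.linalg.norm(x)
    return x if n <= 1.0 else x / n

def proj_halfspace(x):                 # projection onto {x : x[0] >= 0.5}
    y = x.copy()
    y[0] = max(y[0], 0.5)
    return y

def distance_majorization(a, projections, rhos=(1.0, 10.0, 100.0, 1e3), iters=200):
    """Minimize 0.5*||x-a||^2 over an intersection via distance-penalty MM."""
    x = a.copy()
    m = len(projections)
    for rho in rhos:                   # classical increasing-penalty schedule
        for _ in range(iters):
            # Majorize each 0.5*dist(x, C_i)^2 by 0.5*||x - P_i(x_k)||^2;
            # the surrogate is quadratic, so its minimizer is in closed form.
            p = sum(P(x) for P in projections)
            x_new = (a + rho * p) / (1.0 + rho * m)
            if np.linalg.norm(x_new - x) < 1e-12:
                x = x_new
                break
            x = x_new
    return x

a = np.array([3.0, 0.0])
x = distance_majorization(a, [proj_ball, proj_halfspace])
print(x)  # approximately [1, 0], the projection of a onto the intersection
```

Each inner update only needs the separate projections, which is exactly the setting the abstract describes: easy projection onto each set, hard projection onto the intersection.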

  3. Energy spread minimization in a cascaded laser wakefield accelerator via velocity bunching

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Zhijun; Li, Wentao; Wang, Wentao

    2016-05-15

    We propose a scheme to minimize the energy spread of an electron beam (e-beam) in a cascaded laser wakefield accelerator to the one-thousandth level by inserting a stage to compress its longitudinal spatial distribution. In this scheme, three plasma stage segments are designed for electron injection, e-beam length compression, and e-beam acceleration, respectively. The trapped e-beam in the injection stage is transferred to the zero-phase region at the center of one wakefield period in the compression stage, where the length of the e-beam can be greatly shortened owing to velocity bunching. After being seeded into the third stage for acceleration, the e-beam can be accelerated to a much higher energy before its energy chirp is compensated owing to the shortened e-beam length. A one-dimensional theory and two-dimensional particle-in-cell simulations have demonstrated this scheme, and an e-beam with 0.2% rms energy spread and low transverse emittance could be generated without loss of charge.

  4. Multi-Objective Random Search Algorithm for Simultaneously Optimizing Wind Farm Layout and Number of Turbines

    NASA Astrophysics Data System (ADS)

    Feng, Ju; Shen, Wen Zhong; Xu, Chang

    2016-09-01

    A new algorithm for multi-objective wind farm layout optimization is presented. It formulates the wind turbine locations as continuous variables and is capable of optimizing the number of turbines and their locations in the wind farm simultaneously. Two objectives are considered. One is to maximize the total power production, which is calculated by considering the wake effects using the Jensen wake model combined with the local wind distribution. The other is to minimize the total electrical cable length. This length is assumed to be the total length of the minimal spanning tree that connects all turbines and is calculated using Prim's algorithm. Constraints on the wind farm boundary and wind turbine proximity are also considered. An ideal test case shows that the proposed algorithm largely outperforms a well-known multi-objective genetic algorithm (NSGA-II). In a real test case based on the Horns Rev 1 wind farm, the algorithm also obtains useful Pareto frontiers and provides a wide range of Pareto-optimal layouts with different numbers of turbines for a real-life wind farm developer.
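    Since the cable-length objective is the length of the minimal spanning tree over the turbine positions, a compact sketch of Prim's algorithm may be helpful (our own illustrative Python with a toy layout, not the authors' code):

```python
import math

def prim_mst_length(points):
    """Total length of the Euclidean minimum spanning tree (O(n^2) Prim)."""
    n = len(points)
    in_tree = [False] * n
    best = [math.inf] * n              # cheapest known edge into the growing tree
    best[0] = 0.0
    total = 0.0
    for _ in range(n):
        # Greedily pick the cheapest vertex not yet in the tree.
        u = min((i for i in range(n) if not in_tree[i]), key=lambda i: best[i])
        in_tree[u] = True
        total += best[u]
        for v in range(n):             # relax edges out of the new tree vertex
            if not in_tree[v]:
                d = math.dist(points[u], points[v])
                if d < best[v]:
                    best[v] = d
    return total

# Four turbines on a unit square: the MST uses three unit-length edges.
layout = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]
print(prim_mst_length(layout))  # 3.0
```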

  5. Balancing building and maintenance costs in growing transport networks

    NASA Astrophysics Data System (ADS)

    Bottinelli, Arianna; Louf, Rémi; Gherardi, Marco

    2017-09-01

    The costs associated with the length of links impose unavoidable constraints on the growth of natural and artificial transport networks. When future network developments cannot be predicted, the costs of building and maintaining connections cannot be minimized simultaneously, requiring competing optimization mechanisms. Here, we study a one-parameter nonequilibrium model driven by an optimization functional, defined as the convex combination of building cost and maintenance cost. By varying the coefficient of the combination, the model interpolates between global and local length minimization, i.e., between minimum spanning trees and a local version known as dynamical minimum spanning trees. We show that cost balance within this ensemble of dynamical networks is a sufficient ingredient for the emergence of tradeoffs between the network's total length and transport efficiency, and of optimal strategies of construction. At the transition between two qualitatively different regimes, the dynamics builds up power-law distributed waiting times between global rearrangements, indicating a point of nonoptimality. Finally, we use our model as a framework to analyze empirical ant trail networks, showing its relevance as a null model for cost-constrained network formation.

  6. Use of optimization to predict the effect of selected parameters on commuter aircraft performance

    NASA Technical Reports Server (NTRS)

    Wells, V. L.; Shevell, R. S.

    1982-01-01

    An optimizing computer program determined the turboprop aircraft with lowest direct operating cost for various sets of cruise speed and field length constraints. External variables included wing area, wing aspect ratio and engine sea level static horsepower; tail sizes, climb speed and cruise altitude were varied within the function evaluation program. Direct operating cost was minimized for a 150 n.mi typical mission. Generally, DOC increased with increasing speed and decreasing field length but not by a large amount. Ride roughness, however, increased considerably as speed became higher and field length became shorter.

  7. Random Matrix Approach for Primal-Dual Portfolio Optimization Problems

    NASA Astrophysics Data System (ADS)

    Tada, Daichi; Yamamoto, Hisashi; Shinzato, Takashi

    2017-12-01

    In this paper, we revisit the portfolio optimization problems of the minimization/maximization of investment risk under constraints of budget and investment concentration (primal problem) and the maximization/minimization of investment concentration under constraints of budget and investment risk (dual problem) for the case that the variances of the return rates of the assets are identical. We analyze both optimization problems by the Lagrange multiplier method and the random matrix approach. Thereafter, we compare the results obtained from our proposed approach with the results obtained in previous work. Moreover, we use numerical experiments to validate the results obtained from the replica approach and the random matrix approach as methods for analyzing both the primal and dual portfolio optimization problems.

  8. Modeling of tool path for the CNC sheet cutting machines

    NASA Astrophysics Data System (ADS)

    Petunin, Aleksandr A.

    2015-11-01

    In this paper the problem of tool path optimization for CNC (Computer Numerical Control) cutting machines is considered. A classification of cutting techniques is offered, and we also propose a new classification of tool path problems. The tasks of cost minimization and time minimization for the standard cutting technique (Continuous Cutting Problem, CCP) and for one of the non-standard cutting techniques (Segment Continuous Cutting Problem, SCCP) are formalized. We show that these optimization tasks can be interpreted as discrete optimization problems (the generalized traveling salesman problem with additional constraints, GTSP). The formalization of some constraints for these tasks is described. To solve the GTSP, we propose using the mathematical model of Prof. Chentsov, based on the concept of a megalopolis and dynamic programming.

  9. Greedy algorithms in disordered systems

    NASA Astrophysics Data System (ADS)

    Duxbury, P. M.; Dobrin, R.

    1999-08-01

    We discuss search, minimal path and minimal spanning tree algorithms and their applications to disordered systems. Greedy algorithms solve these problems exactly, and are related to extremal dynamics in physics. Minimal cost path (Dijkstra) and minimal cost spanning tree (Prim) algorithms provide extremal dynamics for a polymer in a random medium (the KPZ universality class) and invasion percolation (without trapping) respectively.
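    As a concrete instance of the greedy algorithms mentioned, here is a minimal heap-based Dijkstra sketch (our own illustrative Python with a toy weighted graph):

```python
import heapq

def dijkstra(graph, src):
    """Greedy single-source shortest paths; graph maps node -> [(nbr, weight)]."""
    dist = {src: 0}
    heap = [(0, src)]
    while heap:
        d, u = heapq.heappop(heap)     # greedy choice: closest unsettled node
        if d > dist.get(u, float("inf")):
            continue                   # stale heap entry, already settled cheaper
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

g = {"a": [("b", 1), ("c", 4)], "b": [("c", 2)], "c": []}
dist = dijkstra(g, "a")
print(dist["c"])  # 3, via a -> b -> c
```

Prim's algorithm for minimal spanning trees has the same greedy structure, with the edge weight itself replacing the accumulated path length in the priority queue.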

  10. The deep layer of the tractus iliotibialis and its relevance when using the direct anterior approach in total hip arthroplasty: a cadaver study.

    PubMed

    Putzer, David; Haselbacher, Matthias; Hörmann, Romed; Klima, Günter; Nogler, Michael

    2017-12-01

    Surgical approaches through smaller incisions reveal less of the underlying anatomy, and therefore, detailed knowledge of the local anatomy and its variations is important in minimally invasive surgery. The aim of this study was to determine the location, extension, and histomorphology of the deep layer of the iliotibial band during minimally invasive hip surgery using the direct anterior approach (DAA). The morphology of the iliotibial tract was determined in this cadaver study on 40 hips with reference to the anterior superior iliac spine and the tibia. The deep layer of the tractus iliotibialis was exposed up to the hip-joint capsule and length and width measurements taken. Sections of the profound iliotibial tract were removed from the hips and the thickness of the sections was determined microscopically after staining. The superficial tractus iliotibialis had a length of 50.1 (SD 3.8) cm, while tensor fasciae latae total length was 18 (SD 2) cm [unattached 15 (SD 2.5) cm]. Length and width of the deep layer of the tractus iliotibialis were 10.4 (SD 1.3) × 3.3 (SD 0.6) cm. The deep iliotibial band always extended from the distal part of the tensor fascia latae (TFL) muscle to the lateral part of the hip capsule (mean maximum thickness 584 μm). Tractus iliotibialis deep layer morphology did not correlate to other measurements taken (body length, thigh length, and TFL length). The length of the deep layer is dependent on the TFL, since the profound part of the iliotibial band reaches from the TFL to the hip-joint capsule. The deep layer covers the hip-joint capsule, rectus, and lateral vastus muscles in the DAA interval. To access the precapsular fat pad and the hip-joint capsule, the deep layer has to be split in all approaches that use the direct anterior interval.

  11. Radiographic study of the fifth metatarsal for optimal intramedullary screw fixation of Jones fracture.

    PubMed

    Ochenjele, George; Ho, Bryant; Switaj, Paul J; Fuchs, Daniel; Goyal, Nitin; Kadakia, Anish R

    2015-03-01

Jones fractures occur in the relatively avascular metadiaphyseal junction of the fifth metatarsal (MT), which predisposes these fractures to delayed union and nonunion. Operative treatment with intramedullary (IM) screw fixation is recommended in certain cases. Incorrect screw selection can lead to refractures, nonunion, and cortical blowout fractures. A better understanding of the anatomy of the fifth MT could aid in preoperative planning, guide screw size selection, and minimize complications. We retrospectively identified foot computed tomographic (CT) scans of 119 patients who met inclusion criteria. Using interactive 3-dimensional (3-D) models, the following measurements were calculated: MT length, "straight segment length" (distance from the base of the MT to the shaft curvature), and canal diameter. The diaphysis had a lateroplantar curvature where the medullary canal began to taper. The average straight segment length was 52 mm and corresponded to 68% of the overall length of the MT from its proximal end. The medullary canal cross-section was elliptical rather than circular, with the widest width in the sagittal plane and the narrowest in the coronal plane. The average coronal canal diameter at the isthmus was 5.0 mm. A coronal diameter greater than 4.5 mm at the isthmus was present in 81% of males and 74% of females. To our knowledge, this is the first anatomic description of the fifth metatarsal based on 3-D imaging. Excessive screw length could be avoided by keeping screw length less than 68% of the length of the fifth metatarsal. A screw diameter greater than 4.5 mm might be needed to provide adequate fixation for most study patients, since the isthmus of the medullary canal was wider than 4.5 mm in most. Our results provide an improved understanding of the fifth metatarsal anatomy to guide screw diameter and length selection to maximize screw fixation and minimize complications. © The Author(s) 2014.
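The sizing guidance in the record above reduces to simple arithmetic. A minimal Python sketch (the function name and return format are my own; the 68% and 4.5 mm figures come from the abstract, and this is illustrative only, not a substitute for patient-specific CT planning):

```python
def jones_screw_guidance(mt_length_mm: float) -> dict:
    """Apply the study's rules of thumb for fifth-metatarsal IM screw selection.

    The straight segment averaged 68% of metatarsal length, so screw length
    should stay below that fraction; a >4.5 mm diameter fit most patients.
    """
    STRAIGHT_SEGMENT_FRACTION = 0.68   # study average: straight segment / MT length
    MIN_DIAMETER_MM = 4.5              # exceeded at the isthmus in most patients
    return {
        "max_screw_length_mm": round(mt_length_mm * STRAIGHT_SEGMENT_FRACTION, 1),
        "suggested_min_diameter_mm": MIN_DIAMETER_MM,
    }

# A hypothetical 76 mm metatarsal gives a ceiling of 51.7 mm on screw length.
print(jones_screw_guidance(76.0))
```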

  12. A CFD-based aerodynamic design procedure for hypersonic wind-tunnel nozzles

    NASA Technical Reports Server (NTRS)

    Korte, John J.

    1993-01-01

A new procedure which unifies the best of current classical design practices, computational fluid dynamics (CFD), and optimization procedures is demonstrated for designing the aerodynamic lines of hypersonic wind-tunnel nozzles. The new procedure can be used to design hypersonic wind tunnel nozzles with thick boundary layers, where the classical design procedure has been shown to break down. An efficient CFD code, which solves the parabolized Navier-Stokes (PNS) equations using an explicit upwind algorithm, is coupled to a least-squares (LS) optimization procedure. A LS problem is formulated to minimize the difference between the computed flow field and the objective function, consisting of the centerline Mach number distribution and the exit Mach number and flow angle profiles. The aerodynamic lines of the nozzle are defined using a cubic spline, the slopes of which are optimized with the design procedure. The advantages of the new procedure are that it allows full use of powerful CFD codes in the design process, solves an optimization problem to determine the new contour, can be used to design new nozzles or improve sections of existing nozzles, and automatically compensates the nozzle contour for viscous effects as part of the unified design procedure. The new procedure is demonstrated by designing two Mach 15 nozzles, a Mach 12 nozzle, and a Mach 18 helium nozzle. The flexibility of the procedure is demonstrated by designing the two Mach 15 nozzles using different constraints, the first nozzle for a fixed length and exit diameter and the second nozzle for a fixed length and throat diameter. The computed flow field for the Mach 15 least squares parabolized Navier-Stokes (LS/PNS) designed nozzle is compared with the classically designed nozzle and demonstrates a significant improvement in the flow expansion process and uniform core region.

  13. Optimal control problem for linear fractional-order systems, described by equations with Hadamard-type derivative

    NASA Astrophysics Data System (ADS)

    Postnov, Sergey

    2017-11-01

Two kinds of optimal control problem are investigated for linear time-invariant fractional-order systems with lumped parameters whose dynamics are described by equations with a Hadamard-type derivative: the problem of control with minimal norm and the problem of control with minimal time under a given restriction on the control norm. The problem setting with nonlocal initial conditions is studied. Admissible controls are allowed to be p-integrable functions (p > 1) on a half-interval. The optimal control problems are studied by the moment method, and the correctness and solvability conditions for the corresponding moment problem are derived. For several special cases, the stated optimal control problems are solved analytically. Some analogies are pointed out between the results obtained and results known for integer-order systems and for fractional-order systems described by equations with Caputo- and Riemann-Liouville-type derivatives.

  14. Dealing with time-varying recruitment and length in Hill-type muscle models.

    PubMed

    Hamouda, Ahmed; Kenney, Laurence; Howard, David

    2016-10-03

Hill-type muscle models are often used in muscle simulation studies and also in the design and virtual prototyping of functional electrical stimulation systems. These models have to behave in a sufficiently realistic manner when recruitment level and contractile element (CE) length change continuously. For this reason, most previous models have used instantaneous CE length in the muscle's force vs. length (F-L) relationship, but thereby neglect the instability problem on the descending limb (i.e. region of negative slope) of the F-L relationship. Ideally CE length at initial recruitment should be used but this requires a multiple-motor-unit muscle model to properly account for different motor-units having different initial lengths when recruited. None of the multiple-motor-unit models reported in the literature have used initial CE length in the muscle's F-L relationship, thereby also neglecting the descending limb instability problem. To address the problem of muscle modelling for continuously varying recruitment and length, and hence different values of initial CE length for different motor-units, a new multiple-motor-unit muscle model is presented which considers the muscle to comprise 1000 individual Hill-type virtual motor-units, which determine the total isometric force. Other parts of the model (F-V relationship and passive elements) are not dependent on the initial CE length and, therefore, they are implemented for the muscle as a whole rather than for the individual motor-units. The results demonstrate the potential errors introduced by using a single-motor-unit model and also the instantaneous CE length in the F-L relationship, both of which are common in FES control studies. Copyright © 2016 Elsevier Ltd. All rights reserved.
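The core idea of the record above, evaluating the F-L relationship at each motor-unit's initial CE length rather than the instantaneous one, can be sketched compactly. Everything here is illustrative: the Gaussian F-L curve, the unit count, and the last-recruited/first-derecruited scheme are stand-ins, not the paper's actual model:

```python
import math

def fl_curve(l_norm: float) -> float:
    # Hypothetical Gaussian force-length curve, peaking at optimal CE length 1.0.
    return math.exp(-((l_norm - 1.0) ** 2) / 0.45)

class MotorUnitMuscle:
    """Toy multiple-motor-unit isometric model: each unit freezes the CE
    length seen at the instant it is recruited, so the F-L relationship
    uses initial (not instantaneous) length."""

    def __init__(self, n_units: int = 1000, unit_fmax: float = 1.0):
        self.n_units = n_units
        self.unit_fmax = unit_fmax
        self.initial_lengths = [None] * n_units   # None = not recruited

    def set_recruitment(self, level: float, current_length: float):
        n_active = int(round(level * self.n_units))
        for i in range(self.n_units):
            if i < n_active:
                if self.initial_lengths[i] is None:      # newly recruited unit
                    self.initial_lengths[i] = current_length
            else:
                self.initial_lengths[i] = None           # derecruited unit

    def isometric_force(self) -> float:
        return sum(self.unit_fmax * fl_curve(l0)
                   for l0 in self.initial_lengths if l0 is not None)

# Half the units recruited at optimal length, the rest added after a stretch:
muscle = MotorUnitMuscle(n_units=1000)
muscle.set_recruitment(0.5, 1.0)   # early units remember length 1.0
muscle.set_recruitment(1.0, 1.2)   # later units remember length 1.2
```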

  15. Optimal Rate Schedules with Data Sharing in Energy Harvesting Communication Systems.

    PubMed

    Wu, Weiwei; Li, Huafan; Shan, Feng; Zhao, Yingchao

    2017-12-20

Despite the abundant research on energy-efficient rate scheduling policies in energy harvesting communication systems, few works have exploited data sharing among multiple applications to further enhance the energy utilization efficiency, considering that the harvested energy from environments is limited and unstable. In this paper, to overcome the energy shortage of wireless devices in transmitting data to a platform running multiple applications/requesters, we design rate scheduling policies to respond to data requests as soon as possible by encouraging data sharing among data requests and reducing the redundancy. We formulate the problem as a transmission completion time minimization problem under constraints of dynamic data requests and energy arrivals. We develop offline and online algorithms to solve this problem. For the offline setting, we discover the relationship between two problems: the completion time minimization problem and the energy consumption minimization problem with a given completion time. We first derive the optimal algorithm for the min-energy problem and then adopt it as a building block to compute the optimal solution for the min-completion-time problem. For the online setting without future information, we develop an event-driven online algorithm to complete the transmission as soon as possible. Simulation results validate the efficiency of the proposed algorithm.
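The offline reduction described above (solving min-completion-time via the min-energy subproblem) has a natural sketch: since feasibility of a deadline is monotone in the deadline, the smallest feasible completion time can be found by bisection over a min-energy feasibility oracle. The oracle below is a hypothetical stand-in, not the paper's actual algorithm:

```python
def min_completion_time(feasible, t_lo, t_hi, tol=1e-6):
    """Bisect for the smallest completion time T whose min-energy schedule
    is feasible. `feasible(T)` is a hypothetical oracle: it should run the
    optimal min-energy schedule for deadline T and check it against the
    energy-arrival constraints. Feasibility is monotone in T."""
    while t_hi - t_lo > tol:
        mid = 0.5 * (t_lo + t_hi)
        if feasible(mid):
            t_hi = mid     # deadline achievable: try a tighter one
        else:
            t_lo = mid     # deadline infeasible: relax it
    return t_hi

# Toy oracle: sending B bits at constant rate B/T costs (B/T)^2 per unit time
# (a stand-in convex rate-power curve); a single energy budget E must cover it.
B, E = 10.0, 4.0
T_star = min_completion_time(lambda T: (B / T) ** 2 * T <= E, 1e-9, 1e3)
# Analytically, energy = B^2 / T <= E gives T* = B^2 / E = 25.
```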

  16. Optimal Rate Schedules with Data Sharing in Energy Harvesting Communication Systems

    PubMed Central

    Wu, Weiwei; Li, Huafan; Shan, Feng; Zhao, Yingchao

    2017-01-01

Despite the abundant research on energy-efficient rate scheduling policies in energy harvesting communication systems, few works have exploited data sharing among multiple applications to further enhance the energy utilization efficiency, considering that the harvested energy from environments is limited and unstable. In this paper, to overcome the energy shortage of wireless devices in transmitting data to a platform running multiple applications/requesters, we design rate scheduling policies to respond to data requests as soon as possible by encouraging data sharing among data requests and reducing the redundancy. We formulate the problem as a transmission completion time minimization problem under constraints of dynamic data requests and energy arrivals. We develop offline and online algorithms to solve this problem. For the offline setting, we discover the relationship between two problems: the completion time minimization problem and the energy consumption minimization problem with a given completion time. We first derive the optimal algorithm for the min-energy problem and then adopt it as a building block to compute the optimal solution for the min-completion-time problem. For the online setting without future information, we develop an event-driven online algorithm to complete the transmission as soon as possible. Simulation results validate the efficiency of the proposed algorithm. PMID:29261135

  17. The “jaundice hotline” for the rapid assessment of patients with jaundice

    PubMed Central

    Mitchell, Jonathan; Hussaini, Hyder; McGovern, Dermot; Farrow, Richard; Maskell, Giles; Dalton, Harry

    2002-01-01

    Problem Patients with jaundice require rapid diagnosis and treatment, yet such patients are often subject to delay. Design An open referral, rapid access jaundice clinic was established by reorganisation of existing services and without the need for significant extra resources. Background and setting A large general hospital in a largely rural and geographically isolated area. Key measures for improvement Waiting times for referral, consultation, diagnosis, and treatment, length of stay in hospital, and general practitioners' and patients' satisfaction with the service. Strategies for change Referrals were made through a 24 hour telephone answering machine and fax line. Initial assessment of patients was carried out by junior staff as part of their working week. Dedicated ultrasonography appointments were made available. Effects of change Of 107 patients seen in the first year of the service, 62 had biliary obstruction. The mean time between referral and consultation was 2.5 days. Patients who went on to endoscopic retrograde cholangiopancreatography waited 5.7 days on average. The mean length of stay in hospital in the 69 patients who were admitted was 6.1 days, compared with 11.5 days in 1996, as shown by audit data. Nearly all the 36 general practices (95%) and the 30 consecutive patients (97%) that were surveyed rated the service as above average or excellent. Lessons learnt An open referral, rapid access service for patients with jaundice can shorten time to diagnosis and treatment and length of stay in hospital. These improvements can occur through the reorganisation of existing services and with minimal extra cost. PMID:12142314

  18. Deformability analysis of sickle blood using ektacytometry.

    PubMed

    Rabai, Miklos; Detterich, Jon A; Wenby, Rosalinda B; Hernandez, Tatiana M; Toth, Kalman; Meiselman, Herbert J; Wood, John C

    2014-01-01

    Sickle cell disease (SCD) is characterized by decreased erythrocyte deformability, microvessel occlusion and severe painful infarctions of different organs. Ektacytometry of SCD red blood cells (RBC) is made difficult by the presence of rigid, poorly-deformable irreversibly sickled cells (ISC) that do not align with the fluid shear field and distort the elliptical diffraction pattern seen with normal RBC. In operation, the computer software fits an outline to the diffraction pattern, then reports an elongation index (EI) at each shear stress based on the length and width of the fitted ellipse: EI=(length-width)/(length+width). Using a commercial ektacytometer (LORCA, Mechatronics Instruments, The Netherlands) we have approached the problem of ellipse fitting in two ways: (1) altering the height of the diffraction image on a computer monitor using an aperture within the camera lens; (2) altering the light intensity level (gray level) used by the software to fit the image to an elliptical shape. Neither of these methods affected deformability results (elongation index-shear stress relations) for normal RBC but did markedly affect results for SCD erythrocytes: (1) decreasing image height by 15% and 30% increased EI at moderate to high stresses; (2) progressively increasing the light level increased EI over a wide range of stresses. Fitting data obtained at different image heights using the Lineweaver-Burke routine yielded percentage ISC results in good agreement with microscopic cell counting. We suggest that these two relatively simple approaches allow minimizing artifacts due to the presence of rigid discs or ISC and also suggest the need for additional studies to evaluate the physiological relevance of deformability data obtained via these methods.
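The elongation index used above is a one-line formula. A small sketch follows; the width-scaling example is my own illustration of the image-height adjustment, not the LORCA instrument's exact behaviour:

```python
def elongation_index(length: float, width: float) -> float:
    """EI = (L - W) / (L + W): 0 for a circular diffraction pattern,
    approaching 1 as the fitted ellipse elongates under shear."""
    return (length - width) / (length + width)

# Shrinking the image height scales down the measured width axis; here a
# hypothetical 15% reduction raises EI for the same underlying pattern.
ei_full = elongation_index(10.0, 6.0)            # 0.25
ei_reduced = elongation_index(10.0, 6.0 * 0.85)  # larger than 0.25
assert ei_reduced > ei_full
```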

  19. Evaluation of three-dimensional printing for internal fixation of unstable pelvic fracture from minimal invasive para-rectus abdominis approach: a preliminary report.

    PubMed

    Zeng, Canjun; Xiao, Jidong; Wu, Zhanglin; Huang, Wenhua

    2015-01-01

The aim of this study is to evaluate the efficacy and feasibility of three-dimensional printing (3D printing)-assisted internal fixation of unstable pelvic fracture from a minimally invasive para-rectus abdominis approach. A total of 38 patients with unstable pelvic fractures were analyzed retrospectively from August 2012 to February 2014. All cases were treated operatively with internal fixation assisted by three-dimensional printing from a minimally invasive para-rectus abdominis approach. Both preoperative CT and three-dimensional reconstruction were performed. A pelvic model was created by 3D printing. Data including the best entry points, plate position, and screw direction and length were obtained from a simulated operation based on the 3D-printed pelvic model. The diaplasis and internal fixation were performed by the minimally invasive para-rectus abdominis approach according to the optimized data in the real surgical procedure. Matta and Majeed scores were used to evaluate curative effects after operation. According to the Matta standard, the outcome of the diaplasis was excellent or good in 97.37% of cases. Majeed assessment showed 94.4% with excellent and good results. Imaging showed consistency between the internal fixation and the simulated operation. The mean operation time was 110 minutes, mean intraoperative blood loss 320 ml, and mean incision length 6.5 cm. All patients achieved clinical healing, with a mean healing time of 8 weeks. Three-dimensional printing-assisted internal fixation of unstable pelvic fracture from a minimally invasive para-rectus abdominis approach is feasible and effective. This method has the advantages of minimal trauma, less bleeding, rapid healing, and satisfactory reduction, and is worth adopting in clinical practice.

  20. Minimally Invasive Tubular Resection of Lumbar Synovial Cysts: Report of 40 Consecutive Cases.

    PubMed

    Birch, Barry D; Aoun, Rami James N; Elbert, Gregg A; Patel, Naresh P; Krishna, Chandan; Lyons, Mark K

    2016-10-01

Lumbar synovial cysts are a relatively common clinical finding. Surgical treatment of symptomatic synovial cysts includes computed tomography-guided aspiration, open resection and minimally invasive tubular resection. We report our series of 40 consecutive minimally invasive microscopic tubular lumbar synovial cyst resections. Following Institutional Review Board approval, a retrospective analysis of 40 cases of minimally invasive microscopic tubular retractor synovial cyst resections at a single institution by a single surgeon (B.D.B.) was conducted. Gross total resection was performed in all cases. Patient characteristics, surgical operating time, complications, and outcomes were analyzed. Lumbar radiculopathy was the presenting symptom in all but 1 patient, who presented with neurogenic claudication. The mean duration of symptoms was 6.5 months (range, 1-25 months), mean operating time was 58 minutes (range, 25-110 minutes), and mean blood loss was 20 mL (range, 5-50 mL). Seven patients required overnight observation. The median length of stay in the remaining 33 patients was 4 hours. There were 2 cerebrospinal fluid leaks repaired directly without sequelae. The mean follow-up duration was 80.7 months. Outcomes were good or excellent in 37 of the 40 patients, fair in 1 patient, and poor in 2 patients. Minimally invasive microscopic tubular retractor resection of lumbar synovial cysts can be done safely, with outcomes and complication rates comparable to those of open procedures, and with potentially reduced operative time, length of stay, and healthcare costs. Patient selection for microscopic tubular synovial cyst resection is based in part on the anatomy of the spine and synovial cyst and is critical when recommending minimally invasive vs. open resection to patients. Copyright © 2016 Elsevier Inc. All rights reserved.

  1. Polymer Uncrossing and Knotting in Protein Folding, and Their Role in Minimal Folding Pathways

    PubMed Central

    Mohazab, Ali R.; Plotkin, Steven S.

    2013-01-01

We introduce a method for calculating the extent to which chain non-crossing is important in the most efficient, optimal trajectories or pathways for a protein to fold. This involves recording all unphysical crossing events of a ghost chain, and calculating the minimal uncrossing cost that would have been required to avoid such events. A depth-first tree search algorithm is applied to find minimal transformations to fold α, β, α/β, and knotted proteins. In all cases, the extra uncrossing/non-crossing distance is a small fraction of the total distance travelled by a ghost chain. Different structural classes may be distinguished by the amount of extra uncrossing distance, and the effectiveness of such discrimination is compared with other order parameters. It was seen that non-crossing distance over chain length provided the best discrimination between structural and kinetic classes. The scaling of non-crossing distance with chain length implies an inevitable crossover to entanglement-dominated folding mechanisms for sufficiently long chains. We further quantify the minimal folding pathways by collecting the sequence of uncrossing moves, which generally involve leg, loop, and elbow-like uncrossing moves, and rendering the collection of these moves over the unfolded ensemble as a multiple-transformation “alignment”. The consensus minimal pathway is constructed and shown schematically for representative cases of an α, a β, and a knotted protein. An overlap parameter is defined between pathways; we find that α proteins have minimal overlap, indicating diverse folding pathways; knotted proteins are highly constrained to follow a dominant pathway; and β proteins are somewhere in between. Thus we have shown how topological chain constraints can induce dominant pathway mechanisms in protein folding. PMID:23365638

  2. Modification of Schrödinger-Newton equation due to braneworld models with minimal length

    NASA Astrophysics Data System (ADS)

    Bhat, Anha; Dey, Sanjib; Faizal, Mir; Hou, Chenguang; Zhao, Qin

    2017-07-01

We study the correction of the energy spectrum of a gravitational quantum well due to the combined effect of the braneworld model with infinite extra dimensions and the generalized uncertainty principle. The correction terms arise from a natural deformation of a semiclassical theory of quantum gravity governed by the Schrödinger-Newton equation based on a minimal length framework. The twofold correction in the energy yields new values of the spectrum, which are closer to the values obtained in the GRANIT experiment. This raises the possibility that the combined theory of the semiclassical quantum gravity and the generalized uncertainty principle may provide an intermediate theory between the semiclassical and the full theory of quantum gravity. We also prepare a schematic experimental set-up which may guide the understanding of the phenomena in the laboratory.

  3. Effect of minimal length uncertainty on the mass-radius relation of white dwarfs

    NASA Astrophysics Data System (ADS)

    Mathew, Arun; Nandy, Malay K.

    2018-06-01

Generalized uncertainty relation that carries the imprint of quantum gravity introduces a minimal length scale into the description of space-time. It effectively changes the invariant measure of the phase space through a factor (1 + βp²)^(−3) so that the equation of state for an electron gas undergoes a significant modification from the ideal case. It has been shown in the literature (Rashidi 2016) that the ideal Chandrasekhar limit ceases to exist when the modified equation of state due to the generalized uncertainty is taken into account. To assess the situation in a more complete fashion, we analyze in detail the mass-radius relation of Newtonian white dwarfs whose hydrostatic equilibria are governed by the equation of state of the degenerate relativistic electron gas subjected to the generalized uncertainty principle. As the constraint of minimal length imposes a severe restriction on the availability of high momentum states, it is speculated that the central Fermi momentum cannot have values arbitrarily higher than p_max ∼ β^(−1/2). When this restriction is imposed, it is found that the system approaches limiting mass values higher than the Chandrasekhar mass upon decreasing the parameter β to a value given by a legitimate upper bound. Instead, when the more realistic restriction due to inverse β-decay is considered, it is found that the mass and radius approach the values 1.4518 M⊙ and 601.18 km near the legitimate upper bound for the parameter β.

  4. Evaluation of Brief Group-Administered Instruction for Parents to Prevent or Minimize Sleep Problems in Young Children with Down Syndrome

    ERIC Educational Resources Information Center

    Stores, Rebecca; Stores, Gregory

    2004-01-01

    Background: The study concerns the unknown value of group instruction for mothers of young children with Down syndrome (DS) in preventing or minimizing sleep problems. Method: (1) Children with DS were randomly allocated to an Instruction group (given basic information about children's sleep) and a Control group for later comparison including…

  5. Does Self-Help Increase Rates of Help Seeking for Student Mental Health Problems by Minimizing Stigma as a Barrier?

    ERIC Educational Resources Information Center

    Levin, Michael E.; Krafft, Jennifer; Levin, Crissa

    2018-01-01

    Objective: This study examined whether self-help (books, websites, mobile apps) increases help seeking for mental health problems among college students by minimizing stigma as a barrier. Participants and Methods: A survey was conducted with 200 college students reporting elevated distress from February to April 2017. Results: Intentions to use…

  6. Distributed query plan generation using multiobjective genetic algorithm.

    PubMed

    Panicker, Shina; Kumar, T V Vijay

    2014-01-01

A distributed query processing strategy, which is a key performance determinant in accessing distributed databases, aims to minimize the total query processing cost. One way to achieve this is by generating efficient distributed query plans that involve fewer sites for processing a query. In the case of distributed relational databases, the number of possible query plans increases exponentially with respect to the number of relations accessed by the query and the number of sites where these relations reside. Consequently, computing optimal distributed query plans becomes a complex problem. This distributed query plan generation (DQPG) problem has already been addressed using a single-objective genetic algorithm, where the objective is to minimize the total query processing cost comprising the local processing cost (LPC) and the site-to-site communication cost (CC). In this paper, this DQPG problem is formulated and solved as a biobjective optimization problem with the two objectives being minimize total LPC and minimize total CC. These objectives are simultaneously optimized using a multiobjective genetic algorithm NSGA-II. Experimental comparison of the proposed NSGA-II based DQPG algorithm with the single-objective genetic algorithm shows that the former performs comparatively better and converges quickly towards optimal solutions for an observed crossover and mutation probability.
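The biobjective formulation above can be made concrete with a sketch of the two cost functions and a Pareto (nondominance) filter, the selection criterion NSGA-II builds on. The cost model and all inputs here are hypothetical and far simpler than a real DQPG cost model:

```python
def plan_costs(plan, lpc, cc):
    """Evaluate a candidate distributed query plan.
    `plan` maps each relation to the site chosen to process it; `lpc[site]`
    is a per-relation local processing cost and `cc[(a, b)]` a symmetric
    site-to-site communication cost (all hypothetical inputs)."""
    sites = list(plan.values())
    total_lpc = sum(lpc[s] for s in sites)
    total_cc = sum(cc[tuple(sorted((a, b)))]
                   for i, a in enumerate(sites) for b in sites[i + 1:] if a != b)
    return total_lpc, total_cc

def dominates(u, v):
    # u Pareto-dominates v: no worse in both objectives, better in at least one.
    return u[0] <= v[0] and u[1] <= v[1] and u != v

def pareto_front(costed_plans):
    return [p for p in costed_plans
            if not any(dominates(q[1], p[1]) for q in costed_plans)]

# Tiny example: three plans over relations R and S on sites 1-3.
lpc = {1: 2.0, 2: 5.0, 3: 1.0}
cc = {(1, 2): 4.0, (1, 3): 1.0, (2, 3): 2.0}
plans = {"A": {"R": 1, "S": 3}, "B": {"R": 2, "S": 2}, "C": {"R": 1, "S": 2}}
costed = [(name, plan_costs(p, lpc, cc)) for name, p in plans.items()]
front = pareto_front(costed)   # plan C (7.0, 4.0) is dominated by plan A (3.0, 1.0)
```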

  7. Distributed Query Plan Generation Using Multiobjective Genetic Algorithm

    PubMed Central

    Panicker, Shina; Vijay Kumar, T. V.

    2014-01-01

A distributed query processing strategy, which is a key performance determinant in accessing distributed databases, aims to minimize the total query processing cost. One way to achieve this is by generating efficient distributed query plans that involve fewer sites for processing a query. In the case of distributed relational databases, the number of possible query plans increases exponentially with respect to the number of relations accessed by the query and the number of sites where these relations reside. Consequently, computing optimal distributed query plans becomes a complex problem. This distributed query plan generation (DQPG) problem has already been addressed using a single-objective genetic algorithm, where the objective is to minimize the total query processing cost comprising the local processing cost (LPC) and the site-to-site communication cost (CC). In this paper, this DQPG problem is formulated and solved as a biobjective optimization problem with the two objectives being minimize total LPC and minimize total CC. These objectives are simultaneously optimized using a multiobjective genetic algorithm NSGA-II. Experimental comparison of the proposed NSGA-II based DQPG algorithm with the single-objective genetic algorithm shows that the former performs comparatively better and converges quickly towards optimal solutions for an observed crossover and mutation probability. PMID:24963513

  8. A videoscope for use in minimally invasive periodontal surgery.

    PubMed

    Harrel, Stephen K; Wilson, Thomas G; Rivera-Hidalgo, Francisco

    2013-09-01

Minimally invasive periodontal procedures have been reported to produce excellent clinical results. Visualization during minimally invasive procedures has traditionally been obtained by the use of surgical telescopes, surgical microscopes, glass fibre endoscopes or a combination of these devices. All of these methods for visualization are less than fully satisfactory due to problems with access, magnification and blurred imaging. A videoscope for use with minimally invasive periodontal procedures has been developed to overcome some of the difficulties that exist with current visualization approaches. This videoscope incorporates a gas shielding technology that eliminates the problems of fogging and fouling of the optics that have previously prevented the successful application of endoscopic visualization to periodontal surgery. In addition, as part of the gas shielding technology the videoscope also includes a moveable retractor specifically adapted for minimally invasive surgery. The clinical use of the videoscope during minimally invasive periodontal surgery is demonstrated and discussed. The videoscope with gas shielding alleviates many of the difficulties associated with visualization during minimally invasive periodontal surgery. © 2013 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  9. Optimal mistuning for enhanced aeroelastic stability of transonic fans

    NASA Technical Reports Server (NTRS)

    Hall, K. C.; Crawley, E. F.

    1983-01-01

An inverse design procedure was developed for the design of a mistuned rotor. The design requirements are that the stability margin of the eigenvalues of the aeroelastic system be greater than or equal to some minimum stability margin, and that the mass added to each blade be positive. The objective was to achieve these requirements with a minimal amount of mistuning. Hence, the problem was posed as a constrained optimization problem. The constrained minimization problem was solved by the technique of mathematical programming via augmented Lagrangians. The unconstrained minimization phase of this technique was solved by the variable metric method. The bladed disk was modelled as being composed of a rigid disk mounted on a rigid shaft. Each blade was modelled with a single torsional degree of freedom.
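The constrained formulation above (a minimum stability margin plus nonnegative added mass, solved via augmented Lagrangians) can be illustrated on a toy scalar problem. This is a generic augmented-Lagrangian loop with a plain gradient inner solver standing in for the variable metric method; it is not the paper's actual rotor model, and all numbers are illustrative:

```python
def augmented_lagrangian_min(f_grad, g, g_grad, x0, rho=10.0, iters=30):
    """Toy augmented-Lagrangian loop for min f(x) s.t. g(x) <= 0 (scalar x).
    Inner minimization is plain gradient descent; the multiplier update
    lam <- max(0, lam + rho*g(x)) is the standard inequality-constraint rule."""
    x, lam = x0, 0.0
    for _ in range(iters):
        for _ in range(200):                       # inner (unconstrained) solve
            viol = max(0.0, lam / rho + g(x))      # active part of the penalty
            grad = f_grad(x) + rho * viol * g_grad(x)
            x -= 0.01 * grad
        lam = max(0.0, lam + rho * g(x))           # multiplier update
    return x

# Minimize x^2 subject to x >= 1 (i.e. g(x) = 1 - x <= 0); the optimum is x = 1.
x_star = augmented_lagrangian_min(
    f_grad=lambda x: 2 * x, g=lambda x: 1 - x, g_grad=lambda x: -1.0, x0=0.0)
```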

  10. Minimizing communication cost among distributed controllers in software defined networks

    NASA Astrophysics Data System (ADS)

    Arlimatti, Shivaleela; Elbreiki, Walid; Hassan, Suhaidi; Habbal, Adib; Elshaikh, Mohamed

    2016-08-01

Software Defined Networking (SDN) is a new paradigm that increases the flexibility of today's networks by promising a programmable network. The fundamental idea behind this new architecture is to simplify network complexity by decoupling the control plane and the data plane of the network devices, and by making the control plane centralized. Recently, controllers have been distributed to solve the problem of a single point of failure, and to increase scalability and flexibility during workload distribution. Even though distributed controllers are flexible and scalable enough to accommodate a larger number of network switches, the intercommunication cost between distributed controllers is still a challenging issue in the Software Defined Network environment. This paper aims to fill that gap by proposing a new mechanism that minimizes intercommunication cost using a graph partitioning algorithm (an NP-hard problem). The methodology proposed in this paper is the swapping of network elements between controller domains to minimize communication cost by calculating the communication gain. The swapping of elements minimizes inter- and intra-domain communication costs among network domains. We validate our work with the OMNeT++ simulation environment. Simulation results show that the proposed mechanism minimizes the inter-domain communication cost among controllers compared to traditional distributed controllers.
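The communication-gain calculation underlying such a swapping mechanism can be sketched as a Kernighan-Lin-style move gain; the traffic matrix and domain assignment below are hypothetical inputs, not the paper's exact formulation:

```python
def move_gain(switch, src, dst, traffic, assign):
    """Communication gain from moving `switch` from domain `src` to `dst`:
    inter-domain traffic eliminated minus inter-domain traffic created.
    `traffic[(a, b)]` is symmetric flow between switches; `assign[s]` is the
    controller domain of switch s (all names illustrative)."""
    gain = 0.0
    for (a, b), flow in traffic.items():
        if switch not in (a, b):
            continue
        other = b if a == switch else a
        if assign[other] == dst:
            gain += flow     # this flow becomes intra-domain
        elif assign[other] == src:
            gain -= flow     # this flow becomes inter-domain
    return gain

# Moving s1 from c1 to c2 removes 8 units of inter-domain traffic (to s3)
# but creates 5 (to s2), for a net gain of 3 -- a worthwhile swap.
assign = {"s1": "c1", "s2": "c1", "s3": "c2"}
traffic = {("s1", "s2"): 5.0, ("s1", "s3"): 8.0}
gain = move_gain("s1", "c1", "c2", traffic, assign)
```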

  11. Time and length scales within a fire and implications for numerical simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    TIESZEN,SHELDON R.

    2000-02-02

A partial non-dimensionalization of the Navier-Stokes equations is used to obtain order-of-magnitude estimates of the rate-controlling transport processes in the reacting portion of a fire plume as a function of length scale. Over continuum length scales, buoyant time scales vary as the square root of the length scale; advection time scales vary as the length scale; and diffusion time scales vary as the square of the length scale. Due to the variation with length scale, each process is dominant over a given range. The relationship of buoyancy and baroclinic vorticity generation is highlighted. For numerical simulation, first-principles solution of fire problems is not possible with foreseeable computational hardware in the near future. Filtered transport equations with subgrid modeling will be required, as two to three decades of length scale are captured by solution of discretized conservation equations. By whatever filtering process one employs, one must have humble expectations for the accuracy obtainable by numerical simulation for practical fire problems that contain important multi-physics/multi-length-scale coupling with up to 10 orders of magnitude in length scale.
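The scalings quoted above can be turned into a small order-of-magnitude calculator. The reference values of g, u, and D are illustrative placeholders, chosen only to show how the three time scales change at different rates as the length scale shrinks:

```python
import math

def timescales(L, g=9.81, u=1.0, D=1e-5):
    """Order-of-magnitude process time scales at continuum length scale L (m),
    per the scalings in the abstract: buoyancy ~ sqrt(L/g), advection ~ L/u,
    diffusion ~ L^2/D. g (m/s^2), u (m/s), and D (m^2/s) are illustrative."""
    return {
        "buoyant": math.sqrt(L / g),
        "advective": L / u,
        "diffusive": L ** 2 / D,
    }

# Halving L cuts the diffusion time by 4x, the advection time by 2x,
# and the buoyant time by only sqrt(2) -- hence different dominant ranges.
for L in (1.0, 0.5):
    print(L, timescales(L))
```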

  12. Adoption of waste minimization technology to benefit electroplaters

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ching, E.M.K.; Li, C.P.H.; Yu, C.M.K.

    Because of increasingly stringent environmental legislation and enhanced environmental awareness, electroplaters in Hong Kong are paying more heed to protecting the environment. To comply with the array of environmental controls, electroplaters can no longer rely solely on the end-of-pipe approach as a means of abating their pollution problems under the particular local industrial environment. The preferred approach is to adopt waste minimization measures that yield both economic and environmental benefits. This paper gives an overview of electroplating activities in Hong Kong, highlights their characteristics, and describes the pollution problems associated with conventional electroplating operations. The constraints of using pollution control measures to achieve regulatory compliance are also discussed. Examples and case studies are given on some low-cost waste minimization techniques readily available to electroplaters, including dragout minimization and water conservation techniques. Recommendations are given as to how electroplaters can adopt and exercise waste minimization techniques in their operations.

  13. A parallel process growth mixture model of conduct problems and substance use with risky sexual behavior.

    PubMed

    Wu, Johnny; Witkiewitz, Katie; McMahon, Robert J; Dodge, Kenneth A

    2010-10-01

    Conduct problems, substance use, and risky sexual behavior have been shown to coexist among adolescents, which may lead to significant health problems. The current study was designed to examine relations among these problem behaviors in a community sample of children at high risk for conduct disorder. A latent growth model of childhood conduct problems showed a decreasing trend from grades K to 5. During adolescence, four concurrent conduct problem and substance use trajectory classes were identified (high conduct problems and high substance use, increasing conduct problems and increasing substance use, minimal conduct problems and increasing substance use, and minimal conduct problems and minimal substance use) using a parallel process growth mixture model. Across all substances (tobacco, binge drinking, and marijuana use), higher levels of childhood conduct problems during kindergarten predicted a greater probability of classification into more problematic adolescent trajectory classes relative to less problematic classes. For tobacco and binge drinking models, increases in childhood conduct problems over time also predicted a greater probability of classification into more problematic classes. For all models, individuals classified into more problematic classes showed higher proportions of early sexual intercourse, infrequent condom use, receiving money for sexual services, and ever contracting an STD. Specifically, tobacco use and binge drinking during early adolescence predicted higher levels of sexual risk taking into late adolescence. Results highlight the importance of studying the conjoint relations among conduct problems, substance use, and risky sexual behavior in a unified model. Copyright © 2010 Elsevier Ireland Ltd. All rights reserved.

  14. NP-hardness of the cluster minimization problem revisited

    NASA Astrophysics Data System (ADS)

    Adib, Artur B.

    2005-10-01

    The computational complexity of the 'cluster minimization problem' is revisited (Wille and Vennik 1985 J. Phys. A: Math. Gen. 18 L419). It is argued that the original NP-hardness proof does not apply to pairwise potentials of physical interest, such as those that depend on the geometric distance between the particles. A geometric analogue of the original problem is formulated, and a new proof for such potentials is provided by polynomial time transformation from the independent set problem for unit disk graphs. Limitations of this formulation are pointed out, and new subproblems that bear more direct consequences to the numerical study of clusters are suggested.
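As a concrete instance of the objective involved, the total energy of a cluster under a pairwise potential that depends only on geometric distance can be sketched as follows; Lennard-Jones is used purely as a stand-in for the "potentials of physical interest" the paper discusses.

```python
import itertools, math

# Energy of an N-particle cluster under a distance-dependent pairwise
# potential; Lennard-Jones is an illustrative stand-in.
def lj(r, epsilon=1.0, sigma=1.0):
    return 4 * epsilon * ((sigma / r) ** 12 - (sigma / r) ** 6)

def cluster_energy(points):
    return sum(
        lj(math.dist(p, q)) for p, q in itertools.combinations(points, 2)
    )

# Two particles at the LJ minimum separation 2**(1/6) have energy -epsilon.
dimer = [(0.0, 0.0, 0.0), (2 ** (1 / 6), 0.0, 0.0)]
energy = cluster_energy(dimer)
```

The cluster minimization problem asks for the particle positions minimizing this sum, which is what makes its complexity status for geometric potentials the interesting question.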

  15. Nonconvex Nonsmooth Low Rank Minimization via Iteratively Reweighted Nuclear Norm.

    PubMed

    Lu, Canyi; Tang, Jinhui; Yan, Shuicheng; Lin, Zhouchen

    2016-02-01

    The nuclear norm is widely used as a convex surrogate of the rank function in compressive sensing for low-rank matrix recovery, with applications in image recovery and signal processing. However, solving the relaxed convex problem based on the nuclear norm usually leads to a suboptimal solution of the original rank minimization problem. In this paper, we propose to use a family of nonconvex surrogates of the L0-norm on the singular values of a matrix to approximate the rank function. This leads to a nonconvex nonsmooth minimization problem. Then, we propose to solve the problem by an iteratively reweighted nuclear norm (IRNN) algorithm. IRNN iteratively solves a weighted singular value thresholding problem, which has a closed-form solution due to the special properties of the nonconvex surrogate functions. We also extend IRNN to solve the nonconvex problem with two or more blocks of variables. In theory, we prove that IRNN decreases the objective function value monotonically, and any limit point is a stationary point. Extensive experiments on both synthesized data and real images demonstrate that IRNN enhances low-rank matrix recovery compared with state-of-the-art convex algorithms.
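The weighted singular value thresholding step that IRNN solves in closed form can be sketched as below; the weights here are ad hoc illustrative values, whereas the paper derives them from the gradient of the nonconvex surrogate, typically putting larger weights on smaller singular values.

```python
import numpy as np

def weighted_svt(Y, weights, tau=1.0):
    """Shrink singular value sigma_i by tau * weights[i], flooring at zero."""
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    s_shrunk = np.maximum(s - tau * np.asarray(weights), 0.0)
    return U @ np.diag(s_shrunk) @ Vt

Y = np.diag([5.0, 3.0, 0.5])
# Larger weights on smaller singular values mimic the nonconvex surrogate:
# small (likely noise) singular values are suppressed harder than large ones.
X = weighted_svt(Y, weights=[0.1, 0.5, 2.0])
```

Here the smallest singular value is thresholded to zero while the dominant ones are barely shrunk, which is the qualitative behavior that lets nonconvex surrogates outperform the uniform shrinkage of the plain nuclear norm.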

  16. Trace Norm Regularized CANDECOMP/PARAFAC Decomposition With Missing Data.

    PubMed

    Liu, Yuanyuan; Shang, Fanhua; Jiao, Licheng; Cheng, James; Cheng, Hong

    2015-11-01

    In recent years, low-rank tensor completion (LRTC) problems have received a significant amount of attention in computer vision, data mining, and signal processing. The existing trace norm minimization algorithms for iteratively solving LRTC problems involve multiple singular value decompositions of very large matrices at each iteration. Therefore, they suffer from high computational cost. In this paper, we propose a novel trace norm regularized CANDECOMP/PARAFAC decomposition (TNCP) method for simultaneous tensor decomposition and completion. We first formulate a factor matrix rank minimization model by deducing the relation between the rank of each factor matrix and the mode-n rank of a tensor. Then, we introduce a tractable relaxation of our rank function, and then achieve a convex combination problem of much smaller-scale matrix trace norm minimization. Finally, we develop an efficient algorithm based on alternating direction method of multipliers to solve our problem. The promising experimental results on synthetic and real-world data validate the effectiveness of our TNCP method. Moreover, TNCP is significantly faster than the state-of-the-art methods and scales to larger problems.

  17. Knee point search using cascading top-k sorting with minimized time complexity.

    PubMed

    Wang, Zheng; Tseng, Shian-Shyong

    2013-01-01

    Anomaly detection systems and many other applications are frequently confronted with the problem of finding the largest knee point in the sorted curve of a set of unsorted points. This paper proposes an efficient knee point search algorithm with minimized time complexity using cascading top-k sorting when an a priori probability distribution of the knee point is known. First, a top-k sort algorithm is proposed based on a quicksort variation. We divide the knee point search problem into multiple steps, and in each step we solve an optimization problem for the selection number k, where the objective function is defined as the expected time cost. Because the expected time cost of one step depends on that of the subsequent steps, we simplify the optimization problem by minimizing the maximum expected time cost. The posterior probability of the largest knee point distribution and the other parameters are updated before solving the optimization problem in each step. An example of source detection of DNS DoS flooding attacks is provided to illustrate the applications of the proposed algorithm.
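A top-k quicksort variation of the kind the paper builds on can be sketched as follows. This is a hypothetical reconstruction: it recurses only into partitions that can still contain the k largest items, while the paper's algorithm additionally chooses k per step from the prior distribution.

```python
# Sketch of a top-k quicksort variation: return the k largest elements in
# descending order, never fully sorting partitions that cannot matter.
def top_k_sorted(arr, k):
    if k <= 0:
        return []
    if len(arr) <= 1:
        return list(arr)
    pivot = arr[len(arr) // 2]
    larger  = [x for x in arr if x > pivot]
    equal   = [x for x in arr if x == pivot]
    smaller = [x for x in arr if x < pivot]
    if k <= len(larger):
        return top_k_sorted(larger, k)      # the k largest all lie in 'larger'
    head = top_k_sorted(larger, len(larger)) + equal
    return (head + top_k_sorted(smaller, k - len(head)))[:k]

top3 = top_k_sorted([3, 1, 4, 1, 5, 9, 2, 6], 3)  # [9, 6, 5]
```

Cascading such calls with progressively larger k lets the search stop as soon as the knee point is found among the already-sorted prefix.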

  18. Estimates of the absolute error and a scheme for an approximate solution to scheduling problems

    NASA Astrophysics Data System (ADS)

    Lazarev, A. A.

    2009-02-01

    An approach is proposed for estimating absolute errors and finding approximate solutions to classical NP-hard scheduling problems of minimizing the maximum lateness, on one or many machines, and minimizing the makespan. The concept of a metric (distance) between instances of the problem is introduced. The idea behind the approach is, given a problem instance, to construct another instance, at the minimum distance from the initial one in the introduced metric, for which an optimal or approximate solution can be found. Instead of solving the original problem (instance), a set of approximating polynomially/pseudopolynomially solvable problems (instances) is considered, an instance at the minimum distance from the given one is chosen, and the resulting schedule is then applied to the original instance.

  19. Sonic-boom minimization.

    NASA Technical Reports Server (NTRS)

    Seebass, R.; George, A. R.

    1972-01-01

    There have been many attempts to reduce or eliminate the sonic boom. Such attempts fall into two categories: (1) aerodynamic minimization and (2) exotic configurations. In the first category, changes in the entropy and the Bernoulli constant are neglected, and equivalent body shapes required to minimize the overpressure, the shock pressure rise, and the impulse are deduced. These results include the beneficial effects of atmospheric stratification. In the second category, the effective length of the aircraft is increased or its base area decreased by modifying the Bernoulli constant of a significant fraction of the flow past the aircraft. A figure of merit is introduced which makes it possible to judge the effectiveness of the latter schemes.

  20. Minimal scales from an extended Hilbert space

    NASA Astrophysics Data System (ADS)

    Kober, Martin; Nicolini, Piero

    2010-12-01

    We consider an extension of the conventional quantum Heisenberg algebra, assuming that coordinates as well as momenta fulfil nontrivial commutation relations. As a consequence, a minimal length and a minimal mass scale are implemented. Our commutators do not depend on positions and momenta and we provide an extension of the coordinate coherent state approach to noncommutative geometry. We explore, as a toy model, the corresponding quantum field theory in a (2+1)-dimensional spacetime. Then we investigate the more realistic case of a (3+1)-dimensional spacetime, foliated into noncommutative planes. As a result, we obtain propagators, which are finite in the ultraviolet as well as the infrared regime.

  1. Cortical Composition Hierarchy Driven by Spine Proportion Economical Maximization or Wire Volume Minimization

    PubMed Central

    Karbowski, Jan

    2015-01-01

    The structure and quantitative composition of the cerebral cortex are interrelated with its computational capacity. Empirical data analyzed here indicate a certain hierarchy in local cortical composition. Specifically, neural wire, i.e., axons and dendrites, take each about 1/3 of cortical space, spines and glia/astrocytes occupy each about (1/3)^2, and capillaries around (1/3)^4. Moreover, data analysis across species reveals that these fractions are roughly brain size independent, which suggests that they could be in some sense optimal and thus important for brain function. Is there any principle that sets them in this invariant way? This study first builds a model of local circuit in which neural wire, spines, astrocytes, and capillaries are mutually coupled elements and are treated within a single mathematical framework. Next, various forms of wire minimization rule (wire length, surface area, volume, or conduction delays) are analyzed, of which, only minimization of wire volume provides realistic results that are very close to the empirical cortical fractions. As an alternative, a new principle called "spine economy maximization" is proposed and investigated, which is associated with maximization of spine proportion in the cortex per spine size that yields equally good but more robust results. Additionally, a combination of wire cost and spine economy notions is considered as a meta-principle, and it is found that this proposition gives only marginally better results than either pure wire volume minimization or pure spine economy maximization, but only if spine economy component dominates. However, such a combined meta-principle yields much better results than the constraints related solely to minimization of wire length, wire surface area, and conduction delays. Interestingly, the type of spine size distribution also plays a role, and better agreement with the data is achieved for distributions with long tails.
In sum, these results suggest that for the efficiency of local circuits wire volume may be a more primary variable than wire length or temporal delays, and moreover, the new spine economy principle may be important for brain evolutionary design in a broader context. PMID:26436731

  2. Treatment of acute and closed Achilles tendon ruptures by minimally invasive tenocutaneous suturing.

    PubMed

    Ding, Wenge; Yan, Weihong; Zhu, Yaping; Liu, Zhiwei

    2012-09-01

    Achilles tendon rupture is a common injury, and its complications can impair function. Numerous operations have been described for reconstructing the ruptured tendon, but these methods can compromise microcirculation in the tendon and can seriously impair its healing. Suturing with a minimally invasive tenocutaneous technique soon after the rupture, combined with systematic functional exercise, can greatly reduce the possibility of complications. Between June 1996 and February 2009, we treated 88 patients (54 males; age range, 21-66 years) with this method. After follow-up ranging from 1-7 years, the mean American Orthopedic Foot and Ankle Society ankle-hind foot score was 95 (range, 90-98), and the maximum length of postoperative scarring was 3 cm. One patient re-ruptured his Achilles tendon one year after surgery in an accident, but after 10 months, the repaired tendon was still intact. In another patient, the nervus suralis was damaged during surgery by piercing the tension suture at the near end, causing postoperative numbness and swelling. The tension suture was quickly removed, and the patient recovered well with conservative treatment. No large irregular scars, such as those sustained during immobilization, were present over the Achilles tendon. Minimally invasive percutaneous suturing can restore the original length and continuity of the Achilles tendon and has fewer postoperative complications than other methods.

  3. Task-specific modulation of adult humans' tool preferences: number of choices and size of the problem.

    PubMed

    Silva, Kathleen M; Gross, Thomas J; Silva, Francisco J

    2015-03-01

    In two experiments, we examined the effect of modifications to the features of a stick-and-tube problem on the stick lengths that adult humans used to solve the problem. In Experiment 1, we examined whether people's tool preferences for retrieving an out-of-reach object in a tube might more closely resemble those reported with laboratory crows if people could modify a single stick to an ideal length to solve the problem. Contrary to when adult humans have selected a tool from a set of ten sticks, asking people to modify a single stick to retrieve an object did not generally result in a stick whose length was related to the object's distance. Consistent with the prior research, though, the working length of the stick was related to the object's distance. In Experiment 2, we examined the effect of increasing the scale of the stick-and-tube problem on people's tool preferences. Increasing the scale of the task influenced people to select relatively shorter tools than had been selected in previous studies. Although the causal structures of the tasks used in the two experiments were identical, their results were not. This underscores the necessity of studying physical cognition in relation to a particular causal structure by using a variety of tasks and methods.

  4. Geometric versus numerical optimal control of a dissipative spin-(1/2) particle

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lapert, M.; Sugny, D.; Zhang, Y.

    2010-12-15

    We analyze the saturation of a nuclear magnetic resonance (NMR) signal using optimal magnetic fields. We consider both the problems of minimizing the duration of the control and its energy for a fixed duration. We solve the optimal control problems by using geometric methods and a purely numerical approach, the GRAPE algorithm, the two methods being based on the application of the Pontryagin maximum principle. A very good agreement is obtained between the two results. The optimal solutions for the energy-minimization problem are finally implemented experimentally with available NMR techniques.

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Derrida, B.; Spohn, H.

    We show that the problem of a directed polymer on a tree with disorder can be reduced to the study of nonlinear equations of reaction-diffusion type. These equations admit traveling wave solutions that move at all possible speeds above a certain minimal speed. The speed of the wavefront is the free energy of the polymer problem and the minimal speed corresponds to a phase transition to a glassy phase similar to the spin-glass phase. Several properties of the polymer problem can be extracted from the correspondence with the traveling wave: probability distribution of the free energy, overlaps, etc.

  6. Model and algorithm for container ship stowage planning based on bin-packing problem

    NASA Astrophysics Data System (ADS)

    Zhang, Wei-Ying; Lin, Yan; Ji, Zhuo-Shang

    2005-09-01

    In the general case, a container ship serves many different ports on each voyage, so a stowage plan made at one port must account for its influence on subsequent ports. The multi-port nature of stowage planning therefore increases its complexity, and the problem is NP-hard. To reduce the computational complexity, the problem is decomposed into two subproblems in this paper. First, the container ship stowage problem (CSSP) is treated as a packing problem: ship bays on board the vessel are regarded as bins, the number of slots in each bay as the bin capacities, and groups of containers with identical characteristics (homogeneous container groups) as the items to be packed. At this stage there are two objective functions: minimizing the number of bays occupied by containers and minimizing the number of overstows. Secondly, the containers assigned to each bay in the first stage are allocated to specific slots, with the objectives of minimizing the metacentric height, heel, and overstows. A tabu search heuristic is used to solve the subproblems. The main focus of this paper is the first subproblem. A case study confirms the feasibility of the model and algorithm.
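The first-stage packing can be illustrated with a simple first-fit-decreasing heuristic over made-up bay capacities and group sizes (the paper's tabu search would refine such an initial assignment, and it also scores overstows, which this sketch omits):

```python
# First-fit-decreasing sketch of the first subproblem: pack homogeneous
# container groups (items) into ship bays (bins) using as few bays as
# possible. Capacities and group sizes are illustrative.
def pack_groups(group_sizes, bay_capacity, n_bays):
    bays = [bay_capacity] * n_bays          # remaining slots per bay
    assignment = {}
    for gid, size in sorted(enumerate(group_sizes), key=lambda g: -g[1]):
        for b, free in enumerate(bays):
            if free >= size:                # first bay with enough slots
                bays[b] -= size
                assignment[gid] = b
                break
        else:
            raise ValueError("group %d does not fit in any bay" % gid)
    used = sum(1 for free in bays if free < bay_capacity)
    return assignment, used

assignment, used = pack_groups([6, 3, 4, 2, 5], bay_capacity=10, n_bays=5)
```

With these numbers all five groups fit into two bays, leaving the remaining bays empty for the slot-level second stage.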

  7. System identification using Nuclear Norm & Tabu Search optimization

    NASA Astrophysics Data System (ADS)

    Ahmed, Asif A.; Schoen, Marco P.; Bosworth, Ken W.

    2018-01-01

    In recent years, subspace System Identification (SI) algorithms have seen increased research, stemming from advanced minimization methods being applied to the Nuclear Norm (NN) approach in system identification. These minimization algorithms are based on hard computing methodologies. To the authors' knowledge, as of now, there has been no work reported that utilizes soft computing algorithms to address the minimization problem within the nuclear norm SI framework. A linear, time-invariant, discrete-time system is used in this work as the basic model for characterizing a dynamical system to be identified. The main objective is to extract a mathematical model from collected experimental input-output data. Hankel matrices are constructed from experimental data, and the extended observability matrix is employed to define an estimated output of the system. This estimated output and the actual (measured) output are utilized to construct a minimization problem. An embedded rank measure assures minimum state realization outcomes. Current NN-SI algorithms employ hard computing algorithms for minimization. In this work, we propose a simple Tabu Search (TS) algorithm for minimization. The TS-based SI is compared with NN-SI based on the iterative Alternating Direction Method of Multipliers (ADMM) with line search optimization. For comparison, several different benchmark system identification problems are solved by both approaches. Results show improved performance of the proposed SI-TS algorithm compared to the NN-SI ADMM algorithm.
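The Tabu Search ingredient can be sketched generically as below; the objective and move set here are toy stand-ins, not the paper's nuclear-norm identification objective.

```python
# Minimal Tabu Search sketch: always move to the best non-tabu neighbor
# (even if worse), keep a fixed-tenure list of recently visited solutions,
# and remember the best solution seen.
def tabu_search(f, x0, neighbors, iters=200, tenure=5):
    x = best = x0
    tabu = []
    for _ in range(iters):
        cands = [n for n in neighbors(x) if n not in tabu]
        if not cands:
            break
        x = min(cands, key=f)      # best admissible move, possibly uphill
        tabu.append(x)
        if len(tabu) > tenure:
            tabu.pop(0)            # fixed-tenure short-term memory
        if f(x) < f(best):
            best = x
    return best

# Toy 1-D objective; the search walks from 0 to the minimizer at 7.
best = tabu_search(lambda x: (x - 7) ** 2, 0, lambda x: [x - 1, x + 1])
```

The tabu list is what distinguishes this from plain hill climbing: forbidding recent solutions forces the search past points it would otherwise oscillate around.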

  8. Performance improvement: one model to reduce length of stay.

    PubMed

    Chisari, E; Mele, J A

    1994-01-01

    Dedicated quality professionals are tired of quick fixes, Band-Aids, and other first-aid strategies that offer only temporary relief of nagging problems rather than a long-term cure. Implementing strategies that can produce permanent solutions to crucial problems is a challenge confronted by organizations striving for continuous performance improvement. One vehicle, driven by data and customer requirements, that can help to solve problems and sustain success over time is the storyboard. This article illustrates the use of the storyboard as the framework for reducing length of stay--one of the most important problems facing healthcare organizations today.

  9. Accurate distance determination of nucleic acids via Förster resonance energy transfer: implications of dye linker length and rigidity.

    PubMed

    Sindbert, Simon; Kalinin, Stanislav; Nguyen, Hien; Kienzler, Andrea; Clima, Lilia; Bannwarth, Willi; Appel, Bettina; Müller, Sabine; Seidel, Claus A M

    2011-03-02

    In Förster resonance energy transfer (FRET) experiments, the donor (D) and acceptor (A) fluorophores are usually attached to the macromolecule of interest via long flexible linkers of up to 15 Å in length. This causes significant uncertainties in quantitative distance measurements and prevents experiments with short distances between the attachment points of the dyes due to possible dye-dye interactions. We present two approaches to overcome the above problems as demonstrated by FRET measurements for a series of dsDNA and dsRNA internally labeled with Alexa488 and Cy5 as D and A dye, respectively. First, we characterize the influence of linker length and flexibility on FRET for different dye linker types (long, intermediate, short) by analyzing fluorescence lifetime and anisotropy decays. For long linkers, we describe a straightforward procedure that allows for very high accuracy of FRET-based structure determination through proper consideration of the position distribution of the dye and of linker dynamics. The position distribution can be quickly calculated with geometric accessible volume (AV) simulations, provided that the local structure of RNA or DNA in the proximity of the dye is known and that the dye diffuses freely in the sterically allowed space. The AV approach provides results similar to molecular dynamics simulations (MD) and is fully consistent with experimental FRET data. In a benchmark study for ds A-RNA, an rmsd value of 1.3 Å is achieved. Considering the case of undefined dye environments or very short DA distances, we introduce short linkers with a propargyl or alkenyl unit for internal labeling of nucleic acids to minimize position uncertainties. Studies by ensemble time correlated single photon counting and single-molecule detection show that the nature of the linker strongly affects the radius of the dye's accessible volume (6-16 Å). For short propargyl linkers, heterogeneous dye environments are observed on the millisecond time scale. 
A detailed analysis of possible orientation effects (the κ² problem) indicates that, for short linkers and unknown local environments, additional κ²-related uncertainties are clearly outweighed by better defined dye positions.

  10. Selection of optimal complexity for ENSO-EMR model by minimum description length principle

    NASA Astrophysics Data System (ADS)

    Loskutov, E. M.; Mukhin, D.; Mukhina, A.; Gavrilov, A.; Kondrashov, D. A.; Feigin, A. M.

    2012-12-01

    One of the main problems arising in modeling data taken from a natural system is finding a phase space suitable for construction of an evolution operator model. Since we usually deal with strongly high-dimensional behavior, we are forced to construct a model working in some projection of the system phase space corresponding to the time scales of interest. Selection of an optimal projection is a non-trivial problem, since there are many ways to reconstruct phase variables from a given time series, especially in the case of a spatio-temporal data field. Actually, finding an optimal projection is a significant part of model selection because, on the one hand, the transformation of data to some phase variable vector can be considered a required component of the model. On the other hand, such an optimization of the phase space makes sense only in relation to the parametrization of the model we use, i.e., the representation of the evolution operator, so we should find an optimal structure of the model together with the phase variable vector. In this paper we propose to use the principle of minimum description length (Molkov et al., 2009) for selecting models of optimal complexity. The proposed method is applied to optimization of the Empirical Model Reduction (EMR) of the ENSO phenomenon (Kravtsov et al., 2005; Kondrashov et al., 2005). This model operates within a subset of leading EOFs constructed from the spatio-temporal field of SST in the Equatorial Pacific and has the form of multi-level stochastic differential equations (SDEs) with polynomial parameterization of the right-hand side. Optimal values for the number of EOFs, the order of the polynomial, and the number of levels are estimated from the Equatorial Pacific SST dataset. References: Ya. Molkov, D. Mukhin, E. Loskutov, G. Fidelin and A. Feigin, Using the minimum description length principle for global reconstruction of dynamic systems from noisy time series, Phys. Rev. E 80, 046207 (2009). S. Kravtsov, D. Kondrashov, M. Ghil, 2005: Multilevel regression modeling of nonlinear processes: Derivation and applications to climatic variability. J. Climate, 18(21), 4404-4424. D. Kondrashov, S. Kravtsov, A. W. Robertson and M. Ghil, 2005: A hierarchy of data-based ENSO models. J. Climate, 18, 4425-4444.

  11. Principal Eigenvalue Minimization for an Elliptic Problem with Indefinite Weight and Robin Boundary Conditions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hintermueller, M., E-mail: hint@math.hu-berlin.de; Kao, C.-Y., E-mail: Ckao@claremontmckenna.edu; Laurain, A., E-mail: laurain@math.hu-berlin.de

    2012-02-15

    This paper focuses on the study of a linear eigenvalue problem with indefinite weight and Robin type boundary conditions. We investigate the minimization of the positive principal eigenvalue under the constraint that the absolute value of the weight is bounded and the total weight is a fixed negative constant. Biologically, this minimization problem is motivated by the question of determining the optimal spatial arrangement of favorable and unfavorable regions for a species to survive. For rectangular domains with Neumann boundary condition, it is known that there exists a threshold value such that if the total weight is below this threshold value then the optimal favorable region is like a section of a disk at one of the four corners; otherwise, the optimal favorable region is a strip attached to the shorter side of the rectangle. Here, we investigate the same problem with mixed Robin-Neumann type boundary conditions and study how this boundary condition affects the optimal spatial arrangement.

  12. Minimizing the Sum of Completion Times with Resource-Dependent Times

    NASA Astrophysics Data System (ADS)

    Yedidsion, Liron; Shabtay, Dvir; Kaspi, Moshe

    2008-10-01

    We extend the classical problem of minimizing the sum of completion times to the case where the processing times are controllable by allocating a nonrenewable resource. The quality of a solution is measured by two different criteria. The first criterion is the sum of completion times, and the second is the total weighted resource consumption. We consider four different problem variations for treating the two criteria. We prove that this problem is NP-hard for three of the four variations, even if all resource consumption weights are equal. However, somewhat surprisingly, the variation of minimizing the integrated objective function is solvable in polynomial time. Although the sum of completion times is arguably the most important scheduling criterion, the complexity of this problem was, up to this paper, an open question for three of the four variations. The results of this research have various applications, including efficient battery usage on mobile devices such as mobile computers, phones, and GPS devices in order to prolong their battery duration.

  13. Low-dose CT reconstruction via L1 dictionary learning regularization using iteratively reweighted least-squares.

    PubMed

    Zhang, Cheng; Zhang, Tao; Li, Ming; Peng, Chengtao; Liu, Zhaobang; Zheng, Jian

    2016-06-18

    In order to reduce the radiation dose of CT (computed tomography), compressed sensing theory has been a hot topic, since it offers the possibility of high-quality recovery from sparsely sampled data. Recently, an algorithm based on DL (dictionary learning) was developed to deal with the sparse CT reconstruction problem. However, the existing DL algorithm focuses on the minimization problem with an L2-norm regularization term, which leads to deteriorating reconstruction quality as the sampling rate declines further. Therefore, it is essential to improve the DL method to meet the demand for further dose reduction. In this paper, we replace the L2-norm regularization term with an L1-norm one. It is expected that the proposed L1-DL method can alleviate the over-smoothing effect of the L2-minimization and preserve more image detail. The proposed algorithm solves the L1-minimization problem by a weighting strategy, solving the new weighted L2-minimization problem based on IRLS (iteratively reweighted least squares). Through numerical simulation, the proposed algorithm is compared with the existing DL method (adaptive dictionary based statistical iterative reconstruction, ADSIR) and two other typical compressed sensing algorithms. It is revealed that the proposed algorithm is more accurate than the other algorithms, especially when further reducing the sampling rate or increasing the noise. The proposed L1-DL algorithm can utilize more prior information about image sparsity than ADSIR. By transforming the L2-norm regularization term of ADSIR into an L1-norm one and solving the L1-minimization problem by the IRLS strategy, L1-DL can reconstruct the image more exactly.
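The core IRLS trick, turning an L1 problem into a sequence of weighted L2 problems, can be illustrated on a toy L1 regression rather than the CT model itself (the problem, matrix, and data below are assumptions for illustration):

```python
import numpy as np

# IRLS sketch: minimize ||A x - b||_1 by repeatedly solving a weighted
# least-squares problem with weights 1 / max(|residual|, eps), so large
# residuals (outliers) are progressively down-weighted.
def irls_l1(A, b, iters=50, eps=1e-8):
    x = np.linalg.lstsq(A, b, rcond=None)[0]   # plain L2 fit as the start
    for _ in range(iters):
        r = A @ x - b
        w = 1.0 / np.maximum(np.abs(r), eps)
        W = np.diag(w)
        x = np.linalg.solve(A.T @ W @ A, A.T @ W @ b)
    return x

# Fit intercept + slope to y = t with one gross outlier at t = 5.
A = np.vstack([np.ones(6), np.arange(6.0)]).T
b = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 50.0])
x = irls_l1(A, b)   # the L1 fit ignores the outlier, approaching [0, 1]
```

The same reweighting idea is what lets an L1 regularization term be handled with the cheap linear algebra of L2 problems, which is the appeal of the IRLS strategy in the reconstruction setting.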

  14. Optimal trajectories for an aerospace plane. Part 2: Data, tables, and graphs

    NASA Technical Reports Server (NTRS)

    Miele, Angelo; Lee, W. Y.; Wu, G. D.

    1990-01-01

    Data, tables, and graphs relative to the optimal trajectories for an aerospace plane are presented. A single-stage-to-orbit (SSTO) configuration is considered, and the transition from low supersonic speeds to orbital speeds is studied for a single aerodynamic model (GHAME) and three engine models. Four optimization problems are solved using the sequential gradient-restoration algorithm for optimal control problems: (1) minimization of the weight of fuel consumed; (2) minimization of the peak dynamic pressure; (3) minimization of the peak heating rate; and (4) minimization of the peak tangential acceleration. The above optimization studies are carried out for different combinations of constraints, specifically: initial path inclination that is either free or given; dynamic pressure that is either free or bounded; and tangential acceleration that is either free or bounded.

  15. Minimal perceptrons for memorizing complex patterns

    NASA Astrophysics Data System (ADS)

    Pastor, Marissa; Song, Juyong; Hoang, Danh-Tai; Jo, Junghyo

    2016-11-01

    Feedforward neural networks have been investigated to understand learning and memory, as well as applied to numerous practical problems in pattern classification. It is a rule of thumb that more complex tasks require larger networks. However, the design of optimal network architectures for specific tasks is still an unsolved fundamental problem. In this study, we consider three-layered neural networks for memorizing binary patterns. We developed a new complexity measure of binary patterns, and estimated the minimal network size for memorizing them as a function of their complexity. We formulated the minimal network size for regular, random, and complex patterns. In particular, the minimal size for complex patterns, which are neither ordered nor disordered, was predicted by measuring their Hamming distances from known ordered patterns. Our predictions agree with simulations based on the back-propagation algorithm.
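
    As a toy sketch of the setting this study considers, a three-layered sigmoid network trained by back-propagation can memorize XOR, the classic binary pattern set that no single-layer perceptron can memorize. The architecture, learning rate, and epoch count below are our own illustrative choices, not the paper's.

```python
import numpy as np

def train_mlp(X, y, hidden=8, lr=1.0, epochs=10000, seed=0):
    """Memorize the binary patterns X -> y with a three-layered sigmoid
    network trained by plain back-propagation on squared error."""
    rng = np.random.default_rng(seed)
    W1 = rng.standard_normal((X.shape[1], hidden))
    W2 = rng.standard_normal((hidden, 1))
    sig = lambda z: 1.0 / (1.0 + np.exp(-z))
    for _ in range(epochs):
        h = sig(X @ W1)                        # hidden activations
        out = sig(h @ W2)                      # network output
        d_out = (out - y) * out * (1 - out)    # output-layer delta
        d_hid = (d_out @ W2.T) * h * (1 - h)   # back-propagated delta
        W2 -= lr * (h.T @ d_out)
        W1 -= lr * (X.T @ d_hid)
    return W1, W2, sig

# XOR: four binary patterns, neither ordered nor linearly separable
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)
W1, W2, sig = train_mlp(X, y)
pred = (sig(sig(X @ W1) @ W2) > 0.5).astype(float)
```

    Shrinking `hidden` until memorization fails gives a crude empirical estimate of the minimal network size for a given pattern set, the quantity the paper formulates analytically.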

  16. Sensitivity computation of the ell1 minimization problem and its application to dictionary design of ill-posed problems

    NASA Astrophysics Data System (ADS)

    Horesh, L.; Haber, E.

    2009-09-01

    The ell1 minimization problem has been studied extensively in the past few years. Recently, there has been a growing interest in its application to inverse problems. Most studies have concentrated on devising ways for sparse representation of a solution using a given prototype dictionary. Very few studies have addressed the more challenging problem of optimal dictionary construction, and even these were primarily devoted to the simplistic sparse coding application. In this paper, sensitivity analysis of the inverse solution with respect to the dictionary is presented. This analysis reveals some of the salient features and intrinsic difficulties which are associated with the dictionary design problem. Equipped with these insights, we propose an optimization strategy that alleviates these hurdles while utilizing the derived sensitivity relations for the design of a locally optimal dictionary. Our optimality criterion is based on local minimization of the Bayesian risk, given a set of training models. We present a mathematical formulation and an algorithmic framework to achieve this goal. The proposed framework offers the design of dictionaries for inverse problems that incorporate non-trivial, non-injective observation operators, where the data and the recovered parameters may reside in different spaces. We test our algorithm and show that it yields improved dictionaries for a diverse set of inverse problems in geophysics and medical imaging.

  17. Intercell scheduling: A negotiation approach using multi-agent coalitions

    NASA Astrophysics Data System (ADS)

    Tian, Yunna; Li, Dongni; Zheng, Dan; Jia, Yunde

    2016-10-01

    Intercell scheduling problems arise as a result of intercell transfers in cellular manufacturing systems. Flexible intercell routes are considered in this article, and a coalition-based scheduling (CBS) approach using distributed multi-agent negotiation is developed. Taking advantage of the extended vision of the coalition agents, the global optimization is improved and the communication cost is reduced. The objective of the addressed problem is to minimize mean tardiness. Computational results show that, compared with the widely used combinatorial rules, CBS provides better performance not only in minimizing the objective, i.e. mean tardiness, but also in minimizing auxiliary measures such as maximum completion time, mean flow time and the ratio of tardy parts. Moreover, CBS is better than the existing intercell scheduling approach for the same problem with respect to the solution quality and computational costs.

  18. On entropic uncertainty relations in the presence of a minimal length

    NASA Astrophysics Data System (ADS)

    Rastegin, Alexey E.

    2017-07-01

    Entropic uncertainty relations for the position and momentum within the generalized uncertainty principle are examined. Studies of this principle are motivated by the existence of a minimal observable length. Then the position and momentum operators satisfy the modified commutation relation, for which more than one algebraic representation is known. One of them is described by an auxiliary momentum so that the momentum and coordinate wave functions are connected by the Fourier transform. However, the probability density functions of the physically true and auxiliary momenta are different. As the corresponding entropies differ, known entropic uncertainty relations are changed. Using differential Shannon entropies, we give a state-dependent formulation with correction term. State-independent uncertainty relations are obtained in terms of the Rényi entropies and the Tsallis entropies with binning. Such relations allow one to take into account the finiteness of measurement resolution.
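
    For reference, the modified commutation relation studied in this line of work is, in one common convention (one dimension, deformation parameter β > 0):

```latex
[\hat{X},\hat{P}] = i\hbar\,\bigl(1+\beta \hat{P}^{2}\bigr),
\qquad
\Delta x\,\Delta p \;\ge\; \frac{\hbar}{2}\Bigl(1+\beta\,(\Delta p)^{2}+\beta\,\langle\hat{P}\rangle^{2}\Bigr),
```

    which implies a minimal position uncertainty of order ħ√β, the minimal observable length that motivates the entropic analysis.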

  19. Comment on ``Minimal size of a barchan dune''

    NASA Astrophysics Data System (ADS)

    Andreotti, B.; Claudin, P.

    2007-12-01

    It is now an accepted fact that the size at which dunes form from a flat sand bed as well as their “minimal size” scales on the flux saturation length. This length is by definition the relaxation length of the slowest mode toward equilibrium transport. The model presented by Parteli, Durán, and Herrmann [Phys. Rev. E 75, 011301 (2007)] predicts that the saturation length decreases to zero as the inverse of the wind shear stress far from the threshold. We first show that their model is not self-consistent: even under large wind, the relaxation rate is limited by grain inertia and thus cannot decrease to zero. A key argument presented by these authors comes from the discussion of the typical dune wavelength on Mars (650 m) on the basis of which they refute the scaling of the dune size with the drag length evidenced by Claudin and Andreotti [Earth Planet. Sci. Lett. 252, 30 (2006)]. They instead propose that Martian dunes, composed of large grains (500 μm), were formed in the past under very strong winds. We emphasize that this saltating grain size, estimated from thermal diffusion measurements, is far from straightforward. Moreover, the microscopic photographs taken by the rovers on Martian aeolian bedforms show a grain size of 87±25 μm together with hematite spherules at millimeter scale. As those so-called “blueberries” cannot be entrained more frequently than a few hours per century, we conclude that the saltating grains on Mars are the small ones, which gives a second strong argument against the model of Parteli, Durán, and Herrmann.

  20. Comment on "Minimal size of a barchan dune".

    PubMed

    Andreotti, B; Claudin, P

    2007-12-01

    It is now an accepted fact that the size at which dunes form from a flat sand bed as well as their "minimal size" scales on the flux saturation length. This length is by definition the relaxation length of the slowest mode toward equilibrium transport. The model presented by Parteli, Durán, and Herrmann [Phys. Rev. E 75, 011301 (2007)] predicts that the saturation length decreases to zero as the inverse of the wind shear stress far from the threshold. We first show that their model is not self-consistent: even under large wind, the relaxation rate is limited by grain inertia and thus cannot decrease to zero. A key argument presented by these authors comes from the discussion of the typical dune wavelength on Mars (650 m) on the basis of which they refute the scaling of the dune size with the drag length evidenced by Claudin and Andreotti [Earth Planet. Sci. Lett. 252, 30 (2006)]. They instead propose that Martian dunes, composed of large grains (500 μm), were formed in the past under very strong winds. We emphasize that this saltating grain size, estimated from thermal diffusion measurements, is far from straightforward. Moreover, the microscopic photographs taken by the rovers on Martian aeolian bedforms show a grain size of 87 ± 25 μm together with hematite spherules at millimeter scale. As those so-called "blueberries" cannot be entrained more frequently than a few hours per century, we conclude that the saltating grains on Mars are the small ones, which gives a second strong argument against the model of Parteli.

  1. Dorsal buccal mucosal graft urethroplasty by a ventral sagittal urethrotomy and minimal-access perineal approach for anterior urethral stricture.

    PubMed

    Gupta, N P; Ansari, M S; Dogra, P N; Tandon, S

    2004-06-01

    To present the technique of dorsal buccal mucosal graft urethroplasty through a ventral sagittal urethrotomy and minimal access perineal approach for anterior urethral stricture. From July 2001 to December 2002, 12 patients with a long anterior urethral stricture had the anterior urethra reconstructed using a one-stage urethroplasty with a dorsal onlay buccal mucosal graft through a ventral sagittal urethrotomy. The urethra was approached via a small perineal incision irrespective of the site and length of the stricture. The penis was everted through the perineal wound. No urethral dissection was performed laterally or dorsally, so as not to jeopardize the blood supply. The mean (range) length of the stricture was 5 (3-16) cm and the mean follow-up was 12 (10-16) months. The results were good in 11 of the 12 patients. One patient developed a stricture at the proximal anastomotic site and required optical internal urethrotomy. Dorsal buccal mucosal graft urethroplasty via a minimal access perineal approach is a simple technique with a good surgical outcome; it does not require urethral dissection and mobilization and hence preserves the blood supply.

  2. Laparoscopic pancreatectomy: Indications and outcomes

    PubMed Central

    Liang, Shuyin; Hameed, Usmaan; Jayaraman, Shiva

    2014-01-01

    The application of minimally invasive approaches to pancreatic resection for benign and malignant diseases has been growing in the last two decades. Studies have demonstrated that laparoscopic distal pancreatectomy (LDP) is feasible and safe, and many of them show that compared to open distal pancreatectomy, LDP has decreased blood loss and length of hospital stay, and equivalent post-operative complication rates and short-term oncologic outcomes. LDP is becoming the procedure of choice for benign or small low-grade malignant lesions in the distal pancreas. Minimally invasive pancreaticoduodenectomy (MIPD) has not yet been widely adopted. There is no clear evidence in favor of MIPD over open pancreaticoduodenectomy in operative time, blood loss, length of stay or rate of complications. Robotic surgery has recently been applied to pancreatectomy, and many of the advantages of laparoscopy over open surgery have been observed in robotic surgery. Laparoscopic enucleation is considered safe for patients with small, benign or low-grade malignant lesions of the pancreas that are amenable to a parenchyma-preserving procedure. As surgeons’ experience with advanced laparoscopic and robotic skills has been growing around the world, new innovations and breakthroughs in minimally invasive pancreatic procedures will evolve. PMID:25339811

  3. River meanders - Theory of minimum variance

    USGS Publications Warehouse

    Langbein, Walter Basil; Leopold, Luna Bergere

    1966-01-01

    Meanders are the result of erosion-deposition processes tending toward the most stable form in which the variability of certain essential properties is minimized. This minimization involves the adjustment of the planimetric geometry and the hydraulic factors of depth, velocity, and local slope. The planimetric geometry of a meander is that of a random walk whose most frequent form minimizes the sum of the squares of the changes in direction in each successive unit length. The direction angles are then sine functions of channel distance. This yields a meander shape typically present in meandering rivers and has the characteristic that the ratio of meander length to average radius of curvature in the bend is 4.7. Depth, velocity, and slope are shown by field observations to be adjusted so as to decrease the variance of shear and the friction factor in a meander curve over that in an otherwise comparable straight reach of the same river. Since theory and observation indicate meanders achieve the minimum variance postulated, it follows that for channels in which alternating pools and riffles occur, meandering is the most probable form of channel geometry and thus is a more stable geometry than a straight or nonmeandering alignment.
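
    The statement that the direction angles are sine functions of channel distance can be made concrete with a short numerical sketch. The parameter values are illustrative; ω = 110° is a typical maximum deflection angle for a well-developed meander.

```python
import numpy as np

def sine_generated_curve(omega_deg=110.0, wavelength=100.0, n=2000):
    """Trace the sine-generated curve: the direction angle theta is a sine
    function of distance s along the channel, theta(s) = omega*sin(2*pi*s/M).
    This planform minimizes the sum of squared direction changes per unit
    length, the minimum-variance property described in the abstract."""
    omega = np.radians(omega_deg)
    ds = wavelength / n
    s = np.arange(n) * ds
    theta = omega * np.sin(2 * np.pi * s / wavelength)
    x = np.cumsum(np.cos(theta)) * ds    # integrate dx = cos(theta) ds
    y = np.cumsum(np.sin(theta)) * ds    # integrate dy = sin(theta) ds
    return x, y

x, y = sine_generated_curve()
# sinuosity = channel length / straight-line (down-valley) distance
sinuosity = 100.0 / np.hypot(x[-1], y[-1])
```

    Sweeping ω recovers the familiar family of meander shapes, from nearly straight channels at small ω to tightly looped ones near ω = 125°.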

  4. The Photodynamic Bone Stabilization System: a minimally invasive, percutaneous intramedullary polymeric osteosynthesis for simple and complex long bone fractures.

    PubMed

    Vegt, Paul; Muir, Jeffrey M; Block, Jon E

    2014-01-01

    The treatment of osteoporotic long bone fractures is difficult due to diminished bone density and compromised biomechanical integrity. The majority of osteoporotic long bone fractures occur in the metaphyseal region, which poses additional problems for surgical repair due to increased intramedullary volume. Treatment with internal fixation using intramedullary nails or plating is associated with poor clinical outcomes in this patient population. Subsequent fractures and complications such as screw pull-out necessitate additional interventions, prolonging recovery and increasing health care costs. The Photodynamic Bone Stabilization System (PBSS) is a minimally invasive surgical technique that allows clinicians to repair bone fractures using a light-curable polymer contained within an inflatable balloon catheter, offering a new treatment option for osteoporotic long bone fractures. The unique polymer compound and catheter application provides a customizable solution for long bone fractures that produces internal stability while maintaining bone length, rotational alignment, and postsurgical mobility. The PBSS has been utilized in a case series of 41 fractures in 33 patients suffering osteoporotic long bone fractures. The initial results indicate that the use of the light-cured polymeric rod for this patient population provides excellent fixation and stability in compromised bone, with a superior complication profile. This paper describes the clinical uses, procedural details, indications for use, and the initial clinical findings of the PBSS.

  5. The Photodynamic Bone Stabilization System: a minimally invasive, percutaneous intramedullary polymeric osteosynthesis for simple and complex long bone fractures

    PubMed Central

    Vegt, Paul; Muir, Jeffrey M; Block, Jon E

    2014-01-01

    The treatment of osteoporotic long bone fractures is difficult due to diminished bone density and compromised biomechanical integrity. The majority of osteoporotic long bone fractures occur in the metaphyseal region, which poses additional problems for surgical repair due to increased intramedullary volume. Treatment with internal fixation using intramedullary nails or plating is associated with poor clinical outcomes in this patient population. Subsequent fractures and complications such as screw pull-out necessitate additional interventions, prolonging recovery and increasing health care costs. The Photodynamic Bone Stabilization System (PBSS) is a minimally invasive surgical technique that allows clinicians to repair bone fractures using a light-curable polymer contained within an inflatable balloon catheter, offering a new treatment option for osteoporotic long bone fractures. The unique polymer compound and catheter application provides a customizable solution for long bone fractures that produces internal stability while maintaining bone length, rotational alignment, and postsurgical mobility. The PBSS has been utilized in a case series of 41 fractures in 33 patients suffering osteoporotic long bone fractures. The initial results indicate that the use of the light-cured polymeric rod for this patient population provides excellent fixation and stability in compromised bone, with a superior complication profile. This paper describes the clinical uses, procedural details, indications for use, and the initial clinical findings of the PBSS. PMID:25540600

  6. Sinc-Galerkin estimation of diffusivity in parabolic problems

    NASA Technical Reports Server (NTRS)

    Smith, Ralph C.; Bowers, Kenneth L.

    1991-01-01

    A fully Sinc-Galerkin method for the numerical recovery of spatially varying diffusion coefficients in linear partial differential equations is presented. Because the parameter recovery problems are inherently ill-posed, an output error criterion in conjunction with Tikhonov regularization is used to formulate them as infinite-dimensional minimization problems. The forward problems are discretized with a sinc basis in both the spatial and temporal domains thus yielding an approximate solution which displays an exponential convergence rate and is valid on the infinite time interval. The minimization problems are then solved via a quasi-Newton/trust region algorithm. The L-curve technique for determining an approximate value of the regularization parameter is briefly discussed, and numerical examples are given which show the applicability of the method both for problems with noise-free data as well as for those whose data contains white noise.
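
    In the simplest linear finite-dimensional setting, the Tikhonov-regularized output-error formulation reduces to a damped least-squares problem, and the L-curve is traced by sweeping the regularization parameter. The sketch below is illustrative only, not the Sinc-Galerkin discretization itself; function names are our own.

```python
import numpy as np

def tikhonov(A, b, alpha):
    """argmin_x ||A x - b||^2 + alpha^2 ||x||^2, solved via the
    regularized normal equations (A^T A + alpha^2 I) x = A^T b."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + alpha**2 * np.eye(n), A.T @ b)

def l_curve(A, b, alphas):
    """Residual norm vs. solution norm for a sweep of regularization
    parameters: the two axes of the L-curve."""
    pts = []
    for a in alphas:
        x = tikhonov(A, b, a)
        pts.append((np.linalg.norm(A @ x - b), np.linalg.norm(x)))
    return pts
```

    The approximate corner of the resulting curve balances data fit against solution norm, which is how the L-curve technique picks an approximate value of the regularization parameter.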

  7. Nonexpansiveness of a linearized augmented Lagrangian operator for hierarchical convex optimization

    NASA Astrophysics Data System (ADS)

    Yamagishi, Masao; Yamada, Isao

    2017-04-01

    Hierarchical convex optimization concerns two-stage optimization problems: the first stage problem is a convex optimization; the second stage problem is the minimization of a convex function over the solution set of the first stage problem. For the hierarchical convex optimization, the hybrid steepest descent method (HSDM) can be applied, where the solution set of the first stage problem must be expressed as the fixed point set of a certain nonexpansive operator. In this paper, we propose a nonexpansive operator that yields a computationally efficient update when it is plugged into the HSDM. The proposed operator is inspired by the update of the linearized augmented Lagrangian method. It is applicable to characterize the solution set of recent sophisticated convex optimization problems found in the context of inverse problems, where the sum of multiple proximable convex functions involving linear operators must be minimized to incorporate preferable properties into the minimizers. For such a problem formulation, no nonexpansive operator has yet been reported that yields an update free from inversions of linear operators when utilized in the HSDM. Unlike previously known nonexpansive operators, the proposed operator yields an inversion-free update in such cases. As an application of the proposed operator plugged into the HSDM, we also present, in the context of the so-called superiorization, an algorithmic solution to a convex optimization problem over the generalized convex feasible set where the intersection of the hard constraints is not necessarily simple.

  8. Minimizing inner product data dependencies in conjugate gradient iteration

    NASA Technical Reports Server (NTRS)

    Vanrosendale, J.

    1983-01-01

    The amount of concurrency available in conjugate gradient iteration is limited by the summations required in the inner product computations. The inner product of two vectors of length N requires time c log(N) if N or more processors are available. This paper describes an algebraic restructuring of the conjugate gradient algorithm which minimizes data dependencies due to inner product calculations. After an initial start up, the new algorithm can perform a conjugate gradient iteration in time c log(log(N)).
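
    For context, a standard conjugate gradient iteration is sketched below; the two inner products per step (r·r and p·Ap) are the length-N reductions whose data dependencies the restructured algorithm targets. This is the textbook iteration, not the paper's restructured variant.

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iter=200):
    """Standard CG for a symmetric positive definite A.  Each iteration
    performs two inner products, r.r and p.Ap; each is a length-N
    summation, the reduction that limits concurrency."""
    x = np.zeros_like(b)
    r = b - A @ x                 # residual
    p = r.copy()                  # search direction
    rs = r @ r                    # inner product 1
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs / (p @ Ap)     # inner product 2
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x
```

    The sequential dependence of alpha on both reductions is why a straightforward parallelization cannot do better than the c log(N) reduction time per step.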

  9. A Study on Improvement of Machining Precision in a Medical Milling Robot

    NASA Astrophysics Data System (ADS)

    Sugita, Naohiko; Osa, Takayuki; Nakajima, Yoshikazu; Mori, Masahiko; Saraie, Hidenori; Mitsuishi, Mamoru

    Minimal invasiveness and increased precision have recently become important issues in orthopedic surgery. The femur and tibia must be cut precisely for successful knee arthroplasty. The recent trend towards Minimally Invasive Surgery (MIS) has increased surgical difficulty since the incision length and open access area are small. In this paper, the results of a deformation analysis of the robot are presented, and an active compensation method for robot deformation, based on an error map, is proposed and evaluated.

  10. Competitive two-agent scheduling problems to minimize the weighted combination of makespans in a two-machine open shop

    NASA Astrophysics Data System (ADS)

    Jiang, Fuhong; Zhang, Xingong; Bai, Danyu; Wu, Chin-Chia

    2018-04-01

    In this article, a competitive two-agent scheduling problem in a two-machine open shop is studied. The objective is to minimize the weighted sum of the makespans of two competitive agents. A complexity proof is presented for minimizing the weighted combination of the two agents' makespans when the weight α assigned to agent B is arbitrary. Furthermore, two pseudo-polynomial-time algorithms using the largest alternate processing time (LAPT) rule are presented. Finally, two approximation algorithms are presented for the case where the weight is equal to one, and another approximation algorithm is presented for the case where the weight is larger than one.

  11. Posterior retroperitoneoscopic adrenalectomy: outcomes and lessons learned from initial 50 cases.

    PubMed

    Cabalag, Miguel S; Mann, G Bruce; Gorelik, Alexandra; Miller, Julie A

    2015-06-01

    Posterior retroperitoneoscopic adrenalectomy (PRA) is an alternative approach to minimally invasive adrenalectomy, potentially offering less pain and faster recovery compared with laparoscopic transperitoneal adrenalectomy (LA). The authors have recently changed from LA to PRA in suitable patients and audited their first 50 cases. Data were prospectively collected for 50 consecutive PRAs performed by the same surgeon. Patient demographics, tumour characteristics, analgesia use, operative and preparation time, length of stay, and complications were recorded. Fifty adrenalectomies were performed in 49 patients. The median (range) age was 58.5 years (30-83) and the majority of patients were female (n = 33, 66.0%). The median (interquartile range (IQR)) preparation time was 35.5 (28.5-50.0) min and median operation time was 70.5 (54-85) min, which decreased during the study period. After a learning curve of 15 cases, median operative time reached 61 min. PRA patients required minimal post-operative analgesia, with a median (IQR) of 0 (0-5) mg of intravenous morphine equivalent used. The median (IQR) length of stay was 1 (1-1) day, with 8 (16.0%) same-day discharges. There were four complications: one episode of blood pressure lability from a phaeochromocytoma, one reintubation, one self-limited bleed and one temporary subcostal neuropraxia. There were no conversions to open surgery or deaths. Our results support previously published findings that PRA is a safe procedure, with a relatively short learning curve, resulting in minimal post-operative analgesia use and short length of hospital stay. © 2014 Royal Australasian College of Surgeons.

  12. Two hybrid compaction algorithms for the layout optimization problem.

    PubMed

    Xiao, Ren-Bin; Xu, Yi-Chun; Amos, Martyn

    2007-01-01

    In this paper we present two new algorithms for the layout optimization problem: this concerns the placement of circular, weighted objects inside a circular container, the two objectives being to minimize imbalance of mass and to minimize the radius of the container. This problem carries real practical significance in industrial applications (such as the design of satellites), as well as being of considerable theoretical interest. We present two nature-inspired algorithms for this problem, the first based on simulated annealing, and the second on particle swarm optimization. We compare our algorithms with the existing best-known algorithm, and show that our approaches outperform it in terms of both solution quality and execution time.
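
    A minimal sketch of the simulated annealing approach to this layout problem follows, with an illustrative cost combining the ingredients named in the abstract: container radius, a pairwise overlap penalty, and imbalance of mass. The penalty weight, move size, and cooling schedule are our own assumptions, not the paper's.

```python
import math, random

def layout_cost(pos, radii, masses):
    """Cost = container radius needed to enclose all circles
    + overlap penalty + imbalance (distance of the mass centroid
    from the container centre)."""
    R = max(math.hypot(x, y) + r for (x, y), r in zip(pos, radii))
    overlap = 0.0
    for i in range(len(pos)):
        for j in range(i + 1, len(pos)):
            d = math.hypot(pos[i][0] - pos[j][0], pos[i][1] - pos[j][1])
            overlap += max(0.0, radii[i] + radii[j] - d)
    M = sum(masses)
    cx = sum(m * x for (x, y), m in zip(pos, masses)) / M
    cy = sum(m * y for (x, y), m in zip(pos, masses)) / M
    return R + 10.0 * overlap + math.hypot(cx, cy)

def anneal(radii, masses, steps=2000, T0=1.0, seed=1):
    """Simulated annealing: perturb one circle at a time, accept worse
    layouts with probability exp(-delta/T), track the best layout seen."""
    rng = random.Random(seed)
    pos = [(rng.uniform(-3, 3), rng.uniform(-3, 3)) for _ in radii]
    best = cur = layout_cost(pos, radii, masses)
    best_pos = list(pos)
    for k in range(steps):
        T = T0 * (1 - k / steps) + 1e-3        # linear cooling
        i = rng.randrange(len(pos))
        old = pos[i]
        pos[i] = (old[0] + rng.gauss(0, 0.3), old[1] + rng.gauss(0, 0.3))
        new = layout_cost(pos, radii, masses)
        if new < cur or rng.random() < math.exp((cur - new) / T):
            cur = new
            if new < best:
                best, best_pos = new, list(pos)
        else:
            pos[i] = old                       # reject the move
    return best_pos, best
```

    Tracking the best-so-far layout makes the returned solution at least as good as the random start regardless of where the annealing chain ends up.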

  13. A look-ahead variant of the Lanczos algorithm and its application to the quasi-minimal residual method for non-Hermitian linear systems. Ph.D. Thesis - Massachusetts Inst. of Technology, Aug. 1991

    NASA Technical Reports Server (NTRS)

    Nachtigal, Noel M.

    1991-01-01

    The Lanczos algorithm can be used both for eigenvalue problems and to solve linear systems. However, when applied to non-Hermitian matrices, the classical Lanczos algorithm is susceptible to breakdowns and potential instabilities. In addition, the biconjugate gradient (BCG) algorithm, which is the natural generalization of the conjugate gradient algorithm to non-Hermitian linear systems, has a second source of breakdowns, independent of the Lanczos breakdowns. Here, we present two new results. We propose an implementation of a look-ahead variant of the Lanczos algorithm which overcomes the breakdowns by skipping over those steps where a breakdown or a near-breakdown would occur. The new algorithm can handle look-ahead steps of any length and requires the same number of matrix-vector products and inner products per step as the classical Lanczos algorithm without look-ahead. Based on the proposed look-ahead Lanczos algorithm, we then present a novel BCG-like approach, the quasi-minimal residual (QMR) method, which avoids the second source of breakdowns in the BCG algorithm. We present details of the new method and discuss some of its properties. In particular, we discuss the relationship between QMR and BCG, showing how one can recover the BCG iterates, when they exist, from the QMR iterates. We also present convergence results for QMR, showing the connection between QMR and the generalized minimal residual (GMRES) algorithm, the optimal method in this class of methods. Finally, we give some numerical examples, both for eigenvalue computations and for non-Hermitian linear systems.

  14. Route optimization as an instrument to improve animal welfare and economics in pre-slaughter logistics.

    PubMed

    Frisk, Mikael; Jonsson, Annie; Sellman, Stefan; Flisberg, Patrik; Rönnqvist, Mikael; Wennergren, Uno

    2018-01-01

    Each year, more than three million animals are transported from farms to abattoirs in Sweden. Animal transport is related to economic and environmental costs and a negative impact on animal welfare. Time and the number of pick-up stops between farms and abattoirs are two key parameters for animal welfare. Both are highly dependent on efficient, high-quality transportation planning, which may be difficult if done manually. We have examined the benefits of using route optimization in cattle transportation planning. To simulate the effects of various planning time windows and transportation time regulations and number of pick-up stops along each route, we have used data that represent one year of cattle transport. Our optimization model is a development of a model used in forestry transport that solves a general pick-up and delivery vehicle routing problem. The objective is to minimize transportation costs. We have shown that the length of the planning time window has a significant impact on the animal transport time, the total driving time and the total distance driven; these parameters will not only affect animal welfare but also affect the economy and environment in the pre-slaughter logistic chain. In addition, we have shown that changes in animal transportation regulations, such as minimizing the number of allowed pick-up stops on each route or minimizing animal transportation time, will have positive effects on animal welfare measured in transportation hours and number of pick-up stops. However, this leads to an increase in working time and driven distances, leading to higher transportation costs and a greater negative environmental impact.
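
    To make the routing setting concrete, here is a toy greedy route builder for farm pickups with a per-route stop limit. This is a naive baseline for illustration only; the paper's model solves the full pick-up and delivery vehicle routing problem with an optimization solver, and all names and data below are hypothetical.

```python
import math

def greedy_route(abattoir, farms, max_stops=4):
    """Build pickup routes greedily: from the abattoir, repeatedly visit
    the nearest unvisited farm until the per-route stop limit is reached,
    then return and start a new route.  Capping max_stops mirrors the
    regulation change on allowed pick-up stops discussed in the abstract."""
    unvisited = dict(farms)               # farm name -> (x, y)
    routes = []
    while unvisited:
        route, here = [], abattoir
        while unvisited and len(route) < max_stops:
            name = min(unvisited,
                       key=lambda f: math.dist(here, unvisited[f]))
            here = unvisited.pop(name)
            route.append(name)
        routes.append(route)
    return routes
```

    Lowering `max_stops` shortens each animal's time on the truck but forces more routes, the welfare-versus-cost trade-off the study quantifies.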

  15. Route optimization as an instrument to improve animal welfare and economics in pre-slaughter logistics

    PubMed Central

    2018-01-01

    Each year, more than three million animals are transported from farms to abattoirs in Sweden. Animal transport is related to economic and environmental costs and a negative impact on animal welfare. Time and the number of pick-up stops between farms and abattoirs are two key parameters for animal welfare. Both are highly dependent on efficient, high-quality transportation planning, which may be difficult if done manually. We have examined the benefits of using route optimization in cattle transportation planning. To simulate the effects of various planning time windows and transportation time regulations and number of pick-up stops along each route, we have used data that represent one year of cattle transport. Our optimization model is a development of a model used in forestry transport that solves a general pick-up and delivery vehicle routing problem. The objective is to minimize transportation costs. We have shown that the length of the planning time window has a significant impact on the animal transport time, the total driving time and the total distance driven; these parameters will not only affect animal welfare but also affect the economy and environment in the pre-slaughter logistic chain. In addition, we have shown that changes in animal transportation regulations, such as minimizing the number of allowed pick-up stops on each route or minimizing animal transportation time, will have positive effects on animal welfare measured in transportation hours and number of pick-up stops. However, this leads to an increase in working time and driven distances, leading to higher transportation costs and a greater negative environmental impact. PMID:29513704

  16. The Role of Design-of-Experiments in Managing Flow in Compact Air Vehicle Inlets

    NASA Technical Reports Server (NTRS)

    Anderson, Bernhard H.; Miller, Daniel N.; Gridley, Marvin C.; Agrell, Johan

    2003-01-01

    It is the purpose of this study to demonstrate the viability and economy of Design-of-Experiments methodologies to arrive at microscale secondary flow control array designs that maintain optimal inlet performance over a wide range of the mission variables and to explore how these statistical methods provide a better understanding of the management of flow in compact air vehicle inlets. These statistical design concepts were used to investigate the robustness properties of low unit strength micro-effector arrays. Low unit strength micro-effectors are micro-vanes set at very low angles-of-incidence with very long chord lengths. They were designed to influence the near wall inlet flow over an extended streamwise distance, and their advantage lies in low total pressure loss and high effectiveness in managing engine face distortion. The term robustness is used in this paper in the same sense as it is used in the industrial problem solving community. It refers to minimizing the effects of the hard-to-control factors that influence the development of a product or process. In Robustness Engineering, the effects of the hard-to-control factors are often called "noise," and the hard-to-control factors themselves are referred to as the environmental variables or sometimes as the Taguchi noise variables. Hence Robust Optimization refers to minimizing the effects of the environmental or noise variables on the development (design) of a product or process. In the management of flow in compact inlets, the environmental or noise variables can be identified with the mission variables. Therefore this paper formulates a statistical design methodology that minimizes the impact of variations in the mission variables on inlet performance and demonstrates that these statistical design concepts can lead to simpler inlet flow management systems.

  17. Free time minimizers for the three-body problem

    NASA Astrophysics Data System (ADS)

    Moeckel, Richard; Montgomery, Richard; Sánchez Morgado, Héctor

    2018-03-01

    Free time minimizers of the action (called "semi-static" solutions by Mañé in International congress on dynamical systems in Montevideo (a tribute to Ricardo Mañé), vol 362, pp 120-131, 1996) play a central role in the theory of weak KAM solutions to the Hamilton-Jacobi equation (Fathi in Weak KAM Theorem in Lagrangian Dynamics Preliminary Version Number 10, 2017). We prove that any solution to Newton's three-body problem which is asymptotic to Lagrange's parabolic homothetic solution is eventually a free time minimizer. Conversely, we prove that every free time minimizer tends to Lagrange's solution, provided the mass ratios lie in a certain large open set of mass ratios. We were inspired by the work of Da Luz and Maderna (Math Proc Camb Philos Soc 156:209-227, 2014) which showed that every free time minimizer for the N-body problem is parabolic and therefore must be asymptotic to the set of central configurations. We exclude being asymptotic to Euler's central configurations by a second variation argument. Central configurations correspond to rest points for the McGehee blown-up dynamics. The large open set of mass ratios are those for which the linearized dynamics at each Euler rest point has a complex eigenvalue.

  18. [Minimal emotional dysfunction and first impression formation in personality disorders].

    PubMed

    Linden, M; Vilain, M

    2011-01-01

    "Minimal cerebral dysfunctions" are isolated impairments of basic mental functions, which are elements of complex functions like speech. The best described are cognitive dysfunctions such as reading and writing problems, dyscalculia, and attention deficits, but also motor dysfunctions such as problems with articulation, hyperactivity or impulsivity. Personality disorders can be characterized by isolated emotional dysfunctions in relation to emotional adequacy, intensity and responsivity. For example, paranoid personality disorders can be characterized by continuous and inadequate distrust, as a disorder of emotional adequacy. Schizoid personality disorders can be characterized by low expressive emotionality, as a disorder of affect intensity, or dissocial personality disorders can be characterized by emotional non-responsivity. Minimal emotional dysfunctions cause interactional misunderstandings because of the psychology of "first impression formation". Studies have shown that within 100 ms persons build up complex and lasting emotional judgements about other persons. Therefore, minimal emotional dysfunctions result in interactional problems and adjustment disorders and in corresponding cognitive schemata. From the concept of minimal emotional dysfunctions, specific psychotherapeutic interventions can be derived with respect to the patient-therapist relationship, the diagnostic process, the clarification of emotions and reality testing, and especially an understanding of personality disorders as impairment and of "selection, optimization, and compensation" as a way of coping.

  19. Comparison on different repetition rate locking methods in Er-doped fiber laser

    NASA Astrophysics Data System (ADS)

    Yang, Kangwen; Zhao, Peng; Luo, Jiang; Huang, Kun; Hao, Qiang; Zeng, Heping

    2018-05-01

    We present a systematic comparison of all-optical, mechanical and opto-mechanical repetition rate control methods in an Er-doped fiber laser. A piece of Yb-doped fiber, a piezoelectric transducer and an electronic polarization controller are simultaneously added to the laser cavity as different cavity length modulators. By measuring the cavity length tuning ranges, the output power fluctuations, and the temporal and frequency stability of the repetition rate, we show that the all-optical method introduces the least disturbance under the present experimental conditions.

  20. A new smoothing modified three-term conjugate gradient method for [Formula: see text]-norm minimization problem.

    PubMed

    Du, Shouqiang; Chen, Miao

    2018-01-01

    We consider a kind of nonsmooth optimization problems with [Formula: see text]-norm minimization, which has many applications in compressed sensing, signal reconstruction, and the related engineering problems. Using smoothing approximation techniques, this kind of nonsmooth optimization problem can be transformed into a general unconstrained optimization problem, which can be solved by the proposed smoothing modified three-term conjugate gradient method. The smoothing modified three-term conjugate gradient method is based on the Polak-Ribière-Polyak conjugate gradient method. Because the Polak-Ribière-Polyak conjugate gradient method has good numerical properties, the proposed method possesses the sufficient descent property without any line search, and it is also proved to be globally convergent. Finally, numerical experiments show the efficiency of the proposed method.
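
    The smoothing idea in this abstract can be sketched in a few lines. The sketch below is illustrative only and is not the authors' method: it replaces the three-term conjugate gradient direction with a plain gradient step, approximates each |x_i| by sqrt(x_i^2 + mu^2), and drives mu toward zero; all parameter values are assumptions.

```python
import numpy as np

def smoothed_l1_solve(A, b, lam=0.01, mu=1.0, step=0.05, iters=4000):
    """Minimize 0.5*||Ax - b||^2 + lam*||x||_1 by smoothing the nonsmooth
    l1 term: |x_i| is approximated by sqrt(x_i^2 + mu^2), and mu is
    gradually reduced so the smooth problem approaches the original one."""
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        # gradient of the smoothed objective
        grad = A.T @ (A @ x - b) + lam * x / np.sqrt(x**2 + mu**2)
        x -= step * grad            # plain gradient step; the paper uses a
                                    # three-term conjugate gradient direction
        mu = max(mu * 0.999, 1e-6)  # tighten the smooth approximation
    return x
```

    On a small noiseless sparse-recovery instance this recovers the support of a sparse vector up to the usual l1 shrinkage bias.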

  1. Delaunay-based derivative-free optimization for efficient minimization of time-averaged statistics of turbulent flows

    NASA Astrophysics Data System (ADS)

    Beyhaghi, Pooriya

    2016-11-01

    This work considers the problem of efficiently minimizing the infinite-time average of a stationary ergodic process over a handful of independent parameters which affect it. Problems of this class, derived from physical or numerical experiments which are sometimes expensive to perform, are ubiquitous in turbulence research. In such problems, any given function evaluation, determined with finite sampling, is associated with a quantifiable amount of uncertainty, which may be reduced via additional sampling. This work proposes the first algorithm of this type, which remarkably reduces the overall cost of the optimization process for problems of this class. Further, under certain well-defined conditions, a rigorous proof of convergence to the global minimum of the problem considered is established.

  2. Generating effective project scheduling heuristics by abstraction and reconstitution

    NASA Technical Reports Server (NTRS)

    Janakiraman, Bhaskar; Prieditis, Armand

    1992-01-01

    A project scheduling problem consists of a finite set of jobs, each with fixed integer duration, requiring one or more resources such as personnel or equipment, and each subject to a set of precedence relations, which specify allowable job orderings, and a set of mutual exclusion relations, which specify jobs that cannot overlap. No job can be interrupted once started. The objective is to minimize project duration. This objective arises in nearly every large construction project--from software to hardware to buildings. Because such project scheduling problems are NP-hard, they are typically solved by branch-and-bound algorithms. In these algorithms, lower-bound duration estimates (admissible heuristics) are used to improve efficiency. One way to obtain an admissible heuristic is to remove (abstract) all resources and mutual exclusion constraints and then obtain the minimal project duration for the abstracted problem; this minimal duration is the admissible heuristic. Although such abstracted problems can be solved efficiently, they yield inaccurate admissible heuristics precisely because those constraints that are central to solving the original problem are abstracted. This paper describes a method to reconstitute the abstracted constraints back into the solution to the abstracted problem while maintaining efficiency, thereby generating better admissible heuristics. Our results suggest that reconstitution can make good admissible heuristics even better.
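
    The abstraction step described above, removing resources and mutual exclusions, leaves a pure precedence problem whose minimal duration is the critical-path length of the precedence DAG, a standard admissible heuristic. A minimal sketch, in which the job/graph encoding is an assumption rather than the paper's:

```python
from functools import lru_cache

def critical_path_lower_bound(durations, precedences):
    """Admissible heuristic for project duration: the longest path through
    the precedence DAG after abstracting away resource and mutual-exclusion
    constraints.  durations: {job: int}; precedences: list of (before, after).
    Assumes the precedence graph is acyclic, as it must be to be feasible."""
    succ = {j: [] for j in durations}
    for a, b in precedences:
        succ[a].append(b)

    @lru_cache(maxsize=None)
    def longest_from(job):
        # duration of this job plus the longest chain of successors
        tails = [longest_from(s) for s in succ[job]]
        return durations[job] + (max(tails) if tails else 0)

    return max(longest_from(j) for j in durations)
```

    Because resources are ignored, this never overestimates the true minimal duration, which is exactly what branch-and-bound requires of an admissible heuristic.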

  3. Optimal solution for travelling salesman problem using heuristic shortest path algorithm with imprecise arc length

    NASA Astrophysics Data System (ADS)

    Bakar, Sumarni Abu; Ibrahim, Milbah

    2017-08-01

    The shortest path problem is a popular problem in graph theory. It concerns finding a path of minimum length between a specified pair of vertices. In a network, the weight of each edge is usually represented as a crisp real number, and these weights are then used in shortest path calculations with deterministic algorithms. In practice, however, uncertainty is often encountered, and the weights of the edges of the network are uncertain and imprecise. In this paper, a modified algorithm which combines a heuristic shortest path method with a fuzzy approach is proposed for solving a network with imprecise arc lengths. Both interval numbers and triangular fuzzy numbers are considered for representing the arc lengths of the network. The modified algorithm is then applied to a specific instance of the Travelling Salesman Problem (TSP). The total shortest distance obtained from this algorithm is then compared with the total distance obtained from the traditional nearest neighbour heuristic. The results show that the modified algorithm produces a sequence of visited cities similar to the traditional approach while yielding a shorter total distance. Hence, this research could contribute to the enrichment of methods used in solving the TSP.
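
    As a rough illustration of the idea, and not the paper's algorithm: a nearest neighbour tour can be run over triangular fuzzy arc lengths by ranking arcs with a defuzzification such as the centroid. The ranking rule and the data encoding here are assumptions.

```python
def centroid(tfn):
    """Defuzzify a triangular fuzzy number (a, b, c) by its centroid."""
    a, b, c = tfn
    return (a + b + c) / 3.0

def nearest_neighbour_tour(cities, fuzzy_dist, start):
    """Nearest-neighbour TSP heuristic with imprecise (triangular fuzzy)
    arc lengths, ranked by centroid defuzzification.
    fuzzy_dist[u][v] is the triangular fuzzy distance from u to v."""
    unvisited = set(cities) - {start}
    tour, current, total = [start], start, 0.0
    while unvisited:
        nxt = min(unvisited, key=lambda c: centroid(fuzzy_dist[current][c]))
        total += centroid(fuzzy_dist[current][nxt])
        tour.append(nxt)
        unvisited.remove(nxt)
        current = nxt
    total += centroid(fuzzy_dist[current][start])  # close the tour
    tour.append(start)
    return tour, total
```

    Interval arc lengths fit the same scheme by ranking intervals with their midpoint instead of the centroid.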

  4. Designing safety into the minimally invasive surgical revolution: a commentary based on the Jacques Perissat Lecture of the International Congress of the European Association for Endoscopic Surgery.

    PubMed

    Clarke, John R

    2009-01-01

    Surgical errors with minimally invasive surgery differ from those in open surgery. Perforations are typically the result of trocar introduction or electrosurgery. Infections include bioburdens, notably enteric viruses, on complex instruments. Retained foreign objects are primarily unretrieved device fragments and lost gallstones or other specimens. Fires and burns come from illuminated ends of fiber-optic cables and from electrosurgery. Pressure ischemia is more likely with longer endoscopic surgical procedures. Gas emboli can occur. Minimally invasive surgery is more dependent on complex equipment, with high likelihood of failures. Standardization, checklists, and problem reporting are solutions for minimizing failures. The necessity of electrosurgery makes education about best electrosurgical practices important. The recording of minimally invasive surgical procedures is an opportunity to debrief in a way that improves the reliability of future procedures. Safety depends on reliability, designing systems to withstand inevitable human errors. Safe systems are characterized by a commitment to safety, formal protocols for communications, teamwork, standardization around best practice, and reporting of problems for improvement of the system. Teamwork requires shared goals, mental models, and situational awareness in order to facilitate mutual monitoring and backup. An effective team has a flat hierarchy; team members are empowered to speak up if they are concerned about problems. Effective teams plan, rehearse, distribute the workload, and debrief. Surgeons doing minimally invasive surgery have a unique opportunity to incorporate the principles of safety into the development of their discipline.

  5. Waveform Design for Multimedia Airborne Networks: Robust Multimedia Data Transmission in Cognitive Radio Networks

    DTIC Science & Technology

    2011-03-01

    at the sensor. According to Candes, Tao and Romberg [1], a small number of random projections of a signal that is compressible is all the... [The remainder of this excerpt is residue from processing-chain diagrams: original (noisy) signal → random projection of signal → transform (DWT, FFT, or DCT) → solve the minimization problem → reconstruct signal → channel (AWGN or noiseless) → de-noise signal.]

  6. Minimal subspace rotation on the Stiefel manifold for stabilization and enhancement of projection-based reduced order models for the compressible Navier–Stokes equations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Balajewicz, Maciej; Tezaur, Irina; Dowell, Earl

    For a projection-based reduced order model (ROM) of a fluid flow to be stable and accurate, the dynamics of the truncated subspace must be taken into account. This paper proposes an approach for stabilizing and enhancing projection-based fluid ROMs in which truncated modes are accounted for a priori via a minimal rotation of the projection subspace. Attention is focused on the full non-linear compressible Navier–Stokes equations in specific volume form as a step toward a more general formulation for problems with generic non-linearities. Unlike traditional approaches, no empirical turbulence modeling terms are required, and consistency between the ROM and the Navier–Stokes equation from which the ROM is derived is maintained. Mathematically, the approach is formulated as a trace minimization problem on the Stiefel manifold. The reproductive as well as predictive capabilities of the method are evaluated on several compressible flow problems, including a problem involving laminar flow over an airfoil with a high angle of attack, and a channel-driven cavity flow problem.

  7. Minimal subspace rotation on the Stiefel manifold for stabilization and enhancement of projection-based reduced order models for the compressible Navier–Stokes equations

    DOE PAGES

    Balajewicz, Maciej; Tezaur, Irina; Dowell, Earl

    2016-05-25

    For a projection-based reduced order model (ROM) of a fluid flow to be stable and accurate, the dynamics of the truncated subspace must be taken into account. This paper proposes an approach for stabilizing and enhancing projection-based fluid ROMs in which truncated modes are accounted for a priori via a minimal rotation of the projection subspace. Attention is focused on the full non-linear compressible Navier–Stokes equations in specific volume form as a step toward a more general formulation for problems with generic non-linearities. Unlike traditional approaches, no empirical turbulence modeling terms are required, and consistency between the ROM and the Navier–Stokes equation from which the ROM is derived is maintained. Mathematically, the approach is formulated as a trace minimization problem on the Stiefel manifold. The reproductive as well as predictive capabilities of the method are evaluated on several compressible flow problems, including a problem involving laminar flow over an airfoil with a high angle of attack, and a channel-driven cavity flow problem.

  8. Stress-Constrained Structural Topology Optimization with Design-Dependent Loads

    NASA Astrophysics Data System (ADS)

    Lee, Edmund

    Topology optimization is commonly used to distribute a given amount of material to obtain the stiffest structure under predefined fixed loads. The present work investigates the effect of applying stress constraints to topology optimization for problems with design-dependent loading, such as self-weight and pressure. In order to apply pressure loading, a material boundary identification scheme is proposed, iteratively connecting points of equal density. In previous research, design-dependent loading problems have been limited to compliance minimization. The present study employs a more practical approach by minimizing mass subject to failure constraints, and uses a stress relaxation technique to avoid stress constraint singularities. The results show that these design-dependent loading problems may converge to a local minimum when stress constraints are enforced. Comparisons between compliance minimization solutions and stress-constrained solutions are also given. The resulting topologies of these two solutions are usually vastly different, demonstrating the need for stress-constrained topology optimization.

  9. System for solving diagnosis and hitting set problems

    NASA Technical Reports Server (NTRS)

    Vatan, Farrokh (Inventor); Fijany, Amir (Inventor)

    2007-01-01

    The diagnosis problem arises when a system's actual behavior contradicts the expected behavior, thereby exhibiting symptoms (a collection of conflict sets). System diagnosis is then the task of identifying faulty components that are responsible for anomalous behavior. To solve the diagnosis problem, the present invention describes a method for finding the minimal set of faulty components (minimal diagnosis set) that explain the conflict sets. The method includes acts of creating a matrix of the collection of conflict sets, and then creating nodes from the matrix such that each node is a node in a search tree. A determination is made as to whether each node is a leaf node or has any children nodes. If any given node has children nodes, then the node is split until all nodes are leaf nodes. Information gathered from the leaf nodes is used to determine the minimal diagnosis set.
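
    The minimal diagnosis set described above is a minimum hitting set of the conflict sets. A brute-force sketch of that equivalence follows; it is not the patent's search-tree method, which is designed to do better than this exhaustive search over candidate sizes.

```python
from itertools import combinations

def minimal_diagnosis(conflict_sets):
    """Smallest set of components that intersects (hits) every conflict
    set, found by trying candidate sets of increasing size.  Exponential
    in the worst case; components must be mutually comparable for sorting."""
    components = sorted(set().union(*conflict_sets))
    for size in range(1, len(components) + 1):
        for candidate in combinations(components, size):
            # a diagnosis must explain (hit) every observed conflict
            if all(set(candidate) & c for c in conflict_sets):
                return set(candidate)
    return set()
```

    Because candidates are tried in order of increasing size, the first hit returned is guaranteed to be of minimum cardinality.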

  10. Superiorization with level control

    NASA Astrophysics Data System (ADS)

    Cegielski, Andrzej; Al-Musallam, Fadhel

    2017-04-01

    The convex feasibility problem is to find a common point of a finite family of closed convex subsets. In many applications one requires more, namely finding a common point of closed convex subsets which minimizes a continuous convex function. The latter requirement leads to an application of the superiorization methodology, which sits between methods for the convex feasibility problem and convex constrained minimization. Inspired by the superiorization idea, we introduce a method which sequentially applies a long-step algorithm to a sequence of convex feasibility problems; the method employs quasi-nonexpansive operators as well as subgradient projections with level control and does not require evaluation of the metric projection. We replace a perturbation of the iterates (applied in the superiorization methodology) by a perturbation of the current level in minimizing the objective function. We consider the method in Euclidean space in order to guarantee strong convergence, although the method is well defined in a Hilbert space.

  11. Delivery Path Length and Holding Tree Minimization Method of Securities Delivery among the Registration Agencies Connected as Non-Tree

    NASA Astrophysics Data System (ADS)

    Shimamura, Atsushi; Moritsu, Toshiyuki; Someya, Harushi

    To dematerialize securities such as stocks or corporate bonds, the securities were registered to accounts in registration agencies connected as a tree. This tree structure had an advantage in managing securities that are issued in large amounts and with a limited number of brands. But when securities such as account receivables or advance notes are dematerialized, the number of brands of securities increases extremely. In this case, managing the securities with a tree structure becomes very difficult because information concentrates at the root of the tree. To resolve this problem, a graph structure is assumed instead of a tree structure. When the securities are kept in a tree structure, the delivery path of the securities is unique, but when they are kept in a graph structure, the delivery path is not unique. In this report, we describe the requirements on the delivery path of securities, and we describe a method for selecting the path.

  12. Gasdynamic Mirror Fusion Propulsion Experiment

    NASA Technical Reports Server (NTRS)

    Emrich, William J., Jr.; Rodgers, Stephen L. (Technical Monitor)

    2001-01-01

    Nuclear fusion appears to be the most promising concept for producing extremely high specific impulse rocket engines. One particular fusion concept which seems to be particularly well suited for fusion propulsion applications is the gasdynamic mirror (GDM). This device would operate at much higher plasma densities and with much larger L/D ratios than previous mirror machines. Several advantages accrue from such a design. First, the high L/D ratio minimizes to a large extent certain magnetic curvature effects which lead to plasma instabilities causing a loss of plasma confinement. Second, the high plasma density will result in the plasma behaving much more like a conventional fluid, with a mean free path shorter than the length of the device. This characteristic helps reduce problems associated with "loss cone" microinstabilities. An experimental GDM device is currently being constructed at the NASA Marshall Space Flight Center to provide an initial assessment of the feasibility of this type of propulsion system. Initial experiments are expected to commence in the late fall of 2000.

  13. Effects of human running cadence and experimental validation of the bouncing ball model

    NASA Astrophysics Data System (ADS)

    Bencsik, László; Zelei, Ambrus

    2017-05-01

    The biomechanical analysis of human running is a complex problem because of the large number of parameters and degrees of freedom. However, simplified models can be constructed, which are usually characterized by some fundamental parameters, like step length, foot strike pattern and cadence. The bouncing ball model of human running is analysed theoretically and experimentally in this work. It is a minimally complex dynamic model when the aim is to estimate the energy cost of running and the tendency of ground-foot impact intensity as a function of cadence. The model shows that cadence has a direct effect on the energy efficiency of running and on ground-foot impact intensity: higher cadence implies lower risk of injury and better energy efficiency. An experimental data collection of 121 amateur runners is presented. The experimental results validate the model and provide information about the walk-to-run transition speed and the typical development of cadence and grounded phase ratio in different running speed ranges.

  14. Scaling and entropy in p-median facility location along a line

    NASA Astrophysics Data System (ADS)

    Gastner, Michael T.

    2011-09-01

    The p-median problem is a common model for optimal facility location. The task is to place p facilities (e.g., warehouses or schools) in a heterogeneously populated space such that the average distance from a person's home to the nearest facility is minimized. Here we study the special case where the population lives along a line (e.g., a road or a river). If facilities are optimally placed, the length of the line segment served by a facility is inversely proportional to the square root of the population density. This scaling law is derived analytically and confirmed for concrete numerical examples of three US interstate highways and the Mississippi River. If facility locations are permitted to deviate from the optimum, the number of possible solutions increases dramatically. Using Monte Carlo simulations, we compute how scaling is affected by an increase in the average distance to the nearest facility. We find that the scaling exponents change and are most sensitive near the optimum facility distribution.
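
    For a discrete version of the problem, the one-dimensional structure makes exact optimization tractable: an optimal solution serves contiguous groups of the sorted points, each from its median, so a dynamic program over group boundaries solves the p-median on a line. This is a sketch under the assumption that facilities sit at population points; it is not the paper's Monte Carlo analysis.

```python
def p_median_line(points, p):
    """Exact p-median for points on a line.  Sorted points are served in
    contiguous groups, each from a facility at the group's median point,
    so a dynamic program over group boundaries finds the minimum total
    distance from each point to its nearest facility."""
    pts = sorted(points)
    n = len(pts)

    def group_cost(i, j):
        # cost of serving pts[i..j] from a facility at their median point
        m = pts[(i + j) // 2]
        return sum(abs(x - m) for x in pts[i:j + 1])

    INF = float('inf')
    # best[k][j]: minimum cost of serving the first j points with k facilities
    best = [[INF] * (n + 1) for _ in range(p + 1)]
    best[0][0] = 0.0
    for k in range(1, p + 1):
        for j in range(1, n + 1):
            for i in range(j):
                c = best[k - 1][i] + group_cost(i, j - 1)
                if c < best[k][j]:
                    best[k][j] = c
    return best[p][n]
```

    The runtime is O(p n^2) group-boundary choices times O(n) per group cost, which is ample for instances of the size discussed here.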

  15. The psychiatric nurse specialist: a valuable asset in the general hospital.

    PubMed

    Fife, B; Lemler, S

    1983-04-01

    In summary, what are the ways in which the psychiatric/mental health clinical specialist contributes to cost-effectiveness, the professional growth of nursing staff, and quality patient care in the general hospital setting? All services of the psychiatric/mental health clinical specialist are ultimately directed toward increasing the effectiveness with which staff can deliver care. This goal is accomplished by helping staff nurses maximize their knowledge, by providing needed educational opportunities, by promoting the use of a holistic model of care, and by helping staff cope with their own stress. In our experience, high quality care that meets the physiological, psychological, and sociological needs of patients decreases the length of the hospital stay, prevents repeated hospitalizations, and minimizes the development of psychosocial problems secondary to the illness. With the necessary support and cooperation from administration, this clinical specialist role reduces health care costs, promotes a higher level of functioning in patients and their families, and increases the level of job satisfaction for the staff who provide direct bedside care.

  16. A lexicographic weighted Tchebycheff approach for multi-constrained multi-objective optimization of the surface grinding process

    NASA Astrophysics Data System (ADS)

    Khalilpourazari, Soheyl; Khalilpourazary, Saman

    2017-05-01

    In this article, a multi-objective mathematical model is developed to minimize total time and cost while maximizing the production rate and surface finish quality in the grinding process. The model aims to determine optimal values of the decision variables considering process constraints. A lexicographic weighted Tchebycheff approach is developed to obtain efficient Pareto-optimal solutions of the problem in both rough and finished conditions. Utilizing a polyhedral branch-and-cut algorithm, the lexicographic weighted Tchebycheff model of the proposed multi-objective model is solved using GAMS software. The Pareto-optimal solutions provide a proper trade-off between conflicting objective functions, which helps the decision maker to select the best values for the decision variables. Sensitivity analyses are performed to determine the effect of changes in the grain size, grinding ratio, feed rate, labour cost per hour, length of workpiece, wheel diameter and downfeed on each value of the objective function.

  17. [Effectiveness of intermittent pneumatic compression (IPC) on thrombosis prophylaxis: a systematic literature review].

    PubMed

    Rohrer, Ottilia; Eicher, Manuela

    2006-06-01

    Despite changes in patient demographics and shortened length of hospital stay, deep vein thrombosis (DVT) remains a major health care problem which may lead to a variety of other high-risk complications. Current treatment guidelines focus on preventive measures. Beside drug therapy, physical measures executed by nursing professionals exist, the outcomes of which are discussed controversially. Based on 25 studies that were found in MEDLINE and the Cochrane library, this systematic literature review identifies the effectiveness of intermittent pneumatic compression (IPC) on thrombosis prophylaxis. In almost all medical settings IPC contributes to a significant reduction of the incidence of DVT. At the same time, IPC has minimal negative side effects and is also cost effective. Correct application of IPC and patient compliance are essential to achieve its effectiveness. An increased awareness within the healthcare team in identifying the risk for and implementing measures against DVT is needed. Guidelines need to be developed in order to improve the effectiveness of thrombosis prophylaxis with the implementation of IPC.

  18. Preparation and properties of pure, full-length IclR protein of Escherichia coli. Use of time-of-flight mass spectrometry to investigate the problems encountered.

    PubMed Central

    Donald, L. J.; Chernushevich, I. V.; Zhou, J.; Verentchikov, A.; Poppe-Schriemer, N.; Hosfield, D. J.; Westmore, J. B.; Ens, W.; Duckworth, H. W.; Standing, K. G.

    1996-01-01

    IclR protein, the repressor of the aceBAK operon of Escherichia coli, has been examined by time-of-flight mass spectrometry, with ionization by matrix assisted laser desorption or by electrospray. The purified protein was found to have a smaller mass than that predicted from the base sequence of the cloned iclR gene. Additional measurements were made on mixtures of peptides derived from IclR by treatment with trypsin and cyanogen bromide. They showed that the amino acid sequence is that predicted from the gene sequence, except that the protein has suffered truncation by removal of the N-terminal eight or, in some cases, nine amino acid residues. The peptide bond whose hydrolysis would remove eight residues is a typical target for the E. coli protease OmpT. We find that, by taking precautions to minimize OmpT proteolysis, or by eliminating it through mutation of the host strain, we can isolate full-length IclR protein (lacking only the N-terminal methionine residue). Full-length IclR is a much better DNA-binding protein than the truncated versions: it binds the aceBAK operator sequence 44-fold more tightly, presumably because of additional contacts that the N-terminal residues make with the DNA. Our experience thus demonstrates the advantages of using mass spectrometry to characterize newly purified proteins produced from cloned genes, especially where proteolysis or other covalent modification is a concern. This technique gives mass spectra from complex peptide mixtures that can be analyzed completely, without any fractionation of the mixtures, by reference to the amino acid sequence inferred from the base sequence of the cloned gene. PMID:8844850

  19. Coping and sickness absence

    PubMed Central

    Schaufeli, Wilmar B.; van Dijk, Frank J. H.; Blonk, Roland W. B.

    2007-01-01

    Objectives: The aim of this study is to examine the role of coping styles in sickness absence. In line with findings that contrast the reactive–passive focused strategies, problem-solving strategies are generally associated with positive results in terms of well-being and overall health outcomes; our hypothesis is that such strategies are positively related to a low frequency of sickness absence and with short lengths (total number of days absent) and durations (mean duration per spell). Methods: Using a prospective design, employees’ (N = 3,628) responses on a self-report coping inventory are used to predict future registered sickness absence (i.e. frequency, length, duration, and median time before the onset of a new sick leave period). Results and conclusions: In accordance with our hypothesis, and after adjustment for potential confounders, employees with an active problem-solving coping strategy are less likely to drop out because of sickness absence in terms of frequency, length (longer than 14 days), and duration (more than 7 days) of sickness absence. This positive effect is observed in the case of seeking social support only for the duration of sickness absence and in the case of palliative reaction only for the length and frequency of absence. In contrast, an avoidant coping style, representing a reactive–passive strategy, significantly increases the likelihood of frequent absences, as well as the length and duration of sickness absence. Expression of emotions, representing another reactive–passive strategy, has no effect on future sickness absenteeism. The median time before the onset of a new episode of absenteeism is significantly extended for active problem-solving and reduced for avoidance and for a palliative response. The results of the present study support the notion that problem-solving coping and reactive–passive strategies are inextricably connected to the frequency, duration, length and onset of sickness absence. In particular, active problem-solving decreases the chance of future sickness absence. PMID:17701200

  20. Coping and sickness absence.

    PubMed

    van Rhenen, Willem; Schaufeli, Wilmar B; van Dijk, Frank J H; Blonk, Roland W B

    2008-02-01

    The aim of this study is to examine the role of coping styles in sickness absence. In line with findings that contrast the reactive-passive focused strategies, problem-solving strategies are generally associated with positive results in terms of well-being and overall health outcomes; our hypothesis is that such strategies are positively related to a low frequency of sickness absence and with short lengths (total number of days absent) and durations (mean duration per spell). Using a prospective design, employees' (N = 3,628) responses on a self-report coping inventory are used to predict future registered sickness absence (i.e. frequency, length, duration, and median time before the onset of a new sick leave period). In accordance with our hypothesis, and after adjustment for potential confounders, employees with an active problem-solving coping strategy are less likely to drop out because of sickness absence in terms of frequency, length (longer than 14 days), and duration (more than 7 days) of sickness absence. This positive effect is observed in the case of seeking social support only for the duration of sickness absence and in the case of palliative reaction only for the length and frequency of absence. In contrast, an avoidant coping style, representing a reactive-passive strategy, significantly increases the likelihood of frequent absences, as well as the length and duration of sickness absence. Expression of emotions, representing another reactive-passive strategy, has no effect on future sickness absenteeism. The median time before the onset of a new episode of absenteeism is significantly extended for active problem-solving and reduced for avoidance and for a palliative response. The results of the present study support the notion that problem-solving coping and reactive-passive strategies are inextricably connected to the frequency, duration, length and onset of sickness absence. In particular, active problem-solving decreases the chance of future sickness absence.

  1. Critical transition in the constrained traveling salesman problem.

    PubMed

    Andrecut, M; Ali, M K

    2001-04-01

We investigate the finite-size scaling of the mean optimal tour length as a function of the density of obstacles in a constrained variant of the traveling salesman problem (TSP). Computational experiments reveal a critical transition (at ρ_c ≈ 85%) in the dependence of the excess of the mean optimal tour length over the Held-Karp lower bound on the density of obstacles.

  2. Solving the Container Stowage Problem (CSP) using Particle Swarm Optimization (PSO)

    NASA Astrophysics Data System (ADS)

    Matsaini; Santosa, Budi

    2018-04-01

The Container Stowage Problem (CSP) is the problem of arranging containers on ships subject to rules such as total weight, single-stack weight, destination, equilibrium, and placement of containers on the vessel. It is a combinatorial, NP-hard problem that is impractical to solve by enumeration, so metaheuristics are preferred. The objective is to minimize the amount of shifting so that unloading time is minimized. Particle Swarm Optimization (PSO) is proposed to solve the problem, combined with additional steps: stack-position change rules, stack changes based on destination, and stack changes based on the weight class of the stacks (light, medium, and heavy). The proposed method was applied to five different cases, and the results were compared to Bee Swarm Optimization (BSO) and a heuristic method. Relative to the heuristic, PSO yielded a mean gap of 0.87% and a time gap of 60 seconds, while BSO yielded a mean gap of 2.98% and a time gap of 459.6 seconds.
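The abstract above applies PSO to a discrete stowage encoding; as a minimal illustration of the underlying metaheuristic, the sketch below implements a standard continuous PSO (inertia weight plus cognitive and social pulls) on a toy sphere function. All parameter values and names are illustrative defaults, not those of the paper.

```python
import random

def pso(objective, dim, bounds, n_particles=20, iters=100,
        w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimize `objective` over a box with a basic inertia-weight PSO."""
    rng = random.Random(seed)
    lo, hi = bounds
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                    # personal best positions
    pbest_val = [objective(p) for p in pos]
    g = min(range(n_particles), key=pbest_val.__getitem__)
    gbest, gbest_val = pbest[g][:], pbest_val[g]   # global best so far
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            val = objective(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Toy continuous objective: sphere function, minimum 0 at the origin.
best, best_val = pso(lambda x: sum(v * v for v in x), dim=3, bounds=(-5.0, 5.0))
```

For a stowage instance, the continuous positions would have to be mapped back to a discrete container arrangement, which is where the paper's stack-change rules come in.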

  3. A linear programming approach to max-sum problem: a review.

    PubMed

    Werner, Tomás

    2007-07-01

    The max-sum labeling problem, defined as maximizing a sum of binary (i.e., pairwise) functions of discrete variables, is a general NP-hard optimization problem with many applications, such as computing the MAP configuration of a Markov random field. We review a not widely known approach to the problem, developed by Ukrainian researchers Schlesinger et al. in 1976, and show how it contributes to recent results, most importantly, those on the convex combination of trees and tree-reweighted max-product. In particular, we review Schlesinger et al.'s upper bound on the max-sum criterion, its minimization by equivalent transformations, its relation to the constraint satisfaction problem, the fact that this minimization is dual to a linear programming relaxation of the original problem, and the three kinds of consistency necessary for optimality of the upper bound. We revisit problems with Boolean variables and supermodular problems. We describe two algorithms for decreasing the upper bound. We present an example application for structural image analysis.
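For the special case where the variables form a chain (a tree without branching), the max-sum problem is solved exactly by dynamic programming rather than by the LP relaxation the review discusses. The sketch below shows that simple case; the `unary`/`pairwise` score layout is an assumption for illustration.

```python
def max_sum_chain(unary, pairwise):
    """Exact max-sum labeling on a chain of variables via dynamic programming.

    unary[i][k]       : score of giving label k to variable i
    pairwise[i][k][l] : score of labels (k, l) on the edge (i, i+1)
    Returns (best total score, best labeling).
    """
    n = len(unary)
    m = [list(unary[0])]   # m[i][l]: best score of prefix 0..i ending in label l
    back = []
    for i in range(1, n):
        row, b = [], []
        for l in range(len(unary[i])):
            best_k = max(range(len(unary[i - 1])),
                         key=lambda k: m[-1][k] + pairwise[i - 1][k][l])
            row.append(m[-1][best_k] + pairwise[i - 1][best_k][l] + unary[i][l])
            b.append(best_k)
        m.append(row)
        back.append(b)
    last = max(range(len(m[-1])), key=m[-1].__getitem__)
    labeling = [last]
    for b in reversed(back):       # trace back pointers to recover the labeling
        labeling.append(b[labeling[-1]])
    labeling.reverse()
    return m[-1][last], labeling

# Three binary variables; pairwise scores reward agreeing labels.
agree = [[1, 0], [0, 1]]
score, labels = max_sum_chain([[0, 1], [0, 0], [1, 0]], [agree, agree])
```

On general (loopy) graphs this DP is no longer exact, which is precisely why the upper bounds and LP relaxation reviewed above are needed.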

  4. A Parallel Biological Optimization Algorithm to Solve the Unbalanced Assignment Problem Based on DNA Molecular Computing.

    PubMed

    Wang, Zhaocai; Pu, Jun; Cao, Liling; Tan, Jian

    2015-10-23

The unbalanced assignment problem (UAP) is to optimally assign n jobs to m individuals (m < n) such that minimum cost or maximum profit is obtained. It is a vitally important NP-complete problem in operations management and applied mathematics, with numerous real-life applications. In this paper, we present a new parallel DNA algorithm for solving the unbalanced assignment problem using DNA molecular operations. We design flexible-length DNA strands representing the different jobs and individuals, take appropriate steps, and obtain the solutions of the UAP in the proper length range in O(mn) time. We extend the application of DNA molecular operations and exploit their parallelism to reduce the complexity of the computation.
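As a conventional point of comparison for the DNA algorithm's O(mn) molecular steps, an exhaustive search over the m^n possible assignments can be sketched as below. This is illustrative only; the constraint that each individual receives at least one job is an assumption of the sketch.

```python
from itertools import product

def uap_min_cost(cost):
    """Exhaustive solver for a small unbalanced assignment problem.

    cost[i][j] is the cost of giving job j to individual i (m rows,
    n columns, m < n).  Every job goes to exactly one individual, every
    individual handles at least one job.  Returns (min cost, assignment),
    where assignment[j] is the individual handling job j.
    """
    m, n = len(cost), len(cost[0])
    best_val, best_asg = float("inf"), None
    for asg in product(range(m), repeat=n):   # m**n candidate assignments
        if len(set(asg)) < m:                 # someone got no job: infeasible
            continue
        val = sum(cost[i][j] for j, i in enumerate(asg))
        if val < best_val:
            best_val, best_asg = val, asg
    return best_val, best_asg

# Two individuals, three jobs: the optimal total cost here is 13.
best_cost, assignment = uap_min_cost([[4, 2, 8], [4, 3, 7]])
```

The exponential m^n enumeration is exactly the search space that the paper's parallel molecular operations explore simultaneously.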

  5. Optimal transfers between libration-point orbits in the elliptic restricted three-body problem

    NASA Astrophysics Data System (ADS)

    Hiday, Lisa Ann

    1992-09-01

    A strategy is formulated to design optimal impulsive transfers between three-dimensional libration-point orbits in the vicinity of the interior L(1) libration point of the Sun-Earth/Moon barycenter system. Two methods of constructing nominal transfers, for which the fuel cost is to be minimized, are developed; both inferior and superior transfers between two halo orbits are considered. The necessary conditions for an optimal transfer trajectory are stated in terms of the primer vector. The adjoint equation relating reference and perturbed trajectories in this formulation of the elliptic restricted three-body problem is shown to be distinctly different from that obtained in the analysis of trajectories in the two-body problem. Criteria are established whereby the cost on a nominal transfer can be improved by the addition of an interior impulse or by the implementation of coastal arcs in the initial and final orbits. The necessary conditions for the local optimality of a time-fixed transfer trajectory possessing additional impulses are satisfied by requiring continuity of the Hamiltonian and the derivative of the primer vector at all interior impulses. The optimality of a time-free transfer containing coastal arcs is surmised by examination of the slopes at the endpoints of a plot of the magnitude of the primer vector over the duration of the transfer path. If the initial and final slopes of the primer magnitude are zero, the transfer trajectory is optimal; otherwise, the execution of coasts is warranted. The position and timing of each interior impulse applied to a time-fixed transfer as well as the direction and length of coastal periods implemented on a time-free transfer are specified by the unconstrained minimization of the appropriate variation in cost utilizing a multivariable search technique. 
Although optimal solutions in some instances are elusive, the time-fixed and time-free optimization algorithms prove to be very successful in diminishing costs on nominal transfer trajectories. The inclusion of coastal arcs on time-free superior and inferior transfers results in significant modification of the transfer time of flight caused by shifts in departure and arrival locations on the halo orbits.

  6. Trends and perioperative outcomes for laparoscopic and robotic nephrectomy using the National Surgical Quality Improvement Program (NSQIP) database.

    PubMed

    Liu, Jen-Jane; Leppert, John T; Maxwell, Bryan G; Panousis, Periklis; Chung, Benjamin I

    2014-05-01

We sought to examine the trends in perioperative outcomes of kidney cancer surgery stratified by type (radical nephrectomy [RN] vs. partial nephrectomy [PN]) and approach (open vs. minimally invasive). We queried the National Surgical Quality Improvement Program database to identify kidney cancer operations performed from 2005 to 2011. We examined 30-day perioperative outcomes including operative time, transfusion rate, length of stay, major morbidity (cardiovascular, pulmonary, renal, and infectious), and mortality. A total of 2,902 PN and 5,459 RN cases were identified. The use of PN increased over time, accounting for 39% of all nephrectomies in 2011. Minimally invasive approaches also increased over time for both RN and PN. Open surgery was associated with increased length of stay, receipt of transfusion, major complications, and perioperative mortality. Resident involvement and open approach were independent predictors of major complications for both PN and RN. Additionally, the presence of a medical comorbidity was also a risk factor for complications after RN. The overall complication rates decreased for all approaches over the study period. Minimally invasive approaches to kidney cancer surgery have increased, with favorable outcomes. The safety of open and minimally invasive PN improved significantly over the study period. Although pathologic features cannot be determined from this data set, these data show that complications from renal surgical procedures are decreasing in an era of increasing use. © 2013 Published by Elsevier Inc.

  7. Increased span length for the MGS long-span guardrail system.

    DOT National Transportation Integrated Search

    2014-12-01

    Long-span guardrail systems have been recognized as an effective means of shielding low-fill culverts while : minimizing construction efforts and limiting culvert damage and repair. The current MGS long-span design provided the : capability to span u...

  8. Exact and Heuristic Minimization of the Average Path Length in Decision Diagrams

    DTIC Science & Technology

    2005-01-01

Shinobu Nagayama, Alan … [garbled scan; only fragments of the title page are legible]. The authors thank the reviewers for constructive comments. References: [1] Ashar, P. and Malik, S. (1995). Fast functional simulation using branching programs, ICCAD'95, 408–412. [2] …

  9. Parenting, corpus callosum, and executive function in preschool children.

    PubMed

    Kok, Rianne; Lucassen, Nicole; Bakermans-Kranenburg, Marian J; van IJzendoorn, Marinus H; Ghassabian, Akhgar; Roza, Sabine J; Govaert, Paul; Jaddoe, Vincent W; Hofman, Albert; Verhulst, Frank C; Tiemeier, Henning

    2014-01-01

In this longitudinal population-based study (N = 544), we investigated whether early parenting and corpus callosum length predict child executive function abilities at 4 years of age. The length of the corpus callosum in infancy was measured using postnatal cranial ultrasounds at 6 weeks of age. At 3 years, two aspects of parenting were observed: maternal sensitivity during a teaching task and maternal discipline style during a discipline task. Parents rated executive function problems at 4 years of age in five domains (inhibition, shifting, emotional control, working memory, and planning/organizing) using the Behavior Rating Inventory of Executive Function-Preschool Version. Maternal sensitivity predicted fewer executive function problems at preschool age. A significant interaction was found between corpus callosum length in infancy and maternal use of positive discipline in predicting child inhibition problems: the association between a relatively shorter corpus callosum in infancy and child inhibition problems was reduced in children who experienced more positive discipline. Our results point to the buffering potential of positive parenting for children with biological vulnerability.

  10. Review: Rusticle Formation on the RMS Titanic and the Potential Influence of Oceanography

    NASA Astrophysics Data System (ADS)

    Salazar, Maxsimo; Little, Brenda

    2017-04-01

Meter-length, iron-rich rusticles on the RMS Titanic contain bacteria that reportedly mobilize iron from the ship's structure at a rate that will reduce the wreck to rust within decades. Other sunken ships, such as the World War II shipwrecks in the Gulf of Mexico (GOM), are similarly covered; however, at the GOM sites, rusticles are only centimeters in length. Minimal differences in water temperature (a few °C) between the two sites and comparable exposure times from wreck to discovery cannot explain the extreme difference in rusticle length. One possible explanation for the observed difference in rusticle size is the differing amount of dissolved or colloidal iron at the two locations.

  11. Percutaneous epiphysiodesis using transphyseal screws (PETS): prospective case study and review.

    PubMed

    Nouth, Fred; Kuo, Leonard A

    2004-01-01

    Percutaneous epiphysiodesis using transphyseal screws (PETS) is a relatively new procedure being used for the correction of moderate leg length discrepancy and angular deformities in children. Over a mean follow-up of 2.4 years the authors followed prospectively 18 patients who underwent PETS. Nine had correction of angular deformity and nine had leg length inequality. The average reduction in leg length discrepancy was from 3.33 to 1.36 cm. The average improvement in angular deformity was 69%. This quick, minimally invasive, and potentially reversible procedure has the added benefits of a short hospital stay with low morbidity, making it a suitable alternative to the more traditional methods of epiphysiodesis.

  12. Do Branch Lengths Help to Locate a Tree in a Phylogenetic Network?

    PubMed

    Gambette, Philippe; van Iersel, Leo; Kelk, Steven; Pardi, Fabio; Scornavacca, Celine

    2016-09-01

    Phylogenetic networks are increasingly used in evolutionary biology to represent the history of species that have undergone reticulate events such as horizontal gene transfer, hybrid speciation and recombination. One of the most fundamental questions that arise in this context is whether the evolution of a gene with one copy in all species can be explained by a given network. In mathematical terms, this is often translated in the following way: is a given phylogenetic tree contained in a given phylogenetic network? Recently this tree containment problem has been widely investigated from a computational perspective, but most studies have only focused on the topology of the phylogenies, ignoring a piece of information that, in the case of phylogenetic trees, is routinely inferred by evolutionary analyses: branch lengths. These measure the amount of change (e.g., nucleotide substitutions) that has occurred along each branch of the phylogeny. Here, we study a number of versions of the tree containment problem that explicitly account for branch lengths. We show that, although length information has the potential to locate more precisely a tree within a network, the problem is computationally hard in its most general form. On a positive note, for a number of special cases of biological relevance, we provide algorithms that solve this problem efficiently. This includes the case of networks of limited complexity, for which it is possible to recover, among the trees contained by the network with the same topology as the input tree, the closest one in terms of branch lengths.

  13. Two Methods for Efficient Solution of the Hitting-Set Problem

    NASA Technical Reports Server (NTRS)

    Vatan, Farrokh; Fijany, Amir

    2005-01-01

A paper addresses much of the same subject matter as that of Fast Algorithms for Model-Based Diagnosis (NPO-30582), which appears elsewhere in this issue of NASA Tech Briefs. However, in the paper, the emphasis is more on the hitting-set problem (also known as the transversal problem), which is well known among experts in combinatorics. The authors' primary interest in the hitting-set problem lies in its connection to the diagnosis problem: it is a theorem of model-based diagnosis that, in the set-theory representation of the components of a system, the minimal diagnoses of a system are the minimal hitting sets of the system. In the paper, the hitting-set problem (and, hence, the diagnosis problem) is translated from a combinatorial to a computational problem by mapping it onto the Boolean-satisfiability and integer-programming problems. The paper goes on to describe developments nearly identical to those summarized in the cited companion NASA Tech Briefs article, including the utilization of Boolean-satisfiability and integer-programming techniques to reduce the computation time and/or memory needed to solve the hitting-set problem.
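For small instances, the cited theorem (minimal diagnoses = minimal hitting sets) can be checked directly with a brute-force enumeration; the sketch below is illustrative only and is unrelated to the Boolean-satisfiability and integer-programming mappings the paper develops.

```python
from itertools import chain, combinations

def minimal_hitting_sets(conflicts):
    """Enumerate the minimal hitting sets of a family of sets.

    A hitting set intersects every set in `conflicts`; it is minimal if no
    proper subset is also a hitting set.  Enumerating subsets by increasing
    size guarantees that any superset of an already-found hitting set is
    recognized and skipped.
    """
    universe = sorted(set(chain.from_iterable(conflicts)))

    def hits(s):
        return all(s & c for c in conflicts)

    found = []
    for r in range(len(universe) + 1):
        for combo in combinations(universe, r):
            s = set(combo)
            if hits(s) and not any(f <= s for f in found):
                found.append(s)
    return found

# In a diagnosis setting each conflict set lists components that cannot all
# be healthy; the minimal hitting sets are the minimal diagnoses.
result = minimal_hitting_sets([{1, 2}, {2, 3}])
```

The enumeration is exponential in the size of the universe, which is why the paper resorts to SAT and integer-programming solvers for realistic systems.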

  14. Sequentially reweighted TV minimization for CT metal artifact reduction.

    PubMed

    Zhang, Xiaomeng; Xing, Lei

    2013-07-01

Metal artifact reduction has long been an important topic in x-ray CT image reconstruction. In this work, the authors propose an iterative method that sequentially minimizes a reweighted total variation (TV) of the image and produces substantially artifact-reduced reconstructions. A sequentially reweighted TV minimization algorithm is proposed to fully exploit the sparseness of image gradients (IG). The authors first formulate a constrained optimization model that minimizes a weighted TV of the image, subject to the constraint that the estimated projection data are within a specified tolerance of the available projection measurements, with image non-negativity enforced. The authors then solve a sequence of weighted TV minimization problems in which the weights used for the next iteration are computed from the current solution. Using the complete projection data, the algorithm first reconstructs an image from which a binary metal image can be extracted. Forward projection of the binary image identifies metal traces in the projection space. The metal-free background image is then reconstructed from the metal-trace-excluded projection data by employing a different set of weights. Each minimization problem is solved using a gradient method that alternates projection-onto-convex-sets and steepest descent. A series of simulation and experimental studies are performed to evaluate the proposed approach. Our study shows that the sequentially reweighted scheme, by altering a single parameter in the weighting function, flexibly controls the sparsity of the IG and reconstructs artifact-free images in a two-stage process. It successfully produces images with significantly reduced streak artifacts, suppressed noise, and well-preserved contrast and edge properties. Sequentially reweighted TV minimization provides a systematic approach to suppressing CT metal artifacts. The technique can also be generalized to other "missing data" problems in CT image reconstruction.
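The reweighting idea, recomputing the weights from the current solution so that small gradients are penalized more on the next pass, can be shown on a toy 1-D denoising problem. This is only a sketch of the general reweighted-TV scheme, not the authors' CT algorithm; the smoothing constant, step size, and fidelity weight are illustrative choices.

```python
import math

def reweighted_tv_denoise(f, mu=2.0, delta=0.1, eps=0.1,
                          outer=3, inner=400, step=0.002):
    """Sequentially reweighted TV minimization sketch in 1-D.

    Inner loop: gradient descent on the smoothed objective
        sum_i w[i]*sqrt((u[i+1]-u[i])^2 + delta^2) + (mu/2)*sum_i (u[i]-f[i])^2
    Outer loop: recompute w[i] = 1/(|u[i+1]-u[i]| + eps) from the current
    solution, so nearly flat regions are penalized more strongly next pass.
    """
    n = len(f)
    u = list(f)
    w = [1.0] * (n - 1)
    for _ in range(outer):
        for _ in range(inner):
            grad = [mu * (u[i] - f[i]) for i in range(n)]   # fidelity term
            for i in range(n - 1):
                d = u[i + 1] - u[i]
                g = w[i] * d / math.sqrt(d * d + delta * delta)
                grad[i] -= g
                grad[i + 1] += g
            u = [u[i] - step * grad[i] for i in range(n)]
        w = [1.0 / (abs(u[i + 1] - u[i]) + eps) for i in range(n - 1)]
    return u

def tv(u):
    """Total variation of a 1-D signal."""
    return sum(abs(u[i + 1] - u[i]) for i in range(len(u) - 1))

# Piecewise-constant step corrupted by a deterministic +/-0.1 ripple.
f = [0.0] * 10 + [1.0] * 10
f = [v + (0.1 if i % 2 == 0 else -0.1) for i, v in enumerate(f)]
u = reweighted_tv_denoise(f)
```

In the paper the same outer-loop reweighting is applied to 2-D image gradients inside a projection-constrained reconstruction, rather than to a simple denoising fidelity term.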

  15. Evolutionary Optimization of a Geometrically Refined Truss

    NASA Technical Reports Server (NTRS)

    Hull, P. V.; Tinker, M. L.; Dozier, G. V.

    2007-01-01

Structural optimization is a field of research that has experienced noteworthy growth for many years. Researchers in this area have developed optimization tools to successfully design and model structures, typically minimizing mass while maintaining certain deflection and stress constraints. Numerous optimization studies have been performed to minimize mass, deflection, and stress on a benchmark cantilever truss problem, predominantly by applying traditional optimization theory in which the cross-sectional area of each member is optimized to minimize the aforementioned objectives. This Technical Publication (TP) presents a structural optimization technique that has previously been applied to compliant mechanism design. The technique combines topology optimization, geometric refinement, finite element analysis, and two forms of evolutionary computation (genetic algorithms and differential evolution) to optimize a benchmark structural optimization problem. A nontraditional solution to the benchmark problem is presented in this TP, specifically a geometrically refined topological solution. The design process begins with an alternate control-mesh formulation, a multilevel geometric smoothing operation, and an elastostatic structural analysis, and is wrapped in an evolutionary computing optimization toolset.

  16. Traffic routing for multicomputer networks with virtual cut-through capability

    NASA Technical Reports Server (NTRS)

    Kandlur, Dilip D.; Shin, Kang G.

    1992-01-01

    Consideration is given to the problem of selecting routes for interprocess communication in a network with virtual cut-through capability, while balancing the network load and minimizing the number of times that a message gets buffered. An approach is proposed that formulates the route selection problem as a minimization problem with a link cost function that depends upon the traffic through the link. The form of this cost function is derived using the probability of establishing a virtual cut-through route. The route selection problem is shown to be NP-hard, and an algorithm is developed to incrementally reduce the cost by rerouting the traffic. The performance of this algorithm is exemplified by two network topologies: the hypercube and the C-wrapped hexagonal mesh.
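A greatly simplified version of the idea, routing demands sequentially over shortest paths with a link cost that grows with the traffic already on the link, can be sketched as follows. The linear cost `1 + alpha * load` is an illustrative stand-in for the cut-through-probability cost function derived in the paper, and the greedy sequential scheme stands in for the paper's incremental rerouting algorithm.

```python
import heapq

def shortest_path(adj, cost, src, dst):
    """Dijkstra over directed links with a caller-supplied cost function."""
    dist, prev = {src: 0.0}, {}
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue                        # stale heap entry
        for v in adj[u]:
            nd = d + cost(u, v)
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    path, node = [dst], dst                 # assumes dst is reachable
    while node != src:
        node = prev[node]
        path.append(node)
    return list(reversed(path))

def route_all(adj, demands, alpha=1.0):
    """Route each (src, dst) demand in turn; the load-dependent link cost
    1 + alpha*load steers later demands away from already-busy links."""
    load, routes = {}, []
    for src, dst in demands:
        cost = lambda u, v: 1.0 + alpha * load.get((u, v), 0)
        path = shortest_path(adj, cost, src, dst)
        for u, v in zip(path, path[1:]):
            load[(u, v)] = load.get((u, v), 0) + 1
        routes.append(path)
    return routes, load

# Two equal-cost A->D routes: the second demand avoids the loaded one.
adj = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
routes, load = route_all(adj, [("A", "D"), ("A", "D")])
```

The load balancing visible here (the two demands take disjoint paths) is the qualitative behavior the paper's cost function is designed to produce.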

  17. The minimal residual QR-factorization algorithm for reliably solving subset regression problems

    NASA Technical Reports Server (NTRS)

    Verhaegen, M. H.

    1987-01-01

A new algorithm to solve subset regression problems is described, called the minimal residual QR factorization algorithm (MRQR). This scheme performs a QR factorization with a new column-pivoting strategy based on the change in the residual of the least-squares problem. Furthermore, it is demonstrated that this basic scheme can be extended in a numerically efficient way to combine the advantages of existing numerical procedures, such as the singular value decomposition, with those of more classical statistical procedures, such as stepwise regression. This extension is presented as an advisory expert system that guides the user in solving the subset regression problem. The advantages of the new procedure are highlighted by a numerical example.

  18. Minimax confidence intervals in geomagnetism

    NASA Technical Reports Server (NTRS)

    Stark, Philip B.

    1992-01-01

    The present paper uses theory of Donoho (1989) to find lower bounds on the lengths of optimally short fixed-length confidence intervals (minimax confidence intervals) for Gauss coefficients of the field of degree 1-12 using the heat flow constraint. The bounds on optimal minimax intervals are about 40 percent shorter than Backus' intervals: no procedure for producing fixed-length confidence intervals, linear or nonlinear, can give intervals shorter than about 60 percent the length of Backus' in this problem. While both methods rigorously account for the fact that core field models are infinite-dimensional, the application of the techniques to the geomagnetic problem involves approximations and counterfactual assumptions about the data errors, and so these results are likely to be extremely optimistic estimates of the actual uncertainty in Gauss coefficients.

  19. Biomechanical influences on balance recovery by stepping.

    PubMed

    Hsiao, E T; Robinovitch, S N

    1999-10-01

    Stepping represents a common means for balance recovery after a perturbation to upright posture. Yet little is known regarding the biomechanical factors which determine whether a step succeeds in preventing a fall. In the present study, we developed a simple pendulum-spring model of balance recovery by stepping, and used this to assess how step length and step contact time influence the effort (leg contact force) and feasibility of balance recovery by stepping. We then compared model predictions of step characteristics which minimize leg contact force to experimentally observed values over a range of perturbation strengths. At all perturbation levels, experimentally observed step execution times were higher than optimal, and step lengths were smaller than optimal. However, the predicted increase in leg contact force associated with these deviations was substantial only for large perturbations. Furthermore, increases in the strength of the perturbation caused subjects to take larger, quicker steps, which reduced their predicted leg contact force. We interpret these data to reflect young subjects' desire to minimize recovery effort, subject to neuromuscular constraints on step execution time and step length. Finally, our model predicts that successful balance recovery by stepping is governed by a coupling between step length, step execution time, and leg strength, so that the feasibility of balance recovery decreases unless declines in one capacity are offset by enhancements in the others. This suggests that one's risk for falls may be affected more by small but diffuse neuromuscular impairments than by larger impairment in a single motor capacity.

  20. Problems in the Study of lineaments

    NASA Astrophysics Data System (ADS)

    Anokhin, Vladimir; Kholmyanskii, Michael

    2015-04-01

The study of linear features of the upper crust, called lineaments, has led to major scientific results: the discovery of the planetary regmatic network, the birth of new tectonic concepts, and the establishment of new prospecting indicators for mineral deposits. Yet lineaments remain understudied for such a promising research direction, and lineament geomorphology faces several problems. 1. Terminology problems. The field still has no generally accepted terminology; different scientists interpret even the definition of a lineament differently. We offer an expanded definition: lineaments are linear features of the Earth's crust, expressed as linear landforms, linear geological structures, or linear anomalies of physical fields, which may follow one another and are associated with faults. The term "lineament" is not identical to the term "fault," but a lineament is always a reasonable suspicion of a fault, and this suspicion is justified in most cases. A lineament should include only objects that can, at least presumably, be attributed to deep processes. Specialists in the field can overcome the terminological problems by jointly creating a common terminology database. 2. Methodological problems. Manual selection of lineaments mainly consists of drawing straight line segments along the axes of linear morphostructures on some cartographic base. 
The subjectivity of manual selection can be reduced by following a few simple rules: - choosing an optimal projection, scale, and quality of the cartographic base; - selecting the optimal type of linear object under study; - establishing boundary conditions for identifying a lineament (minimum length, maximum bending, minimum length-to-width ratio, etc.); - identifying a large number of lineaments, to obtain a representative sample and reduce the influence of random errors; - ranking lineaments: fine lines (rank 3) are combined to form larger rank-2 lineaments, which in turn are combined into large rank-1 lineaments; - correlating the resulting lineament pattern with the faults already known in the study area; - having several experts identify lineaments independently, then correlating the resulting schemes into a common one. The problem of automated lineament extraction is not yet solved. Existing programs for lineament analysis are not reliable enough to be trusted completely: in any of them, changing the initial parameters can produce lineament patterns of any desired configuration, and there is a high probability of serious, hard-to-detect systematic errors. In any case, computer-derived lineament patterns should be reviewed by an expert after they are created. 3. Interpretive problems. To minimize distortion of the results of lineament analysis, it is advisable to follow a few techniques and rules: - use visualization techniques, in particular rose diagrams presenting the azimuths and lengths of the selected lineaments; - downscale the analysis consistently, starting with a preliminary analysis of a larger area that includes the area of interest and its surroundings; - use available information on the location of already known faults and other linear tectonic objects in the study area; - compare the lineament scheme with the schemes of other authors, which can reduce subjectivity. 
The study of lineaments is a very promising direction in geomorphology and tectonics. The challenges facing the field are solvable; to solve them, professionals should meet and talk to each other. The results of further work in this direction may exceed expectations.

  1. Three-disk microswimmer in a supported fluid membrane

    NASA Astrophysics Data System (ADS)

    Ota, Yui; Hosaka, Yuto; Yasuda, Kento; Komura, Shigeyuki

    2018-05-01

    A model of three-disk micromachine swimming in a quasi-two-dimensional supported membrane is proposed. We calculate the average swimming velocity as a function of the disk size and the arm length. Due to the presence of the hydrodynamic screening length in the quasi-two-dimensional fluid, the geometric factor appearing in the average velocity exhibits three different asymptotic behaviors depending on the microswimmer size and the hydrodynamic screening length. This is in sharp contrast with a microswimmer in a three-dimensional bulk fluid that shows only a single scaling behavior. We also find that the maximum velocity is obtained when the disks are equal-sized, whereas it is minimized when the average arm lengths are identical. The intrinsic drag of the disks on the substrate does not alter the scaling behaviors of the geometric factor.

  2. Geothermal Energy: Prospects and Problems

    ERIC Educational Resources Information Center

    Ritter, William W.

    1973-01-01

    An examination of geothermal energy as a means of increasing the United States power resources with minimal pollution problems. Developed and planned geothermal-electric power installations around the world, capacities, installation dates, etc., are reviewed. Environmental impact, problems, etc. are discussed. (LK)

  3. A new parallel DNA algorithm to solve the task scheduling problem based on inspired computational model.

    PubMed

    Wang, Zhaocai; Ji, Zuwen; Wang, Xiaoming; Wu, Tunhua; Huang, Wei

    2017-12-01

As a promising approach to computationally intractable problems, DNA computing is an emerging research area spanning mathematics, computer science, and molecular biology. The task scheduling problem, a well-known NP-complete problem, assigns n jobs to m individuals and seeks the minimum execution time of the last-finished individual. In this paper, we use a biologically inspired computational model and describe a new parallel algorithm that solves the task scheduling problem with basic DNA molecular operations. We design flexible-length DNA strands to represent the elements of the allocation matrix, apply appropriate biological experimental operations, and obtain solutions of the task scheduling problem within the proper length range in less than O(n^2) time complexity. Copyright © 2017. Published by Elsevier B.V.
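The task scheduling problem described above (minimizing the finish time of the last individual) is the classic minimum-makespan problem. As a conventional polynomial-time point of comparison for the DNA approach, the Longest-Processing-Time-first heuristic can be sketched as follows; it is an illustrative stand-in, not the paper's algorithm, and is only an approximation (ratio 4/3 − 1/(3m)) rather than an exact solver.

```python
def lpt_makespan(jobs, m):
    """Longest-Processing-Time-first heuristic for minimum makespan.

    Sort the job lengths in decreasing order, then repeatedly give the next
    job to the currently least-loaded individual.  Returns the makespan
    (finish time of the last individual) and the per-individual job lists.
    """
    loads = [0] * m
    assignment = [[] for _ in range(m)]
    for job in sorted(jobs, reverse=True):
        i = min(range(m), key=loads.__getitem__)   # least-loaded individual
        loads[i] += job
        assignment[i].append(job)
    return max(loads), assignment

# Four jobs on two individuals; LPT happens to reach the optimum 7 here.
makespan, schedule = lpt_makespan([2, 3, 4, 5], m=2)
```

An exact solver must search the m^n allocation matrices, which is the exponential space the paper's parallel molecular operations are designed to cover.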

  4. Scaphoid tuberosity excursion is minimized during a dart-throwing motion: A biomechanical study.

    PubMed

    Werner, Frederick W; Sutton, Levi G; Basu, Niladri; Short, Walter H; Moritomo, Hisao; St-Amand, Hugo

    2016-01-01

    The purpose of this study was to determine whether the excursion of the scaphoid tuberosity and therefore scaphoid motion is minimized during a dart-throwing motion. Scaphoid tuberosity excursion was studied as an indicator of scaphoid motion in 29 cadaver wrists as they were moved through wrist flexion-extension, radioulnar deviation, and a dart-throwing motion. Study results demonstrate that excursion was significantly less during the dart-throwing motion than during either wrist flexion-extension or radioulnar deviation. If the goal of early wrist motion after carpal ligament or distal radius injury and reconstruction is to minimize loading of the healing structures, a wrist motion in which scaphoid motion is minimal should reduce length changes in associated ligamentous structures. Therefore, during rehabilitation, if a patient uses a dart-throwing motion that minimizes his or her scaphoid tuberosity excursion, there should be minimal changes in ligament loading while still allowing wrist motion. Bench research, biomechanics, and cross-sectional. Not applicable. The study was laboratory based. Copyright © 2016 Hanley & Belfus. Published by Elsevier Inc. All rights reserved.

  5. Relativized problems with abelian phase group in topological dynamics.

    PubMed

    McMahon, D

    1976-04-01

Let (X, T) be the equicontinuous minimal transformation group with X = ∏_{1}^{∞} Z_2, the Cantor group, and S = ⊕_{1}^{∞} Z_2 endowed with the discrete topology, acting on X by right multiplication. For any countable group T we construct a function F: X × S → T such that if (Y, T) is a minimal transformation group, then (X × Y, S) is a minimal transformation group with the action defined by (x, y)s = [xs, yF(x, s)]. If (W, T) is a minimal transformation group and φ: (Y, T) → (W, T) is a homomorphism, then identity × φ: (X × Y, S) → (X × W, S) is a homomorphism and has many of the same properties that φ has. For this reason, one may assume that the phase group is abelian (or S) without loss of generality for many relativized problems in topological dynamics.

  6. Limit behavior of mass critical Hartree minimization problems with steep potential wells

    NASA Astrophysics Data System (ADS)

    Guo, Yujin; Luo, Yong; Wang, Zhi-Qiang

    2018-06-01

    We consider minimizers of the following mass critical Hartree minimization problem: e_λ(N) := inf { E_λ(u) : u ∈ H¹(ℝ^d), ‖u‖₂² = N }, where d ≥ 3, λ > 0, and the Hartree energy functional E_λ(u) is defined by E_λ(u) := ∫_{ℝ^d} |∇u(x)|² dx + λ ∫_{ℝ^d} g(x) u²(x) dx − (1/2) ∫_{ℝ^d} ∫_{ℝ^d} [u²(x) u²(y) / |x − y|²] dx dy. Here the steep potential g(x) satisfies 0 = g(0) = inf_{ℝ^d} g(x) ≤ g(x) ≤ 1 and 1 − g(x) ∈ L^{d/2}(ℝ^d). We prove that there exists a constant N* > 0, independent of λ and g(x), such that if N ≥ N*, then e_λ(N) does not admit minimizers for any λ > 0; if 0 < N < N*, then there exists a constant λ*(N) > 0 such that e_λ(N) admits minimizers for any λ > λ*(N) and does not admit minimizers for 0 < λ < λ*(N). For any given 0 < N < N*, the limit behavior of positive minimizers for e_λ(N) is also studied as λ → ∞, where the mass concentrates at the bottom of g(x).

  7. Minimization of Dependency Length in Written English

    ERIC Educational Resources Information Center

    Temperley, David

    2007-01-01

    Gibson's Dependency Locality Theory (DLT) [Gibson, E. 1998. "Linguistic complexity: locality of syntactic dependencies." "Cognition," 68, 1-76; Gibson, E. 2000. "The dependency locality theory: A distance-based theory of linguistic complexity." In A. Marantz, Y. Miyashita, & W. O'Neil (Eds.), "Image,…

  8. Trading Spaces

    ERIC Educational Resources Information Center

    Cort, Cliff

    2006-01-01

    Education administrators face the dual dilemma of crowded, aging facilities and tightening capital budgets. The challenge is to build the necessary classroom, laboratory and activity space while minimizing the length and expense of the construction process. One solution that offers an affordable alternative is modular construction, a method that…

  9. A novel discrete PSO algorithm for solving job shop scheduling problem to minimize makespan

    NASA Astrophysics Data System (ADS)

    Rameshkumar, K.; Rajendran, C.

    2018-02-01

    In this work, a discrete version of the PSO algorithm is proposed to minimize the makespan of a job shop. A novel schedule builder has been utilized to generate active schedules. The discrete PSO is tested using well-known benchmark problems available in the literature. The solutions produced by the proposed algorithm are compared with the best known solutions published in the literature, as well as with a hybrid particle swarm algorithm and a variable neighborhood search PSO algorithm. The solution construction methodology adopted in this study is found to be effective in producing good-quality solutions for the various benchmark job-shop scheduling problems.
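
    As a concrete illustration of the fitness evaluation underlying such algorithms, a candidate particle encoded as an operation-based job sequence can be decoded into a semi-active schedule whose makespan is the value to minimize. The sketch below is a generic decoder on hypothetical data, not the paper's schedule builder (which constructs active schedules):

    ```python
    def makespan(routing, ptimes, sequence):
        """Decode an operation-based job sequence into a semi-active
        schedule and return its makespan.

        routing[j]  -- machine order for job j
        ptimes[j]   -- processing time of each operation of job j
        sequence    -- job ids; the k-th occurrence of j is j's k-th operation
        """
        next_op = {j: 0 for j in routing}
        job_ready = {j: 0 for j in routing}   # time each job becomes free
        mach_ready = {}                       # time each machine becomes free
        for j in sequence:
            k = next_op[j]
            m, p = routing[j][k], ptimes[j][k]
            finish = max(job_ready[j], mach_ready.get(m, 0)) + p
            job_ready[j] = mach_ready[m] = finish
            next_op[j] = k + 1
        return max(job_ready.values())

    # Two jobs on two machines (hypothetical instance).
    routing = {0: [0, 1], 1: [1, 0]}
    ptimes = {0: [3, 2], 1: [2, 2]}
    print(makespan(routing, ptimes, [0, 1, 0, 1]))  # -> 5
    ```

    A discrete PSO would then evolve such sequences (e.g., via swap operators), using this makespan as the fitness to minimize.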

  10. Particle swarm optimization - Genetic algorithm (PSOGA) on linear transportation problem

    NASA Astrophysics Data System (ADS)

    Rahmalia, Dinita

    2017-08-01

    The Linear Transportation Problem (LTP) is a case of constrained optimization in which we want to minimize cost subject to the balance between total supply and total demand. Exact methods such as the northwest corner, Vogel, Russell, and minimal cost methods have been applied to approach the optimal solution. In this paper, we use a heuristic, Particle Swarm Optimization (PSO), to solve the linear transportation problem at any size of decision variable. In addition, we combine the mutation operator of the Genetic Algorithm (GA) with PSO to improve the optimal solution. This method is called Particle Swarm Optimization - Genetic Algorithm (PSOGA). The simulations show that PSOGA can improve the optimal solution produced by PSO.
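
    Of the starting methods mentioned, the northwest corner rule is the simplest: allocate as much as possible to the top-left cell, then move right or down as supplies and demands are exhausted. A minimal sketch on hypothetical supply/demand data:

    ```python
    def northwest_corner(supply, demand):
        """Initial feasible allocation for a balanced transportation
        problem (total supply == total demand) via the NW corner rule."""
        supply, demand = supply[:], demand[:]          # do not mutate inputs
        alloc = [[0] * len(demand) for _ in supply]
        i = j = 0
        while i < len(supply) and j < len(demand):
            q = min(supply[i], demand[j])              # ship as much as possible
            alloc[i][j] = q
            supply[i] -= q
            demand[j] -= q
            if supply[i] == 0:                         # row exhausted: move down
                i += 1
            else:                                      # column exhausted: move right
                j += 1
        return alloc

    # Hypothetical balanced instance: 3 sources, 3 destinations.
    print(northwest_corner([20, 30, 25], [10, 25, 40]))
    # -> [[10, 10, 0], [0, 15, 15], [0, 0, 25]]
    ```

    Heuristics such as PSOGA then search beyond feasible starting solutions like this one toward lower total cost.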

  11. Spacecraft inertia estimation via constrained least squares

    NASA Technical Reports Server (NTRS)

    Keim, Jason A.; Acikmese, Behcet A.; Shields, Joel F.

    2006-01-01

    This paper presents a new formulation for spacecraft inertia estimation from test data. Specifically, the inertia estimation problem is formulated as a constrained least squares minimization problem with explicit bounds on the inertia matrix incorporated as linear matrix inequalities (LMIs). The resulting minimization problem is a semidefinite optimization that can be solved efficiently, with guaranteed convergence to the global optimum, by readily available algorithms. This method is applied to data collected from a robotic testbed consisting of a freely rotating body. The results show that the constrained least squares approach produces more accurate estimates of the inertia matrix than standard unconstrained least squares estimation methods.

  12. On estimation of secret message length in LSB steganography in spatial domain

    NASA Astrophysics Data System (ADS)

    Fridrich, Jessica; Goljan, Miroslav

    2004-06-01

    In this paper, we present a new method for estimating the secret message length of bit-streams embedded using the Least Significant Bit embedding (LSB) at random pixel positions. We introduce the concept of a weighted stego image and then formulate the problem of determining the unknown message length as a simple optimization problem. The methodology is further refined to obtain more stable and accurate results for a wide spectrum of natural images. One of the advantages of the new method is its modular structure and a clean mathematical derivation that enables elegant estimator accuracy analysis using statistical image models.

  13. Randomly Sampled-Data Control Systems. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Han, Kuoruey

    1990-01-01

    The purpose is to solve the Linear Quadratic Regulator (LQR) problem with random time sampling. Such a sampling scheme may arise from imperfect instrumentation, as in the case of sampling jitter; it can also model, for example, the stochastic information exchange among decentralized controllers. A practical suboptimal controller is proposed with the desirable property of mean square stability. The proposed controller is suboptimal in the sense that the control structure is limited to be linear; because of the i.i.d. sampling assumption, this restriction does not seem unreasonable. Once the control structure is fixed, the stochastic discrete optimal control problem is transformed into an equivalent deterministic optimal control problem with dynamics described by a matrix difference equation. The N-horizon control problem is solved using the Lagrange multiplier method. The infinite horizon control problem is formulated as a classical minimization problem. Assuming existence of a solution to the minimization problem, the total system is shown to be mean square stable under certain observability conditions. Computer simulations are performed to illustrate these conditions.

  14. Effective Iterated Greedy Algorithm for Flow-Shop Scheduling Problems with Time lags

    NASA Astrophysics Data System (ADS)

    ZHAO, Ning; YE, Song; LI, Kaidian; CHEN, Siyu

    2017-05-01

    Flow shop scheduling with time lags is a practical scheduling problem that has attracted many studies. The permutation problem (PFSP with time lags) has received much attention, while the non-permutation problem (non-PFSP with time lags) seems to have been neglected. With the aim of minimizing the makespan while satisfying time lag constraints, efficient algorithms for the PFSP and non-PFSP problems are proposed: an iterated greedy algorithm for the permutation case (IGTLP) and an iterated greedy algorithm for the non-permutation case (IGTLNP). The proposed algorithms are verified using well-known simple and complex instances of permutation and non-permutation problems with various time lag ranges. The permutation results indicate that the proposed IGTLP can reach a near-optimal solution within roughly 11% of the computational time of a traditional GA approach. The non-permutation results indicate that the proposed IG can reach nearly the same solution in less than 1% of the computational time of a traditional GA approach. The proposed research combines the PFSP and non-PFSP with minimal and maximal time lag considerations, which provides an interesting viewpoint for industrial implementation.
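
    For the permutation case, the makespan objective under minimal time lags can be evaluated with a standard completion-time recurrence: a job may start on machine m+1 only after its completion on machine m plus the minimal lag, and after the machine finishes the preceding job. The sketch below uses hypothetical data and covers minimal lags only (maximal lags would add further constraints on start times):

    ```python
    def makespan_with_lags(perm, p, lag):
        """Makespan of a permutation flow shop with minimal time lags.

        perm     -- job order (same on every machine)
        p[m][j]  -- processing time of job j on machine m
        lag[m][j]-- minimal delay between j's end on machine m and
                    its start on machine m+1
        """
        M, n = len(p), len(perm)
        C = [[0] * n for _ in range(M)]          # completion times
        for pos, j in enumerate(perm):
            for m in range(M):
                ready_machine = C[m][pos - 1] if pos > 0 else 0
                ready_job = C[m - 1][pos] + lag[m - 1][j] if m > 0 else 0
                C[m][pos] = max(ready_machine, ready_job) + p[m][j]
        return C[M - 1][n - 1]

    # Hypothetical 2-machine, 2-job instance with one lag row.
    p = [[3, 2], [2, 4]]
    lag = [[1, 0]]
    print(makespan_with_lags([0, 1], p, lag))  # -> 10
    ```

    Iterated greedy algorithms repeatedly destroy and rebuild `perm`, accepting candidates that lower this makespan.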

  15. An optimal control strategies using vaccination and fogging in dengue fever transmission model

    NASA Astrophysics Data System (ADS)

    Fitria, Irma; Winarni, Pancahayani, Sigit; Subchan

    2017-08-01

    This paper discusses a model and an optimal control problem of dengue fever transmission. We classified the model into human and vector (mosquito) population classes. The human population has three subclasses: susceptible, infected, and resistant. The vector population is divided into wiggler, susceptible, and infected vector classes. Thus, the model consists of six dynamic equations. To minimize the number of dengue fever cases, we designed two control variables in the model: fogging and vaccination. The objective of this optimal control problem is to minimize the number of infected humans, the number of vectors, and the cost of the control efforts. By applying fogging optimally, the number of vectors can be minimized. We considered vaccination as a control variable because it is one of the interventions being developed to reduce the spread of dengue fever. We used the Pontryagin Minimum Principle to solve the optimal control problem. Furthermore, numerical simulation results are given to show the effect of the optimal control strategies in minimizing the dengue fever epidemic.

  16. Flattening the inflaton potential beyond minimal gravity

    NASA Astrophysics Data System (ADS)

    Lee, Hyun Min

    2018-01-01

    We review the status of the Starobinsky-like models for inflation beyond minimal gravity and discuss the unitarity problem due to the presence of a large non-minimal gravity coupling. We show that the induced gravity models allow for a self-consistent description of inflation and discuss the implications of the inflaton couplings to the Higgs field in the Standard Model.

  17. Minimally conscious state or cortically mediated state?

    PubMed

    Naccache, Lionel

    2018-04-01

    Durable impairments of consciousness are currently classified in three main neurological categories: comatose state, vegetative state (also recently coined unresponsive wakefulness syndrome) and minimally conscious state. While the introduction of minimally conscious state, in 2002, was major progress that helped clinicians recognize complex non-reflexive behaviours in the absence of functional communication, it raises several problems. The most important issue related to minimally conscious state lies in its criteria: while the behavioural definition of minimally conscious state lacks any direct evidence of the patient's conscious content or conscious state, it includes the adjective 'conscious'. I discuss this major problem in this review and propose a novel interpretation of minimally conscious state: its criteria do not inform us about the potential residual consciousness of patients, but they do inform us with certainty about the presence of a cortically mediated state. Based on this constructive criticism, I offer three proposals aimed at improving the way we describe the subjective and cognitive state of non-communicating patients. In particular, I present a tentative new classification of impairments of consciousness that combines behavioural evidence with functional brain imaging data, in order to probe directly and univocally residual conscious processes.

  18. Design and optimal control of multi-spacecraft interferometric imaging systems

    NASA Astrophysics Data System (ADS)

    Chakravorty, Suman

    The objective of the proposed NASA Origins mission, Planet Imager, is the high-resolution imaging of exo-solar planets and similar high-resolution astronomical imaging applications. The imaging is to be accomplished through the design of multi-spacecraft interferometric imaging systems (MSIIS). In this dissertation, we study the design of MSIIS. Assuming that the ultimate goal of imaging is the correct classification of the formed images, we formulate the design problem as minimization of some resource utilization of the system subject to the constraint that the probability of misclassification of any given image is below a pre-specified level. We model the process of image formation in an MSIIS and show that the Modulation Transfer Function of the synthesized optical instrument, and the noise corrupting it, depend on the trajectories of the constituent spacecraft. Assuming that the final goal of imaging is the correct classification of the formed image based on a given feature (a real-valued function of the image variable) and a threshold on the feature, we find conditions on the noise corrupting the measurements such that the probability of misclassification is below some pre-specified level. These conditions translate into constraints on the trajectories of the constituent spacecraft. Thus, the design problem reduces to minimizing some resource utilization of the system while satisfying the constraints placed on the system by the imaging requirements. We study the problem of designing minimum-time maneuvers for MSIIS. We transform the time minimization problem into a "painting problem", which involves painting a large disk with smaller paintbrushes (coverage disks). We show that spirals form the dominant set for the solution to the painting problem.
We frame the time minimization in the subspace of spirals and obtain a bilinear program, the double pantograph problem, in the design parameters of the spiral, the spiraling rate and the angular rate. We show that the solution of this problem is given by the solution to two associated linear programs. We illustrate our results through a simulation where the banded appearance of a fictitious exo-solar planet at a distance of 8 parsecs is detected.

  19. On the Local Convergence of Pattern Search

    NASA Technical Reports Server (NTRS)

    Dolan, Elizabeth D.; Lewis, Robert Michael; Torczon, Virginia; Bushnell, Dennis M. (Technical Monitor)

    2000-01-01

    We examine the local convergence properties of pattern search methods, complementing the previously established global convergence properties for this class of algorithms. We show that the step-length control parameter which appears in the definition of pattern search algorithms provides a reliable asymptotic measure of first-order stationarity. This gives an analytical justification for a traditional stopping criterion for pattern search methods. Using this measure of first-order stationarity, we analyze the behavior of pattern search in the neighborhood of an isolated local minimizer. We show that a recognizable subsequence converges r-linearly to the minimizer.
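
    The role of the step-length control parameter as a stationarity measure can be seen in a minimal compass-search sketch (a simple member of the pattern search family; purely illustrative, not the paper's analysis):

    ```python
    def compass_search(f, x, step=1.0, tol=1e-6, shrink=0.5):
        """2-D coordinate (compass) search: poll +/- step along each
        axis; shrink the step when no poll point improves.  The final
        step length serves as an asymptotic measure of first-order
        stationarity, justifying 'stop when step is small'."""
        fx = f(x)
        while step > tol:
            for dx, dy in ((step, 0), (-step, 0), (0, step), (0, -step)):
                cand = (x[0] + dx, x[1] + dy)
                fc = f(cand)
                if fc < fx:               # successful poll: move
                    x, fx = cand, fc
                    break
            else:
                step *= shrink            # unsuccessful poll: refine mesh
        return x, step

    # Minimize a convex quadratic from an arbitrary start.
    x, final_step = compass_search(lambda p: p[0] ** 2 + p[1] ** 2, (1.3, -0.7))
    print(x, final_step)   # x close to (0, 0), final_step <= 1e-6
    ```

    For a smooth convex quadratic, an unsuccessful poll at step s forces each coordinate of x to lie within s/2 of the minimizer, which is exactly the kind of step-length/stationarity link the paper makes rigorous.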

  20. Operation of a wet near-field scanning optical microscope in stable zones by minimizing the resonance change of tuning forks.

    PubMed

    Park, Kyoung-Duck; Park, Doo Jae; Lee, Seung Gol; Choi, Geunchang; Kim, Dai-Sik; Byeon, Clare Chisu; Choi, Soo Bong; Jeong, Mun Seok

    2014-02-21

    A resonant shift and a decrease of resonance quality of a tuning fork attached to a conventional fiber optic probe in the vicinity of liquid is monitored systematically while varying the protrusion length and immersion depth of the probe. Stable zones where the resonance modification as a function of immersion depth is minimized are observed. A wet near-field scanning optical microscope (wet-NSOM) is operated for a sample within water by using such a stable zone.

  1. Impact of minimally invasive surgery on healthcare utilization, cost, and workplace absenteeism in patients with Incisional/Ventral Hernia (IVH).

    PubMed

    Mikami, Dean J; Melvin, W Scott; Murayama, Michael J; Murayama, Kenric M

    2017-11-01

    Incisional hernia repair is one of the most common general surgery operations performed today. With the advancement of laparoscopy since the 1990s, we have seen vast improvements such as a faster return to normal activity, shorter hospital stays and less post-operative narcotic use, to name a few. The key aims of this review were to measure the impact of minimally invasive surgery versus open surgery on healthcare utilization, cost, and workplace absenteeism in patients undergoing inpatient incisional/ventral hernia (IVH) repair. We analyzed data from the Truven Health Analytics MarketScan ® Commercial Claims and Encounters Database. A total of 2557 patients were included in the analysis. Of the patients who underwent IVH surgery, 24.5% (n = 626) were treated using minimally invasive surgical (MIS) techniques and 75.5% (n = 1931) with open surgery. Ninety-day post-surgery outcomes were significantly lower in the MIS group compared to the open group for total payment ($19,288.97 vs. $21,708.12), inpatient length of stay (3.12 vs. 4.24 days), number of outpatient visits (5.48 vs. 7.35), and estimated days off (11.3 vs. 14.64), respectively. At 365 days post-surgery, the total payment ($27,497.96 vs. $30,157.29), inpatient length of stay (3.70 vs. 5.04 days), outpatient visits (19.75 vs. 23.42), and estimated days off (35.71 vs. 41.58) were significantly lower for the MIS group versus the open group, respectively. When surgical repair of IVH is performed, there is a clear advantage to the MIS approach versus the open approach with regard to cost, length of stay, number of outpatient visits, and estimated days off.

  2. A Parallel Biological Optimization Algorithm to Solve the Unbalanced Assignment Problem Based on DNA Molecular Computing

    PubMed Central

    Wang, Zhaocai; Pu, Jun; Cao, Liling; Tan, Jian

    2015-01-01

    The unbalanced assignment problem (UAP) is to optimally resolve the problem of assigning n jobs to m individuals (m < n), such that the minimum cost or maximum profit is obtained. It is a vitally important NP-complete problem in operation management and applied mathematics, with numerous real-life applications. In this paper, we present a new parallel DNA algorithm for solving the unbalanced assignment problem using DNA molecular operations. We reasonably design flexible-length DNA strands representing different jobs and individuals, take appropriate steps, and obtain the solutions of the UAP in the proper length range and O(mn) time. We extend the application of DNA molecular operations and exploit simultaneity to simplify the complexity of the computation. PMID:26512650
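
    For scale, a conventional sequential baseline for one common UAP formulation (each job assigned to exactly one individual, each individual receiving at least one job; costs are hypothetical) must enumerate up to m^n assignments — the exponential search that massively parallel approaches like the DNA algorithm aim to sidestep:

    ```python
    from itertools import product

    def uap_brute_force(cost):
        """cost[j][i] = cost of assigning job j to individual i.
        Every job gets exactly one individual; every individual gets
        at least one job (m < n).  Exponential: checks m**n assignments."""
        n, m = len(cost), len(cost[0])
        best = None
        for assign in product(range(m), repeat=n):
            if len(set(assign)) < m:       # some individual got no job
                continue
            c = sum(cost[j][assign[j]] for j in range(n))
            if best is None or c < best[0]:
                best = (c, assign)
        return best

    # 3 jobs, 2 individuals (hypothetical cost matrix).
    print(uap_brute_force([[4, 1], [2, 3], [3, 2]]))  # -> (5, (1, 0, 1))
    ```

    The tuple gives the minimum total cost and, per job, the individual it is assigned to.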

  3. Symmetrical kinematics does not imply symmetrical kinetics in people with transtibial amputation using cycling model.

    PubMed

    Childers, W Lee; Kogler, Géza F

    2014-01-01

    People with amputation move asymmetrically with regard to kinematics (joint angles) and kinetics (joint forces and moments). Clinicians have traditionally sought to minimize kinematic asymmetries, assuming kinetic asymmetries would also be minimized. A cycling model evaluated locomotor asymmetries. Eight individuals with unilateral transtibial amputation pedaled with 172 mm-length crank arms on both sides (control condition) and with the crank arm length shortened to 162 mm on the amputated side (CRANK condition). Pedaling kinetics and limb kinematics were recorded. Joint kinetics, joint angles (mean and range of motion [ROM]), and pedaling asymmetries were calculated from force pedals and with a motion capture system. A one-way analysis of variance with a Tukey post hoc test compared kinetics and kinematics across limbs. Statistical significance was set to p

  4. What Does (and Doesn't) Make Analogical Problem Solving Easy? A Complexity-Theoretic Perspective

    ERIC Educational Resources Information Center

    Wareham, Todd; Evans, Patricia; van Rooij, Iris

    2011-01-01

    Solving new problems can be made easier if one can build on experiences with other problems one has already successfully solved. The ability to exploit earlier problem-solving experiences in solving new problems seems to require several cognitive sub-abilities. Minimally, one needs to be able to retrieve relevant knowledge of earlier solved…

  5. Information Retrieval Performance of Probabilistically Generated, Problem-Specific Computerized Provider Order Entry Pick-Lists: A Pilot Study

    PubMed Central

    Rothschild, Adam S.; Lehmann, Harold P.

    2005-01-01

    Objective: The aim of this study was to preliminarily determine the feasibility of probabilistically generating problem-specific computerized provider order entry (CPOE) pick-lists from a database of explicitly linked orders and problems from actual clinical cases. Design: In a pilot retrospective validation, physicians reviewed internal medicine cases consisting of the admission history and physical examination and orders placed using CPOE during the first 24 hours after admission. They created coded problem lists and linked orders from individual cases to the problem for which they were most indicated. Problem-specific order pick-lists were generated by including a given order in a pick-list if the probability of linkage of order and problem (PLOP) equaled or exceeded a specified threshold. PLOP for a given linked order-problem pair was computed as its prevalence among the other cases in the experiment with the given problem. The orders that the reviewer linked to a given problem instance served as the reference standard to evaluate its system-generated pick-list. Measurements: Recall, precision, and length of the pick-lists. Results: Average recall reached a maximum of .67 with a precision of .17 and pick-list length of 31.22 at a PLOP threshold of 0. Average precision reached a maximum of .73 with a recall of .09 and pick-list length of .42 at a PLOP threshold of .9. Recall varied inversely with precision in classic information retrieval behavior. Conclusion: We preliminarily conclude that it is feasible to generate problem-specific CPOE pick-lists probabilistically from a database of explicitly linked orders and problems. Further research is necessary to determine the usefulness of this approach in real-world settings. PMID:15684134
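
    The PLOP computation described above can be sketched directly: the pick-list for a problem in a given case is built from the prevalence of each linked order among the *other* cases with that problem, thresholded. All data and names below are hypothetical illustrations:

    ```python
    def picklist(cases, problem, target_case, threshold):
        """cases: case_id -> {problem: set of linked orders}.
        An order enters the pick-list when its prevalence among the
        other cases with this problem (its PLOP) meets the threshold."""
        others = [orders[problem] for cid, orders in cases.items()
                  if cid != target_case and problem in orders]
        if not others:
            return set()
        counts = {}
        for order_set in others:
            for o in order_set:
                counts[o] = counts.get(o, 0) + 1
        return {o for o, c in counts.items() if c / len(others) >= threshold}

    cases = {
        1: {"pneumonia": {"chest x-ray", "antibiotics"}},
        2: {"pneumonia": {"chest x-ray"}},
        3: {"pneumonia": {"chest x-ray", "antibiotics"}},
    }
    # For case 3: PLOP(chest x-ray) = 2/2, PLOP(antibiotics) = 1/2.
    print(picklist(cases, "pneumonia", 3, 0.6))  # -> {'chest x-ray'}
    ```

    Scoring the generated list against the case's own linked orders reproduces the reported trade-off: lowering the threshold lengthens the list and raises recall at the cost of precision.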

  6. Subsite mapping of enzymes. Depolymerase computer modelling.

    PubMed Central

    Allen, J D; Thoma, J A

    1976-01-01

    We have developed a depolymerase computer model that uses a minimization routine. The model is designed so that, given experimental bond-cleavage frequencies for oligomeric substrates and experimental Michaelis parameters as a function of substrate chain length, the optimum subsite map is generated. The minimized sum of the weighted-squared residuals of the experimental and calculated data is used as a criterion of the goodness-of-fit for the optimized subsite map. The application of the minimization procedure to subsite mapping is explored through the use of simulated data. A procedure is developed whereby the minimization model can be used to determine the number of subsites in the enzymic binding region and to locate the position of the catalytic amino acids among these subsites. The degree of propagation of experimental variance into the subsite-binding energies is estimated. The question of whether hydrolytic rate coefficients are constant or a function of the number of filled subsites is examined. PMID:999629

  7. Effects of drilling variables on burr properties

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gillespie, L.K.

    1976-09-01

    An investigation utilizing 303Se stainless steel, 17-4PH stainless steel, 1018 steel, and 6061-T6 aluminum was conducted to determine the influence of drilling variables in controlling burr size to minimize burr-removal cost and improve the quality and reliability of parts for small precision mechanisms. Burr thickness can be minimized by reducing feedrate and cutting velocity, and by using drills having high helix angles. High helix angles reduce burr thickness, length, and radius, while most other variables reduce only one of these properties. Radial-lip drills minimize burrs from 303Se stainless steel when large numbers of holes are drilled; this material stretches 10 percent before drill-breakthrough. Entrance burrs can be minimized by the use of subland drills at a greatly increased tool cost. Backup rods used in cross-drilled holes may be difficult to remove and may scratch the hole walls.

  8. MULTIOBJECTIVE PARALLEL GENETIC ALGORITHM FOR WASTE MINIMIZATION

    EPA Science Inventory

    In this research we have developed an efficient multiobjective parallel genetic algorithm (MOPGA) for waste minimization problems. This MOPGA integrates PGAPack (Levine, 1996) and NSGA-II (Deb, 2000) with novel modifications. PGAPack is a master-slave parallel implementation of a...

  9. Hierarchical Material Properties in Finite Element Analysis: The Oilfield Infrastructure Problem.

    NASA Astrophysics Data System (ADS)

    Weiss, C. J.; Wilson, G. A.

    2017-12-01

    Geophysical simulation of low-frequency electromagnetic signals within built environments such as urban centers and industrial facilities is a challenging computational problem because strong conductors (e.g., pipes, fences, rail lines, rebar, etc.) are not only highly conductive and/or magnetic relative to the surrounding geology, but also very small in one or more of their physical length coordinates. Realistic modeling of such structures as idealized conductors has long been the standard approach; however, this strategy carries with it computational burdens such as cumbersome implementation of internal boundary conditions and limited flexibility for accommodating realistic geometries. Another standard approach is "brute force" discretization (often coupled with an equivalent medium model), whereby hundreds of millions of voxels are used to represent these strong conductors, but at the cost of extreme computation times (and mesh design effort) when a simulation result is possible at all. To minimize these burdens, a new finite element scheme (Weiss, Geophysics, 2017) has been developed in which the material properties reside on a hierarchy of geometric simplices (i.e., edges, facets and volumes) within an unstructured tetrahedral mesh. This allows thin sheet-like structures, such as subsurface fractures, to be economically represented by a connected set of triangular facets, for example, that freely conform to arbitrary "real world" geometries. The same holds for thin pipe/wire-like structures, such as casings or pipelines. The hierarchical finite element scheme has been applied to problems in electro- and magnetostatics for oilfield problems, where the elevated but finite conductivity and permeability of steel-cased oil wells must be properly accounted for, yielding results that are otherwise unobtainable, with run times as low as a few tens of seconds.
Extension of the hierarchical finite element concept to broadband electromagnetics is presently underway, as are its implications for geophysical inversion.

  10. The impact of length of placement on self-reported mental health problems in detained Jordanian youth.

    PubMed

    Schwalbe, Craig S; Gearing, Robin E; Mackenzie, Michael J; Brewer, Kathryne B; Ibrahim, Rawan W

    2013-01-01

    This study reports the prevalence of emotional and behavioral problems among youths placed in juvenile correctional facilities in Jordan and describes the effect of length of stay on mental health outcomes. The Youth Self Report (YSR) was administered to 187 adolescent males (mean age=16.4, SD=1.0) in all five juvenile detention facilities in Jordan in 2011. Descriptive statistics were calculated to estimate the prevalence of emotional and behavioral problems. Logistic regression models were estimated to evaluate the impact of placement length on mental health. Statistical models were weighted by the youth propensity to be 'long-stay' youths (>23 weeks) based on preplacement case characteristics. The prevalence of clinically significant emotional and behavioral problems was 84%. 46% had YSR scores above the clinical cutpoint in both the internalizing and externalizing subscales. 24% of youths reported suicidal ideation. The high prevalence of emotional and behavioral disorders was stable across placement for most YSR subscales. The prevalence of emotional and behavioral disorders among detained and incarcerated youth in Jordan mirrors the literature worldwide. These findings suggest that serious mental health problems for many youths persist throughout placement. Copyright © 2013 Elsevier Ltd. All rights reserved.

  11. Selecting a restoration technique to minimize OCR error.

    PubMed

    Cannon, M; Fugate, M; Hush, D R; Scovel, C

    2003-01-01

    This paper introduces a learning problem related to the task of converting printed documents to ASCII text files. The goal of the learning procedure is to produce a function that maps documents to restoration techniques in such a way that, on average, the restored documents have minimum optical character recognition error. We derive a general form for the optimal function and use it to motivate the development of a nonparametric method based on nearest neighbors. We also develop a direct method of solution based on empirical error minimization for which we prove a finite sample bound on estimation error that is independent of distribution. We show that this empirical error minimization problem is an extension of the empirical optimization problem for traditional M-class classification with a general loss function and prove computational hardness for this problem. We then derive a simple iterative algorithm called generalized multiclass ratchet (GMR) and prove that it produces an optimal function asymptotically (with probability 1). To obtain the GMR algorithm we introduce a new data map that extends Kesler's construction for the multiclass problem and then apply an algorithm called Ratchet to this mapped data, where Ratchet is a modification of the Pocket algorithm. Finally, we apply these methods to a collection of documents and report on the experimental results.
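
    A minimal nearest-neighbor selector of the kind motivated above might look as follows; feature vectors, the distance metric, and the per-technique error tables are all hypothetical stand-ins for the paper's setup:

    ```python
    def select_technique(train, query):
        """train: list of (features, {technique: ocr_error_rate}).
        Map a query document to the restoration technique with the
        lowest OCR error on its nearest training document."""
        def sq_dist(a, b):
            return sum((x - y) ** 2 for x, y in zip(a, b))
        _, errors = min(train, key=lambda item: sq_dist(item[0], query))
        return min(errors, key=errors.get)

    # Hypothetical 2-D document features with measured per-technique errors.
    train = [
        ((0.0, 0.0), {"deskew": 0.10, "none": 0.30}),   # skewed scans
        ((5.0, 5.0), {"deskew": 0.40, "none": 0.20}),   # upright, noisy scans
    ]
    print(select_technique(train, (1.0, 0.0)))  # -> deskew
    ```

    Empirical error minimization, by contrast, fits the document-to-technique map directly by minimizing average OCR error over the training collection, which is where the paper's hardness result and the GMR algorithm come in.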

  12. Graphical approach for multiple values logic minimization

    NASA Astrophysics Data System (ADS)

    Awwal, Abdul Ahad S.; Iftekharuddin, Khan M.

    1999-03-01

    Multiple valued logic (MVL) is sought for designing high complexity, highly compact, parallel digital circuits. However, the practical realization of an MVL-based system is dependent on optimization of cost, which directly affects the optical setup. We propose a minimization technique for MVL logic optimization based on graphical visualization, such as a Karnaugh map. The proposed method is utilized to solve signed-digit binary and trinary logic minimization problems. The usefulness of the minimization technique is demonstrated for the optical implementation of MVL circuits.

  13. Retrospective cohort study of an enhanced recovery programme in oesophageal and gastric cancer surgery

    PubMed Central

    Gatenby, PAC; Shaw, C; Hine, C; Scholtes, S; Koutra, M; Andrew, H; Hacking, M; Allum, WH

    2015-01-01

    Introduction Enhanced recovery programmes have been established in some areas of elective surgery. This study applied enhanced recovery principles to elective oesophageal and gastric cancer surgery. Methods An enhanced recovery programme for patients undergoing open oesophagogastrectomy, total and subtotal gastrectomy for oesophageal and gastric malignancy was designed. A retrospective cohort study compared length of stay on the critical care unit (CCU), total length of inpatient stay, rates of complications and in-hospital mortality prior to (35 patients) and following (27 patients) implementation. Results In the cohort study, the median total length of stay was reduced by 3 days following oesophagogastrectomy and total gastrectomy. The median length of stay on the CCU remained the same for all patients. The rates of complications and mortality were the same. Conclusions The standardised protocol reduced the median overall length of stay but did not reduce CCU stay. Enhanced recovery principles can be applied to patients undergoing major oesophagogastrectomy and total gastrectomy as long as they have minimal or reversible co-morbidity. PMID:26414360

  14. Optimization of injection molding process parameters for a plastic cell phone housing component

    NASA Astrophysics Data System (ADS)

    Rajalingam, Sokkalingam; Vasant, Pandian; Khe, Cheng Seong; Merican, Zulkifli; Oo, Zeya

    2016-11-01

    Injection molding is one of the most widely used processes for producing thin-walled plastic items. However, setting optimal process parameters is difficult, as poor settings can cause defects such as shrinkage in the molded part. This study aims to determine optimum injection molding process parameters that reduce shrinkage defects in a plastic cell phone cover. The machine settings in current use produced shrinkage, with length and width dimensions falling below the specification limit. Further experiments were therefore needed to identify optimum process parameters that keep length and width close to their target values with minimal variation. The mold temperature, injection pressure and screw rotation speed are used as process parameters in this research. Response Surface Methodology (RSM) is applied to find optimal molding process parameters, and the major factors influencing the responses were identified using analysis of variance (ANOVA). Verification runs confirmed that the shrinkage defect can be minimized with the optimal settings found by RSM.
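
    The RSM workflow, fitting a second-order polynomial to measured responses and solving for the stationary point, can be sketched with synthetic data. The coded factor levels and response values below are hypothetical illustrations, not the study's measurements.

```python
import numpy as np

# Hypothetical coded factor levels (-1, 0, 1) for two factors (say, mold
# temperature and injection pressure) and a synthetic shrinkage response.
X = np.array([(a, b) for a in (-1, 0, 1) for b in (-1, 0, 1)], dtype=float)
y = 0.5 + 0.2 * (X[:, 0] - 0.3) ** 2 + 0.1 * (X[:, 1] + 0.2) ** 2

def fit_quadratic(X, y):
    """Fit a full second-order response surface:
    y ~ b0 + b1*x1 + b2*x2 + b11*x1^2 + b22*x2^2 + b12*x1*x2."""
    x1, x2 = X[:, 0], X[:, 1]
    A = np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1 * x2])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    return beta

def stationary_point(beta):
    """Solve grad f = 0 for the fitted surface to locate candidate optimal settings."""
    _, b1, b2, b11, b22, b12 = beta
    H = np.array([[2 * b11, b12], [b12, 2 * b22]])
    return np.linalg.solve(H, -np.array([b1, b2]))

beta = fit_quadratic(X, y)
opt = stationary_point(beta)  # coded factor settings minimizing the fitted shrinkage
```

    Because the synthetic response is itself quadratic, the fitted surface recovers the generating optimum exactly; with real, noisy measurements, ANOVA on the fitted coefficients would guide which terms to keep.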

  15. Minimization of bacterial size allows for complement evasion and is overcome by the agglutinating effect of antibody

    PubMed Central

    Dalia, Ankur B.; Weiser, Jeffrey N.

    2011-01-01

    SUMMARY The complement system, which functions by lysing pathogens directly or by promoting their uptake by phagocytes, is critical for controlling many microbial infections. Here we show that in Streptococcus pneumoniae, increasing bacterial chain length sensitizes this pathogen to complement deposition and subsequent uptake by human neutrophils. Consistent with this, we show that minimizing chain length provides wild-type bacteria with a competitive advantage in vivo in a model of systemic infection. Investigating how the host overcomes this virulence strategy, we find that antibody promotes complement-dependent opsonophagocytic killing of Streptococcus pneumoniae and lysis of Haemophilus influenzae independent of Fc-mediated effector functions. Consistent with the agglutinating effect of antibody, F(ab′)2 but not Fab could promote this effect. Therefore, increasing pathogen size, whether by natural changes in cellular morphology or via antibody-mediated agglutination, promotes complement-dependent killing. These observations have broad implications for how cell size and morphology can affect virulence among pathogenic microbes. PMID:22100164

  16. Collective intelligence for control of distributed dynamical systems

    NASA Astrophysics Data System (ADS)

    Wolpert, D. H.; Wheeler, K. R.; Tumer, K.

    2000-03-01

    We consider the El Farol bar problem, also known as the minority game (W. B. Arthur, The American Economic Review, 84 (1994) 406; D. Challet and Y. C. Zhang, Physica A, 256 (1998) 514). We view it as an instance of the general problem of how to configure the nodal elements of a distributed dynamical system so that they do not "work at cross purposes", in that their collective dynamics avoids frustration and thereby achieves a provided global goal. We summarize a mathematical theory for such configuration applicable when (as in the bar problem) the global goal can be expressed as minimizing a global energy function and the nodes can be expressed as minimizers of local free energy functions. We show that a system designed with that theory performs nearly optimally for the bar problem.
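
    The bar problem can be sketched with the standard minority-game setup the authors cite (Challet and Zhang): each agent holds two random lookup-table strategies over the recent history of winning sides and plays whichever has the better track record. This is a generic illustration with arbitrary parameter values, not the collective-intelligence configuration method the paper summarizes.

```python
import random

def minority_game(n_agents=101, memory=3, n_rounds=200, seed=0):
    """Basic minority game: each round, every agent picks side 0 or 1;
    agents on the minority side win. Each agent keeps two random strategies
    (tables mapping the last `memory` winners to a choice) and plays the
    one that has predicted the minority side more often so far."""
    rng = random.Random(seed)
    n_keys = 2 ** memory
    strategies = [[[rng.randrange(2) for _ in range(n_keys)]
                   for _ in range(2)] for _ in range(n_agents)]
    scores = [[0, 0] for _ in range(n_agents)]
    history = 0            # last `memory` winning sides packed into an int
    attendance = []
    for _ in range(n_rounds):
        choices = []
        for a in range(n_agents):
            s = 0 if scores[a][0] >= scores[a][1] else 1
            choices.append(strategies[a][s][history])
        ones = sum(choices)
        minority = 1 if ones < n_agents - ones else 0
        attendance.append(ones)
        for a in range(n_agents):          # credit strategies that picked the minority
            for s in range(2):
                if strategies[a][s][history] == minority:
                    scores[a][s] += 1
        history = ((history << 1) | minority) % n_keys
    return attendance

att = minority_game()  # attendance fluctuates around half the population
```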

  17. Open shop scheduling problem to minimize total weighted completion time

    NASA Astrophysics Data System (ADS)

    Bai, Danyu; Zhang, Zhihai; Zhang, Qiang; Tang, Mengqian

    2017-01-01

    A given number of jobs in an open shop scheduling environment must each be processed for given amounts of time on each of a given set of machines in an arbitrary sequence. This study aims to achieve a schedule that minimizes total weighted completion time. Owing to the strong NP-hardness of the problem, the weighted shortest processing time block (WSPTB) heuristic is presented to obtain approximate solutions for large-scale problems. Performance analysis proves the asymptotic optimality of the WSPTB heuristic in the sense of probability limits. The largest weight block rule is provided to seek optimal schedules in polynomial time for a special case. A hybrid discrete differential evolution algorithm is designed to obtain high-quality solutions for moderate-scale problems. Simulation experiments demonstrate the effectiveness of the proposed algorithms.
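
    The WSPTB heuristic builds on the classic weighted-shortest-processing-time idea. As a hedged illustration of the underlying ordering rule, here is Smith's rule, which is exactly optimal on a single machine; the paper's block variant for the open shop is more involved.

```python
def wspt_total_weighted_completion(jobs):
    """Schedule (weight, processing_time) jobs on one machine by the WSPT
    rule: descending weight/time ratio (Smith's rule), which minimizes
    total weighted completion time in the single-machine case."""
    order = sorted(jobs, key=lambda j: j[1] / j[0])  # ascending p/w == descending w/p
    t = total = 0
    for w, p in order:
        t += p          # completion time of this job
        total += w * t  # accumulate weighted completion time
    return order, total

order, total = wspt_total_weighted_completion([(3, 1), (1, 3), (2, 2)])
```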

  18. ɛ-subgradient algorithms for bilevel convex optimization

    NASA Astrophysics Data System (ADS)

    Helou, Elias S.; Simões, Lucas E. A.

    2017-05-01

    This paper introduces and studies the convergence properties of a new class of explicit ɛ-subgradient methods for the task of minimizing a convex function over a set of minimizers of another convex minimization problem. The general algorithm specializes to some important cases, such as first-order methods applied to a varying objective function, which have computationally cheap iterations. We present numerical experimentation concerning certain applications where the theoretical framework encompasses efficient algorithmic techniques, enabling the use of the resulting methods to solve very large practical problems arising in tomographic image reconstruction. ES Helou was supported by FAPESP grants 2013/07375-0 and 2013/16508-3 and CNPq grant 311476/2014-7. LEA Simões was supported by FAPESP grants 2011/02219-4 and 2013/14615-7.

  19. Does the intercept of the heat-stress relation provide an accurate estimate of cardiac activation heat?

    PubMed

    Pham, Toan; Tran, Kenneth; Mellor, Kimberley M; Hickey, Anthony; Power, Amelia; Ward, Marie-Louise; Taberner, Andrew; Han, June-Chiew; Loiselle, Denis

    2017-07-15

    The heat of activation of cardiac muscle reflects the metabolic cost of restoring ionic homeostasis following a contraction. The accuracy of its measurement depends critically on the abolition of crossbridge cycling. We abolished crossbridge activity in isolated rat ventricular trabeculae by use of blebbistatin, an agent that selectively inhibits myosin II ATPase. We found cardiac activation heat to be muscle length independent and to account for 15-20% of total heat production at body temperature. We conclude that it can be accurately estimated at minimal muscle length. Activation heat arises from two sources during the contraction of striated muscle. It reflects the metabolic expenditure associated with Ca 2+ pumping by the sarcoplasmic reticular Ca 2+ -ATPase and Ca 2+ translocation by the Na + /Ca 2+ exchanger coupled to the Na + ,K + -ATPase. In cardiac preparations, investigators are constrained in estimating its magnitude by reducing muscle length to the point where macroscopic twitch force vanishes. But this experimental protocol has been criticised since, at zero force, the observed heat may be contaminated by residual crossbridge cycling activity. To eliminate this concern, the putative thermal contribution from crossbridge cycling activity must be abolished, at least at minimal muscle length. We achieved this using blebbistatin, a selective inhibitor of myosin II ATPase. Using a microcalorimeter, we measured the force production and heat output, as functions of muscle length, of isolated rat trabeculae from both ventricles contracting isometrically at 5 Hz and at 37°C. In the presence of blebbistatin (15 μmol l -1 ), active force was zero but heat output remained constant, at all muscle lengths. Activation heat measured in the presence of blebbistatin was not different from that estimated from the intercept of the heat-stress relation in its absence. We thus reached two conclusions. First, activation heat is independent of muscle length. 
Second, residual crossbridge heat is negligible at zero active force; hence, the intercept of the cardiac heat-force relation provides an estimate of activation heat uncontaminated by crossbridge cycling. Both results resolve long-standing disputes in the literature. © 2017 The Authors. The Journal of Physiology © 2017 The Physiological Society.

  20. Optimal UAS Assignments and Trajectories for Persistent Surveillance and Data Collection from a Wireless Sensor Network

    DTIC Science & Technology

    2015-12-24

    minimizing a weighted sum of the time and control effort needed to collect sensor data. This problem formulation is a modified traveling salesman problem. (Excerpted contents: 2.5 The Shortest Path Problem; 2.5.1 Traveling Salesman Problem; 3.3.1 Initial Guess by Traveling Salesman Problem Solution)

  1. A 640-MHz 32-megachannel real-time polyphase-FFT spectrum analyzer

    NASA Technical Reports Server (NTRS)

    Zimmerman, G. A.; Garyantes, M. F.; Grimm, M. J.; Charny, B.

    1991-01-01

    A polyphase fast Fourier transform (FFT) spectrum analyzer being designed for NASA's Search for Extraterrestrial Intelligence (SETI) Sky Survey at the Jet Propulsion Laboratory is described. By replacing the time domain multiplicative window preprocessing with polyphase filter processing, much of the processing loss of windowed FFTs can be eliminated. Polyphase coefficient memory costs are minimized by effective use of run length compression. Finite word length effects are analyzed, producing a balanced system with 8 bit inputs, 16 bit fixed point polyphase arithmetic, and 24 bit fixed point FFT arithmetic. Fixed point renormalization midway through the computation is seen to be naturally accommodated by the matrix FFT algorithm proposed. Simulation results validate the finite word length arithmetic analysis and the renormalization technique.
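
    The core idea, replacing the FFT's multiplicative window with a multi-tap polyphase filter per channel, can be sketched in floating point. The windowed-sinc prototype filter below is a generic assumption for illustration, not JPL's design, and the sketch omits the fixed-point arithmetic and renormalization the paper analyzes.

```python
import numpy as np

def polyphase_fft(x, n_channels, taps_per_branch):
    """Polyphase-FFT channelizer sketch: weight each input frame by a
    windowed-sinc prototype lowpass filter, sum the taps within each
    polyphase branch, then FFT across the branches."""
    n_taps = n_channels * taps_per_branch
    h = np.hamming(n_taps) * np.sinc(np.arange(n_taps) / n_channels
                                     - taps_per_branch / 2)
    h = h.reshape(taps_per_branch, n_channels)
    frames = []
    for start in range(0, len(x) - n_taps + 1, n_channels):
        seg = x[start:start + n_taps].reshape(taps_per_branch, n_channels)
        frames.append(np.fft.fft((seg * h).sum(axis=0)))
    return np.array(frames)

# A complex tone centred on channel 3 of an 8-channel analyzer.
x = np.exp(2j * np.pi * 3 * np.arange(256) / 8)
power = np.abs(polyphase_fft(x, n_channels=8, taps_per_branch=4)).mean(axis=0)
```

    Compared with a plain windowed FFT of the same channel count, the multi-tap prototype filter sharpens the channel response, which is the source of the reduced processing loss described above.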

  2. Minimal models of compact symplectic semitoric manifolds

    NASA Astrophysics Data System (ADS)

    Kane, D. M.; Palmer, J.; Pelayo, Á.

    2018-02-01

    A symplectic semitoric manifold is a symplectic 4-manifold endowed with a Hamiltonian (S1 × R) -action satisfying certain conditions. The goal of this paper is to construct a new symplectic invariant of symplectic semitoric manifolds, the helix, and give applications. The helix is a symplectic analogue of the fan of a nonsingular complete toric variety in algebraic geometry that takes into account the effects of the monodromy near focus-focus singularities. We give two applications of the helix: first, we use it to give a classification of the minimal models of symplectic semitoric manifolds, where "minimal" is in the sense of not admitting any blowdowns. The second application is an extension to the compact case of a well known result of Vũ Ngọc about the constraints posed on a symplectic semitoric manifold by the existence of focus-focus singularities. The helix makes it possible to translate a symplectic geometric problem into an algebraic problem, and the paper describes a method to solve this type of algebraic problem.

  3. Round-off errors in cutting plane algorithms based on the revised simplex procedure

    NASA Technical Reports Server (NTRS)

    Moore, J. E.

    1973-01-01

    This report statistically analyzes computational round-off errors associated with the cutting plane approach to solving linear integer programming problems. Cutting plane methods require that the inverse of a sequence of matrices be computed. The problem basically reduces to one of minimizing round-off errors in the sequence of inverses. Two procedures for minimizing this problem are presented, and their influence on error accumulation is statistically analyzed. One procedure employs a very small tolerance factor to round computed values to zero. The other procedure is a numerical analysis technique for reinverting or improving the approximate inverse of a matrix. The results indicated that round-off accumulation can be effectively minimized by employing a tolerance factor which reflects the number of significant digits carried for each calculation and by applying the reinversion procedure once to each computed inverse. If 18 significant digits plus an exponent are carried for each variable during computations, then a tolerance value of 0.1 x 10 to the minus 12th power is reasonable.
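
    The two procedures can be sketched as follows. The tolerance value mirrors the report's suggestion for roughly 18-digit arithmetic, while the reinversion step is shown as a standard Newton-Schulz refinement, which may differ in detail from the report's routine.

```python
import numpy as np

TOL = 0.1e-12  # tolerance suggested for ~18 significant digits per calculation

def round_to_zero(M, tol=TOL):
    """Procedure 1: round computed values below the tolerance to exactly zero."""
    M = M.copy()
    M[np.abs(M) < tol] = 0.0
    return M

def reinvert(A, X):
    """Procedure 2 sketch: one refinement pass of an approximate inverse X of A,
    here a Newton-Schulz step, which squares the residual ||I - A@X||."""
    return X @ (2 * np.eye(A.shape[0]) - A @ X)

A = np.array([[4.0, 1.0], [2.0, 3.0]])
X = np.linalg.inv(A) + 1e-6   # inverse contaminated by simulated round-off
X1 = reinvert(A, X)           # residual norm shrinks after one pass
```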

  4. Free-energy minimization and the dark-room problem.

    PubMed

    Friston, Karl; Thornton, Christopher; Clark, Andy

    2012-01-01

    Recent years have seen the emergence of an important new fundamental theory of brain function. This theory brings information-theoretic, Bayesian, neuroscientific, and machine learning approaches into a single framework whose overarching principle is the minimization of surprise (or, equivalently, the maximization of expectation). The most comprehensive such treatment is the "free-energy minimization" formulation due to Karl Friston (see e.g., Friston and Stephan, 2007; Friston, 2010a,b - see also Fiorillo, 2010; Thornton, 2010). A recurrent puzzle raised by critics of these models is that biological systems do not seem to avoid surprises. We do not simply seek a dark, unchanging chamber, and stay there. This is the "Dark-Room Problem." Here, we describe the problem and further unpack the issues to which it speaks. Using the same format as the prolog of Eddington's Space, Time, and Gravitation (Eddington, 1920) we present our discussion as a conversation between: an information theorist (Thornton), a physicist (Friston), and a philosopher (Clark).

  5. Time versus energy minimization migration strategy varies with body size and season in long-distance migratory shorebirds.

    PubMed

    Zhao, Meijuan; Christie, Maureen; Coleman, Jonathan; Hassell, Chris; Gosbell, Ken; Lisovski, Simeon; Minton, Clive; Klaassen, Marcel

    2017-01-01

    Migrants have been hypothesised to use different migration strategies between seasons: a time-minimization strategy during their pre-breeding migration towards the breeding grounds and an energy-minimization strategy during their post-breeding migration towards the wintering grounds. Besides season, we propose body size as a key factor in shaping migratory behaviour. Specifically, given that body size is expected to correlate negatively with maximum migration speed and that large birds tend to use more time to complete their annual life-history events (such as moult, breeding and migration), we hypothesise that large-sized species are time stressed all year round. Consequently, large birds are not only likely to adopt a time-minimization strategy during pre-breeding migration, but also during post-breeding migration, to guarantee a timely arrival at both the non-breeding (i.e. wintering) and breeding grounds. We tested this idea using individual tracks across six long-distance migratory shorebird species (family Scolopacidae) along the East Asian-Australasian Flyway varying in size from 50 g to 750 g lean body mass. Migration performance was compared between pre- and post-breeding migration using four quantifiable migratory behaviours that serve to distinguish between a time- and energy-minimization strategy, including migration speed, number of staging sites, total migration distance and step length from one site to the next. During pre- and post-breeding migration, the shorebirds generally covered similar distances, but they tended to migrate faster, used fewer staging sites, and tended to use longer step lengths during pre-breeding migration. These seasonal differences are consistent with the prediction that a time-minimization strategy is used during pre-breeding migration, whereas an energy-minimization strategy is used during post-breeding migration. 
However, there was also a tendency for the seasonal difference in migration speed to progressively disappear with an increase in body size, supporting our hypothesis that larger species tend to use time-minimization strategies during both pre- and post-breeding migration. Our study highlights that body size plays an important role in shaping migratory behaviour. Larger migratory bird species are potentially time constrained during not only the pre- but also the post-breeding migration. Conservation of their habitats during both seasons may thus be crucial for averting further population declines.

  6. Airway compliance and dynamics explain the apparent discrepancy in length adaptation between intact airways and smooth muscle strips.

    PubMed

    Dowie, Jackson; Ansell, Thomas K; Noble, Peter B; Donovan, Graham M

    2016-01-01

    Length adaptation is a phenomenon observed in airway smooth muscle (ASM) wherein over time there is a shift in the length-tension curve. There is potential for length adaptation to play an important role in airway constriction and airway hyper-responsiveness in asthma. Recent results by Ansell et al., 2015 (JAP 2014 10.1152/japplphysiol.00724.2014) have cast doubt on this role by testing for length adaptation using an intact airway preparation, rather than strips of ASM. Using this technique they found no evidence for length adaptation in intact airways. Here we attempt to resolve this apparent discrepancy by constructing a minimal mathematical model of the intact airway, including ASM which follows the classic length-tension curve and undergoes length adaptation. This allows us to show that (1) no evidence of length adaptation should be expected in large, cartilaginous, intact airways; (2) even in highly compliant peripheral airways, or at more compliant regions of the pressure-volume curve of large airways, the effect of length adaptation would be modest and at best marginally detectable in intact airways; (3) the key parameters which control the appearance of length adaptation in intact airways are airway compliance and the relaxation timescale. The results of this mathematical simulation suggest that length adaptation observed at the level of the isolated ASM may not clearly manifest in the normal intact airway. Copyright © 2015 Elsevier B.V. All rights reserved.

  7. Therapeutic strategies for patients with micropenis or penile dysmorphic disorder.

    PubMed

    Kayes, Oliver; Shabbir, Majid; Ralph, David; Minhas, Suks

    2012-09-01

    Micropenis in adults is defined as a stretched length of <7.5 cm. Many aetiologies exist, including congenital and endocrinological causes as well as pathological conditions, such as penile lichen sclerosus, trauma and genital cancer. The resulting reduction in functional penile length can lead to considerable psychosexual morbidity. Furthermore, the subset of patients with micropenis who also suffer from penile dysmorphic disorder require careful and intensive psychological counselling. Corrective surgery for micropenis can be performed in patients with realistic expectations. Total phalloplasty using radial-artery-based forearm skin flaps can offer restoration of normal penile length in selected patients. More-conservative surgical techniques to improve length or girth are limited by minimal enhancement but associated with a significantly lower rate of complications and comorbidity compared to total phalloplasty. Emerging tissue engineering techniques might represent a suitable alternative to penile replacement surgery in the future.

  8. The Hand-Assisted Laparoscopic Approach to Resection of Pancreatic Mucinous Cystic Neoplasms: An Underused Technique?

    PubMed

    Postlewait, Lauren M; Ethun, Cecilia G; McInnis, Mia R; Merchant, Nipun; Parikh, Alexander; Idrees, Kamran; Isom, Chelsea A; Hawkins, William; Fields, Ryan C; Strand, Matthew; Weber, Sharon M; Cho, Clifford S; Salem, Ahmed; Martin, Robert C G; Scoggins, Charles; Bentrem, David; Kim, Hong J; Carr, Jacquelyn; Ahmad, Syed; Abbott, Daniel; Wilson, Gregory C; Kooby, David A; Maithel, Shishir K

    2018-01-01

    Pancreatic mucinous cystic neoplasms (MCNs) are rare tumors typically of the distal pancreas that harbor malignant potential. Although resection is recommended, data are limited on optimal operative approaches to distal pancreatectomy for MCN. MCN resections (2000-2014; eight institutions) were included. Outcomes of minimally invasive and open MCN resections were compared. A total of 289 patients underwent distal pancreatectomy for MCN: 136(47%) minimally invasive and 153(53%) open. Minimally invasive procedures were associated with smaller MCN size (3.9 vs 6.8 cm; P = 0.001), lower operative blood loss (192 vs 392 mL; P = 0.001), and shorter hospital stay(5 vs 7 days; P = 0.001) compared with open. Despite higher American Society of Anesthesiologists class, hand-assisted (n = 46) had similar advantages as laparoscopic/robotic (n = 76). When comparing hand-assisted to open, although MCN size was slightly smaller (4.1 vs 6.8 cm; P = 0.001), specimen length, operative time, and nodal yield were identical. Similar to laparoscopic/robotic, hand-assisted had lower operative blood loss (161 vs 392 mL; P = 0.001) and shorter hospital stay (5 vs 7 days; P = 0.03) compared with open, without increased complications. Hand-assisted laparoscopic technique is a useful approach for MCN resection because specimen length, lymph node yield, operative time, and complication profiles are similar to open procedures, but it still offers the advantages of a minimally invasive approach. Hand-assisted laparoscopy should be considered as an alternative to open technique or as a successive step before converting from total laparoscopic to open distal pancreatectomy for MCN.

  9. Variation in leader length of bitterbrush

    Treesearch

    Richard L. Hubbard; David. Dunaway

    1958-01-01

    The estimation of herbage production and utilization in browse plants has been a problem for many years. Most range technicians have simply estimated the average length of twigs or leaders, then expressed use by deer and livestock as a percentage thereof based on the estimated average length left after grazing. Riordan used this method on mountain mahogany (

  10. Newton's Radii, Maupertuis' Arc Length, and Voltaire's Giant

    ERIC Educational Resources Information Center

    Simoson, Andrew J.

    2011-01-01

    Given two arc length measurements along the perimeter of an ellipse--one taken near the long diameter, the other taken anywhere else--how do you find the lengths of major and minor axes? This was a problem of great interest from the time of Newton's "Principia" until the mid-eighteenth century when France launched twin geodesic…

  11. Multigrid one shot methods for optimal control problems: Infinite dimensional control

    NASA Technical Reports Server (NTRS)

    Arian, Eyal; Taasan, Shlomo

    1994-01-01

    The multigrid one shot method for optimal control problems, governed by elliptic systems, is introduced for the infinite dimensional control space. In this case, the control variable is a function whose discrete representation involves an increasing number of variables with grid refinement. The minimization algorithm uses Lagrange multipliers to calculate sensitivity gradients. A preconditioned gradient descent algorithm is accelerated by a set of coarse grids. It optimizes for different scales in the representation of the control variable on different discretization levels. An analysis which reduces the problem to the boundary is introduced. It is used to approximate the two level asymptotic convergence rate, to determine the amplitude of the minimization steps, and the choice of a high pass filter to be used when necessary. The effectiveness of the method is demonstrated on a series of test problems. The new method enables the solutions of optimal control problems at the same cost of solving the corresponding analysis problems just a few times.

  12. 78 FR 40823 - Reports, Forms, and Record Keeping Requirements

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-07-08

    ... at time of approval. Title: National Survey of Principal Drivers of Vehicles with a Rear Seat Belt... from both groups and information on their passengers seat belt usage habits, as well as the... use computer-assisted telephone interviewing to reduce interview length and minimize recording errors...

  13. Distributed Optimization

    NASA Technical Reports Server (NTRS)

    Macready, William; Wolpert, David

    2005-01-01

    We demonstrate a new framework for analyzing and controlling distributed systems, by solving constrained optimization problems with an algorithm based on that framework. The framework is an information-theoretic extension of conventional full-rationality game theory to allow bounded rational agents. The associated optimization algorithm is a game in which agents control the variables of the optimization problem. They do this by jointly minimizing a Lagrangian of (the probability distribution of) their joint state. The updating of the Lagrange parameters in that Lagrangian is a form of automated annealing, one that focuses the multi-agent system on the optimal pure strategy. We present computer experiments for the k-sat constraint satisfaction problem and for unconstrained minimization of NK functions.

  14. The modification of hybrid method of ant colony optimization, particle swarm optimization and 3-OPT algorithm in traveling salesman problem

    NASA Astrophysics Data System (ADS)

    Hertono, G. F.; Ubadah; Handari, B. D.

    2018-03-01

    The traveling salesman problem (TSP) is the well-known problem of finding the shortest tour that visits every vertex in a given set exactly once, except the first vertex. This paper discusses three modified methods for solving the TSP by combining Ant Colony Optimization (ACO), Particle Swarm Optimization (PSO) and the 3-Opt algorithm. ACO is used to find TSP solutions, with PSO implemented to find the best values of the parameters α and β used in ACO. The 3-Opt algorithm is then used to reduce the total tour length of the feasible solutions obtained by ACO. In the first modification, 3-Opt is applied to the feasible solutions obtained at each iteration; in the second, to the entire set of solutions obtained at every iteration; and in the third, to the distinct solutions obtained at each iteration. Results are tested on 6 benchmark problems taken from TSPLIB by calculating the relative error to the best known solution as well as the running time. Only the second and third modifications give satisfactory results, although the second requires more execution time than the third.
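
    The segment-reversal local search at the heart of 3-Opt can be illustrated with its simpler 2-opt special case, which reconnects two edges rather than three. This is a generic sketch of the tour-improvement step, not the paper's hybrid ACO-PSO-3-Opt method.

```python
import math

def tour_length(tour, dist):
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def two_opt(tour, dist):
    """Repeatedly reverse tour segments while any reversal shortens the tour."""
    best, improved = list(tour), True
    while improved:
        improved = False
        n = len(best)
        for i in range(n - 1):
            for j in range(i + 2, n - (1 if i == 0 else 0)):
                a, b = best[i], best[i + 1]
                c, d = best[j], best[(j + 1) % n]
                # Replacing edges (a,b),(c,d) with (a,c),(b,d) means
                # reversing the segment between b and c.
                if dist[a][c] + dist[b][d] < dist[a][b] + dist[c][d] - 1e-12:
                    best[i + 1:j + 1] = reversed(best[i + 1:j + 1])
                    improved = True
    return best

pts = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
dist = [[math.dist(p, q) for q in pts] for p in pts]
best = two_opt([0, 2, 1, 3], dist)  # starts from a self-crossing tour
```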

  15. On the convergence of nonconvex minimization methods for image recovery.

    PubMed

    Xiao, Jin; Ng, Michael Kwok-Po; Yang, Yu-Fei

    2015-05-01

    Nonconvex nonsmooth regularization method has been shown to be effective for restoring images with neat edges. Fast alternating minimization schemes have also been proposed and developed to solve the nonconvex nonsmooth minimization problem. The main contribution of this paper is to show the convergence of these alternating minimization schemes, based on the Kurdyka-Łojasiewicz property. In particular, we show that the iterates generated by the alternating minimization scheme converge to a critical point of this nonconvex nonsmooth objective function. We also extend the analysis to nonconvex nonsmooth regularization model with box constraints, and obtain similar convergence results of the related minimization algorithm. Numerical examples are given to illustrate our convergence analysis.
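
    The alternating-minimization mechanics can be illustrated on a smooth convex toy objective; the paper's setting is nonconvex and nonsmooth, so this sketch shows only the block-update pattern, not the Kurdyka-Łojasiewicz convergence argument.

```python
def alternating_minimize(n_iters=100):
    """Block-coordinate (alternating) minimization of the convex toy
    objective f(x, y) = (x - 1)**2 + (y - 2)**2 + (x - y)**2; each
    update is the exact closed-form minimizer with the other block fixed."""
    x = y = 0.0
    for _ in range(n_iters):
        x = (1 + y) / 2   # argmin over x with y held fixed
        y = (2 + x) / 2   # argmin over y with x held fixed
    return x, y

x, y = alternating_minimize()  # converges to the global minimizer (4/3, 5/3)
```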

  16. What energy functions can be minimized via graph cuts?

    PubMed

    Kolmogorov, Vladimir; Zabih, Ramin

    2004-02-01

    In the last few years, several new algorithms based on graph cuts have been developed to solve energy minimization problems in computer vision. Each of these techniques constructs a graph such that the minimum cut on the graph also minimizes the energy. Yet, because these graph constructions are complex and highly specific to a particular energy function, graph cuts have seen limited application to date. In this paper, we give a characterization of the energy functions that can be minimized by graph cuts. Our results are restricted to functions of binary variables. However, our work generalizes many previous constructions and is easily applicable to vision problems that involve large numbers of labels, such as stereo, motion, image restoration, and scene reconstruction. We give a precise characterization of what energy functions can be minimized using graph cuts, among the energy functions that can be written as a sum of terms containing three or fewer binary variables. We also provide a general-purpose construction to minimize such an energy function. Finally, we give a necessary condition for any energy function of binary variables to be minimized by graph cuts. Researchers who are considering the use of graph cuts to optimize a particular energy function can use our results to determine if this is possible and then follow our construction to create the appropriate graph. A software implementation is freely available.
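
    For energies written as sums of pairwise terms over binary variables, the characterization reduces to a per-term regularity (submodularity) inequality that is easy to check. The example terms below are standard illustrations, not taken from the paper.

```python
def is_regular(E):
    """Regularity (submodularity) condition for a pairwise binary term E[x][y]:
    E(0,0) + E(1,1) <= E(0,1) + E(1,0). Terms satisfying it can be represented
    in a graph whose minimum cut minimizes the energy."""
    return E[0][0] + E[1][1] <= E[0][1] + E[1][0]

potts = [[0, 1], [1, 0]]     # penalizes label disagreement: regular
inverted = [[1, 0], [0, 1]]  # rewards disagreement: not graph-representable
```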

  17. Middle School Students' Reasoning in Nonlinear Proportional Problems in Geometry

    ERIC Educational Resources Information Center

    Ayan, Rukiye; Isiksal Bostan, Mine

    2018-01-01

    In this study, we investigate sixth, seventh, and eighth grade students' achievement in nonlinear (quadratic or cubic) proportional problems regarding length, area, and volume of enlarged figures. In addition, we examine students' solution strategies for the problems and obstacles that prevent students from answering the problems correctly by…

  18. Use of Nintendo Wii Balance Board for posturographic analysis of Multiple Sclerosis patients with minimal balance impairment.

    PubMed

    Severini, Giacomo; Straudi, Sofia; Pavarelli, Claudia; Da Roit, Marco; Martinuzzi, Carlotta; Di Marco Pizzongolo, Laura; Basaglia, Nino

    2017-03-11

    The Wii Balance Board (WBB) has been proposed as an inexpensive alternative to laboratory-grade Force Plates (FP) for the instrumented assessment of balance. Previous studies have reported a good validity and reliability of the WBB for estimating the path length of the Center of Pressure. Here we extend this analysis to 18 balance related features extracted from healthy subjects (HS) and individuals affected by Multiple Sclerosis (MS) with minimal balance impairment. Eighteen MS patients with minimal balance impairment (Berg Balance Scale 53.3 ± 3.1) and 18 age-matched HS were recruited in this study. All subjects underwent instrumented balance tests on the FP and WBB consisting of quiet standing with the eyes open and closed. Linear correlation analysis and Bland-Altman plots were used to assess relations between path lengths estimated using the WBB and the FP. 18 features were extracted from the instrumented balance tests. Statistical analysis was used to assess significant differences between the features estimated using the WBB and the FP and between HS and MS. The Spearman correlation coefficient was used to evaluate the validity and the Intraclass Correlation Coefficient was used to assess the reliability of WBB measures with respect to the FP. Classifiers based on Support Vector Machines trained on the FP and WBB features were used to assess the ability of both devices to discriminate between HS and MS. We found a significant linear relation between the path lengths calculated from the WBB and the FP indicating an overestimation of these parameters in the WBB. We observed significant differences in the path lengths between FP and WBB in most conditions. However, significant differences were not found for the majority of the other features. We observed the same significant differences between the HS and MS populations across the two measurement systems. Validity and reliability were moderate-to-high for all the analyzed features. 
Both the FP- and WBB-trained classifiers showed similar classification performance (>80%) when discriminating between HS and MS. Our results support the observation that the WBB, although not suitable for obtaining absolute measures, could be successfully used in comparative analyses of different populations.
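The Bland-Altman agreement analysis described above can be sketched in a few lines; a minimal example assuming NumPy, with hypothetical path-length values (the study's actual data are not reproduced here):

```python
import numpy as np

def bland_altman(fp, wbb):
    """Bland-Altman agreement statistics between two measurement devices.

    Returns the mean bias (WBB - FP) and the 95% limits of agreement
    (bias +/- 1.96 * SD of the per-trial differences).
    """
    fp, wbb = np.asarray(fp, float), np.asarray(wbb, float)
    diff = wbb - fp                      # per-trial difference
    bias = diff.mean()                   # systematic over/underestimation
    sd = diff.std(ddof=1)                # sample SD of the differences
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Hypothetical path lengths (mm) from the same trials on each device
fp_path  = [310.0, 295.0, 330.0, 305.0, 320.0]
wbb_path = [325.0, 310.0, 348.0, 318.0, 336.0]
bias, (lo, hi) = bland_altman(fp_path, wbb_path)
print(f"bias = {bias:.1f} mm, LoA = [{lo:.1f}, {hi:.1f}]")
```

A positive bias, as in this toy data, corresponds to the overestimation by the WBB reported in the abstract.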

  19. Slack length reduces the contractile phenotype of the Swine carotid artery.

    PubMed

    Rembold, Christopher M; Garvey, Sean M; Tejani, Ankit D

    2013-01-01

    Contraction is the primary function of adult arterial smooth muscle. However, in response to vessel injury or inflammation, arterial smooth muscle is able to phenotypically modulate from the contractile state to several 'synthetic' states characterized by proliferation, migration and/or increased cytokine secretion. We examined the effect of tissue length (L) on the phenotype of intact, isometrically held, initially contractile swine carotid artery tissues. Tissues were studied (1) without prolonged incubation at the optimal length for force generation (1.0 Lo, control), (2) with prolonged incubation for 17 h at 1.0 Lo, or (3) with prolonged incubation at slack length (0.6 Lo) for 16 h and then restoration to 1.0 Lo for 1 h. Prolonged incubation at 1.0 Lo minimally reduced the contractile force without substantially altering the mediators of contraction (crossbridge phosphorylation, shortening velocity or stimulated actin polymerization). Prolonged incubation of tissues at slack length (0.6 Lo), despite return of length to 1.0 Lo, substantially reduced contractile force, reduced crossbridge phosphorylation, nearly abolished crossbridge cycling (shortening velocity) and abolished stimulated actin polymerization. These data suggest that (1) slack length treatment significantly alters the contractile phenotype of arterial tissue, and (2) slack length treatment is a model to study acute phenotypic modulation of intact arterial smooth muscle. Copyright © 2013 S. Karger AG, Basel.

  20. An Adaptive Pheromone Updation of the Ant-System using LMS Technique

    NASA Astrophysics Data System (ADS)

    Paul, Abhishek; Mukhopadhyay, Sumitra

    2010-10-01

    We propose a modified model of pheromone updation for the Ant System, called the Adaptive Ant System (AAS), based on the properties of basic adaptive filters. We exploit the properties of the Least Mean Square (LMS) algorithm in the pheromone update to find the best minimum tour for the Travelling Salesman Problem (TSP). The TSP library was used for the selection of benchmark problems, and the proposed AAS determines the minimum tour length for problems containing a large number of cities. Our algorithm shows effective results and gives the least tour length in most cases compared with other existing approaches.
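The abstract does not give the AAS update rule, but an LMS-style pheromone update can be sketched as a first-order error-correcting step; the target signal (e.g. a tour-quality value Q/L) and the step size `mu` below are assumptions for illustration only:

```python
def lms_pheromone_update(tau, desired, mu=0.1):
    """One LMS-style update step: move the pheromone level toward a
    desired value in proportion to the error, tau <- tau + mu*(d - tau)."""
    return tau + mu * (desired - tau)

# An edge's pheromone level adapting toward a hypothetical quality signal
tau, desired = 1.0, 4.0
for _ in range(100):
    tau = lms_pheromone_update(tau, desired)
print(round(tau, 3))  # converges toward 4.0
```

The geometric decay of the error (by a factor of 1 − mu per step) is the adaptive-filter behavior the abstract appeals to.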

  1. Effect of Causal Stories in Solving Mathematical Story Problems

    ERIC Educational Resources Information Center

    Smith, Glenn Gordon; Gerretson, Helen; Olkun, Sinan; Joutsenlahti, Jorma

    2010-01-01

    This study investigated whether infusing "causal" story elements into mathematical word problems improves student performance. In one experiment in the USA and a second in the USA, Finland, and Turkey, undergraduate elementary education majors worked word problems in three formats: 1) standard (minimal verbiage), 2) potential causation…

  2. [The present and future state of minimized extracorporeal circulation].

    PubMed

    Meng, Fan; Yang, Ming

    2013-05-01

    Minimized extracorporeal circulation is a new form of extracorporeal circulation that improves on the postoperative side effects of conventional extracorporeal circulation. This paper introduces the principle, characteristics, applications and related research of minimized extracorporeal circulation. To address the problems of systemic inflammatory response syndrome and limited assist time, the article proposes three development directions: system miniaturization and integration, pulsatile blood pumps, and adaptive control via human parameter identification.

  3. Bilinear Factor Matrix Norm Minimization for Robust PCA: Algorithms and Applications.

    PubMed

    Shang, Fanhua; Cheng, James; Liu, Yuanyuan; Luo, Zhi-Quan; Lin, Zhouchen

    2017-09-04

    In low-level vision, the heavy-tailed distributions of corrupted outliers and of the singular values of all channels have proven to be effective priors for many applications, such as background modeling, photometric stereo and image alignment, and they can be well modeled by a hyper-Laplacian. However, the use of such distributions generally leads to challenging non-convex, non-smooth and non-Lipschitz problems, and makes existing algorithms very slow for large-scale applications. Using the analytic solutions to Lp-norm minimization for two specific values of p, i.e., p=1/2 and p=2/3, we propose two novel bilinear factor matrix norm minimization models for robust principal component analysis. We first define the double nuclear norm and Frobenius/nuclear hybrid norm penalties, and then prove that they are in essence the Schatten-1/2 and 2/3 quasi-norms, respectively, which lead to much more tractable and scalable Lipschitz optimization problems. Our experimental analysis shows that both of our methods yield more accurate solutions than original Schatten quasi-norm minimization, even when the number of observations is very limited. Finally, we apply our penalties to various low-level vision problems, e.g., moving object detection, image alignment and inpainting, and show that our methods usually outperform the state-of-the-art methods.

  4. Constrained Total Generalized p-Variation Minimization for Few-View X-Ray Computed Tomography Image Reconstruction.

    PubMed

    Zhang, Hanming; Wang, Linyuan; Yan, Bin; Li, Lei; Cai, Ailong; Hu, Guoen

    2016-01-01

    Total generalized variation (TGV)-based computed tomography (CT) image reconstruction, which utilizes high-order image derivatives, is superior to total variation-based methods in terms of the preservation of edge information and the suppression of unfavorable staircase effects. However, conventional TGV regularization employs l1-based form, which is not the most direct method for maximizing sparsity prior. In this study, we propose a total generalized p-variation (TGpV) regularization model to improve the sparsity exploitation of TGV and offer efficient solutions to few-view CT image reconstruction problems. To solve the nonconvex optimization problem of the TGpV minimization model, we then present an efficient iterative algorithm based on the alternating minimization of augmented Lagrangian function. All of the resulting subproblems decoupled by variable splitting admit explicit solutions by applying alternating minimization method and generalized p-shrinkage mapping. In addition, approximate solutions that can be easily performed and quickly calculated through fast Fourier transform are derived using the proximal point method to reduce the cost of inner subproblems. The accuracy and efficiency of the simulated and real data are qualitatively and quantitatively evaluated to validate the efficiency and feasibility of the proposed method. Overall, the proposed method exhibits reasonable performance and outperforms the original TGV-based method when applied to few-view problems.
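The generalized p-shrinkage mapping mentioned above has a simple closed form (following Chartrand's formulation; this sketch is an illustration, not the paper's exact implementation):

```python
import numpy as np

def p_shrink(y, lam, p):
    """Generalized p-shrinkage mapping (Chartrand):
    sign(y) * max(|y| - lam**(2-p) * |y|**(p-1), 0).
    For p = 1 it reduces to ordinary soft thresholding."""
    y = np.asarray(y, float)
    mag = np.abs(y)
    # 0**(p-1) diverges for p < 1; the max(..., 0) clamp handles it, so
    # suppress the harmless floating-point warnings.
    with np.errstate(divide="ignore", invalid="ignore"):
        shrunk = np.maximum(mag - lam**(2.0 - p) * mag**(p - 1.0), 0.0)
    return np.sign(y) * shrunk

y = np.array([-2.0, -0.3, 0.0, 0.5, 3.0])
print(p_shrink(y, lam=0.4, p=1.0))  # soft thresholding at 0.4
print(p_shrink(y, lam=0.4, p=0.5))  # stronger shrinkage of small entries
```

Applying this mapping to the split variables is what makes each inner subproblem of the alternating minimization explicit.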

  5. Optimal design method to minimize users' thinking mapping load in human-machine interactions.

    PubMed

    Huang, Yanqun; Li, Xu; Zhang, Jie

    2015-01-01

    The discrepancy between human cognition and machine requirements/behaviors usually results in a serious mental thinking-mapping load, or even disasters, in product operation. It is important to help people avoid human-machine interaction confusion and difficulty in today's society of mental work. The goal is to improve the usability of a product and minimize the user's thinking-mapping and interpreting load in human-machine interactions. An optimal human-machine interface design method is introduced, based on minimizing the mental load of the thinking-mapping process between users' intentions and the affordances of the product interface states. By analyzing the users' thinking-mapping problem, an operating action model is constructed. According to human natural instincts and acquired knowledge, an expected ideal design with minimized thinking load is uniquely determined first. Then, creative alternatives, in terms of the way humans obtain operational information, are provided as digital interface state datasets. Finally, using cluster analysis, an optimum solution is picked from the alternatives by calculating the distances between the two datasets. Considering multiple factors to minimize users' thinking-mapping loads, a solution nearest to the ideal value is found in a human-car interaction design case. The clustering results show the method's effectiveness in finding an optimum solution to the mental-load-minimization problem in human-machine interaction design.
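The final distance-to-ideal selection step can be sketched as follows; the feature vectors are hypothetical and the paper's full cluster-analysis machinery is omitted:

```python
import math

def nearest_to_ideal(ideal, alternatives):
    """Pick the index of the alternative whose feature vector has the
    smallest Euclidean distance to the ideal (minimum-load) design."""
    def dist(v):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(v, ideal)))
    return min(range(len(alternatives)), key=lambda i: dist(alternatives[i]))

# Hypothetical load scores per interface state (lower = closer to instinct)
ideal = [0.0, 0.0, 0.0]
alts = [[0.9, 0.4, 0.7],   # alternative 0
        [0.2, 0.1, 0.3],   # alternative 1
        [0.5, 0.6, 0.2]]   # alternative 2
print(nearest_to_ideal(ideal, alts))  # -> 1
```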

  6. Geopolymer for protective coating of transportation infrastructures.

    DOT National Transportation Integrated Search

    1998-09-01

    Surface deterioration of exposed transportation structures is a major problem. In most cases, surface deterioration could lead to structural problems because of the loss of cover and ensuing reinforcement corrosion. To minimize the deterioration,...

  7. Continued research on selected parameters to minimize community annoyance from airplane noise

    NASA Technical Reports Server (NTRS)

    Frair, L.

    1981-01-01

    Results from continued research on selected parameters to minimize community annoyance from airport noise are reported. First, a review of the initial work on this problem is presented. Then the research focus is expanded by considering multiobjective optimization approaches for this problem. A multiobjective optimization algorithm review from the open literature is presented. This is followed by the multiobjective mathematical formulation for the problem of interest. A discussion of the appropriate solution algorithm for the multiobjective formulation is conducted. Alternate formulations and associated solution algorithms are discussed and evaluated for this airport noise problem. Selected solution algorithms that have been implemented are then used to produce computational results for example airports. These computations involved finding the optimal operating scenario for a moderate size airport and a series of sensitivity analyses for a smaller example airport.

  8. A review of the generalized uncertainty principle.

    PubMed

    Tawfik, Abdel Nasser; Diab, Abdel Magied

    2015-12-01

    Based on string theory, black hole physics, doubly special relativity and some 'thought' experiments, minimal distance and/or maximum momentum are proposed. As alternatives to the generalized uncertainty principle (GUP), the modified dispersion relation, the space noncommutativity, the Lorentz invariance violation, and the quantum-gravity-induced birefringence effects are summarized. The origin of minimal measurable quantities and the different GUP approaches are reviewed and the corresponding observations are analysed. Bounds on the GUP parameter are discussed and implemented in the understanding of recent PLANCK observations of cosmic inflation. The higher-order GUP approaches predict minimal length uncertainty with and without maximum momenta. Possible arguments against the GUP are discussed; for instance, the concern about its compatibility with the equivalence principles, the universality of gravitational redshift and the free fall and law of reciprocal action are addressed.

  9. A bottom-up approach to the strong CP problem

    NASA Astrophysics Data System (ADS)

    Diaz-Cruz, J. L.; Hollik, W. G.; Saldana-Salazar, U. J.

    2018-05-01

    The strong CP problem is one of many puzzles in the theoretical description of elementary particle physics that still lacks an explanation. While top-down solutions to that problem usually comprise new symmetries or fields or both, we want to present a rather bottom-up perspective. The main problem seems to be how to achieve small CP violation in the strong interactions despite the large CP violation in weak interactions. In this paper, we show that with minimal assumptions on the structure of mass (Yukawa) matrices, they do not contribute to the strong CP problem and thus we can provide a pathway to a solution of the strong CP problem within the structures of the Standard Model and no extension at the electroweak scale is needed. However, to address the flavor puzzle, models based on minimal SU(3) flavor groups leading to the proposed flavor matrices are favored. Though we refrain from an explicit UV completion of the Standard Model, we provide a simple requirement for such models not to show a strong CP problem by construction.

  10. Null Angular Momentum and Weak KAM Solutions of the Newtonian N-Body Problem

    NASA Astrophysics Data System (ADS)

    Percino-Figueroa, Boris A.

    2017-08-01

    In [Arch. Ration. Mech. Anal. 213 (2014), 981-991] it has been proved that in the Newtonian N-body problem, given a minimal central configuration a and an arbitrary configuration x, there exists a completely parabolic orbit starting on x and asymptotic to the homothetic parabolic motion of a, furthermore such an orbit is a free time minimizer of the action functional. In this article we extend this result in abundance of completely parabolic motions by proving that under the same hypothesis it is possible to get that the completely parabolic motion starting at x has zero angular momentum. We achieve this by characterizing the rotation invariant weak KAM solutions as those defining a lamination on the configuration space by free time minimizers with zero angular momentum.

  11. Joint Geophysical Inversion With Multi-Objective Global Optimization Methods

    NASA Astrophysics Data System (ADS)

    Lelievre, P. G.; Bijani, R.; Farquharson, C. G.

    2015-12-01

    Pareto multi-objective global optimization (PMOGO) methods generate a suite of solutions that minimize multiple objectives (e.g. data misfits and regularization terms) in a Pareto-optimal sense. Providing a suite of models, as opposed to a single model that minimizes a weighted sum of objectives, allows a more complete assessment of the possibilities and avoids the often difficult choice of how to weight each objective. We are applying PMOGO methods to three classes of inverse problems. The first class are standard mesh-based problems where the physical property values in each cell are treated as continuous variables. The second class of problems are also mesh-based but cells can only take discrete physical property values corresponding to known or assumed rock units. In the third class we consider a fundamentally different type of inversion in which a model comprises wireframe surfaces representing contacts between rock units; the physical properties of each rock unit remain fixed while the inversion controls the position of the contact surfaces via control nodes. This third class of problem is essentially a geometry inversion, which can be used to recover the unknown geometry of a target body or to investigate the viability of a proposed Earth model. Joint inversion is greatly simplified for the latter two problem classes because no additional mathematical coupling measure is required in the objective function. PMOGO methods can solve numerically complicated problems that could not be solved with standard descent-based local minimization methods. This includes the latter two classes of problems mentioned above. There are significant increases in the computational requirements when PMOGO methods are used but these can be ameliorated using parallelization and problem dimension reduction strategies.
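The Pareto-optimality notion underlying PMOGO can be illustrated with a brute-force non-dominated filter (a sketch for small candidate sets, not the global optimizer used in the study):

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization):
    a is no worse in every objective and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Return the non-dominated subset of a list of objective vectors."""
    return [p for p in points if not any(dominates(q, p) for q in points)]

# Hypothetical (data misfit, regularization) pairs for candidate models
candidates = [(1.0, 5.0), (2.0, 2.0), (3.0, 1.0), (2.5, 2.5), (4.0, 4.0)]
print(pareto_front(candidates))  # -> [(1.0, 5.0), (2.0, 2.0), (3.0, 1.0)]
```

Returning this whole front, rather than the single minimizer of a weighted sum, is what lets the user weigh misfit against regularization after the fact.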

  12. Transformation of general binary MRF minimization to the first-order case.

    PubMed

    Ishikawa, Hiroshi

    2011-06-01

    We introduce a transformation of a general higher-order Markov random field with binary labels into a first-order one that has the same minima as the original. Moreover, we formalize a framework for approximately minimizing higher-order multi-label MRF energies that combines the new reduction with the fusion-move and QPBO algorithms. While many computer vision problems today are formulated as energy minimization problems, they have mostly been limited to using first-order energies, which consist of unary and pairwise clique potentials, with a few exceptions that consider triples. This is because of the lack of efficient algorithms to optimize energies with higher-order interactions. Our algorithm challenges this restriction that limits the representational power of the models so that higher-order energies can be used to capture the rich statistics of natural scenes. We also show that some minimization methods can be considered special cases of the present framework, and we compare the new method experimentally with other such techniques.
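A standard building block of such reductions, for a third-order term with negative coefficient, replaces a·x1·x2·x3 (a < 0) by the minimum over an auxiliary binary variable w of a·w·(x1 + x2 + x3 − 2); the brute-force check below illustrates this identity (the paper's general transformation covers more cases than this one):

```python
from itertools import product

def cubic(a, x1, x2, x3):
    """The original third-order term a * x1 * x2 * x3."""
    return a * x1 * x2 * x3

def reduced(a, x1, x2, x3):
    """First-order substitute (unary and pairwise in the auxiliary w):
    min over the binary w of a * w * (x1 + x2 + x3 - 2).
    Valid for negative coefficients a."""
    return min(a * w * (x1 + x2 + x3 - 2) for w in (0, 1))

a = -3.0  # the reduction holds for a < 0
for x in product((0, 1), repeat=3):
    assert cubic(a, *x) == reduced(a, *x)
print("reduction matches on all 8 assignments")
```

Because the substitute is at most quadratic in the variables, the result can be handed to standard pairwise MRF optimizers such as QPBO.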

  13. Safer Ski Jumps: Design of Landing Surfaces and Clothoidal In-Run Transitions

    DTIC Science & Technology

    2010-06-01

    [Abstract not available; the extracted fragments are table-of-contents and figure-list entries: "Minimization", "Determination of Skier Velocity at Takeoff", "Spiral Flatness, Clothoid Length, and Angle From Horizontal", "Free Body Diagram of a Skier in Clothoidal Transition", "Ski jump in Einsiedeln, Switzerland", and "A skier performing…".]

  14. 24 CFR 983.254 - Vacancies.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... vacancy (and notwithstanding the reasonable good faith efforts of the PHA to fill such vacancies), the PHA... on the PHA waiting list referred by the PHA. (3) The PHA and the owner must make reasonable good faith efforts to minimize the likelihood and length of any vacancy. (b) Reducing number of contract...

  15. How to Handle Drop-in Visitors.

    ERIC Educational Resources Information Center

    Partin, Ronald L.

    1988-01-01

    Although interruptions are an unavoidable part of the principal's job, a completely open-door policy for drop-in visitors could divert attention from planning and other priorities. This article suggests ways for principals to minimize the number of visitors and the length of visits, including keeping people standing, providing uncomfortable…

  16. 46 CFR 28.265 - Emergency instructions.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ...) Describe your vessel: (Insert length, color, hull type, trim, masts, power, and any additional... the vessel. (ii) Keep bilges dry to prevent loss of stability due to water in bilges. Use power driven... vessel to minimize the effect of wind on the fire. (vi) If unable to control the fire, immediately notify...

  17. Finite element procedures for time-dependent convection-diffusion-reaction systems

    NASA Technical Reports Server (NTRS)

    Tezduyar, T. E.; Park, Y. J.; Deans, H. A.

    1988-01-01

    New finite element procedures based on the streamline-upwind/Petrov-Galerkin formulations are developed for time-dependent convection-diffusion-reaction equations. These procedures minimize spurious oscillations for convection-dominated and reaction-dominated problems. The results obtained for representative numerical examples are accurate with minimal oscillations. As a special application problem, the single-well chemical tracer test (a procedure for measuring oil remaining in a depleted field) is simulated numerically. The results show the importance of temperature effects on the interpreted value of residual oil saturation from such tests.
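For the 1-D model problem, the streamline-upwind/Petrov-Galerkin stabilization parameter has a well-known closed form (a textbook formula, not necessarily the exact parameter used in the paper):

```python
import math

def supg_tau(a, kappa, h):
    """Standard 1-D SUPG stabilization parameter:
    tau = h/(2|a|) * (coth(Pe) - 1/Pe),  Pe = |a| h / (2 kappa).
    It vanishes in the diffusion-dominated limit and tends to h/(2|a|)
    when convection dominates, which is what damps the spurious
    oscillations of the plain Galerkin method."""
    pe = abs(a) * h / (2.0 * kappa)
    return h / (2.0 * abs(a)) * (1.0 / math.tanh(pe) - 1.0 / pe)

h = 0.1
print(supg_tau(a=1.0, kappa=1e-6, h=h))  # approaches h/2 = 0.05
print(supg_tau(a=1.0, kappa=1.0, h=h))   # small: little upwinding needed
```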

  18. Drag Minimization for Wings and Bodies in Supersonic Flow

    NASA Technical Reports Server (NTRS)

    Heaslet, Max A; Fuller, Franklyn B

    1958-01-01

    The minimization of inviscid fluid drag is studied for aerodynamic shapes satisfying the conditions of linearized theory, and subject to imposed constraints on lift, pitching moment, base area, or volume. The problem is transformed to one of determining two-dimensional potential flows satisfying either Laplace's or Poisson's equations with boundary values fixed by the imposed conditions. A general method for determining integral relations between perturbation velocity components is developed. This analysis is not restricted in application to optimum cases; it may be used for any supersonic wing problem.

  19. Efficient data communication protocols for wireless networks

    NASA Astrophysics Data System (ADS)

    Zeydan, Engin

    In this dissertation, efficient decentralized algorithms are investigated for cost minimization problems in wireless networks. For wireless sensor networks, we investigate both the reduction in energy consumption and throughput maximization separately, using multi-hop data aggregation for correlated data. The proposed algorithms exploit data redundancy using a game theoretic framework. For energy minimization, routes are chosen to minimize the total energy expended by the network using best response dynamics to local data. The cost function used in routing takes into account distance, interference and in-network data aggregation. The proposed energy-efficient correlation-aware routing algorithm significantly reduces the energy consumption in the network and converges iteratively in a finite number of steps. For throughput maximization, we consider both the interference distribution across the network and the correlation between forwarded data when establishing routes. Nodes along each route are chosen to minimize the interference impact in their neighborhood and to maximize the in-network data aggregation. The resulting network topology maximizes the global network throughput, and the algorithm is guaranteed to converge in a finite number of steps using best response dynamics. For multiple antenna wireless ad-hoc networks, we present distributed cooperative and regret-matching based learning schemes for the joint transmit beamformer and power level selection problem for nodes operating in a multi-user interference environment. Total network transmit power is minimized while ensuring a constant received signal-to-interference and noise ratio at each receiver. In the cooperative and regret-matching based power minimization algorithms, transmit beamformers are selected from a predefined codebook to minimize the total power. 
By selecting transmit beamformers judiciously and performing power adaptation, the cooperative algorithm is shown to converge to a pure-strategy Nash equilibrium with high probability over the iterations in the interference-impaired network. The regret-matching learning algorithm, on the other hand, is noncooperative and requires a minimal amount of overhead. The proposed cooperative and regret-matching based distributed algorithms are also compared with centralized solutions through simulation results.
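The regret-matching update can be sketched as follows; the action set and payoffs are hypothetical stand-ins for the dissertation's beamformer codebook and utilities:

```python
import random

def regret_matching_policy(cum_regret):
    """Map cumulative regrets to a mixed strategy: play each action with
    probability proportional to its positive regret; uniform if none."""
    pos = [max(r, 0.0) for r in cum_regret]
    s = sum(pos)
    n = len(cum_regret)
    return [p / s for p in pos] if s > 0 else [1.0 / n] * n

def update_regret(cum_regret, payoffs, played):
    """Accumulate regret for not having played each alternative action."""
    return [r + (u - payoffs[played]) for r, u in zip(cum_regret, payoffs)]

random.seed(0)
regret = [0.0, 0.0, 0.0]
payoffs = [1.0, 3.0, 2.0]  # hypothetical per-action utilities
for _ in range(50):
    probs = regret_matching_policy(regret)
    played = random.choices(range(3), weights=probs)[0]
    regret = update_regret(regret, payoffs, played)
print(regret_matching_policy(regret))  # typically concentrates on action 1
```

In the networking setting each node runs this loop locally, which is why the scheme needs so little coordination overhead.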

  20. Suspended few-layer graphene beam electromechanical switch with abrupt on-off characteristics and minimal leakage current

    NASA Astrophysics Data System (ADS)

    Kim, Sung Min; Song, Emil B.; Lee, Sejoon; Seo, Sunae; Seo, David H.; Hwang, Yongha; Candler, R.; Wang, Kang L.

    2011-07-01

    Suspended few-layer graphene beam electro-mechanical switches (SGSs) with a 0.15 μm air-gap are fabricated and electrically characterized. The SGS shows abrupt on/off current characteristics with minimal off-current. In conjunction with the narrow air-gap, the outstanding mechanical properties of graphene enable the mechanical switch to operate at a very low pull-in voltage (VPI) of 1.85 V, which is compatible with conventional complementary metal-oxide-semiconductor (CMOS) circuit requirements. In addition, we show that the pull-in voltage exhibits an inverse dependence on the beam length.

  1. Evaluation of advanced lift concepts and potential fuel conservation for short-haul aircraft

    NASA Technical Reports Server (NTRS)

    Sweet, H. S.; Renshaw, J. H.; Bowden, M. K.

    1975-01-01

    The effect of different field lengths, cruise requirements, noise levels, and engine cycle characteristics on minimizing fuel consumption and minimizing operating cost at high fuel prices was evaluated for some advanced short-haul aircraft. The conceptual aircraft were designed for 148 passengers using the upper surface-internally blown jet flap, the augmentor wing, and the mechanical flap lift systems. Advanced conceptual STOL engines were evaluated as well as a near-term turbofan and turboprop engine. Emphasis was given to designs meeting noise levels equivalent to 95-100 EPNdB at 152 m (500 ft) sideline.

  2. Review of robot-assisted partial nephrectomy in modern practice

    PubMed Central

    Weaver, John; Benway, Brian M.

    2015-01-01

    Partial nephrectomy (PN) is currently the standard treatment for T1 renal tumors. Minimally invasive PN offers decreased blood loss, shorter length of stay, rapid convalescence, and improved cosmesis. Due to the challenges inherent in laparoscopic partial nephrectomy, its dissemination has been stifled. Robot-assisted partial nephrectomy (RAPN) offers an intuitive platform to perform minimally invasive PN. It is one of the fastest growing robotic procedures among all surgical subspecialties. RAPN continues to improve upon the oncological and functional outcomes of renal tumor extirpative therapy. Herein, we describe the surgical technique, outcomes, and complications of RAPN. PMID:28326257

  3. Local Risk-Minimization for Defaultable Claims with Recovery Process

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Biagini, Francesca, E-mail: biagini@mathematik.uni-muenchen.de; Cretarola, Alessandra, E-mail: alessandra.cretarola@dmi.unipg.it

    We study the local risk-minimization approach for defaultable claims with random recovery at default time, seen as payment streams on the random interval [0, τ ∧ T], where T denotes the fixed time-horizon. We find the pseudo-locally risk-minimizing strategy in the case when the agent's information takes into account the possibility of a default event (local risk-minimization with G-strategies) and we provide an application in the case of a corporate bond. We also discuss the problem of finding a pseudo-locally risk-minimizing strategy if we suppose the agent obtains her information only by observing the non-defaultable assets.

  4. Length of stay after vaginal birth: sociodemographic and readiness-for-discharge factors.

    PubMed

    Weiss, Marianne; Ryan, Polly; Lokken, Lisa; Nelson, Magdalen

    2004-06-01

    The impact of reductions in postpartum length of stay has been widely reported, but factors influencing length of hospital stay after vaginal birth have received less attention. The study purpose was to compare the sociodemographic characteristics and readiness for discharge of new mothers and their newborns at 3 discharge time intervals, and to determine which variables were associated with postpartum length of stay. The study sample comprised 1,192 mothers who were discharged within 2 postpartum days after uncomplicated vaginal birth at a tertiary perinatal center in the midwestern United States. The sample was divided into 3 postpartum length-of-stay groups: group 1 (18-30 hr), group 2 (31-42 hr), and group 3 (43-54 hr). Sociodemographic and readiness-for-discharge data were collected by self-report and from a computerized hospital information system. Measures of readiness for discharge included perceived readiness (single item and Readiness for Discharge After Birth Scale), documented maternal and neonatal clinical problems, and feeding method. Compared with other groups, the longest length-of-stay group was older; of higher socioeconomic status and education; and with more primiparous, breastfeeding, white, married mothers who were living with the baby's father, had adequate home help, and had a private payor source. This group also reported greater readiness for discharge, but their newborns had more documented clinical problems during the postbirth hospitalization. In logistic regression modeling, earlier discharge was associated with young age, multiparity, public payor source, low socioeconomic status, lack of readiness for discharge, bottle-feeding, and absence of a neonatal clinical problem. Sociodemographic characteristics and readiness for discharge (clinical and perceived) were associated with length of postpartum hospital stay. 
Length of stay is an outcome of a complex interface between patient, provider, and payor influences on discharge timing that requires additional study. Including perceived readiness for discharge in clinical discharge criteria will add an important dimension to assessment of readiness for discharge after birth.

  5. Synthesis of positively charged hybrid PHMB-stabilized silver nanoparticles: the search for a new type of active substances used in plant protection products

    NASA Astrophysics Data System (ADS)

    Krutyakov, Yurii A.; Kudrinsky, Alexey A.; Gusev, Alexander A.; Zakharova, Olga V.; Klimov, Alexey I.; Yapryntsev, Alexey D.; Zherebin, Pavel M.; Shapoval, Olga A.; Lisichkin, Georgii V.

    2017-07-01

    Modern agriculture calls for a decrease in pesticide application, particularly in order to decrease the negative impact on the environment. Therefore the development of new active substances and plant protection products (PPP) to minimize the chemical load on ecosystems is a very important problem. Substances based on silver nanoparticles are a promising solution to this problem because, in correct doses, such products significantly increase yields and decrease crop diseases while displaying low toxicity to humans and animals. In this paper we propose, for the first time, the application of polymeric guanidine compounds with varying chain lengths (from 10 to 130 elementary links) for the design and synthesis of modified silver nanoparticles to be used as the basis of a new generation of PPP. Colloidal solutions of nanocrystalline silver containing 0.5 g·l⁻¹ of silver and 0.01–0.4 g·l⁻¹ of polyhexamethylene biguanide hydrochloride (PHMB) were obtained by reduction of silver nitrate with sodium borohydride in the presence of PHMB. The field experiment has shown that silver-containing solutions have a positive effect on the agronomic properties of potato, wheat and apple. An increase in the activity of antioxidant-system enzymes such as peroxidase and catalase has also been registered in the tissues of plants treated with nanosilver.

  6. Perspectives of Disciplinary Problems and Practices in Elementary Schools

    ERIC Educational Resources Information Center

    Huger Marsh, Darlene P.

    2012-01-01

    Ill-discipline in public schools predates compulsory education in the United States. Disciplinary policies and laws enacted to combat the problem have met with minimal success. Research and recommendations have generally focused on the indiscipline problems ubiquitous in intermediate, junior and senior high schools. However, similar misbehaviors…

  7. Minimalism as a Guiding Principle: Linking Mathematical Learning to Everyday Knowledge

    ERIC Educational Resources Information Center

    Inoue, Noriyuki

    2008-01-01

    Studies report that students often fail to consider familiar aspects of reality in solving mathematical word problems. This study explored how different features of mathematical problems influence the way that undergraduate students employ realistic considerations in mathematical problem solving. Incorporating familiar contents in the word…

  8. Optomechanical design of a buckling cavity in a low-cost high-performance ferruleless field-installable single-mode fiber connector

    NASA Astrophysics Data System (ADS)

    Ebraert, Evert; Van Erps, Jürgen; Beri, Stefano; Watté, Jan; Thienpont, Hugo

    2014-10-01

    To boost the deployment of fiber-to-the-home networks in order to meet the ever-increasing demand for bandwidth, there is a strong need for single-mode fiber (SMF) connectors which combine low insertion loss with field installability. Shifting from ferrule-based to ferruleless connectors can reduce average insertion losses appreciably and minimize modal noise interference. We propose a ferruleless connector and adaptor in which physical contact between two inline fibers is ensured by at least one fiber being in a buckled state. To this end, we design a buckling cavity in which the SMF can buckle in a controlled way to ensure good optical performance as well as mechanical stability. This design is based on both mechanical and optical considerations. Finite element analysis suggests that mechanically a minimal buckling cavity length of 17 mm is required, while the height of the cavity should be chosen such that the buckled SMF is not mechanically confined to ensure buckling in a first-order mode. The optical bending loss in the buckled SMF is calculated using a fully vectorial mode solver, showing that a minimal buckling cavity length of 20 mm is necessary to keep the excess optical loss from bending below 0.1 dB. Both our optical and mechanical simulation results are experimentally verified.
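
    As a rough illustration of the mechanical side of the cavity design, the critical load of the buckled fiber can be estimated with Euler's column formula. The sketch below is a first-order model assuming a pinned-pinned solid silica column; the 125 µm cladding diameter and E ≈ 70 GPa are typical silica values assumed here, not parameters taken from the paper:

    ```python
    import math

    def euler_buckling_force(length_m, d_m=125e-6, E=70e9):
        """Euler critical load P_cr = pi^2 * E * I / L^2 for a pinned-pinned
        solid circular column, as a crude model of the buckled SMF in the
        connector cavity (illustrative material values, not the paper's)."""
        I = math.pi * d_m ** 4 / 64.0  # area moment of inertia, solid circle
        return math.pi ** 2 * E * I / length_m ** 2

    # The critical load drops quadratically with cavity length, so a longer
    # cavity lets the fiber buckle under a gentler axial contact force.
    f17 = euler_buckling_force(0.017)  # ~tens of mN at a 17 mm cavity
    f20 = euler_buckling_force(0.020)
    ```
    
    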

  9. Digitizing an Analog Radiography Teaching File Under Time Constraint: Trade-Offs in Efficiency and Image Quality.

    PubMed

    Loehfelm, Thomas W; Prater, Adam B; Debebe, Tequam; Sekhar, Aarti K

    2017-02-01

    We digitized the radiography teaching file at Black Lion Hospital (Addis Ababa, Ethiopia) during a recent trip, using a standard digital camera and a fluorescent light box. Our goal was to photograph every radiograph in the existing library while optimizing the final image size to the maximum resolution of a high quality tablet computer, preserving the contrast resolution of the radiographs, and minimizing total library file size. A secondary important goal was to minimize the cost and time required to take and process the images. Three workers were able to efficiently remove the radiographs from their storage folders, hang them on the light box, operate the camera, catalog the image, and repack the radiographs back to the storage folder. Zoom, focal length, and film speed were fixed, while aperture and shutter speed were manually adjusted for each image, allowing for efficiency and flexibility in image acquisition. Keeping zoom and focal length fixed, which kept the view box at the same relative position in all of the images acquired during a single photography session, allowed unused space to be batch-cropped, saving considerable time in post-processing, at the expense of final image resolution. We present an analysis of the trade-offs in workflow efficiency and final image quality, and demonstrate that a few people with minimal equipment can efficiently digitize a teaching file library.

  10. Ergonomic Training for Tomorrow's Office.

    ERIC Educational Resources Information Center

    Gross, Clifford M.; Chapnik, Elissa Beth

    1987-01-01

    The authors focus on issues related to the continual use of video display terminals in the office, including safety and health regulations, potential health problems, and the role of training in minimizing work-related health problems. (CH)

  11. Poverty-Exploitation-Alienation.

    ERIC Educational Resources Information Center

    Bronfenbrenner, Martin

    1980-01-01

    Illustrates how knowledge derived from the discipline of economics can be used to help shed light on social problems such as poverty, exploitation, and alienation, and can help decision makers form policy to minimize these and similar problems. (DB)

  12. Generation of High-Power High-Intensity Short X-Ray Free-Electron-Laser Pulses

    DOE PAGES

    Guetg, Marc W.; Lutman, Alberto A.; Ding, Yuantao; ...

    2018-01-03

    X-ray free-electron lasers combine a high pulse power, short pulse length, narrow bandwidth, and high degree of transverse coherence. Any increase in the photon pulse power, while shortening the pulse length, will further push the frontier on several key x-ray free-electron laser applications including single-molecule imaging and novel nonlinear x-ray methods. This Letter shows experimental results at the Linac Coherent Light Source raising its maximum power to more than 300% of the current limit while reducing the photon pulse length to 10 fs. This was achieved by minimizing residual transverse-longitudinal centroid beam offsets and beam yaw, and by correcting the dispersion when operating over 6 kA peak current with a longitudinally shaped beam.

  13. Generation of High-Power High-Intensity Short X-Ray Free-Electron-Laser Pulses

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Guetg, Marc W.; Lutman, Alberto A.; Ding, Yuantao

    X-ray free-electron lasers combine a high pulse power, short pulse length, narrow bandwidth, and high degree of transverse coherence. Any increase in the photon pulse power, while shortening the pulse length, will further push the frontier on several key x-ray free-electron laser applications including single-molecule imaging and novel nonlinear x-ray methods. This Letter shows experimental results at the Linac Coherent Light Source raising its maximum power to more than 300% of the current limit while reducing the photon pulse length to 10 fs. This was achieved by minimizing residual transverse-longitudinal centroid beam offsets and beam yaw, and by correcting the dispersion when operating over 6 kA peak current with a longitudinally shaped beam.

  14. Minimal Interventions in the Teaching of Mathematics

    ERIC Educational Resources Information Center

    Foster, Colin

    2014-01-01

    This paper addresses ways in which mathematics pedagogy can benefit from insights gleaned from counselling. Person-centred counselling stresses the value of genuineness, warm empathetic listening and minimal intervention to support people in solving their own problems and developing increased autonomy. Such an approach contrasts starkly with the…

  15. How accurate is our clinical prediction of "minimal prostate cancer"?

    PubMed

    Leibovici, Dan; Shikanov, Sergey; Gofrit, Ofer N; Zagaja, Gregory P; Shilo, Yaniv; Shalhav, Arieh L

    2013-07-01

    Recommendations for active surveillance versus immediate treatment for low risk prostate cancer are based on biopsy and clinical data, assuming that a low volume of well-differentiated carcinoma will be associated with a low progression risk. However, the accuracy of clinical prediction of minimal prostate cancer (MPC) is unclear. To define preoperative predictors for MPC in prostatectomy specimens and to examine the accuracy of such prediction. Data collected on 1526 consecutive radical prostatectomy patients operated in a single center between 2003 and 2008 included: age, body mass index, preoperative prostate-specific antigen level, biopsy Gleason score, clinical stage, percentage of positive biopsy cores, and maximal core length (MCL) involvement. MPC was defined as < 5% of prostate volume involvement with organ-confined Gleason score ≤ 6. Univariate and multivariate logistic regression analyses were used to define independent predictors of minimal disease. Classification and Regression Tree (CART) analysis was used to define cutoff values for the predictors and measure the accuracy of prediction. MPC was found in 241 patients (15.8%). Clinical stage, biopsy Gleason score, percent of positive biopsy cores, and maximal involved core length were associated with minimal disease (OR 0.42, 0.1, 0.92, and 0.9, respectively). Independent predictors of MPC included: biopsy Gleason score, percent of positive cores and MCL (OR 0.21, 0.95 and 0.95, respectively). CART showed that when the MCL exceeded 11.5%, the likelihood of MPC was 3.8%. Conversely, when applying the most favorable preoperative conditions (Gleason ≤ 6, < 20% positive cores, MCL ≤ 11.5%) the chance of minimal disease was 41%. Biopsy Gleason score, the percent of positive cores and MCL are independently associated with MPC. While preoperative prediction of significant prostate cancer was accurate, clinical prediction of MPC was incorrect 59% of the time.
Caution is necessary when implementing clinical data as selection criteria for active surveillance.

  16. [A study of proximal humerus fractures using closed reduction and percutaneous minimally invasive fixation].

    PubMed

    Liu, Yin-wen; Kuang, Yong; Gu, Xin-feng; Zheng, Yu-xin; Li, Zhi-qiang; Wei, Xiao-en; Lu, Wei-da; Zhan, Hong-sheng; Shi, Yin-yu

    2011-11-01

    To investigate the clinical effects of closed reduction and percutaneous minimally invasive fixation in the treatment of proximal humerus fractures. From April 2008 to March 2010, 28 patients with proximal humerus fractures were treated with closed reduction and percutaneous minimally invasive fixation. There were 21 males and 7 females, ranging in age from 22 to 78 years, with an average of 42.6 years. The mean time from injury to operation was 1.7 days. Nineteen cases were caused by falls and 9 by traffic accidents. The main clinical manifestations were swelling, pain and limited mobility of the shoulder. According to the Neer classification, there were 17 two-part fractures and 11 three-part fractures. A locking proximal humerus plate was used for minimally invasive fixation through the deltoid muscle below the acromion. Operating time, blood loss, incision length and the Constant-Murley score were used to evaluate the therapeutic effects. The mean operating time was 40 min, the mean blood loss was 110 ml, and the mean incision length was about 5.6 cm. Postoperative X-rays showed excellent reduction, with the plate and screws successfully placed. Twenty-eight patients were followed up for 6 to 24 months (mean 14.2 months). The healing time ranged from 6 to 8 weeks and all incisions healed primarily. There were no cases of humeral head necrosis; 24 cases had no shoulder pain and 4 had occasional shoulder pain. All patients could carry out activities of daily living. The mean Constant-Murley score was 91.0 +/- 5.8; 24 cases achieved an excellent result, 3 good and 1 fair. Closed reduction and percutaneous minimally invasive fixation not only reduces surgical invasiveness but also allows early functional activity. It has the advantages of being less invasive, providing stable fixation and causing less damage to the blood supply.

  17. The maximum work principle regarded as a consequence of an optimization problem based on mechanical virtual power principle and application of constructal theory

    NASA Astrophysics Data System (ADS)

    Gavrus, Adinel

    2017-10-01

    This paper proposes to show that the maximum work principle used in the theory of continuum plasticity can be regarded as a consequence of an optimization problem based on the constructal theory of Prof. Adrian Bejan. Thermodynamics defines the conservation of energy and the irreversibility of the evolution of natural systems. From a mechanical point of view, the first law permits the definition of the momentum balance equation, i.e. the virtual power principle, while the second explains the tendency of all currents to flow from high to low values. According to the constructal law, every finite-size system evolves toward configurations that flow more and more easily over time, distributing imperfections so as to maximize entropy and minimize losses or dissipations. During a material forming process, applying the principles of constructal theory leads to the conclusion that, under external loads, the material flow is the one for which the total dissipated mechanical power (deformation and friction) becomes minimal. From a mechanical point of view it is then possible to characterize the real state of the mechanical variables (stress, strain, strain rate) as the one that, among all virtual non-equilibrium states, minimizes the total dissipated power. A variational minimization problem is thus obtained, and this paper proves in a mathematical sense that, starting from this formulation, the maximum work principle can be recovered in a more general form, together with an equivalent form for the friction term. An application to the plane compression of a plastic material shows the feasibility of the proposed minimization formulation for finding analytical solutions in two cases: one neglecting friction and a second taking the Tresca friction law into account. To validate the proposed formulation, a comparison with classical analytical analyses based on the slice method, upper/lower-bound methods and numerical finite element simulation is also presented.
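
    The variational statement sketched in this abstract can be written schematically as follows; the notation (dissipation functional J, friction factor m̄, shear yield stress k) is ours for illustration, not necessarily the paper's:

    ```latex
    % Among kinematically admissible velocity fields v, the real flow is taken
    % to minimize the total dissipated power (plastic deformation + friction):
    \min_{v\,\text{adm.}} J(v), \qquad
    J(v) = \int_{\Omega} \sigma\bigl(\dot{\varepsilon}(v)\bigr) : \dot{\varepsilon}(v)\,\mathrm{d}V
         + \int_{\Gamma_f} \bar{m}\, k \,\lVert \Delta v_s \rVert \,\mathrm{d}S

    % Stationarity of J at the real state then yields Hill's maximum work
    % principle: for any plastically admissible stress field \sigma^{*},
    \int_{\Omega} \bigl( \sigma - \sigma^{*} \bigr) : \dot{\varepsilon}\,\mathrm{d}V \;\geq\; 0
    ```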

  18. Minimally invasive lumbar foraminotomy.

    PubMed

    Deutsch, Harel

    2013-07-01

    Lumbar radiculopathy is a common problem. Nerve root compression can occur at different places along a nerve root's course, including in the foramina. Minimally invasive approaches allow easier exposure of the lateral foramina and decompression of the nerve root in the foramina. This video demonstrates a minimally invasive approach to decompress the lumbar nerve root in the foramina with a lateral to medial decompression. The video can be found here: http://youtu.be/jqa61HSpzIA.

  19. A prospective study of leukocyte telomere length and risk of phobic anxiety among women

    PubMed Central

    Ramin, Cody; Wang, Wei; Prescott, Jennifer; Rosner, Bernard; Simon, Naomi M.; De Vivo, Immaculata; Okereke, Olivia I.

    2015-01-01

    We prospectively examined the relation of relative telomere lengths (RTLs), a marker of biological aging, to phobic anxiety in later-life. RTLs in peripheral blood leukocytes were measured among 3,194 women in the Nurses’ Health Study who provided blood samples in 1989/90. The Crown-Crisp Phobic Index (CCI, range=0-16) was assessed in 1988 and 2004. Only participants with CCI≤3 (consistent with no meaningful anxiety symptoms) in 1988 were included. We related baseline RTLs to odds ratios (ORs) of incident high phobic anxiety symptoms (CCI≥6). To enhance clinical relevance, we used finite mixture modeling (FMM) to relate baseline RTLs to latent classes of CCI in 2004. Overall, RTLs were not significantly associated with high phobic anxiety symptoms after 16 years of follow-up. However, FMM identified 3 groups of phobic symptoms in later-life: severe, minimal/intermediate, non-anxious. The severe group had non-significantly shorter multivariable-adjusted mean RTLs than the minimal/intermediate and non-anxious groups. Women with shorter telomeres vs. longest telomeres had non-significantly higher likelihood of being in the severe vs. non-anxious group. Overall, there was no significant linear association between RTLs and incident phobic anxiety symptoms. Further work is required to explore potential connections of telomere length and emergence of severe phobic anxiety symptoms during later-life. PMID:26603336

  20. Optimizing parameter of particle damping based on Leidenfrost effect of particle flows

    NASA Astrophysics Data System (ADS)

    Lei, Xiaofei; Wu, Chengjun; Chen, Peng

    2018-05-01

    Particle damping (PD) is strongly nonlinear. Under sufficiently vigorous vibration it delivers excellent damping performance, with the particles filling the cavity entering the Leidenfrost state considered in particle flow theory. To investigate this phenomenon, the damping effect of PD in this state is discussed using a numerical model developed from gas-solid flow principles. The model is then extended to study how the Leidenfrost velocity depends on characteristic parameters of PD such as particle density, diameter, mass packing ratio and diameter-length ratio. The results indicate that particle density and mass packing ratio can drastically improve the damping performance, in contrast to particle diameter and diameter-length ratio, while mass packing ratio and diameter-length ratio can lower the excitation intensity required to reach the Leidenfrost state. To explore engineering applications of the phenomenon, the bound optimization by quadratic approximation (BOBYQA) method is employed to optimize the mass packing ratio of PD so as to minimize the maximum amplitude (MMA) and minimize the total vibration level (MTVL). It is noted that particle damping drastically reduces the vibration amplitude for MMA when the Leidenfrost velocity equals the vibration velocity at the maximum vibration amplitude. For MTVL, a larger mass packing ratio is the best option because the particles remain close to the Leidenfrost state over a relatively wide frequency range.

  1. A prospective study of leukocyte telomere length and risk of phobic anxiety among women.

    PubMed

    Ramin, Cody; Wang, Wei; Prescott, Jennifer; Rosner, Bernard; Simon, Naomi M; De Vivo, Immaculata; Okereke, Olivia I

    2015-12-15

    We prospectively examined the relation of relative telomere lengths (RTLs), a marker of biological aging, to phobic anxiety in later-life. RTLs in peripheral blood leukocytes were measured among 3194 women in the Nurses' Health Study who provided blood samples in 1989/90. The Crown-Crisp Phobic Index (CCI, range=0–16) was assessed in 1988 and 2004. Only participants with CCI≤3 (consistent with no meaningful anxiety symptoms) in 1988 were included. We related baseline RTLs to odds ratios (ORs) of incident high phobic anxiety symptoms (CCI≥6). To enhance clinical relevance, we used finite mixture modeling (FMM) to relate baseline RTLs to latent classes of CCI in 2004. RTLs were not significantly associated with high phobic anxiety symptoms after 16 years of follow-up. However, FMM identified 3 groups of phobic symptoms in later-life: severe, minimal/intermediate, and non-anxious. The severe group had non-significantly shorter multivariable-adjusted mean RTLs than the minimal/intermediate and non-anxious groups. Women with shorter telomeres vs. longest telomeres had non-significantly higher likelihood of being in the severe vs. non-anxious group. Overall, there was no significant association between RTLs and incident phobic anxiety symptoms. Further work is required to explore potential connections of telomere length and emergence of severe phobic anxiety symptoms during later-life.

  2. National proficiency-gain curves for minimally invasive gastrointestinal cancer surgery.

    PubMed

    Mackenzie, H; Markar, S R; Askari, A; Ni, M; Faiz, O; Hanna, G B

    2016-01-01

    Minimal access surgery for gastrointestinal cancer has short-term benefits but is associated with a proficiency-gain curve. The aim of this study was to define national proficiency-gain curves for minimal access colorectal and oesophagogastric surgery, and to determine the impact on clinical outcomes. All adult patients undergoing minimal access oesophageal, colonic and rectal surgery between 2002 and 2012 were identified from the Hospital Episode Statistics database. Proficiency-gain curves were created using risk-adjusted cumulative sum analysis. Change points were identified, and bootstrapping was performed with 1000 iterations to identify a confidence level. The primary outcome was 30-day mortality; secondary outcomes were 90-day mortality, reintervention, conversion and length of hospital stay. Some 1696, 15 008 and 16 701 minimal access oesophageal, rectal and colonic cancer resections were performed during the study period. The change point in the proficiency-gain curve for 30-day mortality for oesophageal, rectal and colonic surgery was 19 (confidence level 98·4 per cent), 20 (99·2 per cent) and three (99·5 per cent) procedures; the mortality rate fell from 4·0 to 2·0 per cent (relative risk reduction (RRR) 0·50, P = 0·033), from 2·1 to 1·2 per cent (RRR 0·43, P < 0·001) and from 2·4 to 1·8 per cent (RRR 0·25, P = 0·058) respectively. The change point in the proficiency-gain curve for reintervention in oesophageal, rectal and colonic resection was 19 (98·1 per cent), 32 (99·5 per cent) and 26 (99·2 per cent) procedures respectively. There were also significant proficiency-gain curves for 90-day mortality, conversion and length of stay. The introduction of minimal access gastrointestinal cancer surgery has been associated with a proficiency-gain curve for mortality and major morbidity at a national level. Unnecessary patient harm should be avoided by appropriate training and monitoring of new surgical techniques. 
© 2015 BJS Society Ltd Published by John Wiley & Sons Ltd.
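
    The cumulative sum analysis behind such proficiency-gain curves can be illustrated, in unadjusted form, with a one-sided log-likelihood-ratio CUSUM on binary outcomes. The rates and threshold below are illustrative choices, and the study itself used a risk-adjusted variant:

    ```python
    import math

    def cusum_signal(outcomes, p0, p1, h):
        """One-sided CUSUM over binary outcomes (1 = adverse event).
        Each outcome adds the log-likelihood ratio for event rate p1 vs the
        baseline p0; the statistic is clamped at zero and signals when it
        exceeds the threshold h. Returns the index of the first signal, or
        None if the sequence never signals (unadjusted sketch only)."""
        s = 0.0
        for i, y in enumerate(outcomes):
            s += math.log(p1 / p0) if y else math.log((1 - p1) / (1 - p0))
            s = max(0.0, s)
            if s > h:
                return i
        return None

    # A run of adverse events accumulates evidence quickly...
    early = cusum_signal([1] * 10, p0=0.02, p1=0.04, h=2.0)
    # ...while uneventful cases never trigger the chart.
    never = cusum_signal([0] * 50, p0=0.02, p1=0.04, h=2.0)
    ```
    
    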

  3. Is the problem list in the eye of the beholder? An exploration of consistency across physicians.

    PubMed

    Krauss, John C; Boonstra, Philip S; Vantsevich, Anna V; Friedman, Charles P

    2016-09-01

    Quantify the variability of patients' problem lists - in terms of the number, type, and ordering of problems - across multiple physicians and assess physicians' criteria for organizing and ranking diagnoses. In an experimental setting, 32 primary care physicians generated and ordered problem lists for three identical complex internal medicine cases expressed as detailed 2- to 4-page abstracts and subsequently expressed their criteria for ordering items in the list. We studied variability in problem list length. We modified a previously validated rank-based similarity measure, with range of zero to one, to quantify agreement between pairs of lists and calculate a single consensus problem list that maximizes agreement with each physician. Physicians' reasoning for the ordering of the problem lists was recorded. Subjects' problem lists were highly variable. The median problem list length was 8 (range: 3-14) for Case A, 10 (range: 4-20) for Case B, and 7 (range: 3-13) for Case C. The median indices of agreement - taking into account the length, content, and order of lists - over all possible physician pairings was 0.479, 0.371, 0.509, for Cases A, B, and C, respectively. The median agreements between the physicians' lists and the consensus list for each case were 0.683, 0.581, and 0.697 (for Cases A, B, and C, respectively). Out of a possible 1488 pairings, 2 lists were identical. Physicians most frequently ranked problem list items based on their acuity and immediate threat to health. The problem list is a physician's mental model of a patient's health status. These mental models were found to vary significantly between physicians, raising questions about whether problem lists created by individual physicians can serve their intended purpose to improve care coordination. © The Author 2016. Published by Oxford University Press on behalf of the American Medical Informatics Association. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
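
    The paper's modified rank-based measure is not reproduced here; a simple average-overlap agreement, which likewise ranges from zero (disjoint lists) to one (identical lists) and is sensitive to both content and order, conveys the idea:

    ```python
    def list_agreement(a, b):
        """Average-overlap agreement between two ranked problem lists: the
        mean, over depths d, of the fraction of shared items in the two
        top-d prefixes. A simplified stand-in for the paper's measure."""
        depth = max(len(a), len(b))
        total = 0.0
        for d in range(1, depth + 1):
            total += len(set(a[:d]) & set(b[:d])) / d
        return total / depth

    # Identical lists score 1.0; same items in swapped order score lower,
    # reflecting that ordering (e.g. by acuity) carries clinical meaning.
    same = list_agreement(["CHF", "DM2", "CKD"], ["CHF", "DM2", "CKD"])
    swapped = list_agreement(["CHF", "DM2"], ["DM2", "CHF"])
    ```
    
    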

  4. The dynamics of oceanic fronts. I - The Gulf Stream

    NASA Technical Reports Server (NTRS)

    Kao, T. W.

    1980-01-01

    The establishment and maintenance of the mean hydrographic properties of large-scale density fronts in the upper ocean is considered. The dynamics is studied by posing an initial value problem starting with a near-surface discharge of buoyant water with a prescribed density deficit into an ambient stationary fluid of uniform density; the full time dependent diffusion and Navier-Stokes equations are then used with constant eddy diffusion and viscosity coefficients, together with a constant Coriolis parameter. Scaling analysis reveals three independent scales of the problem: the radius of deformation (inertial length), the buoyancy length, and the diffusive length scale. The governing equations are then suitably scaled and the resulting normalized equations are shown to depend on the Ekman number alone for problems of oceanic interest. It is concluded that the mean Gulf Stream dynamics can be interpreted in terms of a solution of the Navier-Stokes and diffusion equations, with the cross-stream circulation responsible for the maintenance of the front; this mechanism is suggested for the maintenance of the Gulf Stream dynamics.

  5. Computing global minimizers to a constrained B-spline image registration problem from optimal l1 perturbations to block match data

    PubMed Central

    Castillo, Edward; Castillo, Richard; Fuentes, David; Guerrero, Thomas

    2014-01-01

    Purpose: Block matching is a well-known strategy for estimating corresponding voxel locations between a pair of images according to an image similarity metric. Though robust to issues such as image noise and large magnitude voxel displacements, the estimated point matches are not guaranteed to be spatially accurate. However, the underlying optimization problem solved by the block matching procedure is similar in structure to the class of optimization problem associated with B-spline based registration methods. By exploiting this relationship, the authors derive a numerical method for computing a global minimizer to a constrained B-spline registration problem that incorporates the robustness of block matching with the global smoothness properties inherent to B-spline parameterization. Methods: The method reformulates the traditional B-spline registration problem as a basis pursuit problem describing the minimal l1-perturbation to block match pairs required to produce a B-spline fitting error within a given tolerance. The sparsity pattern of the optimal perturbation then defines a voxel point cloud subset on which the B-spline fit is a global minimizer to a constrained variant of the B-spline registration problem. As opposed to traditional B-spline algorithms, the optimization step involving the actual image data is addressed by block matching. Results: The performance of the method is measured in terms of spatial accuracy using ten inhale/exhale thoracic CT image pairs (available for download at www.dir-lab.com) obtained from the COPDgene dataset and corresponding sets of expert-determined landmark point pairs. The results of the validation procedure demonstrate that the method can achieve a high spatial accuracy on a significantly complex image set. 
Conclusions: The proposed methodology is demonstrated to achieve a high spatial accuracy and is generalizable in that it can employ any displacement field parameterization described as a least squares fit to block match generated estimates. Thus, the framework allows for a wide range of image similarity block match metric and physical modeling combinations. PMID:24694135
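
    A drastically simplified, per-point version of the l1-perturbation idea can be sketched as follows. In the paper the smooth model is itself a B-spline coupled across all points (a basis pursuit problem); here the model values are held fixed, which decouples the problem so each perturbation is a simple soft-shrinkage of the residual. The zero entries of the perturbation then play the role of the sparsity pattern that flags block matches already consistent with a smooth fit:

    ```python
    def l1_min_perturbation(matches, model, tol):
        """For each block-match estimate b and fixed smooth-model value m,
        return the smallest-|w| perturbation with |b + w - m| <= tol.
        Zeros in w mark matches already within tolerance of the model."""
        w = []
        for b, m in zip(matches, model):
            r = m - b
            if abs(r) <= tol:
                w.append(0.0)          # match kept as-is (sparse entry)
            else:
                w.append(r - tol if r > 0 else r + tol)
        return w

    # Third match (5.0) is an outlier vs. the smooth model and gets shrunk
    # just far enough to satisfy the fitting tolerance.
    w = l1_min_perturbation([0.0, 1.0, 5.0], [0.1, 1.0, 1.0], tol=0.5)
    ```
    
    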

  6. Balancing antagonistic time and resource utilization constraints in over-subscribed scheduling problems

    NASA Technical Reports Server (NTRS)

    Smith, Stephen F.; Pathak, Dhiraj K.

    1991-01-01

    In this paper, we report work aimed at applying concepts of constraint-based problem structuring and multi-perspective scheduling to an over-subscribed scheduling problem. Previous research has demonstrated the utility of these concepts as a means for effectively balancing conflicting objectives in constraint-relaxable scheduling problems, and our goal here is to provide evidence of their similar potential in the context of HST observation scheduling. To this end, we define and experimentally assess the performance of two time-bounded heuristic scheduling strategies in balancing the tradeoff between resource setup time minimization and satisfaction of absolute time constraints. The first strategy considered is motivated by dispatch-based manufacturing scheduling research, and employs a problem decomposition that concentrates local search on minimizing resource idle time due to setup activities. The second is motivated by research in opportunistic scheduling and advocates a problem decomposition that focuses attention on the goal activities that have the tightest temporal constraints. Analysis of experimental results gives evidence of differential superiority on the part of each strategy in different problem solving circumstances. A composite strategy based on recognition of characteristics of the current problem solving state is then defined and tested to illustrate the potential benefits of constraint-based problem structuring and multi-perspective scheduling in over-subscribed scheduling problems.

  7. Percentiles of the run-length distribution of the Exponentially Weighted Moving Average (EWMA) median chart

    NASA Astrophysics Data System (ADS)

    Tan, K. L.; Chong, Z. L.; Khoo, M. B. C.; Teoh, W. L.; Teh, S. Y.

    2017-09-01

    Quality control is crucial in a wide variety of fields, as it can help to satisfy customers’ needs and requirements by enhancing and improving products and services to a superior quality level. The EWMA median chart was proposed as a useful alternative to the EWMA X̄ chart because the median-type chart is robust against contamination, outliers or small deviations from the normality assumption compared to the traditional X̄-type chart. To provide a complete understanding of the run-length distribution, the percentiles of the run-length distribution should be investigated rather than depending solely on the average run length (ARL) performance measure. This is because interpretation depending on the ARL alone can be misleading, as the skewness and shape of the run-length distribution change with the process mean shift, varying from almost symmetric when the magnitude of the mean shift is large, to highly right-skewed when the process is in-control (IC) or slightly out-of-control (OOC). Before computing the percentiles of the run-length distribution, optimal parameters of the EWMA median chart are obtained by minimizing the OOC ARL, while retaining the IC ARL at a desired value.
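
    The run-length percentiles discussed above can be estimated by Monte Carlo simulation. The sketch below is a minimal illustration assuming standard-normal data, subgroup size 5, and an arbitrary smoothing constant and symmetric control limit; it does not use the optimized chart parameters from the paper:

    ```python
    import random
    import statistics

    def ewma_median_run_length(lam, limit, n=5, shift=0.0, rng=None):
        """One simulated run length of an EWMA chart on subgroup medians:
        z_t = lam * median_t + (1 - lam) * z_{t-1}, signal when |z_t| > limit."""
        rng = rng or random.Random()
        z, t = 0.0, 0
        while t < 100_000:  # guard against unbounded in-control runs
            t += 1
            sample = [rng.gauss(shift, 1.0) for _ in range(n)]
            z = lam * statistics.median(sample) + (1.0 - lam) * z
            if abs(z) > limit:
                return t
        return t

    def run_length_percentiles(lam, limit, shift, reps=300, seed=1):
        """Empirical 10th/50th/90th percentiles of the run-length distribution."""
        rng = random.Random(seed)
        rls = sorted(ewma_median_run_length(lam, limit, shift=shift, rng=rng)
                     for _ in range(reps))
        pick = lambda p: rls[min(len(rls) - 1, int(p / 100 * len(rls)))]
        return pick(10), pick(50), pick(90)
    ```

    Comparing the in-control percentiles (shift = 0) with an out-of-control case (e.g. shift = 1) shows the right-skew collapsing toward a short, nearly symmetric run-length distribution as the shift grows, which is the paper's motivation for looking beyond the ARL.
    
    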

  8. Thick Filament Length and Isoform Composition Determine Self-Organized Contractile Units in Actomyosin Bundles

    PubMed Central

    Thoresen, Todd; Lenz, Martin; Gardel, Margaret L.

    2013-01-01

    Diverse myosin II isoforms regulate contractility of actomyosin bundles in disparate physiological processes by variations in both motor mechanochemistry and the extent to which motors are clustered into thick filaments. Although the role of mechanochemistry is well appreciated, the extent to which thick filament length regulates actomyosin contractility is unknown. Here, we study the contractility of minimal actomyosin bundles formed in vitro by mixtures of F-actin and thick filaments of nonmuscle, smooth, and skeletal muscle myosin isoforms with varied length. Diverse myosin II isoforms guide the self-organization of distinct contractile units within in vitro bundles with shortening rates similar to those of in vivo myofibrils and stress fibers. The tendency to form contractile units increases with the thick filament length, resulting in a bundle shortening rate proportional to the length of constituent myosin thick filament. We develop a model that describes our data, providing a framework in which to understand how diverse myosin II isoforms regulate the contractile behaviors of disordered actomyosin bundles found in muscle and nonmuscle cells. These experiments provide insight into physiological processes that use dynamic regulation of thick filament length, such as smooth muscle contraction. PMID:23442916

  9. Lateral-Line Detection of Underwater Objects: From Goldfish to Submarines

    NASA Astrophysics Data System (ADS)

    van Hemmen, J. Leo

    2010-03-01

    Fish and some aquatic amphibians use their mechanosensory lateral-line system to navigate by means of hydrodynamic cues. How a fish determines an object's position and shape only through the lateral-line system and the ensuing neuronal processing is still a challenging problem. Our studies have shown that both stimulus position and stimulus form can be determined within the range of about one fish length and are encoded through the response of the afferent nerves originating from the detectors. A minimal detection model of a vibrating sphere (a dipole) has now been extended to other stimuli such as translating spheres, ellipsoids, or even wakes (vortex rings). The theoretical model is fully verified by experimental data. We have also constructed an underwater robot with an artificial lateral-line system designed to detect e.g. the presence of walls by measuring the change of water flow around the body. We will show how a simple model fits experimental results obtained from trout and goldfish and how a submarine may well be able to detect underwater objects by using an artificial lateral-line system.
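
    A minimal version of the dipole detection model illustrates how the lateral-line profile encodes distance. Assuming only the ~1/r³ far-field amplitude falloff of a dipole (angular factors omitted, our simplification rather than the authors' full model), the half-width of the pressure profile along the line grows linearly with source distance, independent of source strength:

    ```python
    import math

    def dipole_profile(d, xs):
        """Relative pressure-amplitude profile along a lateral line at height
        d above a dipole source, using a bare 1/r**3 falloff."""
        return [1.0 / (x * x + d * d) ** 1.5 for x in xs]

    def half_width(d, dx=1e-3, xmax=20.0):
        """Half-width at half-maximum of the profile. Analytically this is
        d * sqrt(2**(2/3) - 1), i.e. proportional to d, so the profile width
        alone encodes source distance regardless of source intensity."""
        peak = 1.0 / d ** 3
        x = 0.0
        while x < xmax:
            if 1.0 / (x * x + d * d) ** 1.5 < peak / 2:
                return x
            x += dx
        return None
    ```
    
    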

  10. Thermal Catalytic Oxidation of Airborne Contaminants by a Reactor Using Ultra-Short Channel Length, Monolithic Catalyst Substrates

    NASA Technical Reports Server (NTRS)

    Perry, J. L.; Tomes, K. M.; Tatara, J. D.

    2005-01-01

    Contaminated air, whether in a crewed spacecraft cabin or in terrestrial work and living spaces, is a pervasive problem affecting human health, performance, and well-being. The need for highly effective, economical air quality processes spans a wide range of terrestrial and space flight applications. Typically, air quality control relies on adsorption-based processes. Most industrial packed-bed adsorption processes use activated carbon. Once saturated, the carbon is either dumped or regenerated. In either case, the dumped carbon and concentrated waste streams constitute a hazardous waste that must be handled safely while minimizing environmental impact. Thermal catalytic oxidation processes designed to address these waste handling issues are moving to the forefront of air quality control and process gas decontamination. Careful design of the catalyst substrate and reactor can lead to more complete contaminant destruction and greater poisoning resistance; maintenance improvements leading to reduced waste handling and process downtime can also be realized. The performance of a prototype thermal catalytic reactor based on an ultra-short channel length, monolithic catalyst substrate design is discussed under a variety of process flow and contaminant loading conditions.

  11. Opposition-Based Memetic Algorithm and Hybrid Approach for Sorting Permutations by Reversals.

    PubMed

    Soncco-Álvarez, José Luis; Muñoz, Daniel M; Ayala-Rincón, Mauricio

    2018-02-21

    Sorting unsigned permutations by reversals is a difficult problem; indeed, it was proved to be NP-hard by Caprara (1997). Because of its high complexity, many approximation algorithms for computing the minimal reversal distance have been proposed, culminating in the best-known approximation ratio of 1.375. In this article, two memetic algorithms for computing the reversal distance are proposed. The first uses opposition-based learning, yielding an opposition-based memetic algorithm (OBMA); the second improves on it by applying a two-breakpoint elimination heuristic, yielding a hybrid approach (Hybrid-OBMA). Several experiments were performed with one hundred randomly generated permutations, single benchmark permutations, and biological permutations. The experiments showed that the proposed OBMA and Hybrid-OBMA algorithms achieve the best results for practical cases, that is, for permutations of length up to 120. Hybrid-OBMA was also shown to improve on OBMA for permutations of length 60 or more. The applicability of the proposed algorithms was checked on permutations derived from biological data, for which OBMA gave the best average results on all instances.
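
    The breakpoint notion behind such heuristics is easy to make concrete: frame the permutation with sentinels 0 and n+1, and count adjacent pairs that are not consecutive integers; each reversal removes at most two breakpoints, giving the classic b(π)/2 lower bound on reversal distance. A minimal sketch (helper names are illustrative, not taken from the paper):

```python
def breakpoints(perm):
    """Count breakpoints of an unsigned permutation of 1..n.

    The permutation is framed with sentinels 0 and n+1; a breakpoint is
    any adjacent pair whose values differ by more than 1."""
    ext = [0] + list(perm) + [len(perm) + 1]
    return sum(1 for a, b in zip(ext, ext[1:]) if abs(a - b) != 1)

def apply_reversal(perm, i, j):
    """Return a copy of perm with the segment perm[i..j] reversed."""
    return perm[:i] + perm[i:j + 1][::-1] + perm[j + 1:]
```

    The identity permutation has zero breakpoints, so greedy and memetic methods alike can use the breakpoint count as a fitness signal to guide which reversals to apply.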

  12. High-Frequency Replanning Under Uncertainty Using Parallel Sampling-Based Motion Planning

    PubMed Central

    Sun, Wen; Patil, Sachin; Alterovitz, Ron

    2015-01-01

    As sampling-based motion planners become faster, they can be re-executed more frequently by a robot during task execution to react to uncertainty in robot motion, obstacle motion, sensing noise, and uncertainty in the robot’s kinematic model. We investigate and analyze high-frequency replanning (HFR), where, during each period, fast sampling-based motion planners are executed in parallel as the robot simultaneously executes the first action of the best motion plan from the previous period. We consider discrete-time systems with stochastic nonlinear (but linearizable) dynamics and observation models with noise drawn from zero mean Gaussian distributions. The objective is to maximize the probability of success (i.e., avoid collision with obstacles and reach the goal) or to minimize path length subject to a lower bound on the probability of success. We show that, as parallel computation power increases, HFR offers asymptotic optimality for these objectives during each period for goal-oriented problems. We then demonstrate the effectiveness of HFR for holonomic and nonholonomic robots including car-like vehicles and steerable medical needles. PMID:26279645
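
    The replanning loop can be caricatured in a few lines: each period, several randomized planners run in parallel, the lowest-cost plan is kept, and only its first action is committed before the next period begins. The planner below is a toy stand-in (the dynamics, cost function, and names are illustrative assumptions, not the paper's models):

```python
import random
from concurrent.futures import ThreadPoolExecutor

def sample_plan(seed, start, goal, steps=8):
    """Stand-in for one sampling-based planner: a random piecewise path.
    Returns (cost, plan); cost is the endpoint's distance to the goal."""
    rng = random.Random(seed)
    plan = [start]
    for _ in range(steps):
        plan.append(plan[-1] + rng.uniform(-1.0, 2.0))
    return abs(plan[-1] - goal), plan

def hfr_step(state, goal, n_parallel=16, seed0=0):
    """One HFR period: run n_parallel planners concurrently, keep the
    best plan, and commit only its first action.
    Returns (next_state, best_cost)."""
    with ThreadPoolExecutor() as pool:
        results = list(pool.map(lambda s: sample_plan(s, state, goal),
                                range(seed0, seed0 + n_parallel)))
    cost, plan = min(results)
    return plan[1], cost
```

    Running more planners per period can only improve the best plan found, which is the intuition behind the asymptotic-optimality claim as parallel computation grows.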

  13. Novel characteristics of energy spectrum for 3D Dirac oscillator analyzed via Lorentz covariant deformed algebra

    PubMed Central

    Betrouche, Malika; Maamache, Mustapha; Choi, Jeong Ryeol

    2013-01-01

    We investigate the Lorentz-covariant deformed algebra for the Dirac oscillator problem, a generalization of the Kempf deformed algebra to 3 + 1-dimensional space-time in which Lorentz symmetry is preserved. The energy spectrum of the system is analyzed by means of the corresponding wave functions with explicit spin states. Our development, based on the Kempf algebra, yields results entirely different from studies carried out with non-Lorentz-covariant deformed algebras. A novel result of this research is that the quantized relativistic energy of the system in the presence of a minimal length cannot grow indefinitely as the quantum number n increases, but converges to a finite value determined by the speed of light c and the parameter β that sets the scale of noncommutativity in space. Given that the energy levels of the ordinary oscillator are equally spaced, so that the quantized energy grows monotonically with n, this result is remarkable. The physical meaning of this consequence is discussed in detail. PMID:24225900

  14. Novel characteristics of energy spectrum for 3D Dirac oscillator analyzed via Lorentz covariant deformed algebra.

    PubMed

    Betrouche, Malika; Maamache, Mustapha; Choi, Jeong Ryeol

    2013-11-14

    We investigate the Lorentz-covariant deformed algebra for the Dirac oscillator problem, a generalization of the Kempf deformed algebra to 3 + 1-dimensional space-time in which Lorentz symmetry is preserved. The energy spectrum of the system is analyzed by means of the corresponding wave functions with explicit spin states. Our development, based on the Kempf algebra, yields results entirely different from studies carried out with non-Lorentz-covariant deformed algebras. A novel result of this research is that the quantized relativistic energy of the system in the presence of a minimal length cannot grow indefinitely as the quantum number n increases, but converges to a finite value determined by the speed of light c and the parameter β that sets the scale of noncommutativity in space. Given that the energy levels of the ordinary oscillator are equally spaced, so that the quantized energy grows monotonically with n, this result is remarkable. The physical meaning of this consequence is discussed in detail.

  15. Single product lot-sizing on unrelated parallel machines with non-decreasing processing times

    NASA Astrophysics Data System (ADS)

    Eremeev, A.; Kovalyov, M.; Kuznetsov, P.

    2018-01-01

    We consider a problem in which at least a given quantity of a single product has to be partitioned into lots, and the lots have to be assigned to unrelated parallel machines for processing. In one version of the problem the maximum machine completion time is to be minimized; in another, the sum of machine completion times is minimized. Machine-dependent lower and upper bounds on the lot size are given. The product is assumed to be either continuously divisible or discrete. The processing time of each machine is given by an increasing function of the lot volume, provided as an oracle. Setup times and costs are assumed to be negligibly small and are therefore not considered. We derive optimal polynomial-time algorithms for several special cases of the problem. An NP-hard case is shown to admit a fully polynomial time approximation scheme. An application of the problem to energy-efficient processor scheduling is considered.
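
    For the continuously divisible makespan version, one natural approach (a sketch under the abstract's stated assumptions, not necessarily the authors' algorithm) is to binary-search the completion time T: since processing time increases with lot volume, each machine can absorb at most f_i⁻¹(T) units of product, clipped to its bounds, so feasibility of a candidate T reduces to a sum check.

```python
def max_volume(f, lo, hi, T, eps=1e-9):
    """Largest volume v in [lo, hi] with f(v) <= T, or 0.0 if even the
    minimal lot misses the deadline. f is an increasing processing-time
    oracle; its inverse is found by bisection."""
    if f(lo) > T:
        return 0.0
    if f(hi) <= T:
        return hi
    a, b = lo, hi
    while b - a > eps:
        m = (a + b) / 2
        if f(m) <= T:
            a = m
        else:
            b = m
    return a

def min_makespan(fs, bounds, Q, eps=1e-6):
    """Binary-search the smallest makespan T whose total assignable
    volume covers Q. Assumes the instance is feasible (sum of upper
    bounds >= Q)."""
    lo_T, hi_T = 0.0, max(f(hi) for f, (lo, hi) in zip(fs, bounds))
    while hi_T - lo_T > eps:
        T = (lo_T + hi_T) / 2
        total = sum(max_volume(f, lo, hi, T)
                    for f, (lo, hi) in zip(fs, bounds))
        if total >= Q:
            hi_T = T
        else:
            lo_T = T
    return hi_T
```

    With two machines processing at unit and half speed, splitting 9 units optimally gives lots of 6 and 3 and a makespan of 6.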

  16. Standardization and Optimization of Computed Tomography Protocols to Achieve Low-Dose

    PubMed Central

    Chin, Cynthia; Cody, Dianna D.; Gupta, Rajiv; Hess, Christopher P.; Kalra, Mannudeep K.; Kofler, James M.; Krishnam, Mayil S.; Einstein, Andrew J.

    2014-01-01

    The increase in radiation exposure due to CT scans has been of growing concern in recent years. CT scanners differ in their capabilities and various indications require unique protocols, but there remains room for standardization and optimization. In this paper we summarize approaches to reduce dose, as discussed in lectures comprising the first session of the 2013 UCSF Virtual Symposium on Radiation Safety in Computed Tomography. The experience of scanning at low dose in different body regions, for both diagnostic and interventional CT procedures, is addressed. An essential primary step is justifying the medical need for each scan. General guiding principles for reducing dose include tailoring a scan to a patient, minimizing scan length, use of tube current modulation and minimizing tube current, minimizing tube potential, iterative reconstruction, and periodic review of CT studies. Organized efforts for standardization have been spearheaded by professional societies such as the American Association of Physicists in Medicine. Finally, all team members should demonstrate an awareness of the importance of minimizing dose. PMID:24589403

  17. Minimizing the Free Energy: A Computer Method for Teaching Chemical Equilibrium Concepts.

    ERIC Educational Resources Information Center

    Heald, Emerson F.

    1978-01-01

    Presents a computer method for teaching chemical equilibrium concepts using material balance conditions and the minimization of the free energy. Method for the calculation of chemical equilibrium, the computer program used to solve equilibrium problems and applications of the method are also included. (HM)
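
    The approach can be illustrated on a toy isomerization A ⇌ B: with mole fraction x of B, the molar Gibbs energy G(x) = (1−x)μ°A + xμ°B + RT[x ln x + (1−x) ln(1−x)] is minimized numerically, and the minimizer reproduces the equilibrium constant K = exp(−ΔG°/RT). (The example below and its numbers are illustrative, not taken from the article.)

```python
import math

R = 8.314  # gas constant, J/(mol K)

def gibbs(x, muA, muB, T):
    """Molar Gibbs energy of an ideal A/B mixture with mole fraction x of B."""
    mixing = R * T * (x * math.log(x) + (1 - x) * math.log(1 - x))
    return (1 - x) * muA + x * muB + mixing

def minimize_gibbs(muA, muB, T, tol=1e-10):
    """Golden-section search for the x in (0, 1) that minimizes G."""
    invphi = (math.sqrt(5) - 1) / 2
    a, b = tol, 1 - tol
    while b - a > tol:
        c, d = b - invphi * (b - a), a + invphi * (b - a)
        if gibbs(c, muA, muB, T) < gibbs(d, muA, muB, T):
            b = d
        else:
            a = c
    return (a + b) / 2
```

    Setting dG/dx = 0 gives x/(1−x) = exp(−(μ°B − μ°A)/RT), so the numerical minimum agrees with the familiar equilibrium-constant expression; this is exactly the consistency a teaching program can let students verify.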

  18. Holographic Entanglement Entropy, SUSY & Calibrations

    NASA Astrophysics Data System (ADS)

    Colgáin, Eoin Ó.

    2018-01-01

    Holographic calculations of entanglement entropy boil down to identifying minimal surfaces in curved spacetimes. This generically entails solving second-order equations. For higher-dimensional AdS geometries, we demonstrate that supersymmetry and calibrations reduce the problem to first-order equations. We note that minimal surfaces corresponding to disks preserve supersymmetry, whereas strips do not.

  19. Technological Minimalism: A Cost-Effective Alternative for Course Design and Development.

    ERIC Educational Resources Information Center

    Lorenzo, George

    2001-01-01

    Discusses the use of minimum levels of technology, or technological minimalism, for Web-based multimedia course content. Highlights include cost effectiveness; problems with video streaming, the use of XML for Web pages, and Flash and Java applets; listservs instead of proprietary software; and proper faculty training. (LRW)

  20. Safety in the Chemical Laboratory: Flood Control.

    ERIC Educational Resources Information Center

    Pollard, Bruce D.

    1983-01-01

    Describes events leading to a flood in the Wehr Chemistry Laboratory at Marquette University, discussing steps taken to minimize damage upon discovery. Analyzes the problem of flooding in the chemical laboratory and outlines seven steps of flood control: prevention; minimization; early detection; stopping the flood; evaluation; clean-up; and…
