Sample records for minimizing potential problems

  1. Blow-up behavior of ground states for a nonlinear Schrödinger system with attractive and repulsive interactions

    NASA Astrophysics Data System (ADS)

    Guo, Yujin; Zeng, Xiaoyu; Zhou, Huan-Song

    2018-01-01

    We consider a nonlinear Schrödinger system arising in a two-component Bose-Einstein condensate (BEC) with attractive intraspecies interactions and repulsive interspecies interactions in R^2. We obtain ground states of this system by solving a constrained minimization problem. For some kinds of trapping potentials, we prove that the minimization problem has a minimizer if and only if the attractive interaction strength a_i (i = 1, 2) of each component of the BEC system is strictly less than a threshold a*. Furthermore, as (a_1, a_2) ↗ (a*, a*), the asymptotic behavior of the minimizers of the minimization problem is discussed. Our results show that each component of the BEC system concentrates at a global minimum of the associated trapping potential.

  2. Finite-element grid improvement by minimization of stiffness matrix trace

    NASA Technical Reports Server (NTRS)

    Kittur, Madan G.; Huston, Ronald L.; Oswald, Fred B.

    1989-01-01

    A new and simple method of finite-element grid improvement is presented. The objective is to improve the accuracy of the analysis. The procedure is based on a minimization of the trace of the stiffness matrix. For a broad class of problems this minimization is seen to be equivalent to minimizing the potential energy. The method is illustrated with the classical tapered bar problem examined earlier by Prager and Masur. Identical results are obtained.
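
    To make the trace criterion concrete, here is a minimal sketch (not from the report; the linear taper, unit material data, and two-element mesh are illustrative assumptions) that places the interior node of a tapered bar by minimizing the trace of the assembled stiffness matrix:

    ```python
    # Sketch: grid improvement by stiffness-trace minimization for an
    # assumed two-element tapered bar with nodes at (0, xm, L).
    from scipy.optimize import minimize_scalar

    E, L = 1.0, 1.0                      # Young's modulus, bar length (assumed)
    def area(x):                         # assumed linear taper of cross-section
        return 1.0 - 0.5 * x

    def stiffness_trace(xm):
        """Trace of K for nodes (0, xm, L); a 2-node bar element of average
        area A_e and length L_e has stiffness (E*A_e/L_e)*[[1,-1],[-1,1]],
        so it adds 2*E*A_e/L_e to the trace of the assembled matrix."""
        tr = 0.0
        for a, b in [(0.0, xm), (xm, L)]:
            A_e = 0.5 * (area(a) + area(b))
            tr += 2.0 * E * A_e / (b - a)
        return tr

    res = minimize_scalar(stiffness_trace, bounds=(0.05, 0.95), method='bounded')
    print(f"interior node at x = {res.x:.4f}, trace = {res.fun:.4f}")
    ```

    The one-dimensional search suffices here because only one node is free; in a general mesh the same trace objective would be minimized over all movable nodal coordinates.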

  3. Finite-element grid improvement by minimization of stiffness matrix trace

    NASA Technical Reports Server (NTRS)

    Kittur, Madan G.; Huston, Ronald L.; Oswald, Fred B.

    1987-01-01

    A new and simple method of finite-element grid improvement is presented. The objective is to improve the accuracy of the analysis. The procedure is based on a minimization of the trace of the stiffness matrix. For a broad class of problems this minimization is seen to be equivalent to minimizing the potential energy. The method is illustrated with the classical tapered bar problem examined earlier by Prager and Masur. Identical results are obtained.

  4. NP-hardness of the cluster minimization problem revisited

    NASA Astrophysics Data System (ADS)

    Adib, Artur B.

    2005-10-01

    The computational complexity of the 'cluster minimization problem' is revisited (Wille and Vennik 1985 J. Phys. A: Math. Gen. 18 L419). It is argued that the original NP-hardness proof does not apply to pairwise potentials of physical interest, such as those that depend on the geometric distance between the particles. A geometric analogue of the original problem is formulated, and a new proof for such potentials is provided by polynomial time transformation from the independent set problem for unit disk graphs. Limitations of this formulation are pointed out, and new subproblems that bear more direct consequences to the numerical study of clusters are suggested.

  5. Development of sinkholes resulting from man's activities in the Eastern United States

    USGS Publications Warehouse

    Newton, John G.

    1987-01-01

    Alternatives that allow avoiding or minimizing sinkhole hazards are most numerous when a problem or potential problem is recognized during site evaluation. The number of alternatives declines after the beginning of site development. Where sinkhole development is predictable, zoning of land use can minimize hazards.

  6. Ergonomic Training for Tomorrow's Office.

    ERIC Educational Resources Information Center

    Gross, Clifford M.; Chapnik, Elissa Beth

    1987-01-01

    The authors focus on issues related to the continual use of video display terminals in the office, including safety and health regulations, potential health problems, and the role of training in minimizing work-related health problems. (CH)

  7. Flattening the inflaton potential beyond minimal gravity

    NASA Astrophysics Data System (ADS)

    Lee, Hyun Min

    2018-01-01

    We review the status of the Starobinsky-like models for inflation beyond minimal gravity and discuss the unitarity problem due to the presence of a large non-minimal gravity coupling. We show that the induced gravity models allow for a self-consistent description of inflation and discuss the implications of the inflaton couplings to the Higgs field in the Standard Model.

  8. Distance majorization and its applications.

    PubMed

    Chi, Eric C; Zhou, Hua; Lange, Kenneth

    2014-08-01

    The problem of minimizing a continuously differentiable convex function over an intersection of closed convex sets is ubiquitous in applied mathematics. It is particularly interesting when it is easy to project onto each separate set, but nontrivial to project onto their intersection. Algorithms based on Newton's method such as the interior point method are viable for small to medium-scale problems. However, modern applications in statistics, engineering, and machine learning are posing problems with potentially tens of thousands of parameters or more. We revisit this convex programming problem and propose an algorithm that scales well with dimensionality. Our proposal is an instance of a sequential unconstrained minimization technique and revolves around three ideas: the majorization-minimization principle, the classical penalty method for constrained optimization, and quasi-Newton acceleration of fixed-point algorithms. The performance of our distance majorization algorithms is illustrated in several applications.
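
    As a deliberately tiny illustration of the three ingredients above (majorization via projections, a penalty on constraint violation, and a closed-form surrogate minimizer), the sketch below projects a point onto the intersection of a ball and a halfspace; the sets, penalty schedule, and iteration count are assumptions made for the example, and the quasi-Newton acceleration is omitted.

    ```python
    # Distance-majorization sketch: minimize 0.5*||x - a||^2 subject to
    # x in (ball ∩ halfspace), via the penalized surrogate
    #   0.5*||x - a||^2 + (rho/2) * sum_i ||x - P_i(x_k)||^2.
    import numpy as np

    def proj_ball(x, c, r):              # projection onto ||x - c|| <= r
        d = x - c
        n = np.linalg.norm(d)
        return x if n <= r else c + r * d / n

    def proj_halfspace(x, a, b):         # projection onto a @ x <= b
        viol = a @ x - b
        return x if viol <= 0 else x - viol * a / (a @ a)

    a_pt = np.array([2.0, 2.0])          # point to be projected
    projections = [lambda x: proj_ball(x, np.zeros(2), 1.0),
                   lambda x: proj_halfspace(x, np.array([1.0, 0.0]), 0.5)]

    x, rho = a_pt.copy(), 1.0
    for _ in range(200):
        P = [p(x) for p in projections]  # majorize dist(x, C_i)^2 at x_k
        x = (a_pt + rho * sum(P)) / (1.0 + rho * len(P))  # surrogate minimizer
        rho *= 1.05                      # slowly tighten the penalty
    print("approximate projection onto the intersection:", x)
    ```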

  9. On the heteroclinic connection problem for multi-well gradient systems

    NASA Astrophysics Data System (ADS)

    Zuniga, Andres; Sternberg, Peter

    2016-10-01

    We revisit the existence problem of heteroclinic connections in R^N associated with Hamiltonian systems involving potentials W : R^N → R having several global minima. Under very mild assumptions on W we present a simple variational approach to first find geodesics minimizing the length of curves joining any two of the potential wells, where length is computed with respect to a degenerate metric having conformal factor √W. Then we show that when such a minimizing geodesic avoids passing through other wells of the potential at intermediate times, it gives rise to a heteroclinic connection between the two wells. This work improves upon the approach of [22] and represents a more geometric alternative to the approaches of e.g. [5,10,14,17] for finding such connections.

  10. Effect of Causal Stories in Solving Mathematical Story Problems

    ERIC Educational Resources Information Center

    Smith, Glenn Gordon; Gerretson, Helen; Olkun, Sinan; Joutsenlahti, Jorma

    2010-01-01

    This study investigated whether infusing "causal" story elements into mathematical word problems improves student performance. In one experiment in the USA and a second in the USA, Finland, and Turkey, undergraduate elementary education majors worked word problems in three formats: 1) standard (minimal verbiage), 2) potential causation…

  11. Limit behavior of mass critical Hartree minimization problems with steep potential wells

    NASA Astrophysics Data System (ADS)

    Guo, Yujin; Luo, Yong; Wang, Zhi-Qiang

    2018-06-01

    We consider minimizers of the following mass critical Hartree minimization problem: e_λ(N) ≔ inf{ E_λ(u) : u ∈ H^1(R^d), ‖u‖_2^2 = N }, where d ≥ 3, λ > 0, and the Hartree energy functional E_λ(u) is defined by E_λ(u) ≔ ∫_{R^d} |∇u(x)|^2 dx + λ ∫_{R^d} g(x) u^2(x) dx − (1/2) ∫_{R^d} ∫_{R^d} u^2(x) u^2(y) / |x − y|^2 dx dy. Here the steep potential g(x) satisfies 0 = g(0) = inf_{R^d} g(x) ≤ g(x) ≤ 1 and 1 − g(x) ∈ L^{d/2}(R^d). We prove that there exists a constant N* > 0, independent of λ and g(x), such that if N ≥ N*, then e_λ(N) does not admit minimizers for any λ > 0; if 0 < N < N*, then there exists a constant λ*(N) > 0 such that e_λ(N) admits minimizers for any λ > λ*(N) and e_λ(N) does not admit minimizers for 0 < λ < λ*(N). For any given 0 < N < N*, the limit behavior of positive minimizers for e_λ(N) is also studied as λ → ∞, where the mass concentrates at the bottom of g(x).

  12. Energy levels of one-dimensional systems satisfying the minimal length uncertainty relation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bernardo, Reginald Christian S., E-mail: rcbernardo@nip.upd.edu.ph; Esguerra, Jose Perico H., E-mail: jesguerra@nip.upd.edu.ph

    2016-10-15

    The standard approach to calculating the energy levels for quantum systems satisfying the minimal length uncertainty relation is to solve an eigenvalue problem involving a fourth- or higher-order differential equation in quasiposition space. It is shown that the problem can be reformulated so that the energy levels of these systems can be obtained by solving only a second-order quasiposition eigenvalue equation. Through this formulation the energy levels are calculated for the following potentials: particle in a box, harmonic oscillator, Pöschl–Teller well, Gaussian well, and double-Gaussian well. For the particle in a box, the second-order quasiposition eigenvalue equation is a second-order differential equation with constant coefficients. For the harmonic oscillator, Pöschl–Teller well, Gaussian well, and double-Gaussian well, a method that involves using Wronskians has been used to solve the second-order quasiposition eigenvalue equation. It is observed for all of these quantum systems that the introduction of a nonzero minimal length uncertainty induces a positive shift in the energy levels. It is shown that the calculation of energy levels in systems satisfying the minimal length uncertainty relation is not limited to a small number of problems like particle in a box and the harmonic oscillator but can be extended to a wider class of problems involving potentials such as the Pöschl–Teller and Gaussian wells.

  13. Distance majorization and its applications

    PubMed Central

    Chi, Eric C.; Zhou, Hua; Lange, Kenneth

    2014-01-01

    The problem of minimizing a continuously differentiable convex function over an intersection of closed convex sets is ubiquitous in applied mathematics. It is particularly interesting when it is easy to project onto each separate set, but nontrivial to project onto their intersection. Algorithms based on Newton’s method such as the interior point method are viable for small to medium-scale problems. However, modern applications in statistics, engineering, and machine learning are posing problems with potentially tens of thousands of parameters or more. We revisit this convex programming problem and propose an algorithm that scales well with dimensionality. Our proposal is an instance of a sequential unconstrained minimization technique and revolves around three ideas: the majorization-minimization principle, the classical penalty method for constrained optimization, and quasi-Newton acceleration of fixed-point algorithms. The performance of our distance majorization algorithms is illustrated in several applications. PMID:25392563

  14. Balancing antagonistic time and resource utilization constraints in over-subscribed scheduling problems

    NASA Technical Reports Server (NTRS)

    Smith, Stephen F.; Pathak, Dhiraj K.

    1991-01-01

    In this paper, we report work aimed at applying concepts of constraint-based problem structuring and multi-perspective scheduling to an over-subscribed scheduling problem. Previous research has demonstrated the utility of these concepts as a means for effectively balancing conflicting objectives in constraint-relaxable scheduling problems, and our goal here is to provide evidence of their similar potential in the context of HST observation scheduling. To this end, we define and experimentally assess the performance of two time-bounded heuristic scheduling strategies in balancing the tradeoff between resource setup time minimization and satisfaction of absolute time constraints. The first strategy considered is motivated by dispatch-based manufacturing scheduling research, and employs a problem decomposition that concentrates local search on minimizing resource idle time due to setup activities. The second is motivated by research in opportunistic scheduling and advocates a problem decomposition that focuses attention on the goal activities that have the tightest temporal constraints. Analysis of experimental results gives evidence of differential superiority on the part of each strategy in different problem solving circumstances. A composite strategy based on recognition of characteristics of the current problem solving state is then defined and tested to illustrate the potential benefits of constraint-based problem structuring and multi-perspective scheduling in over-subscribed scheduling problems.

  15. Drag Minimization for Wings and Bodies in Supersonic Flow

    NASA Technical Reports Server (NTRS)

    Heaslet, Max A; Fuller, Franklyn B

    1958-01-01

    The minimization of inviscid fluid drag is studied for aerodynamic shapes satisfying the conditions of linearized theory, and subject to imposed constraints on lift, pitching moment, base area, or volume. The problem is transformed to one of determining two-dimensional potential flows satisfying either Laplace's or Poisson's equations with boundary values fixed by the imposed conditions. A general method for determining integral relations between perturbation velocity components is developed. This analysis is not restricted in application to optimum cases; it may be used for any supersonic wing problem.

  16. Self-Averaging Property of Minimal Investment Risk of Mean-Variance Model.

    PubMed

    Shinzato, Takashi

    2015-01-01

    In portfolio optimization problems, the minimum expected investment risk is not always smaller than the expected minimal investment risk. That is, using a well-known approach from operations research, it is possible to derive a strategy that minimizes the expected investment risk, but this strategy does not always result in the best rate of return on assets. Prior to making investment decisions, it is important to an investor to know the potential minimal investment risk (or the expected minimal investment risk) and to determine the strategy that will maximize the return on assets. We use the self-averaging property to analyze the potential minimal investment risk and the concentrated investment level for the strategy that gives the best rate of return. We compare the results from our method with the results obtained by the operations research approach and with those obtained by a numerical simulation using the optimal portfolio. The results of our method and the numerical simulation are in agreement, but they differ from that of the operations research approach.

  17. Finite Element Analysis in Concurrent Processing: Computational Issues

    NASA Technical Reports Server (NTRS)

    Sobieszczanski-Sobieski, Jaroslaw; Watson, Brian; Vanderplaats, Garrett

    2004-01-01

    The purpose of this research is to investigate the potential application of new methods for solving large-scale static structural problems on concurrent computers. It is well known that traditional single-processor computational speed will be limited by inherent physical limits. The only path to achieve higher computational speeds lies through concurrent processing. Traditional factorization solution methods for sparse matrices are ill suited for concurrent processing because the null entries get filled, leading to high communication and memory requirements. The research reported herein investigates alternatives to factorization that promise a greater potential to achieve high concurrent computing efficiency. Two methods, and their variants, based on direct energy minimization are studied: a) minimization of the strain energy using the displacement method formulation; b) constrained minimization of the complementary strain energy using the force method formulation. Initial results indicated that in the context of the direct energy minimization the displacement formulation experienced convergence and accuracy difficulties while the force formulation showed promising potential.

  18. Evolutionary Computation with Spatial Receding Horizon Control to Minimize Network Coding Resources

    PubMed Central

    Leeson, Mark S.

    2014-01-01

    The minimization of network coding resources, such as coding nodes and links, is a challenging task, not only because it is an NP-hard problem, but also because the problem scale is huge; for example, networks in the real world may have thousands or even millions of nodes and links. Genetic algorithms (GAs) have good potential for resolving NP-hard problems like the network coding problem (NCP), but as population-based algorithms, serious scalability and applicability problems are often confronted when GAs are applied to large- or huge-scale systems. Inspired by the temporal receding horizon control in control engineering, this paper proposes a novel spatial receding horizon control (SRHC) strategy as a network partitioning technology, and then designs an efficient GA to tackle the NCP. Traditional network partitioning methods can be viewed as a special case of the proposed SRHC, that is, one-step-wide SRHC, whilst the method in this paper is a generalized N-step-wide SRHC, which can make better use of global information of network topologies. Besides the SRHC strategy, some useful designs are also reported in this paper. The advantages of the proposed SRHC and GA for the NCP are illustrated by extensive experiments, and they have good potential to be extended to other large-scale complex problems. PMID:24883371

  19. Minimization principles for the coupled problem of Darcy-Biot-type fluid transport in porous media linked to phase field modeling of fracture

    NASA Astrophysics Data System (ADS)

    Miehe, Christian; Mauthe, Steffen; Teichtmeister, Stephan

    2015-09-01

    This work develops new minimization and saddle point principles for the coupled problem of Darcy-Biot-type fluid transport in porous media at fracture. It shows that the quasi-static problem of elastically deforming, fluid-saturated porous media is related to a minimization principle for the evolution problem. This two-field principle determines the rate of deformation and the fluid mass flux vector. It provides a canonically compact model structure, where the stress equilibrium and the inverse Darcy's law appear as the Euler equations of a variational statement. A Legendre transformation of the dissipation potential relates the minimization principle to a characteristic three field saddle point principle, whose Euler equations determine the evolutions of deformation and fluid content as well as Darcy's law. A further geometric assumption results in modified variational principles for a simplified theory, where the fluid content is linked to the volumetric deformation. The existence of these variational principles underlines inherent symmetries of Darcy-Biot theories of porous media. This can be exploited in the numerical implementation by the construction of time- and space-discrete variational principles, which fully determine the update problems of typical time stepping schemes. Here, the proposed minimization principle for the coupled problem is advantageous with regard to a new unconstrained stable finite element design, while space discretizations of the saddle point principles are constrained by the LBB condition. The variational principles developed provide the most fundamental approach to the discretization of nonlinear fluid-structure interactions, showing symmetric systems in algebraic update procedures. They also provide an excellent starting point for extensions towards more complex problems. This is demonstrated by developing a minimization principle for a phase field description of fracture in fluid-saturated porous media. It is designed for an incorporation of alternative crack driving forces, such as a convenient criterion in terms of the effective stress. The proposed setting provides a modeling framework for the analysis of complex problems such as hydraulic fracture. This is demonstrated by a spectrum of model simulations.

  20. Random potentials and cosmological attractors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Linde, Andrei, E-mail: alinde@stanford.edu

    I show that the problem of realizing inflation in theories with random potentials of a limited number of fields can be solved, and agreement with the observational data can be naturally achieved if at least one of these fields has a non-minimal kinetic term of the type used in the theory of cosmological α-attractors.

  1. Minimally conscious state or cortically mediated state?

    PubMed

    Naccache, Lionel

    2018-04-01

    Durable impairments of consciousness are currently classified into three main neurological categories: comatose state, vegetative state (also recently coined unresponsive wakefulness syndrome) and minimally conscious state. While the introduction of the minimally conscious state, in 2002, was a major advance that helped clinicians recognize complex non-reflexive behaviours in the absence of functional communication, it raises several problems. The most important issue related to the minimally conscious state lies in its criteria: while the behavioural definition of the minimally conscious state lacks any direct evidence of a patient's conscious content or conscious state, it includes the adjective 'conscious'. I discuss this major problem in this review and propose a novel interpretation of the minimally conscious state: its criteria do not inform us about the potential residual consciousness of patients, but they do inform us with certainty about the presence of a cortically mediated state. Based on this constructive criticism review, I suggest three proposals aimed at improving the way we describe the subjective and cognitive state of non-communicating patients. In particular, I present a tentative new classification of impairments of consciousness that combines behavioural evidence with functional brain imaging data, in order to probe directly and univocally residual conscious processes.

  2. Self-Averaging Property of Minimal Investment Risk of Mean-Variance Model

    PubMed Central

    Shinzato, Takashi

    2015-01-01

    In portfolio optimization problems, the minimum expected investment risk is not always smaller than the expected minimal investment risk. That is, using a well-known approach from operations research, it is possible to derive a strategy that minimizes the expected investment risk, but this strategy does not always result in the best rate of return on assets. Prior to making investment decisions, it is important to an investor to know the potential minimal investment risk (or the expected minimal investment risk) and to determine the strategy that will maximize the return on assets. We use the self-averaging property to analyze the potential minimal investment risk and the concentrated investment level for the strategy that gives the best rate of return. We compare the results from our method with the results obtained by the operations research approach and with those obtained by a numerical simulation using the optimal portfolio. The results of our method and the numerical simulation are in agreement, but they differ from that of the operations research approach. PMID:26225761

  3. Transformation of general binary MRF minimization to the first-order case.

    PubMed

    Ishikawa, Hiroshi

    2011-06-01

    We introduce a transformation of a general higher-order Markov random field with binary labels into a first-order one that has the same minima as the original. Moreover, we formalize a framework for approximately minimizing higher-order multi-label MRF energies that combines the new reduction with the fusion-move and QPBO algorithms. While many computer vision problems today are formulated as energy minimization problems, they have mostly been limited to using first-order energies, which consist of unary and pairwise clique potentials, with a few exceptions that consider triples. This is because of the lack of efficient algorithms to optimize energies with higher-order interactions. Our algorithm challenges this restriction, which limits the representational power of the models, so that higher-order energies can be used to capture the rich statistics of natural scenes. We also show that some minimization methods can be considered special cases of the present framework, and we compare the new method experimentally with other such techniques.
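
    One standard reduction of this kind can be checked by brute force: for a negative coefficient a, a cubic binary term satisfies a·x1·x2·x3 = min over w in {0,1} of a·w·(x1 + x2 + x3 − 2), at the cost of one auxiliary variable. The sketch below (coefficient chosen arbitrarily) verifies the identity on all assignments; it covers one special case only, not the paper's general transformation.

    ```python
    # Brute-force check of a classical cubic-to-quadratic reduction for
    # binary labels with a negative coefficient.
    from itertools import product

    a = -3.0                                       # any a < 0
    for x1, x2, x3 in product((0, 1), repeat=3):
        cubic = a * x1 * x2 * x3
        reduced = min(a * w * (x1 + x2 + x3 - 2) for w in (0, 1))
        assert cubic == reduced, (x1, x2, x3)
    print("reduction verified on all 8 binary assignments")
    ```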

  4. Regulating Hazardous-materials Transportation with Behavioral Modeling of Drivers

    DOT National Transportation Integrated Search

    2018-01-29

    Changhyun Kwon (ORCID ID 0000-0001-8455-6396) This project considers network regulation problems to minimize the risk of hazmat accidents and potential damages to the environment, while considering bounded rationality of drivers. We consider governme...

  5. Linear Matrix Inequality Method for a Quadratic Performance Index Minimization Problem with a class of Bilinear Matrix Inequality Conditions

    NASA Astrophysics Data System (ADS)

    Tanemura, M.; Chida, Y.

    2016-09-01

    Many control system design problems are expressed as a performance index minimization under BMI conditions. However, a minimization problem expressed as LMIs can be solved easily because of the convexity of LMIs. Therefore, many researchers have studied how to transform a variety of control design problems into convex minimization problems expressed as LMIs. This paper proposes an LMI method for a quadratic performance index minimization problem with a class of BMI conditions. The minimization problem treated in this paper includes design problems of state-feedback gains for switched systems, among others. The effectiveness of the proposed method is verified through a state-feedback gain design for switched systems and a numerical simulation using the designed feedback gains.
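
    For orientation, the hedged sketch below solves a small problem of the target shape (a linear objective minimized under LMI constraints) with CVXPY; the Lyapunov-type constraints and the test matrix are illustrative assumptions, not the paper's BMI-to-LMI transformation.

    ```python
    # Illustrative LMI minimization: find symmetric P minimizing trace(P)
    # subject to P >= I and A'P + PA <= -I (a stability certificate for
    # dx/dt = A x). Requires an SDP-capable solver such as SCS.
    import cvxpy as cp
    import numpy as np

    A = np.array([[0.0, 1.0], [-2.0, -3.0]])      # assumed stable test matrix
    P = cp.Variable((2, 2), symmetric=True)
    constraints = [P >> np.eye(2),
                   A.T @ P + P @ A << -np.eye(2)]
    prob = cp.Problem(cp.Minimize(cp.trace(P)), constraints)
    prob.solve()
    print("optimal trace:", round(prob.value, 4))
    ```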

  6. Electrical Resistivity Tomography using a finite element based BFGS algorithm with algebraic multigrid preconditioning

    NASA Astrophysics Data System (ADS)

    Codd, A. L.; Gross, L.

    2018-03-01

    We present a new inversion method for Electrical Resistivity Tomography which, in contrast to established approaches, minimizes the cost function prior to finite element discretization for the unknown electric conductivity and electric potential. Minimization is performed with the Broyden-Fletcher-Goldfarb-Shanno method (BFGS) in an appropriate function space. BFGS is self-preconditioning and avoids construction of the dense Hessian, which is the major obstacle to solving large 3-D problems using parallel computers. In addition to the forward problem predicting the measurement from the injected current, the so-called adjoint problem also needs to be solved. For this problem a virtual current is injected through the measurement electrodes and an adjoint electric potential is obtained. The magnitude of the injected virtual current is equal to the misfit at the measurement electrodes. This new approach has the advantage that the solution process of the optimization problem remains independent of the meshes used for discretization and allows for mesh adaptation during inversion. Computation time is reduced by using superposition of pole loads for the forward and adjoint problems. A smoothed aggregation algebraic multigrid (AMG) preconditioned conjugate gradient is applied to construct the potentials for a given electric conductivity estimate and for constructing a first-level BFGS preconditioner. Through the additional reuse of AMG operators and coarse grid solvers, inversion time for large 3-D problems can be reduced further. We apply our new inversion method to synthetic survey data created from a resistivity profile representing the characteristics of subsurface fluid injection. We further test it on data obtained from a 2-D surface electrode survey on Heron Island, a small tropical island off the east coast of central Queensland, Australia.

  7. Rate-independent dissipation in phase-field modelling of displacive transformations

    NASA Astrophysics Data System (ADS)

    Tůma, K.; Stupkiewicz, S.; Petryk, H.

    2018-05-01

    In this paper, rate-independent dissipation is introduced into the phase-field framework for modelling of displacive transformations, such as martensitic phase transformation and twinning. The finite-strain phase-field model developed recently by the present authors is here extended beyond the limitations of purely viscous dissipation. The variational formulation, in which the evolution problem is formulated as a constrained minimization problem for a global rate-potential, is enhanced by including a mixed-type dissipation potential that combines viscous and rate-independent contributions. Effective computational treatment of the resulting incremental problem of non-smooth optimization is developed by employing the augmented Lagrangian method. It is demonstrated that a single Lagrange multiplier field suffices to handle the dissipation potential vertex and simultaneously to enforce physical constraints on the order parameter. In this way, the initially non-smooth problem of evolution is converted into a smooth stationarity problem. The model is implemented in a finite-element code and applied to solve two- and three-dimensional boundary value problems representative for shape memory alloys.

  8. Minimal intervention dentistry II: part 6. Microscope and microsurgical techniques in periodontics.

    PubMed

    Sitbon, Y; Attathom, T

    2014-05-01

    Different aspects of treatment for periodontal diseases or gingival problems require rigorous diagnostics. Magnification tools and microsurgical instruments, combined with minimally invasive techniques can provide the best solutions in such cases. Relevance of treatments, duration of healing, reduction of pain and post-operative scarring have the potential to be improved for patients through such techniques. This article presents an overview of the use of microscopy in periodontics, still in the early stages of development.

  9. Approximate solution of the p-median minimization problem

    NASA Astrophysics Data System (ADS)

    Il'ev, V. P.; Il'eva, S. D.; Navrotskaya, A. A.

    2016-09-01

    A version of the facility location problem (the well-known p-median minimization problem) and its generalization—the problem of minimizing a supermodular set function—is studied. These problems are NP-hard, and they are approximately solved by a gradient algorithm that is a discrete analog of the steepest descent algorithm. A priori bounds on the worst-case behavior of the gradient algorithm for the problems under consideration are obtained. As a consequence, a bound on the performance guarantee of the gradient algorithm for the p-median minimization problem in terms of the production and transportation cost matrix is obtained.
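
    The abstract does not reproduce the gradient algorithm itself, but a greedy descent of the same flavor for the classical p-median objective is easy to state; the random instance and the add-one-facility-at-a-time rule below are illustrative assumptions.

    ```python
    # Greedy heuristic for p-median: repeatedly open the facility that
    # most reduces the total client-to-nearest-median distance.
    import numpy as np

    rng = np.random.default_rng(0)
    pts = rng.random((30, 2))                     # clients = candidate sites
    D = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=2)
    p = 4

    medians = []
    for _ in range(p):
        best_j, best_cost = None, np.inf
        for j in range(len(pts)):
            if j in medians:
                continue
            cost = D[:, medians + [j]].min(axis=1).sum()
            if cost < best_cost:
                best_j, best_cost = j, cost
        medians.append(best_j)
    print("medians:", sorted(medians), "cost:", round(best_cost, 4))
    ```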

  10. On a Minimum Problem in Smectic Elastomers

    NASA Astrophysics Data System (ADS)

    Buonsanti, Michele; Giovine, Pasquale

    2008-07-01

    Smectic elastomers are layered materials exhibiting a solid-like elastic response along the layer normal and a rubbery one in the plane. Balance equations for smectic elastomers are derived from the general theory of continua with constrained microstructure. In this work we investigate a very simple minimum problem based on multi-well potentials where the microstructure is taken into account. The set of polymeric strains minimizing the elastic energy contains a one-parameter family of simple strains associated with a micro-variation of the degree of freedom. We develop the energy functional through two terms, the first nematic and the second accounting for the tilting phenomenon; then, working in the rubber-elasticity framework, we minimize over the tilt rotation angle and extract the engineering stress.

  11. Computation and analysis for a constrained entropy optimization problem in finance

    NASA Astrophysics Data System (ADS)

    He, Changhong; Coleman, Thomas F.; Li, Yuying

    2008-12-01

    In [T. Coleman, C. He, Y. Li, Calibrating volatility function bounds for an uncertain volatility model, Journal of Computational Finance (2006) (submitted for publication)], an entropy minimization formulation has been proposed to calibrate an uncertain volatility option pricing model (UVM) from market bid and ask prices. To avoid potential infeasibility due to numerical error, a quadratic penalty function approach is applied. In this paper, we show that the solution to the quadratic penalty problem can be obtained by minimizing an objective function which can be evaluated via solving a Hamilton-Jacobi-Bellman (HJB) equation. We prove that the implicit finite difference solution of this HJB equation converges to its viscosity solution. In addition, we provide computational examples illustrating the accuracy of the calibration.

  12. A Self-Organizing Incremental Spatiotemporal Associative Memory Networks Model for Problems with Hidden State

    PubMed Central

    2016-01-01

    Identifying the hidden state is important for solving problems with hidden state. We prove any deterministic partially observable Markov decision processes (POMDP) can be represented by a minimal, looping hidden state transition model and propose a heuristic state transition model constructing algorithm. A new spatiotemporal associative memory network (STAMN) is proposed to realize the minimal, looping hidden state transition model. STAMN utilizes the neuroactivity decay to realize the short-term memory, connection weights between different nodes to represent long-term memory, presynaptic potentials, and synchronized activation mechanism to complete identifying and recalling simultaneously. Finally, we give the empirical illustrations of the STAMN and compare the performance of the STAMN model with that of other methods. PMID:27891146

  13. A framework for multi-stakeholder decision-making and ...

    EPA Pesticide Factsheets

    We propose a decision-making framework to compute compromise solutions that balance conflicting priorities of multiple stakeholders on multiple objectives. In our setting, we shape the stakeholder dissatisfaction distribution by solving a conditional-value-at-risk (CVaR) minimization problem. The CVaR problem is parameterized by a probability level that shapes the tail of the dissatisfaction distribution. The proposed approach allows us to compute a family of compromise solutions and generalizes multi-stakeholder settings previously proposed in the literature that minimize average and worst-case dissatisfactions. We use the concept of the CVaR norm to give a geometric interpretation to this problem and use the properties of this norm to prove that the CVaR minimization problem yields Pareto optimal solutions for any choice of the probability level. We discuss a broad range of potential applications of the framework that involve complex decision-making processes. We demonstrate the developments using a biowaste facility location case study in which we seek to balance stakeholder priorities on transportation, safety, water quality, and capital costs. This manuscript describes the methodology of a new decision-making framework that computes compromise solutions that balance conflicting priorities of multiple stakeholders on multiple objectives, as needed for the SHC Decision Science and Support Tools project. A biowaste facility location is employed as the case study.
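
    A minimal sketch of the CVaR machinery referenced above, using the standard Rockafellar-Uryasev reformulation with an auxiliary scalar t; the linear dissatisfaction model d_i(x) = c_i @ x, the simplex constraint, and the data are assumptions for illustration only.

    ```python
    # Compromise decision by CVaR minimization over stakeholder
    # dissatisfactions: minimize t + E[(d - t)_+] / (1 - alpha).
    import cvxpy as cp
    import numpy as np

    rng = np.random.default_rng(1)
    m, n, alpha = 8, 3, 0.8              # stakeholders, decision vars, CVaR level
    C = rng.random((m, n))               # row i: stakeholder i's cost vector

    x = cp.Variable(n, nonneg=True)
    t = cp.Variable()                    # value-at-risk auxiliary variable
    d = C @ x                            # dissatisfaction of each stakeholder
    cvar = t + cp.sum(cp.pos(d - t)) / ((1 - alpha) * m)

    prob = cp.Problem(cp.Minimize(cvar), [cp.sum(x) == 1])
    prob.solve()
    print("compromise x:", np.round(x.value, 3), " CVaR:", round(prob.value, 4))
    ```

    Sweeping alpha toward 1 recovers the worst-case compromise, while alpha near 0 recovers the average-dissatisfaction compromise, matching the parameterized family described above.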

  14. Passive force balancing of an active magnetic regenerative liquefier

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Teyber, R.; Meinhardt, K.; Thomsen, E.

    Active magnetic regenerators (AMR) have the potential for high efficiency cryogen liquefaction. One active magnetic regenerative liquefier (AMRL) configuration consists of dual magnetocaloric regenerators that reciprocate in a persistent-mode superconducting solenoid. Issues with this configuration are the spatial and temporal magnetization gradients that induce large magnetic forces and winding currents. To solve the coupled problem, we present a force minimization approach using passive magnetic material to balance a dual-regenerator AMR. A magnetostatic model is developed and simulated force waveforms are compared with experimental measurements. A genetic algorithm identifies force-minimizing passive structures with virtually ideal balancing characteristics. Finally, implementation details are investigated which affirm the potential of the proposed methodology.

  15. Passive force balancing of an active magnetic regenerative liquefier

    DOE PAGES

    Teyber, R.; Meinhardt, K.; Thomsen, E.; ...

    2017-11-02

    Active magnetic regenerators (AMR) have the potential for high efficiency cryogen liquefaction. One active magnetic regenerative liquefier (AMRL) configuration consists of dual magnetocaloric regenerators that reciprocate in a persistent-mode superconducting solenoid. Issues with this configuration are the spatial and temporal magnetization gradients that induce large magnetic forces and winding currents. To solve the coupled problem, we present a force minimization approach using passive magnetic material to balance a dual-regenerator AMR. A magnetostatic model is developed and simulated force waveforms are compared with experimental measurements. A genetic algorithm identifies force-minimizing passive structures with virtually ideal balancing characteristics. Finally, implementation details are investigated which affirm the potential of the proposed methodology.

  16. Passive force balancing of an active magnetic regenerative liquefier

    NASA Astrophysics Data System (ADS)

    Teyber, R.; Meinhardt, K.; Thomsen, E.; Polikarpov, E.; Cui, J.; Rowe, A.; Holladay, J.; Barclay, J.

    2018-04-01

    Active magnetic regenerators (AMR) have the potential for high efficiency cryogen liquefaction. One active magnetic regenerative liquefier (AMRL) configuration consists of dual magnetocaloric regenerators that reciprocate in a persistent-mode superconducting solenoid. Issues with this configuration are the spatial and temporal magnetization gradients that induce large magnetic forces and winding currents. To solve the coupled problem, we present a force minimization approach using passive magnetic material to balance a dual-regenerator AMR. A magnetostatic model is developed and simulated force waveforms are compared with experimental measurements. A genetic algorithm identifies force-minimizing passive structures with virtually ideal balancing characteristics. Implementation details are investigated which affirm the potential of the proposed methodology.

  17. One-dimensional Gromov minimal filling problem

    NASA Astrophysics Data System (ADS)

    Ivanov, Alexandr O.; Tuzhilin, Alexey A.

    2012-05-01

    The paper is devoted to a new branch in the theory of one-dimensional variational problems with branching extremals, the investigation of one-dimensional minimal fillings introduced by the authors. On the one hand, this problem is a one-dimensional version of a generalization of Gromov's minimal fillings problem to the case of stratified manifolds. On the other hand, this problem is interesting in itself and also can be considered as a generalization of another classical problem, the Steiner problem on the construction of a shortest network connecting a given set of terminals. Besides the statement of the problem, we discuss several properties of the minimal fillings and state several conjectures. Bibliography: 38 titles.
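
    For the Steiner problem mentioned above, the smallest non-trivial case is three terminals: when all angles of the triangle are below 120°, the shortest network is a star through the Fermat point, i.e. the geometric median of the terminals. A hedged sketch using Weiszfeld's iteration with made-up coordinates:

    ```python
    # Weiszfeld iteration for the geometric median (Fermat point) of
    # three terminals; the coordinates are illustrative.
    import numpy as np

    terminals = np.array([[0.0, 0.0], [1.0, 0.0], [0.2, 0.9]])
    x = terminals.mean(axis=0)               # start at the centroid

    for _ in range(100):
        d = np.linalg.norm(terminals - x, axis=1)
        if np.any(d < 1e-12):                # iterate landed on a terminal
            break
        w = 1.0 / d
        x = (terminals * w[:, None]).sum(axis=0) / w.sum()

    print("branch point:", np.round(x, 6))
    print("total length:", round(np.linalg.norm(terminals - x, axis=1).sum(), 6))
    ```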

  18. Terahertz NDE for Under Paint Corrosion Detection and Evaluation

    NASA Technical Reports Server (NTRS)

    Anastasi, Robert F.; Madaras, Eric I.

    2005-01-01

    Corrosion under paint is not visible until it has caused the paint to blister, crack, or chip. If corrosion is allowed to continue, then structural problems may develop. Identifying corrosion before it becomes visible would minimize repairs, costs, and potential structural problems. Terahertz NDE imaging is being examined as a method to inspect for corrosion under paint by measuring the terahertz response to paint thickness and surface roughness.

  19. Planning nonlinear access paths for temporal bone surgery.

    PubMed

    Fauser, Johannes; Sakas, Georgios; Mukhopadhyay, Anirban

    2018-05-01

    Interventions at the otobasis operate in the narrow region of the temporal bone where several highly sensitive organs define obstacles with minimal clearance for surgical instruments. Nonlinear trajectories for potential minimally invasive interventions can provide larger distances to risk structures and optimized orientations of surgical instruments, thus improving clinical outcomes when compared to existing linear approaches. In this paper, we present fast and accurate planning methods for such nonlinear access paths. We define a specific motion planning problem in [Formula: see text] with notable constraints in computation time and goal pose that reflect the requirements of temporal bone surgery. We then present [Formula: see text]-RRT-Connect: two suitable motion planners based on bidirectional Rapidly exploring Random Tree (RRT) to solve this problem efficiently. The benefits of [Formula: see text]-RRT-Connect are demonstrated on real CT data of patients. Their general performance is shown on a large set of realistic synthetic anatomies. We also show that these new algorithms outperform state-of-the-art methods based on circular arcs or Bézier-Splines when applied to this specific problem. With this work, we demonstrate that preoperative and intra-operative planning of nonlinear access paths is possible for minimally invasive surgeries at the otobasis.
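
    The paper's planners are specialized bidirectional RRT variants with goal-pose and clearance constraints in a higher-dimensional configuration space; as a rough orientation only, the sketch below implements a generic single-tree RRT in a 2-D square with one circular obstacle. Everything in it (step size, goal bias, obstacle) is an assumption for the example.

    ```python
    # Generic RRT sketch in 2-D: sample, extend toward the sample from
    # the nearest tree node, keep collision-free extensions.
    import numpy as np

    rng = np.random.default_rng(2)
    start, goal = np.array([0.1, 0.1]), np.array([0.9, 0.9])
    obs_c, obs_r, step = np.array([0.5, 0.5]), 0.2, 0.05

    def collision_free(p):
        return np.linalg.norm(p - obs_c) > obs_r

    nodes, parent = [start], {0: None}
    for _ in range(5000):
        sample = goal if rng.random() < 0.1 else rng.random(2)   # goal bias
        i = int(np.argmin([np.linalg.norm(sample - n) for n in nodes]))
        d = sample - nodes[i]
        new = nodes[i] + step * d / (np.linalg.norm(d) + 1e-12)
        if not collision_free(new):
            continue
        parent[len(nodes)] = i
        nodes.append(new)
        if np.linalg.norm(new - goal) < step:    # close enough to the goal
            path, j = [], len(nodes) - 1
            while j is not None:
                path.append(nodes[j])
                j = parent[j]
            print("path found with", len(path), "waypoints")
            break
    ```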

  20. A Framework for Multi-Stakeholder Decision-Making and ...

    EPA Pesticide Factsheets

    This contribution describes the implementation of the conditional-value-at-risk (CVaR) metric to create a general multi-stakeholder decision-making framework. It is observed that stakeholder dissatisfactions (distance to their individual ideal solutions) can be interpreted as random variables. We thus shape the dissatisfaction distribution and find an optimal compromise solution by solving a CVaR minimization problem parameterized in the probability level. This enables us to generalize multi-stakeholder settings previously proposed in the literature that minimize average and worst-case dissatisfactions. We use the concept of the CVaR norm to give a geometric interpretation to this problem and use the properties of this norm to prove that the CVaR minimization problem yields Pareto optimal solutions for any choice of the probability level. We discuss a broad range of potential applications of the framework. We demonstrate the framework in a bio-waste processing facility location case study, where we seek compromise solutions (facility locations) that balance stakeholder priorities on transportation, safety, water quality, and capital costs. This conference presentation abstract explains a new decision-making framework that computes compromise solution alternatives (reaching consensus) by mitigating dissatisfactions among stakeholders, as needed for the SHC Decision Science and Support Tools project.

  1. The Relative Merits of PBL (Problem-Based Learning) in University Education

    ERIC Educational Resources Information Center

    Benson, Steve

    2012-01-01

    In Australia, academic workloads are increasing, and university funding is decreasing. Academics and university managers are engaging in risk-averse behavior and tending to focus on customer satisfaction and student retention, potentially at the expense of academic standards. Conventional approaches to pedagogy minimize adverse student feedback,…

  2. Managing hazardous waste in the clinical laboratory.

    PubMed

    Hoeltge, G A

    1989-09-01

    Clinical laboratories generate wastes that present chemical and biologic hazards. Ignitable, corrosive, reactive, toxic, and infectious potentials must be contained and minimized. A summary of these problems and an overview of the applicable regulations are presented. A checklist of activities to facilitate the annual review of the hazardous waste program is provided.

  3. Advisory on Relocatable and Renovated Classrooms. IAQ Info Sheet.

    ERIC Educational Resources Information Center

    California State Dept. of Health Services, Berkeley.

    Many California school districts, in complying with the Class Size Reduction Program, will obtain relocatable classrooms directly from manufacturers who are under no specific guidelines or codes relative to indoor air quality (IAQ). This document, designed to aid school facility managers in minimizing potential IAQ problems, summarizes the indoor…

  4. Green School Checklist: Environmental Actions for Schools To Consider.

    ERIC Educational Resources Information Center

    Illinois Environmental Protection Agency, Springfield.

    This checklist offers tips and resources to help schools identify opportunities to "green" their buildings and operations, focusing on common-sense improvements that schools can make in their daily operations to minimize or stop potential health and environmental problems before they start. The first section discusses the benefits of a…

  5. New periodic solutions for some planar N + 3-body problems with Newtonian potentials

    NASA Astrophysics Data System (ADS)

    Yuan, Pengfei; Zhang, Shiqing

    2018-03-01

    For some planar Newtonian N + 3-body problems, we use variational minimization methods to prove the existence of new periodic solutions in which N bodies chase each other on one curve while the other 3 bodies chase each other on another curve. From the definition of the orbit spaces in our paper, these solutions are new and differ from all the examples of Ferrario and Terracini (2004).

  6. Avoiding potential problems when selling accounts receivable.

    PubMed

    Ayers, D H; Kincaid, T J

    1996-05-01

    Accounts receivable financing is a potential tool for managing a provider organization's working capital needs. But before entering into a financing agreement, organizations need to consider and take steps to avoid serious problems that can arise from participation in an accounts receivable financing program. For example, the purchaser may cease purchasing the receivables, leaving the organization without funding needed for operations. Or, the financing program may be inordinately complex and unnecessarily costly to the organization. Sometimes the organization itself may fail to comply with the terms of the agreement under which the accounts receivable were sold, thus necessitating that restitution be made to the purchaser or provoking charges of fraud. These potential problems should be addressed as early as possible--before an organization enters into an accounts receivable financing program--in order to minimize time, effort, and expense and maximize the benefits of the financing agreement.

  7. Running non-minimal inflation with stabilized inflaton potential

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Okada, Nobuchika; Raut, Digesh

    In the context of the Higgs model involving gauge and Yukawa interactions with spontaneous gauge symmetry breaking, we consider λφ⁴ inflation with non-minimal gravitational coupling, where the Higgs field is identified as the inflaton. Since the inflaton quartic coupling is very small, once quantum corrections through the gauge and Yukawa interactions are taken into account, the inflaton effective potential most likely becomes unstable. In order to avoid this problem, we need to impose stability conditions on the effective inflaton potential, which lead to not only non-trivial relations amongst the particle mass spectrum of the model, but also correlations between the inflationary predictions and the mass spectrum. For concreteness, we investigate the minimal B - L extension of the standard model with identification of the B - L Higgs field as the inflaton. The stability conditions for the inflaton effective potential fix the mass ratio amongst the B - L gauge boson, the right-handed neutrinos and the inflaton. This mass ratio also correlates with the inflationary predictions. So, if the B - L gauge boson and the right-handed neutrinos are discovered in the future, their observed mass ratio provides constraints on the inflationary predictions.

  8. Running non-minimal inflation with stabilized inflaton potential

    DOE PAGES

    Okada, Nobuchika; Raut, Digesh

    2017-04-18

    In the context of the Higgs model involving gauge and Yukawa interactions with spontaneous gauge symmetry breaking, we consider λφ⁴ inflation with non-minimal gravitational coupling, where the Higgs field is identified as the inflaton. Since the inflaton quartic coupling is very small, once quantum corrections through the gauge and Yukawa interactions are taken into account, the inflaton effective potential most likely becomes unstable. In order to avoid this problem, we need to impose stability conditions on the effective inflaton potential, which lead to not only non-trivial relations amongst the particle mass spectrum of the model, but also correlations between the inflationary predictions and the mass spectrum. For concreteness, we investigate the minimal B - L extension of the standard model with identification of the B - L Higgs field as the inflaton. The stability conditions for the inflaton effective potential fix the mass ratio amongst the B - L gauge boson, the right-handed neutrinos and the inflaton. This mass ratio also correlates with the inflationary predictions. So, if the B - L gauge boson and the right-handed neutrinos are discovered in the future, their observed mass ratio provides constraints on the inflationary predictions.

  9. Minimal investment risk of a portfolio optimization problem with budget and investment concentration constraints

    NASA Astrophysics Data System (ADS)

    Shinzato, Takashi

    2017-02-01

    In the present paper, the minimal investment risk for a portfolio optimization problem with imposed budget and investment concentration constraints is considered using replica analysis. Since the minimal investment risk is influenced by the investment concentration constraint (as well as the budget constraint), it is intuitive that the minimal investment risk for the problem with an investment concentration constraint can be larger than that without the constraint (that is, with only the budget constraint). Moreover, a numerical experiment shows the effectiveness of our proposed analysis. In contrast, the standard operations research approach failed to identify accurately the minimal investment risk of the portfolio optimization problem.
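
    For contrast with the replica-analysis result, the textbook minimum-variance weights under only the budget constraint have a closed form; the sketch below uses a synthetic covariance matrix and deliberately omits the investment-concentration constraint studied in the paper.

    ```python
    # Classic global minimum-variance portfolio: w = S^{-1} 1 / (1' S^{-1} 1).
    import numpy as np

    rng = np.random.default_rng(3)
    X = rng.standard_normal((200, 5))        # synthetic return history
    S = np.cov(X, rowvar=False)              # 5-asset sample covariance

    ones = np.ones(5)
    w = np.linalg.solve(S, ones)
    w /= ones @ w                            # enforce the budget sum(w) = 1
    print("weights:", np.round(w, 3))
    print("minimal variance:", round(w @ S @ w, 5))
    ```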

  10. Shape space figure-8 solution of three body problem with two equal masses

    NASA Astrophysics Data System (ADS)

    Yu, Guowei

    2017-06-01

    In a preprint by Montgomery (https://people.ucsc.edu/~rmont/Nbdy.html), the author attempted to prove the existence of a shape space figure-8 solution of the Newtonian three body problem with two equal masses (it looks like a figure 8 in the shape space, which is different from the famous figure-8 solution with three equal masses (Chenciner and Montgomery 2000 Ann. Math. 152 881-901)). Unfortunately there is an error in the proof and the problem is still open. Considering the α-homogeneous Newton-type potential 1/r^α and using the action minimization method, we prove the existence of this solution for α ∈ (1, 2); for α = 1 (the Newtonian potential), an extra condition is required, which unfortunately seems hard to verify at this moment.

  11. Analysis of the Hessian for Aerodynamic Optimization: Inviscid Flow

    NASA Technical Reports Server (NTRS)

    Arian, Eyal; Ta'asan, Shlomo

    1996-01-01

    In this paper we analyze inviscid aerodynamic shape optimization problems governed by the full potential and the Euler equations in two and three dimensions. The analysis indicates that minimization of pressure dependent cost functions results in Hessians whose eigenvalue distributions are identical for the full potential and the Euler equations. However the optimization problems in two and three dimensions are inherently different. While the two dimensional optimization problems are well-posed the three dimensional ones are ill-posed. Oscillations in the shape up to the smallest scale allowed by the design space can develop in the direction perpendicular to the flow, implying that a regularization is required. A natural choice of such a regularization is derived. The analysis also gives an estimate of the Hessian's condition number which implies that the problems at hand are ill-conditioned. Infinite dimensional approximations for the Hessians are constructed and preconditioners for gradient based methods are derived from these approximate Hessians.

  12. Estimation of the zeta potential and the dielectric constant using velocity measurements in the electroosmotic flows.

    PubMed

    Park, H M; Hong, S M

    2006-12-15

    In this paper we develop a method for the determination of the zeta potential ζ and the dielectric constant ε by exploiting velocity measurements of the electroosmotic flow in microchannels. The inverse problem is solved through the minimization of a performance function utilizing the conjugate gradient method. The present method is found to estimate ζ and ε with reasonable accuracy even with noisy velocity measurements.
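
    A toy version of this inverse problem, under the assumption that the slip velocity follows the Helmholtz-Smoluchowski relation u = −εζE/μ with ε and μ treated as known (the paper estimates ε as well, which requires richer velocity-field data): recover ζ by minimizing a least-squares performance function with a conjugate gradient method. All numerical values are illustrative.

    ```python
    # Estimate zeta from noisy electroosmotic velocity measurements by
    # conjugate-gradient minimization of a least-squares misfit.
    import numpy as np
    from scipy.optimize import minimize

    eps, mu = 7.08e-10, 1.0e-3           # permittivity [F/m], viscosity [Pa s]
    E = np.linspace(1e3, 1e4, 20)        # applied electric fields [V/m]
    zeta_true = -0.05                    # "unknown" zeta potential, -50 mV
    rng = np.random.default_rng(4)
    u_meas = -eps * zeta_true * E / mu * (1 + 0.02 * rng.standard_normal(E.size))

    def performance(z):                  # least-squares performance function
        return np.sum((-eps * z[0] * E / mu - u_meas) ** 2)

    res = minimize(performance, x0=[-0.01], method='CG')
    print(f"estimated zeta: {res.x[0] * 1e3:.1f} mV")
    ```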

  13. A multiple-drawer medication layout problem in automated dispensing cabinets.

    PubMed

    Pazour, Jennifer A; Meller, Russell D

    2012-12-01

    In this paper we investigate the problem of locating medications in automated dispensing cabinets (ADCs) to minimize human selection errors. We formulate the multiple-drawer medication layout problem and show that the problem can be formulated as a quadratic assignment problem. As a way to evaluate various medication layouts, we develop a similarity rating for medication pairs. To solve industry-sized problem instances, we develop a heuristic approach. We use hospital ADC transaction data to conduct a computational experiment to test the performance of our developed heuristics, to demonstrate how our approach can aid in ADC design trade-offs, and to illustrate the potential improvements that can be made when applying an analytical process to the multiple-drawer medication layout problem. Finally, we present conclusions and future research directions.
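
    A hedged sketch of the QAP view: with a synthetic pairwise similarity matrix and slot coordinates, a pairwise-swap local search looks for a layout in which highly similar (and hence error-prone) medication pairs sit far apart. The objective and data below are stand-ins, not the paper's similarity rating or heuristics.

    ```python
    # QAP-style layout by pairwise-swap descent: minimize
    # sum_{i,j} similarity(i, j) * proximity(slot(i), slot(j)).
    import numpy as np

    rng = np.random.default_rng(5)
    n = 10
    S = rng.random((n, n)); S = (S + S.T) / 2; np.fill_diagonal(S, 0)
    slots = rng.random((n, 2))
    prox = 1.0 / (1e-6 + np.linalg.norm(slots[:, None] - slots[None, :], axis=2))
    np.fill_diagonal(prox, 0)

    perm = np.arange(n)                  # perm[i] = slot assigned to med i
    def cost(p):
        return (S * prox[np.ix_(p, p)]).sum()

    improved = True
    while improved:
        improved = False
        for i in range(n):
            for j in range(i + 1, n):
                q = perm.copy(); q[[i, j]] = q[[j, i]]
                if cost(q) < cost(perm):
                    perm, improved = q, True
    print("layout:", perm, "cost:", round(cost(perm), 4))
    ```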

  14. Alternative fuels

    NASA Technical Reports Server (NTRS)

    Grobman, J. S.; Butze, H. F.; Friedman, R.; Antoine, A. C.; Reynolds, T. W.

    1977-01-01

    Potential problems related to the use of alternative aviation turbine fuels are discussed and both ongoing and required research into these fuels is described. This discussion is limited to aviation turbine fuels composed of liquid hydrocarbons. The advantages and disadvantages of the various solutions to the problems are summarized. The first solution is to continue to develop the necessary technology at the refinery to produce specification jet fuels regardless of the crude source. The second solution is to minimize energy consumption at the refinery and keep fuel costs down by relaxing specifications.

  15. Detecting number processing and mental calculation in patients with disorders of consciousness using a hybrid brain-computer interface system.

    PubMed

    Li, Yuanqing; Pan, Jiahui; He, Yanbin; Wang, Fei; Laureys, Steven; Xie, Qiuyou; Yu, Ronghao

    2015-12-15

    For patients with disorders of consciousness such as coma, a vegetative state or a minimally conscious state, one challenge is to detect and assess the residual cognitive functions in their brains. Number processing and mental calculation are important brain functions but are difficult to detect in patients with disorders of consciousness using motor response-based clinical assessment scales such as the Coma Recovery Scale-Revised due to the patients' motor impairments and inability to provide sufficient motor responses for number- and calculation-based communication. In this study, we presented a hybrid brain-computer interface that combines P300 and steady state visual evoked potentials to detect number processing and mental calculation in Han Chinese patients with disorders of consciousness. Eleven patients with disorders of consciousness who were in a vegetative state (n = 6) or in a minimally conscious state (n = 3) or who emerged from a minimally conscious state (n = 2) participated in the brain-computer interface-based experiment. During the experiment, the patients with disorders of consciousness were instructed to perform three tasks, i.e., number recognition, number comparison, and mental calculation, including addition and subtraction. In each experimental trial, an arithmetic problem was first presented. Next, two number buttons, only one of which was the correct answer to the problem, flickered at different frequencies to evoke steady state visual evoked potentials, while the frames of the two buttons flashed in a random order to evoke P300 potentials. The patients needed to focus on the target number button (the correct answer). Finally, the brain-computer interface system detected P300 and steady state visual evoked potentials to determine the button to which the patients attended, further presenting the results as feedback. Two of the six patients who were in a vegetative state, one of the three patients who were in a minimally conscious state, and the two patients that emerged from a minimally conscious state achieved accuracies significantly greater than the chance level. Furthermore, P300 potentials and steady state visual evoked potentials were observed in the electroencephalography signals from the five patients. Number processing and arithmetic abilities as well as command following were demonstrated in the five patients. Furthermore, our results suggested that through brain-computer interface systems, many cognitive experiments may be conducted in patients with disorders of consciousness, although they cannot provide sufficient behavioral responses.
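
    The SSVEP half of such a hybrid interface can be illustrated in a few lines: decide which of two flicker frequencies dominates a recorded trace by comparing spectral power at the candidate frequencies. The synthetic signal and the simple argmax rule below are illustrative assumptions; a real system would also fuse the P300 evidence and apply statistical thresholds.

    ```python
    # Toy SSVEP detection: pick the candidate flicker frequency with the
    # larger power in the FFT of a (synthetic) EEG trace.
    import numpy as np

    fs, T = 250.0, 4.0                       # sampling rate [Hz], duration [s]
    t = np.arange(0, T, 1 / fs)
    f_targets = (7.5, 10.0)                  # the two button frequencies
    rng = np.random.default_rng(6)
    eeg = np.sin(2 * np.pi * 10.0 * t) + 2.0 * rng.standard_normal(t.size)

    spectrum = np.abs(np.fft.rfft(eeg)) ** 2
    freqs = np.fft.rfftfreq(t.size, 1 / fs)
    power = [spectrum[np.argmin(np.abs(freqs - f))] for f in f_targets]
    print("attended frequency:", f_targets[int(np.argmax(power))], "Hz")
    ```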

  16. Using Propensity Scores in Quasi-Experimental Designs to Equate Groups

    ERIC Educational Resources Information Center

    Lane, Forrest C.; Henson, Robin K.

    2010-01-01

    Education research rarely lends itself to large scale experimental research and true randomization, leaving the researcher to quasi-experimental designs. The problem with quasi-experimental research is that underlying factors may impact group selection and lead to potentially biased results. One way to minimize the impact of non-randomization is…

  17. Improved Properties of Medium-Density Particleboard Manufactured from Saline Creeping Wild Rye and HDPE Plastic

    USDA-ARS's Scientific Manuscript database

    Creeping Wild Rye (CWR), Leymus triticoides, is a salt-tolerant perennial grass used for mitigating the problems of salinization and alkalization in drainage irrigation water and soil to minimize potential pollution of water streams. In this study, CWR was used as a raw material to manufacture med...

  18. Carbon fiber study

    NASA Technical Reports Server (NTRS)

    1977-01-01

    A coordinated Federal Government action plan for dealing with the potential problems arising from the increasing use of graphite fiber reinforced composite materials in both military and civilian applications is presented. The required dissemination of declassified information and an outline of government actions to minimize the social and economic consequences of proliferated composite materials applications were included.

  19. Uterine Fibroid Embolisation – Potential Impact on Fertility and Pregnancy Outcome

    PubMed Central

    David, M.; Kröncke, T.

    2013-01-01

    The current standard therapy to treat myomas in women wishing to have children consists of minimally invasive surgical myomectomy. Uterine artery embolisation (UAE) has also been discussed as another minimally invasive treatment option to treat myomas. This review evaluates the literature of the past 10 years on fibroid embolisation and its impact on fertility and pregnancy. Potential problems associated with UAE such as radiation exposure of the ovaries, impairment of ovarian function and the impact on pregnancy and child birth are discussed in detail. Previously published reports of at least 337 pregnancies after UAE were evaluated. The review concludes that UAE to treat myomas can only be recommended in women with fertility problems due to myomas who refuse surgery or women with an unacceptably high surgical risk, because the evaluated case reports and studies show that UAE significantly increases the risk of spontaneous abortion; there is also evidence of pathologically increased levels for other obstetric outcome parameters. There are still very few prospective studies which provide sufficient evidence for a definitive statement on the impact of UAE therapy on fertility rates and pregnancy outcomes. PMID:26633901

  20. Early Universe Higgs dynamics in the presence of the Higgs-inflaton and non-minimal Higgs-gravity couplings

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ema, Yohei; Karčiauskas, Mindaugas; Lebedev, Oleg

    Apparent metastability of the electroweak vacuum poses a number of cosmological questions. These concern the evolution of the Higgs field to the current vacuum, and its stability during and after inflation. Higgs-inflaton and non-minimal Higgs-gravity interactions can have a crucial impact on these considerations, potentially solving the problems. In this work, we allow both couplings to be present simultaneously and study their interplay. We find that different combinations of the Higgs-inflaton and non-minimal Higgs-gravity couplings induce an effective Higgs mass during and after inflation. This crucially affects the Higgs stability considerations during preheating. In particular, a wide range of couplings leading to stable solutions becomes allowed.

  1. Gradient gravitational search: An efficient metaheuristic algorithm for global optimization.

    PubMed

    Dash, Tirtharaj; Sahu, Prabhat K

    2015-05-30

    The adaptation of novel techniques developed in the field of computational chemistry to problems involving large and flexible molecules is taking center stage with regard to algorithmic efficiency, computational cost, and accuracy. In this article, the gradient-based gravitational search (GGS) algorithm, which uses analytical gradients for fast minimization to the next local minimum, is reported. Its efficiency as a metaheuristic approach is compared with the Gradient Tabu Search and with the Gravitational Search, Cuckoo Search, and Back Tracking Search algorithms for global optimization. Moreover, the GGS approach is applied to computational chemistry problems, namely finding the minimal potential energy of two-dimensional and three-dimensional off-lattice protein models. The simulation results reveal the relative stability and physical accuracy of the protein models with efficient computational cost. © 2015 Wiley Periodicals, Inc.
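
    To make the record's method concrete, the sketch below pairs a bare-bones gravitational search population update with a gradient-based local refinement of the best agent, in the spirit of GGS. It is illustrative only: the test function (Rastrigin), population size, G-decay schedule, and the use of scipy's BFGS are assumptions, not the authors' implementation.

```python
import numpy as np
from scipy.optimize import minimize

def rastrigin(x):
    """Standard multimodal test function; global minimum 0 at the origin."""
    return 10 * len(x) + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))

def gravitational_step(pop, fitness, G, rng):
    """Pull each agent toward fitter (more 'massive') agents."""
    worst, best = fitness.max(), fitness.min()
    mass = (worst - fitness + 1e-12) / (worst - best + 1e-12)
    mass /= mass.sum()
    new_pop = np.empty_like(pop)
    for i in range(len(pop)):
        accel = np.zeros(pop.shape[1])
        for j in range(len(pop)):
            if i != j:
                diff = pop[j] - pop[i]
                dist = np.linalg.norm(diff) + 1e-12
                accel += rng.random() * G * mass[j] * diff / dist
        new_pop[i] = pop[i] + rng.random(pop.shape[1]) * accel
    return new_pop

rng = np.random.default_rng(0)
pop = rng.uniform(-5.12, 5.12, size=(20, 2))
for t in range(100):
    fit = np.apply_along_axis(rastrigin, 1, pop)
    pop = gravitational_step(pop, fit, G=100.0 * np.exp(-0.02 * t), rng=rng)

# The "gradient" ingredient: refine the best agent with a local minimizer
# (BFGS with numerical gradients stands in for the analytical gradients).
best = pop[np.argmin(np.apply_along_axis(rastrigin, 1, pop))]
res = minimize(rastrigin, best, method="BFGS")
print(res.x, res.fun)
```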

  2. Topology and layout optimization of discrete and continuum structures

    NASA Technical Reports Server (NTRS)

    Bendsoe, Martin P.; Kikuchi, Noboru

    1993-01-01

    The basic features of the ground structure method for truss structure an continuum problems are described. Problems with a large number of potential structural elements are considered using the compliance of the structure as the objective function. The design problem is the minimization of compliance for a given structural weight, and the design variables for truss problems are the cross-sectional areas of the individual truss members, while for continuum problems they are the variable densities of material in each of the elements of the FEM discretization. It is shown how homogenization theory can be applied to provide a relation between material density and the effective material properties of a periodic medium with a known microstructure of material and voids.

  3. Aircraft engine sump-fire studies

    NASA Technical Reports Server (NTRS)

    Loomis, W. R.

    1976-01-01

    Results of ongoing experimental studies are reported in which a 125-millimeter-diameter advanced bearing test rig simulating an engine sump is being used to find the critical range of conditions for fires to occur. Design, material, and operating concepts and techniques are being studied with the objective of minimizing the problem. It has been found that the vapor temperature near a spark ignitor is most important in determining ignition potential. At temperatures producing oil vapor pressures below or much above the calculated flammability limits, fires have not been ignited. But fires have been routinely started within the theoretical flammability range. This indicates that the sump-fire problem may be generalized, making it amenable to analysis, with the potential for realistic solutions.

  4. On Born's Conjecture about Optimal Distribution of Charges for an Infinite Ionic Crystal

    NASA Astrophysics Data System (ADS)

    Bétermin, Laurent; Knüpfer, Hans

    2018-04-01

    We study the problem of the optimal charge distribution on the sites of a fixed Bravais lattice. In particular, we prove Born's conjecture about the optimality of the rock-salt alternating distribution of charges on a cubic lattice (and more generally on a d-dimensional orthorhombic lattice). Furthermore, we study this problem on the two-dimensional triangular lattice and we prove the optimality of a two-component honeycomb distribution of charges. The results hold for a class of completely monotone interaction potentials which includes Coulomb-type interactions for d ≥ 3. In a more general setting, we derive a connection between the optimal charge problem and a minimization problem for the translated lattice theta function.

  5. Observational constraints on tachyonic chameleon dark energy model

    NASA Astrophysics Data System (ADS)

    Banijamali, A.; Bellucci, S.; Fazlpour, B.; Solbi, M.

    2018-03-01

    It has been recently shown that the tachyonic chameleon model of dark energy, in which a tachyon scalar field is non-minimally coupled to matter, admits a stable scaling attractor solution that could give rise to the late-time accelerated expansion of the universe and hence alleviate the coincidence problem. In the present work, we use data from Type Ia supernovae (SN Ia) and baryon acoustic oscillations to place constraints on the model parameters. In our analysis we consider exponential and non-exponential forms for the non-minimal coupling function and the tachyonic potential, and show that the scenario is compatible with observations.

  6. The Effectiveness of Bicitra as a Preoperative Antacid

    DTIC Science & Technology

    1989-07-01

    of a safe and effective anesthetic is a common goal in anesthesia practice. Patient safety has always been and will continue to be in the forefront of...therefore is to administer an anesthetic with the understanding of its potential harm to the patient. Although there are many potential problems that...may develop in the care of the surgical patient, the focus of this paper and the research that was done was to determine what can be done to minimize

  7. The minimal axion minimal linear σ model

    NASA Astrophysics Data System (ADS)

    Merlo, L.; Pobbe, F.; Rigolin, S.

    2018-05-01

    The minimal SO(5)/SO(4) linear σ model is extended by including an additional complex scalar field, singlet under the global SO(5) and the Standard Model gauge symmetries. The presence of this scalar field creates the conditions to generate an axion à la KSVZ, providing a solution to the strong CP problem, or an axion-like particle. Different choices for the PQ charges are possible and lead to physically distinct Lagrangians. The internal consistency of each model necessarily requires the study of the scalar potential describing the SO(5) → SO(4), electroweak, and PQ symmetry breaking. A single minimal scenario is identified and the associated scalar potential is minimised including the counterterms needed to ensure one-loop renormalizability. In the allowed parameter space, phenomenological features of the scalar degrees of freedom, of the exotic fermions, and of the axion are illustrated. Two distinct possibilities for the axion arise: either it is a QCD axion with an associated scale larger than ∼10^5 TeV, therefore falling in the category of invisible axions; or it is a more massive axion-like particle, such as a 1 GeV axion with an associated scale of ∼200 TeV, that may show up in collider searches.

  8. Hybrid Microgrid Configuration Optimization with Evolutionary Algorithms

    NASA Astrophysics Data System (ADS)

    Lopez, Nicolas

    This dissertation explores the Renewable Energy Integration Problem and proposes a Genetic Algorithm embedded with a Monte Carlo simulation to solve large instances of the problem that are impractical to solve via full enumeration. The Renewable Energy Integration Problem is defined as finding the optimum set of components to supply the electric demand to a hybrid microgrid. The components considered are solar panels, wind turbines, diesel generators, electric batteries, connections to the power grid, and converters, which can be inverters and/or rectifiers. The methodology developed is explained, as well as the combinatorial formulation. In addition, two case studies of a single-objective optimization version of the problem are presented, one minimizing cost and the other minimizing global warming potential (GWP), followed by a multi-objective implementation of the proposed methodology utilizing a non-dominated sorting Genetic Algorithm embedded with a Monte Carlo simulation. The method is validated by solving a small instance of the problem with a known solution via a full enumeration algorithm developed by NREL in their software HOMER. The dissertation concludes that evolutionary algorithms embedded with Monte Carlo simulation, namely modified Genetic Algorithms, are an efficient way of solving the problem, finding approximate solutions in the case of single-objective optimization and approximating the true Pareto front in the case of multi-objective optimization of the Renewable Energy Integration Problem.
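
    A minimal sketch of the core idea, a genetic algorithm whose fitness is estimated by a Monte Carlo simulation over uncertain renewable output and demand, is given below. All component models, costs, and GA parameters are invented placeholders, not the dissertation's formulation.

```python
import numpy as np

rng = np.random.default_rng(8)
COST = np.array([300.0, 900.0, 500.0])       # per unit: panel, turbine, battery

def fitness(cfg, n_mc=50):
    """Capital cost plus Monte Carlo estimate of unmet-demand penalty."""
    panels, turbines, batteries = cfg
    penalty = 0.0
    for _ in range(n_mc):                    # sample uncertain inputs
        solar = rng.uniform(0.5, 1.5) * panels
        wind = rng.uniform(0.0, 2.0) * turbines
        demand = rng.normal(40.0, 5.0)
        shortfall = max(demand - solar - wind - 2.0 * batteries, 0.0)
        penalty += 50.0 * shortfall
    return COST @ cfg + penalty / n_mc

def ga(pop_size=30, n_gen=60):
    pop = rng.integers(0, 30, size=(pop_size, 3))
    for _ in range(n_gen):
        fit = np.array([fitness(c) for c in pop])
        parents = pop[np.argsort(fit)[: pop_size // 2]]        # selection
        kids = parents[rng.permutation(len(parents))].copy()
        cut = rng.integers(1, 3)
        kids[:, cut:] = parents[:, cut:]                       # crossover
        mut = rng.random(kids.shape) < 0.1
        kids[mut] = rng.integers(0, 30, mut.sum())             # mutation
        pop = np.vstack([parents, kids])
    fit = np.array([fitness(c) for c in pop])
    return pop[np.argmin(fit)], fit.min()

print(ga())
```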

  9. Inverse problems with nonnegative and sparse solutions: algorithms and application to the phase retrieval problem

    NASA Astrophysics Data System (ADS)

    Quy Muoi, Pham; Nho Hào, Dinh; Sahoo, Sujit Kumar; Tang, Dongliang; Cong, Nguyen Huu; Dang, Cuong

    2018-05-01

    In this paper, we study a gradient-type method and a semismooth Newton method for minimization problems arising in the regularization of inverse problems with nonnegative and sparse solutions. We propose a special penalty functional forcing the minimizers of the regularized minimization problems to be nonnegative and sparse, and then we apply the proposed algorithms to a practical problem. The strong convergence of the gradient-type method and the local superlinear convergence of the semismooth Newton method are proven. We then use these algorithms for the phase retrieval problem and illustrate their efficiency in numerical examples, particularly in the practical problem of optical imaging through scattering media, where all the noise from the experiment is present.
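
    The nonnegativity-plus-sparsity structure can be illustrated with a projected proximal-gradient (ISTA-style) iteration: for x ≥ 0 the l1 penalty reduces to a linear term, so each step is a gradient step followed by a shifted projection onto the nonnegative orthant. This is a generic sketch under those assumptions, not the paper's algorithm.

```python
import numpy as np

def nonneg_sparse_ista(A, b, alpha=0.1, n_iter=500):
    """Minimize 0.5*||Ax - b||^2 + alpha*sum(x) subject to x >= 0."""
    L = np.linalg.norm(A, 2) ** 2        # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)
        # prox of alpha*1'x plus the indicator of {x >= 0}
        x = np.maximum(x - (grad + alpha) / L, 0.0)
    return x

rng = np.random.default_rng(1)
A = rng.standard_normal((60, 120))
x_true = np.zeros(120)
x_true[rng.choice(120, 5, replace=False)] = rng.uniform(1, 3, 5)
b = A @ x_true + 0.01 * rng.standard_normal(60)
x_hat = nonneg_sparse_ista(A, b)
print(np.flatnonzero(x_hat > 1e-3))      # recovered support
```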

  10. Disciplinary Style and Child Abuse Potential: Association with Indicators of Positive Functioning in Children with Behavior Problems

    ERIC Educational Resources Information Center

    Rodriguez, Christina M.; Eden, Ann M.

    2008-01-01

    Reduction of ineffective parenting is promoted in parent training components of mental health treatment for children with externalizing behavior disorders, but minimal research has considered whether disciplinary style and lower abuse risk could also be associated with positive functioning in such children. The present study examined whether lower…

  11. Structuring group medical practices: tax planning aspects.

    PubMed

    Gassman, A S; Conetta, T F

    1992-01-01

    This article is the first in a series addressing the structuring of group medical practice entities, shareholder relationships, and general representation factors. In this article, a general background in federal tax planning is provided, including strategies for minimization of income tax payment and the potential problems that may be encountered when a group practice is not carefully structured.

  12. Action-minimizing solutions of the one-dimensional N-body problem

    NASA Astrophysics Data System (ADS)

    Yu, Xiang; Zhang, Shiqing

    2018-05-01

    We supplement the following result of C. Marchal on the Newtonian N-body problem: A path minimizing the Lagrangian action functional between two given configurations is always a true (collision-free) solution when the dimension d of the physical space R^d satisfies d≥2. The focus of this paper is on the fixed-ends problem for the one-dimensional Newtonian N-body problem. We prove that a path minimizing the action functional in the set of paths joining two given configurations and having all the time the same order is always a true (collision-free) solution. Considering the one-dimensional N-body problem with equal masses, we prove that (i) collision instants are isolated for a path minimizing the action functional between two given configurations, (ii) if the particles at two endpoints have the same order, then the path minimizing the action functional is always a true (collision-free) solution and (iii) when the particles at two endpoints have different order, although there must be collisions for any path, we can prove that there are at most N! - 1 collisions for any action-minimizing path.

  13. Charge and energy minimization in electrical/magnetic stimulation of nervous tissue

    NASA Astrophysics Data System (ADS)

    Jezernik, Sašo; Sinkjaer, Thomas; Morari, Manfred

    2010-08-01

    In this work we address the problem of stimulating nervous tissue with the minimal necessary energy at reduced/minimal charge. Charge minimization is related to a valid safety concern (avoidance and reduction of stimulation-induced tissue and electrode damage). Energy minimization plays a role in battery-driven electrical or magnetic stimulation systems (increased lifetime, repetition rates, reduction of power requirements, thermal management). Extensive new theoretical results are derived by employing an optimal control theory framework. These results include derivation of the optimal electrical stimulation waveform for a mixed energy/charge minimization problem, derivation of the charge-balanced energy-minimal electrical stimulation waveform, solutions of a pure charge minimization problem with and without a constraint on the stimulation amplitude, and derivation of the energy-minimal magnetic stimulation waveform. Depending on the set stimulus pulse duration, energy and charge reductions of up to 80% are deemed possible. Results are verified in simulations with an active, mammalian-like nerve fiber model.
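
    As a toy illustration of the trade-off the paper formalizes, the sketch below discretizes a leaky-integrator membrane and asks SLSQP for the pulse that reaches threshold at the pulse end while minimizing energy plus a weighted charge term. The membrane model, constants, and solver choice are assumptions for illustration, not the paper's optimal-control derivation.

```python
import numpy as np
from scipy.optimize import minimize

tau, dt, v_th = 5e-3, 1e-4, 1.0      # membrane constant, time step, threshold
n = 20                               # 2 ms pulse discretized in 20 steps
gamma = 0.1                          # weight on the charge term

def simulate(u):
    """Leaky integrator v' = (-v + u)/tau, forward Euler, returns final v."""
    v = 0.0
    for uk in u:
        v += dt * (-v + uk) / tau
    return v

def cost(u):
    # energy + weighted charge, assuming a monophasic positive pulse
    return dt * np.sum(u**2) + gamma * dt * np.sum(u)

res = minimize(cost, np.full(n, 3.0), method="SLSQP",
               constraints={"type": "eq", "fun": lambda u: simulate(u) - v_th})
print(res.success, np.round(res.x, 2))   # waveform rises toward the pulse end
```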

  14. Charge and energy minimization in electrical/magnetic stimulation of nervous tissue.

    PubMed

    Jezernik, Saso; Sinkjaer, Thomas; Morari, Manfred

    2010-08-01

    In this work we address the problem of stimulating nervous tissue with the minimal necessary energy at reduced/minimal charge. Charge minimization is related to a valid safety concern (avoidance and reduction of stimulation-induced tissue and electrode damage). Energy minimization plays a role in battery-driven electrical or magnetic stimulation systems (increased lifetime, repetition rates, reduction of power requirements, thermal management). Extensive new theoretical results are derived by employing an optimal control theory framework. These results include derivation of the optimal electrical stimulation waveform for a mixed energy/charge minimization problem, derivation of the charge-balanced energy-minimal electrical stimulation waveform, solutions of a pure charge minimization problem with and without a constraint on the stimulation amplitude, and derivation of the energy-minimal magnetic stimulation waveform. Depending on the set stimulus pulse duration, energy and charge reductions of up to 80% are deemed possible. Results are verified in simulations with an active, mammalian-like nerve fiber model.

  15. Hip Arthroscopy: Common Problems and Solutions.

    PubMed

    Casp, Aaron; Gwathmey, Frank Winston

    2018-04-01

    The use of hip arthroscopy continues to expand. Understanding potential pitfalls and complications associated with hip arthroscopy is paramount to optimizing clinical outcomes and minimizing unfavorable results. Potential pitfalls and complications are associated with preoperative factors such as patient selection, intraoperative factors such as iatrogenic damage, traction-related complications, inadequate correction of deformity, and nerve injury, or postoperative factors such as poor rehabilitation. This article outlines common factors that contribute to less-than-favorable outcomes. Copyright © 2017 Elsevier Inc. All rights reserved.

  16. An information geometric approach to least squares minimization

    NASA Astrophysics Data System (ADS)

    Transtrum, Mark; Machta, Benjamin; Sethna, James

    2009-03-01

    Parameter estimation by nonlinear least squares minimization is a ubiquitous problem that has an elegant geometric interpretation: all possible parameter values induce a manifold embedded within the space of data. The minimization problem is then to find the point on the manifold closest to the origin. The standard algorithm for minimizing sums of squares, the Levenberg-Marquardt algorithm, also has geometric meaning. When the standard algorithm fails to efficiently find accurate fits to the data, geometric considerations suggest improvements. Problems involving large numbers of parameters, such as often arise in biological contexts, are notoriously difficult. We suggest an algorithm based on geodesic motion that may offer improvements over the standard algorithm for a certain class of problems.
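
    For reference, a minimal Levenberg-Marquardt loop for a sum-of-squares fit is sketched below; it shows the standard algorithm the abstract refers to, not the geodesic-based improvement. The exponential model and damping schedule are illustrative choices.

```python
import numpy as np

def residuals(p, t, y):
    return y - p[0] * np.exp(-p[1] * t)

def jacobian(p, t):
    # d r / d p for the model y_hat = p0 * exp(-p1 * t)
    J = np.empty((t.size, 2))
    J[:, 0] = -np.exp(-p[1] * t)
    J[:, 1] = p[0] * t * np.exp(-p[1] * t)
    return J

def levenberg_marquardt(p, t, y, lam=1e-3, n_iter=50):
    for _ in range(n_iter):
        r = residuals(p, t, y)
        J = jacobian(p, t)
        H = J.T @ J
        step = np.linalg.solve(H + lam * np.diag(np.diag(H)), -J.T @ r)
        if np.sum(residuals(p + step, t, y) ** 2) < np.sum(r ** 2):
            p, lam = p + step, lam * 0.3   # accept: move toward Gauss-Newton
        else:
            lam *= 2.0                     # reject: move toward gradient descent
    return p

t = np.linspace(0, 4, 40)
y = 2.5 * np.exp(-1.3 * t) + 0.02 * np.random.default_rng(2).standard_normal(40)
print(levenberg_marquardt(np.array([1.0, 0.5]), t, y))
```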

  17. Semismooth Newton method for gradient constrained minimization problem

    NASA Astrophysics Data System (ADS)

    Anyyeva, Serbiniyaz; Kunisch, Karl

    2012-08-01

    In this paper we treat a gradient constrained minimization problem, particular case of which is the elasto-plastic torsion problem. In order to get the numerical approximation to the solution we have developed an algorithm in an infinite dimensional space framework using the concept of the generalized (Newton) differentiation. Regularization was done in order to approximate the problem with the unconstrained minimization problem and to make the pointwise maximum function Newton differentiable. Using semismooth Newton method, continuation method was developed in function space. For the numerical implementation the variational equations at Newton steps are discretized using finite elements method.

  18. A perverse quality incentive in surgery: implications of reimbursing surgeons less for doing laparoscopic surgery.

    PubMed

    Fader, Amanda N; Xu, Tim; Dunkin, Brian J; Makary, Martin A

    2016-11-01

    Surgery is one of the highest priced services in health care, and complications from surgery can be serious and costly. Recently, advances in surgical techniques have allowed surgeons to perform many common operations using minimally invasive methods that result in fewer complications. Despite this, the rates of open surgery remain high across multiple surgical disciplines. This is an expert commentary and review of the contemporary literature regarding minimally invasive surgery practices nationwide, the benefits of less invasive approaches, and how minimally invasive compared with open procedures are differentially reimbursed in the United States. We explore the incentive of the current surgeon reimbursement fee schedule and its potential implications. A surgeon's preference to perform minimally invasive compared with open surgery remains highly variable in the U.S., even after adjustment for patient comorbidities and surgical complexity. Nationwide administrative claims data across several surgical disciplines demonstrates that minimally invasive surgery utilization in place of open surgery is associated with reduced adverse events and cost savings. Reducing surgical complications by increasing adoption of minimally invasive operations has significant cost implications for health care. However, current U.S. payment structures may perversely incentivize open surgery and financially reward physicians who do not necessarily embrace newer or best minimally invasive surgery practices. Utilization of minimally invasive surgery varies considerably in the U.S., representing one of the greatest disparities in health care. Existing physician payment models must translate the growing body of research in surgical care into physician-level rewards for quality, including choice of operation. Promoting safe surgery should be an important component of a strong, value-based healthcare system. Resolving the potentially perverse incentives in paying for surgical approaches may help address disparities in surgical care, reduce the prevalent problem of variation, and help contain health care costs.

  19. Replica analysis for the duality of the portfolio optimization problem

    NASA Astrophysics Data System (ADS)

    Shinzato, Takashi

    2016-11-01

    In the present paper, the primal-dual problem consisting of the investment risk minimization problem and the expected return maximization problem in the mean-variance model is discussed using replica analysis. As a natural extension of the investment risk minimization problem under only a budget constraint that we analyzed in a previous study, we herein consider a primal-dual problem in which the investment risk minimization problem with budget and expected return constraints is regarded as the primal problem, and the expected return maximization problem with budget and investment risk constraints is regarded as the dual problem. With respect to these optimal problems, we analyze a quenched disordered system involving both of these optimization problems using the approach developed in statistical mechanical informatics and confirm that both optimal portfolios can possess the primal-dual structure. Finally, the results of numerical simulations are shown to validate the effectiveness of the proposed method.
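
    For a fixed instance, the primal problem (risk minimization under budget and expected-return constraints) is an equality-constrained quadratic program with a closed-form KKT solution; the sketch below solves it directly. This is standard mean-variance algebra for illustration, not the replica computation, and the budget convention 1'w = N is an assumption modeled on the abstract's setup.

```python
import numpy as np

def min_risk_portfolio(Sigma, m, R):
    """Minimize w' Sigma w subject to 1'w = N and m'w = N*R (KKT system)."""
    N = len(m)
    ones = np.ones(N)
    K = np.zeros((N + 2, N + 2))
    K[:N, :N] = 2 * Sigma
    K[:N, N], K[N, :N] = ones, ones        # budget multiplier / constraint
    K[:N, N + 1], K[N + 1, :N] = m, m      # return multiplier / constraint
    rhs = np.concatenate([np.zeros(N), [N, N * R]])
    return np.linalg.solve(K, rhs)[:N]

rng = np.random.default_rng(3)
A = rng.standard_normal((50, 40))
Sigma = A.T @ A / 50 + 0.1 * np.eye(40)    # well-conditioned covariance
m = rng.uniform(0.01, 0.10, 40)            # expected returns
w = min_risk_portfolio(Sigma, m, R=0.05)
print(w.sum(), m @ w / len(m))             # recovers budget N and return R
```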

  20. Replica analysis for the duality of the portfolio optimization problem.

    PubMed

    Shinzato, Takashi

    2016-11-01

    In the present paper, the primal-dual problem consisting of the investment risk minimization problem and the expected return maximization problem in the mean-variance model is discussed using replica analysis. As a natural extension of the investment risk minimization problem under only a budget constraint that we analyzed in a previous study, we herein consider a primal-dual problem in which the investment risk minimization problem with budget and expected return constraints is regarded as the primal problem, and the expected return maximization problem with budget and investment risk constraints is regarded as the dual problem. With respect to these optimal problems, we analyze a quenched disordered system involving both of these optimization problems using the approach developed in statistical mechanical informatics and confirm that both optimal portfolios can possess the primal-dual structure. Finally, the results of numerical simulations are shown to validate the effectiveness of the proposed method.

  1. Non-localization of eigenfunctions for Sturm-Liouville operators and applications

    NASA Astrophysics Data System (ADS)

    Liard, Thibault; Lissy, Pierre; Privat, Yannick

    2018-02-01

    In this article, we investigate a non-localization property of the eigenfunctions of Sturm-Liouville operators A_a = -∂_xx + a(·) Id with Dirichlet boundary conditions, where a(·) runs over the bounded nonnegative potential functions on the interval (0, L) with L > 0. More precisely, we address the extremal spectral problem of minimizing the L2-norm of a function e(·) on a measurable subset ω of (0, L), where e(·) runs over all eigenfunctions of A_a, at the same time with respect to all subsets ω having a prescribed measure and all L^∞ potential functions a(·) having a prescribed essential upper bound. We provide some existence and qualitative properties of the minimizers, as well as precise lower and upper estimates on the optimal value. Several consequences in control and stabilization theory are then highlighted.

  2. Correlation between the norm and the geometry of minimal networks

    NASA Astrophysics Data System (ADS)

    Laut, I. L.

    2017-05-01

    The paper is concerned with the inverse problem of the minimal Steiner network problem in a normed linear space. Namely, given a normed space in which all minimal networks are known for any finite point set, the problem is to describe all the norms on this space for which the minimal networks are the same as for the original norm. We survey the available results and prove that in the plane a rotund differentiable norm determines a distinctive set of minimal Steiner networks. In a two-dimensional space with rotund differentiable norm the coordinates of interior vertices of a nondegenerate minimal parametric network are shown to vary continuously under small deformations of the boundary set, and the turn direction of the network is determined. Bibliography: 15 titles.

  3. Classical Optimal Control for Energy Minimization Based On Diffeomorphic Modulation under Observable-Response-Preserving Homotopy.

    PubMed

    Soley, Micheline B; Markmann, Andreas; Batista, Victor S

    2018-06-12

    We introduce the so-called "Classical Optimal Control Optimization" (COCO) method for global energy minimization based on the implementation of the diffeomorphic modulation under observable-response-preserving homotopy (DMORPH) gradient algorithm. A probe particle with time-dependent mass m(t;β) and dipole μ(r,t;β) is evolved classically on the potential energy surface V(r) coupled to an electric field E(t;β), as described by the time-dependent density of states represented on a grid, or otherwise as a linear combination of Gaussians generated by the k-means clustering algorithm. Control parameters β defining m(t;β), μ(r,t;β), and E(t;β) are optimized by following the gradients of the energy with respect to β, adapting them to steer the particle toward the global minimum energy configuration. We find that the resulting COCO algorithm is capable of resolving near-degenerate states separated by large energy barriers and successfully locates the global minima of golf potentials on flat and rugged surfaces, previously explored for testing quantum annealing methodologies and the quantum optimal control optimization (QuOCO) method. Preliminary results show successful energy minimization of multidimensional Lennard-Jones clusters. Beyond the analysis of energy minimization in the specific model systems investigated, we anticipate COCO should be valuable for solving minimization problems in general, including optimization of parameters in applications to machine learning and molecular structure determination.

  4. Pathogenic psychrotolerant sporeformers: an emerging challenge for low-temperature storage of minimally processed foods.

    PubMed

    Markland, Sarah M; Farkas, Daniel F; Kniel, Kalmia E; Hoover, Dallas G

    2013-05-01

    Sporeforming bacteria are a significant problem in the food industry as they are ubiquitous in nature and capable of resisting inactivation by heat and chemical treatments designed to inactivate them. Beyond spoilage issues, psychrotolerant sporeformers are becoming increasingly recognized as a potential hazard given the ever-expanding demand for refrigerated processed foods with extended shelf-life. In these products, the sporeforming pathogens of concern are Bacillus cereus, Bacillus weihenstephanensis, and Clostridium botulinum type E. This review article examines the foods, conditions, and organisms responsible for the food safety issue caused by the germination and outgrowth of psychrotolerant sporeforming pathogens in minimally processed refrigerated foods.

  5. 3D reconstruction of the magnetic vector potential using model based iterative reconstruction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Prabhat, K. C.; Aditya Mohan, K.; Phatak, Charudatta

    Lorentz transmission electron microscopy (TEM) observations of magnetic nanoparticles contain information on the magnetic and electrostatic potentials. Vector field electron tomography (VFET) can be used to reconstruct electromagnetic potentials of the nanoparticles from their corresponding LTEM images. The VFET approach is based on the conventional filtered back projection approach to tomographic reconstructions, and the availability of an incomplete set of measurements due to experimental limitations means that the reconstructed vector fields exhibit significant artifacts. In this paper, we outline a model-based iterative reconstruction (MBIR) algorithm to reconstruct the magnetic vector potential of magnetic nanoparticles. We combine a forward model for image formation in TEM experiments with a prior model to formulate the tomographic problem as a maximum a posteriori probability (MAP) estimation problem. The MAP cost function is minimized iteratively to determine the vector potential. Here, a comparative reconstruction study of simulated as well as experimental data sets shows that the MBIR approach yields quantifiably better reconstructions than the VFET approach.

  6. 3D reconstruction of the magnetic vector potential using model based iterative reconstruction.

    PubMed

    Prabhat, K C; Aditya Mohan, K; Phatak, Charudatta; Bouman, Charles; De Graef, Marc

    2017-11-01

    Lorentz transmission electron microscopy (TEM) observations of magnetic nanoparticles contain information on the magnetic and electrostatic potentials. Vector field electron tomography (VFET) can be used to reconstruct electromagnetic potentials of the nanoparticles from their corresponding LTEM images. The VFET approach is based on the conventional filtered back projection approach to tomographic reconstructions, and the availability of an incomplete set of measurements due to experimental limitations means that the reconstructed vector fields exhibit significant artifacts. In this paper, we outline a model-based iterative reconstruction (MBIR) algorithm to reconstruct the magnetic vector potential of magnetic nanoparticles. We combine a forward model for image formation in TEM experiments with a prior model to formulate the tomographic problem as a maximum a posteriori probability (MAP) estimation problem. The MAP cost function is minimized iteratively to determine the vector potential. A comparative reconstruction study of simulated as well as experimental data sets shows that the MBIR approach yields quantifiably better reconstructions than the VFET approach. Copyright © 2017 Elsevier B.V. All rights reserved.

  7. 3D reconstruction of the magnetic vector potential using model based iterative reconstruction

    DOE PAGES

    Prabhat, K. C.; Aditya Mohan, K.; Phatak, Charudatta; ...

    2017-07-03

    Lorentz transmission electron microscopy (TEM) observations of magnetic nanoparticles contain information on the magnetic and electrostatic potentials. Vector field electron tomography (VFET) can be used to reconstruct electromagnetic potentials of the nanoparticles from their corresponding LTEM images. The VFET approach is based on the conventional filtered back projection approach to tomographic reconstructions, and the availability of an incomplete set of measurements due to experimental limitations means that the reconstructed vector fields exhibit significant artifacts. In this paper, we outline a model-based iterative reconstruction (MBIR) algorithm to reconstruct the magnetic vector potential of magnetic nanoparticles. We combine a forward model for image formation in TEM experiments with a prior model to formulate the tomographic problem as a maximum a posteriori probability (MAP) estimation problem. The MAP cost function is minimized iteratively to determine the vector potential. Here, a comparative reconstruction study of simulated as well as experimental data sets shows that the MBIR approach yields quantifiably better reconstructions than the VFET approach.

  8. A dimension-wise analysis method for the structural-acoustic system with interval parameters

    NASA Astrophysics Data System (ADS)

    Xu, Menghui; Du, Jianke; Wang, Chong; Li, Yunlong

    2017-04-01

    The interval structural-acoustic analysis is mainly accomplished by interval and subinterval perturbation methods. Potential limitations of these intrusive methods include overestimation or the interval translation effect for the former and prohibitive computational cost for the latter. In this paper, a dimension-wise analysis method is thus proposed to overcome these potential limitations. In this method, a sectional curve of the system response surface along each input dimension is first extracted, and its minimal and maximal points are identified based on its Legendre polynomial approximation. Two input vectors, i.e. the minimal and maximal input vectors, are then assembled dimension-wise from the minimal and maximal points of all sectional curves. Finally, the lower and upper bounds of the system response are computed by deterministic finite element analysis at the two input vectors. Two numerical examples are studied to demonstrate the effectiveness of the proposed method and show that, compared to the interval and subinterval perturbation methods, better accuracy is achieved without much compromise on efficiency, especially for nonlinear problems with large interval parameters.
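
    The dimension-wise idea can be sketched on a cheap surrogate function standing in for the structural-acoustic FE response: along each input dimension, fit a Legendre polynomial to the sectional curve, locate its minimal and maximal points, then assemble the dimension-wise minimal and maximal input vectors and evaluate the response bounds there. The function, intervals, and polynomial degree below are invented for illustration.

```python
import numpy as np
from numpy.polynomial import legendre as leg

def response(x):                               # surrogate "FE solve"
    return np.sin(x[0]) + 0.5 * x[1] ** 2 - 0.3 * x[0] * x[1]

lo, hi = np.array([-1.0, -2.0]), np.array([2.0, 1.0])   # interval parameters
mid = 0.5 * (lo + hi)

x_min, x_max = mid.copy(), mid.copy()
for d in range(len(mid)):
    ts = np.linspace(lo[d], hi[d], 41)
    vals = []
    for t in ts:                               # sectional curve along dim d
        x = mid.copy()
        x[d] = t
        vals.append(response(x))
    # Fit a Legendre polynomial on [-1, 1] and locate its extrema on a fine grid.
    coef = leg.legfit((2 * ts - lo[d] - hi[d]) / (hi[d] - lo[d]), vals, deg=6)
    fine = np.linspace(lo[d], hi[d], 2001)
    approx = leg.legval((2 * fine - lo[d] - hi[d]) / (hi[d] - lo[d]), coef)
    x_min[d], x_max[d] = fine[np.argmin(approx)], fine[np.argmax(approx)]

print(response(x_min), response(x_max))        # estimated lower/upper bounds
```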

  9. A minimal dissipation type-based classification in irreversible thermodynamics and microeconomics

    NASA Astrophysics Data System (ADS)

    Tsirlin, A. M.; Kazakov, V.; Kolinko, N. A.

    2003-10-01

    We formulate the problem of finding classes of kinetic dependencies in irreversible thermodynamic and microeconomic systems for which minimal dissipation processes belong to the same type. We show that this problem is an inverse optimal control problem and solve it. The commonality of this problem in irreversible thermodynamics and microeconomics is emphasized.

  10. Minimization In Digital Design As A Meta-Planning Problem

    NASA Astrophysics Data System (ADS)

    Ho, William P. C.; Wu, Jung-Gen

    1987-05-01

    In our model-based expert system for automatic digital system design, we formalize the design process into three sub-processes - compiling high-level behavioral specifications into primitive behavioral operations, grouping primitive operations into behavioral functions, and grouping functions into modules. Consideration of design minimization explicitly controls decision-making in the last two subprocesses. Design minimization, a key task in the automatic design of digital systems, is complicated by the high degree of interaction among the time sequence and content of design decisions. In this paper, we present an AI approach which directly addresses these interactions and their consequences by modeling the minimization problem as a planning problem, and the management of design decision-making as a meta-planning problem.

  11. Minimizing calibration time using inter-subject information of single-trial recognition of error potentials in brain-computer interfaces.

    PubMed

    Iturrate, Iñaki; Montesano, Luis; Chavarriaga, Ricardo; del R Millán, Jose; Minguez, Javier

    2011-01-01

    One of the main problems of both synchronous and asynchronous EEG-based BCIs is the need for an initial calibration phase before the system can be used. This phase is necessary due to the high non-stationarity of the EEG, since it changes between sessions and users. The calibration process limits BCI systems to scenarios where the outputs are very controlled, and makes these systems unfriendly and exhausting for the users. Although it has been studied how to reduce calibration time for asynchronous signals, this remains an open issue for event-related potentials. Here, we propose minimizing the calibration time for single-trial error potentials by using classifiers based on inter-subject information. The results show that it is possible to have a classifier with high performance from the beginning of the experiment, one which is able to adapt itself, making the calibration phase shorter and transparent to the user.

  12. Closing in on the constitution of consciousness

    PubMed Central

    Miller, Steven M.

    2014-01-01

    The science of consciousness is a nascent and thriving field of research that is founded on identifying the minimally sufficient neural correlates of consciousness. However, I have argued that it is the neural constitution of consciousness that science seeks to understand and that there are no evident strategies for distinguishing the correlates and constitution of (phenomenal) consciousness. Here I review this correlation/constitution distinction problem and challenge the existing foundations of consciousness science. I present the main analyses from a longer paper in press on this issue, focusing on recording, inhibition, stimulation, and combined inhibition/stimulation strategies, including proposal of the Jenga analogy to illustrate why identifying the minimally sufficient neural correlates of consciousness should not be considered the ultimate target of consciousness science. Thereafter I suggest that while combined inhibition and stimulation strategies might identify some constitutive neural activities—indeed minimally sufficient constitutive neural activities—such strategies fail to identify the whole neural constitution of consciousness and thus the correlation/constitution distinction problem is not fully solved. Various clarifications, potential objections and related scientific and philosophical issues are also discussed and I conclude by proposing new foundational claims for consciousness science. PMID:25452738

  13. The empty OR-process analysis and a new concept for flexible and modular use in minimal invasive surgery.

    PubMed

    Eckmann, Christian; Olbrich, Guenter; Shekarriz, Hodjat; Bruch, Hans-Peter

    2003-01-01

    The reproducible advantages of minimally invasive surgery have led to a worldwide spread of these techniques. Nevertheless, the increasing use of technology causes problems in the operating room (OR). The workstation environment and workflow are handicapped by a great number of isolated solutions that demand a large amount of space. The Center of Excellence in Medical Technology (CEMET) was established in 2001 as an institution for close cooperation between users, science, and manufacturers of medical devices in the State of Schleswig-Holstein, Germany. The future OR, as a major project, began with a detailed process analysis, which disclosed a large number of medical devices with different interfaces and poor standardisation as the main problems. Smaller and more flexible devices are necessary, as well as functional modules located outside the OR. Only actuators should be positioned near the operation area. The future OR should include a flexible-room concept and less equipment than is currently in use. A uniform human-user interface is needed to control the OR environment. This article addresses the need for a clear workspace environment, intelligent user interfaces, and a flexible-room concept to improve the potential in the use of minimally invasive surgery.

  14. Primal-dual methods of shape sensitivity analysis for curvilinear cracks with nonpenetration

    NASA Astrophysics Data System (ADS)

    Kovtunenko, V. A.

    2006-10-01

    Based on a level-set description of a crack moving with a given velocity, the problem of shape perturbation of the crack is considered. Nonpenetration conditions are imposed between opposite crack surfaces, which results in a constrained minimization problem describing the equilibrium of a solid with the crack. We suggest a minimax formulation of the state problem, thus allowing curvilinear (nonplanar) cracks to be considered. Utilizing primal-dual methods of shape sensitivity analysis, we obtain the general formula for the shape derivative of the potential energy, which describes an energy-release rate for curvilinear cracks. The conditions sufficient to rewrite it in the form of a path-independent integral (J-integral) are derived.

  15. Symmetric Trajectories for the 2N-Body Problem with Equal Masses

    NASA Astrophysics Data System (ADS)

    Terracini, Susanna; Venturelli, Andrea

    2007-06-01

    We consider the problem of 2N bodies of equal masses in R^3 for the Newtonian-like weak-force potential r^{-σ}, and we prove the existence of a family of collision-free nonplanar and nonhomographic symmetric solutions that are periodic modulo rotations. In addition, the rotation number with respect to the vertical axis ranges in a suitable interval. These solutions have the hip-hop symmetry, a generalization of that introduced in [19], for the case of many bodies and taking account of a topological constraint. The argument exploits the variational structure of the problem, and is based on the minimization of the Lagrangian action on a given class of paths.

  16. Graph cuts for curvature based image denoising.

    PubMed

    Bae, Egil; Shi, Juan; Tai, Xue-Cheng

    2011-05-01

    Minimization of total variation (TV) is a well-known method for image denoising. Recently, the relationship between TV minimization problems and binary MRF models has been much explored. This has resulted in some very efficient combinatorial optimization algorithms for the TV minimization problem in the discrete setting via graph cuts. To overcome limitations, such as staircasing effects, of the relatively simple TV model, variational models based upon higher order derivatives have been proposed. The Euler's elastica model is one such higher order model of central importance, which minimizes the curvature of all level lines in the image. Traditional numerical methods for minimizing the energy in such higher order models are complicated and computationally complex. In this paper, we will present an efficient minimization algorithm based upon graph cuts for minimizing the energy in the Euler's elastica model, by simplifying the problem to that of solving a sequence of easy graph representable problems. This sequence has connections to the gradient flow of the energy function, and converges to a minimum point. The numerical experiments show that our new approach is more effective in maintaining smooth visual results while preserving sharp features better than TV models.
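
    Since the graph-cut construction itself needs a max-flow solver, the stand-in sketch below minimizes the smoothed TV denoising energy by explicit gradient descent, just to make the underlying variational model concrete; the parameters and the smoothing eps are illustrative assumptions, not the paper's combinatorial algorithm.

```python
import numpy as np

def tv_denoise(f, lam=0.15, eps=1e-2, tau=0.1, n_iter=300):
    """Gradient descent on 0.5*||u - f||^2 + lam * sum sqrt(|grad u|^2 + eps)."""
    u = f.copy()
    for _ in range(n_iter):
        ux = np.diff(u, axis=1, append=u[:, -1:])      # forward differences
        uy = np.diff(u, axis=0, append=u[-1:, :])
        mag = np.sqrt(ux**2 + uy**2 + eps)
        px, py = ux / mag, uy / mag
        # negative adjoint of the forward difference = discrete divergence
        div = (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))
        u -= tau * ((u - f) - lam * div)
    return u

rng = np.random.default_rng(4)
clean = np.zeros((64, 64))
clean[16:48, 16:48] = 1.0
noisy = clean + 0.3 * rng.standard_normal((64, 64))
denoised = tv_denoise(noisy)
print(np.abs(noisy - clean).mean(), np.abs(denoised - clean).mean())
```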

  17. Quality of Life after Open or Minimally Invasive Esophagectomy in Patients With Esophageal Cancer-A Systematic Review.

    PubMed

    Taioli, Emanuela; Schwartz, Rebecca M; Lieberman-Cribbin, Wil; Moskowitz, Gil; van Gerwen, Maaike; Flores, Raja

    2017-01-01

    Although esophageal cancer is rare in the United States, 5-year survival and quality of life (QoL) are poor following esophageal cancer surgery. Although esophageal cancer has been surgically treated with esophagectomy through thoracotomy, an open procedure, minimally invasive surgical procedures have been recently introduced to decrease the risk of complications and improve QoL after surgery. The current study is a systematic review of the published literature to assess differences in QoL after traditional (open) or minimally invasive esophagectomy. We hypothesized that QoL is consistently better in patients treated with minimally invasive surgery than in those treated with a more traditional and invasive approach. Although global health, social function, and emotional function improved more commonly after minimally invasive surgery compared with open surgery, physical function and role function, as well as symptoms including choking, dysphagia, eating problems, and trouble swallowing saliva, declined for both surgery types. Cognitive function was equivocal across both groups. The potential small benefits in global and mental health status among those who experience minimally invasive surgery should be considered with caution given the possibility of publication and selection bias. Copyright © 2017 Elsevier Inc. All rights reserved.

  18. Disciplinary style and child abuse potential: association with indicators of positive functioning in children with behavior problems.

    PubMed

    Rodriguez, Christina M; Eden, Ann M

    2008-06-01

    Reduction of ineffective parenting is promoted in parent training components of mental health treatment for children with externalizing behavior disorders, but minimal research has considered whether disciplinary style and lower abuse risk could also be associated with positive functioning in such children. The present study examined whether a less dysfunctional disciplinary style and lower child abuse risk were associated with children's positive self-concept, adaptive attributional style, and hopefulness. Recruited from children undergoing treatment for disruptive behavior disorders, 69 mother-child dyads participated, with maternal caregivers reporting on their disciplinary style and abuse potential and children reporting independently on their positive functioning (adaptive attributional style, overall self-concept, and hopelessness). Findings supported the hypothesized association, with lower scores on mothers' dysfunctional discipline style and abuse potential significantly predicting children's reported positive functioning. Future research directions pertaining to more adaptive functioning in children with behavior problems are discussed.

  19. Survey of Collision Avoidance and Ranging Sensors for Mobile Robots.

    DTIC Science & Technology

    1988-03-01

    systems represent a potential safety problem in that the intense and often invisible beam can be an eye hazard. Furthermore, gas lasers require high ...sensor, or out of range. Conventional diffuse proximity detectors based on return signal intensity display high repeatability only when target...because the low transmission intensity of this infrared wavelength results in minimal return radiation. (The extremely cold detector produces a high

  20. Experiments to Generate New Data about School Choice: Commentary on "Defining Continuous Improvement and Cost Minimization Possibilities through School Choice Experiments" and Merrifield's Reply

    ERIC Educational Resources Information Center

    Berg, Nathan; Merrifield, John

    2009-01-01

    Benefiting from new data provided by experimental economists, behavioral economics is now moving beyond empirical tests of standard behavioral assumptions to the problem of designing improved institutions that are tuned to fit real-world behavior. It is therefore worthwhile to consider the potential for new experiments to advance school choice…

  1. NLEAP/GIS approach for identifying and mitigating regional nitrate-nitrogen leaching

    USGS Publications Warehouse

    Shaffer, M.J.; Hall, M.D.; Wylie, B.K.; Wagner, D.G.; Corwin, D.L.; Loague, K.

    1996-01-01

    Improved simulation-based methodology is needed to help identify broad geographical areas where potential NO3-N leaching may be occurring from agriculture and suggest management alternatives that minimize the problem. The Nitrate Leaching and Economic Analysis Package (NLEAP) model was applied to estimate regional NO3-N leaching in eastern Colorado. Results show that a combined NLEAP/GIS technology can be used to identify potential NO3-N hot spots in shallow alluvial aquifers under irrigated agriculture. The NLEAP NO3-N Leached (NL) index provided the most promising single index followed by NO3-N Available for Leaching (NAL). The same combined technology also shows promise in identifying Best Management Practice (BMP) methods that help minimize NO3-N leaching in vulnerable areas. Future plans call for linkage of the NLEAP/GIS procedures with groundwater modeling to establish a mechanistic analysis of agriculture-aquifer interactions at a regional scale.

  2. Quaternary ammonium silane-functionalized, methacrylate resin composition with antimicrobial activities and self-repair potential

    PubMed Central

    Gong, Shi-qiang; Niu, Li-na; Kemp, Lisa K.; Yiu, Cynthia K.Y.; Ryou, Heonjune; Qi, Yi-pin; Blizzard, John D.; Nikonov, Sergey; Brackett, Martha G.; Messer, Regina L.W.; Wu, Christine D.; Mao, Jing; Brister, L. Bryan; Rueggeberg, Frederick A.; Arola, Dwayne D.; Pashley, David H.; Tay, Franklin R.

    2012-01-01

    The design of antimicrobial polymers to address healthcare issues and minimize environmental problems is an important endeavor with both fundamental and practical implications. Quaternary ammonium silane-functionalized methacrylate (QAMS) represents an example of antimicrobial macromonomers synthesized by a sol-gel chemical route; these compounds possess flexible Si-O-Si bonds. In the present work, a partially-hydrolyzed QAMS copolymerized with bis-GMA is introduced. This methacrylate resin was shown to possess desirable mechanical properties, with both a high degree of conversion and minimal polymerization shrinkage. Kill-on-contact microbiocidal activities of this resin were demonstrated using single-species biofilms of Streptococcus mutans (ATCC 36558), Actinomyces naeslundii (ATCC 12104) and Candida albicans (ATCC 90028). Improved mechanical properties after hydration provided proof-of-concept that QAMS-incorporated resin exhibits self-repair potential via water-induced condensation of organic modified silicate (ormosil) phases within the polymerized resin matrix. PMID:22659173

  3. Radiant coolers - Theory, flight histories, design comparisons and future applications

    NASA Technical Reports Server (NTRS)

    Donohoe, M. J.; Sherman, A.; Hickman, D. E.

    1975-01-01

    Radiant coolers have been developed for application to the cooling of infrared detectors aboard NASA earth observation systems and as part of the Defense Meteorological Satellite Program. The prime design constraints for these coolers are the location of the cooler aboard the satellite and the satellite orbit. Flight data from several coolers indicate that, in general, design temperatures are achieved. However, potential problems related to the contamination of cold surfaces are also revealed by the data. A comparison among the various cooler designs and flight performances indicates design improvements that can minimize the contamination problem in the future.

  4. Prospects for mirage mediation

    NASA Astrophysics Data System (ADS)

    Pierce, Aaron; Thaler, Jesse

    2006-09-01

    Mirage mediation reduces the fine-tuning in the minimal supersymmetric standard model by dynamically arranging a cancellation between anomaly-mediated and modulus-mediated supersymmetry breaking. We explore the conditions under which a mirage "messenger scale" is generated near the weak scale and the little hierarchy problem is solved. We do this by explicitly including the dynamics of the SUSY-breaking sector needed to cancel the cosmological constant. The most plausible scenario for generating a low mirage scale does not readily admit an extra-dimensional interpretation. We also review the possibilities for solving the μ/Bμ problem in such theories, a potential hidden source of fine-tuning.

  5. Graph cuts via l1 norm minimization.

    PubMed

    Bhusnurmath, Arvind; Taylor, Camillo J

    2008-10-01

    Graph cuts have become an increasingly important tool for solving a number of energy minimization problems in computer vision and other fields. In this paper, the graph cut problem is reformulated as an unconstrained l1 norm minimization that can be solved effectively using interior point methods. This reformulation exposes connections between the graph cuts and other related continuous optimization problems. Eventually the problem is reduced to solving a sequence of sparse linear systems involving the Laplacian of the underlying graph. The proposed procedure exploits the structure of these linear systems in a manner that is easily amenable to parallel implementations. Experimental results obtained by applying the procedure to graphs derived from image processing problems are provided.
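
    One way to see the "sequence of sparse Laplacian systems" structure is an iteratively reweighted least-squares surrogate for an l1 edge cost, sketched below on a 1-D chain graph. This is an illustrative analogue under that substitution, not the paper's interior-point formulation.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import spsolve

def irls_graph_l1(edges, weights, f, lam=1.0, n_iter=20, eps=1e-6):
    """Approximate min_x 0.5*||x - f||^2 + lam * sum_e w_e |x_i - x_j| by IRLS."""
    n = len(f)
    x = f.astype(float).copy()
    for _ in range(n_iter):
        # Reweight each edge by 1/|x_i - x_j| to mimic the l1 edge cost.
        d = np.abs(x[edges[:, 0]] - x[edges[:, 1]])
        w = weights / np.maximum(d, eps)
        i, j = edges[:, 0], edges[:, 1]
        W = sp.coo_matrix((w, (i, j)), shape=(n, n))
        W = W + W.T
        L = sp.diags(np.asarray(W.sum(axis=1)).ravel()) - W   # weighted Laplacian
        x = spsolve((sp.eye(n) + lam * L).tocsc(), f)         # sparse linear solve
    return x

# 1-D chain graph with a noisy step signal:
n = 100
edges = np.column_stack([np.arange(n - 1), np.arange(1, n)])
f = (np.arange(n) > 50).astype(float) + 0.2 * np.random.default_rng(5).standard_normal(n)
print(np.round(irls_graph_l1(edges, np.ones(n - 1), f), 2))
```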

  6. Robust penalty method for structural synthesis

    NASA Technical Reports Server (NTRS)

    Kamat, M. P.

    1983-01-01

    The Sequential Unconstrained Minimization Technique (SUMT) offers an easy way of solving nonlinearly constrained problems. However, this algorithm frequently suffers from the need to minimize an ill-conditioned penalty function. An ill-conditioned minimization problem can be solved very effectively by posing the problem as one of integrating a system of stiff differential equations utilizing concepts from singular perturbation theory. This paper evaluates the robustness and the reliability of such a singular perturbation based SUMT algorithm on two different problems of structural optimization of widely separated scales. The report concludes that whereas conventional SUMT can be bogged down by frequent ill-conditioning, especially in large scale problems, the singular perturbation SUMT has no such difficulty in converging to very accurate solutions.
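
    For contrast with the singular-perturbation variant, a bare SUMT loop with an exterior quadratic penalty is sketched below; the growing penalty weight is exactly what produces the ill-conditioning the report discusses. The problem data and schedule are invented for illustration.

```python
import numpy as np
from scipy.optimize import minimize

def f(x):                        # objective: shifted quadratic bowl
    return (x[0] - 2) ** 2 + (x[1] - 1) ** 2

def g(x):                        # constraint: x0 + x1 <= 1
    return x[0] + x[1] - 1

def sumt(x0, r0=1.0, growth=10.0, n_outer=6):
    x, r = np.asarray(x0, float), r0
    for _ in range(n_outer):
        # Penalized subproblem; conditioning worsens as r grows, which is what
        # motivates the stiff-ODE reformulation discussed in the abstract.
        pen = lambda x: f(x) + r * max(g(x), 0.0) ** 2
        x = minimize(pen, x, method="Nelder-Mead").x
        r *= growth
    return x

print(sumt([0.0, 0.0]))          # approaches the constrained optimum (1, 0)
```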

  7. L^∞ Variational Problems with Running Costs and Constraints

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aronsson, G., E-mail: gunnar.aronsson@liu.se; Barron, E. N., E-mail: enbarron@math.luc.edu

    2012-02-15

    Various approaches are used to derive the Aronsson-Euler equations for L^∞ calculus of variations problems with constraints. The problems considered involve holonomic, nonholonomic, isoperimetric, and isosupremic constraints on the minimizer. In addition, we derive the Aronsson-Euler equation for the basic L^∞ problem with a running cost and then consider properties of an absolute minimizer. Many open problems are introduced for further study.

  8. Concurrent optimization of material spatial distribution and material anisotropy repartition for two-dimensional structures

    NASA Astrophysics Data System (ADS)

    Ranaivomiarana, Narindra; Irisarri, François-Xavier; Bettebghor, Dimitri; Desmorat, Boris

    2018-04-01

    An optimization methodology to find concurrently the material spatial distribution and the material anisotropy repartition is proposed for orthotropic, linear and elastic two-dimensional membrane structures. The shape of the structure is parameterized by a density variable that determines the presence or absence of material. The polar method is used to parameterize a general orthotropic material by the invariants of its elasticity tensor under changes of frame. A global structural stiffness maximization problem, written as a compliance minimization problem, is treated, and a volume constraint is applied. The compliance minimization can be recast as a double minimization of complementary energy. An extension of the alternate directions algorithm is proposed to solve the double minimization problem. The algorithm iterates between local minimizations in each element of the structure and global minimizations. Thanks to the polar method, the local minimizations are solved explicitly, providing analytical solutions. The global minimizations are performed with finite element calculations. The method is shown to be straightforward and efficient. Concurrent optimizations of the density and anisotropy distributions of a cantilever beam and of a bridge are presented.

  9. Theory and practice in the electrometric determination of pH in precipitation

    NASA Astrophysics Data System (ADS)

    Brennan, Carla Jo; Peden, Mark E.

    Basic theory and laboratory investigations have been applied to the electrometric determination of pH in precipitation samples in an effort to improve the reliability of the results obtained from these low ionic strength samples. The theoretical problems inherent in the measurement of pH in rain have been examined using natural precipitation samples with varying ionic strengths and pH values. The importance of electrode design and construction has been stressed. The proper choice of electrode can minimize or eliminate problems arising from residual liquid junction potentials, streaming potentials and temperature differences. Reliable pH measurements can be made in precipitation samples using commercially available calibration buffers, provided that low ionic strength quality control solutions are routinely used to verify electrode and meter performance.

  10. Exact solution for the optimal neuronal layout problem.

    PubMed

    Chklovskii, Dmitri B

    2004-10-01

    Evolution perfected brain design by maximizing its functionality while minimizing costs associated with building and maintaining it. The assumption that brain functionality is specified by neuronal connectivity, implemented by costly biological wiring, leads to the following optimal design problem: for a given neuronal connectivity, find a spatial layout of neurons that minimizes the wiring cost. Unfortunately, this problem is difficult to solve because the number of possible layouts is often astronomically large. We argue that the wiring cost may scale as wire length squared, reducing the optimal layout problem to a constrained minimization of a quadratic form. For biologically plausible constraints, this problem has exact analytical solutions, which give reasonable approximations to actual layouts in the brain. These solutions make the inverse problem of inferring neuronal connectivity from neuronal layout more tractable.
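
    The reduction has a compact computational form: with wiring cost sum_ij w_ij (x_i - x_j)^2 = x^T L x for the graph Laplacian L, a one-dimensional layout under the natural normalization constraints (zero mean, unit norm) is given by a Laplacian eigenvector. A minimal sketch with an invented connectivity matrix, not data from the paper:

        import numpy as np

        W = np.array([[0, 1, 1, 0],
                      [1, 0, 1, 0],
                      [1, 1, 0, 1],
                      [0, 0, 1, 0]], dtype=float)   # symmetric connectivity
        L = np.diag(W.sum(axis=1)) - W               # graph Laplacian

        eigvals, eigvecs = np.linalg.eigh(L)
        layout = eigvecs[:, 1]     # skip the constant (zero-eigenvalue) eigenvector
        print(layout)              # optimal 1-D neuron positions under the constraints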

  11. Minimizing the Total Service Time of Discrete Dynamic Berth Allocation Problem by an Iterated Greedy Heuristic

    PubMed Central

    2014-01-01

    Berth allocation is the forefront operation performed when ships arrive at a port and is a critical task in container port optimization. Minimizing the time ships spend at berths constitutes an important objective of berth allocation problems. This study focuses on the discrete dynamic berth allocation problem (discrete DBAP), which aims to minimize total service time, and proposes an iterated greedy (IG) algorithm to solve it. The proposed IG algorithm is tested on three benchmark problem sets. Experimental results show that the proposed IG algorithm can obtain optimal solutions for all test instances of the first and second problem sets and outperforms the best-known solutions for 35 out of 90 test instances of the third problem set. PMID:25295295
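
    A minimal sketch of the destruct-and-reconstruct cycle behind iterated greedy, here on a toy single-berth sequencing instance (the instance, the destruction size, and the cost function are illustrative stand-ins for the paper's discrete DBAP model):

        import random

        arrival  = [0, 2, 4, 1]      # ship arrival times
        handling = [3, 2, 4, 1]      # ship handling times

        def total_service_time(order):
            t, total = 0, 0
            for s in order:
                t = max(t, arrival[s]) + handling[s]   # wait for berth, then handle
                total += t - arrival[s]                # ship's time in the port
            return total

        def greedy_insert(partial, removed):
            for s in removed:                          # best-insertion construction
                partial = min((partial[:i] + [s] + partial[i:]
                               for i in range(len(partial) + 1)),
                              key=total_service_time)
            return partial

        random.seed(0)
        order = greedy_insert([], [0, 1, 2, 3])        # initial greedy solution
        for _ in range(100):                           # IG: destruct, then reconstruct
            removed = random.sample(order, 2)
            partial = [s for s in order if s not in removed]
            candidate = greedy_insert(partial, removed)
            if total_service_time(candidate) <= total_service_time(order):
                order = candidate
        print(order, total_service_time(order))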

  12. A Novel Approach for High Deposition Rate Cladding with Minimal Dilution with an Arc - Laser Process Combination

    NASA Astrophysics Data System (ADS)

    Barroi, A.; Hermsdorf, J.; Prank, U.; Kaierle, S.

    First results of the process development of a novel approach for a high deposition rate cladding process with minimal dilution are presented. The approach will combine the enormous melting potential of an electrical arc that burns between two consumable wire electrodes with the precision of a laser process. Separate tests of the plasma melting and of the laser-based surface heating have been performed. A steadily burning arc between the electrodes could be established, and a deposition rate of 10 kg/h could be achieved. The laser was able to apply the desired heat profile needed for the combination of the processes. Process problems were analyzed and solutions proposed.

  13. A mathematical model for the deformation of the eyeball by an elastic band.

    PubMed

    Keeling, Stephen L; Propst, Georg; Stadler, Georg; Wackernagel, Werner

    2009-06-01

    In a certain kind of eye surgery, the human eyeball is deformed sustainably by the application of an elastic band. This article presents a mathematical model for the mechanics of the combined eye/band structure along with an algorithm to compute the model solutions. These predict the immediate and the lasting indentation of the eyeball. The model is derived from basic physical principles by minimizing a potential energy subject to a volume constraint. Assuming spherical symmetry, this leads to a two-point boundary-value problem for a non-linear second-order ordinary differential equation that describes the minimizing static equilibrium. By comparison with laboratory data, a preliminary validation of the model is given.

  14. Bilevel formulation of a policy design problem considering multiple objectives and incomplete preferences

    NASA Astrophysics Data System (ADS)

    Hawthorne, Bryant; Panchal, Jitesh H.

    2014-07-01

    A bilevel optimization formulation of policy design problems considering multiple objectives and incomplete preferences of the stakeholders is presented. The formulation is presented for Feed-in-Tariff (FIT) policy design for decentralized energy infrastructure. The upper-level problem is the policy designer's problem and the lower-level problem is a Nash equilibrium problem resulting from market interactions. The policy designer has two objectives: maximizing the quantity of energy generated and minimizing policy cost. The stakeholders decide on quantities while maximizing net present value and minimizing capital investment. The Nash equilibrium problem in the presence of incomplete preferences is formulated as a stochastic linear complementarity problem and solved using expected value formulation, expected residual minimization formulation, and the Monte Carlo technique. The primary contributions in this article are the mathematical formulation of the FIT policy, the extension of computational policy design problems to multiple objectives, and the consideration of incomplete preferences of stakeholders for policy design problems.

  15. The environmental cost of subsistence: Optimizing diets to minimize footprints.

    PubMed

    Gephart, Jessica A; Davis, Kyle F; Emery, Kyle A; Leach, Allison M; Galloway, James N; Pace, Michael L

    2016-05-15

    The question of how to minimize monetary cost while meeting basic nutrient requirements (a subsistence diet) was posed by George Stigler in 1945. The problem, known as Stigler's diet problem, was famously solved using the simplex algorithm. Today, we are concerned not only with the monetary cost of food, but also with its environmental cost. Efforts to quantify environmental impacts led to the development of footprint (FP) indicators. The environmental footprints of food production span multiple dimensions, including greenhouse gas emissions (carbon footprint), nitrogen release (nitrogen footprint), water use (blue and green water footprint) and land use (land footprint), and a diet minimizing one of these impacts could result in higher impacts in another dimension. In this study, based on nutritional and population data for the United States, we identify diets that minimize each of these four footprints subject to nutrient constraints. We then calculate tradeoffs by taking the composition of each footprint's minimum diet and calculating the other three footprints. We find that diets for the minimized footprints tend to be similar for the four footprints, suggesting there are generally synergies, rather than tradeoffs, among low footprint diets. Plant-based food and seafood (fish and other aquatic foods) commonly appear in minimized diets and tend to most efficiently supply macronutrients and micronutrients, respectively. Livestock products rarely appear in minimized diets, suggesting these foods tend to be less efficient from an environmental perspective, even when nutrient content is considered. The results' emphasis on seafood is complicated by the differing environmental impacts of aquaculture versus capture fisheries, the growing share of aquaculture, and the shifting composition of aquaculture feeds. While this analysis does not make specific diet recommendations, our approach demonstrates potential environmental synergies of plant- and seafood-based diets. As a result, this study provides a useful tool for decision-makers in linking human nutrition and environmental impacts. Copyright © 2016 Elsevier B.V. All rights reserved.
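
    The underlying optimization is a Stigler-style linear program. A minimal sketch that minimizes one footprint subject to two nutrient constraints (the foods, footprint coefficients, and nutrient values are invented toy numbers, not the study's data):

        import numpy as np
        from scipy.optimize import linprog

        # columns: [wheat, beans, fish]
        footprint = np.array([1.0, 0.8, 2.5])        # e.g., kg CO2 per serving
        protein   = np.array([4.0, 9.0, 20.0])       # g per serving
        calories  = np.array([330.0, 340.0, 200.0])  # kcal per serving

        # minimize footprint.x  s.t.  protein.x >= 56  and  calories.x >= 2000
        res = linprog(c=footprint,
                      A_ub=-np.vstack([protein, calories]),  # flip >= into <=
                      b_ub=-np.array([56.0, 2000.0]),
                      bounds=[(0, None)] * 3)
        print(res.x, res.fun)   # servings per food and the minimized footprint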

  16. Minimization of Roll Firings for Optimal Propellant Maneuvers

    NASA Astrophysics Data System (ADS)

    Leach, Parker C.

    Attitude control of the International Space Station (ISS) is critical for operations, impacting power, communications, and thermal systems. The station uses gyroscopes and thrusters for attitude control, and reorientations are normally assisted by thrusters on docked vehicles. When the docked vehicles are unavailable, the reduction in control authority in the roll axis results in frequent jet firings and excessive fuel consumption. To improve this situation, new guidance and control schemes are desired that provide control with fewer roll firings. Optimal control software was utilized to solve for potential candidates that satisfied desired conditions with the goal of minimizing total propellant. An ISS simulation tool was then used to test these solutions for feasibility. After several problem reformulations, multiple candidate solutions minimizing or completely eliminating roll firings were found. Flight implementation would not only save substantial amounts of fuel and thus money, but also reduce ISS wear and tear, thereby extending its lifetime.

  17. Life cycle optimization model for integrated cogeneration and energy systems applications in buildings

    NASA Astrophysics Data System (ADS)

    Osman, Ayat E.

    Energy use in commercial buildings constitutes a major proportion of the energy consumption and anthropogenic emissions in the USA. Cogeneration systems offer an opportunity to meet a building's electrical and thermal demands from a single energy source. To determine the most beneficial and cost-effective energy source(s) for meeting the energy demands of a building, optimization techniques have been implemented in some studies to find the optimum energy system based on reducing cost and maximizing revenues. Because of the significant environmental impacts that can result from meeting the energy demands in buildings, building design should incorporate environmental criteria in the decision-making process. The objective of this research is to develop a framework and model to optimize a building's operation by integrating cogeneration systems and utility systems in order to meet the electrical, heating, and cooling demands, considering both the potential life cycle environmental impacts of meeting those demands and the economic implications. Two LCA optimization models have been developed within a framework that uses hourly building energy data, life cycle assessment (LCA), and mixed-integer linear programming (MILP). The objective functions used in the formulation of the problems include: (1) minimizing life cycle primary energy consumption, (2) minimizing global warming potential, (3) minimizing tropospheric ozone precursor potential, (4) minimizing acidification potential, (5) minimizing NOx, SO2, and CO2, and (6) minimizing life cycle costs, considering a study period of ten years and the lifetime of equipment. The two LCA optimization models can be used for: (a) long-term planning and operational analysis in buildings by analyzing the hourly energy use of a building during a day and (b) design and quick analysis of building operation based on periodic analysis of the energy use of a building in a year. A Pareto-optimal frontier is also derived, which defines the minimum cost required to achieve any level of environmental emission or primary energy usage or, inversely, the minimum environmental indicator and primary energy usage that can be achieved and the cost required to achieve it.

  18. Confronting Decision Cliffs: Diagnostic Assessment of Multi-Objective Evolutionary Algorithms' Performance for Addressing Uncertain Environmental Thresholds

    NASA Astrophysics Data System (ADS)

    Ward, V. L.; Singh, R.; Reed, P. M.; Keller, K.

    2014-12-01

    As water resources problems typically involve several stakeholders with conflicting objectives, multi-objective evolutionary algorithms (MOEAs) are now key tools for understanding management tradeoffs. Given the growing complexity of water planning problems, it is important to establish if an algorithm can consistently perform well on a given class of problems. This knowledge allows the decision analyst to focus on eliciting and evaluating appropriate problem formulations. This study proposes a multi-objective adaptation of the classic environmental economics "Lake Problem" as a computationally simple but mathematically challenging MOEA benchmarking problem. The lake problem abstracts a fictional town on a lake which hopes to maximize its economic benefit without degrading the lake's water quality to a eutrophic (polluted) state through excessive phosphorus loading. The problem poses the challenge of maintaining economic activity while confronting the uncertainty of potentially crossing a nonlinear and potentially irreversible pollution threshold beyond which the lake is eutrophic. Objectives for optimization are maximizing economic benefit from lake pollution, maximizing water quality, maximizing the reliability of remaining below the environmental threshold, and minimizing the probability that the town will have to drastically change pollution policies in any given year. The multi-objective formulation incorporates uncertainty with a stochastic phosphorus inflow abstracting non-point source pollution. We performed comprehensive diagnostics using 6 algorithms: Borg, MOEAD, eMOEA, eNSGAII, GDE3, and NSGAII to ascertain their controllability, reliability, efficiency, and effectiveness. The lake problem abstracts elements of many current water resources and climate related management applications where there is the potential for crossing irreversible, nonlinear thresholds. We show that many modern MOEAs can fail on this test problem, indicating its suitability as a useful and nontrivial benchmarking problem.

  19. Final Report - High-Order Spectral Volume Method for the Navier-Stokes Equations On Unstructured Tetrahedral Grids

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Z J

    2012-12-06

    The overriding objective for this project is to develop an efficient and accurate method for capturing strong discontinuities and fine smooth flow structures of disparate length scales with unstructured grids, and demonstrate its potential for problems relevant to DOE. More specifically, we plan to achieve the following objectives: 1. Extend the SV method to three dimensions, and develop a fourth-order accurate SV scheme for tetrahedral grids. Optimize the SV partition by minimizing a form of the Lebesgue constant. Verify the order of accuracy using the scalar conservation laws with an analytical solution; 2. Extend the SV method to the Navier-Stokes equations for the simulation of viscous flow problems. Two promising approaches to compute the viscous fluxes will be tested and analyzed; 3. Parallelize the 3D viscous SV flow solver using domain decomposition and message passing. Optimize the cache performance of the flow solver by designing data structures that minimize data access times; 4. Demonstrate the SV method with a wide range of flow problems including both discontinuities and complex smooth structures. The objectives remain the same as those outlined in the original proposal. We anticipate no technical obstacles in meeting these objectives.

  20. Using the critical incident technique to define a minimal data set for requirements elicitation in public health.

    PubMed

    Olvingson, Christina; Hallberg, Niklas; Timpka, Toomas; Greenes, Robert A

    2002-12-18

    The introduction of computer-based information systems (ISs) in public health provides enhanced possibilities for service improvements and hence also for improvement of the population's health. Not least, new communication systems can help in the socialization and integration process needed between the different professions and geographical regions. Therefore, development of ISs that truly support public health practices requires that technical, cognitive, and social issues be taken into consideration. A notable problem is to capture the 'voices' of all potential users, i.e., the viewpoints of different public health practitioners. Failing to capture these voices will result in inefficient or even useless systems. The aim of this study is to develop a minimal data set for capturing users' voices on problems experienced by public health professionals in their daily work and opinions about how these problems can be solved. The issues of concern thus captured can be used both as the basis for formulating the requirements of ISs for public health professionals and to create an understanding of the use context. Further, the data can help in directing the design to the features most important for the users.

  1. Autonomous Guidance Strategy for Spacecraft Formations and Reconfiguration Maneuvers

    NASA Astrophysics Data System (ADS)

    Wahl, Theodore P.

    A guidance strategy for autonomous spacecraft formation reconfiguration maneuvers is presented. The guidance strategy is presented as an algorithm that solves the linked assignment and delivery problems. The assignment problem is the task of assigning the member spacecraft of the formation to their new positions in the desired formation geometry. The guidance algorithm uses an auction process (also called an "auction algorithm"), presented in the dissertation, to solve the assignment problem. The auction uses the estimated maneuver and time of flight costs between the spacecraft and targets to create assignments which minimize a specific "expense" function for the formation. The delivery problem is the task of delivering the spacecraft to their assigned positions, and it is addressed through one of two guidance schemes described in this work. The first is a delivery scheme based on artificial potential function (APF) guidance. APF guidance uses the relative distances between the spacecraft, targets, and any obstacles to design maneuvers based on gradients of potential fields. The second delivery scheme is based on model predictive control (MPC); this method uses a model of the system dynamics to plan a series of maneuvers designed to minimize a unique cost function. The guidance algorithm uses an analytic linearized approximation of the relative orbital dynamics, the Yamanaka-Ankersen state transition matrix, in the auction process and in both delivery methods. The proposed guidance strategy is successful, in simulations, in autonomously assigning the members of the formation to new positions and in delivering the spacecraft to these new positions safely using both delivery methods. This guidance algorithm can serve as the basis for future autonomous guidance strategies for spacecraft formation missions.
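
    A minimal sketch of an auction process for the assignment subproblem (the cost matrix and bidding increment are invented, and the Yamanaka-Ankersen cost estimates are not reproduced); with a small positive increment eps the bidding loop terminates with a near-minimum-cost assignment:

        import numpy as np

        cost = np.array([[4.0, 1.0, 3.0],
                         [2.0, 0.0, 5.0],
                         [3.0, 2.0, 2.0]])   # spacecraft x target maneuver costs
        value = -cost                        # auction maximizes value = -cost
        n = value.shape[0]
        eps = 1e-3                           # bidding increment (optimality gap)
        price = np.zeros(n)
        owner = [None] * n                   # owner[target] = spacecraft
        unassigned = list(range(n))

        while unassigned:
            i = unassigned.pop()
            gains = value[i] - price                 # net benefit of each target
            j = int(np.argmax(gains))
            second = np.partition(gains, -2)[-2]     # second-best net benefit
            price[j] += gains[j] - second + eps      # raise the price by the bid
            if owner[j] is not None:
                unassigned.append(owner[j])          # previous owner is outbid
            owner[j] = i

        print(owner)   # owner[j] is the spacecraft assigned to target j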

  2. NMESys: An expert system for network fault detection

    NASA Technical Reports Server (NTRS)

    Nelson, Peter C.; Warpinski, Janet

    1991-01-01

    The problem of network management is becoming an increasingly difficult and challenging task. It is very common today to find heterogeneous networks consisting of many different types of computers, operating systems, and protocols. The complexity of implementing a network with this many components is difficult enough, while the maintenance of such a network is an even larger problem. A prototype network management expert system, NMESys, was implemented in the C Language Integrated Production System (CLIPS). NMESys concentrates on solving some of the critical problems encountered in managing a large network. The major goal of NMESys is to provide a network operator with an expert system tool to quickly and accurately detect hard failures and potential failures, and to minimize or eliminate user down time in a large network.

  3. Application of Particle Swarm Optimization Algorithm in the Heating System Planning Problem

    PubMed Central

    Ma, Rong-Jiang; Yu, Nan-Yang; Hu, Jun-Yi

    2013-01-01

    Based on the life cycle cost (LCC) approach, this paper presents an integral mathematical model and particle swarm optimization (PSO) algorithm for the heating system planning (HSP) problem. The proposed mathematical model minimizes the cost of the heating system as the objective for a given life cycle time. For the particularities of the HSP problem, the general particle swarm optimization algorithm was improved. An actual case study was calculated to check its feasibility in practical use. The results show that the improved particle swarm optimization (IPSO) algorithm solves the HSP problem better than the basic PSO algorithm. Moreover, the results also present the potential to provide useful information when making decisions in the practical planning process. Therefore, it is believed that if this approach is applied correctly and in combination with other elements, it can become a powerful and effective optimization tool for the HSP problem. PMID:23935429
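
    A minimal sketch of the plain PSO update on a toy objective (assuming numpy; the life-cycle-cost model and the paper's IPSO improvements are not reproduced):

        import numpy as np

        def objective(x):                    # toy cost surface (sphere function)
            return np.sum(x ** 2, axis=1)

        rng = np.random.default_rng(0)
        n, dim, w, c1, c2 = 30, 2, 0.7, 1.5, 1.5
        pos = rng.uniform(-5, 5, (n, dim))
        vel = np.zeros((n, dim))
        pbest = pos.copy()
        pbest_val = objective(pos)
        gbest = pbest[np.argmin(pbest_val)]

        for _ in range(200):
            r1, r2 = rng.random((n, dim)), rng.random((n, dim))
            vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
            pos += vel
            val = objective(pos)
            improved = val < pbest_val                  # update personal bests
            pbest[improved], pbest_val[improved] = pos[improved], val[improved]
            gbest = pbest[np.argmin(pbest_val)]         # update global best

        print(gbest, objective(gbest[None, :])[0])      # converges toward the origin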

  4. Hydrogen generation through static-feed water electrolysis

    NASA Technical Reports Server (NTRS)

    Jensen, F. C.; Schubert, F. H.

    1975-01-01

    A static-feed water electrolysis system (SFWES), developed under NASA sponsorship, is presented for potential applicability to terrestrial hydrogen production. The SFWES concept uses (1) an alkaline electrolyte to minimize power requirements and materials-compatibility problems, (2) a method where the electrolyte is retained in a thin porous matrix eliminating bulk electrolyte, and (3) a static water-feed mechanism to prevent electrode and electrolyte contamination and to promote system simplicity.

  5. Use of edible coatings to preserve quality of lightly (and slightly) processed products.

    PubMed

    Baldwin, E A; Nisperos-Carriedo, M O; Baker, R A

    1995-11-01

    Lightly processed agricultural products present a special problem to the food industry and to scientists involved in postharvest and food technology research. Light or minimal processing includes cutting, slicing, coring, peeling, trimming, or sectioning of agricultural produce. These products have an active metabolism that can result in deteriorative changes, such as increased respiration and ethylene production. If not controlled, these changes can lead to rapid senescence and general deterioration of the product. In addition, the surface water activity of cut fruits and vegetables is generally quite high, inviting microbial attack, which further reduces product stability. Methods for control of these changes are numerous and can include the use of edible coatings. Also mentioned in this review is the coating of nut products and of dried, dehydrated, and freeze-dried fruits. Technically, these are not considered to be minimally processed, but many of the problems and benefits of coating these products are similar to those of coating lightly processed products. Generally, the potential benefit of edible coatings for processed or lightly processed produce is to stabilize the product and thereby extend product shelf life. More specifically, coatings have the potential to reduce moisture loss, restrict oxygen entrance, lower respiration, retard ethylene production, seal in flavor volatiles, and carry additives that retard discoloration and microbial growth.

  6. Decomposing Large Inverse Problems with an Augmented Lagrangian Approach: Application to Joint Inversion of Body-Wave Travel Times and Surface-Wave Dispersion Measurements

    NASA Astrophysics Data System (ADS)

    Reiter, D. T.; Rodi, W. L.

    2015-12-01

    Constructing 3D Earth models through the joint inversion of large geophysical data sets presents numerous theoretical and practical challenges, especially when diverse types of data and model parameters are involved. Among the challenges are the computational complexity associated with large data and model vectors and the need to unify differing model parameterizations, forward modeling methods and regularization schemes within a common inversion framework. The challenges can be addressed in part by decomposing the inverse problem into smaller, simpler inverse problems that can be solved separately, providing one knows how to merge the separate inversion results into an optimal solution of the full problem. We have formulated an approach to the decomposition of large inverse problems based on the augmented Lagrangian technique from optimization theory. As commonly done, we define a solution to the full inverse problem as the Earth model minimizing an objective function motivated, for example, by a Bayesian inference formulation. Our decomposition approach recasts the minimization problem equivalently as the minimization of component objective functions, corresponding to specified data subsets, subject to the constraints that the minimizing models be equal. A standard optimization algorithm solves the resulting constrained minimization problems by alternating between the separate solution of the component problems and the updating of Lagrange multipliers that serve to steer the individual solution models toward a common model solving the full problem. We are applying our inversion method to the reconstruction of the crust and upper-mantle seismic velocity structure across Eurasia. Data for the inversion comprise a large set of P and S body-wave travel times and fundamental and first-higher mode Rayleigh-wave group velocities.
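
    A minimal sketch of the decomposition idea on two quadratic data-misfit terms (assuming numpy; A1, b1 and A2, b2 are random stand-ins for the two data subsets, not the actual physics). The multiplier update is what steers the two component solutions toward a common model:

        import numpy as np

        rng = np.random.default_rng(1)
        A1, b1 = rng.normal(size=(8, 3)), rng.normal(size=8)
        A2, b2 = rng.normal(size=(8, 3)), rng.normal(size=8)

        rho = 1.0
        x1 = x2 = np.zeros(3)
        lam = np.zeros(3)                 # Lagrange multipliers enforcing x1 = x2
        for _ in range(200):
            x1 = np.linalg.solve(A1.T @ A1 + rho * np.eye(3),
                                 A1.T @ b1 - lam + rho * x2)   # component problem 1
            x2 = np.linalg.solve(A2.T @ A2 + rho * np.eye(3),
                                 A2.T @ b2 + lam + rho * x1)   # component problem 2
            lam += rho * (x1 - x2)                             # multiplier update

        # agrees with solving the merged least-squares problem directly
        x_full = np.linalg.solve(A1.T @ A1 + A2.T @ A2, A1.T @ b1 + A2.T @ b2)
        print(np.linalg.norm(x1 - x_full))   # small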

  7. Problem of quality assurance during metal constructions welding via robotic technological complexes

    NASA Astrophysics Data System (ADS)

    Fominykh, D. S.; Rezchikov, A. F.; Kushnikov, V. A.; Ivashchenko, V. A.; Bogomolov, A. S.; Filimonyuk, L. Yu; Dolinina, O. N.; Kushnikov, O. V.; Shulga, T. E.; Tverdokhlebov, V. A.

    2018-05-01

    The problem of minimizing the probability of critical combinations of events that lead to a loss of welding quality in robotic process automation is examined. The problem is formulated, and models and algorithms for its solution are developed. The problem is solved by minimizing a criterion characterizing the losses caused by defective products. Solving the problem may enhance the quality and accuracy of the operations performed and reduce the losses caused by defective products.

  8. Quadratic Optimization in the Problems of Active Control of Sound

    NASA Technical Reports Server (NTRS)

    Loncaric, J.; Tsynkov, S. V.; Bushnell, Dennis M. (Technical Monitor)

    2002-01-01

    We analyze the problem of suppressing the unwanted component of a time-harmonic acoustic field (noise) on a predetermined region of interest. The suppression is rendered by active means, i.e., by introducing additional acoustic sources called controls that generate the appropriate anti-sound. Previously, we have obtained general solutions for active controls in both continuous and discrete formulations of the problem. We have also obtained optimal solutions that minimize the overall absolute acoustic source strength of active control sources. These optimal solutions happen to be particular layers of monopoles on the perimeter of the protected region. Mathematically, minimization of acoustic source strength is equivalent to minimization in the sense of L1. By contrast, in the current paper we formulate and study optimization problems that involve quadratic functions of merit. Specifically, we minimize the L2 norm of the control sources, and we consider both unconstrained and constrained minimization. The unconstrained L2 minimization is certainly the easiest problem to address numerically. On the other hand, the constrained approach allows one to analyze sophisticated geometries. In a special case, we compare our finite-difference optimal solutions to the continuous optimal solutions obtained previously using a semi-analytic technique. We also show that the optima obtained in the sense of L2 differ drastically from those obtained in the sense of L1.
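
    For the unconstrained L2 problem, the minimum-norm control follows directly from the pseudoinverse, as the following toy sketch illustrates (the transfer matrix and noise field are invented, not the paper's acoustic model):

        import numpy as np

        rng = np.random.default_rng(2)
        A = rng.normal(size=(4, 10))   # field produced on 4 points by 10 sources
        d = rng.normal(size=4)         # unwanted (noise) component to cancel

        u = np.linalg.pinv(A) @ (-d)   # minimum-L2-norm anti-sound strengths
        print(np.allclose(A @ u, -d), np.linalg.norm(u))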

  9. [siRNAs with high specificity to the target: a systematic design by CRM algorithm].

    PubMed

    Alsheddi, T; Vasin, L; Meduri, R; Randhawa, M; Glazko, G; Baranova, A

    2008-01-01

    The 'off-target' silencing effect hinders the development of siRNA-based therapeutic and research applications. A common solution to this problem is the employment of BLAST, which may miss significant alignments, or of the exhaustive Smith-Waterman algorithm, which is very time-consuming. We have developed a Comprehensive Redundancy Minimizer (CRM) approach for mapping all unique sequences ("targets") 9-to-15 nt in size within large sets of sequences (e.g. transcriptomes). CRM outputs a list of potential siRNA candidates for every transcript of the particular species. These candidates could be further analyzed by traditional "set-of-rules" types of siRNA designing tools. For human, 91% of transcripts are covered by candidate siRNAs with kernel targets of N = 15. We tested our approach on a collection of previously described, experimentally assessed siRNAs and found that the correlation between efficacy and presence in the CRM-approved set is significant (r = 0.215, p-value = 0.0001). An interactive database that contains a precompiled set of all human siRNA candidates with minimized redundancy is available at http://129.174.194.243. Application of CRM-based filtering minimizes potential "off-target" silencing effects and could improve routine siRNA applications.

  10. Nonlinear transient analysis by energy minimization: A theoretical basis for the ACTION computer code. [predicting the response of a lightweight aircraft during a crash

    NASA Technical Reports Server (NTRS)

    Kamat, M. P.

    1980-01-01

    The formulation basis for establishing the static or dynamic equilibrium configurations of finite element models of structures which may behave in the nonlinear range is provided. With both geometric and time independent material nonlinearities included, the development is restricted to simple one and two dimensional finite elements which are regarded as being the basic elements for modeling full aircraft-like structures under crash conditions. Representations of a rigid link and an impenetrable contact plane are added to the deformation model so that any number of nodes of the finite element model may be connected by a rigid link or may contact the plane. Equilibrium configurations are derived as the stationary conditions of a potential function of the generalized nodal variables of the model. Minimization of the nonlinear potential function is achieved by using the best current variable metric update formula for use in unconstrained minimization. Powell's conjugate gradient algorithm, which offers very low storage requirements at some slight increase in the total number of calculations, is the other alternative algorithm to be used for extremely large scale problems.

  11. Energy minimization on manifolds for docking flexible molecules

    PubMed Central

    Mirzaei, Hanieh; Zarbafian, Shahrooz; Villar, Elizabeth; Mottarella, Scott; Beglov, Dmitri; Vajda, Sandor; Paschalidis, Ioannis Ch.; Vakili, Pirooz; Kozakov, Dima

    2015-01-01

    In this paper we extend a recently introduced rigid body minimization algorithm, defined on manifolds, to the problem of minimizing the energy of interacting flexible molecules. The goal is to integrate moving the ligand in six dimensional rotational/translational space with internal rotations around rotatable bonds within the two molecules. We show that adding rotational degrees of freedom to the rigid moves of the ligand results in an overall optimization search space that is a manifold to which our manifold optimization approach can be extended. The effectiveness of the method is shown for three different docking problems of increasing complexity. First we minimize the energy of fragment-size ligands with a single rotatable bond as part of a protein mapping method developed for the identification of binding hot spots. Second, we consider energy minimization for docking a flexible ligand to a rigid protein receptor, an approach frequently used in existing methods. In the third problem we account for flexibility in both the ligand and the receptor. Results show that minimization using the manifold optimization algorithm is substantially more efficient than minimization using a traditional all-atom optimization algorithm while producing solutions of comparable quality. In addition to the specific problems considered, the method is general enough to be used in a large class of applications such as docking multidomain proteins with flexible hinges. The code is available under open source license (at http://cluspro.bu.edu/Code/Code_Rigtree.tar), and with minimal effort can be incorporated into any molecular modeling package. PMID:26478722

  12. Aesthetic perception and its minimal content: a naturalistic perspective

    PubMed Central

    Xenakis, Ioannis; Arnellos, Argyris

    2014-01-01

    Aesthetic perception is one of the most interesting topics for philosophers and scientists who investigate how it influences our interactions with objects and states of affairs. Over the last few years, several studies have attempted to determine “how aesthetics is represented in an object,” and how a specific feature of an object could evoke the respective feelings during perception. Despite the vast number of approaches and models, we believe that these explanations do not resolve the problem concerning the conditions under which aesthetic perception occurs, and what constitutes the content of these perceptions. Adopting a naturalistic perspective, we here view aesthetic perception as a normative process that enables agents to enhance their interactions with physical and socio-cultural environments. Considering perception as an anticipatory and preparatory process of detection and evaluation of indications of potential interactions (what we call “interactive affordances”), we argue that the minimal content of aesthetic perception is an emotionally valued indication of interaction potentiality. Aesthetic perception allows an agent to normatively anticipate interaction potentialities, thus increasing sense making and reducing the uncertainty of interaction. This conception of aesthetic perception is compatible with contemporary evidence from neuroscience, experimental aesthetics, and interaction design. The proposed model overcomes several problems of transcendental, art-centered, and objective aesthetics as it offers an alternative to the idea of aesthetic objects that carry inherent values by explaining “the aesthetic” as emergent in perception within a context of uncertain interaction. PMID:25285084

  13. First-order convex feasibility algorithms for x-ray CT

    PubMed Central

    Sidky, Emil Y.; Jørgensen, Jakob S.; Pan, Xiaochuan

    2013-01-01

    Purpose: Iterative image reconstruction (IIR) algorithms in computed tomography (CT) are based on algorithms for solving a particular optimization problem. Design of the IIR algorithm, therefore, is aided by knowledge of the solution to the optimization problem on which it is based. Oftentimes, however, it is impractical to achieve an accurate solution to the optimization problem of interest, which complicates design of IIR algorithms. This issue is particularly acute for CT with a limited angular-range scan, which leads to poorly conditioned system matrices and difficult-to-solve optimization problems. In this paper, we develop IIR algorithms which solve a certain type of optimization called convex feasibility. The convex feasibility approach can provide alternatives to unconstrained optimization approaches and at the same time allow for rapidly convergent algorithms for their solution—thereby facilitating the IIR algorithm design process. Methods: An accelerated version of the Chambolle-Pock (CP) algorithm is adapted to various convex feasibility problems of potential interest to IIR in CT. One of the proposed problems is seen to be equivalent to least-squares minimization, and two other problems provide alternatives to penalized, least-squares minimization. Results: The accelerated CP algorithms are demonstrated on a simulation of circular fan-beam CT with a limited scanning arc of 144°. The CP algorithms are seen in the empirical results to converge to the solution of their respective convex feasibility problems. Conclusions: Formulation of convex feasibility problems can provide a useful alternative to unconstrained optimization when designing IIR algorithms for CT. The approach is amenable to recent methods for accelerating first-order algorithms which may be particularly useful for CT with limited angular-range scanning. The present paper demonstrates the methodology, and future work will illustrate its utility in actual CT application. PMID:23464295
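
    A minimal sketch of the unaccelerated Chambolle-Pock iteration on a tiny least-squares problem (assuming numpy; a random matrix stands in for the CT system matrix, and the paper's convex-feasibility constraints are not reproduced):

        import numpy as np

        rng = np.random.default_rng(3)
        K = rng.normal(size=(20, 5))
        b = rng.normal(size=20)

        Lnorm = np.linalg.norm(K, 2)     # spectral norm ||K||, sets the step sizes
        tau = sigma = 0.9 / Lnorm        # ensures tau * sigma * ||K||^2 < 1
        theta = 1.0
        x = xbar = np.zeros(5)
        y = np.zeros(20)

        for _ in range(500):
            y = (y + sigma * (K @ xbar - b)) / (1 + sigma)   # prox of sigma*F*
            x_new = x - tau * (K.T @ y)                      # prox of tau*G (G = 0)
            xbar = x_new + theta * (x_new - x)               # over-relaxation
            x = x_new

        # matches the least-squares solution min_x 0.5*||K x - b||^2
        print(np.linalg.norm(x - np.linalg.lstsq(K, b, rcond=None)[0]))  # small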

  14. Fast Algorithms for Earth Mover’s Distance Based on Optimal Transport and L1 Type Regularization I

    DTIC Science & Technology

    2016-09-01

    The earth mover's distance (EMD), which is also named the Monge problem or the Wasserstein metric, plays a central role in many applications, including image processing and computer vision. EMD can be reformulated as a familiar homogeneous degree 1 regularized minimization; the new minimization problem is very similar to problems ...

  15. Smoothed low rank and sparse matrix recovery by iteratively reweighted least squares minimization.

    PubMed

    Lu, Canyi; Lin, Zhouchen; Yan, Shuicheng

    2015-02-01

    This paper presents a general framework for solving low-rank and/or sparse matrix minimization problems, which may involve multiple nonsmooth terms. The iteratively reweighted least squares (IRLS) method is a fast solver, which smooths the objective function and minimizes it by alternately updating the variables and their weights. However, the traditional IRLS can only solve sparse-only or low-rank-only minimization problems with a squared loss or an affine constraint. This paper generalizes IRLS to solve joint/mixed low-rank and sparse minimization problems, which are essential formulations for many tasks. As a concrete example, we solve the Schatten-p norm and l2,q-norm regularized low-rank representation problem by IRLS, and theoretically prove that the derived solution is a stationary point (globally optimal if p,q ≥ 1). Our convergence proof of IRLS is more general than previous ones that depend on the special properties of the Schatten-p norm and l2,q-norm. Extensive experiments on both synthetic and real data sets demonstrate that our IRLS is much more efficient.
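
    A minimal sketch of IRLS on the simplest member of this family, l1 minimization under an affine constraint (the data are synthetic, and the joint low-rank plus sparse case of the paper is not shown):

        import numpy as np

        rng = np.random.default_rng(4)
        A = rng.normal(size=(15, 40))
        x_true = np.zeros(40)
        x_true[[3, 17, 29]] = [1.0, -2.0, 0.5]    # sparse ground truth
        b = A @ x_true

        x = np.linalg.pinv(A) @ b                 # least-squares initialization
        eps = 1.0
        for _ in range(50):
            D = np.diag(np.abs(x) + eps)          # smoothed reweighting
            x = D @ A.T @ np.linalg.solve(A @ D @ A.T, b)   # weighted LS step
            eps = max(eps * 0.5, 1e-8)            # gradually sharpen the weights

        print(np.round(x[[3, 17, 29]], 3))        # approximately recovers the support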

  16. Analysis of an optimization-based atomistic-to-continuum coupling method for point defects

    DOE PAGES

    Olson, Derek; Shapeev, Alexander V.; Bochev, Pavel B.; ...

    2015-11-16

    Here, we formulate and analyze an optimization-based Atomistic-to-Continuum (AtC) coupling method for problems with point defects. Application of a potential-based atomistic model near the defect core enables accurate simulation of the defect. Away from the core, where site energies become nearly independent of the lattice position, the method switches to a more efficient continuum model. The two models are merged by minimizing the mismatch of their states on an overlap region, subject to the atomistic and continuum force balance equations acting independently in their domains. We prove that the optimization problem is well-posed and establish error estimates.

  17. The Difficult Stoma: Challenges and Strategies

    PubMed Central

    Strong, Scott A.

    2016-01-01

    The problems that a patient experiences after the creation of a temporary or permanent stoma can result from many factors, but a carefully constructed stoma located in an ideal location is typically associated with appropriate function and an acceptable quality of life. The construction of the stoma can be confounded by many concomitant conditions that increase the distance that the bowel must traverse or shorten the bowel's capacity to reach. Stomas can be further troubled by a variety of problems that potentially arise early in the recovery period or months later. Surgeons must be familiar with these obstacles and complications to avoid their occurrence and minimize their impact. PMID:27247541

  18. In-line Kevlar filters for microfiltration of transuranic-containing liquid streams.

    PubMed

    Gonzales, G J; Beddingfield, D H; Lieberman, J L; Curtis, J M; Ficklin, A C

    1992-06-01

    The Department of Energy Rocky Flats Plant has numerous ongoing efforts to minimize the generation of residue and waste and to improve safety and health. Spent polypropylene liquid filters held for plutonium recovery, known as "residue," or as transuranic mixed waste contribute to storage capacity problems and create radiation safety and health considerations. An in-line process-liquid filter made of Kevlar polymer fiber has been evaluated for its potential to: (1) minimize filter residue, (2) recover economically viable quantities of plutonium, (3) minimize liquid storage tank and process-stream radioactivity, and (4) reduce potential personnel radiation exposure associated with these sources. Kevlar filters were rated at less than or equal to 1 μm nominal filtration and are capable of reducing undissolved plutonium particles to more than 10 times below the economic discard limit; however, they produced high back-pressures and are not yet acid resistant. Kevlar filters performed independently of loaded particles serving as a sieve. Polypropylene filters removed molybdenum particles at efficiencies equal to Kevlar filters only after loading molybdenum during recirculation events. Kevlar's high-efficiency microfiltration of process-liquid streams for the removal of actinides has the potential to reduce personnel radiation exposure by a factor of 6 or greater, while simultaneously achieving a reduction in the generation of filter residue and waste by a factor of 7. Insoluble plutonium may be recoverable from Kevlar filters by incineration.

  19. Control algorithms for dynamic attenuators.

    PubMed

    Hsieh, Scott S; Pelc, Norbert J

    2014-06-01

    The authors describe algorithms to control dynamic attenuators in CT and compare their performance using simulated scans. Dynamic attenuators are prepatient beam shaping filters that modulate the distribution of x-ray fluence incident on the patient on a view-by-view basis. These attenuators can reduce dose while improving key image quality metrics such as peak or mean variance. In each view, the attenuator presents several degrees of freedom which may be individually adjusted. The total number of degrees of freedom across all views is very large, making many optimization techniques impractical. The authors develop a theory for optimally controlling these attenuators. Special attention is paid to a theoretically perfect attenuator which controls the fluence for each ray individually, but the authors also investigate and compare three other, practical attenuator designs which have been previously proposed: the piecewise-linear attenuator, the translating attenuator, and the double wedge attenuator. The authors pose and solve the optimization problems of minimizing the mean and peak variance subject to a fixed dose limit. For a perfect attenuator and mean variance minimization, this problem can be solved in simple, closed form. For other attenuator designs, the problem can be decomposed into separate problems for each view to greatly reduce the computational complexity. Peak variance minimization can be approximately solved using iterated, weighted mean variance (WMV) minimization. Also, the authors develop heuristics for the perfect and piecewise-linear attenuators which do not require a priori knowledge of the patient anatomy. The authors compare these control algorithms on different types of dynamic attenuators using simulated raw data from forward projected DICOM files of a thorax and an abdomen. The translating and double wedge attenuators reduce dose by an average of 30% relative to current techniques (bowtie filter with tube current modulation) without increasing peak variance. The 15-element piecewise-linear dynamic attenuator reduces dose by an average of 42%, and the perfect attenuator reduces dose by an average of 50%. Improvements in peak variance are several times larger than improvements in mean variance. Heuristic control eliminates the need for a prescan. For the piecewise-linear attenuator, the cost of heuristic control is an increase in dose of 9%. The proposed iterated WMV minimization produces results that are within a few percent of the true solution. Dynamic attenuators show potential for significant dose reduction. A wide class of dynamic attenuators can be accurately controlled using the described methods.

  20. Gravitational forces and moments on spacecraft

    NASA Technical Reports Server (NTRS)

    Kane, T. R.; Likins, P. W.

    1975-01-01

    The solution of problems of attitude dynamics of spacecraft and the influence of gravitational forces and moments is examined. Arguments are presented based on Newton's law of gravitation, and employing the methods of Newtonian (vectorial) mechanics, with minimal recourse to the classical concepts of potential theory. The necessary ideas were developed and relationships were established to permit the representation of gravitational forces and moments exerted on bodies in space by other bodies, both in terms involving the mass distribution properties of the bodies, and in terms of vector operations on those scalar functions classically described as gravitational potential functions.

  1. Groundstates of the Choquard equations with a sign-changing self-interaction potential

    NASA Astrophysics Data System (ADS)

    Battaglia, Luca; Van Schaftingen, Jean

    2018-06-01

    We consider a nonlinear Choquard equation $-\Delta u + u = (V * |u|^p)\,|u|^{p-2}u$ in $\mathbb{R}^N$, when the self-interaction potential V is unbounded from below. Under some assumptions on V and on p, covering p = 2 and V being the one- or two-dimensional Newton kernel, we prove the existence of a nontrivial groundstate solution $u \in H^1(\mathbb{R}^N) \setminus \{0\}$ by solving a relaxed problem by a constrained minimization and then proving the convergence of the relaxed solutions to a groundstate of the original equation.

  2. Flow Past a Descending Balloon

    NASA Technical Reports Server (NTRS)

    Baginski, Frank

    2001-01-01

    In this report, we present our findings related to aerodynamic loading of partially inflated balloon shapes, considering partially inflated inextensible natural-shape balloons and some relevant problems in potential flow. For the axisymmetric modeling, we modified our Balloon Design Shape Program (BDSP) to handle axisymmetric inextensible ascent shapes with aerodynamic loading. For a few simple examples of two-dimensional potential flows, we used the Matlab PDE Toolbox. In addition, we propose a model for aerodynamic loading of strained energy-minimizing balloon shapes with lobes. Numerical solutions are presented for partially inflated strained balloon shapes with lobes and no aerodynamic loading.

  3. Hazards in the hospital.

    PubMed

    Seibert, P J

    1994-02-01

    In an earlier article (JAVMA, Jan 15, 1994), the author outlined some of the first steps necessary in establishing a hospital safety program that will comply with current Occupational Safety and Health Administration (OSHA) guidelines. One of the main concerns of the OSHA guidelines is that there be written plans for managing hazardous materials, performing dangerous jobs, and dealing with other potential safety problems. In this article, the author discusses potentially hazardous situations commonly found in veterinary practices and provides details on how to minimize the risks associated with those situations and how to implement safety procedures that will comply with the OSHA guidelines.

  4. A process for Decision-making after Pilot and feasibility Trials (ADePT): development following a feasibility study of a complex intervention for pelvic organ prolapse

    PubMed Central

    2013-01-01

    Background Current Medical Research Council (MRC) guidance on complex interventions advocates pilot trials and feasibility studies as part of a phased approach to the development, testing, and evaluation of healthcare interventions. In this paper we discuss the results of a recent feasibility study and pilot trial for a randomized controlled trial (RCT) of pelvic floor muscle training for prolapse (ClinicalTrials.gov: NCT01136889). The ways in which researchers decide to respond to the results of feasibility work may have significant repercussions for both the nature and degree of tension between internal and external validity in a definitive trial. Methods We used methodological issues to classify and analyze the problems that arose in the feasibility study. Four centers participated with the aim of randomizing 50 women. Women were eligible if they had prolapse of any type, of stage I to IV, and had a pessary successfully fitted. Postal questionnaires were administered at baseline, 6 months, and 7 months post-randomization. After identifying problems arising within the pilot study we then sought to locate potential solutions that might minimize the trade-off between a subsequent explanatory versus pragmatic trial. Results The feasibility study pointed to significant potential problems in relation to participant recruitment, features of the intervention, acceptability of the intervention to participants, and outcome measurement. Finding minimal evidence to support our decision-making regarding the transition from feasibility work to a trial, we developed a systematic process (A process for Decision-making after Pilot and feasibility Trials (ADePT)) which we subsequently used as a guide. The process sought to: 1) encourage the systematic identification and appraisal of problems and potential solutions; 2) improve the transparency of decision-making processes; and 3) reveal the tensions that exist between pragmatic and explanatory choices. Conclusions We have developed a process that may aid researchers in their attempt to identify the most appropriate solutions to problems identified within future pilot and feasibility RCTs. The process includes three key steps: a decision about the type of problem, the identification of all solutions (whether addressed within the intervention, trial design or clinical context), and a systematic appraisal of these solutions. PMID:24160371

  5. Toward Affordable Systems III: Portfolio Management for Army Engineering and Manufacturing Development Programs

    DTIC Science & Technology

    2012-02-01

    Agencies use capability portfolio management to optimize capability investments and minimize risk in meeting the DoD needs across the defense ... broadly expose potential problem areas—that is, which requirements (or areas of demand) are at risk of not being met by that particular portfolio

  6. The analytic solution of the firm's cost-minimization problem with box constraints and the Cobb-Douglas model

    NASA Astrophysics Data System (ADS)

    Bayón, L.; Grau, J. M.; Ruiz, M. M.; Suárez, P. M.

    2012-12-01

    One of the most well-known problems in the field of Microeconomics is the Firm's Cost-Minimization Problem. In this paper we establish the analytical expression for the cost function using the Cobb-Douglas model and considering maximum constraints for the inputs. Moreover, we prove that it belongs to the class C^1.
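
    For the interior case in which none of the box constraints bind, the conditional factor demands have the familiar closed form; a worked sketch with invented numbers (the paper's contribution concerns the cases where the maximum input constraints are active):

        import numpy as np

        # minimize w1*x1 + w2*x2  subject to  x1**alpha * x2**beta = y
        w1, w2, alpha, beta, y = 2.0, 3.0, 0.4, 0.6, 10.0

        # First-order conditions give x2/x1 = (beta*w1)/(alpha*w2); substituting
        # into the production constraint yields the classic factor demands:
        x1 = (y * ((alpha * w2) / (beta * w1)) ** beta) ** (1.0 / (alpha + beta))
        x2 = (beta * w1) / (alpha * w2) * x1

        print(x1, x2, w1 * x1 + w2 * x2)             # inputs and cost C(w, y)
        print(np.isclose(x1 ** alpha * x2 ** beta, y))  # constraint satisfied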

  7. Automated Optimization of Potential Parameters

    PubMed Central

    Di Pierro, Michele; Elber, Ron

    2013-01-01

    An algorithm and software to refine parameters of empirical energy functions according to condensed phase experimental measurements are discussed. The algorithm is based on sensitivity analysis and local minimization of the differences between experiment and simulation as a function of potential parameters. It is illustrated for a toy problem of alanine dipeptide and is applied to folding of the peptide WAAAH. The helix fraction is highly sensitive to the potential parameters while the slope of the melting curve is not. The sensitivity variations make it difficult to satisfy both observations simultaneously. We conjecture that there is no set of parameters that reproduces experimental melting curves of short peptides that are modeled with the usual functional form of a force field. PMID:24015115

  8. Orbit period modulation for relative motion using continuous low thrust in the two-body and restricted three-body problems

    NASA Astrophysics Data System (ADS)

    Arnot, C. S.; McInnes, C. R.; McKay, R. J.; Macdonald, M.; Biggs, J.

    2018-02-01

    This paper presents rich new families of relative orbits for spacecraft formation flight generated through the application of continuous thrust with only minimal intervention into the dynamics of the problem. Such simplicity facilitates implementation for small, low-cost spacecraft with only position state feedback, and yet permits interesting and novel relative orbits in both two- and three-body systems with potential future applications in space-based interferometry, hyperspectral sensing, and on-orbit inspection. Position feedback is used to modify the natural frequencies of the linearised relative dynamics through direct manipulation of the system eigenvalues, producing new families of stable relative orbits. Specifically, in the Hill-Clohessy-Wiltshire frame, simple adaptations of the linearised dynamics are used to produce a circular relative orbit, frequency-modulated out-of-plane motion, and a novel doubly periodic cylindrical relative trajectory for the purposes of on-orbit inspection. Within the circular restricted three-body problem, a similar minimal approach with position feedback is used to generate new families of stable, frequency-modulated relative orbits in the vicinity of a Lagrange point, culminating in the derivation of the gain requirements for synchronisation of the in-plane and out-of-plane frequencies to yield a singly periodic tilted elliptical relative orbit with potential use as a Lunar far-side communications relay. The Δv requirements for the cylindrical relative orbit and singly periodic Lagrange point orbit are analysed, and it is shown that these requirements are modest and feasible for existing low-thrust propulsion technology.

  9. Potential effect of diaper and cotton ball contamination on NMR- and LC/MS-based metabonomics studies of urine from newborn babies.

    PubMed

    Goodpaster, Aaron M; Ramadas, Eshwar H; Kennedy, Michael A

    2011-02-01

    Nuclear magnetic resonance (NMR) and liquid chromatography/mass spectrometry (LC/MS) based metabonomics screening of urine has great potential for discovery of biomarkers for diseases that afflict newborn and preterm infants. However, urine collection from newborn infants presents a potential confounding problem due to the possibility that contaminants might leach from materials used for urine collection and influence statistical analysis of metabonomics data. In this manuscript, we have analyzed diaper and cotton ball contamination using synthetic urine to assess its potential to influence the outcome of NMR- and LC/MS-based metabonomics studies of human infant urine. Eight diaper brands were examined using the "diaper plus cotton ball" technique. Data were analyzed using conventional principal components analysis, as well as a statistical significance algorithm developed for, and applied to, NMR data. Results showed most diaper brands had distinct contaminant profiles that could potentially influence NMR- and LC/MS-based metabonomics studies. On the basis of this study, it is recommended that diaper and cotton ball brands be characterized using metabonomics methodologies prior to initiating a metabonomics study to ensure that contaminant profiles are minimal or manageable and that the same diaper and cotton ball brands be used throughout a study to minimize variation.

  10. Minimization of the root of a quadratic functional under a system of affine equality constraints with application to portfolio management

    NASA Astrophysics Data System (ADS)

    Landsman, Zinoviy

    2008-10-01

    We present an explicit closed-form solution of the problem of minimizing the root of a quadratic functional subject to a system of affine constraints. The result generalizes Z. Landsman, Minimization of the root of a quadratic functional under an affine equality constraint, J. Comput. Appl. Math. (2007, in press), where the optimization problem was solved under only one linear constraint. This is of interest for solving significant problems pertaining to financial economics as well as some classes of feasibility and optimization problems which frequently occur in tomography and other fields. The results are illustrated in the problem of optimal portfolio selection, and the particular case in which the expected return of the portfolio is certain is discussed.

  11. Wavelets in electronic structure calculations

    NASA Astrophysics Data System (ADS)

    Modisette, Jason Perry

    1997-09-01

    Ab initio calculations of the electronic structure of bulk materials and large clusters are not possible on today's computers using current techniques. The storage and diagonalization of the Hamiltonian matrix are the limiting factors in both memory and execution time. The scaling of both quantities with problem size can be reduced by using approximate diagonalization or direct minimization of the total energy with respect to the density matrix in conjunction with a localized basis. Wavelet basis members are much more localized than conventional bases such as Gaussians or numerical atomic orbitals. This localization leads to sparse matrices of the operators that arise in SCF multi-electron calculations. We have investigated the construction of the one-electron Hamiltonian, and also the effective one-electron Hamiltonians that appear in density-functional and Hartree-Fock theories. We develop efficient methods for the generation of the kinetic energy and potential matrices, the Hartree and exchange potentials, and the local exchange-correlation potential of the LDA. Test calculations are performed on one-electron problems with a variety of potentials in one and three dimensions.

  12. Pastoral care of mental illness and the accommodation of African Christian beliefs and practices by UK clergy.

    PubMed

    Leavey, Gerard; Loewenthal, Kate; King, Michael

    2017-02-01

    Faith-based organisations, especially those related to specific ethnic or migrant groups, are increasingly viewed by secular Western government agencies as potential collaborators in community health and welfare programmes. Although clergy are often called upon to provide mental health pastoral care, their response to such problems remains relatively unexamined. This paper examines how clergy working in multiethnic settings do not always have the answers that people want, or perhaps need, to problems of misfortune and suffering. In the UK, these shortcomings can be attributed, generally, to a lack of training in mental health problems and to minimal collaboration with health services. The current paper attempts to highlight the dilemmas of the established churches' involvement in mental health care in the context of diversity. We explore the inability of established churches to accommodate African and other spiritual beliefs and practices related to the etiology and treatment of mental health problems.

  13. Optimization-based additive decomposition of weakly coercive problems with applications

    DOE PAGES

    Bochev, Pavel B.; Ridzal, Denis

    2016-01-27

    In this study, we present an abstract mathematical framework for an optimization-based additive decomposition of a large class of variational problems into a collection of concurrent subproblems. The framework replaces a given monolithic problem by an equivalent constrained optimization formulation in which the subproblems define the optimization constraints and the objective is to minimize the mismatch between their solutions. The significance of this reformulation stems from the fact that one can solve the resulting optimality system by an iterative process involving only solutions of the subproblems. Consequently, assuming that stable numerical methods and efficient solvers are available for every subproblem, our reformulation leads to robust and efficient numerical algorithms for a given monolithic problem by breaking it into subproblems that can be handled more easily. An application of the framework to the Oseen equations illustrates its potential.

  14. DQM: Decentralized Quadratically Approximated Alternating Direction Method of Multipliers

    NASA Astrophysics Data System (ADS)

    Mokhtari, Aryan; Shi, Wei; Ling, Qing; Ribeiro, Alejandro

    2016-10-01

    This paper considers decentralized consensus optimization problems where nodes of a network have access to different summands of a global objective function. Nodes cooperate to minimize the global objective by exchanging information with neighbors only. A decentralized version of the alternating direction method of multipliers (DADMM) is a common method for solving this category of problems. DADMM exhibits linear convergence rate to the optimal objective but its implementation requires solving a convex optimization problem at each iteration. This can be computationally costly and may result in large overall convergence times. The decentralized quadratically approximated ADMM algorithm (DQM), which minimizes a quadratic approximation of the objective function that DADMM minimizes at each iteration, is proposed here. The consequent reduction in computational time is shown to have minimal effect on convergence properties. Convergence still proceeds at a linear rate with a guaranteed constant that is asymptotically equivalent to the DADMM linear convergence rate constant. Numerical results demonstrate advantages of DQM relative to DADMM and other alternatives in a logistic regression problem.
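
    For a feel of the setting, the sketch below runs plain decentralized gradient descent on a ring, a much simpler scheme than DQM or DADMM, but with the same structure: each node mixes its estimate with its neighbors' and then steps on its own local gradient. The network, local data, and step size are illustrative assumptions.

        # Toy decentralized consensus optimization (not the DQM update):
        # node i holds f_i(x) = 0.5*(x - a_i)**2 and communicates only with
        # its ring neighbors; all estimates approach the global minimizer
        # mean(a). Data and step size are assumptions.
        import numpy as np

        a = np.array([1.0, 4.0, 2.0, 7.0, 6.0])   # local data at 5 nodes
        x = np.zeros_like(a)                       # local estimates
        W = np.zeros((5, 5))                       # doubly stochastic ring mixing
        for i in range(5):
            for j in (i - 1, i, i + 1):
                W[i, j % 5] = 1.0 / 3.0

        alpha = 0.05                               # local gradient step size
        for _ in range(500):
            x = W @ x - alpha * (x - a)            # mix, then local gradient step
        print(x, a.mean())                         # estimates near mean(a) = 4.0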

  15. Minimal norm constrained interpolation. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Irvine, L. D.

    1985-01-01

    In computational fluid dynamics and in CAD/CAM, a physical boundary is usually known only discretely and most often must be approximated. An acceptable approximation preserves the salient features of the data such as convexity and concavity. In this dissertation, a smooth interpolant which is locally concave where the data are concave and is locally convex where the data are convex is described. The interpolant is found by posing and solving a minimization problem whose solution is a piecewise cubic polynomial. The problem is solved indirectly by using the Peano kernel theorem to recast it into an equivalent minimization problem having the second derivative of the interpolant as the solution. This approach leads to the solution of a nonlinear system of equations. It is shown that Newton's method is an exceptionally attractive and efficient method for solving the nonlinear system of equations. Examples of shape-preserving interpolants, as well as convergence results obtained by using Newton's method, are also shown. A FORTRAN program to compute these interpolants is listed. The problem of computing the interpolant of minimal norm from a convex cone in a normed dual space is also discussed. An extension of de Boor's work on minimal norm unconstrained interpolation is presented.
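
    For readers who want a ready-made analogue of the shape-preserving behavior (though not the dissertation's minimal-norm construction), scipy's PCHIP interpolant preserves the monotonicity of the data where an unconstrained cubic spline can overshoot; the data below are invented to exhibit the difference.

        # Shape preservation in practice: PCHIP vs. an unconstrained spline
        # on data with a sharp rise (values are illustrative assumptions).
        import numpy as np
        from scipy.interpolate import PchipInterpolator, CubicSpline

        x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
        y = np.array([0.0, 0.1, 0.2, 2.0, 2.1])   # flat, jump, flat

        xs = np.linspace(0.0, 4.0, 9)
        print(PchipInterpolator(x, y)(xs))   # stays monotone between data points
        print(CubicSpline(x, y)(xs))         # can overshoot around the jump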

  16. B-52G crew noise exposure study

    NASA Astrophysics Data System (ADS)

    Decker, W. H.; Nixon, C. W.

    1985-08-01

    The B-52G aircraft produces acoustic environments that are potentially hazardous, interfere with voice communications, and may degrade task performance. Numerous reports from aircrew of high noise levels at crew locations have been documented for those B-52G aircraft that have been modified with the Offensive Avionics System. To alleviate and minimize the excessive noise exposures of aircrews, a study of the noise problem in the B-52G was deemed necessary. First, in-flight noise measurements were obtained at key personnel locations on a B-52G during a typical training mission. Then, extensive laboratory analyses were conducted on these in-flight noise data. The resulting noise exposure data were evaluated for the various segments of the flight profile and for the total profile, relative to allowable noise exposures. Finally, recommendations were developed for short-term and long-term approaches toward potential improvement of the B-52G noise exposure problem.

  17. Variational Approach to Enhanced Sampling and Free Energy Calculations

    NASA Astrophysics Data System (ADS)

    Valsson, Omar; Parrinello, Michele

    2014-08-01

    The ability of widely used sampling methods, such as molecular dynamics or Monte Carlo simulations, to explore complex free energy landscapes is severely hampered by the presence of kinetic bottlenecks. A large number of solutions have been proposed to alleviate this problem. Many are based on the introduction of a bias potential which is a function of a small number of collective variables. However constructing such a bias is not simple. Here we introduce a functional of the bias potential and an associated variational principle. The bias that minimizes the functional relates in a simple way to the free energy surface. This variational principle can be turned into a practical, efficient, and flexible sampling method. A number of numerical examples are presented which include the determination of a three-dimensional free energy surface. We argue that, beside being numerically advantageous, our variational approach provides a convenient and novel standpoint for looking at the sampling problem.

  18. Non-Boolean computing with nanomagnets for computer vision applications

    NASA Astrophysics Data System (ADS)

    Bhanja, Sanjukta; Karunaratne, D. K.; Panchumarthy, Ravi; Rajaram, Srinath; Sarkar, Sudeep

    2016-02-01

    The field of nanomagnetism has recently attracted tremendous attention as it can potentially deliver low-power, high-speed and dense non-volatile memories. It is now possible to engineer the size, shape, spacing, orientation and composition of sub-100 nm magnetic structures. This has spurred the exploration of nanomagnets for unconventional computing paradigms. Here, we harness the energy-minimization nature of nanomagnetic systems to solve the quadratic optimization problems that arise in computer vision applications, which are computationally expensive. By exploiting the magnetization states of nanomagnetic disks as state representations of a vortex and single domain, we develop a magnetic Hamiltonian and implement it in a magnetic system that can identify the salient features of a given image with more than 85% true positive rate. These results show the potential of this alternative computing method to develop a magnetic coprocessor that might solve complex problems in fewer clock cycles than traditional processors.

  19. An algorithm for designing minimal microbial communities with desired metabolic capacities

    PubMed Central

    Eng, Alexander; Borenstein, Elhanan

    2016-01-01

    Motivation: Recent efforts to manipulate various microbial communities, such as fecal microbiota transplant and bioreactor systems’ optimization, suggest a promising route for microbial community engineering with numerous medical, environmental and industrial applications. However, such applications are currently restricted in scale and often rely on mimicking or enhancing natural communities, calling for the development of tools for designing synthetic communities with specific, tailored, desired metabolic capacities. Results: Here, we present a first step toward this goal, introducing a novel algorithm for identifying minimal sets of microbial species that collectively provide the enzymatic capacity required to synthesize a set of desired target product metabolites from a predefined set of available substrates. Our method integrates a graph theoretic representation of network flow with the set cover problem in an integer linear programming (ILP) framework to simultaneously identify possible metabolic paths from substrates to products while minimizing the number of species required to catalyze these metabolic reactions. We apply our algorithm to successfully identify minimal communities both in a set of simple toy problems and in more complex, realistic settings, and to investigate metabolic capacities in the gut microbiome. Our framework adds to the growing toolset for supporting informed microbial community engineering and for ultimately realizing the full potential of such engineering efforts. Availability and implementation: The algorithm source code, compilation, usage instructions and examples are available under a non-commercial research use only license at https://github.com/borenstein-lab/CoMiDA. Contact: elbo@uw.edu Supplementary information: Supplementary data are available at Bioinformatics online. PMID:27153571
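
    A brute-force sketch conveys the selection problem (the paper itself couples network flow with set cover in an ILP): enumerate candidate communities by increasing size and return the first whose combined reaction sets cover everything required on the substrate-to-product path. Species and reaction labels here are invented.

        # Brute-force minimal-community selection (toy stand-in for the ILP).
        from itertools import combinations

        species = {                        # invented species -> reaction sets
            "s1": {"r1", "r2"},
            "s2": {"r2", "r3"},
            "s3": {"r3", "r4"},
            "s4": {"r1", "r4"},
        }
        required = {"r1", "r2", "r3", "r4"}   # reactions needed for the products

        def minimal_community():
            for size in range(1, len(species) + 1):
                for combo in combinations(species, size):
                    covered = set().union(*(species[s] for s in combo))
                    if required <= covered:
                        return combo          # first (smallest) covering set
            return None

        print(minimal_community())            # ('s1', 's3') covers all four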

  20. Optimal design of groundwater remediation system using a probabilistic multi-objective fast harmony search algorithm under uncertainty

    NASA Astrophysics Data System (ADS)

    Luo, Qiankun; Wu, Jianfeng; Yang, Yun; Qian, Jiazhong; Wu, Jichun

    2014-11-01

    This study develops a new probabilistic multi-objective fast harmony search algorithm (PMOFHS) for optimal design of groundwater remediation systems under uncertainty associated with the hydraulic conductivity (K) of aquifers. The PMOFHS integrates the previously developed deterministic multi-objective optimization method, namely multi-objective fast harmony search algorithm (MOFHS) with a probabilistic sorting technique to search for Pareto-optimal solutions to multi-objective optimization problems in a noisy hydrogeological environment arising from insufficient K data. The PMOFHS is then coupled with the commonly used flow and transport codes, MODFLOW and MT3DMS, to identify the optimal design of groundwater remediation systems for a two-dimensional hypothetical test problem and a three-dimensional Indiana field application involving two objectives: (i) minimization of the total remediation cost through the engineering planning horizon, and (ii) minimization of the mass remaining in the aquifer at the end of the operational period, whereby the pump-and-treat (PAT) technology is used to clean up contaminated groundwater. Also, Monte Carlo (MC) analysis is employed to evaluate the effectiveness of the proposed methodology. Comprehensive analysis indicates that the proposed PMOFHS can find Pareto-optimal solutions with low variability and high reliability and is a potentially effective tool for optimizing multi-objective groundwater remediation problems under uncertainty.

  1. A convex optimization method for self-organization in dynamic (FSO/RF) wireless networks

    NASA Astrophysics Data System (ADS)

    Llorca, Jaime; Davis, Christopher C.; Milner, Stuart D.

    2008-08-01

    Next generation communication networks are becoming increasingly complex systems. Previously, we presented a novel physics-based approach to model dynamic wireless networks as physical systems which react to local forces exerted on network nodes. We showed that under clear atmospheric conditions the network communication energy can be modeled as the potential energy of an analogous spring system and presented a distributed mobility control algorithm where nodes react to local forces driving the network to energy minimizing configurations. This paper extends our previous work by including the effects of atmospheric attenuation and transmitted power constraints in the optimization problem. We show how our new formulation still results in a convex energy minimization problem. Accordingly, an updated force-driven mobility control algorithm is presented. Forces on mobile backbone nodes are computed as the negative gradient of the new energy function. Results show how in the presence of atmospheric obscuration stronger forces are exerted on network nodes that make them move closer to each other, avoiding loss of connectivity. We show results in terms of network coverage and backbone connectivity and compare the developed algorithms for different scenarios.
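
    A minimal sketch of the spring analogy: treat each backbone link as a spring with a preferred length and move nodes along the resulting forces, i.e. the negative gradient of the total potential energy. Positions, topology, rest length, and gains are assumptions, and atmospheric attenuation is omitted.

        # Force-driven mobility sketch: nodes relax toward the layout that
        # minimizes total spring energy. All constants are assumptions.
        import numpy as np

        pos = np.array([[0.0, 0.0], [10.0, 0.0], [5.0, 9.0]])  # node positions
        links = [(0, 1), (1, 2), (0, 2)]                        # backbone links
        k_s, L0, step = 1.0, 5.0, 0.02      # spring constant, rest length, gain

        def forces(p):
            f = np.zeros_like(p)
            for i, j in links:
                d = p[j] - p[i]
                r = np.linalg.norm(d)
                f[i] += k_s * (r - L0) * d / r   # Hooke force along the link
                f[j] -= k_s * (r - L0) * d / r
            return f

        for _ in range(300):
            pos = pos + step * forces(pos)       # negative-gradient update
        print(pos)   # approaches an equilateral layout with side length L0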

  2. Approximate error conjugation gradient minimization methods

    DOEpatents

    Kallman, Jeffrey S

    2013-05-21

    In one embodiment, a method includes selecting a subset of rays from a set of all rays to use in an error calculation for a constrained conjugate gradient minimization problem, calculating an approximate error using the subset of rays, and calculating a minimum in a conjugate gradient direction based on the approximate error. In another embodiment, a system includes a processor for executing logic, logic for selecting a subset of rays from a set of all rays to use in an error calculation for a constrained conjugate gradient minimization problem, logic for calculating an approximate error using the subset of rays, and logic for calculating a minimum in a conjugate gradient direction based on the approximate error. In other embodiments, computer program products, methods, and systems are described capable of using approximate error in constrained conjugate gradient minimization problems.
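
    In the spirit of these claims (though not the patented implementation), the sketch below runs a conjugate-gradient-style least-squares reconstruction in which, at each iteration, the gradient and step length are computed from a random subset of "rays" (rows) rather than from all of them. The synthetic system and subset size are assumptions.

        # Approximate-error CG sketch: subsample rays for the error terms.
        import numpy as np

        rng = np.random.default_rng(2)
        A = rng.normal(size=(2000, 50))      # one row per ray (synthetic)
        x_true = rng.normal(size=50)
        b = A @ x_true                       # noiseless measurements

        x = np.zeros(50)
        d = np.zeros(50)
        g_prev = None
        for _ in range(100):
            rows = rng.choice(2000, size=200, replace=False)   # ray subset
            g = A[rows].T @ (A[rows] @ x - b[rows]) / 200      # approx. gradient
            beta = 0.0 if g_prev is None else \
                max(0.0, g @ (g - g_prev) / (g_prev @ g_prev)) # PR+ coefficient
            d = -g + beta * d                                  # conjugate direction
            if g @ d >= 0:                                     # descent safeguard
                d = -g
            Ad = A[rows] @ d
            x = x + (-(g @ d) / (Ad @ Ad / 200)) * d           # exact step on subset
            g_prev = g
        print(np.linalg.norm(x - x_true))                      # small residual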

  3. On multiple crack identification by ultrasonic scanning

    NASA Astrophysics Data System (ADS)

    Brigante, M.; Sumbatyan, M. A.

    2018-04-01

    The present work develops an approach which reduces operator equations arising in engineering problems to the problem of minimizing a discrepancy functional. For this minimization, an algorithm of random global search is proposed, which is related to genetic algorithms. The efficiency of the method is demonstrated by solving the problem of simultaneously identifying several linear cracks forming an array in an elastic medium using circular ultrasonic scanning.

  4. Adaptive Particle Swarm Optimizer with Varying Acceleration Coefficients for Finding the Most Stable Conformer of Small Molecules.

    PubMed

    Agrawal, Shikha; Silakari, Sanjay; Agrawal, Jitendra

    2015-11-01

    A novel parameter automation strategy for Particle Swarm Optimization, called APSO (Adaptive PSO), is proposed. The algorithm is designed to efficiently control the local search and convergence to the global optimum solution. The parameter c1 controls the impact of the cognitive component on the particle trajectory, and c2 controls the impact of the social component. Instead of fixing the values of c1 and c2, this paper updates these acceleration coefficients by considering the time variation of the evaluation function, along with a varying inertia weight factor in PSO. Here, the maximum and minimum values of the evaluation function are used to gradually decrease and increase the values of c1 and c2, respectively. Molecular energy minimization is one of the most challenging unsolved problems, and it can be formulated as a global optimization problem. The aim of the present paper is to investigate the effect of the newly developed APSO on the highly complex molecular potential energy function and to check the efficiency of the proposed algorithm in finding the global minimum of the function under consideration. The proposed algorithm APSO is therefore applied in two cases: first, for the minimization of the potential energy of small molecules with up to 100 degrees of freedom, and finally, for finding the global minimum energy conformation of the 1,2,3-trichloro-1-fluoro-propane molecule based on a realistic potential energy function. The computational results of all the cases show that the proposed method performs significantly better than the other algorithms.
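
    A compact PSO with a linear time schedule on the acceleration coefficients (cognitive c1 shrinking, social c2 growing) and decreasing inertia illustrates the mechanics on a one-dimensional toy "potential energy" function. This simpler time-based schedule stands in for the paper's evaluation-function-driven adaptation; all settings are assumptions.

        # PSO with time-varying c1/c2 and inertia on a toy energy function.
        import numpy as np

        rng = np.random.default_rng(0)
        f = lambda x: x**4 - 3 * x**2 + x    # toy potential energy (assumed)

        n, iters = 20, 200
        x = rng.uniform(-3, 3, n)
        v = np.zeros(n)
        pbest, pval = x.copy(), f(x)
        gbest = pbest[pval.argmin()]

        for t in range(iters):
            w  = 0.9 - 0.5 * t / iters       # inertia decreases
            c1 = 2.5 - 2.0 * t / iters       # cognitive weight shrinks
            c2 = 0.5 + 2.0 * t / iters       # social weight grows
            r1, r2 = rng.random(n), rng.random(n)
            v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
            x = x + v
            better = f(x) < pval
            pbest[better], pval[better] = x[better], f(x)[better]
            gbest = pbest[pval.argmin()]
        print(gbest, f(gbest))               # near the global minimum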

  5. Exploring quantum computing application to satellite data assimilation

    NASA Astrophysics Data System (ADS)

    Cheung, S.; Zhang, S. Q.

    2015-12-01

    This is exploratory work on the potential application of quantum computing to a scientific data optimization problem. On classical computational platforms, the physical domain of a satellite data assimilation problem is represented by a discrete variable transform, and classical minimization algorithms are employed to find the optimal solution of the analysis cost function. The computation becomes intensive and time-consuming when the problem involves large numbers of variables and data. The new quantum computer opens a very different approach, both in conceptual programming and in hardware architecture, for solving optimization problems. To explore whether we can utilize the quantum computing machine architecture, we formulate a satellite data assimilation experimental case in the form of a quadratic programming optimization problem and find a transformation that maps it into the Quadratic Unconstrained Binary Optimization (QUBO) framework. A Binary Wavelet Transform (BWT) is applied to the data assimilation variables for its invertible decomposition, with all calculations in the BWT performed by Boolean operations. The transformed problem is then solved as QUBO instances defined on Chimera graphs of the quantum computer.
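
    Once a cost function is written in binary variables, the optimization takes the form minimize x^T Q x over x in {0,1}^n, which is exactly what a quantum annealer samples; for small n the instance can be enumerated classically. The matrix below is an arbitrary illustration, not a real assimilation problem.

        # Tiny QUBO: enumerate all bit strings and pick the lowest energy.
        import itertools
        import numpy as np

        Q = np.array([[-1.0, 2.0, 0.0],      # illustrative upper-triangular QUBO
                      [ 0.0, -1.0, 2.0],
                      [ 0.0, 0.0, -1.0]])

        best = min((np.array(s) @ Q @ np.array(s), s)
                   for s in itertools.product((0, 1), repeat=3))
        print(best)                          # (-2.0, (1, 0, 1)) for this Q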

  6. Optimal Campaign Strategies in Fractional-Order Smoking Dynamics

    NASA Astrophysics Data System (ADS)

    Zeb, Anwar; Zaman, Gul; Jung, Il Hyo; Khan, Madad

    2014-06-01

    This paper deals with the optimal control problem in the giving up smoking model of fractional order. For the eradication of smoking in a community, we introduce three control variables, in the form of an education campaign, anti-smoking gum, and anti-nicotine drugs/medicine, into the proposed fractional order model. We discuss the necessary conditions for the optimality of a general fractional optimal control problem whose fractional derivative is described in the Caputo sense. In order to do this, we minimize the number of potential and occasional smokers and maximize the number of ex-smokers. We use Pontryagin's maximum principle to characterize the optimal levels of the three controls. The resulting optimality system is solved numerically by MATLAB.

  7. From molecule to solid: The prediction of organic crystal structures

    NASA Astrophysics Data System (ADS)

    Dzyabchenko, A. V.

    2008-10-01

    A method for predicting the structure of a molecular crystal based on a systematic search for a global potential energy minimum is considered. The method takes into account the unequal occurrences of the structural classes of organic crystals and the symmetry of the multidimensional configuration space. The global minimization program PMC, the crystal-structure comparison program CRYCOM, and the program FitMEP for fitting the distributions of the electrostatic potentials of molecules are presented as tools for numerically solving the problem. Examples of predicted structures substantiated experimentally and the author's experience of participation in international tests of crystal structure prediction organized by the Cambridge Crystallographic Data Centre (Cambridge, UK) are considered.

  8. Lessons Learned and Process Improvement for Payload Operations at the Launch Site

    NASA Technical Reports Server (NTRS)

    Catena, John; Gates, Donald, Jr.; Blaney, Kermit, Sr.; Obenschain, Arthur F. (Technical Monitor)

    2001-01-01

    For every space mission, there are challenges with the launch site/field operations process that are addressed too late in the development cycle. This can cause schedule delays and cost overruns, and adds risk to mission success. This paper will discuss how a single interface, representing the payload at the launch site in all phases of development, can mitigate risk and minimize, or even eliminate, potential problems later on. Experience has shown that a single interface between the project and the launch site allows issues to be worked in a timely manner and bridges the gap between two diverse cultures.

  9. Evaluation of Dynamic Passing Sight Distance Problem Using a Finite Element Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yan, Xuedong; Radwan, Essam; Zhang, Fan

    2008-06-01

    Sufficient passing sight distance is an important control for two-lane rural highway design to minimize the possibility of a head-on collision between passing and opposing vehicles. Traditionally, passing zones are marked by checking passing sight distance that is potentially restricted by static sight obstructions. Such obstructions include crest curves, overpasses, and lateral objects along highways. This paper proposes a new concept of dynamic sight-distance assessment, which involves restricted passing sight distances due to the impeding vehicles that are traveling in the same direction. Using a finite-element model, the dynamic passing sight-distance problem was evaluated, and the writers analyzed the relationships between the available passing sight distance and other factors such as the horizontal curve radius, impeding vehicle dimensions, and a driver's following distance. It was found that the impeding vehicles may cause substantially insufficient passing sight distances, which may lead to potential traffic safety problems. It is worthwhile to expand on this safety issue and consider the dynamic passing sight distance in highway design.

  10. Minimizing distortion and internal forces in truss structures by simulated annealing

    NASA Technical Reports Server (NTRS)

    Kincaid, Rex K.; Padula, Sharon L.

    1990-01-01

    Inaccuracies in the length of members and the diameters of joints of large space structures may produce unacceptable levels of surface distortion and internal forces. Here, two discrete optimization problems are formulated, one to minimize surface distortion (DSQRMS) and the other to minimize internal forces (FSQRMS). Both of these problems are based on the influence matrices generated by a small-deformation linear analysis. Good solutions are obtained for DSQRMS and FSQRMS through the use of a simulated annealing heuristic.
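
    A generic simulated-annealing loop of the kind used for DSQRMS/FSQRMS is easy to sketch: propose discrete changes to an adjustment vector, score them with a quadratic form built from an influence-matrix stand-in, and accept uphill moves with Boltzmann probability under geometric cooling. The matrix, move set, and schedule are illustrative assumptions.

        # Simulated annealing over discrete adjustments x in {-1, 0, +1}^n.
        import numpy as np

        rng = np.random.default_rng(1)
        n = 12
        M = rng.normal(size=(n, n))
        A = M @ M.T                          # stand-in PSD influence form
        b = rng.normal(size=n)               # stand-in linear distortion term

        def cost(x):
            return x @ A @ x + 2 * b @ x     # quadratic distortion measure

        x = np.zeros(n, dtype=int)
        T = 10.0
        for _ in range(5000):
            y = x.copy()
            y[rng.integers(n)] = rng.choice([-1, 0, 1])    # discrete proposal
            delta = cost(y) - cost(x)
            if delta < 0 or rng.random() < np.exp(-delta / T):
                x = y                        # accept downhill, sometimes uphill
            T *= 0.999                       # geometric cooling
        print(x, cost(x))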

  11. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kolda, Christopher

    In this talk, I review recent work on using a generalization of the Next-to-Minimal Supersymmetric Standard Model (NMSSM), called the Singlet-extended Minimal Supersymmetric Standard Model (SMSSM), to raise the mass of the Standard Model-like Higgs boson without requiring extremely heavy top squarks or large stop mixing. In so doing, this model solves the little hierarchy problem of the minimal model (MSSM), at the expense of leaving the μ-problem of the MSSM unresolved. This talk is based on work published in Refs. [1, 2, 3].

  12. Does finite-temperature decoding deliver better optima for noisy Hamiltonians?

    NASA Astrophysics Data System (ADS)

    Ochoa, Andrew J.; Nishimura, Kohji; Nishimori, Hidetoshi; Katzgraber, Helmut G.

    The minimization of an Ising spin-glass Hamiltonian is an NP-hard problem. Because many problems across disciplines can be mapped onto this class of Hamiltonian, novel efficient computing techniques are highly sought after. The recent development of quantum annealing machines promises to minimize these difficult problems more efficiently. However, the inherent noise found in these analog devices makes the minimization procedure difficult. While the machine might be working correctly, it might be minimizing a different Hamiltonian due to the inherent noise. This means that, in general, the ground-state configuration that correctly minimizes a noisy Hamiltonian might not minimize the noise-less Hamiltonian. Inspired by rigorous results that the energy of the noise-less ground-state configuration is equal to the expectation value of the energy of the noisy Hamiltonian at the (nonzero) Nishimori temperature [J. Phys. Soc. Jpn., 62, 40132930 (1993)], we numerically study the decoding probability of the original noise-less ground state with noisy Hamiltonians in two space dimensions, as well as the D-Wave Inc. Chimera topology. Our results suggest that thermal fluctuations might be beneficial during the optimization process in analog quantum annealing machines.
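
    The caveat at the heart of this abstract, that the configuration minimizing a noisy Hamiltonian need not minimize the noise-free one, can be checked directly by brute force on a small random instance; the size, couplings, and noise level below are arbitrary assumptions.

        # Ground states of a clean vs. a noise-perturbed Ising instance.
        import itertools
        import numpy as np

        rng = np.random.default_rng(3)
        N = 10
        J = np.triu(rng.normal(size=(N, N)), 1)           # random couplings
        noise = 0.3 * rng.normal(size=(N, N)) * (J != 0)  # coupling noise

        def ground_state(Jm):
            return min(itertools.product((-1, 1), repeat=N),
                       key=lambda s: np.array(s) @ Jm @ np.array(s))

        # The two minimizers may differ: the device would then "correctly"
        # minimize the wrong Hamiltonian.
        print(ground_state(J) == ground_state(J + noise))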

  13. Unfazed or Dazed and Confused: Does Early Adolescent Marijuana Use Cause Sustained Impairments in Attention and Academic Functioning?

    PubMed Central

    Pardini, Dustin; White, Helene; Xiong, Shuangyan; Bechtold, Jordan; Chung, Tammy; Loeber, Rolf; Hipwell, Alison

    2015-01-01

    There is some suggestion that heavy marijuana use during early adolescence (prior to age 17) may cause significant impairments in attention and academic functioning that remain following sustained periods of abstinence. However, no longitudinal studies have examined whether both male and female adolescents who engage in low (less than once a month) to moderate (at least once a month) marijuana use experience increased problems with attention and academic performance, and whether these problems remain following sustained abstinence. The current study used within-individual change models to control for all potential pre-existing and time-stable confounds when examining this potential causal association in two gender-specific longitudinal samples assessed annually from ages 11 to 16 (Pittsburgh Youth Study N=479; Pittsburgh Girls Study N=2296). Analyses also controlled for the potential influence of several pertinent time-varying factors (e.g., other substance use, peer delinquency). Prior to controlling for time-varying confounds, analyses indicated that adolescents tended to experience an increase in parent-reported attention and academic problems, relative to their pre-onset levels, during years when they used marijuana. After controlling for several time-varying confounds, only the association between marijuana use and attention problems in the sample of girls remained statistically significant. There was no evidence indicating that adolescents who used marijuana experienced lingering attention and academic problems, relative to their pre-onset levels, after abstaining from use for at least a year. These results suggest that adolescents who engage in low to moderate marijuana use experience an increase in observable attention and academic problems, but these problems appear to be minimal and are eliminated following sustained abstinence. PMID:25862212

  14. Effects of adaptive refinement on the inverse EEG solution

    NASA Astrophysics Data System (ADS)

    Weinstein, David M.; Johnson, Christopher R.; Schmidt, John A.

    1995-10-01

    One of the fundamental problems in electroencephalography can be characterized by an inverse problem. Given a subset of electrostatic potentials measured on the surface of the scalp and the geometry and conductivity properties within the head, calculate the current vectors and potential fields within the cerebrum. Mathematically, the generalized EEG problem can be stated as solving Poisson's equation of electrical conduction for the primary current sources. The resulting problem is mathematically ill-posed, i.e., the solution does not depend continuously on the data, such that small errors in the measurement of the voltages on the scalp can yield unbounded errors in the solution, and, for the general treatment of a solution of Poisson's equation, the solution is non-unique. However, if accurate solutions to such problems could be obtained, neurologists would gain noninvasive access to patient-specific cortical activity. Access to such data would ultimately increase the number of patients who could be effectively treated for pathological cortical conditions such as temporal lobe epilepsy. In this paper, we present the effects of spatial adaptive refinement on the inverse EEG problem and show that the use of adaptive methods allows for significantly better estimates of the electric and potential fields within the brain through an inverse procedure. To test these methods, we constructed several finite element head models from magnetic resonance images of a patient. The finite element meshes ranged in size from 2724 nodes and 12,812 elements to 5224 nodes and 29,135 tetrahedral elements, depending on the level of discretization. We show that an adaptive meshing algorithm minimizes the error in the forward problem due to spatial discretization and thus increases the accuracy of the inverse solution.

  15. Unfazed or Dazed and Confused: Does Early Adolescent Marijuana Use Cause Sustained Impairments in Attention and Academic Functioning?

    PubMed

    Pardini, Dustin; White, Helene R; Xiong, Shuangyan; Bechtold, Jordan; Chung, Tammy; Loeber, Rolf; Hipwell, Alison

    2015-10-01

    There is some suggestion that heavy marijuana use during early adolescence (prior to age 17) may cause significant impairments in attention and academic functioning that remain despite sustained periods of abstinence. However, no longitudinal studies have examined whether both male and female adolescents who engage in low (less than once a month) to moderate (at least once a month) marijuana use experience increased problems with attention and academic performance, and whether these problems remain following sustained abstinence. The current study used within-individual change models to control for all potential pre-existing and time-stable confounds when examining this potential causal association in two gender-specific longitudinal samples assessed annually from ages 11 to 16 (Pittsburgh Youth Study N = 479; Pittsburgh Girls Study N = 2296). Analyses also controlled for the potential influence of several pertinent time-varying factors (e.g., other substance use, peer delinquency). Prior to controlling for time-varying confounds, analyses indicated that adolescents tended to experience an increase in parent-reported attention and academic problems, relative to their pre-onset levels, during years when they used marijuana. After controlling for several time-varying confounds, only the association between marijuana use and attention problems in the sample of girls remained statistically significant. There was no evidence indicating that adolescents who used marijuana experienced lingering attention and academic problems, relative to their pre-onset levels, after abstaining from use for at least a year. These results suggest that adolescents who engage in low to moderate marijuana use experience an increase in observable attention and academic problems, but these problems appear to be minimal and are eliminated following sustained abstinence.

  16. An Optimization-based Atomistic-to-Continuum Coupling Method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Olson, Derek; Bochev, Pavel B.; Luskin, Mitchell

    2014-08-21

    In this paper, we present a new optimization-based method for atomistic-to-continuum (AtC) coupling. The main idea is to cast the latter as a constrained optimization problem with virtual Dirichlet controls on the interfaces between the atomistic and continuum subdomains. The optimization objective is to minimize the error between the atomistic and continuum solutions on the overlap between the two subdomains, while the atomistic and continuum force balance equations provide the constraints. Separation, rather than blending, of the atomistic and continuum problems, and their subsequent use as constraints in the optimization problem, distinguishes our approach from the existing AtC formulations. Finally, we present and analyze the method in the context of a one-dimensional chain of atoms modeled using a linearized two-body potential with next-nearest neighbor interactions.

  17. Cognitive radio adaptation for power consumption minimization using biogeography-based optimization

    NASA Astrophysics Data System (ADS)

    Qi, Pei-Han; Zheng, Shi-Lian; Yang, Xiao-Niu; Zhao, Zhi-Jin

    2016-12-01

    Adaptation is one of the key capabilities of cognitive radio, which focuses on how to adjust the radio parameters to optimize the system performance based on the knowledge of the radio environment and its capability and characteristics. In this paper, we consider the cognitive radio adaptation problem for power consumption minimization. The problem is formulated as a constrained power consumption minimization problem, and the biogeography-based optimization (BBO) is introduced to solve this optimization problem. A novel habitat suitability index (HSI) evaluation mechanism is proposed, in which both the power consumption minimization objective and the quality of services (QoS) constraints are taken into account. The results show that under different QoS requirement settings corresponding to different types of services, the algorithm can minimize power consumption while still maintaining the QoS requirements. Comparison with particle swarm optimization (PSO) and cat swarm optimization (CSO) reveals that BBO works better, especially at the early stage of the search, which means that the BBO is a better choice for real-time applications. Project supported by the National Natural Science Foundation of China (Grant No. 61501356), the Fundamental Research Funds of the Ministry of Education, China (Grant No. JB160101), and the Postdoctoral Fund of Shaanxi Province, China.

  18. Computational methods for reactive transport modeling: An extended law of mass-action, xLMA, method for multiphase equilibrium calculations

    NASA Astrophysics Data System (ADS)

    Leal, Allan M. M.; Kulik, Dmitrii A.; Kosakowski, Georg; Saar, Martin O.

    2016-10-01

    We present an extended law of mass-action (xLMA) method for multiphase equilibrium calculations and apply it in the context of reactive transport modeling. This extended LMA formulation differs from its conventional counterpart in that (i) it is directly derived from the Gibbs energy minimization (GEM) problem (i.e., the fundamental problem that describes the state of equilibrium of a chemical system under constant temperature and pressure); and (ii) it extends the conventional mass-action equations with Lagrange multipliers from the Gibbs energy minimization problem, which can be interpreted as stability indices of the chemical species. Accounting for these multipliers enables the method to determine all stable phases without presuming their types (e.g., aqueous, gaseous) or their presence in the equilibrium state. Therefore, the here proposed xLMA method inherits traits of Gibbs energy minimization algorithms that allow it to naturally detect the phases present in equilibrium, which can be single-component phases (e.g., pure solids or liquids) or non-ideal multi-component phases (e.g., aqueous, melts, gaseous, solid solutions, adsorption, or ion exchange). Moreover, our xLMA method requires no technique that tentatively adds or removes reactions based on phase stability indices (e.g., saturation indices for minerals), since the extended mass-action equations are valid even when their corresponding reactions involve unstable species. We successfully apply the proposed method to a reactive transport modeling problem in which we use PHREEQC and GEMS as alternative backends for the calculation of thermodynamic properties such as equilibrium constants of reactions, standard chemical potentials of species, and activity coefficients. Our tests show that our algorithm is efficient and robust for demanding applications, such as reactive transport modeling, where it converges within 1-3 iterations in most cases. The proposed xLMA method is implemented in Reaktoro, a unified open-source framework for modeling chemically reactive systems.

  19. The high energy astronomy observatories

    NASA Technical Reports Server (NTRS)

    Neighbors, A. K.; Doolittle, R. F.; Halpers, R. E.

    1977-01-01

    The forthcoming NASA project of orbiting High Energy Astronomy Observatories (HEAO's), designed to probe the universe by tracing celestial radiations and particles, is outlined. Solutions to engineering problems concerning HEAO's, which are integrated yet built to function independently, are discussed, including the onboard digital processor, mirror assembly, and thermal shield. The principle of maximal efficiency with minimal cost, and the potential of the project to shed light on black holes, pulsars, and gamma-ray bursts, are also stressed. The first satellite is scheduled for launch in April 1977.

  20. Optimal trajectories of aircraft and spacecraft

    NASA Technical Reports Server (NTRS)

    Miele, A.

    1990-01-01

    Work done on algorithms for the numerical solutions of optimal control problems and their application to the computation of optimal flight trajectories of aircraft and spacecraft is summarized. General considerations on the calculus of variations, optimal control, numerical algorithms, and applications of these algorithms to real-world problems are presented. The sequential gradient-restoration algorithm (SGRA) is examined for the numerical solution of optimal control problems of the Bolza type. Both the primal formulation and the dual formulation are discussed. Aircraft trajectories, in particular the application of the dual sequential gradient-restoration algorithm (DSGRA) to the determination of optimal flight trajectories in the presence of windshear, are described. Both take-off trajectories and abort landing trajectories are discussed. Take-off trajectories are optimized by minimizing the peak deviation of the absolute path inclination from a reference value. Abort landing trajectories are optimized by minimizing the peak drop of altitude from a reference value. The survival capability of an aircraft in a severe windshear is discussed, and the optimal trajectories are found to be superior to both constant pitch trajectories and maximum angle of attack trajectories. Spacecraft trajectories, in particular the application of the primal sequential gradient-restoration algorithm (PSGRA) to the determination of optimal flight trajectories for aeroassisted orbital transfer, are examined. Both the coplanar case and the noncoplanar case are discussed within the frame of three problems: minimization of the total characteristic velocity; minimization of the time integral of the square of the path inclination; and minimization of the peak heating rate. The solution of the second problem is called the nearly-grazing solution, and its merits are pointed out as a useful engineering compromise between energy requirements and aerodynamic heating requirements.

  1. Control algorithms for dynamic attenuators

    PubMed Central

    Hsieh, Scott S.; Pelc, Norbert J.

    2014-01-01

    Purpose: The authors describe algorithms to control dynamic attenuators in CT and compare their performance using simulated scans. Dynamic attenuators are prepatient beam shaping filters that modulate the distribution of x-ray fluence incident on the patient on a view-by-view basis. These attenuators can reduce dose while improving key image quality metrics such as peak or mean variance. In each view, the attenuator presents several degrees of freedom which may be individually adjusted. The total number of degrees of freedom across all views is very large, making many optimization techniques impractical. The authors develop a theory for optimally controlling these attenuators. Special attention is paid to a theoretically perfect attenuator which controls the fluence for each ray individually, but the authors also investigate and compare three other, practical attenuator designs which have been previously proposed: the piecewise-linear attenuator, the translating attenuator, and the double wedge attenuator. Methods: The authors pose and solve the optimization problems of minimizing the mean and peak variance subject to a fixed dose limit. For a perfect attenuator and mean variance minimization, this problem can be solved in simple, closed form. For other attenuator designs, the problem can be decomposed into separate problems for each view to greatly reduce the computational complexity. Peak variance minimization can be approximately solved using iterated, weighted mean variance (WMV) minimization. Also, the authors develop heuristics for the perfect and piecewise-linear attenuators which do not require a priori knowledge of the patient anatomy. The authors compare these control algorithms on different types of dynamic attenuators using simulated raw data from forward projected DICOM files of a thorax and an abdomen. Results: The translating and double wedge attenuators reduce dose by an average of 30% relative to current techniques (bowtie filter with tube current modulation) without increasing peak variance. The 15-element piecewise-linear dynamic attenuator reduces dose by an average of 42%, and the perfect attenuator reduces dose by an average of 50%. Improvements in peak variance are several times larger than improvements in mean variance. Heuristic control eliminates the need for a prescan. For the piecewise-linear attenuator, the cost of heuristic control is an increase in dose of 9%. The proposed iterated WMV minimization produces results that are within a few percent of the true solution. Conclusions: Dynamic attenuators show potential for significant dose reduction. A wide class of dynamic attenuators can be accurately controlled using the described methods. PMID:24877818

  2. Environmental liability and the onshore oil and gas prospector

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jacobs, J.A.; Davis, P.

    Environmental liability can be transferred to the oil or gas prospector along with the conveyance of an oil or gas lease. Should soil or groundwater contamination be discovered on a lease, an innocent owner or operator could be liable under state and federal environmental laws for court-ordered remediation costs if potentially responsible parties were unavailable or insolvent. Potential environmental liabilities can be minimized, however, by a preconveyance survey. Existing storage tanks, wells, pipelines, and other anthropogenic features on site should be inspected and photographically documented, as should evidence of previous spills or leaks such as discolored soil and distressed vegetation. Land use and ownership history can be documented from historical maps, aerial photographs, tax records, and even interviews with knowledgeable sources. Contaminated groundwater from offsite sources even miles away may migrate onto potential drill sites. Offsite reconnaissance and a review of the Environmental Protection Agency, state, and local environmental agency lists of contaminated sites in the area of the prospective lease provide information to help the potential lessee evaluate this risk. The cost to research and document potential environmental problems on or in the vicinity of a lease is a fraction of the cost required to develop an oil or gas prospect. Performing a preconveyance environmental survey may be the best way to minimize environmental liability and subsequent costs of cleanup and damages in court-ordered remediation.

  3. Number Partitioning via Quantum Adiabatic Computation

    NASA Technical Reports Server (NTRS)

    Smelyanskiy, Vadim N.; Toussaint, Udo

    2002-01-01

    We study both analytically and numerically the complexity of the adiabatic quantum evolution algorithm applied to random instances of combinatorial optimization problems. We use as an example the NP-complete set partition problem and obtain an asymptotic expression for the minimal gap separating the ground and excited states of a system during the execution of the algorithm. We show that for computationally hard problem instances the size of the minimal gap scales exponentially with the problem size. This result is in qualitative agreement with the direct numerical simulation of the algorithm for small instances of the set partition problem. We describe the statistical properties of the optimization problem that are responsible for the exponential behavior of the algorithm.
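
    At small sizes the gap computation is direct: build H(s) = (1 - s) H_driver + s H_problem, where H_problem = (Σ_i a_i σ_z^i)^2 penalizes unbalanced partitions of the numbers a_i, and scan the two lowest levels over s. Because a global spin flip maps each partition to its complement, every s = 1 ground state is doubly degenerate, so the sketch works in the even-parity sector where the dynamically relevant gap lives. The instance below is an assumption with a unique balanced split up to that mirror symmetry.

        # Minimal-gap scan for adiabatic set partitioning (toy instance).
        import numpy as np

        a = [8, 7, 6, 5, 4]          # assumed instance: unique split {8,7}|{6,5,4}
        N = len(a)
        dim = 2 ** N
        sx = np.array([[0.0, 1.0], [1.0, 0.0]])
        sz = np.array([[1.0, 0.0], [0.0, -1.0]])

        def op(single, i):           # operator acting on qubit i only
            m = np.eye(1)
            for j in range(N):
                m = np.kron(m, single if j == i else np.eye(2))
            return m

        Sz = sum(a[i] * op(sz, i) for i in range(N))
        H_problem = Sz @ Sz          # zero exactly on balanced partitions
        H_driver = -sum(op(sx, i) for i in range(N))

        # Even sector of the global flip (X on every qubit): (|b> + |~b>)/sqrt(2).
        B = np.zeros((dim, dim // 2))
        col = 0
        for bits in range(dim):
            comp = bits ^ (dim - 1)  # bitwise complement = flip all spins
            if bits < comp:
                B[bits, col] = B[comp, col] = 1 / np.sqrt(2)
                col += 1

        gaps = []
        for s in np.linspace(0.0, 1.0, 101):
            H = B.T @ ((1 - s) * H_driver + s * H_problem) @ B
            ev = np.linalg.eigvalsh(H)
            gaps.append(ev[1] - ev[0])
        print(min(gaps))             # minimal gap along the interpolation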

  4. Exact recovery of sparse multiple measurement vectors by ℓ_{2,p}-minimization.

    PubMed

    Wang, Changlong; Peng, Jigen

    2018-01-01

    The joint sparse recovery problem is a generalization of the single measurement vector problem widely studied in compressed sensing. It aims to recover a set of jointly sparse vectors, i.e., those that have nonzero entries concentrated at a common location. The ℓ_{2,p}-minimization (min ||X||_{2,p} subject to AX = B, with 0 < p ≤ 1) is widely used in a large number of algorithms designed for this problem. The main contribution of this paper is two theoretical results about this technique. The first proves that for every multiple system of linear equations there exists a constant p* such that the original unique sparse solution can also be recovered from a minimization in the ℓ_{2,p} quasi-norm whenever 0 < p < p*. The second gives an analytic expression for such a p*. Finally, we display the results of one example to confirm the validity of our conclusions, and we use some numerical experiments to show that our results increase the efficiency of the algorithms designed for ℓ_{2,p}-minimization.

  5. Reformulation of the covering and quantizer problems as ground states of interacting particles.

    PubMed

    Torquato, S

    2010-11-01

    It is known that the sphere-packing problem and the number-variance problem (closely related to an optimization problem in number theory) can be posed as energy minimizations associated with an infinite number of point particles in d-dimensional Euclidean space R^d interacting via certain repulsive pair potentials. We reformulate the covering and quantizer problems as the determination of the ground states of interacting particles in R^d that generally involve single-body, two-body, three-body, and higher-body interactions. This is done by linking the covering and quantizer problems to certain optimization problems involving the "void" nearest-neighbor functions that arise in the theory of random media and statistical mechanics. These reformulations, which again exemplify the deep interplay between geometry and physics, allow one now to employ theoretical and numerical optimization techniques to analyze and solve these energy minimization problems. The covering and quantizer problems have relevance in numerous applications, including wireless communication network layouts, the search of high-dimensional data parameter spaces, stereotactic radiation therapy, data compression, digital communications, meshing of space for numerical analysis, and coding and cryptography, among other examples. In the first three space dimensions, the best known solutions of the sphere-packing and number-variance problems (or their "dual" solutions) are directly related to those of the covering and quantizer problems, but such relationships may or may not exist for d ≥ 4, depending on the peculiarities of the dimensions involved. Our reformulation sheds light on the reasons for these similarities and differences. We also show that disordered saturated sphere packings provide relatively thin (economical) coverings and may yield thinner coverings than the best known lattice coverings in sufficiently large dimensions. In the case of the quantizer problem, we derive improved upper bounds on the quantizer error using sphere-packing solutions, which are generally substantially sharper than an existing upper bound in low to moderately large dimensions. We also demonstrate that disordered saturated sphere packings yield relatively good quantizers. Finally, we remark on possible applications of our results for the detection of gravitational waves.

  6. Reformulation of the covering and quantizer problems as ground states of interacting particles

    NASA Astrophysics Data System (ADS)

    Torquato, S.

    2010-11-01

    It is known that the sphere-packing problem and the number-variance problem (closely related to an optimization problem in number theory) can be posed as energy minimizations associated with an infinite number of point particles in d-dimensional Euclidean space R^d interacting via certain repulsive pair potentials. We reformulate the covering and quantizer problems as the determination of the ground states of interacting particles in R^d that generally involve single-body, two-body, three-body, and higher-body interactions. This is done by linking the covering and quantizer problems to certain optimization problems involving the “void” nearest-neighbor functions that arise in the theory of random media and statistical mechanics. These reformulations, which again exemplify the deep interplay between geometry and physics, allow one now to employ theoretical and numerical optimization techniques to analyze and solve these energy minimization problems. The covering and quantizer problems have relevance in numerous applications, including wireless communication network layouts, the search of high-dimensional data parameter spaces, stereotactic radiation therapy, data compression, digital communications, meshing of space for numerical analysis, and coding and cryptography, among other examples. In the first three space dimensions, the best known solutions of the sphere-packing and number-variance problems (or their “dual” solutions) are directly related to those of the covering and quantizer problems, but such relationships may or may not exist for d ≥ 4, depending on the peculiarities of the dimensions involved. Our reformulation sheds light on the reasons for these similarities and differences. We also show that disordered saturated sphere packings provide relatively thin (economical) coverings and may yield thinner coverings than the best known lattice coverings in sufficiently large dimensions. In the case of the quantizer problem, we derive improved upper bounds on the quantizer error using sphere-packing solutions, which are generally substantially sharper than an existing upper bound in low to moderately large dimensions. We also demonstrate that disordered saturated sphere packings yield relatively good quantizers. Finally, we remark on possible applications of our results for the detection of gravitational waves.

  7. A multi-objective decision-making approach to the journal submission problem.

    PubMed

    Wong, Tony E; Srikrishnan, Vivek; Hadka, David; Keller, Klaus

    2017-01-01

    When researchers complete a manuscript, they need to choose a journal to which they will submit the study. This decision requires navigating trade-offs between multiple objectives. One objective is to share the new knowledge as widely as possible. Citation counts can serve as a proxy to quantify this objective. A second objective is to minimize the time commitment put into sharing the research, which may be estimated by the total time from initial submission to final decision. A third objective is to minimize the number of rejections and resubmissions. Thus, researchers often consider the trade-offs between the objectives of (i) maximizing citations, (ii) minimizing time-to-decision, and (iii) minimizing the number of resubmissions. To complicate matters further, this is a decision with multiple, potentially conflicting, decision-maker rationalities. Co-authors might have different preferences, for example about publishing fast versus maximizing citations. These diverging preferences can lead to conflicting trade-offs between objectives. Here, we apply a multi-objective decision analytical framework to identify the Pareto front between these objectives and determine the set of journal submission pathways that balance these objectives for three stages of a researcher's career. We find multiple strategies that researchers might pursue, depending on how they value minimizing risk and effort relative to maximizing citations. The sequences that maximize expected citations within each strategy are generally similar, regardless of time horizon. We find that the "conditional impact factor" (impact factor times acceptance rate) is a suitable heuristic method for ranking journals, to strike a balance between minimizing effort objectives and maximizing citation count. Finally, we examine potential co-author tension resulting from differing rationalities by mapping out each researcher's preferred Pareto front and identifying compromise submission strategies. The explicit representation of trade-offs, especially when multiple decision-makers (co-authors) have different preferences, facilitates negotiations and can support the decision process.
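
    The ranking heuristic itself is one line of arithmetic; the sketch below applies it to invented placeholder journals (none of the numbers come from the paper).

        # "Conditional impact factor" heuristic: impact factor x acceptance rate.
        journals = {                           # invented placeholder data
            "Journal A": {"impact": 9.0, "acceptance": 0.08},
            "Journal B": {"impact": 4.0, "acceptance": 0.35},
            "Journal C": {"impact": 2.5, "acceptance": 0.60},
        }

        def conditional_if(j):
            return j["impact"] * j["acceptance"]

        for name in sorted(journals, key=lambda n: conditional_if(journals[n]),
                           reverse=True):
            print(name, round(conditional_if(journals[name]), 2))
        # Journal C (1.5) > Journal B (1.4) > Journal A (0.72): a modest-impact,
        # high-acceptance venue can top the submission order under this heuristic.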

  8. A multi-objective decision-making approach to the journal submission problem

    PubMed Central

    Hadka, David; Keller, Klaus

    2017-01-01

    When researchers complete a manuscript, they need to choose a journal to which they will submit the study. This decision requires navigating trade-offs between multiple objectives. One objective is to share the new knowledge as widely as possible. Citation counts can serve as a proxy to quantify this objective. A second objective is to minimize the time commitment put into sharing the research, which may be estimated by the total time from initial submission to final decision. A third objective is to minimize the number of rejections and resubmissions. Thus, researchers often consider the trade-offs between the objectives of (i) maximizing citations, (ii) minimizing time-to-decision, and (iii) minimizing the number of resubmissions. To complicate matters further, this is a decision with multiple, potentially conflicting, decision-maker rationalities. Co-authors might have different preferences, for example about publishing fast versus maximizing citations. These diverging preferences can lead to conflicting trade-offs between objectives. Here, we apply a multi-objective decision analytical framework to identify the Pareto-front between these objectives and determine the set of journal submission pathways that balance these objectives for three stages of a researcher’s career. We find multiple strategies that researchers might pursue, depending on how they value minimizing risk and effort relative to maximizing citations. The sequences that maximize expected citations within each strategy are generally similar, regardless of time horizon. We find that the “conditional impact factor” (impact factor times acceptance rate) is a suitable heuristic method for ranking journals, to strike a balance between minimizing effort objectives and maximizing citation count. Finally, we examine potential co-author tension resulting from differing rationalities by mapping out each researcher’s preferred Pareto front and identifying compromise submission strategies. The explicit representation of trade-offs, especially when multiple decision-makers (co-authors) have different preferences, facilitates negotiations and can support the decision process. PMID:28582430

  9. Numerical Optimization Using Computer Experiments

    NASA Technical Reports Server (NTRS)

    Trosset, Michael W.; Torczon, Virginia

    1997-01-01

    Engineering design optimization often gives rise to problems in which expensive objective functions are minimized by derivative-free methods. We propose a method for solving such problems that synthesizes ideas from the numerical optimization and computer experiment literatures. Our approach relies on kriging known function values to construct a sequence of surrogate models of the objective function that are used to guide a grid search for a minimizer. Results from numerical experiments on a standard test problem are presented.
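    The overall loop can be sketched compactly. The code below is a toy version under strong simplifying assumptions: a zero-mean Gaussian-process (kriging) surrogate with a fixed squared-exponential kernel, a one-dimensional objective, and a plain grid search on the surrogate mean in place of the paper's pattern-search machinery.

    ```python
    import numpy as np

    def kriging_fit(X, y, length=0.3, noise=1e-6):
        """Zero-mean GP (simple kriging) fit with a squared-exponential
        kernel; returns a callable giving the surrogate mean."""
        K = np.exp(-0.5 * ((X[:, None] - X[None, :]) / length) ** 2)
        alpha = np.linalg.solve(K + noise * np.eye(len(X)), y)
        return lambda q: np.exp(-0.5 * ((q[:, None] - X[None, :]) / length) ** 2) @ alpha

    def f(x):
        """Stand-in 'expensive' objective; each call is one simulation."""
        return (x - 0.7) ** 2 + 0.1 * np.sin(12 * x)

    X = np.linspace(0.0, 1.0, 5)            # initial design sites
    y = f(X)
    for _ in range(5):                      # surrogate-guided refinement
        surrogate = kriging_fit(X, y)
        grid = np.linspace(0.0, 1.0, 201)
        x_new = grid[np.argmin(surrogate(grid))]   # cheap surrogate search
        X, y = np.append(X, x_new), np.append(y, f(x_new))

    print(X[np.argmin(y)], y.min())         # best evaluated point so far
    ```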

  10. Scheduling with non-decreasing deterioration jobs and variable maintenance activities on a single machine

    NASA Astrophysics Data System (ADS)

    Zhang, Xingong; Yin, Yunqiang; Wu, Chin-Chia

    2017-01-01

    In many manufacturing systems, such as steel rolling mills, firefighting operations or single-server cycle-queues, a job that is processed later consumes more time than the same job processed earlier. Machine maintenance can counteract this worsening of processing conditions: after a maintenance activity, the machine is restored. The maintenance duration is a positive and non-decreasing differentiable convex function of the total processing times of the jobs between maintenance activities. Motivated by this observation, the makespan and the total completion time minimization problems in the scheduling of jobs with non-decreasing rates of job processing time on a single machine are considered in this article. It is shown that both the makespan and the total completion time minimization problems are NP-hard in the strong sense when the number of maintenance activities is arbitrary, while the makespan minimization problem is NP-hard in the ordinary sense when the number of maintenance activities is fixed. If the deterioration rates of the jobs are identical and the maintenance duration is a linear function of the total processing times of the jobs between maintenance activities, then this article shows that the group balance principle is satisfied for the makespan minimization problem. Furthermore, two polynomial-time algorithms are presented for solving the makespan problem and the total completion time problem under identical deterioration rates, respectively.

  11. Design optimization of transmitting antennas for weakly coupled magnetic induction communication systems

    PubMed Central

    2017-01-01

    This work focuses on the design of transmitting coils in weakly coupled magnetic induction communication systems. We propose several optimization methods that reduce the active, reactive and apparent power consumption of the coil. These problems are formulated as minimization problems, in which the power consumed by the transmitting coil is minimized, under the constraint of providing a required magnetic field at the receiver location. We develop efficient numeric and analytic methods to solve the resulting problems, which are of high dimension, and in certain cases non-convex. For the objective of minimal reactive power an analytic solution for the optimal current distribution in flat disc transmitting coils is provided. This problem is extended to general three-dimensional coils, for which we develop an expression for the optimal current distribution. Considering the objective of minimal apparent power, a method is developed to reduce the computational complexity of the problem by transforming it to an equivalent problem of lower dimension, allowing a quick and accurate numeric solution. These results are verified experimentally by testing a number of coil geometries. The results obtained allow reduced power consumption and increased performance in magnetic induction communication systems. Specifically, for wideband systems, an optimal design of the transmitter coil reduces the peak instantaneous power provided by the transmitter circuitry, and thus reduces its size, complexity and cost. PMID:28192463
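    For the reactive-power objective, the structure of the problem is a quadratic form in the currents minimized under a linear field constraint, which has a closed-form weighted least-norm solution. The sketch below illustrates that structure only; the matrix entries are placeholders, not a coil or field model from the paper.

    ```python
    import numpy as np

    # Hypothetical linear model b = A @ i mapping loop currents to the field
    # components required at the receiver (all numbers are placeholders).
    A = np.array([[0.8, 0.5, 0.2],
                  [0.1, 0.6, 0.9]])
    b = np.array([1.0, 0.5])                # required field at the receiver

    # Assume reactive power ~ i^T W i with a positive-definite weight W.
    W = np.diag([2.0, 1.0, 1.5])

    # Minimize i^T W i subject to A i = b (Lagrange multipliers, closed form):
    #   i* = W^-1 A^T (A W^-1 A^T)^-1 b
    Winv = np.linalg.inv(W)
    i_opt = Winv @ A.T @ np.linalg.solve(A @ Winv @ A.T, b)

    print(i_opt, A @ i_opt)                 # the constraint A i = b holds
    ```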

  12. NEWSUMT: A FORTRAN program for inequality constrained function minimization, users guide

    NASA Technical Reports Server (NTRS)

    Miura, H.; Schmit, L. A., Jr.

    1979-01-01

    A computer program written in FORTRAN subroutine form for the solution of linear and nonlinear constrained and unconstrained function minimization problems is presented. The algorithm is the sequence of unconstrained minimizations technique (SUMT), using Newton's method for the unconstrained function minimizations. The use of NEWSUMT and the definition of all parameters are described.
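    The SUMT structure itself is easy to demonstrate. The sketch below is not NEWSUMT's code: it uses a simple exterior quadratic penalty (NEWSUMT itself employs an extended interior penalty) and Newton steps to solve a toy two-variable problem through a sequence of unconstrained minimizations.

    ```python
    import numpy as np

    def f_grad_hess(x):
        """f(x) = (x0 - 2)^2 + (x1 - 1)^2: gradient and Hessian."""
        return 2.0 * (x - np.array([2.0, 1.0])), 2.0 * np.eye(2)

    def newton_penalized(x, r, iters=30):
        """Unconstrained Newton minimization of f + r * max(0, c)^2,
        with the linear constraint c(x) = x0 + x1 - 1 <= 0."""
        a = np.array([1.0, 1.0])
        for _ in range(iters):
            c = x @ a - 1.0
            g, H = f_grad_hess(x)
            if c > 0:                       # penalty branch is active
                g = g + 2.0 * r * c * a
                H = H + 2.0 * r * np.outer(a, a)
            step = np.linalg.solve(H, g)
            x = x - step
            if np.linalg.norm(step) < 1e-12:
                break
        return x

    x = np.array([2.0, 1.0])                # unconstrained optimum (infeasible)
    for r in [1.0, 10.0, 1e2, 1e3, 1e4]:    # the "sequence" in SUMT
        x = newton_penalized(x, r)
    print(x)                                # -> approaches the optimum (1, 0)
    ```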

  13. Minimizing the Diameter of a Network Using Shortcut Edges

    NASA Astrophysics Data System (ADS)

    Demaine, Erik D.; Zadimoghaddam, Morteza

    We study the problem of minimizing the diameter of a graph by adding k shortcut edges, for speeding up communication in an existing network design. We develop constant-factor approximation algorithms for different variations of this problem. We also show how to improve the approximation ratios using resource augmentation to allow more than k shortcut edges. We observe a close relation between the single-source version of the problem, where we want to minimize the largest distance from a given source vertex, and the well-known k-median problem. First we show that our constant-factor approximation algorithms for the general case solve the single-source problem within a constant factor. Then, using a linear-programming formulation for the single-source version, we find a (1 + ε)-approximation using O(k log n) shortcut edges. To show the tightness of our result, we prove that any (3/2 - ε)-approximation for the single-source version must use Ω(k log n) shortcut edges, assuming P ≠ NP.
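    To make the objective tangible, the sketch below computes a graph's diameter with breadth-first search and adds k shortcut edges by greedily joining a currently most-distant pair. This naive greedy heuristic merely illustrates the problem; it is not the paper's approximation algorithm.

    ```python
    from collections import deque

    def bfs_dist(adj, s):
        """Hop distances from s by breadth-first search."""
        dist, q = {s: 0}, deque([s])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        return dist

    def diameter_pair(adj):
        """The largest shortest-path distance and a pair realizing it."""
        best = (0, None, None)
        for s in adj:
            dist = bfs_dist(adj, s)
            t = max(dist, key=dist.get)
            if dist[t] > best[0]:
                best = (dist[t], s, t)
        return best

    # Path graph 0-1-...-9; add k = 2 shortcuts, each between a farthest pair.
    adj = {i: set() for i in range(10)}
    for i in range(9):
        adj[i].add(i + 1)
        adj[i + 1].add(i)

    for _ in range(2):
        d, s, t = diameter_pair(adj)
        adj[s].add(t)
        adj[t].add(s)                       # shortcut the current farthest pair
        print("diameter", d, "-> shortcut", (s, t))
    print("final diameter:", diameter_pair(adj)[0])
    ```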

  14. Finite element meshing approached as a global minimization process

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    WITKOWSKI,WALTER R.; JUNG,JOSEPH; DOHRMANN,CLARK R.

    2000-03-01

    The ability to generate a suitable finite element mesh in an automatic fashion is becoming the key to being able to automate the entire engineering analysis process. However, placing an all-hexahedron mesh in a general three-dimensional body continues to be an elusive goal. The approach investigated in this research is fundamentally different from any other known to the authors. A physical analogy viewpoint is used to formulate the actual meshing problem, which constructs a global mathematical description of the problem. The analogy used was that of minimizing the electrical potential of a system of charged particles within a charged domain. The particles in the presented analogy represent duals to mesh elements (i.e., quads or hexes). Particle movement is governed by a mathematical functional which accounts for inter-particle repulsive, attractive and alignment forces. This functional is minimized to find the optimal location and orientation of each particle. After the particles are connected, a mesh can easily be resolved. The mathematical description for this problem is as easy to formulate in three dimensions as it is in two or one. The meshing algorithm was developed within CoMeT. It can solve the two-dimensional meshing problem for convex and concave geometries in a purely automated fashion. Investigation of the robustness of the technique has shown a success rate of approximately 99% for the two-dimensional geometries tested. Run times to mesh a 100-element complex geometry were typically in the 10 minute range. Efficiency of the technique is still an issue that needs to be addressed. Performance is critical for most engineers generating meshes, but it was not for this project; the primary focus of this work was to investigate and evaluate a meshing algorithm/philosophy, with efficiency issues being secondary. The algorithm was also extended to mesh three-dimensional geometries. Unfortunately, only simple geometries were tested before this project ended. The primary complexity in the extension was in the connectivity problem formulation. Defining all of the interparticle interactions that occur in three dimensions and expressing them in mathematical relationships is very difficult.
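    A toy version of the physical analogy is sketched below, assuming a pure inverse-distance repulsion between particles in the unit square (the functional described above also includes attractive and alignment terms). Gradient descent on the pairwise energy spreads the particles, whose final positions would then serve as element duals.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    p = rng.random((30, 2)) * 0.2 + 0.4     # 30 particles clustered mid-square

    def repulsion_grad(p):
        """Gradient of E = sum_{i<j} 1/|p_i - p_j| with respect to every
        particle position (inverse-distance repulsion only)."""
        d = p[:, None, :] - p[None, :, :]              # pairwise displacements
        r2 = np.sum(d * d, axis=-1) + np.eye(len(p))   # pad diagonal: no 0-div
        return -np.sum(d / r2[..., None] ** 1.5, axis=1)

    for _ in range(500):                    # normalized gradient descent
        g = repulsion_grad(p)
        g /= np.maximum(np.linalg.norm(g, axis=1, keepdims=True), 1e-12)
        p -= 2e-3 * g                       # small fixed step along -gradient
        np.clip(p, 0.0, 1.0, out=p)         # keep particles inside the domain
    print(p.round(3))                       # positions spread toward uniformity
    ```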

  15. Associations between social vulnerabilities and psychosocial problems in European children. Results from the IDEFICS study.

    PubMed

    Iguacel, Isabel; Michels, Nathalie; Fernández-Alvira, Juan M; Bammann, Karin; De Henauw, Stefaan; Felső, Regina; Gwozdz, Wencke; Hunsberger, Monica; Reisch, Lucia; Russo, Paola; Tornaritis, Michael; Thumann, Barbara Franziska; Veidebaum, Toomas; Börnhorst, Claudia; Moreno, Luis A

    2017-09-01

    The effect of socioeconomic inequalities on children's mental health remains unclear. This study aims to explore the cross-sectional and longitudinal associations between social vulnerabilities and psychosocial problems, and the association between accumulation of vulnerabilities and psychosocial problems. 5987 children aged 2-9 years from eight European countries were assessed at baseline and 2-year follow-up. Two different instruments were employed to assess children's psychosocial problems: the KINDL (Questionnaire for Measuring Health-Related Quality of Life in Children and Adolescents) was used to evaluate children's well-being and the Strengths and Difficulties Questionnaire (SDQ) was used to evaluate children's internalising problems. Vulnerable groups were defined as follows: children whose parents had minimal social networks, children from non-traditional families, children of migrant origin or children with unemployed parents. Logistic mixed-effects models were used to assess the associations between social vulnerabilities and psychosocial problems. After adjusting for classical socioeconomic and lifestyle indicators, children whose parents had minimal social networks were at greater risk of presenting internalising problems at baseline and follow-up (OR 1.53, 99% CI 1.11-2.11). The highest risk for psychosocial problems was found in children whose status changed from traditional families at T0 to non-traditional families at T1 (OR 1.60, 99% CI 1.07-2.39) and whose parents had minimal social networks at both time points (OR 1.97, 99% CI 1.26-3.08). Children with one or more vulnerabilities accumulated were at a higher risk of developing psychosocial problems at baseline and follow-up. Therefore, policy makers should implement measures to strengthen the social support for parents with a minimal social network.

  16. Matrix Interdiction Problem

    NASA Astrophysics Data System (ADS)

    Kasiviswanathan, Shiva Prasad; Pan, Feng

    In the matrix interdiction problem, a real-valued matrix and an integer k are given. The objective is to remove a set of k matrix columns that minimizes the sum of the row values in the residual matrix, where the value of a row is defined to be the largest entry in that row. This combinatorial problem is closely related to the bipartite network interdiction problem, which can be applied to minimize the probability that an adversary can successfully smuggle weapons. After introducing the matrix interdiction problem, we study the computational complexity of this problem. We show that the matrix interdiction problem is NP-hard and that there exists a constant γ such that it is even NP-hard to approximate this problem within an n^γ additive factor. We also present an algorithm for this problem that achieves an (n - k) multiplicative approximation ratio.
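    Since the problem is NP-hard, small instances are still tractable by exhaustive enumeration of the k-subsets of columns. The brute-force sketch below, on a made-up toy matrix, shows exactly the quantity being minimized:

    ```python
    from itertools import combinations

    import numpy as np

    def matrix_interdiction(M, k):
        """Exact solver (exponential in k): remove the k columns minimizing
        the sum over rows of the largest remaining entry in each row."""
        n_cols = M.shape[1]
        best_val, best_cols = float("inf"), None
        for cols in combinations(range(n_cols), k):
            keep = [j for j in range(n_cols) if j not in cols]
            val = M[:, keep].max(axis=1).sum()   # sum of residual row maxima
            if val < best_val:
                best_val, best_cols = val, cols
        return best_val, best_cols

    M = np.array([[5.0, 1.0, 3.0],
                  [2.0, 6.0, 2.0],
                  [4.0, 4.0, 1.0]])
    print(matrix_interdiction(M, k=1))      # -> (11.0, (1,)): drop column 1
    ```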

  17. Numerical sensitivity analysis of a variational data assimilation procedure for cardiac conductivities

    NASA Astrophysics Data System (ADS)

    Barone, Alessandro; Fenton, Flavio; Veneziani, Alessandro

    2017-09-01

    An accurate estimation of cardiac conductivities is critical in computational electro-cardiology, yet experimental results in the literature significantly disagree on the values and ratios between longitudinal and tangential coefficients. These are known to have a strong impact on the propagation of the potential, particularly during defibrillation shocks. Data assimilation is a procedure for merging experimental data and numerical simulations in a rigorous way. In particular, variational data assimilation relies on the least-square minimization of the misfit between simulations and experiments, constrained by the underlying mathematical model, which in this study is represented by the classical Bidomain system, or its common simplification given by the Monodomain problem. Operating on the conductivity tensors as control variables of the minimization, we obtain a parameter estimation procedure. As the theory of this approach currently provides only an existence proof and is not informative for practical experiments, we present here an extensive numerical simulation campaign to assess practical critical issues such as the size and the location of the measurement sites needed for in silico test cases of potential experimental and realistic settings. This will be finalized with a validation of the variational data assimilation procedure against real data. Results indicate the presence of lower and upper bounds for the number of sites which guarantee an accurate and minimally redundant parameter estimation, the location of sites being generally non-critical for properly designed experiments. An effective combination of parameter estimation based on the Monodomain and Bidomain models is tested for the sake of computational efficiency. Parameter estimation based on the Monodomain equation potentially leads to the accurate computation of the transmembrane potential in real settings.

  18. On the nullspace of TLS multi-station adjustment

    NASA Astrophysics Data System (ADS)

    Sterle, Oskar; Kogoj, Dušan; Stopar, Bojan; Kregar, Klemen

    2018-07-01

    In this article we present an analytic aspect of TLS multi-station least-squares adjustment, with the main focus on the datum problem. Compared to previously published research, the datum problem is theoretically analyzed and solved, the solution being based on the nullspace derivation of the mathematical model. The importance of the datum problem solution lies in a complete description of TLS multi-station adjustment solutions from the set of all minimally constrained least-squares solutions. On the basis of the known nullspace, estimable parameters are described and the geometric interpretation of all minimally constrained least-squares solutions is presented. Finally, a simulated example is used to analyze the results of TLS multi-station minimally constrained and inner-constrained least-squares adjustment solutions.

  19. Method of grid generation

    DOEpatents

    Barnette, Daniel W.

    2002-01-01

    The present invention provides a method of grid generation that uses the geometry of the problem space and the governing relations to generate a grid. The method can generate a grid with minimized discretization errors, and with minimal user interaction. The method of the present invention comprises assigning grid cell locations so that, when the governing relations are discretized using the grid, at least some of the discretization errors are substantially zero. Conventional grid generation is driven by the problem space geometry; grid generation according to the present invention is driven by problem space geometry and by governing relations. The present invention accordingly can provide two significant benefits: more efficient and accurate modeling since discretization errors are minimized, and reduced cost grid generation since less human interaction is required.

  20. Minimum Bayes risk image correlation

    NASA Technical Reports Server (NTRS)

    Minter, T. C., Jr.

    1980-01-01

    In this paper, the problem of designing a matched filter for image correlation will be treated as a statistical pattern recognition problem. It is shown that, by minimizing a suitable criterion, a matched filter can be estimated which approximates the optimum Bayes discriminant function in a least-squares sense. It is well known that the use of the Bayes discriminant function in target classification minimizes the Bayes risk, which in turn directly minimizes the probability of a false fix. A fast Fourier implementation of the minimum Bayes risk correlation procedure is described.
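    The fast Fourier implementation referred to here amounts to evaluating the correlation surface as an elementwise product in the frequency domain. The sketch below demonstrates the mechanism with a generic template cut from the image, not with a Bayes-risk-trained filter:

    ```python
    import numpy as np

    def fft_correlate(image, template):
        """Circular cross-correlation via FFT: multiply the image spectrum
        by the conjugate spectrum of the (zero-padded) template."""
        F = np.fft.fft2(image)
        H = np.fft.fft2(template, s=image.shape)
        return np.real(np.fft.ifft2(F * np.conj(H)))

    rng = np.random.default_rng(0)
    image = rng.random((64, 64)) - 0.5      # zero-mean scene sharpens the peak
    template = image[20:28, 30:38].copy()   # "target" located at (20, 30)

    corr = fft_correlate(image, template)
    print(np.unravel_index(np.argmax(corr), corr.shape))   # -> (20, 30)
    ```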

  1. Radiatively Generating the Higgs Potential and Electroweak Scale via the Seesaw Mechanism.

    PubMed

    Brivio, Ilaria; Trott, Michael

    2017-10-06

    The minimal seesaw scenario can radiatively generate the Higgs potential to induce electroweak symmetry breaking while supplying an origin of the Higgs vacuum expectation value from an underlying Majorana scale. If the Higgs potential and (derived) electroweak scale have this origin, the heavy SU(3)×SU(2)×U(1)_Y singlet states are expected to reside at m_N ∼ 10-500 PeV for couplings |ω| ∼ 10^{-4.5}-10^{-6} between the Majorana sector and the standard model. In this framework, the usual challenge of the electroweak scale hierarchy problem with a classically assumed potential is absent, as the electroweak scale is not a fundamental scale. The new challenge is the need to generate or accommodate PeV Majorana mass scales while simultaneously suppressing tree-level contributions to the potential in ultraviolet models.

  2. The inverse problem of brain energetics: ketone bodies as alternative substrates

    NASA Astrophysics Data System (ADS)

    Calvetti, D.; Occhipinti, R.; Somersalo, E.

    2008-07-01

    Little is known about brain energy metabolism under ketosis, although there is evidence that ketone bodies have a neuroprotective role in several neurological disorders. We investigate the inverse problem of estimating reaction fluxes and transport rates in the different cellular compartments of the brain, when the data amount to a few measured arterial-venous concentration differences. By using a recently developed methodology to perform Bayesian Flux Balance Analysis and a new five-compartment model of the astrocyte-glutamatergic neuron cellular complex, we are able to identify the preferred biochemical pathways during shortage of glucose and in the presence of ketone bodies in the arterial blood. The analysis is performed in a minimally biased way, therefore revealing the potential of this methodology for hypothesis testing.

  3. Common elements of adolescent prevention programs: minimizing burden while maximizing reach.

    PubMed

    Boustani, Maya M; Frazier, Stacy L; Becker, Kimberly D; Bechor, Michele; Dinizulu, Sonya M; Hedemann, Erin R; Ogle, Robert R; Pasalich, Dave S

    2015-03-01

    A growing number of evidence-based youth prevention programs are available, but challenges related to dissemination and implementation limit their reach and impact. The current review identifies common elements across evidence-based prevention programs focused on the promotion of health-related outcomes in adolescents. We reviewed and coded descriptions of the programs for common practice and instructional elements. Problem-solving emerged as the most common practice element, followed by communication skills and insight building. Psychoeducation, modeling, and role play emerged as the most common instructional elements. In light of significant comorbidity in poor outcomes for youth, and corresponding overlap in their underlying skills deficits, we propose that synthesizing the prevention literature using a common elements approach has the potential to yield novel information and inform prevention programming to minimize burden and maximize reach and impact for youth.

  4. Potential contamination of shipboard air samples by diffusive emissions of PCBs and other organic pollutants: implications and solutions.

    PubMed

    Lohmann, Rainer; Jaward, Foday M; Durham, Louise; Barber, Jonathan L; Ockenden, Wendy; Jones, Kevin C; Bruhn, Regina; Lakaschus, Soenke; Dachs, Jordi; Booij, Kees

    2004-07-15

    Air samples were taken onboard the RRS Bransfield on an Atlantic cruise from the United Kingdom to Halley, Antarctica, from October to December 1998, with the aim of establishing PCB oceanic background air concentrations and assessing their latitudinal distribution. Great care was taken to minimize pre- and post-collection contamination of the samples, which was validated through stringent QA/QC procedures. However, there is evidence that onboard contamination of the air samples occurred, following insidious, diffusive emissions on the ship. Other data (for PCBs and other persistent organic pollutants (POPs)) and examples of shipboard contamination are presented. The implications of these findings for past and future studies of global POPs distribution are discussed. Recommendations are made to help critically appraise and minimize the problems of insidious/diffusive shipboard contamination.

  5. Surgery of the mind and mood: a mosaic of issues in time and evolution.

    PubMed

    Heller, A Chris; Amar, Arun P; Liu, Charles Y; Apuzzo, Michael L J

    2006-10-01

    The prevalence and economic burden of neuropsychiatric disease are enormous. The surgical treatment of these psychiatric disorders, although potentially valuable, remains one of the most controversial subjects in medicine, as its concept and potential reality raise thorny issues of moral, ethical, and socioeconomic consequence. This article traces the roots of concept and surgical efforts in this turbulent area from prehistory to the 21st century. The details of the late 19th and 20th century evolution of approaches to the problem of intractable psychiatric diseases, with scrutiny of the persona and contributions of the key individuals Gottlieb Burckhardt, John Fulton, Egas Moniz, Walter Freeman, James Watts, and William Scoville, are presented as a foundation for the later, more logically refined approaches of Lars Leksell, Peter Lindstrom, Geoffrey Knight, Jean Talairach, and Desmond Kelly. These refinements, characterized by progressive minimalism and founded on a better comprehension of underlying pathways of normal function and disease states, have been further explored with recent advances in imaging, which have allowed the emergence of less invasive and technology-driven non-ablative surgical directives toward these problematical disorders of mind and mood. The application of therapies based on imaging comprehension of pathway and relay abnormalities, along with explorations of the notion of surgical minimalism, promises to serve as an impetus for revival of an active surgical effort in this key global health and socioeconomic problem. Eventual coupling of cellular and molecular biology and nanotechnology with surgical enterprise is on the horizon.

  6. Surgery of the mind and mood: a mosaic of issues in time and evolution.

    PubMed

    Heller, A Chris; Amar, Arun P; Liu, Charles Y; Apuzzo, Michael L J

    2008-06-01

    The prevalence and economic burden of neuropsychiatric disease are enormous. The surgical treatment of these psychiatric disorders, although potentially valuable, remains one of the most controversial subjects in medicine, as its concept and potential reality raise thorny issues of moral, ethical, and socioeconomic consequence. This article traces the roots of concept and surgical efforts in this turbulent area from prehistory to the 21st century. The details of the late 19th and 20th century evolution of approaches to the problem of intractable psychiatric diseases, with scrutiny of the persona and contributions of the key individuals Gottlieb Burckhardt, John Fulton, Egas Moniz, Walter Freeman, James Watts, and William Scoville, are presented as a foundation for the later, more logically refined approaches of Lars Leksell, Peter Lindstrom, Geoffrey Knight, Jean Talairach, and Desmond Kelly. These refinements, characterized by progressive minimalism and founded on a better comprehension of underlying pathways of normal function and disease states, have been further explored with recent advances in imaging, which have allowed the emergence of less invasive and technology-driven non-ablative surgical directives toward these problematical disorders of mind and mood. The application of therapies based on imaging comprehension of pathway and relay abnormalities, along with explorations of the notion of surgical minimalism, promises to serve as an impetus for revival of an active surgical effort in this key global health and socioeconomic problem. Eventual coupling of cellular and molecular biology and nanotechnology with surgical enterprise is on the horizon.

  7. Hospital-acquired listeriosis associated with sandwiches in the UK: a cause for concern.

    PubMed

    Little, C L; Amar, C F L; Awofisayo, A; Grant, K A

    2012-09-01

    Hospital-acquired outbreaks of listeriosis are not commonly reported but remain a significant public health problem. To raise awareness of listeriosis outbreaks that have occurred in hospitals and to describe actions that can be taken to minimize the risk of foodborne listeriosis to vulnerable patients, foodborne outbreaks and incidents of Listeria monocytogenes reported to the Health Protection Agency national surveillance systems were investigated and those linked to hospitals were extracted. The data were analysed to identify the outbreak/incident setting, the food vehicle, outbreak contributory factors and the origin of the problem. Most (8/11, 73%) foodborne outbreaks of listeriosis that occurred in the UK between 1999 and 2011 were associated with sandwiches purchased from or provided in hospitals. Repeatedly in the outbreaks, the infecting subtype of L. monocytogenes was detected in supplied prepacked sandwiches and sandwich manufacturing environments. In five of the outbreaks, breaches in cold chain controls of food also occurred at hospital level. The outbreaks highlight the potential for sandwiches contaminated with L. monocytogenes to cause severe infection in vulnerable people. Control of L. monocytogenes in sandwich manufacturing and within hospitals is essential to minimize the potential for consumption of this bacterium at levels hazardous to health. Manufacturers supplying sandwiches to hospitals should aim to ensure the absence of L. monocytogenes in sandwiches at the point of production, and hospital-documented food safety management systems should ensure the integrity of the food cold chain.

  8. Random Matrix Approach for Primal-Dual Portfolio Optimization Problems

    NASA Astrophysics Data System (ADS)

    Tada, Daichi; Yamamoto, Hisashi; Shinzato, Takashi

    2017-12-01

    In this paper, we revisit the portfolio optimization problems of the minimization/maximization of investment risk under constraints of budget and investment concentration (primal problem) and the maximization/minimization of investment concentration under constraints of budget and investment risk (dual problem) for the case that the variances of the return rates of the assets are identical. We analyze both optimization problems by the Lagrange multiplier method and the random matrix approach. Thereafter, we compare the results obtained from our proposed approach with the results obtained in previous work. Moreover, we use numerical experiments to validate the results obtained from the replica approach and the random matrix approach as methods for analyzing both the primal and dual portfolio optimization problems.
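    For orientation, the primal problem with a budget constraint alone has the textbook closed form w ∝ C^{-1} 1 by the Lagrange multiplier method. The sketch below computes the minimum-risk weights for a toy covariance matrix with identical variances, as in the setting above (the correlation values are placeholders):

    ```python
    import numpy as np

    # Minimum-risk portfolio under the budget constraint 1'w = 1:
    #   minimize w' C w  s.t.  1'w = 1   =>   w = C^-1 1 / (1' C^-1 1)
    C = np.array([[1.0, 0.3, 0.1],          # identical variances on the
                  [0.3, 1.0, 0.2],          # diagonal, toy correlations
                  [0.1, 0.2, 1.0]])         # off the diagonal
    ones = np.ones(3)

    w = np.linalg.solve(C, ones)
    w /= ones @ w                           # enforce the budget constraint
    print(w, w @ C @ w)                     # optimal weights, minimized risk
    ```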

  9. Modeling of tool path for the CNC sheet cutting machines

    NASA Astrophysics Data System (ADS)

    Petunin, Aleksandr A.

    2015-11-01

    In the paper the problem of tool path optimization for CNC (Computer Numerical Control) cutting machines is considered. A classification of the cutting techniques is offered. We also propose a new classification of tool path problems. The tasks of cost minimization and time minimization for the standard cutting technique (Continuous Cutting Problem, CCP) and for one of the non-standard cutting techniques (Segment Continuous Cutting Problem, SCCP) are formalized. We show that the optimization tasks can be interpreted as a discrete optimization problem (generalized travelling salesman problem with additional constraints, GTSP). Formalization of some constraints for these tasks is described. For the solution of the GTSP we propose using the mathematical model of Prof. Chentsov, based on the concept of a megalopolis and dynamic programming.

  10. Greedy algorithms in disordered systems

    NASA Astrophysics Data System (ADS)

    Duxbury, P. M.; Dobrin, R.

    1999-08-01

    We discuss search, minimal path and minimal spanning tree algorithms and their applications to disordered systems. Greedy algorithms solve these problems exactly, and are related to extremal dynamics in physics. Minimal cost path (Dijkstra) and minimal cost spanning tree (Prim) algorithms provide extremal dynamics for a polymer in a random medium (the KPZ universality class) and invasion percolation (without trapping), respectively.
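    Dijkstra's algorithm is itself greedy: it permanently settles the unsettled vertex with the smallest tentative cost. A standard heap-based sketch is shown below; in the disordered-systems reading, the edge weights play the role of random bond energies.

    ```python
    import heapq

    def dijkstra(adj, source):
        """Greedy minimal-cost paths: repeatedly settle the cheapest vertex."""
        dist = {source: 0.0}
        heap = [(0.0, source)]
        while heap:
            d, u = heapq.heappop(heap)
            if d > dist.get(u, float("inf")):
                continue                    # stale heap entry, skip it
            for v, w in adj[u]:
                nd = d + w
                if nd < dist.get(v, float("inf")):
                    dist[v] = nd
                    heapq.heappush(heap, (nd, v))
        return dist

    # Edge weights stand in for random energies of a disordered medium.
    adj = {"a": [("b", 1.0), ("c", 4.0)],
           "b": [("c", 2.0), ("d", 5.0)],
           "c": [("d", 1.0)],
           "d": []}
    print(dijkstra(adj, "a"))   # -> {'a': 0.0, 'b': 1.0, 'c': 3.0, 'd': 4.0}
    ```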

  11. Optimal aeroassisted coplanar orbital transfer using an energy model

    NASA Technical Reports Server (NTRS)

    Halyo, Nesim; Taylor, Deborah B.

    1989-01-01

    The atmospheric portion of the trajectories for the aeroassisted coplanar orbit transfer was investigated. The equations of motion for the problem are expressed using a reduced-order model and total vehicle energy, kinetic plus potential, as the independent variable rather than time. The order reduction is achieved analytically without an approximation of the vehicle dynamics. In this model, the problem of coplanar orbit transfer is seen as one in which a given amount of energy must be transferred from the vehicle to the atmosphere during the trajectory without overheating the vehicle. An optimal control problem is posed where a linear combination of the integrated square of the heating rate and the vehicle drag is the cost function to be minimized. The necessary conditions for optimality are obtained. These result in a fourth-order two-point boundary-value problem. A parametric study of the optimal guidance trajectory in which the proportion of the heating rate term versus the drag varies is made. Simulations of the guidance trajectories are presented.

  12. Risk Assessment and Management for Medically Complex Potential Living Kidney Donors: A Few Deontological Criteria and Ethical Values

    PubMed Central

    Petrini, Carlo

    2011-01-01

    A sound evaluation of every bioethical problem should be predicated on a careful analysis of at least two basic elements: (i) reliable scientific information and (ii) the ethical principles and values at stake. A thorough evaluation of both elements also calls for a careful examination of statements by authoritative institutions. Unfortunately, in the case of medically complex living donors neither element gives clear-cut answers to the ethical problems raised. Likewise, institutional documents frequently offer only general criteria, which are not very helpful when making practical choices. This paper first introduces a brief overview of scientific information, ethical values, and institutional documents; the notions of “acceptable risk” and “minimal risk” are then briefly examined, with reference to the problem of medically complex living donors. The so-called precautionary principle and the value of solidarity are then discussed as offering a possible approach to the ethical problem of medically complex living donors. PMID:22174982

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moore, Thomas W.; Quach, Tu-Thach; Detry, Richard Joseph

    Complex Adaptive Systems of Systems, or CASoS, are vastly complex ecological, sociological, economic and/or technical systems which we must understand to design a secure future for the nation and the world. Perturbations/disruptions in CASoS have the potential for far-reaching effects due to pervasive interdependencies and attendant vulnerabilities to cascades in associated systems. Phoenix was initiated to address this high-impact problem space as engineers. Our overarching goals are maximizing security, maximizing health, and minimizing risk. We design interventions, or problem solutions, that influence CASoS to achieve specific aspirations. Through application to real-world problems, Phoenix is evolving the principles and discipline of CASoS Engineering while growing a community of practice and the CASoS engineers to populate it. Both grounded in reality and working to extend our understanding and control of that reality, Phoenix is at the same time a solution within a CASoS and a CASoS itself.

  14. Optimal control problem for linear fractional-order systems, described by equations with Hadamard-type derivative

    NASA Astrophysics Data System (ADS)

    Postnov, Sergey

    2017-11-01

    Two kinds of optimal control problem are investigated for linear time-invariant fractional-order systems with lumped parameters whose dynamics are described by equations with a Hadamard-type derivative: the problem of control with minimal norm and the problem of control with minimal time under a given restriction on the control norm. The problem setting with nonlocal initial conditions is studied. Admissible controls are allowed to be p-integrable functions (p > 1) on a half-interval. The optimal control problems are studied by the moment method. The correctness and solvability conditions for the corresponding moment problem are derived. For several special cases the optimal control problems stated are solved analytically. Some analogies are pointed out between the results obtained and the results known for integer-order systems and for fractional-order systems described by equations with Caputo- and Riemann-Liouville-type derivatives.

  15. Thick section aluminum weldments for SRB structures

    NASA Technical Reports Server (NTRS)

    Bayless, E.; Sexton, J.

    1978-01-01

    The Space Shuttle Solid Rocket Booster (SRB) forward and aft skirts were designed with fracture control considerations incorporated in the design data. Fracture control is based on reliance upon nondestructive evaluation (NDE) techniques to detect potentially critical flaws. In the aerospace industry, welds on aluminum in the thicknesses (0.500 to 1.375 in.) such as those encountered on the SRB skirts are normally welded from both sides to minimize distortion. This presents a problem: the potential presence of undefined areas of incomplete fusion and the inability to detect these potential flaws by NDE techniques. To eliminate the possibility of an undetectable defect, the weld joint design was revised to eliminate blind root penetrations. Weld parameters and mechanical property data were developed to verify the adequacy of the new joint design.

  16. Optimal Rate Schedules with Data Sharing in Energy Harvesting Communication Systems.

    PubMed

    Wu, Weiwei; Li, Huafan; Shan, Feng; Zhao, Yingchao

    2017-12-20

    Despite the abundant research on energy-efficient rate scheduling policies in energy harvesting communication systems, few works have exploited data sharing among multiple applications to further enhance the energy utilization efficiency, considering that the harvested energy from environments is limited and unstable. In this paper, to overcome the energy shortage of wireless devices in transmitting data to a platform running multiple applications/requesters, we design rate scheduling policies to respond to data requests as soon as possible by encouraging data sharing among data requests and reducing the redundancy. We formulate the problem as a transmission completion time minimization problem under constraints of dynamical data requests and energy arrivals. We develop offline and online algorithms to solve this problem. For the offline setting, we discover the relationship between two problems: the completion time minimization problem and the energy consumption minimization problem with a given completion time. We first derive the optimal algorithm for the min-energy problem and then adopt it as a building block to compute the optimal solution for the min-completion-time problem. For the online setting without future information, we develop an event-driven online algorithm to complete the transmission as soon as possible. Simulation results validate the efficiency of the proposed algorithm.

  17. Optimal Rate Schedules with Data Sharing in Energy Harvesting Communication Systems

    PubMed Central

    Wu, Weiwei; Li, Huafan; Shan, Feng; Zhao, Yingchao

    2017-01-01

    Despite the abundant research on energy-efficient rate scheduling policies in energy harvesting communication systems, few works have exploited data sharing among multiple applications to further enhance the energy utilization efficiency, considering that the harvested energy from environments is limited and unstable. In this paper, to overcome the energy shortage of wireless devices in transmitting data to a platform running multiple applications/requesters, we design rate scheduling policies to respond to data requests as soon as possible by encouraging data sharing among data requests and reducing the redundancy. We formulate the problem as a transmission completion time minimization problem under constraints of dynamical data requests and energy arrivals. We develop offline and online algorithms to solve this problem. For the offline setting, we discover the relationship between two problems: the completion time minimization problem and the energy consumption minimization problem with a given completion time. We first derive the optimal algorithm for the min-energy problem and then adopt it as a building block to compute the optimal solution for the min-completion-time problem. For the online setting without future information, we develop an event-driven online algorithm to complete the transmission as soon as possible. Simulation results validate the efficiency of the proposed algorithm. PMID:29261135

  18. [Recovery of consciousness: process-oriented approach].

    PubMed

    Gusarova, S B

    2014-01-01

    Traditionally, psychological neurorehabilitation of neurosurgical patients is provided subject to the availability of clear consciousness and a minimal capacity to communicate verbally; cognitive and emotional disorders, problems in social adaptation and neurotic syndromes are the usual targets in such cases. We work with patients who have survived severe brain damage and are in different states of consciousness: vegetative state, minimally conscious state, mutism, confusion, posttraumatic Korsakoff syndrome. The psychologist considers recovery of consciousness a target in addition to the traditional tasks. The central part of this work is the construction of communication with a patient who remains unable to communicate verbally and in whom potential aphasia cannot yet be assessed. This is a non-verbal "dialogue" with the patient, created by the psychologist, developed gradually, and involving other people and objects of the environment. In line with modern neuroscientific findings demonstrating the preserved ability of patients with severe brain injury to recognize (A. Owen, S. Laureys, M. Monti, M. Coleman, A. Soddu, M. Boly and others), we draw on psychological science and on psychotherapeutic approaches that provide the instruments needed for working with patients in altered states of consciousness and for creating non-verbal communication with the patient (Jung, Reich, Alexander, Lowen, Keleman, Arnold and Amy Mindell, S. Tomandl, D. Boadella, A. Längle, P. Levin, etc.). This article presents 15 years of experience in applying A. Mindell's process-oriented approach to the recovery of consciousness of neurosurgical patients, based on work with "minimal signals" (micro-movements, breathing, mimic reactions, etc.), the feedback principle, psychosomatic resonance and empathy.

  19. Evaluation of Brief Group-Administered Instruction for Parents to Prevent or Minimize Sleep Problems in Young Children with Down Syndrome

    ERIC Educational Resources Information Center

    Stores, Rebecca; Stores, Gregory

    2004-01-01

    Background: The study concerns the unknown value of group instruction for mothers of young children with Down syndrome (DS) in preventing or minimizing sleep problems. Method: (1) Children with DS were randomly allocated to an Instruction group (given basic information about children's sleep) and a Control group for later comparison including…

  20. Does Self-Help Increase Rates of Help Seeking for Student Mental Health Problems by Minimizing Stigma as a Barrier?

    ERIC Educational Resources Information Center

    Levin, Michael E.; Krafft, Jennifer; Levin, Crissa

    2018-01-01

    Objective: This study examined whether self-help (books, websites, mobile apps) increases help seeking for mental health problems among college students by minimizing stigma as a barrier. Participants and Methods: A survey was conducted with 200 college students reporting elevated distress from February to April 2017. Results: Intentions to use…

  1. Fast Algorithms for Designing Unimodular Waveform(s) With Good Correlation Properties

    NASA Astrophysics Data System (ADS)

    Li, Yongzhe; Vorobyov, Sergiy A.

    2018-03-01

    In this paper, we develop new fast and efficient algorithms for designing single/multiple unimodular waveforms/codes with good auto- and cross-correlation or weighted correlation properties, which are highly desired in radar and communication systems. The waveform design is based on the minimization of the integrated sidelobe level (ISL) and weighted ISL (WISL) of waveforms. As the corresponding optimization problems can quickly grow to large scale with increasing code length and number of waveforms, the main issue turns out to be the development of fast large-scale optimization techniques. An additional difficulty is that the corresponding optimization problems are non-convex while the required accuracy is high. Therefore, we formulate the ISL and WISL minimization problems as non-convex quartic optimization problems in the frequency domain, and then simplify them into quadratic problems by utilizing the majorization-minimization technique, which is one of the basic techniques for addressing large-scale and/or non-convex optimization problems. While designing our fast algorithms, we identify and use inherent algebraic structures in the objective functions to rewrite them into quartic forms, and in the case of WISL minimization, to derive additionally an alternative quartic form which allows the quartic-quadratic transformation to be applied. Our algorithms are applicable to large-scale unimodular waveform design problems as they are proved to have lower or comparable computational burden (analyzed theoretically) and faster convergence speed (confirmed by comprehensive simulations) than the state-of-the-art algorithms. In addition, the waveforms designed by our algorithms demonstrate better correlation properties compared to their counterparts.
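    The design criterion itself is compact: for a length-N unimodular sequence, the ISL sums the squared magnitudes of all nonzero-lag aperiodic autocorrelations, which a zero-padded FFT evaluates in O(N log N). The sketch below computes the ISL only (it is not the majorization-minimization solver), comparing a random phase code against a quadratic-phase, chirp-like code:

    ```python
    import numpy as np

    def isl(x):
        """Integrated sidelobe level: sum over nonzero lags of the squared
        aperiodic autocorrelation magnitude, via a zero-padded FFT."""
        n = len(x)
        X = np.fft.fft(x, 2 * n)                 # zero-pad for aperiodic lags
        r = np.fft.ifft(np.abs(X) ** 2)[:n]      # r[k]: autocorrelation, lag k
        return float(np.sum(np.abs(r[1:]) ** 2)) # exclude the zero-lag peak

    n = 64
    rng = np.random.default_rng(0)
    random_code = np.exp(2j * np.pi * rng.random(n))               # random phases
    chirp_code = np.exp(1j * np.pi * np.arange(n) * (np.arange(n) + 1) / n)

    print(isl(random_code), isl(chirp_code))  # structured phases: far lower ISL
    ```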

  2. Distributed query plan generation using multiobjective genetic algorithm.

    PubMed

    Panicker, Shina; Kumar, T V Vijay

    2014-01-01

    A distributed query processing strategy, which is a key performance determinant in accessing distributed databases, aims to minimize the total query processing cost. One way to achieve this is by generating efficient distributed query plans that involve fewer sites for processing a query. In the case of distributed relational databases, the number of possible query plans increases exponentially with respect to the number of relations accessed by the query and the number of sites where these relations reside. Consequently, computing optimal distributed query plans becomes a complex problem. This distributed query plan generation (DQPG) problem has already been addressed using single objective genetic algorithm, where the objective is to minimize the total query processing cost comprising the local processing cost (LPC) and the site-to-site communication cost (CC). In this paper, this DQPG problem is formulated and solved as a biobjective optimization problem with the two objectives being minimize total LPC and minimize total CC. These objectives are simultaneously optimized using a multiobjective genetic algorithm NSGA-II. Experimental comparison of the proposed NSGA-II based DQPG algorithm with the single objective genetic algorithm shows that the former performs comparatively better and converges quickly towards optimal solutions for an observed crossover and mutation probability.

  3. Distributed Query Plan Generation Using Multiobjective Genetic Algorithm

    PubMed Central

    Panicker, Shina; Vijay Kumar, T. V.

    2014-01-01

    A distributed query processing strategy, which is a key performance determinant in accessing distributed databases, aims to minimize the total query processing cost. One way to achieve this is by generating efficient distributed query plans that involve fewer sites for processing a query. In the case of distributed relational databases, the number of possible query plans increases exponentially with respect to the number of relations accessed by the query and the number of sites where these relations reside. Consequently, computing optimal distributed query plans becomes a complex problem. This distributed query plan generation (DQPG) problem has already been addressed using single objective genetic algorithm, where the objective is to minimize the total query processing cost comprising the local processing cost (LPC) and the site-to-site communication cost (CC). In this paper, this DQPG problem is formulated and solved as a biobjective optimization problem with the two objectives being minimize total LPC and minimize total CC. These objectives are simultaneously optimized using a multiobjective genetic algorithm NSGA-II. Experimental comparison of the proposed NSGA-II based DQPG algorithm with the single objective genetic algorithm shows that the former performs comparatively better and converges quickly towards optimal solutions for an observed crossover and mutation probability. PMID:24963513

  4. Defect-free atomic array formation using the Hungarian matching algorithm

    NASA Astrophysics Data System (ADS)

    Lee, Woojun; Kim, Hyosub; Ahn, Jaewook

    2017-05-01

    Deterministic loading of single atoms onto arbitrary two-dimensional lattice points has recently been demonstrated, where by dynamically controlling the optical-dipole potential, atoms from a probabilistically loaded lattice were relocated to target lattice points to form a zero-entropy atomic lattice. In this atom rearrangement, how to pair atoms with the target sites is a combinatorial optimization problem: brute-force methods search all possible combinations so the process is slow, while heuristic methods are time efficient but optimal solutions are not guaranteed. Here, we use the Hungarian matching algorithm as a fast and rigorous alternative to this problem of defect-free atomic lattice formation. Our approach utilizes an optimization cost function that restricts collision-free guiding paths so that atom loss due to collision is minimized during rearrangement. Experiments were performed with cold rubidium atoms that were trapped and guided with holographically controlled optical-dipole traps. The result of atom relocation from a partially filled 7×7 lattice to a 3×3 target lattice strongly agrees with the theoretical analysis: using the Hungarian algorithm minimizes the collisional and trespassing paths and results in improved performance, with over 50% higher success probability than the heuristic shortest-move method.
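    The core step maps directly onto a linear sum assignment: build a cost matrix of atom-to-target distances and solve the matching with the Hungarian algorithm. The sketch below uses SciPy's linear_sum_assignment with plain squared distances; the collision- and trespass-aware cost terms of the paper are omitted.

    ```python
    import numpy as np
    from scipy.optimize import linear_sum_assignment

    rng = np.random.default_rng(3)
    atoms = rng.random((20, 2)) * 7.0       # loaded atoms in a toy 7x7 region
    targets = np.array([(x, y) for x in (2, 3, 4) for y in (2, 3, 4)], float)

    # Cost: squared travel distance from every atom to every target site.
    cost = np.sum((atoms[:, None, :] - targets[None, :, :]) ** 2, axis=-1)
    rows, cols = linear_sum_assignment(cost)    # Hungarian (Kuhn-Munkres)
    for r, c in zip(rows, cols):
        print(f"atom {r} -> target site {targets[c].tolist()}")
    print("total squared move distance:", cost[rows, cols].sum())
    ```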

  5. How thermal inflation can save minimal hybrid inflation in supergravity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dimopoulos, Konstantinos; Owen, Charlotte

    2016-10-12

    Minimal hybrid inflation in supergravity has been ruled out by the 2015 Planck observations because the spectral index of the produced curvature perturbation falls outside observational bounds. To resurrect the model, a number of modifications have been put forward but many of them spoil the accidental cancellation that resolves the η-problem and require complicated Kähler constructions to counterbalance the lost cancellation. In contrast, in this paper the model is rendered viable by supplementing the scenario with a brief period of thermal inflation, which follows the reheating of primordial inflation. The scalar field responsible for thermal inflation requires a large non-zero vacuum expectation value (VEV) and a flat potential. We investigate the VEV of such a flaton field and its subsequent effect on the inflationary observables. We find that, for large VEV, minimal hybrid inflation in supergravity produces a spectral index within the 1-σ Planck bound and a tensor-to-scalar ratio which may be observable in the near future. The mechanism is applicable to other inflationary models.

  6. A videoscope for use in minimally invasive periodontal surgery.

    PubMed

    Harrel, Stephen K; Wilson, Thomas G; Rivera-Hidalgo, Francisco

    2013-09-01

    Minimally invasive periodontal procedures have been reported to produce excellent clinical results. Visualization during minimally invasive procedures has traditionally been obtained by the use of surgical telescopes, surgical microscopes, glass fibre endoscopes or a combination of these devices. All of these methods for visualization are less than fully satisfactory due to problems with access, magnification and blurred imaging. A videoscope for use with minimally invasive periodontal procedures has been developed to overcome some of the difficulties that exist with current visualization approaches. This videoscope incorporates a gas shielding technology that eliminates the problems of fogging and fouling of the optics of the videoscope that have previously prevented the successful application of endoscopic visualization to periodontal surgery. In addition, as part of the gas shielding technology the videoscope also includes a moveable retractor specifically adapted for minimally invasive surgery. The clinical use of the videoscope during minimally invasive periodontal surgery is demonstrated and discussed. The videoscope with gas shielding alleviates many of the difficulties associated with visualization during minimally invasive periodontal surgery.

  7. Optimal mistuning for enhanced aeroelastic stability of transonic fans

    NASA Technical Reports Server (NTRS)

    Hall, K. C.; Crawley, E. F.

    1983-01-01

    An inverse design procedure was developed for the design of a mistuned rotor. The design requirements are that the stability margin of the eigenvalues of the aeroelastic system be greater than or equal to some minimum stability margin, and that the mass added to each blade be positive. The objective was to achieve these requirements with a minimal amount of mistuning. Hence, the problem was posed as a constrained optimization problem. The constrained minimization problem was solved by the technique of mathematical programming via augmented Lagrangians. The unconstrained minimization phase of this technique was solved by the variable metric method. The bladed disk was modelled as being composed of a rigid disk mounted on a rigid shaft. Each blade was modelled with a single torsional degree of freedom.

  8. Minimizing communication cost among distributed controllers in software defined networks

    NASA Astrophysics Data System (ADS)

    Arlimatti, Shivaleela; Elbreiki, Walid; Hassan, Suhaidi; Habbal, Adib; Elshaikh, Mohamed

    2016-08-01

    Software Defined Networking (SDN) is a new paradigm to increase the flexibility of today's networks by promising a programmable network. The fundamental idea behind this new architecture is to simplify network complexity by decoupling the control plane and the data plane of the network devices, and by making the control plane centralized. Recently, controllers have been distributed to solve the problem of a single point of failure, and to increase scalability and flexibility during workload distribution. Even though distributed controllers are flexible and scalable enough to accommodate a larger number of network switches, the intercommunication cost between distributed controllers is still a challenging issue in the Software Defined Network environment. This paper aims to fill the gap by proposing a new mechanism which minimizes intercommunication cost via graph partitioning, an NP-hard problem. The methodology proposed in this paper swaps network elements between controller domains to minimize communication cost by calculating the communication gain. The swapping of elements minimizes inter- and intra-domain communication cost. We validate our work with the OMNeT++ simulation environment tool. Simulation results show that the proposed mechanism minimizes the inter-domain communication cost among controllers compared to traditional distributed controllers.
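    A minimal illustration of the swap idea, in the spirit of Kernighan-Lin partition refinement: accept a swap of two switches between controller domains whenever it reduces the total inter-domain traffic. This generic sketch with a random traffic matrix is an assumption-laden stand-in, not the paper's exact mechanism.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n = 10
    T = rng.integers(0, 5, (n, n))
    T = np.triu(T, 1) + np.triu(T, 1).T     # symmetric switch-traffic matrix
    domain = np.array([0] * 5 + [1] * 5)    # initial controller domains

    def inter_cost(domain):
        """Total traffic crossing controller-domain boundaries."""
        cross = domain[:, None] != domain[None, :]
        return T[cross].sum() / 2           # each pair counted twice

    improved = True
    while improved:                         # greedy swap refinement
        improved = False
        for i in range(n):
            for j in range(i + 1, n):
                if domain[i] != domain[j]:
                    before = inter_cost(domain)
                    domain[i], domain[j] = domain[j], domain[i]
                    if inter_cost(domain) < before:
                        improved = True     # keep the beneficial swap
                    else:
                        domain[i], domain[j] = domain[j], domain[i]  # revert
    print("final inter-domain cost:", inter_cost(domain))
    ```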

  9. A Multi-Stage Reverse Logistics Network Problem by Using Hybrid Priority-Based Genetic Algorithm

    NASA Astrophysics Data System (ADS)

    Lee, Jeong-Eun; Gen, Mitsuo; Rhee, Kyong-Gu

    Today the remanufacturing problem is one of the most important problems regarding the environmental aspects of the recovery of used products and materials. Therefore, reverse logistics is gaining importance and shows great potential for winning consumers in a more competitive context in the future. This paper considers the multi-stage reverse Logistics Network Problem (m-rLNP) while minimizing the total cost, which involves the reverse logistics shipping cost and the fixed cost of opening the disassembly centers and processing centers. In this study, we first formulate the m-rLNP model as a three-stage logistics network model. To solve this problem, we then propose a priority-based Genetic Algorithm (GA) with an encoding method consisting of two stages, and introduce a new crossover operator called Weight Mapping Crossover (WMX). Additionally, a heuristic approach is applied in the third stage to ship materials from processing centers to manufacturers. Finally, numerical experiments with various scales of the m-rLNP models demonstrate the effectiveness and efficiency of our approach by comparison with recent research.

  10. A multiobjective modeling approach to locate multi-compartment containers for urban-sorted waste

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tralhao, Lino, E-mail: lmlrt@inescc.p; Coutinho-Rodrigues, Joao, E-mail: coutinho@dec.uc.p; Alcada-Almeida, Luis, E-mail: alcada@inescc.p

    2010-12-15

    The location of multi-compartment sorted waste containers for recycling purposes in cities is an important problem in the context of urban waste management. The costs associated with those facilities and the impacts placed on populations are important concerns. This paper introduces a mixed-integer, multiobjective programming approach to identify the locations and capacities of such facilities. The approach incorporates an optimization model in a Geographical Information System (GIS)-based interactive decision support system that includes four objectives. The first objective minimizes the total investment cost; the second one minimizes the average distance from dwellings to the respective multi-compartment container; the last two objectives address the 'pull' and 'push' characteristics of the decision problem, one by minimizing the number of individuals too close to any container, and the other by minimizing the number of dwellings too far from the respective multi-compartment container. The model determines the number of facilities to be opened, the respective container capacities, their locations, their respective shares of the total waste of each type to be collected, and the dwellings assigned to each facility. The approach proposed was tested with a case study for the historical center of Coimbra city, Portugal, where a large urban renovation project, addressing about 800 buildings, is being undertaken. This paper demonstrates that the models and techniques incorporated in the interactive decision support system (IDSS) can be used to assist a decision maker (DM) in analyzing this complex problem in a realistically sized urban application. Ten solutions, consisting of different combinations of underground containers for the disposal of four types of sorted waste in 12 candidate sites, were generated. These solutions and tradeoffs among the objectives are presented to the DM via tables, graphs, color-coded maps and other graphics. The DM can then use this information to 'guide' the IDSS in identifying additional solutions of potential interest. Nevertheless, this research showed that a particular solution with a better objective balance can be identified. The actual sequence of additional solutions generated will depend upon the objectives and preferences of the DM in a specific application.

  11. Adoption of waste minimization technology to benefit electroplaters

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ching, E.M.K.; Li, C.P.H.; Yu, C.M.K.

    Because of increasingly stringent environmental legislation and enhanced environmental awareness, electroplaters in Hong Kong are paying more heed to protecting the environment. To comply with the array of environmental controls, electroplaters can no longer rely solely on the end-of-pipe approach as a means for abating their pollution problems under the particular local industrial environment. The preferred approach is to adopt waste minimization measures that yield both economic and environmental benefits. This paper gives an overview of electroplating activities in Hong Kong, highlights their characteristics, and describes the pollution problems associated with conventional electroplating operations. The constraints of using pollution control measures to achieve regulatory compliance are also discussed. Examples and case studies are given on some low-cost waste minimization techniques readily available to electroplaters, including dragout minimization and water conservation techniques. Recommendations are given as to how electroplaters can adopt and exercise waste minimization techniques in their operations.

  12. A parallel process growth mixture model of conduct problems and substance use with risky sexual behavior.

    PubMed

    Wu, Johnny; Witkiewitz, Katie; McMahon, Robert J; Dodge, Kenneth A

    2010-10-01

    Conduct problems, substance use, and risky sexual behavior have been shown to coexist among adolescents, which may lead to significant health problems. The current study was designed to examine relations among these problem behaviors in a community sample of children at high risk for conduct disorder. A latent growth model of childhood conduct problems showed a decreasing trend from grades K to 5. During adolescence, four concurrent conduct problem and substance use trajectory classes were identified (high conduct problems and high substance use, increasing conduct problems and increasing substance use, minimal conduct problems and increasing substance use, and minimal conduct problems and minimal substance use) using a parallel process growth mixture model. Across all substances (tobacco, binge drinking, and marijuana use), higher levels of childhood conduct problems during kindergarten predicted a greater probability of classification into more problematic adolescent trajectory classes relative to less problematic classes. For tobacco and binge drinking models, increases in childhood conduct problems over time also predicted a greater probability of classification into more problematic classes. For all models, individuals classified into more problematic classes showed higher proportions of early sexual intercourse, infrequent condom use, receiving money for sexual services, and ever contracting an STD. Specifically, tobacco use and binge drinking during early adolescence predicted higher levels of sexual risk taking into late adolescence. Results highlight the importance of studying the conjoint relations among conduct problems, substance use, and risky sexual behavior in a unified model. Copyright © 2010 Elsevier Ireland Ltd. All rights reserved.

  13. Single and multiple objective biomass-to-biofuel supply chain optimization considering environmental impacts

    NASA Astrophysics Data System (ADS)

    Valles Sosa, Claudia Evangelina

    Bioenergy has become an important alternative source of energy to alleviate the reliance on petroleum energy. Bioenergy helps mitigate climate change by reducing greenhouse gas emissions, as well as providing energy security and enhancing rural development. The Energy Independence and Security Act mandates the use of 21 billion gallons of advanced biofuels, including 16 billion gallons of cellulosic biofuels, by the year 2022. It is clear that biomass can make a substantial contribution to supplying future energy demand in a sustainable way. However, the supply of sustainable energy is one of the main challenges that mankind will face over the coming decades. For instance, many logistical challenges will be faced in order to provide an efficient and reliable supply of quality feedstock to biorefineries: 700 million tons of biomass will need to be sustainably delivered to biorefineries annually to meet the projected use of biofuels by the year 2022. Approaching this complex logistic problem as a multi-commodity network flow structure, the present work proposes a genetic algorithm for the single-objective problem of maximizing profit, and a Multiple Objective Evolutionary Algorithm to simultaneously maximize profit while minimizing global warming potential. Most transportation optimization problems in the literature have considered the maximization of profit or the minimization of total travel time as the objectives to be optimized. In this research work, however, we take a more conscious and sustainable approach to this logistic problem. Planners are increasingly expected to adopt a multi-disciplinary approach, especially due to the rising importance of environmental stewardship. The role of a transportation planner and designer is shifting from simple economic analysis to promoting sustainability through the integration of environmental objectives. To respond to these new challenges, a Modified Multiple Objective Evolutionary Algorithm is presented for the design optimization of a biomass-to-biorefinery logistic system that considers the simultaneous maximization of the total profit and the minimization of three environmental impacts. Sustainability balances economic, social and environmental goals and objectives. Several works in the literature have considered economic and environmental objectives for this supply chain problem; however, there is a lack of research on the social aspect of a sustainable logistics system. This work proposes a methodology to integrate social aspect assessment, based on employment creation. Finally, most of the assessment methodologies considered in the literature only contemplate deterministic values, when in realistic situations uncertainties in the supply chain are present. In this work, Value-at-Risk, an advanced risk measure commonly used in portfolio optimization, is included to account for uncertainties in biofuel prices, among others.
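
    As a minimal illustration of the two-objective setting (maximize profit, minimize global warming potential), the following sketch filters a set of candidate supply chain configurations down to the Pareto-efficient ones; the candidate values are invented for illustration, and the evolutionary search itself is omitted:

```python
import numpy as np

# candidate configurations: (profit in $M, global warming potential in ktCO2e)
cands = np.array([[12.0, 9.0], [10.0, 5.0], [15.0, 14.0],
                  [9.0, 4.0], [14.0, 8.0]])

def pareto_front(points):
    """Keep points not dominated by any other (higher profit, lower GWP)."""
    keep = []
    for i, (p, g) in enumerate(points):
        dominated = any(p2 >= p and g2 <= g and (p2 > p or g2 < g)
                        for j, (p2, g2) in enumerate(points) if j != i)
        if not dominated:
            keep.append(points[i])
    return np.array(keep)

print(pareto_front(cands))    # prints the nondominated configurations
```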

  14. A generalized Poisson and Poisson-Boltzmann solver for electrostatic environments.

    PubMed

    Fisicaro, G; Genovese, L; Andreussi, O; Marzari, N; Goedecker, S

    2016-01-07

    The computational study of chemical reactions in complex, wet environments is critical for applications in many fields. It is often essential to study chemical reactions in the presence of applied electrochemical potentials, taking into account the non-trivial electrostatic screening coming from the solvent and the electrolytes. As a consequence, the electrostatic potential has to be found by solving the generalized Poisson and the Poisson-Boltzmann equations for neutral and ionic solutions, respectively. In the present work, solvers for both problems have been developed. A preconditioned conjugate gradient method has been implemented for the solution of the generalized Poisson equation and the linear regime of the Poisson-Boltzmann equation, allowing the minimization problem to be solved iteratively with some ten iterations of the ordinary Poisson equation solver. In addition, a self-consistent procedure enables us to solve the non-linear Poisson-Boltzmann problem. Both solvers exhibit very high accuracy and parallel efficiency and allow for the treatment of periodic, free, and slab boundary conditions. The solver has been integrated into the BigDFT and Quantum-ESPRESSO electronic-structure packages and will be released as an independent program, suitable for integration in other codes.
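
    A minimal sketch of a preconditioned conjugate gradient solve follows, using a 1D finite-difference Poisson matrix and a Jacobi preconditioner as stand-ins; the paper's solver operates on 3D grids with spatially varying dielectric permittivity:

```python
import numpy as np

n = 100
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)   # 1D Poisson matrix
b = np.ones(n)
Minv = 1.0 / np.diag(A)                 # Jacobi (diagonal) preconditioner

x = np.zeros(n)
r = b - A @ x                           # initial residual
z = Minv * r                            # preconditioned residual
p = z.copy()
for _ in range(1000):
    Ap = A @ p
    alpha = (r @ z) / (p @ Ap)
    x += alpha * p
    r_new = r - alpha * Ap
    if np.linalg.norm(r_new) < 1e-10:   # converged
        break
    z_new = Minv * r_new
    beta = (r_new @ z_new) / (r @ z)
    p = z_new + beta * p
    r, z = r_new, z_new

print(np.linalg.norm(A @ x - b))        # residual of the converged solution
```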

  15. A generalized Poisson and Poisson-Boltzmann solver for electrostatic environments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fisicaro, G., E-mail: giuseppe.fisicaro@unibas.ch; Goedecker, S.; Genovese, L.

    2016-01-07

    The computational study of chemical reactions in complex, wet environments is critical for applications in many fields. It is often essential to study chemical reactions in the presence of applied electrochemical potentials, taking into account the non-trivial electrostatic screening coming from the solvent and the electrolytes. As a consequence, the electrostatic potential has to be found by solving the generalized Poisson and the Poisson-Boltzmann equations for neutral and ionic solutions, respectively. In the present work, solvers for both problems have been developed. A preconditioned conjugate gradient method has been implemented for the solution of the generalized Poisson equation and the linear regime of the Poisson-Boltzmann equation, allowing the minimization problem to be solved iteratively with some ten iterations of the ordinary Poisson equation solver. In addition, a self-consistent procedure enables us to solve the non-linear Poisson-Boltzmann problem. Both solvers exhibit very high accuracy and parallel efficiency and allow for the treatment of periodic, free, and slab boundary conditions. The solver has been integrated into the BigDFT and Quantum-ESPRESSO electronic-structure packages and will be released as an independent program, suitable for integration in other codes.

  16. Nonconvex Nonsmooth Low Rank Minimization via Iteratively Reweighted Nuclear Norm.

    PubMed

    Lu, Canyi; Tang, Jinhui; Yan, Shuicheng; Lin, Zhouchen

    2016-02-01

    The nuclear norm is widely used as a convex surrogate of the rank function in compressive sensing for low-rank matrix recovery, with applications in image recovery and signal processing. However, solving the nuclear-norm-based relaxed convex problem usually leads to a suboptimal solution of the original rank minimization problem. In this paper, we propose to use a family of nonconvex surrogates of the L0-norm on the singular values of a matrix to approximate the rank function. This leads to a nonconvex nonsmooth minimization problem. We then propose to solve the problem by an iteratively reweighted nuclear norm (IRNN) algorithm. IRNN iteratively solves a weighted singular value thresholding problem, which has a closed-form solution due to the special properties of the nonconvex surrogate functions. We also extend IRNN to solve the nonconvex problem with two or more blocks of variables. In theory, we prove that IRNN decreases the objective function value monotonically, and that any limit point is a stationary point. Extensive experiments on both synthetic data and real images demonstrate that IRNN enhances low-rank matrix recovery compared with state-of-the-art convex algorithms.
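
    A hedged sketch of one weighted singular value thresholding step follows, using the surrogate log(gamma + s), whose gradient supplies the weights; the sizes and parameters are illustrative and the full IRNN iteration is omitted:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.standard_normal((50, 5)) @ rng.standard_normal((5, 50))  # low rank
Y = X + 0.1 * rng.standard_normal(X.shape)                       # noisy input

gamma, mu = 0.1, 0.1
U, s, Vt = np.linalg.svd(Y, full_matrices=False)
w = 1.0 / (gamma + s)                   # gradient of log(gamma + s) at s
s_new = np.maximum(s - w / mu, 0.0)     # weighted soft-thresholding
X_hat = (U * s_new) @ Vt                # closed-form proximal update

print(np.linalg.matrix_rank(X_hat))     # typically 5: noise directions cut
```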

  17. Trace Norm Regularized CANDECOMP/PARAFAC Decomposition With Missing Data.

    PubMed

    Liu, Yuanyuan; Shang, Fanhua; Jiao, Licheng; Cheng, James; Cheng, Hong

    2015-11-01

    In recent years, low-rank tensor completion (LRTC) problems have received a significant amount of attention in computer vision, data mining, and signal processing. The existing trace norm minimization algorithms for iteratively solving LRTC problems involve multiple singular value decompositions of very large matrices at each iteration; therefore, they suffer from high computational cost. In this paper, we propose a novel trace norm regularized CANDECOMP/PARAFAC decomposition (TNCP) method for simultaneous tensor decomposition and completion. We first formulate a factor matrix rank minimization model by deducing the relation between the rank of each factor matrix and the mode-n rank of a tensor. Then, we introduce a tractable relaxation of our rank function and thereby obtain a convex combination of much smaller-scale matrix trace norm minimization problems. Finally, we develop an efficient algorithm based on the alternating direction method of multipliers to solve our problem. The promising experimental results on synthetic and real-world data validate the effectiveness of our TNCP method. Moreover, TNCP is significantly faster than the state-of-the-art methods and scales to larger problems.

  18. Knee point search using cascading top-k sorting with minimized time complexity.

    PubMed

    Wang, Zheng; Tseng, Shian-Shyong

    2013-01-01

    Anomaly detection systems and many other applications are frequently confronted with the problem of finding the largest knee point in the sorted curve for a set of unsorted points. This paper proposes an efficient knee point search algorithm with minimized time complexity, using cascading top-k sorting when an a priori probability distribution of the knee point is known. First, a top-k sort algorithm is proposed based on a quicksort variation. We divide the knee point search problem into multiple steps, and in each step an optimization problem over the selection number k is solved, where the objective function is defined as the expected time cost. Because the expected time cost of one step depends on that of the subsequent steps, we simplify the optimization problem by minimizing the maximum expected time cost. The posterior probability of the largest knee point distribution and the other parameters are updated before solving the optimization problem in each step. An example of source detection of DNS DoS flooding attacks is provided to illustrate the applications of the proposed algorithm.
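
    The cascading idea can be sketched as follows: sort only the largest k values (a heap is used here in place of the paper's quicksort variation) and search for the knee among them, enlarging k only if the knee is not yet inside the sorted window. The data and the knee criterion are invented for illustration:

```python
import heapq

def top_k_sorted(values, k):
    return heapq.nlargest(k, values)       # O(n log k), not a full sort

def find_knee(values, ks=(8, 16, 32)):
    for k in ks:                           # cascade of growing k
        top = top_k_sorted(values, k)
        # knee = largest drop between consecutive sorted values
        drops = [top[i] - top[i + 1] for i in range(len(top) - 1)]
        i = max(range(len(drops)), key=drops.__getitem__)
        if i + 1 < len(top) - 1:           # knee inside the sorted window:
            return top[i], top[i + 1]      # stop without sorting more
    return top[i], top[i + 1]

vals = [1] * 100 + [50, 48, 47, 5, 4, 3, 2]
print(find_knee(vals))                     # -> (47, 5)
```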

  19. Estimates of the absolute error and a scheme for an approximate solution to scheduling problems

    NASA Astrophysics Data System (ADS)

    Lazarev, A. A.

    2009-02-01

    An approach is proposed for estimating absolute errors and finding approximate solutions to classical NP-hard scheduling problems of minimizing the maximum lateness and the makespan for one or many machines. The concept of a metric (distance) between instances of the problem is introduced. The idea behind the approach is, given a problem instance, to construct another instance, at minimum distance from the initial one in the introduced metric, for which an optimal or approximate solution can be found. Instead of solving the original problem (instance), a set of approximating polynomially/pseudopolynomially solvable problems (instances) is considered, the instance at minimum distance from the given one is chosen, and the resulting schedule is then applied to the original instance.

  20. Geometric versus numerical optimal control of a dissipative spin-(1/2) particle

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lapert, M.; Sugny, D.; Zhang, Y.

    2010-12-15

    We analyze the saturation of a nuclear magnetic resonance (NMR) signal using optimal magnetic fields. We consider both the problem of minimizing the duration of the control and that of minimizing its energy for a fixed duration. We solve the optimal control problems by using geometric methods and a purely numerical approach, the GRAPE algorithm, the two methods being based on the application of the Pontryagin maximum principle. A very good agreement is obtained between the two results. The optimal solutions for the energy-minimization problem are finally implemented experimentally with available NMR techniques.

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Derrida, B.; Spohn, H.

    We show that the problem of a directed polymer on a tree with disorder can be reduced to the study of nonlinear equations of reaction-diffusion type. These equations admit traveling wave solutions that move at all possible speeds above a certain minimal speed. The speed of the wavefront is the free energy of the polymer problem and the minimal speed corresponds to a phase transition to a glassy phase similar to the spin-glass phase. Several properties of the polymer problem can be extracted from the correspondence with the traveling wave: probability distribution of the free energy, overlaps, etc.

  2. Conventional and reciprocal approaches to the inverse dipole localization problem for N(20)-P (20) somatosensory evoked potentials.

    PubMed

    Finke, Stefan; Gulrajani, Ramesh M; Gotman, Jean; Savard, Pierre

    2013-01-01

    The non-invasive localization of the primary sensory hand area can be achieved by solving the inverse problem of electroencephalography (EEG) for N(20)-P(20) somatosensory evoked potentials (SEPs). This study compares two different mathematical approaches for the computation of transfer matrices used to solve the EEG inverse problem. Forward transfer matrices relating dipole sources to scalp potentials are determined via conventional and reciprocal approaches using individual, realistically shaped head models. The reciprocal approach entails calculating the electric field at the dipole position when scalp electrodes are reciprocally energized with unit current; scalp potentials are obtained from the scalar product of this electric field and the dipole moment. Median nerve stimulation is performed on three healthy subjects, and single-dipole inverse solutions for the N(20)-P(20) SEPs are then obtained by simplex minimization and validated against the primary sensory hand area identified on magnetic resonance images. Solutions are presented for different time points, filtering strategies, boundary-element method discretizations, and skull conductivity values. Both approaches produce similarly small position errors for the N(20)-P(20) SEP. Position error for single-dipole inverse solutions is inherently robust to inaccuracies in forward transfer matrices but dependent on the overlapping activity of other neural sources. Significantly smaller time and storage requirements are the principal advantages of the reciprocal approach. Reduced computational requirements and similar dipole position accuracy support the use of reciprocal approaches over conventional approaches for N(20)-P(20) SEP source localization.

  3. An evidence-based solution for minimizing stress and anger in nursing students.

    PubMed

    Shirey, Maria R

    2007-12-01

    Manifestations of stress and anger are becoming more evident in society. Anger, an emotion associated with stress, often affects other aspects of everyday life, including the workplace and the educational setting. Stress and irrational anger in nursing students present a potential teaching-learning problem that requires innovative evidence-based solutions. In this article, anger in nursing students is discussed, and background information on the topic is provided. Common sources and manifestations of anger in nursing students are presented, and one evidence-based solution--mindfulness-based stress reduction--is discussed.

  4. Applications of remote sensing to estuarine management

    NASA Technical Reports Server (NTRS)

    Munday, J. C., Jr.; Gordon, H. H.; Hennigar, H. F.

    1977-01-01

    Remote sensing was used in the resolution of estuarine problems facing federal and Virginia governmental agencies. A prototype Elizabeth River Surface Circulation Atlas was produced from photogrammetry to aid in oil spill cleanup and source identification. Aerial photo analysis twice led to selection of alternative plans for dredging and spoil disposal which minimized marsh damage. Marsh loss due to a mud wave from a highway dyke was measured on sequential aerial photographs. An historical aerial photographic sequence gave basis to a potential Commonwealth of Virginia legal claim to accreting and migrating coastal islands.

  5. Model and algorithm for container ship stowage planning based on bin-packing problem

    NASA Astrophysics Data System (ADS)

    Zhang, Wei-Ying; Lin, Yan; Ji, Zhuo-Shang

    2005-09-01

    In a general case, a container ship serves many different ports on each voyage. A stowage plan made at one port must therefore take account of its influence on subsequent ports, so the complexity of the stowage planning problem increases due to its multi-port nature; the problem is NP-hard. In order to reduce the computational complexity, the problem is decomposed into two sub-problems in this paper. First, the container ship stowage problem (CSSP) is regarded as a packing problem: ship-bays on board the vessel are regarded as bins, the number of slots at each bay is taken as the bin capacity, and groups of containers with the same characteristics (homogeneous container groups) are treated as the items to be packed. At this stage there are two objective functions: one is to minimize the number of bays occupied by containers and the other is to minimize the number of overstows. Secondly, the containers assigned to each bay in the first stage are allocated to specific slots, with objective functions that minimize the metacentric height, heel and overstows. A tabu search heuristic is used to solve the subproblems. The main focus of this paper is on the first subproblem. A case study confirms the feasibility of the model and algorithm.
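
    A hedged sketch of the first-stage packing view follows: homogeneous container groups as items, bays as bins of fixed slot capacity, assigned here by first-fit decreasing (the group sizes and capacity are invented, and the paper's overstow objective is omitted):

```python
def first_fit_decreasing(group_sizes, bay_capacity):
    bays = []                              # remaining capacity per open bay
    plan = []                              # (group index, bay index) pairs
    for g, size in sorted(enumerate(group_sizes), key=lambda t: -t[1]):
        for b, free in enumerate(bays):
            if free >= size:               # first bay that fits the group
                bays[b] -= size
                plan.append((g, b))
                break
        else:                              # no bay fits: open a new bay
            bays.append(bay_capacity - size)
            plan.append((g, len(bays) - 1))
    return plan, len(bays)

plan, n_bays = first_fit_decreasing([18, 7, 12, 9, 4, 15], bay_capacity=20)
print(n_bays, plan)                        # -> 4 bays and the assignment
```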

  6. System identification using Nuclear Norm & Tabu Search optimization

    NASA Astrophysics Data System (ADS)

    Ahmed, Asif A.; Schoen, Marco P.; Bosworth, Ken W.

    2018-01-01

    In recent years, subspace System Identification (SI) algorithms have seen increased research, stemming from advanced minimization methods being applied to the Nuclear Norm (NN) approach in system identification. These minimization algorithms are based on hard computing methodologies. To the authors’ knowledge, no work has yet been reported that utilizes soft computing algorithms to address the minimization problem within the nuclear norm SI framework. A linear, time-invariant, discrete-time system is used in this work as the basic model for characterizing a dynamical system to be identified. The main objective is to extract a mathematical model from collected experimental input-output data. Hankel matrices are constructed from experimental data, and the extended observability matrix is employed to define an estimated output of the system. This estimated output and the actual - measured - output are utilized to construct a minimization problem. An embedded rank measure assures minimum state realization outcomes. Current NN-SI algorithms employ hard computing algorithms for minimization. In this work, we propose a simple Tabu Search (TS) algorithm for minimization. The TS-based SI algorithm is compared with the iterative Alternating Direction Method of Multipliers (ADMM) line search optimization based NN-SI. For comparison, several different benchmark system identification problems are solved by both approaches. Results show improved performance of the proposed SI-TS algorithm compared to the NN-SI ADMM algorithm.
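
    The Hankel matrix construction mentioned above can be sketched as follows (the output record is an invented sinusoid; for noise-free data the numerical rank of the Hankel matrix indicates the minimal state dimension):

```python
import numpy as np

def hankel_blocks(y, rows, cols):
    # stack shifted windows of the output sequence into a Hankel matrix
    return np.array([y[i:i + cols] for i in range(rows)])

y = np.sin(0.3 * np.arange(40))          # example scalar output record
H = hankel_blocks(y, rows=10, cols=30)
print(H.shape)                            # (10, 30)
print(np.linalg.matrix_rank(H, tol=1e-8))  # 2: a sinusoid needs two states
```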

  7. Principal Eigenvalue Minimization for an Elliptic Problem with Indefinite Weight and Robin Boundary Conditions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hintermueller, M., E-mail: hint@math.hu-berlin.de; Kao, C.-Y., E-mail: Ckao@claremontmckenna.edu; Laurain, A., E-mail: laurain@math.hu-berlin.de

    2012-02-15

    This paper focuses on the study of a linear eigenvalue problem with indefinite weight and Robin type boundary conditions. We investigate the minimization of the positive principal eigenvalue under the constraint that the absolute value of the weight is bounded and the total weight is a fixed negative constant. Biologically, this minimization problem is motivated by the question of determining the optimal spatial arrangement of favorable and unfavorable regions for a species to survive. For rectangular domains with Neumann boundary condition, it is known that there exists a threshold value such that if the total weight is below this threshold value then the optimal favorable region is like a section of a disk at one of the four corners; otherwise, the optimal favorable region is a strip attached to the shorter side of the rectangle. Here, we investigate the same problem with mixed Robin-Neumann type boundary conditions and study how this boundary condition affects the optimal spatial arrangement.

  8. Minimizing the Sum of Completion Times with Resource Dependant Times

    NASA Astrophysics Data System (ADS)

    Yedidsion, Liron; Shabtay, Dvir; Kaspi, Moshe

    2008-10-01

    We extend the classical problem of minimizing the sum of completion times to the case where the processing times are controllable by allocating a nonrenewable resource. The quality of a solution is measured by two different criteria: the first criterion is the sum of completion times and the second is the total weighted resource consumption. We consider four different problem variations for treating the two criteria. We prove that this problem is NP-hard for three of the four variations, even if all resource consumption weights are equal. However, somewhat surprisingly, the variation of minimizing the integrated objective function is solvable in polynomial time. Although the sum of completion times is arguably the most important scheduling criterion, the complexity of this problem was, up to this paper, an open question for three of the four variations. The results of this research have various applications, including efficient battery usage on mobile devices such as mobile computers, phones and GPS devices, in order to prolong their battery duration.

  9. A two-stage path planning approach for multiple car-like robots based on PH curves and a modified harmony search algorithm

    NASA Astrophysics Data System (ADS)

    Zeng, Wenhui; Yi, Jin; Rao, Xiao; Zheng, Yun

    2017-11-01

    In this article, collision-avoidance path planning for multiple car-like robots with variable motion is formulated as a two-stage objective optimization problem minimizing both the total length of all paths and the task's completion time. Accordingly, a new approach based on Pythagorean Hodograph (PH) curves and a Modified Harmony Search algorithm is proposed to solve the two-stage path-planning problem subject to kinematic constraints such as velocity, acceleration, and minimum turning radius. First, a method of path planning based on PH curves for a single robot is proposed. Second, a mathematical model of the two-stage path-planning problem for multiple car-like robots with variable motion subject to kinematic constraints is constructed, in which the first stage minimizes the total length of all paths and the second stage minimizes the task's completion time. Finally, a modified harmony search algorithm is applied to solve the two-stage optimization problem. A set of experiments demonstrates the effectiveness of the proposed approach.

  10. Low-dose CT reconstruction via L1 dictionary learning regularization using iteratively reweighted least-squares.

    PubMed

    Zhang, Cheng; Zhang, Tao; Li, Ming; Peng, Chengtao; Liu, Zhaobang; Zheng, Jian

    2016-06-18

    In order to reduce the radiation dose of CT (computed tomography), compressed sensing theory has been a hot topic, since it provides the possibility of a high-quality recovery from sparsely sampled data. Recently, algorithms based on DL (dictionary learning) were developed to deal with the sparse CT reconstruction problem. However, the existing DL algorithm focuses on the minimization problem with an L2-norm regularization term, which leads to deteriorating reconstruction quality as the sampling rate declines further. Therefore, it is essential to improve the DL method to meet the demand of further dose reduction. In this paper, we replaced the L2-norm regularization term with an L1-norm one. It is expected that the proposed L1-DL method can alleviate the over-smoothing effect of the L2-minimization and preserve more image details. The proposed algorithm solves the L1-minimization problem by a weighting strategy, solving the new weighted L2-minimization problem based on IRLS (iteratively reweighted least squares). Through numerical simulation, the proposed algorithm is compared with the existing DL method (adaptive dictionary based statistical iterative reconstruction, ADSIR) and two other typical compressed sensing algorithms. The results reveal that the proposed algorithm is more accurate than the other algorithms, especially when the sampling rate is reduced further or the noise is increased. The proposed L1-DL algorithm can utilize more prior information of image sparsity than ADSIR. By replacing the L2-norm regularization term of ADSIR with an L1-norm one and solving the L1-minimization problem by the IRLS strategy, L1-DL can reconstruct the image more exactly.
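
    The IRLS device named above can be sketched generically as follows (a compressed-sensing recovery with invented data, not the paper's CT pipeline): the L1 objective is minimized by repeatedly solving weighted least-squares problems whose weights are updated from the current iterate:

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((40, 100))              # underdetermined system
x_true = np.zeros(100)
x_true[rng.choice(100, 5, replace=False)] = rng.standard_normal(5)
b = A @ x_true                                  # sparse signal, few samples

x = A.T @ np.linalg.solve(A @ A.T, b)           # least-norm initial guess
for _ in range(30):
    winv = np.abs(x) + 1e-8                     # inverse IRLS weights
    # min sum_i x_i^2 / winv_i  s.t.  Ax = b  has the closed form below;
    # the tiny ridge keeps the solve stable as weights concentrate
    AW = A * winv                               # A @ diag(winv)
    x = winv * (A.T @ np.linalg.solve(AW @ A.T + 1e-10 * np.eye(40), b))

print(np.linalg.norm(x - x_true))               # small: support recovered
```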

  11. Optimal trajectories for an aerospace plane. Part 2: Data, tables, and graphs

    NASA Technical Reports Server (NTRS)

    Miele, Angelo; Lee, W. Y.; Wu, G. D.

    1990-01-01

    Data, tables, and graphs relative to the optimal trajectories for an aerospace plane are presented. A single-stage-to-orbit (SSTO) configuration is considered, and the transition from low supersonic speeds to orbital speeds is studied for a single aerodynamic model (GHAME) and three engine models. Four optimization problems are solved using the sequential gradient-restoration algorithm for optimal control problems: (1) minimization of the weight of fuel consumed; (2) minimization of the peak dynamic pressure; (3) minimization of the peak heating rate; and (4) minimization of the peak tangential acceleration. The above optimization studies are carried out for different combinations of constraints, specifically: initial path inclination that is either free or given; dynamic pressure that is either free or bounded; and tangential acceleration that is either free or bounded.

  12. Minimal perceptrons for memorizing complex patterns

    NASA Astrophysics Data System (ADS)

    Pastor, Marissa; Song, Juyong; Hoang, Danh-Tai; Jo, Junghyo

    2016-11-01

    Feedforward neural networks have been investigated to understand learning and memory, as well as applied to numerous practical problems in pattern classification. It is a rule of thumb that more complex tasks require larger networks. However, the design of optimal network architectures for specific tasks is still an unsolved fundamental problem. In this study, we consider three-layered neural networks for memorizing binary patterns. We developed a new complexity measure of binary patterns, and estimated the minimal network size for memorizing them as a function of their complexity. We formulated the minimal network size for regular, random, and complex patterns. In particular, the minimal size for complex patterns, which are neither ordered nor disordered, was predicted by measuring their Hamming distances from known ordered patterns. Our predictions agree with simulations based on the back-propagation algorithm.

  13. Optimality Principles for Model-Based Prediction of Human Gait

    PubMed Central

    Ackermann, Marko; van den Bogert, Antonie J.

    2010-01-01

    Although humans have a large repertoire of potential movements, gait patterns tend to be stereotypical and appear to be selected according to optimality principles such as minimal energy. When applied to dynamic musculoskeletal models such optimality principles might be used to predict how a patient’s gait adapts to mechanical interventions such as prosthetic devices or surgery. In this paper we study the effects of different performance criteria on predicted gait patterns using a 2D musculoskeletal model. The associated optimal control problem for a family of different cost functions was solved utilizing the direct collocation method. It was found that fatigue-like cost functions produced realistic gait, with stance phase knee flexion, as opposed to energy-related cost functions which avoided knee flexion during the stance phase. We conclude that fatigue minimization may be one of the primary optimality principles governing human gait. PMID:20074736

  14. Sensitivity computation of the ell1 minimization problem and its application to dictionary design of ill-posed problems

    NASA Astrophysics Data System (ADS)

    Horesh, L.; Haber, E.

    2009-09-01

    The ell1 minimization problem has been studied extensively in the past few years. Recently, there has been a growing interest in its application to inverse problems. Most studies have concentrated on devising ways for sparse representation of a solution using a given prototype dictionary. Very few studies have addressed the more challenging problem of optimal dictionary construction, and even these were primarily devoted to the simplistic sparse coding application. In this paper, a sensitivity analysis of the inverse solution with respect to the dictionary is presented. This analysis reveals some of the salient features and intrinsic difficulties associated with the dictionary design problem. Equipped with these insights, we propose an optimization strategy that alleviates these hurdles while utilizing the derived sensitivity relations for the design of a locally optimal dictionary. Our optimality criterion is based on local minimization of the Bayesian risk, given a set of training models. We present a mathematical formulation and an algorithmic framework to achieve this goal. The proposed framework offers the design of dictionaries for inverse problems that incorporate non-trivial, non-injective observation operators, where the data and the recovered parameters may reside in different spaces. We test our algorithm and show that it yields improved dictionaries for a diverse set of inverse problems in geophysics and medical imaging.

  15. Intercell scheduling: A negotiation approach using multi-agent coalitions

    NASA Astrophysics Data System (ADS)

    Tian, Yunna; Li, Dongni; Zheng, Dan; Jia, Yunde

    2016-10-01

    Intercell scheduling problems arise as a result of intercell transfers in cellular manufacturing systems. Flexible intercell routes are considered in this article, and a coalition-based scheduling (CBS) approach using distributed multi-agent negotiation is developed. Taking advantage of the extended vision of the coalition agents, the global optimization is improved and the communication cost is reduced. The objective of the addressed problem is to minimize mean tardiness. Computational results show that, compared with the widely used combinatorial rules, CBS provides better performance not only in minimizing the objective, i.e. mean tardiness, but also in minimizing auxiliary measures such as maximum completion time, mean flow time and the ratio of tardy parts. Moreover, CBS is better than the existing intercell scheduling approach for the same problem with respect to the solution quality and computational costs.

  16. Innovative design of composite structures: Further studies in the use of a curvilinear fiber format to improve structural efficiency

    NASA Technical Reports Server (NTRS)

    Hyer, Michael W.; Charette, Robert F.

    1988-01-01

    Further studies to determine the potential for using a curvilinear fiber format in the design of composite laminates are reported. The curvilinear format is in contrast to the current practice of having the fibers aligned parallel to each other and in a straight line. The problem of a plate with a central circular hole is used as a candidate problem for this study. The study concludes that for inplane tensile loading the curvilinear format is superior. The limited results to date on compression buckling loads indicate that the curvilinear designs are poorer at resisting buckling. However, for the curvilinear design of interest, the reduction in buckling load is minimal, and so overall there is a gain in considering the curvilinear design.

  17. The connector space reduction mechanism

    NASA Technical Reports Server (NTRS)

    Milam, M. Bruce

    1990-01-01

    The Connector Space Reduction Mechanism (CSRM) is a simple device that can reduce the number of electromechanical devices on the Payload Interface Adapter/Station Interface Adapter (PIA/SIA) from 4 to 1. The device uses simplicity to attack the heart of the connector mating problem for large interfaces. The CSRM allows blind-mate connector mating with minimal alignment required over short distances. This eliminates potential interface binding problems and connector damage. The CSRM is compatible with G and H connectors and Moog Rotary Shutoff fluid couplings. The CSRM can also be used with less forgiving connectors, as was demonstrated in the lab. The CSRM is a NASA-Goddard exclusive design with a patent applied for. The CSRM is the correct mechanism for the PIA/SIA interface as well as other similar berthing interfaces.

  18. Industrial ecology: a philosophical introduction.

    PubMed Central

    Frosch, R A

    1992-01-01

    By analogy with natural ecosystems, an industrial ecology system, in addition to minimizing waste production in processes, would maximize the economical use of waste materials and of products at the ends of their lives as inputs to other processes and industries. This possibility can be made real only if a number of potential problems can be solved. These include the design of wastes along with the design of products and processes, the economics of such a system, the internalizing of the costs of waste disposal to the design and choice of processes and products, the effects of regulations intended for other purposes, and problems of responsibility and liability. The various stakeholders in making the effects of industry on the environment more benign will need to adopt some new behaviors if the possibility is to become real. PMID:11607255

  19. Minimizing distortion and internal forces in truss structures by simulated annealing

    NASA Technical Reports Server (NTRS)

    Kincaid, Rex K.

    1989-01-01

    Inaccuracies in the length of members and the diameters of joints of large truss reflector backup structures may produce unacceptable levels of surface distortion and member forces. However, if the member lengths and joint diameters can be measured accurately, it is possible to configure the members and joints so that the root-mean-square (rms) surface error and/or the rms member forces are minimized. Following Greene and Haftka (1989), it is assumed that the force vector f is linearly proportional to the member length errors e_M of dimension NMEMB (the number of members) and the joint errors e_J of dimension NJOINT (the number of joints), and that the best-fit displacement vector d is a linear function of f. Let NNODES denote the number of positions on the surface of the truss where error influences are measured. The solution of the problem is discussed. To classify this problem, it was compared to a similar combinatorial optimization problem. In particular, when only the member length errors are considered, minimizing d^2_rms is equivalent to the quadratic assignment problem. The quadratic assignment problem is a well-known NP-complete problem in the operations research literature; hence minimizing d^2_rms is also an NP-complete problem. The focus of the research is the development of a simulated annealing algorithm to reduce d^2_rms. The plausibility of this technique lies in its recent success on a variety of NP-complete combinatorial optimization problems, including the quadratic assignment problem. A physical analogy for simulated annealing is the way liquids freeze and crystallize. All computational experiments were done on a MicroVAX. The two-interchange heuristic is very fast but produces widely varying results. The two- and three-interchange heuristic provides less variability in the final objective function values but runs much more slowly. Simulated annealing produced the best objective function values for every starting configuration and was faster than the two- and three-interchange heuristic.
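
    A hedged sketch of simulated annealing with pairwise-interchange moves on such an assignment follows; the influence matrix Q and the error vector e are random stand-ins for the structural model:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 30
Q = rng.standard_normal((n, n)); Q = Q @ Q.T     # positive semidefinite
e = rng.standard_normal(n)                       # measured member errors

def cost(perm):                                  # d^2_rms-like quadratic
    v = e[perm]
    return v @ Q @ v

perm = rng.permutation(n)
T = 1.0
for step in range(20000):
    i, j = rng.integers(n, size=2)
    cand = perm.copy()
    cand[i], cand[j] = cand[j], cand[i]          # pairwise interchange
    dE = cost(cand) - cost(perm)
    if dE < 0 or rng.random() < np.exp(-dE / T): # Metropolis acceptance
        perm = cand
    T *= 0.9995                                  # geometric cooling

print(cost(perm))                                # final (reduced) objective
```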

  20. Steady state method to determine unsaturated hydraulic conductivity at the ambient water potential

    DOEpatents

    HUbbell, Joel M.

    2014-08-19

    The present invention relates to a new laboratory apparatus for measuring the unsaturated hydraulic conductivity at a single water potential. One or more embodiments of the invented apparatus can be used over a wide range of water potential values within the tensiometric range, require minimal laboratory preparation, and operate unattended for extended periods with minimal supervision.

  1. Sinc-Galerkin estimation of diffusivity in parabolic problems

    NASA Technical Reports Server (NTRS)

    Smith, Ralph C.; Bowers, Kenneth L.

    1991-01-01

    A fully Sinc-Galerkin method for the numerical recovery of spatially varying diffusion coefficients in linear partial differential equations is presented. Because the parameter recovery problems are inherently ill-posed, an output error criterion in conjunction with Tikhonov regularization is used to formulate them as infinite-dimensional minimization problems. The forward problems are discretized with a sinc basis in both the spatial and temporal domains thus yielding an approximate solution which displays an exponential convergence rate and is valid on the infinite time interval. The minimization problems are then solved via a quasi-Newton/trust region algorithm. The L-curve technique for determining an approximate value of the regularization parameter is briefly discussed, and numerical examples are given which show the applicability of the method both for problems with noise-free data as well as for those whose data contains white noise.
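
    A minimal linear analogue of the Tikhonov-regularized output-error formulation follows (the forward map A, the data d, and the closed-form normal-equations solve are illustrative stand-ins; the paper treats a nonlinear infinite-dimensional problem via a quasi-Newton/trust region method):

```python
import numpy as np

rng = np.random.default_rng(4)
A = rng.standard_normal((60, 20))                # stand-in forward map
q_true = np.linspace(1.0, 2.0, 20)               # "diffusivity" profile
d = A @ q_true + 0.01 * rng.standard_normal(60)  # noisy output data

# minimize ||A q - d||^2 + alpha ||q||^2 via the normal equations
alpha = 1e-2
q_hat = np.linalg.solve(A.T @ A + alpha * np.eye(20), A.T @ d)
print(np.linalg.norm(q_hat - q_true))            # small recovery error
```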

  2. Nonexpansiveness of a linearized augmented Lagrangian operator for hierarchical convex optimization

    NASA Astrophysics Data System (ADS)

    Yamagishi, Masao; Yamada, Isao

    2017-04-01

    Hierarchical convex optimization concerns two-stage optimization problems: the first stage problem is a convex optimization; the second stage problem is the minimization of a convex function over the solution set of the first stage problem. For the hierarchical convex optimization, the hybrid steepest descent method (HSDM) can be applied, where the solution set of the first stage problem must be expressed as the fixed point set of a certain nonexpansive operator. In this paper, we propose a nonexpansive operator that yields a computationally efficient update when it is plugged into the HSDM. The proposed operator is inspired by the update of the linearized augmented Lagrangian method. It is applicable to characterize the solution set of recent sophisticated convex optimization problems found in the context of inverse problems, where the sum of multiple proximable convex functions involving linear operators must be minimized to incorporate preferable properties into the minimizers. For such a problem formulation, there has not yet been reported any nonexpansive operator that yields an update free from the inversions of linear operators in cases where it is utilized in the HSDM. Unlike previously known nonexpansive operators, the proposed operator yields an inversion-free update in such cases. As an application of the proposed operator plugged into the HSDM, we also present, in the context of the so-called superiorization, an algorithmic solution to a convex optimization problem over the generalized convex feasible set where the intersection of the hard constraints is not necessarily simple.

  3. Competitive two-agent scheduling problems to minimize the weighted combination of makespans in a two-machine open shop

    NASA Astrophysics Data System (ADS)

    Jiang, Fuhong; Zhang, Xingong; Bai, Danyu; Wu, Chin-Chia

    2018-04-01

    In this article, a competitive two-agent scheduling problem in a two-machine open shop is studied. The objective is to minimize the weighted sum of the makespans of two competitive agents. A complexity proof is presented for minimizing the weighted combination of the makespan of each agent if the weight α belonging to agent B is arbitrary. Furthermore, two pseudo-polynomial-time algorithms using the largest alternate processing time (LAPT) rule are presented. Finally, two approximation algorithms are presented if the weight is equal to one. Additionally, another approximation algorithm is presented if the weight is larger than one.

  4. [E-Cigarettes – Friend or Foe?].

    PubMed

    Russi, Erich W

    2015-07-01

    Not nicotine, but an abundant amount of toxic chemicals produced by the combustion of tobacco are the cause of well-known health problems. E-cigarette vapor contains no or only minimal quantities of potentially harmful substances. Hence it can be assumed that vaping in adults is much less harmful than smoking of cigarettes. Furthermore, no data exist that e-cigarettes will encourage youngsters to become cigarette smokers. E-cigarette vaping has the potential to reduce the daily number of cigarettes smoked or facilitates cessation of smoking in heavily nicotine-dependent smokers, who keep on smoking despite a structured smoking cessation program. Health professionals should be aware of this type of nicotine substitution, since the controversial discussion is often emotional and not evidence-based.

  5. Two hybrid compaction algorithms for the layout optimization problem.

    PubMed

    Xiao, Ren-Bin; Xu, Yi-Chun; Amos, Martyn

    2007-01-01

    In this paper we present two new algorithms for the layout optimization problem: this concerns the placement of circular, weighted objects inside a circular container, the two objectives being to minimize imbalance of mass and to minimize the radius of the container. This problem carries real practical significance in industrial applications (such as the design of satellites), as well as being of significant theoretical interest. We present two nature-inspired algorithms for this problem, the first based on simulated annealing, and the second on particle swarm optimization. We compare our algorithms with the existing best-known algorithm, and show that our approaches outperform it in terms of both solution quality and execution time.

  6. Free time minimizers for the three-body problem

    NASA Astrophysics Data System (ADS)

    Moeckel, Richard; Montgomery, Richard; Sánchez Morgado, Héctor

    2018-03-01

    Free time minimizers of the action (called "semi-static" solutions by Mañe in International congress on dynamical systems in Montevideo (a tribute to Ricardo Mañé), vol 362, pp 120-131, 1996) play a central role in the theory of weak KAM solutions to the Hamilton-Jacobi equation (Fathi in Weak KAM Theorem in Lagrangian Dynamics Preliminary Version Number 10, 2017). We prove that any solution to Newton's three-body problem which is asymptotic to Lagrange's parabolic homothetic solution is eventually a free time minimizer. Conversely, we prove that every free time minimizer tends to Lagrange's solution, provided the mass ratios lie in a certain large open set of mass ratios. We were inspired by the work of Da Luz and Maderna (Math Proc Camb Philos Soc 156:209-227, 1980) which showed that every free time minimizer for the N-body problem is parabolic and therefore must be asymptotic to the set of central configurations. We exclude being asymptotic to Euler's central configurations by a second variation argument. Central configurations correspond to rest points for the McGehee blown-up dynamics. The large open set of mass ratios are those for which the linearized dynamics at each Euler rest point has a complex eigenvalue.

  7. [Minimal emotional dysfunction and first impression formation in personality disorders].

    PubMed

    Linden, M; Vilain, M

    2011-01-01

    "Minimal cerebral dysfunctions" are isolated impairments of basic mental functions, which are elements of complex functions like speech. The best described are cognitive dysfunctions such as reading and writing problems, dyscalculia, attention deficits, but also motor dysfunctions such as problems with articulation, hyperactivity or impulsivity. Personality disorders can be characterized by isolated emotional dysfunctions in relation to emotional adequacy, intensity and responsivity. For example, paranoid personality disorders can be characterized by continuous and inadequate distrust, as a disorder of emotional adequacy. Schizoid personality disorders can be characterized by low expressive emotionality, as a disorder of effect intensity, or dissocial personality disorders can be characterized by emotional non-responsivity. Minimal emotional dysfunctions cause interactional misunderstandings because of the psychology of "first impression formation". Studies have shown that in 100 ms persons build up complex and lasting emotional judgements about other persons. Therefore, minimal emotional dysfunctions result in interactional problems and adjustment disorders and in corresponding cognitive schemata.From the concept of minimal emotional dysfunctions specific psychotherapeutic interventions in respect to the patient-therapist relationship, the diagnostic process, the clarification of emotions and reality testing, and especially an understanding of personality disorders as impairment and "selection, optimization, and compensation" as a way of coping can be derived.

  8. Hydric potential of the river basin: Prądnik, Polish Highlands

    NASA Astrophysics Data System (ADS)

    Lepeška, Tomáš; Radecki-Pawlik, Artur; Wojkowski, Jakub; Walega, Andrzej

    2017-12-01

    Human society deals with floods, drought and water pollution. Facing those problems, landscape managers are asked how to prevent, or at least minimize, the adverse effects of water-related issues, so any help given to them is a useful additional tool. Within this paper, an approach to the mitigation of water-related problems is presented that relates the retention of precipitation to the use of ecosystems as a tool for improving the quality, quantity and availability of water resources throughout the region. One such approach is the determination of the landscape's hydric potential (LHP). This study examines an example of using this method under the conditions of Poland. The results of the research show that national data are entirely appropriate for implementation of the LHP method. Further, this approach revealed the classes of the hydric potential of the Prądnik river basin, which was selected as the experimental territory. The LHP results reflect the ecosystem attributes of the model river basin; areas of average LHP cover 63.26%, and areas of high and limited hydric potential each cover approximately 18.3%. The spatial distribution of LHP provided by this study offers a baseline for management of the river basin.

  9. Health Risk Factors Associated with Lifetime Abstinence from Alcohol in the 1979 National Longitudinal Survey of Youth Cohort.

    PubMed

    Kerr, William C; Lui, Camillia K; Williams, Edwina; Ye, Yu; Greenfield, Thomas K; Lown, E Anne

    2017-02-01

    The choice and definition of a comparison group in alcohol-related health studies remains a prominent issue in alcohol epidemiology due to potential biases in the risk estimates. The most commonly used comparison group has been current abstainers; however, this includes former drinkers who may have quit drinking due to health problems. Lifetime abstention could be the best option, but measurement issues, selection biases due to health and other risk factors, and small numbers in populations are important concerns. This study examines characteristics of lifetime abstention and occasional drinking that are relevant for alcohol-related health studies. This study used data from the National Longitudinal Survey of Youth 1979 cohort of 14 to 21 year olds followed through 2012 (n = 7,515). Definitions of abstinence and occasional drinking were constructed based on multiple measurements. Descriptive analyses were used to compare the definitions, and in further analysis, lifetime abstainers (n = 718) and lifetime minimal drinkers (n = 1,027) were compared with drinkers across demographics and early-life characteristics (i.e., religion, poverty, parental education, and family alcohol problems) in logistic regression models. Using a strict measurement of zero drinks from adolescence to the 50s, only 1.7% of the sample was defined as lifetime abstainer compared to a broader definition allowing a total of 1 drink over the lifetime that included 9.5% and to lifetime minimal drinking (a total of 3 drinks or less a month), which accounted for 13.7%. Factors significantly associated with lifetime abstention and lifetime minimal drinking included religion, poverty, having no family alcohol problems, Hispanic ethnicity, foreign-born, and female gender. Importantly, work-related health limitations in early life were significantly associated, but not childhood physical and mental health problems. Alcohol-related health studies should utilize lifetime classifications of drinkers and abstainers, and, in doing so, should account for early-life socioeconomic adversity and childhood health factors or consider these as unmeasured confounders. Copyright © 2017 by the Research Society on Alcoholism.

  10. Time Investment in Drug Supply Problems by Flemish Community Pharmacies.

    PubMed

    De Weerdt, Elfi; Simoens, Steven; Casteels, Minne; Huys, Isabelle

    2017-01-01

    Introduction: Drug supply problems are a known problem for pharmacies. Community and hospital pharmacies do everything they can to minimize impact on patients. This study aims to quantify the time spent by Flemish community pharmacies on drug supply problems. Materials and Methods: During 18 weeks, employees of 25 community pharmacies filled in a template with the total time spent on drug supply problems. The template stated all the steps community pharmacies could undertake to manage drug supply problems. Results: Considering the median over the study period, the median time spent on drug supply problems was 25 min per week, with a minimum of 14 min per week and a maximum of 38 min per week. After calculating the median of each pharmacy, large differences were observed between pharmacies: about 25% spent less than 15 min per week and one-fifth spent more than 1 h per week. The steps on which community pharmacists spent most time are: (i) "check missing products from orders," (ii) "contact wholesaler/manufacturers regarding potential drug shortages," and (iii) "communicating to patients." These three steps account for about 50% of the total time spent on drug supply problems during the study period. Conclusion: Community pharmacies spend about half an hour per week on drug supply problems. Although 25 min per week does not seem that much, the time spent is not delineated and community pharmacists are constantly confronted with drug supply problems.

  11. Optimal Paths in Gliding Flight

    NASA Astrophysics Data System (ADS)

    Wolek, Artur

    Underwater gliders are robust, long-endurance ocean sampling platforms that are increasingly being deployed in coastal regions. This new environment is characterized by shallow waters and significant currents that can challenge the mobility of these efficient (but traditionally slow-moving) vehicles. This dissertation aims to improve the performance of shallow-water underwater gliders through path planning. The path planning problem is formulated for a dynamic particle (or "kinematic car") model. The objective is to identify the path that satisfies specified boundary conditions and minimizes a particular cost. Several cost functions are considered. The problem is addressed using optimal control theory. The length scales of interest for path planning are within a few turn radii. First, an approach is developed for planning minimum-time paths, for a fixed-speed glider, that are sub-optimal but are guaranteed to be feasible in the presence of unknown time-varying currents. Next, the minimum-time problem for a glider with speed controls, which may vary between the stall speed and the maximum speed, is solved. Last, optimal paths that minimize change in depth (equivalently, maximize range) are investigated. Recognizing that path planning alone cannot overcome all of the challenges associated with significant currents and shallow waters, the design of a novel underwater glider with improved capabilities is explored. A glider with a pneumatic buoyancy engine (allowing large, rapid buoyancy changes) and a cylindrical moving-mass mechanism (generating large pitch and roll moments) is designed, manufactured, and tested to demonstrate potential improvements in speed and maneuverability.

  12. A new smoothing modified three-term conjugate gradient method for [Formula: see text]-norm minimization problem.

    PubMed

    Du, Shouqiang; Chen, Miao

    2018-01-01

    We consider a kind of nonsmooth optimization problem with [Formula: see text]-norm minimization, which has many applications in compressed sensing, signal reconstruction, and related engineering problems. Using smoothing approximation techniques, this kind of nonsmooth optimization problem can be transformed into a general unconstrained optimization problem, which can be solved by the proposed smoothing modified three-term conjugate gradient method. The method is based on the Polak-Ribière-Polyak conjugate gradient method, which has good numerical properties; the proposed method possesses the sufficient descent property without any line search and is also proved to be globally convergent. Finally, numerical experiments show the efficiency of the proposed method.
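
    The abstract does not spell out the update formulas, so the sketch below is only a minimal illustration: it assumes an l1-regularized least-squares objective, the common smoothing surrogate sqrt(x^2 + mu^2) for |x|, and a Zhang-Zhou-Li-style three-term Polak-Ribière-Polyak direction with a fixed step size. Names, parameters, and the choice of third-term weight are illustrative, not the authors' implementation.

        import numpy as np

        def grad_smoothed(x, A, b, lam, mu):
            """Gradient of 0.5*||Ax - b||^2 + lam * sum(sqrt(x_i^2 + mu^2)),
            a smooth surrogate for l1-regularized least squares."""
            return A.T @ (A @ x - b) + lam * x / np.sqrt(x**2 + mu**2)

        def smoothing_three_term_cg(A, b, lam=0.1, mu=1e-3, iters=300, step=1e-2):
            x = np.zeros(A.shape[1])
            g = grad_smoothed(x, A, b, lam, mu)
            d = -g
            for _ in range(iters):
                x_new = x + step * d
                g_new = grad_smoothed(x_new, A, b, lam, mu)
                y = g_new - g
                gg = g @ g + 1e-12
                beta = g_new @ y / gg               # Polak-Ribière-Polyak coefficient
                theta = g_new @ d / gg              # third-term weight
                d = -g_new + beta * d - theta * y   # three-term direction
                x, g = x_new, g_new
            return x

    With this direction the identity d·g = -||g||^2 holds at every iteration, which is the sufficient descent property the abstract refers to, independent of any line search.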

  13. Delaunay-based derivative-free optimization for efficient minimization of time-averaged statistics of turbulent flows

    NASA Astrophysics Data System (ADS)

    Beyhaghi, Pooriya

    2016-11-01

    This work considers the problem of efficiently minimizing the infinite-time average of a stationary ergodic process over a handful of independent parameters that affect it. Problems of this class, derived from physical or numerical experiments that are sometimes expensive to perform, are ubiquitous in turbulence research. In such problems, any given function evaluation, determined with finite sampling, is associated with a quantifiable amount of uncertainty, which may be reduced via additional sampling. This work proposes the first algorithm of this type. Our algorithm markedly reduces the overall cost of the optimization process for problems of this class. Further, under certain well-defined conditions, a rigorous proof of convergence to the global minimum of the problem considered is established.

  14. A new non-iterative reconstruction method for the electrical impedance tomography problem

    NASA Astrophysics Data System (ADS)

    Ferreira, A. D.; Novotny, A. A.

    2017-03-01

    The electrical impedance tomography (EIT) problem consists in determining the distribution of the electrical conductivity of a medium subject to a set of current fluxes, from measurements of the corresponding electrical potentials on its boundary. EIT is probably the most studied inverse problem since the fundamental works by Calderón from the 1980s. It has many relevant applications in medicine (detection of tumors), geophysics (localization of mineral deposits), and engineering (detection of corrosion in structures). In this work, we are interested in reconstructing a number of anomalies whose electrical conductivity differs from the background. Since the EIT problem is written in the form of an overdetermined boundary value problem, the idea is to rewrite it as a topology optimization problem. In particular, a shape functional measuring the misfit between the boundary measurements and the electrical potentials obtained from the model is minimized with respect to a set of ball-shaped anomalies by using the concept of topological derivatives. This means that the objective functional is expanded and then truncated up to the second-order term, leading to a quadratic and strictly convex form with respect to the parameters under consideration. Thus, a trivial optimization step leads to a non-iterative second-order reconstruction algorithm. As a result, the reconstruction process becomes very robust with respect to noisy data and independent of any initial guess. Finally, in order to show the effectiveness of the devised reconstruction algorithm, some numerical experiments in two spatial dimensions are presented, taking into account total and partial boundary measurements.
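
    The "trivial optimization step" has a one-line numerical counterpart: once the shape functional is expanded to second order, the quadratic, strictly convex model J(a) ≈ J0 + g·a + 0.5·a·H·a in the anomaly parameters a is minimized by a single linear solve. The sketch below illustrates only this final step, with a hypothetical 3-parameter gradient and Hessian standing in for the quantities assembled from topological derivatives.

        import numpy as np

        def second_order_step(g, H):
            """Minimizer of the truncated expansion J(a) = J0 + g.a + 0.5 a.H.a
            for positive definite H: one linear solve, no iterations,
            no initial guess."""
            return np.linalg.solve(H, -g)

        H = np.array([[4.0, 1.0, 0.0],
                      [1.0, 3.0, 0.5],
                      [0.0, 0.5, 2.0]])    # hypothetical Hessian
        g = np.array([-1.0, 0.2, 0.4])     # hypothetical gradient
        a_star = second_order_step(g, H)   # reconstructed anomaly parameters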

  15. Generating effective project scheduling heuristics by abstraction and reconstitution

    NASA Technical Reports Server (NTRS)

    Janakiraman, Bhaskar; Prieditis, Armand

    1992-01-01

    A project scheduling problem consists of a finite set of jobs, each with fixed integer duration, requiring one or more resources such as personnel or equipment, and each subject to a set of precedence relations, which specify allowable job orderings, and a set of mutual exclusion relations, which specify jobs that cannot overlap. No job can be interrupted once started. The objective is to minimize project duration. This objective arises in nearly every large construction project--from software to hardware to buildings. Because such project scheduling problems are NP-hard, they are typically solved by branch-and-bound algorithms. In these algorithms, lower-bound duration estimates (admissible heuristics) are used to improve efficiency. One way to obtain an admissible heuristic is to remove (abstract) all resources and mutual exclusion constraints and then obtain the minimal project duration for the abstracted problem; this minimal duration is the admissible heuristic. Although such abstracted problems can be solved efficiently, they yield inaccurate admissible heuristics precisely because those constraints that are central to solving the original problem are abstracted. This paper describes a method to reconstitute the abstracted constraints back into the solution to the abstracted problem while maintaining efficiency, thereby generating better admissible heuristics. Our results suggest that reconstitution can make good admissible heuristics even better.
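
    Concretely, with resources and mutual exclusions abstracted away, only precedence constraints remain, and the minimal duration of the relaxed problem is the critical-path length of the precedence DAG. The sketch below computes that admissible lower bound for a hypothetical four-job instance; the reconstitution step described in the paper would then tighten it.

        from functools import lru_cache

        duration = {"a": 3, "b": 2, "c": 4, "d": 1}                 # hypothetical jobs
        preds = {"a": [], "b": ["a"], "c": ["a"], "d": ["b", "c"]}  # precedences

        @lru_cache(maxsize=None)
        def earliest_finish(job):
            # longest duration-weighted path through the DAG ending at job
            start = max((earliest_finish(p) for p in preds[job]), default=0)
            return start + duration[job]

        # admissible heuristic: never overestimates the true project duration
        lower_bound = max(earliest_finish(j) for j in duration)     # = 8 here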

  16. Designing safety into the minimally invasive surgical revolution: a commentary based on the Jacques Perissat Lecture of the International Congress of the European Association for Endoscopic Surgery.

    PubMed

    Clarke, John R

    2009-01-01

    Surgical errors with minimally invasive surgery differ from those in open surgery. Perforations are typically the result of trocar introduction or electrosurgery. Infections include bioburdens, notably enteric viruses, on complex instruments. Retained foreign objects are primarily unretrieved device fragments and lost gallstones or other specimens. Fires and burns come from illuminated ends of fiber-optic cables and from electrosurgery. Pressure ischemia is more likely with longer endoscopic surgical procedures. Gas emboli can occur. Minimally invasive surgery is more dependent on complex equipment, with high likelihood of failures. Standardization, checklists, and problem reporting are solutions for minimizing failures. The necessity of electrosurgery makes education about best electrosurgical practices important. The recording of minimally invasive surgical procedures is an opportunity to debrief in a way that improves the reliability of future procedures. Safety depends on reliability, designing systems to withstand inevitable human errors. Safe systems are characterized by a commitment to safety, formal protocols for communications, teamwork, standardization around best practice, and reporting of problems for improvement of the system. Teamwork requires shared goals, mental models, and situational awareness in order to facilitate mutual monitoring and backup. An effective team has a flat hierarchy; team members are empowered to speak up if they are concerned about problems. Effective teams plan, rehearse, distribute the workload, and debrief. Surgeons doing minimally invasive surgery have a unique opportunity to incorporate the principles of safety into the development of their discipline.

  17. The use of CVD diamond burs for ultraconservative cavity preparations: a report of two cases.

    PubMed

    Carvalho, Carlos Augusto R; Fagundes, Ticiane C; Barata, Terezinha J E; Trava-Airoldi, Vladimir Jesus; Navarro, Maria Fidela L

    2007-01-01

    During the past decades, scientific developments in cutting instruments have changed the conventional techniques used to remove caries lesions. Ultrasound has been explored as an alternative for caries removal since the 1950s. However, the conventional technology for diamond powder aggregation with nickel metallic binders could not withstand ultrasonic power. Around 5 years ago, an alternative approach using chemical vapor deposition (CVD) resulted in synthetic diamond technology. CVD diamond burs are obtained with high adherence of the diamond as a unique stone on the metallic surface, with excellent abrading performance. This technology allows for diamond deposition with coalescent granulation in different formats of substrates. When connected to an ultrasonic handpiece, CVD diamond burs become an option for cavity preparation, maximizing preservation of tooth structure. Potential advantages include reduced noise, minimal damage to the gingival tissue, extended bur durability, improved proximal cavity access, reduced risk of hitting the adjacent tooth owing to the high inclination angles, and minimal patient risk of metal contamination. These innovative instruments also potentially eliminate some problems regarding the decreased cutting efficiency of conventional diamond burs. This clinical report presents the benefits of using CVD diamond burs coupled with an ultrasonic handpiece in the treatment of incipient caries. CVD diamond burs coupled with an ultrasonic device offer a promising alternative for removal of carious lesions when ultraconservative cavity preparations are required. Additionally, this system provides a less painful technique for caries removal, with minimal noise.

  18. Anti-Streptococcal activity of Brazilian Amazon Rain Forest plant extracts presents potential for preventive strategies against dental caries

    PubMed Central

    da SILVA, Juliana Paola Corrêa; de CASTILHO, Adriana Lígia; SARACENI, Cíntia Helena Couri; DÍAZ, Ingrit Elida Collantes; PACIÊNCIA, Mateus Luís Barradas; SUFFREDINI, Ivana Barbosa

    2014-01-01

    Caries is a global public health problem, whose control requires the introduction of low-cost treatments, such as strong prevention strategies, minimally invasive techniques and chemical prevention agents. Nature plays an important role as a source of new antibacterial substances that can be used in the prevention of caries, and Brazil is the richest country in terms of biodiversity. Objective In this study, the disk diffusion method (DDM) was used to screen over 2,000 Brazilian Amazon plant extracts against Streptococcus mutans. Material and Methods Seventeen active plant extracts were identified and fractionated. Extracts and their fractions, obtained by liquid-liquid partition, were tested in the DDM assay and in the microdilution broth assay (MBA) to determine their minimal inhibitory concentrations (MICs) and minimal bactericidal concentrations (MBCs). The extracts were also subjected to antioxidant analysis by thin layer chromatography. Results EB271, obtained from Casearia spruceana, showed significant activity against the bacterium in the DDM assay (20.67±0.52 mm), as did EB1129, obtained from Psychotria sp. (Rubiaceae) (15.04±2.29 mm). EB1493, obtained from Ipomoea alba, was the only extract to show strong activity against Streptococcus mutans (0.08 mg/mL

  19. Waveform Design for Multimedia Airborne Networks: Robust Multimedia Data Transmission in Cognitive Radio Networks

    DTIC Science & Technology

    2011-03-01

    [Fragment recovered from a garbled block diagram; only the pipeline outline survives.] According to Candes, Tao and Romberg [1], a small number of random projections of a compressible signal is all that is needed for reconstruction. The recovered diagram depicts the processing chain: random projection of the (possibly noisy) original signal; transform (i. DWT, ii. FFT, iii. DCT); solving the minimization problem; signal reconstruction; transmission over a noiseless or AWGN channel; and de-noising of the signal.

  20. Minimal subspace rotation on the Stiefel manifold for stabilization and enhancement of projection-based reduced order models for the compressible Navier–Stokes equations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Balajewicz, Maciej; Tezaur, Irina; Dowell, Earl

    For a projection-based reduced order model (ROM) of a fluid flow to be stable and accurate, the dynamics of the truncated subspace must be taken into account. This paper proposes an approach for stabilizing and enhancing projection-based fluid ROMs in which truncated modes are accounted for a priori via a minimal rotation of the projection subspace. Attention is focused on the full non-linear compressible Navier–Stokes equations in specific volume form as a step toward a more general formulation for problems with generic non-linearities. Unlike traditional approaches, no empirical turbulence modeling terms are required, and consistency between the ROM and the Navier–Stokes equation from which the ROM is derived is maintained. Mathematically, the approach is formulated as a trace minimization problem on the Stiefel manifold. The reproductive as well as predictive capabilities of the method are evaluated on several compressible flow problems, including a problem involving laminar flow over an airfoil with a high angle of attack, and a channel-driven cavity flow problem.

  1. Minimal subspace rotation on the Stiefel manifold for stabilization and enhancement of projection-based reduced order models for the compressible Navier–Stokes equations

    DOE PAGES

    Balajewicz, Maciej; Tezaur, Irina; Dowell, Earl

    2016-05-25

    For a projection-based reduced order model (ROM) of a fluid flow to be stable and accurate, the dynamics of the truncated subspace must be taken into account. This paper proposes an approach for stabilizing and enhancing projection-based fluid ROMs in which truncated modes are accounted for a priori via a minimal rotation of the projection subspace. Attention is focused on the full non-linear compressible Navier–Stokes equations in specific volume form as a step toward a more general formulation for problems with generic non-linearities. Unlike traditional approaches, no empirical turbulence modeling terms are required, and consistency between the ROM and the Navier–Stokes equation from which the ROM is derived is maintained. Mathematically, the approach is formulated as a trace minimization problem on the Stiefel manifold. The reproductive as well as predictive capabilities of the method are evaluated on several compressible flow problems, including a problem involving laminar flow over an airfoil with a high angle of attack, and a channel-driven cavity flow problem.

  2. Stress-Constrained Structural Topology Optimization with Design-Dependent Loads

    NASA Astrophysics Data System (ADS)

    Lee, Edmund

    Topology optimization is commonly used to distribute a given amount of material to obtain the stiffest structure under predefined fixed loads. The present work investigates the effect of applying stress constraints to topology optimization for problems with design-dependent loading, such as self-weight and pressure. In order to apply pressure loading, a material boundary identification scheme is proposed, iteratively connecting points of equal density. In previous research, design-dependent loading problems have been limited to compliance minimization. The present study employs a more practical approach by minimizing mass subject to failure constraints, and uses a stress relaxation technique to avoid stress constraint singularities. The results show that these design-dependent loading problems may converge to a local minimum when stress constraints are enforced. Comparisons between compliance minimization solutions and stress-constrained solutions are also given. The resulting topologies of the two solutions are usually vastly different, demonstrating the need for stress-constrained topology optimization.

  3. System for solving diagnosis and hitting set problems

    NASA Technical Reports Server (NTRS)

    Vatan, Farrokh (Inventor); Fijany, Amir (Inventor)

    2007-01-01

    The diagnosis problem arises when a system's actual behavior contradicts the expected behavior, thereby exhibiting symptoms (a collection of conflict sets). System diagnosis is then the task of identifying faulty components that are responsible for anomalous behavior. To solve the diagnosis problem, the present invention describes a method for finding the minimal set of faulty components (minimal diagnosis set) that explain the conflict sets. The method includes acts of creating a matrix of the collection of conflict sets, and then creating nodes from the matrix such that each node is a node in a search tree. A determination is made as to whether each node is a leaf node or has any children nodes. If any given node has children nodes, then the node is split until all nodes are leaf nodes. Information gathered from the leaf nodes is used to determine the minimal diagnosis set.

  4. Superiorization with level control

    NASA Astrophysics Data System (ADS)

    Cegielski, Andrzej; Al-Musallam, Fadhel

    2017-04-01

    The convex feasibility problem is to find a common point of a finite family of closed convex subsets. In many applications one requires something more, namely finding a common point of closed convex subsets that minimizes a continuous convex function. The latter requirement leads to an application of the superiorization methodology, which sits between methods for the convex feasibility problem and convex constrained minimization. Inspired by the superiorization idea, we introduce a method that sequentially applies a long-step algorithm to a sequence of convex feasibility problems; the method employs quasi-nonexpansive operators as well as subgradient projections with level control and does not require evaluation of the metric projection. We replace a perturbation of the iterates (applied in the superiorization methodology) with a perturbation of the current level in minimizing the objective function. We consider the method in Euclidean space in order to guarantee strong convergence, although the method is well defined in a Hilbert space.

  5. Solving the Container Stowage Problem (CSP) using Particle Swarm Optimization (PSO)

    NASA Astrophysics Data System (ADS)

    Matsaini; Santosa, Budi

    2018-04-01

    The Container Stowage Problem (CSP) is the problem of arranging containers on ships while considering rules such as total weight, weight of a single stack, destination, equilibrium, and placement of containers on the vessel. It is a combinatorial, NP-hard problem that is impractical to solve by enumeration; metaheuristics are therefore preferred. The objective is to minimize the amount of shifting so that the unloading time is minimized. Particle Swarm Optimization (PSO) is proposed to solve the problem. The implementation of PSO is combined with several steps: stack position change rules, stack changes based on destination, and stack changes based on the weight class of the stacks (light, medium, and heavy). The proposed method was applied to five different cases. The results were compared to Bee Swarm Optimization (BSO) and a heuristic method. PSO achieved a mean gap of 0.87% and a time gap of 60 seconds relative to the heuristic, while BSO achieved a mean gap of 2.98% and a time gap of 459.6 seconds.
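
    The stowage-specific repair steps are not detailed in the abstract, but the global-best PSO loop underneath them is standard. The sketch below is that generic continuous loop; for the CSP, each real-valued position would be decoded into a container arrangement (e.g., by ranking) before costing, and the sphere function here is only a stand-in for the shifting-count objective.

        import numpy as np

        def pso_minimize(cost, dim, n_particles=30, iters=200,
                         w=0.7, c1=1.5, c2=1.5, lo=-5.0, hi=5.0):
            rng = np.random.default_rng(0)
            x = rng.uniform(lo, hi, (n_particles, dim))   # positions
            v = np.zeros((n_particles, dim))              # velocities
            pbest = x.copy()
            pbest_cost = np.array([cost(p) for p in x])
            gbest = pbest[np.argmin(pbest_cost)].copy()
            for _ in range(iters):
                r1, r2 = rng.random((2, n_particles, dim))
                v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
                x = np.clip(x + v, lo, hi)
                c = np.array([cost(p) for p in x])
                better = c < pbest_cost
                pbest[better], pbest_cost[better] = x[better], c[better]
                gbest = pbest[np.argmin(pbest_cost)].copy()
            return gbest, float(pbest_cost.min())

        # usage: sphere function as a placeholder objective
        best, best_cost = pso_minimize(lambda p: float(np.sum(p**2)), dim=5)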

  6. A linear programming approach to max-sum problem: a review.

    PubMed

    Werner, Tomás

    2007-07-01

    The max-sum labeling problem, defined as maximizing a sum of binary (i.e., pairwise) functions of discrete variables, is a general NP-hard optimization problem with many applications, such as computing the MAP configuration of a Markov random field. We review a not widely known approach to the problem, developed by Ukrainian researchers Schlesinger et al. in 1976, and show how it contributes to recent results, most importantly, those on the convex combination of trees and tree-reweighted max-product. In particular, we review Schlesinger et al.'s upper bound on the max-sum criterion, its minimization by equivalent transformations, its relation to the constraint satisfaction problem, the fact that this minimization is dual to a linear programming relaxation of the original problem, and the three kinds of consistency necessary for optimality of the upper bound. We revisit problems with Boolean variables and supermodular problems. We describe two algorithms for decreasing the upper bound. We present an example application for structural image analysis.

  7. Minimizing metastatic risk in radiotherapy fractionation schedules

    NASA Astrophysics Data System (ADS)

    Badri, Hamidreza; Ramakrishnan, Jagdish; Leder, Kevin

    2015-11-01

    Metastasis is the process by which cells from a primary tumor disperse and form new tumors at distant anatomical locations. The treatment and prevention of metastatic cancer remains an extremely challenging problem. This work introduces a novel biologically motivated objective function to the radiation optimization community that takes into account metastatic risk instead of the status of the primary tumor. In this work, we consider the problem of developing fractionated irradiation schedules that minimize production of metastatic cancer cells while keeping normal tissue damage below an acceptable level. A dynamic programming framework is utilized to determine the optimal fractionation scheme. We evaluated our approach on a breast cancer case using the heart and the lung as organs-at-risk (OAR). For small tumor α/β values, hypo-fractionated schedules were optimal, which is consistent with standard models. However, for relatively larger α/β values, we found the type of schedule depended on various parameters such as the time when metastatic risk was evaluated, the α/β values of the OARs, and the normal tissue sparing factors. Interestingly, in contrast to standard models, hypo-fractionated and semi-hypo-fractionated schedules (large initial doses with doses tapering off with time) were suggested even with large tumor α/β values. Numerical results indicate the potential for significant reduction in metastatic risk.

  8. Evidence of Human Health Impacts from Uncontrolled Coal Fires in Jharia, India

    NASA Astrophysics Data System (ADS)

    Dhar, U.; Balogun, A. H.; Finkelman, R.; Chakraborty, S.; Olanipekun, O.; Shaikh, W. A.

    2017-12-01

    Uncontrolled coal fires and burning coal waste piles have been reported from dozens of countries. These fires can be caused by spontaneous combustion, sparks from machinery, lightning strikes, grass or forest fires, or set intentionally. Both underground and surface coal fires mobilize potentially toxic elements such as sulfur, arsenic, selenium, fluorine, lead, and mercury, as well as dangerous organic compounds such as benzene, toluene, xylene, and ethylbenzene, and deadly gases such as CO2 and CO. Despite the serious health problems that uncontrolled coal fires can cause, it is rather surprising that there has been so little research and documentation of their health impacts. Underground coal fires in the Jharia region of India, where more than a million people reside, have been burning for 100 years. Numerous villages sit above the underground fires, exposing residents daily to dangerous emissions. Local residents near the fire-affected areas do their daily chores without concern about the intensity of nearby fires. During winter, children enjoy the heat of the coal fires, oblivious to the potentially harmful emissions. To determine whether these uncontrolled coal fires have caused health problems, we developed a brief questionnaire on general health indices and administered it to residents of the Jharia region. Sixty responses were obtained from residents of two villages, one proximal to the coal fires and one about 5 miles away from the fires. The responses were statistically analyzed using SAS 9.4. At a significance level of 5%, villagers who lived more than 5 miles away from the fires had 98.3% lower odds of undesirable health outcomes. This brief survey indicates the risk posed by underground coal fires and how they contribute to undesirable health impacts. What remains is to determine the specific health issues, which components of the emissions cause the health problems, and what can be done to minimize these problems. Collaboration between geoscientists and public health researchers is essential to assess complex geohealth issues such as those that may be caused by uncontrolled coal fires. This type of multidisciplinary collaboration must be maintained and expanded to include engineers, social scientists, and others to help minimize or avoid these problems.

  9. Two Methods for Efficient Solution of the Hitting-Set Problem

    NASA Technical Reports Server (NTRS)

    Vatan, Farrokh; Fijany, Amir

    2005-01-01

    A paper addresses much of the same subject matter as that of Fast Algorithms for Model-Based Diagnosis (NPO-30582), which appears elsewhere in this issue of NASA Tech Briefs. However, in the paper, the emphasis is more on the hitting-set problem (also known as the transversal problem), which is well known among experts in combinatorics. The authors' primary interest in the hitting-set problem lies in its connection to the diagnosis problem: it is a theorem of model-based diagnosis that in the set-theory representation of the components of a system, the minimal diagnoses of a system are the minimal hitting sets of the system. In the paper, the hitting-set problem (and, hence, the diagnosis problem) is translated from a combinatorial to a computational problem by mapping it onto the Boolean satisfiability and integer-programming problems. The paper goes on to describe developments nearly identical to those summarized in the cited companion NASA Tech Briefs article, including the utilization of Boolean-satisfiability and integer-programming techniques to reduce the computation time and/or memory needed to solve the hitting-set problem.
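
    For small systems the theorem can be exercised directly: the sketch below finds a minimum-cardinality hitting set by exhaustive search over candidate sizes, which is exactly the computation that the Boolean-satisfiability and integer-programming encodings make tractable at scale. The conflict sets are hypothetical.

        from itertools import chain, combinations

        def minimum_hitting_set(conflict_sets):
            """Smallest set of components intersecting every conflict set;
            by the model-based diagnosis theorem this is a minimal diagnosis.
            Exhaustive search: fine for small instances of an NP-hard problem."""
            universe = sorted(set(chain.from_iterable(conflict_sets)))
            for k in range(1, len(universe) + 1):
                for cand in combinations(universe, k):
                    if all(set(cand) & s for s in conflict_sets):
                        return set(cand)
            return set()

        conflicts = [{"c1", "c2"}, {"c2", "c3"}, {"c1", "c3"}]
        print(minimum_hitting_set(conflicts))   # a size-2 set, e.g. {'c1', 'c2'}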

  10. Sequentially reweighted TV minimization for CT metal artifact reduction.

    PubMed

    Zhang, Xiaomeng; Xing, Lei

    2013-07-01

    Metal artifact reduction has long been an important topic in x-ray CT image reconstruction. In this work, the authors propose an iterative method that sequentially minimizes a reweighted total variation (TV) of the image and produces substantially artifact-reduced reconstructions. A sequentially reweighted TV minimization algorithm is proposed to fully exploit the sparseness of image gradients (IG). The authors first formulate a constrained optimization model that minimizes a weighted TV of the image, subject to the constraint that the estimated projection data are within a specified tolerance of the available projection measurements, with image non-negativity enforced. The authors then solve a sequence of weighted TV minimization problems where weights used for the next iteration are computed from the current solution. Using the complete projection data, the algorithm first reconstructs an image from which a binary metal image can be extracted. Forward projection of the binary image identifies metal traces in the projection space. The metal-free background image is then reconstructed from the metal-trace-excluded projection data by employing a different set of weights. Each minimization problem is solved using a gradient method that alternates projection-onto-convex-sets and steepest descent. A series of simulation and experimental studies are performed to evaluate the proposed approach. Our study shows that the sequentially reweighted scheme, by altering a single parameter in the weighting function, flexibly controls the sparsity of the IG and reconstructs artifact-free images in a two-stage process. It successfully produces images with significantly reduced streak artifacts, suppressed noise, and well-preserved contrast and edge properties. The sequentially reweighted TV minimization provides a systematic approach for suppressing CT metal artifacts. The technique can also be generalized to other "missing data" problems in CT image reconstruction.
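
    The paper's exact weighting function is not given in the abstract; a common choice in reweighted-TV schemes is w = 1/(|∇u| + ε), recomputed from the current reconstruction at each outer pass so that true edges receive small weights while everything else is penalized more heavily. The sketch below shows that single reweighting step on an image array, using forward differences with a replicated boundary; ε plays the sparsity-controlling role mentioned above.

        import numpy as np

        def gradient_magnitude(u):
            gx = np.diff(u, axis=0, append=u[-1:, :])   # forward differences,
            gy = np.diff(u, axis=1, append=u[:, -1:])   # replicated boundary
            return np.sqrt(gx**2 + gy**2)

        def reweighted_tv_weights(u, eps=1e-3):
            # small weights on strong edges, large weights elsewhere
            return 1.0 / (gradient_magnitude(u) + eps)

        def weighted_tv(u, w):
            return float(np.sum(w * gradient_magnitude(u)))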

  11. Evolutionary Optimization of a Geometrically Refined Truss

    NASA Technical Reports Server (NTRS)

    Hull, P. V.; Tinker, M. L.; Dozier, G. V.

    2007-01-01

    Structural optimization is a field of research that has experienced noteworthy growth for many years. Researchers in this area have developed optimization tools to successfully design and model structures, typically minimizing mass while maintaining certain deflection and stress constraints. Numerous optimization studies have been performed to minimize mass, deflection, and stress on a benchmark cantilever truss problem, predominantly applying traditional optimization theory, in which the cross-sectional area of each member is optimized to minimize the aforementioned objectives. This Technical Publication (TP) presents a structural optimization technique that has previously been applied to compliant mechanism design. This technique combines topology optimization, geometric refinement, finite element analysis, and two forms of evolutionary computation (genetic algorithms and differential evolution) to optimize a benchmark structural optimization problem. A nontraditional solution to the benchmark problem is presented in this TP, specifically a geometrically refined topological solution. The design process begins with an alternate control mesh formulation, a multilevel geometric smoothing operation, and an elastostatic structural analysis. The design process is wrapped in an evolutionary computing optimization toolset.

  12. Cooperative Surveillance and Pursuit Using Unmanned Aerial Vehicles and Unattended Ground Sensors

    PubMed Central

    Las Fargeas, Jonathan; Kabamba, Pierre; Girard, Anouck

    2015-01-01

    This paper considers the problem of path planning for a team of unmanned aerial vehicles performing surveillance near a friendly base. The unmanned aerial vehicles do not possess sensors with automated target recognition capability and, thus, rely on communicating with unattended ground sensors placed on roads to detect and image potential intruders. The problem is motivated by persistent intelligence, surveillance, reconnaissance and base defense missions. The problem is formulated and shown to be intractable. A heuristic algorithm to coordinate the unmanned aerial vehicles during surveillance and pursuit is presented. Revisit deadlines are used to schedule the vehicles' paths nominally. The algorithm uses detections from the sensors to predict intruders' locations and selects the vehicles' paths by minimizing a linear combination of missed deadlines and the probability of not intercepting intruders. An analysis of the algorithm's completeness and complexity is then provided. The effectiveness of the heuristic is illustrated through simulations in a variety of scenarios. PMID:25591168

  13. A Solution in Search of Problems

    NASA Technical Reports Server (NTRS)

    1981-01-01

    Ferrofluids offered vast problem-solving potential. Under license for the NASA technology, Dr. Ronald Moskowitz and Dr. Ronald Rosensweig formed Ferrofluids Corporation. The first problem they solved related to the manufacture of semiconductor "chips" for use in electronic systems. They developed a magnetic seal composed of a ferrofluid and a magnetic circuit. The magnetic field confines the ferrofluid in the regions between the stationary elements and the rotary shaft of the seal. The result is a series of liquid barriers that totally bar passage of contaminants. The seal is virtually wear-proof and has a lifetime measured in billions of shaft revolutions. It has reduced maintenance, minimized "downtime" of production equipment, and reduced the cost of expensive materials that had previously been lost through seal failures. Products based on ferrofluids include exclusion seals for computer disc drives, inertia dampers for stepper motors, and performance-improving, failure-reducing coolants for hi-fi loudspeakers. Other applications include analytical instrumentation, medical equipment, industrial processes, silicon crystal growing furnaces, plasma processes, fusion research, visual displays, and automated machine tools.

  14. Traffic routing for multicomputer networks with virtual cut-through capability

    NASA Technical Reports Server (NTRS)

    Kandlur, Dilip D.; Shin, Kang G.

    1992-01-01

    Consideration is given to the problem of selecting routes for interprocess communication in a network with virtual cut-through capability, while balancing the network load and minimizing the number of times that a message gets buffered. An approach is proposed that formulates the route selection problem as a minimization problem with a link cost function that depends upon the traffic through the link. The form of this cost function is derived using the probability of establishing a virtual cut-through route. The route selection problem is shown to be NP-hard, and an algorithm is developed to incrementally reduce the cost by rerouting the traffic. The performance of this algorithm is exemplified by two network topologies: the hypercube and the C-wrapped hexagonal mesh.

  15. The minimal residual QR-factorization algorithm for reliably solving subset regression problems

    NASA Technical Reports Server (NTRS)

    Verhaegen, M. H.

    1987-01-01

    A new algorithm to solve subset regression problems is described, called the minimal residual QR factorization algorithm (MRQR). This scheme performs a QR factorization with a new column pivoting strategy based on the change in the residual of the least squares problem. Furthermore, it is demonstrated that this basic scheme can be extended in a numerically efficient way to combine the advantages of existing numerical procedures, such as the singular value decomposition, with those of more classical statistical procedures, such as stepwise regression. This extension is presented as an advisory expert system that guides the user in solving the subset regression problem. The advantages of the new procedure are highlighted by a numerical example.
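
    MRQR's pivoting rule, selecting the column whose inclusion most reduces the least-squares residual, can be mimicked without the efficient QR updating by plain greedy forward selection. The sketch below is that naive version, re-solving the least-squares problem from scratch at each step; it is not the paper's numerically efficient scheme.

        import numpy as np

        def forward_select(A, b, k):
            """Greedily pick k columns of A, each time adding the column
            that most reduces the residual ||A_S x - b||."""
            selected, remaining = [], list(range(A.shape[1]))
            for _ in range(k):
                best_j, best_res = None, np.inf
                for j in remaining:
                    cols = selected + [j]
                    x, *_ = np.linalg.lstsq(A[:, cols], b, rcond=None)
                    res = np.linalg.norm(A[:, cols] @ x - b)
                    if res < best_res:
                        best_j, best_res = j, res
                selected.append(best_j)
                remaining.remove(best_j)
            return selected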

  16. Assessing the Problem Formulation in an Integrated Assessment Model: Implications for Climate Policy Decision-Support

    NASA Astrophysics Data System (ADS)

    Garner, G. G.; Reed, P. M.; Keller, K.

    2014-12-01

    Integrated assessment models (IAMs) are often used with the intent to aid in climate change decisionmaking. Numerous studies have analyzed the effects of parametric and/or structural uncertainties in IAMs, but uncertainties regarding the problem formulation are often overlooked. Here we use the Dynamic Integrated model of Climate and the Economy (DICE) to analyze the effects of uncertainty surrounding the problem formulation. The standard DICE model adopts a single objective to maximize a weighted sum of utilities of per-capita consumption. Decisionmakers, however, may be concerned with a broader range of values and preferences that are not captured by this a priori definition of utility. We reformulate the problem by introducing three additional objectives that represent values such as (i) reliably limiting global average warming to two degrees Celsius and minimizing both (ii) the costs of abatement and (iii) the damages due to climate change. We derive a set of Pareto-optimal solutions over which decisionmakers can trade-off and assess performance criteria a posteriori. We illustrate the potential for myopia in the traditional problem formulation and discuss the capability of this multiobjective formulation to provide decision support.

  17. How should functional imaging of patients with disorders of consciousness contribute to their clinical rehabilitation needs?

    PubMed Central

    Laureys, Steven; Giacino, Joseph T.; Schiff, Nicholas D.; Schabus, Manuel; Owen, Adrian M.

    2010-01-01

    Purpose of review We discuss the problems of evidence-based neurorehabilitation in disorders of consciousness, and recent functional neuroimaging data obtained in the vegetative state and minimally conscious state. Recent findings Published data are insufficient to make recommendations for or against any of the neurorehabilitative treatments in vegetative state and minimally conscious state patients. Electrophysiological and functional imaging studies have been shown to be useful in measuring residual brain function in noncommunicative brain-damaged patients. Despite the fact that such studies could in principle allow an objective quantification of the putative cerebral effect of rehabilitative treatment in the vegetative state and minimally conscious state, they have so far not been used in this context. Summary Without controlled studies and careful patient selection criteria it will not be possible to evaluate the potential of therapeutic interventions in disorders of consciousness. There also is a need to elucidate the neurophysiological effects of such treatments. Integration of multimodal neuroimaging techniques should eventually improve our ability to disentangle differences in outcome on the basis of underlying mechanisms and better guide our therapeutic options in the challenging patient populations encountered following severe acute brain damage. PMID:17102688

  18. Artificial neural network classification using a minimal training set - Comparison to conventional supervised classification

    NASA Technical Reports Server (NTRS)

    Hepner, George F.; Logan, Thomas; Ritter, Niles; Bryant, Nevin

    1990-01-01

    Recent research has shown artificial neural networks (ANNs) to be capable of pattern recognition and the classification of image data. This paper examines the potential application of neural network computing to satellite image processing. A second objective is to provide a preliminary comparison of ANN classification with conventional supervised classification. An artificial neural network can be trained to do land-cover classification of satellite imagery using selected sites representative of each class, in a manner similar to conventional supervised classification. One of the major problems associated with recognition and classification of patterns from remotely sensed data is the time and cost of developing a set of training sites. This research compares the use of an ANN back-propagation classification procedure with a conventional supervised maximum likelihood classification procedure using a minimal training set. When using a minimal training set, the neural network is able to provide a land-cover classification superior to the classification derived from the conventional classification procedure. This research is the foundation for developing application parameters for further prototyping of software and hardware implementations for artificial neural networks in satellite image and geographic information processing.

  19. Fabrication of sinterable silicon nitride by injection molding

    NASA Technical Reports Server (NTRS)

    Quackenbush, C. L.; French, K.; Neil, J. T.

    1982-01-01

    Transformation of structural ceramics from the laboratory to production requires development of near-net-shape fabrication techniques that minimize finish grinding. One potential technique for producing large quantities of complex-shaped parts at low cost is injection molding. Binder selection methodology, compounding of ceramic and binder components, injection molding techniques, and problems in binder removal are discussed. Strength, oxidation resistance, and microstructure of sintered silicon nitride fabricated by injection molding are discussed and compared to data generated from isostatically dry-pressed material.

  20. Potential Impacts Related to the Air Training Command Realignments. Institutional Characteristics, Transportation, Civilian Community Utilities, Land Use for Craig AFB, Alabama, Webb AFB, Texas, Columbus AFB, Mississippi, Laughlin AFB, Texas, Reese AFB, Texas, Vance AFB, Oklahoma

    DTIC Science & Technology

    1976-12-30

    [OCR fragments from the report; only scattered content is recoverable.] ... percent response and 70.4 percent civilian response, extended to 100 percent of the population, 1975; County Census Data, General Social and Economic Characteristics. ... No major transportation problems exist, and the alternative action is not expected to disturb the situation in the future. On-Base (AFERN 4.4.1.3): no major problems ... property values could increase in specific residential areas that are perceived as meeting certain economic and social needs where current vacancies are minimal.

  1. Monitoring the evolving land use patterns using remote sensing

    NASA Technical Reports Server (NTRS)

    Goehring, D. R.

    1971-01-01

    The urbanization of Walnut Valley from 1953 to 1971 prompted land use change from intensive von Thunen market-oriented patterns to extensive, disinvested, production-factor-minimized patterns. Short-run, interim land use planning has allowed agriculture to persist, but only in the form of barley farming and grazing. Aerial photography, used synoptically, recorded six periods of land use change bracketing the dates before and after the freeway was announced and built. Interpretation of these changes helps identify potential conversions to urban uses, allowing guidelines to be established that address rural-urban transition problems before they arise.

  2. Half-and-Half Palatoplasty.

    PubMed

    Han, Hyun Ho; Kang, In Sook; Rhie, Jong Won

    2014-08-01

    A 14-month-old child was diagnosed with a Veau Class II cleft palate. Von Langenbeck palatoplasty was performed for the right palate, and V-Y pushback palatoplasty was performed for the left palate. The child had no particular problems during the surgery, and the authors were able to elongate the cleft by 10 mm. Contrary to preoperative concerns regarding the hybrid use of palatoplasties, the uvula and midline incisions remained balanced in the middle. The authors named this combination method "half-and-half palatoplasty" and plan to conduct a long-term follow-up study of it as a potential solution that minimizes the complications of palatoplasty.

  3. A case study of cost-efficient staffing under annualized hours.

    PubMed

    van der Veen, Egbert; Hans, Erwin W; Veltman, Bart; Berrevoets, Leo M; Berden, Hubert J J M

    2015-09-01

    We propose a mathematical programming formulation that incorporates annualized hours and proves to be very flexible with regard to modeling various contract types. The objective of our model is to minimize salary cost while covering workforce demand, using annualized hours. Our model is able to address various business questions regarding tactical workforce planning problems, e.g., with regard to annualized hours, subcontracting, and vacation planning. In a case study for a Dutch hospital, two of these business questions are addressed, and we demonstrate that applying annualized hours potentially saves up to 5.2% in personnel wages annually.
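
    The paper's formulation is not reproduced in the abstract, so the sketch below is only a toy version of the core idea: choose hours per employee and period to minimize salary cost, subject to per-period demand coverage and a per-employee annualized-hours cap, posed as a linear program. All numbers are illustrative.

        import numpy as np
        from scipy.optimize import linprog

        n_emp, n_per = 3, 4
        wage = np.array([30.0, 28.0, 35.0])              # cost per hour
        demand = np.array([120.0, 90.0, 150.0, 100.0])   # hours needed per period
        year_cap = np.array([160.0, 160.0, 140.0])       # contracted annual hours

        # decision variables x[e, p], flattened employee-major
        c = np.repeat(wage, n_per)
        A_cover = -np.tile(np.eye(n_per), (1, n_emp))        # sum_e x[e,p] >= demand[p]
        A_cap = np.kron(np.eye(n_emp), np.ones((1, n_per)))  # sum_p x[e,p] <= cap[e]
        res = linprog(c,
                      A_ub=np.vstack([A_cover, A_cap]),
                      b_ub=np.concatenate([-demand, year_cap]),
                      bounds=(0, None))
        hours = res.x.reshape(n_emp, n_per)              # cost-minimal schedule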

  4. Evolving prosocial and sustainable neighborhoods and communities.

    PubMed

    Biglan, Anthony; Hinds, Erika

    2009-01-01

    In this review, we examine randomized controlled trials of community interventions to affect health. The evidence supports the efficacy of community interventions for preventing tobacco, alcohol, and other drug use; several recent trials have shown the benefits of community interventions for preventing multiple problems of young people, including antisocial behavior. However, the next generation of community intervention research needs to reflect more fully the fact that most psychological and behavioral problems of humans are interrelated and result from the same environmental conditions. The evidence supports testing new comprehensive community interventions that focus on increasing nurturance in communities. Nurturing communities will be ones in which families, schools, neighborhoods, and workplaces (a) minimize biologically and socially toxic events, (b) richly reinforce prosocial behavior, and (c) foster psychological acceptance. Such interventions also have the potential to make neighborhoods more sustainable.

  5. Minimum relative entropy distributions with a large mean are Gaussian

    NASA Astrophysics Data System (ADS)

    Smerlak, Matteo

    2016-12-01

    Entropy optimization principles are versatile tools with wide-ranging applications from statistical physics to engineering to ecology. Here we consider the following constrained problem: Given a prior probability distribution q, find the posterior distribution p minimizing the relative entropy (also known as the Kullback-Leibler divergence) with respect to q under the constraint that mean(p) is fixed and large. We show that solutions to this problem are approximately Gaussian. We discuss two applications of this result. In the context of dissipative dynamics, the equilibrium distribution of a Brownian particle confined in a strong external field is independent of the shape of the confining potential. We also derive an H-type theorem for evolutionary dynamics: The entropy of the (standardized) distribution of fitness of a population evolving under natural selection is eventually increasing in time.
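
    The constrained problem has the classical exponential-tilting solution from which the Gaussian approximation is derived; a sketch of the stationarity step, under the usual regularity assumptions, is:

        \min_p \; D(p\|q) = \int p(x)\,\ln\frac{p(x)}{q(x)}\,dx
        \quad\text{s.t.}\quad \int x\,p(x)\,dx = m, \quad \int p(x)\,dx = 1.

    Introducing Lagrange multipliers for the two constraints and setting the variation with respect to p to zero gives the tilted family

        p_\lambda(x) = \frac{q(x)\,e^{\lambda x}}{Z(\lambda)}, \qquad
        Z(\lambda) = \int q(x)\,e^{\lambda x}\,dx, \qquad
        \frac{d}{d\lambda}\ln Z(\lambda) = m,

    and the paper's result is that, as m becomes large, this tilted distribution is approximately Gaussian.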

  6. Evolving Prosocial and Sustainable Neighborhoods and Communities

    PubMed Central

    Biglan, Anthony; Hinds, Erika

    2008-01-01

    In this chapter, we review randomized controlled trials of community interventions to affect health. The evidence supports the efficacy of community interventions for preventing tobacco, alcohol, and other drug use; several recent trials have shown the benefits of community interventions for preventing multiple problems of young people, including antisocial behavior. However, the next generation of community intervention research needs to reflect more fully the fact that most psychological and behavioral problems of humans are inter-related and result from the same environmental conditions. The evidence supports testing a new set of comprehensive community interventions that focus on increasing nurturance in communities. Nurturing communities will be ones in which families, schools, neighborhoods, and workplaces (a) minimize biologically and socially toxic events, (b) richly reinforce prosocial behavior, and (c) foster psychological acceptance. Such interventions also have the potential to make neighborhoods more sustainable. PMID:19327029

  7. Marker optimization for facial motion acquisition and deformation.

    PubMed

    Le, Binh H; Zhu, Mingyang; Deng, Zhigang

    2013-11-01

    A long-standing problem in marker-based facial motion capture is determining the optimal facial mocap marker layout. Despite its wide range of potential applications, this problem has not yet been systematically explored. This paper describes an approach to computing optimized marker layouts for facial motion acquisition as an optimization of characteristic control points from a set of high-resolution, ground-truth facial mesh sequences. Specifically, the thin-shell linear deformation model is imposed onto the example pose reconstruction process via optional hard constraints such as symmetry and multiresolution constraints. Through our experiments and comparisons, we validate the effectiveness, robustness, and accuracy of our approach. Besides guiding minimal yet effective placement of facial mocap markers, we also describe and demonstrate two selected applications: marker-based facial mesh skinning and multiresolution facial performance capture.

  8. Geothermal Energy: Prospects and Problems

    ERIC Educational Resources Information Center

    Ritter, William W.

    1973-01-01

    An examination of geothermal energy as a means of increasing the United States power resources with minimal pollution problems. Developed and planned geothermal-electric power installations around the world, capacities, installation dates, etc., are reviewed. Environmental impact, problems, etc. are discussed. (LK)

  9. Quantum scattering in one-dimensional systems satisfying the minimal length uncertainty relation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bernardo, Reginald Christian S., E-mail: rcbernardo@nip.upd.edu.ph; Esguerra, Jose Perico H., E-mail: jesguerra@nip.upd.edu.ph

    In quantum gravity theories, when the scattering energy is comparable to the Planck energy the Heisenberg uncertainty principle breaks down and is replaced by the minimal length uncertainty relation. In this paper, the consequences of the minimal length uncertainty relation on one-dimensional quantum scattering are studied using an approach involving a recently proposed second-order differential equation. An exact analytical expression for the tunneling probability through a locally-periodic rectangular potential barrier system is obtained. Results show that the existence of a non-zero minimal length uncertainty tends to shift the resonant tunneling energies in the positive direction. Scattering through a locally-periodic potential composed of double-rectangular potential barriers shows that the first band of resonant tunneling energies widens for minimal length cases when the double-rectangular potential barrier is symmetric but narrows down when the double-rectangular potential barrier is asymmetric. A numerical solution which exploits the use of Wronskians is used to calculate the transmission probabilities through the Pöschl–Teller well, Gaussian barrier, and double-Gaussian barrier. Results show that the probability of passage through the Pöschl–Teller well and Gaussian barrier is smaller in the minimal length cases compared to the non-minimal length case. For the double-Gaussian barrier, the probability of passage for energies that are more positive than the resonant tunneling energy is larger in the minimal length cases compared to the non-minimal length case. The approach is exact and applicable to many types of scattering potential.

  10. Relativized problems with abelian phase group in topological dynamics.

    PubMed

    McMahon, D

    1976-04-01

    Let (X, T) be the equicontinuous minimal transformation group with X = ∏_{n=1}^{∞} Z₂, the Cantor group, and S = ⊕_{n=1}^{∞} Z₂ endowed with the discrete topology, acting on X by right multiplication. For any countable group T we construct a function F: X × S → T such that if (Y, T) is a minimal transformation group, then (X × Y, S) is a minimal transformation group with the action defined by (x, y)s = [xs, yF(x, s)]. If (W, T) is a minimal transformation group and φ: (Y, T) → (W, T) is a homomorphism, then identity × φ: (X × Y, S) → (X × W, S) is a homomorphism and has many of the same properties that φ has. For this reason, one may assume that the phase group is abelian (or S) without loss of generality for many relativized problems in topological dynamics.

  11. Recruitment and Retention of Older Adults in Aging Research

    PubMed Central

    Mody, Lona; Miller, Douglas K.; McGloin, Joanne M.; Freeman, Marcie; Marcantonio, Edward R.; Magaziner, Jay; Studenski, Stephanie

    2009-01-01

    Older adults continue to be underrepresented in clinical research despite their burgeoning population in the United States and worldwide. Physicians often propose treatment plans for older adults based on data from studies involving primarily younger, more-functional, healthier participants. Major barriers to recruitment of older adults in aging research relate to their substantial health problems, social and cultural barriers, and potentially impaired capacity to provide informed consent. Institutionalized older adults offer another layer of complexity that requires cooperation from the institutions to participate in research activities. This paper provides study recruitment and retention techniques and strategies to address concerns and overcome barriers to older adult participation in clinical research. Key approaches include early in-depth planning; minimizing exclusion criteria; securing cooperation from all interested parties; using advisory boards, timely screening, identification, and approach of eligible patients; carefully reviewing the benefit:risk ratio to be sure it is appropriate; and employing strategies to ensure successful retention across the continuum of care. Targeting specific strategies to the condition, site, and population of interest and anticipating potential problems and promptly employing predeveloped contingency plans are keys to effective recruitment and retention. PMID:19093934

  12. On the Duffin-Kemmer-Petiau equation with linear potential in the presence of a minimal length

    NASA Astrophysics Data System (ADS)

    Chargui, Yassine

    2018-04-01

    We point out an erroneous handling in the literature regarding solutions of the (1 + 1)-dimensional Duffin-Kemmer-Petiau equation with linear potentials in the context of quantum mechanics with minimal length. Furthermore, using Brau's approach, we present a perturbative treatment of the effect of the minimal length on bound-state solutions when a Lorentz-scalar linear potential is applied.

  13. A novel discrete PSO algorithm for solving job shop scheduling problem to minimize makespan

    NASA Astrophysics Data System (ADS)

    Rameshkumar, K.; Rajendran, C.

    2018-02-01

    In this work, a discrete version of the PSO algorithm is proposed to minimize the makespan of a job shop. A novel schedule builder is utilized to generate active schedules. The discrete PSO is tested using well-known benchmark problems available in the literature. The solutions produced by the proposed algorithm are compared with the best known solutions published in the literature, as well as with a hybrid particle swarm algorithm and a variable neighborhood search PSO algorithm. The solution construction methodology adopted in this study is found to be effective in producing good-quality solutions for the various benchmark job-shop scheduling problems.

  14. Particle swarm optimization - Genetic algorithm (PSOGA) on linear transportation problem

    NASA Astrophysics Data System (ADS)

    Rahmalia, Dinita

    2017-08-01

    The Linear Transportation Problem (LTP) is a case of constrained optimization in which we want to minimize cost subject to the balance between total supply and total demand. Classical construction methods such as the northwest corner, Vogel, Russell, and minimal cost rules have been applied to approach the optimal solution. In this paper, we use a heuristic, Particle Swarm Optimization (PSO), to solve the linear transportation problem for any number of decision variables. In addition, we combine the mutation operator of the Genetic Algorithm (GA) with PSO to improve the solution. This method is called Particle Swarm Optimization - Genetic Algorithm (PSOGA). The simulations show that PSOGA can improve the solutions produced by PSO alone.
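
    Of the construction rules named above, the northwest corner rule is the simplest to sketch; it builds the basic feasible shipment plan that a metaheuristic such as PSOGA would then try to improve. The supply and demand figures below are hypothetical and balanced.

        import numpy as np

        def northwest_corner(supply, demand):
            """Basic feasible solution for the balanced transportation problem."""
            s, d = list(supply), list(demand)
            x = np.zeros((len(s), len(d)))
            i = j = 0
            while i < len(s) and j < len(d):
                q = min(s[i], d[j])      # ship as much as possible at cell (i, j)
                x[i, j] = q
                s[i] -= q
                d[j] -= q
                if s[i] == 0:
                    i += 1               # row exhausted: move down
                else:
                    j += 1               # column satisfied: move right
            return x

        plan = northwest_corner([20, 30, 25], [10, 25, 40])   # both sum to 75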

  15. Spacecraft inertia estimation via constrained least squares

    NASA Technical Reports Server (NTRS)

    Keim, Jason A.; Acikmese, Behcet A.; Shields, Joel F.

    2006-01-01

    This paper presents a new formulation for spacecraft inertia estimation from test data. Specifically, the inertia estimation problem is formulated as a constrained least squares minimization problem with explicit bounds on the inertia matrix incorporated as LMIs (linear matrix inequalities). The resulting minimization problem is a semidefinite program that can be solved efficiently, with guaranteed convergence to the global optimum, by readily available algorithms. This method is applied to data collected from a robotic testbed consisting of a freely rotating body. The results show that the constrained least squares approach produces more accurate estimates of the inertia matrix than standard unconstrained least squares estimation methods.
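
    A minimal sketch of the constrained formulation using CVXPY, assuming the regressor matrix mapping the six unique inertia entries to measured torques has already been built from attitude data; the synthetic data and the simple positive-definiteness bound are illustrative, and the paper's LMI bounds are richer.

```python
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(1)

# Assumed synthetic setup: torque measurements are linear in the six unique
# inertia entries, tau = Phi @ theta, theta = [Jxx, Jyy, Jzz, Jxy, Jxz, Jyz].
n_meas = 200
Phi = rng.normal(size=(n_meas, 6))
theta_true = np.array([10., 12., 8., 0.5, -0.3, 0.2])
tau = Phi @ theta_true + 0.05 * rng.normal(size=n_meas)

J = cp.Variable((3, 3), symmetric=True)
theta = cp.hstack([J[0, 0], J[1, 1], J[2, 2], J[0, 1], J[0, 2], J[1, 2]])

# Constrained least squares with an LMI-type bound J >= eps*I (PSD sense)
eps = 1e-3
prob = cp.Problem(cp.Minimize(cp.sum_squares(Phi @ theta - tau)),
                  [J >> eps * np.eye(3)])
prob.solve()
print("estimated inertia matrix:\n", J.value)
```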

  16. Randomly Sampled-Data Control Systems. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Han, Kuoruey

    1990-01-01

    The purpose is to solve the Linear Quadratic Regulator (LQR) problem with random time sampling. Such a sampling scheme may arise from imperfect instrumentation, as in the case of sampling jitter; it can also model stochastic information exchange among decentralized controllers, among other scenarios. A practical suboptimal controller is proposed with the desirable property of mean square stability. The proposed controller is suboptimal in the sense that the control structure is limited to be linear; because of the i.i.d. sampling assumption, this restriction does not seem unreasonable. Once the control structure is fixed, the stochastic discrete optimal control problem is transformed into an equivalent deterministic optimal control problem with dynamics described by a matrix difference equation. The N-horizon control problem is solved using the Lagrange multiplier method. The infinite horizon control problem is formulated as a classical minimization problem. Assuming existence of a solution to the minimization problem, the total system is shown to be mean square stable under certain observability conditions. Computer simulations are performed to illustrate these conditions.
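
    For orientation, a minimal sketch of the standard finite-horizon discrete-time LQR solved by backward Riccati recursion, the deterministic baseline on which sampled-data formulations build; the matrices here are illustrative, and the thesis treats random sampling on top of this.

```python
import numpy as np

def finite_horizon_lqr(A, B, Q, R, Qf, N):
    """Backward Riccati recursion for x_{k+1} = A x_k + B u_k with
    cost x_N' Qf x_N + sum_k (x_k' Q x_k + u_k' R u_k)."""
    P = Qf
    gains = []
    for _ in range(N):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
        gains.append(K)
    return gains[::-1]  # K_0, ..., K_{N-1}; control law u_k = -K_k x_k

# Illustrative double-integrator example
dt = 0.1
A = np.array([[1., dt], [0., 1.]])
B = np.array([[0.5 * dt**2], [dt]])
K = finite_horizon_lqr(A, B, np.eye(2), np.array([[0.1]]), np.eye(2), N=50)
print("first-step gain K_0:", K[0])
```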

  17. Effective Iterated Greedy Algorithm for Flow-Shop Scheduling Problems with Time lags

    NASA Astrophysics Data System (ADS)

    ZHAO, Ning; YE, Song; LI, Kaidian; CHEN, Siyu

    2017-05-01

    The flow shop scheduling problem with time lags is a practical scheduling problem that has attracted many studies. The permutation problem (PFSP with time lags) has received concentrated attention, but the non-permutation problem (non-PFSP with time lags) seems to have been neglected. With the aim of minimizing the makespan while satisfying time lag constraints, efficient algorithms for the PFSP and non-PFSP problems are proposed: an iterated greedy algorithm for the permutation case (IGTLP) and an iterated greedy algorithm for the non-permutation case (IGTLNP). The proposed algorithms are verified on well-known simple and complex instances of permutation and non-permutation problems with various time lag ranges. The permutation results indicate that the proposed IGTLP can reach a near-optimal solution within roughly 11% of the computational time of a traditional GA approach. The non-permutation results indicate that the proposed IG can reach nearly the same solution in less than 1% of the computational time of the traditional GA approach. The proposed research treats the PFSP and non-PFSP together with minimal and maximal time lag considerations, which provides an interesting viewpoint for industrial implementation.
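
    A minimal sketch of the iterated greedy skeleton (destruction plus greedy best-position reinsertion) on a plain permutation flow shop, ignoring the time lag constraints that the paper's IGTLP/IGTLNP variants enforce; processing times and parameters are illustrative.

```python
import random

def makespan(perm, p):
    """p[j][m] = processing time of job j on machine m (permutation flow shop)."""
    n_mach = len(p[0])
    C = [0.0] * n_mach
    for j in perm:
        C[0] += p[j][0]
        for m in range(1, n_mach):
            C[m] = max(C[m], C[m - 1]) + p[j][m]
    return C[-1]

def iterated_greedy(p, d=2, iters=2000, seed=0):
    random.seed(seed)
    n = len(p)
    best = list(range(n))
    best_val = makespan(best, p)
    cur, cur_val = best[:], best_val
    for _ in range(iters):
        # Destruction: remove d random jobs
        removed = random.sample(cur, d)
        partial = [j for j in cur if j not in removed]
        # Construction: reinsert each removed job at its best position
        for j in removed:
            cands = [partial[:k] + [j] + partial[k:]
                     for k in range(len(partial) + 1)]
            partial = min(cands, key=lambda s: makespan(s, p))
        val = makespan(partial, p)
        if val <= cur_val:  # simple acceptance criterion
            cur, cur_val = partial, val
            if val < best_val:
                best, best_val = partial[:], val
    return best, best_val

p = [[3, 4, 6], [5, 1, 2], [2, 7, 3], [6, 2, 5], [4, 4, 1]]  # 5 jobs x 3 machines
print(iterated_greedy(p))
```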

  18. An optimal control strategies using vaccination and fogging in dengue fever transmission model

    NASA Astrophysics Data System (ADS)

    Fitria, Irma; Winarni, Pancahayani, Sigit; Subchan

    2017-08-01

    This paper discusses a model and an optimal control problem of dengue fever transmission. The model comprises human and vector (mosquito) population classes. The human population has three subclasses: susceptible, infected, and resistant. The vector population is divided into wiggler, susceptible, and infected vector classes. Thus, the model consists of six dynamic equations. To minimize the number of dengue fever cases, we design two control variables in the model: fogging and vaccination. The objective of this optimal control problem is to minimize the number of infected humans, the number of vectors, and the cost of the control efforts. By applying fogging optimally, the vector population can be minimized. We consider vaccination as a control variable because it is one of the efforts being developed to reduce the spread of dengue fever. We use the Pontryagin minimum principle to solve the optimal control problem. Furthermore, numerical simulation results are given to show the effect of the optimal control strategies in minimizing the dengue fever epidemic.
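
    A minimal sketch of the forward-backward sweep commonly used to solve Pontryagin-type problems numerically, shown on a reduced SIR-style model with a single transmission-reduction control; the model, parameters, and cost weights below are illustrative stand-ins, not the paper's six-compartment system.

```python
import numpy as np

# Minimize J = integral of (I + c*u^2) dt for S' = -(1-u)*beta*S*I,
# I' = (1-u)*beta*S*I - gamma*I, with 0 <= u <= 0.9 (illustrative bounds).
beta, gamma, c = 0.5, 0.2, 1.0
T, n = 60.0, 6000
dt = T / n
S = np.empty(n + 1); I = np.empty(n + 1)
lS = np.zeros(n + 1); lI = np.zeros(n + 1)
u = np.zeros(n + 1)

for _ in range(50):
    # Forward pass: state equations under the current control
    S[0], I[0] = 0.99, 0.01
    for k in range(n):
        f = (1 - u[k]) * beta * S[k] * I[k]
        S[k + 1] = S[k] + dt * (-f)
        I[k + 1] = I[k] + dt * (f - gamma * I[k])
    # Backward pass: costates, transversality lS(T) = lI(T) = 0
    lS[n] = lI[n] = 0.0
    for k in range(n, 0, -1):
        bI = (1 - u[k]) * beta * I[k]
        bS = (1 - u[k]) * beta * S[k]
        lS[k - 1] = lS[k] - dt * ((lS[k] - lI[k]) * bI)
        lI[k - 1] = lI[k] - dt * (-1 + (lS[k] - lI[k]) * bS + gamma * lI[k])
    # Control update from dH/du = 0, with relaxation and bounds
    u_new = np.clip((lI - lS) * beta * S * I / (2 * c), 0.0, 0.9)
    u = 0.5 * u + 0.5 * u_new

print("peak infected fraction under control:", I.max())
```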

  19. Design and optimal control of multi-spacecraft interferometric imaging systems

    NASA Astrophysics Data System (ADS)

    Chakravorty, Suman

    The objective of the proposed NASA Origins mission, Planet Imager, is the high-resolution imaging of exo-solar planets and similar high-resolution astronomical imaging applications. The imaging is to be accomplished through the design of multi-spacecraft interferometric imaging systems (MSIIS). In this dissertation, we study the design of MSIIS. Assuming that the ultimate goal of imaging is the correct classification of the formed images, we formulate the design problem as minimization of some resource utilization of the system subject to the constraint that the probability of misclassification of any given image is below a pre-specified level. We model the process of image formation in an MSIIS and show that the modulation transfer function of the synthesized optical instrument, and the noise corrupting it, depend on the trajectories of the constituent spacecraft. Assuming that the final goal of imaging is the correct classification of the formed image based on a given feature (a real-valued function of the image variable) and a threshold on the feature, we find conditions on the noise corrupting the measurements such that the probability of misclassification is below some pre-specified level. These conditions translate into constraints on the trajectories of the constituent spacecraft. Thus, the design problem reduces to minimizing some resource utilization of the system while satisfying the constraints placed on the system by the imaging requirements. We study the problem of designing minimum-time maneuvers for MSIIS. We transform the time minimization problem into a "painting problem", which involves painting a large disk with smaller paintbrushes (coverage disks). We show that spirals form the dominant set for the solution to the painting problem. We frame the time minimization in the subspace of spirals and obtain a bilinear program, the double pantograph problem, in the design parameters of the spiral: the spiraling rate and the angular rate. We show that the solution of this problem is given by the solution to two associated linear programs. We illustrate our results through a simulation in which the banded appearance of a fictitious exo-solar planet at a distance of 8 parsecs is detected.

  20. What Does (and Doesn't) Make Analogical Problem Solving Easy? A Complexity-Theoretic Perspective

    ERIC Educational Resources Information Center

    Wareham, Todd; Evans, Patricia; van Rooij, Iris

    2011-01-01

    Solving new problems can be made easier if one can build on experiences with other problems one has already successfully solved. The ability to exploit earlier problem-solving experiences in solving new problems seems to require several cognitive sub-abilities. Minimally, one needs to be able to retrieve relevant knowledge of earlier solved…

  1. Excitation spectrum of a mixture of two Bose gases confined in a ring potential with interaction asymmetry

    NASA Astrophysics Data System (ADS)

    Roussou, A.; Smyrnakis, J.; Magiropoulos, M.; Efremidis, N. K.; Kavoulakis, G. M.; Sandin, P.; Ögren, M.; Gulliksson, M.

    2018-04-01

    We study the rotational properties of a two-component Bose–Einstein condensed gas of distinguishable atoms confined in a ring potential, using both the mean-field approximation and the method of diagonalization of the many-body Hamiltonian. We demonstrate that the angular momentum may be given to the system either via single-particle or ‘collective’ excitation. Furthermore, despite the complexity of this problem, under rather typical conditions the dispersion relation takes a remarkably simple and regular form. Finally, we argue that under certain conditions the dispersion relation is determined via collective excitation. The corresponding many-body state, which minimizes the kinetic energy in addition to the interaction energy, is dictated by elementary number theory.

  2. Sources of method bias in social science research and recommendations on how to control it.

    PubMed

    Podsakoff, Philip M; MacKenzie, Scott B; Podsakoff, Nathan P

    2012-01-01

    Despite the concern that has been expressed about potential method biases, and the pervasiveness of research settings with the potential to produce them, there is disagreement about whether they really are a problem for researchers in the behavioral sciences. Therefore, the purpose of this review is to explore the current state of knowledge about method biases. First, we explore the meaning of the terms "method" and "method bias" and then we examine whether method biases influence all measures equally. Next, we review the evidence of the effects that method biases have on individual measures and on the covariation between different constructs. Following this, we evaluate the procedural and statistical remedies that have been used to control method biases and provide recommendations for minimizing method bias.

  3. Fundamentals of pain management in wound care.

    PubMed

    Coulling, Sarah

    Under-treated pain can result in a number of potentially serious sequelae (Australian and New Zealand College of Anaesthetists, 2006), including delayed mobilization and recovery, cardiac complications, thromboses, pulmonary complications, delayed healing, psychosocial problems and chronic pain syndromes. This article considers pain management in the context of painful wounds. An international comparative survey on wound pain (European Wound Management Association, 2002) found that practitioners in the wound care community tend to focus on healing processes rather than the patient's total pain experience involving an accurate pain assessment and selection of an appropriate pain management strategy. Procedural pain with dressing removal and cleansing caused the greatest concerns. An overview of simple, evidence-based drug and non-drug techniques is offered as potential strategies to help minimize the experience of pain.

  4. An update on applications of nanostructured drug delivery systems in cancer therapy: a review.

    PubMed

    Aberoumandi, Seyed Mohsen; Mohammadhosseini, Majid; Abasi, Elham; Saghati, Sepideh; Nikzamir, Nasrin; Akbarzadeh, Abolfazl; Panahi, Yunes; Davaran, Soodabeh

    2017-09-01

    Cancer is a major public health problem characterized by malignant tumors and uncontrolled cell growth, with the potential to invade or spread to other parts of the body. Recently, remarkable efforts have been devoted to developing nanotechnology to improve the delivery of anticancer drugs to tumor tissue while minimizing their distribution and toxicity in healthy tissue. Nanotechnology has been extensively used in the development of new strategies for drug delivery and cancer therapy. Compared to conventional drug delivery systems, nano-based drug delivery methods have greater potential in many areas, such as multiple targeting functionalization, in vivo imaging, extended circulation time, systemic controlled release, and combined drug delivery. Nanofibers are used for various medical applications, including drug delivery systems.

  5. Principles for problem aggregation and assignment in medium scale multiprocessors

    NASA Technical Reports Server (NTRS)

    Nicol, David M.; Saltz, Joel H.

    1987-01-01

    One of the most important issues in parallel processing is the mapping of workload to processors. This paper considers a large class of problems having a high degree of potential fine-grained parallelism, and execution requirements that are either not predictable, or are too costly to predict. The main issues in mapping such a problem onto medium scale multiprocessors are those of aggregation and assignment. We study a method of parameterized aggregation that makes few assumptions about the workload. The mapping of aggregate units of work onto processors is uniform, and exploits locality of workload intensity to balance the unknown workload. In general, a finer aggregate granularity leads to a better balance at the price of increased communication/synchronization costs; the aggregation parameters can be adjusted to find a reasonable granularity. The effectiveness of this scheme is demonstrated on three model problems: an adaptive one-dimensional fluid dynamics problem with message passing, a sparse triangular linear system solver on both a shared-memory and a message-passing machine, and a two-dimensional time-driven battlefield simulation employing message passing. Using the model problems, the tradeoffs between balanced workload and communication/synchronization costs are studied. Finally, an analytical model is used to explain why the method balances workload and minimizes the variance in system behavior.

  6. MULTIOBJECTIVE PARALLEL GENETIC ALGORITHM FOR WASTE MINIMIZATION

    EPA Science Inventory

    In this research we have developed an efficient multiobjective parallel genetic algorithm (MOPGA) for waste minimization problems. This MOPGA integrates PGAPack (Levine, 1996) and NSGA-II (Deb, 2000) with novel modifications. PGAPack is a master-slave parallel implementation of a...

  7. Finding Minimal Addition Chains with a Particle Swarm Optimization Algorithm

    NASA Astrophysics Data System (ADS)

    León-Javier, Alejandro; Cruz-Cortés, Nareli; Moreno-Armendáriz, Marco A.; Orantes-Jiménez, Sandra

    Addition chains of minimal length are the basic building block for the optimal computation of finite field exponentiations. They have very important applications in the areas of error-correcting codes and cryptography. However, obtaining the shortest addition chain for a given exponent is an NP-hard problem. In this work we propose the adaptation of a Particle Swarm Optimization algorithm to deal with this problem. Our proposal is tested on several exponents whose addition chains are considered hard to find. We obtained very promising results.
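
    For context, a short sketch showing what an addition chain is and the classical binary (square-and-multiply) construction that heuristics such as the PSO adaptation try to beat; the construction below is the standard baseline, not the paper's algorithm.

```python
def binary_addition_chain(e):
    """Addition chain for exponent e from the binary method:
    every element is the sum of two earlier elements (doubling or +1)."""
    chain = [1]
    for bit in bin(e)[3:]:  # skip the '0b1' prefix; process remaining bits
        chain.append(chain[-1] * 2)      # squaring step
        if bit == '1':
            chain.append(chain[-1] + 1)  # multiplication step
    return chain

def is_addition_chain(chain, e):
    """Verify: starts at 1, ends at e, each term is a sum of two earlier terms."""
    if chain[0] != 1 or chain[-1] != e:
        return False
    return all(any(a + b == c for a in chain[:i] for b in chain[:i])
               for i, c in enumerate(chain) if i > 0)

c = binary_addition_chain(79)
print(c, "length:", len(c) - 1, "valid:", is_addition_chain(c, 79))
# Optimal chains for hard exponents are shorter than the binary method's.
```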

  8. Selecting a restoration technique to minimize OCR error.

    PubMed

    Cannon, M; Fugate, M; Hush, D R; Scovel, C

    2003-01-01

    This paper introduces a learning problem related to the task of converting printed documents to ASCII text files. The goal of the learning procedure is to produce a function that maps documents to restoration techniques in such a way that on average the restored documents have minimum optical character recognition error. We derive a general form for the optimal function and use it to motivate the development of a nonparametric method based on nearest neighbors. We also develop a direct method of solution based on empirical error minimization for which we prove a finite sample bound on estimation error that is independent of distribution. We show that this empirical error minimization problem is an extension of the empirical optimization problem for traditional M-class classification with a general loss function and prove computational hardness for this problem. We then derive a simple iterative algorithm called generalized multiclass ratchet (GMR) and prove that it produces an optimal function asymptotically (with probability 1). To obtain the GMR algorithm we introduce a new data map that extends Kesler's construction for the multiclass problem and then apply an algorithm called Ratchet to this mapped data, where Ratchet is a modification of the Pocket algorithm. Finally, we apply these methods to a collection of documents and report on the experimental results.

  9. Graphical approach for multiple values logic minimization

    NASA Astrophysics Data System (ADS)

    Awwal, Abdul Ahad S.; Iftekharuddin, Khan M.

    1999-03-01

    Multiple-valued logic (MVL) is sought for designing high-complexity, highly compact, parallel digital circuits. However, the practical realization of an MVL-based system depends on optimization of cost, which directly affects the optical setup. We propose a minimization technique for MVL logic optimization based on graphical visualization, such as a Karnaugh map. The proposed method is utilized to solve signed-digit binary and trinary logic minimization problems. The usefulness of the minimization technique is demonstrated for the optical implementation of MVL circuits.

  10. Time Investment in Drug Supply Problems by Flemish Community Pharmacies

    PubMed Central

    De Weerdt, Elfi; Simoens, Steven; Casteels, Minne; Huys, Isabelle

    2017-01-01

    Introduction: Drug supply problems are a known problem for pharmacies. Community and hospital pharmacies do everything they can to minimize impact on patients. This study aims to quantify the time spent by Flemish community pharmacies on drug supply problems. Materials and Methods: During 18 weeks, employees of 25 community pharmacies filled in a template with the total time spent on drug supply problems. The template stated all the steps community pharmacies could undertake to manage drug supply problems. Results: Considering the median over the study period, the median time spent on drug supply problems was 25 min per week, with a minimum of 14 min per week and a maximum of 38 min per week. After calculating the median of each pharmacy, large differences were observed between pharmacies: about 25% spent less than 15 min per week and one-fifth spent more than 1 h per week. The steps on which community pharmacists spent most time are: (i) “check missing products from orders,” (ii) “contact wholesaler/manufacturers regarding potential drug shortages,” and (iii) “communicating to patients.” These three steps account for about 50% of the total time spent on drug supply problems during the study period. Conclusion: Community pharmacies spend about half an hour per week on drug supply problems. Although 25 min per week does not seem that much, the time spent is not delineated and community pharmacists are constantly confronted with drug supply problems. PMID:28878679

  11. Flow-injection analysis with electrochemical detection of reduced nicotinamide adenine dinucleotide using 2,6-dichloroindophenol as a redox coupling agent.

    PubMed

    Tang, H T; Hajizadeh, K; Halsall, H B; Heineman, W R

    1991-01-01

    The determination of reduced nicotinamide adenine dinucleotide (NADH) by electrochemical oxidation requires a more positive potential than is predicted by the formal reduction potential for the NAD+/NADH couple. This problem is alleviated by use of 2,6-dichloroindophenol (DCIP) as a redox coupling agent for NADH. The electrochemical characteristics of DCIP at the glassy carbon electrode are examined by cyclic voltammetry and hydrodynamic voltammetry. NADH is determined by reaction with DCIP to form NAD+ and DCIPH2. DCIPH2 is then quantitated by flow-injection analysis with electrochemical detection by oxidation at a detector potential of +0.25 V at pH 7. NADH is determined over a linear range of 0.5 to 200 microM and with a detection limit of 0.38 microM. The lower detection potential for DCIPH2 compared to NADH helps to minimize interference from oxidizable components in serum samples.

  12. Collective intelligence for control of distributed dynamical systems

    NASA Astrophysics Data System (ADS)

    Wolpert, D. H.; Wheeler, K. R.; Tumer, K.

    2000-03-01

    We consider the El Farol bar problem, also known as the minority game (W. B. Arthur, The American Economic Review, 84 (1994) 406; D. Challet and Y. C. Zhang, Physica A, 256 (1998) 514). We view it as an instance of the general problem of how to configure the nodal elements of a distributed dynamical system so that they do not "work at cross purposes", in that their collective dynamics avoids frustration and thereby achieves a provided global goal. We summarize a mathematical theory for such configuration applicable when (as in the bar problem) the global goal can be expressed as minimizing a global energy function and the nodes can be expressed as minimizers of local free energy functions. We show that a system designed with that theory performs nearly optimally for the bar problem.
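
    A minimal sketch of the El Farol/minority-game setup the entry builds on, with inductive agents choosing among fixed random strategies over a history window; the parameters and scoring scheme are illustrative heuristics, which the paper's framework replaces with local free-energy minimizers.

```python
import numpy as np

rng = np.random.default_rng(0)
N, S, m, T = 101, 2, 3, 2000  # agents (odd), strategies each, memory, rounds

# Each strategy maps one of 2**m recent-outcome histories to an action {0, 1}
strategies = rng.integers(0, 2, size=(N, S, 2 ** m))
scores = np.zeros((N, S))
history = 0  # last m outcomes packed into an integer
attendance = []

for t in range(T):
    best = scores.argmax(axis=1)                       # each agent's best strategy
    actions = strategies[np.arange(N), best, history]  # chosen actions
    n_ones = actions.sum()
    minority = 0 if n_ones > N / 2 else 1              # minority side wins
    # Reward every strategy that would have chosen the minority action
    scores += (strategies[:, :, history] == minority).astype(float)
    history = ((history << 1) | minority) & (2 ** m - 1)
    attendance.append(n_ones)

# Volatility sigma^2/N measures how well agents coordinate (lower is better)
print("sigma^2 / N =", np.var(attendance) / N)
```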

  13. Open shop scheduling problem to minimize total weighted completion time

    NASA Astrophysics Data System (ADS)

    Bai, Danyu; Zhang, Zhihai; Zhang, Qiang; Tang, Mengqian

    2017-01-01

    A given number of jobs in an open shop scheduling environment must each be processed for given amounts of time on each of a given set of machines in an arbitrary sequence. This study aims to achieve a schedule that minimizes total weighted completion time. Owing to the strong NP-hardness of the problem, the weighted shortest processing time block (WSPTB) heuristic is presented to obtain approximate solutions for large-scale problems. Performance analysis proves the asymptotic optimality of the WSPTB heuristic in the sense of probability limits. The largest weight block rule is provided to seek optimal schedules in polynomial time for a special case. A hybrid discrete differential evolution algorithm is designed to obtain high-quality solutions for moderate-scale problems. Simulation experiments demonstrate the effectiveness of the proposed algorithms.
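
    For intuition, the weighted shortest processing time (WSPT) idea underlying the WSPTB heuristic is shown below in its classic single-machine form, where sorting by p/w is optimal for total weighted completion time; the block extension to open shops developed in the paper is more involved.

```python
def wspt_order(jobs):
    """jobs: list of (processing_time, weight); Smith's rule sorts by p/w."""
    return sorted(jobs, key=lambda j: j[0] / j[1])

def total_weighted_completion(seq):
    t, total = 0.0, 0.0
    for p, w in seq:
        t += p
        total += w * t
    return total

jobs = [(3., 1.), (1., 4.), (2., 2.), (5., 3.)]
print(total_weighted_completion(wspt_order(jobs)))  # optimal on one machine
```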

  14. ɛ-subgradient algorithms for bilevel convex optimization

    NASA Astrophysics Data System (ADS)

    Helou, Elias S.; Simões, Lucas E. A.

    2017-05-01

    This paper introduces and studies the convergence properties of a new class of explicit ɛ-subgradient methods for the task of minimizing a convex function over a set of minimizers of another convex minimization problem. The general algorithm specializes to some important cases, such as first-order methods applied to a varying objective function, which have computationally cheap iterations. We present numerical experimentation concerning certain applications where the theoretical framework encompasses efficient algorithmic techniques, enabling the use of the resulting methods to solve very large practical problems arising in tomographic image reconstruction. ES Helou was supported by FAPESP grants 2013/07375-0 and 2013/16508-3 and CNPq grant 311476/2014-7. LEA Simões was supported by FAPESP grants 2011/02219-4 and 2013/14615-7.

  15. Optimal UAS Assignments and Trajectories for Persistent Surveillance and Data Collection from a Wireless Sensor Network

    DTIC Science & Technology

    2015-12-24

    minimizing a weighted sum of the time and control effort needed to collect sensor data. This problem formulation is a modified traveling salesman problem. [Only table-of-contents fragments of the report survive here: The Shortest Path Problem; Traveling Salesman Problem; Initial Guess by Traveling Salesman Problem Solution.]

  16. Thermal Stability of Al2O3/Silicone Composites as High-Temperature Encapsulants

    NASA Astrophysics Data System (ADS)

    Yao, Yiying

    Underwater gliders are robust and long endurance ocean sampling platforms that are increasingly being deployed in coastal regions. This new environment is characterized by shallow waters and significant currents that can challenge the mobility of these efficient (but traditionally slow moving) vehicles. This dissertation aims to improve the performance of shallow water underwater gliders through path planning. The path planning problem is formulated for a dynamic particle (or "kinematic car") model. The objective is to identify the path which satisfies specified boundary conditions and minimizes a particular cost. Several cost functions are considered. The problem is addressed using optimal control theory. The length scales of interest for path planning are within a few turn radii. First, an approach is developed for planning minimum-time paths, for a fixed speed glider, that are sub-optimal but are guaranteed to be feasible in the presence of unknown time-varying currents. Next the minimum-time problem for a glider with speed controls, that may vary between the stall speed and the maximum speed, is solved. Last, optimal paths that minimize change in depth (equivalently, maximize range) are investigated. Recognizing that path planning alone cannot overcome all of the challenges associated with significant currents and shallow waters, the design of a novel underwater glider with improved capabilities is explored. A glider with a pneumatic buoyancy engine (allowing large, rapid buoyancy changes) and a cylindrical moving mass mechanism (generating large pitch and roll moments) is designed, manufactured, and tested to demonstrate potential improvements in speed and maneuverability.

  17. Mesh refinement in finite element analysis by minimization of the stiffness matrix trace

    NASA Technical Reports Server (NTRS)

    Kittur, Madan G.; Huston, Ronald L.

    1989-01-01

    Most finite element packages provide means to generate meshes automatically. However, the user is usually confronted with the problem of not knowing whether the generated mesh is appropriate for the problem at hand. Since the accuracy of the finite element results is mesh dependent, mesh selection forms a very important step in the analysis. Indeed, in accurate analyses, meshes need to be refined or rezoned until the solution converges to a value such that the error is below a predetermined tolerance. A posteriori methods use error indicators, developed from interpolation and approximation theory, for mesh refinement. Others use criteria such as strain energy density variation and stress contours to obtain near-optimal meshes. Although these methods are adaptive, they are expensive. Alternatively, the a priori methods available until now use geometrical parameters, for example element aspect ratio, and are therefore not adaptive by nature. Here an adaptive a priori method is developed. The criterion is that minimizing the trace of the stiffness matrix with respect to the nodal coordinates leads to a minimization of the potential energy and, as a consequence, provides a good starting mesh. In a few examples the method is shown to provide the optimal mesh. The method is also shown to be relatively simple and amenable to the development of computer algorithms. When the procedure is used in conjunction with a posteriori methods of grid refinement, it is shown that fewer refinement iterations and fewer degrees of freedom are required for convergence than when the procedure is not used. The mesh obtained is shown to have a uniform distribution of stiffness among the nodes and elements which, as a consequence, leads to uniform error distribution. Thus the mesh obtained meets the optimality criterion of uniform error distribution.
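
    A minimal sketch of the idea on a one-dimensional tapered bar, assuming linear two-node elements whose element stiffness matrix (EA/L)[[1,-1],[-1,1]] contributes 2EA/L to the trace; the interior node locations are chosen to minimize the assembled stiffness trace. The discretization and taper are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

E, L = 1.0, 1.0

def area(x):  # illustrative linear taper of the bar cross-section
    return 1.0 - 0.5 * x

def stiffness_trace(interior):
    """Trace of the assembled stiffness matrix for a 1D bar with linear
    elements; each element adds 2*E*A_mid/L_e to the global trace."""
    nodes = np.concatenate(([0.0], np.sort(interior), [L]))
    Le = np.diff(nodes)
    if np.any(Le <= 1e-6):
        return 1e12  # reject degenerate meshes
    A_mid = area(0.5 * (nodes[:-1] + nodes[1:]))
    return np.sum(2.0 * E * A_mid / Le)

x0 = np.linspace(0, L, 6)[1:-1]  # four interior nodes, uniform starting mesh
res = minimize(stiffness_trace, x0, method="Nelder-Mead")
print("optimized interior nodes:", np.sort(res.x))
```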

  18. Internet-based interventions for cancer-related distress: exploring the experiences of those whose needs are not met.

    PubMed

    Gorlick, Amanda; Bantum, Erin O'Carroll; Owen, Jason E

    2014-04-01

    Low levels of engagement in Internet-based interventions are common. Understanding users' experiences with these interventions is a key to improving efficacy. Although qualitative methods are well-suited for this purpose, few qualitative studies have been conducted in this area. In the present study, we assessed experiences with an Internet-based intervention among cancer survivors who made minimal use of the intervention. Semi-structured interviews were conducted with 25 cancer survivors who were minimally engaged (i.e., spent around 1 h total on website) with the online intervention, health-space.net. The intervention was a 12-week, facilitated support group with social and informational components. Interviews were analyzed using an interpretive descriptive design. Three broad categories, consisting of 18 specific themes, were identified from the interviews, which included connecting with similar others, individual expectations, and problems with the site (Κ = 0.88). The 'similar others' category reflected the significance of interacting with relatable survivors (i.e., same cancer type), the 'individual expectations' category reflected the significance of participants' expectations about using online interventions (i.e., personally relevant information), and the 'problems with the site' category reflected the significance of study procedures (i.e., website structure). The data indicate that minimally engaged participants have high variability regarding their needs and preferences for Internet-based interventions. Using qualitative methodologies to identify and incorporate these needs into the next generation of interventions has the potential to increase engagement and outcomes. The current study provides a foundation for future research to characterize survivors' needs and offer suggestions for better meeting these needs. Copyright © 2013 John Wiley & Sons, Ltd.

  19. Coping with coal quality impacts on power plant operation and maintenance

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hatt, R.

    1998-12-31

    The electric power industry is rapidly changing due to deregulation. The author was present one hot day in June of this year when a southeastern utility company was selling electricity for $5,000.00 per megawatt against an $85.00 cost. Typical power costs range from the mid-teens at night to about $30.00 on a normal day. The free marketplace will challenge the power industry in many ways. Fuel is the major cost in electric power. In a regulated industry the cost of fuel was passed on to the customers. Fuels were chosen to minimize problems such as handling, combustion, ash deposits and other operational and maintenance concerns. Tight specifications were used to eliminate or minimize coals that caused problems. These tight specifications raised the price of fuel by minimizing competition. As power stations become individual profit centers, plant management must take a more proactive role in fuel selection. Understanding how coal quality impacts plant performance and cost allows better fuel selection decisions. How well plants take advantage of this knowledge may determine whether they will be able to compete in a free marketplace. The coal industry itself can provide many insights on how to survive in this type of market. Coal mines today must remain competitive or be shut down. The consolidation of the coal industry indicates the trends that can occur in a competitive market. These trends have already started and will continue in the utility industry. This paper discusses several common situations concerning coal quality and potential solutions for the plant to consider. All these examples have mill maintenance and performance issues in common, which is indicative of how important pulverizers are to the successful operation of a power plant.

  20. Minimal models of compact symplectic semitoric manifolds

    NASA Astrophysics Data System (ADS)

    Kane, D. M.; Palmer, J.; Pelayo, Á.

    2018-02-01

    A symplectic semitoric manifold is a symplectic 4-manifold endowed with a Hamiltonian (S1 × R)-action satisfying certain conditions. The goal of this paper is to construct a new symplectic invariant of symplectic semitoric manifolds, the helix, and give applications. The helix is a symplectic analogue of the fan of a nonsingular complete toric variety in algebraic geometry that takes into account the effects of the monodromy near focus-focus singularities. We give two applications of the helix: first, we use it to give a classification of the minimal models of symplectic semitoric manifolds, where "minimal" is in the sense of not admitting any blowdowns. The second application is an extension to the compact case of a well-known result of Vũ Ngọc about the constraints posed on a symplectic semitoric manifold by the existence of focus-focus singularities. The helix makes it possible to translate a symplectic geometric problem into an algebraic problem, and the paper describes a method to solve this type of algebraic problem.

  1. Round-off errors in cutting plane algorithms based on the revised simplex procedure

    NASA Technical Reports Server (NTRS)

    Moore, J. E.

    1973-01-01

    This report statistically analyzes computational round-off errors associated with the cutting plane approach to solving linear integer programming problems. Cutting plane methods require that the inverse of a sequence of matrices be computed. The problem basically reduces to one of minimizing round-off errors in the sequence of inverses. Two procedures for minimizing this problem are presented, and their influence on error accumulation is statistically analyzed. One procedure employs a very small tolerance factor to round computed values to zero. The other procedure is a numerical analysis technique for reinverting or improving the approximate inverse of a matrix. The results indicated that round-off accumulation can be effectively minimized by employing a tolerance factor which reflects the number of significant digits carried for each calculation and by applying the reinversion procedure once to each computed inverse. If 18 significant digits plus an exponent are carried for each variable during computations, then a tolerance value of 0.1 × 10^-12 is reasonable.

  2. Free-energy minimization and the dark-room problem.

    PubMed

    Friston, Karl; Thornton, Christopher; Clark, Andy

    2012-01-01

    Recent years have seen the emergence of an important new fundamental theory of brain function. This theory brings information-theoretic, Bayesian, neuroscientific, and machine learning approaches into a single framework whose overarching principle is the minimization of surprise (or, equivalently, the maximization of expectation). The most comprehensive such treatment is the "free-energy minimization" formulation due to Karl Friston (see e.g., Friston and Stephan, 2007; Friston, 2010a,b - see also Fiorillo, 2010; Thornton, 2010). A recurrent puzzle raised by critics of these models is that biological systems do not seem to avoid surprises. We do not simply seek a dark, unchanging chamber, and stay there. This is the "Dark-Room Problem." Here, we describe the problem and further unpack the issues to which it speaks. Using the same format as the prolog of Eddington's Space, Time, and Gravitation (Eddington, 1920) we present our discussion as a conversation between: an information theorist (Thornton), a physicist (Friston), and a philosopher (Clark).

  3. On post-inflation validity of perturbation theory in Horndeski scalar-tensor models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Germani, Cristiano; Kudryashova, Nina; Watanabe, Yuki, E-mail: germani@icc.ub.edu, E-mail: nina.kudryashova@campus.lmu.de, E-mail: yuki.watanabe@nat.gunma-ct.ac.jp

    By using the Newtonian gauge, we re-confirm that, as in the minimal case, the re-scaled Mukhanov-Sasaki variable is conserved, leading to a constraint equation for the Newtonian potential. However, conversely to the minimal case, in Horndeski theories the super-horizon Newtonian potential can potentially grow to very large values after inflation exit. If that happens, inflationary predictability is lost during the oscillating period. When this does not happen, the perturbations generated during inflation can be standardly related to the CMB, if the theory chosen is minimal at low energies. As a concrete example, we analytically and numerically discuss the new Higgs inflationary case. There, the inflaton is the Higgs boson that is non-minimally kinetically coupled to gravity. During the high-energy part of the post-inflationary oscillations, the system is anisotropic and the Newtonian potential is largely amplified. Thanks to the smallness of today's amplitude of curvature perturbations, however, the system stays in the linear regime, so that inflationary predictions are not lost. At low energies, when the system relaxes to the minimal case, the anisotropies disappear and the Newtonian potential converges to a constant value. We show that the constant value to which the Newtonian potential converges is related to the frozen part of curvature perturbations during inflation, precisely as in the minimal case.

  4. Percutaneous Repair Technique for Acute Achilles Tendon Rupture with Assistance of Kirschner Wire.

    PubMed

    He, Ze-yang; Chai, Ming-xiang; Liu, Yue-ju; Zhang, Xiao-ran; Zhang, Tao; Song, Lian-xin; Ren, Zhi-xin; Wu, Xi-rui

    2015-11-01

    The aim of this study is to introduce a self-designed, minimally invasive technique for repairing an acute Achilles tendon rupture percutaneously. Compared with traditional open repair, the new technique offers the obvious advantages of minimized operation-related lesions, fewer wound complications, and a higher healing rate. However, a percutaneous technique without direct vision may be criticized for insufficient anastomosis of the Achilles tendon, and may also lead to lengthening of the Achilles tendon and a reduction in the strength of the gastrocnemius. To address these potential problems, we have improved our technique using a percutaneous Kirschner wire leverage process before suturing, which can effectively recover the length of the Achilles tendon and ensure the broken ends are in tight contact. With this improvement in technique, we have great confidence that it will become the treatment of choice for acute Achilles tendon ruptures. © 2015 Chinese Orthopaedic Association and Wiley Publishing Asia Pty Ltd.

  5. Microalgae biorefineries: The Brazilian scenario in perspective.

    PubMed

    Brasil, B S A F; Silva, F C P; Siqueira, F G

    2017-10-25

    Biorefineries have the potential to meet a significant part of the growing demand for energy, fuels, chemicals and materials worldwide. Indeed, the bio-based industry is expected to play a major role in energy security and climate change mitigation during the 21st century. Despite this, there are challenges related to resource consumption, processing optimization and waste minimization that still need to be overcome. In this context, microalgae appear as a promising non-edible feedstock with advantages over traditional land crops, such as high productivity, continuous harvesting throughout the year and minimal problems regarding land use. Importantly, both cultivation and microalgae processing can take place at the same site, which increases the possibilities for process integration and a reduction in logistic costs at biorefinery facilities. This review describes the actual scenario for integrating microalgae biorefineries into the biofuels and petrochemical industries in Brazil, while highlighting the major challenges and recent advances in microalgae large-scale production. Copyright © 2016 Elsevier B.V. All rights reserved.

  6. Artificial Intelligence Based Control Power Optimization on Tailless Aircraft. [ARMD Seedling Fund Phase I

    NASA Technical Reports Server (NTRS)

    Gern, Frank; Vicroy, Dan D.; Mulani, Sameer B.; Chhabra, Rupanshi; Kapania, Rakesh K.; Schetz, Joseph A.; Brown, Derrell; Princen, Norman H.

    2014-01-01

    Traditional methods of control allocation optimization have shown difficulties in exploiting the full potential of controlling large arrays of control devices on innovative air vehicles. Artificial neural networks are inspired by biological nervous systems, and neurocomputing has successfully been applied to a variety of complex optimization problems. This project investigates the potential of applying neurocomputing to the control allocation optimization problem of Hybrid Wing Body (HWB) aircraft concepts to minimize control power, hinge moments, and actuator forces, while keeping system weights within acceptable limits. The main objective of this project is to develop a proof-of-concept process suitable to demonstrate the potential of using neurocomputing for optimizing actuation power for aircraft featuring multiple independently actuated control surfaces. A Nastran aeroservoelastic finite element model is used to generate a learning database of hinge moment and actuation power characteristics for an array of flight conditions and control surface deflections. An artificial neural network incorporating a genetic algorithm then uses this training data to perform control allocation optimization for the investigated aircraft configuration. The phase I project showed that optimization results for the sum of required hinge moments are improved by more than 12% over the best Nastran solution by using the neural network optimization process.

  7. Multigrid one shot methods for optimal control problems: Infinite dimensional control

    NASA Technical Reports Server (NTRS)

    Arian, Eyal; Taasan, Shlomo

    1994-01-01

    The multigrid one shot method for optimal control problems, governed by elliptic systems, is introduced for the infinite dimensional control space. In this case, the control variable is a function whose discrete representation involves an increasing number of variables with grid refinement. The minimization algorithm uses Lagrange multipliers to calculate sensitivity gradients. A preconditioned gradient descent algorithm is accelerated by a set of coarse grids. It optimizes for different scales in the representation of the control variable on different discretization levels. An analysis which reduces the problem to the boundary is introduced. It is used to approximate the two-level asymptotic convergence rate, to determine the amplitude of the minimization steps, and to guide the choice of a high-pass filter to be used when necessary. The effectiveness of the method is demonstrated on a series of test problems. The new method enables the solution of optimal control problems at the cost of solving the corresponding analysis problems just a few times.

  8. Massively parallel GPU-accelerated minimization of classical density functional theory

    NASA Astrophysics Data System (ADS)

    Stopper, Daniel; Roth, Roland

    2017-08-01

    In this paper, we discuss the ability to numerically minimize the grand potential of hard disks in two-dimensional and of hard spheres in three-dimensional space within the framework of classical density functional and fundamental measure theory on modern graphics cards. Our main finding is that a massively parallel minimization leads to an enormous performance gain in comparison to standard sequential minimization schemes. Furthermore, the results indicate that in complex multi-dimensional situations, a heavy parallel minimization of the grand potential seems to be mandatory in order to reach a reasonable balance between accuracy and computational cost.

  9. Improving the Flexibility of Optimization-Based Decision Aiding Frameworks for Integrated Water Resource Management

    NASA Astrophysics Data System (ADS)

    Guillaume, J. H.; Kasprzyk, J. R.

    2013-12-01

    Deep uncertainty refers to situations in which stakeholders cannot agree on the full suite of risks for their system or their probabilities. Additionally, systems are often managed for multiple, conflicting objectives such as minimizing cost, maximizing environmental quality, and maximizing hydropower revenues. Many-objective analysis (MOA) uses a quantitative model combined with evolutionary optimization to provide a tradeoff set of potential solutions to a planning problem. However, MOA is often performed using a single, fixed problem conceptualization. Focus on development of a single formulation can introduce an "inertia" into the problem solution, such that issues outside the initial formulation are less likely to ever be addressed. This study uses the Iterative Closed Question Methodology (ICQM) to continuously reframe the optimization problem, providing iterative definition and reflection for stakeholders. By using a series of directed questions to look beyond a problem's existing modeling representation, ICQM seeks to provide a working environment within which it is easy to modify the motivating question, assumptions, and model identification in optimization problems. The new approach helps identify and reduce bottlenecks, introduced by properties of both the simulation model and the optimization approach, that reduce flexibility in the generation and evaluation of alternatives. It can therefore help introduce new perspectives on the resolution of conflicts between objectives. The Lower Rio Grande Valley portfolio planning problem is used as a case study.

  10. Combinatorial algorithms for design of DNA arrays.

    PubMed

    Hannenhalli, Sridhar; Hubell, Earl; Lipshutz, Robert; Pevzner, Pavel A

    2002-01-01

    Optimal design of DNA arrays requires the development of algorithms with two-fold goals: reducing the effects caused by unintended illumination (the border length minimization problem) and reducing the complexity of masks (the mask decomposition problem). We describe algorithms that reduce the number of rectangles in mask decomposition by 20-30% as compared to a standard array design under the assumption that the arrangement of oligonucleotides on the array is fixed. This algorithm produces a provably optimal solution for all studied real instances of array design. We also address the difficult problem of finding an arrangement which minimizes the border length, and come up with a new idea of threading that significantly reduces the border length as compared to standard designs.

  11. Distributed Optimization

    NASA Technical Reports Server (NTRS)

    Macready, William; Wolpert, David

    2005-01-01

    We demonstrate a new framework for analyzing and controlling distributed systems, by solving constrained optimization problems with an algorithm based on that framework. The framework is an information-theoretic extension of conventional full-rationality game theory to allow bounded rational agents. The associated optimization algorithm is a game in which agents control the variables of the optimization problem. They do this by jointly minimizing a Lagrangian of (the probability distribution of) their joint state. The updating of the Lagrange parameters in that Lagrangian is a form of automated annealing, one that focuses the multi-agent system on the optimal pure strategy. We present computer experiments for the k-SAT constraint satisfaction problem and for unconstrained minimization of NK functions.

  12. On the convergence of nonconvex minimization methods for image recovery.

    PubMed

    Xiao, Jin; Ng, Michael Kwok-Po; Yang, Yu-Fei

    2015-05-01

    Nonconvex nonsmooth regularization method has been shown to be effective for restoring images with neat edges. Fast alternating minimization schemes have also been proposed and developed to solve the nonconvex nonsmooth minimization problem. The main contribution of this paper is to show the convergence of these alternating minimization schemes, based on the Kurdyka-Łojasiewicz property. In particular, we show that the iterates generated by the alternating minimization scheme, converges to a critical point of this nonconvex nonsmooth objective function. We also extend the analysis to nonconvex nonsmooth regularization model with box constraints, and obtain similar convergence results of the related minimization algorithm. Numerical examples are given to illustrate our convergence analysis.

  13. What energy functions can be minimized via graph cuts?

    PubMed

    Kolmogorov, Vladimir; Zabih, Ramin

    2004-02-01

    In the last few years, several new algorithms based on graph cuts have been developed to solve energy minimization problems in computer vision. Each of these techniques constructs a graph such that the minimum cut on the graph also minimizes the energy. Yet, because these graph constructions are complex and highly specific to a particular energy function, graph cuts have seen limited application to date. In this paper, we give a characterization of the energy functions that can be minimized by graph cuts. Our results are restricted to functions of binary variables. However, our work generalizes many previous constructions and is easily applicable to vision problems that involve large numbers of labels, such as stereo, motion, image restoration, and scene reconstruction. We give a precise characterization of what energy functions can be minimized using graph cuts, among the energy functions that can be written as a sum of terms containing three or fewer binary variables. We also provide a general-purpose construction to minimize such an energy function. Finally, we give a necessary condition for any energy function of binary variables to be minimized by graph cuts. Researchers who are considering the use of graph cuts to optimize a particular energy function can use our results to determine if this is possible and then follow our construction to create the appropriate graph. A software implementation is freely available.
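
    A minimal sketch of the construction for the simplest family covered by such results: binary variables with unary costs plus nonnegative Potts-style pairwise terms, reduced to an s-t minimum cut. The example uses networkx and follows the standard construction in simplified form; the data are illustrative.

```python
import networkx as nx

# Energy: E(x) = sum_i theta_i(x_i) + sum_{i,j} w_ij * [x_i != x_j], x_i in {0,1}
unary = {"a": (1.0, 4.0), "b": (3.0, 2.0), "c": (4.0, 1.0)}  # (cost at 0, cost at 1)
pairwise = {("a", "b"): 2.0, ("b", "c"): 2.0}                # w_ij >= 0 (submodular)

G = nx.DiGraph()
for i, (c0, c1) in unary.items():
    G.add_edge("s", i, capacity=c1)  # cut when x_i = 1: pays theta_i(1)
    G.add_edge(i, "t", capacity=c0)  # cut when x_i = 0: pays theta_i(0)
for (i, j), w in pairwise.items():
    G.add_edge(i, j, capacity=w)     # one direction is cut when labels differ
    G.add_edge(j, i, capacity=w)

cut_value, (source_side, sink_side) = nx.minimum_cut(G, "s", "t")
labels = {i: (1 if i in sink_side else 0) for i in unary}
print("minimum energy:", cut_value, "labeling:", labels)
```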

  14. Improving monoclonal antibody selection and engineering using measurements of colloidal protein interactions

    PubMed Central

    Geng, Steven B.; Cheung, Jason K.; Narasimhan, Chakravarthy; Shameem, Mohammed; Tessier, Peter M.

    2014-01-01

    A limitation of using monoclonal antibodies as therapeutic molecules is their propensity to associate with themselves and/or with other molecules via non-affinity (colloidal) interactions. This can lead to a variety of problems ranging from low solubility and high viscosity to off-target binding and fast antibody clearance. Measuring such colloidal interactions is challenging given that they are weak and potentially involve diverse target molecules. Nevertheless, assessing these weak interactions – especially during early antibody discovery and lead candidate optimization – is critical to preventing problems that can arise later in the development process. Here we review advances in developing and implementing sensitive methods for measuring antibody colloidal interactions as well as using these measurements for guiding antibody selection and engineering. These systematic efforts to minimize non-affinity interactions are expected to yield more effective and stable monoclonal antibodies for diverse therapeutic applications. PMID:25209466

  15. Control theory based airfoil design using the Euler equations

    NASA Technical Reports Server (NTRS)

    Jameson, Antony; Reuther, James

    1994-01-01

    This paper describes the implementation of optimization techniques based on control theory for airfoil design. In our previous work it was shown that control theory could be employed to devise effective optimization procedures for two-dimensional profiles by using the potential flow equation with either a conformal mapping or a general coordinate system. The goal of our present work is to extend the development to treat the Euler equations in two-dimensions by procedures that can readily be generalized to treat complex shapes in three-dimensions. Therefore, we have developed methods which can address airfoil design through either an analytic mapping or an arbitrary grid perturbation method applied to a finite volume discretization of the Euler equations. Here the control law serves to provide computationally inexpensive gradient information to a standard numerical optimization method. Results are presented for both the inverse problem and drag minimization problem.

  16. Network model of project "Lean Production"

    NASA Astrophysics Data System (ADS)

    Khisamova, E. D.

    2018-05-01

    Lean production implies, above all, new approaches to the culture of management and the organization of production, and offers a set of tools and techniques that significantly reduce losses and make processes cheaper and faster. Lean production tools are simple solutions that make it possible to see opportunities for improvement in all aspects of the business: to reduce losses significantly, to improve the whole spectrum of business processes continuously, to increase the transparency and manageability of the organization substantially, to draw on the potential of each employee, and to increase competitiveness and obtain significant economic benefits without large financial expenditures. Each lean production tool solves a specific part of the problem, and only a combination of them will solve the problem or reduce it to acceptable levels. The study of the governance process of the "Lean Production" project permitted examining the methods and tools of lean production and developing measures for their improvement.

  17. Evaluation of the vehicle state with vibration-based diagnostics methods

    NASA Astrophysics Data System (ADS)

    Gai, V. E.; Polyakov, I. V.; Krasheninnikov, M. S.; Koshurina, A. A.; Dorofeev, R. A.

    2017-02-01

    Timely detection of faults in the operation of mechanisms guarantees stable operation of the entire machine complex. It minimizes unexpected losses and helps avoid injuries to workers. Solving this problem is most important for vehicles and machines working in areas remote from infrastructure. All-terrain vehicles are one such type of transport. A potential object of application of the described methodology is a multipurpose rotary-screw amphibious vehicle for rescue, reconnaissance, and transport-technological operations. At present, there is no information on the use of such systems in ground vehicles. The present paper is devoted to estimating the state of a mechanism based on analysis of the vibration signals it produces, in particular the vibration signals of rolling bearings. The theory of active perception was used to solve the state-estimation problem.

  18. Short-Term State Forecasting-Based Optimal Voltage Regulation in Distribution Systems: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, Rui; Jiang, Huaiguang; Zhang, Yingchen

    2017-05-17

    A novel short-term state forecasting-based optimal power flow (OPF) approach for distribution system voltage regulation is proposed in this paper. An extreme learning machine (ELM) based state forecaster is developed to accurately predict system states (voltage magnitudes and angles) in the near future. Based on the forecast system states, a dynamically weighted three-phase AC OPF problem is formulated to minimize the voltage violations with higher penalization on buses which are forecast to have higher voltage violations in the near future. By solving the proposed OPF problem, the controllable resources in the system are optimally coordinated to alleviate the potential severe voltage violations and improve the overall voltage profile. The proposed approach has been tested in a 12-bus distribution system and simulation results are presented to demonstrate the performance of the proposed approach.
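
    A minimal sketch of the extreme learning machine regression step, assuming the forecaster maps a window of recent measurements to the next-step voltage magnitude; the feature construction, sizes, and regularization below are illustrative, not the paper's configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

def elm_fit(X, Y, n_hidden=100, reg=1e-3):
    """ELM: random hidden layer, output weights by regularized least squares."""
    W = rng.normal(size=(X.shape[1], n_hidden))
    b = rng.normal(size=n_hidden)
    H = np.tanh(X @ W + b)  # random feature map
    beta = np.linalg.solve(H.T @ H + reg * np.eye(n_hidden), H.T @ Y)
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Toy one-step-ahead forecast of a noisy sinusoidal "voltage" series
t = np.arange(1000) * 0.05
v = 1.0 + 0.05 * np.sin(t) + 0.005 * rng.normal(size=t.size)
lag = 8
X = np.stack([v[i:i + lag] for i in range(len(v) - lag)])
Y = v[lag:]
W, b, beta = elm_fit(X[:800], Y[:800])
pred = elm_predict(X[800:], W, b, beta)
print("test RMSE:", np.sqrt(np.mean((pred - Y[800:]) ** 2)))
```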

  19. Solution of a few nonlinear problems in aerodynamics by the finite elements and functional least squares methods. Ph.D. Thesis - Paris Univ.; [mathematical models of transonic flow using nonlinear equations

    NASA Technical Reports Server (NTRS)

    Periaux, J.

    1979-01-01

    The numerical simulation of the transonic flows of idealized fluids and of incompressible viscous fluids by nonlinear least squares methods is presented. The nonlinear equations, the boundary conditions, and the various constraints controlling the two types of flow are described. The standard iterative methods for solving a quasi-elliptic nonlinear equation with partial derivatives are reviewed, with emphasis placed on two examples: the fixed point method applied to the Gelder functional in the case of compressible subsonic flows, and the Newton method used in the technique of decomposition of the lifting potential. The new abstract least squares method is discussed. It consists of replacing the nonlinear equation with a minimization problem in an H^{-1}-type Sobolev functional space.

  20. Nursing responsibilities and social justice: an analysis in support of disciplinary goals.

    PubMed

    Grace, Pamela J; Willis, Danny G

    2012-01-01

    Social justice is asserted as a responsibility of the nursing profession. However, a reliable conception of social justice that can undergird practice, research, education, and policy endeavors has proved elusive. We discuss this as a problem for the profession and propose Powers and Faden's model of social justice as useful for nursing purposes because of its focus on exploring and rectifying underlying causes of injustice as they lie within the fabric of society. Their model asserts 6 essential dimensions of well-being as universal human needs. These dimensions are interrelated and nonhierarchical. A serious deficiency in any one affects other dimensions and interferes with the ability to experience "a minimally decent life." The model is applied to the problem of child abuse and the effects of its aftermath on well-being as an example of its potential for structuring nursing knowledge development, practice, and policy initiatives. Copyright © 2012 Elsevier Inc. All rights reserved.

  1. Tachyon field non-minimally coupled to massive neutrino matter

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ahmad, Safia; Myrzakulov, Nurgissa; Myrzakulov, R., E-mail: safia@ctp-jamia.res.in, E-mail: nmyrzakulov@gmail.com, E-mail: rmyrzakulov@gmail.com

    2016-07-01

    In this paper, we consider a rolling tachyon with steep run-away potentials, non-minimally coupled to massive neutrino matter. The coupling builds up dynamically at late times as the neutrino matter turns non-relativistic. In the case of scaling and string-inspired potentials, we show that the non-minimal coupling leads to a minimum in the field potential. For a suitable choice of model parameters, this is shown to give rise to late-time acceleration with the desired equation of state.

  2. Analytical solution of Schrödinger equation in minimal length formalism for trigonometric potential using hypergeometry method

    NASA Astrophysics Data System (ADS)

    Nurhidayati, I.; Suparmi, A.; Cari, C.

    2018-03-01

    The Schrödinger equation has been extended by applying the minimal length formalism to a trigonometric potential. The wave function and energy spectra, obtained using the hypergeometric method, were used to describe the behavior of subatomic particles. The results show that the energy increases as both the minimal length parameter and the potential parameter increase. The energies were calculated numerically using MATLAB.

  3. [The present and future state of minimized extracorporeal circulation].

    PubMed

    Meng, Fan; Yang, Ming

    2013-05-01

    Minimized extracorporeal circulation is a new form of extracorporeal circulation that reduces the postoperative side effects of the conventional technique. This paper introduces the principle, characteristics, applications, and related research of minimized extracorporeal circulation. To address the problems of systemic inflammatory response syndrome and limited assist time, the article proposes three directions of development: system miniaturization and integration, pulsatile blood pumps, and adaptive control through identification of patient parameters.

  4. Bilinear Factor Matrix Norm Minimization for Robust PCA: Algorithms and Applications.

    PubMed

    Shang, Fanhua; Cheng, James; Liu, Yuanyuan; Luo, Zhi-Quan; Lin, Zhouchen

    2017-09-04

    The heavy-tailed distributions of corrupted outliers and of the singular values of all channels in low-level vision have proven to be effective priors for many applications, such as background modeling, photometric stereo, and image alignment, and they can be well modeled by a hyper-Laplacian. However, the use of such distributions generally leads to challenging non-convex, non-smooth, and non-Lipschitz problems, and makes existing algorithms very slow for large-scale applications. Building on the analytic solutions to Lp-norm minimization for two specific values of p, namely p=1/2 and p=2/3, we propose two novel bilinear factor matrix norm minimization models for robust principal component analysis. We first define the double nuclear norm and the Frobenius/nuclear hybrid norm penalties, and then prove that they are in essence the Schatten-1/2 and Schatten-2/3 quasi-norms, respectively, which lead to much more tractable and scalable Lipschitz optimization problems. Our experimental analysis shows that both of our methods yield more accurate solutions than original Schatten quasi-norm minimization, even when the number of observations is very limited. Finally, we apply our penalties to various low-level vision problems, e.g., moving object detection, image alignment, and inpainting, and show that our methods usually outperform the state-of-the-art methods.

  5. Constrained Total Generalized p-Variation Minimization for Few-View X-Ray Computed Tomography Image Reconstruction.

    PubMed

    Zhang, Hanming; Wang, Linyuan; Yan, Bin; Li, Lei; Cai, Ailong; Hu, Guoen

    2016-01-01

    Total generalized variation (TGV)-based computed tomography (CT) image reconstruction, which utilizes high-order image derivatives, is superior to total variation-based methods in terms of the preservation of edge information and the suppression of unfavorable staircase effects. However, conventional TGV regularization employs an l1-based form, which is not the most direct way to maximize the sparsity prior. In this study, we propose a total generalized p-variation (TGpV) regularization model to improve the sparsity exploitation of TGV and offer efficient solutions to few-view CT image reconstruction problems. To solve the nonconvex optimization problem of the TGpV minimization model, we present an efficient iterative algorithm based on alternating minimization of the augmented Lagrangian function. All of the resulting subproblems, decoupled by variable splitting, admit explicit solutions obtained by applying the alternating minimization method and a generalized p-shrinkage mapping. In addition, approximate solutions that can be easily implemented and quickly calculated through the fast Fourier transform are derived using the proximal point method to reduce the cost of the inner subproblems. The accuracy and efficiency of the proposed method are qualitatively and quantitatively evaluated on simulated and real data to validate its efficiency and feasibility. Overall, the proposed method exhibits reasonable performance and outperforms the original TGV-based method when applied to few-view problems.
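
    The "generalized p-shrinkage mapping" is not spelled out in the abstract; one widely used form is Chartrand's p-shrinkage operator, sketched below in Python as a hedged illustration (the paper's exact mapping and parameter conventions may differ).

```python
import numpy as np

def p_shrinkage(y, lam, p):
    """Chartrand-style p-shrinkage, a generalization of soft thresholding.
    For p = 1 it reduces to ordinary soft thresholding; for p < 1 large
    entries are shrunk less while small entries are suppressed more,
    promoting stronger sparsity."""
    mag = np.abs(y)
    with np.errstate(divide='ignore'):
        shrunk = np.maximum(mag - lam**(2.0 - p) * mag**(p - 1.0), 0.0)
    return np.sign(y) * np.where(mag > 0, shrunk, 0.0)

y = np.array([-2.0, -0.5, 0.1, 0.8, 3.0])
print(p_shrinkage(y, lam=0.5, p=1.0))   # ordinary soft thresholding
print(p_shrinkage(y, lam=0.5, p=0.5))   # small entries suppressed more strongly
```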

  6. Optimal design method to minimize users' thinking mapping load in human-machine interactions.

    PubMed

    Huang, Yanqun; Li, Xu; Zhang, Jie

    2015-01-01

    The discrepancy between human cognition and machine requirements or behaviors usually imposes a serious mental load in thinking mapping, and can even lead to disasters during product operation. Helping people avoid confusion and difficulty in human-machine interaction is important in today's mentally demanding work environments. The goal is to improve the usability of a product and to minimize the user's thinking mapping and interpreting load in human-machine interactions. An optimal human-machine interface design method is introduced, based on minimizing the mental load of the thinking mapping process between users' intentions and the affordances of the product interface states. By analyzing the users' thinking mapping problem, an operating action model is constructed. According to human natural instincts and acquired knowledge, an expected ideal design with minimized thinking load is first uniquely determined. Then, creative alternatives, in terms of the way humans obtain operational information, are provided as datasets of digital interface states. Finally, using cluster analysis, an optimum solution is picked from the alternatives by calculating the distances between the two datasets. Considering multiple factors to minimize users' thinking mapping loads, the solution nearest to the ideal value is found in a human-car interaction design case. The clustering results show the method's effectiveness in finding an optimum solution to the mental load minimization problem in human-machine interaction design.
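
    As a hedged illustration of the final selection step, the snippet below picks the alternative nearest to an ideal design vector by Euclidean distance; the feature encoding is hypothetical, since the abstract does not specify how interface states are coded.

```python
import numpy as np

# Pick the alternative closest to an "ideal" design vector.
# Feature values here are invented for illustration only.
ideal = np.array([1.0, 0.0, 0.2, 1.0])        # ideal design (minimal mapping load)
alternatives = np.array([
    [0.9, 0.3, 0.4, 0.8],
    [0.5, 0.1, 0.2, 0.9],
    [1.0, 0.2, 0.1, 0.7],
])
dists = np.linalg.norm(alternatives - ideal, axis=1)  # distance to the ideal
best = int(np.argmin(dists))                          # nearest alternative wins
print(best, dists)
```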

  7. Geopolymer for protective coating of transportation infrastructures.

    DOT National Transportation Integrated Search

    1998-09-01

    Surface deterioration of exposed transportation structures is a major problem. In most cases, surface deterioration could lead to structural problems because of the loss of cover and ensuing reinforcement corrosion. To minimize the deterioration,...

  8. Continued research on selected parameters to minimize community annoyance from airplane noise

    NASA Technical Reports Server (NTRS)

    Frair, L.

    1981-01-01

    Results from continued research on selected parameters to minimize community annoyance from airport noise are reported. First, a review of the initial work on this problem is presented. Then the research focus is expanded by considering multiobjective optimization approaches for this problem. A multiobjective optimization algorithm review from the open literature is presented. This is followed by the multiobjective mathematical formulation for the problem of interest. A discussion of the appropriate solution algorithm for the multiobjective formulation is conducted. Alternate formulations and associated solution algorithms are discussed and evaluated for this airport noise problem. Selected solution algorithms that have been implemented are then used to produce computational results for example airports. These computations involved finding the optimal operating scenario for a moderate size airport and a series of sensitivity analyses for a smaller example airport.

  9. A bottom-up approach to the strong CP problem

    NASA Astrophysics Data System (ADS)

    Diaz-Cruz, J. L.; Hollik, W. G.; Saldana-Salazar, U. J.

    2018-05-01

    The strong CP problem is one of many puzzles in the theoretical description of elementary particle physics that still lacks an explanation. While top-down solutions to that problem usually comprise new symmetries or fields or both, we want to present a rather bottom-up perspective. The main problem seems to be how to achieve small CP violation in the strong interactions despite the large CP violation in weak interactions. In this paper, we show that with minimal assumptions on the structure of mass (Yukawa) matrices, they do not contribute to the strong CP problem and thus we can provide a pathway to a solution of the strong CP problem within the structures of the Standard Model and no extension at the electroweak scale is needed. However, to address the flavor puzzle, models based on minimal SU(3) flavor groups leading to the proposed flavor matrices are favored. Though we refrain from an explicit UV completion of the Standard Model, we provide a simple requirement for such models not to show a strong CP problem by construction.

  10. Multi-Objective Flight Control for Drag Minimization and Load Alleviation of High-Aspect Ratio Flexible Wing Aircraft

    NASA Technical Reports Server (NTRS)

    Nguyen, Nhan; Ting, Eric; Chaparro, Daniel; Drew, Michael; Swei, Sean

    2017-01-01

    As aircraft wings become much more flexible due to the use of lightweight composite materials, adverse aerodynamics at off-design performance can result from changes in wing shape due to aeroelastic deflections. Increased drag, and hence increased fuel burn, is a potential consequence. Without means for aeroelastic compensation, the benefit of weight reduction from the use of lightweight material could be offset by less optimal aerodynamic performance at off-design flight conditions. Performance Adaptive Aeroelastic Wing (PAAW) technology can potentially address these technical challenges for future flexible-wing transports. PAAW technology leverages multi-disciplinary solutions to maximize the aerodynamic performance payoff of future adaptive wing designs, while simultaneously addressing operational constraints that can prevent the optimal aerodynamic performance from being realized. These operational constraints include reduced flutter margins, increased airframe responses to gust and maneuver loads, pilot handling qualities, and ride qualities. Taken together with the goal of optimal aerodynamic performance, these constraints present a multi-objective flight control problem. The paper presents a multi-objective flight control approach based on a drag-cognizant optimal control method. A concept of virtual control, which was previously introduced, is implemented to address the pair-wise flap motion constraints imposed by the elastomer material, and this method is shown to be able to satisfy the constraints. Real-time drag minimization control is considered an important element of PAAW technology. Drag minimization control has many technical challenges, such as sensing and control. An initial outline of a real-time drag minimization control has already been developed and will be investigated further in the future. A simulation study of multi-objective flight control for a flight path angle command with aeroelastic mode suppression and drag minimization demonstrates the effectiveness of the proposed solution. In-flight structural loads are also an important consideration. As wing flexibility increases, maneuver load and gust load responses can be significant and can therefore pose safety and flight control concerns. In this paper, we extend the multi-objective flight control framework to include load alleviation control. The study focuses initially on maneuver load minimization control and will subsequently address gust load alleviation control in future work.

  11. Null Angular Momentum and Weak KAM Solutions of the Newtonian N-Body Problem

    NASA Astrophysics Data System (ADS)

    Percino-Figueroa, Boris A.

    2017-08-01

    In [Arch. Ration. Mech. Anal. 213 (2014), 981-991] it was proved that in the Newtonian N-body problem, given a minimal central configuration a and an arbitrary configuration x, there exists a completely parabolic orbit starting at x and asymptotic to the homothetic parabolic motion of a; furthermore, such an orbit is a free time minimizer of the action functional. In this article we extend this result to an abundance of completely parabolic motions by proving that, under the same hypothesis, the completely parabolic motion starting at x can be taken to have zero angular momentum. We achieve this by characterizing the rotation-invariant weak KAM solutions as those defining a lamination on the configuration space by free time minimizers with zero angular momentum.

  12. Joint Geophysical Inversion With Multi-Objective Global Optimization Methods

    NASA Astrophysics Data System (ADS)

    Lelievre, P. G.; Bijani, R.; Farquharson, C. G.

    2015-12-01

    Pareto multi-objective global optimization (PMOGO) methods generate a suite of solutions that minimize multiple objectives (e.g. data misfits and regularization terms) in a Pareto-optimal sense. Providing a suite of models, as opposed to a single model that minimizes a weighted sum of objectives, allows a more complete assessment of the possibilities and avoids the often difficult choice of how to weight each objective. We are applying PMOGO methods to three classes of inverse problems. The first class are standard mesh-based problems where the physical property values in each cell are treated as continuous variables. The second class of problems are also mesh-based but cells can only take discrete physical property values corresponding to known or assumed rock units. In the third class we consider a fundamentally different type of inversion in which a model comprises wireframe surfaces representing contacts between rock units; the physical properties of each rock unit remain fixed while the inversion controls the position of the contact surfaces via control nodes. This third class of problem is essentially a geometry inversion, which can be used to recover the unknown geometry of a target body or to investigate the viability of a proposed Earth model. Joint inversion is greatly simplified for the latter two problem classes because no additional mathematical coupling measure is required in the objective function. PMOGO methods can solve numerically complicated problems that could not be solved with standard descent-based local minimization methods. This includes the latter two classes of problems mentioned above. There are significant increases in the computational requirements when PMOGO methods are used but these can be ameliorated using parallelization and problem dimension reduction strategies.
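
    As a hedged illustration of the Pareto-optimal idea, the sketch below extracts the non-dominated subset from candidate models scored on two objectives (say, data misfit and model roughness); it is a generic filter, not the authors' PMOGO algorithm.

```python
import numpy as np

def pareto_front(scores):
    """Return indices of non-dominated rows (minimization on all columns).
    A point is dominated if another point is <= on every objective and
    strictly < on at least one."""
    keep = []
    for i in range(scores.shape[0]):
        others = np.delete(scores, i, axis=0)
        dominated = np.any(
            np.all(others <= scores[i], axis=1) &
            np.any(others < scores[i], axis=1)
        )
        if not dominated:
            keep.append(i)
    return keep

scores = np.array([[1.0, 5.0], [2.0, 3.0], [3.0, 1.0], [2.5, 3.5]])  # misfit, roughness
print(pareto_front(scores))   # -> [0, 1, 2]; the last model is dominated
```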

  13. Finite element procedures for time-dependent convection-diffusion-reaction systems

    NASA Technical Reports Server (NTRS)

    Tezduyar, T. E.; Park, Y. J.; Deans, H. A.

    1988-01-01

    New finite element procedures based on the streamline-upwind/Petrov-Galerkin formulations are developed for time-dependent convection-diffusion-reaction equations. These procedures minimize spurious oscillations for convection-dominated and reaction-dominated problems. The results obtained for representative numerical examples are accurate with minimal oscillations. As a special application problem, the single-well chemical tracer test (a procedure for measuring oil remaining in a depleted field) is simulated numerically. The results show the importance of temperature effects on the interpreted value of residual oil saturation from such tests.

  14. Structural synthesis: Precursor and catalyst

    NASA Technical Reports Server (NTRS)

    Schmit, L. A.

    1984-01-01

    More than twenty-five years have elapsed since it was recognized that a rather general class of structural design optimization tasks can be properly posed as an inequality-constrained minimization problem. It is suggested that, independent of primary discipline area, it will be useful to think about: (1) posing design problems in terms of an objective function and inequality constraints; (2) generating design-oriented approximate analysis methods (giving special attention to behavior sensitivity analysis); (3) distinguishing between decisions that lead to an analysis model and those that lead to a design model; (4) finding ways to generate a sequence of approximate design optimization problems that capture the essential characteristics of the primary problem, while still having an explicit algebraic form that is matched to one or more of the established optimization algorithms; (5) examining the potential of optimum design sensitivity analysis to facilitate quantitative trade-off studies as well as participation in multilevel design activities. It should be kept in mind that multilevel methods are inherently well suited to a parallel mode of operation in computer terms, or to a division of labor between task groups in organizational terms. Based on structural experience with multilevel methods, general guidelines are suggested.

  15. Compiling quantum circuits to realistic hardware architectures using temporal planners

    NASA Astrophysics Data System (ADS)

    Venturelli, Davide; Do, Minh; Rieffel, Eleanor; Frank, Jeremy

    2018-04-01

    To run quantum algorithms on emerging gate-model quantum hardware, quantum circuits must be compiled to take into account constraints on the hardware. For near-term hardware, with only limited means to mitigate decoherence, it is critical to minimize the duration of the circuit. We investigate the application of temporal planners to the problem of compiling quantum circuits to newly emerging quantum hardware. While our approach is general, we focus on compiling to superconducting hardware architectures with nearest-neighbor constraints. Our initial experiments focus on compiling Quantum Alternating Operator Ansatz (QAOA) circuits, whose high number of commuting gates allows great flexibility in the order in which the gates can be applied. That freedom makes it more challenging to find optimal compilations, but also means there is a greater potential win from more optimized compilation than for less flexible circuits. We map this quantum circuit compilation problem to a temporal planning problem and generate a test suite of compilation problems for QAOA circuits of various sizes targeting a realistic hardware architecture. We report compilation results from several state-of-the-art temporal planners on this test set. This early empirical evaluation demonstrates that temporal planning is a viable approach to quantum circuit compilation.

  16. Social Return On Investment (SROI): Problems, solutions … and is SROI a good investment?

    PubMed

    Yates, Brian T; Marra, Mita

    2017-10-01

    The conclusion of this special issue on Social Return On Investment (SROI) begins with a summary of both the advantages and the problems of SROI, many of which were identified in the preceding articles. We also offer potential solutions for some of these problems that can be derived from standard evaluation practices and that are becoming expected in SROIs that follow guidance from international SROI networks. A remaining concern about SROI is that we do not yet know whether SROI itself adds sufficient benefit to programs to justify its cost. Two frameworks for this proposed metaevaluation of SROI are suggested, the first comparing benefits to costs summatively (the resource→outcome model). The second framework evaluates costs and benefits according to how much they contribute to, or are caused by, the different activities of SROI. This resource→activity→outcome model could enable the outcomes of SROI to be maximized within resource constraints (such as budget and time limits) on SROI. Alternatively, information from this model could help minimize the costs of achieving a specific level of return on investment from conducting SROI. Possible problems with this metaevaluation of SROI are discussed. Copyright © 2016 Elsevier Ltd. All rights reserved.

  17. A Parallel Approach To Optimum Actuator Selection With a Genetic Algorithm

    NASA Technical Reports Server (NTRS)

    Rogers, James L.

    2000-01-01

    Recent discoveries in smart technologies have created a variety of aerodynamic actuators which have great potential to enable entirely new approaches to aerospace vehicle flight control. For a revolutionary concept such as a seamless aircraft with no moving control surfaces, there is a large set of candidate locations for placing actuators, resulting in a substantially larger number of combinations to examine in order to find an optimum placement satisfying the mission requirements. The placement of actuators on a wing determines the control effectiveness of the airplane. One approach to placement maximizes the moments about the pitch, roll, and yaw axes while minimizing the coupling. Genetic algorithms have been instrumental in achieving good solutions to discrete optimization problems such as the actuator placement problem. As a proof of concept, a genetic algorithm has been developed to find the minimum number of actuators required to provide uncoupled pitch, roll, and yaw control for a simplified, untapered, unswept wing model. Finding the optimum placement by searching all possible combinations would require 1,100 hours. Formulating the problem as a multi-objective problem and modifying it to take advantage of the parallel processing capabilities of a multi-processor computer reduces the optimization time to 22 hours.
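
    A minimal sketch of this kind of bitstring genetic algorithm is given below, with a toy fitness that rewards control authority while penalizing actuator count; the paper's real objective couples pitch, roll, and yaw moments and penalizes coupling, and all sizes and weights here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy GA for actuator placement: each bit selects one candidate site.
N_SITES, POP, GENS = 16, 40, 100
effect = rng.normal(size=(N_SITES, 3))             # toy per-site moment contributions

def fitness(bits):
    sel = bits.astype(bool)
    m = effect[sel].sum(axis=0) if sel.any() else np.zeros(3)
    authority = np.abs(m).sum()                    # reward control authority...
    return authority - 0.5 * bits.sum()            # ...but prefer fewer actuators

pop = rng.integers(0, 2, size=(POP, N_SITES))
for _ in range(GENS):
    f = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(f)[-POP // 2:]]       # keep the fitter half
    cuts = rng.integers(1, N_SITES, size=POP // 2)
    kids = np.array([np.concatenate([parents[i][:c], parents[-i - 1][c:]])
                     for i, c in enumerate(cuts)]) # one-point crossover
    flip = rng.random(kids.shape) < 1.0 / N_SITES  # bit-flip mutation
    pop = np.vstack([parents, np.where(flip, 1 - kids, kids)])

best = pop[np.argmax([fitness(ind) for ind in pop])]
print(best, int(best.sum()), "actuators selected")
```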

  18. The Unintended Consequences of Social Media in Healthcare: New Problems and New Solutions.

    PubMed

    Hors-Fraile, S; Atique, S; Mayer, M A; Denecke, K; Merolli, M; Househ, M

    2016-11-10

    Social media is increasingly being used in conjunction with health information technology (health IT). The objective of this paper is to identify some of the undesirable outcomes that arise from this integration and to suggest solutions to these problems. After a discussion with experts to elicit the topics that should be included in the survey, we performed a narrative review based on recent literature and interviewed multidisciplinary experts from different areas. In each case, we identified and analyzed the unintended effects of social media in health IT. Each analyzed topic provided a different set of unintended consequences. The most relevant consequences include loss of privacy with ethical and legal issues, patient confusion in disease management, poor information accuracy in crowdsourcing, unclear responsibilities, misleading and biased information in the prevention and detection of epidemics, and demotivation in gamified health solutions with social components. Using social media in healthcare offers several benefits, but it is not exempt from potential problems, and not all of these problems have clear solutions. We recommend careful design of digital systems in order to minimize patients' feelings of demotivation and frustration, and we recommend following specific guidelines that should be created by all stakeholders in the healthcare ecosystem.

  19. Efficient data communication protocols for wireless networks

    NASA Astrophysics Data System (ADS)

    Zeydan, Engin

    In this dissertation, efficient decentralized algorithms are investigated for cost minimization problems in wireless networks. For wireless sensor networks, we investigate the reduction of energy consumption and throughput maximization separately, using multi-hop data aggregation for correlated data. The proposed algorithms exploit data redundancy using a game-theoretic framework. For energy minimization, routes are chosen to minimize the total energy expended by the network, using best response dynamics on local data. The cost function used in routing takes into account distance, interference, and in-network data aggregation. The proposed energy-efficient correlation-aware routing algorithm significantly reduces the energy consumption in the network and converges iteratively in a finite number of steps. For throughput maximization, we consider both the interference distribution across the network and the correlation between forwarded data when establishing routes. Nodes along each route are chosen to minimize the interference impact in their neighborhood and to maximize in-network data aggregation. The resulting network topology maximizes the global network throughput, and the algorithm is guaranteed to converge in a finite number of steps using best response dynamics. For multiple-antenna wireless ad-hoc networks, we present distributed cooperative and regret-matching based learning schemes for the joint transmit beamformer and power level selection problem for nodes operating in a multi-user interference environment. Total network transmit power is minimized while ensuring a constant received signal-to-interference-and-noise ratio at each receiver. In the cooperative and regret-matching based power minimization algorithms, transmit beamformers are selected from a predefined codebook to minimize the total power. By selecting transmit beamformers judiciously and performing power adaptation, the cooperative algorithm is shown to converge to a pure strategy Nash equilibrium with high probability throughout the iterations in the interference-impaired network. The regret-matching learning algorithm, on the other hand, is noncooperative and requires a minimal amount of overhead. The proposed cooperative and regret-matching based distributed algorithms are also compared with centralized solutions through simulation results.
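
    As a hedged illustration of best-response power adaptation toward a fixed SINR target, the sketch below implements a Foschini-Miljanic-style iteration on synthetic link gains; it is a simple relative of the schemes described above, not the dissertation's algorithm.

```python
import numpy as np

# Distributed power control: each transmitter repeatedly best-responds
# so that its receiver meets a target SINR. Gains and noise are synthetic.
rng = np.random.default_rng(1)
n = 4
G = rng.uniform(0.01, 0.05, size=(n, n))   # cross-link gains
np.fill_diagonal(G, 1.0)                   # direct-link gains
noise, target = 0.01, 2.0                  # noise power, target SINR
p = np.full(n, 0.1)                        # initial transmit powers

for _ in range(50):
    interf = G @ p - np.diag(G) * p + noise    # interference plus noise per link
    p = target * interf / np.diag(G)           # each node's best response

sinr = np.diag(G) * p / (G @ p - np.diag(G) * p + noise)
print(p, sinr)                                 # SINRs converge to the target
```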

  20. Convergence Speed of a Dynamical System for Sparse Recovery

    NASA Astrophysics Data System (ADS)

    Balavoine, Aurele; Rozell, Christopher J.; Romberg, Justin

    2013-09-01

    This paper studies the convergence rate of a continuous-time dynamical system for L1-minimization, known as the Locally Competitive Algorithm (LCA). Solving L1-minimization problems efficiently and rapidly is of great interest to the signal processing community, as these programs have been shown to recover sparse solutions to underdetermined systems of linear equations and come with strong performance guarantees. The LCA under study differs from the typical L1 solver in that it operates in continuous time: instead of being specified by discrete iterations, it evolves according to a system of nonlinear ordinary differential equations. The LCA is constructed from simple components, giving it the potential to be implemented as a large-scale analog circuit. The goal of this paper is to give guarantees on the convergence time of the LCA system. To do so, we analyze how the LCA evolves as it is recovering a sparse signal from underdetermined measurements. We show that under appropriate conditions on the measurement matrix and the problem parameters, the path the LCA follows can be described as a sequence of linear differential equations, each with a small number of active variables. This allows us to relate the convergence time of the system to the restricted isometry constant of the matrix. Interesting parallels to sparse-recovery digital solvers emerge from this study. Our analysis covers both the noisy and noiseless settings and is supported by simulation results.
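
    A minimal sketch of the LCA dynamics on a synthetic sparse-recovery instance is given below, integrated with forward Euler; parameters and sizes are illustrative, and an analog implementation would of course evolve in continuous time rather than discrete steps.

```python
import numpy as np

# LCA for min_a 0.5*||y - Phi a||^2 + lam*||a||_1, forward-Euler integrated.
rng = np.random.default_rng(0)
m, n, k = 30, 100, 5
Phi = rng.normal(size=(m, n)) / np.sqrt(m)         # measurement matrix
a_true = np.zeros(n)
a_true[rng.choice(n, k, replace=False)] = rng.normal(size=k)
y = Phi @ a_true                                   # underdetermined measurements

lam, tau, dt = 0.05, 1.0, 0.1
u = np.zeros(n)                                    # internal node states
soft = lambda z: np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)

for _ in range(2000):
    a = soft(u)                                    # thresholded node outputs
    u += dt * (Phi.T @ (y - Phi @ a) - u + a) / tau  # LCA node dynamics

print(np.linalg.norm(soft(u) - a_true))            # small residual (l1 bias ~ lam)
```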

  1. Large bowel obstruction due to gallstones: an endoscopic problem?

    PubMed Central

    Waterland, Peter; Khan, Faisal Shehzaad; Durkin, Damien

    2014-01-01

    A 73-year-old man was admitted with symptoms of large bowel obstruction. An emergency CT scan revealed pneumobilia and large bowel obstruction at the level of the rectosigmoid due to a 4×4 cm impacted gallstone. Flexible sigmoidoscopy confirmed the diagnosis but initial attempts to drag the stone into the rectum failed. An endoscopic mechanical lithotripter was employed to repeatedly fracture the gallstone into smaller fragments, which were passed spontaneously the next day. The patient made a complete recovery avoiding the potential dangers of surgery. This case report discusses cholecystoenteric fistula and a novel minimally invasive treatment for large bowel obstruction due to gallstones. PMID:24390966

  2. Large bowel obstruction due to gallstones: an endoscopic problem?

    PubMed

    Waterland, Peter; Khan, Faisal Shehzaad; Durkin, Damien

    2014-01-03

    A 73-year-old man was admitted with symptoms of large bowel obstruction. An emergency CT scan revealed pneumobilia and large bowel obstruction at the level of the rectosigmoid due to a 4×4 cm impacted gallstone. Flexible sigmoidoscopy confirmed the diagnosis but initial attempts to drag the stone into the rectum failed. An endoscopic mechanical lithotripter was employed to repeatedly fracture the gallstone into smaller fragments, which were passed spontaneously the next day. The patient made a complete recovery avoiding the potential dangers of surgery. This case report discusses cholecystoenteric fistula and a novel minimally invasive treatment for large bowel obstruction due to gallstones.

  3. Deflation of the cosmological constant associated with inflation and dark energy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Geng, Chao-Qiang; Lee, Chung-Chi, E-mail: geng@phys.nthu.edu.tw, E-mail: chungchi@mx.nthu.edu.tw

    2016-06-01

    In order to solve the fine-tuning problem of the cosmological constant, we propose a simple model with the vacuum energy non-minimally coupled to the inflaton field. In this model, the vacuum energy decays into the inflaton during the pre-inflation and inflation eras, so that the cosmological constant effectively deflates from the Planck mass scale to a much smaller one after inflation and plays the role of dark energy at late times in the universe. We show that our deflationary scenario is applicable to arbitrary slow-roll inflation models. We also take two specific inflation potentials to illustrate our results.

  4. Transformation of the nitrogen cycle: recent trends, questions, and potential solutions.

    PubMed

    Galloway, James N; Townsend, Alan R; Erisman, Jan Willem; Bekunda, Mateete; Cai, Zucong; Freney, John R; Martinelli, Luiz A; Seitzinger, Sybil P; Sutton, Mark A

    2008-05-16

    Humans continue to transform the global nitrogen cycle at a record pace, reflecting an increased combustion of fossil fuels, growing demand for nitrogen in agriculture and industry, and pervasive inefficiencies in its use. Much anthropogenic nitrogen is lost to air, water, and land to cause a cascade of environmental and human health problems. Simultaneously, food production in some parts of the world is nitrogen-deficient, highlighting inequities in the distribution of nitrogen-containing fertilizers. Optimizing the need for a key human resource while minimizing its negative consequences requires an integrated interdisciplinary approach and the development of strategies to decrease nitrogen-containing waste.

  5. Half-and-Half Palatoplasty

    PubMed Central

    Han, Hyun Ho; Kang, In Sook

    2014-01-01

    A 14-month-old child was diagnosed with a Veau Class II cleft palate. Von Langenbeck palatoplasty was performed on the right palate, and V-Y pushback palatoplasty was performed on the left palate. The child had no particular problems during the surgery, and the authors were able to elongate the cleft by 10 mm. Contrary to preoperative concerns regarding the hybrid use of palatoplasties, the uvula and midline incisions remained balanced in the middle. The authors named this combination method "half-and-half palatoplasty" and plan to conduct a long-term follow-up study of it as a potential solution that minimizes the complications of palatoplasty. PMID:28913201

  6. Non-intubated video-assisted thoracoscopic lung resections: the future of thoracic surgery?

    PubMed

    Gonzalez-Rivas, Diego; Bonome, Cesar; Fieira, Eva; Aymerich, Humberto; Fernandez, Ricardo; Delgado, Maria; Mendez, Lucia; de la Torre, Mercedes

    2016-03-01

    Thanks to the experience gained through the improvement of the video-assisted thoracoscopic surgery (VATS) technique, and the enhancement of surgical instruments and high-definition cameras, most pulmonary resections can now be performed by minimally invasive surgery. The future of thoracic surgery should be associated with a combination of surgical and anaesthetic evolution and improvements to reduce the trauma to the patient. Traditionally, intubated general anaesthesia with one-lung ventilation was considered necessary for thoracoscopic major pulmonary resections. However, thanks to the advances in minimally invasive techniques, the non-intubated thoracoscopic approach has been adapted even for use with major lung resections. Adequate analgesia obtained from regional anaesthesia techniques allows VATS to be performed in sedated patients, and the potential adverse effects related to general anaesthesia and selective ventilation can be avoided. The non-intubated procedures try to minimize the adverse effects of tracheal intubation and general anaesthesia, such as intubation-related airway trauma, ventilation-induced lung injury, residual neuromuscular blockade, and postoperative nausea and vomiting. Anaesthesiologists should be acquainted with the procedure to be performed. Furthermore, patients may also benefit from the efficient contraction of the dependent hemidiaphragm and preserved hypoxic pulmonary vasoconstriction during surgically induced pneumothorax in spontaneous ventilation. However, the surgical team must be aware of the potential problems and have the judgement to convert regional anaesthesia to intubated general anaesthesia in enforced circumstances. Non-intubated anaesthesia combined with the uniportal approach represents another step forward in minimally invasive treatment strategies, and can reliably be offered in the near future to an increasing number of patients. Therefore, education and training programmes in VATS with non-intubated patients may be needed. Surgical techniques and various regional anaesthesia techniques, as well as indications, contraindications, and criteria for conversion of sedation to general anaesthesia in non-intubated patients, are reviewed and discussed. © The Author 2015. Published by Oxford University Press on behalf of the European Association for Cardio-Thoracic Surgery. All rights reserved.

  7. Local Risk-Minimization for Defaultable Claims with Recovery Process

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Biagini, Francesca, E-mail: biagini@mathematik.uni-muenchen.de; Cretarola, Alessandra, E-mail: alessandra.cretarola@dmi.unipg.it

    We study the local risk-minimization approach for defaultable claims with random recovery at default time, seen as payment streams on the random interval [0, τ ∧ T], where T denotes the fixed time horizon. We find the pseudo-locally risk-minimizing strategy in the case when the agent's information takes into account the possibility of a default event (local risk-minimization with G-strategies), and we provide an application in the case of a corporate bond. We also discuss the problem of finding a pseudo-locally risk-minimizing strategy if we suppose the agent obtains her information only by observing the non-defaultable assets.

  8. Assessment of the potential of halophytes as energy crops for the electric utility industry. Final report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Goodin, J.R.

    1984-09-01

    This technical report assesses and estimates the potential of selected halophytes as future renewable energy resources, especially by US electric utilities, and familiarizes nonspecialists with research and development problems that must be resolved before these energy sources can become dependable supplies of energy. A literature search related to both indigenous and exotic species of halophytes has been done and appropriate terrestrial species have been selected. Selection criteria include: total biomass potential, genetic constraints, establishment and cultivation requirements, regions of suitability, secondary credits, and a number of other factors. Based on these selection criteria, for the arid western states with high levels of salinity in water and/or soils, there is little potential for energy feedstocks derived from grasses and herbaceous forbs. Likewise, coastal marshes, estuaries, and mangrove swamps, although excellent biomass producers, are too limited by region and have too many ecological and environmental problems for consideration. The deep-rooted, perennial woody shrubs indigenous to many saline regions of the west provide the best potential. The number of species in this group is limited, and Atriplex canescens, Sarcobatus vermiculatus, and Chrysothamnus nauseosus are the three species with the greatest biological potential. These shrubs would receive minimal energy inputs in cultivation, would not compete with agricultural land, and would restore productivity to severely disturbed sites. One might logically expect to achieve biomass feedstock yields of three to five tons/acre/yr on a long-term sustainable basis. The possibility also exists that exotic species might be introduced. 67 references, 1 figure, 5 tables.

  9. Building large mosaics of confocal endomicroscopic images using visual servoing.

    PubMed

    Rosa, Benoît; Erden, Mustafa Suphi; Vercauteren, Tom; Herman, Benoît; Szewczyk, Jérôme; Morel, Guillaume

    2013-04-01

    Probe-based confocal laser endomicroscopy provides real-time microscopic images of tissues contacted by a small probe that can be inserted in vivo through a minimally invasive access. Mosaicking consists of sweeping the probe in contact with the tissue to be imaged while collecting the video stream, and processing the images to assemble them into a large mosaic. While most of the literature in this field has focused on image processing, little attention has been paid so far to the way the probe motion can be controlled. This is a crucial issue, since the precision of the probe trajectory control drastically influences the quality of the final mosaic. Robotically controlled motion has the potential of providing enough precision to perform mosaicking. In this paper, we emphasize the difficulties of implementing such an approach. First, probe-tissue contacts generate deformations that prevent the image trajectory from being properly controlled. Second, in the context of the minimally invasive procedures targeted by our research, robotic devices are likely to exhibit limited quality of distal probe motion control at the microscopic scale. To cope with these problems, visual servoing from real-time endomicroscopic images is proposed in this paper. It is implemented on two different devices (a high-accuracy industrial robot and a prototype minimally invasive device). Experiments on different kinds of environments (printed paper and ex vivo tissues) show that the quality of the visually servoed probe motion is sufficient to build mosaics with minimal distortion in spite of disturbances.

  10. Formula of an ideal carbon nanomaterial supercapacitor

    NASA Astrophysics Data System (ADS)

    Samuilova, Larissa; Frenkel, Alexander; Samuilov, Vladimir

    2014-03-01

    Supercapacitors exhibit great potential as high-performance energy sources for a large variety of applications, ranging from consumer electronics through wearable optoelectronics to hybrid electric vehicles. We focus on carbon nanomaterials, especially carbon nanotube films, 3-D graphene, and graphene oxide, due to their high specific surface area and excellent electrical and mechanical properties. We have developed a simple approach to lowering the equivalent series resistance by fabricating electrodes of arbitrary thickness using carbon nanotube films and reduced graphene oxide based composites. Besides the problem of increasing the capacitance, minimizing the loss tangent (dissipation factor) is essential for the future development of supercapacitors. This requires not only a well-developed electrode surface area; the quality of the porous separator and of the electrolyte also play an important role. We address these factors as well.

  11. Perspectives of Disciplinary Problems and Practices in Elementary Schools

    ERIC Educational Resources Information Center

    Huger Marsh, Darlene P.

    2012-01-01

    Ill-discipline in public schools predates compulsory education in the United States. Disciplinary policies and laws enacted to combat the problem have met with minimal success. Research and recommendations have generally focused on the indiscipline problems ubiquitous in intermediate, junior and senior high schools. However, similar misbehaviors…

  12. Minimalism as a Guiding Principle: Linking Mathematical Learning to Everyday Knowledge

    ERIC Educational Resources Information Center

    Inoue, Noriyuki

    2008-01-01

    Studies report that students often fail to consider familiar aspects of reality in solving mathematical word problems. This study explored how different features of mathematical problems influence the way that undergraduate students employ realistic considerations in mathematical problem solving. Incorporating familiar contents in the word…

  13. Utilization of biocatalysts in cellulose waste minimization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Woodward, J.; Evans, B.R.

    1996-09-01

    Cellulose, a polymer of glucose, is the principal component of biomass and, therefore, a major source of waste that is either buried or burned. Examples of biomass waste include agricultural crop residues, forestry products, and municipal wastes. Recycling of this waste is important for energy conservation as well as waste minimization and there is some probability that in the future biomass could become a major energy source and replace fossil fuels that are currently used for fuels and chemicals production. It has been estimated that in the United States, between 100-450 million dry tons of agricultural waste are produced annually, approximately 6 million dry tons of animal waste, and of the 190 million tons of municipal solid waste (MSW) generated annually, approximately two-thirds is cellulosic in nature and over one-third is paper waste. Interestingly, more than 70% of MSW is landfilled or burned, however landfill space is becoming increasingly scarce. On a smaller scale, important cellulosic products such as cellulose acetate also present waste problems; an estimated 43 thousand tons of cellulose ester waste are generated annually in the United States. Biocatalysts could be used in cellulose waste minimization and this chapter describes their characteristics and potential in bioconversion and bioremediation processes.

  14. High quality 4D cone-beam CT reconstruction using motion-compensated total variation regularization

    NASA Astrophysics Data System (ADS)

    Zhang, Hua; Ma, Jianhua; Bian, Zhaoying; Zeng, Dong; Feng, Qianjin; Chen, Wufan

    2017-04-01

    Four-dimensional cone-beam computed tomography (4D-CBCT) has great potential clinical value because of its ability to describe tumor and organ motion. The challenge in 4D-CBCT reconstruction, however, is the limited number of projections at each phase, which results in reconstructions full of noise and streak artifacts with conventional analytical algorithms. To address this problem, in this paper we propose a motion-compensated total variation regularization approach that tries to fully exploit the temporal coherence of the spatial structures among the 4D-CBCT phases. In this work, we additionally conduct motion estimation/motion compensation (ME/MC) on the 4D-CBCT volume by using inter-phase deformation vector fields (DVFs). The motion-compensated 4D-CBCT volume is then viewed as a pseudo-static sequence, on which the regularization function is imposed. The regularization used in this work is 3D spatial total variation minimization combined with 1D temporal total variation minimization. We subsequently construct a cost function for a reconstruction pass and minimize this cost function using a variable splitting algorithm. Simulation and real patient data were used to evaluate the proposed algorithm. Results show that the introduction of additional temporal correlation along the phase direction can improve 4D-CBCT image quality.
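
    As a hedged illustration, the snippet below computes the two regularization terms being combined, 3D spatial TV within each phase plus 1D TV along the phase axis, for a toy 4D array; the paper minimizes these inside a variable-splitting reconstruction, which is not reproduced here, and the array layout and weight are assumptions.

```python
import numpy as np

# Anisotropic TV terms for a toy 4D volume laid out as (phases, z, y, x).
def tv_spatial(v):
    return sum(np.abs(np.diff(v, axis=ax)).sum() for ax in (1, 2, 3))

def tv_temporal(v):
    return np.abs(np.diff(v, axis=0)).sum()      # differences across phases

v = np.random.default_rng(0).normal(size=(8, 4, 16, 16))  # toy 4D-CBCT volume
cost = tv_spatial(v) + 0.5 * tv_temporal(v)               # weighted combination
print(cost)
```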

  15. Poverty-Exploitation-Alienation.

    ERIC Educational Resources Information Center

    Bronfenbrenner, Martin

    1980-01-01

    Illustrates how knowledge derived from the discipline of economics can be used to help shed light on social problems such as poverty, exploitation, and alienation, and can help decision makers form policy to minimize these and similar problems. (DB)

  16. Complexity Management Using Metrics for Trajectory Flexibility Preservation and Constraint Minimization

    NASA Technical Reports Server (NTRS)

    Idris, Husni; Shen, Ni; Wing, David J.

    2011-01-01

    The growing demand for air travel is increasing the need for mitigating air traffic congestion and complexity problems, which are already at high levels. At the same time, new surveillance, navigation, and communication technologies are enabling major transformations in the air traffic management system, including net-based information sharing and collaboration, performance-based access to airspace resources, and trajectory-based rather than clearance-based operations. The new system will feature different schemes for allocating tasks and responsibilities between the ground and airborne agents and between the human and automation, with potential capacity and cost benefits. Therefore, complexity management requires new metrics and methods that can support these new schemes. This paper presents metrics and methods for preserving trajectory flexibility that have been proposed to support a trajectory-based approach for complexity management by airborne or ground-based systems. It presents extensions to these metrics as well as to the initial research conducted to investigate the hypothesis that using these metrics to guide user and service provider actions will naturally mitigate traffic complexity. The analysis showed promising results in that: (1) Trajectory flexibility preservation mitigated traffic complexity as indicated by inducing self-organization in the traffic patterns and lowering traffic complexity indicators such as dynamic density and traffic entropy. (2) Trajectory flexibility preservation reduced the potential for secondary conflicts in separation assurance. (3) Trajectory flexibility metrics showed potential application to support user and service provider negotiations for minimizing the constraints imposed on trajectories without jeopardizing their objectives.

  17. Minimal Interventions in the Teaching of Mathematics

    ERIC Educational Resources Information Center

    Foster, Colin

    2014-01-01

    This paper addresses ways in which mathematics pedagogy can benefit from insights gleaned from counselling. Person-centred counselling stresses the value of genuineness, warm empathetic listening and minimal intervention to support people in solving their own problems and developing increased autonomy. Such an approach contrasts starkly with the…

  18. Inverse Electrocardiographic Source Localization of Ischemia: An Optimization Framework and Finite Element Solution

    PubMed Central

    Wang, Dafang; Kirby, Robert M.; MacLeod, Rob S.; Johnson, Chris R.

    2013-01-01

    With the goal of non-invasively localizing cardiac ischemic disease using body-surface potential recordings, we attempted to reconstruct the transmembrane potential (TMP) throughout the myocardium with the bidomain heart model. The task is an inverse source problem governed by partial differential equations (PDE). Our main contribution is solving the inverse problem within a PDE-constrained optimization framework that enables various physically-based constraints in both equality and inequality forms. We formulated the optimality conditions rigorously in the continuum before deriving finite element discretization, thereby making the optimization independent of discretization choice. Such a formulation was derived for the L2-norm Tikhonov regularization and the total variation minimization. The subsequent numerical optimization was fulfilled by a primal-dual interior-point method tailored to our problem’s specific structure. Our simulations used realistic, fiber-included heart models consisting of up to 18,000 nodes, much finer than any inverse models previously reported. With synthetic ischemia data we localized ischemic regions with roughly a 10% false-negative rate or a 20% false-positive rate under conditions up to 5% input noise. With ischemia data measured from animal experiments, we reconstructed TMPs with roughly 0.9 correlation with the ground truth. While precisely estimating the TMP in general cases remains an open problem, our study shows the feasibility of reconstructing TMP during the ST interval as a means of ischemia localization. PMID:23913980
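
    As a hedged, much-simplified illustration of the L2-norm Tikhonov variant, the sketch below solves a generic discretized linear inverse problem in closed form; the paper's actual solver is a PDE-constrained primal-dual interior-point method with inequality constraints, which this does not reproduce, and all sizes here are stand-ins.

```python
import numpy as np

# Generic linear inverse problem y = A x + noise with Tikhonov regularization:
# x_hat = argmin ||A x - y||^2 + lam ||x||^2, solved via the normal equations.
rng = np.random.default_rng(0)
m, n = 60, 120
A = rng.normal(size=(m, n))                  # forward operator (lead-field-like)
x_true = rng.normal(size=n)
y = A @ x_true + 0.01 * rng.normal(size=m)   # noisy body-surface-style data

lam = 1e-2                                    # regularization weight
x_hat = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ y)
print(np.linalg.norm(A @ x_hat - y))          # data misfit of the estimate
```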

  19. The maximum work principle regarded as a consequence of an optimization problem based on mechanical virtual power principle and application of constructal theory

    NASA Astrophysics Data System (ADS)

    Gavrus, Adinel

    2017-10-01

    This paper proposes to prove that the maximum work principle used by the theory of continuum plasticity can be regarded as a consequence of an optimization problem based on constructal theory (Prof. Adrian Bejan). It is known that thermodynamics defines the conservation of energy and the irreversibility of the evolution of natural systems. From a mechanical point of view, the first permits defining the momentum balance equation, respectively the virtual power principle, while the second explains the tendency of all currents to flow from high to low values. According to the constructal law, every finite-size system evolves toward configurations that flow more and more easily over time, distributing imperfections so as to maximize entropy and minimize losses or dissipations. During a material forming process, the application of constructal theory principles leads to the conclusion that, under external loads, the material flow is the one for which the total dissipated mechanical power (deformation and friction) becomes minimal. From a mechanical point of view, it is then possible to characterize the real state of all mechanical variables (stress, strain, strain rate) as the one that minimizes the total dissipated power: among all virtual non-equilibrium states, the real state minimizes the total dissipated power. A variational minimization problem can then be formulated, and this paper proves in a mathematical sense that, starting from this formulation, the maximum work principle can be recovered in a more general form, together with an equivalent form for the friction term. An application to the plane compression of a plastic material shows the feasibility of the proposed minimization problem formulation for finding analytical solutions in two cases: one without friction, and a second that takes into account the Tresca friction law. To validate the proposed formulation, a comparison with a classical analytical analysis based on the slice method, upper/lower bound methods, and a numerical finite element simulation is also presented.
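
    In symbols, a minimal sketch of the variational statement described above, with notation that is illustrative rather than the paper's own: among all kinematically admissible virtual velocity fields v* with strain rate field ε̇*, the real field minimizes the total dissipated power,

```latex
\dot{W}(v^{*}) =
\int_{V} \sigma\!\left(\dot{\varepsilon}^{*}\right) : \dot{\varepsilon}^{*}\, dV
+ \int_{S_{f}} \bar{m}\, k\, \lVert \Delta v^{*} \rVert \, dS
\;\longrightarrow\; \min_{v^{*}},
```

    where the first integral is the plastic deformation power over the volume V and the second is the Tresca friction power on the tool-workpiece interface S_f, with shear yield stress k, friction factor m̄, and relative sliding velocity Δv*.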

  20. On the Minimal Length Uncertainty Relation and the Foundations of String Theory

    DOE PAGES

    Chang, Lay Nam; Lewis, Zachary; Minic, Djordje; ...

    2011-01-01

    We review our work on the minimal length uncertainty relation as suggested by perturbative string theory. We discuss simple phenomenological implications of the minimal length uncertainty relation and then argue that the combination of the principles of quantum theory and general relativity allow for a dynamical energy-momentum space. We discuss the implication of this for the problem of vacuum energy and the foundations of nonperturbative string theory.

  1. Minimally invasive lumbar foraminotomy.

    PubMed

    Deutsch, Harel

    2013-07-01

    Lumbar radiculopathy is a common problem. Nerve root compression can occur at different places along a nerve root's course, including in the foramina. Minimally invasive approaches allow easier exposure of the lateral foramina and decompression of the nerve root in the foramina. This video demonstrates a minimally invasive approach to decompressing the lumbar nerve root in the foramina with a lateral-to-medial decompression. The video can be found here: http://youtu.be/jqa61HSpzIA.

  2. Corporate image and public health: an analysis of the Philip Morris, Kraft, and Nestlé websites.

    PubMed

    Smith, Elizabeth

    2012-01-01

    Companies need to maintain a good reputation to do business; however, companies in the infant formula, tobacco, and processed food industries have been identified as promoting disease. Such companies use their websites as a means of promulgating a positive public image, thereby potentially reducing the effectiveness of public health campaigns against the problems they perpetuate. The author examined documents from the websites of Philip Morris, Kraft, and Nestlé for issue framing and analyzed them using Benoit's typology of corporate image repair strategies. All three companies defined the problems they were addressing strategically, minimizing their own responsibility and the consequences of their actions. They proposed solutions that were actions to be taken by others. They also associated themselves with public health organizations. Health advocates should recognize industry attempts to use relationships with health organizations as strategic image repair and reject industry efforts to position themselves as stakeholders in public health problems. Denormalizing industries that are disease vectors, not just their products, may be critical in realizing positive change.

  3. Corporate Image and Public Health: An Analysis of the Philip Morris, Kraft, and Nestlé Websites

    PubMed Central

    SMITH, ELIZABETH

    2012-01-01

    Companies need to maintain a good reputation to do business; however, companies in the infant formula, tobacco, and processed food industries have been identified as promoting disease. Such companies use their websites as a means of promulgating a positive public image, thereby potentially reducing the effectiveness of public health campaigns against the problems they perpetuate. The author examined documents from the websites of Philip Morris, Kraft, and Nestlé for issue framing and analyzed them using Benoit’s typology of corporate image repair strategies. All three companies defined the problems they were addressing strategically, minimizing their own responsibility and the consequences of their actions. They proposed solutions that were actions to be taken by others. They also associated themselves with public health organizations. Health advocates should recognize industry attempts to use relationships with health organizations as strategic image repair and reject industry efforts to position themselves as stakeholders in public health problems. Denormalizing industries that are disease vectors, not just their products, may be critical in realizing positive change. PMID:22420639

  4. Computing global minimizers to a constrained B-spline image registration problem from optimal l1 perturbations to block match data

    PubMed Central

    Castillo, Edward; Castillo, Richard; Fuentes, David; Guerrero, Thomas

    2014-01-01

    Purpose: Block matching is a well-known strategy for estimating corresponding voxel locations between a pair of images according to an image similarity metric. Though robust to issues such as image noise and large magnitude voxel displacements, the estimated point matches are not guaranteed to be spatially accurate. However, the underlying optimization problem solved by the block matching procedure is similar in structure to the class of optimization problem associated with B-spline based registration methods. By exploiting this relationship, the authors derive a numerical method for computing a global minimizer to a constrained B-spline registration problem that incorporates the robustness of block matching with the global smoothness properties inherent to B-spline parameterization. Methods: The method reformulates the traditional B-spline registration problem as a basis pursuit problem describing the minimal l1-perturbation to block match pairs required to produce a B-spline fitting error within a given tolerance. The sparsity pattern of the optimal perturbation then defines a voxel point cloud subset on which the B-spline fit is a global minimizer to a constrained variant of the B-spline registration problem. As opposed to traditional B-spline algorithms, the optimization step involving the actual image data is addressed by block matching. Results: The performance of the method is measured in terms of spatial accuracy using ten inhale/exhale thoracic CT image pairs (available for download at www.dir-lab.com) obtained from the COPDgene dataset and corresponding sets of expert-determined landmark point pairs. The results of the validation procedure demonstrate that the method can achieve a high spatial accuracy on a significantly complex image set. Conclusions: The proposed methodology is demonstrated to achieve a high spatial accuracy and is generalizable in that it can employ any displacement field parameterization described as a least squares fit to block match generated estimates. Thus, the framework allows for a wide range of block match similarity metric and physical modeling combinations. PMID:24694135
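
    A minimal sketch of the basis pursuit reformulation described above, written with the cvxpy modeling package: minimize the l1 norm of a perturbation e to block-match displacements d so that a B-spline fit stays within a fitting tolerance eps. The basis matrix B, d, and eps are hypothetical placeholders, and the paper's specialized solver and per-coordinate 3D handling are not reproduced.

```python
import cvxpy as cp

def minimal_l1_perturbation(B, d, eps):
    """Smallest l1 perturbation e such that some B-spline fit B @ c
    matches the perturbed block-match data d + e within tolerance eps."""
    e = cp.Variable(d.shape[0])
    c = cp.Variable(B.shape[1])
    problem = cp.Problem(cp.Minimize(cp.norm1(e)),
                         [cp.norm(B @ c - (d + e), 2) <= eps])
    problem.solve()
    # zero entries of e mark the block matches kept for the constrained fit
    return e.value, c.value
```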

  5. Responsive consumerism: empowerment in markets for health plans.

    PubMed

    Elbel, Brian; Schlesinger, Mark

    2009-09-01

    American health policy is increasingly relying on consumerism to improve its performance. This article examines a neglected aspect of medical consumerism: the extent to which consumers respond to problems with their health plans. Using a telephone survey of five thousand consumers conducted in 2002, this article assesses how frequently consumers voice formal grievances or exit from their health plan in response to problems of differing severity. This article also examines the potential impact of this responsiveness on both individuals and the market. In addition, using cross-group comparisons of means and regressions, it looks at how the responses of "empowered" consumers compared with those who are "less empowered." The vast majority of consumers do not formally voice their complaints or exit health plans, even in response to problems with significant consequences. "Empowered" consumers are only minimally more likely to formally voice and no more likely to leave their plan. Moreover, given the greater prevalence of trivial problems, consumers are much more likely to complain or leave their plans because of problems that are not severe. Greater empowerment does not alleviate this. While much of the attention on consumerism has focused on prospective choice, understanding how consumers respond to problems is equally, if not more, important. Relying on consumers' responses as a means to protect individual consumers or influence the market for health plans is unlikely to be successful in its current form.

  6. Single product lot-sizing on unrelated parallel machines with non-decreasing processing times

    NASA Astrophysics Data System (ADS)

    Eremeev, A.; Kovalyov, M.; Kuznetsov, P.

    2018-01-01

    We consider a problem in which at least a given quantity of a single product has to be partitioned into lots, and lots have to be assigned to unrelated parallel machines for processing. In one version of the problem, the maximum machine completion time should be minimized; in another version, the sum of machine completion times is to be minimized. Machine-dependent lower and upper bounds on the lot size are given. The product is either assumed to be continuously divisible or discrete. The processing time of each machine is defined by an increasing function of the lot volume, given as an oracle. Setup times and costs are assumed to be negligibly small, and therefore, they are not considered. We derive optimal polynomial time algorithms for several special cases of the problem. An NP-hard case is shown to admit a fully polynomial time approximation scheme. An application of the problem in energy-efficient processor scheduling is considered.

  7. Preformulation considerations for controlled release dosage forms. Part III. Candidate form selection using numerical weighting and scoring.

    PubMed

    Chrzanowski, Frank

    2008-01-01

    Two numerical methods, Decision Analysis (DA) and Potential Problem Analysis (PPA) are presented as alternative selection methods to the logical method presented in Part I. In DA properties are weighted and outcomes are scored. The weighted scores for each candidate are totaled and final selection is based on the totals. Higher scores indicate better candidates. In PPA potential problems are assigned a seriousness factor and test outcomes are used to define the probability of occurrence. The seriousness-probability products are totaled and forms with minimal scores are preferred. DA and PPA have never been compared to the logical-elimination method. Additional data were available for two forms of McN-5707 to provide complete preformulation data for five candidate forms. Weight and seriousness factors (independent variables) were obtained from a survey of experienced formulators. Scores and probabilities (dependent variables) were provided independently by Preformulation. The rankings of the five candidate forms, best to worst, were similar for all three methods. These results validate the applicability of DA and PPA for candidate form selection. DA and PPA are particularly applicable in cases where there are many candidate forms and where each form has some degree of unfavorable properties.

  8. Left-frontal brain potentials index conceptual implicit memory for words initially viewed subliminally.

    PubMed

    Chen, Jason C W; Li, Wen; Lui, Ming; Paller, Ken A

    2009-08-18

    Neural correlates of explicit and implicit memory tend to co-occur and are therefore difficult to measure independently, posing problems for understanding the unique nature of different types of memory processing. To circumvent this problem, we developed an experimental design wherein subjects acquired information from words presented in a subliminal manner, such that conscious remembering was minimized. Cross-modal word repetition was used so that perceptual implicit memory would also be limited. Healthy human subjects viewed subliminal words six times each and about 2 min later heard the same words interspersed with new words in a category-verification test. Electrophysiological correlates of word repetition included negative brain potentials over left-frontal locations beginning approximately 500 ms after word onset. Behavioral responses were slower for repeated words than for new words. Differential processing of word meaning in the absence of explicit memory was most likely responsible for differential electrical and behavioral responses to old versus new words. Moreover, these effects were distinct from neural correlates of explicit memory observed in prior experiments, and were observed here in two separate experiments, thus providing a foundation for further investigations of relationships and interactions between different types of memory engaged when words repeat.

  9. Constrained Optimization of Average Arrival Time via a Probabilistic Approach to Transport Reliability

    PubMed Central

    Namazi-Rad, Mohammad-Reza; Dunbar, Michelle; Ghaderi, Hadi; Mokhtarian, Payam

    2015-01-01

    To achieve greater transit-time reduction and improvement in reliability of transport services, there is an increasing need to assist transport planners in understanding the value of punctuality; i.e. the potential improvements, not only to service quality and the consumer but also to the actual profitability of the service. In order for this to be achieved, it is important to understand the network-specific aspects that affect both the ability to decrease transit-time, and the associated cost-benefit of doing so. In this paper, we outline a framework for evaluating the effectiveness of proposed changes to average transit-time, so as to determine the optimal choice of average arrival time subject to desired punctuality levels whilst simultaneously minimizing operational costs. We model the service transit-time variability using a truncated probability density function, and simultaneously compare the trade-off between potential gains and increased service costs, for several commonly employed cost-benefit functions of general form. We formulate this problem as a constrained optimization problem to determine the optimal choice of average transit time, so as to increase the level of service punctuality, whilst simultaneously ensuring a minimum level of cost-benefit to the service operator. PMID:25992902

  10. Minimizing the Free Energy: A Computer Method for Teaching Chemical Equilibrium Concepts.

    ERIC Educational Resources Information Center

    Heald, Emerson F.

    1978-01-01

    Presents a computer method for teaching chemical equilibrium concepts using material balance conditions and the minimization of the free energy. The method for the calculation of chemical equilibrium, the computer program used to solve equilibrium problems, and applications of the method are also included. (HM)
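
    For a concrete picture of the technique, the sketch below minimizes the total free energy of a small ideal mixture subject to material balance constraints; the species, chemical potentials, and element-balance matrix are hypothetical placeholders rather than the article's examples.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical system: standard chemical potentials mu0 (in RT units),
# element-balance matrix A (rows: elements, cols: species), element totals b.
mu0 = np.array([-10.0, -5.0, -12.0])
A = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]])
b = np.array([1.0, 1.0])

def gibbs(n):
    # total free energy of an ideal mixture, in RT units
    return float(n @ (mu0 + np.log(n / n.sum())))

res = minimize(gibbs, x0=np.array([0.4, 0.4, 0.2]), method="SLSQP",
               bounds=[(1e-9, None)] * len(mu0),
               constraints=[{"type": "eq", "fun": lambda n: A @ n - b}])
print(res.x)  # equilibrium mole numbers satisfying the material balance
```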

  11. Holographic Entanglement Entropy, SUSY & Calibrations

    NASA Astrophysics Data System (ADS)

    Colgáin, Eoin Ó.

    2018-01-01

    Holographic calculations of entanglement entropy boil down to identifying minimal surfaces in curved spacetimes. This generically entails solving second-order equations. For higher-dimensional AdS geometries, we demonstrate that supersymmetry and calibrations reduce the problem to first-order equations. We note that minimal surfaces corresponding to disks preserve supersymmetry, whereas strips do not.

  12. Technological Minimalism: A Cost-Effective Alternative for Course Design and Development.

    ERIC Educational Resources Information Center

    Lorenzo, George

    2001-01-01

    Discusses the use of minimum levels of technology, or technological minimalism, for Web-based multimedia course content. Highlights include cost effectiveness; problems with video streaming, the use of XML for Web pages, and Flash and Java applets; listservs instead of proprietary software; and proper faculty training. (LRW)

  13. Safety in the Chemical Laboratory: Flood Control.

    ERIC Educational Resources Information Center

    Pollard, Bruce D.

    1983-01-01

    Describes events leading to a flood in the Wehr Chemistry Laboratory at Marquette University, discussing steps taken to minimize damage upon discovery. Analyzes the problem of flooding in the chemical laboratory and outlines seven steps of flood control: prevention; minimization; early detection; stopping the flood; evaluation; clean-up; and…

  14. Representations in Dynamical Embodied Agents: Re-Analyzing a Minimally Cognitive Model Agent

    ERIC Educational Resources Information Center

    Mirolli, Marco

    2012-01-01

    Understanding the role of "representations" in cognitive science is a fundamental problem facing the emerging framework of embodied, situated, dynamical cognition. To make progress, I follow the approach proposed by an influential representational skeptic, Randall Beer: building artificial agents capable of minimally cognitive behaviors and…

  15. Storage Optimization of Educational System Data

    ERIC Educational Resources Information Center

    Boja, Catalin

    2006-01-01

    Methods used to minimize the size of data files are described, and indicators for measuring the size of files and databases are defined. The storage optimization process is based on selecting, from a multitude of data storage models, the one that satisfies the proposed problem objective, maximization or minimization of the optimum criterion that is…

  16. Comprehensive Engineering Approach to Achieving Safe Neighborhoods.

    DOT National Transportation Integrated Search

    2000-09-01

    Steady increases in travel demand coupled with minimal increases in arterial street capacity have led to an increase in traffic-related safety problems in residential neighborhoods. These problems stem from the significant number of motorists that di...

  17. $L^1$ penalization of volumetric dose objectives in optimal control of PDEs

    DOE PAGES

    Barnard, Richard C.; Clason, Christian

    2017-02-11

    This work is concerned with a class of PDE-constrained optimization problems that are motivated by an application in radiotherapy treatment planning. Here the primary design objective is to minimize the volume where a functional of the state violates a prescribed level, but prescribing these levels in the form of pointwise state constraints leads to infeasible problems. We therefore propose an alternative approach based on L1 penalization of the violation that is also applicable when state constraints are infeasible. We establish well-posedness of the corresponding optimal control problem, derive first-order optimality conditions, discuss convergence of minimizers as the penalty parameter tends to infinity, and present a semismooth Newton method for their efficient numerical solution. Finally, the performance of this method for a model problem is illustrated and contrasted with an alternative approach based on (regularized) state constraints.

  18. Cooperative solution in the synthesis of multidegree-of-freedom shock isolation systems

    NASA Astrophysics Data System (ADS)

    Hati, S. K.; Rao, S. S.

    1983-01-01

    It is noted that there are essentially two major criteria in the synthesis of shock isolation systems. One is related to the minimization of the relative displacement between the main mass (which is to be isolated from vibration) and the base (where the disturbance is applied); the other concerns the minimization of the force transmitted to the main mass. From the available literature, it is observed that nearly all investigators have considered the design problem by treating one of these factors as the objective and the other as a constraint. This problem is treated here as a multicriteria optimization problem, and the trade-off between the two objectives is determined by using a game theory approach. The synthesis of a multidegree-of-freedom shock isolation system under a sinusoidal base disturbance is given as an example problem to illustrate the theory.

  19. Inverse atmospheric radiative transfer problems - A nonlinear minimization search method of solution. [aerosol pollution monitoring

    NASA Technical Reports Server (NTRS)

    Fymat, A. L.

    1976-01-01

    The paper studies the inversion of the radiative transfer equation describing the interaction of electromagnetic radiation with atmospheric aerosols. The interaction can be considered as the propagation in the aerosol medium of two light beams: the direct beam in the line-of-sight attenuated by absorption and scattering, and the diffuse beam arising from scattering into the viewing direction, which propagates more or less in random fashion. The latter beam has single scattering and multiple scattering contributions. For single scattering, the problem is reducible to first-kind Fredholm equations, while for multiple scattering it is necessary to invert partial integrodifferential equations. A nonlinear minimization search method, applicable to the solution of both types of problems, has been developed, and is applied here to the problem of monitoring aerosol pollution, namely the complex refractive index and size distribution of aerosol particles.

  20. Minimizing conflicts: A heuristic repair method for constraint-satisfaction and scheduling problems

    NASA Technical Reports Server (NTRS)

    Minton, Steve; Johnston, Mark; Philips, Andrew; Laird, Phil

    1992-01-01

    This paper describes a simple heuristic approach to solving large-scale constraint satisfaction and scheduling problems. In this approach one starts with an inconsistent assignment for a set of variables and searches through the space of possible repairs. The search can be guided by a value-ordering heuristic, the min-conflicts heuristic, that attempts to minimize the number of constraint violations after each step. The heuristic can be used with a variety of different search strategies. We demonstrate empirically that on the n-queens problem, a technique based on this approach performs orders of magnitude better than traditional backtracking techniques. We also describe a scheduling application where the approach has been used successfully. A theoretical analysis is presented both to explain why this method works well on certain types of problems and to predict when it is likely to be most effective.
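
    The min-conflicts heuristic is simple enough to state in a few lines of code. The sketch below is a straightforward reimplementation for the n-queens problem mentioned in the abstract, not the authors' code: repeatedly pick a conflicted queen and move it to the row that minimizes its conflicts.

```python
import random

def min_conflicts_nqueens(n, max_steps=100000):
    # one queen per column; cols[c] is the row of the queen in column c
    cols = [random.randrange(n) for _ in range(n)]

    def conflicts(col, row):
        # queens clash on the same row or on a diagonal
        return sum(1 for c in range(n) if c != col and
                   (cols[c] == row or abs(cols[c] - row) == abs(c - col)))

    for _ in range(max_steps):
        conflicted = [c for c in range(n) if conflicts(c, cols[c]) > 0]
        if not conflicted:
            return cols                      # consistent assignment found
        col = random.choice(conflicted)
        # min-conflicts step: move to the row with the fewest violations
        cols[col] = min(range(n), key=lambda r: conflicts(col, r))
    return None

print(min_conflicts_nqueens(50))
```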

  1. An Effective Mechanism for Virtual Machine Placement using Aco in IAAS Cloud

    NASA Astrophysics Data System (ADS)

    Shenbaga Moorthy, Rajalakshmi; Fareentaj, U.; Divya, T. K.

    2017-08-01

    Cloud computing provides an effective way to dynamically provision numerous resources to meet customer demands. A major challenge for cloud providers is designing efficient mechanisms for optimal virtual machine placement (OVMP). Such mechanisms enable cloud providers to utilize their available resources effectively and obtain higher profits. In order to provide appropriate resources to clients, an optimal virtual machine placement algorithm is proposed. Virtual machine placement is an NP-hard problem, and such problems can be tackled with heuristic algorithms. In this paper, an Ant Colony Optimization based virtual machine placement is proposed. The proposed system focuses on minimizing the cost of each plan for hosting virtual machines in a multiple cloud provider environment, and the response time of each cloud provider is monitored periodically so as to minimize delay in providing resources to users. The performance of the proposed algorithm is compared with a greedy mechanism. The proposed algorithm is simulated in the Eclipse IDE. The results clearly show that the proposed algorithm minimizes cost, response time, and the number of migrations.
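
    A toy sketch of the ant colony idea for this placement problem: pheromone trails bias each ant's assignment of VMs to providers, and the cheapest placement found reinforces the trails. The cost matrix and all parameters are hypothetical; the paper's algorithm additionally tracks provider response times.

```python
import numpy as np

rng = np.random.default_rng(0)
cost = rng.uniform(1, 10, size=(8, 3))   # hypothetical cost of VM i on provider j
n_vms, n_prov = cost.shape
tau = np.ones((n_vms, n_prov))           # pheromone trails

def build_solution():
    # assign each VM, favoring cheap and strongly marked providers
    p = tau / cost
    p /= p.sum(axis=1, keepdims=True)
    return np.array([rng.choice(n_prov, p=p[i]) for i in range(n_vms)])

best, best_cost = None, np.inf
for _ in range(100):
    for sol in (build_solution() for _ in range(20)):   # 20 ants per iteration
        c = cost[np.arange(n_vms), sol].sum()
        if c < best_cost:
            best, best_cost = sol, c
    tau *= 0.9                                          # evaporation
    tau[np.arange(n_vms), best] += 1.0 / best_cost      # reinforce best placement
print(best, best_cost)
```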

  2. Water and wastewater minimization plan in food industries.

    PubMed

    Ganjidoust, H; Ayati, B

    2002-01-01

    Iran is one of the countries located in a dry and semi-dry region, and many provinces, such as Tehran, have faced problems in recent years because of low precipitation. Much research has been carried out on reducing wastewater treatment costs and water consumption. One line of work concerns the food industries group, which consumes a great amount of water in different units. For example, in beverage industries, washing glass bottles seven times requires large amounts of water, whereas the use of plastic bottles can reduce water consumption. Another problem is leakage from pipelines, valves, etc., whose repair plays an important role in reducing the wastage of water. Non-polluted wasted water can be used for washing halls, watering green yards, recycling to the process, or reuse in cooling towers. In this paper, after a short review of waste minimization plans in food industries, problems concerning water-consuming and wastewater-producing units in three Iranian food industries are investigated. Finally, some suggestions are given for implementing the water and wastewater minimization plan in these companies.

  3. A Retrospective Comparison of Conventional versus Transverse Mini-Incision Technique for Carpal Tunnel Release

    PubMed Central

    Gülşen, İsmail; Ak, Hakan; Evcılı, Gökhan; Balbaloglu, Özlem; Sösüncü, Enver

    2013-01-01

    Background. In this retrospective study, we aimed to compare the results of two surgical techniques, conventional and transverse mini-incision. Materials and Methods. 95 patients were operated on between 2011 and 2012 in Bitlis State Hospital: 50 with the conventional technique and 45 with the minimal transverse incision. Postoperative complications, incision site problems, and the time until patients began using their hands in daily activities were noted. Results. 95 patients were included in the study. The mean age was 48; 87 patients were female and 8 were male. There were no incision site problems with either surgical technique. Anesthesia developed in only one patient, in the minimal incision group. The time until patients began using their hands in daily activities was 22.2 days and 17 days for the conventional and minimal incision techniques, respectively. Conclusion. The two surgical techniques did not show superiority over each other in terms of postoperative complications or incision site problems, except for the time until patients began using their hands in daily activities. PMID:24396607

  4. Improving Automated Endmember Identification for Linear Unmixing of HyspIRI Spectral Data.

    NASA Astrophysics Data System (ADS)

    Gader, P.

    2016-12-01

    The size of data sets produced by imaging spectrometers is increasing rapidly. There is already a processing bottleneck. Part of the reason for this bottleneck is the need for expert input using interactive software tools. This process can be very time consuming and laborious but is currently crucial to ensuring the quality of the analysis. Automated algorithms can mitigate this problem. Although it is unlikely that processing systems can become completely automated, there is an urgent need to increase the level of automation. Spectral unmixing is a key component of processing HyspIRI data. Algorithms such as MESMA have been demonstrated to achieve results but require careful, expert construction of endmember libraries. Unfortunately, many endmembers found by automated algorithms are deemed unsuitable by experts because they are not physically reasonable. At the same time, endmembers that are not physically reasonable can achieve very low errors between the linear mixing model with those endmembers and the original data. Therefore, this error is not a reasonable way to resolve the problem of "non-physical" endmembers. There are many potential approaches for resolving these issues, including using Bayesian priors, but very little attention has been given to this problem. The study reported on here considers a modification of the Sparsity Promoting Iterated Constrained Endmember (SPICE) algorithm. SPICE finds endmembers and abundances and estimates the number of endmembers. The SPICE algorithm seeks to minimize a quadratic objective function with respect to endmembers E and fractions P. The modified SPICE algorithm, which we refer to as SPICED, is obtained by adding the term D to the objective function. The term D pressures the algorithm to minimize the sum of the squared differences between each endmember and a weighted sum of the data. By appropriately modifying these weights, the endmembers are pushed towards a subset of the data, with the potential for becoming exactly equal to data points. The algorithm has been applied to spectral data and the differences between the endmembers produced by SPICE and SPICED were recorded. The results so far show that the endmembers found by SPICED are approximately 25% closer to the data, with indistinguishable reconstruction error compared to those found using SPICE.

  5. Influence maximization in complex networks through optimal percolation

    NASA Astrophysics Data System (ADS)

    Morone, Flaviano; Makse, Hernán A.

    2015-08-01

    The whole frame of interconnections in complex networks hinges on a specific set of structural nodes, much smaller than the total size, which, if activated, would cause the spread of information to the whole network, or, if immunized, would prevent the diffusion of a large scale epidemic. Localizing this optimal, that is, minimal, set of structural nodes, called influencers, is one of the most important problems in network science. Despite the vast use of heuristic strategies to identify influential spreaders, the problem remains unsolved. Here we map the problem onto optimal percolation in random networks to identify the minimal set of influencers, which arises by minimizing the energy of a many-body system, where the form of the interactions is fixed by the non-backtracking matrix of the network. Big data analyses reveal that the set of optimal influencers is much smaller than the one predicted by previous heuristic centralities. Remarkably, a large number of previously neglected weakly connected nodes emerges among the optimal influencers. These are topologically tagged as low-degree nodes surrounded by hierarchical coronas of hubs, and are uncovered only through the optimal collective interplay of all the influencers in the network. The present theoretical framework may hold a larger degree of universality, being applicable to other hard optimization problems exhibiting a continuous transition from a known phase.
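
    The scoring rule at the core of the resulting heuristic can be sketched compactly: score each node by its collective influence, the node's reduced degree times the sum of reduced degrees on the frontier of a ball of radius ℓ, and adaptively remove the top scorer. This is an illustrative reimplementation of that rule, not the authors' optimized code.

```python
import networkx as nx

def collective_influence(G, node, ell=2):
    """CI_ell(i) = (k_i - 1) * sum of (k_j - 1) over nodes at distance ell."""
    dist = nx.single_source_shortest_path_length(G, node, cutoff=ell)
    frontier = [v for v, d in dist.items() if d == ell]
    return (G.degree(node) - 1) * sum(G.degree(v) - 1 for v in frontier)

def top_influencers(G, k, ell=2):
    # adaptively remove the highest-CI node, recomputing scores each round
    H, chosen = G.copy(), []
    for _ in range(k):
        best = max(H.nodes, key=lambda v: collective_influence(H, v, ell))
        chosen.append(best)
        H.remove_node(best)
    return chosen

G = nx.erdos_renyi_graph(500, 0.01, seed=1)
print(top_influencers(G, 5))
```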

  6. Robust Group Sparse Beamforming for Multicast Green Cloud-RAN With Imperfect CSI

    NASA Astrophysics Data System (ADS)

    Shi, Yuanming; Zhang, Jun; Letaief, Khaled B.

    2015-09-01

    In this paper, we investigate the network power minimization problem for the multicast cloud radio access network (Cloud-RAN) with imperfect channel state information (CSI). The key observation is that network power minimization can be achieved by adaptively selecting active remote radio heads (RRHs) via controlling the group-sparsity structure of the beamforming vector. However, this yields a non-convex combinatorial optimization problem, for which we propose a three-stage robust group sparse beamforming algorithm. In the first stage, a quadratic variational formulation of the weighted mixed l1/l2-norm is proposed to induce the group-sparsity structure in the aggregated beamforming vector, which indicates those RRHs that can be switched off. A perturbed alternating optimization algorithm is then proposed to solve the resultant non-convex group-sparsity inducing optimization problem by exploiting its convex substructures. In the second stage, we propose a PhaseLift technique based algorithm to solve the feasibility problem with a given active RRH set, which helps determine the active RRHs. Finally, the semidefinite relaxation (SDR) technique is adopted to determine the robust multicast beamformers. Simulation results demonstrate the convergence of the perturbed alternating optimization algorithm, as well as the effectiveness of the proposed algorithm in minimizing the network power consumption of multicast Cloud-RAN.
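
    The group-sparsity mechanism behind the first stage can be illustrated with the proximal operator of a weighted mixed l1/l2 norm, which zeroes out entire coefficient groups at once (here, the beamforming coefficients of an RRH that can then be switched off). The grouping, weights, and data below are hypothetical.

```python
import numpy as np

def group_soft_threshold(x, groups, lam, weights=None):
    """Prox of lam * sum_g w_g * ||x_g||_2: shrink each group toward zero;
    a group whose norm falls below its threshold becomes exactly zero."""
    out = np.zeros_like(x)
    weights = np.ones(len(groups)) if weights is None else weights
    for g, w in zip(groups, weights):
        norm = np.linalg.norm(x[g])
        if norm > lam * w:
            out[g] = (1 - lam * w / norm) * x[g]
    return out

x = np.array([0.1, -0.2, 3.0, 2.5, 0.05, 0.0])
groups = [[0, 1], [2, 3], [4, 5]]   # coefficients grouped per RRH
print(group_soft_threshold(x, groups, lam=0.5))  # first and last groups vanish
```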

  7. Influence maximization in complex networks through optimal percolation.

    PubMed

    Morone, Flaviano; Makse, Hernán A

    2015-08-06

    The whole frame of interconnections in complex networks hinges on a specific set of structural nodes, much smaller than the total size, which, if activated, would cause the spread of information to the whole network, or, if immunized, would prevent the diffusion of a large scale epidemic. Localizing this optimal, that is, minimal, set of structural nodes, called influencers, is one of the most important problems in network science. Despite the vast use of heuristic strategies to identify influential spreaders, the problem remains unsolved. Here we map the problem onto optimal percolation in random networks to identify the minimal set of influencers, which arises by minimizing the energy of a many-body system, where the form of the interactions is fixed by the non-backtracking matrix of the network. Big data analyses reveal that the set of optimal influencers is much smaller than the one predicted by previous heuristic centralities. Remarkably, a large number of previously neglected weakly connected nodes emerges among the optimal influencers. These are topologically tagged as low-degree nodes surrounded by hierarchical coronas of hubs, and are uncovered only through the optimal collective interplay of all the influencers in the network. The present theoretical framework may hold a larger degree of universality, being applicable to other hard optimization problems exhibiting a continuous transition from a known phase.

  8. Process-driven inference of biological network structure: feasibility, minimality, and multiplicity

    NASA Astrophysics Data System (ADS)

    Zeng, Chen

    2012-02-01

    For a given dynamic process, identifying the putative interaction networks that achieve it is the inference problem. In this talk, we address the computational complexity of the inference problem in the context of Boolean networks under a dominant-inhibition condition. The first result is a proof that the feasibility problem (is there a network that explains the dynamics?) can be solved in polynomial time. Second, while the minimality problem (what is the smallest network that explains the dynamics?) is shown to be NP-hard, a simple polynomial-time heuristic is shown to produce near-minimal solutions, as demonstrated by simulation. Third, the theoretical framework also leads to a fast polynomial-time heuristic that estimates the number of network solutions with reasonable accuracy. We apply these approaches to two simplified Boolean network models for the cell cycle process of budding yeast (Li 2004) and fission yeast (Davidich 2008). Our results demonstrate that each of these networks contains a giant backbone motif spanning all the network nodes that provides the desired main functionality, while the remaining edges in the network form smaller motifs whose role is to confer stability properties rather than provide function. Moreover, we show that the bioprocesses of these two cell cycle models differ considerably from a typically generated process and are intrinsically cascade-like.

  9. The Controversial Classroom: Institutional Resources and Pedagogical Strategies for a Race Relations Course.

    ERIC Educational Resources Information Center

    Wahl, Ana-Maria; Perez, Eduardo T.; Deegan, Mary Jo; Sanchez, Thomas W.; Applegate, Cheryl

    2000-01-01

    Offers a model for a collective strategy that can be used to deal more effectively with problems associated with race relations courses. Presents a multidimensional analysis of the constraints that create problems for race relations instructors and highlights a multidimensional approach to minimizing these problems. Includes references. (CMK)

  10. Mechanism problems

    NASA Technical Reports Server (NTRS)

    Riedel, J. K.

    1972-01-01

    It is pointed out that too frequently during the design and development of mechanisms, problems occur that could have been avoided if the right question had been asked before, rather than after, the fact. Several typical problems, drawn from actual experience, are discussed and analyzed. The lessons learned are used to generate various suggestions for minimizing mistakes in mechanism design.

  11. Basics of Sterile Compounding: Manipulating Peptides and Proteins.

    PubMed

    Akers, Michael J

    2017-01-01

    Biopharmaceuticals contain primary and secondary structure, which offers few problems. It is the tertiary structure that causes problems, resulting in both physical and chemical stability issues. The thrust of this article is to share briefly what can be done to minimize these problems. Copyright © by International Journal of Pharmaceutical Compounding, Inc.

  12. Understanding persuasive attributes of sports betting advertisements: A conjoint analysis of selected elements.

    PubMed

    Hing, Nerilee; Vitartas, Peter; Lamont, Matthew

    2017-12-01

    Background and aims: Despite recent growth in sports betting advertising, minimal research has examined the influence of different advertising message attributes on betting attitudes and behaviors. This study aimed to identify which attributes of sports betting advertisements most engage attention, interest, desire and likelihood of betting among non-problem, low-risk, moderate-risk, and problem gamblers. Methods: A novel approach utilizing an experimental design incorporating conjoint analysis examined the effects of: three message formats (commentary, on-screen display, and studio crossover); four appeals (neutral, jovial, ease of placing the bet, and sense of urgency); three types of presenters (match presenter, sports betting operator, and attractive non-expert female presenter); and four bet types (traditional, exotic key event, risk-free, and micro-bet). A professional film company using paid actors produced 20 mock television advertisements simulating typical gambling messages based on the conjoint approach. These were embedded into an online survey of 611 Australian adults. Results: The most attention-grabbing attributes were type of presenter and type of bet. The attractive non-expert female presenter gained more attention from all gambler groups than other presenters. The type of bet was most persuasive in converting attention into likely betting among all gambler groups, with the risk-free bet being much more persuasive than other bet types. Problem gamblers were distinct by their greater attraction to in-play micro-bets. Discussion and conclusion: Given the potential for incentivized bets offering financial inducements and for in-play micro-bets to undermine harm minimization and consumer protection, regulators and wagering operators should reconsider whether these bet types are consistent with their responsible gambling objectives.

  13. New Polyazine-Bridged RuII,RhIII and RuII,RhI Supramolecular Photocatalysts for Water Reduction to Hydrogen Applicable for Solar Energy Conversion and Mechanistic Investigation of the Photocatalytic Cycle

    NASA Astrophysics Data System (ADS)

    Zhou, Rongwei

    Underwater gliders are robust and long endurance ocean sampling platforms that are increasingly being deployed in coastal regions. This new environment is characterized by shallow waters and significant currents that can challenge the mobility of these efficient (but traditionally slow moving) vehicles. This dissertation aims to improve the performance of shallow water underwater gliders through path planning. The path planning problem is formulated for a dynamic particle (or "kinematic car") model. The objective is to identify the path which satisfies specified boundary conditions and minimizes a particular cost. Several cost functions are considered. The problem is addressed using optimal control theory. The length scales of interest for path planning are within a few turn radii. First, an approach is developed for planning minimum-time paths, for a fixed speed glider, that are sub-optimal but are guaranteed to be feasible in the presence of unknown time-varying currents. Next the minimum-time problem for a glider with speed controls, that may vary between the stall speed and the maximum speed, is solved. Last, optimal paths that minimize change in depth (equivalently, maximize range) are investigated. Recognizing that path planning alone cannot overcome all of the challenges associated with significant currents and shallow waters, the design of a novel underwater glider with improved capabilities is explored. A glider with a pneumatic buoyancy engine (allowing large, rapid buoyancy changes) and a cylindrical moving mass mechanism (generating large pitch and roll moments) is designed, manufactured, and tested to demonstrate potential improvements in speed and maneuverability.

  14. Targeted Pressure Management During CO 2 Sequestration: Optimization of Well Placement and Brine Extraction

    DOE PAGES

    Cihan, Abdullah; Birkholzer, Jens; Bianchi, Marco

    2014-12-31

    Large-scale pressure increases resulting from carbon dioxide (CO2) injection in the subsurface can potentially impact caprock integrity, induce reactivation of critically stressed faults, and drive CO2 or brine through conductive features into shallow groundwater. Pressure management involving the extraction of native fluids from storage formations can be used to minimize pressure increases while maximizing CO2 storage. However, brine extraction requires pumping, transportation, possibly treatment, and disposal of substantial volumes of extracted brackish or saline water, all of which can be technically challenging and expensive. This paper describes a constrained differential evolution (CDE) algorithm for optimal well placement and injection/extraction control with the goal of minimizing brine extraction while achieving predefined pressure constraints. The CDE methodology was tested for a simple optimization problem whose solution can be partially obtained with a gradient-based optimization methodology. The CDE successfully estimated the true global optimum, for both extraction well location and extraction rate, needed for the test problem. A more complex example application of the developed strategy was also presented for a hypothetical CO2 storage scenario in a heterogeneous reservoir containing a critically stressed fault near an injection zone. Through the CDE optimization algorithm coupled to a numerical vertically-averaged reservoir model, we successfully estimated optimal rates and locations for CO2 injection and brine extraction wells while simultaneously satisfying multiple pressure buildup constraints to avoid fault activation and caprock fracturing. The study shows that the CDE methodology is a very promising tool for solving other optimization problems related to GCS as well, such as reducing the ‘Area of Review’, designing monitoring, reducing the risk of leakage, and increasing storage capacity and trapping.
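
    The flavor of the approach can be sketched with SciPy's stock differential evolution and a penalty for the pressure constraint. The "reservoir response" below is a deliberately crude stand-in for the vertically-averaged simulator coupled to the optimizer in the paper, and all parameters are hypothetical.

```python
import numpy as np
from scipy.optimize import differential_evolution

P_MAX = 1.0  # hypothetical pressure-buildup limit near the fault

def pressure_buildup(x):
    # stand-in reservoir model: buildup decays with distance between the
    # extraction well (x[0], x[1]) and a fault at (5, 5), and with rate x[2]
    d = np.hypot(x[0] - 5.0, x[1] - 5.0)
    return 2.0 / (1.0 + d) - 0.3 * x[2]

def objective(x):
    # minimize extracted brine volume, penalizing constraint violations
    penalty = 1e3 * max(0.0, pressure_buildup(x) - P_MAX) ** 2
    return x[2] + penalty

bounds = [(0, 10), (0, 10), (0, 5)]   # well x, well y, extraction rate
res = differential_evolution(objective, bounds, seed=1)
print(res.x, res.fun)
```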

  15. New scheduling rules for a dynamic flexible flow line problem with sequence-dependent setup times

    NASA Astrophysics Data System (ADS)

    Kia, Hamidreza; Ghodsypour, Seyed Hassan; Davoudpour, Hamid

    2017-09-01

    In the literature, multi-objective dynamic scheduling problems and simple priority rules are widely studied. Although simple priority rules are not efficient enough, owing to their simplicity and lack of general insight, composite dispatching rules perform very well because they result from experiments. In this paper, a dynamic flexible flow line problem with sequence-dependent setup times is studied. The objective of the problem is minimization of mean flow time and mean tardiness. A 0-1 mixed integer model of the problem is formulated. Since the problem is NP-hard, four new composite dispatching rules are proposed to solve it by applying a genetic programming framework and choosing proper operators. Furthermore, a discrete-event simulation model is built to examine the performance of the scheduling rules, considering the four new heuristic rules and six heuristic rules adapted from the literature. The experimental results clearly show that the composite dispatching rules formed by genetic programming perform better in minimizing mean flow time and mean tardiness than the others.
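
    A composite dispatching rule is simply a priority function blending several simple signals. The sketch below shows an illustrative hand-weighted rule combining processing time, due-date slack, and sequence-dependent setup time; the weights and job fields are hypothetical, and the paper's rules are evolved by genetic programming rather than fixed by hand.

```python
def composite_priority(job, now, w=(1.0, 0.5, 2.0)):
    """Higher value = schedule sooner. Blends shortest processing time,
    due-date slack, and sequence-dependent setup time."""
    w_pt, w_slack, w_setup = w
    slack = job["due"] - now - job["proc_time"]
    return -(w_pt * job["proc_time"] + w_slack * max(slack, 0)
             + w_setup * job["setup_time"])

jobs = [
    {"id": 1, "proc_time": 4, "due": 20, "setup_time": 1},
    {"id": 2, "proc_time": 2, "due": 10, "setup_time": 3},
    {"id": 3, "proc_time": 6, "due": 15, "setup_time": 0},
]
next_job = max(jobs, key=lambda j: composite_priority(j, now=5))
print(next_job["id"])
```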

  16. Perioperative hair removal: A review of best practice and a practice improvement opportunity.

    PubMed

    Spencer, Maureen; Barnden, Marsha; Johnson, Helen Boehm; Fauerbach, Loretta Litz; Graham, Denise; Edmiston, Charles E

    2018-06-01

    The current practice of perioperative hair removal reflects research-driven changes designed to minimize the risk of surgical wound infection. An aspect of the practice which has received less scrutiny is the clean-up of the clipped hair. This process is critical. The loose fibers represent a potential infection risk because of the micro-organisms they can carry, but their clean-up can pose a logistical problem because of the time required to remove them. Research has demonstrated that the most commonly employed means of clean-up, the use of adhesive tape or sticky mitts, can be both ineffective and time-consuming in addition to posing an infection risk from cross-contamination. Recently published research evaluating surgical clippers fitted with a vacuum-assisted hair collection device highlights the potential for significant practice improvement in the perioperative hair removal clean-up process. These improvements include not only further mitigation of potential infection risk but also substantial OR time and cost savings.

  17. Potential problems of detecting and treating psychosis in the White House.

    PubMed

    Gambill, J

    1980-01-01

    Numerous books and articles have described the emotional difficulties suffered by President Nixon and how they influenced functioning in the White House and other branches of government during his presidency. I am not able to ascertain whether Nixon was temporarily psychotic, but the reported emotional turmoil suggests he may have been at high risk for committing suicide or developing a psychosis. This article analyzes the reactions of numerous people to the questionably irrational behaviour of Richard Nixon. Examples of psychiatric risks in other Presidents, presidential candidates, and public figures are also discussed. The potential difficulties in detecting and treating severe psychiatric illness in Presidents and other public figures should not prevent us from taking action now to minimize future risks. It is recommended that future Presidents appoint a psychiatrist, at least on a part-time basis, as one of their personal physicians in order to increase Presidential access to psychiatric evaluation and treatment.

  18. On the Support of Minimizers of Causal Variational Principles

    NASA Astrophysics Data System (ADS)

    Finster, Felix; Schiefeneder, Daniela

    2013-11-01

    A class of causal variational principles on a compact manifold is introduced and analyzed both numerically and analytically. It is proved under general assumptions that the support of a minimizing measure is either completely timelike, or it is singular in the sense that its interior is empty. In the examples of the circle, the sphere and certain flag manifolds, the general results are supplemented by a more detailed and explicit analysis of the minimizers. On the sphere, we get a connection to packing problems and the Tammes distribution. Moreover, the minimal action is estimated from above and below.

  19. Optimized System Identification

    NASA Technical Reports Server (NTRS)

    Juang, Jer-Nan; Longman, Richard W.

    1999-01-01

    In system identification, one usually cares most about finding a model whose outputs are as close as possible to the true system outputs when the same input is applied to both. However, most system identification algorithms do not minimize this output error. Often they minimize model equation error instead, as in typical least-squares fits using a finite-difference model, and it is seen here that this distinction is significant. Here, we develop a set of system identification algorithms that minimize output error for multi-input/multi-output and multi-input/single-output systems. This is done with sequential quadratic programming iterations on the nonlinear least-squares problems, with an eigendecomposition to handle indefinite second partials. This optimization minimizes a nonlinear function of many variables, and hence can converge to local minima. To handle this problem, we start the iterations from the OKID (Observer/Kalman Identification) algorithm result. Not only has OKID proved very effective in practice, it minimizes an output error of an observer, which has the property that as the data set gets large, it converges to minimizing the criterion of interest here. Hence, it is a particularly good starting point for the nonlinear iterations. Examples show that the methods developed here eliminate the bias, often observed with other system identification methods, of either over-estimating or under-estimating the damping of vibration modes in lightly damped structures.
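
    The distinction between output error and equation error is easy to show on a toy first-order model: the output-error residual is built by simulating the model forward on its own past outputs rather than on the measured ones. The model, data, and starting point below are hypothetical, and the authors' method further uses SQP with an eigendecomposition and OKID initialization rather than a generic solver.

```python
import numpy as np
from scipy.optimize import least_squares

def simulate(theta, u):
    # first-order model y[k] = -a*y[k-1] + b*u[k-1], driven by its own
    # past *model* outputs -- this is what makes the residual an output error
    a, b = theta
    y = np.zeros(len(u))
    for k in range(1, len(u)):
        y[k] = -a * y[k - 1] + b * u[k - 1]
    return y

def output_error(theta, u, y_meas):
    return simulate(theta, u) - y_meas

rng = np.random.default_rng(0)
u = rng.standard_normal(200)
y_meas = simulate([-0.8, 1.0], u) + 0.05 * rng.standard_normal(200)
fit = least_squares(output_error, x0=[0.0, 0.5], args=(u, y_meas))
print(fit.x)  # close to the true parameters (-0.8, 1.0)
```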

  20. Setting objectives for managing Key deer

    USGS Publications Warehouse

    Diefenbach, Duane R.; Wagner, Tyler; Stauffer, Glenn E.

    2014-01-01

    The U.S. Fish and Wildlife Service (FWS) is responsible for the protection and management of Key deer (Odocoileus virginianus clavium) because the species is listed as Endangered under the Endangered Species Act (ESA). The purpose of the ESA is to protect and recover imperiled species and the ecosystems upon which they depend. There are a host of actions that could possibly be undertaken to recover the Key deer population, but without a clearly defined problem and stated objectives it can be difficult to compare and evaluate alternative actions. In addition, management goals and the acceptability of alternative management actions are inherently linked to stakeholders, who should be engaged throughout the process of developing a decision framework. The purpose of this project was to engage a representative group of stakeholders to develop a problem statement that captured the management problem the FWS must address with Key deer and identify objectives that, if met, would help solve the problem. In addition, the objectives were organized in a hierarchical manner (i.e., an objectives network) to show how they are linked, and measurable attributes were identified for each objective. We organized a group of people who represented stakeholders interested in and potentially affected by the management of Key deer. These stakeholders included individuals who represented local, state, and federal governments, non-governmental organizations, the general public, and local businesses. This stakeholder group met five full days over the course of an eight-week period to identify objectives that would address the following problem:“As recovery and removal from the Endangered Species list is the purpose of the Endangered Species Act, the U.S. Fish and Wildlife Service needs a management approach that will ensure a sustainable, viable, and healthy Key deer population. Urbanization has affected the behavior and population dynamics of the Key deer and the amount and characteristics of available habitat. The identified management approach must balance relevant social and economic concerns, Federal (e.g., Endangered Species Act, Wilderness Act, Refuge Act) and state regulations, and the conservation of biodiversity (e.g., Endangered/Threatened species, native habitat) in the Lower Keys.”The stakeholder group identified four fundamental objectives that are essential to addressing the problem: 1) Maximize a sustainable, viable, and healthy Key deer population, 2) Maximize value of Key deer to the People, 3) Minimize deer-related negative impacts to biodiversity, and 4) Minimize costs. In addition, the group identified 25 additional objectives that, if met, would help meet the fundamental objectives. The objectives network and measurable attributes identified by the stakeholder group can be used in the future to develop and evaluate potential management alternatives.

  1. A methodology for constraining power in finite element modeling of radiofrequency ablation.

    PubMed

    Jiang, Yansheng; Possebon, Ricardo; Mulier, Stefaan; Wang, Chong; Chen, Feng; Feng, Yuanbo; Xia, Qian; Liu, Yewei; Yin, Ting; Oyen, Raymond; Ni, Yicheng

    2017-07-01

    Radiofrequency ablation (RFA) is a minimally invasive thermal therapy for the treatment of cancer, hyperopia, and cardiac tachyarrhythmia. In RFA, the power delivered to the tissue is a key parameter. The objective of this study was to establish a methodology for the finite element modeling of RFA with constant power. Because of changes in the electric conductivity of tissue with temperature, a nonconventional boundary value problem arises in the mathematical modeling of RFA: neither the voltage (Dirichlet condition) nor the current (Neumann condition), but the power, that is, the product of voltage and current, is prescribed on part of the boundary. We solved the problem using a Lagrange multiplier: the product of the voltage and current on the electrode surface is constrained to be equal to the Joule heating. We theoretically proved the equality between the product of the voltage and current on the surface of the electrode and the Joule heating in the domain. We also proved the well-posedness of the problem of solving the Laplace equation for the electric potential under a constant power constraint prescribed on the electrode surface. The Pennes bioheat transfer equation and the Laplace equation for electric potential augmented with the constraint of constant power were solved simultaneously using the Newton-Raphson algorithm. Three validation problems were solved. Numerical results were compared either with an analytical solution deduced in this study or with results obtained by ANSYS or experiments. This work provides the finite element modeling of constant power RFA with a firm mathematical basis and opens a pathway toward achieving the optimal RFA power. Copyright © 2016 John Wiley & Sons, Ltd.
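
    The constant-power condition can be illustrated on a lumped, zero-dimensional analog of the finite element problem: find the voltage at which the Joule heating equals the prescribed power when the conductance itself depends on the temperature reached. All parameters are hypothetical, and the paper solves the full Pennes/Laplace system with a Lagrange multiplier rather than this scalar toy.

```python
# Lumped analog: steady temperature T = T0 + R_th * P_joule, and a
# temperature-dependent conductance G(T) = G0 * (1 + alpha * (T - T0)).
# Solve g(V) = V**2 * G(T(V)) - P_target = 0 by Newton-Raphson.
P_target = 50.0          # prescribed power (W), hypothetical
T0, R_th = 37.0, 0.5     # baseline temperature, thermal resistance
G0, alpha = 0.02, 0.015  # conductance and its temperature coefficient

def g(V):
    T = T0
    for _ in range(50):  # fixed-point iteration for the coupled V-T state
        T = T0 + R_th * V**2 * G0 * (1 + alpha * (T - T0))
    return V**2 * G0 * (1 + alpha * (T - T0)) - P_target

V, h = 40.0, 1e-6
for _ in range(30):
    V -= g(V) * h / (g(V + h) - g(V))  # Newton step with numerical derivative
print(V, g(V))  # voltage delivering the prescribed power
```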

  2. Decoding Problem Gamblers' Signals: A Decision Model for Casino Enterprises.

    PubMed

    Ifrim, Sandra

    2015-12-01

    The aim of the present study is to offer a validated decision model for casino enterprises. The model enables its users to perform early detection of problem gamblers and fulfill their ethical duty of social cost minimization. To this end, the interpretation of casino customers' nonverbal communication is understood as a signal-processing problem. Indicators of problem gambling recommended by Delfabbro et al. (Identifying problem gamblers in gambling venues: final report, 2007) are combined with the Viterbi algorithm into an interdisciplinary model that helps decode signals emitted by casino customers. Model output consists of a historical path of mental states and cumulated social costs associated with a particular client. Groups of problem and non-problem gamblers were simulated to investigate the model's diagnostic capability and its cost minimization ability. Each group consisted of 26 subjects and was subsequently enlarged to 100 subjects. In approximately 95% of the cases, mental states were correctly decoded for problem gamblers. Statistical analysis using planned contrasts revealed that the model is relatively robust to the suppression of signals performed by casino clientele facing gambling problems, as well as to misjudgments made by staff regarding the clients' mental states. Only if the last-mentioned source of error occurs in a very pronounced manner, i.e., if judgment is extremely faulty, might cumulated social costs be distorted.
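
    The decoding core of such a model is the standard Viterbi algorithm, which recovers the most likely sequence of hidden mental states from a series of observed indicator signals. The two states, observation alphabet, and probabilities below are hypothetical placeholders, not the calibrated values of the study.

```python
import numpy as np

def viterbi(obs, start_p, trans_p, emit_p):
    """Most likely hidden-state path for an observation sequence."""
    T, S = len(obs), trans_p.shape[0]
    V = np.zeros((T, S))                    # best log-probabilities per state
    back = np.zeros((T, S), dtype=int)      # backpointers
    V[0] = np.log(start_p) + np.log(emit_p[:, obs[0]])
    for t in range(1, T):
        for s in range(S):
            scores = V[t - 1] + np.log(trans_p[:, s])
            back[t, s] = np.argmax(scores)
            V[t, s] = scores[back[t, s]] + np.log(emit_p[s, obs[t]])
    path = [int(np.argmax(V[-1]))]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]

# states: 0 = non-problem, 1 = problem gambler; obs: 0 = neutral, 1 = indicator
start = np.array([0.9, 0.1])
trans = np.array([[0.95, 0.05], [0.10, 0.90]])
emit = np.array([[0.8, 0.2], [0.3, 0.7]])
print(viterbi([0, 1, 1, 0, 1], start, trans, emit))
```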

  3. Conformational Sampling of a Biomolecular Rugged Energy Landscape.

    PubMed

    Rydzewski, Jakub; Jakubowski, Rafal; Nicosia, Giuseppe; Nowak, Wieslaw

    2018-01-01

    Protein structure refinement using conformational sampling has long been important in protein studies. In this paper, we examined protein structure refinement by means of potential energy minimization, using immune computing as a method of sampling conformations. The method was tested on the x-ray structure and 30 decoys of the mutant of [Leu]Enkephalin, a paradigmatic example of the biomolecular multiple-minima problem. In order to score the refined conformations, we used a standard potential energy function with the OPLSAA force field. The effectiveness of the search was assessed using a variety of methods. The robustness of sampling was checked by the energy yield function, which quantitatively measures the number of the peptide decoys residing in an energetic funnel. Furthermore, the potential energy-dependent Pareto fronts were calculated to elucidate dissimilarities between peptide conformations and the native state as observed by x-ray crystallography. Our results showed that the probed potential energy landscape of [Leu]Enkephalin is self-similar on different metric scales and that the local potential energy minima of the peptide decoys are metastable; thus, they can be refined to conformations whose potential energy is decreased by approximately 250 kJ/mol.

  4. Complexes of a Zn-metalloenzyme binding site with hydroxamate-containing ligands. A case for detailed benchmarkings of polarizable molecular mechanics/dynamics potentials when the experimental binding structure is unknown.

    PubMed

    Gresh, Nohad; Perahia, David; de Courcy, Benoit; Foret, Johanna; Roux, Céline; El-Khoury, Lea; Piquemal, Jean-Philip; Salmon, Laurent

    2016-12-15

    Zn-metalloproteins are a major class of targets for drug design. They constitute a demanding testing ground for polarizable molecular mechanics/dynamics aimed at extending the realm of quantum chemistry (QC) to very long-duration molecular dynamics (MD). The reliability of such procedures needs to be demonstrated upon comparing the relative stabilities of competing candidate complexes of inhibitors with the recognition site stabilized in the course of MD. This could be necessary when no information is available regarding the experimental structure of the inhibitor-protein complex. Thus, this study bears on the phosphomannose isomerase (PMI) enzyme, considered as a potential therapeutic target for the treatment of several bacterial and parasitic diseases. We consider its complexes with 5-phospho-d-arabinonohydroxamate and three analog ligands differing by the number and location of their hydroxyl groups. We evaluate the energy accuracy expectable from a polarizable molecular mechanics procedure, SIBFA. This is done by comparisons with ab initio quantum-chemistry (QC) calculations in the following cases: (a) the complexes of the four ligands in three distinct structures extracted from the entire PMI-ligand energy-minimized structures, and totaling up to 264 atoms; (b) the solvation energies of several energy-minimized complexes of each ligand with a shell of 64 water molecules; (c) the conformational energy differences of each ligand in different conformations characterized in the course of energy-minimizations; and (d) the continuum solvation energies of the ligands in different conformations. The agreements with the QC results appear convincing. On these bases, we discuss the prospects of applying the procedure to ligand-macromolecule recognition problems. © 2016 Wiley Periodicals, Inc.

  5. Vehicle routing problem and capacitated vehicle routing problem frameworks in fund allocation problem

    NASA Astrophysics Data System (ADS)

    Mamat, Nur Jumaadzan Zaleha; Jaaman, Saiful Hafizah; Ahmad, Rokiah@Rozita

    2016-11-01

    Two new methods, adopted from methods commonly used in the field of transportation and logistics, are proposed to solve a specific investment allocation problem. Vehicle routing problem and capacitated vehicle routing problem frameworks are applied to optimize the fund allocation of a portfolio of investment assets. This is done by determining the sequence of the assets; as a result, total investment risk is minimized by this sequence.

  6. Girsanov's transformation based variance reduced Monte Carlo simulation schemes for reliability estimation in nonlinear stochastic dynamics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kanjilal, Oindrila, E-mail: oindrila@civil.iisc.ernet.in; Manohar, C.S., E-mail: manohar@civil.iisc.ernet.in

    The study considers the problem of simulation based time variant reliability analysis of nonlinear randomly excited dynamical systems. Attention is focused on importance sampling strategies based on the application of Girsanov's transformation method. Controls which minimize the distance function, as in the first order reliability method (FORM), are shown to minimize a bound on the sampling variance of the estimator for the probability of failure. Two schemes based on the application of calculus of variations for selecting control signals are proposed: the first obtains the control force as the solution of a two-point nonlinear boundary value problem, and the second explores the application of the Volterra series in characterizing the controls. The relative merits of these schemes, vis-à-vis the method based on ideas from the FORM, are discussed. Illustrative examples, involving archetypal single degree of freedom (dof) nonlinear oscillators, and a multi-degree of freedom nonlinear dynamical system, are presented. The credentials of the proposed procedures are established by comparing the solutions with pertinent results from direct Monte Carlo simulations. - Highlights: • The distance minimizing control forces minimize a bound on the sampling variance. • Establishing Girsanov controls via solution of a two-point boundary value problem. • Girsanov controls via Volterra's series representation for the transfer functions.

  7. Constrained Total Generalized p-Variation Minimization for Few-View X-Ray Computed Tomography Image Reconstruction

    PubMed Central

    Zhang, Hanming; Wang, Linyuan; Yan, Bin; Li, Lei; Cai, Ailong; Hu, Guoen

    2016-01-01

    Total generalized variation (TGV)-based computed tomography (CT) image reconstruction, which utilizes high-order image derivatives, is superior to total variation-based methods in terms of the preservation of edge information and the suppression of unfavorable staircase effects. However, conventional TGV regularization employs an l1-based form, which is not the most direct way to maximize the sparsity prior. In this study, we propose a total generalized p-variation (TGpV) regularization model to improve the sparsity exploitation of TGV and offer efficient solutions to few-view CT image reconstruction problems. To solve the nonconvex optimization problem of the TGpV minimization model, we then present an efficient iterative algorithm based on alternating minimization of the augmented Lagrangian function. All of the resulting subproblems decoupled by variable splitting admit explicit solutions obtained by applying the alternating minimization method and a generalized p-shrinkage mapping. In addition, approximate solutions that can be easily implemented and quickly calculated through the fast Fourier transform are derived using the proximal point method to reduce the cost of the inner subproblems. Simulated and real data are evaluated qualitatively and quantitatively to validate the efficiency and feasibility of the proposed method. Overall, the proposed method exhibits reasonable performance and outperforms the original TGV-based method when applied to few-view problems. PMID:26901410
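
    A self-contained sketch of a generalized p-shrinkage mapping of the kind applied to the decoupled subproblems; the form below follows Chartrand's p-shrinkage and reduces to soft thresholding at p = 1 (the surrounding augmented-Lagrangian loop is omitted):

    ```python
    import numpy as np

    def p_shrink(x, lam, p):
        # Generalized p-shrinkage: sign(x) * max(|x| - lam^(2-p) |x|^(p-1), 0).
        mag = np.abs(x)
        safe = np.where(mag > 0, mag, 1.0)   # avoid 0**(p-1); output is 0 there
        shrunk = np.maximum(mag - lam**(2.0 - p) * safe**(p - 1.0), 0.0)
        return np.sign(x) * np.where(mag > 0, shrunk, 0.0)

    x = np.array([-2.0, -0.3, 0.0, 0.5, 3.0])
    print(p_shrink(x, lam=0.5, p=1.0))   # classical soft threshold
    print(p_shrink(x, lam=0.5, p=0.5))   # sparser: small entries cut harder
    ```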

  8. A minimization method on the basis of embedding the feasible set and the epigraph

    NASA Astrophysics Data System (ADS)

    Zabotin, I. Ya; Shulgina, O. N.; Yarullin, R. S.

    2016-11-01

    We propose a conditional minimization method for convex nonsmooth functions that belongs to the class of cutting-plane methods. When iteration points are constructed, the feasible set and the epigraph of the objective function are approximated by polyhedral sets, so the auxiliary problems of constructing iteration points are linear programming problems. During the optimization process the sets approximating the epigraph can be updated; these updates are performed by periodically dropping the cutting planes that form the embedding sets. Convergence of the proposed method is proved, and some realizations of the method are discussed.
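
    Kelley's classical cutting-plane method is the simplest member of this family and illustrates the mechanics: each subgradient adds a cutting plane to a polyhedral model of the epigraph, and each iterate solves a small linear program. A toy sketch with an invented one-dimensional objective (without the plane-dropping refinement):

    ```python
    import numpy as np
    from scipy.optimize import linprog

    f = lambda x: (x - 1.0) ** 2          # convex objective (invented)
    g = lambda x: 2.0 * (x - 1.0)         # its (sub)gradient

    lo, hi = -2.0, 2.0
    cuts, x, fbest, xbest = [], lo, np.inf, lo
    for _ in range(25):
        fx, gx = f(x), g(x)
        if fx < fbest:
            fbest, xbest = fx, x
        cuts.append((gx, gx * x - fx))    # cut: g*x' - t <= g*x - f(x)
        A = [[gk, -1.0] for gk, _ in cuts]
        b = [rhs for _, rhs in cuts]
        res = linprog([0.0, 1.0], A_ub=A, b_ub=b,
                      bounds=[(lo, hi), (None, None)])
        x, t = res.x                      # minimize the polyhedral model
        if fbest - t < 1e-8:              # t lower-bounds the true minimum
            break
    print(xbest, fbest)                   # converges to the minimizer x = 1
    ```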

  9. Scattering Amplitudes, the AdS/CFT Correspondence, Minimal Surfaces, and Integrability

    DOE PAGES

    Alday, Luis F.

    2010-01-01

    We focus on the computation of scattering amplitudes of planar maximally supersymmetric Yang-Mills theory in four dimensions at strong coupling by means of the AdS/CFT correspondence, and in the first part of this paper we explain how the problem boils down to the computation of minimal surfaces in AdS. In the second part of this review we explain how integrability allows one to give a solution to the problem in terms of a set of integral equations. The intention of the review is to give a pedagogical, rather than very detailed, exposition.

  10. Propagation of Disturbances in Traffic Flow

    DOT National Transportation Integrated Search

    1977-09-01

    The system-optimized static traffic-assignment problem in a freeway corridor network is the problem of choosing a distribution of vehicles in the network to minimize average travel time. It is of interest to know how sensitive the optimal steady-stat...

  11. Putting Man in the Machine: Exploiting Expertise to Enhance Multiobjective Design of Water Supply Monitoring Network

    NASA Astrophysics Data System (ADS)

    Bode, F.; Nowak, W.; Reed, P. M.; Reuschen, S.

    2016-12-01

    Drinking-water well catchments need effective early-warning monitoring networks. Groundwater supply wells in complex urban environments are in close proximity to a myriad of potential industrial pollutant sources that could irreversibly damage their source aquifers. These urban environments pose fiscal and physical challenges to designing monitoring networks. Ideal early-warning monitoring networks would satisfy three objectives: detect (1) all potential contaminations within the catchment, (2) as early as possible before they reach the pumping wells, (3) while minimizing costs. Obviously, the ideal case is nonexistent, so we search for tradeoffs using multiobjective optimization. The challenge of this optimization problem is the high number of potential monitoring-well positions (the search space) and the nonlinearity of the underlying groundwater flow-and-transport problem. This study evaluates (1) different ways to restrict the search space effectively and efficiently, with and without expert knowledge, (2) different methods to represent the search space during the optimization, and (3) the influence of incremental increases in uncertainty in the system. Conductivity, regional flow direction, and potential source locations are explored as key uncertainties. We show the need for and the benefit of our methods by comparing optimized monitoring networks for different uncertainty levels with networks that seek to effectively exploit expert knowledge. The study's main contributions are the different approaches to restricting and representing the search space. The restriction algorithms are based on a point-wise comparison of decision elements of the search space. The representation of the search space can be either binary or continuous; for both cases, the search space must be adjusted properly. Our results show the benefits and drawbacks of binary versus continuous search-space representations and the high potential of automated search-space restriction algorithms for high-dimensional, highly nonlinear optimization problems.

  12. Analyzing Quadratic Unconstrained Binary Optimization Problems Via Multicommodity Flows

    PubMed Central

    Wang, Di; Kleinberg, Robert D.

    2009-01-01

    Quadratic Unconstrained Binary Optimization (QUBO) problems concern the minimization of quadratic polynomials in n {0, 1}-valued variables. These problems are NP-complete, but prior work has identified a sequence of polynomial-time computable lower bounds on the minimum value, denoted by C2, C3, C4,…. It is known that C2 can be computed by solving a maximum-flow problem, whereas the only previously known algorithms for computing Ck (k > 2) require solving a linear program. In this paper we prove that C3 can be computed by solving a maximum multicommodity flow problem in a graph constructed from the quadratic function. In addition to providing a lower bound on the minimum value of the quadratic function on {0, 1}n, this multicommodity flow problem also provides some information about the coordinates of the point where this minimum is achieved. By looking at the edges that are never saturated in any maximum multicommodity flow, we can identify relational persistencies: pairs of variables that must have the same or different values in any minimizing assignment. We furthermore show that all of these persistencies can be detected by solving single-commodity flow problems in the same network. PMID:20161596
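
    To make the definitions concrete, here is a brute-force check on a tiny invented QUBO instance: enumerate {0, 1}^n, find all minimizers, and read off persistencies directly (the flow-based bounds C2, C3 exist precisely to avoid this enumeration at scale):

    ```python
    import itertools
    import numpy as np

    Q = np.array([[ 1.0, -2.0,  0.5],
                  [ 0.0,  1.0, -1.5],
                  [ 0.0,  0.0,  0.5]])      # f(x) = x^T Q x over x in {0,1}^3

    vals = {x: float(np.array(x) @ Q @ np.array(x))
            for x in itertools.product((0, 1), repeat=3)}
    best = min(vals.values())
    minimizers = [x for x, v in vals.items() if abs(v - best) < 1e-12]

    # persistencies: pairs forced to be equal (or different) in every minimizer
    for i, j in itertools.combinations(range(3), 2):
        if all(x[i] == x[j] for x in minimizers):
            print(f"x{i} and x{j} are equal in every minimizer")
        elif all(x[i] != x[j] for x in minimizers):
            print(f"x{i} and x{j} differ in every minimizer")
    print(best, minimizers)
    ```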

  13. Analyzing Quadratic Unconstrained Binary Optimization Problems Via Multicommodity Flows.

    PubMed

    Wang, Di; Kleinberg, Robert D

    2009-11-28

    Quadratic Unconstrained Binary Optimization (QUBO) problems concern the minimization of quadratic polynomials in n {0, 1}-valued variables. These problems are NP-complete, but prior work has identified a sequence of polynomial-time computable lower bounds on the minimum value, denoted by C2, C3, C4,…. It is known that C2 can be computed by solving a maximum-flow problem, whereas the only previously known algorithms for computing Ck (k > 2) require solving a linear program. In this paper we prove that C3 can be computed by solving a maximum multicommodity flow problem in a graph constructed from the quadratic function. In addition to providing a lower bound on the minimum value of the quadratic function on {0, 1}^n, this multicommodity flow problem also provides some information about the coordinates of the point where this minimum is achieved. By looking at the edges that are never saturated in any maximum multicommodity flow, we can identify relational persistencies: pairs of variables that must have the same or different values in any minimizing assignment. We furthermore show that all of these persistencies can be detected by solving single-commodity flow problems in the same network.

  14. A Multiple Ant Colony Metahuristic for the Air Refueling Tanker Assignment Problem

    DTIC Science & Technology

    2002-03-01

    Problem: The tanker assignment problem can be modeled as a job shop scheduling problem (JSSP). The JSSP is made up of n jobs, composed of m ordered...points) to be processed on all the machines (tankers). The problem with using the JSSP is that the tanker assignment problem has multiple objectives... JSSP will minimize the time it takes for all jobs, but this may take an inordinate number of tankers. Thus using the JSSP alone is not necessarily a good

  15. Evanescent field: A potential light-tool for theranostics application

    NASA Astrophysics Data System (ADS)

    Polley, Nabarun; Singh, Soumendra; Giri, Anupam; Pal, Samir Kumar

    2014-03-01

    A noninvasive or minimally invasive optical approach for theranostics, which would reinforce diagnosis, treatment, and preferably guidance simultaneously, is considered a major challenge in biomedical instrument design. In the present work, we have developed an evanescent field-based fiber optic strategy for potential theranostics application in hyperbilirubinemia, an increased concentration of bilirubin in the blood that is a potential cause of permanent brain damage or even death in newborn babies. The potential problem of bilirubin deposition on the hydroxylated fiber surface at physiological pH (7.4), which masks the sensing efficacy and the extraction of information on the pigment level, has also been addressed. Removal of bilirubin in a blood-phantom (hemoglobin and human serum albumin) solution from an enhanced level of 77 μM/l (human jaundice >50 μM/l) to ˜30 μM/l (normal level ˜25 μM/l in humans) using our strategy has been successfully demonstrated. In a model experiment using chromatography paper as a mimic of a biological membrane, we have shown efficient degradation of the bilirubin under continuous monitoring for guidance of immediate/future courses of action.

  16. A conservative scheme of drift kinetic electrons for gyrokinetic simulation of kinetic-MHD processes in toroidal plasmas

    NASA Astrophysics Data System (ADS)

    Bao, J.; Liu, D.; Lin, Z.

    2017-10-01

    A conservative scheme of drift kinetic electrons for gyrokinetic simulations of kinetic-magnetohydrodynamic processes in toroidal plasmas has been formulated and verified. Both the vector potential and the electron perturbed distribution function are decomposed into an adiabatic part with an analytic solution and a non-adiabatic part solved numerically. The adiabatic parallel electric field is solved directly from the electron adiabatic response, resulting in a high degree of accuracy. The consistency between the electrostatic potential and the parallel vector potential is enforced by using the electron continuity equation. Since particles are only used to calculate the non-adiabatic response, which in turn yields the non-adiabatic vector potential through Ohm's law, the conservative scheme minimizes the electron particle noise and mitigates the cancellation problem. Linear dispersion relations of the kinetic Alfvén wave and the collisionless tearing mode in cylindrical geometry have been verified in gyrokinetic toroidal code simulations, which show that the perpendicular grid size can be larger than the electron collisionless skin depth when the mode wavelength is longer than the electron skin depth.

  17. Improving Hospital-wide Patient Scheduling Decisions by Clinical Pathway Mining.

    PubMed

    Gartner, Daniel; Arnolds, Ines V; Nickel, Stefan

    2015-01-01

    Recent research has highlighted the need for solving hospital-wide patient scheduling problems. In inpatient scheduling, patient activities have to be scheduled on scarce hospital resources such that temporal relations between activities (e.g., for recovery times) are respected. Common objectives include, among others, the minimization of the length of stay (LOS). In this paper, we consider a hospital-wide patient scheduling problem with LOS minimization based on uncertain clinical pathways. We approach the problem in three stages: first, we learn the most likely clinical pathways using a sequential pattern mining approach; second, we provide a mathematical model for patient scheduling; and finally, we combine the two approaches. In an experimental study carried out using real-world data, we show that our approach outperforms baseline approaches on two metrics.

  18. Sparse decomposition of seismic data and migration using Gaussian beams with nonzero initial curvature

    NASA Astrophysics Data System (ADS)

    Liu, Peng; Wang, Yanfei

    2018-04-01

    We study problems associated with seismic data decomposition and migration imaging. We first represent the seismic data utilizing Gaussian beam basis functions, which have nonzero curvature, and then consider the sparse decomposition technique. The sparse decomposition problem is an l0-norm constrained minimization problem. In solving the l0-norm minimization, a polynomial Radon transform is performed to achieve sparsity, and a fast gradient descent method is used to calculate the waveform functions. The waveform functions can subsequently be used for sparse Gaussian beam migration. Compared with traditional sparse Gaussian beam methods, the seismic data can be properly reconstructed employing fewer Gaussian beams with nonzero initial curvature. The migration approach described in this paper is more efficient than the traditional sparse Gaussian beam migration.
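
    The paper's solver combines a polynomial Radon transform with fast gradient descent; as a generic stand-in, an orthogonal-matching-pursuit-style greedy scheme shows what an l0-constrained decomposition over a dictionary of basis functions looks like (dictionary and data below are synthetic):

    ```python
    import numpy as np

    def greedy_l0(D, y, k):
        # Pick at most k columns (basis functions) of dictionary D to explain y.
        residual, support, coef = y.copy(), [], None
        for _ in range(k):
            j = int(np.argmax(np.abs(D.T @ residual)))   # best-correlated atom
            if j not in support:
                support.append(j)
            coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
            residual = y - D[:, support] @ coef          # re-fit on the support
        return support, coef, residual

    rng = np.random.default_rng(1)
    D = rng.normal(size=(100, 40)); D /= np.linalg.norm(D, axis=0)
    x_true = np.zeros(40); x_true[[3, 17]] = [2.0, -1.0]
    y = D @ x_true
    support, coef, r = greedy_l0(D, y, k=2)
    print(sorted(support), np.linalg.norm(r))   # typically recovers atoms 3, 17
    ```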

  19. Should preclinical typodonts be disinfected prior to grading?

    PubMed

    Aycock, Jeffrey E; Hill, Edward E

    2009-01-01

    This is a report of a unique finding in a preclinical laboratory that may be a potential dental school health hazard. Visual inspection (conducted in April 2008 by a preclinical crown and bridge course coordinator) of typodonts used by second-year students at the University of Mississippi School of Dentistry found that fourteen out of thirty-nine had black spots on the undersurface of the cheek shroud and/or plastic gingiva. The spots were cultured by the Medical Center's Department of Microbiology and described only as being mold/fungus typical of that which frequently grows in warm, moist, southern environments. Although indoor molds are common, about 5 percent of the general population will develop some type of mild allergic airway problem from molds over their lifetime. Mold on typodonts is unsightly, indicates failure of students to recognize the value of cleanliness in the dental environment, and may be a potential health hazard for some individuals. Cleaning and drying procedures for typodonts were implemented. The transfer of items between students and instructors during preclinical courses provides many opportunities for the spread of potentially harmful microorganisms/viruses. As a minimal level of personal protection, it is suggested that instructors wear disposable gloves and face masks and exercise hand washing between handling student instruments and typodonts. This problem has not been previously mentioned in the literature and merits further investigation/discussion.

  20. Newton Methods for Large Scale Problems in Machine Learning

    ERIC Educational Resources Information Center

    Hansen, Samantha Leigh

    2014-01-01

    The focus of this thesis is on practical ways of designing optimization algorithms for minimizing large-scale nonlinear functions with applications in machine learning. Chapter 1 introduces the overarching ideas in the thesis. Chapters 2 and 3 are geared towards supervised machine learning applications that involve minimizing a sum of loss…

  1. Higher Integrability for Minimizers of the Mumford-Shah Functional

    NASA Astrophysics Data System (ADS)

    De Philippis, Guido; Figalli, Alessio

    2014-08-01

    We prove higher integrability for the gradient of local minimizers of the Mumford-Shah energy functional, providing a positive answer to a conjecture of De Giorgi (Free discontinuity problems in calculus of variations. Frontiers in pure and applied mathematics, North-Holland, Amsterdam, pp 55-62, 1991).

  2. Understanding and Minimizing Staff Burnout. An Introductory Packet.

    ERIC Educational Resources Information Center

    California Univ., Los Angeles. Center for Mental Health Schools.

    Staff who bring a mental health perspective to the schools can deal with problems of staff burnout. This packet is designed to help in beginning the process of minimizing burnout, a process that requires reducing environmental stressors, increasing personal capabilities, and enhancing job supports. The packet opens with brief discussions of "What…

  3. Rule extraction from minimal neural networks for credit card screening.

    PubMed

    Setiono, Rudy; Baesens, Bart; Mues, Christophe

    2011-08-01

    While feedforward neural networks have been widely accepted as effective tools for solving classification problems, the issue of finding the best network architecture remains unresolved, particularly so in real-world problem settings. We address this issue in the context of credit card screening, where it is important to not only find a neural network with good predictive performance but also one that facilitates a clear explanation of how it produces its predictions. We show that minimal neural networks with as few as one hidden unit provide good predictive accuracy, while having the added advantage of making it easier to generate concise and comprehensible classification rules for the user. To further reduce model size, a novel approach is suggested in which network connections from the input units to this hidden unit are removed by a very straightforward pruning procedure. In terms of predictive accuracy, both the minimized neural networks and the rule sets generated from them are shown to compare favorably with other neural network based classifiers. The rules generated from the minimized neural networks are concise and thus easier to validate in a real-life setting.
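
    A toy illustration of magnitude-based pruning of the input-to-hidden connections; the paper's actual pruning criterion may differ, so this only sketches the general shape of the operation:

    ```python
    import numpy as np

    def prune_inputs(w, keep):
        # Zero out all but the `keep` largest input weights by absolute value.
        w = w.copy()
        cutoff = np.sort(np.abs(w))[-keep]
        w[np.abs(w) < cutoff] = 0.0
        return w

    w = np.array([0.02, -1.3, 0.6, -0.05, 0.9])  # weights into one hidden unit
    print(prune_inputs(w, keep=3))               # -> [0., -1.3, 0.6, 0., 0.9]
    ```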

  4. Nonlinear modeling of wave-topography interactions, shear instabilities and shear induced wave breaking using vortex method

    NASA Astrophysics Data System (ADS)

    Guha, Anirban

    2017-11-01

    Theoretical studies on linear shear instabilities as well as different kinds of wave interactions often use simple velocity and/or density profiles (e.g. constant, piecewise) for obtaining good qualitative and quantitative predictions of the initial disturbances. Moreover, such simple profiles provide a minimal model to obtain a mechanistic understanding of shear instabilities. Here we have extended this minimal paradigm into nonlinear domain using vortex method. Making use of unsteady Bernoulli's equation in presence of linear shear, and extending Birkhoff-Rott equation to multiple interfaces, we have numerically simulated the interaction between multiple fully nonlinear waves. This methodology is quite general, and has allowed us to simulate diverse problems that can be essentially reduced to the minimal system with interacting waves, e.g. spilling and plunging breakers, stratified shear instabilities (Holmboe, Taylor-Caulfield, stratified Rayleigh), jet flows, and even wave-topography interaction problem like Bragg resonance. We found that the minimal models capture key nonlinear features (e.g. wave breaking features like cusp formation and roll-ups) which are observed in experiments and/or extensive simulations with smooth, realistic profiles.

  5. Stabilization of a locally minimal forest

    NASA Astrophysics Data System (ADS)

    Ivanov, A. O.; Mel'nikova, A. E.; Tuzhilin, A. A.

    2014-03-01

    The method of partial stabilization of locally minimal networks, which was invented by Ivanov and Tuzhilin to construct examples of shortest trees with given topology, is developed. According to this method, boundary vertices of degree 2 are not added to all edges of the original locally minimal tree, but only to some of them. The problem of partial stabilization of locally minimal trees in a finite-dimensional Euclidean space is solved completely in the paper, that is, without any restrictions imposed on the number of edges remaining free of subdivision. A criterion for the realizability of such stabilization is established. In addition, the general problem of searching for the shortest forest connecting a finite family of boundary compact sets in an arbitrary metric space is formalized; it is shown that such forests exist for any family of compact sets if and only if for any finite subset of the ambient space there exists a shortest tree connecting it. The theory developed here allows us to establish further generalizations of the stabilization theorem both for arbitrary metric spaces and for metric spaces with some special properties. Bibliography: 10 titles.

  6. Instrumental variable methods in comparative safety and effectiveness research.

    PubMed

    Brookhart, M Alan; Rassen, Jeremy A; Schneeweiss, Sebastian

    2010-06-01

    Instrumental variable (IV) methods have been proposed as a potential approach to the common problem of uncontrolled confounding in comparative studies of medical interventions, but IV methods are unfamiliar to many researchers. The goal of this article is to provide a non-technical, practical introduction to IV methods for comparative safety and effectiveness research. We outline the principles and basic assumptions necessary for valid IV estimation, discuss how to interpret the results of an IV study, provide a review of instruments that have been used in comparative effectiveness research, and suggest some minimal reporting standards for an IV analysis. Finally, we offer our perspective on the role of IV estimation vis-à-vis more traditional approaches based on statistical modeling of the exposure or outcome. We anticipate that IV methods will often be underpowered for drug safety studies of very rare outcomes, but may be potentially useful in studies of intended effects where uncontrolled confounding may be substantial.
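
    For a binary instrument, the simplest IV estimate is the Wald ratio. A simulated sketch (all coefficients invented) showing how it recovers a causal effect that the naive regression misses under confounding:

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    n = 50_000
    u = rng.normal(size=n)                       # unobserved confounder
    z = rng.binomial(1, 0.5, n)                  # instrument: affects exposure only
    x = 0.8 * z + u + rng.normal(size=n)         # exposure, confounded by u
    y = 1.5 * x + 2.0 * u + rng.normal(size=n)   # outcome; true effect = 1.5

    naive = np.polyfit(x, y, 1)[0]               # OLS slope, biased by confounding
    wald = (y[z == 1].mean() - y[z == 0].mean()) / \
           (x[z == 1].mean() - x[z == 0].mean())
    print(naive, wald)   # naive ~ 2.4 (inflated); the Wald/IV ratio ~ 1.5
    ```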

  7. Large-Scale Brain Simulation and Disorders of Consciousness. Mapping Technical and Conceptual Issues.

    PubMed

    Farisco, Michele; Kotaleski, Jeanette H; Evers, Kathinka

    2018-01-01

    Modeling and simulations have gained a leading position in contemporary attempts to describe, explain, and quantitatively predict the human brain's operations. Computer models are highly sophisticated tools developed to achieve an integrated knowledge of the brain with the aim of overcoming the actual fragmentation resulting from different neuroscientific approaches. In this paper we investigate the plausibility of simulation technologies for emulation of consciousness and the potential clinical impact of large-scale brain simulation on the assessment and care of disorders of consciousness (DOCs), e.g., Coma, Vegetative State/Unresponsive Wakefulness Syndrome, Minimally Conscious State. Notwithstanding their technical limitations, we suggest that simulation technologies may offer new solutions to old practical problems, particularly in clinical contexts. We take DOCs as an illustrative case, arguing that the simulation of neural correlates of consciousness is potentially useful for improving treatments of patients with DOCs.

  8. EEG-guided meditation: A personalized approach.

    PubMed

    Fingelkurts, Andrew A; Fingelkurts, Alexander A; Kallio-Tamminen, Tarja

    2015-12-01

    The therapeutic potential of meditation for physical and mental well-being is well documented; however, the possibility of adverse effects warrants further discussion of the suitability of any particular meditation practice for every given participant. This concern highlights the need for a personalized approach in which the meditation practice is adjusted for a concrete individual. This can be done by using an objective screening procedure that detects the weak and strong cognitive skills in brain function, thus helping design a tailored meditation training protocol. Quantitative electroencephalogram (qEEG) is a suitable tool that allows identification of individual neurophysiological types. Using qEEG screening can aid in developing a meditation training program that maximizes results and minimizes the risk of potential negative effects. This brief theoretical-conceptual review provides a discussion of the problem and presents some illustrative results on the use of qEEG screening for the guidance of meditation personalization. Copyright © 2015 Elsevier Ltd. All rights reserved.

  9. Outcome modelling strategies in epidemiology: traditional methods and basic alternatives

    PubMed Central

    Greenland, Sander; Daniel, Rhian; Pearce, Neil

    2016-01-01

    Controlling for too many potential confounders can lead to or aggravate problems of data sparsity or multicollinearity, particularly when the number of covariates is large in relation to the study size. As a result, methods to reduce the number of modelled covariates are often deployed. We review several traditional modelling strategies, including stepwise regression and the ‘change-in-estimate’ (CIE) approach to deciding which potential confounders to include in an outcome-regression model for estimating effects of a targeted exposure. We discuss their shortcomings, and then provide some basic alternatives and refinements that do not require special macros or programming. Throughout, we assume the main goal is to derive the most accurate effect estimates obtainable from the data and commercial software. Allowing that most users must stay within standard software packages, this goal can be roughly approximated using basic methods to assess, and thereby minimize, mean squared error (MSE). PMID:27097747

  10. Large-Scale Brain Simulation and Disorders of Consciousness. Mapping Technical and Conceptual Issues

    PubMed Central

    Farisco, Michele; Kotaleski, Jeanette H.; Evers, Kathinka

    2018-01-01

    Modeling and simulations have gained a leading position in contemporary attempts to describe, explain, and quantitatively predict the human brain’s operations. Computer models are highly sophisticated tools developed to achieve an integrated knowledge of the brain with the aim of overcoming the actual fragmentation resulting from different neuroscientific approaches. In this paper we investigate the plausibility of simulation technologies for emulation of consciousness and the potential clinical impact of large-scale brain simulation on the assessment and care of disorders of consciousness (DOCs), e.g., Coma, Vegetative State/Unresponsive Wakefulness Syndrome, Minimally Conscious State. Notwithstanding their technical limitations, we suggest that simulation technologies may offer new solutions to old practical problems, particularly in clinical contexts. We take DOCs as an illustrative case, arguing that the simulation of neural correlates of consciousness is potentially useful for improving treatments of patients with DOCs. PMID:29740372

  11. Novel approaches for road congestion minimization.

    DOT National Transportation Integrated Search

    2012-07-01

    Transportation planning usually aims to solve two problems: the traffic assignment and the toll pricing problems. The latter utilizes information from the first one, in order to find the optimal set of tolls that is the set of tolls that lea...

  12. Minimization of transmission cost in decentralized control systems

    NASA Technical Reports Server (NTRS)

    Wang, S.-H.; Davison, E. J.

    1978-01-01

    This paper considers the problem of stabilizing a linear time-invariant multivariable system by using local feedback controllers and some limited information exchange among local stations. The problem of achieving a given degree of stability with minimum transmission cost is solved.

  13. A Dynamic Process Model for Optimizing the Hospital Environment Cash-Flow

    NASA Astrophysics Data System (ADS)

    Pater, Flavius; Rosu, Serban

    2011-09-01

    This article presents a new approach to some fundamental techniques for solving dynamic programming problems with the use of functional equations. We analyze the problem of minimizing the cost of treatment in a hospital environment. Mathematical modeling of this process leads to an optimal control problem with a finite horizon.
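
    A toy finite-horizon version of the underlying functional equation V_t(s) = min_a [c(s, a) + V_{t+1}(s')], solved by backward induction; the states, actions, costs, and transitions below are invented for illustration:

    ```python
    import numpy as np

    n_states, n_actions, T = 4, 2, 5
    rng = np.random.default_rng(3)
    cost = rng.uniform(1, 10, size=(n_states, n_actions))       # c(s, a)
    nxt = rng.integers(0, n_states, size=(n_states, n_actions)) # deterministic s'

    V = np.zeros(n_states)                     # terminal value V_T = 0
    policy = np.zeros((T, n_states), dtype=int)
    for t in reversed(range(T)):
        Q = cost + V[nxt]                      # c(s, a) + V_{t+1}(next state)
        policy[t] = np.argmin(Q, axis=1)       # cost-minimizing action per state
        V = Q[np.arange(n_states), policy[t]]
    print(V)   # minimal total cost from each starting state over the horizon
    ```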

  14. Noise Problems Associated with Ground Operations of Jet Aircraft

    NASA Technical Reports Server (NTRS)

    Hubbard, Harvey H.

    1959-01-01

    The noise-exposure problem for humans and the aircraft-structural-damage problem are each discussed briefly. Some discussion is directed toward available methods of minimizing the effects of noise on ground crews, on the aircraft structure, and on the surrounding community. A bibliography of available papers relating to noise-reduction devices is also included.

  15. Dynamic Restructuring Of Problems In Artificial Intelligence

    NASA Technical Reports Server (NTRS)

    Schwuttke, Ursula M.

    1992-01-01

    "Dynamic tradeoff evaluation" (DTE) denotes proposed method and procedure for restructuring problem-solving strategies in artificial intelligence to satisfy need for timely responses to changing conditions. Detects situations in which optimal problem-solving strategies cannot be pursued because of real-time constraints, and effects tradeoffs among nonoptimal strategies in such way to minimize adverse effects upon performance of system.

  16. Efficiency of unconstrained minimization techniques in nonlinear analysis

    NASA Technical Reports Server (NTRS)

    Kamat, M. P.; Knight, N. F., Jr.

    1978-01-01

    Unconstrained minimization algorithms have been critically evaluated for their effectiveness in solving structural problems involving geometric and material nonlinearities. The algorithms have been categorized as being zeroth, first, or second order depending upon the highest derivative of the function required by the algorithm. The sensitivity of these algorithms to the accuracy of derivatives clearly suggests using analytically derived gradients instead of finite difference approximations. The use of analytic gradients results in better control of the number of minimizations required for convergence to the exact solution.
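
    The effect is easy to reproduce with a standard quasi-Newton solver: supplying the analytic gradient of the Rosenbrock test function needs far fewer function evaluations than letting the solver approximate the gradient by finite differences.

    ```python
    import numpy as np
    from scipy.optimize import minimize, rosen, rosen_der

    x0 = np.full(10, -1.0)
    # Analytic gradient: fewer function evaluations, tighter convergence.
    res_analytic = minimize(rosen, x0, jac=rosen_der, method='BFGS')
    # Finite differences: jac omitted, so BFGS approximates it numerically.
    res_fd = minimize(rosen, x0, method='BFGS')
    print(res_analytic.nfev, res_fd.nfev)   # the FD run needs many more evals
    ```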

  17. A Multiobjective Approach Applied to the Protein Structure Prediction Problem

    DTIC Science & Technology

    2002-03-07

    like a low energy search landscape. 2.1.1 Symbolic/Formalized Problem Domain Description: Every computer representable problem can also be embodied...method [60]. 3.4 Energy Minimization Methods: The energy landscape algorithms are based on the idea that a protein's final resting conformation is...in our GA used to search the PSP problem energy landscape). 3.5.1 Simple GA: The main routine in a sGA, after encoding the problem, builds a

  18. Radar signal categorization using a neural network

    NASA Technical Reports Server (NTRS)

    Anderson, James A.; Gately, Michael T.; Penz, P. Andrew; Collins, Dean R.

    1991-01-01

    Neural networks were used to analyze a complex simulated radar environment which contains noisy radar pulses generated by many different emitters. The neural network used is an energy minimizing network (the BSB model) which forms energy minima - attractors in the network dynamical system - based on learned input data. The system first determines how many emitters are present (the deinterleaving problem). Pulses from individual simulated emitters give rise to separate stable attractors in the network. Once individual emitters are characterized, it is possible to make tentative identifications of them based on their observed parameters. As a test of this idea, a neural network was used to form a small data base that potentially could make emitter identifications.

  19. Heat transfer comparison of nanofluid filled transformer and traditional oil-immersed transformer

    NASA Astrophysics Data System (ADS)

    Zhang, Yunpeng; Ho, Siu-lau; Fu, Weinong

    2018-05-01

    Dispersing nanoparticles with high thermal conductivity into transformer oil is an innovative approach to improving the thermal performance of traditional oil-immersed transformers. This mixture, also known as a nanofluid, has shown its potential for practical application through experimental measurements. This paper presents comparisons of a nanofluid-filled transformer and a traditional oil-immersed transformer in terms of their computational fluid dynamics (CFD) solutions from the perspective of optimal design. The thermal performance of transformers with the same parameters except for the coolant is compared. A further comparison of heat transfer is then made after minimizing the oil volume and maximum temperature rise of these two transformers. An adaptive multi-objective optimization method is employed to tackle this optimization problem.

  20. Children's food store, restaurant, and home food environments and their relationship with body mass index: a pilot study.

    PubMed

    Holsten, Joanna E; Compher, Charlene W

    2012-01-01

    This pilot research assessed the feasibility and utility of a study designed to examine the relationship between children's BMI and food store, restaurant, and home food environments. Home visits were conducted with sixth-grade children (N = 12). BMI z-scores were calculated with weight and height measurements. Nutrition Environment Measures Surveys evaluated children's food environments. The study protocol involved a feasible time duration, minimal missing data for primary variables, and participant satisfaction. Potential design problems included the homogeneous store environments and low restaurant exposure of the sample recruited from one school, and the adequacy of a single cross-sectional measure of the home environment.

  1. A dynamical weak scale from inflation

    NASA Astrophysics Data System (ADS)

    You, Tevong

    2017-09-01

    Dynamical scanning of the Higgs mass by an axion-like particle during inflation may provide a cosmological component to explaining part of the hierarchy problem. We propose a novel interplay of this cosmological relaxation mechanism with inflation, whereby the backreaction of the Higgs vacuum expectation value near the weak scale causes inflation to end. As Hubble drops, the relaxion's dissipative friction increases relative to Hubble and slows it down enough to be trapped by the barriers of its periodic potential. Such a scenario raises the natural cut-off of the theory up to ~ 10^10 GeV, while maintaining a minimal relaxion sector without having to introduce additional scanning scalars or new physics coincidentally close to the weak scale.

  2. Stochastic optimal control as non-equilibrium statistical mechanics: calculus of variations over density and current

    NASA Astrophysics Data System (ADS)

    Chernyak, Vladimir Y.; Chertkov, Michael; Bierkens, Joris; Kappen, Hilbert J.

    2014-01-01

    In stochastic optimal control (SOC) one minimizes the average cost-to-go, that consists of the cost-of-control (amount of efforts), cost-of-space (where one wants the system to be) and the target cost (where one wants the system to arrive), for a system participating in forced and controlled Langevin dynamics. We extend the SOC problem by introducing an additional cost-of-dynamics, characterized by a vector potential. We propose derivation of the generalized gauge-invariant Hamilton-Jacobi-Bellman equation as a variation over density and current, suggest hydrodynamic interpretation and discuss examples, e.g., ergodic control of a particle-within-a-circle, illustrating non-equilibrium space-time complexity.

  3. Analysis of labor employment assessment on production machine to minimize time production

    NASA Astrophysics Data System (ADS)

    Hernawati, Tri; Suliawati; Sari Gumay, Vita

    2018-03-01

    Every company, whether in services or manufacturing, always tries to improve the efficiency of its resource use. One resource with an important role is labor, and workers have different efficiency levels for different jobs. Problems concerning the optimal allocation of workers with different efficiency levels to different jobs are called assignment problems, a special case of linear programming. In this research, an analysis of labor assignment on production machines to minimize production time at PT PDM is carried out using the Hungarian algorithm. The aim of the research is to obtain the optimal assignment of workers to production machines that minimizes production time. The results showed that the existing assignment is not optimal, because its completion time is longer than that of the assignment obtained with the Hungarian algorithm; applying the Hungarian algorithm yields a time saving of 16%.
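
    A minimal sketch of the Hungarian-algorithm step via scipy; the worker-machine time matrix below is invented, since PT PDM's data are not reproduced in the abstract:

    ```python
    import numpy as np
    from scipy.optimize import linear_sum_assignment

    # Rows are workers, columns are production machines; entries are
    # completion times in hours (illustrative numbers only).
    time = np.array([[9, 2, 7, 8],
                     [6, 4, 3, 7],
                     [5, 8, 1, 8],
                     [7, 6, 9, 4]])
    rows, cols = linear_sum_assignment(time)   # Hungarian-algorithm solution
    print(list(zip(rows, cols)), time[rows, cols].sum())  # optimal total: 13
    ```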

  4. Multivariable frequency domain identification via 2-norm minimization

    NASA Technical Reports Server (NTRS)

    Bayard, David S.

    1992-01-01

    The author develops a computational approach to multivariable frequency domain identification, based on 2-norm minimization. In particular, a Gauss-Newton (GN) iteration is developed to minimize the 2-norm of the error between frequency domain data and a matrix fraction transfer function estimate. To improve the global performance of the optimization algorithm, the GN iteration is initialized using the solution to a particular sequentially reweighted least squares problem, denoted as the SK iteration. The least squares problems which arise from both the SK and GN iterations are shown to involve sparse matrices with identical block structure. A sparse matrix QR factorization method is developed to exploit the special block structure, and to efficiently compute the least squares solution. A numerical example involving the identification of a multiple-input multiple-output (MIMO) plant having 286 unknown parameters is given to illustrate the effectiveness of the algorithm.
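
    The core Gauss-Newton step, shown here on a small invented exponential-fitting problem rather than the paper's matrix-fraction transfer-function estimation: linearize the residual at the current iterate and take the least-squares step.

    ```python
    import numpy as np

    # Fit y ~ a * exp(b * t) in the 2-norm by Gauss-Newton.
    t = np.linspace(0, 1, 50)
    rng = np.random.default_rng(4)
    y = 2.0 * np.exp(-1.5 * t) + 0.01 * rng.normal(size=t.size)

    theta = np.array([1.0, 0.0])                       # initial guess (a, b)
    for _ in range(20):
        a, b = theta
        model = a * np.exp(b * t)
        r = y - model                                  # residual vector
        J = np.column_stack([np.exp(b * t),            # d(model)/da
                             a * t * np.exp(b * t)])   # d(model)/db
        step, *_ = np.linalg.lstsq(J, r, rcond=None)   # solve J step ~ r
        theta = theta + step
    print(theta)   # ~ [2.0, -1.5]
    ```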

  5. Iterative Potts and Blake–Zisserman minimization for the recovery of functions with discontinuities from indirect measurements

    PubMed Central

    Weinmann, Andreas; Storath, Martin

    2015-01-01

    Signals with discontinuities appear in many problems in the applied sciences, ranging from mechanics and electrical engineering to biology and medicine. The concrete data acquired are typically discrete, indirect and noisy measurements of some quantities describing the signal under consideration. The task is to restore the signal and, in particular, the discontinuities. In this respect, classical methods perform rather poorly, whereas non-convex non-smooth variational methods seem to be the correct choice. Examples are methods based on Mumford–Shah and piecewise constant Mumford–Shah functionals and discretized versions which are known as Blake–Zisserman and Potts functionals. Owing to their non-convexity, minimization of such functionals is challenging. In this paper, we propose a new iterative minimization strategy for Blake–Zisserman as well as Potts functionals and a related jump-sparsity problem dealing with indirect, noisy measurements. We provide a convergence analysis and underpin our findings with numerical experiments. PMID:27547074
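
    For intuition, the 1-D discrete Potts problem with direct measurements (penalize the number of jumps plus the squared data misfit) can be solved exactly by dynamic programming; a compact sketch on synthetic data (the paper treats the much harder indirect-measurement case):

    ```python
    import numpy as np

    def potts_1d(f, gamma):
        # Minimize gamma * (#jumps) + sum_i (u_i - f_i)^2, u piecewise constant.
        n = len(f)
        c1 = np.concatenate([[0.0], np.cumsum(f)])       # prefix sums
        c2 = np.concatenate([[0.0], np.cumsum(f**2)])
        def eps(l, r):  # squared error of the best constant on f[l..r]
            s, q, m = c1[r+1] - c1[l], c2[r+1] - c2[l], r - l + 1
            return q - s * s / m
        B = np.empty(n + 1); B[0] = -gamma               # B[r]: best for f[0..r-1]
        jump = np.zeros(n + 1, dtype=int)
        for r in range(n):
            cands = [B[l] + gamma + eps(l, r) for l in range(r + 1)]
            jump[r + 1] = int(np.argmin(cands))          # left end of last segment
            B[r + 1] = cands[jump[r + 1]]
        u, r = np.empty(n), n                            # backtrack the segments
        while r > 0:
            l = jump[r]
            u[l:r] = f[l:r].mean()
            r = l
        return u

    rng = np.random.default_rng(5)
    f = np.concatenate([np.ones(30), 3 * np.ones(30)]) + 0.2 * rng.normal(size=60)
    print(np.unique(np.round(potts_1d(f, gamma=1.0), 2)))  # plateaus near 1 and 3
    ```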

  6. Planning Paths Through Singularities in the Center of Mass Space

    NASA Technical Reports Server (NTRS)

    Doggett, William R.; Messner, William C.; Juang, Jer-Nan

    1998-01-01

    The center of mass space is a convenient space for planning motions that minimize reaction forces at the robot's base or optimize the stability of a mechanism. A unique problem associated with path planning in the center of mass space is the potential existence of multiple center of mass images for a single Cartesian obstacle, since a single center of mass location can correspond to multiple robot joint configurations. The existence of multiple images results in a need to either maintain multiple center of mass obstacle maps or to update obstacle locations when the robot passes through a singularity, such as when it moves from an elbow-up to an elbow-down configuration. To illustrate the concepts presented in this paper, a path is planned for an example task requiring motion through multiple center of mass space maps. The object of the path planning algorithm is to locate the bang-bang acceleration profile that minimizes the robot's base reactions in the presence of a single Cartesian obstacle. To simplify the presentation, only non-redundant robots are considered and joint non-linearities are neglected.

  7. A Hamiltonian approach for the Thermodynamics of AdS black holes

    NASA Astrophysics Data System (ADS)

    Baldiotti, M. C.; Fresneda, R.; Molina, C.

    2017-07-01

    In this work we study the Thermodynamics of D-dimensional Schwarzschild-anti de Sitter (SAdS) black holes. The minimal Thermodynamics of the SAdS spacetime is briefly discussed, highlighting some of its strong points and shortcomings. The minimal SAdS Thermodynamics is extended within a Hamiltonian approach, by means of the introduction of an additional degree of freedom. We demonstrate that the cosmological constant can be introduced in the thermodynamic description of the SAdS black hole with a canonical transformation of the Schwarzschild problem, closely related to the introduction of an anti-de Sitter thermodynamic volume. The treatment presented is consistent, in the sense that it is compatible with the introduction of new thermodynamic potentials, and respects the laws of black hole Thermodynamics. By demanding homogeneity of the thermodynamic variables, we are able to construct a new equation of state that completely characterizes the Thermodynamics of SAdS black holes. The treatment naturally generates phenomenological constants that can be associated with different boundary conditions in underlying microscopic theories. A whole new set of phenomena can be expected from the proposed generalization of SAdS Thermodynamics.

  8. Why are dreams interesting for philosophers? The example of minimal phenomenal selfhood, plus an agenda for future research1

    PubMed Central

    Metzinger, Thomas

    2013-01-01

    This metatheoretical paper develops a list of new research targets by exploring particularly promising interdisciplinary contact points between empirical dream research and philosophy of mind. The central example is the MPS-problem. It is constituted by the epistemic goal of conceptually isolating and empirically grounding the phenomenal property of “minimal phenomenal selfhood,” which refers to the simplest form of self-consciousness. In order to precisely describe MPS, one must focus on those conditions that are not only causally enabling, but strictly necessary to bring it into existence. This contribution argues that research on bodiless dreams, asomatic out-of-body experiences, and full-body illusions has the potential to make decisive future contributions. Further items on the proposed list of novel research targets include differentiating the concept of a “first-person perspective” on the subcognitive level; investigating relevant phenomenological and neurofunctional commonalities between mind-wandering and dreaming; comparing the functional depth of embodiment across dream and wake states; and demonstrating that the conceptual consequences of cognitive corruption and systematic rationality deficits in the dream state are much more serious for philosophical epistemology (and, perhaps, the methodology of dream research itself) than commonly assumed. The paper closes by specifying a list of potentially innovative research goals that could serve to establish a stronger connection between dream research and philosophy of mind. PMID:24198793

  9. The historical bases of the Rayleigh and Ritz methods

    NASA Astrophysics Data System (ADS)

    Leissa, A. W.

    2005-11-01

    Rayleigh's classical book Theory of Sound was first published in 1877. In it are many examples of calculating fundamental natural frequencies of free vibration of continuum systems (strings, bars, beams, membranes, plates) by assuming the mode shape, and setting the maximum values of potential and kinetic energy in a cycle of motion equal to each other. This procedure is well known as "Rayleigh's Method." In 1908, Ritz laid out his famous method for determining frequencies and mode shapes, choosing multiple admissible displacement functions, and minimizing a functional involving both potential and kinetic energies. He then demonstrated it in detail in 1909 for the completely free square plate. In 1911, Rayleigh wrote a paper congratulating Ritz on his work, but stating that he himself had used Ritz's method in many places in his book and in another publication. Subsequently, hundreds of research articles and many books have appeared which use the method, some calling it the "Ritz method" and others the "Rayleigh-Ritz method." The present article examines the method in detail, as Ritz presented it, and as Rayleigh claimed to have used it. It concludes that, although Rayleigh did solve a few problems which involved minimization of a frequency, these solutions were not by the straightforward, direct method presented by Ritz and used subsequently by others. Therefore, Rayleigh's name should not be attached to the method.
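
    A numerical rendition of Rayleigh's method for a simply supported uniform beam, assuming the trial shape w(x) = x(L − x): equating the maximum potential and kinetic energies in a cycle gives an upper bound on the fundamental frequency.

    ```python
    import numpy as np

    L = EI = rhoA = 1.0                 # nondimensional beam properties
    x = np.linspace(0.0, L, 10_001)
    dx = x[1] - x[0]
    w = x * (L - x)                     # assumed shape; w(0) = w(L) = 0
    wxx = np.full_like(x, -2.0)         # exact second derivative of the shape
    num = EI * np.sum(wxx**2) * dx      # ~ 2 * max potential (strain) energy
    den = rhoA * np.sum(w**2) * dx      # ~ 2 * max kinetic energy / omega^2
    omega = np.sqrt(num / den)
    print(omega, np.pi**2)  # ~ 10.95 vs exact 9.87: an upper bound, as expected
    ```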

  10. Distributed Power Allocation for Wireless Sensor Network Localization: A Potential Game Approach.

    PubMed

    Ke, Mingxing; Li, Ding; Tian, Shiwei; Zhang, Yuli; Tong, Kaixiang; Xu, Yuhua

    2018-05-08

    The problem of distributed power allocation in wireless sensor network (WSN) localization systems is investigated in this paper using a game-theoretic approach. Existing research focuses on minimizing the localization errors of individual agent nodes over all anchor nodes subject to power budgets. When the service area and the distribution of target nodes are considered, finding the optimal trade-off between localization accuracy and power consumption is a new critical task. To cope with this issue, we propose a power allocation game in which each anchor node minimizes the squared position error bound (SPEB) of the service area penalized by its individual power. Meanwhile, it is proven that the power allocation game is an exact potential game with at least one pure Nash equilibrium (NE). In addition, we also prove the existence of an ϵ-equilibrium point, which is a refinement of NE, and show that the better-response dynamics can reach the end solution. Analytical and simulation results demonstrate that: (i) when prior distribution information is available, the proposed strategies have better localization accuracy than the uniform strategies; (ii) when prior distribution information is unknown, the performance of the proposed strategies outperforms power management strategies based on the second-order cone program (SOCP) for particular agent nodes after obtaining the estimated distribution of agent nodes. In addition, the proposed strategies provide an instructive trade-off between power consumption and localization accuracy.
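
    The convergence argument is easy to see in code: in an exact potential game every unilateral improvement strictly decreases the potential, so better-response dynamics must stop at a pure NE. A toy two-player sketch with an invented potential table (in the paper, each anchor's cost is the SPEB plus a power penalty):

    ```python
    import numpy as np

    rng = np.random.default_rng(6)
    phi = rng.uniform(0, 1, size=(4, 4))    # exact potential over joint actions

    a = [0, 0]                              # current action profile
    improved = True
    while improved:
        improved = False
        for i in (0, 1):                    # each player (anchor node) in turn
            costs = phi[:, a[1]] if i == 0 else phi[a[0], :]
            best = int(np.argmin(costs))
            if costs[best] < costs[a[i]] - 1e-12:
                a[i] = best                 # unilateral improving move
                improved = True
    print(a, phi[a[0], a[1]])               # a pure NE: a local minimum of phi
    ```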

  11. Why are dreams interesting for philosophers? The example of minimal phenomenal selfhood, plus an agenda for future research.

    PubMed

    Metzinger, Thomas

    2013-01-01

    This metatheoretical paper develops a list of new research targets by exploring particularly promising interdisciplinary contact points between empirical dream research and philosophy of mind. The central example is the MPS-problem. It is constituted by the epistemic goal of conceptually isolating and empirically grounding the phenomenal property of "minimal phenomenal selfhood," which refers to the simplest form of self-consciousness. In order to precisely describe MPS, one must focus on those conditions that are not only causally enabling, but strictly necessary to bring it into existence. This contribution argues that research on bodiless dreams, asomatic out-of-body experiences, and full-body illusions has the potential to make decisive future contributions. Further items on the proposed list of novel research targets include differentiating the concept of a "first-person perspective" on the subcognitive level; investigating relevant phenomenological and neurofunctional commonalities between mind-wandering and dreaming; comparing the functional depth of embodiment across dream and wake states; and demonstrating that the conceptual consequences of cognitive corruption and systematic rationality deficits in the dream state are much more serious for philosophical epistemology (and, perhaps, the methodology of dream research itself) than commonly assumed. The paper closes by specifying a list of potentially innovative research goals that could serve to establish a stronger connection between dream research and philosophy of mind.

  12. DEVELOPMENT OF A PORTABLE SOFTWARE LANGUAGE FOR PHYSIOLOGICALLY-BASED PHARMACOKINETIC (PBPK) MODELS

    EPA Science Inventory

    The PBPK modeling community has had a long-standing problem with modeling software compatibility. The numerous software packages used for PBPK models are, at best, minimally compatible. This creates problems ranging from model obsolescence due to software support discontinuation...

  13. The intrinsic antimicrobial activity of citric acid-coated manganese ferrite nanoparticles is enhanced after conjugation with the antifungal peptide Cm-p5

    PubMed Central

    Lopez-Abarrategui, Carlos; Figueroa-Espi, Viviana; Lugo-Alvarez, Maria B; Pereira, Caroline D; Garay, Hilda; Barbosa, João ARG; Falcão, Rosana; Jiménez-Hernández, Linnavel; Estévez-Hernández, Osvaldo; Reguera, Edilso; Franco, Octavio L; Dias, Simoni C; Otero-Gonzalez, Anselmo J

    2016-01-01

    Diseases caused by bacterial and fungal pathogens are among the major health problems in the world. Newer antimicrobial therapies based on novel molecules urgently need to be developed, and this includes the antimicrobial peptides. In spite of the potential of antimicrobial peptides, very few of them were able to be successfully developed into therapeutics. The major problems they present are molecule stability, toxicity in host cells, and production costs. A novel strategy to overcome these obstacles is conjugation to nanomaterial preparations. The antimicrobial activity of different types of nanoparticles has been previously demonstrated. Specifically, magnetic nanoparticles have been widely studied in biomedicine due to their physicochemical properties. The citric acid-modified manganese ferrite nanoparticles used in this study were characterized by high-resolution transmission electron microscopy, which confirmed the formation of nanocrystals of approximately 5 nm diameter. These nanoparticles were able to inhibit Candida albicans growth in vitro. The minimal inhibitory concentration was 250 µg/mL. However, the nanoparticles were not capable of inhibiting Gram-negative bacteria (Escherichia coli) or Gram-positive bacteria (Staphylococcus aureus). Finally, an antifungal peptide (Cm-p5) from the sea animal Cenchritis muricatus (Gastropoda: Littorinidae) was conjugated to the modified manganese ferrite nanoparticles. The antifungal activity of the conjugated nanoparticles was higher than their bulk counterparts, showing a minimal inhibitory concentration of 100 µg/mL. This conjugate proved to be nontoxic to a macrophage cell line at concentrations that showed antimicrobial activity. PMID:27563243

  14. Optimal analytic method for the nonlinear Hasegawa-Mima equation

    NASA Astrophysics Data System (ADS)

    Baxter, Mathew; Van Gorder, Robert A.; Vajravelu, Kuppalapalle

    2014-05-01

    The Hasegawa-Mima equation is a nonlinear partial differential equation that describes the electric potential due to a drift wave in a plasma. In the present paper, we apply the method of homotopy analysis to a slightly more general Hasegawa-Mima equation, which accounts for hyper-viscous damping or viscous dissipation. First, we outline the method for the general initial/boundary value problem over a compact rectangular spatial domain. We use a two-stage method, where both the convergence control parameter and the auxiliary linear operator are optimally selected to minimize the residual error due to the approximation. To do the latter, we consider a family of operators parameterized by a constant which gives the decay rate of the solutions. After outlining the general method, we consider a number of concrete examples in order to demonstrate the utility of this approach. The results enable us to study properties of the initial/boundary value problem for the generalized Hasegawa-Mima equation. In several cases considered, we are able to obtain solutions with extremely small residual errors after relatively few iterations are computed (residual errors on the order of 10-15 are found in multiple cases after only three iterations). The results demonstrate that selecting a parameterized auxiliary linear operator can be extremely useful for minimizing residual errors when used concurrently with the optimal homotopy analysis method, suggesting that this approach can prove useful for a number of nonlinear partial differential equations arising in physics and nonlinear mechanics.

  15. The Unintended Consequences of Social Media in Healthcare: New Problems and New Solutions

    PubMed Central

    Atique, S.; Mayer, M. A.; Denecke, K.; Merolli, M.; Househ, M.

    2016-01-01

    Objectives: Social media is increasingly being used in conjunction with health information technology (health IT). The objective of this paper is to identify some of the undesirable outcomes that arise from this integration and to suggest solutions to these problems. Methodology: After a discussion with experts to elicit the topics that should be included in the survey, we performed a narrative review based on recent literature and interviewed multidisciplinary experts from different areas. In each case, we identified and analyzed the unintended effects of social media in health IT. Results: Each analyzed topic provided a different set of unintended consequences. The most relevant consequences include lack of privacy with ethical and legal issues, patient confusion in disease management, poor information accuracy in crowdsourcing, unclear responsibilities, misleading and biased information in the prevention and detection of epidemics, and demotivation in gamified health solutions with social components. Conclusions: Using social media in healthcare offers several benefits, but it is not exempt from potential problems, and not all of these problems have clear solutions. We recommend careful design of digital systems in order to minimize patients' feelings of demotivation and frustration, and we recommend following specific guidelines that should be created by all stakeholders in the healthcare ecosystem. PMID:27830230

  16. DOE Office of Scientific and Technical Information (OSTI.GOV)

    MANI,SEETHAMBAL S.; FLEMING,JAMES G.; WALRAVEN,JEREMY A.

    Two major problems associated with Si-based MEMS (MicroElectroMechanical Systems) devices are stiction and wear. Surface modifications are needed to reduce both adhesion and friction in micromechanical structures to solve these problems. In this paper, the authors present a CVD (Chemical Vapor Deposition) process that selectively coats MEMS devices with tungsten and significantly enhances device durability. Tungsten CVD is used in the integrated-circuit industry, which makes this approach manufacturable. This selective deposition process results in a very conformal coating and can potentially address both stiction and wear problems confronting MEMS processing. The selective deposition of tungsten is accomplished through the silicon reduction of WF6. The self-limiting nature of the process ensures consistent process control. The tungsten is deposited after the removal of the sacrificial oxides to minimize stress and process integration problems. The tungsten coating adheres well and is hard and conducting, which enhances performance for numerous devices. Furthermore, since the deposited tungsten infiltrates under adhered silicon parts and the volume of W deposited is less than the amount of Si consumed, it appears to be possible to release adhered parts that are contacted over small areas such as dimples. The wear resistance of tungsten coated parts has been shown to be significantly improved by microengine test structures.

  17. SART-Type Half-Threshold Filtering Approach for CT Reconstruction

    PubMed Central

    Yu, Hengyong; Wang, Ge

    2014-01-01

    The ℓ1 regularization problem has been widely used to solve sparsity-constrained problems. To enhance the sparsity constraint for better imaging performance, a promising direction is to use the ℓp norm (0 < p < 1) and solve the ℓp minimization problem. Very recently, Xu et al. developed an analytic solution for ℓ1∕2 regularization via an iterative thresholding operation, which is also referred to as half-threshold filtering. In this paper, we design a simultaneous algebraic reconstruction technique (SART)-type half-threshold filtering framework to solve the computed tomography (CT) reconstruction problem. In the medical imaging field, the discrete gradient transform (DGT) is widely used to define sparsity. However, the DGT is noninvertible and cannot be applied to half-threshold filtering for CT reconstruction. To demonstrate the utility of the proposed SART-type half-threshold filtering framework, an emphasis of this paper is to construct a pseudoinverse transform for the DGT. The proposed algorithms are evaluated with numerical and physical phantom data sets. Our results show that the SART-type half-threshold filtering algorithms have great potential to improve the reconstructed image quality from few and noisy projections. They are complementary to the state-of-the-art soft-threshold and hard-threshold filtering counterparts. PMID:25530928
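
    The half-thresholding operator at the core of this framework has a closed form in Xu et al.'s ℓ1∕2 theory. Below is a minimal Python sketch of that operator, under our reading of the formula, together with the plain gradient-plus-threshold loop it is normally embedded in; the SART-specific weighting and the paper's DGT pseudoinverse are omitted, and A, b, lam, iters are illustrative placeholders, not the authors' code.

```python
# Half-threshold (l_{1/2}) filtering: the closed-form thresholding operator
# of Xu et al. (2012), plus a basic iterative scheme built on it.
# Sketch only -- not the paper's SART-type variant.
import numpy as np

def half_threshold(x, lam):
    """Componentwise half-thresholding H_{lam,1/2}."""
    x = np.asarray(x, dtype=float)
    out = np.zeros_like(x)
    thresh = (54.0 ** (1.0 / 3.0) / 4.0) * lam ** (2.0 / 3.0)
    big = np.abs(x) > thresh              # entries that survive thresholding
    phi = np.arccos((lam / 8.0) * (np.abs(x[big]) / 3.0) ** -1.5)
    out[big] = (2.0 / 3.0) * x[big] * (1.0 + np.cos(2.0 * np.pi / 3.0 - 2.0 * phi / 3.0))
    return out

def iht_half(A, b, lam, iters=300):
    """Minimize ||Ax - b||^2 + lam * ||x||_{1/2}^{1/2} by iterative half-thresholding."""
    mu = 0.99 / np.linalg.norm(A, 2) ** 2  # step size below 1 / ||A||_2^2
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = half_threshold(x - mu * A.T @ (A @ x - b), lam * mu)
    return x

# The operator in isolation: small entries are zeroed, large ones shrunk.
v = np.array([0.05, 0.3, -1.0, 2.0])
print(half_threshold(v, lam=0.5))         # first two entries fall below the threshold
```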

  18. A proposal to improve e-waste collection efficiency in urban mining: Container loading and vehicle routing problems - A case study of Poland.

    PubMed

    Nowakowski, Piotr

    2017-02-01

    Waste electrical and electronic equipment (WEEE), also known as e-waste, is one of the most important waste streams with high recycling potential. Materials used in these products are valuable, but some of them are hazardous. The urban mining approach attempts to recycle as many materials as possible, so efficiency in collection is vital. There are two main methods used to collect WEEE: stationary and mobile, each with different variants. The responsibility of WEEE organizations and waste collection companies is to ensure all resources required for these activities - bins, containers, collection vehicles and staff - are available, taking into account cost minimization. Therefore, it is necessary to correctly determine the capacity of containers and the number of collection vehicles for an area where WEEE is to be collected. There are two main problems encountered in collection, storage and transportation of WEEE: container loading problems and vehicle routing problems. In this study, an adaptation of these two models for packing and collecting WEEE is proposed, along with a practical implementation plan designed to be useful for collection companies' guidelines for container loading and route optimization. The solutions are presented through case studies of real-world conditions for WEEE collection companies in Poland. Copyright © 2016 Elsevier Ltd. All rights reserved.
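
    As a purely illustrative companion to the container-loading side of the problem (not the author's model, which also handles item geometry and routing), here is a first-fit-decreasing sketch in Python that estimates how many containers a set of appliance volumes requires; all volumes and the capacity are invented numbers.

```python
# First-fit-decreasing heuristic for a one-dimensional container-loading
# abstraction of WEEE collection: pack appliance volumes (in liters) into
# identical containers. Integer liters keep the arithmetic exact.
def pack_containers(volumes, capacity):
    free = []                              # remaining free volume per open container
    where = [None] * len(volumes)          # container index assigned to each item
    order = sorted(range(len(volumes)), key=lambda i: -volumes[i])
    for i in order:
        v = volumes[i]
        for j, room in enumerate(free):
            if v <= room:                  # first open container with enough room
                free[j] -= v
                where[i] = j
                break
        else:                              # no container fits: open a new one
            free.append(capacity - v)
            where[i] = len(free) - 1
    return len(free), where

# Example: mixed large/small appliances into 2500 L (2.5 m^3) containers.
volumes = [900, 400, 1200, 300, 800, 1100, 200, 600]
n, where = pack_containers(volumes, capacity=2500)
print(n, where)                            # 3 containers for 5500 L of e-waste
```

    First-fit-decreasing is a classic bin-packing heuristic: it never uses more than roughly 11/9 of the optimal number of containers, which is usually adequate for sizing a collection fleet before route optimization.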

  19. SART-Type Half-Threshold Filtering Approach for CT Reconstruction.

    PubMed

    Yu, Hengyong; Wang, Ge

    2014-01-01

    The ℓ1 regularization problem has been widely used to solve sparsity-constrained problems. To enhance the sparsity constraint for better imaging performance, a promising direction is to use the ℓp norm (0 < p < 1) and solve the ℓp minimization problem. Very recently, Xu et al. developed an analytic solution for ℓ1∕2 regularization via an iterative thresholding operation, which is also referred to as half-threshold filtering. In this paper, we design a simultaneous algebraic reconstruction technique (SART)-type half-threshold filtering framework to solve the computed tomography (CT) reconstruction problem. In the medical imaging field, the discrete gradient transform (DGT) is widely used to define sparsity. However, the DGT is noninvertible and cannot be applied to half-threshold filtering for CT reconstruction. To demonstrate the utility of the proposed SART-type half-threshold filtering framework, an emphasis of this paper is to construct a pseudoinverse transform for the DGT. The proposed algorithms are evaluated with numerical and physical phantom data sets. Our results show that the SART-type half-threshold filtering algorithms have great potential to improve the reconstructed image quality from few and noisy projections. They are complementary to the state-of-the-art soft-threshold and hard-threshold filtering counterparts.

  20. Gambling and the Health of the Public: Adopting a Public Health Perspective.

    PubMed

    Korn, David A.; Shaffer, Howard J.

    1999-01-01

    During the last decade there has been an unprecedented expansion of legalized gambling throughout North America. Three primary forces appear to be motivating this growth: (1) the desire of governments to identify new sources of revenue without invoking new or higher taxes; (2) tourism entrepreneurs developing new destinations for entertainment and leisure; and (3) the rise of new technologies and forms of gambling (e.g., video lottery terminals, powerball mega-lotteries, and computer offshore gambling). Associated with this phenomenon, there has been an increase in the prevalence of problem and pathological gambling among the general adult population, as well as a sustained high level of gambling-related problems among youth. To date there has been little dialogue within the public health sector in particular, or among health care practitioners in general, about the potential health impact of gambling or gambling-related problems. This article encourages the adoption of a public health perspective towards gambling. More specifically, this discussion has four primary objectives: (1) create awareness among health professionals about gambling, its rapid expansion and its relationship with the health care system; (2) place gambling within a public health framework by examining it from several perspectives, including population health, human ecology and addictive behaviors; (3) outline the major public health issues about how gambling can affect individuals, families and communities; and (4) propose an agenda for strengthening policy, prevention and treatment practices through greater public health involvement, using the framework of The Ottawa Charter for Health Promotion as a guide. By understanding gambling and its potential impacts on the public's health, policy makers and health practitioners can minimize gambling's negative impacts and appreciate its potential benefits.
