Sample records for variation minimization problems

  1. L∞ Variational Problems with Running Costs and Constraints

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aronsson, G., E-mail: gunnar.aronsson@liu.se; Barron, E. N., E-mail: enbarron@math.luc.edu

    2012-02-15

    Various approaches are used to derive the Aronsson-Euler equations for L∞ calculus of variations problems with constraints. The problems considered involve holonomic, nonholonomic, isoperimetric, and isosupremic constraints on the minimizer. In addition, we derive the Aronsson-Euler equation for the basic L∞ problem with a running cost and then consider properties of an absolute minimizer. Many open problems are introduced for further study.

  2. Minimizing the Sum of Completion Times with Resource Dependent Times

    NASA Astrophysics Data System (ADS)

    Yedidsion, Liron; Shabtay, Dvir; Kaspi, Moshe

    2008-10-01

    We extend the classical problem of minimizing the sum of completion times to the case where the processing times are controllable by allocating a nonrenewable resource. The quality of a solution is measured by two criteria: the first is the sum of completion times and the second is the total weighted resource consumption. We consider four different problem variations for treating the two criteria. We prove that this problem is NP-hard for three of the four variations, even if all resource consumption weights are equal. However, somewhat surprisingly, the variation that minimizes the integrated objective function is solvable in polynomial time. Although the sum of completion times is arguably the most important scheduling criterion, the complexity of this problem was, until this paper, an open question for three of the four variations. The results of this research have various applications, including efficient battery usage on mobile devices such as mobile computers, phones, and GPS devices, in order to prolong their battery life.
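
    For orientation, a minimal sketch (not from the paper) of the classical fixed-processing-time baseline: with no resource allocation, the unweighted sum of completion times on a single machine is minimized by the shortest-processing-time (SPT) order. The job data are hypothetical.

    ```python
    # Minimal sketch: the SPT order minimizes the (unweighted) sum of
    # completion times on one machine when processing times are fixed.
    # It does NOT model the resource-dependent times or the bicriteria
    # objectives studied in the paper.
    def sum_of_completion_times(processing_times):
        total, clock = 0, 0
        for p in sorted(processing_times):   # SPT rule
            clock += p                       # completion time of this job
            total += clock
        return total

    print(sum_of_completion_times([3, 1, 2]))   # 1 + 3 + 6 = 10
    ```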

  3. Minimization principles for the coupled problem of Darcy-Biot-type fluid transport in porous media linked to phase field modeling of fracture

    NASA Astrophysics Data System (ADS)

    Miehe, Christian; Mauthe, Steffen; Teichtmeister, Stephan

    2015-09-01

    This work develops new minimization and saddle point principles for the coupled problem of Darcy-Biot-type fluid transport in porous media at fracture. It shows that the quasi-static problem of elastically deforming, fluid-saturated porous media is related to a minimization principle for the evolution problem. This two-field principle determines the rate of deformation and the fluid mass flux vector. It provides a canonically compact model structure, where the stress equilibrium and the inverse Darcy's law appear as the Euler equations of a variational statement. A Legendre transformation of the dissipation potential relates the minimization principle to a characteristic three field saddle point principle, whose Euler equations determine the evolutions of deformation and fluid content as well as Darcy's law. A further geometric assumption results in modified variational principles for a simplified theory, where the fluid content is linked to the volumetric deformation. The existence of these variational principles underlines inherent symmetries of Darcy-Biot theories of porous media. This can be exploited in the numerical implementation by the construction of time- and space-discrete variational principles, which fully determine the update problems of typical time stepping schemes. Here, the proposed minimization principle for the coupled problem is advantageous with regard to a new unconstrained stable finite element design, while space discretizations of the saddle point principles are constrained by the LBB condition. The variational principles developed provide the most fundamental approach to the discretization of nonlinear fluid-structure interactions, showing symmetric systems in algebraic update procedures. They also provide an excellent starting point for extensions towards more complex problems. This is demonstrated by developing a minimization principle for a phase field description of fracture in fluid-saturated porous media. It is designed for an incorporation of alternative crack driving forces, such as a convenient criterion in terms of the effective stress. The proposed setting provides a modeling framework for the analysis of complex problems such as hydraulic fracture. This is demonstrated by a spectrum of model simulations.

  4. One-dimensional Gromov minimal filling problem

    NASA Astrophysics Data System (ADS)

    Ivanov, Alexandr O.; Tuzhilin, Alexey A.

    2012-05-01

    The paper is devoted to a new branch in the theory of one-dimensional variational problems with branching extremals, the investigation of one-dimensional minimal fillings introduced by the authors. On the one hand, this problem is a one-dimensional version of a generalization of Gromov's minimal fillings problem to the case of stratified manifolds. On the other hand, this problem is interesting in itself and also can be considered as a generalization of another classical problem, the Steiner problem on the construction of a shortest network connecting a given set of terminals. Besides the statement of the problem, we discuss several properties of the minimal fillings and state several conjectures. Bibliography: 38 titles.

  5. Graph cuts for curvature based image denoising.

    PubMed

    Bae, Egil; Shi, Juan; Tai, Xue-Cheng

    2011-05-01

    Minimization of total variation (TV) is a well-known method for image denoising. Recently, the relationship between TV minimization problems and binary MRF models has been explored extensively, resulting in some very efficient combinatorial optimization algorithms for the TV minimization problem in the discrete setting via graph cuts. To overcome limitations of the relatively simple TV model, such as staircasing effects, variational models based upon higher order derivatives have been proposed. Euler's elastica model is one such higher order model of central importance, which minimizes the curvature of all level lines in the image. Traditional numerical methods for minimizing the energy in such higher order models are complicated and computationally expensive. In this paper, we present an efficient minimization algorithm based upon graph cuts for minimizing the energy in Euler's elastica model, by reducing the problem to a sequence of easily graph-representable problems. This sequence has connections to the gradient flow of the energy function and converges to a minimum point. Numerical experiments show that the new approach maintains smooth regions while preserving sharp features better than TV models.
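
    For context, a minimal sketch of the simpler first-order baseline the paper improves on: smoothed gradient descent on the ROF/TV energy. This is not the authors' graph-cut algorithm, and the parameter values are illustrative.

    ```python
    import numpy as np

    # Gradient descent on a smoothed ROF/TV energy
    #   E(u) = 0.5*||u - f||^2 + lam * sum sqrt(|grad u|^2 + eps^2),
    # a simple baseline rather than the paper's graph-cut construction.
    def tv_denoise(f, lam=0.1, tau=0.2, eps=1e-3, iters=200):
        u = np.asarray(f, dtype=float).copy()
        for _ in range(iters):
            ux = np.roll(u, -1, axis=1) - u            # forward differences
            uy = np.roll(u, -1, axis=0) - u
            mag = np.sqrt(ux**2 + uy**2 + eps**2)
            px, py = ux / mag, uy / mag
            div = (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))
            u -= tau * ((u - f) - lam * div)           # descent step
        return u
    ```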

  6. Geometric constrained variational calculus. II: The second variation (Part I)

    NASA Astrophysics Data System (ADS)

    Massa, Enrico; Bruno, Danilo; Luria, Gianvittorio; Pagani, Enrico

    2016-10-01

    Within the geometrical framework developed in [Geometric constrained variational calculus. I: Piecewise smooth extremals, Int. J. Geom. Methods Mod. Phys. 12 (2015) 1550061], the problem of minimality for constrained calculus of variations is analyzed among the class of differentiable curves. A fully covariant representation of the second variation of the action functional, based on a suitable gauge transformation of the Lagrangian, is explicitly worked out. Both necessary and sufficient conditions for minimality are proved, and reinterpreted in terms of Jacobi fields.

  7. Constrained Total Generalized p-Variation Minimization for Few-View X-Ray Computed Tomography Image Reconstruction.

    PubMed

    Zhang, Hanming; Wang, Linyuan; Yan, Bin; Li, Lei; Cai, Ailong; Hu, Guoen

    2016-01-01

    Total generalized variation (TGV)-based computed tomography (CT) image reconstruction, which utilizes high-order image derivatives, is superior to total variation-based methods in preserving edge information and suppressing unfavorable staircase effects. However, conventional TGV regularization employs an l1-based form, which is not the most direct way to promote sparsity. In this study, we propose a total generalized p-variation (TGpV) regularization model to improve the sparsity exploitation of TGV and offer efficient solutions to few-view CT image reconstruction problems. To solve the nonconvex optimization problem of the TGpV minimization model, we present an efficient iterative algorithm based on alternating minimization of the augmented Lagrangian function. All of the subproblems decoupled by variable splitting admit explicit solutions via the alternating minimization method and a generalized p-shrinkage mapping. In addition, approximate solutions that can be computed quickly through the fast Fourier transform are derived using the proximal point method to reduce the cost of the inner subproblems. Simulated and real data are evaluated qualitatively and quantitatively to validate the efficiency and feasibility of the proposed method. Overall, the proposed method performs well and outperforms the original TGV-based method when applied to few-view problems.
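
    A sketch of the generalized p-shrinkage mapping (in the form popularized by Chartrand) that alternating schemes of this kind apply elementwise to the split variables; for p = 1 it reduces to ordinary soft-thresholding. This is a generic illustration, not necessarily the authors' exact operator.

    ```python
    import numpy as np

    # Generalized p-shrinkage: for p = 1 this is plain soft-thresholding.
    # For p < 1, x = 0 produces -inf inside the max (from |x|**(p-1)),
    # which the max(..., 0) maps to 0, with a harmless numpy warning.
    def p_shrink(x, lam, p):
        mag = np.maximum(np.abs(x) - lam**(2 - p) * np.abs(x)**(p - 1), 0)
        return np.sign(x) * mag

    x = np.array([-2.0, -0.5, 0.1, 1.5])
    print(p_shrink(x, lam=0.5, p=1.0))   # soft-threshold: [-1.5, 0, 0, 1.0]
    ```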

  8. Semismooth Newton method for gradient constrained minimization problem

    NASA Astrophysics Data System (ADS)

    Anyyeva, Serbiniyaz; Kunisch, Karl

    2012-08-01

    In this paper we treat a gradient-constrained minimization problem, a particular case of which is the elasto-plastic torsion problem. To obtain a numerical approximation to the solution, we developed an algorithm in an infinite-dimensional space framework using the concept of generalized (Newton) differentiation. Regularization was applied to approximate the problem by an unconstrained minimization problem and to make the pointwise maximum function Newton differentiable. Using the semismooth Newton method, a continuation method was developed in function space. For the numerical implementation, the variational equations at the Newton steps are discretized using the finite element method.

  10. Existence of evolutionary variational solutions via the calculus of variations

    NASA Astrophysics Data System (ADS)

    Bögelein, Verena; Duzaar, Frank; Marcellini, Paolo

    In this paper we introduce a purely variational approach to time-dependent problems, yielding the existence of global parabolic minimizers, that is, ∫₀ᵀ ∫_Ω [u·∂_t φ + f(x,Du)] dx dt ≤ ∫₀ᵀ ∫_Ω f(x, Du+Dφ) dx dt whenever T > 0 and φ ∈ C₀^∞(Ω×(0,T), R^N). For the integrand f: Ω×R^{N×n} → [0,∞] we merely assume convexity with respect to the gradient variable and coercivity. These evolutionary variational solutions are obtained as limits of maps depending on space and time that minimize certain convex variational functionals. In the simplest situation, under suitable growth conditions on f, the method provides the existence of global weak solutions to Cauchy-Dirichlet problems for parabolic systems of the type ∂_t u − div D_ξ f(x,Du) = 0 in Ω×(0,∞).

  11. On the Support of Minimizers of Causal Variational Principles

    NASA Astrophysics Data System (ADS)

    Finster, Felix; Schiefeneder, Daniela

    2013-11-01

    A class of causal variational principles on a compact manifold is introduced and analyzed both numerically and analytically. It is proved under general assumptions that the support of a minimizing measure is either completely timelike, or it is singular in the sense that its interior is empty. In the examples of the circle, the sphere and certain flag manifolds, the general results are supplemented by a more detailed and explicit analysis of the minimizers. On the sphere, we get a connection to packing problems and the Tammes distribution. Moreover, the minimal action is estimated from above and below.

  12. Accurate sparse-projection image reconstruction via nonlocal TV regularization.

    PubMed

    Zhang, Yi; Zhang, Weihua; Zhou, Jiliu

    2014-01-01

    Sparse-projection image reconstruction is a useful approach to lowering the radiation dose; however, the incompleteness of the projection data degrades imaging quality. As a typical compressive sensing method, total variation has attracted great attention for this problem. Owing to its theoretical limitations, however, total variation produces blocky effects in smooth regions and blurs edges. To overcome this problem, we introduce nonlocal total variation into sparse-projection image reconstruction and formulate the minimization problem with a new nonlocal total variation norm. Qualitative and quantitative analyses of numerical as well as clinical results demonstrate the validity of the proposed method. Compared with other existing methods, our method more efficiently suppresses artifacts caused by low-rank reconstruction and preserves structural information better.

  13. Image denoising by a direct variational minimization

    NASA Astrophysics Data System (ADS)

    Janev, Marko; Atanacković, Teodor; Pilipović, Stevan; Obradović, Radovan

    2011-12-01

    In this article we introduce a novel method for image denoising that combines the mathematical well-posedness of variational modeling with the efficiency of a patch-based approach in image processing. It is based on direct minimization of an energy functional containing a minimal-surface regularizer that uses a fractional gradient. The minimization is carried out independently on every predefined patch of the image. By doing so, we avoid the use of an artificial-time PDE model with its inherent problems of finding the optimal stopping time as well as the optimal time step. Moreover, we control the level of image smoothing on each patch (and thus on the whole image) by adapting the Lagrange multiplier using information on the level of discontinuities on a particular patch, obtained by pre-processing. In order to reduce the average number of vectors in the approximation generator while keeping degradation minimal, we combine a Ritz variational method for the actual minimization on a patch with a complementary fractional variational principle. Thus the proposed method becomes computationally feasible and applicable for practical purposes. We confirm our claims with experimental results, comparing the proposed method with several PDE-based methods and obtaining significantly better denoising results, especially in oscillatory regions.

  14. Higher Integrability for Minimizers of the Mumford-Shah Functional

    NASA Astrophysics Data System (ADS)

    De Philippis, Guido; Figalli, Alessio

    2014-08-01

    We prove higher integrability for the gradient of local minimizers of the Mumford-Shah energy functional, providing a positive answer to a conjecture of De Giorgi (Free discontinuity problems in calculus of variations. Frontiers in pure and applied mathematics, North-Holland, Amsterdam, pp 55-62, 1991).

  15. Variational principle for the Navier-Stokes equations.

    PubMed

    Kerswell, R R

    1999-05-01

    A variational principle is presented for the Navier-Stokes equations in the case of a contained boundary-driven, homogeneous, incompressible, viscous fluid. Based upon making the fluid's total viscous dissipation over a given time interval stationary subject to the constraint of the Navier-Stokes equations, the variational problem looks overconstrained and intractable. However, introducing a nonunique velocity decomposition, u(x,t) = φ(x,t) + ν(x,t), "opens up" the variational problem so that what is presumed a single allowable point over the velocity domain u corresponding to the unique solution of the Navier-Stokes equations becomes a surface with a saddle point over the extended domain (φ, ν). Complementary or dual variational problems can then be constructed to estimate this saddle point value strictly from above as part of a minimization process or below via a maximization procedure. One of these reduced variational principles is the natural and ultimate generalization of the upper bounding problem developed by Doering and Constantin. The other corresponds to the ultimate Busse problem, which now acts to lower bound the true dissipation. Crucially, these reduced variational problems require only the solution of a series of linear problems to produce bounds, even though their unique intersection is conjectured to correspond to a solution of the nonlinear Navier-Stokes equations.

  16. Minimizing the Diameter of a Network Using Shortcut Edges

    NASA Astrophysics Data System (ADS)

    Demaine, Erik D.; Zadimoghaddam, Morteza

    We study the problem of minimizing the diameter of a graph by adding k shortcut edges, for speeding up communication in an existing network design. We develop constant-factor approximation algorithms for different variations of this problem. We also show how to improve the approximation ratios using resource augmentation to allow more than k shortcut edges. We observe a close relation between the single-source version of the problem, where we want to minimize the largest distance from a given source vertex, and the well-known k-median problem. First we show that our constant-factor approximation algorithms for the general case solve the single-source problem within a constant factor. Then, using a linear-programming formulation for the single-source version, we find a (1 + ɛ)-approximation using O(k log n) shortcut edges. To show the tightness of our result, we prove that any (3/2 − ɛ)-approximation for the single-source version must use Ω(k log n) shortcut edges, assuming P ≠ NP.
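
    A brute-force greedy illustration of the underlying task (repeatedly add the shortcut that most reduces the diameter). It carries none of the paper's approximation guarantees and is slow, but it makes the problem concrete; networkx is assumed.

    ```python
    import networkx as nx
    from itertools import combinations

    # Toy greedy: k times, add the non-edge that most reduces the diameter.
    # A brute-force baseline, not the paper's constant-factor algorithms.
    def greedy_shortcuts(G, k):
        G = G.copy()
        for _ in range(k):
            best_edge, best_diam = None, nx.diameter(G)
            for u, v in combinations(G.nodes, 2):
                if G.has_edge(u, v):
                    continue
                G.add_edge(u, v)
                d = nx.diameter(G)
                if d < best_diam:
                    best_edge, best_diam = (u, v), d
                G.remove_edge(u, v)
            if best_edge is None:
                break
            G.add_edge(*best_edge)
        return G

    print(nx.diameter(greedy_shortcuts(nx.path_graph(10), k=2)))
    ```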

  17. Dogs Don't Need Calculus

    ERIC Educational Resources Information Center

    Bolt, Mike

    2010-01-01

    Many optimization problems can be solved without resorting to calculus. This article develops a new variational method for optimization that relies on inequalities. The method is illustrated by four examples, the last of which provides a completely algebraic solution to the problem of minimizing the time it takes a dog to retrieve a thrown ball,…

  18. Space-variant restoration of images degraded by camera motion blur.

    PubMed

    Sorel, Michal; Flusser, Jan

    2008-02-01

    We examine the problem of restoration from multiple images degraded by camera motion blur. We consider scenes with significant depth variations resulting in space-variant blur. The proposed algorithm can be applied if the camera moves along an arbitrary curve parallel to the image plane, without any rotations. The knowledge of camera trajectory and camera parameters is not necessary. At the input, the user selects a region where depth variations are negligible. The algorithm belongs to the group of variational methods that estimate simultaneously a sharp image and a depth map, based on the minimization of a cost functional. To initialize the minimization, it uses an auxiliary window-based depth estimation algorithm. Feasibility of the algorithm is demonstrated by three experiments with real images.

  19. An MBO Scheme for Minimizing the Graph Ohta-Kawasaki Functional

    NASA Astrophysics Data System (ADS)

    van Gennip, Yves

    2018-06-01

    We study a graph-based version of the Ohta-Kawasaki functional, which was originally introduced in a continuum setting to model pattern formation in diblock copolymer melts and has been studied extensively as a paradigmatic example of a variational model for pattern formation. Graph-based problems inspired by partial differential equations (PDEs) and variational methods have been the subject of many recent papers in the mathematical literature, because of their applications in areas such as image processing and data classification. This paper extends the area of PDE inspired graph-based problems to pattern-forming models, while continuing in the tradition of recent papers in the field. We introduce a mass conserving Merriman-Bence-Osher (MBO) scheme for minimizing the graph Ohta-Kawasaki functional with a mass constraint. We present three main results: (1) the Lyapunov functionals associated with this MBO scheme Γ-converge to the Ohta-Kawasaki functional (which includes the standard graph-based MBO scheme and total variation as a special case); (2) there is a class of graphs on which the Ohta-Kawasaki MBO scheme corresponds to a standard MBO scheme on a transformed graph and for which generalized comparison principles hold; (3) this MBO scheme allows for the numerical computation of (approximate) minimizers of the graph Ohta-Kawasaki functional with a mass constraint.

  20. Geometric constrained variational calculus. III: The second variation (Part II)

    NASA Astrophysics Data System (ADS)

    Massa, Enrico; Luria, Gianvittorio; Pagani, Enrico

    2016-03-01

    The problem of minimality for constrained variational calculus is analyzed within the class of piecewise differentiable extremaloids. A fully covariant representation of the second variation of the action functional based on a family of local gauge transformations of the original Lagrangian is proposed. The necessity of pursuing a local adaptation process, rather than the global one described in [1] is seen to depend on the value of certain scalar attributes of the extremaloid, here called the corners’ strengths. On this basis, both the necessary and the sufficient conditions for minimality are worked out. In the discussion, a crucial role is played by an analysis of the prolongability of the Jacobi fields across the corners. Eventually, in the appendix, an alternative approach to the concept of strength of a corner, more closely related to Pontryagin’s maximum principle, is presented.

  1. Hessian-based norm regularization for image restoration with biomedical applications.

    PubMed

    Lefkimmiatis, Stamatios; Bourquard, Aurélien; Unser, Michael

    2012-03-01

    We present nonquadratic Hessian-based regularization methods that can be effectively used for image restoration problems in a variational framework. Motivated by the great success of the total-variation (TV) functional, we extend it to also include second-order differential operators. Specifically, we derive second-order regularizers that involve matrix norms of the Hessian operator. The definition of these functionals is based on an alternative interpretation of TV that relies on mixed norms of directional derivatives. We show that the resulting regularizers retain some of the most favorable properties of TV, i.e., convexity, homogeneity, rotation, and translation invariance, while dealing effectively with the staircase effect. We further develop an efficient minimization scheme for the corresponding objective functions. The proposed algorithm is of the iteratively reweighted least-square type and results from a majorization-minimization approach. It relies on a problem-specific preconditioned conjugate gradient method, which makes the overall minimization scheme very attractive since it can be applied effectively to large images in a reasonable computational time. We validate the overall proposed regularization framework through deblurring experiments under additive Gaussian noise on standard and biomedical images.
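
    The iteratively reweighted least-squares idea behind such majorization-minimization schemes can be shown in one dimension: each step solves a weighted least-squares system with weights from the previous iterate. This generic sketch uses a first-order TV-like penalty, not the Hessian-norm regularizers of the paper.

    ```python
    import numpy as np

    # Generic IRLS / majorization-minimization sketch in 1D:
    #   min_u 0.5*||u - f||^2 + lam * sum_i |(Du)_i|,
    # majorizing |t| <= t^2/(2w) + w/2 with w = max(|Du|, eps) from the
    # previous iterate, so each step is a linear solve.
    def irls_tv1d(f, lam=1.0, eps=1e-4, iters=30):
        n = len(f)
        D = np.eye(n, k=1)[:-1] - np.eye(n)[:-1]   # forward differences
        u = np.asarray(f, dtype=float).copy()
        for _ in range(iters):
            w = np.maximum(np.abs(D @ u), eps)
            A = np.eye(n) + lam * D.T @ (D / w[:, None])
            u = np.linalg.solve(A, f)              # weighted LS step
        return u
    ```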

  2. Knee point search using cascading top-k sorting with minimized time complexity.

    PubMed

    Wang, Zheng; Tseng, Shian-Shyong

    2013-01-01

    Anomaly detection systems and many other applications are frequently confronted with the problem of finding the largest knee point in the sorted curve for a set of unsorted points. This paper proposes an efficient knee point search algorithm with minimized time complexity using cascading top-k sorting, when an a priori probability distribution of the knee point is known. First, a top-k sort algorithm is proposed based on a quicksort variation. We divide the knee point search problem into multiple steps, and in each step an optimization problem for the selection number k is solved, where the objective function is defined as the expected time cost. Because the expected time cost of one step depends on that of the subsequent steps, we simplify the optimization problem by minimizing the maximum expected time cost. The posterior probability distribution of the largest knee point and the other parameters are updated before solving the optimization problem in each step. An example of source detection for DNS DoS flooding attacks illustrates the application of the proposed algorithm.
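
    A sketch of the basic building block: a quickselect-style partial sort that returns the k largest elements without fully sorting the input. The cascade logic and the probabilistic choice of k in each step are omitted.

    ```python
    import numpy as np

    # Quickselect-style top-k: np.partition places the k largest elements
    # in the tail in O(n) expected time; only those k are then sorted.
    # The paper's cascading scheme invokes a step like this repeatedly,
    # choosing k each time to minimize the expected time cost.
    def top_k_sorted(values, k):
        part = np.partition(values, len(values) - k)
        return np.sort(part[-k:])[::-1]            # descending top-k

    vals = np.random.rand(10**6)
    print(top_k_sorted(vals, 5))
    ```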

  3. Elimination of RF inhomogeneity effects in segmentation.

    PubMed

    Agus, Onur; Ozkan, Mehmed; Aydin, Kubilay

    2007-01-01

    Various methods have been proposed for the segmentation and analysis of MR images. However, the efficiency of these techniques is affected by various artifacts that occur in the imaging system. One of the most frequently encountered problems is intensity variation across an image, and different methods are used to overcome it. In this paper we propose a method for eliminating intensity artifacts in the segmentation of MR images. Inter-imager variations are also minimized so as to produce the same tissue segmentation for the same patient. A well-known multivariate classification algorithm, maximum likelihood, is employed to illustrate the enhancement in segmentation.

  4. Solution of an optimal control lifting body entry problem by an improved method of perturbation functions

    NASA Technical Reports Server (NTRS)

    Garcia, F., Jr.

    1975-01-01

    This paper presents a solution to a complex three-degree-of-freedom lifting reentry problem, using the calculus of variations to minimize the integral of the sum of the aerodynamic loads and the heat-rate input to the vehicle. The entry problem considered has no state or control constraints along the trajectory. The calculus of variations applied to this problem gives rise to a set of necessary conditions which are used to formulate a two-point boundary value (TPBV) problem. This TPBV problem is then solved numerically by an improved method of perturbation functions (IMPF) using several starting co-state vectors. These vectors were chosen with successively larger norms to show that the envelope of convergence is significantly increased by this method, and cases are presented to illustrate this.
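
    A generic single-shooting sketch of the TPBV idea (a toy boundary value problem, not the entry dynamics): guess the unknown initial slope, integrate, and drive the terminal boundary error to zero with a root finder.

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp
    from scipy.optimize import fsolve

    # Toy TPBV problem: u'' = -u with u(0) = 0 and u(pi/2) = 1; the
    # unknown is the initial slope s = u'(0). The paper's IMPF refines
    # this shooting idea to enlarge the envelope of convergence.
    def terminal_error(s):
        sol = solve_ivp(lambda t, y: [y[1], -y[0]], (0.0, np.pi / 2),
                        [0.0, s[0]], rtol=1e-10)
        return [sol.y[0, -1] - 1.0]     # miss distance at the far boundary

    print(fsolve(terminal_error, x0=[0.5]))   # converges to s = 1.0
    ```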

  5. Discrete diffraction managed solitons: Threshold phenomena and rapid decay for general nonlinearities

    NASA Astrophysics Data System (ADS)

    Choi, Mi-Ran; Hundertmark, Dirk; Lee, Young-Ran

    2017-10-01

    We prove a threshold phenomenon for the existence/non-existence of energy-minimizing solitary solutions of the diffraction management equation for strictly positive and zero average diffraction. Our methods allow for a large class of nonlinearities (they are, for example, allowed to change sign) and impose the weakest possible condition on the local diffraction profile: it only has to be locally integrable. The solutions are found as minimizers of a nonlinear and nonlocal variational problem which is translation invariant. There exists a critical threshold λcr such that minimizers for this variational problem exist if their power is bigger than λcr, and no minimizers exist with power less than the critical threshold. We also give simple criteria for the finiteness and strict positivity of the critical threshold. Our proof of existence of minimizers is rather direct and avoids the use of Lions' concentration compactness argument. Furthermore, we give precise quantitative lower bounds on the exponential decay rate of the diffraction management solitons, which confirm the physical heuristic prediction for the asymptotic decay rate. Moreover, for ground state solutions, these bounds give a quantitative lower bound for the divergence of the exponential decay rate in the limit of vanishing average diffraction. For zero average diffraction, we prove quantitative bounds which show that the solitons decay much faster than exponentially. Our results considerably extend and strengthen the results of Hundertmark and Lee [J. Nonlinear Sci. 22, 1-38 (2012) and Commun. Math. Phys. 309(1), 1-21 (2012)].

  6. Selected Bibliography on Optimizing Techniques in Statistics

    DTIC Science & Technology

    1981-08-01

    problems in business, industry and government are formulated as optimization problems. Topics in optimization constitute an essential area of study in ... numerical, (iii) mathematical programming, and (iv) variational. We provide pertinent references with statistical applications in the above areas in Part I ... TIMS Advanced Studies in Management Sciences, North-Holland Publishing Company, Amsterdam. (To appear.) Spang, H. A. (1962). A review of minimization ...

  7. Free time minimizers for the three-body problem

    NASA Astrophysics Data System (ADS)

    Moeckel, Richard; Montgomery, Richard; Sánchez Morgado, Héctor

    2018-03-01

    Free time minimizers of the action (called "semi-static" solutions by Mañé in International congress on dynamical systems in Montevideo (a tribute to Ricardo Mañé), vol 362, pp 120-131, 1996) play a central role in the theory of weak KAM solutions to the Hamilton-Jacobi equation (Fathi in Weak KAM Theorem in Lagrangian Dynamics, Preliminary Version Number 10, 2017). We prove that any solution to Newton's three-body problem which is asymptotic to Lagrange's parabolic homothetic solution is eventually a free time minimizer. Conversely, we prove that every free time minimizer tends to Lagrange's solution, provided the mass ratios lie in a certain large open set of mass ratios. We were inspired by the work of Da Luz and Maderna (Math Proc Camb Philos Soc 156:209-227, 2014), which showed that every free time minimizer for the N-body problem is parabolic and therefore must be asymptotic to the set of central configurations. We exclude being asymptotic to Euler's central configurations by a second variation argument. Central configurations correspond to rest points for the McGehee blown-up dynamics. The large open set of mass ratios consists of those for which the linearized dynamics at each Euler rest point has a complex eigenvalue.

  8. Optimal trajectories of aircraft and spacecraft

    NASA Technical Reports Server (NTRS)

    Miele, A.

    1990-01-01

    Work done on algorithms for the numerical solution of optimal control problems and their application to the computation of optimal flight trajectories of aircraft and spacecraft is summarized. General considerations on the calculus of variations, optimal control, numerical algorithms, and applications of these algorithms to real-world problems are presented. The sequential gradient-restoration algorithm (SGRA) is examined for the numerical solution of optimal control problems of the Bolza type; both the primal formulation and the dual formulation are discussed. For aircraft trajectories, the application of the dual sequential gradient-restoration algorithm (DSGRA) to the determination of optimal flight trajectories in the presence of windshear is described, covering both take-off and abort landing trajectories. Take-off trajectories are optimized by minimizing the peak deviation of the absolute path inclination from a reference value; abort landing trajectories are optimized by minimizing the peak drop of altitude from a reference value. The survival capability of an aircraft in a severe windshear is discussed, and the optimal trajectories are found to be superior to both constant-pitch trajectories and maximum-angle-of-attack trajectories. For spacecraft trajectories, the application of the primal sequential gradient-restoration algorithm (PSGRA) to the determination of optimal flight trajectories for aeroassisted orbital transfer is examined. Both the coplanar case and the noncoplanar case are discussed within the frame of three problems: minimization of the total characteristic velocity; minimization of the time integral of the square of the path inclination; and minimization of the peak heating rate. The solution of the second problem is called the nearly-grazing solution, and its merits are pointed out as a useful engineering compromise between energy requirements and aerodynamic heating requirements.

  9. Variational Approach to Enhanced Sampling and Free Energy Calculations

    NASA Astrophysics Data System (ADS)

    Valsson, Omar; Parrinello, Michele

    2014-08-01

    The ability of widely used sampling methods, such as molecular dynamics or Monte Carlo simulations, to explore complex free energy landscapes is severely hampered by the presence of kinetic bottlenecks. A large number of solutions have been proposed to alleviate this problem. Many are based on the introduction of a bias potential which is a function of a small number of collective variables. However, constructing such a bias is not simple. Here we introduce a functional of the bias potential and an associated variational principle. The bias that minimizes the functional relates in a simple way to the free energy surface. This variational principle can be turned into a practical, efficient, and flexible sampling method. A number of numerical examples are presented, including the determination of a three-dimensional free energy surface. We argue that, besides being numerically advantageous, our variational approach provides a convenient and novel standpoint for looking at the sampling problem.
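
    For reference, a sketch of the convex bias functional (written from memory of the 2014 PRL; treat the exact normalization as an assumption), with s the collective variables, F(s) the free energy, and p(s) a chosen target distribution:

    ```latex
    % Variational functional of the bias V (sketch; normalization assumed).
    % Its minimizer satisfies V(s) = -F(s) - (1/beta) ln p(s) up to a
    % constant, so the free energy is read off from the optimal bias.
    \Omega[V] = \frac{1}{\beta}
        \ln \frac{\int \mathrm{d}s \, e^{-\beta [ F(s) + V(s) ]}}
                 {\int \mathrm{d}s \, e^{-\beta F(s)}}
        + \int \mathrm{d}s \, p(s) \, V(s)
    ```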

  10. Approximation of a Brittle Fracture Energy with a Constraint of Non-interpenetration

    NASA Astrophysics Data System (ADS)

    Chambolle, Antonin; Conti, Sergio; Francfort, Gilles A.

    2018-06-01

    Linear fracture mechanics (or at least the initiation part of that theory) can be framed in a variational context as a minimization problem over an SBD-type space. The corresponding functional can in turn be approximated, in the sense of Γ-convergence, by a sequence of functionals involving a phase field as well as the displacement field. We show that a similar approximation persists if one additionally imposes a non-interpenetration constraint in the minimization, namely that only nonnegative normal jumps are permissible.

  11. Variational Implicit Solvation with Solute Molecular Mechanics: From Diffuse-Interface to Sharp-Interface Models.

    PubMed

    Li, Bo; Zhao, Yanxiang

    2013-01-01

    Central to a variational implicit-solvent description of biomolecular solvation is an effective free-energy functional of the solute atomic positions and the solute-solvent interface (i.e., the dielectric boundary). The free-energy functional couples together the solute molecular mechanical interaction energy, the solute-solvent interfacial energy, the solute-solvent van der Waals interaction energy, and the electrostatic energy. In recent years, the sharp-interface version of the variational implicit-solvent model has been developed and used for numerical computations of molecular solvation. In this work, we propose a diffuse-interface version of the variational implicit-solvent model with solute molecular mechanics. We analyze both the sharp-interface and diffuse-interface models: we prove the existence of free-energy minimizers and obtain their bounds, and we prove the convergence of the diffuse-interface model to the sharp-interface model in the sense of Γ-convergence. We further discuss properties of sharp-interface free-energy minimizers, the boundary conditions and the coupling of the Poisson-Boltzmann equation in the diffuse-interface model, and the convergence of forces from diffuse-interface to sharp-interface descriptions. Our analysis relies on previous works on the problem of minimizing surface areas and on our observations on the coupling between solute molecular mechanical interactions and the continuum solvent. Our studies rigorously justify the self-consistency of the proposed diffuse-interface variational models of implicit solvation.

  12. Inelastic scattering with Chebyshev polynomials and preconditioned conjugate gradient minimization.

    PubMed

    Temel, Burcin; Mills, Greg; Metiu, Horia

    2008-03-27

    We describe and test an implementation, using a basis set of Chebyshev polynomials, of a variational method for solving scattering problems in quantum mechanics. This minimum error method (MEM) determines the wave function Ψ by minimizing the least-squares error in the function (HΨ − EΨ), where E is the desired scattering energy. We compare the MEM to an alternative, the Kohn variational principle (KVP), by solving the Secrest-Johnson model of two-dimensional inelastic scattering, which has been studied previously using the KVP and for which other numerical solutions are available. We use a conjugate gradient (CG) method to minimize the error, and by preconditioning the CG search we are able to greatly reduce the number of iterations necessary; the method is thus faster and more stable than the matrix inversion required in the KVP. We also avoid errors due to scattering off the boundaries, which present substantial problems for other methods, by matching the wave function in the interaction region to the correct asymptotic states at the specified energy; the Chebyshev basis allows this boundary condition to be implemented accurately and also permits a rapid and accurate evaluation of the kinetic energy. This basis set is as efficient as plane waves but does not impose an artificial periodicity on the system. There are problems in surface science and molecular electronics which cannot be solved if periodicity is imposed, and the Chebyshev basis set is a good alternative in such situations.

  13. Iterative Potts and Blake–Zisserman minimization for the recovery of functions with discontinuities from indirect measurements

    PubMed Central

    Weinmann, Andreas; Storath, Martin

    2015-01-01

    Signals with discontinuities appear in many problems in the applied sciences, ranging from mechanics and electrical engineering to biology and medicine. The concrete data acquired are typically discrete, indirect and noisy measurements of some quantities describing the signal under consideration. The task is to restore the signal and, in particular, the discontinuities. In this respect, classical methods perform rather poorly, whereas non-convex non-smooth variational methods seem to be the correct choice. Examples are methods based on Mumford–Shah and piecewise constant Mumford–Shah functionals and their discretized versions, known as Blake–Zisserman and Potts functionals. Owing to their non-convexity, minimization of such functionals is challenging. In this paper, we propose a new iterative minimization strategy for Blake–Zisserman as well as Potts functionals and a related jump-sparsity problem dealing with indirect, noisy measurements. We provide a convergence analysis and underpin our findings with numerical experiments. PMID:27547074
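
    For the direct-measurement 1D case, the Potts problem admits an exact dynamic-programming solution, which iterative strategies of this kind use as an inner building block. A minimal O(n²) sketch follows; it handles direct data only, not the indirect measurements of the paper.

    ```python
    import numpy as np

    # Exact 1D Potts segmentation for direct noisy data f:
    #   minimize  gamma * (#jumps) + sum_i (u_i - f_i)^2
    # over piecewise constant u, by dynamic programming in O(n^2).
    def potts_1d(f, gamma):
        f = np.asarray(f, dtype=float)
        n = len(f)
        B = np.zeros(n + 1)              # B[r]: optimal energy of f[:r]
        jump = np.zeros(n + 1, dtype=int)
        for r in range(1, n + 1):
            best, arg, s, ss = np.inf, 0, 0.0, 0.0
            for l in range(r, 0, -1):    # candidate last segment f[l-1:r]
                s += f[l - 1]
                ss += f[l - 1] ** 2
                dev = ss - s * s / (r - l + 1)   # deviation from mean
                cand = B[l - 1] + gamma + dev
                if cand < best:
                    best, arg = cand, l - 1
            B[r], jump[r] = best, arg
        u, r = np.empty(n), n            # backtrack and fill segment means
        while r > 0:
            l = jump[r]
            u[l:r] = f[l:r].mean()
            r = l
        return u

    print(potts_1d([0.1, -0.2, 0.0, 5.1, 4.9, 5.0], gamma=1.0))
    ```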

  14. Controlled wavelet domain sparsity for x-ray tomography

    NASA Astrophysics Data System (ADS)

    Purisha, Zenith; Rimpeläinen, Juho; Bubba, Tatiana; Siltanen, Samuli

    2018-01-01

    Tomographic reconstruction is an ill-posed inverse problem that calls for regularization. One possibility is to require sparsity of the unknown in an orthonormal wavelet basis. This, in turn, can be achieved by variational regularization, where the penalty term is the sum of the absolute values of the wavelet coefficients. The primal-dual fixed point algorithm shows that the minimizer of the variational regularization functional can be computed iteratively using a soft-thresholding operation. Choosing the soft-thresholding parameter …
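
    An illustrative ISTA-style iteration of the kind such fixed-point algorithms make precise: a gradient step on the data-fidelity term followed by soft-thresholding of the wavelet coefficients. PyWavelets is assumed; the forward operator A, step size and threshold are placeholder assumptions, and for brevity the approximation band is thresholded too.

    ```python
    import numpy as np
    import pywt

    # Illustrative ISTA for  min_x 0.5*||A(x) - b||^2 + mu*||Wx||_1 :
    # gradient step on the data term, then soft-threshold the wavelet
    # coefficients. A/At are generic linear-operator callables; shapes
    # are assumed dyadic so waverec2 returns the original size.
    def ista(A, At, b, shape, mu=0.01, tau=0.5, iters=50, wave="db2"):
        x = np.zeros(shape)
        for _ in range(iters):
            x = x - tau * At(A(x) - b)                  # gradient step
            coeffs = pywt.wavedec2(x, wave, level=3)    # analysis
            arr, slices = pywt.coeffs_to_array(coeffs)
            arr = pywt.threshold(arr, mu * tau, mode="soft")
            coeffs = pywt.array_to_coeffs(arr, slices,
                                          output_format="wavedec2")
            x = pywt.waverec2(coeffs, wave)             # synthesis
        return x

    # With A = At = identity this reduces to wavelet-domain denoising:
    # x = ista(lambda z: z, lambda z: z, noisy, noisy.shape)
    ```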

  15. High quality 4D cone-beam CT reconstruction using motion-compensated total variation regularization

    NASA Astrophysics Data System (ADS)

    Zhang, Hua; Ma, Jianhua; Bian, Zhaoying; Zeng, Dong; Feng, Qianjin; Chen, Wufan

    2017-04-01

    Four-dimensional cone-beam computed tomography (4D-CBCT) has great potential clinical value because of its ability to describe tumor and organ motion. The challenge in 4D-CBCT reconstruction is the limited number of projections at each phase, which results in reconstructions full of noise and streak artifacts with conventional analytical algorithms. To address this problem, we propose a motion-compensated total variation regularization approach that fully exploits the temporal coherence of the spatial structures among the 4D-CBCT phases. We additionally conduct motion estimation/motion compensation (ME/MC) on the 4D-CBCT volume using inter-phase deformation vector fields (DVFs). The motion-compensated 4D-CBCT volume is then viewed as a pseudo-static sequence, on which the regularization function is imposed. The regularization used in this work is 3D spatial total variation minimization combined with 1D temporal total variation minimization. We subsequently construct a cost function for a reconstruction pass and minimize it using a variable-splitting algorithm. Simulation and real patient data were used to evaluate the proposed algorithm. Results show that the introduction of additional temporal correlation along the phase direction improves 4D-CBCT image quality.

  16. Exploring L1 model space in search of conductivity bounds for the MT problem

    NASA Astrophysics Data System (ADS)

    Wheelock, B. D.; Parker, R. L.

    2013-12-01

    Geophysical inverse problems of the type encountered in electromagnetic techniques are highly non-unique. As a result, any single inverted model, though feasible, is at best inconclusive and at worst misleading. In this paper, we use modified inversion methods to establish bounds on electrical conductivity within a model of the earth. Our method consists of two steps, each making use of the 1-norm in model regularization. Both 1-norm minimization problems are framed without approximation as non-negative least-squares (NNLS) problems. First, we must identify a parsimonious set of regions within the model for which upper and lower bounds on average conductivity will be sought. This is accomplished by minimizing the 1-norm of spatial variation, which produces a model with a limited number of homogeneous regions; in fact, the number of homogeneous regions will never be greater than the number of data, regardless of the number of free parameters supplied. The second step establishes bounds for each of these regions with pairs of inversions. The new suite of inversions also uses a 1-norm penalty, but applied to the conductivity values themselves, rather than the spatial variation thereof. In the bounding step we use the 1-norm of our model parameters because it is proportional to average conductivity. For a lower bound on average conductivity, the 1-norm within a bounding region is minimized. For an upper bound on average conductivity, the 1-norm everywhere outside a bounding region is minimized. The latter minimization has the effect of concentrating conductance into the bounding region. Taken together, these bounds are a measure of the uncertainty in the associated region of our model. Starting with a blocky inverse solution is key in the selection of the bounding regions. Of course, there is a tradeoff between resolution and uncertainty: an increase in resolution (smaller bounding regions), results in greater uncertainty (wider bounds). Minimization of the 1-norm of spatial variation delivers the fewest possible regions defined by a mean conductivity, the quantity we wish to bound. Thus, these regions present a natural set for which the most narrow and discriminating bounds can be found. For illustration, we apply these techniques to synthetic magnetotelluric (MT) data sets resulting from one-dimensional (1D) earth models. In each case we find that with realistic data coverage, any single inverted model can often stray from the truth, while the computed bounds on an encompassing region contain both the inverted and the true conductivities, indicating that our measure of model uncertainty is robust. Such estimates of uncertainty for conductivity can then be translated to bounds on important petrological parameters such as mineralogy, porosity, saturation, and fluid type.
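
    The bounding step can be pictured with a small linear-programming analogue; the paper itself frames both 1-norm problems exactly as NNLS, so this LP sketch over a linearized forward operator G is only an illustration.

    ```python
    import numpy as np
    from scipy.optimize import linprog

    # Toy analogue of the lower-bounding step: minimize the summed
    # conductivity inside a region R subject to fitting linearized data
    # within a tolerance, |G m - d| <= tol, with m >= 0. (The paper's
    # exact formulation is NNLS, not this LP.)
    def region_lower_bound(G, d, region, tol):
        c = np.zeros(G.shape[1])
        c[region] = 1.0                  # mass inside the region
        A_ub = np.vstack([G, -G])        # G m <= d + tol, -G m <= tol - d
        b_ub = np.concatenate([d + tol, tol - d])
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=(0, None),
                      method="highs")
        return res.fun if res.success else None
    ```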

  17. Modified Cheeger and Ratio Cut Methods Using the Ginzburg-Landau Functional for Classification of High-Dimensional Data

    DTIC Science & Technology

    2016-02-01

    Recent advances in clustering have included continuous relaxations of the Cheeger cut ... fully nonlinear Cheeger cut problem, as well as the ratio cut optimization task. Both problems are connected to total variation minimization, and the ...

  18. An Efficient Augmented Lagrangian Method with Applications to Total Variation Minimization

    DTIC Science & Technology

    2012-08-17

    Based on the classic augmented Lagrangian multiplier method, we propose, analyze and test an algorithm for solving a class of equality-constrained non-smooth optimization problems (chiefly but not ...), significantly outperforming several state-of-the-art solvers on most tested problems. The resulting MATLAB solver, called TVAL3, has been posted online [23].

  19. Variational Trajectory Optimization Tool Set: Technical description and user's manual

    NASA Technical Reports Server (NTRS)

    Bless, Robert R.; Queen, Eric M.; Cavanaugh, Michael D.; Wetzel, Todd A.; Moerder, Daniel D.

    1993-01-01

    The algorithms that comprise the Variational Trajectory Optimization Tool Set (VTOTS) package are briefly described. The VTOTS is a software package for solving nonlinear constrained optimal control problems from a wide range of engineering and scientific disciplines. The VTOTS package was specifically designed to minimize the amount of user programming; in fact, for problems that may be expressed in terms of analytical functions, the user needs only to define the problem in terms of symbolic variables. This version of the VTOTS does not support tabular data; thus, problems must be expressed in terms of analytical functions. The VTOTS package consists of two methods for solving nonlinear optimal control problems: a time-domain finite-element algorithm and a multiple shooting algorithm. These two algorithms, under the VTOTS package, may be run independently or jointly. The finite-element algorithm generates approximate solutions, whereas the shooting algorithm provides a more accurate solution to the optimization problem. A user's manual, some examples with results, and a brief description of the individual subroutines are included.

  20. Minimal entropy reconstructions of thermal images for emissivity correction

    NASA Astrophysics Data System (ADS)

    Allred, Lloyd G.

    1999-03-01

    Low emissivity, with correspondingly low thermal emission, is a problem which has long afflicted infrared thermography. The problem is aggravated by reflected thermal energy, which increases as the emissivity decreases, reducing the net signal-to-noise ratio and degrading the resulting temperature reconstructions. Additional errors are introduced by the traditional emissivity-correction approaches, wherein one attempts to correct for emissivity either using thermocouples or using one or more baseline images collected at known temperatures. These corrections are numerically equivalent to image differencing; errors in the baseline images are therefore additive, causing the resulting measurement error to double or triple. The practical application of thermal imagery usually entails coating the objective surface to increase the emissivity to a uniform and repeatable value. While the author recommends that the thermographer still adhere to this practice, he has devised a minimal entropy reconstruction which corrects not only for emissivity variations but also for variations in sensor response, using the baseline images at known temperatures. The minimal entropy reconstruction is based on a modified Hopfield neural network which finds the image that best explains the observed and baseline data while having minimal entropy change between adjacent pixels. The autocorrelation of temperatures between adjacent pixels is a feature of most close-up thermal images. A surprising result from transient heating data indicates that the resulting corrected thermal images have less measurement error and are closer to the situational truth than the original data.

  1. An iterative algorithm for L1-TV constrained regularization in image restoration

    NASA Astrophysics Data System (ADS)

    Chen, K.; Loli Piccolomini, E.; Zama, F.

    2015-11-01

    We consider the problem of restoring blurred images affected by impulsive noise. The adopted method restores the images by solving a sequence of constrained minimization problems where the data fidelity function is the ℓ1 norm of the residual and the constraint, chosen as the image Total Variation, is automatically adapted to improve the quality of the restored images. Although this approach is general, we report here the case of vectorial images where the blurring model involves contributions from the different image channels (cross channel blur). A computationally convenient extension of the Total Variation function to vectorial images is used and the results reported show that this approach is efficient for recovering nearly optimal images.

  2. On the heteroclinic connection problem for multi-well gradient systems

    NASA Astrophysics Data System (ADS)

    Zuniga, Andres; Sternberg, Peter

    2016-10-01

    We revisit the existence problem of heteroclinic connections in R^N associated with Hamiltonian systems involving potentials W: R^N → R having several global minima. Under very mild assumptions on W we present a simple variational approach to first find geodesics minimizing the length of curves joining any two of the potential wells, where length is computed with respect to a degenerate metric having conformal factor √W. Then we show that when such a minimizing geodesic avoids passing through other wells of the potential at intermediate times, it gives rise to a heteroclinic connection between the two wells. This work improves upon the approach of [22] and represents a more geometric alternative to the approaches of e.g. [5,10,14,17] for finding such connections.
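
    The degenerate length functional being minimized is easy to write down for a polygonal curve; a sketch, with a hypothetical two-well potential standing in for W:

    ```python
    import numpy as np

    # Length of a discretized curve gamma (rows are points in R^N) in the
    # degenerate metric with conformal factor sqrt(W):
    #   L(gamma) = integral sqrt(W(gamma(t))) * |gamma'(t)| dt,
    # approximated with midpoint values on each polygonal segment.
    def degenerate_length(gamma, W):
        mids = 0.5 * (gamma[1:] + gamma[:-1])
        seg = np.linalg.norm(np.diff(gamma, axis=0), axis=1)
        return float(np.sum(np.sqrt([W(m) for m in mids]) * seg))

    # Hypothetical double-well potential on R^2, wells at (-1,0) and (1,0):
    W = lambda x: (x[0]**2 - 1.0)**2 + x[1]**2
    path = np.linspace([-1.0, 0.0], [1.0, 0.0], 100)  # straight connection
    print(degenerate_length(path, W))
    ```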

  3. On a Minimum Problem in Smectic Elastomers

    NASA Astrophysics Data System (ADS)

    Buonsanti, Michele; Giovine, Pasquale

    2008-07-01

    Smectic elastomers are layered materials exhibiting a solid-like elastic response along the layer normal and a rubbery one in the plane. Balance equations for smectic elastomers are derived from the general theory of continua with constrained microstructure. In this work we investigate a very simple minimum problem based on multi-well potentials in which the microstructure is taken into account. The set of polymeric strains minimizing the elastic energy contains a one-parameter family of simple strains associated with a micro-variation of the degree of freedom. We build the energy functional from two terms, the first nematic and the second accounting for the tilting phenomenon; then, working in the rubber-elasticity framework, we minimize over the tilt rotation angle and extract the engineering stress.

  4. Automated design of minimum drag light aircraft fuselages and nacelles

    NASA Technical Reports Server (NTRS)

    Smetana, F. O.; Fox, S. R.; Karlin, B. E.

    1982-01-01

    The constrained minimization algorithm of Vanderplaats is applied to the problem of designing minimum drag faired bodies such as fuselages and nacelles. Body drag is computed by a variation of the Hess-Smith code. This variation includes a boundary layer computation. The encased payload provides arbitrary geometric constraints, specified a priori by the designer, below which the fairing cannot shrink. The optimization may include engine cooling air flows entering and exhausting through specific port locations on the body.

  5. High-Order Model and Dynamic Filtering for Frame Rate Up-Conversion.

    PubMed

    Bao, Wenbo; Zhang, Xiaoyun; Chen, Li; Ding, Lianghui; Gao, Zhiyong

    2018-08-01

    This paper proposes a novel frame rate up-conversion method through high-order model and dynamic filtering (HOMDF) for video pixels. Unlike the constant brightness and linear motion assumptions in traditional methods, the intensity and position of the video pixels are both modeled with high-order polynomials in terms of time. The key problem of our method is then to estimate the polynomial coefficients that represent the pixel's intensity variation, velocity, and acceleration. We propose to solve it with two energy objectives: one minimizes the auto-regressive prediction error of intensity variation by its past samples, and the other minimizes the video frame's reconstruction error along the motion trajectory. To efficiently address the optimization problem for these coefficients, we propose a dynamic filtering solution inspired by video's temporal coherence. The optimal estimation of these coefficients is reformulated into a dynamic fusion of the prior estimate from the pixel's temporal predecessor and the maximum likelihood estimate from the current new observation. Finally, frame rate up-conversion is implemented using motion-compensated interpolation by pixel-wise intensity variation and motion trajectory. Benefiting from the advanced model and dynamic filtering, the interpolated frames have much better visual quality. Extensive experiments on natural and synthesized videos demonstrate the superiority of HOMDF over the state-of-the-art methods in both subjective and objective comparisons.

  6. 3D first-arrival traveltime tomography with modified total variation regularization

    NASA Astrophysics Data System (ADS)

    Jiang, Wenbin; Zhang, Jie

    2018-02-01

    Three-dimensional (3D) seismic surveys have become a major tool in the exploration and exploitation of hydrocarbons. 3D seismic first-arrival traveltime tomography is a robust method for near-surface velocity estimation. A common approach to stabilizing the ill-posed inverse problem is to apply Tikhonov regularization, but Tikhonov regularization recovers smooth local structures while blurring sharp features in the model solution. We present a 3D first-arrival traveltime tomography method with modified total variation (MTV) regularization that preserves sharp velocity contrasts and improves the accuracy of the velocity inversion. To solve the minimization problem of the new method, we decouple the original optimization problem into the following two subproblems: a standard traveltime tomography problem with traditional Tikhonov regularization, and an L2 total variation problem. We apply the conjugate gradient method and the split-Bregman iterative method to solve these two subproblems, respectively. Synthetic examples show that the new method produces higher-resolution models than conventional traveltime tomography with Tikhonov regularization. We apply the technique to field data; the stacked section shows significant improvements with static corrections from the MTV traveltime tomography.

  7. New periodic solutions for some planar N + 3-body problems with Newtonian potentials

    NASA Astrophysics Data System (ADS)

    Yuan, Pengfei; Zhang, Shiqing

    2018-03-01

    For some planar Newtonian N + 3-body problems, we use variational minimization methods to prove the existence of new periodic solutions in which N bodies chase each other on one curve while the other three bodies chase each other on another curve. From the definition of the orbit spaces in our paper, it follows that these solutions are new and differ from all the examples of Ferrario and Terracini (2004).

  8. Optimal Micropatterns in 2D Transport Networks and Their Relation to Image Inpainting

    NASA Astrophysics Data System (ADS)

    Brancolini, Alessio; Rossmanith, Carolin; Wirth, Benedikt

    2018-04-01

    We consider two different variational models of transport networks: the so-called branched transport problem and the urban planning problem. Based on a novel relation to Mumford-Shah image inpainting and techniques developed in that field, we show for a two-dimensional situation that both highly non-convex network optimization tasks can be transformed into a convex variational problem, which may be very useful from analytical and numerical perspectives. As applications of the convex formulation, we use it to perform numerical simulations (to our knowledge the first numerical treatment of urban planning), and we prove a lower bound for the network cost that matches a known upper bound (in terms of how the cost scales in the model parameters), which helps to better understand optimal networks and their minimal costs.

  9. Mothers of children with developmental disabilities: stress in early and middle childhood.

    PubMed

    Azad, Gazi; Blacher, Jan; Marcoulides, George A

    2013-10-01

    Using a sample of 219 families of children with (n=94) and without (n=125) developmental disabilities, this study examined the longitudinal perspectives of maternal stress in early (ages 3-5) and middle childhood (ages 6-13) and its relationship to mothers' and children's characteristics. Multivariate latent curve models indicated that maternal stress remained high and stable with minimal individual variation in early childhood, but declined with significant individual variation in middle childhood. Maternal stress at the beginning of middle childhood was associated with earlier maternal stress, as well as children's behavioral problems and social skills. The trajectory of maternal stress across middle childhood was related to children's behavioral problems. Implications for interventions are discussed. Copyright © 2013. Published by Elsevier Ltd.

  10. Image registration using stationary velocity fields parameterized by norm-minimizing Wendland kernel

    NASA Astrophysics Data System (ADS)

    Pai, Akshay; Sommer, Stefan; Sørensen, Lauge; Darkner, Sune; Sporring, Jon; Nielsen, Mads

    2015-03-01

    Interpolating kernels are crucial to solving a stationary velocity field (SVF) based image registration problem, because velocity fields must be evaluated at non-integer locations during integration. The regularity of the solution to the SVF registration problem is controlled by the regularization term. In a variational formulation, this term is traditionally expressed as a squared norm, which is a scalar inner product of the interpolating kernels parameterizing the velocity fields. The minimization of this term using the standard spline interpolation kernels (linear or cubic) is only approximate because of the lack of a compatible norm. In this paper, we propose to replace such interpolants with a norm-minimizing interpolant, the Wendland kernel, which offers the same computational simplicity as B-splines. An application to the Alzheimer's Disease Neuroimaging Initiative data showed that Wendland SVF based measures separate Alzheimer's disease from normal controls better than both B-spline SVFs (p<0.05 in amygdala) and B-spline freeform deformation (p<0.05 in amygdala and cortical gray matter).
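
    For reference, the Wendland C2 kernel is one standard member of this family (the abstract does not say which Wendland function is used, so this Python sketch assumes the common C2 choice):

        import numpy as np

        def wendland_c2(r, support=1.0):
            """Compactly supported Wendland C2 kernel: (1-r)^4 (4r+1) for r in [0, 1], 0 outside."""
            r = np.abs(np.asarray(r, dtype=float)) / support   # normalize by support radius
            return np.where(r < 1.0, (1.0 - r)**4 * (4.0 * r + 1.0), 0.0)

    A velocity-field component is then a finite expansion v(x) = sum_i alpha_i * wendland_c2(||x - c_i||) over control points c_i, whose squared norm can be evaluated exactly from the kernel matrix, which is the compatibility the abstract refers to.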

  11. Robust Group Sparse Beamforming for Multicast Green Cloud-RAN With Imperfect CSI

    NASA Astrophysics Data System (ADS)

    Shi, Yuanming; Zhang, Jun; Letaief, Khaled B.

    2015-09-01

    In this paper, we investigate the network power minimization problem for the multicast cloud radio access network (Cloud-RAN) with imperfect channel state information (CSI). The key observation is that network power minimization can be achieved by adaptively selecting active remote radio heads (RRHs) via controlling the group-sparsity structure of the beamforming vector. However, this yields a non-convex combinatorial optimization problem, for which we propose a three-stage robust group sparse beamforming algorithm. In the first stage, a quadratic variational formulation of the weighted mixed l1/l2-norm is proposed to induce the group-sparsity structure in the aggregated beamforming vector, which indicates those RRHs that can be switched off. A perturbed alternating optimization algorithm is then proposed to solve the resultant non-convex group-sparsity inducing optimization problem by exploiting its convex substructures. In the second stage, we propose a PhaseLift-based algorithm to solve the feasibility problem with a given active RRH set, which helps determine the active RRHs. Finally, the semidefinite relaxation (SDR) technique is adopted to determine the robust multicast beamformers. Simulation results demonstrate the convergence of the perturbed alternating optimization algorithm, as well as the effectiveness of the proposed algorithm in minimizing the network power consumption of multicast Cloud-RAN.
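
    The quadratic variational formulation of the weighted mixed l1/l2-norm in the first stage rests on a standard identity (our sketch; v_g denotes the beamforming coefficients of RRH group g and w_g its weight):

        \sum_g w_g \|v_g\|_2 = \min_{\mu_g > 0} \; \frac{1}{2} \sum_g \left( \frac{w_g^2 \|v_g\|_2^2}{\mu_g} + \mu_g \right),

    with the minimum attained at \mu_g = w_g \|v_g\|_2. Alternating between the beamformers v_g and the auxiliary variables \mu_g, with a small perturbation keeping \mu_g bounded away from zero, yields a perturbed alternating optimization of the kind described above.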

  12. Sequentially reweighted TV minimization for CT metal artifact reduction.

    PubMed

    Zhang, Xiaomeng; Xing, Lei

    2013-07-01

    Metal artifact reduction has long been an important topic in x-ray CT image reconstruction. In this work, the authors propose an iterative method that sequentially minimizes a reweighted total variation (TV) of the image and produces substantially artifact-reduced reconstructions. A sequentially reweighted TV minimization algorithm is proposed to fully exploit the sparseness of the image gradients (IG). The authors first formulate a constrained optimization model that minimizes a weighted TV of the image, subject to the constraint that the estimated projection data are within a specified tolerance of the available projection measurements, with image non-negativity enforced. The authors then solve a sequence of weighted TV minimization problems in which the weights used for the next iteration are computed from the current solution. Using the complete projection data, the algorithm first reconstructs an image from which a binary metal image can be extracted. Forward projection of the binary image identifies metal traces in the projection space. The metal-free background image is then reconstructed from the metal-trace-excluded projection data by employing a different set of weights. Each minimization problem is solved using a gradient method that alternates projection-onto-convex-sets and steepest descent. A series of simulation and experimental studies is performed to evaluate the proposed approach. Our study shows that the sequentially reweighted scheme, by altering a single parameter in the weighting function, flexibly controls the sparsity of the IG and reconstructs artifact-free images in a two-stage process. It successfully produces images with significantly reduced streak artifacts, suppressed noise, and well-preserved contrast and edge properties. The sequentially reweighted TV minimization provides a systematic approach for suppressing CT metal artifacts. The technique can also be generalized to other "missing data" problems in CT image reconstruction.
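
    The reweighting loop can be sketched in Python using TV denoising as a stand-in for the full constrained CT reconstruction (a minimal sketch under that simplification; step sizes, epsilons, and the weight normalization are assumed values, not the paper's):

        import numpy as np

        def grad(x):
            """Forward-difference image gradient with periodic boundaries."""
            return np.roll(x, -1, 0) - x, np.roll(x, -1, 1) - x

        def weighted_tv_denoise(b, w, lam=0.1, step=0.1, iters=200, eps=1e-6):
            """Gradient descent on 0.5||x - b||^2 + lam * sum(w * sqrt(|grad x|^2 + eps))."""
            x = b.copy()
            for _ in range(iters):
                gx, gy = grad(x)
                mag = np.sqrt(gx**2 + gy**2 + eps)
                px, py = w * gx / mag, w * gy / mag
                div = (px - np.roll(px, 1, 0)) + (py - np.roll(py, 1, 1))  # adjoint of grad
                x -= step * ((x - b) - lam * div)
            return x

        def sequentially_reweighted_tv(b, n_outer=4, eps_w=1e-2):
            """Outer loop: weights computed from the current solution sharpen gradient sparsity."""
            x = b.copy()
            for _ in range(n_outer):
                gx, gy = grad(x)
                w = 1.0 / (np.sqrt(gx**2 + gy**2) + eps_w)  # small gradients -> large penalty
                w /= w.max()                                 # normalize for a stable step size
                x = weighted_tv_denoise(b, w)
            return x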

  13. Girsanov's transformation based variance reduced Monte Carlo simulation schemes for reliability estimation in nonlinear stochastic dynamics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kanjilal, Oindrila, E-mail: oindrila@civil.iisc.ernet.in; Manohar, C.S., E-mail: manohar@civil.iisc.ernet.in

    The study considers the problem of simulation based time variant reliability analysis of nonlinear randomly excited dynamical systems. Attention is focused on importance sampling strategies based on the application of Girsanov's transformation method. Controls which minimize the distance function, as in the first order reliability method (FORM), are shown to minimize a bound on the sampling variance of the estimator for the probability of failure. Two schemes based on the application of calculus of variations for selecting control signals are proposed: the first obtains the control force as the solution of a two-point nonlinear boundary value problem, and the second explores the application of the Volterra series in characterizing the controls. The relative merits of these schemes, vis-à-vis the method based on ideas from the FORM, are discussed. Illustrative examples, involving archetypal single degree of freedom (dof) nonlinear oscillators, and a multi-degree of freedom nonlinear dynamical system, are presented. The credentials of the proposed procedures are established by comparing the solutions with pertinent results from direct Monte Carlo simulations. Highlights: • The distance minimizing control forces minimize a bound on the sampling variance. • Establishing Girsanov controls via solution of a two-point boundary value problem. • Girsanov controls via Volterra's series representation for the transfer functions.

  14. Unified anomaly suppression and boundary extraction in laser radar range imagery based on a joint curve-evolution and expectation-maximization algorithm.

    PubMed

    Feng, Haihua; Karl, William Clem; Castañon, David A

    2008-05-01

    In this paper, we develop a new unified approach for laser radar range anomaly suppression, range profiling, and segmentation. This approach combines an object-based hybrid scene model for representing the range distribution of the field and a statistical mixture model for the range data measurement noise. The image segmentation problem is formulated as a minimization problem which jointly estimates the target boundary together with the target region range variation and background range variation directly from the noisy and anomaly-filled range data. This formulation allows direct incorporation of prior information concerning the target boundary, target ranges, and background ranges into an optimal reconstruction process. Curve evolution techniques and a generalized expectation-maximization algorithm are jointly employed as an efficient solver for minimizing the objective energy, resulting in a coupled pair of object and intensity optimization tasks. The method directly and optimally extracts the target boundary, avoiding a suboptimal two-step process involving image smoothing followed by boundary extraction. Experiments are presented demonstrating that the proposed approach is robust to anomalous pixels (missing data) and capable of producing accurate estimation of the target boundary and range values from noisy data.

  15. Limited data tomographic image reconstruction via dual formulation of total variation minimization

    NASA Astrophysics Data System (ADS)

    Jang, Kwang Eun; Sung, Younghun; Lee, Kangeui; Lee, Jongha; Cho, Seungryong

    2011-03-01

    X-ray mammography is the primary imaging modality for breast cancer screening. For the dense breast, however, the mammogram is usually difficult to read due to the tissue overlap problem caused by the superposition of normal tissues. Digital breast tomosynthesis (DBT), which measures several low dose projections over a limited angle range, may be an alternative modality for breast imaging, since it allows the visualization of cross-sectional information of the breast. DBT, however, may suffer from aliasing artifacts and severe noise corruption. To overcome these problems, a total variation (TV) regularized statistical reconstruction algorithm is presented. Inspired by the dual formulation of TV minimization in denoising and deblurring problems, we derived a gradient-type algorithm based on a statistical model of X-ray tomography. The objective function comprises a data fidelity term derived from the statistical model and a TV regularization term. The gradient of the objective function can be easily calculated using simple operations in terms of auxiliary variables. After a descent step, the data fidelity term is renewed in each iteration. Since the proposed algorithm can be implemented without sophisticated operations such as matrix inversion, it provides an efficient way to include TV regularization in the statistical reconstruction method, which results in fast and robust estimation for low dose projections over a limited angle range. Initial tests with an experimental DBT system confirmed our findings.

  16. Variational stereo imaging of oceanic waves with statistical constraints.

    PubMed

    Gallego, Guillermo; Yezzi, Anthony; Fedele, Francesco; Benetazzo, Alvise

    2013-11-01

    An image processing observational technique for the stereoscopic reconstruction of the waveform of oceanic sea states is developed. The technique incorporates the enforcement of any given statistical wave law modeling the quasi-Gaussianity of oceanic waves observed in nature. The problem is posed in a variational optimization framework, where the desired waveform is obtained as the minimizer of a cost functional that combines image observations, smoothness priors, and a weak statistical constraint. The minimizer is obtained by combining gradient descent and multigrid methods on the necessary optimality equations of the cost functional. Robust photometric error criteria and a spatial intensity compensation model are also developed to improve the performance of the presented image matching strategy. The weak statistical constraint is thoroughly evaluated, in combination with the other elements presented, to reconstruct and enforce constraints on experimental stereo data, demonstrating the improvement in the estimation of the observed ocean surface.

  17. A compressed sensing based approach on Discrete Algebraic Reconstruction Technique.

    PubMed

    Demircan-Tureyen, Ezgi; Kamasak, Mustafa E

    2015-01-01

    Discrete tomography (DT) techniques are capable of computing better results than continuous tomography techniques, even from fewer projections. The Discrete Algebraic Reconstruction Technique (DART) is an iterative reconstruction method proposed to achieve this goal by exploiting prior knowledge of the gray levels and assuming that the scanned object is composed of a few different densities. In this paper, the DART method is combined with an initial total variation minimization (TvMin) phase to ensure a better initial guess, and extended with a segmentation procedure in which the threshold values are estimated from a finite set of candidates to minimize both the projection error and the total variation (TV) simultaneously. The accuracy and robustness of the algorithm are compared with the original DART in simulation experiments under (1) a limited number of projections, (2) a limited view, and (3) noisy projections.

  18. A constrained registration problem based on Ciarlet-Geymonat stored energy

    NASA Astrophysics Data System (ADS)

    Derfoul, Ratiba; Le Guyader, Carole

    2014-03-01

    In this paper, we address the issue of designing a theoretically well-motivated registration model capable of handling large deformations and including geometrical constraints, namely landmark points to be matched, in a variational framework. Since the theory of linear elasticity is unsuitable in this case, as it assumes small strains and the validity of Hooke's law, the introduced functional is based on nonlinear elasticity principles. More precisely, the shapes to be matched are viewed as Ciarlet-Geymonat materials. We demonstrate the existence of minimizers of the related functional minimization problem and prove a convergence result when the number of geometric constraints increases. We then describe and analyze a numerical method of resolution based on the introduction of an associated decoupled problem under inequality constraint, in which an auxiliary variable simulates the Jacobian matrix of the deformation field. A theoretical result of Γ-convergence is established. We then provide preliminary 2D results of the proposed matching model for the registration of mouse brain gene expression data to a neuroanatomical mouse atlas.

  19. Steady-state groundwater recharge in trapezoidal-shaped aquifers: A semi-analytical approach based on variational calculus

    NASA Astrophysics Data System (ADS)

    Mahdavi, Ali; Seyyedian, Hamid

    2014-05-01

    This study presents a semi-analytical solution for steady groundwater flow in trapezoidal-shaped aquifers in response to an areal diffusive recharge. The aquifer is homogeneous, anisotropic and interacts with four surrounding constant-head streams. The flow field in this laterally bounded aquifer system is efficiently constructed by means of variational calculus, by minimizing a properly defined penalty function for the associated boundary value problem. Simple yet demonstrative scenarios are defined to investigate anisotropy effects on the water table variation. Qualitative examination of the resulting equipotential contour maps and velocity vector field illustrates the validity of the method, especially in the vicinity of boundary lines. Extension to the case of a triangular-shaped aquifer with or without an impervious boundary line is also demonstrated through a hypothetical example problem. The present solution benefits from an extremely simple mathematical expression and shows close agreement with the numerical results obtained from MODFLOW. Overall, the solution may be used to conduct sensitivity analyses on the various hydrogeological parameters that affect water table variation in aquifers defined on trapezoidal or triangular-shaped domains.

  20. A limited-angle CT reconstruction method based on anisotropic TV minimization.

    PubMed

    Chen, Zhiqiang; Jin, Xin; Li, Liang; Wang, Ge

    2013-04-07

    This paper presents a compressed sensing (CS)-inspired reconstruction method for limited-angle computed tomography (CT). Currently, CS-inspired CT reconstructions are often performed by minimizing the total variation (TV) of a CT image subject to data consistency. A key to obtaining high image quality is to optimize the balance between TV-based smoothing and data fidelity. In the case of the limited-angle CT problem, the strength of data consistency is angularly varying. For example, given a parallel beam of x-rays, information extracted in the Fourier domain is mostly orthogonal to the direction of x-rays, while little is probed otherwise. However, the TV minimization process is isotropic, suggesting that it is unfit for limited-angle CT. Here we introduce an anisotropic TV minimization method to address this challenge. The advantage of our approach is demonstrated in numerical simulation with both phantom and real CT images, relative to the TV-based reconstruction.
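
    One common way to make the TV term direction dependent, in the spirit of the method described above (the paper's exact weighting may differ), is to weight the two gradient components:

        \mathrm{TV}_{\mathrm{iso}}(u) = \sum_{i,j} \sqrt{(\partial_x u)_{i,j}^2 + (\partial_y u)_{i,j}^2}, \qquad \mathrm{TV}_{\mathrm{aniso}}(u) = \sum_{i,j} \sqrt{a\,(\partial_x u)_{i,j}^2 + b\,(\partial_y u)_{i,j}^2},

    where choosing a ≠ b penalizes smoothing differently along the well-probed and poorly probed directions of the limited-angle scan.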

  1. The Thermal Equilibrium Solution of a Generic Bipolar Quantum Hydrodynamic Model

    NASA Astrophysics Data System (ADS)

    Unterreiter, Andreas

    The thermal equilibrium state of a bipolar, isothermic quantum fluid confined to a bounded domain $\Omega \subset \mathbb{R}^d$, $d = 1, 2$ or $d = 3$, is entirely described by the particle densities $n, p$ minimizing an energy functional in which $G_{1,2}$ are strictly convex real-valued functions. It is shown that this variational problem has a unique minimizer, and some regularity results are proven. The semi-classical limit is carried out, recovering the minimizer of the limiting functional. The subsequent zero space charge limit leads to extensions of the classical boundary conditions. Due to the lack of regularity, the asymptotics cannot be settled on Sobolev embedding arguments. The limit is carried out by means of a compactness-by-convexity principle.

  2. Multi-level adaptive finite element methods. 1: Variation problems

    NASA Technical Reports Server (NTRS)

    Brandt, A.

    1979-01-01

    A general numerical strategy for solving partial differential equations and other functional problems by cycling between coarser and finer levels of discretization is described. Optimal discretization schemes are provided together with very fast general solvers. The strategy is described in terms of finite element discretizations of general nonlinear minimization problems. The basic processes (relaxation sweeps, fine-grid-to-coarse-grid transfers of residuals, coarse-to-fine interpolations of corrections) are directly and naturally determined by the objective functional and the sequence of approximation spaces. The natural processes, however, are not always optimal. Concrete examples are given and some new techniques are reviewed, including local truncation extrapolation and a multilevel procedure for inexpensively solving chains of many boundary value problems, such as those arising in the solution of time-dependent problems.

  3. Development of a Test Facility for Air Revitalization Technology Evaluation

    NASA Technical Reports Server (NTRS)

    Lu, Sao-Dung; Lin, Amy; Campbell, Melissa; Smith, Frederick

    2006-01-01


  4. Gain-Scheduled Fault Tolerance Control Under False Identification

    NASA Technical Reports Server (NTRS)

    Shin, Jong-Yeob; Belcastro, Christine (Technical Monitor)

    2006-01-01

    An active fault tolerant control (FTC) law is generally sensitive to false identification since the control gain is reconfigured for fault occurrence. In the conventional FTC law design procedure, dynamic variations due to false identification are not considered. In this paper, an FTC synthesis method is developed in order to consider possible variations of closed-loop dynamics under false identification into the control design procedure. An active FTC synthesis problem is formulated into an LMI optimization problem to minimize the upper bound of the induced-L2 norm which can represent the worst-case performance degradation due to false identification. The developed synthesis method is applied for control of the longitudinal motions of FASER (Free-flying Airplane for Subscale Experimental Research). The designed FTC law of the airplane is simulated for pitch angle command tracking under a false identification case.

  5. Weak convergence of a projection algorithm for variational inequalities in a Banach space

    NASA Astrophysics Data System (ADS)

    Iiduka, Hideaki; Takahashi, Wataru

    2008-03-01

    Let C be a nonempty, closed convex subset of a Banach space E. In this paper, motivated by Alber [Ya.I. Alber, Metric and generalized projection operators in Banach spaces: Properties and applications, in: A.G. Kartsatos (Ed.), Theory and Applications of Nonlinear Operators of Accretive and Monotone Type, in: Lecture Notes Pure Appl. Math., vol. 178, Dekker, New York, 1996, pp. 15-50], we introduce the following iterative scheme for finding a solution of the variational inequality problem for an inverse-strongly-monotone operator A in a Banach space: $x_1 = x \in C$ and $x_{n+1} = \Pi_C J^{-1}(J x_n - \lambda_n A x_n)$ for every $n = 1, 2, \ldots$, where $\Pi_C$ is the generalized projection from E onto C, J is the duality mapping from E into $E^*$, and $\{\lambda_n\}$ is a sequence of positive real numbers. Then we show a weak convergence theorem (Theorem 3.1). Finally, using this result, we consider the convex minimization problem, the complementarity problem, and the problem of finding a point $u \in E$ satisfying $0 = Au$.

  6. Optimal Variational Asymptotic Method for Nonlinear Fractional Partial Differential Equations.

    PubMed

    Baranwal, Vipul K; Pandey, Ram K; Singh, Om P

    2014-01-01

    We propose an optimal variational asymptotic method to solve time fractional nonlinear partial differential equations. In the proposed method, an arbitrary number of auxiliary parameters $\gamma_0, \gamma_1, \gamma_2, \ldots$ and auxiliary functions $H_0(x), H_1(x), H_2(x), \ldots$ are introduced in the correction functional of the standard variational iteration method. The optimal values of these parameters are obtained by minimizing the square residual error. To test the method, we apply it to two important classes of nonlinear partial differential equations: (1) the fractional advection-diffusion equation with nonlinear source term and (2) the fractional Swift-Hohenberg equation. Only a few iterations are required to achieve fairly accurate solutions of both problems.
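
    For context, the correction functional of the standard variational iteration method has the generic textbook form (a sketch, not the paper's exact functional; L is the linear operator, N the nonlinear operator, g a source term, $\lambda(\tau)$ the Lagrange multiplier, and $\tilde{u}_n$ the restricted variation):

        u_{n+1}(x,t) = u_n(x,t) + \int_0^t \lambda(\tau) \left[ L u_n(x,\tau) + N \tilde{u}_n(x,\tau) - g(x,\tau) \right] d\tau,

    into which the auxiliary parameters $\gamma_i$ and functions $H_i(x)$ are inserted; their optimal values are then fixed by minimizing the square residual error.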

  7. Symmetric Trajectories for the 2N-Body Problem with Equal Masses

    NASA Astrophysics Data System (ADS)

    Terracini, Susanna; Venturelli, Andrea

    2007-06-01

    We consider the problem of 2N bodies of equal masses in $\mathbb{R}^3$ for the Newtonian-like weak-force potential $r^{-\sigma}$, and we prove the existence of a family of collision-free nonplanar and nonhomographic symmetric solutions that are periodic modulo rotations. In addition, the rotation number with respect to the vertical axis ranges in a suitable interval. These solutions have the hip-hop symmetry, a generalization of that introduced in [19], for the case of many bodies and taking account of a topological constraint. The argument exploits the variational structure of the problem, and is based on the minimization of the Lagrangian action on a given class of paths.

  8. Edge guided image reconstruction in linear scan CT by weighted alternating direction TV minimization.

    PubMed

    Cai, Ailong; Wang, Linyuan; Zhang, Hanming; Yan, Bin; Li, Lei; Xi, Xiaoqi; Li, Jianxin

    2014-01-01

    Linear scan computed tomography (CT) is a promising imaging configuration with high scanning efficiency; however, its data set is under-sampled and angularly limited, which makes high quality image reconstruction challenging. In this work, an edge guided total variation minimization reconstruction (EGTVM) algorithm is developed to deal with this problem. The proposed method combines total variation (TV) regularization with an iterative edge detection strategy: the edge weights of intermediate reconstructions are incorporated into the TV objective function. The optimization is efficiently solved by applying the alternating direction method of multipliers. A prudent and conservative edge detection strategy proposed in this paper obtains the true edges while keeping the errors within an acceptable range. Comparisons on both simulation studies and real CT data reconstructions show that EGTVM provides comparable or even better quality than non-edge-guided reconstruction and the adaptive steepest descent-projection onto convex sets method. With the utilization of weighted alternating direction TV minimization and edge detection, EGTVM achieves fast and robust convergence and reconstructs high quality images when applied to linear scan CT with under-sampled data.

  9. Nonlinear stability of Gardner breathers

    NASA Astrophysics Data System (ADS)

    Alejo, Miguel A.

    2018-01-01

    We show that breather solutions of the Gardner equation, a natural generalization of the KdV and mKdV equations, are $H^2(\mathbb{R})$ stable. Through a variational approach, we characterize Gardner breathers as minimizers of a new Lyapunov functional and we study the associated spectral problem, through (i) the analysis of the spectrum of explicit linear systems (spectral stability), and (ii) controlling degenerate directions by using low regularity conservation laws.

  10. Resource Control in Large-Scale Mobile-Agents Systems

    DTIC Science & Technology

    2005-07-01

    ...wakeup node schedule, much energy can be conserved. We also designed several protocols for global clock synchronization. The most interesting one is... choice as to which remote hosts to visit and in which order. Scheduling mobile-agent migration in a way that minimizes bandwidth and other resource use, therefore, is both feasible and attractive. Dartmouth considered several variations of the scheduling problem, and developed an algorithm for...

  11. Minimal entropy probability paths between genome families.

    PubMed

    Ahlbrandt, Calvin; Benson, Gary; Casey, William

    2004-05-01

    We develop a metric for probability distributions with applications to biological sequence analysis. Our distance metric is obtained by minimizing a functional defined on the class of paths over probability measures on N categories. The underlying mathematical theory is connected to a constrained problem in the calculus of variations. The solution presented is a numerical one, which approximates the true solution in a set of cases called rich paths, where none of the components of the path is zero. The functional to be minimized is motivated by entropy considerations, reflecting the idea that nature might efficiently carry out mutations of genome sequences in such a way that the increase in entropy involved in the transformation is as small as possible. We characterize sequences by frequency profiles or probability vectors; in the case of DNA, N is 4 and the components of the probability vector are the frequencies of occurrence of the bases A, C, G and T. Given two probability vectors a and b, we define a distance function as the infimum of path integrals of the entropy function $H(p)$ over all admissible paths $p(t)$, $0 \le t \le 1$, with $p(t)$ a probability vector such that $p(0) = a$ and $p(1) = b$. If the probability paths $p(t)$ are parameterized as $y(s)$ in terms of arc length $s$ and the optimal path is smooth with arc length $L$, then smooth and "rich" optimal probability paths may be numerically estimated by a hybrid method: Newton's method is iterated on solutions of a two-point boundary value problem, with unknown distance $L$ between the abscissas, for the Euler-Lagrange equations resulting from a multiplier rule for the constrained optimization problem, together with linear regression to improve the arc length estimate $L$. Matlab code for these numerical methods is provided, which works only for "rich" optimal probability vectors. These methods motivate the definition of an elementary distance function that is easier and faster to calculate, works on non-rich vectors, involves neither variational theory nor differential equations, and is a better approximation of the minimal entropy path distance than the distance $\|b - a\|_2$. We compute minimal entropy distance matrices for examples of DNA myostatin genes and amino-acid sequences across several species. Output tree dendrograms for our minimal entropy metric are compared with dendrograms based on BLAST and BLAST identity scores.
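
    One natural reading of the path-integral distance described above (a schematic form; the paper's functional may include additional weighting) is

        d(a, b) = \inf \left\{ \int_0^1 H(p(t))\, \|\dot{p}(t)\|\, dt \;:\; p(0) = a,\; p(1) = b,\; p(t)\ \text{a probability vector} \right\},

    so that paths are cheap where the entropy $H$ is small, matching the idea that nature mutates sequences with as little entropy increase as possible.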

  12. The maximum work principle regarded as a consequence of an optimization problem based on mechanical virtual power principle and application of constructal theory

    NASA Astrophysics Data System (ADS)

    Gavrus, Adinel

    2017-10-01

    This paper proposes to prove that the maximum work principle used in the theory of continuum plasticity can be regarded as a consequence of an optimization problem based on constructal theory (Prof. Adrian Bejan). Thermodynamics defines the conservation of energy and the irreversibility of the evolution of natural systems. From a mechanical point of view, the first permits the definition of the momentum balance equation and the virtual power principle, while the second explains the tendency of all currents to flow from high to low values. According to the constructal law, every finite-size system evolves toward configurations that flow more and more easily over time, distributing imperfections so as to maximize entropy and minimize losses or dissipations. During a material forming process, applying constructal theory principles leads to the conclusion that, under external loads, the material flow is the one for which the total dissipated mechanical power (deformation and friction) becomes minimal. From a mechanical point of view, the real state of the mechanical variables (stress, strain, strain rate) can then be formulated as the one that minimizes the total dissipated power: among all virtual non-equilibrium states, the real state minimizes the total dissipated power. This yields a variational minimization problem, and this paper proves in a mathematical sense that, starting from this formulation, the maximum work principle can be recovered in a more general form, together with an equivalent form for the friction term. An application to the plane compression of a plastic material shows the feasibility of the proposed minimization problem formulation for finding analytical solutions in two cases: one without friction and a second that takes into account the Tresca friction law. To validate the proposed formulation, a comparison with a classical analytical analysis based on the slice method, upper/lower bound methods, and numerical finite element simulation is also presented.

  13. Geometric Variational Methods for Controlled Active Vision

    DTIC Science & Technology

    2006-08-01

    Haker, L. Zhu, and A. Tannenbaum, "Optimal mass transport for registration and warping," Int. Journal Computer Vision, volume 60, 2004, pp. 225-240. G... pp. 119-142. A. Angenent, S. Haker, and A. Tannenbaum, "Minimizing flows for the Monge-Kantorovich problem," SIAM J. Math. Analysis, volume 35... "Shape analysis of structures using spherical wavelets" (with S. Haker and D. Nain), Proceedings of MICCAI, 2005. "Affine surface evolution for 3D...

  14. Low dose CT reconstruction via L1 norm dictionary learning using alternating minimization algorithm and balancing principle.

    PubMed

    Wu, Junfeng; Dai, Fang; Hu, Gang; Mou, Xuanqin

    2018-04-18

    Excessive radiation exposure in computed tomography (CT) scans increases the chance of developing cancer and has become a major clinical concern. Recently, statistical iterative reconstruction (SIR) with l0-norm dictionary learning regularization has been developed to reconstruct CT images from low dose and few-view datasets in order to reduce the radiation dose. Nonetheless, the sparse regularization term adopted in this approach is the l0-norm, which cannot guarantee the global convergence of the proposed algorithm. To address this problem, in this study we introduced the l1-norm dictionary learning penalty into the SIR framework for low dose CT image reconstruction, and developed an alternating minimization algorithm to minimize the associated objective function, which transforms the CT image reconstruction problem into a sparse coding subproblem and an image updating subproblem. During the image updating process, an efficient model function approach based on the balancing principle is applied to choose the regularization parameters. The proposed alternating minimization algorithm was evaluated first using real projection data of a sheep lung CT perfusion and then using numerical simulations based on a sheep lung CT image and a chest image. Both visual assessment and quantitative comparison in terms of root mean square error (RMSE) and structural similarity (SSIM) index demonstrated that the new image reconstruction algorithm yields performance similar to the l0-norm dictionary learning penalty and outperforms the conventional filtered backprojection (FBP) and total variation (TV) minimization algorithms.
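
    The l1-norm sparse coding subproblem has a closed-form proximal step, elementwise soft thresholding, which is the standard ingredient an alternating scheme of this kind relies on (a generic ISTA-style Python sketch, not the authors' full algorithm):

        import numpy as np

        def soft_threshold(z, t):
            """Prox of t*||.||_1: elementwise shrinkage used in l1 sparse coding."""
            return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

        def ista_step(a, D, x, t, L):
            """One iteration for min_a 0.5*||x - D a||^2 + t*||a||_1.

            D: dictionary (columns = atoms), x: image patch,
            L: Lipschitz constant of the data term, roughly ||D||^2.
            """
            g = D.T @ (D @ a - x)                  # gradient of the data fidelity term
            return soft_threshold(a - g / L, t / L)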

  15. Hybrid quantum-classical hierarchy for mitigation of decoherence and determination of excited states

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McClean, Jarrod R.; Kimchi-Schwartz, Mollie E.; Carter, Jonathan

    Using quantum devices supported by classical computational resources is a promising approach to quantum-enabled computation. One powerful example of such a hybrid quantum-classical approach optimized for classically intractable eigenvalue problems is the variational quantum eigensolver, built to utilize quantum resources for the solution of eigenvalue problems and optimizations with minimal coherence time requirements by leveraging classical computational resources. These algorithms are among the leading candidates to be the first to achieve supremacy over classical computation. Here, we provide evidence for the conjecture that variational approaches can automatically suppress even nonsystematic decoherence errors by introducing an exactly solvable channel model of variational state preparation. Moreover, we develop a more general hierarchy of measurement and classical computation that allows one to obtain increasingly accurate solutions by leveraging additional measurements and classical resources. Finally, we demonstrate numerically on a sample electronic system that this method both allows for the accurate determination of excited electronic states and reduces the impact of decoherence, without using any additional quantum coherence time or formal error-correction codes.

  16. A continuum of periodic solutions to the planar four-body problem with two pairs of equal masses

    NASA Astrophysics Data System (ADS)

    Ouyang, Tiancheng; Xie, Zhifu

    2018-04-01

    In this paper, we apply the variational method with Structural Prescribed Boundary Conditions (SPBC) to prove the existence of periodic and quasi-periodic solutions for the planar four-body problem with two pairs of equal masses $m_1 = m_3$ and $m_2 = m_4$. A path $q(t)$ on $[0, T]$ satisfies the SPBC if the boundaries $q(0) \in A$ and $q(T) \in B$, where A and B are two structural configuration spaces in $(\mathbb{R}^2)^4$ and they depend on a rotation angle $\theta \in (0, 2\pi)$ and the mass ratio $\mu = m_2/m_1 \in \mathbb{R}^+$. We show that there is a region $\Omega \subseteq (0, 2\pi) \times \mathbb{R}^+$ such that there exists at least one local minimizer of the Lagrangian action functional on the path space satisfying the SPBC $\{ q(t) \in H^1([0, T], (\mathbb{R}^2)^4) \mid q(0) \in A, q(T) \in B \}$ for any $(\theta, \mu) \in \Omega$. The corresponding minimizing path of the minimizer can be extended to a non-homographic periodic solution if $\theta$ is commensurable with $\pi$, or a quasi-periodic solution if $\theta$ is not commensurable with $\pi$. In the variational method with the SPBC, we only impose constraints on the boundary and we do not impose any symmetry constraint on solutions. Instead, we prove that our solutions that are extended from the initial minimizing paths possess certain symmetries. The periodic solutions can be further classified as simple choreographic solutions, double choreographic solutions and non-choreographic solutions. Among the many stable simple choreographic orbits, the most extraordinary one is the stable star pentagon choreographic solution when $(\theta, \mu) = (4\pi/5, 1)$. Remarkably, the unequal-mass variants of the stable star pentagon are just as stable as the equal-mass choreographies.

  17. Bounds on complex polarizabilities and a new perspective on scattering by a lossy inclusion

    NASA Astrophysics Data System (ADS)

    Milton, Graeme W.

    2017-09-01

    Here, we obtain explicit formulas for bounds on the complex electrical polarizability at a given frequency of an inclusion with known volume that follow directly from the quasistatic bounds of Bergman and Milton on the effective complex dielectric constant of a two-phase medium. We also describe how analogous bounds on the orientationally averaged bulk and shear polarizabilities at a given frequency can be obtained from bounds on the effective complex bulk and shear moduli of a two-phase medium obtained by Milton, Gibiansky, and Berryman, using the quasistatic variational principles of Cherkaev and Gibiansky. We also show how the polarizability problem and the acoustic scattering problem can both be reformulated in an abstract setting as "Y problems." In the acoustic scattering context, to avoid explicit introduction of the Sommerfeld radiation condition, we introduce auxiliary fields at infinity and an appropriate "constitutive law" there, which forces the Sommerfeld radiation condition to hold. As a consequence, we obtain minimization variational principles for acoustic scattering that can be used to obtain bounds on the complex backwards scattering amplitude. Some explicit elementary bounds are given.

  18. Phase retrieval from intensity-only data by relative entropy minimization.

    PubMed

    Deming, Ross W

    2007-11-01

    A recursive algorithm, which appears to be new, is presented for estimating the amplitude and phase of a wave field from intensity-only measurements on two or more scan planes at different axial positions. The problem is framed as a nonlinear optimization, in which the angular spectrum of the complex field model is adjusted in order to minimize the relative entropy, or Kullback-Leibler divergence, between the measured and reconstructed intensities. The most common approach to this so-called phase retrieval problem is a variation of the well-known Gerchberg-Saxton algorithm devised by Misell (J. Phys. D6, L6, 1973), which is efficient and extremely simple to implement. The new algorithm has a computational structure that is very similar to Misell's approach, despite the fundamental difference in the optimization criteria used for each. Based upon results from noisy simulated data, the new algorithm appears to be more robust than Misell's approach and to produce better results from low signal-to-noise ratio data. The convergence of the new algorithm is examined.
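
    For contrast with the entropy-based recursion, a minimal Misell-type two-plane iteration can be sketched as follows (an illustrative baseline assuming free-space angular spectrum propagation between the scan planes; the grid and wavelength parameters are placeholders, and this is not the new algorithm itself):

        import numpy as np

        def angular_spectrum_propagate(u, dz, wavelength, dx):
            """Propagate a sampled complex field u by distance dz in free space."""
            n = u.shape[0]
            fx = np.fft.fftfreq(n, d=dx)
            FX, FY = np.meshgrid(fx, fx)
            arg = 1.0 - (wavelength * FX)**2 - (wavelength * FY)**2
            kz = 2 * np.pi / wavelength * np.sqrt(np.maximum(arg, 0.0))  # drop evanescent part
            return np.fft.ifft2(np.fft.fft2(u) * np.exp(1j * kz * dz))

        def misell(amp1, amp2, dz, wavelength, dx, iters=100):
            """Alternate between planes, imposing the measured amplitude at each."""
            u = amp1.astype(complex)                       # start with zero phase
            for _ in range(iters):
                v = angular_spectrum_propagate(u, dz, wavelength, dx)
                v = amp2 * np.exp(1j * np.angle(v))        # keep phase, replace amplitude
                u = angular_spectrum_propagate(v, -dz, wavelength, dx)
                u = amp1 * np.exp(1j * np.angle(u))
            return u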

  19. Feedback control for unsteady flow and its application to the stochastic Burgers equation

    NASA Technical Reports Server (NTRS)

    Choi, Haecheon; Temam, Roger; Moin, Parviz; Kim, John

    1993-01-01

    The study applies mathematical methods of control theory to the problem of controlling fluid flow, with the long-range objective of developing effective methods for the control of turbulent flows. Model problems are employed to show how, in the formalism and language of control theory, the problem of controlling turbulence can be cast as a problem in optimal control theory. Calculus-of-variations methods, via the adjoint state and gradient algorithms, are used to present a suboptimal control and feedback procedure for stationary and time-dependent problems. Two types of controls are investigated: distributed and boundary controls. Several cases of both controls are numerically simulated to investigate the performance of the control algorithm. Most cases considered show significant reductions of the costs to be minimized. The dependence of the control algorithm on the time-discretization method is discussed.

  20. Minimal Absent Words in Four Human Genome Assemblies

    PubMed Central

    Garcia, Sara P.; Pinho, Armando J.

    2011-01-01

    Minimal absent words have been computed in genomes of organisms from all domains of life. Here, we aim to contribute to the catalogue of human genomic variation by investigating the variation in number and content of minimal absent words within a species, using four human genome assemblies. We compare the reference human genome GRCh37 assembly, the HuRef assembly of the genome of Craig Venter, the NA12878 assembly from cell line GM12878, and the YH assembly of the genome of a Han Chinese individual. We find the variation in number and content of minimal absent words between assemblies more significant for large and very large minimal absent words, where the biases of sequencing and assembly methodologies become more pronounced. Moreover, we find generally greater similarity between the human genome assemblies sequenced with capillary-based technologies (GRCh37 and HuRef) than between the human genome assemblies sequenced with massively parallel technologies (NA12878 and YH). Finally, as expected, we find the overall variation in number and content of minimal absent words within a species to be generally smaller than the variation between species. PMID:22220210

  1. Stochastic optimal control as non-equilibrium statistical mechanics: calculus of variations over density and current

    NASA Astrophysics Data System (ADS)

    Chernyak, Vladimir Y.; Chertkov, Michael; Bierkens, Joris; Kappen, Hilbert J.

    2014-01-01

    In stochastic optimal control (SOC) one minimizes the average cost-to-go, that consists of the cost-of-control (amount of efforts), cost-of-space (where one wants the system to be) and the target cost (where one wants the system to arrive), for a system participating in forced and controlled Langevin dynamics. We extend the SOC problem by introducing an additional cost-of-dynamics, characterized by a vector potential. We propose derivation of the generalized gauge-invariant Hamilton-Jacobi-Bellman equation as a variation over density and current, suggest hydrodynamic interpretation and discuss examples, e.g., ergodic control of a particle-within-a-circle, illustrating non-equilibrium space-time complexity.
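
    In a standard SOC setting of the kind described above, the three cost contributions can be written schematically (our sketch of the usual formulation; the paper's extension adds a cost-of-dynamics through a vector potential coupled to the current):

        J[u] = \mathbb{E}\left[ \int_0^T \left( \tfrac{1}{2} u_t^\top R\, u_t + V(x_t, t) \right) dt + \Phi(x_T) \right], \qquad dx_t = \left( f(x_t, t) + g(x_t, t)\, u_t \right) dt + \sigma\, dW_t,

    where the quadratic term is the cost-of-control, $V$ the cost-of-space, and $\Phi$ the target cost.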

  2. Solving Variational Problems and Partial Differential Equations Mapping into General Target Manifolds

    DTIC Science & Technology

    2002-01-01

    1998. [36] T. Sakai, Riemannian Geometry, AMS Translations of Mathematical Monographs, vol. 149. [37] N. Sochen, R. Kimmel, and R. Malladi, "A general..." ...matical Physics 107, pp. 649-705, 1986. [5] V. Caselles, R. Kimmel, G. Sapiro, and C. Sbert, "Minimal surfaces based object segmentation," IEEE-PAMI... June 2000. [9] R. Cohen, R. M. Hardt, D. Kinderlehrer, S. Y. Lin, and M. Luskin, "Minimum energy configurations for liquid crystals: Computational...

  3. A Variational Principle for Reconstruction of Elastic Deformations in Shear Deformable Plates and Shells

    NASA Technical Reports Server (NTRS)

    Tessler, Alexander; Spangler, Jan L.

    2003-01-01

    A variational principle is formulated for the inverse problem of full-field reconstruction of three-dimensional plate/shell deformations from experimentally measured surface strains. The formulation is based upon the minimization of a least squares functional that uses the complete set of strain measures consistent with linear, first-order shear-deformation theory. The formulation, which accommodates transverse shear deformation, is applicable to the analysis of thin and moderately thick plate and shell structures. The main benefit of the variational principle is that it is well suited for $C^0$-continuous displacement finite element discretizations, thus enabling the development of robust algorithms for application to complex civil and aeronautical structures. The methodology is especially aimed at the next generation of aerospace vehicles for use in real-time structural health monitoring systems.

  4. Linear Matrix Inequality Method for a Quadratic Performance Index Minimization Problem with a class of Bilinear Matrix Inequality Conditions

    NASA Astrophysics Data System (ADS)

    Tanemura, M.; Chida, Y.

    2016-09-01

    Many control system design problems can be expressed as the minimization of a performance index under BMI conditions. A minimization problem expressed with LMIs, however, can be solved easily because of the convexity of LMIs. Therefore, many researchers have studied how to transform a variety of control design problems into convex minimization problems expressed as LMIs. This paper proposes an LMI method for a quadratic performance index minimization problem with a class of BMI conditions. The minimization problem treated in this paper includes, among others, the design of state-feedback gains for switched systems. The effectiveness of the proposed method is verified through a state-feedback gain design for switched systems and a numerical simulation using the designed feedback gains.
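
    The convexity that makes the LMI form tractable can be stated with the standard semidefinite program template (a generic sketch, not the paper's specific conditions):

        \min_{x \in \mathbb{R}^n} \; c^\top x \quad \text{subject to} \quad F(x) = F_0 + \sum_{i=1}^n x_i F_i \succeq 0,

    where the $F_i$ are symmetric matrices, so the feasible set is convex; a BMI instead contains bilinear terms $x_i x_j G_{ij}$ and is non-convex in general, which is why recasting a class of BMI conditions as LMIs is valuable.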

  5. Fractional-order TV-L2 model for image denoising

    NASA Astrophysics Data System (ADS)

    Chen, Dali; Sun, Shenshen; Zhang, Congrong; Chen, YangQuan; Xue, Dingyu

    2013-10-01

    This paper proposes a new fractional order total variation (TV) denoising method, which provides a more elegant and effective way of treating algorithm implementation, the ill-posed inverse problem, regularization parameter selection, and the blocky effect. Two fractional order TV-L2 models are constructed for image denoising. The majorization-minimization (MM) algorithm is used to decompose these two complex fractional TV optimization problems into a set of linear optimization problems, which can be solved by the conjugate gradient algorithm. The final adaptive numerical procedure is given. Finally, we report experimental results showing that the proposed methodology avoids the blocky effect and achieves state-of-the-art performance. In addition, two medical image processing experiments are presented to demonstrate the validity of the proposed methodology.

  6. Minimizing transient influence in WHPA delineation: An optimization approach for optimal pumping rate schemes

    NASA Astrophysics Data System (ADS)

    Rodriguez-Pretelin, A.; Nowak, W.

    2017-12-01

    For most groundwater protection management programs, Wellhead Protection Areas (WHPAs) have served as the primary protection measure. In their delineation, the influence of time-varying groundwater flow conditions is often underestimated because steady-state assumptions are commonly made. However, it has been demonstrated that temporal variations lead to significant changes in the required size and shape of WHPAs. Apart from natural transient groundwater drivers (e.g., changes in the regional angle of flow direction and seasonal natural groundwater recharge), anthropogenic causes such as transient pumping rates are among the most influential factors that require larger WHPAs. We hypothesize that WHPA programs that integrate adaptive and optimized pumping-injection management schemes can counter transient effects and thus reduce the additional areal demand in well protection under transient conditions. The main goal of this study is to present a novel management framework that optimizes pumping schemes dynamically, in order to minimize the impact triggered by transient conditions in WHPA delineation. For optimizing pumping schemes, we consider three objectives: (1) to minimize the risk of pumping water from outside a given WHPA, (2) to maximize the groundwater supply, and (3) to minimize the involved operating costs. We solve transient groundwater flow with an available transient groundwater model and Lagrangian particle tracking. The optimization problem is formulated as a dynamic programming problem. Two different optimization approaches are explored: the first aims for single-objective optimization under objective (1) only; the second performs multiobjective optimization under all three objectives, where compromise pumping rates are selected from the current Pareto front. Finally, we look for WHPA outlines that are as small as possible, yet allow the optimization problem to find the most suitable solutions.

  7. Improving the performance of minimizers and winnowing schemes

    PubMed Central

    Marçais, Guillaume; Pellow, David; Bork, Daniel; Orenstein, Yaron; Shamir, Ron; Kingsford, Carl

    2017-01-01

    Motivation: The minimizers scheme is a method for selecting k-mers from sequences. It is used in many bioinformatics software tools to bin comparable sequences or to sample a sequence in a deterministic fashion at approximately regular intervals, in order to reduce memory consumption and processing time. Although very useful, the minimizers selection procedure has undesirable behaviors (e.g. too many k-mers are selected when processing certain sequences). Some of these problems were already known to the authors of the minimizers technique, and the natural lexicographic ordering of k-mers used by minimizers was recognized as their origin. Many software tools using minimizers employ ad hoc variations of the lexicographic order to alleviate those issues. Results: We provide an in-depth analysis of the effect of k-mer ordering on the performance of the minimizers technique. By using small universal hitting sets (a recently defined concept), we show how to significantly improve the performance of minimizers and avoid some of their worst behaviors. Based on these results, we encourage bioinformatics software developers to use an ordering based on a universal hitting set or, if not possible, a randomized ordering, rather than the lexicographic order. This analysis also settles negatively a conjecture (by Schleimer et al.) on the expected density of minimizers in a random sequence. Availability and Implementation: The software used for this analysis is available on GitHub: https://github.com/gmarcais/minimizers.git. Contact: gmarcais@cs.cmu.edu or carlk@cs.cmu.edu PMID:28881970
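
    The windowed selection that the minimizers scheme performs, combined with the randomized ordering the authors recommend, can be sketched in a few lines of Python (the hash choice and tie-breaking rule here are illustrative assumptions):

        def kmers(seq, k):
            return [seq[i:i + k] for i in range(len(seq) - k + 1)]

        def minimizers(seq, k, w, order=hash):
            """Select one k-mer per window of w consecutive k-mers, smallest by `order`.

            Python's built-in hash stands in for a randomized ordering;
            the lexicographic order would be order=lambda s: s.
            """
            kms = kmers(seq, k)
            selected = set()
            for i in range(len(kms) - w + 1):
                window = kms[i:i + w]
                j = min(range(w), key=lambda t: (order(window[t]), t))  # leftmost on ties
                selected.add((i + j, window[j]))
            return sorted(selected)

        print(minimizers("ACGTACGTGGT", k=3, w=4))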

  8. STME Hydrogen Mixer Study

    NASA Technical Reports Server (NTRS)

    Blumenthal, Rob; Kim, Dongmoon; Bache, George

    1992-01-01

    The hydrogen mixer for the Space Transportation Main Engine is used to mix cold hydrogen bypass flow with warm hydrogen coolant chamber gas, which is then fed to the injectors. It is very important to have a uniform fuel temperature at the injectors in order to minimize mixture ratio problems due to the fuel density variations. In addition, the fuel at the injector has certain total pressure requirements. In order to achieve these objectives, the hydrogen mixer must provide a thoroughly mixed fluid with a minimum pressure loss. The AEROVISC computational fluid dynamics (CFD) code was used to analyze the STME hydrogen mixer, and proved to be an effective tool in optimizing the mixer design. AEROVISC, which solves the Reynolds Stress-Averaged Navier-Stokes equations in primitive variable form, was used to assess the effectiveness of different mixer designs. Through a parametric study of mixer design variables, an optimal design was selected which minimized mixed fuel temperature variation and fuel mixer pressure loss. The use of CFD in the design process of the STME hydrogen mixer was effective in achieving an optimal mixer design while reducing the amount of hardware testing.

  9. Improvements on ν-Twin Support Vector Machine.

    PubMed

    Khemchandani, Reshma; Saigal, Pooja; Chandra, Suresh

    2016-07-01

    In this paper, we propose two novel binary classifiers, termed "Improvements on ν-Twin Support Vector Machine": Iν-TWSVM and Iν-TWSVM (Fast), that are motivated by the ν-Twin Support Vector Machine (ν-TWSVM). Similar to ν-TWSVM, Iν-TWSVM determines two nonparallel hyperplanes such that they are closer to their respective classes and are at least ρ distance away from the other class. The significant advantage of Iν-TWSVM over ν-TWSVM is that Iν-TWSVM solves one smaller-sized Quadratic Programming Problem (QPP) and one Unconstrained Minimization Problem (UMP), as compared to solving two related QPPs in ν-TWSVM. Further, Iν-TWSVM (Fast) avoids solving a smaller-sized QPP and instead transforms it into a unimodal function, which can be solved using line search methods; similar to Iν-TWSVM, the other problem is solved as a UMP. Due to their novel formulation, the proposed classifiers are faster than ν-TWSVM and have comparable generalization ability. Iν-TWSVM also implements the structural risk minimization (SRM) principle by introducing a regularization term, along with minimizing the empirical risk. The other properties of Iν-TWSVM, related to support vectors (SVs), are similar to those of ν-TWSVM. To test the efficacy of the proposed methods, experiments have been conducted on a wide range of UCI datasets and a skewed variation of the NDC datasets. We also present an application of Iν-TWSVM as a binary classifier for pixel classification of color images. Copyright © 2016 Elsevier Ltd. All rights reserved.

  10. Isometric deformations of unstretchable material surfaces, a spatial variational treatment

    NASA Astrophysics Data System (ADS)

    Chen, Yi-Chao; Fosdick, Roger; Fried, Eliot

    2018-07-01

    The stored energy of an unstretchable material surface is assumed to depend only upon the curvature tensor. By control of its edge(s), the surface is deformed isometrically from its planar undistorted reference configuration into an equilibrium shape. That shape is to be determined from a suitably constrained variational problem as a state of relative minimal potential energy. We pose the variational problem as one of relative minimum potential energy in a spatial form, wherein the deformation of a flat, undistorted region D in E2 to its distorted form S in E3 is assumed specified. We then apply the principle that the first variation of the potential energy, expressed as a functional over S ∪ ∂S, must vanish for all admissible variations that correspond to isometric deformations from the distorted configuration S and that also contain the essence of flatness that characterizes the reference configuration D, but is not covered by the single statement that the variation of S correspond to an isometric deformation. We emphasize the commonly overlooked condition that the spatial expression of the variational problem requires an additional variational constraint of zero Gaussian curvature to ensure that variations from S that are isometric deformations also contain the notion of flatness. In this context, it is particularly revealing to observe that the two constraints produce distinct, but essential and complementary, conditions on the first variation of S. The resulting first variation integral condition, together with the constraints, may be applied, for example, to the case of a flat, undistorted, rectangular strip D that is deformed isometrically into a closed ring S by connecting its short edges and specifying that its long edges are free of loading and, therefore, subject to zero traction and couple traction. The elementary example of a closed ring without twist as a state of relative minimum potential energy is discussed in detail, and the bending of the strip by opposing specific bending moments on its short edges is treated as a particular case. Finally, the constrained variational problem, with the introduction of appropriate constraint reactions as Lagrangian multipliers to account for the requirements that the deformation from D to S is isometric and that D is flat, is formulated in the spatial form, and the associated Euler-Lagrange equations are derived. We then solve the Euler-Lagrange equations for two representative problems in which a planar undistorted rectangular material strip is isometrically deformed by applied edge tractions and couple tractions (i.e., specific edge moments) into (i) a bent and twisted circular cylindrical helical state, and (ii) a state conformal with the surface of a right circular conical form.
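
    In coordinates, the two complementary constraints emphasized above can be written compactly as follows; this is our own hedged transcription, with y : D → E3 denoting the deformation and I, II the first and second fundamental forms of S:

      \[
        (\nabla y)^{\top} \nabla y \;=\; \mathbf{1}_{2\times 2}
        \quad \text{(isometry: the flat metric of } D \text{ is preserved)},
        \qquad
        K \;=\; \frac{\det \mathrm{II}}{\det \mathrm{I}} \;=\; 0
        \quad \text{(flatness inherited from } D\text{)}.
      \]

    Admissible variations of S must respect both conditions, which is precisely why the second (Gaussian curvature) constraint cannot be dropped from the spatial formulation.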

  11. Rate-independent dissipation in phase-field modelling of displacive transformations

    NASA Astrophysics Data System (ADS)

    Tůma, K.; Stupkiewicz, S.; Petryk, H.

    2018-05-01

    In this paper, rate-independent dissipation is introduced into the phase-field framework for modelling displacive transformations, such as martensitic phase transformation and twinning. The finite-strain phase-field model developed recently by the present authors is here extended beyond the limitations of purely viscous dissipation. The variational formulation, in which the evolution problem is formulated as a constrained minimization problem for a global rate-potential, is enhanced by including a mixed-type dissipation potential that combines viscous and rate-independent contributions. An effective computational treatment of the resulting incremental problem of non-smooth optimization is developed by employing the augmented Lagrangian method, as sketched below. It is demonstrated that a single Lagrange multiplier field suffices to handle the dissipation potential vertex and simultaneously to enforce physical constraints on the order parameter. In this way, the initially non-smooth problem of evolution is converted into a smooth stationarity problem. The model is implemented in a finite-element code and applied to solve two- and three-dimensional boundary value problems representative of shape memory alloys.
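
    The augmented Lagrangian mechanism can be seen on a toy problem: the sketch below (our own minimal example, not the paper's finite-element rate-potential) minimizes a smooth function under one equality constraint, with an inner gradient descent and the standard multiplier update.

      import numpy as np

      def augmented_lagrangian(f_grad, c, c_grad, x, lam=0.0, rho=10.0,
                               outer=20, inner=200, lr=1e-2):
          """Minimize f subject to c(x) = 0 via the augmented Lagrangian
          L(x, lam) = f(x) + lam*c(x) + (rho/2)*c(x)**2."""
          for _ in range(outer):
              for _ in range(inner):                       # crude inner minimization
                  g = f_grad(x) + (lam + rho * c(x)) * c_grad(x)
                  x = x - lr * g
              lam = lam + rho * c(x)                       # multiplier update
          return x, lam

      # toy problem: minimize ||x||^2 subject to x0 + x1 = 1 (solution [0.5, 0.5])
      x, lam = augmented_lagrangian(lambda x: 2 * x,
                                    lambda x: x[0] + x[1] - 1.0,
                                    lambda x: np.array([1.0, 1.0]),
                                    np.zeros(2))
      print(np.round(x, 4), round(lam, 4))   # ~[0.5 0.5], lam ~ -1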

  12. On the application of copula in modeling maintenance contract

    NASA Astrophysics Data System (ADS)

    Iskandar, B. P.; Husniah, H.

    2016-02-01

    This paper deals with the application of copula in maintenance contracts for a repairable item. Failures of the item are modeled using a two-dimensional approach involving both the age and the usage of the item, which requires a bivariate distribution to model failures. When the item fails, corrective maintenance (CM) takes the form of minimal repair. CM can be outsourced to an external agent or done in house. The decision problem for the owner is to find the maximum total profit, whilst that for the agent is to determine the optimal price of the contract. We obtain the mathematical models of the decision problems for the owner as well as the agent using a Nash game theory formulation.
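
    A sketch of how a copula couples the two failure dimensions: sample (age, usage) pairs whose dependence comes from a Clayton copula. The Clayton family and the Weibull marginals below are illustrative assumptions of ours, not the distributions fitted in the paper.

      import math
      import random

      def sample_clayton(theta, n, seed=1):
          """Sample n pairs (u, v) from a Clayton copula by the conditional method."""
          rng = random.Random(seed)
          pairs = []
          for _ in range(n):
              u, w = rng.random(), rng.random()
              # invert the conditional distribution C(v | u) at w
              v = (u ** -theta * (w ** (-theta / (1.0 + theta)) - 1.0) + 1.0) ** (-1.0 / theta)
              pairs.append((u, v))
          return pairs

      def weibull_inv(u, shape, scale):
          """Inverse CDF of a Weibull distribution (illustrative marginals)."""
          return scale * (-math.log(1.0 - u)) ** (1.0 / shape)

      # joint (age, usage) at failure: Weibull marginals coupled by a Clayton copula
      for u, v in sample_clayton(theta=2.0, n=5):
          age = weibull_inv(u, shape=1.5, scale=4.0)       # years
          usage = weibull_inv(v, shape=1.2, scale=50.0)    # thousands of km, say
          print(round(age, 2), round(usage, 1))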

  13. Soil-Transmitted Helminthiasis and Vitamin A Deficiency: Two Problems, One Policy.

    PubMed

    Strunz, Eric C; Suchdev, Parminder S; Addiss, David G

    2016-01-01

    Vitamin A deficiency (VAD) and soil-transmitted helminthiasis (STH) represent two widely prevalent and often overlapping global health problems. Approximately 75% of countries with moderate or severe VAD are coendemic for STH. We reviewed the literature on the complex relationship between STH and VAD. Treatment for STH significantly increases provitamin A (e.g., β-carotene) levels but is associated with minimal increases in preformed vitamin A (retinol). Interpretation of the data is complicated by variations in STH infection intensity and limitations of vitamin A biomarkers. Despite these challenges, increased coordination of STH and VAD interventions represents an important public health opportunity.

  14. Hamiltonian stability for weighted measure and generalized Lagrangian mean curvature flow

    NASA Astrophysics Data System (ADS)

    Kajigaya, Toru; Kunikawa, Keita

    2018-06-01

    In this paper, we generalize several results for the Hamiltonian stability and the mean curvature flow of Lagrangian submanifolds in a Kähler-Einstein manifold to more general Kähler manifolds, including a Fano manifold equipped with a Kähler form ω ∈ 2πc₁(M), by using the method proposed by Behrndt (2011). Namely, we first consider a weighted measure on a Lagrangian submanifold L in a Kähler manifold M and investigate the variational problem of L for the weighted volume functional. We call a stationary point of the weighted volume functional f-minimal, and define the notion of Hamiltonian f-stability as a local minimizer under Hamiltonian deformations. We show such examples naturally appear in a toric Fano manifold. Moreover, we consider the generalized Lagrangian mean curvature flow in a Fano manifold which is introduced by Behrndt and Smoczyk-Wang. We generalize the result of H. Li, and show that if the initial Lagrangian submanifold is a small Hamiltonian deformation of an f-minimal and Hamiltonian f-stable Lagrangian submanifold, then the generalized MCF converges exponentially fast to an f-minimal Lagrangian submanifold.

  15. Optimization-based additive decomposition of weakly coercive problems with applications

    DOE PAGES

    Bochev, Pavel B.; Ridzal, Denis

    2016-01-27

    In this study, we present an abstract mathematical framework for an optimization-based additive decomposition of a large class of variational problems into a collection of concurrent subproblems. The framework replaces a given monolithic problem by an equivalent constrained optimization formulation in which the subproblems define the optimization constraints and the objective is to minimize the mismatch between their solutions. The significance of this reformulation stems from the fact that one can solve the resulting optimality system by an iterative process involving only solutions of the subproblems. Consequently, assuming that stable numerical methods and efficient solvers are available for every subproblem, our reformulation leads to robust and efficient numerical algorithms for a given monolithic problem by breaking it into subproblems that can be handled more easily. An application of the framework to the Oseen equations illustrates its potential.

  16. Equilibrium shapes of a heterogeneous bubble in an electric field: a variational formulation and numerical verifications

    NASA Astrophysics Data System (ADS)

    Wang, Hanxiong; Liu, Liping; Liu, Dong

    2017-03-01

    The equilibrium shape of a bubble/droplet in an electric field is important for electrowetting over dielectrics (EWOD), electrohydrodynamic (EHD) enhancement of heat transfer, and electro-deformation of a single biological cell, among others. In this work, we develop a general variational formulation accounting for electro-mechanical couplings. In the context of EHD, we identify the free energy functional and the associated energy minimization problem that determines the equilibrium shape of a bubble in an electric field. Based on this variational formulation, we implement a fixed-mesh level-set gradient method for computing the equilibrium shapes. This numerical scheme is efficient and validated by comparison with analytical solutions in the absence of an electric field and with experimental results in its presence. We also present simulation results for zero gravity, which will be useful for space applications. The variational formulation and numerical scheme are anticipated to have broad applications in areas of EWOD, EHD and electro-deformation in biomechanics.

  17. The optimization of self-phased arrays for diurnal motion tracking of synchronous satellites

    NASA Technical Reports Server (NTRS)

    Theobold, D. M.; Hodge, D. B.

    1977-01-01

    The diurnal motion of a synchronous satellite necessitates mechanical tracking when a large-aperture, high-gain antenna is employed at the earth terminal. An alternative solution to this tracking problem is to use a self-phased array consisting of a number of fixed-pointed elements, each with moderate directivity. Non-mechanical tracking and adequate directive gain are achieved electronically by phase-coherent summing of the element outputs. The element beamwidths provide overlapping area coverage of the satellite motion but introduce a diurnal variation into the array gain. The optimum beamwidth and pointing direction of the elements can be obtained under the condition that the array gain is maximized simultaneously with the minimization of the diurnal variation.
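
    The trade-off can be reproduced numerically under simple assumptions (ours, not the paper's model): a Gaussian element pattern, the ~27000/BW^2 rule of thumb for peak directivity, and a 2-degree diurnal excursion. The search below picks the beamwidth that maximizes the worst-case gain seen over the satellite's daily wander.

      import math

      def gain_dB(theta_off, bw):
          """Gaussian element pattern: rule-of-thumb peak directivity rolled
          off with angular offset theta_off (degrees); -3 dB at bw/2."""
          peak = 10 * math.log10(27000.0 / bw ** 2)
          return peak - 12.0 * (theta_off / bw) ** 2

      offsets = [i / 10.0 for i in range(21)]              # satellite wanders 0..2 degrees

      best = None
      for bw in [b / 10.0 for b in range(10, 101)]:        # candidate beamwidths 1..10 deg
          gains = [gain_dB(t, bw) for t in offsets]
          worst, ripple = min(gains), max(gains) - min(gains)
          if best is None or worst > best[1]:
              best = (bw, worst, ripple)
      print("beamwidth %.1f deg, worst-case gain %.2f dBi, diurnal ripple %.2f dB" % best)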

  18. Discontinuous gradient differential equations and trajectories in the calculus of variations

    NASA Astrophysics Data System (ADS)

    Bogaevskii, I. A.

    2006-12-01

    The concept of gradient of smooth functions is generalized for their sums with concave functions. An existence, uniqueness, and continuous dependence theorem for increasing time is formulated and proved for solutions of an ordinary differential equation the right-hand side of which is the gradient of the sum of a concave and a smooth function. With the use of this result a physically natural motion of particles, well defined even at discontinuities of the velocity field, is constructed in the variational problem of the minimal mechanical action in a space of arbitrary dimension. For such a motion of particles in the plane all typical cases of the birth and the interaction of point clusters of positive mass are described.

  19. Adaptive regularization of the NL-means: application to image and video denoising.

    PubMed

    Sutour, Camille; Deledalle, Charles-Alban; Aujol, Jean-François

    2014-08-01

    Image denoising is a central problem in image processing and is often a necessary step prior to higher-level analysis such as segmentation, reconstruction, or super-resolution. The nonlocal means (NL-means) perform denoising by exploiting the natural redundancy of patterns inside an image; they perform a weighted average of pixels whose neighborhoods (patches) are close to each other. This reduces the noise significantly while preserving most of the image content. While it performs well on flat areas and textures, the method suffers from two opposite drawbacks: it might over-smooth low-contrast areas or leave residual noise around edges and singular structures. Denoising can also be performed by total variation minimization (the Rudin, Osher and Fatemi model), which restores regular images but is prone to over-smoothed textures, staircasing effects, and contrast losses. We introduce in this paper a variational approach that corrects the over-smoothing and reduces the residual noise of the NL-means by adaptively regularizing nonlocal methods with the total variation. The proposed regularized NL-means algorithm combines these methods and reduces both of their respective defects by minimizing an adaptive total variation with a nonlocal data fidelity term. Besides, this model adapts to different noise statistics, and a fast solution can be obtained in the general case of the exponential family. We develop this model for image denoising and adapt it to video denoising with 3D patches.
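
    For reference, a minimal NL-means kernel (without the adaptive total variation coupling the paper proposes). Patch size, search radius and the filtering parameter h are illustrative choices of ours.

      import numpy as np

      def nl_means(img, patch=1, search=5, h=0.1):
          """Minimal NL-means: each pixel becomes a weighted average of nearby
          pixels, weighted by patch similarity exp(-||P_i - P_j||^2 / h^2)."""
          pad = patch + search
          padded = np.pad(img, pad, mode="reflect")
          out = np.zeros_like(img)
          for i in range(img.shape[0]):
              for j in range(img.shape[1]):
                  ci, cj = i + pad, j + pad
                  ref = padded[ci - patch:ci + patch + 1, cj - patch:cj + patch + 1]
                  num = den = 0.0
                  for di in range(-search, search + 1):
                      for dj in range(-search, search + 1):
                          ni, nj = ci + di, cj + dj
                          cand = padded[ni - patch:ni + patch + 1, nj - patch:nj + patch + 1]
                          w = np.exp(-np.sum((ref - cand) ** 2) / h ** 2)
                          num += w * padded[ni, nj]
                          den += w
                  out[i, j] = num / den
          return out

      rng = np.random.default_rng(0)
      clean = np.zeros((32, 32)); clean[:, 16:] = 1.0          # step edge
      noisy = clean + 0.1 * rng.standard_normal(clean.shape)
      print(float(np.abs(nl_means(noisy) - clean).mean()))     # well below the noise level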

  20. Constrained variation in Jastrow method at high density

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Owen, J.C.; Bishop, R.F.; Irvine, J.M.

    1976-11-01

    A method is derived for constraining the correlation function in a Jastrow variational calculation which permits the truncation of the cluster expansion after two-body terms, and which permits exact minimization of the two-body cluster by functional variation. This method is compared with one previously proposed by Pandharipande and is found to be superior both theoretically and practically. The method is tested both on liquid ³He, by using the Lennard-Jones potential, and on the model system of neutrons treated as Boltzmann particles ("homework" problem). Good agreement is found both with experiment and with other calculations involving the explicit evaluation of higher-order terms in the cluster expansion. The method is then applied to a more realistic model of a neutron gas up to a density of 4 neutrons per fm³, and is found to give ground-state energies considerably lower than those of Pandharipande. (AIP)

  1. Evaluation of body-wise and organ-wise registrations for abdominal organs

    NASA Astrophysics Data System (ADS)

    Xu, Zhoubing; Panjwani, Sahil A.; Lee, Christopher P.; Burke, Ryan P.; Baucom, Rebeccah B.; Poulose, Benjamin K.; Abramson, Richard G.; Landman, Bennett A.

    2016-03-01

    Identifying cross-sectional and longitudinal correspondence in the abdomen on computed tomography (CT) scans is necessary for quantitatively tracking change and understanding population characteristics, yet abdominal image registration is a challenging problem. The key difficulty is the huge variation in organ dimensions and shapes across subjects. The current standard registration method uses the global, or body-wise, registration technique, which is based on the global topology for alignment. This method, although producing decent results, is substantially influenced by outliers, thus leaving room for significant improvement. Here, we study a new image registration approach using local, organ-wise registration: we first create organ-specific bounding boxes and then use these regions of interest (ROIs) to align references to the target. Based on the Dice Similarity Coefficient (DSC), Mean Surface Distance (MSD) and Hausdorff Distance (HD), the organ-wise approach is demonstrated to give significantly better results by minimizing the distorting effects of organ variations. This paper compares exclusively the two registration methods by providing novel quantitative and qualitative comparison data, and is a subset of the more comprehensive problem of improving multi-atlas segmentation by using organ normalization.

  2. Approximate solution of the p-median minimization problem

    NASA Astrophysics Data System (ADS)

    Il'ev, V. P.; Il'eva, S. D.; Navrotskaya, A. A.

    2016-09-01

    A version of the facility location problem (the well-known p-median minimization problem) and its generalization, the problem of minimizing a supermodular set function, are studied. These problems are NP-hard, and they are approximately solved by a gradient algorithm that is a discrete analog of the steepest descent algorithm. A priori bounds on the worst-case behavior of the gradient algorithm for the problems under consideration are obtained. As a consequence, a bound on the performance guarantee of the gradient algorithm for the p-median minimization problem in terms of the production and transportation cost matrix is obtained.
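
    The flavor of such a gradient (steepest-descent) heuristic is easy to convey in code: starting from any feasible set of p medians, repeatedly apply the best improving single swap until none exists. This generic local-descent sketch is in the spirit of the algorithm analyzed, not the authors' exact procedure or bounds.

      def p_median_cost(dist, medians):
          """Total assignment cost: each client uses its nearest open median."""
          return sum(min(row[m] for m in medians) for row in dist)

      def greedy_swap(dist, p):
          """Discrete steepest descent for the p-median minimization problem."""
          n = len(dist)
          medians = set(range(p))                            # arbitrary feasible start
          best = (p_median_cost(dist, medians), medians)
          improved = True
          while improved:
              improved = False
              for out in list(medians):
                  for inn in set(range(n)) - medians:
                      cand = (medians - {out}) | {inn}
                      cost = p_median_cost(dist, cand)
                      if cost < best[0]:
                          best, improved = (cost, cand), True
              medians = best[1]
          return best

      # toy symmetric distance matrix over 6 clients/sites
      D = [[0, 2, 9, 10, 7, 3], [2, 0, 8, 9, 6, 2], [9, 8, 0, 2, 3, 7],
           [10, 9, 2, 0, 4, 8], [7, 6, 3, 4, 0, 5], [3, 2, 7, 8, 5, 0]]
      print(greedy_swap(D, p=2))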

  3. Minimal complexity control law synthesis

    NASA Technical Reports Server (NTRS)

    Bernstein, Dennis S.; Haddad, Wassim M.; Nett, Carl N.

    1989-01-01

    A paradigm for control law design for modern engineering systems is proposed: minimize control law complexity subject to the achievement of a specified accuracy in the face of a specified level of uncertainty. Correspondingly, the overall goal is to make progress towards the development of a control law design methodology which supports this paradigm. The researchers achieve this goal by developing a general theory of optimal constrained-structure dynamic output feedback compensation, where constrained-structure means that the dynamic structure (e.g., dynamic order, pole locations, zero locations, etc.) of the output feedback compensation is constrained in some way. By applying this theory iteratively, where the iteration occurs over the choice of the compensator dynamic structure, the paradigm stated above can, in principle, be realized. The optimal constrained-structure dynamic output feedback problem is formulated in general terms. An elegant method for reducing optimal constrained-structure dynamic output feedback problems to optimal static output feedback problems is then developed. This reduction procedure makes use of star products, linear fractional transformations, and linear fractional decompositions, and yields as a byproduct a complete characterization of the class of optimal constrained-structure dynamic output feedback problems which can be reduced to optimal static output feedback problems. Issues such as operational/physical constraints, operating-point variations, and processor throughput/memory limitations are considered, and it is shown how anti-windup/bumpless transfer, gain-scheduling, and digital processor implementation can be facilitated by constraining the controller dynamic structure in an appropriate fashion.

  4. SU-C-9A-03: Simultaneous Deconvolution and Segmentation for PET Tumor Delineation Using a Variational Method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, L; Tan, S; Lu, W

    2014-06-01

    Purpose: To implement a new method that integrates deconvolution with segmentation under the variational framework for PET tumor delineation. Methods: Deconvolution and segmentation are both challenging problems in image processing. The partial volume effect (PVE) makes tumor boundaries in PET images blurred, which affects the accuracy of tumor segmentation. Deconvolution aims to obtain a PVE-free image, which can help to improve the segmentation accuracy. Conversely, a correct localization of the object boundaries is helpful to estimate the blur kernel, and thus assist in the deconvolution. In this study, we proposed to solve the two problems simultaneously using a variational method so that they can benefit each other. The energy functional consists of a fidelity term and a regularization term, and the blur kernel was limited to be the isotropic Gaussian kernel. We minimized the energy functional by solving the associated Euler-Lagrange equations and taking the derivative with respect to the parameters of the kernel function. An alternate minimization method was used to iterate between segmentation, deconvolution and blur-kernel recovery. The performance of the proposed method was tested on clinical PET images of patients with non-Hodgkin's lymphoma, and compared with seven other segmentation methods using the dice similarity index (DSI) and volume error (VE). Results: Among all segmentation methods, the proposed one (DSI=0.81, VE=0.05) has the highest accuracy, followed by the active contours without edges (DSI=0.81, VE=0.25), while other methods including the Graph Cut and the Mumford-Shah (MS) method have lower accuracy. A visual inspection shows that the proposed method localizes the real tumor contour very well. Conclusion: The results showed that deconvolution and segmentation can contribute to each other. The proposed variational method solves the two problems simultaneously, and leads to a high performance for tumor segmentation in PET. This work was supported in part by National Natural Science Foundation of China (NNSFC), under Grant Nos. 60971112 and 61375018, and Fundamental Research Funds for the Central Universities, under Grant No. 2012QN086. Wei Lu was supported in part by the National Institutes of Health (NIH) Grant No. R01 CA172638.

  5. Improving the performance of minimizers and winnowing schemes.

    PubMed

    Marçais, Guillaume; Pellow, David; Bork, Daniel; Orenstein, Yaron; Shamir, Ron; Kingsford, Carl

    2017-07-15

    The minimizers scheme is a method for selecting k-mers from sequences. It is used in many bioinformatics software tools to bin comparable sequences or to sample a sequence in a deterministic fashion at approximately regular intervals, in order to reduce memory consumption and processing time. Although very useful, the minimizers selection procedure has undesirable behaviors (e.g. too many k-mers are selected when processing certain sequences). Some of these problems were already known to the authors of the minimizers technique, and the natural lexicographic ordering of k-mers used by minimizers was recognized as their origin. Many software tools using minimizers employ ad hoc variations of the lexicographic order to alleviate those issues. We provide an in-depth analysis of the effect of k-mer ordering on the performance of the minimizers technique. By using small universal hitting sets (a recently defined concept), we show how to significantly improve the performance of minimizers and avoid some of their worst behaviors. Based on these results, we encourage bioinformatics software developers to use an ordering based on a universal hitting set or, if not possible, a randomized ordering, rather than the lexicographic order. This analysis also settles negatively a conjecture (by Schleimer et al.) on the expected density of minimizers in a random sequence. The software used for this analysis is available on GitHub: https://github.com/gmarcais/minimizers.git. gmarcais@cs.cmu.edu or carlk@cs.cmu.edu.

  6. Girsanov's transformation based variance reduced Monte Carlo simulation schemes for reliability estimation in nonlinear stochastic dynamics

    NASA Astrophysics Data System (ADS)

    Kanjilal, Oindrila; Manohar, C. S.

    2017-07-01

    The study considers the problem of simulation-based time-variant reliability analysis of nonlinear randomly excited dynamical systems. Attention is focused on importance sampling strategies based on the application of Girsanov's transformation method. Controls which minimize the distance function, as in the first order reliability method (FORM), are shown to minimize a bound on the sampling variance of the estimator for the probability of failure. Two schemes based on the application of the calculus of variations for selecting control signals are proposed: the first obtains the control force as the solution of a two-point nonlinear boundary value problem, and the second explores the application of the Volterra series in characterizing the controls. The relative merits of these schemes, vis-à-vis the method based on ideas from the FORM, are discussed. Illustrative examples, involving archetypal single degree of freedom (dof) nonlinear oscillators and a multi-degree of freedom nonlinear dynamical system, are presented. The credentials of the proposed procedures are established by comparing the solutions with pertinent results from direct Monte Carlo simulations.
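
    The core mechanics of Girsanov-based importance sampling can be shown on a scalar toy problem: simulate under a shifted drift and reweight each path by the likelihood ratio exp(-∫u dW - ½∫u² dt). The constant control below is a crude stand-in of ours for the variationally selected controls studied in the paper.

      import numpy as np

      def failure_prob(u, n_paths=20000, T=1.0, dt=1e-3, a=1.0, sig=1.0, b=3.0, seed=2):
          """Estimate P(max_t x(t) > b) for dx = -a*x dt + sig dW by simulating
          dx = (-a*x + sig*u) dt + sig dW and applying the Girsanov weight."""
          rng = np.random.default_rng(seed)
          x = np.zeros(n_paths)
          log_lr = np.zeros(n_paths)                   # log likelihood ratio dP/dQ
          hit = np.zeros(n_paths, dtype=bool)
          for _ in range(int(T / dt)):
              dW = rng.standard_normal(n_paths) * np.sqrt(dt)
              x += (-a * x + sig * u) * dt + sig * dW
              log_lr += -u * dW - 0.5 * u ** 2 * dt
              hit |= x > b
          return float(np.mean(hit * np.exp(log_lr)))

      print("plain Monte Carlo:   %.2e" % failure_prob(0.0))
      print("importance sampling: %.2e" % failure_prob(1.5))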

  7. Numerical sensitivity analysis of a variational data assimilation procedure for cardiac conductivities

    NASA Astrophysics Data System (ADS)

    Barone, Alessandro; Fenton, Flavio; Veneziani, Alessandro

    2017-09-01

    An accurate estimation of cardiac conductivities is critical in computational electro-cardiology, yet experimental results in the literature significantly disagree on the values and ratios between longitudinal and tangential coefficients. These are known to have a strong impact on the propagation of potential, particularly during defibrillation shocks. Data assimilation is a procedure for merging experimental data and numerical simulations in a rigorous way. In particular, variational data assimilation relies on the least-squares minimization of the misfit between simulations and experiments, constrained by the underlying mathematical model, which in this study is represented by the classical Bidomain system, or its common simplification given by the Monodomain problem. Operating on the conductivity tensors as control variables of the minimization, we obtain a parameter estimation procedure. As the theory of this approach currently provides only an existence proof and is not informative for practical experiments, we present here an extensive numerical simulation campaign to assess practical critical issues such as the size and the location of the measurement sites needed for in silico test cases of potential experimental and realistic settings. This will be finalized with a real validation of the variational data assimilation procedure. Results indicate the presence of lower and upper bounds for the number of sites which guarantee an accurate and minimally redundant parameter estimation, the location of sites being generally non-critical for properly designed experiments. An effective combination of parameter estimation based on the Monodomain and Bidomain models is tested for the sake of computational efficiency. Parameter estimation based on the Monodomain equation potentially leads to the accurate computation of the transmembrane potential in real settings.

  8. Convex blind image deconvolution with inverse filtering

    NASA Astrophysics Data System (ADS)

    Lv, Xiao-Guang; Li, Fang; Zeng, Tieyong

    2018-03-01

    Blind image deconvolution is the process of estimating both the original image and the blur kernel from the degraded image with only partial or no information about the degradation and the imaging system. It is a bilinear ill-posed inverse problem corresponding to the direct problem of convolution. Regularization methods are used to handle the ill-posedness of blind deconvolution and get meaningful solutions. In this paper, we investigate a convex regularized inverse filtering method for blind deconvolution of images. We assume that the support region of the blur kernel is known, as has been done in a few existing works. By studying the inverse filters of signal and image restoration problems, we observe the oscillation structure of the inverse filters. Inspired by this oscillation structure, we propose to use the star norm to regularize the inverse filter. Meanwhile, we use the total variation to regularize the resulting image obtained by convolving the inverse filter with the degraded image. The proposed minimization model is shown to be convex. We employ the first-order primal-dual method for the solution of the proposed minimization model. Numerical examples for blind image restoration are given to show that the proposed method outperforms some existing methods in terms of peak signal-to-noise ratio (PSNR), structural similarity (SSIM), visual quality and time consumption.

  9. On well-posedness of variational models of charged drops.

    PubMed

    Muratov, Cyrill B; Novaga, Matteo

    2016-03-01

    Electrified liquids are well known to be prone to a variety of interfacial instabilities that result in the onset of apparent interfacial singularities and liquid fragmentation. In the case of electrically conducting liquids, one of the basic models describing the equilibrium interfacial configurations and the onset of instability assumes the liquid to be equipotential and interprets those configurations as local minimizers of the energy consisting of the sum of the surface energy and the electrostatic energy. Here we show that, surprisingly, this classical geometric variational model is mathematically ill-posed irrespective of the degree to which the liquid is electrified. Specifically, we demonstrate that an isolated spherical droplet is never a local minimizer, no matter how small is the total charge on the droplet, as the energy can always be lowered by a smooth, arbitrarily small distortion of the droplet's surface. This is in sharp contrast to the experimental observations that a critical amount of charge is needed in order to destabilize a spherical droplet. We discuss several possible regularization mechanisms for the considered free boundary problem and argue that well-posedness can be restored by the inclusion of the entropic effects resulting in finite screening of free charges.
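
    In our notation, the classical model referred to here minimizes, over sets Ω of fixed volume V, the functional

      \[
        E(\Omega) \;=\; \sigma\,\mathcal{H}^{2}(\partial\Omega) \;+\; \frac{Q^{2}}{2\,C(\Omega)},
        \qquad |\Omega| = V,
      \]

    where σ is the surface tension, H²(∂Ω) the surface area, Q the total charge and C(Ω) the electrostatic capacity of the conductor. The ill-posedness result states that the ball never locally minimizes E, no matter how small Q is.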

  10. On well-posedness of variational models of charged drops

    PubMed Central

    Novaga, Matteo

    2016-01-01

    Electrified liquids are well known to be prone to a variety of interfacial instabilities that result in the onset of apparent interfacial singularities and liquid fragmentation. In the case of electrically conducting liquids, one of the basic models describing the equilibrium interfacial configurations and the onset of instability assumes the liquid to be equipotential and interprets those configurations as local minimizers of the energy consisting of the sum of the surface energy and the electrostatic energy. Here we show that, surprisingly, this classical geometric variational model is mathematically ill-posed irrespective of the degree to which the liquid is electrified. Specifically, we demonstrate that an isolated spherical droplet is never a local minimizer, no matter how small is the total charge on the droplet, as the energy can always be lowered by a smooth, arbitrarily small distortion of the droplet's surface. This is in sharp contrast to the experimental observations that a critical amount of charge is needed in order to destabilize a spherical droplet. We discuss several possible regularization mechanisms for the considered free boundary problem and argue that well-posedness can be restored by the inclusion of the entropic effects resulting in finite screening of free charges. PMID:27118921

  11. A stable computation of log-derivatives from noisy drawdown data

    NASA Astrophysics Data System (ADS)

    Ramos, Gustavo; Carrera, Jesus; Gómez, Susana; Minutti, Carlos; Camacho, Rodolfo

    2017-09-01

    Pumping test interpretation is an art that involves dealing with noise coming from multiple sources and with conceptual model uncertainty. Interpretation is greatly helped by diagnostic plots, which include drawdown data and their derivative with respect to log-time, called the log-derivative. Log-derivatives are especially useful to complement geological understanding in helping to identify the underlying model of fluid flow, because they are sensitive to subtle variations in the response to pumping of aquifers and oil reservoirs. The main problem with their use lies in the calculation of the log-derivatives themselves, which may display fluctuations when data are noisy. To overcome this difficulty, we propose a variational regularization approach based on the minimization of a functional consisting of two terms: one ensuring that the computed log-derivatives honor measurements and one that penalizes fluctuations. The minimization leads to a diffusion-like differential equation in the log-derivatives, and boundary conditions that are appropriate for well hydraulics (i.e., radial flow, wellbore storage, fractal behavior, etc.). We have solved this equation by finite differences. We tested the methodology on two synthetic examples, showing that a robust solution is obtained. We also report the resulting log-derivative for a real case.
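
    For contrast with the variational scheme, the sketch below computes the plain Bourdet-style log-derivative by weighted central differences in log time on a synthetic noisy drawdown; it makes visible the noise amplification that motivates the regularization. The Theis-like test signal is an assumption of ours.

      import numpy as np

      def log_derivative(t, s):
          """Weighted central-difference log-derivative ds/d(ln t)."""
          lt = np.log(t)
          d = np.empty_like(s)
          dx1 = lt[1:-1] - lt[:-2]                     # spacing to the left neighbor
          dx2 = lt[2:] - lt[1:-1]                      # spacing to the right neighbor
          d[1:-1] = ((s[1:-1] - s[:-2]) / dx1 * dx2
                     + (s[2:] - s[1:-1]) / dx2 * dx1) / (dx1 + dx2)
          d[0], d[-1] = d[1], d[-2]
          return d

      rng = np.random.default_rng(0)
      t = np.logspace(-2, 2, 200)
      s = 1.3 * np.log(t + 1.0) + 0.02 * rng.standard_normal(t.size)   # noisy drawdown
      d = log_derivative(t, s)
      print(float(d[-20:].mean()))   # fluctuates around the true late-time slope 1.3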

  12. On the Relationship between Variational Level Set-Based and SOM-Based Active Contours

    PubMed Central

    Abdelsamea, Mohammed M.; Gnecco, Giorgio; Gaber, Mohamed Medhat; Elyan, Eyad

    2015-01-01

    Most Active Contour Models (ACMs) treat the image segmentation problem as a functional optimization problem, as they work on dividing an image into several regions by optimizing a suitable functional. Among ACMs, variational level set methods have been used to build an active contour with the aim of modeling arbitrarily complex shapes. Moreover, they can also handle topological changes of the contours. Self-Organizing Maps (SOMs) have attracted the attention of many computer vision scientists, particularly in modeling an active contour based on the idea of utilizing the prototypes (weights) of a SOM to control the evolution of the contour. SOM-based models have been proposed in general with the aim of exploiting the specific ability of SOMs to learn the edge-map information via their topology preservation property and of overcoming some drawbacks of other ACMs, such as being trapped in local minima of the image energy functional to be minimized in such models. In this survey, we illustrate the main concepts of variational level set-based ACMs and SOM-based ACMs and their relationship, and review in a comprehensive fashion the development of their state-of-the-art models from a machine learning perspective, with a focus on their strengths and weaknesses. PMID:25960736

  13. Minimizing EIT image artefacts from mesh variability in finite element models.

    PubMed

    Adler, Andy; Lionheart, William R B

    2011-07-01

    Electrical impedance tomography (EIT) solves an inverse problem to estimate the conductivity distribution within a body from electrical stimulation and measurements at the body surface, where the inverse problem is based on a solution of Laplace's equation in the body. Most commonly, a finite element model (FEM) is used, largely because of its ability to describe irregular body shapes. In this paper, we show that simulated variations in the positions of internal nodes within a FEM can result in serious image artefacts in the reconstructed images. Such variations occur when designing FEM meshes to conform to conductivity targets, but the effects may also be seen in other applications of absolute and difference EIT. We explore the hypothesis that these artefacts result from changes in the projection of the anisotropic conductivity tensor onto the FEM system matrix, which introduces anisotropic components into the simulated voltages, which cannot be reconstructed onto an isotropic image, and appear as artefacts. The magnitude of the anisotropic effect is analysed for a small regular FEM, and shown to be proportional to the relative node movement as a fraction of element size. In order to address this problem, we show that it is possible to incorporate a FEM node movement component into the formulation of the inverse problem. These results suggest that it is important to consider artefacts due to FEM mesh geometry in EIT image reconstruction.

  14. Infrared and visible image fusion based on total variation and augmented Lagrangian.

    PubMed

    Guo, Hanqi; Ma, Yong; Mei, Xiaoguang; Ma, Jiayi

    2017-11-01

    This paper proposes a new algorithm for infrared and visible image fusion based on gradient transfer that achieves fusion by preserving the intensity of the infrared image and then transferring gradients in the corresponding visible one to the result. Gradient transfer suffers from the problems of low dynamic range and detail loss because it ignores the intensity from the visible image. The new algorithm solves these problems by providing additive intensity from the visible image to balance the intensity between the infrared image and the visible one. It formulates the fusion task as an l1-l1-TV minimization problem and then employs variable splitting and an augmented Lagrangian to convert the unconstrained problem to a constrained one that can be solved in the framework of the alternating direction method of multipliers. Experiments demonstrate that the new algorithm achieves better fusion results, with high computational efficiency, in both qualitative and quantitative tests than gradient transfer and most state-of-the-art methods.

  15. Efficient Compressed Sensing Based MRI Reconstruction using Nonconvex Total Variation Penalties

    NASA Astrophysics Data System (ADS)

    Lazzaro, D.; Loli Piccolomini, E.; Zama, F.

    2016-10-01

    This work addresses the problem of Magnetic Resonance Image reconstruction from highly sub-sampled measurements in the Fourier domain. It is modeled as a constrained minimization problem, where the objective function is a non-convex function of the gradient of the unknown image and the constraints are given by the data fidelity term. We propose an algorithm, Fast Non Convex Reweighted (FNCR), where the constrained problem is solved by a reweighting scheme, as a strategy to overcome the non-convexity of the objective function, with an adaptive adjustment of the penalization parameter. We propose a fast iterative algorithm and prove that it converges to a local minimum because the constrained problem satisfies the Kurdyka-Łojasiewicz property. Moreover, the adaptation of the non-convex l0 approximation and of the penalization parameters by means of a continuation technique allows us to obtain good-quality solutions, avoiding getting stuck in unwanted local minima. Numerical experiments performed on sub-sampled MRI data show the efficiency of the algorithm and the accuracy of the solution.

  16. Minimal investment risk of a portfolio optimization problem with budget and investment concentration constraints

    NASA Astrophysics Data System (ADS)

    Shinzato, Takashi

    2017-02-01

    In the present paper, the minimal investment risk for a portfolio optimization problem with imposed budget and investment concentration constraints is considered using replica analysis. Since the minimal investment risk is influenced by the investment concentration constraint (as well as the budget constraint), it is intuitive that the minimal investment risk for the problem with an investment concentration constraint can be larger than that without it (that is, with only the budget constraint). Moreover, a numerical experiment shows the effectiveness of our proposed analysis. In contrast, the standard operations research approach fails to accurately identify the minimal investment risk of the portfolio optimization problem.

  17. Submarine harbor navigation using image data

    NASA Astrophysics Data System (ADS)

    Stubberud, Stephen C.; Kramer, Kathleen A.

    2017-01-01

    The process of ingress and egress of a United States Navy submarine is a human-intensive process that takes numerous individuals to monitor locations and watch for hazards. Sailors pass vocal information to the bridge, where it is processed manually. There is interest in using video imaging of the periscope view to more automatically provide navigation within harbors and other points of ingress and egress. In this paper, video-based navigation is examined as a target-tracking problem. While some image-processing methods claim to provide range information, the moving-platform problem and weather concerns, such as fog, reduce the effectiveness of these range estimates. The video-navigation problem then becomes an angle-only tracking problem. Angle-only tracking is known to be fraught with difficulties, due to the fact that the unobservable space is not the null space. When a Kalman filter estimator is used to perform the tracking, significant errors arise which could endanger the submarine. This work analyzes the performance of the Kalman filter when angle-only measurements are used to provide the target tracks. This paper addresses estimation unobservability and the minimal set of requirements needed to address it in this complex but real-world problem. Three major issues are addressed: knowledge of the navigation beacons'/landmarks' locations, the minimal number of these beacons needed to maintain the course, and the update rates of the angles of the landmarks as the periscope rotates and landmarks become obscured due to blockage and weather. The goal is to address the problem of navigation to and from the docks, while maintaining the traversal of the harbor channel based on maritime rules, relying solely on the image-based data. For this effort, the image correlation from frame to frame is assumed to be achieved perfectly. Variation in the update rates and the dropping of data due to rotation and obscuration are considered. The analysis is based on a simple straight-line channel harbor entry to the dock, similar to a submarine entering the submarine port in San Diego.

  18. Putting the Power of Configuration in the Hands of the Users

    NASA Technical Reports Server (NTRS)

    Al-Shihabi, Mary-Jo; Brown, Mark; Rigolini, Marianne

    2011-01-01

    The goal was to reduce the overall cost of human space flight while maintaining the most demanding standards for safety and mission success. In support of this goal, a project team was chartered to replace 18 legacy Space Shuttle nonconformance processes and systems with one fully integrated system. Problem Reporting and Corrective Action (PRACA) processes provide a closed-loop system for the identification, disposition, resolution, closure, and reporting of all Space Shuttle hardware/software problems. PRACA processes are integrated throughout the Space Shuttle organizational processes and are critical to assuring a safe and successful program. The primary project objectives were to develop a fully integrated system that provides an automated workflow with electronic signatures; to support multiple NASA programs and contracts with a single "system" architecture; and to define standard processes, implement best practices, and minimize process variations.

  19. Vessel network detection using contour evolution and color components

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ushizima, Daniela; Medeiros, Fatima; Cuadros, Jorge

    2011-06-22

    Automated retinal screening relies on vasculature segmentation before the identification of other anatomical structures of the retina. Vasculature extraction can also be input to image quality ranking, neovascularization detection and image registration, among other applications. There is an extensive literature related to this problem, often excluding the inherent heterogeneity of ophthalmic clinical images. The contribution of this paper relies on an algorithm using front propagation to segment the vessel network. The algorithm includes a penalty in the wait queue on the fast marching heap to minimize leakage of the evolving interface. The method requires no manual labeling, a minimum number of parameters, and is capable of segmenting color ocular fundus images in real scenarios, where multi-ethnicity and brightness variations are parts of the problem.

  20. Large-scale evidence of dependency length minimization in 37 languages

    PubMed Central

    Futrell, Richard; Mahowald, Kyle; Gibson, Edward

    2015-01-01

    Explaining the variation between human languages and the constraints on that variation is a core goal of linguistics. In the last 20 years, it has been claimed that many striking universals of cross-linguistic variation follow from a hypothetical principle that dependency length (the distance between syntactically related words in a sentence) is minimized. Various models of human sentence production and comprehension predict that long dependencies are difficult or inefficient to process; minimizing dependency length thus enables effective communication without incurring processing difficulty. However, despite widespread application of this idea in theoretical, empirical, and practical work, there is not yet large-scale evidence that dependency length is actually minimized in real utterances across many languages; previous work has focused either on a small number of languages or on limited kinds of data about each language. Here, using parsed corpora of 37 diverse languages, we show that overall dependency lengths for all languages are shorter than conservative random baselines. The results strongly suggest that dependency length minimization is a universal quantitative property of human languages and support explanations of linguistic variation in terms of general properties of human information processing. PMID:26240370
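
    The quantity being measured takes only a few lines to compute: given each word's head index, sum the distances between dependents and their heads, then compare with random re-orderings of the same tree. The toy sentence and the unconstrained random baseline below are simplifications of ours (the paper uses more conservative, projectivity-respecting baselines).

      import random

      def dep_length(heads, pos=None):
          """Sum of |position(word) - position(head)| over all dependencies.
          heads[i] is the index of word i's head, or None for the root."""
          pos = pos or list(range(len(heads)))
          return sum(abs(pos[i] - pos[h]) for i, h in enumerate(heads) if h is not None)

      # "the dog chased the cat": heads point the->dog, dog->chased, the->cat, cat->chased
      heads = [1, 2, None, 4, 2]

      random.seed(0)
      observed = dep_length(heads)
      samples = []
      for _ in range(1000):                  # random linearizations of the same tree
          perm = list(range(len(heads)))
          random.shuffle(perm)
          samples.append(dep_length(heads, perm))
      print(observed, sum(samples) / len(samples))   # observed < random average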

  1. Analog "neuronal" networks in early vision.

    PubMed Central

    Koch, C; Marroquin, J; Yuille, A

    1986-01-01

    Many problems in early vision can be formulated in terms of minimizing a cost function. Examples are shape from shading, edge detection, motion analysis, structure from motion, and surface interpolation. As shown by Poggio and Koch [Poggio, T. & Koch, C. (1985) Proc. R. Soc. London, Ser. B 226, 303-323], quadratic variational problems, an important subset of early vision tasks, can be "solved" by linear, analog electrical, or chemical networks. However, in the presence of discontinuities, the cost function is nonquadratic, raising the question of designing efficient algorithms for computing the optimal solution. Recently, Hopfield and Tank [Hopfield, J. J. & Tank, D. W. (1985) Biol. Cybern. 52, 141-152] have shown that networks of nonlinear analog "neurons" can be effective in computing the solution of optimization problems. We show how these networks can be generalized to solve the nonconvex energy functionals of early vision. We illustrate this approach by implementing a specific analog network, solving the problem of reconstructing a smooth surface from sparse data while preserving its discontinuities. These results suggest a novel computational strategy for solving early vision problems in both biological and real-time artificial vision systems. PMID:3459172

  2. Newton's method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    More, J. J.; Sorensen, D. C.

    1982-02-01

    Newton's method plays a central role in the development of numerical techniques for optimization. In fact, most of the current practical methods for optimization can be viewed as variations on Newton's method. It is therefore important to understand Newton's method as an algorithm in its own right and as a key introduction to the most recent ideas in this area. One of the aims of this expository paper is to present and analyze two main approaches to Newton's method for unconstrained minimization: the line search approach and the trust region approach. The other aim is to present some of the recent developments in the optimization field which are related to Newton's method. In particular, we explore several variations on Newton's method which are appropriate for large scale problems, and we also show how quasi-Newton methods can be derived quite naturally from Newton's method.
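
    A compact sketch of the line search approach mentioned above: a damped Newton iteration with Armijo backtracking, applied to the Rosenbrock function. This is a textbook variant of ours that assumes the Hessian stays positive definite along the iterates; trust region safeguards are omitted.

      import numpy as np

      def newton_line_search(f, grad, hess, x, tol=1e-10, max_iter=100):
          """Newton's method for unconstrained minimization with Armijo backtracking."""
          for _ in range(max_iter):
              g = grad(x)
              if np.linalg.norm(g) < tol:
                  break
              p = np.linalg.solve(hess(x), -g)              # Newton direction
              t = 1.0
              while f(x + t * p) > f(x) + 1e-4 * t * (g @ p) and t > 1e-12:
                  t *= 0.5                                  # Armijo backtracking
              x = x + t * p
          return x

      # Rosenbrock function, minimum at (1, 1)
      f = lambda x: (1 - x[0]) ** 2 + 100 * (x[1] - x[0] ** 2) ** 2
      grad = lambda x: np.array([-2 * (1 - x[0]) - 400 * x[0] * (x[1] - x[0] ** 2),
                                 200 * (x[1] - x[0] ** 2)])
      hess = lambda x: np.array([[2 - 400 * (x[1] - 3 * x[0] ** 2), -400 * x[0]],
                                 [-400 * x[0], 200.0]])
      print(np.round(newton_line_search(f, grad, hess, np.array([-1.2, 1.0])), 6))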

  3. A dictionary learning approach for Poisson image deblurring.

    PubMed

    Ma, Liyan; Moisan, Lionel; Yu, Jian; Zeng, Tieyong

    2013-07-01

    The restoration of images corrupted by blur and Poisson noise is a key issue in medical and biological image processing. While most existing methods are based on variational models, generally derived from a maximum a posteriori (MAP) formulation, sparse representations of images have recently been shown to be efficient approaches for image recovery. Following this idea, we propose in this paper a model containing three terms: a patch-based sparse representation prior over a learned dictionary, a pixel-based total variation regularization term, and a data-fidelity term capturing the statistics of Poisson noise. The resulting optimization problem can be solved by an alternating minimization technique combined with variable splitting. Extensive experimental results suggest that, in terms of visual quality, peak signal-to-noise ratio value and the method noise, the proposed algorithm outperforms state-of-the-art methods.

  4. On global optimization using an estimate of Lipschitz constant and simplicial partition

    NASA Astrophysics Data System (ADS)

    Gimbutas, Albertas; Žilinskas, Antanas

    2016-10-01

    A new algorithm is proposed for finding the global minimum of a multivariate black-box Lipschitz function with an unknown Lipschitz constant. The feasible region is initially partitioned into simplices; in each subsequent iteration, the most suitable simplices are selected and bisected via the middle point of the longest edge. The suitability of a simplex for bisection is evaluated by minimizing a surrogate function which mimics the lower bound of the considered objective function over that simplex. The surrogate function is defined using an estimate of the Lipschitz constant and the objective function values at the vertices of the simplex. The novelty of the algorithm lies in the sophisticated method of estimating the Lipschitz constant and the appropriate method to minimize the surrogate function. The proposed algorithm was tested using 600 random test problems of different complexity, showing competitive results with two popular advanced algorithms based on similar assumptions.
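
    A one-dimensional analog (intervals instead of simplices) shows the two ingredients at work: a Lipschitz constant estimated from observed slopes and a lower-bound surrogate minimized over the partition. The midpoint-splitting rule and the safety factor are simplifications of ours, not the authors' algorithm.

      import math

      def lipschitz_search(f, a, b, n_iter=60, safety=1.5):
          """Global search on [a, b]: bound f below on each interval using an
          estimated Lipschitz constant; bisect the most promising interval."""
          pts = [(a, f(a)), (b, f(b))]
          for _ in range(n_iter):
              # estimate L from the largest observed slope, inflated for safety
              L = safety * max(abs(f2 - f1) / (x2 - x1)
                               for (x1, f1), (x2, f2) in zip(pts, pts[1:]))
              # Lipschitz lower bound of f on an interval (value at the cone tip)
              def bound(iv):
                  (x1, f1), (x2, f2) = iv
                  return 0.5 * (f1 + f2 - L * (x2 - x1))
              (x1, f1), (x2, f2) = min(zip(pts, pts[1:]), key=bound)
              xm = 0.5 * (x1 + x2)
              pts = sorted(pts + [(xm, f(xm))])
          return min(pts, key=lambda p: p[1])

      # multimodal test function; global minimum near x ~ 5.146
      print(lipschitz_search(lambda x: math.sin(x) + math.sin(10 * x / 3), 2.7, 7.5))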

  5. Equivalency principle for magnetoelectroelastic multiferroics with arbitrary microstructure: The phase field approach

    NASA Astrophysics Data System (ADS)

    Ni, Yong; He, Linghui; Khachaturyan, Armen G.

    2010-07-01

    A phase field method is proposed to determine the equilibrium fields of a magnetoelectroelastic multiferroic with arbitrarily distributed constitutive constants under applied loadings. This method is based on a generalized Eshelby equivalency principle, in which the elastic strain, electrostatic, and magnetostatic fields at equilibrium in the original heterogeneous system are exactly the same as those in an equivalent homogeneous magnetoelectroelastic coupled or uncoupled system with properly chosen distributed effective eigenstrain, polarization, and magnetization fields. Finding these effective fields fully solves the equilibrium elasticity, electrostatics, and magnetostatics in the original heterogeneous multiferroic. The paper formulates a variational principle proving that the effective fields are minimizers of an appropriate closed-form energy functional. The proposed phase field approach produces the energy-minimizing effective fields (and thus solves the general multiferroic problem) as the result of an artificial relaxation process described by the Ginzburg-Landau-Khalatnikov kinetic equations.

  6. Minimization of vibration in elastic beams with time-variant boundary conditions

    NASA Technical Reports Server (NTRS)

    Amirouche, F. M. L.; Xie, Mingjun

    1992-01-01

    This paper presents an innovative method for minimizing the vibration of structures with time-variant boundary conditions (supports). The elastic body is modeled in two ways: (1) the first model is a letter-seven-type beam with a movable mass that may not pass beyond the lower tip; (2) the second model has an arm that is a hollow beam with an inside mass of adjustable position. Complete solutions to both problems are carried out for a body undergoing large rotation. A quasi-static procedure is used for the time-variant boundary conditions. The method employs the partial differential equations governing the motion of the beam, including the effects of rigid-body motion and time-variant boundary conditions, together with the calculus of variations. The analytical solution is developed using Laplace and Fourier transforms. Examples of elastic robotic arms are given to illustrate the effectiveness of the methods developed.

  7. Switching neuronal state: optimal stimuli revealed using a stochastically-seeded gradient algorithm.

    PubMed

    Chang, Joshua; Paydarfar, David

    2014-12-01

    Inducing a switch in neuronal state using energy optimal stimuli is relevant to a variety of problems in neuroscience. Analytical techniques from optimal control theory can identify such stimuli; however, solutions to the optimization problem using indirect variational approaches can be elusive in models that describe neuronal behavior. Here we develop and apply a direct gradient-based optimization algorithm to find stimulus waveforms that elicit a change in neuronal state while minimizing energy usage. We analyze standard models of neuronal behavior, the Hodgkin-Huxley and FitzHugh-Nagumo models, to show that the gradient-based algorithm: (1) enables automated exploration of a wide solution space, using stochastically generated initial waveforms that converge to multiple locally optimal solutions; and (2) finds optimal stimulus waveforms that achieve a physiological outcome condition, without a priori knowledge of the optimal terminal condition of all state variables. Analysis of biological systems using stochastically-seeded gradient methods can reveal salient dynamical mechanisms underlying the optimal control of system behavior. The gradient algorithm may also have practical applications in future work, for example, finding energy optimal waveforms for therapeutic neural stimulation that minimizes power usage and diminishes off-target effects and damage to neighboring tissue.

  8. Reprint of Solution of Ambrosio-Tortorelli model for image segmentation by generalized relaxation method

    NASA Astrophysics Data System (ADS)

    D'Ambra, Pasqua; Tartaglione, Gaetano

    2015-04-01

    Image segmentation addresses the problem of partitioning a given image into its constituent objects and then identifying the boundaries of the objects. This problem can be formulated in terms of a variational model aimed at finding optimal approximations of a bounded function by piecewise-smooth functions, minimizing a given functional. The corresponding Euler-Lagrange equations are a set of two coupled elliptic partial differential equations with varying coefficients. Numerical solution of the above system often relies on alternating minimization techniques involving descent methods coupled with explicit or semi-implicit finite-difference discretization schemes, which are slowly convergent and poorly scalable with respect to image size. In this work we focus on generalized relaxation methods, also coupled with multigrid linear solvers, when a finite-difference discretization is applied to the Euler-Lagrange equations of the Ambrosio-Tortorelli model. We show that non-linear Gauss-Seidel, accelerated by inner linear iterations, is an effective method for large-scale image analysis such as that arising from high-throughput screening platforms for stem cell targeted differentiation, where one of the main goals is the segmentation of thousands of images to analyze cell colony morphology.

  9. Solution of Ambrosio-Tortorelli model for image segmentation by generalized relaxation method

    NASA Astrophysics Data System (ADS)

    D'Ambra, Pasqua; Tartaglione, Gaetano

    2015-03-01

    Image segmentation addresses the problem of partitioning a given image into its constituent objects and then identifying the boundaries of the objects. This problem can be formulated in terms of a variational model aimed at finding optimal approximations of a bounded function by piecewise-smooth functions, minimizing a given functional. The corresponding Euler-Lagrange equations are a set of two coupled elliptic partial differential equations with varying coefficients. Numerical solution of the above system often relies on alternating minimization techniques involving descent methods coupled with explicit or semi-implicit finite-difference discretization schemes, which are slowly convergent and poorly scalable with respect to image size. In this work we focus on generalized relaxation methods, also coupled with multigrid linear solvers, when a finite-difference discretization is applied to the Euler-Lagrange equations of the Ambrosio-Tortorelli model. We show that non-linear Gauss-Seidel, accelerated by inner linear iterations, is an effective method for large-scale image analysis such as that arising from high-throughput screening platforms for stem cell targeted differentiation, where one of the main goals is the segmentation of thousands of images to analyze cell colony morphology.

  10. Morphological study of the prostate gland in viscacha (Lagostomus maximus maximus) during periods of maximal and minimal reproductive activity.

    PubMed

    Chaves, Maximiliano; Aguilera-Merlo, Claudia; Cruceño, Albana; Fogal, Teresa; Mohamed, Fabian

    2015-11-01

    The viscacha (Lagostomus maximus maximus) is a rodent with photoperiod-dependent seasonal reproduction. The aim of this work was to study the morphological variations of the prostate during periods of maximal (summer, long photoperiod) and minimal (winter, short photoperiod) reproductive activity. Prostates of adult male viscachas were studied by light and electron microscopy, immunohistochemistry for the androgen receptor, and morphometric analysis. The prostate consisted of two regions: peripheral and central. The peripheral zone exhibited large adenomeres with few folds, lined by a pseudostratified epithelium. The central zone had small adenomeres with pseudostratified epithelium, and its mucosa showed numerous folds. The morphology of both zones showed variations between the periods of maximal and minimal reproductive activity. The prostate weight, prostate-somatic index, luminal diameter of adenomeres, epithelial height and major nuclear diameter decreased during the period of minimal reproductive activity. Principal cells showed variations in their shape, size and ultrastructural characteristics during the period of minimal reproductive activity in comparison with the active period. The androgen receptor expression in epithelial and fibromuscular stromal cells differed between the studied periods. Our results suggest a reduced secretory activity of the viscacha prostate during the period of minimal reproductive activity. Thus, the morphological variations observed in both the central and peripheral zones of the viscacha prostate agree with the results previously obtained in the gonads of this rodent with photoperiod-dependent reproduction. Additionally, the variations observed in the androgen receptors suggest a direct effect of the circulating testosterone on the gland. © 2015 Wiley Periodicals, Inc.

  11. A new Mumford-Shah total variation minimization based model for sparse-view x-ray computed tomography image reconstruction.

    PubMed

    Chen, Bo; Bian, Zhaoying; Zhou, Xiaohui; Chen, Wensheng; Ma, Jianhua; Liang, Zhengrong

    2018-04-12

    Total variation (TV) minimization for sparse-view x-ray computed tomography (CT) reconstruction has been widely explored to reduce radiation dose. However, due to the piecewise-constant assumption of the TV model, the reconstructed images often suffer from over-smoothing of the image edges. To mitigate this drawback of TV minimization, we present a Mumford-Shah total variation (MSTV) minimization algorithm in this paper. The presented MSTV model is derived by integrating TV minimization and Mumford-Shah segmentation. Subsequently, a penalized weighted least-squares (PWLS) scheme with MSTV is developed for sparse-view CT reconstruction. For simplicity, the proposed algorithm is named 'PWLS-MSTV.' To evaluate the performance of the present PWLS-MSTV algorithm, both qualitative and quantitative studies were conducted using a digital XCAT phantom and a physical phantom. Experimental results show that the present PWLS-MSTV algorithm has noticeable gains over the existing algorithms in terms of noise reduction, contrast-to-noise ratio and edge preservation.
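
    The MSTV model itself couples segmentation with TV; as a simpler stand-in, the sketch below applies a plain PWLS data term with a smoothed TV penalty to a denoising problem with spatially varying noise (all weights, step sizes and boundary conventions are our assumptions, not the paper's algorithm):

    ```python
    import numpy as np

    def tv_grad(u, eps=1e-6):
        """Gradient of the smoothed isotropic TV seminorm
        (periodic boundaries via np.roll, for brevity)."""
        ux = np.roll(u, -1, axis=1) - u
        uy = np.roll(u, -1, axis=0) - u
        mag = np.sqrt(ux ** 2 + uy ** 2 + eps)
        px, py = ux / mag, uy / mag
        div = (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))
        return -div

    def pwls_tv(y, w, beta=0.1, step=0.2, iters=400):
        """PWLS data fidelity plus TV penalty, solved by gradient descent."""
        w = w / w.max()            # normalize so a fixed step size is stable
        u = y.copy()
        for _ in range(iters):
            u -= step * (w * (u - y) + beta * tv_grad(u))
        return u

    rng = np.random.default_rng(0)
    clean = np.zeros((32, 32)); clean[8:24, 8:24] = 1.0
    sd = 0.1 + 0.2 * rng.uniform(size=clean.shape)   # spatially varying noise
    y = clean + sd * rng.standard_normal(clean.shape)
    u = pwls_tv(y, w=1.0 / sd ** 2)                  # weights = inverse variance
    print("error before/after:", np.abs(y - clean).mean(), np.abs(u - clean).mean())
    ```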

  12. Optimal solutions for a bio mathematical model for the evolution of smoking habit

    NASA Astrophysics Data System (ADS)

    Sikander, Waseem; Khan, Umar; Ahmed, Naveed; Mohyud-Din, Syed Tauseef

    In this study, we apply the Variation of Parameters Method (VPM) coupled with an auxiliary parameter to obtain approximate solutions for the epidemic model of the evolution of the smoking habit in a constant population. Convergence of the developed algorithm, namely VPM with an auxiliary parameter, is studied. Furthermore, a simple way of obtaining an optimal value of the auxiliary parameter via minimizing the total residual error over the domain of the problem is considered. Comparison of the obtained results with standard VPM shows that the auxiliary parameter is very effective and reliable in controlling the convergence of the approximate solutions.

  13. The inverse problem of refraction travel times, part I: Types of Geophysical Nonuniqueness through Minimization

    USGS Publications Warehouse

    Ivanov, J.; Miller, R.D.; Xia, J.; Steeples, D.; Park, C.B.

    2005-01-01

    In a set of two papers we study the inverse problem of refraction travel times. The purpose of this work is to use the study as a basis for development of more sophisticated methods for finding more reliable solutions to the inverse problem of refraction travel times, which is known to be nonunique. The first paper, "Types of Geophysical Nonuniqueness through Minimization," emphasizes the existence of different forms of nonuniqueness in the realm of inverse geophysical problems. Each type of nonuniqueness requires a different type and amount of a priori information to acquire a reliable solution. Based on such coupling, a nonuniqueness classification is designed. Therefore, since most inverse geophysical problems are nonunique, each inverse problem must be studied to define what type of nonuniqueness it belongs to and thus determine what type of a priori information is necessary to find a realistic solution. The second paper, "Quantifying Refraction Nonuniqueness Using a Three-layer Model," serves as an example of such an approach. However, its main purpose is to provide a better understanding of the inverse refraction problem by studying the type of nonuniqueness it possesses. An approach for obtaining a realistic solution to the inverse refraction problem is planned to be offered in a third paper that is in preparation. The main goal of this paper is to redefine the existing generalized notion of nonuniqueness and a priori information by offering a classified, discriminating structure. Nonuniqueness is often encountered when trying to solve inverse problems. However, the possible diversity of nonuniqueness is typically neglected; nonuniqueness is regarded as a whole, as an unpleasant "black box," and is approached in the same manner by applying smoothing constraints, damping constraints with respect to the solution increment and, rarely, damping constraints with respect to some sparse reference information about the true parameters. In practice, when solving geophysical problems, different types of nonuniqueness exist, and thus there are different ways to solve the problems. Nonuniqueness is usually regarded as due to data error, assuming the true geology is acceptably approximated by simple mathematical models. Compounding the nonlinearity of the problems, geophysical applications routinely exhibit exact-data nonuniqueness even for models with very few parameters, adding to the nonuniqueness due to data error. While nonuniqueness variations have been defined earlier, they have not been linked to the specific use of a priori information necessary to resolve each case. Four types of nonuniqueness, typical for minimization problems, are defined with the corresponding methods for inclusion of a priori information to find a realistic solution without resorting to a non-discriminative approach. The above-developed stand-alone classification is expected to be helpful when solving any geophysical inverse problem. © Birkhäuser Verlag, Basel, 2005.
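
    To make the role of a priori information concrete: the sketch below contrasts the minimum-norm solution of an exactly nonunique (underdetermined) linear inverse problem with a damped solution biased toward sparse reference information; the matrix sizes and damping weight are arbitrary illustrations, not the refraction problem itself:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    G = rng.standard_normal((5, 10))        # underdetermined: 5 data, 10 params
    m_true = np.zeros(10); m_true[2] = 1.0
    d = G @ m_true                           # exact data, still nonunique

    # Minimum-norm solution: no a priori information beyond "small ||m||".
    m_mn = np.linalg.pinv(G) @ d

    # Damped solution biased toward sparse reference information m_ref.
    m_ref = np.zeros(10); m_ref[2] = 0.8     # imperfect prior knowledge
    lam = 0.1
    A = np.vstack([G, lam * np.eye(10)])
    b = np.concatenate([d, lam * m_ref])
    m_damped = np.linalg.lstsq(A, b, rcond=None)[0]

    print("min-norm error:", np.linalg.norm(m_mn - m_true))
    print("damped error:  ", np.linalg.norm(m_damped - m_true))
    ```

    Both solutions fit the data exactly or nearly so; only the a priori information distinguishes the realistic one, which is the point of the classification.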

  14. A perverse quality incentive in surgery: implications of reimbursing surgeons less for doing laparoscopic surgery.

    PubMed

    Fader, Amanda N; Xu, Tim; Dunkin, Brian J; Makary, Martin A

    2016-11-01

    Surgery is one of the highest-priced services in health care, and complications from surgery can be serious and costly. Recently, advances in surgical techniques have allowed surgeons to perform many common operations using minimally invasive methods that result in fewer complications. Despite this, the rates of open surgery remain high across multiple surgical disciplines. This is an expert commentary and review of the contemporary literature regarding minimally invasive surgery practices nationwide, the benefits of less invasive approaches, and how minimally invasive and open procedures are differentially reimbursed in the United States. We explore the incentives of the current surgeon reimbursement fee schedule and their potential implications. A surgeon's preference to perform minimally invasive rather than open surgery remains highly variable in the U.S., even after adjustment for patient comorbidities and surgical complexity. Nationwide administrative claims data across several surgical disciplines demonstrate that minimally invasive surgery utilization in place of open surgery is associated with reduced adverse events and cost savings. Reducing surgical complications by increasing adoption of minimally invasive operations has significant cost implications for health care. However, current U.S. payment structures may perversely incentivize open surgery and financially reward physicians who do not necessarily embrace newer or best minimally invasive surgery practices. Utilization of minimally invasive surgery varies considerably in the U.S., representing one of the greatest disparities in health care. Existing physician payment models must translate the growing body of research in surgical care into physician-level rewards for quality, including choice of operation. Promoting safe surgery should be an important component of a strong, value-based healthcare system. Resolving the potentially perverse incentives in paying for surgical approaches may help address disparities in surgical care, reduce the prevalent problem of variation, and help contain health care costs.

  15. Primer Vector Optimization: Survey of Theory, new Analysis and Applications

    NASA Astrophysics Data System (ADS)

    Guzman

    This paper presents a preliminary study in developing a set of optimization tools for orbit rendezvous, transfer and station keeping. This work is part of a large-scale effort under way at NASA Goddard Space Flight Center and a.i. solutions, Inc. to build generic methods, which will enable missions with tight fuel budgets. Since no single optimization technique can efficiently solve all existing problems, a library of tools where the user could pick the method most suited for the particular mission is envisioned. The first trajectory optimization technique explored is Lawden's primer vector theory [Ref. 1]. Primer vector theory can be considered a byproduct of applying Calculus of Variations (COV) techniques to the problem of minimizing the fuel usage of impulsive trajectories. For an n-impulse trajectory, it involves the solution of n-1 two-point boundary value problems. In this paper, we look at some of the different formulations of the primer vector (dependent on the frame employed and on the force model). Also, the applicability of primer vector theory is examined in an effort to understand when and why the theory can fail. Specifically, since COV is based on "small variations," singularities in the linearized (variational) equations of motion along the arcs must be taken into account. These singularities are a recurring problem in analyses that employ "small variations" [Refs. 2, 3]. For example, singularities in the (2-body problem) variational equations along elliptic arcs occur when [Ref. 4]: 1) the difference between the initial and final times is a multiple of the reference orbit period; 2) the difference between the initial and final true anomalies is kπ, for k = 0, 1, 2, 3, ...; or 3) the time of flight is a minimum for the given difference in true anomaly. For the N-body problem, the situation is more complex and is still under investigation. Several examples, such as the initialization of an orbit (ascent trajectory) and rotation of the line of apsides, are utilized as test cases. Recommendations, future work, and the possible addition of other optimization techniques are also discussed. References: [1] Lawden, D.F., Optimal Trajectories for Space Navigation, Butterworths, London, 1963. [2] Wilson, R.S., Howell, K.C., and Lo, M., "Optimization of Insertion Cost for Transfer Trajectories to Libration Point Orbits," AIAA/AAS Astrodynamics Specialist Conference, AAS 99-041, Girdwood, Alaska, August 16-19, 1999. [3] Goodson, T., "Monte-Carlo Maneuver Analysis for the Microwave Anisotropy Probe," AAS/AIAA Astrodynamics Specialist Conference, AAS 01-331, Quebec City, Canada, July 30 - August 2, 2001. [4] Stern, R.G., "Singularities in the Analytic Solution of the Linearized Variational Equations of Elliptical Motion," Report RE-8, May 1964, Experimental Astronomy Lab., Massachusetts Institute of Technology, Cambridge, Massachusetts.

  16. Inverse problems with nonnegative and sparse solutions: algorithms and application to the phase retrieval problem

    NASA Astrophysics Data System (ADS)

    Quy Muoi, Pham; Nho Hào, Dinh; Sahoo, Sujit Kumar; Tang, Dongliang; Cong, Nguyen Huu; Dang, Cuong

    2018-05-01

    In this paper, we study a gradient-type method and a semismooth Newton method for minimization problems in regularizing inverse problems with nonnegative and sparse solutions. We propose a special penalty functional forcing the minimizers of the regularized minimization problems to be nonnegative and sparse, and then we apply the proposed algorithms to a practical problem. The strong convergence of the gradient-type method and the local superlinear convergence of the semismooth Newton method are proven. We then use these algorithms for the phase retrieval problem and illustrate their efficiency in numerical examples, particularly in the practical problem of optical imaging through scattering media, where all the noise sources from the experiment are present.
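
    The authors' penalty functional and Newton method are specialized; as a generic illustration of enforcing nonnegativity and sparsity together, the sketch below runs a projected proximal-gradient (ISTA-style) iteration, our own simplified stand-in, on a synthetic recovery problem:

    ```python
    import numpy as np

    def nonneg_ista(A, b, lam=0.1, iters=500):
        """Projected proximal-gradient method for
        min_x 0.5*||Ax - b||^2 + lam*||x||_1  subject to  x >= 0."""
        t = 1.0 / np.linalg.norm(A, 2) ** 2    # step from the Lipschitz constant
        x = np.zeros(A.shape[1])
        for _ in range(iters):
            v = x - t * A.T @ (A @ x - b)      # gradient step on the data term
            x = np.maximum(0.0, v - t * lam)   # prox of l1 + nonnegativity
        return x

    rng = np.random.default_rng(3)
    A = rng.standard_normal((60, 200))
    x_true = np.zeros(200); x_true[[10, 50, 120]] = [1.0, 2.0, 0.5]
    b = A @ x_true + 0.01 * rng.standard_normal(60)
    x = nonneg_ista(A, b)
    print("support found:", np.nonzero(x > 0.1)[0])
    ```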

  17. The inverse problems of wing panel manufacture processes

    NASA Astrophysics Data System (ADS)

    Oleinikov, A. I.; Bormotin, K. S.

    2013-12-01

    It is shown that inverse problems of steady-state creep bending of plates in both the geometrically linear and nonlinear formulations can be represented in a variational formulation. Steady-state values of the obtained functionals corresponding to the solutions of the problems of inelastic deformation and springback are determined by applying a finite element procedure to the functionals. Optimal laws of creep deformation are formulated using the criterion of minimizing damage in the functionals of the inverse problems. The formulated problems are reduced to problems solved by the finite element method using MSC.Marc software. Currently, forming of light metals poses tremendous challenges due to their low ductility at room temperature and their unusual deformation characteristics at hot-cold work: strong asymmetry between tensile and compressive behavior, and a very pronounced anisotropy. We used constitutive models of steady-state creep for initially transversely isotropic structural materials in which the kind of stress state has an influence. The paper gives the basics of the developed computer-aided system of design, modeling, and electronic simulation targeting the processes of manufacture of wing integral panels. The modeling results can be used to calculate the die tooling, determine the panel processibility, and control panel rejection in the course of forming.

  18. WE-EF-207-10: Striped Ratio Grids: A New Concept for Scatter Estimation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hsieh, S

    2015-06-15

    Purpose: To propose a new method for estimating scatter in x-ray imaging. We propose the “striped ratio grid,” an anti-scatter grid with alternating stripes of high scatter rejection (attained, for example, by high grid ratio) and low scatter rejection. To minimize artifacts, stripes are oriented parallel to the direction of the ramp filter. Signal discontinuities at the boundaries between stripes provide information on local scatter content, although these discontinuities are contaminated by variation in primary radiation. Methods: We emulated a striped ratio grid by imaging phantoms with two sequential CT scans, one with and one without a conventional grid, and processed them together to mimic a striped ratio grid. Two phantoms were scanned with the emulated striped ratio grid and compared with a conventional anti-scatter grid and a fan-beam acquisition, which served as ground truth. A nonlinear image processing algorithm was developed to mitigate the problem of primary variation. Results: The emulated striped ratio grid reduced scatter more effectively than the conventional grid alone. Contrast is thereby improved in projection imaging. In CT imaging, cupping is markedly reduced. Artifacts introduced by the striped ratio grid appear to be minimal. Conclusion: Striped ratio grids could be a simple and effective evolution of conventional anti-scatter grids. Unlike several other approaches currently under investigation for scatter management, striped ratio grids require minimal computation, little new hardware (at least for systems which already use removable grids) and impose few assumptions on the nature of the object being scanned.

  19. Action-minimizing solutions of the one-dimensional N-body problem

    NASA Astrophysics Data System (ADS)

    Yu, Xiang; Zhang, Shiqing

    2018-05-01

    We supplement the following result of C. Marchal on the Newtonian N-body problem: A path minimizing the Lagrangian action functional between two given configurations is always a true (collision-free) solution when the dimension d of the physical space R^d satisfies d≥2. The focus of this paper is on the fixed-ends problem for the one-dimensional Newtonian N-body problem. We prove that a path minimizing the action functional in the set of paths joining two given configurations and having the same order at all times is always a true (collision-free) solution. Considering the one-dimensional N-body problem with equal masses, we prove that (i) collision instants are isolated for a path minimizing the action functional between two given configurations, (ii) if the particles at the two endpoints have the same order, then the path minimizing the action functional is always a true (collision-free) solution and (iii) when the particles at the two endpoints have different orders, although there must be collisions for any path, there are at most N! - 1 collisions for any action-minimizing path.

  20. Mascons, GRACE, and Time-variable Gravity

    NASA Technical Reports Server (NTRS)

    Lemoine, F.; Lutchke, S.; Rowlands, D.; Klosko, S.; Chinn, D.; Boy, J. P.

    2006-01-01

    The GRACE mission has been in orbit for three years and now regularly produces snapshots of the Earth's gravity field on a monthly basis. The convenient standard approach has been to perform global solutions in spherical harmonics. Alternative local representations of mass variations using mascons show great promise and offer advantages in terms of computational efficiency, minimization of problems due to aliasing, and increased temporal resolution. In this paper, we discuss the results of processing the GRACE KBRR data from March 2003 through August 2005 to produce solutions for GRACE mass variations over mid-latitude and equatorial regions, such as South America, India and the United States, and over the polar regions (Antarctica and Greenland), with a focus on the methodology. We describe in particular mascon solutions developed on regular 4 degree x 4 degree grids, and those tailored specifically to drainage basins over these regions.

  1. Efficient and robust computation of PDF features from diffusion MR signal.

    PubMed

    Assemlal, Haz-Edine; Tschumperlé, David; Brun, Luc

    2009-10-01

    We present a method for the estimation of various features of the tissue micro-architecture using diffusion magnetic resonance imaging. The considered features are designed from the displacement probability density function (PDF). The estimation is based on two steps: first, the approximation of the signal by a series expansion made of Gaussian-Laguerre and spherical harmonic functions, followed by a projection onto a finite-dimensional space. In addition, we propose to tackle the problem of robustness to the Rician noise corrupting in-vivo acquisitions. Our feature estimation is expressed as a variational minimization process, leading to a framework which is robust to noise. This approach is very flexible regarding the number of samples and enables the computation of a large set of various features of the local tissue structure. We demonstrate the effectiveness of the method with results on both a synthetic phantom and real MR datasets acquired in a clinical time-frame.

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zeng, Li; Jacobsen, Stein B., E-mail: astrozeng@gmail.com, E-mail: jacobsen@neodymium.harvard.edu

    In the past few years, the number of confirmed planets has grown above 2000. It is clear that they represent a diversity of structures not seen in our own solar system. In addition to very detailed interior modeling, it is valuable to have a simple analytical framework for describing planetary structures. The variational principle is a fundamental principle in physics, entailing that a physical system follows the trajectory which minimizes its action. It is an alternative to the differential equation formulation of a physical system. Applying the variational principle to the planetary interior can beautifully summarize the set of differential equations into one, which provides us some insight into the problem. From this principle, a universal mass–radius relation, an estimate of the error propagation from the equation of state to the mass–radius relation, and a form of the virial theorem applicable to planetary interiors are derived.

  3. Eccentricity and argument of perigee control for orbits with repeat ground tracks

    NASA Technical Reports Server (NTRS)

    Vincent, Mark A.

    1992-01-01

    In order to gain insight into the problem of eccentricity (e) and argument of perigee (omega) control for TOPEX/Poseidon, the two cases where the highest-latitude crossing time and one of the equator crossings are held constant are investigated. Variations in e and omega have a significant effect on the satellite's ground-track repeatability. Maintaining e and omega near their frozen values will minimize this variation. Analytical expressions are found to express this relationship while keeping an arbitrary point of the ground track fixed. The initial offset of the ground track from its nominal path determines the subsequent evolution of e and omega about their frozen values. This long-term behavior is numerically determined using an Earth gravitational field including the first 17 zonal harmonics. The numerical results are plotted together with the analytical constraints to see whether the later values of e and omega cause unacceptable deviation in the ground track.

  4. Optimal leveling of flow over one-dimensional topography by Marangoni stresses

    NASA Astrophysics Data System (ADS)

    Gramlich, C. M.; Homsy, G. M.; Kalliadasis, Serafim

    2001-11-01

    A thin viscous film flowing over a step down in topography exhibits a capillary ridge near the step, which may be undesirable in applications. This paper investigates optimal leveling of the ridge by means of a Marangoni stress such as might be produced by a localized heater creating temperature variations at the film surface. Lubrication theory results in a differential equation for the free surface, which can be solved numerically for any given topography and temperature profile. Leveling the ridge is then formulated as an optimization problem to minimize the maximum free-surface height by varying the heater strength, position, and width. Optimized heaters with 'top-hat' or parabolic temperature profiles replace the original ridge with two smaller ridges of equal size, achieving leveling of better than 50%. An optimized asymmetric n-step temperature distribution results in (n+1) ridges and reduces the variation in surface height by a factor of better than 1/(n+1).

  5. Regularization destriping of remote sensing imagery

    NASA Astrophysics Data System (ADS)

    Basnayake, Ranil; Bollt, Erik; Tufillaro, Nicholas; Sun, Jie; Gierach, Michelle

    2017-07-01

    We illustrate the utility of variational destriping for ocean color images from both multispectral and hyperspectral sensors. In particular, we examine data from a filter spectrometer, the Visible Infrared Imaging Radiometer Suite (VIIRS) on the Suomi National Polar-orbiting Partnership (NPP) orbiter, and an airborne grating spectrometer, the Jet Propulsion Laboratory's (JPL) hyperspectral Portable Remote Imaging Spectrometer (PRISM) sensor. We solve the destriping problem using a variational regularization method, assigning weights spatially to preserve the other features of the image during the destriping process. The target functional penalizes the neighborhood of stripes (strictly, directionally uniform features) while promoting data fidelity, and the functional is minimized by solving the Euler-Lagrange equations with an explicit finite-difference scheme. We show the accuracy of our method on a benchmark data set which represents the sea surface temperature off the coast of Oregon, USA. Technical details, such as how to impose continuity across data gaps using inpainting, are also described.
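
    A toy version of this explicit Euler-Lagrange descent, on an invented energy that penalizes gradients across vertical stripes while a spatial weight and a fidelity term protect real features; this is not the paper's exact functional, only its structure:

    ```python
    import numpy as np

    def destripe(f, w, mu=2.0, dt=0.1, iters=500):
        """Explicit finite-difference descent on the illustrative energy
        0.5*sum(w*(du/dx)^2) + 0.5*mu*sum((u - f)^2),
        smoothing across column stripes (periodic boundaries via np.roll)."""
        u = f.copy()
        for _ in range(iters):
            ux = np.roll(u, -1, axis=1) - u          # across-stripe difference
            flux = w * ux
            div = flux - np.roll(flux, 1, axis=1)    # discrete divergence
            u += dt * (div - mu * (u - f))           # Euler-Lagrange descent
        return u

    rng = np.random.default_rng(5)
    scene = np.outer(np.linspace(0.0, 1.0, 64), np.ones(64))   # smooth scene
    f = scene + 0.2 * rng.standard_normal(64)[None, :]         # column striping
    u = destripe(f, w=np.ones_like(f))
    print("stripe residual before/after:",
          np.abs(f - scene).mean(), np.abs(u - scene).mean())
    ```

    In the paper's setting the weight w would be reduced near genuine directional features; here it is constant for brevity.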

  6. Charge and energy minimization in electrical/magnetic stimulation of nervous tissue

    NASA Astrophysics Data System (ADS)

    Jezernik, Sašo; Sinkjaer, Thomas; Morari, Manfred

    2010-08-01

    In this work we address the problem of stimulating nervous tissue with the minimal necessary energy at reduced/minimal charge. Charge minimization is related to a valid safety concern (avoidance and reduction of stimulation-induced tissue and electrode damage). Energy minimization plays a role in battery-driven electrical or magnetic stimulation systems (increased lifetime, repetition rates, reduction of power requirements, thermal management). Extensive new theoretical results are derived by employing an optimal control theory framework. These results include derivation of the optimal electrical stimulation waveform for a mixed energy/charge minimization problem, derivation of the charge-balanced energy-minimal electrical stimulation waveform, solutions of a pure charge minimization problem with and without a constraint on the stimulation amplitude, and derivation of the energy-minimal magnetic stimulation waveform. Depending on the set stimulus pulse duration, energy and charge reductions of up to 80% are deemed possible. Results are verified in simulations with an active, mammalian-like nerve fiber model.
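
    For the simplest case this framework covers, a passive (RC-like) membrane with a terminal voltage constraint, the energy-minimal waveform follows directly from the adjoint kernel; the sketch below computes it numerically. The membrane model and constants are illustrative, not the paper's nerve-fiber model:

    ```python
    import numpy as np

    # Passive membrane dV/dt = -V/tau + I(t)/C: the contribution of I(t) to
    # V(T) is the kernel exp(-(T - t)/tau)/C, so minimizing the energy
    # integral of I^2 subject to V(T) = Vth gives I proportional to the kernel.
    C, tau, T, Vth, n = 1.0, 5.0, 10.0, 1.0, 1000
    t = np.linspace(0.0, T, n)
    dt = t[1] - t[0]

    kernel = np.exp(-(T - t) / tau) / C          # influence of I(t) on V(T)
    alpha = Vth / (dt * np.sum(kernel ** 2))     # Lagrange multiplier
    I_opt = alpha * kernel                       # rising-exponential waveform

    # Compare with a rectangular pulse scaled to reach the same threshold.
    I_rect = np.ones(n)
    I_rect *= Vth / (dt * np.sum(kernel * I_rect))

    E_opt = dt * np.sum(I_opt ** 2)
    E_rect = dt * np.sum(I_rect ** 2)
    print(f"energy saving vs rectangular pulse: {100 * (1 - E_opt / E_rect):.1f}%")
    ```

    The charge-balanced and amplitude-constrained variants in the abstract add further constraints to this same linear-quadratic structure.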

  7. Charge and energy minimization in electrical/magnetic stimulation of nervous tissue.

    PubMed

    Jezernik, Saso; Sinkjaer, Thomas; Morari, Manfred

    2010-08-01

    In this work we address the problem of stimulating nervous tissue with the minimal necessary energy at reduced/minimal charge. Charge minimization is related to a valid safety concern (avoidance and reduction of stimulation-induced tissue and electrode damage). Energy minimization plays a role in battery-driven electrical or magnetic stimulation systems (increased lifetime, repetition rates, reduction of power requirements, thermal management). Extensive new theoretical results are derived by employing an optimal control theory framework. These results include derivation of the optimal electrical stimulation waveform for a mixed energy/charge minimization problem, derivation of the charge-balanced energy-minimal electrical stimulation waveform, solutions of a pure charge minimization problem with and without a constraint on the stimulation amplitude, and derivation of the energy-minimal magnetic stimulation waveform. Depending on the set stimulus pulse duration, energy and charge reductions of up to 80% are deemed possible. Results are verified in simulations with an active, mammalian-like nerve fiber model.

  8. Near infrared spectroscopy based monitoring of extraction processes of raw material with the help of dynamic predictive modeling

    NASA Astrophysics Data System (ADS)

    Wang, Haixia; Suo, Tongchuan; Wu, Xiaolin; Zhang, Yue; Wang, Chunhua; Yu, Heshui; Li, Zheng

    2018-03-01

    The control of batch-to-batch quality variations remains a challenging task for pharmaceutical industries, e.g., traditional Chinese medicine (TCM) manufacturing. One difficult problem is to produce pharmaceutical products of consistent quality from raw material with large quality variations. In this paper, an integrated methodology combining near infrared spectroscopy (NIRS) and dynamic predictive modeling is developed for the monitoring and control of the batch extraction process of licorice. With the spectral data in hand, the initial state of the process is first estimated with a state-space model to construct a process monitoring strategy for the early detection of variations induced by the initial process inputs such as raw materials. Secondly, the quality property of the end product is predicted at mid-course of the extraction process with a partial least squares (PLS) model. The batch-end-time (BET) is then adjusted accordingly to minimize the quality variations. In conclusion, our study shows that with the help of dynamic predictive modeling, NIRS can offer past and future information about the process, which enables more accurate monitoring and control of process performance and product quality.

  9. An information geometric approach to least squares minimization

    NASA Astrophysics Data System (ADS)

    Transtrum, Mark; Machta, Benjamin; Sethna, James

    2009-03-01

    Parameter estimation by nonlinear least squares minimization is a ubiquitous problem that has an elegant geometric interpretation: all possible parameter values induce a manifold embedded within the space of data. The minimization problem is then to find the point on the manifold closest to the origin. The standard algorithm for minimizing sums of squares, the Levenberg-Marquardt algorithm, also has geometric meaning. When the standard algorithm fails to efficiently find accurate fits to the data, geometric considerations suggest improvements. Problems involving large numbers of parameters, such as often arise in biological contexts, are notoriously difficult. We suggest an algorithm based on geodesic motion that may offer improvements over the standard algorithm for a certain class of problems.
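
    A bare-bones Levenberg-Marquardt loop on a sum-of-exponentials fit, the kind of sloppy-model problem this geometric picture targets; the damping schedule and starting point are arbitrary choices, and the geodesic acceleration the abstract suggests is omitted:

    ```python
    import numpy as np

    def levenberg_marquardt(r, J, x0, iters=50, lam=1e-3):
        """Minimize ||r(x)||^2 given residuals r and Jacobian J. The damping
        lam interpolates between Gauss-Newton and gradient descent."""
        x = x0.copy()
        for _ in range(iters):
            res, Jx = r(x), J(x)
            A = Jx.T @ Jx + lam * np.eye(x.size)
            step = np.linalg.solve(A, -Jx.T @ res)
            if np.sum(r(x + step) ** 2) < np.sum(res ** 2):
                x, lam = x + step, lam * 0.5      # accept; trust the model more
            else:
                lam *= 2.0                        # reject; damp harder
        return x

    # Toy problem: fit y = exp(-k1*t) + exp(-k2*t), a classic sloppy model.
    t = np.linspace(0, 5, 40)
    y = np.exp(-1.0 * t) + np.exp(-0.3 * t)
    r = lambda x: np.exp(-x[0] * t) + np.exp(-x[1] * t) - y
    J = lambda x: np.stack([-t * np.exp(-x[0] * t), -t * np.exp(-x[1] * t)], axis=1)
    print(levenberg_marquardt(r, J, np.array([2.0, 0.1])))
    ```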

  10. Ground-state densities from the Rayleigh-Ritz variation principle and from density-functional theory.

    PubMed

    Kvaal, Simen; Helgaker, Trygve

    2015-11-14

    The relationship between the densities of ground-state wave functions (i.e., the minimizers of the Rayleigh-Ritz variation principle) and the ground-state densities in density-functional theory (i.e., the minimizers of the Hohenberg-Kohn variation principle) is studied within the framework of convex conjugation, in a generic setting covering molecular systems, solid-state systems, and more. Having introduced admissible density functionals as functionals that produce the exact ground-state energy for a given external potential by minimizing over densities in the Hohenberg-Kohn variation principle, necessary and sufficient conditions on such functionals are established to ensure that the Rayleigh-Ritz ground-state densities and the Hohenberg-Kohn ground-state densities are identical. We apply the results to molecular systems in the Born-Oppenheimer approximation. For any given potential v ∈ L^{3/2}(ℝ^3) + L^∞(ℝ^3), we establish a one-to-one correspondence between the mixed ground-state densities of the Rayleigh-Ritz variation principle and the mixed ground-state densities of the Hohenberg-Kohn variation principle when the Lieb density-matrix constrained-search universal density functional is taken as the admissible functional. A similar one-to-one correspondence is established between the pure ground-state densities of the Rayleigh-Ritz variation principle and the pure ground-state densities obtained using the Hohenberg-Kohn variation principle with the Levy-Lieb pure-state constrained-search functional. In other words, all physical ground-state densities (pure or mixed) are recovered with these functionals and no false densities (i.e., minimizing densities that are not physical) exist. The importance of topology (i.e., the choice of Banach space of densities and potentials) is emphasized and illustrated. The relevance of these results for current-density-functional theory is examined.

  11. Replica analysis for the duality of the portfolio optimization problem

    NASA Astrophysics Data System (ADS)

    Shinzato, Takashi

    2016-11-01

    In the present paper, the primal-dual problem consisting of the investment risk minimization problem and the expected return maximization problem in the mean-variance model is discussed using replica analysis. As a natural extension of the investment risk minimization problem under only a budget constraint that we analyzed in a previous study, we herein consider a primal-dual problem in which the investment risk minimization problem with budget and expected return constraints is regarded as the primal problem, and the expected return maximization problem with budget and investment risk constraints is regarded as the dual problem. With respect to these optimal problems, we analyze a quenched disordered system involving both of these optimization problems using the approach developed in statistical mechanical informatics and confirm that both optimal portfolios can possess the primal-dual structure. Finally, the results of numerical simulations are shown to validate the effectiveness of the proposed method.
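
    The primal problem with equality constraints has a closed-form solution; the sketch below solves the KKT system for risk minimization under budget and expected-return constraints on synthetic data (the replica analysis itself, which studies the quenched disordered ensemble at large N, is not reproduced here):

    ```python
    import numpy as np

    rng = np.random.default_rng(7)
    N = 5
    X = rng.standard_normal((200, N))
    Sigma = np.cov(X, rowvar=False)          # asset return covariance
    mu = rng.uniform(0.01, 0.10, N)          # expected returns
    r_target = 0.06

    # Primal problem: min w'Sigma w  s.t.  1'w = 1 (budget), mu'w = r_target.
    # Stationarity plus the two constraints form a linear KKT system.
    ones = np.ones(N)
    K = np.zeros((N + 2, N + 2))
    K[:N, :N] = 2 * Sigma
    K[:N, N], K[N, :N] = ones, ones
    K[:N, N + 1], K[N + 1, :N] = mu, mu
    rhs = np.concatenate([np.zeros(N), [1.0, r_target]])
    w = np.linalg.solve(K, rhs)[:N]
    print("weights:", w, "risk:", w @ Sigma @ w, "return:", mu @ w)
    ```

    The dual problem of the abstract swaps the roles of the risk and return terms around the same stationarity conditions.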

  12. Replica analysis for the duality of the portfolio optimization problem.

    PubMed

    Shinzato, Takashi

    2016-11-01

    In the present paper, the primal-dual problem consisting of the investment risk minimization problem and the expected return maximization problem in the mean-variance model is discussed using replica analysis. As a natural extension of the investment risk minimization problem under only a budget constraint that we analyzed in a previous study, we herein consider a primal-dual problem in which the investment risk minimization problem with budget and expected return constraints is regarded as the primal problem, and the expected return maximization problem with budget and investment risk constraints is regarded as the dual problem. With respect to these optimal problems, we analyze a quenched disordered system involving both of these optimization problems using the approach developed in statistical mechanical informatics and confirm that both optimal portfolios can possess the primal-dual structure. Finally, the results of numerical simulations are shown to validate the effectiveness of the proposed method.

  13. Correlation between the norm and the geometry of minimal networks

    NASA Astrophysics Data System (ADS)

    Laut, I. L.

    2017-05-01

    The paper is concerned with the inverse problem to the minimal Steiner network problem in a normed linear space. Namely, given a normed space in which all minimal networks are known for any finite point set, the problem is to describe all the norms on this space for which the minimal networks are the same as for the original norm. We survey the available results and prove that in the plane a rotund differentiable norm determines a distinctive set of minimal Steiner networks. In a two-dimensional space with a rotund differentiable norm, the coordinates of the interior vertices of a nondegenerate minimal parametric network are shown to vary continuously under small deformations of the boundary set, and the turn direction of the network is determined. Bibliography: 15 titles.

  14. Multi-Objective Bidding Strategy for Genco Using Non-Dominated Sorting Particle Swarm Optimization

    NASA Astrophysics Data System (ADS)

    Saksinchai, Apinat; Boonchuay, Chanwit; Ongsakul, Weerakorn

    2010-06-01

    This paper proposes a multi-objective bidding strategy for a generation company (GenCo) in uniform price spot market using non-dominated sorting particle swarm optimization (NSPSO). Instead of using a tradeoff technique, NSPSO is introduced to solve the multi-objective strategic bidding problem considering expected profit maximization and risk (profit variation) minimization. Monte Carlo simulation is employed to simulate rivals' bidding behavior. Test results indicate that the proposed approach can provide the efficient non-dominated solution front effectively. In addition, it can be used as a decision making tool for a GenCo compromising between expected profit and price risk in spot market.

  15. Scientific data interpolation with low dimensional manifold model

    NASA Astrophysics Data System (ADS)

    Zhu, Wei; Wang, Bao; Barnard, Richard; Hauck, Cory D.; Jenko, Frank; Osher, Stanley

    2018-01-01

    We propose to apply a low dimensional manifold model to scientific data interpolation from regular and irregular samplings with a significant amount of missing information. The low dimensionality of the patch manifold for general scientific data sets has been used as a regularizer in a variational formulation. The problem is solved via alternating minimization with respect to the manifold and the data set, and the Laplace-Beltrami operator in the Euler-Lagrange equation is discretized using the weighted graph Laplacian. Various scientific data sets from different fields of study are used to illustrate the performance of the proposed algorithm on data compression and interpolation from both regular and irregular samplings.
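
    One ingredient of this alternating scheme, the weighted-graph-Laplacian solve, can be shown in isolation: the sketch below fills missing samples by harmonic extension on a Gaussian-weighted graph. The kernel width and data are invented, and the manifold-dimension update of the full algorithm is omitted:

    ```python
    import numpy as np

    def graph_interpolate(points, values, known):
        """Fill missing sample values by minimizing u'Lu with the known
        entries fixed, where L is a weighted graph Laplacian."""
        d2 = ((points[:, None, :] - points[None, :, :]) ** 2).sum(-1)
        W = np.exp(-d2 / 0.05)                  # Gaussian weights (assumed width)
        np.fill_diagonal(W, 0.0)
        L = np.diag(W.sum(1)) - W               # unnormalized graph Laplacian
        miss = ~known
        # First-order condition on the unknowns: L_mm u_m = -L_mk u_k.
        u = values.copy()
        u[miss] = np.linalg.solve(L[np.ix_(miss, miss)],
                                  -L[np.ix_(miss, known)] @ values[known])
        return u

    rng = np.random.default_rng(2)
    pts = rng.uniform(0, 1, (120, 2))
    vals = np.sin(4 * pts[:, 0]) * np.cos(4 * pts[:, 1])
    known = rng.uniform(size=120) < 0.6         # 60% of samples observed
    rec = graph_interpolate(pts, np.where(known, vals, 0.0), known)
    print("rmse on missing entries:", np.sqrt(((rec - vals)[~known] ** 2).mean()))
    ```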

  16. Thermodynamic stability of biomolecules and evolution.

    PubMed

    Chakravarty, Ashim K

    2017-08-01

    The thermodynamic stability of biomolecules in the perspective of evolution is a complex issue and needs discussion. Intramolecular bonds maintain the structure and the state of internal energy (E) of a biomolecule at a "local minimum." In this communication, the possibility of a loss in the internal energy level of a biomolecule through changes in its bonds is discussed, which might earn more thermodynamic stability for the molecule. In the process, variations in the structure and functions of the molecule could occur. Thus, the internal energy E of a biomolecule is likely to tend toward minimization. Such change in energy status is an intrinsic factor for evolving biomolecules, gaining more stability and generating variations in the structure and function of DNA molecules undergoing natural selection. Thus, the variations might very well contribute towards the process of evolution. A brief discussion on conserved sequences in the light of the proposition in this communication is made at the end. Extension of the idea may resolve certain standing problems in evolution, such as maintenance of conserved sequences in the genomes of diverse species, pre- versus post-adaptive mutations, 'orthogenesis', etc. Copyright © 2017 Elsevier Ltd. All rights reserved.

  17. Optimization Methods in Sherpa

    NASA Astrophysics Data System (ADS)

    Siemiginowska, Aneta; Nguyen, Dan T.; Doe, Stephen M.; Refsdal, Brian L.

    2009-09-01

    Forward fitting is a standard technique used to model X-ray data. A statistic, usually weighted chi^2 or a Poisson likelihood (e.g., Cash), is minimized in the fitting process to obtain a set of best model parameters. Astronomical models often have complex forms with many parameters that can be correlated (e.g., an absorbed power law). Minimization is not trivial in such a setting, as the statistical parameter space becomes multimodal and finding the global minimum is hard. Standard minimization algorithms can be found in many libraries of scientific functions, but they are usually focused on specific functions. However, Sherpa, designed as a general fitting and modeling application, requires very robust optimization methods that can be applied to a variety of astronomical data (X-ray spectra, images, timing, optical data, etc.). We developed several optimization algorithms in Sherpa targeting a wide range of minimization problems. Two local minimization methods were built: the Levenberg-Marquardt algorithm was obtained from the MINPACK subroutine LMDIF and modified to achieve the required robustness, and the Nelder-Mead simplex method was implemented in-house based on variations of the algorithm described in the literature. A global-search Monte Carlo method was implemented following the differential evolution algorithm presented by Storn and Price (1997). We will present the methods in Sherpa and discuss their usage cases. We will focus on the application to Chandra data, showing both 1D and 2D examples. This work is supported by NASA contract NAS8-03060 (CXC).
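
    A small illustration of the global-search idea using SciPy's implementation of Storn and Price's differential evolution (not Sherpa's own code); the two-line toy spectrum and the bounds are invented:

    ```python
    import numpy as np
    from scipy.optimize import differential_evolution

    # Multimodal chi^2 surface: fitting two Gaussian lines to a toy spectrum.
    x = np.linspace(0, 10, 200)
    truth = np.exp(-(x - 3) ** 2) + 0.5 * np.exp(-(x - 7) ** 2 / 0.5)
    data = truth + 0.05 * np.random.default_rng(4).standard_normal(x.size)

    def chi2(p):
        a1, m1, a2, m2 = p
        model = a1 * np.exp(-(x - m1) ** 2) + a2 * np.exp(-(x - m2) ** 2 / 0.5)
        return np.sum((data - model) ** 2 / 0.05 ** 2)

    # Bounds, not a starting point, define the search region, which is what
    # makes this family of methods useful on multimodal statistics.
    result = differential_evolution(chi2, bounds=[(0, 2), (0, 10)] * 2, seed=1)
    print(result.x)   # fitted amplitudes and line centers
    ```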

  18. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Oleinikov, A. I., E-mail: a.i.oleinikov@mail.ru; Bormotin, K. S., E-mail: cvmi@knastu.ru

    It is shown that inverse problems of steady-state creep bending of plates in both the geometrically linear and nonlinear formulations can be represented in a variational formulation. Steady-state values of the obtained functionals corresponding to the solutions of the problems of inelastic deformation and springback are determined by applying a finite element procedure to the functionals. Optimal laws of creep deformation are formulated using the criterion of minimizing damage in the functionals of the inverse problems. The formulated problems are reduced to problems solved by the finite element method using MSC.Marc software. Currently, forming of light metals poses tremendous challenges due to their low ductility at room temperature and their unusual deformation characteristics at hot-cold work: strong asymmetry between tensile and compressive behavior, and a very pronounced anisotropy. We used constitutive models of steady-state creep for initially transversely isotropic structural materials in which the kind of stress state has an influence. The paper gives the basics of the developed computer-aided system of design, modeling, and electronic simulation targeting the processes of manufacture of wing integral panels. The modeling results can be used to calculate the die tooling, determine the panel processibility, and control panel rejection in the course of forming.

  19. Reconstruction of sparse-view X-ray computed tomography using adaptive iterative algorithms.

    PubMed

    Liu, Li; Lin, Weikai; Jin, Mingwu

    2015-01-01

    In this paper, we propose two reconstruction algorithms for sparse-view X-ray computed tomography (CT). Treating the reconstruction problems as data-fidelity-constrained total variation (TV) minimization, both algorithms adopt the alternating two-stage strategy: projection onto convex sets (POCS) for the data fidelity and non-negativity constraints, and steepest descent for TV minimization. The novelty of this work is to determine iterative parameters automatically from data, thus avoiding tedious manual parameter tuning. In TV minimization, the step sizes of steepest descent are adaptively adjusted according to the difference from the POCS update in either the projection domain or the image domain, while the step size of the algebraic reconstruction technique (ART) in POCS is determined based on the data noise level. In addition, projection errors are compared with the error bound to decide whether to perform ART so as to reduce computational costs. The performance of the proposed methods is studied and evaluated using both simulated and physical phantom data. Our methods with automatic parameter tuning achieve similar, if not better, reconstruction performance compared to a representative two-stage algorithm. Copyright © 2014 Elsevier Ltd. All rights reserved.
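
    A simplified reading of this two-stage loop, with a Landweber-type step standing in for ART and the TV step size tied to the size of the preceding POCS update; the system matrix, phantom, and constants are all invented:

    ```python
    import numpy as np

    def tv_descent_dir(u, eps=1e-8):
        """Negative gradient (descent direction) of smoothed total variation."""
        ux = np.roll(u, -1, axis=1) - u
        uy = np.roll(u, -1, axis=0) - u
        mag = np.sqrt(ux ** 2 + uy ** 2 + eps)
        px, py = ux / mag, uy / mag
        return (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))

    def pocs_tv(A, b, shape, iters=50, relax=1.0, tv_steps=10, c=0.3):
        """Alternating POCS / TV steepest descent with an adaptive TV step."""
        x = np.zeros(np.prod(shape))
        scale = (A * A).sum()                       # conservative normalization
        for _ in range(iters):
            x_old = x.copy()
            x += relax * A.T @ (b - A @ x) / scale  # data-fidelity step
            x = np.maximum(x, 0.0)                  # non-negativity constraint
            dp = np.linalg.norm(x - x_old)          # size of the POCS update
            u = x.reshape(shape)
            for _ in range(tv_steps):
                g = tv_descent_dir(u)
                u = u + c * dp * g / (np.linalg.norm(g) + 1e-12)
            x = u.ravel()
        return x.reshape(shape)

    rng = np.random.default_rng(6)
    A = rng.uniform(size=(80, 256))                 # stand-in sparse-view system
    phantom = np.zeros((16, 16)); phantom[4:12, 4:12] = 1.0
    rec = pocs_tv(A, A @ phantom.ravel(), (16, 16))
    print("reconstruction error:", np.abs(rec - phantom).mean())
    ```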

  20. A minimal dissipation type-based classification in irreversible thermodynamics and microeconomics

    NASA Astrophysics Data System (ADS)

    Tsirlin, A. M.; Kazakov, V.; Kolinko, N. A.

    2003-10-01

    We formulate the problem of finding classes of kinetic dependencies in irreversible thermodynamic and microeconomic systems for which minimal dissipation processes belong to the same type. We show that this problem is an inverse optimal control problem and solve it. The commonality of this problem in irreversible thermodynamics and microeconomics is emphasized.

  1. Minimization In Digital Design As A Meta-Planning Problem

    NASA Astrophysics Data System (ADS)

    Ho, William P. C.; Wu, Jung-Gen

    1987-05-01

    In our model-based expert system for automatic digital system design, we formalize the design process into three subprocesses - compiling high-level behavioral specifications into primitive behavioral operations, grouping primitive operations into behavioral functions, and grouping functions into modules. Consideration of design minimization explicitly controls decision-making in the last two subprocesses. Design minimization, a key task in the automatic design of digital systems, is complicated by the high degree of interaction among the time sequence and content of design decisions. In this paper, we present an AI approach which directly addresses these interactions and their consequences by modeling the minimization problem as a planning problem, and the management of design decision-making as a meta-planning problem.

  2. A hybrid meta-heuristic algorithm for the vehicle routing problem with stochastic travel times considering the driver's satisfaction

    NASA Astrophysics Data System (ADS)

    Tavakkoli-Moghaddam, Reza; Alinaghian, Mehdi; Salamat-Bakhsh, Alireza; Norouzi, Narges

    2012-05-01

    The vehicle routing problem is a significant problem that has attracted great attention from researchers in recent years. The main objectives of the vehicle routing problem are to minimize the traveled distance, total traveling time, number of vehicles and cost function of transportation. Reducing these variables leads to decreasing the total cost and increasing the driver's satisfaction level. On the other hand, this satisfaction, which decreases as the service time increases, is considered an important logistic problem for a company. Service times governed by a stochastic variable vary, an effect that is ignored in classical routing problems. This paper investigates the problem of increasing service time by using a stochastic time for each tour, such that the total traveling time of the vehicles is limited to a specific bound based on a defined probability. Since exact solution of the vehicle routing problem, which belongs to the category of NP-hard problems, is not practical at large scale, a hybrid algorithm based on simulated annealing with genetic operators was proposed to obtain an efficient solution with reasonable computational cost and time. Finally, for some small cases, the results of the proposed algorithm were compared with results obtained by the Lingo 8 software. The obtained results indicate the efficiency of the proposed hybrid simulated annealing algorithm.

  3. On the formulation of a minimal uncertainty model for robust control with structured uncertainty

    NASA Technical Reports Server (NTRS)

    Belcastro, Christine M.; Chang, B.-C.; Fischl, Robert

    1991-01-01

    In the design and analysis of robust control systems for uncertain plants, representing the system transfer matrix in the form of what has come to be termed an M-delta model has become widely accepted and applied in the robust control literature. The M represents a transfer function matrix M(s) of the nominal closed loop system, and the delta represents an uncertainty matrix acting on M(s). The nominal closed loop system M(s) results from closing the feedback control system, K(s), around a nominal plant interconnection structure P(s). The uncertainty can arise from various sources, such as structured uncertainty from parameter variations or unstructured uncertainty from unmodeled dynamics and other neglected phenomena. In general, delta is a block diagonal matrix, but for real parameter variations delta is a diagonal matrix of real elements. Conceptually, the M-delta structure can always be formed for any linear interconnection of inputs, outputs, transfer functions, parameter variations, and perturbations. However, very little of the currently available literature addresses computational methods for obtaining this structure, and none of this literature addresses a general methodology for obtaining a minimal M-delta model for a wide class of uncertainty, where the term minimal refers to the dimension of the delta matrix. Since having a minimally dimensioned delta matrix would improve the efficiency of structured singular value (or multivariable stability margin) computations, a method of obtaining a minimal M-delta model would be useful. Hence, a method of obtaining the interconnection system P(s) is required. A generalized procedure for obtaining a minimal P-delta structure for systems with real parameter variations is presented. Using this model, the minimal M-delta model can then be easily obtained by closing the feedback loop. The procedure involves representing the system in a cascade-form state-space realization, determining the minimal uncertainty matrix, delta, and constructing the state-space representation of P(s). Three examples are presented to illustrate the procedure.

  4. Mathematical solution of multilevel fractional programming problem with fuzzy goal programming approach

    NASA Astrophysics Data System (ADS)

    Lachhwani, Kailash; Poonia, Mahaveer Prasad

    2012-08-01

    In this paper, we present a procedure for solving multilevel fractional programming problems in a large hierarchical decentralized organization using a fuzzy goal programming approach. In the proposed method, the tolerance membership functions for the fuzzily described numerator and denominator parts of the objective functions of all levels, as well as the control vectors of the higher-level decision makers, are respectively defined by determining the individual optimal solutions of each of the level decision makers. A possible relaxation of the higher-level decisions is considered to avoid decision deadlock due to the conflicting nature of the objective functions. Then, fuzzy goal programming is used to achieve the highest degree of each of the membership goals by minimizing the negative deviational variables. We also provide a sensitivity analysis with variation of tolerance values on the decision vectors to show how the solution is sensitive to changes of the tolerance values, with the help of a numerical example.

  5. Application of the NASA airborne oceanographic lidar to the mapping of chlorophyll and other organic pigments

    NASA Technical Reports Server (NTRS)

    Hoge, F. E.; Swift, R. N.

    1981-01-01

    Laser fluorosensing techniques used for the airborne measurement of chlorophyll a and other naturally occurring waterborne pigments are reviewed. Previous experiments demonstrating the utility of the airborne oceanographic lidar (AOL) for assessment of various marine parameters are briefly discussed. The configuration of the AOL during the NOAA/NASA Superflux experiments is described. The participation of the AOL in these experiments is presented and the preliminary results are discussed. The importance of multispectral receiving capability in a laser fluorosensing system for providing reproducible measurements over wide areas having spatial variations in water column transmittance properties is addressed. This capability minimizes the number of truthing points required and is usable even in shallow estuarine areas where resuspension of bottom sediment is common. Finally, problems encountered on the Superflux missions and the resulting limitations on the AOL data sets are addressed and feasible solutions to these problems are provided.

  6. SPS pilot signal design and power transponder analysis, volume 2, phase 3

    NASA Technical Reports Server (NTRS)

    Lindsey, W. C.; Scholtz, R. A.; Chie, C. M.

    1980-01-01

    The problem of pilot signal parameter optimization and the related problem of power transponder performance analysis for the Solar Power Satellite reference phase control system are addressed. Signal and interference models were established to enable specifications of the front end filters including both the notch filter and the antenna frequency response. A simulation program package was developed to be included in SOLARSIM to perform tradeoffs of system parameters based on minimizing the phase error for the pilot phase extraction. An analytical model that characterizes the overall power transponder operation was developed. From this model, the effects of different phase noise disturbance sources that contribute to phase variations at the output of the power transponders were studied and quantified. Results indicate that it is feasible to hold the antenna array phase error to less than one degree per power module for the type of disturbances modeled.

  7. Convergence of the Graph Allen-Cahn Scheme

    NASA Astrophysics Data System (ADS)

    Luo, Xiyang; Bertozzi, Andrea L.

    2017-05-01

    The graph Laplacian and the graph cut problem are closely related to Markov random fields, and have many applications in clustering and image segmentation. The diffuse interface model is widely used for modeling in material science, and can also be used as a proxy to total variation minimization. In Bertozzi and Flenner (Multiscale Model Simul 10(3):1090-1118, 2012), an algorithm was developed to generalize the diffuse interface model to graphs to solve the graph cut problem. This work analyzes the conditions for the graph diffuse interface algorithm to converge. Using techniques from numerical PDE and convex optimization, monotonicity in function value and convergence under an a posteriori condition are shown for a class of schemes under a graph-independent stepsize condition. We also generalize our results to incorporate spectral truncation, a common technique used to save computation cost, and also to the case of multiclass classification. Various numerical experiments are done to compare theoretical results with practical performance.
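
    A semi-implicit (convexity-splitting) instance of the graph Allen-Cahn scheme on a toy two-cluster graph; the weight kernel, fidelity strength, and scheme parameters are illustrative choices, and the dense linear algebra is suitable only for small graphs:

    ```python
    import numpy as np

    def graph_allen_cahn(L, labels, mask, eps=1.0, dt=0.1, c=3.0, mu=10.0,
                         iters=200):
        """Semi-implicit graph Allen-Cahn iteration for binary classification.
        L is the graph Laplacian; labels in {-1, +1} are enforced on masked
        nodes through a quadratic fidelity term."""
        n = L.shape[0]
        u = np.where(mask, labels, 0.0)
        Minv = np.linalg.inv(np.eye(n) + dt * (eps * L + c * np.eye(n)))
        for _ in range(iters):
            w_prime = u ** 3 - u                      # double-well derivative
            fidelity = mu * mask * (u - labels)
            u = Minv @ (u + dt * (c * u - w_prime / eps - fidelity))
        return np.sign(u)

    # Toy data: two Gaussian blobs with two labeled samples per class.
    rng = np.random.default_rng(8)
    X = np.vstack([rng.normal(0.0, 0.3, (30, 2)), rng.normal(1.5, 0.3, (30, 2))])
    d2 = ((X[:, None] - X[None]) ** 2).sum(-1)
    W = np.exp(-d2 / 0.2)
    np.fill_diagonal(W, 0.0)
    L = np.diag(W.sum(1)) - W
    labels = np.r_[-np.ones(30), np.ones(30)]
    mask = np.zeros(60, bool)
    mask[[0, 1, 30, 31]] = True
    pred = graph_allen_cahn(L, labels * mask, mask)
    print("accuracy:", (pred == labels).mean())
    ```

    The c*I splitting term is what makes the implicit solve linear while keeping the iteration stable, the property the convergence analysis makes precise.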

  8. Variability-aware double-patterning layout optimization for analog circuits

    NASA Astrophysics Data System (ADS)

    Li, Yongfu; Perez, Valerio; Tripathi, Vikas; Lee, Zhao Chuan; Tseng, I.-Lun; Ong, Jonathan Yoong Seang

    2018-03-01

    The semiconductor industry has adopted multi-patterning techniques to manage the delay in the extreme ultraviolet lithography technology. During the design process of double-patterning lithography layout masks, two polygons are assigned to different masks if their spacing is less than the minimum printable spacing. With these additional design constraints, it is very difficult to find experienced layout-design engineers who have a good understanding of the circuit to manually optimize the mask layers in order to minimize color-induced circuit variations. In this work, we investigate the impact of double-patterning lithography on analog circuits and provide quantitative analysis for our designers to select the optimal mask to minimize the circuit's mismatch. To overcome the problem and improve the turn-around time, we proposed our smart "anchoring" placement technique to optimize mask decomposition for analog circuits. We have developed a software prototype that is capable of providing anchoring markers in the layout, allowing industry standard tools to perform automated color decomposition process.

  9. A Model of Regularization Parameter Determination in Low-Dose X-Ray CT Reconstruction Based on Dictionary Learning.

    PubMed

    Zhang, Cheng; Zhang, Tao; Zheng, Jian; Li, Ming; Lu, Yanfei; You, Jiali; Guan, Yihui

    2015-01-01

    In recent years, X-ray computed tomography (CT) has become widely used to reveal patients' anatomical information. However, the side effects of radiation, related to genetic or cancerous diseases, have caused great public concern. The problem is how to minimize the radiation dose significantly while maintaining image quality. As a practical application of compressed sensing theory, one category of methods takes total variation (TV) minimization as the sparse constraint, which makes it possible and effective to obtain a high-quality reconstructed image in the undersampling situation. On the other hand, a preliminary attempt at low-dose CT reconstruction based on dictionary learning seems to be another effective choice. But some critical parameters, such as the regularization parameter, cannot be determined from the detected datasets. In this paper, we propose a reweighted objective function that leads to a numerical calculation model for the regularization parameter. A number of experiments demonstrate that this strategy performs well, yielding better reconstructed images and saving a large amount of time.

  10. The anatomy of choice: active inference and agency.

    PubMed

    Friston, Karl; Schwartenbeck, Philipp; Fitzgerald, Thomas; Moutoussis, Michael; Behrens, Timothy; Dolan, Raymond J

    2013-01-01

    This paper considers agency in the setting of embodied or active inference. In brief, we associate a sense of agency with prior beliefs about action and ask what sorts of beliefs underlie optimal behavior. In particular, we consider prior beliefs that action minimizes the Kullback-Leibler (KL) divergence between desired states and attainable states in the future. This allows one to formulate bounded rationality as approximate Bayesian inference that optimizes a free energy bound on model evidence. We show that constructs like expected utility, exploration bonuses, softmax choice rules and optimism bias emerge as natural consequences of this formulation. Previous accounts of active inference have focused on predictive coding and Bayesian filtering schemes for minimizing free energy. Here, we consider variational Bayes as an alternative scheme that provides formal constraints on the computational anatomy of inference and action, constraints that are remarkably consistent with neuroanatomy. Furthermore, this scheme contextualizes optimal decision theory and economic (utilitarian) formulations as pure inference problems. For example, expected utility theory emerges as a special case of free energy minimization, where the sensitivity or inverse temperature (of softmax functions and quantal response equilibria) has a unique and Bayes-optimal solution that minimizes free energy. This sensitivity corresponds to the precision of beliefs about behavior, such that attainable goals are afforded a higher precision or confidence. In turn, this means that optimal behavior entails a representation of confidence about outcomes that are under an agent's control.
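
    The softmax choice rule with an inverse-temperature (precision) parameter mentioned in the abstract is easy to state concretely. The snippet below is a generic illustration with made-up utilities, not the authors' variational-Bayes scheme.

    ```python
    # Sketch: softmax action selection with a precision (inverse temperature)
    # parameter; higher beta concentrates choice probability on the best action.
    import numpy as np

    def softmax_policy(expected_utility, beta):
        """P(a) proportional to exp(beta * U(a)); beta plays the role of precision."""
        z = beta * expected_utility
        z -= z.max()                      # numerical stabilization
        p = np.exp(z)
        return p / p.sum()

    U = np.array([1.0, 0.5, 0.0])         # illustrative expected utilities
    for beta in (0.5, 2.0, 10.0):         # low precision -> exploration, high -> exploitation
        print(beta, softmax_policy(U, beta).round(3))
    ```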

  11. Brain abnormality segmentation based on l1-norm minimization

    NASA Astrophysics Data System (ADS)

    Zeng, Ke; Erus, Guray; Tanwar, Manoj; Davatzikos, Christos

    2014-03-01

    We present a method that uses sparse representations to model the inter-individual variability of healthy anatomy from a limited number of normal medical images. Abnormalities in MR images are then defined as deviations from the normal variation. More precisely, we model an abnormal (pathological) signal y as the superposition of a normal part ~y that can be sparsely represented under an example-based dictionary, and an abnormal part r. Motivated by a dense error correction scheme recently proposed for sparse signal recovery, we use l1-norm minimization to separate ~y and r. We extend the existing framework, which was mainly used for robust face recognition in a discriminative setting, to address the challenges of brain image analysis, particularly the high-dimensionality, low-sample-size problem. The dictionary is constructed from local image patches extracted from training images aligned using smooth transformations, together with minor perturbations of those patches. A multi-scale sliding-window scheme is applied to capture anatomical variations ranging from fine and localized to coarser and more global. The statistical significance of the abnormality term r is obtained by comparison to its empirical distribution through cross-validation, and is used to assign an abnormality score to each voxel. In our validation experiments the method is applied to segmenting abnormalities on 2-D slices of FLAIR images, and we obtain segmentation results consistent with the expert-defined masks.
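
    A minimal stand-in for the sparse decomposition y = D x + r can be written with plain ISTA. The sketch below is a generic sparse-coding step under an illustrative random dictionary, not the paper's multi-scale patch pipeline; lam, the dictionary, and the "abnormality" location are assumptions.

    ```python
    # Sketch: split a signal y into a dictionary-sparse part D@x and a residual r
    # via ISTA on min_x 0.5*||y - D x||^2 + lam*||x||_1, with r = y - D x.
    import numpy as np

    def soft_threshold(v, t):
        return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

    def ista(D, y, lam=0.1, n_iter=300):
        step = 1.0 / np.linalg.norm(D, 2) ** 2   # 1/L, L = Lipschitz constant of the gradient
        x = np.zeros(D.shape[1])
        for _ in range(n_iter):
            x = soft_threshold(x - step * D.T @ (D @ x - y), step * lam)
        return x

    rng = np.random.default_rng(2)
    D = rng.standard_normal((50, 100))
    D /= np.linalg.norm(D, axis=0)               # unit-norm atoms
    x_true = np.zeros(100); x_true[[3, 40]] = 1.0
    abnormal = np.zeros(50); abnormal[10] = 2.0   # localized "abnormal" component
    y = D @ x_true + abnormal
    r = y - D @ ista(D, y)                        # residual should flag entry 10
    print(np.argmax(np.abs(r)))
    ```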

  12. Graph cuts via l1 norm minimization.

    PubMed

    Bhusnurmath, Arvind; Taylor, Camillo J

    2008-10-01

    Graph cuts have become an increasingly important tool for solving a number of energy minimization problems in computer vision and other fields. In this paper, the graph cut problem is reformulated as an unconstrained l1 norm minimization that can be solved effectively using interior point methods. This reformulation exposes connections between the graph cuts and other related continuous optimization problems. Eventually the problem is reduced to solving a sequence of sparse linear systems involving the Laplacian of the underlying graph. The proposed procedure exploits the structure of these linear systems in a manner that is easily amenable to parallel implementations. Experimental results obtained by applying the procedure to graphs derived from image processing problems are provided.
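
    The computational kernel the abstract points to, repeatedly solving sparse linear systems in the graph Laplacian, can be sketched with SciPy's conjugate-gradient solver. The grid graph and source/sink below are illustrative.

    ```python
    # Sketch: solving a sparse linear system in the Laplacian of a 4-connected
    # image-grid graph, the kind of system the reformulated graph cut reduces to.
    import numpy as np
    import scipy.sparse as sp
    from scipy.sparse.linalg import cg

    n = 32                                          # 32 x 32 image grid
    eye = sp.eye(n, format="csr")
    path = sp.diags([np.ones(n - 1)], [1], (n, n))
    A = sp.kron(eye, path) + sp.kron(path, eye)     # upper half of grid adjacency
    A = A + A.T                                     # symmetrize
    L = sp.diags(np.asarray(A.sum(axis=1)).ravel()) - A   # Laplacian L = D - A

    b = np.zeros(n * n)
    b[0], b[-1] = 1.0, -1.0                         # source/sink, as in a cut problem
    x, info = cg(L + 1e-8 * sp.eye(n * n), b)       # tiny shift: L itself is singular
    print(info, x[:4])
    ```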

  13. Robust penalty method for structural synthesis

    NASA Technical Reports Server (NTRS)

    Kamat, M. P.

    1983-01-01

    The Sequential Unconstrained Minimization Technique (SUMT) offers an easy way of solving nonlinearly constrained problems. However, this algorithm frequently suffers from the need to minimize an ill-conditioned penalty function. An ill-conditioned minimization problem can be solved very effectively by posing the problem as one of integrating a system of stiff differential equations utilizing concepts from singular perturbation theory. This paper evaluates the robustness and the reliability of such a singular perturbation based SUMT algorithm on two different problems of structural optimization of widely separated scales. The report concludes that whereas conventional SUMT can be bogged down by frequent ill-conditioning, especially in large scale problems, the singular perturbation SUMT has no such difficulty in converging to very accurate solutions.
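
    A bare-bones SUMT loop makes the ill-conditioning issue tangible: as the penalty weight grows, the inner unconstrained problems become progressively harder. The quadratic-penalty sketch below uses SciPy's BFGS on an illustrative two-variable problem; it is not the singular-perturbation variant the paper advocates.

    ```python
    # Sketch: SUMT with a quadratic penalty for the constraint g(x) <= 0;
    # each outer pass increases the penalty weight mu.
    import numpy as np
    from scipy.optimize import minimize

    f = lambda x: (x[0] - 2) ** 2 + (x[1] - 1) ** 2      # objective
    g = lambda x: x[0] + x[1] - 2                         # constraint g(x) <= 0

    def sumt(x0, mu0=1.0, growth=10.0, n_outer=6):
        x = np.asarray(x0, float)
        mu = mu0
        for _ in range(n_outer):
            penalized = lambda x, mu=mu: f(x) + mu * max(g(x), 0.0) ** 2
            x = minimize(penalized, x, method="BFGS").x   # inner unconstrained solve
            mu *= growth        # larger mu: tighter feasibility, worse conditioning
        return x

    print(sumt([0.0, 0.0]))     # approaches the constrained optimum (1.5, 0.5)
    ```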

  14. Concurrent optimization of material spatial distribution and material anisotropy repartition for two-dimensional structures

    NASA Astrophysics Data System (ADS)

    Ranaivomiarana, Narindra; Irisarri, François-Xavier; Bettebghor, Dimitri; Desmorat, Boris

    2018-04-01

    An optimization methodology to find concurrently material spatial distribution and material anisotropy repartition is proposed for orthotropic, linear and elastic two-dimensional membrane structures. The shape of the structure is parameterized by a density variable that determines the presence or absence of material. The polar method is used to parameterize a general orthotropic material by its elasticity tensor invariants by change of frame. A global structural stiffness maximization problem written as a compliance minimization problem is treated, and a volume constraint is applied. The compliance minimization can be put into a double minimization of complementary energy. An extension of the alternate directions algorithm is proposed to solve the double minimization problem. The algorithm iterates between local minimizations in each element of the structure and global minimizations. Thanks to the polar method, the local minimizations are solved explicitly providing analytical solutions. The global minimizations are performed with finite element calculations. The method is shown to be straightforward and efficient. Concurrent optimization of density and anisotropy distribution of a cantilever beam and a bridge are presented.

  15. Blow-up behavior of ground states for a nonlinear Schrödinger system with attractive and repulsive interactions

    NASA Astrophysics Data System (ADS)

    Guo, Yujin; Zeng, Xiaoyu; Zhou, Huan-Song

    2018-01-01

    We consider a nonlinear Schrödinger system arising in a two-component Bose-Einstein condensate (BEC) with attractive intraspecies interactions and repulsive interspecies interactions in R2. We obtain ground states of this system by solving a constrained minimization problem. For some kinds of trapping potentials, we prove that the minimization problem has a minimizer if and only if the attractive interaction strength ai (i = 1, 2) of each component of the BEC system is strictly less than a threshold a*. Furthermore, as (a1, a2) ↗ (a*, a*), the asymptotic behavior of the minimizers of the minimization problem is discussed. Our results show that each component of the BEC system concentrates at a global minimum of the associated trapping potential.

  16. Improved L-BFGS diagonal preconditioners for a large-scale 4D-Var inversion system: application to CO2 flux constraints and analysis error calculation

    NASA Astrophysics Data System (ADS)

    Bousserez, Nicolas; Henze, Daven; Bowman, Kevin; Liu, Junjie; Jones, Dylan; Keller, Martin; Deng, Feng

    2013-04-01

    This work presents improved analysis error estimates for 4D-Var systems. From operational NWP models to top-down constraints on trace gas emissions, many of today's data assimilation and inversion systems in atmospheric science rely on variational approaches. This success is due to both the mathematical clarity of these formulations and the availability of computationally efficient minimization algorithms. However, unlike Kalman Filter-based algorithms, these methods do not provide an estimate of the analysis or forecast error covariance matrices, these error statistics being propagated only implicitly by the system. From both a practical (cycling assimilation) and scientific perspective, assessing uncertainties in the solution of the variational problem is critical. For large-scale linear systems, deterministic or randomization approaches can be considered based on the equivalence between the inverse Hessian of the cost function and the covariance matrix of analysis error. For perfectly quadratic systems, like incremental 4D-Var, Lanczos/Conjugate-Gradient algorithms have proven to be most efficient in generating low-rank approximations of the Hessian matrix during the minimization. For weakly non-linear systems though, the Limited-memory Broyden-Fletcher-Goldfarb-Shanno (L-BFGS) method, a quasi-Newton descent algorithm, is usually considered the best method for the minimization. Suitable for large-scale optimization, this method allows one to generate an approximation to the inverse Hessian using the latest m vector/gradient pairs generated during the minimization, m depending upon the available core memory. At each iteration, an initial low-rank approximation to the inverse Hessian has to be provided, which is called preconditioning. The ability of the preconditioner to retain useful information from previous iterations largely determines the efficiency of the algorithm. Here we assess the performance of different preconditioners to estimate the inverse Hessian of a large-scale 4D-Var system. The impact of using the diagonal preconditioners proposed by Gilbert and Lemaréchal (1989) instead of the usual Oren-Spedicato scalar will be first presented. We will also introduce new hybrid methods that combine randomization estimates of the analysis error variance with L-BFGS diagonal updates to improve the inverse Hessian approximation. Results from these new algorithms will be evaluated against standard large ensemble Monte-Carlo simulations. The methods explored here are applied to the problem of inferring global atmospheric CO2 fluxes using remote sensing observations, and are intended to be integrated with the future NASA Carbon Monitoring System.
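
    For readers who want to experiment, SciPy exposes a limited-memory BFGS whose maxcor option is the memory length m discussed above. The sketch minimizes an illustrative quadratic cost, not a 4D-Var system, and says nothing about the preconditioner choices studied in the abstract.

    ```python
    # Sketch: L-BFGS via SciPy on a toy quadratic "cost function"; maxcor is
    # the number of stored vector/gradient pairs (the memory length m).
    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(6)
    A = rng.standard_normal((20, 10))
    H = A.T @ A + np.eye(10)                # SPD Hessian of the quadratic cost
    g = rng.standard_normal(10)

    cost = lambda x: 0.5 * x @ H @ x + g @ x
    grad = lambda x: H @ x + g

    res = minimize(cost, np.zeros(10), jac=grad, method="L-BFGS-B",
                   options={"maxcor": 5})   # keep m = 5 pairs
    print(res.nit, np.allclose(res.x, -np.linalg.solve(H, g), atol=1e-4))
    ```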

  17. A methodology for formulating a minimal uncertainty model for robust control system design and analysis

    NASA Technical Reports Server (NTRS)

    Belcastro, Christine M.; Chang, B.-C.; Fischl, Robert

    1989-01-01

    In the design and analysis of robust control systems for uncertain plants, the technique of formulating what is termed an M-delta model has become widely accepted and applied in the robust control literature. The M represents the transfer function matrix M(s) of the nominal system, and delta represents an uncertainty matrix acting on M(s). The uncertainty can arise from various sources, such as structured uncertainty from parameter variations or multiple unstructured uncertainties from unmodeled dynamics and other neglected phenomena. In general, delta is a block diagonal matrix, and for real parameter variations the diagonal elements are real. As stated in the literature, this structure can always be formed for any linear interconnection of inputs, outputs, transfer functions, parameter variations, and perturbations. However, very little of the literature addresses methods for obtaining this structure, and none of this literature addresses a general methodology for obtaining a minimal M-delta model for a wide class of uncertainty. Since having a delta matrix of minimum order would improve the efficiency of structured singular value (or multivariable stability margin) computations, a method of obtaining a minimal M-delta model would be useful. A generalized method of obtaining a minimal M-delta structure for systems with real parameter variations is given.

  18. Finite-element grid improvement by minimization of stiffness matrix trace

    NASA Technical Reports Server (NTRS)

    Kittur, Madan G.; Huston, Ronald L.; Oswald, Fred B.

    1989-01-01

    A new and simple method of finite-element grid improvement is presented. The objective is to improve the accuracy of the analysis. The procedure is based on a minimization of the trace of the stiffness matrix. For a broad class of problems this minimization is seen to be equivalent to minimizing the potential energy. The method is illustrated with the classical tapered bar problem examined earlier by Prager and Masur. Identical results are obtained.

  20. Exact solution for the optimal neuronal layout problem.

    PubMed

    Chklovskii, Dmitri B

    2004-10-01

    Evolution perfected brain design by maximizing its functionality while minimizing costs associated with building and maintaining it. The assumption that brain functionality is specified by neuronal connectivity, implemented by costly biological wiring, leads to the following optimal design problem: for a given neuronal connectivity, find a spatial layout of neurons that minimizes the wiring cost. Unfortunately, this problem is difficult to solve because the number of possible layouts is often astronomically large. We argue that the wiring cost may scale as the wire length squared, reducing the optimal layout problem to a constrained minimization of a quadratic form. For biologically plausible constraints, this problem has exact analytical solutions, which give reasonable approximations to actual layouts in the brain. These solutions make the inverse problem of inferring neuronal connectivity from neuronal layout more tractable.
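
    With a squared-wire-length cost, the layout problem becomes minimization of a quadratic form x^T L x in the connectivity Laplacian L, and (under a normalization constraint that rules out the trivial collapsed layout) the solutions are low eigenvectors of L. The toy connectivity below is illustrative.

    ```python
    # Sketch: quadratic placement. Positions minimizing the sum of squared
    # wire lengths, subject to a normalization constraint, are eigenvectors
    # of the connectivity Laplacian.
    import numpy as np

    C = np.array([[0, 1, 1, 0],     # symmetric neuron-to-neuron connectivity
                  [1, 0, 1, 1],
                  [1, 1, 0, 1],
                  [0, 1, 1, 0]], float)
    L = np.diag(C.sum(axis=1)) - C

    vals, vecs = np.linalg.eigh(L)
    # Skip the constant eigenvector (eigenvalue 0, a uniform translation);
    # the next one gives 1-D positions minimizing the total squared wire length.
    print(vecs[:, 1])
    ```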

  1. Minimizing the Total Service Time of Discrete Dynamic Berth Allocation Problem by an Iterated Greedy Heuristic

    PubMed Central

    2014-01-01

    Berth allocation is the forefront operation performed when ships arrive at a port and is a critical task in container port optimization. Minimizing the time ships spend at berths constitutes an important objective of berth allocation problems. This study focuses on the discrete dynamic berth allocation problem (discrete DBAP), which aims to minimize total service time, and proposes an iterated greedy (IG) algorithm to solve it. The proposed IG algorithm is tested on three benchmark problem sets. Experimental results show that the proposed IG algorithm can obtain optimal solutions for all test instances of the first and second problem sets and outperforms the best-known solutions for 35 out of 90 test instances of the third problem set. PMID:25295295
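
    The destroy-and-rebuild skeleton of an iterated greedy heuristic is compact enough to sketch. The toy below schedules ships at a single berth to minimize total service time; the instance, destruction size d, and iteration budget are assumptions, not the paper's benchmark setup.

    ```python
    # Sketch: iterated greedy (destroy part of a solution, greedily rebuild,
    # keep if better) on a stand-in single-berth sequencing problem.
    import random

    handling = [5, 2, 8, 1, 4, 7, 3, 6]             # handling time per ship

    def total_service_time(order):
        t, total = 0, 0
        for ship in order:
            t += handling[ship]
            total += t                               # each ship waits for predecessors
        return total

    def iterated_greedy(n_iter=200, d=3, seed=0):
        rng = random.Random(seed)
        best = list(range(len(handling)))
        for _ in range(n_iter):
            cand = best[:]
            removed = [cand.pop(rng.randrange(len(cand))) for _ in range(d)]  # destruction
            for ship in removed:                     # greedy best-position reinsertion
                pos = min(range(len(cand) + 1),
                          key=lambda i: total_service_time(cand[:i] + [ship] + cand[i:]))
                cand.insert(pos, ship)
            if total_service_time(cand) < total_service_time(best):
                best = cand
        return best

    order = iterated_greedy()
    print(order, total_service_time(order))          # shortest-first order is optimal here
    ```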

  2. Total variation approach for adaptive nonuniformity correction in focal-plane arrays.

    PubMed

    Vera, Esteban; Meza, Pablo; Torres, Sergio

    2011-01-15

    In this Letter we propose an adaptive scene-based nonuniformity correction method for fixed-pattern noise removal in imaging arrays. It is based on the minimization of the total variation of the estimated irradiance, and the resulting function is optimized by an isotropic total variation approach making use of an alternating minimization strategy. The proposed method provides enhanced results when applied to a diverse set of real IR imagery, accurately estimating the nonuniformity parameters of each detector in the focal-plane array at a fast convergence rate, while also producing fewer ghosting artifacts.

  3. Bilevel formulation of a policy design problem considering multiple objectives and incomplete preferences

    NASA Astrophysics Data System (ADS)

    Hawthorne, Bryant; Panchal, Jitesh H.

    2014-07-01

    A bilevel optimization formulation of policy design problems considering multiple objectives and incomplete preferences of the stakeholders is presented. The formulation is presented for Feed-in-Tariff (FIT) policy design for decentralized energy infrastructure. The upper-level problem is the policy designer's problem and the lower-level problem is a Nash equilibrium problem resulting from market interactions. The policy designer has two objectives: maximizing the quantity of energy generated and minimizing policy cost. The stakeholders decide on quantities while maximizing net present value and minimizing capital investment. The Nash equilibrium problem in the presence of incomplete preferences is formulated as a stochastic linear complementarity problem and solved using expected value formulation, expected residual minimization formulation, and the Monte Carlo technique. The primary contributions in this article are the mathematical formulation of the FIT policy, the extension of computational policy design problems to multiple objectives, and the consideration of incomplete preferences of stakeholders for policy design problems.

  4. A national and state profile of leading health problems and health care quality for US children: key insurance disparities and across-state variations.

    PubMed

    Bethell, Christina D; Kogan, Michael D; Strickland, Bonnie B; Schor, Edward L; Robertson, Julie; Newacheck, Paul W

    2011-01-01

    Parent/consumer-reported data are valuable and necessary for population-based assessment of many key child health and health care quality measures relevant to both the Children's Health Insurance Program Reauthorization Act (CHIPRA) of 2009 and the Patient Protection and Affordable Care Act of 2010 (ACA). The aim of this study was to evaluate the national and state prevalence of health problems and special health care needs in US children; to estimate health care quality related to adequacy and consistency of insurance coverage, access to specialist, mental health, and preventive medical and dental care, developmental screening, and whether children meet criteria for having a medical home, including care coordination and family centeredness; and to assess differences in health and health care quality for children by insurance type, special health care needs status, race/ethnicity, and/or state of residence. National and state level estimates were derived from the 2007 National Survey of Children's Health (N = 91,642; children aged 0-17 years). Variations between children with public versus private sector health insurance, special health care needs, specific conditions, race/ethnicity, and across states were evaluated using multivariate logistic regression and/or standardized statistical tests. An estimated 43% of US children (32 million) currently have at least 1 of 20 chronic health conditions assessed, increasing to 54.1% when overweight, obesity, or being at risk for developmental delays are included; 19.2% (14.2 million) have conditions resulting in a special health care need, a 1.6-point increase since 2003. Compared with privately insured children, the prevalence, complexity, and severity of health problems were systematically greater for the 29.1% of all children who are publicly insured, after adjusting for variations in demographic and socioeconomic factors. Forty-five percent of all children in the United States scored positively on a minimal quality composite measure: 1) adequate insurance, 2) preventive care visit, and 3) medical home. A 22.2-point difference existed across states, and there were wide variations by health condition (from 22.8 for autism to 39.4 for asthma). After adjustment for demographic and health status differences, quality of care varied between children with public versus private health insurance on all but the following 3 measures: not receiving needed mental health services, care coordination, and performance on the minimal quality composite. A 4.60-fold (gaps in insurance) to 1.27-fold (preventive dental and medical care visits) difference in quality scores was observed across states. Notable disparities were observed among publicly insured children according to race/ethnicity and across all children by special needs status and household income. Findings emphasize the importance of health care insurance duration and adequacy, health care access, chronic condition management, and other quality of care goals reflected in the 2009 CHIPRA legislation and the ACA. Despite disparities, similarities for public and privately insured children speak to the pervasive nature of availability, coverage, and access issues for mental health services in the United States, as well as the system-wide problem of care coordination and accessing specialist care for all children. Variations across states in key areas amenable to state policy and program management support cross-state learning and improvement efforts.

  5. An Improved Variational Method for Hyperspectral Image Pansharpening with the Constraint of Spectral Difference Minimization

    NASA Astrophysics Data System (ADS)

    Huang, Z.; Chen, Q.; Shen, Y.; Chen, Q.; Liu, X.

    2017-09-01

    Variational pansharpening can enhance the spatial resolution of a hyperspectral (HS) image using a high-resolution panchromatic (PAN) image. However, this technology may lead to spectral distortion that obviously affects the accuracy of data analysis. In this article, we propose an improved variational method for HS image pansharpening with the constraint of spectral difference minimization. We extend the energy function of the classic variational pansharpening method by adding a new spectral fidelity term. This fidelity term is designed following the definition of the spectral angle mapper, which means that for every pixel, the spectral difference value of any two bands in the HS image is in equal proportion to that of the two corresponding bands in the pansharpened image. The gradient descent method is adopted to find the optimal solution of the modified energy function, and the pansharpened image can be reconstructed. Experimental results demonstrate that the constraint of spectral difference minimization is able to preserve the original spectral information well in HS images, and reduce the spectral distortion effectively. Compared to the original variational method, our method performs better in both visual and quantitative evaluation, and achieves a good trade-off between spatial and spectral information.
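
    The spectral angle mapper underlying the fidelity term is a one-line computation: the angle between a pixel's spectrum before and after pansharpening. The spectra below are illustrative.

    ```python
    # Sketch: spectral angle mapper (SAM) between two spectra; a small angle
    # indicates low spectral distortion.
    import numpy as np

    def spectral_angle(s1, s2):
        """Angle (radians) between two spectra; 0 means identical shape."""
        cos = np.dot(s1, s2) / (np.linalg.norm(s1) * np.linalg.norm(s2))
        return np.arccos(np.clip(cos, -1.0, 1.0))

    original = np.array([0.20, 0.35, 0.50, 0.45])    # one pixel, four bands
    sharpened = np.array([0.22, 0.37, 0.49, 0.47])
    print(spectral_angle(original, sharpened))
    ```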

  6. Data assimilation and prognostic whole ice sheet modelling with the variationally derived, higher order, open source, and fully parallel ice sheet model VarGlaS

    NASA Astrophysics Data System (ADS)

    Brinkerhoff, D. J.; Johnson, J. V.

    2013-07-01

    We introduce a novel, higher order, finite element ice sheet model called VarGlaS (Variational Glacier Simulator), which is built on the finite element framework FEniCS. Contrary to standard procedure in ice sheet modelling, VarGlaS formulates ice sheet motion as the minimization of an energy functional, conferring advantages such as a consistent platform for making numerical approximations, a coherent relationship between motion and heat generation, and implicit boundary treatment. VarGlaS also solves the equations of enthalpy rather than temperature, avoiding the solution of a contact problem. Rather than include a lengthy model spin-up procedure, VarGlaS possesses an automated framework for model inversion. These capabilities are brought to bear on several benchmark problems in ice sheet modelling, as well as a 500 yr simulation of the Greenland ice sheet at high resolution. VarGlaS performs well in benchmarking experiments and, given a constant climate and a 100 yr relaxation period, predicts a mass evolution of the Greenland ice sheet that matches present-day observations of mass loss. VarGlaS predicts a thinning in the interior and thickening of the margins of the ice sheet.

  7. Interior region-of-interest reconstruction using a small, nearly piecewise constant subregion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Taguchi, Katsuyuki; Xu Jingyan; Srivastava, Somesh

    2011-03-15

    Purpose: To develop a method to reconstruct an interior region-of-interest (ROI) image with sufficient accuracy that uses differentiated backprojection (DBP) projection onto convex sets (POCS) [H. Kudo et al., "Tiny a priori knowledge solves the interior problem in computed tomography", Phys. Med. Biol. 53, 2207-2231 (2008)] and a tiny knowledge that there exists a nearly piecewise constant subregion. Methods: The proposed method first employs filtered backprojection to reconstruct an image on which a tiny region P with a small variation in the pixel values is identified inside the ROI. Total variation minimization [H. Yu and G. Wang, "Compressed sensing based interior tomography", Phys. Med. Biol. 54, 2791-2805 (2009); W. Han et al., "A general total variation minimization theorem for compressed sensing based interior tomography", Int. J. Biomed. Imaging 2009, Article 125871 (2009)] is then employed to obtain pixel values in the subregion P, which serve as a priori knowledge in the next step. Finally, DBP-POCS is performed to reconstruct f(x,y) inside the ROI. Clinical data and the reconstructed image obtained by an x-ray computed tomography system (SOMATOM Definition; Siemens Healthcare) were used to validate the proposed method. The detector covers an object with a diameter of ~500 mm. The projection data were truncated either moderately, to limit the detector coverage to a diameter of 350 mm of the object, or severely, to cover a diameter of 199 mm. Images were reconstructed using the proposed method. Results: The proposed method provided ROI images with correct pixel values in all areas except near the edge of the ROI. The coefficient of variation, i.e., the root mean square error divided by the mean pixel values, was less than 2.0% or 4.5% in the moderate or severe truncation cases, respectively, except near the boundary of the ROI. Conclusions: The proposed method allows for reconstructing interior ROI images with sufficient accuracy with a tiny knowledge that there exists a nearly piecewise constant subregion.

  8. Scientific data interpolation with low dimensional manifold model

    DOE PAGES

    Zhu, Wei; Wang, Bao; Barnard, Richard C.; ...

    2017-09-28

    Here, we propose to apply a low dimensional manifold model to scientific data interpolation from regular and irregular samplings with a significant amount of missing information. The low dimensionality of the patch manifold for general scientific data sets has been used as a regularizer in a variational formulation. The problem is solved via alternating minimization with respect to the manifold and the data set, and the Laplace–Beltrami operator in the Euler–Lagrange equation is discretized using the weighted graph Laplacian. Various scientific data sets from different fields of study are used to illustrate the performance of the proposed algorithm on data compression and interpolation from both regular and irregular samplings.

  9. The behavior of bouncing disks and pizza tossing

    NASA Astrophysics Data System (ADS)

    Liu, K.-C.; Friend, J.; Yeo, L.

    2009-03-01

    We investigate the dynamics of a disk bouncing on a vibrating platform - a variation of the classic bouncing ball problem - that captures the physics of pizza tossing and the operation of certain standing-wave ultrasonic motors (SWUMs). The system's dynamics explains why certain tossing motions are used by dough-toss performers for different tricks: a helical trajectory is used in single tosses because it maximizes energy efficiency and the dough's airborne rotational speed, while a semi-elliptical motion is used in multiple tosses because it makes it easier to maintain dough rotation at the maximum rotational speed. The system's bifurcation diagram and basins of attraction also inform SWUM designers about the optimal design for high speed and minimal sensitivity to perturbation.

  10. Theoretical stability in coefficient inverse problems for general hyperbolic equations with numerical reconstruction

    NASA Astrophysics Data System (ADS)

    Yu, Jie; Liu, Yikan; Yamamoto, Masahiro

    2018-04-01

    In this article, we investigate the determination of the spatial component in the time-dependent second order coefficient of a hyperbolic equation from both theoretical and numerical aspects. By the Carleman estimates for general hyperbolic operators and an auxiliary Carleman estimate, we establish local Hölder stability with either partial boundary or interior measurements under certain geometrical conditions. For numerical reconstruction, we minimize a Tikhonov functional which penalizes the gradient of the unknown function. Based on the resulting variational equation, we design an iteration method which is updated by solving a Poisson equation at each step. One-dimensional prototype examples illustrate the numerical performance of the proposed iteration.

  11. Statistical Interior Tomography

    PubMed Central

    Xu, Qiong; Wang, Ge; Sieren, Jered; Hoffman, Eric A.

    2011-01-01

    This paper presents a statistical interior tomography (SIT) approach making use of compressed sensing (CS) theory. With the projection data modeled by the Poisson distribution, an objective function with a total variation (TV) regularization term is formulated in the maximum a posteriori (MAP) framework to solve the interior problem. An alternating minimization method is used to optimize the objective function with an initial image from the direct inversion of the truncated Hilbert transform. The proposed SIT approach is extensively evaluated with both numerical and real datasets. The results demonstrate that SIT is robust with respect to data noise and down-sampling, and has better resolution and less bias than its deterministic counterpart in the case of low count data. PMID:21233044

  12. Objective scatterometer wind ambiguity removal using smoothness and dynamical constraints

    NASA Technical Reports Server (NTRS)

    Hoffman, R. N.

    1984-01-01

    In the present investigation, a variational analysis method (VAM) is used to remove the ambiguity of the Seasat-A Satellite Scatterometer (SASS) winds. At each SASS data point, two, three, or four wind vectors (termed ambiguities) are retrieved. It is pointed out that the VAM is basically a least-squares method for fitting data, and that the problem may be nonlinear. The best fit to the data and constraints is obtained by minimizing the objective function. The VAM was tested and tuned on data for 1200 GMT, 10 September 1978. Attention is given to a case study involving an intense cyclone centered south of Japan at 138 deg E.

  14. Automated Optimization of Potential Parameters

    PubMed Central

    Di Pierro, Michele; Elber, Ron

    2013-01-01

    An algorithm and software to refine parameters of empirical energy functions according to condensed phase experimental measurements are discussed. The algorithm is based on sensitivity analysis and local minimization of the differences between experiment and simulation as a function of potential parameters. It is illustrated for a toy problem of alanine dipeptide and is applied to folding of the peptide WAAAH. The helix fraction is highly sensitive to the potential parameters while the slope of the melting curve is not. The sensitivity variations make it difficult to satisfy both observations simultaneously. We conjecture that there is no set of parameters that reproduces experimental melting curves of short peptides that are modeled with the usual functional form of a force field. PMID:24015115

  15. Research study on materials processing in space, experiment M512

    NASA Technical Reports Server (NTRS)

    Rubenstein, M.; Hopkins, R. H.; Kim, H. B.

    1973-01-01

    Gallium arsenide, a commercially valuable semiconductor, has been prepared from the melt (melting point 1237 °C), by vapor growth, and by growth from metallic solutions. It has been established that growth from metallic solution can produce material with high, and perhaps the highest possible, chemical homogeneity and crystalline perfection. Growth of GaAs from metallic solution can be performed at relatively low temperatures (about 600 °C) and is relatively insensitive to temperature fluctuations. However, this type of crystal growth is subject to the decided disadvantage that density-induced convection currents may produce variations in growth rates at a growing surface. This problem would be minimized under reduced-gravity conditions.

  16. A Dynamic Programming Approach for Base Station Sleeping in Cellular Networks

    NASA Astrophysics Data System (ADS)

    Gong, Jie; Zhou, Sheng; Niu, Zhisheng

    The energy consumption of the information and communication technology (ICT) industry, which has become a serious problem, is mostly due to the network infrastructure rather than the mobile terminals. In this paper, we focus on reducing the energy consumption of base stations (BSs) by adjusting their working modes (active or sleep). Specifically, the objective is to minimize the energy consumption while satisfying quality of service (QoS, e.g., blocking probability) requirement and, at the same time, avoiding frequent mode switching to reduce signaling and delay overhead. The problem is modeled as a dynamic programming (DP) problem, which is NP-hard in general. Based on cooperation among neighboring BSs, a low-complexity algorithm is proposed to reduce the size of state space as well as that of action space. Simulations demonstrate that, with the proposed algorithm, the active BS pattern well meets the time variation and the non-uniform spatial distribution of system traffic. Moreover, the tradeoff between the energy saving from BS sleeping and the cost of switching is well balanced by the proposed scheme.
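
    Stripped of the multi-BS coupling and QoS details, the mode-switching trade-off is a textbook dynamic program. The sketch below optimizes one station's active/sleep schedule against an illustrative traffic profile; the cost constants are assumptions.

    ```python
    # Sketch: backward DP over one base station's active(1)/sleep(0) modes,
    # trading energy against switching cost and a blocked-traffic penalty.
    import numpy as np

    traffic = [0.9, 0.7, 0.2, 0.1, 0.1, 0.4, 0.8, 1.0]   # normalized load per slot
    E_ACTIVE, E_SWITCH, QOS_PENALTY = 1.0, 0.5, 5.0

    def slot_cost(mode, load):
        if mode == 1:                       # active: full energy draw
            return E_ACTIVE
        return QOS_PENALTY * load           # asleep: pay for blocked traffic

    T = len(traffic)
    cost = np.zeros((T + 1, 2))             # cost[t, m]: best cost from slot t, previous mode m
    choice = np.zeros((T, 2), int)
    for t in range(T - 1, -1, -1):
        for m in (0, 1):
            options = [slot_cost(n, traffic[t]) + E_SWITCH * (n != m) + cost[t + 1, n]
                       for n in (0, 1)]
            choice[t, m] = int(np.argmin(options))
            cost[t, m] = min(options)

    m, plan = 1, []                          # start in active mode
    for t in range(T):
        m = choice[t, m]
        plan.append(m)
    print(plan, cost[0, 1])
    ```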

  17. Collective neurodynamic optimization for economic emission dispatch problem considering valve point effect in microgrid.

    PubMed

    Wang, Tiancai; He, Xing; Huang, Tingwen; Li, Chuandong; Zhang, Wei

    2017-09-01

    The economic emission dispatch (EED) problem aims to control generation cost and reduce the impact of waste gas on the environment. It has multiple constraints and nonconvex objectives. To solve it, the collective neurodynamic optimization (CNO) method, which combines a heuristic approach with a projection neural network (PNN), is used to optimize the scheduling of an electrical microgrid with ten thermal generators and minimize the sum of generation and emission costs. Because the objective function has nondifferentiable points arising from the valve point effect (VPE), a differential inclusion approach is employed in the PNN model to deal with them. Under certain conditions, the local optimality and convergence of the dynamic model for the optimization problem are analyzed. The capability of the algorithm is verified in a complicated situation, where transmission loss and prohibited operating zones are considered. In addition, the dynamic variation of load power at the demand side is considered and the optimal scheduling of generators within 24 h is described.

  18. Phase unwrapping with graph cuts optimization and dual decomposition acceleration for 3D high-resolution MRI data.

    PubMed

    Dong, Jianwu; Chen, Feng; Zhou, Dong; Liu, Tian; Yu, Zhaofei; Wang, Yi

    2017-03-01

    Low-SNR regions and rapid phase variations pose challenges to spatial phase unwrapping algorithms. Global optimization-based phase unwrapping methods are widely used, but are significantly slower than greedy methods. In this paper, dual decomposition acceleration is introduced to speed up a three-dimensional graph cut-based phase unwrapping algorithm. The phase unwrapping problem is formulated as a global discrete energy minimization problem, whereas the technique of dual decomposition is used to increase the computational efficiency by splitting the full problem into overlapping subproblems and enforcing the congruence of overlapping variables. Using three-dimensional (3D) multiecho gradient echo images from an agarose phantom and five brain hemorrhage patients, we compared the proposed method with an unaccelerated graph cut-based method. Experimental results show up to 18-fold acceleration in computation time. Dual decomposition significantly improves the computational efficiency of 3D graph cut-based phase unwrapping algorithms. Magn Reson Med 77:1353-1358, 2017. © 2016 International Society for Magnetic Resonance in Medicine.
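
    The multiplier mechanics of dual decomposition can be shown on a toy consensus problem: two subproblems hold their own copies of a shared variable, and a Lagrange multiplier update enforces agreement. This is a scalar caricature of the overlapping-subproblem scheme described above; the functions and step size are made up.

    ```python
    # Sketch: dual decomposition. Split f1(z) + f2(z) into copies x and y
    # coupled by x = y, then do dual ascent on the multiplier.
    f1 = lambda x: (x - 1.0) ** 2           # subproblem 1 objective
    f2 = lambda y: (y - 3.0) ** 2           # subproblem 2 objective

    lam, step = 0.0, 0.4
    for _ in range(100):
        # Closed-form minimizers of f1(x) + lam*x and f2(y) - lam*y.
        x = 1.0 - lam / 2.0
        y = 3.0 + lam / 2.0
        lam += step * (x - y)               # dual ascent on the constraint x = y
    print(x, y)                              # both approach the consensus value 2.0
    ```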

  19. Sparseness- and continuity-constrained seismic imaging

    NASA Astrophysics Data System (ADS)

    Herrmann, Felix J.

    2005-04-01

    Non-linear solution strategies to the least-squares seismic inverse-scattering problem with sparseness and continuity constraints are proposed. Our approach is designed to (i) deal with substantial amounts of additive noise (SNR < 0 dB); (ii) use the sparseness and locality (both in position and angle) of directional basis functions (such as curvelets and contourlets) on the model: the reflectivity; and (iii) exploit the near invariance of these basis functions under the normal operator, i.e., the scattering-followed-by-imaging operator. Signal-to-noise ratio and the continuity along the imaged reflectors are significantly enhanced by formulating the solution of the seismic inverse problem in terms of an optimization problem. During the optimization, sparseness on the basis and continuity along the reflectors are imposed by jointly minimizing the l1 and anisotropic diffusion/total-variation norms on the coefficients and reflectivity, respectively. [Joint work with Peyman P. Moghaddam, carried out as part of the SINBAD project, with financial support secured through ITF (the Industry Technology Facilitator) from the following organizations: BG Group, BP, ExxonMobil, and SHELL. Additional funding came from NSERC Discovery Grant 22R81254.]

  20. Decomposing Large Inverse Problems with an Augmented Lagrangian Approach: Application to Joint Inversion of Body-Wave Travel Times and Surface-Wave Dispersion Measurements

    NASA Astrophysics Data System (ADS)

    Reiter, D. T.; Rodi, W. L.

    2015-12-01

    Constructing 3D Earth models through the joint inversion of large geophysical data sets presents numerous theoretical and practical challenges, especially when diverse types of data and model parameters are involved. Among the challenges are the computational complexity associated with large data and model vectors and the need to unify differing model parameterizations, forward modeling methods and regularization schemes within a common inversion framework. The challenges can be addressed in part by decomposing the inverse problem into smaller, simpler inverse problems that can be solved separately, providing one knows how to merge the separate inversion results into an optimal solution of the full problem. We have formulated an approach to the decomposition of large inverse problems based on the augmented Lagrangian technique from optimization theory. As commonly done, we define a solution to the full inverse problem as the Earth model minimizing an objective function motivated, for example, by a Bayesian inference formulation. Our decomposition approach recasts the minimization problem equivalently as the minimization of component objective functions, corresponding to specified data subsets, subject to the constraints that the minimizing models be equal. A standard optimization algorithm solves the resulting constrained minimization problems by alternating between the separate solution of the component problems and the updating of Lagrange multipliers that serve to steer the individual solution models toward a common model solving the full problem. We are applying our inversion method to the reconstruction of the crust and upper-mantle seismic velocity structure across Eurasia. Data for the inversion comprise a large set of P and S body-wave travel times and fundamental and first-higher mode Rayleigh-wave group velocities.
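
    An augmented-Lagrangian version of the same splitting adds a quadratic coupling term, which is what keeps the component solves well behaved. The toy joint inversion below, two least-squares subproblems with separate model copies tied together by a multiplier-plus-penalty update, is an illustrative sketch of the idea, not the authors' seismic implementation; the matrices and rho are assumptions.

    ```python
    # Sketch: augmented-Lagrangian (ADMM-style) consensus between two data
    # subsets with separate model copies m1, m2 constrained to agree.
    import numpy as np

    rng = np.random.default_rng(3)
    m_true = np.array([1.0, -2.0, 0.5])
    G1, G2 = rng.standard_normal((8, 3)), rng.standard_normal((8, 3))
    d1, d2 = G1 @ m_true, G2 @ m_true       # stand-ins for two data subsets

    rho = 1.0
    mu = np.zeros(3)                         # Lagrange multiplier for m1 = m2
    m1 = m2 = np.zeros(3)
    I = np.eye(3)
    for _ in range(50):
        # Each component solve is an ordinary regularized least-squares problem.
        m1 = np.linalg.solve(G1.T @ G1 + rho * I, G1.T @ d1 + rho * m2 - mu)
        m2 = np.linalg.solve(G2.T @ G2 + rho * I, G2.T @ d2 + rho * m1 + mu)
        mu += rho * (m1 - m2)                # steer the copies toward agreement
    print(m1.round(3), m2.round(3))          # both approach m_true
    ```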

  1. Problem of quality assurance during metal constructions welding via robotic technological complexes

    NASA Astrophysics Data System (ADS)

    Fominykh, D. S.; Rezchikov, A. F.; Kushnikov, V. A.; Ivashchenko, V. A.; Bogomolov, A. S.; Filimonyuk, L. Yu; Dolinina, O. N.; Kushnikov, O. V.; Shulga, T. E.; Tverdokhlebov, V. A.

    2018-05-01

    The problem of minimizing the probability of critical combinations of events that lead to a loss of welding quality in robotic process automation is examined. The problem is formulated, and models and algorithms for its solution are developed. The problem is solved by minimizing a criterion characterizing the losses caused by defective products. Solving the problem may enhance the quality and accuracy of the operations performed and reduce the losses caused by defective products.

  2. Quadratic Optimization in the Problems of Active Control of Sound

    NASA Technical Reports Server (NTRS)

    Loncaric, J.; Tsynkov, S. V.; Bushnell, Dennis M. (Technical Monitor)

    2002-01-01

    We analyze the problem of suppressing the unwanted component of a time-harmonic acoustic field (noise) on a predetermined region of interest. The suppression is rendered by active means, i.e., by introducing additional acoustic sources called controls that generate the appropriate anti-sound. Previously, we have obtained general solutions for active controls in both continuous and discrete formulations of the problem. We have also obtained optimal solutions that minimize the overall absolute acoustic source strength of the active control sources. These optimal solutions happen to be particular layers of monopoles on the perimeter of the protected region. Mathematically, minimization of acoustic source strength is equivalent to minimization in the sense of L(sub 1). By contrast, in the current paper we formulate and study optimization problems that involve quadratic functions of merit. Specifically, we minimize the L(sub 2) norm of the control sources, and we consider both unconstrained and constrained minimization. The unconstrained L(sub 2) minimization is certainly the easiest problem to address numerically. On the other hand, the constrained approach allows one to analyze sophisticated geometries. In a special case, we compare our finite-difference optimal solutions to the continuous optimal solutions obtained previously using a semi-analytic technique. We also show that the optima obtained in the sense of L(sub 2) differ drastically from those obtained in the sense of L(sub 1).

  3. Quantum feedback cooling of a mechanical oscillator using variational measurements: tweaking Heisenberg’s microscope

    NASA Astrophysics Data System (ADS)

    Habibi, Hojat; Zeuthen, Emil; Ghanaatshoar, Majid; Hammerer, Klemens

    2016-08-01

    We revisit the problem of preparing a mechanical oscillator in the vicinity of its quantum-mechanical ground state by means of feedback cooling based on continuous optical detection of the oscillator position. In the parameter regime relevant to ground-state cooling, the optical back-action and imprecision noise set the bottleneck of achievable cooling and must be carefully balanced. This can be achieved by adapting the phase of the local oscillator in the homodyne detection realizing a so-called variational measurement. The trade-off between accurate position measurement and minimal disturbance can be understood in terms of Heisenberg’s microscope and becomes particularly relevant when the measurement and feedback processes happen to be fast within the quantum coherence time of the system to be cooled. This corresponds to the regime of large quantum cooperativity {C}{{q}}≳ 1, which was achieved in recent experiments on feedback cooling. Our method provides a simple path to further pushing the limits of current state-of-the-art experiments in quantum optomechanics.

  4. Inhibition of viscous fluid fingering: A variational scheme for optimal flow rates

    NASA Astrophysics Data System (ADS)

    Miranda, Jose; Dias, Eduardo; Alvarez-Lacalle, Enrique; Carvalho, Marcio

    2012-11-01

    Conventional viscous fingering flow in radial Hele-Shaw cells employs a constant injection rate, resulting in the emergence of branched interfacial shapes. The search for mechanisms to prevent the development of these bifurcated morphologies is relevant to a number of areas in science and technology. A challenging problem is how best to choose the pumping rate in order to restrain growth of interfacial amplitudes. We use an analytical variational scheme to look for the precise functional form of such an optimal flow rate. We find it increases linearly with time in a specific manner so that interface disturbances are minimized. Experiments and nonlinear numerical simulations support the effectiveness of this particularly simple, but not at all obvious, pattern controlling process. J.A.M., E.O.D. and M.S.C. thank CNPq/Brazil for financial support. E.A.L. acknowledges support from Secretaria de Estado de IDI Spain under project FIS2011-28820-C02-01.

  5. Wall jet analysis for circulation control aerodynamics. Part 1: Fundamental CFD and turbulence modeling concepts

    NASA Technical Reports Server (NTRS)

    Dash, S. M.; York, B. J.; Sinha, N.; Dvorak, F. A.

    1987-01-01

    An overview of parabolic and PNS (Parabolized Navier-Stokes) methodology developed to treat highly curved sub- and supersonic wall jets is presented. The fundamental database to which these models were applied is discussed in detail. The analysis of strong curvature effects was found to require a semi-elliptic extension of the parabolic modeling to account for turbulent contributions to the normal pressure variations, as well as an extension of the turbulence models utilized to account for the highly enhanced mixing rates observed in situations with large convex curvature. A noniterative, pressure-split procedure is shown to extend parabolic models to account for such normal pressure variations in an efficient manner, requiring minimal additional run time over a standard parabolic approach. A new PNS methodology is presented to solve this problem, which extends parabolic methodology via the addition of a characteristic-based wave solver. Applications of this approach to analyze the interaction of wave and turbulence processes in wall jets are presented.

  6. Mesh refinement in finite element analysis by minimization of the stiffness matrix trace

    NASA Technical Reports Server (NTRS)

    Kittur, Madan G.; Huston, Ronald L.

    1989-01-01

    Most finite element packages provide means to generate meshes automatically. However, the user is usually confronted with the problem of not knowing whether the mesh generated is appropriate for the problem at hand. Since the accuracy of the finite element results is mesh dependent, mesh selection forms a very important step in the analysis. Indeed, in accurate analyses, meshes need to be refined or rezoned until the solution converges to a value such that the error is below a predetermined tolerance. A posteriori methods use error indicators, developed using interpolation and approximation theory, for mesh refinement. Others use criteria such as strain energy density variation and stress contours to obtain near-optimal meshes. Although these methods are adaptive, they are expensive. Alternatively, the a priori methods available until now use geometrical parameters, for example, element aspect ratio, and are therefore not adaptive by nature. Here, an adaptive a priori method is developed. The criterion is that minimization of the trace of the stiffness matrix with respect to the nodal coordinates leads to a minimization of the potential energy and, as a consequence, provides a good starting mesh. In a few examples the method is shown to provide the optimal mesh. The method is also shown to be relatively simple and amenable to the development of computer algorithms. When the procedure is used in conjunction with a posteriori methods of grid refinement, it is shown that fewer refinement iterations and fewer degrees of freedom are required for convergence as opposed to when the procedure is not used. The mesh obtained is shown to have a uniform distribution of stiffness among the nodes and elements which, as a consequence, leads to uniform error distribution. Thus the mesh obtained meets the optimality criterion of uniform error distribution.
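
    The trace criterion is easy to demonstrate on the tapered-bar example cited above: with one movable interior node, the trace of the assembled stiffness matrix is a scalar function of the node position that can be minimized directly. The element formulas and taper below are assumptions of this sketch, not taken from the paper.

    ```python
    # Sketch: place one interior node of a two-element tapered bar by
    # minimizing the trace of the assembled free-DOF stiffness matrix.
    from scipy.optimize import minimize_scalar

    E = 1.0
    area = lambda x: 1.0 - 0.5 * x                 # linearly tapered cross-section

    def stiffness_trace(s):
        # Two bar elements [0, s] and [s, 1], each with its mid-point area.
        k1 = E * area(s / 2.0) / s                 # element stiffness EA/L
        k2 = E * area((s + 1.0) / 2.0) / (1.0 - s)
        # Free DOFs are the node at s and the tip; K = [[k1+k2, -k2], [-k2, k2]],
        # so trace(K) = (k1 + k2) + k2.
        return k1 + 2.0 * k2

    res = minimize_scalar(stiffness_trace, bounds=(0.05, 0.95), method="bounded")
    print(res.x)                                    # trace-optimal interior node
    ```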

  7. Temperature and pressure fiber-optic sensors applied to minimally invasive diagnostics and therapies

    NASA Astrophysics Data System (ADS)

    Hamel, Caroline; Pinet, Éric

    2006-02-01

    We present how fiber-optic temperature or pressure sensors can be applied to minimally invasive diagnostics and therapies. For instance, a miniature pressure sensor based on micro-optical mechanical systems (MOMS) could solve most of the problems associated with the fluidic pressure transduction presently used for triggering purposes. These include intra-aortic balloon pumping (IABP) therapy and other applications requiring detection of fast and/or subtle fluid pressure variations, such as intracranial pressure monitoring or urology diagnostics. As well, miniature temperature sensors permit minimally invasive direct temperature measurement in diagnostics or therapies requiring energy transfer to living tissues. The extremely small size of the fiber-optic sensors that we have developed allows quick and precise in situ measurements exactly where the physical parameters need to be known. Furthermore, their intrinsic immunity to electromagnetic interference (EMI) allows for the safe use of EMI-generating therapeutic or diagnostic equipment without compromising signal quality. With the trend toward ambulatory health care and the increasing EMI noise found in modern hospitals, the use of multi-parameter fiber-optic sensors will improve constant patient monitoring without any concern about the effects of EMI disturbances. The advantages of miniature fiber-optic sensors will offer clinicians new monitoring tools that open the way for improved diagnostic accuracy and new therapeutic technologies.

  8. Energy minimization on manifolds for docking flexible molecules

    PubMed Central

    Mirzaei, Hanieh; Zarbafian, Shahrooz; Villar, Elizabeth; Mottarella, Scott; Beglov, Dmitri; Vajda, Sandor; Paschalidis, Ioannis Ch.; Vakili, Pirooz; Kozakov, Dima

    2015-01-01

    In this paper we extend a recently introduced rigid body minimization algorithm, defined on manifolds, to the problem of minimizing the energy of interacting flexible molecules. The goal is to integrate moving the ligand in six dimensional rotational/translational space with internal rotations around rotatable bonds within the two molecules. We show that adding rotational degrees of freedom to the rigid moves of the ligand results in an overall optimization search space that is a manifold to which our manifold optimization approach can be extended. The effectiveness of the method is shown for three different docking problems of increasing complexity. First we minimize the energy of fragment-size ligands with a single rotatable bond as part of a protein mapping method developed for the identification of binding hot spots. Second, we consider energy minimization for docking a flexible ligand to a rigid protein receptor, an approach frequently used in existing methods. In the third problem we account for flexibility in both the ligand and the receptor. Results show that minimization using the manifold optimization algorithm is substantially more efficient than minimization using a traditional all-atom optimization algorithm while producing solutions of comparable quality. In addition to the specific problems considered, the method is general enough to be used in a large class of applications such as docking multidomain proteins with flexible hinges. The code is available under open source license (at http://cluspro.bu.edu/Code/Code_Rigtree.tar), and with minimal effort can be incorporated into any molecular modeling package. PMID:26478722

  9. Electron paramagnetic resonance image reconstruction with total variation and curvelets regularization

    NASA Astrophysics Data System (ADS)

    Durand, Sylvain; Frapart, Yves-Michel; Kerebel, Maud

    2017-11-01

    Spatial electron paramagnetic resonance imaging (EPRI) is a recent method to localize and characterize free radicals in vivo or in vitro, leading to applications in material and biomedical sciences. To improve the quality of the reconstruction obtained by EPRI, a variational method is proposed to invert the image formation model. It is based on a least-squares data-fidelity term and on the total variation and a Besov seminorm for the regularization term. To handle the Besov seminorm, an implementation using the curvelet transform and the L1 norm enforcing sparsity is proposed. It allows our model to reconstruct both images where acquisition information is missing and images with details in textured areas, thus opening possibilities to reduce acquisition times. To implement the minimization problem using the algorithm developed by Chambolle and Pock, a thorough analysis of the direct model is undertaken and the latter is inverted while avoiding the use of filtered backprojection (FBP) and of the non-uniform Fourier transform. Numerical experiments are carried out on simulated data, where the proposed model outperforms both visually and quantitatively the classical model using deconvolution and FBP. Improved reconstructions on real data, acquired on an irradiated distal phalanx, were successfully obtained.

  10. Ill-posed problem and regularization in reconstruction of radiobiological parameters from serial tumor imaging data

    NASA Astrophysics Data System (ADS)

    Chvetsov, Alexei V.; Sandison, George A.; Schwartz, Jeffrey L.; Rengan, Ramesh

    2015-11-01

    The main objective of this article is to improve the stability of reconstruction algorithms for the estimation of radiobiological parameters using serial tumor imaging data acquired during radiation therapy. Serial images of tumor response to radiation therapy represent a complex summation of several exponential processes, such as treatment-induced cell inactivation, tumor growth rates, and the rate of cell loss. Accurate assessment of treatment response would require separation of these processes because they define radiobiological determinants of treatment response and, correspondingly, tumor control probability. However, the estimation of radiobiological parameters using imaging data can be considered an inverse ill-posed problem, because a sum of several exponentials produces a Fredholm integral equation of the first kind, which is ill posed. Therefore, the stability of reconstruction of radiobiological parameters presents a problem even for the simplest models of tumor response. To study the stability of the parameter reconstruction problem, we used a set of serial CT imaging data for head and neck cancer and the simplest case of a two-level cell population model of tumor response. Inverse reconstruction was performed using a simulated annealing algorithm to minimize a least-squared objective function. Results show that the reconstructed values of cell surviving fractions and cell doubling time exhibit significant nonphysical fluctuations if no stabilization algorithms are applied. However, after applying a stabilization algorithm based on variational regularization, the reconstruction produces statistical distributions for survival fractions and doubling time that are comparable to published in vitro data. This algorithm is an advance over our previous work, where only cell surviving fractions were reconstructed. We conclude that variational regularization allows for an increase in the number of free parameters in our model, which enables the development of more advanced parameter reconstruction algorithms.

  11. Fast Algorithms for Earth Mover’s Distance Based on Optimal Transport and L1 Type Regularization I

    DTIC Science & Technology

    2016-09-01

    …which EMD can be reformulated as a familiar homogeneous degree-1 regularized minimization. The new minimization problem is very similar to problems which… …which is also named the Monge problem or the Wasserstein metric, plays a central role in many applications, including image processing and computer vision…
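
    A hedged aside on the simplest case (an illustration, not the report's algorithm): in one dimension, for equal-mass histograms on a common uniform grid, EMD reduces to the L1 distance between cumulative sums.

    ```python
    # 1-D Earth Mover's Distance via cumulative distribution functions.
    import numpy as np

    def emd_1d(p, q, dx=1.0):
        p = np.asarray(p, float); q = np.asarray(q, float)
        assert np.isclose(p.sum(), q.sum()), "EMD needs equal total mass"
        return dx * np.sum(np.abs(np.cumsum(p - q)))

    print(emd_1d([0, 1, 0, 0], [0, 0, 0, 1]))   # unit mass moved 2 bins -> 2.0
    ```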

  12. Providing intraosseous anesthesia with minimal invasion.

    PubMed

    Giffin, K M

    1994-08-01

    A new variation of intraosseous anesthesia--crestal anesthesia--that is rapid, site-specific and minimally invasive is presented. The technique uses alveolar crest nutrient canals for anesthetic delivery without penetrating either bone or periodontal ligament.

  13. Smoothed low rank and sparse matrix recovery by iteratively reweighted least squares minimization.

    PubMed

    Lu, Canyi; Lin, Zhouchen; Yan, Shuicheng

    2015-02-01

    This paper presents a general framework for solving low-rank and/or sparse matrix minimization problems, which may involve multiple nonsmooth terms. The iteratively reweighted least squares (IRLS) method is a fast solver that smooths the objective function and minimizes it by alternately updating the variables and their weights. However, traditional IRLS can only solve sparse-only or low-rank-only minimization problems with a squared loss or an affine constraint. This paper generalizes IRLS to solve joint/mixed low-rank and sparse minimization problems, which are essential formulations for many tasks. As a concrete example, we solve the Schatten-p norm and l2,q-norm regularized low-rank representation problem by IRLS, and theoretically prove that the derived solution is a stationary point (globally optimal if p, q ≥ 1). Our convergence proof of IRLS is more general than previous ones, which depend on special properties of the Schatten-p norm and l2,q-norm. Extensive experiments on both synthetic and real data sets demonstrate that our IRLS is much more efficient.
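
    A loose sketch of the classical IRLS building block that the paper generalizes (sparse recovery under an affine constraint; the data below are assumed, and this is not the authors' joint low-rank-plus-sparse solver):

    ```python
    # IRLS for min ||x||_1 s.t. Ax = b: each step solves a weighted
    # least-squares problem whose weights approximate |x_i|.
    import numpy as np

    def irls_l1(A, b, n_iter=50, eps=1e-6):
        x = np.linalg.lstsq(A, b, rcond=None)[0]       # least-norm start
        for _ in range(n_iter):
            w = np.sqrt(x ** 2 + eps)                   # smoothed weights ~ |x_i|
            AW = A * w                                  # A @ diag(w)
            lam = np.linalg.solve(AW @ A.T, b)          # (A W A^T) lam = b
            x = w * (A.T @ lam)                         # x = W A^T lam, so Ax = b
        return x

    rng = np.random.default_rng(0)
    A = rng.standard_normal((20, 60))
    x_true = np.zeros(60); x_true[[3, 17, 42]] = [1.0, -2.0, 0.5]
    print(np.round(irls_l1(A, A @ x_true)[[3, 17, 42]], 2))
    ```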

  14. The anatomy of choice: active inference and agency

    PubMed Central

    Friston, Karl; Schwartenbeck, Philipp; FitzGerald, Thomas; Moutoussis, Michael; Behrens, Timothy; Dolan, Raymond J.

    2013-01-01

    This paper considers agency in the setting of embodied or active inference. In brief, we associate a sense of agency with prior beliefs about action and ask what sorts of beliefs underlie optimal behavior. In particular, we consider prior beliefs that action minimizes the Kullback–Leibler (KL) divergence between desired states and attainable states in the future. This allows one to formulate bounded rationality as approximate Bayesian inference that optimizes a free energy bound on model evidence. We show that constructs like expected utility, exploration bonuses, softmax choice rules and optimism bias emerge as natural consequences of this formulation. Previous accounts of active inference have focused on predictive coding and Bayesian filtering schemes for minimizing free energy. Here, we consider variational Bayes as an alternative scheme that provides formal constraints on the computational anatomy of inference and action—constraints that are remarkably consistent with neuroanatomy. Furthermore, this scheme contextualizes optimal decision theory and economic (utilitarian) formulations as pure inference problems. For example, expected utility theory emerges as a special case of free energy minimization, where the sensitivity or inverse temperature (of softmax functions and quantal response equilibria) has a unique and Bayes-optimal solution—that minimizes free energy. This sensitivity corresponds to the precision of beliefs about behavior, such that attainable goals are afforded a higher precision or confidence. In turn, this means that optimal behavior entails a representation of confidence about outcomes that are under an agent's control. PMID:24093015
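
    For a concrete flavor of one ingredient (an expository assumption, not the paper's full active-inference scheme), a softmax choice rule whose inverse temperature plays the role of precision, with action values given by negative KL divergence from desired outcomes:

    ```python
    # Softmax action selection with precision gamma over KL-based values.
    import numpy as np

    def kl(p, q):
        p = np.asarray(p, float); q = np.asarray(q, float)
        return float(np.sum(p * np.log(p / q)))

    desired = np.array([0.8, 0.15, 0.05])         # preferred outcome distribution
    predicted = {                                  # outcome dists per action (assumed)
        "a1": np.array([0.7, 0.2, 0.1]),
        "a2": np.array([0.3, 0.4, 0.3]),
    }
    gamma = 4.0                                    # precision (inverse temperature)
    values = np.array([-kl(desired, q) for q in predicted.values()])
    p_action = np.exp(gamma * values)
    p_action /= p_action.sum()
    print(dict(zip(predicted, np.round(p_action, 3))))   # a1 strongly favored
    ```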

  15. "Variation Problems" and Their Roles in the Topic of Fraction Division in Chinese Mathematics Textbook Examples

    ERIC Educational Resources Information Center

    Sun, Xuhua

    2011-01-01

    This article deals with the roles of variation problems ("one problem multiple solution" and "one problem multiple changes") as used in Chinese textbooks. It is argued that variation problems as an "indigenous" Chinese practice aim to discern and to compare the invariant feature of the relationship among concepts and…

  16. The analytic solution of the firm's cost-minimization problem with box constraints and the Cobb-Douglas model

    NASA Astrophysics Data System (ADS)

    Bayón, L.; Grau, J. M.; Ruiz, M. M.; Suárez, P. M.

    2012-12-01

    One of the best-known problems in the field of Microeconomics is the Firm's Cost-Minimization Problem. In this paper we establish the analytical expression for the cost function using the Cobb-Douglas model and considering maximum (box) constraints for the inputs. Moreover, we prove that the cost function belongs to the class C^1.
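
    A numerical counterpart of this problem under assumed prices, exponents, and bounds (the paper itself derives the closed form analytically):

    ```python
    # Minimize input cost w.x subject to a Cobb-Douglas output requirement
    # and box constraints on the inputs.
    import numpy as np
    from scipy.optimize import minimize

    w = np.array([2.0, 3.0])          # input prices (assumed)
    a = np.array([0.4, 0.6])          # Cobb-Douglas exponents (assumed)
    y = 5.0                           # required output
    x_max = np.array([8.0, 4.0])      # box constraints on inputs

    cost = lambda x: w @ x
    output = lambda x: np.prod(x ** a) - y            # = 0 on the isoquant
    res = minimize(cost, x0=np.array([5.0, 5.0]),
                   method="SLSQP",
                   bounds=[(1e-6, x_max[0]), (1e-6, x_max[1])],
                   constraints=[{"type": "eq", "fun": output}])
    print(res.x, res.fun)   # here the bound on the second input binds
    ```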

  17. Minimization of the root of a quadratic functional under a system of affine equality constraints with application to portfolio management

    NASA Astrophysics Data System (ADS)

    Landsman, Zinoviy

    2008-10-01

    We present an explicit closed-form solution to the problem of minimizing the root of a quadratic functional subject to a system of affine constraints. The result generalizes Z. Landsman, Minimization of the root of a quadratic functional under an affine equality constraint, J. Comput. Appl. Math. (2007), where the optimization problem was solved under only one linear constraint. This is of interest for solving significant problems in financial economics as well as some classes of feasibility and optimization problems which frequently occur in tomography and other fields. The results are illustrated by the problem of optimal portfolio selection, and the particular case when the expected return of the financial portfolio is certain is discussed.
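
    A hedged sketch of the portfolio illustration under assumed data: for a pure quadratic, the square root is increasing, so minimizing the root under affine constraints shares its minimizer with the quadratic itself, and the KKT conditions reduce to one linear solve (Landsman's closed form covers the general functional).

    ```python
    # Minimum-variance portfolio under budget and target-return constraints.
    import numpy as np

    Q = np.array([[0.04, 0.01, 0.00],      # covariance matrix (assumed)
                  [0.01, 0.09, 0.02],
                  [0.00, 0.02, 0.16]])
    mu = np.array([0.05, 0.08, 0.12])       # expected returns (assumed)
    A = np.vstack([np.ones(3), mu])         # A x = b: budget + target return
    b = np.array([1.0, 0.09])

    n, m = 3, 2
    # KKT system: 2 Q x + A^T lam = 0, A x = b
    K = np.block([[2 * Q, A.T], [A, np.zeros((m, m))]])
    sol = np.linalg.solve(K, np.concatenate([np.zeros(n), b]))
    x = sol[:n]
    print(x, np.sqrt(x @ Q @ x))            # weights and minimal root
    ```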

  18. Joint denoising and distortion correction of atomic scale scanning transmission electron microscopy images

    NASA Astrophysics Data System (ADS)

    Berkels, Benjamin; Wirth, Benedikt

    2017-09-01

    Modern electron microscopes deliver images at the atomic scale. The precise atomic structure encodes information about material properties. Thus, an important ingredient in the image analysis is to locate the centers of the atoms shown in micrographs as precisely as possible. Here, we consider scanning transmission electron microscopy (STEM), which acquires data in a rastering pattern, pixel by pixel. Due to this rastering combined with the magnification to atomic scale, movements of the specimen even at the nanometer scale lead to random image distortions that make precise atom localization difficult. Given a series of STEM images, we derive a Bayesian method that jointly estimates the distortion in each image and reconstructs the underlying atomic grid of the material by fitting the atom bumps with suitable bump functions. The resulting highly non-convex minimization problems are solved numerically with a trust region approach. Existence of minimizers and the model behavior for faster and faster rastering are investigated using variational techniques. The performance of the method is finally evaluated on both synthetic and real experimental data.
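
    A simplified illustration of one ingredient (a single assumed bump, not the full Bayesian joint model): fitting a 2-D Gaussian "atom bump" with a trust-region least-squares solver.

    ```python
    # Fit one Gaussian bump to a noisy patch with scipy's trust-region
    # reflective least-squares method.
    import numpy as np
    from scipy.optimize import least_squares

    yy, xx = np.mgrid[0:21, 0:21].astype(float)
    def bump(p):
        x0, y0, amp, s = p
        return amp * np.exp(-((xx - x0) ** 2 + (yy - y0) ** 2) / (2 * s ** 2))

    rng = np.random.default_rng(0)
    img = bump([10.3, 9.7, 1.0, 2.0]) + 0.05 * rng.standard_normal(xx.shape)

    residual = lambda p: (bump(p) - img).ravel()
    fit = least_squares(residual, x0=[9.0, 11.0, 0.8, 3.0], method="trf")
    print(fit.x)   # recovered center, amplitude, and width
    ```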

  19. A Model of Regularization Parameter Determination in Low-Dose X-Ray CT Reconstruction Based on Dictionary Learning

    PubMed Central

    Zhang, Cheng; Zhang, Tao; Li, Ming; Lu, Yanfei; You, Jiali; Guan, Yihui

    2015-01-01

    In recent years, X-ray computed tomography (CT) has become widely used to reveal patients' anatomical information. However, the side effects of radiation, related to genetic or cancerous diseases, have caused great public concern. The problem is how to reduce the radiation dose significantly while maintaining image quality. As a practical application of compressed sensing theory, one category of methods takes total variation (TV) minimization as the sparse constraint, which makes it possible to obtain a high-quality reconstruction in the undersampling situation. On the other hand, preliminary attempts at low-dose CT reconstruction based on dictionary learning appear to be another effective choice. However, some critical parameters, such as the regularization parameter, cannot be determined directly from the measured datasets. In this paper, we propose a reweighted objective function that leads to a numerical model for determining the regularization parameter. A number of experiments demonstrate that this strategy performs well, yielding better reconstructed images and saving a large amount of time. PMID:26550024

  20. Viscous Corrections of the Time Incremental Minimization Scheme and Visco-Energetic Solutions to Rate-Independent Evolution Problems

    NASA Astrophysics Data System (ADS)

    Minotti, Luca; Savaré, Giuseppe

    2018-02-01

    We propose the new notion of Visco-Energetic solutions to rate-independent systems (X, E, d) driven by a time-dependent energy E and a dissipation quasi-distance d in a general metric-topological space X. As in the classic Energetic approach, solutions can be obtained by solving a modified time Incremental Minimization Scheme, where at each step the dissipation quasi-distance d is incremented by a viscous correction δ (for example, proportional to the square of the distance d), which penalizes far-distance jumps by inducing a localized version of the stability condition. We prove a general convergence result and a typical characterization by Stability and Energy Balance in a setting comparable to the standard energetic one, thus capable of covering a wide range of applications. The new refined Energy Balance condition compensates for the localized stability and provides a careful description of the jump behavior: at every jump the solution follows an optimal transition, which resembles in a suitable variational sense the discrete scheme that has been implemented for the whole construction.
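
    A schematic one-dimensional toy of the corrected Incremental Minimization Scheme (energy, distance, and constants are illustrative assumptions):

    ```python
    # At each step minimize E(t_{n+1}, x) + d(x_n, x) + mu * d(x_n, x)^2
    # over a grid; the quadratic viscous correction penalizes long jumps.
    import numpy as np

    xs = np.linspace(-2.0, 2.0, 2001)
    E = lambda t, x: (x ** 2 - 1.0) ** 2 - t * x      # tilted double well
    d = lambda a, b: np.abs(b - a)                    # dissipation distance
    mu = 0.5                                          # viscous correction weight

    x_n, path = -1.0, []
    for t in np.linspace(0.0, 2.0, 41):
        total = E(t, xs) + d(x_n, xs) + mu * d(x_n, xs) ** 2
        x_n = xs[np.argmin(total)]
        path.append((t, x_n))
    print(path[::10])   # the state eventually jumps between the wells
    ```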

  1. Minimization of betatron oscillations of electron beam injected into a time-varying lattice via extremum seeking

    DOE PAGES

    Scheinker, Alexander; Huang, Xiaobiao; Wu, Juhao

    2017-02-20

    Here, we report on a beam-based experiment performed at the SPEAR3 storage ring of the Stanford Synchrotron Radiation Lightsource at the SLAC National Accelerator Laboratory, in which a model-independent extremum-seeking optimization algorithm was utilized to minimize betatron oscillations in the presence of a time-varying kicker magnetic field, by automatically tuning the pulse width, voltage, and delay of two other kicker magnets, and the current of two skew quadrupole magnets, simultaneously, in order to optimize injection kick matching. Adaptive tuning was performed on eight parameters simultaneously. The scheme was able to continuously maintain the match of a five-magnet lattice while the field strength of a kicker magnet was continuously varied at a rate much higher (±6% sinusoidal voltage change over 1.5 h) than is typically experienced in operation. Lastly, the ability to quickly tune or compensate for time variation of coupled components, as demonstrated here, is very important for the more general, more difficult problem of global accelerator tuning to quickly switch between various experimental setups.
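
    A bare-bones sketch of perturbation-based extremum seeking on one parameter (a generic textbook variant with assumed gains, not the SPEAR3 controller, which tunes eight parameters with a bounded model-independent scheme):

    ```python
    # Dither the parameter sinusoidally, demodulate the measured cost,
    # and descend the estimated gradient while the optimum drifts.
    import numpy as np

    def cost(u, t):
        target = 1.0 + 0.3 * np.sin(0.002 * t)    # slowly drifting optimum
        return (u - target) ** 2

    theta, dt, a, omega, k = 0.0, 0.01, 0.1, 40.0, 4.0
    for i in range(200_000):
        t = i * dt
        y = cost(theta + a * np.sin(omega * t), t)   # probe with dither
        theta -= dt * k * np.sin(omega * t) * y      # averaged update ~ -(k*a/2) J'(theta)
    print(theta)   # tracks the moving target near 1.0 +/- 0.3
    ```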

  2. A level set-based topology optimization method for simultaneous design of elastic structure and coupled acoustic cavity using a two-phase material model

    NASA Astrophysics Data System (ADS)

    Noguchi, Yuki; Yamamoto, Takashi; Yamada, Takayuki; Izui, Kazuhiro; Nishiwaki, Shinji

    2017-09-01

    This paper proposes a level set-based topology optimization method for the simultaneous design of acoustic and structural material distributions. In this study, we develop a two-phase material model that is a mixture of an elastic material and an acoustic medium, to represent an elastic structure and an acoustic cavity by controlling a volume fraction parameter. In the proposed model, boundary conditions at the two-phase material boundaries are satisfied naturally, avoiding the need to express these boundaries explicitly. We formulate a topology optimization problem to minimize the sound pressure level using this two-phase material model and a level set-based method that obtains topologies free from grayscale regions. The topological derivative of the objective functional is approximately derived using a variational approach and the adjoint variable method, and is utilized to update the level set function via a time-evolutionary reaction-diffusion equation. Several numerical examples present optimal acoustic and structural topologies that minimize the sound pressure generated from a vibrating elastic structure.

  3. Regularized variational theories of fracture: A unified approach

    NASA Astrophysics Data System (ADS)

    Freddi, Francesco; Royer-Carfagni, Gianni

    2010-08-01

    The fracture pattern in stressed bodies is defined through the minimization of a two-field pseudo-spatial-dependent functional, with a structure similar to that proposed by Bourdin-Francfort-Marigo (2000) as a regularized approximation of a parent free-discontinuity problem, but now considered as an autonomous model per se. Here, this formulation is altered by combining it with structured deformation theory, to model the fact that when the material microstructure is loosened and damaged, peculiar inelastic (structured) deformations may occur in the representative volume element at the price of surface energy consumption. This approach unifies various theories of failure because, by simply varying the form of the class of admissible structured deformations, different-in-type responses can be captured, incorporating the idea of cleavage, deviatoric, combined cleavage-deviatoric, and masonry-like fractures. Remarkably, this latter formulation rigorously avoids material overlapping in the cracked zones. The model is numerically implemented using a standard finite-element discretization and adopts an alternate minimization algorithm, adding an inequality constraint to impose crack irreversibility (fixed crack model). Numerical experiments for some paradigmatic examples are presented and compared for various possible versions of the model.

  4. Variational approach to probabilistic finite elements

    NASA Technical Reports Server (NTRS)

    Belytschko, T.; Liu, W. K.; Mani, A.; Besterfield, G.

    1991-01-01

    Probabilistic finite element methods (PFEM), synthesizing the power of finite element methods with second-moment techniques, are formulated for various classes of problems in structural and solid mechanics. Time-invariant random materials, geometric properties and loads are incorporated in terms of their fundamental statistics viz. second-moments. Analogous to the discretization of the displacement field in finite element methods, the random fields are also discretized. Preserving the conceptual simplicity, the response moments are calculated with minimal computations. By incorporating certain computational techniques, these methods are shown to be capable of handling large systems with many sources of uncertainties. By construction, these methods are applicable when the scale of randomness is not very large and when the probabilistic density functions have decaying tails. The accuracy and efficiency of these methods, along with their limitations, are demonstrated by various applications. Results obtained are compared with those of Monte Carlo simulation and it is shown that good accuracy can be obtained for both linear and nonlinear problems. The methods are amenable to implementation in deterministic FEM based computer codes.

  7. This is SPIRAL-TAP: Sparse Poisson Intensity Reconstruction ALgorithms--theory and practice.

    PubMed

    Harmany, Zachary T; Marcia, Roummel F; Willett, Rebecca M

    2012-03-01

    Observations in many applications consist of counts of discrete events, such as photons hitting a detector, which cannot be effectively modeled using an additive bounded or Gaussian noise model, and instead require a Poisson noise model. As a result, accurate reconstruction of a spatially or temporally distributed phenomenon (f*) from Poisson data (y) cannot be effectively accomplished by minimizing a conventional penalized least-squares objective function. The problem addressed in this paper is the estimation of f* from y in an inverse problem setting, where the number of unknowns may potentially be larger than the number of observations and f* admits sparse approximation. The optimization formulation considered in this paper uses a penalized negative Poisson log-likelihood objective function with nonnegativity constraints (since Poisson intensities are naturally nonnegative). In particular, the proposed approach incorporates key ideas of using separable quadratic approximations to the objective function at each iteration and penalization terms related to l1 norms of coefficient vectors, total variation seminorms, and partition-based multiscale estimation methods.
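
    A rough sketch of the SPIRAL-type iteration in its simplest form (fixed step size and assumed data; the actual algorithm uses careful step-size selection and the richer penalties listed above):

    ```python
    # Proximal gradient on the penalized negative Poisson log-likelihood
    # with an l1 penalty and a nonnegativity constraint.
    import numpy as np

    rng = np.random.default_rng(0)
    m, n = 80, 40
    A = rng.uniform(0.0, 1.0, (m, n))
    f_true = np.zeros(n); f_true[[5, 20, 33]] = [4.0, 7.0, 3.0]
    y = rng.poisson(A @ f_true + 0.1)        # Poisson counts, background 0.1

    tau, step, f = 0.5, 1e-3, np.ones(n)     # penalty weight, step, init
    for _ in range(5000):
        lam = A @ f + 0.1                    # predicted intensity (positive)
        g = A.T @ (1.0 - y / lam)            # gradient of neg. log-likelihood
        f = f - step * g
        f = np.sign(f) * np.maximum(np.abs(f) - step * tau, 0.0)  # soft-threshold
        f = np.maximum(f, 0.0)               # Poisson intensities are nonnegative
    print(np.round(f[[5, 20, 33]], 1))
    ```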

  8. A variational data assimilation system for the range dependent acoustic model using the representer method: Theoretical derivations.

    PubMed

    Ngodock, Hans; Carrier, Matthew; Fabre, Josette; Zingarelli, Robert; Souopgui, Innocent

    2017-07-01

    This study presents the theoretical framework for the variational assimilation of acoustic pressure observations into an acoustic propagation model, namely the range dependent acoustic model (RAM). RAM uses the split-step Padé algorithm to solve the parabolic equation. The assimilation consists of minimizing a weighted least-squares cost function that penalizes discrepancies between the model solution and the observations. The minimization process, which rests on variational principles, requires the derivation of the tangent linear and adjoint models of the RAM. The mathematical derivations are presented here; for the sake of brevity, a companion study presents the numerical implementation and results from the assimilation of simulated acoustic pressure observations.

  9. DQM: Decentralized Quadratically Approximated Alternating Direction Method of Multipliers

    NASA Astrophysics Data System (ADS)

    Mokhtari, Aryan; Shi, Wei; Ling, Qing; Ribeiro, Alejandro

    2016-10-01

    This paper considers decentralized consensus optimization problems where nodes of a network have access to different summands of a global objective function. Nodes cooperate to minimize the global objective by exchanging information with neighbors only. A decentralized version of the alternating direction method of multipliers (DADMM) is a common method for solving this category of problems. DADMM exhibits linear convergence to the optimal objective, but its implementation requires solving a convex optimization problem at each iteration. This can be computationally costly and may result in large overall convergence times. The decentralized quadratically approximated ADMM algorithm (DQM), which minimizes a quadratic approximation of the objective function that DADMM minimizes at each iteration, is proposed here. The consequent reduction in computational time is shown to have minimal effect on convergence properties. Convergence still proceeds at a linear rate with a guaranteed constant that is asymptotically equivalent to the DADMM linear convergence rate constant. Numerical results demonstrate the advantages of DQM relative to DADMM and other alternatives in a logistic regression problem.

  10. On Exact Solutions of Rarefaction-Rarefaction Interactions in Compressible Isentropic Flow

    NASA Astrophysics Data System (ADS)

    Jenssen, Helge Kristian

    2017-12-01

    Consider the interaction of two centered rarefaction waves in one-dimensional, compressible gas flow with pressure function p(ρ) = a^2 ρ^γ with γ > 1. The classic hodograph approach of Riemann provides linear second-order equations for the time and space variables t, x as functions of the Riemann invariants r, s within the interaction region. It is well known that t(r, s) can be given explicitly in terms of the hypergeometric function. We present a direct calculation (based on works by Darboux and Martin) of this formula, and show how the same approach provides an explicit formula for x(r, s) in terms of Appell functions (two-variable hypergeometric functions). Motivated by the issue of vacuum and total variation estimates for 1-d Euler flows, we then use the explicit t-solution to monitor the density field and its spatial variation in interactions of two centered rarefaction waves. It is found that the variation is always non-monotone, and that there is an overall increase in density variation if and only if γ > 3. We show that infinite duration of the interaction is characterized by approach toward vacuum in the interaction region, and that this occurs if and only if the Riemann problem defined by the extreme initial states generates a vacuum. Finally, it is verified that the minimal density in such interactions decays at rate O(1)/t.

  11. Minimal norm constrained interpolation. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Irvine, L. D.

    1985-01-01

    In computational fluid dynamics and in CAD/CAM, a physical boundary is usually known only discretely and most often must be approximated. An acceptable approximation preserves the salient features of the data such as convexity and concavity. In this dissertation, a smooth interpolant which is locally concave where the data are concave and locally convex where the data are convex is described. The interpolant is found by posing and solving a minimization problem whose solution is a piecewise cubic polynomial. The problem is solved indirectly by using the Peano kernel theorem to recast it into an equivalent minimization problem having the second derivative of the interpolant as the solution. This approach leads to a nonlinear system of equations. It is shown that Newton's method is an exceptionally attractive and efficient method for solving this nonlinear system. Examples of shape-preserving interpolants, as well as convergence results obtained by using Newton's method, are shown. A FORTRAN program to compute these interpolants is listed. The problem of computing the interpolant of minimal norm from a convex cone in a normed dual space is also discussed. An extension of de Boor's work on minimal norm unconstrained interpolation is presented.
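
    A generic Newton iteration for a nonlinear system, as a small stand-in for the dissertation's FORTRAN solver (the system below is an assumed toy, not the interpolation equations):

    ```python
    # Newton's method: solve F(v) = 0 via successive linearizations.
    import numpy as np

    def F(v):
        x, y = v
        return np.array([x ** 2 + y ** 2 - 4.0,   # circle
                         x - y - 1.0])            # line

    def J(v):
        x, y = v
        return np.array([[2.0 * x, 2.0 * y],
                         [1.0, -1.0]])

    v = np.array([2.0, 0.0])                      # initial guess
    for _ in range(20):
        step = np.linalg.solve(J(v), -F(v))
        v = v + step
        if np.linalg.norm(step) < 1e-12:          # quadratic convergence
            break
    print(v, F(v))
    ```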

  12. Germanium soup

    NASA Astrophysics Data System (ADS)

    Palmer, Troy A.; Alexay, Christopher C.

    2006-05-01

    This paper addresses the variety and impact of dispersive model variations for infrared materials and, in particular, the level to which certain optical designs are affected by this potential variation in germanium. This work offers a method for anticipating and/or minimizing the pitfalls such potential model variations may have on a candidate optical design.

  13. Intelligent Distribution Voltage Control with Distributed Generation

    NASA Astrophysics Data System (ADS)

    Castro Mendieta, Jose

    In this thesis, three methods for the optimal participation of the reactive power of distributed generations (DGs) in unbalanced distribution networks are proposed, developed, and tested. These methods were developed with the objectives of maintaining voltage within permissible limits and reducing losses. The first method proposes an optimal participation of the reactive power of all devices available in the network. The proposed approach is validated by comparing its results with other methods reported in the literature. The method was implemented using Simulink of Matlab and OpenDSS; the optimization techniques and the presentation of results are from Matlab. A co-simulation with the Electric Power Research Institute's (EPRI) OpenDSS program solves a three-phase optimal power flow problem in the unbalanced IEEE 13- and 34-node test feeders. The results from this work showed a better loss reduction compared to the Coordinated Voltage Control (CVC) method. The second method aims to minimize the voltage variation on the pilot bus of the distribution network using DGs. It uses Pareto analysis and Fuzzy-PID logic to reduce the voltage variation. Results indicate that the proposed method reduces the voltage variation more than the other methods. The performance of the method is evaluated on the IEEE 13-node test feeder with one and three DGs. Variable and unbalanced loads are used, based on real consumption data, over a time window of 48 hours. The third method aims to minimize the reactive losses using DGs on distribution networks. This method analyzes the problem using the IEEE 13-node test feeder with three different loads and the IEEE 123-node test feeder with four DGs. The DGs can be fixed or variable. Results indicate that integrating DGs to optimize the reactive power of the network helps to maintain the voltage within the allowed limits and to reduce the reactive power losses. The thesis is presented in the form of three articles. The first article is published in the journal Electrical Power and Energy Systems, the second is published in the international journal Energies, and the third was submitted to the journal Electrical Power and Energy Systems. Two other articles have been published in conferences with reviewing committees. This work comprises six chapters, which are detailed in the various sections of the thesis.

  14. Influence of hospital-level practice patterns on variation in the application of minimally invasive surgery in United States pediatric patients.

    PubMed

    Train, Arianne T; Harmon, Carroll M; Rothstein, David H

    2017-10-01

    Although disparities in access to minimally invasive surgery are thought to exist for pediatric surgical patients in the United States, hospital-level practice patterns have not been evaluated as a possible contributing factor. This was a retrospective cohort study using the Kids' Inpatient Database, 2012. Odds ratios of undergoing a minimally invasive compared to an open operation were calculated for six typical pediatric surgical operations after adjustment for multiple patient demographic and hospital-level variables. Further adjustment to the regression model was made by incorporating hospital practice patterns, defined as operation-specific minimally invasive frequency and volume. Age was the most significant patient demographic factor affecting the application of minimally invasive surgery for all procedures. For several procedures, adjusting for individual hospital practice patterns removed race- and income-based disparities seen in the performance of minimally invasive operations. Disparities related to insurance status were not affected by the same adjustment. Variation in the application of minimally invasive surgery in pediatric surgical patients is primarily influenced by patient age and the type of procedure performed. Perceived disparities in access related to some socioeconomic factors are decreased but not eliminated by accounting for individual hospital practice patterns, suggesting that complex underlying factors influence the application of advanced surgical techniques. Level of evidence: II. Copyright © 2017 Elsevier Inc. All rights reserved.

  15. Probabilistic models of genetic variation in structured populations applied to global human studies.

    PubMed

    Hao, Wei; Song, Minsun; Storey, John D

    2016-03-01

    Modern population genetics studies typically involve genome-wide genotyping of individuals from a diverse network of ancestries. An important problem is how to formulate and estimate probabilistic models of observed genotypes that account for complex population structure. The most prominent work on this problem has focused on estimating a model of admixture proportions of ancestral populations for each individual. Here, we instead focus on modeling variation of the genotypes without requiring a higher-level admixture interpretation. We formulate two general probabilistic models, and we propose computationally efficient algorithms to estimate them. First, we show how principal component analysis can be utilized to estimate a general model that includes the well-known Pritchard-Stephens-Donnelly admixture model as a special case. Noting some drawbacks of this approach, we introduce a new 'logistic factor analysis' framework that seeks to directly model the logit transformation of probabilities underlying observed genotypes in terms of latent variables that capture population structure. We demonstrate these advances on data from the Human Genome Diversity Panel and 1000 Genomes Project, where we are able to identify SNPs that are highly differentiated with respect to structure while making minimal modeling assumptions. Availability: a Bioconductor R package called lfa is available at http://www.bioconductor.org/packages/release/bioc/html/lfa.html. Contact: jstorey@princeton.edu. Supplementary data are available at Bioinformatics online. © The Author 2015. Published by Oxford University Press.

  16. Approximate error conjugation gradient minimization methods

    DOEpatents

    Kallman, Jeffrey S

    2013-05-21

    In one embodiment, a method includes selecting a subset of rays from a set of all rays to use in an error calculation for a constrained conjugate gradient minimization problem, calculating an approximate error using the subset of rays, and calculating a minimum in a conjugate gradient direction based on the approximate error. In another embodiment, a system includes a processor for executing logic, logic for selecting a subset of rays from a set of all rays to use in an error calculation for a constrained conjugate gradient minimization problem, logic for calculating an approximate error using the subset of rays, and logic for calculating a minimum in a conjugate gradient direction based on the approximate error. In other embodiments, computer program products, methods, and systems are described capable of using approximate error in constrained conjugate gradient minimization problems.

  17. Physically corrected forward operators for induced emission tomography: a simulation study

    NASA Astrophysics Data System (ADS)

    Viganò, Nicola Roberto; Solé, Vicente Armando

    2018-03-01

    X-ray emission tomography techniques applied to non-radioactive materials allow one to investigate different and important aspects of matter that are usually not addressable with standard x-ray transmission tomography, such as density, chemical composition, and crystallographic information. However, the quantitative reconstruction of the investigated properties is hindered by additional problems, including the self-attenuation of the emitted radiation. Work has been done in the past, especially concerning x-ray fluorescence tomography, but it has always focused on solving very specific problems. The novelty of this work resides in addressing the problem of induced emission tomography from a much wider perspective, introducing a unified discrete representation that can be used to modify existing algorithms to reconstruct the data of different types of experiments. The direct outcome is a clear and easy mathematical description of the implementation details of such algorithms, despite small differences in geometry and other practical aspects, but also the possibility to express the reconstruction as a minimization problem, allowing the use of variational methods and a more flexible modeling of the noise involved in the detection process. In addition, we look at the results of a few selected simulated data reconstructions that describe the effect of physical corrections like self-attenuation, and the response to noise of the adapted reconstruction algorithms.

  18. On multiple crack identification by ultrasonic scanning

    NASA Astrophysics Data System (ADS)

    Brigante, M.; Sumbatyan, M. A.

    2018-04-01

    The present work develops an approach which reduces operator equations arising in engineering problems to the problem of minimizing a discrepancy functional. For this minimization, an algorithm of random global search is proposed, which is akin to some genetic algorithms. The efficiency of the method is demonstrated by solving the problem of simultaneous identification of several linear cracks forming an array in an elastic medium by means of circular ultrasonic scanning.
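
    A minimal random global search with elitist moves and occasional restarts (an illustrative stand-in for the paper's GA-like algorithm; the discrepancy functional below is an assumed toy, not the crack-identification functional):

    ```python
    # Random global search: perturb the incumbent with a shrinking radius
    # and occasionally restart uniformly to escape local minima.
    import numpy as np

    rng = np.random.default_rng(0)
    target = np.array([0.3, -1.2, 2.1])                  # hidden "crack" parameters
    discrepancy = lambda p: np.sum((p - target) ** 2)    # toy functional

    lo, hi = -3.0, 3.0
    best = rng.uniform(lo, hi, 3)
    best_val = discrepancy(best)
    for k in range(20_000):
        sigma = (hi - lo) * 0.5 ** (k // 2000)           # shrinking search radius
        cand = np.clip(best + sigma * rng.standard_normal(3), lo, hi)
        if rng.random() < 0.05:                          # occasional global restart
            cand = rng.uniform(lo, hi, 3)
        val = discrepancy(cand)
        if val < best_val:
            best, best_val = cand, val
    print(best, best_val)
    ```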

  19. The use of ion beam cleaning to obtain high quality cold welds with minimal deformation

    NASA Technical Reports Server (NTRS)

    Sater, B. L.; Moore, T. J.

    1978-01-01

    A variation of cold welding is described which utilizes an ion beam to clean mating surfaces prior to joining in a vacuum environment. High quality solid state welds were produced with minimal deformation.

  20. Development of sinkholes resulting from man's activities in the Eastern United States

    USGS Publications Warehouse

    Newton, John G.

    1987-01-01

    Alternatives that allow avoiding or minimizing sinkhole hazards are most numerous when a problem or potential problem is recognized during site evaluation. The number of alternatives declines after the beginning of site development. Where sinkhole development is predictable, zoning of land use can minimize hazards.

  1. A promising limited angular computed tomography reconstruction via segmentation based regional enhancement and total variation minimization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Wenkun; Zhang, Hanming; Li, Lei

    2016-08-15

    X-ray computed tomography (CT) is a powerful and common inspection technique used for industrial non-destructive testing. However, large-sized and heavily absorbing objects cause the formation of artifacts because of either the lack of specimen penetration in specific directions or the acquisition of data from only a limited angular range of views. Although sparse optimization-based methods, such as the total variation (TV) minimization method, can suppress artifacts to some extent, reconstructing images that converge to accurate values remains difficult because of the deficiency in continuous angular data and inconsistency in the projections. To address this problem, we use the idea of regional enhancement of the true values and suppression of the illusory artifacts outside the region to develop an efficient iterative algorithm. This algorithm is based on the combination of regional enhancement of the true values and TV minimization for the limited angular reconstruction. In this algorithm, a segmentation approach is introduced to distinguish the regions of different image knowledge and generate the support mask of the image. A new regularization term, which contains the support knowledge to enhance the true values of the image, is incorporated into the objective function. The proposed optimization model is then solved by variable splitting and the alternating direction method efficiently. A compensation approach is also designed to extract useful information from the initial projections and thus reduce false segmentation results and correct the segmentation support and the segmented image. The results obtained from comparing both simulation studies and real CT data set reconstructions indicate that the proposed algorithm generates a more accurate image than the other reconstruction methods. The experimental results show that this algorithm can produce high-quality reconstructed images for the limited angular reconstruction and suppress the illusory artifacts caused by the deficiency in valid data.

  3. Heat transfer enhancement in free convection flow of CNTs Maxwell nanofluids with four different types of molecular liquids.

    PubMed

    Aman, Sidra; Khan, Ilyas; Ismail, Zulkhibri; Salleh, Mohd Zuki; Al-Mdallal, Qasem M

    2017-05-26

    This article investigates heat transfer enhancement in the free convection flow of Maxwell nanofluids with carbon nanotubes (CNTs) over a vertical static plate with constant wall temperature. Two kinds of CNTs, i.e., single-wall carbon nanotubes (SWCNTs) and multi-wall carbon nanotubes (MWCNTs), are suspended in four different types of base liquids (kerosene oil, engine oil, water, and ethylene glycol). Kerosene oil-based nanofluids are given special consideration due to their higher thermal conductivities, unique properties, and applications. The problem is modelled in terms of PDEs with initial and boundary conditions. Relevant non-dimensional variables are introduced in order to transform the governing problem into dimensionless form. The resulting problem is solved via the Laplace transform technique, and exact solutions for velocity, shear stress, and temperature are obtained. These solutions are significantly controlled by the variations of parameters including the relaxation time, Prandtl number, Grashof number, and nanoparticle volume fraction. Velocity and temperature increase with elevation in the Grashof number, while shear stress decreases with increasing Maxwell parameter. A comparison between SWCNTs and MWCNTs in each case is made.

  4. Inferring Spatial Variations of Microstructural Properties from Macroscopic Mechanical Response

    PubMed Central

    Liu, Tengxiao; Hall, Timothy J.; Barbone, Paul E.; Oberai, Assad A.

    2016-01-01

    Disease alters tissue microstructure, which in turn affects the macroscopic mechanical properties of tissue. In elasticity imaging, the macroscopic response is measured and is used to infer the spatial distribution of the elastic constitutive parameters. When an empirical constitutive model is used these parameters cannot be linked to the microstructure. However, when the constitutive model is derived from a microstructural representation of the material, it allows for the possibility of inferring the local averages of the spatial distribution of the microstructural parameters. This idea forms the basis of this study. In particular, we first derive a constitutive model by homogenizing the mechanical response of a network of elastic, tortuous fibers. Thereafter, we use this model in an inverse problem to determine the spatial distribution of the microstructural parameters. We solve the inverse problem as a constrained minimization problem, and develop efficient methods for solving it. We apply these methods to displacement fields obtained by deforming gelatin-agar co-gels, and determine the spatial distribution of agar concentration and fiber tortuosity, thereby demonstrating that it is possible to image local averages of microstructural parameters from macroscopic measurements of deformation. PMID:27655420

  5. Formulation and Optimization of Robust Sensor Placement Problems for Drinking Water Contamination Warning Systems

    DOE PAGES

    Watson, Jean-Paul; Murray, Regan; Hart, William E.

    2009-11-13

    We report that the sensor placement problem in contamination warning system design for municipal water distribution networks involves maximizing the protection level afforded by limited numbers of sensors, typically quantified as the expected impact of a contamination event; the issue of how to mitigate against high-consequence events is either handled implicitly or ignored entirely. Consequently, expected-case sensor placements run the risk of failing to protect against high-consequence 9/11-style attacks. In contrast, robust sensor placements address this concern by focusing strictly on high-consequence events and placing sensors to minimize the impact of these events. We introduce several robust variations of the sensor placement problem, distinguished by how they quantify the potential damage due to high-consequence events. We explore the nature of robust versus expected-case sensor placements on three real-world large-scale distribution networks. We find that robust sensor placements can yield large reductions in the number and magnitude of high-consequence events, with only modest increases in expected impact. Finally, the ability to trade off between robust and expected-case impacts is a key unexplored dimension in contamination warning system design.
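
    A hedged sketch of the robust idea (a greedy heuristic on an assumed random impact matrix, not the authors' exact formulation):

    ```python
    # impact[e, s] = damage of contamination event e if sensor s is the
    # first to detect it; greedily pick sensors to minimize the worst case.
    import numpy as np

    rng = np.random.default_rng(0)
    n_events, n_sites, budget = 200, 30, 5
    impact = rng.uniform(10.0, 100.0, (n_events, n_sites))

    chosen = []
    for _ in range(budget):
        best_site, best_worst = None, np.inf
        for s in range(n_sites):
            if s in chosen:
                continue
            trial = chosen + [s]
            worst = impact[:, trial].min(axis=1).max()   # worst-case event impact
            if worst < best_worst:
                best_site, best_worst = s, worst
        chosen.append(best_site)
        print(f"site {best_site:2d}, worst-case impact {best_worst:.1f}")
    ```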

  6. Minimizing distortion and internal forces in truss structures by simulated annealing

    NASA Technical Reports Server (NTRS)

    Kincaid, Rex K.; Padula, Sharon L.

    1990-01-01

    Inaccuracies in the length of members and the diameters of joints of large space structures may produce unacceptable levels of surface distortion and internal forces. Here, two discrete optimization problems are formulated, one to minimize surface distortion (DSQRMS) and the other to minimize internal forces (FSQRMS). Both of these problems are based on the influence matrices generated by a small-deformation linear analysis. Good solutions are obtained for DSQRMS and FSQRMS through the use of a simulated annealing heuristic.

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kolda, Christopher

    In this talk, I review recent work on using a generalization of the Next-to-Minimal Supersymmetric Standard Model (NMSSM), called the Singlet-extended Minimal Supersymmetric Standard Model (SMSSM), to raise the mass of the Standard Model-like Higgs boson without requiring extremely heavy top squarks or large stop mixing. In so doing, this model solves the little hierarchy problem of the minimal model (MSSM), at the expense of leaving the μ-problem of the MSSM unresolved. This talk is based on work published in Refs. [1, 2, 3].

  8. Does finite-temperature decoding deliver better optima for noisy Hamiltonians?

    NASA Astrophysics Data System (ADS)

    Ochoa, Andrew J.; Nishimura, Kohji; Nishimori, Hidetoshi; Katzgraber, Helmut G.

    The minimization of an Ising spin-glass Hamiltonian is an NP-hard problem. Because many problems across disciplines can be mapped onto this class of Hamiltonian, novel efficient computing techniques are highly sought after. The recent development of quantum annealing machines promises to solve these difficult minimization problems more efficiently. However, the inherent noise found in these analog devices makes the minimization procedure difficult: while the machine might be working correctly, it might be minimizing a different Hamiltonian due to the inherent noise. This means that, in general, the ground-state configuration that correctly minimizes a noisy Hamiltonian might not minimize the noise-less Hamiltonian. Inspired by rigorous results showing that the energy of the noise-less ground-state configuration is equal to the expectation value of the energy of the noisy Hamiltonian at the (nonzero) Nishimori temperature [J. Phys. Soc. Jpn. 62 (1993)], we numerically study the decoding probability of the original noise-less ground state with noisy Hamiltonians in two space dimensions, as well as the D-Wave Inc. Chimera topology. Our results suggest that thermal fluctuations might be beneficial during the optimization process in analog quantum annealing machines.
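
    A small brute-force illustration of the phenomenon (an assumed toy instance): the exact ground state of a noisy Ising Hamiltonian need not minimize the noise-free one.

    ```python
    # Compare ground states of clean and noise-perturbed Ising couplings
    # by exhaustive enumeration over a small spin system.
    import itertools
    import numpy as np

    rng = np.random.default_rng(3)
    n = 8
    J = np.triu(rng.choice([-1.0, 1.0], (n, n)), k=1)      # noise-free couplings
    J_noisy = J + np.triu(0.3 * rng.standard_normal((n, n)), k=1)

    def energy(J, s):
        return float(s @ J @ s)    # sum over i<j of J_ij s_i s_j

    def ground_state(J):
        best, e_best = None, np.inf
        for bits in itertools.product([-1, 1], repeat=n):  # 2^n configurations
            s = np.array(bits, float)
            e = energy(J, s)
            if e < e_best:
                best, e_best = s, e
        return best

    s_clean = ground_state(J)
    s_noisy = ground_state(J_noisy)
    print("noisy ground state also minimizes the clean Hamiltonian?",
          np.isclose(energy(J, s_noisy), energy(J, s_clean)))
    ```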

  9. Variational Assimilation of Sparse and Uncertain Satellite Data For 1D Saint-Venant River Models

    NASA Astrophysics Data System (ADS)

    Garambois, P. A.; Brisset, P.; Monnier, J.; Roux, H.

    2016-12-01

    A profusion of satellites is providing increasingly accurate measurements of the continental water cycle and of water-body variations, while in situ observability is declining. The future Surface Water and Ocean Topography (SWOT) mission will provide maps of river surface elevations, widths, and slopes with almost global coverage and temporal revisits. This will offer the possibility to address a larger variety of inverse problems in surface hydrology. Data assimilation techniques, which are broadly used in several scientific fields, aim to optimally combine models, system observations, and prior information. Variational assimilation consists of the iterative minimization of a discrepancy measure between model outputs and observations, here for retrieving boundary conditions and parameters of a 1D Saint-Venant model. Nevertheless, inferring river discharge and hydraulic parameters from observations of the river surface is not straightforward. This is particularly true in the case of sparse and uncertain observations of flow state variables, since these are governed by nonlinear physical processes. This paper investigates the identifiability of hydraulic controls given sparse and uncertain satellite observations of a river. The identifiability of river discharge, alone and together with roughness, is tested for several spatio-temporal patterns of river observations, including SWOT-like observations. A new 1D shallow water model with variational data assimilation, within the DassFlow chain, is presented, along with postprocessing tools and an observation operator dedicated to the future SWOT data and the SWOT simulator. To decrease the dimensionality of the inverse problem, discharge is represented in a reduced basis. Moreover, we introduce an original reduced parametrization of the flow resistance that can account for various flow regimes, along with a cross-section design dedicated to remote sensing. We show which discharge temporal frequencies can be identified with respect to the observation frequencies, and at which accuracy. Eventually, the important question of the discharge identifiability potential between observation times, depending on the spatio-temporal sampling, is addressed with respect to the wavelengths of the hydrological signals.

  10. Susceptibility to keel bone fractures in laying hens and the role of genetic variation.

    PubMed

    Candelotto, Laura; Stratmann, Ariane; Gebhardt-Henrich, Sabine G; Rufener, Christina; van de Braak, Teun; Toscano, Michael J

    2017-10-01

    Keel bone fractures are a well-known welfare problem in modern commercial laying hen systems. The present study sought to identify genetic variation in relation to the keel bone fracture susceptibility of 4 distinct crossbred lines and one pure line and, by extension, possible breeding traits. Susceptibility to fractures was assessed using an ex vivo impact testing protocol in combination with a study design that minimized environmental variation to focus on genetic differences. The 5 crossbred/pure lines differed in their susceptibility to keel bone fractures, with the greatest likelihood of fracture in one of the 3 commercial lines and the lowest susceptibility in one of the experimental lines. Egg production at the hen level did not differ between the crossbred/pure lines (P > 0.05), though an increased susceptibility to keel bone fractures was associated with thinner eggshells and reduced egg breaking strength, a pattern consistent among all tested crossbred/pure lines. Our findings suggest an association between egg quality and bone strength which appeared to be independent of crossbred/pure line. The findings indicate the benefit of the impact methodology for identifying potential breeding characteristics to reduce the incidence of keel fracture, as well as a potential relationship with eggshell quality. © 2017 Poultry Science Association Inc.

  11. Distance majorization and its applications.

    PubMed

    Chi, Eric C; Zhou, Hua; Lange, Kenneth

    2014-08-01

    The problem of minimizing a continuously differentiable convex function over an intersection of closed convex sets is ubiquitous in applied mathematics. It is particularly interesting when it is easy to project onto each separate set, but nontrivial to project onto their intersection. Algorithms based on Newton's method such as the interior point method are viable for small to medium-scale problems. However, modern applications in statistics, engineering, and machine learning are posing problems with potentially tens of thousands of parameters or more. We revisit this convex programming problem and propose an algorithm that scales well with dimensionality. Our proposal is an instance of a sequential unconstrained minimization technique and revolves around three ideas: the majorization-minimization principle, the classical penalty method for constrained optimization, and quasi-Newton acceleration of fixed-point algorithms. The performance of our distance majorization algorithms is illustrated in several applications.
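
    A compact sketch of the majorization step on assumed toy sets: each iterate minimizes the objective plus squared distances to the easy projections of the previous iterate, a closed-form update here.

    ```python
    # Minimize f(x) = 0.5*||x - z||^2 over the intersection of a halfspace
    # and a ball via distance majorization with easy per-set projections.
    import numpy as np

    z = np.array([3.0, 2.0])
    proj_half = lambda x: x if x[0] + x[1] <= 1 else x - (x[0] + x[1] - 1) / 2.0
    proj_ball = lambda x: x if np.linalg.norm(x) <= 2 else 2 * x / np.linalg.norm(x)
    projections = [proj_half, proj_ball]

    rho, x = 10.0, z.copy()
    for _ in range(500):
        # majorizer f(x) + (rho/2) * sum_i ||x - P_i(x_k)||^2 is quadratic,
        # so its minimizer is available in closed form
        anchors = [P(x) for P in projections]
        x = (z + rho * sum(anchors)) / (1.0 + rho * len(projections))
    print(x, x[0] + x[1], np.linalg.norm(x))   # near-feasible as rho grows
    ```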

  12. Cognitive radio adaptation for power consumption minimization using biogeography-based optimization

    NASA Astrophysics Data System (ADS)

    Qi, Pei-Han; Zheng, Shi-Lian; Yang, Xiao-Niu; Zhao, Zhi-Jin

    2016-12-01

    Adaptation is one of the key capabilities of cognitive radio, which focuses on how to adjust the radio parameters to optimize the system performance based on the knowledge of the radio environment and its capability and characteristics. In this paper, we consider the cognitive radio adaptation problem for power consumption minimization. The problem is formulated as a constrained power consumption minimization problem, and the biogeography-based optimization (BBO) is introduced to solve this optimization problem. A novel habitat suitability index (HSI) evaluation mechanism is proposed, in which both the power consumption minimization objective and the quality of services (QoS) constraints are taken into account. The results show that under different QoS requirement settings corresponding to different types of services, the algorithm can minimize power consumption while still maintaining the QoS requirements. Comparison with particle swarm optimization (PSO) and cat swarm optimization (CSO) reveals that BBO works better, especially at the early stage of the search, which means that the BBO is a better choice for real-time applications. Project supported by the National Natural Science Foundation of China (Grant No. 61501356), the Fundamental Research Funds of the Ministry of Education, China (Grant No. JB160101), and the Postdoctoral Fund of Shaanxi Province, China.

  13. Assimilating concentration observations for transport and dispersion modeling in a meandering wind field

    NASA Astrophysics Data System (ADS)

    Haupt, Sue Ellen; Beyer-Lout, Anke; Long, Kerrie J.; Young, George S.

    Assimilating concentration data into an atmospheric transport and dispersion model can provide information to improve downwind concentration forecasts. The forecast model is typically a one-way coupled set of equations: the meteorological equations impact the concentration, but the concentration does not generally affect the meteorological field. Thus, indirect methods of using concentration data to influence the meteorological variables are required. The problem studied here involves a simple wind field forcing Gaussian dispersion. Two methods of assimilating concentration data to infer the wind direction are demonstrated. The first method is Lagrangian in nature and treats the puff as an entity using feature extraction coupled with nudging. The second method is an Eulerian field approach akin to traditional variational approaches, but minimizes the error by using a genetic algorithm (GA) to directly optimize the match between observations and predictions. Both methods show success at inferring the wind field. The GA-variational method, however, is more accurate but requires more computational time. Dynamic assimilation of a continuous release modeled by a Gaussian plume is also demonstrated using the genetic algorithm approach.

  14. Statistical Considerations Concerning Dissimilar Regulatory Requirements for Dissolution Similarity Assessment. The Example of Immediate-Release Dosage Forms.

    PubMed

    Jasińska-Stroschein, Magdalena; Kurczewska, Urszula; Orszulak-Michalak, Daria

    2017-05-01

    When performing in vitro dissolution testing, especially in the area of biowaivers, it is necessary to follow regulatory guidelines to minimize the risk of an unsafe or ineffective product being approved. The present study examines model-independent and model-dependent methods of comparing dissolution profiles, comparing and contrasting various international guidelines. Dissolution profiles for immediate-release solid oral dosage forms were generated. The test material comprised tablets containing several substances, with at least 85% of the labeled amount dissolved within 15 min, 20-30 min, or 45 min. Dissolution profile similarity can vary with regard to the following criteria: time point selection (including the last time point), coefficient of variation, and statistical method selection. Variation between regulatory guidance and statistical methods can raise methodological questions and can potentially lead to different outcomes when reporting dissolution profile testing. The harmonization of existing guidelines would address existing problems concerning the interpretation of regulatory recommendations and research findings. Copyright © 2017 American Pharmacists Association®. Published by Elsevier Inc. All rights reserved.

  15. Quasi-static ensemble variational data assimilation: a theoretical and numerical study with the iterative ensemble Kalman smoother

    NASA Astrophysics Data System (ADS)

    Fillion, Anthony; Bocquet, Marc; Gratton, Serge

    2018-04-01

    The analysis in nonlinear variational data assimilation is the solution of a non-quadratic minimization. Thus, the analysis efficiency relies on its ability to locate a global minimum of the cost function. If this minimization uses a Gauss-Newton (GN) method, it is critical for the starting point to be in the attraction basin of a global minimum. Otherwise the method may converge to a local extremum, which degrades the analysis. With chaotic models, the number of local extrema often increases with the temporal extent of the data assimilation window, making the former condition harder to satisfy. This is unfortunate because the assimilation performance also increases with this temporal extent. However, a quasi-static (QS) minimization may overcome these local extrema. It accomplishes this by gradually injecting the observations in the cost function. This method was introduced by Pires et al. (1996) in a 4D-Var context. We generalize this approach to four-dimensional strong-constraint nonlinear ensemble variational (EnVar) methods, which are based on both a nonlinear variational analysis and the propagation of dynamical error statistics via an ensemble. This forces one to consider the cost function minimizations in the broader context of cycled data assimilation algorithms. We adapt this QS approach to the iterative ensemble Kalman smoother (IEnKS), an exemplar of nonlinear deterministic four-dimensional EnVar methods. Using low-order models, we quantify the positive impact of the QS approach on the IEnKS, especially for long data assimilation windows. We also examine the computational cost of QS implementations and suggest cheaper algorithms.
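
    The QS idea can be conveyed in a few lines: solve a sequence of minimizations with progressively more observations, warm-starting each solve from the previous minimizer. The sketch below uses a multimodal frequency-estimation problem as a stand-in for the chaotic 4D-Var/IEnKS setting of the paper.

```python
import numpy as np
from scipy.optimize import least_squares

# Toy quasi-static (QS) minimization. With a long window the cost in the
# frequency parameter below is highly multimodal, so observations are
# injected gradually by extending the window, warm-starting each
# Gauss-Newton-type solve from the previous minimizer.
t = np.linspace(0.0, 20.0, 400)
rng = np.random.default_rng(2)
obs = np.sin(2.7 * t) + rng.normal(0.0, 0.05, t.size)

def residuals(p, n):
    return np.sin(p[0] * t[:n]) - obs[:n]

p = np.array([1.0])                     # poor first guess for omega = 2.7
for n in range(20, t.size + 1, 20):     # grow the assimilation window
    p = least_squares(residuals, p, args=(n,), method="lm").x
print(p)                                # ends up near 2.7
```

    Minimizing directly over the full window from the same first guess typically lands in a secondary minimum; the gradual injection keeps the iterate in the basin of the global minimum.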

  16. Early group bias in the Faroe Islands: Cultural variation in children's group-based reasoning.

    PubMed

    Schug, Mariah G; Shusterman, Anna; Barth, Hilary; Patalano, Andrea L

    2016-01-01

    Recent developmental research demonstrates that group bias emerges early in childhood. However, little is known about the extent to which bias in minimal (i.e., arbitrarily assigned) groups varies with children's environment and experience, and whether such bias is universal across cultures. In this study, the development of group bias was investigated using a minimal groups paradigm with 46 four- to six-year-olds from the Faroe Islands. Children observed in-group and out-group members exhibiting varying degrees of prosocial behaviour (egalitarian or stingy sharing). Children did not prefer their in-group in the pretest, but a pro-in-group and anti-out-group sentiment emerged in both conditions in the posttest. Faroese children's response patterns differ from those of American children [Schug, M. G., Shusterman, A., Barth, H., & Patalano, A. L. (2013). Minimal-group membership influences children's responses to novel experience with group members. Developmental Science, 16(1), 47-55], suggesting that intergroup bias shows cultural variation even in a minimal groups context.

  17. Number Partitioning via Quantum Adiabatic Computation

    NASA Technical Reports Server (NTRS)

    Smelyanskiy, Vadim N.; Toussaint, Udo

    2002-01-01

    We study both analytically and numerically the complexity of the adiabatic quantum evolution algorithm applied to random instances of combinatorial optimization problems. We use as an example the NP-complete set partition problem and obtain an asymptotic expression for the minimal gap separating the ground and excited states of a system during the execution of the algorithm. We show that for computationally hard problem instances the size of the minimal gap scales exponentially with the problem size. This result is in qualitative agreement with direct numerical simulation of the algorithm for small instances of the set partition problem. We describe the statistical properties of the optimization problem that are responsible for the exponential behavior of the algorithm.

  18. Exact recovery of sparse multiple measurement vectors by [Formula: see text]-minimization.

    PubMed

    Wang, Changlong; Peng, Jigen

    2018-01-01

    The joint sparse recovery problem is a generalization of the single measurement vector problem widely studied in compressed sensing. It aims to recover a set of jointly sparse vectors, i.e., those that have nonzero entries concentrated at a common location. [Formula: see text]-minimization subject to matrices, i.e., [Formula: see text]-minimization [Formula: see text], is widely used in algorithms designed for this problem. The main contributions of this paper are two theoretical results about this technique. The first proves that for every multiple system of linear equations there exists a constant [Formula: see text] such that the original unique sparse solution can also be recovered from a minimization in the [Formula: see text] quasi-norm subject to matrices whenever [Formula: see text]. The second gives an analytic expression for such [Formula: see text]. Finally, we present one example to confirm the validity of our conclusions, and we use numerical experiments to show that our results increase the efficiency of algorithms designed for [Formula: see text]-minimization.

  19. Numerical Optimization Using Computer Experiments

    NASA Technical Reports Server (NTRS)

    Trosset, Michael W.; Torczon, Virginia

    1997-01-01

    Engineering design optimization often gives rise to problems in which expensive objective functions are minimized by derivative-free methods. We propose a method for solving such problems that synthesizes ideas from the numerical optimization and computer experiment literatures. Our approach relies on kriging known function values to construct a sequence of surrogate models of the objective function that are used to guide a grid search for a minimizer. Results from numerical experiments on a standard test problem are presented.
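
    A bare-bones version of the surrogate-guided loop, assuming a Gaussian-process (kriging) surrogate from scikit-learn and a pure exploitation rule on a fixed grid; the test function and kernel are illustrative, and the paper's pattern/grid search safeguards are omitted.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

# Surrogate-guided search sketch: krige the known function values, then
# let the surrogate's grid minimizer propose the next expensive evaluation.

def expensive(x):                       # stand-in for an expensive objective
    return (x - 0.3) ** 2 + 0.1 * np.sin(20 * x)

X = np.array([[0.0], [0.5], [1.0]])     # initial design
y = expensive(X).ravel()
grid = np.linspace(0, 1, 201).reshape(-1, 1)

for _ in range(10):
    gp = GaussianProcessRegressor(kernel=RBF(0.2), alpha=1e-6,
                                  normalize_y=True).fit(X, y)
    x_next = grid[np.argmin(gp.predict(grid))].reshape(1, 1)
    X = np.vstack([X, x_next])
    y = np.append(y, expensive(x_next).ravel())

print(X[np.argmin(y)], y.min())         # best point found so far
```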

  20. Scheduling with non-decreasing deterioration jobs and variable maintenance activities on a single machine

    NASA Astrophysics Data System (ADS)

    Zhang, Xingong; Yin, Yunqiang; Wu, Chin-Chia

    2017-01-01

    In many manufacturing systems, such as steel rolling mills, fire fighting, or single-server cycle-queues, a job that is processed later consumes more time than the same job processed earlier. Machine maintenance can counteract this worsening of processing conditions: after a maintenance activity, the machine is restored. The maintenance duration is a positive, non-decreasing, differentiable convex function of the total processing time of the jobs between maintenance activities. Motivated by this observation, the makespan and total completion time minimization problems in the scheduling of jobs with non-decreasing rates of job processing time on a single machine are considered in this article. It is shown that both the makespan and the total completion time minimization problems are NP-hard in the strong sense when the number of maintenance activities is arbitrary, while the makespan minimization problem is NP-hard in the ordinary sense when the number of maintenance activities is fixed. If the deterioration rates of the jobs are identical and the maintenance duration is a linear function of the total processing time of the jobs between maintenance activities, then the group balance principle is satisfied for the makespan minimization problem. Furthermore, two polynomial-time algorithms are presented for solving the makespan problem and the total completion time problem under identical deterioration rates, respectively.

  1. Design optimization of transmitting antennas for weakly coupled magnetic induction communication systems

    PubMed Central

    2017-01-01

    This work focuses on the design of transmitting coils in weakly coupled magnetic induction communication systems. We propose several optimization methods that reduce the active, reactive and apparent power consumption of the coil. These problems are formulated as minimization problems, in which the power consumed by the transmitting coil is minimized, under the constraint of providing a required magnetic field at the receiver location. We develop efficient numeric and analytic methods to solve the resulting problems, which are of high dimension, and in certain cases non-convex. For the objective of minimal reactive power an analytic solution for the optimal current distribution in flat disc transmitting coils is provided. This problem is extended to general three-dimensional coils, for which we develop an expression for the optimal current distribution. Considering the objective of minimal apparent power, a method is developed to reduce the computational complexity of the problem by transforming it to an equivalent problem of lower dimension, allowing a quick and accurate numeric solution. These results are verified experimentally by testing a number of coil geometries. The results obtained allow reduced power consumption and increased performances in magnetic induction communication systems. Specifically, for wideband systems, an optimal design of the transmitter coil reduces the peak instantaneous power provided by the transmitter circuitry, and thus reduces its size, complexity and cost. PMID:28192463
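
    The reactive-power objective with a linear field constraint has the structure of a classical equality-constrained quadratic program; a generic form (our notation, not the paper's) and its Lagrange solution read:

```latex
% Generic equality-constrained quadratic program for the coil currents I
% (notation ours): minimize a reactive-power-like quadratic form subject
% to producing the required field component at the receiver.
\min_{I}\; I^{\mathsf{T}} L\, I
\quad \text{s.t.} \quad a^{\mathsf{T}} I = b_{\mathrm{req}},
\qquad\Longrightarrow\qquad
I^{\star} \;=\; \frac{b_{\mathrm{req}}}{a^{\mathsf{T}} L^{-1} a}\, L^{-1} a .
```

    Here L is a positive definite inductance-like matrix of the current elements and a maps the currents to the field component required at the receiver.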

  2. Detection of ɛ-ergodicity breaking in experimental data—A study of the dynamical functional sensibility

    NASA Astrophysics Data System (ADS)

    Loch-Olszewska, Hanna; Szwabiński, Janusz

    2018-05-01

    The ergodicity breaking phenomenon has long attracted the interest of scientists seeking to uncover its biological and chemical origins. Unfortunately, testing ergodicity in real-life data can be challenging, as sample paths are often too short for approximating their asymptotic behaviour. In this paper, the authors analyze the minimal lengths of empirical trajectories needed for claiming ɛ-ergodicity, based on two commonly used variants of an autoregressive fractionally integrated moving average model. The dependence of the dynamical functional on the parameters of the process is studied. The problem of choosing a proper ɛ for ɛ-ergodicity testing is discussed, with particular attention to the variation of the innovation process and the length of the data sample, and is illustrated on two real-life examples.

  3. Detection of ε-ergodicity breaking in experimental data-A study of the dynamical functional sensibility.

    PubMed

    Loch-Olszewska, Hanna; Szwabiński, Janusz

    2018-05-28

    The ergodicity breaking phenomenon has long attracted the interest of scientists seeking to uncover its biological and chemical origins. Unfortunately, testing ergodicity in real-life data can be challenging, as sample paths are often too short for approximating their asymptotic behaviour. In this paper, the authors analyze the minimal lengths of empirical trajectories needed for claiming ε-ergodicity, based on two commonly used variants of an autoregressive fractionally integrated moving average model. The dependence of the dynamical functional on the parameters of the process is studied. The problem of choosing a proper ε for ε-ergodicity testing is discussed, with particular attention to the variation of the innovation process and the length of the data sample, and is illustrated on two real-life examples.

  4. A free boundary approach to the Rosensweig instability of ferrofluids

    NASA Astrophysics Data System (ADS)

    Parini, Enea; Stylianou, Athanasios

    2018-04-01

    We establish the existence of saddle points for a free boundary problem describing the two-dimensional free surface of a ferrofluid undergoing normal field instability. The starting point is the ferrohydrostatic equations for the magnetic potentials in the ferrofluid and air, and the function describing their interface. These constitute the strong form for the Euler-Lagrange equations of a convex-concave functional, which we extend to include interfaces that are not necessarily graphs of functions. Saddle points are then found by iterating the direct method of the calculus of variations and applying classical results of convex analysis. For the existence part, we assume a general nonlinear magnetization law; for a linear law, we also show, via convex duality, that the saddle point is a constrained minimizer of the relevant energy functional.

  5. WE-G-207-04: Non-Local Total-Variation (NLTV) Combined with Reweighted L1-Norm for Compressed Sensing Based CT Reconstruction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, H; Chen, J; Pouliot, J

    2015-06-15

    Purpose: Compressed sensing (CS) has been used for CT (4DCT/CBCT) reconstruction from few projections to reduce the radiation dose. Total-variation (TV) in L1-minimization (min.) with local information is the prevalent technique in CS, but it can be prone to noise. To address the problem, this work proposes to apply a new image processing technique, called non-local TV (NLTV), to CS based CT reconstruction, and to incorporate a reweighted L1-norm into it for more precise reconstruction. Methods: TV minimizes intensity variations by considering two local neighboring voxels, which can be prone to noise, possibly damaging the reconstructed CT image. NLTV, by contrast, utilizes more global information by computing a weight function of the current voxel relative to a surrounding search area. In practice, it can be challenging to obtain an optimal solution due to the difficulty of defining the weight function with appropriate parameters. Introducing reweighted L1-min., designed as an approximation to the ideal L0-min., can reduce the dependence on the definition of the weight function, thereby improving the accuracy of the solution. This work implemented the NLTV combined with reweighted L1-min. by the Split Bregman iterative method. For evaluation, a noisy digital phantom and pelvic CT images are employed to compare the quality of images reconstructed by TV, NLTV and reweighted NLTV. Results: In both cases, conventional and reweighted NLTV outperform TV min. in signal-to-noise ratio (SNR) and root-mean-squared error of the reconstructed images. Relative to conventional NLTV, NLTV with the reweighted L1-norm slightly improves SNR, while greatly increasing the contrast between tissues due to the additional iterative reweighting process. Conclusion: NLTV min. can provide more precise compressed sensing based CT image reconstruction by incorporating the reweighted L1-norm, while maintaining greater robustness to noise than TV min.

  6. NEWSUMT: A FORTRAN program for inequality constrained function minimization, users guide

    NASA Technical Reports Server (NTRS)

    Miura, H.; Schmit, L. A., Jr.

    1979-01-01

    A computer program written in FORTRAN subroutine form for the solution of linear and nonlinear, constrained and unconstrained function minimization problems is presented. The algorithm is a sequence-of-unconstrained-minimizations technique that uses Newton's method for the unconstrained minimizations. The use of NEWSUMT and the definition of all parameters are described.

  7. Infrared variation reduction by simultaneous background suppression and target contrast enhancement for deep convolutional neural network-based automatic target recognition

    NASA Astrophysics Data System (ADS)

    Kim, Sungho

    2017-06-01

    Automatic target recognition (ATR) is a traditionally challenging problem in military applications because of the wide range of infrared (IR) image variations and the limited number of training images. IR variations are caused by various three-dimensional target poses, noncooperative weather conditions (fog and rain), and difficult target acquisition environments. Recently, deep convolutional neural network-based approaches for RGB images (RGB-CNN) showed breakthrough performance in computer vision problems, such as object detection and classification. Directly applying RGB-CNN to the IR ATR problem fails because of the IR database problems (limited database size and IR image variations). An IR variation-reduced deep CNN (IVR-CNN) is presented to cope with these problems. The problem of limited IR database size is solved by a commercial thermal simulator (OKTAL-SE). The second problem of IR variations is mitigated by the proposed shifted ramp function-based intensity transformation, which can suppress the background and enhance the target contrast simultaneously. The experimental results on the synthesized IR images generated by the thermal simulator (OKTAL-SE) validated the feasibility of IVR-CNN for military ATR applications.
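
    The exact transformation is not given in the abstract; the sketch below is only a plausible guess at the shape of a shifted-ramp intensity mapping, with entirely hypothetical parameters, to illustrate the simultaneous background suppression and contrast enhancement it describes.

```python
import numpy as np

# Hypothetical shifted-ramp intensity transformation: intensities below a
# shift threshold t0 (background) are clamped to 0, a linear ramp covers
# the band [t0, t1] (target contrast is stretched), and everything above
# t1 saturates at 1. The function and parameters are our guess, not the
# IVR-CNN paper's definition.
def shifted_ramp(img, t0=0.35, t1=0.7):
    return np.clip((img - t0) / (t1 - t0), 0.0, 1.0)

img = np.random.default_rng(3).random((8, 8))
print(shifted_ramp(img))
```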

  8. RNAslider: a faster engine for consecutive windows folding and its application to the analysis of genomic folding asymmetry.

    PubMed

    Horesh, Yair; Wexler, Ydo; Lebenthal, Ilana; Ziv-Ukelson, Michal; Unger, Ron

    2009-03-04

    Scanning large genomes with a sliding window in search of locally stable RNA structures is a well-motivated problem in bioinformatics. Given a predefined window size L and an RNA sequence S of size N (L < N), the consecutive windows folding problem is to compute the minimal free energy (MFE) for the folding of each of the L-sized substrings of S. The consecutive windows folding problem can be naively solved in O(NL^3) time by applying any of the classical cubic-time RNA folding algorithms to each of the N-L windows of size L. Recently an O(NL^2) solution for this problem has been described. Here, we describe and implement an O(NL*psi(L)) engine for the consecutive windows folding problem, where psi(L) is shown to converge to O(1) under the assumption of a standard probabilistic polymer folding model, yielding an O(L) speedup which is experimentally confirmed. Using this tool, we note an intriguing directionality (5'-3' vs. 3'-5') folding bias, i.e. that the minimal free energy (MFE) of folding is higher in the native direction of the DNA than in the reverse direction of various genomic regions in several organisms, including regions of the genomes that do not encode proteins or ncRNA. This bias largely emerges from the genomic dinucleotide bias which affects the MFE, however we see some variations in the folding bias in the different genomic regions when normalized to the dinucleotide bias. We also present results from calculating the MFE landscape of mouse chromosome 1, characterizing the MFE of the long ncRNA molecules that reside in this chromosome. The efficient consecutive windows folding engine described in this paper allows for genome-wide scans for ncRNA molecules as well as large-scale statistics. This is implemented here as a software tool, called RNAslider, and applied to the scanning of long chromosomes, leading to the observation of features that are visible only on a large scale.

  9. Associations between social vulnerabilities and psychosocial problems in European children. Results from the IDEFICS study.

    PubMed

    Iguacel, Isabel; Michels, Nathalie; Fernández-Alvira, Juan M; Bammann, Karin; De Henauw, Stefaan; Felső, Regina; Gwozdz, Wencke; Hunsberger, Monica; Reisch, Lucia; Russo, Paola; Tornaritis, Michael; Thumann, Barbara Franziska; Veidebaum, Toomas; Börnhorst, Claudia; Moreno, Luis A

    2017-09-01

    The effect of socioeconomic inequalities on children's mental health remains unclear. This study aims to explore the cross-sectional and longitudinal associations between social vulnerabilities and psychosocial problems, and the association between accumulation of vulnerabilities and psychosocial problems. 5987 children aged 2-9 years from eight European countries were assessed at baseline and 2-year follow-up. Two different instruments were employed to assess children's psychosocial problems: the KINDL (Questionnaire for Measuring Health-Related Quality of Life in Children and Adolescents) was used to evaluate children's well-being and the Strengths and Difficulties Questionnaire (SDQ) was used to evaluate children's internalising problems. Vulnerable groups were defined as follows: children whose parents had minimal social networks, children from non-traditional families, children of migrant origin or children with unemployed parents. Logistic mixed-effects models were used to assess the associations between social vulnerabilities and psychosocial problems. After adjusting for classical socioeconomic and lifestyle indicators, children whose parents had minimal social networks were at greater risk of presenting internalising problems at baseline and follow-up (OR 1.53, 99% CI 1.11-2.11). The highest risk for psychosocial problems was found in children whose status changed from traditional families at T0 to non-traditional families at T1 (OR 1.60, 99% CI 1.07-2.39) and whose parents had minimal social networks at both time points (OR 1.97, 99% CI 1.26-3.08). Children with one or more vulnerabilities accumulated were at a higher risk of developing psychosocial problems at baseline and follow-up. Therefore, policy makers should implement measures to strengthen the social support for parents with a minimal social network.

  10. Matrix Interdiction Problem

    NASA Astrophysics Data System (ADS)

    Kasiviswanathan, Shiva Prasad; Pan, Feng

    In the matrix interdiction problem, a real-valued matrix and an integer k are given. The objective is to remove a set of k matrix columns that minimizes the sum of the row values in the residual matrix, where the value of a row is defined to be the largest entry in that row. This combinatorial problem is closely related to the bipartite network interdiction problem, which can be applied to minimize the probability that an adversary can successfully smuggle weapons. After introducing the matrix interdiction problem, we study its computational complexity. We show that the matrix interdiction problem is NP-hard and that there exists a constant γ such that it is even NP-hard to approximate this problem within an n^γ additive factor. We also present an algorithm for this problem that achieves an (n - k) multiplicative approximation ratio.
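
    To make the objective concrete, the sketch below evaluates the residual value exactly, solves small instances by enumeration, and adds a simple greedy heuristic; the greedy rule is ours for illustration and is not the paper's approximation algorithm.

```python
import numpy as np
from itertools import combinations

# Matrix interdiction objective: after removing a set S of k columns, the
# residual value is the sum over rows of the largest remaining entry.
def residual_value(M, removed):
    keep = [j for j in range(M.shape[1]) if j not in removed]
    return M[:, keep].max(axis=1).sum()

# Exact solution by enumeration (exponential in k; the problem is NP-hard).
def interdict_exact(M, k):
    return min(combinations(range(M.shape[1]), k),
               key=lambda S: residual_value(M, set(S)))

# Greedy heuristic: repeatedly drop the column whose removal lowers the
# objective most (illustrative only).
def interdict_greedy(M, k):
    removed = set()
    for _ in range(k):
        best = min((j for j in range(M.shape[1]) if j not in removed),
                   key=lambda j: residual_value(M, removed | {j}))
        removed.add(best)
    return removed

M = np.random.default_rng(4).integers(0, 10, size=(5, 6)).astype(float)
print(interdict_exact(M, 2), interdict_greedy(M, 2))
```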

  11. Correction of Bowtie-Filter Normalization and Crescent Artifacts for a Clinical CBCT System.

    PubMed

    Zhang, Hong; Kong, Vic; Huang, Ke; Jin, Jian-Yue

    2017-02-01

    To present our experiences in understanding and minimizing bowtie-filter crescent artifacts and bowtie-filter normalization artifacts in a clinical cone beam computed tomography (CBCT) system. Bowtie-filter position and profile variations during gantry rotation were studied. Two previously proposed strategies (A and B) were applied to the clinical CBCT system to correct bowtie-filter crescent artifacts. Physical calibration and analytical approaches were used to minimize the norm-phantom misalignment and to correct for bowtie-filter normalization artifacts. A combined procedure to reduce bowtie-filter crescent artifacts and bowtie-filter normalization artifacts was proposed, tested on a norm phantom, a CatPhan, and a patient, and evaluated using the standard deviation of the Hounsfield unit (HU) values along a sampling line. The bowtie filter exhibited not only a translational shift but also an amplitude variation in its projection profile during gantry rotation. Strategy B was slightly better than strategy A at minimizing bowtie-filter crescent artifacts, possibly because it corrected the amplitude variation, suggesting that the amplitude variation plays a role in bowtie-filter crescent artifacts. The physical calibration largely reduced the misalignment-induced bowtie-filter normalization artifacts, and the analytical approach reduced them further. The combined procedure minimized both artifact types, with the HU standard deviation being 63.2, 45.0, 35.0, and 18.8 HU for no correction, correction of crescent artifacts only, correction of normalization artifacts only, and correction of both, respectively. The combined procedure also reduced both artifact types in a CatPhan and a patient. We have developed a step-by-step procedure that can be used directly in clinical CBCT systems to minimize both bowtie-filter crescent artifacts and bowtie-filter normalization artifacts.

  12. On the nullspace of TLS multi-station adjustment

    NASA Astrophysics Data System (ADS)

    Sterle, Oskar; Kogoj, Dušan; Stopar, Bojan; Kregar, Klemen

    2018-07-01

    In this article we present an analytic treatment of TLS multi-station least-squares adjustment with the main focus on the datum problem. In contrast to previously published research, the datum problem is theoretically analyzed and solved, with the solution based on the nullspace derivation of the mathematical model. The importance of the datum problem solution lies in a complete description of TLS multi-station adjustment solutions from the set of all minimally constrained least-squares solutions. On the basis of the known nullspace, the estimable parameters are described and the geometric interpretation of all minimally constrained least-squares solutions is presented. Finally, a simulated example is used to analyze the results of TLS multi-station minimally constrained and inner-constrained least-squares adjustment solutions.

  13. Method of grid generation

    DOEpatents

    Barnette, Daniel W.

    2002-01-01

    The present invention provides a method of grid generation that uses the geometry of the problem space and the governing relations to generate a grid. The method can generate a grid with minimized discretization errors, and with minimal user interaction. The method of the present invention comprises assigning grid cell locations so that, when the governing relations are discretized using the grid, at least some of the discretization errors are substantially zero. Conventional grid generation is driven by the problem space geometry; grid generation according to the present invention is driven by problem space geometry and by governing relations. The present invention accordingly can provide two significant benefits: more efficient and accurate modeling since discretization errors are minimized, and reduced cost grid generation since less human interaction is required.

  14. Minimum Bayes risk image correlation

    NASA Technical Reports Server (NTRS)

    Minter, T. C., Jr.

    1980-01-01

    In this paper, the problem of designing a matched filter for image correlation will be treated as a statistical pattern recognition problem. It is shown that, by minimizing a suitable criterion, a matched filter can be estimated which approximates the optimum Bayes discriminant function in a least-squares sense. It is well known that the use of the Bayes discriminant function in target classification minimizes the Bayes risk, which in turn directly minimizes the probability of a false fix. A fast Fourier implementation of the minimum Bayes risk correlation procedure is described.
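
    A compact sketch of the least-squares matched filter idea with an FFT correlation stage; the synthetic target, patch size, and +/-1 labeling are illustrative assumptions, not the paper's procedure.

```python
import numpy as np
from numpy.fft import fft2, ifft2

# Least-squares matched filter sketch: fit a linear filter h so that
# patch.h is ~ +1 on target patches and ~ -1 on background patches (a
# least-squares approximation to the Bayes discriminant), then apply it
# to a full image by FFT correlation.
rng = np.random.default_rng(6)
target = np.zeros((8, 8))
target[2:6, 2:6] = 1.0

X = np.array([(target + rng.normal(0, 0.2, (8, 8))).ravel() for _ in range(50)]
             + [rng.normal(0, 0.2, 64) for _ in range(50)])
y = np.array([1.0] * 50 + [-1.0] * 50)
h, *_ = np.linalg.lstsq(X, y, rcond=None)

image = rng.normal(0, 0.2, (64, 64))
image[20:28, 30:38] += target
H = np.zeros_like(image)
H[:8, :8] = h.reshape(8, 8)
corr = np.real(ifft2(fft2(image) * np.conj(fft2(H))))  # FFT correlation
print(np.unravel_index(np.argmax(corr), corr.shape))   # peak near (20, 30)
```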

  15. Distance majorization and its applications

    PubMed Central

    Chi, Eric C.; Zhou, Hua; Lange, Kenneth

    2014-01-01

    The problem of minimizing a continuously differentiable convex function over an intersection of closed convex sets is ubiquitous in applied mathematics. It is particularly interesting when it is easy to project onto each separate set, but nontrivial to project onto their intersection. Algorithms based on Newton’s method such as the interior point method are viable for small to medium-scale problems. However, modern applications in statistics, engineering, and machine learning are posing problems with potentially tens of thousands of parameters or more. We revisit this convex programming problem and propose an algorithm that scales well with dimensionality. Our proposal is an instance of a sequential unconstrained minimization technique and revolves around three ideas: the majorization-minimization principle, the classical penalty method for constrained optimization, and quasi-Newton acceleration of fixed-point algorithms. The performance of our distance majorization algorithms is illustrated in several applications. PMID:25392563

  16. Moving object detection via low-rank total variation regularization

    NASA Astrophysics Data System (ADS)

    Wang, Pengcheng; Chen, Qian; Shao, Na

    2016-09-01

    Moving object detection is a challenging task in video surveillance. The recently proposed Robust Principal Component Analysis (RPCA) can recover outlier patterns from low-rank data under some mild conditions. However, the ℓ1-penalty in RPCA does not work well in moving object detection because the irrepresentable condition is often not satisfied. In this paper, a method based on a total variation (TV) regularization scheme is proposed. In our model, image sequences captured with a static camera are highly correlated and can be described by a low-rank matrix. Meanwhile, the low-rank matrix can absorb background motion, e.g. periodic and random perturbations. The foreground objects in the sequence are usually sparsely distributed and drift continuously, and can be treated as group outliers from the highly correlated background scenes. Instead of the ℓ1-penalty, we exploit the total variation of the foreground. By minimizing the total variation energy, the outliers tend to collapse and finally converge to the exact moving objects. The TV-penalty is superior to the ℓ1-penalty especially when the outliers are in the majority for some pixels, and our method can estimate the outliers explicitly with less bias but higher variance. To solve the problem, a joint optimization function is formulated and effectively solved through the inexact Augmented Lagrange Multiplier (ALM) method. We evaluate our method along with several state-of-the-art approaches in MATLAB. Both qualitative and quantitative results demonstrate that our proposed method works effectively on a large range of complex scenarios.

  17. Noether's Theorem and its Inverse of Birkhoffian System in Event Space Based on Herglotz Variational Problem

    NASA Astrophysics Data System (ADS)

    Tian, X.; Zhang, Y.

    2018-03-01

    The Herglotz variational principle, in which the functional is defined by a differential equation, generalizes the classical principles that define the functional by an integral. The principle gives a variational description of nonconservative systems even when the Lagrangian is independent of time. This paper studies Noether's theorem and its inverse for a Birkhoffian system in event space based on the Herglotz variational problem. Firstly, according to the Herglotz variational principle of a Birkhoffian system, the principle of a Birkhoffian system in event space is established. Secondly, its parametric equations and two basic formulae for the variation of the Pfaff-Herglotz action of a Birkhoffian system in event space are obtained. Furthermore, the definition and criteria of Noether symmetry of the Birkhoffian system in event space based on the Herglotz variational problem are given. Then, according to the relationship between Noether symmetry and conserved quantities, Noether's theorem is derived. Under classical conditions, Noether's theorem of a Birkhoffian system in event space based on the Herglotz variational problem reduces to its classical counterpart. In addition, Noether's inverse theorem of the Birkhoffian system in event space based on the Herglotz variational problem is also obtained. At the end of the paper, an example is given to illustrate the application of the results.
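
    For readers unfamiliar with the construction, the Herglotz problem defines the functional through an ordinary differential equation rather than an integral; in its standard form (our notation):

```latex
% Herglotz variational problem (standard form): the functional value is
% z(t_1), where z solves the ODE below; extremals satisfy the
% generalized Euler--Lagrange equation.
\dot{z}(t) = L\big(t, x(t), \dot{x}(t), z(t)\big), \qquad z(t_0) = z_0 ,
\qquad
\frac{\partial L}{\partial x}
  - \frac{\mathrm{d}}{\mathrm{d}t} \frac{\partial L}{\partial \dot{x}}
  + \frac{\partial L}{\partial z}\, \frac{\partial L}{\partial \dot{x}} = 0 .
```

    When L does not depend on z, the last term vanishes and the classical Euler-Lagrange equation is recovered.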

  18. Variation and Defect Tolerance for Nano Crossbars

    NASA Astrophysics Data System (ADS)

    Tunc, Cihan

    With the extreme shrinking of CMOS technology, quantum effects and manufacturing issues are becoming more crucial. Hence, additional shrinking of the CMOS feature size is becoming more challenging, difficult, and costly. On the other hand, emerging nanotechnology has attracted many researchers, since additional scaling down has been demonstrated by manufacturing nanowires, carbon nanotubes, and molecular switches using bottom-up manufacturing techniques. In addition to the progress in manufacturing, developments in architecture show that emerging nanoelectronic devices will be promising for future system designs. Using nano crossbars, which are composed of two sets of perpendicular nanowires with programmable intersections, it is possible to implement logic functions. In addition, nano crossbars offer some important features: regularity, reprogrammability, and interchangeability. Combining these features, researchers have presented various effective architectures. Although bottom-up nanofabrication can greatly reduce manufacturing costs, its low controllability raises some critical issues. The bottom-up nanofabrication process results in high variation compared to the conventional top-down lithography used in CMOS technology, and an increased failure rate is expected. Variation and defect tolerance methods used for conventional CMOS technology seem inadequate for emerging nanotechnology, because the variation and defect rates are much higher than in current CMOS technology. Therefore, variation and defect tolerance methods for emerging nanotechnology are necessary for a successful transition. In this work, in order to tolerate variations in crossbars, we introduce a framework built on the reprogrammability and interchangeability of nano crossbars. This framework is shown to be applicable to both FET-based and diode-based nano crossbars. We present a characterization testing method that requires a minimal number of test vectors. We formulate the variation optimization problem using simulated annealing with different optimization goals. Furthermore, we extend the framework for defect tolerance. Experimental results and a comparison of the proposed framework with exhaustive methods confirm its effectiveness for both variation and defect tolerance.

  19. Applying quality by design (QbD) concept for fabrication of chitosan coated nanoliposomes.

    PubMed

    Pandey, Abhijeet P; Karande, Kiran P; Sonawane, Raju O; Deshmukh, Prashant K

    2014-03-01

    In the present investigation, a quality by design (QbD) strategy was successfully applied to the fabrication of chitosan-coated nanoliposomes (CH-NLPs) encapsulating a hydrophilic drug. The effects of the processing variables on the particle size, encapsulation efficiency (%EE) and coating efficiency (%CE) of CH-NLPs (prepared using a modified ethanol injection method) were investigated. The concentrations of lipid, cholesterol, drug and chitosan; stirring speed, sonication time; organic:aqueous phase ratio; and temperature were identified as the key factors after risk analysis for conducting a screening design study. A separate study was designed to investigate the robustness of the predicted design space. The particle size, %EE and %CE of the optimized CH-NLPs were 111.3 nm, 33.4% and 35.2%, respectively. The observed responses were in accordance with the predicted response, which confirms the suitability and robustness of the design space for CH-NLP formulation. In conclusion, optimization of the selected key variables will help minimize the problems related to size, %EE and %CE that are generally encountered when scaling up processes for NLP formulations. The robustness of the design space will help minimize both intra-batch and inter-batch variations, which are quite common in the pharmaceutical industry.

  20. Random Matrix Approach for Primal-Dual Portfolio Optimization Problems

    NASA Astrophysics Data System (ADS)

    Tada, Daichi; Yamamoto, Hisashi; Shinzato, Takashi

    2017-12-01

    In this paper, we revisit the portfolio optimization problems of the minimization/maximization of investment risk under constraints of budget and investment concentration (primal problem) and the maximization/minimization of investment concentration under constraints of budget and investment risk (dual problem) for the case that the variances of the return rates of the assets are identical. We analyze both optimization problems by the Lagrange multiplier method and the random matrix approach. Thereafter, we compare the results obtained from our proposed approach with the results obtained in previous work. Moreover, we use numerical experiments to validate the results obtained from the replica approach and the random matrix approach as methods for analyzing both the primal and dual portfolio optimization problems.
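
    In the notation common to this line of work (our paraphrase; the budget and concentration conventions are assumptions, not copied from the paper), the primal and dual pair can be written as:

```latex
% Primal: minimize investment risk at fixed budget and investment
% concentration (conventions assumed from this literature):
\min_{\vec{w}}\; \tfrac{1}{2}\, \vec{w}^{\mathsf{T}} C\, \vec{w}
\quad \text{s.t.} \quad \sum_{i=1}^{N} w_i = N, \qquad \frac{1}{N}\sum_{i=1}^{N} w_i^2 = \tau ;
% Dual: optimize the concentration at fixed budget and risk:
\max_{\vec{w}}\; \frac{1}{N}\sum_{i=1}^{N} w_i^2
\quad \text{s.t.} \quad \sum_{i=1}^{N} w_i = N, \qquad \tfrac{1}{2}\, \vec{w}^{\mathsf{T}} C\, \vec{w} = N \varepsilon .
```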

  1. Modeling of tool path for the CNC sheet cutting machines

    NASA Astrophysics Data System (ADS)

    Petunin, Aleksandr A.

    2015-11-01

    In the paper the problem of tool path optimization for CNC (Computer Numerical Control) cutting machines is considered. A classification of the cutting techniques is offered. We also propose a new classification of tool path problems. The tasks of cost minimization and time minimization for the standard cutting technique (Continuous Cutting Problem, CCP) and for one of the non-standard cutting techniques (Segment Continuous Cutting Problem, SCCP) are formalized. We show that the optimization tasks can be interpreted as a discrete optimization problem (a generalized traveling salesman problem with additional constraints, GTSP). The formalization of some constraints for these tasks is described. To solve the GTSP we propose to use the mathematical model of Prof. Chentsov, based on the concept of a megalopolis and dynamic programming.

  2. Greedy algorithms in disordered systems

    NASA Astrophysics Data System (ADS)

    Duxbury, P. M.; Dobrin, R.

    1999-08-01

    We discuss search, minimal path and minimal spanning tree algorithms and their applications to disordered systems. Greedy algorithms solve these problems exactly, and are related to extremal dynamics in physics. Minimal cost path (Dijkstra) and minimal cost spanning tree (Prim) algorithms provide extremal dynamics for a polymer in a random medium (the KPZ universality class) and invasion percolation (without trapping) respectively.
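
    Both greedy algorithms fit in a few lines; the sketch below gives textbook Dijkstra and Prim implementations on an adjacency-list graph (the toy graph is illustrative).

```python
import heapq

# Greedy minimal-path (Dijkstra) on a weighted graph; with path cost taken
# as the sum of random bond energies, this is the extremal dynamics of a
# polymer in a random medium, as discussed above.
def dijkstra(adj, src):
    dist = {src: 0.0}
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue                      # stale heap entry
        for v, w in adj[u]:
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

# Greedy minimal spanning tree (Prim); growing the tree by the cheapest
# boundary edge mirrors invasion percolation without trapping.
def prim(adj, src):
    seen, tree = {src}, []
    heap = [(w, src, v) for v, w in adj[src]]
    heapq.heapify(heap)
    while heap:
        w, u, v = heapq.heappop(heap)
        if v in seen:
            continue
        seen.add(v)
        tree.append((u, v, w))
        for x, wx in adj[v]:
            if x not in seen:
                heapq.heappush(heap, (wx, v, x))
    return tree

adj = {0: [(1, 2.0), (2, 0.5)], 1: [(0, 2.0), (2, 1.0)], 2: [(0, 0.5), (1, 1.0)]}
print(dijkstra(adj, 0))
print(prim(adj, 0))
```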

  3. Family stressors, home demands and responsibilities, coping resources, social connectedness, and Thai older adult health problems: examining gender variations.

    PubMed

    Krishnakumar, Ambika; Narine, Lutchmie; Soonthorndhada, Amara; Thianlai, Kanchana

    2015-03-01

    To examine gender variations in the linkages among family stressors, home demands and responsibilities, coping resources, social connectedness, and older adult health problems. Data were collected from 3,800 elderly participants (1,654 men and 2,146 women) residing in Kanchanaburi province, Thailand. Findings indicated gender variations in the levels of these constructs and in the mediational pathways. Thai women indicated greater health problems than men. Emotional empathy was the central variable that linked financial strain, home demands and responsibilities, and older adult health problems through social connectedness. Financial strain (and negative life events for women) was associated with lowered coping self-efficacy and increased health problems. The model indicated greater strength in predicting female health problems. Findings support gender variations in the relationships between ecological factors and older adult health problems. © The Author(s) 2014.

  4. Optimal control problem for linear fractional-order systems, described by equations with Hadamard-type derivative

    NASA Astrophysics Data System (ADS)

    Postnov, Sergey

    2017-11-01

    Two kinds of optimal control problem are investigated for linear time-invariant fractional-order systems with lumped parameters whose dynamics are described by equations with a Hadamard-type derivative: the problem of control with minimal norm, and the problem of control with minimal time under a given restriction on the control norm. The problem setting with nonlocal initial conditions is studied. Admissible controls are allowed to be p-integrable functions (p > 1) on a half-interval. The optimal control problems are studied by the moment method. The correctness and solvability conditions for the corresponding moment problem are derived. For several special cases the optimal control problems stated are solved analytically. Some analogies are pointed out between the results obtained and those known for integer-order systems and for fractional-order systems described by equations with Caputo- and Riemann-Liouville-type derivatives.
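
    For reference, the Hadamard-type fractional derivative that distinguishes these systems from the Caputo and Riemann-Liouville cases is defined, in its standard form for n - 1 < α < n, by:

```latex
% Hadamard-type fractional derivative of order alpha, n-1 < alpha < n
% (standard definition, lower terminal a > 0):
\left(\mathcal{D}^{\alpha}_{H,a} x\right)(t)
  = \frac{1}{\Gamma(n-\alpha)}
    \left( t \frac{\mathrm{d}}{\mathrm{d}t} \right)^{\!n}
    \int_{a}^{t} \left( \ln \frac{t}{s} \right)^{n-\alpha-1} x(s)\, \frac{\mathrm{d}s}{s} .
```

    The logarithmic kernel and the operator t d/dt replace the power kernel and the ordinary derivative of the Riemann-Liouville case.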

  5. Optimal Rate Schedules with Data Sharing in Energy Harvesting Communication Systems.

    PubMed

    Wu, Weiwei; Li, Huafan; Shan, Feng; Zhao, Yingchao

    2017-12-20

    Despite the abundant research on energy-efficient rate scheduling policies in energy harvesting communication systems, few works have exploited data sharing among multiple applications to further enhance the energy utilization efficiency, considering that the harvested energy from environments is limited and unstable. In this paper, to overcome the energy shortage of wireless devices in transmitting data to a platform running multiple applications/requesters, we design rate scheduling policies to respond to data requests as soon as possible by encouraging data sharing among data requests and reducing the redundancy. We formulate the problem as a transmission completion time minimization problem under constraints of dynamical data requests and energy arrivals. We develop offline and online algorithms to solve this problem. For the offline setting, we discover the relationship between two problems: the completion time minimization problem and the energy consumption minimization problem with a given completion time. We first derive the optimal algorithm for the min-energy problem and then adopt it as a building block to compute the optimal solution for the min-completion-time problem. For the online setting without future information, we develop an event-driven online algorithm to complete the transmission as soon as possible. Simulation results validate the efficiency of the proposed algorithm.

  6. Optimal Rate Schedules with Data Sharing in Energy Harvesting Communication Systems

    PubMed Central

    Wu, Weiwei; Li, Huafan; Shan, Feng; Zhao, Yingchao

    2017-01-01

    Despite the abundant research on energy-efficient rate scheduling policies in energy harvesting communication systems, few works have exploited data sharing among multiple applications to further enhance the energy utilization efficiency, considering that the harvested energy from environments is limited and unstable. In this paper, to overcome the energy shortage of wireless devices in transmitting data to a platform running multiple applications/requesters, we design rate scheduling policies to respond to data requests as soon as possible by encouraging data sharing among data requests and reducing the redundancy. We formulate the problem as a transmission completion time minimization problem under constraints of dynamical data requests and energy arrivals. We develop offline and online algorithms to solve this problem. For the offline setting, we discover the relationship between two problems: the completion time minimization problem and the energy consumption minimization problem with a given completion time. We first derive the optimal algorithm for the min-energy problem and then adopt it as a building block to compute the optimal solution for the min-completion-time problem. For the online setting without future information, we develop an event-driven online algorithm to complete the transmission as soon as possible. Simulation results validate the efficiency of the proposed algorithm. PMID:29261135

  7. Quality of ground water in Idaho

    USGS Publications Warehouse

    Yee, Johnson J.; Souza, William R.

    1987-01-01

    The major aquifers in Idaho are categorized under two rock types, sedimentary and volcanic, and are grouped into six hydrologic basins. Areas with adequate, minimally adequate, or deficient data available for groundwater-quality evaluations are described. Wide variations in chemical concentrations in the water occur within individual aquifers, as well as among the aquifers. The existing data base is not sufficient to describe fully the ground-water quality throughout the State; however, it does indicate that the water is generally suitable for most uses. In some aquifers, concentrations of fluoride, cadmium, and iron in the water exceed the U.S. Environmental Protection Agency's drinking-water standards. Dissolved solids, chloride, and sulfate may cause problems in some local areas. Water-quality data are sparse in many areas, and only general statements can be made regarding the areal distribution of chemical constituents. Few data are available to describe temporal variations of water quality in the aquifers. Primary concerns related to special problem areas in Idaho include (1) protection of water quality in the Rathdrum Prairie aquifer, (2) potential degradation of water quality in the Boise-Nampa area, (3) effects of widespread use of drain wells overlying the eastern Snake River Plain basalt aquifer, and (4) disposal of low-level radioactive wastes at the Idaho National Engineering Laboratory. Shortcomings in the ground-water-quality data base are categorized as (1) multiaquifer sample inadequacy, (2) constituent coverage limitations, (3) baseline-data deficiencies, and (4) data-base nonuniformity.

  8. Copy number variants calling for single cell sequencing data by multi-constrained optimization.

    PubMed

    Xu, Bo; Cai, Hongmin; Zhang, Changsheng; Yang, Xi; Han, Guoqiang

    2016-08-01

    Variations in DNA copy number carry important information on genome evolution and on the regulation of DNA replication in cancer cells. The rapid development of single-cell sequencing technology makes it possible to explore gene expression heterogeneity among single cells, providing important information on cancer cell evolution. Single-cell DNA/RNA sequencing data usually have low genome coverage, which requires an extra amplification step to accumulate enough material. However, such amplification introduces a large bias and makes bioinformatics analysis challenging. Accurately modeling the distribution of the sequencing data and effectively suppressing the influence of this bias are the keys to successful variation analysis. Recent advances suggest that the technical noise introduced by amplification is better modeled by the negative binomial distribution, an overdispersed generalization of the Poisson distribution. We therefore tackle CNV detection by formulating it as a quadratic optimization problem involving two constraints, in which the underlying signals are corrupted by Poisson-type noise. By imposing the constraints of sparsity and smoothness, the read depth signals reconstructed from single-cell sequencing data are anticipated to fit the CNV patterns more accurately. An efficient numerical solution based on the classical alternating direction minimization method (ADMM) is tailored to solve the proposed model. We demonstrate the advantages of the proposed method using both synthetic and empirical single-cell sequencing data. Our experimental results demonstrate that the proposed method achieves excellent performance and holds high promise for single-cell sequencing data. Crown Copyright © 2016. Published by Elsevier Ltd. All rights reserved.
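
    A compact sketch of the sparse-plus-smooth idea on a fused-lasso-type model solved by ADMM; the Gaussian noise, penalty weights, and simple difference operator are illustrative simplifications of the paper's Poisson-based formulation.

```python
import numpy as np

def soft(v, t):
    """Soft-thresholding (prox of the l1 norm)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

# ADMM for  min_x 0.5||y - x||^2 + lam1*||x||_1 + lam2*||Dx||_1,
# with D the first-difference operator: the l1 term picks out CNV
# segments, the TV term suppresses amplification noise.
def admm_cnv(y, lam1=0.1, lam2=2.0, rho=1.0, iters=300):
    n = y.size
    D = np.diff(np.eye(n), axis=0)              # (n-1) x n difference matrix
    A = (1 + rho) * np.eye(n) + rho * D.T @ D   # constant x-update system
    z1 = y.copy()
    z2 = D @ y
    u1 = np.zeros(n)
    u2 = np.zeros(n - 1)
    for _ in range(iters):
        x = np.linalg.solve(A, y + rho * (z1 - u1) + rho * D.T @ (z2 - u2))
        z1 = soft(x + u1, lam1 / rho)           # prox for the sparsity term
        z2 = soft(D @ x + u2, lam2 / rho)       # prox for the smoothness term
        u1 += x - z1                            # scaled dual updates
        u2 += D @ x - z2
    return x

rng = np.random.default_rng(5)
truth = np.concatenate([np.zeros(40), 1.5 * np.ones(20), np.zeros(40)])
y = truth + rng.normal(0, 0.3, truth.size)      # noisy read depth with one gain
x = admm_cnv(y)
print(np.round(x[35:65], 2))                    # recovers the step pattern
```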

  9. Evaluation of Brief Group-Administered Instruction for Parents to Prevent or Minimize Sleep Problems in Young Children with Down Syndrome

    ERIC Educational Resources Information Center

    Stores, Rebecca; Stores, Gregory

    2004-01-01

    Background: The study concerns the unknown value of group instruction for mothers of young children with Down syndrome (DS) in preventing or minimizing sleep problems. Method: (1) Children with DS were randomly allocated to an Instruction group (given basic information about children's sleep) and a Control group for later comparison including…

  10. Does Self-Help Increase Rates of Help Seeking for Student Mental Health Problems by Minimizing Stigma as a Barrier?

    ERIC Educational Resources Information Center

    Levin, Michael E.; Krafft, Jennifer; Levin, Crissa

    2018-01-01

    Objective: This study examined whether self-help (books, websites, mobile apps) increases help seeking for mental health problems among college students by minimizing stigma as a barrier. Participants and Methods: A survey was conducted with 200 college students reporting elevated distress from February to April 2017. Results: Intentions to use…

  11. On a variational approach to some parameter estimation problems

    NASA Technical Reports Server (NTRS)

    Banks, H. T.

    1985-01-01

    Examples (1-D seismic, large flexible structures, bioturbation, nonlinear population dispersal) in which a variational setting can provide a convenient framework for convergence and stability arguments in parameter estimation problems are considered. One aspect of the problem considered is convergence and stability arguments, via a variational approach, for least-squares formulations of parameter estimation problems for partial differential equations.

  12. Fast Algorithms for Designing Unimodular Waveform(s) With Good Correlation Properties

    NASA Astrophysics Data System (ADS)

    Li, Yongzhe; Vorobyov, Sergiy A.

    2018-03-01

    In this paper, we develop new fast and efficient algorithms for designing single/multiple unimodular waveforms/codes with good auto- and cross-correlation or weighted correlation properties, which are highly desired in radar and communication systems. The waveform design is based on the minimization of the integrated sidelobe level (ISL) and weighted ISL (WISL) of waveforms. As the corresponding optimization problems can quickly grow to large scale with increasing code length and number of waveforms, the main issue becomes the development of fast large-scale optimization techniques. A further difficulty is that the corresponding optimization problems are non-convex while the required accuracy is high. Therefore, we formulate the ISL and WISL minimization problems as non-convex quartic optimization problems in the frequency domain, and then simplify them into quadratic problems by means of the majorization-minimization technique, which is one of the basic techniques for addressing large-scale and/or non-convex optimization problems. While designing our fast algorithms, we identify and exploit inherent algebraic structures in the objective functions to rewrite them into quartic forms and, in the case of WISL minimization, to derive an additional alternative quartic form which allows the quartic-quadratic transformation to be applied. Our algorithms are applicable to large-scale unimodular waveform design problems, as they are proved to have lower or comparable computational burden (analyzed theoretically) and faster convergence speed (confirmed by comprehensive simulations) than the state-of-the-art algorithms. In addition, the waveforms designed by our algorithms demonstrate better correlation properties than their counterparts.
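
    The objectives being minimized can be stated compactly. For a single unimodular sequence x_1, ..., x_N with aperiodic autocorrelations r_k, the standard definitions (conventions differ by a factor of two) are:

```latex
% Aperiodic autocorrelations and (weighted) integrated sidelobe level for
% a length-N sequence with |x_n| = 1:
r_k = \sum_{n=1}^{N-k} x_n x_{n+k}^{*}, \quad k = 0, \dots, N-1,
\qquad
\mathrm{ISL} = \sum_{k=1}^{N-1} \lvert r_k \rvert^{2},
\qquad
\mathrm{WISL} = \sum_{k=1}^{N-1} w_k \lvert r_k \rvert^{2}, \; w_k \ge 0 .
```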

  13. Distributed query plan generation using multiobjective genetic algorithm.

    PubMed

    Panicker, Shina; Kumar, T V Vijay

    2014-01-01

    A distributed query processing strategy, which is a key performance determinant in accessing distributed databases, aims to minimize the total query processing cost. One way to achieve this is by generating efficient distributed query plans that involve fewer sites for processing a query. In the case of distributed relational databases, the number of possible query plans increases exponentially with respect to the number of relations accessed by the query and the number of sites where these relations reside. Consequently, computing optimal distributed query plans becomes a complex problem. This distributed query plan generation (DQPG) problem has already been addressed using single objective genetic algorithm, where the objective is to minimize the total query processing cost comprising the local processing cost (LPC) and the site-to-site communication cost (CC). In this paper, this DQPG problem is formulated and solved as a biobjective optimization problem with the two objectives being minimize total LPC and minimize total CC. These objectives are simultaneously optimized using a multiobjective genetic algorithm NSGA-II. Experimental comparison of the proposed NSGA-II based DQPG algorithm with the single objective genetic algorithm shows that the former performs comparatively better and converges quickly towards optimal solutions for an observed crossover and mutation probability.

  14. Distributed Query Plan Generation Using Multiobjective Genetic Algorithm

    PubMed Central

    Panicker, Shina; Vijay Kumar, T. V.

    2014-01-01

    A distributed query processing strategy, which is a key performance determinant in accessing distributed databases, aims to minimize the total query processing cost. One way to achieve this is by generating efficient distributed query plans that involve fewer sites for processing a query. In the case of distributed relational databases, the number of possible query plans increases exponentially with respect to the number of relations accessed by the query and the number of sites where these relations reside. Consequently, computing optimal distributed query plans becomes a complex problem. This distributed query plan generation (DQPG) problem has already been addressed using single objective genetic algorithm, where the objective is to minimize the total query processing cost comprising the local processing cost (LPC) and the site-to-site communication cost (CC). In this paper, this DQPG problem is formulated and solved as a biobjective optimization problem with the two objectives being minimize total LPC and minimize total CC. These objectives are simultaneously optimized using a multiobjective genetic algorithm NSGA-II. Experimental comparison of the proposed NSGA-II based DQPG algorithm with the single objective genetic algorithm shows that the former performs comparatively better and converges quickly towards optimal solutions for an observed crossover and mutation probability. PMID:24963513

  15. Search Problems in Mission Planning and Navigation of Autonomous Aircraft. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Krozel, James A.

    1988-01-01

    An architecture for the control of an autonomous aircraft is presented. The architecture is a hierarchical system representing an anthropomorphic breakdown of the control problem into planner, navigator, and pilot systems. The planner system determines high-level global plans from overall mission objectives. This abstract mission planning is investigated by focusing on the Traveling Salesman Problem with variations on local and global constraints. Tree search techniques are applied, including the breadth-first, depth-first, and best-first algorithms. The minimum column and row entries of the Traveling Salesman Problem cost matrix provide a powerful heuristic to guide these search techniques. Mission planning subgoals are directed from the planner to the navigator for planning routes in mountainous terrain with threats. Terrain/threat information is abstracted into a graph of possible paths for which graph searches are performed. It is shown that paths can be well represented by a search graph based on the Voronoi diagram of points representing the vertices of mountain boundaries. A comparison of Dijkstra's dynamic programming algorithm and the A* graph search algorithm from artificial intelligence/operations research is performed for several navigation path planning examples. These examples illustrate paths that minimize a combination of distance and exposure to threats. Finally, the pilot system synthesizes the flight trajectory by creating the control commands to fly the aircraft.
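
    The minimum-entry heuristic admits a short illustration: every city not yet departed must eventually be left at no less than the cost of its cheapest outgoing edge, giving an admissible lower bound for best-first search. This is our reconstruction of the idea; the instance is illustrative.

```python
import numpy as np

# Admissible lower bound from minimum row entries of the TSP cost matrix:
# the current city and every unvisited city must each still be left at no
# less than the cost of its cheapest outgoing edge.
def lower_bound(cost, visited, current):
    n = cost.shape[0]
    remaining = [c for c in range(n) if c not in visited]
    return sum(min(cost[c][j] for j in range(n) if j != c)
               for c in [current] + remaining)

cost = np.array([[0, 3, 9],
                 [3, 0, 4],
                 [9, 4, 0]], dtype=float)
print(lower_bound(cost, visited={0}, current=0))   # 10 <= optimal tour cost 16
```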

  16. Variable Pitch Darrieus Water Turbines

    NASA Astrophysics Data System (ADS)

    Kirke, Brian; Lazauskas, Leo

    In recent years the Darrieus wind turbine concept has been adapted for use in water, either as a hydrokinetic turbine converting the kinetic energy of a moving fluid in open flow like an underwater wind turbine, or in a low head or ducted arrangement where flow is confined, streamtube expansion is controlled and efficiency is not subject to the Betz limit. Conventional fixed pitch Darrieus turbines suffer from two drawbacks, (i) low starting torque and (ii) shaking due to cyclical variations in blade angle of attack. Ventilation and cavitation can also cause problems in water turbines when blade velocities are high. Shaking can be largely overcome by the use of helical blades, but these do not produce large starting torque. Variable pitch can produce high starting torque and high efficiency, and by suitable choice of pitch regime, shaking can be minimized but not entirely eliminated. Ventilation can be prevented by avoiding operation close to a free surface, and cavitation can be prevented by limiting blade velocities. This paper summarizes recent developments in Darrieus water turbines, some problems and some possible solutions.

  17. Smoothing of Transport Plans with Fixed Marginals and Rigorous Semiclassical Limit of the Hohenberg-Kohn Functional

    NASA Astrophysics Data System (ADS)

    Cotar, Codina; Friesecke, Gero; Klüppelberg, Claudia

    2018-06-01

    We prove rigorously that the exact N-electron Hohenberg-Kohn density functional converges in the strongly interacting limit to the strictly correlated electrons (SCE) functional, and that the absolute value squared of the associated constrained-search wavefunction tends weakly, in the sense of probability measures, to a minimizer of the multi-marginal optimal transport problem with Coulomb cost associated to the SCE functional. This extends our previous work for N = 2 (Cotar et al. in Commun Pure Appl Math 66:548-599, 2013). The correct limit problem had been derived in the physics literature by Seidl (Phys Rev A 60:4387-4395, 1999) and Seidl, Gori-Giorgi and Savin (Phys Rev A 75:042511 1-12, 2007); in these papers the lack of a rigorous proof was pointed out. We also give a mathematical counterexample to this type of result, by replacing the constraint of given one-body density—an infinite dimensional quadratic expression in the wavefunction—by an infinite-dimensional quadratic expression in the wavefunction and its gradient. Connections with the Lavrentiev phenomenon in the calculus of variations are indicated.

  18. Detection of Copy Number Variants from NGS with Sparse and Smooth Constraints.

    PubMed

    Zhang, Yue; Cheung, Yiu-Ming; Xu, Bo; Su, Weifeng

    2017-01-01

    It is known that copy number variations (CNVs) are associated with complex diseases and particular tumor types, so reliable identification of CNVs is of great potential value. Recent advances in next generation sequencing (NGS) data analysis have helped manifest the richness of CNV information. However, the performance of these methods is not consistent, and reliably and efficiently finding CNVs in NGS data remains a challenging topic worthy of further investigation. Accordingly, we tackle the problem by formulating CNV identification as a quadratic optimization problem involving two constraints. By imposing the constraints of sparsity and smoothness, the reconstructed read-depth signal from NGS is anticipated to fit the CNV patterns more accurately. An efficient numerical solution tailored from the alternating direction minimization (ADM) framework is elaborated. We demonstrate the advantages of the proposed method, named ADM-CNV, by comparing it with six popular CNV detection methods using synthetic, simulated, and empirical sequencing data. It is shown that the proposed approach can successfully reconstruct CNV patterns from raw data and achieve superior or comparable performance in CNV detection compared to existing counterparts.
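
    The ADM solver itself is not reproduced in the record; purely to illustrate the sparse-plus-smooth model, the sketch below minimizes a smoothed surrogate of 0.5‖y − x‖² + λ₁‖x‖₁ + λ₂‖Dx‖₁ with a generic quasi-Newton method, where D is the first-difference operator. The simulated read-depth signal and all parameter values are assumptions, not the authors' settings.

```python
import numpy as np
from scipy.optimize import minimize

def smooth_abs(t, eps=1e-6):
    """Smooth stand-in for |t| so a quasi-Newton solver can be used."""
    return np.sqrt(t * t + eps)

def objective(x, y, lam1, lam2, eps=1e-6):
    # 0.5||y - x||^2 + lam1*sum|x_i| + lam2*sum|x_{i+1} - x_i|  (abs smoothed)
    return (0.5 * np.sum((y - x) ** 2)
            + lam1 * np.sum(smooth_abs(x, eps))
            + lam2 * np.sum(smooth_abs(np.diff(x), eps)))

rng = np.random.default_rng(0)
y = np.zeros(200)
y[80:120] = 1.0                       # one simulated copy-number gain
y += 0.3 * rng.standard_normal(200)   # sequencing noise

res = minimize(objective, x0=y.copy(), args=(y, 0.05, 2.0), method="L-BFGS-B")
x_hat = res.x                         # near piecewise-constant reconstruction
print(x_hat[:3].round(2), x_hat[95:98].round(2))
```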

  19. Cognition-emotion interactions: patterns of change and implications for math problem solving

    PubMed Central

    Trezise, Kelly; Reeve, Robert A.

    2014-01-01

    Surprisingly little is known about whether relationships between cognitive and emotional states remain stable or change over time, or how different patterns of stability and/or change in these relationships affect problem solving abilities. Nevertheless, cross-sectional studies show that anxiety/worry may reduce working memory (WM) resources, and that the ability to minimize the effects of anxiety/worry is higher in individuals with greater WM capacity. To investigate the patterns of stability and/or change in cognition-emotion relations over time and their implications for problem solving, 126 14-year-olds’ algebraic WM and worry levels were assessed twice in a single day before they completed an algebraic math problem solving test. We used latent transition analysis to identify stability/change in cognition-emotion relations, which yielded a six-subgroup solution. Subgroups varied in WM capacity, worry, and stability/change relationships. Among the subgroups, we identified a high WM/low worry subgroup that remained stable over time, and a high WM/high worry subgroup and a moderate WM/low worry subgroup that changed to low WM subgroups over time. Patterns of stability/change in subgroup membership predicted algebraic test results. The stable high WM/low worry subgroup performed best, and the low WM/high worry subgroup that was unstable across time performed worst. The findings highlight the importance of assessing variations in cognition-emotion relationships over time (rather than assessing cognition or emotion states alone) to account for differences in problem solving abilities. PMID:25132830

  20. A videoscope for use in minimally invasive periodontal surgery.

    PubMed

    Harrel, Stephen K; Wilson, Thomas G; Rivera-Hidalgo, Francisco

    2013-09-01

    Minimally invasive periodontal procedures have been reported to produce excellent clinical results. Visualization during minimally invasive procedures has traditionally been obtained by the use of surgical telescopes, surgical microscopes, glass fibre endoscopes or a combination of these devices. All of these methods for visualization are less than fully satisfactory due to problems with access, magnification and blurred imaging. A videoscope for use with minimally invasive periodontal procedures has been developed to overcome some of the difficulties that exist with current visualization approaches. This videoscope incorporates a gas shielding technology that eliminates the problems of fogging and fouling of the optics of the videoscope that have previously prevented the successful application of endoscopic visualization to periodontal surgery. In addition, as part of the gas shielding technology the videoscope also includes a moveable retractor specifically adapted for minimally invasive surgery. The clinical use of the videoscope during minimally invasive periodontal surgery is demonstrated and discussed. The videoscope with gas shielding alleviates many of the difficulties associated with visualization during minimally invasive periodontal surgery. © 2013 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  1. Optimal mistuning for enhanced aeroelastic stability of transonic fans

    NASA Technical Reports Server (NTRS)

    Hall, K. C.; Crawley, E. F.

    1983-01-01

    An inverse design procedure was developed for the design of a mistuned rotor. The design requirements are that the stability margin of the eigenvalues of the aeroelastic system be greater than or equal to some minimum stability margin, and that the mass added to each blade be positive. The objective was to achieve these requirements with a minimal amount of mistuning; hence, the problem was posed as a constrained optimization problem. The constrained minimization problem was solved by the technique of mathematical programming via augmented Lagrangians. The unconstrained minimization phase of this technique was solved by the variable metric method. The bladed disk was modelled as a rigid disk mounted on a rigid shaft, and each blade was modelled with a single torsional degree of freedom.

  2. Minimizing communication cost among distributed controllers in software defined networks

    NASA Astrophysics Data System (ADS)

    Arlimatti, Shivaleela; Elbreiki, Walid; Hassan, Suhaidi; Habbal, Adib; Elshaikh, Mohamed

    2016-08-01

    Software Defined Networking (SDN) is a new paradigm that increases the flexibility of today's networks by promising a programmable network. The fundamental idea behind this architecture is to simplify network complexity by decoupling the control plane and data plane of the network devices and by centralizing the control plane. Recently, controllers have been distributed to solve the problem of a single point of failure and to increase scalability and flexibility during workload distribution. Yet even though distributed controllers are flexible and scalable enough to accommodate a larger number of network switches, the intercommunication cost between them remains a challenging issue in the Software Defined Network environment. This paper aims to fill that gap by proposing a new mechanism that minimizes intercommunication cost via graph partitioning, an NP-hard problem. The proposed methodology swaps network elements between controller domains, guided by a computed communication gain, to minimize inter- and intra-domain communication cost. We validate our work with the OMNeT++ simulation environment. Simulation results show that the proposed mechanism reduces the inter-domain communication cost among controllers compared to traditional distributed controllers.
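
    The record does not give the gain formula; the sketch below computes a Kernighan-Lin-style gain for moving a single switch between controller domains, with hypothetical traffic weights and domain labels. A positive gain means the move reduces inter-domain communication cost.

```python
def move_gain(node, src, dst, weight, assign):
    """Reduction in inter-domain traffic if `node` moves from controller
    domain `src` to `dst`: its edges into `dst` stop crossing domains,
    while its edges into `src` start crossing them."""
    to_dst = sum(w for v, w in weight[node].items() if assign[v] == dst)
    to_src = sum(w for v, w in weight[node].items() if assign[v] == src)
    return to_dst - to_src

# Hypothetical switch-to-switch traffic and an initial two-domain split.
weight = {"s1": {"s2": 5, "s3": 1},
          "s2": {"s1": 5, "s4": 2},
          "s3": {"s1": 1, "s4": 4},
          "s4": {"s2": 2, "s3": 4}}
assign = {"s1": "A", "s2": "B", "s3": "A", "s4": "B"}
print(move_gain("s1", "A", "B", weight, assign))  # 4: moving s1 reduces cost
```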

  3. Retrospectively reported month-to-month variation in sleeping problems of people naturally exposed to high-amplitude annual variation in daylength and/or temperature.

    PubMed

    Putilov, Arcady A

    2017-01-01

    Compared to the literature on seasonal variation in mood and well-being, reports on the seasonality of trouble sleeping are scarce and contradictory. The aim was to extend the geography of such reports using the example of people naturally exposed to high-amplitude annual variation in daylength and/or temperature. Participants were residents of Turkmenia, West Siberia, South and North Yakutia, Chukotka, and Alaska. Health and sleep-wake adaptabilities, month-to-month variation in sleeping problems, well-being, and behaviors were self-assessed. More than half of the 2398 respondents acknowledged seasonality of sleeping problems. Four of the assessed sleeping problems demonstrated three different patterns of seasonal variation. The rate of these problems significantly increased in winter months with long nights and cold days (daytime sleepiness and difficulties falling and staying asleep) as well as in summer months with either long days (premature awakening and difficulties falling and staying asleep) or hot nights and days (all four sleeping problems). Individual differences between respondents in the pattern and level of seasonality of sleeping problems were significantly associated with differences in several other domains of individual variation, such as gender, age, ethnicity, physical health, morning-evening preference, sleep quality, and adaptability of the sleep-wake cycle. These results are relevant to understanding the roles played by natural environmental factors in the seasonality of sleeping problems, as well as to research on the prevalence of sleep disorders and methods of their prevention and treatment in regions with large seasonal differences in temperature and daylength.

  4. Seasonal species interactions minimize the impact of species turnover on the likelihood of community persistence.

    PubMed

    Saavedra, Serguei; Rohr, Rudolf P; Fortuna, Miguel A; Selva, Nuria; Bascompte, Jordi

    2016-04-01

    Many of the observed species interactions embedded in ecological communities are not permanent but are characterized by temporal changes that are observed along with abiotic and biotic variations. While work has been done describing and quantifying these changes, little is known about their consequences for species coexistence. Here, we investigate the extent to which changes of species composition impact the likelihood of persistence of the predator-prey community in the highly seasonal Białowieża Primeval Forest (northeast Poland), and the extent to which seasonal changes of species interactions (predator diet) modulate the expected impact. This likelihood is estimated by extending recent developments in the study of structural stability in ecological communities. We find that the observed species turnover causes the likelihood of community persistence to vary strongly between summer and winter. Importantly, we demonstrate that the observed seasonal interaction changes minimize the variation in the likelihood of persistence associated with species turnover across the year. We find that these community dynamics can be explained as the coupling of individual species to their environment, minimizing both the variation in persistence conditions and the interaction changes between seasons. Our results provide a homeostatic explanation for seasonal species interactions and suggest that monitoring the association of interaction changes with the level of variation in community dynamics can provide a good indicator of the response of species to environmental pressures.

  5. Adoption of waste minimization technology to benefit electroplaters

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ching, E.M.K.; Li, C.P.H.; Yu, C.M.K.

    Because of increasingly stringent environmental legislation and enhanced environmental awareness, electroplaters in Hong Kong are paying more heed to protecting the environment. To comply with the array of environmental controls, electroplaters can no longer rely solely on the end-of-pipe approach as a means for abating their pollution problems under the particular local industrial environment. The preferred approach is to adopt waste minimization measures that yield both economic and environmental benefits. This paper gives an overview of electroplating activities in Hong Kong, highlights their characteristics, and describes the pollution problems associated with conventional electroplating operations. The constraints of using pollution control measures to achieve regulatory compliance are also discussed. Examples and case studies are given on some low-cost waste minimization techniques readily available to electroplaters, including dragout minimization and water conservation techniques. Recommendations are given as to how electroplaters can adopt and exercise waste minimization techniques in their operations.

  6. Adaptive Particle Swarm Optimizer with Varying Acceleration Coefficients for Finding the Most Stable Conformer of Small Molecules.

    PubMed

    Agrawal, Shikha; Silakari, Sanjay; Agrawal, Jitendra

    2015-11-01

    A novel parameter automation strategy for Particle Swarm Optimization, called APSO (Adaptive PSO), is proposed. The algorithm is designed to efficiently control the local search and convergence to the global optimum solution. Parameter c1 controls the impact of the cognitive component on the particle trajectory and c2 controls the impact of the social component. Instead of fixing the values of c1 and c2, this paper updates these acceleration coefficients by considering the time variation of the evaluation function along with a varying inertia weight factor in PSO. Here, the maximum and minimum values of the evaluation function are used to gradually decrease and increase the values of c1 and c2, respectively. Molecular energy minimization is one of the most challenging unsolved problems, and it can be formulated as a global optimization problem. The aim of the present paper is to investigate the effect of the newly developed APSO on a highly complex molecular potential energy function and to check the efficiency of the proposed algorithm in finding the global minimum of the function under consideration. The proposed APSO algorithm is therefore applied in two cases: first, for the minimization of the potential energy of small molecules with up to 100 degrees of freedom, and finally for finding the global minimum energy conformation of the 1,2,3-trichloro-1-fluoro-propane molecule based on a realistic potential energy function. The computational results in all cases show that the proposed method performs significantly better than the other algorithms. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
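
    The record describes fitness-driven updates of c1 and c2; the sketch below substitutes a simpler linear time variation (together with the varying inertia weight) to show the mechanics, so it is a baseline in the same family rather than APSO itself. Swarm size, search bounds, and the toy objective standing in for a molecular potential energy surface are all assumptions.

```python
import numpy as np

def pso_varying_coeffs(f, dim, n=30, iters=200, c_min=0.5, c_max=2.5,
                       w_max=0.9, w_min=0.4, seed=0):
    """PSO with linearly varying inertia and acceleration coefficients:
    c1 decreases (less cognitive pull), c2 increases (more social pull)."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, (n, dim))
    v = np.zeros((n, dim))
    pbest, pval = x.copy(), np.array([f(p) for p in x])
    g = pbest[pval.argmin()].copy()
    for t in range(iters):
        frac = t / iters
        w = w_max - (w_max - w_min) * frac
        c1 = c_max - (c_max - c_min) * frac   # cognitive: large -> small
        c2 = c_min + (c_max - c_min) * frac   # social:    small -> large
        r1, r2 = rng.random((n, dim)), rng.random((n, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = x + v
        fx = np.array([f(p) for p in x])
        improved = fx < pval
        pbest[improved], pval[improved] = x[improved], fx[improved]
        g = pbest[pval.argmin()].copy()
    return g, pval.min()

# Toy stand-in for a molecular potential-energy surface.
sphere = lambda p: float(np.sum(p ** 2))
print(pso_varying_coeffs(sphere, dim=10))
```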

  7. Regularization Parameter Selection for Nonlinear Iterative Image Restoration and MRI Reconstruction Using GCV and SURE-Based Methods

    PubMed Central

    Ramani, Sathish; Liu, Zhihao; Rosen, Jeffrey; Nielsen, Jon-Fredrik; Fessler, Jeffrey A.

    2012-01-01

    Regularized iterative reconstruction algorithms for imaging inverse problems require selection of appropriate regularization parameter values. We focus on the challenging problem of tuning regularization parameters for nonlinear algorithms for the case of additive (possibly complex) Gaussian noise. Generalized cross-validation (GCV) and (weighted) mean-squared error (MSE) approaches (based on Stein's Unbiased Risk Estimate—SURE) need the Jacobian matrix of the nonlinear reconstruction operator (representative of the iterative algorithm) with respect to the data. We derive the desired Jacobian matrix for two types of nonlinear iterative algorithms: a fast variant of the standard iterative reweighted least-squares method and the contemporary split-Bregman algorithm, both of which can accommodate a wide variety of analysis- and synthesis-type regularizers. The proposed approach iteratively computes two weighted SURE-type measures: Predicted-SURE and Projected-SURE (that require knowledge of the noise variance σ²), and GCV (that does not need σ²) for these algorithms. We apply the methods to image restoration and to magnetic resonance image (MRI) reconstruction using total variation (TV) and an analysis-type ℓ1-regularization. We demonstrate through simulations and experiments with real data that minimizing Predicted-SURE and Projected-SURE consistently lead to near-MSE-optimal reconstructions. We also observed that minimizing GCV yields reconstruction results that are near-MSE-optimal for image restoration and slightly sub-optimal for MRI. Theoretical derivations in this work related to Jacobian matrix evaluations can be extended, in principle, to other types of regularizers and reconstruction algorithms. PMID:22531764
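
    For orientation, the classical GCV criterion for a linear(ized) reconstruction ŷ = A_λ y reads (generic notation, not the paper's):

```latex
\mathrm{GCV}(\lambda) =
\frac{\frac{1}{n}\,\lVert \mathbf{y} - \mathbf{A}_\lambda \mathbf{y} \rVert_2^2}
     {\left( \frac{1}{n}\,\operatorname{tr}(\mathbf{I} - \mathbf{A}_\lambda) \right)^2}.
```

    The difficulty the paper addresses is that a nonlinear iterative reconstruction has no explicit A_λ; its role is played by the Jacobian of the reconstruction operator with respect to the data, which is exactly what the derived Jacobian expressions supply.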

  8. A parallel process growth mixture model of conduct problems and substance use with risky sexual behavior.

    PubMed

    Wu, Johnny; Witkiewitz, Katie; McMahon, Robert J; Dodge, Kenneth A

    2010-10-01

    Conduct problems, substance use, and risky sexual behavior have been shown to coexist among adolescents, which may lead to significant health problems. The current study was designed to examine relations among these problem behaviors in a community sample of children at high risk for conduct disorder. A latent growth model of childhood conduct problems showed a decreasing trend from grades K to 5. During adolescence, four concurrent conduct problem and substance use trajectory classes were identified (high conduct problems and high substance use, increasing conduct problems and increasing substance use, minimal conduct problems and increasing substance use, and minimal conduct problems and minimal substance use) using a parallel process growth mixture model. Across all substances (tobacco, binge drinking, and marijuana use), higher levels of childhood conduct problems during kindergarten predicted a greater probability of classification into more problematic adolescent trajectory classes relative to less problematic classes. For tobacco and binge drinking models, increases in childhood conduct problems over time also predicted a greater probability of classification into more problematic classes. For all models, individuals classified into more problematic classes showed higher proportions of early sexual intercourse, infrequent condom use, receiving money for sexual services, and ever contracting an STD. Specifically, tobacco use and binge drinking during early adolescence predicted higher levels of sexual risk taking into late adolescence. Results highlight the importance of studying the conjoint relations among conduct problems, substance use, and risky sexual behavior in a unified model. Copyright © 2010 Elsevier Ireland Ltd. All rights reserved.

  9. NP-hardness of the cluster minimization problem revisited

    NASA Astrophysics Data System (ADS)

    Adib, Artur B.

    2005-10-01

    The computational complexity of the 'cluster minimization problem' is revisited (Wille and Vennik 1985 J. Phys. A: Math. Gen. 18 L419). It is argued that the original NP-hardness proof does not apply to pairwise potentials of physical interest, such as those that depend on the geometric distance between the particles. A geometric analogue of the original problem is formulated, and a new proof for such potentials is provided by polynomial time transformation from the independent set problem for unit disk graphs. Limitations of this formulation are pointed out, and new subproblems that bear more direct consequences to the numerical study of clusters are suggested.

  10. Nonconvex Nonsmooth Low Rank Minimization via Iteratively Reweighted Nuclear Norm.

    PubMed

    Lu, Canyi; Tang, Jinhui; Yan, Shuicheng; Lin, Zhouchen

    2016-02-01

    The nuclear norm is widely used as a convex surrogate of the rank function in compressive sensing for low-rank matrix recovery, with applications in image recovery and signal processing. However, solving the nuclear-norm-based relaxed convex problem usually leads to a suboptimal solution of the original rank minimization problem. In this paper, we propose to use a family of nonconvex surrogates of the L0-norm on the singular values of a matrix to approximate the rank function. This leads to a nonconvex nonsmooth minimization problem. We then propose to solve the problem by an iteratively reweighted nuclear norm (IRNN) algorithm. IRNN iteratively solves a weighted singular value thresholding problem, which has a closed-form solution due to the special properties of the nonconvex surrogate functions. We also extend IRNN to solve the nonconvex problem with two or more blocks of variables. In theory, we prove that IRNN decreases the objective function value monotonically and that any limit point is a stationary point. Extensive experiments on both synthetic data and real images demonstrate that IRNN enhances low-rank matrix recovery compared with state-of-the-art convex algorithms.
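
    A minimal sketch of the weighted singular value thresholding step at the heart of IRNN, using a log-style surrogate to generate the weights; the denoising setup, λ, and ε are hypothetical rather than taken from the paper.

```python
import numpy as np

def weighted_svt(Y, lam, w):
    """Weighted singular value thresholding: shrink sigma_i by lam * w_i.
    Closed form when the weights are non-decreasing in i, which the
    nonconvex surrogates guarantee (smaller sigma -> larger weight)."""
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    return U @ np.diag(np.maximum(s - lam * w, 0.0)) @ Vt

rng = np.random.default_rng(1)
L = rng.standard_normal((50, 3)) @ rng.standard_normal((3, 50))  # rank 3
Y = L + 0.1 * rng.standard_normal((50, 50))                      # noisy observation

X = Y.copy()
for _ in range(10):
    s = np.linalg.svd(X, compute_uv=False)
    w = 1.0 / (s + 1e-2)          # reweighting from a log-surrogate gradient
    X = weighted_svt(Y, lam=0.5, w=w)
print(np.linalg.matrix_rank(X, tol=1e-6))
```

    Each pass refreshes the weights from the previous iterate, so small (noise) singular values are shrunk ever harder while the dominant ones are barely touched.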

  11. Improving IMRT delivery efficiency with reweighted L1-minimization for inverse planning

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, Hojin; Becker, Stephen; Lee, Rena

    2013-07-15

    Purpose: This study presents an improved technique to further simplify the fluence-map in intensity modulated radiation therapy (IMRT) inverse planning, thereby reducing plan complexity and improving delivery efficiency, while maintaining the plan quality. Methods: First-order total-variation (TV) minimization (min.) based on the L1-norm has been proposed to reduce the complexity of the fluence-map in IMRT by generating sparse fluence-map variations. However, with stronger dose sparing to the critical structures, the inevitable increase in fluence-map complexity can lead to inefficient dose delivery. Theoretically, L0-min. is the ideal solution to the sparse signal recovery problem, yet it is practically intractable due to the nonconvexity of the objective function. As an alternative, the authors use the iteratively reweighted L1-min. technique to incorporate the benefits of the L0-norm into the tractability of L1-min. The weight multiplied to each element is inversely related to the magnitude of the corresponding element, and is iteratively updated by the reweighting process. The proposed penalizing process combined with TV min. further improves sparsity in the fluence-map variations, hence ultimately enhancing the delivery efficiency. To validate the proposed method, this work compares three treatment plans obtained from quadratic min. (generally used in clinical IMRT), conventional TV min., and the proposed reweighted TV min. techniques, implemented by a large-scale L1-solver (template for first-order conic solver), for five patients' clinical data. Criteria such as conformation number (CN), modulation index (MI), and estimated treatment time are employed to assess the relationship between plan quality and delivery efficiency. Results: The proposed method yields simpler fluence-maps than the quadratic and conventional TV based techniques. To attain a given CN and dose sparing to the critical organs for the 5 clinical cases, the proposed method reduces the number of segments by 10-15 and 30-35, relative to the TV min. and quadratic min. based plans, while MIs decrease by about 20%-30% and 40%-60% over the plans by the two existing techniques, respectively. Under such conditions, the total treatment time of the plans obtained from the proposed method can be reduced by 12-30 s and 30-80 s, mainly due to much shorter multileaf collimator (MLC) traveling time in IMRT step-and-shoot delivery. Conclusions: The reweighted L1-minimization technique provides a promising solution for simplifying the fluence-map variations in IMRT inverse planning. It improves delivery efficiency by reducing the number of segments and the treatment time, while maintaining plan quality in terms of target conformity and critical structure sparing.

  12. Trace Norm Regularized CANDECOMP/PARAFAC Decomposition With Missing Data.

    PubMed

    Liu, Yuanyuan; Shang, Fanhua; Jiao, Licheng; Cheng, James; Cheng, Hong

    2015-11-01

    In recent years, low-rank tensor completion (LRTC) problems have received a significant amount of attention in computer vision, data mining, and signal processing. The existing trace norm minimization algorithms for iteratively solving LRTC problems involve multiple singular value decompositions of very large matrices at each iteration. Therefore, they suffer from high computational cost. In this paper, we propose a novel trace norm regularized CANDECOMP/PARAFAC decomposition (TNCP) method for simultaneous tensor decomposition and completion. We first formulate a factor matrix rank minimization model by deducing the relation between the rank of each factor matrix and the mode-n rank of a tensor. Then, we introduce a tractable relaxation of our rank function, and then achieve a convex combination problem of much smaller-scale matrix trace norm minimization. Finally, we develop an efficient algorithm based on alternating direction method of multipliers to solve our problem. The promising experimental results on synthetic and real-world data validate the effectiveness of our TNCP method. Moreover, TNCP is significantly faster than the state-of-the-art methods and scales to larger problems.

  13. Estimates of the absolute error and a scheme for an approximate solution to scheduling problems

    NASA Astrophysics Data System (ADS)

    Lazarev, A. A.

    2009-02-01

    An approach is proposed for estimating absolute errors and finding approximate solutions to classical NP-hard scheduling problems, such as minimizing the maximum lateness on one or several machines and minimizing the makespan. The concept of a metric (distance) between instances of the problem is introduced. The idea behind the approach is, given a problem instance, to construct another instance, at the minimum distance from the initial one in the introduced metric, for which an optimal or approximate solution can be found. Instead of solving the original problem (instance), a set of approximating polynomially/pseudopolynomially solvable problems (instances) is considered, an instance at the minimum distance from the given one is chosen, and the resulting schedule is then applied to the original instance.

  14. The Variation Theorem Applied to H-2+: A Simple Quantum Chemistry Computer Project

    ERIC Educational Resources Information Center

    Robiette, Alan G.

    1975-01-01

    Describes a student project which requires limited knowledge of Fortran and only minimal computing resources. The results illustrate such important principles of quantum mechanics as the variation theorem and the virial theorem. Presents sample calculations and the subprogram for energy calculations. (GS)

  15. Optimal Tuner Selection for Kalman-Filter-Based Aircraft Engine Performance Estimation

    NASA Technical Reports Server (NTRS)

    Simon, Donald L.; Garg, Sanjay

    2011-01-01

    An emerging approach in the field of aircraft engine controls and system health management is the inclusion of real-time, onboard models for the in-flight estimation of engine performance variations. This technology, typically based on Kalman-filter concepts, enables the estimation of unmeasured engine performance parameters that can be directly utilized by controls, prognostics, and health-management applications. A challenge that complicates this practice is the fact that an aircraft engine's performance is affected by its level of degradation, generally described in terms of unmeasurable health parameters such as efficiencies and flow capacities related to each major engine module. Through Kalman-filter-based estimation techniques, the level of engine performance degradation can be estimated, given that there are at least as many sensors as health parameters to be estimated. However, in an aircraft engine, the number of sensors available is typically less than the number of health parameters, presenting an under-determined estimation problem. A common approach to address this shortcoming is to estimate a subset of the health parameters, referred to as model tuning parameters. The objective is then to optimally select the model tuning parameters to minimize Kalman-filter-based estimation error. A tuner selection technique has been developed that specifically addresses the under-determined estimation problem, where there are more unknown parameters than available sensor measurements. A systematic approach is applied to produce a model tuning parameter vector of appropriate dimension to enable estimation by a Kalman filter, while minimizing the estimation error in the parameters of interest. Tuning parameter selection is performed using a multi-variable iterative search routine that seeks to minimize the theoretical mean-squared estimation error of the Kalman filter. This approach can significantly reduce the error in onboard aircraft engine parameter estimation applications such as model-based diagnostics, controls, and life usage calculations. The advantage of the innovation is the significant reduction in estimation errors that it can provide relative to the conventional approach of selecting a subset of health parameters to serve as the model tuning parameter vector. Because this technique needs only to be performed during the system design process, it places no additional computation burden on the onboard Kalman filter implementation. The technique has been developed for aircraft engine onboard estimation applications, as this application typically presents an under-determined estimation problem. However, this generic technique could be applied to other industries using gas turbine engine technology.

  16. Geometric versus numerical optimal control of a dissipative spin-(1/2) particle

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lapert, M.; Sugny, D.; Zhang, Y.

    2010-12-15

    We analyze the saturation of a nuclear magnetic resonance (NMR) signal using optimal magnetic fields. We consider both the problem of minimizing the duration of the control and that of minimizing its energy for a fixed duration. We solve the optimal control problems by using geometric methods and a purely numerical approach, the GRAPE algorithm, the two methods being based on the application of the Pontryagin maximum principle. A very good agreement is obtained between the two results. The optimal solutions for the energy-minimization problem are finally implemented experimentally with available NMR techniques.

  17. Polymers on disordered trees, spin glasses, and traveling waves

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Derrida, B.; Spohn, H.

    We show that the problem of a directed polymer on a tree with disorder can be reduced to the study of nonlinear equations of reaction-diffusion type. These equations admit traveling wave solutions that move at all possible speeds above a certain minimal speed. The speed of the wavefront is the free energy of the polymer problem and the minimal speed corresponds to a phase transition to a glassy phase similar to the spin-glass phase. Several properties of the polymer problem can be extracted from the correspondence with the traveling wave: probability distribution of the free energy, overlaps, etc.

  18. Equilibrium of fluid membranes endowed with orientational order

    NASA Astrophysics Data System (ADS)

    Kumar Alageshan, Jaya; Chakrabarti, Buddhapriya; Hatwalne, Yashodhan

    2017-04-01

    Minimization of the low-temperature elastic free-energy functional of orientationally ordered membranes involves independent variation of the membrane shape while keeping the orientational order on it (its texture) fixed. We propose an operational, coordinate-independent method for implementing such a variation. Using the Nelson-Peliti formulation of elasticity, which emphasizes the interplay between geometry, topology, and thermal fluctuations of orientationally ordered membranes, we minimize the elastic free energy to obtain equations governing their equilibrium shape, together with the associated free boundary conditions. Our results are essential for understanding and predicting equilibrium shapes as well as textures of membranes and vesicles, particularly under conditions in which shape deformations are large.

  19. Regional variation in water-related impacts of shale gas development and implications for emerging international plays.

    PubMed

    Mauter, Meagan S; Alvarez, Pedro J J; Burton, Allen; Cafaro, Diego C; Chen, Wei; Gregory, Kelvin B; Jiang, Guibin; Li, Qilin; Pittock, Jamie; Reible, Danny; Schnoor, Jerald L

    2014-01-01

    The unconventional fossil fuel industry is expected to expand dramatically in coming decades as conventional reserves wane. Minimizing the environmental impacts of this energy transition requires a contextualized understanding of the unique regional issues that shale gas development poses. This manuscript highlights the variation in regional water issues associated with shale gas development in the U.S. and the approaches of various states in mitigating these impacts. The manuscript also explores opportunities for emerging international shale plays to leverage the diverse experiences of U.S. states in formulating development strategies that minimize water-related impacts within their environmental, cultural, and political ecosystem.

  20. Deterministic methods for multi-control fuel loading optimization

    NASA Astrophysics Data System (ADS)

    Rahman, Fariz B. Abdul

    We have developed a multi-control fuel loading optimization code for pressurized water reactors based on deterministic methods. The objective is to flatten the fuel burnup profile, which maximizes overall energy production. The optimal control problem is formulated using the method of Lagrange multipliers and the direct adjoining approach for treatment of the inequality power peaking constraint. The optimality conditions are derived for a multi-dimensional multi-group optimal control problem via calculus of variations. Due to the Hamiltonian having a linear control, our optimal control problem is solved using the gradient method to minimize the Hamiltonian and a Newton step formulation to obtain the optimal control. We are able to satisfy the power peaking constraint during depletion with the control at beginning of cycle (BOC) by building the proper burnup path forward in time and utilizing the adjoint burnup to propagate the information back to the BOC. Our test results show that we are able to achieve our objective and satisfy the power peaking constraint during depletion using either the fissile enrichment or burnable poison as the control. Our fuel loading designs show an increase of 7.8 equivalent full power days (EFPDs) in cycle length compared with 517.4 EFPDs for the AP600 first cycle.

  1. The inverse problem of the calculus of variations for discrete systems

    NASA Astrophysics Data System (ADS)

    Barbero-Liñán, María; Farré Puiggalí, Marta; Ferraro, Sebastián; Martín de Diego, David

    2018-05-01

    We develop a geometric version of the inverse problem of the calculus of variations for discrete mechanics and constrained discrete mechanics. The geometric approach consists of using suitable Lagrangian and isotropic submanifolds. We also provide a transition between the discrete and the continuous problems and propose variationality as an interesting geometric property to take into account in the design and computer simulation of numerical integrators for constrained systems. For instance, nonholonomic mechanics is generally non-variational, but some special cases admit an alternative variational description. We apply some standard nonholonomic integrators to such an example to study which ones conserve this property.

  2. Model and algorithm for container ship stowage planning based on bin-packing problem

    NASA Astrophysics Data System (ADS)

    Zhang, Wei-Ying; Lin, Yan; Ji, Zhuo-Shang

    2005-09-01

    In the general case, a container ship serves many different ports on each voyage, so a stowage plan made at one port must take account of its influence on subsequent ports. The complexity of the stowage planning problem therefore increases with its multi-port nature; the problem is NP-hard. In order to reduce the computational complexity, the problem is decomposed into two sub-problems in this paper. First, the container ship stowage problem (CSSP) is regarded as a packing problem: ship bays on board the vessel are regarded as bins, the number of slots in each bay is taken as the bin capacity, and containers with different characteristics (homogeneous container groups) are treated as the items to be packed. At this stage there are two objective functions: one is to minimize the number of bays occupied by containers and the other is to minimize the number of overstows. Second, the containers assigned to each bay in the first stage are allocated to specific slots; the objectives are to minimize the metacentric height, heel, and overstows. A tabu search heuristic is used to solve the subproblems. The main focus of this paper is on the first subproblem. A case study demonstrates the feasibility of the model and algorithm.
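
    As a sketch of the first-stage "packing" view only (the paper's own solver is tabu search), the snippet below assigns homogeneous container groups to bays by first-fit-decreasing; overstow counting and slot geometry are ignored, and the group sizes are hypothetical.

```python
def pack_bays(groups, bay_capacity):
    """First-stage CSSP heuristic: pack homogeneous container groups into
    ship bays (bins) with first-fit-decreasing to use few bays."""
    bays = []   # each bay: [remaining_slots, [group names]]
    for name, size in sorted(groups.items(), key=lambda kv: -kv[1]):
        for bay in bays:
            if bay[0] >= size:          # first bay with enough free slots
                bay[0] -= size
                bay[1].append(name)
                break
        else:                           # no bay fits: open a new one
            bays.append([bay_capacity - size, [name]])
    return bays

# Hypothetical groups: (destination port, type) -> number of containers.
groups = {"P1-dry": 18, "P1-reefer": 6, "P2-dry": 14, "P3-dry": 9, "P3-reefer": 5}
for i, (free, members) in enumerate(pack_bays(groups, bay_capacity=24)):
    print(f"bay {i}: {members}, free slots {free}")
```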

  3. System identification using Nuclear Norm & Tabu Search optimization

    NASA Astrophysics Data System (ADS)

    Ahmed, Asif A.; Schoen, Marco P.; Bosworth, Ken W.

    2018-01-01

    In recent years, subspace System Identification (SI) algorithms have seen increased research, stemming from advanced minimization methods being applied to the Nuclear Norm (NN) approach in system identification. These minimization algorithms are based on hard computing methodologies. To the authors’ knowledge, no work has yet been reported that utilizes soft computing algorithms to address the minimization problem within the nuclear norm SI framework. A linear, time-invariant, discrete-time system is used in this work as the basic model for characterizing a dynamical system to be identified. The main objective is to extract a mathematical model from collected experimental input-output data. Hankel matrices are constructed from experimental data, and the extended observability matrix is employed to define an estimated output of the system. This estimated output and the actual (measured) output are utilized to construct a minimization problem. An embedded rank measure assures minimum state realization outcomes. Current NN-SI algorithms employ hard computing algorithms for minimization. In this work, we propose a simple Tabu Search (TS) algorithm for minimization. The TS-based SI is compared with the iterative Alternating Direction Method of Multipliers (ADMM) line-search optimization based NN-SI. For comparison, several different benchmark system identification problems are solved by both approaches. Results show improved performance of the proposed SI-TS algorithm compared to the NN-SI ADMM algorithm.
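
    A minimal sketch of the Hankel-matrix step the entry describes: build a block Hankel matrix from output data and read a model-order estimate off its singular values. The second-order test system is hypothetical, and neither the nuclear norm relaxation nor the Tabu Search minimization is shown.

```python
import numpy as np

def block_hankel(y, rows):
    """Block Hankel matrix of a 1-D output sequence, as used to expose
    the range of the extended observability matrix in subspace SI."""
    cols = len(y) - rows + 1
    return np.array([y[i:i + cols] for i in range(rows)])

# Hypothetical experiment: impulse response of a stable 2nd-order system,
# y[k] = 1.5*y[k-1] - 0.7*y[k-2], with a unit impulse applied at k = 0.
y = np.zeros(100)
y[0] = 1.0
for k in range(1, 100):
    y[k] = 1.5 * y[k - 1] - 0.7 * y[k - 2]

H = block_hankel(y, rows=10)
s = np.linalg.svd(H, compute_uv=False)
print(np.round(s[:5], 3))   # two dominant singular values -> order 2
```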

  4. Optimal leveling of flow over one-dimensional topography by Marangoni stresses

    NASA Astrophysics Data System (ADS)

    Gramlich, C. M.; Kalliadasis, Serafim; Homsy, G. M.; Messer, C.

    2002-06-01

    A thin viscous film flowing over a step down in topography exhibits a capillary ridge preceding the step. In applications, a planar liquid surface is often desired and hence there is a need to level the ridge. This paper investigates optimal leveling of the ridge by means of a Marangoni stress such as might be produced by a localized heater creating temperature variations at the film surface. The differential equation for the free surface based on lubrication theory and incorporating the effects of topography and temperature gradients is solved numerically for steps down in topography with different temperature profiles. Both rectangular "top-hat" and parabolic profiles, chosen to model physically realizable heaters, were found to be effective in reducing the height of the capillary ridge. Leveling the ridge is formulated as an optimization problem to minimize the maximum free-surface height by varying the heater strength, position, and width. With the optimized heaters, the variation in surface height is reduced by more than 50% compared to the original isothermal ridge. For more effective leveling, we consider an asymmetric n-step temperature distribution. The optimal n-step heater in this case results in (n+1) ridges of equal size; 2- and 3-step heaters reduce the variation in surface height by about 70% and 77%, respectively. Finally, we explore the potential of coolers and step temperature profiles for still more effective leveling.

  5. Global Survey Method for the World Network of Neutron Monitors

    NASA Astrophysics Data System (ADS)

    Belov, A. V.; Eroshenko, E. A.; Yanke, V. G.; Oleneva, V. A.; Abunina, M. A.; Abunin, A. A.

    2018-05-01

    One of the variants of the global survey method developed and used for many years at the Institute of Terrestrial Magnetism, Ionosphere, and Radio Wave Propagation of the Russian Academy of Sciences is described. Data from the world network of neutron monitors for every hour from July 1957 to the present has been processed by this method. A consistent continuous series of hourly characteristics of variation of the density and vector anisotropy of cosmic rays with a rigidity of 10 GV is obtained. A database of Forbush decreases in galactic cosmic rays caused by large-scale disturbances of the interplanetary medium for more than half a century has been created based on this series. The capabilities of the database make it possible to perform a correlation analysis of various parameters of the space environment (characteristics of the Sun, solar wind, and interplanetary magnetic field) with the parameters of cosmic rays and to study their interrelationships in the solar-terrestrial space. The features of reception coefficients for different stations are considered, which allows the transition from variations according to ground measurements to variations of primary cosmic rays. The advantages and disadvantages of this variant of the global survey method and the opportunities for its development and improvement are assessed. The developed method makes it possible to minimize the problems of the network of neutron monitors and to make significant use of its advantages.

  6. The role of ensemble-based statistics in variational assimilation of cloud-affected observations from infrared imagers

    NASA Astrophysics Data System (ADS)

    Hacker, Joshua; Vandenberghe, Francois; Jung, Byoung-Jo; Snyder, Chris

    2017-04-01

    Effective assimilation of cloud-affected radiance observations from space-borne imagers, with the aim of improving cloud analysis and forecasting, has proven to be difficult. Large observation biases, nonlinear observation operators, and non-Gaussian innovation statistics present many challenges. Ensemble-variational data assimilation (EnVar) systems offer the benefits of flow-dependent background error statistics from an ensemble, and the ability of variational minimization to handle nonlinearity. The specific benefits of ensemble statistics, relative to static background errors more commonly used in variational systems, have not been quantified for the problem of assimilating cloudy radiances. A simple experiment framework is constructed with a regional NWP model and operational variational data assimilation system, to provide a basis for understanding the importance of ensemble statistics in cloudy radiance assimilation. Restricting the observations to those corresponding to clouds in the background forecast leads to innovations that are more Gaussian. The number of large innovations is reduced compared to the more general case of all observations, but not eliminated. The Huber norm is investigated to handle the fat tails of the distributions, and allow more observations to be assimilated without the need for strict background checks that eliminate them. Comparing assimilation using only ensemble background error statistics with assimilation using only static background error statistics elucidates the importance of the ensemble statistics. Although the cost functions in both experiments converge to similar values after sufficient outer-loop iterations, the resulting cloud water, ice, and snow content are greater in the ensemble-based analysis. The subsequent forecasts from the ensemble-based analysis also retain more condensed water species, indicating that the local environment is more supportive of clouds. In this presentation we provide details that explain the apparent benefit from using ensembles for cloudy radiance assimilation in an EnVar context.
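
    For reference, the Huber norm mentioned above is, in its standard form (with δ the transition threshold; the notation is generic, not the presentation's),

```latex
\rho_\delta(r) =
\begin{cases}
  \frac{1}{2}\, r^2,                      & |r| \le \delta, \\
  \delta\, |r| - \frac{1}{2}\, \delta^2,  & |r| > \delta .
\end{cases}
```

    Because the cost grows only linearly for large innovations, outlying cloudy-radiance departures are down-weighted in the minimization rather than discarded outright by a strict background check.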

  7. Genetic similarity between Taenia solium cysticerci collected from the two distant endemic areas in North and North East India.

    PubMed

    Sharma, Monika; Devi, Kangjam Rekha; Sehgal, Rakesh; Narain, Kanwar; Mahanta, Jagadish; Malla, Nancy

    2014-01-01

    Taenia solium taeniasis/cysticercosis is a major public health problem in developing countries. This study reports a genotypic analysis of T. solium cysticerci collected from two different endemic areas of North (Chandigarh) and North East India (Dibrugarh) by sequencing of the mitochondrial cytochrome c oxidase subunit 1 (cox1) gene. The variation in cox1 sequences of samples collected from these two geographical regions, located at a distance of 2585 km, was minimal. Alignment of the nucleotide sequences with different species of Taenia showed similarity with the Asian genotype of T. solium. Among 50 isolates, 6 variant nucleotide positions (0.37% of the total length) were detected. These results suggest that the populations in these geographical areas are homogeneous. Copyright © 2013 Elsevier B.V. All rights reserved.

  8. A niching genetic algorithm applied to optimize a SiC-bulk crystal growth system

    NASA Astrophysics Data System (ADS)

    Su, Juan; Chen, Xuejiang; Li, Yuan; Pons, Michel; Blanquet, Elisabeth

    2017-06-01

    A niching genetic algorithm (NGA) was presented to optimize a SiC bulk crystal growth system based on physical vapor transport (PVT). The NGA, based on a clearing mechanism, and its combination with a heat transfer model for SiC crystal growth were described in detail. Three inverse problems for optimizing the growth system were then carried out with the NGA. First, the radius of the blind hole was optimized to decrease the radial temperature gradient along the substrate while the center temperature on the substrate surface was fixed at 2500 K. Second, insulation materials with anisotropic thermal conductivities were selected to obtain growth rates as high as 600, 800, and 1000 μm/h. Finally, the density of the coils was rearranged to minimize the temperature variation in the SiC powder. All the results were analyzed and discussed.

  9. An optical fiber spool for laser stabilization with reduced acceleration sensitivity to 10⁻¹²/g

    NASA Astrophysics Data System (ADS)

    Hu, Yong-Qi; Dong, Jing; Huang, Jun-Chao; Li, Tang; Liu, Liang

    2015-10-01

    Environmental vibration causes mechanical deformation in optical fibers, which induces excess frequency noise in fiber-stabilized lasers. To solve this problem, we propose an ultralow-acceleration-sensitivity fiber spool with a symmetrically mounted structure. By numerical analysis with the finite element method, we obtain the optimal geometry parameters of the spool, with which the horizontal and vertical acceleration sensitivities can be reduced to 3.25 × 10⁻¹²/g and 5.38 × 10⁻¹²/g, respectively. Moreover, the structure is insensitive to variations in the geometry parameters, which minimizes the influence of numerical simulation error and manufacturing tolerance. Project supported by the National Natural Science Foundation of China (Grant Nos. 11034008 and 11274324) and the Key Research Program of the Chinese Academy of Sciences (Grant No. KJZD-EW-W02).

  10. W-MAC: A Workload-Aware MAC Protocol for Heterogeneous Convergecast in Wireless Sensor Networks

    PubMed Central

    Xia, Ming; Dong, Yabo; Lu, Dongming

    2011-01-01

    The power consumption and latency of existing MAC protocols for wireless sensor networks (WSNs) are high in heterogeneous convergecast, where each sensor node generates different amounts of data in one convergecast operation. To solve this problem, we present W-MAC, a workload-aware MAC protocol for heterogeneous convergecast in WSNs. A subtree-based iterative cascading scheduling mechanism and a workload-aware time slice allocation mechanism are proposed to minimize the power consumption of nodes, while offering a low data latency. In addition, an efficient schedule adjustment mechanism is provided for adapting to data traffic variation and network topology change. Analytical and simulation results show that the proposed protocol provides a significant energy saving and latency reduction in heterogeneous convergecast, and can effectively support data aggregation to further improve the performance. PMID:22163753

  11. The existence results and Tikhonov regularization method for generalized mixed variational inequalities in Banach spaces

    NASA Astrophysics Data System (ADS)

    Wang, Min

    2017-06-01

    This paper aims to establish the Tikhonov regularization method for generalized mixed variational inequalities in Banach spaces. For this purpose, we first prove a very general existence result for generalized mixed variational inequalities, provided that the mapping involved has the so-called mixed variational inequality property and satisfies a rather weak coercivity condition. We then establish the Tikhonov regularization method for generalized mixed variational inequalities. Our findings extend the results for the generalized variational inequality problem (for short, GVIP(F, K)) in R^n spaces (He in Abstr Appl Anal, 2012) to the generalized mixed variational inequality problem (for short, GMVIP(F, φ, K)) in reflexive Banach spaces. On the other hand, we generalize the corresponding results for the generalized mixed variational inequality problem in R^n spaces (Fu and He in J Sichuan Norm Univ (Nat Sci) 37:12-17, 2014) to reflexive Banach spaces.

  12. Principal Eigenvalue Minimization for an Elliptic Problem with Indefinite Weight and Robin Boundary Conditions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hintermueller, M., E-mail: hint@math.hu-berlin.de; Kao, C.-Y., E-mail: Ckao@claremontmckenna.edu; Laurain, A., E-mail: laurain@math.hu-berlin.de

    2012-02-15

    This paper focuses on the study of a linear eigenvalue problem with indefinite weight and Robin type boundary conditions. We investigate the minimization of the positive principal eigenvalue under the constraint that the absolute value of the weight is bounded and the total weight is a fixed negative constant. Biologically, this minimization problem is motivated by the question of determining the optimal spatial arrangement of favorable and unfavorable regions for a species to survive. For rectangular domains with Neumann boundary condition, it is known that there exists a threshold value such that if the total weight is below this threshold value then the optimal favorable region is like a section of a disk at one of the four corners; otherwise, the optimal favorable region is a strip attached to the shorter side of the rectangle. Here, we investigate the same problem with mixed Robin-Neumann type boundary conditions and study how this boundary condition affects the optimal spatial arrangement.

  13. A two-stage path planning approach for multiple car-like robots based on PH curves and a modified harmony search algorithm

    NASA Astrophysics Data System (ADS)

    Zeng, Wenhui; Yi, Jin; Rao, Xiao; Zheng, Yun

    2017-11-01

    In this article, collision-avoidance path planning for multiple car-like robots with variable motion is formulated as a two-stage objective optimization problem minimizing both the total length of all paths and the task's completion time. Accordingly, a new approach based on Pythagorean Hodograph (PH) curves and a modified harmony search algorithm is proposed to solve the two-stage path-planning problem subject to kinematic constraints such as velocity, acceleration, and minimum turning radius. First, a method of path planning based on PH curves for a single robot is proposed. Second, a mathematical model of the two-stage path-planning problem for multiple car-like robots with variable motion subject to kinematic constraints is constructed, in which the first stage minimizes the total length of all paths and the second stage minimizes the task's completion time. Finally, a modified harmony search algorithm is applied to solve the two-stage optimization problem. A set of experiments demonstrates the effectiveness of the proposed approach.

  14. Low-dose CT reconstruction via L1 dictionary learning regularization using iteratively reweighted least-squares.

    PubMed

    Zhang, Cheng; Zhang, Tao; Li, Ming; Peng, Chengtao; Liu, Zhaobang; Zheng, Jian

    2016-06-18

    In order to reduce the radiation dose of CT (computed tomography), compressed sensing theory has been a hot topic since it provides the possibility of high-quality recovery from sparsely sampled data. Recently, algorithms based on DL (dictionary learning) were developed to deal with the sparse CT reconstruction problem. However, the existing DL algorithm focuses on the minimization problem with an L2-norm regularization term, which leads to deteriorating reconstruction quality as the sampling rate declines further. It is therefore essential to improve the DL method to meet the demand for further dose reduction. In this paper, we replace the L2-norm regularization term with an L1-norm one. It is expected that the proposed L1-DL method can alleviate the over-smoothing effect of the L2-minimization and preserve more image detail. The proposed algorithm solves the L1-minimization problem by a weighting strategy, solving the new weighted L2-minimization problem based on IRLS (iteratively reweighted least squares). Through numerical simulation, the proposed algorithm is compared with an existing DL method (adaptive dictionary based statistical iterative reconstruction, ADSIR) and two other typical compressed sensing algorithms. The proposed algorithm is found to be more accurate than the other algorithms, especially when the sampling rate is reduced further or the noise increased. The proposed L1-DL algorithm can utilize more prior information on image sparsity than ADSIR: by transforming the L2-norm regularization term of ADSIR into an L1-norm one and solving the L1-minimization problem by the IRLS strategy, L1-DL reconstructs the image more exactly.
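
    A minimal sketch of the IRLS strategy the entry describes, applied to a generic L1-regularized least-squares problem rather than the CT dictionary-learning model: majorizing |x_i| by x_i²/(2c_i) + c_i/2 with c_i = |x_i| + ε from the previous iterate turns each iteration into a weighted L2 solve. All problem data below are synthetic.

```python
import numpy as np

def irls_l1(A, b, lam=0.1, iters=30, eps=1e-6):
    """min 0.5||Ax - b||^2 + lam*||x||_1 via IRLS: each |x_i| is majorized
    by a quadratic around the previous iterate, giving a weighted L2
    problem with a closed-form normal-equations solution."""
    x = np.linalg.lstsq(A, b, rcond=None)[0]   # least-norm initialization
    for _ in range(iters):
        W = np.diag(lam / (np.abs(x) + eps))   # reweighted quadratic penalty
        x = np.linalg.solve(A.T @ A + W, A.T @ b)
    return x

rng = np.random.default_rng(2)
A = rng.standard_normal((60, 100))
x_true = np.zeros(100)
x_true[[5, 40, 77]] = [2.0, -1.5, 3.0]
b = A @ x_true + 0.01 * rng.standard_normal(60)
print(np.flatnonzero(np.abs(irls_l1(A, b)) > 0.1))  # expected support: 5, 40, 77
```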

  15. Variational estimate method for solving autonomous ordinary differential equations

    NASA Astrophysics Data System (ADS)

    Mungkasi, Sudi

    2018-04-01

    In this paper, we propose a method for solving first-order autonomous ordinary differential equation problems using a variational estimate formulation. The variational estimate is constructed with a Lagrange multiplier which is chosen optimally, so that the formulation leads to an accurate solution to the problem. The variational estimate is an integral form, which can be computed using computer software. As the variational estimate is an explicit formula, the solution is easy to compute. This is a great advantage of the variational estimate formulation.
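
    The abstract does not spell out the formula, but Lagrange-multiplier variational estimates for u'(t) = f(u(t)), u(0) = u_0 are typically built as a correction functional in the spirit of the variational iteration method (a plausible reading, not a quotation of the paper):

        u_{n+1}(t) = u_n(t) + \int_0^t \lambda(s) \big[ u_n'(s) - f(u_n(s)) \big] \, ds,

    with \lambda chosen to make the estimate stationary (for a first-order problem the simplest optimal choice is \lambda \equiv -1), giving an explicit integral formula that is straightforward to evaluate in software.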

  16. Optimal trajectories for an aerospace plane. Part 2: Data, tables, and graphs

    NASA Technical Reports Server (NTRS)

    Miele, Angelo; Lee, W. Y.; Wu, G. D.

    1990-01-01

    Data, tables, and graphs relative to the optimal trajectories for an aerospace plane are presented. A single-stage-to-orbit (SSTO) configuration is considered, and the transition from low supersonic speeds to orbital speeds is studied for a single aerodynamic model (GHAME) and three engine models. Four optimization problems are solved using the sequential gradient-restoration algorithm for optimal control problems: (1) minimization of the weight of fuel consumed; (2) minimization of the peak dynamic pressure; (3) minimization of the peak heating rate; and (4) minimization of the peak tangential acceleration. The above optimization studies are carried out for different combinations of constraints, specifically: initial path inclination that is either free or given; dynamic pressure that is either free or bounded; and tangential acceleration that is either free or bounded.

  17. Minimal perceptrons for memorizing complex patterns

    NASA Astrophysics Data System (ADS)

    Pastor, Marissa; Song, Juyong; Hoang, Danh-Tai; Jo, Junghyo

    2016-11-01

    Feedforward neural networks have been investigated to understand learning and memory, as well as applied to numerous practical problems in pattern classification. It is a rule of thumb that more complex tasks require larger networks. However, the design of optimal network architectures for specific tasks is still an unsolved fundamental problem. In this study, we consider three-layered neural networks for memorizing binary patterns. We developed a new complexity measure of binary patterns, and estimated the minimal network size for memorizing them as a function of their complexity. We formulated the minimal network size for regular, random, and complex patterns. In particular, the minimal size for complex patterns, which are neither ordered nor disordered, was predicted by measuring their Hamming distances from known ordered patterns. Our predictions agree with simulations based on the back-propagation algorithm.

  18. Sensitivity computation of the ell1 minimization problem and its application to dictionary design of ill-posed problems

    NASA Astrophysics Data System (ADS)

    Horesh, L.; Haber, E.

    2009-09-01

    The ell1 minimization problem has been studied extensively in the past few years. Recently, there has been a growing interest in its application for inverse problems. Most studies have concentrated on devising ways for sparse representation of a solution using a given prototype dictionary. Very few studies have addressed the more challenging problem of optimal dictionary construction, and even these were primarily devoted to the simplistic sparse coding application. In this paper, sensitivity analysis of the inverse solution with respect to the dictionary is presented. This analysis reveals some of the salient features and intrinsic difficulties which are associated with the dictionary design problem. Equipped with these insights, we propose an optimization strategy that alleviates these hurdles while utilizing the derived sensitivity relations for the design of a locally optimal dictionary. Our optimality criterion is based on local minimization of the Bayesian risk, given a set of training models. We present a mathematical formulation and an algorithmic framework to achieve this goal. The proposed framework offers the design of dictionaries for inverse problems that incorporate non-trivial, non-injective observation operators, where the data and the recovered parameters may reside in different spaces. We test our algorithm and show that it yields improved dictionaries for a diverse set of inverse problems in geophysics and medical imaging.

  19. Generalized vector calculus on convex domain

    NASA Astrophysics Data System (ADS)

    Agrawal, Om P.; Xu, Yufeng

    2015-06-01

    In this paper, we apply recently proposed generalized integral and differential operators to develop generalized vector calculus and generalized variational calculus for problems defined over a convex domain. In particular, we present some generalization of Green's and Gauss divergence theorems involving some new operators, and apply these theorems to generalized variational calculus. For fractional power kernels, the formulation leads to fractional vector calculus and fractional variational calculus for problems defined over a convex domain. In special cases, when certain parameters take integer values, we obtain formulations for integer order problems. Two examples are presented to demonstrate applications of the generalized variational calculus which utilize the generalized vector calculus developed in the paper. The first example leads to a generalized partial differential equation and the second example leads to a generalized eigenvalue problem, both in two dimensional convex domains. We solve the generalized partial differential equation by using polynomial approximation. A special case of the second example is a generalized isoperimetric problem. We find an approximate solution to this problem. Many physical problems containing integer order integrals and derivatives are defined over arbitrary domains. We speculate that future problems containing fractional and generalized integrals and derivatives in fractional mechanics will be defined over arbitrary domains, and therefore, a general variational calculus incorporating a general vector calculus will be needed for these problems. This research is our first attempt in that direction.

  20. Preschool Language Variation, Growth, and Predictors in Children on the Autism Spectrum

    ERIC Educational Resources Information Center

    Ellis Weismer, Susan; Kover, Sara T.

    2015-01-01

    Background: There is wide variation in language abilities among young children with autism spectrum disorders (ASD), with some toddlers developing age-appropriate language while others remain minimally verbal after age 5. Conflicting findings exist regarding predictors of language outcomes in ASD and various methodological issues limit the…

  1. Cardiac C-arm computed tomography using a 3D + time ROI reconstruction method with spatial and temporal regularization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mory, Cyril, E-mail: cyril.mory@philips.com; Philips Research Medisys, 33 rue de Verdun, 92156 Suresnes; Auvray, Vincent

    2014-02-15

    Purpose: Reconstruction of the beating heart in 3D + time in the catheter laboratory using only the available C-arm system would improve diagnosis, guidance, device sizing, and outcome control for intracardiac interventions, e.g., electrophysiology, valvular disease treatment, structural or congenital heart disease. To obtain such a reconstruction, the patient's electrocardiogram (ECG) must be recorded during the acquisition and used in the reconstruction. In this paper, the authors present a 4D reconstruction method aiming to reconstruct the heart from a single sweep 10 s acquisition. Methods: The authors introduce the 4D RecOnstructiOn using Spatial and TEmporal Regularization (in short, 4D ROOSTER) method, which reconstructs all cardiac phases at once, as a 3D + time volume. The algorithm alternates between a reconstruction step based on conjugate gradient and four regularization steps: enforcing positivity, averaging along time outside a motion mask that contains the heart and vessels, 3D spatial total variation minimization, and 1D temporal total variation minimization. Results: 4D ROOSTER recovers the different temporal representations of a moving Shepp and Logan phantom, and outperforms both ECG-gated simultaneous algebraic reconstruction technique and prior image constrained compressed sensing on a clinical case. It generates 3D + time reconstructions with sharp edges which can be used, for example, to estimate the patient's left ventricular ejection fraction. Conclusions: 4D ROOSTER can be applied for human cardiac C-arm CT, and potentially in other dynamic tomography areas. It can easily be adapted to other problems as regularization is decoupled from projection and back projection.
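
    The four regularization steps are easy to mimic in outline. The numpy/scikit-image sketch below is illustrative only: the authors' operators, weights, and conjugate-gradient reconstruction step are not reproduced, w_space and w_time are hypothetical weights, and scikit-image's Chambolle denoiser stands in for the TV minimizations:

        import numpy as np
        from skimage.restoration import denoise_tv_chambolle

        def rooster_regularization_pass(vol4d, motion_mask, w_space=0.05, w_time=0.05):
            """One ROOSTER-style regularization pass on a (T, Z, Y, X) volume.
            motion_mask is a (Z, Y, X) boolean mask covering heart and vessels."""
            vol4d = np.clip(vol4d, 0.0, None)                # 1) enforce positivity
            static = vol4d.mean(axis=0)                      # 2) average along time ...
            vol4d[:, ~motion_mask] = static[~motion_mask]    # ... outside the motion mask
            vol4d = np.stack([denoise_tv_chambolle(v, weight=w_space)
                              for v in vol4d])               # 3) 3D spatial TV, per phase
            flat = vol4d.reshape(vol4d.shape[0], -1)         # 4) 1D temporal TV, voxel-wise
            flat = np.apply_along_axis(denoise_tv_chambolle, 0, flat, weight=w_time)
            return flat.reshape(vol4d.shape)                 # (slow; for illustration only)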

  2. Intercell scheduling: A negotiation approach using multi-agent coalitions

    NASA Astrophysics Data System (ADS)

    Tian, Yunna; Li, Dongni; Zheng, Dan; Jia, Yunde

    2016-10-01

    Intercell scheduling problems arise as a result of intercell transfers in cellular manufacturing systems. Flexible intercell routes are considered in this article, and a coalition-based scheduling (CBS) approach using distributed multi-agent negotiation is developed. Taking advantage of the extended vision of the coalition agents, the global optimization is improved and the communication cost is reduced. The objective of the addressed problem is to minimize mean tardiness. Computational results show that, compared with the widely used combinatorial rules, CBS provides better performance not only in minimizing the objective, i.e. mean tardiness, but also in minimizing auxiliary measures such as maximum completion time, mean flow time and the ratio of tardy parts. Moreover, CBS is better than the existing intercell scheduling approach for the same problem with respect to the solution quality and computational costs.

  3. Minimizing distortion and internal forces in truss structures by simulated annealing

    NASA Technical Reports Server (NTRS)

    Kincaid, Rex K.

    1989-01-01

    Inaccuracies in the length of members and the diameters of joints of large truss reflector backup structures may produce unacceptable levels of surface distortion and member forces. However, if the member lengths and joint diameters can be measured accurately, it is possible to configure the members and joints so that the root-mean-square (rms) surface error and/or rms member forces are minimized. Following Greene and Haftka (1989), it is assumed that the force vector $f$ is linearly proportional to the member length errors $e_M$ of dimension NMEMB (the number of members) and joint errors $e_J$ of dimension NJOINT (the number of joints), and that the best-fit displacement vector $d$ is a linear function of $f$. Let NNODES denote the number of positions on the surface of the truss where error influences are measured. The solution of the problem is discussed. To classify this problem, it was compared to a similar combinatorial optimization problem. In particular, when only the member length errors are considered, minimizing $d^2_{\mathrm{rms}}$ is equivalent to the quadratic assignment problem. The quadratic assignment problem is a well-known NP-complete problem in the operations research literature. Hence minimizing $d^2_{\mathrm{rms}}$ is also an NP-complete problem. The focus of the research is the development of a simulated annealing algorithm to reduce $d^2_{\mathrm{rms}}$. The plausibility of this technique rests on its recent success on a variety of NP-complete combinatorial optimization problems including the quadratic assignment problem. A physical analogy for simulated annealing is the way liquids freeze and crystallize. All computational experiments were done on a MicroVAX. The two-interchange heuristic is very fast but produces widely varying results. The two- and three-interchange heuristic provides less variability in the final objective function values but runs much more slowly. Simulated annealing produced the best objective function values for every starting configuration and was faster than the two- and three-interchange heuristic.
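
    For readers unfamiliar with the move structure, here is a minimal simulated-annealing sketch with two-interchange (swap) moves. It is a generic illustration, not the paper's MicroVAX implementation; the toy cost, errors, and infl data are hypothetical:

        import math
        import random

        def simulated_annealing(cost, n, t0=1.0, alpha=0.95, sweeps=200, seed=1):
            """Minimize cost(perm) over permutations of range(n) using
            two-interchange (swap) moves and geometric cooling."""
            rng = random.Random(seed)
            perm = list(range(n))
            best = cur = cost(perm)
            best_perm = perm[:]
            t = t0
            for _ in range(sweeps):
                for _ in range(n):
                    i, j = rng.sample(range(n), 2)
                    perm[i], perm[j] = perm[j], perm[i]
                    new = cost(perm)
                    if new <= cur or rng.random() < math.exp((cur - new) / t):
                        cur = new                                 # accept the swap
                        if new < best:
                            best, best_perm = new, perm[:]
                    else:
                        perm[i], perm[j] = perm[j], perm[i]       # undo the swap
                t *= alpha                                        # cool down
            return best_perm, best

        # toy usage: pair large measured errors with small influence coefficients
        errors = [0.9, 0.1, 0.5, 0.3, 0.7, 0.2]
        infl   = [0.8, 0.6, 0.1, 0.9, 0.3, 0.5]
        cost = lambda p: sum((errors[p[k]] * infl[k]) ** 2 for k in range(6))
        print(simulated_annealing(cost, 6))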

  4. Energy aware path planning in complex four dimensional environments

    NASA Astrophysics Data System (ADS)

    Chakrabarty, Anjan

    This dissertation addresses the problem of energy-aware path planning for small autonomous vehicles. While small autonomous vehicles can perform missions that are too risky (or infeasible) for larger vehicles, the missions are limited by the amount of energy that can be carried on board the vehicle. Path planning techniques that either minimize energy consumption or exploit energy available in the environment can thus increase range and endurance. Path planning is complicated by significant spatial (and potentially temporal) variations in the environment. While the main focus is on autonomous aircraft, this research also addresses autonomous ground vehicles. Range and endurance of small unmanned aerial vehicles (UAVs) can be greatly improved by utilizing energy from the atmosphere. Wind can be exploited to minimize energy consumption of a small UAV. But wind, like any other atmospheric component, is a space and time varying phenomenon. To effectively use wind for long range missions, both exploration and exploitation of wind is critical. This research presents a kinematics-based tree algorithm which efficiently handles the four-dimensional (three spatial dimensions plus time) path planning problem. The Kinematic Tree algorithm provides a sequence of waypoints, airspeeds, heading and bank angle commands for each segment of the path. The planner is shown to be resolution complete and computationally efficient. Global optimality of the cost function cannot be claimed, as energy is gained from the atmosphere, making the cost function inadmissible. However, the Kinematic Tree is shown to be optimal up to resolution if the cost function is admissible. Simulation results show the efficacy of this planning method for a glider in complex real wind data. Simulation results verify that the planner is able to extract energy from the atmosphere, enabling long range missions. The Kinematic Tree planning framework, developed to minimize energy consumption of UAVs, is applied to path planning for ground robots. In the traditional path planning problem, the focus is on obstacle avoidance and navigation. The optimal variant of the Kinematic Tree algorithm, named Kinematic Tree*, is shown to find optimal paths to reach the destination while avoiding obstacles. A more challenging path planning scenario arises for planning in complex terrain. This research shows how the Kinematic Tree* algorithm can be extended to find minimum energy paths for a ground vehicle in difficult mountainous terrain.

  5. Analysis of a New Variational Model to Restore Point-Like and Curve-Like Singularities in Imaging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aubert, Gilles, E-mail: gaubert@unice.fr; Blanc-Feraud, Laure, E-mail: Laure.Blanc-Feraud@inria.fr; Graziani, Daniele, E-mail: Daniele.Graziani@inria.fr

    2013-02-15

    The paper is concerned with the analysis of a new variational model to restore point-like and curve-like singularities in biological images. To this aim we investigate the variational properties of a suitable energy which governs these pathologies. Finally in order to realize numerical experiments we minimize, in the discrete setting, a regularized version of this functional by fast descent gradient scheme.

  6. Fermion systems in discrete space-time

    NASA Astrophysics Data System (ADS)

    Finster, Felix

    2007-05-01

    Fermion systems in discrete space-time are introduced as a model for physics on the Planck scale. We set up a variational principle which describes a non-local interaction of all fermions. This variational principle is symmetric under permutations of the discrete space-time points. We explain how for minimizers of the variational principle, the fermions spontaneously break this permutation symmetry and induce on space-time a discrete causal structure.

  7. Natural abundance deuterium and 18-oxygen effects on the precision of the doubly labeled water method

    NASA Technical Reports Server (NTRS)

    Horvitz, M. A.; Schoeller, D. A.

    2001-01-01

    The doubly labeled water method for measuring total energy expenditure is subject to error from natural variations in the background 2H and 18O in body water. There is disagreement as to whether the variations in background abundances of the two stable isotopes covary and what relative doses of 2H and 18O minimize the impact of variation on the precision of the method. We have performed two studies to investigate the amount and covariance of the background variations. These were a study of urine collected weekly from eight subjects who remained in the Madison, WI locale for 6 wk and frequent urine samples from 14 subjects during round-trip travel to a locale ≥ 500 miles from Madison, WI. Background variation in excess of analytical error was detected in six of the eight nontravelers, and covariance was demonstrated in four subjects. Background variation was detected in all 14 travelers, and covariance was demonstrated in 11 subjects. The median slopes of the regression lines of δ2H vs. δ18O were 6 and 7, respectively. Modeling indicated that 2H and 18O doses yielding a 6:1 ratio of final enrichments should minimize this error introduced to the doubly labeled water method.

  8. Natural abundance deuterium and 18-oxygen effects on the precision of the doubly labeled water method.

    PubMed

    Horvitz, M A; Schoeller, D A

    2001-06-01

    The doubly labeled water method for measuring total energy expenditure is subject to error from natural variations in the background 2H and 18O in body water. There is disagreement as to whether the variations in background abundances of the two stable isotopes covary and what relative doses of 2H and 18O minimize the impact of variation on the precision of the method. We have performed two studies to investigate the amount and covariance of the background variations. These were a study of urine collected weekly from eight subjects who remained in the Madison, WI locale for 6 wk and frequent urine samples from 14 subjects during round-trip travel to a locale ≥ 500 miles from Madison, WI. Background variation in excess of analytical error was detected in six of the eight nontravelers, and covariance was demonstrated in four subjects. Background variation was detected in all 14 travelers, and covariance was demonstrated in 11 subjects. The median slopes of the regression lines of δ2H vs. δ18O were 6 and 7, respectively. Modeling indicated that 2H and 18O doses yielding a 6:1 ratio of final enrichments should minimize this error introduced to the doubly labeled water method.

  9. σ -SCF: A Direct Energy-targeting Method To Mean-field Excited States

    NASA Astrophysics Data System (ADS)

    Ye, Hongzhou; Welborn, Matthew; Ricke, Nathan; van Voorhis, Troy

    The mean-field solutions of electronic excited states are much less accessible than ground state (e.g. Hartree-Fock) solutions. Energy-based optimization methods for excited states, like Δ-SCF, tend to fall into the lowest solution consistent with a given symmetry - a problem known as "variational collapse". In this work, we combine the ideas of direct energy-targeting and variance-based optimization in order to describe excited states at the mean-field level. The resulting method, σ-SCF, has several advantages. First, it allows one to target any desired excited state by specifying a single parameter: a guess of the energy of that state. It can therefore, in principle, find all excited states. Second, it avoids variational collapse by using a variance-based, unconstrained local minimization. As a consequence, all states - ground or excited - are treated on an equal footing. Third, it provides an alternate approach to locate Δ-SCF solutions that are otherwise hardly accessible by the usual non-aufbau configuration initial guess. We present results for this new method for small atoms (He, Be) and molecules (H2, HF). This work was funded by a Grant from NSF (CHE-1464804).
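
    The targeting objective can be stated compactly. Up to normalization details, the quantity minimized in a variance-based, energy-targeting scheme of this kind is the energy variance centered at the user-supplied target ω (a sketch of the idea, not a transcription of the paper):

        W_\omega[\Phi] = \frac{\langle \Phi \,|\, (\hat{H} - \omega)^2 \,|\, \Phi \rangle}{\langle \Phi \,|\, \Phi \rangle} \ \ge \ 0,

    which vanishes exactly when \Phi is an eigenstate with energy \omega. Minimizing W_\omega is an unconstrained local minimization for every state, ground or excited, which is why variational collapse is avoided.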

  10. Variational Gaussian approximation for Poisson data

    NASA Astrophysics Data System (ADS)

    Arridge, Simon R.; Ito, Kazufumi; Jin, Bangti; Zhang, Chen

    2018-02-01

    The Poisson model is frequently employed to describe count data, but in a Bayesian context it leads to an analytically intractable posterior probability distribution. In this work, we analyze a variational Gaussian approximation to the posterior distribution arising from the Poisson model with a Gaussian prior. This is achieved by seeking an optimal Gaussian distribution minimizing the Kullback-Leibler divergence from the posterior distribution to the approximation, or equivalently maximizing the lower bound for the model evidence. We derive an explicit expression for the lower bound, and show the existence and uniqueness of the optimal Gaussian approximation. The lower bound functional can be viewed as a variant of classical Tikhonov regularization that penalizes also the covariance. Then we develop an efficient alternating direction maximization algorithm for solving the optimization problem, and analyze its convergence. We discuss strategies for reducing the computational complexity via low rank structure of the forward operator and the sparsity of the covariance. Further, as an application of the lower bound, we discuss hierarchical Bayesian modeling for selecting the hyperparameter in the prior distribution, and propose a monotonically convergent algorithm for determining the hyperparameter. We present extensive numerical experiments to illustrate the Gaussian approximation and the algorithms.
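
    In symbols (a generic form, with prior mean x_0 and covariance \Gamma as placeholder notation): the optimal Gaussian q = N(m, C) maximizes the evidence lower bound

        F(m, C) = \mathbb{E}_{x \sim \mathcal{N}(m, C)}\big[ \log p(y \mid x) \big] - \mathrm{KL}\big( \mathcal{N}(m, C) \,\|\, \mathcal{N}(x_0, \Gamma) \big),

    which is equivalent to minimizing the KL divergence between the approximation and the posterior. The KL term is what acts like Tikhonov regularization: it penalizes the mean through the prior and additionally penalizes the covariance C.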

  11. Interferometric synthetic aperture radar phase unwrapping based on sparse Markov random fields by graph cuts

    NASA Astrophysics Data System (ADS)

    Zhou, Lifan; Chai, Dengfeng; Xia, Yu; Ma, Peifeng; Lin, Hui

    2018-01-01

    Phase unwrapping (PU) is one of the key processes in reconstructing the digital elevation model of a scene from its interferometric synthetic aperture radar (InSAR) data. It is known that two-dimensional (2-D) PU problems can be formulated as maximum a posteriori estimation of Markov random fields (MRFs). However, because the traditional MRF algorithm is usually defined on a rectangular grid, it fails easily if large parts of the wrapped data are dominated by noise caused by large low-coherence areas or rapid topography variation. A PU solution based on sparse MRFs is presented to extend the traditional MRF algorithm to deal with sparse data, which allows the unwrapping of InSAR data dominated by high phase noise. To speed up the graph cuts algorithm for sparse MRFs, we designed dual elementary graphs and merged them to obtain the Delaunay triangle graph, which is used to minimize the energy function efficiently. Experiments on both simulated and real data, compared with other existing algorithms, confirm the effectiveness of the proposed MRF approach, which suffers less from decorrelation effects caused by large low-coherence areas or rapid topography variation.

  12. Blind motion image deblurring using nonconvex higher-order total variation model

    NASA Astrophysics Data System (ADS)

    Li, Weihong; Chen, Rui; Xu, Shangwen; Gong, Weiguo

    2016-09-01

    We propose a nonconvex higher-order total variation (TV) method for blind motion image deblurring. First, we introduce a nonconvex higher-order TV differential operator to define a new model of blind motion image deblurring, which can effectively eliminate the staircase effect in the deblurred image; meanwhile, we employ an image sparsity prior to improve the edge recovery quality. Second, to improve the accuracy of the estimated motion blur kernel, we use the L1 norm and the H1 norm as the blur kernel regularization terms, accounting for the sparsity and smoothness of the motion blur kernel. Third, because the intrinsic nonconvexity of the proposed model makes it computationally difficult to solve, we propose a binary iterative strategy, which incorporates a reweighted minimization approximation scheme in the outer iteration and a split Bregman algorithm in the inner iteration. We also discuss the convergence of the proposed binary iterative strategy. Finally, we conduct extensive experiments on both synthetic and real-world degraded images. The results demonstrate that the proposed method outperforms previous representative methods in both quality of visual perception and quantitative measurement.
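
    A representative objective for this family of models reads as follows (symbols and weights illustrative; the paper's exact functional may differ):

        \min_{u, k} \ \| k \ast u - f \|_2^2 + \lambda \sum_i \big| (\nabla^2 u)_i \big|^p + \mu_1 \| k \|_1 + \mu_2 \| \nabla k \|_2^2, \qquad 0 < p < 1,

    where u is the latent image, k the blur kernel, and f the observation. The nonconvex power p on second-order differences is what suppresses staircasing, while the L1/H1 pair keeps the kernel sparse and smooth.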

  13. Phonological Neighborhood Effects in Spoken Word Production: An fMRI Study

    ERIC Educational Resources Information Center

    Peramunage, Dasun; Blumstein, Sheila E.; Myers, Emily B.; Goldrick, Matthew; Baese-Berk, Melissa

    2011-01-01

    The current study examined the neural systems underlying lexically conditioned phonetic variation in spoken word production. Participants were asked to read aloud singly presented words, which either had a voiced minimal pair (MP) neighbor (e.g., cape) or lacked a minimal pair (NMP) neighbor (e.g., cake). The voiced neighbor never appeared in the…

  14. Diversity is maintained by seasonal variation in species abundance

    PubMed Central

    2013-01-01

    Background Some of the most marked temporal fluctuations in species abundances are linked to seasons. In theory, multispecies assemblages can persist if species use shared resources at different times, thereby minimizing interspecific competition. However, there is scant empirical evidence supporting these predictions and, to the best of our knowledge, seasonal variation has never been explored in the context of fluctuation-mediated coexistence. Results Using an exceptionally well-documented estuarine fish assemblage, sampled monthly for over 30 years, we show that temporal shifts in species abundances underpin species coexistence. Species fall into distinct seasonal groups, within which spatial resource use is more heterogeneous than would be expected by chance at those times when competition for food is most intense. We also detect seasonal variation in the richness and evenness of the community, again linked to shifts in resource availability. Conclusions These results reveal that spatiotemporal shifts in community composition minimize competitive interactions and help stabilize total abundance. PMID:24007204

  15. Sinc-Galerkin estimation of diffusivity in parabolic problems

    NASA Technical Reports Server (NTRS)

    Smith, Ralph C.; Bowers, Kenneth L.

    1991-01-01

    A fully Sinc-Galerkin method for the numerical recovery of spatially varying diffusion coefficients in linear partial differential equations is presented. Because the parameter recovery problems are inherently ill-posed, an output error criterion in conjunction with Tikhonov regularization is used to formulate them as infinite-dimensional minimization problems. The forward problems are discretized with a sinc basis in both the spatial and temporal domains thus yielding an approximate solution which displays an exponential convergence rate and is valid on the infinite time interval. The minimization problems are then solved via a quasi-Newton/trust region algorithm. The L-curve technique for determining an approximate value of the regularization parameter is briefly discussed, and numerical examples are given which show the applicability of the method both for problems with noise-free data as well as for those whose data contains white noise.

  16. Nonexpansiveness of a linearized augmented Lagrangian operator for hierarchical convex optimization

    NASA Astrophysics Data System (ADS)

    Yamagishi, Masao; Yamada, Isao

    2017-04-01

    Hierarchical convex optimization concerns two-stage optimization problems: the first stage problem is a convex optimization; the second stage problem is the minimization of a convex function over the solution set of the first stage problem. For the hierarchical convex optimization, the hybrid steepest descent method (HSDM) can be applied, where the solution set of the first stage problem must be expressed as the fixed point set of a certain nonexpansive operator. In this paper, we propose a nonexpansive operator that yields a computationally efficient update when it is plugged into the HSDM. The proposed operator is inspired by the update of the linearized augmented Lagrangian method. It is applicable to characterize the solution set of recent sophisticated convex optimization problems found in the context of inverse problems, where the sum of multiple proximable convex functions involving linear operators must be minimized to incorporate preferable properties into the minimizers. For such a problem formulation, there has not yet been reported any nonexpansive operator that yields an update free from the inversions of linear operators in cases where it is utilized in the HSDM. Unlike previously known nonexpansive operators, the proposed operator yields an inversion-free update in such cases. As an application of the proposed operator plugged into the HSDM, we also present, in the context of the so-called superiorization, an algorithmic solution to a convex optimization problem over the generalized convex feasible set where the intersection of the hard constraints is not necessarily simple.
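
    For context, the HSDM iteration that such an operator plugs into has the standard form (for a smooth second-stage objective f):

        x_{n+1} = T x_n - \lambda_{n+1} \nabla f(T x_n), \qquad \lambda_n \downarrow 0, \quad \sum_n \lambda_n = \infty,

    so the per-iteration cost is dominated by one application of the nonexpansive operator T. The contribution here is a T whose fixed-point set encodes the first-stage solution set while avoiding inversions of the linear operators.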

  17. Competitive two-agent scheduling problems to minimize the weighted combination of makespans in a two-machine open shop

    NASA Astrophysics Data System (ADS)

    Jiang, Fuhong; Zhang, Xingong; Bai, Danyu; Wu, Chin-Chia

    2018-04-01

    In this article, a competitive two-agent scheduling problem in a two-machine open shop is studied. The objective is to minimize the weighted sum of the makespans of two competitive agents. A complexity proof is presented for minimizing the weighted combination of the makespan of each agent if the weight α belonging to agent B is arbitrary. Furthermore, two pseudo-polynomial-time algorithms using the largest alternate processing time (LAPT) rule are presented. Finally, two approximation algorithms are presented if the weight is equal to one. Additionally, another approximation algorithm is presented if the weight is larger than one.

  18. Two hybrid compaction algorithms for the layout optimization problem.

    PubMed

    Xiao, Ren-Bin; Xu, Yi-Chun; Amos, Martyn

    2007-01-01

    In this paper we present two new algorithms for the layout optimization problem: this concerns the placement of circular, weighted objects inside a circular container, the two objectives being to minimize imbalance of mass and to minimize the radius of the container. This problem carries real practical significance in industrial applications (such as the design of satellites), as well as being of significant theoretical interest. We present two nature-inspired algorithms for this problem, the first based on simulated annealing, and the second on particle swarm optimization. We compare our algorithms with the existing best-known algorithm, and show that our approaches outperform it in terms of both solution quality and execution time.

  19. [Minimal emotional dysfunction and first impression formation in personality disorders].

    PubMed

    Linden, M; Vilain, M

    2011-01-01

    "Minimal cerebral dysfunctions" are isolated impairments of basic mental functions, which are elements of complex functions like speech. The best described are cognitive dysfunctions such as reading and writing problems, dyscalculia, attention deficits, but also motor dysfunctions such as problems with articulation, hyperactivity or impulsivity. Personality disorders can be characterized by isolated emotional dysfunctions in relation to emotional adequacy, intensity and responsivity. For example, paranoid personality disorders can be characterized by continuous and inadequate distrust, as a disorder of emotional adequacy. Schizoid personality disorders can be characterized by low expressive emotionality, as a disorder of effect intensity, or dissocial personality disorders can be characterized by emotional non-responsivity. Minimal emotional dysfunctions cause interactional misunderstandings because of the psychology of "first impression formation". Studies have shown that in 100 ms persons build up complex and lasting emotional judgements about other persons. Therefore, minimal emotional dysfunctions result in interactional problems and adjustment disorders and in corresponding cognitive schemata.From the concept of minimal emotional dysfunctions specific psychotherapeutic interventions in respect to the patient-therapist relationship, the diagnostic process, the clarification of emotions and reality testing, and especially an understanding of personality disorders as impairment and "selection, optimization, and compensation" as a way of coping can be derived.

  20. Variational Principles for Buckling of Microtubules Modeled as Nonlocal Orthotropic Shells

    PubMed Central

    2014-01-01

    A variational principle for microtubules subject to a buckling load is derived by semi-inverse method. The microtubule is modeled as an orthotropic shell with the constitutive equations based on nonlocal elastic theory and the effect of filament network taken into account as an elastic surrounding. Microtubules can carry large compressive forces by virtue of the mechanical coupling between the microtubules and the surrounding elastic filament network. The equations governing the buckling of the microtubule are given by a system of three partial differential equations. The problem studied in the present work involves the derivation of the variational formulation for microtubule buckling. The Rayleigh quotient for the buckling load as well as the natural and geometric boundary conditions of the problem is obtained from this variational formulation. It is observed that the boundary conditions are coupled as a result of nonlocal formulation. It is noted that the analytic solution of the buckling problem for microtubules is usually a difficult task. The variational formulation of the problem provides the basis for a number of approximate and numerical methods of solutions and furthermore variational principles can provide physical insight into the problem. PMID:25214886

  1. Gardner's Two Children Problems and Variations: Puzzles with Conditional Probability and Sample Spaces

    ERIC Educational Resources Information Center

    Taylor, Wendy; Stacey, Kaye

    2014-01-01

    This article presents "The Two Children Problem," published by Martin Gardner, who wrote a famous and widely-read math puzzle column in the magazine "Scientific American," and a problem presented by puzzler Gary Foshee. This paper explains the paradox of Problems 2 and 3 and many other variations of the theme. Then the authors…

  2. Singular optimal control and the identically non-regular problem in the calculus of variations

    NASA Technical Reports Server (NTRS)

    Menon, P. K. A.; Kelley, H. J.; Cliff, E. M.

    1985-01-01

    A small but interesting class of optimal control problems featuring a scalar control appearing linearly is equivalent to the class of identically nonregular problems in the Calculus of Variations. It is shown that a condition due to Mancill (1950) is equivalent to the generalized Legendre-Clebsch condition for this narrow class of problems.

  3. Regularization by Functions of Bounded Variation and Applications to Image Enhancement

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Casas, E.; Kunisch, K.; Pola, C.

    1999-09-15

    Optimization problems regularized by bounded variation seminorms are analyzed. The optimality system is obtained and finite-dimensional approximations of bounded variation function spaces as well as of the optimization problems are studied. It is demonstrated that the choice of the vector norm in the definition of the bounded variation seminorm is of special importance for approximating subspaces consisting of piecewise constant functions. Algorithms based on a primal-dual framework that exploit the structure of these nondifferentiable optimization problems are proposed. Numerical examples are given for denoising of blocky images with very high noise.

  4. A new smoothing modified three-term conjugate gradient method for [Formula: see text]-norm minimization problem.

    PubMed

    Du, Shouqiang; Chen, Miao

    2018-01-01

    We consider a class of nonsmooth optimization problems with [Formula: see text]-norm minimization, which has many applications in compressed sensing, signal reconstruction, and related engineering problems. Using smoothing approximation techniques, this kind of nonsmooth optimization problem can be transformed into a general unconstrained optimization problem, which can be solved by the proposed smoothing modified three-term conjugate gradient method. The smoothing modified three-term conjugate gradient method is based on the Polak-Ribière-Polyak conjugate gradient method. Because the Polak-Ribière-Polyak conjugate gradient method has good numerical properties, the proposed method possesses the sufficient descent property without any line search and is also proved to be globally convergent. Finally, numerical experiments show the efficiency of the proposed method.
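
    A common smoothing of this kind (shown here for the absolute value; the paper's specific smoothing function may differ) replaces each |x_i| by

        \phi_\mu(x_i) = \sqrt{x_i^2 + \mu^2} \ \longrightarrow \ |x_i| \quad (\mu \downarrow 0),

    so that an objective such as \|Ax - b\|_2^2 + \tau \sum_i \phi_\mu(x_i) becomes smooth and unconstrained, and conjugate gradient iterations with a smoothing parameter \mu driven toward zero apply directly.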

  5. Delaunay-based derivative-free optimization for efficient minimization of time-averaged statistics of turbulent flows

    NASA Astrophysics Data System (ADS)

    Beyhaghi, Pooriya

    2016-11-01

    This work considers the problem of the efficient minimization of the infinite time average of a stationary ergodic process in the space of a handful of independent parameters which affect it. Problems of this class, derived from physical or numerical experiments which are sometimes expensive to perform, are ubiquitous in turbulence research. In such problems, any given function evaluation, determined with finite sampling, is associated with a quantifiable amount of uncertainty, which may be reduced via additional sampling. This work proposes the first algorithm of this type. Our algorithm remarkably reduces the overall cost of the optimization process for problems of this class. Further, under certain well-defined conditions, rigorous proof of convergence is established to the global minimum of the problem considered.

  6. Missing data and technical variability in single-cell RNA-sequencing experiments.

    PubMed

    Hicks, Stephanie C; Townes, F William; Teng, Mingxiang; Irizarry, Rafael A

    2017-11-06

    Until recently, high-throughput gene expression technology, such as RNA-Sequencing (RNA-seq), required hundreds of thousands of cells to produce reliable measurements. Recent technical advances permit genome-wide gene expression measurement at the single-cell level. Single-cell RNA-Seq (scRNA-seq) is the most widely used such technology, and numerous publications are based on data produced with it. However, RNA-seq and scRNA-seq data are markedly different. In particular, unlike RNA-seq, the majority of reported expression levels in scRNA-seq are zeros, which could be either biologically-driven, genes not expressing RNA at the time of measurement, or technically-driven, genes expressing RNA, but not at a sufficient level to be detected by sequencing technology. Another difference is that the proportion of genes reporting the expression level to be zero varies substantially across single cells compared to RNA-seq samples. However, it remains unclear to what extent this cell-to-cell variation is being driven by technical rather than biological variation. Furthermore, while systematic errors, including batch effects, have been widely reported as a major challenge in high-throughput technologies, these issues have received minimal attention in published studies based on scRNA-seq technology. Here, we use an assessment experiment to examine data from published studies and demonstrate that systematic errors can explain a substantial percentage of observed cell-to-cell expression variability. Specifically, we present evidence that some of these reported zeros are driven by technical variation, by demonstrating that scRNA-seq produces more zeros than expected and that this bias is greater for lower expressed genes. In addition, this missing data problem is exacerbated by the fact that the technical variation varies cell-to-cell. Then, we show how this technical cell-to-cell variability can be confused with novel biological results. Finally, we demonstrate and discuss how batch effects and confounded experiments can intensify the problem.

  7. Generating effective project scheduling heuristics by abstraction and reconstitution

    NASA Technical Reports Server (NTRS)

    Janakiraman, Bhaskar; Prieditis, Armand

    1992-01-01

    A project scheduling problem consists of a finite set of jobs, each with fixed integer duration, requiring one or more resources such as personnel or equipment, and each subject to a set of precedence relations, which specify allowable job orderings, and a set of mutual exclusion relations, which specify jobs that cannot overlap. No job can be interrupted once started. The objective is to minimize project duration. This objective arises in nearly every large construction project--from software to hardware to buildings. Because such project scheduling problems are NP-hard, they are typically solved by branch-and-bound algorithms. In these algorithms, lower-bound duration estimates (admissible heuristics) are used to improve efficiency. One way to obtain an admissible heuristic is to remove (abstract) all resources and mutual exclusion constraints and then obtain the minimal project duration for the abstracted problem; this minimal duration is the admissible heuristic. Although such abstracted problems can be solved efficiently, they yield inaccurate admissible heuristics precisely because those constraints that are central to solving the original problem are abstracted. This paper describes a method to reconstitute the abstracted constraints back into the solution to the abstracted problem while maintaining efficiency, thereby generating better admissible heuristics. Our results suggest that reconstitution can make good admissible heuristics even better.
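
    The abstracted heuristic is simple to compute. A minimal sketch follows (illustrative; job names and data are hypothetical): with resources and mutual exclusions removed, the admissible lower bound on the makespan is the critical-path (longest-path) length of the precedence DAG:

        from functools import lru_cache

        def cpm_lower_bound(durations, preds):
            """Admissible makespan lower bound: abstract away resource and
            mutual-exclusion constraints and keep only precedences, giving
            the critical-path duration of the precedence DAG."""
            @lru_cache(maxsize=None)
            def finish(j):  # earliest finish of job j under precedences alone
                return durations[j] + max((finish(p) for p in preds.get(j, ())), default=0)
            return max(finish(j) for j in durations)

        # toy usage: job -> duration, job -> predecessor jobs
        durations = {"a": 3, "b": 2, "c": 4, "d": 1}
        preds = {"c": ("a", "b"), "d": ("c",)}
        print(cpm_lower_bound(durations, preds))  # 3 + 4 + 1 = 8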

  8. Designing safety into the minimally invasive surgical revolution: a commentary based on the Jacques Perissat Lecture of the International Congress of the European Association for Endoscopic Surgery.

    PubMed

    Clarke, John R

    2009-01-01

    Surgical errors with minimally invasive surgery differ from those in open surgery. Perforations are typically the result of trocar introduction or electrosurgery. Infections include bioburdens, notably enteric viruses, on complex instruments. Retained foreign objects are primarily unretrieved device fragments and lost gallstones or other specimens. Fires and burns come from illuminated ends of fiber-optic cables and from electrosurgery. Pressure ischemia is more likely with longer endoscopic surgical procedures. Gas emboli can occur. Minimally invasive surgery is more dependent on complex equipment, with high likelihood of failures. Standardization, checklists, and problem reporting are solutions for minimizing failures. The necessity of electrosurgery makes education about best electrosurgical practices important. The recording of minimally invasive surgical procedures is an opportunity to debrief in a way that improves the reliability of future procedures. Safety depends on reliability, designing systems to withstand inevitable human errors. Safe systems are characterized by a commitment to safety, formal protocols for communications, teamwork, standardization around best practice, and reporting of problems for improvement of the system. Teamwork requires shared goals, mental models, and situational awareness in order to facilitate mutual monitoring and backup. An effective team has a flat hierarchy; team members are empowered to speak up if they are concerned about problems. Effective teams plan, rehearse, distribute the workload, and debrief. Surgeons doing minimally invasive surgery have a unique opportunity to incorporate the principles of safety into the development of their discipline.

  9. Waveform Design for Multimedia Airborne Networks: Robust Multimedia Data Transmission in Cognitive Radio Networks

    DTIC Science & Technology

    2011-03-01

    …at the sensor. According to Candes, Tao and Romberg [1], a small number of random projections of a compressible signal is all that is needed for reconstruction. [The remainder of this record is residue of a flattened block diagram; the recoverable pipeline is: random projection of the signal; transform (DWT, FFT, or DCT); solving the minimization problem; and signal reconstruction, over both AWGN and noiseless channels, with de-noising of the noisy signal.]

  10. Minimal subspace rotation on the Stiefel manifold for stabilization and enhancement of projection-based reduced order models for the compressible Navier–Stokes equations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Balajewicz, Maciej; Tezaur, Irina; Dowell, Earl

    For a projection-based reduced order model (ROM) of a fluid flow to be stable and accurate, the dynamics of the truncated subspace must be taken into account. This paper proposes an approach for stabilizing and enhancing projection-based fluid ROMs in which truncated modes are accounted for a priori via a minimal rotation of the projection subspace. Attention is focused on the full non-linear compressible Navier–Stokes equations in specific volume form as a step toward a more general formulation for problems with generic non-linearities. Unlike traditional approaches, no empirical turbulence modeling terms are required, and consistency between the ROM and the Navier–Stokes equation from which the ROM is derived is maintained. Mathematically, the approach is formulated as a trace minimization problem on the Stiefel manifold. The reproductive as well as predictive capabilities of the method are evaluated on several compressible flow problems, including a problem involving laminar flow over an airfoil with a high angle of attack, and a channel-driven cavity flow problem.

  11. Minimal subspace rotation on the Stiefel manifold for stabilization and enhancement of projection-based reduced order models for the compressible Navier–Stokes equations

    DOE PAGES

    Balajewicz, Maciej; Tezaur, Irina; Dowell, Earl

    2016-05-25

    For a projection-based reduced order model (ROM) of a fluid flow to be stable and accurate, the dynamics of the truncated subspace must be taken into account. This paper proposes an approach for stabilizing and enhancing projection-based fluid ROMs in which truncated modes are accounted for a priori via a minimal rotation of the projection subspace. Attention is focused on the full non-linear compressible Navier–Stokes equations in specific volume form as a step toward a more general formulation for problems with generic non-linearities. Unlike traditional approaches, no empirical turbulence modeling terms are required, and consistency between the ROM and the Navier–Stokes equation from which the ROM is derived is maintained. Mathematically, the approach is formulated as a trace minimization problem on the Stiefel manifold. The reproductive as well as predictive capabilities of the method are evaluated on several compressible flow problems, including a problem involving laminar flow over an airfoil with a high angle of attack, and a channel-driven cavity flow problem.

  12. Stress-Constrained Structural Topology Optimization with Design-Dependent Loads

    NASA Astrophysics Data System (ADS)

    Lee, Edmund

    Topology optimization is commonly used to distribute a given amount of material to obtain the stiffest structure under predefined fixed loads. The present work investigates the result of applying stress constraints to topology optimization for problems with design-dependent loading, such as self-weight and pressure. In order to apply pressure loading, a material boundary identification scheme is proposed, iteratively connecting points of equal density. In previous research, design-dependent loading problems have been limited to compliance minimization. The present study employs a more practical approach by minimizing mass subject to failure constraints, and uses a stress relaxation technique to avoid stress constraint singularities. The results show that these design-dependent loading problems may converge to a local minimum when stress constraints are enforced. Comparisons between compliance minimization solutions and stress-constrained solutions are also given. The resulting topologies of these two solutions are usually vastly different, demonstrating the need for stress-constrained topology optimization.

  13. System for solving diagnosis and hitting set problems

    NASA Technical Reports Server (NTRS)

    Vatan, Farrokh (Inventor); Fijany, Amir (Inventor)

    2007-01-01

    The diagnosis problem arises when a system's actual behavior contradicts the expected behavior, thereby exhibiting symptoms (a collection of conflict sets). System diagnosis is then the task of identifying faulty components that are responsible for anomalous behavior. To solve the diagnosis problem, the present invention describes a method for finding the minimal set of faulty components (minimal diagnosis set) that explain the conflict sets. The method includes acts of creating a matrix of the collection of conflict sets, and then creating nodes from the matrix such that each node is a node in a search tree. A determination is made as to whether each node is a leaf node or has any children nodes. If any given node has children nodes, then the node is split until all nodes are leaf nodes. Information gathered from the leaf nodes is used to determine the minimal diagnosis set.
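
    A compact sketch of the search-tree idea follows (illustrative only, not the patented method's data structures): branch on the components of some conflict set that the current candidate does not yet hit, and prune branches that cannot beat the incumbent:

        def min_hitting_set(conflicts, partial=frozenset(), best=None):
            """Smallest set of components intersecting every conflict set."""
            unhit = [c for c in conflicts if not (c & partial)]
            if not unhit:
                return partial if best is None or len(partial) < len(best) else best
            if best is not None and len(partial) + 1 >= len(best):
                return best  # prune: cannot improve on the incumbent
            for comp in sorted(unhit[0]):  # each child hits the first unhit conflict
                best = min_hitting_set(conflicts, partial | {comp}, best)
            return best

        # toy usage: conflict sets derived from observed symptoms
        conflicts = [frozenset("ab"), frozenset("bc"), frozenset("cd")]
        print(sorted(min_hitting_set(conflicts)))  # ['a', 'c'], a size-2 hitting set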

  14. Superiorization with level control

    NASA Astrophysics Data System (ADS)

    Cegielski, Andrzej; Al-Musallam, Fadhel

    2017-04-01

    The convex feasibility problem is to find a common point of a finite family of closed convex subsets. In many applications one requires something more, namely finding a common point of closed convex subsets which minimizes a continuous convex function. The latter requirement leads to an application of the superiorization methodology which is actually settled between methods for convex feasibility problem and the convex constrained minimization. Inspired by the superiorization idea we introduce a method which sequentially applies a long-step algorithm for a sequence of convex feasibility problems; the method employs quasi-nonexpansive operators as well as subgradient projections with level control and does not require evaluation of the metric projection. We replace a perturbation of the iterations (applied in the superiorization methodology) by a perturbation of the current level in minimizing the objective function. We consider the method in the Euclidean space in order to guarantee the strong convergence, although the method is well defined in a Hilbert space.

  15. Total variation regularization of the 3-D gravity inverse problem using a randomized generalized singular value decomposition

    NASA Astrophysics Data System (ADS)

    Vatankhah, Saeed; Renaut, Rosemary A.; Ardestani, Vahid E.

    2018-04-01

    We present a fast algorithm for the total variation regularization of the 3-D gravity inverse problem. Through imposition of the total variation regularization, subsurface structures presenting with sharp discontinuities are preserved better than when using a conventional minimum-structure inversion. The associated problem formulation for the regularization is nonlinear but can be solved using an iteratively reweighted least-squares algorithm. For small-scale problems the regularized least-squares problem at each iteration can be solved using the generalized singular value decomposition. This is not feasible for large-scale, or even moderate-scale, problems. Instead we introduce the use of a randomized generalized singular value decomposition in order to reduce the dimensions of the problem and provide an effective and efficient solution technique. For further efficiency an alternating direction algorithm is used to implement the total variation weighting operator within the iteratively reweighted least-squares algorithm. Presented results for synthetic examples demonstrate that the novel randomized decomposition provides good accuracy for reduced computational and memory demands as compared to use of classical approaches.

  16. Solving the Container Stowage Problem (CSP) using Particle Swarm Optimization (PSO)

    NASA Astrophysics Data System (ADS)

    Matsaini; Santosa, Budi

    2018-04-01

    The Container Stowage Problem (CSP) is the problem of arranging containers in ships subject to rules such as total weight, weight of one stack, destination, equilibrium, and placement of containers on the vessel. The container stowage problem is combinatorial and hard to solve with enumeration techniques; it is an NP-hard problem. Therefore, metaheuristics are preferred for finding a solution. The objective of solving the problem is to minimize the amount of shifting so that the unloading time is minimized. Particle Swarm Optimization (PSO) is proposed to solve the problem. The implementation of PSO is combined with additional steps, namely stack position change rules, stack changes based on destination, and stack changes based on the weight class of the stacks (light, medium, and heavy). The proposed method was applied to five different cases. The results were compared to Bee Swarm Optimization (BSO) and a heuristic method. PSO provided a mean gap of 0.87% and a time gap of 60 seconds, while BSO provided a mean gap of 2.98% and a time gap of 459.6 seconds relative to the heuristics.
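
    A minimal global-best PSO sketch follows (a generic illustration: the decoding of particle positions into stack arrangements and the shift count are toy stand-ins for the paper's rules, and all parameters are hypothetical):

        import numpy as np

        def pso(f, dim, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
            """Minimal global-best PSO. For a combinatorial problem such as CSP,
            f can decode a continuous position into a stack arrangement (e.g. by
            random-key ranking) and return the resulting number of shifts."""
            rng = np.random.default_rng(seed)
            x = rng.uniform(-1, 1, (n_particles, dim))   # positions
            v = np.zeros_like(x)                         # velocities
            pbest, pbest_val = x.copy(), np.array([f(p) for p in x])
            g = pbest[pbest_val.argmin()].copy()         # global best
            for _ in range(iters):
                r1, r2 = rng.random(x.shape), rng.random(x.shape)
                v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
                x = x + v
                vals = np.array([f(p) for p in x])
                improved = vals < pbest_val
                pbest[improved], pbest_val[improved] = x[improved], vals[improved]
                g = pbest[pbest_val.argmin()].copy()
            return g, pbest_val.min()

        # toy usage: decode keys to a loading order; count heavy-over-light
        # pairs as a crude proxy for re-handling "shifts"
        weights = np.array([3, 1, 2, 5, 4])  # hypothetical container weights
        def shifts(keys):
            order = np.argsort(keys)         # random-key decoding into an order
            w_seq = weights[order]
            return sum(w_seq[i] < w_seq[j] for i in range(5) for j in range(i + 1, 5))
        print(pso(shifts, dim=5))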

  17. A linear programming approach to max-sum problem: a review.

    PubMed

    Werner, Tomás

    2007-07-01

    The max-sum labeling problem, defined as maximizing a sum of binary (i.e., pairwise) functions of discrete variables, is a general NP-hard optimization problem with many applications, such as computing the MAP configuration of a Markov random field. We review a not widely known approach to the problem, developed by Ukrainian researchers Schlesinger et al. in 1976, and show how it contributes to recent results, most importantly, those on the convex combination of trees and tree-reweighted max-product. In particular, we review Schlesinger et al.'s upper bound on the max-sum criterion, its minimization by equivalent transformations, its relation to the constraint satisfaction problem, the fact that this minimization is dual to a linear programming relaxation of the original problem, and the three kinds of consistency necessary for optimality of the upper bound. We revisit problems with Boolean variables and supermodular problems. We describe two algorithms for decreasing the upper bound. We present an example application for structural image analysis.
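
    The relaxation reviewed there is the standard local-polytope linear program: with unary terms \theta_u(s) and pairwise terms \theta_{uv}(s, t),

        \max_{\mu \ge 0} \ \sum_{u} \sum_{s} \theta_u(s) \, \mu_u(s) + \sum_{(u,v)} \sum_{s,t} \theta_{uv}(s,t) \, \mu_{uv}(s,t)

        \text{s.t.} \quad \sum_t \mu_{uv}(s,t) = \mu_u(s), \qquad \sum_s \mu_u(s) = 1,

    whose dual is precisely the minimization of the upper bound over equivalent transformations (reparameterizations) discussed in the review.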

  18. Adaptive GSA-based optimal tuning of PI controlled servo systems with reduced process parametric sensitivity, robust stability and controller robustness.

    PubMed

    Precup, Radu-Emil; David, Radu-Codrut; Petriu, Emil M; Radac, Mircea-Bogdan; Preitl, Stefan

    2014-11-01

    This paper suggests a new generation of optimal PI controllers for a class of servo systems characterized by saturation and dead zone static nonlinearities and second-order models with an integral component. The objective functions are expressed as the integral of time multiplied by absolute error plus the weighted sum of the integrals of output sensitivity functions of the state sensitivity models with respect to two process parametric variations. The PI controller tuning conditions applied to a simplified linear process model involve a single design parameter specific to the extended symmetrical optimum (ESO) method which offers the desired tradeoff to several control system performance indices. An original back-calculation and tracking anti-windup scheme is proposed in order to prevent the integrator wind-up and to compensate for the dead zone nonlinearity of the process. The minimization of the objective functions is carried out in the framework of optimization problems with inequality constraints which guarantee the robust stability with respect to the process parametric variations and the controller robustness. An adaptive gravitational search algorithm (GSA) solves the optimization problems focused on the optimal tuning of the design parameter specific to the ESO method and of the anti-windup tracking gain. A tuning method for PI controllers is proposed as an efficient approach to the design of resilient control systems. The tuning method and the PI controllers are experimentally validated by the adaptive GSA-based tuning of PI controllers for the angular position control of a laboratory servo system.

  19. On the thermomechanical coupling in dissipative materials: A variational approach for generalized standard materials

    NASA Astrophysics Data System (ADS)

    Bartels, A.; Bartel, T.; Canadija, M.; Mosler, J.

    2015-09-01

    This paper deals with the thermomechanical coupling in dissipative materials. The focus lies on finite strain plasticity theory and the temperature increase resulting from plastic deformation. For this type of problem, two fundamentally different modeling approaches can be found in the literature: (a) models based on thermodynamical considerations and (b) models based on the so-called Taylor-Quinney factor. While a naive straightforward implementation of thermodynamically consistent approaches usually leads to an over-prediction of the temperature increase due to plastic deformation, models relying on the Taylor-Quinney factor often violate fundamental physical principles such as the first and the second law of thermodynamics. In this paper, a thermodynamically consistent framework is elaborated which indeed allows the realistic prediction of the temperature evolution. In contrast to previously proposed frameworks, it is based on a fully three-dimensional, finite strain setting and it naturally covers coupled isotropic and kinematic hardening - also based on non-associative evolution equations. Considering a variationally consistent description based on incremental energy minimization, it is shown that the aforementioned problem (thermodynamical consistency and a realistic temperature prediction) is essentially equivalent to correctly defining the decomposition of the total energy into stored and dissipative parts. Interestingly, this decomposition shows strong analogies to the Taylor-Quinney factor. In this respect, the Taylor-Quinney factor can be well motivated from a physical point of view. Furthermore, certain intervals for this factor can be derived in order to guarantee that fundamental physical principles are fulfilled a priori. Representative examples demonstrate the predictive capabilities of the final constitutive modeling framework.
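
    In this context, the Taylor-Quinney factor $\beta$ is conventionally defined as the fraction of the plastic stress power converted into heat:

    $$\dot{q} = \beta\, \boldsymbol{\sigma} : \dot{\boldsymbol{\varepsilon}}^{p}, \qquad 0 \le \beta \le 1,$$

    where $\dot{q}$ is the volumetric heating rate, $\boldsymbol{\sigma}$ the stress, and $\dot{\boldsymbol{\varepsilon}}^{p}$ the plastic strain rate (a standard definition given here for orientation; the paper's energy decomposition determines which intervals of $\beta$ are thermodynamically admissible).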

  20. The use of ion beam cleaning to obtain high quality cold welds with minimal deformation

    NASA Technical Reports Server (NTRS)

    Sater, B. L.; Moore, T. J.

    1978-01-01

    This paper describes a variation of cold welding which utilizes an ion beam to clean the mating surfaces prior to joining in a vacuum environment. High quality solid state welds were produced with minimal deformation. Due to experimental fixture limitations in applying pressure, work has been limited to a few low-yield-strength materials.

  1. Variational algorithms for nonlinear smoothing applications

    NASA Technical Reports Server (NTRS)

    Bach, R. E., Jr.

    1977-01-01

    A variational approach is presented for solving a nonlinear, fixed-interval smoothing problem with application to offline processing of noisy data for trajectory reconstruction and parameter estimation. The nonlinear problem is solved as a sequence of linear two-point boundary value problems. Second-order convergence properties are demonstrated. Algorithms for both continuous and discrete versions of the problem are given, and example solutions are provided.
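
    A generic statement of this problem class (our notation, not necessarily the paper's exact cost): the fixed-interval smoother minimizes a functional of the form

    $$J = \int_{t_0}^{t_f} \Big( \|y(t) - h(x(t))\|^2_{R^{-1}} + \|\dot{x}(t) - f(x(t))\|^2_{Q^{-1}} \Big)\,\mathrm{d}t,$$

    whose first-order optimality conditions form the nonlinear two-point boundary value problem that is then solved as a sequence of linear ones.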

  2. Potential effect of diaper and cotton ball contamination on NMR- and LC/MS-based metabonomics studies of urine from newborn babies.

    PubMed

    Goodpaster, Aaron M; Ramadas, Eshwar H; Kennedy, Michael A

    2011-02-01

    Nuclear magnetic resonance (NMR) and liquid chromatography/mass spectrometry (LC/MS) based metabonomics screening of urine has great potential for discovery of biomarkers for diseases that afflict newborn and preterm infants. However, urine collection from newborn infants presents a potential confounding problem due to the possibility that contaminants might leach from materials used for urine collection and influence statistical analysis of metabonomics data. In this manuscript, we have analyzed diaper and cotton ball contamination using synthetic urine to assess its potential to influence the outcome of NMR- and LC/MS-based metabonomics studies of human infant urine. Eight diaper brands were examined using the "diaper plus cotton ball" technique. Data were analyzed using conventional principal components analysis, as well as a statistical significance algorithm developed for, and applied to, NMR data. Results showed most diaper brands had distinct contaminant profiles that could potentially influence NMR- and LC/MS-based metabonomics studies. On the basis of this study, it is recommended that diaper and cotton ball brands be characterized using metabonomics methodologies prior to initiating a metabonomics study to ensure that contaminant profiles are minimal or manageable and that the same diaper and cotton ball brands be used throughout a study to minimize variation.

  3. Bidding strategy for microgrid in day-ahead market based on hybrid stochastic/robust optimization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Guodong; Xu, Yan; Tomsovic, Kevin

    In this paper, we propose an optimal bidding strategy in the day-ahead market of a microgrid consisting of intermittent distributed generation (DG), storage, dispatchable DG and price responsive loads. The microgrid coordinates the energy consumption or production of its components and trades electricity in both the day-ahead and real-time markets to minimize its operating cost as a single entity. The bidding problem is challenging due to a variety of uncertainties, including power output of intermittent DG, load variation, day-ahead and real-time market prices. A hybrid stochastic/robust optimization model is proposed to minimize the expected net cost, i.e., expected total cost of operation minus total benefit of demand. This formulation can be solved by mixed integer linear programming. The uncertain output of intermittent DG and day-ahead market price are modeled via scenarios based on forecast results, while a robust optimization is proposed to limit the unbalanced power in the real-time market taking account of the uncertainty of real-time market price. Numerical simulations on a microgrid consisting of a wind turbine, a PV panel, a fuel cell, a micro-turbine, a diesel generator, a battery and a responsive load show the advantage of stochastic optimization in addition to robust optimization.

  4. Bidding strategy for microgrid in day-ahead market based on hybrid stochastic/robust optimization

    DOE PAGES

    Liu, Guodong; Xu, Yan; Tomsovic, Kevin

    2016-01-01

    In this paper, we propose an optimal bidding strategy in the day-ahead market of a microgrid consisting of intermittent distributed generation (DG), storage, dispatchable DG and price responsive loads. The microgrid coordinates the energy consumption or production of its components and trades electricity in both the day-ahead and real-time markets to minimize its operating cost as a single entity. The bidding problem is challenging due to a variety of uncertainties, including power output of intermittent DG, load variation, day-ahead and real-time market prices. A hybrid stochastic/robust optimization model is proposed to minimize the expected net cost, i.e., expected total cost of operation minus total benefit of demand. This formulation can be solved by mixed integer linear programming. The uncertain output of intermittent DG and day-ahead market price are modeled via scenarios based on forecast results, while a robust optimization is proposed to limit the unbalanced power in the real-time market taking account of the uncertainty of real-time market price. Numerical simulations on a microgrid consisting of a wind turbine, a PV panel, a fuel cell, a micro-turbine, a diesel generator, a battery and a responsive load show the advantage of stochastic optimization in addition to robust optimization.
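
    To make the two-stage structure concrete, here is a minimal scenario-based sketch in Python: day-ahead purchases are first-stage decisions, and per-scenario real-time purchases/sales are recourse, with expected cost minimized. All prices, loads, probabilities, and the reduction to a pure LP (no unit-commitment binaries, no robust term) are illustrative assumptions, not the paper's model.

    ```python
    # Two-stage stochastic LP sketch for day-ahead bidding (illustrative data).
    import numpy as np
    from scipy.optimize import linprog

    T, S = 2, 3                                    # hours, scenarios
    da_price = np.array([30.0, 35.0])              # day-ahead price [$/MWh]
    p = np.array([0.3, 0.4, 0.3])                  # scenario probabilities
    rt_buy = np.array([[40., 45.], [50., 55.], [60., 48.]])   # RT purchase price
    rt_sell = np.array([[20., 22.], [25., 28.], [30., 24.]])  # RT sale price
    load = np.array([[1.0, 1.2], [1.4, 1.1], [0.8, 1.5]])     # net load [MWh]

    # Variable layout: [x (T) | r_plus (S*T) | r_minus (S*T)]
    n = T + 2 * S * T
    c = np.concatenate([da_price,
                        (p[:, None] * rt_buy).ravel(),     # expected RT purchase cost
                        -(p[:, None] * rt_sell).ravel()])  # expected RT sale revenue

    # Balance per scenario and hour: x_t + r_plus[s,t] - r_minus[s,t] = load[s,t]
    A_eq = np.zeros((S * T, n))
    for s in range(S):
        for t in range(T):
            row = s * T + t
            A_eq[row, t] = 1.0
            A_eq[row, T + s * T + t] = 1.0
            A_eq[row, T + S * T + s * T + t] = -1.0
    b_eq = load.ravel()

    res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * n)
    print("day-ahead bids:", res.x[:T], " expected cost:", res.fun)
    ```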

  5. Total variation-based neutron computed tomography

    NASA Astrophysics Data System (ADS)

    Barnard, Richard C.; Bilheux, Hassina; Toops, Todd; Nafziger, Eric; Finney, Charles; Splitter, Derek; Archibald, Rick

    2018-05-01

    We formulate the neutron computed tomography reconstruction problem as an inverse problem with a total variation penalty. In the case of highly under-resolved angular measurements, the total variation penalty suppresses the high-frequency artifacts which appear in filtered back projections. In order to compute solutions efficiently, we implement a variation of the split Bregman algorithm; due to the error-forgetting nature of the algorithm, the computational cost of the updates can be significantly reduced via very inexact approximate linear solvers. We demonstrate the effectiveness of the algorithm in the severely angularly undersampled case using synthetic test problems as well as data obtained from a high-flux neutron source. The algorithm removes artifacts and can roughly capture small features even when an extremely low number of angles is used.
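
    The core computation can be illustrated on the simpler TV denoising problem. The sketch below is a minimal split Bregman solver for the anisotropic ROF model min_u |∇u|_1 + (μ/2)‖u − f‖² with periodic boundaries; it stands in for the tomographic case, where the data term involves the projection operator and where the inexact linear solvers mentioned above become relevant. Parameter values and the phantom are illustrative.

    ```python
    import numpy as np

    def grad(u):
        # Forward differences with periodic wrap-around
        return np.roll(u, -1, 0) - u, np.roll(u, -1, 1) - u

    def div(px, py):
        # Backward-difference divergence; grad^T = -div with these conventions
        return px - np.roll(px, 1, 0) + py - np.roll(py, 1, 1)

    def shrink(v, t):
        # Soft thresholding: the closed-form d-subproblem
        return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

    def tv_denoise(f, mu=10.0, lam=2.0, iters=100):
        n1, n2 = f.shape
        # Fourier symbol of grad^T grad (the negative periodic Laplacian)
        k1 = 2.0 - 2.0 * np.cos(2.0 * np.pi * np.arange(n1) / n1)
        k2 = 2.0 - 2.0 * np.cos(2.0 * np.pi * np.arange(n2) / n2)
        denom = mu + lam * (k1[:, None] + k2[None, :])
        u = f.copy()
        dx, dy, bx, by = (np.zeros_like(f) for _ in range(4))
        for _ in range(iters):
            # u-subproblem: (mu I + lam grad^T grad) u = mu f - lam div(d - b)
            rhs = mu * f - lam * div(dx - bx, dy - by)
            u = np.real(np.fft.ifft2(np.fft.fft2(rhs) / denom))
            # d-subproblem (shrinkage), then Bregman update
            ux, uy = grad(u)
            dx, dy = shrink(ux + bx, 1.0 / lam), shrink(uy + by, 1.0 / lam)
            bx, by = bx + ux - dx, by + uy - dy
        return u

    rng = np.random.default_rng(0)
    f = np.zeros((64, 64)); f[16:48, 16:48] = 1.0    # blocky phantom TV preserves
    u = tv_denoise(f + 0.2 * rng.normal(size=f.shape))
    ```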

  6. Two Methods for Efficient Solution of the Hitting-Set Problem

    NASA Technical Reports Server (NTRS)

    Vatan, Farrokh; Fijany, Amir

    2005-01-01

    A paper addresses much of the same subject matter as that of Fast Algorithms for Model-Based Diagnosis (NPO-30582), which appears elsewhere in this issue of NASA Tech Briefs. However, in the paper, the emphasis is more on the hitting-set problem (also known as the transversal problem), which is well known among experts in combinatorics. The authors' primary interest in the hitting-set problem lies in its connection to the diagnosis problem: it is a theorem of model-based diagnosis that, in the set-theory representation of the components of a system, the minimal diagnoses of a system are the minimal hitting sets of the system. In the paper, the hitting-set problem (and, hence, the diagnosis problem) is translated from a combinatorial to a computational problem by mapping it onto the Boolean satisfiability and integer-programming problems. The paper goes on to describe developments nearly identical to those summarized in the cited companion NASA Tech Briefs article, including the utilization of Boolean-satisfiability and integer-programming techniques to reduce the computation time and/or memory needed to solve the hitting-set problem.
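
    The integer-programming translation has a compact standard form: for conflict sets $S_1, \dots, S_m$ over a universe $U$ of components, a minimum hitting set solves

    $$\min \sum_{e \in U} x_e \quad \text{s.t.} \quad \sum_{e \in S_i} x_e \ge 1 \;\; (i = 1, \dots, m), \qquad x_e \in \{0,1\},$$

    and, by the theorem cited above, an optimal 0/1 vector yields a minimum-cardinality diagnosis.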

  7. Evolutionary Optimization of a Geometrically Refined Truss

    NASA Technical Reports Server (NTRS)

    Hull, P. V.; Tinker, M. L.; Dozier, G. V.

    2007-01-01

    Structural optimization is a field of research that has experienced noteworthy growth for many years. Researchers in this area have developed optimization tools to successfully design and model structures, typically minimizing mass while maintaining certain deflection and stress constraints. Numerous optimization studies have been performed to minimize mass, deflection, and stress on a benchmark cantilever truss problem; traditional optimization theory is predominantly applied to this problem, with the cross-sectional area of each member optimized to minimize the aforementioned objectives. This Technical Publication (TP) presents a structural optimization technique that has previously been applied to compliant mechanism design. The technique combines topology optimization, geometric refinement, finite element analysis, and two forms of evolutionary computation, genetic algorithms and differential evolution, to optimize a benchmark structural optimization problem. A nontraditional solution to the benchmark problem is presented in this TP, specifically a geometrically refined topological solution. The design process begins with an alternate control mesh formulation, a multilevel geometric smoothing operation, and an elastostatic structural analysis. The design process is wrapped in an evolutionary computing optimization toolset.

  8. Variational formulation for Black-Scholes equations in stochastic volatility models

    NASA Astrophysics Data System (ADS)

    Gyulov, Tihomir B.; Valkov, Radoslav L.

    2012-11-01

    In this note we prove existence and uniqueness of weak solutions to a boundary value problem arising from stochastic volatility models in financial mathematics. Our settings are variational in weighted Sobolev spaces. Nevertheless, as it will become apparent our variational formulation agrees well with the stochastic part of the problem.

  9. Traffic routing for multicomputer networks with virtual cut-through capability

    NASA Technical Reports Server (NTRS)

    Kandlur, Dilip D.; Shin, Kang G.

    1992-01-01

    Consideration is given to the problem of selecting routes for interprocess communication in a network with virtual cut-through capability, while balancing the network load and minimizing the number of times that a message gets buffered. An approach is proposed that formulates the route selection problem as a minimization problem with a link cost function that depends upon the traffic through the link. The form of this cost function is derived using the probability of establishing a virtual cut-through route. The route selection problem is shown to be NP-hard, and an algorithm is developed to incrementally reduce the cost by rerouting the traffic. The performance of this algorithm is exemplified by two network topologies: the hypercube and the C-wrapped hexagonal mesh.

  10. The minimal residual QR-factorization algorithm for reliably solving subset regression problems

    NASA Technical Reports Server (NTRS)

    Verhaegen, M. H.

    1987-01-01

    A new algorithm for solving subset regression problems is described, called the minimal residual QR factorization algorithm (MRQR). This scheme performs a QR factorization with a new column pivoting strategy based on the change in the residual of the least squares problem. Furthermore, it is demonstrated that this basic scheme can be extended in a numerically efficient way to combine the advantages of existing numerical procedures, such as the singular value decomposition, with those of more classical statistical procedures, such as stepwise regression. This extension is presented as an advisory expert system that guides the user in solving the subset regression problem. The advantages of the new procedure are highlighted by a numerical example.
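
    The pivoting idea can be sketched as a greedy, residual-driven column selection (the spirit of the MRQR strategy, although this illustration uses repeated dense least-squares solves rather than an updated QR factorization):

    ```python
    import numpy as np

    def greedy_subset_regression(A, y, k):
        """Select k columns of A, each time picking the column whose
        inclusion most reduces the least-squares residual norm."""
        m, n = A.shape
        selected, best_norm = [], np.inf
        for _ in range(k):
            best_j = None
            for j in range(n):
                if j in selected:
                    continue
                cols = selected + [j]
                coef, *_ = np.linalg.lstsq(A[:, cols], y, rcond=None)
                r = np.linalg.norm(y - A[:, cols] @ coef)
                if best_j is None or r < best_norm:
                    best_j, best_norm = j, r
            selected.append(best_j)
        return selected, best_norm

    rng = np.random.default_rng(1)
    A = rng.normal(size=(50, 10))
    y = A[:, [2, 7]] @ np.array([1.5, -2.0]) + 0.01 * rng.normal(size=50)
    print(greedy_subset_regression(A, y, 2))   # expect columns 2 and 7
    ```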

  11. Accelerating an Ordered-Subset Low-Dose X-Ray Cone Beam Computed Tomography Image Reconstruction with a Power Factor and Total Variation Minimization.

    PubMed

    Huang, Hsuan-Ming; Hsiao, Ing-Tsung

    2016-01-01

    In recent years, there has been increased interest in low-dose X-ray cone beam computed tomography (CBCT) in many fields, including dentistry, guided radiotherapy and small animal imaging. Despite reducing the radiation dose, low-dose CBCT has not gained widespread acceptance in routine clinical practice. In addition to performing more evaluation studies, developing a fast and high-quality reconstruction algorithm is required. In this work, we propose an iterative reconstruction method that accelerates ordered-subsets (OS) reconstruction using a power factor, and we combine it with the total-variation (TV) minimization method. Both simulation and phantom studies were conducted to evaluate the performance of the proposed method. Results show that the proposed method can accelerate conventional OS methods, greatly increasing the convergence speed in early iterations. Moreover, applying the TV minimization to the power acceleration scheme can further improve the image quality while preserving the fast convergence rate.

  12. Accelerating an Ordered-Subset Low-Dose X-Ray Cone Beam Computed Tomography Image Reconstruction with a Power Factor and Total Variation Minimization

    PubMed Central

    Huang, Hsuan-Ming; Hsiao, Ing-Tsung

    2016-01-01

    In recent years, there has been increased interest in low-dose X-ray cone beam computed tomography (CBCT) in many fields, including dentistry, guided radiotherapy and small animal imaging. Despite reducing the radiation dose, low-dose CBCT has not gained widespread acceptance in routine clinical practice. In addition to performing more evaluation studies, developing a fast and high-quality reconstruction algorithm is required. In this work, we propose an iterative reconstruction method that accelerates ordered-subsets (OS) reconstruction using a power factor, and we combine it with the total-variation (TV) minimization method. Both simulation and phantom studies were conducted to evaluate the performance of the proposed method. Results show that the proposed method can accelerate conventional OS methods, greatly increasing the convergence speed in early iterations. Moreover, applying the TV minimization to the power acceleration scheme can further improve the image quality while preserving the fast convergence rate. PMID:27073853

  13. Singularities of the quad curl problem

    NASA Astrophysics Data System (ADS)

    Nicaise, Serge

    2018-04-01

    We consider the quad curl problem in smooth and nonsmooth domains of the space. We first give an augmented variational formulation equivalent to the one from [25] if the datum is divergence free. We describe the singularities of the variational space, which correspond to those of the Maxwell system with perfectly conducting boundary conditions. The edge and corner singularities of the solution of the corresponding boundary value problem with smooth data are also characterized. We finally obtain some regularity results for the variational solution.

  14. Evolving hard problems: Generating human genetics datasets with a complex etiology.

    PubMed

    Himmelstein, Daniel S; Greene, Casey S; Moore, Jason H

    2011-07-07

    A goal of human genetics is to discover genetic factors that influence individuals' susceptibility to common diseases. Most common diseases are thought to result from the joint failure of two or more interacting components instead of single component failures. This greatly complicates both the task of selecting informative genetic variants and the task of modeling interactions between them. We and others have previously developed algorithms to detect and model the relationships between these genetic factors and disease. Previously these methods have been evaluated with datasets simulated according to pre-defined genetic models. Here we develop and evaluate a model-free evolution strategy to generate datasets which display a complex relationship between individual genotype and disease susceptibility. We show that this model-free approach is capable of generating a diverse array of datasets with distinct gene-disease relationships for an arbitrary interaction order and sample size. We specifically generate eight hundred Pareto fronts, one for each independent run of our algorithm. In each run the predictiveness of single genetic variants and pairs of genetic variants has been minimized, while the predictiveness of third-, fourth-, or fifth-order combinations is maximized. Two hundred runs of the algorithm are further dedicated to creating datasets with predictive fourth- or fifth-order interactions and minimized lower-level effects. This method and the resulting datasets will allow the capabilities of novel methods to be tested without pre-specified genetic models, allowing researchers to evaluate which methods will succeed on human genetics problems where the model is not known in advance. We further make freely available to the community the entire Pareto-optimal front of datasets from each run so that novel methods may be rigorously evaluated. These 76,600 datasets are available from http://discovery.dartmouth.edu/model_free_data/.

  15. HOMER: The hybrid optimization model for electric renewable

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lilienthal, P.; Flowers, L.; Rossmann, C.

    1995-12-31

    Hybrid renewable systems are often more cost-effective than grid extensions or isolated diesel generators for providing power to remote villages. There are a wide variety of hybrid systems being developed for village applications that have differing combinations of wind, photovoltaics, batteries, and diesel generators. Due to variations in loads and resources, determining the most appropriate combination of these components for a particular village is a difficult modelling task. To address this design problem the National Renewable Energy Laboratory has developed the Hybrid Optimization Model for Electric Renewables (HOMER). Existing models are either too detailed for screening analysis or too simple for reliable estimation of performance. HOMER is a design optimization model that determines the configuration, dispatch, and load management strategy that minimizes life-cycle costs for a particular site and application. This paper describes the HOMER methodology and presents representative results.

  16. An Efficient Offloading Scheme For MEC System Considering Delay and Energy Consumption

    NASA Astrophysics Data System (ADS)

    Sun, Yanhua; Hao, Zhe; Zhang, Yanhua

    2018-01-01

    With the increasing number of mobile devices, mobile edge computing (MEC), which provides cloud computing capabilities proximate to mobile devices in 5G networks, has been envisioned as a promising paradigm to enhance the user experience. In this paper, we investigate a joint consideration of delay and energy consumption offloading scheme (JCDE) for MEC systems in 5G heterogeneous networks. An optimization problem is formulated to minimize the delay as well as the energy consumption of the offloading system, in which the delay and energy consumption of transmitting and computing tasks are taken into account. We adopt an iterative greedy algorithm to solve the optimization problem. Furthermore, simulations were carried out to validate the utility and effectiveness of our proposed scheme, and the effect of parameter variations on the system is analysed as well. Numerical results demonstrate the delay and energy-efficiency improvement of our proposed scheme compared with a previously published scheme.

  17. Ginzburg-Landau Theory for Flux Phase and Superconductivity in t-J Model

    NASA Astrophysics Data System (ADS)

    Kuboki, Kazuhiro

    2018-02-01

    Ginzburg-Landau (GL) equations and GL free energy for flux phase and superconductivity are derived microscopically from the t-J model on a square lattice. Order parameter (OP) for the flux phase has direct coupling to a magnetic field, in contrast to the superconducting OP which has minimal coupling to a vector potential. Therefore, when the flux phase OP has unidirectional spatial variation, staggered currents would flow in a perpendicular direction. The derived GL theory can be used for various problems in high-Tc cuprate superconductors, e.g., states near a surface or impurities, and the effect of an external magnetic field. Since the GL theory derived microscopically directly reflects the electronic structure of the system, e.g., the shape of the Fermi surface that changes with doping, it can provide more useful information than that from phenomenological GL theories.

  18. Tilted-axis wobbling in odd-mass nuclei

    NASA Astrophysics Data System (ADS)

    Budaca, R.

    2018-02-01

    A triaxial rotor Hamiltonian with a rigidly aligned high-j quasiparticle is treated by a time-dependent variational principle, using angular momentum coherent states. The resulting classical energy function has three unique critical points in a space of generalized conjugate coordinates, which can minimize the energy for specific ordering of the inertial parameters and a fixed angular momentum state. Because of the symmetry of the problem, there are only two unique solutions, corresponding to wobbling motion around a principal axis and, respectively, a tilted axis. The wobbling frequencies are obtained after a quantization procedure and then used to calculate E 2 and M 1 transition probabilities. The analytical results are employed in the study of the wobbling excitations of 135Pr nucleus, which is found to undergo a transition from low angular momentum transverse wobbling around a principal axis toward a tilted-axis wobbling at higher angular momentum.

  19. False positives in psychiatric diagnosis: implications for human freedom.

    PubMed

    Wakefield, Jerome C

    2010-02-01

    Current symptom-based DSM and ICD diagnostic criteria for mental disorders are prone to yielding false positives because they ignore the context of symptoms. This is often seen as a benign flaw because problems of living and emotional suffering, even if not true disorders, may benefit from support and treatment. However, diagnosis of a disorder in our society has many ramifications not only for treatment choice but for broader social reactions to the diagnosed individual. In particular, mental disorders impose a sick role on individuals and place a burden upon them to change; thus, disorders decrease the level of respect and acceptance generally accorded to those with even annoying normal variations in traits and features. Thus, minimizing false positives is important to a pluralistic society. The harmful dysfunction analysis of disorder is used to diagnose the sources of likely false positives, and propose potential remedies to the current weaknesses in the validity of diagnostic criteria.

  20. Determination of Caffeine and Other Purine Compounds in Food and Pharmaceuticals by Micellar Electrokinetic Chromatography

    NASA Astrophysics Data System (ADS)

    Vogt, Carla; Contradi, S.; Rohde, E.

    1997-09-01

    Capillary electrophoresis is a modern separation technique; its extremely high efficiencies and minimal requirements with regard to buffers, samples, and solvents have led to a dramatic increase in applications over the last few years. This paper offers an introduction to the technique of micellar electrokinetic chromatography as a special kind of capillary electrophoresis. Caffeine and other purine compounds have been determined in foodstuffs (tea, coffee, cocoa) as well as in pharmaceutical formulations. Different sample preparation procedures, developed with regard to the special properties of the sample matrices, are discussed in the paper; this preparation facilitates the separation in many cases. Students therefore have to solve a relatively simple separation problem by variation of buffer pH, buffer components, and separation parameters. By performing a calibration for the analyzed purine compounds they learn about reproducibility in capillary electrophoresis.

  1. Energy levels of one-dimensional systems satisfying the minimal length uncertainty relation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bernardo, Reginald Christian S., E-mail: rcbernardo@nip.upd.edu.ph; Esguerra, Jose Perico H., E-mail: jesguerra@nip.upd.edu.ph

    2016-10-15

    The standard approach to calculating the energy levels for quantum systems satisfying the minimal length uncertainty relation is to solve an eigenvalue problem involving a fourth- or higher-order differential equation in quasiposition space. It is shown that the problem can be reformulated so that the energy levels of these systems can be obtained by solving only a second-order quasiposition eigenvalue equation. Through this formulation the energy levels are calculated for the following potentials: particle in a box, harmonic oscillator, Pöschl–Teller well, Gaussian well, and double-Gaussian well. For the particle in a box, the second-order quasiposition eigenvalue equation is a second-order differential equation with constant coefficients. For the harmonic oscillator, Pöschl–Teller well, Gaussian well, and double-Gaussian well, a method that involves using Wronskians has been used to solve the second-order quasiposition eigenvalue equation. It is observed for all of these quantum systems that the introduction of a nonzero minimal length uncertainty induces a positive shift in the energy levels. It is shown that the calculation of energy levels in systems satisfying the minimal length uncertainty relation is not limited to a small number of problems like particle in a box and the harmonic oscillator but can be extended to a wider class of problems involving potentials such as the Pöschl–Teller and Gaussian wells.
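
    The minimal length uncertainty relation underlying these systems is commonly written in the Kempf–Mangano–Mann form (where $\beta$ denotes the deformation parameter):

    $$\Delta x\,\Delta p \;\ge\; \frac{\hbar}{2}\left(1 + \beta\,(\Delta p)^2\right), \qquad \Delta x_{\min} = \hbar\sqrt{\beta},$$

    which follows from the modified commutator $[\hat{x}, \hat{p}] = i\hbar\,(1 + \beta \hat{p}^2)$.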

  2. Geothermal Energy: Prospects and Problems

    ERIC Educational Resources Information Center

    Ritter, William W.

    1973-01-01

    An examination of geothermal energy as a means of increasing the United States power resources with minimal pollution problems. Developed and planned geothermal-electric power installations around the world, capacities, installation dates, etc., are reviewed. Environmental impact, problems, etc. are discussed. (LK)

  3. Abstract generalized vector quasi-equilibrium problems in noncompact Hadamard manifolds.

    PubMed

    Lu, Haishu; Wang, Zhihua

    2017-01-01

    This paper deals with the abstract generalized vector quasi-equilibrium problem in noncompact Hadamard manifolds. We prove the existence of solutions to the abstract generalized vector quasi-equilibrium problem under suitable conditions and provide applications to an abstract vector quasi-equilibrium problem, a generalized scalar equilibrium problem, a scalar equilibrium problem, and a perturbed saddle point problem. Finally, as an application of the existence of solutions to the generalized scalar equilibrium problem, we obtain a weakly mixed variational inequality and two mixed variational inequalities. The results presented in this paper unify and generalize many known results in the literature.

  4. Relativized problems with abelian phase group in topological dynamics.

    PubMed

    McMahon, D

    1976-04-01

    Let (X, T) be the equicontinuous minimal transformation group with X = ∏_{i=1}^∞ Z_2, the Cantor group, and S = ⊕_{i=1}^∞ Z_2 endowed with the discrete topology, acting on X by right multiplication. For any countable group T we construct a function F: X × S → T such that if (Y, T) is a minimal transformation group, then (X × Y, S) is a minimal transformation group with the action defined by (x, y)s = [xs, yF(x, s)]. If (W, T) is a minimal transformation group and φ: (Y, T) → (W, T) is a homomorphism, then identity × φ: (X × Y, S) → (X × W, S) is a homomorphism and has many of the same properties that φ has. For this reason, one may assume that the phase group is abelian (or S) without loss of generality for many relativized problems in topological dynamics.

  5. Limit behavior of mass critical Hartree minimization problems with steep potential wells

    NASA Astrophysics Data System (ADS)

    Guo, Yujin; Luo, Yong; Wang, Zhi-Qiang

    2018-06-01

    We consider minimizers of the following mass critical Hartree minimization problem: $e_\lambda(N) := \inf\{E_\lambda(u) : u \in H^1(\mathbb{R}^d),\ \|u\|_2^2 = N\}$, where $d \ge 3$, $\lambda > 0$, and the Hartree energy functional $E_\lambda(u)$ is defined by $$E_\lambda(u) := \int_{\mathbb{R}^d} |\nabla u(x)|^2\,dx + \lambda \int_{\mathbb{R}^d} g(x)\,u^2(x)\,dx - \frac{1}{2}\int_{\mathbb{R}^d}\int_{\mathbb{R}^d} \frac{u^2(x)\,u^2(y)}{|x-y|^2}\,dx\,dy.$$ Here the steep potential $g(x)$ satisfies $0 = g(0) = \inf_{\mathbb{R}^d} g(x) \le g(x) \le 1$ and $1 - g(x) \in L^{d/2}(\mathbb{R}^d)$. We prove that there exists a constant $N^* > 0$, independent of $\lambda$ and $g(x)$, such that if $N \ge N^*$, then $e_\lambda(N)$ does not admit minimizers for any $\lambda > 0$; if $0 < N < N^*$, then there exists a constant $\lambda^*(N) > 0$ such that $e_\lambda(N)$ admits minimizers for any $\lambda > \lambda^*(N)$ and does not admit minimizers for $0 < \lambda < \lambda^*(N)$. For any given $0 < N < N^*$, the limit behavior of positive minimizers of $e_\lambda(N)$ is also studied as $\lambda \to \infty$, where the mass concentrates at the bottom of $g(x)$.

  6. Implications of random variation in the Stand Prognosis Model

    Treesearch

    David A. Hamilton

    1991-01-01

    Although the Stand Prognosis Model has several stochastic components, features have been included in the model in an attempt to minimize run-to-run variation attributable to these stochastic components. This has led many users to assume that comparisons of management alternatives could be made based on a single run of the model for each alternative. Recent analyses...

  7. A Variational Property of the Velocity Distribution in a System of Material Particles

    ERIC Educational Resources Information Center

    Siboni, S.

    2009-01-01

    A simple variational property concerning the velocity distribution of a set of point particles is illustrated. This property provides a full characterization of the velocity distribution which minimizes the kinetic energy of the system for prescribed values of linear and angular momentum. Such a characterization is applied to discuss the kinetic…

  8. Microtropography and water table fluctuation in a sphagnum mire

    Treesearch

    E.S. Verry

    1984-01-01

    A detailed organic soil profile description, 22 years of continuous water table records, and a hummock-hollow level survey were examined at a small Minnesota mire (a bog with remnants of poor fen vegetation). Variation in the level survey suggests that hollows be used to minimize variation when detailed topographic information is needed and to match profile...

  9. Diving wedges

    NASA Astrophysics Data System (ADS)

    Vincent, Lionel; Kanso, Eva

    2017-11-01

    Diving induces large pressures during water entry, accompanied by the creation of a cavity behind the diver and a water splash ejected from the free water surface. To minimize impact forces, divers streamline their shape at impact. Here, we investigate the impact forces and splash evolution of diving wedges as a function of the wedge opening angle. A gradual transition from impactful to smooth entry is observed as the wedge angle decreases. After submersion, diving wedges experience significantly smaller drag forces (two-fold smaller) than immersed wedges. We characterize the shapes of the cavity and splash created by the wedge and find that they are independent of the entry velocity at short times, but that the splash exhibits distinct variations in shape at later times. Combining an experimental approach with a discrete fluid particle model, we show that the splash shape is governed by a destabilizing Venturi-suction force due to air rushing between the splash and the water surface and a stabilizing force due to surface tension. These findings may have implications for a wide range of water-entry problems in engineering and biology, including naval engineering, disease spreading, and platform diving. This work was funded by the National Science Foundation.

  10. Microprobes For Blood Flow Measurements In Tissue And Small Vessels

    NASA Astrophysics Data System (ADS)

    Oberg, P. A.; Salerud, E. G.

    1988-04-01

    Laser Doppler flowmetry is a method for the continuous and non-invasive recording of tissue blood flow. The method has already proved advantageous in a number of clinical as well as theoretical medical disciplines. In dermatology and in plastic and gastrointestinal surgery, laser Doppler measurements have substantially contributed to increased knowledge of microvascular perfusion. In experimental medicine, the method has been used in the study of a great variety of microvascular problems. Spontaneous rhythmical variations and spatial and temporal fluctuations in human skin blood flow are examples of problem areas in which new knowledge has been generated. The method has facilitated further investigations of the nature of spongeous bone blood flow, testis blood flow, and kidney cortex blood flow. Recently we have shown that a variant of the laser Doppler principle, using a single optical fiber, can be advantageous in deep tissue measurements. With this method laser light is transmitted bidirectionally in a single fiber. The tissue trauma which affects blood flow can be minimized by using small-diameter fibers (0.1-0.5 mm). A special set-up utilizing the same basic principle has been used for recording blood flow in small vessels.

  11. A novel discrete PSO algorithm for solving job shop scheduling problem to minimize makespan

    NASA Astrophysics Data System (ADS)

    Rameshkumar, K.; Rajendran, C.

    2018-02-01

    In this work, a discrete version of the PSO algorithm is proposed to minimize the makespan of a job shop. A novel schedule builder has been utilized to generate active schedules. The discrete PSO is tested using well-known benchmark problems available in the literature. The solutions produced by the proposed algorithm are compared with the best known solutions published in the literature, as well as with a hybrid particle swarm algorithm and a variable neighborhood search PSO algorithm. The solution construction methodology adopted in this study is found to be effective in producing good quality solutions for the various benchmark job-shop scheduling problems.

  12. Particle swarm optimization - Genetic algorithm (PSOGA) on linear transportation problem

    NASA Astrophysics Data System (ADS)

    Rahmalia, Dinita

    2017-08-01

    The Linear Transportation Problem (LTP) is a constrained optimization problem in which we want to minimize cost subject to the balance between supply and demand. Exact methods such as the northwest corner, Vogel, Russell, and minimal cost methods have been applied to approach the optimal solution. In this paper, we use a heuristic, Particle Swarm Optimization (PSO), to solve the linear transportation problem for any number of decision variables. In addition, we combine the mutation operator of the Genetic Algorithm (GA) with PSO to improve the solution. This method is called Particle Swarm Optimization - Genetic Algorithm (PSOGA). The simulations show that PSOGA can improve on the solutions produced by PSO alone.
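
    A minimal sketch of the PSOGA idea, assuming a penalty-function handling of the supply/demand balance (the problem data, penalty weight, and all hyperparameters are illustrative; the paper's encoding may differ):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    cost = np.array([[4., 8., 8.], [16., 24., 16.]])    # unit shipping costs
    supply, demand = np.array([76., 82.]), np.array([72., 43., 43.])

    def fitness(x):
        """Shipping cost plus a penalty for violating supply/demand balance."""
        x = x.reshape(cost.shape)
        pen = np.abs(x.sum(1) - supply).sum() + np.abs(x.sum(0) - demand).sum()
        return (cost * x).sum() + 1e3 * pen

    n, dim, w, c1, c2, pmut = 40, cost.size, 0.7, 1.5, 1.5, 0.1
    pos = rng.uniform(0, supply.max(), (n, dim))
    vel = np.zeros((n, dim))
    pbest, pbest_f = pos.copy(), np.array([fitness(p) for p in pos])
    gbest = pbest[pbest_f.argmin()].copy()

    for _ in range(500):
        r1, r2 = rng.random((2, n, dim))
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, 0.0, None)         # shipments stay nonnegative
        # GA-style mutation: randomly perturb a fraction of the components
        mask = rng.random((n, dim)) < pmut
        pos = np.clip(np.where(mask, pos + rng.normal(0, 2.0, (n, dim)), pos),
                      0.0, None)
        f = np.array([fitness(p) for p in pos])
        better = f < pbest_f
        pbest[better], pbest_f[better] = pos[better], f[better]
        gbest = pbest[pbest_f.argmin()].copy()

    print("best allocation:\n", gbest.reshape(cost.shape).round(1))
    ```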

  13. Spacecraft inertia estimation via constrained least squares

    NASA Technical Reports Server (NTRS)

    Keim, Jason A.; Acikmese, Behcet A.; Shields, Joel F.

    2006-01-01

    This paper presents a new formulation for spacecraft inertia estimation from test data. Specifically, the inertia estimation problem is formulated as a constrained least squares minimization problem with explicit bounds on the inertia matrix incorporated as LMIs (linear matrix inequalities). The resulting minimization problem is a semidefinite optimization that can be solved efficiently, with guaranteed convergence to the global optimum, by readily available algorithms. This method is applied to data collected from a robotic testbed consisting of a freely rotating body. The results show that the constrained least squares approach produces more accurate estimates of the inertia matrix than standard unconstrained least squares estimation methods.
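
    A minimal sketch of the constrained least-squares idea using cvxpy: the Euler rigid-body equation $J\dot{\omega} + \omega \times J\omega = \tau$ is linear in the entries of $J$, so the data term is a least-squares objective, and positive definiteness of the inertia matrix enters as an LMI. The synthetic data and the single constraint $J \succeq 10^{-3} I$ are illustrative assumptions rather than the paper's exact constraint set.

    ```python
    import numpy as np
    import cvxpy as cp

    def skew(w):
        # Matrix form of the cross product: skew(w) @ v == np.cross(w, v)
        return np.array([[0.0, -w[2], w[1]],
                         [w[2], 0.0, -w[0]],
                         [-w[1], w[0], 0.0]])

    rng = np.random.default_rng(2)
    J_true = np.diag([2.0, 3.0, 4.0])                # "unknown" inertia [kg m^2]
    omegas = rng.normal(size=(30, 3))                # measured angular rates
    wdots = rng.normal(size=(30, 3))                 # measured accelerations
    taus = np.array([J_true @ a + skew(w) @ (J_true @ w)
                     for w, a in zip(omegas, wdots)])
    taus += 0.01 * rng.normal(size=taus.shape)       # measurement noise

    J = cp.Variable((3, 3), PSD=True)                # symmetric PSD variable
    residuals = [J @ a + skew(w) @ (J @ w) - t       # Euler-equation residuals,
                 for w, a, t in zip(omegas, wdots, taus)]  # linear in J
    objective = cp.Minimize(sum(cp.sum_squares(r) for r in residuals))
    problem = cp.Problem(objective, [J >> 1e-3 * np.eye(3)])  # LMI lower bound
    problem.solve()
    print(np.round(J.value, 2))
    ```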

  14. Randomly Sampled-Data Control Systems. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Han, Kuoruey

    1990-01-01

    The purpose is to solve the Linear Quadratic Regulator (LQR) problem with random time sampling. Such a sampling scheme may arise from imperfect instrumentation, as in the case of sampling jitter; it can also model, for example, the stochastic information exchange among decentralized controllers. A practical suboptimal controller is proposed with the nice property of mean square stability. The proposed controller is suboptimal in the sense that the control structure is limited to be linear. Because of the i.i.d. sampling assumption, this does not seem unreasonable. Once the control structure is fixed, the stochastic discrete optimal control problem is transformed into an equivalent deterministic optimal control problem with dynamics described by a matrix difference equation. The N-horizon control problem is solved using the Lagrange multiplier method. The infinite horizon control problem is formulated as a classical minimization problem. Assuming existence of a solution to the minimization problem, the total system is shown to be mean square stable under certain observability conditions. Computer simulations are performed to illustrate these conditions.

  15. Effective Iterated Greedy Algorithm for Flow-Shop Scheduling Problems with Time lags

    NASA Astrophysics Data System (ADS)

    ZHAO, Ning; YE, Song; LI, Kaidian; CHEN, Siyu

    2017-05-01

    The flow shop scheduling problem with time lags is a practical scheduling problem that has attracted many studies. The permutation variant (PFSP with time lags) has received most of the attention, while the non-permutation variant (non-PFSP with time lags) seems to be neglected. With the aim of minimizing the makespan and satisfying time lag constraints, efficient algorithms for both the PFSP and non-PFSP problems are proposed: an iterated greedy algorithm for the permutation case (IGTLP) and an iterated greedy algorithm for the non-permutation case (IGTLNP). The proposed algorithms are verified using well-known simple and complex instances of permutation and non-permutation problems with various time lag ranges. The permutation results indicate that the proposed IGTLP can reach a near-optimal solution within nearly 11% of the computational time of the traditional GA approach. The non-permutation results indicate that the proposed IG can reach nearly the same solution within less than 1% of the computational time of the traditional GA approach. The proposed research treats the PFSP and non-PFSP together with minimal and maximal time lag considerations, which provides an interesting viewpoint for industrial implementation.

  16. Compressed modes for variational problems in mathematics and physics

    PubMed Central

    Ozoliņš, Vidvuds; Lai, Rongjie; Caflisch, Russel; Osher, Stanley

    2013-01-01

    This article describes a general formalism for obtaining spatially localized (“sparse”) solutions to a class of problems in mathematical physics, which can be recast as variational optimization problems, such as the important case of Schrödinger’s equation in quantum mechanics. Sparsity is achieved by adding an $L^1$ regularization term to the variational principle, which is shown to yield solutions with compact support (“compressed modes”). Linear combinations of these modes approximate the eigenvalue spectrum and eigenfunctions in a systematically improvable manner, and the localization properties of compressed modes make them an attractive choice for use with efficient numerical algorithms that scale linearly with the problem size. PMID:24170861
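
    A formulation consistent with the description above (notation ours; $\hat{H}$ is the Hamiltonian and the parameter $\mu$ weights sparsity against energy):

    $$E = \min_{\Psi} \sum_{j=1}^{N} \left( \frac{1}{\mu}\,\|\psi_j\|_1 + \langle \psi_j, \hat{H}\,\psi_j \rangle \right) \quad \text{s.t.} \quad \langle \psi_j, \psi_k \rangle = \delta_{jk},$$

    whose minimizers are the compactly supported compressed modes.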

  17. Compressed modes for variational problems in mathematics and physics.

    PubMed

    Ozolins, Vidvuds; Lai, Rongjie; Caflisch, Russel; Osher, Stanley

    2013-11-12

    This article describes a general formalism for obtaining spatially localized ("sparse") solutions to a class of problems in mathematical physics, which can be recast as variational optimization problems, such as the important case of Schrödinger's equation in quantum mechanics. Sparsity is achieved by adding an $L^1$ regularization term to the variational principle, which is shown to yield solutions with compact support ("compressed modes"). Linear combinations of these modes approximate the eigenvalue spectrum and eigenfunctions in a systematically improvable manner, and the localization properties of compressed modes make them an attractive choice for use with efficient numerical algorithms that scale linearly with the problem size.

  18. Investigating the Conceptual Variation of Major Physics Textbooks

    NASA Astrophysics Data System (ADS)

    Stewart, John; Campbell, Richard; Clanton, Jessica

    2008-04-01

    The conceptual problem content of the electricity and magnetism chapters of seven major physics textbooks was investigated. The textbooks presented a total of 1600 conceptual electricity and magnetism problems. The solution to each problem was decomposed into its fundamental reasoning steps. These fundamental steps are, then, used to quantify the distribution of conceptual content among the set of topics common to the texts. The variation of the distribution of conceptual coverage within each text is studied. The variation between the major groupings of the textbooks (conceptual, algebra-based, and calculus-based) is also studied. A measure of the conceptual complexity of the problems in each text is presented.

  19. Optimal control strategies using vaccination and fogging in a dengue fever transmission model

    NASA Astrophysics Data System (ADS)

    Fitria, Irma; Winarni, Pancahayani, Sigit; Subchan

    2017-08-01

    This paper discusses a model and an optimal control problem for dengue fever transmission. The model divides the population into human and vector (mosquito) classes. The human population has three subclasses: susceptible, infected, and resistant. The vector population is divided into wiggler (larva), susceptible, and infected vector classes. Thus, the model consists of six dynamic equations. To minimize the number of dengue fever cases, we designed two optimal control variables in the model: the application of fogging and vaccination. The objective of this optimal control problem is to minimize the number of infected humans, the number of vectors, and the cost of the control efforts. By applying fogging optimally, the number of vectors can be minimized. We considered vaccination as a control variable because it is one of the efforts being developed to reduce the spread of dengue fever. We used the Pontryagin Minimum Principle to solve the optimal control problem. Furthermore, numerical simulation results are given to show the effect of the optimal control strategies in minimizing the dengue fever epidemic.
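
    An objective functional of the kind described above typically takes the following form (our notation; the weights $A_i$, $c_i$ and the quadratic control cost are generic assumptions, not necessarily the paper's exact choice):

    $$J(u_1, u_2) = \int_0^{T} \left( A_1 I_h(t) + A_2 N_v(t) + c_1 u_1^2(t) + c_2 u_2^2(t) \right)\mathrm{d}t,$$

    where $I_h$ is the infected human population, $N_v$ the vector population, and $u_1, u_2$ the vaccination and fogging controls; the Pontryagin Minimum Principle then yields the optimality system via the associated Hamiltonian.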

  20. Flattening the inflaton potential beyond minimal gravity

    NASA Astrophysics Data System (ADS)

    Lee, Hyun Min

    2018-01-01

    We review the status of the Starobinsky-like models for inflation beyond minimal gravity and discuss the unitarity problem due to the presence of a large non-minimal gravity coupling. We show that the induced gravity models allow for a self-consistent description of inflation and discuss the implications of the inflaton couplings to the Higgs field in the Standard Model.

  1. Minimally conscious state or cortically mediated state?

    PubMed

    Naccache, Lionel

    2018-04-01

    Durable impairments of consciousness are currently classified in three main neurological categories: comatose state, vegetative state (also recently coined unresponsive wakefulness syndrome) and minimally conscious state. While the introduction of minimally conscious state, in 2002, was a major progress to help clinicians recognize complex non-reflexive behaviours in the absence of functional communication, it raises several problems. The most important issue related to minimally conscious state lies in its criteria: while behavioural definition of minimally conscious state lacks any direct evidence of patient's conscious content or conscious state, it includes the adjective 'conscious'. I discuss this major problem in this review and propose a novel interpretation of minimally conscious state: its criteria do not inform us about the potential residual consciousness of patients, but they do inform us with certainty about the presence of a cortically mediated state. Based on this constructive criticism review, I suggest three proposals aiming at improving the way we describe the subjective and cognitive state of non-communicating patients. In particular, I present a tentative new classification of impairments of consciousness that combines behavioural evidence with functional brain imaging data, in order to probe directly and univocally residual conscious processes.

  2. Design and optimal control of multi-spacecraft interferometric imaging systems

    NASA Astrophysics Data System (ADS)

    Chakravorty, Suman

    The objective of the proposed NASA Origins mission, Planet Imager, is the high-resolution imaging of exo-solar planets and similar high-resolution astronomical imaging applications. The imaging is to be accomplished through the design of multi-spacecraft interferometric imaging systems (MSIIS). In this dissertation, we study the design of MSIIS. Assuming that the ultimate goal of imaging is the correct classification of the formed images, we formulate the design problem as the minimization of some resource utilization of the system subject to the constraint that the probability of misclassification of any given image is below a pre-specified level. We model the process of image formation in an MSIIS and show that the Modulation Transfer Function of, and the noise corrupting, the synthesized optical instrument depend on the trajectories of the constituent spacecraft. Assuming that the final goal of imaging is the correct classification of the formed image based on a given feature (a real-valued function of the image variable) and a threshold on the feature, we find conditions on the noise corrupting the measurements such that the probability of misclassification is below some pre-specified level. These conditions translate into constraints on the trajectories of the constituent spacecraft. Thus, the design problem reduces to minimizing some resource utilization of the system while satisfying the constraints placed on the system by the imaging requirements. We study the problem of designing minimum-time maneuvers for MSIIS. We transform the time minimization problem into a "painting problem", which involves painting a large disk with smaller paintbrushes (coverage disks). We show that spirals form the dominant set for the solution to the painting problem. We frame the time minimization in the subspace of spirals and obtain a bilinear program, the double pantograph problem, in the design parameters of the spiral: the spiraling rate and the angular rate. We show that the solution of this problem is given by the solution to two associated linear programs. We illustrate our results through a simulation where the banded appearance of a fictitious exo-solar planet at a distance of 8 parsecs is detected.

  3. Finite element analysis of time-independent superconductivity. Ph.D. Thesis Final Report

    NASA Technical Reports Server (NTRS)

    Schuler, James J.

    1993-01-01

    The development of electromagnetic (EM) finite elements based upon a generalized four-potential variational principle is presented. The use of the four-potential variational principle allows for downstream coupling of EM fields with the thermal, mechanical, and quantum effects exhibited by superconducting materials. The use of variational methods to model an EM system allows for a greater range of applications than just the superconducting problem. The four-potential variational principle can be used to solve a broader range of EM problems than any of the currently available formulations. It also reduces the number of independent variables from six to four while easily dealing with conductor/insulator interfaces. This methodology was applied to a range of EM field problems. Results from all these problems predict EM quantities exceptionally well and are consistent with the expected physical behavior.

  4. Age-related changes in strategic variations during arithmetic problem solving: The role of executive control.

    PubMed

    Hinault, T; Lemaire, P

    2016-01-01

    In this review, we provide an overview of how age-related changes in executive control influence aging effects in arithmetic processing. More specifically, we consider the role of executive control in strategic variations with age during arithmetic problem solving. Previous studies found that age-related differences in arithmetic performance are associated with strategic variations. That is, when they accomplish arithmetic problem-solving tasks, older adults use fewer strategies than young adults, use strategies in different proportions, and select and execute strategies less efficiently. Here, we review recent evidence, suggesting that age-related changes in inhibition, cognitive flexibility, and working memory processes underlie age-related changes in strategic variations during arithmetic problem solving. We discuss both behavioral and neural mechanisms underlying age-related changes in these executive control processes. © 2016 Elsevier B.V. All rights reserved.

  5. What Does (and Doesn't) Make Analogical Problem Solving Easy? A Complexity-Theoretic Perspective

    ERIC Educational Resources Information Center

    Wareham, Todd; Evans, Patricia; van Rooij, Iris

    2011-01-01

    Solving new problems can be made easier if one can build on experiences with other problems one has already successfully solved. The ability to exploit earlier problem-solving experiences in solving new problems seems to require several cognitive sub-abilities. Minimally, one needs to be able to retrieve relevant knowledge of earlier solved…

  6. Surgery scheduling optimization considering real life constraints and comprehensive operation cost of operating room.

    PubMed

    Xiang, Wei; Li, Chong

    2015-01-01

    The Operating Room (OR) is the core sector of hospital expenditure; its operation management involves a complete three-stage surgery flow, multiple resources, prioritization of the various surgeries, and several real-life OR constraints. As such, reasonable surgery scheduling is crucial to OR management. To optimize OR management and reduce operation cost, a short-term surgery scheduling problem is proposed and defined based on a survey of OR operation in a typical hospital in China. The comprehensive operation cost is clearly defined, considering both under-utilization and over-utilization. A nested Ant Colony Optimization (nested-ACO) algorithm incorporating several real-life OR constraints is proposed to solve this combinatorial optimization problem. Ten days of manual surgery schedules from a hospital in China are compared with the optimized schedules produced by the nested-ACO. The comparison shows the advantage of using the nested-ACO in several measurements: OR-related time, nurse-related time, variation in resources' working time, and the end time. The nested-ACO, which considers real-life operation constraints such as the difference between first and following cases, surgery priorities, and fixed nurses in the pre/post-operative stages, clearly enhances OR management efficiency and minimizes the comprehensive overall operation cost.

  7. Inverse Electrocardiographic Source Localization of Ischemia: An Optimization Framework and Finite Element Solution

    PubMed Central

    Wang, Dafang; Kirby, Robert M.; MacLeod, Rob S.; Johnson, Chris R.

    2013-01-01

    With the goal of non-invasively localizing cardiac ischemic disease using body-surface potential recordings, we attempted to reconstruct the transmembrane potential (TMP) throughout the myocardium with the bidomain heart model. The task is an inverse source problem governed by partial differential equations (PDE). Our main contribution is solving the inverse problem within a PDE-constrained optimization framework that enables various physically-based constraints in both equality and inequality forms. We formulated the optimality conditions rigorously in the continuum before deriving finite element discretization, thereby making the optimization independent of discretization choice. Such a formulation was derived for the L2-norm Tikhonov regularization and the total variation minimization. The subsequent numerical optimization was fulfilled by a primal-dual interior-point method tailored to our problem’s specific structure. Our simulations used realistic, fiber-included heart models consisting of up to 18,000 nodes, much finer than any inverse models previously reported. With synthetic ischemia data we localized ischemic regions with roughly a 10% false-negative rate or a 20% false-positive rate under conditions up to 5% input noise. With ischemia data measured from animal experiments, we reconstructed TMPs with roughly 0.9 correlation with the ground truth. While precisely estimating the TMP in general cases remains an open problem, our study shows the feasibility of reconstructing TMP during the ST interval as a means of ischemia localization. PMID:23913980

  8. MULTIOBJECTIVE PARALLEL GENETIC ALGORITHM FOR WASTE MINIMIZATION

    EPA Science Inventory

    In this research we have developed an efficient multiobjective parallel genetic algorithm (MOPGA) for waste minimization problems. This MOPGA integrates PGAPack (Levine, 1996) and NSGA-II (Deb, 2000) with novel modifications. PGAPack is a master-slave parallel implementation of a...

  9. Finding Minimal Addition Chains with a Particle Swarm Optimization Algorithm

    NASA Astrophysics Data System (ADS)

    León-Javier, Alejandro; Cruz-Cortés, Nareli; Moreno-Armendáriz, Marco A.; Orantes-Jiménez, Sandra

    Addition chains of minimal length are the basic building block for the optimal computation of finite field exponentiations, with important applications in error-correcting codes and cryptography. However, obtaining the shortest addition chain for a given exponent is an NP-hard problem. In this work we propose the adaptation of a Particle Swarm Optimization algorithm to deal with this problem. Our proposal is tested on several exponents whose addition chains are considered hard to find. We obtained very promising results.
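
    An addition chain for n starts at 1, ends at n, and builds every later element as the sum of two earlier ones; any swarm-based search needs at minimum a validity test and a length-based fitness. A small sketch of those two ingredients (function names are hypothetical, and this is not the authors' PSO encoding):

      def is_addition_chain(chain, n):
          """Check that chain starts at 1, ends at n, and that every later
          element is the sum of two (possibly equal) earlier elements."""
          if not chain or chain[0] != 1 or chain[-1] != n:
              return False
          for i in range(1, len(chain)):
              prefix = chain[:i]
              if not any(a + b == chain[i] for a in prefix for b in prefix):
                  return False
          return True

      def fitness(chain, n):
          """Chain length to minimize; invalid candidates get a large penalty."""
          return len(chain) - 1 if is_addition_chain(chain, n) else float("inf")

      # A length-7 chain for 31: each element is a sum of two earlier ones.
      print(fitness([1, 2, 3, 6, 12, 24, 30, 31], 31))  # -> 7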

  10. Hamilton's Principle and Approximate Solutions to Problems in Classical Mechanics

    ERIC Educational Resources Information Center

    Schlitt, D. W.

    1977-01-01

    Shows how to use the Ritz method for obtaining approximate solutions to problems expressed in variational form directly from the variational equation. Application of this method to classical mechanics is given. (MLH)
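
    As a standard textbook illustration of the Ritz idea (not drawn from the article itself), minimize J[y] = \int_0^1 (\tfrac{1}{2} y'^2 - y)\,dx subject to y(0) = y(1) = 0, whose Euler equation is y'' = -1. Substituting the one-parameter trial function y_c(x) = c\,x(1-x) gives

      J(c) = \int_0^1 \Big[ \tfrac{1}{2} c^2 (1-2x)^2 - c\,x(1-x) \Big]\,dx
           = \frac{c^2}{6} - \frac{c}{6}, \qquad
      J'(c) = \frac{c}{3} - \frac{1}{6} = 0 \;\Rightarrow\; c = \frac{1}{2},

    so the Ritz approximation y(x) = x(1-x)/2 is obtained directly from the variational statement; here it happens to coincide with the exact solution, while richer trial families yield successively better approximations in general.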

  11. Variational Problems with Long-Range Interaction

    NASA Astrophysics Data System (ADS)

    Soave, Nicola; Tavares, Hugo; Terracini, Susanna; Zilio, Alessandro

    2018-06-01

    We consider a class of variational problems for densities that repel each other at a distance. Typical examples are given by the Dirichlet functional and the Rayleigh functional D(u) = \sum_{i=1}^k \int_{Ω} |\nabla u_i|^2 …

  12. Selecting a restoration technique to minimize OCR error.

    PubMed

    Cannon, M; Fugate, M; Hush, D R; Scovel, C

    2003-01-01

    This paper introduces a learning problem related to the task of converting printed documents to ASCII text files. The goal of the learning procedure is to produce a function that maps documents to restoration techniques in such a way that on average the restored documents have minimum optical character recognition error. We derive a general form for the optimal function and use it to motivate the development of a nonparametric method based on nearest neighbors. We also develop a direct method of solution based on empirical error minimization for which we prove a finite sample bound on estimation error that is independent of distribution. We show that this empirical error minimization problem is an extension of the empirical optimization problem for traditional M-class classification with general loss function and prove computational hardness for this problem. We then derive a simple iterative algorithm called generalized multiclass ratchet (GMR) and prove that it produces an optimal function asymptotically (with probability 1). To obtain the GMR algorithm we introduce a new data map that extends Kesler's construction for the multiclass problem and then apply an algorithm called Ratchet to this mapped data, where Ratchet is a modification of the Pocket algorithm. Finally, we apply these methods to a collection of documents and report on the experimental results.
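
    The nearest-neighbor method the authors motivate can be read as: restore with whichever technique has the lowest average OCR error over the k training documents most similar to the new one. A minimal sketch under that reading (the feature extraction and the per-technique error matrix are assumed given; names are illustrative):

      import numpy as np

      def choose_restoration(doc_feat, train_feats, train_errors, k=5):
          """Pick the restoration technique with lowest average OCR error
          among the k nearest training documents.
          train_errors[i, t] = OCR error of technique t on training doc i."""
          d = np.linalg.norm(train_feats - doc_feat, axis=1)
          nn = np.argsort(d)[:k]
          return np.argmin(train_errors[nn].mean(axis=0))

      feats = np.array([[0.0, 1.0], [1.0, 0.0], [0.9, 0.1]])
      errs = np.array([[0.10, 0.30], [0.25, 0.05], [0.20, 0.08]])
      print(choose_restoration(np.array([1.0, 0.0]), feats, errs, k=2))  # -> 1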

  13. Graphical approach for multiple valued logic minimization

    NASA Astrophysics Data System (ADS)

    Awwal, Abdul Ahad S.; Iftekharuddin, Khan M.

    1999-03-01

    Multiple valued logic (MVL) is sought for designing high complexity, highly compact, parallel digital circuits. However, the practical realization of an MVL-based system is dependent on optimization of cost, which directly affects the optical setup. We propose a minimization technique for MVL logic optimization based on graphical visualization, such as a Karnaugh map. The proposed method is utilized to solve signed-digit binary and trinary logic minimization problems. The usefulness of the minimization technique is demonstrated for the optical implementation of MVL circuits.

  14. Collective intelligence for control of distributed dynamical systems

    NASA Astrophysics Data System (ADS)

    Wolpert, D. H.; Wheeler, K. R.; Tumer, K.

    2000-03-01

    We consider the El Farol bar problem, also known as the minority game (W. B. Arthur, The American Economic Review, 84 (1994) 406; D. Challet and Y. C. Zhang, Physica A, 256 (1998) 514). We view it as an instance of the general problem of how to configure the nodal elements of a distributed dynamical system so that they do not "work at cross purposes", in that their collective dynamics avoids frustration and thereby achieves a provided global goal. We summarize a mathematical theory for such configuration applicable when (as in the bar problem) the global goal can be expressed as minimizing a global energy function and the nodes can be expressed as minimizers of local free energy functions. We show that a system designed with that theory performs nearly optimally for the bar problem.

  15. Open shop scheduling problem to minimize total weighted completion time

    NASA Astrophysics Data System (ADS)

    Bai, Danyu; Zhang, Zhihai; Zhang, Qiang; Tang, Mengqian

    2017-01-01

    A given number of jobs in an open shop scheduling environment must each be processed for given amounts of time on each of a given set of machines in an arbitrary sequence. This study aims to achieve a schedule that minimizes total weighted completion time. Owing to the strong NP-hardness of the problem, the weighted shortest processing time block (WSPTB) heuristic is presented to obtain approximate solutions for large-scale problems. Performance analysis proves the asymptotic optimality of the WSPTB heuristic in the sense of probability limits. The largest weight block rule is provided to seek optimal schedules in polynomial time for a special case. A hybrid discrete differential evolution algorithm is designed to obtain high-quality solutions for moderate-scale problems. Simulation experiments demonstrate the effectiveness of the proposed algorithms.
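
    The heuristic's name points back to Smith's weighted-shortest-processing-time (WSPT) rule, which is optimal for the single-machine version of this objective: sequence jobs by nonincreasing w_j/p_j. A sketch of that underlying rule (the block construction of WSPTB itself is not reproduced here):

      def wspt_order(jobs):
          """Smith's rule: sort (p_j, w_j) pairs by nonincreasing w_j / p_j.
          Optimal for minimizing sum w_j C_j on a single machine."""
          return sorted(jobs, key=lambda pw: pw[1] / pw[0], reverse=True)

      def total_weighted_completion(jobs):
          t, total = 0, 0
          for p, w in jobs:
              t += p              # completion time of this job
              total += w * t
          return total

      jobs = [(3, 1), (1, 4), (2, 2)]      # (processing time, weight)
      print(total_weighted_completion(wspt_order(jobs)))  # -> 16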

  16. ɛ-subgradient algorithms for bilevel convex optimization

    NASA Astrophysics Data System (ADS)

    Helou, Elias S.; Simões, Lucas E. A.

    2017-05-01

    This paper introduces and studies the convergence properties of a new class of explicit ɛ-subgradient methods for the task of minimizing a convex function over a set of minimizers of another convex minimization problem. The general algorithm specializes to some important cases, such as first-order methods applied to a varying objective function, which have computationally cheap iterations. We present numerical experimentation concerning certain applications where the theoretical framework encompasses efficient algorithmic techniques, enabling the use of the resulting methods to solve very large practical problems arising in tomographic image reconstruction. ES Helou was supported by FAPESP grants 2013/07375-0 and 2013/16508-3 and CNPq grant 311476/2014-7. LEA Simões was supported by FAPESP grants 2011/02219-4 and 2013/14615-7.

  17. The Global Survey Method Applied to Ground-level Cosmic Ray Measurements

    NASA Astrophysics Data System (ADS)

    Belov, A.; Eroshenko, E.; Yanke, V.; Oleneva, V.; Abunin, A.; Abunina, M.; Papaioannou, A.; Mavromichalaki, H.

    2018-04-01

    The global survey method (GSM) technique unites simultaneous ground-level observations of cosmic rays in different locations and allows us to obtain the main characteristics of cosmic-ray variations outside of the atmosphere and magnetosphere of Earth. This technique has been developed and applied in numerous studies over many years by the Institute of Terrestrial Magnetism, Ionosphere and Radiowave Propagation (IZMIRAN). We here describe the IZMIRAN version of the GSM in detail. With this technique, the hourly data of the world-wide neutron-monitor network from July 1957 until December 2016 were processed, and further processing is enabled upon the receipt of new data. The result is a database of homogeneous and continuous hourly characteristics of the density variations (an isotropic part of the intensity) and the 3D vector of the cosmic-ray anisotropy. It includes all of the effects that could be identified in galactic cosmic-ray variations that were caused by large-scale disturbances of the interplanetary medium in more than 50 years. These results in turn became the basis for a database on Forbush effects and interplanetary disturbances. This database allows correlating various space-environment parameters (the characteristics of the Sun, the solar wind, et cetera) with cosmic-ray parameters and studying their interrelations. We also present features of the coupling coefficients for different neutron monitors that enable us to make a connection from ground-level measurements to primary cosmic-ray variations outside the atmosphere and the magnetosphere. We discuss the strengths and weaknesses of the current version of the GSM as well as further possible developments and improvements. The method developed allows us to minimize the problems of the neutron-monitor network, which are typical for experimental physics, and to considerably enhance its advantages.

  18. WE-FG-207B-05: Iterative Reconstruction Via Prior Image Constrained Total Generalized Variation for Spectral CT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Niu, S; Zhang, Y; Ma, J

    Purpose: To investigate iterative reconstruction via prior image constrained total generalized variation (PICTGV) for spectral computed tomography (CT) using fewer projections while achieving greater image quality. Methods: The proposed PICTGV method is formulated as an optimization problem, which balances the data fidelity and the prior image constrained total generalized variation of the reconstructed images in one framework. The PICTGV method is based on structure correlations among images in the energy domain and uses high-quality images to guide the reconstruction of energy-specific images. In the PICTGV method, the high-quality image is reconstructed from all detector-collected X-ray signals and is referred to as the broad-spectrum image. Distinct from existing reconstruction methods that operate on the first-order derivative of the images, the PICTGV method incorporates the higher-order derivative of the images. An alternating optimization algorithm is used to minimize the PICTGV objective function. We evaluate the performance of PICTGV in suppressing noise and artifacts using phantom studies and compare the method with the conventional filtered back-projection method as well as a TGV-based method without a prior image. Results: On the digital phantom, the proposed method outperforms the existing TGV method in terms of noise reduction, artifact suppression, and edge detail preservation. Compared to that obtained by the TGV-based method without a prior image, the relative root mean square error in the images reconstructed by the proposed method is reduced by over 20%. Conclusion: The authors propose an iterative reconstruction via prior image constrained total generalized variation for spectral CT, develop an alternating optimization algorithm, and numerically demonstrate the merits of the approach. Results show that the proposed PICTGV method outperforms the TGV method for spectral CT.

  19. Relaxations to Sparse Optimization Problems and Applications

    NASA Astrophysics Data System (ADS)

    Skau, Erik West

    Parsimony is a fundamental property that is applied to many characteristics in a variety of fields. Of particular interest are optimization problems that apply rank, dimensionality, or support in a parsimonious manner. In this thesis we study some optimization problems and their relaxations, and focus on properties and qualities of the solutions of these problems. The Gramian tensor decomposition problem attempts to decompose a symmetric tensor as a sum of rank one tensors. We approach the Gramian tensor decomposition problem with a relaxation to a semidefinite program. We study conditions which ensure that the solution of the relaxed semidefinite problem gives the minimal Gramian rank decomposition. Sparse representations with learned dictionaries are one of the leading image modeling techniques for image restoration. When learning these dictionaries from a set of training images, the sparsity parameter of the dictionary learning algorithm strongly influences the content of the dictionary atoms. We describe geometrically the content of trained dictionaries and how it changes with the sparsity parameter. We use statistical analysis to characterize how the different content is used in sparse representations. Finally, a method to control the structure of the dictionaries is demonstrated, allowing us to learn a dictionary which can later be tailored for specific applications. Variations of dictionary learning can be broadly applied to a variety of applications. We explore a pansharpening problem with a triple factorization variant of coupled dictionary learning. Another application of dictionary learning is computer vision. Computer vision relies heavily on object detection, which we explore with a hierarchical convolutional dictionary learning model. Data fusion of disparate modalities is a growing topic of interest. We do a case study to demonstrate the benefit of using social media data with satellite imagery to estimate hazard extents. In this case study analysis we apply a maximum entropy model, guided by the social media data, to estimate the flooded regions during a 2013 flood in Boulder, CO and show that the results are comparable to those obtained using expert information.

  20. Optimal UAS Assignments and Trajectories for Persistent Surveillance and Data Collection from a Wireless Sensor Network

    DTIC Science & Technology

    2015-12-24

    minimizing a weighted sum of the time and control effort needed to collect sensor data. This problem formulation is a modified traveling salesman problem; the report also treats the shortest path problem and constructs an initial guess from a traveling salesman problem solution.

  1. Minimal models of compact symplectic semitoric manifolds

    NASA Astrophysics Data System (ADS)

    Kane, D. M.; Palmer, J.; Pelayo, Á.

    2018-02-01

    A symplectic semitoric manifold is a symplectic 4-manifold endowed with a Hamiltonian (S1 × R)-action satisfying certain conditions. The goal of this paper is to construct a new symplectic invariant of symplectic semitoric manifolds, the helix, and give applications. The helix is a symplectic analogue of the fan of a nonsingular complete toric variety in algebraic geometry, which takes into account the effects of the monodromy near focus-focus singularities. We give two applications of the helix: first, we use it to give a classification of the minimal models of symplectic semitoric manifolds, where "minimal" is in the sense of not admitting any blowdowns. The second application is an extension to the compact case of a well-known result of Vũ Ngọc about the constraints posed on a symplectic semitoric manifold by the existence of focus-focus singularities. The helix makes it possible to translate a symplectic geometric problem into an algebraic problem, and the paper describes a method to solve this type of algebraic problem.

  2. Round-off errors in cutting plane algorithms based on the revised simplex procedure

    NASA Technical Reports Server (NTRS)

    Moore, J. E.

    1973-01-01

    This report statistically analyzes computational round-off errors associated with the cutting plane approach to solving linear integer programming problems. Cutting plane methods require that the inverse of a sequence of matrices be computed. The problem basically reduces to one of minimizing round-off errors in the sequence of inverses. Two procedures for minimizing this problem are presented, and their influence on error accumulation is statistically analyzed. One procedure employs a very small tolerance factor to round computed values to zero. The other procedure is a numerical analysis technique for reinverting or improving the approximate inverse of a matrix. The results indicated that round-off accumulation can be effectively minimized by employing a tolerance factor which reflects the number of significant digits carried for each calculation and by applying the reinversion procedure once to each computed inverse. If 18 significant digits plus an exponent are carried for each variable during computations, then a tolerance value of 0.1 × 10^-12 is reasonable.
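
    The report's two safeguards can be sketched in a few lines: a tolerance factor that rounds numerically-zero entries to exact zero, and an improvement pass over an approximate inverse. The refinement shown below is the classical Newton-Schulz (Hotelling-Bodewig) iteration X ← X(2I − AX); it is a standard reinversion technique and is an assumption here, since the abstract does not name the report's exact procedure:

      import numpy as np

      TOL = 1e-13  # tolerance reflecting the digits carried, as the report suggests

      def clean(M, tol=TOL):
          """Round entries that are numerically zero down to exact zero."""
          M = M.copy()
          M[np.abs(M) < tol] = 0.0
          return M

      def refine_inverse(A, X, steps=1):
          """Newton-Schulz refinement of an approximate inverse X of A;
          converges quadratically when ||I - AX|| < 1."""
          I = np.eye(A.shape[0])
          for _ in range(steps):
              X = X @ (2.0 * I - A @ X)
          return X

      A = np.array([[4.0, 1.0], [2.0, 3.0]])
      X = np.linalg.inv(A) + 1e-6            # deliberately perturbed inverse
      X = clean(refine_inverse(A, X))
      print(np.max(np.abs(A @ X - np.eye(2))))   # residual shrinks sharply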

  3. Free-energy minimization and the dark-room problem.

    PubMed

    Friston, Karl; Thornton, Christopher; Clark, Andy

    2012-01-01

    Recent years have seen the emergence of an important new fundamental theory of brain function. This theory brings information-theoretic, Bayesian, neuroscientific, and machine learning approaches into a single framework whose overarching principle is the minimization of surprise (or, equivalently, the maximization of expectation). The most comprehensive such treatment is the "free-energy minimization" formulation due to Karl Friston (see e.g., Friston and Stephan, 2007; Friston, 2010a,b - see also Fiorillo, 2010; Thornton, 2010). A recurrent puzzle raised by critics of these models is that biological systems do not seem to avoid surprises. We do not simply seek a dark, unchanging chamber, and stay there. This is the "Dark-Room Problem." Here, we describe the problem and further unpack the issues to which it speaks. Using the same format as the prolog of Eddington's Space, Time, and Gravitation (Eddington, 1920) we present our discussion as a conversation between: an information theorist (Thornton), a physicist (Friston), and a philosopher (Clark).

  4. Design of experiments on 135 cloned poplar trees to map environmental influence in greenhouse.

    PubMed

    Pinto, Rui Climaco; Stenlund, Hans; Hertzberg, Magnus; Lundstedt, Torbjörn; Johansson, Erik; Trygg, Johan

    2011-01-31

    To find and ascertain phenotypic differences, minimal variation between biological replicates is always desired. Variation between the replicates can originate from genetic transformation but also from environmental effects in the greenhouse. Design of experiments (DoE) has been used in field trials for many years and proven its value but is underused within functional genomics including greenhouse experiments. We propose a strategy to estimate the effect of environmental factors with the ultimate goal of minimizing variation between biological replicates, based on DoE. DoE can be analyzed in many ways. We present a graphical solution together with solutions based on classical statistics as well as the newly developed OPLS methodology. In this study, we used DoE to evaluate the influence of plant specific factors (plant size, shoot type, plant quality, and amount of fertilizer) and rotation of plant positions on height and section area of 135 cloned wild type poplar trees grown in the greenhouse. Statistical analysis revealed that plant position was the main contributor to variability among biological replicates and applying a plant rotation scheme could reduce this variation. Copyright © 2010 Elsevier B.V. All rights reserved.

  5. A variational reconstruction method for undersampled dynamic x-ray tomography based on physical motion models

    NASA Astrophysics Data System (ADS)

    Burger, Martin; Dirks, Hendrik; Frerking, Lena; Hauptmann, Andreas; Helin, Tapio; Siltanen, Samuli

    2017-12-01

    In this paper we study the reconstruction of moving object densities from undersampled dynamic x-ray tomography in two dimensions. A particular motivation of this study is to use realistic measurement protocols for practical applications, i.e. we do not assume to have a full Radon transform in each time step, but only projections in few angular directions. This restriction enforces a space-time reconstruction, which we perform by incorporating physical motion models and regularization of motion vectors in a variational framework. The methodology of optical flow, which is one of the most common methods to estimate motion between two images, is utilized to formulate a joint variational model for reconstruction and motion estimation. We provide a basic mathematical analysis of the forward model and the variational model for the image reconstruction. Moreover, we discuss the efficient numerical minimization based on alternating minimization between images and motion vectors. A variety of results are presented for simulated and real measurement data with different sampling strategies. A key observation is that random sampling combined with our model allows reconstructions of similar quality to a single static reconstruction from a comparable amount of measurements.

  6. Power allocation for SWIPT in K-user interference channels using game theory

    NASA Astrophysics Data System (ADS)

    Wen, Zhigang; Liu, Ying; Liu, Xiaoqing; Li, Shan; Chen, Xianya

    2018-12-01

    A simultaneous wireless information and power transfer system in multi-user interference channels is considered. In this system, each transmitter sends one data stream to its targeted receiver, which causes interference to the other receivers. Since all transmitter-receiver links want to maximize their own average transmission rate, a power allocation problem under the transmit power constraints and the energy-harvesting constraints is developed. To solve this problem, we propose a game theory framework. We then convert the game into a variational inequality problem by establishing the connection between game theory and variational inequalities, and solve the resulting problem. Through theoretical analysis, the existence and uniqueness of the Nash equilibrium are both guaranteed by the theory of variational inequalities. A distributed iterative alternating optimization water-filling algorithm is derived and proved to converge. Numerical results show that the proposed algorithm converges quickly and achieves a higher sum rate than the unaided scheme.
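
    The per-link building block of such iterative algorithms is the classical water-filling solution: with unit noise power and channel gains g_i, allocate p_i = max(0, μ − 1/g_i), choosing the water level μ to exhaust the power budget. A single-link sketch of that step (not the paper's full SWIPT game; the energy-harvesting constraints are omitted here):

      import numpy as np

      def waterfill(gains, budget, iters=60):
          """Allocate p_i = max(0, mu - 1/g_i) with sum p_i = budget,
          solving for the water level mu by bisection."""
          g = np.asarray(gains, dtype=float)
          lo, hi = 0.0, budget + (1.0 / g).max()
          for _ in range(iters):
              mu = 0.5 * (lo + hi)
              p = np.maximum(0.0, mu - 1.0 / g)
              if p.sum() > budget:
                  hi = mu
              else:
                  lo = mu
          return np.maximum(0.0, 0.5 * (lo + hi) - 1.0 / g)

      g = np.array([2.0, 1.0, 0.25])
      p = waterfill(g, budget=3.0)          # -> approx [1.75, 1.25, 0.0]
      print(p, np.log2(1.0 + p * g).sum())  # allocation and achieved sum rate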

  7. The Parisi Formula has a Unique Minimizer

    NASA Astrophysics Data System (ADS)

    Auffinger, Antonio; Chen, Wei-Kuo

    2015-05-01

    In 1979, Parisi (Phys Rev Lett 43:1754-1756, 1979) predicted a variational formula for the thermodynamic limit of the free energy in the Sherrington-Kirkpatrick model, and described the role played by its minimizer. This formula was verified in the seminal work of Talagrand (Ann Math 163(1):221-263, 2006) and later generalized to the mixed p-spin models by Panchenko (Ann Probab 42(3):946-958, 2014). In this paper, we prove that the minimizer in Parisi's formula is unique at any temperature and external field by establishing the strict convexity of the Parisi functional.

  8. Conversion of laser energy to gas kinetic energy

    NASA Technical Reports Server (NTRS)

    Caledonia, G. E.

    1976-01-01

    Techniques for the gas phase absorption of laser radiation for ultimate conversion to gas kinetic energy are discussed. Particular emphasis is placed on absorption by the vibration rotation bands of diatomic molecules at high pressures. This high pressure absorption appears to offer efficient conversion of laser energy to gas translational energy. Bleaching and chemical effects are minimized and the variation of the total absorption coefficient with temperature is minimal.

  9. Multigrid one shot methods for optimal control problems: Infinite dimensional control

    NASA Technical Reports Server (NTRS)

    Arian, Eyal; Taasan, Shlomo

    1994-01-01

    The multigrid one-shot method for optimal control problems, governed by elliptic systems, is introduced for the infinite-dimensional control space. In this case, the control variable is a function whose discrete representation involves an increasing number of variables with grid refinement. The minimization algorithm uses Lagrange multipliers to calculate sensitivity gradients. A preconditioned gradient descent algorithm is accelerated by a set of coarse grids. It optimizes for different scales in the representation of the control variable on different discretization levels. An analysis which reduces the problem to the boundary is introduced. It is used to approximate the two-level asymptotic convergence rate, to determine the amplitude of the minimization steps, and to choose a high-pass filter to be used when necessary. The effectiveness of the method is demonstrated on a series of test problems. The new method enables the solution of optimal control problems at the cost of solving the corresponding analysis problems only a few times.

  10. Combinatorial algorithms for design of DNA arrays.

    PubMed

    Hannenhalli, Sridhar; Hubell, Earl; Lipshutz, Robert; Pevzner, Pavel A

    2002-01-01

    Optimal design of DNA arrays requires the development of algorithms with two-fold goals: reducing the effects caused by unintended illumination (border length minimization problem) and reducing the complexity of masks (mask decomposition problem). We describe algorithms that reduce the number of rectangles in mask decomposition by 20-30% as compared to a standard array design under the assumption that the arrangement of oligonucleotides on the array is fixed. This algorithm produces a provably optimal solution for all studied real instances of array design. We also address the difficult problem of finding an arrangement which minimizes the border length and come up with a new idea of threading that significantly reduces the border length as compared to standard designs.

  11. Distributed Optimization

    NASA Technical Reports Server (NTRS)

    Macready, William; Wolpert, David

    2005-01-01

    We demonstrate a new framework for analyzing and controlling distributed systems, by solving constrained optimization problems with an algorithm based on that framework. The framework is an information-theoretic extension of conventional full-rationality game theory to allow bounded rational agents. The associated optimization algorithm is a game in which agents control the variables of the optimization problem. They do this by jointly minimizing a Lagrangian of (the probability distribution of) their joint state. The updating of the Lagrange parameters in that Lagrangian is a form of automated annealing, one that focuses the multi-agent system on the optimal pure strategy. We present computer experiments for the k-sat constraint satisfaction problem and for unconstrained minimization of NK functions.

  12. Heel impact forces during barefoot versus minimally shod walking among Tarahumara subsistence farmers and urban Americans

    PubMed Central

    Koch, Elizabeth; Holowka, Nicholas B.; Lieberman, Daniel E.

    2018-01-01

    Despite substantial recent interest in walking barefoot and in minimal footwear, little is known about potential differences in walking biomechanics when unshod versus minimally shod. To test the hypothesis that heel impact forces are similar during barefoot and minimally shod walking, we analysed ground reaction forces recorded in both conditions with a pedography platform among indigenous subsistence farmers, the Tarahumara of Mexico, who habitually wear minimal sandals, as well as among urban Americans wearing commercially available minimal sandals. Among both the Tarahumara (n = 35) and Americans (n = 30), impact peaks generated in sandals had significantly (p < 0.05) higher force magnitudes, slower loading rates and larger vertical impulses than during barefoot walking. These kinetic differences were partly due to individuals' significantly greater effective mass when walking in sandals. Our results indicate that, in general, people tread more lightly when walking barefoot than in minimal footwear. Further research is needed to test if the variations in impact peaks generated by walking barefoot or in minimal shoes have consequences for musculoskeletal health. PMID:29657826

  13. Adaptive-weighted Total Variation Minimization for Sparse Data toward Low-dose X-ray Computed Tomography Image Reconstruction

    PubMed Central

    Liu, Yan; Ma, Jianhua; Fan, Yi; Liang, Zhengrong

    2012-01-01

    Previous studies have shown that by minimizing the total variation (TV) of the to-be-estimated image with some data and other constraints, a piecewise-smooth X-ray computed tomography (CT) can be reconstructed from sparse-view projection data without introducing noticeable artifacts. However, due to the piecewise constant assumption for the image, a conventional TV minimization algorithm often suffers from over-smoothness on the edges of the resulting image. To mitigate this drawback, we present an adaptive-weighted TV (AwTV) minimization algorithm in this paper. The presented AwTV model is derived by considering the anisotropic edge property among neighboring image voxels, where the associated weights are expressed as an exponential function and can be adaptively adjusted by the local image-intensity gradient for the purpose of preserving the edge details. Inspired by the previously-reported TV-POCS (projection onto convex sets) implementation, a similar AwTV-POCS implementation was developed to minimize the AwTV subject to data and other constraints for the purpose of sparse-view low-dose CT image reconstruction. To evaluate the presented AwTV-POCS algorithm, both qualitative and quantitative studies were performed by computer simulations and phantom experiments. The results show that the presented AwTV-POCS algorithm can yield images with several noticeable gains, in terms of noise-resolution tradeoff plots and full width at half maximum values, as compared to the corresponding conventional TV-POCS algorithm. PMID:23154621
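
    Read literally, the AwTV term weights each neighboring-voxel difference by an exponential of the local intensity gradient so that strong edges are penalized less. The following sketch is one plausible 2D rendering of that model (the scale parameter delta and the anisotropic forward-difference form are assumptions, not the paper's exact discretization):

      import numpy as np

      def awtv(img, delta=0.01):
          """Adaptive-weighted TV: sum of |forward differences|, each weighted
          by exp(-(difference/delta)^2) so that strong edges (large local
          gradients) contribute less to the penalty. Assumes img in [0, 1]."""
          dx = np.diff(img, axis=0)
          dy = np.diff(img, axis=1)
          wx = np.exp(-(dx / delta) ** 2)
          wy = np.exp(-(dy / delta) ** 2)
          return (wx * np.abs(dx)).sum() + (wy * np.abs(dy)).sum()

    With delta large the weights approach 1 and the penalty reduces to conventional anisotropic TV; a small delta protects edges at the risk of also protecting noise, which is the tradeoff the adaptive adjustment addresses.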

  14. Adaptive-weighted total variation minimization for sparse data toward low-dose x-ray computed tomography image reconstruction.

    PubMed

    Liu, Yan; Ma, Jianhua; Fan, Yi; Liang, Zhengrong

    2012-12-07

    Previous studies have shown that by minimizing the total variation (TV) of the to-be-estimated image with some data and other constraints, piecewise-smooth x-ray computed tomography (CT) can be reconstructed from sparse-view projection data without introducing notable artifacts. However, due to the piecewise constant assumption for the image, a conventional TV minimization algorithm often suffers from over-smoothness on the edges of the resulting image. To mitigate this drawback, we present an adaptive-weighted TV (AwTV) minimization algorithm in this paper. The presented AwTV model is derived by considering the anisotropic edge property among neighboring image voxels, where the associated weights are expressed as an exponential function and can be adaptively adjusted by the local image-intensity gradient for the purpose of preserving the edge details. Inspired by the previously reported TV-POCS (projection onto convex sets) implementation, a similar AwTV-POCS implementation was developed to minimize the AwTV subject to data and other constraints for the purpose of sparse-view low-dose CT image reconstruction. To evaluate the presented AwTV-POCS algorithm, both qualitative and quantitative studies were performed by computer simulations and phantom experiments. The results show that the presented AwTV-POCS algorithm can yield images with several notable gains, in terms of noise-resolution tradeoff plots and full-width at half-maximum values, as compared to the corresponding conventional TV-POCS algorithm.

  15. On the convergence of nonconvex minimization methods for image recovery.

    PubMed

    Xiao, Jin; Ng, Michael Kwok-Po; Yang, Yu-Fei

    2015-05-01

    Nonconvex nonsmooth regularization methods have been shown to be effective for restoring images with neat edges. Fast alternating minimization schemes have also been proposed and developed to solve the nonconvex nonsmooth minimization problem. The main contribution of this paper is to show the convergence of these alternating minimization schemes, based on the Kurdyka-Łojasiewicz property. In particular, we show that the iterates generated by the alternating minimization scheme converge to a critical point of the nonconvex nonsmooth objective function. We also extend the analysis to a nonconvex nonsmooth regularization model with box constraints, and obtain similar convergence results for the related minimization algorithm. Numerical examples are given to illustrate our convergence analysis.

  16. A framework for estimating potential fluid flow from digital imagery

    NASA Astrophysics Data System (ADS)

    Luttman, Aaron; Bollt, Erik M.; Basnayake, Ranil; Kramer, Sean; Tufillaro, Nicholas B.

    2013-09-01

    Given image data of a fluid flow, the flow field, ⟨u,v⟩, governing the evolution of the system can be estimated using a variational approach to optical flow. Assuming that the flow field governing the advection is the symplectic gradient of a stream function or the gradient of a potential function—both falling under the category of a potential flow—it is natural to re-frame the optical flow problem to reconstruct the stream or potential function directly rather than the components of the flow individually. There are several advantages to this framework. Minimizing a functional based on the stream or potential function rather than based on the components of the flow will ensure that the computed flow is a potential flow. Next, this approach allows a more natural method for imposing scientific priors on the computed flow, via regularization of the optical flow functional. Also, this paradigm shift gives a framework—rather than an algorithm—and can be applied to nearly any existing variational optical flow technique. In this work, we develop the mathematical formulation of the potential optical flow framework and demonstrate the technique on synthetic flows that represent important dynamics for mass transport in fluid flows, as well as a flow generated by a satellite data-verified ocean model of temperature transport.
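
    A small numerical check makes the stream-function case concrete: a flow defined as the symplectic gradient of ψ, i.e. ⟨u,v⟩ = ⟨∂ψ/∂y, −∂ψ/∂x⟩, is divergence-free by construction, which is exactly why estimating ψ rather than the flow components builds incompressibility into the reconstruction. A sketch under these assumptions (the ψ chosen here is illustrative):

      import numpy as np

      # Stream function on a grid; the flow is its symplectic gradient.
      x, y = np.meshgrid(np.linspace(0, 1, 64), np.linspace(0, 1, 64),
                         indexing="ij")
      psi = np.sin(np.pi * x) * np.sin(np.pi * y)

      dpsi_dx, dpsi_dy = np.gradient(psi, x[:, 0], y[0, :])
      u, v = dpsi_dy, -dpsi_dx                  # symplectic gradient

      du_dx = np.gradient(u, x[:, 0], axis=0)
      dv_dy = np.gradient(v, y[0, :], axis=1)
      print(np.abs(du_dx + dv_dy).max())        # ~0 (round-off): divergence-free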

  17. What energy functions can be minimized via graph cuts?

    PubMed

    Kolmogorov, Vladimir; Zabih, Ramin

    2004-02-01

    In the last few years, several new algorithms based on graph cuts have been developed to solve energy minimization problems in computer vision. Each of these techniques constructs a graph such that the minimum cut on the graph also minimizes the energy. Yet, because these graph constructions are complex and highly specific to a particular energy function, graph cuts have seen limited application to date. In this paper, we give a characterization of the energy functions that can be minimized by graph cuts. Our results are restricted to functions of binary variables. However, our work generalizes many previous constructions and is easily applicable to vision problems that involve large numbers of labels, such as stereo, motion, image restoration, and scene reconstruction. We give a precise characterization of what energy functions can be minimized using graph cuts, among the energy functions that can be written as a sum of terms containing three or fewer binary variables. We also provide a general-purpose construction to minimize such an energy function. Finally, we give a necessary condition for any energy function of binary variables to be minimized by graph cuts. Researchers who are considering the use of graph cuts to optimize a particular energy function can use our results to determine if this is possible and then follow our construction to create the appropriate graph. A software implementation is freely available.
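
    For energies that are sums of pairwise terms of binary variables, the characterization referred to above is the regularity (submodularity) condition E(0,0) + E(1,1) ≤ E(0,1) + E(1,0) on every pairwise term. A tiny checker, assuming each term is given as a 2×2 table:

      def is_regular(E):
          """Kolmogorov-Zabih condition for a pairwise term E[a][b], a, b in
          {0, 1}: graph-cut representable iff E(0,0)+E(1,1) <= E(0,1)+E(1,0)."""
          return E[0][0] + E[1][1] <= E[0][1] + E[1][0]

      potts = [[0, 1], [1, 0]]     # Potts smoothness term: regular
      anti = [[1, 0], [0, 1]]      # rewards disagreement: not regular
      print(is_regular(potts), is_regular(anti))   # True False

    An energy is graph-cut minimizable exactly when every pairwise term passes this test, so a checker like this tells a practitioner up front whether the construction in the paper applies.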

  18. A modular finite-element model (MODFE) for areal and axisymmetric ground-water-flow problems, Part 2: Derivation of finite-element equations and comparisons with analytical solutions

    USGS Publications Warehouse

    Cooley, Richard L.

    1992-01-01

    MODFE, a modular finite-element model for simulating steady- or unsteady-state, areal or axisymmetric flow of ground water in a heterogeneous anisotropic aquifer is documented in a three-part series of reports. In this report, part 2, the finite-element equations are derived by minimizing a functional of the difference between the true and approximate hydraulic head, which produces equations that are equivalent to those obtained by either classical variational or Galerkin techniques. Spatial finite elements are triangular with linear basis functions, and temporal finite elements are one dimensional with linear basis functions. Physical processes that can be represented by the model include (1) confined flow, unconfined flow (using the Dupuit approximation), or a combination of both; (2) leakage through either rigid or elastic confining units; (3) specified recharge or discharge at points, along lines, or areally; (4) flow across specified-flow, specified-head, or head-dependent boundaries; (5) decrease of aquifer thickness to zero under extreme water-table decline and increase of aquifer thickness from zero as the water table rises; and (6) head-dependent fluxes from springs, drainage wells, leakage across riverbeds or confining units combined with aquifer dewatering, and evapotranspiration. The matrix equations produced by the finite-element method are solved by the direct symmetric-Doolittle method or the iterative modified incomplete-Cholesky conjugate-gradient method. The direct method can be efficient for small- to medium-sized problems (less than about 500 nodes), and the iterative method is generally more efficient for larger-sized problems. Comparison of finite-element solutions with analytical solutions for five example problems demonstrates that the finite-element model can yield accurate solutions to ground-water flow problems.

  19. A variational theorem for creep with applications to plates and columns

    NASA Technical Reports Server (NTRS)

    Sanders, J. Lyell, Jr.; McComb, Harvey G., Jr.; Schlechte, Floyd R.

    1958-01-01

    A variational theorem is presented for a body undergoing creep. Solutions to problems of the creep behavior of plates, columns, beams, and shells can be obtained by means of the direct methods of the calculus of variations in conjunction with the stated theorem. The application of the theorem is illustrated for plates and columns by the solution of two sample problems.

  20. Variational Methods For Sloshing Problems With Surface Tension

    NASA Astrophysics Data System (ADS)

    Tan, Chee Han; Carlson, Max; Hohenegger, Christel; Osting, Braxton

    2016-11-01

    We consider the sloshing problem for an incompressible, inviscid, irrotational fluid in a container, including effects due to surface tension on the free surface. We restrict ourselves to a constant contact angle and we seek time-harmonic solutions of the linearized problem, which describes the time-evolution of the fluid due to a small initial disturbance of the surface at rest. As opposed to the zero surface tension case, where the problem reduces to a partial differential equation for the velocity potential, we obtain a coupled system for the velocity potential and the free surface displacement. We derive a new variational formulation of the coupled problem and establish the existence of solutions using the direct method from the Calculus of Variations. In the limit of zero surface tension, we recover the variational formulation of the classical Steklov eigenvalue problem, as derived by B. A. Troesch. For the particular case of an axially symmetric container, we propose a finite element numerical method for computing the sloshing modes of the coupled system. The scheme is implemented in FEniCS and we obtain a qualitative description of the effect of surface tension on the sloshing modes.

  1. Variational approach to direct and inverse problems of atmospheric pollution studies

    NASA Astrophysics Data System (ADS)

    Penenko, Vladimir; Tsvetova, Elena; Penenko, Alexey

    2016-04-01

    We present the development of a variational approach for solving interrelated problems of atmospheric hydrodynamics and chemistry concerning air pollution transport and transformations. The proposed approach allows us to carry out complex studies of different-scale physical and chemical processes using the methods of direct and inverse modeling [1-3]. We formulate the problems of risk/vulnerability and uncertainty assessment, sensitivity studies, variational data assimilation procedures [4], etc. A computational technology of constructing consistent mathematical models and methods of their numerical implementation is based on the variational principle in the weak constraint formulation, specifically designed to account for uncertainties in models and observations. Algorithms for direct and inverse modeling are designed with the use of global and local adjoint problems. Implementing the idea of adjoint integrating factors provides unconditionally monotone and stable discrete-analytic approximations for convection-diffusion-reaction problems [5,6]. The general framework is applied to the direct and inverse problems for the models of transport and transformation of pollutants in Siberian and Arctic regions. The work has been partially supported by the RFBR grant 14-01-00125 and RAS Presidium Program I.33P. References: 1. V. Penenko, A. Baklanov, E. Tsvetova and A. Mahura. Direct and inverse problems in a variational concept of environmental modeling // Pure and Applied Geophysics (2012) v. 169: 447-465. 2. V. V. Penenko, E. A. Tsvetova, and A. V. Penenko. Development of variational approach for direct and inverse problems of atmospheric hydrodynamics and chemistry // Izvestiya, Atmospheric and Oceanic Physics, 2015, Vol. 51, No. 3, pp. 311-319, DOI: 10.1134/S0001433815030093. 3. V. V. Penenko, E. A. Tsvetova, A. V. Penenko. Methods based on the joint use of models and observational data in the framework of variational approach to forecasting weather and atmospheric composition quality // Russian Meteorology and Hydrology, Vol. 40, Issue 6, pp. 365-373, DOI: 10.3103/S1068373915060023. 4. A. V. Penenko and V. V. Penenko. Direct data assimilation method for convection-diffusion models based on splitting scheme // Computational Technologies, 2014, Vol. 19, No. 4, pp. 69-83. 5. V. V. Penenko, E. A. Tsvetova, A. V. Penenko. Variational approach and Euler's integrating factors for environmental studies // Computers and Mathematics with Applications, 2014, Vol. 67, Issue 12, pp. 2240-2256, DOI: 10.1016/j.camwa.2014.04.004. 6. V. V. Penenko, E. A. Tsvetova. Variational methods of constructing monotone approximations for atmospheric chemistry models // Numerical Analysis and Applications, 2013, Vol. 6, Issue 3, pp. 210-220, DOI: 10.1134/S199542391303004X.

  2. Maintenance planning for a fleet of turbine-generator units using mathematical programming

    NASA Astrophysics Data System (ADS)

    Aoudjit, Hakim

    A growing number of Hydro-Quebec's hydro generators are at the end of their useful life, and maintenance managers fear facing more overhauls than can be handled. Maintenance crews and budgets are limited, and these withdrawals may take up to a full year and mobilize significant resources, in addition to the loss of electricity production. Moreover, increased export sales forecasts and severe production patterns are expected to accelerate wear, which can force many units to halt at the same time. Currently, expert judgment is at the heart of withdrawal decisions, which rely primarily on periodic inspections and in-situ measurements; the results are sent to the maintenance planning team, which coordinates all withdrawal decisions. The degradation phenomena at play are random in nature, and the capability to predict wear from inspections alone is at best limited to the short term. Managers seek a long-term plan of major overhauls in order to justify and rationalize budgets and resources. The maintenance managers can provide a large amount of data, including the hourly production of each unit for several years, the repair history of each part of a unit, and the major withdrawals since the 1950s. In this research, we tackle the problem of long-term maintenance planning for a fleet of 90 hydro generators at Hydro-Quebec over a 50-year planning horizon. We lay out a scientific and rational framework to support withdrawal decisions, using part of the available data and maintenance history while fulfilling a set of technical and economic constraints. We propose a planning approach based on a constrained optimization framework. We begin by decomposing and sorting hydro generator components to highlight the most influential parts. A failure rate model is developed to take into account the technical characteristics and utilization of each unit. Then, replacement and repair policies are evaluated for each of the components, and strategies are derived for the whole unit. Traditional univariate policies such as the age replacement policy and the minimal repair policy are calculated. These policies are extended to build an alternative bivariate maintenance policy, as well as a repair strategy in which the state of a component after a repair is rejuvenated by a constant coefficient. These templates form the basis for calculating the objective function of the scheduling problem. On the one hand, the problem is treated as a nonlinear program whose objective is to minimize the average total maintenance cost per unit of time over an infinite horizon for the fleet, subject to technical and economic constraints. A formulation is also proposed for the case of a finite time horizon. In the event of electricity production variation, and given that the usage profile is known, the influence of production scenarios is reflected on the unit's components through their failure rates. In this context, prognoses of possible resource problems are made by studying the characteristics of the generated plans. On the other hand, the withdrawals are then subjected to two decision criteria: in addition to minimizing the average total maintenance cost per unit of time over an infinite horizon, the best achievable reliability of the remaining turbine generators is sought. This problem is treated as a biobjective nonlinear optimization problem. Finally, a series of problems describing multiple contexts is solved for planning the renovation of the 90 turbine-generator units, considering 3 major components per unit and 2 types of maintenance policies per component.
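
    Of the univariate policies mentioned, the age replacement policy has a standard closed form: replace at age T or at failure, whichever comes first, choosing T to minimize the long-run cost rate g(T) = [c_p R(T) + c_f F(T)] / ∫₀ᵀ R(t) dt, where R = 1 − F is the survival function and c_p < c_f are the preventive and failure replacement costs. A sketch with an illustrative Weibull life model (the parameters are invented, not Hydro-Quebec's):

      import numpy as np

      beta, eta = 2.5, 20.0          # Weibull shape/scale (years), illustrative
      c_p, c_f = 1.0, 10.0           # preventive vs. failure replacement cost

      def cost_rate(T, n=2000):
          """Long-run cost per unit time of replacing at age T or at failure."""
          t = np.linspace(0.0, T, n)
          R = np.exp(-(t / eta) ** beta)     # Weibull survival function
          mean_cycle = np.trapz(R, t)        # expected cycle length
          F_T = 1.0 - R[-1]                  # probability of failing before T
          return (c_p * (1.0 - F_T) + c_f * F_T) / mean_cycle

      ages = np.linspace(1.0, 40.0, 400)
      best = ages[np.argmin([cost_rate(T) for T in ages])]
      print(f"optimal replacement age ~ {best:.1f} years")

    Because the Weibull shape parameter exceeds 1 (wear-out), preventive replacement before the mean life is worthwhile; with a decreasing or constant failure rate the minimizer would run off to infinity and only repair-on-failure would pay.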

  3. The relaxed-polar mechanism of locally optimal Cosserat rotations for an idealized nanoindentation and comparison with 3D-EBSD experiments

    NASA Astrophysics Data System (ADS)

    Fischle, Andreas; Neff, Patrizio; Raabe, Dierk

    2017-08-01

    The rotation polar(F) ∈ SO(3) arises as the unique orthogonal factor of the right polar decomposition F = polar(F) U of a given invertible matrix F ∈ GL⁺(3). In the context of nonlinear elasticity Grioli (Boll Un Math Ital 2:252-255, 1940) discovered a geometric variational characterization of polar(F) as a unique energy-minimizing rotation. In preceding works, we have analyzed a generalization of Grioli's variational approach with weights (material parameters) μ > 0 and μ_c ≥ 0 (Grioli: μ = μ_c). The energy subject to minimization coincides with the Cosserat shear-stretch contribution arising in any geometrically nonlinear, isotropic and quadratic Cosserat continuum model formulated in the deformation gradient field F := ∇φ …

  4. Processing Optimization of Deformed Plain Woven Thermoplastic Composites

    NASA Astrophysics Data System (ADS)

    Smith, John R.; Vaidya, Uday K.

    2013-12-01

    This research addresses the processing optimization of post-manufactured, plain weave architecture composite panels consisting of four glass layers and thermoplastic polyurethane (TPU) when formed with only localized heating. Often, during the production of deep drawn composite parts, a fabric preform experiences various defects, including non-isothermal heating and thickness variations. Minimizing these defects is of utmost importance for mass producibility in a practical manufacturing process. The broad objective of this research was to implement a design of experiments approach to minimize through-thickness composite panel variation during manufacturing by varying the heating time, the temperature of heated components, and the clamping pressure. It was concluded that the heated tooling with the least contact area was most influential, followed by the length of heating time and the amount of clamping pressure.

  5. Nonlinear refraction and reflection travel time tomography

    USGS Publications Warehouse

    Zhang, Jiahua; ten Brink, Uri S.; Toksoz, M.N.

    1998-01-01

    We develop a rapid nonlinear travel time tomography method that simultaneously inverts refraction and reflection travel times on a regular velocity grid. For travel time and ray path calculations, we apply a wave front method employing graph theory. The first-arrival refraction travel times are calculated on the basis of cell velocities, and the later refraction and reflection travel times are computed using both cell velocities and given interfaces. We solve a regularized nonlinear inverse problem. A Laplacian operator is applied to regularize the model parameters (cell slownesses and reflector geometry) so that the inverse problem is valid for a continuum. The travel times are also regularized such that we invert travel time curves rather than travel time points. A conjugate gradient method is applied to minimize the nonlinear objective function. After obtaining a solution, we perform nonlinear Monte Carlo inversions for uncertainty analysis and compute the posterior model covariance. In numerical experiments, we demonstrate that combining the first arrival refraction travel times with later reflection travel times can better reconstruct the velocity field as well as the reflector geometry. This combination is particularly important for modeling crustal structures where large velocity variations occur in the upper crust. We apply this approach to model the crustal structure of the California Borderland using ocean bottom seismometer and land data collected during the Los Angeles Region Seismic Experiment along two marine survey lines. Details of our image include a high-velocity zone under the Catalina Ridge, but a smooth gradient zone between Catalina Ridge and San Clemente Ridge. The Moho depth is about 22 km with lateral variations. Copyright 1998 by the American Geophysical Union.

  6. Structure-adaptive CBCT reconstruction using weighted total variation and Hessian penalties

    PubMed Central

    Shi, Qi; Sun, Nanbo; Sun, Tao; Wang, Jing; Tan, Shan

    2016-01-01

    The exposure of normal tissues to high radiation during cone-beam CT (CBCT) imaging increases the risk of cancer and genetic defects. Statistical iterative algorithms with the total variation (TV) penalty have been widely used for low dose CBCT reconstruction, with state-of-the-art performance in suppressing noise and preserving edges. However, TV is a first-order penalty and sometimes leads to the so-called staircase effect, particularly over regions with smooth intensity transition in the reconstruction images. A second-order penalty known as the Hessian penalty was recently used to replace TV to suppress the staircase effect in CBCT reconstruction at the cost of slightly blurring object edges. In this study, we proposed a new penalty, the TV-H, which combines TV and Hessian penalties for CBCT reconstruction in a structure-adaptive way. The TV-H penalty automatically differentiates the edges, gradual transition and uniform local regions within an image using the voxel gradient, and adaptively weights TV and Hessian according to the local image structures in the reconstruction process. Our proposed penalty retains the benefits of TV, including noise suppression and edge preservation. It also maintains the structures in regions with gradual intensity transition more successfully. A majorization-minimization (MM) approach was designed to optimize the objective energy function constructed with the TV-H penalty. The MM approach employed a quadratic upper bound of the original objective function, and the original optimization problem was changed to a series of quadratic optimization problems, which could be efficiently solved using the Gauss-Seidel update strategy. We tested the reconstruction algorithm on two simulated digital phantoms and two physical phantoms. Our experiments indicated that the TV-H penalty visually and quantitatively outperformed both TV and Hessian penalties. PMID:27699100
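
    The MM machinery described above can be seen on a one-variable toy problem: the standard half-quadratic majorizer |x| ≤ x²/(2|xₖ|) + |xₖ|/2 is tight at x = xₖ, so repeatedly minimizing the quadratic surrogate drives the nonsmooth objective down. The sketch below illustrates only this generic mechanism; it is not the authors' TV-H objective or their Gauss-Seidel solver:

      def mm_abs_shrink(y, lam, iters=30, eps=1e-8):
          """Minimize 0.5*(x - y)**2 + lam*|x| by majorization-minimization:
          replace |x| with x**2/(2*|x_k|) + |x_k|/2 and solve the quadratic."""
          x = y if abs(y) > eps else eps
          for _ in range(iters):
              w = 1.0 / (abs(x) + eps)       # curvature of the majorizer
              x = y / (1.0 + lam * w)        # minimizer of the surrogate
          return x

      # Converges to the soft-threshold solution sign(y)*max(|y| - lam, 0).
      print(mm_abs_shrink(2.0, 0.5), max(0.0, 2.0 - 0.5))   # ~1.5 vs 1.5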

  7. Structure-adaptive CBCT reconstruction using weighted total variation and Hessian penalties.

    PubMed

    Shi, Qi; Sun, Nanbo; Sun, Tao; Wang, Jing; Tan, Shan

    2016-09-01

    The exposure of normal tissues to high radiation during cone-beam CT (CBCT) imaging increases the risk of cancer and genetic defects. Statistical iterative algorithms with the total variation (TV) penalty have been widely used for low dose CBCT reconstruction, with state-of-the-art performance in suppressing noise and preserving edges. However, TV is a first-order penalty and sometimes leads to the so-called staircase effect, particularly over regions with smooth intensity transition in the reconstruction images. A second-order penalty known as the Hessian penalty was recently used to replace TV to suppress the staircase effect in CBCT reconstruction at the cost of slightly blurring object edges. In this study, we proposed a new penalty, the TV-H, which combines TV and Hessian penalties for CBCT reconstruction in a structure-adaptive way. The TV-H penalty automatically differentiates the edges, gradual transition and uniform local regions within an image using the voxel gradient, and adaptively weights TV and Hessian according to the local image structures in the reconstruction process. Our proposed penalty retains the benefits of TV, including noise suppression and edge preservation. It also maintains the structures in regions with gradual intensity transition more successfully. A majorization-minimization (MM) approach was designed to optimize the objective energy function constructed with the TV-H penalty. The MM approach employed a quadratic upper bound of the original objective function, and the original optimization problem was changed to a series of quadratic optimization problems, which could be efficiently solved using the Gauss-Seidel update strategy. We tested the reconstruction algorithm on two simulated digital phantoms and two physical phantoms. Our experiments indicated that the TV-H penalty visually and quantitatively outperformed both TV and Hessian penalties.

  8. Effect of Causal Stories in Solving Mathematical Story Problems

    ERIC Educational Resources Information Center

    Smith, Glenn Gordon; Gerretson, Helen; Olkun, Sinan; Joutsenlahti, Jorma

    2010-01-01

    This study investigated whether infusing "causal" story elements into mathematical word problems improves student performance. In one experiment in the USA and a second in the USA, Finland, and Turkey, undergraduate elementary education majors worked word problems in three formats: 1) standard (minimal verbiage), 2) potential causation…

  9. Mathematics Competency for Beginning Chemistry Students Through Dimensional Analysis.

    PubMed

    Pursell, David P; Forlemu, Neville Y; Anagho, Leonard E

    2017-01-01

    Mathematics competency in nursing education and practice may be addressed by an instructional variation of the traditional dimensional analysis technique typically presented in beginning chemistry courses. The authors studied 73 beginning chemistry students using the typical dimensional analysis technique and the variation technique. Student quantitative problem-solving performance was evaluated. Students using the variation technique scored significantly better (18.3 of 20 points, p < .0001) on the final examination quantitative titration problem than those who used the typical technique (10.9 of 20 points). American Chemical Society examination scores and in-house assessment indicate that better performing beginning chemistry students were more likely to use the variation technique rather than the typical technique. The variation technique may be useful as an alternative instructional approach to enhance beginning chemistry students' mathematics competency and problem-solving ability in both education and practice. [J Nurs Educ. 2017;56(1):22-26.]. Copyright 2017, SLACK Incorporated.
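
    Whatever the instructional variation, the underlying technique chains conversion factors so that unwanted units cancel; a generic titration-style illustration (the numbers are invented for the example):

      \[
      25.0\ \text{mL NaOH} \times \frac{1\ \text{L}}{1000\ \text{mL}}
      \times \frac{0.100\ \text{mol NaOH}}{1\ \text{L}}
      \times \frac{1\ \text{mol HCl}}{1\ \text{mol NaOH}}
      = 2.50 \times 10^{-3}\ \text{mol HCl}
      \]

    Each factor equals one in the appropriate units, so only the target unit survives the chain.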

  10. [The present and future state of minimized extracorporeal circulation].

    PubMed

    Meng, Fan; Yang, Ming

    2013-05-01

    Minimized extracorporeal circulation is a new form of extracorporeal circulation that improves on the postoperative side effects of conventional extracorporeal circulation. This paper introduces the principle, characteristics, applications and related research of minimized extracorporeal circulation. To address the problems of systemic inflammatory response syndrome and limited assist time, the article proposes three development directions: system miniaturization and integration, pulsatile blood pumps, and adaptive control through human parameter identification.

  11. Bilinear Factor Matrix Norm Minimization for Robust PCA: Algorithms and Applications.

    PubMed

    Shang, Fanhua; Cheng, James; Liu, Yuanyuan; Luo, Zhi-Quan; Lin, Zhouchen

    2017-09-04

    The heavy-tailed distributions of corrupted outliers and singular values of all channels in low-level vision have proven to be effective priors for many applications such as background modeling, photometric stereo and image alignment, and they can be well modeled by a hyper-Laplacian. However, the use of such distributions generally leads to challenging non-convex, non-smooth and non-Lipschitz problems, and makes existing algorithms very slow for large-scale applications. Building on the analytic solutions to Lp-norm minimization for two specific values of p, namely p=1/2 and p=2/3, we propose two novel bilinear factor matrix norm minimization models for robust principal component analysis. We first define the double nuclear norm and Frobenius/nuclear hybrid norm penalties, and then prove that they are in essence the Schatten-1/2 and 2/3 quasi-norms, respectively, which lead to much more tractable and scalable Lipschitz optimization problems. Our experimental analysis shows that both our methods yield more accurate solutions than original Schatten quasi-norm minimization, even when the number of observations is very limited. Finally, we apply our penalties to various low-level vision problems, e.g., moving object detection, image alignment and inpainting, and show that our methods usually outperform the state-of-the-art methods.
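
    The closed-form thresholding operators for p=1/2 and p=2/3 are lengthy; the sketch below instead solves the underlying scalar proximal problem numerically, which is enough to see the thresholding behavior those analytic solutions encode. It is a numerical stand-in under an assumed 0.5-weighted objective, not the cited closed forms.

        import numpy as np
        from scipy.optimize import minimize_scalar

        def prox_lp(y, lam, p):
            # numerically solve min_x 0.5*(x - y)**2 + lam*|x|**p for 0 < p < 1
            if y == 0.0:
                return 0.0
            sign, a = np.sign(y), abs(y)
            f = lambda x: 0.5 * (x - a) ** 2 + lam * x ** p
            # the minimizer lies in [0, |y|]; compare the interior optimum with x = 0
            res = minimize_scalar(f, bounds=(1e-12, a), method="bounded")
            return sign * res.x if res.fun < f(0.0) else 0.0

        print(prox_lp(2.0, 0.5, 0.5))   # large input survives (shrunk)
        print(prox_lp(0.3, 0.5, 0.5))   # small input is thresholded to 0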

  12. Optimal design method to minimize users' thinking mapping load in human-machine interactions.

    PubMed

    Huang, Yanqun; Li, Xu; Zhang, Jie

    2015-01-01

    The discrepancy between human cognition and machine requirements/behaviors usually results in serious mental thinking-mapping loads, or even disasters, in product operation. In today's mentally demanding workplaces, it is important to help people avoid confusion and difficulty in human-machine interactions by improving the usability of a product and minimizing the user's thinking-mapping and interpreting load. An optimal human-machine interface design method is introduced, based on minimizing the mental load of the thinking-mapping process between users' intentions and the affordances of product interface states. By analyzing the users' thinking-mapping problem, an operating action model is constructed. According to human natural instincts and acquired knowledge, an expected ideal design with minimized thinking loads is first uniquely determined. Then, creative alternatives, in terms of the way humans obtain operational information, are provided as digital interface state datasets. Finally, using cluster analysis, an optimum solution is picked out from the alternatives by calculating the distances between the two datasets. Considering multiple factors to minimize users' thinking-mapping loads, a solution nearest to the ideal value is found in a human-car interaction design case. The clustering results show the method's effectiveness in finding an optimum solution to mental load minimization problems in human-machine interaction design.
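
    The selection step reduces to a nearest-to-ideal distance computation. The sketch below is a minimal illustration with hypothetical load-factor vectors; the record does not specify the feature encoding or distance measure, so Euclidean distance is an assumption.

        import numpy as np

        def pick_design(ideal, alternatives):
            # return index of the alternative closest to the ideal design vector
            d = [np.linalg.norm(np.asarray(a) - np.asarray(ideal)) for a in alternatives]
            return int(np.argmin(d))

        # hypothetical mental-load factors for three candidate interface layouts
        ideal = [0.0, 0.0, 0.0]
        candidates = [[0.4, 0.1, 0.3], [0.2, 0.2, 0.1], [0.5, 0.5, 0.4]]
        print(pick_design(ideal, candidates))  # -> 1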

  13. Geopolymer for protective coating of transportation infrastructures.

    DOT National Transportation Integrated Search

    1998-09-01

    Surface deterioration of exposed transportation structures is a major problem. In most cases, surface deterioration could lead to structural problems because of the loss of cover and ensuing reinforcement corrosion. To minimize the deterioration,...

  14. Calibrating the Spatiotemporal Root Density Distribution for Macroscopic Water Uptake Models Using Tikhonov Regularization

    NASA Astrophysics Data System (ADS)

    Li, N.; Yue, X. Y.

    2018-03-01

    Macroscopic root water uptake models proportional to a root density distribution function (RDDF) are most commonly used to model water uptake by plants. Because water uptake is difficult and labor-intensive to measure, these models are often calibrated by inverse modeling. Most previous inversion studies assume the RDDF to be constant with depth and time, or dependent only on depth, for simplification. However, under field conditions this function varies with the type of soil and with root growth, and thus changes with both depth and time. This study proposes an inverse method to calibrate a both spatially and temporally varying RDDF in unsaturated water flow modeling. To overcome the difficulty imposed by the ill-posedness, the calibration is formulated as an optimization problem in the framework of Tikhonov regularization theory, adding an additional constraint to the objective function. The formulated nonlinear optimization problem is then solved numerically with an efficient algorithm on the basis of the finite element method. The advantage of our method is that the inverse problem is translated into a Tikhonov regularization functional minimization problem and then solved based on the variational construction, which circumvents the computational complexity of calculating the sensitivity matrix involved in many derivative-based parameter estimation approaches (e.g., Levenberg-Marquardt optimization). Moreover, the proposed method optimizes the RDDF without any prior form, which makes it applicable to more general root water uptake models. Numerical examples illustrate the applicability and effectiveness of the proposed method. Finally, discussions on the stability and extension of this method are presented.
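
    In its simplest linear form, Tikhonov regularization augments the data misfit with a penalty on the solution. The toy solver below uses the normal equations with assumed matrices A and L; it is a linear analogue for orientation only, not the paper's nonlinear, finite-element implementation.

        import numpy as np

        def tikhonov_solve(A, b, alpha, L=None):
            # minimize ||A x - b||^2 + alpha * ||L x||^2 via the normal equations
            n = A.shape[1]
            if L is None:
                L = np.eye(n)  # zeroth-order (identity) regularization
            return np.linalg.solve(A.T @ A + alpha * (L.T @ L), A.T @ b)

        A = np.random.rand(20, 10)
        x = tikhonov_solve(A, A @ np.ones(10), alpha=1e-3)  # recovers ~ones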

  15. Continued research on selected parameters to minimize community annoyance from airplane noise

    NASA Technical Reports Server (NTRS)

    Frair, L.

    1981-01-01

    Results from continued research on selected parameters to minimize community annoyance from airport noise are reported. First, a review of the initial work on this problem is presented. Then the research focus is expanded by considering multiobjective optimization approaches for this problem. A multiobjective optimization algorithm review from the open literature is presented. This is followed by the multiobjective mathematical formulation for the problem of interest. A discussion of the appropriate solution algorithm for the multiobjective formulation is conducted. Alternate formulations and associated solution algorithms are discussed and evaluated for this airport noise problem. Selected solution algorithms that have been implemented are then used to produce computational results for example airports. These computations involved finding the optimal operating scenario for a moderate size airport and a series of sensitivity analyses for a smaller example airport.

  16. NUMERICAL ANALYSES FOR TREATING DIFFUSION IN SINGLE-, TWO-, AND THREE-PHASE BINARY ALLOY SYSTEMS

    NASA Technical Reports Server (NTRS)

    Tenney, D. R.

    1994-01-01

    This package consists of a series of three computer programs for treating one-dimensional transient diffusion problems in single and multiple phase binary alloy systems. An accurate understanding of the diffusion process is important in the development and production of binary alloys. Previous solutions of the diffusion equations were highly restricted in their scope and application. The finite-difference solutions developed for this package are applicable for planar, cylindrical, and spherical geometries with any diffusion-zone size and any continuous variation of the diffusion coefficient with concentration. Special techniques were included to account for differences in molal volumes, initiation and growth of an intermediate phase, disappearance of a phase, and the presence of an initial composition profile in the specimen. In each analysis, an effort was made to achieve good accuracy while minimizing computation time. The solutions to the diffusion equations for single-, two-, and three-phase binary alloy systems are numerically calculated by the three programs NAD1, NAD2, and NAD3. NAD1 treats the diffusion between pure metals which belong to a single-phase system. Diffusion in this system is described by a one-dimensional Fick's second law and will result in a continuous composition variation. For computational purposes, Fick's second law is expressed as an explicit second-order finite difference equation. Finite difference calculations are made by choosing the grid spacing small enough to give convergent solutions of acceptable accuracy. NAD2 treats diffusion between pure metals which form a two-phase system. Diffusion in the two-phase system is described by two partial differential equations (a Fick's second law for each phase) and an interface-flux-balance equation which describes the location of the interface. Actual interface motion is obtained by a mass conservation procedure. To account for changes in the thicknesses of the two phases as diffusion progresses, a variable grid technique developed by Murray and Landis is employed. These equations are expressed in finite difference form and solved numerically. Program NAD3 treats diffusion between pure metals which form a two-phase system with an intermediate third phase. Diffusion in the three-phase system is described by three partial differential expressions of Fick's second law and two interface-flux-balance equations. As with the two-phase case, a variable grid finite difference is used to numerically solve the diffusion equations. Computation time is minimized without sacrificing solution accuracy by treating the three-phase problem as a two-phase problem when the thickness of the intermediate phase is less than a preset value. Comparisons between these programs and other solutions have shown excellent agreement. The programs are written in FORTRAN IV for batch execution on the CDC 6600 with a central memory requirement of approximately 51K (octal) 60-bit words.
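
    The explicit scheme used by NAD1 has a familiar skeleton. The sketch below is a simplified single-phase analogue with constant diffusion coefficient D and fixed (Dirichlet) boundaries; the actual program additionally handles concentration-dependent D and non-planar geometries.

        import numpy as np

        def diffuse(c, D, dx, dt, steps):
            # explicit finite differences for Fick's second law, dc/dt = D * d2c/dx2
            r = D * dt / dx**2
            assert r <= 0.5, "explicit scheme is unstable for r > 0.5"
            c = c.copy()
            for _ in range(steps):
                c[1:-1] += r * (c[2:] - 2.0 * c[1:-1] + c[:-2])  # interior nodes
            return c  # boundary values held fixed

        # diffusion couple: pure metal A against pure metal B
        c0 = np.concatenate([np.ones(50), np.zeros(50)])
        c = diffuse(c0, D=1e-9, dx=1e-6, dt=4e-4, steps=1000)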

  17. A bottom-up approach to the strong CP problem

    NASA Astrophysics Data System (ADS)

    Diaz-Cruz, J. L.; Hollik, W. G.; Saldana-Salazar, U. J.

    2018-05-01

    The strong CP problem is one of many puzzles in the theoretical description of elementary particle physics that still lacks an explanation. While top-down solutions to that problem usually comprise new symmetries or fields or both, we want to present a rather bottom-up perspective. The main problem seems to be how to achieve small CP violation in the strong interactions despite the large CP violation in weak interactions. In this paper, we show that with minimal assumptions on the structure of mass (Yukawa) matrices, they do not contribute to the strong CP problem and thus we can provide a pathway to a solution of the strong CP problem within the structures of the Standard Model and no extension at the electroweak scale is needed. However, to address the flavor puzzle, models based on minimal SU(3) flavor groups leading to the proposed flavor matrices are favored. Though we refrain from an explicit UV completion of the Standard Model, we provide a simple requirement for such models not to show a strong CP problem by construction.

  18. Resist heating effect on e-beam mask writing at 75 kV and 60 A/cm2

    NASA Astrophysics Data System (ADS)

    Benes, Zdenek; Deverich, Christina; Huang, Chester; Lawliss, Mark

    2003-12-01

    Resist heating has been known to be one of the main contributors to local CD variation in mask patterning using variable shape e-beam tools. Increasingly complex mask patterns require an increased number of shapes, which drives the need for higher electron beam current densities to maintain reasonable write times. As beam current density is increased, CD error resulting from resist heating may become a dominating contributor to local CD variations. In this experimental study, the IBM EL4+ mask writer with high voltage and high current density has been used to quantitatively investigate the effect of resist heating on the local CD uniformity. ZEP 7000 and several chemically amplified resists have been evaluated under various exposure conditions (single-pass, multi-pass, variable spot size) and pattern densities. Patterns were designed specifically to allow easy measurement of local CD variations with write strategies designed to maximize the effect of resist heating. Local CD variations as high as 15 nm within an 18.75 × 18.75 μm sub-field have been observed for ZEP 7000 in single-pass writing with full 1000 nm spots at 50% pattern density. This number can be reduced by increasing the number of passes or by decreasing the maximum spot size. The local CD variation has been reduced to as low as 2 nm for ZEP 7000 for the same pattern under modified exposure conditions. The effectiveness of various writing strategies is discussed as well as their possible deficiencies. Minimal or no resist heating effects have been observed for the chemically amplified resists studied. The results suggest that the resist heating effect can be well controlled by careful selection of the resist/process system and/or writing strategy and that resist heating does not have to pose a problem for high-throughput e-beam mask making that requires high voltage and high current densities.

  19. Null Angular Momentum and Weak KAM Solutions of the Newtonian N-Body Problem

    NASA Astrophysics Data System (ADS)

    Percino-Figueroa, Boris A.

    2017-08-01

    In [Arch. Ration. Mech. Anal. 213 (2014), 981-991] it was proved that in the Newtonian N-body problem, given a minimal central configuration a and an arbitrary configuration x, there exists a completely parabolic orbit starting at x and asymptotic to the homothetic parabolic motion of a; furthermore, such an orbit is a free time minimizer of the action functional. In this article we extend this result to an abundance of completely parabolic motions by proving that, under the same hypotheses, the completely parabolic motion starting at x can be chosen to have zero angular momentum. We achieve this by characterizing the rotation-invariant weak KAM solutions as those defining a lamination of the configuration space by free time minimizers with zero angular momentum.

  20. Joint Geophysical Inversion With Multi-Objective Global Optimization Methods

    NASA Astrophysics Data System (ADS)

    Lelievre, P. G.; Bijani, R.; Farquharson, C. G.

    2015-12-01

    Pareto multi-objective global optimization (PMOGO) methods generate a suite of solutions that minimize multiple objectives (e.g., data misfits and regularization terms) in a Pareto-optimal sense. Providing a suite of models, as opposed to a single model that minimizes a weighted sum of objectives, allows a more complete assessment of the possibilities and avoids the often difficult choice of how to weight each objective. We are applying PMOGO methods to three classes of inverse problems. The first class comprises standard mesh-based problems where the physical property values in each cell are treated as continuous variables. The second class of problems is also mesh-based, but cells can only take discrete physical property values corresponding to known or assumed rock units. In the third class we consider a fundamentally different type of inversion in which a model comprises wireframe surfaces representing contacts between rock units; the physical properties of each rock unit remain fixed while the inversion controls the position of the contact surfaces via control nodes. This third class of problem is essentially a geometry inversion, which can be used to recover the unknown geometry of a target body or to investigate the viability of a proposed Earth model. Joint inversion is greatly simplified for the latter two problem classes because no additional mathematical coupling measure is required in the objective function. PMOGO methods can solve numerically complicated problems that could not be solved with standard descent-based local minimization methods. This includes the latter two classes of problems mentioned above. There are significant increases in the computational requirements when PMOGO methods are used, but these can be ameliorated using parallelization and problem dimension reduction strategies.
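
    The core of any PMOGO method is Pareto dominance. The snippet below filters candidate models down to the non-dominated set for two assumed objectives (data misfit and a regularization term, lower is better); the global-search machinery around it is omitted.

        import numpy as np

        def pareto_front(objectives):
            # indices of solutions not dominated by any other solution
            F = np.asarray(objectives, dtype=float)
            keep = []
            for i, fi in enumerate(F):
                dominated = any(np.all(fj <= fi) and np.any(fj < fi)
                                for j, fj in enumerate(F) if j != i)
                if not dominated:
                    keep.append(i)
            return keep

        # misfit vs. roughness for five candidate models (illustrative numbers)
        print(pareto_front([[1.0, 5.0], [2.0, 2.0], [3.0, 1.0],
                            [2.5, 2.5], [4.0, 4.0]]))   # -> [0, 1, 2]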

  1. Transformation of general binary MRF minimization to the first-order case.

    PubMed

    Ishikawa, Hiroshi

    2011-06-01

    We introduce a transformation of a general higher-order Markov random field with binary labels into a first-order one that has the same minima as the original. Moreover, we formalize a framework for approximately minimizing higher-order multi-label MRF energies that combines the new reduction with the fusion-move and QPBO algorithms. While many computer vision problems today are formulated as energy minimization problems, they have mostly been limited to using first-order energies, which consist of unary and pairwise clique potentials, with a few exceptions that consider triples. This is because of the lack of efficient algorithms to optimize energies with higher-order interactions. Our algorithm challenges this restriction, which limits the representational power of the models, so that higher-order energies can be used to capture the rich statistics of natural scenes. We also show that some minimization methods can be considered special cases of the present framework, and we compare the new method experimentally with other such techniques.
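
    As a flavor of such reductions, a standard identity converts a cubic term with negative coefficient into a pairwise energy by introducing one auxiliary binary variable w; the paper's construction generalizes this idea to terms of arbitrary order and sign:

        a\,xyz \;=\; \min_{w \in \{0,1\}} a\,w\,(x + y + z - 2),
        \qquad a < 0,\; x, y, z \in \{0,1\}

    When x = y = z = 1 the inner minimum picks w = 1 and recovers a; otherwise x + y + z - 2 <= 0 and w = 0 gives zero, matching a·xyz.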

  2. Finite element procedures for time-dependent convection-diffusion-reaction systems

    NASA Technical Reports Server (NTRS)

    Tezduyar, T. E.; Park, Y. J.; Deans, H. A.

    1988-01-01

    New finite element procedures based on the streamline-upwind/Petrov-Galerkin formulations are developed for time-dependent convection-diffusion-reaction equations. These procedures minimize spurious oscillations for convection-dominated and reaction-dominated problems. The results obtained for representative numerical examples are accurate with minimal oscillations. As a special application problem, the single-well chemical tracer test (a procedure for measuring oil remaining in a depleted field) is simulated numerically. The results show the importance of temperature effects on the interpreted value of residual oil saturation from such tests.
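
    In the textbook form of such stabilized formulations, the standard Galerkin weak form is augmented element-wise by a streamline term acting on the equation residual R(u); this is the generic SUPG statement, not necessarily the paper's exact variant:

        \text{Galerkin weak form} \;+\; \sum_e \int_{\Omega_e}
        \tau\,(\mathbf{a} \cdot \nabla w)\, R(u)\, d\Omega \;=\; 0

    where a is the convection velocity, w the weighting function, and tau a stabilization parameter chosen from the element size and flow data.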

  3. Drag Minimization for Wings and Bodies in Supersonic Flow

    NASA Technical Reports Server (NTRS)

    Heaslet, Max A; Fuller, Franklyn B

    1958-01-01

    The minimization of inviscid fluid drag is studied for aerodynamic shapes satisfying the conditions of linearized theory, and subject to imposed constraints on lift, pitching moment, base area, or volume. The problem is transformed to one of determining two-dimensional potential flows satisfying either Laplace's or Poisson's equations with boundary values fixed by the imposed conditions. A general method for determining integral relations between perturbation velocity components is developed. This analysis is not restricted in application to optimum cases; it may be used for any supersonic wing problem.

  4. Efficient data communication protocols for wireless networks

    NASA Astrophysics Data System (ADS)

    Zeydan, Engin

    In this dissertation, efficient decentralized algorithms are investigated for cost minimization problems in wireless networks. For wireless sensor networks, we investigate both the reduction in the energy consumption and throughput maximization problems separately using multi-hop data aggregation for correlated data in wireless sensor networks. The proposed algorithms exploit data redundancy using a game theoretic framework. For energy minimization, routes are chosen to minimize the total energy expended by the network using best response dynamics to local data. The cost function used in routing takes into account distance, interference and in-network data aggregation. The proposed energy-efficient correlation-aware routing algorithm significantly reduces the energy consumption in the network and converges in a finite number of steps iteratively. For throughput maximization, we consider both the interference distribution across the network and correlation between forwarded data when establishing routes. Nodes along each route are chosen to minimize the interference impact in their neighborhood and to maximize the in-network data aggregation. The resulting network topology maximizes the global network throughput and the algorithm is guaranteed to converge with a finite number of steps using best response dynamics. For multiple antenna wireless ad-hoc networks, we present distributed cooperative and regret-matching based learning schemes for the joint transmit beamformer and power level selection problem for nodes operating in a multi-user interference environment. Total network transmit power is minimized while ensuring a constant received signal-to-interference and noise ratio at each receiver. In cooperative and regret-matching based power minimization algorithms, transmit beamformers are selected from a predefined codebook to minimize the total power. By selecting transmit beamformers judiciously and performing power adaptation, the cooperative algorithm is shown to converge to pure strategy Nash equilibrium with high probability throughout the iterations in the interference impaired network. On the other hand, the regret-matching learning algorithm is noncooperative and requires a minimal amount of overhead. The proposed cooperative and regret-matching based distributed algorithms are also compared with centralized solutions through simulation results.
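
    The fixed-point flavor of distributed power minimization under SINR targets is captured by the classical Foschini-Miljanic iteration, sketched below as a simple illustration; it is not the dissertation's algorithm, which additionally selects beamformers from a codebook via game-theoretic learning.

        import numpy as np

        def distributed_power_control(G, noise, gamma, iters=100):
            # each transmitter scales its power by (target SINR / measured SINR);
            # converges to the minimal-power solution when the target is feasible
            n = len(noise)
            p = np.ones(n)
            for _ in range(iters):
                interference = G @ p - np.diag(G) * p + noise
                sinr = np.diag(G) * p / interference
                p = (gamma / sinr) * p  # best response to current interference
            return p

        G = np.array([[1.0, 0.1], [0.2, 0.8]])   # link gains (illustrative)
        print(distributed_power_control(G, noise=np.array([0.01, 0.01]), gamma=2.0))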

  5. Local Risk-Minimization for Defaultable Claims with Recovery Process

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Biagini, Francesca, E-mail: biagini@mathematik.uni-muenchen.de; Cretarola, Alessandra, E-mail: alessandra.cretarola@dmi.unipg.it

    We study the local risk-minimization approach for defaultable claims with random recovery at default time, seen as payment streams on the random interval [0, τ∧T], where T denotes the fixed time horizon. We find the pseudo-locally risk-minimizing strategy in the case where the agent's information takes into account the possibility of a default event (local risk-minimization with G-strategies) and we provide an application in the case of a corporate bond. We also discuss the problem of finding a pseudo-locally risk-minimizing strategy if we suppose the agent obtains her information only by observing the non-defaultable assets.

  6. The Role of Design-of-Experiments in Managing Flow in Compact Air Vehicle Inlets

    NASA Technical Reports Server (NTRS)

    Anderson, Bernhard H.; Miller, Daniel N.; Gridley, Marvin C.; Agrell, Johan

    2003-01-01

    It is the purpose of this study to demonstrate the viability and economy of Design-of-Experiments methodologies to arrive at microscale secondary flow control array designs that maintain optimal inlet performance over a wide range of the mission variables and to explore how these statistical methods provide a better understanding of the management of flow in compact air vehicle inlets. These statistical design concepts were used to investigate the robustness properties of low unit strength micro-effector arrays. Low unit strength micro-effectors are micro-vanes set at very low angles-of-incidence with very long chord lengths. They were designed to influence the near wall inlet flow over an extended streamwise distance, and their advantage lies in low total pressure loss and high effectiveness in managing engine face distortion. The term robustness is used in this paper in the same sense as it is used in the industrial problem solving community. It refers to minimizing the effects of the hard-to-control factors that influence the development of a product or process. In Robustness Engineering, the effects of the hard-to-control factors are often called "noise", and the hard-to-control factors themselves are referred to as the environmental variables or sometimes as the Taguchi noise variables. Hence Robust Optimization refers to minimizing the effects of the environmental or noise variables on the development (design) of a product or process. In the management of flow in compact inlets, the environmental or noise variables can be identified with the mission variables. Therefore this paper formulates a statistical design methodology that minimizes the impact of variations in the mission variables on inlet performance and demonstrates that these statistical design concepts can lead to simpler inlet flow management systems.

  7. Neighboring Extremal Guidance for Systems with Piecewise Linear Control Using Time As the Reference Variable

    DTIC Science & Technology

    1993-05-01

    obtained to provide a nominal control history. The guidance law is found by minimizing the second variation of the suboptimal trajectory...deviations from the suboptimal trajectory to required changes in the nominal control history. The deviations from the suboptimal trajectory, used together...with the precomputed gains, determine the change in the nominal control history required to meet the final constraints while minimizing the change in

  8. Perspectives of Disciplinary Problems and Practices in Elementary Schools

    ERIC Educational Resources Information Center

    Huger Marsh, Darlene P.

    2012-01-01

    Ill-discipline in public schools predates compulsory education in the United States. Disciplinary policies and laws enacted to combat the problem have met with minimal success. Research and recommendations have generally focused on the indiscipline problems ubiquitous in intermediate, junior and senior high schools. However, similar misbehaviors…

  9. Minimalism as a Guiding Principle: Linking Mathematical Learning to Everyday Knowledge

    ERIC Educational Resources Information Center

    Inoue, Noriyuki

    2008-01-01

    Studies report that students often fail to consider familiar aspects of reality in solving mathematical word problems. This study explored how different features of mathematical problems influence the way that undergraduate students employ realistic considerations in mathematical problem solving. Incorporating familiar contents in the word…

  10. Rescuing the MaxEnt treatment for q-generalized entropies

    NASA Astrophysics Data System (ADS)

    Plastino, A.; Rocca, M. C.

    2018-02-01

    It has been recently argued that the MaxEnt variational problem would not adequately work for Renyi's and Tsallis' entropies. We constructively show here that this is not so if one formulates the associated variational problem in a more orthodox functional fashion.
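
    For reference, the Tsallis functional and the q-exponential stationary distribution that the MaxEnt variation yields under normalization and a linear energy constraint are, schematically (the exact effective multiplier β' depends on the constraint convention, e.g. ordinary versus escort averages):

        S_q[p] = \frac{1 - \sum_i p_i^{\,q}}{q - 1}, \qquad
        p_i \;\propto\; \bigl[\, 1 - (1 - q)\,\beta'\,\epsilon_i \,\bigr]_+^{1/(1-q)}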

  11. A dynamic unilateral contact problem with adhesion and friction in viscoelasticity

    NASA Astrophysics Data System (ADS)

    Cocou, Marius; Schryve, Mathieu; Raous, Michel

    2010-08-01

    The aim of this paper is to study an interaction law coupling recoverable adhesion, friction and unilateral contact between two viscoelastic bodies of Kelvin-Voigt type. A dynamic contact problem with adhesion and nonlocal friction is considered and its variational formulation is written as the coupling between an implicit variational inequality and a parabolic variational inequality describing the evolution of the intensity of adhesion. The existence and approximation of variational solutions are analysed, based on a penalty method, some abstract results and compactness properties. Finally, some numerical examples are presented.

  12. Acoustic transducer apparatus with reduced thermal conduction

    NASA Technical Reports Server (NTRS)

    Lierke, Ernst G. (Inventor); Leung, Emily W. (Inventor); Bhat, Balakrishna T. (Inventor)

    1990-01-01

    A horn is described for transmitting sound from a transducer to a heated chamber containing an object which is levitated by acoustic energy while it is heated to a molten state. The horn minimizes heat transfer, thereby minimizing heating of the transducer, temperature variation in the chamber, and loss of heat from the chamber. The forward portion of the horn, which is the portion closest to the chamber, has holes that reduce its cross-sectional area to minimize the conduction of heat along the length of the horn, with the entire front portion of the horn being rigid and having an even front face to efficiently transfer high-frequency acoustic energy to fluid in the chamber. In one arrangement, the horn has numerous rows of holes extending perpendicular to the length of the horn, with alternate rows extending perpendicular to one another to form a sinuous path for the conduction of heat along the length of the horn.

  13. An automated, high-throughput plant phenotyping system using machine learning-based plant segmentation and image analysis.

    PubMed

    Lee, Unseok; Chang, Sungyul; Putra, Gian Anantrio; Kim, Hyoungseok; Kim, Dong Hwan

    2018-01-01

    A high-throughput plant phenotyping system automatically observes and grows many plant samples. Many plant sample images are acquired by the system to determine the characteristics of the plants (populations). Stable image acquisition and processing are very important to accurately determine these characteristics. However, hardware for acquiring plant images rapidly and stably, while minimizing plant stress, is lacking. Moreover, most software cannot adequately handle large-scale plant imaging. To address these problems, we developed a new, automated, high-throughput plant phenotyping system using simple and robust hardware, and an automated plant-image-analysis pipeline consisting of machine-learning-based plant segmentation. Our hardware acquires images reliably and quickly and minimizes plant stress. Furthermore, the images are processed automatically. In particular, large-scale plant-image datasets can be segmented precisely using a classifier developed using a superpixel-based machine-learning algorithm (Random Forest), and variations in plant parameters (such as area) over time can be assessed using the segmented images. We performed comparative evaluations to identify an appropriate learning algorithm for our proposed system, and tested three robust learning algorithms. We developed not only an automatic analysis pipeline but also a convenient means of plant-growth analysis that provides a learning data interface and visualization of plant growth trends. Thus, our system allows end-users such as plant biologists to analyze plant growth via large-scale plant image data easily.
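
    A minimal sketch of superpixel-based Random Forest segmentation follows, using SLIC superpixels and mean-color features on a synthetic image with hypothetical labels; the system's actual feature set and training data are not described in the record.

        import numpy as np
        from skimage.segmentation import slic
        from sklearn.ensemble import RandomForestClassifier

        rng = np.random.default_rng(0)
        img = rng.random((120, 160, 3))              # stand-in for a tray image
        segments = slic(img, n_segments=150, compactness=10)

        labels = np.unique(segments)
        # mean RGB per superpixel -- a deliberately minimal feature set
        X = np.array([img[segments == s].mean(axis=0) for s in labels])
        y = rng.integers(0, 2, size=len(X))          # hypothetical plant/background labels

        clf = RandomForestClassifier(n_estimators=100).fit(X, y)
        pred = clf.predict(X)
        lut = np.zeros(labels.max() + 1, dtype=int)
        lut[labels] = pred
        plant_mask = lut[segments]                   # back-projected pixel-level mask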

  14. A Kullback-Leibler approach for 3D reconstruction of spectral CT data corrupted by Poisson noise

    NASA Astrophysics Data System (ADS)

    Hohweiller, Tom; Ducros, Nicolas; Peyrin, Françoise; Sixou, Bruno

    2017-09-01

    While standard computed tomography (CT) data do not depend on energy, spectral computed tomography (SPCT) acquires energy-resolved data, which allows material decomposition of the object of interest. Decompositions in the projection domain allow creating a projection mass density (PMD) per material. From the decomposed projections, a tomographic reconstruction creates a 3D material density volume. The decomposition is made possible by minimizing a cost function. A variational approach is preferred since this is an ill-posed non-linear inverse problem, and noise plays a critical role when decomposing the data; the data fidelity term must therefore take the photon noise into account. In this work two data fidelity terms were investigated: a weighted least squares (WLS) term, adapted to Gaussian noise, and the Kullback-Leibler distance (KL), adapted to Poisson noise. A regularized Gauss-Newton algorithm minimizes the cost function iteratively. Both methods decompose materials from a numerical phantom of a mouse. Soft tissue and bone are decomposed in the projection domain; then a tomographic reconstruction creates a 3D material density volume for each material. Comparing relative errors, KL is shown to outperform WLS for low photon counts, in 2D and 3D. This new method could be of particular interest when low-dose acquisitions are performed.
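
    Writing y_i for the measured counts in bin i and ŷ_i(x) for the expected counts given the material images x, the two fidelity terms have standard forms (the KL term is the negative Poisson log-likelihood up to constants); the exact weighting used in the paper may differ:

        C_{\mathrm{WLS}}(x) = \sum_i \frac{\bigl(y_i - \hat y_i(x)\bigr)^2}{2\sigma_i^2},
        \qquad
        C_{\mathrm{KL}}(x) = \sum_i \Bigl[\, \hat y_i(x) - y_i
        + y_i \ln \frac{y_i}{\hat y_i(x)} \,\Bigr]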

  15. Rare events in finite and infinite dimensions

    NASA Astrophysics Data System (ADS)

    Reznikoff, Maria G.

    Thermal noise introduces stochasticity into deterministic equations and makes possible events which are never seen in the zero temperature setting. The driving force behind the thesis work is a desire to bring analysis and probability to bear on a class of relevant and intriguing physical problems, and in so doing, to allow applications to drive the development of new mathematical theory. The unifying theme is the study of rare events under the influence of small, random perturbations, and the manifold mathematical problems which ensue. In the first part, we apply large deviation theory and prefactor estimates to a coherent rotation micromagnetic model in order to analyze thermally activated magnetic switching. We consider recent physical experiments and the mathematical questions "asked" by them. A stochastic resonance type phenomenon is discovered, leading to the definition of finite temperature astroids. Non-Arrhenius behavior is discussed. The analysis is extended to ramped astroids. In addition, we discover that for low damping and ultrashort pulses, deterministic effects can override thermal effects, in accord with very recent ultrashort pulse experiments. Even more interesting, perhaps, is the study of large deviations in the infinite dimensional context, i.e. in spatially extended systems. Inspired by recent numerical investigations, we study the stochastically perturbed Allen-Cahn and Cahn-Hilliard equations. For the Allen-Cahn equation, we study the action minimization problem (a deterministic variational problem) and prove the action scaling in four parameter regimes, via upper and lower bounds. The sharp interface limit is studied. We formally derive a reduced action functional which lends insight into the connection between action minimization and curvature flow. For the Cahn-Hilliard equation, we prove upper and lower bounds for the scaling of the energy barrier in the nucleation and growth regime. Finally, we consider rare events in large or infinite domains, in one spatial dimension. We introduce a natural reference measure through which to analyze the invariant measure of stochastically perturbed, nonlinear partial differential equations. Also, for noisy reaction diffusion equations with an asymmetric potential, we discover how to rescale space and time in order to map the dynamics in the zero temperature limit to the Poisson Model, a simple version of the Johnson-Mehl-Avrami-Kolmogorov model for nucleation and growth.

  16. Ergonomic Training for Tomorrow's Office.

    ERIC Educational Resources Information Center

    Gross, Clifford M.; Chapnik, Elissa Beth

    1987-01-01

    The authors focus on issues related to the continual use of video display terminals in the office, including safety and health regulations, potential health problems, and the role of training in minimizing work-related health problems. (CH)

  17. Poverty-Exploitation-Alienation.

    ERIC Educational Resources Information Center

    Bronfenbrenner, Martin

    1980-01-01

    Illustrates how knowledge derived from the discipline of economics can be used to help shed light on social problems such as poverty, exploitation, and alienation, and can help decision makers form policy to minimize these and similar problems. (DB)

  18. Variational finite-difference methods in linear and nonlinear problems of the deformation of metallic and composite shells (review)

    NASA Astrophysics Data System (ADS)

    Maksimyuk, V. A.; Storozhuk, E. A.; Chernyshenko, I. S.

    2012-11-01

    Variational finite-difference methods of solving linear and nonlinear problems for thin and nonthin shells (plates) made of homogeneous isotropic (metallic) and orthotropic (composite) materials are analyzed and their classification principles and structure are discussed. Scalar and vector variational finite-difference methods that implement the Kirchhoff-Love hypotheses analytically or algorithmically using Lagrange multipliers are outlined. The Timoshenko hypotheses are implemented in a traditional way, i.e., analytically. The stress-strain state of metallic and composite shells of complex geometry is analyzed numerically. The numerical results are presented in the form of graphs and tables and used to assess the efficiency of using the variational finite-difference methods to solve linear and nonlinear problems of the statics of shells (plates).

  19. A new formulation for anisotropic radiative transfer problems. I - Solution with a variational technique

    NASA Technical Reports Server (NTRS)

    Cheyney, H., III; Arking, A.

    1976-01-01

    The equations of radiative transfer in anisotropically scattering media are reformulated as linear operator equations in a single independent variable. The resulting equations are suitable for solution by a variety of standard mathematical techniques. The operators appearing in the resulting equations are in general nonsymmetric; however, it is shown that every bounded linear operator equation can be embedded in a symmetric linear operator equation and a variational solution can be obtained in a straightforward way. For purposes of demonstration, a Rayleigh-Ritz variational method is applied to three problems involving simple phase functions. It is to be noted that the variational technique demonstrated is of general applicability and permits simple solutions for a wide range of otherwise difficult mathematical problems in physics.
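
    The textbook statement underlying the demonstration: for a symmetric, positive-definite operator A, solving A u = f is equivalent to minimizing a quadratic functional, and the Rayleigh-Ritz method minimizes it over a finite basis {φ_j}:

        J(u) = \langle A u, u \rangle - 2\,\langle f, u \rangle, \qquad
        u_N = \sum_j c_j\, \phi_j
        \;\Rightarrow\;
        \sum_j \langle A \phi_j, \phi_i \rangle\, c_j = \langle f, \phi_i \rangle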

  20. Minimal Interventions in the Teaching of Mathematics

    ERIC Educational Resources Information Center

    Foster, Colin

    2014-01-01

    This paper addresses ways in which mathematics pedagogy can benefit from insights gleaned from counselling. Person-centred counselling stresses the value of genuineness, warm empathetic listening and minimal intervention to support people in solving their own problems and developing increased autonomy. Such an approach contrasts starkly with the…
