Sample records for operator convex functions

  1. Nonexpansiveness of a linearized augmented Lagrangian operator for hierarchical convex optimization

    NASA Astrophysics Data System (ADS)

    Yamagishi, Masao; Yamada, Isao

    2017-04-01

    Hierarchical convex optimization concerns two-stage optimization problems: the first stage problem is a convex optimization; the second stage problem is the minimization of a convex function over the solution set of the first stage problem. For the hierarchical convex optimization, the hybrid steepest descent method (HSDM) can be applied, where the solution set of the first stage problem must be expressed as the fixed point set of a certain nonexpansive operator. In this paper, we propose a nonexpansive operator that yields a computationally efficient update when it is plugged into the HSDM. The proposed operator is inspired by the update of the linearized augmented Lagrangian method. It is applicable to characterize the solution set of recent sophisticated convex optimization problems found in the context of inverse problems, where the sum of multiple proximable convex functions involving linear operators must be minimized to incorporate preferable properties into the minimizers. For such a problem formulation, no nonexpansive operator had yet been reported whose update, when used in the HSDM, is free from inversions of the linear operators. Unlike previously known nonexpansive operators, the proposed operator yields an inversion-free update in such cases. As an application of the proposed operator plugged into the HSDM, we also present, in the context of the so-called superiorization, an algorithmic solution to a convex optimization problem over the generalized convex feasible set where the intersection of the hard constraints is not necessarily simple.
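
    A minimal HSDM sketch in Python may help fix ideas: the iteration below is the standard hybrid steepest descent update x_{k+1} = T(x_k) - lam_k * grad_phi(T(x_k)); the operator T and second-stage objective phi used here are illustrative stand-ins, not the linearized augmented Lagrangian operator proposed in this record.

      import numpy as np

      def hsdm(T, grad_phi, x0, steps=1000):
          """Hybrid steepest descent: x <- T(x) - lam_k * grad_phi(T(x)).

          T        : nonexpansive operator whose fixed-point set is the
                     solution set of the first-stage problem
          grad_phi : gradient of the second-stage convex objective
          """
          x = x0
          for k in range(1, steps + 1):
              lam = 1.0 / k          # diminishing steps: lam_k -> 0, sum lam_k = inf
              y = T(x)
              x = y - lam * grad_phi(y)
          return x

      # toy usage: first stage = membership of the unit ball (T = projection),
      # second stage = squared distance to the point (2, 0)
      T = lambda x: x / max(1.0, np.linalg.norm(x))
      grad_phi = lambda x: x - np.array([2.0, 0.0])
      print(hsdm(T, grad_phi, np.zeros(2)))   # approaches [1, 0]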

  2. Convexity of Energy-Like Functions: Theoretical Results and Applications to Power System Operations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dvijotham, Krishnamurthy; Low, Steven; Chertkov, Michael

    2015-01-12

    Power systems are undergoing unprecedented transformations with increased adoption of renewables and distributed generation, as well as the adoption of demand response programs. All of these changes, while making the grid more responsive and potentially more efficient, pose significant challenges for power systems operators. Conventional operational paradigms are no longer sufficient as the power system may no longer have big dispatchable generators with sufficient positive and negative reserves. This increases the need for tools and algorithms that can efficiently predict safe regions of operation of the power system. In this paper, we study energy functions as a tool to design algorithms for various operational problems in power systems. These have a long history in power systems and have been primarily applied to transient stability problems. In this paper, we take a new look at power systems, focusing on an aspect that has previously received little attention: convexity. We characterize the domain of voltage magnitudes and phases within which the energy function is convex in these variables. We show that this corresponds naturally with standard operational constraints imposed in power systems. We show that the power flow equations can be solved using this approach, as long as the solution lies within the convexity domain. We outline various desirable properties of solutions in the convexity domain and present simple numerical illustrations supporting our results.
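
    As a toy illustration of the energy-function idea (not the authors' formulation), consider a lossless two-bus system with susceptance B and active-power injection P: the energy function E(theta) = -P*theta - B*cos(theta) is convex for |theta| < pi/2, and its stationary point solves the power flow equation P = B sin(theta). The values of B and P below are hypothetical.

      import numpy as np
      from scipy.optimize import minimize_scalar

      B, P = 1.0, 0.5                          # per-unit susceptance and injection (illustrative)
      E = lambda th: -P * th - B * np.cos(th)  # energy function; E'' = B cos(th) > 0 on |th| < pi/2

      # minimizing E inside the convexity domain solves the power flow equation
      res = minimize_scalar(E, bounds=(-np.pi / 2, np.pi / 2), method='bounded')
      print(res.x, np.arcsin(P / B))           # both are about 0.5236 rad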

  3. Superiorization with level control

    NASA Astrophysics Data System (ADS)

    Cegielski, Andrzej; Al-Musallam, Fadhel

    2017-04-01

    The convex feasibility problem is to find a common point of a finite family of closed convex subsets. In many applications one requires something more, namely finding a common point of closed convex subsets which minimizes a continuous convex function. The latter requirement leads to an application of the superiorization methodology, which sits between methods for the convex feasibility problem and methods for convex constrained minimization. Inspired by the superiorization idea, we introduce a method which sequentially applies a long-step algorithm to a sequence of convex feasibility problems; the method employs quasi-nonexpansive operators as well as subgradient projections with level control and does not require evaluation of the metric projection. We replace a perturbation of the iterates (applied in the superiorization methodology) by a perturbation of the current level in minimizing the objective function. We consider the method in Euclidean space in order to guarantee strong convergence, although the method is well defined in a Hilbert space.

  4. Image deblurring based on nonlocal regularization with a non-convex sparsity constraint

    NASA Astrophysics Data System (ADS)

    Zhu, Simiao; Su, Zhenming; Li, Lian; Yang, Yi

    2018-04-01

    In recent years, nonlocal regularization methods for image restoration (IR) have drawn more and more attention due to the promising results obtained when compared to traditional local regularization methods. Despite the success of this technique, most existing methods exploit a convex regularizing functional in order to obtain computational efficiency, which is equivalent to imposing a convex prior on the output of the nonlocal difference operator. However, our experiments illustrate that the empirical distribution of the output of the nonlocal difference operator, especially in the seminal work of Kheradmand et al., is extremely heavy-tailed and is poorly captured by a convex prior. Therefore, in this paper, we propose a nonlocal regularization-based method with a non-convex sparsity constraint for image deblurring. Finally, an effective algorithm is developed to solve the corresponding non-convex optimization problem. The experimental results demonstrate the effectiveness of the proposed method.

  5. Time-frequency filtering and synthesis from convex projections

    NASA Astrophysics Data System (ADS)

    White, Langford B.

    1990-11-01

    This paper describes the application of the theory of projections onto convex sets to time-frequency filtering and synthesis problems. We show that the class of Wigner-Ville distributions (WVD) of L² signals forms the boundary of a closed convex subset of L²(ℝ²). This result is obtained by considering the convex set of states on the Heisenberg group, of which the ambiguity functions form the extreme points. The form of the projection onto the set of WVDs is deduced. Various linear and non-linear filtering operations are incorporated by formulation as convex projections. An example algorithm for simultaneous time-frequency filtering and synthesis is suggested.
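
    A generic projections-onto-convex-sets loop of the kind applied in this record; the WVD-specific projection of the paper is replaced here by two elementary convex sets (a box and a half-space) purely for illustration.

      import numpy as np

      def pocs(projections, x0, iters=100):
          """Cyclically apply projections onto closed convex sets (POCS)."""
          x = x0
          for _ in range(iters):
              for P in projections:
                  x = P(x)
          return x

      # illustration: intersect the box [-1, 1]^2 with the half-space {x : a.x <= b}
      P_box = lambda x: np.clip(x, -1.0, 1.0)
      a, b = np.array([1.0, 1.0]), 1.0
      P_half = lambda x: x - max(0.0, a @ x - b) * a / (a @ a)
      print(pocs([P_box, P_half], np.array([3.0, 3.0])))   # lands in the intersection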

  6. Relaxation in control systems of subdifferential type

    NASA Astrophysics Data System (ADS)

    Tolstonogov, A. A.

    2006-02-01

    In a separable Hilbert space we consider a control system with evolution operators that are subdifferentials of a proper convex lower semicontinuous function depending on time. The constraint on the control is given by a multivalued function with non-convex values that is lower semicontinuous with respect to the variable states. Along with the original system we consider the system in which the constraint on the control is the upper semicontinuous convex-valued regularization of the original constraint. We study relations between the solution sets of these systems. As an application we consider a control variational inequality. We give an example of a control system of parabolic type with an obstacle.

  7. Convex functions and some inequalities in terms of the Non-Newtonian Calculus

    NASA Astrophysics Data System (ADS)

    Unluyol, Erdal; Salas, Seren; Iscan, Imdat

    2017-04-01

    Differentiation and integration are basic operations of calculus and analysis. Indeed, they are infinitesimal versions of the subtraction and addition operations on numbers, respectively. From 1967 to 1970, Michael Grossman and Robert Katz [1] gave definitions of a new kind of derivative and integral, converting the roles of subtraction and addition into division and multiplication, and thus established a new calculus, called the Non-Newtonian Calculus. In this paper, we investigate convex functions and some inequalities in terms of the Non-Newtonian Calculus, and then compare the Newtonian and Non-Newtonian settings.
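
    To make the notion concrete, in one of the Grossman-Katz calculi (the geometric, or multiplicative, calculus) the difference quotient is replaced by a ratio; for positive f this background fact, stated here for orientation rather than taken from the record, reads

      \[
      f^{*}(t) \;=\; \lim_{h \to 0} \left( \frac{f(t+h)}{f(t)} \right)^{1/h} \;=\; \exp\!\big( (\ln f)'(t) \big),
      \]

    and the matching notion of convexity compares geometric rather than arithmetic means, e.g. f(√(xy)) ≤ √(f(x) f(y)) for a multiplicatively convex f.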

  8. CPU timing routines for a CONVEX C220 computer system

    NASA Technical Reports Server (NTRS)

    Bynum, Mary Ann

    1989-01-01

    The timing routines available on the CONVEX C220 computer system in the Structural Mechanics Division (SMD) at NASA Langley Research Center are examined. The function of the timing routines, the use of the timing routines in sequential, parallel, and vector code, and the interpretation of the results from the timing routines with respect to the CONVEX model of computing are described. The timing routines available on the SMD CONVEX fall into two groups. The first group includes standard timing routines generally available with UNIX 4.3 BSD operating systems, while the second group includes routines unique to the SMD CONVEX. The standard timing routines described in this report are /bin/csh time,/bin/time, etime, and ctime. The routines unique to the SMD CONVEX are getinfo, second, cputime, toc, and a parallel profiling package made up of palprof, palinit, and palsum.

  9. Collision detection for spacecraft proximity operations

    NASA Technical Reports Server (NTRS)

    Vaughan, Robin M.; Bergmann, Edward V.; Walker, Bruce K.

    1991-01-01

    A new collision detection algorithm has been developed for use when two spacecraft are operating in the same vicinity. The two spacecraft are modeled as unions of convex polyhedra, where the resulting polyhedron may be either convex or nonconvex. The relative motion of the two spacecraft is assumed to be such that one vehicle is moving with constant linear and angular velocity with respect to the other. Contacts between the vertices, faces, and edges of the polyhedra representing the two spacecraft are shown to occur when the value of one or more of a set of functions is zero. The collision detection algorithm is then formulated as a search for the zeros (roots) of these functions. Special properties of the functions for the assumed relative trajectory are exploited to expedite the zero search. The new algorithm is the first algorithm that can solve the collision detection problem exactly for relative motion with constant angular velocity. This is a significant improvement over models of rotational motion used in previous collision detection algorithms.
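
    A minimal sketch of the zero-search formulation, assuming a single scalar contact function g(t) whose sign change marks a contact; the families of vertex/face and edge/edge functions used in the record, and the trajectory-specific properties that accelerate the search, are not reproduced.

      import numpy as np
      from scipy.optimize import brentq

      def first_contact(g, t_end, n_grid=200):
          """Scan [0, t_end] for a sign change of the contact function g,
          then polish the bracketed root with Brent's method."""
          ts = np.linspace(0.0, t_end, n_grid)
          vals = [g(t) for t in ts]
          for (t0, v0), (t1, v1) in zip(zip(ts, vals), zip(ts[1:], vals[1:])):
              if v0 == 0.0:
                  return t0
              if v0 * v1 < 0.0:
                  return brentq(g, t0, t1)
          return None   # no collision detected on the interval

      # toy contact function: separation of two points closing at a constant rate
      g = lambda t: 2.0 - 1.5 * t
      print(first_contact(g, 4.0))   # about 1.3333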

  10. Space ultra-vacuum facility and method of operation

    NASA Technical Reports Server (NTRS)

    Naumann, Robert J. (Inventor)

    1988-01-01

    A wake shield space processing facility (10) for maintaining ultra-high levels of vacuum is described. The wake shield (12) is a truncated hemispherical section having a convex side (14) and a concave side (24). Material samples (68) to be processed are located on the convex side of the shield, which faces in the wake direction in operation in orbit. Necessary processing fixtures (20) and (22) are also located on the convex side. Support equipment including power supplies (40, 42), CMG package (46) and electronic control package (44) are located on the concave side (24) of the shield facing the ram direction. Prior to operation in orbit the wake shield is oriented in reverse with the convex side facing the ram direction to provide cleaning by exposure to ambient atomic oxygen. The shield is then baked out by being pointed directly at the sun to obtain heating for a suitable period.

  11. On approximation and energy estimates for delta 6-convex functions.

    PubMed

    Saleem, Muhammad Shoaib; Pečarić, Josip; Rehman, Nasir; Khan, Muhammad Wahab; Zahoor, Muhammad Sajid

    2018-01-01

    The smooth approximation and weighted energy estimates for delta 6-convex functions are derived in this research. Moreover, we conclude that if 6-convex functions are close in uniform norm, then their third derivatives are close in weighted [Formula: see text]-norm.

  12. Collision detection for spacecraft proximity operations. Ph.D. Thesis - MIT

    NASA Technical Reports Server (NTRS)

    Vaughan, Robin M.

    1987-01-01

    The development of a new collision detection algorithm to be used when two spacecraft are operating in the same vicinity is described. The two spacecraft are modeled as unions of convex polyhedra, where the polyhedron resulting from the union may be either convex or nonconvex. The relative motion of the two spacecraft is assumed to be such that one vehicle is moving with constant linear and angular velocity with respect to the other. The algorithm determines if a collision is possible and, if so, predicts the time when the collision will take place. The theoretical basis for the new collision detection algorithm is the C-function formulation of the configuration space approach recently introduced by researchers in robotics. Three different types of C-functions are defined that model the contacts between the vertices, edges, and faces of the polyhedra representing the two spacecraft. The C-functions are shown to be transcendental functions of time for the assumed trajectory of the moving spacecraft. The capabilities of the new algorithm are demonstrated for several example cases.

  13. Chromatically corrected virtual image visual display. [reducing eye strain in flight simulators

    NASA Technical Reports Server (NTRS)

    Kahlbaum, W. M., Jr. (Inventor)

    1980-01-01

    An in-line, three element, large diameter, optical display lens is disclosed which has a front convex-convex element, a central convex-concave element, and a rear convex-convex element. The lens, used in flight simulators, magnifies an image presented on a television monitor and, by causing light rays leaving the lens to be in essentially parallel paths, reduces eye strain of the simulator operator.

  14. Radius of convexity of a certain class of close-to-convex functions

    NASA Astrophysics Data System (ADS)

    Yahya, Abdullah; Soh, Shaharuddin Cik

    2017-11-01

    In the present paper, we consider and investigate a certain class of close-to-convex functions defined in the unit disk U = {z : |z| < 1} by the condition Re{e^(iα) z f′(z) / (f(z) − f(−z))} > δ, where |α| < π, cos(α) > δ and 0 ≤ δ < 1. Furthermore, we obtain preliminary bounds on f′(z) and determine the radius of convexity.

  15. On equivalent characterizations of convexity of functions

    NASA Astrophysics Data System (ADS)

    Gkioulekas, Eleftherios

    2013-04-01

    A detailed development of the theory of convex functions, not often found in complete form in most textbooks, is given. We adopt the strict secant line definition as the definitive definition of convexity. We then show that for differentiable functions, this definition becomes logically equivalent with the first derivative monotonicity definition and the tangent line definition. Consequently, for differentiable functions, all three characterizations are logically equivalent.
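
    For reference, the three characterizations in question, stated in their non-strict form for a differentiable f on an interval I (the record works with the strict versions):

      \[
      \begin{aligned}
      &\text{(secant line)} && f\big(\lambda x + (1-\lambda) y\big) \le \lambda f(x) + (1-\lambda) f(y), \quad x, y \in I,\ \lambda \in [0, 1],\\
      &\text{(derivative monotonicity)} && f' \text{ is nondecreasing on } I,\\
      &\text{(tangent line)} && f(y) \ge f(x) + f'(x)\,(y - x) \quad \text{for all } x, y \in I.
      \end{aligned}
      \]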

  16. Characterizations of matrix and operator-valued Φ-entropies, and operator Efron-Stein inequalities.

    PubMed

    Cheng, Hao-Chung; Hsieh, Min-Hsiu

    2016-03-01

    We derive new characterizations of the matrix Φ-entropy functionals introduced in Chen & Tropp (Chen, Tropp 2014 Electron. J. Prob. 19 , 1-30. (doi:10.1214/ejp.v19-2964)). These characterizations help us to better understand the properties of matrix Φ-entropies, and are a powerful tool for establishing matrix concentration inequalities for random matrices. Then, we propose an operator-valued generalization of matrix Φ-entropy functionals, and prove the subadditivity under Löwner partial ordering. Our results demonstrate that the subadditivity of operator-valued Φ-entropies is equivalent to the convexity. As an application, we derive the operator Efron-Stein inequality.

  17. Applying Workspace Limitations in a Velocity-Controlled Robotic Mechanism

    NASA Technical Reports Server (NTRS)

    Abdallah, Muhammad E. (Inventor); Hargrave, Brian (Inventor); Platt, Robert J., Jr. (Inventor)

    2014-01-01

    A robotic system includes a robotic mechanism responsive to velocity control signals, and a permissible workspace defined by a convex-polygon boundary. A host machine determines a position of a reference point on the mechanism with respect to the boundary, and includes an algorithm for enforcing the boundary by automatically shaping the velocity control signals as a function of the position, thereby providing smooth and unperturbed operation of the mechanism along the edges and corners of the boundary. The algorithm is suited for application with higher speeds and/or external forces. A host machine includes an algorithm for enforcing the boundary by shaping the velocity control signals as a function of the reference point position, and a hardware module for executing the algorithm. A method for enforcing the convex-polygon boundary is also provided that shapes a velocity control signal via a host machine as a function of the reference point position.
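
    A rough sketch of the boundary-enforcement idea, assuming the convex polygon is given as half-plane constraints n·x ≤ b and that velocity shaping simply removes the outward component near an active edge; this illustrates the general technique, not the patented algorithm, and the names and thresholds below are made up.

      import numpy as np

      def shape_velocity(v, x, edges, margin=0.05):
          """Shape a commanded velocity so the reference point x stays inside a
          convex polygon given as (n, b) pairs with n.x <= b meaning 'inside'."""
          for n, b in edges:
              slack = b - n @ x                    # distance-like slack to the edge
              if slack < margin and n @ v > 0.0:   # heading outward near the edge
                  v = v - (n @ v) * n / (n @ n)    # drop the outward component
          return v                                 # corners work: both edges clamp

      # unit-square workspace; the command pushes through the right edge
      edges = [(np.array([ 1.0,  0.0]), 1.0), (np.array([-1.0,  0.0]), 0.0),
               (np.array([ 0.0,  1.0]), 1.0), (np.array([ 0.0, -1.0]), 0.0)]
      print(shape_velocity(np.array([1.0, 0.4]), np.array([0.99, 0.5]), edges))
      # -> [0.  0.4]: the mechanism slides along the edge instead of crossing it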

  18. Image analysis of open-door laminoplasty for cervical spondylotic myelopathy: comparing the influence of cord morphology and spine alignment.

    PubMed

    Lin, Bon-Jour; Lin, Meng-Chi; Lin, Chin; Lee, Meei-Shyuan; Feng, Shao-Wei; Ju, Da-Tong; Ma, Hsin-I; Liu, Ming-Ying; Hueng, Dueng-Yuan

    2015-10-01

    Previous studies have identified the factors affecting the surgical outcome of cervical spondylotic myelopathy (CSM) following laminoplasty. Nonetheless, the effect of these factors remains controversial. It is unknown about the association between pre-operative cervical spinal cord morphology and post-operative imaging result following laminoplasty. The goal of this study is to analyze the impact of pre-operative cervical spinal cord morphology on post-operative imaging in patients with CSM. Twenty-six patients with CSM undergoing open-door laminoplasty were classified according to pre-operative cervical spine bony alignment and cervical spinal cord morphology, and the results were evaluated in terms of post-operative spinal cord posterior drift, and post-operative expansion of the antero-posterior dura diameter. By the result of study, pre-operative spinal cord morphology was an effective classification in predicting surgical outcome - patients with anterior convexity type, description of cervical spinal cord morphology, had more spinal cord posterior migration than those with neutral or posterior convexity type after open-door laminoplasty. Otherwise, the interesting finding was that cervical spine Cobb's angle had an impact on post-operative spinal cord posterior drift in patients with neutral or posterior convexity type spinal cord morphology - the degree of kyphosis was inversely proportional to the distance of post-operative spinal cord posterior drift, but not in the anterior convexity type. These findings supported that pre-operative cervical spinal cord morphology may be used as screening for patients undergoing laminoplasty. Patients having neutral or posterior convexity type spinal cord morphology accompanied with kyphotic deformity were not suitable candidates for laminoplasty. Copyright © 2015 Elsevier B.V. All rights reserved.

  19. Nonconvex Sparse Logistic Regression With Weakly Convex Regularization

    NASA Astrophysics Data System (ADS)

    Shen, Xinyue; Gu, Yuantao

    2018-06-01

    In this work we propose to fit a sparse logistic regression model by a weakly convex regularized nonconvex optimization problem. The idea is based on the finding that a weakly convex function, as an approximation of the ℓ0 pseudo-norm, is able to better induce sparsity than the commonly used ℓ1 norm. For a class of weakly convex sparsity-inducing functions, we prove the nonconvexity of the corresponding sparse logistic regression problem, and study its local optimality conditions and the choice of the regularization parameter to exclude trivial solutions. Despite the nonconvexity, a method based on proximal gradient descent is used to solve the general weakly convex sparse logistic regression, and its convergence behavior is studied theoretically. Then the general framework is applied to a specific weakly convex function, and a necessary and sufficient local optimality condition is provided. The solution method is instantiated in this case as an iterative firm-shrinkage algorithm, and its effectiveness is demonstrated in numerical experiments on both randomly generated and real datasets.
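
    For concreteness, the firm-shrinkage operator that such an algorithm iterates, in its usual two-threshold form (a sketch; the record's parametrization may differ):

      import numpy as np

      def firm_shrink(x, lam, mu):
          """Firm thresholding with 0 < lam < mu: zero below lam, identity
          above mu, linear interpolation in between (continuous throughout)."""
          ax = np.abs(x)
          return np.where(ax <= lam, 0.0,
                 np.where(ax >= mu, x,
                          np.sign(x) * mu * (ax - lam) / (mu - lam)))

      print(firm_shrink(np.array([-2.0, -0.5, 0.3, 1.2]), lam=0.4, mu=1.5))
      # -> [-2. -0.1364 0. 1.0909]: large entries pass through unshrunk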

  20. Characterizations of matrix and operator-valued Φ-entropies, and operator Efron–Stein inequalities

    PubMed Central

    Cheng, Hao-Chung; Hsieh, Min-Hsiu

    2016-01-01

    We derive new characterizations of the matrix Φ-entropy functionals introduced in Chen & Tropp (Chen, Tropp 2014 Electron. J. Prob. 19, 1–30. (doi:10.1214/ejp.v19-2964)). These characterizations help us to better understand the properties of matrix Φ-entropies, and are a powerful tool for establishing matrix concentration inequalities for random matrices. Then, we propose an operator-valued generalization of matrix Φ-entropy functionals, and prove the subadditivity under Löwner partial ordering. Our results demonstrate that the subadditivity of operator-valued Φ-entropies is equivalent to the convexity. As an application, we derive the operator Efron–Stein inequality. PMID:27118909

  1. Generalized Bregman distances and convergence rates for non-convex regularization methods

    NASA Astrophysics Data System (ADS)

    Grasmair, Markus

    2010-11-01

    We generalize the notion of Bregman distance using concepts from abstract convexity in order to derive convergence rates for Tikhonov regularization with non-convex regularization terms. In particular, we study the non-convex regularization of linear operator equations on Hilbert spaces, showing that the conditions required for the application of the convergence rates results are strongly related to the standard range conditions from the convex case. Moreover, we consider the setting of sparse regularization, where we show that a rate of order δ^(1/p) holds, if the regularization term has a slightly faster growth at zero than |t|^p.

  2. Nash points, Ky Fan inequality and equilibria of abstract economies in Max-Plus and B-convexity

    NASA Astrophysics Data System (ADS)

    Briec, Walter; Horvath, Charles

    2008-05-01

    B-convexity was introduced in [W. Briec, C. Horvath, B-convexity, Optimization 53 (2004) 103-127]. Separation and Hahn-Banach like theorems can be found in [G. Adilov, A.M. Rubinov, B-convex sets and functions, Numer. Funct. Anal. Optim. 27 (2006) 237-257] and [W. Briec, C.D. Horvath, A. Rubinov, Separation in B-convexity, Pacific J. Optim. 1 (2005) 13-30]. We show here that all the basic results related to fixed point theorems are available in B-convexity. Ky Fan inequality, existence of Nash equilibria and existence of equilibria for abstract economies are established in the framework of B-convexity. Monotone analysis, or analysis on Maslov semimodules [V.N. Kolokoltsov, V.P. Maslov, Idempotent Analysis and Its Applications, Math. Appl., vol. 401, Kluwer Academic, 1997; G.L. Litvinov, V.P. Maslov, G.B. Shpiz, Idempotent functional analysis: An algebraic approach, Math. Notes 69 (2001) 696-729; V.P. Maslov, S.N. Samborski (Eds.), Idempotent Analysis, Advances in Soviet Mathematics, Amer. Math. Soc., Providence, RI, 1992], is the natural framework for these results. From this point of view Max-Plus convexity and B-convexity are isomorphic Maslov semimodule structures over isomorphic semirings. Therefore all the results of this paper hold in the context of Max-Plus convexity.

  3. Convexity of level lines of Martin functions and applications

    NASA Astrophysics Data System (ADS)

    Gallagher, A.-K.; Lebl, J.; Ramachandran, K.

    2018-01-01

    Let Ω be an unbounded domain in ℝ × ℝᵈ. A positive harmonic function u on Ω that vanishes on the boundary of Ω is called a Martin function. In this note, we show that, when Ω is convex, the superlevel sets of a Martin function are also convex. As a consequence we obtain that if, in addition, Ω has certain symmetry with respect to the t-axis and ∂Ω is sufficiently flat, then the maximum of any Martin function along a slice Ω ∩ ({t} × ℝᵈ) is attained at (t, 0).

  4. Piecewise convexity of artificial neural networks.

    PubMed

    Rister, Blaine; Rubin, Daniel L

    2017-10-01

    Although artificial neural networks have shown great promise in applications including computer vision and speech recognition, there remains considerable practical and theoretical difficulty in optimizing their parameters. The seemingly unreasonable success of gradient descent methods in minimizing these non-convex functions remains poorly understood. In this work we offer some theoretical guarantees for networks with piecewise affine activation functions, which have in recent years become the norm. We prove three main results. First, that the network is piecewise convex as a function of the input data. Second, that the network, considered as a function of the parameters in a single layer, all others held constant, is again piecewise convex. Third, that the network as a function of all its parameters is piecewise multi-convex, a generalization of biconvexity. From here we characterize the local minima and stationary points of the training objective, showing that they minimize the objective on certain subsets of the parameter space. We then analyze the performance of two optimization algorithms on multi-convex problems: gradient descent, and a method which repeatedly solves a number of convex sub-problems. We prove necessary convergence conditions for the first algorithm and both necessary and sufficient conditions for the second, after introducing regularization to the objective. Finally, we remark on the remaining difficulty of the global optimization problem. Under the squared error objective, we show that by varying the training data, a single rectifier neuron admits local minima arbitrarily far apart, both in objective value and parameter space. Copyright © 2017 Elsevier Ltd. All rights reserved.

  5. Stochastic Dual Algorithm for Voltage Regulation in Distribution Networks with Discrete Loads: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dall-Anese, Emiliano; Zhou, Xinyang; Liu, Zhiyuan

    This paper considers distribution networks with distributed energy resources and discrete-rate loads, and designs an incentive-based algorithm that allows the network operator and the customers to pursue given operational and economic objectives, while concurrently ensuring that voltages are within prescribed limits. Four major challenges include: (1) the non-convexity from discrete decision variables, (2) the non-convexity due to a Stackelberg game structure, (3) unavailable private information from customers, and (4) different update frequencies of two types of devices. In this paper, we first make a convex relaxation for the discrete variables, then reformulate the non-convex structure into a convex optimization problem together with a pricing/reward signal design, and propose a distributed stochastic dual algorithm for solving the reformulated problem while restoring feasible power rates for the discrete devices. By doing so, we are able to statistically achieve the solution of the reformulated problem without exposure of any private information from customers. Stability of the proposed schemes is analytically established and numerically corroborated.

  6. Bypassing the Limits of ℓ1 Regularization: Convex Sparse Signal Processing Using Non-Convex Regularization

    NASA Astrophysics Data System (ADS)

    Parekh, Ankit

    Sparsity has become the basis of some important signal processing methods over the last ten years. Many signal processing problems (e.g., denoising, deconvolution, non-linear component analysis) can be expressed as inverse problems. Sparsity is invoked through the formulation of an inverse problem with suitably designed regularization terms. The regularization terms alone encode sparsity into the problem formulation. Often, the ℓ1 norm is used to induce sparsity, so much so that ℓ1 regularization is considered to be `modern least-squares'. The use of ℓ1 norm, as a sparsity-inducing regularizer, leads to a convex optimization problem, which has several benefits: the absence of extraneous local minima, well developed theory of globally convergent algorithms, even for large-scale problems. Convex regularization via the ℓ1 norm, however, tends to under-estimate the non-zero values of sparse signals. In order to estimate the non-zero values more accurately, non-convex regularization is often favored over convex regularization. However, non-convex regularization generally leads to non-convex optimization, which suffers from numerous issues: convergence may be guaranteed to only a stationary point, problem specific parameters may be difficult to set, and the solution is sensitive to the initialization of the algorithm. The first part of this thesis is aimed toward combining the benefits of non-convex regularization and convex optimization to estimate sparse signals more effectively. To this end, we propose to use parameterized non-convex regularizers with designated non-convexity and provide a range for the non-convex parameter so as to ensure that the objective function is strictly convex. By ensuring convexity of the objective function (sum of data-fidelity and non-convex regularizer), we can make use of a wide variety of convex optimization algorithms to obtain the unique global minimum reliably. The second part of this thesis proposes a non-linear signal decomposition technique for an important biomedical signal processing problem: the detection of sleep spindles and K-complexes in human sleep electroencephalography (EEG). We propose a non-linear model for the EEG consisting of three components: (1) a transient (sparse piecewise constant) component, (2) a low-frequency component, and (3) an oscillatory component. The oscillatory component admits a sparse time-frequency representation. Using a convex objective function, we propose a fast non-linear optimization algorithm to estimate the three components in the proposed signal model. The low-frequency and oscillatory components are then used to estimate the K-complexes and sleep spindles respectively. The proposed detection method is shown to outperform several state-of-the-art automated sleep spindles detection methods.

  7. On new fractional Hermite-Hadamard type inequalities for n-time differentiable quasi-convex functions and P-functions

    NASA Astrophysics Data System (ADS)

    Set, Erhan; Özdemir, M. Emin; Alan, E. Aykan

    2017-04-01

    In this article, by using Hölder's inequality and the power mean inequality, the authors establish several inequalities of Hermite-Hadamard type for n-time differentiable quasi-convex functions and P-functions involving Riemann-Liouville fractional integrals.
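
    For orientation, the classical Hermite-Hadamard inequality that results of this type refine: for a convex function f on [a, b],

      \[
      f\!\left( \frac{a+b}{2} \right) \;\le\; \frac{1}{b-a} \int_{a}^{b} f(x)\, dx \;\le\; \frac{f(a) + f(b)}{2}.
      \]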

  8. A General Iterative Shrinkage and Thresholding Algorithm for Non-convex Regularized Optimization Problems.

    PubMed

    Gong, Pinghua; Zhang, Changshui; Lu, Zhaosong; Huang, Jianhua Z; Ye, Jieping

    2013-01-01

    Non-convex sparsity-inducing penalties have recently received considerable attention in sparse learning. Recent theoretical investigations have demonstrated their superiority over the convex counterparts in several sparse learning settings. However, solving the non-convex optimization problems associated with non-convex penalties remains a big challenge. A commonly used approach is the Multi-Stage (MS) convex relaxation (or DC programming), which relaxes the original non-convex problem to a sequence of convex problems. This approach is usually not very practical for large-scale problems because its computational cost is a multiple of solving a single convex problem. In this paper, we propose a General Iterative Shrinkage and Thresholding (GIST) algorithm to solve the non-convex optimization problem for a large class of non-convex penalties. The GIST algorithm iteratively solves a proximal operator problem, which in turn has a closed-form solution for many commonly used penalties. At each outer iteration of the algorithm, we use a line search initialized by the Barzilai-Borwein (BB) rule that allows finding an appropriate step size quickly. The paper also presents a detailed convergence analysis of the GIST algorithm. The efficiency of the proposed algorithm is demonstrated by extensive experiments on large-scale data sets.
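
    A compact sketch of a GIST-style loop: a proximal step whose step size is initialized by the Barzilai-Borwein rule and corrected by a monotone line search. The soft-threshold below is the proximal map of the convex ℓ1 penalty and stands in for the closed-form proximal maps of the non-convex penalties treated in the record; the loss, data, and parameters are all illustrative.

      import numpy as np

      def gist(A, y, lam, iters=50, eta=2.0, sigma=1e-4):
          """GIST-style proximal loop with BB step initialization (a sketch)."""
          soft = lambda z, t: np.sign(z) * np.maximum(np.abs(z) - t, 0.0)
          F = lambda x: 0.5 * np.sum((A @ x - y) ** 2) + lam * np.sum(np.abs(x))
          grad = lambda x: A.T @ (A @ x - y)
          x = np.zeros(A.shape[1]); g = grad(x); t = 1.0
          for _ in range(iters):
              while True:                      # monotone backtracking line search
                  x_new = soft(x - g / t, lam / t)
                  if F(x_new) <= F(x) - 0.5 * sigma * t * np.sum((x_new - x) ** 2):
                      break
                  t *= eta
              g_new = grad(x_new)
              s, r = x_new - x, g_new - g
              x, g = x_new, g_new
              t = max(abs(s @ r) / max(s @ s, 1e-12), 1e-3)   # BB rule for next step
          return x

      rng = np.random.default_rng(0)
      A = rng.standard_normal((40, 100))
      x_true = np.zeros(100); x_true[:3] = [2.0, -1.5, 1.0]
      # the large entries of the estimate should sit on the true support {0, 1, 2}
      print(np.flatnonzero(np.abs(gist(A, A @ x_true, lam=0.5)) > 0.1))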

  9. A 'range test' for determining scatterers with unknown physical properties

    NASA Astrophysics Data System (ADS)

    Potthast, Roland; Sylvester, John; Kusiak, Steven

    2003-06-01

    We describe a new scheme for determining the convex scattering support of an unknown scatterer when the physical properties of the scatterer are not known. The convex scattering support is a subset of the scatterer and provides information about its location and estimates for its shape. For convex polygonal scatterers the scattering support coincides with the scatterer and we obtain full shape reconstructions. The method is formulated for the reconstruction of scatterers from the far field pattern for one or a few incident waves. The method is non-iterative in nature and belongs to the type of recently derived generalized sampling schemes such as the 'no response test' of Luke-Potthast. The range test operates by testing whether it is possible to analytically continue a far field to the exterior of any test domain Ω_test. By intersecting the convex hulls of various test domains we can produce a minimal convex set, the convex scattering support, which must be contained in the convex hull of the support of any scatterer that produces that far field. The convex scattering support is calculated by testing the range of special integral operators for a sampling set of test domains. The numerical results can be used as an approximation for the support of the unknown scatterer. We prove convergence and regularity of the scheme and show numerical examples for sound-soft, sound-hard and medium scatterers. We can apply the range test to non-convex scatterers as well. We can conclude that an Ω_test which passes the range test has a non-empty intersection with the infinity-support (the complement of the unbounded component of the complement of the support) of the true scatterer, but cannot find a minimal set which must be contained therein.

  10. Convex Clustering: An Attractive Alternative to Hierarchical Clustering

    PubMed Central

    Chen, Gary K.; Chi, Eric C.; Ranola, John Michael O.; Lange, Kenneth

    2015-01-01

    The primary goal in cluster analysis is to discover natural groupings of objects. The field of cluster analysis is crowded with diverse methods that make special assumptions about data and address different scientific aims. Despite its shortcomings in accuracy, hierarchical clustering is the dominant clustering method in bioinformatics. Biologists find the trees constructed by hierarchical clustering visually appealing and in tune with their evolutionary perspective. Hierarchical clustering operates on multiple scales simultaneously. This is essential, for instance, in transcriptome data, where one may be interested in making qualitative inferences about how lower-order relationships like gene modules lead to higher-order relationships like pathways or biological processes. The recently developed method of convex clustering preserves the visual appeal of hierarchical clustering while ameliorating its propensity to make false inferences in the presence of outliers and noise. The solution paths generated by convex clustering reveal relationships between clusters that are hidden by static methods such as k-means clustering. The current paper derives and tests a novel proximal distance algorithm for minimizing the objective function of convex clustering. The algorithm separates parameters, accommodates missing data, and supports prior information on relationships. Our program CONVEXCLUSTER incorporating the algorithm is implemented on ATI and nVidia graphics processing units (GPUs) for maximal speed. Several biological examples illustrate the strengths of convex clustering and the ability of the proximal distance algorithm to handle high-dimensional problems. CONVEXCLUSTER can be freely downloaded from the UCLA Human Genetics web site at http://www.genetics.ucla.edu/software/ PMID:25965340
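
    For context, the convex clustering objective being minimized, in its usual form (the weights w_ij and the norm vary across the literature):

      \[
      \min_{u_1, \dots, u_n} \; \frac{1}{2} \sum_{i=1}^{n} \lVert x_i - u_i \rVert^2
      \;+\; \gamma \sum_{i < j} w_{ij}\, \lVert u_i - u_j \rVert,
      \]

    where points i and j are assigned to the same cluster once their centroids u_i and u_j fuse, and sweeping γ traces out the solution path mentioned above.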

  11. Convex clustering: an attractive alternative to hierarchical clustering.

    PubMed

    Chen, Gary K; Chi, Eric C; Ranola, John Michael O; Lange, Kenneth

    2015-05-01

    The primary goal in cluster analysis is to discover natural groupings of objects. The field of cluster analysis is crowded with diverse methods that make special assumptions about data and address different scientific aims. Despite its shortcomings in accuracy, hierarchical clustering is the dominant clustering method in bioinformatics. Biologists find the trees constructed by hierarchical clustering visually appealing and in tune with their evolutionary perspective. Hierarchical clustering operates on multiple scales simultaneously. This is essential, for instance, in transcriptome data, where one may be interested in making qualitative inferences about how lower-order relationships like gene modules lead to higher-order relationships like pathways or biological processes. The recently developed method of convex clustering preserves the visual appeal of hierarchical clustering while ameliorating its propensity to make false inferences in the presence of outliers and noise. The solution paths generated by convex clustering reveal relationships between clusters that are hidden by static methods such as k-means clustering. The current paper derives and tests a novel proximal distance algorithm for minimizing the objective function of convex clustering. The algorithm separates parameters, accommodates missing data, and supports prior information on relationships. Our program CONVEXCLUSTER incorporating the algorithm is implemented on ATI and nVidia graphics processing units (GPUs) for maximal speed. Several biological examples illustrate the strengths of convex clustering and the ability of the proximal distance algorithm to handle high-dimensional problems. CONVEXCLUSTER can be freely downloaded from the UCLA Human Genetics web site at http://www.genetics.ucla.edu/software/.

  12. Preconditioned Alternating Projection Algorithms for Maximum a Posteriori ECT Reconstruction

    PubMed Central

    Krol, Andrzej; Li, Si; Shen, Lixin; Xu, Yuesheng

    2012-01-01

    We propose a preconditioned alternating projection algorithm (PAPA) for solving the maximum a posteriori (MAP) emission computed tomography (ECT) reconstruction problem. Specifically, we formulate the reconstruction problem as a constrained convex optimization problem with the total variation (TV) regularization. We then characterize the solution of the constrained convex optimization problem and show that it satisfies a system of fixed-point equations defined in terms of two proximity operators raised from the convex functions that define the TV-norm and the constraint involved in the problem. The characterization (of the solution) via the proximity operators that define two projection operators naturally leads to an alternating projection algorithm for finding the solution. For efficient numerical computation, we introduce to the alternating projection algorithm a preconditioning matrix (the EM-preconditioner) for the dense system matrix involved in the optimization problem. We prove theoretically convergence of the preconditioned alternating projection algorithm. In numerical experiments, performance of our algorithms, with an appropriately selected preconditioning matrix, is compared with performance of the conventional MAP expectation-maximization (MAP-EM) algorithm with TV regularizer (EM-TV) and that of the recently developed nested EM-TV algorithm for ECT reconstruction. Based on the numerical experiments performed in this work, we observe that the alternating projection algorithm with the EM-preconditioner outperforms significantly the EM-TV in all aspects including the convergence speed, the noise in the reconstructed images and the image quality. It also outperforms the nested EM-TV in the convergence speed while providing comparable image quality. PMID:23271835

  13. Preconditioned alternating projection algorithms for maximum a posteriori ECT reconstruction

    NASA Astrophysics Data System (ADS)

    Krol, Andrzej; Li, Si; Shen, Lixin; Xu, Yuesheng

    2012-11-01

    We propose a preconditioned alternating projection algorithm (PAPA) for solving the maximum a posteriori (MAP) emission computed tomography (ECT) reconstruction problem. Specifically, we formulate the reconstruction problem as a constrained convex optimization problem with the total variation (TV) regularization. We then characterize the solution of the constrained convex optimization problem and show that it satisfies a system of fixed-point equations defined in terms of two proximity operators raised from the convex functions that define the TV-norm and the constraint involved in the problem. The characterization (of the solution) via the proximity operators that define two projection operators naturally leads to an alternating projection algorithm for finding the solution. For efficient numerical computation, we introduce to the alternating projection algorithm a preconditioning matrix (the EM-preconditioner) for the dense system matrix involved in the optimization problem. We prove theoretically convergence of the PAPA. In numerical experiments, performance of our algorithms, with an appropriately selected preconditioning matrix, is compared with performance of the conventional MAP expectation-maximization (MAP-EM) algorithm with TV regularizer (EM-TV) and that of the recently developed nested EM-TV algorithm for ECT reconstruction. Based on the numerical experiments performed in this work, we observe that the alternating projection algorithm with the EM-preconditioner outperforms significantly the EM-TV in all aspects including the convergence speed, the noise in the reconstructed images and the image quality. It also outperforms the nested EM-TV in the convergence speed while providing comparable image quality.
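
    Both versions of this record rest on the proximity operator of a convex function f, defined by

      \[
      \operatorname{prox}_{f}(x) \;=\; \operatorname*{arg\,min}_{u} \; f(u) + \tfrac{1}{2}\, \lVert u - x \rVert^{2},
      \]

    which reduces to the metric projection onto a convex set C when f is the indicator function of C; this is what turns the two proximity operators of the fixed-point characterization into projection operators.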

  14. Convex Graph Invariants

    DTIC Science & Technology

    2010-12-02

    Chandrasekaran, Venkat; Parrilo, Pablo A.; Willsky, Alan S.

    In this paper we study convex graph invariants, which are graph invariants that are convex functions of the adjacency matrix of a graph. Some examples

  15. Nested Conjugate Gradient Algorithm with Nested Preconditioning for Non-linear Image Restoration.

    PubMed

    Skariah, Deepak G; Arigovindan, Muthuvel

    2017-06-19

    We develop a novel optimization algorithm, which we call the Nested Non-Linear Conjugate Gradient algorithm (NNCG), for image restoration based on quadratic data fitting and smooth non-quadratic regularization. The algorithm is constructed as a nesting of two conjugate gradient (CG) iterations. The outer iteration is constructed as a preconditioned non-linear CG algorithm; the preconditioning is performed by the inner CG iteration, which is linear. The inner CG iteration, which performs preconditioning for the outer CG iteration, is itself accelerated by another FFT-based non-iterative preconditioner. We prove that the method converges to a stationary point for both convex and non-convex regularization functionals. We demonstrate experimentally that the proposed method outperforms the well-known majorization-minimization method used for convex regularization, and a non-convex inertial-proximal method for non-convex regularization functionals.

  16. Resource allocation in shared spectrum access communications for operators with diverse service requirements

    NASA Astrophysics Data System (ADS)

    Kibria, Mirza Golam; Villardi, Gabriel Porto; Ishizu, Kentaro; Kojima, Fumihide; Yano, Hiroyuki

    2016-12-01

    In this paper, we study inter-operator spectrum sharing and intra-operator resource allocation in shared spectrum access communication systems and propose efficient dynamic solutions to address both inter-operator and intra-operator resource allocation optimization problems. For inter-operator spectrum sharing, we present two competent approaches, namely subcarrier gain-based sharing and fragmentation-based sharing, which carry out fair and flexible allocation of the available shareable spectrum among the operators subject to certain well-defined sharing rules, traffic demands, and channel propagation characteristics. The subcarrier gain-based spectrum sharing scheme has been found to be more efficient in terms of achieved throughput. However, the fragmentation-based sharing is more attractive in terms of computational complexity. For intra-operator resource allocation, we consider a resource allocation problem with users' dissimilar service requirements, where the operator simultaneously supports users with delay-constrained and non-delay-constrained service requirements. This optimization problem is a mixed-integer non-linear programming problem and non-convex, which is computationally very expensive, and the complexity grows exponentially with the number of integer variables. We propose a less complex and efficient suboptimal solution based on exact linearization, linear approximation, and convexification techniques for the non-linear and/or non-convex objective functions and constraints. Extensive simulation performance analysis has been carried out to validate the efficiency of the proposed solution.

  17. Convexity and concavity constants in Lorentz and Marcinkiewicz spaces

    NASA Astrophysics Data System (ADS)

    Kaminska, Anna; Parrish, Anca M.

    2008-07-01

    We provide here the formulas for the q-convexity and q-concavity constants for function and sequence Lorentz spaces associated to either decreasing or increasing weights. This also yields the formula for the q-convexity constants in function and sequence Marcinkiewicz spaces. In this paper we extend and enhance the results from [G.J.O. Jameson, The q-concavity constants of Lorentz sequence spaces and related inequalities, Math. Z. 227 (1998) 129-142] and [A. Kaminska, A.M. Parrish, The q-concavity and q-convexity constants in Lorentz spaces, in: Banach Spaces and Their Applications in Analysis, Conference in Honor of Nigel Kalton, May 2006, Walter de Gruyter, Berlin, 2007, pp. 357-373].

  18. Convexity of quantum χ2-divergence.

    PubMed

    Hansen, Frank

    2011-06-21

    The general quantum χ²-divergence has recently been introduced by Temme et al. [Temme K, Kastoryano M, Ruskai M, Wolf M, Verstraete F (2010) J Math Phys 51:122201] and applied to quantum channels (quantum Markov processes). The quantum χ²-divergence is not unique, as opposed to the classical χ²-divergence, but depends on the choice of quantum statistics. It was noticed that the elements in a particular one-parameter family of quantum χ²-divergences are convex functions in the density matrices (ρ,σ), thus mirroring the convexity of the classical χ²(p,q)-divergence in probability distributions (p,q). We prove that any quantum χ²-divergence is a convex function in its two arguments.

  19. Convexity Conditions and the Legendre-Fenchel Transform for the Product of Finitely Many Positive Definite Quadratic Forms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhao Yunbin, E-mail: zhaoyy@maths.bham.ac.u

    2010-12-15

    While the product of finitely many convex functions has been investigated in the field of global optimization, some fundamental issues such as the convexity condition and the Legendre-Fenchel transform for the product function remain unresolved. Focusing on quadratic forms, this paper is aimed at addressing the question: when is the product of finitely many positive definite quadratic forms convex, and what is the Legendre-Fenchel transform for it? First, we show that the convexity of the product is determined intrinsically by the condition numbers of so-called 'scaled matrices' associated with the quadratic forms involved. The main result claims that if the condition numbers of these scaled matrices are bounded above by an explicit constant (which depends only on the number of quadratic forms involved), then the product function is convex. Second, we prove that the Legendre-Fenchel transform for the product of positive definite quadratic forms can be expressed, and the computation of the transform amounts to finding the solution to a system of equations (or equally, finding a Brouwer fixed point of a mapping) with a special structure. Thus, a broader question than the open 'Question 11' in Hiriart-Urruty (SIAM Rev. 49, 225-273, 2007) is addressed in this paper.

  20. Convex composite wavelet frame and total variation-based image deblurring using nonconvex penalty functions

    NASA Astrophysics Data System (ADS)

    Shen, Zhengwei; Cheng, Lishuang

    2017-09-01

    Total variation (TV)-based image deblurring methods can produce staircase artifacts in the homogeneous regions of the latent images recovered from the degraded images, while wavelet/frame-based image deblurring methods lead to spurious noise spikes and pseudo-Gibbs artifacts in the vicinity of discontinuities of the latent images. To suppress these artifacts efficiently, we propose a nonconvex composite wavelet/frame and TV-based image deblurring model. In this model, the wavelet/frame and the TV-based methods may complement each other, which is verified by theoretical analysis and experimental results. To further improve the quality of the latent images, nonconvex penalty functions are used as the regularization terms of the model, which induce stronger sparse solutions and more accurately estimate the relatively large gradients or wavelet/frame coefficients of the latent images. In addition, by choosing a suitable parameter for the nonconvex penalty function, each subproblem that the alternating direction method of multipliers algorithm splits from the proposed model can be guaranteed to be a convex optimization problem; hence, each subproblem can converge to a global optimum. The mean doubly augmented Lagrangian and the isotropic split Bregman algorithms are used to solve these convex subproblems, where the designed proximal operator is used to reduce the computational complexity of the algorithms. Extensive numerical experiments indicate that the proposed model and algorithms are comparable to other state-of-the-art models and methods.

  1. Optimization with Fuzzy Data via Evolutionary Algorithms

    NASA Astrophysics Data System (ADS)

    Kosiński, Witold

    2010-09-01

    Ordered fuzzy numbers (OFN), which make it possible to deal with fuzzy inputs quantitatively, exactly in the same way as with real numbers, have recently been defined by the author and his two coworkers. The set of OFN forms a normed space and is a partially ordered ring. The case when the numbers are represented in the form of step functions, with finite resolution, simplifies all operations and the representation of defuzzification functionals. A general optimization problem with fuzzy data is formulated. Its fitness function attains fuzzy values. Since the adjoint space to the space of OFN is finite dimensional, a convex combination of all linear defuzzification functionals may be used to introduce a total order and a real-valued fitness function. Genetic operations on individuals representing fuzzy data are defined.
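
    A minimal sketch of the step-function representation described above, with illustrative names and layout rather than the authors' implementation: at a fixed resolution an OFN becomes a pair of vectors, ring operations act componentwise, and a linear defuzzification functional is a convex combination of the components.

      import numpy as np

      class StepOFN:
          """Ordered fuzzy number as two step functions (up and down branches)
          sampled at a fixed resolution; ring operations act componentwise."""
          def __init__(self, up, down):
              self.up = np.asarray(up, float)
              self.down = np.asarray(down, float)
          def __add__(self, other):
              return StepOFN(self.up + other.up, self.down + other.down)
          def __mul__(self, other):
              return StepOFN(self.up * other.up, self.down * other.down)
          def defuzzify(self, w):
              """Linear defuzzification: convex combination of all components."""
              return float(w @ np.concatenate([self.up, self.down]))

      a = StepOFN([1.0, 1.5], [2.5, 2.0])
      b = StepOFN([0.5, 0.5], [0.5, 0.5])
      w = np.full(4, 0.25)                  # uniform convex weights, sum to 1
      print((a + b).defuzzify(w))           # 2.25: a real-valued fitness value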

  2. The effects of a convex rear-view mirror on ocular accommodative responses.

    PubMed

    Nagata, Tatsuo; Iwasaki, Tsuneto; Kondo, Hiroyuki; Tawara, Akihiko

    2013-11-01

    Convex mirrors are universally used as rear-view mirrors in automobiles. However, the ocular accommodative responses during the use of these mirrors have not yet been examined. This study investigated the effects of a convex mirror on the ocular accommodative system. Seven young adults with normal visual functions were instructed to binocularly watch an object in a convex or plane mirror. The accommodative responses were measured with an infrared optometer. The average accommodative response of all subjects while viewing the object in the convex mirror was significantly nearer than with the plane mirror, although all subjects perceived the position of the object in the convex mirror as being farther away. Moreover, the fluctuations of accommodation were significantly larger for the convex mirror. The convex mirror caused a 'false recognition of distance', which induced the large accommodative fluctuations and blurred vision. Manufacturers should consider the ocular accommodative responses as a new indicator for increasing automotive safety. Copyright © 2013 Elsevier Ltd and The Ergonomics Society. All rights reserved.

  3. Relative entropy of steering: on its definition and properties

    NASA Astrophysics Data System (ADS)

    Kaur, Eneet; Wilde, Mark M.

    2017-11-01

    In Gallego and Aolita (2015 Phys. Rev. X 5 041008), the authors proposed a definition for the relative entropy of steering and showed that the resulting quantity is a convex steering monotone. Here we advocate for a different definition for relative entropy of steering, based on well grounded concerns coming from quantum Shannon theory. We prove that this modified relative entropy of steering is a convex steering monotone. Furthermore, we establish that it is uniformly continuous and faithful, in both cases giving quantitative bounds that should be useful in applications. We also consider a restricted relative entropy of steering which is relevant for the case in which the free operations in the resource theory of steering have a more restricted form (the restricted operations could be more relevant in practical scenarios). The restricted relative entropy of steering is convex, monotone with respect to these restricted operations, uniformly continuous, and faithful.

  4. Inequalities of extended beta and extended hypergeometric functions.

    PubMed

    Mondal, Saiful R

    2017-01-01

    We study the log-convexity of the extended beta functions. As a consequence, we establish Turán-type inequalities. The monotonicity, log-convexity, log-concavity of extended hypergeometric functions are deduced by using the inequalities on extended beta functions. The particular cases of those results also give the Turán-type inequalities for extended confluent and extended Gaussian hypergeometric functions. Some reverses of Turán-type inequalities are also derived.
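
    As a concrete instance of the Turán-type inequalities obtained from log-convexity: since a ↦ B(a, b) is log-convex in a (for the classical beta function this follows from Hölder's inequality; the record extends such properties to the extended beta functions), one gets

      \[
      B(a+1, b)^{2} \;\le\; B(a, b)\, B(a+2, b).
      \]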

  5. Algorithms for Mathematical Programming with Emphasis on Bi-level Models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Goldfarb, Donald; Iyengar, Garud

    2014-05-22

    The research supported by this grant was focused primarily on first-order methods for solving large scale and structured convex optimization problems and convex relaxations of nonconvex problems. These include optimal gradient methods, operator and variable splitting methods, alternating direction augmented Lagrangian methods, and block coordinate descent methods.

  6. Distributed Nash Equilibrium Seeking for Generalized Convex Games with Shared Constraints

    NASA Astrophysics Data System (ADS)

    Sun, Chao; Hu, Guoqiang

    2018-05-01

    In this paper, we deal with the problem of finding a Nash equilibrium for a generalized convex game. Each player is associated with a convex cost function and multiple shared constraints. Supposing that each player can exchange information with its neighbors via a connected undirected graph, the objective of this paper is to design a Nash equilibrium seeking law such that each agent minimizes its objective function in a distributed way. Consensus and singular perturbation theories are used to prove the stability of the system. A numerical example is given to show the effectiveness of the proposed algorithms.

  7. Solution of monotone complementarity and general convex programming problems using a modified potential reduction interior point method

    DOE PAGES

    Huang, Kuo-Ling; Mehrotra, Sanjay

    2016-11-08

    We present a homogeneous algorithm equipped with a modified potential function for the monotone complementarity problem. We show that this potential function is reduced by at least a constant amount if a scaled Lipschitz condition (SLC) is satisfied. A practical algorithm based on this potential function is implemented in a software package named iOptimize. The implementation in iOptimize maintains global linear and polynomial time convergence properties, while achieving practical performance. It either successfully solves the problem, or concludes that the SLC is not satisfied. When compared with the mature software package MOSEK (barrier solver version 6.0.0.106), iOptimize solves convex quadratic programming problems, convex quadratically constrained quadratic programming problems, and general convex programming problems in fewer iterations. Moreover, several problems for which MOSEK fails are solved to optimality. In addition, we also find that iOptimize detects infeasibility more reliably than the general nonlinear solvers Ipopt (version 3.9.2) and Knitro (version 8.0).

  8. Efficient Compressed Sensing Based MRI Reconstruction using Nonconvex Total Variation Penalties

    NASA Astrophysics Data System (ADS)

    Lazzaro, D.; Loli Piccolomini, E.; Zama, F.

    2016-10-01

    This work addresses the problem of magnetic resonance image reconstruction from highly sub-sampled measurements in the Fourier domain. It is modeled as a constrained minimization problem, where the objective function is a non-convex function of the gradient of the unknown image and the constraints are given by the data fidelity term. We propose an algorithm, Fast Non Convex Reweighted (FNCR), in which the constrained problem is solved by a reweighting scheme, as a strategy to overcome the non-convexity of the objective function, with an adaptive adjustment of the penalization parameter. The resulting iterative algorithm is fast, and we prove that it converges to a local minimum because the constrained problem satisfies the Kurdyka-Łojasiewicz property. Moreover, the adaptation of the non-convex l0 approximation and penalization parameters by means of a continuation technique allows us to obtain good quality solutions, avoiding getting stuck in unwanted local minima. Numerical experiments performed on sub-sampled MRI data show the efficiency of the algorithm and the accuracy of the solution.
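
    A minimal one-dimensional sketch of the reweighting idea behind such schemes (not the authors' FNCR algorithm; the signal, weights, and smoothing constants below are illustrative): each outer pass freezes the weights computed from the previous iterate and solves an essentially convex weighted problem.

```python
import numpy as np
from scipy.optimize import minimize

# Sketch of iterative reweighting for a nonconvex TV-like penalty in 1-D:
# each outer pass freezes weights w_i = 1/(|grad_i| + eps) computed from
# the previous iterate and solves a smooth weighted problem, so that the
# composite penalty approximates an l0-type measure of the gradient.

rng = np.random.default_rng(0)
truth = np.repeat([0.0, 1.0, 0.3], 50)              # piecewise-constant signal
y = truth + 0.05 * rng.standard_normal(truth.size)  # noisy data

lam, eps, delta = 0.05, 1e-3, 1e-6
x = y.copy()
for _ in range(5):                                  # outer reweighting loop
    w = 1.0 / (np.abs(np.diff(x)) + eps)            # weights from last iterate

    def obj(x, w=w):
        fid = 0.5 * np.sum((x - y) ** 2)                   # data fidelity
        tv = np.sum(w * np.sqrt(np.diff(x) ** 2 + delta))  # weighted TV
        return fid + lam * tv

    x = minimize(obj, x, method="L-BFGS-B").x       # inner smooth solve
```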

  9. A free boundary approach to the Rosensweig instability of ferrofluids

    NASA Astrophysics Data System (ADS)

    Parini, Enea; Stylianou, Athanasios

    2018-04-01

    We establish the existence of saddle points for a free boundary problem describing the two-dimensional free surface of a ferrofluid undergoing normal field instability. The starting point is the ferrohydrostatic equations for the magnetic potentials in the ferrofluid and air, and the function describing their interface. These constitute the strong form for the Euler-Lagrange equations of a convex-concave functional, which we extend to include interfaces that are not necessarily graphs of functions. Saddle points are then found by iterating the direct method of the calculus of variations and applying classical results of convex analysis. For the existence part, we assume a general nonlinear magnetization law; for a linear law, we also show, via convex duality, that the saddle point is a constrained minimizer of the relevant energy functional.

  10. Stereotype locally convex spaces

    NASA Astrophysics Data System (ADS)

    Akbarov, S. S.

    2000-08-01

    We give complete proofs of some previously announced results in the theory of stereotype (that is, reflexive in the sense of Pontryagin duality) locally convex spaces. These spaces have important applications in topological algebra and functional analysis.

  11. Another convex combination of product states for the separable Werner state

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Azuma, Hiroo; Ban, Masashi; CREST, Japan Science and Technology Agency, 1-1-9 Yaesu, Chuo-ku, Tokyo 103-0028

    2006-03-15

    In this paper, we write down the separable Werner state in a two-qubit system explicitly as a convex combination of product states, which is different from the convex combination obtained by Wootters' method. The Werner state in a two-qubit system has a single real parameter and varies from inseparable to separable according to the value of its parameter. We derive a hidden variable model that is induced by our decomposed form for the separable Werner state. From our explicit form of the convex combination of product states, we understand the following: The critical point of the parameter for separability of the Werner state comes from positivity of local density operators of the qubits.

  12. Online Pairwise Learning Algorithms.

    PubMed

    Ying, Yiming; Zhou, Ding-Xuan

    2016-04-01

    Pairwise learning usually refers to a learning task that involves a loss function depending on pairs of examples, among which the most notable ones are bipartite ranking, metric learning, and AUC maximization. In this letter we study an online algorithm for pairwise learning with a least-square loss function in an unconstrained setting of a reproducing kernel Hilbert space (RKHS) that we refer to as the Online Pairwise lEaRning Algorithm (OPERA). In contrast to existing works (Kar, Sriperumbudur, Jain, & Karnick, 2013; Wang, Khardon, Pechyony, & Jones, 2012), which require that the iterates are restricted to a bounded domain or that the loss function is strongly convex, OPERA is associated with a non-strongly convex objective function and learns the target function in an unconstrained RKHS. Specifically, we establish a general theorem that guarantees the almost sure convergence of the last iterate of OPERA without any assumptions on the underlying distribution. Explicit convergence rates are derived under the condition of polynomially decaying step sizes. We also establish an interesting property for a family of widely used kernels in the setting of pairwise learning and illustrate the convergence results using such kernels. Our methodology mainly depends on the characterization of RKHSs through their associated integral operators and on probability inequalities for random variables with values in a Hilbert space.
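
    A schematic version of such an online pairwise update, under simplifying assumptions (one sampled earlier example per step, a Gaussian kernel, and a hypothetical step-size schedule); this is a sketch of the general recursion, not the exact OPERA iterate:

```python
import numpy as np

# Schematic online update for pairwise least squares in an RKHS: at step t,
# pair the new example with one sampled earlier example and take a
# stochastic gradient step on (f(x_t) - f(x_j) - (y_t - y_j))^2, which adds
# two kernel sections to the current expansion of f.

def gauss_kernel(a, b, s=1.0):
    return np.exp(-np.sum((a - b) ** 2) / (2 * s * s))

rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(200, 1))
Y = X[:, 0] + 0.1 * rng.standard_normal(200)

pts, coef = [], []                # f(.) = sum_i coef[i] * K(pts[i], .)
f = lambda x: sum(c * gauss_kernel(p, x) for p, c in zip(pts, coef))

for t in range(1, len(X)):
    j = rng.integers(t)           # one earlier example, sampled uniformly
    gamma = 0.5 / np.sqrt(t)      # polynomially decaying step size
    resid = (f(X[t]) - f(X[j])) - (Y[t] - Y[j])
    pts.extend([X[t], X[j]])      # gradient step adds two kernel sections
    coef.extend([-gamma * resid, +gamma * resid])
```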

  13. Efficient globally optimal segmentation of cells in fluorescence microscopy images using level sets and convex energy functionals.

    PubMed

    Bergeest, Jan-Philip; Rohr, Karl

    2012-10-01

    In high-throughput applications, accurate and efficient segmentation of cells in fluorescence microscopy images is of central importance for the quantification of protein expression and the understanding of cell function. We propose an approach for segmenting cell nuclei which is based on active contours using level sets and convex energy functionals. Compared to previous work, our approach determines the global solution. Thus, the approach does not suffer from local minima and the segmentation result does not depend on the initialization. We consider three different well-known energy functionals for active contour-based segmentation and introduce convex formulations of these functionals. We also suggest a numeric approach for efficiently computing the solution. The performance of our approach has been evaluated using fluorescence microscopy images from different experiments comprising different cell types. We have also performed a quantitative comparison with previous segmentation approaches. Copyright © 2012 Elsevier B.V. All rights reserved.

  14. L2CXCV: A Fortran 77 package for least squares convex/concave data smoothing

    NASA Astrophysics Data System (ADS)

    Demetriou, I. C.

    2006-04-01

    Fortran 77 software is given for least squares smoothing to data values contaminated by random errors, subject to one sign change in the second divided differences of the smoothed values, where the location of the sign change is also an unknown of the optimization problem. A highly useful description of the constraints is that they follow from the assumption of initially increasing and subsequently decreasing rates of change, or vice versa, of the process considered. The underlying algorithm partitions the data into two disjoint sets of adjacent data and calculates the required fit by solving a strictly convex quadratic programming problem for each set. The piecewise linear interpolant to the fit is convex on the first set and concave on the other one. The partition into suitable sets is achieved by a finite iterative algorithm, which is made quite efficient by the interactions of the quadratic programming problems on consecutive data. The algorithm obtains the solution by employing no more quadratic programming calculations over subranges of data than twice the number of the divided difference constraints. The quadratic programming technique makes use of active sets and takes advantage of a B-spline representation of the smoothed values that allows some efficient updating procedures. The entire code required to implement the method is 2920 Fortran lines. The package has been tested on a variety of data sets and has performed very efficiently, terminating in an overall number of active set changes over subranges of data that is only proportional to the number of data. The results suggest that the package can be used for very large numbers of data values. Some examples with output are provided to help new users and exhibit certain features of the software. Important applications of the smoothing technique may be found in calculating a sigmoid approximation, which is a common topic in various contexts in disciplines such as physics, economics, biology and engineering. Distribution material that includes single and double precision versions of the code, driver programs, technical details of the implementation of the software package and test examples that demonstrate the use of the software is available in an accompanying ASCII file.
    Program summary
    Title of program: L2CXCV
    Catalogue identifier: ADXM_v1_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADXM_v1_0
    Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland
    Computer: PC Intel Pentium, Sun Sparc Ultra 5, Hewlett-Packard HP UX 11.0
    Operating system: WINDOWS 98, 2000, Unix/Solaris 7, Unix/HP UX 11.0
    Programming language used: FORTRAN 77
    Memory required to execute with typical data: O(n), where n is the number of data
    No. of bits in a byte: 8
    No. of lines in distributed program, including test data, etc.: 29 349
    No. of bytes in distributed program, including test data, etc.: 1 276 663
    No. of processors used: 1
    Has the code been vectorized or parallelized?: no
    Distribution format: default tar.gz
    Separate documentation available: Yes
    Nature of physical problem: Analysis of processes that show initially increasing and then decreasing rates of change (sigmoid shape), as, for example, in heat curves, reactor stability conditions, evolution curves, photoemission yields, growth models, utility functions, etc. Identifying an unknown convex/concave (sigmoid) function from measurements of its values that contain random errors, and identifying the inflection point of this sigmoid function.
    Method of solution: Univariate data smoothing by minimizing the sum of the squares of the residuals (least squares approximation) subject to the condition that the second order divided differences of the smoothed values change sign at most once. Ideally, this is the number of sign changes in the second derivative of the underlying function. The remarkable property of the smoothed values is that they consist of one separate section of optimal components that give nonnegative second divided differences (convexity) and one separate section of optimal components that give nonpositive second divided differences (concavity). The solution process finds the joint of the sections (that is, the estimate of the inflection point of the underlying function) automatically. The underlying method is iterative, each iteration solving a structured strictly convex quadratic programming problem in order to obtain a convex or a concave section over a subrange of data.
    Restrictions on the complexity of the problem: The number of data, n, is not limited in the software package, but is limited to 2000 in the main driver. The total work of the method requires 2n-2 structured quadratic programming calculations over subranges of data, which in practice does not exceed O(n) computer operations.
    Typical running times: CPU time on a PC with an Intel 733 MHz processor operating in Windows 98: about 2 s to smooth n=1000 noisy measurements that follow the shape of the sine function over one period.
    Summary: L2CXCV is a package of Fortran 77 subroutines for least squares smoothing to n univariate data values contaminated by random errors, subject to one sign change in the second divided differences of the smoothed values, where the location of the sign change is unknown. The piecewise linear interpolant to the smoothed values gives a convex/concave fit to the data. The underlying algorithm is based on the property that in this best convex/concave fit, the convex and the concave sections are both optimal and separate. The algorithm is iterative, each iteration solving a strictly convex quadratic programming problem for the best convex fit to the first k data, starting from the best convex fit to the first k-1 data. By reversing the order and sign of the data, the algorithm obtains the best concave fit to the last n-k data. It then chooses that k as the optimal position of the required sign change (which defines the inflection point of the fit) if the convex and the concave components to the first k and the last n-k data, respectively, form a convex/concave vector that gives the least sum of squares of residuals. In effect the algorithm requires at most 2n-2 quadratic programming calculations over subranges of data. The package employs a technique for quadratic programming that takes advantage of a B-spline representation of the smoothed values and makes use of some efficient O(k) updating procedures, where k is the number of data of a subrange. The package has been tested on a variety of data sets and has performed very efficiently, terminating in an overall number of active set changes that is about n, thus exhibiting quadratic performance in n. The Fortran codes have been designed to minimize the use of computing resources. Attention has been given to computer rounding error details, which are essential to the robustness of the software package. Numerical examples with output are provided to assist the use of the software and exhibit certain features of the method.
    Distribution material that includes driver programs, technical details of the installation of the package and test examples that demonstrate the use of the software is available in an ASCII file that accompanies this work.
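
    For orientation, a brute-force reference formulation of the same optimization (hypothetical helper name; L2CXCV itself uses a far more efficient active-set B-spline strategy): try every inflection position k, impose convexity of the second differences up to k and concavity after it, and keep the best fit.

```python
import numpy as np
import cvxpy as cp

# Brute-force reference for the convex/concave smoothing problem: for every
# candidate inflection index k, require nonnegative second differences up to
# k (convexity) and nonpositive ones after k (concavity), solve the QP, and
# keep the best fit. O(n) QP solves; L2CXCV achieves the same result far
# more efficiently with active sets and B-spline updating.

def cxcv_fit(y):
    n, best, best_fit = len(y), np.inf, None
    for k in range(2, n - 2):                    # candidate inflection index
        x = cp.Variable(n)
        cons = [cp.diff(x[: k + 1], 2) >= 0,     # convex section
                cp.diff(x[k:], 2) <= 0]          # concave section
        prob = cp.Problem(cp.Minimize(cp.sum_squares(x - y)), cons)
        prob.solve()
        if prob.value is not None and prob.value < best:
            best, best_fit = prob.value, x.value
    return best_fit

t = np.linspace(0, 1, 60)
rng = np.random.default_rng(2)
noisy = 1 / (1 + np.exp(-10 * (t - 0.5))) + 0.03 * rng.standard_normal(60)
smooth = cxcv_fit(noisy)                         # sigmoid-shaped fit
```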

  15. Image restoration by the method of convex projections: part 1 theory.

    PubMed

    Youla, D C; Webb, H

    1982-01-01

    A projection operator onto a closed convex set in Hilbert space is one of the few examples of a nonlinear map that can be defined in simple abstract terms. Moreover, it minimizes distance and is nonexpansive, and therefore shares two of the more important properties of ordinary linear orthogonal projections onto closed linear manifolds. In this paper, we exploit the properties of these operators to develop several iterative algorithms for image restoration from partial data which permit any number of nonlinear constraints of a certain type to be subsumed automatically. Their common conceptual basis is as follows. Every known property of an original image f is envisaged as restricting it to lie in a well-defined closed convex set. Thus, m such properties place f in the intersection E(0) = E(1) ∩ E(2) ∩ ... ∩ E(m) of the corresponding closed convex sets E(1), E(2), ..., E(m). Given only the projection operators P(i) onto the individual E(i)'s, i = 1, ..., m, we restore f by recursive means. Clearly, in this approach, the realization of the P(i)'s in a Hilbert space setting is one of the major synthesis problems. Section I describes the geometrical significance of the three main theorems in considerable detail, and most of the underlying ideas are illustrated with the aid of simple diagrams. Section II presents rules for the numerical implementation of 11 specific projection operators which are found to occur frequently in many signal-processing applications, and the Appendix contains proofs of all the major results.
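
    The core recursion is easy to state in code. A minimal sketch with two illustrative convex sets (agreement with observed pixels, and pixel values confined to [0, 1]); the sets, mask, and iteration count are assumptions for the example:

```python
import numpy as np

# Minimal POCS restoration loop: each known property of the image defines a
# closed convex set, and the composition of the projections is iterated.
# The two sets used here (agreement with observed pixels; pixel values in
# [0, 1]) are illustrative choices.

rng = np.random.default_rng(3)
f_true = rng.uniform(0, 1, (64, 64))
mask = rng.uniform(size=f_true.shape) < 0.3   # 30% of pixels observed

def P1(f):                                    # E(1): agrees with the samples
    g = f.copy()
    g[mask] = f_true[mask]
    return g

def P2(f):                                    # E(2): amplitudes in [0, 1]
    return np.clip(f, 0.0, 1.0)

f = np.zeros_like(f_true)
for _ in range(200):
    f = P2(P1(f))                             # f_{k+1} = P(2) P(1) f_k
```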

  16. Fast Fuzzy Arithmetic Operations

    NASA Technical Reports Server (NTRS)

    Hampton, Michael; Kosheleva, Olga

    1997-01-01

    In engineering applications of fuzzy logic, the main goal is not to simulate the way the experts really think, but to come up with a good engineering solution that would (ideally) be better than the expert's control. In such applications, it makes perfect sense to restrict ourselves to simplified approximate expressions for membership functions. If we need to perform arithmetic operations with the resulting fuzzy numbers, then we can use simple and fast algorithms that are known for operations with simple membership functions. In other applications, especially the ones that are related to the humanities, simulating experts is one of the main goals. In such applications, we must use membership functions that capture every nuance of the expert's opinion; these functions are therefore complicated, and fuzzy arithmetic operations with the corresponding fuzzy numbers become a computational problem. In this paper, we design a new algorithm for performing such operations. This algorithm is applicable in the case when the negative logarithms -log(u(x)) of the membership functions u(x) are convex, and it reduces the computation time from O(n^2) to O(n log(n)) (where n is the number of points x at which we know the membership functions u(x)).
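
    For reference, the O(n^2) baseline that the log-convexity trick improves on: the extension-principle (sup-min convolution) addition of two fuzzy numbers sampled on a common grid. The membership functions below are illustrative.

```python
import numpy as np

# O(n^2) extension-principle addition of two fuzzy numbers on a common
# uniform grid: u_C(z) = sup over x + y = z of min(u_A(x), u_B(y)).
# This is the baseline cost that the paper reduces to O(n log n) when
# -log(u) is convex, as it is for the Gaussian-like memberships below.

def fuzzy_add(uA, uB):
    n = len(uA)
    uC = np.zeros(2 * n - 1)                  # grid of the sum
    for i in range(n):
        for j in range(n):
            uC[i + j] = max(uC[i + j], min(uA[i], uB[j]))
    return uC

x = np.linspace(-3, 3, 201)
uA = np.exp(-x ** 2)                          # -log uA is convex
uB = np.exp(-2 * (x - 0.5) ** 2)              # -log uB is convex
uC = fuzzy_add(uA, uB)
```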

  17. QUADRATIC SERENDIPITY FINITE ELEMENTS ON POLYGONS USING GENERALIZED BARYCENTRIC COORDINATES.

    PubMed

    Rand, Alexander; Gillette, Andrew; Bajaj, Chandrajit

    2014-01-01

    We introduce a finite element construction for use on the class of convex, planar polygons and show it obtains a quadratic error convergence estimate. On a convex n -gon, our construction produces 2 n basis functions, associated in a Lagrange-like fashion to each vertex and each edge midpoint, by transforming and combining a set of n ( n + 1)/2 basis functions known to obtain quadratic convergence. The technique broadens the scope of the so-called 'serendipity' elements, previously studied only for quadrilateral and regular hexahedral meshes, by employing the theory of generalized barycentric coordinates. Uniform a priori error estimates are established over the class of convex quadrilaterals with bounded aspect ratio as well as over the class of convex planar polygons satisfying additional shape regularity conditions to exclude large interior angles and short edges. Numerical evidence is provided on a trapezoidal quadrilateral mesh, previously not amenable to serendipity constructions, and applications to adaptive meshing are discussed.

  18. H∞ memory feedback control with input limitation minimization for offshore jacket platform stabilization

    NASA Astrophysics Data System (ADS)

    Yang, Jia Sheng

    2018-06-01

    In this paper, we investigate an H∞ memory controller with input limitation minimization (HMCIM) for offshore jacket platform stabilization. The main objective of this study is to reduce the control consumption and protect the actuator while satisfying the requirements on system performance. First, we introduce a dynamic model of the offshore platform with low order main modes, based on a mode reduction method from numerical analysis. Then, based on H∞ control theory and matrix inequality techniques, we develop a novel H∞ memory controller with input limitation. Furthermore, a non-convex optimization model that minimizes input energy consumption is proposed. Since it is difficult to solve this non-convex optimization model directly, we use a relaxation method with matrix operations to transform it into a convex optimization model. Thus, it can be solved by a standard convex optimization solver in MATLAB or CPLEX. Finally, several numerical examples are given to validate the proposed models and methods.

  19. Water resources planning and management : A stochastic dual dynamic programming approach

    NASA Astrophysics Data System (ADS)

    Goor, Q.; Pinte, D.; Tilmant, A.

    2008-12-01

    Allocating water between different users and uses, including the environment, is one of the most challenging tasks facing water resources managers and has always been at the heart of Integrated Water Resources Management (IWRM). As water scarcity is expected to increase over time, allocation decisions among the different uses will have to be found taking into account the complex interactions between water and the economy. Hydro-economic optimization models can capture those interactions while prescribing efficient allocation policies. Many hydro-economic models found in the literature are formulated as large-scale nonlinear optimization problems (NLP), seeking to maximize net benefits from the system operation while meeting operational and/or institutional constraints and describing the main hydrological processes. However, those models rarely incorporate the uncertainty inherent to the availability of water, essentially because of the computational difficulties associated with stochastic formulations. The purpose of this presentation is to present a stochastic programming model that can identify economically efficient allocation policies in large-scale multipurpose multireservoir systems. The model is based on stochastic dual dynamic programming (SDDP), an extension of traditional SDP that is not affected by the curse of dimensionality. SDDP identifies efficient allocation policies while accounting for hydrologic uncertainty. The objective function includes the net benefits from the hydropower and irrigation sectors, as well as penalties for not meeting operational and/or institutional constraints. To implement the efficient decomposition scheme that removes the computational burden, the one-stage SDDP problem has to be a linear program. Recent developments improve the representation of the nonlinear and mildly non-convex hydropower function through a convex hull approximation of the true hydropower function. The model is illustrated on a cascade of 14 reservoirs on the Nile river basin.
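
    The convex hull approximation mentioned at the end can be illustrated in a few lines: take the upper concave envelope of a sampled (mildly non-convex) hydropower curve, which yields the piecewise-linear concave model that keeps the one-stage problem linear. The turbine curve below is hypothetical.

```python
import numpy as np

# Upper concave envelope of a sampled hydropower curve P(q): the
# piecewise-linear concave approximation that keeps the one-stage SDDP
# problem a linear program. Andrew's monotone-chain upper hull, O(n log n).

def upper_concave_envelope(q, P):
    hull = []
    for p in sorted(zip(q, P)):
        while len(hull) >= 2:
            (x1, y1), (x2, y2) = hull[-2], hull[-1]
            # pop the last point while the chain fails to turn downward
            if (x2 - x1) * (p[1] - y1) - (p[0] - x1) * (y2 - y1) >= 0:
                hull.pop()
            else:
                break
        hull.append(p)
    return np.array(hull)                     # vertices of the envelope

q = np.linspace(0, 1, 100)
P = q ** 0.9 * (1 + 0.05 * np.sin(12 * q))    # hypothetical turbine curve
env = upper_concave_envelope(q, P)
```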

  20. Inhibitory competition in figure-ground perception: context and convexity.

    PubMed

    Peterson, Mary A; Salvagio, Elizabeth

    2008-12-15

    Convexity has long been considered a potent cue as to which of two regions on opposite sides of an edge is the shaped figure. Experiment 1 shows that for a single edge, there is only a weak bias toward seeing the figure on the convex side. Experiments 1-3 show that the bias toward seeing the convex side as figure increases as the number of edges delimiting alternating convex and concave regions increases, provided that the concave regions are homogeneous in color. The results of Experiments 2 and 3 rule out a probability summation explanation for these context effects. Taken together, the results of Experiments 1-3 show that the homogeneity versus heterogeneity of the convex regions is irrelevant. Experiment 4 shows that homogeneity of alternating regions is not sufficient for context effects; a cue that favors the perception of the intervening regions as figures is necessary. Thus homogeneity does not alone operate as a background cue. We interpret our results within a model of figure-ground perception in which shape properties on opposite sides of an edge compete for representation and the competitive strength of weak competitors is further reduced when they are homogeneous.

  1. Convex Banding of the Covariance Matrix

    PubMed Central

    Bien, Jacob; Bunea, Florentina; Xiao, Luo

    2016-01-01

    We introduce a new sparse estimator of the covariance matrix for high-dimensional models in which the variables have a known ordering. Our estimator, which is the solution to a convex optimization problem, is equivalently expressed as an estimator which tapers the sample covariance matrix by a Toeplitz, sparsely-banded, data-adaptive matrix. As a result of this adaptivity, the convex banding estimator enjoys theoretical optimality properties not attained by previous banding or tapered estimators. In particular, our convex banding estimator is minimax rate adaptive in Frobenius and operator norms, up to log factors, over commonly-studied classes of covariance matrices, and over more general classes. Furthermore, it correctly recovers the bandwidth when the true covariance is exactly banded. Our convex formulation admits a simple and efficient algorithm. Empirical studies demonstrate its practical effectiveness and illustrate that our exactly-banded estimator works well even when the true covariance matrix is only close to a banded matrix, confirming our theoretical results. Our method compares favorably with all existing methods, in terms of accuracy and speed. We illustrate the practical merits of the convex banding estimator by showing that it can be used to improve the performance of discriminant analysis for classifying sound recordings. PMID:28042189
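
    As a point of reference for what the convex program improves on, the classical (non-adaptive) banding estimator is a one-liner: keep sample-covariance entries within a fixed bandwidth k of the diagonal and zero the rest. A minimal sketch on synthetic data (the bandwidth and dimensions are arbitrary):

```python
import numpy as np

# The classical banding estimator used as a baseline: keep sample-covariance
# entries within bandwidth k of the diagonal and zero the rest. The convex
# banding estimator instead tapers the sample covariance by a data-adaptive
# matrix obtained from a convex program; this sketch is only the baseline.

def band(S, k):
    n = S.shape[0]
    keep = np.abs(np.subtract.outer(np.arange(n), np.arange(n))) <= k
    return S * keep

X = np.random.default_rng(6).standard_normal((200, 30))   # synthetic data
S_banded = band(np.cov(X, rowvar=False), k=3)
```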

  2. Convex Banding of the Covariance Matrix.

    PubMed

    Bien, Jacob; Bunea, Florentina; Xiao, Luo

    2016-01-01

    We introduce a new sparse estimator of the covariance matrix for high-dimensional models in which the variables have a known ordering. Our estimator, which is the solution to a convex optimization problem, is equivalently expressed as an estimator which tapers the sample covariance matrix by a Toeplitz, sparsely-banded, data-adaptive matrix. As a result of this adaptivity, the convex banding estimator enjoys theoretical optimality properties not attained by previous banding or tapered estimators. In particular, our convex banding estimator is minimax rate adaptive in Frobenius and operator norms, up to log factors, over commonly-studied classes of covariance matrices, and over more general classes. Furthermore, it correctly recovers the bandwidth when the true covariance is exactly banded. Our convex formulation admits a simple and efficient algorithm. Empirical studies demonstrate its practical effectiveness and illustrate that our exactly-banded estimator works well even when the true covariance matrix is only close to a banded matrix, confirming our theoretical results. Our method compares favorably with all existing methods, in terms of accuracy and speed. We illustrate the practical merits of the convex banding estimator by showing that it can be used to improve the performance of discriminant analysis for classifying sound recordings.

  3. Allometric relationships between traveltime channel networks, convex hulls, and convexity measures

    NASA Astrophysics Data System (ADS)

    Tay, Lea Tien; Sagar, B. S. Daya; Chuah, Hean Teik

    2006-06-01

    The channel network (S) is a nonconvex set, while its basin [C(S)] is convex. We remove open-end points of the channel connectivity network iteratively to generate a traveltime sequence of networks (Sn). The convex hulls of these traveltime networks provide an interesting topological quantity, which has not been noted thus far. We compute lengths of shrinking traveltime networks L(Sn) and areas of corresponding convex hulls C(Sn), the ratios of which provide convexity measures CM(Sn) of traveltime networks. A statistically significant scaling relationship is found for a model network in the form L(Sn) ~ A[C(Sn)]^0.57. From the plots of the lengths of these traveltime networks and the areas of their corresponding convex hulls as functions of convexity measures, new power law relations of the forms CM(Sn) ~ [L(Sn)]^a and CM(Sn) ~ [C(Sn)]^b are derived. In addition to the model study, these relations for networks derived from seven subbasins of the Cameron Highlands region of Peninsular Malaysia are provided. Further studies are needed on a large number of channel networks of distinct sizes and topologies to understand the relationships of these new exponents with other scaling exponents that define the scaling structure of river networks.
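
    Computing such a convexity measure is straightforward once the network geometry is in hand; a sketch with a two-segment toy network (real inputs would be channel segments extracted at successive traveltime prunings):

```python
import numpy as np
from scipy.spatial import ConvexHull

# Convexity measure in the spirit of the paper: ratio of the length of a
# channel network to the area of its convex hull. The two-segment network
# below is a toy stand-in for channel segments extracted at successive
# traveltime prunings.

def network_length(segments):
    return sum(np.linalg.norm(b - a) for a, b in segments)

def convexity_measure(segments):
    pts = np.vstack([p for seg in segments for p in seg])
    hull_area = ConvexHull(pts).volume        # in 2-D, .volume is the area
    return network_length(segments) / hull_area

main = (np.array([0.0, 0.0]), np.array([0.0, 5.0]))   # trunk channel
trib = (np.array([0.0, 2.0]), np.array([2.0, 4.0]))   # tributary
print(convexity_measure([main, trib]))
```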

  4. Pin stack array for thermoacoustic energy conversion

    DOEpatents

    Keolian, Robert M.; Swift, Gregory W.

    1995-01-01

    A thermoacoustic stack for connecting two heat exchangers in a thermoacoustic energy converter provides a convex fluid-solid interface in a plane perpendicular to an axis for acoustic oscillation of fluid between the two heat exchangers. The convex surfaces increase the ratio of the fluid volume in the effective thermoacoustic volume that is displaced from the convex surface to the fluid volume that is adjacent the surface within which viscous energy losses occur. Increasing the volume ratio results in an increase in the ratio of transferred thermal energy to viscous energy losses, with a concomitant increase in operating efficiency of the thermoacoustic converter. The convex surfaces may be easily provided by a pin array having elements arranged parallel to the direction of acoustic oscillations and with effective radial dimensions much smaller than the thicknesses of the viscous energy loss and thermoacoustic energy transfer volumes.

  5. Density-functional theory for internal magnetic fields

    NASA Astrophysics Data System (ADS)

    Tellgren, Erik I.

    2018-01-01

    A density-functional theory is developed based on the Maxwell-Schrödinger equation with an internal magnetic field in addition to the external electromagnetic potentials. The basic variables of this theory are the electron density and the total magnetic field, which can equivalently be represented as a physical current density. Hence, the theory can be regarded as a physical current density-functional theory and an alternative to the paramagnetic current density-functional theory due to Vignale and Rasolt. The energy functional has strong enough convexity properties to allow a formulation that generalizes Lieb's convex analysis formulation of standard density-functional theory. Several variational principles as well as a Hohenberg-Kohn-like mapping between potentials and ground-state densities follow from the underlying convex structure. Moreover, the energy functional can be regarded as the result of a standard approximation technique (Moreau-Yosida regularization) applied to the conventional Schrödinger ground-state energy, which imposes limits on the maximum curvature of the energy (with respect to the magnetic field) and enables construction of a (Fréchet) differentiable universal density functional.
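
    For reference, the Moreau-Yosida regularization invoked here has the standard form (with λ > 0 controlling the maximum curvature of the regularized functional):

```latex
% Standard Moreau--Yosida regularization of a functional E (assumed
% bounded below); \lambda > 0 bounds the curvature of the regularized
% functional, matching the curvature bound described in the abstract.
\[
  E_\lambda(v) \;=\; \inf_{u}\Bigl( E(u) + \tfrac{1}{2\lambda}\lVert u - v\rVert^{2} \Bigr).
\]
```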

  6. Nonlocal continuum analysis of a nonlinear uniaxial elastic lattice system under non-uniform axial load

    NASA Astrophysics Data System (ADS)

    Hérisson, Benjamin; Challamel, Noël; Picandet, Vincent; Perrot, Arnaud

    2016-09-01

    The static behavior of the Fermi-Pasta-Ulam (FPU) axial chain under distributed loading is examined. The FPU system examined in the paper is a nonlinear elastic lattice with linear and quadratic spring interaction. A dimensionless parameter controls the possible loss of convexity of the associated quadratic and cubic energy. Exact analytical solutions based on Hurwitz zeta functions are developed in the presence of linear static loading. It is shown that this nonlinear lattice possesses scale effects and possible localization properties in the absence of energy convexity. A continuous approach is then developed to capture the main phenomena observed in the discrete axial problem. The associated continuum is built from a continualization procedure that is mainly based on the asymptotic expansion of the difference operators involved in the lattice problem. This associated continuum is an enriched gradient-based or nonlocal axial medium. A Taylor-based and a rational differential method are both considered in the continualization procedures to approximate the FPU lattice response. The Padé approximant used in the continualization procedure fits the response of the discrete system efficiently, even in the vicinity of the limit load when the non-convex FPU energy is examined. It is concluded that the FPU lattice system behaves as a nonlocal axial system in dynamic as well as in static loading.
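
    The continualization step can be made concrete. In the standard approach (sketched here under the usual smooth-field assumption, with h the lattice spacing), the central second difference is expanded through the shift operators, and the rational (Padé) approximation replaces the truncated Taylor series:

```latex
% Continualization of the lattice second difference: exact shift-operator
% form, its Taylor expansion, and the rational (Pade) approximation that
% yields the nonlocal continuum; h denotes the lattice spacing.
\[
  \frac{u_{n+1}-2u_n+u_{n-1}}{h^{2}}
  \;=\; \frac{4}{h^{2}}\sinh^{2}\!\Bigl(\tfrac{h\,\partial_x}{2}\Bigr)u
  \;=\; \partial_x^{2}\Bigl(1+\tfrac{h^{2}}{12}\,\partial_x^{2}+\cdots\Bigr)u
  \;\approx\; \frac{\partial_x^{2}}{1-\tfrac{h^{2}}{12}\,\partial_x^{2}}\,u .
\]
```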

  7. Evaluating convex roof entanglement measures.

    PubMed

    Tóth, Géza; Moroder, Tobias; Gühne, Otfried

    2015-04-24

    We show a powerful method to compute entanglement measures based on convex roof constructions. In particular, our method is applicable to measures that, for pure states, can be written as low order polynomials of operator expectation values. We show how to compute the linear entropy of entanglement, the linear entanglement of assistance, and a bound on the dimension of the entanglement for bipartite systems. We discuss how to obtain the convex roof of the three-tangle for three-qubit states. We also show how to calculate the linear entropy of entanglement and the quantum Fisher information based on partial information or device independent information. We demonstrate the usefulness of our method by concrete examples.
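
    For reference, the convex roof construction that the method evaluates is the standard one: a pure-state measure E is extended to mixed states by minimizing the average pure-state value over all decompositions of ρ:

```latex
% Convex-roof extension of a pure-state entanglement measure E to a
% mixed state rho: minimize the average pure-state value over all
% pure-state decompositions of rho.
\[
  E(\rho) \;=\; \min_{\{p_i,\,|\psi_i\rangle\}} \sum_i p_i\,E(|\psi_i\rangle)
  \quad\text{subject to}\quad
  \rho=\sum_i p_i\,|\psi_i\rangle\langle\psi_i| .
\]
```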

  8. QUADRATIC SERENDIPITY FINITE ELEMENTS ON POLYGONS USING GENERALIZED BARYCENTRIC COORDINATES

    PubMed Central

    RAND, ALEXANDER; GILLETTE, ANDREW; BAJAJ, CHANDRAJIT

    2013-01-01

    We introduce a finite element construction for use on the class of convex, planar polygons and show it obtains a quadratic error convergence estimate. On a convex n-gon, our construction produces 2n basis functions, associated in a Lagrange-like fashion to each vertex and each edge midpoint, by transforming and combining a set of n(n + 1)/2 basis functions known to obtain quadratic convergence. The technique broadens the scope of the so-called ‘serendipity’ elements, previously studied only for quadrilateral and regular hexahedral meshes, by employing the theory of generalized barycentric coordinates. Uniform a priori error estimates are established over the class of convex quadrilaterals with bounded aspect ratio as well as over the class of convex planar polygons satisfying additional shape regularity conditions to exclude large interior angles and short edges. Numerical evidence is provided on a trapezoidal quadrilateral mesh, previously not amenable to serendipity constructions, and applications to adaptive meshing are discussed. PMID:25301974

  9. ɛ-subgradient algorithms for bilevel convex optimization

    NASA Astrophysics Data System (ADS)

    Helou, Elias S.; Simões, Lucas E. A.

    2017-05-01

    This paper introduces and studies the convergence properties of a new class of explicit ɛ-subgradient methods for the task of minimizing a convex function over a set of minimizers of another convex minimization problem. The general algorithm specializes to some important cases, such as first-order methods applied to a varying objective function, which have computationally cheap iterations. We present numerical experimentation concerning certain applications where the theoretical framework encompasses efficient algorithmic techniques, enabling the use of the resulting methods to solve very large practical problems arising in tomographic image reconstruction. ES Helou was supported by FAPESP grants 2013/07375-0 and 2013/16508-3 and CNPq grant 311476/2014-7. LEA Simões was supported by FAPESP grants 2011/02219-4 and 2013/14615-7.

  10. On the complexity of a combined homotopy interior method for convex programming

    NASA Astrophysics Data System (ADS)

    Yu, Bo; Xu, Qing; Feng, Guochen

    2007-03-01

    In [G.C. Feng, Z.H. Lin, B. Yu, Existence of an interior pathway to a Karush-Kuhn-Tucker point of a nonconvex programming problem, Nonlinear Anal. 32 (1998) 761-768; G.C. Feng, B. Yu, Combined homotopy interior point method for nonlinear programming problems, in: H. Fujita, M. Yamaguti (Eds.), Advances in Numerical Mathematics, Proceedings of the Second Japan-China Seminar on Numerical Mathematics, Lecture Notes in Numerical and Applied Analysis, vol. 14, Kinokuniya, Tokyo, 1995, pp. 9-16; Z.H. Lin, B. Yu, G.C. Feng, A combined homotopy interior point method for convex programming problem, Appl. Math. Comput. 84 (1997) 193-211], a combined homotopy was constructed for solving non-convex programming and convex programming with weaker conditions, without assuming the logarithmic barrier function to be strictly convex and the solution set to be bounded. It was proven that a smooth interior path from an interior point of the feasible set to a K-K-T point of the problem exists. This shows that combined homotopy interior point methods can solve problems that commonly used interior point methods cannot solve. However, so far, there is no result on its complexity, even for linear programming. The main difficulty is that the objective function is not monotonically decreasing on the combined homotopy path. In this paper, by using a piecewise technique, under commonly used conditions, polynomiality of a combined homotopy interior point method is given for convex nonlinear programming.

  11. A search asymmetry reversed by figure-ground assignment.

    PubMed

    Humphreys, G W; Müller, H

    2000-05-01

    We report evidence demonstrating that a search asymmetry favoring concave over convex targets can be reversed by altering the figure-ground assignment of edges in shapes. Visual search for a concave target among convex distractors is faster than search for a convex target among concave distractors (a search asymmetry). By using shapes with ambiguous local figure-ground relations, we demonstrated that search can be efficient (with search slopes around 10 ms/item) or inefficient (with search slopes around 30-40 ms/item) with the same stimuli, depending on whether edges are assigned to concave or convex "figures." This assignment process can operate in a top-down manner, according to the task set. The results suggest that attention is allocated to spatial regions following the computation of figure-ground relations in parallel across the elements present. This computation can also be modulated by top-down processes.

  12. Recursive optimal pruning with applications to tree structured vector quantizers

    NASA Technical Reports Server (NTRS)

    Kiang, Shei-Zein; Baker, Richard L.; Sullivan, Gary J.; Chiu, Chung-Yen

    1992-01-01

    A pruning algorithm of Chou et al. (1989) for designing optimal tree structures identifies only those codebooks which lie on the convex hull of the original codebook's operational distortion rate function. The authors introduce a modified version of the original algorithm, which identifies a large number of codebooks having minimum average distortion, under the constraint that, in each step, only nodes having no descendants are removed from the tree. All codebooks generated by the original algorithm are also generated by this algorithm. The new algorithm generates a much larger number of codebooks in the middle- and low-rate regions. The additional codebooks permit operation near the codebook's operational distortion rate function without time sharing, by choosing from the increased number of available bit rates. Despite the statistical mismatch which occurs when coding data outside the training sequence, these pruned codebooks retain their performance advantage over full search vector quantizers (VQs) for a large range of rates.

  13. Posterior spinal fusion for adolescent idiopathic scoliosis using a convex pedicle screw technique: a novel concept of deformity correction.

    PubMed

    Tsirikos, A I; Mataliotakis, G; Bounakis, N

    2017-08-01

    We present the results of correcting a double or triple curve adolescent idiopathic scoliosis using a convex segmental pedicle screw technique. We reviewed 191 patients with a mean age at surgery of 15 years (11 to 23.3). Pedicle screws were placed at the convexity of each curve. Concave screws were inserted at one or two cephalad levels and two caudal levels. The mean operating time was 183 minutes (132 to 276) and the mean blood loss 0.22% of the total blood volume (0.08% to 0.4%). Multimodal monitoring remained stable throughout the operation. The mean hospital stay was 6.8 days (5 to 15). The mean post-operative follow-up was 5.8 years (2.5 to 9.5). There were no neurological complications, deep wound infection, obvious nonunion or need for revision surgery. Upper thoracic scoliosis was corrected by a mean 68.2% (38% to 48%, p < 0.001). Main thoracic scoliosis was corrected by a mean 71% (43.5% to 8.9%, p < 0.001). Lumbar scoliosis was corrected by a mean 72.3% (41% to 90%, p < 0.001). No patient lost more than 3° of correction at follow-up. The thoracic kyphosis improved by 13.1° (-21° to 49°, p < 0.001); the lumbar lordosis remained unchanged (p = 0.58). Coronal imbalance was corrected by a mean 98% (0% to 100%, p < 0.001). Sagittal imbalance was corrected by a mean 96% (20% to 100%, p < 0.001). The Scoliosis Research Society Outcomes Questionnaire score improved from a mean 3.6 to 4.6 (2.4 to 4, p < 0.001); patient satisfaction was a mean 4.9 (4.8 to 5). This technique carries low neurological and vascular risks because the screws are placed in the pedicles of the convex side of the curve, away from the spinal cord, cauda equina and the aorta. A low implant density (pedicle screw density 1.2, when a density of 2 represents placement of pedicle screws bilaterally at every instrumented segment) achieved satisfactory correction of the scoliosis, an improved thoracic kyphosis and normal global sagittal balance. Both patient satisfaction and functional outcomes were excellent. Cite this article: Bone Joint J 2017;99-B:1080-7. ©2017 The British Editorial Society of Bone & Joint Surgery.

  14. Generalized Convexity and Concavity Properties of the Optimal Value Function in Parametric Nonlinear Programming.

    DTIC Science & Technology

    1983-04-11


  15. The Role of Hellinger Processes in Mathematical Finance

    NASA Astrophysics Data System (ADS)

    Choulli, T.; Hurd, T. R.

    2001-09-01

    This paper illustrates the natural role that Hellinger processes can play in solving problems from finance. We propose an extension of the concept of Hellinger process applicable to entropy distance and f-divergence distances, where f is a convex logarithmic function or a convex power function with general order q, q ≠ 0, q < 1. These concepts lead to a new approach to Merton's optimal portfolio problem and its dual in general Lévy markets.

  16. Solution of monotone complementarity and general convex programming problems using a modified potential reduction interior point method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huang, Kuo-Ling; Mehrotra, Sanjay

    We present a homogeneous algorithm equipped with a modified potential function for the monotone complementarity problem. We show that this potential function is reduced by at least a constant amount if a scaled Lipschitz condition (SLC) is satisfied. A practical algorithm based on this potential function is implemented in a software package named iOptimize. The implementation in iOptimize maintains global linear and polynomial time convergence properties, while achieving practical performance. It either successfully solves the problem, or concludes that the SLC is not satisfied. When compared with the mature software package MOSEK (barrier solver version 6.0.0.106), iOptimize solves convex quadratic programming problems, convex quadratically constrained quadratic programming problems, and general convex programming problems in fewer iterations. Moreover, several problems for which MOSEK fails are solved to optimality. In addition, we also find that iOptimize detects infeasibility more reliably than the general nonlinear solvers Ipopt (version 3.9.2) and Knitro (version 8.0).

  17. Morphological decomposition of 2-D binary shapes into convex polygons: a heuristic algorithm.

    PubMed

    Xu, J

    2001-01-01

    In many morphological shape decomposition algorithms, either a shape can only be decomposed into shape components of extremely simple forms or a time consuming search process is employed to determine a decomposition. In this paper, we present a morphological shape decomposition algorithm that decomposes a two-dimensional (2-D) binary shape into a collection of convex polygonal components. A single convex polygonal approximation for a given image is first identified. This first component is determined incrementally by selecting a sequence of basic shape primitives. These shape primitives are chosen based on shape information extracted from the given shape at different scale levels. Additional shape components are identified recursively from the difference image between the given image and the first component. Simple operations are used to repair certain concavities caused by the set difference operation. The resulting hierarchical structure provides descriptions for the given shape at different detail levels. The experiments show that the decomposition results produced by the algorithm seem to be in good agreement with the natural structures of the given shapes. The computational cost of the algorithm is significantly lower than that of an earlier search-based convex decomposition algorithm. Compared to nonconvex decomposition algorithms, our algorithm allows accurate approximations for the given shapes at low coding costs.

  18. Posterior multilevel vertebral osteotomy for correction of severe and rigid neuromuscular scoliosis: a preliminary study.

    PubMed

    Suh, Seung Woo; Modi, Hitesh N; Yang, Jaehyuk; Song, Hae-Ryong; Jang, Ki-Mo

    2009-05-20

    Prospective study. To determine the effectiveness and correction achieved with posterior multilevel vertebral osteotomy in severe and rigid curves without anterior release. For the correction of severe and rigid scoliotic curves, anterior-posterior combined or posterior vertebral column resection (PVCR) procedures are used. The anterior procedure might compromise pulmonary function, and PVCR carries a risk of neurologic injury. The authors therefore developed a new technique that reduces both risks. Thirteen neuromuscular patients (7 cerebral palsy, 2 Duchenne muscular dystrophy, and 4 spinal muscular atrophy) who had rigid curves >100 degrees were prospectively selected. All were operated on with a posterior-only approach using a pedicle screw construct. To achieve the desired correction, posterior multilevel vertebral osteotomies were performed at 3 to 5 levels (the apex, and 1-2 levels above and below the apex) through partial laminotomy sites connecting from the concave to the convex side, just above the pedicle; repeated cantilever manipulation was applied over temporary short-segment fixation, above and below the apex, on the convex side. On the concave side, a rod was assembled with screws and a rod-derotation maneuver was performed. Finally, the short-segment fixation on the convex side was replaced with a full-length construct. Intraoperative MEP monitoring was applied in all patients. Mean age was 21 years and average follow-up was 25 months. Average preoperative flexibility was 20.3% (24.1 degrees). Average Cobb's angle, pelvic obliquity, and apical rotation were 118.2 degrees, 16.7 degrees, and 57 degrees preoperatively, respectively, and 48.8 degrees, 8 degrees, and 43 degrees after surgery, showing significant corrections of 59.4%, 46.1%, and 24.5%. The average number of osteotomy levels was 4.2 and average blood loss was 3356 +/- 884 mL. Mean operation time was 330 +/- 46 minutes. None of the patients required postoperative ventilator support or displayed any signs of neurologic or vascular injury during or after the operation. This technique should be recommended because (1) it provides release of the anterior column without an anterior approach and (2) our results support its superiority as a technique.

  19. Transition operators in electromagnetic-wave diffraction theory. II - Applications to optics

    NASA Technical Reports Server (NTRS)

    Hahne, G. E.

    1993-01-01

    The theory developed by Hahne (1992) for the diffraction of time-harmonic electromagnetic waves from fixed obstacles is briefly summarized and extended. Applications of the theory are considered which comprise, first, a spherical harmonic expansion of the so-called radiation impedance operator in the theory, for a spherical surface, and second, a reconsideration of familiar short-wavelength approximations from the new standpoint, including a derivation of the so-called physical optics method on the basis of a quasi-planar approximation to the radiation impedance operator, augmented by the method of stationary phase. The latter includes a rederivation of the geometrical optics approximation for the complete Green's function for the electromagnetic field in the presence of a smooth, convex-surfaced, perfectly electrically conductive obstacle.

  20. GASPACHO: a generic automatic solver using proximal algorithms for convex huge optimization problems

    NASA Astrophysics Data System (ADS)

    Goossens, Bart; Luong, Hiêp; Philips, Wilfried

    2017-08-01

    Many inverse problems (e.g., demosaicking, deblurring, denoising, image fusion, HDR synthesis) share various similarities: degradation operators are often modeled by a specific data fitting function while image prior knowledge (e.g., sparsity) is incorporated by additional regularization terms. In this paper, we investigate automatic algorithmic techniques for evaluating proximal operators. These algorithmic techniques also enable efficient calculation of adjoints from linear operators in a general matrix-free setting. In particular, we study the simultaneous-direction method of multipliers (SDMM) and the parallel proximal algorithm (PPXA) solvers and show that the automatically derived implementations are well suited for both single-GPU and multi-GPU processing. We demonstrate this approach for an Electron Microscopy (EM) deconvolution problem.
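
    Two of the ingredients the paper automates can be shown in miniature: a proximal operator (here the prox of lam*||.||_1, i.e. soft-thresholding) and a matrix-free adjoint check <Ax, y> = <x, A^T y> of the kind used to validate automatically derived adjoints. The operator chosen below (a finite-difference convolution) is illustrative.

```python
import numpy as np

# Miniature versions of two ingredients the paper automates:
# (1) a proximal operator, here prox of lam*||.||_1 (soft-thresholding);
# (2) a matrix-free adjoint test <A x, y> == <x, A^T y>, of the kind used
#     to validate automatically derived adjoints of linear operators.

def prox_l1(v, lam):
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

def A(x):                                   # forward operator: 1-D difference
    return np.convolve(x, [1.0, -1.0], mode="full")

def A_T(y):                                 # its adjoint
    return np.convolve(y, [-1.0, 1.0], mode="valid")

rng = np.random.default_rng(4)
x, y = rng.standard_normal(50), rng.standard_normal(51)
x_sparse = prox_l1(x, 0.5)                  # one proximal step
assert np.isclose(np.dot(A(x), y), np.dot(x, A_T(y)))   # adjoint check
```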

  1. On the convexity of ROC curves estimated from radiological test results

    PubMed Central

    Pesce, Lorenzo L.; Metz, Charles E.; Berbaum, Kevin S.

    2010-01-01

    Rationale and Objectives Although an ideal observer’s receiver operating characteristic (ROC) curve must be convex — i.e., its slope must decrease monotonically — published fits to empirical data often display “hooks.” Such fits sometimes are accepted on the basis of an argument that experiments are done with real, rather than ideal, observers. However, the fact that ideal observers must produce convex curves does not imply that convex curves describe only ideal observers. This paper aims to identify the practical implications of non-convex ROC curves and the conditions that can lead to empirical and/or fitted ROC curves that are not convex. Materials and Methods This paper views non-convex ROC curves from historical, theoretical and statistical perspectives, which we describe briefly. We then consider population ROC curves with various shapes and analyze the types of medical decisions that they imply. Finally, we describe how sampling variability and curve-fitting algorithms can produce ROC curve estimates that include hooks. Results We show that hooks in population ROC curves imply the use of an irrational decision strategy, even when the curve doesn’t cross the chance line, and therefore usually are untenable in medical settings. Moreover, we sketch a simple approach to improve any non-convex ROC curve by adding statistical variation to the decision process. Finally, we sketch how to test whether hooks present in ROC data are likely to have been caused by chance alone and how some hooked ROCs found in the literature can be easily explained as fitting artifacts or modeling issues. Conclusion In general, ROC curve fits that show hooks should be looked upon with suspicion unless other arguments justify their presence. PMID:20599155
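
    The paper's convexity criterion is easy to test on an empirical curve: the slope between successive operating points must decrease monotonically, so any slope increase signals a hook. A sketch on hypothetical operating points:

```python
import numpy as np

# Convexity check for an empirical ROC curve in the paper's sense: the
# slope between successive operating points (a likelihood-ratio estimate)
# must decrease monotonically; any increase signals a "hook". The
# operating points below are hypothetical.

fpr = np.array([0.0, 0.1, 0.25, 0.5, 1.0])
tpr = np.array([0.0, 0.4, 0.55, 0.8, 1.0])

slopes = np.diff(tpr) / np.diff(fpr)
has_hook = bool(np.any(np.diff(slopes) > 0))
print("hook detected:", has_hook)
```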

  2. Resolvent positive linear operators exhibit the reduction phenomenon

    PubMed Central

    Altenberg, Lee

    2012-01-01

    The spectral bound, s(αA + βV), of a combination of a resolvent positive linear operator A and an operator of multiplication V, was shown by Kato to be convex in β. Kato's result is shown here to imply, through an elementary "dual convexity" lemma, that s(αA + βV) is also convex in α > 0, and notably, ∂s(αA + βV)/∂α ≤ s(A). Diffusions typically have s(A) ≤ 0, so that for diffusions with spatially heterogeneous growth or decay rates, greater mixing reduces growth. Models of the evolution of dispersal in particular have found this result when A is a Laplacian or second-order elliptic operator, or a nonlocal diffusion operator, implying selection for reduced dispersal. These cases are shown here to be part of a single, broadly general, "reduction" phenomenon. PMID:22357763
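
    In symbols, the two convexity statements and the derivative bound at issue are:

```latex
% Kato's convexity (in beta), the dual convexity (in alpha) derived from
% it, and the derivative bound that yields the reduction phenomenon.
\[
  \beta \mapsto s(\alpha A+\beta V)\ \text{is convex},\qquad
  \alpha \mapsto s(\alpha A+\beta V)\ \text{is convex on }\alpha>0,\qquad
  \frac{\partial s(\alpha A+\beta V)}{\partial \alpha}\;\le\; s(A).
\]
```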

  3. Entropy and convexity for nonlinear partial differential equations

    PubMed Central

    Ball, John M.; Chen, Gui-Qiang G.

    2013-01-01

    Partial differential equations are ubiquitous in almost all applications of mathematics, where they provide a natural mathematical description of many phenomena involving change in physical, chemical, biological and social processes. The concept of entropy originated in thermodynamics and statistical physics during the nineteenth century to describe the heat exchanges that occur in the thermal processes in a thermodynamic system, while the original notion of convexity is for sets and functions in mathematics. Since then, entropy and convexity have become two of the most important concepts in mathematics. In particular, nonlinear methods via entropy and convexity have been playing an increasingly important role in the analysis of nonlinear partial differential equations in recent decades. This opening article of the Theme Issue is intended to provide an introduction to entropy, convexity and related nonlinear methods for the analysis of nonlinear partial differential equations. We also provide a brief discussion about the content and contributions of the papers that make up this Theme Issue. PMID:24249768

  4. Entropy and convexity for nonlinear partial differential equations.

    PubMed

    Ball, John M; Chen, Gui-Qiang G

    2013-12-28

    Partial differential equations are ubiquitous in almost all applications of mathematics, where they provide a natural mathematical description of many phenomena involving change in physical, chemical, biological and social processes. The concept of entropy originated in thermodynamics and statistical physics during the nineteenth century to describe the heat exchanges that occur in the thermal processes in a thermodynamic system, while the original notion of convexity is for sets and functions in mathematics. Since then, entropy and convexity have become two of the most important concepts in mathematics. In particular, nonlinear methods via entropy and convexity have been playing an increasingly important role in the analysis of nonlinear partial differential equations in recent decades. This opening article of the Theme Issue is intended to provide an introduction to entropy, convexity and related nonlinear methods for the analysis of nonlinear partial differential equations. We also provide a brief discussion about the content and contributions of the papers that make up this Theme Issue.

  5. Dynamic history-dependent variational-hemivariational inequalities with applications to contact mechanics

    NASA Astrophysics Data System (ADS)

    Migórski, Stanislaw; Ogorzaly, Justyna

    2017-02-01

    In the paper we deliver a new existence and uniqueness result for a class of abstract nonlinear variational-hemivariational inequalities which are governed by two operators depending on the history of the solution, and include two nondifferentiable functionals, a convex and a nonconvex one. Then, we consider an initial boundary value problem which describes a model of evolution of a viscoelastic body in contact with a foundation. The contact process is assumed to be dynamic, and the friction is described by subdifferential boundary conditions. Both the constitutive law and the contact condition involve memory operators. As an application of the abstract theory, we provide a result on the unique weak solvability of the contact problem.

  6. Computation of convex bounds for present value functions with random payments

    NASA Astrophysics Data System (ADS)

    Ahcan, Ales; Darkiewicz, Grzegorz; Goovaerts, Marc; Hoedemakers, Tom

    2006-02-01

    In this contribution we study the distribution of the present value function of a series of random payments in a stochastic financial environment. Such distributions occur naturally in a wide range of applications within fields of insurance and finance. We obtain accurate approximations by developing upper and lower bounds in the convex-order sense for present value functions. Technically speaking, our methodology is an extension of the results of Dhaene et al. [Insur. Math. Econom. 31(1) (2002) 3-33, Insur. Math. Econom. 31(2) (2002) 133-161] to the case of scalar products of mutually independent random vectors.
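
    The convex-order upper bound that drives this methodology can be sketched by evaluating every marginal inverse CDF at one common uniform variable (the comonotonic construction of Dhaene et al.); the lognormal marginals below are purely hypothetical, and the snippet only illustrates the ordering, not the paper's bounds.

        import numpy as np
        from scipy.stats import lognorm

        # Comonotonic upper bound for S = X_1 + ... + X_n in convex order:
        # replace the unknown dependence structure by perfect positive dependence,
        # i.e. evaluate each inverse CDF at a single common uniform U.
        rng = np.random.default_rng(0)
        sigmas = [0.20, 0.25, 0.30]                 # hypothetical lognormal shapes
        u = rng.uniform(size=200_000)
        s_comon = sum(lognorm.ppf(u, s) for s in sigmas)
        s_indep = sum(lognorm.ppf(rng.uniform(size=200_000), s) for s in sigmas)
        # heavier tail for the comonotonic sum, as convex order predicts
        print(np.quantile(s_comon, 0.99), np.quantile(s_indep, 0.99))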

  7. A system of nonlinear set valued variational inclusions.

    PubMed

    Tang, Yong-Kun; Chang, Shih-Sen; Salahuddin, Salahuddin

    2014-01-01

    In this paper, we study existence theorems and techniques for finding the solutions of a system of nonlinear set valued variational inclusions in Hilbert spaces. To overcome the difficulties due to the presence of a proper convex lower semicontinuous function ϕ and a mapping g, which appear in the considered problems, we use the resolvent operator technique to suggest an iterative algorithm for computing approximate solutions of the system of nonlinear set valued variational inclusions. The convergence of the iterative sequences generated by the algorithm is also proved. MSC: 49J40; 47H06.

  8. SNS programming environment user's guide

    NASA Technical Reports Server (NTRS)

    Tennille, Geoffrey M.; Howser, Lona M.; Humes, D. Creig; Cronin, Catherine K.; Bowen, John T.; Drozdowski, Joseph M.; Utley, Judith A.; Flynn, Theresa M.; Austin, Brenda A.

    1992-01-01

    The computing environment is briefly described for the Supercomputing Network Subsystem (SNS) of the Central Scientific Computing Complex of NASA Langley. The major SNS computers are a CRAY-2, a CRAY Y-MP, a CONVEX C-210, and a CONVEX C-220. The software is described that is common to all of these computers, including: the UNIX operating system, computer graphics, networking utilities, mass storage, and mathematical libraries. Also described are file management, validation, SNS configuration, documentation, and customer services.

  9. Generalized vector calculus on convex domain

    NASA Astrophysics Data System (ADS)

    Agrawal, Om P.; Xu, Yufeng

    2015-06-01

    In this paper, we apply recently proposed generalized integral and differential operators to develop generalized vector calculus and generalized variational calculus for problems defined over a convex domain. In particular, we present some generalization of Green's and Gauss divergence theorems involving some new operators, and apply these theorems to generalized variational calculus. For fractional power kernels, the formulation leads to fractional vector calculus and fractional variational calculus for problems defined over a convex domain. In special cases, when certain parameters take integer values, we obtain formulations for integer order problems. Two examples are presented to demonstrate applications of the generalized variational calculus which utilize the generalized vector calculus developed in the paper. The first example leads to a generalized partial differential equation and the second example leads to a generalized eigenvalue problem, both in two dimensional convex domains. We solve the generalized partial differential equation by using polynomial approximation. A special case of the second example is a generalized isoperimetric problem. We find an approximate solution to this problem. Many physical problems containing integer order integrals and derivatives are defined over arbitrary domains. We speculate that future problems containing fractional and generalized integrals and derivatives in fractional mechanics will be defined over arbitrary domains, and therefore, a general variational calculus incorporating a general vector calculus will be needed for these problems. This research is our first attempt in that direction.

  10. WE-AB-209-07: Explicit and Convex Optimization of Plan Quality Metrics in Intensity-Modulated Radiation Therapy Treatment Planning

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Engberg, L; KTH Royal Institute of Technology, Stockholm; Eriksson, K

    Purpose: To formulate objective functions of a multicriteria fluence map optimization model that correlate well with plan quality metrics, and to solve this multicriteria model by convex approximation. Methods: In this study, objectives of a multicriteria model are formulated to explicitly either minimize or maximize a dose-at-volume measure. Given the widespread agreement that dose-at-volume levels play important roles in plan quality assessment, these objectives correlate well with plan quality metrics. This is in contrast to the conventional objectives, which are to maximize clinical goal achievement by relating to deviations from given dose-at-volume thresholds: while balancing the new objectives means explicitly balancing dose-at-volume levels, balancing the conventional objectives effectively means balancing deviations. Constituted by the inherently non-convex dose-at-volume measure, the new objectives are approximated by the convex mean-tail-dose measure (CVaR measure), yielding a convex approximation of the multicriteria model. Results: Advantages of using the convex approximation are investigated through juxtaposition with the conventional objectives in a computational study of two patient cases. Clinical goals of each case respectively point out three ROI dose-at-volume measures to be considered for plan quality assessment. This is translated in the convex approximation into minimizing three mean-tail-dose measures. Evaluations of the three ROI dose-at-volume measures on Pareto optimal plans are used to represent plan quality of the Pareto sets. Besides providing increased accuracy in terms of feasibility of solutions, the convex approximation generates Pareto sets with overall improved plan quality. In one case, the Pareto set generated by the convex approximation entirely dominates that generated with the conventional objectives. Conclusion: The initial computational study indicates that the convex approximation outperforms the conventional objectives in aspects of accuracy and plan quality.

  11. Reduction of shock induced noise in imperfectly expanded supersonic jets using convex optimization

    NASA Astrophysics Data System (ADS)

    Adhikari, Sam

    2007-11-01

    Imperfectly expanded jets generate screech noise. The imbalance between the backpressure and the exit pressure of the imperfectly expanded jets produces shock cells and expansion or compression waves from the nozzle. The instability waves and the shock cells interact to generate the screech sound. The mathematical model consists of cylindrical coordinate based full Navier-Stokes equations and large-eddy-simulation turbulence modeling. Analytical and computational analysis of the three-dimensional helical effects provides a model that relates several parameters with shock cell patterns, screech frequency and distribution of shock generation locations. Convex optimization techniques minimize the shock cell patterns and the instability waves. The objective functions are (convex) quadratic and the constraint functions are affine. In the quadratic optimization programs, minimization of the quadratic functions over a set of polyhedrons provides the optimal result. Various industry-standard methods, such as regression analysis, distance between polyhedra, bounding variance, Markowitz optimization, and second-order cone programming, are used for the quadratic optimization.
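
    The subproblem described here, a convex quadratic objective over affine constraints, is a standard quadratic program; a generic sketch with random, purely illustrative data, assuming the cvxpy package is available (nothing below encodes the jet physics):

        import numpy as np
        import cvxpy as cp

        rng = np.random.default_rng(1)
        n, m = 5, 3
        Z = rng.standard_normal((n, n))
        Q = Z.T @ Z + np.eye(n)                  # symmetric PSD -> convex objective
        c = rng.standard_normal(n)
        A = rng.standard_normal((m, n))
        b = rng.standard_normal(m)

        x = cp.Variable(n)
        # convex quadratic objective minimized over a polyhedron (affine constraints)
        prob = cp.Problem(cp.Minimize(0.5 * cp.quad_form(x, Q) + c @ x), [A @ x <= b])
        prob.solve()
        print(prob.status, x.value)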

  12. Min-Max Spaces and Complexity Reduction in Min-Max Expansions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gaubert, Stephane, E-mail: Stephane.Gaubert@inria.fr; McEneaney, William M., E-mail: wmceneaney@ucsd.edu

    2012-06-15

    Idempotent methods have been found to be extremely helpful in the numerical solution of certain classes of nonlinear control problems. In those methods, one uses the fact that the value function lies in the space of semiconvex functions (in the case of maximizing controllers), and approximates this value using a truncated max-plus basis expansion. In some classes, the value function is actually convex, and then one specifically approximates with suprema (i.e., max-plus sums) of affine functions. Note that the space of convex functions is a max-plus linear space, or moduloid. In extending those concepts to game problems, one finds a different function space, and different algebra, to be appropriate. Here we consider functions which may be represented using infima (i.e., min-max sums) of max-plus affine functions. It is natural to refer to the class of functions so represented as the min-max linear space (or moduloid) of max-plus hypo-convex functions. We examine this space, the associated notion of duality and min-max basis expansions. In using these methods for solution of control problems, and now games, a critical step is complexity-reduction. In particular, one needs to find reduced-complexity expansions which approximate the function as well as possible. We obtain a solution to this complexity-reduction problem in the case of min-max expansions.

  13. Usefulness of the convexity apparent hyperperfusion sign in 123I-iodoamphetamine brain perfusion SPECT for the diagnosis of idiopathic normal pressure hydrocephalus.

    PubMed

    Ohmichi, Takuma; Kondo, Masaki; Itsukage, Masahiro; Koizumi, Hidetaka; Matsushima, Shigenori; Kuriyama, Nagato; Ishii, Kazunari; Mori, Etsuro; Yamada, Kei; Mizuno, Toshiki; Tokuda, Takahiko

    2018-03-16

    OBJECTIVE The gold standard for the diagnosis of idiopathic normal pressure hydrocephalus (iNPH) is the CSF removal test. For elderly patients, however, a less invasive diagnostic method is required. On MRI, high-convexity tightness was reported to be an important finding for the diagnosis of iNPH. On SPECT, patients with iNPH often show hyperperfusion of the high-convexity area. The authors tested 2 hypotheses regarding the SPECT finding: 1) it is relative hyperperfusion reflecting the increased gray matter density of the convexity, and 2) it is useful for the diagnosis of iNPH. The authors termed the SPECT finding the convexity apparent hyperperfusion (CAPPAH) sign. METHODS Two clinical studies were conducted. In study 1, SPECT was performed for 20 patients suspected of having iNPH, and regional cerebral blood flow (rCBF) of the high-convexity area was examined using quantitative analysis. Clinical differences between patients with the CAPPAH sign (CAP) and those without it (NCAP) were also compared. In study 2, the CAPPAH sign was retrospectively assessed in 30 patients with iNPH and 19 healthy controls using SPECT images and 3D stereotactic surface projection. RESULTS In study 1, rCBF of the high-convexity area of the CAP group was calculated as 35.2-43.7 ml/min/100 g, which is not higher than normal values of rCBF determined by SPECT. The NCAP group showed lower cognitive function and weaker responses to the removal of CSF than the CAP group. In study 2, the CAPPAH sign was positive only in patients with iNPH (24/30) and not in controls (sensitivity 80%, specificity 100%). The coincidence rate between tight high convexity on MRI and the CAPPAH sign was very high (28/30). CONCLUSIONS Patients with iNPH showed hyperperfusion of the high-convexity area on SPECT; however, the presence of the CAPPAH sign did not indicate real hyperperfusion of rCBF in the high-convexity area. The authors speculated that patients with iNPH without the CAPPAH sign, despite showing tight high convexity on MRI, might have comorbidities such as Alzheimer's disease.

  14. Integer Partitions and Convexity

    NASA Astrophysics Data System (ADS)

    Bouroubi, Sadek

    2007-06-01

    Let n be an integer ≥ 1, and let p(n,k) and P(n,k) count the number of partitions of n into k parts, and the number of partitions of n into parts less than or equal to k, respectively. In this paper, we show that these functions are convex. The result includes the actual value of the constant of Bateman and Erdős.
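
    The two counting functions can be computed from the classical recurrence p(n,k) = p(n-1,k-1) + p(n-k,k), with P(n,k) = p(n+k,k) by conjugation; a small sketch for experimenting with the functions studied in the paper (the convexity statement itself is the paper's result and is not verified here):

        from functools import lru_cache

        @lru_cache(maxsize=None)
        def p(n, k):
            """Number of partitions of n into exactly k positive parts."""
            if k <= 0 or n < k:
                return 0
            if k == 1 or n == k:
                return 1
            # either some part equals 1 (remove it), or subtract 1 from all k parts
            return p(n - 1, k - 1) + p(n - k, k)

        def P(n, k):
            """Number of partitions of n into parts <= k (conjugate identity)."""
            return p(n + k, k)

        print([p(n, 4) for n in range(4, 16)])   # 1, 1, 2, 3, 5, 6, 9, 11, 15, ...
        print([P(n, 3) for n in range(1, 11)])   # 1, 2, 3, 4, 5, 7, 8, 10, 12, 14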

  15. Higher order sensitivity of solutions to convex programming problems without strict complementarity

    NASA Technical Reports Server (NTRS)

    Malanowski, Kazimierz

    1988-01-01

    Consideration is given to a family of convex programming problems which depend on a vector parameter. It is shown that the solutions of the problems and the associated Lagrange multipliers are arbitrarily many times directionally differentiable functions of the parameter, provided that the data of the problems are sufficiently regular. The characterizations of the respective derivatives are given.

  16. Orthogonal polynomials for refinable linear functionals

    NASA Astrophysics Data System (ADS)

    Laurie, Dirk; de Villiers, Johan

    2006-12-01

    A refinable linear functional is one that can be expressed as a convex combination, defined by a finite number of mask coefficients, of certain stretched and shifted replicas of itself. The notion generalizes an integral weighted by a refinable function. The key to calculating a Gaussian quadrature formula for such a functional is to find the three-term recursion coefficients for the polynomials orthogonal with respect to that functional. We show how to obtain the recursion coefficients by using only the mask coefficients, and without the aid of modified moments. Our result implies the existence of the corresponding refinable functional whenever the mask coefficients are nonnegative, even when the same mask does not define a refinable function. The algorithm requires O(n^2) rational operations and, thus, can in principle deliver exact results. Numerical evidence suggests that it is also effective in floating-point arithmetic.

  17. Interpolation Error Estimates for Mean Value Coordinates over Convex Polygons

    PubMed Central

    Rand, Alexander; Gillette, Andrew; Bajaj, Chandrajit

    2012-01-01

    In a similar fashion to estimates shown for Harmonic, Wachspress, and Sibson coordinates in [Gillette et al., AiCM, to appear], we prove interpolation error estimates for the mean value coordinates on convex polygons suitable for standard finite element analysis. Our analysis is based on providing a uniform bound on the gradient of the mean value functions for all convex polygons of diameter one satisfying certain simple geometric restrictions. This work makes rigorous an observed practical advantage of the mean value coordinates: unlike Wachspress coordinates, the gradient of the mean value coordinates does not become large as interior angles of the polygon approach π. PMID:24027379

  18. Interpolation Error Estimates for Mean Value Coordinates over Convex Polygons.

    PubMed

    Rand, Alexander; Gillette, Andrew; Bajaj, Chandrajit

    2013-08-01

    In a similar fashion to estimates shown for Harmonic, Wachspress, and Sibson coordinates in [Gillette et al., AiCM, to appear], we prove interpolation error estimates for the mean value coordinates on convex polygons suitable for standard finite element analysis. Our analysis is based on providing a uniform bound on the gradient of the mean value functions for all convex polygons of diameter one satisfying certain simple geometric restrictions. This work makes rigorous an observed practical advantage of the mean value coordinates: unlike Wachspress coordinates, the gradient of the mean value coordinates does not become large as interior angles of the polygon approach π.

  19. A Convex Approach to Fault Tolerant Control

    NASA Technical Reports Server (NTRS)

    Maghami, Peiman G.; Cox, David E.; Bauer, Frank (Technical Monitor)

    2002-01-01

    The design of control laws for dynamic systems with the potential for actuator failures is considered in this work. The use of Linear Matrix Inequalities allows more freedom in controller design criteria than typically available with robust control. This work proposes an extension of fault-scheduled control design techniques that can find a fixed controller with provable performance over a set of plants. Through convexity of the objective function, performance bounds on this set of plants implies performance bounds on a range of systems defined by a convex hull. This is used to incorporate performance bounds for a variety of soft and hard failures into the control design problem.

  20. Density of convex intersections and applications

    PubMed Central

    Rautenberg, C. N.; Rösel, S.

    2017-01-01

    In this paper, we address density properties of intersections of convex sets in several function spaces. Using the concept of Γ-convergence, it is shown in a general framework how these density issues naturally arise from the regularization, discretization or dualization of constrained optimization problems and from perturbed variational inequalities. A variety of density results (and counterexamples) for pointwise constraints in Sobolev spaces are presented and the corresponding regularity requirements on the upper bound are identified. The results are further discussed in the context of finite-element discretizations of sets associated with convex constraints. Finally, two applications are provided, which include elasto-plasticity and image restoration problems. PMID:28989301

  1. Reducing the duality gap in partially convex programming

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Correa, R.

    1994-12-31

    We consider the non-linear minimization program α = min_{z∈D, x∈C} {f_0(z, x) : f_i(z, x) ≤ 0, i ∈ {1, …, m}}, where the f_i(z, ·) are convex functions, C is convex and D is compact. Following Ben-Tal, Eiger and Gershowitz we prove the existence of a partial dual program whose optimum is arbitrarily close to α. The idea corresponds to the branching principle in branch-and-bound methods. We describe an algorithm of this kind for obtaining the desired partial dual.

  2. Probability Distributions for Random Quantum Operations

    NASA Astrophysics Data System (ADS)

    Schultz, Kevin

    Motivated by uncertainty quantification and inference of quantum information systems, in this work we draw connections between the notions of random quantum states and operations in quantum information with probability distributions commonly encountered in the field of orientation statistics. This approach identifies natural sample spaces and probability distributions upon these spaces that can be used in the analysis, simulation, and inference of quantum information systems. The theory of exponential families on Stiefel manifolds provides the appropriate generalization to the classical case. Furthermore, this viewpoint motivates a number of additional questions into the convex geometry of quantum operations relative to both the differential geometry of Stiefel manifolds as well as the information geometry of exponential families defined upon them. In particular, we draw on results from convex geometry to characterize which quantum operations can be represented as the average of a random quantum operation. This project was supported by the Intelligence Advanced Research Projects Activity via Department of Interior National Business Center Contract Number 2012-12050800010.
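
    For numerical experiments with such distributions, a Haar-distributed random unitary (the unitary-group case; rectangular slices give random points on a Stiefel manifold) can be sampled by QR-factoring a complex Ginibre matrix and correcting the phases; this is a standard construction, not necessarily the parametrization used in the work above.

        import numpy as np

        def haar_unitary(n, rng):
            """Haar-distributed n x n unitary via QR of a complex Ginibre matrix."""
            z = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) / np.sqrt(2)
            q, r = np.linalg.qr(z)
            ph = np.diag(r) / np.abs(np.diag(r))   # fix QR's phase ambiguity
            return q * ph                          # multiply column j by ph[j]

        rng = np.random.default_rng(42)
        u = haar_unitary(4, rng)
        print(np.allclose(u.conj().T @ u, np.eye(4)))   # unitarity check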

  3. Extremal edges: a powerful cue to depth perception and figure-ground organization.

    PubMed

    Palmer, Stephen E; Ghose, Tandra

    2008-01-01

    Extremal edges (EEs) are projections of viewpoint-specific horizons of self-occlusion on smooth convex surfaces. An ecological analysis of viewpoint constraints suggests that an EE surface is likely to be closer to the observer than the non-EE surface on the other side of the edge. In two experiments, one using shading gradients and the other using texture gradients, we demonstrated that EEs operate as strong cues to relative depth perception and figure-ground organization. Image regions with an EE along the shared border were overwhelmingly perceived as closer than either flat or equally convex surfaces without an EE along that border. A further demonstration suggests that EEs are more powerful than classical figure-ground cues, including even the joint effects of small size, convexity, and surroundedness.

  4. Convex Arrhenius plots and their interpretation

    PubMed Central

    Truhlar, Donald G.; Kohen, Amnon

    2001-01-01

    This paper draws attention to selected experiments on enzyme-catalyzed reactions that show convex Arrhenius plots, which are very rare, and points out that Tolman's interpretation of the activation energy places a fundamental model-independent constraint on any detailed explanation of these reactions. The analysis presented here shows that in such systems, the rate coefficient as a function of energy is not just increasing more slowly than expected, it is actually decreasing. This interpretation of the data provides a constraint on proposed microscopic models, i.e., it requires that any successful model of a reaction with a convex Arrhenius plot should be consistent with the microcanonical rate coefficient being a decreasing function of energy. The implications and limitations of this analysis to interpreting enzyme mechanisms are discussed. This model-independent conclusion has broad applicability to all fields of kinetics, and we also draw attention to an analogy with diffusion in metastable fluids and glasses. PMID:11158559
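
    Tolman's interpretation identifies the local activation energy with the slope of the Arrhenius plot, E_a(T) = -R d(ln k)/d(1/T), so a convex plot means E_a decreases as 1/T grows. A sketch that recovers E_a(T) from rate data by finite differences (the rate law below is toy data chosen to be convex in 1/T, not enzyme measurements):

        import numpy as np

        R = 8.314  # J / (mol K)

        def local_activation_energy(T, k):
            """Tolman-style E_a(T) = -R * d ln k / d(1/T), via finite differences."""
            x = 1.0 / np.asarray(T)        # abscissa of the Arrhenius plot
            y = np.log(np.asarray(k))
            return -R * np.gradient(y, x)  # np.gradient handles non-uniform spacing

        T = np.linspace(280.0, 320.0, 9)
        k = np.exp(25.0 - 6000.0 / T + 4.0e5 / T**2)   # ln k convex in 1/T
        print(local_activation_energy(T, k))            # E_a rises with T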

  5. The nonconvex multi-dimensional Riemann problem for Hamilton-Jacobi equations

    NASA Technical Reports Server (NTRS)

    Bardi, Martino; Osher, Stanley

    1991-01-01

    Simple inequalities are presented for the viscosity solution of a Hamilton-Jacobi equation in N space dimensions when neither the initial data nor the Hamiltonian need be convex (or concave). The initial data are uniformly Lipschitz and can be written as the sum of a convex function in a group of variables and a concave function in the remaining variables, therefore including the nonconvex Riemann problem. The inequalities become equalities wherever a 'maxmin' equals a 'minmax', and thus a representation formula for this problem is obtained, generalizing the classical Hopf formulas.
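
    In the purely convex special case, the representation reduces to the classical Hopf-Lax formula u(x,t) = min_y { u0(y) + t L((x - y)/t) }; a brute-force sketch for H(p) = |p|^2/2, whose Legendre transform is L(q) = |q|^2/2 (the grid and data are illustrative assumptions):

        import numpy as np

        def hopf_lax(u0, ys, x, t):
            """u(x,t) = min_y { u0(y) + (x - y)^2 / (2 t) } for H(p) = p^2/2,
            minimized by brute force over a grid of candidate points y."""
            return np.min(u0(ys) + (x - ys) ** 2 / (2.0 * t))

        ys = np.linspace(-5.0, 5.0, 2001)
        u0 = lambda y: np.abs(y)               # uniformly Lipschitz initial data
        print(hopf_lax(u0, ys, x=0.3, t=1.0))  # exact value here: x^2/(2t) = 0.045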

  6. Statistical estimation via convex optimization for trending and performance monitoring

    NASA Astrophysics Data System (ADS)

    Samar, Sikandar

    This thesis presents an optimization-based statistical estimation approach to find unknown trends in noisy data. A Bayesian framework is used to explicitly take into account prior information about the trends via trend models and constraints. The main focus is on convex formulation of the Bayesian estimation problem, which allows efficient computation of (globally) optimal estimates. There are two main parts of this thesis. The first part formulates trend estimation in systems described by known detailed models as a convex optimization problem. Statistically optimal estimates are then obtained by maximizing a concave log-likelihood function subject to convex constraints. We consider the problem of increasing problem dimension as more measurements become available, and introduce a moving horizon framework to enable recursive estimation of the unknown trend by solving a fixed-size convex optimization problem at each horizon. We also present a distributed estimation framework, based on the dual decomposition method, for a system formed by a network of complex sensors with local (convex) estimation. Two specific applications of the convex optimization-based Bayesian estimation approach are described in the second part of the thesis. Batch estimation for parametric diagnostics in a flight control simulation of a space launch vehicle is shown to detect incipient fault trends despite the natural masking properties of feedback in the guidance and control loops. The moving horizon approach is used to estimate time-varying fault parameters in a detailed nonlinear simulation model of an unmanned aerial vehicle. Excellent performance is demonstrated in the presence of winds and turbulence.

  7. Computation of nonparametric convex hazard estimators via profile methods.

    PubMed

    Jankowski, Hanna K; Wellner, Jon A

    2009-05-01

    This paper proposes a profile likelihood algorithm to compute the nonparametric maximum likelihood estimator of a convex hazard function. The maximisation is performed in two steps: First the support reduction algorithm is used to maximise the likelihood over all hazard functions with a given point of minimum (or antimode). Then it is shown that the profile (or partially maximised) likelihood is quasi-concave as a function of the antimode, so that a bisection algorithm can be applied to find the maximum of the profile likelihood, and hence also the global maximum. The new algorithm is illustrated using both artificial and real data, including lifetime data for Canadian males and females.

  8. Efficient convex-elastic net algorithm to solve the Euclidean traveling salesman problem.

    PubMed

    Al-Mulhem, M; Al-Maghrabi, T

    1998-01-01

    This paper describes a hybrid algorithm that combines an adaptive-type neural network algorithm and a nondeterministic iterative algorithm to solve the Euclidean traveling salesman problem (E-TSP). It begins with a brief introduction to the TSP and the E-TSP. Then, it presents the proposed algorithm with its two major components: the convex-elastic net (CEN) algorithm and the nondeterministic iterative improvement (NII) algorithm. These two algorithms are combined into the efficient convex-elastic net (ECEN) algorithm. The CEN algorithm integrates the convex-hull property and elastic net algorithm to generate an initial tour for the E-TSP. The NII algorithm uses two rearrangement operators to improve the initial tour given by the CEN algorithm. The paper presents simulation results for two instances of E-TSP: randomly generated tours and tours for well-known problems in the literature. Experimental results are given to show that the proposed algorithm can find a nearly optimal solution for the E-TSP and outperforms many similar algorithms reported in the literature. The paper concludes with the advantages of the new algorithm and possible extensions.
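
    The convex-hull property exploited by the CEN stage states that the hull cities appear in an optimal tour in their hull order. A simplified, non-neural sketch of that idea, hull initialization followed by cheapest insertion, assuming scipy is available (this is not the authors' elastic-net algorithm):

        import numpy as np
        from scipy.spatial import ConvexHull

        def hull_insertion_tour(pts):
            """Seed the tour with the convex hull, then insert each remaining
            city where it increases the tour length the least."""
            tour = list(ConvexHull(pts).vertices)      # hull order is preserved
            rest = [i for i in range(len(pts)) if i not in set(tour)]
            d = lambda a, b: np.linalg.norm(pts[a] - pts[b])
            while rest:
                best = None
                for c in rest:
                    for j in range(len(tour)):
                        a, b = tour[j], tour[(j + 1) % len(tour)]
                        cost = d(a, c) + d(c, b) - d(a, b)
                        if best is None or cost < best[0]:
                            best = (cost, c, j + 1)
                _, c, pos = best
                tour.insert(pos, c)
                rest.remove(c)
            return tour

        pts = np.random.default_rng(7).random((30, 2))
        print(hull_insertion_tour(pts))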

  9. Adaptive terminal sliding mode control for hypersonic flight vehicles with strictly lower convex function based nonlinear disturbance observer.

    PubMed

    Wu, Yun-Jie; Zuo, Jing-Xing; Sun, Liang-Hua

    2017-11-01

    In this paper, the altitude and velocity tracking control of a generic hypersonic flight vehicle (HFV) is considered. A novel adaptive terminal sliding mode controller (ATSMC) with strictly lower convex function based nonlinear disturbance observer (SDOB) is proposed for the longitudinal dynamics of HFV in presence of both parametric uncertainties and external disturbances. First, for the sake of enhancing the anti-interference capability, SDOB is presented to estimate and compensate the equivalent disturbances by introducing a strictly lower convex function. Next, the SDOB based ATSMC (SDOB-ATSMC) is proposed to guarantee that the system outputs track the reference trajectory. Then, stability of the proposed control scheme is analyzed by the Lyapunov function method. Compared with other HFV control approaches, key novelties of SDOB-ATSMC are that a novel SDOB is proposed and drawn into the (virtual) control laws to compensate the disturbances and that several adaptive laws are used to deal with the differential explosion problem. Finally, it is illustrated by the simulation results that the new method exhibits excellent robustness and better disturbance rejection performance than the conventional approach. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.

  10. Nonconvex Nonsmooth Low Rank Minimization via Iteratively Reweighted Nuclear Norm.

    PubMed

    Lu, Canyi; Tang, Jinhui; Yan, Shuicheng; Lin, Zhouchen

    2016-02-01

    The nuclear norm is widely used as a convex surrogate of the rank function in compressive sensing for low rank matrix recovery with its applications in image recovery and signal processing. However, solving the nuclear norm-based relaxed convex problem usually leads to a suboptimal solution of the original rank minimization problem. In this paper, we propose to use a family of nonconvex surrogates of L0-norm on the singular values of a matrix to approximate the rank function. This leads to a nonconvex nonsmooth minimization problem. Then, we propose to solve the problem by an iteratively re-weighted nuclear norm (IRNN) algorithm. IRNN iteratively solves a weighted singular value thresholding problem, which has a closed form solution due to the special properties of the nonconvex surrogate functions. We also extend IRNN to solve the nonconvex problem with two or more blocks of variables. In theory, we prove that the IRNN decreases the objective function value monotonically, and any limit point is a stationary point. Extensive experiments on both synthesized data and real images demonstrate that IRNN enhances the low rank matrix recovery compared with the state-of-the-art convex algorithms.
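
    The core IRNN step is a weighted singular value thresholding whose weights are the surrogate's gradient at the current singular values. A sketch for the simplest setting, denoising min_X 0.5*||X - Y||_F^2 + lam * sum_i g(sigma_i(X)) with the log surrogate g(s) = log(1 + s/gamma); parameter values are assumptions, and the paper's general algorithm handles other surrogates and losses:

        import numpy as np

        def irnn_denoise(Y, lam=1.0, gamma=1.0, iters=50):
            """Iteratively reweighted nuclear norm for the denoising problem.
            The prox is taken at the fixed matrix Y, so every iterate shares
            Y's singular vectors and a single SVD suffices."""
            U, sy, Vt = np.linalg.svd(Y, full_matrices=False)
            s = sy.copy()
            for _ in range(iters):
                w = lam / (gamma + s)          # weights w_i = g'(sigma_i), nondecreasing in i
                s = np.maximum(sy - w, 0.0)    # weighted singular value thresholding
            return (U * s) @ Vt

        rng = np.random.default_rng(3)
        L = rng.standard_normal((20, 4)) @ rng.standard_normal((4, 20))  # rank-4 truth
        Y = L + 0.1 * rng.standard_normal((20, 20))
        print(np.linalg.matrix_rank(irnn_denoise(Y)))   # low rank is recovered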

  11. Convex central configurations for the n-body problem

    NASA Astrophysics Data System (ADS)

    Xia, Zhihong

    We give a simple proof of a classical result of MacMillan and Bartky (Trans. Amer. Math. Soc. 34 (1932) 838) which states that, for any four positive masses and any assigned order, there is a convex planar central configuration. Moreover, we show that the central configurations we find correspond to local minima of the potential function with fixed moment of inertia. This allows us to show that there are at least six local minimum central configurations for the planar four-body problem. We also show that for any assigned order of five masses, there is at least one convex spatial central configuration of local minimum type. Our method also applies to some other cases.

  12. Trade-off between reservoir yield and evaporation losses as a function of lake morphology in semi-arid Brazil.

    PubMed

    Campos, José N B; Lima, Iran E; Studart, Ticiana M C; Nascimento, Luiz S V

    2016-05-31

    This study investigates the relationships between yield and evaporation as a function of lake morphology in semi-arid Brazil. First, a new methodology was proposed to classify the morphology of 40 reservoirs in the Ceará State, with storage capacities ranging from approximately 5 to 4500 hm3. Then, Monte Carlo simulations were conducted to study the effect of reservoir morphology (including real and simplified conical forms) on the water storage process at different reliability levels. The reservoirs were categorized as convex (60.0%), slightly convex (27.5%) or linear (12.5%). When the conical approximation was used instead of the real lake form, a trade-off occurred between reservoir yield and evaporation losses, with different trends for the convex, slightly convex and linear reservoirs. Using the conical approximation, the water yield prediction errors reached approximately 5% of the mean annual inflow, which is negligible for large reservoirs. However, for smaller reservoirs, this error became important. Therefore, this paper presents a new procedure for correcting the yield-evaporation relationships that were obtained by assuming a conical approximation rather than the real reservoir morphology. The combination of this correction with the Regulation Triangle Diagram is useful for rapidly and objectively predicting reservoir yield and evaporation losses in semi-arid environments.

  13. Interactive Reference Point Procedure Based on the Conic Scalarizing Function

    PubMed Central

    2014-01-01

    In multiobjective optimization methods, multiple conflicting objectives are typically converted into a single objective optimization problem with the help of scalarizing functions. The conic scalarizing function is a general characterization of Benson proper efficient solutions of non-convex multiobjective problems in terms of saddle points of scalar Lagrangian functions. This approach preserves convexity. The conic scalarizing function, as a part of a posteriori or a priori methods, has successfully been applied to several real-life problems. In this paper, we propose a conic scalarizing function based interactive reference point procedure where the decision maker actively takes part in the solution process and directs the search according to her or his preferences. An algorithmic framework for the interactive solution of multiple objective optimization problems is presented and is utilized for solving some illustrative examples. PMID:24723795

  14. AUC-based biomarker ensemble with an application on gene scores predicting low bone mineral density.

    PubMed

    Zhao, X G; Dai, W; Li, Y; Tian, L

    2011-11-01

    The area under the receiver operating characteristic (ROC) curve (AUC), long regarded as a 'golden' measure for the predictiveness of a continuous score, has propelled the need to develop AUC-based predictors. However, AUC-based ensemble methods are rather scant, largely due to the fact that the associated objective function is neither continuous nor concave. Indeed, there is no reliable numerical algorithm identifying an optimal combination of a set of biomarkers to maximize the AUC, especially when the number of biomarkers is large. We propose a novel AUC-based statistical ensemble method for combining multiple biomarkers to differentiate a binary response of interest. Specifically, we propose to replace the non-continuous and non-convex AUC objective function by a convex surrogate loss function, whose minimizer can be efficiently identified. With the established framework, the lasso and other regularization techniques enable feature selection. Extensive simulations have demonstrated the superiority of the new methods over the existing methods. The proposal has been applied to a gene expression dataset to construct gene expression scores to differentiate elderly women with low bone mineral density (BMD) from those with normal BMD. The AUCs of the resulting scores in the independent test dataset have been satisfactory. Aiming to directly maximize the AUC, the proposed AUC-based ensemble method provides an efficient means of generating a stable combination of multiple biomarkers, which is especially useful under high-dimensional settings. lutian@stanford.edu. Supplementary data are available at Bioinformatics online.
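
    The pivotal device, replacing the discontinuous pairwise AUC indicator by a convex surrogate on score differences, can be sketched with a logistic surrogate and plain gradient descent (an illustrative stand-in, not the paper's exact loss, solver, or lasso-regularized version):

        import numpy as np

        def auc_surrogate_fit(X, y, lr=0.5, iters=500):
            """Fit a linear score w.x by minimizing a convex surrogate of 1 - AUC:
            the mean of log(1 + exp(-w.(x_pos - x_neg))) over all pos/neg pairs."""
            D = (X[y == 1][:, None, :] - X[y == 0][None, :, :]).reshape(-1, X.shape[1])
            w = np.zeros(X.shape[1])
            for _ in range(iters):
                m = D @ w
                grad = -(D * (1.0 / (1.0 + np.exp(m)))[:, None]).mean(axis=0)
                w -= lr * grad
            return w

        rng = np.random.default_rng(0)
        X = rng.standard_normal((120, 5))
        w_true = rng.standard_normal(5)
        y = (X @ w_true + 0.5 * rng.standard_normal(120) > 0).astype(int)
        w_hat = auc_surrogate_fit(X, y)
        print(np.corrcoef(X @ w_hat, X @ w_true)[0, 1])   # fitted score tracks the truth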

  15. CONSTRUCTION OF SCALAR AND VECTOR FINITE ELEMENT FAMILIES ON POLYGONAL AND POLYHEDRAL MESHES

    PubMed Central

    GILLETTE, ANDREW; RAND, ALEXANDER; BAJAJ, CHANDRAJIT

    2016-01-01

    We combine theoretical results from polytope domain meshing, generalized barycentric coordinates, and finite element exterior calculus to construct scalar- and vector-valued basis functions for conforming finite element methods on generic convex polytope meshes in dimensions 2 and 3. Our construction recovers well-known bases for the lowest order Nédélec, Raviart-Thomas, and Brezzi-Douglas-Marini elements on simplicial meshes and generalizes the notion of Whitney forms to non-simplicial convex polygons and polyhedra. We show that our basis functions lie in the correct function space with regards to global continuity and that they reproduce the requisite polynomial differential forms described by finite element exterior calculus. We present a method to count the number of basis functions required to ensure these two key properties. PMID:28077939

  16. CONSTRUCTION OF SCALAR AND VECTOR FINITE ELEMENT FAMILIES ON POLYGONAL AND POLYHEDRAL MESHES.

    PubMed

    Gillette, Andrew; Rand, Alexander; Bajaj, Chandrajit

    2016-10-01

    We combine theoretical results from polytope domain meshing, generalized barycentric coordinates, and finite element exterior calculus to construct scalar- and vector-valued basis functions for conforming finite element methods on generic convex polytope meshes in dimensions 2 and 3. Our construction recovers well-known bases for the lowest order Nédélec, Raviart-Thomas, and Brezzi-Douglas-Marini elements on simplicial meshes and generalizes the notion of Whitney forms to non-simplicial convex polygons and polyhedra. We show that our basis functions lie in the correct function space with regards to global continuity and that they reproduce the requisite polynomial differential forms described by finite element exterior calculus. We present a method to count the number of basis functions required to ensure these two key properties.

  17. Convex Accelerated Maximum Entropy Reconstruction

    PubMed Central

    Worley, Bradley

    2016-01-01

    Maximum entropy (MaxEnt) spectral reconstruction methods provide a powerful framework for spectral estimation of nonuniformly sampled datasets. Many methods exist within this framework, usually defined based on the magnitude of a Lagrange multiplier in the MaxEnt objective function. An algorithm is presented here that utilizes accelerated first-order convex optimization techniques to rapidly and reliably reconstruct nonuniformly sampled NMR datasets using the principle of maximum entropy. This algorithm – called CAMERA for Convex Accelerated Maximum Entropy Reconstruction Algorithm – is a new approach to spectral reconstruction that exhibits fast, tunable convergence in both constant-aim and constant-lambda modes. A high-performance, open source NMR data processing tool is described that implements CAMERA, and brief comparisons to existing reconstruction methods are made on several example spectra. PMID:26894476

  18. A new convexity measure for polygons.

    PubMed

    Zunic, Jovisa; Rosin, Paul L

    2004-07-01

    Abstract-Convexity estimators are commonly used in the analysis of shape. In this paper, we define and evaluate a new convexity measure for planar regions bounded by polygons. The new convexity measure can be understood as a "boundary-based" measure and in accordance with this it is more sensitive to measured boundary defects than the so called "area-based" convexity measures. When compared with the convexity measure defined as the ratio between the Euclidean perimeter of the convex hull of the measured shape and the Euclidean perimeter of the measured shape then the new convexity measure also shows some advantages-particularly for shapes with holes. The new convexity measure has the following desirable properties: 1) the estimated convexity is always a number from (0, 1], 2) the estimated convexity is 1 if and only if the measured shape is convex, 3) there are shapes whose estimated convexity is arbitrarily close to 0, 4) the new convexity measure is invariant under similarity transformations, and 5) there is a simple and fast procedure for computing the new convexity measure.
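
    For comparison, the two classical baselines mentioned above, the area-based ratio and the perimeter ratio against the convex hull, take a few lines each; a sketch assuming the shapely package is available (the paper's own boundary-based measure is more sensitive than either):

        from shapely.geometry import Polygon

        def area_convexity(p):
            """Area-based measure: area(P) / area(convex hull of P), in (0, 1]."""
            return p.area / p.convex_hull.area

        def perimeter_convexity(p):
            """Perimeter-based measure: perimeter(hull) / perimeter(P), in (0, 1]."""
            return p.convex_hull.length / p.length

        notched = Polygon([(0, 0), (4, 0), (4, 4), (2, 1), (0, 4)])  # non-convex
        print(area_convexity(notched), perimeter_convexity(notched))
        # both measures equal 1 exactly when the shape is convex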

  19. Remodelling of the bovine placenta: Comprehensive morphological and histomorphological characterization at the late embryonic and early accelerated fetal growth stages.

    PubMed

    Estrella, Consuelo Amor S; Kind, Karen L; Derks, Anna; Xiang, Ruidong; Faulkner, Nicole; Mohrdick, Melina; Fitzsimmons, Carolyn; Kruk, Zbigniew; Grutzner, Frank; Roberts, Claire T; Hiendleder, Stefan

    2017-07-01

    Placental function impacts growth and development with lifelong consequences for performance and health. We provide novel insights into placental development in bovine, an important agricultural species and biomedical model. Concepti with defined genetics and sex were recovered from nulliparous dams managed under standardized conditions to study placental gross morphological and histomorphological parameters at the late embryo (Day48) and early accelerated fetal growth (Day153) stages. Placentome number increased 3-fold between Day48 and Day153. Placental barrier thickness was thinner, and volume of placental components, and surface areas and densities were higher at Day153 than Day48. We confirmed two placentome types, flat and convex. At Day48, there were more convex than flat placentomes, and convex placentomes had a lower proportion of maternal connective tissue (P < 0.01). However, this was reversed at Day153, where convex placentomes were lower in number and had greater volume of placental components (P < 0.01- P < 0.001) and greater surface area (P < 0.001) than flat placentomes. Importantly, embryo (r = 0.50) and fetal (r = 0.30) weight correlated with total number of convex but not flat placentomes. Extensive remodelling of the placenta increases capacity for nutrient exchange to support rapidly increasing embryo-fetal weight from Day48 to Day153. The cellular composition of convex placentomes, and exclusive relationships between convex placentome number and embryo-fetal weight, provide strong evidence for these placentomes as drivers of prenatal growth. The difference in proportion of maternal connective tissue between placentome types at Day48 suggests that this tissue plays a role in determining placentome shape, further highlighting the importance of early placental development. Copyright © 2017 Elsevier Ltd. All rights reserved.

  20. Note on a Family of Monotone Quantum Relative Entropies

    NASA Astrophysics Data System (ADS)

    Deuchert, Andreas; Hainzl, Christian; Seiringer, Robert

    2015-10-01

    Given a convex function φ and two hermitian matrices A and B, Lewin and Sabin study in (Lett Math Phys 104:691-705, 2014) the relative entropy defined by H(A,B) = Tr[φ(A) − φ(B) − φ′(B)(A − B)]. Among other things, they prove that the so-defined quantity is monotone if and only if φ′ is operator monotone. The monotonicity is then used to properly define H(A,B) for bounded self-adjoint operators acting on an infinite-dimensional Hilbert space by a limiting procedure. More precisely, for an increasing sequence of finite-dimensional projections P_n with P_n → 1 strongly, the limit lim_{n→∞} H(P_n A P_n, P_n B P_n) is shown to exist and to be independent of the sequence of projections P_n. The question whether this sequence converges to its "obvious" limit, namely H(A,B), has been left open. We answer this question in principle affirmatively and show that lim_{n→∞} H(P_n A P_n, P_n B P_n) = H(A,B). If the operators A and B are regular enough, that is, (A − B), φ(A) − φ(B) and φ′(B)(A − B) are trace-class, the identity H(A,B) = Tr[φ(A) − φ(B) − φ′(B)(A − B)] holds.

  1. Path Following in the Exact Penalty Method of Convex Programming.

    PubMed

    Zhou, Hua; Lange, Kenneth

    2015-07-01

    Classical penalty methods solve a sequence of unconstrained problems that put greater and greater stress on meeting the constraints. In the limit as the penalty constant tends to ∞, one recovers the constrained solution. In the exact penalty method, squared penalties are replaced by absolute value penalties, and the solution is recovered for a finite value of the penalty constant. In practice, the kinks in the penalty and the unknown magnitude of the penalty constant prevent wide application of the exact penalty method in nonlinear programming. In this article, we examine a strategy of path following consistent with the exact penalty method. Instead of performing optimization at a single penalty constant, we trace the solution as a continuous function of the penalty constant. Thus, path following starts at the unconstrained solution and follows the solution path as the penalty constant increases. In the process, the solution path hits, slides along, and exits from the various constraints. For quadratic programming, the solution path is piecewise linear and takes large jumps from constraint to constraint. For a general convex program, the solution path is piecewise smooth, and path following operates by numerically solving an ordinary differential equation segment by segment. Our diverse applications to a) projection onto a convex set, b) nonnegative least squares, c) quadratically constrained quadratic programming, d) geometric programming, and e) semidefinite programming illustrate the mechanics and potential of path following. The final detour to image denoising demonstrates the relevance of path following to regularized estimation in inverse problems. In regularized estimation, one follows the solution path as the penalty constant decreases from a large value.

  2. Path Following in the Exact Penalty Method of Convex Programming

    PubMed Central

    Zhou, Hua; Lange, Kenneth

    2015-01-01

    Classical penalty methods solve a sequence of unconstrained problems that put greater and greater stress on meeting the constraints. In the limit as the penalty constant tends to ∞, one recovers the constrained solution. In the exact penalty method, squared penalties are replaced by absolute value penalties, and the solution is recovered for a finite value of the penalty constant. In practice, the kinks in the penalty and the unknown magnitude of the penalty constant prevent wide application of the exact penalty method in nonlinear programming. In this article, we examine a strategy of path following consistent with the exact penalty method. Instead of performing optimization at a single penalty constant, we trace the solution as a continuous function of the penalty constant. Thus, path following starts at the unconstrained solution and follows the solution path as the penalty constant increases. In the process, the solution path hits, slides along, and exits from the various constraints. For quadratic programming, the solution path is piecewise linear and takes large jumps from constraint to constraint. For a general convex program, the solution path is piecewise smooth, and path following operates by numerically solving an ordinary differential equation segment by segment. Our diverse applications to a) projection onto a convex set, b) nonnegative least squares, c) quadratically constrained quadratic programming, d) geometric programming, and e) semidefinite programming illustrate the mechanics and potential of path following. The final detour to image denoising demonstrates the relevance of path following to regularized estimation in inverse problems. In regularized estimation, one follows the solution path as the penalty constant decreases from a large value. PMID:26366044

  3. Block clustering based on difference of convex functions (DC) programming and DC algorithms.

    PubMed

    Le, Hoai Minh; Le Thi, Hoai An; Dinh, Tao Pham; Huynh, Van Ngai

    2013-10-01

    We investigate difference of convex functions (DC) programming and the DC algorithm (DCA) to solve the block clustering problem in the continuous framework, which traditionally requires solving a hard combinatorial optimization problem. DC reformulation techniques and exact penalty in DC programming are developed to build an appropriate equivalent DC program of the block clustering problem. They lead to an elegant and explicit DCA scheme for the resulting DC program. Computational experiments show the robustness and efficiency of the proposed algorithm and its superiority over standard algorithms such as two-mode K-means, two-mode fuzzy clustering, and block classification EM.
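
    The generic DCA iteration behind this scheme linearizes the concave part: writing the objective as f = g - h with g and h convex, it solves x_{k+1} in argmin_x g(x) - h'(x_k) x. A one-dimensional toy sketch (the block-clustering DC program itself is considerably more involved):

        import numpy as np

        # DC decomposition of the nonconvex f(x) = x^4/4 - x^2/2:
        #   g(x) = x^4/4 (convex),   h(x) = x^2/2 (convex),   h'(x) = x
        def dca(x0, iters=30):
            x = x0
            for _ in range(iters):
                s = x                 # subgradient of h at the current iterate
                x = np.cbrt(s)        # convex subproblem: minimize g(x) - s*x, i.e. x^3 = s
            return x

        # DCA decreases f monotonically and converges to a critical point
        print(dca(0.3), dca(-2.0))    # -> 1.0 and -1.0, the two global minimizers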

  4. Derivative-free generation and interpolation of convex Pareto optimal IMRT plans

    NASA Astrophysics Data System (ADS)

    Hoffmann, Aswin L.; Siem, Alex Y. D.; den Hertog, Dick; Kaanders, Johannes H. A. M.; Huizenga, Henk

    2006-12-01

    In inverse treatment planning for intensity-modulated radiation therapy (IMRT), beamlet intensity levels in fluence maps of high-energy photon beams are optimized. Treatment plan evaluation criteria are used as objective functions to steer the optimization process. Fluence map optimization can be considered a multi-objective optimization problem, for which a set of Pareto optimal solutions exists: the Pareto efficient frontier (PEF). In this paper, a constrained optimization method is pursued to iteratively estimate the PEF up to some predefined error. We use the property that the PEF is convex for a convex optimization problem to construct piecewise-linear upper and lower bounds to approximate the PEF from a small initial set of Pareto optimal plans. A derivative-free Sandwich algorithm is presented in which these bounds are used with three strategies to determine the location of the next Pareto optimal solution such that the uncertainty in the estimated PEF is maximally reduced. We show that an intelligent initial solution for a new Pareto optimal plan can be obtained by interpolation of fluence maps from neighbouring Pareto optimal plans. The method has been applied to a simplified clinical test case using two convex objective functions to map the trade-off between tumour dose heterogeneity and critical organ sparing. All three strategies produce representative estimates of the PEF. The new algorithm is particularly suitable for dynamic generation of Pareto optimal plans in interactive treatment planning.

  5. Hessian-based norm regularization for image restoration with biomedical applications.

    PubMed

    Lefkimmiatis, Stamatios; Bourquard, Aurélien; Unser, Michael

    2012-03-01

    We present nonquadratic Hessian-based regularization methods that can be effectively used for image restoration problems in a variational framework. Motivated by the great success of the total-variation (TV) functional, we extend it to also include second-order differential operators. Specifically, we derive second-order regularizers that involve matrix norms of the Hessian operator. The definition of these functionals is based on an alternative interpretation of TV that relies on mixed norms of directional derivatives. We show that the resulting regularizers retain some of the most favorable properties of TV, i.e., convexity, homogeneity, rotation, and translation invariance, while dealing effectively with the staircase effect. We further develop an efficient minimization scheme for the corresponding objective functions. The proposed algorithm is of the iteratively reweighted least-square type and results from a majorization-minimization approach. It relies on a problem-specific preconditioned conjugate gradient method, which makes the overall minimization scheme very attractive since it can be applied effectively to large images in a reasonable computational time. We validate the overall proposed regularization framework through deblurring experiments under additive Gaussian noise on standard and biomedical images.

  6. Meshes optimized for discrete exterior calculus (DEC).

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mousley, Sarah C.; Deakin, Michael; Knupp, Patrick

    We study the optimization of an energy function used by the meshing community to measure and improve mesh quality. This energy is non-traditional because it is dependent on both the primal triangulation and its dual Voronoi (power) diagram. The energy is a measure of the mesh's quality for usage in Discrete Exterior Calculus (DEC), a method for numerically solving PDEs. In DEC, the PDE domain is triangulated and this mesh is used to obtain discrete approximations of the continuous operators in the PDE. The energy of a mesh gives an upper bound on the error of the discrete diagonal approximation of the Hodge star operator. In practice, one begins with an initial mesh and then makes adjustments to produce a mesh of lower energy. However, we have discovered several shortcomings in directly optimizing this energy, e.g. its non-convexity, and we show that the search for an optimized mesh may lead to mesh inversion (malformed triangles). We propose a new energy function to address some of these issues.

  7. Nonparametric instrumental regression with non-convex constraints

    NASA Astrophysics Data System (ADS)

    Grasmair, M.; Scherzer, O.; Vanhems, A.

    2013-03-01

    This paper considers the nonparametric regression model with an additive error that is dependent on the explanatory variables. As is common in empirical studies in epidemiology and economics, it also supposes that valid instrumental variables are observed. A classical example in microeconomics considers the consumer demand function as a function of the price of goods and the income, both variables often considered as endogenous. In this framework, the economic theory also imposes shape restrictions on the demand function, such as integrability conditions. Motivated by this illustration in microeconomics, we study an estimator of a nonparametric constrained regression function using instrumental variables by means of Tikhonov regularization. We derive rates of convergence for the regularized model both in a deterministic and stochastic setting under the assumption that the true regression function satisfies a projected source condition including, because of the non-convexity of the imposed constraints, an additional smallness condition.

  8. Iterative Potts and Blake–Zisserman minimization for the recovery of functions with discontinuities from indirect measurements

    PubMed Central

    Weinmann, Andreas; Storath, Martin

    2015-01-01

    Signals with discontinuities appear in many problems in the applied sciences ranging from mechanics, electrical engineering to biology and medicine. The concrete data acquired are typically discrete, indirect and noisy measurements of some quantities describing the signal under consideration. The task is to restore the signal and, in particular, the discontinuities. In this respect, classical methods perform rather poorly, whereas non-convex non-smooth variational methods seem to be the correct choice. Examples are methods based on Mumford–Shah and piecewise constant Mumford–Shah functionals and discretized versions which are known as Blake–Zisserman and Potts functionals. Owing to their non-convexity, minimization of such functionals is challenging. In this paper, we propose a new iterative minimization strategy for Blake–Zisserman as well as Potts functionals and a related jump-sparsity problem dealing with indirect, noisy measurements. We provide a convergence analysis and underpin our findings with numerical experiments. PMID:27547074

  9. Pricing of Water Resources With Depletable Externality: The Effects of Pollution Charges

    NASA Astrophysics Data System (ADS)

    Kitabatake, Yoshifusa

    1990-04-01

    With an abstraction of a real-world situation, the paper views water resources as a depletable capital asset which yields a stream of services such as water supply and the assimilation of pollution discharge. The concept of the concave or convex water resource depletion function is then introduced and applied to a general two-sector, three-factor model. The main theoretical contribution is to prove that when the water resource depletion function is a concave rather than a convex function of pollution, it is more likely that gross regional income will increase with a higher pollution charge policy. The concavity of the function is meant to imply that with an increase in pollution released, the ability of supplying water at a certain minimum quality level diminishes faster and faster. A numerical example is also provided.

  10. String-averaging incremental subgradients for constrained convex optimization with applications to reconstruction of tomographic images

    NASA Astrophysics Data System (ADS)

    Massambone de Oliveira, Rafael; Salomão Helou, Elias; Fontoura Costa, Eduardo

    2016-11-01

    We present a method for non-smooth convex minimization which is based on subgradient directions and string-averaging techniques. In this approach, the set of available data is split into sequences (strings) and a given iterate is processed independently along each string, possibly in parallel, by an incremental subgradient method (ISM). The end-points of all strings are averaged to form the next iterate. The method is useful for solving sparse and large-scale non-smooth convex optimization problems, such as those arising in tomographic imaging. A convergence analysis is provided under realistic, standard conditions. Numerical tests are performed in a tomographic image reconstruction application, showing good performance for the convergence speed when measured as the decrease ratio of the objective function, in comparison to classical ISM.
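
    A minimal sketch of the string-averaging idea on a toy non-smooth convex problem; the strings, step sizes, and plain (unweighted) averaging below are simplifying assumptions rather than the paper's exact scheme:

        import numpy as np

        def sa_ism(f_grads, x0, strings, steps=2000, a0=2.0):
            """min sum_i f_i(x): run an incremental subgradient pass along each
            string (independently, so the strings could be processed in parallel),
            then average the strings' end-points to form the next iterate."""
            x = x0.copy()
            for k in range(steps):
                t = a0 / (k + 1)                    # diminishing step size
                ends = []
                for string in strings:
                    y = x.copy()
                    for i in string:                # incremental pass along the string
                        y -= t * f_grads[i](y)
                    ends.append(y)
                x = np.mean(ends, axis=0)           # string averaging
            return x

        # toy problem: min sum_i |x - c_i|, minimized at the median of c
        c = [0.0, 1.0, 2.0, 7.0, 9.0]
        f_grads = [lambda x, ci=ci: np.sign(x - ci) for ci in c]
        print(sa_ism(f_grads, np.array([10.0]), strings=[[0, 1, 2], [3, 4]]))  # near 2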

  11. Convergence of damped inertial dynamics governed by regularized maximally monotone operators

    NASA Astrophysics Data System (ADS)

    Attouch, Hedy; Cabot, Alexandre

    2018-06-01

    In a Hilbert space setting, we study the asymptotic behavior, as time t goes to infinity, of the trajectories of a second-order differential equation governed by the Yosida regularization of a maximally monotone operator with time-varying positive index λ(t). The dissipative and convergence properties are attached to the presence of a viscous damping term with positive coefficient γ(t). A suitable tuning of the parameters γ(t) and λ(t) makes it possible to prove the weak convergence of the trajectories towards zeros of the operator. When the operator is the subdifferential of a closed convex proper function, we estimate the rate of convergence of the values. These results are in line with the recent articles by Attouch-Cabot [3] and Attouch-Peypouquet [8]. In this last paper, the authors considered the case γ(t) = α/t, which is naturally linked to Nesterov's accelerated method. We unify, and often improve, the results already present in the literature.

  12. Characteristic Functional of a Probability Measure Absolutely Continuous with Respect to a Gaussian Radon Measure

    DTIC Science & Technology

    1984-08-01

    Sato, Hiroshi

    Let μ and μ₁ be probability measures on a locally convex Hausdorff real topological linear space E. C.R. Baker [1] posed the …

  13. Explicit finite difference predictor and convex corrector with applications to hyperbolic partial differential equations

    NASA Technical Reports Server (NTRS)

    Dey, C.; Dey, S. K.

    1983-01-01

    An explicit finite difference scheme consisting of a predictor and a corrector has been developed and applied to solve some hyperbolic partial differential equations (PDEs). The corrector is a convex-type function applied at each time level and at each mesh point. It contains a parameter that can be chosen so that, even for larger time steps, the algorithm remains stable and converges quickly to the steady-state solution. Some examples are given.
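
    A hedged sketch of one plausible reading of the scheme, under stated assumptions: a first-order upwind predictor for linear advection followed by a convex corrector that blends the predictor with its neighbour average through a parameter theta; the corrector used in the paper may differ in form.

        import numpy as np

        a, L, nx = 1.0, 1.0, 200
        dx = L / nx
        dt = 1.2 * dx / a          # deliberately above the usual upwind CFL limit
        theta = 0.7                # convex corrector parameter, 0 < theta <= 1

        x = np.linspace(0.0, L, nx, endpoint=False)
        u = np.exp(-200.0 * (x - 0.5) ** 2)      # initial profile, periodic BC

        for _ in range(100):
            # explicit upwind predictor for u_t + a u_x = 0
            pred = u - a * dt / dx * (u - np.roll(u, 1))
            # convex corrector: blend of the predictor and its neighbour average,
            # adding enough dissipation to keep the larger time step stable
            u = theta * pred + (1 - theta) * 0.5 * (np.roll(pred, 1) + np.roll(pred, -1))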

  14. A sharp lower bound for the sum of a sine series with convex coefficients

    NASA Astrophysics Data System (ADS)

    Solodov, A. P.

    2016-12-01

    The sum of a sine series g(\mathbf b,x)=\sum_{k=1}^\infty b_k\sin kx with coefficients forming a convex sequence \mathbf b is known to be positive on the interval (0,\pi). Its values near zero are conventionally evaluated using the Salem function v(\mathbf b,x)=x\sum_{k=1}^{m(x)} kb_k, m(x)=[\pi/x]. In this paper it is proved that 2\pi^{-2}v(\mathbf b,x) is not a minorant for g(\mathbf b,x). The modified Salem function v_0(\mathbf b,x)=x\bigl(\sum_{k=1}^{m(x)-1} kb_k+\tfrac{1}{2}m(x)b_{m(x)}\bigr) is shown to satisfy the lower bound g(\mathbf b,x)>2\pi^{-2}v_0(\mathbf b,x) in some right neighbourhood of zero. This estimate is shown to be sharp on the class of convex sequences \mathbf b. Moreover, the upper bound for g(\mathbf b,x) is refined on the class of monotone sequences \mathbf b. Bibliography: 11 titles.

  15. The Backscattering Phase Function for a Sphere with a Two-Scale Relief of Rough Surface

    NASA Astrophysics Data System (ADS)

    Klass, E. V.

    2017-12-01

    The backscattering of light from spherical surfaces characterized by one- and two-scale roughness reliefs has been investigated. The analysis is performed using the three-dimensional Monte Carlo program POKS-RG (geometrical-optics approximation), which makes it possible to take into account the roughness of the objects under study by introducing local geometries of different levels. The geometric module of the program describes objects by equations of second-order surfaces. One-scale roughness is modeled as an ensemble of geometric figures (convex or concave halves of ellipsoids or cones). Two-scale roughness is modeled by convex halves of ellipsoids whose surfaces contain ellipsoidal pores. It is shown that a spherical surface with one-scale convex inhomogeneities has a flatter backscattering phase function than a surface with concave inhomogeneities (pores). For a sphere with two-scale roughness, the backscattering intensity is found to be determined mostly by the lower-level inhomogeneities. The influence of roughness on the backscattering from different spatial regions of the spherical surface is analyzed.

  16. Global stability of plane Couette flow beyond the energy stability limit

    NASA Astrophysics Data System (ADS)

    Fuentes, Federico; Goluskin, David

    2017-11-01

    This talk will present computations verifying that the laminar state of plane Couette flow is nonlinearly stable to all perturbations. The Reynolds numbers up to which this global stability is verified are larger than those at which stability can be proven by the energy method, the typical method for demonstrating nonlinear stability of a fluid flow. This improvement is achieved by constructing Lyapunov functions that are more general than the energy. These functions are not restricted to being quadratic, and they are allowed to depend explicitly on the spectrum of the velocity field in the eigenbasis of the energy stability operator. The optimal choice of such a Lyapunov function is a convex optimization problem, and it can be constructed with computer assistance by solving a semidefinite program. This general method will be described in a companion talk by David Goluskin; the present talk focuses on its application to plane Couette flow.
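
    As a toy analogue of the semidefinite-programming step, the snippet below certifies stability of a small linear system by finding a quadratic Lyapunov function with an off-the-shelf SDP modeler (cvxpy is assumed available; the Lyapunov functions in the talk are far more general than quadratics).

        import cvxpy as cp
        import numpy as np

        # For dx/dt = A x, V(x) = x^T P x certifies stability if
        # P is positive definite and A^T P + P A is negative definite.
        A = np.array([[0.0, 1.0],
                      [-2.0, -0.5]])
        n = A.shape[0]
        P = cp.Variable((n, n), symmetric=True)
        eps = 1e-6
        constraints = [P >> eps * np.eye(n),
                       A.T @ P + P @ A << -eps * np.eye(n)]
        prob = cp.Problem(cp.Minimize(0), constraints)   # pure feasibility SDP
        prob.solve()
        print(prob.status, P.value)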

  17. Generalized Differential Calculus and Applications to Optimization

    NASA Astrophysics Data System (ADS)

    Rector, Robert Blake Hayden

    This thesis contains contributions in three areas: the theory of generalized calculus, numerical algorithms for operations research, and applications of optimization to problems in modern electric power systems. A geometric approach is used to advance the theory and tools used for studying generalized notions of derivatives for nonsmooth functions. These advances specifically pertain to methods for calculating subdifferentials and to expanding our understanding of a certain notion of derivative of set-valued maps, called the coderivative, in infinite dimensions. A strong understanding of the subdifferential is essential for numerical optimization algorithms, which are developed and applied to nonsmooth problems in operations research, including non-convex problems. Finally, an optimization framework is applied to solve a problem in electric power systems involving a smart solar inverter and battery storage system providing energy and ancillary services to the grid.

  18. Method and system using power modulation for maskless vapor deposition of spatially graded thin film and multilayer coatings with atomic-level precision and accuracy

    DOEpatents

    Montcalm, Claude [Livermore, CA; Folta, James Allen [Livermore, CA; Tan, Swie-In [San Jose, CA; Reiss, Ira [New City, NY

    2002-07-30

    A method and system for producing a film (preferably a thin film with highly uniform or highly accurate custom graded thickness) on a flat or graded substrate (such as concave or convex optics), by sweeping the substrate across a vapor deposition source operated with time-varying flux distribution. In preferred embodiments, the source is operated with time-varying power applied thereto during each sweep of the substrate to achieve the time-varying flux distribution as a function of time. A user selects a source flux modulation recipe for achieving a predetermined desired thickness profile of the deposited film. The method relies on precise modulation of the deposition flux to which a substrate is exposed to provide a desired coating thickness distribution.

  19. L1-2 minimization for exact and stable seismic attenuation compensation

    NASA Astrophysics Data System (ADS)

    Wang, Yufeng; Ma, Xiong; Zhou, Hui; Chen, Yangkang

    2018-06-01

    Frequency-dependent amplitude absorption and phase velocity dispersion are typically linked by the causality-imposed Kramers-Kronig relations, and they inevitably degrade the quality of seismic data. Seismic attenuation compensation is an important processing approach for enhancing signal resolution and fidelity, which can be performed on either pre-stack or post-stack data so as to mitigate the amplitude absorption and phase dispersion effects resulting from the intrinsic anelasticity of subsurface media. Inversion-based compensation with an L1 norm constraint, motivated by the sparsity of the reflectivity series, enjoys better stability than traditional inverse Q filtering. However, constrained L1 minimization, serving as the convex relaxation of the literal L0 sparsity count, may not give the sparsest solution when the kernel matrix is severely ill conditioned. Recently, non-convex metrics for compressed sensing have attracted considerable research interest. In this paper, we propose a nearly unbiased approximation of the vector sparsity, denoted L1-2 minimization, for exact and stable seismic attenuation compensation. The non-convex L1-2 penalty can be handled by the difference-of-convex algorithm, which reduces the problem to a sequence of convex subproblems; each subproblem can be solved efficiently by the alternating direction method of multipliers. The superior performance of the proposed compensation scheme based on the L1-2 metric over the conventional L1 penalty is further demonstrated by both synthetic and field examples.
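
    A hedged sketch of the difference-of-convex outer loop for the L1-2 penalty, under stated assumptions: for brevity the convex subproblem is solved here by ISTA rather than by the alternating direction method of multipliers used in the paper, and all names are illustrative.

        import numpy as np

        def ista(A, b, lam, w, x0, L, iters=200):
            # Solves min 0.5||Ax-b||^2 + lam*||x||_1 - w^T x by proximal gradient.
            # L is an upper bound on the Lipschitz constant of the smooth part.
            x = x0.copy()
            for _ in range(iters):
                z = x - (A.T @ (A @ x - b) - w) / L
                x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
            return x

        def l1_minus_l2(A, b, lam, outer=20):
            # DCA for min 0.5||Ax-b||^2 + lam*(||x||_1 - ||x||_2):
            # linearize the concave part -lam*||x||_2 around the current iterate.
            x = np.zeros(A.shape[1])
            L = np.linalg.norm(A, 2) ** 2
            for _ in range(outer):
                nrm = np.linalg.norm(x)
                w = lam * x / nrm if nrm > 0 else np.zeros_like(x)
                x = ista(A, b, lam, w, x, L)
            return x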

  20. Some methods for achieving more efficient performance of fuel assemblies

    NASA Astrophysics Data System (ADS)

    Boltenko, E. A.

    2014-07-01

    More efficient operation of reactor plant fuel assemblies can be achieved through the use of new technical solutions aimed at obtaining more uniform distribution of coolant over the fuel assembly section, more intense heat removal on convex heat-transfer surfaces, and higher values of departure from nucleate boiling ratio (DNBR). Technical solutions using which it is possible to obtain more intense heat removal on convex heat-transfer surfaces and higher DNBR values in reactor plant fuel assemblies are considered. An alternative heat removal arrangement is described using which it is possible to obtain a significantly higher power density in a reactor plant and essentially lower maximal fuel rod temperature.

  1. Formulation of image fusion as a constrained least squares optimization problem

    PubMed Central

    Dwork, Nicholas; Lasry, Eric M.; Pauly, John M.; Balbás, Jorge

    2017-01-01

    Abstract. Fusing a lower resolution color image with a higher resolution monochrome image is a common practice in medical imaging. By incorporating spatial context and/or improving the signal-to-noise ratio, it provides clinicians with a single frame of the most complete information for diagnosis. In this paper, image fusion is formulated as a convex optimization problem that avoids image decomposition and permits operations at the pixel level. This results in a highly efficient and embarrassingly parallelizable algorithm based on widely available robust and simple numerical methods that realizes the fused image as the global minimizer of the convex optimization problem. PMID:28331885

  2. Monotone viable trajectories for functional differential inclusions

    NASA Astrophysics Data System (ADS)

    Haddad, Georges

    This paper is a study of functional differential inclusions with memory, which represent the multivalued version of retarded functional differential equations. The main result gives a necessary and sufficient condition ensuring the existence of viable trajectories, that is, trajectories remaining in a given nonempty closed convex set defined by the constraints the system must satisfy to be viable. Some motivations for this paper can be found in control theory, where F(t, φ) = {f(t, φ, u)}_{u∈U} is the set of possible velocities of the system at time t, depending on the past history represented by the function φ and on a control u ranging over a set U of controls. Other motivations can be found in planning procedures in microeconomics and in biological evolution, where problems with memory do effectively appear in a multivalued version. All these models require viability constraints represented by a closed convex set.

  3. Non-convex Statistical Optimization for Sparse Tensor Graphical Model

    PubMed Central

    Sun, Wei; Wang, Zhaoran; Liu, Han; Cheng, Guang

    2016-01-01

    We consider the estimation of sparse graphical models that characterize the dependency structure of high-dimensional tensor-valued data. To facilitate the estimation of the precision matrix corresponding to each way of the tensor, we assume the data follow a tensor normal distribution whose covariance has a Kronecker product structure. The penalized maximum likelihood estimation of this model involves minimizing a non-convex objective function. In spite of the non-convexity of this estimation problem, we prove that an alternating minimization algorithm, which iteratively estimates each sparse precision matrix while fixing the others, attains an estimator with the optimal statistical rate of convergence as well as consistent graph recovery. Notably, such an estimator achieves estimation consistency with only one tensor sample, a phenomenon not observed in previous work. Our theoretical results are backed by thorough numerical studies. PMID:28316459

  4. Distance majorization and its applications.

    PubMed

    Chi, Eric C; Zhou, Hua; Lange, Kenneth

    2014-08-01

    The problem of minimizing a continuously differentiable convex function over an intersection of closed convex sets is ubiquitous in applied mathematics. It is particularly interesting when it is easy to project onto each separate set, but nontrivial to project onto their intersection. Algorithms based on Newton's method such as the interior point method are viable for small to medium-scale problems. However, modern applications in statistics, engineering, and machine learning are posing problems with potentially tens of thousands of parameters or more. We revisit this convex programming problem and propose an algorithm that scales well with dimensionality. Our proposal is an instance of a sequential unconstrained minimization technique and revolves around three ideas: the majorization-minimization principle, the classical penalty method for constrained optimization, and quasi-Newton acceleration of fixed-point algorithms. The performance of our distance majorization algorithms is illustrated in several applications.
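
    A minimal sketch of the distance-majorization step under stated assumptions: projecting a point onto the intersection of a ball and a halfspace, with the penalty parameter slowly increased as in the classical penalty method; the quasi-Newton acceleration described in the paper is omitted, and all names are illustrative.

        import numpy as np

        # Each MM step solves argmin_x 0.5*||x - y||^2 + (rho/2)*sum_i ||x - P_i(x_k)||^2,
        # which has the closed form used in the loop below.
        y = np.array([2.0, 2.0])
        c = np.array([1.0, 0.0])

        def proj_ball(x):                     # projection onto {||x|| <= 1}
            nrm = np.linalg.norm(x)
            return x / nrm if nrm > 1 else x

        def proj_half(x):                     # projection onto {c^T x >= 1}
            gap = c @ x - 1.0
            return x - gap * c / (c @ c) if gap < 0 else x

        x, rho = y.copy(), 1.0
        for _ in range(200):
            x = (y + rho * (proj_ball(x) + proj_half(x))) / (1.0 + 2.0 * rho)
            rho *= 1.05                       # classical penalty schedule
        print(x)                              # approx. projection of y onto the intersection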

  5. A symmetric version of the generalized alternating direction method of multipliers for two-block separable convex programming.

    PubMed

    Liu, Jing; Duan, Yongrui; Sun, Min

    2017-01-01

    This paper introduces a symmetric version of the generalized alternating direction method of multipliers for two-block separable convex programming with linear equality constraints, which inherits the superiorities of the classical alternating direction method of multipliers (ADMM), and which extends the feasible set of the relaxation factor α of the generalized ADMM to the infinite interval [Formula: see text]. Under the conditions that the objective function is convex and the solution set is nonempty, we establish the convergence results of the proposed method, including the global convergence, the worst-case [Formula: see text] convergence rate in both the ergodic and the non-ergodic senses, where k denotes the iteration counter. Numerical experiments to decode a sparse signal arising in compressed sensing are included to illustrate the efficiency of the new method.
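
    For flavor, a hedged sketch of ADMM with an over-relaxation factor alpha applied to decoding a sparse signal (lasso); the symmetric generalized ADMM of the paper also updates the multiplier twice per iteration, which this toy version omits, and all names are illustrative.

        import numpy as np

        def lasso_admm(A, b, lam, rho=1.0, alpha=1.5, iters=300):
            # ADMM for min 0.5||Ax-b||^2 + lam*||z||_1  s.t.  x = z,
            # with relaxation factor alpha (alpha = 1 recovers classical ADMM).
            m, n = A.shape
            x, z, u = np.zeros(n), np.zeros(n), np.zeros(n)
            Q = np.linalg.inv(A.T @ A + rho * np.eye(n))   # cached for every x-update
            Atb = A.T @ b
            for _ in range(iters):
                x = Q @ (Atb + rho * (z - u))
                x_hat = alpha * x + (1 - alpha) * z        # relaxation step
                z = np.sign(x_hat + u) * np.maximum(np.abs(x_hat + u) - lam / rho, 0.0)
                u = u + x_hat - z
            return z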

  6. Feature Grouping and Selection Over an Undirected Graph.

    PubMed

    Yang, Sen; Yuan, Lei; Lai, Ying-Cheng; Shen, Xiaotong; Wonka, Peter; Ye, Jieping

    2012-01-01

    High-dimensional regression/classification continues to be an important and challenging problem, especially when features are highly correlated. Feature selection, combined with additional structural information on the features, has been considered promising in promoting regression/classification performance. Graph-guided fused lasso (GFlasso) has recently been proposed to facilitate feature selection and graph structure exploitation when features exhibit certain graph structures. However, the formulation in GFlasso relies on pairwise sample correlations to perform feature grouping, which could introduce additional estimation bias. In this paper, we propose three new feature grouping and selection methods to resolve this issue. The first method employs a convex function to penalize the pairwise ℓ∞ norm of connected regression/classification coefficients, achieving simultaneous feature grouping and selection. The second method improves on the first by utilizing a non-convex function to reduce the estimation bias. The third is an extension of the second method, using a truncated ℓ1 regularization to further reduce the estimation bias. The proposed methods combine feature grouping and feature selection to enhance estimation accuracy. We employ the alternating direction method of multipliers (ADMM) and difference of convex functions (DC) programming to solve the proposed formulations. Our experimental results on synthetic data and two real datasets demonstrate the effectiveness of the proposed methods.

  7. Compact high-efficiency 100-W-level diode-side-pumped Nd:YAG laser with linearly polarized TEM00 mode output.

    PubMed

    Xu, Yi-Ting; Xu, Jia-Lin; Guo, Ya-Ding; Yang, Feng-Tu; Chen, Yan-Zhong; Xu, Jian; Xie, Shi-Yong; Bo, Yong; Peng, Qin-Jun; Cui, Dafu; Xu, Zu-Yan

    2010-08-20

    We present a compact high-efficiency and high-average-power diode-side-pumped Nd:YAG rod laser oscillator operated with a linearly polarized fundamental mode. The oscillator resonator is based on an L-shaped convex-convex cavity with an improved module and a dual-rod configuration for birefringence compensation. Under a pump power of 344 W, a linearly polarized average output power of 101.4 W at 1064 nm is obtained, which corresponds to an optical-to-optical conversion efficiency of 29.4%. The laser is operated at a repetition rate of 400 Hz with a beam quality factor of M(2)=1.14. To the best of our knowledge, this is the highest optical-to-optical efficiency for a side-pumped TEM(00) Nd:YAG rod laser oscillator with a 100-W-level output ever reported.

  8. Intraspinal canal rod migration causing late-onset paraparesis 8 years after scoliosis surgery.

    PubMed

    Obeid, Ibrahim; Vital, Jean-Marc; Aurouer, Nicolas; Hansen, Steve; Gangnet, Nicolas; Pointillart, Vincent; Gille, Olivier; Boissiere, Louis; Quraishi, Nasir A

    2016-07-01

    Complete intraspinal canal rod migration with posterior bone reconstitution has never been described in the adolescent idiopathic scoliosis (AIS) population. We present an unusual but significant delayed neurological complication after spinal instrumentation surgery. A 24-year-old woman presented with lower limb weakness (ASIA D) 8 years after posterior instrumentation from T2 to L4 for AIS. CT scan and MRI demonstrated intra-canal rod migration with complete laminar reconstitution. The C-reactive protein was slightly elevated (fluctuated between 10 and 20 mg/l). Radiographs showed the convex rod had entered the spinal canal. The patient was taken into the operating room for thoracic spinal decompression and removal of the convex rod. This Cotrel-Dubousset rod, which had been placed on the convexity of the thoracic curve had completely entered the canal from T5 to T10 and was totally covered by bone with the eroded laminae entirely healed and closed. There was no pseudarthrosis. Intra-operatively, the fusion mass was opened along the whole length of this rod and the rod carefully removed and the spinal cord decompressed. The bacteriological cultures returned positive for Propionibacterium acnes. The patient recovered fully within 2 months post-operatively. We opine that the progressive laminar erosion with intra-canal rod migration resulted from mechanical and infectious-related factors. The very low virulence of the strain of Propionibacterium acnes is probably involved in this particular presentation where the rod was trapped in the canal, owing to the quite extensive laminar reconstitution.

  9. A new corrective technique for adolescent idiopathic scoliosis: convex manipulation using 6.35 mm diameter pure titanium rod followed by concave fixation using 6.35 mm diameter titanium alloy

    PubMed Central

    2015-01-01

    Background It has been thought that corrective posterior surgery for adolescent idiopathic scoliosis (AIS) should be started on the concave side because initial convex manipulation would increase the risk of vertebral malrotation, worsening the rib hump. With the many new materials, implants, and manipulation techniques (e.g., direct vertebral rotation) now available, we hypothesized that manipulating the convex side first is no longer taboo. Methods Our technique has two major facets. (1) Curve correction is started from the convex side with a derotation maneuver and in situ bending followed by concave rod application. (2) A 6.35 mm diameter pure titanium rod is used on the convex side and a 6.35 mm diameter titanium alloy rod on the concave side. Altogether, 52 patients were divided into two groups. Group N included 40 patients (3 male, 37 female; average age 15.9 years) of Lenke type 1 (23 patients), type 2 (2), type 3 (3), type 5 (10), type 6 (2). They were treated with a new technique using 6.35 mm diameter different-stiffness titanium rods. Group C included 12 patients (all female, average age 18.8 years) of Lenke type 1 (6 patients), type 2 (3), type 3 (1), type 5 (1), type 6 (1). They were treated with conventional methods using 5.5 mm diameter titanium alloy rods. Radiographic parameters (Cobb angle/thoracic kyphosis/correction rates) and perioperative data were retrospectively collected and analyzed. Results Preoperative main Cobb angles (groups N/C) were 56.8°/60.0°, which had improved to 15.2°/17.1° at the latest follow-up. Thoracic kyphosis increased from 16.8° to 21.3° in group N and from 16.0° to 23.4° in group C. Correction rates were 73.2% in group N and 71.7% in group C. There were no significant differences for either parameter. Mean operating time, however, was significantly shorter in group N (364 min) than in group C (456 min). Conclusion We developed a new corrective surgical technique for AIS using a 6.35 mm diameter pure titanium rod initially on the convex side. Correction rates in the coronal, sagittal, and axial planes were the same as those achieved with conventional methods, but the operation time was significantly shorter. PMID:25815053

  10. Jet printing of convex and concave polymer micro-lenses.

    PubMed

    Blattmann, M; Ocker, M; Zappe, H; Seifert, A

    2015-09-21

    We describe a novel approach for fabricating customized convex as well as concave micro-lenses using substrates with a sophisticated pinning architecture and a drop-on-demand jet printer. The polymeric lens material deposited on the wafer is cured by UV irradiation, yielding lenses with high-quality surfaces. The surface shape and roughness of the cured polymer lenses are characterized by white-light interferometry. Their optical quality is demonstrated by imaging a USAF1951 test chart. The evaluated modulation transfer function is compared to Zemax simulations as a benchmark for the fabricated lenses.

  11. Temporary clamping of external carotid artery in convexity, parasagittal and temporal base meningioma.

    PubMed

    Yadav, Yad Ram; Parihar, Vijay; Agarwal, Moneet; Bhatele, Pushp Raj

    2012-01-01

    The management of intraoperative bleeding during removal of a large hypervascular meningioma is crucial for safe and efficient surgery. Preoperative embolization is the best way to reduce the vascularity of meningiomas, but this technique is not readily available, is costly, and has its own limitations. The study aimed to evaluate the use of temporary clamping of the external carotid artery to reduce blood loss and operating time during excision of large convexity, parasagittal, or temporal base meningiomas. A prospective study was conducted of 115 consecutive meningiomas of size 5 cm or more, operated on from January 2002 to December 2010. Temporary clamping of the external carotid artery was performed in 61 cases, while 51 cases were managed without clamping. There was a significant reduction in blood loss, operative time, and blood transfusion in the temporary clamping group compared with the non-clamping group. There was a stitch abscess in two patients each in the clamping and non-clamping groups. There was no scalp necrosis or mortality in either group. Temporary clamping of the external carotid artery is a safe, simple, and cost-effective alternative to embolization for the surgery of large meningiomas. This can be practiced at all centers.

  12. Scalable splitting algorithms for big-data interferometric imaging in the SKA era

    NASA Astrophysics Data System (ADS)

    Onose, Alexandru; Carrillo, Rafael E.; Repetti, Audrey; McEwen, Jason D.; Thiran, Jean-Philippe; Pesquet, Jean-Christophe; Wiaux, Yves

    2016-11-01

    In the context of next-generation radio telescopes, like the Square Kilometre Array (SKA), the efficient processing of large-scale data sets is extremely important. Convex optimization tasks under the compressive sensing framework have recently emerged and provide both enhanced image reconstruction quality and scalability to increasingly larger data sets. We focus herein mainly on scalability and propose two new convex optimization algorithmic structures able to solve the convex optimization tasks arising in radio-interferometric imaging. They rely on proximal splitting and forward-backward iterations and can be seen, by analogy with the CLEAN major-minor cycle, as running sophisticated CLEAN-like iterations in parallel in multiple data, prior, and image spaces. Both methods support any convex regularization function, in particular the well-studied ℓ1 priors promoting image sparsity in an adequate domain. Tailored for big data, they employ parallel and distributed computations to achieve scalability in terms of memory and computational requirements. One of them also exploits randomization, over data blocks at each iteration, offering further flexibility. We present simulation results showing the feasibility of the proposed methods as well as their advantages compared to state-of-the-art algorithmic solvers. Our MATLAB code is available online on GitHub.

  13. Low-rank structure learning via nonconvex heuristic recovery.

    PubMed

    Deng, Yue; Dai, Qionghai; Liu, Risheng; Zhang, Zengke; Hu, Sanqing

    2013-03-01

    In this paper, we propose a nonconvex framework to learn the essential low-rank structure from corrupted data. Different from traditional approaches, which directly utilize convex norms to measure the sparseness, our method introduces more reasonable nonconvex measurements to enhance the sparsity in both the intrinsic low-rank structure and the sparse corruptions. We introduce, respectively, how to combine the widely used ℓp norm (0 < p < 1) and the log-sum term into the framework of low-rank structure learning. Although the proposed optimization is no longer convex, it still can be effectively solved by a majorization-minimization (MM)-type algorithm, in which the nonconvex objective function is iteratively replaced by its convex surrogate, so that the nonconvex problem finally falls into the general framework of reweighted approaches. We prove that the MM-type algorithm converges to a stationary point after successive iterations. The proposed model is applied to solve two typical problems: robust principal component analysis and low-rank representation. Experimental results on low-rank structure learning demonstrate that our nonconvex heuristic methods, especially the log-sum heuristic recovery algorithm, generally perform much better than convex-norm-based methods for both data with higher rank and data with denser corruptions.

  14. Non-Convex Sparse and Low-Rank Based Robust Subspace Segmentation for Data Mining.

    PubMed

    Cheng, Wenlong; Zhao, Mingbo; Xiong, Naixue; Chui, Kwok Tai

    2017-07-15

    Parsimony, including sparsity and low-rank, has shown great importance for data mining in social networks, particularly in tasks such as segmentation and recognition. Traditionally, such modeling approaches rely on an iterative algorithm that minimizes an objective function with convex ℓ1-norm or nuclear-norm constraints. However, the results obtained by convex optimization are usually suboptimal relative to solutions of the original sparse or low-rank problems. In this paper, a novel robust subspace segmentation algorithm is proposed by integrating ℓp-norm and Schatten p-norm constraints. The affinity graph so obtained can better capture the local geometrical structure and the global information of the data. As a consequence, our algorithm is more generative, discriminative and robust. An efficient linearized alternating direction method is derived to realize our model. Extensive segmentation experiments are conducted on public datasets. The proposed algorithm is revealed to be more effective and robust compared to five existing algorithms.

  15. Distance majorization and its applications

    PubMed Central

    Chi, Eric C.; Zhou, Hua; Lange, Kenneth

    2014-01-01

    The problem of minimizing a continuously differentiable convex function over an intersection of closed convex sets is ubiquitous in applied mathematics. It is particularly interesting when it is easy to project onto each separate set, but nontrivial to project onto their intersection. Algorithms based on Newton’s method such as the interior point method are viable for small to medium-scale problems. However, modern applications in statistics, engineering, and machine learning are posing problems with potentially tens of thousands of parameters or more. We revisit this convex programming problem and propose an algorithm that scales well with dimensionality. Our proposal is an instance of a sequential unconstrained minimization technique and revolves around three ideas: the majorization-minimization principle, the classical penalty method for constrained optimization, and quasi-Newton acceleration of fixed-point algorithms. The performance of our distance majorization algorithms is illustrated in several applications. PMID:25392563

  16. H∞ control for uncertain linear system over networks with Bernoulli data dropout and actuator saturation.

    PubMed

    Yu, Jimin; Yang, Chenchen; Tang, Xiaoming; Wang, Ping

    2018-03-01

    This paper investigates the H∞ control problem for uncertain linear systems over networks with random communication data dropout and actuator saturation. The random data dropout process is modeled by a Bernoulli distributed white sequence with a known conditional probability distribution, and the actuator saturation is confined in a convex hull by introducing a group of auxiliary matrices. By constructing a quadratic Lyapunov function, effective conditions for the state-feedback-based H∞ controller and the observer-based H∞ controller are proposed in the form of non-convex matrix inequalities to take the random data dropout and actuator saturation into consideration simultaneously, and the non-convex feasibility problem is solved by applying the cone complementarity linearization (CCL) procedure. Finally, two simulation examples are given to demonstrate the effectiveness of the proposed design techniques.

  17. Weak convergence of a projection algorithm for variational inequalities in a Banach space

    NASA Astrophysics Data System (ADS)

    Iiduka, Hideaki; Takahashi, Wataru

    2008-03-01

    Let C be a nonempty, closed convex subset of a Banach space E. In this paper, motivated by Alber [Ya.I. Alber, Metric and generalized projection operators in Banach spaces: Properties and applications, in: A.G. Kartsatos (Ed.), Theory and Applications of Nonlinear Operators of Accretive and Monotone Type, in: Lecture Notes Pure Appl. Math., vol. 178, Dekker, New York, 1996, pp. 15-50], we introduce the following iterative scheme for finding a solution of the variational inequality problem for an inverse-strongly-monotone operator A in a Banach space: x_1 = x \in C and x_{n+1} = \Pi_C J^{-1}(Jx_n - \lambda_n Ax_n) for every n \geq 1, where \Pi_C is the generalized projection from E onto C, J is the duality mapping from E into E^*, and \{\lambda_n\} is a sequence of positive real numbers. Then we show a weak convergence theorem (Theorem 3.1). Finally, using this result, we consider the convex minimization problem, the complementarity problem, and the problem of finding a point u \in E satisfying 0 = Au.

  18. POLYNOMIAL AND RATIONAL APPROXIMATION OF FUNCTIONS OF SEVERAL VARIABLES WITH CONVEX DERIVATIVES IN THE L_p-METRIC (0 < p\\leqslant\\infty)

    NASA Astrophysics Data System (ADS)

    Khatamov, A.

    1995-02-01

    Let \\operatorname{Conv}_n^{(l)}(\\mathscr{G}) be the set of all functions f such that for every n-dimensional unit vector \\mathbf{e} the lth derivative in the direction of \\mathbf{e}, D^{(l)}(\\mathbf{e})f, is continuous on a convex bounded domain \\mathscr{G}\\subset\\mathbf{R}^n ( n \\geqslant 2) and convex (upwards or downwards) on the nonempty intersection of every line L\\subset\\mathbf{R}^n with the domain \\mathscr{G}, and let M^{(l)}(f,\\mathscr{G}):= \\sup \\bigl\\{\\bigl\\Vert D^{(l)}(\\mathbf{e})f\\bigr\\Ve......})}\\colon\\mathbf{e}\\in\\mathbf{R}^n,\\,\\,\\Vert\\mathbf{e}\\Vert=1\\bigr\\} < \\infty. Sharp, in the sense of order of smallness, estimates of best simultaneous polynomial approximations of the functions f\\in\\operatorname{Conv}_n^{(l)}(\\mathscr{G}) for which D^{(l)}(\\mathbf{e})f\\in\\operatorname{Lip}_K 1 for every \\mathbf{e}, and their derivatives in the metrics of L_p(\\mathscr{G}) (0 < p\\leqslant\\infty) are obtained. It is proved that the corresponding parts of these estimates are preserved for best rational approximations, on any n-dimensional parallelepiped Q, of functions f\\in\\operatorname{Conv}_n^{(l)}(Q) in the metrics of L_p(Q) (0 < p < \\infty) and it is shown that they are sharp in the sense of order of smallness for 0 < p\\leqslant1.

  19. Determining Representative Elementary Volume For Multiple Petrophysical Parameters using a Convex Hull Analysis of Digital Rock Data

    NASA Astrophysics Data System (ADS)

    Shah, S.; Gray, F.; Yang, J.; Crawshaw, J.; Boek, E.

    2016-12-01

    Advances in 3D pore-scale imaging and computational methods have allowed an exceptionally detailed quantitative and qualitative analysis of the fluid flow in complex porous media. A fundamental problem in pore-scale imaging and modelling is how to represent and model the range of scales encountered in porous media, starting from the smallest pore spaces. In this study, a novel method is presented for determining the representative elementary volume (REV) of a rock for several parameters simultaneously. We calculate the two main macroscopic petrophysical parameters, porosity and single-phase permeability, using micro CT imaging and Lattice Boltzmann (LB) simulations for 14 different porous media, including sandpacks, sandstones and carbonates. The concept of the `Convex Hull' is then applied to calculate the REV for both parameters simultaneously using a plot of the area of the convex hull as a function of the sub-volume, capturing the different scales of heterogeneity from the pore-scale imaging. The results also show that the area of the convex hull (for well-chosen parameters such as the log of the permeability and the porosity) decays exponentially with sub-sample size suggesting a computationally efficient way to determine the system size needed to calculate the parameters to high accuracy (small convex hull area). Finally we propose using a characteristic length such as the pore size to choose an efficient absolute voxel size for the numerical rock.
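
    A small sketch of the convex-hull diagnostic under stated assumptions: mock porosity and log-permeability estimates stand in for the lattice Boltzmann output, and scipy is assumed available. In 2-D, ConvexHull.volume is the enclosed area.

        import numpy as np
        from scipy.spatial import ConvexHull

        rng = np.random.default_rng(1)

        def hull_area(porosity, log_perm):
            # area of the convex hull of (porosity, log k) points from sub-volumes
            return ConvexHull(np.column_stack([porosity, log_perm])).volume

        # mock sub-sample estimates at increasing sub-volume edge length
        for size in [32, 64, 128, 256]:
            spread = 1.0 / size                # heterogeneity shrinks with sub-volume size
            phi = 0.2 + spread * rng.standard_normal(50)
            logk = -12.0 + 40.0 * spread * rng.standard_normal(50)
            print(size, hull_area(phi, logk))  # decays as the REV is approached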

  20. Modeling IrisCode and its variants as convex polyhedral cones and its security implications.

    PubMed

    Kong, Adams Wai-Kin

    2013-03-01

    IrisCode, developed by Daugman, in 1993, is the most influential iris recognition algorithm. A thorough understanding of IrisCode is essential, because over 100 million persons have been enrolled by this algorithm and many biometric personal identification and template protection methods have been developed based on IrisCode. This paper indicates that a template produced by IrisCode or its variants is a convex polyhedral cone in a hyperspace. Its central ray, being a rough representation of the original biometric signal, can be computed by a simple algorithm, which can often be implemented in one Matlab command line. The central ray is an expected ray and also an optimal ray of an objective function on a group of distributions. This algorithm is derived from geometric properties of a convex polyhedral cone but does not rely on any prior knowledge (e.g., iris images). The experimental results show that biometric templates, including iris and palmprint templates, produced by different recognition methods can be matched through the central rays in their convex polyhedral cones and that templates protected by a method extended from IrisCode can be broken into. These experimental results indicate that, without a thorough security analysis, convex polyhedral cone templates cannot be assumed secure. Additionally, the simplicity of the algorithm implies that even junior hackers without knowledge of advanced image processing and biometric databases can still break into protected templates and reveal relationships among templates produced by different recognition methods.

  1. Convex Optimization over Classes of Multiparticle Entanglement

    NASA Astrophysics Data System (ADS)

    Shang, Jiangwei; Gühne, Otfried

    2018-02-01

    A well-known strategy to characterize multiparticle entanglement utilizes the notion of stochastic local operations and classical communication (SLOCC), but characterizing the resulting entanglement classes is difficult. Given a multiparticle quantum state, we first show that Gilbert's algorithm can be adapted to prove separability or membership in a certain entanglement class. We then present two algorithms for convex optimization over SLOCC classes. The first algorithm uses a simple gradient approach, while the other one employs the accelerated projected-gradient method. For demonstration, the algorithms are applied to the likelihood-ratio test using experimental data on bound entanglement of a noisy four-photon Smolin state [Phys. Rev. Lett. 105, 130501 (2010), 10.1103/PhysRevLett.105.130501].
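
    A hedged sketch of the Gilbert iteration in its simplest setting, under stated assumptions: the convex set is the hull of finitely many points, so the support oracle is a minimum over rows; in the paper the oracle instead ranges over a separability or SLOCC class of quantum states, and all names here are illustrative.

        import numpy as np

        def gilbert(V, q, iters=500):
            # Gilbert / Frank-Wolfe iteration for the point of conv(V) closest to q.
            x = V[0].copy()
            for _ in range(iters):
                g = x - q                                  # gradient of 0.5*||x - q||^2
                s = V[np.argmin(V @ g)]                    # support point of conv(V)
                d = s - x
                if d @ d == 0:
                    break
                t = np.clip(-(g @ d) / (d @ d), 0.0, 1.0)  # exact line search on [0, 1]
                x = x + t * d
            return x                                       # approx. projection onto conv(V)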

  2. Design of LPV fault-tolerant controller for pitch system of wind turbine

    NASA Astrophysics Data System (ADS)

    Wu, Dinghui; Zhang, Xiaolin

    2017-07-01

    To address failures of wind turbine pitch-angle sensors, the traditional wind turbine linear parameter varying (LPV) model is transformed into a double-layer convex-polyhedron LPV model. On the basis of this model, when several sensors fail and details of the failures are difficult to obtain, each sub-controller is designed using a distributed approach and the gain-scheduling method. The final controller is obtained from all of the sub-controllers by a convex combination. The design method corrects the errors of the linear model, improves the linearity of the system, and solves the problem of multiple pitch-angle faults to ensure stable operation of the wind turbine.

  3. On the convexity of ROC curves estimated from radiological test results.

    PubMed

    Pesce, Lorenzo L; Metz, Charles E; Berbaum, Kevin S

    2010-08-01

    Although an ideal observer's receiver operating characteristic (ROC) curve must be convex, i.e., its slope must decrease monotonically, published fits to empirical data often display "hooks." Such fits sometimes are accepted on the basis of an argument that experiments are done with real, rather than ideal, observers. However, the fact that ideal observers must produce convex curves does not imply that convex curves describe only ideal observers. This article aims to identify the practical implications of nonconvex ROC curves and the conditions that can lead to empirical or fitted ROC curves that are not convex. This article views nonconvex ROC curves from historical, theoretical, and statistical perspectives, which we describe briefly. We then consider population ROC curves with various shapes and analyze the types of medical decisions that they imply. Finally, we describe how sampling variability and curve-fitting algorithms can produce ROC curve estimates that include hooks. We show that hooks in population ROC curves imply the use of an irrational decision strategy, even when the curve does not cross the chance line, and therefore are usually untenable in medical settings. Moreover, we sketch a simple approach to improve any nonconvex ROC curve by adding statistical variation to the decision process. Finally, we sketch how to test whether hooks present in ROC data are likely to have been caused by chance alone and how some hooked ROCs found in the literature can easily be explained as fitting artifacts or modeling issues. In general, ROC curve fits that show hooks should be viewed with suspicion unless other arguments justify their presence.

  4. Convex geometry of quantum resource quantification

    NASA Astrophysics Data System (ADS)

    Regula, Bartosz

    2018-01-01

    We introduce a framework unifying the mathematical characterisation of different measures of general quantum resources and allowing for a systematic way to define a variety of faithful quantifiers for any given convex quantum resource theory. The approach allows us to describe many commonly used measures such as matrix norm-based quantifiers, robustness measures, convex roof-based measures, and witness-based quantifiers together in a common formalism based on the convex geometry of the underlying sets of resource-free states. We establish easily verifiable criteria for a measure to possess desirable properties such as faithfulness and strong monotonicity under relevant free operations, and show that many quantifiers obtained in this framework indeed satisfy them for any considered quantum resource. We derive various bounds and relations between the measures, generalising and providing significantly simplified proofs of results found in the resource theories of quantum entanglement and coherence. We also prove that the quantification of resources in this framework simplifies for pure states, allowing us to obtain more easily computable forms of the considered measures, and show that many of them are in fact equal on pure states. Further, we investigate the dual formulation of resource quantifiers, which provide a characterisation of the sets of resource witnesses. We present an explicit application of the results to the resource theories of multi-level coherence, entanglement of Schmidt number k, multipartite entanglement, as well as magic states, providing insight into the quantification of the four resources by establishing novel quantitative relations and introducing new quantifiers, such as a measure of entanglement of Schmidt number k which generalises the convex roof-extended negativity, and a measure of k-coherence which generalises the …

  5. Ultrasonographic evaluation of equine fetal growth throughout gestation in normal mares using a convex transducer.

    PubMed

    Murase, Harutaka; Endo, Yoshiro; Tsuchiya, Takeru; Kotoyori, Yasumitsu; Shikichi, Mitsumori; Ito, Katsumi; Sato, Fumio; Nambo, Yasuo

    2014-07-01

    Regular ultrasound examination of the fetus has not been common in equine practice, owing to the increasing volume of the uterus caused by fetal development. The convex three-dimensional transducer is bulb-shaped and able to observe wide areas. In addition, its operation is simple, making it easy to achieve appropriate angles for various indices using a transrectal approach. The aim of this study was to measure Thoroughbred fetal growth indices throughout gestation using a convex transducer and to clarify the period over which various indices are detectable for clinical use. We demonstrated changes in fetal indices such as crown-rump length (CRL), fetal heart rate (FHR), the fetal eye and kidney, and the combined thickness of uterus and placenta (CTUP). CTUP increased from 30 weeks of gestation, and FHR peaked at 8 weeks and then decreased to term. CRL could be observed until 13 weeks thanks to the transducer's wide angle, longer than in previous reports. The fetal eye and kidney could be observed from 10 and 28 weeks, respectively, and these increased as pregnancy progressed. The present results show the advantage of transrectal examination using a convex transducer for the evaluation of normal fetal development. Although ultrasonographic examination in mid- to late gestation is not common in equine reproductive practice, our comprehensive results provide a useful basis for equine pregnancy examination.

  6. Geometric approach to segmentation and protein localization in cell culture assays.

    PubMed

    Raman, S; Maxwell, C A; Barcellos-Hoff, M H; Parvin, B

    2007-01-01

    Cell-based fluorescence imaging assays are heterogeneous and require the collection of a large number of images for detailed quantitative analysis. Complexities arise as a result of variation in spatial nonuniformity, shape, overlapping compartments, and scale (size). A new technique and methodology has been developed and tested for delineating subcellular morphology and partitioning overlapping compartments at multiple scales. This system is packaged as an integrated software platform for quantifying images that are obtained through fluorescence microscopy. The proposed methods are model based, leveraging geometric shape properties of subcellular compartments and the corresponding protein localization. From the morphological perspective, a convexity constraint is imposed to delineate and partition nuclear compartments. From the protein localization perspective, radial symmetry is imposed to localize punctate protein events at submicron resolution. The convexity constraint is imposed against boundary information, which is extracted through a combination of zero-crossing and gradient operators. If the convexity constraint fails for the boundary, then positive curvature maxima are localized along the contour and the entire blob is partitioned into disjoint convex objects representing individual nuclear compartments, by enforcing geometric constraints. Nuclear compartments provide the context for protein localization, which may be diffuse or punctate. Punctate signals are localized through iterative voting and radial symmetries for improved reliability and robustness. The technique has been tested against 196 images that were generated to study centrosome abnormalities. The corresponding computed representations are compared against manual counts for validation.

  7. Convex Lattice Polygons

    ERIC Educational Resources Information Center

    Scott, Paul

    2006-01-01

    A "convex" polygon is one with no re-entrant angles. Alternatively one can use the standard convexity definition, asserting that for any two points of the convex polygon, the line segment joining them is contained completely within the polygon. In this article, the author provides a solution to a problem involving convex lattice polygons.

  8. A Fourier dimensionality reduction model for big data interferometric imaging

    NASA Astrophysics Data System (ADS)

    Vijay Kartik, S.; Carrillo, Rafael E.; Thiran, Jean-Philippe; Wiaux, Yves

    2017-06-01

    Data dimensionality reduction in radio interferometry can provide savings of computational resources for image reconstruction through reduced memory footprints and lighter computations per iteration, which is important for the scalability of imaging methods to the big-data setting of next-generation telescopes. This article sheds new light on dimensionality reduction from the perspective of compressed sensing theory and studies its interplay with imaging algorithms designed in the context of convex optimization. We propose a post-gridding linear data embedding to the space spanned by the left singular vectors of the measurement operator, providing a dimensionality reduction below image size. This embedding preserves the null space of the measurement operator, and hence its sampling properties are also preserved in light of compressed sensing theory. We show that this can be approximated by first computing the dirty image and then applying a weighted subsampled discrete Fourier transform to obtain the final reduced data vector. This Fourier dimensionality reduction model ensures a fast implementation of the full measurement operator, essential for any iterative image reconstruction method. The proposed reduction also preserves the independent and identically distributed Gaussian properties of the original measurement noise. For convex optimization-based imaging algorithms, this is key to justifying the use of the standard ℓ2-norm as the data fidelity term. Our simulations confirm that this dimensionality reduction approach can be leveraged by convex optimization algorithms with no loss in imaging quality relative to reconstructing the image from the complete visibility data set. Reconstruction results in simulation settings with no direction-dependent effects or calibration errors show promising performance of the proposed dimensionality reduction. Further tests on real data are planned as an extension of the current work. MATLAB code implementing the proposed reduction method is available on GitHub.

  9. Trajectory Design Employing Convex Optimization for Landing on Irregularly Shaped Asteroids

    NASA Technical Reports Server (NTRS)

    Pinson, Robin M.; Lu, Ping

    2016-01-01

    Mission proposals that land spacecraft on asteroids are becoming increasingly popular. However, in order to have a successful mission the spacecraft must reliably and softly land at the intended landing site with pinpoint precision. The problem under investigation is how to design a propellant optimal powered descent trajectory that can be quickly computed onboard the spacecraft, without interaction from the ground control. The propellant optimal control problem in this work is to determine the optimal finite thrust vector to land the spacecraft at a specified location, in the presence of a highly nonlinear gravity field, subject to various mission and operational constraints. The proposed solution uses convex optimization, a gravity model with higher fidelity than Newtonian, and an iterative solution process for a fixed final time problem. In addition, a second optimization method is wrapped around the convex optimization problem to determine the optimal flight time that yields the lowest propellant usage over all flight times. Gravity models designed for irregularly shaped asteroids are investigated. Success of the algorithm is demonstrated by designing powered descent trajectories for the elongated binary asteroid Castalia.
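
    A compact sketch of the fixed-flight-time convex subproblem under stated assumptions: double-integrator dynamics with a constant toy gravity vector and cvxpy as the modeler; the paper employs a higher-fidelity asteroid gravity model and wraps an outer search over the flight time, and all numbers here are illustrative.

        import cvxpy as cp
        import numpy as np

        N, dt = 60, 1.0
        g = np.array([0.0, 0.0, -0.1])                 # toy constant gravity, m/s^2
        r0 = np.array([200.0, 100.0, 500.0])           # initial position
        v0 = np.array([-5.0, 0.0, -10.0])              # initial velocity
        T_max = 5.0                                    # thrust acceleration bound

        r = cp.Variable((N + 1, 3))
        v = cp.Variable((N + 1, 3))
        T = cp.Variable((N, 3))

        cons = [r[0] == r0, v[0] == v0, r[N] == 0, v[N] == 0, r[:, 2] >= 0]
        for k in range(N):
            cons += [v[k + 1] == v[k] + dt * (T[k] + g),
                     r[k + 1] == r[k] + dt * v[k] + 0.5 * dt ** 2 * (T[k] + g),
                     cp.norm(T[k]) <= T_max]

        fuel = sum(cp.norm(T[k]) for k in range(N))    # convex fuel proxy
        cp.Problem(cp.Minimize(fuel), cons).solve()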

  10. The Thermal Equilibrium Solution of a Generic Bipolar Quantum Hydrodynamic Model

    NASA Astrophysics Data System (ADS)

    Unterreiter, Andreas

    The thermal equilibrium state of a bipolar, isothermic quantum fluid confined to a bounded domain \Omega\subset\mathbf{R}^d, d = 1, 2 or 3, is entirely described by the particle densities n, p minimizing an energy functional in which G_{1,2} are strictly convex real-valued functions. It is shown that this variational problem has a unique minimizer, and some regularity results are proven. The semi-classical limit is carried out, recovering the minimizer of the limiting functional. The subsequent zero-space-charge limit leads to extensions of the classical boundary conditions. Owing to the lack of regularity, the asymptotics cannot be settled on Sobolev embedding arguments. The limit is carried out by means of a compactness-by-convexity principle.

  11. A Subspace Semi-Definite programming-based Underestimation (SSDU) method for stochastic global optimization in protein docking*

    PubMed Central

    Nan, Feng; Moghadasi, Mohammad; Vakili, Pirooz; Vajda, Sandor; Kozakov, Dima; Ch. Paschalidis, Ioannis

    2015-01-01

    We propose a new stochastic global optimization method targeting protein docking problems. The method is based on finding a general convex polynomial underestimator to the binding energy function in a permissive subspace that possesses a funnel-like structure. We use Principal Component Analysis (PCA) to determine such permissive subspaces. The problem of finding the general convex polynomial underestimator is reduced into the problem of ensuring that a certain polynomial is a Sum-of-Squares (SOS), which can be done via semi-definite programming. The underestimator is then used to bias sampling of the energy function in order to recover a deep minimum. We show that the proposed method significantly improves the quality of docked conformations compared to existing methods. PMID:25914440
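
    As a hedged, much-simplified instance of the underestimation step: the paper fits a general convex polynomial via a sum-of-squares condition, while the sketch below restricts to a convex quadratic, for which the semidefinite constraint is simply that Q be positive semidefinite (cvxpy is assumed available; the energy samples are mock data and all names are illustrative).

        import cvxpy as cp
        import numpy as np

        # Fit q(x) = x^T Q x + b^T x + c with q(x_i) <= f_i at all samples,
        # maximizing the sum of q(x_i) so the underestimator is as tight as possible.
        rng = np.random.default_rng(3)
        d, m = 4, 60
        X = rng.standard_normal((m, d))
        f = np.sum(X ** 2, axis=1) + 0.5 * np.sin(3.0 * X[:, 0])  # mock energies

        Q = cp.Variable((d, d), symmetric=True)
        b = cp.Variable(d)
        c = cp.Variable()
        qvals = cp.sum(cp.multiply(X @ Q, X), axis=1) + X @ b + c
        cons = [Q >> 0, qvals <= f]               # convexity and underestimation
        cp.Problem(cp.Minimize(cp.sum(f - qvals)), cons).solve()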

  12. Analysis of Online Composite Mirror Descent Algorithm.

    PubMed

    Lei, Yunwen; Zhou, Ding-Xuan

    2017-03-01

    We study the convergence of the online composite mirror descent algorithm, which involves a mirror map to reflect the geometry of the data and a convex objective function consisting of a loss and a regularizer possibly inducing sparsity. Our error analysis provides convergence rates in terms of properties of the strongly convex differentiable mirror map and the objective function. For a class of objective functions with Hölder continuous gradients, the convergence rates of the excess (regularized) risk under polynomially decaying step sizes have the order [Formula: see text] after [Formula: see text] iterates. Our results improve the existing error analysis for the online composite mirror descent algorithm by avoiding averaging and removing boundedness assumptions, and they sharpen the existing convergence rates of the last iterate for online gradient descent without any boundedness assumptions. Our methodology mainly depends on a novel error decomposition in terms of an excess Bregman distance, refined analysis of self-bounding properties of the objective function, and the resulting one-step progress bounds.
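
    A minimal sketch of the composite update in its Euclidean special case, under stated assumptions: with the squared-norm mirror map the algorithm reduces to an online proximal-gradient step, and the ℓ1 regularizer is handled exactly by soft thresholding; all problem data are illustrative.

        import numpy as np

        rng = np.random.default_rng(2)
        n, lam = 50, 0.05
        w_star = np.zeros(n); w_star[:5] = 1.0          # sparse ground truth
        w = np.zeros(n)

        for t in range(1, 2001):
            a = rng.standard_normal(n)                  # one sample arrives
            y = a @ w_star + 0.1 * rng.standard_normal()
            eta = 0.5 / np.sqrt(t)                      # polynomially decaying step size
            z = w - eta * (w @ a - y) * a               # gradient step on the loss only
            # composite step: the regularizer enters through its proximal map
            w = np.sign(z) * np.maximum(np.abs(z) - eta * lam, 0.0)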

  13. On the structure of self-affine convex bodies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Voynov, A S

    2013-08-31

    We study the structure of convex bodies in R{sup d} that can be represented as a union of their affine images with no common interior points. Such bodies are called self-affine. Vallet's conjecture on the structure of self-affine bodies was proved for d = 2 by Richter in 2011. In the present paper we disprove the conjecture for all d≥3 and derive a detailed description of self-affine bodies in R{sup 3}. Also we consider the relation between properties of self-affine bodies and functional equations with a contraction of an argument. Bibliography: 10 titles.

  14. Analytical solutions with Generalized Impedance Boundary Conditions (GIBC)

    NASA Technical Reports Server (NTRS)

    Syed, H. H.; Volakis, John L.

    1991-01-01

    Rigorous uniform geometrical theory of diffraction (UTD) diffraction coefficients are presented for a coated convex cylinder simulated with generalized impedance boundary conditions. In particular, ray solutions are obtained which remain valid in the transition region and reduce uniformly to those in the deep lit and shadow regions. These involve new transition functions, characteristic of the impedance cylinder, in place of the usual Fock-type integrals. A uniform asymptotic solution is also presented for observations in the close vicinity of the cylinder. The diffraction coefficients for the convex cylinder are obtained via a generalization of the corresponding ones for the circular cylinder.

  15. Goldstone models of modified gravity

    NASA Astrophysics Data System (ADS)

    Brax, Philippe; Valageas, Patrick

    2017-02-01

    We investigate scalar-tensor theories where matter couples to the scalar field via a kinetically dependent conformal coupling. These models can be seen as the low-energy description of field theories invariant under a global Abelian symmetry. The scalar field is then identified with the Goldstone mode of the broken symmetry. It turns out that the properties of these models are very similar to those of ultralocal theories, where the scalar-field value is directly determined by the local matter density. This leads to a complete screening of the fifth force in the Solar System and between compact objects, through the ultralocal screening mechanism. On the other hand, the fifth force can have large effects in extended structures with large-scale density gradients, such as galactic halos. Interestingly, it can either amplify or damp Newtonian gravity, depending on the model parameters. We also study the background cosmology and the linear cosmological perturbations. The background cosmology is hardly different from its Λ-CDM counterpart, while cosmological perturbations crucially depend on whether the coupling function is convex or concave. For concave functions, growth is hindered by the repulsiveness of the fifth force, while it is enhanced in the convex case. In both cases, the departures from the Λ-CDM cosmology increase on smaller scales and peak for galactic structures. For concave functions, the formation of structure is largely altered below some characteristic mass, as smaller structures are delayed and would form later through fragmentation, as in some warm dark matter scenarios. For convex models, small structures form more easily than in the Λ-CDM scenario. This could lead to an over-abundance of small clumps. We use a thermodynamic analysis and show that although convex models have a phase transition between homogeneous and inhomogeneous phases, on cosmological scales the system does not enter the inhomogeneous phase. On the other hand, for galactic halos, the coexistence of small and large substructures in their outer regions could lead to observational signatures of these models.

  16. CONVEX mini manual

    NASA Technical Reports Server (NTRS)

    Tennille, Geoffrey M.; Howser, Lona M.

    1993-01-01

    The use of the CONVEX computers that are an integral part of the Supercomputing Network Subsystems (SNS) of the Central Scientific Computing Complex of LaRC is briefly described. Features of the CONVEX computers that are significantly different from those of the CRAY supercomputers are covered, including: FORTRAN, C, the architecture of the CONVEX computers, the CONVEX environment, batch job submittal, debugging, performance analysis, utilities unique to CONVEX, and documentation. This revision reflects the addition of the Applications Compiler and the X-based debugger, CXdb. The document is intended for all CONVEX users as a ready reference to frequently asked questions and to more detailed information contained within the vendor manuals. It is appropriate for both the novice and the experienced user.

  17. Processing convexity and concavity along a 2-D contour: figure-ground, structural shape, and attention.

    PubMed

    Bertamini, Marco; Wagemans, Johan

    2013-04-01

    Interest in convexity has a long history in vision science. For smooth contours in an image, it is possible to code regions of positive (convex) and negative (concave) curvature, and this provides useful information about solid shape. We review a large body of evidence on the role of this information in perception of shape and in attention. This includes evidence from behavioral, neurophysiological, imaging, and developmental studies. A review is necessary to analyze the evidence on how convexity affects (1) separation between figure and ground, (2) part structure, and (3) attention allocation. Despite some broad agreement on the importance of convexity in these areas, there is a lack of consensus on the interpretation of specific claims--for example, on the contribution of convexity to metric depth and on the automatic directing of attention to convexities or to concavities. The focus is on convexity and concavity along a 2-D contour, not convexity and concavity in 3-D, but the important link between the two is discussed. We conclude that there is good evidence for the role of convexity information in figure-ground organization and in parsing, but other, more specific claims are not (yet) well supported.

  18. Mathematical analysis on the cosets of subgroup in the group of E-convex sets

    NASA Astrophysics Data System (ADS)

    Abbas, Nada Mohammed; Ajeena, Ruma Kareem K.

    2018-05-01

    In this work, an analysis of the cosets of a subgroup of the group of E-convex sets is presented as a new and powerful tool in the topics of convex analysis and abstract algebra. The properties of these cosets on E-convex sets are proved mathematically. The most important theorem on finite groups in the theory of E-convex sets, Lagrange's theorem, is proved. In addition, a mathematical proof for the quotient group of E-convex sets is presented.

  19. Direct single-layered fabrication of 3D concavo convex patterns in nano-stereolithography

    NASA Astrophysics Data System (ADS)

    Lim, T. W.; Park, S. H.; Yang, D. Y.; Kong, H. J.; Lee, K. S.

    2006-09-01

    A nano-surfacing process (NSP) is proposed to directly fabricate three-dimensional (3D) concavo-convex-shaped microstructures, such as micro-lens arrays, using two-photon polymerization (TPP), a promising technique for fabricating arbitrary 3D highly functional micro-devices. In TPP, the methods commonly utilized to date for fabricating complex 3D microstructures are based on a layer-by-layer accumulation technique employing two-dimensional sliced data derived from 3D computer-aided design data. As such, this approach requires much time and effort for precise fabrication. In this work, a novel single-layer exposure method is proposed in order to improve the fabrication efficiency for 3D concavo-convex-shaped microstructures. In the NSP, 3D microstructures are divided horizontally into 13 sub-regions according to their heights. These sub-regions are then expressed as 13 characteristic colors, after which a multi-voxel matrix (MVM) is composed from the characteristic colors. Voxels with various heights and diameters are generated to construct 3D structures using an MVM scanning method. Several 3D concavo-convex-shaped microstructures were fabricated to assess the usefulness of the NSP, and the results show that it readily enables the fabrication of single-layered 3D microstructures.

  20. Group Variable Selection Via Convex Log-Exp-Sum Penalty with Application to a Breast Cancer Survivor Study

    PubMed Central

    Geng, Zhigeng; Wang, Sijian; Yu, Menggang; Monahan, Patrick O.; Champion, Victoria; Wahba, Grace

    2017-01-01

    In many scientific and engineering applications, covariates are naturally grouped. When group structures are available among covariates, people are usually interested in identifying both important groups and important variables within the selected groups. Among existing successful group variable selection methods, some fail to conduct within-group selection. Others are able to conduct both group and within-group selection, but the corresponding objective functions are non-convex, and such non-convexity may require extra numerical effort. In this article, we propose a novel Log-Exp-Sum (LES) penalty for group variable selection. The LES penalty is strictly convex. It can identify important groups as well as select important variables within a group. We develop an efficient group-level coordinate descent algorithm to fit the model. We also derive non-asymptotic error bounds and asymptotic group selection consistency for our method in the high-dimensional setting, where the number of covariates can be much larger than the sample size. Numerical results demonstrate the good performance of our method in both variable selection and prediction. We applied the proposed method to an American Cancer Society breast cancer survivor dataset. The findings are clinically meaningful and may help design intervention programs to improve the quality of life of breast cancer survivors. PMID:25257196
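
    The abstract does not spell out the penalty; as a hedged sketch, one plausible log-exp-sum form consistent with the description is P(beta) = lambda * sum over groups g of w_g * log(sum over j in g of exp(|beta_j|)). Since log-sum-exp is convex and nondecreasing in each argument and |beta_j| is convex, the composition is convex. All names and scalings below are illustrative assumptions, not the paper's exact definition.

      import numpy as np

      def les_penalty(beta, groups, lam=1.0, weights=None):
          # One plausible Log-Exp-Sum group penalty (hypothetical form):
          # lam * sum_g w_g * log(sum_{j in g} exp(|beta_j|))
          if weights is None:
              weights = [1.0] * len(groups)
          total = 0.0
          for w, g in zip(weights, groups):
              # log-sum-exp of |beta_j| is convex and nondecreasing in each
              # argument, so composing with |.| preserves convexity
              total += w * np.log(np.sum(np.exp(np.abs(beta[g]))))
          return lam * total

      beta = np.array([0.0, 0.5, -1.2, 0.0, 2.0])
      groups = [np.array([0, 1, 2]), np.array([3, 4])]
      print(les_penalty(beta, groups))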

  1. Conditions for monogamy of quantum correlations in multipartite systems

    NASA Astrophysics Data System (ADS)

    Kumar, Asutosh

    2016-09-01

    Monogamy of quantum correlations is a vibrant area of research because of its potential applications in several areas in quantum information ranging from quantum cryptography to co-operative phenomena in many-body physics. In this paper, we investigate conditions under which monogamy is preserved for functions of quantum correlation measures. We prove that a monogamous measure remains monogamous on raising its power, and a non-monogamous measure remains non-monogamous on lowering its power. We also prove that monogamy of a convex quantum correlation measure for arbitrary multipartite pure quantum state leads to its monogamy for mixed states in the same Hilbert space. Monogamy of squared negativity for mixed states and that of entanglement of formation follow as corollaries of our results.
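
    In standard notation (a textbook form of the constraint; the paper's normalization may differ), monogamy of a measure Q and the power statement above can be written in LaTeX as

      Q_{A|BC} \ge Q_{AB} + Q_{AC}
      \quad\Longrightarrow\quad
      Q_{A|BC}^{\alpha} \ge \left( Q_{AB} + Q_{AC} \right)^{\alpha}
        \ge Q_{AB}^{\alpha} + Q_{AC}^{\alpha}, \qquad \alpha \ge 1,

    where the last step uses the superadditivity of t -> t^alpha on the nonnegative reals; this is exactly why raising the power of a monogamous measure preserves monogamy.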

  3. Fast alternating projection methods for constrained tomographic reconstruction

    PubMed Central

    Liu, Li; Han, Yongxin

    2017-01-01

    The alternating projection algorithms are easy to implement and effective for large-scale complex optimization problems, such as constrained reconstruction of X-ray computed tomography (CT). A typical method is to use projection onto convex sets (POCS) for data fidelity and nonnegativity constraints, combined with total variation (TV) minimization (so-called TV-POCS), for sparse-view CT reconstruction. However, this type of method relies on empirically selected parameters for satisfactory reconstruction, is generally slow, and lacks convergence analysis. In this work, we use a convex feasibility set approach to address the problems associated with TV-POCS and propose a framework using full sequential alternating projections onto convex sets (FS-POCS) to find the solution in the intersection of convex constraints of bounded TV function, bounded data fidelity error, and nonnegativity. The rationale behind FS-POCS is that the mathematically optimal solution of the constrained objective function may not be the physically optimal solution. The breakdown of constrained reconstruction into an intersection of several feasible sets can lead to faster convergence and better quantification of reconstruction parameters in a physically meaningful way, rather than through empirical trial-and-error. In addition, for large-scale optimization problems, first-order methods are usually used. Not only is the condition for convergence of gradient-based methods derived, but a primal-dual hybrid gradient (PDHG) method is also used for fast convergence of the bounded-TV projection. The newly proposed FS-POCS is evaluated and compared with TV-POCS and another convex feasibility projection method (CPTV) using both digital phantom and pseudo-real CT data, showing superior performance in reconstruction speed, image quality, and quantification. PMID:28253298
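
    To make the alternating-projection idea concrete, here is a minimal POCS sketch, assuming two convex sets: the nonnegative orthant (exact projection) and a data-fidelity ball ||Ax - b|| <= eps, approached with a relaxed Landweber-type step since its exact projection needs an inner solve. This illustrates the general scheme only, not the paper's FS-POCS implementation.

      import numpy as np

      rng = np.random.default_rng(0)
      A = rng.normal(size=(40, 60))
      b = A @ np.abs(rng.normal(size=60))   # consistent synthetic data
      eps = 1e-3
      L = np.linalg.norm(A, 2) ** 2         # squared spectral norm of A

      def towards_ball(x):
          # Relaxed step towards {x : ||Ax - b|| <= eps}; a surrogate for the
          # exact projection, which would require an inner least-squares solve.
          r = A @ x - b
          nrm = np.linalg.norm(r)
          if nrm <= eps:
              return x
          return x - (1.0 - eps / nrm) / L * (A.T @ r)

      x = np.zeros(60)
      for _ in range(500):
          x = np.maximum(towards_ball(x), 0.0)   # exact projection onto x >= 0
      print(np.linalg.norm(A @ x - b))           # residual shrinks towards eps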

  4. Delivering Sound Energy along an Arbitrary Convex Trajectory

    PubMed Central

    Zhao, Sipei; Hu, Yuxiang; Lu, Jing; Qiu, Xiaojun; Cheng, Jianchun; Burnett, Ian

    2014-01-01

    Accelerating beams have attracted considerable research interest due to their peculiar properties and various applications. Although there has been much research on the generation and application of accelerating light beams, few results have been published on the generation of accelerating acoustic beams. Here we report the experimental observation of accelerating acoustic beams along arbitrary convex trajectories. The desired trajectory is projected to the spatial phase profile on the boundary, which is discretized and sampled spatially. The sound field distribution is formulated with the Green function and the integral equation method. Both the paraxial and the non-paraxial regimes are examined and observed in the experiments. The effect of obstacle scattering in the sound field is also investigated, and the results demonstrate that the approach is robust against obstacle scattering. The realization of accelerating acoustic beams will have an impact on various applications where acoustic information and energy are required to be delivered along an arbitrary convex trajectory. PMID:25316353

  5. An optimized algorithm for multiscale wideband deconvolution of radio astronomical images

    NASA Astrophysics Data System (ADS)

    Offringa, A. R.; Smirnov, O.

    2017-10-01

    We describe a new multiscale deconvolution algorithm that can also be used in a multifrequency mode. The algorithm only affects the minor clean loop. In single-frequency mode, the minor loop of our improved multiscale algorithm is over an order of magnitude faster than the casa multiscale algorithm, and produces results of similar quality. For multifrequency deconvolution, a technique named joined-channel cleaning is used. In this mode, the minor loop of our algorithm is two to three orders of magnitude faster than casa msmfs. We extend the multiscale mode with automated scale-dependent masking, which allows structures to be cleaned below the noise. We describe a new scale-bias function for use in multiscale cleaning. We test a second deconvolution method that is a variant of the moresane deconvolution technique, and uses a convex optimization technique with isotropic undecimated wavelets as dictionary. On simple well-calibrated data, the convex optimization algorithm produces visually more representative models. On complex or imperfect data, the convex optimization algorithm has stability issues.

  6. Scoliosis convexity and organ anatomy are related.

    PubMed

    Schlösser, Tom P C; Semple, Tom; Carr, Siobhán B; Padley, Simon; Loebinger, Michael R; Hogg, Claire; Castelein, René M

    2017-06-01

    Primary ciliary dyskinesia (PCD) is a respiratory syndrome in which 'random' organ orientation can occur, with approximately 46% of patients developing situs inversus totalis at organogenesis. The aim of this study was to explore the relationship between organ anatomy and curve convexity by studying the prevalence and convexity of idiopathic scoliosis in PCD patients with and without situs inversus. Chest radiographs of PCD patients were systematically screened for the existence of significant lateral spinal deviation using the Cobb angle; positive values represented right-sided convexity. Curve convexity and Cobb angles were compared between PCD patients with situs inversus and those with normal anatomy. A total of 198 PCD patients were screened. The prevalence of scoliosis (Cobb >10°) and of significant spinal asymmetry (Cobb 5-10°) was 8 and 23%, respectively. Curve convexity and Cobb angle differed significantly between situs inversus patients and patients with normal anatomy (P ≤ 0.009). Moreover, curve convexity correlated significantly with organ orientation (P < 0.001; ϕ = 0.882): in 16 PCD patients with scoliosis (8 situs inversus and 8 normal anatomy), curve convexity matched the orientation of organ anatomy in all but one case, with the convexity of the curve opposite to the organ orientation. This study supports our hypothesis on the correlation between organ anatomy and curve convexity in scoliosis: the convexity of the thoracic curve is predominantly to the right in PCD patients who were 'randomized' to normal organ anatomy, and to the left in patients with situs inversus totalis.

  7. Use of Convexity in Ostomy Care

    PubMed Central

    Salvadalena, Ginger; Pridham, Sue; Droste, Werner; McNichol, Laurie; Gray, Mikel

    2017-01-01

    Ostomy skin barriers that incorporate a convexity feature have been available in the marketplace for decades, but limited resources are available to guide clinicians in selection and use of convex products. Given the widespread use of convexity, and the need to provide practical guidelines for appropriate use of pouching systems with convex features, an international consensus panel was convened to provide consensus-based guidance for this aspect of ostomy practice. Panelists were provided with a summary of relevant literature in advance of the meeting; these articles were used to generate and reach consensus on 26 statements during a 1-day meeting. Consensus was achieved when 80% of panelists agreed on a statement using an anonymous electronic response system. The 26 statements provide guidance for convex product characteristics, patient assessment, convexity use, and outcomes. PMID:28002174

  8. Geometric convex cone volume analysis

    NASA Astrophysics Data System (ADS)

    Li, Hsiao-Chi; Chang, Chein-I.

    2016-05-01

    Convexity is a major concept used to design and develop endmember finding algorithms (EFAs). For abundance-unconstrained techniques, the Pixel Purity Index (PPI) and the Automatic Target Generation Process (ATGP), which use Orthogonal Projection (OP) as a criterion, are commonly used methods. For abundance partially constrained techniques, Convex Cone Analysis is generally preferred, which makes use of convex cones to impose the Abundance Non-negativity Constraint (ANC). For abundance fully constrained techniques, N-FINDR and the Simplex Growing Algorithm (SGA) are the most popular methods, which use simplex volume as a criterion to impose the ANC and the Abundance Sum-to-one Constraint (ASC). This paper analyzes an issue encountered in volume calculation, with a hyperplane introduced to illustrate the idea of a bounded convex cone. Geometric Convex Cone Volume Analysis (GCCVA) projects the boundary vectors of a convex cone orthogonally onto a hyperplane to reduce the effect of background signatures, and a geometric volume approach is applied to address the issue arising from volume calculation and to further improve the performance of convex-cone-based EFAs.

  9. FMCSA’s advanced system testing utilizing a data acquisition system on the highways (FAST DASH) safety technology evaluation project #3 : novel convex mirrors : technology brief.

    DOT National Transportation Integrated Search

    2016-11-01

    The Federal Motor Carrier Safety Administration (FMCSA) established the FAST DASH program to perform efficient independent evaluations of promising safety technologies aimed at commercial vehicle operations. In this third FAST DASH safety technology ...

  10. Novel methods for Solving Economic Dispatch of Security-Constrained Unit Commitment Based on Linear Programming

    NASA Astrophysics Data System (ADS)

    Guo, Sangang

    2017-09-01

    There are two stages in solving security-constrained unit commitment (SCUC) problems within the Lagrangian framework: one is to obtain feasible unit states (UC); the other is the economic dispatch (ED) of power for each unit. An accurate solution of the ED is the more important for enhancing the efficiency of the solution to SCUC given fixed feasible unit states. Two novel methods, named the Convex Combinatorial Coefficient Method and the Power Increment Method, based on linear programming are proposed for solving the ED via piecewise linear approximation of the nonlinear convex fuel cost functions. Numerical testing results show that the methods are effective and efficient.
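
    The convex-combination idea behind such piecewise-linear ED formulations can be sketched as follows: each unit's convex fuel cost is sampled at breakpoints, the decision variables are convex-combination weights over those breakpoints, and convexity of the cost curve guarantees the LP optimum uses adjacent breakpoints. The data and two-unit setup below are illustrative assumptions, not the paper's test system.

      import numpy as np
      from scipy.optimize import linprog

      breaks = np.linspace(10.0, 100.0, 6)       # MW breakpoints (assumed)
      cost = [0.02 * breaks**2 + 8 * breaks,     # unit 1 convex fuel cost
              0.03 * breaks**2 + 6 * breaks]     # unit 2 convex fuel cost
      demand, n = 150.0, len(breaks)

      c = np.concatenate(cost)                   # minimize total fuel cost
      A_eq = np.zeros((3, 2 * n))                # lambda sums and power balance
      A_eq[0, :n] = 1.0                          # unit 1 weights sum to 1
      A_eq[1, n:] = 1.0                          # unit 2 weights sum to 1
      A_eq[2, :n] = breaks                       # total generated power ...
      A_eq[2, n:] = breaks                       # ... must meet demand
      res = linprog(c, A_eq=A_eq, b_eq=[1.0, 1.0, demand],
                    bounds=[(0, None)] * (2 * n))
      p1 = res.x[:n] @ breaks
      print(p1, demand - p1, res.fun)            # dispatch of units 1 and 2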

  11. High frequency scattering by a smooth coated cylinder simulated with generalized impedance boundary conditions

    NASA Technical Reports Server (NTRS)

    Syed, Hasnain H.; Volakis, John L.

    1991-01-01

    Rigorous uniform geometrical theory of diffraction (UTD) diffraction coefficients are presented for a coated convex cylinder simulated with generalized impedance boundary conditions. In particular, ray solutions are obtained which remain valid in the transition region and reduce uniformly to those in the deep lit and shadow regions. These involve new transition functions, in place of the usual Fock-type integrals, characteristic of the impedance cylinder. A uniform asymptotic solution is also presented for observations in the close vicinity of the cylinder. As usual, the diffraction coefficients for the convex cylinder are obtained via a generalization of the corresponding ones for the circular cylinder.

  12. Behavior of turbulent boundary layers on curved convex walls

    NASA Technical Reports Server (NTRS)

    Schmidbauer, Hans

    1936-01-01

    The system of linear differential equations by which Gruschwitz indicated the approach of separation and the so-called "boundary-layer thickness" is extended in this report to the case where the friction layer is subject to centrifugal forces. Evaluation of the data yields a strong functional dependence of the momentum change and wall drag on the ratio of boundary-layer thickness to the radius of curvature of the wall. It is further shown that the transition from laminar to turbulent flow occurs at somewhat higher Reynolds Numbers on the convex wall than on the flat plate, due to the stabilizing effect of the centrifugal forces.

  13. A new neural network model for solving random interval linear programming problems.

    PubMed

    Arjmandzadeh, Ziba; Safi, Mohammadreza; Nazemi, Alireza

    2017-05-01

    This paper presents a neural network model for solving random interval linear programming problems. The original problem, involving random interval variable coefficients, is first transformed into an equivalent convex second-order cone programming problem. A neural network model is then constructed for solving the obtained convex second-order cone problem. Employing a Lyapunov function approach, it is also shown that the proposed neural network model is stable in the sense of Lyapunov and globally convergent to an exact satisfactory solution of the original problem. Several illustrative examples are solved in support of this technique. Copyright © 2017 Elsevier Ltd. All rights reserved.

  14. Revisiting separation properties of convex fuzzy sets

    USDA-ARS?s Scientific Manuscript database

    Separation of convex sets by hyperplanes has been extensively studied for crisp sets. In a seminal paper, separability and convexity are investigated; however, there is a flaw in the definition of the degree of separation. We revisited separation on convex fuzzy sets that have level-wise (crisp) disjointne...

  16. Detection of Convexity and Concavity in Context

    ERIC Educational Resources Information Center

    Bertamini, Marco

    2008-01-01

    Sensitivity to shape changes was measured, in particular detection of convexity and concavity changes. The available data are contradictory. The author used a change detection task and simple polygons to systematically manipulate convexity/concavity. Performance was high for detecting a change of sign (a new concave vertex along a convex contour…

  17. On the Convergence Analysis of the Optimized Gradient Method.

    PubMed

    Kim, Donghwan; Fessler, Jeffrey A

    2017-01-01

    This paper considers the problem of unconstrained minimization of smooth convex functions having Lipschitz continuous gradients with known Lipschitz constant. We recently proposed the optimized gradient method for this problem and showed that it has a worst-case convergence bound for the cost function decrease that is half as large as that of Nesterov's fast gradient method, yet has a similarly efficient practical implementation. Drori showed recently that the optimized gradient method has optimal complexity for the cost function decrease over the general class of first-order methods. This optimality makes it important to study fully the convergence properties of the optimized gradient method. The previous worst-case convergence bound for the optimized gradient method was derived for only the last iterate of a secondary sequence. This paper provides an analytic convergence bound for the primary sequence generated by the optimized gradient method. We then discuss additional convergence properties of the optimized gradient method, including the interesting fact that it has two types of worst-case functions: a piecewise affine-quadratic function and a quadratic function. These results help complete the theory of an optimal first-order method for smooth convex minimization.
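
    For orientation, the OGM iteration has the following shape (reproduced from the published update as we recall it, omitting the special factor used at the final iteration; see the paper for the exact rule). A least-squares toy cost stands in for f.

      import numpy as np

      rng = np.random.default_rng(1)
      A = rng.normal(size=(30, 10))
      b = rng.normal(size=30)
      L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of grad f
      grad = lambda x: A.T @ (A @ x - b)

      x = y = np.zeros(10)
      theta = 1.0
      for _ in range(100):
          y_new = x - grad(x) / L              # usual gradient step
          theta_new = 0.5 * (1 + np.sqrt(1 + 4 * theta**2))
          # momentum term plus an extra over-relaxation of the gradient step,
          # which is what distinguishes OGM from Nesterov's method
          x = (y_new + (theta - 1) / theta_new * (y_new - y)
               + theta / theta_new * (y_new - x))
          y, theta = y_new, theta_new
      print(np.linalg.norm(grad(y)))           # gradient norm shrinks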

  19. Low-sensitivity H ∞ filter design for linear delta operator systems with sampling time jitter

    NASA Astrophysics Data System (ADS)

    Guo, Xiang-Gui; Yang, Guang-Hong

    2012-04-01

    This article is concerned with the problem of designing H ∞ filters with low sensitivity to sampling time jitter for a class of linear discrete-time systems via the delta operator approach. A delta-domain model is used to avoid the inherent numerical ill-conditioning resulting from the use of the standard shift-domain model at high sampling rates. Based on the projection lemma, in combination with the descriptor system approach often used to solve delay-related problems, a novel bounded real lemma with three slack variables for delta operator systems is presented. A sensitivity approach based on this novel lemma is proposed to mitigate the effects of sampling time jitter on system performance. The problem of designing a low-sensitivity filter can then be reduced to a convex optimisation problem. An important consideration in the design of such filters is the optimal trade-off between the standard H ∞ criterion and the sensitivity of the transfer function with respect to sampling time jitter. Finally, a numerical example demonstrating the validity of the proposed design method is given.

  20. Airfoil

    DOEpatents

    Ristau, Neil; Siden, Gunnar Leif

    2015-07-21

    An airfoil includes a leading edge, a trailing edge downstream from the leading edge, a pressure surface between the leading and trailing edges, and a suction surface between the leading and trailing edges and opposite the pressure surface. A first convex section on the suction surface decreases in curvature downstream from the leading edge, and a throat on the suction surface is downstream from the first convex section. A second convex section is on the suction surface downstream from the throat, and a first convex segment of the second convex section increases in curvature.

  1. A lab-on-phone instrument with varifocal microscope via a liquid-actuated aspheric lens (LAL)

    PubMed Central

    Fuh, Yiin-Kuen; Lai, Zheng-Hong; Kau, Li-Han; Huang, Hung-Jui

    2017-01-01

    In this paper, we introduce a novel concept of a liquid-actuated aspheric lens (LAL) with a built-in aspheric polydimethylsiloxane lens (APL) that enables the design of compact optical systems with varifocal microscopic imaging. The varifocal lens module consists of a sandwiched structure in which a 3D-printed syringe pump serves as the liquid controller. Other key components include two acrylic cylinders, a rigid separator, and an APL/membrane composite (APLMC) embedded in a PDMS membrane. In operation, the fluidic controller is driven to control the pressure difference and the APLMC deformation. The focal length is changed through the pressure difference, achieved by adjusting the volume of injected liquid, yielding a widely tunable focal length. The proposed LAL can operate in three modes: microscopic mode (APLMC only), convex-concave mode, and biconcave mode. Notably, the LAL in microscopic mode is focus-tunable via the actuation of the APLMC (focal length from 4.3 to 2.3 mm at 50X magnification) and can rival the image quality of commercial microscopes. The new lab-on-phone device is economically feasible and functionally versatile, offering great potential for point-of-care applications. PMID:28650971

  2. Space ultra-vacuum facility and method of operation

    NASA Technical Reports Server (NTRS)

    Naumann, Robert J. (Inventor)

    1986-01-01

    A wake shield facility providing an ultrahigh vacuum level for space processing is described. The facility is in the shape of a truncated, hollow hemispherical section, one side of the shield convex and the other concave. The shield surface is preferably made of a material that has low outgassing characteristics, such as stainless steel. A material-sample supporting fixture in the form of a carousel is disposed on the convex side of the shield at its apex. Movable arms, also on the convex side, are connected to the shield in proximity to the carousel; the arms support processing fixtures and provide for movement of the fixtures to the predetermined locations required for producing interactions with material samples. For MBE processes, a vapor jet projects a stream of vaporized material onto a sample surface. The fixtures are oriented to face the surface of the sample being processed when in their extended position, and when not in use they are retractable to a storage position. The concave side of the shield has a support structure including metal struts connected to the shield and extending radially inward. The struts are joined to an end plate disposed parallel to the outer edge of the shield. This system eliminates outgassing contamination.

  3. Laplace Inversion of Low-Resolution NMR Relaxometry Data Using Sparse Representation Methods

    PubMed Central

    Berman, Paula; Levi, Ofer; Parmet, Yisrael; Saunders, Michael; Wiesman, Zeev

    2013-01-01

    Low-resolution nuclear magnetic resonance (LR-NMR) relaxometry is a powerful tool that can be harnessed for characterizing constituents in complex materials. Conversion of the relaxation signal into a continuous distribution of relaxation components is an ill-posed inverse Laplace transform problem. The most common numerical method implemented today for dealing with this kind of problem is based on L2-norm regularization. However, sparse representation methods via L1 regularization and convex optimization are a relatively new approach for effective analysis and processing of digital images and signals. In this article, we present a numerical optimization method for analyzing LR-NMR data that includes non-negativity constraints and L1 regularization and applies the convex optimization solver PDCO, a primal-dual interior method for convex objectives that allows general linear constraints to be treated as linear operators. The integrated approach includes validation of analyses by simulations, testing repeatability of experiments, and validation of the model and its statistical assumptions. The proposed method provides better resolved and more accurate solutions when compared with those suggested by existing tools. © 2013 Wiley Periodicals, Inc. Concepts Magn Reson Part A 42A: 72–88, 2013. PMID:23847452
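
    PDCO itself is a MATLAB solver; as a stand-in, the sketch below solves the same convex program (nonnegative, L1-regularized inversion of a discretized Laplace kernel) with projected ISTA. On the nonnegative orthant the L1 term is linear, so its proximal step reduces to a shift followed by clipping. Sizes, rates, and the synthetic spectrum are illustrative assumptions.

      import numpy as np

      t = np.linspace(0.01, 3.0, 200)             # acquisition times
      T2 = np.logspace(-2, 1, 100)                # candidate relaxation times
      K = np.exp(-t[:, None] / T2[None, :])       # discretized Laplace kernel
      f_true = np.zeros(100)
      f_true[30], f_true[70] = 1.0, 0.5           # two-peak synthetic spectrum
      y = K @ f_true + 0.01 * np.random.default_rng(2).normal(size=200)

      lam = 0.01
      step = 1.0 / np.linalg.norm(K, 2) ** 2
      f = np.zeros(100)
      for _ in range(2000):
          g = K.T @ (K @ f - y)                   # gradient of 0.5||Kf - y||^2
          f = np.maximum(f - step * (g + lam), 0) # shift-and-clip prox step
      print(f.max(), np.count_nonzero(f > 1e-3))  # sparse recovered spectrum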

  5. Convexity and Concavity Properties of the Optimal Value Function in Parametric Nonlinear Programming.

    DTIC Science & Technology

    1982-12-21

    ...and W. T. ZIEMBA (1981). Introduction to concave and generalized concave functions. In Generalized Concavity in Optimization and Economics (S. Schaible and W. T. Ziemba, eds.), pp. 21-50. Academic Press, New York. BANK, B., J. GUDDAT, D. KLATTE, B. KUMMER, and K. TAMMER (1982). Non-Linear...

  6. Hermite-Hadamard type inequality for φ_h-convex stochastic processes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sarıkaya, Mehmet Zeki, E-mail: sarikayamz@gmail.com; Kiriş, Mehmet Eyüp, E-mail: kiris@aku.edu.tr; Çelik, Nuri, E-mail: ncelik@bartin.edu.tr

    2016-04-18

    The main aim of the present paper is to introduce φ_h-convex stochastic processes, and we investigate the main properties of these mappings. Moreover, we prove Hadamard-type inequalities for φ_h-convex stochastic processes. We also give some new general inequalities for φ_h-convex stochastic processes.
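
    For orientation, the classical Hermite-Hadamard inequality for a convex function f on [a, b] reads, in LaTeX,

      f\!\left( \frac{a+b}{2} \right)
        \;\le\; \frac{1}{b-a} \int_a^b f(x)\, dx
        \;\le\; \frac{f(a) + f(b)}{2};

    the paper's results transfer this two-sided bound to φ_h-convex stochastic processes.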

  7. A Convex Hull-Based New Metric for Quantification of Bladder Wall Irregularity in Pediatric Patients With Congenital Anomalies of the Kidney and Urinary Tract.

    PubMed

    Stember, Joseph N; Newhouse, Jeffrey; Behr, Gerald; Alam, Shumyle

    2017-11-01

    Early identification and quantification of bladder damage in pediatric patients with congenital anomalies of the kidney and urinary tract (CAKUT) is crucial to guiding effective treatment and may affect the eventual clinical outcome, including progression of renal disease. We have developed a novel approach based on the convex hull to calculate bladder wall trabecularity in pediatric patients with CAKUT. The objective of this study was to test whether our approach can accurately predict bladder wall irregularity. Twenty pediatric patients, half with renal compromise and CAKUT and half with normal renal function, were evaluated. We applied the convex hull approach to calculate T, a metric proposed to reflect the degree of trabeculation/bladder wall irregularity, in this set of patients. The average T value was roughly 3 times higher for diseased than healthy patients (0.14 [95% confidence interval, 0.10-0.17] versus 0.05 [95% confidence interval, 0.03-0.07] for normal bladders). This disparity was statistically significant (P < .01). We have demonstrated that a convex hull-based procedure can measure bladder wall irregularity. Because bladder damage is a reversible precursor to irreversible renal parenchymal damage, applying such a measure to at-risk pediatric patients can help guide prompt interventions to avert disease progression. © 2017 by the American Institute of Ultrasound in Medicine.
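
    The abstract does not give the closed form of T; a natural hull-based irregularity index (hypothetical, for illustration only) compares the traced wall contour with its convex hull, e.g. T = contour perimeter / hull perimeter - 1, which is 0 for a perfectly convex wall and grows with trabeculation.

      import numpy as np
      from scipy.spatial import ConvexHull

      def irregularity(points):
          # points: contour vertices in traversal order, shape (n, 2)
          hull = ConvexHull(points)
          closed = np.vstack([points, points[:1]])
          contour_len = np.linalg.norm(np.diff(closed, axis=0), axis=1).sum()
          # for 2-D input, ConvexHull.area is the hull's perimeter
          return contour_len / hull.area - 1.0

      theta = np.linspace(0, 2 * np.pi, 200, endpoint=False)
      r = 1.0 + 0.15 * np.sin(9 * theta)     # wavy, trabeculated-looking wall
      pts = np.c_[r * np.cos(theta), r * np.sin(theta)]
      print(irregularity(pts))               # > 0 for the irregular contour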

  8. A Bayesian observer replicates convexity context effects in figure-ground perception.

    PubMed

    Goldreich, Daniel; Peterson, Mary A

    2012-01-01

    Peterson and Salvagio (2008) demonstrated convexity context effects in figure-ground perception. Subjects shown displays consisting of unfamiliar alternating convex and concave regions identified the convex regions as foreground objects progressively more frequently as the number of regions increased; this occurred only when the concave regions were homogeneously colored. The origins of these effects have been unclear. Here, we present a two-free-parameter Bayesian observer that replicates convexity context effects. The Bayesian observer incorporates two plausible expectations regarding three-dimensional scenes: (1) objects tend to be convex rather than concave, and (2) backgrounds tend (more than foreground objects) to be homogeneously colored. The Bayesian observer estimates the probability that a depicted scene is three-dimensional, and that the convex regions are figures. It responds stochastically by sampling from its posterior distributions. Like human observers, the Bayesian observer shows convexity context effects only for images with homogeneously colored concave regions. With optimal parameter settings, it performs similarly to the average human subject on the four display types tested. We propose that object convexity and background color homogeneity are environmental regularities exploited by human visual perception; vision achieves figure-ground perception by interpreting ambiguous images in light of these and other expected regularities in natural scenes.

  9. Synchronization Control of Neural Networks With State-Dependent Coefficient Matrices.

    PubMed

    Zhang, Junfeng; Zhao, Xudong; Huang, Jun

    2016-11-01

    This brief is concerned with synchronization control of a class of neural networks with state-dependent coefficient matrices. Unlike the existing drive-response neural networks in the literature, a novel model of drive-response neural networks is established. The concepts of uniformly ultimately bounded (UUB) synchronization and the convex hull Lyapunov function are introduced. Then, by using the convex hull Lyapunov function approach, the UUB synchronization design of the drive-response neural networks is proposed, and a delay-independent control law guaranteeing the bounded synchronization of the neural networks is constructed. All conditions are formulated in terms of bilinear matrix inequalities. By comparison, it is shown that the conditions obtained in this brief are less conservative than those in the literature, and that bounded synchronization is suitable for the novel drive-response neural networks. Finally, an illustrative example is given to verify the validity of the obtained results.

  10. Dwell time-based stabilisation of switched delay systems using free-weighting matrices

    NASA Astrophysics Data System (ADS)

    Koru, Ahmet Taha; Delibaşı, Akın; Özbay, Hitay

    2018-01-01

    In this paper, we present a quasi-convex optimisation method to minimise an upper bound on the dwell time for stability of switched delay systems. Piecewise Lyapunov-Krasovskii functionals are introduced, and the upper bound for the derivative of the Lyapunov functionals is estimated by the free-weighting matrices method to investigate the non-switching stability of each candidate subsystem. Then, a sufficient condition on the dwell time is derived to guarantee the asymptotic stability of the switched delay system. Once these conditions are represented by a set of linear matrix inequalities, the dwell time optimisation problem can be formulated as a standard quasi-convex optimisation problem. Numerical examples are given to illustrate the improvements over previously obtained dwell time bounds. Using the results obtained in the stability case, we present a nonlinear minimisation algorithm to synthesise dwell-time-minimising controllers. The algorithm solves the problem by successive linearisation of the nonlinear conditions.

  11. A Novel Gradient Vector Flow Snake Model Based on Convex Function for Infrared Image Segmentation

    PubMed Central

    Zhang, Rui; Zhu, Shiping; Zhou, Qin

    2016-01-01

    Infrared image segmentation is a challenging topic because infrared images are characterized by high noise, low contrast, and weak edges. Active contour models, especially gradient vector flow (GVF), have several advantages for infrared image segmentation. However, the GVF model also has drawbacks, including a dilemma between noise smoothing and weak-edge protection, which significantly decreases segmentation quality. In order to solve this problem, we propose a novel generalized gradient vector flow snakes model combining the GGVF (Generic Gradient Vector Flow) and NBGVF (Normally Biased Gradient Vector Flow) models. We also adopt a new type of coefficient setting, in the form of a convex function, to improve the ability to protect weak edges while smoothing noise. Experimental results and comparisons against other methods indicate that our proposed snakes model outperforms other snakes models in infrared image segmentation. PMID:27775660

  12. Comparison of two non-convex mixed-integer nonlinear programming algorithms applied to autoregressive moving average model structure and parameter estimation

    NASA Astrophysics Data System (ADS)

    Uilhoorn, F. E.

    2016-10-01

    In this article, the stochastic modelling approach proposed by Box and Jenkins is treated as a mixed-integer nonlinear programming (MINLP) problem solved with a mesh adaptive direct search and a real-coded genetic class of algorithms. The aim is to estimate the real-valued parameters and non-negative integer, correlated structure of stationary autoregressive moving average (ARMA) processes. The maximum likelihood function of the stationary ARMA process is embedded in Akaike's information criterion and the Bayesian information criterion, whereas the estimation procedure is based on Kalman filter recursions. The constraints imposed on the objective function enforce stability and invertibility. The best ARMA model is regarded as the global minimum of the non-convex MINLP problem. The robustness and computational performance of the MINLP solvers are compared with brute-force enumeration. Numerical experiments are done for existing time series and one new data set.

  13. Decomposability and convex structure of thermal processes

    NASA Astrophysics Data System (ADS)

    Mazurek, Paweł; Horodecki, Michał

    2018-05-01

    We present an example of a thermal process (TP) for a system of d energy levels which cannot be performed without instant access to the whole energy space. This TP is uniquely connected with a transition between certain states of the system that cannot be performed without access to the whole energy space, even when approximate transitions are allowed. Pursuing the question of the decomposability of TPs into convex combinations of compositions of processes acting non-trivially on smaller subspaces, we investigate transitions within the subspace of states diagonal in the energy basis. For three-level systems, we determine the set of extremal points of these operations, as well as the minimal set of operations needed to perform an arbitrary TP, and connect the set of TPs with the thermomajorization criterion. We show that the structure of the set depends on temperature, which is associated with the fact that TPs cannot deterministically increase the extractable work from a state, a conclusion that holds for an arbitrary d-level system. We also connect the decomposability problem with the detailed balance symmetry of extremal TPs.

  14. Cooperative Convex Optimization in Networked Systems: Augmented Lagrangian Algorithms With Directed Gossip Communication

    NASA Astrophysics Data System (ADS)

    Jakovetic, Dusan; Xavier, João; Moura, José M. F.

    2011-08-01

    We study distributed optimization in networked systems, where nodes cooperate to find the optimal quantity of common interest, x = x*. The objective function of the corresponding optimization problem is the sum of private (known only to a node), convex, per-node objectives, and each node imposes a private convex constraint on the allowed values of x. We solve this problem for generic connected network topologies with asymmetric random link failures with a novel distributed, decentralized algorithm. We refer to this algorithm as AL-G (augmented Lagrangian gossiping), and to its variants as AL-MG (augmented Lagrangian multi-neighbor gossiping) and AL-BG (augmented Lagrangian broadcast gossiping). The AL-G algorithm is based on the augmented Lagrangian dual function. Dual variables are updated by the standard method of multipliers, at a slow time scale. To update the primal variables, we propose a novel, Gauss-Seidel-type, randomized algorithm, at a fast time scale. AL-G uses unidirectional gossip communication, only between immediate neighbors in the network, and is resilient to random link failures. For networks with reliable communication (i.e., no failures), the simplified AL-BG algorithm reduces communication, computation, and data storage cost. We prove convergence for all proposed algorithms and demonstrate by simulations their effectiveness on two applications: l_1-regularized logistic regression for classification and cooperative spectrum sensing for cognitive radio networks.

  15. A STRICTLY CONTRACTIVE PEACEMAN-RACHFORD SPLITTING METHOD FOR CONVEX PROGRAMMING.

    PubMed

    Bingsheng, He; Liu, Han; Wang, Zhaoran; Yuan, Xiaoming

    2014-07-01

    In this paper, we focus on the application of the Peaceman-Rachford splitting method (PRSM) to a convex minimization model with linear constraints and a separable objective function. Compared to the Douglas-Rachford splitting method (DRSM), another splitting method from which the alternating direction method of multipliers originates, PRSM requires more restrictive assumptions to ensure its convergence, while it is always faster whenever it is convergent. We first illustrate that the reason for this difference is that the iterative sequence generated by DRSM is strictly contractive, while that generated by PRSM is only contractive with respect to the solution set of the model. With only the convexity assumption on the objective function of the model under consideration, the convergence of PRSM is not guaranteed. But for this case, we show that the first t iterations of PRSM still enable us to find an approximate solution with an accuracy of O(1/t). A worst-case O(1/t) convergence rate of PRSM in the ergodic sense is thus established under mild assumptions. After that, we suggest attaching an underdetermined relaxation factor with PRSM to guarantee the strict contraction of its iterative sequence and thus propose a strictly contractive PRSM. A worst-case O(1/t) convergence rate of this strictly contractive PRSM in a nonergodic sense is established. We show the numerical efficiency of the strictly contractive PRSM by some applications in statistical learning and image processing.
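
    A minimal sketch of the strictly contractive PRSM, assuming the simplest splitting min ||x - a||^2 + ||z - b||^2 subject to x = z, so both subproblems are closed-form; alpha in (0, 1) is the relaxation factor that makes the iteration strictly contractive. Problem data are illustrative.

      import numpy as np

      a, b = np.array([3.0, -1.0]), np.array([1.0, 1.0])
      beta, alpha = 1.0, 0.9        # penalty parameter, relaxation factor
      x = z = lam = np.zeros(2)
      for _ in range(100):
          # x-step: argmin ||x-a||^2 - lam.(x-z) + (beta/2)||x-z||^2
          x = (2 * a + lam + beta * z) / (2 + beta)
          lam = lam - alpha * beta * (x - z)   # first, damped dual update
          # z-step: argmin ||z-b||^2 - lam.(x-z) + (beta/2)||x-z||^2
          z = (2 * b - lam + beta * x) / (2 + beta)
          lam = lam - alpha * beta * (x - z)   # second, damped dual update
      print(x, z, (a + b) / 2)      # both iterates approach (a + b)/2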

  16. The role of convexity in perception of symmetry and in visual short-term memory.

    PubMed

    Bertamini, Marco; Helmy, Mai Salah; Hulleman, Johan

    2013-01-01

    Visual perception of shape is affected by coding of local convexities and concavities. For instance, a recent study reported that deviations from symmetry carried by convexities were easier to detect than deviations carried by concavities. We removed some confounds and extended this work from a detection of reflection of a contour (i.e., bilateral symmetry), to a detection of repetition of a contour (i.e., translational symmetry). We tested whether any convexity advantage is specific to bilateral symmetry in a two-interval (Experiment 1) and a single-interval (Experiment 2) detection task. In both, we found a convexity advantage only for repetition. When we removed the need to choose which region of the contour to monitor (Experiment 3) the effect disappeared. In a second series of studies, we again used shapes with multiple convex or concave features. Participants performed a change detection task in which only one of the features could change. We did not find any evidence that convexities are special in visual short-term memory, when the to-be-remembered features only changed shape (Experiment 4), when they changed shape and changed from concave to convex and vice versa (Experiment 5), or when these conditions were mixed (Experiment 6). We did find a small advantage for coding convexity as well as concavity over an isolated (and thus ambiguous) contour. The latter is consistent with the known effect of closure on processing of shape. We conclude that convexity plays a role in many perceptual tasks but that it does not have a basic encoding advantage over concavity.

  17. Point-in-convex polygon and point-in-convex polyhedron algorithms with O(1) complexity using space subdivision

    NASA Astrophysics Data System (ADS)

    Skala, Vaclav

    2016-06-01

    There are many space subdivision and space partitioning techniques used in many algorithms to speed up computations. They mostly rely on orthogonal space subdivision or on hierarchical data structures, e.g. BSP trees, quadtrees, octrees, kd-trees, bounding volume hierarchies, etc. However, in some applications a non-orthogonal space subdivision can offer new ways to achieve an actual speed-up. In the case of a convex polygon in E^2, a simple Point-in-Polygon test has O(N) complexity, and the optimal algorithm has O(log N) computational complexity. In the E^3 case, the complexity is O(N) even for a convex polyhedron, as no ordering is defined. New Point-in-Convex Polygon and Point-in-Convex Polyhedron algorithms are presented, based on space subdivision in a preprocessing stage, resulting in O(1) run-time complexity. The presented approach is simple to implement. Due to the principle of duality, dual problems, e.g. line-convex polygon intersection or line clipping, can be solved in a similar way.
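
    For reference, the O(log N) baseline that the abstract contrasts with is the classic wedge binary search below; the paper's contribution replaces the search with an O(1) lookup into a precomputed, non-orthogonal space subdivision. The hexagon is illustrative.

      import numpy as np

      poly = np.array([[2, 0], [1, 2], [-1, 2], [-2, 0],
                       [-1, -2], [1, -2]], float)   # convex, CCW order

      def cross(o, a, b):
          return (a[0]-o[0])*(b[1]-o[1]) - (a[1]-o[1])*(b[0]-o[0])

      def inside_convex(p, poly):
          n = len(poly)
          # reject points outside the wedge spanned at vertex 0
          if cross(poly[0], poly[1], p) < 0 or cross(poly[0], poly[n-1], p) > 0:
              return False
          lo, hi = 1, n - 1
          while hi - lo > 1:                 # binary search over the fan
              mid = (lo + hi) // 2
              if cross(poly[0], poly[mid], p) >= 0:
                  lo = mid
              else:
                  hi = mid
          return cross(poly[lo], poly[hi], p) >= 0   # final edge test

      print(inside_convex((0.0, 0.0), poly), inside_convex((3.0, 0.0), poly))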

  18. Convex foundations for generalized MaxEnt models

    NASA Astrophysics Data System (ADS)

    Frongillo, Rafael; Reid, Mark D.

    2014-12-01

    We present an approach to maximum entropy models that highlights the convex geometry and duality of generalized exponential families (GEFs) and their connection to Bregman divergences. Using our framework, we are able to resolve a puzzling aspect of the bijection of Banerjee and coauthors between classical exponential families and what they call regular Bregman divergences. Their regularity condition rules out all but Bregman divergences generated from log-convex generators. We recover their bijection and show that a much broader class of divergences correspond to GEFs via two key observations: 1) Like classical exponential families, GEFs have a "cumulant" C whose subdifferential contains the mean: E_{o ~ p_θ}[φ(o)] ∈ ∂C(θ); 2) Generalized relative entropy is a C-Bregman divergence between parameters: D_F(p_θ, p_{θ'}) = D_C(θ, θ'), where D_F becomes the KL divergence for F = -H. We also show that every incomplete market with cost function C can be expressed as a complete market, where the prices are constrained to be a GEF with cumulant C. This provides an entirely new interpretation of prediction markets, relating their design back to the principle of maximum entropy.
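
    For reference, the Bregman divergence generated by a convex F is, in LaTeX,

      D_F(p, q) \;=\; F(p) - F(q) - \langle \nabla F(q),\, p - q \rangle ;

    with F = -H (the negative Shannon entropy) this reduces to the KL divergence, which is the sense in which the generalized relative entropy above is a C-Bregman divergence between parameters.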

  19. An enhanced SOCP-based method for feeder load balancing using the multi-terminal soft open point in active distribution networks

    DOE PAGES

    Ji, Haoran; Wang, Chengshan; Li, Peng; ...

    2017-09-20

    The integration of distributed generators (DGs) exacerbates the feeder power flow fluctuation and load unbalanced condition in active distribution networks (ADNs). The unbalanced feeder load causes inefficient use of network assets and network congestion during system operation. The flexible interconnection based on the multi-terminal soft open point (SOP) significantly benefits the operation of ADNs. The multi-terminal SOP, which is a controllable power electronic device installed to replace the normally open point, provides accurate active and reactive power flow control to enable the flexible connection of feeders. An enhanced SOCP-based method for feeder load balancing using the multi-terminal SOP is proposed in this paper. Furthermore, by regulating the operation of the multi-terminal SOP, the proposed method can mitigate the unbalanced condition of feeder load and simultaneously reduce the power losses of ADNs. Then, the original non-convex model is converted into a second-order cone programming (SOCP) model using convex relaxation. In order to tighten the SOCP relaxation and improve the computation efficiency, an enhanced SOCP-based approach is developed to solve the proposed model. Finally, case studies are performed on the modified IEEE 33-node system to verify the effectiveness and efficiency of the proposed method.
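
    In branch-flow models of this kind, the convexification step is typically the rotated second-order cone relaxation below (a standard form, shown for orientation; the paper's exact model is not reproduced in the abstract). In LaTeX,

      \frac{P_{ij}^2 + Q_{ij}^2}{v_i} \;\le\; \ell_{ij}
      \quad\Longleftrightarrow\quad
      \left\| \left( 2P_{ij},\; 2Q_{ij},\; \ell_{ij} - v_i \right) \right\|_2
        \;\le\; \ell_{ij} + v_i ,

    where P, Q, v, and ℓ denote branch active power, reactive power, squared voltage magnitude, and squared current magnitude; tightening techniques aim to make this inequality bind at the optimum.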

  1. One cutting plane algorithm using auxiliary functions

    NASA Astrophysics Data System (ADS)

    Zabotin, I. Ya; Kazaeva, K. E.

    2016-11-01

    We propose an algorithm for solving a convex programming problem from the class of cutting methods. The algorithm is characterized by the construction of approximations using certain auxiliary functions instead of the objective function. Each auxiliary function is based on an exterior penalty function. In the proposed algorithm, the admissible set and the epigraph of each auxiliary function are embedded in polyhedral sets. Accordingly, the iteration points are found by solving linear programming problems. We discuss the implementation of the algorithm and prove its convergence.
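
    The flavor of cutting-plane iterations can be seen in the Kelley-style sketch below, which cuts with the objective itself; the proposed algorithm instead builds its cuts from exterior-penalty-based auxiliary functions, so this is only an illustration of the polyhedral-approximation mechanism.

      import numpy as np
      from scipy.optimize import linprog

      f = lambda x: x**2 + abs(x - 1)            # convex objective on [-3, 3]
      g = lambda x: 2*x + (1 if x >= 1 else -1)  # a subgradient of f

      cuts = []                                  # (slope, intercept) pairs
      x = 3.0
      for _ in range(30):
          cuts.append((g(x), f(x) - g(x) * x))   # cut: t >= slope*x + intercept
          A = [[s, -1.0] for s, _ in cuts]       # LP in (x, t): minimize t
          b = [-c for _, c in cuts]
          res = linprog([0, 1], A_ub=A, b_ub=b,
                        bounds=[(-3, 3), (None, None)])
          x = res.x[0]
      print(x, f(x))                             # approaches x = 0.5, f = 0.75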

  2. First Evaluation of the New Thin Convex Probe Endobronchial Ultrasound Scope: A Human Ex Vivo Lung Study.

    PubMed

    Patel, Priya; Wada, Hironobu; Hu, Hsin-Pei; Hirohashi, Kentaro; Kato, Tatsuya; Ujiie, Hideki; Ahn, Jin Young; Lee, Daiyoon; Geddie, William; Yasufuku, Kazuhiro

    2017-04-01

    Endobronchial ultrasonography (EBUS)-guided transbronchial needle aspiration allows for sampling of mediastinal lymph nodes. The external diameter, rigidity, and angulation of the convex probe EBUS limit its accessibility. This study compares the accessibility and transbronchial needle aspiration capability of the prototype thin convex probe EBUS against the convex probe EBUS in human ex vivo lungs rejected for transplant. The prototype thin convex probe EBUS (BF-Y0055; Olympus, Tokyo, Japan), with a thinner tip (5.9 mm), greater upward angle (170 degrees), and decreased forward-oblique direction of view (20 degrees), was compared with the current convex probe EBUS (6.9-mm tip, 120 degrees, and 35 degrees, respectively). Accessibility and transbronchial needle aspiration capability were assessed in ex vivo human lungs declined for lung transplant. The distance of maximum reach and the sustainable endoscopic limit were measured. Transbronchial needle aspiration capability was assessed using the prototype 25G aspiration needle in segmental lymph nodes. In all evaluated lungs (n = 5), the thin convex probe EBUS demonstrated greater reach and a higher success rate, averaging 22.1 mm greater maximum reach and a 10.3 mm further endoscopic visibility range than the convex probe EBUS, and could selectively assess almost all segmental bronchi (98% right, 91% left), demonstrating nearly twice the accessibility of the convex probe EBUS (48% right, 47% left). The prototype successfully enabled cytologic assessment of subsegmental lymph nodes with adequate quality using the dedicated 25G aspiration needle. The thin convex probe EBUS has greater accessibility to peripheral airways in human lungs and is capable of sampling segmental lymph nodes using the aspiration needle. This will allow for more precise assessment of N1 nodes and, possibly, intrapulmonary lesions normally inaccessible to the conventional convex probe EBUS. Copyright © 2017 The Society of Thoracic Surgeons. Published by Elsevier Inc. All rights reserved.

  3. Large aluminium convex mirror for the cryo-optical test of the Planck primary reflector

    NASA Astrophysics Data System (ADS)

    Gloesener, P.; Flébus, C.; Cola, M.; Roose, S.; Stockman, Y.; de Chambure, D.

    2017-11-01

    In the frame of the PLANCK mission telescope development, the reflector's changes in surface figure error (SFE) with respect to the best-fit ellipsoid, between 293 K and 50 K, must be measured with 1 μm RMS accuracy. To achieve this, infrared interferometry was selected and a dedicated thermo-mechanical set-up was constructed. In order to realise the test set-up for this reflector, a large aluminium convex mirror with a radius of 19500 mm has been manufactured. The mirror has to operate in a cryogenic environment below 30 K and must contribute less than 1 μm RMS to the WFE between room temperature and cryogenic temperature. This paper summarises the design, manufacturing and characterisation of this mirror, showing that it has fulfilled its requirements.

  4. Optimal Power Flow for Distribution Systems under Uncertain Forecasts: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dall'Anese, Emiliano; Baker, Kyri; Summers, Tyler

    2016-12-01

    The paper focuses on distribution systems featuring renewable energy sources and energy storage devices, and develops an optimal power flow (OPF) approach to optimize the system operation in spite of forecasting errors. The proposed method builds on a chance-constrained multi-period AC OPF formulation, where probabilistic constraints are utilized to enforce voltage regulation with a prescribed probability. To enable a computationally affordable solution approach, a convex reformulation of the OPF task is obtained by resorting to i) pertinent linear approximations of the power flow equations, and ii) convex approximations of the chance constraints. Particularly, the approximate chance constraints provide conservative bounds that hold for arbitrary distributions of the forecasting errors. An adaptive optimization strategy is then obtained by embedding the proposed OPF task into a model predictive control framework.
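
    One standard distribution-free convex approximation of this kind (quoted for illustration, not necessarily the paper's exact construction): if the uncertain vector a has known mean ā and covariance Σ but otherwise arbitrary distribution, then

        \[
        \inf_{a \sim (\bar a,\, \Sigma)} \Pr\!\left( a^{\top} x \le b \right) \ge 1 - \varepsilon
        \quad\Longleftrightarrow\quad
        \bar a^{\top} x + \sqrt{\tfrac{1-\varepsilon}{\varepsilon}}\, \bigl\| \Sigma^{1/2} x \bigr\|_2 \le b ,
        \]

    a second-order cone constraint that conservatively enforces the chance constraint for every distribution with the given first two moments, matching the "arbitrary distributions" guarantee in the abstract.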

  5. Duality in non-linear programming

    NASA Astrophysics Data System (ADS)

    Jeyalakshmi, K.

    2018-04-01

    In this paper we consider duality and converse duality for a programming problem involving convex objective and constraint functions with finite-dimensional range. We do not assume any constraint qualification. The dual is presented by reducing the problem to a standard Lagrange multiplier problem.
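
    For reference, the standard Lagrange-multiplier construction the abstract alludes to: for min f(x) subject to g(x) ≤ 0 with f and g convex,

        \[
        L(x,\lambda) = f(x) + \lambda^{\top} g(x), \qquad
        q(\lambda) = \inf_{x} L(x,\lambda), \qquad
        \text{dual problem:}\ \max_{\lambda \ge 0} q(\lambda),
        \]

    with weak duality q(λ) ≤ f(x) holding for every feasible x and every λ ≥ 0 without any constraint qualification; constraint qualifications enter only when strong duality (a zero duality gap) is needed.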

  6. Entropy-functional-based online adaptive decision fusion framework with application to wildfire detection in video.

    PubMed

    Gunay, Osman; Toreyin, Behçet Ugur; Kose, Kivanc; Cetin, A Enis

    2012-05-01

    In this paper, an entropy-functional-based online adaptive decision fusion (EADF) framework is developed for image analysis and computer vision applications. In this framework, it is assumed that the compound algorithm consists of several subalgorithms, each of which yields its own decision as a real number centered around zero, representing the confidence level of that particular subalgorithm. Decision values are linearly combined with weights that are updated online according to an active fusion method based on performing entropic projections onto convex sets describing subalgorithms. It is assumed that there is an oracle, who is usually a human operator, providing feedback to the decision fusion method. A video-based wildfire detection system was developed to evaluate the performance of the decision fusion algorithm. In this case, image data arrive sequentially, and the oracle is the security guard of the forest lookout tower, verifying the decision of the combined algorithm. The simulation results are presented.

  7. Probabilistic distance-based quantizer design for distributed estimation

    NASA Astrophysics Data System (ADS)

    Kim, Yoon Hak

    2016-12-01

    We consider the iterative design of independently operating local quantizers at nodes that must cooperate without interaction to achieve application objectives in distributed estimation systems. As a new cost function we suggest a probabilistic distance between the posterior distribution and its quantized version, expressed as the Kullback–Leibler (KL) divergence. We first show that minimizing the KL divergence in the cyclic generalized Lloyd design framework is equivalent to maximizing the expected logarithmic quantized posterior distribution, which can be further simplified computationally in our iterative design. We propose an iterative design algorithm that seeks to maximize this simplified version of the quantized posterior distribution, argue that our algorithm converges to a global optimum due to the convexity of the cost function, and show that it generates the most informative quantized measurements. We also provide an independent encoding technique that enables minimization of the cost function and can be efficiently simplified for practical use in power-constrained nodes. We finally demonstrate through extensive experiments a clear advantage in estimation performance over typical designs and previously published design techniques.
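
    The claimed equivalence follows from a one-line identity; with p the posterior and p̂ its quantized version,

        \[
        D_{\mathrm{KL}}(p \,\|\, \hat p) = \mathbb{E}_{p}[\log p] - \mathbb{E}_{p}[\log \hat p],
        \]

    and since the first term does not depend on the quantizer, minimizing the KL divergence over quantizers is the same as maximizing the expected log quantized posterior E_p[log p̂].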

  8. [Design method of convex master gratings for replicating flat-field concave gratings].

    PubMed

    Zhou, Qian; Li, Li-Feng

    2009-08-01

    Flat-field concave diffraction gratings are the key devices of portable grating spectrometers, with the advantage of integrating dispersion, focusing and flat-field correction in a single device. Such a grating directly determines the quality of a spectrometer. The two most important performance measures determining the quality of the spectrometer are spectral image quality and diffraction efficiency. The diffraction efficiency of a grating depends mainly on its groove shape. It has long been a problem, however, to obtain a uniform predetermined groove shape across the whole concave grating area, because the incident angle of the ion beam is restricted by the curvature of the concave substrate; this severely limits the diffraction efficiency and restricts the application of concave gratings. The authors present a two-step method for designing convex gratings, which are made holographically with two exposure point sources placed behind a plano-convex transparent glass substrate, to solve this problem. The convex gratings are intended to be used as master gratings for making aberration-corrected flat-field concave gratings. To achieve high spectral image quality for the replicated concave gratings, the refraction at the planar back surface and the extra optical path lengths through the substrate thickness experienced by the two divergent recording beams are considered during optimization. This two-step method combines the optical-path-length function method with the ZEMAX software to complete the optimization with a high success rate and high efficiency. In the first step, the optical-path-length function method is used, without considering the refraction effect, to obtain an approximate optimization result. In the second step, the approximate result of the first step is used as the initial value for ZEMAX to complete the optimization including the refraction effect. An example design problem is considered. The ZEMAX simulation results show that the spectral image quality of a replicated concave grating is comparable to that of a directly recorded concave grating.

  9. Non-convex optimization for self-calibration of direction-dependent effects in radio interferometric imaging

    NASA Astrophysics Data System (ADS)

    Repetti, Audrey; Birdi, Jasleen; Dabbech, Arwa; Wiaux, Yves

    2017-10-01

    Radio interferometric imaging aims to estimate an unknown sky intensity image from degraded observations, acquired through an antenna array. In the theoretical case of a perfectly calibrated array, it has been shown that solving the corresponding imaging problem by iterative algorithms based on convex optimization and compressive sensing theory can be competitive with classical algorithms such as CLEAN. However, in practice, antenna-based gains are unknown and have to be calibrated. Future radio telescopes, such as the Square Kilometre Array, aim at improving imaging resolution and sensitivity by orders of magnitude. At this precision level, the direction-dependency of the gains must be accounted for, and radio interferometric imaging can be understood as a blind deconvolution problem. In this context, the underlying minimization problem is non-convex, and adapted techniques have to be designed. In this work, leveraging recent developments in non-convex optimization, we propose the first joint calibration and imaging method in radio interferometry, with proven convergence guarantees. Our approach, based on a block-coordinate forward-backward algorithm, jointly accounts for visibilities and suitable priors on both the image and the direction-dependent effects (DDEs). As demonstrated in recent works, sparsity remains the prior of choice for the image, while DDEs are modelled as smooth functions of the sky, i.e. spatially band-limited. Finally, we show through simulations the efficiency of our method, for the reconstruction of both images of point sources and complex extended sources. MATLAB code is available on GitHub.

  10. Giant convexity chondroma with meningeal attachment.

    PubMed

    Feierabend, Denise; Maksoud, Salah; Lawson McLean, Aaron; Koch, Arend; Kalff, Rolf; Walter, Jan

    2018-06-01

    Intracranial chondroma is a rare and benign tumor with usual onset in young adulthood. The skull base is the most common site of occurrence although, less often, the tumors can appear at the falx cerebri or at the dural convexity. The differentiation of these lesions from meningiomas through imaging is generally difficult. Clinical case presentation and review of the current literature. We report a case of a 25-year-old male patient with a giant convexity chondroma with meningeal attachment in the right frontal lobe that was detected after a first generalized seizure. Based on the putative diagnosis of meningioma, the tumor was completely resected via an osteoplastic parasagittal craniotomy. The postoperative MRI confirmed the complete tumor resection. Histopathological analysis revealed the presence of a chondroma. Intracranial chondromas are a rarity and their preoperative diagnosis based on neuroimaging is difficult. In young patients and those with skeletal disease, the differential diagnosis of a chondroma should be considered. In symptomatic patients, operative resection is sensible. In most cases total removal of the tumor is possible and leads to full recovery. When the finding is merely incidental in older patients, a watchful waiting approach is acceptable, given the benign and slow-growing nature of the lesion. Copyright © 2018 Elsevier B.V. All rights reserved.

  11. Weighted mining of massive collections of p-values by convex optimization.

    PubMed

    Dobriban, Edgar

    2018-06-01

    Researchers in data-rich disciplines, such as computational genomics and observational cosmology, often wish to mine large bodies of p-values looking for significant effects, while controlling the false discovery rate or family-wise error rate. Increasingly, researchers also wish to prioritize certain hypotheses, for example, those thought to have larger effect sizes, by upweighting, and to impose constraints on the underlying mining, such as monotonicity along a certain sequence. We introduce Princessp, a principled method for performing weighted multiple testing by constrained convex optimization. Our method elegantly allows one to prioritize certain hypotheses through upweighting and to discount others through downweighting, while constraining the underlying weights involved in the mining process. When the p-values derive from monotone likelihood ratio families such as the Gaussian means model, the new method allows exact solution of an important optimal weighting problem previously thought to be non-convex and computationally infeasible. Our method scales to massive data set sizes. We illustrate the applications of Princessp on a series of standard genomics data sets and offer comparisons with several previous 'standard' methods. Princessp offers both ease of operation and the ability to scale to extremely large problem sizes. The method is available as open-source software from github.com/dobriban/pvalue_weighting_matlab (accessed 11 October 2017).

  12. Means and the Mean Value Theorem

    ERIC Educational Resources Information Center

    Merikoski, Jorma K.; Halmetoja, Markku; Tossavainen, Timo

    2009-01-01

    Let I be a real interval. We call a continuous function μ : I × I → ℝ a proper mean if it is symmetric, reflexive, homogeneous, monotonic and internal. Let f : I → ℝ be a differentiable and strictly convex or strictly concave function. If a, b ∈ I with a ≠ b, then there exists a…

  13. Effect of Deviated Nasal Septum Type on Nasal Mucociliary Clearance, Olfactory Function, Quality of Life, and Efficiency of Nasal Surgery.

    PubMed

    Berkiten, Güler; Kumral, Tolgar Lütfi; Saltürk, Ziya; Atar, Yavuz; Yildirim, Güven; Uyar, Yavuz; Aydoğdu, Imran; Arslanoğlu, Ahmet

    2016-07-01

    The aim of this study was to analyze the influence of deviated nasal septum (DNS) type on nasal mucociliary clearance, quality of life (QoL), olfactory function, and efficiency of nasal surgery (septoplasty with or without inferior turbinate reduction and partial middle turbinectomy). Fifty patients (20 females and 30 males) with septal deviation were included in the study and were divided into 6 groups according to deviation type after examination by nasal endoscopy and paranasal computed tomography. The saccharin clearance test to evaluate the nasal mucociliary clearance time, Connecticut Chemosensory Clinical Research Center smell test for olfactory function, and sinonasal outcome test-22 (SNOT-22) for patient satisfaction were applied preoperatively and postoperatively at the sixth week after surgery. Nasal mucociliary clearance, smell, and SNOT-22 scores were measured before surgery and at the sixth week following surgery. No significant difference was found in olfactory and SNOT-22 scores for any of the DNS types (both convex and concave sides) (P > 0.05). In addition, there was no difference in the saccharin clearance time (SCT) of the concave and convex sides (P > 0.05). According to the DNS type, the mean SCT of the convex sides showed no difference, but that of the concave sides showed a difference in types 3, 4, 5, and 6. These types had a prolonged SCT (P < 0.05). Olfactory scores revealed no difference postoperatively in types 5 and 6 but were decreased significantly in types 1 to 4 (P < 0.05). There was no significant difference in the healing of both the mucociliary clearance (MCC) and olfactory functions. SNOT-22 results showed a significant decrease in type 3. All DNS types disturb the QoL regarding nasal MCC and olfaction functions. MCC values, olfactory function, and QoL scores are similar among the DNS types. Both sides of the DNS types affect the MCC scores symmetrically. Septal surgery improves olfaction function and QoL at the sixth week following surgery but disturbs nasal MCC; thus, the sixth week is too early to assess nasal MCC.

  14. Parameterized LMI Based Diagonal Dominance Compensator Study for Polynomial Linear Parameter Varying System

    NASA Astrophysics Data System (ADS)

    Han, Xiaobao; Li, Huacong; Jia, Qiusheng

    2017-12-01

    For dynamic decoupling of a polynomial linear parameter varying (PLPV) system, a robust dominance pre-compensator design method is given. The parameterized pre-compensator design problem is converted into an optimization problem constrained by parameterized linear matrix inequalities (PLMIs) by using the concept of a parameterized Lyapunov function (PLF). To solve the PLMI-constrained optimization problem, the pre-compensator design problem is reduced to a standard convex optimization problem with ordinary linear matrix inequality (LMI) constraints on a newly constructed convex polyhedron. Moreover, a parameter-scheduled pre-compensator is obtained, which satisfies both robust performance and decoupling requirements. Finally, the feasibility and validity of the robust diagonal dominance pre-compensator design method are verified by numerical simulation on a turbofan engine PLPV model.

  15. On flows of viscoelastic fluids under threshold-slip boundary conditions

    NASA Astrophysics Data System (ADS)

    Baranovskii, E. S.

    2018-03-01

    We investigate a boundary-value problem for the steady isothermal flow of an incompressible viscoelastic fluid of Oldroyd type in a 3D bounded domain with impermeable walls. We use the Fujita threshold-slip boundary condition. This condition states that the fluid can slip along a solid surface when the shear stresses reach a certain critical value; otherwise the slipping velocity is zero. Assuming that the flow domain is not rotationally symmetric, we prove an existence theorem for the corresponding slip problem in the framework of weak solutions. The proof uses methods for solving variational inequalities with pseudo-monotone operators and convex functionals, the method of introduction of auxiliary viscosity, as well as a passage-to-limit procedure based on energy estimates of approximate solutions, Korn’s inequality, and compactness arguments. Also, some properties and estimates of weak solutions are established.

  16. Some properties of the Catalan-Qi function related to the Catalan numbers.

    PubMed

    Qi, Feng; Mahmoud, Mansour; Shi, Xiao-Ting; Liu, Fang-Fang

    2016-01-01

    In the paper, the authors find some properties of the Catalan numbers, the Catalan function, and the Catalan-Qi function which is a generalization of the Catalan numbers. Concretely speaking, the authors present a new expression, asymptotic expansions, integral representations, logarithmic convexity, complete monotonicity, minimality, logarithmically complete monotonicity, a generating function, and inequalities of the Catalan numbers, the Catalan function, and the Catalan-Qi function. As by-products, an exponential expansion and a double inequality for the ratio of two gamma functions are derived.

  17. A STRICTLY CONTRACTIVE PEACEMAN–RACHFORD SPLITTING METHOD FOR CONVEX PROGRAMMING

    PubMed Central

    BINGSHENG, HE; LIU, HAN; WANG, ZHAORAN; YUAN, XIAOMING

    2014-01-01

    In this paper, we focus on the application of the Peaceman–Rachford splitting method (PRSM) to a convex minimization model with linear constraints and a separable objective function. Compared to the Douglas–Rachford splitting method (DRSM), another splitting method from which the alternating direction method of multipliers originates, PRSM requires more restrictive assumptions to ensure its convergence, while it is always faster whenever it is convergent. We first illustrate that the reason for this difference is that the iterative sequence generated by DRSM is strictly contractive, while that generated by PRSM is only contractive with respect to the solution set of the model. With only the convexity assumption on the objective function of the model under consideration, the convergence of PRSM is not guaranteed. But for this case, we show that the first t iterations of PRSM still enable us to find an approximate solution with an accuracy of O(1/t). A worst-case O(1/t) convergence rate of PRSM in the ergodic sense is thus established under mild assumptions. After that, we suggest attaching an underdetermined relaxation factor with PRSM to guarantee the strict contraction of its iterative sequence and thus propose a strictly contractive PRSM. A worst-case O(1/t) convergence rate of this strictly contractive PRSM in a nonergodic sense is established. We show the numerical efficiency of the strictly contractive PRSM by some applications in statistical learning and image processing. PMID:25620862
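
    For min f(x) + g(y) subject to Ax + By = b, the strictly contractive iteration described above can be written as follows (all symbols A, B, b, β, α are generic notation assumed here; α ∈ (0,1) is the relaxation factor, and α = 1 recovers the original PRSM):

        \[
        \begin{aligned}
        x^{k+1} &= \arg\min_{x}\, L_{\beta}(x,\, y^{k},\, \lambda^{k}),\\
        \lambda^{k+\frac12} &= \lambda^{k} - \alpha\beta\,(A x^{k+1} + B y^{k} - b),\\
        y^{k+1} &= \arg\min_{y}\, L_{\beta}(x^{k+1},\, y,\, \lambda^{k+\frac12}),\\
        \lambda^{k+1} &= \lambda^{k+\frac12} - \alpha\beta\,(A x^{k+1} + B y^{k+1} - b),
        \end{aligned}
        \]

    where L_β(x, y, λ) = f(x) + g(y) − λᵀ(Ax + By − b) + (β/2)‖Ax + By − b‖². DRSM (and hence ADMM) omits the intermediate multiplier update, which is the structural difference behind the contraction properties discussed in the abstract.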

  18. Segmentation-based wavelet transform for still-image compression

    NASA Astrophysics Data System (ADS)

    Mozelle, Gerard; Seghier, Abdellatif; Preteux, Francoise J.

    1996-10-01

    In order to address simultaneously two functionalities required by MPEG-4, including content-based scalability, we introduce a segmentation-based wavelet transform (SBWT). SBWT takes into account both the mathematical properties of multiresolution analysis and the flexibility of region-based approaches to image compression. The associated methodology has two stages: 1) image segmentation into convex, polygonal regions; 2) a 2D wavelet transform of the signal corresponding to each region. In this paper, we have mathematically studied a method for constructing a multiresolution analysis (V_j(Ω))_{j∈ℕ} adapted to a polygonal region, which provides adaptive region-based filtering. The explicit construction of scaling functions, pre-wavelets and orthonormal wavelet bases defined on a polygon is carried out using the theory of Toeplitz operators. The corresponding expression can be interpreted as a location property which allows interior and boundary scaling functions to be defined. Concerning orthonormal wavelets and pre-wavelets, a similar expansion is obtained by taking advantage of the properties of the orthogonal projector P_{(V_j(Ω))^⊥} from the space V_{j+1}(Ω) onto the space (V_j(Ω))^⊥. Finally, the mathematical results provide a simple and fast algorithm adapted to polygonal regions.

  19. CVXPY: A Python-Embedded Modeling Language for Convex Optimization.

    PubMed

    Diamond, Steven; Boyd, Stephen

    2016-04-01

    CVXPY is a domain-specific language for convex optimization embedded in Python. It allows the user to express convex optimization problems in a natural syntax that follows the math, rather than in the restrictive standard form required by solvers. CVXPY makes it easy to combine convex optimization with high-level features of Python such as parallelism and object-oriented design. CVXPY is available at http://www.cvxpy.org/ under the GPL license, along with documentation and examples.
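
    A minimal usage sketch of the library described above; the problem data here are synthetic, made up purely for illustration:

        import cvxpy as cp
        import numpy as np

        # Synthetic data: a small constrained least-squares instance.
        rng = np.random.default_rng(0)
        A = rng.standard_normal((30, 10))
        b = rng.standard_normal(30)

        # Declare the variable, objective, and constraints in math-like syntax.
        x = cp.Variable(10)
        objective = cp.Minimize(cp.sum_squares(A @ x - b))
        constraints = [x >= 0, cp.sum(x) == 1]

        # CVXPY verifies convexity (DCP rules), compiles the problem for a
        # solver, and solves it.
        prob = cp.Problem(objective, constraints)
        prob.solve()
        print(prob.status, prob.value)
        print(x.value)  # the optimal point, returned as a NumPy array

    The point of the syntax is that the objective and constraints are written essentially as they appear on paper, rather than being manually converted to a solver's standard conic form.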

  20. Building Energy Modeling and Control Methods for Optimization and Renewables Integration

    NASA Astrophysics Data System (ADS)

    Burger, Eric M.

    This dissertation presents techniques for the numerical modeling and control of building systems, with an emphasis on thermostatically controlled loads. The primary objective of this work is to address technical challenges related to the management of energy use in commercial and residential buildings. This work is motivated by the need to enhance the performance of building systems and by the potential for aggregated loads to perform load following and regulation ancillary services, thereby enabling the further adoption of intermittent renewable energy generation technologies. To increase the generalizability of the techniques, an emphasis is placed on recursive and adaptive methods which minimize the need for customization to specific buildings and applications. The techniques presented in this dissertation can be divided into two general categories: modeling and control. Modeling techniques encompass the processing of data streams from sensors and the training of numerical models. These models enable us to predict the energy use of a building and of sub-systems, such as a heating, ventilation, and air conditioning (HVAC) unit. Specifically, we first present an ensemble learning method for the short-term forecasting of total electricity demand in buildings. As the deployment of intermittent renewable energy resources continues to rise, the generation of accurate building-level electricity demand forecasts will be valuable to both grid operators and building energy management systems. Second, we present a recursive parameter estimation technique for identifying a thermostatically controlled load (TCL) model that is non-linear in the parameters. For TCLs to perform demand response services in real-time markets, online methods for parameter estimation are needed. Third, we develop a piecewise linear thermal model of a residential building and train the model using data collected from a custom-built thermostat. This model is capable of approximating unmodeled dynamics within a building by learning from sensor data. Control techniques encompass the application of optimal control theory, model predictive control, and convex distributed optimization to TCLs. First, we present the alternative control trajectory (ACT) representation, a novel method for the approximate optimization of non-convex discrete systems. This approach enables the optimal control of a population of non-convex agents using distributed convex optimization techniques. Second, we present a distributed convex optimization algorithm for the control of a TCL population. Experimental results demonstrate the application of this algorithm to the problem of renewable energy generation following. This dissertation contributes to the development of intelligent energy management systems for buildings by presenting a suite of novel and adaptable modeling and control techniques. Applications focus on optimizing the performance of building operations and on facilitating the integration of renewable energy resources.

  1. SU-F-T-340: Direct Editing of Dose Volume Histograms: Algorithms and a Unified Convex Formulation for Treatment Planning with Dose Constraints

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ungun, B; Stanford University School of Medicine, Stanford, CA; Fu, A

    2016-06-15

    Purpose: To develop a procedure for including dose constraints in convex programming-based approaches to treatment planning, and to support dynamic modification of such constraints during planning. Methods: We present a mathematical approach that allows mean dose, maximum dose, minimum dose and dose volume (i.e., percentile) constraints to be appended to any convex formulation of an inverse planning problem. The first three constraint types are convex and readily incorporated. Dose volume constraints are not convex, however, so we introduce a convex restriction that is related to CVaR-based approaches previously proposed in the literature. To compensate for the conservatism of this restriction, we propose a new two-pass algorithm that solves the restricted problem on a first pass and uses this solution to form exact constraints on a second pass. In another variant, we introduce slack variables for each dose constraint to prevent the problem from becoming infeasible when the user specifies an incompatible set of constraints. We implement the proposed methods in Python using the convex programming package cvxpy in conjunction with the open source convex solvers SCS and ECOS. Results: We show, for several cases taken from the clinic, that our proposed method meets specified constraints (often with margin) when they are feasible. Constraints are met exactly when we use the two-pass method, and infeasible constraints are replaced with the nearest feasible constraint when slacks are used. Finally, we introduce ConRad, a Python-embedded free software package for convex radiation therapy planning. ConRad implements the methods described above and offers a simple interface for specifying prescriptions and dose constraints. Conclusion: This work demonstrates the feasibility of using modifiable dose constraints in a convex formulation, making it practical to guide the treatment planning process with interactively specified dose constraints. This work was supported by the Stanford BioX Graduate Fellowship and NIH Grant 5R01CA176553.
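
    The CVaR-type restriction mentioned above can be sketched in generic Rockafellar–Uryasev-style notation (assumed here, not taken from the authors' code): for voxel doses a_iᵀx, i = 1, …, N, the dose-volume constraint "at most a fraction φ of voxels exceeds dose d" is conservatively replaced by the convex constraint

        \[
        \exists\, \zeta:\quad
        \zeta + \frac{1}{\varphi N} \sum_{i=1}^{N} \max\{0,\; a_i^{\top} x - \zeta\} \;\le\; d ,
        \]

    which bounds the mean of the worst φ-fraction of voxel doses (the CVaR) by d. Since the CVaR dominates the corresponding percentile, the original dose-volume constraint is implied; this is exactly the conservatism that the two-pass algorithm then corrects.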

  2. Neural network for nonsmooth pseudoconvex optimization with general convex constraints.

    PubMed

    Bian, Wei; Ma, Litao; Qin, Sitian; Xue, Xiaoping

    2018-05-01

    In this paper, a one-layer recurrent neural network is proposed for solving a class of nonsmooth, pseudoconvex optimization problems with general convex constraints. Based on the smoothing method, we construct a new regularization function, which does not depend on any information of the feasible region. Thanks to the special structure of the regularization function, we prove the global existence, uniqueness and "slow solution" character of the state of the proposed neural network. Moreover, the state solution of the proposed network is proved to be convergent to the feasible region in finite time and to the optimal solution set of the related optimization problem subsequently. In particular, the convergence of the state to an exact optimal solution is also considered in this paper. Numerical examples with simulation results are given to show the efficiency and good characteristics of the proposed network. In addition, some preliminary theoretical analysis and application of the proposed network for a wider class of dynamic portfolio optimization are included. Copyright © 2018 Elsevier Ltd. All rights reserved.

  3. Trading strategies for distribution company with stochastic distributed energy resources

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Chunyu; Wang, Qi; Wang, Jianhui

    2016-09-01

    This paper proposes a methodology to address the trading strategies of a proactive distribution company (PDISCO) engaged in the transmission-level (TL) markets. A one-leader multi-follower bilevel model is presented to formulate the gaming framework between the PDISCO and markets. The lower-level (LL) problems include the TL day-ahead market and scenario-based real-time markets, respectively with the objectives of maximizing social welfare and minimizing operation cost. The upper-level (UL) problem is to maximize the PDISCO’s profit across these markets. The PDISCO’s strategic offers/bids interactively influence the outcomes of each market. Since the LL problems are linear and convex, while the UL problem is non-linear and non-convex, an equivalent primal–dual approach is used to reformulate this bilevel model to a solvable mathematical program with equilibrium constraints (MPEC). The effectiveness of the proposed model is verified by case studies.

  4. CVXPY: A Python-Embedded Modeling Language for Convex Optimization

    PubMed Central

    Diamond, Steven; Boyd, Stephen

    2016-01-01

    CVXPY is a domain-specific language for convex optimization embedded in Python. It allows the user to express convex optimization problems in a natural syntax that follows the math, rather than in the restrictive standard form required by solvers. CVXPY makes it easy to combine convex optimization with high-level features of Python such as parallelism and object-oriented design. CVXPY is available at http://www.cvxpy.org/ under the GPL license, along with documentation and examples. PMID:27375369

  5. New insights in morphological analysis for managing activated sludge systems.

    PubMed

    Oliveira, Pedro; Alliet, Marion; Coufort-Saudejaud, Carole; Frances, Christine

    2018-06-01

    In the activated sludge (AS) process, the impact of the operational parameters on process efficiency is assumed to be correlated with the sludge properties. This study provides a better insight into these interactions by subjecting a laboratory-scale AS system to a sequence of operating condition modifications enabling typical situations of a wastewater treatment plant to be represented. Process performance was assessed and AS floc morphology (size, circularity, convexity, solidity and aspect ratio) was quantified by measuring 100,000 flocs per sample with an automated image analysis technique. Introducing 3D distributions, which combine morphological properties, allowed the identification of a filamentous bulking characterized by a floc population shift towards larger sizes and lower solidity and circularity values. Moreover, a washout phenomenon was characterized by smaller AS flocs and an increase in their solidity. Recycle ratio increase and COD:N ratio decrease both promoted a slight reduction of floc sizes and a constant evolution of circularity and convexity values. The analysis of the volume-based 3D distributions turned out to be a smart tool to combine size and shape data, allowing a deeper understanding of the dynamics of floc structure under process disturbances.

  6. Stochastic search, optimization and regression with energy applications

    NASA Astrophysics Data System (ADS)

    Hannah, Lauren A.

    Designing clean energy systems will be an important task over the next few decades. One of the major roadblocks is a lack of mathematical tools to economically evaluate those energy systems. However, solutions to these mathematical problems are also of interest to the operations research and statistical communities in general. This thesis studies three problems that are of interest to the energy community itself or provide support for solution methods: R&D portfolio optimization, nonparametric regression and stochastic search with an observable state variable. First, we consider the one-stage R&D portfolio optimization problem to avoid the sequential decision process associated with the multi-stage version. The one-stage problem is still difficult because of a non-convex, combinatorial decision space and a non-convex objective function. We propose a heuristic solution method that uses marginal project values (which depend on the selected portfolio) to create a linear objective function. In conjunction with the 0-1 decision space, this new problem can be solved as a knapsack linear program. This method scales well to large decision spaces. We also propose an alternate, provably convergent algorithm that does not exploit problem structure. These methods are compared on a solid oxide fuel cell R&D portfolio problem. Next, we propose Dirichlet Process mixtures of Generalized Linear Models (DP-GLM), a new method of nonparametric regression that accommodates continuous and categorical inputs, and responses that can be modeled by a generalized linear model. We prove conditions for the asymptotic unbiasedness of the DP-GLM regression mean function estimate. We also give examples for when those conditions hold, including models for compactly supported continuous distributions and a model with continuous covariates and categorical response. We empirically analyze the properties of the DP-GLM and why it provides better results than existing Dirichlet process mixture regression models. We evaluate DP-GLM on several data sets, comparing it to modern methods of nonparametric regression like CART, Bayesian trees and Gaussian processes. Compared to existing techniques, the DP-GLM provides a single model (and corresponding inference algorithms) that performs well in many regression settings. Finally, we study convex stochastic search problems where a noisy objective function value is observed after a decision is made. There are many stochastic search problems whose behavior depends on an exogenous state variable which affects the shape of the objective function. Currently, there is no general purpose algorithm to solve this class of problems. We use nonparametric density estimation to take observations from the joint state-outcome distribution and use them to infer the optimal decision for a given query state. We propose two solution methods that depend on the problem characteristics: function-based and gradient-based optimization. We examine two weighting schemes, kernel-based weights and Dirichlet process-based weights, for use with the solution methods. The weights and solution methods are tested on a synthetic multi-product newsvendor problem and the hour-ahead wind commitment problem. Our results show that in some cases Dirichlet process weights offer substantial benefits over kernel-based weights and more generally that nonparametric estimation methods provide good solutions to otherwise intractable problems.

  7. Duality of caustics in Minkowski billiards

    NASA Astrophysics Data System (ADS)

    Artstein-Avidan, S.; Florentin, D. I.; Ostrover, Y.; Rosen, D.

    2018-04-01

    In this paper we study convex caustics in Minkowski billiards. We show that for the Euclidean billiard dynamics in a planar smooth, centrally symmetric, strictly convex body K, for every convex caustic which K possesses, the ‘dual’ billiard dynamics in which the table is the Euclidean unit ball and the geometry that governs the motion is induced by the body K, possesses a dual convex caustic. Such a pair of caustics are dual in a strong sense, and in particular they have the same perimeter, Lazutkin parameter (both measured with respect to the corresponding geometries), and rotation number. We show moreover that for general Minkowski billiards this phenomenon fails, and one can construct a smooth caustic in a Minkowski billiard table which possesses no dual convex caustic.

  8. A Perron-Frobenius type of theorem for quantum operations

    NASA Astrophysics Data System (ADS)

    Lagro, Matthew

    Quantum random walks are a generalization of classical Markovian random walks to a quantum mechanical or quantum computing setting. Quantum walks have promising applications but are complicated by quantum decoherence. We prove that the long-time limiting behavior of the class of quantum operations which are the convex combination of norm one operators is governed by the eigenvectors with norm one eigenvalues which are shared by the operators. This class includes all operations formed by a coherent operation with positive probability of orthogonal measurement at each step. We also prove that any operation that has range contained in a low enough dimension subspace of the space of density operators has limiting behavior isomorphic to an associated Markov chain. A particular class of such operations are coherent operations followed by an orthogonal measurement. Applications of the convergence theorems to quantum walks are given.

  9. Virial Coefficients and Equations of State for Hard Polyhedron Fluids.

    PubMed

    Irrgang, M Eric; Engel, Michael; Schultz, Andrew J; Kofke, David A; Glotzer, Sharon C

    2017-10-24

    Hard polyhedra are a natural extension of the hard sphere model for simple fluids, but there is no general scheme for predicting the effect of shape on thermodynamic properties, even in moderate-density fluids. Only the second virial coefficient is known analytically for general convex shapes, so higher-order equations of state have been elusive. Here we investigate high-precision state functions in the fluid phase of 14 representative polyhedra with different assembly behaviors. We discuss historic efforts in analytically approximating virial coefficients up to B₄ and numerically evaluating them up to B₈. Using virial coefficients as inputs, we show the convergence properties of four equations of state for hard convex bodies. In particular, the exponential approximant of Barlow et al. (J. Chem. Phys. 2012, 137, 204102) is found to be useful up to the first ordering transition for most polyhedra. The convergence behavior we explore can guide choices in expending additional resources for improved estimates. Fluids of arbitrary hard convex bodies are too complicated to be described in a general way at high densities, so the high-precision state data we provide can serve as a reference for future work in calculating state data or as a basis for thermodynamic integration.
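
    The analytic second virial coefficient referred to above is, to our understanding, the classical Isihara–Hadwiger result for a hard convex body, quoted here for reference:

        \[
        B_2 = V + \bar{R}\, S, \qquad
        \bar{R} = \frac{1}{4\pi} \oint_{\partial K} \frac{\kappa_1 + \kappa_2}{2}\, \mathrm{d}S ,
        \]

    where V is the volume, S the surface area, and R̄ the mean radius of curvature (for polyhedra the mean-curvature integral is interpreted through its edge contributions). For a sphere of radius R this reduces to B₂ = 4V, the familiar hard-sphere value.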

  10. Multi-Stage Convex Relaxation Methods for Machine Learning

    DTIC Science & Technology

    2013-03-01

    Many problems in machine learning can be naturally formulated as non-convex optimization problems. However, such direct nonconvex formulations have...original nonconvex formulation. We will develop theoretical properties of this method and algorithmic consequences. Related convex and nonconvex machine learning methods will also be investigated.

  11. Assessing the influence of lower facial profile convexity on perceived attractiveness in the orthognathic patient, clinician, and layperson.

    PubMed

    Naini, Farhad B; Donaldson, Ana Nora A; McDonald, Fraser; Cobourne, Martyn T

    2012-09-01

    The aim was a quantitative evaluation of how the severity of lower facial profile convexity influences perceived attractiveness. The lower facial profile of an idealized image was altered incrementally between 14° and −16°. Images were rated on a Likert scale by orthognathic patients, laypeople, and clinicians. Attractiveness ratings were greater for straight profiles than for convex or concave ones, with no significant difference between convex and concave profiles. Ratings decreased by 0.23 of a level for every degree increase in the convexity angle. Class II/III patients gave significantly lower ratings of attractiveness and had a greater desire for surgery than class I patients. A straight profile is perceived as most attractive, and greater degrees of convexity or concavity are deemed progressively less attractive, but a range of 10° to −12° may be deemed acceptable; beyond these values, surgical correction is desired. Patients are the most critical observers, and clinicians are more critical than laypeople. Copyright © 2012 Elsevier Inc. All rights reserved.

  12. The spectral positioning algorithm of new spectrum vehicle based on convex programming in wireless sensor network

    NASA Astrophysics Data System (ADS)

    Zhang, Yongjun; Lu, Zhixin

    2017-10-01

    Spectrum resources are very precious, so it is increasingly important to locate interference signals rapidly. Convex programming algorithms are often used as localization algorithms in wireless sensor networks. However, the traditional convex programming algorithm suffers from excessive overlap among wireless sensor nodes, which lowers positioning accuracy, so this paper proposes a new algorithm. Building on the traditional convex programming algorithm, the spectrum vehicle dispatches unmanned aerial vehicles (UAVs) that record data periodically along different trajectories. According to the probability density distribution, the positioning area is segmented to further reduce the localization region. Because the algorithm adds only the communication of power values between the unknown node and the sensor nodes, the advantages of the convex programming algorithm are largely preserved, retaining its simplicity and real-time performance. The experimental results show that the improved algorithm has better positioning accuracy than the original convex programming algorithm.

  13. Decompositions of Multiattribute Utility Functions Based on Convex Dependence.

    DTIC Science & Technology

    1982-03-01


  14. Robust, Adaptive Radar Detection and Estimation

    DTIC Science & Technology

    2015-07-21

    cost function is not a convex function in R, we apply a transformation of variables, i.e., let X = σ²R⁻¹ and S′ = (1/σ²)S. Then, the revised cost function in... ∑ v_i v_i^H. We apply this inverse covariance matrix in computing the SINR as well as the estimator variance. • Rank-Constrained Maximum Likelihood: Our... even as almost all available training samples are corrupted. Probability of Detection vs. SNR: We apply three test statistics, the normalized matched

  15. Marginal Cost Pricing in a World without Perfect Competition: Implications for Electricity Markets with High Shares of Low Marginal Cost Resources

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Frew, Bethany A.; Clark, Kara; Bloom, Aaron P.

    A common approach to regulating electricity is through auction-based competitive wholesale markets. The goal of this approach is to provide a reliable supply of power at the lowest reasonable cost to the consumer. This necessitates market structures and operating rules that ensure revenue sufficiency for all generators needed for resource adequacy purposes. Wholesale electricity markets employ marginal-cost pricing to provide cost-effective dispatch such that resources are compensated for their operational costs. However, marginal-cost pricing alone cannot guarantee cost recovery outside of perfect competition, and electricity markets have at least six attributes that preclude them from functioning as perfectly competitive markets. These attributes include market power, externalities, public good attributes, lack of storage, wholesale price caps, and ineffective demand curve. Until (and unless) these failures are ameliorated, some form of corrective action(s) will be necessary to improve market efficiency so that prices can correctly reflect the needed level of system reliability. Many of these options necessarily involve some form of administrative or out-of-market actions, such as scarcity pricing, capacity payments, bilateral or other out-of-market contracts, or some hybrid combination. A key focus with these options is to create a connection between the electricity market and long-term reliability/loss-of-load expectation targets, which are inherently disconnected in the native markets because of the aforementioned market failures. The addition of variable generation resources can exacerbate revenue sufficiency and resource adequacy concerns caused by these underlying market failures. Because variable generation resources have near-zero marginal costs, they effectively suppress energy prices and reduce the capacity factors of conventional generators through the merit-order effect in the simplest case of a convex market; non-convexities can also suppress prices.

  16. Robust H∞ cost guaranteed integral sliding mode control for the synchronization problem of nonlinear tele-operation system with variable time-delay.

    PubMed

    Al-Wais, Saba; Khoo, Suiyang; Lee, Tae Hee; Shanmugam, Lakshmanan; Nahavandi, Saeid

    2018-01-01

    This paper is devoted to the synchronization problem of tele-operation systems with time-varying delay, disturbances, and uncertainty. Delay-dependent sufficient conditions for the existence of integral sliding surfaces are given in the form of linear matrix inequalities (LMIs). This guarantees the global stability of the tele-operation system with known upper bounds on the time-varying delays. Unlike previous work, the controller gains are designed rather than chosen, which increases the degrees of freedom of the design. Moreover, the Wirtinger-based integral inequality and reciprocally convex combination techniques used in the constructed Lyapunov–Krasovskii functional (LKF) are deemed to give a less conservative stability condition for the system. Furthermore, to free the analysis from any assumptions regarding the dynamics of the environment and human operator forces, an H∞ design method is used to incorporate the dynamics of these forces and ensure the stability of the system against all admissible forces in the H∞ sense. This design scheme combines the strong robustness of sliding mode control with the H∞ design method for tele-operation systems that are coupled through state feedback controllers and inherit variable time-delays in their communication channels. Simulation examples are given to show the effectiveness of the proposed method. Copyright © 2017 ISA. All rights reserved.
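
    The Wirtinger-based integral inequality invoked above is commonly stated in its Seuret–Gouaisbaut form (quoted for reference, not reproduced from the paper itself): for R ≻ 0 and a differentiable trajectory x,

        \[
        \int_{a}^{b} \dot{x}^{\top}(s)\, R\, \dot{x}(s)\, \mathrm{d}s
        \;\ge\;
        \frac{1}{b-a}\, \omega_1^{\top} R\, \omega_1 + \frac{3}{b-a}\, \omega_2^{\top} R\, \omega_2,
        \]
        \[
        \omega_1 = x(b) - x(a), \qquad
        \omega_2 = x(b) + x(a) - \frac{2}{b-a} \int_{a}^{b} x(s)\, \mathrm{d}s ,
        \]

    which is tighter than Jensen's inequality (the ω₁ term alone) and therefore yields less conservative LMI stability conditions.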

  17. Probabilistic Guidance of Swarms using Sequential Convex Programming

    DTIC Science & Technology

    2014-01-01

    quadcopter fleet [24]. In this paper, sequential convex programming (SCP) [25] is implemented using model predictive control (MPC) to provide real-time...in order to make Problem 1 convex. The details for convexifying this problem can be found in [26]. The main steps are discretizing the problem using

  18. Rapid figure-ground responses to stereograms reveal an advantage for a convex foreground.

    PubMed

    Bertamini, Marco; Lawson, Rebecca

    2008-01-01

    Convexity has long been recognised as a factor that affects figure-ground segmentation, even when pitted against other factors such as symmetry [Kanizsa and Gerbino, 1976, Art and Artefacts, Ed. M Henle (New York: Springer) pp 25-32]. It is accepted in the literature that the difference between concave and convex contours is important for the visual system, and that there is a prior expectation favouring convexities as figure. We used bipartite stimuli and a simple task in which observers had to report whether the foreground was on the left or the right. We report objective evidence that supports the idea that convexity affects figure-ground assignment, even though our stimuli were not pictorial, in that depth order was specified unambiguously by binocular disparity.

  19. A Fast Algorithm of Convex Hull Vertices Selection for Online Classification.

    PubMed

    Ding, Shuguang; Nie, Xiangli; Qiao, Hong; Zhang, Bo

    2018-04-01

    Reducing samples through convex hull vertices selection (CHVS) within each class is an important and effective method for online classification problems, since the classifier can be trained rapidly with the selected samples. However, the process of CHVS is NP-hard. In this paper, we propose a fast algorithm to select the convex hull vertices, based on the convex hull decomposition and the property of projection. In the proposed algorithm, the quadratic minimization problem of computing the distance between a point and a convex hull is converted into a linear equation problem with low computational complexity. When the data dimension is high, an approximate, instead of exact, convex hull is allowed to be selected by setting an appropriate termination condition in order to delete more nonimportant samples. The impact of outliers is also considered, and the proposed algorithm is improved by deleting the outliers in the initial procedure. Furthermore, a dimension conversion technique via the kernel trick is used to deal with nonlinearly separable problems. An upper bound is theoretically proved for the difference between the support vector machines based on the approximate convex hull vertices selected and on all the training samples. Experimental results on both synthetic and real data sets show the effectiveness and validity of the proposed algorithm.
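
    The quadratic subproblem the abstract refers to is the usual point-to-hull distance program (generic notation assumed): for a sample matrix V = [v₁ … v_m] and query point x,

        \[
        \operatorname{dist}^2\!\bigl( x,\ \operatorname{conv}\{v_1,\dots,v_m\} \bigr)
        = \min_{\lambda \in \mathbb{R}^{m}} \; \| V\lambda - x \|_2^2
        \quad \text{s.t.} \quad \lambda \ge 0,\ \ \mathbf{1}^{\top}\lambda = 1 ,
        \]

    and the paper's contribution is to replace this quadratic program with a cheaper linear-equation computation by exploiting projection properties.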

  20. Curved VPH gratings for novel spectrographs

    NASA Astrophysics Data System (ADS)

    Clemens, J. Christopher; O'Donoghue, Darragh; Dunlap, Bart H.

    2014-07-01

    The introduction of volume phase holographic (VPH) gratings into astronomy over a decade ago opened new possibilities for instrument designers. In this paper we describe an extension of VPH grating technology that will have applications in astronomy and beyond: curved VPH gratings. These devices can disperse light while simultaneously correcting aberrations. We have designed and manufactured two different kinds of convex VPH grating prototypes for use in off-axis reflecting spectrographs. One type functions in transmission and the other in reflection, enabling Offner-style spectrographs with the high-efficiency and low-cost advantages of VPH gratings. We will discuss the design process and the tools required for modelling these gratings, along with the recording layout and process steps required to fabricate them. We will present performance data for the first convex VPH grating produced for an astronomical spectrograph.

  1. Control of propagation of spatially localized polariton wave packets in a Bragg mirror with embedded quantum wells

    NASA Astrophysics Data System (ADS)

    Sedova, I. E.; Chestnov, I. Yu.; Arakelian, S. M.; Kavokin, A. V.; Sedov, E. S.

    2018-01-01

    We considered the nonlinear dynamics of Bragg polaritons in a specially designed stratified semiconductor structure with embedded quantum wells, which possesses a convex dispersion. The model for the ensemble of single periodically arranged quantum wells coupled with the Bragg photon fields has been developed. In particular, the generalized Gross-Pitaevskii equation with the non-parabolic dispersion has been obtained for the Bragg polariton wave function. We revealed a number of dynamical regimes for polariton wave packets resulting from competition of the convex dispersion and the repulsive nonlinearity effects. Among the regimes are spreading, breathing and soliton propagation. When the control parameters including the exciton-photon detuning, the matter-field coupling and the nonlinearity are manipulated, the dynamical regimes switch between themselves.
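
    The generalized Gross-Pitaevskii equation mentioned above has the generic form sketched below (schematic only; the paper's specific dispersion and coefficients are not reproduced in this record):

        \[
        i\hbar\, \frac{\partial \psi}{\partial t}
        = E\!\left(-i\,\partial_x\right)\psi + g\,|\psi|^2\,\psi ,
        \]

    where E(k) is the non-parabolic (here convex) Bragg-polariton dispersion acting as a pseudo-differential operator and g > 0 is the repulsive nonlinearity; the competition between the curvature of E and the nonlinear term is what selects among the spreading, breathing and soliton regimes.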

  2. Compressible Navier-Stokes Equations in a Polyhedral Cylinder with Inflow Boundary Condition

    NASA Astrophysics Data System (ADS)

    Kwon, Ohsung; Kweon, Jae Ryong

    2018-06-01

    In this paper our concern is with the singularity and regularity of compressible flows through a non-convex edge in R^3. The flows are governed by the compressible Navier-Stokes equations on an infinite cylinder that has the non-convex edge on its inflow boundary. We split off the edge singularity from the velocity vector by means of a Poisson problem and show that the remainder is twice differentiable, while the edge singularity is observed to be propagated into the interior of the cylinder by the transport character of the continuity equation. An interior surface layer starting at the edge is generated and is not Lipschitz continuous due to the singularity. The density function shows a very steep change near the interface and its normal derivative has a jump discontinuity across it.

  3. A water management decision support system contributing to sustainability

    NASA Astrophysics Data System (ADS)

    Horváth, Klaudia; van Esch, Bart; Baayen, Jorn; Pothof, Ivo; Talsma, Jan; van Heeringen, Klaas-Jan

    2017-04-01

    Deltares and Eindhoven University of Technology are developing a new decision support system (DSS) for regional water authorities. In order to maintain water levels in the Dutch polder system, water must be drained and pumped out from the polders to the sea. The time and amount of pumping depend on the current sea level, the water level in the polder, the weather forecast, the electricity price forecast and possibly local renewable power production. This is a multivariable optimisation problem, where the goal is to keep the water level in the polder within certain bounds. By optimizing the operation of the pumps, energy usage and costs can be reduced, hence the operation of the regional water authorities can become more sustainable, while also anticipating an increasing share of renewables in the energy mix in a cost-effective way. The decision support system, based on Delft-FEWS as the operational data-integration platform, runs an optimization model built in RTC-Tools 2, which performs real-time optimization to compute the pumping strategy, taking into account present and future circumstances. As the core of the real-time decision support system, RTC-Tools 2 fulfils the key requirements of a DSS: it is fast, robust and always finds the optimal solution. These properties are associated with convex optimization, in which the global optimum can always be found. The challenge in the development is to maintain a convex formulation of all the non-linear components in the system, i.e. open channels, hydraulic structures, and pumps. The system is being introduced through 4 pilot projects, one of which is a pilot of the Dutch Water Authority Rivierenland. This is a typical Dutch polder system: several polders are drained to the main water system, the Linge. The water from the Linge can be released to the main rivers, which are subject to tidal fluctuations. At low tide, water can be released via the gates; at high tide, water must be pumped. The goal of the pilot is to make the operation of the regional water authority more sustainable and cost-efficient. Sustainability can be achieved by minimizing CO2 production through minimizing the energy used for pumping. This work shows the functionalities of the new decision support system, using RTC-Tools 2, through the example of a pilot project.

  4. The Knaster-Kuratowski-Mazurkiewicz theorem and abstract convexities

    NASA Astrophysics Data System (ADS)

    Cain, George L., Jr.; González, Luis

    2008-02-01

    The Knaster-Kuratowski-Mazurkiewicz covering theorem (KKM) is the basic ingredient in the proofs of many so-called "intersection" theorems and related fixed point theorems (including the famous Brouwer fixed point theorem). The KKM theorem was extended from Rn to Hausdorff linear spaces by Ky Fan. There has subsequently been a plethora of attempts at extending KKM-type results to arbitrary topological spaces. Virtually all of these involve the introduction of some sort of abstract convexity structure for a topological space; among others we could mention H-spaces and G-spaces. We have introduced a new abstract convexity structure that generalizes the concept of a metric space with a convex structure, introduced by E. Michael in [E. Michael, Convex structures and continuous selections, Canad. J. Math. 11 (1959) 556-575], and called a topological space endowed with this structure an M-space. In an article by Sehie Park and Hoonjoo Kim [S. Park, H. Kim, Coincidence theorems for admissible multifunctions on generalized convex spaces, J. Math. Anal. Appl. 197 (1996) 173-187], the concepts of G-spaces and metric spaces with Michael's convex structure were mentioned together, but no relationship between them was shown. In this article, we prove that G-spaces and M-spaces are closely related. We also introduce the concept of an L-space, which is inspired by the MC-spaces of J.V. Llinares [J.V. Llinares, Unified treatment of the problem of existence of maximal elements in binary relations: A characterization, J. Math. Econom. 29 (1998) 285-302], and establish relationships between the convexities of these spaces and those of the spaces previously mentioned.

  5. On Monotone Embedding in Information Geometry (Open Access)

    DTIC Science & Technology

    2015-06-25

    the non-parametric (infinite-dimensional) setting, as well [4,6], with the α-connection structure cast in a more general way. Theorem 1 of [4] gives... the weighting function for taking the expectation of random variables in calculating the Riemannian metric (G = 1 reduces to F-geometry, with the ...is a trivial rewriting of the convex function f used by [2]. This paper will start in Section 1

  6. Probing the interactions of phenol with oxygenated functional groups on curved fullerene-like sheets in activated carbon.

    PubMed

    Yin, Chun-Yang; Ng, Man-Fai; Goh, Bee-Min; Saunders, Martin; Hill, Nick; Jiang, Zhong-Tao; Balach, Juan; El-Harbawi, Mohanad

    2016-02-07

    The mechanism(s) of interaction of phenol with oxygenated functional groups (OH, COO and COOH) in nanopores of activated carbon (AC) is a contentious issue among researchers. This mechanism is of particular interest because a better understanding of the role of such groups in nanopores would essentially translate to advances in AC production and use, especially in regard to the treatment of organic-based wastewaters. We therefore attempt to shed more light on the subject by employing density functional theory (DFT) calculations in which fullerene-like models integrating convex or concave structures, which simulate the eclectic porous structures on the AC surface, are adopted. TEM analysis, EDS mapping and Boehm titration are also conducted on actual phenol-adsorbed AC. Our results suggest that the widely reported phenomenon of decreased phenol uptake on AC with increased concentration of oxygenated functional groups is possibly attributable to the increased presence of the latter on the convex side of the curved carbon sheets. Such a system effectively inhibits phenol from making direct contact with the carbon sheet, thus constraining any available π-π interaction, while groups acting on the concave part of the curved sheet do not impart the same detriment.

  7. On Using Homogeneous Polynomials To Design Anisotropic Yield Functions With Tension/Compression Symmetry/Asymmetry

    NASA Astrophysics Data System (ADS)

    Soare, S.; Yoon, J. W.; Cazacu, O.

    2007-05-01

    With few exceptions, non-quadratic homogeneous polynomials have received little attention as possible candidates for yield functions. One reason might be that not every such polynomial is a convex function. In this paper we show that homogeneous polynomials can be used to develop powerful anisotropic yield criteria, and that imposing simple constraints on the identification process leads, a posteriori, to the desired convexity property. It is shown that combinations of such polynomials allow for modeling the yielding properties of metallic materials with any crystal structure, i.e., both cubic and hexagonal, including those which display strength-differential effects. Extensions of the proposed criteria to 3D stress states are also presented. We apply these criteria to the description of the aluminum alloy AA2090T3. We prove that a sixth-order orthotropic homogeneous polynomial is capable of a satisfactory description of this alloy. Next, applications to the deep drawing of a cylindrical cup are presented. The newly proposed criteria were implemented as UMAT subroutines into the commercial FE code ABAQUS. We were able to predict six ears on the AA2090T3 cup's profile. Finally, we show that a tension/compression asymmetry in yielding can have an important effect on the earing profile.

  8. Medial-lateral organization of the orbitofrontal cortex.

    PubMed

    Rich, Erin L; Wallis, Jonathan D

    2014-07-01

    Emerging evidence suggests that specific cognitive functions localize to different subregions of OFC, but the nature of these functional distinctions remains unclear. One prominent theory, derived from human neuroimaging, proposes that different stimulus valences are processed in separate orbital regions, with medial and lateral OFC processing positive and negative stimuli, respectively. Thus far, neurophysiology data have not supported this theory. We attempted to reconcile these accounts by recording neural activity from the full medial-lateral extent of the orbital surface in monkeys receiving rewards and punishments via gain or loss of secondary reinforcement. We found no convincing evidence for valence selectivity in any orbital region. Instead, we report differences between neurons in central OFC and those on the inferior-lateral orbital convexity, in that they encoded different sources of value information provided by the behavioral task. Neurons in inferior convexity encoded the value of external stimuli, whereas those in OFC encoded value information derived from the structure of the behavioral task. We interpret these results in light of recent theories of OFC function and propose that these distinctions, not valence selectivity, may shed light on a fundamental organizing principle for value processing in orbital cortex.

  9. An optimal algorithm for reconstructing images from binary measurements

    NASA Astrophysics Data System (ADS)

    Yang, Feng; Lu, Yue M.; Sbaiz, Luciano; Vetterli, Martin

    2010-01-01

    We have studied a camera with a very large number of binary pixels, referred to as the gigavision camera [1] or the gigapixel digital film camera [2, 3]. Potential advantages of this new camera design include improved dynamic range, thanks to its logarithmic sensor response curve, and reduced exposure time in low-light conditions, due to its highly sensitive photon detection mechanism. We use a maximum likelihood estimator (MLE) to reconstruct a high-quality conventional image from the binary sensor measurements of the gigavision camera. We prove that when the threshold T is "1", the negative log-likelihood function is convex. Therefore, the optimal solution can be obtained using convex optimization. Based on filter-bank techniques, fast algorithms are given for computing the gradient of the negative log-likelihood function and its products with the Hessian matrix. We show that, with a minor change, our algorithm also works for estimating conventional images from multiple binary images. Numerical experiments with synthetic 1-D signals and images verify the effectiveness and quality of the proposed algorithm. Experimental results also show that estimation performance can be improved by increasing the oversampling factor or the number of binary images.
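
    A toy 1-D illustration of the convexity claim for threshold T = 1 (a sketch under assumed Poisson statistics; the paper's fast filter-bank gradient and Hessian computations are not reproduced here):

        # Toy threshold-1 gigavision model: binary pixels b_i = 1{Poisson(s) >= 1}.
        # The negative log-likelihood of the intensity s is convex, so gradient
        # descent finds the global MLE. (Here the MLE even has a closed form,
        # s* = -log(1 - mean(b)); the iteration stands in for the 2-D case.)
        import numpy as np

        def nll(s, n0, n1):                   # n0 zeros, n1 ones observed
            return n0 * s - n1 * np.log1p(-np.exp(-s))

        def nll_grad(s, n0, n1):
            return n0 - n1 / np.expm1(s)      # derivative of nll in s

        rng = np.random.default_rng(0)
        b = rng.poisson(0.7, 10_000) >= 1     # simulate at true s = 0.7
        n1 = int(b.sum()); n0 = b.size - n1
        s = 1.0
        for _ in range(200):
            s -= 1e-4 * nll_grad(s, n0, n1)   # plain gradient descent
        print(f"MLE: {s:.3f}  closed form: {-np.log(1 - b.mean()):.3f}")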

  10. The Band around a Convex Body

    ERIC Educational Resources Information Center

    Swanson, David

    2011-01-01

    We give elementary proofs of formulas for the area and perimeter of a planar convex body surrounded by a band of uniform thickness. The primary tool is an integral formula for the perimeter of a convex body which describes the perimeter in terms of the projections of the body onto lines in the plane.
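
    For a planar convex body K with area A and perimeter L, the formulas in question are, in standard notation, instances of the Steiner and Cauchy formulas (stated here as a sketch for orientation; the article's contribution is the elementary proofs):

        A(K_w) = A + Lw + \pi w^2, \qquad L(K_w) = L + 2\pi w,

    where K_w denotes the body together with its band of thickness w, so the band alone has area Lw + \pi w^2. The integral formula alluded to is Cauchy's formula L = \int_0^{\pi} |P_\theta(K)| \, d\theta, where |P_\theta(K)| is the length of the projection of K onto a line with direction \theta.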

  11. A path following algorithm for the graph matching problem.

    PubMed

    Zaslavskiy, Mikhail; Bach, Francis; Vert, Jean-Philippe

    2009-12-01

    We propose a convex-concave programming approach for the labeled weighted graph matching problem. The convex-concave programming formulation is obtained by rewriting the weighted graph matching problem as a least-squares problem on the set of permutation matrices and relaxing it to two different optimization problems: a quadratic convex and a quadratic concave optimization problem on the set of doubly stochastic matrices. The concave relaxation has the same global minimum as the initial graph matching problem, but the search for its global minimum is also a hard combinatorial problem. We therefore construct an approximation of the concave problem solution by following a solution path of a convex-concave problem obtained by linear interpolation of the convex and concave formulations, starting from the convex relaxation. This method makes it easy to integrate information on graph label similarities into the optimization problem, and therefore to perform labeled weighted graph matching. The algorithm is compared with some of the best performing graph matching methods on four data sets: simulated graphs, QAPLib, retina vessel images, and handwritten Chinese characters. In all cases, the results are competitive with the state of the art.

  12. Attoliter Control of Microliquid

    NASA Astrophysics Data System (ADS)

    Imura, Fumito; Kuroiwa, Hiroyuki; Nakada, Akira; Kosaka, Kouji; Kubota, Hiroshi

    2007-11-01

    The technology of sub-femtoliter volume control of liquids in nanometer-range pipettes (nanopipettes) has been developed for carrying out surgical operations on living cells. We focus attention on an interface forming between oil and water in a nanopipette. The interface position can be moved by increasing or decreasing the input pressure. If the volume of liquid in the nanopipette can be controlled by moving the position of the interface, cell organelles can be discharged or suctioned and a drug solution can be injected into the cell. Volume control in the pico- to attoliter range using a tapered nanopipette is achieved by controlling the condition of an interface with a convex shape toward the tip of the nanopipette. The volume can be controlled by the input pressure corresponding to the interfacial radius, without the use of a microscope, by preliminarily characterizing the pipette shape and the interface radius as functions of the input pressure.

  13. Generalized Steering Robustness of Bipartite Quantum States

    NASA Astrophysics Data System (ADS)

    Zheng, Chunming; Guo, Zhihua; Cao, Huaixin

    2018-06-01

    EPR steering is a kind of quantum correlation intermediate between entanglement and Bell nonlocality. In this paper, by recalling the definitions of unsteerability and steerability, some of their properties are given; e.g., it is proved that a local quantum channel transforms every unsteerable state into an unsteerable state. Second, a way of quantifying quantum steering, which we call the generalized steering robustness (GSR), is introduced and some interesting properties are established, including: (1) the GSR of a state vanishes if and only if the state is unsteerable; (2) a local quantum channel does not increase the GSR of any state; (3) the GSR is invariant under every local unitary operation; (4) as a function on the state space, the GSR is convex and lower semicontinuous. Lastly, by using the majorization between the reduced states of two pure states, the GSRs of the two pure states are compared, and it is proved that every maximally entangled state has the maximal GSR.

  14. Preconditioned alternating direction method of multipliers for inverse problems with constraints

    NASA Astrophysics Data System (ADS)

    Jiao, Yuling; Jin, Qinian; Lu, Xiliang; Wang, Weijie

    2017-02-01

    We propose a preconditioned alternating direction method of multipliers (ADMM) for solving linear inverse problems in Hilbert spaces with constraints, where the feature of the sought solution under a linear transformation is captured by a possibly non-smooth convex function. During each iteration step, our method avoids solving large linear systems by choosing a suitable preconditioning operator. When the data is given exactly, we prove the convergence of our preconditioned ADMM without assuming the existence of a Lagrange multiplier. When the data is corrupted by noise, we propose a stopping rule using information on the noise level and show that our preconditioned ADMM is a regularization method; we also propose a heuristic rule for the case where information on the noise level is unavailable or unreliable, and give its detailed analysis. Numerical examples are presented to test the performance of the proposed method.
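
    For orientation, a finite-dimensional toy version of the splitting (plain ADMM on an l1-regularized least-squares problem with invented data; the paper's preconditioned Hilbert-space variant replaces the exact x-update below with a cheaper preconditioned step):

        # Plain ADMM for min_x 0.5*||Kx - y||^2 + mu*||x||_1 with split z = x.
        # Finite-dimensional toy; a preconditioned variant would replace the
        # exact linear solve in the x-update with a cheaper approximate step.
        import numpy as np

        rng = np.random.default_rng(1)
        K = rng.standard_normal((40, 100))
        x_true = np.zeros(100); x_true[[5, 37, 80]] = [1.0, -2.0, 1.5]
        y = K @ x_true + 0.01 * rng.standard_normal(40)

        mu, rho = 0.1, 1.0
        x = np.zeros(100); z = np.zeros(100); u = np.zeros(100)
        M = K.T @ K + rho * np.eye(100)           # x-update system matrix
        for _ in range(300):
            x = np.linalg.solve(M, K.T @ y + rho * (z - u))        # x-update
            w = x + u
            z = np.sign(w) * np.maximum(np.abs(w) - mu / rho, 0.0) # prox of l1
            u += x - z                                             # dual update
        print(np.flatnonzero(np.abs(z) > 0.1))    # recovers support {5, 37, 80}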

  15. An exact general remeshing scheme applied to physically conservative voxelization

    DOE PAGES

    Powell, Devon; Abel, Tom

    2015-05-21

    We present an exact general remeshing scheme to compute analytic integrals of polynomial functions over the intersections between convex polyhedral cells of old and new meshes. In physics applications this allows one to ensure global mass, momentum, and energy conservation while applying higher-order polynomial interpolation. We elaborate on applications of our algorithm arising in the analysis of cosmological N-body data, computer graphics, and continuum mechanics problems. We focus on the particular case of remeshing tetrahedral cells onto a Cartesian grid such that the volume integral of the polynomial density function given on the input mesh is guaranteed to equal the corresponding integral over the output mesh. We refer to this as “physically conservative voxelization.” At the core of our method is an algorithm for intersecting two convex polyhedra by successively clipping one against the faces of the other. This algorithm is an implementation of the ideas presented abstractly by Sugihara [48], who suggests using the planar graph representations of convex polyhedra to ensure topological consistency of the output. This makes our implementation robust to geometric degeneracy in the input. We employ a simplicial decomposition to calculate moment integrals up to quadratic order over the resulting intersection domain. We also address practical issues arising in a software implementation, including numerical stability in geometric calculations, management of cancellation errors, and extension to two dimensions. In a comparison to recent work, we show substantial performance gains. We provide a C implementation intended to be a fast, accurate, and robust tool for geometric calculations on polyhedral mesh elements.
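
    The 2-D analogue of the successive-clipping core is easy to sketch (Sutherland-Hodgman-style clipping against each edge half-plane; unlike the paper's planar-graph implementation, this sketch does not handle geometric degeneracy):

        # Sutherland-Hodgman clipping of convex polygon 'subject' against the
        # edge half-planes of convex polygon 'clipper' (both CCW vertex lists).
        # No degeneracy handling, unlike the paper's planar-graph approach.
        def clip_convex(subject, clipper):
            def inside(p, a, b):                  # p left of directed edge a->b
                return (b[0]-a[0])*(p[1]-a[1]) - (b[1]-a[1])*(p[0]-a[0]) >= 0
            def intersect(p, q, a, b):            # segment p-q meets line a-b
                dx1, dy1 = q[0]-p[0], q[1]-p[1]
                dx2, dy2 = b[0]-a[0], b[1]-a[1]
                t = ((a[0]-p[0])*dy2 - (a[1]-p[1])*dx2) / (dx1*dy2 - dy1*dx2)
                return (p[0] + t*dx1, p[1] + t*dy1)
            out = list(subject)
            for i in range(len(clipper)):
                a, b = clipper[i], clipper[(i+1) % len(clipper)]
                inp, out = out, []
                for j in range(len(inp)):
                    p, q = inp[j], inp[(j+1) % len(inp)]
                    if inside(q, a, b):
                        if not inside(p, a, b):
                            out.append(intersect(p, q, a, b))
                        out.append(q)
                    elif inside(p, a, b):
                        out.append(intersect(p, q, a, b))
                if not out:
                    break
            return out

        print(clip_convex([(0,0),(2,0),(2,2),(0,2)], [(1,1),(3,1),(3,3),(1,3)]))
        # -> the unit square (1,1)-(2,2), up to vertex order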

  16. Convex set and linear mixing model

    NASA Technical Reports Server (NTRS)

    Xu, P.; Greeley, R.

    1993-01-01

    A major goal of optical remote sensing is to determine surface compositions of the earth and other planetary objects. For assessment of composition, single pixels in multi-spectral images usually record a mixture of the signals from various materials within the corresponding surface area. In this report, we introduce a closed and bounded convex set as a mathematical model for linear mixing. This model has a clear geometric implication because the closed and bounded convex set is a natural generalization of a triangle in n-space. The endmembers are extreme points of the convex set. Every point in the convex closure of the endmembers is a linear mixture of those endmembers, which is exactly how linear mixing is defined. With this model, some general criteria for selecting endmembers could be described. This model can lead to a better understanding of linear mixing models.
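
    A minimal sketch of the model in use (illustrative endmember spectra, not real data): a mixed pixel is recovered as a convex combination of the endmembers, i.e., as a point of the convex set:

        # Convex-combination unmixing (illustrative spectra): recover abundances
        # a >= 0, sum(a) = 1 such that the pixel p = E @ a.
        import cvxpy as cp
        import numpy as np

        E = np.array([[0.9, 0.1, 0.3],            # columns: endmember spectra
                      [0.2, 0.8, 0.3],            # rows: spectral bands
                      [0.1, 0.1, 0.4],
                      [0.4, 0.5, 0.6]])
        p = E @ np.array([0.5, 0.3, 0.2])         # a perfectly mixed pixel

        a = cp.Variable(3, nonneg=True)
        cp.Problem(cp.Minimize(cp.sum_squares(E @ a - p)),
                   [cp.sum(a) == 1]).solve()
        print(np.round(a.value, 3))               # -> [0.5, 0.3, 0.2]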

  17. Jensen-Bregman LogDet Divergence for Efficient Similarity Computations on Positive Definite Tensors

    DTIC Science & Technology

    2012-05-02

    function of Legendre-type on int(dom S) [29]. From (7) the following properties of dφ(x, y) are apparent: strict convexity in x; asymmetry; non ...tensor imaging. An important task in all of these applications is to compute the distance between covariance matrices using a (dis)similarity function, for which the natural

  18. Convex reformulation of biologically-based multi-criteria intensity-modulated radiation therapy optimization including fractionation effects

    NASA Astrophysics Data System (ADS)

    Hoffmann, Aswin L.; den Hertog, Dick; Siem, Alex Y. D.; Kaanders, Johannes H. A. M.; Huizenga, Henk

    2008-11-01

    Finding fluence maps for intensity-modulated radiation therapy (IMRT) can be formulated as a multi-criteria optimization problem for which Pareto optimal treatment plans exist. To account for the dose-per-fraction effect of fractionated IMRT, it is desirable to exploit radiobiological treatment plan evaluation criteria based on the linear-quadratic (LQ) cell survival model as a means to balance the radiation benefits and risks in terms of biologic response. Unfortunately, the LQ-model-based radiobiological criteria are nonconvex functions, which make the optimization problem hard to solve. We apply the framework proposed by Romeijn et al (2004 Phys. Med. Biol. 49 1991-2013) to find transformations of LQ-model-based radiobiological functions and establish conditions under which transformed functions result in equivalent convex criteria that do not change the set of Pareto optimal treatment plans. The functions analysed are: the LQ-Poisson-based model for tumour control probability (TCP) with and without inter-patient heterogeneity in radiation sensitivity, the LQ-Poisson-based relative seriality s-model for normal tissue complication probability (NTCP), the equivalent uniform dose (EUD) under the LQ-Poisson model and the fractionation-corrected Probit-based model for NTCP according to Lyman, Kutcher and Burman. These functions differ from those analysed before in that they cannot be decomposed into elementary EUD or generalized-EUD functions. In addition, we show that applying increasing and concave transformations to the convexified functions is beneficial for the piecewise approximation of the Pareto efficient frontier.

  19. A Three-Phase Microgrid Restoration Model Considering Unbalanced Operation of Distributed Generation

    DOE PAGES

    Wang, Zeyu; Wang, Jianhui; Chen, Chen

    2016-12-07

    Recent severe outages highlight the urgency of improving grid resiliency in the U.S. Microgrid formation schemes are proposed to restore critical loads after outages occur. Most distribution networks have unbalanced configurations that are not represented in sufficient detail by single-phase models. This study provides a microgrid formation plan that adopts a three-phase network model to represent unbalanced distribution networks. The problem formulation has a quadratic objective function with mixed-integer linear constraints. The three-phase network model enables us to examine the three-phase power outputs of distributed generators (DGs), preventing unbalanced operation that might trip DGs. Because the DG unbalanced operation constraint is non-convex, an iterative process is presented that checks whether the unbalanced operation limits for DGs are satisfied after each iteration of optimization. We also develop a relatively conservative linear approximation of the unbalanced operation constraint to handle larger networks. Compared with the iterative solution process, the conservative linear approximation accelerates the solution process at the cost of sacrificing optimality to a limited extent. Simulations on the IEEE 34-node and IEEE 123-node test feeders indicate that the proposed method yields more practical microgrid formation results. In addition, this paper explores the coordinated operation of DGs and energy storage (ES) installations: the unbalanced three-phase outputs of ESs combined with the relatively balanced outputs of DGs can supply unbalanced loads. The case study also validates the DG-ES coordination.

  20. A Three-Phase Microgrid Restoration Model Considering Unbalanced Operation of Distributed Generation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Zeyu; Wang, Jianhui; Chen, Chen

    Recent severe outages highlight the urgency of improving grid resiliency in the U.S. Microgrid formation schemes are proposed to restore critical loads after outages occur. Most distribution networks have unbalanced configurations that are not represented in sufficient detail by single-phase models. This study provides a microgrid formation plan that adopts a three-phase network model to represent unbalanced distribution networks. The problem formulation has a quadratic objective function with mixed-integer linear constraints. The three-phase network model enables us to examine the three-phase power outputs of distributed generators (DGs), preventing unbalanced operation that might trip DGs. Because the DG unbalanced operation constraint is non-convex, an iterative process is presented that checks whether the unbalanced operation limits for DGs are satisfied after each iteration of optimization. We also develop a relatively conservative linear approximation of the unbalanced operation constraint to handle larger networks. Compared with the iterative solution process, the conservative linear approximation accelerates the solution process at the cost of sacrificing optimality to a limited extent. Simulations on the IEEE 34-node and IEEE 123-node test feeders indicate that the proposed method yields more practical microgrid formation results. In addition, this paper explores the coordinated operation of DGs and energy storage (ES) installations: the unbalanced three-phase outputs of ESs combined with the relatively balanced outputs of DGs can supply unbalanced loads. The case study also validates the DG-ES coordination.

  1. Anatomical study of the pelvis in patients with adolescent idiopathic scoliosis

    PubMed Central

    Qiu, Xu-Sheng; Zhang, Jun-Jie; Yang, Shang-Wen; Lv, Feng; Wang, Zhi-Wei; Chiew, Jonathan; Ma, Wei-Wei; Qiu, Yong

    2012-01-01

    Standing posterior–anterior (PA) radiographs from our clinical practice show that the concave and convex ilia are not always symmetrical in patients with adolescent idiopathic scoliosis (AIS). Transverse pelvic rotation may explain this observation, or pelvic asymmetry may be responsible. The present study investigated pelvic symmetry by examining the volume and linear measurements of the two hipbones in patients with AIS. Forty-two female patients with AIS were recruited for the study. Standing PA radiographs (covering the thoracic and lumbar spinal regions and the entire pelvis), CT scans and 3D reconstructions of the pelvis were obtained for all subjects. The concave/convex ratio of the inferior ilium at the sacroiliac joint medially (SI) and the anterior superior iliac spine laterally (ASIS) were measured on PA radiographs. Hipbone volumes and several distortion and abduction parameters were measured by post-processing software. The concave/convex ratio of SI–ASIS on PA radiographs was 0.97, which was significantly < 1 (P < 0.001). The concave and convex hipbone volumes were comparable in patients with AIS. The hipbone volumes were 257.3 ± 43.5 cm3 and 256.9 ± 42.6 cm3 at the concave and convex sides, respectively (P > 0.05). Furthermore, all distortion and abduction parameters were comparable between the convex and concave sides. Therefore, the present study showed that there was no pelvic asymmetry in patients with AIS, although the concave/convex ratio of SI–ASIS on PA radiographs was significantly < 1. The clinical phenomenon of asymmetrical concave and convex ilia in patients with AIS in preoperative standing PA radiographs may be caused by transverse pelvic rotation, but it is not due to developmental asymmetry or distortion of the pelvis. PMID:22133294

  2. Anatomical study of the pelvis in patients with adolescent idiopathic scoliosis.

    PubMed

    Qiu, Xu-Sheng; Zhang, Jun-Jie; Yang, Shang-Wen; Lv, Feng; Wang, Zhi-Wei; Chiew, Jonathan; Ma, Wei-Wei; Qiu, Yong

    2012-02-01

    Standing posterior-anterior (PA) radiographs from our clinical practice show that the concave and convex ilia are not always symmetrical in patients with adolescent idiopathic scoliosis (AIS). Transverse pelvic rotation may explain this observation, or pelvic asymmetry may be responsible. The present study investigated pelvic symmetry by examining the volume and linear measurements of the two hipbones in patients with AIS. Forty-two female patients with AIS were recruited for the study. Standing PA radiographs (covering the thoracic and lumbar spinal regions and the entire pelvis), CT scans and 3D reconstructions of the pelvis were obtained for all subjects. The concave/convex ratio of the inferior ilium at the sacroiliac joint medially (SI) and the anterior superior iliac spine laterally (ASIS) were measured on PA radiographs. Hipbone volumes and several distortion and abduction parameters were measured by post-processing software. The concave/convex ratio of SI-ASIS on PA radiographs was 0.97, which was significantly < 1 (P < 0.001). The concave and convex hipbone volumes were comparable in patients with AIS. The hipbone volumes were 257.3 ± 43.5 cm(3) and 256.9 ± 42.6 cm(3) at the concave and convex sides, respectively (P > 0.05). Furthermore, all distortion and abduction parameters were comparable between the convex and concave sides. Therefore, the present study showed that there was no pelvic asymmetry in patients with AIS, although the concave/convex ratio of SI-ASIS on PA radiographs was significantly < 1. The clinical phenomenon of asymmetrical concave and convex ilia in patients with AIS in preoperative standing PA radiographs may be caused by transverse pelvic rotation, but it is not due to developmental asymmetry or distortion of the pelvis. © 2011 The Authors. Journal of Anatomy © 2011 Anatomical Society.

  3. Investigations into the shape-preserving interpolants using symbolic computation

    NASA Technical Reports Server (NTRS)

    Lam, Maria

    1988-01-01

    Shape representation is a central issue in computer graphics and computer-aided geometric design. Many physical phenomena involve curves and surfaces that are monotone (in some directions) or convex. The corresponding representation problem is: given monotone or convex data, find a monotone or convex interpolant. Standard interpolants need not be monotone or convex even though they match monotone or convex data. Most methods of investigating this problem involve quadratic splines or Hermite polynomials, and a similar approach is adopted in this investigation. These methods require derivative information at the given data points, and the key to the problem is the selection of the derivative values to be assigned to those points. Schemes for choosing derivatives were examined. Along the way, fitting the given data points by a conic section was also investigated as part of the effort to study shape-preserving quadratic splines.

  4. Congruency effects in dot comparison tasks: convex hull is more important than dot area.

    PubMed

    Gilmore, Camilla; Cragg, Lucy; Hogan, Grace; Inglis, Matthew

    2016-11-16

    The dot comparison task, in which participants select the more numerous of two dot arrays, has become the predominant method of assessing Approximate Number System (ANS) acuity. Creation of the dot arrays requires the manipulation of visual characteristics, such as dot size and convex hull. For the task to provide a valid measure of ANS acuity, participants must ignore these characteristics and respond on the basis of number. Here, we report two experiments that explore the influence of dot area and convex hull on participants' accuracy on dot comparison tasks. We found that individuals' ability to ignore dot area information increases with age and display time. However, the influence of convex hull information remains stable across development and with additional time. This suggests that convex hull information is more difficult to inhibit when making judgements about numerosity and therefore it is crucial to control this when creating dot comparison tasks.
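
    A sketch of the stimulus-control step this implies (illustrative dot positions and sizes, assuming scipy is available): measure the convex hull area and total dot area of a generated array so both cues can be controlled across numerosities:

        # Measure the convex hull area and total dot area of a generated dot
        # array (illustrative numbers), as one would when matching or
        # manipulating these cues across numerosities.
        import numpy as np
        from scipy.spatial import ConvexHull

        rng = np.random.default_rng(7)
        dots = rng.uniform(0, 100, size=(20, 2))  # 20 dot centres, 100x100 field
        radius = 2.0                              # common dot radius

        hull_area = ConvexHull(dots).volume       # in 2-D, .volume is the area
        dot_area = len(dots) * np.pi * radius**2
        print(f"hull area: {hull_area:.0f}, dot area: {dot_area:.0f}")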

  5. Trajectory Design Employing Convex Optimization for Landing on Irregularly Shaped Asteroids

    NASA Technical Reports Server (NTRS)

    Pinson, Robin M.; Lu, Ping

    2016-01-01

    Mission proposals that land on asteroids are becoming popular. However, for a successful mission the spacecraft must reliably and softly land at the intended landing site. The problem under investigation is how to design a fuel-optimal powered descent trajectory that can be quickly computed on-board the spacecraft, without interaction from ground control. An optimal trajectory designed immediately prior to the descent burn has many advantages, including the ability to use the actual vehicle starting state as the initial condition in the trajectory design and the ease of updating the landing target site if the original landing site is no longer viable. For long trajectories, the trajectory can be updated periodically by redesigning the optimal trajectory based on current vehicle conditions to improve guidance performance. One of the key drivers for being completely autonomous is the infrequent and delayed communication between ground control and the vehicle. Challenges in designing an asteroid powered descent trajectory include complicated nonlinear gravity fields, small rotating bodies and low-thrust vehicles. Two previous lines of study form the background to the current investigation. The first looked in depth at applying convex optimization to a powered descent trajectory on Mars, with promising results [1, 2]. It showed that the powered descent equations of motion can be relaxed and formed into a convex optimization problem, and that the optimal solution of the relaxed problem is indeed a feasible solution to the original problem; this analysis used a constant gravity field. The second applied a successive solution process to formulate a second-order cone program that designs rendezvous and proximity operations trajectories [3, 4]; these trajectories included a Newtonian gravity model, and the equivalence of the solutions of the relaxed and original problems is theoretically established. The proposed approach for designing the asteroid powered descent trajectory is to use convex optimization, a gravity model with higher fidelity than Newtonian, and an iterative solution process to design the fuel-optimal trajectory. The solution to the convex optimization problem is the thrust profile, magnitude and direction, that yields the minimum-fuel trajectory for a soft landing at the target site, subject to various mission and operational constraints. The equations of motion are formulated in a rotating coordinate system and include a high-fidelity gravity model. The vehicle's thrust magnitude can vary between maximum and minimum bounds during the burn. Constraints are also included to ensure that the vehicle does not run out of propellant or go below the asteroid's surface, and to enforce any vehicle pointing requirements. The equations of motion are discretized and propagated with the trapezoidal rule in order to produce equality constraints for the optimization problem; these equality constraints allow the optimization algorithm to solve the entire problem without including a propagator inside the optimization algorithm.
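
    A heavily simplified sketch of such a convex subproblem (constant gravity and invented numbers, in place of the paper's rotating frame and high-fidelity gravity model):

        # Minimum-fuel descent with constant gravity (invented numbers) and
        # trapezoidal dynamics as equality constraints; the thrust bound is a
        # second-order cone constraint, so the whole problem is convex.
        import cvxpy as cp
        import numpy as np

        N, dt = 40, 2.0
        g = np.array([0.0, 0.0, -0.001])          # assumed gravity [km/s^2]
        r = cp.Variable((N + 1, 3))               # position [km]
        v = cp.Variable((N + 1, 3))               # velocity [km/s]
        T = cp.Variable((N + 1, 3))               # thrust acceleration [km/s^2]

        cons = [r[0] == np.array([1.0, 0.5, 2.0]), v[0] == np.array([0, 0, -0.01]),
                r[N] == 0, v[N] == 0]             # soft landing at the target
        for k in range(N):
            a0, a1 = T[k] + g, T[k + 1] + g
            cons += [v[k + 1] == v[k] + dt / 2 * (a0 + a1),   # trapezoidal rule
                     r[k + 1] == r[k] + dt / 2 * (v[k] + v[k + 1])]
        cons += [cp.norm(T, axis=1) <= 0.005,     # thrust magnitude bound (SOC)
                 r[:, 2] >= 0]                    # stay above the surface plane

        fuel = cp.sum(cp.norm(T, axis=1))         # fuel proxy: integrated |T|
        prob = cp.Problem(cp.Minimize(fuel), cons)
        prob.solve()
        print(prob.status, round(float(fuel.value), 4))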

  6. General and mechanistic optimal relationships for tensile strength of doubly convex tablets under diametrical compression.

    PubMed

    Razavi, Sonia M; Gonzalez, Marcial; Cuitiño, Alberto M

    2015-04-30

    We propose a general framework for determining optimal relationships for the tensile strength of doubly convex tablets under diametrical compression. This approach is based on the observation that tensile strength is directly proportional to the breaking force and inversely proportional to a non-linear function of geometric parameters and material properties. This generalization reduces to the analytical expression commonly used for flat-faced tablets, i.e., the Hertz solution, and to the empirical relationship currently used in the pharmaceutical industry for convex-faced tablets, i.e., Pitt's equation. Under proper parametrization, the optimal tensile strength relationship can be determined from experimental results by minimizing a figure of merit of choice. This optimization is performed under the first-order approximation that a flat-faced tablet and a doubly curved tablet have the same tensile strength if they have the same relative density and are made of the same powder, under equivalent manufacturing conditions. Furthermore, we provide a set of recommendations and best practices for assessing the performance of optimal tensile strength relationships in general. Based on these guidelines, we identify two new models, namely the general and mechanistic models, which are effective and predictive alternatives to the tensile strength relationship currently used in the pharmaceutical industry. Copyright © 2015 Elsevier B.V. All rights reserved.

  7. Adaptive evolution of defense ability leads to diversification of prey species.

    PubMed

    Zu, Jian; Wang, Jinliang; Du, Jianqiang

    2014-06-01

    In this paper, by using the adaptive dynamics approach, we investigate how the adaptive evolution of defense ability promotes the diversity of prey species in an initial one-prey-two-predator community. We assume that the prey species can evolve to a safer strategy that reduces its predation risk, but a prey with a high defense ability against one predator may have a low defense ability against the other, and vice versa. First, by using the method of critical function analysis, we find that if the trade-off is convex in the vicinity of the evolutionarily singular strategy, then this singular strategy is a continuously stable strategy. However, if the trade-off is weakly concave near the singular strategy and the competition between the two predators is relatively weak, then the singular strategy may be an evolutionary branching point. Second, we find that after branching has occurred in the prey strategy, if the trade-off curve is globally concave, the prey species might eventually evolve into two specialists, each caught by only one predator species. However, if the trade-off curve is convex-concave-convex, the prey species might eventually branch into two partial specialists, each caught by both predators, which can stably coexist on a much longer evolutionary timescale.

  8. Unconditionally stable, second-order accurate schemes for solid state phase transformations driven by mechano-chemical spinodal decomposition

    DOE PAGES

    Sagiyama, Koki; Rudraraju, Shiva; Garikipati, Krishna

    2016-09-13

    Here, we consider solid state phase transformations that are caused by free energy densities with domains of non-convexity in strain-composition space; we refer to the non-convex domains as mechano-chemical spinodals. The non-convexity with respect to composition and strain causes segregation into phases with different crystal structures. We work on an existing model that couples the classical Cahn-Hilliard model with Toupin’s theory of gradient elasticity at finite strains. Both systems are represented by fourth-order, nonlinear, partial differential equations. The goal of this work is to develop unconditionally stable, second-order accurate time-integration schemes, motivated by the need to carry out large scale computations of dynamically evolving microstructures in three dimensions. We also introduce reduced formulations naturally derived from these proposed schemes for faster computations that are still second-order accurate. Although our method is developed and analyzed here for a specific class of mechano-chemical problems, one can readily apply the same method to develop unconditionally stable, second-order accurate schemes for any problems for which free energy density functions are multivariate polynomials of solution components and component gradients. Apart from an analysis and construction of methods, we present a suite of numerical results that demonstrate the schemes in action.

  9. Monte Carlo Simulations for a LEP Experiment with Unix Workstation Clusters

    NASA Astrophysics Data System (ADS)

    Bonesini, M.; Calegari, A.; Rossi, P.; Rossi, V.

    Modular systems of RISC-CPU-based computers have been implemented for large productions of Monte Carlo simulated events for the DELPHI experiment at CERN. From a pilot system based on DEC 5000 CPUs, a full-size system based on a CONVEX C3820 UNIX supercomputer and a cluster of HP 735 workstations has been put into operation as a joint effort between INFN Milano and CILEA.

  10. Tumor segmentation of multi-echo MR T2-weighted images with morphological operators

    NASA Astrophysics Data System (ADS)

    Torres, W.; Martín-Landrove, M.; Paluszny, M.; Figueroa, G.; Padilla, G.

    2009-02-01

    In the present work, an automatic brain tumor segmentation procedure based on mathematical morphology is proposed. The approach considers sequences of eight multi-echo MR T2-weighted images. The relaxation time T2 characterizes the relaxation of water protons in the brain tissue: white matter, gray matter, cerebrospinal fluid (CSF) or pathological tissue. Image data are initially regularized by the application of a log-convex filter in order to adjust their geometrical properties to those of noiseless data, which exhibit monotonically decreasing convex behavior. The regularized data are then analyzed by means of an 8-dimensional morphological eccentricity filter. In a first stage, the filter was used for the spatial homogenization of the tissues in the image, replacing each pixel by the most representative pixel within its structuring element, i.e. the one which exhibits the minimum total distance to all members of the structuring element. On the filtered images, the relaxation time T2 is estimated by means of a least-squares regression algorithm and the histogram of T2 is determined. The T2 histogram was partitioned using the watershed morphological operator; relaxation-time classes were established and used for tissue classification and segmentation of the image. The method was validated on 15 sets of MRI data with excellent results.

  11. Calculating and controlling the error of discrete representations of Pareto surfaces in convex multi-criteria optimization.

    PubMed

    Craft, David

    2010-10-01

    A discrete set of points and their convex combinations can serve as a sparse representation of the Pareto surface in multiple objective convex optimization. We develop a method to evaluate the quality of such a representation, and show by example that in multiple objective radiotherapy planning, the number of Pareto optimal solutions needed to represent Pareto surfaces of up to five dimensions grows at most linearly with the number of objectives. The method described is also applicable to the representation of convex sets. Copyright © 2009 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.

  12. Natural-Scene Statistics Predict How the Figure–Ground Cue of Convexity Affects Human Depth Perception

    PubMed Central

    Fowlkes, Charless C.; Banks, Martin S.

    2010-01-01

    The shape of the contour separating two regions strongly influences judgments of which region is “figure” and which is “ground.” Convexity and other figure–ground cues are generally assumed to indicate only which region is nearer, but nothing about how much the regions are separated in depth. To determine the depth information conveyed by convexity, we examined natural scenes and found that depth steps across surfaces with convex silhouettes are likely to be larger than steps across surfaces with concave silhouettes. In a psychophysical experiment, we found that humans exploit this correlation. For a given binocular disparity, observers perceived more depth when the near surface's silhouette was convex rather than concave. We estimated the depth distributions observers used in making those judgments: they were similar to the natural-scene distributions. Our findings show that convexity should be reclassified as a metric depth cue. They also suggest that the dichotomy between metric and nonmetric depth cues is false and that the depth information provided by many cues should be evaluated with respect to natural-scene statistics. Finally, the findings provide an explanation for why figure–ground cues modulate the responses of disparity-sensitive cells in visual cortex. PMID:20505093

  13. Endobronchial ultrasound elastography: a new method in endobronchial ultrasound-guided transbronchial needle aspiration.

    PubMed

    Jiang, Jun-Hong; Turner, J Francis; Huang, Jian-An

    2015-12-01

    TBNA through the flexible bronchoscope is a 37-year-old technology that utilizes a TBNA needle to puncture the bronchial wall and obtain specimens of peribronchial and mediastinal lesions for the diagnosis of benign and malignant diseases in the mediastinum and lung. In 2002, the Olympus Company developed the first-generation ultrasound equipment for use in the airway, initially utilizing an ultrasound probe introduced through the working channel, followed by incorporation of a fixed linear ultrasound array at the distal tip of the bronchoscope. This new bronchoscope, equipped with a convex-type ultrasound probe at the tip, was subsequently introduced into clinical practice. The convex probe (CP)-EBUS allows real-time endobronchial ultrasound-guided transbronchial needle aspiration (EBUS-TBNA) of mediastinal and hilar lymph nodes. EBUS-TBNA is a minimally invasive procedure performed under local anesthesia that has been shown to have a high sensitivity and diagnostic yield for lymph node staging of lung cancer. In 10 years of EBUS development, the Olympus Company developed the second-generation EBUS bronchoscope (BF-UC260FW) with the ultrasound image processor (EU-M1), and in 2013 introduced a new ultrasound image processor (EU-M2) into clinical practice. The FUJI company has also developed a curvilinear-array endobronchial ultrasound bronchoscope (EB-530 US) that makes it easier for the operator to master the operation of the ultrasonic bronchoscope. In addition, the new thin convex probe endobronchial ultrasound bronchoscope (TCP-EBUS) is able to visualize one to three bifurcations distal to the current CP-EBUS. The emergence of EBUS-TBNA has thus been accompanied by innovation in EBUS instruments. EBUS elastography is a new technique for describing the compliance of structures during EBUS, which may be of use in determining metastasis to the mediastinal and hilar lymph nodes. This article describes these new EBUS techniques and reviews the relevant literature.

  14. A recurrent neural network for nonlinear optimization with a continuously differentiable objective function and bound constraints.

    PubMed

    Liang, X B; Wang, J

    2000-01-01

    This paper presents a continuous-time recurrent neural-network model for nonlinear optimization with any continuously differentiable objective function and bound constraints. Quadratic optimization with bound constraints is a special problem that can be solved by the recurrent neural network. The proposed recurrent neural network has the following characteristics. 1) It is regular in the sense that any optimum of the objective function with bound constraints is also an equilibrium point of the neural network. If the objective function to be minimized is convex, then the recurrent neural network is complete in the sense that the set of optima of the function with bound constraints coincides with the set of equilibria of the neural network. 2) The recurrent neural network is primal and quasiconvergent in the sense that its trajectory cannot escape from the feasible region and will converge to the set of equilibria of the neural network for any initial point in the bounded feasible region. 3) The recurrent neural network has an attractivity property in the sense that its trajectory will eventually converge to the feasible region for any initial state, even outside the bounded feasible region. 4) For minimizing any strictly convex quadratic objective function subject to bound constraints, the recurrent neural network is globally exponentially stable for almost any positive network parameters. Simulation results are given to demonstrate the convergence and performance of the proposed recurrent neural network for nonlinear optimization with bound constraints.
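
    A common continuous-time model of this type is the projection network dx/dt = P(x - alpha*grad f(x)) - x over the feasible box; the Euler simulation below is a sketch of such dynamics (an assumed model form and gain alpha, not necessarily the paper's exact network):

        # Euler simulation of dx/dt = P(x - alpha*grad f(x)) - x for the
        # strictly convex quadratic f(x) = 0.5*x'Qx + c'x over the box [0,1]^2.
        # Equilibria satisfy x = P(x - alpha*grad f(x)), i.e. the KKT conditions.
        import numpy as np

        Q = np.array([[3.0, 1.0], [1.0, 2.0]])    # strictly convex quadratic
        c = np.array([-4.0, -6.0])
        lo, hi = np.zeros(2), np.ones(2)          # bound constraints
        alpha = 0.25                              # assumed projection gain

        def grad(x):
            return Q @ x + c

        x = np.array([5.0, -3.0])                 # start outside the feasible box
        for _ in range(5000):
            x += 0.01 * (np.clip(x - alpha * grad(x), lo, hi) - x)  # Euler step
        print(np.round(x, 4))                     # -> constrained optimum [1, 1]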

  15. GPC: General Polygon Clipper library

    NASA Astrophysics Data System (ADS)

    Murta, Alan

    2015-12-01

    The University of Manchester GPC library is a flexible and highly robust polygon set operations library for use with C, C#, Delphi, Java, Perl, Python, Haskell, Lua, VB.Net and other applications. It supports difference, intersection, exclusive-or and union clip operations, and polygons may be composed of multiple disjoint contours. Contour vertices may be given in any order - clockwise or anticlockwise - and contours may be convex, concave or self-intersecting, and may be nested (i.e., polygons may have holes). Output may take the form of either polygon contours or tristrips, and hole and external contours are differentiated in the result.

  16. Sparse regularization for force identification using dictionaries

    NASA Astrophysics Data System (ADS)

    Qiao, Baijie; Zhang, Xingwu; Wang, Chenxi; Zhang, Hang; Chen, Xuefeng

    2016-04-01

    The classical function expansion method, based on minimizing the l2-norm of the response residual, employs various basis functions to represent the unknown force. Its difficulty lies in determining the optimum number of basis functions. Considering the sparsity of the force in the time domain or in another basis space, we develop a general sparse regularization method based on minimizing the l1-norm of the coefficient vector of the basis functions. The number of basis functions is adaptively determined by minimizing the number of nonzero components in the coefficient vector during the sparse regularization process. First, according to the profile of the unknown force, the dictionary composed of basis functions is determined. Second, a sparsity-based convex optimization model for force identification is constructed. Third, given the transfer function and the operational response, sparse reconstruction by separable approximation (SpaRSA) is developed to solve the sparse regularization problem of force identification. Finally, experiments including the identification of impact and harmonic forces are conducted on a cantilever thin plate structure to illustrate the effectiveness and applicability of SpaRSA. Besides the Dirac dictionary, three other sparse dictionaries, including Db6 wavelets, Sym4 wavelets and cubic B-spline functions, can also accurately identify both single and double impact forces from highly noisy responses in a sparse representation frame. The discrete cosine functions can also successfully reconstruct harmonic forces, including sinusoidal, square and triangular forces. Conversely, the traditional Tikhonov regularization method with the L-curve criterion fails to identify both the impact and the harmonic forces in these cases.
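
    The flavor of the sparse regularization step can be sketched with ISTA, a close relative of SpaRSA (SpaRSA additionally adapts the step size Barzilai-Borwein style; the transfer matrix and force below are invented):

        # ISTA for min_x 0.5*||Ax - b||^2 + lam*||x||_1, a stand-in for SpaRSA.
        # A: transfer matrix (dictionary), b: measured response, x: sparse force.
        import numpy as np

        def ista(A, b, lam, iters=500):
            L = np.linalg.norm(A, 2) ** 2         # Lipschitz constant of gradient
            x = np.zeros(A.shape[1])
            for _ in range(iters):
                z = x - A.T @ (A @ x - b) / L     # gradient step
                x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # shrink
            return x

        rng = np.random.default_rng(3)
        A = rng.standard_normal((60, 200))        # assumed transfer matrix
        f_true = np.zeros(200); f_true[[20, 21]] = [3.0, -1.5]  # impact-like force
        b = A @ f_true + 0.05 * rng.standard_normal(60)
        print(np.flatnonzero(np.abs(ista(A, b, lam=0.5)) > 0.2))  # -> [20 21]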

  17. Asymmetric Bulkheads for Cylindrical Pressure Vessels

    NASA Technical Reports Server (NTRS)

    Ford, Donald B.

    2007-01-01

    Asymmetric bulkheads are proposed for the ends of vertically oriented cylindrical pressure vessels. These bulkheads, which would feature both convex and concave contours, would offer advantages over purely convex, purely concave, and flat bulkheads (see figure). Intended originally to be applied to large tanks that hold propellant liquids for launching spacecraft, the asymmetric-bulkhead concept may also be attractive for terrestrial pressure vessels for which there are requirements to maximize volumetric and mass efficiencies. A description of the relative advantages and disadvantages of prior symmetric bulkhead configurations is prerequisite to understanding the advantages of the proposed asymmetric configuration: In order to obtain adequate strength, flat bulkheads must be made thicker, relative to concave and convex bulkheads; the difference in thickness is such that, other things being equal, pressure vessels with flat bulkheads must be made heavier than ones with concave or convex bulkheads. Convex bulkhead designs increase overall tank lengths, thereby necessitating additional supporting structure for keeping tanks vertical. Concave bulkhead configurations increase tank lengths and detract from volumetric efficiency, even though they do not necessitate additional supporting structure. The shape of a bulkhead affects the proportion of residual fluid in a tank that is, the portion of fluid that unavoidably remains in the tank during outflow and hence cannot be used. In this regard, a flat bulkhead is disadvantageous in two respects: (1) It lacks a single low point for optimum placement of an outlet and (2) a vortex that forms at the outlet during outflow prevents a relatively large amount of fluid from leaving the tank. A concave bulkhead also lacks a single low point for optimum placement of an outlet. Like purely concave and purely convex bulkhead configurations, the proposed asymmetric bulkhead configurations would be more mass-efficient than is the flat bulkhead configuration. In comparison with both purely convex and purely concave configurations, the proposed asymmetric configurations would offer greater volumetric efficiency. Relative to a purely convex bulkhead configuration, the corresponding asymmetric configuration would result in a shorter tank, thus demanding less supporting structure. An asymmetric configuration provides a low point for optimum location of a drain, and the convex shape at the drain location minimizes the amount of residual fluid.

  18. Magnetic-Field Density-Functional Theory (BDFT): Lessons from the Adiabatic Connection.

    PubMed

    Reimann, Sarah; Borgoo, Alex; Tellgren, Erik I; Teale, Andrew M; Helgaker, Trygve

    2017-09-12

    We study the effects of magnetic fields in the context of magnetic field density-functional theory (BDFT), where the energy is a functional of the electron density ρ and the magnetic field B. We show that this approach is a worthwhile alternative to current-density functional theory (CDFT) and may provide a viable route to the study of many magnetic phenomena using density-functional theory (DFT). The relationship between BDFT and CDFT is developed and clarified within the framework of the four-way correspondence of saddle functions and their convex and concave parents in convex analysis. By decomposing the energy into its Kohn-Sham components, we demonstrate that the magnetizability is mainly determined by those energy components that are related to the density. For existing density functional approximations, this implies that, for the magnetizability, improvements of the density will be more beneficial than introducing a magnetic-field dependence in the correlation functional. However, once a good charge density is achieved, we show that high accuracy is likely only obtainable by including magnetic-field dependence. We demonstrate that adiabatic-connection (AC) curves at different field strengths resemble one another closely provided each curve is calculated at the equilibrium geometry of that field strength. In contrast, if all AC curves are calculated at the equilibrium geometry of the field-free system, then the curves change strongly with increasing field strength due to the increasing importance of static correlation. This holds also for density functional approximations, for which we demonstrate that the main error encountered in the presence of a field is already present at zero field strength, indicating that density-functional approximations may be applied to systems in strong fields, without the need to treat additional static correlation.

  19. A Sparse Representation-Based Deployment Method for Optimizing the Observation Quality of Camera Networks

    PubMed Central

    Wang, Chang; Qi, Fei; Shi, Guangming; Wang, Xiaotian

    2013-01-01

    Deployment is a critical issue affecting the quality of service of camera networks. The deployment aims at adopting the least number of cameras to cover the whole scene, which may have obstacles to occlude the line of sight, with expected observation quality. This is generally formulated as a non-convex optimization problem, which is hard to solve in polynomial time. In this paper, we propose an efficient convex solution for deployment optimizing the observation quality based on a novel anisotropic sensing model of cameras, which provides a reliable measurement of the observation quality. The deployment is formulated as the selection of a subset of nodes from a redundant initial deployment with numerous cameras, which is an ℓ0 minimization problem. Then, we relax this non-convex optimization to a convex ℓ1 minimization employing the sparse representation. Therefore, the high quality deployment is efficiently obtained via convex optimization. Simulation results confirm the effectiveness of the proposed camera deployment algorithms. PMID:23989826

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Skala, Vaclav

    There are many space subdivision and space partitioning techniques used in many algorithms to speed up computations. They mostly rely on orthogonal space subdivision or on hierarchical data structures, e.g. BSP trees, quadtrees, octrees, kd-trees, bounding volume hierarchies, etc. However, in some applications a non-orthogonal space subdivision can offer new ways to achieve an actual speed-up. For a convex polygon in E^2, the simple point-in-polygon test has O(N) complexity and the optimal algorithm has O(log N) computational complexity. In the E^3 case, the complexity is O(N) even for a convex polyhedron, as no ordering is defined. New Point-in-Convex Polygon and Point-in-Convex Polyhedron algorithms are presented, based on space subdivision in the preprocessing stage and resulting in O(1) run-time complexity. The presented approach is simple to implement. Due to the principle of duality, dual problems, e.g. line-convex polygon intersection and line clipping, can be solved in a similar way.
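
    For reference, the classical O(log N) point-in-convex-polygon test cited here as the prior optimum (a binary search over the triangle fan around vertex 0; Skala's O(1) method replaces this with a preprocessed space subdivision):

        # O(log N) point-in-convex-polygon test: binary-search the triangle fan
        # around vertex 0 of a CCW convex polygon, then check the final wedge.
        def point_in_convex_polygon(poly, p):
            def cross(o, a, b):
                return (a[0]-o[0])*(b[1]-o[1]) - (a[1]-o[1])*(b[0]-o[0])
            n = len(poly)
            if cross(poly[0], poly[1], p) < 0 or cross(poly[0], poly[n-1], p) > 0:
                return False                      # outside the fan around vertex 0
            lo, hi = 1, n - 1                     # find the wedge containing p
            while hi - lo > 1:
                mid = (lo + hi) // 2
                if cross(poly[0], poly[mid], p) >= 0:
                    lo = mid
                else:
                    hi = mid
            return cross(poly[lo], poly[lo + 1], p) >= 0

        square = [(0, 0), (4, 0), (4, 4), (0, 4)]
        print(point_in_convex_polygon(square, (2, 2)),   # True
              point_in_convex_polygon(square, (5, 1)))   # False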

  1. The roles of the convex hull and the number of potential intersections in performance on visually presented traveling salesperson problems.

    PubMed

    Vickers, Douglas; Lee, Michael D; Dry, Matthew; Hughes, Peter

    2003-10-01

    The planar Euclidean version of the traveling salesperson problem requires finding the shortest tour through a two-dimensional array of points. MacGregor and Ormerod (1996) have suggested that people solve such problems by using a global-to-local perceptual organizing process based on the convex hull of the array. We review evidence for and against this idea, before considering an alternative, local-to-global perceptual process, based on the rapid automatic identification of nearest neighbors. We compare these approaches in an experiment in which the effects of number of convex hull points and number of potential intersections on solution performance are measured. Performance worsened with more points on the convex hull and with fewer potential intersections. A measure of response uncertainty was unaffected by the number of convex hull points but increased with fewer potential intersections. We discuss a possible interpretation of these results in terms of a hierarchical solution process based on linking nearest neighbor clusters.

  2. On Using Homogeneous Polynomials To Design Anisotropic Yield Functions With Tension/Compression Symmetry/Asymmetry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Soare, S.; Cazacu, O.; Yoon, J. W.

    With few exceptions, non-quadratic homogeneous polynomials have received little attention as possible candidates for yield functions. One reason might be that not every such polynomial is a convex function. In this paper we show that homogeneous polynomials can be used to develop powerful anisotropic yield criteria, and that imposing simple constraints on the identification process leads, a posteriori, to the desired convexity property. It is shown that combinations of such polynomials allow for modeling the yielding properties of metallic materials with any crystal structure, i.e., both cubic and hexagonal, including those which display strength differential effects. Extensions of the proposed criteria to 3D stress states are also presented. We apply these criteria to the description of the aluminum alloy AA2090T3. We prove that a sixth-order orthotropic homogeneous polynomial is capable of a satisfactory description of this alloy. Next, applications to the deep drawing of a cylindrical cup are presented. The newly proposed criteria were implemented as UMAT subroutines in the commercial FE code ABAQUS. We were able to predict six ears on the AA2090T3 cup's profile. Finally, we show that a tension/compression asymmetry in yielding can have an important effect on the earing profile.

  3. CudaChain: an alternative algorithm for finding 2D convex hulls on the GPU.

    PubMed

    Mei, Gang

    2016-01-01

    This paper presents an alternative GPU-accelerated convex hull algorithm and a novel Sorting-based Preprocessing Approach (SPA) for planar point sets. The proposed convex hull algorithm, termed CudaChain, consists of two stages: (1) two rounds of preprocessing performed on the GPU, and (2) the finalization of calculating the expected convex hull on the CPU. Interior points lying inside a quadrilateral formed by four extreme points are first discarded, and then the remaining points are distributed into several (typically four) sub-regions. The points in each subset are first sorted in parallel; then the second round of discarding is performed using SPA; and finally a simple chain is formed for the currently remaining points. A simple polygon can be easily generated by directly connecting all the chains in the sub-regions. The expected convex hull of the input points is finally obtained by calculating the convex hull of the simple polygon. The library Thrust is utilized to realize the parallel sorting, reduction, and partitioning for better efficiency and simplicity. Experimental results show that: (1) SPA can very effectively detect and discard interior points; and (2) CudaChain achieves 5×-6× speedups over the well-known Qhull implementation for 20M points.
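
    The final CPU stage computes the convex hull of a simple polygon. As a point of reference, the sketch below shows Andrew's monotone chain, a standard serial convex hull routine for general point sets; it is illustrative only and is not the specialized routine used by CudaChain.

    ```python
    # Andrew's monotone chain: the standard serial convex hull routine,
    # shown only to illustrate the kind of finalization step run on the CPU.
    def convex_hull(points):
        pts = sorted(set(points))
        if len(pts) <= 2:
            return pts
        def cross(o, a, b):
            return (a[0]-o[0])*(b[1]-o[1]) - (a[1]-o[1])*(b[0]-o[0])
        lower, upper = [], []
        for p in pts:                      # build lower hull
            while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
                lower.pop()
            lower.append(p)
        for p in reversed(pts):            # build upper hull
            while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
                upper.pop()
            upper.append(p)
        return lower[:-1] + upper[:-1]     # counter-clockwise, no duplicate endpoints

    print(convex_hull([(0, 0), (2, 1), (1, 1), (2, 2), (0, 2), (1, 3)]))
    ```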

  4. Federal Motor Carrier Safety Administration’s advanced system testing utilizing a data acquisition system on the highways (FAST DASH) safety technology evaluation project #3: novel convex mirrors.

    DOT National Transportation Integrated Search

    2016-11-01

    An independent evaluation of a set of novel prototype mirrors was conducted to determine whether the mirrors perform as well as traditional production mirrors across the basic functions of field of view (FOV), image distortion, and distance estimation...

  5. A parallel Discrete Element Method to model collisions between non-convex particles

    NASA Astrophysics Data System (ADS)

    Rakotonirina, Andriarimina Daniel; Delenne, Jean-Yves; Wachs, Anthony

    2017-06-01

    In many dry granular and suspension flow configurations, particles can be highly non-spherical. It is now well established in the literature that particle shape affects the flow dynamics or the microstructure of the particle assembly in assorted ways, e.g., the compactness of a packed bed or heap, dilation under shear, resistance to shear, momentum transfer between translational and angular motions, and the ability to form arches and block the flow. In this talk, we suggest an accurate and efficient way to model collisions between particles of (almost) arbitrary shape. For that purpose, we develop a Discrete Element Method (DEM) combined with a soft-particle contact model. The collision detection algorithm handles contacts between bodies of various shapes and sizes. For non-convex bodies, our strategy is based on decomposing a non-convex body into a set of convex ones. Our novel method can therefore be called a "glued-convex method" (in the sense of clumping convex bodies together), as an extension of the popular "glued-spheres" method, and is implemented in our own granular dynamics code Grains3D. Since the whole problem is solved explicitly, our fully MPI-parallelized code Grains3D exhibits very high scalability when dynamic load balancing is not required. In particular, simulations on up to a few thousand cores in configurations involving up to a few tens of millions of particles can readily be performed. We apply our enhanced numerical model to (i) the collapse of a granular column made of convex particles and (ii) the microstructure of a heap of non-convex particles in a cylindrical reactor.

  6. Optimal Path Determination for Flying Vehicle to Search an Object

    NASA Astrophysics Data System (ADS)

    Heru Tjahjana, R.; Heri Soelistyo U, R.; Ratnasari, L.; Irawanto, B.

    2018-01-01

    In this paper, a method to determine the optimal path for a flying vehicle searching for an object is proposed. The background of the paper is the control of an air vehicle searching for an object. Optimal path determination is one of the most popular problems in optimization. This paper describes a control design model for a flying vehicle searching for an object, and focuses on the optimal path used in the search. An optimal control model is used to make the vehicle move along an optimal path; if the vehicle moves along an optimal path, then the path to reach the searched object is also optimal. The cost functional is one of the most important ingredients in optimal control design; here it is chosen so that the air vehicle reaches the object as quickly as possible. The axis reference of the flying vehicle uses the N-E-D (North-East-Down) coordinate system. The main results of this paper are theorems, proved analytically, stating that the chosen cost functional makes the control optimal and makes the vehicle move along an optimal path. It is also shown that the cost functional used is convex; this convexity guarantees the existence of an optimal control. Some simulations of an optimal search path for the flying vehicle are also presented. The optimization method used to find the optimal control and the vehicle's optimal path is the Pontryagin Minimum Principle.

  7. Enhanced robust finite-time passivity for Markovian jumping discrete-time BAM neural networks with leakage delay.

    PubMed

    Sowmiya, C; Raja, R; Cao, Jinde; Rajchakit, G; Alsaedi, Ahmed

    2017-01-01

    This paper is concerned with enhanced results on robust finite-time passivity for uncertain discrete-time Markovian jumping BAM delayed neural networks with leakage delay. By employing a proper Lyapunov-Krasovskii functional candidate and the reciprocally convex combination method together with the linear matrix inequality technique, several sufficient conditions are derived that guarantee the passivity of discrete-time BAM neural networks. An important feature of our paper is the use of the reciprocally convex combination lemma in the main section; the relevance of that lemma arises in the derivation of stability using Jensen's inequality. Further, zero inequalities help to establish the sufficient conditions for finite-time boundedness and passivity under uncertainties. Finally, the enlargement of the feasible region of the proposed criteria is shown via numerical examples with simulation to illustrate the applicability and usefulness of the proposed method. The reciprocally convex combination lemma is recalled below.
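
    For context, the following is the standard two-term form of the reciprocally convex combination lemma (due to Park et al., 2011), the form typically paired with Jensen's inequality in derivations of this kind; the exact variant employed in the paper may differ.

    ```latex
    % Two-term reciprocally convex combination lemma (Park et al., 2011).
    \[
    \text{If } R_1, R_2 \succ 0 \text{ and there exists } S \text{ such that }
    \begin{bmatrix} R_1 & S \\ S^{T} & R_2 \end{bmatrix} \succeq 0,
    \text{ then for all } \alpha \in (0,1):
    \]
    \[
    \frac{1}{\alpha}\, x_1^{T} R_1 x_1 + \frac{1}{1-\alpha}\, x_2^{T} R_2 x_2
    \;\ge\;
    \begin{bmatrix} x_1 \\ x_2 \end{bmatrix}^{T}
    \begin{bmatrix} R_1 & S \\ S^{T} & R_2 \end{bmatrix}
    \begin{bmatrix} x_1 \\ x_2 \end{bmatrix}.
    \]
    ```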

  8. Efficient isoparametric integration over arbitrary space-filling Voronoi polyhedra for electronic structure calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Alam, Aftab; Khan, S. N.; Wilson, Brian G.

    2011-07-06

    A numerically efficient, accurate, and easily implemented integration scheme over convex Voronoi polyhedra (VP) is presented for use in ab initio electronic-structure calculations. We combine a weighted Voronoi tessellation with isoparametric integration via Gauss-Legendre quadratures to provide rapidly convergent VP integrals for a variety of integrands, including those with a Coulomb singularity. We showcase the capability of our approach by first applying it to an analytic charge-density model, achieving machine-precision accuracy with expected convergence properties in milliseconds. For contrast, we compare our results to those using shape functions and show our approach is greater than 10^5 times faster and 10^7 times more accurate. Furthermore, a weighted Voronoi tessellation also allows for a physics-based partitioning of space that guarantees convex, space-filling VP while reflecting accurate atomic size and site charges, as we show within KKR methods applied to Fe-Pd alloys.
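
    The building block of such a scheme, Gauss-Legendre quadrature on an isoparametrically mapped cell, can be sketched in a few lines. The 2D quadrilateral version below is a simplified illustration of the idea, not the paper's 3D VP decomposition.

    ```python
    # Minimal sketch of isoparametric Gauss-Legendre integration over a
    # bilinearly mapped quadrilateral (the 3D VP scheme maps hexahedral cells).
    import numpy as np

    def integrate_on_quad(f, verts, order=4):
        """Integrate f(x, y) over the quad with corners verts (counter-clockwise)."""
        g, w = np.polynomial.legendre.leggauss(order)   # nodes/weights on [-1, 1]
        total = 0.0
        for xi, wi in zip(g, w):
            for eta, wj in zip(g, w):
                # Bilinear shape functions on the reference square.
                N = 0.25 * np.array([(1-xi)*(1-eta), (1+xi)*(1-eta),
                                     (1+xi)*(1+eta), (1-xi)*(1+eta)])
                dN_dxi  = 0.25 * np.array([-(1-eta),  (1-eta), (1+eta), -(1+eta)])
                dN_deta = 0.25 * np.array([-(1-xi), -(1+xi),  (1+xi),  (1-xi)])
                x, y = N @ verts
                J = np.array([dN_dxi @ verts, dN_deta @ verts])  # 2x2 Jacobian
                total += wi * wj * f(x, y) * abs(np.linalg.det(J))
        return total

    unit_square = np.array([[0., 0.], [1., 0.], [1., 1.], [0., 1.]])
    print(integrate_on_quad(lambda x, y: x * y, unit_square))  # exact value: 0.25
    ```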

  9. Existence of evolutionary variational solutions via the calculus of variations

    NASA Astrophysics Data System (ADS)

    Bögelein, Verena; Duzaar, Frank; Marcellini, Paolo

    In this paper we introduce a purely variational approach to time-dependent problems, yielding the existence of global parabolic minimizers, that is, maps $u$ satisfying $\int_0^T\!\int_\Omega \bigl[u \cdot \partial_t \varphi + f(x, Du)\bigr]\,dx\,dt \le \int_0^T\!\int_\Omega f(x, Du + D\varphi)\,dx\,dt$ whenever $T > 0$ and $\varphi \in C_0^\infty(\Omega \times (0,T), \mathbb{R}^N)$. For the integrand $f \colon \Omega \times \mathbb{R}^{Nn} \to [0, \infty]$ we merely assume convexity with respect to the gradient variable and coercivity. These evolutionary variational solutions are obtained as limits of maps depending on space and time minimizing certain convex variational functionals. In the simplest situation, with some growth conditions on $f$, the method provides the existence of global weak solutions to Cauchy-Dirichlet problems of parabolic systems of the type $\partial_t u - \operatorname{div} D_\xi f(x, Du) = 0$ in $\Omega \times (0, \infty)$.

  10. Thick lens chromatic effective focal length variation versus bending

    NASA Astrophysics Data System (ADS)

    Sparrold, Scott

    2017-11-01

    Longitudinal chromatic aberration (LCA) can limit the optical performance of refractive optical systems. Understanding a singlet's chromatic change of effective focal length leads to insights and methods for controlling LCA. Long-established first-order theory shows that the chromatic change in focal length for a zero-thickness lens is proportional to its focal length divided by the lens V-number, or inverse dispersion. This work presents the derivation of an equation for a thick singlet's chromatic change in effective focal length as a function of center thickness, t, dispersion, V, index of refraction, n, and the Coddington shape factor, K. A plot of bending versus chromatic focal length variation is presented. Lens thickness does not influence the chromatic variation of effective focal length for a convex-plano or plano-convex lens. A lens center thickness's influence on chromatic focal length variation is more pronounced for lower indices of refraction.
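
    The zero-thickness (thin lens) result quoted above, a focal shift of roughly f/V, is easy to evaluate numerically; the values in the sketch below are illustrative only.

    ```python
    # First-order chromatic focal shift of a thin lens, delta_f ~ f / V,
    # as stated in the abstract. The values below are illustrative only.
    f_mm = 100.0      # focal length in millimeters
    V = 64.2          # Abbe number, roughly that of N-BK7 glass
    delta_f = f_mm / V
    print(f"chromatic focal shift: {delta_f:.2f} mm")   # about 1.56 mm
    ```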

  11. Prospect theory in the health domain: a quantitative assessment.

    PubMed

    Attema, Arthur E; Brouwer, Werner B F; l'Haridon, Olivier

    2013-12-01

    It is well-known that expected utility (EU) has empirical deficiencies. Cumulative prospect theory (CPT) has developed as an alternative with more descriptive validity. However, CPT's full function had not yet been quantified in the health domain. This paper is therefore the first to simultaneously measure utility of life duration, probability weighting, and loss aversion in this domain. We observe loss aversion and risk aversion for gains and losses, which for gains can be explained by probabilistic pessimism. Utility for gains is almost linear. For losses, we find less weighting of probability 1/2 and concave utility. This contrasts with the common finding of convex utility for monetary losses. However, CPT was proposed to explain choices among lotteries involving monetary outcomes. Life years are arguably very different from monetary outcomes and need not generate convex utility for losses. Moreover, utility of life duration reflects discounting, causing concave utility.
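
    For readers unfamiliar with CPT's components, the sketch below shows the textbook parametric forms of the value and probability weighting functions, with Tversky and Kahneman's original monetary-domain parameter estimates. The paper estimates analogous components for life-duration outcomes, so these particular forms and numbers are only illustrative.

    ```python
    # Textbook CPT building blocks (Tversky & Kahneman's parametric forms),
    # shown only to make the measured quantities concrete; the parameters
    # here were estimated for monetary outcomes, not life duration.
    def value(x, alpha=0.88, beta=0.88, lam=2.25):
        # Concave for gains, convex for losses, kinked by loss aversion lam.
        return x**alpha if x >= 0 else -lam * (-x)**beta

    def weight(p, gamma=0.61):
        # Inverse-S probability weighting: overweights small p, underweights large p.
        return p**gamma / (p**gamma + (1 - p)**gamma) ** (1 / gamma)

    print(value(10), value(-10))     # the loss looms larger than an equal gain
    print(weight(0.05), weight(0.95))
    ```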

  12. Method and system using power modulation and velocity modulation producing sputtered thin films with sub-angstrom thickness uniformity or custom thickness gradients

    DOEpatents

    Montcalm, Claude [Livermore, CA; Folta, James Allen [Livermore, CA; Walton, Christopher Charles [Berkeley, CA

    2003-12-23

    A method and system for determining a source flux modulation recipe for achieving a selected thickness profile of a film to be deposited (e.g., with highly uniform or highly accurate custom graded thickness) over a flat or curved substrate (such as concave or convex optics) by exposing the substrate to a vapor deposition source operated with time-varying flux distribution as a function of time. Preferably, the source is operated with time-varying power applied thereto during each sweep of the substrate to achieve the time-varying flux distribution as a function of time. Preferably, the method includes the steps of measuring the source flux distribution (using a test piece held stationary while exposed to the source with the source operated at each of a number of different applied power levels), calculating a set of predicted film thickness profiles, each film thickness profile assuming the measured flux distribution and a different one of a set of source flux modulation recipes, and determining from the predicted film thickness profiles a source flux modulation recipe which is adequate to achieve a predetermined thickness profile. Aspects of the invention include a computer-implemented method employing a graphical user interface to facilitate convenient selection of an optimal or nearly optimal source flux modulation recipe to achieve a desired thickness profile on a substrate. The method enables precise modulation of the deposition flux to which a substrate is exposed to provide a desired coating thickness distribution.

  13. Holographic reconstruction of AdS exchanges from crossing symmetry

    DOE PAGES

    Alday, Luis F.; Bissi, Agnese; Perlmutter, Eric

    2017-08-31

    Motivated by AdS/CFT, we address the following outstanding question in large N conformal field theory: given the appearance of a single-trace operator in the O × O OPE of a scalar primary O, what is its total contribution to the vacuum four-point function ⟨OOOO⟩ as dictated by crossing symmetry? We solve this problem in 4d conformal field theories at leading order in 1/N. Viewed holographically, this provides a field theory reconstruction of crossing-symmetric, four-point exchange amplitudes in AdS_5. Our solution takes the form of a resummation of the large spin solution to the crossing equations, supplemented by corrections at finite spin, required by crossing. The method can be applied to the exchange of operators of arbitrary twist τ and spin s, although it vastly simplifies for even-integer twist, where we give explicit results. The output is the set of OPE data for the exchange of all double-trace operators [OO]_{n,ℓ}. We find that the double-trace anomalous dimensions γ_{n,ℓ} are negative, monotonic and convex functions of ℓ, for all n and all ℓ > s. This constitutes a holographic signature of bulk causality and classical dynamics of even-spin fields. We also find that the “derivative relation” between double-trace anomalous dimensions and OPE coefficients does not hold in general, and derive the explicit form of the deviation in several cases. Finally, we study large n limits of γ_{n,ℓ}, relevant for the Regge and bulk-point regimes.

  14. Computational Efficiency of the Simplex Embedding Method in Convex Nondifferentiable Optimization

    NASA Astrophysics Data System (ADS)

    Kolosnitsyn, A. V.

    2018-02-01

    The simplex embedding method for solving convex nondifferentiable optimization problems is considered. A description of modifications of this method based on a shift of the cutting plane, intended to cut off the maximum number of simplex vertices, is given. These modifications speed up the problem solution. A numerical comparison of the efficiency of the proposed modifications based on the numerical solution of benchmark convex nondifferentiable optimization problems is presented.

  15. Thermal Protection System with Staggered Joints

    NASA Technical Reports Server (NTRS)

    Simon, Xavier D. (Inventor); Robinson, Michael J. (Inventor); Andrews, Thomas L. (Inventor)

    2014-01-01

    The thermal protection system disclosed herein is suitable for use with a spacecraft such as a reentry module or vehicle, where the spacecraft has a convex surface to be protected. An embodiment of the thermal protection system includes a plurality of heat resistant panels, each having an outer surface configured for exposure to atmosphere, an inner surface opposite the outer surface and configured for attachment to the convex surface of the spacecraft, and a joint edge defined between the outer surface and the inner surface. The joint edges of adjacent ones of the heat resistant panels are configured to mate with each other to form staggered joints that run between the peak of the convex surface and the base section of the convex surface.

  16. A fast adaptive convex hull algorithm on two-dimensional processor arrays with a reconfigurable BUS system

    NASA Technical Reports Server (NTRS)

    Olariu, S.; Schwing, J.; Zhang, J.

    1991-01-01

    A bus system that can change dynamically to suit computational needs is referred to as reconfigurable. We present a fast adaptive convex hull algorithm on a two-dimensional processor array with a reconfigurable bus system (2-D PARBS, for short). Specifically, we show that computing the convex hull of a planar set of n points takes O(log n/log m) time on a 2-D PARBS of size mn × n with 3 ≤ m ≤ n. Our result implies that the convex hull of n points in the plane can be computed in O(1) time on a 2-D PARBS of size n^1.5 × n.

  17. The concave cusp as a determiner of figure-ground.

    PubMed

    Stevens, K A; Brookes, A

    1988-01-01

    The tendency to interpret as figure, relative to background, those regions that are lighter, smaller, and, especially, more convex is well known. Wherever convex opaque objects abut or partially occlude one another in an image, the points of contact between the silhouettes form concave cusps, each indicating the local assignment of figure versus ground across the contour segments. It is proposed that this local geometric feature is a preattentive determiner of figure-ground perception and that it contributes to the previously observed tendency for convexity preference. Evidence is presented that figure-ground assignment can be determined solely on the basis of the concave cusp feature, and that the salience of the cusp derives from local geometry and not from adjacent contour convexity.

  18. Worst-Case Energy Efficiency Maximization in a 5G Massive MIMO-NOMA System.

    PubMed

    Chinnadurai, Sunil; Selvaprabhu, Poongundran; Jeong, Yongchae; Jiang, Xueqin; Lee, Moon Ho

    2017-09-18

    In this paper, we examine the robust beamforming design to tackle the energy efficiency (EE) maximization problem in a 5G massive multiple-input multiple-output (MIMO) non-orthogonal multiple access (NOMA) downlink system with imperfect channel state information (CSI) at the base station. A novel joint user pairing and dynamic power allocation (JUPDPA) algorithm is proposed to minimize inter-user interference and also to enhance fairness between the users. This work assumes imperfect CSI by adding uncertainties to the channel matrices with a worst-case model, i.e., the ellipsoidal uncertainty model (EUM). A fractional non-convex optimization problem is formulated to maximize the EE subject to the transmit power constraints and the minimum rate requirement for the cell-edge user. The designed problem is difficult to solve due to its nonlinear fractional objective function. We first employ the properties of fractional programming to transform the non-convex problem into its equivalent parametric form. Then, an efficient iterative algorithm based on the constrained concave-convex procedure (CCCP) is proposed, which converges to a stationary point of the above problem. Finally, Dinkelbach's algorithm is employed to determine the maximum energy efficiency; a generic sketch of this last step is given below. Comprehensive numerical results illustrate that the proposed scheme attains higher worst-case energy efficiency than the existing NOMA schemes and the conventional orthogonal multiple access (OMA) scheme.
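
    Dinkelbach's algorithm itself is generic: it maximizes a ratio f(x)/g(x), with g > 0, by repeatedly maximizing the parametric objective f(x) - λg(x) and updating λ to the achieved ratio. The sketch below uses a toy one-dimensional objective and a grid-search inner solver as stand-ins for the paper's EE subproblem.

    ```python
    # Generic sketch of Dinkelbach's algorithm for maximizing f(x)/g(x), g > 0.
    # The 1-D objective and grid-search inner solver are illustrative stand-ins.
    import numpy as np

    def dinkelbach(f, g, candidates, tol=1e-9, max_iter=100):
        lam = 0.0
        x = candidates[0]
        for _ in range(max_iter):
            # Inner problem: maximize the parametric objective f(x) - lam * g(x).
            vals = f(candidates) - lam * g(candidates)
            x = candidates[np.argmax(vals)]
            if f(x) - lam * g(x) < tol:   # F(lam) ~ 0  =>  lam is the optimal ratio
                return x, lam
            lam = f(x) / g(x)             # update the ratio parameter
        return x, lam

    xs = np.linspace(0.01, 5.0, 5001)
    x_opt, ratio = dinkelbach(lambda x: np.log1p(x), lambda x: 1.0 + 0.5 * x, xs)
    print(f"x* = {x_opt:.3f}, max ratio = {ratio:.4f}")
    ```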

  19. Worst-Case Energy Efficiency Maximization in a 5G Massive MIMO-NOMA System

    PubMed Central

    Chinnadurai, Sunil; Selvaprabhu, Poongundran; Jeong, Yongchae; Jiang, Xueqin; Lee, Moon Ho

    2017-01-01

    In this paper, we examine the robust beamforming design to tackle the energy efficiency (EE) maximization problem in a 5G massive multiple-input multiple-output (MIMO) non-orthogonal multiple access (NOMA) downlink system with imperfect channel state information (CSI) at the base station. A novel joint user pairing and dynamic power allocation (JUPDPA) algorithm is proposed to minimize inter-user interference and also to enhance fairness between the users. This work assumes imperfect CSI by adding uncertainties to the channel matrices with a worst-case model, i.e., the ellipsoidal uncertainty model (EUM). A fractional non-convex optimization problem is formulated to maximize the EE subject to the transmit power constraints and the minimum rate requirement for the cell-edge user. The designed problem is difficult to solve due to its nonlinear fractional objective function. We first employ the properties of fractional programming to transform the non-convex problem into its equivalent parametric form. Then, an efficient iterative algorithm based on the constrained concave-convex procedure (CCCP) is proposed, which converges to a stationary point of the above problem. Finally, Dinkelbach’s algorithm is employed to determine the maximum energy efficiency. Comprehensive numerical results illustrate that the proposed scheme attains higher worst-case energy efficiency than the existing NOMA schemes and the conventional orthogonal multiple access (OMA) scheme. PMID:28927019

  20. Learning Incoherent Sparse and Low-Rank Patterns from Multiple Tasks

    PubMed Central

    Chen, Jianhui; Liu, Ji; Ye, Jieping

    2013-01-01

    We consider the problem of learning incoherent sparse and low-rank patterns from multiple tasks. Our approach is based on a linear multi-task learning formulation, in which the sparse and low-rank patterns are induced by a cardinality regularization term and a low-rank constraint, respectively. This formulation is non-convex; we convert it into its convex surrogate, which can be routinely solved via semidefinite programming for small-size problems. We propose to employ the general projected gradient scheme to efficiently solve such a convex surrogate; however, in the optimization formulation, the objective function is non-differentiable and the feasible domain is non-trivial. We present the procedures for computing the projected gradient and ensuring the global convergence of the projected gradient scheme. The computation of the projected gradient involves a constrained optimization problem; we show that the optimal solution to such a problem can be obtained via solving an unconstrained optimization subproblem and a Euclidean projection subproblem. We also present two projected gradient algorithms and analyze their rates of convergence in detail. In addition, we illustrate the use of the presented projected gradient algorithms for the proposed multi-task learning formulation using the least squares loss. Experimental results on a collection of real-world data sets demonstrate the effectiveness of the proposed multi-task learning formulation and the efficiency of the proposed projected gradient algorithms. A generic sketch of a projected gradient scheme is given below. PMID:24077658
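
    The projected gradient scheme referred to above alternates a gradient step with a Euclidean projection onto the feasible set. The sketch below is the textbook version with a closed-form projection onto an ℓ2 ball; the paper's projection, which handles its low-rank constraint set, is more involved.

    ```python
    # Textbook projected gradient sketch for min f(x) s.t. x in C, with a
    # closed-form Euclidean projection (here onto an l2 ball for illustration).
    import numpy as np

    def project_l2_ball(x, radius=1.0):
        nrm = np.linalg.norm(x)
        return x if nrm <= radius else x * (radius / nrm)

    def projected_gradient(grad_f, x0, step, project, n_iter=500):
        x = x0
        for _ in range(n_iter):
            x = project(x - step * grad_f(x))   # gradient step, then projection
        return x

    # Least squares restricted to the unit ball.
    rng = np.random.default_rng(1)
    A, b = rng.normal(size=(30, 10)), rng.normal(size=30)
    L = np.linalg.norm(A, 2) ** 2               # Lipschitz constant of the gradient
    x = projected_gradient(lambda x: A.T @ (A @ x - b),
                           np.zeros(10), 1.0 / L, project_l2_ball)
    print(np.linalg.norm(x))                     # <= 1 by construction
    ```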

  1. Learning Incoherent Sparse and Low-Rank Patterns from Multiple Tasks.

    PubMed

    Chen, Jianhui; Liu, Ji; Ye, Jieping

    2012-02-01

    We consider the problem of learning incoherent sparse and low-rank patterns from multiple tasks. Our approach is based on a linear multi-task learning formulation, in which the sparse and low-rank patterns are induced by a cardinality regularization term and a low-rank constraint, respectively. This formulation is non-convex; we convert it into its convex surrogate, which can be routinely solved via semidefinite programming for small-size problems. We propose to employ the general projected gradient scheme to efficiently solve such a convex surrogate; however, in the optimization formulation, the objective function is non-differentiable and the feasible domain is non-trivial. We present the procedures for computing the projected gradient and ensuring the global convergence of the projected gradient scheme. The computation of the projected gradient involves a constrained optimization problem; we show that the optimal solution to such a problem can be obtained via solving an unconstrained optimization subproblem and a Euclidean projection subproblem. We also present two projected gradient algorithms and analyze their rates of convergence in detail. In addition, we illustrate the use of the presented projected gradient algorithms for the proposed multi-task learning formulation using the least squares loss. Experimental results on a collection of real-world data sets demonstrate the effectiveness of the proposed multi-task learning formulation and the efficiency of the proposed projected gradient algorithms.

  2. Polar Duals of Convex Bodies

    DTIC Science & Technology

    1990-01-01

  3. Single lens laser beam shaper

    DOEpatents

    Liu, Chuyu [Newport News, VA; Zhang, Shukui [Yorktown, VA

    2011-10-04

    A single-lens bullet-shaped laser beam shaper capable of redistributing an arbitrary beam profile into any desired output profile, comprising a unitary lens comprising: a) a convex front input surface defining a focal point and a flat output portion at the focal point; and b) a cylindrical core portion having a flat input surface coincident with the flat output portion of the first input portion at the focal point and a convex rear output surface remote from the convex front input surface.

  4. ON THE STRUCTURE OF \\mathcal{H}_{n - 1}-ALMOST EVERYWHERE CONVEX HYPERSURFACES IN \\mathbf{R}^{n + 1}

    NASA Astrophysics Data System (ADS)

    Dmitriev, V. G.

    1982-04-01

    It is proved that a hypersurface f imbedded in \mathbf{R}^{n + 1}, n \geq 2, which is locally convex at all points except for a closed set E with (n - 1)-dimensional Hausdorff measure \mathcal{H}_{n - 1}(E) = 0, and strictly convex near E, is in fact locally convex everywhere. The author also gives various corollaries. In particular, let M be a complete two-dimensional Riemannian manifold of nonnegative curvature K and E \subset M a closed subset for which \mathcal{H}_1(E) = 0. Assume further that there exists a neighborhood U \supset E such that K(x) > 0 for x \in U \setminus E, f \colon M \to \mathbf{R}^3 is such that f\big\vert _{U \setminus E} is an imbedding, and f\big\vert _{M \setminus E} \in C^{1, \alpha}, \alpha > 2/3. Then f(M) is a complete convex surface in \mathbf{R}^3. This result is a generalization of results in the paper reviewed in MR 51 # 11374. Bibliography: 19 titles.

  5. Turbulent boundary layers subjected to multiple curvatures and pressure gradients

    NASA Technical Reports Server (NTRS)

    Bandyopadhyay, Promode R.; Ahmed, Anwar

    1993-01-01

    The effects of abruptly applied cycles of curvatures and pressure gradients on turbulent boundary layers are examined experimentally. Two two-dimensional curved test surfaces are considered: one has a sequence of concave and convex longitudinal surface curvatures and the other has a sequence of convex and concave curvatures. The choice of the curvature sequences was motivated by a desire to study the asymmetric response of turbulent boundary layers to convex and concave curvatures. The relaxation of a boundary layer from the effects of these two opposite sequences has been compared. The effect of the accompanying sequences of pressure gradient has also been examined, but the effect of curvature dominates. The growth of internal layers at the curvature junctions has been studied. Measurements of the Gortler and corner vortex systems have been made. The boundary layer recovering from the sequence of concave to convex curvature has a sustained lower skin friction level than that recovering from the sequence of convex to concave curvature. The amplification and suppression of turbulence due to the curvature sequences have also been studied.

  6. A Fuzzy Approach of the Competition on the Air Transport Market

    NASA Technical Reports Server (NTRS)

    Charfeddine, Souhir; DeColigny, Marc; Camino, Felix Mora; Cosenza, Carlos Alberto Nunes

    2003-01-01

    The aim of this communication is to study, with a new scope, the conditions of equilibrium in an air transport market where two competing airlines operate. Each airline is supposed to adopt a strategy maximizing its profit while its estimate of demand is fuzzy in nature. This leads each company to optimize a program of its proposed services (flight frequency and ticket prices) characterized by some fuzzy parameters. The case of monopoly is taken as a benchmark. Classical convex optimization can be used to solve this decision problem. This approach provides the airline with a new decision tool where uncertainty can be taken into account explicitly. The confrontation of the strategies of the companies, in the case of duopoly, leads to the definition of a fuzzy equilibrium. This concept of fuzzy equilibrium is more general and can be applied to several other domains. The formulation of the optimization problem and the methodological considerations adopted for its resolution are presented in their general theoretical aspect. In the case of air transportation, where the conditions of management of operations are critical, this approach should offer the manager the elements needed to consolidate decisions depending on the circumstances (ordinary or exceptional events) and to be prepared to face all possibilities. Keywords: air transportation, competition equilibrium, convex optimization, fuzzy modeling.

  7. Efficient methods for overlapping group lasso.

    PubMed

    Yuan, Lei; Liu, Jun; Ye, Jieping

    2013-09-01

    The group Lasso is an extension of the Lasso for feature selection on (predefined) nonoverlapping groups of features. The nonoverlapping group structure limits its applicability in practice. There have been several recent attempts to study a more general formulation where groups of features are given, potentially with overlaps between the groups. The resulting optimization is, however, much more challenging to solve due to the group overlaps. In this paper, we consider the efficient optimization of the overlapping group Lasso penalized problem. We reveal several key properties of the proximal operator associated with the overlapping group Lasso, and compute the proximal operator by solving the smooth and convex dual problem, which allows the use of gradient descent type algorithms for the optimization. Our methods and theoretical results are then generalized to tackle the general overlapping group Lasso formulation based on the ℓq norm. We further extend our algorithm to solve a nonconvex overlapping group Lasso formulation based on the capped norm regularization, which reduces the estimation bias introduced by the convex penalty. We have performed empirical evaluations using both a synthetic dataset and the breast cancer gene expression dataset, which consists of 8,141 genes organized into (overlapping) gene sets. Experimental results show that the proposed algorithm is more efficient than existing state-of-the-art algorithms. Results also demonstrate the effectiveness of the nonconvex formulation for overlapping group Lasso. For reference, the closed-form proximal operator of the simpler non-overlapping case is sketched below.
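
    For the non-overlapping case, the proximal operator has the well-known closed form of block soft-thresholding, sketched below; with overlapping groups no such closed form exists, which is why the authors solve a smooth convex dual problem instead.

    ```python
    # Block soft-thresholding: the closed-form proximal operator of the
    # *non-overlapping* group lasso penalty, shown as the baseline that the
    # paper generalizes to overlapping groups.
    import numpy as np

    def prox_group_lasso(v, groups, lam):
        """prox of lam * sum_g ||v_g||_2 for disjoint index groups."""
        x = v.copy()
        for g in groups:
            nrm = np.linalg.norm(v[g])
            x[g] = 0.0 if nrm <= lam else (1.0 - lam / nrm) * v[g]
        return x

    v = np.array([3.0, 4.0, 0.1, -0.2, 1.0])
    groups = [np.array([0, 1]), np.array([2, 3]), np.array([4])]
    print(prox_group_lasso(v, groups, lam=1.0))
    # the first group is shrunk toward 0; small-norm groups are zeroed entirely
    ```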

  8. Some Geometric Inequalities Relating to an Interior Point in Triangle

    ERIC Educational Resources Information Center

    Wu, Yu-Dong; Zhang, Zhi-Hua; Liang, Chun-Lei

    2010-01-01

    In this short note, by using one of Li and Liu's theorems [K.-H. Li, "The solution of CIQ. 39," "Commun. Stud. Inequal." 11(1) (2004), p. 162 (in Chinese)], "s-R-r" method, Cauchy's inequality and the theory of convex function, we solve some geometric inequalities conjectures relating to an interior point in triangle. (Contains 1 figure.)

  9. Random density matrices versus random evolution of open system

    NASA Astrophysics Data System (ADS)

    Pineda, Carlos; Seligman, Thomas H.

    2015-10-01

    We present and compare two families of ensembles of random density matrices. The first, static ensemble, is obtained by foliating an unbiased ensemble of density matrices. As the criterion we use fixed purity, the simplest example of a useful convex function. The second, dynamic ensemble, is inspired by random matrix models for decoherence, where one evolves a separable pure state with a random Hamiltonian until a given value of purity in the central system is achieved. Several families of Hamiltonians, adequate for different physical situations, are studied. We focus on a two-qubit central system and obtain exact expressions for the static case. The ensemble displays a peak around Werner-like states, modulated by nodes on the degeneracies of the density matrices. For moderate and strong interactions, good agreement between the static and the dynamic ensembles is found. Even in a model where one qubit does not interact with the environment, excellent agreement is found, but only if there is maximal entanglement with the interacting one. The discussion starts by recalling similar considerations for scattering theory. At the end, we comment on the reach of the results for other convex functions of the density matrix, and exemplify the situation with the von Neumann entropy.

  10. Combined-probability space and certainty or uncertainty relations for a finite-level quantum system

    NASA Astrophysics Data System (ADS)

    Sehrawat, Arun

    2017-08-01

    The Born rule provides a probability vector (distribution) with a quantum state for a measurement setting. For two settings, we have a pair of vectors from the same quantum state. Each pair forms a combined-probability vector that obeys certain quantum constraints, which are triangle inequalities in our case. Such a restricted set of combined vectors, called the combined-probability space, is presented here for a d-level quantum system (qudit). The combined space is a compact convex subset of a Euclidean space, and all its extreme points come from a family of parametric curves. Considering a suitable concave function on the combined space to estimate the uncertainty, we deliver an uncertainty relation by finding its global minimum on the curves for a qudit. If one chooses an appropriate concave (or convex) function, then there is no need to search for the absolute minimum (maximum) over the whole space; it will be on the parametric curves. So these curves are quite useful for establishing an uncertainty (or a certainty) relation for a general pair of settings. We also demonstrate that many known tight certainty or uncertainty relations for a qubit can be obtained with the triangle inequalities.

  11. Metric adjusted skew information

    PubMed Central

    Hansen, Frank

    2008-01-01

    We extend the concept of Wigner–Yanase–Dyson skew information to something we call “metric adjusted skew information” (of a state with respect to a conserved observable). This “skew information” is intended to be a non-negative quantity bounded by the variance (of an observable in a state) that vanishes for observables commuting with the state. We show that the skew information is a convex function on the manifold of states. It also satisfies other requirements, proposed by Wigner and Yanase, for an effective measure-of-information content of a state relative to a conserved observable. We establish a connection between the geometrical formulation of quantum statistics as proposed by Chentsov and Morozova and measures of quantum information as introduced by Wigner and Yanase and extended in this article. We show that the set of normalized Morozova–Chentsov functions describing the possible quantum statistics is a Bauer simplex and determine its extreme points. We determine a particularly simple skew information, the “λ-skew information,” parametrized by a λ ∈ (0, 1], and show that the convex cone this family generates coincides with the set of all metric adjusted skew informations. PMID:18635683

  12. The equation of state of Song and Mason applied to fluorine

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Eslami, H.; Boushehri, A.

    1999-03-01

    An analytical equation of state is applied to calculate the compressed and saturation thermodynamic properties of fluorine. The equation of state is that of Song and Mason. It is based on a statistical mechanical perturbation theory of hard convex bodies and is a fifth-order polynomial in the density. There exist three temperature-dependent parameters: the second virial coefficient, an effective molecular volume, and a scaling factor for the average contact pair distribution function of hard convex bodies. The temperature-dependent parameters can be calculated if the intermolecular pair potential is known. However, the equation is usable with much less input than the full intermolecular potential, since the scaling factor and effective volume are nearly universal functions when expressed in suitable reduced units. The equation of state has been applied to calculate thermodynamic parameters including the critical constants, the vapor pressure curve, the compressibility factor, the fugacity coefficient, the enthalpy, the entropy, the heat capacity at constant pressure, the ratio of heat capacities, the Joule-Thomson coefficient, the Joule-Thomson inversion curve, and the speed of sound for fluorine. The agreement with experiment is good.

  13. Non-convex dissipation potentials in multiscale non-equilibrium thermodynamics

    NASA Astrophysics Data System (ADS)

    Janečka, Adam; Pavelka, Michal

    2018-04-01

    Reformulating a constitutive relation in terms of gradient dynamics (that is, as the derivative of a dissipation potential) brings additional information on stability, metastability and instability of the dynamics with respect to perturbations of the constitutive relation, called CR-stability. CR-instability is connected to the loss of convexity of the dissipation potential, which makes the Legendre-conjugate dissipation potential multivalued and causes dissipative phase transitions that are not induced by non-convexity of the free energy, but by non-convexity of the dissipation potential. CR-stability of the constitutive relation with respect to perturbations is then manifested by constructing evolution equations for the perturbations in a thermodynamically sound way (CR-extension). As a result, interesting experimental observations of the behavior of complex fluids under shear flow and of the supercritical boiling curve can be explained.

  14. Modified surface testing method for large convex aspheric surfaces based on diffraction optics.

    PubMed

    Zhang, Haidong; Wang, Xiaokun; Xue, Donglin; Zhang, Xuejun

    2017-12-01

    Large convex aspheric optical elements have been widely applied in advanced optical systems, and they present a challenging metrology problem. Conventional testing methods can no longer satisfy the demand as the definition of "large" continues to grow. A modified method is proposed in this paper, which utilizes a relatively small computer-generated hologram and an illumination lens to measure large convex aspherics with good feasibility. Two example systems are designed to demonstrate the applicability, and the sensitivity of this configuration is analyzed, which shows that the accuracy of the configuration can be better than 6 nm with careful alignment and calibration of the illumination lens in advance. Design examples and analysis show that this configuration is applicable to measuring large convex aspheric surfaces.

  15. Influence of implant rod curvature on sagittal correction of scoliosis deformity.

    PubMed

    Salmingo, Remel Alingalan; Tadano, Shigeru; Abe, Yuichiro; Ito, Manabu

    2014-08-01

    Deformation of in vivo-implanted rods could alter the scoliosis sagittal correction. To our knowledge, no previous authors have investigated the influence of implanted-rod deformation on the sagittal deformity correction during scoliosis surgery. To analyze the changes of the implant rod's angle of curvature during surgery and establish its influence on sagittal correction of scoliosis deformity. A retrospective analysis of the preoperative and postoperative implant rod geometry and angle of curvature was conducted. Twenty adolescent idiopathic scoliosis patients underwent surgery. Average age at the time of operation was 14 years. The preoperative and postoperative implant rod angle of curvature expressed in degrees was obtained for each patient. Two implant rods were attached to the concave and convex side of the spinal deformity. The preoperative implant rod geometry was measured before surgical implantation. The postoperative implant rod geometry after surgery was measured by computed tomography. The implant rod angle of curvature at the sagittal plane was obtained from the implant rod geometry. The angle of curvature between the implant rod extreme ends was measured before implantation and after surgery. The sagittal curvature between the corresponding spinal levels of healthy adolescents obtained by previous studies was compared with the implant rod angle of curvature to evaluate the sagittal curve correction. The difference between the postoperative implant rod angle of curvature and normal spine sagittal curvature of the corresponding instrumented level was used to evaluate over or under correction of the sagittal deformity. The implant rods at the concave side of deformity of all patients were significantly deformed after surgery. The average degree of rod deformation Δθ at the concave and convex sides was 15.8° and 1.6°, respectively. The average preoperative and postoperative implant rod angle of curvature at the concave side was 33.6° and 17.8°, respectively. The average preoperative and postoperative implant rod angle of curvature at the convex side was 25.5° and 23.9°, respectively. A significant relationship was found between the degree of rod deformation and preoperative implant rod angle of curvature (r=0.60, p<.005). The implant rods at the convex side of all patients did not have significant deformation. The results indicate that the postoperative sagittal outcome could be predicted from the initial rod shape. Changes in implant rod angle of curvature may lead to over- or undercorrection of the sagittal curve. Rod deformation at the concave side suggests that corrective forces acting on that side are greater than the convex side.

  16. CAD-based Automatic Modeling Method for Geant4 geometry model Through MCAM

    NASA Astrophysics Data System (ADS)

    Wang, Dong; Nie, Fanzhi; Wang, Guozhong; Long, Pengcheng; LV, Zhongliang

    2014-06-01

    Geant4 is a widely used Monte Carlo transport simulation package. Before calculating with Geant4, the calculation model needs to be established, described either in the Geometry Description Markup Language (GDML) or in C++. However, it is time-consuming and error-prone to describe models manually in GDML. Automatic modeling methods have been developed recently, but problems remain in most current modeling programs; in particular, some are not accurate or are tied to a specific CAD format. To convert complex CAD geometry models into GDML geometry models accurately, a Geant4 Computer Aided Design (CAD) based automatic modeling method was developed. The essence of this method is translating between CAD models represented by boundary representation (B-REP) and GDML models represented by constructive solid geometry (CSG). First, the CAD model is decomposed into several simple solids, each having only one closed shell. Each simple solid is then decomposed into a set of convex shells, and the corresponding GDML convex basic solids are generated from the boundary surfaces obtained from the topological characteristics of each convex shell. After the generation of these solids, the GDML model is completed with a series of Boolean operations. This method was adopted in the CAD/Image-based Automatic Modeling Program for Neutronics & Radiation Transport (MCAM) and tested with several models, including the examples in the Geant4 installation package. The results showed that this method can convert standard CAD models accurately and can be used for Geant4 automatic modeling.

  17. Relationship between spontaneous expiratory flow-volume curve pattern and air-flow obstruction in elderly COPD patients.

    PubMed

    Nozoe, Masafumi; Mase, Kyoshi; Murakami, Shigefumi; Okada, Makoto; Ogino, Tomoyuki; Matsushita, Kazuhiro; Takashima, Sachie; Yamamoto, Noriyasu; Fukuda, Yoshihiro; Domen, Kazuhisa

    2013-10-01

    Assessment of the degree of air-flow obstruction is important for determining the treatment strategy in COPD patients. However, in some elderly COPD patients, measuring FVC is impossible because of cognitive dysfunction or severe dyspnea. In such patients a simple test of airways obstruction requiring only a short run of tidal breathing would be useful. We studied whether the spontaneous expiratory flow-volume (SEFV) curve pattern reflects the degree of air-flow obstruction in elderly COPD patients. In 34 elderly subjects (mean ± SD age 80 ± 7 y) with stable COPD (percent-of-predicted FEV(1) 39.0 ± 18.5%), and 12 age-matched healthy subjects, we measured FVC and recorded flow-volume curves during quiet breathing. We studied the SEFV curve patterns (concavity/convexity), spirometry results, breathing patterns, and demographics. The SEFV curve concavity/convexity prediction accuracy was examined by calculating the receiver operating characteristic curves, cutoff values, area under the curve, sensitivity, and specificity. Fourteen subjects with COPD had a concave SEFV curve. All the healthy subjects had convex SEFV curves. The COPD subjects who had concave SEFV curves often had very severe airway obstruction. The percent-of-predicted FEV(1)% (32.4%) was the most powerful SEFV curve concavity predictor (area under the curve 0.92, 95% CI 0.83-1.00), and had the highest sensitivity (0.93) and specificity (0.88). Concavity of the SEFV curve obtained during tidal breathing may be a useful test for determining the presence of very severe obstruction in elderly patients unable to perform a satisfactory FVC maneuver.

  18. On The Behavior of Subgradient Projections Methods for Convex Feasibility Problems in Euclidean Spaces

    PubMed Central

    Butnariu, Dan; Censor, Yair; Gurfil, Pini; Hadar, Ethan

    2010-01-01

    We study some methods of subgradient projections for solving a convex feasibility problem with general (not necessarily hyperplanes or half-spaces) convex sets in the inconsistent case and propose a strategy that controls the relaxation parameters in a specific self-adapting manner. This strategy leaves enough user-flexibility but gives a mathematical guarantee for the algorithm’s behavior in the inconsistent case. We present numerical results of computational experiments that illustrate the computational advantage of the new method. PMID:20182556
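
    A minimal cyclic subgradient projection sweep for a convex feasibility problem {x : c_i(x) ≤ 0, i = 1..m} looks as follows; the fixed relaxation parameter lam here is precisely the quantity whose self-adapting control the paper studies. The two-disk example is an illustrative stand-in.

    ```python
    # Cyclic subgradient projections for a convex feasibility problem.
    # The fixed relaxation parameter lam stands in for the self-adapting
    # control strategy studied in the paper.
    import numpy as np

    def subgradient_projection(x, constraints, lam=1.0, sweeps=200):
        for _ in range(sweeps):
            for c, grad in constraints:
                v = c(x)
                if v > 0:                       # only project when violated
                    g = grad(x)
                    x = x - lam * v / (g @ g) * g
        return x

    # Two disks: ||x - a|| <= 1, written as c(x) = ||x - a||^2 - 1 <= 0.
    def disk(center):
        a = np.asarray(center, dtype=float)
        return (lambda x: (x - a) @ (x - a) - 1.0, lambda x: 2.0 * (x - a))

    constraints = [disk([0.0, 0.0]), disk([1.5, 0.0])]
    x = subgradient_projection(np.array([5.0, 5.0]), constraints)
    print(x, [c(x) <= 1e-6 for c, _ in constraints])   # feasible for both disks
    ```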

  19. On The Behavior of Subgradient Projections Methods for Convex Feasibility Problems in Euclidean Spaces.

    PubMed

    Butnariu, Dan; Censor, Yair; Gurfil, Pini; Hadar, Ethan

    2008-07-03

    We study some methods of subgradient projections for solving a convex feasibility problem with general (not necessarily hyperplanes or half-spaces) convex sets in the inconsistent case and propose a strategy that controls the relaxation parameters in a specific self-adapting manner. This strategy leaves enough user-flexibility but gives a mathematical guarantee for the algorithm's behavior in the inconsistent case. We present numerical results of computational experiments that illustrate the computational advantage of the new method.

  20. A Perron-Frobenius Type of Theorem for Quantum Operations

    NASA Astrophysics Data System (ADS)

    Lagro, Matthew; Yang, Wei-Shih; Xiong, Sheng

    2017-10-01

    We define a special class of quantum operations we call Markovian and show that it has the same spectral properties as a corresponding Markov chain. We then consider a convex combination of a quantum operation and a Markovian quantum operation and show that under a norm condition its spectrum has the same properties as in the conclusion of the Perron-Frobenius theorem if its Markovian part does. Moreover, under a compatibility condition of the two operations, we show that its limiting distribution is the same as the corresponding Markov chain. We apply our general results to partially decoherent quantum random walks with decoherence strength 0 ≤ p ≤ 1. We obtain a quantum ergodic theorem for partially decoherent processes. We show that for 0 < p ≤ 1, the limiting distribution of a partially decoherent quantum random walk is the same as the limiting distribution for the classical random walk.

  1. Vehicle trajectory linearisation to enable efficient optimisation of the constant speed racing line

    NASA Astrophysics Data System (ADS)

    Timings, Julian P.; Cole, David J.

    2012-06-01

    A driver model is presented capable of optimising the trajectory of a simple dynamic nonlinear vehicle, at constant forward speed, so that progression along a predefined track is maximised as a function of time. In doing so, the model is able to continually operate a vehicle at its lateral-handling limit, maximising vehicle performance. The technique used forms part of the solution to the motor racing objective of minimising lap time. A new approach to formulating the minimum lap time problem is motivated by the need for a more computationally efficient and robust tool-set for understanding on-the-limit driving behaviour. This has been achieved through set-point-dependent linearisation of the vehicle model and coupling of the vehicle-track system using an intrinsic coordinate description. Through this, the geometric vehicle trajectory has been linearised relative to the track reference, leading to a new path optimisation algorithm that can be posed as a computationally efficient convex quadratic programming problem.

  2. Passive autonomous infrared sensor technology

    NASA Astrophysics Data System (ADS)

    Sadjadi, Firooz

    1987-10-01

    This study was conducted in response to the DoD's need to establish an understanding of algorithm modules for passive infrared sensors and seekers, and to establish a standardized, systematic procedure for applying this understanding to DoD applications. We quantified the performance of Honeywell's Background Adaptive Convexity Operator Region Extractor (BACORE) detection and segmentation modules as functions of a set of image metrics, for both single-frame and multiframe processing. We established an understanding of the behavior of BACORE's internal parameters. We characterized several sets of stationary and sequential imagery and extracted TIR squared, TBIR squared, ESR, and range for each target. We generated a set of performance models for multi-frame BACORE processing that could be used to predict the behavior of BACORE in image-metric space. A similar study was conducted for another of Honeywell's segmentors, namely the Texture Boundary Locator (TBL), and its performance was quantified. Finally, a comparison of TBL and BACORE on the same database and the same number of frames was made.

  3. Long-range depth profiling of camouflaged targets using single-photon detection

    NASA Astrophysics Data System (ADS)

    Tobin, Rachael; Halimi, Abderrahim; McCarthy, Aongus; Ren, Ximing; McEwan, Kenneth J.; McLaughlin, Stephen; Buller, Gerald S.

    2018-03-01

    We investigate the reconstruction of depth and intensity profiles from data acquired using a custom-designed time-of-flight scanning transceiver based on the time-correlated single-photon counting technique. The system had an operational wavelength of 1550 nm and used a Peltier-cooled InGaAs/InP single-photon avalanche diode detector. Measurements were made of human figures, in plain view and obscured by camouflage netting, from a stand-off distance of 230 m in daylight using only submilliwatt average optical powers. These measurements were analyzed using a pixelwise cross correlation approach and compared to analysis using a bespoke algorithm designed for the restoration of multilayered three-dimensional light detection and ranging images. This algorithm is based on the optimization of a convex cost function composed of a data fidelity term and regularization terms, and the results obtained show that it achieves significant improvements in image quality for multidepth scenarios and for reduced acquisition times.

  4. Maximum Entropy Methods as the Bridge Between Microscopic and Macroscopic Theory

    NASA Astrophysics Data System (ADS)

    Taylor, Jamie M.

    2016-09-01

    This paper is concerned with an investigation into a function of macroscopic variables known as the singular potential, building on previous work by Ball and Majumdar. The singular potential is a function of the admissible statistical averages of probability distributions on a state space, defined so that it corresponds to the maximum possible entropy given known observed statistical averages, although non-classical entropy-like objective functions will also be considered. First the set of admissible moments must be established, and under the conditions presented in this work the set is open, bounded and convex allowing a description in terms of supporting hyperplanes, which provides estimates on the development of singularities for related probability distributions. Under appropriate conditions it is shown that the singular potential is strictly convex, as differentiable as the microscopic entropy, and blows up uniformly as the macroscopic variable tends to the boundary of the set of admissible moments. Applications of the singular potential are then discussed, and particular consideration will be given to certain free-energy functionals typical in mean-field theory, demonstrating an equivalence between certain microscopic and macroscopic free-energy functionals. This allows statements about L^1-local minimisers of Onsager's free energy to be obtained which cannot be given by two-sided variations, and overcomes the need to ensure local minimisers are bounded away from zero and +∞ before taking L^∞ variations. The analysis also permits the definition of a dual order parameter for which Onsager's free energy allows an explicit representation. Also, the difficulties in approximating the singular potential by everywhere defined functions, in particular by polynomial functions, are addressed, with examples demonstrating the failure of the Taylor approximation to preserve relevant shape properties of the singular potential.

  5. Phase diagram of two-dimensional hard rods from fundamental mixed measure density functional theory

    NASA Astrophysics Data System (ADS)

    Wittmann, René; Sitta, Christoph E.; Smallenburg, Frank; Löwen, Hartmut

    2017-10-01

    A density functional theory for the bulk phase diagram of two-dimensional orientable hard rods is proposed and tested against Monte Carlo computer simulation data. In detail, an explicit density functional is derived from fundamental mixed measure theory and freely minimized numerically for hard discorectangles. The phase diagram, which involves stable isotropic, nematic, smectic, and crystalline phases, is obtained and shows good agreement with the simulation data. Our functional is valid for a multicomponent mixture of hard particles with arbitrary convex shapes and provides a reliable starting point to explore various inhomogeneous situations of two-dimensional hard rods and their Brownian dynamics.

  6. Random search optimization based on genetic algorithm and discriminant function

    NASA Technical Reports Server (NTRS)

    Kiciman, M. O.; Akgul, M.; Erarslanoglu, G.

    1990-01-01

    The general problem of optimization with arbitrary merit and constraint functions, which could be convex, concave, monotonic, or non-monotonic, is treated using stochastic methods. To improve the efficiency of the random search methods, a genetic algorithm is utilized for the search phase and a discriminant function for the constraint-control phase. The validity of the technique is demonstrated by comparing the results to published test-problem results. Numerical experimentation indicated that, for cases where a quick near-optimum solution is desired, a general, user-friendly optimization code can be developed without serious penalties in either total computer time or accuracy.

  7. The Translated Dowling Polynomials and Numbers.

    PubMed

    Mangontarum, Mahid M; Macodi-Ringia, Amila P; Abdulcarim, Normalah S

    2014-01-01

    More properties of the translated Whitney numbers of the second kind, such as the horizontal generating function, explicit formula, and exponential generating function, are proposed. Using the translated Whitney numbers of the second kind, we define the translated Dowling polynomials and numbers. Basic properties, such as exponential generating functions and explicit formulas for the translated Dowling polynomials and numbers, are obtained. Convexity, integral representation, and other interesting identities are also investigated and presented. We show that the properties obtained are generalizations of some known results involving the classical Bell polynomials and numbers. Lastly, we establish the Hankel transform of the translated Dowling numbers.

  8. Efficient 3D multi-region prostate MRI segmentation using dual optimization.

    PubMed

    Qiu, Wu; Yuan, Jing; Ukwatta, Eranga; Sun, Yue; Rajchl, Martin; Fenster, Aaron

    2013-01-01

    Efficient and accurate extraction of the prostate, in particular its clinically meaningful sub-regions from 3D MR images, is of great interest in image-guided prostate interventions and diagnosis of prostate cancer. In this work, we propose a novel multi-region segmentation approach to simultaneously locating the boundaries of the prostate and its two major sub-regions: the central gland and the peripheral zone. The proposed method utilizes prior knowledge of spatial region consistency and employs a customized prostate appearance model to simultaneously segment multiple clinically meaningful regions. We solve the resulting challenging combinatorial optimization problem by means of convex relaxation, for which we introduce a novel spatially continuous flow-maximization model and demonstrate its duality to the investigated convex relaxed optimization problem with the region-consistency constraint. Moreover, the proposed continuous max-flow model naturally leads to a new and efficient continuous max-flow based algorithm, which enjoys great numerical advantages and can be readily implemented on GPUs. Experiments using 15 T2-weighted 3D prostate MR images, including assessments of inter- and intra-operator variability, demonstrate the promising performance of the proposed approach.

  9. Real-Time Generation of the Footprints both on Floor and Ground

    NASA Astrophysics Data System (ADS)

    Hirano, Yousuke; Tanaka, Toshimitsu; Sagawa, Yuji

    This paper presents a real-time method for generating varied footprints that reflect the state of walking, extended to cover both hard floors and soft ground. Results of a previous method were not very realistic, because it places the same simple footprint along the motion path. Our method instead runs filters on the original footprint pattern on the GPU, and then grades the intensity of the pattern in two directions in order to create partially faded footprints; the parameters of the filter and the gradation vary with movement speed and direction. The pattern is mapped onto a polygon, which is rotated inward or outward if the walker is pigeon-toed or bandy-legged, respectively, and finally placed on the floor. Footprints on soft ground consist of the concavities and convexities produced by walking, so the original ground footprint pattern is defined as a height map. The height map is modified using the filter and gradation operations developed for floor footprints, and is then converted to a bump map for fast display of the footprint relief.

  10. Improved flight-simulator viewing lens

    NASA Technical Reports Server (NTRS)

    Kahlbaum, W. M.

    1979-01-01

    Triplet lens system uses two acrylic plastic double-convex lenses and one polystyrene plastic single-convex lens to reduce chromatic distortion and lateral aberration, especially at large field angles within in-line systems of flight simulators.

  11. Interface Shape Control Using Localized Heating during Bridgman Growth

    NASA Technical Reports Server (NTRS)

    Volz, M. P.; Mazuruk, K.; Aggarwal, M. D.; Croll, A.

    2008-01-01

    Numerical calculations were performed to assess the effect of localized radial heating on the melt-crystal interface shape during vertical Bridgman growth. System parameters examined include the ampoule, melt and crystal thermal conductivities, the magnitude and width of localized heating, and the latent heat of crystallization. Concave interface shapes, typical of semiconductor systems, could be flattened or made convex with localized heating. Although localized heating caused shallower thermal gradients ahead of the interface, the magnitude of the localized heating required for convexity was less than that which resulted in a thermal inversion ahead of the interface. A convex interface shape was most readily achieved with ampoules of lower thermal conductivity. Increasing melt convection tended to flatten the interface, but the amount of radial heating required to achieve a convex interface was essentially independent of the convection intensity.

  12. Stable donutlike vortex beam generation from lasers with controlled Ince-Gaussian modes

    NASA Astrophysics Data System (ADS)

    Chu, Shu-Chun; Otsuka, Kenju

    2007-11-01

    This study proposes a three-lens configuration for generating a stable donutlike vortex laser beam with controlled Ince-Gaussian mode (IGM) operation in laser-diode (LD)-pumped solid-state lasers. Simply controlling the lateral off-axis position of the pump beam's focus on the laser crystal generates the desired donutlike vortex beam from the proposed simple and easily fabricated three-lens configuration: an astigmatic mode converter assembled into one body with a concave-convex laser cavity.

  13. Polyhedral sweeping processes with unbounded nonconvex-valued perturbation

    NASA Astrophysics Data System (ADS)

    Tolstonogov, A. A.

    2017-12-01

    A polyhedral sweeping process with a multivalued perturbation whose values are nonconvex unbounded sets is studied in a separable Hilbert space. Polyhedral sweeping processes do not satisfy the traditional assumptions used to prove existence theorems for convex sweeping processes. We consider the polyhedral sweeping process as an evolution inclusion with time-dependent subdifferential operators. The widely used assumption of Lipschitz continuity for the multivalued perturbation term is replaced by the weaker notion of (ρ-H)-Lipschitz continuity. The existence of solutions is proved for this sweeping process.

  14. The formal de Rham complex

    NASA Astrophysics Data System (ADS)

    Zharinov, V. V.

    2013-02-01

    We propose a formal construction generalizing the classic de Rham complex to a wide class of models in mathematical physics and analysis. The presentation is divided into a sequence of definitions and elementary, easily verified statements; proofs are therefore given only in the key case. Linear operations are everywhere performed over a fixed number field 𝔽 = ℝ, ℂ. All linear spaces, algebras, and modules, although not stipulated explicitly, are by definition or by construction endowed with natural locally convex topologies, and their morphisms are continuous.

  15. Another short and elementary proof of strong subadditivity of quantum entropy

    NASA Astrophysics Data System (ADS)

    Ruskai, Mary Beth

    2007-08-01

    A short and elementary proof of the joint convexity of relative entropy is presented, using nothing beyond linear algebra. The key ingredients are an easily verified integral representation and the strategy used to prove the Cauchy-Schwarz inequality in elementary courses. Several consequences are proved in a way which allows an elementary proof of strong subadditivity in a few more lines. Some expository material on Schwarz inequalities for operators and the Holevo bound for partial measurements is also included.
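
    For reference, the two statements at issue can be written out explicitly; these are the standard formulations rather than anything specific to the paper:

        S(\rho\,\|\,\sigma) = \operatorname{Tr}\,\rho(\log\rho - \log\sigma), \qquad
        S\Big(\textstyle\sum_k p_k\rho_k \,\Big\|\, \sum_k p_k\sigma_k\Big)
        \le \sum_k p_k\, S(\rho_k\,\|\,\sigma_k),

        \text{strong subadditivity:}\qquad
        S(\rho_{ABC}) + S(\rho_{B}) \le S(\rho_{AB}) + S(\rho_{BC}).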

  16. Wood industrial application for quality control using image processing

    NASA Astrophysics Data System (ADS)

    Ferreira, M. J. O.; Neves, J. A. C.

    1994-11-01

    This paper describes an application of image processing for the furniture industry. As input data, it uses images acquired directly from wood planks on which defects were previously marked by an operator. A set of image processing algorithms separates and codes each defect and computes a polygonal approximation of the line representing it. For this purpose we developed a pattern classification algorithm and a new technique for segmenting defects by carving the convex hull of the binary shape representing each isolated defect.

  17. Higher order solution of the Euler equations on unstructured grids using quadratic reconstruction

    NASA Technical Reports Server (NTRS)

    Barth, Timothy J.; Frederickson, Paul O.

    1990-01-01

    High order accurate finite-volume schemes for solving the Euler equations of gasdynamics are developed. Central to the development of these methods are the construction of a k-exact reconstruction operator given cell-averaged quantities and the use of high order flux quadrature formulas. General polygonal control volumes (with curved boundary edges) are considered. The formulations presented make no explicit assumption as to complexity or convexity of control volumes. Numerical examples are presented for Ringleb flow to validate the methodology.

  18. Integrating UniTree with the data migration API

    NASA Technical Reports Server (NTRS)

    Schrodel, David G.

    1994-01-01

    The Data Migration Application Programming Interface (DMAPI) has the potential to allow developers of open systems Hierarchical Storage Management (HSM) products to virtualize native file systems without the requirement to make changes to the underlying operating system. This paper describes the advantages of virtualizing native file systems in hierarchical storage management systems, gives a high-level overview of the DMAPI and the goals of the interface, and discusses the integration of the Convex UniTree+HSM with the DMAPI, along with some of the benefits derived in the resulting product.

  19. Preconditioning 2D Integer Data for Fast Convex Hull Computations.

    PubMed

    Cadenas, José Oswaldo; Megson, Graham M; Luengo Hendriks, Cris L

    2016-01-01

    In order to accelerate computing the convex hull on a set of n points, a heuristic procedure is often applied to reduce the number of points to a set of s points, s ≤ n, which also contains the same hull. We present an algorithm to precondition 2D data with integer coordinates bounded by a box of size p × q before building a 2D convex hull, with three distinct advantages. First, we prove that under the condition min(p, q) ≤ n the algorithm executes in time within O(n); second, no explicit sorting of data is required; and third, the reduced set of s points forms a simple polygonal chain and thus can be directly pipelined into an O(n) time convex hull algorithm. This paper empirically evaluates and quantifies the speed up gained by preconditioning a set of points by a method based on the proposed algorithm before using common convex hull algorithms to build the final hull. A speedup factor of at least four is consistently found from experiments on various datasets when the condition min(p, q) ≤ n holds; the smaller the ratio min(p, q)/n is in the dataset, the greater the speedup factor achieved.
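
    One concrete preconditioning pass in this spirit is sketched below, under the assumption that keeping the two y-extremes of every x-column is the kind of reduction intended (the paper's exact procedure may differ). It runs in a single scan and preserves the hull, since a hull vertex must be an extreme point of its own column:

        import numpy as np

        def precondition(points):
            """Keep (x, y_min) and (x, y_max) for each occupied x-column."""
            cols = {}
            for x, y in points:
                lo, hi = cols.get(x, (y, y))
                cols[x] = (min(lo, y), max(hi, y))
            out = []
            for x in sorted(cols):  # a counting sort over 0..p-1 keeps this O(n + p)
                lo, hi = cols[x]
                out.append((x, lo))
                if hi != lo:
                    out.append((x, hi))
            return np.array(out)

        pts = np.random.randint(0, 100, size=(10000, 2))
        print(len(precondition(pts)))  # at most 2 * 100 candidates for the hull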

  20. Shear thickening and jamming in suspensions of different particle shapes

    NASA Astrophysics Data System (ADS)

    Brown, Eric; Zhang, Hanjun; Forman, Nicole; Betts, Douglas; Desimone, Joseph; Maynor, Benjamin; Jaeger, Heinrich

    2012-02-01

    We investigated the role of particle shape in shear thickening and jamming in densely packed suspensions. Various particle shapes were fabricated, including rods of different aspect ratios and non-convex hooked rods. A rheometer was used to measure shear stress vs. shear rate over a wide range of packing fractions for each shape. Each suspension exhibits qualitatively similar Discontinuous Shear Thickening, in which the logarithmic slope of the stress vs. shear rate curve has the same scaling for each convex shape and diverges at a critical packing fraction φc. The value of φc varies with particle shape and coincides with the onset of a yield stress, i.e. the jamming transition. This suggests that the jamming transition controls shear thickening, and that the only effect of particle shape on the steady-state bulk rheology of convex particles is a shift of φc. Intriguingly, viscosity curves for non-convex particles do not collapse onto the same set as convex particles, showing strong shear thickening over a wider range of packing fractions. Qualitative shape dependence was found in steady-state rheology only when the system was confined to small gaps, where large-aspect-ratio particles are forced to order.

  1. Nonexistence of stable F-stationary maps of a functional related to pullback metrics.

    PubMed

    Li, Jing; Liu, Fang; Zhao, Peibiao

    2017-01-01

    Let [Formula: see text] be a compact convex hypersurface in [Formula: see text]. In this paper, we prove that if the principal curvatures [Formula: see text] of [Formula: see text] satisfy [Formula: see text] and [Formula: see text], then there exists no nonconstant stable F-stationary map between M and a compact Riemannian manifold when (6) or (7) holds.

  2. A general decay result of a viscoelastic equation with past history and boundary feedback

    NASA Astrophysics Data System (ADS)

    Messaoudi, Salim A.; Al-Gharabli, Mohammad M.

    2015-08-01

    In this paper, we consider a viscoelastic equation with a nonlinear feedback localized on a part of the boundary and in the presence of infinite memory term. In the domain as well as on a part of the boundary, we use the multiplier method and some properties of the convex functions to prove an explicit and general decay result.

  3. Design of see-through near-eye display for presbyopia.

    PubMed

    Wu, Yishi; Chen, Chao Ping; Zhou, Lei; Li, Yang; Yu, Bing; Jin, Huayi

    2017-04-17

    We propose a compact design of see-through near-eye display that is dedicated to presbyopia. Our solution is characterized by a plano-convex waveguide, which is essentially an integration of a corrective lens and two volume holograms. Its design rules are set forth in detail, followed by the results and discussion regarding the diffraction efficiency, field of view, modulation transfer function, distortion, and simulated imaging.

  4. Convex Curved Crystal Spectrograph for Pulsed Plasma Sources.

    DTIC Science & Technology

    The geometry of a convex curved crystal spectrograph as applied to pulsed plasma sources is presented. Also presented are data from the dense plasma focus, with particular emphasis on the absolute intensity of line radiation.

  5. Optimal boundary regularity for a singular Monge-Ampère equation

    NASA Astrophysics Data System (ADS)

    Jian, Huaiyu; Li, You

    2018-06-01

    In this paper we study the optimal global regularity for a singular Monge-Ampère type equation which arises from a few geometric problems. We find that the global regularity does not depend on the smoothness of the domain, but it does depend on the convexity of the domain. We introduce the notion of (a, η) type to describe this convexity. As a result, we show that the more convex the domain is, the better the regularity of the solution; in particular, the regularity is best near angular points.

  6. Compliant tactile sensor that delivers a force vector

    NASA Technical Reports Server (NTRS)

    Torres-Jara, Eduardo (Inventor)

    2010-01-01

    Tactile Sensor. The sensor includes a compliant convex surface disposed above a sensor array, the sensor array adapted to respond to deformation of the convex surface to generate a signal related to an applied force vector. The applied force vector has three components that establish the direction and magnitude of the applied force. The compliant convex surface defines a dome with a hollow interior and has a linear relation between displacement and load; a magnet disposed substantially at the center of the dome sits above a sensor array that responds to magnetic field intensity.

  7. The Compressible Stokes Flows with No-Slip Boundary Condition on Non-Convex Polygons

    NASA Astrophysics Data System (ADS)

    Kweon, Jae Ryong

    2017-03-01

    In this paper we study the compressible Stokes equations with no-slip boundary condition on non-convex polygons and show a best regularity result that the solution can have without subtracting corner singularities. This is obtained by a suitable Helmholtz decomposition u = w + ∇φ_R with div w = 0 and a potential φ_R. Here w is the solution of the incompressible Stokes problem, and φ_R is defined by subtracting from the solution of the Neumann problem the leading two corner singularities at the non-convex vertices.

  8. Nonconvex model predictive control for commercial refrigeration

    NASA Astrophysics Data System (ADS)

    Gybel Hovgaard, Tobias; Boyd, Stephen; Larsen, Lars F. S.; Bagterp Jørgensen, John

    2013-08-01

    We consider the control of a commercial multi-zone refrigeration system that consists of several cooling units sharing a common compressor and is used to cool multiple areas or rooms. In each time period we choose the cooling capacity of each unit and a common evaporation temperature. The goal is to minimise the total energy cost, using real-time electricity prices, while obeying temperature constraints on the zones. We propose a variation on model predictive control to achieve this goal. When the right variables are used, the dynamics of the system are linear and the constraints are convex; the cost function, however, is nonconvex due to the temperature dependence of thermodynamic efficiency. To handle this nonconvexity we propose a sequential convex optimisation method, which typically converges in five or fewer iterations. We employ a fast convex quadratic programming solver to carry out the iterations, which is more than fast enough to run in real time. We demonstrate our method on a realistic model, with a full-year simulation and 15-minute time periods, using historical electricity prices and weather data, as well as random variations in thermal load. These simulations show substantial cost savings, on the order of 30%, compared to a standard thermostat-based control system. Perhaps more importantly, the method exhibits a sophisticated response to real-time variations in electricity prices. This demand response is critical to help balance real-time uncertainties in generation capacity associated with large penetration of intermittent renewable energy sources in a future smart grid.
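
    The sequential convex step can be illustrated on a one-variable caricature (a convex-concave style sketch only; the paper's refrigeration cost model and QP solver are not reproduced here): split the nonconvex cost into convex and concave parts, linearise the concave part at the current iterate, and minimise the resulting convex model.

        # f(x) = x**4 - 2*x**2: convex part x**4, concave part -2*x**2.
        # Linearising the concave part at x_k gives the convex model
        #   x**4 - 2*x_k**2 - 4*x_k*(x - x_k),
        # whose minimiser solves 4*x**3 = 4*x_k, i.e. x = cbrt(x_k).

        def ccp_step(x_k):
            return x_k ** (1.0 / 3.0) if x_k >= 0 else -((-x_k) ** (1.0 / 3.0))

        x = 0.3
        for _ in range(10):
            x = ccp_step(x)
        print(x)  # approaches x = 1, a stationary point of f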

  9. High-power terahertz quantum cascade lasers with ∼0.23 W in continuous wave mode

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Xuemin; Shen, Changle; Jiang, Tao

    2016-07-15

    Terahertz quantum cascade lasers with a record output power up to ∼0.23 W in continuous wave mode were obtained. We show that the optimal 2.9-mm-long device operating at 3.11 THz has a low threshold current density of 270 A/cm² at ∼15 K. The maximum operating temperature reached ∼65 K in continuous wave mode, and the internal quantum efficiency decreased from 0.53 to 0.19 for devices with different cavity lengths. By using a convex lens with an effective focal length of 13 mm, the beam profile was collimated to a quasi-Gaussian distribution.

  10. Worst case estimation of homology design by convex analysis

    NASA Technical Reports Server (NTRS)

    Yoshikawa, N.; Elishakoff, Isaac; Nakagiri, S.

    1998-01-01

    The methodology of homology design is investigated for the optimum design of advanced structures for which the achievement of delicate tasks with the aid of an active control system is demanded. The proposed formulation of homology design, based on finite element sensitivity analysis, necessarily requires the specification of external loadings. A formulation to evaluate the worst case for homology design caused by uncertain fluctuation of loadings is presented by means of the convex model of uncertainty, in which uncertainty variables are assigned to discretized nodal forces and are confined within a conceivable convex hull given as a hyperellipse. The worst case of the distortion from the objective homologous deformation is estimated by the Lagrange multiplier method, searching for the point that maximizes the error index on the boundary of the convex hull. The validity of the proposed method is demonstrated in a numerical example using an eleven-bar truss structure.

  11. Transient disturbance growth in flows over convex surfaces

    NASA Astrophysics Data System (ADS)

    Karp, Michael; Hack, M. J. Philipp

    2017-11-01

    Flows over curved surfaces occur in a wide range of applications including airfoils, compressor and turbine vanes as well as aerial, naval and ground vehicles. In most of these applications the surface has convex curvature, while concave surfaces are less common. Since monotonic boundary-layer flows over convex surfaces are exponentially stable, they have received considerably less attention than flows over concave walls which are destabilized by centrifugal forces. Non-modal mechanisms may nonetheless enable significant disturbance growth which can make the flow susceptible to secondary instabilities. A parametric investigation of the transient growth and secondary instability of flows over convex surfaces is performed. The specific conditions yielding the maximal transient growth and strongest instability are identified. The effect of wall-normal and spanwise inflection points on the instability process is discussed. Finally, the role and significance of additional parameters, such as the geometry and pressure gradient, is analyzed.

  12. Clearance detector and method for motion and distance

    DOEpatents

    Xavier, Patrick G [Albuquerque, NM

    2011-08-09

    A method for correct and efficient detection of clearances between three-dimensional bodies in computer-based simulations, where one or both of the bodies is subject to translations and/or rotations. The method conservatively determines the size of such clearances and whether there is a collision between the bodies. Given two bodies, each undergoing separate motions, the method utilizes bounding-volume hierarchy representations of the two bodies, together with mappings and inverse mappings for their motions. The method uses these representations, mappings, and direction vectors to determine the directionally furthest locations of points on the convex hulls of the volumes virtually swept by the bodies, and hence the clearance between the bodies, without having to compute the convex hulls themselves. The method includes clearance detection for bodies composed of convex geometric primitives, as well as more specific techniques for bodies composed of convex polyhedra.
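
    The key geometric fact, that directionally furthest points bound clearance without building any hulls, can be sketched as follows (an illustration of the support-function idea only, not the patented procedure; the point clouds are hypothetical):

        import numpy as np

        def directional_gap(A, B, d):
            """Lower bound on the clearance between conv(A) and conv(B).

            Uses only the directionally furthest points along the unit vector d;
            a positive value certifies separation by at least that distance."""
            d = d / np.linalg.norm(d)
            return np.min(A @ d) - np.max(B @ d)

        A = np.random.rand(100, 3) + np.array([0.0, 0.0, 2.0])   # cloud above
        B = np.random.rand(100, 3)                               # cloud below
        print(directional_gap(A, B, np.array([0.0, 0.0, 1.0])))  # about 1.0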

  13. Anomalous dynamics triggered by a non-convex equation of state in relativistic flows

    NASA Astrophysics Data System (ADS)

    Ibáñez, J. M.; Marquina, A.; Serna, S.; Aloy, M. A.

    2018-05-01

    The non-monotonicity of the local speed of sound in dense matter at baryon number densities much higher than the nuclear saturation density (n₀ ≈ 0.16 fm⁻³) suggests the possible existence of a non-convex thermodynamics which will lead to a non-convex dynamics. Here, we explore the rich and complex dynamics that an equation of state (EoS) with non-convex regions in the pressure-density plane may develop as a result of genuinely relativistic effects, without a classical counterpart. To this end, we have introduced a phenomenological EoS, the parameters of which can be restricted owing to causality and thermodynamic stability constraints. This EoS can be regarded as a toy model with which we may mimic realistic (and far more complex) EoSs of practical use in the realm of relativistic hydrodynamics.

  14. The Lp Robin problem for Laplace equations in Lipschitz and (semi-)convex domains

    NASA Astrophysics Data System (ADS)

    Yang, Sibei; Yang, Dachun; Yuan, Wen

    2018-01-01

    Let n ≥ 3 and Ω be a bounded Lipschitz domain in ℝⁿ. Assume that p ∈ (2, ∞) and the function b ∈ L∞(∂Ω) is non-negative, where ∂Ω denotes the boundary of Ω. Denote by ν the outward unit normal to ∂Ω. In this article, the authors give two necessary and sufficient conditions for the unique solvability of the Robin problem for the Laplace equation Δu = 0 in Ω with boundary data ∂u/∂ν + bu = f ∈ Lp(∂Ω), respectively in terms of a weak reverse Hölder inequality with exponent p, or of the unique solvability of the Robin problem with boundary data in some weighted L2(∂Ω) space. As applications, the authors obtain the unique solvability of the Robin problem for the Laplace equation in a bounded (semi-)convex domain Ω with boundary data in (weighted) Lp(∂Ω) for any given p ∈ (1, ∞).

  15. Nondestructive method and apparatus for imaging grains in curved surfaces of polycrystalline articles

    DOEpatents

    Carpenter, Donald A.

    1995-01-01

    A nondestructive method, and associated apparatus, are provided for determining the grain flow of the grains in a convex curved, textured polycrystalline surface. The convex, curved surface of a polycrystalline article is aligned in a horizontal x-ray diffractometer, and a monochromatic, converging x-ray beam is directed onto the curved surface of the polycrystalline article so that the converging x-ray beam is diffracted by crystallographic planes of the grains in the polycrystalline article. The diffracted x-ray beam is caused to pass through a set of horizontal, parallel slits to limit the height of the beam. Thereafter, the linear intensity of the diffracted x-ray beam is measured, using a linear position-sensitive proportional counter, as a function of position in a direction orthogonal to the counter, so as to generate two-dimensional data. An image of the grains in the curved surface of the polycrystalline article is provided based on the two-dimensional data.

  16. Nondestructive method and apparatus for imaging grains in curved surfaces of polycrystalline articles

    DOEpatents

    Carpenter, D.A.

    1995-05-23

    A nondestructive method, and associated apparatus, are provided for determining the grain flow of the grains in a convex curved, textured polycrystalline surface. The convex, curved surface of a polycrystalline article is aligned in a horizontal x-ray diffractometer, and a monochromatic, converging x-ray beam is directed onto the curved surface of the polycrystalline article so that the converging x-ray beam is diffracted by crystallographic planes of the grains in the polycrystalline article. The diffracted x-ray beam is caused to pass through a set of horizontal, parallel slits to limit the height of the beam. Thereafter, the linear intensity of the diffracted x-ray beam is measured, using a linear position-sensitive proportional counter, as a function of position in a direction orthogonal to the counter, so as to generate two-dimensional data. An image of the grains in the curved surface of the polycrystalline article is provided based on the two-dimensional data. 7 Figs.

  17. Display-wide influences on figure-ground perception: the case of symmetry.

    PubMed

    Mojica, Andrew J; Peterson, Mary A

    2014-05-01

    Past research has demonstrated that convex regions are increasingly likely to be perceived as figures as the number of alternating convex and concave regions in test displays increases. This region-number effect depends on both a small preexisting preference for convex over concave objects and the presence of scene characteristics (i.e., uniform fill) that allow the integration of the concave regions into a background object/surface. These factors work together to enable the percept of convex objects in front of a background. We investigated whether region-number effects generalize to another property, symmetry, whose effectiveness as a figure property has been debated. Observers reported which regions they perceived as figures in black-and-white displays with alternating symmetric/asymmetric regions. In Experiments 1 and 2, the displays had articulated outer borders that preserved the symmetry/asymmetry of the outermost regions. Region-number effects were not observed, although symmetric regions were perceived as figures more often than chance. We hypothesized that the articulated outer borders prevented fitting a background interpretation to the asymmetric regions. In Experiment 3, we used straight-edge framelike outer borders and observed region-number effects for symmetry equivalent to those observed for convexity. These results (1) show that display-wide information affects figure assignment at a border, (2) extend the evidence indicating that the ability to fit background as well as foreground interpretations is critical in figure assignment, (3) reveal that symmetry and convexity are equally effective figure cues, and (4) demonstrate that symmetry serves as a figural property only when it is close to fixation.

  18. A formulation of a matrix sparsity approach for the quantum ordered search algorithm

    NASA Astrophysics Data System (ADS)

    Parmar, Jupinder; Rahman, Saarim; Thiara, Jaskaran

    One specific subset of quantum algorithms is Grover's Ordered Search Problem (OSP), the quantum counterpart of the classical binary search algorithm, which utilizes oracle functions to locate a specified value within an ordered database. Classically, the optimal algorithm is known to have log₂ N complexity; however, Grover's algorithm has been found to have an optimal complexity between the lower bound of (ln N - 1)/π ≈ 0.221 log₂ N and the upper bound of 0.433 log₂ N. We sought to lower the known upper bound of the OSP. Following Farhi et al. [MIT-CTP 2815 (1999), arXiv:quant-ph/9901059], the OSP can be resolved into a translation-invariant algorithm, which yields constraints on quantum query algorithms. With these constraints, one can find Laurent polynomials for various k (numbers of queries) and N (database sizes), thus finding larger recursive sets that solve the OSP and effectively reduce the upper bound. These polynomials are found to be convex functions, allowing one to use convex optimization to improve on the known bounds. According to Childs et al. [Phys. Rev. A 75 (2007) 032335], semidefinite programming, a subset of convex optimization, can solve the particular problem represented by the constraints. We implemented a program following their formulation of a semidefinite program (SDP) and found that it takes an immense amount of storage and time to compute. To combat this setback, we formulated an approach that improves the results of the SDP by exploiting matrix sparsity. Through the development of this approach, along with the implementation of a rudimentary solver, we demonstrate how matrix sparsity reduces the time and storage required to compute the SDP, making further progress toward the theorized lower bound likely.

  19. Trait-fitness relationships determine how trade-off shapes affect species coexistence.

    PubMed

    Ehrlich, Elias; Becks, Lutz; Gaedke, Ursula

    2017-12-01

    Trade-offs between functional traits are ubiquitous in nature and can promote species coexistence depending on their shape. Classic theory predicts that convex trade-offs facilitate coexistence of specialized species with extreme trait values (extreme species) while concave trade-offs promote species with intermediate trait values (intermediate species). We show here that this prediction becomes insufficient when the traits translate non-linearly into fitness which frequently occurs in nature, e.g., an increasing length of spines reduces grazing losses only up to a certain threshold resulting in a saturating or sigmoid trait-fitness function. We present a novel, general approach to evaluate the effect of different trade-off shapes on species coexistence. We compare the trade-off curve to the invasion boundary of an intermediate species invading the two extreme species. At this boundary, the invasion fitness is zero. Thus, it separates trait combinations where invasion is or is not possible. The invasion boundary is calculated based on measurable trait-fitness relationships. If at least one of these relationships is not linear, the invasion boundary becomes non-linear, implying that convex and concave trade-offs not necessarily lead to different coexistence patterns. Therefore, we suggest a new ecological classification of trade-offs into extreme-favoring and intermediate-favoring which differs from a purely mathematical description of their shape. We apply our approach to a well-established model of an empirical predator-prey system with competing prey types facing a trade-off between edibility and half-saturation constant for nutrient uptake. We show that the survival of the intermediate prey depends on the convexity of the trade-off. Overall, our approach provides a general tool to make a priori predictions on the outcome of competition among species facing a common trade-off in dependence of the shape of the trade-off and the shape of the trait-fitness relationships. © 2017 by the Ecological Society of America.

  20. A theoretical stochastic control framework for adapting radiotherapy to hypoxia

    NASA Astrophysics Data System (ADS)

    Saberian, Fatemeh; Ghate, Archis; Kim, Minsun

    2016-10-01

    Hypoxia, that is, insufficient oxygen partial pressure, is a known cause of reduced radiosensitivity in solid tumors, and especially in head-and-neck tumors. It is thus believed to adversely affect the outcome of fractionated radiotherapy. Oxygen partial pressure varies spatially and temporally over the treatment course and exhibits inter-patient and intra-tumor variation. Emerging advances in non-invasive functional imaging offer the future possibility of adapting radiotherapy plans to this uncertain spatiotemporal evolution of hypoxia over the treatment course. We study the potential benefits of such adaptive planning via a theoretical stochastic control framework using computer-simulated evolution of hypoxia on computer-generated test cases in head-and-neck cancer. The exact solution of the resulting control problem is computationally intractable. We develop an approximation algorithm, called certainty equivalent control, that calls for the solution of a sequence of convex programs over the treatment course; dose-volume constraints are handled using a simple constraint generation method. These convex programs are solved using an interior point algorithm with a logarithmic barrier via Newton’s method and backtracking line search. Convexity of various formulations in this paper is guaranteed by a sufficient condition on radiobiological tumor-response parameters. This condition is expected to hold for head-and-neck tumors and for other similarly responding tumors where the linear dose-response parameter is larger than the quadratic dose-response parameter. We perform numerical experiments on four test cases by using a first-order vector autoregressive process with exponential and rational-quadratic covariance functions from the spatiotemporal statistics literature to simulate the evolution of hypoxia. Our results suggest that dynamic planning could lead to a considerable improvement in the number of tumor cells remaining at the end of the treatment course. Through these simulations, we also gain insights into when and why dynamic planning is likely to yield the largest benefits.

  1. Efficient Boundary Extraction of BSP Solids Based on Clipping Operations.

    PubMed

    Wang, Charlie C L; Manocha, Dinesh

    2013-01-01

    We present an efficient algorithm to extract the manifold surface that approximates the boundary of a solid represented by a Binary Space Partition (BSP) tree. Our polygonization algorithm repeatedly performs clipping operations on volumetric cells that correspond to a spatial convex partition and computes the boundary by traversing the connected cells. We use point-based representations along with finite-precision arithmetic to improve the efficiency and generate the B-rep approximation of a BSP solid. The core of our polygonization method is a novel clipping algorithm that uses a set of logical operations to make it resistant to degeneracies resulting from limited precision of floating-point arithmetic. The overall BSP to B-rep conversion algorithm can accurately generate boundaries with sharp and small features, and is faster than prior methods. At the end of this paper, we use this algorithm for a few geometric processing applications including Boolean operations, model repair, and mesh reconstruction.

  2. Advantages of using newly developed quartz contact lens with slit illumination from operating microscope.

    PubMed

    Kiyokawa, Masatoshi; Sakuma, Toshiro; Hatano, Noriko; Mizota, Atsushi; Tanaka, Minoru

    2009-06-01

    The purpose of this article is to report the characteristics and advantages of a newly designed quartz contact lens used with slit illumination from an operating microscope for intraocular surgery. The new contact lens is made of quartz; it is convex-concave and is used in combination with slit illumination from the operating microscope. The optical properties of quartz make this lens less reflective and give it greater transmittance. The combination of a quartz contact lens with slit illumination provided a brighter and wider field of view than conventional lenses. This system enabled us to perform bimanual vitrectomy and scleral buckling surgery without an indirect ophthalmoscope. Small intraocular structures in the posterior pole or in the periphery were detected more easily. In conclusion, the newly designed quartz lens with slit-beam illumination from an operating microscope provided a bright, clear, and wide surgical field, and allowed intraocular surgery to be performed more easily.

  3. Computing convex quadrangulations

    PubMed Central

    Schiffer, T.; Aurenhammer, F.; Demuth, M.

    2012-01-01

    We use projected Delaunay tetrahedra and a maximum independent set approach to compute large subsets of convex quadrangulations on a given set of points in the plane. The new method improves over the popular pairing method based on triangulating the point set.

  4. Design and measurement of a TE₁₃ input converter for high order mode gyrotron travelling wave amplifiers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Yan; Liu, Guo, E-mail: liuguo@uestc.edu.cn; Shu, Guoxiang

    2016-03-15

    A technique to launch a circular TE₁₃ mode to interact with the helical electron beam of a gyrotron travelling wave amplifier is proposed and verified by simulation and cold test in this paper. The high order mode (HOM) TE₁₃ is excited by a broadband Y-type power divider with the aid of a cylindrical waveguide system. Using grooves and convex strips loaded at the lateral planes of the output cylindrical waveguide, the electric fields of the potentially competing TE₃₂ and TE₇₁ modes are suppressed to allow the transmission of the dominant TE₁₃ mode. The converter performance for different structural dimensions of the grooves and convex strips is studied in detail, and excellent results have been achieved. Simulation predicts that the average transmission is ∼−1.8 dB with a 3 dB bandwidth of 7.2 GHz (91.5–98.7 GHz), and that the port reflection is less than −15 dB. The conversion efficiencies to the TE₃₂ and TE₇₁ modes are, respectively, under −15 dB and −24 dB in the operating frequency band. Such an HOM converter operating at W-band has been fabricated and cold tested with the radiation boundary. Measurements from the vector network analyzer cold test and microwave simulations show a good reflection performance for the converter.

  5. Joint pricing and production management: a geometric programming approach with consideration of cubic production cost function

    NASA Astrophysics Data System (ADS)

    Sadjadi, Seyed Jafar; Hamidi Hesarsorkh, Aghil; Mohammadi, Mehdi; Bonyadi Naeini, Ali

    2015-06-01

    Coordination and harmony between different departments of a company can be an important factor in achieving competitive advantage if the company aligns the strategies of its different departments. This paper presents an integrated decision model based on recent advances in geometric programming. The demand for a product is considered a power function of factors such as the product's price, marketing expenditures, and consumer service expenditures. Furthermore, the production cost is considered a cubic power function of output. The model is solved using recent advances in convex optimization tools. Finally, the solution procedure is illustrated by a numerical example.
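
    The tractability of such power-function models rests on the standard geometric programming transform; in generic form (not the paper's specific demand and cost functions), a posynomial objective with positive coefficients c_k becomes convex after a change of variables:

        \min_{x > 0}\ \sum_{k} c_k \prod_{i} x_i^{a_{ik}}
        \quad\xrightarrow{\,x_i = e^{y_i}\,}\quad
        \min_{y}\ \log \sum_{k} \exp\big(a_k^{\top} y + \log c_k\big),

    where the right-hand side is a log-sum-exp of affine functions of y and hence convex; since the log is monotone, both problems have the same minimizers.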

  6. The Singularity Mystery Associated with a Radially Continuous Maxwell Viscoelastic Structure

    NASA Technical Reports Server (NTRS)

    Fang, Ming; Hager, Bradford H.

    1995-01-01

    The singularity problem associated with a radially continuous Maxwell viscoelastic structure is investigated. A special tool called the isolation function is developed. Results calculated using the isolation function show that the discrete-model assumption is no longer valid when the viscoelastic parameter becomes a continuous function of radius. Continuous variations in the upper-mantle viscoelastic parameter are especially powerful in destroying the mode-like structures. The contribution of the singularities to the load Love numbers is sensitive to the convexity of the viscoelastic parameter models. The difference between the vertical response and the horizontal response found in layered viscoelastic parameter models remains with continuous models.

  7. Compliant tactile sensor for generating a signal related to an applied force

    NASA Technical Reports Server (NTRS)

    Torres-Jara, Eduardo (Inventor)

    2012-01-01

    Tactile sensor. The sensor includes a compliant convex surface disposed above a sensor array, the sensor array adapted to respond to deformation of the convex surface to generate a signal related to an applied force vector.

  8. Systems of nonlinear algebraic equations with positive solutions.

    PubMed

    Ciurte, Anca; Nedevschi, Sergiu; Rasa, Ioan

    2017-01-01

    We are concerned with the positive solutions of an algebraic system depending on a parameter [Formula: see text] and arising in economics. For [Formula: see text] we prove that the system has at least one solution. For [Formula: see text] we give three proofs of existence and a proof of uniqueness of the solution. Brouwer's theorem and inequalities involving convex functions are essential tools in our proofs.

  9. SATA Stochastic Algebraic Topology and Applications

    DTIC Science & Technology

    2017-01-23

    Harris et al., "Selective sampling after solving a convex problem," arXiv:1609.05609 [math, stat] (Sept. 2016). 13. Baryshnikov, ...Functions, Adv. Math. 245, 573-586, 2014. 15. Y. Baryshnikov and D. Liberzon, Robust stability conditions for switched linear systems: commutator bounds... Consistency via Kernel Estimation, arXiv:1407.5272 [math, stat] (July 2014), to appear in Bernoulli. 18. O. Bobrowski and S. Weinberger

  10. Attitude Estimation for Unresolved Agile Space Objects with Shape Model Uncertainty

    DTIC Science & Technology

    2012-09-01

    Simulated lightcurve data using the Cook-Torrance [8] Bidirectional Reflectivity Distribution Function (BRDF) model was first applied in a batch estimation... framework to ellipsoidal SO models in geostationary orbits [9]. The Ashikhmin-Shirley [10] BRDF has also been used to study estimation of specular... non-convex 300-facet model and simulated lightcurves using a combination of Lambertian and Cook-Torrance (specular) BRDF models with an Unscented

  11. Convex Regression with Interpretable Sharp Partitions

    PubMed Central

    Petersen, Ashley; Simon, Noah; Witten, Daniela

    2016-01-01

    We consider the problem of predicting an outcome variable on the basis of a small number of covariates, using an interpretable yet non-additive model. We propose convex regression with interpretable sharp partitions (CRISP) for this task. CRISP partitions the covariate space into blocks in a data-adaptive way, and fits a mean model within each block. Unlike other partitioning methods, CRISP is fit using a non-greedy approach by solving a convex optimization problem, resulting in low-variance fits. We explore the properties of CRISP, and evaluate its performance in a simulation study and on a housing price data set.
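
    A simplified surrogate of this kind of fit can be written as a single convex program (a sketch using a total-variation penalty in place of CRISP's actual data-adaptive partition penalty; the data and tuning values below are hypothetical):

        import cvxpy as cp
        import numpy as np

        # toy 20 x 20 grid with a blockwise-constant mean
        rng = np.random.default_rng(0)
        truth = np.kron(rng.normal(size=(4, 4)), np.ones((5, 5)))
        Y = truth + 0.3 * rng.normal(size=(20, 20))

        M = cp.Variable((20, 20))
        lam = 2.0  # regularization weight
        cp.Problem(cp.Minimize(cp.sum_squares(Y - M) + lam * cp.tv(M))).solve()
        M_hat = M.value  # low-variance, near piecewise-constant mean surface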

  12. Optshrink LR + S: accelerated fMRI reconstruction using non-convex optimal singular value shrinkage.

    PubMed

    Aggarwal, Priya; Shrivastava, Parth; Kabra, Tanay; Gupta, Anubha

    2017-03-01

    This paper presents a new accelerated fMRI reconstruction method, namely the OptShrink LR+S method, which reconstructs undersampled fMRI data using a linear combination of low-rank and sparse components. The low-rank component is estimated using a non-convex optimal singular value shrinkage algorithm, while the sparse component is estimated using convex l1 minimization. The performance of the proposed method is compared with existing state-of-the-art algorithms on a real fMRI dataset. The proposed OptShrink LR+S method yields good qualitative and quantitative results.
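
    The overall L + S splitting can be sketched with a fully convex stand-in: alternating exact minimization with singular value soft-thresholding for L and elementwise soft-thresholding for S. The paper's non-convex OptShrink shrinkage of the singular values is not reproduced here, and the penalty weights below are hypothetical.

        import numpy as np

        def svt(X, tau):
            """Prox of tau * nuclear norm: soft-threshold the singular values."""
            U, s, Vt = np.linalg.svd(X, full_matrices=False)
            return (U * np.maximum(s - tau, 0.0)) @ Vt

        def soft(X, tau):
            """Prox of tau * l1 norm: elementwise soft-thresholding."""
            return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

        def lr_plus_s(Y, lam_L=1.0, lam_S=0.1, iters=100):
            """Split Y into low-rank L and sparse S by alternating prox steps."""
            L = np.zeros_like(Y)
            S = np.zeros_like(Y)
            for _ in range(iters):
                L = svt(Y - S, lam_L)   # exact minimizer of the L-subproblem
                S = soft(Y - L, lam_S)  # exact minimizer of the S-subproblem
            return L, S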

  13. The role of spinal concave–convex biases in the progression of idiopathic scoliosis

    PubMed Central

    Driscoll, Mark; Moreau, Alain; Villemure, Isabelle; Parent, Stefan

    2009-01-01

    Inadequate understanding of risk factors involved in the progression of idiopathic scoliosis restrains initial treatment to observation until the deformity shows signs of significant aggravation. The purpose of this analysis is to explore whether the concave–convex biases associated with scoliosis (local degeneration of the intervertebral discs, nucleus migration, and local increase in trabecular bone-mineral density of vertebral bodies) may be identified as progressive risk factors. Finite element models of a 26° right thoracic scoliotic spine were constructed based on experimental and clinical observations that included growth dynamics governed by mechanical stimulus. Stress distribution over the vertebral growth plates, progression of Cobb angles, and vertebral wedging were explored in models with and without the biases of concave–convex properties. The inclusion of the bias of concave–convex properties within the model both augmented the asymmetrical loading of the vertebral growth plates by up to 37% and further amplified the progression of Cobb angles and vertebral wedging by as much as 5.9° and 0.8°, respectively. Concave–convex biases are factors that influence the progression of scoliotic curves. Quantifying these parameters in a patient with scoliosis may further provide a better clinical assessment of the risk of progression.

  14. Organizing principles for dense packings of nonspherical hard particles: Not all shapes are created equal

    NASA Astrophysics Data System (ADS)

    Torquato, Salvatore; Jiao, Yang

    2012-07-01

    We have recently devised organizing principles to obtain maximally dense packings of the Platonic and Archimedean solids and certain smoothly shaped convex nonspherical particles [Torquato and Jiao, Phys. Rev. E 81, 041310 (2010)]. Here we generalize them in order to guide one to ascertain the densest packings of other convex nonspherical particles as well as concave shapes. Our generalized organizing principles are explicitly stated as four distinct propositions. All of our organizing principles are applied to and tested against the most comprehensive set of both convex and concave particle shapes examined to date, including Catalan solids, prisms, antiprisms, cylinders, dimers of spheres, and various concave polyhedra. We demonstrate that all of the densest known packings associated with this wide spectrum of nonspherical particles are consistent with our propositions. Among other applications, our general organizing principles enable us to construct analytically the densest known packings of certain convex nonspherical particles, including spherocylinders, "lens-shaped" particles, square pyramids, and rhombic pyramids. Moreover, we show how to apply these principles to infer the high-density equilibrium crystalline phases of hard convex and concave particles. We also discuss the unique packing attributes of maximally random jammed packings of nonspherical particles.

  16. Convex Formulations of Learning from Crowds

    NASA Astrophysics Data System (ADS)

    Kajino, Hiroshi; Kashima, Hisashi

    It has attracted considerable attention to use crowdsourcing services to collect a large amount of labeled data for machine learning, since crowdsourcing services allow one to ask the general public to label data at very low cost through the Internet. The use of crowdsourcing has introduced a new challenge in machine learning: coping with the low quality of crowd-generated data. There have been many recent attempts to address the quality problem of multiple labelers; however, there are two serious drawbacks in the existing approaches, namely (i) non-convexity and (ii) task homogeneity. Most of the existing methods consider true labels as latent variables, which results in non-convex optimization problems. Also, the existing models assume only single homogeneous tasks, while in realistic situations, clients can offer multiple tasks to crowds and crowd workers can work on different tasks in parallel. In this paper, we propose a convex optimization formulation of learning from crowds by introducing personal models of individual crowd workers without estimating true labels. We further extend the proposed model to multi-task learning, based on the resemblance between the proposed formulation and that of an existing multi-task learning model. We also devise efficient iterative methods for solving the convex optimization problems by exploiting conditional independence structures in multiple classifiers.

  17. A Class of Prediction-Correction Methods for Time-Varying Convex Optimization

    NASA Astrophysics Data System (ADS)

    Simonetto, Andrea; Mokhtari, Aryan; Koppel, Alec; Leus, Geert; Ribeiro, Alejandro

    2016-09-01

    This paper considers unconstrained convex optimization problems with time-varying objective functions. We propose algorithms with a discrete time-sampling scheme to find and track the solution trajectory based on prediction and correction steps, while sampling the problem data at a constant rate of $1/h$, where $h$ is the length of the sampling interval. The prediction step is derived by analyzing the iso-residual dynamics of the optimality conditions. The correction step adjusts for the distance between the current prediction and the optimizer at each time step, and consists of either one or multiple gradient steps or Newton steps, which respectively correspond to the gradient trajectory tracking (GTT) or Newton trajectory tracking (NTT) algorithms. Under suitable conditions, we establish that the asymptotic error incurred by both proposed methods behaves as $O(h^2)$, and in some cases as $O(h^4)$, which outperforms the state-of-the-art error bound of $O(h)$ for correction-only methods in the gradient-correction step. Moreover, when the characteristics of the objective function variation are not available, we propose approximate gradient and Newton tracking algorithms (AGT and ANT, respectively) that still attain these asymptotic error bounds. Numerical simulations demonstrate the practical utility of the proposed methods and show that they improve upon existing techniques by several orders of magnitude.
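
    On a scalar quadratic with a moving optimum, the prediction and correction steps reduce to two lines each. The sketch below is a toy, not the paper's GTT/NTT implementation; it tracks x*(t) = r(t) for f(x, t) = (x - r(t))**2 / 2, where the prediction follows from the optimality-condition dynamics:

        import numpy as np

        def track(r, rdot, h=0.1, T=10.0, gamma=0.8):
            x = r(0.0)
            errors = []
            for t in np.arange(0.0, T, h):
                # prediction: dx = -h * (d2f/dx2)^-1 * d2f/dtdx = h * rdot(t)
                x = x + h * rdot(t)
                # correction: one gradient step on f(., t + h)
                x = x - gamma * (x - r(t + h))
                errors.append(abs(x - r(t + h)))
            return np.array(errors)

        print(track(np.sin, np.cos)[-5:])  # tracking error settles at O(h^2)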

  18. Stochastic sampled-data control for synchronization of complex dynamical networks with control packet loss and additive time-varying delays.

    PubMed

    Rakkiyappan, R; Sakthivel, N; Cao, Jinde

    2015-06-01

    This study examines the exponential synchronization of complex dynamical networks with control packet loss and additive time-varying delays. Additionally, a sampled-data controller with a time-varying sampling period is considered, which is assumed to switch among m different values randomly with given probabilities. A novel Lyapunov-Krasovskii functional (LKF) with triple integral terms is then constructed, and by using Jensen's inequality and the reciprocally convex approach, sufficient conditions under which the dynamical network is exponentially mean-square stable are derived. When Jensen's inequality is applied to partition the double integral terms in the derivation of the linear matrix inequality (LMI) conditions, a new kind of linear combination of positive functions weighted by the inverses of squared convex parameters appears. In order to handle such a combination, an effective method is introduced by extending the lower bound lemma. To design the sampled-data controller, the synchronization error system is represented as a switched system. Based on the derived LMI conditions and the average dwell-time method, sufficient conditions for the synchronization of the switched error system are derived in terms of LMIs. Finally, a numerical example is employed to show the effectiveness of the proposed methods. Copyright © 2015 Elsevier Ltd. All rights reserved.

  19. Optimal bounds and extremal trajectories for time averages in nonlinear dynamical systems

    NASA Astrophysics Data System (ADS)

    Tobasco, Ian; Goluskin, David; Doering, Charles R.

    2018-02-01

    For any quantity of interest in a system governed by ordinary differential equations, it is natural to seek the largest (or smallest) long-time average among solution trajectories, as well as the extremal trajectories themselves. Upper bounds on time averages can be proved a priori using auxiliary functions, the optimal choice of which is a convex optimization problem. We prove that the problems of finding maximal trajectories and minimal auxiliary functions are strongly dual. Thus, auxiliary functions provide arbitrarily sharp upper bounds on time averages. Moreover, any nearly minimal auxiliary function provides phase space volumes in which all nearly maximal trajectories are guaranteed to lie. For polynomial equations, auxiliary functions can be constructed by semidefinite programming, which we illustrate using the Lorenz system.
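
    A minimal numerical sketch of the auxiliary-function bound: for dx/dt = f(x) with bounded trajectories, any differentiable V gives avg(Phi) <= max_x [Phi(x) + V'(x) f(x)], because the time average of dV/dt vanishes along bounded trajectories. The toy dynamics, the quadratic family V(x) = a*x^2, and the grid search stand in for the semidefinite programming used in the paper; they are illustrative assumptions.

      import numpy as np

      f   = lambda x: x - x**3           # toy dynamics dx/dt = f(x)
      Phi = lambda x: x**2               # quantity whose time average we bound

      xs = np.linspace(-3, 3, 4001)
      best = np.inf
      for a in np.linspace(0.05, 2.0, 200):          # V(x) = a*x^2, V'(x) = 2*a*x
          best = min(best, np.max(Phi(xs) + 2 * a * xs * f(xs)))
      print("optimal bound over this family:", best)  # -> ~1.0 (optimal at a = 1/2)

      # compare with a long trajectory average (forward Euler)
      x, h, avg, N = 0.3, 1e-3, 0.0, 500_000
      for _ in range(N):
          avg += Phi(x)
          x += h * f(x)
      print("trajectory average:", avg / N)           # -> ~1.0 (settles at x = 1)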

  20. Perceived facial changes of Class II Division 1 patients with convex profiles after functional orthopedic treatment followed by fixed orthodontic appliances.

    PubMed

    Tsiouli, Kleopatra; Topouzelis, Nikolaos; Papadopoulos, Moschos A; Gkantidis, Nikolaos

    2017-07-01

    The aim of this research was to investigate the perceived facial changes in Class II Division 1 patients with convex profiles after functional orthopedic treatment followed by fixed orthodontic appliances. Pretreatment and posttreatment profile photographs of 12 Class II Division 1 patients treated with activators, 12 Class II Division 1 patients treated with Twin-block appliances, and 12 controls with normal profiles treated without functional appliances were presented in pairs to 10 orthodontists, 10 patients, 10 parents, and 10 laypersons. The raters assessed changes in facial appearance on a visual analog scale. Two-way multivariate analysis of variance was used to evaluate differences among group ratings. Intrarater reliability was strong in most cases (intraclass correlation coefficients, >0.7). The internal consistency of the assessments was high (alpha, >0.87), both within and between groups. The raters consistently perceived more positive changes in the Class II Division 1 groups compared with the control group. However, at its highest value this difference barely exceeded one-tenth of the total visual analog scale length, and it was mostly evident in the lower face and chin. No significant differences were found between the activator and the Twin-block groups. Although the raters perceived improvements of the facial profiles after functional orthopedic treatment followed by fixed orthodontic appliances, these were quite limited. Thus, orthodontists should be cautious when predicting significant improvement of a patient's profile with this treatment option. Copyright © 2017 American Association of Orthodontists. Published by Elsevier Inc. All rights reserved.

  1. Automatic segmentation for brain MR images via a convex optimized segmentation and bias field correction coupled model.

    PubMed

    Chen, Yunjie; Zhao, Bo; Zhang, Jianwei; Zheng, Yuhui

    2014-09-01

    Accurate segmentation of magnetic resonance (MR) images remains challenging mainly due to intensity inhomogeneity, which is also commonly known as the bias field. Recently, active contour models with geometric information constraints have been applied; however, most of them deal with the bias field in a necessary pre-processing step before segmentation of the MR data. This paper presents a novel automatic variational method that can segment brain MR images while simultaneously correcting the bias field when segmenting images with high intensity inhomogeneities. We first define a function for clustering the image pixels in a smaller neighborhood. The cluster centers in this objective function have a multiplicative factor that estimates the bias within the neighborhood. In order to reduce the effect of noise, the local intensity variations are described by Gaussian distributions with different means and variances. Then, the objective functions are integrated over the entire domain. In order to obtain the global optimum and make the results independent of the initialization of the algorithm, we reconstruct the energy function to be convex and minimize it using Split Bregman theory. A salient advantage of our method is that its result is independent of initialization, which allows robust and fully automated application. Our method is able to estimate bias of quite general profiles, even in 7T MR images. Moreover, our model can also distinguish regions with similar intensity distributions but different variances. The proposed method has been rigorously validated with images acquired on a variety of imaging modalities with promising results. Copyright © 2014 Elsevier Inc. All rights reserved.

  2. Adaptive multiregression in reproducing kernel Hilbert spaces: the multiaccess MIMO channel case.

    PubMed

    Slavakis, Konstantinos; Bouboulis, Pantelis; Theodoridis, Sergios

    2012-02-01

    This paper introduces a wide framework for online, i.e., time-adaptive, supervised multiregression tasks. The problem is formulated in a general infinite-dimensional reproducing kernel Hilbert space (RKHS). In this context, a fairly large number of nonlinear multiregression models fall out as special cases, including the linear case. Any convex, continuous, and not necessarily differentiable function can be used as a loss function in order to quantify the disagreement between the output of the system and the desired response. The only requirement is that a subgradient of the adopted loss function be available in analytic form. To this end, we demonstrate a way to calculate the subgradients of robust loss functions suitable for the multiregression task. As is by now well documented, when dealing with online schemes in RKHS, the memory keeps increasing with each iteration step. To attack this problem, a simple sparsification strategy is utilized, which leads to an algorithmic scheme of linear complexity with respect to the number of unknown parameters. A convergence analysis of the technique, based on arguments of convex analysis, is also provided. To demonstrate the capacity of the proposed method, the multiregressor is applied to the multiaccess multiple-input multiple-output channel equalization task for a setting with poor resources and unavailable channel information. Numerical results verify the potential of the method when its performance is compared with those of state-of-the-art linear techniques, which, in contrast, use space-time coding, more antenna elements, as well as full channel information.
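
    The following is a minimal sketch of an online RKHS update of the kind described: a subgradient step on a convex, non-differentiable (epsilon-insensitive) loss adds at most one kernel expansion term per sample, and a crude truncation rule caps the memory growth. The Gaussian kernel, step sizes, and budget rule are illustrative assumptions, not the paper's exact sparsification strategy.

      import numpy as np

      def gauss_k(x, y, sigma=1.0):
          return np.exp(-np.sum((x - y)**2) / (2 * sigma**2))

      def online_kernel_regression(stream, eta=0.1, eps=0.05, budget=100):
          """Online regression in an RKHS with the eps-insensitive loss."""
          centers, coeffs = [], []
          for x, y in stream:
              fx = sum(c * gauss_k(x, xc) for c, xc in zip(coeffs, centers))
              g = 0.0 if abs(fx - y) <= eps else np.sign(fx - y)   # loss subgradient
              coeffs = [(1 - eta * 0.01) * c for c in coeffs]      # shrinkage step
              if g != 0.0:
                  centers.append(x)
                  coeffs.append(-eta * g)
              if len(centers) > budget:            # simple sparsification rule
                  centers.pop(0)
                  coeffs.pop(0)
          return centers, coeffs

      stream = [(np.array([t]), float(np.sin(t))) for t in np.linspace(0, 3, 200)]
      centers, coeffs = online_kernel_regression(stream)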

  3. Fast Gaussian kernel learning for classification tasks based on specially structured global optimization.

    PubMed

    Zhong, Shangping; Chen, Tianshun; He, Fengying; Niu, Yuzhen

    2014-09-01

    For a practical pattern classification task solved by kernel methods, the computing time is mainly spent on kernel learning (or training). However, the current kernel learning approaches are based on local optimization techniques and struggle to achieve good time performance, especially for large datasets. Thus the existing algorithms cannot be easily extended to large-scale tasks. In this paper, we present a fast Gaussian kernel learning method based on solving a specially structured global optimization (SSGO) problem. We optimize the Gaussian kernel function by using the formulated kernel target alignment criterion, which is a difference of increasing (d.i.) functions. Through a power-transformation based convexification method, the objective criterion can be represented as a difference of convex (d.c.) functions with a fixed power-transformation parameter. The objective programming problem can then be converted to an SSGO problem: globally minimizing a concave function over a convex set. The SSGO problem is classical and has good solvability. Thus, to find the global optimal solution efficiently, we can adopt the improved Hoffman's outer approximation method, which need not repeat the searching procedure with different starting points to locate the best local minimum. Also, the proposed method can be proven to converge to the global solution for any classification task. We evaluate the proposed method on twenty benchmark datasets and compare it with four other Gaussian kernel learning methods. Experimental results show that the proposed method stably achieves both good time-efficiency performance and good classification performance. Copyright © 2014 Elsevier Ltd. All rights reserved.
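
    For reference, the kernel target alignment criterion at the heart of the method is simple to evaluate; the sketch below computes the (Frobenius) alignment between a Gaussian kernel matrix and the label matrix y y^T for labels in {-1, +1}. Selecting sigma by direct search, as shown, is an illustrative simplification of the d.c./SSGO machinery.

      import numpy as np

      def kernel_target_alignment(X, y, sigma):
          """Alignment <K, yy^T>_F / (||K||_F * ||yy^T||_F); larger values
          indicate a kernel better matched to the classification task."""
          sq = np.sum((X[:, None, :] - X[None, :, :])**2, axis=-1)
          K = np.exp(-sq / (2 * sigma**2))
          Y = np.outer(y, y)
          return np.sum(K * Y) / (np.linalg.norm(K) * np.linalg.norm(Y))

      X = np.random.randn(40, 2)
      y = np.where(X[:, 0] > 0, 1.0, -1.0)
      sigmas = np.linspace(0.1, 5.0, 50)
      best = max(sigmas, key=lambda s: kernel_target_alignment(X, y, s))
      print("sigma with highest alignment:", best)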

  4. First-order convex feasibility algorithms for x-ray CT

    PubMed Central

    Sidky, Emil Y.; Jørgensen, Jakob S.; Pan, Xiaochuan

    2013-01-01

    Purpose: Iterative image reconstruction (IIR) algorithms in computed tomography (CT) are based on algorithms for solving a particular optimization problem. Design of the IIR algorithm, therefore, is aided by knowledge of the solution to the optimization problem on which it is based. Oftentimes, however, it is impractical to achieve an accurate solution to the optimization of interest, which complicates the design of IIR algorithms. This issue is particularly acute for CT with a limited angular-range scan, which leads to poorly conditioned system matrices and difficult-to-solve optimization problems. In this paper, we develop IIR algorithms which solve a certain type of optimization problem called convex feasibility. The convex feasibility approach can provide alternatives to unconstrained optimization approaches and at the same time allow for rapidly convergent algorithms for their solution, thereby facilitating the IIR algorithm design process. Methods: An accelerated version of the Chambolle-Pock (CP) algorithm is adapted to various convex feasibility problems of potential interest to IIR in CT. One of the proposed problems is seen to be equivalent to least-squares minimization, and two other problems provide alternatives to penalized least-squares minimization. Results: The accelerated CP algorithms are demonstrated on a simulation of circular fan-beam CT with a limited scanning arc of 144°. The CP algorithms are seen in the empirical results to converge to the solution of their respective convex feasibility problems. Conclusions: Formulation of convex feasibility problems can provide a useful alternative to unconstrained optimization when designing IIR algorithms for CT. The approach is amenable to recent methods for accelerating first-order algorithms which may be particularly useful for CT with limited angular-range scanning. The present paper demonstrates the methodology, and future work will illustrate its utility in actual CT applications. PMID:23464295
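
    A minimal sketch of the convex-feasibility viewpoint, using sequential projections onto closed convex sets: slab constraints |a_i.x - b_i| <= eps for the data and nonnegativity for the image. This POCS-style loop is an illustrative stand-in for the accelerated Chambolle-Pock algorithms developed in the paper.

      import numpy as np

      def pocs_feasibility(A, b, eps, iters=200):
          """Find x >= 0 with |a_i.x - b_i| <= eps for every row i, by cycling
          through closed-form projections onto each slab and onto the
          nonnegative orthant."""
          x = np.zeros(A.shape[1])
          row_norms = np.sum(A**2, axis=1)
          for _ in range(iters):
              for i in range(A.shape[0]):
                  r = A[i] @ x - b[i]
                  if r > eps:
                      x -= (r - eps) * A[i] / row_norms[i]
                  elif r < -eps:
                      x -= (r + eps) * A[i] / row_norms[i]
              x = np.maximum(x, 0.0)
          return x

      A = np.random.rand(30, 100)
      b = A @ np.random.rand(100)
      x = pocs_feasibility(A, b, eps=0.01)
      print(np.max(np.abs(A @ x - b)))    # approximately within the slab width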

  5. Asteroid shape and spin statistics from convex models

    NASA Astrophysics Data System (ADS)

    Torppa, J.; Hentunen, V.-P.; Pääkkönen, P.; Kehusmaa, P.; Muinonen, K.

    2008-11-01

    We introduce techniques for characterizing convex shape models of asteroids with a small number of parameters, and apply these techniques to a set of 87 models from convex inversion. We present three different approaches for determining the overall dimensions of an asteroid. With the first technique, we measured the dimensions of the shapes in the direction of the rotation axis and in the equatorial plane, and with the two other techniques, we derived the best-fit ellipsoid. We also computed the inertia matrix of the model shape to test how well it represents the target asteroid, i.e., to find indications of possible non-convex features or albedo variegation, which the convex shape model cannot reproduce. We used shape models for 87 asteroids to perform statistical analyses and to study dependencies between shape and rotation period, size, and taxonomic type. We detected correlations, but more data are required, especially on small and large objects, as well as slow and fast rotators, to reach a more thorough understanding of the dependencies. The results show, e.g., that convex models of asteroids are not that far from ellipsoids in the root-mean-square sense, even though clearly irregular features are present. We also present new spin and shape solutions for Asteroids (31) Euphrosyne, (54) Alexandra, (79) Eurynome, (93) Minerva, (130) Elektra, (376) Geometria, (471) Papagena, and (776) Berbericia. We used a so-called semi-statistical approach to obtain a set of possible spin state solutions. The number of solutions depends on the abundance of the data, which for Eurynome, Elektra, and Geometria was extensive enough to determine an unambiguous spin and shape solution. Data for Euphrosyne, on the other hand, provided a wide distribution of possible spin solutions, whereas the rest of the targets have two or three possible solutions.

  6. Directional Convexity and Finite Optimality Conditions.

    DTIC Science & Technology

    1984-03-01

    Fragmentary record text: the report concerns necessary conditions for optimality for control systems; one recoverable passage notes that convexity of R(T) would imply x(u,T) ∈ int R(T). (Istituto di Matematica Applicata, Università di Padova, Italy; Work Unit 5, Optimization and Large Scale Systems.)

  7. Localized Multiple Kernel Learning A Convex Approach

    DTIC Science & Technology

    2016-11-22

    Fragmentary record text: "All the aforementioned approaches to localized MKL are formulated in terms of non-convex optimization problems, and deep theoretical..." (remainder truncated; citation fragments omitted).

  8. Framework to model neutral particle flux in convex high aspect ratio structures using one-dimensional radiosity

    NASA Astrophysics Data System (ADS)

    Manstetten, Paul; Filipovic, Lado; Hössinger, Andreas; Weinbub, Josef; Selberherr, Siegfried

    2017-02-01

    We present a computationally efficient framework to compute the neutral flux in high aspect ratio structures during three-dimensional plasma etching simulations. The framework is based on a one-dimensional radiosity approach and is applicable to simulations of convex rotationally symmetric holes and convex symmetric trenches with a constant cross-section. The framework is intended to replace the full three-dimensional simulation step required to calculate the neutral flux during plasma etching simulations. Especially for high aspect ratio structures, the computational effort required to perform the full three-dimensional simulation of the neutral flux at the desired spatial resolution conflicts with practical simulation time constraints. Our results are in agreement with those obtained by three-dimensional Monte Carlo based ray tracing simulations for various aspect ratios and convex geometries. With this framework we present a comprehensive analysis of the influence of the geometrical properties of high aspect ratio structures, as well as of the particle sticking probability, on the neutral particle flux.

  9. Strain relaxation in convex-graded InxAl1-xAs (x = 0.05-0.79) metamorphic buffer layers grown by molecular beam epitaxy on GaAs(001)

    NASA Astrophysics Data System (ADS)

    Solov'ev, V. A.; Chernov, M. Yu; Baidakova, M. V.; Kirilenko, D. A.; Yagovkina, M. A.; Sitnikova, A. A.; Komissarova, T. A.; Kop'ev, P. S.; Ivanov, S. V.

    2018-01-01

    This paper presents a study of the structural properties of InGaAs/InAlAs quantum well (QW) heterostructures with convex-graded InxAl1-xAs (x = 0.05-0.79) metamorphic buffer layers (MBLs) grown by molecular beam epitaxy on GaAs substrates. Mechanisms of elastic strain relaxation in the convex-graded MBLs were studied by X-ray reciprocal space mapping combined with the data of spatially resolved selected area electron diffraction implemented in a transmission electron microscope. The strain relaxation degree was approximated for the structures with different values of the In step-back. A strong contribution of strain relaxation via lattice tilt, in addition to the formation of misfit dislocations, has been observed for the convex-graded InAlAs MBL, which results in a reduced threading dislocation density in the QW region as compared to a linear-graded MBL.

  10. Liquid phase heteroepitaxial growth on convex substrate using binary phase field crystal model

    NASA Astrophysics Data System (ADS)

    Lu, Yanli; Zhang, Tinghui; Chen, Zheng

    2018-06-01

    The liquid phase heteroepitaxial growth on a convex substrate is investigated with the binary phase field crystal (PFC) model. The paper focuses on the transformation of the morphology of epitaxial films on convex substrates with two different radiuses of curvature (Ω), as well as the influence of substrate vicinal angles on film growth. It is found that film growth experiences different stages on convex substrates with different radiuses of curvature (Ω). For Ω = 512Δx, the process of epitaxial film growth includes four stages: island growth coupled with layer-by-layer growth, layer-by-layer growth, island growth coupled with layer-by-layer growth, and layer-by-layer growth. For Ω = 1024Δx, film growth only experiences island growth and layer-by-layer growth. Also, the substrate vicinal angle (π) is an important parameter for epitaxial film growth. We find the film can grow well when π = 2° for Ω = 512Δx, while the optimized film can be obtained when π = 4° for Ω = 512Δx.

  11. Torsional deformity of apical vertebra in adolescent idiopathic scoliosis.

    PubMed

    Kotwicki, Tomasz; Napiontek, Marek

    2002-01-01

    CT scans of structural thoracic idiopathic scoliosis were reviewed in nine patients admitted to our department for scoliosis surgery. The apical vertebra scans were chosen and the following parameters were evaluated: 1) the alpha angle, formed by the axis of the vertebra and the axis of the spinous process; 2) the beta concave and beta convex angles, between the spinous process and the left and right transverse processes, respectively; 3) the gamma concave and gamma convex angles, between the axis of the vertebra and the left and right transverse processes, respectively; 4) the rotation angle to the sagittal plane. A constant deviation of the spinous process towards the convex side of the curve was observed. The vertebral body itself was distorted towards the concavity of the curve. The angle between the spinous process and the transverse process was smaller on the convex side of the curve. The torsional, intravertebral deformity of the apical vertebra was a factor acting in the direction opposite to the rotation, in the sense that it reduces the deformity of the spine in idiopathic scoliosis.

  12. An axial temperature profile curvature criterion for the engineering of convex crystal growth interfaces in Bridgman systems

    NASA Astrophysics Data System (ADS)

    Peterson, Jeffrey H.; Derby, Jeffrey J.

    2017-06-01

    A unifying idea is presented for the engineering of convex melt-solid interface shapes in Bridgman crystal growth systems. Previous approaches to interface control are discussed with particular attention paid to the idea of a "booster" heater. Proceeding from the idea that a booster heater promotes a converging heat flux geometry and from the energy conservation equation, we show that a convex interface shape will naturally result when the interface is located in regions of the furnace where the axial thermal profile exhibits negative curvature, i.e., where d²T/dz² < 0. This criterion is effective in explaining prior literature results on interface control and promising for the evaluation of new furnace designs. We posit that the negative curvature criterion may be applicable to the characterization of growth systems via temperature measurements in an empty furnace, providing insight about the potential for achieving a convex interface shape, without growing a crystal or conducting simulations.
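
    The criterion is straightforward to apply to a measured or simulated axial profile: sample T(z), form the discrete second derivative, and flag the region where it is negative. The tanh-shaped profile below is a hypothetical example.

      import numpy as np

      def convex_interface_zone(z, T):
          """z-locations where d2T/dz2 < 0, i.e., where the negative-curvature
          criterion predicts a convex melt-solid interface."""
          d2T = np.gradient(np.gradient(T, z), z)
          return z[d2T < 0]

      z = np.linspace(0.0, 0.3, 301)                  # axial position (m), hypothetical
      T = 1200 + 400 * np.tanh((0.15 - z) / 0.05)     # hot zone to cold zone profile
      zone = convex_interface_zone(z, T)
      print(zone.min(), zone.max())                   # the negative-curvature region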

  13. New Convex and Spherical Structures of Bare Boron Clusters

    NASA Astrophysics Data System (ADS)

    Boustani, Ihsan

    1997-10-01

    New stable structures of bare boron clusters can easily be obtained and constructed with the help of an "Aufbau Principle" suggested by a systematic ab initio HF-SCF and direct CI study. It is concluded that boron cluster formation can be established by elemental units of pentagonal and hexagonal pyramids. New convex and small spherical clusters different from the classical known forms of boron crystal structures are obtained by a combination of both basic units. Convex structures simulate boron surfaces which can be considered as segments of open or closed spheres. Both convex clusters B16 and B46 have energies close to those of their conjugate quasi-planar clusters, which are relatively stable and can be considered to act as a calibration mark. The closed spherical clusters B12, B22, B32, and B42 are less stable than the corresponding conjugated quasi-planar structures. As a consequence, highly stable spherical boron clusters can systematically be predicted when their conjugate quasi-planar clusters are determined and energies are compared.

  14. Scaling of Convex Hull Volume to Body Mass in Modern Primates, Non-Primate Mammals and Birds

    PubMed Central

    Brassey, Charlotte A.; Sellers, William I.

    2014-01-01

    The volumetric method of 'convex hulling' has recently been put forward as a mass prediction technique for fossil vertebrates. Convex hulling involves the calculation of minimum convex hull volumes (vol_CH) from the complete mounted skeletons of modern museum specimens, which are subsequently regressed against body mass (M_b) to derive predictive equations for extinct species. The convex hulling technique has recently been applied to estimate body mass in giant sauropods and fossil ratites; however, the biomechanical signal contained within vol_CH has remained unclear. Specifically, when vol_CH scaling departs from isometry in a group of vertebrates, how might this be interpreted? Here we derive predictive equations for primates, non-primate mammals and birds and compare the scaling behaviour of M_b to vol_CH between groups. We find predictive equations to be characterised by extremely high correlation coefficients (r² = 0.97-0.99) and low mean percentage prediction error (11-20%). Results suggest non-primate mammals scale body mass to vol_CH isometrically (b = 0.92, 95% CI = 0.85-1.00, p = 0.08). Birds scale body mass to vol_CH with negative allometry (b = 0.81, 95% CI = 0.70-0.91, p = 0.011) and apparent density (M_b/vol_CH) therefore decreases with mass (r² = 0.36, p < 0.05). In contrast, primates scale body mass to vol_CH with positive allometry (b = 1.07, 95% CI = 1.01-1.12, p = 0.05) and apparent density therefore increases with size (r² = 0.46, p = 0.025). We interpret such departures from isometry in the context of the 'missing mass' of soft tissues that are excluded from the convex hulling process. We conclude that the convex hulling technique can be justifiably applied to the fossil record when a large proportion of the skeleton is preserved. However, we emphasise the need for future studies to quantify interspecific variation in the distribution of soft tissues such as muscle, integument and body fat. PMID:24618736
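
    A minimal sketch of the workflow under the stated definitions: compute vol_CH for each skeleton's landmark cloud and regress log M_b on log vol_CH, so a slope b = 1 indicates isometry. The input format (one point array per specimen) is an illustrative assumption.

      import numpy as np
      from scipy.spatial import ConvexHull

      def hull_volumes(point_clouds):
          """Minimum convex hull volume (vol_CH) per skeleton point cloud."""
          return np.array([ConvexHull(pts).volume for pts in point_clouds])

      def scaling_exponent(vol_ch, body_mass):
          """Slope b and intercept a of log10(M_b) = a + b*log10(vol_CH)."""
          b, a = np.polyfit(np.log10(vol_ch), np.log10(body_mass), 1)
          return b, a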

  15. Integrating NOE and RDC using sum-of-squares relaxation for protein structure determination.

    PubMed

    Khoo, Y; Singer, A; Cowburn, D

    2017-07-01

    We revisit the problem of protein structure determination from geometrical restraints from NMR, using convex optimization. It is well known that the NP-hard distance geometry problem of determining atomic positions from pairwise distance restraints can be relaxed into a convex semidefinite program (SDP). However, often the NOE distance restraints are too imprecise and sparse for accurate structure determination. Residual dipolar coupling (RDC) measurements provide additional geometric information on the angles between atom-pair directions and axes of the principal-axis frame. The optimization problem involving RDC is highly non-convex and requires a good initialization even within the simulated annealing framework. In this paper, we model the protein backbone as an articulated structure composed of rigid units. Determining the rotation of each rigid unit gives the full protein structure. We propose solving the non-convex optimization problems using the sum-of-squares (SOS) hierarchy, a hierarchy of convex relaxations with increasing complexity and approximation power. Unlike classical global optimization approaches, SOS optimization returns a certificate of optimality if the global optimum is found. Based on the SOS method, we propose two algorithms, RDC-SOS and RDC-NOE-SOS, which have polynomial time complexity in the number of amino-acid residues and run efficiently on a standard desktop. In many instances, the proposed methods exactly recover the solution to the original non-convex optimization problem. To the best of our knowledge, this is the first time the SOS relaxation has been introduced to solve non-convex optimization problems in structural biology. We further introduce a statistical tool, the Cramér-Rao bound (CRB), to provide an information theoretic bound on the highest resolution one can hope to achieve when determining protein structure from noisy measurements using any unbiased estimator. Our simulation results show that when the RDC measurements are corrupted by Gaussian noise of realistic variance, both SOS based algorithms attain the CRB. We successfully apply our method in a divide-and-conquer fashion to determine the structure of ubiquitin from experimental NOE and RDC measurements obtained in two alignment media, achieving more accurate and faster reconstructions compared to the current state of the art.
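
    As context for the convex-relaxation step, the sketch below shows the standard SDP relaxation of the distance geometry problem mentioned above: squared distances are linear in a centered positive semidefinite Gram matrix G, and coordinates are recovered from its top eigenpairs. The trace objective and the tolerance handling are illustrative assumptions; the paper's SOS treatment of the RDC terms is not reproduced here.

      import numpy as np
      import cvxpy as cp

      def sdp_distance_geometry(n, restraints, tol=0.5):
          """Convex SDP relaxation of distance geometry: fit a centered PSD
          Gram matrix G to NOE-style (i, j, d) distance restraints, then read
          3-D coordinates off the top-3 eigenpairs of G."""
          G = cp.Variable((n, n), PSD=True)
          cons = [cp.sum(G) == 0]                      # remove translation
          for i, j, d in restraints:
              dij2 = G[i, i] + G[j, j] - 2 * G[i, j]   # ||x_i - x_j||^2, linear in G
              cons += [dij2 <= (d + tol)**2,
                       dij2 >= max(d - tol, 0.0)**2]
          cp.Problem(cp.Minimize(cp.trace(G)), cons).solve()
          w, V = np.linalg.eigh(G.value)
          return V[:, -3:] * np.sqrt(np.maximum(w[-3:], 0.0))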

  16. Heat Transfer Search Algorithm for Non-convex Economic Dispatch Problems

    NASA Astrophysics Data System (ADS)

    Hazra, Abhik; Das, Saborni; Basu, Mousumi

    2018-06-01

    This paper presents the Heat Transfer Search (HTS) algorithm for the non-linear economic dispatch problem. The HTS algorithm is based on the laws of thermodynamics and heat transfer. The proficiency of the suggested technique has been demonstrated on three different complicated economic dispatch problems with valve point effect; prohibited operating zones; and multiple fuels with valve point effect. Test results acquired from the suggested technique for the economic dispatch problem have been compared with those acquired from other established evolutionary techniques. It has been observed that the suggested HTS yields superior solutions.

  17. A Note on the Asymptotic Behavior of Nonlinear Semigroups and the Range of Accretive Operators.

    DTIC Science & Technology

    1981-04-01

    Fragmentary record text: related results were obtained by Crandall (see [2, p. 166]) and Pazy [10] in Hilbert space; for recent developments in Banach spaces see the papers by Kohlberg and Neyman [8, 9]. One result is essentially due to Kohlberg and Neyman [9], who use a different argument; they also show that if E is not reflexive and strictly convex (or if E* is...) [remainder truncated; acknowledgments omitted].

  18. Impact of trailing edge shape on the wake and propulsive performance of pitching panels

    NASA Astrophysics Data System (ADS)

    Van Buren, T.; Floryan, D.; Brunner, D.; Senturk, U.; Smits, A. J.

    2017-01-01

    The effects of changing the trailing edge shape on the wake and propulsive performance of a pitching rigid panel are examined experimentally. The panel aspect ratio is AR = 1, and the trailing edges are symmetric chevron shapes with convex and concave orientations of varying degree. Concave trailing edges delay the natural vortex bending and compression of the wake, and the mean streamwise velocity field contains a single jet. Conversely, convex trailing edges promote wake compression and produce a quadfurcated wake with four jets. As the trailing edge shape changes from the most concave to the most convex, the thrust and efficiency increase significantly.

  19. Rapid Generation of Optimal Asteroid Powered Descent Trajectories Via Convex Optimization

    NASA Technical Reports Server (NTRS)

    Pinson, Robin; Lu, Ping

    2015-01-01

    This paper investigates a convex-optimization-based method that can rapidly generate the fuel-optimal asteroid powered descent trajectory. The ultimate goal is to autonomously design the optimal powered descent trajectory on board the spacecraft immediately prior to the descent burn. Compared to a planetary powered landing problem, the major difficulty is the complex gravity field near the surface of an asteroid, which cannot be approximated by a constant gravity field. This paper uses relaxation techniques and a successive solution process that seeks the solution to the original nonlinear, nonconvex problem through the solutions to a sequence of convex optimal control problems.
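
    A minimal sketch of the successive-solution idea: the nonconvex gravity term is evaluated along the previous iterate's trajectory, so each pass solves a convex optimal control problem (double-integrator dynamics with a convex thrust cap). The discretization, dynamics, and the grav callable are illustrative assumptions, not the paper's exact relaxation.

      import numpy as np
      import cvxpy as cp

      def descent_trajectory(r0, v0, rf, grav, N=40, dt=2.0, u_max=1.0, iters=5):
          """Fuel-style objective sum ||u_k||*dt; gravity is frozen along the
          previous pass so that every subproblem is convex."""
          r_prev = np.linspace(r0, rf, N + 1)          # straight-line initial guess
          for _ in range(iters):
              r = cp.Variable((N + 1, 3))
              v = cp.Variable((N + 1, 3))
              u = cp.Variable((N, 3))
              cons = [r[0] == r0, v[0] == v0, r[N] == rf, v[N] == 0]
              for k in range(N):
                  gk = grav(r_prev[k])                 # gravity from previous pass
                  cons += [v[k + 1] == v[k] + dt * (u[k] + gk),
                           r[k + 1] == r[k] + dt * v[k],
                           cp.norm(u[k]) <= u_max]     # convex thrust bound
              cp.Problem(cp.Minimize(dt * cp.sum(cp.norm(u, axis=1))), cons).solve()
              r_prev = r.value
          return r_prev, u.value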

  1. Comparison of thawing and freezing dark energy parametrizations

    NASA Astrophysics Data System (ADS)

    Pantazis, G.; Nesseris, S.; Perivolaropoulos, L.

    2016-05-01

    Dark energy equation of state w(z) parametrizations with two parameters and given monotonicity are generically either convex or concave functions. This makes them suitable for fitting either freezing or thawing quintessence models, but not both simultaneously. Fitting a data set based on a freezing model with an unsuitable (concave when increasing) w(z) parametrization [like Chevallier-Polarski-Linder (CPL)] can lead to significant misleading features, like crossing of the phantom divide line, incorrect w(z = 0), incorrect slope, etc., that are not present in the underlying cosmological model. To demonstrate this fact we generate scattered cosmological data both at the level of w(z) and of the luminosity distance D_L(z), based on either thawing or freezing quintessence models, and fit them using parametrizations of convex and of concave type. We then compare statistically significant features of the best fit w(z) with actual features of the underlying model. We thus verify that the use of unsuitable parametrizations can lead to misleading conclusions. In order to avoid these problems it is important to either use both convex and concave parametrizations and select the one with the best χ², or use principal component analysis, thus splitting the redshift range into independent bins. In the latter case, however, significant information about the slope of w(z) at high redshifts is lost. Finally, we propose a new family of parametrizations w(z) = w0 + wa (z/(1+z))^n, which generalizes the CPL and interpolates between thawing and freezing parametrizations as the parameter n increases to values larger than 1.
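
    For concreteness, the proposed family is simple to evaluate; the sketch below tabulates w(z) for a few values of n (with arbitrary illustrative parameter values), n = 1 recovering CPL.

      import numpy as np

      def w(z, w0=-0.9, wa=0.3, n=1):
          """Generalized parametrization w(z) = w0 + wa*(z/(1+z))**n."""
          return w0 + wa * (z / (1.0 + z))**n

      z = np.linspace(0.0, 3.0, 7)
      for n in (1, 2, 4):        # larger n moves from thawing- to freezing-like
          print(n, np.round(w(z, n=n), 3))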

  2. Estimation of Saxophone Control Parameters by Convex Optimization.

    PubMed

    Wang, Cheng-I; Smyth, Tamara; Lipton, Zachary C

    2014-12-01

    In this work, an approach to jointly estimating the tone hole configuration (fingering) and reed model parameters of a saxophone is presented. The problem is not merely one of estimating pitch, as one applied fingering can be used to produce several different pitches by bugling or overblowing. Nor can a fingering be estimated solely from the spectral envelope of the produced sound (as it might be for estimation of vocal tract shape in speech), since one fingering can produce markedly different spectral envelopes depending on the player's embouchure and control of the reed. The problem is therefore addressed by jointly estimating both the reed (source) parameters and the fingering (filter) of a saxophone model using convex optimization and 1) a bank of filter frequency responses derived from measurement of the saxophone configured with all possible fingerings, and 2) sample recordings of notes produced using all possible fingerings, played with different overblowing, dynamics and timbre. The saxophone model couples one of several possible frequency response pairs (corresponding to the applied fingering) and a quasi-static reed model generating input pressure at the mouthpiece, with the control parameters being blowing pressure and reed stiffness. The applied fingering and reed parameters are estimated for a given recording by formalizing a minimization problem, where the cost function is the error between the recording and the synthesized sound produced by the model having incremental parameter values for blowing pressure and reed stiffness. The minimization problem is nonlinear and not differentiable and is made solvable using convex optimization. The fingering identification achieves better accuracy than previously reported values.

  3. Fast Algorithms for Designing Unimodular Waveform(s) With Good Correlation Properties

    NASA Astrophysics Data System (ADS)

    Li, Yongzhe; Vorobyov, Sergiy A.

    2018-03-01

    In this paper, we develop new fast and efficient algorithms for designing single/multiple unimodular waveforms/codes with good auto- and cross-correlation or weighted correlation properties, which are highly desired in radar and communication systems. The waveform design is based on the minimization of the integrated sidelobe level (ISL) and weighted ISL (WISL) of waveforms. As the corresponding optimization problems can quickly grow to large scale as the code length and the number of waveforms increase, the main issue turns out to be the development of fast large-scale optimization techniques. A further difficulty is that the corresponding optimization problems are non-convex, but the required accuracy is high. Therefore, we formulate the ISL and WISL minimization problems as non-convex quartic optimization problems in the frequency domain, and then simplify them into quadratic problems by utilizing the majorization-minimization technique, which is one of the basic techniques for addressing large-scale and/or non-convex optimization problems. While designing our fast algorithms, we identify and use inherent algebraic structures in the objective functions to rewrite them into quartic forms, and in the case of WISL minimization, to derive additionally an alternative quartic form which allows us to apply the quartic-quadratic transformation. Our algorithms are applicable to large-scale unimodular waveform design problems, as they are proved to have lower or comparable computational burden (analyzed theoretically) and faster convergence speed (confirmed by comprehensive simulations) than the state-of-the-art algorithms. In addition, the waveforms designed by our algorithms demonstrate better correlation properties compared to their counterparts.
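
    The design metric itself is cheap to evaluate: the sketch below computes the ISL of a unimodular code from its aperiodic autocorrelations via a zero-padded FFT, and compares a random-phase code against a classical low-autocorrelation P4 polyphase code. The code length is an arbitrary illustration.

      import numpy as np

      def isl(x):
          """Integrated sidelobe level: total energy in the nonzero-lag
          aperiodic autocorrelations (both positive and negative lags)."""
          N = len(x)
          X = np.fft.fft(x, 2 * N)                 # zero-pad for aperiodic lags
          r = np.fft.ifft(np.abs(X)**2)[:N]        # r[k] for k = 0 .. N-1
          return 2 * np.sum(np.abs(r[1:])**2)

      N = 64
      n = np.arange(N)
      random_code = np.exp(2j * np.pi * np.random.rand(N))
      p4_code = np.exp(1j * np.pi * n * (n - N) / N)
      print(isl(random_code), isl(p4_code))        # P4 is markedly lower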

  4. On the polarizability dyadics of electrically small, convex objects

    NASA Astrophysics Data System (ADS)

    Lakhtakia, Akhlesh

    1993-11-01

    This communication on the polarizability dyadics of electrically small objects of convex shapes has been prompted by a recent paper published by Sihvola and Lindell on the polarizability dyadic of an electrically gyrotropic sphere. A mini-review of recent work on polarizability dyadics is appended.

  5. Development of Analysis Tools for Certification of Flight Control Laws

    DTIC Science & Technology

    2009-03-31

    Fragmentary record text: citation fragments referencing LMI-based computation of optimal quadratic Lyapunov functions (Chesi, Garulli, Tesi, and Vicino) and the book Convex Optimization (Cambridge Univ. Press).

  6. A minimization method on the basis of embedding the feasible set and the epigraph

    NASA Astrophysics Data System (ADS)

    Zabotin, I. Ya; Shulgina, O. N.; Yarullin, R. S.

    2016-11-01

    We propose a method for the conditional minimization of a convex nonsmooth function, belonging to the class of cutting-plane methods. While constructing the iteration points, the feasible set and the epigraph of the objective function are approximated by polyhedral sets. Consequently, the auxiliary problems for constructing the iteration points are linear programming problems. During the optimization process, the sets which approximate the epigraph can be updated; these updates are performed by periodically dropping the cutting planes which form the embedding sets. Convergence of the proposed method is proved, and some realizations of the method are discussed.
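
    A minimal sketch of the cutting-plane template in Kelley's classical form: the epigraph is approximated by the accumulated cuts, and each iteration point solves a small linear program. The box feasible set and the toy objective are illustrative assumptions; the paper's cut-dropping updates are not reproduced.

      import numpy as np
      from scipy.optimize import linprog

      def cutting_plane(f, subgrad, x0, box, iters=50):
          """min f(x) over a box, f convex and possibly nonsmooth: each cut
          t >= f(xk) + g.(x - xk) is a linear constraint on (x, t)."""
          d = len(x0)
          cuts = []                                  # pairs (g, f(xk) - g.xk)
          xk = np.asarray(x0, float)
          for _ in range(iters):
              g = subgrad(xk)
              cuts.append((g, f(xk) - g @ xk))
              c = np.zeros(d + 1)
              c[-1] = 1.0                            # minimize epigraph height t
              A = np.array([np.append(gi, -1.0) for gi, _ in cuts])
              b = np.array([-bi for _, bi in cuts])
              res = linprog(c, A_ub=A, b_ub=b, bounds=box + [(None, None)])
              xk = res.x[:d]
          return xk, f(xk)

      f = lambda x: max(abs(x[0]), abs(x[1]) + 0.5)  # minimum value is 0.5
      def subgrad(x):
          if abs(x[0]) >= abs(x[1]) + 0.5:
              return np.array([1.0 if x[0] >= 0 else -1.0, 0.0])
          return np.array([0.0, 1.0 if x[1] >= 0 else -1.0])
      print(cutting_plane(f, subgrad, [2.0, 2.0], [(-3, 3), (-3, 3)]))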

  7. More memory under evolutionary learning may lead to chaos

    NASA Astrophysics Data System (ADS)

    Diks, Cees; Hommes, Cars; Zeppini, Paolo

    2013-02-01

    We show that an increase of memory of past strategy performance in a simple agent-based innovation model, with agents switching between costly innovation and cheap imitation, can be quantitatively stabilising while at the same time qualitatively destabilising. As memory in the fitness measure increases, the amplitude of price fluctuations decreases, but at the same time a bifurcation route to chaos may arise. The core mechanism leading to the chaotic behaviour in this model with strategy switching is that the map obtained for the system with memory is a convex combination of an increasing linear function and a decreasing non-linear function.

  8. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Burke, J.V.

    The published work on exact penalization is indeed vast. Recently this work has indicated an intimate relationship between exact penalization, Lagrange multipliers, and problem stability or calmness. In the present work we chronicle this development within a simple idealized problem framework, wherein we unify, extend, and refine much of the known theory. In particular, most of the foundations for constrained optimization are developed with the aid of exact penalization techniques. Our approach is highly geometric and is based upon the elementary subdifferential theory for distance functions. It is assumed that the reader is familiar with the theory of convex sets and functions. 54 refs.

  9. Lateral facial profile may reveal the risk for sleep disordered breathing in children--the PANIC-study.

    PubMed

    Ikävalko, Tiina; Närhi, Matti; Lakka, Timo; Myllykangas, Riitta; Tuomilehto, Henri; Vierola, Anu; Pahkala, Riitta

    2015-01-01

    To evaluate lateral view photography of the face as a tool for assessing morphological properties (i.e., facial convexity) as a risk factor for sleep disordered breathing (SDB) in children, and to test how reliably oral health and non-oral healthcare professionals can visually discern the lateral profile of the face from the photographs. The present study sample consisted of 382 children 6-8 years of age who were participants in the Physical Activity and Nutrition in Children (PANIC) Study. Sleep was assessed by a sleep questionnaire administered by the parents. SDB was defined as apnoeas, frequent or loud snoring, or nocturnal mouth breathing observed by the parents. Facial convexity was assessed with three different methods. First, it was clinically evaluated by the reference orthodontist (T.I.). Second, lateral view photographs were taken to visually sub-divide the facial profile into convex, normal or concave. The photos were examined by the reference orthodontist and seven different healthcare professionals who work with children, and also by a dental student. The inter- and intra-examiner consistencies were calculated by Kappa statistics. Three soft tissue landmarks of the facial profile, soft tissue Glabella (G'), Subnasale (Sn) and soft tissue Pogonion (Pg'), were digitally identified to analyze the convexity of the face, and the intra-examiner reproducibility of the reference orthodontist was determined by calculating intra-class correlation coefficients (ICCs). The third way to express the convexity of the face was to calculate the angle of facial convexity (G'-Sn-Pg') and to group it into quintiles. For analysis, the lowest quintile (≤164.2°) was set to represent the most convex facial profile. The prevalence of SDB in children with the most convex profiles, expressed by the lowest quintile of the angle G'-Sn-Pg' (≤164.2°), was almost 2-fold (14.5%) compared with those with normal profiles (8.1%) (p = 0.084). The inter-examiner Kappa values between the reference orthodontist and the other examiners for visually assessing the facial profile from the photographs ranged from poor to moderate (0.000-0.579). The best Kappa values were achieved between the two orthodontists (0.579). The intra-examiner Kappa value of the reference orthodontist for assessing the profiles was 0.920, with an agreement of 93.3%. The ICC and its 95% CI between the reference orthodontist's two digital measurements of the angle of facial convexity (G'-Sn-Pg') were 0.980 and 0.951-0.992, respectively. In addition to orthodontists, it would be advantageous if other healthcare professionals could also play a key role in identifying certain risk features for SDB. However, the present results indicate that, in order to recognize the morphological risk for SDB, one would need to be trained for the purpose and would need sufficient knowledge of the growth and development of the face.

  10. Functionalized patchy particles using colloidal lenses

    NASA Astrophysics Data System (ADS)

    Middleton, Christine

    2014-03-01

    Colloidal assembly has been limited by the isotropic, nonspecific nature of interactions between spherical colloidal particles. By giving particles patches functionalized with single stranded DNA, these interactions can be made both directional and specific. We create patchy particles by adding patches to spherical emulsion droplets using the depletion interaction. First we make polystyrene particles in the shape of contact lenses to serve as the patches. The lenses are functionalized with single stranded DNA on their convex side. Then we put the lenses on the surface of oil emulsion droplets using the depletion interaction, creating a patch (or multiple patches) on the surface of each emulsion droplet. The emulsion droplets can now interact with each other in a specific, directional way through the DNA functionalized patches.

  11. Linear Controller Design: Limits of Performance

    DTIC Science & Technology

    1991-01-01

    Fragmentary record text: one recoverable passage concerns where a sensor should be placed, e.g., where an accelerometer is to be positioned on an aircraft or where a strain gauge is placed along a beam. Table-of-contents fragments list chapters on special algorithms for convex optimization, notation and problem definitions, and cutting-plane algorithms.

  12. Unified halo-independent formalism from convex hulls for direct dark matter searches

    NASA Astrophysics Data System (ADS)

    Gelmini, Graciela B.; Huh, Ji-Haeng; Witte, Samuel J.

    2017-12-01

    Using the Fenchel-Eggleston theorem for convex hulls (an extension of the Caratheodory theorem), we prove that any likelihood can be maximized by either (1) a dark matter speed distribution F(v) in Earth's frame or (2) a Galactic velocity distribution f_gal(u), consisting of a sum of delta functions. The former case applies only to time-averaged rate measurements, and the maximum number of delta functions is (N - 1), where N is the total number of data entries. The second case applies to any harmonic expansion coefficient of the time-dependent rate, and the maximum number of terms is N. Using time-averaged rates, the aforementioned form of F(v) results in a piecewise constant unmodulated halo function η̃⁰_BF(v_min) (which is an integral of the speed distribution) with at most (N - 1) downward steps. The authors had previously proven this result for likelihoods comprised of at least one extended likelihood, and found the best-fit halo function to be unique. This uniqueness, however, cannot be guaranteed in the more general analysis applied to arbitrary likelihoods. Thus we introduce a method for determining whether there exists a unique best-fit halo function, and provide a procedure for constructing either a pointwise confidence band, if the best-fit halo function is unique, or a degeneracy band, if it is not. Using measurements of modulation amplitudes, the aforementioned form of f_gal(u), which is a sum of Galactic streams, yields a periodic time-dependent halo function η̃_BF(v_min, t) which at any fixed time is a piecewise constant function of v_min with at most N downward steps. In this case, we explain how to construct pointwise confidence and degeneracy bands from the time-averaged halo function. Finally, we show that requiring an isotropic Galactic velocity distribution leads to a Galactic speed distribution F(u) that is once again a sum of delta functions, and produces a time-dependent halo function η̃_BF(v_min, t) (and a time-averaged η̃⁰_BF(v_min)) that is piecewise linear, differing significantly from best-fit halo functions obtained without the assumption of isotropy.

  13. New Insights into the adsorption of aurocyanide ion on activated carbon surface: electron microscopy analysis and computational studies using fullerene-like models.

    PubMed

    Yin, Chun-Yang; Ng, Man-Fai; Saunders, Martin; Goh, Bee-Min; Senanayake, Gamini; Sherwood, Ashley; Hampton, Marc

    2014-07-08

    Despite decades of concerted experimental studies dedicated to providing fundamental insights into the adsorption of aurocyanide ion, Au(CN)2(-), on activated carbon (AC) surface, such a mechanism is still poorly understood and remains a contentious issue. This adsorption process is an essential unit operation for extracting gold from ores using carbon-in-pulp (CIP) technology. We hereby attempt to shed more light on the subject by employing a range of transmission electron microscopy (TEM) associated techniques. Gold-based clusters on the AC surface are observed by Z-contrast scanning TEM imaging and energy-filtered TEM element mapping and are supported by X-ray microanalysis. Density functional theory (DFT) calculations are applied to investigate this adsorption process for the first time. Fullerene-like models incorporating convex, concave, or planar structure which mimic the eclectic porous structures on the AC surface are adopted. Pentagonal, hexagonal, and heptagonal arrangements of carbon rings are duly considered in the DFT study. By determining the favored adsorption sites in water environment, a general adsorption trend of Au(CN)2(-) adsorbed on AC surface is revealed whereby concave > convex ≈ planar. The results suggest a tendency for Au(CN)2(-) ion to adsorb on the carbon sheet defects or edges rather than on the basal plane. In addition, we show that the adsorption energy of Au(CN)2(-) is approximately 5 times higher than that of OH(-) in the alkaline environment (in negative ion form), compared to only about 2 times in acidic environment (in protonated form), indicating the Au extraction process is much favored in basic condition. The overall simulation results resolve certain ambiguities about the adsorption process for earlier studies. Our findings afford crucial information which could assist in enhancing our fundamental understanding of the CIP adsorption process.

  14. A continuous-wave, widely tunable, intra-cavity, singly resonant, magnesium-doped, periodically poled lithium niobate optical parametric oscillator

    NASA Astrophysics Data System (ADS)

    Li, Z. P.; Duan, Y. M.; Wu, K. R.; Zhang, G.; Zhu, H. Y.; Wang, X. L.; Chen, Y. H.; Xue, Z. Q.; Lin, Q.; Song, G. C.; Su, H.

    2013-05-01

    We report a continuous-wave (CW), intra-cavity singly resonant optical parametric oscillator (OPO), based on periodically poled MgO:LiNbO3 pumped by a diode-end-pumped CW Nd:YVO4 laser, and calculate the gain of optical parametric amplification as a function of the pump beam waist (at 1064 nm) in the singly resonant OPO (SRO) cavity, in order to balance mode matching against intensity and so maximize the gain of the signal wave during SRO operation. In order to achieve maximum gain, we use a convex lens to limit the 1064 nm beam waist. In the experiment, a tunable signal output from 1492 to 1614 nm and an idler output from 3122 to 3709 nm are obtained. For an 808 nm pump power of 11.5 W, a maximum signal output power of up to 2.48 W at 1586 nm and an idler output power of 1.1 W at 3232 nm are achieved, with a total optical-to-optical conversion efficiency of 31%.

  15. Hessian Schatten-norm regularization for linear inverse problems.

    PubMed

    Lefkimmiatis, Stamatios; Ward, John Paul; Unser, Michael

    2013-05-01

    We introduce a novel family of invariant, convex, and non-quadratic functionals that we employ to derive regularized solutions of ill-posed linear inverse imaging problems. The proposed regularizers involve the Schatten norms of the Hessian matrix, which are computed at every pixel of the image. They can be viewed as second-order extensions of the popular total-variation (TV) semi-norm since they satisfy the same invariance properties. Meanwhile, by taking advantage of second-order derivatives, they avoid the staircase effect, a common artifact of TV-based reconstructions, and perform well for a wide range of applications. To solve the corresponding optimization problems, we propose an algorithm that is based on a primal-dual formulation. A fundamental ingredient of this algorithm is the projection of matrices onto Schatten norm balls of arbitrary radius. This operation is performed efficiently based on a direct link we provide between vector projections onto lq norm balls and matrix projections onto Schatten norm balls. Finally, we demonstrate the effectiveness of the proposed methods through experimental results on several inverse imaging problems with real and simulated data.
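
    The projection step mentioned above has a compact form for the Schatten-1 (nuclear) norm ball: project the singular values onto an l1 ball and rebuild the matrix, which is precisely the vector-to-matrix link described. The sketch below uses the standard sorting-based l1-ball projection; the ball radius is an illustrative parameter.

      import numpy as np

      def project_l1_ball(v, radius):
          """Euclidean projection of a nonnegative vector onto the l1 ball."""
          if v.sum() <= radius:
              return v
          u = np.sort(v)[::-1]
          css = np.cumsum(u)
          j = np.arange(1, len(u) + 1)
          rho = np.nonzero(u - (css - radius) / j > 0)[0][-1]
          theta = (css[rho] - radius) / (rho + 1.0)
          return np.maximum(v - theta, 0.0)

      def project_schatten1_ball(M, radius):
          """Project a matrix onto the Schatten-1 norm ball via its SVD."""
          U, s, Vt = np.linalg.svd(M, full_matrices=False)
          return U @ np.diag(project_l1_ball(s, radius)) @ Vt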

  16. Accelerated perturbation-resilient block-iterative projection methods with application to image reconstruction

    PubMed Central

    Nikazad, T; Davidi, R; Herman, G. T.

    2013-01-01

    We study the convergence of a class of accelerated perturbation-resilient block-iterative projection methods for solving systems of linear equations. We prove convergence to a fixed point of an operator even in the presence of summable perturbations of the iterates, irrespective of the consistency of the linear system. For a consistent system, the limit point is a solution of the system. In the inconsistent case, the symmetric version of our method converges to a weighted least squares solution. Perturbation resilience is utilized to approximate the minimum of a convex functional subject to the equations. A main contribution, as compared to previously published approaches to achieving similar aims, is a more than an order of magnitude speed-up, as demonstrated by applying the methods to problems of image reconstruction from projections. In addition, the accelerated algorithms are illustrated to be better, in a strict sense provided by the method of statistical hypothesis testing, than their unaccelerated versions for the task of detecting small tumors in the brain from X-ray CT projection data. PMID:23440911
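
    For orientation, a minimal sketch of an unaccelerated simultaneous (block-iterative) projection iteration of this general family, in Cimmino form: each sweep averages the projections of the iterate onto all hyperplanes a_i.x = b_i. The relaxation parameter is an illustrative assumption; the paper's perturbation-resilient acceleration is not reproduced.

      import numpy as np

      def cimmino(A, b, iters=500, relax=1.0):
          """Simultaneous projection iteration; for inconsistent systems the
          iterates approach a weighted least-squares solution."""
          m, n = A.shape
          x = np.zeros(n)
          row_norms = np.sum(A**2, axis=1)
          for _ in range(iters):
              resid = (A @ x - b) / row_norms
              x = x - relax * (A.T @ resid) / m    # average of the m projections
          return x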

  17. Relating Lexicographic Smoothness and Directed Subdifferentiability

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Khan, Kamil A.

    2016-06-03

    Lexicographic derivatives developed by Nesterov and directed subdifferentials developed by Baier, Farkhi, and Roshchina are both essentially nonconvex generalized derivatives for nonsmooth nonconvex functions and satisfy strict calculus rules and mean-value theorems. This article aims to clarify the relationship between the two generalized derivatives. In particular, for scalar-valued functions that are locally Lipschitz continuous, lexicographic smoothness and directed subdifferentiability are shown to be equivalent, along with the necessary optimality conditions corresponding to each. For such functions, the visualization of the directed subdifferential (the Rubinov subdifferential) is shown to include the lexicographic subdifferential, and is also shown to be included in its closed convex hull. Various implications of these results are discussed.

  18. A new corporoplasty based on stratified structure of tunica albuginea for the treatment of congenital penile curvature - long-term results.

    PubMed

    Perdzyński, Wojciech; Adamek, Marek

    2015-01-01

    The aim of the study was to report the long-term results of treating patients with congenital penile curvature (CPC) with a new corporoplasty based on the stratified structure of the tunica albuginea, in which the corporal bodies are not opened. From October 2006 to September 2013, the authors operated on 111 adult men with CPC. Ventral curvature was detected in 65 patients, lateral in 34, and dorsal in 12. The skin was incised longitudinally on the convex surface of the curvature. In ventral curvature, the dorsal neuro-vascular bundles (NVBs) were separated from the tunica albuginea and elliptical fragments of the external (longitudinal) layer of the tunica were excised. The tunica was sutured with absorbable sutures, which invaginated the internal (transversal) layer of the tunica. In dorsal curvature, excisions were performed on both sides of the urethra; in lateral curvature, on the convex penile surface. The follow-up period was 12 to 84 months. The penis was completely straight in 109 of 111 patients. In 2 patients (1.8%), recurrent curvature of up to 20 degrees was detected. Redo surgery was done in one individual (0.9%) at the patient's request. Loss of glandular sensation or erectile dysfunction was not detected in any patient during the period of observation. A new operation for the correction of CPC, which consists of excision of an elliptical fragment of the external layer of the tunica albuginea and plication of the internal layer, gives good short- and long-term results. Surgery done without penetrating the corpora cavernosa is minimally invasive, which diminishes the potential risk of complications, especially intra- and postoperative bleeding.

  19. Estimation and Selection via Absolute Penalized Convex Minimization And Its Multistage Adaptive Applications

    PubMed Central

    Huang, Jian; Zhang, Cun-Hui

    2013-01-01

    The ℓ1-penalized method, or the Lasso, has emerged as an important tool for the analysis of large data sets. Many important results have been obtained for the Lasso in linear regression which have led to a deeper understanding of high-dimensional statistical problems. In this article, we consider a class of weighted ℓ1-penalized estimators for convex loss functions of a general form, including the generalized linear models. We study the estimation, prediction, selection and sparsity properties of the weighted ℓ1-penalized estimator in sparse, high-dimensional settings where the number of predictors p can be much larger than the sample size n. Adaptive Lasso is considered as a special case. A multistage method is developed to approximate concave regularized estimation by applying an adaptive Lasso recursively. We provide prediction and estimation oracle inequalities for single- and multi-stage estimators, a general selection consistency theorem, and an upper bound for the dimension of the Lasso estimator. Important models including the linear regression, logistic regression and log-linear models are used throughout to illustrate the applications of the general results. PMID:24348100
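
    A minimal sketch of the multistage idea for the linear model: each stage solves a weighted l1-penalized least-squares problem whose weights come from the previous stage's coefficients, approximating a concave penalty by recursive adaptive Lasso. Absorbing the weights into the design, as below, is a standard reduction; the penalty level and weight rule are illustrative assumptions.

      import numpy as np
      from sklearn.linear_model import Lasso

      def multistage_adaptive_lasso(X, y, lam=0.1, stages=3, eps=1e-6):
          """Recursive adaptive Lasso: reweight the l1 penalty at each stage."""
          w = np.ones(X.shape[1])
          for _ in range(stages):
              fit = Lasso(alpha=lam).fit(X / w, y)   # weighted Lasso via rescaling
              beta = fit.coef_ / w                   # back to the original scale
              w = 1.0 / (np.abs(beta) + eps)         # penalize small coefs more
          return beta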

  1. Ternary alloy material prediction using genetic algorithm and cluster expansion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Chong

    2015-12-01

    This thesis summarizes our study on the crystal structure prediction of the Fe-V-Si system using a genetic algorithm and cluster expansion. Our goal is to explore and look for new stable compounds. We started from the ten currently known experimental phases, and calculated formation energies of those compounds using a density functional theory (DFT) package, namely, VASP. The convex hull was generated based on the DFT calculations of the experimentally known phases. Then we did a random search on some metal-rich (Fe and V) compositions and found that the lowest-energy structures were body-centered cubic (bcc) underlying lattices, on which we did our systematic computational searches using the genetic algorithm and cluster expansion. Among hundreds of the searched compositions, thirteen were selected and DFT formation energies were obtained by VASP. The stability checking of those thirteen compounds was done in reference to the experimental convex hull. We found that the composition 24-8-16, i.e., Fe3VSi2, is a new stable phase, which can inspire future experiments.

  2. Algorithms for Maneuvering Spacecraft Around Small Bodies

    NASA Technical Reports Server (NTRS)

    Acikmese, A. Bechet; Bayard, David

    2006-01-01

    A document describes mathematical derivations and applications of autonomous guidance algorithms for maneuvering spacecraft in the vicinities of small astronomical bodies like comets or asteroids. These algorithms compute fuel- or energy-optimal trajectories for typical maneuvers by solving the associated optimal-control problems with relevant control and state constraints. In the derivations, these problems are converted from their original continuous (infinite-dimensional) forms to finite-dimensional forms through (1) discretization of the time axis and (2) spectral discretization of control inputs via a finite number of Chebyshev basis functions. In these doubly discretized problems, the Chebyshev coefficients are the variables. These problems are, variously, either convex programming problems or programming problems that can be convexified. The resulting discrete problems are convex parameter-optimization problems; this is desirable because one can take advantage of very efficient and robust algorithms that have been developed previously and are well established for solving such problems. These algorithms are fast, do not require initial guesses, and always converge to global optima. Following the derivations, the algorithms are demonstrated by applying them to numerical examples of flyby, descent-to-hover, and ascent-from-hover maneuvers.
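    The flavor of this convexification can be sketched on a toy descent-to-hover problem. The sketch below uses plain time discretization of a double integrator with CVXPY rather than the Chebyshev spectral basis of the flight algorithms, and all dynamics, bounds, and boundary conditions are illustrative assumptions.

        import numpy as np
        import cvxpy as cp

        N, dt = 50, 0.5                          # horizon: 25 s in 50 steps
        g = np.array([0.0, -1.62])               # assumed constant gravity
        r = cp.Variable((2, N + 1))              # position
        v = cp.Variable((2, N + 1))              # velocity
        u = cp.Variable((2, N))                  # thrust acceleration

        cons = [r[:, 0] == np.array([40.0, 100.0]),
                v[:, 0] == np.array([-5.0, -10.0]),
                r[:, N] == np.zeros(2), v[:, N] == np.zeros(2),
                cp.norm(u, axis=0) <= 3.0]       # thrust magnitude bound
        for k in range(N):                       # forward-Euler dynamics
            cons += [r[:, k + 1] == r[:, k] + dt * v[:, k],
                     v[:, k + 1] == v[:, k] + dt * (u[:, k] + g)]

        # Fuel-optimal objective: minimize the summed thrust magnitudes.
        prob = cp.Problem(cp.Minimize(cp.sum(cp.norm(u, axis=0))), cons)
        prob.solve()
        print(prob.status, prob.value)

    Because the discretized problem is a second-order cone program, the solver converges to a global optimum without an initial guess, which is the property the document emphasizes.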

  3. Distributed least-squares estimation of a remote chemical source via convex combination in wireless sensor networks.

    PubMed

    Cao, Meng-Li; Meng, Qing-Hao; Zeng, Ming; Sun, Biao; Li, Wei; Ding, Cheng-Jun

    2014-06-27

    This paper investigates the problem of locating a continuous chemical source using the concentration measurements provided by a wireless sensor network (WSN). Such a problem exists in various applications: eliminating explosives or drugs, detecting the leakage of noxious chemicals, etc. The limited power and bandwidth of WSNs have motivated collaborative in-network processing, which is the focus of this paper. We propose a novel distributed least-squares estimation (DLSE) method to solve the chemical source localization (CSL) problem using a WSN. The DLSE method is realized by iteratively conducting convex combination of the locally estimated chemical source locations in a distributed manner. Performance assessments of our method are conducted using both simulations and real experiments. In the experiments, we propose a fitting method to identify both the release rate and the eddy diffusivity. The results show that the proposed DLSE method can overcome the negative interference of local minima and saddle points of the objective function, which would hinder the convergence of local search methods, especially in the case of locating a remote chemical source.
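    A minimal sketch of the convex-combination consensus step, assuming a toy ring network and noisy local estimates standing in for each node's local least-squares fit (this is a generic consensus illustration, not the authors' DLSE method itself):

        import numpy as np

        rng = np.random.default_rng(1)
        true_src = np.array([3.0, 4.0])
        n_nodes = 6

        # Each node's noisy local estimate of the source location.
        est = true_src + 0.8 * rng.standard_normal((n_nodes, 2))

        # Ring topology: each node communicates with its two neighbors.
        A = np.zeros((n_nodes, n_nodes))
        for i in range(n_nodes):
            A[i, i] = A[i, (i - 1) % n_nodes] = A[i, (i + 1) % n_nodes] = 1.0
        W = A / A.sum(axis=1, keepdims=True)   # row-stochastic weights

        # Iteratively replace each estimate by a convex combination
        # of its neighbors' estimates.
        for _ in range(50):
            est = W @ est

        print("consensus estimate:", est[0])   # all rows agree in the limit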

  4. PILA: Sub-Meter Localization Using CSI from Commodity Wi-Fi Devices

    PubMed Central

    Tian, Zengshan; Li, Ze; Zhou, Mu; Jin, Yue; Wu, Zipeng

    2016-01-01

    The aim of this paper is to present a new indoor localization approach by employing the Angle-of-arrival (AOA) and Received Signal Strength (RSS) measurements in a Wi-Fi network. To achieve this goal, we first collect the Channel State Information (CSI) by using commodity Wi-Fi devices with our designed three antennas to estimate the AOA of the Wi-Fi signal. Second, we propose a direct path identification algorithm to obtain the direct signal path for the sake of reducing the interference of the multipath effect on the AOA estimation. Third, we construct a new objective function to solve the localization problem by integrating the AOA and RSS information. Although the localization problem is non-convex, we use the Second-order Cone Programming (SOCP) relaxation approach to transform it into a convex problem. Finally, the effectiveness of our approach is verified based on the prototype implementation by using commodity Wi-Fi devices. The experimental results show that our approach can achieve a median error of 0.7 m in an actual indoor environment. PMID:27735879

  5. PILA: Sub-Meter Localization Using CSI from Commodity Wi-Fi Devices.

    PubMed

    Tian, Zengshan; Li, Ze; Zhou, Mu; Jin, Yue; Wu, Zipeng

    2016-10-10

    The aim of this paper is to present a new indoor localization approach by employing the Angle-of-arrival (AOA) and Received Signal Strength (RSS) measurements in a Wi-Fi network. To achieve this goal, we first collect the Channel State Information (CSI) by using commodity Wi-Fi devices with our designed three antennas to estimate the AOA of the Wi-Fi signal. Second, we propose a direct path identification algorithm to obtain the direct signal path for the sake of reducing the interference of the multipath effect on the AOA estimation. Third, we construct a new objective function to solve the localization problem by integrating the AOA and RSS information. Although the localization problem is non-convex, we use the Second-order Cone Programming (SOCP) relaxation approach to transform it into a convex problem. Finally, the effectiveness of our approach is verified based on the prototype implementation by using commodity Wi-Fi devices. The experimental results show that our approach can achieve a median error of 0.7 m in an actual indoor environment.

  6. A convex optimization method for self-organization in dynamic (FSO/RF) wireless networks

    NASA Astrophysics Data System (ADS)

    Llorca, Jaime; Davis, Christopher C.; Milner, Stuart D.

    2008-08-01

    Next generation communication networks are becoming increasingly complex systems. Previously, we presented a novel physics-based approach to model dynamic wireless networks as physical systems which react to local forces exerted on network nodes. We showed that under clear atmospheric conditions the network communication energy can be modeled as the potential energy of an analogous spring system and presented a distributed mobility control algorithm where nodes react to local forces driving the network to energy minimizing configurations. This paper extends our previous work by including the effects of atmospheric attenuation and transmitted power constraints in the optimization problem. We show how our new formulation still results in a convex energy minimization problem. Accordingly, an updated force-driven mobility control algorithm is presented. Forces on mobile backbone nodes are computed as the negative gradient of the new energy function. Results show how in the presence of atmospheric obscuration stronger forces are exerted on network nodes that make them move closer to each other, avoiding loss of connectivity. We show results in terms of network coverage and backbone connectivity and compare the developed algorithms for different scenarios.

  7. Research on allocation efficiency of the daisy chain allocation algorithm

    NASA Astrophysics Data System (ADS)

    Shi, Jingping; Zhang, Weiguo

    2013-03-01

    With the improvement of aircraft performance in reliability, maneuverability, and survivability, the number of control effectors has increased considerably. How to distribute the three-axis moments among the control surfaces reasonably becomes an important problem. The daisy chain method is simple and easy to implement in the design of the allocation system, but it cannot solve the allocation problem over the entire attainable moment subset. For the lateral-directional allocation problem, the allocation efficiency of the daisy chain can be directly measured by the area of its subset of attainable moments. Because of the non-linear allocation characteristic, the subset of attainable moments of the daisy-chain method is a complex non-convex polygon, which is difficult to compute directly. By analyzing the two-dimensional allocation problems with a "micro-element" idea, a numerical calculation algorithm is proposed to compute the area of the non-convex polygon. In order to improve the allocation efficiency, a genetic algorithm with the allocation efficiency chosen as the fitness function is proposed to find the best pseudo-inverse matrix.
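    For a simple polygon with ordered boundary vertices, convex or not, the area can also be computed directly by the shoelace formula; the sketch below is a generic illustration of such an area computation, not the paper's micro-element algorithm.

        import numpy as np

        def polygon_area(vertices):
            # Shoelace formula for a simple polygon (convex or non-convex);
            # vertices is an (n, 2) array of boundary points in traversal order.
            x, y = np.asarray(vertices, dtype=float).T
            return 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))

        # A simple non-convex polygon (notch at the top); prints 6.0.
        pts = [(0, 0), (4, 0), (4, 2), (2, 1), (0, 2)]
        print(polygon_area(pts))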

  8. Twisted trees and inconsistency of tree estimation when gaps are treated as missing data - The impact of model mis-specification in distance corrections.

    PubMed

    McTavish, Emily Jane; Steel, Mike; Holder, Mark T

    2015-12-01

    Statistically consistent estimation of phylogenetic trees or gene trees is possible if pairwise sequence dissimilarities can be converted to a set of distances that are proportional to the true evolutionary distances. Susko et al. (2004) reported some strikingly broad results about the forms of inconsistency in tree estimation that can arise if corrected distances are not proportional to the true distances. They showed that if the corrected distance is a concave function of the true distance, then inconsistency due to long branch attraction will occur. If these functions are convex, then two "long branch repulsion" trees will be preferred over the true tree - though these two incorrect trees are expected to be tied as the preferred tree. Here we extend their results, and demonstrate the existence of a tree shape (which we refer to as a "twisted Farris-zone" tree) for which a single incorrect tree topology will be guaranteed to be preferred if the corrected distance function is convex. We also report that the standard practice of treating gaps in sequence alignments as missing data is sufficient to produce non-linear corrected distance functions if the substitution process is not independent of the insertion/deletion process. Taken together, these results imply inconsistent tree inference under mild conditions. For example, if some positions in a sequence are constrained to be free of substitutions and insertion/deletion events while the remaining sites evolve with independent substitutions and insertion/deletion events, then the distances obtained by treating gaps as missing data can support an incorrect tree topology even given an unlimited amount of data. Copyright © 2015 Elsevier Inc. All rights reserved.

  9. 78 FR 68833 - Combined Notice of Filings #1

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-11-15

    ... DEPARTMENT OF ENERGY Federal Energy Regulatory Commission Combined Notice of Filings 1 Take notice... Wallingford--CONVEX Services CL&P Electric Rate Schedule FERC No. 583 to be effective 1/1/2014. Filed Date: 11... Company submits CMEEC--CONVEX Services First Revised Rate Schedule FERC No. 576 to be effective 1/1/2014...

  10. Convexities move because they contain matter.

    PubMed

    Barenholtz, Elan

    2010-09-22

    Figure-ground assignment to a contour is a fundamental stage in visual processing. The current paper introduces a novel, highly general dynamic cue to figure-ground assignment: "Convex Motion." Across six experiments, subjects showed a strong preference to assign figure and ground to a dynamically deforming contour such that the moving contour segment was convex rather than concave. Experiments 1 and 2 established the preference across two different kinds of deformational motion. Additional experiments determined that this preference was not due to fixation (Experiment 3) or attentional mechanisms (Experiment 4). Experiment 5 found a similar, but reduced, bias for rigid, as opposed to deformational, motion, and Experiment 6 demonstrated that the phenomenon depends on the global motion of the affected contour. An explanation of this phenomenon is presented on the basis of typical natural deformational motion, which tends to involve convex contour projections that contain regions consisting of physical "matter," as opposed to concave contour indentations that contain empty space. These results highlight the fundamental relationship between figure and ground, perceived shape, and the inferred physical properties of an object.

  11. A distributed approach to the OPF problem

    NASA Astrophysics Data System (ADS)

    Erseghe, Tomaso

    2015-12-01

    This paper presents a distributed approach to optimal power flow (OPF) in an electrical network, suitable for application in a future smart grid scenario where access to resources and control is decentralized. The non-convex OPF problem is solved by an augmented Lagrangian method, similar to the widely known ADMM algorithm, with the key distinction that the penalty parameters are constantly increased. A (weak) assumption on local solver reliability is required to always ensure convergence, and a certificate of convergence to a local optimum is available in the case of bounded penalty parameters. For moderately sized networks (up to 300 nodes, and even in the presence of a severe partition of the network), the approach guarantees performance very close to the optimum, with an appreciably fast convergence speed. The generality of the approach makes it applicable to any (convex or non-convex) distributed optimization problem in networked form. In comparison with the literature, which is mostly focused on convex SDP approximations, the chosen approach guarantees adherence to the reference problem, and it also requires a smaller local computational effort.
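    The core mechanism, an augmented Lagrangian whose penalty parameter is constantly increased between local solves, can be sketched on a toy equality-constrained problem; this is a generic method-of-multipliers illustration using SciPy as the local solver, not the paper's OPF implementation.

        import numpy as np
        from scipy.optimize import minimize

        # Toy problem: minimize f(x) subject to h(x) = 0.
        f = lambda x: (x[0] - 1.0) ** 2 + (x[1] + 2.0) ** 2
        h = lambda x: x[0] + x[1]

        lam, rho, x = 0.0, 1.0, np.zeros(2)
        for _ in range(20):
            # Local solver minimizes the augmented Lagrangian at fixed (lam, rho).
            aug = lambda z: f(z) + lam * h(z) + 0.5 * rho * h(z) ** 2
            x = minimize(aug, x).x
            lam += rho * h(x)              # multiplier update
            rho *= 1.5                     # penalty constantly increased

        print("x* =", x, "constraint residual =", h(x))   # ~[1.5, -1.5], ~0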

  12. Area-Preserving Mappings for the Visualization of Medical Structures

    DTIC Science & Technology

    2003-01-01

    L. Zhu, S. Haker, and A. Tannenbaum (Brigham and Women's Hospital, Harvard Medical School, Boston, MA 02115). In this note, we present a method for flattening anatomical surfaces. The difficulty is more pronounced when we wish to construct a flattened representation for a multi-branched surface. Here we take another approach, in which the flattening map is obtained as the gradient of a convex function w, i.e., ũ = ∇w; a section of the report is devoted to finding the minimizer.

  13. Global solutions to the equation of thermoelasticity with fading memory

    NASA Astrophysics Data System (ADS)

    Okada, Mari; Kawashima, Shuichi

    2017-07-01

    We consider the initial-history value problem for the one-dimensional equation of thermoelasticity with fading memory. It is proved that if the data are smooth and small, then a unique smooth solution exists globally in time and converges to the constant equilibrium state as time goes to infinity. Our proof is based on a technical energy method which makes use of the strict convexity of the entropy function and the properties of strongly positive definite kernels.

  14. Explicit optimization of plan quality measures in intensity-modulated radiation therapy treatment planning.

    PubMed

    Engberg, Lovisa; Forsgren, Anders; Eriksson, Kjell; Hårdemark, Björn

    2017-06-01

    To formulate convex planning objectives of treatment plan multicriteria optimization with explicit relationships to the dose-volume histogram (DVH) statistics used in plan quality evaluation. Conventional planning objectives are designed to minimize the violation of DVH statistics thresholds using penalty functions. Although successful in guiding the DVH curve towards these thresholds, conventional planning objectives offer limited control of the individual points on the DVH curve (doses-at-volume) used to evaluate plan quality. In this study, we abandon the usual penalty-function framework and propose planning objectives that more closely relate to DVH statistics. The proposed planning objectives are based on mean-tail-dose, resulting in convex optimization. We also demonstrate how to adapt a standard optimization method to the proposed formulation in order to obtain a substantial reduction in computational cost. We investigated the potential of the proposed planning objectives as tools for optimizing DVH statistics through juxtaposition with the conventional planning objectives on two patient cases. Sets of treatment plans with differently balanced planning objectives were generated using either the proposed or the conventional approach. Dominance in the sense of better distributed doses-at-volume was observed in plans optimized within the proposed framework. The initial computational study indicates that the DVH statistics are better optimized and more efficiently balanced using the proposed planning objectives than using the conventional approach. © 2017 American Association of Physicists in Medicine.
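    Mean-tail-dose, the quantity underlying the proposed objectives, is the mean dose in the hottest (or coldest) fraction of a structure's voxels, and it admits the standard convex tail-mean (CVaR-style) formulation. The sketch below is an illustrative CVXPY fragment, assuming a random dose-influence matrix and a crude coverage constraint, not the authors' planning system:

        import numpy as np
        import cvxpy as cp

        rng = np.random.default_rng(2)
        n_vox, n_beam = 400, 30
        D = rng.random((n_vox, n_beam))        # assumed dose-influence matrix
        x = cp.Variable(n_beam, nonneg=True)   # beamlet weights
        dose = D @ x

        alpha = 0.10                           # hottest 10% of voxels
        t = cp.Variable()                      # tail threshold variable
        # Upper mean-tail-dose: t plus the scaled mean excess above t.
        mtd = t + cp.sum(cp.pos(dose - t)) / (alpha * n_vox)

        prob = cp.Problem(cp.Minimize(mtd),
                          [cp.sum(dose) / n_vox >= 5.0])   # coverage stand-in
        prob.solve()
        print("mean-tail-dose bound:", mtd.value)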

  15. Projections onto Convex Sets Super-Resolution Reconstruction Based on Point Spread Function Estimation of Low-Resolution Remote Sensing Images

    PubMed Central

    Fan, Chong; Wu, Chaoyun; Li, Grand; Ma, Jun

    2017-01-01

    To solve the problem of inaccuracy when estimating the point spread function (PSF) of the ideal original image in traditional projection onto convex sets (POCS) super-resolution (SR) reconstruction, this paper presents an improved POCS SR algorithm based on PSF estimation of low-resolution (LR) remote sensing images. The proposed algorithm can improve the spatial resolution of the image and benefit agricultural crop visual interpretation. The PSF of the high-resolution (HR) image is unknown in reality. Therefore, analysis of the relationship between the PSF of the HR image and the PSF of the LR image is important to estimate the PSF of the HR image by using multiple LR images. In this study, the linear relationship between the PSFs of the HR and LR images can be proven. In addition, the novel slant knife-edge method is employed, which can improve the accuracy of the PSF estimation of LR images. Finally, the proposed method is applied to reconstruct airborne digital sensor 40 (ADS40) three-line array images and the overlapped areas of two adjacent GF-2 images by embedding the estimated PSF of the HR image into the original POCS SR algorithm. Experimental results show that the proposed method yields higher quality of reconstructed images than that produced by the blind SR method and the bicubic interpolation method. PMID:28208837
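    The POCS machinery itself amounts to cyclically projecting the current estimate onto each convex constraint set. The toy sketch below alternates projections onto a box and an affine data-consistency set, rather than the paper's remote-sensing constraints:

        import numpy as np

        def proj_box(x):                        # convex set 1: pixel range [0, 1]
            return np.clip(x, 0.0, 1.0)

        a, b = np.array([1.0, 2.0, -1.0]), 0.5  # convex set 2: {x : a @ x = b}
        def proj_affine(x):
            return x - (a @ x - b) / (a @ a) * a

        x = np.array([2.0, -1.0, 3.0])          # initial estimate
        for _ in range(100):                    # POCS: cycle through projections
            x = proj_box(proj_affine(x))

        print(x, "residual:", a @ x - b)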

  16. Hyperopt: a Python library for model selection and hyperparameter optimization

    NASA Astrophysics Data System (ADS)

    Bergstra, James; Komer, Brent; Eliasmith, Chris; Yamins, Dan; Cox, David D.

    2015-01-01

    Sequential model-based optimization (also known as Bayesian optimization) is one of the most efficient methods (per function evaluation) of function minimization. This efficiency makes it appropriate for optimizing the hyperparameters of machine learning algorithms that are slow to train. The Hyperopt library provides algorithms and parallelization infrastructure for performing hyperparameter optimization (model selection) in Python. This paper presents an introductory tutorial on the usage of the Hyperopt library, including the description of search spaces, minimization (in serial and parallel), and the analysis of the results collected in the course of minimization. This paper also gives an overview of Hyperopt-Sklearn, a software project that provides automatic algorithm configuration of the Scikit-learn machine learning library. Following Auto-Weka, we take the view that the choice of classifier and even the choice of preprocessing module can be taken together to represent a single large hyperparameter optimization problem. We use Hyperopt to define a search space that encompasses many standard components (e.g. SVM, RF, KNN, PCA, TFIDF) and common patterns of composing them together. We demonstrate, using search algorithms in Hyperopt and standard benchmarking data sets (MNIST, 20-newsgroups, convex shapes), that searching this space is practical and effective. In particular, we improve on best-known scores for the model space for both MNIST and convex shapes. The paper closes with some discussion of ongoing and future work.
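    Hyperopt's entry point is fmin, which minimizes a user-supplied objective over a declared search space; a minimal, self-contained usage example (with a toy objective rather than one of the paper's benchmarks) looks like this:

        from hyperopt import fmin, tpe, hp, STATUS_OK

        def objective(params):
            # Toy loss with its minimum at x = 1, y = -2.
            x, y = params["x"], params["y"]
            return {"loss": (x - 1.0) ** 2 + (y + 2.0) ** 2, "status": STATUS_OK}

        space = {
            "x": hp.uniform("x", -5.0, 5.0),
            "y": hp.uniform("y", -5.0, 5.0),
        }

        best = fmin(fn=objective, space=space, algo=tpe.suggest, max_evals=100)
        print(best)   # approximately {'x': 1.0, 'y': -2.0}

    Swapping algo=tpe.suggest for hyperopt.rand.suggest gives random search over the same space, which is how different search algorithms are compared in the paper.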

  17. Detection of longitudinal ulcer using roughness value for computer aided diagnosis of Crohn's disease

    NASA Astrophysics Data System (ADS)

    Oda, Masahiro; Kitasaka, Takayuki; Furukawa, Kazuhiro; Watanabe, Osamu; Ando, Takafumi; Goto, Hidemi; Mori, Kensaku

    2011-03-01

    The purpose of this paper is to present a new method to detect ulcers, which are one of the symptoms of Crohn's disease, from CT images. Crohn's disease is an inflammatory disease of the digestive tract that commonly affects the small intestine. An optical or a capsule endoscope is used for small intestine examinations; however, these endoscopes cannot pass through intestinal stenosis parts in some cases. A CT image based diagnosis allows a physician to observe the whole intestine even if intestinal stenosis exists. However, because of the complicated shapes of the small and large intestines, understanding the shapes of the intestines and the lesion positions is difficult in CT image based diagnosis. A computer-aided diagnosis system for Crohn's disease with automated lesion detection is required for efficient diagnosis. We propose an automated method to detect ulcers from CT images. Longitudinal ulcers roughen the surface of the small and large intestinal wall; the rough surface consists of a combination of convex and concave parts on the intestinal wall. We detect convex and concave parts on the intestinal wall by blob and inverse-blob structure enhancement filters. Many convex and concave parts concentrate on the roughened parts. We introduce a roughness value to differentiate convex and concave parts concentrated on the roughened parts from the others on the intestinal wall. The roughness value effectively reduces false positives in ulcer detection. Experimental results showed that the proposed method can detect convex and concave parts on the ulcers.

  18. “Soft that molds the hard:” Geometric morphometry of lateral atlantoaxial joints focusing on the role of cartilage in changing the contour of bony articular surfaces

    PubMed Central

    Prasad, Prashant Kumar; Salunke, Pravin; Sahni, Daisy; Kalra, Parveen

    2017-01-01

    Purpose: The existing literature on lateral atlantoaxial joints is predominantly on bony facets and is unable to explain various C1-2 motions observed. Geometric morphometry of facets would help us in understanding the role of cartilages in C1-2 biomechanics/kinematics. Objective: Anthropometric measurements (bone and cartilage) of the atlantoaxial joint and to assess the role of cartilages in joint biomechanics. Materials and Methods: The authors studied 10 cadaveric atlantoaxial lateral joints with the articular cartilage in situ and after removing it, using three-dimensional laser scanner. The data were compared using geometric morphometry with emphasis on surface contours of articulating surfaces. Results: The bony inferior articular facet of atlas is concave in both sagittal and coronal plane. The bony superior articular facet of axis is convex in sagittal plane and is concave (laterally) and convex medially in the coronal plane. The bony articulating surfaces were nonconcordant. The articular cartilages of both C1 and C2 are biconvex in both planes and are thicker than the concavities of bony articulating surfaces. Conclusion: The biconvex structure of cartilage converts the surface morphology of C1-C2 bony facets from concave on concavo-convex to convex on convex. This reduces the contact point making the six degrees of freedom of motion possible and also makes the joint gyroscopic. PMID:29403249

  19. Operational Resource Theory of Coherence.

    PubMed

    Winter, Andreas; Yang, Dong

    2016-03-25

    We establish an operational theory of coherence (or of superposition) in quantum systems, by focusing on the optimal rate of performance of certain tasks. Namely, we introduce the two basic concepts, "coherence distillation" and "coherence cost," in the processing of quantum states under so-called incoherent operations [Baumgratz, Cramer, and Plenio, Phys. Rev. Lett. 113, 140401 (2014)]. We then show that, in the asymptotic limit of many copies of a state, both are given by simple single-letter formulas: the distillable coherence is given by the relative entropy of coherence (in other words, we give the relative entropy of coherence its operational interpretation), and the coherence cost by the coherence of formation, which is an optimization over convex decompositions of the state. An immediate corollary is that there exists no bound coherent state in the sense that one would need to consume coherence to create the state, but no coherence could be distilled from it. Further, we demonstrate that the coherence theory is generically an irreversible theory by a simple criterion that completely characterizes all reversible states.

  20. Effect of dental arch convexity and type of archwire on frictional forces.

    PubMed

    Fourie, Zacharias; Ozcan, Mutlu; Sandham, Andrew

    2009-07-01

    Friction measurements in orthodontics are often derived from models using brackets placed on flat models with various straight wires, yet dental arches are convex in some areas. The objectives of this study were to compare the frictional forces generated in conventional flat and convex dental arch setups, and to evaluate the effect of different archwires on friction in both dental arch models. Two stainless steel models were designed and manufactured simulating flat and convex maxillary right buccal dental arches. Five stainless steel brackets from the maxillary incisor to the second premolar (slot size, 0.022 in, Victory, 3M Unitek, Monrovia, Calif) and a first molar tube were aligned and clamped on the metal model at equal distances of 6 mm. Four kinds of orthodontic wires were tested: (1) A. J. Wilcock Australian wire (0.016 in, G&H Wire, Hannover, Germany); and (2) 0.016 x 0.022 in, (3) 0.018 x 0.022 in, and (4) 0.019 x 0.025 in (3M Unitek GmbH, Seefeld, Germany). Gray elastomeric modules (Power O 110, Ormco, Glendora, Calif) were used for ligation. Friction tests were performed in the wet state with artificial saliva lubrication and by pulling 5 mm of the whole length of the archwire. Six measurements were made from each bracket-wire combination, and each test was performed with new combinations of materials for both arch setups (n = 48, 6 per group) in a universal testing machine (crosshead speed: 20 mm/min). Significant effects of arch model (P = 0.0000) and wire types (P = 0.0000) were found. The interaction term between the tested factors was not significant (P = 0.1581) (2-way ANOVA and Tukey test). Convex models resulted in significantly higher frictional forces (1015-1653 g) than flat models (680-1270 g) (P <0.05). In the flat model, significantly lower frictional forces were obtained with wire types 1 (679 g) and 3 (1010 g) than with types 2 (1146 g) and 4 (1270 g) (P <0.05). In the convex model, the lowest friction was obtained with wire types 1 (1015 g) and 3 (1142 g) (P >0.05). Type 1 wire tended to create the least overall friction in both flat and convex dental arch simulation models.

  1. Posterior convex release and interbody fusion for thoracic scoliosis: technical note.

    PubMed

    Mac-Thiong, Jean-Marc; Asghar, Jahangir; Parent, Stefan; Shufflebarger, Harry L; Samdani, Amer; Labelle, Hubert

    2016-09-01

    Anterior release and fusion is sometimes required in pediatric patients with thoracic scoliosis. Typically, a formal anterior approach is performed through open thoracotomy or video-assisted thoracoscopic surgery. The authors recently developed a technique for anterior release and fusion in thoracic scoliosis referred to as "posterior convex release and interbody fusion" (PCRIF). This technique is performed via the posterior-only approach typically used for posterior instrumentation and fusion and thus avoids a formal anterior approach. In this article the authors describe the technique and its use in 9 patients: to prevent a crankshaft phenomenon in 3 patients and to optimize the correction in 6 patients with a severe thoracic curve showing poor reducibility. After Ponte osteotomies at the levels requiring anterior release and fusion, intervertebral discs are approached from the convex side of the scoliosis. The annulus on the convex side of the scoliosis is incised from the lateral border of the pedicle to the lateral annulus while visualizing and protecting the pleura and spinal cord. The annulus in contact with the pleura and the anterior longitudinal ligament are removed before completing the discectomies and preparing the endplates. The PCRIF was performed at 3 levels in 4 patients and at 4 levels in 5 patients. Mean correction of the main thoracic curve, blood loss, and length of stay were 74.9%, 1290 ml, and 7.6 days, respectively. No neurological deficit, implant failure, or pseudarthrosis was observed at the last follow-up. Two patients had pleural effusion postoperatively, with 1 of them requiring placement of a chest tube. One patient had pulmonary edema secondary to fluid overload, while another patient underwent reoperation for a deep wound infection 3 weeks after the initial surgery. The technique is primarily indicated in skeletally immature patients with open triradiate cartilage and/or severe scoliosis. It can be particularly useful if there is significant vertebral rotation because access to the disc and anterior longitudinal ligament from the convex side will become safer. The PCRIF is an alternative to the formal anterior approach and does not require repositioning between the anterior and posterior stages, which prolongs the surgery and can be associated with an increased complication rate. The procedure can be done in the presence of preexisting pulmonary morbidity such as pleural adhesions and decreased pulmonary function because it does not require mobilization of the lung or single-lung ventilation. However, PCRIF can still be associated with pulmonary complications such as a pleural effusion, and care should be taken to avoid iatrogenic injury to the pleura. Placement of a deep wound drain at the level of the PCRIF is strongly recommended if postoperative bleeding is anticipated, to decrease the risk of pleural effusion.

  2. Feedback-Based Projected-Gradient Method for Real-Time Optimization of Aggregations of Energy Resources

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dall-Anese, Emiliano; Bernstein, Andrey; Simonetto, Andrea

    This paper develops an online optimization method to maximize operational objectives of distribution-level distributed energy resources (DERs), while adjusting the aggregate power generated (or consumed) in response to services requested by grid operators. The design of the online algorithm is based on a projected-gradient method, suitably modified to accommodate appropriate measurements from the distribution network and the DERs. By virtue of this approach, the resultant algorithm can cope with inaccuracies in the representation of the AC power flows, it avoids pervasive metering to gather the state of noncontrollable resources, and it naturally lends itself to a distributed implementation. Optimality claims are established in terms of tracking of the solution of a well-posed time-varying convex optimization problem.

  3. Comparative analysis of Pareto surfaces in multi-criteria IMRT planning

    NASA Astrophysics Data System (ADS)

    Teichert, K.; Süss, P.; Serna, J. I.; Monz, M.; Küfer, K. H.; Thieke, C.

    2011-06-01

    In the multi-criteria optimization approach to IMRT planning, a given dose distribution is evaluated by a number of convex objective functions that measure tumor coverage and sparing of the different organs at risk. Within this context optimizing the intensity profiles for any fixed set of beams yields a convex Pareto set in the objective space. However, if the number of beam directions and irradiation angles are included as free parameters in the formulation of the optimization problem, the resulting Pareto set becomes more intricate. In this work, a method is presented that allows for the comparison of two convex Pareto sets emerging from two distinct beam configuration choices. For the two competing beam settings, the non-dominated and the dominated points of the corresponding Pareto sets are identified and the distance between the two sets in the objective space is calculated and subsequently plotted. The obtained information enables the planner to decide if, for a given compromise, the current beam setup is optimal. He may then re-adjust his choice accordingly during navigation. The method is applied to an artificial case and two clinical head-and-neck cases. In all cases no configuration is dominating its competitor over the whole Pareto set. For example, in one of the head-and-neck cases a seven-beam configuration turns out to be superior to a nine-beam configuration if the highest priority is the sparing of the spinal cord. The presented method of comparing Pareto sets is not restricted to comparing different beam angle configurations, but will allow for more comprehensive comparisons of competing treatment techniques (e.g. photons versus protons) than with the classical method of comparing single treatment plans.

  4. Growth Behavior, Geometrical Shape, and Second CMC of Micelles Formed by Cationic Gemini Esterquat Surfactants.

    PubMed

    Bergström, L Magnus; Tehrani-Bagha, Alireza; Nagy, Gergely

    2015-04-28

    Micelles formed by novel gemini esterquat surfactants have been investigated with small-angle neutron scattering (SANS). The growth behavior of the micelles is found to differ conspicuously depending on the length of the gemini surfactant spacer group. The gemini surfactant with a long spacer forms rather small triaxial ellipsoidal tablet-shaped micelles that grow weakly with surfactant concentration in the entire range of measured concentrations. Geminis with a short spacer, on the other hand, form weakly growing oblates or tablets at low concentrations that start to grow much more strongly into polydisperse rodlike or wormlike micelles at higher concentrations. The latter behavior is consistent with the presence of a second CMC that marks the transition from the weakly to the strongly growing regime. It is found that the growth behavior in terms of aggregation number as a function of surfactant concentration always appears concave in weakly growing regimes, while switching to convex behavior in strongly growing regimes. As a result, we are able to determine the second CMC of the geminis with a short spacer by means of suggesting a rather precise definition of it, located at the point of inflection of the growth curve that corresponds to the transition from concave to convex growth behavior. Our SANS results are rationalized by comparison with the recently developed general micelle model. In particular, this theory is able to explain and reproduce the characteristic appearances of the experimental growth curves, including the presence of a second CMC and the convex strongly growing regime beyond. By means of optimizing the agreement between predictions from the general micelle model and results from SANS experiments, we are able to determine the three bending elasticity constants: spontaneous curvature, bending rigidity, and saddle-splay constant for each surfactant.

  5. Acute effects of spinal bracing on scapular kinematics in adolescent idiopathic scoliosis.

    PubMed

    Gur, Gozde; Turgut, Elif; Ayhan, Cigdem; Baltaci, Gul; Yakut, Yavuz

    2017-08-01

    Bracing is the most common nonsurgical treatment for adolescent idiopathic scoliosis. Spinal braces affect glenohumeral and scapulothoracic motion because they restrict trunk movements. However, the potential spinal-bracing effects on scapular kinematics are unknown. The present study aimed to investigate the acute effects of spinal bracing on scapular kinematics in adolescent idiopathic scoliosis. Scapular kinematics, including scapular internal/external rotation, posterior/anterior tilting, and downward/upward rotation during scapular plane elevation, were evaluated in 27 in-brace and out-of-brace adolescent idiopathic scoliosis patients with a three-dimensional electromagnetic tracking system. Data on the position and orientation of the scapula at 30°, 60°, 90°, and 120° humerothoracic elevation were used for statistical comparisons. The paired t-test was used to assess the differences between the mean values of in-brace and out-of-brace conditions. The in-brace condition showed significantly increased (P<0.05) scapular anterior tilting and decreased internal rotation in the resting position on the convex and concave sides; increased scapular downward rotation at 120° humerothoracic elevation on the convex side and at 30°, 60°, 90°, and 120° humerothoracic elevation on the concave side; increased scapular anterior tilt at 30°, 60°, 90°, and 120° humerothoracic elevation on the convex and concave sides; and decreased (P<0.05) maximal humerothoracic elevation of the arm. Spinal bracing affects scapular kinematics. Observed changes in scapular kinematics with brace may also affect upper extremity function for adolescents with idiopathic scoliosis. Therefore, clinicians should include assessments of the glenohumeral and scapulothoracic joints when designing rehabilitation protocols for patients with adolescent idiopathic scoliosis. Copyright © 2017 Elsevier Ltd. All rights reserved.

  6. Reflections From a Fresnel Lens

    ERIC Educational Resources Information Center

    Keeports, David

    2005-01-01

    Reflection of light by a convex Fresnel lens gives rise to two distinct images. A highly convex inverted real reflective image forms on the object side of the lens, while an upright virtual reflective image forms on the opposite side of the lens. I describe here a set of laser experiments performed upon a Fresnel lens. These experiments provide…

  7. Influence of crucible support and radial heating on the interface shape during vertical Bridgman GaAs growth

    NASA Astrophysics Data System (ADS)

    Koai, K.; Sonnenberg, K.; Wenzl, H.

    1994-03-01

    Crucible assembly in a vertical Bridgman furnace is investigated by a numerical finite element model with the aim to obtain convex interfaces during the growth of GaAs crystals. During the growth stage of the conic section, a new funnel shaped crucible support has been found more effective than the concentric cylinders design similar to that patented by AT & T in promoting interface convexity. For the growth stages of the constant diameter section, the furnace profile can be effectively modulated by localized radial heating at the gradient zone. With these two features being introduced into a new furnace design, it is shown numerically that enhancement of interface convexity can be achieved using the presently available crucible materials.

  8. [Objective accommodation parameters depending on accommodation task].

    PubMed

    Tarutta, E P; Tarasova, N A; Dolzhenko, O O

    2011-01-01

    Sixty-two myopic patients were examined to study objective accommodation parameters under different conditions of accommodation stimulus presentation (use of convex lenses). The objective accommodation response (OAR) was studied using a binocular open-field autorefractometer under different conditions of stimulus presentation: complete myopia correction and the addition of convex lenses of increasing power from +1.0 to +3.0 D. In 88.5% of children and adolescents, a significant decrease of OAR by 1.5-2.75 D was observed for a 3.0 D stimulus. Additional correction with convex lenses of increasing power further reduces the accommodation response. As a result, the induced dynamic refraction in the eye-lens system is lower than the accommodation task. Only the addition of a +2.5 D lens approximates it to the required index of -3.0 D.

  9. Laser backscattering analytical model of Doppler power spectra about rotating convex quadric bodies of revolution

    NASA Astrophysics Data System (ADS)

    Gong, YanJun; Wu, ZhenSen; Wang, MingJun; Cao, YunHua

    2010-01-01

    We propose an analytical model of Doppler power spectra in backscatter from arbitrary rough convex quadric bodies of revolution (whose lateral surface is a quadric) rotating around their axes. In the global Cartesian coordinate system, the analytical model deduced is suitable for a general convex quadric body of revolution. Based on this analytical model, the Doppler power spectra of cones, cylinders, paraboloids of revolution, and sphere-cone combinations are derived. We analyze numerically the influence of the geometric parameters, aspect angle, wavelength, and reflectance of the rough surface of the objects on the spectra broadened by the Doppler effect. This analytical solution may contribute to laser Doppler velocimetry and to remote sensing of ballistic missiles that spin.

  10. Safe Onboard Guidance and Control Under Probabilistic Uncertainty

    NASA Technical Reports Server (NTRS)

    Blackmore, Lars James

    2011-01-01

    An algorithm was developed that determines the fuel-optimal spacecraft guidance trajectory that takes into account uncertainty, in order to guarantee that mission safety constraints are satisfied with the required probability. The algorithm uses convex optimization to solve for the optimal trajectory. Convex optimization is amenable to onboard solution due to its excellent convergence properties. The algorithm is novel because, unlike prior approaches, it does not require time-consuming evaluation of multivariate probability densities. Instead, it uses a new mathematical bounding approach to ensure that probability constraints are satisfied, and it is shown that the resulting optimization is convex. Empirical results show that the approach is many orders of magnitude less conservative than existing set conversion techniques, for a small penalty in computation time.

  11. Uniform magnetic fields in density-functional theory

    NASA Astrophysics Data System (ADS)

    Tellgren, Erik I.; Laestadius, Andre; Helgaker, Trygve; Kvaal, Simen; Teale, Andrew M.

    2018-01-01

    We construct a density-functional formalism adapted to uniform external magnetic fields that is intermediate between conventional density functional theory and Current-Density Functional Theory (CDFT). In the intermediate theory, which we term linear vector potential-DFT (LDFT), the basic variables are the density, the canonical momentum, and the paramagnetic contribution to the magnetic moment. Both a constrained-search formulation and a convex formulation in terms of Legendre-Fenchel transformations are constructed. Many theoretical issues in CDFT find simplified analogs in LDFT. We prove results concerning N-representability, Hohenberg-Kohn-like mappings, existence of minimizers in the constrained-search expression, and a restricted analog to gauge invariance. The issue of additivity of the energy over non-interacting subsystems, which is qualitatively different in LDFT and CDFT, is also discussed.

  12. On the constrained minimization of smooth Kurdyka—Łojasiewicz functions with the scaled gradient projection method

    NASA Astrophysics Data System (ADS)

    Prato, Marco; Bonettini, Silvia; Loris, Ignace; Porta, Federica; Rebegoldi, Simone

    2016-10-01

    The scaled gradient projection (SGP) method is a first-order optimization method applicable to the constrained minimization of smooth functions and exploiting a scaling matrix multiplying the gradient and a variable steplength parameter to improve the convergence of the scheme. For a general nonconvex function, the limit points of the sequence generated by SGP have been proved to be stationary, while in the convex case and with some restrictions on the choice of the scaling matrix the sequence itself converges to a constrained minimum point. In this paper we extend these convergence results by showing that the SGP sequence converges to a limit point provided that the objective function satisfies the Kurdyka-Łojasiewicz property at each point of its domain and its gradient is Lipschitz continuous.
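    One SGP iteration is x_{k+1} = P_Omega(x_k - alpha_k D_k grad f(x_k)): a gradient step scaled by D_k, followed by projection onto the feasible set. The sketch below applies the scheme to a box-constrained quadratic with a fixed diagonal scaling and steplength, a simplification of the variable rules analyzed in the paper:

        import numpy as np

        # f(x) = 0.5 x'Ax - b'x on the box 0 <= x <= 1.
        A = np.array([[3.0, 0.5], [0.5, 1.0]])
        b = np.array([1.0, 2.0])
        grad = lambda x: A @ x - b
        proj = lambda x: np.clip(x, 0.0, 1.0)     # projection onto the box

        D = np.diag(1.0 / np.diag(A))             # diagonal (Jacobi-like) scaling
        x, alpha = np.zeros(2), 0.9
        for _ in range(200):
            x = proj(x - alpha * D @ grad(x))     # scaled gradient projection step

        print("x* =", x)    # converges to about [0.167, 1.0]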

  13. Uniform magnetic fields in density-functional theory.

    PubMed

    Tellgren, Erik I; Laestadius, Andre; Helgaker, Trygve; Kvaal, Simen; Teale, Andrew M

    2018-01-14

    We construct a density-functional formalism adapted to uniform external magnetic fields that is intermediate between conventional density functional theory and Current-Density Functional Theory (CDFT). In the intermediate theory, which we term linear vector potential-DFT (LDFT), the basic variables are the density, the canonical momentum, and the paramagnetic contribution to the magnetic moment. Both a constrained-search formulation and a convex formulation in terms of Legendre-Fenchel transformations are constructed. Many theoretical issues in CDFT find simplified analogs in LDFT. We prove results concerning N-representability, Hohenberg-Kohn-like mappings, existence of minimizers in the constrained-search expression, and a restricted analog to gauge invariance. The issue of additivity of the energy over non-interacting subsystems, which is qualitatively different in LDFT and CDFT, is also discussed.

  14. Disturbance functions of the Goertler instability on an airfoil

    NASA Technical Reports Server (NTRS)

    Dagenhart, J. R.; Mangalam, S. M.

    1986-01-01

    Goertler vortices arise in boundary layers along concave surfaces due to centrifugal effects. This paper presents some results of an experiment conducted to study the development of these vortices on an airfoil with a pressure gradient in the concave region, where an attached laminar boundary layer was ensured with suction through a perforated panel. A sublimating chemical technique was used to visualize Goertler vortices and the velocity field was measured by laser velocimetry. Experimental disturbance functions are compared with those predicted by the linear stability theory. The trend of vortex amplification in the concave zone and damping in the following convex region is shown to essentially follow the theoretical predictions.

  15. Upper Bounds on the Expected Value of a Convex Function Using Gradient and Conjugate Function Information.

    DTIC Science & Technology

    1987-08-01

    Bounds are developed in terms of the absolute difference between the random variable and its mean. Gassmann and Ziemba [1986] provide a weaker bound that does not require this information. In comparing bounds, Gassmann and Ziemba [1986] extend an idea in which the bound is obtained as the solution of a linear program (see Gassmann and Ziemba [1986], Theorem 1).

  16. Compressive Sensing via Nonlocal Smoothed Rank Function

    PubMed Central

    Fan, Ya-Ru; Liu, Jun; Zhao, Xi-Le

    2016-01-01

    Compressive sensing (CS) theory asserts that we can reconstruct signals and images with only a small number of samples or measurements. Recent works exploiting the nonlocal similarity have led to better results in various CS studies. To better exploit the nonlocal similarity, in this paper, we propose a non-convex smoothed rank function based model for CS image reconstruction. We also propose an efficient alternating minimization method to solve the proposed model, which reduces a difficult and coupled problem to two tractable subproblems. Experimental results have shown that the proposed method performs better than several existing state-of-the-art CS methods for image reconstruction. PMID:27583683

  17. Using the Gilbert-Johnson-Keerthi Algorithm for Collision Detection in System Effectiveness Modeling

    DTIC Science & Technology

    2015-09-01

    The cross product is a vector whose i-th component is defined as (A×B)_i = ε_ijk A_j B_k, where ε_ijk is the Levi-Civita symbol, whose value is 1 if ijk is an even permutation. The report covers an overview of GJK, two examples of GJK operating, its termination conditions, the GJK algorithm itself, and simplex processing. A rubber band stretched around a point set snaps around the outermost points, forming the convex hull; a figure illustrates two example triangles and their Minkowski difference.
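    The ingredient that lets GJK work on the Minkowski difference without constructing it is the support function: the support point of A - B in direction d is the support of A in d minus the support of B in -d. A minimal sketch of this identity (illustrative code, not the report's implementation):

        import numpy as np

        def support(points, d):
            # Farthest vertex of a convex polytope (given by vertices) along d.
            points = np.asarray(points, dtype=float)
            return points[np.argmax(points @ d)]

        def support_minkowski_diff(A, B, d):
            # Support of A - B in direction d, without forming A - B explicitly.
            return support(A, d) - support(B, -d)

        # Two example triangles; GJK iterates over directions, collecting such
        # support points into a simplex and testing if it encloses the origin.
        A = [(0, 0), (2, 0), (1, 2)]
        B = [(3, 1), (5, 1), (4, 3)]
        print(support_minkowski_diff(A, B, np.array([1.0, 0.0])))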

  18. Work cost of thermal operations in quantum thermodynamics

    NASA Astrophysics Data System (ADS)

    Renes, Joseph M.

    2014-07-01

    Adopting a resource theory framework of thermodynamics for quantum and nano systems pioneered by Janzing et al. (Int. J. Th. Phys. 39, 2717 (2000)), we formulate the cost in the useful work of transforming one resource state into another as a linear program of convex optimization. This approach is based on the characterization of thermal quasiorder given by Janzing et al. and later by Horodecki and Oppenheim (Nat. Comm. 4, 2059 (2013)). Both characterizations are related to an extended version of majorization studied by Ruch, Schranner and Seligman under the name mixing distance (J. Chem. Phys. 69, 386 (1978)).

  19. Automated Laser Cutting In Three Dimensions

    NASA Technical Reports Server (NTRS)

    Bird, Lisa T.; Yvanovich, Mark A.; Angell, Terry R.; Bishop, Patricia J.; Dai, Weimin; Dobbs, Robert D.; He, Mingli; Minardi, Antonio; Shelton, Bret A.

    1995-01-01

    Computer-controlled machine-tool system uses laser beam assisted by directed flow of air to cut refractory materials into complex three-dimensional shapes. Velocity, position, and angle of cut varied. In original application, materials in question were thermally insulating thick blankets and tiles used on space shuttle. System shapes tile to concave or convex contours and cuts beveled edges on blanket, without cutting through outer layer of quartz fabric part of blanket. For safety, system entirely enclosed to prevent escape of laser energy. No dust generated during cutting operation - all material vaporized; larger solid chips dislodged from workpiece easily removed later.

  20. Combining Biomarkers Linearly and Nonlinearly for Classification Using the Area Under the ROC Curve

    PubMed Central

    Fong, Youyi; Yin, Shuxin; Huang, Ying

    2016-01-01

    In biomedical studies, it is often of interest to classify/predict a subject’s disease status based on a variety of biomarker measurements. A commonly used classification criterion is based on AUC - Area under the Receiver Operating Characteristic Curve. Many methods have been proposed to optimize approximated empirical AUC criteria, but there are two limitations to the existing methods. First, most methods are only designed to find the best linear combination of biomarkers, which may not perform well when there is strong nonlinearity in the data. Second, many existing linear combination methods use gradient-based algorithms to find the best marker combination, which often result in sub-optimal local solutions. In this paper, we address these two problems by proposing a new kernel-based AUC optimization method called Ramp AUC (RAUC). This method approximates the empirical AUC loss function with a ramp function, and finds the best combination by a difference of convex functions algorithm. We show that as a linear combination method, RAUC leads to a consistent and asymptotically normal estimator of the linear marker combination when the data is generated from a semiparametric generalized linear model, just as the Smoothed AUC method (SAUC). Through simulation studies and real data examples, we demonstrate that RAUC out-performs SAUC in finding the best linear marker combinations, and can successfully capture nonlinear pattern in the data to achieve better classification performance. We illustrate our method with a dataset from a recent HIV vaccine trial. PMID:27058981
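    The ramp surrogate replaces the 0-1 pairwise indicator inside the empirical AUC with a clipped hinge over all positive-negative score differences. The sketch below scores a linear combination this way on simulated data and tunes it with a derivative-free optimizer; the authors instead use a difference-of-convex-functions algorithm and a kernel extension, so this illustrates only the loss, not their method.

        import numpy as np
        from scipy.optimize import minimize

        rng = np.random.default_rng(3)
        Xpos = rng.normal([1.0, 0.5], 1.0, (200, 2))   # diseased subjects
        Xneg = rng.normal([0.0, 0.0], 1.0, (200, 2))   # healthy subjects

        def ramp_auc_loss(w):
            # Clipped hinge on every positive-negative score difference.
            diffs = (Xpos @ w)[:, None] - (Xneg @ w)[None, :]
            return np.clip(1.0 - diffs, 0.0, 1.0).mean()

        w = minimize(ramp_auc_loss, x0=np.array([1.0, 0.0]),
                     method="Nelder-Mead").x

        diffs = (Xpos @ w)[:, None] - (Xneg @ w)[None, :]
        print("empirical AUC:", (diffs > 0).mean())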

  1. Robust Control of Uncertain Systems via Dissipative LQG-Type Controllers

    NASA Technical Reports Server (NTRS)

    Joshi, Suresh M.

    2000-01-01

    Optimal controller design is addressed for a class of linear, time-invariant systems which are dissipative with respect to a quadratic power function. The system matrices are assumed to be affine functions of uncertain parameters confined to a convex polytopic region in the parameter space. For such systems, a method is developed for designing a controller which is dissipative with respect to a given power function, and is simultaneously optimal in the linear-quadratic-Gaussian (LQG) sense. The resulting controller provides robust stability as well as optimal performance. Three important special cases, namely, passive, norm-bounded, and sector-bounded controllers, which are also LQG-optimal, are presented. The results give new methods for robust controller design in the presence of parametric uncertainties.

  2. Description of plastic deformation of structural materials in triaxial loading

    NASA Astrophysics Data System (ADS)

    Lagzdins, A.; Zilaucs, A.

    2008-03-01

    A model of nonassociated plasticity is put forward for initially isotropic materials deforming with residual changes in volume under the action of triaxial normal stresses. The model is based on novel plastic loading and plastic potential functions, which define closed, convex, everywhere smooth surfaces in the 6D space of symmetric second-rank stress tensors. By way of example, the plastic deformation of a cylindrical concrete specimen wrapped with a CFRP tape and loaded in axial compression is described.

  3. Dimensionality Reduction in Big Data with Nonnegative Matrix Factorization

    DTIC Science & Technology

    2017-06-20

    NMF arises in applications of data mining, signal processing, computer vision, bioinformatics, etc. Fundamentally, NMF has two main purposes; first, it reduces dimensionality. After rescaling, the shape of the function becomes more spherical because ∂²g/∂y_i² = 1 for all i, and g(y) is convex; this rescaling aims to make the post-processing parts more efficient. Pseudocode fragments indicate a parallel implementation in which each thread re-scales the variables as Q = H / sqrt(diag(H) diag(H)^T) and q = h / sqrt(diag(H)), then solves the resulting nonnegative quadratic program (NQP) by minimizing f(x).

  4. Stochastic Network Interdiction

    DTIC Science & Technology

    1998-04-01

    The bounds UB(δ, g) are monotonic in the sense that if δ′ is a refinement of δ, then w_I(δ) ≤ w_I(δ′) and w̄(δ, g) ≥ w̄(δ′, g); see Hausch and Ziemba (1983). References cited in the fragment include: On a Measure of Vulnerability—The Integrity Family, Networks 24, 207-213; Hausch, D. B., and W. T. Ziemba, 1983, Bounds on the Value of Information in Uncertain Decision Problems II, Stochastics 10, 181-217; Huang, C. C., W. T. Ziemba, and A. Ben-Tal, 1977, Bounds on the Expectation of a Convex Function of a Random Variable.

  5. A study on ?-dissipative synchronisation of coupled reaction-diffusion neural networks with time-varying delays

    NASA Astrophysics Data System (ADS)

    Ali, M. Syed; Zhu, Quanxin; Pavithra, S.; Gunasekaran, N.

    2018-03-01

    This study examines the problem of dissipative synchronisation of coupled reaction-diffusion neural networks with time-varying delays. This paper proposes a complex dynamical network consisting of N linearly and diffusively coupled identical reaction-diffusion neural networks. By constructing a suitable Lyapunov-Krasovskii functional (LKF), utilisation of Jensen's inequality and reciprocally convex combination (RCC) approach, strictly ?-dissipative conditions of the addressed systems are derived. Finally, a numerical example is given to show the effectiveness of the theoretical results.

  6. Robust model predictive control for satellite formation keeping with eccentricity/inclination vector separation

    NASA Astrophysics Data System (ADS)

    Lim, Yeerang; Jung, Youeyun; Bang, Hyochoong

    2018-05-01

    This study presents model predictive formation control based on an eccentricity/inclination vector separation strategy. Alternative collision avoidance can be accomplished by using eccentricity/inclination vectors and adding a simple goal-function term to the optimization process. Real-time control is also achievable with a model predictive controller based on a convex formulation. A constraint-tightening approach is addressed as well to improve the robustness of the controller, and simulation results are presented to verify the performance enhancement of the proposed approach.

  7. Corrigendum to “The Schwarz alternating method in solid mechanics” [Comput. Methods Appl. Mech. Engrg. 319 (2017) 19–51]

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mota, Alejandro; Tezaur, Irina; Alleman, Coleman

    This corrigendum clarifies the conditions under which the proof of convergence of Theorem 1 from the original article is valid. We erroneously stated, as one of the conditions for the Schwarz alternating method to converge, that the energy functional must be strictly convex for the solid mechanics problem. We have therefore relaxed that assumption and changed the corresponding parts of the text. None of the results or other parts of the original article are affected.

  8. The nucleolus is well-posed

    NASA Astrophysics Data System (ADS)

    Fragnelli, Vito; Patrone, Fioravante; Torre, Anna

    2006-02-01

    The lexicographic order is not representable by a real-valued function, contrary to many other orders or preorders. So standard tools and results for well-posed minimum problems cannot be used. We prove that under suitable hypotheses it is nevertheless possible to guarantee the well-posedness of a lexicographic minimum over a compact or convex set. This result allows us to prove that some game-theoretic solution concepts based on the lexicographic order are well-posed; in particular, this is true for the nucleolus.

  9. Fractal Image Filters for Specialized Image Recognition Tasks

    DTIC Science & Technology

    2010-02-11

    …butter sets of fractal geometers, such as Sierpinski triangles, twin-dragons, Koch curves, Cantor sets, fractal ferns, and so on. The geometries and… If K is a centrally symmetric convex body in R^m, then the function ‖x‖_K defines a norm on R^m. Moreover, the set K is the unit ball with respect to that norm… Since there exist positive numbers r and R such that K contains a ball of radius r and is contained in a ball of radius R, the following proposition is clear…
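
    The quoted norm fact is easy to make concrete: for a centrally symmetric convex body K, the Minkowski gauge ‖x‖_K = inf{t > 0 : x/t ∈ K} is a norm whose unit ball is K. A minimal sketch, assuming K is given as a symmetric polytope {x : |aᵢᵀx| ≤ 1} whose facet normals span R^m (illustrative data, not the report's filters):

      import numpy as np

      def gauge_norm(x, A):
          """||x||_K for K = {x : |A @ x| <= 1 rowwise}; equals max_i |a_i . x|."""
          return np.max(np.abs(A @ x))

      # Facet normals of a bounded symmetric polytope (placeholder example).
      A = np.array([[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]])
      x = np.array([2.0, -1.0])
      print(gauge_norm(x, A))   # 2.0; homogeneity: gauge(c*x) = |c| * gauge(x)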

  10. Axial jet mixing of ethanol in cylindrical containers during weightlessness

    NASA Technical Reports Server (NTRS)

    Aydelott, J. C.

    1979-01-01

    An experimental program was conducted to examine the liquid flow patterns that result from the axial jet mixing of ethanol in 10-centimeter-diameter cylindrical tanks in weightlessness. A convex hemispherically ended tank and two Centaur liquid-hydrogen-tank models were used for the study. Four distinct liquid flow patterns were observed to be a function of the tank geometry, the liquid-jet velocity, the volume of liquid in the tank, and the location of the tube from which the liquid jet exited.

  11. Corrigendum to “The Schwarz alternating method in solid mechanics” [Comput. Methods Appl. Mech. Engrg. 319 (2017) 19–51]

    DOE PAGES

    Mota, Alejandro; Tezaur, Irina; Alleman, Coleman

    2017-12-06

    This corrigendum clarifies the conditions under which the proof of convergence of Theorem 1 from the original article is valid. We erroneously stated, as one of the conditions for the Schwarz alternating method to converge, that the energy functional must be strictly convex for the solid mechanics problem. We have therefore relaxed that assumption and changed the corresponding parts of the text. None of the results or other parts of the original article are affected.

  12. Graph Design via Convex Optimization: Online and Distributed Perspectives

    NASA Astrophysics Data System (ADS)

    Meng, De

    Networks and graphs have long been natural abstractions of relations in a variety of applications, e.g. transportation, power systems, social networks, communication, electrical circuits. Since a large number of computation and optimization problems are naturally defined on graphs, graph structures not only enable important properties of these problems but also lead to highly efficient distributed and online algorithms. For example, graph separability enables parallelism in computation and operation and limits the size of local problems. More interestingly, graphs can be defined and constructed so as to take best advantage of those problem properties. This dissertation focuses on graph structure and design in newly proposed optimization problems, establishing a bridge between graph properties and optimization problem properties. We first study a new optimization problem called the Geodesic Distance Maximization Problem (GDMP). Given a graph with fixed edge weights, finding the shortest path, also known as the geodesic, between two nodes is a well-studied network flow problem. The GDMP asks instead for the edge weights that maximize the length of the geodesic subject to convex constraints on the weights. We show that the GDMP is a convex optimization problem for a wide class of flow costs, and provide a physical interpretation using the dual. We present applications of the GDMP in various fields, including optical lens design, network interdiction, and resource allocation in the control of forest fires. We develop an Alternating Direction Method of Multipliers (ADMM) that exploits specific problem structures to solve large-scale GDMP instances, and demonstrate its effectiveness in numerical examples. We then turn our attention to distributed optimization on graphs with only local communication. Distributed optimization arises in a variety of applications, e.g. distributed tracking and localization, estimation problems in sensor networks, and multi-agent coordination. It aims to optimize a global objective function, formed as a sum of coupled local functions, over a graph via only local communication and computation. We develop a weighted proximal ADMM for distributed optimization that uses the graph structure. This fully distributed, single-loop algorithm allows simultaneous updates and can be viewed as a generalization of existing algorithms. More importantly, we achieve faster convergence by jointly designing the graph weights and the algorithm parameters. Finally, we propose a new problem on networks called the Online Network Formation Problem: starting with a base graph and a set of candidate edges, at each round of the game player one first chooses a candidate edge and reveals it to player two, who then decides whether to accept it; player two can accept only a limited number of edges and makes online decisions with the goal of achieving the best properties of the synthesized network. The network properties considered include the number of spanning trees, the algebraic connectivity, and the total effective resistance. Such network formation games arise in a variety of cooperative multiagent systems. We propose a primal-dual algorithmic framework for the general online network formation game and analyze its performance in terms of competitive ratio and regret.
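
    The GDMP admits a compact worked example. By LP duality, the shortest s-t distance equals max_p {p_t − p_s : p_v − p_u ≤ w_e for each edge e = (u, v)}, so maximizing it jointly over weights w in a convex set is a single convex program. A minimal sketch in cvxpy with placeholder graph data (the dissertation's formulation and ADMM solver are more general):

      import cvxpy as cp

      # Two disjoint 0->3 paths; design weights under a total budget (placeholders).
      edges = [(0, 1), (1, 3), (0, 2), (2, 3)]
      s, t, n = 0, 3, 4

      w = cp.Variable(len(edges), nonneg=True)   # edge weights to design
      p = cp.Variable(n)                         # shortest-path potentials

      cons = [p[v] - p[u] <= w[k] for k, (u, v) in enumerate(edges)]
      cons += [cp.sum(w) <= 4, w <= 3]           # convex constraints on the weights
      prob = cp.Problem(cp.Maximize(p[t] - p[s]), cons)
      prob.solve()
      print(prob.value, w.value)                 # geodesic length 2 at the optimum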

  13. Aim for the Suprasternal Notch: Technical Note to Avoid Bowstringing after Deep Brain Stimulation.

    PubMed

    Akram, Harith; Limousin, Patricia; Hyam, Jonathan; Hariz, Marwan I; Zrinzo, Ludvic

    2015-01-01

    Bowstringing may occur when excessive fibrosis develops around extension cables in the neck after deep brain stimulation (DBS) surgery. Though the occurrence of this phenomenon is rare, we have noted that it tends to cause maximal discomfort when the cables cross superficially over the convexity of the clavicle. We hypothesise that bowstringing may be avoided by directing the extension cables towards the suprasternal notch. When connecting DBS leads to an infraclavicular pectoral implantable pulse generator (IPG), tunnelling is directed towards the suprasternal notch before being directed laterally towards the IPG pocket. In previously operated patients with established fibrosis, the fibrous tunnel is opened and excised as far cranially as possible, allowing medial rerouting of cables. Using this approach, we reviewed our series of patients who underwent DBS surgery over 10 years. Of 429 patients, 7 (2%) whose cables had been tunnelled over the convexity of the clavicle and who complained of bowstringing underwent cable exploration and rerouting. This eliminated bowstringing and provided better cosmetic results. When the cable trajectory was initially directed towards the suprasternal notch, no bowstringing was observed. The tunnelling trajectory appears to influence the postoperative incidence of fibrosis associated with DBS cables. Modifying the surgical technique may reduce the incidence of this troublesome adverse event.

  14. Multi-normed spaces based on non-discrete measures and their tensor products

    NASA Astrophysics Data System (ADS)

    Helemskii, A. Ya.

    2018-04-01

    Lambert discovered a new type of structures situated, in a sense, between normed spaces and abstract operator spaces. His definition was based on the notion of amplifying a normed space by means of the spaces ℓ_2^n. Later, several mathematicians studied more general structures ('p-multi-normed spaces') introduced by means of the spaces ℓ_p^n, 1 ≤ p ≤ ∞. We pass from ℓ_p to L_p(X,μ) with an arbitrary measure. This becomes possible in the framework of the non-coordinate approach to the notion of amplification. In the case of a discrete counting measure, this approach is equivalent to the approach in the papers mentioned. Two categories arise. One consists of amplifications by means of an arbitrary normed space, and the other consists of p-convex amplifications by means of L_p(X,μ). Each of them has its own tensor product of objects (the existence of each product is proved by a separate explicit construction). As a final result, we show that the 'p-convex' tensor product has an especially transparent form for the minimal L_p-amplifications of L_q-spaces, where q is conjugate to p. Namely, tensoring L_q(Y,ν) and L_q(Z,λ), we obtain L_q(Y×Z, ν×λ).

  15. Slope gradient and shape effects on soil profiles in the northern mountainous forests of Iran

    NASA Astrophysics Data System (ADS)

    Fazlollahi Mohammadi, M.; Jalali, S. G. H.; Kooch, Y.; Said-Pullicino, D.

    2016-12-01

    In order to evaluate the variability of soil profiles for two slope shapes (concave and convex) and five slope positions (summit, shoulder, back slope, footslope and toeslope), a study was made in a virgin beech stand of the mountain forests of northern Iran. Across the slope positions, the soil profiles demonstrated significant topography-driven changes for both slope shapes. The solum depth of the convex slope was greater than that of the concave one at all five positions, and it decreased from the summit to the shoulder and increased from the mid to the lower slope positions for both the convex and concave slopes. The thin solum at the upper positions and on the concave slope indicates that pedogenetic development is least there, where leaching and biomass productivity are lower than at the lower slope positions and on the convex slope. A large decrease in the thickness of the O and A horizons from the summit to the back slope was noted for both slope shapes, but the thickness increased from the back slope downslope for both of them. The average thickness of the B horizons increased from the summit to the lower slopes in the case of the concave slope; in the case of the convex slope it decreased from the summit to the shoulder and afterwards increased towards the lower slope. The thicknesses of the different horizons varied in part across positions and slope shapes because of differences in plant species cover and soil features, which were related to topography.

  16. Powered Descent Guidance with General Thrust-Pointing Constraints

    NASA Technical Reports Server (NTRS)

    Carson, John M., III; Acikmese, Behcet; Blackmore, Lars

    2013-01-01

    The Powered Descent Guidance (PDG) algorithm and software for generating Mars pinpoint or precision landing guidance profiles has been enhanced to incorporate thrust-pointing constraints. Pointing constraints would typically be needed for onboard sensor and navigation systems that have specific field-of-view requirements to generate valid ground proximity and terrain-relative state measurements. The original PDG algorithm was designed to enforce both control and state constraints, including maximum and minimum thrust bounds, avoidance of the ground or descent within a glide-slope cone, and maximum speed limits. The thrust-bound and thrust-pointing constraints within PDG are non-convex, and non-convex constraints in general require nonlinear optimization methods to generate solutions. The short duration of Mars powered descent requires guaranteed PDG convergence to a solution within a finite time; however, nonlinear optimization methods have no guarantees of convergence to the global optimum or of convergence within finite computation time. A lossless convexification developed for the original PDG algorithm relaxed the non-convex thrust-bound constraints. This relaxation was theoretically proven to provide valid and optimal solutions for the original, non-convex problem within a convex framework. As with the thrust-bound constraint, a relaxation of the thrust-pointing constraint also admits a lossless convexification that keeps the enhanced, relaxed PDG algorithm convex and retains validity for the original non-convex problem. The enhanced PDG algorithm provides guidance profiles for pinpoint and precision landing that minimize fuel usage, minimize landing error to the target, and ensure satisfaction of all position and control constraints, including thrust bounds and now thrust-pointing constraints.
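
    The flavor of the lossless convexification can be conveyed with a heavily simplified sketch (our own toy problem, not the flight PDG software): the non-convex thrust bound ρ₁ ≤ ‖T‖ ≤ ρ₂ is replaced by ‖T‖ ≤ Γ with ρ₁ ≤ Γ ≤ ρ₂, where the slack Γ also serves as the fuel-rate proxy, yielding a second-order cone program. The dynamics here are a forward-Euler 2-D double integrator with placeholder numbers, and no claim is made that the tightness conditions of the original theorem are verified for this toy instance.

      import cvxpy as cp
      import numpy as np

      N, dt = 30, 1.0
      g = np.array([0.0, -3.7])                  # Mars-like gravity (placeholder)
      rho1, rho2 = 1.0, 8.0                      # thrust-magnitude bounds (placeholders)

      r = cp.Variable((N + 1, 2))                # position
      v = cp.Variable((N + 1, 2))                # velocity
      T = cp.Variable((N, 2))                    # thrust per unit mass
      Gam = cp.Variable(N)                       # slack upper-bounding ||T_k||

      cons = [r[0] == np.array([0.0, 1000.0]), v[0] == np.array([20.0, -30.0]),
              r[N] == np.zeros(2), v[N] == np.zeros(2)]
      for k in range(N):
          cons += [v[k + 1] == v[k] + dt * (T[k] + g),
                   r[k + 1] == r[k] + dt * v[k],
                   cp.norm(T[k]) <= Gam[k],      # convex relaxation of the thrust bound
                   Gam[k] >= rho1, Gam[k] <= rho2]

      prob = cp.Problem(cp.Minimize(dt * cp.sum(Gam)), cons)   # fuel proxy
      prob.solve()
      print(prob.status, prob.value)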

  17. Investigation of heat exchangers for energy conversion systems of megawatt-class space power plants

    NASA Astrophysics Data System (ADS)

    Ilmov, D. N.; Mamontov, Yu. N.; Skorohodov, A. S.; Smolyarov, V. A.; Filatov, N. I.

    2016-01-01

    The specifics of operation (high temperatures in excess of 1000 K and large pressure drops of several megapascals between "hot" and "cold" coolant paths) of heat exchangers in the closed circuit of a gas-turbine power converter operating in accordance with the Brayton cycle with internal heat recovery are analyzed in the context of construction of space propulsion systems. The design of a heat-exchange matrix made from doubly convex stamped plates with a specific surface relief is proposed. This design offers the opportunity to construct heat exchangers with the required parameters (strength, rigidity, weight, and dimensions) for the given operating conditions. The diagram of the working area of a test bench is presented, and the experimental techniques are outlined. The results of experimental studies of heat exchange and flow regimes in the models of heat exchangers with matrices containing 50 and 300 plates for two pairs of coolants (gas-gas and gas-liquid) are detailed. A criterion equation for the Nusselt number in the range of Reynolds numbers from 200 to 20 000 is proposed. The coefficients of hydraulic resistance for each coolant path are determined as functions of the Reynolds number. It is noted that the pressure in the water path in the "gas-liquid" series of experiments remained almost constant. This suggests that no well-developed processes of vaporization occurred within this heat-exchange matrix design even when the temperature drop between gas and water was as large as tens or hundreds of degrees. The obtained results allow one to design flight heat exchangers for various space power plants.

  18. Testing and inspecting lens by holographic means

    DOEpatents

    Hildebrand, Bernard P.

    1976-01-01

    Processes for the accurate, rapid and inexpensive testing and inspecting of concave and convex lens surfaces through holographic means, requiring no beamsplitters, mirrors or overpower optics, wherein a hologram formed in accordance with one aspect of the invention contains the entire interferometer and serves as both a master and an illuminating source for both the concave and convex surfaces to be tested.

  19. Some Tours Are More Equal than Others: The Convex-Hull Model Revisited with Lessons for Testing Models of the Traveling Salesperson Problem

    ERIC Educational Resources Information Center

    Tak, Susanne; Plaisier, Marco; van Rooij, Iris

    2008-01-01

    To explain human performance on the "Traveling Salesperson" problem (TSP), MacGregor, Ormerod, and Chronicle (2000) proposed that humans construct solutions according to the steps described by their convex-hull algorithm. Focusing on tour length as the dependent variable, and using only random or semirandom point sets, the authors…
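
    For readers wanting to experiment, a minimal convex-hull construction heuristic for the TSP is sketched below (our own compact variant, not necessarily MacGregor et al.'s exact procedure): the hull vertices seed a partial tour, and interior points are then placed by cheapest insertion.

      import numpy as np
      from scipy.spatial import ConvexHull

      def convex_hull_tour(pts):
          tour = list(ConvexHull(pts).vertices)          # hull points, in cyclic order
          rest = [i for i in range(len(pts)) if i not in tour]
          d = lambda i, j: np.linalg.norm(pts[i] - pts[j])
          while rest:
              # Cheapest insertion: the (point, edge) pair adding the least length.
              _, c, k = min((d(tour[k], c) + d(c, tour[(k + 1) % len(tour)])
                             - d(tour[k], tour[(k + 1) % len(tour)]), c, k)
                            for c in rest for k in range(len(tour)))
              tour.insert(k + 1, c)
              rest.remove(c)
          return tour

      rng = np.random.default_rng(0)
      pts = rng.random((10, 2))                          # random point set
      print(convex_hull_tour(pts))                       # visiting order of the points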

  20. Beam aperture modifier design with acoustic metasurfaces

    NASA Astrophysics Data System (ADS)

    Tang, Weipeng; Ren, Chunyu

    2017-10-01

    In this paper, we present a design concept for an acoustic beam aperture modifier using two metasurface-based planar lenses. By appropriately designing the phase gradient profile along the metasurface, we obtain a class of acoustic convex and concave lenses, which can focus incoming plane waves and collimate converging waves, respectively. On the basis of the high converging and diverging capability of these lenses, two lens-combination schemes, the convex-concave type and the convex-convex type, are proposed to tune the incoming beam aperture as needed. Specifically, the aperture of the acoustic beam can be shrunk or expanded by adjusting the phase gradients of the pair of lenses and the spacing between them. The lenses and the corresponding aperture modifiers are constructed by stacking ultrathin labyrinthine structures, which are obtained by a geometry optimization procedure and exhibit a high transmission coefficient and a full range of phase shift. Simulation results demonstrate the effectiveness of the proposed beam aperture modifiers. Owing to their flexibility in aperture control and simplicity in fabrication, the proposed modifiers have promising potential in applications such as acoustic imaging, nondestructive evaluation, and communication.
