Sample records for faster ordered-subset convex

  1. Superiorization with level control

    NASA Astrophysics Data System (ADS)

    Cegielski, Andrzej; Al-Musallam, Fadhel

    2017-04-01

    The convex feasibility problem is to find a common point of a finite family of closed convex subsets. In many applications one requires something more, namely finding a common point of closed convex subsets which minimizes a continuous convex function. The latter requirement leads to an application of the superiorization methodology, which sits between methods for the convex feasibility problem and methods for convex constrained minimization. Inspired by the superiorization idea, we introduce a method which sequentially applies a long-step algorithm to a sequence of convex feasibility problems; the method employs quasi-nonexpansive operators as well as subgradient projections with level control and does not require evaluation of the metric projection. We replace a perturbation of the iterates (applied in the superiorization methodology) by a perturbation of the current level in minimizing the objective function. We consider the method in a Euclidean space in order to guarantee strong convergence, although the method is well defined in a Hilbert space.
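
    The level-controlled subgradient projection at the heart of such methods admits a compact sketch. Below is a minimal Python illustration of the standard relaxed subgradient projection onto a sublevel set {y : f(y) <= level}, with the level perturbed downward in stages; the function names and the toy l1 objective are our own illustrative choices, not the authors' algorithm.

    ```python
    import numpy as np

    def subgradient_projection(x, f, subgrad, level):
        """One relaxed projection of x toward the sublevel set {y : f(y) <= level}.

        This is the standard subgradient-projection step used with level
        control; a schematic sketch, not the authors' full long-step method.
        """
        fx = f(x)
        if fx <= level:
            return x  # already feasible at the current level
        g = subgrad(x)
        return x - (fx - level) / np.dot(g, g) * g

    # Toy usage: drive a point toward {x : ||x||_1 <= 1}, lowering the level
    # (the level perturbation replaces the usual perturbation of iterates).
    f = lambda x: np.abs(x).sum()
    subgrad = lambda x: np.sign(x)
    x = np.array([2.0, -3.0])
    for level in (4.0, 2.0, 1.0):
        for _ in range(50):
            x = subgradient_projection(x, f, subgrad, level)
    print(x, f(x))  # f(x) approaches the final level 1.0
    ```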

  2. Generalized Bregman distances and convergence rates for non-convex regularization methods

    NASA Astrophysics Data System (ADS)

    Grasmair, Markus

    2010-11-01

    We generalize the notion of Bregman distance using concepts from abstract convexity in order to derive convergence rates for Tikhonov regularization with non-convex regularization terms. In particular, we study the non-convex regularization of linear operator equations on Hilbert spaces, showing that the conditions required for the application of the convergence rate results are strongly related to the standard range conditions from the convex case. Moreover, we consider the setting of sparse regularization, where we show that a rate of order δ^{1/p} holds if the regularization term has a slightly faster growth at zero than |t|^p.
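
    For orientation, the convex-case objects being generalized here can be written down directly. A sketch in LaTeX of the classical Bregman distance with respect to a convex regularization functional R, together with what a rate of order δ^{1/p} asserts (notation ours):

    ```latex
    % Classical Bregman distance with respect to a convex functional R:
    D_\xi(x, y) = R(x) - R(y) - \langle \xi,\, x - y \rangle,
        \qquad \xi \in \partial R(y).
    % A convergence rate of order \delta^{1/p} for the regularized
    % solutions means
    \| u_\delta - u^\dagger \| = O\bigl(\delta^{1/p}\bigr)
        \quad \text{as the noise level } \delta \to 0,
    % where u_\delta is the regularized solution at noise level \delta
    % and u^\dagger is the exact solution.
    ```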

  3. An optimized algorithm for multiscale wideband deconvolution of radio astronomical images

    NASA Astrophysics Data System (ADS)

    Offringa, A. R.; Smirnov, O.

    2017-10-01

    We describe a new multiscale deconvolution algorithm that can also be used in a multifrequency mode. The algorithm only affects the minor CLEAN loop. In single-frequency mode, the minor loop of our improved multiscale algorithm is over an order of magnitude faster than the CASA multiscale algorithm, and produces results of similar quality. For multifrequency deconvolution, a technique named joined-channel cleaning is used. In this mode, the minor loop of our algorithm is two to three orders of magnitude faster than CASA MSMFS. We extend the multiscale mode with automated scale-dependent masking, which allows structures to be cleaned below the noise. We describe a new scale-bias function for use in multiscale cleaning. We test a second deconvolution method that is a variant of the MORESANE deconvolution technique, and uses a convex optimization technique with isotropic undecimated wavelets as a dictionary. On simple, well-calibrated data, the convex optimization algorithm produces visually more representative models. On complex or imperfect data, the convex optimization algorithm has stability issues.

  4. Research on allocation efficiency of the daisy chain allocation algorithm

    NASA Astrophysics Data System (ADS)

    Shi, Jingping; Zhang, Weiguo

    2013-03-01

    With the improvement of aircraft performance in reliability, maneuverability and survivability, the number of control effectors has increased considerably. How to distribute the three-axis moments among the control surfaces reasonably becomes an important problem. The daisy-chain method is simple and easy to implement in the design of the allocation system, but it cannot solve the allocation problem over the entire attainable moment subset. For the lateral-directional allocation problem, the allocation efficiency of the daisy chain can be directly measured by the area of its subset of attainable moments. Because of the non-linear allocation characteristic, the subset of attainable moments of the daisy-chain method is a complex non-convex polygon, which is difficult to compute directly. By analyzing the two-dimensional allocation problem with a "micro-element" idea, a numerical calculation algorithm is proposed to compute the area of the non-convex polygon. In order to improve the allocation efficiency of the algorithm, a genetic algorithm with the allocation efficiency chosen as the fitness function is proposed to find the best pseudo-inverse matrix.
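
    The "micro-element" idea for the area of a non-convex polygon can be illustrated with a small numerical sketch: cover the bounding box with h×h cells and count those whose centers lie inside, using an even-odd crossing test that is valid for non-convex simple polygons. This is an illustrative reconstruction under our own naming, not the paper's exact algorithm.

    ```python
    import numpy as np

    def point_in_polygon(px, py, poly):
        """Even-odd ray-crossing test; valid for non-convex simple polygons."""
        inside = False
        n = len(poly)
        for i in range(n):
            x1, y1 = poly[i]
            x2, y2 = poly[(i + 1) % n]
            if (y1 > py) != (y2 > py):  # edge straddles the horizontal ray
                x_cross = x1 + (py - y1) * (x2 - x1) / (y2 - y1)
                if px < x_cross:
                    inside = not inside
        return inside

    def polygon_area_microelements(poly, h=1e-2):
        """Sum h*h 'micro-elements' whose centers fall inside the polygon."""
        poly = np.asarray(poly, dtype=float)
        xmin, ymin = poly.min(axis=0)
        xmax, ymax = poly.max(axis=0)
        xs = np.arange(xmin + h / 2, xmax, h)
        ys = np.arange(ymin + h / 2, ymax, h)
        count = sum(point_in_polygon(x, y, poly) for x in xs for y in ys)
        return count * h * h

    # Non-convex test polygon (arrowhead); exact area is 0.75.
    poly = [(0, 0), (1, 0), (0.5, 0.5), (1, 1), (0, 1)]
    print(polygon_area_microelements(poly))  # ~0.75, improving as h -> 0
    ```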

  5. POLYNOMIAL AND RATIONAL APPROXIMATION OF FUNCTIONS OF SEVERAL VARIABLES WITH CONVEX DERIVATIVES IN THE L_p-METRIC (0 < p \leqslant \infty)

    NASA Astrophysics Data System (ADS)

    Khatamov, A.

    1995-02-01

    Let \operatorname{Conv}_n^{(l)}(\mathscr{G}) be the set of all functions f such that, for every n-dimensional unit vector \mathbf{e}, the lth derivative in the direction of \mathbf{e}, D^{(l)}(\mathbf{e})f, is continuous on a convex bounded domain \mathscr{G} \subset \mathbf{R}^n (n \geqslant 2) and convex (upwards or downwards) on the nonempty intersection of every line L \subset \mathbf{R}^n with the domain \mathscr{G}, and let M^{(l)}(f,\mathscr{G}) := \sup\{\Vert D^{(l)}(\mathbf{e})f\Vert_{C(\mathscr{G})} : \mathbf{e} \in \mathbf{R}^n, \Vert\mathbf{e}\Vert = 1\} < \infty. Sharp, in the sense of order of smallness, estimates of the best simultaneous polynomial approximations of the functions f \in \operatorname{Conv}_n^{(l)}(\mathscr{G}) for which D^{(l)}(\mathbf{e})f \in \operatorname{Lip}_K 1 for every \mathbf{e}, and of their derivatives, in the metrics of L_p(\mathscr{G}) (0 < p \leqslant \infty), are obtained. It is proved that the corresponding parts of these estimates are preserved for the best rational approximations, on any n-dimensional parallelepiped Q, of functions f \in \operatorname{Conv}_n^{(l)}(Q) in the metrics of L_p(Q) (0 < p < \infty), and it is shown that they are sharp in the sense of order of smallness for 0 < p \leqslant 1.

  6. Unconditionally stable, second-order accurate schemes for solid state phase transformations driven by mechano-chemical spinodal decomposition

    DOE PAGES

    Sagiyama, Koki; Rudraraju, Shiva; Garikipati, Krishna

    2016-09-13

    Here, we consider solid state phase transformations that are caused by free energy densities with domains of non-convexity in strain-composition space; we refer to the non-convex domains as mechano-chemical spinodals. The non-convexity with respect to composition and strain causes segregation into phases with different crystal structures. We work on an existing model that couples the classical Cahn-Hilliard model with Toupin’s theory of gradient elasticity at finite strains. Both systems are represented by fourth-order, nonlinear, partial differential equations. The goal of this work is to develop unconditionally stable, second-order accurate time-integration schemes, motivated by the need to carry out large-scale computations of dynamically evolving microstructures in three dimensions. We also introduce reduced formulations naturally derived from these proposed schemes for faster computations that are still second-order accurate. Although our method is developed and analyzed here for a specific class of mechano-chemical problems, one can readily apply the same method to develop unconditionally stable, second-order accurate schemes for any problems for which free energy density functions are multivariate polynomials of solution components and component gradients. Apart from an analysis and construction of methods, we present a suite of numerical results that demonstrate the schemes in action.

  7. Computing convex quadrangulations☆

    PubMed Central

    Schiffer, T.; Aurenhammer, F.; Demuth, M.

    2012-01-01

    We use projected Delaunay tetrahedra and a maximum independent set approach to compute large subsets of convex quadrangulations on a given set of points in the plane. The new method improves over the popular pairing method based on triangulating the point set. PMID:22389540

  8. ON THE STRUCTURE OF \mathcal{H}_{n-1}-ALMOST EVERYWHERE CONVEX HYPERSURFACES IN \mathbf{R}^{n+1}

    NASA Astrophysics Data System (ADS)

    Dmitriev, V. G.

    1982-04-01

    It is proved that a hypersurface f imbedded in \mathbf{R}^{n+1}, n \geq 2, which is locally convex at all points except for a closed set E with (n-1)-dimensional Hausdorff measure \mathcal{H}_{n-1}(E) = 0, and strictly convex near E, is in fact locally convex everywhere. The author also gives various corollaries. In particular, let M be a complete two-dimensional Riemannian manifold of nonnegative curvature K and E \subset M a closed subset for which \mathcal{H}_1(E) = 0. Assume further that there exists a neighborhood U \supset E such that K(x) > 0 for x \in U \setminus E, and that f \colon M \to \mathbf{R}^3 is such that f\vert_{U \setminus E} is an imbedding and f\vert_{M \setminus E} \in C^{1,\alpha}, \alpha > 2/3. Then f(M) is a complete convex surface in \mathbf{R}^3. This result is a generalization of results in the paper reviewed in MR 51 #11374. Bibliography: 19 titles.

  9. Time-frequency filtering and synthesis from convex projections

    NASA Astrophysics Data System (ADS)

    White, Langford B.

    1990-11-01

    This paper describes the application of the theory of projections onto convex sets to time-frequency filtering and synthesis problems. We show that the class of Wigner-Ville distributions (WVDs) of L^2 signals forms the boundary of a closed convex subset of L^2(R^2). This result is obtained by considering the convex set of states on the Heisenberg group, of which the ambiguity functions form the extreme points. The form of the projection onto the set of WVDs is deduced. Various linear and non-linear filtering operations are incorporated by formulating them as convex projections. An example algorithm for simultaneous time-frequency filtering and synthesis is suggested.
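
    As a concrete illustration of the POCS machinery used here, the following sketch alternates exact projections onto two simple convex sets in R^2 (a hyperplane and a ball); the iterates converge to a point in the intersection. The sets are our own toy examples, standing in for the WVD set and the filtering constraints.

    ```python
    import numpy as np

    def project_hyperplane(x, a, b):
        """Project x onto the hyperplane {y : <a, y> = b}."""
        a = np.asarray(a, dtype=float)
        return x - (np.dot(a, x) - b) / np.dot(a, a) * a

    def project_ball(x, c, r):
        """Project x onto the closed ball of radius r centered at c."""
        d = x - c
        n = np.linalg.norm(d)
        return x if n <= r else c + (r / n) * d

    # POCS: alternate the two projections; the limit lies in the intersection.
    x = np.array([5.0, -3.0])
    for _ in range(100):
        x = project_ball(project_hyperplane(x, a=[1.0, 1.0], b=1.0),
                         c=np.zeros(2), r=2.0)
    print(x)  # on the line x + y = 1, inside the ball of radius 2
    ```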

  10. Finite-time containment control of perturbed multi-agent systems based on sliding-mode control

    NASA Astrophysics Data System (ADS)

    Yu, Di; Ji, Xiang Yang

    2018-01-01

    Aiming at a faster convergence rate, this paper investigates the finite-time containment control problem for second-order multi-agent systems with norm-bounded non-linear perturbation. When the topology among the followers is strongly connected, a nonsingular fast terminal sliding-mode error is defined, a corresponding discontinuous control protocol is designed, and the appropriate value range of the control parameter is obtained by applying finite-time stability analysis, so that the followers converge to and move along the desired trajectories within the convex hull formed by the leaders in finite time. Furthermore, on the basis of the sliding-mode error defined, corresponding distributed continuous control protocols are investigated with a fast exponential reaching law and a double exponential reaching law, so as to make the followers move to small neighbourhoods of their desired locations and keep within the dynamic convex hull formed by the leaders in finite time, achieving practical finite-time containment control. Meanwhile, we develop the faster control scheme by comparing the convergence rates of these two reaching laws. Simulation examples are given to verify the correctness of the theoretical results.

  11. A formulation of a matrix sparsity approach for the quantum ordered search algorithm

    NASA Astrophysics Data System (ADS)

    Parmar, Jupinder; Rahman, Saarim; Thiara, Jaskaran

    One specific subset of quantum algorithms is Grover's Ordered Search Problem (OSP), the quantum counterpart of the classical binary search algorithm, which utilizes oracle functions to produce a specified value within an ordered database. Classically, the optimal algorithm is known to have log_2 N complexity; however, Grover's algorithm has been found to have an optimal complexity between the lower bound of (ln N - 1)/π ≈ 0.221 log_2 N and the upper bound of 0.433 log_2 N. We sought to lower the known upper bound of the OSP. Following Farhi et al. [MIT-CTP 2815 (1999), arXiv:quant-ph/9901059], we see that the OSP can be resolved into a translationally invariant algorithm to create quantum query algorithm constraints. With these constraints, one can find Laurent polynomials for various values of k (queries) and N (database sizes), thus finding larger recursive sets to solve the OSP and effectively reducing the upper bound. These polynomials are found to be convex functions, allowing one to make use of convex optimization to find an improvement on the known bounds. According to Childs et al. [Phys. Rev. A 75 (2007) 032335], semidefinite programming, a subset of convex optimization, can solve the particular problem represented by the constraints. We were able to implement a program abiding by their formulation of a semidefinite program (SDP), leading us to find that it takes an immense amount of storage and time to compute. To combat this setback, we then formulated an approach to improve results of the SDP using matrix sparsity. Through the development of this approach, along with an implementation of a rudimentary solver, we demonstrate how matrix sparsity reduces the amount of time and storage required to compute the SDP, overall ensuring further improvements will likely be made to reach the theorized lower bound.
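
    For readers unfamiliar with the SDP formulation, a minimal semidefinite program has the shape below; this is a generic template (solved here with the cvxpy modeling package, assuming it is installed), not the actual constraint set of the ordered search problem.

    ```python
    import cvxpy as cp
    import numpy as np

    # Generic SDP template: minimize <C, X> over PSD matrices of unit trace.
    n = 3
    C = np.diag([1.0, 2.0, 3.0])
    X = cp.Variable((n, n), symmetric=True)
    constraints = [X >> 0,              # X in the positive semidefinite cone
                   cp.trace(X) == 1]    # linear normalization constraint
    prob = cp.Problem(cp.Minimize(cp.trace(C @ X)), constraints)
    prob.solve()
    print(prob.value)  # 1.0: all mass on the smallest eigenvalue of C
    ```

    Sparse problem data matters because SDP solvers exploit the sparsity pattern of C and of the constraint matrices, which is what the matrix-sparsity approach described above leverages.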

  12. Renorming c0 and closed, bounded, convex sets with fixed point property for affine nonexpansive mappings

    NASA Astrophysics Data System (ADS)

    Nezir, Veysel; Mustafa, Nizami

    2017-04-01

    In 2008, P.K. Lin provided the first example of a nonreflexive space that can be renormed to have the fixed point property for nonexpansive mappings. This space was l1, the Banach space of absolutely summable sequences, and researchers aim to generalize this to c0, the Banach space of null sequences. Before P.K. Lin's intriguing result, in 1979, Goebel and Kuczumow showed that there is a large class of non-weak* compact closed, bounded, convex subsets of l1 with the fixed point property for nonexpansive mappings. P.K. Lin was then inspired by Goebel and Kuczumow's ideas in obtaining his result. Similarly to P.K. Lin's study, Hernández-Linares worked on L1 and, in his Ph.D. thesis supervised by Maria Japón, showed that L1 can be renormed to have the fixed point property for affine nonexpansive mappings. Related questions for c0 have since been considered by researchers. Recently, Nezir constructed several equivalent norms on c0 and showed that there are non-weakly compact closed, bounded, convex subsets of c0 with the fixed point property for affine nonexpansive mappings. In this study, we construct a family of equivalent norms containing those developed by Nezir and show that there exists a large class of non-weakly compact closed, bounded, convex subsets of c0 with the fixed point property for affine nonexpansive mappings.

  13. GPU-accelerated regularized iterative reconstruction for few-view cone beam CT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Matenine, Dmitri, E-mail: dmitri.matenine.1@ulaval.ca; Goussard, Yves, E-mail: yves.goussard@polymtl.ca; Després, Philippe, E-mail: philippe.despres@phy.ulaval.ca

    2015-04-15

    Purpose: The present work proposes an iterative reconstruction technique designed for x-ray transmission computed tomography (CT). The main objective is to provide a model-based solution to the cone-beam CT reconstruction problem, yielding accurate low-dose images via few-view acquisitions in clinically acceptable time frames. Methods: The proposed technique combines a modified ordered subsets convex (OSC) algorithm with the total variation minimization (TV) regularization technique and is called OSC-TV. The number of subsets in each OSC iteration follows a reduction pattern in order to ensure the best performance of the regularization method. Considering the high computational cost of the algorithm, it is implemented on a graphics processing unit, using parallelization to accelerate computations. Results: The reconstructions were performed on computer-simulated as well as human pelvic cone-beam CT projection data, and image quality was assessed. In terms of convergence and image quality, OSC-TV performs well in reconstruction of low-dose cone-beam CT data obtained via a few-view acquisition protocol. It compares favorably to the few-view TV-regularized projections onto convex sets (POCS-TV) algorithm. It also appears to be a viable alternative to full-dataset filtered backprojection. Execution times are 1–2 min and are compatible with the typical clinical workflow for non-real-time applications. Conclusions: Considering the image quality and execution times, this method may be useful for reconstruction of low-dose clinical acquisitions. It may be of particular benefit to patients who undergo multiple acquisitions by reducing the overall imaging radiation dose and associated risks.
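
    The ordered-subsets idea itself is easy to convey in a few lines: each update uses only one subset of the projection data, rescaled to approximate the full gradient, and a TV-decreasing step is interleaved. The sketch below uses a generic least-squares data term and a smoothed 1D TV penalty; it is a schematic of the OS + TV interplay, not the transmission-likelihood OSC update of the paper.

    ```python
    import numpy as np

    def os_pass(x, A, y, subsets, step):
        """One ordered-subsets pass: sequential updates on row subsets."""
        for S in subsets:
            r = A[S] @ x - y[S]
            x = x - step * (len(y) / len(S)) * (A[S].T @ r)  # rescaled gradient
        return x

    def tv_smooth_grad(x, eps=1e-6):
        """Gradient of the smoothed 1D TV penalty sum_i sqrt(dx_i^2 + eps)."""
        d = np.diff(x)
        w = d / np.sqrt(d * d + eps)
        g = np.zeros_like(x)
        g[:-1] -= w
        g[1:] += w
        return g

    rng = np.random.default_rng(0)
    A = rng.standard_normal((40, 20))
    x_true = np.repeat([0.0, 1.0], 10)            # piecewise-constant signal
    y = A @ x_true
    subsets = np.array_split(np.arange(40), 4)    # four ordered subsets
    x = np.zeros(20)
    for _ in range(200):
        x = os_pass(x, A, y, subsets, step=1e-3)
        x -= 0.01 * tv_smooth_grad(x)             # interleaved TV step
    print(np.round(x, 2))  # approximately recovers the piecewise-constant signal
    ```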

  14. Constrained Optimal Transport

    NASA Astrophysics Data System (ADS)

    Ekren, Ibrahim; Soner, H. Mete

    2018-03-01

    The classical duality theory of Kantorovich (C R (Doklady) Acad Sci URSS (NS) 37:199-201, 1942) and Kellerer (Z Wahrsch Verw Gebiete 67(4):399-432, 1984) for classical optimal transport is generalized to an abstract framework and a characterization of the dual elements is provided. This abstract generalization is set in a Banach lattice X with an order unit. The problem is given as the supremum over a convex subset of the positive unit sphere of the topological dual of X and the dual problem is defined on the bi-dual of X. These results are then applied to several extensions of the classical optimal transport.
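
    The classical statement being generalized is worth recording; in standard notation (ours), Kantorovich duality for probability measures μ, ν and a cost c reads:

    ```latex
    \inf_{\pi \in \Pi(\mu,\nu)} \int c \, d\pi
      = \sup \Bigl\{ \int \varphi \, d\mu + \int \psi \, d\nu
          \;:\; \varphi(x) + \psi(y) \le c(x,y) \Bigr\},
    % where \Pi(\mu,\nu) denotes the set of couplings of \mu and \nu.
    ```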

  15. A linearization of quantum channels

    NASA Astrophysics Data System (ADS)

    Crowder, Tanner

    2015-06-01

    Because the quantum channels form a compact, convex set, we can express any quantum channel as a convex combination of extremal channels. We give a Euclidean representation for the channels whose inverses are also valid channels; these are a subset of the extreme points. They form a compact, connected Lie group, and we calculate its Lie algebra. Lastly, we calculate a maximal torus for the group and provide a constructive approach to decomposing any invertible channel into a product of elementary channels.

  16. A Sparse Representation-Based Deployment Method for Optimizing the Observation Quality of Camera Networks

    PubMed Central

    Wang, Chang; Qi, Fei; Shi, Guangming; Wang, Xiaotian

    2013-01-01

    Deployment is a critical issue affecting the quality of service of camera networks. The deployment aims to cover the whole scene, which may contain obstacles that occlude the line of sight, with the expected observation quality using the fewest cameras. This is generally formulated as a non-convex optimization problem, which is hard to solve in polynomial time. In this paper, we propose an efficient convex solution for deployment optimizing the observation quality, based on a novel anisotropic sensing model of cameras which provides a reliable measurement of the observation quality. The deployment is formulated as the selection of a subset of nodes from a redundant initial deployment with numerous cameras, which is an ℓ0 minimization problem. We then relax this non-convex optimization to a convex ℓ1 minimization employing the sparse representation. Therefore, the high-quality deployment is efficiently obtained via convex optimization. Simulation results confirm the effectiveness of the proposed camera deployment algorithms. PMID:23989826
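
    The ℓ0-to-ℓ1 relaxation at the core of this formulation is easy to prototype. The sketch below selects a sparse set of cameras subject to per-point quality constraints; the quality matrix Q and threshold q_min are hypothetical stand-ins for the paper's anisotropic sensing model (cvxpy assumed available).

    ```python
    import cvxpy as cp
    import numpy as np

    # Hypothetical data: Q[i, j] = quality camera j contributes at scene point i.
    rng = np.random.default_rng(1)
    Q = rng.uniform(0.0, 1.0, size=(50, 20))
    q_min = 0.8                                   # required quality per point

    x = cp.Variable(20, nonneg=True)              # relaxed selection weights
    prob = cp.Problem(cp.Minimize(cp.norm1(x)),   # l1 surrogate for camera count
                      [Q @ x >= q_min, x <= 1])
    prob.solve()
    selected = np.flatnonzero(x.value > 1e-3)     # threshold the sparse weights
    print(len(selected), selected)
    ```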

  17. Pricing of Water Resources With Depletable Externality: The Effects of Pollution Charges

    NASA Astrophysics Data System (ADS)

    Kitabatake, Yoshifusa

    1990-04-01

    With an abstraction of a real-world situation, the paper views water resources as a depletable capital asset which yields a stream of services such as water supply and the assimilation of pollution discharge. The concept of a concave or convex water resource depletion function is then introduced and applied to a general two-sector, three-factor model. The main theoretical contribution is to prove that when the water resource depletion function is a concave rather than a convex function of pollution, it is more likely that gross regional income will increase under a higher pollution charge policy. The concavity of the function implies that as more pollution is released, the ability to supply water at a certain minimum quality level diminishes ever faster. A numerical example is also provided.

  18. A search asymmetry reversed by figure-ground assignment.

    PubMed

    Humphreys, G W; Müller, H

    2000-05-01

    We report evidence demonstrating that a search asymmetry favoring concave over convex targets can be reversed by altering the figure-ground assignment of edges in shapes. Visual search for a concave target among convex distractors is faster than search for a convex target among concave distractors (a search asymmetry). By using shapes with ambiguous local figure-ground relations, we demonstrated that search can be efficient (with search slopes around 10 ms/item) or inefficient (with search slopes around 30-40 ms/item) with the same stimuli, depending on whether edges are assigned to concave or convex "figures." This assignment process can operate in a top-down manner, according to the task set. The results suggest that attention is allocated to spatial regions following the computation of figure-ground relations in parallel across the elements present. This computation can also be modulated by top-down processes.

  19. CudaChain: an alternative algorithm for finding 2D convex hulls on the GPU.

    PubMed

    Mei, Gang

    2016-01-01

    This paper presents an alternative GPU-accelerated convex hull algorithm and a novel Sorting-based Preprocessing Approach (SPA) for planar point sets. The proposed convex hull algorithm, termed CudaChain, consists of two stages: (1) two rounds of preprocessing performed on the GPU and (2) the finalization of calculating the expected convex hull on the CPU. Interior points lying inside a quadrilateral formed by four extreme points are first discarded, and the remaining points are then distributed into several (typically four) subregions. Each subset of points is first sorted in parallel; then the second round of discarding is performed using SPA; and finally a simple chain is formed from the current remaining points. A simple polygon can be easily generated by directly connecting all the chains in the subregions. The expected convex hull of the input points can finally be obtained by calculating the convex hull of the simple polygon. The library Thrust is utilized to realize the parallel sorting, reduction, and partitioning for better efficiency and simplicity. Experimental results show that: (1) SPA can very effectively detect and discard interior points; and (2) CudaChain achieves 5x-6x speedups over the well-known Qhull implementation for 20M points.
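
    The first discarding round maps to a few lines of vectorized code: build the quadrilateral of the four extreme points and drop everything strictly inside it, since no interior point can be a hull vertex. A CPU sketch of this step (the paper performs it on the GPU):

    ```python
    import numpy as np

    def inside_convex_quad(pts, quad):
        """True for points strictly inside a CCW convex quadrilateral."""
        inside = np.ones(len(pts), dtype=bool)
        for i in range(4):
            a, b = quad[i], quad[(i + 1) % 4]
            cross = (b[0] - a[0]) * (pts[:, 1] - a[1]) \
                  - (b[1] - a[1]) * (pts[:, 0] - a[0])
            inside &= cross > 0          # strictly left of every edge
        return inside

    rng = np.random.default_rng(0)
    pts = rng.random((1_000_000, 2))
    # Quadrilateral through the extremes in min-x, min-y, max-x, max-y order (CCW).
    quad = np.array([pts[pts[:, 0].argmin()], pts[pts[:, 1].argmin()],
                     pts[pts[:, 0].argmax()], pts[pts[:, 1].argmax()]])
    remaining = pts[~inside_convex_quad(pts, quad)]
    print(len(pts), "->", len(remaining))  # a large fraction is discarded
    ```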

  20. Fast alternating projection methods for constrained tomographic reconstruction

    PubMed Central

    Liu, Li; Han, Yongxin

    2017-01-01

    The alternating projection algorithms are easy to implement and effective for large-scale complex optimization problems, such as constrained reconstruction of X-ray computed tomography (CT). A typical method is to use projection onto convex sets (POCS) for data fidelity and nonnegativity constraints, combined with total variation (TV) minimization (so-called TV-POCS), for sparse-view CT reconstruction. However, this type of method relies on empirically selected parameters for satisfactory reconstruction, is generally slow, and lacks convergence analysis. In this work, we use a convex feasibility set approach to address the problems associated with TV-POCS and propose a framework using full sequential alternating projections, or POCS (FS-POCS), to find the solution in the intersection of convex constraints of bounded TV function, bounded data fidelity error, and non-negativity. The rationale behind FS-POCS is that the mathematically optimal solution of the constrained objective function may not be the physically optimal solution. The breakdown of constrained reconstruction into an intersection of several feasible sets can lead to faster convergence and better quantification of reconstruction parameters in a physically meaningful way, rather than through empirical trial-and-error. In addition, for large-scale optimization problems, first-order methods are usually used. Not only is the condition for convergence of gradient-based methods derived, but a primal-dual hybrid gradient (PDHG) method is also used for fast convergence of the bounded TV constraint. The newly proposed FS-POCS is evaluated and compared with TV-POCS and another convex feasibility projection method (CPTV) using both digital phantom and pseudo-real CT data, showing its superior performance in reconstruction speed, image quality, and quantification. PMID:28253298

  1. A 'range test' for determining scatterers with unknown physical properties

    NASA Astrophysics Data System (ADS)

    Potthast, Roland; Sylvester, John; Kusiak, Steven

    2003-06-01

    We describe a new scheme for determining the convex scattering support of an unknown scatterer when the physical properties of the scatterer are not known. The convex scattering support is a subset of the scatterer and provides information about its location and estimates for its shape. For convex polygonal scatterers the scattering support coincides with the scatterer and we obtain full shape reconstructions. The method is formulated for the reconstruction of scatterers from the far field pattern for one or a few incident waves. The method is non-iterative in nature and belongs to the class of recently derived generalized sampling schemes, such as the 'no response test' of Luke-Potthast. The range test operates by testing whether it is possible to analytically continue a far field to the exterior of any test domain Ω_test. By intersecting the convex hulls of various test domains we can produce a minimal convex set, the convex scattering support, which must be contained in the convex hull of the support of any scatterer which produces that far field. The convex scattering support is calculated by testing the range of special integral operators for a sampling set of test domains. The numerical results can be used as an approximation for the support of the unknown scatterer. We prove convergence and regularity of the scheme and show numerical examples for sound-soft, sound-hard and medium scatterers. We can apply the range test to non-convex scatterers as well. We can conclude that an Ω_test which passes the range test has a non-empty intersection with the infinity-support (the complement of the unbounded component of the complement of the support) of the true scatterer, but cannot find a minimal set which must be contained therein.

  2. Piecewise convexity of artificial neural networks.

    PubMed

    Rister, Blaine; Rubin, Daniel L

    2017-10-01

    Although artificial neural networks have shown great promise in applications including computer vision and speech recognition, there remains considerable practical and theoretical difficulty in optimizing their parameters. The seemingly unreasonable success of gradient descent methods in minimizing these non-convex functions remains poorly understood. In this work we offer some theoretical guarantees for networks with piecewise affine activation functions, which have in recent years become the norm. We prove three main results. First, that the network is piecewise convex as a function of the input data. Second, that the network, considered as a function of the parameters in a single layer, all others held constant, is again piecewise convex. Third, that the network as a function of all its parameters is piecewise multi-convex, a generalization of biconvexity. From here we characterize the local minima and stationary points of the training objective, showing that they minimize the objective on certain subsets of the parameter space. We then analyze the performance of two optimization algorithms on multi-convex problems: gradient descent, and a method which repeatedly solves a number of convex sub-problems. We prove necessary convergence conditions for the first algorithm and both necessary and sufficient conditions for the second, after introducing regularization to the objective. Finally, we remark on the remaining difficulty of the global optimization problem. Under the squared error objective, we show that by varying the training data, a single rectifier neuron admits local minima arbitrarily far apart, both in objective value and parameter space.

  3. Probabilistic Guidance of Swarms using Sequential Convex Programming

    DTIC Science & Technology

    2014-01-01

    quadcopter fleet [24]. In this paper, sequential convex programming (SCP) [25] is implemented using model predictive control (MPC) to provide real-time...in order to make Problem 1 convex. The details for convexifying this problem can be found in [26]. The main steps are discretizing the problem using

  4. A fast 4D cone beam CT reconstruction method based on the OSC-TV algorithm.

    PubMed

    Mascolo-Fortin, Julia; Matenine, Dmitri; Archambault, Louis; Després, Philippe

    2018-01-01

    Four-dimensional cone beam computed tomography allows for temporally resolved imaging with useful applications in radiotherapy, but raises particular challenges in terms of image quality and computation time. The purpose of this work is to develop a fast and accurate 4D algorithm by adapting a GPU-accelerated ordered subsets convex algorithm (OSC), combined with the total variation minimization regularization technique (TV). Different initialization schemes were studied to adapt the OSC-TV algorithm to 4D reconstruction: each respiratory phase was initialized either with a 3D reconstruction or a blank image. Reconstruction algorithms were tested on a dynamic numerical phantom and on a clinical dataset. 4D iterations were implemented on a cluster of 8 GPUs. All developed methods allowed for adequate visualization of the respiratory movement and compared favorably to the McKinnon-Bates and adaptive steepest descent projection onto convex sets algorithms, while the 4D reconstructions initialized from a prior 3D reconstruction led to better overall image quality. The most suitable adaptation of OSC-TV to 4D CBCT was found to be a combination of a prior FDK reconstruction and a 4D OSC-TV reconstruction, with a reconstruction time of 4.5 minutes. This relatively short reconstruction time could facilitate clinical use.

  5. Efficient Orchestration of Data Centers Via Comprehensive and Application Aware Trade Off Exploration

    DTIC Science & Technology

    2016-12-01

    proposes to save power by concentrating traffic over a small subset of links. data center architecture [12], as depicted in Figure 1.1. The fat-tree... architecture is a physical network topology commonly used in data networks representing a hierarchical multi-rooted tree consisting of four levels...milliseconds) is an order of magnitude faster than the GASO variants (tens of seconds). 3.4.3 LAW for Architectures of Different Dimensions In this section

  6. Autonomous optimal trajectory design employing convex optimization for powered descent on an asteroid

    NASA Astrophysics Data System (ADS)

    Pinson, Robin Marie

    Mission proposals that land spacecraft on asteroids are becoming increasingly popular. However, in order to have a successful mission, the spacecraft must reliably and softly land at the intended landing site with pinpoint precision. The problem under investigation is how to design a propellant (fuel) optimal powered descent trajectory that can be quickly computed onboard the spacecraft, without interaction from ground control. The goal is to autonomously design the optimal powered descent trajectory onboard the spacecraft immediately prior to the descent burn for use during the burn. Compared to a planetary powered landing problem, the challenges that arise from designing an asteroid powered descent trajectory include complicated nonlinear gravity fields, small rotating bodies, and low-thrust vehicles. The nonlinear gravity fields cannot be represented by a constant gravity model or a Newtonian model. The trajectory design algorithm needs to be robust and efficient to guarantee a designed trajectory and complete the calculations in a reasonable time frame. This research investigates the following questions: Can convex optimization be used to design the minimum propellant powered descent trajectory for a soft landing on an asteroid? Is this method robust and reliable enough to allow autonomy onboard the spacecraft without interaction from ground control? This research designed a convex-optimization-based method that rapidly generates the propellant optimal asteroid powered descent trajectory. The solution to the convex optimization problem is the thrust magnitude and direction, which determines the trajectory. The propellant optimal problem was formulated as a second-order cone program, a subset of convex optimization, through relaxation techniques by including a slack variable, a change of variables, and incorporation of the successive solution method. Convex optimization solvers, especially for second-order cone programs, are robust, reliable, and guaranteed to find the global minimum provided one exists. In addition, an outer optimization loop using Brent's method determines the optimal flight time corresponding to the minimum propellant usage over all flight times. The inclusion of additional trajectory constraints (solely vertical motion near the landing site and a glide slope) was evaluated. Through a theoretical proof involving the Minimum Principle from Optimal Control Theory and the Karush-Kuhn-Tucker conditions, it was shown that the relaxed problem is identical to the original problem at the minimum point. Therefore, the optimal solution of the relaxed problem is an optimal solution of the original problem, referred to as lossless convexification. A key finding is that this holds for all levels of gravity model fidelity. The designed thrust magnitude profiles exhibited the bang-bang behavior predicted by Optimal Control Theory. The first high-fidelity gravity model employed was the 2x2 spherical harmonics model, assuming a perfect triaxial ellipsoid and placement of the coordinate frame at the asteroid's center of mass, aligned with the semi-major axes. The spherical harmonics model is not valid inside the Brillouin sphere, and this becomes relevant for irregularly shaped asteroids. A higher-fidelity model was then implemented, combining the 4x4 spherical harmonics gravity model with the interior spherical Bessel gravity model. All gravitational terms in the equations of motion are evaluated with the position vector from the previous iteration, creating the successive solution method.
    The success of the methodology was shown by applying the algorithm to three triaxial ellipsoidal asteroids with four different rotation speeds using the 2x2 gravity model. Finally, the algorithm was tested using the irregularly shaped asteroid Castalia.
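
    The relaxation referred to above has a standard schematic form (notation ours, following the general lossless-convexification literature rather than the thesis verbatim): the non-convex thrust bound 0 < ρ1 ≤ ||T(t)|| ≤ ρ2 is replaced by a second-order cone constraint through a slack variable Γ:

    ```latex
    \min_{\mathbf{T},\,\Gamma} \int_0^{t_f} \Gamma(t)\, dt
      \quad \text{s.t.} \quad
      \ddot{\mathbf{r}} = \mathbf{g}(\mathbf{r}) + \mathbf{T}/m, \qquad
      \|\mathbf{T}(t)\| \le \Gamma(t), \qquad
      \rho_1 \le \Gamma(t) \le \rho_2.
    % The Minimum-Principle/KKT argument cited above shows that
    % \|\mathbf{T}\| = \Gamma at the optimum, so the relaxation is lossless.
    ```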

  7. The effects of a convex rear-view mirror on ocular accommodative responses.

    PubMed

    Nagata, Tatsuo; Iwasaki, Tsuneto; Kondo, Hiroyuki; Tawara, Akihiko

    2013-11-01

    Convex mirrors are universally used as rear-view mirrors in automobiles. However, the ocular accommodative responses during the use of these mirrors have not yet been examined. This study investigated the effects of a convex mirror on the ocular accommodative systems. Seven young adults with normal visual functions were instructed to view an object binocularly in a convex or plane mirror. The accommodative responses were measured with an infrared optometer. The average accommodation of all subjects while viewing the object in the convex mirror was significantly nearer than in the plane mirror, although all subjects perceived the position of the object in the convex mirror as being farther away. Moreover, the fluctuations of accommodation were significantly larger for the convex mirror. The convex mirror caused a 'false recognition of distance', which induced the large accommodative fluctuations and blurred vision. Manufacturers should consider the ocular accommodative responses as a new indicator for increasing automotive safety.

  8. A Lie-theoretic Description of the Solution Space of the tt*-Toda Equations

    NASA Astrophysics Data System (ADS)

    Guest, Martin A.; Ho, Nan-Kuo

    2017-12-01

    We give a Lie-theoretic explanation for the convex polytope which parametrizes the globally smooth solutions of the topological-antitopological fusion equations of Toda type (tt*-Toda equations), which were introduced by Cecotti and Vafa. It is known from Guest and Lin (J. Reine Angew. Math. 689, 1-32, 2014), Guest et al. (Int. Math. Res. Notices 2015, 11745-11784, 2015) and Mochizuki (2013, 2014) that these solutions can be parametrized by monodromy data of a certain flat SL_{n+1}(ℝ)-connection. Using Boalch's Lie-theoretic description of Stokes data, and Steinberg's description of regular conjugacy classes of a linear algebraic group, we express this monodromy data as a convex subset of a Weyl alcove of SU_{n+1}.

  9. The baker’s map with a convex hole

    NASA Astrophysics Data System (ADS)

    Clark, Lyndsey; Hare, Kevin G.; Sidorov, Nikita

    2018-07-01

    We consider the baker’s map B on the unit square X and an open convex set H ⊂ X which we regard as a hole. The survivor set is defined as the set of all points in X whose B-trajectories are disjoint from H. The main purpose of this paper is to study holes H for which the Hausdorff dimension of the survivor set is less than two (dimension traps), as well as those for which any periodic trajectory of B intersects H (cycle traps). We show that any H which lies in the interior of X is not a dimension trap. This means that, unlike the doubling map and other one-dimensional examples, the survivor set can have full Hausdorff dimension even for H whose Lebesgue measure is arbitrarily close to one. Also, we describe holes which are dimension or cycle traps, critical in the sense that if we consider a strictly convex subset, then the corresponding property in question no longer holds. We also determine a δ such that the corresponding property in question holds for all convex H whose Lebesgue measure is less than δ. This paper may be seen as a first extension of our work begun in Clark (2016 Discrete Continuous Dyn. Syst. A 6 1249-69; Clark 2016 PhD Dissertation, The University of Manchester; Glendinning and Sidorov 2015 Ergod. Theor. Dynam. Syst. 35 1208-28; Hare and Sidorov 2014 Monatsh. Math. 175 347-65; Sidorov 2014 Acta Math. Hung. 143 298-312) to higher dimensions.

  10. Enhanced Fuel-Optimal Trajectory-Generation Algorithm for Planetary Pinpoint Landing

    NASA Technical Reports Server (NTRS)

    Acikmese, Behcet; Blackmore, James C.; Scharf, Daniel P.

    2011-01-01

    An enhanced algorithm is developed that builds on a previous innovation of fuel-optimal powered-descent guidance (PDG) for planetary pinpoint landing. The PDG problem is to compute constrained, fuel-optimal trajectories to land a craft at a prescribed target on a planetary surface, starting from a parachute cut-off point and using a throttleable descent engine. The previous innovation showed the minimal-fuel PDG problem can be posed as a convex optimization problem, in particular, as a Second-Order Cone Program, which can be solved to global optimality with deterministic convergence properties, and hence is a candidate for onboard implementation. To increase the speed and robustness of this convex PDG algorithm for possible onboard implementation, the following enhancements are incorporated: 1) Fast detection of infeasibility (i.e., control authority is not sufficient for soft-landing) for subsequent fault response. 2) The use of a piecewise-linear control parameterization, providing smooth solution trajectories and increasing computational efficiency. 3) An enhanced line-search algorithm for optimal time-of-flight, providing quicker convergence and bounding the number of path-planning iterations needed. 4) An additional constraint that analytically guarantees inter-sample satisfaction of glide-slope and non-sub-surface flight constraints, allowing larger discretizations and, hence, faster optimization. 5) Explicit incorporation of Mars rotation rate into the trajectory computation for improved targeting accuracy. These enhancements allow faster convergence to the fuel-optimal solution and, more importantly, remove the need for a "human-in-the-loop," as constraints will be satisfied over the entire path-planning interval independent of step-size (as opposed to just at the discrete time points) and infeasible initial conditions are immediately detected. Finally, while the PDG stage is typically only a few minutes, ignoring the rotation rate of Mars can introduce tens of meters of error. By incorporating it, the enhanced PDG algorithm becomes capable of pinpoint targeting.

  11. Point-in-convex polygon and point-in-convex polyhedron algorithms with O(1) complexity using space subdivision

    NASA Astrophysics Data System (ADS)

    Skala, Vaclav

    2016-06-01

    There are many space subdivision and space partitioning techniques used in many algorithms to speed up computations. They mostly rely on orthogonal space subdivision or on hierarchical data structures, e.g. BSP trees, quadtrees, octrees, kd-trees, bounding volume hierarchies, etc. However, in some applications a non-orthogonal space subdivision can offer new ways to achieve an actual speed-up. In the case of a convex polygon in E2, a simple point-in-polygon test has O(N) complexity, and the optimal algorithm has O(log N) computational complexity. In the E3 case, the complexity is O(N) even for a convex polyhedron, as no ordering is defined. New Point-in-Convex Polygon and Point-in-Convex Polyhedron algorithms are presented, based on space subdivision in the preprocessing stage, resulting in O(1) run-time complexity. The presented approach is simple to implement. Due to the principle of duality, dual problems, e.g. line-convex polygon intersection and line clipping, can be solved in a similar way.
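
    For contrast with the O(1) approach, the classical O(log N) test that it improves on is a binary search over the fan of triangles rooted at one vertex of the convex polygon; a compact sketch (our own implementation of the textbook method):

    ```python
    import numpy as np

    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    def in_convex_polygon(p, poly):
        """O(log N) point-in-convex-polygon test for a CCW vertex list."""
        n = len(poly)
        if cross(poly[0], poly[1], p) < 0 or cross(poly[0], poly[n - 1], p) > 0:
            return False                  # outside the angular range at vertex 0
        lo, hi = 1, n - 1
        while hi - lo > 1:                # binary search for the wedge holding p
            mid = (lo + hi) // 2
            if cross(poly[0], poly[mid], p) >= 0:
                lo = mid
            else:
                hi = mid
        return cross(poly[lo], poly[lo + 1], p) >= 0

    hexagon = [(2, 0), (1, 2), (-1, 2), (-2, 0), (-1, -2), (1, -2)]
    print(in_convex_polygon((0, 0), hexagon))   # True
    print(in_convex_polygon((3, 3), hexagon))   # False
    ```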

  12. On an open question of V. Colao and G. Marino presented in the paper "Krasnoselskii-Mann method for non-self mappings".

    PubMed

    Guo, Meifang; Li, Xia; Su, Yongfu

    2016-01-01

    Let H be a Hilbert space, let C be a closed convex nonempty subset of H, and let [Formula: see text] be a non-self nonexpansive mapping. A map [Formula: see text] is defined by [Formula: see text]. Then, for a fixed [Formula: see text] and for [Formula: see text], the Krasnoselskii-Mann algorithm is defined by [Formula: see text], where [Formula: see text]. Recently, Colao and Marino (Fixed Point Theory Appl 2015:39, 2015) have proved both weak and strong convergence theorems when C is a strictly convex set and T is an inward mapping. They also posed an open question for a countable family of non-self nonexpansive mappings. In this article, the authors give an answer and prove further generalized results, with examples to support them.

  13. Image deblurring based on nonlocal regularization with a non-convex sparsity constraint

    NASA Astrophysics Data System (ADS)

    Zhu, Simiao; Su, Zhenming; Li, Lian; Yang, Yi

    2018-04-01

    In recent years, nonlocal regularization methods for image restoration (IR) have drawn more and more attention due to the promising results obtained when compared to the traditional local regularization methods. Despite the success of this technique, most existing methods exploit a convex regularizing functional in order to obtain computational efficiency, which is equivalent to imposing a convex prior on the nonlocal difference operator output. However, our experiments illustrate that the empirical distribution of the output of the nonlocal difference operator, especially in the seminal work of Kheradmand et al., should be characterized by an extremely heavy-tailed distribution rather than a convex one. Therefore, in this paper, we propose a nonlocal regularization-based method with a non-convex sparsity constraint for image deblurring. Finally, an effective algorithm is developed to solve the corresponding non-convex optimization problem. The experimental results demonstrate the effectiveness of the proposed method.

  14. Rapid optimization of tension distribution for cable-driven parallel manipulators with redundant cables

    NASA Astrophysics Data System (ADS)

    Ouyang, Bo; Shang, Weiwei

    2016-03-01

    The tension distribution of cable-driven parallel manipulators (CDPMs) with redundant cables admits infinitely many solutions. A rapid optimization method for determining the optimal tension distribution is presented. The new optimization method is primarily based on the geometric properties of a polyhedron and convex analysis. The computational efficiency of the optimization method is improved by the designed projection algorithm, and a fast algorithm is proposed to determine which two of the lines intersect at the optimal point. Moreover, a method for avoiding operating points on the lower tension limit is developed. Simulation experiments are implemented on a six degree-of-freedom (6-DOF) CDPM with eight cables, and the results indicate that the new method is one order of magnitude faster than the standard simplex method. The optimal tension distribution is thus rapidly established in real time by the proposed method.

  15. Convex central configurations for the n-body problem

    NASA Astrophysics Data System (ADS)

    Xia, Zhihong

    We give a simple proof of a classical result of MacMillan and Bartky (Trans. Amer. Math. Soc. 34 (1932) 838) which states that, for any four positive masses and any assigned order, there is a convex planar central configuration. Moreover, we show that the central configurations we find correspond to local minima of the potential function with fixed moment of inertia. This allows us to show that there are at least six local minimum central configurations for the planar four-body problem. We also show that for any assigned order of five masses, there is at least one convex spatial central configuration of local minimum type. Our method also applies to some other cases.

  16. Rapid figure-ground responses to stereograms reveal an advantage for a convex foreground.

    PubMed

    Bertamini, Marco; Lawson, Rebecca

    2008-01-01

    Convexity has long been recognised as a factor that affects figure - ground segmentation, even when pitted against other factors such as symmetry [Kanizsa and Gerbino, 1976 Art and Artefacts Ed.M Henle (New York: Springer) pp 25-32]. It is accepted in the literature that the difference between concave and convex contours is important for the visual system, and that there is a prior expectation favouring convexities as figure. We used bipartite stimuli and a simple task in which observers had to report whether the foreground was on the left or the right. We report objective evidence that supports the idea that convexity affects figure-ground assignment, even though our stimuli were not pictorial in that depth order was specified unambiguously by binocular disparity.

  17. Integrating NOE and RDC using sum-of-squares relaxation for protein structure determination.

    PubMed

    Khoo, Y; Singer, A; Cowburn, D

    2017-07-01

    We revisit the problem of protein structure determination from geometrical restraints from NMR, using convex optimization. It is well known that the NP-hard distance geometry problem of determining atomic positions from pairwise distance restraints can be relaxed into a convex semidefinite program (SDP). However, the NOE distance restraints are often too imprecise and sparse for accurate structure determination. Residual dipolar coupling (RDC) measurements provide additional geometric information on the angles between atom-pair directions and axes of the principal-axis frame. The optimization problem involving RDC is highly non-convex and requires a good initialization even within the simulated annealing framework. In this paper, we model the protein backbone as an articulated structure composed of rigid units. Determining the rotation of each rigid unit gives the full protein structure. We propose solving the non-convex optimization problems using the sum-of-squares (SOS) hierarchy, a hierarchy of convex relaxations with increasing complexity and approximation power. Unlike classical global optimization approaches, SOS optimization returns a certificate of optimality if the global optimum is found. Based on the SOS method, we propose two algorithms, RDC-SOS and RDC-NOE-SOS, which have polynomial time complexity in the number of amino-acid residues and run efficiently on a standard desktop. In many instances, the proposed methods exactly recover the solution to the original non-convex optimization problem. To the best of our knowledge, this is the first time the SOS relaxation has been introduced to solve non-convex optimization problems in structural biology. We further introduce a statistical tool, the Cramér-Rao bound (CRB), to provide an information-theoretic bound on the highest resolution one can hope to achieve when determining protein structure from noisy measurements using any unbiased estimator. Our simulation results show that when the RDC measurements are corrupted by Gaussian noise of realistic variance, both SOS-based algorithms attain the CRB. We successfully apply our method in a divide-and-conquer fashion to determine the structure of ubiquitin from experimental NOE and RDC measurements obtained in two alignment media, achieving more accurate and faster reconstructions compared to the current state of the art.

  18. APPROXIMATING SYMMETRIC POSITIVE SEMIDEFINITE TENSORS OF EVEN ORDER*

    PubMed Central

    BARMPOUTIS, ANGELOS; JEFFREY, HO; VEMURI, BABA C.

    2012-01-01

    Tensors of various orders can be used for modeling physical quantities such as strain and diffusion as well as curvature and other quantities of geometric origin. Depending on the physical properties of the modeled quantity, the estimated tensors are often required to satisfy the positivity constraint, which can be satisfied only with tensors of even order. Although the space P_0^{2m} of 2mth-order symmetric positive semi-definite tensors is known to be a convex cone, enforcing the positivity constraint directly on P_0^{2m} is usually not straightforward computationally because there is no known analytic description of P_0^{2m} for m > 1. In this paper, we propose a novel approach for enforcing the positivity constraint on even-order tensors by approximating the cone P_0^{2m} for the cases 0 < m < 3, and presenting an explicit characterization of the approximation Σ_{2m} ⊂ Ω_{2m} for m ≥ 1, using the subset Ω_{2m} ⊂ P_0^{2m} of semi-definite tensors that can be written as a sum of squares of tensors of order m. Furthermore, we show that this approximation leads to a non-negative linear least-squares (NNLS) optimization problem with complexity that equals the number of generators in Σ_{2m}. Finally, we experimentally validate the proposed approach and we present an application for computing 2mth-order diffusion tensors from Diffusion Weighted Magnetic Resonance Images. PMID:23285313

  19. An interface reconstruction method based on an analytical formula for 3D arbitrary convex cells

    DOE PAGES

    Diot, Steven; François, Marianne M.

    2015-10-22

    In this study, we are interested in an interface reconstruction method for 3D arbitrary convex cells that could be used in multi-material flow simulations for instance. We assume that the interface is represented by a plane whose normal vector is known and we focus on the volume-matching step that consists in finding the plane constant so that it splits the cell according to a given volume fraction. We follow the same approach as in the recent authors' publication for 2D arbitrary convex cells in planar and axisymmetrical geometries, namely we derive an analytical formula for the volume of the specific prismatoids obtained when decomposing the cell using the planes that are parallel to the interface and passing through all the cell nodes. This formula is used to bracket the interface plane constant such that the volume-matching problem is rewritten in a single prismatoid in which the same formula is used to find the final solution. Finally, the proposed method is tested against an important number of reproducible configurations and shown to be at least five times faster.

  20. Fourth class of convex equilateral polyhedron with polyhedral symmetry related to fullerenes and viruses

    PubMed Central

    Schein, Stan; Gayed, James Maurice

    2014-01-01

    The three known classes of convex polyhedron with equal edge lengths and polyhedral symmetry––tetrahedral, octahedral, and icosahedral––are the 5 Platonic polyhedra, the 13 Archimedean polyhedra––including the truncated icosahedron or soccer ball––and the 2 rhombic polyhedra reported by Johannes Kepler in 1611. (Some carbon fullerenes, inorganic cages, icosahedral viruses, geodesic structures, and protein complexes resemble these fundamental shapes.) Here we add a fourth class, “Goldberg polyhedra,” which are also convex and equilateral. We begin by decorating each of the triangular facets of a tetrahedron, an octahedron, or an icosahedron with the T vertices and connecting edges of a “Goldberg triangle.” We obtain the unique set of internal angles in each planar face of each polyhedron by solving a system of n equations and n variables, where the equations set the dihedral angle discrepancy about different types of edge to zero, and the variables are a subset of the internal angles in 6gons. Like the faces in Kepler’s rhombic polyhedra, the 6gon faces in Goldberg polyhedra are equilateral and planar but not equiangular. We show that there is just a single tetrahedral Goldberg polyhedron, a single octahedral one, and a systematic, countable infinity of icosahedral ones, one for each Goldberg triangle. Unlike carbon fullerenes and faceted viruses, the icosahedral Goldberg polyhedra are nearly spherical. The reasoning and techniques presented here will enable discovery of still more classes of convex equilateral polyhedra with polyhedral symmetry. PMID:24516137

  1. Fourth class of convex equilateral polyhedron with polyhedral symmetry related to fullerenes and viruses.

    PubMed

    Schein, Stan; Gayed, James Maurice

    2014-02-25

    The three known classes of convex polyhedron with equal edge lengths and polyhedral symmetry--tetrahedral, octahedral, and icosahedral--are the 5 Platonic polyhedra, the 13 Archimedean polyhedra--including the truncated icosahedron or soccer ball--and the 2 rhombic polyhedra reported by Johannes Kepler in 1611. (Some carbon fullerenes, inorganic cages, icosahedral viruses, geodesic structures, and protein complexes resemble these fundamental shapes.) Here we add a fourth class, "Goldberg polyhedra," which are also convex and equilateral. We begin by decorating each of the triangular facets of a tetrahedron, an octahedron, or an icosahedron with the T vertices and connecting edges of a "Goldberg triangle." We obtain the unique set of internal angles in each planar face of each polyhedron by solving a system of n equations and n variables, where the equations set the dihedral angle discrepancy about different types of edge to zero, and the variables are a subset of the internal angles in 6gons. Like the faces in Kepler's rhombic polyhedra, the 6gon faces in Goldberg polyhedra are equilateral and planar but not equiangular. We show that there is just a single tetrahedral Goldberg polyhedron, a single octahedral one, and a systematic, countable infinity of icosahedral ones, one for each Goldberg triangle. Unlike carbon fullerenes and faceted viruses, the icosahedral Goldberg polyhedra are nearly spherical. The reasoning and techniques presented here will enable discovery of still more classes of convex equilateral polyhedra with polyhedral symmetry.

  2. Safe Onboard Guidance and Control Under Probabilistic Uncertainty

    NASA Technical Reports Server (NTRS)

    Blackmore, Lars James

    2011-01-01

    An algorithm was developed that determines the fuel-optimal spacecraft guidance trajectory that takes into account uncertainty, in order to guarantee that mission safety constraints are satisfied with the required probability. The algorithm uses convex optimization to solve for the optimal trajectory. Convex optimization is amenable to onboard solution due to its excellent convergence properties. The algorithm is novel because, unlike prior approaches, it does not require time-consuming evaluation of multivariate probability densities. Instead, it uses a new mathematical bounding approach to ensure that probability constraints are satisfied, and it is shown that the resulting optimization is convex. Empirical results show that the approach is many orders of magnitude less conservative than existing set conversion techniques, for a small penalty in computation time.
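
    A minimal sketch of the kind of convex trajectory program described here, written with cvxpy for a 2D double-integrator. The horizon, dynamics, endpoints, and actuator bound are illustrative assumptions; the paper's probabilistic bounding of safety constraints is not reproduced.

    ```python
    # Fuel-optimal trajectory as a convex program (illustrative sketch).
    import cvxpy as cp
    import numpy as np

    N, dt = 30, 1.0
    pos, vel = cp.Variable((N + 1, 2)), cp.Variable((N + 1, 2))
    u = cp.Variable((N, 2))                       # per-step thrust acceleration

    cons = [pos[0] == np.zeros(2), vel[0] == np.zeros(2),
            pos[N] == np.array([10.0, 5.0]), vel[N] == np.zeros(2),
            cp.norm(u, "inf") <= 0.5]             # actuator bound (assumed)
    for k in range(N):
        cons += [vel[k + 1] == vel[k] + dt * u[k],
                 pos[k + 1] == pos[k] + dt * vel[k]]

    # Fuel use modeled as the sum of per-step thrust magnitudes (convex cost).
    prob = cp.Problem(cp.Minimize(cp.sum(cp.norm(u, 2, axis=1))), cons)
    prob.solve()
    print(prob.status, round(prob.value, 3))
    ```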

  3. Central Schemes for Multi-Dimensional Hamilton-Jacobi Equations

    NASA Technical Reports Server (NTRS)

    Bryson, Steve; Levy, Doron; Biegel, Bryan (Technical Monitor)

    2002-01-01

    We present new, efficient central schemes for multi-dimensional Hamilton-Jacobi equations. These non-oscillatory, non-staggered schemes are first- and second-order accurate and are designed to scale well with an increasing dimension. Efficiency is obtained by carefully choosing the location of the evolution points and by using a one-dimensional projection step. First- and second-order accuracy is verified for a variety of multi-dimensional, convex and non-convex problems.

  4. ACCELERATED FITTING OF STELLAR SPECTRA

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ting, Yuan-Sen; Conroy, Charlie; Rix, Hans-Walter

    2016-07-20

    Stellar spectra are often modeled and fitted by interpolating within a rectilinear grid of synthetic spectra to derive the stars' labels: stellar parameters and elemental abundances. However, the number of synthetic spectra needed for a rectilinear grid grows exponentially with the label space dimensions, precluding the simultaneous and self-consistent fitting of more than a few elemental abundances. Shortcuts such as fitting subsets of labels separately can introduce unknown systematics and do not produce correct error covariances in the derived labels. In this paper we present a new approach, Convex Hull Adaptive Tessellation (CHAT), which includes several new ideas for inexpensively generating a sufficient stellar synthetic library, using linear algebra and the concept of an adaptive, data-driven grid. A convex hull approximates the region where the data lie in the label space. A variety of tests with mock data sets demonstrate that CHAT can reduce the number of required synthetic model calculations by three orders of magnitude in an eight-dimensional label space. The reduction will be even larger for higher dimensional label spaces. In CHAT the computational effort increases only linearly with the number of labels that are fit simultaneously. Around each of these grid points in the label space an approximate synthetic spectrum can be generated through linear expansion using a set of “gradient spectra” that represent flux derivatives at every wavelength point with respect to all labels. These techniques provide new opportunities to fit the full stellar spectra from large surveys with 15–30 labels simultaneously.

  5. Efficient isoparametric integration over arbitrary space-filling Voronoi polyhedra for electronic structure calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Alam, Aftab; Khan, S. N.; Wilson, Brian G.

    2011-07-06

    A numerically efficient, accurate, and easily implemented integration scheme over convex Voronoi polyhedra (VP) is presented for use in ab initio electronic-structure calculations. We combine a weighted Voronoi tessellation with isoparametric integration via Gauss-Legendre quadratures to provide rapidly convergent VP integrals for a variety of integrands, including those with a Coulomb singularity. We showcase the capability of our approach by first applying it to an analytic charge-density model, achieving machine-precision accuracy with expected convergence properties in milliseconds. For contrast, we compare our results to those using shape functions and show our approach is greater than 10^5 times faster and 10^7 times more accurate. Furthermore, a weighted Voronoi tessellation also allows for a physics-based partitioning of space that guarantees convex, space-filling VP while reflecting accurate atomic size and site charges, as we show within KKR methods applied to Fe-Pd alloys.

  6. Algorithms for Mathematical Programming with Emphasis on Bi-level Models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Goldfarb, Donald; Iyengar, Garud

    2014-05-22

    The research supported by this grant was focused primarily on first-order methods for solving large scale and structured convex optimization problems and convex relaxations of nonconvex problems. These include optimal gradient methods, operator and variable splitting methods, alternating direction augmented Lagrangian methods, and block coordinate descent methods.

  7. Numerical algorithms for scatter-to-attenuation reconstruction in PET: empirical comparison of convergence, acceleration, and the effect of subsets.

    PubMed

    Berker, Yannick; Karp, Joel S; Schulz, Volkmar

    2017-09-01

    The use of scattered coincidences for attenuation correction of positron emission tomography (PET) data has recently been proposed. For practical applications, convergence speeds require further improvement, yet there exists a trade-off between convergence speed and the risk of non-convergence. In this respect, a maximum-likelihood gradient-ascent (MLGA) algorithm and a previously proposed two-branch back-projection (2BP) were evaluated. MLGA was combined with the Armijo step size rule and accelerated using conjugate gradients, Nesterov's momentum method, and data subsets of different sizes. In 2BP, we varied the subset size, an important determinant of convergence speed and computational burden. We used three sets of simulation data to evaluate the impact of a spatial scale factor. The Armijo step size allowed 10-fold increased step sizes compared to native MLGA. Conjugate gradients and Nesterov momentum led to slightly faster, yet non-uniform convergence; improvements were mostly confined to later iterations, possibly due to the non-linearity of the problem. MLGA with data subsets achieved faster, uniform, and predictable convergence, with a speed-up factor equivalent to the number of subsets and no increase in computational burden. By contrast, 2BP computational burden increased linearly with the number of subsets due to repeated evaluation of the objective function, and convergence was limited to the case of many (and therefore small) subsets, which resulted in high computational burden. Possibilities of improving 2BP appear limited. While general-purpose acceleration methods appear insufficient for MLGA, results suggest that data subsets are a promising way of improving MLGA performance.
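
    The subset acceleration lends itself to a generic sketch: ordered-subsets gradient ascent in which each sub-iteration scales one subset's gradient as a surrogate for the full-data gradient. The names below are placeholders, not the authors' exact MLGA implementation.

    ```python
    # Generic ordered-subsets gradient ascent (assumed form).
    import numpy as np

    def os_gradient_ascent(grad_subset, x0, n_subsets, n_iters, step):
        """grad_subset(x, s) returns the log-likelihood gradient of subset s."""
        x = np.asarray(x0, dtype=float).copy()
        for _ in range(n_iters):
            for s in range(n_subsets):
                # Scaling by n_subsets approximates the full-data gradient,
                # which is what yields the subset-level speed-up.
                x += step * n_subsets * grad_subset(x, s)
        return x
    ```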

  8. A Fixed Point Theorem in Weak Topology for Successively Recurrent System of Set-Valued Mapping Equations and Its Applications

    NASA Astrophysics Data System (ADS)

    Horiuchi, Kazuo

    Let us introduce n (≥ 2) mappings fi (i = 1, …, n ≡ 0) defined on reflexive real Banach spaces Xi-1 and let fi : Xi-1 → Yi be completely continuous on bounded convex closed subsets Xi-1(0) ⊂ Xi-1. Moreover, let us introduce n set-valued mappings Fi : Xi-1 × Yi → Fc(Xi) (the family of all non-empty compact subsets of Xi) (i = 1, …, n ≡ 0). Here, we have a fixed point theorem in weak topology on the successively recurrent system of set-valued mapping equations: xi ∈ Fi(xi-1, fi(xi-1)) (i = 1, …, n ≡ 0). This theorem can be applied immediately to the analysis of the availability of systems of circular networks of channels subject to uncertain fluctuations and to the evaluation of the tolerability of the behaviors of those systems.

  9. The discrete one-sided Lipschitz condition for convex scalar conservation laws

    NASA Technical Reports Server (NTRS)

    Brenier, Yann; Osher, Stanley

    1986-01-01

    Physical solutions to convex scalar conservation laws satisfy a one-sided Lipschitz condition (OSLC) that enforces both the entropy condition and their variation boundedness. Consistency with this condition is therefore desirable for a numerical scheme and was proved for both the Godunov and the Lax-Friedrichs scheme--also, in a weakened version, for the Roe scheme, all of them being only first order accurate. A new, fully second order scheme is introduced here, which is consistent with the OSLC. The modified equation is considered and shows interesting features. Another second order scheme is then considered and numerical results are discussed.

  10. An Improved Search Approach for Solving Non-Convex Mixed-Integer Nonlinear Programming Problems

    NASA Astrophysics Data System (ADS)

    Sitopu, Joni Wilson; Mawengkang, Herman; Syafitri Lubis, Riri

    2018-01-01

    The nonlinear mathematical programming problem addressed in this paper has a structure characterized by a subset of variables restricted to assume discrete values, which are linear and separable from the continuous variables. The strategy of releasing nonbasic variables from their bounds, combined with the "active constraint" method, has been developed. This strategy is used to force the appropriate non-integer basic variables to move to neighbouring integer points. Successful implementation of these algorithms was achieved on various test problems.

  11. Evolutionary variational-hemivariational inequalities

    NASA Astrophysics Data System (ADS)

    Carl, Siegfried; Le, Vy K.; Motreanu, Dumitru

    2008-09-01

    We consider an evolutionary quasilinear hemivariational inequality under constraints represented by some closed and convex subset. Our main goal is to systematically develop the method of sub-supersolution, on the basis of which we then prove existence, comparison, compactness and extremality results. The obtained results are applied to a general obstacle problem. We improve the corresponding results in the recent monograph [S. Carl, V.K. Le, D. Motreanu, Nonsmooth Variational Problems and Their Inequalities: Comparison Principles and Applications, Springer Monogr. Math., Springer, New York, 2007].

  12. Investigating Evolutionary Conservation of Dendritic Cell Subset Identity and Functions

    PubMed Central

    Vu Manh, Thien-Phong; Bertho, Nicolas; Hosmalin, Anne; Schwartz-Cornil, Isabelle; Dalod, Marc

    2015-01-01

    Dendritic cells (DCs) were initially defined as mononuclear phagocytes with a dendritic morphology and an exquisite efficiency for naïve T-cell activation. DC encompass several subsets initially identified by their expression of specific cell surface molecules and later shown to excel in distinct functions and to develop under the instruction of different transcription factors or cytokines. Very few cell surface molecules are expressed in a specific manner on any immune cell type. Hence, to identify cell types, the sole use of a small number of cell surface markers in classical flow cytometry can be deceiving. Moreover, the markers currently used to define mononuclear phagocyte subsets vary depending on the tissue and animal species studied and even between laboratories. This has led to confusion in the definition of DC subset identity and in their attribution of specific functions. There is a strong need to identify a rigorous and consensus way to define mononuclear phagocyte subsets, with precise guidelines potentially applicable throughout tissues and species. We will discuss the advantages, drawbacks, and complementarities of different methodologies: cell surface phenotyping, ontogeny, functional characterization, and molecular profiling. We will advocate that gene expression profiling is a very rigorous, largely unbiased and accessible method to define the identity of mononuclear phagocyte subsets, which strengthens and refines surface phenotyping. It is uniquely powerful to yield new, experimentally testable, hypotheses on the ontogeny or functions of mononuclear phagocyte subsets, their molecular regulation, and their evolutionary conservation. We propose defining cell populations based on a combination of cell surface phenotyping, expression analysis of hallmark genes, and robust functional assays, in order to reach a consensus and integrate faster the huge but scattered knowledge accumulated by different laboratories on different cell types, organs, and species. PMID:26082777

  13. PANTHER. Trajectory Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rintoul, Mark Daniel; Wilson, Andrew T.; Valicka, Christopher G.

    We want to organize a body of trajectories in order to identify, search for, classify and predict behavior among objects such as aircraft and ships. Existing comparison functions such as the Fréchet distance are computationally expensive and yield counterintuitive results in some cases. We propose an approach using feature vectors whose components represent succinctly the salient information in trajectories. These features incorporate basic information such as total distance traveled and distance between start/stop points as well as geometric features related to the properties of the convex hull, trajectory curvature and general distance geometry. Additionally, these features can generally be mapped easily to behaviors of interest to humans that are searching large databases. Most of these geometric features are invariant under rigid transformation. We demonstrate the use of different subsets of these features to identify trajectories similar to an exemplar, cluster a database of several hundred thousand trajectories, predict destination and apply unsupervised machine learning algorithms.

  14. Trajectory analysis via a geometric feature space approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rintoul, Mark D.; Wilson, Andrew T.

    This study aimed to organize a body of trajectories in order to identify, search for and classify both common and uncommon behaviors among objects such as aircraft and ships. Existing comparison functions such as the Fréchet distance are computationally expensive and yield counterintuitive results in some cases. We propose an approach using feature vectors whose components represent succinctly the salient information in trajectories. These features incorporate basic information such as the total distance traveled and the distance between start/stop points as well as geometric features related to the properties of the convex hull, trajectory curvature and general distance geometry. Additionally, these features can generally be mapped easily to behaviors of interest to humans who are searching large databases. Most of these geometric features are invariant under rigid transformation. Furthermore, we demonstrate the use of different subsets of these features to identify trajectories similar to an exemplar, cluster a database of several hundred thousand trajectories and identify outliers.

  15. Trajectory analysis via a geometric feature space approach

    DOE PAGES

    Rintoul, Mark D.; Wilson, Andrew T.

    2015-10-05

    This study aimed to organize a body of trajectories in order to identify, search for and classify both common and uncommon behaviors among objects such as aircraft and ships. Existing comparison functions such as the Fréchet distance are computationally expensive and yield counterintuitive results in some cases. We propose an approach using feature vectors whose components represent succinctly the salient information in trajectories. These features incorporate basic information such as the total distance traveled and the distance between start/stop points as well as geometric features related to the properties of the convex hull, trajectory curvature and general distance geometry. Additionally, these features can generally be mapped easily to behaviors of interest to humans who are searching large databases. Most of these geometric features are invariant under rigid transformation. Furthermore, we demonstrate the use of different subsets of these features to identify trajectories similar to an exemplar, cluster a database of several hundred thousand trajectories and identify outliers.

  16. A Fast Algorithm of Convex Hull Vertices Selection for Online Classification.

    PubMed

    Ding, Shuguang; Nie, Xiangli; Qiao, Hong; Zhang, Bo

    2018-04-01

    Reducing samples through convex hull vertices selection (CHVS) within each class is an important and effective method for online classification problems, since the classifier can be trained rapidly with the selected samples. However, the process of CHVS is NP-hard. In this paper, we propose a fast algorithm to select the convex hull vertices, based on the convex hull decomposition and the property of projection. In the proposed algorithm, the quadratic minimization problem of computing the distance between a point and a convex hull is converted into a linear equation problem with a low computational complexity. When the data dimension is high, an approximate, instead of exact, convex hull is allowed to be selected by setting an appropriate termination condition in order to delete more nonimportant samples. In addition, the impact of outliers is also considered, and the proposed algorithm is improved by deleting the outliers in the initial procedure. Furthermore, a dimension conversion technique via the kernel trick is used to deal with nonlinearly separable problems. An upper bound is theoretically proved for the difference between the support vector machines based on the approximate convex hull vertices selected and all the training samples. Experimental results on both synthetic and real data sets show the effectiveness and validity of the proposed algorithm.
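
    For reference, the quantity the algorithm accelerates, the distance from a point to a convex hull, can be posed as the standard quadratic program over convex-combination weights. This cvxpy sketch is the slow baseline, not the authors' linear-equation reformulation.

    ```python
    # Distance from a query point to the convex hull of a point set (baseline QP).
    import cvxpy as cp
    import numpy as np

    def dist_to_hull(points, p):
        """points: (n, d) array of samples; p: (d,) query point."""
        w = cp.Variable(points.shape[0], nonneg=True)   # convex weights
        prob = cp.Problem(cp.Minimize(cp.sum_squares(points.T @ w - p)),
                          [cp.sum(w) == 1])
        prob.solve()
        return float(np.sqrt(max(prob.value, 0.0)))
    ```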

  17. Quasilinear parabolic variational inequalities with multi-valued lower-order terms

    NASA Astrophysics Data System (ADS)

    Carl, Siegfried; Le, Vy K.

    2014-10-01

    In this paper, we provide an analytical framework for a class of multi-valued parabolic variational inequalities in a cylindrical domain, in which the constraint set is some closed and convex subset, A is a time-dependent quasilinear elliptic operator, and the multi-valued lower-order term is assumed to be upper semicontinuous only, so that Clarke's generalized gradient is included as a special case. Thus, parabolic variational-hemivariational inequalities are special cases of the problem considered here. The extension of parabolic variational-hemivariational inequalities to the general class of multi-valued problems considered in this paper is not only of disciplinary interest, but is motivated by the need in applications. The main goals are as follows. First, we provide an existence theory for the above-stated problem under coercivity assumptions. Second, in the noncoercive case, we establish an appropriate sub-supersolution method that allows us to get existence, comparison, and enclosure results. Third, the order structure of the solution set enclosed by sub-supersolutions is revealed. In particular, it is shown that the solution set within the sector of sub-supersolutions is a directed set. As an application, a multi-valued parabolic obstacle problem is treated.

  18. The contour-buildup algorithm to calculate the analytical molecular surface.

    PubMed

    Totrov, M; Abagyan, R

    1996-01-01

    A new algorithm is presented to calculate the analytical molecular surface defined as a smooth envelope traced out by the surface of a probe sphere rolled over the molecule. The core of the algorithm is the sequential build-up of multi-arc contours on the van der Waals spheres. This algorithm yields substantial reduction in both the memory and time requirements of surface calculations. Further, the contour-buildup principle is intrinsically "local", which makes calculations of partial molecular surfaces even more efficient. Additionally, the algorithm is equally applicable not only to convex patches, but also to concave triangular patches, which may have complex multiple intersections. The algorithm permits the rigorous calculation of the full analytical molecular surface for a 100-residue protein in about 2 seconds on an SGI Indigo with an R4400 processor at 150 MHz, with the performance scaling almost linearly with protein size. The contour-buildup algorithm is faster than the original Connolly algorithm by an order of magnitude.

  19. H∞ memory feedback control with input limitation minimization for offshore jacket platform stabilization

    NASA Astrophysics Data System (ADS)

    Yang, Jia Sheng

    2018-06-01

    In this paper, we investigate a H∞ memory controller with input limitation minimization (HMCIM) for offshore jacket platforms stabilization. The main objective of this study is to reduce the control consumption as well as protect the actuator when satisfying the requirement of the system performance. First, we introduce a dynamic model of offshore platform with low order main modes based on mode reduction method in numerical analysis. Then, based on H∞ control theory and matrix inequality techniques, we develop a novel H∞ memory controller with input limitation. Furthermore, a non-convex optimization model to minimize input energy consumption is proposed. Since it is difficult to solve this non-convex optimization model by optimization algorithm, we use a relaxation method with matrix operations to transform this non-convex optimization model to be a convex optimization model. Thus, it could be solved by a standard convex optimization solver in MATLAB or CPLEX. Finally, several numerical examples are given to validate the proposed models and methods.

  20. Data reduction using cubic rational B-splines

    NASA Technical Reports Server (NTRS)

    Chou, Jin J.; Piegl, Les A.

    1992-01-01

    A geometric method is proposed for fitting rational cubic B-spline curves to data that represent smooth curves including intersection or silhouette lines. The algorithm is based on the convex hull and the variation diminishing properties of Bezier/B-spline curves. The algorithm has the following structure: it tries to fit one Bezier segment to the entire data set and if it is impossible it subdivides the data set and reconsiders the subset. After accepting the subset the algorithm tries to find the longest run of points within a tolerance and then approximates this set with a Bezier cubic segment. The algorithm uses this procedure repeatedly to the rest of the data points until all points are fitted. It is concluded that the algorithm delivers fitting curves which approximate the data with high accuracy even in cases with large tolerances.
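
    The fit-and-subdivide control flow described above can be sketched as follows; fit_bezier and within_tol are hypothetical callables standing in for the paper's convex-hull-based fitting and tolerance test.

    ```python
    # Skeleton of the greedy fit-and-subdivide loop (assumed control flow).
    def reduce_data(points, fit_bezier, within_tol):
        segments, start = [], 0
        while start < len(points) - 1:
            end = len(points)
            curve = fit_bezier(points[start:end])
            # Halve the run until one cubic Bezier segment fits within tolerance.
            while not within_tol(curve, points[start:end]) and end > start + 2:
                end = (start + end) // 2 + 1
                curve = fit_bezier(points[start:end])
            segments.append(curve)
            start = end - 1   # the joint point is shared with the next segment
        return segments
    ```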

  1. Measurement-Device-Independent Approach to Entanglement Measures

    NASA Astrophysics Data System (ADS)

    Shahandeh, Farid; Hall, Michael J. W.; Ralph, Timothy C.

    2017-04-01

    Within the context of semiquantum nonlocal games, the trust can be removed from the measurement devices in an entanglement-detection procedure. Here, we show that a similar approach can be taken to quantify the amount of entanglement. To be specific, first, we show that in this context, a small subset of semiquantum nonlocal games is necessary and sufficient for entanglement detection in the local operations and classical communication paradigm. Second, we prove that the maximum payoff for these games is a universal measure of entanglement which is convex and continuous. Third, we show that for the quantification of negative-partial-transpose entanglement, this subset can be further reduced down to a single arbitrary element. Importantly, our measure is measurement device independent by construction and operationally accessible. Finally, our approach straightforwardly extends to quantify the entanglement within any partitioning of multipartite quantum states.

  2. Convex Banding of the Covariance Matrix

    PubMed Central

    Bien, Jacob; Bunea, Florentina; Xiao, Luo

    2016-01-01

    We introduce a new sparse estimator of the covariance matrix for high-dimensional models in which the variables have a known ordering. Our estimator, which is the solution to a convex optimization problem, is equivalently expressed as an estimator which tapers the sample covariance matrix by a Toeplitz, sparsely-banded, data-adaptive matrix. As a result of this adaptivity, the convex banding estimator enjoys theoretical optimality properties not attained by previous banding or tapered estimators. In particular, our convex banding estimator is minimax rate adaptive in Frobenius and operator norms, up to log factors, over commonly-studied classes of covariance matrices, and over more general classes. Furthermore, it correctly recovers the bandwidth when the true covariance is exactly banded. Our convex formulation admits a simple and efficient algorithm. Empirical studies demonstrate its practical effectiveness and illustrate that our exactly-banded estimator works well even when the true covariance matrix is only close to a banded matrix, confirming our theoretical results. Our method compares favorably with all existing methods, in terms of accuracy and speed. We illustrate the practical merits of the convex banding estimator by showing that it can be used to improve the performance of discriminant analysis for classifying sound recordings. PMID:28042189
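
    The simplest member of this estimator family is hard banding of the sample covariance, sketched below for orientation; the paper's convex estimator instead tapers adaptively, which this illustration does not reproduce.

    ```python
    # Hard banding of the sample covariance for variables with a known ordering.
    import numpy as np

    def band_covariance(X, k):
        """X: (n, p) data matrix with ordered variables; k: bandwidth to keep."""
        S = np.cov(X, rowvar=False)
        p = S.shape[0]
        offsets = np.abs(np.subtract.outer(np.arange(p), np.arange(p)))
        return S * (offsets <= k)   # zero out entries beyond the band
    ```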

  3. Convex Banding of the Covariance Matrix.

    PubMed

    Bien, Jacob; Bunea, Florentina; Xiao, Luo

    2016-01-01

    We introduce a new sparse estimator of the covariance matrix for high-dimensional models in which the variables have a known ordering. Our estimator, which is the solution to a convex optimization problem, is equivalently expressed as an estimator which tapers the sample covariance matrix by a Toeplitz, sparsely-banded, data-adaptive matrix. As a result of this adaptivity, the convex banding estimator enjoys theoretical optimality properties not attained by previous banding or tapered estimators. In particular, our convex banding estimator is minimax rate adaptive in Frobenius and operator norms, up to log factors, over commonly-studied classes of covariance matrices, and over more general classes. Furthermore, it correctly recovers the bandwidth when the true covariance is exactly banded. Our convex formulation admits a simple and efficient algorithm. Empirical studies demonstrate its practical effectiveness and illustrate that our exactly-banded estimator works well even when the true covariance matrix is only close to a banded matrix, confirming our theoretical results. Our method compares favorably with all existing methods, in terms of accuracy and speed. We illustrate the practical merits of the convex banding estimator by showing that it can be used to improve the performance of discriminant analysis for classifying sound recordings.

  4. Bypassing the Limits of ℓ1 Regularization: Convex Sparse Signal Processing Using Non-Convex Regularization

    NASA Astrophysics Data System (ADS)

    Parekh, Ankit

    Sparsity has become the basis of some important signal processing methods over the last ten years. Many signal processing problems (e.g., denoising, deconvolution, non-linear component analysis) can be expressed as inverse problems. Sparsity is invoked through the formulation of an inverse problem with suitably designed regularization terms. The regularization terms alone encode sparsity into the problem formulation. Often, the ℓ1 norm is used to induce sparsity, so much so that ℓ1 regularization is considered to be 'modern least-squares'. The use of the ℓ1 norm, as a sparsity-inducing regularizer, leads to a convex optimization problem, which has several benefits: the absence of extraneous local minima and a well-developed theory of globally convergent algorithms, even for large-scale problems. Convex regularization via the ℓ1 norm, however, tends to under-estimate the non-zero values of sparse signals. In order to estimate the non-zero values more accurately, non-convex regularization is often favored over convex regularization. However, non-convex regularization generally leads to non-convex optimization, which suffers from numerous issues: convergence may be guaranteed to only a stationary point, problem-specific parameters may be difficult to set, and the solution is sensitive to the initialization of the algorithm. The first part of this thesis is aimed toward combining the benefits of non-convex regularization and convex optimization to estimate sparse signals more effectively. To this end, we propose to use parameterized non-convex regularizers with designated non-convexity and provide a range for the non-convex parameter so as to ensure that the objective function is strictly convex. By ensuring convexity of the objective function (sum of data-fidelity and non-convex regularizer), we can make use of a wide variety of convex optimization algorithms to obtain the unique global minimum reliably. The second part of this thesis proposes a non-linear signal decomposition technique for an important biomedical signal processing problem: the detection of sleep spindles and K-complexes in human sleep electroencephalography (EEG). We propose a non-linear model for the EEG consisting of three components: (1) a transient (sparse piecewise constant) component, (2) a low-frequency component, and (3) an oscillatory component. The oscillatory component admits a sparse time-frequency representation. Using a convex objective function, we propose a fast non-linear optimization algorithm to estimate the three components in the proposed signal model. The low-frequency and oscillatory components are then used to estimate the K-complexes and sleep spindles respectively. The proposed detection method is shown to outperform several state-of-the-art automated sleep spindle detection methods.

  5. Fast and accurate matrix completion via truncated nuclear norm regularization.

    PubMed

    Hu, Yao; Zhang, Debing; Ye, Jieping; Li, Xuelong; He, Xiaofei

    2013-09-01

    Recovering a large matrix from a small subset of its entries is a challenging problem arising in many real applications, such as image inpainting and recommender systems. Many existing approaches formulate this problem as a general low-rank matrix approximation problem. Since the rank operator is nonconvex and discontinuous, most of the recent theoretical studies use the nuclear norm as a convex relaxation. One major limitation of the existing approaches based on nuclear norm minimization is that all the singular values are simultaneously minimized, and thus the rank may not be well approximated in practice. In this paper, we propose to achieve a better approximation to the rank of a matrix by the truncated nuclear norm, which is given by the nuclear norm minus the sum of the largest few singular values. In addition, we develop a novel matrix completion algorithm by minimizing the truncated nuclear norm. We further develop three efficient iterative procedures, TNNR-ADMM, TNNR-APGL, and TNNR-ADMMAP, to solve the optimization problem. TNNR-ADMM utilizes the alternating direction method of multipliers (ADMM), while TNNR-APGL applies the accelerated proximal gradient line search method (APGL) for the final optimization. For TNNR-ADMMAP, we make use of an adaptive penalty according to a novel update rule for ADMM to achieve a faster convergence rate. Our empirical study shows encouraging results of the proposed algorithms in comparison to the state-of-the-art matrix completion algorithms on both synthetic and real visual datasets.
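
    The core idea, penalizing only the tail singular values, can be sketched as a truncated singular-value shrinkage step; this is an illustrative operator, not the exact TNNR-ADMM/APGL update.

    ```python
    # Singular-value shrinkage that leaves the r largest values untouched.
    import numpy as np

    def truncated_svt(M, r, tau):
        U, s, Vt = np.linalg.svd(M, full_matrices=False)
        s[r:] = np.maximum(s[r:] - tau, 0.0)   # shrink only the tail values
        return (U * s) @ Vt
    ```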

  6. First-order convex feasibility algorithms for x-ray CT

    PubMed Central

    Sidky, Emil Y.; Jørgensen, Jakob S.; Pan, Xiaochuan

    2013-01-01

    Purpose: Iterative image reconstruction (IIR) algorithms in computed tomography (CT) are based on algorithms for solving a particular optimization problem. Design of the IIR algorithm, therefore, is aided by knowledge of the solution to the optimization problem on which it is based. Oftentimes, however, it is impractical to achieve an accurate solution to the optimization of interest, which complicates the design of IIR algorithms. This issue is particularly acute for CT with a limited angular-range scan, which leads to poorly conditioned system matrices and difficult-to-solve optimization problems. In this paper, we develop IIR algorithms which solve a certain type of optimization called convex feasibility. The convex feasibility approach can provide alternatives to unconstrained optimization approaches and at the same time allow for rapidly convergent algorithms for their solution, thereby facilitating the IIR algorithm design process. Methods: An accelerated version of the Chambolle-Pock (CP) algorithm is adapted to various convex feasibility problems of potential interest to IIR in CT. One of the proposed problems is seen to be equivalent to least-squares minimization, and two other problems provide alternatives to penalized, least-squares minimization. Results: The accelerated CP algorithms are demonstrated on a simulation of circular fan-beam CT with a limited scanning arc of 144°. The CP algorithms are seen in the empirical results to converge to the solution of their respective convex feasibility problems. Conclusions: Formulation of convex feasibility problems can provide a useful alternative to unconstrained optimization when designing IIR algorithms for CT. The approach is amenable to recent methods for accelerating first-order algorithms which may be particularly useful for CT with limited angular-range scanning. The present paper demonstrates the methodology, and future work will illustrate its utility in actual CT application. PMID:23464295

  7. Higher order sensitivity of solutions to convex programming problems without strict complementarity

    NASA Technical Reports Server (NTRS)

    Malanowski, Kazimierz

    1988-01-01

    Consideration is given to a family of convex programming problems which depend on a vector parameter. It is shown that the solutions of the problems and the associated Lagrange multipliers are arbitrarily many times directionally differentiable functions of the parameter, provided that the data of the problems are sufficiently regular. The characterizations of the respective derivatives are given.

  8. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, L; Han, Y; Jin, M

    Purpose: To develop an iterative reconstruction method for X-ray CT in which the reconstruction can quickly converge to the desired solution with much reduced projection views. Methods: The reconstruction is formulated as a convex feasibility problem, i.e. the solution is an intersection of three convex sets: 1) the data fidelity (DF) set, in which the L2 norm of the difference between the observed projections and those from the reconstructed image is no greater than an error bound; 2) the non-negativity of image voxels (NN) set; and 3) the piecewise constant (PC) set, in which the total variation (TV) of the reconstructed image is no greater than an upper bound. The solution can be found by applying projection onto convex sets (POCS) sequentially for these three convex sets. Specifically, the algebraic reconstruction technique and setting negative voxels to zero are used for projection onto the DF and NN sets, respectively, while the projection onto the PC set is achieved by solving a standard Rudin, Osher, and Fatemi (ROF) model. The proposed method is named full sequential POCS (FS-POCS) and is tested using the Shepp-Logan phantom and the Catphan600 phantom and compared with two similar algorithms, TV-POCS and CP-TV. Results: Using the Shepp-Logan phantom, the root mean square error (RMSE) of the reconstructed images changing along with the number of iterations is used as the convergence measurement. In general, FS-POCS converges faster than TV-POCS and CP-TV, especially with fewer projection views. FS-POCS can also achieve accurate reconstruction of cone-beam CT of the Catphan600 phantom using only 54 views, comparable to that of FDK using 364 views. Conclusion: We developed an efficient iterative reconstruction for sparse-view CT using full sequential POCS. The simulation and physical phantom data demonstrated the computational efficiency and effectiveness of FS-POCS.
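
    The FS-POCS structure reduces to a short loop once the three projections are available; in this sketch the projection operators are placeholders for the ART sweep, the nonnegativity clip, and the ROF solve named in the abstract.

    ```python
    # Minimal sequential-POCS loop in the spirit of FS-POCS (assumed structure).
    import numpy as np

    def fs_pocs(x0, project_df, project_nn, project_pc, n_iters):
        x = x0.copy()
        for _ in range(n_iters):
            x = project_df(x)   # project onto the data-fidelity set (ART sweep)
            x = project_nn(x)   # project onto the nonnegativity set
            x = project_pc(x)   # project onto the piecewise-constant (TV) set
        return x

    # The nonnegativity projection is exactly a clip:
    project_nn = lambda x: np.maximum(x, 0.0)
    ```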

  9. Convex Clustering: An Attractive Alternative to Hierarchical Clustering

    PubMed Central

    Chen, Gary K.; Chi, Eric C.; Ranola, John Michael O.; Lange, Kenneth

    2015-01-01

    The primary goal in cluster analysis is to discover natural groupings of objects. The field of cluster analysis is crowded with diverse methods that make special assumptions about data and address different scientific aims. Despite its shortcomings in accuracy, hierarchical clustering is the dominant clustering method in bioinformatics. Biologists find the trees constructed by hierarchical clustering visually appealing and in tune with their evolutionary perspective. Hierarchical clustering operates on multiple scales simultaneously. This is essential, for instance, in transcriptome data, where one may be interested in making qualitative inferences about how lower-order relationships like gene modules lead to higher-order relationships like pathways or biological processes. The recently developed method of convex clustering preserves the visual appeal of hierarchical clustering while ameliorating its propensity to make false inferences in the presence of outliers and noise. The solution paths generated by convex clustering reveal relationships between clusters that are hidden by static methods such as k-means clustering. The current paper derives and tests a novel proximal distance algorithm for minimizing the objective function of convex clustering. The algorithm separates parameters, accommodates missing data, and supports prior information on relationships. Our program CONVEXCLUSTER incorporating the algorithm is implemented on ATI and nVidia graphics processing units (GPUs) for maximal speed. Several biological examples illustrate the strengths of convex clustering and the ability of the proximal distance algorithm to handle high-dimensional problems. CONVEXCLUSTER can be freely downloaded from the UCLA Human Genetics web site at http://www.genetics.ucla.edu/software/ PMID:25965340
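
    For orientation, the objective being minimized has the following commonly used form; this direct transcription assumes Euclidean centroids U and pairwise weights w, and does not reproduce the paper's proximal distance algorithm.

    ```python
    # The convex clustering objective: quadratic fit plus weighted fusion
    # penalties that merge centroids as gamma grows (one centroid per point).
    import numpy as np

    def convex_clustering_objective(X, U, gamma, w):
        """X, U: (n, d) arrays; w: dict mapping pairs (i, j) to weights."""
        fit = 0.5 * np.sum((X - U) ** 2)
        fusion = sum(wij * np.linalg.norm(U[i] - U[j])
                     for (i, j), wij in w.items())
        return fit + gamma * fusion
    ```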

  10. Convex clustering: an attractive alternative to hierarchical clustering.

    PubMed

    Chen, Gary K; Chi, Eric C; Ranola, John Michael O; Lange, Kenneth

    2015-05-01

    The primary goal in cluster analysis is to discover natural groupings of objects. The field of cluster analysis is crowded with diverse methods that make special assumptions about data and address different scientific aims. Despite its shortcomings in accuracy, hierarchical clustering is the dominant clustering method in bioinformatics. Biologists find the trees constructed by hierarchical clustering visually appealing and in tune with their evolutionary perspective. Hierarchical clustering operates on multiple scales simultaneously. This is essential, for instance, in transcriptome data, where one may be interested in making qualitative inferences about how lower-order relationships like gene modules lead to higher-order relationships like pathways or biological processes. The recently developed method of convex clustering preserves the visual appeal of hierarchical clustering while ameliorating its propensity to make false inferences in the presence of outliers and noise. The solution paths generated by convex clustering reveal relationships between clusters that are hidden by static methods such as k-means clustering. The current paper derives and tests a novel proximal distance algorithm for minimizing the objective function of convex clustering. The algorithm separates parameters, accommodates missing data, and supports prior information on relationships. Our program CONVEXCLUSTER incorporating the algorithm is implemented on ATI and nVidia graphics processing units (GPUs) for maximal speed. Several biological examples illustrate the strengths of convex clustering and the ability of the proximal distance algorithm to handle high-dimensional problems. CONVEXCLUSTER can be freely downloaded from the UCLA Human Genetics web site at http://www.genetics.ucla.edu/software/.

  11. Evaluating convex roof entanglement measures.

    PubMed

    Tóth, Géza; Moroder, Tobias; Gühne, Otfried

    2015-04-24

    We show a powerful method to compute entanglement measures based on convex roof constructions. In particular, our method is applicable to measures that, for pure states, can be written as low order polynomials of operator expectation values. We show how to compute the linear entropy of entanglement, the linear entanglement of assistance, and a bound on the dimension of the entanglement for bipartite systems. We discuss how to obtain the convex roof of the three-tangle for three-qubit states. We also show how to calculate the linear entropy of entanglement and the quantum Fisher information based on partial information or device independent information. We demonstrate the usefulness of our method by concrete examples.

  12. A STRICTLY CONTRACTIVE PEACEMAN-RACHFORD SPLITTING METHOD FOR CONVEX PROGRAMMING.

    PubMed

    Bingsheng, He; Liu, Han; Wang, Zhaoran; Yuan, Xiaoming

    2014-07-01

    In this paper, we focus on the application of the Peaceman-Rachford splitting method (PRSM) to a convex minimization model with linear constraints and a separable objective function. Compared to the Douglas-Rachford splitting method (DRSM), another splitting method from which the alternating direction method of multipliers originates, PRSM requires more restrictive assumptions to ensure its convergence, while it is always faster whenever it is convergent. We first illustrate that the reason for this difference is that the iterative sequence generated by DRSM is strictly contractive, while that generated by PRSM is only contractive with respect to the solution set of the model. With only the convexity assumption on the objective function of the model under consideration, the convergence of PRSM is not guaranteed. But for this case, we show that the first t iterations of PRSM still enable us to find an approximate solution with an accuracy of O(1/t). A worst-case O(1/t) convergence rate of PRSM in the ergodic sense is thus established under mild assumptions. After that, we suggest attaching an underdetermined relaxation factor with PRSM to guarantee the strict contraction of its iterative sequence and thus propose a strictly contractive PRSM. A worst-case O(1/t) convergence rate of this strictly contractive PRSM in a nonergodic sense is established. We show the numerical efficiency of the strictly contractive PRSM by some applications in statistical learning and image processing.
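
    A sketch of the strictly contractive PRSM iteration, specialized to a toy instance (min 0.5*||x - c||^2 + mu*||z||_1 subject to x = z) so that both subproblems have closed forms; the relaxation factor alpha in (0, 1) is what enforces the strict contraction. This is an illustrative instance, not the paper's general solver.

    ```python
    # Strictly contractive PRSM on a closed-form toy problem.
    import numpy as np

    def soft(v, t):
        return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

    def prsm_l1(c, mu, beta=1.0, alpha=0.9, n_iters=200):
        z = np.zeros_like(c)
        u = np.zeros_like(c)                         # scaled multiplier
        for _ in range(n_iters):
            x = (c + beta * (z - u)) / (1.0 + beta)  # x-subproblem (closed form)
            u = u + alpha * (x - z)                  # first, relaxed dual update
            z = soft(x + u, mu / beta)               # z-subproblem (soft threshold)
            u = u + alpha * (x - z)                  # second dual update
        return x
    ```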

  13. Constrained spacecraft reorientation using mixed integer convex programming

    NASA Astrophysics Data System (ADS)

    Tam, Margaret; Glenn Lightsey, E.

    2016-10-01

    A constrained attitude guidance (CAG) system is developed using convex optimization to autonomously achieve spacecraft pointing objectives while meeting the constraints imposed by on-board hardware. These constraints include bounds on the control input and slew rate, as well as pointing constraints imposed by the sensors. The pointing constraints consist of inclusion and exclusion cones that dictate permissible orientations of the spacecraft in order to keep objects in or out of the field of view of the sensors. The optimization scheme drives a body vector towards a target inertial vector along a trajectory that consists solely of permissible orientations in order to achieve the desired attitude for a given mission mode. The non-convex rotational kinematics are handled by discretization, which also ensures that the quaternion retains unit norm. In order to guarantee an admissible path, the pointing constraints are relaxed. Depending on how strict the pointing constraints are, the degree of relaxation is tuneable. The use of binary variables permits the inclusion of logical expressions in the pointing constraints in the case that a set of sensors has redundancies. The resulting mixed integer convex programming (MICP) formulation generates a steering law that can be easily integrated into an attitude determination and control (ADC) system. A sample simulation of the system is performed for the Bevo-2 satellite, including disturbance torques and actuator dynamics which are not modeled by the controller. Simulation results demonstrate the robustness of the system to disturbances while meeting the mission requirements with desirable performance characteristics.

  14. Reflective optical imaging system

    DOEpatents

    Shafer, David R.

    2000-01-01

    An optical system compatible with short wavelength (extreme ultraviolet) radiation comprising four reflective elements for projecting a mask image onto a substrate. The four optical elements are characterized in order from object to image as convex, concave, convex and concave mirrors. The optical system is particularly suited for step and scan lithography methods. The invention increases the slit dimensions associated with ringfield scanning optics, improves wafer throughput and allows higher semiconductor device density.

  15. Reflective optical imaging method and circuit

    DOEpatents

    Shafer, David R.

    2001-01-01

    An optical system compatible with short wavelength (extreme ultraviolet) radiation comprising four reflective elements for projecting a mask image onto a substrate. The four optical elements are characterized in order from object to image as convex, concave, convex and concave mirrors. The optical system is particularly suited for step and scan lithography methods. The invention increases the slit dimensions associated with ringfield scanning optics, improves wafer throughput and allows higher semiconductor device density.

  16. 3D tomographic imaging with the γ-eye planar scintigraphic gamma camera

    NASA Astrophysics Data System (ADS)

    Tunnicliffe, H.; Georgiou, M.; Loudos, G. K.; Simcox, A.; Tsoumpas, C.

    2017-11-01

    γ-eye is a desktop planar scintigraphic gamma camera (100 mm × 50 mm field of view) designed by BET Solutions as an affordable tool for dynamic, whole body, small-animal imaging. This investigation tests the viability of using γ-eye for the collection of tomographic data for 3D SPECT reconstruction. Two software packages, QSPECT and STIR (software for tomographic image reconstruction), have been compared. Reconstructions have been performed using QSPECT’s implementation of the OSEM algorithm and STIR’s OSMAPOSL (Ordered Subset Maximum A Posteriori One Step Late) and OSSPS (Ordered Subsets Separable Paraboloidal Surrogate) algorithms. Reconstructed images of phantom and mouse data have been assessed in terms of spatial resolution, sensitivity to varying activity levels and uniformity. The effect of varying the number of iterations, the voxel size (1.25 mm default voxel size reduced to 0.625 mm and 0.3125 mm), the point spread function correction and the weight of prior terms were explored. While QSPECT demonstrated faster reconstructions, STIR outperformed it in terms of resolution (as low as 1 mm versus 3 mm), particularly when smaller voxel sizes were used, and in terms of uniformity, particularly when prior terms were used. Little difference in terms of sensitivity was seen throughout.
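
    For reference, one pass of the ordered-subsets EM (OSEM) update that both packages implement has the textbook form below; A_s and y_s are illustrative names for the system sub-matrix and measured projections of subset s.

    ```python
    # Textbook OSEM pass: multiply the image by a back-projected ratio of
    # measured to predicted counts, cycling over the data subsets.
    import numpy as np

    def osem_pass(x, subsets):
        for A_s, y_s in subsets:
            pred = A_s @ x
            ratio = np.divide(y_s, pred, out=np.ones_like(y_s), where=pred > 0)
            sens = A_s.T @ np.ones_like(y_s)          # subset sensitivity image
            x = x * (A_s.T @ ratio) / np.maximum(sens, 1e-12)
        return x
    ```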

  17. Accelerated Microstructure Imaging via Convex Optimization (AMICO) from diffusion MRI data.

    PubMed

    Daducci, Alessandro; Canales-Rodríguez, Erick J; Zhang, Hui; Dyrby, Tim B; Alexander, Daniel C; Thiran, Jean-Philippe

    2015-01-15

    Microstructure imaging from diffusion magnetic resonance (MR) data represents an invaluable tool to study non-invasively the morphology of tissues and to provide a biological insight into their microstructural organization. In recent years, a variety of biophysical models have been proposed to associate particular patterns observed in the measured signal with specific microstructural properties of the neuronal tissue, such as axon diameter and fiber density. Despite very appealing results showing that the estimated microstructure indices agree very well with histological examinations, existing techniques require computationally very expensive non-linear procedures to fit the models to the data which, in practice, demand the use of powerful computer clusters for large-scale applications. In this work, we present a general framework for Accelerated Microstructure Imaging via Convex Optimization (AMICO) and show how to re-formulate this class of techniques as convenient linear systems which, then, can be efficiently solved using very fast algorithms. We demonstrate this linearization of the fitting problem for two specific models, i.e. ActiveAx and NODDI, providing a very attractive alternative for parameter estimation in those techniques; however, the AMICO framework is general and flexible enough to work also for the wider space of microstructure imaging methods. Results demonstrate that AMICO represents an effective means to accelerate the fit of existing techniques drastically (up to four orders of magnitude faster) while preserving accuracy and precision in the estimated model parameters (correlation above 0.9). We believe that the availability of such ultrafast algorithms will help to accelerate the spread of microstructure imaging to larger cohorts of patients and to study a wider spectrum of neurological disorders. Copyright © 2014 The Authors. Published by Elsevier Inc. All rights reserved.
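
    The linearization at the heart of this framework can be sketched as follows: once the model is expressed through a dictionary of precomputed response atoms, each voxel fit reduces to a fast nonnegative least-squares solve. Phi and y here are random stand-ins for the real dictionary and signal.

    ```python
    # AMICO-style linearized fit as nonnegative least squares (illustrative).
    import numpy as np
    from scipy.optimize import nnls

    rng = np.random.default_rng(0)
    Phi = rng.random((60, 20))            # measurements x dictionary atoms
    y = Phi @ np.abs(rng.random(20))      # synthetic signal from nonneg weights
    weights, residual = nnls(Phi, y)      # per-voxel solve
    print(weights.round(3), residual)
    ```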

  18. A new neural network model for solving random interval linear programming problems.

    PubMed

    Arjmandzadeh, Ziba; Safi, Mohammadreza; Nazemi, Alireza

    2017-05-01

    This paper presents a neural network model for solving random interval linear programming problems. The original problem involving random interval variable coefficients is first transformed into an equivalent convex second-order cone programming problem. A neural network model is then constructed for solving the obtained convex second-order cone problem. Employing a Lyapunov function approach, it is also shown that the proposed neural network model is stable in the sense of Lyapunov and is globally convergent to an exact satisfactory solution of the original problem. Several illustrative examples are solved in support of this technique. Copyright © 2017 Elsevier Ltd. All rights reserved.

  19. Weak convergence of a projection algorithm for variational inequalities in a Banach space

    NASA Astrophysics Data System (ADS)

    Iiduka, Hideaki; Takahashi, Wataru

    2008-03-01

    Let C be a nonempty, closed convex subset of a Banach space E. In this paper, motivated by Alber [Ya.I. Alber, Metric and generalized projection operators in Banach spaces: Properties and applications, in: A.G. Kartsatos (Ed.), Theory and Applications of Nonlinear Operators of Accretive and Monotone Type, in: Lecture Notes Pure Appl. Math., vol. 178, Dekker, New York, 1996, pp. 15-50], we introduce the following iterative scheme for finding a solution of the variational inequality problem for an inverse-strongly-monotone operator A in a Banach space: x1 = x ∈ C and xn+1 = ΠC J⁻¹(Jxn − λn Axn) for every n = 1, 2, …, where ΠC is the generalized projection from E onto C, J is the duality mapping from E into E*, and {λn} is a sequence of positive real numbers. Then we show a weak convergence theorem (Theorem 3.1). Finally, using this result, we consider the convex minimization problem, the complementarity problem, and the problem of finding a point u ∈ E satisfying 0 = Au.
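
    In the Hilbert-space (Euclidean) simplification, where the duality mapping J is the identity and the generalized projection ΠC is the metric projection, the scheme reduces to the familiar projected iteration sketched below.

    ```python
    # Euclidean specialization of x_{n+1} = ΠC J⁻¹(J xn − λn A xn):
    # the classical projected iteration for a monotone operator A.
    import numpy as np

    def projected_iteration(A, project_C, x0, steps):
        """A and project_C are user-supplied callables; steps is the λn sequence."""
        x = np.asarray(x0, dtype=float)
        for lam in steps:
            x = project_C(x - lam * A(x))
        return x
    ```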

  20. Exploring metabolic pathways in genome-scale networks via generating flux modes.

    PubMed

    Rezola, A; de Figueiredo, L F; Brock, M; Pey, J; Podhorski, A; Wittmann, C; Schuster, S; Bockmayr, A; Planes, F J

    2011-02-15

    The reconstruction of metabolic networks at the genome scale has allowed the analysis of metabolic pathways at an unprecedented level of complexity. Elementary flux modes (EFMs) are an appropriate concept for such analysis. However, their number grows in a combinatorial fashion as the size of the metabolic network increases, which renders the application of the EFM approach to large metabolic networks difficult. Novel methods are expected to deal with such complexity. In this article, we present a novel optimization-based method for determining a minimal generating set of EFMs, i.e. a convex basis. We show that a subset of elements of this convex basis can be effectively computed even in large metabolic networks. Our method was applied to examine the structure of pathways producing lysine in Escherichia coli. We obtained a more varied and informative set of pathways in comparison with existing methods. In addition, an alternative pathway to produce lysine was identified using a detour via propionyl-CoA, which shows the predictive power of our novel approach. The source code in C++ is available upon request.

  1. The Role of Hellinger Processes in Mathematical Finance

    NASA Astrophysics Data System (ADS)

    Choulli, T.; Hurd, T. R.

    2001-09-01

    This paper illustrates the natural role that Hellinger processes can play in solving problems from finance. We propose an extension of the concept of Hellinger process applicable to entropy distance and f-divergence distances, where f is a convex logarithmic function or a convex power function with general order q, 0 ≠ q < 1. These concepts lead to a new approach to Merton's optimal portfolio problem and its dual in general Lévy markets.

  2. Preconditioning 2D Integer Data for Fast Convex Hull Computations.

    PubMed

    Cadenas, José Oswaldo; Megson, Graham M; Luengo Hendriks, Cris L

    2016-01-01

    In order to accelerate computing the convex hull on a set of n points, a heuristic procedure is often applied to reduce the number of points to a set of s points, s ≤ n, which also contains the same hull. We present an algorithm to precondition 2D data with integer coordinates bounded by a box of size p × q before building a 2D convex hull, with three distinct advantages. First, we prove that under the condition min(p, q) ≤ n the algorithm executes in O(n) time; second, no explicit sorting of data is required; and third, the reduced set of s points forms a simple polygonal chain and thus can be directly pipelined into an O(n) time convex hull algorithm. This paper empirically evaluates and quantifies the speedup gained by preconditioning a set of points with the proposed algorithm before using common convex hull algorithms to build the final hull. A speedup factor of at least four is consistently found from experiments on various datasets when the condition min(p, q) ≤ n holds; the smaller the ratio min(p, q)/n is in the dataset, the greater the speedup factor achieved.
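
    The preconditioning idea admits a compact sketch for integer points: only the extreme y values in each x column can be hull vertices, so keeping them preserves the hull while shrinking the input in O(n) without sorting. The paper's polygonal-chain construction is omitted.

    ```python
    # Column-extremes preconditioning before a standard convex hull call.
    import numpy as np
    from scipy.spatial import ConvexHull

    def precondition(points):
        extremes = {}
        for x, y in points:
            lo, hi = extremes.get(x, (y, y))
            extremes[x] = (min(lo, y), max(hi, y))
        return np.array([(x, y) for x, (lo, hi) in extremes.items()
                         for y in {lo, hi}])

    pts = np.random.randint(0, 100, size=(10000, 2))
    # The reduced set yields the same hull (areas match for 2D inputs).
    assert np.isclose(ConvexHull(precondition(pts)).volume,
                      ConvexHull(pts).volume)
    ```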

  3. Shear thickening and jamming in suspensions of different particle shapes

    NASA Astrophysics Data System (ADS)

    Brown, Eric; Zhang, Hanjun; Forman, Nicole; Betts, Douglas; Desimone, Joseph; Maynor, Benjamin; Jaeger, Heinrich

    2012-02-01

    We investigated the role of particle shape in shear thickening and jamming in densely packed suspensions. Various particle shapes were fabricated, including rods of different aspect ratios and non-convex hooked rods. A rheometer was used to measure shear stress vs. shear rate for a wide range of packing fractions for each shape. Each suspension exhibits qualitatively similar Discontinuous Shear Thickening, in which the logarithmic slope of the stress vs. shear rate curve has the same scaling for each convex shape and diverges at a critical packing fraction φc. The value of φc varies with particle shape and coincides with the onset of a yield stress, i.e. the jamming transition. This suggests that the jamming transition controls shear thickening, and that the only effect of particle shape on the steady-state bulk rheology of convex particles is a shift of φc. Intriguingly, viscosity curves for non-convex particles do not collapse on the same set as convex particles, showing strong shear thickening over a wider range of packing fractions. Qualitative shape dependence was only found in steady-state rheology when the system was confined to small gaps, where large-aspect-ratio particles are forced to order.

  4. Detection of mouse liver cancer via a parallel iterative shrinkage method in hybrid optical/microcomputed tomography imaging

    NASA Astrophysics Data System (ADS)

    Wu, Ping; Liu, Kai; Zhang, Qian; Xue, Zhenwen; Li, Yongbao; Ning, Nannan; Yang, Xin; Li, Xingde; Tian, Jie

    2012-12-01

    Liver cancer is one of the most common malignant tumors worldwide. In order to enable the noninvasive detection of small liver tumors in mice, we present a parallel iterative shrinkage (PIS) algorithm for dual-modality tomography. It takes advantage of microcomputed tomography and multiview bioluminescence imaging, providing anatomical structure and bioluminescence intensity information to reconstruct the size and location of tumors. By incorporating prior knowledge of signal sparsity, we associate some mathematical strategies including a specific smooth convex approximation, an iterative shrinkage operator, and an affine subspace with the PIS method, which guarantees the accuracy, efficiency, and reliability of three-dimensional reconstruction. Then an in vivo experiment on a bead-implanted mouse was performed to validate the feasibility of this method. The findings indicate that a tiny lesion less than 3 mm in diameter can be localized with a position bias of no more than 1 mm; the computational efficiency is one to three orders of magnitude higher than that of existing algorithms; and the approach is robust to the choice of regularization parameters and lp norms. Finally, we have applied this algorithm to another in vivo experiment on an HCCLM3 orthotopic xenograft mouse model, which suggests the PIS method holds promise for practical applications of whole-body cancer detection.
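
    The shrinkage operator at the core of such methods is easy to state; the sketch below shows one plain (non-parallel) iterative-shrinkage loop for a toy sparse recovery problem min 0.5||Ax - b||^2 + lam*||x||_1. A, b, and lam are synthetic stand-ins, and the PIS algorithm itself adds the parallelization and affine-subspace machinery described above.

      # Minimal sketch of an ISTA-style iterative shrinkage loop.
      import numpy as np

      def soft_threshold(v, t):
          return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

      rng = np.random.default_rng(0)
      A = rng.standard_normal((50, 200))
      x_true = np.zeros(200); x_true[[3, 17, 90]] = [1.0, -2.0, 0.5]
      b = A @ x_true
      lam, step = 0.1, 1.0 / np.linalg.norm(A, 2) ** 2   # step < 1/L for L = ||A||^2
      x = np.zeros(200)
      for _ in range(500):
          # gradient step on the smooth term, then shrinkage on the l1 term
          x = soft_threshold(x - step * A.T @ (A @ x - b), step * lam)
      print("nonzeros recovered:", np.flatnonzero(np.round(x, 2)))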

  5. A STRICTLY CONTRACTIVE PEACEMAN–RACHFORD SPLITTING METHOD FOR CONVEX PROGRAMMING

    PubMed Central

    BINGSHENG, HE; LIU, HAN; WANG, ZHAORAN; YUAN, XIAOMING

    2014-01-01

    In this paper, we focus on the application of the Peaceman–Rachford splitting method (PRSM) to a convex minimization model with linear constraints and a separable objective function. Compared to the Douglas–Rachford splitting method (DRSM), another splitting method from which the alternating direction method of multipliers originates, PRSM requires more restrictive assumptions to ensure its convergence, while it is always faster whenever it is convergent. We first illustrate that the reason for this difference is that the iterative sequence generated by DRSM is strictly contractive, while that generated by PRSM is only contractive with respect to the solution set of the model. With only the convexity assumption on the objective function of the model under consideration, the convergence of PRSM is not guaranteed. But for this case, we show that the first t iterations of PRSM still enable us to find an approximate solution with an accuracy of O(1/t). A worst-case O(1/t) convergence rate of PRSM in the ergodic sense is thus established under mild assumptions. After that, we suggest attaching an underdetermined relaxation factor with PRSM to guarantee the strict contraction of its iterative sequence and thus propose a strictly contractive PRSM. A worst-case O(1/t) convergence rate of this strictly contractive PRSM in a nonergodic sense is established. We show the numerical efficiency of the strictly contractive PRSM by some applications in statistical learning and image processing. PMID:25620862
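
    A minimal sketch of a strictly contractive PRSM iteration, assuming the simplest splitting min f(x) + g(z) subject to x = z and writing both subproblems through their proximal operators. The relaxation factor alpha in (0,1) applied to both dual updates is the strict-contraction device (alpha = 1 would be plain PRSM); the toy instance solves a soft-thresholding problem with a known closed-form answer.

      # Sketch of strictly contractive PRSM for min f(x) + g(z), x = z.
      # Toy instance: f(x) = 0.5*||x - c||^2, g = lam*||.||_1.
      import numpy as np

      def prox_f(w, rho, c):                 # prox of 0.5*||x - c||^2
          return (c + rho * w) / (1.0 + rho)

      def prox_g(w, rho, lam):               # prox of lam*||x||_1 (soft threshold)
          return np.sign(w) * np.maximum(np.abs(w) - lam / rho, 0.0)

      c = np.array([3.0, -0.2, 1.5]); lam, rho, alpha = 1.0, 1.0, 0.9
      x = z = u = np.zeros(3)
      for _ in range(100):
          x = prox_f(z - u, rho, c)
          u = u + alpha * (x - z)            # first (relaxed) dual update
          z = prox_g(x + u, rho, lam)
          u = u + alpha * (x - z)            # second (relaxed) dual update
      print("solution:", np.round(x, 4))     # soft-thresholded c: [2., 0., 0.5]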

  6. ɛ-subgradient algorithms for bilevel convex optimization

    NASA Astrophysics Data System (ADS)

    Helou, Elias S.; Simões, Lucas E. A.

    2017-05-01

    This paper introduces and studies the convergence properties of a new class of explicit ɛ-subgradient methods for the task of minimizing a convex function over a set of minimizers of another convex minimization problem. The general algorithm specializes to some important cases, such as first-order methods applied to a varying objective function, which have computationally cheap iterations. We present numerical experimentation concerning certain applications where the theoretical framework encompasses efficient algorithmic techniques, enabling the use of the resulting methods to solve very large practical problems arising in tomographic image reconstruction. ES Helou was supported by FAPESP grants 2013/07375-0 and 2013/16508-3 and CNPq grant 311476/2014-7. LEA Simões was supported by FAPESP grants 2011/02219-4 and 2013/14615-7.

  7. Convex Accelerated Maximum Entropy Reconstruction

    PubMed Central

    Worley, Bradley

    2016-01-01

    Maximum entropy (MaxEnt) spectral reconstruction methods provide a powerful framework for spectral estimation of nonuniformly sampled datasets. Many methods exist within this framework, usually defined based on the magnitude of a Lagrange multiplier in the MaxEnt objective function. An algorithm is presented here that utilizes accelerated first-order convex optimization techniques to rapidly and reliably reconstruct nonuniformly sampled NMR datasets using the principle of maximum entropy. This algorithm – called CAMERA for Convex Accelerated Maximum Entropy Reconstruction Algorithm – is a new approach to spectral reconstruction that exhibits fast, tunable convergence in both constant-aim and constant-lambda modes. A high-performance, open source NMR data processing tool is described that implements CAMERA, and brief comparisons to existing reconstruction methods are made on several example spectra. PMID:26894476

  8. Kraken: ultrafast metagenomic sequence classification using exact alignments

    PubMed Central

    2014-01-01

    Kraken is an ultrafast and highly accurate program for assigning taxonomic labels to metagenomic DNA sequences. Previous programs designed for this task have been relatively slow and computationally expensive, forcing researchers to use faster abundance estimation programs, which only classify small subsets of metagenomic data. Using exact alignment of k-mers, Kraken achieves classification accuracy comparable to the fastest BLAST program. In its fastest mode, Kraken classifies 100 base pair reads at a rate of over 4.1 million reads per minute, 909 times faster than Megablast and 11 times faster than the abundance estimation program MetaPhlAn. Kraken is available at http://ccb.jhu.edu/software/kraken/. PMID:24580807
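
    The core lookup is simple to sketch: map every reference k-mer to a taxon and classify a read by majority vote over its k-mers. The toy database below is hypothetical, and real Kraken additionally uses a minimizer-sorted database and resolves ambiguous hits by walking up the taxonomy tree to the lowest common ancestor.

      # Toy sketch of exact k-mer classification in the spirit of Kraken.
      from collections import Counter

      K = 5
      db = {}                                        # k-mer -> taxon
      refs = {"E.coli": "ATGGCGTACGTTAGC", "S.aureus": "TTGACCGGAATCCGA"}
      for taxon, ref in refs.items():
          for i in range(len(ref) - K + 1):
              db[ref[i:i + K]] = taxon

      def classify(read):
          hits = Counter(db.get(read[i:i + K]) for i in range(len(read) - K + 1))
          hits.pop(None, None)                       # drop k-mers absent from the DB
          return hits.most_common(1)[0][0] if hits else "unclassified"

      print(classify("GGCGTACGTT"))                  # -> E.coli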

  9. Zeros and logarithmic asymptotics of Sobolev orthogonal polynomials for exponential weights

    NASA Astrophysics Data System (ADS)

    Díaz Mendoza, C.; Orive, R.; Pijeira Cabrera, H.

    2009-12-01

    We obtain the (contracted) weak zero asymptotics for orthogonal polynomials with respect to Sobolev inner products with exponential weights on the real semiaxis, with γ > 0, which include as particular cases the counterparts of the so-called Freud (i.e., when φ has polynomial growth at infinity) and Erdős (when φ grows faster than any polynomial at infinity) weights. In addition, the boundedness of the distance of the zeros of these Sobolev orthogonal polynomials to the convex hull of the support and, as a consequence, a result on logarithmic asymptotics are derived.

  10. Generalized vector calculus on convex domain

    NASA Astrophysics Data System (ADS)

    Agrawal, Om P.; Xu, Yufeng

    2015-06-01

    In this paper, we apply recently proposed generalized integral and differential operators to develop generalized vector calculus and generalized variational calculus for problems defined over a convex domain. In particular, we present some generalization of Green's and Gauss divergence theorems involving some new operators, and apply these theorems to generalized variational calculus. For fractional power kernels, the formulation leads to fractional vector calculus and fractional variational calculus for problems defined over a convex domain. In special cases, when certain parameters take integer values, we obtain formulations for integer order problems. Two examples are presented to demonstrate applications of the generalized variational calculus which utilize the generalized vector calculus developed in the paper. The first example leads to a generalized partial differential equation and the second example leads to a generalized eigenvalue problem, both in two dimensional convex domains. We solve the generalized partial differential equation by using polynomial approximation. A special case of the second example is a generalized isoperimetric problem. We find an approximate solution to this problem. Many physical problems containing integer order integrals and derivatives are defined over arbitrary domains. We speculate that future problems containing fractional and generalized integrals and derivatives in fractional mechanics will be defined over arbitrary domains, and therefore, a general variational calculus incorporating a general vector calculus will be needed for these problems. This research is our first attempt in that direction.

  11. Choosing non-redundant representative subsets of protein sequence data sets using submodular optimization.

    PubMed

    Libbrecht, Maxwell W; Bilmes, Jeffrey A; Noble, William Stafford

    2018-04-01

    Selecting a non-redundant representative subset of sequences is a common step in many bioinformatics workflows, such as the creation of non-redundant training sets for sequence and structural models or selection of "operational taxonomic units" from metagenomics data. Previous methods for this task, such as CD-HIT, PISCES, and UCLUST, apply a heuristic threshold-based algorithm that has no theoretical guarantees. We propose a new approach based on submodular optimization. Submodular optimization, a discrete analogue to continuous convex optimization, has been used with great success for other representative set selection problems. We demonstrate that the submodular optimization approach results in representative protein sequence subsets with greater structural diversity than sets chosen by existing methods, using as a gold standard the SCOPe library of protein domain structures. In this setting, submodular optimization consistently yields protein sequence subsets that include more SCOPe domain families than sets of the same size selected by competing approaches. We also show how the optimization framework allows us to design a mixture objective function that performs well for both large and small representative sets. The framework we describe is the best possible in polynomial time (under some assumptions), and it is flexible and intuitive because it applies a suite of generic methods to optimize one of a variety of objective functions. © 2018 Wiley Periodicals, Inc.
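
    The standard greedy algorithm for monotone submodular maximization is the workhorse behind this kind of selection; the sketch below runs it on a facility-location objective over a random toy similarity matrix (real pipelines would use percent-identity or alignment scores).

      # Greedy maximization of a monotone submodular facility-location
      # function: each element is "covered" by its most similar chosen
      # representative. Greedy achieves the classic (1 - 1/e) guarantee.
      import numpy as np

      rng = np.random.default_rng(1)
      sim = rng.random((30, 30)); sim = (sim + sim.T) / 2   # toy symmetric similarities

      def facility_location(subset):
          return sim[:, subset].max(axis=1).sum() if subset else 0.0

      chosen = []
      for _ in range(5):                               # pick a 5-element subset
          gains = [(facility_location(chosen + [j]) - facility_location(chosen), j)
                   for j in range(30) if j not in chosen]
          chosen.append(max(gains)[1])                 # largest marginal gain
      print("representative subset:", chosen)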

  12. Optimization of camera exposure durations for multi-exposure speckle imaging of the microcirculation

    PubMed Central

    Kazmi, S. M. Shams; Balial, Satyajit; Dunn, Andrew K.

    2014-01-01

    Improved Laser Speckle Contrast Imaging (LSCI) blood flow analyses that incorporate inverse models of the underlying laser-tissue interaction have been used to develop more quantitative implementations of speckle flowmetry such as Multi-Exposure Speckle Imaging (MESI). In this paper, we determine the optimal camera exposure durations required for obtaining flow information with comparable accuracy with the prevailing MESI implementation utilized in recent in vivo rodent studies. A looping leave-one-out (LOO) algorithm was used to identify exposure subsets which were analyzed for accuracy against flows obtained from analysis with the original full exposure set over 9 animals comprising n = 314 regional flow measurements. From the 15 original exposures, 6 exposures were found using the LOO process to provide comparable accuracy, defined as being no more than 10% deviant, with the original flow measurements. The optimal subset of exposures provides a basis set of camera durations for speckle flowmetry studies of the microcirculation and confers a two-fold faster acquisition rate and a 28% reduction in processing time without sacrificing accuracy. Additionally, the optimization process can be used to identify further reductions in the exposure subsets for tailoring imaging over less expansive flow distributions to enable even faster imaging. PMID:25071956

  13. Structural Evolution and Kinetics in Cu-Zr Metallic Liquids from Molecular Dynamics Simulations (Postprint)

    DTIC Science & Technology

    2013-10-23

    compensate for overcounting due to numerical issues inherent in the tessellation. The shape of the coordination polyhedron was determined by the shape...work by Yang et al. The total volume can be determined by finding the volume of the convex polyhedron whose vertices are given by the centers of...atoms in the nearest-neighbor shell. In order to determine the volume of the atoms inside the clusters, the convex hull polyhedron is first segmented

  14. Organizing principles for dense packings of nonspherical hard particles: Not all shapes are created equal

    NASA Astrophysics Data System (ADS)

    Torquato, Salvatore; Jiao, Yang

    2012-07-01

    We have recently devised organizing principles to obtain maximally dense packings of the Platonic and Archimedean solids and certain smoothly shaped convex nonspherical particles [Torquato and Jiao, Phys. Rev. E 81, 041310 (2010)]. Here we generalize them in order to guide one to ascertain the densest packings of other convex nonspherical particles as well as concave shapes. Our generalized organizing principles are explicitly stated as four distinct propositions. All of our organizing principles are applied to and tested against the most comprehensive set of both convex and concave particle shapes examined to date, including Catalan solids, prisms, antiprisms, cylinders, dimers of spheres, and various concave polyhedra. We demonstrate that all of the densest known packings associated with this wide spectrum of nonspherical particles are consistent with our propositions. Among other applications, our general organizing principles enable us to construct analytically the densest known packings of certain convex nonspherical particles, including spherocylinders, “lens-shaped” particles, square pyramids, and rhombic pyramids. Moreover, we show how to apply these principles to infer the high-density equilibrium crystalline phases of hard convex and concave particles. We also discuss the unique packing attributes of maximally random jammed packings of nonspherical particles.

  15. Preconditioning 2D Integer Data for Fast Convex Hull Computations

    PubMed Central

    2016-01-01

    In order to accelerate computing the convex hull on a set of n points, a heuristic procedure is often applied to reduce the number of points to a set of s points, s ≤ n, which also contains the same hull. We present an algorithm to precondition 2D data with integer coordinates bounded by a box of size p × q before building a 2D convex hull, with three distinct advantages. First, we prove that under the condition min(p, q) ≤ n the algorithm executes in time within O(n); second, no explicit sorting of data is required; and third, the reduced set of s points forms a simple polygonal chain and thus can be directly pipelined into an O(n) time convex hull algorithm. This paper empirically evaluates and quantifies the speedup gained by preconditioning a set of points by a method based on the proposed algorithm before using common convex hull algorithms to build the final hull. A speedup factor of at least four is consistently found from experiments on various datasets when the condition min(p, q) ≤ n holds; the smaller the ratio min(p, q)/n is in the dataset, the greater the speedup factor achieved. PMID:26938221

  16. Asymmetric Bulkheads for Cylindrical Pressure Vessels

    NASA Technical Reports Server (NTRS)

    Ford, Donald B.

    2007-01-01

    Asymmetric bulkheads are proposed for the ends of vertically oriented cylindrical pressure vessels. These bulkheads, which would feature both convex and concave contours, would offer advantages over purely convex, purely concave, and flat bulkheads (see figure). Intended originally to be applied to large tanks that hold propellant liquids for launching spacecraft, the asymmetric-bulkhead concept may also be attractive for terrestrial pressure vessels for which there are requirements to maximize volumetric and mass efficiencies. A description of the relative advantages and disadvantages of prior symmetric bulkhead configurations is prerequisite to understanding the advantages of the proposed asymmetric configuration: In order to obtain adequate strength, flat bulkheads must be made thicker, relative to concave and convex bulkheads; the difference in thickness is such that, other things being equal, pressure vessels with flat bulkheads must be made heavier than ones with concave or convex bulkheads. Convex bulkhead designs increase overall tank lengths, thereby necessitating additional supporting structure for keeping tanks vertical. Concave bulkhead configurations increase tank lengths and detract from volumetric efficiency, even though they do not necessitate additional supporting structure. The shape of a bulkhead affects the proportion of residual fluid in a tank, that is, the portion of fluid that unavoidably remains in the tank during outflow and hence cannot be used. In this regard, a flat bulkhead is disadvantageous in two respects: (1) It lacks a single low point for optimum placement of an outlet and (2) a vortex that forms at the outlet during outflow prevents a relatively large amount of fluid from leaving the tank. A concave bulkhead also lacks a single low point for optimum placement of an outlet. Like purely concave and purely convex bulkhead configurations, the proposed asymmetric bulkhead configurations would be more mass-efficient than is the flat bulkhead configuration. In comparison with both purely convex and purely concave configurations, the proposed asymmetric configurations would offer greater volumetric efficiency. Relative to a purely convex bulkhead configuration, the corresponding asymmetric configuration would result in a shorter tank, thus demanding less supporting structure. An asymmetric configuration provides a low point for optimum location of a drain, and the convex shape at the drain location minimizes the amount of residual fluid.

  17. Hard convex lens-shaped particles: Densest-known packings and phase behavior

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cinacchi, Giorgio, E-mail: giorgio.cinacchi@uam.es; Torquato, Salvatore, E-mail: torquato@princeton.edu

    2015-12-14

    By using theoretical methods and Monte Carlo simulations, this work investigates dense ordered packings and equilibrium phase behavior (from the low-density isotropic fluid regime to the high-density crystalline solid regime) of monodisperse systems of hard convex lens-shaped particles as defined by the volume common to two intersecting congruent spheres. We show that, while the overall similarity of their shape to that of hard oblate ellipsoids is reflected in a qualitatively similar phase diagram, differences are more pronounced in the high-density crystal phase up to the densest-known packings determined here. In contrast to those non-(Bravais)-lattice two-particle basis crystals that are the densest-known packings of hard (oblate) ellipsoids, hard convex lens-shaped particles pack more densely in two types of degenerate crystalline structures: (i) non-(Bravais)-lattice two-particle basis body-centered-orthorhombic-like crystals and (ii) (Bravais) lattice monoclinic crystals. By stacking at will, regularly or irregularly, laminae of these two crystals, infinitely degenerate, generally non-periodic in the stacking direction, dense packings can be constructed that are consistent with recent organizing principles. While deferring the assessment of which of these dense ordered structures is thermodynamically stable in the high-density crystalline solid regime, the degeneracy of their densest-known packings strongly suggests that colloidal convex lens-shaped particles could be better glass formers than colloidal spheres because of the additional rotational degrees of freedom.

  18. Fast Algorithms for Designing Unimodular Waveform(s) With Good Correlation Properties

    NASA Astrophysics Data System (ADS)

    Li, Yongzhe; Vorobyov, Sergiy A.

    2018-03-01

    In this paper, we develop new fast and efficient algorithms for designing single/multiple unimodular waveforms/codes with good auto- and cross-correlation or weighted correlation properties, which are highly desired in radar and communication systems. The waveform design is based on the minimization of the integrated sidelobe level (ISL) and weighted ISL (WISL) of waveforms. As the corresponding optimization problems can quickly grow to large scale with increasing the code length and number of waveforms, the main issue turns to be the development of fast large-scale optimization techniques. The difficulty is also that the corresponding optimization problems are non-convex, but the required accuracy is high. Therefore, we formulate the ISL and WISL minimization problems as non-convex quartic optimization problems in frequency domain, and then simplify them into quadratic problems by utilizing the majorization-minimization technique, which is one of the basic techniques for addressing large-scale and/or non-convex optimization problems. While designing our fast algorithms, we find out and use inherent algebraic structures in the objective functions to rewrite them into quartic forms, and in the case of WISL minimization, to derive additionally an alternative quartic form which allows to apply the quartic-quadratic transformation. Our algorithms are applicable to large-scale unimodular waveform design problems as they are proved to have lower or comparable computational burden (analyzed theoretically) and faster convergence speed (confirmed by comprehensive simulations) than the state-of-the-art algorithms. In addition, the waveforms designed by our algorithms demonstrate better correlation properties compared to their counterparts.
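
    The quantity being minimized is easy to reproduce: the sketch below evaluates the ISL of a unimodular code from its aperiodic autocorrelation via a zero-padded FFT, the same frequency-domain computation the majorization-minimization iterations are built on. The Golomb-type polyphase sequence and the one-sided ISL convention are illustrative choices, not the paper's algorithms.

      # Sketch: integrated sidelobe level (ISL) of a unimodular code,
      # computed from the aperiodic autocorrelation via a zero-padded FFT.
      import numpy as np

      def isl(code):
          m = len(code)
          f = np.fft.fft(code, 2 * m)                  # zero-padded spectrum
          r = np.fft.ifft(np.abs(f) ** 2)[:m]          # autocorrelation, lags 0..m-1
          return float(np.sum(np.abs(r[1:]) ** 2))     # one-sided ISL, zero lag excluded

      n = 64
      k = np.arange(n)
      golomb = np.exp(1j * np.pi * k * (k - 1) / n)    # Golomb-type polyphase code
      rng = np.random.default_rng(8)
      random_code = np.exp(2j * np.pi * rng.random(n)) # random unimodular code
      print("ISL (Golomb):", isl(golomb), " ISL (random):", isl(random_code))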

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Buckdahn, Rainer, E-mail: Rainer.Buckdahn@univ-brest.fr; Li, Juan, E-mail: juanli@sdu.edu.cn; Ma, Jin, E-mail: jinma@usc.edu

    In this paper we study the optimal control problem for a class of general mean-field stochastic differential equations, in which the coefficients depend, nonlinearly, on both the state process and its law. In particular, we assume that the control set is a general open set that is not necessarily convex, and the coefficients are only continuous in the control variable without any further regularity or convexity. We validate the approach of Peng (SIAM J Control Optim 2(4):966–979, 1990) by considering the second order variational equations and the corresponding second order adjoint process in this setting, and we extend the Stochastic Maximum Principle of Buckdahn et al. (Appl Math Optim 64(2):197–216, 2011) to this general case.

  20. Combined-probability space and certainty or uncertainty relations for a finite-level quantum system

    NASA Astrophysics Data System (ADS)

    Sehrawat, Arun

    2017-08-01

    The Born rule provides a probability vector (distribution) with a quantum state for a measurement setting. For two settings, we have a pair of vectors from the same quantum state. Each pair forms a combined-probability vector that obeys certain quantum constraints, which are triangle inequalities in our case. Such a restricted set of combined vectors, called the combined-probability space, is presented here for a d-level quantum system (qudit). The combined space is a compact convex subset of a Euclidean space, and all its extreme points come from a family of parametric curves. Considering a suitable concave function on the combined space to estimate the uncertainty, we deliver an uncertainty relation by finding its global minimum on the curves for a qudit. If one chooses an appropriate concave (or convex) function, then there is no need to search for the absolute minimum (maximum) over the whole space; it will be on the parametric curves. So these curves are quite useful for establishing an uncertainty (or a certainty) relation for a general pair of settings. We also demonstrate that many known tight certainty or uncertainty relations for a qubit can be obtained with the triangle inequalities.

  1. Riemannian and Lorentzian flow-cut theorems

    NASA Astrophysics Data System (ADS)

    Headrick, Matthew; Hubeny, Veronika E.

    2018-05-01

    We prove several geometric theorems using tools from the theory of convex optimization. In the Riemannian setting, we prove the max flow-min cut (MFMC) theorem for boundary regions, applied recently to develop a ‘bit-thread’ interpretation of holographic entanglement entropies. We also prove various properties of the max flow and min cut, including respective nesting properties. In the Lorentzian setting, we prove the analogous MFMC theorem, which states that the volume of a maximal slice equals the flux of a minimal flow, where a flow is defined as a divergenceless timelike vector field with norm at least 1. This theorem includes as a special case a continuum version of Dilworth’s theorem from the theory of partially ordered sets. We include a brief review of the necessary tools from the theory of convex optimization, in particular Lagrangian duality and convex relaxation.

  2. Evaluation of accelerated iterative x-ray CT image reconstruction using floating point graphics hardware.

    PubMed

    Kole, J S; Beekman, F J

    2006-02-21

    Statistical reconstruction methods offer possibilities to improve image quality as compared with analytical methods, but current reconstruction times prohibit routine application in clinical and micro-CT. In particular, for cone-beam x-ray CT, the use of graphics hardware has been proposed to accelerate the forward and back-projection operations, in order to reduce reconstruction times. In the past, wide application of this texture hardware mapping approach was hampered owing to limited intrinsic accuracy. Recently, however, floating point precision has become available in the latest generation commodity graphics cards. In this paper, we utilize this feature to construct a graphics hardware accelerated version of the ordered subset convex reconstruction algorithm. The aims of this paper are (i) to study the impact of using graphics hardware acceleration for statistical reconstruction on the reconstructed image accuracy and (ii) to measure the speed increase one can obtain by using graphics hardware acceleration. We compare the unaccelerated algorithm with the graphics hardware accelerated version, and for the latter we consider two different interpolation techniques. A simulation study of a micro-CT scanner with a mathematical phantom shows that at almost preserved reconstructed image accuracy, speed-ups of a factor 40 to 222 can be achieved, compared with the unaccelerated algorithm, and depending on the phantom and detector sizes. Reconstruction from physical phantom data reconfirms the usability of the accelerated algorithm for practical cases.
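
    The looping structure that the GPU version accelerates is the generic ordered-subsets pattern: cycle through subsets of the projection rows and apply one cheap update per subset. The sketch below shows that pattern with an OSEM-style multiplicative update on toy data; the ordered-subset convex algorithm for transmission CT uses a different per-subset update but the same structure.

      # Sketch of the ordered-subsets idea: instead of one update using all
      # projection rows, cycle through row subsets, one (cheaper) update each.
      import numpy as np

      rng = np.random.default_rng(2)
      A = rng.random((120, 40))                 # toy system matrix
      x_true = rng.random(40)
      y = A @ x_true                            # noiseless "measurements"

      n_sub = 6
      subsets = np.array_split(np.arange(120), n_sub)
      x = np.ones(40)
      for it in range(20):
          for s in subsets:                     # one pass = n_sub sub-iterations
              As = A[s]
              # OSEM-style multiplicative update restricted to this subset
              x *= (As.T @ (y[s] / (As @ x))) / As.sum(axis=0)
      print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))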

  3. Figural properties are prioritized for search under conditions of uncertainty: Setting boundary conditions on claims that figures automatically attract attention.

    PubMed

    Peterson, Mary A; Mojica, Andrew J; Salvagio, Elizabeth; Kimchi, Ruth

    2017-01-01

    Nelson and Palmer (2007) concluded that figures/figural properties automatically attract attention, after they found that participants were faster to detect/discriminate targets appearing where a portion of a familiar object was suggested in an otherwise ambiguous display. We investigated whether these effects are truly automatic and whether they generalize to another figural property, convexity. We found that Nelson and Palmer's results do generalize to convexity, but only when participants are uncertain regarding when and where the target will appear. Dependence on uncertainty regarding target location/timing was also observed for familiarity. Thus, although we could replicate and extend Nelson and Palmer's results, our experiments showed that figures do not automatically draw attention. In addition, our research went beyond Nelson and Palmer's, in that we were able to separate figural properties from perceived figures. Because figural properties are regularities that predict where objects lie in the visual field, our results join other evidence that regularities in the environment can attract attention. More generally, our results are consistent with Bayesian theories in which priors are given more weight under conditions of uncertainty.

  4. Robust Controller Design: Minimizing Peak-to-Peak Gain

    DTIC Science & Technology

    1992-09-01

    hold, i.e., that ρ(M) > 1. The Perron-Frobenius theory for nonnegative matrices states that ρ(M) is itself an eigenvalue of M. Moreover, associated...vector space. A convex cone P is a convex set such that if x ∈ P then αx ∈ P for all real α > 0. Given such P, it is possible to define an ordering...relation on X as follows: x > y if and only if x - y ∈ P. Then it is natural to define a dual cone P* (with an abuse of notation) inside X* in the

  5. Reflective optical imaging systems with balanced distortion

    DOEpatents

    Hudyma, Russell M.

    2001-01-01

    Optical systems compatible with extreme ultraviolet radiation comprising four reflective elements for projecting a mask image onto a substrate are described. The four optical elements comprise, in order from object to image, convex, concave, convex and concave mirrors. The optical systems are particularly suited for step and scan lithography methods. The invention enables the use of larger slit dimensions associated with ring field scanning optics, improves wafer throughput, and allows higher semiconductor device density. The inventive optical systems are characterized by reduced dynamic distortion because the static distortion is balanced across the slit width.

  6. Shape complexes: the intersection of label orderings and star convexity constraints in continuous max-flow medical image segmentation

    PubMed Central

    Baxter, John S. H.; Inoue, Jiro; Drangova, Maria; Peters, Terry M.

    2016-01-01

    Optimization-based segmentation approaches deriving from discrete graph-cuts and continuous max-flow have become increasingly nuanced, allowing for topological and geometric constraints on the resulting segmentation while retaining global optimality. However, these two considerations, topological and geometric, have yet to be combined in a unified manner. The concept of “shape complexes,” which combine geodesic star convexity with extendable continuous max-flow solvers, is presented. These shape complexes allow more complicated shapes to be created through the use of multiple labels and super-labels, with geodesic star convexity governed by a topological ordering. These problems can be optimized using extendable continuous max-flow solvers. Previous approaches required computationally expensive coordinate system warping, which are ill-defined and ambiguous in the general case. These shape complexes are demonstrated in a set of synthetic images as well as vessel segmentation in ultrasound, valve segmentation in ultrasound, and atrial wall segmentation from contrast-enhanced CT. Shape complexes represent an extendable tool alongside other continuous max-flow methods that may be suitable for a wide range of medical image segmentation problems. PMID:28018937

  7. Suppressing Ghost Diffraction in E-Beam-Written Gratings

    NASA Technical Reports Server (NTRS)

    Wilson, Daniel; Backlund, Johan

    2009-01-01

    A modified scheme for electron-beam (E-beam) writing used in the fabrication of convex or concave diffraction gratings makes it possible to suppress the ghost diffraction heretofore exhibited by such gratings. Ghost diffraction is a spurious component of diffraction caused by a spurious component of grating periodicity as described below. The ghost diffraction orders appear between the main diffraction orders and are typically more intense than is the diffuse scattering from the grating. At such high intensity, ghost diffraction is the dominant source of degradation of grating performance. The pattern of a convex or concave grating is established by electron-beam writing in a resist material coating a substrate that has the desired convex or concave shape. Unfortunately, as a result of the characteristics of electrostatic deflectors used to control the electron beam, it is possible to expose only a small field - typically between 0.5 and 1.0 mm wide - at a given fixed position of the electron gun relative to the substrate. To make a grating larger than the field size, it is necessary to move the substrate to make it possible to write fields centered at different positions, so that the larger area is synthesized by "stitching" the exposed fields.

  8. Virial Coefficients and Equations of State for Hard Polyhedron Fluids.

    PubMed

    Irrgang, M Eric; Engel, Michael; Schultz, Andrew J; Kofke, David A; Glotzer, Sharon C

    2017-10-24

    Hard polyhedra are a natural extension of the hard sphere model for simple fluids, but there is no general scheme for predicting the effect of shape on thermodynamic properties, even in moderate-density fluids. Only the second virial coefficient is known analytically for general convex shapes, so higher-order equations of state have been elusive. Here we investigate high-precision state functions in the fluid phase of 14 representative polyhedra with different assembly behaviors. We discuss historic efforts in analytically approximating virial coefficients up to B4 and numerically evaluating them to B8. Using virial coefficients as inputs, we show the convergence properties for four equations of state for hard convex bodies. In particular, the exponential approximant of Barlow et al. (J. Chem. Phys. 2012, 137, 204102) is found to be useful up to the first ordering transition for most polyhedra. The convergence behavior we explore can guide choices in expending additional resources for improved estimates. Fluids of arbitrary hard convex bodies are too complicated to be described in a general way at high densities, so the high-precision state data we provide can serve as a reference for future work in calculating state data or as a basis for thermodynamic integration.
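
    The truncated series itself is one line; the sketch below evaluates the compressibility factor Z = 1 + sum_k B_k rho^(k-1) using the known reduced hard-sphere virial coefficients as stand-ins for polyhedron data (at eta = 0.3 the result lands close to the Carnahan-Starling value of about 3.97).

      # Sketch: truncated virial equation of state Z = P/(rho*kT).
      # Coefficients are the reduced hard-sphere virials B_k/B2^(k-1), used
      # purely as a stand-in for polyhedron data.
      B = [1.0, 0.625, 0.2869, 0.1103, 0.0389, 0.0130, 0.0042]   # B2..B8, reduced

      def Z(eta):
          rho_b = 4.0 * eta            # B2*rho = 4*eta for hard spheres
          return 1.0 + sum(Bk * rho_b ** (k + 1) for k, Bk in enumerate(B))

      for eta in (0.1, 0.2, 0.3, 0.4):
          print(f"eta={eta:.1f}  Z={Z(eta):.3f}")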

  9. A new convexity measure for polygons.

    PubMed

    Zunic, Jovisa; Rosin, Paul L

    2004-07-01

    Convexity estimators are commonly used in the analysis of shape. In this paper, we define and evaluate a new convexity measure for planar regions bounded by polygons. The new convexity measure can be understood as a "boundary-based" measure and in accordance with this it is more sensitive to measured boundary defects than the so-called "area-based" convexity measures. When compared with the convexity measure defined as the ratio between the Euclidean perimeter of the convex hull of the measured shape and the Euclidean perimeter of the measured shape, the new convexity measure also shows some advantages, particularly for shapes with holes. The new convexity measure has the following desirable properties: 1) the estimated convexity is always a number from (0, 1], 2) the estimated convexity is 1 if and only if the measured shape is convex, 3) there are shapes whose estimated convexity is arbitrarily close to 0, 4) the new convexity measure is invariant under similarity transformations, and 5) there is a simple and fast procedure for computing the new convexity measure.
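
    For orientation, the classic perimeter-based baseline mentioned above is easy to compute; the sketch below evaluates perimeter(convex hull)/perimeter(polygon) for a toy non-convex polygon. The paper's new boundary-based measure is more involved and is not reproduced here.

      # Classic perimeter-based convexity measure: 1 exactly for convex shapes,
      # < 1 otherwise.
      import numpy as np
      from scipy.spatial import ConvexHull

      def perimeter(poly):
          d = np.diff(np.vstack([poly, poly[:1]]), axis=0)   # close the polygon
          return np.hypot(d[:, 0], d[:, 1]).sum()

      # a non-convex "arrow" polygon (counterclockwise vertex list)
      poly = np.array([[0, 0], [4, 0], [4, 3], [2, 1], [0, 3]], float)
      hull = poly[ConvexHull(poly).vertices]                 # hull vertices, CCW
      print("convexity =", perimeter(hull) / perimeter(poly))   # < 1: not convex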

  10. Water Dynamics in Gyroid Phases of Self-Assembled Gemini Surfactants

    DOE PAGES

    Roy, Santanu; Skoff, David; Perroni, Dominic V.; ...

    2016-02-14

    Water-mediated ion transport through functional nanoporous materials depends on the dynamics of water confined within a given nanostructured morphology. In this study, we investigate hydrogen-bonding dynamics of interfacial water within a ‘normal’ (Type I) lyotropic gyroid phase formed by a gemini dicarboxylate surfactant self-assembly using a combination of 2DIR spectroscopy and molecular dynamics simulations. Experiments and simulations demonstrate that water dynamics in the normal gyroid phase is one order of magnitude slower than that in bulk water, due to specific interactions between water, the ionic surfactant headgroups, and counterions. However, the dynamics of water in the normal gyroid phase are faster than those of water confined in a reverse spherical micelle of a sulfonate surfactant, given that the water pool in the reverse micelle and the water pore in the gyroid phase have roughly the same diameters. This difference in confined water dynamics likely arises from the significantly reduced curvature-induced frustration at the convex interfaces of the normal gyroid, as compared to the concave interfaces of a reverse spherical micelle. These detailed insights into confined water dynamics may guide the future design of artificial membranes that rapidly transport protons and other ions.

  11. Convex relaxations for gas expansion planning

    DOE PAGES

    Borraz-Sanchez, Conrado; Bent, Russell Whitford; Backhaus, Scott N.; ...

    2016-01-01

    Expansion of natural gas networks is a critical process involving substantial capital expenditures with complex decision-support requirements. Here, given the non-convex nature of gas transmission constraints, global optimality and infeasibility guarantees can only be offered by global optimisation approaches. Unfortunately, state-of-the-art global optimisation solvers are unable to scale up to real-world size instances. In this study, we present a convex mixed-integer second-order cone relaxation for the gas expansion planning problem under steady-state conditions. The underlying model offers tight lower bounds with high computational efficiency. In addition, the optimal solution of the relaxation can often be used to derive high-quality solutions to the original problem, leading to provably tight optimality gaps and, in some cases, global optimal solutions. The convex relaxation is based on a few key ideas, including the introduction of flux direction variables, exact McCormick relaxations, on/off constraints, and integer cuts. Numerical experiments are conducted on the traditional Belgian gas network, as well as other real larger networks. The results demonstrate both the accuracy and computational speed of the relaxation and its ability to produce high-quality solutions.

  12. Transferable Output ASCII Data (TOAD) editor version 1.0 user's guide

    NASA Technical Reports Server (NTRS)

    Bingel, Bradford D.; Shea, Anne L.; Hofler, Alicia S.

    1991-01-01

    The Transferable Output ASCII Data (TOAD) editor is an interactive software tool for manipulating the contents of TOAD files. The TOAD editor is specifically designed to work with tabular data. Selected subsets of data may be displayed to the user's screen, sorted, exchanged, duplicated, removed, replaced, inserted, or transferred to and from external files. It also offers a number of useful features including on-line help, macros, a command history, an 'undo' option, variables, and a full complement of mathematical functions and conversion factors. Written in ANSI FORTRAN 77 and completely self-contained, the TOAD editor is very portable and has already been installed on SUN, SGI/IRIS, and CONVEX hosts.

  13. The performance of monotonic and new non-monotonic gradient ascent reconstruction algorithms for high-resolution neuroreceptor PET imaging.

    PubMed

    Angelis, G I; Reader, A J; Kotasidis, F A; Lionheart, W R; Matthews, J C

    2011-07-07

    Iterative expectation maximization (EM) techniques have been extensively used to solve maximum likelihood (ML) problems in positron emission tomography (PET) image reconstruction. Although EM methods offer a robust approach to solving ML problems, they usually suffer from slow convergence rates. The ordered subsets EM (OSEM) algorithm provides significant improvements in the convergence rate, but it can cycle between estimates converging towards the ML solution of each subset. In contrast, gradient-based methods, such as the recently proposed non-monotonic maximum likelihood (NMML) and the more established preconditioned conjugate gradient (PCG), offer a globally convergent, yet equally fast, alternative to OSEM. Reported results showed that NMML provides faster convergence compared to OSEM; however, it has never been compared to other fast gradient-based methods, like PCG. Therefore, in this work we evaluate the performance of two gradient-based methods (NMML and PCG) and investigate their potential as an alternative to the fast and widely used OSEM. All algorithms were evaluated using 2D simulations, as well as a single [(11)C]DASB clinical brain dataset. Results on simulated 2D data show that both PCG and NMML achieve orders of magnitude faster convergence to the ML solution compared to MLEM and exhibit comparable performance to OSEM. Equally fast performance is observed between OSEM and PCG for clinical 3D data, but NMML seems to perform poorly. However, with the addition of a preconditioner term to the gradient direction, the convergence behaviour of NMML can be substantially improved. Although PCG is a fast convergent algorithm, the use of a (bent) line search increases the complexity of the implementation, as well as the computational time involved per iteration. Contrary to previous reports, NMML offers no clear advantage over OSEM or PCG, for noisy PET data. Therefore, we conclude that there is little evidence to replace OSEM as the algorithm of choice for many applications, especially given that in practice convergence is often not desired for algorithms seeking ML estimates.
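
    The relationship between the EM update and gradient ascent is worth making concrete: the MLEM step equals gradient ascent on the Poisson log-likelihood preconditioned by x divided by the sensitivity image, which is the baseline that NMML and PCG accelerate with line searches and conjugate directions. A toy NumPy sketch, not any of the paper's clinical configurations:

      # MLEM as preconditioned gradient ascent on the Poisson log-likelihood:
      #   grad L = A^T (y / (A x) - 1),  MLEM:  x <- x + (x / A^T 1) * grad L.
      import numpy as np

      rng = np.random.default_rng(3)
      A = rng.random((200, 50)); x_true = rng.random(50)
      y = rng.poisson(A @ x_true).astype(float)       # Poisson "sinogram"

      x = np.ones(50)
      sens = A.sum(axis=0)                            # sensitivity image A^T 1
      for _ in range(100):
          grad = A.T @ (y / np.maximum(A @ x, 1e-12) - 1.0)
          x = np.maximum(x + (x / sens) * grad, 0.0)  # equals the MLEM update
      print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))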

  14. Structures in color space

    NASA Astrophysics Data System (ADS)

    Petrov, Alexander P.

    1996-09-01

    Classic colorimetry and the traditionally used color space do not represent all perceived colors (for example, browns look dark yellow under colorimetric conditions of observation), so the specific goal of this work is to suggest another concept of color and to prove that the corresponding set of colors is complete. The idea of our approach, attributing color to surface patches (not to the light), immediately ties together the problems of color perception and vision geometry. The equivalence relation in the linear space of light fluxes F established by a procedure of colorimetry gives us a 3D color space H. By definition we introduce a sample σ (a surface patch) as a linear mapping σ: L → H, where L is a subspace of F called the illumination space. A Dedekind structure of partial order can be defined on the set of samples: two samples α and β belong to one chromatic class if ker α = ker β, and α > β if ker α ⊃ ker β. The maximal elements of this order create the chromatic class BLACK. Geometrical arguments can be given for L to be 3D, and it can be proved that in this case the minimal element of the above Dedekind structure is unique; the corresponding chromatic class is called WHITE, containing the samples ω such that ker ω = {0} ⊂ L. Color is defined as a mapping C: H → H and, assuming color constancy, the complete set of perceived colors is proved to be isomorphic to a subset C of 3 × 3 matrices. This subset is convex, bounded, and symmetric, with E/2 as the center of symmetry. The problem of metrization of the color space C is discussed and a color metric related to shape, i.e., to vision geometry, is suggested.

  15. On cell entropy inequality for discontinuous Galerkin methods

    NASA Technical Reports Server (NTRS)

    Jiang, Guangshan; Shu, Chi-Wang

    1993-01-01

    We prove a cell entropy inequality for a class of high order discontinuous Galerkin finite element methods approximating conservation laws, which implies convergence for the one dimensional scalar convex case.

  16. Present-day stress field in subduction zones: Insights from 3D viscoelastic models and data

    NASA Astrophysics Data System (ADS)

    Petricca, Patrizio; Carminati, Eugenio

    2016-01-01

    3D viscoelastic finite element (FE) models were run to investigate the impact of geometry and kinematics on the lithospheric stress in convergent margins. Generic geometries were designed in order to resemble natural subduction zones. Our model predictions mirror the results of previous 2D models concerning the effects of lithosphere-mantle relative flow on stress regimes, and allow a better understanding of the lateral variability of the stress field. In particular, in both upper and lower plates, stress axes orientations depend on the adopted geometry and axes rotations occur following the trench shape. Generally stress axes are oriented perpendicular or parallel to the trench, with the exception of the slab lateral tips where rotations occur. Overall compression results in the upper plate when convergence rate is faster than mantle flow rate, suggesting a major role for convergence. In the slab, along-strike tension occurs at intermediate and deeper depths (> 100 km) in case of mantle flow sustaining the sinking lithosphere and slab convex geometry facing mantle flow or in case of opposing mantle flow and slab concave geometry facing mantle flow. Along-strike compression is predicted in case of sustaining mantle flow and concave slabs or in case of opposing mantle flow and convex slabs. The slab stress field is thus controlled by the direction of impact of mantle flow onto the slab and by slab longitudinal curvature. Slab pull produces not only tension in the bending region of the subducted plate but also compression where upper and lower plates are coupled. A qualitative comparison between results and data in selected subductions indicates a good match for the South America, Mariana and Tonga-Kermadec subductions. Discrepancies, as for Sumatra-Java, emerge due to missing geometric (e.g., occurrence of fault systems and local changes in the orientation of plate boundaries) and rheological (e.g., plasticity associated with slab bending, anisotropy) complexities in the models.

  17. Reflective optical imaging system with balanced distortion

    DOEpatents

    Chapman, Henry N.; Hudyma, Russell M.; Shafer, David R.; Sweeney, Donald W.

    1999-01-01

    An optical system compatible with short wavelength (extreme ultraviolet) radiation comprising four reflective elements for projecting a mask image onto a substrate. The four optical elements comprise, in order from object to image, convex, concave, convex and concave mirrors. The optical system is particularly suited for step and scan lithography methods. The invention enables the use of larger slit dimensions associated with ring field scanning optics, improves wafer throughput and allows higher semiconductor device density. The inventive optical system is characterized by reduced dynamic distortion because the static distortion is balanced across the slit width.

  18. Computation of convex bounds for present value functions with random payments

    NASA Astrophysics Data System (ADS)

    Ahcan, Ales; Darkiewicz, Grzegorz; Goovaerts, Marc; Hoedemakers, Tom

    2006-02-01

    In this contribution we study the distribution of the present value function of a series of random payments in a stochastic financial environment. Such distributions occur naturally in a wide range of applications within fields of insurance and finance. We obtain accurate approximations by developing upper and lower bounds in the convex-order sense for present value functions. Technically speaking, our methodology is an extension of the results of Dhaene et al. [Insur. Math. Econom. 31(1) (2002) 3-33, Insur. Math. Econom. 31(2) (2002) 133-161] to the case of scalar products of mutually independent random vectors.
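
    The convex-order upper bound has a particularly simple computational form because quantiles of a comonotonic sum are additive; the sketch below evaluates that bound for a toy sum of lognormal terms (the mu and sigma values are illustrative, not calibrated to any market).

      # Sketch of the comonotonic upper bound for S = X1 + ... + Xn with
      # lognormal marginals: replace S by S_c = sum_i F_i^{-1}(U) for a single
      # uniform U, whose quantiles are sums of the marginal quantiles.
      import numpy as np
      from scipy.stats import norm

      mu = np.array([0.03, 0.02, 0.01]); sigma = np.array([0.2, 0.25, 0.3])

      def upper_bound_quantile(p):
          # quantile additivity of comonotonic sums: Q_S(p) = sum_i Q_Xi(p)
          return np.sum(np.exp(mu + sigma * norm.ppf(p)))

      for p in (0.5, 0.95, 0.995):
          print(f"p={p:.1%}: upper-bound quantile = {upper_bound_quantile(p):.4f}")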

  19. Compact multi-bounce projection system for extreme ultraviolet projection lithography

    DOEpatents

    Hudyma, Russell M.

    2002-01-01

    An optical system compatible with short wavelength (extreme ultraviolet) radiation comprising four optical elements providing five reflective surfaces for projecting a mask image onto a substrate. The five optical surfaces are characterized in order from object to image as concave, convex, concave, convex and concave mirrors. The second and fourth reflective surfaces are part of the same optical element. The optical system is particularly suited for ring field step and scan lithography methods. The invention uses aspheric mirrors to minimize static distortion and balance the static distortion across the ring field width, which effectively minimizes dynamic distortion.

  20. A vectorized Lanczos eigensolver for high-performance computers

    NASA Technical Reports Server (NTRS)

    Bostic, Susan W.

    1990-01-01

    The computational strategies used to implement a Lanczos-based-method eigensolver on the latest generation of supercomputers are described. Several examples of structural vibration and buckling problems are presented that show the effects of using optimization techniques to increase the vectorization of the computational steps. The data storage and access schemes and the tools and strategies that best exploit the computer resources are presented. The method is implemented on the Convex C220, the Cray 2, and the Cray Y-MP computers. Results show that very good computation rates are achieved for the most computationally intensive steps of the Lanczos algorithm and that the Lanczos algorithm is many times faster than other methods extensively used in the past.
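
    The kernel of any such implementation is the three-term Lanczos recurrence; the sketch below builds the tridiagonal Ritz matrix for a symmetric test matrix. Production eigensolvers such as the one described add reorthogonalization, blocking, and shift-invert strategies that are omitted here.

      # Minimal Lanczos iteration: eigenvalues of the tridiagonal T
      # approximate the extremal eigenvalues of a symmetric A.
      import numpy as np

      def lanczos(A, m, rng=np.random.default_rng(4)):
          n = A.shape[0]
          q = rng.standard_normal(n); q /= np.linalg.norm(q)
          alpha, beta, q_prev, b = [], [], np.zeros(n), 0.0
          for _ in range(m):
              w = A @ q - b * q_prev          # three-term recurrence
              a = q @ w; w -= a * q
              b = np.linalg.norm(w)
              alpha.append(a); beta.append(b)
              q_prev, q = q, w / b
          return np.diag(alpha) + np.diag(beta[:-1], 1) + np.diag(beta[:-1], -1)

      A = np.diag(np.arange(1.0, 101.0))      # symmetric test matrix
      T = lanczos(A, 30)
      print("largest Ritz value:", np.linalg.eigvalsh(T).max())   # ~100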

  1. Adaptive convex combination approach for the identification of improper quaternion processes.

    PubMed

    Ujang, Bukhari Che; Jahanchahi, Cyrus; Took, Clive Cheong; Mandic, Danilo P

    2014-01-01

    Data-adaptive optimal modeling and identification of real-world vector sensor data is provided by combining the fractional tap-length (FT) approach with model order selection in the quaternion domain. To account rigorously for the generality of such processes, both second-order circular (proper) and noncircular (improper), the proposed approach in this paper combines the FT length optimization with both the strictly linear quaternion least mean square (QLMS) and widely linear QLMS (WL-QLMS). A collaborative approach based on QLMS and WL-QLMS is shown to both identify the type of processes (proper or improper) and to track their optimal parameters in real time. Analysis shows that monitoring the evolution of the convex mixing parameter within the collaborative approach allows us to track the improperness in real time. Further insight into the properties of those algorithms is provided by establishing a relationship between the steady-state error and optimal model order. The approach is supported by simulations on model order selection and identification of both strictly linear and widely linear quaternion-valued systems, such as those routinely used in renewable energy (wind) and human-centered computing (biomechanics).
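
    The convex-combination mechanism is easiest to see in the real-valued case: two LMS filters with different step sizes run side by side, and a sigmoid-parameterized mixing weight is adapted on the combined error. The sketch below is that real-valued skeleton with toy system coefficients, not the quaternion QLMS/WL-QLMS pair of the paper.

      # Convex combination of two adaptive filters: y = lam*y1 + (1-lam)*y2,
      # lam = sigmoid(a), with a adapted by gradient descent on the mixed error.
      import numpy as np

      rng = np.random.default_rng(5)
      h = np.array([0.5, -0.3, 0.1])                    # unknown toy system
      x = rng.standard_normal(5000)
      d = np.convolve(x, h)[:5000] + 0.01 * rng.standard_normal(5000)

      w1, w2, a = np.zeros(3), np.zeros(3), 0.0          # fast and slow LMS + mixing
      mu1, mu2, mu_a = 0.05, 0.005, 20.0
      for n in range(3, 5000):
          u = x[n:n-3:-1]                                # regressor [x_n, x_{n-1}, x_{n-2}]
          y1, y2 = w1 @ u, w2 @ u
          lam = 1.0 / (1.0 + np.exp(-a))
          e = d[n] - (lam * y1 + (1 - lam) * y2)
          w1 += mu1 * (d[n] - y1) * u                    # each filter adapts on its own error
          w2 += mu2 * (d[n] - y2) * u
          a += mu_a * e * (y1 - y2) * lam * (1 - lam)    # adapt the mixing parameter
      print("lambda ->", round(1.0 / (1.0 + np.exp(-a)), 3), " w1 ->", np.round(w1, 3))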

  2. Equivalent Relaxations of Optimal Power Flow

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bose, S; Low, SH; Teeraratkul, T

    2015-03-01

    Several convex relaxations of the optimal power flow (OPF) problem have recently been developed using both bus injection models and branch flow models. In this paper, we prove relations among three convex relaxations: a semidefinite relaxation that computes a full matrix, a chordal relaxation based on a chordal extension of the network graph, and a second-order cone relaxation that computes the smallest partial matrix. We prove a bijection between the feasible sets of the OPF in the bus injection model and the branch flow model, establishing the equivalence of these two models and their second-order cone relaxations. Our results imply that, for radial networks, all these relaxations are equivalent and one should always solve the second-order cone relaxation. For mesh networks, the semidefinite relaxation and the chordal relaxation are equally tight and both are strictly tighter than the second-order cone relaxation. Therefore, for mesh networks, one should either solve the chordal relaxation or the SOCP relaxation, trading off tightness and the required computational effort. Simulations are used to illustrate these results.
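
    The rotated second-order cone identity that powers these relaxations is compact enough to show directly: z^2 <= x*y with x, y >= 0 is exactly ||(2z, x - y)||_2 <= x + y. The toy cvxpy model below (not a full power-flow formulation) relaxes a nonconvex product constraint this way, and the relaxation happens to be tight at the optimum.

      # Sketch of the SOC trick underlying the OPF relaxations: relax the
      # nonconvex condition z^2 = x*y to z^2 <= x*y, written as an SOC constraint.
      import cvxpy as cp

      x, y, z = cp.Variable(), cp.Variable(), cp.Variable()
      constraints = [
          cp.norm(cp.hstack([2 * z, x - y])) <= x + y,   # SOC form of z^2 <= x*y
          x >= 1, x <= 4, y >= 1, y <= 4, z >= 1.5,
      ]
      prob = cp.Problem(cp.Minimize(x + y), constraints)
      prob.solve()
      print("x, y, z =", x.value, y.value, z.value)      # tight here: x*y = z^2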

  3. Joint denoising and distortion correction of atomic scale scanning transmission electron microscopy images

    NASA Astrophysics Data System (ADS)

    Berkels, Benjamin; Wirth, Benedikt

    2017-09-01

    Nowadays, modern electron microscopes deliver images at atomic scale. The precise atomic structure encodes information about material properties. Thus, an important ingredient in the image analysis is to locate the centers of the atoms shown in micrographs as precisely as possible. Here, we consider scanning transmission electron microscopy (STEM), which acquires data in a rastering pattern, pixel by pixel. Due to this rastering combined with the magnification to atomic scale, movements of the specimen even at the nanometer scale lead to random image distortions that make precise atom localization difficult. Given a series of STEM images, we derive a Bayesian method that jointly estimates the distortion in each image and reconstructs the underlying atomic grid of the material by fitting the atom bumps with suitable bump functions. The resulting highly non-convex minimization problems are solved numerically with a trust region approach. Existence of minimizers and the model behavior for faster and faster rastering are investigated using variational techniques. The performance of the method is finally evaluated on both synthetic and real experimental data.

  4. Automorphogenesis and gravitropism of plant seedlings grown under microgravity conditions

    NASA Astrophysics Data System (ADS)

    Hoson, T.; Saiki, M.; Kamisaka, S.; Yamashita, M.

    Plant seedlings exhibit automorphogenesis on clinostats. The occurrence of automorphogenesis was confirmed under microgravity in Space Shuttle STS-95 flight. Rice coleoptiles showed an inclination toward the caryopsis in the basal region and a spontaneous curvature in the same adaxial direction in the elongating region both on a three-dimensional (3-D) clinostat and in space. Both rice roots and Arabidopsis hypocotyls also showed a similar morphology in space and on the 3-D clinostat. In rice coleoptiles, the mechanisms inducing such an automorphic curvature were studied. The faster-expanding convex side of rice coleoptiles showed a higher extensibility of the cell wall than the opposite side. Also, in the convex side, the cell wall thickness was smaller, the turnover of the matrix polysaccharides was more active, and the microtubules oriented more transversely than the concave side, and these differences appear to be causes of the curvature. When rice coleoptiles grown on the 3-D clinostat were placed horizontally, the gravitropic curvature was delayed as compared with control coleoptiles. In clinostatted coleoptiles, the corresponding suppression of the amyloplast development was also observed. Similar results were obtained in Arabidopsis hypocotyls. Thus, the induction of automorphogenesis and a concomitant decrease in graviresponsiveness occurred in plant shoots grown under microgravity conditions.

  5. Direct single-layered fabrication of 3D concavo-convex patterns in nano-stereolithography

    NASA Astrophysics Data System (ADS)

    Lim, T. W.; Park, S. H.; Yang, D. Y.; Kong, H. J.; Lee, K. S.

    2006-09-01

    A nano-surfacing process (NSP) is proposed to directly fabricate three-dimensional (3D) concavo-convex-shaped microstructures such as micro-lens arrays using two-photon polymerization (TPP), a promising technique for fabricating arbitrary 3D highly functional micro-devices. In TPP, commonly utilized methods for fabricating complex 3D microstructures to date are based on a layer-by-layer accumulating technique employing two-dimensional sliced data derived from 3D computer-aided design data. As such, this approach requires much time and effort for precise fabrication. In this work, a novel single-layer exposure method is proposed in order to improve the fabricating efficiency for 3D concavo-convex-shaped microstructures. In the NSP, 3D microstructures are divided into 13 sub-regions horizontally with consideration of the heights. Those sub-regions are then expressed as 13 characteristic colors, after which a multi-voxel matrix (MVM) is composed with the characteristic colors. Voxels with various heights and diameters are generated to construct 3D structures using a MVM scanning method. Some 3D concavo-convex-shaped microstructures were fabricated to estimate the usefulness of the NSP, and the results show that it readily enables the fabrication of single-layered 3D microstructures.

  6. Measurement system for diffraction efficiency of convex gratings

    NASA Astrophysics Data System (ADS)

    Liu, Peng; Chen, Xin-hua; Zhou, Jian-kang; Zhao, Zhi-cheng; Liu, Quan; Luo, Chao; Wang, Xiao-feng; Tang, Min-xue; Shen, Wei-min

    2017-08-01

    A measurement system for the diffraction efficiency of convex gratings is designed. The measurement system mainly includes four components: a light source, a front system, a dispersing system that contains a convex grating, and a detector. Based on the definition and measuring principle of diffraction efficiency, the optical scheme of the measurement system is analyzed and the design result is given. Then, in order to validate the feasibility of the designed system, the measurement system is set up and the diffraction efficiency of a convex grating with an aperture of 35 mm, a curvature radius of 72 mm, a blaze angle of 6.4°, a grating period of 2.5 μm and a working waveband of 400 nm-900 nm is tested. Based on the GUM (Guide to the Expression of Uncertainty in Measurement), the uncertainties in the measuring results are evaluated. The measured diffraction efficiency data are compared to theoretical values, which are calculated from the grating groove parameters obtained by an atomic force microscope and Rigorous Coupled Wave Analysis, and the reliability of the measurement system is thus illustrated. Finally, the measurement performance of the system is analyzed and tested. The results show that the testing accuracy, the testing stability and the testing repeatability are 2.5%, 0.085% and 3.5%, respectively.

  7. Spatial trends in tidal flat shape and associated environmental parameters in South San Francisco Bay

    USGS Publications Warehouse

    Bearman, J.A.; Friedrichs, Carl T.; Jaffe, B.E.; Foxgrover, A.C.

    2010-01-01

    Spatial trends in the shape of profiles of South San Francisco Bay (SSFB) tidal flats are examined using bathymetric and lidar data collected in 2004 and 2005. Eigenfunction analysis reveals a dominant mode of morphologic variability related to the degree of convexity or concavity in the cross-shore profile, indicative of (i) depositional, tidally dominant or (ii) erosional, wave-impacted conditions. Two contrasting areas of characteristic shape, north or south of a constriction in estuary width located near the Dumbarton Bridge, are recognized. This pattern of increasing or decreasing convexity in the inner or outer estuary is correlated to spatial variability in external and internal environmental parameters, and observational results are found to be largely consistent with theoretical expectations. Tidal flat convexity in SSFB is observed to increase (in decreasing order of significance) in response to increased deposition, increased tidal range, decreased fetch length, decreased sediment grain size, and decreased tidal flat width. © 2010 Coastal Education and Research Foundation.

  8. Reduction of shock induced noise in imperfectly expanded supersonic jets using convex optimization

    NASA Astrophysics Data System (ADS)

    Adhikari, Sam

    2007-11-01

    Imperfectly expanded jets generate screech noise. The imbalance between the backpressure and the exit pressure of imperfectly expanded jets produces shock cells and expansion or compression waves from the nozzle. The instability waves and the shock cells interact to generate the screech sound. The mathematical model consists of cylindrical-coordinate-based full Navier-Stokes equations and large-eddy-simulation turbulence modeling. Analytical and computational analysis of the three-dimensional helical effects provides a model that relates several parameters with shock cell patterns, screech frequency and the distribution of shock generation locations. Convex optimization techniques minimize the shock cell patterns and the instability waves. The objective functions are (convex) quadratic and the constraint functions are affine. In the quadratic optimization programs, minimization of the quadratic functions over a set of polyhedra provides the optimal result. Various industry-standard methods such as regression analysis, distance between polyhedra, bounding variance, Markowitz optimization, and second-order cone programming are used for the quadratic optimization.
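
    The abstract specifies the problem class precisely: a convex quadratic objective minimized over affine (polyhedral) constraints. A generic instance of that class, with random stand-in data rather than the paper's aeroacoustic model, can be written and solved in a few lines with CVXPY:

      # Illustrative only: a convex quadratic program of the form described
      # (quadratic objective, affine constraints); the matrices are random
      # placeholders, not the paper's screech-noise model.
      import cvxpy as cp
      import numpy as np

      rng = np.random.default_rng(1)
      n, m = 8, 4
      P = rng.standard_normal((n, n)); P = P.T @ P + np.eye(n)  # PSD Hessian
      q = rng.standard_normal(n)
      A = rng.standard_normal((m, n)); b = rng.standard_normal(m)

      xvar = cp.Variable(n)
      prob = cp.Problem(cp.Minimize(0.5 * cp.quad_form(xvar, P) + q @ xvar),
                        [A @ xvar <= b])
      prob.solve()
      print(prob.value, xvar.value)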

  9. Slope gradient and shape effects on soil profiles in the northern mountainous forests of Iran

    NASA Astrophysics Data System (ADS)

    Fazlollahi Mohammadi, M.; Jalali, S. G. H.; Kooch, Y.; Said-Pullicino, D.

    2016-12-01

    In order to evaluate the variability of soil profiles for two slope shapes (concave and convex) and five positions (summit, shoulder, back slope, footslope and toeslope), a study of a virgin area was made in a beech stand of the mountain forests of northern Iran. Across the slope positions, the soil profiles demonstrated significant changes due to topography for both slope shapes. The solum depth of the convex slope was higher than that of the concave one in all five positions, and it decreased from the summit to the shoulder and increased from the mid to lower slope positions for both convex and concave slopes. The thin solum at the upper positions and on the concave slope demonstrated that pedogenetic development is least at upper slope positions and on the concave slope, where leaching and biomass productivity are less than at lower slopes and on the convex slope. A large decrease in the thickness of the O and A horizons from the summit to the back slope was noted for both concave and convex slopes, but the thickness increased from the back slope toward the down slope for both of them. The average thickness of the B horizons increased from the summit to the down slope in the case of the concave slope, but in the case of the convex slope it decreased from the summit to the shoulder and afterwards increased toward the down slope. The thicknesses of the different horizons varied in part across the different positions and slope shapes because these had different plant species cover and soil features, which were related to topography.

  10. An accelerated proximal augmented Lagrangian method and its application in compressive sensing.

    PubMed

    Sun, Min; Liu, Jing

    2017-01-01

    As a first-order method, the augmented Lagrangian method (ALM) is a benchmark solver for linearly constrained convex programming, and in practice some semi-definite proximal terms are often added to its primal variable's subproblem to make it more implementable. In this paper, we propose an accelerated PALM with indefinite proximal regularization (PALM-IPR) for convex programming with linear constraints, which generalizes the proximal terms from semi-definite to indefinite. Under mild assumptions, we establish the worst-case [Formula: see text] convergence rate of PALM-IPR in a non-ergodic sense. Finally, numerical results show that our new method is feasible and efficient for solving compressive sensing.
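
    For readers unfamiliar with the ALM template, the following is a bare-bones sketch for the compressive-sensing prototype min ||x||_1 subject to Ax = b, with the x-subproblem solved inexactly by one proximal-gradient step. This is a generic linearized proximal ALM for illustration; the paper's accelerated, indefinite-proximal variant (PALM-IPR) is not reproduced here.

      # Bare-bones linearized proximal ALM sketch for min ||x||_1 s.t. Ax = b.
      import numpy as np

      def soft(v, t):
          return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

      def prox_alm(A, b, beta=1.0, n_iter=300):
          m, n = A.shape
          x, lam = np.zeros(n), np.zeros(m)
          tau = 1.0 / (beta * np.linalg.norm(A, 2) ** 2)  # step for the prox step
          for _ in range(n_iter):
              grad = A.T @ (lam + beta * (A @ x - b))     # gradient of smooth part
              x = soft(x - tau * grad, tau)               # prox of tau * ||.||_1
              lam = lam + beta * (A @ x - b)              # multiplier update
          return x

      rng = np.random.default_rng(6)
      A = rng.standard_normal((30, 80))
      x0 = np.zeros(80); x0[:4] = 2.0
      b = A @ x0
      print(np.round(prox_alm(A, b)[:6], 2))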

  11. Study of foldable elastic tubes for large space structure applications, phase 1

    NASA Technical Reports Server (NTRS)

    Jones, I. W.; Boateng, C.; Williams, C. D.

    1980-01-01

    Structural members that might be suitable for strain-energy-deployable structures are discussed, with emphasis on a thin-walled cylindrical tube with a cross-section that is called 'bi-convex'. The design of bi-convex tube test specimens and their fabrication are described, as well as the design and construction of a special-purpose testing machine to determine the deployment characteristics. The results of the first series of tests were mixed but clearly revealed that, since most of the specimens failed to deploy completely due to a buckling problem, this type of tube requires some modification in order to be viable.

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Xiaohu; Shi, Di; Wang, Zhiwei

    Shunt FACTS devices, such as a Static Var Compensator (SVC), are capable of providing local reactive power compensation. They are widely used in the network to reduce the real power loss and improve the voltage profile. This paper proposes a planning model based on mixed integer conic programming (MICP) to optimally allocate SVCs in the transmission network considering load uncertainty. The load uncertainties are represented by a number of scenarios. Reformulation and linearization techniques are utilized to transform the original non-convex model into a convex second order cone programming (SOCP) model. Numerical case studies based on the IEEE 30-bus system demonstrate the effectiveness of the proposed planning model.

  13. Activity recognition using dynamic multiple sensor fusion in body sensor networks.

    PubMed

    Gao, Lei; Bourke, Alan K; Nelson, John

    2012-01-01

    Multiple sensor fusion is a main research direction for activity recognition. However, there are two challenges in such systems: the energy consumption due to wireless transmission, and the classifier design necessitated by the dynamic feature vector. This paper proposes a multi-sensor fusion framework that consists of a sensor selection module and a hierarchical classifier. The sensor selection module adopts convex optimization to select the sensor subset in real time. The hierarchical classifier combines a Decision Tree classifier with a Naïve Bayes classifier. A dataset collected from 8 subjects, who performed 8 scenario activities, was used to evaluate the proposed system. The results show that the proposed system can markedly reduce the energy consumption while maintaining the recognition accuracy.
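
    A two-stage classifier of the kind described (a Decision Tree for a coarse split, then Naïve Bayes within each branch) can be sketched with scikit-learn; the synthetic features, group structure and parameters below are placeholders for real sensor data, not the paper's configuration.

      # Sketch of a hierarchical classifier: tree for coarse groups, then
      # a per-group Gaussian Naive Bayes model for the final label.
      import numpy as np
      from sklearn.tree import DecisionTreeClassifier
      from sklearn.naive_bayes import GaussianNB

      rng = np.random.default_rng(5)
      X = rng.standard_normal((400, 6)) + np.repeat(np.arange(4), 100)[:, None]
      y = np.repeat(np.arange(4), 100)   # 4 activities (synthetic)
      coarse = y // 2                    # 2 coarse groups (e.g. static/dynamic)

      tree = DecisionTreeClassifier(max_depth=3).fit(X, coarse)
      nb = {g: GaussianNB().fit(X[coarse == g], y[coarse == g]) for g in (0, 1)}

      def predict(sample):
          g = int(tree.predict(sample.reshape(1, -1))[0])
          return int(nb[g].predict(sample.reshape(1, -1))[0])

      print(predict(X[0]), predict(X[399]))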

  14. Symmetry breaking and the geometry of reduced density matrices

    NASA Astrophysics Data System (ADS)

    Zauner, V.; Draxler, D.; Vanderstraeten, L.; Haegeman, J.; Verstraete, F.

    2016-11-01

    The concept of symmetry breaking and the emergence of corresponding local order parameters constitute the pillars of modern-day many-body physics. We demonstrate that the existence of symmetry breaking is a consequence of the geometric structure of the convex set of reduced density matrices of all possible many-body wavefunctions. The surfaces of these convex bodies exhibit non-analyticities, which signal the emergence of symmetry breaking and of an associated order parameter, and which also show different characteristics for different types of phase transitions. We illustrate this with three paradigmatic examples of many-body systems exhibiting symmetry breaking: the quantum Ising model, the classical q-state Potts model in two dimensions at finite temperature, and the ideal Bose gas in three dimensions at finite temperature. This state-based viewpoint on phase transitions provides a novel tool for studying exotic many-body phenomena in quantum and classical systems.

  15. Structural Studies on Intact Clostridium botulinum Neurotoxins Complexed with Inhibitors Leading to Drug Design

    DTIC Science & Technology

    2009-02-01

    compounds via virtual screening. These compounds include small molecules – transition state analogues and benzimidazoles. Since there is a commonality in...Crystal structure of BoNT/E has been determined, helping us to understand the faster action of BoNT/E compared to BoNT/A. • A subset of benzimidazole

  16. A fast optimization algorithm for multicriteria intensity modulated proton therapy planning.

    PubMed

    Chen, Wei; Craft, David; Madden, Thomas M; Zhang, Kewu; Kooy, Hanne M; Herman, Gabor T

    2010-09-01

    To describe a fast projection algorithm for optimizing intensity modulated proton therapy (IMPT) plans and to describe and demonstrate the use of this algorithm in multicriteria IMPT planning. The authors develop a projection-based solver for a class of convex optimization problems and apply it to IMPT treatment planning. The speed of the solver permits its use in multicriteria optimization, where several optimizations are performed which span the space of possible treatment plans. The authors describe a plan database generation procedure which is customized to the requirements of the solver. The optimality precision of the solver can be specified by the user. The authors apply the algorithm to three clinical cases: a pancreas case, an esophagus case, and a case of a tumor along the rib cage. Detailed analysis of the pancreas case shows that the algorithm is orders of magnitude faster than industry-standard general purpose algorithms (MOSEK's interior point optimizer, primal simplex optimizer, and dual simplex optimizer). Additionally, the projection solver has almost no memory overhead. The speed and guaranteed accuracy of the algorithm make it suitable for use in multicriteria treatment planning, which requires the computation of several diverse treatment plans. Additionally, given the low memory overhead of the algorithm, the method can be extended to include multiple geometric instances and proton range possibilities, for robust optimization.
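
    The paper's solver itself is not reproduced in this record, but the flavor of projection methods for convex constraint sets can be conveyed with a generic cyclic-projection (POCS/ART-style) sketch on linear inequality constraints with nonnegativity, the kind of structure dose constraints take; this is purely illustrative, not the clinical algorithm.

      # Generic cyclic-projection sketch for {x : A x <= b, x >= 0}.
      import numpy as np

      def cyclic_projections(A, b, n_sweeps=200):
          m, n = A.shape
          x = np.zeros(n)
          for _ in range(n_sweeps):
              for i in range(m):
                  viol = A[i] @ x - b[i]
                  if viol > 0:  # project onto halfspace {x : A[i] x <= b[i]}
                      x -= viol / (A[i] @ A[i]) * A[i]
              np.clip(x, 0.0, None, out=x)  # project onto nonnegative orthant
          return x

      rng = np.random.default_rng(2)
      A = rng.standard_normal((20, 10)); b = rng.random(20) + 0.5
      x = cyclic_projections(A, b)
      print("max violation:", max((A @ x - b).max(), 0.0))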

  17. Convex Lattice Polygons

    ERIC Educational Resources Information Center

    Scott, Paul

    2006-01-01

    A "convex" polygon is one with no re-entrant angles. Alternatively one can use the standard convexity definition, asserting that for any two points of the convex polygon, the line segment joining them is contained completely within the polygon. In this article, the author provides a solution to a problem involving convex lattice polygons.

  18. High numerical aperture projection system for extreme ultraviolet projection lithography

    DOEpatents

    Hudyma, Russell M.

    2000-01-01

    An optical system is described that is compatible with extreme ultraviolet radiation and comprises five reflective elements for projecting a mask image onto a substrate. The five optical elements are characterized in order from object to image as concave, convex, concave, convex, and concave mirrors. The optical system is particularly suited for ring field, step and scan lithography methods. The invention uses aspheric mirrors to minimize static distortion and balance the static distortion across the ring field width which effectively minimizes dynamic distortion. The present invention allows for higher device density because the optical system has improved resolution that results from the high numerical aperture, which is at least 0.14.

  19. Mapping tropical rainforest canopies using multi-temporal spaceborne imaging spectroscopy

    NASA Astrophysics Data System (ADS)

    Somers, Ben; Asner, Gregory P.

    2013-10-01

    The use of imaging spectroscopy for floristic mapping of forests is complicated by the spectral similarity among coexisting species. Here we evaluated an alternative spectral unmixing strategy combining a time series of EO-1 Hyperion images and an automated feature selection strategy in MESMA. Instead of using the same spectral subset to unmix each image pixel, our modified approach allowed the spectral subsets to vary on a per-pixel basis such that each pixel is evaluated using a spectral subset tuned towards maximal separability of its specific endmember class combination or species mixture. The potential of the new approach for floristic mapping of tree species in Hawaiian rainforests was quantitatively demonstrated using both simulated and actual hyperspectral image time series. With a Cohen's Kappa coefficient of 0.65, our approach provided a more accurate tree species map compared to MESMA (Kappa = 0.54). In addition, through the selection of spectral subsets, our approach was about 90% faster than MESMA. The flexible or adaptive use of band sets in spectral unmixing thus provides an interesting avenue to address spectral similarities in complex vegetation canopies.

  20. Effect of Cytomegalovirus Co-Infection on Normalization of Selected T-Cell Subsets in Children with Perinatally Acquired HIV Infection Treated with Combination Antiretroviral Therapy

    PubMed Central

    Kapetanovic, Suad; Aaron, Lisa; Montepiedra, Grace; Anthony, Patricia; Thuvamontolrat, Kasalyn; Pahwa, Savita; Burchett, Sandra; Weinberg, Adriana; Kovacs, Andrea

    2015-01-01

    Background We examined the effect of cytomegalovirus (CMV) co-infection and viremia on reconstitution of selected CD4+ and CD8+ T-cell subsets in perinatally HIV-infected (PHIV+) children ≥ 1-year old who participated in a partially randomized, open-label, 96-week combination antiretroviral therapy (cART)-algorithm study. Methods Participants were categorized as CMV-naïve, CMV-positive (CMV+) viremic, and CMV+ aviremic, based on blood, urine, or throat culture, CMV IgG and DNA polymerase chain reaction measured at baseline. At weeks 0, 12, 20 and 40, T-cell subsets including naïve (CD62L+CD45RA+; CD95-CD28+), activated (CD38+HLA-DR+) and terminally differentiated (CD62L-CD45RA+; CD95+CD28-) CD4+ and CD8+ T-cells were measured by flow cytometry. Results Of the 107 participants included in the analysis, 14% were CMV+ viremic; 49% CMV+ aviremic; 37% CMV-naïve. In longitudinal adjusted models, compared with CMV+ status, baseline CMV-naïve status was significantly associated with faster recovery of CD8+CD62L+CD45RA+% and CD8+CD95-CD28+% and faster decrease of CD8+CD95+CD28-%, independent of HIV VL response to treatment, cART regimen and baseline CD4%. Surprisingly, CMV status did not have a significant impact on longitudinal trends in CD8+CD38+HLA-DR+%. CMV status did not have a significant impact on any CD4+ T-cell subsets. Conclusions In this cohort of PHIV+ children, the normalization of naïve and terminally differentiated CD8+ T-cell subsets in response to cART was detrimentally affected by the presence of CMV co-infection. These findings may have implications for adjunctive treatment strategies targeting CMV co-infection in PHIV+ children, especially those that are now adults or reaching young adulthood and may have accelerated immunologic aging, increased opportunistic infections and aging diseases of the immune system. PMID:25794163

  1. The effect of perceptual grouping on haptic numerosity perception.

    PubMed

    Verlaers, K; Wagemans, J; Overvliet, K E

    2015-01-01

    We used a haptic enumeration task to investigate whether enumeration can be facilitated by perceptual grouping in the haptic modality. Eight participants were asked to count tangible dots as quickly and accurately as possible, while moving their finger pad over a tactile display. In Experiment 1, we manipulated the number and organization of the dots, while keeping the total exploration area constant. The dots were either evenly distributed on a horizontal line (baseline condition) or organized into groups based on either proximity (dots placed in closer proximity to each other) or configural cues (dots placed in a geometric configuration). In Experiment 2, we varied the distance between the subsets of dots. We hypothesized that when subsets of dots can be grouped together, the enumeration time will be shorter and accuracy will be higher than in the baseline condition. The results of both experiments showed faster enumeration for the configural condition than for the baseline condition, indicating that configural grouping also facilitates haptic enumeration. In Experiment 2, faster enumeration was also observed for the proximity condition than for the baseline condition. Thus, perceptual grouping speeds up haptic enumeration by both configural and proximity cues, suggesting that similar mechanisms underlie perceptual grouping in both visual and haptic enumeration.

  2. Modelling uncertainty with generalized credal sets: application to conjunction and decision

    NASA Astrophysics Data System (ADS)

    Bronevich, Andrey G.; Rozenberg, Igor N.

    2018-01-01

    To model conflict, non-specificity and contradiction in information, upper and lower generalized credal sets are introduced. Any upper generalized credal set is a convex subset of plausibility measures interpreted as lower probabilities whose bodies of evidence consist of singletons and a certain event. Analogously, contradiction is modelled in the theory of evidence by a belief function that is greater than zero at empty set. Based on generalized credal sets, we extend the conjunctive rule for contradictory sources of information, introduce constructions like natural extension in the theory of imprecise probabilities and show that the model of generalized credal sets coincides with the model of imprecise probabilities if the profile of a generalized credal set consists of probability measures. We give ways how the introduced model can be applied to decision problems.

  3. The Backscattering Phase Function for a Sphere with a Two-Scale Relief of Rough Surface

    NASA Astrophysics Data System (ADS)

    Klass, E. V.

    2017-12-01

    The backscattering of light from spherical surfaces characterized by one- and two-scale roughness reliefs has been investigated. The analysis is performed using the three-dimensional Monte-Carlo program POKS-RG (geometrical-optics approximation), which makes it possible to take into account the roughness of the objects under study by introducing local geometries of different levels. The geometric module of the program is aimed at describing objects by equations of second-order surfaces. One-scale roughness is set as an ensemble of geometric figures (convex or concave halves of ellipsoids or cones). Two-scale roughness is modeled by convex halves of ellipsoids whose surface contains ellipsoidal pores. It is shown that a spherical surface with one-scale convex inhomogeneities has a flatter backscattering phase function than a surface with concave inhomogeneities (pores). For a sphere with two-scale roughness, the backscattering intensity is found to be determined mostly by the lower-level inhomogeneities. The influence of roughness on the backscattering from different spatial regions of the spherical surface is also analyzed.

  4. Reentry trajectory optimization with waypoint and no-fly zone constraints using multiphase convex programming

    NASA Astrophysics Data System (ADS)

    Zhao, Dang-Jun; Song, Zheng-Yu

    2017-08-01

    This study proposes a multiphase convex programming approach for rapid reentry trajectory generation that satisfies path, waypoint and no-fly zone (NFZ) constraints for Common Aero Vehicles (CAVs). Because the time when the vehicle reaches each waypoint is unknown, the trajectory of the vehicle is divided into several phases according to the prescribed waypoints, rendering a multiphase optimization problem with free final time. Due to the requirement of rapidity, the minimum flight time of each phase is preferred over other performance indices in this research. Sequential linearization is used to approximate the nonlinear dynamics of the vehicle as well as the nonlinear concave path constraints on the heat rate, dynamic pressure, and normal load; meanwhile, convexification techniques are proposed to relax the concave constraints on the control variables. Next, the original multiphase optimization problem is reformulated as a standard second-order cone programming problem. Theoretical analysis is conducted to show that the original problem and the converted problem have the same solution. Numerical results are presented to demonstrate that the proposed approach is efficient and effective.

  5. Trajectory Design Employing Convex Optimization for Landing on Irregularly Shaped Asteroids

    NASA Technical Reports Server (NTRS)

    Pinson, Robin M.; Lu, Ping

    2016-01-01

    Mission proposals that land spacecraft on asteroids are becoming increasingly popular. However, in order to have a successful mission the spacecraft must reliably and softly land at the intended landing site with pinpoint precision. The problem under investigation is how to design a propellant optimal powered descent trajectory that can be quickly computed onboard the spacecraft, without interaction from the ground control. The propellant optimal control problem in this work is to determine the optimal finite thrust vector to land the spacecraft at a specified location, in the presence of a highly nonlinear gravity field, subject to various mission and operational constraints. The proposed solution uses convex optimization, a gravity model with higher fidelity than Newtonian, and an iterative solution process for a fixed final time problem. In addition, a second optimization method is wrapped around the convex optimization problem to determine the optimal flight time that yields the lowest propellant usage over all flight times. Gravity models designed for irregularly shaped asteroids are investigated. Success of the algorithm is demonstrated by designing powered descent trajectories for the elongated binary asteroid Castalia.
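
    A drastically simplified instance of such a convex powered-descent problem can be posed with CVXPY: minimize total thrust effort to bring a point mass to rest at a target under constant gravity and a thrust bound. The dynamics, discretization and all numbers are assumptions for illustration; the paper's higher-fidelity gravity model and outer flight-time search are omitted.

      # Toy convex powered-descent sketch (constant gravity, thrust bound).
      import cvxpy as cp
      import numpy as np

      N, dt = 40, 1.0
      g = np.array([0.0, 0.0, -0.005])  # stand-in constant gravity, km/s^2
      r = cp.Variable((N + 1, 3)); v = cp.Variable((N + 1, 3)); u = cp.Variable((N, 3))

      cons = [r[0] == np.array([1.0, 0.5, 2.0]), v[0] == np.zeros(3),
              r[N] == np.zeros(3), v[N] == np.zeros(3)]
      for k in range(N):
          cons += [v[k + 1] == v[k] + dt * (u[k] + g),   # forward-Euler dynamics
                   r[k + 1] == r[k] + dt * v[k],
                   cp.norm(u[k]) <= 0.02]                # thrust acceleration bound

      prob = cp.Problem(cp.Minimize(dt * cp.sum(cp.norm(u, 2, axis=1))), cons)
      prob.solve()
      print("status:", prob.status, " effort:", prob.value)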

  6. A Novel Method of Aircraft Detection Based on High-Resolution Panchromatic Optical Remote Sensing Images.

    PubMed

    Wang, Wensheng; Nie, Ting; Fu, Tianjiao; Ren, Jianyue; Jin, Longxu

    2017-05-06

    In target detection from optical remote sensing images, the two main obstacles for aircraft target detection are how to extract candidates from complex backgrounds with multiple gray levels, and how to confirm targets when the target shapes are deformed, irregular or asymmetric, for example due to natural conditions (low signal-to-noise ratio, illumination conditions or swaying during photographing) or occlusion by surrounding objects (boarding bridges, equipment). To address these issues, an improved active contours algorithm, namely a region-scalable fitting energy based threshold (TRSF) algorithm, and a corner-convex hull based segmentation algorithm (CCHS) are proposed in this paper. Firstly, the maximal between-cluster variance algorithm (Otsu's algorithm) and the region-scalable fitting energy (RSF) algorithm are combined to solve the difficulty of target extraction from complex backgrounds with multiple gray levels. Secondly, based on their inherent shapes and prominent corners, aircraft are divided into five fragments by utilizing convex hulls and Harris corner points. Furthermore, a series of new structural features, which describe the proportion of the target part of a fragment to the whole fragment and the proportion of a fragment to the whole hull, are identified to judge whether the targets are true or not. Experimental results show that the TRSF algorithm improves extraction accuracy in complex backgrounds and is faster than some traditional active contours algorithms, and that the CCHS is effective in overcoming the detection difficulties caused by irregular shapes.
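
    The two geometric ingredients named for CCHS, Harris corner points and convex hulls, are available directly in OpenCV; the toy shape and parameter values below are illustrative stand-ins, not the pipeline's settings.

      # Sketch of Harris corners and a convex hull on a toy binary shape.
      import cv2
      import numpy as np

      img = np.zeros((200, 200), np.uint8)
      poly = np.array([[30, 100], [170, 60], [120, 140], [60, 160]], np.int32)
      cv2.fillPoly(img, [poly], 255)

      corners = cv2.cornerHarris(np.float32(img), blockSize=5, ksize=3, k=0.04)
      pts = np.argwhere(corners > 0.01 * corners.max())  # (row, col) corner pixels

      contours, _ = cv2.findContours(img, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
      hull = cv2.convexHull(contours[0])
      print(len(pts), "corner pixels;", len(hull), "hull vertices")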

  7. CONVEX mini manual

    NASA Technical Reports Server (NTRS)

    Tennille, Geoffrey M.; Howser, Lona M.

    1993-01-01

    The use of the CONVEX computers that are an integral part of the Supercomputing Network Subsystems (SNS) of the Central Scientific Computing Complex of LaRC is briefly described. Features of the CONVEX computers that are significantly different from the CRAY supercomputers are covered, including: FORTRAN, C, the architecture of the CONVEX computers, the CONVEX environment, batch job submittal, debugging, performance analysis, utilities unique to CONVEX, and documentation. This revision reflects the addition of the Applications Compiler and the X-based debugger, CXdb. The document is intended for all CONVEX users as a ready reference to frequently asked questions and to more detailed information contained within the vendor manuals. It is appropriate for both the novice and the experienced user.

  8. Processing convexity and concavity along a 2-D contour: figure-ground, structural shape, and attention.

    PubMed

    Bertamini, Marco; Wagemans, Johan

    2013-04-01

    Interest in convexity has a long history in vision science. For smooth contours in an image, it is possible to code regions of positive (convex) and negative (concave) curvature, and this provides useful information about solid shape. We review a large body of evidence on the role of this information in perception of shape and in attention. This includes evidence from behavioral, neurophysiological, imaging, and developmental studies. A review is necessary to analyze the evidence on how convexity affects (1) separation between figure and ground, (2) part structure, and (3) attention allocation. Despite some broad agreement on the importance of convexity in these areas, there is a lack of consensus on the interpretation of specific claims--for example, on the contribution of convexity to metric depth and on the automatic directing of attention to convexities or to concavities. The focus is on convexity and concavity along a 2-D contour, not convexity and concavity in 3-D, but the important link between the two is discussed. We conclude that there is good evidence for the role of convexity information in figure-ground organization and in parsing, but other, more specific claims are not (yet) well supported.
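
    Numerically, the sign convention the review discusses is easy to state: for a counter-clockwise polygonal contour, the sign of the cross product of successive edge vectors marks each vertex as convex (+) or concave (-). A small sketch (illustrative, not from the review):

      # Label polygon vertices as convex (+1) or concave (-1).
      import numpy as np

      def vertex_signs(poly):
          """poly: (n, 2) array of CCW vertices; +1 convex, -1 concave."""
          prev = np.roll(poly, 1, axis=0)
          nxt = np.roll(poly, -1, axis=0)
          e1, e2 = poly - prev, nxt - poly
          cross = e1[:, 0] * e2[:, 1] - e1[:, 1] * e2[:, 0]
          return np.sign(cross)

      # arrow-like shape with one reflex (concave) vertex at (2, 1.5)
      arrow = np.array([[0, 0], [4, 0], [4, 3], [2, 1.5], [0, 3]], float)
      print(vertex_signs(arrow))  # [ 1.  1.  1. -1.  1.]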

  9. A General Iterative Shrinkage and Thresholding Algorithm for Non-convex Regularized Optimization Problems.

    PubMed

    Gong, Pinghua; Zhang, Changshui; Lu, Zhaosong; Huang, Jianhua Z; Ye, Jieping

    2013-01-01

    Non-convex sparsity-inducing penalties have recently received considerable attention in sparse learning. Recent theoretical investigations have demonstrated their superiority over convex counterparts in several sparse learning settings. However, solving the non-convex optimization problems associated with non-convex penalties remains a major challenge. A commonly used approach is Multi-Stage (MS) convex relaxation (or DC programming), which relaxes the original non-convex problem to a sequence of convex problems. This approach is usually not very practical for large-scale problems because its computational cost is a multiple of that of solving a single convex problem. In this paper, we propose a General Iterative Shrinkage and Thresholding (GIST) algorithm to solve the non-convex optimization problem for a large class of non-convex penalties. The GIST algorithm iteratively solves a proximal operator problem, which in turn has a closed-form solution for many commonly used penalties. At each outer iteration of the algorithm, we use a line search initialized by the Barzilai-Borwein (BB) rule, which allows an appropriate step size to be found quickly. The paper also presents a detailed convergence analysis of the GIST algorithm. The efficiency of the proposed algorithm is demonstrated by extensive experiments on large-scale data sets.
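
    The following is a minimal sketch in the spirit of GIST for the convex special case min 0.5||Ax-b||^2 + λ||x||_1: a proximal-gradient loop whose step size is initialized by the Barzilai-Borwein rule, with the soft-threshold as the closed-form proximal operator. GIST itself targets non-convex penalties and adds a monotone line search, both omitted here.

      # Proximal gradient with a BB step for the L1-regularized least squares.
      import numpy as np

      def soft_threshold(v, t):
          return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

      def prox_grad_bb(A, b, lam, n_iter=200):
          x = np.zeros(A.shape[1]); g = A.T @ (A @ x - b); step = 1.0
          for _ in range(n_iter):
              x_new = soft_threshold(x - step * g, step * lam)
              g_new = A.T @ (A @ x_new - b)
              s, yv = x_new - x, g_new - g
              if s @ yv > 1e-12:
                  step = (s @ s) / (s @ yv)  # BB1 step size
              x, g = x_new, g_new
          return x

      rng = np.random.default_rng(3)
      A = rng.standard_normal((40, 100))
      x_true = np.zeros(100); x_true[:5] = 3.0
      b = A @ x_true + 0.01 * rng.standard_normal(40)
      print(np.round(prox_grad_bb(A, b, lam=0.5)[:8], 2))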

  10. Mathematical analysis on the cosets of subgroup in the group of E-convex sets

    NASA Astrophysics Data System (ADS)

    Abbas, Nada Mohammed; Ajeena, Ruma Kareem K.

    2018-05-01

    In this work, an analysis of the cosets of a subgroup in the group of E-convex sets is presented as a new and powerful tool in the topics of convex analysis and abstract algebra. The properties of these cosets of E-convex sets are proved mathematically. The most important theorem on a finite group in the theory of E-convex sets, namely Lagrange's theorem, is proved. As well, a mathematical proof concerning the quotient group of E-convex sets is presented.

  11. Lateral facial profile may reveal the risk for sleep disordered breathing in children--the PANIC-study.

    PubMed

    Ikävalko, Tiina; Närhi, Matti; Lakka, Timo; Myllykangas, Riitta; Tuomilehto, Henri; Vierola, Anu; Pahkala, Riitta

    2015-01-01

    To evaluate lateral view photography of the face as a tool for assessing morphological properties (i.e. facial convexity) as a risk factor for sleep disordered breathing (SDB) in children, and to test how reliably oral health and non-oral healthcare professionals can visually discern the lateral profile of the face from the photographs. The present study sample consisted of 382 children 6-8 years of age who were participants in the Physical Activity and Nutrition in Children (PANIC) Study. Sleep was assessed by a sleep questionnaire administered by the parents. SDB was defined as apnoeas, frequent or loud snoring or nocturnal mouth breathing observed by the parents. Facial convexity was assessed with three different methods. First, it was clinically evaluated by the reference orthodontist (T.I.). Second, lateral view photographs were taken to visually sub-divide the facial profile into convex, normal or concave. The photos were examined by the reference orthodontist and by seven different healthcare professionals who work with children, as well as by a dental student. The inter- and intra-examiner consistencies were calculated by Kappa statistics. Three soft tissue landmarks of the facial profile, soft tissue Glabella (G`), Subnasale (Sn) and soft tissue Pogonion (Pg`), were digitally identified to analyze the convexity of the face, and the intra-examiner reproducibility of the reference orthodontist was determined by calculating intra-class correlation coefficients (ICCs). The third way to express the convexity of the face was to calculate the angle of facial convexity (G`-Sn-Pg`) and to group it into quintiles. For analysis, the lowest quintile (≤164.2°) was set to represent the most convex facial profile. The prevalence of SDB in children with the most convex profiles, expressed as the lowest quintile of the angle G`-Sn-Pg` (≤164.2°), was almost 2-fold (14.5%) compared with those with a normal profile (8.1%) (p = 0.084). The inter-examiner Kappa values between the reference orthodontist and the other examiners for visually assessing the facial profile from the photographs ranged from poor to moderate (0.000-0.579). The best Kappa values were achieved between the two orthodontists (0.579). The intra-examiner Kappa value of the reference orthodontist for assessing the profiles was 0.920, with an agreement of 93.3%. The ICC and its 95% CI between the two digital measurements of the angle of facial convexity (G`-Sn-Pg`) by the reference orthodontist were 0.980 and 0.951-0.992. In addition to orthodontists, it would be advantageous if other healthcare professionals could also play a key role in identifying certain risk features for SDB. However, the present results indicate that, in order to recognize the morphological risk for SDB, one would need to be trained for the purpose and would need sufficient knowledge of the growth and development of the face.

  12. Strategic larval decision-making in a bivoltine butterfly.

    PubMed

    Friberg, Magne; Dahlerus, Josefin; Wiklund, Christer

    2012-07-01

    In temperate areas, insect larvae must decide between entering winter diapause or developing directly and reproducing in the same season. Long daylength and high temperature promote direct development, which is generally associated with a higher growth rate. In this work, we investigated whether the larval pathway decision precedes the adjustment of growth rate (state-independent), or whether the pathway decision is conditional on the individual's growth rate (state-dependent), in the butterfly Pieris napi. This species typically makes the pathway decision in the penultimate instar. We measured growth rate throughout larval development under two daylengths: slightly shorter and slightly longer than the critical daylength. Results indicate that the pathway decision can be both state-independent and state-dependent; under the shorter daylength condition, most larvae entered diapause, and direct development was chosen exclusively by a small subset of larvae showing the highest growth rates already in the early instars; under the longer daylength condition, most larvae developed directly, and the diapause pathway was chosen exclusively by a small subset of slow-growing individuals. Among the remainder, the choice of pathway was independent of the early growth rate; larvae entering diapause under the short daylength grew as fast as or faster than the direct developers under the longer daylength in the early instars, whereas the direct developers grew faster than the diapausers only in the ultimate instar. Hence, the pathway decision was state-dependent in a subset with a very high or very low growth rate, whereas the decision was state-independent in the majority of the larvae, which made the growth rate adjustment downstream from the pathway decision.

  13. Asymptotically extremal polynomials with respect to varying weights and application to Sobolev orthogonality

    NASA Astrophysics Data System (ADS)

    Díaz Mendoza, C.; Orive, R.; Pijeira Cabrera, H.

    2008-10-01

    We study the asymptotic behavior of the zeros of a sequence of polynomials whose weighted norms, with respect to a sequence of weight functions, have the same nth root asymptotic behavior as the weighted norms of certain extremal polynomials. This result is applied to obtain the (contracted) weak zero distribution for orthogonal polynomials with respect to a Sobolev inner product with exponential weights of the form e^{-φ(x)}, giving a unified treatment for the so-called Freud case (i.e., when φ has polynomial growth at infinity) and the Erdös case (when φ grows faster than any polynomial at infinity). In addition, we provide a new proof for the bound of the distance of the zeros to the convex hull of the support for these Sobolev orthogonal polynomials.

  14. Bounding the solutions of parametric weakly coupled second-order semilinear parabolic partial differential equations

    DOE PAGES

    Azunre, P.

    2016-09-21

    In this paper, two novel techniques for bounding the solutions of parametric weakly coupled second-order semilinear parabolic partial differential equations are developed. The first provides a theorem to construct interval bounds, while the second provides a theorem to construct lower bounds that are convex and upper bounds that are concave in the parameter. The convex/concave bounds can be significantly tighter than the interval bounds because of the wrapping effect suffered by interval analysis in dynamical systems. Both types of bounds are computationally cheap to construct, requiring the solution of auxiliary systems that are, respectively, two and four times larger than the original system. An illustrative numerical example of bound construction and its use for deterministic global optimization within a simple serial branch-and-bound algorithm, implemented numerically using interval arithmetic and a generalization of McCormick's relaxation technique, is presented. Finally, problems within the important class of reaction-diffusion systems may be optimized with these tools.

  15. Effects of orientation and downward-facing convex curvature on pool-boiling critical heat flux

    NASA Astrophysics Data System (ADS)

    Howard, Alicia Ann Harris

    Photographic studies of near-saturated pool boiling on both inclined flat surfaces and a downward-facing convex surface were conducted in order to determine the physical mechanisms that trigger critical heat flux (CHF). Based on the vapor behavior observed just prior to CHF, it is shown for the flat surfaces that the surface orientations can be divided into three regions: upward-facing (0-60°), near-vertical (60-165°), and downward-facing (165-180°); each region is associated with a unique CHF trigger mechanism. In the upward-facing region, buoyancy forces remove the vapor vertically off the heater surface. The near-vertical region is characterized by a wavy liquid-vapor interface which sweeps along the heater surface. In the downward-facing region, the vapor repeatedly stratifies on the heater surface, greatly decreasing CHF. The vapor behavior along the convex surface is cyclic in nature and similar to the nucleation/coalescence/stratification/release sequence observed for flat surfaces in the downward-facing region. The vapor stratification occurred at the bottom (downward-facing) heaters on the convex surface. CHF is always triggered on these downward-facing heaters and then propagates up the convex surface, and the orientations of these heaters are comparable with the orientation range of the flat-surface downward-facing region. The vast differences between the observed vapor behavior within the three regions and on the convex surface indicate that a single overall pool boiling CHF model cannot possibly account for all the observed effects. Upward-facing surfaces have been examined and modeled extensively by many investigators, and a few investigators have addressed downward-facing surfaces, so this investigation focuses on modeling the near-vertical region. The near-vertical CHF model incorporates classical two-dimensional interfacial instability theory, a separated flow model, an energy balance, and a criterion for separation of the wavy interface from the surface at CHF. The model was tested for different fluids and shows good agreement with CHF data. Additionally, the instability theory incorporated into this model accurately predicts the angle of transition between the near-vertical and downward-facing regions.

  16. Experimental design for estimating unknown groundwater pumping using genetic algorithm and reduced order model

    NASA Astrophysics Data System (ADS)

    Ushijima, Timothy T.; Yeh, William W.-G.

    2013-10-01

    An optimal experimental design algorithm is developed to select locations for a network of observation wells that provide maximum information about unknown groundwater pumping in a confined, anisotropic aquifer. The design uses a maximal information criterion that chooses, among competing designs, the design that maximizes the sum of squared sensitivities while conforming to specified design constraints. The formulated optimization problem is non-convex and contains integer variables necessitating a combinatorial search. Given a realistic large-scale model, the size of the combinatorial search required can make the problem difficult, if not impossible, to solve using traditional mathematical programming techniques. Genetic algorithms (GAs) can be used to perform the global search; however, because a GA requires a large number of calls to a groundwater model, the formulated optimization problem still may be infeasible to solve. As a result, proper orthogonal decomposition (POD) is applied to the groundwater model to reduce its dimensionality. Then, the information matrix in the full model space can be searched without solving the full model. Results from a small-scale test case show identical optimal solutions among the GA, integer programming, and exhaustive search methods. This demonstrates the GA's ability to determine the optimal solution. In addition, the results show that a GA with POD model reduction is several orders of magnitude faster in finding the optimal solution than a GA using the full model. The proposed experimental design algorithm is applied to a realistic, two-dimensional, large-scale groundwater problem. The GA converged to a solution for this large-scale problem.
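
    The model-reduction ingredient (POD) is compactly expressed as a truncated SVD of a snapshot matrix; the sketch below (synthetic snapshots and an assumed 99.9% energy criterion, not the paper's groundwater model) shows the reduced basis the GA would search over in place of the full model.

      # POD via SVD: keep the leading modes that capture 99.9% of the energy.
      import numpy as np

      rng = np.random.default_rng(4)
      # full state dim 1000, 50 snapshots, true rank 3 (synthetic)
      snapshots = rng.standard_normal((1000, 3)) @ rng.standard_normal((3, 50))

      U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
      energy = np.cumsum(s ** 2) / np.sum(s ** 2)
      r = int(np.searchsorted(energy, 0.999)) + 1
      basis = U[:, :r]                # POD basis (1000 x r)
      reduced = basis.T @ snapshots   # reduced coordinates (r x 50)
      print("retained modes:", r)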

  17. Chromatically corrected virtual image visual display. [reducing eye strain in flight simulators

    NASA Technical Reports Server (NTRS)

    Kahlbaum, W. M., Jr. (Inventor)

    1980-01-01

    An in-line, three element, large diameter, optical display lens is disclosed which has a front convex-convex element, a central convex-concave element, and a rear convex-convex element. The lens, used in flight simulators, magnifies an image presented on a television monitor and, by causing light rays leaving the lens to be in essentially parallel paths, reduces eye strain of the simulator operator.

  18. Nash points, Ky Fan inequality and equilibria of abstract economies in Max-Plus and B-convexity

    NASA Astrophysics Data System (ADS)

    Briec, Walter; Horvath, Charles

    2008-05-01

    B-convexity was introduced in [W. Briec, C. Horvath, B-convexity, Optimization 53 (2004) 103-127]. Separation and Hahn-Banach like theorems can be found in [G. Adilov, A.M. Rubinov, B-convex sets and functions, Numer. Funct. Anal. Optim. 27 (2006) 237-257] and [W. Briec, C.D. Horvath, A. Rubinov, Separation in B-convexity, Pacific J. Optim. 1 (2005) 13-30]. We show here that all the basic results related to fixed point theorems are available in B-convexity. Ky Fan inequality, existence of Nash equilibria and existence of equilibria for abstract economies are established in the framework of B-convexity. Monotone analysis, or analysis on Maslov semimodules [V.N. Kolokoltsov, V.P. Maslov, Idempotent Analysis and Its Applications, Math. Appl., vol. 401, Kluwer Academic, 1997; V.P. Litvinov, V.P. Maslov, G.B. Shpitz, Idempotent functional analysis: An algebraic approach, Math. Notes 69 (2001) 696-729; V.P. Maslov, S.N. Samborski (Eds.), Idempotent Analysis, Advances in Soviet Mathematics, Amer. Math. Soc., Providence, RI, 1992], is the natural framework for these results. From this point of view Max-Plus convexity and B-convexity are isomorphic Maslov semimodule structures over isomorphic semirings. Therefore all the results of this paper hold in the context of Max-Plus convexity.

  19. Scoliosis convexity and organ anatomy are related.

    PubMed

    Schlösser, Tom P C; Semple, Tom; Carr, Siobhán B; Padley, Simon; Loebinger, Michael R; Hogg, Claire; Castelein, René M

    2017-06-01

    Primary ciliary dyskinesia (PCD) is a respiratory syndrome in which 'random' organ orientation can occur; with approximately 46% of patients developing situs inversus totalis at organogenesis. The aim of this study was to explore the relationship between organ anatomy and curve convexity by studying the prevalence and convexity of idiopathic scoliosis in PCD patients with and without situs inversus. Chest radiographs of PCD patients were systematically screened for existence of significant lateral spinal deviation using the Cobb angle. Positive values represented right-sided convexity. Curve convexity and Cobb angles were compared between PCD patients with situs inversus and normal anatomy. A total of 198 PCD patients were screened. The prevalence of scoliosis (Cobb >10°) and significant spinal asymmetry (Cobb 5-10°) was 8 and 23%, respectively. Curve convexity and Cobb angle were significantly different within both groups between situs inversus patients and patients with normal anatomy (P ≤ 0.009). Moreover, curve convexity correlated significantly with organ orientation (P < 0.001; ϕ = 0.882): In 16 PCD patients with scoliosis (8 situs inversus and 8 normal anatomy), except for one case, matching of curve convexity and orientation of organ anatomy was observed: convexity of the curve was opposite to organ orientation. This study supports our hypothesis on the correlation between organ anatomy and curve convexity in scoliosis: the convexity of the thoracic curve is predominantly to the right in PCD patients that were 'randomized' to normal organ anatomy and to the left in patients with situs inversus totalis.

  20. Use of Convexity in Ostomy Care

    PubMed Central

    Salvadalena, Ginger; Pridham, Sue; Droste, Werner; McNichol, Laurie; Gray, Mikel

    2017-01-01

    Ostomy skin barriers that incorporate a convexity feature have been available in the marketplace for decades, but limited resources are available to guide clinicians in selection and use of convex products. Given the widespread use of convexity, and the need to provide practical guidelines for appropriate use of pouching systems with convex features, an international consensus panel was convened to provide consensus-based guidance for this aspect of ostomy practice. Panelists were provided with a summary of relevant literature in advance of the meeting; these articles were used to generate and reach consensus on 26 statements during a 1-day meeting. Consensus was achieved when 80% of panelists agreed on a statement using an anonymous electronic response system. The 26 statements provide guidance for convex product characteristics, patient assessment, convexity use, and outcomes. PMID:28002174

  1. Non-convex optimization for self-calibration of direction-dependent effects in radio interferometric imaging

    NASA Astrophysics Data System (ADS)

    Repetti, Audrey; Birdi, Jasleen; Dabbech, Arwa; Wiaux, Yves

    2017-10-01

    Radio interferometric imaging aims to estimate an unknown sky intensity image from degraded observations, acquired through an antenna array. In the theoretical case of a perfectly calibrated array, it has been shown that solving the corresponding imaging problem by iterative algorithms based on convex optimization and compressive sensing theory can be competitive with classical algorithms such as clean. However, in practice, antenna-based gains are unknown and have to be calibrated. Future radio telescopes, such as the Square Kilometre Array, aim at improving imaging resolution and sensitivity by orders of magnitude. At this precision level, the direction-dependency of the gains must be accounted for, and radio interferometric imaging can be understood as a blind deconvolution problem. In this context, the underlying minimization problem is non-convex, and adapted techniques have to be designed. In this work, leveraging recent developments in non-convex optimization, we propose the first joint calibration and imaging method in radio interferometry, with proven convergence guarantees. Our approach, based on a block-coordinate forward-backward algorithm, jointly accounts for visibilities and suitable priors on both the image and the direction-dependent effects (DDEs). As demonstrated in recent works, sparsity remains the prior of choice for the image, while DDEs are modelled as smooth functions of the sky, i.e. spatially band-limited. Finally, we show through simulations the efficiency of our method, for the reconstruction of both images of point sources and complex extended sources. matlab code is available on GitHub.

  2. The nucleolus is well-posed

    NASA Astrophysics Data System (ADS)

    Fragnelli, Vito; Patrone, Fioravante; Torre, Anna

    2006-02-01

    The lexicographic order is not representable by a real-valued function, contrary to many other orders or preorders. So, standard tools and results for well-posed minimum problems cannot be used. We prove that under suitable hypotheses it is however possible to guarantee the well-posedness of a lexicographic minimum over a compact or convex set. This result allows us to prove that some game theoretical solution concepts, based on lexicographic order are well-posed: in particular, this is true for the nucleolus.

  3. Higher order solution of the Euler equations on unstructured grids using quadratic reconstruction

    NASA Technical Reports Server (NTRS)

    Barth, Timothy J.; Frederickson, Paul O.

    1990-01-01

    High order accurate finite-volume schemes for solving the Euler equations of gasdynamics are developed. Central to the development of these methods are the construction of a k-exact reconstruction operator given cell-averaged quantities and the use of high order flux quadrature formulas. General polygonal control volumes (with curved boundary edges) are considered. The formulations presented make no explicit assumption as to complexity or convexity of control volumes. Numerical examples are presented for Ringleb flow to validate the methodology.

  4. Geometric convex cone volume analysis

    NASA Astrophysics Data System (ADS)

    Li, Hsiao-Chi; Chang, Chein-I.

    2016-05-01

    Convexity is a major concept used to design and develop endmember finding algorithms (EFAs). For abundance-unconstrained techniques, the Pixel Purity Index (PPI) and the Automatic Target Generation Process (ATGP), which use Orthogonal Projection (OP) as a criterion, are commonly used methods. For abundance partially constrained techniques, Convex Cone Analysis is generally preferred, which makes use of convex cones to impose the Abundance Non-negativity Constraint (ANC). For abundance fully constrained techniques, N-FINDR and the Simplex Growing Algorithm (SGA) are the most popular methods, which use simplex volume as a criterion to impose the ANC and the Abundance Sum-to-one Constraint (ASC). This paper analyzes an issue encountered in volume calculation, with a hyperplane introduced to illustrate the idea of a bounded convex cone. Geometric Convex Cone Volume Analysis (GCCVA) projects the boundary vectors of a convex cone orthogonally onto a hyperplane to reduce the effect of background signatures, and a geometric volume approach is applied to address the issue arising from volume calculation and to further improve the performance of convex-cone-based EFAs.
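
    The simplex-volume criterion mentioned for N-FINDR and SGA reduces to a determinant formula: the volume of a k-simplex with vertices e_0, ..., e_k is sqrt(det(G))/k!, where G is the Gram matrix of the edge vectors e_i - e_0. A small illustrative helper (not the paper's code):

      # Volume of a k-simplex embedded in R^n via the Gram determinant.
      import numpy as np
      from math import factorial

      def simplex_volume(vertices):
          """vertices: (k+1, n) array; returns the k-dimensional volume."""
          v = np.asarray(vertices, float)
          edges = v[1:] - v[0]            # (k, n) edge vectors
          gram = edges @ edges.T          # Gram matrix handles k < n
          return np.sqrt(max(np.linalg.det(gram), 0.0)) / factorial(len(edges))

      # equilateral triangle in R^3 with side sqrt(2): area = sqrt(3)/2
      tri = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
      print(simplex_volume(tri))  # ~0.866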

  5. Revisiting separation properties of convex fuzzy sets

    USDA-ARS?s Scientific Manuscript database

    Separation of convex sets by hyperplanes has been extensively studied for crisp sets. In a seminal paper, separability and convexity are investigated; however, there is a flaw in the definition of the degree of separation. We revisited separation on convex fuzzy sets that have level-wise (crisp) disjointne...

  6. An extended UTD analysis for the scattering and diffraction from cubic polynomial strips

    NASA Technical Reports Server (NTRS)

    Constantinides, E. D.; Marhefka, R. J.

    1993-01-01

    Spline and polynomial type surfaces are commonly used in high frequency modeling of complex structures such as aircraft, ships, reflectors, etc. It is therefore of interest to develop an efficient and accurate solution to describe the scattered fields from such surfaces. An extended Uniform Geometrical Theory of Diffraction (UTD) solution for the scattering and diffraction from perfectly conducting cubic polynomial strips is derived and involves the incomplete Airy integrals as canonical functions. This new solution is universal in nature and can be used to effectively describe the scattered fields from flat, strictly concave or convex, and concave-convex boundaries containing edges. The classic UTD solution fails to describe the more complicated field behavior associated with higher order phase catastrophes, and therefore a new set of uniform reflection and first-order edge diffraction coefficients is derived. Also, an additional diffraction coefficient associated with a zero-curvature (inflection) point is presented. Higher order effects such as double edge diffraction, creeping waves, and whispering gallery modes are not examined. The extended UTD solution is independent of the scatterer size and also provides useful physical insight into the various scattering and diffraction processes. Its accuracy is confirmed via comparison with some reference moment method results.

  7. Use of Convexity in Ostomy Care: Results of an International Consensus Meeting.

    PubMed

    Hoeflok, Jo; Salvadalena, Ginger; Pridham, Sue; Droste, Werner; McNichol, Laurie; Gray, Mikel

    Ostomy skin barriers that incorporate a convexity feature have been available in the marketplace for decades, but limited resources are available to guide clinicians in selection and use of convex products. Given the widespread use of convexity, and the need to provide practical guidelines for appropriate use of pouching systems with convex features, an international consensus panel was convened to provide consensus-based guidance for this aspect of ostomy practice. Panelists were provided with a summary of relevant literature in advance of the meeting; these articles were used to generate and reach consensus on 26 statements during a 1-day meeting. Consensus was achieved when 80% of panelists agreed on a statement using an anonymous electronic response system. The 26 statements provide guidance for convex product characteristics, patient assessment, convexity use, and outcomes.

  8. Detection of Convexity and Concavity in Context

    ERIC Educational Resources Information Center

    Bertamini, Marco

    2008-01-01

    Sensitivity to shape changes was measured, in particular detection of convexity and concavity changes. The available data are contradictory. The author used a change detection task and simple polygons to systematically manipulate convexity/concavity. Performance was high for detecting a change of sign (a new concave vertex along a convex contour…

  9. Operand-order effect in multiplication and addition: the long-term effects of reorganization process and acquisition sequence.

    PubMed

    Didino, Daniele; Lombardi, Luigi; Vespignani, Francesco

    2014-01-01

    Butterworth, Marchesini, and Girelli (2003) showed that children solved multiplications faster when the larger operand was first (e.g., 5 · 2) than when the smaller operand was first (e.g., 2 · 5). This result was interpreted according to the reorganization hypothesis, which states that, as children begin to switch from counting-based strategies (e.g., repeated additions) to direct retrieval, non-retrieval strategies generate an advantage for the larger-operand-first order. In two experiments we showed that order preferences also persist into adulthood. With additions, the larger-operand-first order was solved faster than the inverse order. With multiplications we obtained a novel result: larger-operand-first problems were solved faster when at least one operand was smaller than 5, whereas smaller-operand-first problems were solved faster when both operands were larger than 5. Since the reorganization process alone cannot explain our results, we propose that order preferences are also influenced by the sequence in which the members of a commuted pair are acquired.

  10. Airfoil

    DOEpatents

    Ristau, Neil; Siden, Gunnar Leif

    2015-07-21

    An airfoil includes a leading edge, a trailing edge downstream from the leading edge, a pressure surface between the leading and trailing edges, and a suction surface between the leading and trailing edges and opposite the pressure surface. A first convex section on the suction surface decreases in curvature downstream from the leading edge, and a throat on the suction surface is downstream from the first convex section. A second convex section is on the suction surface downstream from the throat, and a first convex segment of the second convex section increases in curvature.

  11. Wave maps from Gödel's universe

    NASA Astrophysics Data System (ADS)

    Barletta, Elisabetta; Dragomir, Sorin; Magliaro, Marco

    2014-10-01

    Using a result by Koch (1988 Trans. Am. Math. Soc. 307 827-41) we realize Gödel's universe $G_\alpha^4 = (\mathbb{R}^4, g_\alpha)$ as the total space of a principal $\mathbb{R}$-bundle over a strictly pseudo-convex CR manifold $M^3$ and exploit the analogy between $g_\alpha$ and Fefferman's metric $F_\theta$ (Fefferman 1976 Ann. Math. 103 395-416; 104 393-4) to show that for any $\mathbb{R}$-invariant wave map $\Phi$ of $G_\alpha^4$ into a Riemannian manifold $N$, the corresponding base map $\varphi : M^3 \to N$ is subelliptic harmonic, with respect to a canonical choice of contact form $\theta$ on $M^3$. We show that the subelliptic Jacobi operator $J_b^\varphi$ of $\varphi$ has a discrete Dirichlet spectrum on any bounded domain $D \subset M^3$ supporting the Poincaré inequality on $\mathring{W}_H^{1,2}(D, \varphi^{-1}TN)$ and Kondrakov compactness, i.e. compactness of the embedding $\mathring{W}_H^{1,2}(D, \varphi^{-1}TN) \hookrightarrow L^2(D, \varphi^{-1}TN)$. We exhibit an explicit solution $\pi : G_\alpha^4 \to M^3$ to the wave map system on $G_\alpha^4$, of index $\mathrm{ind}^\Omega(\pi) \geqslant 1$ for any bounded domain $\Omega \subset G_\alpha^4$. Mounoud's distance (Mounoud 2001 Differ. Geom. Appl. 15 47-57) $d_{G_0,\Omega}^\infty(g_\alpha, F_\theta)$ is bounded below by a constant depending only on the rotation frequency of Gödel's universe, thus giving a measure of the bias of $g_\alpha$ from being Fefferman-like in the region $\Omega \subset \mathbb{R}^4$.

  12. Hermite-Hadamard type inequality for φ_h-convex stochastic processes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sarıkaya, Mehmet Zeki, E-mail: sarikayamz@gmail.com; Kiriş, Mehmet Eyüp, E-mail: kiris@aku.edu.tr; Çelik, Nuri, E-mail: ncelik@bartin.edu.tr

    2016-04-18

    The main aim of the present paper is to introduce φ_h-convex stochastic processes and we investigate main properties of these mappings. Moreover, we prove the Hadamard-type inequalities for φ_h-convex stochastic processes. We also give some new general inequalities for φ_h-convex stochastic processes.

  13. Building fast well-balanced two-stage numerical schemes for a model of two-phase flows

    NASA Astrophysics Data System (ADS)

    Thanh, Mai Duc

    2014-06-01

    We present a set of well-balanced two-stage schemes for an isentropic model of two-phase flows arising from the modeling of deflagration-to-detonation transition in granular materials. The first stage absorbs the source term in nonconservative form into the equilibria. In the second stage, these equilibria are composed into a numerical flux formed by a convex combination of the numerical flux of a stable Lax-Friedrichs-type scheme and that of a higher-order Richtmyer-type scheme. Numerical schemes constructed in this way are expected to have an attractive property: they are fast and stable. Tests show that the method remains stable for parameter values up to CFL, so any value of the parameter between zero and CFL is expected to work as well. All the schemes in this family are shown to capture stationary waves and preserve the positivity of the volume fractions. The special values of the parameter 0, 1/2, 1/(1+CFL), and CFL in this family define the Lax-Friedrichs-type, FAST1, FAST2, and FAST3 schemes, respectively. These schemes are shown to give a desirable accuracy. The errors and the CPU time of these schemes and the Roe-type scheme are calculated and compared. The constructed schemes are shown to be well-balanced and faster than the Roe-type scheme.

  14. A Bayesian observer replicates convexity context effects in figure-ground perception.

    PubMed

    Goldreich, Daniel; Peterson, Mary A

    2012-01-01

    Peterson and Salvagio (2008) demonstrated convexity context effects in figure-ground perception. Subjects shown displays consisting of unfamiliar alternating convex and concave regions identified the convex regions as foreground objects progressively more frequently as the number of regions increased; this occurred only when the concave regions were homogeneously colored. The origins of these effects have been unclear. Here, we present a two-free-parameter Bayesian observer that replicates convexity context effects. The Bayesian observer incorporates two plausible expectations regarding three-dimensional scenes: (1) objects tend to be convex rather than concave, and (2) backgrounds tend (more than foreground objects) to be homogeneously colored. The Bayesian observer estimates the probability that a depicted scene is three-dimensional, and that the convex regions are figures. It responds stochastically by sampling from its posterior distributions. Like human observers, the Bayesian observer shows convexity context effects only for images with homogeneously colored concave regions. With optimal parameter settings, it performs similarly to the average human subject on the four display types tested. We propose that object convexity and background color homogeneity are environmental regularities exploited by human visual perception; vision achieves figure-ground perception by interpreting ambiguous images in light of these and other expected regularities in natural scenes.

  15. Enhancements on the Convex Programming Based Powered Descent Guidance Algorithm for Mars Landing

    NASA Technical Reports Server (NTRS)

    Acikmese, Behcet; Blackmore, Lars; Scharf, Daniel P.; Wolf, Aron

    2008-01-01

    In this paper, we present enhancements on the powered descent guidance algorithm developed for Mars pinpoint landing. The guidance algorithm solves the powered descent minimum fuel trajectory optimization problem via a direct numerical method. Our main contribution is to formulate the trajectory optimization problem, which has nonconvex control constraints, as a finite dimensional convex optimization problem, specifically as a finite dimensional second order cone programming (SOCP) problem. SOCP is a subclass of convex programming, and there are efficient SOCP solvers with deterministic convergence properties. Hence, the resulting guidance algorithm can potentially be implemented onboard a spacecraft for real-time applications. Particularly, this paper discusses the algorithmic improvements obtained by: (i) Using an efficient approach to choose the optimal time-of-flight; (ii) Using a computationally inexpensive way to detect the feasibility/infeasibility of the problem due to the thrust-to-weight constraint; (iii) Incorporating the rotation rate of the planet into the problem formulation; (iv) Developing additional constraints on the position and velocity to guarantee no-subsurface flight between the time samples of the temporal discretization; (v) Developing a fuel-limited targeting algorithm; (vi) Initial results on developing an onboard table lookup method to obtain almost fuel optimal solutions in real-time.
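
    As a concrete illustration of the convexification described above, the sketch below poses a discretized minimum-fuel descent as a second order cone program in CVXPY, with the nonconvex thrust-magnitude bound relaxed through a slack variable. This is a minimal sketch, not the flight algorithm: gravity is constant, the time of flight is fixed, all numbers are made up, and the paper's enhancements (planet rotation, inter-sample subsurface constraints, time-of-flight search, table lookup) are omitted.

        # Minimal SOCP sketch of lossless-convexification powered descent
        # (illustrative only; all problem data below are made-up values).
        import cvxpy as cp
        import numpy as np

        N, dt = 50, 1.0                      # discretization steps, step [s]
        g = np.array([0.0, 0.0, -3.71])      # constant Mars gravity [m/s^2]
        rho1, rho2 = 4.0, 12.0               # thrust acceleration bounds

        r = cp.Variable((3, N + 1))          # position
        v = cp.Variable((3, N + 1))          # velocity
        u = cp.Variable((3, N))              # thrust acceleration
        G = cp.Variable(N)                   # slack for thrust magnitude

        cons = [r[:, 0] == np.array([2000.0, 500.0, 1500.0]),
                v[:, 0] == np.array([-30.0, 10.0, -60.0]),
                r[:, N] == 0.0, v[:, N] == 0.0]
        for k in range(N):
            cons += [v[:, k + 1] == v[:, k] + dt * (u[:, k] + g),
                     r[:, k + 1] == r[:, k] + dt * v[:, k]
                                  + 0.5 * dt**2 * (u[:, k] + g),
                     cp.norm(u[:, k]) <= G[k],   # relaxed (convex) bound
                     G[k] >= rho1, G[k] <= rho2,
                     r[2, k] >= 0.0]             # altitude at sample points

        prob = cp.Problem(cp.Minimize(cp.sum(G) * dt), cons)
        prob.solve()
        print(prob.status, prob.value)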

  16. FLASH_SSF_Aqua-FM3-MODIS_Version3C

    Atmospheric Science Data Center

    2018-04-04

    Data are available through the CERES Order Tool (netCDF) and the CERES Search and Subset Tool (HDF4 & netCDF), and can be ordered via Earthdata Search. Parameters include cloud layer area, cloud infrared emissivity, cloud base pressure, surface (radiative) flux, TOA flux, surface types, and SW/LW filtered radiances.

  17. FLASH_SSF_Terra-FM1-MODIS_Version3C

    Atmospheric Science Data Center

    2018-04-04

    Data are available through the CERES Order Tool (netCDF) and the CERES Search and Subset Tool (HDF4 & netCDF), and can be ordered via Earthdata Search. Parameters include cloud layer area, cloud infrared emissivity, cloud base pressure, surface (radiative) flux, TOA flux, surface types, and SW/LW filtered radiances.

  18. CONSTRUCTION OF SCALAR AND VECTOR FINITE ELEMENT FAMILIES ON POLYGONAL AND POLYHEDRAL MESHES

    PubMed Central

    GILLETTE, ANDREW; RAND, ALEXANDER; BAJAJ, CHANDRAJIT

    2016-01-01

    We combine theoretical results from polytope domain meshing, generalized barycentric coordinates, and finite element exterior calculus to construct scalar- and vector-valued basis functions for conforming finite element methods on generic convex polytope meshes in dimensions 2 and 3. Our construction recovers well-known bases for the lowest order Nédélec, Raviart-Thomas, and Brezzi-Douglas-Marini elements on simplicial meshes and generalizes the notion of Whitney forms to non-simplicial convex polygons and polyhedra. We show that our basis functions lie in the correct function space with regards to global continuity and that they reproduce the requisite polynomial differential forms described by finite element exterior calculus. We present a method to count the number of basis functions required to ensure these two key properties. PMID:28077939

  19. CONSTRUCTION OF SCALAR AND VECTOR FINITE ELEMENT FAMILIES ON POLYGONAL AND POLYHEDRAL MESHES.

    PubMed

    Gillette, Andrew; Rand, Alexander; Bajaj, Chandrajit

    2016-10-01

    We combine theoretical results from polytope domain meshing, generalized barycentric coordinates, and finite element exterior calculus to construct scalar- and vector-valued basis functions for conforming finite element methods on generic convex polytope meshes in dimensions 2 and 3. Our construction recovers well-known bases for the lowest order Nédélec, Raviart-Thomas, and Brezzi-Douglas-Marini elements on simplicial meshes and generalizes the notion of Whitney forms to non-simplicial convex polygons and polyhedra. We show that our basis functions lie in the correct function space with regards to global continuity and that they reproduce the requisite polynomial differential forms described by finite element exterior calculus. We present a method to count the number of basis functions required to ensure these two key properties.

  20. A Subspace Semi-Definite programming-based Underestimation (SSDU) method for stochastic global optimization in protein docking*

    PubMed Central

    Nan, Feng; Moghadasi, Mohammad; Vakili, Pirooz; Vajda, Sandor; Kozakov, Dima; Ch. Paschalidis, Ioannis

    2015-01-01

    We propose a new stochastic global optimization method targeting protein docking problems. The method is based on finding a general convex polynomial underestimator to the binding energy function in a permissive subspace that possesses a funnel-like structure. We use Principal Component Analysis (PCA) to determine such permissive subspaces. The problem of finding the general convex polynomial underestimator is reduced to the problem of ensuring that a certain polynomial is a Sum-of-Squares (SOS), which can be done via semi-definite programming. The underestimator is then used to bias sampling of the energy function in order to recover a deep minimum. We show that the proposed method significantly improves the quality of docked conformations compared to existing methods. PMID:25914440
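
    The reduction from "this polynomial is a sum of squares" to semidefinite feasibility can be seen in a toy univariate case. The sketch below (a hypothetical quartic, not the docking energy model of the paper) searches for a positive semidefinite Gram matrix with CVXPY; feasibility certifies the SOS property.

        # Toy SOS check: is p(x) = x^4 + a*x^2 + 1 a sum of squares?
        # p is SOS iff p(x) = z(x)^T Q z(x) for some PSD Q, z = [1, x, x^2].
        import cvxpy as cp

        a = -1.0                              # hypothetical coefficient
        Q = cp.Variable((3, 3), PSD=True)     # Gram matrix in basis [1, x, x^2]
        cons = [Q[0, 0] == 1,                 # constant term of p
                2 * Q[0, 1] == 0,             # x term
                2 * Q[0, 2] + Q[1, 1] == a,   # x^2 term
                2 * Q[1, 2] == 0,             # x^3 term
                Q[2, 2] == 1]                 # x^4 term
        prob = cp.Problem(cp.Minimize(0), cons)
        prob.solve()
        print(prob.status)                    # 'optimal' means p is SOS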

  1. Photovoltaic Inverter Controllers Seeking AC Optimal Power Flow Solutions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dall'Anese, Emiliano; Dhople, Sairaj V.; Giannakis, Georgios B.

    This paper considers future distribution networks featuring inverter-interfaced photovoltaic (PV) systems, and addresses the synthesis of feedback controllers that seek real- and reactive-power inverter setpoints corresponding to AC optimal power flow (OPF) solutions. The objective is to bridge the temporal gap between long-term system optimization and real-time inverter control, and enable seamless PV-owner participation without compromising system efficiency and stability. The design of the controllers is grounded on a dual ε-subgradient method, while semidefinite programming relaxations are advocated to bypass the non-convexity of AC OPF formulations. Global convergence of inverter output powers is analytically established for diminishing stepsize rules for cases where: i) computational limits dictate asynchronous updates of the controller signals, and ii) inverter reference inputs may be updated at a faster rate than the power-output settling time.

  2. An exact general remeshing scheme applied to physically conservative voxelization

    DOE PAGES

    Powell, Devon; Abel, Tom

    2015-05-21

    We present an exact general remeshing scheme to compute analytic integrals of polynomial functions over the intersections between convex polyhedral cells of old and new meshes. In physics applications this allows one to ensure global mass, momentum, and energy conservation while applying higher-order polynomial interpolation. We elaborate on applications of our algorithm arising in the analysis of cosmological N-body data, computer graphics, and continuum mechanics problems. We focus on the particular case of remeshing tetrahedral cells onto a Cartesian grid such that the volume integral of the polynomial density function given on the input mesh is guaranteed to equal the corresponding integral over the output mesh. We refer to this as “physically conservative voxelization.” At the core of our method is an algorithm for intersecting two convex polyhedra by successively clipping one against the faces of the other. This algorithm is an implementation of the ideas presented abstractly by Sugihara [48], who suggests using the planar graph representations of convex polyhedra to ensure topological consistency of the output. This makes our implementation robust to geometric degeneracy in the input. We employ a simplicial decomposition to calculate moment integrals up to quadratic order over the resulting intersection domain. We also address practical issues arising in a software implementation, including numerical stability in geometric calculations, management of cancellation errors, and extension to two dimensions. In a comparison to recent work, we show substantial performance gains. We provide a C implementation intended to be a fast, accurate, and robust tool for geometric calculations on polyhedral mesh elements.
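
    The successive-clipping idea at the core of the method is easiest to see in two dimensions. The sketch below clips a convex polygon against one half-plane at a time; it is a simplified analogue only, since the paper's implementation works on planar-graph representations of 3D polyhedra and handles floating-point degeneracies, both omitted here.

        # 2-D analogue of polyhedron-polyhedron clipping: intersect two
        # convex polygons by clipping one against each edge half-plane of
        # the other (vertex lists assumed counter-clockwise).
        import numpy as np

        def clip_halfplane(poly, n, d):
            """Keep the part of convex polygon `poly` where n.x + d >= 0."""
            out = []
            for i in range(len(poly)):
                p, q = poly[i], poly[(i + 1) % len(poly)]
                sp, sq = n @ p + d, n @ q + d
                if sp >= 0:
                    out.append(p)                    # p is on the kept side
                if sp * sq < 0:                      # edge crosses the boundary
                    out.append(p + (q - p) * sp / (sp - sq))
            return out

        def intersect_convex(poly_a, poly_b):
            """Clip poly_a successively against the edges of poly_b."""
            poly = [np.asarray(p, float) for p in poly_a]
            m = len(poly_b)
            for i in range(m):
                v0 = np.asarray(poly_b[i], float)
                e = np.asarray(poly_b[(i + 1) % m], float) - v0
                n = np.array([-e[1], e[0]])          # inward normal (CCW)
                poly = clip_halfplane(poly, n, -n @ v0)
                if not poly:
                    break
            return poly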

  3. A Novel Method of Aircraft Detection Based on High-Resolution Panchromatic Optical Remote Sensing Images

    PubMed Central

    Wang, Wensheng; Nie, Ting; Fu, Tianjiao; Ren, Jianyue; Jin, Longxu

    2017-01-01

    In target detection of optical remote sensing images, two main obstacles for aircraft target detection are how to extract candidates in complex, multi-gray-level backgrounds and how to confirm targets whose shapes are deformed, irregular or asymmetric, whether caused by natural conditions (low signal-to-noise ratio, illumination conditions or swaying during photographing) or by occlusion by surrounding objects (boarding bridges, equipment). To solve these issues, an improved active contours algorithm, namely region-scalable fitting energy based threshold (TRSF), and a corner-convex hull based segmentation algorithm (CCHS) are proposed in this paper. Firstly, the maximal between-cluster variance algorithm (Otsu's algorithm) and the region-scalable fitting energy (RSF) algorithm are combined to address the difficulty of target extraction in complex, multi-gray-level backgrounds. Secondly, based on their inherent shapes and prominent corners, aircraft are divided into five fragments by utilizing convex hulls and Harris corner points. Furthermore, a series of new structural features, which describe the proportion of the target part in each fragment and the proportion of each fragment in the whole hull, are defined to judge whether the targets are true or not. Experimental results show that the TRSF algorithm improves extraction accuracy in complex backgrounds, and that it is faster than some traditional active contour algorithms. The CCHS is effective in suppressing the detection difficulties caused by irregular shapes. PMID:28481260

  4. Impact of cell shape in hierarchically structured plant surfaces on the attachment of male Colorado potato beetles (Leptinotarsa decemlineata)

    PubMed Central

    Seidel, Robin; Bohn, Holger Florian; Speck, Thomas

    2012-01-01

    Plant surfaces showing hierarchical structuring are frequently found in plant organs such as leaves, petals, fruits and stems. In our study we focus on the level of cell shape and on the level of superimposed microstructuring, leading to hierarchical surfaces if both levels are present. While it has been shown that epicuticular wax crystals and cuticular folds strongly reduce insect attachment, and that smooth papillate epidermal cells in petals improve the grip of pollinators, the impact of hierarchical surface structuring of plant surfaces possessing convex or papillate cells on insect attachment remains unclear. We performed traction experiments with male Colorado potato beetles on nine different plant surfaces with different structures. The selected plant surfaces showed epidermal cells with either tabular, convex or papillate cell shape, covered either with flat films of wax, epicuticular wax crystals or with cuticular folds. On surfaces possessing either superimposed wax crystals or cuticular folds we found traction forces to be almost one order of magnitude lower than on surfaces covered only with flat films of wax. Independent of superimposed microstructures we found that convex and papillate epidermal cell shapes slightly enhance the attachment ability of the beetles. Thus, in plant surfaces, cell shape and superimposed microstructuring yield contrary effects on the attachment of the Colorado potato beetle, with convex or papillate cells enhancing attachment and both wax crystals or cuticular folds reducing attachment. However, the overall magnitude of traction force mainly depends on the presence or absence of superimposed microstructuring. PMID:22428097

  5. The role of convexity in perception of symmetry and in visual short-term memory.

    PubMed

    Bertamini, Marco; Helmy, Mai Salah; Hulleman, Johan

    2013-01-01

    Visual perception of shape is affected by coding of local convexities and concavities. For instance, a recent study reported that deviations from symmetry carried by convexities were easier to detect than deviations carried by concavities. We removed some confounds and extended this work from a detection of reflection of a contour (i.e., bilateral symmetry), to a detection of repetition of a contour (i.e., translational symmetry). We tested whether any convexity advantage is specific to bilateral symmetry in a two-interval (Experiment 1) and a single-interval (Experiment 2) detection task. In both, we found a convexity advantage only for repetition. When we removed the need to choose which region of the contour to monitor (Experiment 3) the effect disappeared. In a second series of studies, we again used shapes with multiple convex or concave features. Participants performed a change detection task in which only one of the features could change. We did not find any evidence that convexities are special in visual short-term memory, when the to-be-remembered features only changed shape (Experiment 4), when they changed shape and changed from concave to convex and vice versa (Experiment 5), or when these conditions were mixed (Experiment 6). We did find a small advantage for coding convexity as well as concavity over an isolated (and thus ambiguous) contour. The latter is consistent with the known effect of closure on processing of shape. We conclude that convexity plays a role in many perceptual tasks but that it does not have a basic encoding advantage over concavity.

  6. Trajectory Design Employing Convex Optimization for Landing on Irregularly Shaped Asteroids

    NASA Technical Reports Server (NTRS)

    Pinson, Robin M.; Lu, Ping

    2016-01-01

    Mission proposals that land on asteroids are becoming popular. However, in order to have a successful mission the spacecraft must reliably and softly land at the intended landing site. The problem under investigation is how to design a fuel-optimal powered descent trajectory that can be quickly computed on-board the spacecraft, without interaction from ground control. An optimal trajectory designed immediately prior to the descent burn has many advantages. These advantages include the ability to use the actual vehicle starting state as the initial condition in the trajectory design and the ease of updating the landing target site if the original landing site is no longer viable. For long trajectories, the trajectory can be updated periodically by a redesign of the optimal trajectory based on current vehicle conditions to improve the guidance performance. One of the key drivers for being completely autonomous is the infrequent and delayed communication between ground control and the vehicle. Challenges that arise from designing an asteroid powered descent trajectory include complicated nonlinear gravity fields, small rotating bodies and low thrust vehicles. There are two previous studies that form the background to the current investigation. The first set looked in-depth at applying convex optimization to a powered descent trajectory on Mars with promising results.1, 2 This showed that the powered descent equations of motion can be relaxed and formed into a convex optimization problem and that the optimal solution of the relaxed problem is indeed a feasible solution to the original problem. This analysis used a constant gravity field. The second area applied a successive solution process to formulate a second order cone program that designs rendezvous and proximity operations trajectories.3, 4 These trajectories included a Newtonian gravity model. The equivalence of the solutions between the relaxed and the original problem is theoretically established. The proposed solution for designing the asteroid powered descent trajectory is to use convex optimization, a gravity model with higher fidelity than Newtonian, and an iterative solution process to design the fuel optimal trajectory. The solution to the convex optimization problem is the thrust profile, magnitude and direction, that will yield the minimum fuel trajectory for a soft landing at the target site, subject to various mission and operational constraints. The equations of motion are formulated in a rotating coordinate system and includes a high fidelity gravity model. The vehicle's thrust magnitude can vary between maximum and minimum bounds during the burn. Also, constraints are included to ensure that the vehicle does not run out of propellant, or go below the asteroid's surface, and any vehicle pointing requirements. The equations of motion are discretized and propagated with the trapezoidal rule in order to produce equality constraints for the optimization problem. These equality constraints allow the optimization algorithm to solve the entire problem, without including a propagator inside the optimization algorithm.

  7. Efficient operator splitting algorithm for joint sparsity-regularized SPIRiT-based parallel MR imaging reconstruction.

    PubMed

    Duan, Jizhong; Liu, Yu; Jing, Peiguang

    2018-02-01

    Self-consistent parallel imaging (SPIRiT) is an auto-calibrating model for the reconstruction of parallel magnetic resonance imaging, which can be formulated as a regularized SPIRiT problem. The Projection Onto Convex Sets (POCS) method has been used to solve the formulated regularized SPIRiT problem; however, the quality of the reconstructed image still needs improvement. Though methods such as NonLinear Conjugate Gradients (NLCG) can achieve higher spatial resolution, they demand very complex computation and converge slowly. In this paper, we propose a new algorithm to solve the formulated Cartesian SPIRiT problem with the JTV and JL1 regularization terms. The proposed algorithm uses the operator splitting (OS) technique to decompose the problem into a gradient problem and a denoising problem with two regularization terms, which is solved by our proposed split Bregman based denoising algorithm, and adopts the Barzilai and Borwein method to update the step size. Simulation experiments on two in vivo data sets demonstrate that the proposed algorithm is 1.3 times faster than ADMM for datasets with 8 channels. In particular, our proposal is 2 times faster than ADMM for the dataset with 32 channels. Copyright © 2017 Elsevier Inc. All rights reserved.
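
    For readers unfamiliar with the Barzilai-Borwein update mentioned above, the sketch below shows the step-size rule on a generic smooth objective. The SPIRiT data term and the split-Bregman denoising stage of the proposed algorithm are beyond a short example, so a caller-supplied gradient function stands in for them.

        # Minimal Barzilai-Borwein (BB1) step-size sketch on a generic
        # smooth objective; `grad` maps a 1-D vector to its gradient.
        import numpy as np

        def bb_gradient_descent(grad, x0, n_iter=100, alpha0=1.0):
            x = x0.astype(float)
            g = grad(x)
            alpha = alpha0
            for _ in range(n_iter):
                x_new = x - alpha * g
                g_new = grad(x_new)
                s, y = x_new - x, g_new - g
                denom = s @ y
                # BB1 step: alpha = <s, s> / <s, y>, guarded for safety.
                alpha = (s @ s) / denom if denom > 0 else alpha0
                x, g = x_new, g_new
            return x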

  8. ARK: Aggregation of Reads by K-Means for Estimation of Bacterial Community Composition.

    PubMed

    Koslicki, David; Chatterjee, Saikat; Shahrivar, Damon; Walker, Alan W; Francis, Suzanna C; Fraser, Louise J; Vehkaperä, Mikko; Lan, Yueheng; Corander, Jukka

    2015-01-01

    Estimation of bacterial community composition from high-throughput sequenced 16S rRNA gene amplicons is a key task in microbial ecology. Since the sequence data from each sample typically consist of a large number of reads and are adversely impacted by different levels of biological and technical noise, accurate analysis of such large datasets is challenging. There has been a recent surge of interest in using compressed sensing inspired and convex-optimization based methods to solve the estimation problem for bacterial community composition. These methods typically rely on summarizing the sequence data by frequencies of low-order k-mers and matching this information statistically with a taxonomically structured database. Here we show that the accuracy of the resulting community composition estimates can be substantially improved by aggregating the reads from a sample with an unsupervised machine learning approach prior to the estimation phase. The aggregation of reads is a pre-processing approach where we use a standard K-means clustering algorithm that partitions a large set of reads into subsets with reasonable computational cost to provide several vectors of first order statistics instead of only single statistical summarization in terms of k-mer frequencies. The output of the clustering is then processed further to obtain the final estimate for each sample. The resulting method is called Aggregation of Reads by K-means (ARK), and it is based on a statistical argument via mixture density formulation. ARK is found to improve the fidelity and robustness of several recently introduced methods, with only a modest increase in computational complexity. An open source, platform-independent implementation of the method in the Julia programming language is freely available at https://github.com/dkoslicki/ARK. A Matlab implementation is available at http://www.ee.kth.se/ctsoftware.
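
    The pre-processing step is simple to express. The sketch below (a hypothetical Python interface; the reference implementations are in Julia and Matlab) clusters reads by k-mer frequency with K-means and feeds each cluster's mean frequency vector, weighted by cluster size, to an existing composition estimator passed in as `estimator`.

        # Sketch of the ARK pre-processing step: aggregate reads by K-means
        # on k-mer frequencies before composition estimation.
        import numpy as np
        from itertools import product
        from sklearn.cluster import KMeans

        def kmer_freqs(read, k=4):
            """First-order statistics of a read: normalized k-mer counts."""
            kmers = ["".join(p) for p in product("ACGT", repeat=k)]
            index = {m: i for i, m in enumerate(kmers)}
            x = np.zeros(len(kmers))
            for i in range(len(read) - k + 1):
                j = index.get(read[i:i + k])
                if j is not None:
                    x[j] += 1
            return x / max(x.sum(), 1)

        def ark_estimate(reads, estimator, n_clusters=8, k=4):
            """Cluster reads, estimate per cluster, mix by cluster weight."""
            X = np.array([kmer_freqs(r, k) for r in reads])
            labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(X)
            est = 0.0
            for c in range(n_clusters):
                members = X[labels == c]
                if len(members) == 0:
                    continue
                w = len(members) / len(X)            # cluster weight
                est = est + w * estimator(members.mean(axis=0))
            return est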

  9. A simple smoothness indicator for the WENO scheme with adaptive order

    NASA Astrophysics Data System (ADS)

    Huang, Cong; Chen, Li Li

    2018-01-01

    The fifth order WENO scheme with adaptive order is competent for solving hyperbolic conservation laws; its reconstruction is a convex combination of a fifth order linear reconstruction and three third order linear reconstructions. Note that, on a uniform mesh, the computational cost of the smoothness indicator for the fifth order linear reconstruction is comparable with the sum of those for the three third order linear reconstructions, and is thus too heavy; on a non-uniform mesh, the explicit form of the smoothness indicator for the fifth order linear reconstruction is difficult to obtain, and its computational cost is much heavier than on a uniform mesh. In order to overcome these problems, a simple smoothness indicator for the fifth order linear reconstruction is proposed in this paper.
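
    The convex-combination structure is sketched below for a single cell interface, in the style of adaptive-order WENO. The paper's cheap fifth-order smoothness indicator is not reproduced here (its explicit form is the paper's contribution); the maximum of the classical Jiang-Shu sub-stencil indicators stands in for it, and the linear weights are arbitrary illustrative choices.

        # WENO-AO(5,3)-style reconstruction of u_{i+1/2} from the 5-point
        # stencil f = [f_{i-2}, ..., f_{i+2}] (illustrative sketch).
        import numpy as np

        def weno_ao_point(f, eps=1e-12, g_hi=0.85, g_lo=0.05):
            fm2, fm1, f0, fp1, fp2 = f
            # Third-order sub-stencil reconstructions at x_{i+1/2}.
            q = np.array([(2*fm2 - 7*fm1 + 11*f0) / 6.0,
                          (-fm1 + 5*f0 + 2*fp1) / 6.0,
                          (2*f0 + 5*fp1 - fp2) / 6.0])
            # Fifth-order reconstruction on the full stencil.
            q5 = (2*fm2 - 13*fm1 + 47*f0 + 27*fp1 - 3*fp2) / 60.0
            # Jiang-Shu smoothness indicators for the three sub-stencils.
            b = np.array([13/12*(fm2 - 2*fm1 + f0)**2
                          + 0.25*(fm2 - 4*fm1 + 3*f0)**2,
                          13/12*(fm1 - 2*f0 + fp1)**2
                          + 0.25*(fm1 - fp1)**2,
                          13/12*(f0 - 2*fp1 + fp2)**2
                          + 0.25*(3*f0 - 4*fp1 + fp2)**2])
            b5 = b.max()          # stand-in for the big-stencil indicator
            gam = np.array([g_lo, 1 - g_hi - 2*g_lo, g_lo])   # linear weights
            w = np.append(gam, g_hi) / (eps + np.append(b, b5))**2
            w /= w.sum()          # nonlinear convex weights (sum to one)
            # Rewriting q5 so that w == gamma recovers exactly 5th order.
            q5_new = (q5 - gam @ q) / g_hi
            return w[:3] @ q + w[3] * q5_new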

  10. Multifunctional cell therapeutics with plasmonic nanobubbles

    NASA Astrophysics Data System (ADS)

    Lukianova-Hleb, Ekaterina Y.; Kashinath, Shruti; Lapotko, Dmitri O.

    2012-03-01

    We report our new discovery of the nanophenomenon called plasmonic nanobubbles to devise faster, safer and more accurate ways of manipulating the components of human tissue grafts. The reported work facilitates future cell and gene therapies by allowing specific cell subsets to be positively or negatively selected for culture, genetic engineering or elimination. The technology will have application for a wide range of human tissues that can be used to treat a multiplicity of human diseases.

  11. First Evaluation of the New Thin Convex Probe Endobronchial Ultrasound Scope: A Human Ex Vivo Lung Study.

    PubMed

    Patel, Priya; Wada, Hironobu; Hu, Hsin-Pei; Hirohashi, Kentaro; Kato, Tatsuya; Ujiie, Hideki; Ahn, Jin Young; Lee, Daiyoon; Geddie, William; Yasufuku, Kazuhiro

    2017-04-01

    Endobronchial ultrasonography (EBUS)-guided transbronchial needle aspiration allows for sampling of mediastinal lymph nodes. The external diameter, rigidity, and angulation of the convex probe EBUS renders limited accessibility. This study compares the accessibility and transbronchial needle aspiration capability of the prototype thin convex probe EBUS against the convex probe EBUS in human ex vivo lungs rejected for transplant. The prototype thin convex probe EBUS (BF-Y0055; Olympus, Tokyo, Japan) with a thinner tip (5.9 mm), greater upward angle (170 degrees), and decreased forward oblique direction of view (20 degrees) was compared with the current convex probe EBUS (6.9-mm tip, 120 degrees, and 35 degrees, respectively). Accessibility and transbronchial needle aspiration capability was assessed in ex vivo human lungs declined for lung transplant. The distance of maximum reach and sustainable endoscopic limit were measured. Transbronchial needle aspiration capability was assessed using the prototype 25G aspiration needle in segmental lymph nodes. In all evaluated lungs (n = 5), the thin convex probe EBUS demonstrated greater reach and a higher success rate, averaging 22.1 mm greater maximum reach and 10.3 mm further endoscopic visibility range than convex probe EBUS, and could assess selectively almost all segmental bronchi (98% right, 91% left), demonstrating nearly twice the accessibility as the convex probe EBUS (48% right, 47% left). The prototype successfully enabled cytologic assessment of subsegmental lymph nodes with adequate quality using the dedicated 25G aspiration needle. Thin convex probe EBUS has greater accessibility to peripheral airways in human lungs and is capable of sampling segmental lymph nodes using the aspiration needle. That will allow for more precise assessment of N1 nodes and, possibly, intrapulmonary lesions normally inaccessible to the conventional convex probe EBUS. Copyright © 2017 The Society of Thoracic Surgeons. Published by Elsevier Inc. All rights reserved.

  12. Swim drink study: a randomised controlled trial of during-exercise rehydration and swimming performance.

    PubMed

    Briars, Graham L; Gordon, Gillian Suzanne; Lawrence, Andrew; Turner, Andrew; Perry, Sharon; Pillbrow, Dan; Walston, Florence Einstein; Molyneux, Paul

    2017-01-01

    To determine whether during-exercise rehydration improves swimming performance and whether sports drink or water have differential effects on performance. Randomised controlled multiple crossover trial. A UK competitive swimming club. 19 club-level competitive swimmers, median age (range) 13 (11-17) years. Subjects were scheduled to drink ad libitum commercial isotonic sports drink (3.9 g sugars and 0.13 g salt per 100 mL) or water (three sessions each) or no drink (six sessions) in the course of twelve 75 min training sessions, each of which was followed by a 30 min test set of ten 100 m maximum-effort freestyle sprints each starting at 3 min intervals. Times for the middle 50 m of each sprint measured using electronic timing equipment in a Federation Internationale de Natation (FINA)-compliant six-lane 25 m competition swimming pool. Software-generated individual random session order in sealed envelopes. Analysis subset of eight sessions randomly selected by software after data collection completed. Participants blind to drink allocation until session start. In the analysis data set of 1118 swims, there was no significant difference between swim times for drinking and not drinking nor between drinking water or a sports drink. Mean (SEM) 50 m time for no-drink swims was 38.077 (0.128) s and 38.105 (0.131) s for drink swims, p=0.701. Mean 50 m times were 38.031 (0.184) s for drinking sports drink and 38.182 (0.186) s for drinking water, p=0.073. Times after not drinking were 0.027 s faster than after drinking (95% CI 0.186 s faster to 0.113 s slower). Times after drinking sports drink were 0.151 s faster than after water (95% CI 0.309 s faster to 0.002 s slower). Mean (SEM) dehydration from exercise was 0.42 (0.11)%. Drinking water or sports drink over 105 min of sustained effort swimming training does not improve swimming performance. ISRCTN: 49860006.

  13. Ordered mapping of 3 alphoid DNA subsets on human chromosome 22

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Antonacci, R.; Baldini, A.; Archidiacono, N.

    1994-09-01

    Alpha satellite DNA consists of tandemly repeated monomers of 171 bp clustered in the centromeric region of primate chromosomes. Sequence divergence between subsets located in different human chromosomes is usually high enough to ensure chromosome-specific hybridization. Alphoid probes specific for almost every human chromosome have been reported. A single chromosome can carry different subsets of alphoid DNA and some alphoid subsets can be shared by different chromosomes. We report the physical order of three alphoid DNA subsets on human chromosome 22 determined by a combination of low and high resolution cytological mapping methods. Results visually demonstrate the presence of three distinct alphoid DNA domains at the centromeric region of chromosome 22. We have measured the interphase distances between the three probes in three-color FISH experiments. Statistical analysis of the results indicated the order of the subsets. Two color experiments on prometaphase chromosomes established the order of the three domains relative to the arms of chromosome 22 and confirmed the results obtained using interphase mapping. This demonstrates the applicability of interphase mapping for alpha satellite DNA ordering. However, in our experiments, interphase mapping did not provide any information about the relationship between extremities of the repeat arrays. This information was gained from extended chromatin hybridization. The extremities of two of the repeat arrays were seen to be almost overlapping whereas the third repeat array was clearly separated from the other two. Our data show the value of extended chromatin hybridization as a complement of other cytological techniques for high resolution mapping of repetitive DNA sequences.

  14. Nonexpansiveness of a linearized augmented Lagrangian operator for hierarchical convex optimization

    NASA Astrophysics Data System (ADS)

    Yamagishi, Masao; Yamada, Isao

    2017-04-01

    Hierarchical convex optimization concerns two-stage optimization problems: the first stage problem is a convex optimization; the second stage problem is the minimization of a convex function over the solution set of the first stage problem. For the hierarchical convex optimization, the hybrid steepest descent method (HSDM) can be applied, where the solution set of the first stage problem must be expressed as the fixed point set of a certain nonexpansive operator. In this paper, we propose a nonexpansive operator that yields a computationally efficient update when it is plugged into the HSDM. The proposed operator is inspired by the update of the linearized augmented Lagrangian method. It is applicable to characterize the solution set of recent sophisticated convex optimization problems found in the context of inverse problems, where the sum of multiple proximable convex functions involving linear operators must be minimized to incorporate preferable properties into the minimizers. For such a problem formulation, there has not yet been reported any nonexpansive operator that yields an update free from the inversions of linear operators in cases where it is utilized in the HSDM. Unlike previously known nonexpansive operators, the proposed operator yields an inversion-free update in such cases. As an application of the proposed operator plugged into the HSDM, we also present, in the context of the so-called superiorization, an algorithmic solution to a convex optimization problem over the generalized convex feasible set where the intersection of the hard constraints is not necessarily simple.

  15. Radius of convexity of a certain class of close-to-convex functions

    NASA Astrophysics Data System (ADS)

    Yahya, Abdullah; Soh, Shaharuddin Cik

    2017-11-01

    In the present paper, we consider and investigate a certain class of close-to-convex functions defined in the unit disk U = {z : |z| < 1}, namely functions f satisfying Re{e^{iα} z f'(z) / [f(z) - f(-z)]} > δ, where |α| < π, cos(α) > δ and 0 ≤ δ < 1. Furthermore, we obtain a preliminary bound for f'(z) and determine the radius of convexity.

  16. Convex Graph Invariants

    DTIC Science & Technology

    Chandrasekaran, Venkat; Parrilo, Pablo A.; Willsky, Alan S.

    2010-12-02

    In this paper we study convex graph invariants, which are graph invariants that are convex functions of the adjacency matrix of a graph. Some examples…

  17. Allometric relationships between traveltime channel networks, convex hulls, and convexity measures

    NASA Astrophysics Data System (ADS)

    Tay, Lea Tien; Sagar, B. S. Daya; Chuah, Hean Teik

    2006-06-01

    The channel network (S) is a nonconvex set, while its basin [C(S)] is convex. We remove open-end points of the channel connectivity network iteratively to generate a traveltime sequence of networks (Sn). The convex hulls of these traveltime networks provide an interesting topological quantity, which has not been noted thus far. We compute lengths of shrinking traveltime networks L(Sn) and areas of corresponding convex hulls C(Sn), the ratios of which provide convexity measures CM(Sn) of traveltime networks. A statistically significant scaling relationship is found for a model network in the form L(Sn) ~ A[C(Sn)]^0.57. From the plots of the lengths of these traveltime networks and the areas of their corresponding convex hulls as functions of convexity measures, new power law relations are derived. Such relations for a model network are CM(Sn) ~ ? and CM(Sn) ~ ?. In addition to the model study, these relations for networks derived from seven subbasins of the Cameron Highlands region of Peninsular Malaysia are provided. Further studies are needed on a large number of channel networks of distinct sizes and topologies to understand the relationships of these new exponents with other scaling exponents that define the scaling structure of river networks.

  18. Large aluminium convex mirror for the cryo-optical test of the Planck primary reflector

    NASA Astrophysics Data System (ADS)

    Gloesener, P.; Flébus, C.; Cola, M.; Roose, S.; Stockman, Y.; de Chambure, D.

    2017-11-01

    In the frame of the PLANCK mission telescope development, the changes in the reflector's surface figure error (SFE) with respect to the best-fit ellipsoid must be measured between 293 K and 50 K with 1 μm RMS accuracy. To achieve this, infrared interferometry has been selected and a dedicated thermo-mechanical set-up has been constructed. In order to realise the test set-up for this reflector, a large aluminium convex mirror with a radius of 19500 mm has been manufactured. The mirror has to operate in a cryogenic environment below 30 K, and must contribute less than 1 μm to the RMS WFE between room temperature and cryogenic temperature. This paper summarises the design, manufacturing and characterisation of this mirror, showing that it has fulfilled its requirements.

  19. A Fast Gradient Method for Nonnegative Sparse Regression With Self-Dictionary

    NASA Astrophysics Data System (ADS)

    Gillis, Nicolas; Luce, Robert

    2018-01-01

    A nonnegative matrix factorization (NMF) can be computed efficiently under the separability assumption, which asserts that all the columns of the given input data matrix belong to the cone generated by a (small) subset of them. The provably most robust methods to identify these conic basis columns are based on nonnegative sparse regression and self dictionaries, and require the solution of large-scale convex optimization problems. In this paper we study a particular nonnegative sparse regression model with self dictionary. As opposed to previously proposed models, this model yields a smooth optimization problem where the sparsity is enforced through linear constraints. We show that the Euclidean projection on the polyhedron defined by these constraints can be computed efficiently, and propose a fast gradient method to solve our model. We compare our algorithm with several state-of-the-art methods on synthetic data sets and real-world hyperspectral images.
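
    A minimal accelerated projected-gradient loop of the kind the paper builds on is sketched below. The paper's polyhedral constraint set, whose Euclidean projection couples off-diagonal entries of W to the diagonal, is replaced here by a simple box projection, so this is a structural illustration only, not the published algorithm.

        # Nesterov-accelerated projected gradient for min_W ||X - X W||_F^2
        # over a convex set (box projection as a stand-in for the paper's
        # polyhedron).
        import numpy as np

        def fast_gradient(X, n_iter=200):
            n = X.shape[1]
            L = np.linalg.norm(X, 2) ** 2      # Lipschitz constant (sigma_max^2)
            W = Y = np.zeros((n, n))
            t = 1.0
            for _ in range(n_iter):
                G = X.T @ (X @ Y - X)          # gradient of 0.5||X - XY||_F^2
                W_new = np.clip(Y - G / L, 0.0, 1.0)   # stand-in projection
                t_new = 0.5 * (1 + np.sqrt(1 + 4 * t * t))
                Y = W_new + (t - 1) / t_new * (W_new - W)   # momentum step
                W, t = W_new, t_new
            return W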

  20. CPU timing routines for a CONVEX C220 computer system

    NASA Technical Reports Server (NTRS)

    Bynum, Mary Ann

    1989-01-01

    The timing routines available on the CONVEX C220 computer system in the Structural Mechanics Division (SMD) at NASA Langley Research Center are examined. The function of the timing routines, the use of the timing routines in sequential, parallel, and vector code, and the interpretation of the results from the timing routines with respect to the CONVEX model of computing are described. The timing routines available on the SMD CONVEX fall into two groups. The first group includes standard timing routines generally available with UNIX 4.3 BSD operating systems, while the second group includes routines unique to the SMD CONVEX. The standard timing routines described in this report are /bin/csh time, /bin/time, etime, and ctime. The routines unique to the SMD CONVEX are getinfo, second, cputime, toc, and a parallel profiling package made up of palprof, palinit, and palsum.

  1. Contextual cueing in multiconjunction visual search is dependent on color- and configuration-based intertrial contingencies.

    PubMed

    Geyer, Thomas; Shi, Zhuanghua; Müller, Hermann J

    2010-06-01

    Three experiments examined memory-based guidance of visual search using a modified version of the contextual-cueing paradigm (Jiang & Chun, 2001). The target, if present, was a conjunction of color and orientation, with target (and distractor) features randomly varying across trials (multiconjunction search). Under these conditions, reaction times (RTs) were faster when all items in the display appeared at predictive ("old") relative to nonpredictive ("new") locations. However, this RT benefit was smaller compared to when only one set of items, namely that sharing the target's color (but not that in the alternative color) appeared in predictive arrangement. In all conditions, contextual cueing was reliable on both target-present and -absent trials and enhanced if a predictive display was preceded by a predictive (though differently arranged) display, rather than a nonpredictive display. These results suggest that (1) contextual cueing is confined to color subsets of items, that (2) retrieving contextual associations for one color subset of items can be impeded by associations formed within the alternative subset ("contextual interference"), and (3) that contextual cueing is modulated by intertrial priming.

  2. A second order derivative scheme based on Bregman algorithm class

    NASA Astrophysics Data System (ADS)

    Campagna, Rosanna; Crisci, Serena; Cuomo, Salvatore; Galletti, Ardelio; Marcellino, Livia

    2016-10-01

    Algorithms based on Bregman iterative regularization are known for efficiently solving convex constrained optimization problems. In this paper, we introduce a second order derivative scheme for this class of Bregman algorithms. Its convergence and stability properties are investigated by means of numerical evidence. Moreover, we apply the proposed scheme to an isotropic Total Variation (TV) problem arising in Magnetic Resonance Imaging (MRI) denoising. Experimental results confirm that our algorithm performs well in terms of denoising quality, effectiveness and robustness.

  3. General and mechanistic optimal relationships for tensile strength of doubly convex tablets under diametrical compression.

    PubMed

    Razavi, Sonia M; Gonzalez, Marcial; Cuitiño, Alberto M

    2015-04-30

    We propose a general framework for determining optimal relationships for tensile strength of doubly convex tablets under diametrical compression. This approach is based on the observation that tensile strength is directly proportional to the breaking force and inversely proportional to a non-linear function of geometric parameters and materials properties. This generalization reduces to the analytical expression commonly used for flat faced tablets, i.e., Hertz solution, and to the empirical relationship currently used in the pharmaceutical industry for convex-faced tablets, i.e., Pitt's equation. Under proper parametrization, optimal tensile strength relationship can be determined from experimental results by minimizing a figure of merit of choice. This optimization is performed under the first-order approximation that a flat faced tablet and a doubly curved tablet have the same tensile strength if they have the same relative density and are made of the same powder, under equivalent manufacturing conditions. Furthermore, we provide a set of recommendations and best practices for assessing the performance of optimal tensile strength relationships in general. Based on these guidelines, we identify two new models, namely the general and mechanistic models, which are effective and predictive alternatives to the tensile strength relationship currently used in the pharmaceutical industry. Copyright © 2015 Elsevier B.V. All rights reserved.
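
    The fitting procedure the framework prescribes can be sketched as follows: the flat-face (Hertz/Brazilian) value 2F/(πDt) is corrected by a parametrized geometric factor, and the parameters are chosen to minimize a least-squares figure of merit. The correction factor below is a made-up placeholder, not the paper's general or mechanistic model; D, t and W stand for diameter, overall thickness and band thickness.

        # Sketch of fitting a parametrized tensile-strength relationship
        # (placeholder geometric correction; illustrative only).
        import numpy as np
        from scipy.optimize import minimize

        def sigma_model(p, F, D, t, W):
            a, b = p
            # Flat-face diametrical value corrected by a geometric factor.
            return 2 * F / (np.pi * D * t) / (1 + a * (t - W) / D
                                              + b * (W / D) ** 2)

        def fit(F, D, t, W, sigma_obs):
            """Least-squares figure of merit over experimental data arrays."""
            fom = lambda p: np.sum((sigma_model(p, F, D, t, W) - sigma_obs) ** 2)
            return minimize(fom, x0=[0.0, 0.0]).x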

  4. Convex blind image deconvolution with inverse filtering

    NASA Astrophysics Data System (ADS)

    Lv, Xiao-Guang; Li, Fang; Zeng, Tieyong

    2018-03-01

    Blind image deconvolution is the process of estimating both the original image and the blur kernel from the degraded image with only partial or no information about degradation and the imaging system. It is a bilinear ill-posed inverse problem corresponding to the direct problem of convolution. Regularization methods are used to handle the ill-posedness of blind deconvolution and get meaningful solutions. In this paper, we investigate a convex regularized inverse filtering method for blind deconvolution of images. We assume that the support region of the blur object is known, as has been done in a few existing works. By studying the inverse filters of signal and image restoration problems, we observe the oscillation structure of the inverse filters. Inspired by the oscillation structure of the inverse filters, we propose to use the star norm to regularize the inverse filter. Meanwhile, we use the total variation to regularize the resulting image obtained by convolving the inverse filter with the degraded image. The proposed minimization model is shown to be convex. We employ the first-order primal-dual method for the solution of the proposed minimization model. Numerical examples for blind image restoration are given to show that the proposed method outperforms some existing methods in terms of peak signal-to-noise ratio (PSNR), structural similarity (SSIM), visual quality and time consumption.

  5. Fabrication of micro-lens array on convex surface by means of micro-milling

    NASA Astrophysics Data System (ADS)

    Zhang, Peng; Du, Yunlong; Wang, Bo; Shan, Debin

    2014-08-01

    In order to extend the application of micro-milling technology and to fabricate ultra-precision optical surfaces with complex microstructures, primary experimental research on micro-milling a complex microstructure array is carried out in this paper. A complex microstructure array surface with varying parameters is designed, and a mathematical model of the surface is set up and simulated. For the fabrication of the designed microstructure array surface, a micro three-axis ultra-precision milling machine tool is developed; an aerostatic guideway driven directly by a linear motor is adopted to guarantee sufficient stiffness of the machine, and a novel numerical control strategy with linear encoders of 5 nm resolution used as the feedback of the control system ensures extremely high motion control accuracy. With the help of CAD/CAM technology, convex micro-lens arrays on convex spherical surfaces with different scales are fabricated in polyvinyl chloride (PVC) and pure copper using a micro tungsten carbide ball end milling tool on the ultra-precision micro-milling machine. Excellent nanometer-level micro-movement performance of the axes is demonstrated by motion control experiments. The fabricated surface closely matches the design; the characteristic scale of the microstructure is less than 200 μm and the accuracy is better than 1 μm. This proves that ultra-precision micro-milling based on a micro ultra-precision machine tool is a suitable and practical method for manufacturing microstructure array surfaces on different kinds of materials, and with the development of micro milling cutters, ultra-precision micro-milling of complex microstructure surfaces will become achievable.

  6. CHRR: coordinate hit-and-run with rounding for uniform sampling of constraint-based models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Haraldsdóttir, Hulda S.; Cousins, Ben; Thiele, Ines

    In constraint-based metabolic modelling, physical and biochemical constraints define a polyhedral convex set of feasible flux vectors. Uniform sampling of this set provides an unbiased characterization of the metabolic capabilities of a biochemical network. However, reliable uniform sampling of genome-scale biochemical networks is challenging due to their high dimensionality and inherent anisotropy. Here, we present an implementation of a new sampling algorithm, coordinate hit-and-run with rounding (CHRR). This algorithm is based on the provably efficient hit-and-run random walk and crucially uses a preprocessing step to round the anisotropic flux set. CHRR provably converges to a uniform stationary sampling distribution. We apply it to metabolic networks of increasing dimensionality. We show that it converges several times faster than a popular artificial centering hit-and-run algorithm, enabling reliable and tractable sampling of genome-scale biochemical networks.
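
    A bare coordinate hit-and-run step is sketched below for a bounded polytope {x : Ax ≤ b}; the rounding preprocessing that makes CHRR efficient on anisotropic, genome-scale sets is omitted here.

        # Coordinate hit-and-run sketch: walk along random coordinate
        # directions, sampling uniformly on each feasible chord.
        import numpy as np

        def coordinate_hit_and_run(A, b, x0, n_samples=1000, seed=0):
            """Sample {x : A x <= b} (assumed bounded) from interior x0."""
            rng = np.random.default_rng(seed)
            x = x0.astype(float)
            samples = []
            for _ in range(n_samples):
                i = rng.integers(len(x))     # random coordinate direction
                s = b - A @ x                # slacks; positive for interior x
                ai = A[:, i]
                hi = np.min(s[ai > 0] / ai[ai > 0])  # max feasible step up
                lo = np.max(s[ai < 0] / ai[ai < 0])  # max feasible step down
                x = x.copy()
                x[i] += rng.uniform(lo, hi)  # uniform point on the chord
                samples.append(x)
            return np.array(samples)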

  7. Efficient Boundary Extraction of BSP Solids Based on Clipping Operations.

    PubMed

    Wang, Charlie C L; Manocha, Dinesh

    2013-01-01

    We present an efficient algorithm to extract the manifold surface that approximates the boundary of a solid represented by a Binary Space Partition (BSP) tree. Our polygonization algorithm repeatedly performs clipping operations on volumetric cells that correspond to a spatial convex partition and computes the boundary by traversing the connected cells. We use point-based representations along with finite-precision arithmetic to improve the efficiency and generate the B-rep approximation of a BSP solid. The core of our polygonization method is a novel clipping algorithm that uses a set of logical operations to make it resistant to degeneracies resulting from limited precision of floating-point arithmetic. The overall BSP to B-rep conversion algorithm can accurately generate boundaries with sharp and small features, and is faster than prior methods. At the end of this paper, we use this algorithm for a few geometric processing applications including Boolean operations, model repair, and mesh reconstruction.

  8. LCAMP: Location Constrained Approximate Message Passing for Compressed Sensing MRI

    PubMed Central

    Sung, Kyunghyun; Daniel, Bruce L; Hargreaves, Brian A

    2016-01-01

    Iterative thresholding methods have been extensively studied as faster alternatives to convex optimization methods for solving large-sized problems in compressed sensing. A novel iterative thresholding method called LCAMP (Location Constrained Approximate Message Passing) is presented for reducing computational complexity and improving reconstruction accuracy when a nonzero location (or sparse support) constraint can be obtained from view shared images. LCAMP modifies the existing approximate message passing algorithm by replacing the thresholding stage with a location constraint, which avoids adjusting regularization parameters or thresholding levels. This work is first compared with other conventional reconstruction methods using random 1D signals and then applied to dynamic contrast-enhanced breast MRI to demonstrate the excellent reconstruction accuracy (less than 2% absolute difference) and low computation time (5 - 10 seconds using Matlab) with highly undersampled 3D data (244 × 128 × 48; overall reduction factor = 10). PMID:23042658
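
    Schematically, the modification is one line of an AMP-style iteration: the thresholding step becomes a projection onto the known support. The sketch below uses a simplified real-valued iteration with a Boolean support mask; it illustrates the idea only and is not the published reconstruction code.

        # AMP-style iteration with the thresholding stage replaced by a
        # location (support) constraint; `support` is a Boolean mask.
        import numpy as np

        def lcamp(A, y, support, n_iter=30):
            m, n = A.shape
            x, z = np.zeros(n), y.copy()
            for _ in range(n_iter):
                r = x + A.T @ z                 # pseudo-data estimate
                x = np.where(support, r, 0.0)   # location constraint step
                onsager = (support.sum() / m) * z   # Onsager correction
                z = y - A @ x + onsager
            return x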

  9. CHRR: coordinate hit-and-run with rounding for uniform sampling of constraint-based models

    DOE PAGES

    Haraldsdóttir, Hulda S.; Cousins, Ben; Thiele, Ines; ...

    2017-01-31

    In constraint-based metabolic modelling, physical and biochemical constraints define a polyhedral convex set of feasible flux vectors. Uniform sampling of this set provides an unbiased characterization of the metabolic capabilities of a biochemical network. However, reliable uniform sampling of genome-scale biochemical networks is challenging due to their high dimensionality and inherent anisotropy. Here, we present an implementation of a new sampling algorithm, coordinate hit-and-run with rounding (CHRR). This algorithm is based on the provably efficient hit-and-run random walk and crucially uses a preprocessing step to round the anisotropic flux set. CHRR provably converges to a uniform stationary sampling distribution. We apply it to metabolic networks of increasing dimensionality. We show that it converges several times faster than a popular artificial centering hit-and-run algorithm, enabling reliable and tractable sampling of genome-scale biochemical networks.

  10. CVXPY: A Python-Embedded Modeling Language for Convex Optimization.

    PubMed

    Diamond, Steven; Boyd, Stephen

    2016-04-01

    CVXPY is a domain-specific language for convex optimization embedded in Python. It allows the user to express convex optimization problems in a natural syntax that follows the math, rather than in the restrictive standard form required by solvers. CVXPY makes it easy to combine convex optimization with high-level features of Python such as parallelism and object-oriented design. CVXPY is available at http://www.cvxpy.org/ under the GPL license, along with documentation and examples.
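
    A small example of the natural syntax, using the public API: nonnegative least squares in a few lines (the problem data here are random, for illustration).

        # Nonnegative least squares in CVXPY's math-like syntax.
        import cvxpy as cp
        import numpy as np

        rng = np.random.default_rng(0)
        A, b = rng.standard_normal((30, 10)), rng.standard_normal(30)

        x = cp.Variable(10, nonneg=True)         # variable with x >= 0
        prob = cp.Problem(cp.Minimize(cp.sum_squares(A @ x - b)))
        prob.solve()
        print(prob.status, x.value)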

  11. Usefulness of the convexity apparent hyperperfusion sign in 123I-iodoamphetamine brain perfusion SPECT for the diagnosis of idiopathic normal pressure hydrocephalus.

    PubMed

    Ohmichi, Takuma; Kondo, Masaki; Itsukage, Masahiro; Koizumi, Hidetaka; Matsushima, Shigenori; Kuriyama, Nagato; Ishii, Kazunari; Mori, Etsuro; Yamada, Kei; Mizuno, Toshiki; Tokuda, Takahiko

    2018-03-16

    OBJECTIVE The gold standard for the diagnosis of idiopathic normal pressure hydrocephalus (iNPH) is the CSF removal test. For elderly patients, however, a less invasive diagnostic method is required. On MRI, high-convexity tightness was reported to be an important finding for the diagnosis of iNPH. On SPECT, patients with iNPH often show hyperperfusion of the high-convexity area. The authors tested 2 hypotheses regarding the SPECT finding: 1) it is relative hyperperfusion reflecting the increased gray matter density of the convexity, and 2) it is useful for the diagnosis of iNPH. The authors termed the SPECT finding the convexity apparent hyperperfusion (CAPPAH) sign. METHODS Two clinical studies were conducted. In study 1, SPECT was performed for 20 patients suspected of having iNPH, and regional cerebral blood flow (rCBF) of the high-convexity area was examined using quantitative analysis. Clinical differences between patients with the CAPPAH sign (CAP) and those without it (NCAP) were also compared. In study 2, the CAPPAH sign was retrospectively assessed in 30 patients with iNPH and 19 healthy controls using SPECT images and 3D stereotactic surface projection. RESULTS In study 1, rCBF of the high-convexity area of the CAP group was calculated as 35.2-43.7 ml/min/100 g, which is not higher than normal values of rCBF determined by SPECT. The NCAP group showed lower cognitive function and weaker responses to the removal of CSF than the CAP group. In study 2, the CAPPAH sign was positive only in patients with iNPH (24/30) and not in controls (sensitivity 80%, specificity 100%). The coincidence rate between tight high convexity on MRI and the CAPPAH sign was very high (28/30). CONCLUSIONS Patients with iNPH showed hyperperfusion of the high-convexity area on SPECT; however, the presence of the CAPPAH sign did not indicate real hyperperfusion of rCBF in the high-convexity area. The authors speculated that patients with iNPH without the CAPPAH sign, despite showing tight high convexity on MRI, might have comorbidities such as Alzheimer's disease.

  12. High resolution schemes and the entropy condition

    NASA Technical Reports Server (NTRS)

    Osher, S.; Chakravarthy, S.

    1983-01-01

    A systematic procedure for constructing semidiscrete, second order accurate, variation diminishing, five point band width, approximations to scalar conservation laws, is presented. These schemes are constructed to also satisfy a single discrete entropy inequality. Thus, in the convex flux case, convergence is proven to be the unique physically correct solution. For hyperbolic systems of conservation laws, this construction is used formally to extend the first author's first order accurate scheme, and show (under some minor technical hypotheses) that limit solutions satisfy an entropy inequality. Results concerning discrete shocks, a maximum principle, and maximal order of accuracy are obtained. Numerical applications are also presented.

  13. Automated system function allocation and display format: Task information processing requirements

    NASA Technical Reports Server (NTRS)

    Czerwinski, Mary P.

    1993-01-01

    An important consideration when designing the interface to an intelligent system concerns function allocation between the system and the user. The display of information could be held constant, or 'fixed', leaving the user with the task of searching through all of the available information, integrating it, and classifying the data into a known system state. On the other hand, the system, based on its own intelligent diagnosis, could display only relevant information in order to reduce the user's search set. The user would still be left the task of perceiving and integrating the data and classifying it into the appropriate system state. Finally, the system could display the patterns of data. In this scenario, the task of integrating the data is carried out by the system, and the user's information processing load is reduced, leaving only the tasks of perception and classification of the patterns of data. Humans are especially adept at this form of display processing. Although others have examined the relative effectiveness of alphanumeric and graphical display formats, it is interesting to reexamine this issue together with the function allocation problem. Currently, Johnson Space Center is the test site for an intelligent Thermal Control System (TCS), TEXSYS, being tested for use with Space Station Freedom. Expert TCS engineers, as well as novices, were asked to classify several displays of TEXSYS data into various system states (including nominal and anomalous states). Three different display formats were used: fixed, subset, and graphical. The hypothesis tested was that the graphical displays would provide for fewer errors and faster classification times by both experts and novices, regardless of the kind of system state represented within the display. The subset displays were hypothesized to be the second most effective display format/function allocation condition, based on the fact that the search set is reduced in these displays. Both the subset and the graphic display conditions were hypothesized to be processed more efficiently than the fixed display conditions.

  14. Tomographic image reconstruction using the cell broadband engine (CBE) general purpose hardware

    NASA Astrophysics Data System (ADS)

    Knaup, Michael; Steckmann, Sven; Bockenbach, Olivier; Kachelrieß, Marc

    2007-02-01

    Tomographic image reconstruction, such as the reconstruction of CT projection values, of tomosynthesis data, or of PET or SPECT events, is computationally very demanding. In filtered backprojection as well as in iterative reconstruction schemes, the most time-consuming steps are forward- and backprojection, which are often limited by the memory bandwidth. Recently, a novel general-purpose architecture optimized for distributed computing became available: the Cell Broadband Engine (CBE). Its eight synergistic processing elements (SPEs) currently allow for a theoretical performance of 192 GFlops (3 GHz, 8 units, 4 floats per vector, 2 instructions, multiply and add, per clock). To maximize image reconstruction speed we modified our parallel-beam and perspective backprojection algorithms, which are highly optimized for standard PCs, and optimized the code for the CBE processor. In addition, we implemented an optimized perspective forward projection on the CBE, which allows us to perform statistical image reconstructions such as the ordered subset convex (OSC) algorithm. Performance was measured using simulated data with 512 projections per rotation and 512² detector elements. The data were backprojected into an image of 512³ voxels using our PC-based approaches and the new CBE-based algorithms. Both the PC and the CBE timings were scaled to a 3 GHz clock frequency. On the CBE, we obtain total reconstruction times of 4.04 s for the parallel backprojection, 13.6 s for the perspective backprojection and 192 s for a complete OSC reconstruction, consisting of one initial Feldkamp reconstruction followed by 4 OSC iterations.

  15. WE-AB-209-07: Explicit and Convex Optimization of Plan Quality Metrics in Intensity-Modulated Radiation Therapy Treatment Planning

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Engberg, L; KTH Royal Institute of Technology, Stockholm; Eriksson, K

    Purpose: To formulate objective functions of a multicriteria fluence map optimization model that correlate well with plan quality metrics, and to solve this multicriteria model by convex approximation. Methods: In this study, objectives of a multicriteria model are formulated to explicitly either minimize or maximize a dose-at-volume measure. Given the widespread agreement that dose-at-volume levels play important roles in plan quality assessment, these objectives correlate well with plan quality metrics. This is in contrast to the conventional objectives, which are to maximize clinical goal achievement by relating to deviations from given dose-at-volume thresholds: while balancing the new objectives means explicitly balancing dose-at-volume levels, balancing the conventional objectives effectively means balancing deviations. Constituted by the inherently non-convex dose-at-volume measure, the new objectives are approximated by the convex mean-tail-dose measure (CVaR measure), yielding a convex approximation of the multicriteria model. Results: Advantages of using the convex approximation are investigated through juxtaposition with the conventional objectives in a computational study of two patient cases. Clinical goals of each case respectively point out three ROI dose-at-volume measures to be considered for plan quality assessment. This is translated in the convex approximation into minimizing three mean-tail-dose measures. Evaluations of the three ROI dose-at-volume measures on Pareto optimal plans are used to represent plan quality of the Pareto sets. Besides providing increased accuracy in terms of feasibility of solutions, the convex approximation generates Pareto sets with overall improved plan quality. In one case, the Pareto set generated by the convex approximation entirely dominates that generated with the conventional objectives. Conclusion: The initial computational study indicates that the convex approximation outperforms the conventional objectives in aspects of accuracy and plan quality.

  16. SU-F-T-340: Direct Editing of Dose Volume Histograms: Algorithms and a Unified Convex Formulation for Treatment Planning with Dose Constraints

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ungun, B; Stanford University School of Medicine, Stanford, CA; Fu, A

    2016-06-15

    Purpose: To develop a procedure for including dose constraints in convex programming-based approaches to treatment planning, and to support dynamic modification of such constraints during planning. Methods: We present a mathematical approach that allows mean dose, maximum dose, minimum dose and dose volume (i.e., percentile) constraints to be appended to any convex formulation of an inverse planning problem. The first three constraint types are convex and readily incorporated. Dose volume constraints are not convex, however, so we introduce a convex restriction that is related to CVaR-based approaches previously proposed in the literature. To compensate for the conservatism of this restriction, we propose a new two-pass algorithm that solves the restricted problem on a first pass and uses this solution to form exact constraints on a second pass. In another variant, we introduce slack variables for each dose constraint to prevent the problem from becoming infeasible when the user specifies an incompatible set of constraints. We implement the proposed methods in Python using the convex programming package cvxpy in conjunction with the open source convex solvers SCS and ECOS. Results: We show, for several cases taken from the clinic, that our proposed method meets specified constraints (often with margin) when they are feasible. Constraints are met exactly when we use the two-pass method, and infeasible constraints are replaced with the nearest feasible constraint when slacks are used. Finally, we introduce ConRad, a Python-embedded free software package for convex radiation therapy planning. ConRad implements the methods described above and offers a simple interface for specifying prescriptions and dose constraints. Conclusion: This work demonstrates the feasibility of using modifiable dose constraints in a convex formulation, making it practical to guide the treatment planning process with interactively specified dose constraints. This work was supported by the Stanford BioX Graduate Fellowship and NIH Grant 5R01CA176553.
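
    A minimal sketch of the kind of CVaR-style convex restriction described above, written with cvxpy; the dose-influence matrix, prescription numbers, and objective are hypothetical stand-ins for illustration, not ConRad's API.

      import cvxpy as cp
      import numpy as np

      n_beamlets, n_voxels = 50, 200
      rng = np.random.default_rng(1)
      D = rng.random((n_voxels, n_beamlets))   # hypothetical dose-influence matrix

      x = cp.Variable(n_beamlets, nonneg=True) # beamlet weights
      dose = D @ x

      # Dose-volume goal: at most a fraction phi of the ROI above d_max.
      phi, d_max = 0.2, 30.0
      alpha = cp.Variable()
      # Rockafellar-Uryasev form of mean-tail-dose (CVaR): bounding the mean of
      # the hottest phi-fraction of voxels by d_max conservatively enforces the
      # dose-volume constraint, and it is convex.
      cvar = alpha + cp.sum(cp.pos(dose - alpha)) / (phi * n_voxels)

      prescription = 25.0                      # hypothetical target dose
      problem = cp.Problem(cp.Minimize(cp.sum_squares(dose - prescription)),
                           [cvar <= d_max])
      problem.solve()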

  17. CVXPY: A Python-Embedded Modeling Language for Convex Optimization

    PubMed Central

    Diamond, Steven; Boyd, Stephen

    2016-01-01

    CVXPY is a domain-specific language for convex optimization embedded in Python. It allows the user to express convex optimization problems in a natural syntax that follows the math, rather than in the restrictive standard form required by solvers. CVXPY makes it easy to combine convex optimization with high-level features of Python such as parallelism and object-oriented design. CVXPY is available at http://www.cvxpy.org/ under the GPL license, along with documentation and examples. PMID:27375369

  18. Duality of caustics in Minkowski billiards

    NASA Astrophysics Data System (ADS)

    Artstein-Avidan, S.; Florentin, D. I.; Ostrover, Y.; Rosen, D.

    2018-04-01

    In this paper we study convex caustics in Minkowski billiards. We show that for the Euclidean billiard dynamics in a planar smooth, centrally symmetric, strictly convex body K, for every convex caustic which K possesses, the ‘dual’ billiard dynamics in which the table is the Euclidean unit ball and the geometry that governs the motion is induced by the body K, possesses a dual convex caustic. Such a pair of caustics are dual in a strong sense, and in particular they have the same perimeter, Lazutkin parameter (both measured with respect to the corresponding geometries), and rotation number. We show moreover that for general Minkowski billiards this phenomenon fails, and one can construct a smooth caustic in a Minkowski billiard table which possesses no dual convex caustic.

  19. Multi-Stage Convex Relaxation Methods for Machine Learning

    DTIC Science & Technology

    2013-03-01

    Many problems in machine learning can be naturally formulated as non-convex optimization problems. However, such direct nonconvex formulations have...original nonconvex formulation. We will develop theoretical properties of this method and algorithmic consequences. Related convex and nonconvex machine learning methods will also be investigated.

  20. On approximation and energy estimates for delta 6-convex functions.

    PubMed

    Saleem, Muhammad Shoaib; Pečarić, Josip; Rehman, Nasir; Khan, Muhammad Wahab; Zahoor, Muhammad Sajid

    2018-01-01

    The smooth approximation and weighted energy estimates for delta 6-convex functions are derived in this research. Moreover, we conclude that if 6-convex functions are closed in uniform norm, then their third derivatives are closed in weighted [Formula: see text]-norm.

  1. Nonconvex Sparse Logistic Regression With Weakly Convex Regularization

    NASA Astrophysics Data System (ADS)

    Shen, Xinyue; Gu, Yuantao

    2018-06-01

    In this work we propose to fit a sparse logistic regression model by a weakly convex regularized nonconvex optimization problem. The idea is based on the finding that a weakly convex function as an approximation of the $\ell_0$ pseudo norm is able to better induce sparsity than the commonly used $\ell_1$ norm. For a class of weakly convex sparsity inducing functions, we prove the nonconvexity of the corresponding sparse logistic regression problem, and study its local optimality conditions and the choice of the regularization parameter to exclude trivial solutions. Despite the nonconvexity, a method based on proximal gradient descent is used to solve the general weakly convex sparse logistic regression, and its convergence behavior is studied theoretically. Then the general framework is applied to a specific weakly convex function, and a necessary and sufficient local optimality condition is provided. The solution method is instantiated in this case as an iterative firm-shrinkage algorithm, and its effectiveness is demonstrated in numerical experiments on both randomly generated and real datasets.
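
    The iterative firm-shrinkage algorithm can be sketched as a proximal gradient loop whose proximal step is the firm-thresholding operator. This is a structural sketch only: the pairing of the thresholds lam and mu with the step size is one plausible choice (mu must exceed the effective lower threshold) and may differ from the paper's exact parameterization.

      import numpy as np

      def firm_shrink(z, lam, mu):
          # Firm-thresholding operator: zero below lam, identity above mu,
          # linear interpolation in between (elementwise).
          a = np.abs(z)
          mid = np.sign(z) * mu * (a - lam) / (mu - lam)
          return np.where(a <= lam, 0.0, np.where(a >= mu, z, mid))

      def sparse_logreg(X, y, lam=0.1, mu=0.5, iters=500):
          # Proximal gradient for (1/n) * logistic loss with a weakly convex
          # penalty, using firm shrinkage as the proximal step; y in {0, 1}.
          n, d = X.shape
          step = 4.0 * n / np.linalg.norm(X, 2) ** 2   # 1/L for the logistic loss
          w = np.zeros(d)
          for _ in range(iters):
              p = 1.0 / (1.0 + np.exp(-(X @ w)))       # sigmoid
              w = firm_shrink(w - step * (X.T @ (p - y)) / n, step * lam, mu)
          return w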

  2. Assessing the influence of lower facial profile convexity on perceived attractiveness in the orthognathic patient, clinician, and layperson.

    PubMed

    Naini, Farhad B; Donaldson, Ana Nora A; McDonald, Fraser; Cobourne, Martyn T

    2012-09-01

    The aim was a quantitative evaluation of how the severity of lower facial profile convexity influences perceived attractiveness. The lower facial profile of an idealized image was altered incrementally from 14° to -16°. Images were rated on a Likert scale by orthognathic patients, laypeople, and clinicians. Attractiveness ratings were greater for straight profiles than for convex or concave ones, with no significant difference between convex and concave profiles. Ratings decreased by 0.23 of a level for every degree increase in the convexity angle. Class II/III patients gave significantly reduced ratings of attractiveness and had a greater desire for surgery than class I patients. A straight profile is perceived as most attractive, and greater degrees of convexity or concavity are deemed progressively less attractive, but a range of 10° to -12° may be deemed acceptable; beyond these values surgical correction is desired. Patients are most critical, and clinicians are more critical than laypeople. Copyright © 2012 Elsevier Inc. All rights reserved.

  3. The spectral positioning algorithm of new spectrum vehicle based on convex programming in wireless sensor network

    NASA Astrophysics Data System (ADS)

    Zhang, Yongjun; Lu, Zhixin

    2017-10-01

    Spectrum resources are precious, so it is increasingly important to locate interference signals rapidly. Convex programming algorithms are often used for localization in wireless sensor networks. However, in the traditional convex programming algorithm, excessive overlap among wireless sensor nodes leads to low positioning accuracy, so this paper proposes a new algorithm. Building on the traditional convex programming algorithm, a spectrum-monitoring vehicle dispatches unmanned aerial vehicles (UAVs) that record data periodically along different trajectories. The positioning area is then segmented according to the probability density distribution to further reduce the localization region. Because the algorithm only adds the communication of power values between the unknown node and the sensor nodes, the simplicity and real-time performance of the convex programming algorithm are largely preserved. Experimental results show that the improved algorithm achieves better positioning accuracy than the original convex programming algorithm.
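
    For orientation, here is a minimal sketch of the convex-programming localization step itself (without the paper's UAV-assisted area segmentation), posed as a second-order cone feasibility problem in cvxpy; the anchor positions and range upper bounds below are made up.

      import cvxpy as cp
      import numpy as np

      # Made-up anchor positions and range upper bounds (e.g. from received power).
      anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
      radii = np.array([7.5, 7.0, 7.2, 7.8])

      x = cp.Variable(2)                        # unknown node position
      s = cp.Variable(len(radii), nonneg=True)  # slacks keep the problem feasible
      constraints = [cp.norm(x - anchors[i]) <= radii[i] + s[i]
                     for i in range(len(radii))]
      # Minimizing the total slack returns a point in (or nearest to) the
      # intersection of the coverage disks.
      cp.Problem(cp.Minimize(cp.sum(s)), constraints).solve()
      print(x.value)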

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Azunre, P.

    Here in this paper, two novel techniques for bounding the solutions of parametric weakly coupled second-order semilinear parabolic partial differential equations are developed. The first provides a theorem to construct interval bounds, while the second provides a theorem to construct lower bounds convex and upper bounds concave in the parameter. The convex/concave bounds can be significantly tighter than the interval bounds because of the wrapping effect suffered by interval analysis in dynamical systems. Both types of bounds are computationally cheap to construct, requiring solving auxiliary systems twice and four times larger than the original system, respectively. An illustrative numerical example of bound construction and use for deterministic global optimization within a simple serial branch-and-bound algorithm, implemented numerically using interval arithmetic and a generalization of McCormick's relaxation technique, is presented. Finally, problems within the important class of reaction-diffusion systems may be optimized with these tools.

  5. Improvement on Main/backup Controller Switching Device of the Nozzle Throat Area Control System for a Turbofan Aero Engine

    NASA Astrophysics Data System (ADS)

    Li, Jie; Duan, Minghu; Yan, Maode; Li, Gang; Li, Xiaohui

    2014-06-01

    A full authority digital electronic controller (FADEC) equipped with a full authority hydro-mechanical backup controller (FAHMBC) is adopted as the nozzle throat area control system (NTACS) of a turbofan aero engine. In order to ensure the switching reliability of the main/backup controller, the nozzle throat area control switching valve was improved from a three-way convex desktop slide valve to a six-way convex desktop slide valve. Simulation results show that, if malfunctions of FADEC occur and abnormal signals are outputted from FADEC, NTACS will be seriously influenced by the main/backup controller switching in several working states, while NTACS will not be influenced when using the improved nozzle throat area control switching valve; thus the controller switching process becomes safer and smoother, and the working reliability of the turbofan aero engine is improved by the controller switching device improvement.

  6. Resolvent positive linear operators exhibit the reduction phenomenon

    PubMed Central

    Altenberg, Lee

    2012-01-01

    The spectral bound, s(αA + βV), of a combination of a resolvent positive linear operator A and an operator of multiplication V, was shown by Kato to be convex in β. Kato's result is shown here to imply, through an elementary “dual convexity” lemma, that s(αA + βV) is also convex in α > 0, and notably, ∂s(αA + βV)/∂α ≤ s(A). Diffusions typically have s(A) ≤ 0, so that for diffusions with spatially heterogeneous growth or decay rates, greater mixing reduces growth. Models of the evolution of dispersal in particular have found this result when A is a Laplacian or second-order elliptic operator, or a nonlocal diffusion operator, implying selection for reduced dispersal. These cases are shown here to be part of a single, broadly general, “reduction” phenomenon. PMID:22357763

  7. A Novel Gradient Vector Flow Snake Model Based on Convex Function for Infrared Image Segmentation

    PubMed Central

    Zhang, Rui; Zhu, Shiping; Zhou, Qin

    2016-01-01

    Infrared image segmentation is a challenging topic because infrared images are characterized by high noise, low contrast, and weak edges. Active contour models, especially gradient vector flow, have several advantages for infrared image segmentation. However, the GVF (Gradient Vector Flow) model also has some drawbacks, including a dilemma between noise smoothing and weak-edge protection, which significantly degrades infrared image segmentation. In order to solve this problem, we propose a novel generalized gradient vector flow snakes model combining the GGVF (Generic Gradient Vector Flow) and NBGVF (Normally Biased Gradient Vector Flow) models. We also adopt a new type of coefficient setting in the form of a convex function, which improves the ability to protect weak edges while smoothing noise. Experimental results and comparisons against other methods indicate that our proposed snakes model performs better on infrared image segmentation than other snakes models. PMID:27775660

  8. Thick lens chromatic effective focal length variation versus bending

    NASA Astrophysics Data System (ADS)

    Sparrold, Scott

    2017-11-01

    Longitudinal chromatic aberration (LCA) can limit the optical performance of refractive optical systems. Understanding a singlet's chromatic change of effective focal length leads to insights and methods to control LCA. Long-established first-order theory shows that the chromatic change in focal length for a zero-thickness lens is proportional to its focal length divided by the lens V-number, or inverse dispersion. This work presents the derivation of an equation for a thick singlet's chromatic change in effective focal length as a function of center thickness, t, dispersion, V, index of refraction, n, and the Coddington shape factor, K. A plot of bending versus chromatic focal length variation is presented. Lens thickness does not influence the chromatic variation of effective focal length for a convex-plano or plano-convex lens. A lens's center thickness has a more pronounced influence on chromatic focal length variation at lower indices of refraction.
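
    A small numerical sketch of the effect, using the standard thick-lens lensmaker's equation and the first-order approximation dn = (n - 1)/V; the glass numbers below are roughly those of a common crown glass and are our choice, not the paper's.

      import numpy as np

      def efl(n, c1, c2, t):
          # Thick-lens effective focal length from surface curvatures c1, c2
          # (reciprocal radii), center thickness t, and refractive index n.
          power = (n - 1.0) * (c1 - c2) + (n - 1.0) ** 2 * t * c1 * c2 / n
          return 1.0 / power

      def chromatic_efl_shift(n, V, c1, c2, t):
          # Approximate focal shift across the spectral band via dn = (n - 1) / V.
          dn = (n - 1.0) / V
          return efl(n - dn / 2.0, c1, c2, t) - efl(n + dn / 2.0, c1, c2, t)

      # Plano-convex (c2 = 0): the thickness term vanishes, so the chromatic
      # shift is independent of t, consistent with the claim above.
      for t in (0.002, 0.010):
          print(chromatic_efl_shift(1.5168, 64.17, 1.0 / 0.0517, 0.0, t))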

  9. GPU-based prompt gamma ray imaging from boron neutron capture therapy.

    PubMed

    Yoon, Do-Kun; Jung, Joo-Young; Jo Hong, Key; Sil Lee, Keum; Suk Suh, Tae

    2015-01-01

    The purpose of this research is to perform the fast reconstruction of a prompt gamma ray image using a graphics processing unit (GPU) computation from boron neutron capture therapy (BNCT) simulations. To evaluate the accuracy of the reconstructed image, a phantom including four boron uptake regions (BURs) was used in the simulation. After the Monte Carlo simulation of the BNCT, the modified ordered subset expectation maximization reconstruction algorithm using the GPU computation was used to reconstruct the images with fewer projections. The computation times for image reconstruction were compared between the GPU and the central processing unit (CPU). Also, the accuracy of the reconstructed image was evaluated by a receiver operating characteristic (ROC) curve analysis. The image reconstruction time using the GPU was 196 times faster than the conventional reconstruction time using the CPU. For the four BURs, the area under curve values from the ROC curve were 0.6726 (A-region), 0.6890 (B-region), 0.7384 (C-region), and 0.8009 (D-region). The tomographic image using the prompt gamma ray event from the BNCT simulation was acquired using the GPU computation in order to perform a fast reconstruction during treatment. The authors verified the feasibility of the prompt gamma ray image reconstruction using the GPU computation for BNCT simulations.
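
    For reference, a generic CPU-side ordered-subset expectation maximization loop in NumPy; the authors' modified, GPU-accelerated variant is not reproduced here, and the interleaved subset scheme below is the simplest common choice.

      import numpy as np

      def osem(A, y, n_subsets=8, n_iters=4, eps=1e-12):
          # Ordered-subset EM for emission data y with system matrix A
          # (projections x voxels); multiplicative update per subset.
          m, n = A.shape
          x = np.ones(n)
          subsets = [np.arange(s, m, n_subsets) for s in range(n_subsets)]
          for _ in range(n_iters):
              for rows in subsets:
                  As = A[rows]
                  fwd = As @ x                       # forward projection
                  ratio = y[rows] / np.maximum(fwd, eps)
                  sens = As.T @ np.ones(len(rows))   # subset sensitivity
                  x *= (As.T @ ratio) / np.maximum(sens, eps)
          return x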

  10. Controlling Laser Spot Size in Outer Space

    NASA Technical Reports Server (NTRS)

    Bennett, Harold E.

    2005-01-01

    Three documents discuss a method of controlling the diameter of a laser beam projected from Earth to any altitude ranging from low orbit around the Earth to geosynchronous orbit. Such laser beams are under consideration as means of supplying power to orbiting spacecraft at levels of the order of tens of kilowatts apiece. Each such beam would be projected by use of a special purpose telescope having an aperture diameter of 15 m or more. Expanding the laser beam to such a large diameter at low altitude would prevent air breakdown and render the laser beam eyesafe. Typically, the telescope would include an adaptive-optics concave primary mirror and a convex secondary mirror. The laser beam transmitted out to the satellite would remain in the near field on the telescope side of the beam waist, so that the telescope focal point would remain effective in controlling the beam width. By use of positioning stages having submicron resolution and repeatability, the relative positions of the primary and secondary mirrors would be adjusted to change the nominal telescope object and image distances to obtain the desired beam diameter (typically about 6 m) at the altitude of the satellite. The limiting distance D_L at which a constant beam diameter can be maintained is determined by the focal range of the telescope, 4λf², where λ is the wavelength and f the f-number of the primary mirror. The shorter the wavelength and the faster the mirror, the longer D_L becomes.

  11. L2CXCV: A Fortran 77 package for least squares convex/concave data smoothing

    NASA Astrophysics Data System (ADS)

    Demetriou, I. C.

    2006-04-01

    Fortran 77 software is given for least squares smoothing to data values contaminated by random errors subject to one sign change in the second divided differences of the smoothed values, where the location of the sign change is also an unknown of the optimization problem. A highly useful description of the constraints is that they follow from the assumption of initially increasing and subsequently decreasing rates of change, or vice versa, of the process considered. The underlying algorithm partitions the data into two disjoint sets of adjacent data and calculates the required fit by solving a strictly convex quadratic programming problem for each set. The piecewise linear interpolant to the fit is convex on the first set and concave on the other one. The partition into suitable sets is achieved by a finite iterative algorithm, which is made quite efficient because of the interactions of the quadratic programming problems on consecutive data. The algorithm obtains the solution by employing no more quadratic programming calculations over subranges of data than twice the number of the divided difference constraints. The quadratic programming technique makes use of active sets and takes advantage of a B-spline representation of the smoothed values that allows some efficient updating procedures. The entire code required to implement the method is 2920 Fortran lines. The package has been tested on a variety of data sets and it has performed very efficiently, terminating in an overall number of active set changes over subranges of data that is only proportional to the number of data. The results suggest that the package can be used for very large numbers of data values. Some examples with output are provided to help new users and exhibit certain features of the software. Important applications of the smoothing technique may be found in calculating a sigmoid approximation, which is a common topic in various contexts in applications in disciplines like physics, economics, biology and engineering. Distribution material that includes single and double precision versions of the code, driver programs, technical details of the implementation of the software package and test examples that demonstrate the use of the software is available in an accompanying ASCII file.
    Program summary
    Title of program: L2CXCV
    Catalogue identifier: ADXM_v1_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADXM_v1_0
    Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland
    Computer: PC Intel Pentium, Sun Sparc Ultra 5, Hewlett-Packard HP UX 11.0
    Operating system: WINDOWS 98, 2000, Unix/Solaris 7, Unix/HP UX 11.0
    Programming language used: FORTRAN 77
    Memory required to execute with typical data: O(n), where n is the number of data
    No. of bits in a byte: 8
    No. of lines in distributed program, including test data, etc.: 29 349
    No. of bytes in distributed program, including test data, etc.: 1 276 663
    No. of processors used: 1
    Has the code been vectorized or parallelized?: no
    Distribution format: default tar.gz
    Separate documentation available: Yes
    Nature of physical problem: Analysis of processes that show initially increasing and then decreasing rates of change (sigmoid shape), as, for example, in heat curves, reactor stability conditions, evolution curves, photoemission yields, growth models, utility functions, etc. Identifying an unknown convex/concave (sigmoid) function from some measurements of its values that contain random errors, and identifying the inflection point of this sigmoid function.
    Method of solution: Univariate data smoothing by minimizing the sum of the squares of the residuals (least squares approximation) subject to the condition that the second-order divided differences of the smoothed values change sign at most once. Ideally, this is the number of sign changes in the second derivative of the underlying function. The remarkable property of the smoothed values is that they consist of one separate section of optimal components that give nonnegative second divided differences (convexity) and one separate section of optimal components that give nonpositive second divided differences (concavity). The solution process finds the joint (that is, the inflection point estimate of the underlying function) of the sections automatically. The underlying method is iterative, each iteration solving a structured strictly convex quadratic programming problem in order to obtain a convex or a concave section over a subrange of data.
    Restrictions on the complexity of the problem: The number of data, n, is not limited in the software package, but is limited to 2000 in the main driver. The total work of the method requires 2n-2 structured quadratic programming calculations over subranges of data, which in practice does not exceed O(n) computer operations.
    Typical running times: CPU time on a PC with an Intel 733 MHz processor operating in Windows 98: about 2 s to smooth n=1000 noisy measurements that follow the shape of the sine function over one period.
    Summary: L2CXCV is a package of Fortran 77 subroutines for least squares smoothing to n univariate data values contaminated by random errors subject to one sign change in the second divided differences of the smoothed values, where the location of the sign change is unknown. The piecewise linear interpolant to the smoothed values gives a convex/concave fit to the data. The underlying algorithm is based on the property that in this best convex/concave fit, the convex and the concave sections are both optimal and separate. The algorithm is iterative, each iteration solving a strictly convex quadratic programming problem for the best convex fit to the first k data, starting from the best convex fit to the first k-1 data. By reversing the order and sign of the data, the algorithm obtains the best concave fit to the last n-k data. It then chooses that k as the optimal position of the required sign change (which defines the inflection point of the fit) if the convex and the concave components to the first k and the last n-k data, respectively, form a convex/concave vector that gives the least sum of squares of residuals. In effect the algorithm requires at most 2n-2 quadratic programming calculations over subranges of data. The package employs a technique for quadratic programming which takes advantage of a B-spline representation of the smoothed values and makes use of some efficient O(k) updating procedures, where k is the number of data of a subrange. The package has been tested on a variety of data sets and has performed very efficiently, terminating in an overall number of active set changes that is about n, thus exhibiting quadratic performance in n. The Fortran codes have been designed to minimize the use of computing resources. Attention has been given to computer rounding error details, which are essential to the robustness of the software package. Numerical examples with output are provided to help the use of the software and exhibit certain features of the method.
    Distribution material that includes driver programs, technical details of the installation of the package and test examples that demonstrate the use of the software is available in an ASCII file that accompanies this work.
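
    The optimization problem L2CXCV solves can be stated compactly as a family of quadratic programs. The brute-force sketch below (Python with cvxpy, equally spaced data assumed, so plain second differences stand in for divided differences) searches the sign-change index directly; the package's algorithm does the same search far more efficiently via its B-spline and active-set updating.

      import cvxpy as cp
      import numpy as np

      def convex_concave_fit(y):
          # Least-squares fit whose second differences change sign once:
          # convex up to an unknown index, concave afterwards. Brute force
          # over the sign-change position, for clarity only.
          y = np.asarray(y, dtype=float)
          n = y.size
          best_val, best_fit = np.inf, None
          for k in range(1, n - 2):
              f = cp.Variable(n)
              d2 = f[2:] - 2 * f[1:-1] + f[:-2]     # second differences
              prob = cp.Problem(cp.Minimize(cp.sum_squares(f - y)),
                                [d2[:k] >= 0, d2[k:] <= 0])
              prob.solve()
              if prob.value < best_val:
                  best_val, best_fit = prob.value, f.value
          return best_fit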

  12. Non-native Speech Perception Training Using Vowel Subsets: Effects of Vowels in Sets and Order of Training

    PubMed Central

    Nishi, Kanae; Kewley-Port, Diane

    2008-01-01

    Purpose Nishi and Kewley-Port (2007) trained Japanese listeners to perceive nine American English monophthongs and showed that a protocol using all nine vowels (fullset) produced better results than the one using only the three more difficult vowels (subset). The present study extended the target population to Koreans and examined whether protocols combining the two stimulus sets would provide more effective training. Method Three groups of five Korean listeners were trained on American English vowels for nine days using one of the three protocols: fullset only, first three days on subset then six days on fullset, or first six days on fullset then three days on subset. Participants' performance was assessed by pre- and post-training tests, as well as by a mid-training test. Results 1) Fullset training was also effective for Koreans; 2) no advantage was found for the two combined protocols over the fullset only protocol, and 3) sustained “non-improvement” was observed for training using one of the combined protocols. Conclusions In using subsets for training American English vowels, care should be taken not only in the selection of subset vowels, but also for the training orders of subsets. PMID:18664694

  13. On the Convergence Analysis of the Optimized Gradient Method.

    PubMed

    Kim, Donghwan; Fessler, Jeffrey A

    2017-01-01

    This paper considers the problem of unconstrained minimization of smooth convex functions having Lipschitz continuous gradients with known Lipschitz constant. We recently proposed the optimized gradient method for this problem and showed that it has a worst-case convergence bound for the cost function decrease that is twice as small as that of Nesterov's fast gradient method, yet has a similarly efficient practical implementation. Drori showed recently that the optimized gradient method has optimal complexity for the cost function decrease over the general class of first-order methods. This optimality makes it important to study fully the convergence properties of the optimized gradient method. The previous worst-case convergence bound for the optimized gradient method was derived for only the last iterate of a secondary sequence. This paper provides an analytic convergence bound for the primary sequence generated by the optimized gradient method. We then discuss additional convergence properties of the optimized gradient method, including the interesting fact that the optimized gradient method has two types of worst-case functions: a piecewise affine-quadratic function and a quadratic function. These results help complete the theory of an optimal first-order method for smooth convex minimization.
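
    For concreteness, here is the OGM recursion as we recall it from the published description; treat it as a sketch to be checked against Kim and Fessler's paper before relying on it.

      import numpy as np

      def ogm(grad, L, x0, n_iters):
          # Optimized gradient method for smooth convex f with L-Lipschitz
          # gradient: a gradient step plus two momentum-like correction terms.
          x = np.asarray(x0, dtype=float).copy()
          y = x.copy()
          theta = 1.0
          for i in range(n_iters):
              y_new = x - grad(x) / L
              if i < n_iters - 1:
                  theta_new = (1 + np.sqrt(1 + 4 * theta ** 2)) / 2
              else:
                  theta_new = (1 + np.sqrt(1 + 8 * theta ** 2)) / 2  # last iterate
              x = (y_new
                   + (theta - 1) / theta_new * (y_new - y)
                   + theta / theta_new * (y_new - x))
              y, theta = y_new, theta_new
          return x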

  14. On the Convergence Analysis of the Optimized Gradient Method

    PubMed Central

    Kim, Donghwan; Fessler, Jeffrey A.

    2016-01-01

    This paper considers the problem of unconstrained minimization of smooth convex functions having Lipschitz continuous gradients with known Lipschitz constant. We recently proposed the optimized gradient method for this problem and showed that it has a worst-case convergence bound for the cost function decrease that is twice as small as that of Nesterov’s fast gradient method, yet has a similarly efficient practical implementation. Drori showed recently that the optimized gradient method has optimal complexity for the cost function decrease over the general class of first-order methods. This optimality makes it important to study fully the convergence properties of the optimized gradient method. The previous worst-case convergence bound for the optimized gradient method was derived for only the last iterate of a secondary sequence. This paper provides an analytic convergence bound for the primary sequence generated by the optimized gradient method. We then discuss additional convergence properties of the optimized gradient method, including the interesting fact that the optimized gradient method has two types of worst-case functions: a piecewise affine-quadratic function and a quadratic function. These results help complete the theory of an optimal first-order method for smooth convex minimization. PMID:28461707

  15. Introducing TreeCollapse: a novel greedy algorithm to solve the cophylogeny reconstruction problem.

    PubMed

    Drinkwater, Benjamin; Charleston, Michael A

    2014-01-01

    Cophylogeny mapping is used to uncover deep coevolutionary associations between two or more phylogenetic histories at a macro coevolutionary scale. As cophylogeny mapping is NP-Hard, this technique relies heavily on heuristics to solve all but the most trivial cases. One notable approach utilises a metaheuristic to search only a subset of the exponential number of fixed node orderings possible for the phylogenetic histories in question. This is of particular interest as it is the only known heuristic that guarantees biologically feasible solutions. This has enabled research to focus on larger coevolutionary systems, such as coevolutionary associations between figs and their pollinator wasps, including over 200 taxa. Although able to converge on solutions for problem instances of this size, a reduction from the current cubic running time is required to handle larger systems, such as Wolbachia and their insect hosts. Rather than solving this underlying problem optimally this work presents a greedy algorithm called TreeCollapse, which uses common topological patterns to recover an approximation of the coevolutionary history where the internal node ordering is fixed. This approach offers a significant speed-up compared to previous methods, running in linear time. This algorithm has been applied to over 100 well-known coevolutionary systems converging on Pareto optimal solutions in over 68% of test cases, even where in some cases the Pareto optimal solution has not previously been recoverable. Further, while TreeCollapse applies a local search technique, it can guarantee solutions are biologically feasible, making this the fastest method that can provide such a guarantee. As a result, we argue that the newly proposed algorithm is a valuable addition to the field of coevolutionary research. Not only does it offer a significantly faster method to estimate the cost of cophylogeny mappings but by using this approach, in conjunction with existing heuristics, it can assist in recovering a larger subset of the Pareto front than has previously been possible.

  16. Human Umbilical Cord Mesenchymal Stem Cells: Subpopulations and Their Difference in Cell Biology and Effects on Retinal Degeneration in RCS Rats.

    PubMed

    Wang, L; Li, P; Tian, Y; Li, Z; Lian, C; Ou, Q; Jin, C; Gao, F; Xu, J-Y; Wang, J; Wang, F; Zhang, J; Zhang, J; Li, W; Tian, H; Lu, L; Xu, G-T

    2017-01-01

    Human umbilical cord mesenchymal stem cells (hUC-MSCs) are potential candidates for treating retinal degeneration (RD). To further study the biology and therapeutic effects of hUC-MSCs on retinal degeneration, two hUC-MSC subpopulations, termed hUC-MSC1 and hUC-MSC2, were isolated by the single-cell cloning method and their therapeutic functions were compared in the RCS rat, an RD model. Although both subsets satisfied the basic requirements for hUC-MSCs, they were significantly different in morphology, proliferation rate, differentiation capacity, phenotype and gene expression. Furthermore, only the smaller, fibroblast-like, faster-growing subset hUC-MSC1 displayed stronger colony forming potential as well as adipogenic and osteogenic differentiation capacities. When the two subsets were respectively transplanted into the subretinal spaces of RCS rats, both subsets survived, but only hUC-MSC1 expressed the RPE cell markers Bestrophin and RPE65. More importantly, hUC-MSC1 showed a stronger rescue effect on retinal function, as indicated by the higher b-wave amplitude on ERG examination, thicker retinal nuclear layer, and decreased apoptotic photoreceptors. When both subsets were treated with interleukin-6, mimicking the inflammatory environment encountered when the cells are transplanted into eyes with degenerated retina, hUC-MSC1 expressed much higher levels of trophic factors in comparison with hUC-MSC2. The data here, in addition to proving the heterogeneity of hUC-MSCs, confirm that the stronger therapeutic effects of hUC-MSC1 are attributable to its stronger anti-apoptotic effect, paracrine secretion of trophic factors and potential RPE cell differentiation capacity. Thus, the subset hUC-MSC1, not the other subset or the ungrouped hUC-MSCs, should be used for effective treatment of RD. Copyright © Bentham Science Publishers.

  17. Array microscopy technology and its application to digital detection of Mycobacterium tuberculosis

    NASA Astrophysics Data System (ADS)

    McCall, Brian P.

    Tuberculosis causes more deaths worldwide than any other curable infectious disease. This is the case despite tuberculosis appearing to be on the verge of eradication midway through the last century. Efforts at reversing the spread of tuberculosis have intensified since the early 1990s. Since then, microscopy has been the primary frontline diagnostic. In this dissertation, advances in clinical microscopy towards array microscopy for digital detection of Mycobacterium tuberculosis are presented. Digital array microscopy separates the tasks of microscope operation and pathogen detection and will reduce the specialization needed to operate the microscope. Distributing the work and reducing specialization will allow this technology to be deployed at the point of care, taking the front-line diagnostic for tuberculosis from the microscopy center to the community health center. By improving access to microscopy centers, hundreds of thousands of lives can be saved. For this dissertation, a lens was designed that can be manufactured as a 4x6 array of microscopes. This lens design is diffraction limited, having less than 0.071 waves of aberration (root mean square) over the entire field of view. The total area imaged onto a full-frame digital image sensor is expected to be 3.94 mm², which according to tuberculosis microscopy guidelines is more than sufficient for a sensitive diagnosis. The design is tolerant to single-point diamond turning manufacturing errors, as found by tolerance analysis and by fabricating a prototype. Diamond micro-milling, a fabrication technique for lens array molds, was applied to plastic plano-concave and plano-convex lens arrays, and found to produce high-quality optical surfaces. The micro-milling technique did not prove robust enough to produce bi-convex and meniscus lens arrays in a variety of lens shapes, however, and it required lengthy fabrication times. In order to rapidly prototype new lenses, a new diamond machining technique was developed called 4-axis single point diamond machining. This technique is 2-10x faster than micro-milling, depending on how advanced the micro-milling equipment is. With array microscope fabrication still in development, a single prototype of the lens designed for an array microscope was fabricated using single point diamond turning. The prototype microscope objective was validated in a pre-clinical trial. The prototype was compared with a standard clinical microscope objective in diagnostic tests. High concordance, a Fleiss's kappa of 0.88, was found between diagnoses made using the prototype and standard microscope objectives and a reference test. With the lens designed and validated and an advanced fabrication process developed, array microscopy technology is advanced to the point where it is feasible to rapidly prototype an array microscope for detection of tuberculosis and translate the array microscope from an innovative concept to a device that can save lives.

  18. Thermophysical properties of hydrogen along the liquid-vapor coexistence

    NASA Astrophysics Data System (ADS)

    Osman, S. M.; Sulaiman, N.; Bahaa Khedr, M.

    2016-05-01

    We present theoretical calculations for the liquid-vapor coexistence (LVC) curve of fluid hydrogen within first-order perturbation theory, with a suitable first-order quantum correction to the free energy. In the present equation of state, we incorporate the dimerization of the H2 molecule by treating the fluid as a hard convex body fluid. The thermophysical properties of fluid H2 along the LVC curve, including the pressure-temperature dependence, density-temperature asymmetry, volume expansivity, entropy and enthalpy, are calculated and compared with computer simulation and empirical results.

  19. The Knaster-Kuratowski-Mazurkiewicz theorem and abstract convexities

    NASA Astrophysics Data System (ADS)

    Cain, George L., Jr.; González, Luis

    2008-02-01

    The Knaster-Kuratowski-Mazurkiewicz covering theorem (KKM) is the basic ingredient in the proofs of many so-called "intersection" theorems and related fixed point theorems (including the famous Brouwer fixed point theorem). The KKM theorem was extended from Rn to Hausdorff linear spaces by Ky Fan. There has subsequently been a plethora of attempts at extending the KKM type results to arbitrary topological spaces. Virtually all these involve the introduction of some sort of abstract convexity structure for a topological space; among others we could mention H-spaces and G-spaces. We have introduced a new abstract convexity structure that generalizes the concept of a metric space with a convex structure, introduced by E. Michael in [E. Michael, Convex structures and continuous selections, Canad. J. Math. 11 (1959) 556-575], and called a topological space endowed with this structure an M-space. In an article by Shie Park and Hoonjoo Kim [S. Park, H. Kim, Coincidence theorems for admissible multifunctions on generalized convex spaces, J. Math. Anal. Appl. 197 (1996) 173-187], the concepts of G-spaces and metric spaces with Michael's convex structure were mentioned together, but no relationship between them was shown. In this article, we prove that G-spaces and M-spaces are closely related. We also introduce here the concept of an L-space, which is inspired by the MC-spaces of J.V. Llinares [J.V. Llinares, Unified treatment of the problem of existence of maximal elements in binary relations: A characterization, J. Math. Econom. 29 (1998) 285-302], and establish relationships between the convexities of these spaces and the spaces previously mentioned.

  20. A new adaptively central-upwind sixth-order WENO scheme

    NASA Astrophysics Data System (ADS)

    Huang, Cong; Chen, Li Li

    2018-03-01

    In this paper, we propose a new sixth-order WENO scheme for solving one-dimensional hyperbolic conservation laws. The new WENO reconstruction has three properties: (1) it is central in smooth regions for low dissipation, and upwind near discontinuities for numerical stability; (2) it is a convex combination of four linear reconstructions, in which one linear reconstruction is sixth order and the others are third order; (3) its linear weights can be any positive numbers with the requirement that their sum equals one. Furthermore, we propose a simple smoothness indicator for the sixth-order linear reconstruction; this smoothness indicator not only distinguishes smooth regions from discontinuities exactly, but also reduces the computational cost, and thus it is more efficient than the classical one.
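
    The convex-combination mechanism at the heart of WENO reconstruction can be illustrated with the classic Jiang-Shu nonlinear weights; this sketch shows the mechanism only and is not the paper's new central-upwind scheme or its simplified smoothness indicator.

      import numpy as np

      def weno_weights(betas, d, eps=1e-6):
          # Classic nonlinear WENO weights: a convex combination that reverts
          # to the linear weights d in smooth regions and suppresses stencils
          # with large smoothness indicators betas.
          alphas = d / (eps + betas) ** 2
          return alphas / alphas.sum()

      # Smooth data: weights stay near d. A jump in one stencil's beta drives
      # its weight toward zero while the weights still sum to one.
      d = np.array([0.1, 0.2, 0.3, 0.4])
      print(weno_weights(np.array([1e-8, 1e-8, 1e-8, 1e-8]), d))
      print(weno_weights(np.array([1e-8, 1.0, 1e-8, 1e-8]), d))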

  1. The Band around a Convex Body

    ERIC Educational Resources Information Center

    Swanson, David

    2011-01-01

    We give elementary proofs of formulas for the area and perimeter of a planar convex body surrounded by a band of uniform thickness. The primary tool is an integral formula for the perimeter of a convex body which describes the perimeter in terms of the projections of the body onto lines in the plane.
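
    The formulas in question are presumably the planar parallel-body (Steiner-type) identities; for orientation, for a convex body K with area A and perimeter P surrounded by a band of uniform thickness d, they read:

      \[
        A(K_d) = A + P\,d + \pi d^{2}, \qquad
        P(K_d) = P + 2\pi d,
      \]

    so the band itself has area Pd + πd² and the outer boundary gains exactly 2πd of perimeter, independent of the shape of K.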

  2. Energy access and living standards: some observations on recent trends

    NASA Astrophysics Data System (ADS)

    Rao, Narasimha D.; Pachauri, Shonali

    2017-02-01

    A subset of Sustainable Development Goals pertains to improving people’s living standards at home. These include the provision of access to electricity, clean cooking energy, improved water and sanitation. We examine historical progress in energy access in relation to other living standards. We assess regional patterns in the pace of progress and relative priority accorded to these different services. Countries in sub-Saharan Africa would have to undergo unprecedented rates of improvement in energy access in order to achieve the goal of universal electrification by 2030. World over, access to clean cooking fuels and sanitation facilities consistently lag improved water and electricity access by a large margin. These two deprivations are more concentrated among poor countries, and poor people in middle income countries. They are also correlated to health risks faced disproportionately by women. However, some Asian countries have been able to achieve faster progress in electrification at lower income levels compared to industrialized countries’ earlier efforts. These examples offer hope that future efforts need not be constrained by historical rates of progress.

  3. A path following algorithm for the graph matching problem.

    PubMed

    Zaslavskiy, Mikhail; Bach, Francis; Vert, Jean-Philippe

    2009-12-01

    We propose a convex-concave programming approach for the labeled weighted graph matching problem. The convex-concave programming formulation is obtained by rewriting the weighted graph matching problem as a least-squares problem on the set of permutation matrices and relaxing it to two different optimization problems: a quadratic convex and a quadratic concave optimization problem on the set of doubly stochastic matrices. The concave relaxation has the same global minimum as the initial graph matching problem, but the search for its global minimum is also a hard combinatorial problem. We, therefore, construct an approximation of the concave problem solution by following a solution path of a convex-concave problem obtained by linear interpolation of the convex and concave formulations, starting from the convex relaxation. This method makes it easy to integrate the information on graph label similarities into the optimization problem, and therefore to perform labeled weighted graph matching. The algorithm is compared with some of the best performing graph matching methods on four data sets: simulated graphs, QAPLib, retina vessel images, and handwritten Chinese characters. In all cases, the results are competitive with the state of the art.
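
    A heavily simplified structural sketch of the path-following idea: the objective interpolates from a convex relaxation to a concave function whose minima over the Birkhoff polytope are permutation matrices, while a Sinkhorn scaling stands in for the exact projection onto doubly stochastic matrices. The concave term below is a generic stand-in, not the paper's relaxation, and the step sizes are arbitrary.

      import numpy as np

      def sinkhorn(M, iters=50):
          # Alternating row/column normalization: a cheap surrogate for the
          # exact Euclidean projection onto doubly stochastic matrices.
          P = np.maximum(M, 1e-12)
          for _ in range(iters):
              P /= P.sum(axis=1, keepdims=True)
              P /= P.sum(axis=0, keepdims=True)
          return P

      def path_following_match(A, B, steps=20, inner=100, lr=1e-2):
          # Minimize (1 - lam) * F0 + lam * F1 over (approximately) doubly
          # stochastic P while lam sweeps 0 -> 1. F0 = ||AP - PB||_F^2 is the
          # convex relaxation; F1 = -||P||_F^2 is a concave surrogate that is
          # minimized at the extreme points, i.e. permutation matrices.
          n = A.shape[0]
          P = np.full((n, n), 1.0 / n)            # barycenter start
          for lam in np.linspace(0.0, 1.0, steps):
              for _ in range(inner):
                  R = A @ P - P @ B
                  g = (1 - lam) * 2 * (A.T @ R - R @ B.T) + lam * (-2 * P)
                  P = sinkhorn(P - lr * g)
          return P.argmax(axis=1)                 # round to a permutation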

  4. Four-mirror extreme ultraviolet (EUV) lithography projection system

    DOEpatents

    Cohen, Simon J; Jeong, Hwan J; Shafer, David R

    2000-01-01

    The invention is directed to a four-mirror catoptric projection system for extreme ultraviolet (EUV) lithography to transfer a pattern from a reflective reticle to a wafer substrate. In order along the light path followed by light from the reticle to the wafer substrate, the system includes a dominantly hyperbolic convex mirror, a dominantly elliptical concave mirror, spherical convex mirror, and spherical concave mirror. The reticle and wafer substrate are positioned along the system's optical axis on opposite sides of the mirrors. The hyperbolic and elliptical mirrors are positioned on the same side of the system's optical axis as the reticle, and are relatively large in diameter as they are positioned on the high magnification side of the system. The hyperbolic and elliptical mirrors are relatively far off the optical axis and hence they have significant aspherical components in their curvatures. The convex spherical mirror is positioned on the optical axis, and has a substantially or perfectly spherical shape. The spherical concave mirror is positioned substantially on the opposite side of the optical axis from the hyperbolic and elliptical mirrors. Because it is positioned off-axis to a degree, the spherical concave mirror has some asphericity to counter aberrations. The spherical concave mirror forms a relatively large, uniform field on the wafer substrate. The mirrors can be tilted or decentered slightly to achieve further increase in the field size.

  5. Nested Conjugate Gradient Algorithm with Nested Preconditioning for Non-linear Image Restoration.

    PubMed

    Skariah, Deepak G; Arigovindan, Muthuvel

    2017-06-19

    We develop a novel optimization algorithm, which we call the Nested Non-Linear Conjugate Gradient algorithm (NNCG), for image restoration based on quadratic data fitting and smooth non-quadratic regularization. The algorithm is constructed as a nesting of two conjugate gradient (CG) iterations. The outer iteration is constructed as a preconditioned non-linear CG algorithm; the preconditioning is performed by the inner CG iteration, which is linear. The inner CG iteration, which performs preconditioning for the outer CG iteration, is itself accelerated by another FFT-based non-iterative preconditioner. We prove that the method converges to a stationary point for both convex and non-convex regularization functionals. We demonstrate experimentally that the proposed method outperforms the well-known majorization-minimization method used for convex regularization, and a non-convex inertial-proximal method for non-convex regularization functionals.
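
    The nesting itself can be sketched as follows: an outer preconditioned nonlinear CG whose preconditioner is applied by an inner linear CG solve (SciPy's cg). The paper's FFT-based inner preconditioner and its line search are omitted, the fixed step is a placeholder, and the operator M (e.g. the quadratic data-term Hessian) is whatever the caller supplies.

      import numpy as np
      from scipy.sparse.linalg import LinearOperator, cg

      def nested_ncg(grad, M, x0, outer_iters=50, inner_iters=10, step=1e-2):
          # Outer: Polak-Ribiere nonlinear CG. Preconditioning: inner linear
          # CG approximately solves M z = r each outer step.
          n = x0.size
          Mop = LinearOperator((n, n), matvec=M)
          def apply_prec(r):
              z, _ = cg(Mop, r, maxiter=inner_iters)
              return z
          x = x0.copy()
          r = -grad(x)
          z = apply_prec(r)
          d = z.copy()
          for _ in range(outer_iters):
              x = x + step * d               # fixed step stands in for line search
              r_new = -grad(x)
              z_new = apply_prec(r_new)
              beta = max(0.0, z_new @ (r_new - r) / (z @ r))  # Polak-Ribiere+
              d = z_new + beta * d
              r, z = r_new, z_new
          return x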

  6. Convex set and linear mixing model

    NASA Technical Reports Server (NTRS)

    Xu, P.; Greeley, R.

    1993-01-01

    A major goal of optical remote sensing is to determine surface compositions of the earth and other planetary objects. For assessment of composition, single pixels in multi-spectral images usually record a mixture of the signals from various materials within the corresponding surface area. In this report, we introduce a closed and bounded convex set as a mathematical model for linear mixing. This model has a clear geometric implication because the closed and bounded convex set is a natural generalization of a triangle in n-space. The endmembers are extreme points of the convex set. Every point in the convex closure of the endmembers is a linear mixture of those endmembers, which is exactly how linear mixing is defined. With this model, some general criteria for selecting endmembers could be described. This model can lead to a better understanding of linear mixing models.
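
    The model suggests a direct unmixing computation: project a pixel spectrum onto the convex hull of the endmembers, i.e. find nonnegative fractions summing to one. A sketch with cvxpy, on made-up endmembers:

      import cvxpy as cp
      import numpy as np

      def unmix(E, y):
          # Fractional abundances of endmembers E (bands x p) for a pixel
          # spectrum y, constrained to the convex hull of the endmembers.
          p = E.shape[1]
          f = cp.Variable(p, nonneg=True)
          prob = cp.Problem(cp.Minimize(cp.sum_squares(E @ f - y)),
                            [cp.sum(f) == 1])
          prob.solve()
          return f.value

      # Hypothetical three-endmember example in a 6-band space.
      rng = np.random.default_rng(2)
      E = rng.random((6, 3))
      y = E @ np.array([0.5, 0.3, 0.2])   # a known mixture
      print(unmix(E, y))                  # should recover ~[0.5, 0.3, 0.2]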

  7. Convergence of generalized MUSCL schemes

    NASA Technical Reports Server (NTRS)

    Osher, S.

    1984-01-01

    Semi-discrete generalizations of the second order extension of Godunov's scheme, known as the MUSCL scheme, are constructed, starting with any three point E scheme. They are used to approximate scalar conservation laws in one space dimension. For convex conservation laws, each member of a wide class is proven to be a convergent approximation to the correct physical solution. Comparison with another class of high resolution convergent schemes is made.
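
    One concrete member of the MUSCL family, for orientation: second-order reconstruction of interface states with the minmod limiter. This is a standard instance, not the paper's generalized E-scheme construction.

      import numpy as np

      def minmod(a, b):
          # Slope limiter: zero at extrema, otherwise the smaller slope.
          return np.where(a * b > 0, np.where(np.abs(a) < np.abs(b), a, b), 0.0)

      def muscl_interface_values(u):
          # Limited linear reconstruction inside each interior cell from the
          # cell averages u; returns values at the right/left cell faces.
          du = minmod(u[1:-1] - u[:-2], u[2:] - u[1:-1])   # limited slopes
          u_right_face = u[1:-1] + 0.5 * du
          u_left_face = u[1:-1] - 0.5 * du
          return u_left_face, u_right_face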

  8. Anatomical study of the pelvis in patients with adolescent idiopathic scoliosis

    PubMed Central

    Qiu, Xu-Sheng; Zhang, Jun-Jie; Yang, Shang-Wen; Lv, Feng; Wang, Zhi-Wei; Chiew, Jonathan; Ma, Wei-Wei; Qiu, Yong

    2012-01-01

    Standing posterior–anterior (PA) radiographs from our clinical practice show that the concave and convex ilia are not always symmetrical in patients with adolescent idiopathic scoliosis (AIS). Transverse pelvic rotation may explain this observation, or pelvic asymmetry may be responsible. The present study investigated pelvic symmetry by examining the volume and linear measurements of the two hipbones in patients with AIS. Forty-two female patients with AIS were recruited for the study. Standing PA radiographs (covering the thoracic and lumbar spinal regions and the entire pelvis), CT scans and 3D reconstructions of the pelvis were obtained for all subjects. The concave/convex ratio of the inferior ilium at the sacroiliac joint medially (SI) and the anterior superior iliac spine laterally (ASIS) were measured on PA radiographs. Hipbone volumes and several distortion and abduction parameters were measured by post-processing software. The concave/convex ratio of SI–ASIS on PA radiographs was 0.97, which was significantly < 1 (P < 0.001). The concave and convex hipbone volumes were comparable in patients with AIS. The hipbone volumes were 257.3 ± 43.5 cm3 and 256.9 ± 42.6 cm3 at the concave and convex sides, respectively (P > 0.05). Furthermore, all distortion and abduction parameters were comparable between the convex and concave sides. Therefore, the present study showed that there was no pelvic asymmetry in patients with AIS, although the concave/convex ratio of SI–ASIS on PA radiographs was significantly < 1. The clinical phenomenon of asymmetrical concave and convex ilia in patients with AIS in preoperative standing PA radiographs may be caused by transverse pelvic rotation, but it is not due to developmental asymmetry or distortion of the pelvis. PMID:22133294

  9. Anatomical study of the pelvis in patients with adolescent idiopathic scoliosis.

    PubMed

    Qiu, Xu-Sheng; Zhang, Jun-Jie; Yang, Shang-Wen; Lv, Feng; Wang, Zhi-Wei; Chiew, Jonathan; Ma, Wei-Wei; Qiu, Yong

    2012-02-01

    Standing posterior-anterior (PA) radiographs from our clinical practice show that the concave and convex ilia are not always symmetrical in patients with adolescent idiopathic scoliosis (AIS). Transverse pelvic rotation may explain this observation, or pelvic asymmetry may be responsible. The present study investigated pelvic symmetry by examining the volume and linear measurements of the two hipbones in patients with AIS. Forty-two female patients with AIS were recruited for the study. Standing PA radiographs (covering the thoracic and lumbar spinal regions and the entire pelvis), CT scans and 3D reconstructions of the pelvis were obtained for all subjects. The concave/convex ratio of the inferior ilium at the sacroiliac joint medially (SI) and the anterior superior iliac spine laterally (ASIS) were measured on PA radiographs. Hipbone volumes and several distortion and abduction parameters were measured by post-processing software. The concave/convex ratio of SI-ASIS on PA radiographs was 0.97, which was significantly < 1 (P < 0.001). The concave and convex hipbone volumes were comparable in patients with AIS. The hipbone volumes were 257.3 ± 43.5 cm3 and 256.9 ± 42.6 cm3 at the concave and convex sides, respectively (P > 0.05). Furthermore, all distortion and abduction parameters were comparable between the convex and concave sides. Therefore, the present study showed that there was no pelvic asymmetry in patients with AIS, although the concave/convex ratio of SI-ASIS on PA radiographs was significantly < 1. The clinical phenomenon of asymmetrical concave and convex ilia in patients with AIS in preoperative standing PA radiographs may be caused by transverse pelvic rotation, but it is not due to developmental asymmetry or distortion of the pelvis. © 2011 The Authors. Journal of Anatomy © 2011 Anatomical Society.

  10. On the convexity of ROC curves estimated from radiological test results

    PubMed Central

    Pesce, Lorenzo L.; Metz, Charles E.; Berbaum, Kevin S.

    2010-01-01

    Rationale and Objectives Although an ideal observer’s receiver operating characteristic (ROC) curve must be convex — i.e., its slope must decrease monotonically — published fits to empirical data often display “hooks.” Such fits sometimes are accepted on the basis of an argument that experiments are done with real, rather than ideal, observers. However, the fact that ideal observers must produce convex curves does not imply that convex curves describe only ideal observers. This paper aims to identify the practical implications of non-convex ROC curves and the conditions that can lead to empirical and/or fitted ROC curves that are not convex. Materials and Methods This paper views non-convex ROC curves from historical, theoretical and statistical perspectives, which we describe briefly. We then consider population ROC curves with various shapes and analyze the types of medical decisions that they imply. Finally, we describe how sampling variability and curve-fitting algorithms can produce ROC curve estimates that include hooks. Results We show that hooks in population ROC curves imply the use of an irrational decision strategy, even when the curve doesn’t cross the chance line, and therefore usually are untenable in medical settings. Moreover, we sketch a simple approach to improve any non-convex ROC curve by adding statistical variation to the decision process. Finally, we sketch how to test whether hooks present in ROC data are likely to have been caused by chance alone and how some hooked ROCs found in the literature can be easily explained as fitting artifacts or modeling issues. Conclusion In general, ROC curve fits that show hooks should be looked upon with suspicion unless other arguments justify their presence. PMID:20599155
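
    The repair of a hooked curve has a simple computational counterpart: taking the upper convex hull of the empirical operating points, which corresponds to randomizing between decision thresholds. A sketch with hypothetical ROC points, one of which creates a hook:

        def _cross(o, a, b):
            return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

        def roc_convex_hull(fpr, tpr):
            # Upper concave envelope of the ROC points: slopes must strictly decrease.
            hull = []
            for p in sorted(zip(fpr, tpr)):
                while len(hull) >= 2 and _cross(hull[-2], hull[-1], p) >= 0:
                    hull.pop()   # drop points on or below a chord (the "hooks")
                hull.append(p)
            return hull

        # The point (0.4, 0.45) lies below the chord from (0.2, 0.5) to (1, 1):
        print(roc_convex_hull([0, 0.2, 0.4, 1], [0, 0.5, 0.45, 1]))
        # -> [(0, 0), (0.2, 0.5), (1, 1)]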

  11. Investigations into the shape-preserving interpolants using symbolic computation

    NASA Technical Reports Server (NTRS)

    Lam, Maria

    1988-01-01

    Shape representation is a central issue in computer graphics and computer-aided geometric design. Many physical phenomena involve curves and surfaces that are monotone (in some directions) or are convex. The corresponding representation problem is: given monotone or convex data, find a monotone or convex interpolant. Standard interpolants need not be monotone or convex even though they may match monotone or convex data. Most methods of investigating this problem use quadratic splines or Hermite polynomials. In this investigation, a similar approach is adopted. These methods require derivative information at the given data points. The key to the problem is the selection of the derivative values to be assigned to the given data points. Schemes for choosing derivatives were examined. Along the way, fitting given data points by a conic section has also been investigated as part of the effort to study shape-preserving quadratic splines.
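
    A widely available interpolant in the same spirit, which also works by choosing derivative values at the data points so that monotone data yield a monotone curve, is PCHIP (this is a standard SciPy routine, not the report's scheme):

        import numpy as np
        from scipy.interpolate import PchipInterpolator

        x = np.array([0.0, 1.0, 2.0, 3.0])
        y = np.array([0.0, 0.1, 0.9, 1.0])   # monotone data
        f = PchipInterpolator(x, y)          # derivative choice preserves monotonicity
        print(f(np.linspace(0, 3, 7)))       # no overshoot, unlike an unconstrained cubic spline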

  12. Congruency effects in dot comparison tasks: convex hull is more important than dot area.

    PubMed

    Gilmore, Camilla; Cragg, Lucy; Hogan, Grace; Inglis, Matthew

    2016-11-16

    The dot comparison task, in which participants select the more numerous of two dot arrays, has become the predominant method of assessing Approximate Number System (ANS) acuity. Creation of the dot arrays requires the manipulation of visual characteristics, such as dot size and convex hull. For the task to provide a valid measure of ANS acuity, participants must ignore these characteristics and respond on the basis of number. Here, we report two experiments that explore the influence of dot area and convex hull on participants' accuracy on dot comparison tasks. We found that individuals' ability to ignore dot area information increases with age and display time. However, the influence of convex hull information remains stable across development and with additional time. This suggests that convex hull information is more difficult to inhibit when making judgements about numerosity and therefore it is crucial to control this when creating dot comparison tasks.
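
    Both visual characteristics are straightforward to compute when constructing stimuli, so they can be controlled independently of numerosity; a sketch with hypothetical dot coordinates and a fixed dot radius:

        import numpy as np
        from scipy.spatial import ConvexHull

        rng = np.random.default_rng(0)
        dots = rng.uniform(0, 100, size=(20, 2))   # hypothetical dot-centre coordinates
        radius = 2.0                               # equal-size dots

        hull_area = ConvexHull(dots).volume        # in 2-D, .volume is the hull area
        dot_area = len(dots) * np.pi * radius**2   # total (cumulative) dot area
        print(hull_area, dot_area)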

  13. Space ultra-vacuum facility and method of operation

    NASA Technical Reports Server (NTRS)

    Naumann, Robert J. (Inventor)

    1988-01-01

    A wake shield space processing facility (10) for maintaining ultra-high levels of vacuum is described. The wake shield (12) is a truncated hemispherical section having a convex side (14) and a concave side (24). Material samples (68) to be processed are located on the convex side of the shield, which faces the wake direction during operation in orbit. Necessary processing fixtures (20) and (22) are also located on the convex side. Support equipment, including power supplies (40, 42), a CMG package (46) and an electronic control package (44), is located on the concave side (24) of the shield, facing the ram direction. Prior to operation in orbit, the wake shield is oriented in reverse, with the convex side facing the ram direction, to provide cleaning by exposure to ambient atomic oxygen. The shield is then baked out by being pointed directly at the sun to obtain heating for a suitable period.

  14. Dissecting the genetic heterogeneity of myopia susceptibility in an Ashkenazi Jewish population using ordered subset analysis

    PubMed Central

    Simpson, Claire L.; Wojciechowski, Robert; Ibay, Grace; Stambolian, Dwight

    2011-01-01

    Purpose Despite many years of research, most of the genetic factors contributing to myopia development remain unknown. Genetic studies have pointed to a strong inherited component, but although many candidate regions have been implicated, few genes have been positively identified. Methods We have previously reported 2 genomewide linkage scans in a population of 63 highly aggregated Ashkenazi Jewish families that identified a locus on chromosome 22. Here we used ordered subset analysis (OSA), conditioned on non-parametric linkage to chromosome 22 to detect other chromosomal regions which had evidence of linkage to myopia in subsets of the families, but not the overall sample. Results Strong evidence of linkage to a 19-cM linkage interval with a peak OSA nonparametric allele-sharing logarithm-of-odds (LOD) score of 3.14 on 20p12-q11.1 (ΔLOD=2.39, empirical p=0.029) was identified in a subset of 20 families that also exhibited strong evidence of linkage to chromosome 22. One other locus also presented with suggestive LOD scores >2.0 on chromosome 11p14-q14 and one locus on chromosome 6q22-q24 had an OSA LOD score=1.76 (ΔLOD=1.65, empirical p=0.02). Conclusions The chromosome 6 and 20 loci are entirely novel and appear linked in a subset of families whose myopia is known to be linked to chromosome 22. The chromosome 11 locus overlaps with the known Myopia-7 (MYP7, OMIM 609256) locus. Using ordered subset analysis allows us to find additional loci linked to myopia in subsets of families, and underlines the complex genetic heterogeneity of myopia even in highly aggregated families and genetically isolated populations such as the Ashkenazi Jews. PMID:21738393

  15. Calculating and controlling the error of discrete representations of Pareto surfaces in convex multi-criteria optimization.

    PubMed

    Craft, David

    2010-10-01

    A discrete set of points and their convex combinations can serve as a sparse representation of the Pareto surface in multiple objective convex optimization. We develop a method to evaluate the quality of such a representation, and show by example that in multiple objective radiotherapy planning, the number of Pareto optimal solutions needed to represent Pareto surfaces of up to five dimensions grows at most linearly with the number of objectives. The method described is also applicable to the representation of convex sets. Copyright © 2009 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.

  16. Inhibitory competition in figure-ground perception: context and convexity.

    PubMed

    Peterson, Mary A; Salvagio, Elizabeth

    2008-12-15

    Convexity has long been considered a potent cue as to which of two regions on opposite sides of an edge is the shaped figure. Experiment 1 shows that for a single edge, there is only a weak bias toward seeing the figure on the convex side. Experiments 1-3 show that the bias toward seeing the convex side as figure increases as the number of edges delimiting alternating convex and concave regions increases, provided that the concave regions are homogeneous in color. The results of Experiments 2 and 3 rule out a probability summation explanation for these context effects. Taken together, the results of Experiments 1-3 show that the homogeneity versus heterogeneity of the convex regions is irrelevant. Experiment 4 shows that homogeneity of alternating regions is not sufficient for context effects; a cue that favors the perception of the intervening regions as figures is necessary. Thus homogeneity alone does not operate as a background cue. We interpret our results within a model of figure-ground perception in which shape properties on opposite sides of an edge compete for representation and the competitive strength of weak competitors is further reduced when they are homogeneous.

  17. Natural-Scene Statistics Predict How the Figure–Ground Cue of Convexity Affects Human Depth Perception

    PubMed Central

    Fowlkes, Charless C.; Banks, Martin S.

    2010-01-01

    The shape of the contour separating two regions strongly influences judgments of which region is “figure” and which is “ground.” Convexity and other figure–ground cues are generally assumed to indicate only which region is nearer, but nothing about how much the regions are separated in depth. To determine the depth information conveyed by convexity, we examined natural scenes and found that depth steps across surfaces with convex silhouettes are likely to be larger than steps across surfaces with concave silhouettes. In a psychophysical experiment, we found that humans exploit this correlation. For a given binocular disparity, observers perceived more depth when the near surface's silhouette was convex rather than concave. We estimated the depth distributions observers used in making those judgments: they were similar to the natural-scene distributions. Our findings show that convexity should be reclassified as a metric depth cue. They also suggest that the dichotomy between metric and nonmetric depth cues is false and that the depth information provided by many cues should be evaluated with respect to natural-scene statistics. Finally, the findings provide an explanation for why figure–ground cues modulate the responses of disparity-sensitive cells in visual cortex. PMID:20505093

  18. Statistical estimation via convex optimization for trending and performance monitoring

    NASA Astrophysics Data System (ADS)

    Samar, Sikandar

    This thesis presents an optimization-based statistical estimation approach to find unknown trends in noisy data. A Bayesian framework is used to explicitly take into account prior information about the trends via trend models and constraints. The main focus is on convex formulation of the Bayesian estimation problem, which allows efficient computation of (globally) optimal estimates. There are two main parts of this thesis. The first part formulates trend estimation in systems described by known detailed models as a convex optimization problem. Statistically optimal estimates are then obtained by maximizing a concave log-likelihood function subject to convex constraints. We consider the problem of increasing problem dimension as more measurements become available, and introduce a moving horizon framework to enable recursive estimation of the unknown trend by solving a fixed-size convex optimization problem at each horizon. We also present a distributed estimation framework, based on the dual decomposition method, for a system formed by a network of complex sensors with local (convex) estimation. Two specific applications of the convex optimization-based Bayesian estimation approach are described in the second part of the thesis. Batch estimation for parametric diagnostics in a flight control simulation of a space launch vehicle is shown to detect incipient fault trends despite the natural masking properties of feedback in the guidance and control loops. The moving horizon approach is used to estimate time-varying fault parameters in a detailed nonlinear simulation model of an unmanned aerial vehicle. Excellent performance is demonstrated in the presence of winds and turbulence.

  19. An enhanced SOCP-based method for feeder load balancing using the multi-terminal soft open point in active distribution networks

    DOE PAGES

    Ji, Haoran; Wang, Chengshan; Li, Peng; ...

    2017-09-20

    The integration of distributed generators (DGs) exacerbates the feeder power flow fluctuation and load unbalanced condition in active distribution networks (ADNs). The unbalanced feeder load causes inefficient use of network assets and network congestion during system operation. The flexible interconnection based on the multi-terminal soft open point (SOP) significantly benefits the operation of ADNs. The multi-terminal SOP, which is a controllable power electronic device installed to replace the normally open point, provides accurate active and reactive power flow control to enable the flexible connection of feeders. An enhanced SOCP-based method for feeder load balancing using the multi-terminal SOP is proposed in this paper. Furthermore, by regulating the operation of the multi-terminal SOP, the proposed method can mitigate the unbalanced condition of feeder load and simultaneously reduce the power losses of ADNs. Then, the original non-convex model is converted into a second-order cone programming (SOCP) model using convex relaxation. In order to tighten the SOCP relaxation and improve the computation efficiency, an enhanced SOCP-based approach is developed to solve the proposed model. Finally, case studies are performed on the modified IEEE 33-node system to verify the effectiveness and efficiency of the proposed method.

  20. Practical Aspects of Stabilized FEM Discretizations of Nonlinear Conservation Law Systems with Convex Extension

    NASA Technical Reports Server (NTRS)

    Barth, Timothy; Saini, Subhash (Technical Monitor)

    1999-01-01

    This talk considers simplified finite element discretization techniques for first-order systems of conservation laws equipped with a convex (entropy) extension. Using newly developed techniques in entropy symmetrization theory, simplified forms of the Galerkin least-squares (GLS) and the discontinuous Galerkin (DG) finite element method have been developed and analyzed. The use of symmetrization variables yields numerical schemes which inherit global entropy stability properties of the PDE system. Central to the development of the simplified GLS and DG methods is the Eigenvalue Scaling Theorem, which characterizes right symmetrizers of an arbitrary first-order hyperbolic system in terms of scaled eigenvectors of the corresponding flux Jacobian matrices. A constructive proof is provided for the Eigenvalue Scaling Theorem with detailed consideration given to the Euler, Navier-Stokes, and magnetohydrodynamic (MHD) equations. Linear and nonlinear energy stability is proven for the simplified GLS and DG methods. Spatial convergence properties of the simplified GLS and DG methods are numerically evaluated via the computation of Ringleb flow on a sequence of successively refined triangulations. Finally, we consider a posteriori error estimates for the GLS and DG discretizations assuming error functionals related to the integrated lift and drag of a body. Sample calculations in 2D are shown to validate the theory and implementation.

  1. An enhanced SOCP-based method for feeder load balancing using the multi-terminal soft open point in active distribution networks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ji, Haoran; Wang, Chengshan; Li, Peng

    The integration of distributed generators (DGs) exacerbates the feeder power flow fluctuation and load unbalanced condition in active distribution networks (ADNs). The unbalanced feeder load causes inefficient use of network assets and network congestion during system operation. The flexible interconnection based on the multi-terminal soft open point (SOP) significantly benefits the operation of ADNs. The multi-terminal SOP, which is a controllable power electronic device installed to replace the normally open point, provides accurate active and reactive power flow control to enable the flexible connection of feeders. An enhanced SOCP-based method for feeder load balancing using the multi-terminal SOP is proposed in this paper. Furthermore, by regulating the operation of the multi-terminal SOP, the proposed method can mitigate the unbalanced condition of feeder load and simultaneously reduce the power losses of ADNs. Then, the original non-convex model is converted into a second-order cone programming (SOCP) model using convex relaxation. In order to tighten the SOCP relaxation and improve the computation efficiency, an enhanced SOCP-based approach is developed to solve the proposed model. Finally, case studies are performed on the modified IEEE 33-node system to verify the effectiveness and efficiency of the proposed method.

  2. Graph Design via Convex Optimization: Online and Distributed Perspectives

    NASA Astrophysics Data System (ADS)

    Meng, De

    Network and graph have long been natural abstractions of relations in a variety of applications, e.g. transportation, power systems, social networks, communication, electrical circuits, etc. As a large number of computation and optimization problems are naturally defined on graphs, graph structures not only enable important properties of these problems, but also lead to highly efficient distributed and online algorithms. For example, graph separability enables parallelism in computation and operation and limits the size of local problems. More interestingly, graphs can be defined and constructed to take best advantage of those problem properties. This dissertation focuses on graph structure and design in newly proposed optimization problems, which establish a bridge between graph properties and optimization problem properties. We first study a new optimization problem called the Geodesic Distance Maximization Problem (GDMP). Given a graph with fixed edge weights, finding the shortest path, also known as the geodesic, between two nodes is a well-studied network flow problem. We introduce the Geodesic Distance Maximization Problem (GDMP): the problem of finding the edge weights that maximize the length of the geodesic subject to convex constraints on the weights. We show that GDMP is a convex optimization problem for a wide class of flow costs, and provide a physical interpretation using the dual. We present applications of the GDMP in various fields, including optical lens design, network interdiction, and resource allocation in the control of forest fires. We develop an Alternating Direction Method of Multipliers (ADMM) by exploiting specific problem structures to solve large-scale GDMP, and demonstrate its effectiveness in numerical examples. We then turn our attention to distributed optimization on graphs with only local communication. Distributed optimization arises in a variety of applications, e.g. distributed tracking and localization, estimation problems in sensor networks, and multi-agent coordination. Distributed optimization aims to optimize a global objective function formed by a summation of coupled local functions over a graph via only local communication and computation. We develop a weighted proximal ADMM for distributed optimization using graph structure. This fully distributed, single-loop algorithm allows simultaneous updates and can be viewed as a generalization of existing algorithms. More importantly, we achieve faster convergence by jointly designing graph weights and algorithm parameters. Finally, we propose a new problem on networks called the Online Network Formation Problem: starting with a base graph and a set of candidate edges, at each round of the game, player one first chooses a candidate edge and reveals it to player two; player two then decides whether to accept it. Player two can only accept a limited number of edges and makes online decisions with the goal of achieving the best properties of the synthesized network. The network properties considered include the number of spanning trees, algebraic connectivity and total effective resistance. These network formation games arise in a variety of cooperative multiagent systems. We propose a primal-dual algorithm framework for the general online network formation game, and analyze the algorithm performance by the competitive ratio and regret.

  3. Bayesian network classifiers for categorizing cortical GABAergic interneurons.

    PubMed

    Mihaljević, Bojan; Benavides-Piccione, Ruth; Bielza, Concha; DeFelipe, Javier; Larrañaga, Pedro

    2015-04-01

    An accepted classification of GABAergic interneurons of the cerebral cortex is a major goal in neuroscience. A recently proposed taxonomy based on patterns of axonal arborization promises to be a pragmatic method for achieving this goal. It involves characterizing interneurons according to five axonal arborization features, called F1-F5, and classifying them into a set of predefined types, most of which are established in the literature. Unfortunately, there is little consensus among expert neuroscientists regarding the morphological definitions of some of the proposed types. While supervised classifiers were able to categorize the interneurons in accordance with experts' assignments, their accuracy was limited because they were trained with disputed labels. Thus, here we automatically classify interneuron subsets with different label reliability thresholds (i.e., such that every cell's label is backed by at least a certain (threshold) number of experts). We quantify the cells with parameters of axonal and dendritic morphologies and, in order to predict the type, also with axonal features F1-F4 provided by the experts. Using Bayesian network classifiers, we accurately characterize and classify the interneurons and identify useful predictor variables. In particular, we discriminate among reliable examples of common basket, horse-tail, large basket, and Martinotti cells with up to 89.52% accuracy, and single out the number of branches at 180 μm from the soma, the convex hull 2D area, and the axonal features F1-F4 as especially useful predictors for distinguishing among these types. These results open up new possibilities for an objective and pragmatic classification of interneurons.

  4. On equivalent characterizations of convexity of functions

    NASA Astrophysics Data System (ADS)

    Gkioulekas, Eleftherios

    2013-04-01

    A detailed development of the theory of convex functions, not often found in complete form in most textbooks, is given. We adopt the strict secant line definition as the definitive definition of convexity. We then show that for differentiable functions, this definition becomes logically equivalent with the first derivative monotonicity definition and the tangent line definition. Consequently, for differentiable functions, all three characterizations are logically equivalent.
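
    For a differentiable f on an interval, the three characterizations referred to can be stated compactly (here in the non-strict form; the paper works with the strict secant variant):

        \begin{aligned}
        &\text{(secant line)}         && f\bigl(\lambda x + (1-\lambda)y\bigr) \le \lambda f(x) + (1-\lambda) f(y)
                                         \quad \text{for all } x, y \text{ and } \lambda \in [0,1],\\
        &\text{(monotone derivative)} && x < y \;\Longrightarrow\; f'(x) \le f'(y),\\
        &\text{(tangent line)}        && f(y) \ge f(x) + f'(x)\,(y - x) \quad \text{for all } x, y.
        \end{aligned}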

  5. Efficient Convex Optimization for Energy-Based Acoustic Sensor Self-Localization and Source Localization in Sensor Networks.

    PubMed

    Yan, Yongsheng; Wang, Haiyan; Shen, Xiaohong; Leng, Bing; Li, Shuangquan

    2018-05-21

    Energy readings are an efficient and attractive measurement for collaborative acoustic source localization in practical applications because of their low cost in both energy and computation. Maximum likelihood problems are derived by fusing received acoustic energy readings transmitted from local sensors. Aiming to efficiently solve the nonconvex objective of the optimization problem, we present an approximate estimator of the original problem. Then, a direct norm relaxation and a semidefinite relaxation, respectively, are utilized to derive second-order cone programming or semidefinite programming formulations, or mixtures of them, for both sensor self-localization and source localization. Furthermore, by taking the colored energy reading noise into account, several minimax optimization problems are formulated, which are likewise relaxed, via the direct norm relaxation and the semidefinite relaxation respectively, into convex optimization problems. A performance comparison with the existing acoustic energy-based source localization methods is given, where the results show the validity of our proposed methods.
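
    Before any relaxation, the nonconvex ML fit can be made concrete with a brute-force surrogate under the idealized isotropic decay model E_i ≈ S/d_i^β with unit gains and white noise (the scenario and all names below are illustrative assumptions, not the paper's estimator):

        import numpy as np

        def energy_ml_grid(sensors, readings, beta=2.0, grid=200):
            # Scan a grid for the source position minimizing the residual of
            # E_i ~ S / d_i**beta; the source power S is eliminated in closed form.
            best, best_cost = None, np.inf
            for x in np.linspace(0, 10, grid):
                for y in np.linspace(0, 10, grid):
                    d = np.linalg.norm(sensors - [x, y], axis=1) ** beta
                    S = np.sum(readings / d) / np.sum(1.0 / d**2)  # least-squares optimal power
                    cost = np.sum((readings - S / d) ** 2)
                    if cost < best_cost:
                        best, best_cost = (x, y), cost
            return best

        sensors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
        readings = 5.0 / np.linalg.norm(sensors - [3.0, 7.0], axis=1) ** 2
        print(energy_ml_grid(sensors, readings))   # close to (3, 7)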

  6. Analysis of Online Composite Mirror Descent Algorithm.

    PubMed

    Lei, Yunwen; Zhou, Ding-Xuan

    2017-03-01

    We study the convergence of the online composite mirror descent algorithm, which involves a mirror map to reflect the geometry of the data and a convex objective function consisting of a loss and a regularizer possibly inducing sparsity. Our error analysis provides convergence rates in terms of properties of the strongly convex differentiable mirror map and the objective function. For a class of objective functions with Hölder continuous gradients, the convergence rates of the excess (regularized) risk under polynomially decaying step sizes have the order [Formula: see text] after [Formula: see text] iterates. Our results improve the existing error analysis for the online composite mirror descent algorithm by avoiding averaging and removing boundedness assumptions, and they sharpen the existing convergence rates of the last iterate for online gradient descent without any boundedness assumptions. Our methodology mainly depends on a novel error decomposition in terms of an excess Bregman distance, refined analysis of self-bounding properties of the objective function, and the resulting one-step progress bounds.
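
    In the Euclidean special case, where the mirror map is half the squared norm and the regularizer is the l1 norm, the composite mirror descent step reduces to a proximal gradient update with the polynomially decaying step sizes studied here; a sketch on synthetic least-squares data (the loss, constants and data are illustrative assumptions):

        import numpy as np

        def soft_threshold(v, t):
            # Proximal map of t * ||.||_1.
            return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

        def online_composite_md(data, lam=0.1, dim=3, power=0.75):
            w = np.zeros(dim)
            for t, (x, y) in enumerate(data, start=1):
                eta = 1.0 / t**power                        # polynomially decaying step size
                grad = (w @ x - y) * x                      # gradient of the squared loss
                w = soft_threshold(w - eta * grad, eta * lam)
            return w                                        # last iterate, no averaging

        rng = np.random.default_rng(0)
        xs = rng.normal(size=(500, 3))
        data = [(x, x @ np.array([1.0, 0.0, -2.0]) + 0.01 * rng.normal()) for x in xs]
        print(online_composite_md(data))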

  7. Efficient Convex Optimization for Energy-Based Acoustic Sensor Self-Localization and Source Localization in Sensor Networks

    PubMed Central

    Yan, Yongsheng; Wang, Haiyan; Shen, Xiaohong; Leng, Bing; Li, Shuangquan

    2018-01-01

    Energy readings are an efficient and attractive measurement for collaborative acoustic source localization in practical applications because of their low cost in both energy and computation. Maximum likelihood problems are derived by fusing received acoustic energy readings transmitted from local sensors. Aiming to efficiently solve the nonconvex objective of the optimization problem, we present an approximate estimator of the original problem. Then, a direct norm relaxation and a semidefinite relaxation, respectively, are utilized to derive second-order cone programming or semidefinite programming formulations, or mixtures of them, for both sensor self-localization and source localization. Furthermore, by taking the colored energy reading noise into account, several minimax optimization problems are formulated, which are likewise relaxed, via the direct norm relaxation and the semidefinite relaxation respectively, into convex optimization problems. A performance comparison with the existing acoustic energy-based source localization methods is given, where the results show the validity of our proposed methods. PMID:29883410

  8. Detection of faults in rotating machinery using periodic time-frequency sparsity

    NASA Astrophysics Data System (ADS)

    Ding, Yin; He, Wangpeng; Chen, Binqiang; Zi, Yanyang; Selesnick, Ivan W.

    2016-11-01

    This paper addresses the problem of extracting periodic oscillatory features in vibration signals for detecting faults in rotating machinery. To extract the feature, we propose an approach in the short-time Fourier transform (STFT) domain where the periodic oscillatory feature manifests itself as a relatively sparse grid. To estimate the sparse grid, we formulate an optimization problem using customized binary weights in the regularizer, where the weights are formulated to promote periodicity. In order to solve the proposed optimization problem, we develop an algorithm called the augmented Lagrangian majorization-minimization algorithm, which combines the split augmented Lagrangian shrinkage algorithm (SALSA) with majorization-minimization (MM), and is guaranteed to converge for both convex and non-convex formulations. As examples, the proposed approach is applied to simulated data, used as a tool for diagnosing faults in bearings and gearboxes on real data, and compared to some state-of-the-art methods. The results show that the proposed approach can effectively detect and extract the periodic oscillatory features.

  9. Phase conjugate Twyman-Green interferometer for testing spherical surfaces and lenses and for measuring refractive indices of liquids or solid transparent materials

    NASA Technical Reports Server (NTRS)

    Shukla, R. P.; Dokhanian, Mostafa; Venkateswarlu, Putcha; George, M. C.

    1990-01-01

    The present paper describes an application of a phase conjugate Twyman-Green interferometer using barium titanate as a self-pumped phase conjugate mirror for testing optical components such as concave and convex spherical mirrors and lenses. The aberrations introduced by the beam splitter while testing concave or convex spherical mirrors of large aperture are automatically eliminated due to the self-focusing property of the phase conjugate mirror. There is no necessity for a good spherical surface as a reference surface, unlike in the classical Twyman-Green interferometer or the Williams interferometer. The phase conjugate Twyman-Green interferometer with divergent illumination can be used as a test plate for checking spherical surfaces. A nondestructive technique for measuring the refractive indices of a Fabry-Perot etalon by using a phase conjugate interferometer is also suggested. The interferometer is found to be useful for measuring the refractive indices of liquids and solid transparent materials with an accuracy of the order of ±0.0004.

  10. Second-Order Two-Sided Estimates in Nonlinear Elliptic Problems

    NASA Astrophysics Data System (ADS)

    Cianchi, Andrea; Maz'ya, Vladimir G.

    2018-05-01

    Best possible second-order regularity is established for solutions to p-Laplacian type equations with p ∈ (1, ∞) and a square-integrable right-hand side. Our results provide a nonlinear counterpart of the classical L²-coercivity theory for linear problems, which is missing in the existing literature. Both local and global estimates are obtained. The latter apply to solutions to either Dirichlet or Neumann boundary value problems. Minimal regularity on the boundary of the domain is required, although our conclusions are new even for smooth domains. If the domain is convex, no regularity of its boundary is needed at all.

  11. Diffractive optical elements on non-flat substrates using electron beam lithography

    NASA Technical Reports Server (NTRS)

    Maker, Paul D. (Inventor); Muller, Richard E. (Inventor); Wilson, Daniel W. (Inventor)

    2002-01-01

    The present disclosure describes a technique for creating diffraction gratings on curved surfaces with electron beam lithography. The curved surface can act as an optical element to produce flat and aberration-free images in imaging spectrometers. In addition, the fabrication technique can modify the power structure of the grating orders so that there is more energy in the first order than for a typical grating. The inventors noticed that by using electron-beam lithography techniques, a variety of convex gratings that are well-suited to the requirements of imaging spectrometers can be manufactured.

  12. Convexity and concavity constants in Lorentz and Marcinkiewicz spaces

    NASA Astrophysics Data System (ADS)

    Kaminska, Anna; Parrish, Anca M.

    2008-07-01

    We provide here the formulas for the q-convexity and q-concavity constants for function and sequence Lorentz spaces associated to either decreasing or increasing weights. This also yields the formula for the q-convexity constants in function and sequence Marcinkiewicz spaces. In this paper we extend and enhance the results from [G.J.O. Jameson, The q-concavity constants of Lorentz sequence spaces and related inequalities, Math. Z. 227 (1998) 129-142] and [A. Kaminska, A.M. Parrish, The q-concavity and q-convexity constants in Lorentz spaces, in: Banach Spaces and Their Applications in Analysis, Conference in Honor of Nigel Kalton, May 2006, Walter de Gruyter, Berlin, 2007, pp. 357-373].

  13. Convexity of quantum χ2-divergence.

    PubMed

    Hansen, Frank

    2011-06-21

    The general quantum χ(2)-divergence has recently been introduced by Temme et al. [Temme K, Kastoryano M, Ruskai M, Wolf M, Verstraete F (2010) J Math Phys 51:122201] and applied to quantum channels (quantum Markov processes). The quantum χ(2)-divergence is not unique, as opposed to the classical χ(2)-divergence, but depends on the choice of quantum statistics. It was noticed that the elements in a particular one-parameter family of quantum χ(2)-divergences are convex functions in the density matrices (ρ,σ), thus mirroring the convexity of the classical χ(2)(p,q)-divergence in probability distributions (p,q). We prove that any quantum χ(2)-divergence is a convex function in its two arguments.

  14. Efficient Controls for Finitely Convergent Sequential Algorithms

    PubMed Central

    Chen, Wei; Herman, Gabor T.

    2010-01-01

    Finding a feasible point that satisfies a set of constraints is a common task in scientific computing: examples are the linear feasibility problem and the convex feasibility problem. Finitely convergent sequential algorithms can be used for solving such problems; an example of such an algorithm is ART3, which is defined in such a way that its control is cyclic in the sense that during its execution it repeatedly cycles through the given constraints. Previously we found a variant of ART3 whose control is no longer cyclic, but which is still finitely convergent and in practice it usually converges faster than ART3 does. In this paper we propose a general methodology for automatic transformation of finitely convergent sequential algorithms in such a way that (i) finite convergence is retained and (ii) the speed of convergence is improved. The first of these two properties is proven by mathematical theorems, the second is illustrated by applying the algorithms to a practical problem. PMID:20953327
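
    The flavor of such feasibility iterations is easy to convey for linear inequalities: cycle through the half-spaces and project onto any violated one (a plain cyclic-control sketch, simpler than ART3 and without its finite-convergence control logic):

        import numpy as np

        def cyclic_projections(A, b, x0, sweeps=50):
            # Repeatedly sweep the constraints a_i . x <= b_i in cyclic order,
            # projecting onto the boundary of any violated half-space.
            x = x0.astype(float)
            for _ in range(sweeps):
                for a, beta in zip(A, b):
                    r = a @ x - beta
                    if r > 0:
                        x -= r * a / (a @ a)
            return x

        A = np.array([[1.0, 1.0], [-1.0, 0.0], [0.0, -1.0]])
        b = np.array([1.0, 0.0, 0.0])
        print(cyclic_projections(A, b, np.array([2.0, 2.0])))   # a feasible point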

  15. Certification trails and software design for testability

    NASA Technical Reports Server (NTRS)

    Sullivan, Gregory F.; Wilson, Dwight S.; Masson, Gerald M.

    1993-01-01

    Design techniques which may be applied to make program testing easier were investigated. Methods for modifying a program to generate additional data which we refer to as a certification trail are presented. This additional data is designed to allow the program output to be checked more quickly and effectively. Certification trails were described primarily from a theoretical perspective. A comprehensive attempt to assess experimentally the performance and overall value of the certification trail method is reported. The method was applied to nine fundamental, well-known algorithms for the following problems: convex hull, sorting, Huffman tree, shortest path, closest pair, line segment intersection, longest increasing subsequence, skyline, and Voronoi diagram. Run-time performance data for each of these problems is given, and selected problems are described in more detail. Our results indicate that there are many cases in which certification trails allow for significantly faster overall program execution time than a 2-version programming approach, and also give further evidence of the breadth of applicability of this method.
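
    The idea is easiest to see for sorting: the program emits a permutation trail alongside its output, and an independent checker verifies the answer in linear time instead of re-sorting (a toy sketch, not one of the paper's nine implementations):

        def sort_with_trail(xs):
            # Trail: for each output position, the index of that element in the input.
            trail = sorted(range(len(xs)), key=xs.__getitem__)
            return [xs[i] for i in trail], trail

        def check(xs, ys, trail):
            # Output must be sorted and must be the permutation of xs named by the trail.
            return (all(ys[i] <= ys[i + 1] for i in range(len(ys) - 1))
                    and sorted(trail) == list(range(len(xs)))
                    and all(ys[k] == xs[i] for k, i in enumerate(trail)))

        ys, trail = sort_with_trail([3, 1, 2])
        assert check([3, 1, 2], ys, trail)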

  16. Stability analysis of switched cellular neural networks: A mode-dependent average dwell time approach.

    PubMed

    Huang, Chuangxia; Cao, Jie; Cao, Jinde

    2016-10-01

    This paper addresses the exponential stability of switched cellular neural networks by using the mode-dependent average dwell time (MDADT) approach. This method is quite different from the traditional average dwell time (ADT) method in permitting each subsystem to have its own average dwell time. Detailed investigations have been carried out for two cases. One is that all subsystems are stable and the other is that stable subsystems coexist with unstable subsystems. By employing Lyapunov functionals, linear matrix inequalities (LMIs), a Jensen-type inequality, the Wirtinger-based inequality and the reciprocally convex approach, we derived some novel and less conservative conditions on exponential stability of the networks. Compared with ADT, the proposed MDADT results show that the minimal dwell time of each subsystem is smaller and the switched system stabilizes faster. The obtained results extend and improve some existing ones. Moreover, the validness and effectiveness of these results are demonstrated through numerical simulations. Copyright © 2016 Elsevier Ltd. All rights reserved.

  17. GPU-based prompt gamma ray imaging from boron neutron capture therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yoon, Do-Kun; Jung, Joo-Young; Suk Suh, Tae, E-mail: suhsanta@catholic.ac.kr

    Purpose: The purpose of this research is to perform the fast reconstruction of a prompt gamma ray image using a graphics processing unit (GPU) computation from boron neutron capture therapy (BNCT) simulations. Methods: To evaluate the accuracy of the reconstructed image, a phantom including four boron uptake regions (BURs) was used in the simulation. After the Monte Carlo simulation of the BNCT, the modified ordered subset expectation maximization reconstruction algorithm using the GPU computation was used to reconstruct the images with fewer projections. The computation times for image reconstruction were compared between the GPU and the central processing unit (CPU). Also, the accuracy of the reconstructed image was evaluated by a receiver operating characteristic (ROC) curve analysis. Results: The image reconstruction time using the GPU was 196 times faster than the conventional reconstruction time using the CPU. For the four BURs, the area under curve values from the ROC curve were 0.6726 (A-region), 0.6890 (B-region), 0.7384 (C-region), and 0.8009 (D-region). Conclusions: The tomographic image using the prompt gamma ray event from the BNCT simulation was acquired using the GPU computation in order to perform a fast reconstruction during treatment. The authors verified the feasibility of the prompt gamma ray image reconstruction using the GPU computation for BNCT simulations.
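
    The ordered-subset EM update at the heart of such reconstructions has a compact generic form: one multiplicative MLEM step per subset of projections, cycling through the subsets (a plain NumPy sketch of textbook OSEM with a synthetic system matrix, not the authors' modified GPU implementation):

        import numpy as np

        def osem(A, y, subsets, iters=10, eps=1e-12):
            # x <- x * (A_S^T (y_S / (A_S x))) / (A_S^T 1), applied per subset S.
            x = np.ones(A.shape[1])
            for _ in range(iters):
                for S in subsets:
                    As = A[S]
                    ratio = y[S] / np.maximum(As @ x, eps)
                    x *= (As.T @ ratio) / np.maximum(As.T @ np.ones(len(S)), eps)
            return x

        A = np.abs(np.random.default_rng(1).normal(size=(8, 4)))   # synthetic projector
        y = A @ np.array([1.0, 2.0, 0.5, 1.5])                     # noiseless projections
        print(osem(A, y, [np.arange(0, 4), np.arange(4, 8)]))      # near [1, 2, 0.5, 1.5]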

  18. TU-FG-BRB-07: GPU-Based Prompt Gamma Ray Imaging From Boron Neutron Capture Therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, S; Suh, T; Yoon, D

    Purpose: The purpose of this research is to perform the fast reconstruction of a prompt gamma ray image using a graphics processing unit (GPU) computation from boron neutron capture therapy (BNCT) simulations. Methods: To evaluate the accuracy of the reconstructed image, a phantom including four boron uptake regions (BURs) was used in the simulation. After the Monte Carlo simulation of the BNCT, the modified ordered subset expectation maximization reconstruction algorithm using the GPU computation was used to reconstruct the images with fewer projections. The computation times for image reconstruction were compared between the GPU and the central processing unit (CPU). Also, the accuracy of the reconstructed image was evaluated by a receiver operating characteristic (ROC) curve analysis. Results: The image reconstruction time using the GPU was 196 times faster than the conventional reconstruction time using the CPU. For the four BURs, the area under curve values from the ROC curve were 0.6726 (A-region), 0.6890 (B-region), 0.7384 (C-region), and 0.8009 (D-region). Conclusion: The tomographic image using the prompt gamma ray event from the BNCT simulation was acquired using the GPU computation in order to perform a fast reconstruction during treatment. The authors verified the feasibility of the prompt gamma ray reconstruction using the GPU computation for BNCT simulations.

  19. Nonconvex model predictive control for commercial refrigeration

    NASA Astrophysics Data System (ADS)

    Gybel Hovgaard, Tobias; Boyd, Stephen; Larsen, Lars F. S.; Bagterp Jørgensen, John

    2013-08-01

    We consider the control of a commercial multi-zone refrigeration system, which consists of several cooling units that share a common compressor and is used to cool multiple areas or rooms. In each time period we choose the cooling capacity of each unit and a common evaporation temperature. The goal is to minimise the total energy cost, using real-time electricity prices, while obeying temperature constraints on the zones. We propose a variation on model predictive control to achieve this goal. When the right variables are used, the dynamics of the system are linear, and the constraints are convex. The cost function, however, is nonconvex due to the temperature dependence of thermodynamic efficiency. To handle this nonconvexity we propose a sequential convex optimisation method, which typically converges in about five iterations. We employ a fast convex quadratic programming solver to carry out the iterations, which is more than fast enough to run in real time. We demonstrate our method on a realistic model, with a full-year simulation and 15-minute time periods, using historical electricity prices and weather data, as well as random variations in thermal load. These simulations show substantial cost savings, on the order of 30%, compared to a standard thermostat-based control system. Perhaps more importantly, we see that the method exhibits a sophisticated response to real-time variations in electricity prices. This demand response is critical to help balance real-time uncertainties in generation capacity associated with large penetration of intermittent renewable energy sources in a future smart grid.
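
    Stripped to a scalar toy problem, the sequential convex optimisation loop linearises the nonconvex part of the cost around the current iterate and solves the resulting convex subproblem in closed form (an illustrative analogue, not the refrigeration controller):

        import numpy as np

        def sequential_convex(x=1.0, iters=20):
            # Minimise x**2 + sin(x): at iterate xk, replace sin(x) by its
            # linearisation sin(xk) + cos(xk)*(x - xk); the convex subproblem
            # min_x x**2 + cos(xk)*x is solved exactly by x = -cos(xk)/2.
            for _ in range(iters):
                x = -np.cos(x) / 2.0
            return x

        x = sequential_convex()
        print(x, 2 * x + np.cos(x))   # stationarity check: 2x + cos(x) is near 0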

  20. Entropy and convexity for nonlinear partial differential equations

    PubMed Central

    Ball, John M.; Chen, Gui-Qiang G.

    2013-01-01

    Partial differential equations are ubiquitous in almost all applications of mathematics, where they provide a natural mathematical description of many phenomena involving change in physical, chemical, biological and social processes. The concept of entropy originated in thermodynamics and statistical physics during the nineteenth century to describe the heat exchanges that occur in the thermal processes in a thermodynamic system, while the original notion of convexity is for sets and functions in mathematics. Since then, entropy and convexity have become two of the most important concepts in mathematics. In particular, nonlinear methods via entropy and convexity have been playing an increasingly important role in the analysis of nonlinear partial differential equations in recent decades. This opening article of the Theme Issue is intended to provide an introduction to entropy, convexity and related nonlinear methods for the analysis of nonlinear partial differential equations. We also provide a brief discussion about the content and contributions of the papers that make up this Theme Issue. PMID:24249768

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Skala, Vaclav

    There are many space subdivision and space partitioning techniques used in many algorithms to speed up computations. They mostly rely on orthogonal space subdivision or on hierarchical data structures, e.g. BSP trees, quadtrees, octrees, kd-trees, bounding volume hierarchies, etc. However, in some applications a non-orthogonal space subdivision can offer new ways to achieve an actual speed-up. In the case of a convex polygon in E², a simple point-in-polygon test has O(N) complexity, while the optimal algorithm has O(log N) computational complexity. In the E³ case, the complexity is O(N) even for a convex polyhedron, as no ordering is defined. New point-in-convex-polygon and point-in-convex-polyhedron algorithms are presented, based on space subdivision in the preprocessing stage and resulting in O(1) run-time complexity. The presented approach is simple to implement. Due to the principle of duality, dual problems, e.g. line-convex polygon intersection and line clipping, can be solved similarly.
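
    For contrast, the O(log N) convex-polygon test cited as optimal is a binary search over the triangle fan from one vertex; a sketch (the paper's O(1) method relies on preprocessing instead):

        def point_in_convex_polygon(poly, p):
            # poly: vertices in counter-clockwise order; O(log N) per query.
            def cross(o, a, b):
                return (a[0]-o[0])*(b[1]-o[1]) - (a[1]-o[1])*(b[0]-o[0])
            v0 = poly[0]
            if cross(v0, poly[1], p) < 0 or cross(v0, poly[-1], p) > 0:
                return False                    # outside the angular wedge at v0
            lo, hi = 1, len(poly) - 1
            while hi - lo > 1:                  # binary search for the fan triangle
                mid = (lo + hi) // 2
                if cross(v0, poly[mid], p) >= 0:
                    lo = mid
                else:
                    hi = mid
            return cross(poly[lo], poly[hi], p) >= 0

        square = [(0, 0), (2, 0), (2, 2), (0, 2)]
        print(point_in_convex_polygon(square, (1, 1)))   # True
        print(point_in_convex_polygon(square, (3, 1)))   # False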

  2. Stochastic Dual Algorithm for Voltage Regulation in Distribution Networks with Discrete Loads: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dall'Anese, Emiliano; Zhou, Xinyang; Liu, Zhiyuan

    This paper considers distribution networks with distributed energy resources and discrete-rate loads, and designs an incentive-based algorithm that allows the network operator and the customers to pursue given operational and economic objectives, while concurrently ensuring that voltages are within prescribed limits. Four major challenges are addressed: (1) the non-convexity from discrete decision variables, (2) the non-convexity due to a Stackelberg game structure, (3) unavailable private information from customers, and (4) different update frequencies for the two types of devices. In this paper, we first make a convex relaxation for the discrete variables, then reformulate the non-convex structure into a convex optimization problem together with a pricing/reward signal design, and propose a distributed stochastic dual algorithm for solving the reformulated problem while restoring feasible power rates for the discrete devices. By doing so, we are able to statistically achieve the solution of the reformulated problem without exposure of any private information from customers. Stability of the proposed schemes is analytically established and numerically corroborated.

  3. Entropy and convexity for nonlinear partial differential equations.

    PubMed

    Ball, John M; Chen, Gui-Qiang G

    2013-12-28

    Partial differential equations are ubiquitous in almost all applications of mathematics, where they provide a natural mathematical description of many phenomena involving change in physical, chemical, biological and social processes. The concept of entropy originated in thermodynamics and statistical physics during the nineteenth century to describe the heat exchanges that occur in the thermal processes in a thermodynamic system, while the original notion of convexity is for sets and functions in mathematics. Since then, entropy and convexity have become two of the most important concepts in mathematics. In particular, nonlinear methods via entropy and convexity have been playing an increasingly important role in the analysis of nonlinear partial differential equations in recent decades. This opening article of the Theme Issue is intended to provide an introduction to entropy, convexity and related nonlinear methods for the analysis of nonlinear partial differential equations. We also provide a brief discussion about the content and contributions of the papers that make up this Theme Issue.

  4. The roles of the convex hull and the number of potential intersections in performance on visually presented traveling salesperson problems.

    PubMed

    Vickers, Douglas; Lee, Michael D; Dry, Matthew; Hughes, Peter

    2003-10-01

    The planar Euclidean version of the traveling salesperson problem requires finding the shortest tour through a two-dimensional array of points. MacGregor and Ormerod (1996) have suggested that people solve such problems by using a global-to-local perceptual organizing process based on the convex hull of the array. We review evidence for and against this idea, before considering an alternative, local-to-global perceptual process, based on the rapid automatic identification of nearest neighbors. We compare these approaches in an experiment in which the effects of number of convex hull points and number of potential intersections on solution performance are measured. Performance worsened with more points on the convex hull and with fewer potential intersections. A measure of response uncertainty was unaffected by the number of convex hull points but increased with fewer potential intersections. We discuss a possible interpretation of these results in terms of a hierarchical solution process based on linking nearest neighbor clusters.
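
    The nearest-neighbour linking considered as the local-to-global alternative corresponds computationally to the classical greedy tour construction (an illustrative sketch of the heuristic, not a model of human solvers):

        import numpy as np

        def nearest_neighbour_tour(points):
            # Greedy: start at point 0, repeatedly hop to the nearest unvisited point.
            pts = np.asarray(points, dtype=float)
            tour, unvisited = [0], list(range(1, len(pts)))
            while unvisited:
                last = pts[tour[-1]]
                nxt = min(unvisited, key=lambda i: np.linalg.norm(pts[i] - last))
                unvisited.remove(nxt)
                tour.append(nxt)
            return tour

        print(nearest_neighbour_tour([(0, 0), (3, 0), (1, 0.5), (2, 2)]))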

  5. Intraday Seasonalities and Nonstationarity of Trading Volume in Financial Markets: Individual and Cross-Sectional Features.

    PubMed

    Graczyk, Michelle B; Duarte Queirós, Sílvio M

    2016-01-01

    We study the intraday behaviour of the statistical moments of the trading volume of the blue chip equities that composed the Dow Jones Industrial Average index between 2003 and 2014. By splitting that time interval into semesters, we provide a quantitative account of the nonstationary nature of the intraday statistical properties as well. Explicitly, we prove the well-known ∪-shape exhibited by the average trading volume, as well as by the volatility of the price fluctuations, experienced a significant change from 2008 (the year of the "subprime" financial crisis) onwards. That has resulted in a faster relaxation after the market opening and relates to a consistent decrease in the convexity of the average trading volume intraday profile. Simultaneously, the last part of the session has become steeper as well, a modification that is likely to have been triggered by the new short-selling rules that were introduced in 2007 by the Securities and Exchange Commission. The combination of both results reveals that the ∪ has been turning into a ⊔. Additionally, the analysis of higher-order cumulants, namely the skewness and the kurtosis, shows that the morning and the afternoon parts of the trading session are each clearly associated with different statistical features and hence dynamical rules. Concretely, we claim that the large initial trading volume is due to wayward stocks whereas the large volume during the last part of the session hinges on a cohesive increase of the trading volume. That dissimilarity between the two parts of the trading session is stressed in periods of higher uproar in the market.

  6. Convexity Conditions and the Legendre-Fenchel Transform for the Product of Finitely Many Positive Definite Quadratic Forms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhao Yunbin, E-mail: zhaoyy@maths.bham.ac.u

    2010-12-15

    While the product of finitely many convex functions has been investigated in the field of global optimization, some fundamental issues such as the convexity condition and the Legendre-Fenchel transform for the product function remain unresolved. Focusing on quadratic forms, this paper is aimed at addressing the question: When is the product of finitely many positive definite quadratic forms convex, and what is the Legendre-Fenchel transform for it? First, we show that the convexity of the product is determined intrinsically by the condition numbers of the so-called 'scaled matrices' associated with the quadratic forms involved. The main result claims that if the condition numbers of these scaled matrices are bounded above by an explicit constant (which depends only on the number of quadratic forms involved), then the product function is convex. Second, we prove that the Legendre-Fenchel transform for the product of positive definite quadratic forms can be expressed, and the computation of the transform amounts to finding the solution to a system of equations (or equally, finding a Brouwer fixed point of a mapping) with a special structure. Thus, a broader question than the open 'Question 11' in Hiriart-Urruty (SIAM Rev. 49, 225-273, 2007) is addressed in this paper.

  7. The effects of an editor serving as one of the reviewers during the peer-review process.

    PubMed

    Giordan, Marco; Csikasz-Nagy, Attila; Collings, Andrew M; Vaggi, Federico

    2016-01-01

    Background Publishing in scientific journals is one of the most important ways in which scientists disseminate research to their peers and to the wider public. Pre-publication peer review underpins this process, but peer review is subject to various criticisms and is under pressure from growth in the number of scientific publications. Methods Here we examine an element of the editorial process at eLife, in which the Reviewing Editor usually serves as one of the referees, to see what effect this has on decision times, decision type, and the number of citations. We analysed a dataset of 8,905 research submissions to eLife since June 2012, of which 2,747 were sent for peer review. This subset of 2,747 papers was then analysed in detail. Results The Reviewing Editor serving as one of the peer reviewers results in faster decision times on average, with the time to final decision ten days faster for accepted submissions (n=1,405) and five days faster for papers that were rejected after peer review (n=1,099). Moreover, editors acting as reviewers had no effect on whether submissions were accepted or rejected, and a very small (but significant) effect on citation rates. Conclusions An important aspect of eLife's peer-review process is shown to be effective, given that decision times are faster when the Reviewing Editor serves as a reviewer. Other journals hoping to improve decision times could consider adopting a similar approach.

  8. Global solutions in higher dimensions to a fourth-order parabolic equation modeling epitaxial thin-film growth

    NASA Astrophysics Data System (ADS)

    Winkler, Michael

    2011-08-01

    The initial-value problem for

        u_t = -Δ²u - μΔu - λΔ|∇u|² + f(x)        (*)

    is studied under the boundary conditions ∂u/∂ν = ∂(Δu)/∂ν = 0 on the boundary of a bounded convex domain Ω ⊂ ℝⁿ with smooth boundary. This problem arises in the modeling of the evolution of a thin surface when exposed to molecular beam epitaxy. Correspondingly, the physically most relevant spatial setting is obtained when n = 2, but previous mathematical results appear to concentrate on the case n = 1. In this work, it is proved that when n ≤ 3, μ ≥ 0, λ > 0 and f ∈ L^∞(Ω) satisfies ∫_Ω f ≥ 0, for each prescribed initial distribution u₀ ∈ L^∞(Ω) fulfilling ∫_Ω u₀ ≥ 0, there exists at least one global weak solution u ∈ L²_loc([0,∞); W^{1,2}(Ω)) satisfying ∫_Ω u(·,t) ≥ 0 for a.e. t > 0, and moreover, it is shown that this solution can be obtained through a Rothe-type approximation scheme. Furthermore, under an additional smallness condition on μ and ‖f‖_{L^∞(Ω)}, it is shown that there exists a bounded set S ⊂ L¹(Ω) which is absorbing for (*) in the sense that for any such solution, we can pick T > 0 such that e^{2λu(·,t)} ∈ S for all t > T, provided that Ω is a ball and u₀ and f are radially symmetric with respect to x = 0. This partially extends similar absorption results known in the spatially one-dimensional case. The techniques applied to derive appropriate compactness properties via a priori estimates include straightforward testing procedures which lead to integral inequalities involving, for instance, the functional ∫_Ω e^{2λu} dx, but also the use of a maximum principle for second-order elliptic equations.

  9. Pre-vector variational inequality

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lin, Lai-Jiu

    1994-12-31

    Let X be a Hausdorff topological vector space, (Y, D) an ordered Hausdorff topological vector space ordered by a convex cone D. Let L(X, Y) be the space of all bounded linear operators, E ⊂ X a nonempty set, and T : E → L(X, Y), η : E × E → E functions. For x, y ∈ Y, we write x ≮ y if y − x ∉ int D, where int D is the interior of D. We consider the following two problems: find x ∈ E such that ⟨T(x), η(y, x)⟩ ≮ 0 for all y ∈ E; and find x ∈ E such that ⟨T(x), η(y, x)⟩ ≯ 0 for all y ∈ E and ⟨T(x), η(y, x)⟩ ∈ C_p^{w+} = {l ∈ L(X, Y) : ⟨l, η(x, 0)⟩ ≮ 0 for all x ∈ E}, where ⟨T(x), y⟩ denotes the linear operator T(x) applied at y, that is, T(x)(y). We call the first problem the pre-vector variational inequality problem (Pre-VVIP) and the second the pre-vector complementarity problem (Pre-VCP). If X = Rⁿ, Y = R, D = R₊ and η(y, x) = y − x, then our problem is the well-known variational inequality first studied by Hartman and Stampacchia. If Y = R, D = R₊ and η(y, x) = y − x, our problem is the variational problem in an infinite-dimensional space. In this research, we impose different conditions on T(x), η, X and ⟨T(x), η(y, x)⟩ and investigate existence theorems for these problems. As an application of one of our results, we establish an existence theorem for a weak minimum of the problem (P) V-min f(x) subject to x ∈ E, where f : X → Y is a Fréchet differentiable invex function.
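
    In the classical special case the record mentions (X = Rⁿ, Y = R, η(y, x) = y − x), the problem reduces to the Hartman-Stampacchia variational inequality: find x ∈ E with ⟨T(x), y − x⟩ ≥ 0 for all y ∈ E. A minimal projected-iteration sketch is shown below; the box-shaped E, the step size and the monotone Lipschitz operator T are assumptions of ours.

      import numpy as np

      # Projection method for the classical variational inequality
      #   find x in E with <T(x), y - x> >= 0 for all y in E.
      # Assumptions (ours): E is a box [lo, hi]^n; T is monotone Lipschitz.
      def project_box(x, lo, hi):
          return np.clip(x, lo, hi)

      def solve_vi(T, x0, lo, hi, gamma=0.1, tol=1e-8, max_iter=10_000):
          x = x0.copy()
          for _ in range(max_iter):
              x_new = project_box(x - gamma * T(x), lo, hi)
              if np.linalg.norm(x_new - x) < tol:
                  return x_new
              x = x_new
          return x

      # Example: T(x) = A x + b with A positive definite (a monotone operator).
      A = np.array([[2.0, 0.5], [0.5, 1.0]])
      b = np.array([-1.0, 1.0])
      print(solve_vi(lambda x: A @ x + b, np.zeros(2), lo=0.0, hi=1.0))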

  10. A parallel Discrete Element Method to model collisions between non-convex particles

    NASA Astrophysics Data System (ADS)

    Rakotonirina, Andriarimina Daniel; Delenne, Jean-Yves; Wachs, Anthony

    2017-06-01

    In many dry granular and suspension flow configurations, particles can be highly non-spherical. It is now well established in the literature that particle shape affects the flow dynamics or the microstructure of the particle assembly in assorted ways, e.g. compacity of a packed bed or heap, dilation under shear, resistance to shear, momentum transfer between translational and angular motions, and the ability to form arches and block the flow. In this talk, we suggest an accurate and efficient way to model collisions between particles of (almost) arbitrary shape. For that purpose, we develop a Discrete Element Method (DEM) combined with a soft particle contact model. The collision detection algorithm handles contacts between bodies of various shape and size. For non-convex bodies, our strategy is based on decomposing a non-convex body into a set of convex ones. Therefore, our novel method can be called the "glued-convex method" (in the sense of clumping convex bodies together), as an extension of the popular "glued-spheres" method, and is implemented in our own granular dynamics code Grains3D. Since the whole problem is solved explicitly, our fully MPI-parallelized code Grains3D exhibits a very high scalability when dynamic load balancing is not required. In particular, simulations on up to a few thousand cores in configurations involving up to a few tens of millions of particles can readily be performed. We apply our enhanced numerical model to (i) the collapse of a granular column made of convex particles and (ii) the microstructure of a heap of non-convex particles in a cylindrical reactor.
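
    The glued-spheres idea that the record's "glued-convex" method generalizes can be sketched in a few lines: a non-convex particle is a rigid clump of spheres, and contact between two clumps is resolved sphere-by-sphere with a soft (linear spring) force. All class names and parameters below are illustrative assumptions of ours, not the Grains3D API; rotation and tangential forces are omitted.

      import numpy as np

      # Minimal "glued-spheres" contact sketch between two rigid clumps.
      class Clump:
          def __init__(self, centers, radii, position):
              self.local = np.asarray(centers, dtype=float)  # sphere centers, body frame
              self.radii = np.asarray(radii, dtype=float)
              self.pos = np.asarray(position, dtype=float)   # clump position (no rotation)

          def world_centers(self):
              return self.local + self.pos

      def contact_force(a, b, stiffness=1e4):
          """Sum of linear-spring normal forces over overlapping sphere pairs."""
          force = np.zeros(3)
          for ca, ra in zip(a.world_centers(), a.radii):
              for cb, rb in zip(b.world_centers(), b.radii):
                  d = ca - cb
                  dist = np.linalg.norm(d)
                  overlap = ra + rb - dist
                  if overlap > 0.0 and dist > 0.0:
                      force += stiffness * overlap * (d / dist)  # push a away from b
          return force

      # Two L-shaped clumps approaching each other along x.
      L_shape = [[0, 0, 0], [1, 0, 0], [0, 1, 0]]
      a = Clump(L_shape, [0.5] * 3, position=[0.0, 0.0, 0.0])
      b = Clump(L_shape, [0.5] * 3, position=[0.9, 0.0, 0.0])
      print(contact_force(a, b))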

  11. Faster eating rates are associated with higher energy intakes during an ad libitum meal, higher BMI and greater adiposity among 4·5-year-old children: results from the Growing Up in Singapore Towards Healthy Outcomes (GUSTO) cohort.

    PubMed

    Fogel, Anna; Goh, Ai Ting; Fries, Lisa R; Sadananthan, Suresh A; Velan, S Sendhil; Michael, Navin; Tint, Mya-Thway; Fortier, Marielle V; Chan, Mei Jun; Toh, Jia Ying; Chong, Yap-Seng; Tan, Kok Hian; Yap, Fabian; Shek, Lynette P; Meaney, Michael J; Broekman, Birit F P; Lee, Yung Seng; Godfrey, Keith M; Chong, Mary F F; Forde, Ciarán G

    2017-04-01

    Faster eating rates are associated with increased energy intake, but little is known about the relationship between children's eating rate, food intake and adiposity. We examined whether children who eat faster consume more energy and whether this is associated with higher weight status and adiposity. We hypothesised that eating rate mediates the relationship between child weight and ad libitum energy intake. Children (n 386) from the Growing Up in Singapore Towards Healthy Outcomes cohort participated in a video-recorded ad libitum lunch at 4·5 years to measure acute energy intake. Videos were coded for three eating-behaviours (bites, chews and swallows) to derive a measure of eating rate (g/min). BMI and anthropometric indices of adiposity were measured. A subset of children underwent MRI scanning (n 153) to measure abdominal subcutaneous and visceral adiposity. Children above/below the median eating rate were categorised as slower and faster eaters, and compared across body composition measures. There was a strong positive relationship between eating rate and energy intake (r 0·61, P<0·001) and a positive linear relationship between eating rate and children's BMI status. Faster eaters consumed 75 % more energy content than slower eating children (Δ548 kJ (Δ131 kcal); 95 % CI 107·6, 154·4, P<0·001), and had higher whole-body (P<0·05) and subcutaneous abdominal adiposity (Δ118·3 cc; 95 % CI 24·0, 212·7, P=0·014). Mediation analysis showed that eating rate mediates the link between child weight and energy intake during a meal (b 13·59; 95 % CI 7·48, 21·83). Children who ate faster had higher energy intake, and this was associated with increased BMI z-score and adiposity.

  12. Faster eating rates are associated with higher energy intakes during an Ad libitum meal, higher BMI and greater adiposity among 4.5 year old children – Results from the GUSTO cohort

    PubMed Central

    Fogel, Anna; Goh, Ai Ting; Fries, Lisa R.; Sadananthan, Suresh Anand; Velan, S. Sendhil; Michael, Navin; Tint, Mya Thway; Fortier, Marielle Valerie; Chan, Mei Jun; Toh, Jia Ying; Chong, Yap-Seng; Tan, Kok Hian; Yap, Fabian; Shek, Lynette P.; Meaney, Michael J.; Broekman, Birit F.P.; Lee, Yung Seng; Godfrey, Keith M.; Chong, Mary Foong Fong; Forde, Ciarán Gerard

    2017-01-01

    Faster eating rates are associated with increased energy intake, but less is known about the relationship between children’s eating rate, food intake and adiposity. We examined whether children who eat faster consume more energy and whether this is associated with higher weight status and adiposity. We hypothesized that eating rate mediates the relationship between child weight and ad libitum energy intake. Children (N=386) from the Growing Up in Singapore towards Healthy Outcomes (GUSTO) cohort participated in a video-recorded ad libitum lunch at 4.5 years to measure acute energy intake. Videos were coded for three eating-behaviours (bites, chews and swallows) to derive a measure of eating rate (g/min). Body mass index (BMI) and anthropometric indices of adiposity were measured. A subset of children underwent MRI scanning (n=153) to measure abdominal subcutaneous and visceral adiposity. Children above/below the median eating rate were categorised as slower and faster eaters, and compared across body composition measures. There was a strong positive relationship between eating rate and energy intake (r=0.61, p<0.001) and a positive linear relationship between eating rate and children’s BMI status. Faster eaters consumed 75% more calories than slower eating children (Δ131 kcal, 95%CI [107.6, 154.4], p<0.001), and had higher whole-body (p<0.05) and subcutaneous abdominal adiposity (Δ118.3 cc; 95%CI [24.0, 212.7], p=0.014). Mediation analysis showed that eating rate mediates the link between child weight and energy intake during a meal (b=13.59, 95% CI [7.48, 21.83]). Children who ate faster had higher energy intake, and this was associated with increased BMIz and adiposity. PMID:28462734

  13. Species Profiles: Life Histories and Environmental Requirements of Coastal Fishes and Invertebrates (Gulf of Mexico). Blue Crab.

    DTIC Science & Technology

    1986-06-01

    Spawning occurs from spring through fall in high-salinity waters, depending upon the physiological requirements of each particular stage in the life history. Order: Decapoda; Infraorder: Brachyura. Carapace about 2.5 times as wide as long, moderately convex. Development occurs in adjacent marine waters; salinities in excess of 20.0 ppt are required. After insemination (mating terminates with a final ecdysis), the male continues to …

  14. Computational Efficiency of the Simplex Embedding Method in Convex Nondifferentiable Optimization

    NASA Astrophysics Data System (ADS)

    Kolosnitsyn, A. V.

    2018-02-01

    The simplex embedding method for solving convex nondifferentiable optimization problems is considered. A description of modifications of this method is given, based on a shift of the cutting plane intended to cut off the maximum number of simplex vertices. These modifications speed up the problem solution. A numerical comparison of the efficiency of the proposed modifications, based on the numerical solution of benchmark convex nondifferentiable optimization problems, is presented.
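
    For readers unfamiliar with cutting-plane style methods for nondifferentiable convex minimization, the sketch below shows the simpler classical Kelley method (not the simplex embedding method itself): each subgradient defines a cut that shrinks the localization set. The test function and box bounds are assumptions of ours.

      import numpy as np
      from scipy.optimize import linprog

      # Kelley's cutting-plane method for min f(x), f convex nondifferentiable.
      def f_and_subgrad(x):
          # f(x) = max(|x0 - 1|, |x1 + 2|): convex, nondifferentiable.
          vals = np.array([abs(x[0] - 1.0), abs(x[1] + 2.0)])
          i = int(np.argmax(vals))
          g = np.zeros(2)
          g[i] = np.sign(x[i] - (1.0 if i == 0 else -2.0))
          return vals[i], g

      box = [(-10, 10), (-10, 10)]
      x = np.zeros(2)
      cuts_A, cuts_b = [], []
      for it in range(50):
          fx, g = f_and_subgrad(x)
          # Cut: t >= f(x_k) + g.(y - x_k)  <=>  g.y - t <= g.x_k - f(x_k)
          cuts_A.append(np.append(g, -1.0))
          cuts_b.append(g @ x - fx)
          # Minimize t over the accumulated cuts within the box.
          res = linprog(c=[0.0, 0.0, 1.0], A_ub=np.array(cuts_A),
                        b_ub=np.array(cuts_b), bounds=box + [(None, None)])
          x = res.x[:2]
      print(x)   # approaches the minimizer (1, -2)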

  15. Another convex combination of product states for the separable Werner state

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Azuma, Hiroo; Ban, Masashi; CREST, Japan Science and Technology Agency, 1-1-9 Yaesu, Chuo-ku, Tokyo 103-0028

    2006-03-15

    In this paper, we write down the separable Werner state in a two-qubit system explicitly as a convex combination of product states, which is different from the convex combination obtained by Wootters' method. The Werner state in a two-qubit system has a single real parameter and varies from inseparable to separable according to the value of its parameter. We derive a hidden variable model that is induced by our decomposed form for the separable Werner state. From our explicit form of the convex combination of product states, we understand the following: the critical point of the parameter for separability of the Werner state comes from positivity of the local density operators of the qubits.
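
    The critical parameter value the record refers to can be checked numerically. For the two-qubit Werner state ρ(p) = p|ψ⁻⟩⟨ψ⁻| + (1−p)I/4, separability holds exactly for p ≤ 1/3, and the Peres-Horodecki (partial transpose) criterion is necessary and sufficient in 2×2 systems; a short NumPy verification:

      import numpy as np

      # PPT check for the two-qubit Werner state rho(p) = p|psi-><psi-| + (1-p)I/4.
      psi_minus = np.array([0.0, 1.0, -1.0, 0.0]) / np.sqrt(2.0)
      proj = np.outer(psi_minus, psi_minus)

      def werner(p):
          return p * proj + (1.0 - p) * np.eye(4) / 4.0

      def partial_transpose(rho):
          # Transpose the second qubit: reshape to (2,2,2,2), swap its indices.
          r = rho.reshape(2, 2, 2, 2)
          return r.transpose(0, 3, 2, 1).reshape(4, 4)

      for p in (0.2, 1.0 / 3.0, 0.5):
          min_eig = np.linalg.eigvalsh(partial_transpose(werner(p))).min()
          print(f"p = {p:.3f}: min PT eigenvalue = {min_eig:+.4f} "
                f"({'separable' if min_eig >= -1e-12 else 'entangled'})")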

  16. Thermal Protection System with Staggered Joints

    NASA Technical Reports Server (NTRS)

    Simon, Xavier D. (Inventor); Robinson, Michael J. (Inventor); Andrews, Thomas L. (Inventor)

    2014-01-01

    The thermal protection system disclosed herein is suitable for use with a spacecraft such as a reentry module or vehicle, where the spacecraft has a convex surface to be protected. An embodiment of the thermal protection system includes a plurality of heat resistant panels, each having an outer surface configured for exposure to atmosphere, an inner surface opposite the outer surface and configured for attachment to the convex surface of the spacecraft, and a joint edge defined between the outer surface and the inner surface. The joint edges of adjacent ones of the heat resistant panels are configured to mate with each other to form staggered joints that run between the peak of the convex surface and the base section of the convex surface.

  17. A fast adaptive convex hull algorithm on two-dimensional processor arrays with a reconfigurable BUS system

    NASA Technical Reports Server (NTRS)

    Olariu, S.; Schwing, J.; Zhang, J.

    1991-01-01

    A bus system that can change dynamically to suit computational needs is referred to as reconfigurable. We present a fast adaptive convex hull algorithm on a two-dimensional processor array with a reconfigurable bus system (2-D PARBS, for short). Specifically, we show that computing the convex hull of a planar set of n points takes O(log n / log m) time on a 2-D PARBS of size mn x n with 3 ≤ m ≤ n. Our result implies that the convex hull of n points in the plane can be computed in O(1) time on a 2-D PARBS of size n^1.5 x n.
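
    As a point of comparison for the O(1)-time reconfigurable-mesh result, the standard sequential baseline is the O(n log n) monotone-chain algorithm, sketched below (our example points, not from the paper):

      # Andrew's monotone chain: a standard O(n log n) sequential convex hull.
      def cross(o, a, b):
          return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

      def convex_hull(points):
          pts = sorted(set(points))
          if len(pts) <= 2:
              return pts
          lower, upper = [], []
          for p in pts:                      # build lower hull left-to-right
              while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
                  lower.pop()
              lower.append(p)
          for p in reversed(pts):            # build upper hull right-to-left
              while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
                  upper.pop()
              upper.append(p)
          return lower[:-1] + upper[:-1]     # endpoints are shared

      print(convex_hull([(0, 0), (1, 1), (2, 2), (2, 0), (0, 2), (1, 0)]))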

  18. The concave cusp as a determiner of figure-ground.

    PubMed

    Stevens, K A; Brookes, A

    1988-01-01

    The tendency to interpret as figure, relative to background, those regions that are lighter, smaller, and, especially, more convex is well known. Wherever convex opaque objects abut or partially occlude one another in an image, the points of contact between the silhouettes form concave cusps, each indicating the local assignment of figure versus ground across the contour segments. It is proposed that this local geometric feature is a preattentive determiner of figure-ground perception and that it contributes to the previously observed tendency for convexity preference. Evidence is presented that figure-ground assignment can be determined solely on the basis of the concave cusp feature, and that the salience of the cusp derives from local geometry and not from adjacent contour convexity.

  19. Detection of Functional Change Using Cluster Trend Analysis in Glaucoma.

    PubMed

    Gardiner, Stuart K; Mansberger, Steven L; Demirel, Shaban

    2017-05-01

    Global analyses using mean deviation (MD) assess visual field progression, but can miss localized changes. Pointwise analyses are more sensitive to localized progression, but more variable so require confirmation. This study assessed whether cluster trend analysis, averaging information across subsets of locations, could improve progression detection. A total of 133 test-retest eyes were tested 7 to 10 times. Rates of change and P values were calculated for possible re-orderings of these series to generate global analysis ("MD worsening faster than x dB/y with P < y"), pointwise and cluster analyses ("n locations [or clusters] worsening faster than x dB/y with P < y") with specificity exactly 95%. These criteria were applied to 505 eyes tested over a mean of 10.5 years, to find how soon each detected "deterioration," and compared using survival models. This was repeated including two subsequent visual fields to determine whether "deterioration" was confirmed. The best global criterion detected deterioration in 25% of eyes in 5.0 years (95% confidence interval [CI], 4.7-5.3 years), compared with 4.8 years (95% CI, 4.2-5.1) for the best cluster analysis criterion, and 4.1 years (95% CI, 4.0-4.5) for the best pointwise criterion. However, for pointwise analysis, only 38% of these changes were confirmed, compared with 61% for clusters and 76% for MD. The time until 25% of eyes showed subsequently confirmed deterioration was 6.3 years (95% CI, 6.0-7.2) for global, 6.3 years (95% CI, 6.0-7.0) for pointwise, and 6.0 years (95% CI, 5.3-6.6) for cluster analyses. Although the specificity is still suboptimal, cluster trend analysis detects subsequently confirmed deterioration sooner than either global or pointwise analyses.

  20. Different gene-specific mechanisms determine the 'revised-response' memory transcription patterns of a subset of A. thaliana dehydration stress responding genes.

    PubMed

    Liu, Ning; Ding, Yong; Fromm, Michael; Avramova, Zoya

    2014-05-01

    Plants that have experienced several exposures to dehydration stress show increased resistance to future exposures by producing faster and/or stronger reactions, while many dehydration stress responding genes in Arabidopsis thaliana super-induce their transcription as a 'memory' from the previous encounter. A previously unknown, rather unusual, memory response pattern is displayed by a subset of the dehydration stress response genes. Despite robustly responding to a first stress, these genes return to their initial, pre-stressed, transcript levels during the watered recovery; surprisingly, they do not respond further to subsequent stresses of similar magnitude and duration. This transcriptional behavior defines the 'revised-response' memory genes. Here, we investigate the molecular mechanisms regulating this transcription memory behavior. Potential roles of abscisic acid (ABA), of transcription factors (TFs) from the ABA signaling pathways (ABF2/3/4 and MYC2), and of histone modifications (H3K4me3 and H3K27me3) as factors in the revised-response transcription memory patterns are elucidated. We identify the TF MYC2 as the critical component for the memory behavior of a specific subset of MYC2-dependent genes. © The Author(s) 2014. Published by Oxford University Press on behalf of Nucleic Acids Research.

  1. Ensemble Response in Mushroom Body Output Neurons of the Honey Bee Outpaces Spatiotemporal Odor Processing Two Synapses Earlier in the Antennal Lobe

    PubMed Central

    Strube-Bloss, Martin F.; Herrera-Valdez, Marco A.; Smith, Brian H.

    2012-01-01

    Neural representations of odors are subject to computations that involve sequentially convergent and divergent anatomical connections across different areas of the brains in both mammals and insects. Furthermore, in both mammals and insects higher order brain areas are connected via feedback connections. In order to understand the transformations and interactions that this connectivity make possible, an ideal experiment would compare neural responses across different, sequential processing levels. Here we present results of recordings from a first order olfactory neuropile – the antennal lobe (AL) – and a higher order multimodal integration and learning center – the mushroom body (MB) – in the honey bee brain. We recorded projection neurons (PN) of the AL and extrinsic neurons (EN) of the MB, which provide the outputs from the two neuropils. Recordings at each level were made in different animals in some experiments and simultaneously in the same animal in others. We presented two odors and their mixture to compare odor response dynamics as well as classification speed and accuracy at each neural processing level. Surprisingly, the EN ensemble significantly starts separating odor stimuli rapidly and before the PN ensemble has reached significant separation. Furthermore the EN ensemble at the MB output reaches a maximum separation of odors between 84–120 ms after odor onset, which is 26 to 133 ms faster than the maximum separation at the AL output ensemble two synapses earlier in processing. It is likely that a subset of very fast PNs, which respond before the ENs, may initiate the rapid EN ensemble response. We suggest therefore that the timing of the EN ensemble activity would allow retroactive integration of its signal into the ongoing computation of the AL via centrifugal feedback. PMID:23209711

  2. Performance and Accuracy of LAPACK's Symmetric TridiagonalEigensolvers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Demmel, Jim W.; Marques, Osni A.; Parlett, Beresford N.

    2007-04-19

    We compare four algorithms from the latest LAPACK 3.1 release for computing eigenpairs of a symmetric tridiagonal matrix. These include QR iteration, bisection and inverse iteration (BI), the Divide-and-Conquer method (DC), and the method of Multiple Relatively Robust Representations (MR). Our evaluation considers speed and accuracy when computing all eigenpairs, and additionally subset computations. Using a variety of carefully selected test problems, our study covers a variety of today's computer architectures. Our conclusions can be summarized as follows. (1) DC and MR are generally much faster than QR and BI on large matrices. (2) MR almost always does the fewest floating point operations, but at a lower MFlop rate than all the other algorithms. (3) The exact performance of MR and DC strongly depends on the matrix at hand. (4) DC and QR are the most accurate algorithms, with observed accuracy O(√n·ε). The accuracy of BI and MR is generally O(n·ε). (5) MR is preferable to BI for subset computations.
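
    A rough timing comparison in the same spirit can be run through SciPy, which exposes several of these tridiagonal drivers ('stev' for QR, 'stebz' for bisection plus inverse iteration, 'stemr' for MRRR); the divide-and-conquer tridiagonal driver is not available through this routine, and the matrix below is a random stand-in for the paper's curated test set.

      import time
      import numpy as np
      from scipy.linalg import eigh_tridiagonal

      n = 2000
      rng = np.random.default_rng(0)
      d = rng.standard_normal(n)        # diagonal
      e = rng.standard_normal(n - 1)    # off-diagonal

      for driver in ("stev", "stebz", "stemr"):
          t0 = time.perf_counter()
          w, v = eigh_tridiagonal(d, e, lapack_driver=driver)
          dt = time.perf_counter() - t0
          print(f"{driver}: {dt:.3f} s, lambda_min = {w[0]:.6f}")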

  3. Motor ability and inhibitory processes in children with ADHD: a neuroelectric study.

    PubMed

    Hung, Chiao-Ling; Chang, Yu-Kai; Chan, Yuan-Shuo; Shih, Chia-Hao; Huang, Chung-Ju; Hung, Tsung-Min

    2013-06-01

    The purpose of the current study was to examine the relationship between motor ability and response inhibition using behavioral and electrophysiological indices in children with ADHD. A total of 32 participants were recruited and underwent a motor ability assessment using the Basic Motor Ability Test-Revised (BMAT), as well as the Go/No-Go task with simultaneous event-related potential (ERP) measurement. The results indicated that the BMAT scores were positively associated with the behavioral and ERP measures. Specifically, the BMAT average score was associated with a faster reaction time and higher accuracy, whereas higher BMAT subset scores predicted a shorter P3 latency in the Go condition. Although the association between the BMAT average score and the No-Go accuracy was limited, higher BMAT average and subset scores predicted a shorter N2 and P3 latency and a larger P3 amplitude in the No-Go condition. These findings suggest that motor abilities may play roles that benefit the cognitive performance of ADHD children.

  4. When the Desert Beetle Met the Carnivorous Plant: A Perfect Match for Droplet Growth and Shedding

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aizenberg, Joanna; Park, Kyoo Chul; Kim, Philseok

    2015-01-14

    Phase change of vapor followed by coalescence and transport on ubiquitous bumped or curved surfaces is of fundamental importance for a wide range of phenomena and applications, from water condensation on cold beverage bottles, to fogging on glasses and windshields, self-cleaning by jumping droplets, weathering, self-assembly, desalination, and latent heat transfer. Over the past decades, many attempts to understand and control the droplet growth dynamics and shedding of condensates on textured surfaces have focused on finding the role of micro/nanotexture combined with wettability. In particular, inspired by the Namib desert beetle's bump structure, studies tested the effect of topography on preferential condensation. However, as in the preferential condensation observed on flat surfaces, hybrid wettability rather than texture plays the major role; the role of bump topography in local preferential condensation has been unexplored and is still not clearly understood. In addition, given that not only facilitating droplet growth but also transporting the condensed droplets toward the desired reservoir is essential to free fresh sites for renucleation and regrowth of droplets and thereby enhance condensation efficiency, the current hybrid-wettability-based design is not efficient at transporting condensates, due to the high contact angle hysteresis created by highly wettable pinning points. Here we show that beetle-inspired bump topography leads to faster localized condensation and transport of water. Employing simple analytic and more complicated numerical calculations, we reveal the detailed role of topography and predict the focused diffusion flux based on the distortion of the concentration gradient around convex surface topography. We experimentally demonstrate a systematic understanding of the previously unexplored effect of topographical parameters on droplet growth dynamics on various bump geometries. Further rational design of asymmetric topography and a synergetic combination with a slippery coating simultaneously enable both faster droplet growth and transport for applications including efficient water condensation.

  5. 32 CFR 1700.7 - Processing of requests for records.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... of Information Act Request.” (b) Electronic Reading Room. ODNI maintains an online FOIA Reading Room... limit the scope of their requests in order to qualify for faster processing within the specified limits of its faster track. ...

  6. 32 CFR 1700.7 - Processing of requests for records.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... of Information Act Request.” (b) Electronic Reading Room. ODNI maintains an online FOIA Reading Room... limit the scope of their requests in order to qualify for faster processing within the specified limits of its faster track. ...

  7. Novel and efficient tag SNPs selection algorithms.

    PubMed

    Chen, Wen-Pei; Hung, Che-Lun; Tsai, Suh-Jen Jane; Lin, Yaw-Ling

    2014-01-01

    SNPs are the most abundant forms of genetic variation amongst species; association studies between complex diseases and SNPs or haplotypes have received great attention. However, these studies are restricted by the cost of genotyping all SNPs; thus, it is necessary to find smaller subsets, or tag SNPs, representing the rest of the SNPs. In fact, the existing tag SNP selection algorithms are notoriously time-consuming. An efficient algorithm for tag SNP selection is presented, which was applied to analyze the HapMap YRI data. The experimental results show that the proposed algorithm can achieve better performance than the existing tag SNP selection algorithms; in most cases, the proposed algorithm is at least ten times faster than the existing methods. In many cases, when the redundant ratio of the block is high, the proposed algorithm can even be thousands of times faster than the previously known methods. Tools and web services for haplotype block analysis, integrated with the Hadoop MapReduce framework, are also developed using the proposed algorithm as the computation kernel.
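
    The abstract does not spell out the algorithm, but the generic tag-SNP selection idea can be sketched as a greedy cover: repeatedly pick the SNP whose r² with the most still-uncovered SNPs exceeds a threshold. The function names, threshold and synthetic genotypes below are assumptions of ours, shown only to illustrate the problem the paper accelerates.

      import numpy as np

      # Greedy tag-SNP selection sketch over a haplotype block.
      def select_tag_snps(genotypes, r2_min=0.8):
          """genotypes: (individuals x SNPs) matrix of 0/1/2 allele counts."""
          r = np.corrcoef(genotypes.T)           # SNP-by-SNP correlation
          covers = (r ** 2) >= r2_min
          uncovered = set(range(genotypes.shape[1]))
          tags = []
          while uncovered:
              # SNP covering the most uncovered SNPs (always covers itself).
              best = max(uncovered,
                         key=lambda j: sum(covers[j, k] for k in uncovered))
              tags.append(best)
              uncovered -= {k for k in uncovered if covers[best, k]}
          return tags

      rng = np.random.default_rng(1)
      base = rng.integers(0, 3, size=(100, 4))
      geno = np.repeat(base, 3, axis=1)   # 12 SNPs in 4 perfectly-correlated groups
      print(select_tag_snps(geno))        # expect one tag per group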

  8. Remodelling of the bovine placenta: Comprehensive morphological and histomorphological characterization at the late embryonic and early accelerated fetal growth stages.

    PubMed

    Estrella, Consuelo Amor S; Kind, Karen L; Derks, Anna; Xiang, Ruidong; Faulkner, Nicole; Mohrdick, Melina; Fitzsimmons, Carolyn; Kruk, Zbigniew; Grutzner, Frank; Roberts, Claire T; Hiendleder, Stefan

    2017-07-01

    Placental function impacts growth and development with lifelong consequences for performance and health. We provide novel insights into placental development in bovine, an important agricultural species and biomedical model. Concepti with defined genetics and sex were recovered from nulliparous dams managed under standardized conditions to study placental gross morphological and histomorphological parameters at the late embryo (Day 48) and early accelerated fetal growth (Day 153) stages. Placentome number increased 3-fold between Day 48 and Day 153. Placental barrier thickness was thinner, and volume of placental components, and surface areas and densities were higher at Day 153 than Day 48. We confirmed two placentome types, flat and convex. At Day 48, there were more convex than flat placentomes, and convex placentomes had a lower proportion of maternal connective tissue (P < 0.01). However, this was reversed at Day 153, where convex placentomes were lower in number and had greater volume of placental components (P < 0.01 to P < 0.001) and greater surface area (P < 0.001) than flat placentomes. Importantly, embryo (r = 0.50) and fetal (r = 0.30) weight correlated with the total number of convex but not flat placentomes. Extensive remodelling of the placenta increases capacity for nutrient exchange to support rapidly increasing embryo-fetal weight from Day 48 to Day 153. The cellular composition of convex placentomes, and the exclusive relationship between convex placentome number and embryo-fetal weight, provide strong evidence for these placentomes as drivers of prenatal growth. The difference in proportion of maternal connective tissue between placentome types at Day 48 suggests that this tissue plays a role in determining placentome shape, further highlighting the importance of early placental development. Copyright © 2017 Elsevier Ltd. All rights reserved.

  9. Polar Duals of Convex Bodies

    DTIC Science & Technology

    1990-01-01


  10. Single lens laser beam shaper

    DOEpatents

    Liu, Chuyu [Newport News, VA]; Zhang, Shukui [Yorktown, VA]

    2011-10-04

    A single-lens bullet-shaped laser beam shaper capable of redistributing an arbitrary beam profile into any desired output profile, comprising a unitary lens comprising: a) a convex front input surface defining a focal point and a flat output portion at the focal point; and b) a cylindrical core portion having a flat input surface coincident with the flat output portion of the first input portion at the focal point, and a convex rear output surface remote from the convex front input surface.

  11. Turbulent boundary layers subjected to multiple curvatures and pressure gradients

    NASA Technical Reports Server (NTRS)

    Bandyopadhyay, Promode R.; Ahmed, Anwar

    1993-01-01

    The effects of abruptly applied cycles of curvature and pressure gradient on turbulent boundary layers are examined experimentally. Two two-dimensional curved test surfaces are considered: one has a sequence of concave and convex longitudinal surface curvatures and the other has a sequence of convex and concave curvatures. The choice of the curvature sequences was motivated by a desire to study the asymmetric response of turbulent boundary layers to convex and concave curvatures. The relaxation of a boundary layer from the effects of these two opposite sequences has been compared. The effect of the accompanying sequences of pressure gradient has also been examined, but the effect of curvature dominates. The growth of internal layers at the curvature junctions has been studied. Measurements of the Görtler and corner vortex systems have been made. The boundary layer recovering from the sequence of concave to convex curvature has a sustained lower skin friction level than that recovering from the sequence of convex to concave curvature. The amplification and suppression of turbulence due to the curvature sequences have also been studied.

  12. Non-convex dissipation potentials in multiscale non-equilibrium thermodynamics

    NASA Astrophysics Data System (ADS)

    Janečka, Adam; Pavelka, Michal

    2018-04-01

    Reformulating a constitutive relation in terms of gradient dynamics (being the derivative of a dissipation potential) brings additional information on stability, metastability and instability of the dynamics with respect to perturbations of the constitutive relation, called CR-stability. CR-instability is connected to the loss of convexity of the dissipation potential, which makes the Legendre-conjugate dissipation potential multivalued and causes dissipative phase transitions that are induced not by non-convexity of the free energy but by non-convexity of the dissipation potential. CR-stability of the constitutive relation with respect to perturbations is then manifested by constructing evolution equations for the perturbations in a thermodynamically sound way (CR-extension). As a result, interesting experimental observations of the behavior of complex fluids under shear flow and of the supercritical boiling curve can be explained.

  13. Modified surface testing method for large convex aspheric surfaces based on diffraction optics.

    PubMed

    Zhang, Haidong; Wang, Xiaokun; Xue, Donglin; Zhang, Xuejun

    2017-12-01

    Large convex aspheric optical elements have been widely applied in advanced optical systems, which has presented a challenging metrology problem. Conventional testing methods gradually fail to satisfy the demand as the definition of "large" changes. A modified method is proposed in this paper, which utilizes a relatively small computer-generated hologram together with a practical illumination lens to measure large convex aspherics. Two example systems are designed to demonstrate the applicability, and the sensitivity of this configuration is analyzed, which shows that the accuracy of the configuration can be better than 6 nm with careful alignment and calibration of the illumination lens in advance. Design examples and analysis show that this configuration is applicable to the measurement of large convex aspheric surfaces.

  14. Anthropometric change: implications for office ergonomics.

    PubMed

    Gordon, Claire C; Bradtmiller, Bruce

    2012-01-01

    Well-designed office workspaces require good anthropometric data in order to accommodate variability in the worker population. The recent obesity epidemic carries with it a number of anthropometric changes that have significant impact on design. We examine anthropometric change among US civilians over the last 50 years, and then examine that change in a subset of the US population--the US military--as military data sets often have more ergonomic dimensions than civilian ones. The civilian mean stature increased throughout the period 1962 to 2006 for both males and females. However, the rate of increase in mean weight was considerably faster. As a result, the male obesity rate changed from 10.7% in 1962 to 31.3% in 2006. The female change for the same period was 15.8% to 33.2%. In the Army, the proportion of obesity increased from 3.6% to 20.9%, in males. In the absence of national US ergonomic data, we demonstrate one approach to tracking civilian change in these dimensions, applying military height/weight regression equations to the civilian population estimates. This approach is useful for population monitoring but is not suitable for establishing new design limits, as regression estimates likely underestimate the change at the ends of the distribution.

  15. DrugECs: An Ensemble System with Feature Subspaces for Accurate Drug-Target Interaction Prediction

    PubMed Central

    Jiang, Jinjian; Wang, Nian; Zhang, Jun

    2017-01-01

    Background Drug-target interaction is key in drug discovery, especially in the design of new lead compounds. However, finding a new lead compound for a specific target is complicated and laborious, and the process is error-prone. Therefore computational techniques are commonly adopted in drug design, as they can save time and costs to a significant extent. Results To address the issue, a new prediction system is proposed in this work to identify drug-target interactions. First, drug-target pairs are encoded with a fragment technique and the software "PaDEL-Descriptor." The fragment technique is used for encoding target proteins: it divides each protein sequence into several fragments in order and encodes each fragment with several physiochemical properties of amino acids. The software "PaDEL-Descriptor" creates encoding vectors for drug molecules. Second, the dataset of drug-target pairs is resampled and several overlapped subsets are obtained, which are then input into kNN (k-Nearest Neighbor) classifiers to build an ensemble system. Conclusion Experimental results on the drug-target dataset showed that our method performs better and runs faster than the state-of-the-art predictors. PMID:28744468
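
    The resampled-subset kNN ensemble idea maps directly onto bagging with feature subspaces; a minimal scikit-learn sketch follows, with synthetic data standing in for the real encoded drug-target pairs (all sizes and hyperparameters are assumptions of ours, not the paper's settings).

      from sklearn.datasets import make_classification
      from sklearn.ensemble import BaggingClassifier
      from sklearn.model_selection import cross_val_score
      from sklearn.neighbors import KNeighborsClassifier

      # kNN base learners trained on overlapping resampled subsets and
      # feature subspaces of the pair descriptors.
      X, y = make_classification(n_samples=1000, n_features=50, n_informative=10,
                                 weights=[0.8, 0.2], random_state=0)

      ensemble = BaggingClassifier(KNeighborsClassifier(n_neighbors=5),
                                   n_estimators=25,
                                   max_samples=0.5,     # resampled subsets
                                   max_features=0.5,    # feature subspaces
                                   random_state=0)

      print(cross_val_score(ensemble, X, y, cv=5, scoring="roc_auc").mean())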

  16. Fast, Accurate and Shift-Varying Line Projections for Iterative Reconstruction Using the GPU

    PubMed Central

    Pratx, Guillem; Chinn, Garry; Olcott, Peter D.; Levin, Craig S.

    2013-01-01

    List-mode processing provides an efficient way to deal with sparse projections in iterative image reconstruction for emission tomography. An issue often reported is the tremendous amount of computation required by such algorithms. Each recorded event requires several back- and forward line projections. We investigated the use of the programmable graphics processing unit (GPU) to accelerate the line-projection operations and implement fully-3D list-mode ordered-subsets expectation-maximization for positron emission tomography (PET). We designed a reconstruction approach that incorporates resolution kernels, which model the spatially-varying physical processes associated with photon emission, transport and detection. Our development is particularly suitable for applications where the projection data is sparse, such as high-resolution, dynamic, and time-of-flight PET reconstruction. The GPU approach runs more than 50 times faster than an equivalent CPU implementation while image quality and accuracy are virtually identical. This paper describes in detail how the GPU can be used to accelerate the line projection operations, even when the lines-of-response have arbitrary endpoint locations and shift-varying resolution kernels are used. A quantitative evaluation is included to validate the correctness of this new approach. PMID:19244015
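
    The per-event work the GPU accelerates is one forward and one back line projection per iteration. A minimal CPU sketch of the list-mode EM update (no resolution kernels, toy random lines of response in place of a real scanner geometry; all sizes ours) shows the structure; ordered subsets would simply cycle the same update over chunks of the event list.

      import numpy as np

      rng = np.random.default_rng(0)
      n_voxels, n_lors, n_events = 64, 800, 20000

      A_all = (rng.random((n_lors, n_voxels)) < 0.05).astype(float)  # toy LORs
      x_true = np.ones(n_voxels); x_true[20:30] = 8.0                # hot region

      # Sample the event list: LOR i fires with probability ~ <a_i, x_true>.
      p = A_all @ x_true; p /= p.sum()
      events = rng.choice(n_lors, size=n_events, p=p)
      A = A_all[events]                     # one system-matrix row per event

      sens = A_all.sum(axis=0)              # sensitivity over all possible LORs
      x = np.ones(n_voxels)
      for it in range(30):
          fwd = A @ x                                       # forward projections
          x *= (A.T @ (1.0 / np.maximum(fwd, 1e-12))) / np.maximum(sens, 1e-12)
      print(x[18:32].round(1))              # hot voxels recover larger values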

  17. Implementation of GPU accelerated SPECT reconstruction with Monte Carlo-based scatter correction.

    PubMed

    Bexelius, Tobias; Sohlberg, Antti

    2018-06-01

    Statistical SPECT reconstruction can be very time-consuming especially when compensations for collimator and detector response, attenuation, and scatter are included in the reconstruction. This work proposes an accelerated SPECT reconstruction algorithm based on graphics processing unit (GPU) processing. Ordered subset expectation maximization (OSEM) algorithm with CT-based attenuation modelling, depth-dependent Gaussian convolution-based collimator-detector response modelling, and Monte Carlo-based scatter compensation was implemented using OpenCL. The OpenCL implementation was compared against the existing multi-threaded OSEM implementation running on a central processing unit (CPU) in terms of scatter-to-primary ratios, standardized uptake values (SUVs), and processing speed using mathematical phantoms and clinical multi-bed bone SPECT/CT studies. The difference in scatter-to-primary ratios, visual appearance, and SUVs between GPU and CPU implementations was minor. On the other hand, at its best, the GPU implementation was noticed to be 24 times faster than the multi-threaded CPU version on a normal 128 × 128 matrix size 3 bed bone SPECT/CT data set when compensations for collimator and detector response, attenuation, and scatter were included. GPU SPECT reconstructions show great promise as an every day clinical reconstruction tool.

  18. On the convergence of difference approximations to scalar conservation laws

    NASA Technical Reports Server (NTRS)

    Osher, S.; Tadmor, E.

    1985-01-01

    A unified treatment of explicit-in-time, two-level, second-order resolution (SOR), total variation diminishing (TVD) approximations to scalar conservation laws is presented. The schemes are assumed only to have conservation form and incremental form. A modified flux and a viscosity coefficient are introduced and results in terms of the latter are obtained. The existence of a cell entropy inequality is discussed, and such an inequality for all entropies is shown to imply that the scheme is an E scheme on monotone (actually more general) data, hence at most only first order accurate in general. Convergence for TVD-SOR schemes approximating convex or concave conservation laws is shown by enforcing a single discrete entropy inequality.
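
    A concrete member of the scheme class analysed here is a minmod-limited, conservation-form scheme for the convex conservation law u_t + (u²/2)_x = 0 (Burgers); the sketch below uses a local Lax-Friedrichs flux and SSP-RK2 time stepping, choices of ours rather than the paper's.

      import numpy as np

      def minmod(a, b):
          return np.where(a * b > 0,
                          np.sign(a) * np.minimum(np.abs(a), np.abs(b)), 0.0)

      def step(u, dx, dt):
          f = lambda v: 0.5 * v ** 2
          s = minmod(np.roll(u, -1) - u, u - np.roll(u, 1))    # limited slopes
          uL = u + 0.5 * s                                     # left state at i+1/2
          uR = np.roll(u - 0.5 * s, -1)                        # right state at i+1/2
          alpha = np.maximum(np.abs(uL), np.abs(uR))           # local wave speed
          F = 0.5 * (f(uL) + f(uR)) - 0.5 * alpha * (uR - uL)  # numerical flux
          return u - dt / dx * (F - np.roll(F, 1))             # conservation form

      N = 200
      x = np.linspace(0, 2 * np.pi, N, endpoint=False)
      u = 1.0 + 0.5 * np.sin(x)
      dx, dt = x[1] - x[0], 0.3 * (2 * np.pi / N) / 1.5
      tv0 = np.abs(np.diff(u)).sum()
      for _ in range(400):
          u1 = step(u, dx, dt)
          u = 0.5 * (u + step(u1, dx, dt))   # SSP-RK2 preserves the TVD property
      print(np.abs(np.diff(u)).sum() <= tv0 + 1e-8)   # total variation did not grow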

  19. Quasivariational Solutions for First Order Quasilinear Equations with Gradient Constraint

    NASA Astrophysics Data System (ADS)

    Rodrigues, José Francisco; Santos, Lisa

    2012-08-01

    We prove the existence of solutions for a quasi-variational inequality of evolution with a first order quasilinear operator and a variable convex set which is characterized by a constraint on the absolute value of the gradient that depends on the solution itself. The only required assumption on the nonlinearity of this constraint is its continuity and positivity. The method relies on an appropriate parabolic regularization and suitable a priori estimates. We also obtain the existence of stationary solutions by studying the asymptotic behaviour in time. In the variational case, corresponding to a constraint independent of the solution, we also give uniqueness results.

  20. On The Behavior of Subgradient Projections Methods for Convex Feasibility Problems in Euclidean Spaces

    PubMed Central

    Butnariu, Dan; Censor, Yair; Gurfil, Pini; Hadar, Ethan

    2010-01-01

    We study some methods of subgradient projections for solving a convex feasibility problem with general (not necessarily hyperplanes or half-spaces) convex sets in the inconsistent case and propose a strategy that controls the relaxation parameters in a specific self-adapting manner. This strategy leaves enough user-flexibility but gives a mathematical guarantee for the algorithm’s behavior in the inconsistent case. We present numerical results of computational experiments that illustrate the computational advantage of the new method. PMID:20182556
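
    The basic subgradient projection step the paper builds on is simple to state: for sets C_i = {x : g_i(x) ≤ 0}, cycle over the constraints and step along a subgradient whenever one is violated. The sketch below uses a fixed relaxation parameter; the paper's contribution is precisely a self-adapting control of that parameter, which is not reproduced here.

      import numpy as np

      # Cyclic subgradient projections for: find x with g_i(x) <= 0 for all i,
      # using x <- x - lam * max(g_i(x), 0) / |s|^2 * s, s a subgradient of g_i.
      def sgp(constraints, x0, lam=1.0, sweeps=200):
          x = np.asarray(x0, dtype=float)
          for _ in range(sweeps):
              for g, subgrad in constraints:
                  val = g(x)
                  if val > 0.0:
                      s = subgrad(x)
                      x = x - lam * val / (s @ s) * s
          return x

      # Two convex sets: the unit disk and the half-plane x0 >= 0.5.
      constraints = [
          (lambda x: x @ x - 1.0, lambda x: 2.0 * x),
          (lambda x: 0.5 - x[0],  lambda x: np.array([-1.0, 0.0])),
      ]
      print(sgp(constraints, x0=[3.0, 3.0]))   # lands in the intersection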

  1. On The Behavior of Subgradient Projections Methods for Convex Feasibility Problems in Euclidean Spaces.

    PubMed

    Butnariu, Dan; Censor, Yair; Gurfil, Pini; Hadar, Ethan

    2008-07-03

    We study some methods of subgradient projections for solving a convex feasibility problem with general (not necessarily hyperplanes or half-spaces) convex sets in the inconsistent case and propose a strategy that controls the relaxation parameters in a specific self-adapting manner. This strategy leaves enough user-flexibility but gives a mathematical guarantee for the algorithm's behavior in the inconsistent case. We present numerical results of computational experiments that illustrate the computational advantage of the new method.

  2. State estimation of spatio-temporal phenomena

    NASA Astrophysics Data System (ADS)

    Yu, Dan

    This dissertation addresses the state estimation problem for spatio-temporal phenomena which can be modeled by partial differential equations (PDEs), such as pollutant dispersion in the atmosphere. After discretizing the PDE, the dynamical system has a large number of degrees of freedom (DOF). State estimation using a Kalman Filter (KF) is computationally intractable, and hence a reduced order model (ROM) needs to be constructed first. Moreover, the nonlinear terms, external disturbances or unknown boundary conditions can be modeled as unknown inputs, which leads to an unknown input filtering problem. Furthermore, the performance of the KF could be improved by placing sensors at feasible locations; therefore, the sensor scheduling problem of placing multiple mobile sensors is of interest. The first part of the dissertation focuses on model reduction for large scale systems with a large number of inputs/outputs. A commonly used model reduction algorithm, the balanced proper orthogonal decomposition (BPOD) algorithm, is not computationally tractable for large systems with a large number of inputs/outputs. Inspired by the BPOD and randomized algorithms, we propose a randomized proper orthogonal decomposition (RPOD) algorithm and a computationally optimal RPOD (RPOD*) algorithm, which construct an ROM that captures the input-output behaviour of the full order model while reducing the computational cost of BPOD by orders of magnitude. It is demonstrated that the proposed RPOD* algorithm can construct the ROM in real time, and the performance of the proposed algorithms is demonstrated on different advection-diffusion equations. Next, we consider the state estimation problem for linear discrete-time systems with unknown inputs which can be treated as a wide-sense stationary process with rational power spectral density, while no other prior information needs to be known. We propose an autoregressive (AR) model based unknown input realization technique which allows us to recover the input statistics from the output data by solving an appropriate least squares problem, then fit an AR model to the recovered input statistics and construct an innovations model of the unknown inputs using the eigensystem realization algorithm. The proposed algorithm is shown in several examples to outperform the augmented two-stage Kalman Filter (ASKF) and the unbiased minimum-variance (UMV) algorithm. Finally, we propose a framework to place multiple mobile sensors to optimize the long-term performance of the KF in the estimation of the state of a PDE. The major challenges are that placing multiple sensors is an NP-hard problem, and that the optimization problem is non-convex in general. In this dissertation, first, we construct an ROM using the RPOD* algorithm, and then reduce the feasible sensor locations to a subset using the ROM. The Information Space Receding Horizon Control (I-RHC) approach and a modified Monte Carlo Tree Search (MCTS) approach are applied to solve the sensor scheduling problem using the subset. Various applications are provided to demonstrate the performance of the proposed approach.
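
    To convey the flavor of randomized model reduction, the sketch below extracts a POD basis from a snapshot matrix with a randomized SVD (Halko-style range sketching). It is explicitly not the dissertation's RPOD*/BPOD construction, which balances input-output behaviour; the toy snapshot field and all parameters are ours.

      import numpy as np

      def randomized_pod(snapshots, r, oversample=10, seed=0):
          rng = np.random.default_rng(seed)
          m, n = snapshots.shape
          omega = rng.standard_normal((n, r + oversample))  # random test matrix
          Y = snapshots @ omega                             # sketch the range
          Q, _ = np.linalg.qr(Y)                            # orthonormal range basis
          B = Q.T @ snapshots                               # small projected problem
          U_b, s, _ = np.linalg.svd(B, full_matrices=False)
          return (Q @ U_b)[:, :r], s[:r]                    # POD modes, singular values

      # Snapshots of a toy advected Gaussian field.
      x = np.linspace(0, 1, 500)
      t = np.linspace(0, 1, 200)
      S = np.array([np.exp(-50 * (x - 0.2 - 0.5 * ti) ** 2) for ti in t]).T
      modes, sv = randomized_pod(S, r=10)
      print(sv.round(3))   # the decay indicates how many modes the ROM needs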

  3. Comparison of thawing and freezing dark energy parametrizations

    NASA Astrophysics Data System (ADS)

    Pantazis, G.; Nesseris, S.; Perivolaropoulos, L.

    2016-05-01

    Dark energy equation of state w(z) parametrizations with two parameters and given monotonicity are generically either convex or concave functions. This makes them suitable for fitting either freezing or thawing quintessence models, but not both simultaneously. Fitting a data set based on a freezing model with an unsuitable (concave when increasing) w(z) parametrization [like Chevallier-Polarski-Linder (CPL)] can lead to significant misleading features like crossing of the phantom divide line, incorrect w(z=0), incorrect slope, etc., that are not present in the underlying cosmological model. To demonstrate this fact we generate scattered cosmological data at both the level of w(z) and the luminosity distance DL(z) based on either thawing or freezing quintessence models and fit them using parametrizations of convex and of concave type. We then compare statistically significant features of the best fit w(z) with actual features of the underlying model. We thus verify that the use of unsuitable parametrizations can lead to misleading conclusions. In order to avoid these problems it is important to either use both convex and concave parametrizations and select the one with the best χ², or use principal component analysis, thus splitting the redshift range into independent bins. In the latter case, however, significant information about the slope of w(z) at high redshifts is lost. Finally, we propose a new family of parametrizations w(z) = w0 + wa (z/(1+z))^n which generalizes the CPL and interpolates between thawing and freezing parametrizations as the parameter n increases to values larger than 1.
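
    The proposed family reduces to CPL for n = 1, and the mismatch effect the paper describes is easy to reproduce at the w(z) level: fit mock data drawn from a steeper (n > 1) member once with n free and once with n forced to 1. The mock parameter values and noise level below are ours.

      import numpy as np
      from scipy.optimize import curve_fit

      def w_family(z, w0, wa, n):
          return w0 + wa * (z / (1.0 + z)) ** n   # reduces to CPL for n = 1

      rng = np.random.default_rng(0)
      z = np.linspace(0.01, 2.0, 40)
      data = w_family(z, -0.95, 0.4, 3.0) + 0.01 * rng.standard_normal(z.size)

      popt_free, _ = curve_fit(w_family, z, data, p0=(-1.0, 0.0, 1.0))
      popt_cpl, _ = curve_fit(lambda z, w0, wa: w_family(z, w0, wa, 1.0),
                              z, data, p0=(-1.0, 0.0))
      print("free n :", np.round(popt_free, 3))   # recovers ~(-0.95, 0.4, 3.0)
      print("CPL fit:", np.round(popt_cpl, 3))    # biased w0 and slope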

  4. Phase-change memory function of correlated electrons in organic conductors

    NASA Astrophysics Data System (ADS)

    Oike, H.; Kagawa, F.; Ogawa, N.; Ueda, A.; Mori, H.; Kawasaki, M.; Tokura, Y.

    2015-01-01

    Phase-change memory (PCM), a promising candidate for next-generation nonvolatile memories, exploits quenched glassy and thermodynamically stable crystalline states as reversibly switchable state variables. We demonstrate PCM functions emerging from a charge-configuration degree of freedom in strongly correlated electron systems. Nonvolatile reversible switching between a high-resistivity charge-crystalline (or charge-ordered) state and a low-resistivity quenched state, the charge glass, is achieved experimentally via heat pulses supplied by optical or electrical means in the organic conductors θ-(BEDT-TTF)2X. Switching that is one order of magnitude faster is observed in another isostructural material that requires faster cooling to kinetically avoid charge crystallization, indicating that a material's critical cooling rate can be a useful guideline for pursuing faster correlated-electron PCM functions.

  5. Improved flight-simulator viewing lens

    NASA Technical Reports Server (NTRS)

    Kahlbaum, W. M.

    1979-01-01

    A triplet lens system uses two acrylic plastic double-convex lenses and one polystyrene plastic single-convex lens to reduce chromatic distortion and lateral aberration, especially at large field angles, within in-line systems of flight simulators.

  6. Stereotype locally convex spaces

    NASA Astrophysics Data System (ADS)

    Akbarov, S. S.

    2000-08-01

    We give complete proofs of some previously announced results in the theory of stereotype (that is, reflexive in the sense of Pontryagin duality) locally convex spaces. These spaces have important applications in topological algebra and functional analysis.

  7. Interface Shape Control Using Localized Heating during Bridgman Growth

    NASA Technical Reports Server (NTRS)

    Volz, M. P.; Mazuruk, K.; Aggarwal, M. D.; Croll, A.

    2008-01-01

    Numerical calculations were performed to assess the effect of localized radial heating on the melt-crystal interface shape during vertical Bridgman growth. System parameters examined include the ampoule, melt and crystal thermal conductivities, the magnitude and width of localized heating, and the latent heat of crystallization. Concave interface shapes, typical of semiconductor systems, could be flattened or made convex with localized heating. Although localized heating caused shallower thermal gradients ahead of the interface, the magnitude of the localized heating required for convexity was less than that which resulted in a thermal inversion ahead of the interface. A convex interface shape was most readily achieved with ampoules of lower thermal conductivity. Increasing melt convection tended to flatten the interface, but the amount of radial heating required to achieve a convex interface was essentially independent of the convection intensity.

  8. Pin stack array for thermoacoustic energy conversion

    DOEpatents

    Keolian, Robert M.; Swift, Gregory W.

    1995-01-01

    A thermoacoustic stack for connecting two heat exchangers in a thermoacoustic energy converter provides a convex fluid-solid interface in a plane perpendicular to an axis for acoustic oscillation of fluid between the two heat exchangers. The convex surfaces increase the ratio of the fluid volume in the effective thermoacoustic volume that is displaced from the convex surface to the fluid volume that is adjacent the surface within which viscous energy losses occur. Increasing the volume ratio results in an increase in the ratio of transferred thermal energy to viscous energy losses, with a concomitant increase in operating efficiency of the thermoacoustic converter. The convex surfaces may be easily provided by a pin array having elements arranged parallel to the direction of acoustic oscillations and with effective radial dimensions much smaller than the thicknesses of the viscous energy loss and thermoacoustic energy transfer volumes.

  9. On the complexity of a combined homotopy interior method for convex programming

    NASA Astrophysics Data System (ADS)

    Yu, Bo; Xu, Qing; Feng, Guochen

    2007-03-01

    In [G.C. Feng, Z.H. Lin, B. Yu, Existence of an interior pathway to a Karush-Kuhn-Tucker point of a nonconvex programming problem, Nonlinear Anal. 32 (1998) 761-768; G.C. Feng, B. Yu, Combined homotopy interior point method for nonlinear programming problems, in: H. Fujita, M. Yamaguti (Eds.), Advances in Numerical Mathematics, Proceedings of the Second Japan-China Seminar on Numerical Mathematics, Lecture Notes in Numerical and Applied Analysis, vol. 14, Kinokuniya, Tokyo, 1995, pp. 9-16; Z.H. Lin, B. Yu, G.C. Feng, A combined homotopy interior point method for convex programming problem, Appl. Math. Comput. 84 (1997) 193-211], a combined homotopy was constructed for solving non-convex programming and convex programming with weaker conditions, without assuming the logarithmic barrier function to be strictly convex and the solution set to be bounded. It was proven that a smooth interior path from an interior point of the feasible set to a K-K-T point of the problem exists. This shows that combined homotopy interior point methods can solve problems that commonly used interior point methods cannot solve. However, so far, there is no result on their complexity, even for linear programming. The main difficulty is that the objective function is not monotonically decreasing on the combined homotopy path. In this paper, by taking a piecewise technique, under commonly used conditions, polynomiality of a combined homotopy interior point method is given for convex nonlinear programming.
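
    For orientation, the plain log-barrier path-following method (a simpler relative of the combined homotopy, not the paper's construction) is sketched below for a linearly constrained convex program; the test problem and all tolerances are ours.

      import numpy as np

      # Log-barrier path following for: minimize c.x subject to A x <= b,
      # tracing the interior path as the barrier weight t grows.
      def barrier_solve(c, A, b, x0, t=1.0, mu=10.0, tol=1e-8):
          x = x0.copy()
          m = len(b)
          while m / t > tol:                    # duality-gap bound m/t
              for _ in range(50):               # damped Newton on the barrier
                  r = b - A @ x                 # slacks (must stay positive)
                  grad = t * c + A.T @ (1.0 / r)
                  H = A.T @ np.diag(1.0 / r ** 2) @ A
                  dx = np.linalg.solve(H, -grad)
                  step = 1.0
                  while np.any(b - A @ (x + step * dx) <= 0):
                      step *= 0.5               # stay strictly feasible
                  x = x + step * dx
                  if np.linalg.norm(grad) < 1e-10 * t:
                      break
              t *= mu
          return x

      # Example: minimize x0 + x1 over the box 0 <= x <= 1 (optimum at origin).
      A = np.vstack([np.eye(2), -np.eye(2)])
      b = np.array([1.0, 1.0, 0.0, 0.0])
      print(barrier_solve(np.array([1.0, 1.0]), A, b, x0=np.array([0.5, 0.5])))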

  10. Ordering Elements and Subsets: Examples for Student Understanding

    ERIC Educational Resources Information Center

    Mellinger, Keith E.

    2004-01-01

    Teaching the art of counting can be quite difficult. Many undergraduate students have difficulty separating the ideas of permutation, combination, repetition, etc. This article develops some examples to help explain some of the underlying theory while looking carefully at the selection of various subsets of objects from a larger collection. The…
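
    The distinction the article teaches is directly executable; a tiny illustrative example (the item list is ours):

      from itertools import combinations, permutations

      # Unordered subsets (combinations) vs ordered selections (permutations).
      items = ["a", "b", "c", "d"]
      print(list(combinations(items, 2)))   # 6 subsets: order does not matter
      print(list(permutations(items, 2)))   # 12 arrangements: order matters
      print(sum(1 for r in range(len(items) + 1)
                for _ in combinations(items, r)))   # 2**4 = 16 subsets in total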

  11. TH-EF-BRB-05: 4pi Non-Coplanar IMRT Beam Angle Selection by Convex Optimization with Group Sparsity Penalty

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    O’Connor, D; Nguyen, D; Voronenko, Y

    Purpose: Integrated beam orientation and fluence map optimization is expected to be the foundation of robust automated planning but existing heuristic methods do not promise global optimality. We aim to develop a new method for beam angle selection in 4π non-coplanar IMRT systems based on solving (globally) a single convex optimization problem, and to demonstrate the effectiveness of the method by comparison with a state of the art column generation method for 4π beam angle selection. Methods: The beam angle selection problem is formulated as a large scale convex fluence map optimization problem with an additional group sparsity term that encourages most candidate beams to be inactive. The optimization problem is solved using an accelerated first-order method, the Fast Iterative Shrinkage-Thresholding Algorithm (FISTA). The beam angle selection and fluence map optimization algorithm is used to create non-coplanar 4π treatment plans for several cases (including head and neck, lung, and prostate cases) and the resulting treatment plans are compared with 4π treatment plans created using the column generation algorithm. Results: In our experiments the treatment plans created using the group sparsity method meet or exceed the dosimetric quality of plans created using the column generation algorithm, which was shown superior to clinical plans. Moreover, the group sparsity approach converges in about 3 minutes in these cases, as compared with runtimes of a few hours for the column generation method. Conclusion: This work demonstrates the first non-greedy approach to non-coplanar beam angle selection, based on convex optimization, for 4π IMRT systems. The method given here improves both treatment plan quality and runtime as compared with a state of the art column generation algorithm. When the group sparsity term is set to zero, we obtain an excellent method for fluence map optimization, useful when beam angles have already been selected. NIH R43CA183390, NIH R01CA188300, Varian Medical Systems; Part of this research took place while D. O'Connor was a summer intern at RefleXion Medical.
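
    The core computational pattern is FISTA with a group-sparsity penalty: minimize 0.5‖Ax − d‖² + λ Σ_g ‖x_g‖₂, where each group collects the beamlets of one candidate beam, so whole beams are driven to zero. The prox of the group term is block soft-thresholding. The sketch below uses toy matrix sizes and omits the fluence nonnegativity constraint for simplicity; none of it is the abstract's clinical setup.

      import numpy as np

      rng = np.random.default_rng(0)
      n_beams, beamlets, n_vox = 20, 15, 400
      A = rng.random((n_vox, n_beams * beamlets))
      x_true = np.zeros(n_beams * beamlets)
      for bb in (2, 7, 11):                               # only 3 beams active
          x_true[bb * beamlets:(bb + 1) * beamlets] = rng.random(beamlets)
      d = A @ x_true

      L = np.linalg.norm(A, 2) ** 2                       # Lipschitz constant
      lam = 2.0
      x = np.zeros_like(x_true); y = x.copy(); t = 1.0
      for k in range(500):
          grad = A.T @ (A @ y - d)
          z = y - grad / L
          x_new = z.copy()
          # Block soft-thresholding: the prox of the group-l2 penalty.
          for bb in range(n_beams):
              g = slice(bb * beamlets, (bb + 1) * beamlets)
              nrm = np.linalg.norm(z[g])
              x_new[g] = max(0.0, 1.0 - (lam / L) / max(nrm, 1e-12)) * z[g]
          t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
          y = x_new + (t - 1.0) / t_new * (x_new - x)     # FISTA momentum
          x, t = x_new, t_new

      surviving = [bb for bb in range(n_beams)
                   if np.linalg.norm(x[bb * beamlets:(bb + 1) * beamlets]) > 1e-3]
      print("selected beams:", surviving)   # group sparsity performs beam selection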

  12. Effective light absorption and its enhancement factor for silicon nanowire-based solar cell.

    PubMed

    Duan, Zhiqiang; Li, Meicheng; Mwenya, Trevor; Fu, Pengfei; Li, Yingfeng; Song, Dandan

    2016-01-01

    Although nanowire (NW) antireflection coatings, which are widely used in crystalline silicon (CS) solar cells, can enhance light trapping, whether they improve light absorption in the CS body depends on the NW geometrical shape and geometrical parameters. In order to compare conveniently with bare silicon, two enhancement factors E(T) and E(A) are defined and introduced to quantitatively evaluate the light trapping capability of the NW antireflective layer and the effective light absorption capability of the CS body. Five different shapes (cylindrical, truncated conical, convex conical, conical, and concave conical) of silicon NW arrays arranged in a square are studied, and the theoretical results indicate that excellent light trapping does not mean that more light can be absorbed in the CS body. The convex conical NW has the best light trapping, but the concave conical NW has the best effective light absorption. Furthermore, if the cross section of the silicon NW is changed into a square, both light trapping and effective light absorption are enhanced, and the Eiffel-Tower-shaped NW arrays have optimal effective light absorption.

  13. Real-Time Generation of the Footprints both on Floor and Ground

    NASA Astrophysics Data System (ADS)

    Hirano, Yousuke; Tanaka, Toshimitsu; Sagawa, Yuji

    This paper presents a real-time method for generating various footprints according to the state of walking; the method covers footprints both on hard floors and on soft ground. Results of the previous method were not very realistic, because it places the same simple footprint pattern along the motion path. Our method instead runs filters on the original footprint pattern on the GPU and then grades the intensity of the pattern in two directions, in order to create partially dark footprints. The filter and gradation parameters are changed by movement speed and direction. The pattern is mapped onto a polygon; if the walker is pigeon-toed or bandy-legged, the polygon is rotated inward or outward, respectively. Finally, it is placed on the floor. Footprints on soft ground are the concavities and convexities caused by walking, so an original pattern of footprints on the ground is defined as a height map. The height map is modified using the filter and the gradation operations developed for floor footprints, and it is then converted to a bump map to display the concavity and convexity of footprints quickly.
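
    The height-map-to-bump-map step described above can be illustrated with a generic finite-difference normal-map construction. This is a sketch under assumed array conventions (rows as y, columns as x), not the authors' GPU implementation.

```python
import numpy as np

def height_to_normal_map(height, strength=1.0):
    """Convert a 2-D footprint height map to a tangent-space normal map.

    Central differences approximate the surface gradient; the normal
    (-dh/dx, -dh/dy, 1) is normalized and packed into [0, 1] for texturing.
    """
    dy, dx = np.gradient(height.astype(np.float64))   # rows = y, cols = x
    nx, ny = -strength * dx, -strength * dy
    nz = np.ones_like(nx)
    norm = np.sqrt(nx * nx + ny * ny + nz * nz)
    normal = np.stack([nx, ny, nz], axis=-1) / norm[..., None]
    return 0.5 * (normal + 1.0)                        # [-1, 1] -> [0, 1]
```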

  14. The exponentiated Hencky-logarithmic strain energy. Part II: Coercivity, planar polyconvexity and existence of minimizers

    NASA Astrophysics Data System (ADS)

    Neff, Patrizio; Lankeit, Johannes; Ghiba, Ionel-Dumitrel; Martin, Robert; Steigmann, David

    2015-08-01

    We consider a family of isotropic volumetric-isochoric decoupled strain energies based on the Hencky-logarithmic (true, natural) strain tensor log U, where μ > 0 is the infinitesimal shear modulus, κ = (2μ + 3λ)/3 is the infinitesimal bulk modulus with λ the first Lamé constant, k and k̂ are dimensionless parameters, F = ∇φ is the gradient of deformation, U = √(FᵀF) is the right stretch tensor and dev_n log U is the deviatoric part (the projection onto the traceless tensors) of the strain tensor log U. For small elastic strains, the energies reduce to first order to the classical quadratic Hencky energy which is known to be not rank-one convex. The main result in this paper is that in plane elastostatics the energies of the family are polyconvex for k ≥ 1/4 and k̂ ≥ 1/8, extending a previous finding on its rank-one convexity. Our method uses a judicious application of Steigmann's polyconvexity criteria based on the representation of the energy in terms of the principal invariants of the stretch tensor U. These energies also satisfy suitable growth and coercivity conditions. We formulate the equilibrium equations, and we prove the existence of minimizers by the direct methods of the calculus of variations.

  15. PILA: Sub-Meter Localization Using CSI from Commodity Wi-Fi Devices

    PubMed Central

    Tian, Zengshan; Li, Ze; Zhou, Mu; Jin, Yue; Wu, Zipeng

    2016-01-01

    The aim of this paper is to present a new indoor localization approach employing Angle-of-Arrival (AOA) and Received Signal Strength (RSS) measurements in a Wi-Fi network. To achieve this goal, we first collect Channel State Information (CSI) using commodity Wi-Fi devices with our designed three antennas to estimate the AOA of the Wi-Fi signal. Second, we propose a direct path identification algorithm to obtain the direct signal path, for the sake of reducing the interference of multipath effects on the AOA estimation. Third, we construct a new objective function to solve the localization problem by integrating the AOA and RSS information. Although the localization problem is non-convex, we use the Second-order Cone Programming (SOCP) relaxation approach to transform it into a convex problem. Finally, the effectiveness of our approach is verified based on a prototype implementation using commodity Wi-Fi devices. The experimental results show that our approach can achieve a median error of 0.7 m in an actual indoor environment. PMID:27735879
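
    One common way to pose the convex relaxation the authors mention is as a second-order cone program in which the hard equality constraints (distance to each anchor equals the measured range) are relaxed to cone constraints. The sketch below uses CVXPY with illustrative anchor positions and ranges; it is a generic range-based relaxation, not necessarily the paper's exact objective combining AOA and RSS.

```python
import numpy as np
import cvxpy as cp

# Illustrative anchors (e.g., Wi-Fi APs) and range estimates derived from RSS.
anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
ranges = np.array([7.1, 7.2, 7.0, 7.3])

x = cp.Variable(2)               # unknown position
t = cp.Variable(len(ranges))     # relaxed distances to each anchor

# Non-convex ||x - a_i|| = d_i is relaxed to the cone constraint t_i >= ||x - a_i||.
constraints = [t[i] >= cp.norm(x - anchors[i]) for i in range(len(ranges))]
prob = cp.Problem(cp.Minimize(cp.sum_squares(t - ranges)), constraints)
prob.solve()
print("estimated position:", x.value)
```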

  16. PILA: Sub-Meter Localization Using CSI from Commodity Wi-Fi Devices.

    PubMed

    Tian, Zengshan; Li, Ze; Zhou, Mu; Jin, Yue; Wu, Zipeng

    2016-10-10

    The aim of this paper is to present a new indoor localization approach employing Angle-of-Arrival (AOA) and Received Signal Strength (RSS) measurements in a Wi-Fi network. To achieve this goal, we first collect Channel State Information (CSI) using commodity Wi-Fi devices with our designed three antennas to estimate the AOA of the Wi-Fi signal. Second, we propose a direct path identification algorithm to obtain the direct signal path, for the sake of reducing the interference of multipath effects on the AOA estimation. Third, we construct a new objective function to solve the localization problem by integrating the AOA and RSS information. Although the localization problem is non-convex, we use the Second-order Cone Programming (SOCP) relaxation approach to transform it into a convex problem. Finally, the effectiveness of our approach is verified based on a prototype implementation using commodity Wi-Fi devices. The experimental results show that our approach can achieve a median error of 0.7 m in an actual indoor environment.

  17. IGES transformer and NURBS in grid generation

    NASA Technical Reports Server (NTRS)

    Yu, Tzu-Yi; Soni, Bharat K.

    1993-01-01

    In the fields of grid generation and CAD/CAM, there are numerous geometry output formats, which require the designer to spend a great deal of time manipulating geometrical entities in order to achieve a useful sculptured geometrical description for grid generation. In this process there is also a danger of losing fidelity of the geometry under consideration. This stresses the importance of a standard geometry definition for the communication link between varying CAD/CAM systems and grid systems. The IGES (Initial Graphics Exchange Specification) file is a widely used communication standard between CAD/CAM and analysis tools. The scientists at NASA Research Centers - including NASA Ames, NASA Langley, NASA Lewis, and NASA Marshall - recognized this importance, and in 1992 they formed the 'NASA-IGES' committee, which defines a subset of the standard IGES. This committee stresses the importance of, and encourages the CFD community to use, the standard IGES file for the interface between CAD/CAM and CFD analysis. Two of the IGES entities, the NURBS curve (Entity 126) and the NURBS surface (Entity 128), have many useful geometric properties, such as the convex hull property, local control, and affine invariance; widely utilized analytical geometries can also be accurately represented using NURBS. This is important in today's grid generation tools because of the emphasis on interactive design. To support geometry transformation between CAD/CAM systems and the grid generation field, CAGI (Computer Aided Geometry Design) was developed, which includes geometry transformation, geometry manipulation, and geometry generation, as well as the user interface. This paper presents the successful development of an IGES file transformer and the application of the NURBS definition in grid generation.
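
    The convex hull property mentioned above follows from the fact that the rational NURBS basis functions are nonnegative and sum to one, so every evaluated point is a convex combination of the control points. A minimal sketch of NURBS curve evaluation via the Cox-de Boor recursion is given below; names are illustrative, and u must lie in the half-open valid knot range.

```python
import numpy as np

def bspline_basis(i, p, u, knots):
    """Cox-de Boor recursion for the B-spline basis function N_{i,p}(u)."""
    if p == 0:
        return 1.0 if knots[i] <= u < knots[i + 1] else 0.0
    left = right = 0.0
    d = knots[i + p] - knots[i]
    if d > 0:
        left = (u - knots[i]) / d * bspline_basis(i, p - 1, u, knots)
    d = knots[i + p + 1] - knots[i + 1]
    if d > 0:
        right = (knots[i + p + 1] - u) / d * bspline_basis(i + 1, p - 1, u, knots)
    return left + right

def nurbs_point(u, degree, ctrl_pts, weights, knots):
    """Evaluate a NURBS curve point as a weighted rational B-spline sum."""
    num, den = np.zeros(len(ctrl_pts[0])), 0.0
    for i, (pt, w) in enumerate(zip(ctrl_pts, weights)):
        b = w * bspline_basis(i, degree, u, knots)
        num += b * np.asarray(pt, dtype=float)
        den += b
    return num / den
```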

  18. Convex Curved Crystal Spectrograph for Pulsed Plasma Sources.

    DTIC Science & Technology

    The geometry of a convex curved crystal spectrograph as applied to pulsed plasma sources is presented. Also presented are data from the dense plasma focus with particular emphasis on the absolute intensity of line radiations.

  19. Optimal boundary regularity for a singular Monge-Ampère equation

    NASA Astrophysics Data System (ADS)

    Jian, Huaiyu; Li, You

    2018-06-01

    In this paper we study the optimal global regularity for a singular Monge-Ampère type equation which arises from a few geometric problems. We find that the global regularity does not depend on the smoothness of the domain, but it does depend on the convexity of the domain. We introduce the (a, η) type to describe this convexity. As a result, we show that the more convex the domain is, the better the regularity of the solution becomes. In particular, the regularity is best near angular points.

  20. Compliant tactile sensor that delivers a force vector

    NASA Technical Reports Server (NTRS)

    Torres-Jara, Eduardo (Inventor)

    2010-01-01

    Tactile sensor. The sensor includes a compliant convex surface disposed above a sensor array, the sensor array adapted to respond to deformation of the convex surface to generate a signal related to an applied force vector. The applied force vector has three components that establish the direction and magnitude of an applied force. The compliant convex surface defines a dome with a hollow interior and has a linear relation between displacement and load; a magnet is disposed substantially at the center of the dome, above a sensor array that responds to magnetic field intensity.

  1. Convexity of level lines of Martin functions and applications

    NASA Astrophysics Data System (ADS)

    Gallagher, A.-K.; Lebl, J.; Ramachandran, K.

    2018-01-01

    Let Ω be an unbounded domain in ℝ × ℝᵈ. A positive harmonic function u on Ω that vanishes on the boundary of Ω is called a Martin function. In this note, we show that, when Ω is convex, the superlevel sets of a Martin function are also convex. As a consequence we obtain that if, in addition, Ω has certain symmetry with respect to the t-axis and ∂Ω is sufficiently flat, then the maximum of any Martin function along a slice Ω ∩ ({t} × ℝᵈ) is attained at (t, 0).

  2. The Compressible Stokes Flows with No-Slip Boundary Condition on Non-Convex Polygons

    NASA Astrophysics Data System (ADS)

    Kweon, Jae Ryong

    2017-03-01

    In this paper we study the compressible Stokes equations with no-slip boundary condition on non-convex polygons and show the best regularity that the solution can have without subtracting corner singularities. This is obtained by a suitable Helmholtz decomposition u = w + ∇φ_R with div w = 0 and a potential φ_R. Here w is the solution of the incompressible Stokes problem, and φ_R is defined by subtracting from the solution of the Neumann problem the leading two corner singularities at the non-convex vertices.

  3. Paraboloid-aspheric lenses free of spherical aberration

    NASA Astrophysics Data System (ADS)

    Lozano-Rincón, Ninfa del C.; Valencia-Estrada, Juan Camilo

    2017-07-01

    A method to design singlet paraboloid-aspheric lenses free of all orders of spherical aberration with maximum aperture is described. This work includes all parametric formulas describing paraboloid-aspheric or aspheric-paraboloid lenses for any finite conjugate planes. It also includes the Schwarzschild approximations (which can be used to calculate a rigorous propagation of light waves in physical optics) to design convex paraboloid-aspheric lenses for imaging an object at infinity, with explicit formulas to calculate thicknesses easily. The results were verified with ray-tracing software.

  4. Asymptotic stability estimates near an equilibrium point

    NASA Astrophysics Data System (ADS)

    Dumas, H. Scott; Meyer, Kenneth R.; Palacián, Jesús F.; Yanguas, Patricia

    2017-07-01

    We use the error bounds for adiabatic invariants found in the work of Chartier, Murua and Sanz-Serna [3] to bound the solutions of a Hamiltonian system near an equilibrium over exponentially long times. Our estimates depend only on the linearized system and not on the higher order terms as in KAM theory, nor do we require any steepness or convexity conditions as in Nekhoroshev theory. We require that the equilibrium point where our estimate applies satisfy a type of formal stability called Lie stability.

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Luo, Yousong, E-mail: yousong.luo@rmit.edu.au

    This paper deals with a class of optimal control problems governed by an initial-boundary value problem for a parabolic equation. The case of semi-linear boundary control is studied, where the control is applied to the system via the Wentzell boundary condition. The differentiability of the state variable with respect to the control is established, and hence a necessary condition is derived for the optimal solution in the case of both unconstrained and constrained problems. The condition is also sufficient for unconstrained convex problems. A second-order condition is also derived.

  6. Asteroid models from the Lowell photometric database

    NASA Astrophysics Data System (ADS)

    Ďurech, J.; Hanuš, J.; Oszkiewicz, D.; Vančo, R.

    2016-03-01

    Context. Information about shapes and spin states of individual asteroids is important for the study of the whole asteroid population. For asteroids from the main belt, most of the shape models available now have been reconstructed from disk-integrated photometry by the lightcurve inversion method. Aims: We want to significantly enlarge the current sample (~350) of available asteroid models. Methods: We use the lightcurve inversion method to derive new shape models and spin states of asteroids from the sparse-in-time photometry compiled in the Lowell Photometric Database. To speed up the time-consuming process of scanning the period parameter space with convex shape models, we use the distributed computing project Asteroids@home, running on the Berkeley Open Infrastructure for Network Computing (BOINC) platform. This way, the period-search interval is divided into hundreds of smaller intervals. These intervals are scanned separately by different volunteers and then joined together. We also use an alternative, faster approach when searching for the best-fit period, based on a triaxial ellipsoid model. In this way, we can independently confirm periods found with convex models and also find rotation periods for some of those asteroids for which the convex-model approach gives too many solutions. Results: From the analysis of Lowell photometric data of the first 100 000 numbered asteroids, we derived 328 new models. This almost doubles the number of available models. We tested the reliability of our results by comparing models that were derived from purely Lowell data with those based on dense lightcurves, and we found that the rate of false-positive solutions is very low. We also present updated plots of the distribution of spin obliquities and pole ecliptic longitudes that confirm previous findings about a non-uniform distribution of spin axes. However, the models reconstructed from noisy sparse data are heavily biased towards more elongated bodies with high lightcurve amplitudes. Conclusions: The Lowell Photometric Database is a rich and reliable source of information about the spin states of asteroids. We expect hundreds of other asteroid models for asteroids with numbers larger than 100 000 to be derivable from this data set. More models will become available when Lowell data are merged with other photometry. Tables 1 and 2 are only available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (ftp://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/587/A48

  7. Worst case estimation of homology design by convex analysis

    NASA Technical Reports Server (NTRS)

    Yoshikawa, N.; Elishakoff, Isaac; Nakagiri, S.

    1998-01-01

    The methodology of homology design is investigated for the optimum design of advanced structures for which the achievement of delicate tasks with the aid of an active control system is demanded. The proposed formulation of homology design, based on finite element sensitivity analysis, necessarily requires the specification of external loadings. A formulation to evaluate the worst case for homology design caused by uncertain fluctuation of the loadings is presented by means of the convex model of uncertainty, in which uncertainty variables are assigned to the discretized nodal forces and confined within a conceivable convex hull given as a hyperellipse. The worst case of distortion from the objective homologous deformation is estimated by the Lagrange multiplier method, searching for the point that maximizes the error index on the boundary of the convex hull. The validity of the proposed method is demonstrated in a numerical example using an eleven-bar truss structure.
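
    For a linearized error index, the Lagrange multiplier search described above has a closed form. The sketch below assumes the index is e(x) = c·x and the convex hull is the hyperellipse x·Wx = 1 with W symmetric positive definite; this is a simplification for illustration, not the paper's full finite element formulation.

```python
import numpy as np

def worst_case_on_hyperellipse(c, W):
    """Maximize e(x) = c @ x subject to x @ W @ x = 1 (W symmetric pos. def.).

    Stationarity of the Lagrangian, c = 2*mu*W@x, gives x proportional to
    W^{-1} c; scaling onto the ellipse yields the closed form below.
    """
    Winv_c = np.linalg.solve(W, c)
    x_star = Winv_c / np.sqrt(c @ Winv_c)      # worst-case load fluctuation
    return x_star, c @ x_star                  # and the worst-case error index
```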

  8. QUADRATIC SERENDIPITY FINITE ELEMENTS ON POLYGONS USING GENERALIZED BARYCENTRIC COORDINATES.

    PubMed

    Rand, Alexander; Gillette, Andrew; Bajaj, Chandrajit

    2014-01-01

    We introduce a finite element construction for use on the class of convex, planar polygons and show that it obtains a quadratic error convergence estimate. On a convex n-gon, our construction produces 2n basis functions, associated in a Lagrange-like fashion with each vertex and each edge midpoint, by transforming and combining a set of n(n+1)/2 basis functions known to obtain quadratic convergence. The technique broadens the scope of the so-called 'serendipity' elements, previously studied only for quadrilateral and regular hexahedral meshes, by employing the theory of generalized barycentric coordinates. Uniform a priori error estimates are established over the class of convex quadrilaterals with bounded aspect ratio as well as over the class of convex planar polygons satisfying additional shape regularity conditions to exclude large interior angles and short edges. Numerical evidence is provided on a trapezoidal quadrilateral mesh, previously not amenable to serendipity constructions, and applications to adaptive meshing are discussed.

  9. Transient disturbance growth in flows over convex surfaces

    NASA Astrophysics Data System (ADS)

    Karp, Michael; Hack, M. J. Philipp

    2017-11-01

    Flows over curved surfaces occur in a wide range of applications including airfoils, compressor and turbine vanes as well as aerial, naval and ground vehicles. In most of these applications the surface has convex curvature, while concave surfaces are less common. Since monotonic boundary-layer flows over convex surfaces are exponentially stable, they have received considerably less attention than flows over concave walls which are destabilized by centrifugal forces. Non-modal mechanisms may nonetheless enable significant disturbance growth which can make the flow susceptible to secondary instabilities. A parametric investigation of the transient growth and secondary instability of flows over convex surfaces is performed. The specific conditions yielding the maximal transient growth and strongest instability are identified. The effect of wall-normal and spanwise inflection points on the instability process is discussed. Finally, the role and significance of additional parameters, such as the geometry and pressure gradient, is analyzed.

  10. Clearance detector and method for motion and distance

    DOEpatents

    Xavier, Patrick G [Albuquerque, NM

    2011-08-09

    A method for correct and efficient detection of clearances between three-dimensional bodies in computer-based simulations, where one or both of the bodies is subject to translations and/or rotations. The method conservatively determines the size of such clearances and whether there is a collision between the bodies. Given two bodies, each of which is undergoing separate motions, the method utilizes bounding-volume hierarchy representations for the two bodies, along with mappings and inverse mappings for the motions of the two bodies. The method uses these representations, mappings, and direction vectors to determine the directionally furthest locations of points on the convex hulls of the volumes virtually swept by the bodies, and hence the clearance between the bodies, without having to calculate the convex hulls themselves. The method includes clearance detection for bodies comprising convex geometrical primitives, with more specific techniques for bodies comprising convex polyhedra.
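
    The directionally-furthest-point computation at the heart of the method reduces, for a convex hull given by its vertices, to a support mapping: no explicit hull construction is needed. The sketch below shows only this primitive, not the patent's bounding-volume hierarchy or swept-volume machinery.

```python
import numpy as np

def support_point(vertices, direction):
    """Vertex of the convex hull of `vertices` furthest along `direction`.

    The maximum of a linear function over a convex hull is attained at a
    vertex, so a dot-product scan over the vertices suffices.
    """
    v = np.asarray(vertices, dtype=float)
    return v[np.argmax(v @ np.asarray(direction, dtype=float))]

# A conservative separation check along direction d then compares the support
# point of one body along d with that of the other body along -d.
```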

  11. Anomalous dynamics triggered by a non-convex equation of state in relativistic flows

    NASA Astrophysics Data System (ADS)

    Ibáñez, J. M.; Marquina, A.; Serna, S.; Aloy, M. A.

    2018-05-01

    The non-monotonicity of the local speed of sound in dense matter at baryon number densities much higher than the nuclear saturation density (n₀ ≈ 0.16 fm⁻³) suggests the possible existence of a non-convex thermodynamics which will lead to a non-convex dynamics. Here, we explore the rich and complex dynamics that an equation of state (EoS) with non-convex regions in the pressure-density plane may develop as a result of genuinely relativistic effects, without a classical counterpart. To this end, we have introduced a phenomenological EoS, the parameters of which can be restricted owing to causality and thermodynamic stability constraints. This EoS can be regarded as a toy model with which we may mimic realistic (and far more complex) EoSs of practical use in the realm of relativistic hydrodynamics.

  12. Intraday Seasonalities and Nonstationarity of Trading Volume in Financial Markets: Individual and Cross-Sectional Features

    PubMed Central

    Graczyk, Michelle B.; Duarte Queirós, Sílvio M.

    2016-01-01

    We study the intraday behaviour of the statistical moments of the trading volume of the blue chip equities that composed the Dow Jones Industrial Average index between 2003 and 2014. By splitting that time interval into semesters, we provide a quantitative account of the nonstationary nature of the intraday statistical properties as well. Explicitly, we show that the well-known ∪-shape exhibited by the average trading volume—as well as the volatility of the price fluctuations—experienced a significant change from 2008 (the year of the “subprime” financial crisis) onwards. That has resulted in a faster relaxation after the market opening and relates to a consistent decrease in the convexity of the average trading volume intraday profile. Simultaneously, the last part of the session has become steeper as well, a modification that is likely to have been triggered by the new short-selling rules that were introduced in 2007 by the Securities and Exchange Commission. The combination of both results reveals that the ∪ has been turning into a ⊔. Additionally, the analysis of higher-order cumulants—namely the skewness and the kurtosis—shows that the morning and the afternoon parts of the trading session are each clearly associated with different statistical features and hence dynamical rules. Concretely, we claim that the large initial trading volume is due to wayward stocks whereas the large volume during the last part of the session hinges on a cohesive increase of the trading volume. That dissimilarity between the two parts of the trading session is stressed in periods of higher uproar in the market. PMID:27812141

  13. Vapour-Phase Processes Control Liquid-Phase Isotope Profiles in Unsaturated Sphagnum Moss

    NASA Astrophysics Data System (ADS)

    Edwards, T. W.; Yi, Y.; Price, J. S.; Whittington, P. N.

    2009-05-01

    Seminal work in the early 1980s clearly established the basis for predicting patterns of heavy-isotope enrichment of pore waters in soils undergoing evaporation. A key feature of the process under steady-state conditions is the development of stable, convex-upward profiles whose shape is controlled by the balance between downward-diffusing heavy isotopologues concentrated by evaporative enrichment at the surface and the upward capillary flow of bulk water that maintains the evaporative flux. We conducted an analogous experiment to probe evaporation processes within 20-cm columns of unsaturated, living and dead (but undecomposed) Sphagnum moss evaporating under controlled conditions, while maintaining a constant water table. The experiment provided striking evidence of the importance of vapour-liquid mass and isotope exchange in the air-filled pores of the Sphagnum columns, as evidenced by the rapid development of hydrologic and isotopic steady-state within hours, rather than days, i.e., an order of magnitude faster than possible by liquid-phase processes alone. This is consistent with the notion that vapour-phase processes effectively "short-circuit" mass and isotope fluxes within the Sphagnum columns, as proposed also in recent characterizations of water dynamics in transpiring leaves. Additionally, advection-diffusion modelling of our results supports independent estimates of the effective liquid-phase diffusivities of the respective heavy water isotopologues, 2.380 × 10⁻⁵ cm² s⁻¹ for ¹H¹H¹⁸O and 2.415 × 10⁻⁵ cm² s⁻¹ for ¹H²H¹⁶O, which are in notably good agreement with the "default" values that are typically assumed in soil and plant water studies.

  14. A systems biology approach to the analysis of subset-specific responses to lipopolysaccharide in dendritic cells.

    PubMed

    Hancock, David G; Shklovskaya, Elena; Guy, Thomas V; Falsafi, Reza; Fjell, Chris D; Ritchie, William; Hancock, Robert E W; Fazekas de St Groth, Barbara

    2014-01-01

    Dendritic cells (DCs) are critical for regulating CD4 and CD8 T cell immunity, controlling Th1, Th2, and Th17 commitment, generating inducible Tregs, and mediating tolerance. It is believed that distinct DC subsets have evolved to control these different immune outcomes. However, how DC subsets mount different responses to inflammatory and/or tolerogenic signals in order to accomplish their divergent functions remains unclear. Lipopolysaccharide (LPS) provides an excellent model for investigating responses in closely related splenic DC subsets, as all subsets express the LPS receptor TLR4 and respond to LPS in vitro. However, previous studies of the LPS-induced DC transcriptome have been performed only on mixed DC populations. Moreover, comparisons of the in vivo response of two closely related DC subsets to LPS stimulation have not been reported in the literature to date. We compared the transcriptomes of murine splenic CD8 and CD11b DC subsets after in vivo LPS stimulation, using RNA-Seq and systems biology approaches. We identified subset-specific gene signatures, which included multiple functional immune mediators unique to each subset. To explain the observed subset-specific differences, we used a network analysis approach. While both DC subsets used a conserved set of transcription factors and major signalling pathways, the subsets showed differential regulation of sets of genes that 'fine-tune' the network hubs expressed in common. We propose a model in which signalling through common pathway components is 'fine-tuned' by transcriptional control of subset-specific modulators, thus allowing for distinct functional outcomes in closely related DC subsets. We extend this analysis to comparable datasets from the literature and confirm that our model can account for cell subset-specific responses to LPS stimulation in multiple subpopulations in mouse and man.

  15. Optimization with Fuzzy Data via Evolutionary Algorithms

    NASA Astrophysics Data System (ADS)

    Kosiński, Witold

    2010-09-01

    Ordered fuzzy numbers (OFN), which make it possible to deal with fuzzy inputs quantitatively, exactly in the same way as with real numbers, have recently been defined by the author and his two coworkers. The set of OFN forms a normed space and is a partially ordered ring. The case when the numbers are represented by step functions, with finite resolution, simplifies all operations and the representation of the defuzzification functionals. A general optimization problem with fuzzy data is formulated; its fitness function attains fuzzy values. Since the adjoint space to the space of OFN is finite dimensional, a convex combination of all linear defuzzification functionals may be used to introduce a total order and a real-valued fitness function. Genetic operations on individuals representing fuzzy data are defined.
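
    A toy version of the step-function representation and the defuzzification step can be sketched as follows. The two-array representation and the particular functionals chosen are illustrative assumptions, not Kosiński's exact constructions.

```python
import numpy as np

class StepOFN:
    """Ordered fuzzy number as a pair of step functions (f, g) on [0, 1],
    each sampled at a fixed finite resolution (illustrative representation)."""

    def __init__(self, f, g):
        self.f, self.g = np.asarray(f, float), np.asarray(g, float)

    def __add__(self, other):
        # Arithmetic acts pointwise, exactly as for real numbers.
        return StepOFN(self.f + other.f, self.g + other.g)

    def defuzzify(self, weights=(0.25, 0.25, 0.5)):
        """Convex combination of three linear defuzzification functionals
        (two endpoint evaluations and the mean), giving a real number."""
        w1, w2, w3 = weights
        mean = 0.5 * (self.f.mean() + self.g.mean())
        return w1 * self.f[0] + w2 * self.g[0] + w3 * mean
```

    Ranking individuals by such a defuzzified value induces the total order that the abstract uses to turn a fuzzy-valued fitness into a real-valued one.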

  16. Display-wide influences on figure-ground perception: the case of symmetry.

    PubMed

    Mojica, Andrew J; Peterson, Mary A

    2014-05-01

    Past research has demonstrated that convex regions are increasingly likely to be perceived as figures as the number of alternating convex and concave regions in test displays increases. This region-number effect depends on both a small preexisting preference for convex over concave objects and the presence of scene characteristics (i.e., uniform fill) that allow the integration of the concave regions into a background object/surface. These factors work together to enable the percept of convex objects in front of a background. We investigated whether region-number effects generalize to another property, symmetry, whose effectiveness as a figure property has been debated. Observers reported which regions they perceived as figures in black-and-white displays with alternating symmetric/asymmetric regions. In Experiments 1 and 2, the displays had articulated outer borders that preserved the symmetry/asymmetry of the outermost regions. Region-number effects were not observed, although symmetric regions were perceived as figures more often than chance. We hypothesized that the articulated outer borders prevented fitting a background interpretation to the asymmetric regions. In Experiment 3, we used straight-edge framelike outer borders and observed region-number effects for symmetry equivalent to those observed for convexity. These results (1) show that display-wide information affects figure assignment at a border, (2) extend the evidence indicating that the ability to fit background as well as foreground interpretations is critical in figure assignment, (3) reveal that symmetry and convexity are equally effective figure cues, and (4) demonstrate that symmetry serves as a figural property only when it is close to fixation.

  17. Computer access security code system

    NASA Technical Reports Server (NTRS)

    Collins, Earl R., Jr. (Inventor)

    1990-01-01

    A security code system for controlling access to computers and computer-controlled entry situations comprises a plurality of subsets of alphanumeric characters disposed in random order in matrices of at least two dimensions, forming theoretical rectangles, cubes, etc., such that when access is desired, at least one pair of previously unused character subsets not found in the same row or column of the matrix is chosen at random and transmitted by the computer. The proper response to gain access is transmittal of the subsets that complete the rectangle and/or parallelepiped whose opposite corners were defined by the first groups of code. Once used, subsets are not used again, in order to absolutely defeat unauthorized access by eavesdropping and the like.
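
    A toy two-dimensional version of the challenge-response idea can be sketched as below; it omits the bookkeeping that retires used subsets and the higher-dimensional (parallelepiped) variant. All names and the per-cell subset format are illustrative.

```python
import random
import string

SIZE = 6
# Matrix of random character subsets (here, a letter-digit pair per cell).
matrix = [[random.choice(string.ascii_uppercase) + random.choice(string.digits)
           for _ in range(SIZE)] for _ in range(SIZE)]

def challenge():
    """Pick two cells in distinct rows and distinct columns."""
    r1, r2 = random.sample(range(SIZE), 2)
    c1, c2 = random.sample(range(SIZE), 2)
    return (r1, c1), (r2, c2)

def expected_response(cell_a, cell_b):
    """The valid reply: the subsets at the rectangle's two opposite corners."""
    (r1, c1), (r2, c2) = cell_a, cell_b
    return matrix[r1][c2], matrix[r2][c1]

cells = challenge()
print("challenge:", [matrix[r][c] for r, c in cells])
print("expected response:", expected_response(*cells))
```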

  18. Metastable neural dynamics mediates expectation

    NASA Astrophysics Data System (ADS)

    Mazzucato, Luca; La Camera, Giancarlo; Fontanini, Alfredo

    Sensory stimuli are processed faster when their presentation is expected compared to when they come as a surprise. We previously showed that, in multiple single-unit recordings from alert rat gustatory cortex, taste stimuli can be decoded faster from neural activity if preceded by a stimulus-predicting cue. However, the specific computational process mediating this anticipatory neural activity is unknown. Here, we propose a biologically plausible model based on a recurrent network of spiking neurons with clustered architecture. In the absence of stimulation, the model neural activity unfolds through sequences of metastable states, each state being a population vector of firing rates. We modeled taste stimuli and cue (the same for all stimuli) as two inputs targeting subsets of excitatory neurons. As observed in experiment, stimuli evoked specific state sequences, characterized in terms of `coding states', i.e., states occurring significantly more often for a particular stimulus. When stimulus presentation is preceded by a cue, coding states show a faster and more reliable onset, and expected stimuli can be decoded more quickly than unexpected ones. This anticipatory effect is unrelated to changes of firing rates in stimulus-selective neurons and is absent in homogeneous balanced networks, suggesting that a clustered organization is necessary to mediate the expectation of relevant events. Our results demonstrate a novel mechanism for speeding up sensory coding in cortical circuits. NIDCD K25-DC013557 (LM); NIDCD R01-DC010389 (AF); NSF IIS-1161852 (GL).

  19. Hillslope chemical weathering across Paraná, Brazil: a data mining-GIS hybrid approach

    USGS Publications Warehouse

    Iwashita, Fabio; Friedel, Michael J.; Filho, Carlos Roberto de Souza; Fraser, Stephen J.

    2011-01-01

    Self-organizing map (SOM) and geographic information system (GIS) models were used to investigate the nonlinear relationships associated with geochemical weathering processes at local (~100 km²) and regional (~50,000 km²) scales. The data set consisted of 1) 22 B-horizon soil variables: P, C, pH, Al, total acidity, Ca, Mg, K, total cation exchange capacity, sum of exchangeable bases, base saturation, Cu, Zn, Fe, B, S, Mn, gamma-spectrometry (total count, potassium, thorium, and uranium) and magnetic susceptibility measures; and 2) six topographic variables: elevation, slope, aspect, hydrological accumulated flux, horizontal curvature and vertical curvature. The data were characterized at 304 locations on a quasi-regular grid spaced about 24 km across the state of Paraná. This database was split into two subsets: one for analysis and modeling (274 samples) and the other for validation (30 samples). The self-organizing map and clustering methods were used to identify and classify the relations among solid-phase chemical element concentrations and GIS-derived topographic models. The correlation between elevation and k-means clusters reflected the relative position inside hydrologic macro-basins, which was interpreted as an expression of the weathering process reaching a steady-state condition at the regional scale. Locally, the chemical element concentrations were related to the vertical curvature representing concave–convex hillslope features, where concave hillslopes with convergent flux tend to be reducing environments and convex hillslopes with divergent flux, oxidizing environments. Stochastic cross-validation demonstrated that the SOM produced unbiased classifications and quantified the relative amount of uncertainty in predictions. This work strengthens the hypothesis that, at B-horizon steady-state conditions, terrain morphometry is linked with soil geochemical weathering in a two-way dependent process: topographic relief is a factor in environmental geochemistry, while chemical weathering contributes to terrain feature delineation.

  20. Hillslope chemical weathering across Paraná, Brazil: A data mining-GIS hybrid approach

    NASA Astrophysics Data System (ADS)

    Iwashita, Fabio; Friedel, Michael J.; Filho, Carlos Roberto de Souza; Fraser, Stephen J.

    2011-09-01

    Self-organizing map (SOM) and geographic information system (GIS) models were used to investigate the nonlinear relationships associated with geochemical weathering processes at local (~100 km²) and regional (~50,000 km²) scales. The data set consisted of 1) 22 B-horizon soil variables: P, C, pH, Al, total acidity, Ca, Mg, K, total cation exchange capacity, sum of exchangeable bases, base saturation, Cu, Zn, Fe, B, S, Mn, gamma-spectrometry (total count, potassium, thorium, and uranium) and magnetic susceptibility measures; and 2) six topographic variables: elevation, slope, aspect, hydrological accumulated flux, horizontal curvature and vertical curvature. The data were characterized at 304 locations on a quasi-regular grid spaced about 24 km across the state of Paraná. This database was split into two subsets: one for analysis and modeling (274 samples) and the other for validation (30 samples). The self-organizing map and clustering methods were used to identify and classify the relations among solid-phase chemical element concentrations and GIS-derived topographic models. The correlation between elevation and k-means clusters reflected the relative position inside hydrologic macro-basins, which was interpreted as an expression of the weathering process reaching a steady-state condition at the regional scale. Locally, the chemical element concentrations were related to the vertical curvature representing concave-convex hillslope features, where concave hillslopes with convergent flux tend to be reducing environments and convex hillslopes with divergent flux, oxidizing environments. Stochastic cross-validation demonstrated that the SOM produced unbiased classifications and quantified the relative amount of uncertainty in predictions. This work strengthens the hypothesis that, at B-horizon steady-state conditions, terrain morphometry is linked with soil geochemical weathering in a two-way dependent process: topographic relief is a factor in environmental geochemistry, while chemical weathering contributes to terrain feature delineation.
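
    The SOM step used in these two studies can be illustrated with a minimal NumPy implementation of the standard best-matching-unit update with a shrinking Gaussian neighborhood. The grid size, decay schedules, and the assumption of standardized input rows are illustrative choices, not the studies' configuration.

```python
import numpy as np

def train_som(data, grid=(8, 8), n_iter=2000, lr0=0.5, sigma0=3.0, seed=0):
    """Minimal self-organizing map; `data` is (n_samples, n_features),
    ideally standardized. Returns the codebook of shape grid + (n_features,)."""
    rng = np.random.default_rng(seed)
    h, w = grid
    weights = rng.normal(size=(h, w, data.shape[1]))
    coords = np.stack(np.meshgrid(np.arange(h), np.arange(w), indexing="ij"), axis=-1)
    for it in range(n_iter):
        x = data[rng.integers(len(data))]
        frac = it / n_iter                      # linear decay of both schedules
        lr, sigma = lr0 * (1 - frac), sigma0 * (1 - frac) + 1e-3
        # Best-matching unit, then a Gaussian neighborhood pull toward x.
        bmu = np.unravel_index(np.argmin(((weights - x) ** 2).sum(-1)), (h, w))
        dist2 = ((coords - np.asarray(bmu)) ** 2).sum(-1)
        influence = np.exp(-dist2 / (2 * sigma ** 2))[..., None]
        weights += lr * influence * (x - weights)
    return weights
```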

  1. On the convergence of difference approximations to scalar conservation laws

    NASA Technical Reports Server (NTRS)

    Osher, Stanley; Tadmor, Eitan

    1988-01-01

    A unified treatment is given for time-explicit, two-level, second-order-resolution (SOR), total-variation-diminishing (TVD) approximations to scalar conservation laws. The schemes are assumed only to have conservation form and incremental form. A modified flux and a viscosity coefficient are introduced to obtain results in terms of the latter. The existence of a cell entropy inequality is discussed, and such an inequality for all entropies is shown to imply that the scheme is an E scheme on monotone (actually more general) data, hence at most only first-order accurate in general. Convergence for TVD-SOR schemes approximating convex or concave conservation laws is shown by enforcing a single discrete entropy inequality.

  2. Trace Norm Regularized CANDECOMP/PARAFAC Decomposition With Missing Data.

    PubMed

    Liu, Yuanyuan; Shang, Fanhua; Jiao, Licheng; Cheng, James; Cheng, Hong

    2015-11-01

    In recent years, low-rank tensor completion (LRTC) problems have received a significant amount of attention in computer vision, data mining, and signal processing. The existing trace norm minimization algorithms for iteratively solving LRTC problems involve multiple singular value decompositions of very large matrices at each iteration and therefore suffer from high computational cost. In this paper, we propose a novel trace norm regularized CANDECOMP/PARAFAC decomposition (TNCP) method for simultaneous tensor decomposition and completion. We first formulate a factor matrix rank minimization model by deducing the relation between the rank of each factor matrix and the mode-n rank of a tensor. Then, we introduce a tractable relaxation of our rank function and thereby obtain a convex problem built from much smaller-scale matrix trace norm minimizations. Finally, we develop an efficient algorithm based on the alternating direction method of multipliers to solve our problem. The promising experimental results on synthetic and real-world data validate the effectiveness of our TNCP method. Moreover, TNCP is significantly faster than the state-of-the-art methods and scales to larger problems.
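
    The expensive operation the authors avoid, a singular value decomposition of each large unfolding, appears in trace norm methods as the proximal operator of the nuclear norm (singular value thresholding). The sketch below shows that baseline operation together with mode-n matricization; TNCP's contribution is precisely to replace full-size SVDs with factor-matrix ones, so this is context, not the paper's algorithm.

```python
import numpy as np

def unfold(tensor, mode):
    """Mode-n matricization: the chosen mode becomes the rows."""
    return np.moveaxis(tensor, mode, 0).reshape(tensor.shape[mode], -1)

def svt(M, tau):
    """Singular value thresholding: prox of tau * (nuclear norm) at M."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt
```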

  3. SIMULTANEOUS MULTISLICE MAGNETIC RESONANCE FINGERPRINTING WITH LOW-RANK AND SUBSPACE MODELING

    PubMed Central

    Zhao, Bo; Bilgic, Berkin; Adalsteinsson, Elfar; Griswold, Mark A.; Wald, Lawrence L.; Setsompop, Kawin

    2018-01-01

    Magnetic resonance fingerprinting (MRF) is a new quantitative imaging paradigm that enables simultaneous acquisition of multiple magnetic resonance tissue parameters (e.g., T1, T2, and spin density). Recently, MRF has been integrated with simultaneous multislice (SMS) acquisitions to enable volumetric imaging with faster scan time. In this paper, we present a new image reconstruction method based on low-rank and subspace modeling for improved SMS-MRF. Here the low-rank model exploits strong spatiotemporal correlation among contrast-weighted images, while the subspace model captures the temporal evolution of magnetization dynamics. With the proposed model, the image reconstruction problem is formulated as a convex optimization problem, for which we develop an algorithm based on variable splitting and the alternating direction method of multipliers. The performance of the proposed method has been evaluated by numerical experiments, and the results demonstrate that the proposed method leads to improved accuracy over the conventional approach. Practically, the proposed method has a potential to allow for a 3x speedup with minimal reconstruction error, resulting in less than 5 sec imaging time per slice. PMID:29060594

  4. Multi-objective optimal dispatch of distributed energy resources

    NASA Astrophysics Data System (ADS)

    Longe, Ayomide

    This thesis is composed of two papers that investigate optimal dispatch for distributed energy resources. In the first paper, an economic dispatch problem for a community microgrid is studied. In this microgrid, each agent pursues an economic dispatch for its personal resources. In addition, each agent is capable of trading electricity with other agents through a local energy market. In this paper, a simple market structure is introduced as a framework for energy trades in a small community microgrid such as the Solar Village. It was found that both sellers and buyers benefited by participating in this market. In the second paper, semidefinite programming (SDP) for convex relaxation of the power flow equations is used for optimal active and reactive dispatch of Distributed Energy Resources (DER). Various objective functions, including voltage regulation, reduced transmission line power losses, and minimized reactive power charges for a microgrid, are introduced. Combinations of these goals are attained by solving a multiobjective optimization for the proposed optimal reactive power dispatch (ORPD) problem. Both centralized and distributed versions of this optimal dispatch are investigated. It was found that SDP made the optimal dispatch faster, and the distributed solution allowed for scalability.

  5. CHRR: coordinate hit-and-run with rounding for uniform sampling of constraint-based models.

    PubMed

    Haraldsdóttir, Hulda S; Cousins, Ben; Thiele, Ines; Fleming, Ronan M T; Vempala, Santosh

    2017-06-01

    In constraint-based metabolic modelling, physical and biochemical constraints define a polyhedral convex set of feasible flux vectors. Uniform sampling of this set provides an unbiased characterization of the metabolic capabilities of a biochemical network. However, reliable uniform sampling of genome-scale biochemical networks is challenging due to their high dimensionality and inherent anisotropy. Here, we present an implementation of a new sampling algorithm, coordinate hit-and-run with rounding (CHRR). This algorithm is based on the provably efficient hit-and-run random walk and crucially uses a preprocessing step to round the anisotropic flux set. CHRR provably converges to a uniform stationary sampling distribution. We apply it to metabolic networks of increasing dimensionality. We show that it converges several times faster than a popular artificial centering hit-and-run algorithm, enabling reliable and tractable sampling of genome-scale biochemical networks. Availability and implementation: https://github.com/opencobra/cobratoolbox. Contact: ronan.mt.fleming@gmail.com or vempala@cc.gatech.edu. Supplementary data are available at Bioinformatics online. © The Author 2017. Published by Oxford University Press.
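
    The coordinate hit-and-run walk itself is compact; the sketch below samples a bounded polytope {x : Ax <= b} from a feasible start, omitting the rounding preprocessing that the paper shows is crucial for anisotropic flux sets.

```python
import numpy as np

def coordinate_hit_and_run(A, b, x0, n_samples=1000, seed=0):
    """Random walk on {x : A x <= b} (assumed bounded) targeting uniformity.

    Each step picks a coordinate direction, intersects that line with the
    polytope to get a chord, and samples the next point uniformly on it.
    """
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float).copy()
    samples = []
    for _ in range(n_samples):
        j = rng.integers(len(x))
        slack = b - A @ x                 # nonnegative while x is feasible
        col = A[:, j]
        with np.errstate(divide="ignore", invalid="ignore"):
            ratios = slack / col
        hi = ratios[col > 0].min()        # chord limits along +e_j and -e_j
        lo = ratios[col < 0].max()
        x[j] += rng.uniform(lo, hi)
        samples.append(x.copy())
    return np.array(samples)
```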

  6. Simultaneous multislice magnetic resonance fingerprinting with low-rank and subspace modeling.

    PubMed

    Bo Zhao; Bilgic, Berkin; Adalsteinsson, Elfar; Griswold, Mark A; Wald, Lawrence L; Setsompop, Kawin

    2017-07-01

    Magnetic resonance fingerprinting (MRF) is a new quantitative imaging paradigm that enables simultaneous acquisition of multiple magnetic resonance tissue parameters (e.g., T 1 , T 2 , and spin density). Recently, MRF has been integrated with simultaneous multislice (SMS) acquisitions to enable volumetric imaging with faster scan time. In this paper, we present a new image reconstruction method based on low-rank and subspace modeling for improved SMS-MRF. Here the low-rank model exploits strong spatiotemporal correlation among contrast-weighted images, while the subspace model captures the temporal evolution of magnetization dynamics. With the proposed model, the image reconstruction problem is formulated as a convex optimization problem, for which we develop an algorithm based on variable splitting and the alternating direction method of multipliers. The performance of the proposed method has been evaluated by numerical experiments, and the results demonstrate that the proposed method leads to improved accuracy over the conventional approach. Practically, the proposed method has a potential to allow for a 3× speedup with minimal reconstruction error, resulting in less than 5 sec imaging time per slice.

  7. A description of an ‘obesogenic’ eating style that promotes higher energy intake and is associated with greater adiposity in 4.5 year-old children: Results from the GUSTO cohort

    PubMed Central

    Fogel, Anna; Goh, Ai Ting; Fries, Lisa R.; Sadananthan, Suresh Anand; Velan, S. Sendhil; Michael, Navin; Tint, Mya Thway; Fortier, Marielle Valerie; Chan, Mei Jun; Toh, Jia Ying; Chong, Yap-Seng; Tan, Kok Hian; Yap, Fabian; Shek, Lynette P.; Meaney, Michael J.; Broekman, Birit F. P.; Lee, Yung Seng; Godfrey, Keith M.; Chong, Mary Foong Fong; Forde, Ciarán G.

    2017-01-01

    Recent findings confirm that faster eating rates support higher energy intakes within a meal and are associated with increased body weight and adiposity in children. The current study sought to identify the eating behaviours that underpin faster eating rates and energy intake in children, and to investigate their variations by weight status and other individual differences. Children (N=386) from the Growing Up in Singapore towards Healthy Outcomes (GUSTO) cohort took part in a video-recorded ad libitum lunch at 4.5 years of age to measure acute energy intake. Videos were coded for three eating behaviours (bites, chews and swallows) to derive a measure of eating rate (g/min) and measures of eating microstructure: eating rate (g/min), total oral exposure (minutes), average bite size (g/bite), chews per gram, oral exposure per bite (seconds), total bites and proportion of active to total mealtime. Children’s BMIs were calculated and a subset of children underwent MRI scanning to establish abdominal adiposity. Children were grouped into faster and slower eaters, and into healthy and overweight groups to compare their eating behaviours. Results demonstrate that faster eating rates were correlated with larger average bite size (r=0.55, p<0.001), fewer chews per gram (r=-0.71, p<0.001) and shorter oral exposure time per bite (r=-0.25, p<0.001), and with higher energy intakes (r=0.61, p<0.001). Children with overweight and higher adiposity had faster eating rates (p<0.01) and higher energy intakes (p<0.01), driven by larger bite sizes (p<0.05). Eating behaviours varied by sex, ethnicity and early feeding regimes, partially attributable to BMI. We propose that these behaviours describe an ‘obesogenic eating style’ that is characterised by faster eating rates, achieved through larger bites, reduced chewing and shorter oral exposure time. This obesogenic eating style supports acute energy intake within a meal and is more prevalent among, though not exclusive to, children with overweight. PMID:28213204

  8. Minimizing the average distance to a closest leaf in a phylogenetic tree.

    PubMed

    Matsen, Frederick A; Gallagher, Aaron; McCoy, Connor O

    2013-11-01

    When performing an analysis on a collection of molecular sequences, it can be convenient to reduce the number of sequences under consideration while maintaining some characteristic of a larger collection of sequences. For example, one may wish to select a subset of high-quality sequences that represent the diversity of a larger collection of sequences. One may also wish to specialize a large database of characterized "reference sequences" to a smaller subset that is as close as possible on average to a collection of "query sequences" of interest. Such a representative subset can be useful whenever one wishes to find a set of reference sequences that is appropriate to use for comparative analysis of environmentally derived sequences, such as for selecting "reference tree" sequences for phylogenetic placement of metagenomic reads. In this article, we formalize these problems in terms of the minimization of the Average Distance to the Closest Leaf (ADCL) and investigate algorithms to perform the relevant minimization. We show that the greedy algorithm is not effective, show that a variant of the Partitioning Around Medoids (PAM) heuristic gets stuck in local minima, and develop an exact dynamic programming approach. Using this exact program we note that the performance of PAM appears to be good for simulated trees, and is faster than the exact algorithm for small trees. On the other hand, the exact program gives solutions for all numbers of leaves less than or equal to the given desired number of leaves, whereas PAM only gives a solution for the prespecified number of leaves. Via application to real data, we show that the ADCL criterion chooses chimeric sequences less often than random subsets, whereas the maximization of phylogenetic diversity chooses them more often than random. These algorithms have been implemented in publicly available software.
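
    A compact version of the swap-based (PAM-style) heuristic on a precomputed leaf-to-leaf distance matrix is sketched below; as the paper reports, this kind of local search can stall in local minima, which is what motivates the authors' exact dynamic program.

```python
import numpy as np

def adcl(dist, selected):
    """Average distance from every leaf to its closest selected leaf."""
    return dist[:, selected].min(axis=1).mean()

def pam_style_selection(dist, k, seed=0):
    """Greedy swap local search for the ADCL objective (a baseline sketch)."""
    rng = np.random.default_rng(seed)
    n = dist.shape[0]
    selected = list(rng.choice(n, size=k, replace=False))
    improved = True
    while improved:
        improved = False
        for i in range(k):
            for cand in range(n):
                if cand in selected:
                    continue
                trial = selected[:i] + [cand] + selected[i + 1:]
                if adcl(dist, trial) < adcl(dist, selected):
                    selected, improved = trial, True
    return selected, adcl(dist, selected)
```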

  9. Lost in the supermarket: Quantifying the cost of partitioning memory sets in hybrid search.

    PubMed

    Boettcher, Sage E P; Drew, Trafton; Wolfe, Jeremy M

    2018-01-01

    The items on a memorized grocery list are not relevant in every aisle; for example, it is useless to search for the cabbage in the cereal aisle. It might be beneficial if one could mentally partition the list so only the relevant subset was active, so that vegetables would be activated in the produce section. In four experiments, we explored observers' abilities to partition memory searches. For example, if observers held 16 items in memory, but only eight of the items were relevant, would response times resemble a search through eight or 16 items? In Experiments 1a and 1b, observers were not faster for the partition set; however, they suffered relatively small deficits when "lures" (items from the irrelevant subset) were presented, indicating that they were aware of the partition. In Experiment 2 the partitions were based on semantic distinctions, and again, observers were unable to restrict search to the relevant items. In Experiments 3a and 3b, observers attempted to remove items from the list one trial at a time but did not speed up over the course of a block, indicating that they also could not limit their memory searches. Finally, Experiments 4a, 4b, 4c, and 4d showed that observers were able to limit their memory searches when a subset was relevant for a run of trials. Overall, observers appear to be unable or unwilling to partition memory sets from trial to trial, yet they are capable of restricting search to a memory subset that remains relevant for several trials. This pattern is consistent with a cost to switching between currently relevant memory items.

  10. Compliant tactile sensor for generating a signal related to an applied force

    NASA Technical Reports Server (NTRS)

    Torres-Jara, Eduardo (Inventor)

    2012-01-01

    Tactile sensor. The sensor includes a compliant convex surface disposed above a sensor array, the sensor array adapted to respond to deformation of the convex surface to generate a signal related to an applied force vector.

  11. Distributed Nash Equilibrium Seeking for Generalized Convex Games with Shared Constraints

    NASA Astrophysics Data System (ADS)

    Sun, Chao; Hu, Guoqiang

    2018-05-01

    In this paper, we deal with the problem of finding a Nash equilibrium for a generalized convex game. Each player is associated with a convex cost function and multiple shared constraints. Supposing that each player can exchange information with its neighbors via a connected undirected graph, the objective of this paper is to design a Nash equilibrium seeking law such that each agent minimizes its objective function in a distributed way. Consensus and singular perturbation theories are used to prove the stability of the system. A numerical example is given to show the effectiveness of the proposed algorithms.
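
    As a toy stand-in for the distributed seeking law (which requires consensus estimates of the other players' actions over the communication graph), the sketch below runs centralized gradient play on a two-player convex quadratic game whose unique Nash equilibrium is the origin.

```python
import numpy as np

# Player i minimizes J_i = 0.5 * a_i * x_i**2 + c * x_1 * x_2 over its own x_i.
a1, a2, c = 2.0, 3.0, 0.5
x = np.array([5.0, -4.0])
step = 0.1
for _ in range(200):
    pseudo_grad = np.array([a1 * x[0] + c * x[1],   # dJ1/dx1
                            a2 * x[1] + c * x[0]])  # dJ2/dx2
    x = x - step * pseudo_grad
print("approximate Nash equilibrium:", x)            # analytic NE is (0, 0)
```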

  12. Convex Regression with Interpretable Sharp Partitions

    PubMed Central

    Petersen, Ashley; Simon, Noah; Witten, Daniela

    2016-01-01

    We consider the problem of predicting an outcome variable on the basis of a small number of covariates, using an interpretable yet non-additive model. We propose convex regression with interpretable sharp partitions (CRISP) for this task. CRISP partitions the covariate space into blocks in a data-adaptive way, and fits a mean model within each block. Unlike other partitioning methods, CRISP is fit using a non-greedy approach by solving a convex optimization problem, resulting in low-variance fits. We explore the properties of CRISP, and evaluate its performance in a simulation study and on a housing price data set. PMID:27635120

  13. Optshrink LR + S: accelerated fMRI reconstruction using non-convex optimal singular value shrinkage.

    PubMed

    Aggarwal, Priya; Shrivastava, Parth; Kabra, Tanay; Gupta, Anubha

    2017-03-01

    This paper presents a new accelerated fMRI reconstruction method, namely, the OptShrink LR + S method, which reconstructs undersampled fMRI data using a linear combination of low-rank and sparse components. The low-rank component is estimated using a non-convex optimal singular value shrinkage algorithm, while the sparse component is estimated using convex ℓ1 minimization. The performance of the proposed method is compared with existing state-of-the-art algorithms on a real fMRI dataset. The proposed OptShrink LR + S method yields good qualitative and quantitative results.

  14. The role of spinal concave–convex biases in the progression of idiopathic scoliosis

    PubMed Central

    Driscoll, Mark; Moreau, Alain; Villemure, Isabelle; Parent, Stefan

    2009-01-01

    Inadequate understanding of risk factors involved in the progression of idiopathic scoliosis restrains initial treatment to observation until the deformity shows signs of significant aggravation. The purpose of this analysis is to explore whether the concave–convex biases associated with scoliosis (local degeneration of the intervertebral discs, nucleus migration, and local increase in trabecular bone-mineral density of vertebral bodies) may be identified as progressive risk factors. Finite element models of a 26° right thoracic scoliotic spine were constructed based on experimental and clinical observations that included growth dynamics governed by mechanical stimulus. Stress distribution over the vertebral growth plates, progression of Cobb angles, and vertebral wedging were explored in models with and without the biases of concave–convex properties. The inclusion of the bias of concave–convex properties within the model both augmented the asymmetrical loading of the vertebral growth plates by up to 37% and further amplified the progression of Cobb angles and vertebral wedging by as much as 5.9° and 0.8°, respectively. Concave–convex biases are factors that influence the progression of scoliotic curves. Quantifying these parameters in a patient with scoliosis may further provide a better clinical assessment of the risk of progression. PMID:19130096

  15. Convex Formulations of Learning from Crowds

    NASA Astrophysics Data System (ADS)

    Kajino, Hiroshi; Kashima, Hisashi

    The use of crowdsourcing services to collect large amounts of labeled data for machine learning has attracted considerable attention, since crowdsourcing services allow one to ask the general public to label data at very low cost through the Internet. The use of crowdsourcing has introduced a new challenge in machine learning: coping with the low quality of crowd-generated data. There have been many recent attempts to address the quality problem of multiple labelers; however, there are two serious drawbacks in the existing approaches: (i) non-convexity and (ii) task homogeneity. Most of the existing methods consider true labels as latent variables, which results in non-convex optimization problems. Also, the existing models assume only single homogeneous tasks, while in realistic situations, clients can offer multiple tasks to crowds and crowd workers can work on different tasks in parallel. In this paper, we propose a convex optimization formulation of learning from crowds by introducing personal models of individual crowd workers without estimating true labels. We further extend the proposed model to multi-task learning, based on the resemblance between the proposed formulation and that of an existing multi-task learning model. We also devise efficient iterative methods for solving the convex optimization problems by exploiting conditional independence structures in multiple classifiers.

  16. Fast downscaled inverses for images compressed with M-channel lapped transforms.

    PubMed

    de Queiroz, R L; Eschbach, R

    1997-01-01

    Compressed images may be decompressed and displayed or printed using different devices at different resolutions. Full decompression followed by rescaling in the space domain is very expensive. We studied downscaled inverses, where the image is decompressed partially and a reduced inverse transform is used to recover the image. In this fashion, fewer transform coefficients are used and the synthesis process is simplified. We studied the design of fast inverses for a given forward transform. General solutions are presented for M-channel finite impulse response (FIR) filterbanks, of which block and lapped transforms are a subset. Designs of faster inverses are presented for popular block and lapped transforms.
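
    As an illustration of the downscaled-inverse idea in the simplest case, a non-overlapping 8×8 DCT block (block transforms being the degenerate lapped transform mentioned above), the sketch below keeps only the K×K low-frequency coefficients and applies a reduced K-point inverse. The amplitude rescaling and test block are assumptions of this toy, not the paper's FIR filterbank designs.

    ```python
    import numpy as np
    from scipy.fft import dctn, idctn

    def downscaled_inverse(coeffs, K=4):
        """Reduced inverse of an NxN orthonormal DCT-II block.

        Keeps only the KxK low-frequency coefficients and applies a
        K-point inverse transform, yielding an (N/K)-times downscaled
        block without full decompression.
        """
        N = coeffs.shape[0]
        small = coeffs[:K, :K] * (K / N)   # amplitude rescaling for orthonormal DCT
        return idctn(small, norm='ortho')

    block = np.outer(np.linspace(0, 1, 8), np.ones(8))  # a smooth 8x8 test block
    C = dctn(block, norm='ortho')
    print(downscaled_inverse(C, K=4))      # approx. 2x-downscaled version of `block`
    ```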

  17. Perceptual representation and effectiveness of local figure–ground cues in natural contours

    PubMed Central

    Sakai, Ko; Matsuoka, Shouhei; Kurematsu, Ken; Hatori, Yasuhiro

    2015-01-01

    A contour shape strongly influences the perceptual segregation of a figure from the ground. We investigated the contribution of local contour shape to figure–ground segregation. Although previous studies have reported local contour features that evoke figure–ground perception, they were often image features and not necessarily perceptual features. First, we examined whether contour features, specifically convexity, closure, and symmetry, underlie the perceptual representation of natural contour shapes. We performed similarity tests between local contours and examined the contribution of the contour features to the perceptual similarities between the contours. The local contours were sampled from natural contours so that their distribution was uniform in the space composed of the three contour features. This sampling ensured the equal appearance frequency of the factors and a wide variety of contour shapes, including those composed of contradictory factors that induce the figure in opposite directions. This sampling from natural contours is advantageous for randomly picking up a variety of contours that satisfy a wide range of cue combinations. Multidimensional scaling analyses showed that combinations of convexity, closure, and symmetry contribute to perceptual similarity; thus they are perceptual quantities. Second, we examined whether the three features contribute to local figure–ground perception. We performed psychophysical experiments to judge the direction of the figure along the local contours, and examined the contribution of the features to the figure–ground judgment. Multiple linear regression analyses showed that closure was a significant factor, but that convexity and symmetry were not. These results indicate that closure is dominant in local figure–ground perception with natural contours when the other cues coexist with equal probability, including contradictory cases. PMID:26579057

  18. Perceptual representation and effectiveness of local figure-ground cues in natural contours.

    PubMed

    Sakai, Ko; Matsuoka, Shouhei; Kurematsu, Ken; Hatori, Yasuhiro

    2015-01-01

    A contour shape strongly influences the perceptual segregation of a figure from the ground. We investigated the contribution of local contour shape to figure-ground segregation. Although previous studies have reported local contour features that evoke figure-ground perception, they were often image features and not necessarily perceptual features. First, we examined whether contour features, specifically convexity, closure, and symmetry, underlie the perceptual representation of natural contour shapes. We performed similarity tests between local contours and examined the contribution of the contour features to the perceptual similarities between the contours. The local contours were sampled from natural contours so that their distribution was uniform in the space composed of the three contour features. This sampling ensured the equal appearance frequency of the factors and a wide variety of contour shapes, including those composed of contradictory factors that induce the figure in opposite directions. This sampling from natural contours is advantageous for randomly picking up a variety of contours that satisfy a wide range of cue combinations. Multidimensional scaling analyses showed that combinations of convexity, closure, and symmetry contribute to perceptual similarity; thus they are perceptual quantities. Second, we examined whether the three features contribute to local figure-ground perception. We performed psychophysical experiments to judge the direction of the figure along the local contours, and examined the contribution of the features to the figure-ground judgment. Multiple linear regression analyses showed that closure was a significant factor, but that convexity and symmetry were not. These results indicate that closure is dominant in local figure-ground perception with natural contours when the other cues coexist with equal probability, including contradictory cases.

  19. An integral equation formulation for the diffraction from convex plates and polyhedra.

    PubMed

    Asheim, Andreas; Svensson, U Peter

    2013-06-01

    A formulation of the problem of scattering from obstacles with edges is presented. The formulation is based on decomposing the field into geometrical acoustics, first-order, and multiple-order edge diffraction components. An existing secondary-source model for edge diffraction from finite edges is extended to handle multiple diffraction of all orders. It is shown that the multiple-order diffraction component can be found via the solution to an integral equation formulated on pairs of edge points. This gives what can be called an edge source signal. In a subsequent step, this edge source signal is propagated to yield a multiple-order diffracted field, taking all diffraction orders into account. Numerical experiments demonstrate accurate response for frequencies down to 0 for thin plates and a cube. No problems with irregular frequencies, as happens with the Kirchhoff-Helmholtz integral equation, are observed for this formulation. For the axisymmetric scattering from a circular disc, a highly effective symmetric formulation results, and its results agree with reference solutions across the entire frequency range.

  20. Mechanochemical spinodal decomposition: a phenomenological theory of phase transformations in multi-component, crystalline solids

    DOE PAGES

    Rudraraju, Shiva; Van der Ven, Anton; Garikipati, Krishna

    2016-06-10

    Here, we present a phenomenological treatment of diffusion-driven martensitic phase transformations in multi-component crystalline solids that arise from non-convex free energies in mechanical and chemical variables. The treatment describes diffusional phase transformations that are accompanied by symmetry-breaking structural changes of the crystal unit cell and reveals the importance of a mechanochemical spinodal, defined as the region in strain-composition space where the free-energy density function is non-convex. The approach is relevant to phase transformations wherein the structural order parameters can be expressed as linear combinations of strains relative to a high-symmetry reference crystal. The governing equations describing mechanochemical spinodal decomposition are variationally derived from a free-energy density function that accounts for interfacial energy via gradients of the rapidly varying strain and composition fields. A robust computational framework for treating the coupled, higher-order diffusion and nonlinear strain gradient elasticity problems is presented. Because the local strains in an inhomogeneous, transforming microstructure can be finite, the elasticity problem must account for geometric nonlinearity. An evaluation of available experimental phase diagrams and first-principles free energies suggests that mechanochemical spinodal decomposition should occur in metal hydrides such as ZrH2-2c. The rich physics that ensues is explored in several numerical examples in two and three dimensions, and the relevance of the mechanism is discussed in the context of important electrode materials for Li-ion batteries and high-temperature ceramics.
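
    In symbols, the mechanochemical spinodal described above can be written as the loss-of-convexity region of the free-energy density; the notation below is an illustrative paraphrase of that definition, not the authors' exact formulation.

    ```latex
    % The mechanochemical spinodal: the region of strain-composition space
    % where the free-energy density f(E, c) loses convexity.
    \[
      \mathcal{S} \;=\; \Bigl\{\, (\boldsymbol{E}, c) \;:\;
      \lambda_{\min}\!\bigl(\nabla^2 f(\boldsymbol{E}, c)\bigr) < 0 \,\Bigr\},
    \]
    % where the Hessian is taken with respect to the strain components and the
    % composition c, and \lambda_{\min} denotes its smallest eigenvalue.
    ```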

  1. Design and measurement of a TE13 input converter for high-order-mode gyrotron travelling wave amplifiers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Yan; Liu, Guo, E-mail: liuguo@uestc.edu.cn; Shu, Guoxiang

    2016-03-15

    A technique to launch a circular TE13 mode to interact with the helical electron beam of a gyrotron travelling wave amplifier is proposed and verified by simulation and cold test in this paper. The high-order-mode (HOM) TE13 mode is excited by a broadband Y-type power divider with the aid of a cylindrical waveguide system. Using grooves and convex strips loaded at the lateral planes of the output cylindrical waveguide, the electric fields of the potentially competing TE32 and TE71 modes are suppressed to allow the transmission of the dominant TE13 mode. The converter performance for different structural dimensions of grooves and convex strips is studied in detail, and excellent results have been achieved. Simulation predicts that the average transmission is ∼−1.8 dB with a 3 dB bandwidth of 7.2 GHz (91.5–98.7 GHz), and that port reflection is less than −15 dB. The conversion efficiencies to the TE32 and TE71 modes are, respectively, under −15 dB and −24 dB in the operating frequency band. Such an HOM converter operating at W-band has been fabricated and cold tested with the radiation boundary. Measurements from the vector network analyzer cold test and microwave simulations show a good reflection performance for the converter.

  2. Normative values and the effects of age, gender, and handedness on the Moberg Pick-Up Test.

    PubMed

    Amirjani, Nasim; Ashworth, Nigel L; Gordon, Tessa; Edwards, David C; Chan, K Ming

    2007-06-01

    The Moberg Pick-Up Test is a standardized test for assessing hand dexterity. Although reduction of sensation in the hand occurs with aging, the effect of age on a subject's performance of the Moberg Pick-Up Test has not been examined. The primary goal of this study was to examine the impact of aging and, secondarily, the impact of gender and handedness, on performance of the Moberg Pick-Up Test in 116 healthy subjects. The average time to complete each of the four subsets of the test was analyzed using the Kruskal-Wallis, Mann-Whitney U, and Wilcoxon signed-rank tests. The results show that hand dexterity of the subjects was significantly affected by age, with young subjects being the fastest and elderly subjects the slowest. Women accomplished the test faster than men, and task performance with the dominant hand was faster than with the non-dominant hand. Use of normative values established based on age and gender is a valuable objective tool to gauge hand function in patients with different neurologic disorders.

  3. Clustering, Seriation, and Subset Extraction of Confusion Data

    ERIC Educational Resources Information Center

    Brusco, Michael J.; Steinley, Douglas

    2006-01-01

    The study of confusion data is a well established practice in psychology. Although many types of analytical approaches for confusion data are available, among the most common methods are the extraction of 1 or more subsets of stimuli, the partitioning of the complete stimulus set into distinct groups, and the ordering of the stimulus set. Although…

  4. Asteroid shape and spin statistics from convex models

    NASA Astrophysics Data System (ADS)

    Torppa, J.; Hentunen, V.-P.; Pääkkönen, P.; Kehusmaa, P.; Muinonen, K.

    2008-11-01

    We introduce techniques for characterizing convex shape models of asteroids with a small number of parameters, and apply these techniques to a set of 87 models from convex inversion. We present three different approaches for determining the overall dimensions of an asteroid. With the first technique, we measured the dimensions of the shapes in the direction of the rotation axis and in the equatorial plane, and with the two other techniques we derived the best-fit ellipsoid. We also computed the inertia matrix of the model shape to test how well it represents the target asteroid, i.e., to find indications of possible non-convex features or albedo variegation, which the convex shape model cannot reproduce. We used shape models for 87 asteroids to perform statistical analyses and to study dependencies between shape and rotation period, size, and taxonomic type. We detected correlations, but more data are required, especially on small and large objects, as well as slow and fast rotators, to reach a more thorough understanding of the dependencies. Results show, e.g., that convex models of asteroids are not that far from ellipsoids in the root-mean-square sense, even though clearly irregular features are present. We also present new spin and shape solutions for Asteroids (31) Euphrosyne, (54) Alexandra, (79) Eurynome, (93) Minerva, (130) Elektra, (376) Geometria, (471) Papagena, and (776) Berbericia. We used a so-called semi-statistical approach to obtain a set of possible spin state solutions. The number of solutions depends on the abundance of the data, which for Eurynome, Elektra, and Geometria was extensive enough to determine an unambiguous spin and shape solution. Data of Euphrosyne, on the other hand, provided a wide distribution of possible spin solutions, whereas the rest of the targets have two or three possible solutions.

  5. Structure, organization, and sequence of alpha satellite DNA from human chromosome 17: evidence for evolution by unequal crossing-over and an ancestral pentamer repeat shared with the human X chromosome.

    PubMed

    Waye, J S; Willard, H F

    1986-09-01

    The centromeric regions of all human chromosomes are characterized by distinct subsets of a diverse tandemly repeated DNA family, alpha satellite. On human chromosome 17, the predominant form of alpha satellite is a 2.7-kilobase-pair higher-order repeat unit consisting of 16 alphoid monomers. We present the complete nucleotide sequence of the 16-monomer repeat, which is present in 500 to 1,000 copies per chromosome 17, as well as that of a less abundant 15-monomer repeat, also from chromosome 17. These repeat units were approximately 98% identical in sequence, differing by the exclusion of precisely 1 monomer from the 15-monomer repeat. Homologous unequal crossing-over is suggested as a probable mechanism by which the different repeat lengths on chromosome 17 were generated, and the putative site of such a recombination event is identified. The monomer organization of the chromosome 17 higher-order repeat unit is based, in part, on tandemly repeated pentamers. A similar pentameric suborganization has been previously demonstrated for alpha satellite of the human X chromosome. Despite the organizational similarities, substantial sequence divergence distinguishes these subsets. Hybridization experiments indicate that the chromosome 17 and X subsets are more similar to each other than to the subsets found on several other human chromosomes. We suggest that the chromosome 17 and X alpha satellite subsets may be related components of a larger alphoid subfamily which have evolved from a common ancestral repeat into the contemporary chromosome-specific subsets.

  6. Directional Convexity and Finite Optimality Conditions.

    DTIC Science & Technology

    1984-03-01

    system, necessary conditions for optimality. Work Unit Number 5 (Optimization and Large Scale Systems). *Istituto di Matematica Applicata, Università di Padova …that R(T) is convex would then imply x(u,T) ∈ int R(T). Istituto di Matematica Applicata, Università di Padova, 35100 Italy. Sponsored by the United…

  7. Localized Multiple Kernel Learning A Convex Approach

    DTIC Science & Technology

    2016-11-22

    …data. All the aforementioned approaches to localized MKL are formulated in terms of non-convex optimization problems, and deep theoretical…

  8. Framework to model neutral particle flux in convex high aspect ratio structures using one-dimensional radiosity

    NASA Astrophysics Data System (ADS)

    Manstetten, Paul; Filipovic, Lado; Hössinger, Andreas; Weinbub, Josef; Selberherr, Siegfried

    2017-02-01

    We present a computationally efficient framework to compute the neutral flux in high aspect ratio structures during three-dimensional plasma etching simulations. The framework is based on a one-dimensional radiosity approach and is applicable to simulations of convex rotationally symmetric holes and convex symmetric trenches with a constant cross-section. The framework is intended to replace the full three-dimensional simulation step required to calculate the neutral flux during plasma etching simulations. Especially for high aspect ratio structures, the computational effort required to perform the full three-dimensional simulation of the neutral flux at the desired spatial resolution conflicts with practical simulation time constraints. Our results are in agreement with those obtained by three-dimensional Monte Carlo based ray tracing simulations for various aspect ratios and convex geometries. With this framework we present a comprehensive analysis of the influence of the geometrical properties of high aspect ratio structures, as well as of the particle sticking probability, on the neutral particle flux.

  9. Solution of monotone complementarity and general convex programming problems using a modified potential reduction interior point method

    DOE PAGES

    Huang, Kuo -Ling; Mehrotra, Sanjay

    2016-11-08

    We present a homogeneous algorithm equipped with a modified potential function for the monotone complementarity problem. We show that this potential function is reduced by at least a constant amount if a scaled Lipschitz condition (SLC) is satisfied. A practical algorithm based on this potential function is implemented in a software package named iOptimize. The implementation in iOptimize maintains global linear and polynomial time convergence properties, while achieving practical performance. It either successfully solves the problem, or concludes that the SLC is not satisfied. When compared with the mature software package MOSEK (barrier solver version 6.0.0.106), iOptimize solves convex quadratic programming problems, convex quadratically constrained quadratic programming problems, and general convex programming problems in fewer iterations. Moreover, several problems for which MOSEK fails are solved to optimality. In addition, we also find that iOptimize detects infeasibility more reliably than the general nonlinear solvers Ipopt (version 3.9.2) and Knitro (version 8.0).

  10. Strain relaxation in convex-graded InxAl1-xAs (x = 0.05-0.79) metamorphic buffer layers grown by molecular beam epitaxy on GaAs(001)

    NASA Astrophysics Data System (ADS)

    Solov'ev, V. A.; Chernov, M. Yu; Baidakova, M. V.; Kirilenko, D. A.; Yagovkina, M. A.; Sitnikova, A. A.; Komissarova, T. A.; Kop'ev, P. S.; Ivanov, S. V.

    2018-01-01

    This paper presents a study of structural properties of InGaAs/InAlAs quantum well (QW) heterostructures with convex-graded InxAl1-xAs (x = 0.05-0.79) metamorphic buffer layers (MBLs) grown by molecular beam epitaxy on GaAs substrates. Mechanisms of elastic strain relaxation in the convex-graded MBLs were studied by the X-ray reciprocal space mapping combined with the data of spatially-resolved selected area electron diffraction implemented in a transmission electron microscope. The strain relaxation degree was approximated for the structures with different values of an In step-back. Strong contribution of the strain relaxation via lattice tilt in addition to the formation of the misfit dislocations has been observed for the convex-graded InAlAs MBL, which results in a reduced threading dislocation density in the QW region as compared to a linear-graded MBL.

  11. Efficient Compressed Sensing Based MRI Reconstruction using Nonconvex Total Variation Penalties

    NASA Astrophysics Data System (ADS)

    Lazzaro, D.; Loli Piccolomini, E.; Zama, F.

    2016-10-01

    This work addresses the problem of magnetic resonance image reconstruction from highly sub-sampled measurements in the Fourier domain. It is modeled as a constrained minimization problem, where the objective function is a non-convex function of the gradient of the unknown image and the constraints are given by the data fidelity term. We propose an algorithm, Fast Non-Convex Reweighted (FNCR), in which the constrained problem is solved by a reweighting scheme, as a strategy to overcome the non-convexity of the objective function, with adaptive adjustment of the penalization parameter. We propose a fast iterative algorithm and prove that it converges to a local minimum because the constrained problem satisfies the Kurdyka-Lojasiewicz property. Moreover, the adaptation of the non-convex ℓ0 approximation and penalization parameters by means of a continuation technique allows us to obtain good-quality solutions, avoiding getting stuck in unwanted local minima. Numerical experiments performed on sub-sampled MRI data show the efficiency of the algorithm and the accuracy of the solution.
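
    The reweighting strategy described here can be illustrated with a one-dimensional toy: a non-convex log-type penalty handled by a sequence of convex weighted-ℓ1 problems, each with a closed-form solution. This is a generic reweighted-ℓ1 sketch with assumed parameter values, not the FNCR algorithm itself.

    ```python
    import numpy as np

    def reweighted_l1_denoise(y, lam=0.5, eps=0.1, n_outer=10):
        """Denoise y with the non-convex penalty sum(log(eps + |x_i|)),
        handled by solving a sequence of *convex* weighted-l1 problems:
        the weights w_i = 1/(eps + |x_i|) are refreshed from the last iterate."""
        x = y.copy()
        for _ in range(n_outer):
            w = 1.0 / (eps + np.abs(x))          # reweighting step
            # Weighted-l1 proximal problem min 0.5||x-y||^2 + lam*sum(w_i|x_i|)
            # has the closed-form solution below (elementwise soft-thresholding).
            x = np.sign(y) * np.maximum(np.abs(y) - lam * w, 0.0)
        return x

    y = np.array([3.0, 0.2, -2.5, 0.05, 1.5])
    print(reweighted_l1_denoise(y))
    ```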

  12. Liquid phase heteroepitaxial growth on convex substrate using binary phase field crystal model

    NASA Astrophysics Data System (ADS)

    Lu, Yanli; Zhang, Tinghui; Chen, Zheng

    2018-06-01

    The liquid-phase heteroepitaxial growth on a convex substrate is investigated with the binary phase field crystal (PFC) model. The paper focuses on the transformation of the morphology of epitaxial films on convex substrates with two different radii of curvature (Ω), as well as on the influence of the substrate vicinal angle on film growth. It is found that film growth passes through different stages on convex substrates with different radii of curvature. For Ω = 512 Δx, epitaxial film growth includes four stages: island growth coupled with layer-by-layer growth, layer-by-layer growth, island growth coupled with layer-by-layer growth, and layer-by-layer growth. For Ω = 1024 Δx, film growth only passes through island growth and layer-by-layer growth. The substrate vicinal angle (π) is also an important parameter for epitaxial film growth: we find the film grows well when π = 2° for Ω = 512 Δx, while an optimized film is obtained when π = 4° for Ω = 512 Δx.

  13. QUADRATIC SERENDIPITY FINITE ELEMENTS ON POLYGONS USING GENERALIZED BARYCENTRIC COORDINATES

    PubMed Central

    RAND, ALEXANDER; GILLETTE, ANDREW; BAJAJ, CHANDRAJIT

    2013-01-01

    We introduce a finite element construction for use on the class of convex, planar polygons and show it obtains a quadratic error convergence estimate. On a convex n-gon, our construction produces 2n basis functions, associated in a Lagrange-like fashion to each vertex and each edge midpoint, by transforming and combining a set of n(n + 1)/2 basis functions known to obtain quadratic convergence. The technique broadens the scope of the so-called ‘serendipity’ elements, previously studied only for quadrilateral and regular hexahedral meshes, by employing the theory of generalized barycentric coordinates. Uniform a priori error estimates are established over the class of convex quadrilaterals with bounded aspect ratio as well as over the class of convex planar polygons satisfying additional shape regularity conditions to exclude large interior angles and short edges. Numerical evidence is provided on a trapezoidal quadrilateral mesh, previously not amenable to serendipity constructions, and applications to adaptive meshing are discussed. PMID:25301974

  14. Torsional deformity of apical vertebra in adolescent idiopathic scoliosis.

    PubMed

    Kotwicki, Tomasz; Napiontek, Marek

    2002-01-01

    CT scans of structural thoracic idiopathic scoliosis were reviewed in nine patients admitted to our department for scoliosis surgery. The apical vertebra scans were chosen and the following parameters were evaluated: 1) alpha angle formed by the axis of vertebra and the axis of spinous process 2) beta concave and beta convex angle between the spinous process and the left and right transverse process, respectively, 3) gamma concave and gamma convex angle between the axis of vertebra and the left and right transverse process, respectively, 4) the rotation angle to the sagittal plane. The constant deviation of the spinous process towards the convex side of the curve was observed. The vertebral body itself was distorted towards the concavity of the curve. The angle between the spinous process and the transverse process was smaller on the convex side of the curve. The torsional, intravertebral deformity of the apical vertebra was a factor acting in the direction opposite to the rotation, in the sense to reduce the deformity of the spine in idiopathic scoliosis.

  15. An axial temperature profile curvature criterion for the engineering of convex crystal growth interfaces in Bridgman systems

    NASA Astrophysics Data System (ADS)

    Peterson, Jeffrey H.; Derby, Jeffrey J.

    2017-06-01

    A unifying idea is presented for the engineering of convex melt-solid interface shapes in Bridgman crystal growth systems. Previous approaches to interface control are discussed with particular attention paid to the idea of a "booster" heater. Proceeding from the idea that a booster heater promotes a converging heat flux geometry and from the energy conservation equation, we show that a convex interface shape will naturally result when the interface is located in regions of the furnace where the axial thermal profile exhibits negative curvature, i.e., where d²T/dz² < 0. This criterion is effective in explaining prior literature results on interface control and promising for the evaluation of new furnace designs. We posit that the negative curvature criterion may be applicable to the characterization of growth systems via temperature measurements in an empty furnace, providing insight about the potential for achieving a convex interface shape, without growing a crystal or conducting simulations.
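
    The criterion is easy to apply numerically to an empty-furnace temperature profile: estimate d²T/dz² by finite differences and flag where it is negative. The profile below is synthetic and the function name is ours, purely for illustration.

    ```python
    import numpy as np

    def convex_interface_zone(z, T):
        """Flag axial positions where d2T/dz2 < 0, i.e. where (by the
        criterion above) locating the melt-solid interface should favor
        a convex growth interface. z, T: 1-D arrays of an axial
        temperature profile."""
        dT = np.gradient(T, z)
        d2T = np.gradient(dT, z)
        return d2T < 0

    z = np.linspace(0.0, 0.3, 61)                    # axial position [m]
    T = 1200 + 300 * np.tanh((z - 0.15) / 0.05)      # hypothetical profile [K]
    print(z[convex_interface_zone(z, T)])            # zone of negative curvature
    ```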

  16. New Convex and Spherical Structures of Bare Boron Clusters

    NASA Astrophysics Data System (ADS)

    Boustani, Ihsan

    1997-10-01

    New stable structures of bare boron clusters can easily be obtained and constructed with the help of an "Aufbau Principle" suggested by a systematic ab initio HF-SCF and direct CI study. It is concluded that boron cluster formation can be established by elemental units of pentagonal and hexagonal pyramids. New convex and small spherical clusters different from the classical known forms of boron crystal structures are obtained by a combination of both basic units. Convex structures simulate boron surfaces which can be considered as segments of open or closed spheres. Both convex clusters B16 and B46 have energies close to those of their conjugate quasi-planar clusters, which are relatively stable and can be considered to act as a calibration mark. The closed spherical clusters B12, B22, B32, and B42 are less stable than the corresponding conjugated quasi-planar structures. As a consequence, highly stable spherical boron clusters can systematically be predicted when their conjugate quasi-planar clusters are determined and energies are compared.

  17. Scaling of Convex Hull Volume to Body Mass in Modern Primates, Non-Primate Mammals and Birds

    PubMed Central

    Brassey, Charlotte A.; Sellers, William I.

    2014-01-01

    The volumetric method of ‘convex hulling’ has recently been put forward as a mass prediction technique for fossil vertebrates. Convex hulling involves the calculation of minimum convex hull volumes (vol_CH) from the complete mounted skeletons of modern museum specimens, which are subsequently regressed against body mass (M_b) to derive predictive equations for extinct species. The convex hulling technique has recently been applied to estimate body mass in giant sauropods and fossil ratites; however, the biomechanical signal contained within vol_CH has remained unclear. Specifically, when vol_CH scaling departs from isometry in a group of vertebrates, how might this be interpreted? Here we derive predictive equations for primates, non-primate mammals and birds and compare the scaling behaviour of M_b to vol_CH between groups. We find predictive equations to be characterised by extremely high correlation coefficients (r² = 0.97–0.99) and low mean percentage prediction error (11–20%). Results suggest non-primate mammals scale body mass to vol_CH isometrically (b = 0.92, 95% CI = 0.85–1.00, p = 0.08). Birds scale body mass to vol_CH with negative allometry (b = 0.81, 95% CI = 0.70–0.91, p = 0.011) and apparent density (vol_CH/M_b) therefore decreases with mass (r² = 0.36, p < 0.05). In contrast, primates scale body mass to vol_CH with positive allometry (b = 1.07, 95% CI = 1.01–1.12, p = 0.05) and apparent density therefore increases with size (r² = 0.46, p = 0.025). We interpret such departures from isometry in the context of the ‘missing mass’ of soft tissues that are excluded from the convex hulling process. We conclude that the convex hulling technique can be justifiably applied to the fossil record when a large proportion of the skeleton is preserved. However, we emphasise the need for future studies to quantify interspecific variation in the distribution of soft tissues such as muscle, integument and body fat. PMID:24618736
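
    The convex hulling workflow reduces to two steps, hull volumes and a log-log regression, which the sketch below reproduces on synthetic point clouds (real studies would supply skeletal landmark coordinates per specimen); the isometric reference b = 1 mirrors the scaling analysis described above.

    ```python
    import numpy as np
    from scipy.spatial import ConvexHull

    rng = np.random.default_rng(1)

    def hull_volume(points):
        """Minimum convex hull volume of a 3-D landmark cloud (vol_CH)."""
        return ConvexHull(points).volume

    # Synthetic stand-in for skeletal landmark data: per-specimen point
    # clouds and known body masses (illustration only, not real specimens).
    masses, volumes = [], []
    for m in np.geomspace(1.0, 1000.0, 12):    # body mass, kg
        scale = m ** (1 / 3)                   # isometric scaling of linear size
        pts = rng.normal(size=(200, 3)) * scale
        masses.append(m)
        volumes.append(hull_volume(pts))

    # Fit log10(M_b) = a + b*log10(vol_CH); isometry corresponds to b = 1.
    b, a = np.polyfit(np.log10(volumes), np.log10(masses), 1)
    print(f"scaling exponent b = {b:.3f}")
    ```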

  18. Interpolation Error Estimates for Mean Value Coordinates over Convex Polygons

    PubMed Central

    Rand, Alexander; Gillette, Andrew; Bajaj, Chandrajit

    2012-01-01

    In a similar fashion to estimates shown for Harmonic, Wachspress, and Sibson coordinates in [Gillette et al., AiCM, to appear], we prove interpolation error estimates for the mean value coordinates on convex polygons suitable for standard finite element analysis. Our analysis is based on providing a uniform bound on the gradient of the mean value functions for all convex polygons of diameter one satisfying certain simple geometric restrictions. This work makes rigorous an observed practical advantage of the mean value coordinates: unlike Wachspress coordinates, the gradient of the mean value coordinates does not become large as interior angles of the polygon approach π. PMID:24027379

  19. Interpolation Error Estimates for Mean Value Coordinates over Convex Polygons.

    PubMed

    Rand, Alexander; Gillette, Andrew; Bajaj, Chandrajit

    2013-08-01

    In a similar fashion to estimates shown for Harmonic, Wachspress, and Sibson coordinates in [Gillette et al., AiCM, to appear], we prove interpolation error estimates for the mean value coordinates on convex polygons suitable for standard finite element analysis. Our analysis is based on providing a uniform bound on the gradient of the mean value functions for all convex polygons of diameter one satisfying certain simple geometric restrictions. This work makes rigorous an observed practical advantage of the mean value coordinates: unlike Wachspress coordinates, the gradient of the mean value coordinates does not become large as interior angles of the polygon approach π.
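
    For concreteness, mean value coordinates on a convex polygon can be computed from the standard tangent half-angle formula; the implementation below is a textbook version for a point strictly inside the polygon, not code from the paper.

    ```python
    import numpy as np

    def mean_value_coordinates(poly, x):
        """Mean value coordinates of a point x inside a convex polygon.

        poly : (n, 2) array of vertices in order; x : point strictly inside.
        Uses the tangent half-angle formula
            w_i = (tan(a_{i-1}/2) + tan(a_i/2)) / r_i,
        where a_i is the angle at x subtended by edge (v_i, v_{i+1})."""
        v = poly - x
        r = np.linalg.norm(v, axis=1)
        n = len(poly)
        angles = np.empty(n)
        for i in range(n):
            j = (i + 1) % n
            cos_a = np.dot(v[i], v[j]) / (r[i] * r[j])
            angles[i] = np.arccos(np.clip(cos_a, -1.0, 1.0))
        t = np.tan(angles / 2.0)
        w = (np.roll(t, 1) + t) / r
        return w / w.sum()

    square = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
    lam = mean_value_coordinates(square, np.array([0.25, 0.5]))
    print(lam, lam @ square)   # coordinates sum to 1 and reproduce the point
    ```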

  20. Impact of trailing edge shape on the wake and propulsive performance of pitching panels

    NASA Astrophysics Data System (ADS)

    Van Buren, T.; Floryan, D.; Brunner, D.; Senturk, U.; Smits, A. J.

    2017-01-01

    The effects of changing the trailing edge shape on the wake and propulsive performance of a pitching rigid panel are examined experimentally. The panel aspect ratio is AR = 1, and the trailing edges are symmetric chevron shapes with convex and concave orientations of varying degree. Concave trailing edges delay the natural vortex bending and compression of the wake, and the mean streamwise velocity field contains a single jet. Conversely, convex trailing edges promote wake compression and produce a quadfurcated wake with four jets. As the trailing edge shape changes from the most concave to the most convex, the thrust and efficiency increase significantly.

  1. A Convex Approach to Fault Tolerant Control

    NASA Technical Reports Server (NTRS)

    Maghami, Peiman G.; Cox, David E.; Bauer, Frank (Technical Monitor)

    2002-01-01

    The design of control laws for dynamic systems with the potential for actuator failures is considered in this work. The use of Linear Matrix Inequalities allows more freedom in controller design criteria than typically available with robust control. This work proposes an extension of fault-scheduled control design techniques that can find a fixed controller with provable performance over a set of plants. Through convexity of the objective function, performance bounds on this set of plants implies performance bounds on a range of systems defined by a convex hull. This is used to incorporate performance bounds for a variety of soft and hard failures into the control design problem.

  2. Rapid Generation of Optimal Asteroid Powered Descent Trajectories Via Convex Optimization

    NASA Technical Reports Server (NTRS)

    Pinson, Robin; Lu, Ping

    2015-01-01

    This paper investigates a convex optimization based method that can rapidly generate the fuel optimal asteroid powered descent trajectory. The ultimate goal is to autonomously design the optimal powered descent trajectory on-board the spacecraft immediately prior to the descent burn. Compared to a planetary powered landing problem, the major difficulty is the complex gravity field near the surface of an asteroid that cannot be approximated by a constant gravity field. This paper uses relaxation techniques and a successive solution process that seeks the solution to the original nonlinear, nonconvex problem through the solutions to a sequence of convex optimal control problems.
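
    A toy version of the successive solution process can be written in a few lines with a generic convex-optimization modeler: freeze the non-convex gravity term along the previous trajectory iterate, solve the resulting convex (second-order cone) problem, and repeat. Everything here, the dynamics discretization, the gravitational parameter, and the bounds, is an illustrative assumption, and cvxpy stands in for whatever solver would run on board.

    ```python
    import cvxpy as cp
    import numpy as np

    mu, dt, N, umax = 0.1, 1.0, 30, 0.8
    r0, v0 = np.array([10.0, 0.0, 5.0]), np.array([0.0, 0.0, -0.5])
    r_land = np.array([1.0, 0.0, 0.0])            # landing site on the surface
    r_prev = np.linspace(r0, r_land, N + 1)       # initial trajectory guess

    for it in range(5):
        # Freeze gravity g(r) = -mu*r/||r||^3 along the previous iterate,
        # which makes each pass a convex second-order cone problem.
        g = -mu * r_prev / np.linalg.norm(r_prev, axis=1, keepdims=True) ** 3
        r = cp.Variable((N + 1, 3))
        v = cp.Variable((N + 1, 3))
        u = cp.Variable((N, 3))
        cons = [r[0] == r0, v[0] == v0, r[N] == r_land, v[N] == 0]
        for k in range(N):
            cons += [r[k + 1] == r[k] + dt * v[k],
                     v[k + 1] == v[k] + dt * (u[k] + g[k]),
                     cp.norm(u[k]) <= umax]
        fuel = sum(cp.norm(u[k]) for k in range(N))   # fuel-use proxy
        cp.Problem(cp.Minimize(fuel), cons).solve()
        r_prev = r.value                              # re-linearize about new path
    print("fuel proxy:", fuel.value)
    ```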

  3. Relaxation in control systems of subdifferential type

    NASA Astrophysics Data System (ADS)

    Tolstonogov, A. A.

    2006-02-01

    In a separable Hilbert space we consider a control system with evolution operators that are subdifferentials of a proper convex lower semicontinuous function depending on time. The constraint on the control is given by a multivalued function with non-convex values that is lower semicontinuous with respect to the state variables. Along with the original system we consider the system in which the constraint on the control is the upper semicontinuous convex-valued regularization of the original constraint. We study relations between the solution sets of these systems. As an application we consider a control variational inequality. We give an example of a control system of parabolic type with an obstacle.

  4. Density of convex intersections and applications

    PubMed Central

    Rautenberg, C. N.; Rösel, S.

    2017-01-01

    In this paper, we address density properties of intersections of convex sets in several function spaces. Using the concept of Γ-convergence, it is shown in a general framework how these density issues naturally arise from the regularization, discretization or dualization of constrained optimization problems and from perturbed variational inequalities. A variety of density results (and counterexamples) for pointwise constraints in Sobolev spaces are presented and the corresponding regularity requirements on the upper bound are identified. The results are further discussed in the context of finite-element discretizations of sets associated with convex constraints. Finally, two applications are provided, which include elasto-plasticity and image restoration problems. PMID:28989301

  5. Reducing the duality gap in partially convex programming

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Correa, R.

    1994-12-31

    We consider the non-linear minimization program α = min_{z∈D, x∈C} { f_0(z, x) : f_i(z, x) ≤ 0, i = 1, …, m }, where the f_i(z, ·) are convex functions, C is convex, and D is compact. Following Ben-Tal, Eiger, and Gershowitz, we prove the existence of a partial dual program whose optimum is arbitrarily close to α. The idea corresponds to the branching principle in branch-and-bound methods. We describe an algorithm of this kind for obtaining the desired partial dual.
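
    For orientation, the standard partial Lagrangian dual whose gap this record aims to reduce can be written as follows; this is the textbook construction, stated here for context rather than taken from the paper.

    ```latex
    % Partial Lagrangian dual of the program above (dualizing only the
    % constraints f_i, keeping z \in D and x \in C explicit):
    \[
      \beta \;=\; \sup_{\lambda \ge 0} \;\inf_{z \in D,\; x \in C}
      \Bigl\{\, f_0(z,x) + \sum_{i=1}^{m} \lambda_i f_i(z,x) \,\Bigr\}
      \;\le\; \alpha ,
    \]
    % with equality (no duality gap) when the joint problem is convex;
    % the branching over D discussed above shrinks the gap \alpha - \beta
    % in the partially convex case.
    ```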

  6. Ordered-subsets linkage analysis detects novel Alzheimer disease loci on chromosomes 2q34 and 15q22.

    PubMed

    Scott, William K; Hauser, Elizabeth R; Schmechel, Donald E; Welsh-Bohmer, Kathleen A; Small, Gary W; Roses, Allen D; Saunders, Ann M; Gilbert, John R; Vance, Jeffery M; Haines, Jonathan L; Pericak-Vance, Margaret A

    2003-11-01

    Alzheimer disease (AD) is a complex disorder characterized by a wide range, within and between families, of ages at onset of symptoms. Consideration of age at onset as a covariate in genetic-linkage studies may reduce genetic heterogeneity and increase statistical power. Ordered-subsets analysis includes continuous covariates in linkage analysis by rank ordering families by a covariate and summing LOD scores to find a subset giving a significantly increased LOD score relative to the overall sample. We have analyzed data from 336 markers in 437 multiplex (≥2 sampled individuals with AD) families included in a recent genomic screen for AD loci. To identify genetic heterogeneity by age at onset, families were ordered by increasing and decreasing mean and minimum ages at onset. Chromosomewide significance of increases in the LOD score in subsets relative to the overall sample was assessed by permutation. A statistically significant increase in the nonparametric multipoint LOD score was observed on chromosome 2q34, with a peak LOD score of 3.2 at D2S2944 (P=.008) in 31 families with a minimum age at onset between 50 and 60 years. The LOD score in the chromosome 9p region previously linked to AD increased to 4.6 at D9S741 (P=.01) in 334 families with minimum age at onset between 60 and 75 years. LOD scores were also significantly increased on chromosome 15q22: a peak LOD score of 2.8 (P=.0004) was detected at D15S1507 (60 cM) in 38 families with minimum age at onset ≥79 years, and a peak LOD score of 3.1 (P=.0006) was obtained at D15S153 (62 cM) in 43 families with mean age at onset >80 years. Thirty-one families were contained in both 15q22 subsets, indicating that these results are likely detecting the same locus. There is little overlap in these subsets, underscoring the utility of age at onset as a marker of genetic heterogeneity. These results indicate that linkage to chromosome 9p is strongest in late-onset AD and that regions on chromosome 2q34 and 15q22 are linked to early-onset AD and very-late-onset AD, respectively.
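
    The ordered-subsets idea itself is compact enough to sketch: rank families by the covariate, accumulate per-family LOD scores, take the maximizing subset, and assess significance by permuting the covariate. The data below are simulated placeholders, not the AD sample.

    ```python
    import numpy as np

    def ordered_subset_scan(lods, covariate):
        """Ordered-subsets scan: rank families by a covariate (e.g., mean
        age at onset), accumulate per-family LOD scores, and report the
        subset with the maximum summed LOD."""
        order = np.argsort(covariate)
        cum = np.cumsum(lods[order])
        k = int(np.argmax(cum))
        return cum[k], order[: k + 1]        # best LOD and the subset

    def permutation_p(lods, covariate, n_perm=2000, seed=0):
        """Significance of the subset LOD, by permuting the covariate."""
        rng = np.random.default_rng(seed)
        obs, _ = ordered_subset_scan(lods, covariate)
        null = [ordered_subset_scan(lods, rng.permutation(covariate))[0]
                for _ in range(n_perm)]
        return float(np.mean(np.array(null) >= obs))

    rng = np.random.default_rng(42)
    lods = rng.normal(0.05, 0.3, size=120)   # hypothetical per-family LOD scores
    onset = rng.uniform(50, 85, size=120)    # hypothetical mean ages at onset
    lod, subset = ordered_subset_scan(lods, onset)
    print(lod, len(subset), permutation_p(lods, onset))
    ```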

  7. On the polarizability dyadics of electrically small, convex objects

    NASA Astrophysics Data System (ADS)

    Lakhtakia, Akhlesh

    1993-11-01

    This communication on the polarizability dyadics of electrically small objects of convex shapes has been prompted by a recent paper published by Sihvola and Lindell on the polarizability dyadic of an electrically gyrotropic sphere. A mini-review of recent work on polarizability dyadics is appended.

  8. On new fractional Hermite-Hadamard type inequalities for n-time differentiable quasi-convex functions and P-functions

    NASA Astrophysics Data System (ADS)

    Set, Erhan; Özdemir, M. Emin; Alan, E. Aykan

    2017-04-01

    In this article, by using Hölder's inequality and the power mean inequality, the authors establish several inequalities of Hermite-Hadamard type for n-time differentiable quasi-convex functions and P-functions involving Riemann-Liouville fractional integrals.
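
    For reference, the classical inequality that these fractional variants generalize states that for a convex function f on [a, b]:

    ```latex
    % The classical Hermite-Hadamard inequality for convex f on [a, b]:
    \[
      f\!\Bigl(\frac{a+b}{2}\Bigr)
      \;\le\; \frac{1}{b-a} \int_a^b f(x)\, dx
      \;\le\; \frac{f(a)+f(b)}{2}.
    \]
    ```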

  9. Nature and Consequences of Biological Reductionism for the Immunological Study of Infectious Diseases

    DOE PAGES

    Rivas, Ariel L.; Leitner, Gabriel; Jankowski, Mark D.; ...

    2017-05-31

    Evolution has conserved “economic” systems that perform many functions, faster or better, with less. For example, three to five leukocyte types protect from thousands of pathogens. In order to achieve so much with so little, biological systems combine their limited elements, creating complex structures. Yet, the prevalent research paradigm is reductionist. Focusing on infectious diseases, reductionist and non-reductionist views are here described. Furthermore, the literature indicates that reductionism is associated with information loss and errors, while non-reductionist operations can extract more information from the same data. When designed to capture one-to-many/many-to-one interactions—including the use of arrows that connect pairs of consecutive observations—non-reductionist (spatial–temporal) constructs eliminate data variability from all dimensions, except along one line, while arrows describe the directionality of temporal changes that occur along the line. To validate the patterns detected by non-reductionist operations, reductionist procedures are needed. Integrated (non-reductionist and reductionist) methods can (i) distinguish data subsets that differ immunologically and statistically; (ii) differentiate false-negative from -positive errors; (iii) discriminate disease stages; (iv) capture in vivo, multilevel interactions that consider the patient, the microbe, and antibiotic-mediated responses; and (v) assess dynamics. Integrated methods provide repeatable and biologically interpretable information.

  10. Offsite radiological consequence analysis for the bounding flammable gas accident

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    CARRO, C.A.

    2003-03-19

    The purpose of this analysis is to calculate the offsite radiological consequence of the bounding flammable gas accident. DOE-STD-3009-94, "Preparation Guide for U.S. Department of Energy Nonreactor Nuclear Facility Documented Safety Analyses", requires the formal quantification of a limited subset of accidents representing a complete set of bounding conditions. The results of these analyses are then evaluated to determine if they challenge the DOE-STD-3009-94, Appendix A, "Evaluation Guideline," of 25 rem total effective dose equivalent in order to identify and evaluate safety class structures, systems, and components. The bounding flammable gas accident is a detonation in a single-shell tank (SST). A detonation versus a deflagration was selected for analysis because the faster flame speed of a detonation can potentially result in a larger release of respirable material. As will be shown, the consequences of a detonation in either an SST or a double-shell tank (DST) are approximately equal. A detonation in an SST was selected as the bounding condition because the estimated respirable release masses are the same and because the doses per unit quantity of waste inhaled are generally greater for SSTs than for DSTs. Appendix A contains a DST analysis for comparison purposes.

  11. Nature and Consequences of Biological Reductionism for the Immunological Study of Infectious Diseases

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rivas, Ariel L.; Leitner, Gabriel; Jankowski, Mark D.

    Evolution has conserved “economic” systems that perform many functions, faster or better, with less. For example, three to five leukocyte types protect from thousands of pathogens. In order to achieve so much with so little, biological systems combine their limited elements, creating complex structures. Yet, the prevalent research paradigm is reductionist. Focusing on infectious diseases, reductionist and non-reductionist views are here described. Furthermore, the literature indicates that reductionism is associated with information loss and errors, while non-reductionist operations can extract more information from the same data. When designed to capture one-to-many/many-to-one interactions—including the use of arrows that connect pairs of consecutive observations—non-reductionist (spatial–temporal) constructs eliminate data variability from all dimensions, except along one line, while arrows describe the directionality of temporal changes that occur along the line. To validate the patterns detected by non-reductionist operations, reductionist procedures are needed. Integrated (non-reductionist and reductionist) methods can (i) distinguish data subsets that differ immunologically and statistically; (ii) differentiate false-negative from -positive errors; (iii) discriminate disease stages; (iv) capture in vivo, multilevel interactions that consider the patient, the microbe, and antibiotic-mediated responses; and (v) assess dynamics. Integrated methods provide repeatable and biologically interpretable information.

  12. High Order Schemes in Bats-R-US for Faster and More Accurate Predictions

    NASA Astrophysics Data System (ADS)

    Chen, Y.; Toth, G.; Gombosi, T. I.

    2014-12-01

    BATS-R-US is a widely used global magnetohydrodynamics model that originally employed second-order accurate TVD schemes combined with block-based Adaptive Mesh Refinement (AMR) to achieve high resolution in the regions of interest. In recent years we have implemented fifth-order accurate finite difference schemes, CWENO5 and MP5, for uniform Cartesian grids. The high-order schemes have now been extended to generalized coordinates, including spherical grids, and to non-uniform AMR grids with dynamic regridding. We present numerical tests that verify the preservation of a free-stream solution and high-order accuracy, as well as robust oscillation-free behavior near discontinuities. We apply the new high-order accurate schemes to both heliospheric and magnetospheric simulations and show that they are robust and can achieve the same accuracy as the second-order scheme with much less computational resources. This is especially important for space weather prediction, which requires faster-than-real-time code execution.

  13. Technical Proceedings fo the Symposium on Military Information Systems Engineering (Panel 11 on Information Processing Technology, Defence Research Group).

    DTIC Science & Technology

    1991-12-27

    session. The following gives the flavour of the comments made. 17. Prototyping captures requirements. The prototype exercises requirements and allows the …can modify the data in a given sub-set. These sub-sets can be used as granules of database distribution in order to simplify access control. (3

  14. Convexity of Energy-Like Functions: Theoretical Results and Applications to Power System Operations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dvijotham, Krishnamurthy; Low, Steven; Chertkov, Michael

    2015-01-12

    Power systems are undergoing unprecedented transformations with increased adoption of renewables and distributed generation, as well as the adoption of demand response programs. All of these changes, while making the grid more responsive and potentially more efficient, pose significant challenges for power systems operators. Conventional operational paradigms are no longer sufficient as the power system may no longer have big dispatchable generators with sufficient positive and negative reserves. This increases the need for tools and algorithms that can efficiently predict safe regions of operation of the power system. In this paper, we study energy functions as a tool to design algorithms for various operational problems in power systems. These have a long history in power systems and have been primarily applied to transient stability problems. In this paper, we take a new look at power systems, focusing on an aspect that has previously received little attention: convexity. We characterize the domain of voltage magnitudes and phases within which the energy function is convex in these variables. We show that this corresponds naturally with standard operational constraints imposed in power systems. We show that the power flow equations can be solved using this approach, as long as the solution lies within the convexity domain. We outline various desirable properties of solutions in the convexity domain and present simple numerical illustrations supporting our results.
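
    A hypothetical two-bus illustration of the convexity-domain idea (ours, not taken from the paper): for a lossless line of susceptance b and injection P, an energy-like function of the phase difference θ is convex exactly on the standard operating range of phase differences.

    ```latex
    % Two-bus toy: energy-like function of the phase difference \theta,
    % whose stationarity condition b\sin\theta = P is the power flow equation.
    \[
      E(\theta) \;=\; -P\,\theta \;-\; b\cos\theta ,
      \qquad
      E''(\theta) \;=\; b\cos\theta \;>\; 0
      \iff |\theta| < \tfrac{\pi}{2},
    \]
    % so E is convex precisely where the phase difference satisfies the
    % usual operational constraint, mirroring the correspondence noted above.
    ```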

  15. L1-2 minimization for exact and stable seismic attenuation compensation

    NASA Astrophysics Data System (ADS)

    Wang, Yufeng; Ma, Xiong; Zhou, Hui; Chen, Yangkang

    2018-06-01

    Frequency-dependent amplitude absorption and phase velocity dispersion are typically linked by the causality-imposed Kramers-Kronig relations, which inevitably degrade the quality of seismic data. Seismic attenuation compensation is an important processing approach for enhancing signal resolution and fidelity, which can be performed on either pre-stack or post-stack data so as to mitigate amplitude absorption and phase dispersion effects resulting from the intrinsic anelasticity of subsurface media. Inversion-based compensation with an L1-norm constraint, inspired by the sparsity of the reflectivity series, enjoys better stability than traditional inverse Q filtering. However, constrained L1 minimization, serving as the convex relaxation of the literal L0 sparsity count, may not give the sparsest solution when the kernel matrix is severely ill-conditioned. Recently, non-convex metrics for compressed sensing have attracted considerable research interest. In this paper, we propose a nearly unbiased approximation of the vector sparsity, denoted L1-2 minimization, for exact and stable seismic attenuation compensation. The non-convex L1-2 penalty function can be decomposed into two convex subproblems via the difference-of-convex algorithm, and each subproblem can be solved efficiently by the alternating direction method of multipliers. The superior performance of the proposed compensation scheme based on the L1-2 metric over the conventional L1 penalty is further demonstrated by both synthetic and field examples.
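
    The difference-of-convex splitting mentioned here is straightforward to prototype: linearize the concave part −λ‖x‖₂ at the current iterate and solve each convex subproblem, here by proximal-gradient (ISTA) steps rather than the paper's ADMM. The problem sizes and parameters below are illustrative assumptions on a generic sparse-recovery toy, not seismic data.

    ```python
    import numpy as np

    def soft(x, t):
        return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

    def l1_minus_l2(A, b, lam=0.1, n_dca=20, n_ista=200):
        """L1-2 sparse recovery, min 0.5||Ax-b||^2 + lam*(||x||_1 - ||x||_2),
        via difference-of-convex: linearize -||x||_2 at the current iterate,
        then solve the convex subproblem with ISTA (proximal gradient)."""
        m, n = A.shape
        x = np.zeros(n)
        L = np.linalg.norm(A, 2) ** 2        # Lipschitz constant of the gradient
        for _ in range(n_dca):
            nx = np.linalg.norm(x)
            s = x / nx if nx > 0 else np.zeros(n)   # subgradient of ||x||_2
            for _ in range(n_ista):          # convex subproblem
                grad = A.T @ (A @ x - b) - lam * s
                x = soft(x - grad / L, lam / L)
        return x

    rng = np.random.default_rng(3)
    A = rng.normal(size=(40, 100))
    x_true = np.zeros(100); x_true[[5, 30, 77]] = [2.0, -1.5, 1.0]
    b = A @ x_true
    print(np.nonzero(np.round(l1_minus_l2(A, b), 2))[0])
    ```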

  16. GTD analysis of airborne antennas radiating in the presence of lossy dielectric layers

    NASA Technical Reports Server (NTRS)

    Rojas-Teran, R. G.; Burnside, W. D.

    1981-01-01

    The patterns of monopole or aperture antennas mounted on a perfectly conducting convex surface radiating in the presence of a dielectric or metal plate are computed. The geometrical theory of diffraction is used to analyze the radiating system and extended here to include diffraction by flat dielectric slabs. Modified edge diffraction coefficients valid for wedges whose walls are lossy or lossless thin dielectric or perfectly conducting plates are developed. The width of the dielectric plates cannot exceed a quarter of a wavelength in free space, and the interior angle of the wedge is assumed to be close to 0 deg or 180 deg. Systematic methods for computing the individual components of the total high frequency field are discussed. The accuracy of the solutions is demonstrated by comparisons with measured results, where a 2λ × 4λ prolate spheroid is used as the convex surface. A jump or kink appears in the calculated pattern when higher order terms that are important are not included in the final solution. The most immediate application of the results presented here is in the modelling of structures such as aircraft which are composed of nonmetallic parts that play a significant role in the pattern.

  17. Distortion outage minimization in Nakagami fading using limited feedback

    NASA Astrophysics Data System (ADS)

    Wang, Chih-Hong; Dey, Subhrakanti

    2011-12-01

    We focus on a decentralized estimation problem via a clustered wireless sensor network measuring a random Gaussian source where the clusterheads amplify and forward their received signals (from the intra-cluster sensors) over orthogonal independent stationary Nakagami fading channels to a remote fusion center that reconstructs an estimate of the original source. The objective of this paper is to design clusterhead transmit power allocation policies to minimize the distortion outage probability at the fusion center, subject to an expected sum transmit power constraint. In the case when full channel state information (CSI) is available at the clusterhead transmitters, the optimization problem can be shown to be convex and is solved exactly. When only rate-limited channel feedback is available, we design a number of computationally efficient sub-optimal power allocation algorithms to solve the associated non-convex optimization problem. We also derive an approximation for the diversity order of the distortion outage probability in the limit when the average transmission power goes to infinity. Numerical results illustrate that the sub-optimal power allocation algorithms perform very well and can close the outage probability gap between the constant power allocation (no CSI) and full CSI-based optimal power allocation with only 3-4 bits of channel feedback.

  18. The equation of state of Song and Mason applied to fluorine

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Eslami, H.; Boushehri, A.

    1999-03-01

    An analytical equation of state is applied to calculate the compressed and saturation thermodynamic properties of fluorine. The equation of state is that of Song and Mason. It is based on a statistical mechanical perturbation theory of hard convex bodies and is a fifth-order polynomial in the density. There exist three temperature-dependent parameters: the second virial coefficient, an effective molecular volume, and a scaling factor for the average contact pair distribution function of hard convex bodies. The temperature-dependent parameters can be calculated if the intermolecular pair potential is known. However, the equation is usable with much less input than the full intermolecular potential, since the scaling factor and effective volume are nearly universal functions when expressed in suitable reduced units. The equation of state has been applied to calculate thermodynamic parameters including the critical constants, the vapor pressure curve, the compressibility factor, the fugacity coefficient, the enthalpy, the entropy, the heat capacity at constant pressure, the ratio of heat capacities, the Joule-Thomson coefficient, the Joule-Thomson inversion curve, and the speed of sound for fluorine. The agreement with experiment is good.

  19. Sparse signals recovered by non-convex penalty in quasi-linear systems.

    PubMed

    Cui, Angang; Li, Haiyang; Wen, Meng; Peng, Jigen

    2018-01-01

    The goal of compressed sensing is to reconstruct a sparse signal from a few linear measurements, far fewer than the dimension of the ambient space of the signal. However, many real-life applications in physics and the biomedical sciences carry strongly nonlinear structures, and the linear model is no longer suitable. Compared with compressed sensing in the linear setting, this nonlinear compressed sensing is much more difficult, in fact an NP-hard combinatorial problem, because of the discrete and discontinuous nature of the ℓ0-norm and the nonlinearity. To make sparse signal recovery tractable, we assume in this paper that the nonlinear models have a smooth quasi-linear nature, and we study a non-convex fraction function [Formula: see text] in this quasi-linear compressed sensing. We propose an iterative fraction thresholding algorithm to solve the regularization problem [Formula: see text] for all [Formula: see text]. With the change of parameter [Formula: see text], our algorithm obtains promising results, which is one of its advantages compared with state-of-the-art algorithms. Numerical experiments show that our method performs much better than some state-of-the-art methods.

  20. Modelling the role of surface stress on the kinetics of tissue growth in confined geometries.

    PubMed

    Gamsjäger, E; Bidan, C M; Fischer, F D; Fratzl, P; Dunlop, J W C

    2013-03-01

    In a previous paper we presented a theoretical framework to describe tissue growth in confined geometries based on the work of Ambrosi and Guillou [Ambrosi D, Guillou A. Growth and dissipation in biological tissues. Cont Mech Thermodyn 2007;19:245-51]. A thermodynamically consistent eigenstrain rate for growth was derived using the concept of configurational forces and used to investigate growth in holes of cylindrical geometries. Tissue growing from concave surfaces can be described by a model based on this theory. However, an apparently asymmetric behaviour between growth from convex and concave surfaces has been observed experimentally, but is not predicted by this model. This contradiction is likely to be due to the presence of contractile tensile stresses produced by cells near the tissue surface. In this contribution we extend the model in order to couple tissue growth to the presence of a surface stress. This refined growth model is solved for two geometries, within a cylindrical hole and on the outer surface of a cylinder, thus demonstrating how surface stress may indeed inhibit growth on convex substrates.

  1. On Using Homogeneous Polynomials To Design Anisotropic Yield Functions With Tension/Compression Symmetry/Asymmetry

    NASA Astrophysics Data System (ADS)

    Soare, S.; Yoon, J. W.; Cazacu, O.

    2007-05-01

    With few exceptions, non-quadratic homogeneous polynomials have received little attention as possible candidates for yield functions. One reason might be that not every such polynomial is a convex function. In this paper we show that homogeneous polynomials can be used to develop powerful anisotropic yield criteria, and that imposing simple constraints on the identification process leads, a posteriori, to the desired convexity property. It is shown that combinations of such polynomials allow for modeling the yielding properties of metallic materials with any crystal structure, i.e., both cubic and hexagonal, which display strength-differential effects. Extensions of the proposed criteria to 3D stress states are also presented. We apply these criteria to the description of the aluminum alloy AA2090-T3. We prove that a sixth-order orthotropic homogeneous polynomial is capable of a satisfactory description of this alloy. Next, applications to the deep drawing of a cylindrical cup are presented. The newly proposed criteria were implemented as UMAT subroutines in the commercial FE code ABAQUS. We were able to predict six ears on the AA2090-T3 cup's profile. Finally, we show that a tension/compression asymmetry in yielding can have an important effect on the earing profile.
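    The convexity requirement on a candidate polynomial can be probed numerically. The Python sketch below (an assumption-laden illustration, not the authors' identification procedure) samples the finite-difference Hessian of a toy fourth-order homogeneous function and checks positive semidefiniteness; such a sampled check can refute convexity but never prove it.

        import numpy as np

        def hessian_fd(f, x, h=1e-4):
            # Central finite-difference Hessian of a scalar function f at x.
            n = len(x)
            H = np.zeros((n, n))
            for i in range(n):
                for j in range(n):
                    ei, ej = np.eye(n)[i] * h, np.eye(n)[j] * h
                    H[i, j] = (f(x + ei + ej) - f(x + ei - ej)
                               - f(x - ei + ej) + f(x - ei - ej)) / (4 * h * h)
            return H

        def looks_convex(f, dim, samples=200, seed=0):
            # Sample points; flag failure if any Hessian has a clearly
            # negative eigenvalue (tolerance absorbs finite-difference noise).
            rng = np.random.default_rng(seed)
            for _ in range(samples):
                x = rng.standard_normal(dim)
                if np.linalg.eigvalsh(hessian_fd(f, x)).min() < -1e-6:
                    return False
            return True

        # toy 4th-order homogeneous polynomial in plane-stress-like components
        f = lambda s: (s[0]**2 + s[1]**2 - s[0]*s[1] + 3*s[2]**2) ** 2
        print(looks_convex(f, 3))   # expected True: square of a convex PSD form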

  2. Position-based coding and convex splitting for private communication over quantum channels

    NASA Astrophysics Data System (ADS)

    Wilde, Mark M.

    2017-10-01

    The classical-input quantum-output (cq) wiretap channel is a communication model involving a classical sender X, a legitimate quantum receiver B, and a quantum eavesdropper E. The goal of a private communication protocol that uses such a channel is for the sender X to transmit a message in such a way that the legitimate receiver B can decode it reliably, while the eavesdropper E learns essentially nothing about which message was transmitted. The ε-one-shot private capacity of a cq wiretap channel is equal to the maximum number of bits that can be transmitted over the channel, such that the privacy error is no larger than ε ∈ (0,1). The present paper provides a lower bound on the ε-one-shot private classical capacity, by exploiting the recently developed techniques of Anshu, Devabathini, Jain, and Warsi, called position-based coding and convex splitting. The lower bound is equal to a difference of the hypothesis testing mutual information between X and B and the "alternate" smooth max-information between X and E. The one-shot lower bound then leads to a non-trivial lower bound on the second-order coding rate for private classical communication over a memoryless cq wiretap channel.

  3. Hyperbolicity of the Nonlinear Models of Maxwell's Equations

    NASA Astrophysics Data System (ADS)

    Serre, Denis

    We consider the class of nonlinear models of electromagnetism that has been described by Coleman & Dill [7]. A model is completely determined by its energy density W(B,D). Viewing the electromagnetic field (B,D) as a 3×2 matrix, we show that polyconvexity of W implies the local well-posedness of the Cauchy problem within smooth functions of class H^s with s > 1 + d/2. The method follows that designed by Dafermos in his book [9] in the context of nonlinear elasticity. We use the fact that B×D is a (vectorial, non-convex) entropy, and we enlarge the system from 6 to 9 equations. The resulting system admits an entropy (actually the energy) that is convex. Since the energy conservation law does not derive from the system of conservation laws itself (Faraday's and Ampère's laws), but also needs the compatibility relations div B = div D = 0 (the latter may be relaxed in order to take into account electric charges), the energy density is not an entropy in the classical sense. Thus the system cannot be symmetrized, strictly speaking. However, we show that the structure is close enough to symmetrizability, so that the standard estimates still hold true.

  4. Image reconstruction and scan configurations enabled by optimization-based algorithms in multispectral CT

    NASA Astrophysics Data System (ADS)

    Chen, Buxin; Zhang, Zheng; Sidky, Emil Y.; Xia, Dan; Pan, Xiaochuan

    2017-11-01

    Optimization-based algorithms for image reconstruction in multispectral (or photon-counting) computed tomography (MCT) remain a topic of active research. The challenge of optimization-based image reconstruction in MCT stems from the inherently non-linear data model that can lead to a non-convex optimization program for which no mathematically exact solver seems to exist for achieving globally optimal solutions. In this work, based upon a non-linear data model, we design a non-convex optimization program, derive its first-order-optimality conditions, and propose an algorithm to solve the program for image reconstruction in MCT. In addition to consideration of image reconstruction for the standard scan configuration, the emphasis is on investigating the algorithm's potential for enabling non-standard scan configurations with no or minimal hardware modification to existing CT systems, which has potential practical implications for lowered hardware cost, enhanced scanning flexibility, and reduced imaging dose/time in MCT. Numerical studies are carried out for verification of the algorithm and its implementation, and for a preliminary demonstration and characterization of the algorithm in reconstructing images and in enabling non-standard configurations with varying scanning angular range and/or x-ray illumination coverage in MCT.

  5. Cellulose nanomaterials as additives for cementitious materials

    Treesearch

    Tengfei Fu; Robert J. Moon; Pablo Zavatierri; Jeffrey Youngblood; William Jason Weiss

    2017-01-01

    Cementitious materials cover a very broad area of industries/products (buildings, streets and highways, water and waste management, and many others; see Fig. 20.1). Annual production of cements is on the order of 4 billion metric tons [2]. In general these industries want stronger, cheaper, more durable concrete, with faster setting times, faster rates of strength gain...

  6. Linear Controller Design: Limits of Performance

    DTIC Science & Technology

    1991-01-01

    where a sensor should be placed, e.g., where an accelerometer is to be positioned on an aircraft or where a strain gauge is placed along a beam ... [table-of-contents excerpt] Special Algorithms for Convex Optimization; Notation and Problem Definitions; On Algorithms for Convex Optimization; Cutting-Plane Algorithms

  7. A Reynolds stress model for near-wall turbulence

    NASA Technical Reports Server (NTRS)

    Durbin, P. A.

    1993-01-01

    The paper formulates a tensorially consistent near-wall second-order closure model. Redistributive terms in the Reynolds stress equations are modeled by an elliptic relaxation equation in order to represent strongly nonhomogeneous effects produced by the presence of walls; this replaces the quasi-homogeneous algebraic models that are usually employed, and avoids the need for ad hoc damping functions. The model is solved for channel flow and boundary layers with zero and adverse pressure gradients. Good predictions of Reynolds stress components, mean flow, skin friction, and displacement thickness are obtained in various comparisons to experimental and direct numerical simulation data. The model is also applied to a boundary layer flowing along a wall with a 90-deg, constant-radius, convex bend.

  8. Large-Scale, Lineage-Specific Expansion of a Bric-a-Brac/Tramtrack/Broad Complex Ubiquitin-Ligase Gene Family in Rice

    PubMed Central

    Gingerich, Derek J.; Hanada, Kousuke; Shiu, Shin-Han; Vierstra, Richard D.

    2007-01-01

    Selective ubiquitination of proteins is directed by diverse families of ubiquitin-protein ligases (or E3s) in plants. One important type uses Cullin-3 as a scaffold to assemble multisubunit E3 complexes containing one of a multitude of bric-a-brac/tramtrack/broad complex (BTB) proteins that function as substrate recognition factors. We previously described the 80-member BTB gene superfamily in Arabidopsis thaliana. Here, we describe the complete BTB superfamily in rice (Oryza sativa spp japonica cv Nipponbare) that contains 149 BTB domain–encoding genes and 43 putative pseudogenes. Amino acid sequence comparisons of the rice and Arabidopsis superfamilies revealed a near equal repertoire of putative substrate recognition module types. However, phylogenetic comparisons detected numerous gene duplication and/or loss events since the rice and Arabidopsis BTB lineages split, suggesting possible functional specialization within individual BTB families. In particular, a major expansion and diversification of a subset of BTB proteins containing Meprin and TRAF homology (MATH) substrate recognition sites was evident in rice and other monocots that likely occurred following the monocot/dicot split. The MATH domain of a subset appears to have evolved significantly faster than those in a smaller core subset that predates flowering plants, suggesting that the substrate recognition modules in many monocot MATH-BTB E3s are diversifying to ubiquitinate a set of substrates that are themselves rapidly changing. Intriguing possibilities include pathogen proteins attempting to avoid inactivation by the monocot host. PMID:17720868

  9. Experimental evaluation of the certification-trail method

    NASA Technical Reports Server (NTRS)

    Sullivan, Gregory F.; Wilson, Dwight S.; Masson, Gerald M.; Itoh, Mamoru; Smith, Warren W.; Kay, Jonathan S.

    1993-01-01

    Certification trails are a recently introduced and promising approach to fault detection and fault tolerance. A comprehensive attempt to assess experimentally the performance and overall value of the method is reported. The method is applied to algorithms for the following problems: Huffman tree, shortest path, minimum spanning tree, sorting, and convex hull. Our results reveal many cases in which an approach using certification trails allows for significantly faster overall program execution time than a basic time-redundancy approach. Algorithms for the answer-validation problem for abstract data types were also examined. This kind of problem provides a basis for applying the certification-trail method to wide classes of algorithms. Answer-validation solutions for two types of priority queues were implemented and analyzed. In both cases, the algorithm which performs answer-validation is substantially faster than the original algorithm for computing the answer. Next, a probabilistic model and analysis that enable comparison between the certification-trail method and the time-redundancy approach were presented. The analysis reveals some substantial and sometimes surprising advantages for the certification-trail method. Finally, the work our group performed on the design and implementation of fault injection testbeds for experimental analysis of the certification-trail technique is discussed. This work employs two distinct methodologies: software fault injection (modification of instruction, data, and stack segments of programs on a Sun Sparcstation ELC and on an IBM 386 PC) and hardware fault injection (control, address, and data lines of a Motorola MC68000-based target system pulsed at logical zero/one values). Our results indicate the viability of the certification-trail technique. It is also believed that the tools developed provide a solid base for additional exploration.

  10. Spacecraft attitude determination using a second-order nonlinear filter

    NASA Technical Reports Server (NTRS)

    Vathsal, S.

    1987-01-01

    The stringent attitude determination accuracy and faster slew maneuver requirements demanded by present-day spacecraft control systems motivate the development of recursive nonlinear filters for attitude estimation. This paper presents the second-order filter development for the estimation of the attitude quaternion using three-axis gyro and star tracker measurement data. Performance comparisons have been made by computer simulation of system models and filter mechanization. It is shown that the second-order filter consistently performs better than the extended Kalman filter when the performance index of the root-sum-square estimation error of the quaternion vector is compared. The second-order filter identifies the gyro drift rates faster than the extended Kalman filter. The uniqueness of this algorithm is the online generation of the time-varying process and measurement noise covariance matrices, derived as functions of the process and measurement nonlinearity, respectively.

  11. Energy optimization in mobile sensor networks

    NASA Astrophysics Data System (ADS)

    Yu, Shengwei

    Mobile sensor networks are considered to consist of a network of mobile robots, each of which has computation, communication and sensing capabilities. Energy efficiency is a critical issue in mobile sensor networks, especially since mobility (i.e., locomotion control), routing (i.e., communications), and sensing are characteristics of mobile robots that can be exploited for energy optimization. This thesis focuses on the problem of energy optimization of mobile robotic sensor networks, and the research results can be extended to energy optimization of a network of mobile robots that monitors the environment, or a team of mobile robots that transports materials from station to station in a manufacturing environment. On the energy optimization of mobile robotic sensor networks, our research focuses on the investigation and development of distributed optimization algorithms to exploit the mobility of robotic sensor nodes for network lifetime maximization. In particular, the thesis studies these five problems: 1. Network-lifetime maximization by controlling positions of networked mobile sensor robots based on local information with distributed optimization algorithms; 2. Lifetime maximization of mobile sensor networks with energy harvesting modules; 3. Lifetime maximization using joint design of mobility and routing; 4. Optimal control for network energy minimization; 5. Network lifetime maximization in mobile visual sensor networks. In addressing the first problem, we consider only the mobility strategies of the robotic relay nodes in a mobile sensor network in order to maximize its network lifetime. By using variable substitutions, the original problem is converted into a convex problem, and a variant of the sub-gradient method for saddle-point computation is developed for solving this problem. An optimal solution is obtained by the method. Computer simulations show that mobility of robotic sensors can significantly prolong the lifetime of the whole robotic sensor network while consuming a negligible amount of energy for mobility. For the second problem, the problem is extended to accommodate mobile robotic nodes with energy harvesting capability, which makes it a non-convex optimization problem. The non-convexity issue is tackled by using the existing sequential convex approximation method, based on which we propose a novel modified sequential convex approximation procedure with fast convergence speed. For the third problem, the proposed procedure is used to solve another challenging non-convex problem, which results in utilizing mobility and routing simultaneously in mobile robotic sensor networks to prolong the network lifetime. The results indicate that joint design of mobility and routing has an edge over other methods in prolonging network lifetime, which is also the justification for the use of mobility in mobile sensor networks for energy efficiency purposes. For the fourth problem, we include the dynamics of the robotic nodes in the problem by modeling the networked robotic system using hybrid systems theory. A novel distributed method for the networked hybrid system is used to solve for the optimal moving trajectories of the robotic nodes and the optimal network links, questions not answered by previous approaches. Finally, the fact that mobility is more effective in prolonging network lifetime for a data-intensive network leads us to apply our methods to study mobile visual sensor networks, which are useful in many applications. We investigate the joint design of mobility, data routing, and encoding power to help improve the video quality while maximizing the network lifetime. This study leads to a better understanding of the role mobility can play in data-intensive surveillance sensor networks.
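    As a toy illustration of the subgradient machinery mentioned for the first problem, the following Python sketch places a single relay to maximize the minimum per-node "lifetime". The quadratic lifetime model, node data, and step rule are all hypothetical; the thesis's actual saddle-point formulation after variable substitution is richer than this.

        import numpy as np

        rng = np.random.default_rng(5)
        a = rng.random((5, 2))       # sensor node positions (hypothetical)
        E = np.full(5, 10.0)         # initial node energies (hypothetical)
        c = 1.0                      # energy cost coefficient (hypothetical)

        x = np.full(2, 0.5)          # relay position to optimize
        for k in range(1, 501):
            f = E - c * np.sum((x - a) ** 2, axis=1)   # per-node "lifetimes"
            i = int(np.argmin(f))                      # node limiting the network
            g = -2.0 * c * (x - a[i])                  # supergradient of active f_i
            x = np.clip(x + g / k, 0.0, 1.0)           # diminishing step, box projection
        print(x, (E - c * np.sum((x - a) ** 2, axis=1)).min())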

  12. Hardware Fault Simulator for Microprocessors

    NASA Technical Reports Server (NTRS)

    Hess, L. M.; Timoc, C. C.

    1983-01-01

    Breadboarded circuit is faster and more thorough than software simulator. Elementary fault simulator for AND gate uses three gates and a shift register to simulate stuck-at-one or stuck-at-zero conditions at inputs and output. Experimental results showed that the hardware fault simulator for a microprocessor gave results faster than the software simulator by two orders of magnitude, with one test being applied every 4 microseconds.

  13. 78 FR 68833 - Combined Notice of Filings #1

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-11-15

    ... DEPARTMENT OF ENERGY Federal Energy Regulatory Commission Combined Notice of Filings 1 Take notice... Wallingford--CONVEX Services CL&P Electric Rate Schedule FERC No. 583 to be effective 1/1/2014. Filed Date: 11... Company submits CMEEC--CONVEX Services First Revised Rate Schedule FERC No. 576 to be effective 1/1/2014...

  14. Convexities move because they contain matter.

    PubMed

    Barenholtz, Elan

    2010-09-22

    Figure-ground assignment to a contour is a fundamental stage in visual processing. The current paper introduces a novel, highly general dynamic cue to figure-ground assignment: "Convex Motion." Across six experiments, subjects showed a strong preference to assign figure and ground to a dynamically deforming contour such that the moving contour segment was convex rather than concave. Experiments 1 and 2 established the preference across two different kinds of deformational motion. Additional experiments determined that this preference was not due to fixation (Experiment 3) or attentional mechanisms (Experiment 4). Experiment 5 found a similar, but reduced, bias for rigid (as opposed to deformational) motion, and Experiment 6 demonstrated that the phenomenon depends on the global motion of the affected contour. An explanation of this phenomenon is presented on the basis of typical natural deformational motion, which tends to involve convex contour projections that contain regions consisting of physical "matter," as opposed to concave contour indentations that contain empty space. These results highlight the fundamental relationship between figure and ground, perceived shape, and the inferred physical properties of an object.

  15. A distributed approach to the OPF problem

    NASA Astrophysics Data System (ADS)

    Erseghe, Tomaso

    2015-12-01

    This paper presents a distributed approach to optimal power flow (OPF) in an electrical network, suitable for application in a future smart grid scenario where access to resources and control is decentralized. The non-convex OPF problem is solved by an augmented Lagrangian method, similar to the widely known ADMM algorithm, with the key distinction that penalty parameters are constantly increased. A (weak) assumption on local solver reliability is required to always ensure convergence. A certificate of convergence to a local optimum is available in the case of bounded penalty parameters. For moderately sized networks (up to 300 nodes, and even in the presence of a severe partition of the network), the approach guarantees a performance very close to the optimum, with an appreciably fast convergence speed. The generality of the approach makes it applicable to any (convex or non-convex) distributed optimization problem in networked form. In comparison with the literature, mostly focused on convex SDP approximations, the chosen approach guarantees adherence to the reference problem, and it also requires less local computational effort.
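    A minimal consensus sketch of the penalty-increasing augmented Lagrangian idea, in Python for two agents with local least-squares costs. This is a generic ADMM-style loop with an ever-growing penalty, not the paper's OPF formulation, and all problem data are synthetic.

        import numpy as np

        rng = np.random.default_rng(1)
        n = 4
        A = [rng.standard_normal((6, n)) for _ in range(2)]   # local data
        b = [rng.standard_normal(6) for _ in range(2)]

        x = [np.zeros(n) for _ in range(2)]
        y = [np.zeros(n) for _ in range(2)]   # (unscaled) dual variables
        z = np.zeros(n)                        # consensus copy
        rho = 1.0
        for _ in range(200):
            for i in range(2):
                # local step: argmin_x 0.5||A_i x - b_i||^2 + y_i'(x-z) + (rho/2)||x-z||^2
                x[i] = np.linalg.solve(A[i].T @ A[i] + rho * np.eye(n),
                                       A[i].T @ b[i] + rho * z - y[i])
            z = np.mean([x[i] + y[i] / rho for i in range(2)], axis=0)  # consensus
            for i in range(2):
                y[i] += rho * (x[i] - z)       # dual update
            rho *= 1.05                        # keep increasing the penalty
        print(z)   # approaches the stacked least-squares solution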

  16. Efficient convex-elastic net algorithm to solve the Euclidean traveling salesman problem.

    PubMed

    Al-Mulhem, M; Al-Maghrabi, T

    1998-01-01

    This paper describes a hybrid algorithm that combines an adaptive-type neural network algorithm and a nondeterministic iterative algorithm to solve the Euclidean traveling salesman problem (E-TSP). It begins with a brief introduction to the TSP and the E-TSP. Then, it presents the proposed algorithm with its two major components: the convex-elastic net (CEN) algorithm and the nondeterministic iterative improvement (NII) algorithm. These two algorithms are combined into the efficient convex-elastic net (ECEN) algorithm. The CEN algorithm integrates the convex-hull property and the elastic net algorithm to generate an initial tour for the E-TSP. The NII algorithm uses two rearrangement operators to improve the initial tour given by the CEN algorithm. The paper presents simulation results for two instances of the E-TSP: randomly generated tours and tours for well-known problems in the literature. Experimental results are given to show that the proposed algorithm can find nearly optimal solutions for the E-TSP, outperforming many similar algorithms reported in the literature. The paper concludes with the advantages of the new algorithm and possible extensions.
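    The convex-hull initialization plus local improvement idea can be sketched generically in Python as below: start the tour from the convex hull, insert interior cities by cheapest insertion, then polish with 2-opt. This illustrates the underlying idea only; the CEN elastic-net dynamics and the NII rearrangement operators themselves are not reproduced.

        import numpy as np
        from scipy.spatial import ConvexHull

        def tour_length(pts, tour):
            return sum(np.linalg.norm(pts[tour[i]] - pts[tour[i - 1]])
                       for i in range(len(tour)))

        def convex_hull_insertion(pts):
            # Start from the hull, then add each interior city where the
            # detour cost is smallest (cheapest insertion).
            tour = list(ConvexHull(pts).vertices)
            rest = [i for i in range(len(pts)) if i not in tour]
            while rest:
                best = None
                for c in rest:
                    for pos in range(len(tour)):
                        a, b = tour[pos - 1], tour[pos]
                        cost = (np.linalg.norm(pts[a] - pts[c])
                                + np.linalg.norm(pts[c] - pts[b])
                                - np.linalg.norm(pts[a] - pts[b]))
                        if best is None or cost < best[0]:
                            best = (cost, c, pos)
                _, c, pos = best
                tour.insert(pos, c)
                rest.remove(c)
            return tour

        def two_opt(pts, tour):
            # Reverse segments while that shortens the tour (local search).
            improved = True
            while improved:
                improved = False
                for i in range(len(tour) - 1):
                    for j in range(i + 2, len(tour)):
                        cand = tour[:i + 1] + tour[i + 1:j + 1][::-1] + tour[j + 1:]
                        if tour_length(pts, cand) < tour_length(pts, tour):
                            tour, improved = cand, True
            return tour

        pts = np.random.default_rng(2).random((30, 2))
        print(tour_length(pts, two_opt(pts, convex_hull_insertion(pts))))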

  17. Determining Representative Elementary Volume For Multiple Petrophysical Parameters using a Convex Hull Analysis of Digital Rock Data

    NASA Astrophysics Data System (ADS)

    Shah, S.; Gray, F.; Yang, J.; Crawshaw, J.; Boek, E.

    2016-12-01

    Advances in 3D pore-scale imaging and computational methods have allowed an exceptionally detailed quantitative and qualitative analysis of the fluid flow in complex porous media. A fundamental problem in pore-scale imaging and modelling is how to represent and model the range of scales encountered in porous media, starting from the smallest pore spaces. In this study, a novel method is presented for determining the representative elementary volume (REV) of a rock for several parameters simultaneously. We calculate the two main macroscopic petrophysical parameters, porosity and single-phase permeability, using micro CT imaging and Lattice Boltzmann (LB) simulations for 14 different porous media, including sandpacks, sandstones and carbonates. The concept of the 'Convex Hull' is then applied to calculate the REV for both parameters simultaneously using a plot of the area of the convex hull as a function of the sub-volume, capturing the different scales of heterogeneity from the pore-scale imaging. The results also show that the area of the convex hull (for well-chosen parameters such as the log of the permeability and the porosity) decays exponentially with sub-sample size suggesting a computationally efficient way to determine the system size needed to calculate the parameters to high accuracy (small convex hull area). Finally we propose using a characteristic length such as the pore size to choose an efficient absolute voxel size for the numerical rock.
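    A minimal Python sketch of the convex-hull statistic described above: for each sub-volume size, compute the area of the 2D convex hull of (porosity, log-permeability) points from many sub-samples and watch it decay. The sub-sample data here are synthetic stand-ins; in practice they would come from the LB simulations on micro CT sub-volumes. Note that for 2D inputs, scipy's ConvexHull reports the enclosed area in its .volume attribute (.area is the perimeter).

        import numpy as np
        from scipy.spatial import ConvexHull

        rng = np.random.default_rng(0)

        def hull_area(points):
            # Area of the 2D convex hull of (porosity, log k) sub-sample points.
            return ConvexHull(points).volume

        for L in [50, 100, 200, 400]:          # sub-volume edge length in voxels
            spread = 1.0 / np.sqrt(L)          # stand-in: scatter shrinks with size
            pts = np.column_stack([0.2 + spread * rng.standard_normal(30),
                                   2.0 + spread * rng.standard_normal(30)])
            print(L, hull_area(pts))           # area should decay toward the REV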

  18. Detection of longitudinal ulcer using roughness value for computer aided diagnosis of Crohn's disease

    NASA Astrophysics Data System (ADS)

    Oda, Masahiro; Kitasaka, Takayuki; Furukawa, Kazuhiro; Watanabe, Osamu; Ando, Takafumi; Goto, Hidemi; Mori, Kensaku

    2011-03-01

    The purpose of this paper is to present a new method to detect ulcers, one of the symptoms of Crohn's disease, from CT images. Crohn's disease is an inflammatory disease of the digestive tract. Crohn's disease commonly affects the small intestine. An optical or a capsule endoscope is used for small intestine examinations. However, these endoscopes cannot pass through intestinal stenosis parts in some cases. A CT image based diagnosis allows a physician to observe the whole intestine even if intestinal stenosis exists. However, because of the complicated shape of the small and large intestines, understanding the shapes of the intestines and the lesion positions is difficult in CT image based diagnosis. A computer-aided diagnosis system for Crohn's disease with automated lesion detection is required for efficient diagnosis. We propose an automated method to detect ulcers from CT images. Longitudinal ulcers roughen the surface of the small and large intestinal wall. The rough surface consists of a combination of convex and concave parts on the intestinal wall. We detect convex and concave parts on the intestinal wall by blob and inverse-blob structure enhancement filters. Many convex and concave parts concentrate on the roughened parts. We introduce a roughness value to differentiate convex and concave parts concentrated on the roughened parts from the others on the intestinal wall. The roughness value effectively reduces false positives in ulcer detection. Experimental results showed that the proposed method can detect convex and concave parts on the ulcers.

  19. “Soft that molds the hard:” Geometric morphometry of lateral atlantoaxial joints focusing on the role of cartilage in changing the contour of bony articular surfaces

    PubMed Central

    Prasad, Prashant Kumar; Salunke, Pravin; Sahni, Daisy; Kalra, Parveen

    2017-01-01

    Purpose: The existing literature on lateral atlantoaxial joints deals predominantly with the bony facets and is unable to explain various C1-2 motions observed. Geometric morphometry of the facets would help us in understanding the role of cartilages in C1-2 biomechanics/kinematics. Objective: To obtain anthropometric measurements (bone and cartilage) of the atlantoaxial joint and to assess the role of cartilages in joint biomechanics. Materials and Methods: The authors studied 10 cadaveric atlantoaxial lateral joints with the articular cartilage in situ and after removing it, using a three-dimensional laser scanner. The data were compared using geometric morphometry with emphasis on the surface contours of the articulating surfaces. Results: The bony inferior articular facet of the atlas is concave in both the sagittal and coronal planes. The bony superior articular facet of the axis is convex in the sagittal plane and is concave (laterally) and convex medially in the coronal plane. The bony articulating surfaces were nonconcordant. The articular cartilages of both C1 and C2 are biconvex in both planes and are thicker than the concavities of the bony articulating surfaces. Conclusion: The biconvex structure of the cartilage converts the surface morphology of the C1-C2 bony facets from concave-on-concavo-convex to convex-on-convex. This reduces the contact point, making the six degrees of freedom of motion possible, and also makes the joint gyroscopic. PMID:29403249

  20. Modeling IrisCode and its variants as convex polyhedral cones and its security implications.

    PubMed

    Kong, Adams Wai-Kin

    2013-03-01

    IrisCode, developed by Daugman, in 1993, is the most influential iris recognition algorithm. A thorough understanding of IrisCode is essential, because over 100 million persons have been enrolled by this algorithm and many biometric personal identification and template protection methods have been developed based on IrisCode. This paper indicates that a template produced by IrisCode or its variants is a convex polyhedral cone in a hyperspace. Its central ray, being a rough representation of the original biometric signal, can be computed by a simple algorithm, which can often be implemented in one Matlab command line. The central ray is an expected ray and also an optimal ray of an objective function on a group of distributions. This algorithm is derived from geometric properties of a convex polyhedral cone but does not rely on any prior knowledge (e.g., iris images). The experimental results show that biometric templates, including iris and palmprint templates, produced by different recognition methods can be matched through the central rays in their convex polyhedral cones and that templates protected by a method extended from IrisCode can be broken into. These experimental results indicate that, without a thorough security analysis, convex polyhedral cone templates cannot be assumed secure. Additionally, the simplicity of the algorithm implies that even junior hackers without knowledge of advanced image processing and biometric databases can still break into protected templates and reveal relationships among templates produced by different recognition methods.

  1. Redefining Myeloid Cell Subsets in Murine Spleen

    PubMed Central

    Hey, Ying-Ying; Tan, Jonathan K. H.; O’Neill, Helen C.

    2016-01-01

    Spleen is known to contain multiple dendritic and myeloid cell subsets, distinguishable on the basis of phenotype, function and anatomical location. As a result of recent intensive flow cytometric analyses, splenic dendritic cell (DC) subsets are now better characterized than other myeloid subsets. In order to identify and fully characterize a novel splenic subset termed “L-DC” in relation to other myeloid cells, it was necessary to investigate myeloid subsets in more detail. In terms of cell surface phenotype, L-DC were initially characterized as a CD11bhiCD11cloMHCII−Ly6C−Ly6G− subset in murine spleen. Their expression of CD43, lack of MHCII, and a low level of CD11c was shown to best differentiate L-DC by phenotype from conventional DC subsets. A complete analysis of all subsets in spleen led to the classification of CD11bhiCD11cloMHCII−Ly6CloLy6G− cells as monocytes expressing CX3CR1, CD43 and CD115. Siglec-F expression was used to identify a specific eosinophil population, distinguishable from both Ly6Clo and Ly6Chi monocytes, and other DC subsets. L-DC were characterized as a clear subset of CD11bhiCD11cloMHCII−Ly6C−Ly6G− cells, which are CD43+, Siglec-F− and CD115−. Changes in the prevalence of L-DC compared to other subsets in spleens of mutant mice confirmed the phenotypic distinction between L-DC, cDC and monocyte subsets. L-DC development in vivo was shown to occur independently of the BATF3 transcription factor that regulates cDC development, and also independently of the FLT3L and GM-CSF growth factors which drive cDC and monocyte development, so distinguishing L-DC from these commonly defined cell types. PMID:26793192

  2. Automatic segmentation for brain MR images via a convex optimized segmentation and bias field correction coupled model.

    PubMed

    Chen, Yunjie; Zhao, Bo; Zhang, Jianwei; Zheng, Yuhui

    2014-09-01

    Accurate segmentation of magnetic resonance (MR) images remains challenging mainly due to the intensity inhomogeneity, which is also commonly known as bias field. Recently, active contour models with geometric information constraints have been applied; however, most of them deal with the bias field by using a necessary pre-processing step before segmentation of MR data. This paper presents a novel automatic variational method, which can segment brain MR images while correcting the bias field when segmenting images with high intensity inhomogeneities. We first define a function for clustering the image pixels in a smaller neighborhood. The cluster centers in this objective function have a multiplicative factor that estimates the bias within the neighborhood. In order to reduce the effect of the noise, the local intensity variations are described by Gaussian distributions with different means and variances. Then, the objective functions are integrated over the entire domain. In order to obtain the global optimum and make the results independent of the initialization of the algorithm, we reconstructed the energy function to be convex and calculated it by using the Split Bregman theory. A salient advantage of our method is that its result is independent of initialization, which allows robust and fully automated application. Our method is able to estimate the bias of quite general profiles, even in 7T MR images. Moreover, our model can also distinguish regions with similar intensity distribution but different variances. The proposed method has been rigorously validated with images acquired on a variety of imaging modalities with promising results. Copyright © 2014 Elsevier Inc. All rights reserved.

  3. Effect of dental arch convexity and type of archwire on frictional forces.

    PubMed

    Fourie, Zacharias; Ozcan, Mutlu; Sandham, Andrew

    2009-07-01

    Friction measurements in orthodontics are often derived from models by using brackets placed on flat models with various straight wires. Dental arches are convex in some areas. The objectives of this study were to compare the frictional forces generated in conventional flat and convex dental arch setups, and to evaluate the effect of different archwires on friction in both dental arch models. Two stainless steel models were designed and manufactured simulating flat and convex maxillary right buccal dental arches. Five stainless steel brackets from the maxillary incisor to the second premolar (slot size, 0.022 in; Victory, 3M Unitek, Monrovia, Calif) and a first molar tube were aligned and clamped on the metal model at equal distances of 6 mm. Four kinds of orthodontic wires were tested: (1) A. J. Wilcock Australian wire (0.016 in, G&H Wire, Hannover, Germany); and (2) 0.016 x 0.022 in, (3) 0.018 x 0.022 in, and (4) 0.019 x 0.025 in (3M Unitek GmbH, Seefeld, Germany). Gray elastomeric modules (Power O 110, Ormco, Glendora, Calif) were used for ligation. Friction tests were performed in the wet state with artificial saliva lubrication and by pulling 5 mm of the whole length of the archwire. Six measurements were made for each bracket-wire combination, and each test was performed with new combinations of materials for both arch setups (n = 48, 6 per group) in a universal testing machine (crosshead speed: 20 mm/min). Significant effects of arch model (P = 0.0000) and wire type (P = 0.0000) were found. The interaction term between the tested factors was not significant (P = 0.1581) (2-way ANOVA and Tukey test). Convex models resulted in significantly higher frictional forces (1015-1653 g) than flat models (680-1270 g) (P < 0.05). In the flat model, significantly lower frictional forces were obtained with wire types 1 (679 g) and 3 (1010 g) than with types 2 (1146 g) and 4 (1270 g) (P < 0.05). In the convex model, the lowest friction was obtained with wire types 1 (1015 g) and 3 (1142 g) (P > 0.05). Type 1 wire tended to create the least overall friction in both flat and convex dental arch simulation models.

  4. Effector CD8 T cells dedifferentiate into long-lived memory cells.

    PubMed

    Youngblood, Ben; Hale, J Scott; Kissick, Haydn T; Ahn, Eunseon; Xu, Xiaojin; Wieland, Andreas; Araki, Koichi; West, Erin E; Ghoneim, Hazem E; Fan, Yiping; Dogra, Pranay; Davis, Carl W; Konieczny, Bogumila T; Antia, Rustom; Cheng, Xiaodong; Ahmed, Rafi

    2017-12-21

    Memory CD8 T cells that circulate in the blood and are present in lymphoid organs are an essential component of long-lived T cell immunity. These memory CD8 T cells remain poised to rapidly elaborate effector functions upon re-exposure to pathogens, but also have many properties in common with naive cells, including pluripotency and the ability to migrate to the lymph nodes and spleen. Thus, memory cells embody features of both naive and effector cells, fuelling a long-standing debate centred on whether memory T cells develop from effector cells or directly from naive cells. Here we show that long-lived memory CD8 T cells are derived from a subset of effector T cells through a process of dedifferentiation. To assess the developmental origin of memory CD8 T cells, we investigated changes in DNA methylation programming at naive and effector cell-associated genes in virus-specific CD8 T cells during acute lymphocytic choriomeningitis virus infection in mice. Methylation profiling of terminal effector versus memory-precursor CD8 T cell subsets showed that, rather than retaining a naive epigenetic state, the subset of cells that gives rise to memory cells acquired de novo DNA methylation programs at naive-associated genes and became demethylated at the loci of classically defined effector molecules. Conditional deletion of the de novo methyltransferase Dnmt3a at an early stage of effector differentiation resulted in reduced methylation and faster re-expression of naive-associated genes, thereby accelerating the development of memory cells. Longitudinal phenotypic and epigenetic characterization of the memory-precursor effector subset of virus-specific CD8 T cells transferred into antigen-free mice revealed that differentiation to memory cells was coupled to erasure of de novo methylation programs and re-expression of naive-associated genes. Thus, epigenetic repression of naive-associated genes in effector CD8 T cells can be reversed in cells that develop into long-lived memory CD8 T cells while key effector genes remain demethylated, demonstrating that memory T cells arise from a subset of fate-permissive effector T cells.

  5. Reflections From a Fresnel Lens

    ERIC Educational Resources Information Center

    Keeports, David

    2005-01-01

    Reflection of light by a convex Fresnel lens gives rise to two distinct images. A highly convex inverted real reflective image forms on the object side of the lens, while an upright virtual reflective image forms on the opposite side of the lens. I describe here a set of laser experiments performed upon a Fresnel lens. These experiments provide…

  6. Influence of crucible support and radial heating on the interface shape during vertical Bridgman GaAs growth

    NASA Astrophysics Data System (ADS)

    Koai, K.; Sonnenberg, K.; Wenzl, H.

    1994-03-01

    Crucible assembly in a vertical Bridgman furnace is investigated by a numerical finite element model with the aim of obtaining convex interfaces during the growth of GaAs crystals. During the growth stage of the conic section, a new funnel-shaped crucible support has been found more effective in promoting interface convexity than the concentric-cylinder design similar to that patented by AT&T. For the growth stages of the constant-diameter section, the furnace profile can be effectively modulated by localized radial heating at the gradient zone. With these two features introduced into a new furnace design, it is shown numerically that enhancement of interface convexity can be achieved using the presently available crucible materials.

  7. [Objective accommodation parameters depending on accommodation task].

    PubMed

    Tarutta, E P; Tarasova, N A; Dolzhenko, O O

    2011-01-01

    62 myopic patients were examined to study objective accommodation parameters under different conditions of accommodation stimulus presentation (use of convex lenses). The objective accommodation response (OAR) was studied using a binocular open-field autorefractometer under different stimulus conditions: complete myopia correction and the addition of convex lenses of increasing power from +1.0 to +3.0 D. In 88.5% of children and adolescents, the OAR for a 3.0 D stimulus was significantly reduced, by 1.5-2.75 D. Additional correction with convex lenses of increasing power leads to a further reduction of the accommodation response. As a result, the induced dynamic refraction in the eye-lens system is lower than the accommodation task demands. Only the addition of a +2.5 D lens brings it close to the required value of -3.0 D.

  8. Laser backscattering analytical model of Doppler power spectra about rotating convex quadric bodies of revolution

    NASA Astrophysics Data System (ADS)

    Gong, YanJun; Wu, ZhenSen; Wang, MingJun; Cao, YunHua

    2010-01-01

    We propose an analytical model of Doppler power spectra in backscatter from arbitrary rough convex quadric bodies of revolution (whose lateral surface is a quadric) rotating around their axes. In the global Cartesian coordinate system, the analytical model deduced is suitable for a general convex quadric body of revolution. Based on this analytical model, the Doppler power spectra of cones, cylinders, paraboloids of revolution, and sphere-cone combinations are derived. We analyze numerically the influence of the geometric parameters, aspect angle, wavelength, and reflectance of the objects' rough surfaces on the Doppler-broadened spectra. This analytical solution may contribute to laser Doppler velocimetry and to remote sensing of ballistic missiles that spin.
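    A crude Monte-Carlo stand-in for such a spectrum (not the paper's analytical model) can be computed in Python by sampling surface points of a spinning cone and histogramming the standard monostatic Doppler shifts f_D = 2 v·l / λ. The wavelength, spin rate, geometry, and the uniform-reflectance, no-shadowing simplifications are all assumptions.

        import numpy as np

        rng = np.random.default_rng(4)
        wavelength = 1.064e-6        # m, assumed laser wavelength
        omega = 2 * np.pi * 5.0      # rad/s, assumed spin rate about z
        half_angle = np.radians(15)  # cone half-angle (assumed)
        height = 1.0                 # m
        look = np.array([np.sin(np.radians(30)), 0.0, np.cos(np.radians(30))])

        # sample points on the cone's lateral surface
        z = rng.uniform(0, height, 100_000)
        phi = rng.uniform(0, 2 * np.pi, z.size)
        r = z * np.tan(half_angle)
        pts = np.column_stack([r * np.cos(phi), r * np.sin(phi), z])

        v = np.cross(np.array([0.0, 0.0, omega]), pts)   # rigid-body velocities
        f_D = 2.0 * (v @ look) / wavelength              # monostatic Doppler shifts
        hist, edges = np.histogram(f_D, bins=200)        # crude power spectrum
        print(edges[np.argmax(hist)])                    # dominant Doppler bin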

  9. Combat Orders: An Analysis of the Tactical Orders Process

    DTIC Science & Technology

    1990-06-01

    [Table-of-contents excerpt] How Each System Was Adapted to Doctrine; Time, the Critical Factor ... Order; Decision Sequencing; Adapting Tactical ... "overcentralization slows action and leads to inertia." Agility is the ability of friendly forces to act faster than the enemy. Initiative stresses the ability

  10. An overview of the NASA Langley Atmospheric Data Center: Online tools to effectively disseminate Earth science data products

    NASA Astrophysics Data System (ADS)

    Parker, L.; Dye, R. A.; Perez, J.; Rinsland, P.

    2012-12-01

    Over the past decade the Atmospheric Science Data Center (ASDC) at NASA Langley Research Center has archived and distributed a variety of satellite mission and aircraft campaign data sets. These datasets posed unique challenges to the user community at large due to the sheer volume and variety of the data and the lack of intuitive features in the order tools available to the investigator. Some of these data sets also lack sufficient metadata to provide rudimentary data discovery. To meet the needs of emerging users, the ASDC addressed issues in data discovery and delivery through the use of standards in data and access methods, and distribution through appropriate portals. The ASDC is currently undergoing a refresh of its webpages and Ordering Tools that will leverage updated collection level metadata in an effort to enhance the user experience. The ASDC is now providing search and subset capability to key mission satellite data sets. The ASDC has collaborated with Science Teams to accommodate prospective science users in the climate and modeling communities. The ASDC is using a common framework that enables more rapid development and deployment of search and subset tools that provide enhanced access features for the user community. Features of the Search and Subset web application enables a more sophisticated approach to selecting and ordering data subsets by parameter, date, time, and geographic area. The ASDC has also applied key practices from satellite missions to the multi-campaign aircraft missions executed for Earth Venture-1 and MEaSUReS

  11. How do I order MISR data?

    Atmospheric Science Data Center

    2017-10-12

    ... and archived at the NASA Langley Research Center Atmospheric Science Data Center (ASDC). A MISR Order and Customization Tool is ... Pool (an on-line, short-term data cache that provides a Web interface and FTP access). Specially subsetted and/or reformatted MISR data ...

  12. Photo-responsive surface topology in chiral nematic media

    NASA Astrophysics Data System (ADS)

    Liu, Danqing; Bastiaansen, Cees W. M.; Toonder, Jaap. M. J.; Broer, Dirk J.

    2012-03-01

    We report on the design and fabrication of 'smart surfaces' that exhibit dynamic changes in their surface topology in response to exposure to light. The principle is based on anisotropic geometric changes of a liquid crystal network upon a change of the molecular order parameter. The photomechanical property of the coating is induced by incorporating an azobenzene moiety into the liquid crystal network. The responsive surface topology consists of regions with two different types of molecular order: planar chiral-nematic areas and homeotropic areas. Under flood exposure with 365 nm light the surfaces deform from flat to relief-structured. The height of the relief structures is on the order of 1 μm, corresponding to a strain difference of around 20%. Furthermore, we demonstrate that surface reliefs can form either convex or concave structures upon exposure to UV light, corresponding to a decrease or increase of the molecular order parameter, respectively, related to the isomeric state of the azobenzene crosslinker. The reversible deformation to the initial flat state occurs rapidly after removing the light source.

  13. Additive manufacturing of transparent fused quartz

    NASA Astrophysics Data System (ADS)

    Luo, Junjie; Hostetler, John M.; Gilbert, Luke; Goldstein, Jonathan T.; Urbas, Augustine M.; Bristow, Douglas A.; Landers, Robert G.; Kinzel, Edward C.

    2018-04-01

    This paper investigates a filament-fed process for additive manufacturing (AM) of fused quartz. Glasses such as fused quartz have significant scientific and engineering applications, which include optics, communications, electronics, and hermetic seals. AM has several attractive benefits such as increased design freedom, faster prototyping, and lower processing costs for small production volumes. However, current research into glass AM has focused primarily on nonoptical applications. Fused quartz is studied here because of its desirability for use in high-quality optics due to its high transmissivity and thermal stability. Fused quartz filaments are fed into a CO2 laser-generated molten region, smoothly depositing material onto the workpiece. Spectroscopy and pyrometry are used to measure the thermal radiation incandescently emitted from the molten region. The effects of the laser power and scan speed are determined by measuring the morphology of single tracks. Thin walls are printed to study the effects of layer-to-layer height. This information is used to deposit solid pieces including a cylindrical-convex shape capable of focusing visible light. The transmittance and index homogeneity of the printed fused quartz are measured. These results show that the filament-fed process has the potential to print transmissive optics.

  14. Convex Relaxation of OPF in Multiphase Radial Networks with Wye and Delta Connections

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhao, Changhong; Dall-Anese, Emiliano; Low, Steven

    2017-08-01

    This panel presentation focuses on multiphase radial distribution networks with wye and delta connections, and proposes a semidefinite relaxation of the AC optimal power flow (OPF) problem. Two multiphase power flow models are developed to facilitate the integration of delta-connected loads or generation resources in the OPF problem. The first model is referred to as the extended branch flow model (EBFM). The second model leverages a linear relationship between phase-to-ground power injections and delta connections that holds under a balanced voltage approximation (BVA). Based on these models, pertinent OPF problems are formulated and relaxed to semidefinite programs (SDPs). Numerical studies on IEEE test feeders show that the proposed SDP relaxations can be solved efficiently by a generic optimization solver. Numerical evidence also indicates that solving the resultant SDP under BVA is faster than under EBFM. Moreover, both SDP solutions are numerically exact with respect to voltages and branch flows. It is further shown that the SDP solution under BVA has a small optimality gap, and the BVA model is accurate in the sense that it reproduces actual system voltages.
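    The mechanics of a semidefinite relaxation can be shown on a toy non-convex quadratically constrained problem (not the paper's multiphase OPF). The sketch below, using the cvxpy modeling package, replaces the rank-one matrix X = x xᵀ by the convex constraint X ⪰ 0; with A = I and b = 1 the relaxation is tight, and the optimal value equals the smallest eigenvalue of C.

        import numpy as np
        import cvxpy as cp

        # Toy non-convex QCQP: min x'Cx  s.t.  x'Ax = b  (rank-one constrained)
        n = 3
        rng = np.random.default_rng(0)
        C = rng.standard_normal((n, n)); C = C.T @ C   # symmetric PSD cost
        A = np.eye(n)
        b = 1.0

        # SDP relaxation: lift to X = x x' and drop the rank-one requirement
        X = cp.Variable((n, n), PSD=True)
        prob = cp.Problem(cp.Minimize(cp.trace(C @ X)),
                          [cp.trace(A @ X) == b])
        prob.solve()
        print("relaxation value:", prob.value)
        print("eigenvalues of X:", np.linalg.eigvalsh(X.value))  # ~rank one here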

  15. Sparse Adaptive Iteratively-Weighted Thresholding Algorithm (SAITA) for Lp-Regularization Using the Multiple Sub-Dictionary Representation

    PubMed Central

    Zhang, Jie; Fan, Shangang; Xiong, Jian; Cheng, Xiefeng; Sari, Hikmet; Adachi, Fumiyuki

    2017-01-01

    Both L1/2 and L2/3 are two typical non-convex regularizations of Lp (0 < p < 1).

  16. Detecting glaucomatous change in visual fields: Analysis with an optimization framework.

    PubMed

    Yousefi, Siamak; Goldbaum, Michael H; Varnousfaderani, Ehsan S; Belghith, Akram; Jung, Tzyy-Ping; Medeiros, Felipe A; Zangwill, Linda M; Weinreb, Robert N; Liebmann, Jeffrey M; Girkin, Christopher A; Bowd, Christopher

    2015-12-01

    Detecting glaucomatous progression is an important aspect of glaucoma management. The assessment of longitudinal series of visual fields, measured using Standard Automated Perimetry (SAP), is considered the reference standard for this effort. We seek efficient techniques for determining progression from longitudinal visual fields by formulating the problem as an optimization framework, learned from a population of glaucoma data. The longitudinal data from each patient's eye were used in a convex optimization framework to find a vector that is representative of the progression direction of the sample population, as a whole. Post-hoc analysis of longitudinal visual fields across the derived vector led to optimal progression (change) detection. The proposed method was compared to recently described progression detection methods and to linear regression of instrument-defined global indices, and showed slightly higher sensitivities at the highest specificities than other methods (a clinically desirable result). The proposed approach is simpler, faster, and more efficient for detecting glaucomatous changes, compared to our previously proposed machine learning-based methods, although it provides somewhat less information. This approach has potential application in glaucoma clinics for patient monitoring and in research centers for classification of study participants. Copyright © 2015 Elsevier Inc. All rights reserved.

  17. Corrosion characteristics of Ni-base superalloys in high temperature steam with and without hydrogen

    NASA Astrophysics Data System (ADS)

    Kim, Donghoon; Kim, Daejong; Lee, Ho Jung; Jang, Changheui; Yoon, Duk Joo

    2013-10-01

    The hot steam corrosion behavior of Alloy 617 and Haynes 230 was evaluated in corrosion tests performed at 900 °C in steam and steam + 20 vol.% H2 environments. The corrosion rate of Alloy 617 was faster than that of Haynes 230 at 900 °C in both environments. When hydrogen was added to the steam, the corrosion rate was accelerated because the added hydrogen increased the concentration of Cr interstitial defects in the oxide layer. Isolated nodular MnTiO3 oxides formed on the MnCr2O4/Cr2O3 oxide layer and sub-layer Cr2O3 formed in steam and steam + 20 vol.% H2 for Alloy 617. On the other hand, a MnCr2O4 layer formed on top of the Cr2O3 oxide layer for Haynes 230. The extensive sub-layer Cr2O3 formation resulted from oxygen or hydroxide inward diffusion in such environments. When hydrogen was added, the initial surface oxide morphology changed from a convex shape to platelets because of the accelerated diffusion of cations under the oxide layer.

  18. Sparse Adaptive Iteratively-Weighted Thresholding Algorithm (SAITA) for Lp-Regularization Using the Multiple Sub-Dictionary Representation.

    PubMed

    Li, Yunyi; Zhang, Jie; Fan, Shangang; Yang, Jie; Xiong, Jian; Cheng, Xiefeng; Sari, Hikmet; Adachi, Fumiyuki; Gui, Guan

    2017-12-15

    Both L 1/2 and L 2/3 are two typical non-convex regularizations of L p (0 < p < 1).

  19. Comments on "optimal erasure protection for scalably compressed video streams with limited retransmission".

    PubMed

    Dumitrescu, Sorina

    2009-01-01

    In order to prove a key result for their development (Lemma 2), Taubman and Thie need the assumption that the upper boundary of the convex hull of the channel coding probability-redundancy characteristic is sufficiently dense. Since a floor value of the density for which the claim holds is not specified, it is not clear whether their lemma applies to practical situations. In this correspondence, we show that the constraint of sufficient density can be removed, and, thus, we validate the conclusion of the lemma for any scenario encountered in practice.

  20. Offner stretcher aberrations revisited to compensate material dispersion

    NASA Astrophysics Data System (ADS)

    Vyhlídka, Štěpán; Kramer, Daniel; Meadows, Alexander; Rus, Bedřich

    2018-05-01

    We present simple analytical formulae for the calculation of the spectral phase and residual angular dispersion of an ultrashort pulse propagating through the Offner stretcher. Based on these formulae, we show that the radii of curvature of both convex and concave mirrors in the Offner triplet can be adapted to tune the fourth order dispersion term of the spectral phase of the pulse. As an example, a single-grating Offner stretcher design suitable for the suppression of material dispersion in the Ti:Sa PALS laser system is proposed. The results obtained by numerical raytracing well match those calculated from the analytical formulae.

  1. Convergence and Applications of a Gossip-Based Gauss-Newton Algorithm

    NASA Astrophysics Data System (ADS)

    Li, Xiao; Scaglione, Anna

    2013-11-01

    The Gauss-Newton algorithm is a popular and efficient centralized method for solving non-linear least squares problems. In this paper, we propose a multi-agent distributed version of this algorithm, named Gossip-based Gauss-Newton (GGN) algorithm, which can be applied in general problems with non-convex objectives. Furthermore, we analyze and present sufficient conditions for its convergence and show numerically that the GGN algorithm achieves performance comparable to the centralized algorithm, with graceful degradation in case of network failures. More importantly, the GGN algorithm provides significant performance gains compared to other distributed first order methods.
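    For reference, the centralized Gauss-Newton step that GGN distributes looks as follows in Python; in the gossip version, the global sums JᵀJ and Jᵀr would be obtained by iterative neighbor-to-neighbor averaging rather than formed centrally. The exponential-fit example is purely illustrative.

        import numpy as np

        def gauss_newton(r, J, x0, iters=20):
            # Minimize 0.5*||r(x)||^2 via x <- x - (J'J)^{-1} J'r.
            x = np.asarray(x0, dtype=float)
            for _ in range(iters):
                Jx, rx = J(x), r(x)
                x = x - np.linalg.solve(Jx.T @ Jx, Jx.T @ rx)
            return x

        # Toy nonlinear least squares: fit y = exp(a*t), unknown a
        t = np.linspace(0, 1, 20)
        y = np.exp(0.8 * t)
        r = lambda x: np.exp(x[0] * t) - y
        J = lambda x: (t * np.exp(x[0] * t)).reshape(-1, 1)
        print(gauss_newton(r, J, [0.1]))   # should approach a = 0.8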

  2. GPC: General Polygon Clipper library

    NASA Astrophysics Data System (ADS)

    Murta, Alan

    2015-12-01

    The University of Manchester GPC library is a flexible and highly robust polygon set operations library for use with C, C#, Delphi, Java, Perl, Python, Haskell, Lua, VB.Net and other applications. It supports difference, intersection, exclusive-or and union clip operations, and polygons may be comprised of multiple disjoint contours. Contour vertices may be given in any order - clockwise or anticlockwise, and contours may be convex, concave or self-intersecting, and may be nested (i.e. polygons may have holes). Output may take the form of either polygon contours or tristrips, and hole and external contours are differentiated in the result.
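    GPC itself is a C library and its API is not reproduced here; as a Python stand-in that illustrates the same four set operations on polygons, the sketch below uses the shapely package (a different library) on two overlapping squares.

        from shapely.geometry import Polygon

        subject = Polygon([(0, 0), (4, 0), (4, 4), (0, 4)])
        clip = Polygon([(2, 2), (6, 2), (6, 6), (2, 6)])

        print(subject.intersection(clip).area)          # intersection  -> 4.0
        print(subject.union(clip).area)                 # union         -> 28.0
        print(subject.difference(clip).area)            # difference    -> 12.0
        print(subject.symmetric_difference(clip).area)  # exclusive-or  -> 24.0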

  3. A Polynomial-Based Nonlinear Least Squares Optimized Preconditioner for Continuous and Discontinuous Element-Based Discretizations of the Euler Equations

    DTIC Science & Technology

    2014-01-01

    system (here using left-preconditioning) $(K\tilde{A})x = K\tilde{b}$ (3.1), where $K$ is a low-order polynomial in $\tilde{A}$ given by $K = s(\tilde{A}) = \sum_{i=0}^{m} k_i \tilde{A}^i$ (3.2), and has a ... system with a complex spectrum, region E in the complex plane must be some convex form (e.g., an ellipse or polygon) that approximately encloses the ... preconditioners with p = 2 and p = 20 on the spectrum of the preconditioned system matrices $K\tilde{A}$ and $K\tilde{H}$ for both CG Schur-complement form and DG form cases

  4. First derivatives of flow quantities behind two-dimensional, nonuniform supersonic flow over a convex corner. Ph.D. Thesis - George Washington Univ.

    NASA Technical Reports Server (NTRS)

    Darden, C. M.

    1985-01-01

    A method of determining spatial derivatives of flow quantities behind an expansion fan as a function of the curvature of the streamline behind the fan is developed. Taylor series expansions of flow quantities within the fan are used and boundary conditions satisfied to the first and second order so that the curvature of the characteristics in the fan may be determined. A system of linear equations for the spatial derivatives is then developed. An application of the method to shock coalescence including asymmetric effects is described.
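    For context, the standard gas-dynamics relation governing such an expansion fan is the Prandtl-Meyer function; the Python sketch below (background material, not the author's derivative method) turns a Mach 2 stream through a 10 degree convex corner and solves for the downstream Mach number.

        import numpy as np
        from scipy.optimize import brentq

        def prandtl_meyer(M, gamma=1.4):
            # Prandtl-Meyer function nu(M), in radians, for M >= 1.
            g = (gamma + 1.0) / (gamma - 1.0)
            return (np.sqrt(g) * np.arctan(np.sqrt((M**2 - 1.0) / g))
                    - np.arctan(np.sqrt(M**2 - 1.0)))

        # Expansion through a 10 degree convex corner at Mach 2:
        nu2 = prandtl_meyer(2.0) + np.radians(10.0)
        M2 = brentq(lambda M: prandtl_meyer(M) - nu2, 1.001, 10.0)
        print(M2)   # downstream Mach number, about 2.38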

  5. Variational Quantum Tomography with Incomplete Information by Means of Semidefinite Programs

    NASA Astrophysics Data System (ADS)

    Maciel, Thiago O.; Cesário, André T.; Vianna, Reinaldo O.

    We introduce a new method to reconstruct unknown quantum states out of incomplete and noisy information. The method is a linear convex optimization problem, therefore with a unique minimum, which can be efficiently solved with semidefinite programs. Numerical simulations indicate that the estimated state does not overestimate purity, nor the expectation value of optimal entanglement witnesses. The convergence properties of the method are similar to compressed sensing approaches, in the sense that, in order to reconstruct low-rank states, it needs just a fraction of the effort corresponding to an informationally complete measurement.
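    In the same spirit (though not the authors' exact variational program), a least-squares state reconstruction can be written as a small SDP with the cvxpy package: fit a density matrix to a few noisy expectation values subject to positivity and unit trace. The measurement operators and data below are made up.

        import numpy as np
        import cvxpy as cp

        d = 2
        # hypothetical Pauli measurements with assumed (noisy) expectation values
        X = np.array([[0, 1], [1, 0]], dtype=complex)
        Z = np.array([[1, 0], [0, -1]], dtype=complex)
        ops, meas = [X, Z], [0.6, 0.7]

        rho = cp.Variable((d, d), hermitian=True)
        residuals = [cp.real(cp.trace(op @ rho)) - m for op, m in zip(ops, meas)]
        prob = cp.Problem(cp.Minimize(cp.sum_squares(cp.hstack(residuals))),
                          [rho >> 0,                       # positive semidefinite
                           cp.real(cp.trace(rho)) == 1])   # unit trace
        prob.solve()
        print(np.round(rho.value, 3))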

  6. Effects of ray profile modeling on resolution recovery in clinical CT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hofmann, Christian; Knaup, Michael; Kachelrieß, Marc, E-mail: marc.kachelriess@dkfz-heidelberg.de

    2014-02-15

    Purpose: Iterative image reconstruction gains more and more interest in clinical routine, as it promises to reduce image noise (and thereby patient dose), to reduce artifacts, or to improve spatial resolution. However, among vendors and researchers, there is no consensus on how best to achieve these goals. The authors focus on the aspect of geometric ray profile modeling, which is realized by some algorithms, while others model the ray as a straight line. The authors incorporate ray modeling (RM) in nonregularized iterative reconstruction. That means, instead of using one simple single needle beam to represent the x-ray, the authors evaluate the double integral of attenuation path length over the finite source distribution and the finite detector element size in the numerical forward projection. Our investigations aim at analyzing the resolution recovery (RR) effects of RM. Resolution recovery means that frequencies can be recovered beyond the resolution limit of the imaging system. In order to evaluate whether clinical CT images can benefit from modeling the geometrical properties of each x-ray, the authors performed a 2D simulation study of a clinical CT fan-beam geometry that includes the precise modeling of these geometrical properties. Methods: All simulations and reconstructions are performed in native fan-beam geometry. A water phantom with resolution bar patterns and a Forbild thorax phantom with circular resolution patterns representing calcifications in the heart region are simulated. An FBP reconstruction with a Ram-Lak kernel is used as a reference reconstruction. The FBP is compared to iterative reconstruction techniques with and without RM: an ordered subsets convex (OSC) algorithm without any RM (OSC), an OSC where the forward projection is modeled concerning the finite focal spot and detector size (OSC-RM), and an OSC with RM and with a matched forward and backprojection pair (OSC-T-RM, T for transpose). In all cases, noise was matched to be able to focus on comparing spatial resolution. The authors use two different simulation settings. Both are based on the geometry of a typical clinical CT system (0.7 mm detector element size at isocenter, 1024 projections per rotation). Setting one has an exaggerated source width of 5.0 mm. Setting two has a realistically small source width of 0.5 mm. The authors also investigate the transition from setting one to two. To quantify image quality, the authors analyze line profiles through the resolution patterns to define a contrast factor (CF) for contrast-resolution plots, and the authors compare the normalized cross-correlation (NCC) with respect to the ground truth of the circular resolution patterns. To independently analyze whether RM is of advantage, the authors implemented several iterative reconstruction algorithms: the statistical iterative reconstruction algorithm OSC, the ordered subsets simultaneous algebraic reconstruction technique (OSSART), and another statistical iterative reconstruction algorithm, denoted as the ordered subsets maximum likelihood (OSML) algorithm. All algorithms were implemented both without RM (denoted as OSC, OSSART, and OSML) and with RM (denoted as OSC-RM, OSSART-RM, and OSML-RM). Results: For the unrealistic case of a 5.0 mm focal spot, the CF can be improved by a factor of two due to RM: the 4.2 LP/cm bar pattern, which is the first bar pattern that cannot be resolved without RM, can be easily resolved with RM. For the realistic case of a 0.5 mm focus, all results show approximately the same CF. The NCC shows no significant dependency on RM when the source width is smaller than 2.0 mm (as in clinical CT). From 2.0 mm to 5.0 mm focal spot size, increasing improvements can be observed with RM. Conclusions: Geometric RM in iterative reconstruction helps improve spatial resolution if the ray cross-section is significantly larger than the ray sampling distance. In clinical CT, however, the ray is not much thicker than the distance between neighboring ray centers, as the focal spot size is small and detector crosstalk is negligible due to reflective coatings between detector elements. Therefore, RM appears not to be necessary in clinical CT to achieve resolution recovery.
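
    For readers unfamiliar with the ordered-subsets idea behind OSC and OSSART, the following dense-matrix sketch of an OS-SART-style update cycles through projection subsets; it is a generic illustration with an assumed system matrix A, not the authors' implementation, and ray profile modeling would enter through how the rows of A are computed.

      import numpy as np

      def ossart(A, b, n_subsets=8, iters=50, lam=0.5):
          # Ordered-subsets SART: sweep over row subsets of A x = b,
          # normalizing by forward- and back-projection weight sums.
          m, n = A.shape
          x = np.zeros(n)
          subsets = [np.arange(s, m, n_subsets) for s in range(n_subsets)]
          for _ in range(iters):
              for idx in subsets:
                  As, bs = A[idx], b[idx]
                  row_sums = np.maximum(As.sum(axis=1), 1e-12)
                  col_sums = np.maximum(As.sum(axis=0), 1e-12)
                  x += lam * (As.T @ ((bs - As @ x) / row_sums)) / col_sums
                  x = np.maximum(x, 0.0)  # attenuation is nonnegative
          return x

      # Toy consistent system (assumed data): the residual shrinks with iterations.
      rng = np.random.default_rng(0)
      A = rng.random((64, 16)); x_true = rng.random(16)
      print(np.linalg.norm(A @ ossart(A, A @ x_true) - A @ x_true))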

  7. Powered Descent Guidance with General Thrust-Pointing Constraints

    NASA Technical Reports Server (NTRS)

    Carson, John M., III; Acikmese, Behcet; Blackmore, Lars

    2013-01-01

    The Powered Descent Guidance (PDG) algorithm and software for generating Mars pinpoint or precision landing guidance profiles has been enhanced to incorporate thrust-pointing constraints. Pointing constraints would typically be needed for onboard sensor and navigation systems that have specific field-of-view requirements to generate valid ground proximity and terrain-relative state measurements. The original PDG algorithm was designed to enforce both control and state constraints, including maximum and minimum thrust bounds, avoidance of the ground or descent within a glide slope cone, and maximum speed limits. The thrust-bound and thrust-pointing constraints within PDG are non-convex, which in general requires nonlinear optimization methods to generate solutions. The short duration of Mars powered descent requires guaranteed PDG convergence to a solution within a finite time; however, nonlinear optimization methods have no guarantees of convergence to the global optimal or convergence within finite computation time. A lossless convexification developed for the original PDG algorithm relaxed the non-convex thrust bound constraints. This relaxation was theoretically proven to provide valid and optimal solutions for the original, non-convex problem within a convex framework. As with the thrust bound constraint, a relaxation of the thrust-pointing constraint also provides a lossless convexification that ensures the enhanced relaxed PDG algorithm remains convex and retains validity for the original nonconvex problem. The enhanced PDG algorithm provides guidance profiles for pinpoint and precision landing that minimize fuel usage, minimize landing error to the target, and ensure satisfaction of all position and control constraints, including thrust bounds and now thrust-pointing constraints.
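
    A minimal sketch of the kind of relaxation described, in cvxpy with assumed toy numbers: the nonconvex lower thrust bound ||T_k|| >= rho1 is transferred to a slack variable Gamma_k via ||T_k|| <= Gamma_k and rho1 <= Gamma_k <= rho2, and the pointing constraint is written against Gamma_k, so the whole problem stays a second-order cone program.

      import numpy as np
      import cvxpy as cp

      N, dt = 40, 1.0
      g = np.array([0.0, 0.0, -3.71])              # Mars gravity (illustrative)
      rho1, rho2, theta = 4.0, 12.0, np.pi / 4     # assumed bounds, 45 deg cone
      r, v = cp.Variable((3, N + 1)), cp.Variable((3, N + 1))
      T, Gam = cp.Variable((3, N)), cp.Variable(N)

      cons = [r[:, 0] == np.array([40.0, 0.0, 250.0]), v[:, 0] == 0,
              r[:, N] == 0, v[:, N] == 0]
      for k in range(N):
          cons += [r[:, k + 1] == r[:, k] + dt * v[:, k],
                   v[:, k + 1] == v[:, k] + dt * (T[:, k] + g),  # unit mass
                   cp.norm(T[:, k]) <= Gam[k],                   # convex
                   Gam[k] >= rho1, Gam[k] <= rho2,               # relaxed bound
                   T[2, k] >= np.cos(theta) * Gam[k]]            # pointing cone
      prob = cp.Problem(cp.Minimize(dt * cp.sum(Gam)), cons)
      prob.solve()
      print(prob.status, prob.value)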

  8. Efficacy and safety of flavocoxid compared with naproxen in subjects with osteoarthritis of the knee- a subset analysis.

    PubMed

    Levy, Robert; Khokhlov, Alexander; Kopenkin, Sergey; Bart, Boris; Ermolova, Tatiana; Kantemirova, Raiasa; Mazurov, Vadim; Bell, Marjorie; Caldron, Paul; Pillai, Lakshmi; Burnett, Bruce

    2010-12-01

    Twice-daily flavocoxid, a cyclooxygenase and 5-lipoxygenase inhibitor with potent antioxidant activity of botanical origin, was evaluated for 12 weeks in a randomized, double-blind, active-comparator study against naproxen in 220 subjects with moderate-severe osteoarthritis (OA) of the knee. As previously reported, both groups noted a significant reduction in the signs and symptoms of OA, with no detectable differences in efficacy between the groups when the entire intent-to-treat population was considered. This post-hoc analysis compares the efficacy of flavocoxid to naproxen in different subsets of patients, specifically those related to age, gender, and disease severity as reported at baseline for individual response parameters. In the original randomized, double-blind study, 220 subjects were assigned to receive either flavocoxid (500 mg twice daily) or naproxen (500 mg twice daily) for 12 weeks. In this subgroup analysis, primary outcome measures, including the Western Ontario and McMaster Universities OA index and subscales and timed walk, and secondary efficacy variables, including investigator global assessment for disease and global response to treatment, subject visual analog scale for discomfort, overall disease activity, global response to treatment, and index joint tenderness and mobility, were evaluated for differing trends between the study groups. Subset analyses revealed some statistically significant differences and some notable trends in favor of the flavocoxid group. These trends became stronger the longer the subjects continued on therapy. These observations were specifically noted in older subjects (>60 years), males, and subjects with milder disease, particularly those with lower subject global assessment of disease activity, lower investigator global assessment for disease, and faster walking times at baseline. Initial analysis of the entire intent-to-treat population revealed that flavocoxid was as effective as naproxen in managing the signs and symptoms of OA of the knee. Detailed analyses of subject subsets demonstrated distinct trends in favor of flavocoxid for specific groups of subjects.

  9. Integer Partitions and Convexity

    NASA Astrophysics Data System (ADS)

    Bouroubi, Sadek

    2007-06-01

    Let n be an integer >=1, and let p(n,k) and P(n,k) count the number of partitions of n into k parts, and the number of partitions of n into parts less than or equal to k, respectively. In this paper, we show that these functions are convex. The result includes the actual value of the constant of Bateman and Erdos.

  10. Testing and inspecting lens by holographic means

    DOEpatents

    Hildebrand, Bernard P.

    1976-01-01

    Processes for the accurate, rapid and inexpensive testing and inspecting of concave and convex lens surfaces through holographic means requiring no beamsplitters, mirrors or overpower optics, and wherein a hologram formed in accordance with one aspect of the invention contains the entire interferometer and serves as both a master and illuminating source for both the concave and convex surfaces to be so tested.

  11. Some Tours Are More Equal than Others: The Convex-Hull Model Revisited with Lessons for Testing Models of the Traveling Salesperson Problem

    ERIC Educational Resources Information Center

    Tak, Susanne; Plaisier, Marco; van Rooij, Iris

    2008-01-01

    To explain human performance on the "Traveling Salesperson" problem (TSP), MacGregor, Ormerod, and Chronicle (2000) proposed that humans construct solutions according to the steps described by their convex-hull algorithm. Focusing on tour length as the dependent variable, and using only random or semirandom point sets, the authors…

  12. Beam aperture modifier design with acoustic metasurfaces

    NASA Astrophysics Data System (ADS)

    Tang, Weipeng; Ren, Chunyu

    2017-10-01

    In this paper, we present a design concept for an acoustic beam aperture modifier using two metasurface-based planar lenses. By appropriately designing the phase gradient profile along the metasurface, we obtain a class of acoustic convex lenses and concave lenses, which can focus incoming plane waves and collimate converging waves, respectively. On the basis of the high converging and diverging capability of these lenses, two kinds of lens combination scheme, the convex-concave type and the convex-convex type, are proposed to tune the incoming beam aperture as needed. To be specific, the aperture of the acoustic beam can be shrunk or expanded by adjusting the phase gradients of the pair of lenses and the spacing between them. These lenses and the corresponding aperture modifiers are constructed by stacking ultrathin labyrinthine structures, which are obtained by a geometry optimization procedure and exhibit a high transmission coefficient and a full range of phase shift. The simulation results demonstrate the effectiveness of the proposed beam aperture modifiers. Due to their flexibility in aperture control and simplicity in fabrication, the proposed modifiers have promising potential in applications such as acoustic imaging, nondestructive evaluation, and communication.

  13. Improving the growth of CZT crystals for radiation detectors: a modeling perspective

    NASA Astrophysics Data System (ADS)

    Derby, Jeffrey J.; Zhang, Nan; Yeckel, Andrew

    2012-10-01

    The availability of large, single crystals of cadmium zinc telluride (CZT) with uniform properties is key to improving the performance of gamma radiation detectors fabricated from them. Towards this goal, we discuss results obtained by computational models that provide a deeper understanding of crystal growth processes and how the growth of CZT can be improved. In particular, we discuss methods that may be implemented to lessen the deleterious interactions between the ampoule wall and the growing crystal via engineering a convex solidification interface. For vertical Bridgman growth, a novel, bell-curve furnace temperature profile is predicted to achieve macroscopically convex solid-liquid interface shapes during melt growth of CZT in a multiple-zone furnace. This approach represents a significant advance over traditional gradient-freeze profiles, which always yield concave interface shapes, and static heat transfer designs, such as pedestal design, that achieve convex interfaces over only a small portion of the growth run. Importantly, this strategy may be applied to any Bridgman configuration that utilizes multiple, controllable heating zones. Realizing a convex solidification interface via this adaptive bell-curve furnace profile is postulated to result in better crystallinity and higher yields than conventional CZT growth techniques.

  14. Scalable splitting algorithms for big-data interferometric imaging in the SKA era

    NASA Astrophysics Data System (ADS)

    Onose, Alexandru; Carrillo, Rafael E.; Repetti, Audrey; McEwen, Jason D.; Thiran, Jean-Philippe; Pesquet, Jean-Christophe; Wiaux, Yves

    2016-11-01

    In the context of next-generation radio telescopes, like the Square Kilometre Array (SKA), the efficient processing of large-scale data sets is extremely important. Convex optimization tasks under the compressive sensing framework have recently emerged and provide both enhanced image reconstruction quality and scalability to increasingly larger data sets. We focus herein mainly on scalability and propose two new convex optimization algorithmic structures able to solve the convex optimization tasks arising in radio-interferometric imaging. They rely on proximal splitting and forward-backward iterations and can be seen, by analogy with the CLEAN major-minor cycle, as running sophisticated CLEAN-like iterations in parallel in multiple data, prior, and image spaces. Both methods support any convex regularization function, in particular, the well-studied ℓ1 priors promoting image sparsity in an adequate domain. Tailored for big data, they employ parallel and distributed computations to achieve scalability, in terms of memory and computational requirements. One of them also exploits randomization, over data blocks at each iteration, offering further flexibility. We present simulation results showing the feasibility of the proposed methods as well as their advantages compared to state-of-the-art algorithmic solvers. Our MATLAB code is available online on GitHub.
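
    The forward-backward building block these solvers are based on is compact on its own; below is a minimal ISTA-style sketch for an ℓ1-regularized least squares problem in plain NumPy, with assumed A and y. The paper's algorithms add the proximal-splitting, parallel, and randomized block structure on top of this kind of iteration.

      import numpy as np

      def forward_backward(A, y, lam, iters=300):
          # ISTA: gradient (forward) step on the data fidelity term,
          # proximal (backward) step on the l1 prior promoting sparsity.
          x = np.zeros(A.shape[1])
          step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1 / Lipschitz constant
          for _ in range(iters):
              z = x - step * (A.T @ (A @ x - y))   # forward gradient step
              x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)
          return x

      # Toy sparse recovery problem (assumed data).
      rng = np.random.default_rng(0)
      A = rng.normal(size=(30, 100))
      x0 = np.zeros(100); x0[[5, 50]] = [2.0, -1.5]
      x_rec = forward_backward(A, A @ x0, lam=0.1)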

  15. Multiple cues add up in defining a figure on a ground.

    PubMed

    Devinck, Frédéric; Spillmann, Lothar

    2013-01-25

    We studied the contribution of multiple cues to figure-ground segregation. Convexity, symmetry, and top-down polarity (henceforth called wide base) were used as cues. Single-cue displays as well as ambiguous stimulus patterns containing two or three cues were presented. Error rate (defined by responses to uncued stimuli) and reaction time were used to quantify the figural strength of a given cue. In the first experiment, observers were asked to report which of two regions, left or right, appeared as the foreground figure. Error rate did not benefit from adding additional cues if convexity was present, suggesting that responses were based on convexity as the predominant figural determinant. However, reaction time became shorter with additional cues even if convexity was present. For example, when symmetry and wide base were added, figure-ground segregation was facilitated. In a second experiment, stimulus patterns were exposed for 150 ms to rule out eye movements. Results were similar to those found in the first experiment. Both experiments suggest that, under the conditions of our experiment, figure-ground segregation is perceived more readily when several cues cooperate in defining the figure. Copyright © 2012 Elsevier Ltd. All rights reserved.

  16. Experimental Study of Reciprocating Friction between Rape Stalk and Bionic Nonsmooth Surface Units

    PubMed Central

    Ma, Zheng; Li, Yaoming; Xu, Lizhang

    2015-01-01

    Background. China is the largest producer of rape oilseed in the world; however, the mechanization level of rape harvest is relatively low, because rape materials easily adhere to the cleaning screens of combine harvesters, resulting in significant cleaning losses. Previous studies have shown that bionic nonsmooth surface cleaning screens restrain the adhesion of rape materials, but the underlying mechanisms remain unclear. Objective. The reciprocating friction between rape stalk and bionic nonsmooth metal surfaces was examined. Methods. The short-time Fourier transform method was used to discriminate the stable phase of friction signals, and the stick-lag distance was defined to analyze the stable reciprocating friction in a phase diagram. Results. The reciprocating friction between rape stalk and metal surface is a typical stick-slip friction, and the bionic nonsmooth metal surfaces with concave or convex units reduced friction force with increasing reciprocating frequency. The results also showed that the stick-lag distance of the convex surface increased with reciprocating frequency, which indicated that the convex surface reduces friction force more efficiently. Conclusions. We suggest that bionic nonsmooth surface cleaning screens, especially those with convex units, restrain the adhesion of rape materials more efficiently than smooth surface cleaning screens. PMID:27034611

  17. Ultrasound-guided Subclavian Vein Cannulation Using a Micro-Convex Ultrasound Probe

    PubMed Central

    Fair, James; Hirshberg, Eliotte L.; Grissom, Colin K.; Brown, Samuel M.

    2014-01-01

    Background: The subclavian vein is the preferred site for central venous catheter placement due to infection risk and patient comfort. Ultrasound guidance is useful in cannulation of other veins, but for the subclavian vein, current ultrasound-guided techniques using high-frequency linear array probes are generally limited to axillary vein cannulation. Methods: We report a series of patients who underwent clinically indicated subclavian venous catheter placement using a micro-convex pediatric probe for real-time guidance in the vein’s longitudinal axis. We identified rates of successful placement and complications by chart review. Results: Twenty-four catheters were placed using the micro-convex pediatric probe with confirmation of placement of the needle medial to the lateral border of the first rib. Sixteen of the catheters were placed by trainee physicians. In 23 patients, the catheter was placed without complication (hematoma, pneumothorax, infection). In one patient, the vein could not be safely cannulated without risk of arterial puncture, so an alternative site was selected. Conclusions: Infraclavicular subclavian vein cannulation using real-time ultrasound with a micro-convex pediatric probe appears to be a safe and effective method of placing subclavian vascular catheters. This technique merits further study to confirm safety and efficacy. PMID:24611628

  18. Systematization, distribution and territory of the middle cerebral artery on the brain surface in chinchilla (Chinchilla lanigera).

    PubMed

    De Araujo, A C P; Campos, R

    2009-02-01

    The aim of the present study was to analyse thirty chinchilla (Chinchilla lanigera) brains, injected with latex, and to systematize and describe the distribution and the vascularization territories of the middle cerebral artery. This long vessel, after it has originated from the terminal branch of the basilar artery, formed the following collateral branches: rostral, caudal and striated (perforating) central branches. After crossing the lateral rhinal sulcus, the middle cerebral artery emitted a sequence of rostral and caudal convex hemispheric cortical collateral branches on the convex surface of the cerebral hemisphere to the frontal, parietal, temporal and occipital lobes. Among the rostral convex hemispheric branches, a trunk was observed, which reached the frontal and parietal lobes and, in a few cases, the occipital lobe. The vascular territory of the chinchilla's middle cerebral artery included, in the cerebral hemisphere basis, the lateral cerebral fossa, the caudal third of the olfactory trigone, the rostral two-thirds of the piriform lobe, the lateral olfactory tract, and most of the convex surface of the cerebral hemisphere, except for a strip between the cerebral longitudinal fissure and the vallecula, which extended from the rostral to the caudal poles bordering the cerebral transverse fissure.

  19. Minimal norm constrained interpolation. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Irvine, L. D.

    1985-01-01

    In computational fluid dynamics and in CAD/CAM, a physical boundary is usually known only discretely and most often must be approximated. An acceptable approximation preserves the salient features of the data such as convexity and concavity. In this dissertation, a smooth interpolant which is locally concave where the data are concave and is locally convex where the data are convex is described. The interpolant is found by posing and solving a minimization problem whose solution is a piecewise cubic polynomial. The problem is solved indirectly by using the Peano kernel theorem to recast it into an equivalent minimization problem having the second derivative of the interpolant as the solution. This approach leads to the solution of a nonlinear system of equations. It is shown that Newton's method is an exceptionally attractive and efficient method for solving the nonlinear system of equations. Examples of shape-preserving interpolants, as well as convergence results obtained by using Newton's method, are also shown. A FORTRAN program to compute these interpolants is listed. The problem of computing the interpolant of minimal norm from a convex cone in a normed dual space is also discussed. An extension of de Boor's work on minimal norm unconstrained interpolation is presented.

  20. Dynamical tunneling versus fast diffusion for a non-convex Hamiltonian

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pittman, S. M.; Tannenbaum, E.; Heller, E. J.

    This paper attempts to resolve the issue of the nature of the 0.01-0.1 cm⁻¹ peak splittings observed in high-resolution IR spectra of polyatomic molecules. One hypothesis is that these splittings are caused by dynamical tunneling, a quantum-mechanical phenomenon whereby energy flows between two disconnected regions of phase space across dynamical barriers. However, a competing classical mechanism for energy flow is Arnol’d diffusion, which connects different regions of phase space by a resonance network known as the Arnol’d web. The speed of diffusion is bounded by the Nekhoroshev theorem, which guarantees stability on exponentially long time scales if the Hamiltonian is steep. Here we consider a non-convex Hamiltonian that contains the characteristics of a molecular Hamiltonian, but does not satisfy the Nekhoroshev theorem. The diffusion along the Arnol’d web is expected to be fast for a non-convex Hamiltonian. While fast diffusion is an unlikely competitor for long-time energy flow in molecules, we show how dynamical tunneling dominates compared to fast diffusion in the nearly integrable regime for a non-convex Hamiltonian, and we also present a new kind of dynamical tunneling.

  1. Temperature actuated shutdown assembly for a nuclear reactor

    DOEpatents

    Sowa, Edmund S.

    1976-01-01

    Three identical bimetallic disks, each shaped as a spherical cap with its convex side composed of a layer of metal such as molybdenum and its concave side composed of a metal of a relatively higher coefficient of thermal expansion such as stainless steel, are retained within flanges attached to three sides of an inner hexagonal tube containing a neutron absorber to be inserted into a nuclear reactor core. Each disk holds a metal ball against its normally convex side so that the ball projects partially through a hole in the tube located concentrically with the center of each disk; at a predetermined temperature an imbalance of thermally induced stresses in at least one of the disks will cause its convex side to become concave and its concave side to become convex, thus pulling the ball from the hole in which it is located. The absorber has a conical bottom supported by the three balls and is small enough in relation to the internal dimensions of the tube to allow it to slip toward the removed ball or balls, thus clearing the unremoved balls or ball so that it will fall into the reactor core.

  2. Novel method of finding extreme edges in a convex set of N-dimension vectors

    NASA Astrophysics Data System (ADS)

    Hu, Chia-Lun J.

    2001-11-01

    As we published in the last few years, for a binary neural network pattern recognition system to learn a given mapping {U_m → V_m, m = 1 to M}, where U_m is an N-dimension analog (pattern) vector and V_m is a P-bit binary (classification) vector, the if-and-only-if (IFF) condition that this network can learn this mapping is that each i-set in {Y_mi, m = 1 to M} (where Y_mi ≡ V_mi U_m and V_mi = +1 or -1 is the i-th bit of V_m; i = 1 to P, so there are P such sets) is POSITIVELY LINEARLY INDEPENDENT, or PLI. We have shown that this PLI condition is MORE GENERAL than the convexity condition applied to a set of N-vectors. In the design of old learning machines, we know that if a set of N-dimension analog vectors forms a convex set, and if the machine can learn the boundary vectors (or extreme edges) of this set, then it can definitely learn the inside vectors contained in this POLYHEDRAL CONE. This paper reports a new method and new algorithm to find the boundary vectors of a convex set of N-D analog vectors.
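
    The paper's PLI-based algorithm is not reproduced here, but for comparison, a standard way to find the extreme edges (extreme points) of a finite set of N-dimensional vectors is one feasibility LP per point: a point is extreme exactly when it is not a convex combination of the remaining points. A sketch with scipy:

      import numpy as np
      from scipy.optimize import linprog

      def extreme_points(U):
          # U: (M, N) array. Test each U[j] for membership in the convex
          # hull of the other points via a feasibility LP on the weights.
          M, _ = U.shape
          keep = []
          for j in range(M):
              others = np.delete(U, j, axis=0)
              A_eq = np.vstack([others.T, np.ones(M - 1)])  # sum w = 1
              b_eq = np.append(U[j], 1.0)
              res = linprog(np.zeros(M - 1), A_eq=A_eq, b_eq=b_eq,
                            bounds=(0, None), method="highs")
              if not res.success:   # infeasible -> U[j] is an extreme point
                  keep.append(j)
          return keep

      pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [0.25, 0.25]])
      print(extreme_points(pts))    # -> [0, 1, 2]; the interior point drops out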

  3. Low-rank structure learning via nonconvex heuristic recovery.

    PubMed

    Deng, Yue; Dai, Qionghai; Liu, Risheng; Zhang, Zengke; Hu, Sanqing

    2013-03-01

    In this paper, we propose a nonconvex framework to learn the essential low-rank structure from corrupted data. Different from traditional approaches, which directly utilize convex norms to measure the sparseness, our method introduces more reasonable nonconvex measurements to enhance the sparsity in both the intrinsic low-rank structure and the sparse corruptions. We will, respectively, introduce how to combine the widely used ℓp norm (0 < p < 1) and the log-sum term into the framework of low-rank structure learning. Although the proposed optimization is no longer convex, it still can be effectively solved by a majorization-minimization (MM)-type algorithm, in which the nonconvex objective function is iteratively replaced by its convex surrogate, so that the nonconvex problem finally falls into the general framework of reweighted approaches. We prove that the MM-type algorithm can converge to a stationary point after successive iterations. The proposed model is applied to solve two typical problems: robust principal component analysis and low-rank representation. Experimental results on low-rank structure learning demonstrate that our nonconvex heuristic methods, especially the log-sum heuristic recovery algorithm, generally perform much better than the convex-norm-based methods for both data with higher rank and data with denser corruptions.
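
    A minimal sketch of the MM pattern for the vector analogue: the log-sum penalty sum_i log(|x_i| + eps) is majorized at the current iterate by a weighted ℓ1 norm, so each pass solves a convex surrogate and the scheme reduces to iteratively reweighted ℓ1 minimization (the paper applies the same idea to singular values for the low-rank case). The toy problem and names below are assumptions.

      import numpy as np
      import cvxpy as cp

      def log_sum_sparse(A, b, eps=1e-3, outer=6):
          # Iteratively reweighted l1: MM for the log-sum sparsity heuristic.
          n = A.shape[1]
          w = np.ones(n)
          for _ in range(outer):
              x = cp.Variable(n)
              cp.Problem(cp.Minimize(w @ cp.abs(x)), [A @ x == b]).solve()
              w = 1.0 / (np.abs(x.value) + eps)   # majorization update
          return x.value

      rng = np.random.default_rng(1)
      A = rng.normal(size=(10, 30))
      x_true = np.zeros(30); x_true[[3, 7]] = [1.0, -2.0]
      print(np.round(log_sum_sparse(A, A @ x_true), 3))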

  4. Application of the phase shifting diffraction interferometer for measuring convex mirrors and negative lenses

    DOEpatents

    Sommargren, Gary E.; Campbell, Eugene W.

    2004-03-09

    To measure a convex mirror, a reference beam and a measurement beam are both provided through a single optical fiber. A positive auxiliary lens is placed in the system to give a converging wavefront onto the convex mirror under test. A measurement is taken that includes the aberrations of the convex mirror as well as the errors due to two transmissions through the positive auxiliary lens. A second measurement provides the information to eliminate this error. A negative lens can also be measured in a similar way. Again, there are two measurement set-ups. A reference beam is provided from a first optical fiber and a measurement beam is provided from a second optical fiber. A positive auxiliary lens is placed in the system to provide a converging wavefront from the reference beam onto the negative lens under test. The measurement beam is combined with the reference wavefront and is analyzed by standard methods. This measurement includes the aberrations of the negative lens, as well as the errors due to a single transmission through the positive auxiliary lens. A second measurement provides the information to eliminate this error.

  5. Application Of The Phase Shifting Diffraction Interferometer For Measuring Convex Mirrors And Negative Lenses

    DOEpatents

    Sommargren, Gary E.; Campbell, Eugene W.

    2005-06-21

    To measure a convex mirror, a reference beam and a measurement beam are both provided through a single optical fiber. A positive auxiliary lens is placed in the system to give a converging wavefront onto the convex mirror under test. A measurement is taken that includes the aberrations of the convex mirror as well as the errors due to two transmissions through the positive auxiliary lens. A second measurement provides the information to eliminate this error. A negative lens can also be measured in a similar way. Again, there are two measurement set-ups. A reference beam is provided from a first optical fiber and a measurement beam is provided from a second optical fiber. A positive auxiliary lens is placed in the system to provide a converging wavefront from the reference beam onto the negative lens under test. The measurement beam is combined with the reference wavefront and is analyzed by standard methods. This measurement includes the aberrations of the negative lens, as well as the errors due to a single transmission through the positive auxiliary lens. A second measurement provides the information to eliminate this error.

  6. Traits and evolution of wing venation pattern in paraneopteran insects.

    PubMed

    Nel, André; Prokop, Jakub; Nel, Patricia; Grandcolas, Philippe; Huang, Di-Ying; Roques, Patrick; Guilbert, Eric; Dostál, Ondřej; Szwedo, Jacek

    2012-05-01

    Two different patterns of wing venation are currently supposed to be present in each of the three orders of Paraneoptera. This is unlikely compared with the situation in other insects, where only one pattern exists per order. We propose for all Paraneoptera a new and unique interpretation of the wing venation pattern, assuming that the convex cubitus anterior gets fused with the common stem of the median and radial veins at or very near to the wing base, after separation from the concave cubitus posterior, and re-emerges more distally from the R + M stem. Thereafter, the vein between the concave cubitus posterior and CuA is a specialized crossvein called "cua-cup," proximally concave and distally convex. We show that, despite some variations (the cua-cup can vary from absent to hypertrophic; CuA can re-emerge together with M or not, or even completely disappear), this new interpretation explains all situations among all fossil and recent paraneopteran lineages. We propose that the characters "CuA fused in a common stem with R and M" and "presence of specialized crossvein cua-cup" are venation apomorphies that support the monophyly of the Paraneoptera. In the light of these characters, we reinterpret several Palaeozoic and early Mesozoic fossils that were ascribed to Paraneoptera, and confirm the attribution of several to this superorder, as well as the possible attribution of Zygopsocidae (Zygopsocus permianus Tillyard, 1935) as the oldest Psocodea. We discuss the situation in the extinct Hypoperlida and Miomoptera, suggesting that both orders could well be polyphyletic, with taxa related to Archaeorthoptera, Paraneoptera, or even Holometabola. The Carboniferous Protoprosbolidae is resurrected and retransferred into the Paraneoptera. The genus Lithoscytina is restored. The miomopteran Eodelopterum priscum Schmidt, 1962 is newly revised and considered to be a fern pinnule. In addition, the new paraneopteran Bruayaphis oudardi gen. nov. et sp. nov. is described from the Upper Carboniferous of France (see Supporting Information). Copyright © 2011 Wiley Periodicals, Inc.

  7. A theoretical stochastic control framework for adapting radiotherapy to hypoxia

    NASA Astrophysics Data System (ADS)

    Saberian, Fatemeh; Ghate, Archis; Kim, Minsun

    2016-10-01

    Hypoxia, that is, insufficient oxygen partial pressure, is a known cause of reduced radiosensitivity in solid tumors, and especially in head-and-neck tumors. It is thus believed to adversely affect the outcome of fractionated radiotherapy. Oxygen partial pressure varies spatially and temporally over the treatment course and exhibits inter-patient and intra-tumor variation. Emerging advances in non-invasive functional imaging offer the future possibility of adapting radiotherapy plans to this uncertain spatiotemporal evolution of hypoxia over the treatment course. We study the potential benefits of such adaptive planning via a theoretical stochastic control framework using computer-simulated evolution of hypoxia on computer-generated test cases in head-and-neck cancer. The exact solution of the resulting control problem is computationally intractable. We develop an approximation algorithm, called certainty equivalent control, that calls for the solution of a sequence of convex programs over the treatment course; dose-volume constraints are handled using a simple constraint generation method. These convex programs are solved using an interior point algorithm with a logarithmic barrier via Newton’s method and backtracking line search. Convexity of various formulations in this paper is guaranteed by a sufficient condition on radiobiological tumor-response parameters. This condition is expected to hold for head-and-neck tumors and for other similarly responding tumors where the linear dose-response parameter is larger than the quadratic dose-response parameter. We perform numerical experiments on four test cases by using a first-order vector autoregressive process with exponential and rational-quadratic covariance functions from the spatiotemporal statistics literature to simulate the evolution of hypoxia. Our results suggest that dynamic planning could lead to a considerable improvement in the number of tumor cells remaining at the end of the treatment course. Through these simulations, we also gain insights into when and why dynamic planning is likely to yield the largest benefits.
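
    The interior point machinery mentioned (logarithmic barrier, Newton's method, backtracking line search) can be sketched generically; this is not the authors' treatment-planning code, and the simple positivity constraint and toy objective below are assumptions standing in for the actual dose and dose-volume constraint structure.

      import numpy as np

      def barrier_newton(f, grad, hess, x0, t0=1.0, mu=10.0, outer=6, tol=1e-8):
          # Minimize f over {x > 0} using the barrier -sum(log x), damped
          # Newton steps, and Armijo backtracking; f, grad, hess are callables.
          x, t = x0.copy(), t0
          for _ in range(outer):
              phi = lambda z, t=t: t * f(z) - np.sum(np.log(z))
              for _ in range(50):
                  g = t * grad(x) - 1.0 / x
                  H = t * hess(x) + np.diag(1.0 / x ** 2)
                  dx = np.linalg.solve(H, -g)
                  if -(g @ dx) / 2.0 < tol:   # Newton decrement stopping test
                      break
                  s = 1.0                     # backtracking line search
                  while (np.any(x + s * dx <= 0)
                         or phi(x + s * dx) > phi(x) + 0.25 * s * (g @ dx)):
                      s *= 0.5
                  x = x + s * dx
              t *= mu                         # tighten the barrier
          return x

      # Toy usage: min ||x - c||^2 over x > 0 (assumed data).
      c = np.array([1.0, -0.5, 2.0])
      print(barrier_newton(lambda x: np.sum((x - c) ** 2),
                           lambda x: 2.0 * (x - c),
                           lambda x: 2.0 * np.eye(3), np.ones(3)))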

  8. Preliminary investigations into macroscopic attenuated total reflection-fourier transform infrared imaging of intact spherical domains: spatial resolution and image distortion.

    PubMed

    Everall, Neil J; Priestnall, Ian M; Clarke, Fiona; Jayes, Linda; Poulter, Graham; Coombs, David; George, Michael W

    2009-03-01

    This paper describes preliminary investigations into the spatial resolution of macro attenuated total reflection (ATR) Fourier transform infrared (FT-IR) imaging and the distortions that arise when imaging intact, convex domains, using spheres as an extreme example. The competing effects of shallow evanescent wave penetration and blurring due to finite spatial resolution meant that spheres within the range 20-140 microm all appeared to be approximately the same size ( approximately 30-35 microm) when imaged with a numerical aperture (NA) of approximately 0.2. A very simple model was developed that predicted this extreme insensitivity to particle size. On the basis of these studies, it is anticipated that ATR imaging at this NA will be insensitive to the size of intact highly convex objects. A higher numerical aperture device should give a better estimate of the size of small spheres, owing to superior spatial resolution, but large spheres should still appear undersized due to the shallow sampling depth. An estimate of the point spread function (PSF) was required in order to develop and apply the model. The PSF was measured by imaging a sharp interface; assuming an Airy profile, the PSF width (distance from central maximum to first minimum) was estimated to be approximately 20 and 30 microm for IR bands at 1600 and 1000 cm(-1), respectively. This work has two significant limitations. First, underestimation of domain size only arises when imaging intact convex objects; if surfaces are prepared that randomly and representatively section through domains, the images can be analyzed to calculate parameters such as domain size, area, and volume. Second, the model ignores reflection and refraction and assumes weak absorption; hence, the predicted intensity profiles are not expected to be accurate; they merely give a rough estimate of the apparent sphere size. Much further work is required to place the field of quantitative ATR-FT-IR imaging on a sound basis.

  9. A Class of Prediction-Correction Methods for Time-Varying Convex Optimization

    NASA Astrophysics Data System (ADS)

    Simonetto, Andrea; Mokhtari, Aryan; Koppel, Alec; Leus, Geert; Ribeiro, Alejandro

    2016-09-01

    This paper considers unconstrained convex optimization problems with time-varying objective functions. We propose algorithms with a discrete time-sampling scheme to find and track the solution trajectory based on prediction and correction steps, while sampling the problem data at a constant rate of $1/h$, where $h$ is the length of the sampling interval. The prediction step is derived by analyzing the iso-residual dynamics of the optimality conditions. The correction step adjusts for the distance between the current prediction and the optimizer at each time step, and consists of either one or multiple gradient steps or Newton steps, which respectively correspond to the gradient trajectory tracking (GTT) or Newton trajectory tracking (NTT) algorithms. Under suitable conditions, we establish that the asymptotic error incurred by both proposed methods behaves as $O(h^2)$, and in some cases as $O(h^4)$, which outperforms the state-of-the-art error bound of $O(h)$ for correction-only methods with gradient correction steps. Moreover, when the characteristics of the objective function variation are not available, we propose approximate gradient and Newton tracking algorithms (AGT and ANT, respectively) that still attain these asymptotic error bounds. Numerical simulations demonstrate the practical utility of the proposed methods and show that they improve upon existing techniques by several orders of magnitude.
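
    A minimal sketch of the prediction-correction template with gradient correction steps (GTT-like), on an assumed toy tracking problem: differentiating the optimality condition grad f(x*(t), t) = 0 in time gives the prediction dynamics dx/dt = -H^{-1} d(grad f)/dt, followed by a few gradient correction steps at the new sample.

      import numpy as np

      def predict_correct(grad, hess, dgrad_dt, x0, h=0.1, T=10.0,
                          gamma=0.2, n_corr=3):
          x, t, path = x0.copy(), 0.0, []
          while t < T:
              # Prediction: Euler step along the optimality-trajectory ODE.
              x = x - h * np.linalg.solve(hess(x, t), dgrad_dt(x, t))
              t += h
              for _ in range(n_corr):          # correction: gradient steps
                  x = x - gamma * grad(x, t)
              path.append(x.copy())
          return np.array(path)

      # Toy objective f(x, t) = 0.5 * ||x - c(t)||^2 with c(t) = (cos t, sin t).
      c = lambda t: np.array([np.cos(t), np.sin(t)])
      grad = lambda x, t: x - c(t)
      hess = lambda x, t: np.eye(2)
      dgrad_dt = lambda x, t: -np.array([-np.sin(t), np.cos(t)])  # -c'(t)
      path = predict_correct(grad, hess, dgrad_dt, np.zeros(2))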

  10. Tumor segmentation of multi-echo MR T2-weighted images with morphological operators

    NASA Astrophysics Data System (ADS)

    Torres, W.; Martín-Landrove, M.; Paluszny, M.; Figueroa, G.; Padilla, G.

    2009-02-01

    In the present work, an automatic brain tumor segmentation procedure based on mathematical morphology is proposed. The approach considers sequences of eight multi-echo MR T2-weighted images. The relaxation time T2 characterizes the relaxation of water protons in the brain tissue: white matter, gray matter, cerebrospinal fluid (CSF) or pathological tissue. Image data are initially regularized by the application of a log-convex filter in order to adjust their geometrical properties to those of noiseless data, which exhibit monotonically decreasing convex behavior. The regularized data are then analyzed by means of an 8-dimensional morphological eccentricity filter. In a first stage, the filter was used for the spatial homogenization of the tissues in the image, replacing each pixel by the most representative pixel within its structuring element, i.e., the one which exhibits the minimum total distance to all members in the structuring element. On the filtered images, the relaxation time T2 is estimated by means of a least squares regression algorithm and the histogram of T2 is determined. The T2 histogram was partitioned using the watershed morphological operator; relaxation time classes were established and used for tissue classification and segmentation of the image. The method was validated on 15 sets of MRI data with excellent results.
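
    The T2 estimation step is a standard per-pixel log-linear least squares fit of S(TE) = S0 exp(-TE / T2); a minimal sketch with assumed array shapes (eight echoes, as in the sequences considered here):

      import numpy as np

      def t2_map(echoes, te):
          # echoes: (E, H, W) multi-echo magnitudes; te: (E,) echo times.
          # Linearize log S = log S0 - TE / T2 and solve per pixel.
          E, H, W = echoes.shape
          y = np.log(np.maximum(echoes.reshape(E, -1), 1e-6))
          X = np.column_stack([np.ones(E), -te])
          coef, *_ = np.linalg.lstsq(X, y, rcond=None)  # rows: log S0, 1/T2
          return (1.0 / np.maximum(coef[1], 1e-6)).reshape(H, W)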

  11. Fréchet derivative with respect to the shape of a strongly convex nonscattering region in optical tomography

    NASA Astrophysics Data System (ADS)

    Hyvönen, Nuutti

    2007-10-01

    The aim of optical tomography is to reconstruct the optical properties inside a physical body, e.g. a neonatal head, by illuminating it with near-infrared light and measuring the outward flux of photons on the object boundary. Because a brain consists of strongly scattering tissue with imbedded cavities filled by weakly scattering cerebrospinal fluid, propagation of near-infrared photons in the human head can be treated by combining the diffusion approximation of the radiative transfer equation with geometrical optics to obtain the radiosity-diffusion forward model of optical tomography. At the moment, a disadvantage with the radiosity-diffusion model is that the locations of the transparent cavities must be known in advance in order to be able to reconstruct the physiologically interesting quantities, i.e., the absorption and the scatter in the strongly scattering brain tissue. In this work we show that the boundary measurement map of optical tomography is Fréchet differentiable with respect to the shape of a strongly convex nonscattering region. Using this result, we introduce a numerical algorithm for approximating an unknown nonscattering cavity by a ball if the background diffuse optical properties of the object are known. The functionality of the method is demonstrated through two-dimensional numerical experiments.

  12. Stochastic sampled-data control for synchronization of complex dynamical networks with control packet loss and additive time-varying delays.

    PubMed

    Rakkiyappan, R; Sakthivel, N; Cao, Jinde

    2015-06-01

    This study examines the exponential synchronization of complex dynamical networks with control packet loss and additive time-varying delays. Additionally, a sampled-data controller with a time-varying sampling period is considered, which is assumed to switch between m different values in a random way with given probability. Then, a novel Lyapunov-Krasovskii functional (LKF) with triple integral terms is constructed, and by using Jensen's inequality and the reciprocally convex approach, sufficient conditions under which the dynamical network is exponentially mean-square stable are derived. When applying Jensen's inequality to partition the double integral terms in the derivation of the linear matrix inequality (LMI) conditions, a new kind of linear combination of positive functions weighted by the inverses of squared convex parameters appears. In order to handle such a combination, an effective method is introduced by extending the lower bound lemma. To design the sampled-data controller, the synchronization error system is represented as a switched system. Based on the derived LMI conditions and the average dwell-time method, sufficient conditions for the synchronization of the switched error system are derived in terms of LMIs. Finally, a numerical example is employed to show the effectiveness of the proposed methods. Copyright © 2015 Elsevier Ltd. All rights reserved.

  13. On Using Homogeneous Polynomials To Design Anisotropic Yield Functions With Tension/Compression Symmetry/Asymmetry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Soare, S.; Cazacu, O.; Yoon, J. W.

    With few exceptions, non-quadratic homogeneous polynomials have received little attention as possible candidates for yield functions. One reason might be that not every such polynomial is a convex function. In this paper we show that homogeneous polynomials can be used to develop powerful anisotropic yield criteria, and that imposing simple constraints on the identification process leads, a posteriori, to the desired convexity property. It is shown that combinations of such polynomials allow for modeling the yielding properties of metallic materials with any crystal structure, i.e. both cubic and hexagonal, which display strength differential effects. Extensions of the proposed criteria to 3D stress states are also presented. We apply these criteria to the description of the aluminum alloy AA2090T3. We prove that a sixth-order orthotropic homogeneous polynomial is capable of a satisfactory description of this alloy. Next, applications to the deep drawing of a cylindrical cup are presented. The newly proposed criteria were implemented as UMAT subroutines into the commercial FE code ABAQUS. We were able to predict six ears on the AA2090T3 cup's profile. Finally, we show that a tension/compression asymmetry in yielding can have an important effect on the earing profile.

  14. Advanced autostereoscopic display for G-7 pilot project

    NASA Astrophysics Data System (ADS)

    Hattori, Tomohiko; Ishigaki, Takeo; Shimamoto, Kazuhiro; Sawaki, Akiko; Ishiguchi, Tsuneo; Kobayashi, Hiromi

    1999-05-01

    An advanced auto-stereoscopic display is described that permits the observation of a stereo pair by several persons simultaneously, without the use of special glasses or any kind of head-worn tracking device. The system is composed of a right-eye system, a left-eye system, and a sophisticated head tracking system. In each eye system, a transparent-type color liquid crystal imaging plate is used with a special back light unit. The back light unit consists of a monochrome 2D display and a large-format convex lens. The unit distributes the light to each viewer's correct eye only. The right-eye perspective system is combined with the left-eye perspective system by a half mirror in order to function as a time-parallel stereoscopic system. The viewer's IR image is taken through and focused by the large-format convex lens and fed back to the back light as a modulated binary half-face image. The auto-stereoscopic display employs the TTL method for accurate head tracking. The system was operated as a stereoscopic TV phone between the Duke University Department of Telemedicine and the Nagoya University School of Medicine Department of Radiology using a high-speed digital line of the GIBN. The applications are also described in this paper.

  15. The effect of optical system design for laser micro-hole drilling process

    NASA Astrophysics Data System (ADS)

    Ding, Chien-Fang; Lan, Yin-Te; Chien, Yu-Lun; Young, Hong-Tsu

    2017-08-01

    Lasers are a promising high-accuracy tool for making small holes in composite or hard materials. They offer advantages over conventional machining processes, which are time consuming and have scaling limitations. However, the major downfall of laser material processing is the relatively large heat-affected zone and the molten burrs it generates, even when using nanosecond lasers instead of high-cost ultrafast lasers. In this paper, we constructed a nanosecond laser processing system with a 532 nm wavelength laser source. In order to enhance precision and minimize the effect of heat generation in the laser drilling process, we investigated the geometric shape of the optical elements and analyzed the images using the modulation transfer function (MTF) and encircled energy (EE) in the optical design software Zemax. We discuss commercial spherical lenses, including plano-convex lenses, bi-convex lenses, plano-concave lenses, bi-concave lenses, best-form lenses, and meniscus lenses. Furthermore, we determined the best lens configuration by image evaluation, and then verified the results experimentally by carrying out the laser drilling process on multilayer flexible copper clad laminate (FCCL). The paper presents the drilling results obtained with different lens configurations; the best configuration produced a small heat-affected zone and a clean edge along the laser-drilled holes.

  16. Extremal edges: a powerful cue to depth perception and figure-ground organization.

    PubMed

    Palmer, Stephen E; Ghose, Tandra

    2008-01-01

    Extremal edges (EEs) are projections of viewpoint-specific horizons of self-occlusion on smooth convex surfaces. An ecological analysis of viewpoint constraints suggests that an EE surface is likely to be closer to the observer than the non-EE surface on the other side of the edge. In two experiments, one using shading gradients and the other using texture gradients, we demonstrated that EEs operate as strong cues to relative depth perception and figure-ground organization. Image regions with an EE along the shared border were overwhelmingly perceived as closer than either flat or equally convex surfaces without an EE along that border. A further demonstration suggests that EEs are more powerful than classical figure-ground cues, including even the joint effects of small size, convexity, and surroundedness.

  17. Convex lattice polygons of fixed area with perimeter-dependent weights.

    PubMed

    Rajesh, R; Dhar, Deepak

    2005-01-01

    We study fully convex polygons with a given area and variable perimeter length on square and hexagonal lattices. We attach a weight t^m to a convex polygon of perimeter m and show that the sum of weights of all polygons with a fixed area s varies as s^(-θ_conv) e^(K(t)√s) for large s and t less than a critical threshold t_c, where K(t) is a t-dependent constant and θ_conv is a critical exponent which does not change with t. Using heuristic arguments, we find that θ_conv is 1/4 for the square lattice, but -1/4 for the hexagonal lattice. The reason for this unexpected nonuniversality of θ_conv is traced to the existence of sharp corners in the asymptotic shape of these polygons.

  18. String-averaging incremental subgradients for constrained convex optimization with applications to reconstruction of tomographic images

    NASA Astrophysics Data System (ADS)

    Massambone de Oliveira, Rafael; Salomão Helou, Elias; Fontoura Costa, Eduardo

    2016-11-01

    We present a method for non-smooth convex minimization which is based on subgradient directions and string-averaging techniques. In this approach, the set of available data is split into sequences (strings) and a given iterate is processed independently along each string, possibly in parallel, by an incremental subgradient method (ISM). The end-points of all strings are averaged to form the next iterate. The method is useful for solving sparse and large-scale non-smooth convex optimization problems, such as those arising in tomographic imaging. A convergence analysis is provided under realistic, standard conditions. Numerical tests are performed in a tomographic image reconstruction application, showing good convergence speed, measured as the decrease ratio of the objective function, in comparison to the classical ISM.
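
    A compact sketch of the string-averaging ISM pattern just described, with assumed toy data: each string is traversed incrementally with a diminishing step size (the two strings below run sequentially, though they could be processed in parallel), and the string end-points are averaged.

      import numpy as np

      def string_averaging_ism(subgrads, x0, strings, iters=200, a0=1.0):
          # subgrads[i](x): a subgradient of the i-th component function at x.
          x = x0.copy()
          for k in range(iters):
              step = a0 / np.sqrt(k + 1)       # diminishing step sizes
              ends = []
              for string in strings:
                  z = x.copy()
                  for i in string:             # incremental pass along a string
                      z = z - step * subgrads[i](z)
                  ends.append(z)
              x = np.mean(ends, axis=0)        # string averaging
          return x

      # Toy: minimize sum_i |a_i . x - b_i| split into two strings.
      rng = np.random.default_rng(0)
      A, b = rng.normal(size=(8, 3)), rng.normal(size=8)
      subgrads = [lambda x, a=A[i], bi=b[i]: np.sign(a @ x - bi) * a
                  for i in range(8)]
      x = string_averaging_ism(subgrads, np.zeros(3), [range(0, 4), range(4, 8)])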

  19. A free boundary approach to the Rosensweig instability of ferrofluids

    NASA Astrophysics Data System (ADS)

    Parini, Enea; Stylianou, Athanasios

    2018-04-01

    We establish the existence of saddle points for a free boundary problem describing the two-dimensional free surface of a ferrofluid undergoing normal field instability. The starting point is the ferrohydrostatic equations for the magnetic potentials in the ferrofluid and air, and the function describing their interface. These constitute the strong form for the Euler-Lagrange equations of a convex-concave functional, which we extend to include interfaces that are not necessarily graphs of functions. Saddle points are then found by iterating the direct method of the calculus of variations and applying classical results of convex analysis. For the existence part, we assume a general nonlinear magnetization law; for a linear law, we also show, via convex duality, that the saddle point is a constrained minimizer of the relevant energy functional.

  20. APOE, MAPT, and COMT and Parkinson's Disease Susceptibility and Cognitive Symptom Progression.

    PubMed

    Paul, Kimberly C; Rausch, Rebecca; Creek, Michelle M; Sinsheimer, Janet S; Bronstein, Jeff M; Bordelon, Yvette; Ritz, Beate

    2016-04-02

    Cognitive decline is well recognized in Parkinson's disease (PD) and a major concern for patients and caregivers. Apolipoprotein E (APOE), catechol-O-methyltransferase (COMT), and microtubule-associated protein tau (MAPT) are of interest related to their contributions to cognitive decline or dementia in PD. Here, we investigate whether APOE, COMT, or MAPT influence the rate of cognitive decline in PD patients. We relied on 634 PD patients and 879 controls to examine gene-PD susceptibility associations, and a nested longitudinal cohort of 246 patients from the case-control study, which followed patients on average 5 years and 7.5 years into disease. We repeatedly assessed cognitive symptom progression with the MMSE and conducted a full neuropsychological battery on a subset of 183 cognitively normal patients. We used repeated-measures regression analyses to assess longitudinal associations between genotypes and cognitive progression scores. The MAPT H1 haplotype was associated with PD susceptibility. APOE ɛ4 carriers (ɛ4+) (p = 0.03) and possibly COMT Met/Met carriers (p = 0.06) exhibited faster annual decline on the MMSE. Additionally, APOE ɛ4+ carriers showed faster decline in many of the neuropsychological test scores. No such differences in neuropsychological outcomes were seen for the COMT genotypes. This work supports a growing set of research identifying overlapping etiology and pathology between synucleinopathies, such as PD, Alzheimer's disease, and tauopathies, especially in the context of cognitive dysfunction in PD. We provide support for the argument that APOE ɛ4+ and COMT Met/Met genotypes can be used as predictors of faster cognitive decline in PD.
