Sample records for minimum convex polygon

  1. Convex Lattice Polygons

    ERIC Educational Resources Information Center

    Scott, Paul

    2006-01-01

    A "convex" polygon is one with no re-entrant angles. Alternatively, one can use the standard definition of convexity: for any two points of the polygon, the line segment joining them lies entirely within the polygon. In this article, the author provides a solution to a problem involving convex lattice polygons.

  2. Point-in-convex polygon and point-in-convex polyhedron algorithms with O(1) complexity using space subdivision

    NASA Astrophysics Data System (ADS)

    Skala, Vaclav

    2016-06-01

    There are many space subdivision and space partitioning techniques used to speed up computations. They mostly rely on orthogonal space subdivision, or on hierarchical data structures such as BSP trees, quadtrees, octrees, kd-trees, and bounding volume hierarchies. However, in some applications a non-orthogonal space subdivision can offer new ways to speed up computation. For a convex polygon in E2, the simple Point-in-Polygon test has O(N) complexity, and the optimal algorithm has O(log N) computational complexity. In the E3 case, the complexity is O(N) even for a convex polyhedron, as no ordering is defined. New Point-in-Convex Polygon and Point-in-Convex Polyhedron algorithms are presented, based on space subdivision in a preprocessing stage, resulting in O(1) run-time complexity. The presented approach is simple to implement. Due to the principle of duality, dual problems, e.g. line-convex polygon intersection or line clipping, can be solved in a similar way.
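
    For contrast with the O(1) method of the paper, the classic O(log N) test it mentions can be sketched in a few lines: binary-search the triangle fan rooted at one vertex. This is a generic illustration, not the authors' algorithm, and the function names are ours; it assumes a convex polygon with at least 3 vertices in counter-clockwise order.

```python
def cross(o, a, b):
    """2D cross product of vectors o->a and o->b (positive if b is left of o->a)."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def point_in_convex_polygon(pt, poly):
    """O(log N) containment test; poly is CCW with n >= 3, boundary counts as inside."""
    n = len(poly)
    # Reject points outside the wedge spanned at poly[0] by the first and last edges.
    if cross(poly[0], poly[1], pt) < 0 or cross(poly[0], poly[n - 1], pt) > 0:
        return False
    # Binary search for the fan triangle (poly[0], poly[lo], poly[lo+1]) containing pt.
    lo, hi = 1, n - 1
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if cross(poly[0], poly[mid], pt) >= 0:
            lo = mid
        else:
            hi = mid
    # Inside iff pt lies on or left of the edge poly[lo] -> poly[lo+1].
    return cross(poly[lo], poly[lo + 1], pt) >= 0
```

    The preprocessing-based O(1) method of the record replaces this binary search with a direct lookup into a precomputed angular subdivision.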

  3. Effects of GPS sampling intensity on home range analyses

    Treesearch

    Jeffrey J. Kolodzinski; Lawrence V. Tannenbaum; David A. Osborn; Mark C. Conner; W. Mark Ford; Karl V. Miller

    2010-01-01

    The two most common methods for determining home ranges, minimum convex polygon (MCP) and kernel analyses, can be affected by sampling intensity. Despite prior research, it remains unclear how high-intensity sampling regimes affect home range estimations. We used datasets from 14 GPS-collared, white-tailed deer (Odocoileus virginianus) to describe...
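
    As background, the 100% MCP used throughout these records is simply the convex hull of the animal's location fixes. A minimal planar sketch (hypothetical helper names; projected x/y coordinates assumed; the 95% point-trimming and kernel variants are not shown):

```python
def _cross(o, a, b):
    """2D cross product of vectors o->a and o->b."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def convex_hull(points):
    """Andrew's monotone chain; returns hull vertices in CCW order."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and _cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and _cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

def mcp_area(fixes):
    """Area of the 100% minimum convex polygon via the shoelace formula."""
    h = convex_hull(fixes)
    n = len(h)
    return 0.5 * abs(sum(h[i][0] * h[(i + 1) % n][1] - h[(i + 1) % n][0] * h[i][1]
                         for i in range(n)))
```

    Because the hull is driven entirely by the outermost fixes, adding locations can only grow the estimate, which is one reason MCP size is sensitive to sampling intensity.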

  4. Interpolation Error Estimates for Mean Value Coordinates over Convex Polygons

    PubMed Central

    Rand, Alexander; Gillette, Andrew; Bajaj, Chandrajit

    2012-01-01

    In a similar fashion to estimates shown for Harmonic, Wachspress, and Sibson coordinates in [Gillette et al., AiCM, to appear], we prove interpolation error estimates for the mean value coordinates on convex polygons suitable for standard finite element analysis. Our analysis is based on providing a uniform bound on the gradient of the mean value functions for all convex polygons of diameter one satisfying certain simple geometric restrictions. This work makes rigorous an observed practical advantage of the mean value coordinates: unlike Wachspress coordinates, the gradient of the mean value coordinates does not become large as interior angles of the polygon approach π. PMID:24027379

  5. Interpolation Error Estimates for Mean Value Coordinates over Convex Polygons.

    PubMed

    Rand, Alexander; Gillette, Andrew; Bajaj, Chandrajit

    2013-08-01

    In a similar fashion to estimates shown for Harmonic, Wachspress, and Sibson coordinates in [Gillette et al., AiCM, to appear], we prove interpolation error estimates for the mean value coordinates on convex polygons suitable for standard finite element analysis. Our analysis is based on providing a uniform bound on the gradient of the mean value functions for all convex polygons of diameter one satisfying certain simple geometric restrictions. This work makes rigorous an observed practical advantage of the mean value coordinates: unlike Wachspress coordinates, the gradient of the mean value coordinates does not become large as interior angles of the polygon approach π.

  6. Home-range and activity pattern of rehabilitated malayan sun bears (Helarctos malayanus) in the Tembat Forest Reserve, Terengganu

    NASA Astrophysics Data System (ADS)

    Abidin, Mohammad Kamaruddin Zainal; Mohammed, Ahmad Azhar; Nor, Shukor Md

    2018-04-01

    A re-introduction programme has been adopted to resolve conflict issues involving Malayan sun bears in Peninsular Malaysia. Two rehabilitated sun bears (#1533 and #1532) were collared and released in the Tembat Forest Reserve, Hulu Terengganu, to study their home range and activity pattern. The bears were tracked manually in the wild using telemetry devices, namely a radio-frequency system and a GPS-UHF download system. A total of 912 locations were recorded. The home range of bear #1533 was larger than that of bear #1532: the 95% minimum convex polygon covered 130 km2, compared with 33.28 km2 for its counterpart. The bears moved through forest (primary and secondary) and oil palm areas. Bears #1533 and #1532 were more active in the daytime (diurnal), especially from sunrise to midday. The activity patterns of both rehabilitated bears suggest an influence of their daily routine in captivity. This study proposes two guidelines for re-introduction: 1) the minimum distance between the release site and possible conflict areas should be 10-13 km, and 2) release should occur during the bears' active time.

  7. Autumn migration and wintering areas of Peregrine Falcons Falco peregrinus nesting on the Kola Peninsula, northern Russia

    USGS Publications Warehouse

    Ganusevich, S.A.; Maechtle, T.L.; Seegar, W.S.; Yates, M.A.; McGrady, M.J.; Fuller, M.; Schueck, L.; Dayton, J.; Henny, C.J.

    2004-01-01

    Four female Peregrine Falcons Falco peregrinus breeding on the Kola Peninsula, Russia, were fitted with satellite-received transmitters in 1994. Their breeding home ranges averaged 1175 (sd = ±714) km2, and overlapped considerably. All left their breeding grounds in September and migrated generally south-west along the Baltic Sea. The mean travel rate for three falcons was 190 km/day. Two Falcons wintered on the coasts of France and in southern Spain, which were, respectively, 2909 and 4262 km from their breeding sites. Data on migration routes suggested that Falcons took a near-direct route to the wintering areas. No prolonged stopovers were apparent. The 90% minimum convex polygon winter range of a bird that migrated to Spain encompassed 213 km2 (n = 54). The area of the 50% minimum convex polygon was 21.5 km2 (n = 29). Data from this study agree with others from North America that show that Falcons breeding in a single area do not necessarily follow the same migratory path southward and do not necessarily use the same wintering grounds.

  8. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Skala, Vaclav

    There are many space subdivision and space partitioning techniques used to speed up computations. They mostly rely on orthogonal space subdivision, or on hierarchical data structures such as BSP trees, quadtrees, octrees, kd-trees, and bounding volume hierarchies. However, in some applications a non-orthogonal space subdivision can offer new ways to speed up computation. For a convex polygon in E2, the simple Point-in-Polygon test has O(N) complexity, and the optimal algorithm has O(log N) computational complexity. In the E3 case, the complexity is O(N) even for a convex polyhedron, as no ordering is defined. New Point-in-Convex Polygon and Point-in-Convex Polyhedron algorithms are presented, based on space subdivision in a preprocessing stage, resulting in O(1) run-time complexity. The presented approach is simple to implement. Due to the principle of duality, dual problems, e.g. line-convex polygon intersection or line clipping, can be solved in a similar way.

  9. Convex lattice polygons of fixed area with perimeter-dependent weights.

    PubMed

    Rajesh, R; Dhar, Deepak

    2005-01-01

    We study fully convex polygons with a given area and variable perimeter length on square and hexagonal lattices. We attach a weight t^m to a convex polygon of perimeter m and show that the sum of weights of all polygons with a fixed area s varies as s^(-θ_conv) e^(K(t)√s) for large s and t less than a critical threshold t_c, where K(t) is a t-dependent constant and θ_conv is a critical exponent which does not change with t. Using heuristic arguments, we find that θ_conv is 1/4 for the square lattice, but -1/4 for the hexagonal lattice. The reason for this unexpected nonuniversality of θ_conv is traced to the existence of sharp corners in the asymptotic shape of these polygons.
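
    The asymptotic form quoted in the abstract reads, in cleaner notation:

```latex
\sum_{\substack{P \text{ convex} \\ \operatorname{area}(P) = s}} t^{\,m(P)}
  \;\sim\; s^{-\theta_{\mathrm{conv}}}\, e^{K(t)\sqrt{s}},
  \qquad t < t_c,
```

    with K(t) a t-dependent constant and, per the abstract's heuristic arguments, θ_conv = 1/4 on the square lattice and θ_conv = -1/4 on the hexagonal lattice.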

  10. Spotted owl home range and habitat use in the southern Oregon Coast Range.

    Treesearch

    A.B. Carey; J.A. Reid; S.P. Horton

    1991-01-01

    We radiotracked 9 adult spotted owls (Strix occidentalis) in the southern Oregon Coast Ranges for 6-12 months. Owls selected home ranges that emphasized old growth within the landscape. Minimum convex polygon home ranges of 4 pairs were 1,153-3,945 ha and contained 726-1,062 ha of old growth. The percentages of the home ranges in old growth were...

  11. A new convexity measure for polygons.

    PubMed

    Zunic, Jovisa; Rosin, Paul L

    2004-07-01

    Convexity estimators are commonly used in the analysis of shape. In this paper, we define and evaluate a new convexity measure for planar regions bounded by polygons. The new convexity measure can be understood as a "boundary-based" measure and, in accordance with this, it is more sensitive to measured boundary defects than the so-called "area-based" convexity measures. When compared with the convexity measure defined as the ratio between the Euclidean perimeter of the convex hull of the measured shape and the Euclidean perimeter of the measured shape, the new convexity measure also shows some advantages, particularly for shapes with holes. The new convexity measure has the following desirable properties: 1) the estimated convexity is always a number from (0, 1], 2) the estimated convexity is 1 if and only if the measured shape is convex, 3) there are shapes whose estimated convexity is arbitrarily close to 0, 4) the new convexity measure is invariant under similarity transformations, and 5) there is a simple and fast procedure for computing the new convexity measure.
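
    The comparison measure the abstract mentions, the ratio of the convex hull's perimeter to the shape's perimeter, is easy to sketch for a simple polygon. This is a generic illustration of that baseline, not the paper's new boundary-based measure; the helper names are ours.

```python
import math

def _cross(o, a, b):
    """2D cross product of vectors o->a and o->b."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def convex_hull(points):
    """Andrew's monotone chain; returns hull vertices in CCW order."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and _cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and _cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

def perimeter(poly):
    """Total edge length of a closed polygon given as a vertex list."""
    return sum(math.dist(poly[i], poly[(i + 1) % len(poly)])
               for i in range(len(poly)))

def convexity_perimeter(poly):
    """Perimeter-based convexity: P(convex hull) / P(shape), in (0, 1], 1 iff convex."""
    return perimeter(convex_hull(poly)) / perimeter(poly)
```

    A concavity lengthens the shape's boundary without lengthening the hull's, which is why this ratio drops below 1 exactly when the polygon is non-convex.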

  12. QUADRATIC SERENDIPITY FINITE ELEMENTS ON POLYGONS USING GENERALIZED BARYCENTRIC COORDINATES.

    PubMed

    Rand, Alexander; Gillette, Andrew; Bajaj, Chandrajit

    2014-01-01

    We introduce a finite element construction for use on the class of convex, planar polygons and show it obtains a quadratic error convergence estimate. On a convex n-gon, our construction produces 2n basis functions, associated in a Lagrange-like fashion to each vertex and each edge midpoint, by transforming and combining a set of n(n + 1)/2 basis functions known to obtain quadratic convergence. The technique broadens the scope of the so-called 'serendipity' elements, previously studied only for quadrilateral and regular hexahedral meshes, by employing the theory of generalized barycentric coordinates. Uniform a priori error estimates are established over the class of convex quadrilaterals with bounded aspect ratio as well as over the class of convex planar polygons satisfying additional shape regularity conditions to exclude large interior angles and short edges. Numerical evidence is provided on a trapezoidal quadrilateral mesh, previously not amenable to serendipity constructions, and applications to adaptive meshing are discussed.

  13. The Compressible Stokes Flows with No-Slip Boundary Condition on Non-Convex Polygons

    NASA Astrophysics Data System (ADS)

    Kweon, Jae Ryong

    2017-03-01

    In this paper we study the compressible Stokes equations with no-slip boundary condition on non-convex polygons and show the best regularity result that the solution can have without subtracting corner singularities. This is obtained by a suitable Helmholtz decomposition: u = w + ∇φ_R with div w = 0 and a potential φ_R. Here w is the solution of the incompressible Stokes problem, and φ_R is defined by subtracting from the solution of the Neumann problem the leading two corner singularities at non-convex vertices.

  14. QUADRATIC SERENDIPITY FINITE ELEMENTS ON POLYGONS USING GENERALIZED BARYCENTRIC COORDINATES

    PubMed Central

    RAND, ALEXANDER; GILLETTE, ANDREW; BAJAJ, CHANDRAJIT

    2013-01-01

    We introduce a finite element construction for use on the class of convex, planar polygons and show it obtains a quadratic error convergence estimate. On a convex n-gon, our construction produces 2n basis functions, associated in a Lagrange-like fashion to each vertex and each edge midpoint, by transforming and combining a set of n(n + 1)/2 basis functions known to obtain quadratic convergence. The technique broadens the scope of the so-called ‘serendipity’ elements, previously studied only for quadrilateral and regular hexahedral meshes, by employing the theory of generalized barycentric coordinates. Uniform a priori error estimates are established over the class of convex quadrilaterals with bounded aspect ratio as well as over the class of convex planar polygons satisfying additional shape regularity conditions to exclude large interior angles and short edges. Numerical evidence is provided on a trapezoidal quadrilateral mesh, previously not amenable to serendipity constructions, and applications to adaptive meshing are discussed. PMID:25301974

  15. Iterating the Number of Intersection Points of the Diagonals of Irregular Convex Polygons, or C (n, 4) the Hard Way!

    ERIC Educational Resources Information Center

    Hathout, Leith

    2007-01-01

    Counting the number of internal intersection points made by the diagonals of irregular convex polygons where no three diagonals are concurrent is an interesting problem in discrete mathematics. This paper uses an iterative approach to develop a summation relation which tallies the total number of intersections, and shows that this total can be…
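
    The closed form that the iterative derivation arrives at is tiny to state in code: with no three diagonals concurrent, every choice of 4 vertices of the convex n-gon determines exactly one interior crossing, so the count is C(n, 4).

```python
from math import comb

def diagonal_intersections(n):
    """Interior intersection points of the diagonals of a convex n-gon,
    assuming no three diagonals are concurrent: each 4-subset of vertices
    determines one pair of crossing diagonals, hence C(n, 4) points."""
    return comb(n, 4)
```

    For example, a convex quadrilateral has one crossing and a convex hexagon has fifteen.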

  16. Detection of Convexity and Concavity in Context

    ERIC Educational Resources Information Center

    Bertamini, Marco

    2008-01-01

    Sensitivity to shape changes was measured, in particular detection of convexity and concavity changes. The available data are contradictory. The author used a change detection task and simple polygons to systematically manipulate convexity/concavity. Performance was high for detecting a change of sign (a new concave vertex along a convex contour…

  17. Morphological decomposition of 2-D binary shapes into convex polygons: a heuristic algorithm.

    PubMed

    Xu, J

    2001-01-01

    In many morphological shape decomposition algorithms, either a shape can only be decomposed into components of extremely simple form, or a time-consuming search process is employed to determine a decomposition. In this paper, we present a morphological shape decomposition algorithm that decomposes a two-dimensional (2-D) binary shape into a collection of convex polygonal components. A single convex polygonal approximation for a given image is first identified. This first component is determined incrementally by selecting a sequence of basic shape primitives. These shape primitives are chosen based on shape information extracted from the given shape at different scale levels. Additional shape components are identified recursively from the difference image between the given image and the first component. Simple operations are used to repair certain concavities caused by the set difference operation. The resulting hierarchical structure provides descriptions of the given shape at different levels of detail. The experiments show that the decomposition results produced by the algorithm seem to be in good agreement with the natural structures of the given shapes. The computational cost of the algorithm is significantly lower than that of an earlier search-based convex decomposition algorithm. Compared to nonconvex decomposition algorithms, our algorithm allows accurate approximations of the given shapes at low coding costs.

  18. Method and apparatus for modeling interactions

    DOEpatents

    Xavier, Patrick G.

    2000-08-08

    A method and apparatus for modeling interactions between bodies. The method comprises representing two bodies undergoing translations and rotations by two hierarchical swept volume representations. Interactions such as nearest approach and collision can be modeled based on the swept body representations. The present invention can serve as a practical tool in motion planning, CAD systems, simulation systems, safety analysis, and applications that require modeling time-based interactions. A body can be represented in the present invention by a union of convex polygons and convex polyhedra. As used generally herein, polyhedron includes polygon, and polyhedra includes polygons. The body undergoing translation can be represented by a swept body representation, where the swept body representation comprises a hierarchical bounding volume representation whose leaves each contain a representation of the region swept by a section of the body during the translation, and where the union of the regions is a superset of the region swept by the surface of the body during translation. Interactions between two bodies thus represented can be modeled by modeling interactions between the convex hulls of the finite sets of discrete points in the swept body representations.

  19. GPC: General Polygon Clipper library

    NASA Astrophysics Data System (ADS)

    Murta, Alan

    2015-12-01

    The University of Manchester GPC library is a flexible and highly robust polygon set operations library for use with C, C#, Delphi, Java, Perl, Python, Haskell, Lua, VB.Net and other applications. It supports difference, intersection, exclusive-or and union clip operations, and polygons may be composed of multiple disjoint contours. Contour vertices may be given in any order, clockwise or anticlockwise, and contours may be convex, concave or self-intersecting, and may be nested (i.e. polygons may have holes). Output may take the form of either polygon contours or tristrips, and hole and external contours are differentiated in the result.

  20. CudaChain: an alternative algorithm for finding 2D convex hulls on the GPU.

    PubMed

    Mei, Gang

    2016-01-01

    This paper presents an alternative GPU-accelerated convex hull algorithm and a novel Sorting-based Preprocessing Approach (SPA) for planar point sets. The proposed convex hull algorithm, termed CudaChain, consists of two stages: (1) two rounds of preprocessing performed on the GPU and (2) the finalization of calculating the expected convex hull on the CPU. Interior points lying inside a quadrilateral formed by four extreme points are first discarded, and the remaining points are then distributed into several (typically four) subregions. Each subset of points is first sorted in parallel; then the second round of discarding is performed using SPA; and finally a simple chain is formed from the current remaining points. A simple polygon can easily be generated by directly connecting all the chains in the subregions. The expected convex hull of the input points is finally obtained by calculating the convex hull of the simple polygon. The library Thrust is used to realize the parallel sorting, reduction, and partitioning for better efficiency and simplicity. Experimental results show that: (1) SPA can very effectively detect and discard interior points; and (2) CudaChain achieves 5×-6× speedups over the well-known Qhull implementation for 20M points.
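
    The first discarding round described above is simple to sketch serially. This is a hedged illustration of the idea, not the paper's CUDA code; it assumes the four extreme points (min-x, min-y, max-x, max-y) are distinct so the quadrilateral is non-degenerate.

```python
def discard_interior(points):
    """Drop points strictly inside the quadrilateral of the four extreme
    points; such points cannot lie on the convex hull. Boundary points
    are conservatively kept."""
    quad = [min(points),                         # leftmost
            min(points, key=lambda p: p[1]),     # bottommost
            max(points),                         # rightmost
            max(points, key=lambda p: p[1])]     # topmost -> CCW order
    def strictly_inside(pt):
        for i in range(4):
            a, b = quad[i], quad[(i + 1) % 4]
            # Must be strictly left of every CCW edge to be interior.
            if (b[0]-a[0])*(pt[1]-a[1]) - (b[1]-a[1])*(pt[0]-a[0]) <= 0:
                return False
        return True
    return [p for p in points if not strictly_inside(p)]
```

    On large random point sets, this single pass typically removes the vast majority of candidates before any sorting or hull construction happens.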

  1. Cholesteric microlenses and micromirrors in the beetle cuticle and in synthetic oligomer films: a comparative study

    NASA Astrophysics Data System (ADS)

    Agez, Gonzague; Bayon, Chloé; Mitov, Michel

    2017-02-01

    The polygonal texture in cholesteric liquid crystals consists of an array of contiguous polygonal cells. The optical response and structure of this texture are investigated in the cuticle of the beetle Chrysina gloriosa and in synthetic oligomer films. In the insect carapace, the polygons are concave and behave as spherical micromirrors, whereas in the synthetic films they are convex and behave as diverging microlenses. The characteristics of light focusing (spot, donut or continuum background) are highly tunable with the wavelength and polarization of the incident light.

  2. On Viviani's Theorem and Its Extensions

    ERIC Educational Resources Information Center

    Abboud, Elias

    2010-01-01

    Viviani's theorem states that the sum of distances from any point inside an equilateral triangle to its sides is constant. Here, in an extension of this result, we show, using linear programming, that any convex polygon can be divided into parallel line segments on which the sum of the distances to the sides of the polygon is constant. Let us say…
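
    Viviani's classical statement is easy to check numerically. The sketch below (hypothetical helper name) sums perpendicular distances from random interior points of an equilateral triangle and recovers the altitude each time.

```python
import math

def sum_of_side_distances(pt, tri):
    """Sum of perpendicular distances from pt to the lines supporting the sides."""
    total = 0.0
    for i in range(3):
        a, b = tri[i], tri[(i + 1) % 3]
        # Distance from pt to the line through a and b: |cross| / edge length.
        num = abs((b[0]-a[0])*(a[1]-pt[1]) - (b[1]-a[1])*(a[0]-pt[0]))
        total += num / math.hypot(b[0]-a[0], b[1]-a[1])
    return total

# Equilateral triangle with side 2: the altitude is sqrt(3).
TRIANGLE = [(0.0, 0.0), (2.0, 0.0), (1.0, math.sqrt(3.0))]
```

    The constant sum equals the triangle's altitude, which is the quantity the extension in the record generalizes to level sets inside arbitrary convex polygons.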

  3. Do Power Lines and Protected Areas Present a Catch-22 Situation for Cape Vultures (Gyps coprotheres)?

    PubMed Central

    Phipps, W. Louis; Wolter, Kerri; Michael, Michael D.; MacTavish, Lynne M.; Yarnell, Richard W.

    2013-01-01

    Cape vulture Gyps coprotheres populations have declined across their range due to multiple anthropogenic threats. Their susceptibility to fatal collisions with the expanding power line network and the prevalence of carcasses contaminated with illegal poisons and other threats outside protected areas are thought to be the primary drivers of declines in southern Africa. We used GPS-GSM units to track the movements and delineate the home ranges of five adult (mean ±SD minimum convex polygon area  =  121,655±90,845 km2) and four immature (mean ±SD minimum convex polygon area  =  492,300±259,427 km2) Cape vultures to investigate the influence of power lines and their use of protected areas. The vultures travelled more than 1,000 km from the capture site and collectively entered five different countries in southern Africa. Their movement patterns and core foraging ranges were closely associated with the spatial distribution of transmission power lines and we present evidence that the construction of power lines has allowed the species to extend its range to areas previously devoid of suitable perches. The distribution of locations of known Cape vulture mortalities caused by interactions with power lines corresponded to the core ranges of the tracked vultures. Although some of the vultures regularly roosted at breeding colonies located inside protected areas the majority of foraging activity took place on unprotected farmland. Their ability to travel vast distances very quickly and the high proportion of time they spend in the vicinity of power lines and outside protected areas make Cape vultures especially vulnerable to negative interactions with the expanding power line network and the full range of threats across the region. Co-ordinated cross-border conservation strategies beyond the protected area network will therefore be necessary to ensure the future survival of threatened vultures in Africa. PMID:24137496

  4. Planning minimum-energy paths in an off-road environment with anisotropic traversal costs and motion constraints. Doctoral thesis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ross, R.S.

    1989-06-01

    For a vehicle operating across arbitrarily-contoured terrain, finding the most fuel-efficient route between two points can be viewed as a high-level global path-planning problem with traversal costs and stability dependent on the direction of travel (anisotropic). The problem assumes a two-dimensional polygonal map of homogeneous-cost regions for terrain representation, constructed from elevation information. The anisotropic energy cost of vehicle motion has a non-braking component dependent on horizontal distance, a braking component dependent on vertical distance, and a constant path-independent component. The behavior of minimum-energy paths is then proved to be restricted to a small but optimal set of traversal types. An optimal-path-planning algorithm, using a heuristic search technique, reduces the infinite number of paths between the start and goal points to a finite number by generating sequences of goal-feasible window lists from analyzing the polygonal map and applying pruning criteria. The pruning criteria consist of visibility analysis, heading analysis, and region-boundary constraints. Each goal-feasible window list specifies an associated convex optimization problem, and the best of all locally-optimal paths through the goal-feasible window lists is the globally-optimal path. These ideas have been implemented in a computer program, with results showing considerably better performance than the exponential average-case behavior predicted.

  5. CONSTRUCTION OF SCALAR AND VECTOR FINITE ELEMENT FAMILIES ON POLYGONAL AND POLYHEDRAL MESHES

    PubMed Central

    GILLETTE, ANDREW; RAND, ALEXANDER; BAJAJ, CHANDRAJIT

    2016-01-01

    We combine theoretical results from polytope domain meshing, generalized barycentric coordinates, and finite element exterior calculus to construct scalar- and vector-valued basis functions for conforming finite element methods on generic convex polytope meshes in dimensions 2 and 3. Our construction recovers well-known bases for the lowest order Nédélec, Raviart-Thomas, and Brezzi-Douglas-Marini elements on simplicial meshes and generalizes the notion of Whitney forms to non-simplicial convex polygons and polyhedra. We show that our basis functions lie in the correct function space with regards to global continuity and that they reproduce the requisite polynomial differential forms described by finite element exterior calculus. We present a method to count the number of basis functions required to ensure these two key properties. PMID:28077939

  6. CONSTRUCTION OF SCALAR AND VECTOR FINITE ELEMENT FAMILIES ON POLYGONAL AND POLYHEDRAL MESHES.

    PubMed

    Gillette, Andrew; Rand, Alexander; Bajaj, Chandrajit

    2016-10-01

    We combine theoretical results from polytope domain meshing, generalized barycentric coordinates, and finite element exterior calculus to construct scalar- and vector-valued basis functions for conforming finite element methods on generic convex polytope meshes in dimensions 2 and 3. Our construction recovers well-known bases for the lowest order Nédélec, Raviart-Thomas, and Brezzi-Douglas-Marini elements on simplicial meshes and generalizes the notion of Whitney forms to non-simplicial convex polygons and polyhedra. We show that our basis functions lie in the correct function space with regards to global continuity and that they reproduce the requisite polynomial differential forms described by finite element exterior calculus. We present a method to count the number of basis functions required to ensure these two key properties.

  7. Scalable Metropolis Monte Carlo for simulation of hard shapes

    NASA Astrophysics Data System (ADS)

    Anderson, Joshua A.; Eric Irrgang, M.; Glotzer, Sharon C.

    2016-07-01

    We design and implement a scalable hard particle Monte Carlo simulation toolkit (HPMC), and release it open source as part of HOOMD-blue. HPMC runs in parallel on many CPUs and many GPUs using domain decomposition. We employ BVH trees instead of cell lists on the CPU for fast performance, especially with large particle size disparity, and optimize inner loops with SIMD vector intrinsics on the CPU. Our GPU kernel proposes many trial moves in parallel on a checkerboard and uses a block-level queue to redistribute work among threads and avoid divergence. HPMC supports a wide variety of shape classes, including spheres/disks, unions of spheres, convex polygons, convex spheropolygons, concave polygons, ellipsoids/ellipses, convex polyhedra, convex spheropolyhedra, spheres cut by planes, and concave polyhedra. NVT and NPT ensembles can be run in 2D or 3D triclinic boxes. Additional integration schemes permit Frenkel-Ladd free energy computations and implicit depletant simulations. In a benchmark system of a fluid of 4096 pentagons, HPMC performs 10 million sweeps in 10 min on 96 CPU cores on XSEDE Comet. The same simulation would take 7.6 h in serial. HPMC also scales to large system sizes, and the same benchmark with 16.8 million particles runs in 1.4 h on 2048 GPUs on OLCF Titan.

  8. Generating a Simulated Fluid Flow over a Surface Using Anisotropic Diffusion

    NASA Technical Reports Server (NTRS)

    Rodriguez, David L. (Inventor); Sturdza, Peter (Inventor)

    2016-01-01

    A fluid-flow simulation over a computer-generated surface is generated using a diffusion technique. The surface is comprised of a surface mesh of polygons. A boundary-layer fluid property is obtained for a subset of the polygons of the surface mesh. A gradient vector is determined for a selected polygon, the selected polygon belonging to the surface mesh but not one of the subset of polygons. A maximum and minimum diffusion rate is determined along directions determined using the gradient vector corresponding to the selected polygon. A diffusion-path vector is defined between a point in the selected polygon and a neighboring point in a neighboring polygon. An updated fluid property is determined for the selected polygon using a variable diffusion rate, the variable diffusion rate based on the minimum diffusion rate, maximum diffusion rate, and the gradient vector.

  9. Applying Workspace Limitations in a Velocity-Controlled Robotic Mechanism

    NASA Technical Reports Server (NTRS)

    Abdallah, Muhammad E. (Inventor); Hargrave, Brian (Inventor); Platt, Robert J., Jr. (Inventor)

    2014-01-01

    A robotic system includes a robotic mechanism responsive to velocity control signals, and a permissible workspace defined by a convex-polygon boundary. A host machine determines a position of a reference point on the mechanism with respect to the boundary, and includes an algorithm for enforcing the boundary by automatically shaping the velocity control signals as a function of the position, thereby providing smooth and unperturbed operation of the mechanism along the edges and corners of the boundary. The algorithm is suited for application with higher speeds and/or external forces. A host machine includes an algorithm for enforcing the boundary by shaping the velocity control signals as a function of the reference point position, and a hardware module for executing the algorithm. A method for enforcing the convex-polygon boundary is also provided that shapes a velocity control signal via a host machine as a function of the reference point position.

  10. Origami silicon optoelectronics for hemispherical electronic eye systems.

    PubMed

    Zhang, Kan; Jung, Yei Hwan; Mikael, Solomon; Seo, Jung-Hun; Kim, Munho; Mi, Hongyi; Zhou, Han; Xia, Zhenyang; Zhou, Weidong; Gong, Shaoqin; Ma, Zhenqiang

    2017-11-24

    Digital image sensors in hemispherical geometries offer unique imaging advantages over their planar counterparts, such as wide field of view and low aberrations. Deforming miniature semiconductor-based sensors with high spatial resolution into such a format is challenging. Here we report a simple origami approach for fabricating single-crystalline silicon-based focal plane arrays and artificial compound eyes that have hemisphere-like structures. Convex isogonal polyhedral concepts allow certain combinations of polygons to fold into spherical formats. Using each polygon block as a sensor pixel, the silicon-based devices are shaped into maps of a truncated icosahedron, fabricated on flexible sheets, and further folded into either a concave or convex hemisphere. These two electronic-eye prototypes represent simple, low-cost methods with flexible optimization parameters in terms of pixel density and design. The results demonstrated in this work, combined with the miniature size and simplicity of the design, establish a practical technology for integration with conventional electronic devices.

  11. Generating a Simulated Fluid Flow Over an Aircraft Surface Using Anisotropic Diffusion

    NASA Technical Reports Server (NTRS)

    Rodriguez, David L. (Inventor); Sturdza, Peter (Inventor)

    2013-01-01

    A fluid-flow simulation over a computer-generated aircraft surface is generated using a diffusion technique. The surface is comprised of a surface mesh of polygons. A boundary-layer fluid property is obtained for a subset of the polygons of the surface mesh. A pressure-gradient vector is determined for a selected polygon, the selected polygon belonging to the surface mesh but not to the subset of polygons. Maximum and minimum diffusion rates are determined along directions derived from the pressure-gradient vector corresponding to the selected polygon. A diffusion-path vector is defined between a point in the selected polygon and a neighboring point in a neighboring polygon. An updated fluid property is determined for the selected polygon using a variable diffusion rate, the variable diffusion rate based on the minimum diffusion rate, the maximum diffusion rate, and the angular difference between the diffusion-path vector and the pressure-gradient vector.
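The final step, blending between the minimum and maximum diffusion rates by the angle between the diffusion-path vector and the pressure-gradient vector, might look like the sketch below. The cosine-squared blend and the choice to assign the maximum rate to the aligned direction are assumptions for illustration, not the patent's actual formula.

```python
import math

def variable_diffusion_rate(d_min, d_max, path_vec, grad_vec):
    """Blend between minimum and maximum diffusion rates according to the
    angle between the diffusion-path vector and the pressure-gradient vector.
    The cos^2 blend (maximum rate when the vectors align) is an assumption."""
    dot = path_vec[0] * grad_vec[0] + path_vec[1] * grad_vec[1]
    norm = math.hypot(*path_vec) * math.hypot(*grad_vec)
    cos2 = (dot / norm) ** 2                     # cos^2 of angular difference
    return d_min + (d_max - d_min) * cos2
```

A path aligned with the gradient then diffuses at the maximum rate, a perpendicular path at the minimum rate, with a smooth transition in between.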

  12. Geometric Transforms for Fast Geometric Algorithms.

    DTIC Science & Technology

    1979-12-01

    representation is not an important issue in a real RAM.) For more complicated geometrical objects such as polygons, polyhedrons, and Voronoi diagrams the issue...of N disks can be represented as a convex polyhedron in O(N log N) time. Proof: We illustrate the construction in Figure 3-11. We first embed the N...or intersection of N arbitrary planar disks by a convex polyhedron in O(N log N) time. Figure 3-11: General case for intersection or union of N

  13. Population size, survival, growth, and movements of Rana sierrae

    USGS Publications Warehouse

    Fellers, Gary M.; Kleeman, Patrick M.; Miller, David A. W.; Halstead, Brian J.; Link, William

    2013-01-01

    Based on 2431 captures of 757 individual frogs over a 9-yr period, we found that the population of R. sierrae in one meadow–stream complex in Yosemite National Park ranged from an estimated 45 to 115 adult frogs. Rana sierrae at our relatively low elevation site (2200 m) grew at a fast rate (K = 0.73–0.78), had high overwintering survival rates (44.6–95%), lived a long time (up to 16 yr), and tended to be fairly sedentary during the summer (100% minimum convex polygon annual home ranges of 139 m2) but had low year-to-year site fidelity. Even though the amphibian chytrid fungus (Batrachochytrium dendrobatidis, Bd) has been present in the population for at least 13 yr, there was no clear downward trend as might be expected from reports of R. sierrae population declines associated with Bd or from reports of widespread population decline of R. sierrae throughout its range.
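The 100% minimum convex polygon home range used here is simply the convex hull of all relocation points. A minimal sketch (Andrew's monotone chain plus the shoelace area formula; the coordinates below are hypothetical relocations in metres):

```python
def cross(o, a, b):
    return (a[0]-o[0])*(b[1]-o[1]) - (a[1]-o[1])*(b[0]-o[0])

def convex_hull(points):
    """Andrew's monotone chain; returns hull vertices in CCW order."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

def mcp_area(points):
    """Area of the 100% minimum convex polygon (shoelace formula)."""
    h = convex_hull(points)
    s = sum(h[i][0]*h[(i+1) % len(h)][1] - h[(i+1) % len(h)][0]*h[i][1]
            for i in range(len(h)))
    return abs(s) / 2.0

# Hypothetical relocations (m); interior points do not affect the MCP
locs = [(0, 0), (12, 0), (12, 10), (0, 10), (6, 5), (3, 7)]
```

Only the outermost fixes define the polygon, which is why MCP estimates are sensitive to outlying locations.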

  14. Movements and home ranges of mountain plovers raising broods in three Colorado landscapes

    USGS Publications Warehouse

    Dreitz, V.J.; Wunder, Michael B.; Knopf, F.L.

    2005-01-01

    We report movements and home-range sizes of adult Mountain Plovers (Charadrius montanus) with broods on rangeland, agricultural fields, and prairie dog habitats in eastern Colorado. Estimates of home range size (95% fixed kernel) were similar across the three habitats: rangeland (146.1 ha ± 101.5), agricultural fields (131.6 ha ± 74.4), and prairie dog towns (243.3 ha ± 366.3). Our minimum convex polygon estimates of home-range size were comparable to those on rangeland reported by Knopf and Rupert (1996). In addition, movements—defined as the distance between consecutive locations of adults with broods—were equivalent across habitats. However, our findings on prairie dog habitat suggest that home-range size for brood rearing may be related to whether the prairie dog habitat is in a complex of towns or in an isolated town.

  15. Areal Feature Matching Based on Similarity Using Critic Method

    NASA Astrophysics Data System (ADS)

    Kim, J.; Yu, K.

    2015-10-01

    In this paper, we propose an areal feature matching method that can be applied to many-to-many matching, that is, matching a single polygon with an aggregate of several polygons, or two such aggregates, with less user intervention. To this end, an affine transformation is applied to the two datasets using polygon pairs that share the same building name. The two datasets are then overlaid, and intersecting polygon pairs are selected as candidate matching pairs. If many polygons intersect, we calculate the inclusion function between them; when its value exceeds 0.4, the polygons are aggregated into a single polygon using a convex hull. Finally, the shape similarity between candidate pairs is calculated as a linear sum of the position similarity, shape-ratio similarity, and overlap similarity, with weights computed by the CRITIC method. Candidate pairs whose shape similarity exceeds 0.7 are determined to be matching pairs. We applied the method to two geospatial datasets in South Korea: the digital topographic map and the KAIS map. Visual evaluation showed that polygons were well detected by the proposed method, and statistical evaluation indicates that the method is accurate on our test dataset, with a high F-measure of 0.91.

  16. Effect of available space and previous contact in the social integration of Saint Croix and Suffolk ewes.

    PubMed

    Orihuela, A; Averós, X; Solano, J; Clemente, N; Estevez, I

    2016-03-01

    Reproduction in tropical sheep is not affected by season, whereas the reproductive cycle of temperate-climate breeds such as Suffolk depends on the photoperiod. Close contact with tropical ewes during the anestrous period might induce Suffolk ewes to cycle, making the use of artificial light or hormonal treatments unnecessary. However, the integration of both breeds within the social group would be necessary to trigger this effect, and so the aim of the experiment was to determine the speed of integration of 2 groups of Saint Croix and Suffolk ewes into a single flock, according to space allowance and previous experience. For this, 6 groups of 10 ewes (half from each breed), housed at 2 or 4 m2/ewe (3 groups/treatment) and with or without previous contact with the other breed, were monitored for 3 d. On each observation day, the behavior, movement, and use of space of the ewes were recorded for 10 min at 1-h intervals between 0900 and 1400 h. Generalized linear mixed models were used to test the effects of breed, space allowance, and previous experience on behavior, movement, and use of space. Net distances, interbreed farthest neighbor distance, mean interbreed distance, and walking frequencies were greater at 4 m2/ewe (P < 0.05). Intrabreed nearest neighbor, mean intrabreed neighbor, and interbreed nearest neighbor distances and minimum convex polygons at 4 m2/ewe were greatest for Saint Croix ewes, whereas the opposite was found for lying down (P < 0.05). Experienced ewes showed larger intrabreed nearest neighbor distances, minimum convex polygons, and home range overlapping (P < 0.05). Experienced ewes at 4 m2/ewe showed the longest total distances and step lengths and the greatest movement activity (P < 0.05). Experienced ewes walked longer total distances during Days 1 and 2 (P < 0.05). Lying down frequency was greater on Day 3 than on Day 1 (P < 0.05), and Suffolk ewes kept longer interindividual distances during Day 1 (P < 0.05). After 3 d of cohabitation, Suffolk and Saint Croix ewes did not fully integrate into a cohesive flock, with each breed displaying specific behavioral patterns. Decreasing space allowance and previous experience provided limited benefit to group cohesion. Longer cohabitation periods might result in complete integration, although practical implementation might be difficult.

  17. Functional Data Approximation on Bounded Domains using Polygonal Finite Elements.

    PubMed

    Cao, Juan; Xiao, Yanyang; Chen, Zhonggui; Wang, Wenping; Bajaj, Chandrajit

    2018-07-01

    We construct and analyze piecewise approximations of functional data on arbitrary 2D bounded domains using generalized barycentric finite elements, and particularly quadratic serendipity elements for planar polygons. We compare approximation qualities (precision/convergence) of these partition-of-unity finite elements through numerical experiments, using Wachspress coordinates, natural neighbor coordinates, Poisson coordinates, mean value coordinates, and quadratic serendipity bases over polygonal meshes on the domain. For a convex n-sided polygon, the quadratic serendipity elements achieve quadratic convergence with 2n basis functions, associated in a Lagrange-like fashion with each vertex and each edge midpoint, rather than the usual n(n+1)/2 basis functions. Two greedy algorithms are proposed to generate Voronoi meshes for adaptive functional/scattered data approximations. Experimental results show space/accuracy advantages for these quadratic serendipity finite elements on polygonal domains versus traditional finite elements over simplicial meshes. Polygonal meshes and parameter coefficients of the quadratic serendipity finite elements obtained by our greedy algorithms can be further refined using an L2-optimization to improve the piecewise functional approximation. We conduct several experiments to demonstrate the efficacy of our algorithm for modeling features/discontinuities in functional data/image approximation.

  18. Weighted straight skeletons in the plane☆

    PubMed Central

    Biedl, Therese; Held, Martin; Huber, Stefan; Kaaser, Dominik; Palfrader, Peter

    2015-01-01

    We investigate weighted straight skeletons from a geometric, graph-theoretical, and combinatorial point of view. We start with a thorough definition and shed light on some ambiguity issues in the procedural definition. We investigate the geometry, combinatorics, and topology of faces and the roof model, and we discuss in which cases a weighted straight skeleton is connected. Finally, we show that the weighted straight skeleton of even a simple polygon may be non-planar and may contain cycles, and we discuss under which restrictions on the weights and/or the input polygon the weighted straight skeleton still behaves similarly to its unweighted counterpart. In particular, we obtain a non-procedural description and a linear-time construction algorithm for the straight skeleton of strictly convex polygons with arbitrary weights. PMID:25648398

  19. Mathematics without... Irregular Polygons

    ERIC Educational Resources Information Center

    McLeay, Heather

    2007-01-01

    In this article, the author reflects on her session at the 2006 conference, including the learning styles and strategies she observed. In her session, they explored some of the more unusual aspects of convex polyhedra (with regular faces), including the notions of "valence" and "species". Although the session was about shape and space, there was…

  20. Extended Multiscale Image Segmentation for Castellated Wall Management

    NASA Astrophysics Data System (ADS)

    Sakamoto, M.; Tsuguchi, M.; Chhatkuli, S.; Satoh, T.

    2018-05-01

    Castellated walls are tangible cultural heritage and require regular maintenance to preserve their original state. For demolition and repair work on a castellated wall, it is necessary to identify the individual stones constituting the wall. However, conventional approaches using laser scanning or integrated circuit (IC) tags are very time-consuming and cumbersome. We therefore propose an efficient approach to castellated wall management based on an extended multiscale image segmentation technique. In this approach, individual stone polygons are extracted from an image of the castellated wall and associated with a stone management database. First, to improve the extraction of individual stone polygons, which have convex shapes, we developed a new shape criterion named convex hull fitness for the image segmentation process and confirmed its effectiveness. Next, we discussed the stone management database and its beneficial utilization in the repair work of castellated walls. Subsequently, we proposed irregular-shape indexes that are helpful for evaluating stone shape and the stability of the stone arrangement in castellated walls. Finally, we demonstrated an application of the proposed method to a typical castellated wall in Japan and confirmed that the stone polygons can be extracted at an acceptable level. Further, the condition of the shapes and the layout of the stones could be visually judged with the proposed irregular-shape indexes.

  1. Optical touch sensing: practical bounds for design and performance

    NASA Astrophysics Data System (ADS)

    Bläßle, Alexander; Janbek, Bebart; Liu, Lifeng; Nakamura, Kanna; Nolan, Kimberly; Paraschiv, Victor

    2013-02-01

    Touch-sensitive screens are used in many applications ranging in size from smartphones and tablets to display walls and collaborative surfaces. In this study, we consider optical touch sensing, a technology best suited for large-scale touch surfaces. Optical touch sensing utilizes cameras and light sources placed along the edge of the display. Within this framework, we first find a sufficient number of cameras for identifying a convex polygon touching the screen, using a continuous light source on the boundary of a circular domain. We then find the number of cameras necessary to distinguish between two circular objects in a circular or rectangular domain. Finally, we use Matlab to simulate the polygonal mesh formed by distributing cameras and light sources on a circular domain. From this we compute the number of polygons in the mesh and the maximum polygon area, which together characterize the accuracy of the configuration. We close with a summary, conclusions, and pointers to possible future research directions.

  2. How Spherical Are the Archimedean Solids and Their Duals?

    ERIC Educational Resources Information Center

    Aravind, P. K.

    2011-01-01

    The Isoperimetric Quotient, or IQ, introduced by G. Polya, characterizes the degree of sphericity of a convex solid. This paper obtains closed form expressions for the surface area and volume of any Archimedean polyhedron in terms of the integers specifying the type and number of regular polygons occurring around each vertex. Similar results are…

  3. Real-Time Generation of the Footprints both on Floor and Ground

    NASA Astrophysics Data System (ADS)

    Hirano, Yousuke; Tanaka, Toshimitsu; Sagawa, Yuji

    This paper presents a real-time method for generating varied footprints in relation to the state of walking; the method is further extended to cover both hard floors and soft ground. Results of the previous method were not very realistic, because it places the same simple footprint along the motion path. Our method instead runs filters on the original footprint pattern on the GPU and then gradates the intensity of the pattern in two directions to create partially dark footprints, with the filter and gradation parameters varied by movement speed and direction. The pattern is mapped onto a polygon; if the walker is pigeon-toed or bandy-legged, the polygon is rotated inward or outward, respectively, before being placed on the floor. Footprints on soft ground are concavities and convexities caused by walking, so the original footprint pattern for ground is defined as a height map. The height map is modified using the filter and gradation operations developed for floor footprints, and is then converted to a bump map for fast display of the concavities and convexities of the footprints.

  4. Research on allocation efficiency of the daisy chain allocation algorithm

    NASA Astrophysics Data System (ADS)

    Shi, Jingping; Zhang, Weiguo

    2013-03-01

    With the improvement of aircraft performance in reliability, maneuverability, and survivability, the number of control effectors has increased considerably. How to distribute the three-axis moments among the control surfaces reasonably becomes an important problem. The daisy chain method is simple and easy to implement in the design of the allocation system, but it cannot solve the allocation problem over the entire attainable moment subset. For the lateral-directional allocation problem, the allocation efficiency of the daisy chain can be measured directly by the area of its subset of attainable moments. Because of the nonlinear allocation characteristic, the attainable moment subset of the daisy-chain method is a complex non-convex polygon whose area is difficult to compute directly. By analyzing the two-dimensional allocation problem with a "micro-element" idea, a numerical algorithm is proposed to compute the area of the non-convex polygon. To improve the allocation efficiency, a genetic algorithm with the allocation efficiency as the fitness function is proposed to find the best pseudo-inverse matrix.
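The "micro-element" idea, summing the contributions of many small cells that fall inside the region, can be sketched for an arbitrary simple (possibly non-convex) polygon with a grid of cells and an even-odd point-in-polygon test. The grid resolution and the L-shaped example below are illustrative, not taken from the paper.

```python
def point_in_polygon(x, y, poly):
    """Even-odd (ray-crossing) test for a simple polygon."""
    inside = False
    n = len(poly)
    for i in range(n):
        (x1, y1), (x2, y2) = poly[i], poly[(i + 1) % n]
        if (y1 > y) != (y2 > y):                 # edge straddles the ray's y
            x_int = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x_int > x:                        # crossing to the right
                inside = not inside
    return inside

def micro_element_area(poly, h=0.01):
    """Approximate area by summing h*h cells whose centres lie inside."""
    xs = [p[0] for p in poly]
    ys = [p[1] for p in poly]
    x0, x1, y0, y1 = min(xs), max(xs), min(ys), max(ys)
    count = 0
    y = y0 + h / 2
    while y < y1:
        x = x0 + h / 2
        while x < x1:
            if point_in_polygon(x, y, poly):
                count += 1
            x += h
        y += h
    return count * h * h

# Non-convex L-shape with exact area 3.0
L_shape = [(0, 0), (2, 0), (2, 1), (1, 1), (1, 2), (0, 2)]
```

The accuracy is limited by the cell size h; for the exact area of a simple polygon with known vertices, the shoelace formula would be used instead.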

  5. Numerical procedure to determine geometric view factors for surfaces occluded by cylinders

    NASA Technical Reports Server (NTRS)

    Sawyer, P. L.

    1978-01-01

    A numerical procedure was developed to determine geometric view factors between connected infinite strips occluded by any number of infinite circular cylinders. The procedure requires a two-dimensional cross-sectional model of the configuration of interest. The two-dimensional model consists of a convex polygon enclosing any number of circles. Each side of the polygon represents one strip, and each circle represents a circular cylinder. A description and listing of a computer program based on this procedure are included in this report. The program calculates geometric view factors between individual strips and between individual strips and the collection of occluding cylinders.

  6. Method and Apparatus for Powered Descent Guidance

    NASA Technical Reports Server (NTRS)

    Acikmese, Behcet (Inventor); Blackmore, James C. L. (Inventor); Scharf, Daniel P. (Inventor)

    2013-01-01

    A method and apparatus for landing a spacecraft having thrusters with non-convex constraints is described. The method first computes a solution to a minimum-error landing problem with convexified constraints, then applies that solution to a minimum-fuel landing problem with convexified constraints. The result is a minimum-error, minimum-fuel solution that is also a feasible solution to the analogous system with non-convex thruster constraints.

  7. A Walking Method for Non-Decomposition Intersection and Union of Arbitrary Polygons and Polyhedrons

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Graham, M.; Yao, J.

    We present a method for computing the intersection and union of non-convex polyhedrons without decomposition in O(n log n) time, where n is the total number of faces of both polyhedrons. We include an accompanying Python package which addresses many of the practical issues associated with implementation and serves as a proof of concept. The key to the method is that by considering the edges of the original objects and the intersections between faces as walking routes, we can efficiently find the boundary of the intersection of arbitrary objects using directional walks, thus handling the concave case in a natural manner. The method also easily extends to plane slicing and non-convex polyhedron unions, and both the polyhedron and its constituent faces may be non-convex.

  8. Accuracy of estimating wolf summer territories by daytime locations

    USGS Publications Warehouse

    Demma, D.J.; Mech, L.D.

    2011-01-01

    We used locations of 6 wolves (Canis lupus) in Minnesota from Global Positioning System (GPS) collars to compare day-versus-night locations to estimate territory size and location during summer. We employed both minimum convex polygon (MCP) and fixed kernel (FK) methods. We used two methods to partition GPS locations for day-versus-night home-range comparisons: (1) daytime = 0800-2000 h; nighttime = 2000-0800 h; and (2) sunup versus sundown. Regardless of location-partitioning method, mean area of daytime MCPs did not differ significantly from nighttime MCPs. Similarly, mean area of daytime FKs (95% probability contour) were not significantly different from nighttime FKs. FK core use areas (50% probability contour) did not differ between daytime and nighttime nor between sunup and sundown locations. We conclude that in areas similar to our study area day-only locations are adequate for describing the location, extent and core use areas of summer wolf territories by both MCP and FK methods. © 2011 American Midland Naturalist.

  10. The evaluation of alternate methodologies for land cover classification in an urbanizing area

    NASA Technical Reports Server (NTRS)

    Smekofski, R. M.

    1981-01-01

    The usefulness of LANDSAT in classifying land cover and in identifying and classifying land use change was investigated using an urbanizing area as the study area. The primary focus of the study was to determine the best technique for classification. The many computer-assisted techniques available to analyze LANDSAT data were evaluated. Techniques of statistical training (polygons from CRT, unsupervised clustering, polygons from digitizer, and binary masks) were tested with minimum distance to the mean, maximum likelihood, and canonical analysis with minimum distance to the mean classifiers. The twelve output images were compared to photointerpreted samples, ground-verified samples, and a current land use database. Results indicate that for a reconnaissance inventory, unsupervised training with the canonical analysis-minimum distance classifier is the most efficient. If more detailed ground truth and ground verification are available, polygons-from-digitizer training with canonical analysis-minimum distance is more accurate.

  11. Preconditioning 2D Integer Data for Fast Convex Hull Computations.

    PubMed

    Cadenas, José Oswaldo; Megson, Graham M; Luengo Hendriks, Cris L

    2016-01-01

    In order to accelerate computing the convex hull on a set of n points, a heuristic procedure is often applied to reduce the number of points to a set of s points, s ≤ n, which also contains the same hull. We present an algorithm to precondition 2D data with integer coordinates bounded by a box of size p × q before building a 2D convex hull, with three distinct advantages. First, we prove that under the condition min(p, q) ≤ n the algorithm executes in time within O(n); second, no explicit sorting of data is required; and third, the reduced set of s points forms a simple polygonal chain and thus can be directly pipelined into an O(n) time convex hull algorithm. This paper empirically evaluates and quantifies the speedup gained by preconditioning a set of points by a method based on the proposed algorithm before using common convex hull algorithms to build the final hull. A speedup factor of at least four is consistently found from experiments on various datasets when the condition min(p, q) ≤ n holds; the smaller the ratio min(p, q)/n is in the dataset, the greater the speedup factor achieved.
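The general pattern, cheaply discarding points that cannot be hull vertices before running a full hull algorithm, can be illustrated with the classic extreme-point (Akl-Toussaint-style) throw-away heuristic. Note this is not the paper's integer-grid method; it is the same idea in its simplest form, and the sketch ignores degenerate cases such as duplicated extreme points.

```python
def cross(o, a, b):
    return (a[0]-o[0])*(b[1]-o[1]) - (a[1]-o[1])*(b[0]-o[0])

def precondition(points):
    """Discard points strictly inside the quadrilateral spanned by the four
    axis-extreme points; the surviving points contain the same convex hull.
    Degenerate quads (duplicated extremes) are not handled in this sketch."""
    quad = [min(points, key=lambda p: p[0]),   # leftmost
            min(points, key=lambda p: p[1]),   # bottom
            max(points, key=lambda p: p[0]),   # rightmost
            max(points, key=lambda p: p[1])]   # top (CCW order)

    def strictly_inside(p):
        return all(cross(quad[i], quad[(i + 1) % 4], p) > 0 for i in range(4))

    return [p for p in points if not strictly_inside(p)]

pts = [(0, 5), (5, 0), (10, 5), (5, 10), (5, 5), (4, 6), (0.5, 9.9)]
```

Any exact hull algorithm run on the reduced set returns the same hull as on the full set, but touches far fewer points when most of the input is interior.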

  12. High-order polygonal discontinuous Petrov-Galerkin (PolyDPG) methods using ultraweak formulations

    NASA Astrophysics Data System (ADS)

    Vaziri Astaneh, Ali; Fuentes, Federico; Mora, Jaime; Demkowicz, Leszek

    2018-04-01

    This work represents the first endeavor in using ultraweak formulations to implement high-order polygonal finite element methods via the discontinuous Petrov-Galerkin (DPG) methodology. Ultraweak variational formulations are nonstandard in that all the weight of the derivatives lies in the test space, while most of the trial space can be chosen as copies of $L^2$-discretizations that need not be continuous across adjacent elements. Additionally, the test spaces are broken along the mesh interfaces. This allows one to construct conforming polygonal finite element methods, termed here PolyDPG methods, by defining most spaces by restriction of a bounding triangle or box to the polygonal element. The only variables that require nontrivial compatibility across elements are the so-called interface or skeleton variables, which can be defined directly on the element boundaries. Unlike other high-order polygonal methods, PolyDPG methods do not require ad hoc stabilization terms thanks to the crafted stability of the DPG methodology. A proof of convergence of the form $h^p$ is provided and corroborated through several illustrative numerical examples. These include polygonal meshes with $n$-sided convex elements and with highly distorted concave elements, as well as the modeling of discontinuous material properties along an arbitrary interface that cuts a uniform grid. Since PolyDPG methods have a natural a posteriori error estimator, a polygonal adaptive strategy is developed and compared to standard adaptivity schemes based on constrained hanging nodes. This work is also accompanied by an open-source $\texttt{PolyDPG}$ software supporting polygonal and conventional elements.

  13. A 'range test' for determining scatterers with unknown physical properties

    NASA Astrophysics Data System (ADS)

    Potthast, Roland; Sylvester, John; Kusiak, Steven

    2003-06-01

    We describe a new scheme for determining the convex scattering support of an unknown scatterer when the physical properties of the scatterer are not known. The convex scattering support is a subset of the scatterer and provides information about its location and estimates of its shape. For convex polygonal scatterers the scattering support coincides with the scatterer and we obtain full shape reconstructions. The method is formulated for the reconstruction of scatterers from the far field pattern for one or a few incident waves. The method is non-iterative in nature and belongs to the type of recently derived generalized sampling schemes such as the 'no response test' of Luke-Potthast. The range test operates by testing whether it is possible to analytically continue a far field to the exterior of any test domain Ω_test. By intersecting the convex hulls of various test domains we can produce a minimal convex set, the convex scattering support, which must be contained in the convex hull of the support of any scatterer that produces that far field. The convex scattering support is calculated by testing the range of special integral operators for a sampling set of test domains. The numerical results can be used as an approximation for the support of the unknown scatterer. We prove convergence and regularity of the scheme and show numerical examples for sound-soft, sound-hard, and medium scatterers. We can apply the range test to non-convex scatterers as well. We can conclude that an Ω_test which passes the range test has a non-empty intersection with the infinity-support (the complement of the unbounded component of the complement of the support) of the true scatterer, but we cannot find a minimal set that must be contained therein.

  14. Affine invariants of convex polygons.

    PubMed

    Flusser, Jan

    2002-01-01

    In this correspondence, we prove that the affine invariants, for image registration and object recognition, proposed recently by Yang and Cohen (see ibid., vol.8, no.7, p.934-46, July 1999) are algebraically dependent. We show how to select an independent and complete set of the invariants. The use of this new set leads to a significant reduction of the computing complexity without decreasing the discrimination power.

  15. Automatic pre-processing for an object-oriented distributed hydrological model using GRASS-GIS

    NASA Astrophysics Data System (ADS)

    Sanzana, P.; Jankowfsky, S.; Branger, F.; Braud, I.; Vargas, X.; Hitschfeld, N.

    2012-04-01

    Landscapes are very heterogeneous, which impacts the hydrological processes occurring in catchments and especially complicates the modeling of peri-urban catchments. Hydrological Response Units (HRUs), resulting from the intersection of different maps, such as land use, soil type, geology, and flow networks, allow these elements to be represented explicitly, preserving the natural and artificial contours of the different layers. These HRUs are used as the model mesh in some distributed object-oriented hydrological models, allowing the application of a topology-oriented approach. The connectivity between polygons and polylines provides a detailed representation of the water balance and overland flow in these distributed hydrological models, based on irregular hydro-landscape units. When computing fluxes between HRUs, geometrical parameters, such as the distance between the centroid of an HRU and the river network, or the length of the perimeter, can affect the realism of the calculated overland, sub-surface, and groundwater fluxes. It is therefore necessary to process the original model mesh to avoid these numerical problems. We present an automatic pre-processing implemented in the open-source GRASS-GIS software, using several Python scripts and existing algorithms such as the Triangle software. First, scripts were developed to improve the topology of the various elements, for example by snapping the river network to the closest contours. Areas derived from remote sensing, such as vegetation areas, have perimeters with many right angles, which were smoothed. Second, the algorithms address badly shaped elements of the model mesh, such as polygons with narrow shapes, markedly irregular contours, and/or centroids lying outside the polygon. To identify these elements we used shape descriptors; the convexity index was considered the best descriptor, with a threshold of 0.75. Segmentation procedures were implemented and applied with criteria of homogeneous slope, convexity of the elements, and maximum HRU area. These tasks used a triangulation approach based on the Triangle software to dissolve polygons according to the convexity-index criterion. The automatic pre-processing was applied to two peri-urban French catchments, the Mercier and Chaudanne, of 7.3 km2 and 4.1 km2, respectively. We show that the optimized mesh allows a substantial improvement of the overland flow pathways, because the segmentation procedure gives a more realistic representation of the drainage network. KEYWORDS: GRASS-GIS, Hydrological Response Units, Automatic processing, Peri-urban catchments, Geometrical Algorithms
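The convexity index used to flag badly shaped HRUs is, presumably, the ratio of a polygon's area to the area of its convex hull (1 for convex shapes, smaller for irregular ones); a minimal sketch under that assumption:

```python
def cross(o, a, b):
    return (a[0]-o[0])*(b[1]-o[1]) - (a[1]-o[1])*(b[0]-o[0])

def shoelace_area(poly):
    """Area of a simple polygon from its vertex list (shoelace formula)."""
    n = len(poly)
    s = sum(poly[i][0]*poly[(i+1) % n][1] - poly[(i+1) % n][0]*poly[i][1]
            for i in range(n))
    return abs(s) / 2.0

def convex_hull(points):
    """Andrew's monotone chain; returns hull vertices in CCW order."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

def convexity_index(poly):
    """Polygon area divided by convex-hull area; 1.0 for convex polygons."""
    return shoelace_area(poly) / shoelace_area(convex_hull(poly))

# A dart-shaped (non-convex) polygon: area 4, hull area 6 -> index ~0.667,
# which would be flagged under the 0.75 threshold mentioned above
dart = [(0, 0), (2, 1), (4, 0), (2, 3)]
```

Polygons scoring below the threshold would then be candidates for the segmentation step described in the abstract.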

  16. Preconditioning 2D Integer Data for Fast Convex Hull Computations

    PubMed Central

    2016-01-01

    In order to accelerate computing the convex hull on a set of n points, a heuristic procedure is often applied to reduce the number of points to a set of s points, s ≤ n, which also contains the same hull. We present an algorithm to precondition 2D data with integer coordinates bounded by a box of size p × q before building a 2D convex hull, with three distinct advantages. First, we prove that under the condition min(p, q) ≤ n the algorithm executes in time within O(n); second, no explicit sorting of data is required; and third, the reduced set of s points forms a simple polygonal chain and thus can be directly pipelined into an O(n) time convex hull algorithm. This paper empirically evaluates and quantifies the speedup gained by preconditioning a set of points by a method based on the proposed algorithm before using common convex hull algorithms to build the final hull. A speedup factor of at least four is consistently found from experiments on various datasets when the condition min(p, q) ≤ n holds; the smaller the ratio min(p, q)/n is in the dataset, the greater the speedup factor achieved. PMID:26938221
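
The preconditioning idea can be illustrated with the classic Akl-Toussaint heuristic (a stand-in, not the paper's O(n) integer algorithm, which additionally avoids sorting and emits a simple polygonal chain): points strictly inside the quadrilateral spanned by four extreme points can never be hull vertices and are discarded before the hull is built.

```python
def cross(o, a, b):
    """2D cross product of vectors o->a and o->b."""
    return (a[0]-o[0])*(b[1]-o[1]) - (a[1]-o[1])*(b[0]-o[0])

def akl_toussaint_filter(pts):
    """Discard points strictly inside the quadrilateral of the extreme
    points in x+y, x-y, -x+y and -x-y; the survivors contain the hull."""
    if len(pts) < 5:
        return list(pts)
    quad = [min(pts, key=lambda p: p[0] + p[1]),   # bottom-left extreme
            max(pts, key=lambda p: p[0] - p[1]),   # bottom-right extreme
            max(pts, key=lambda p: p[0] + p[1]),   # top-right extreme
            min(pts, key=lambda p: p[0] - p[1])]   # top-left extreme
    def inside(p):
        # strictly inside iff strictly left of every CCW quadrilateral edge
        return all(cross(quad[i], quad[(i + 1) % 4], p) > 0 for i in range(4))
    return [p for p in pts if not inside(p)]
```

Any convex hull algorithm then runs on the (usually much smaller) surviving set and produces the same hull.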

  17. Polygonal patterned peatlands of the White Sea islands

    NASA Astrophysics Data System (ADS)

    Kutenkov, S. A.; Kozhin, M. N.; Golovina, E. O.; Kopeina, E. I.; Stoikina, N. V.

    2018-03-01

    The summits and slopes of some islands along the northeastern and northern coasts of the White Sea are covered with dried-out peatlands. The thickness of the peat deposit is 30–80 cm, and it is separated by troughs into gently sloping polygonal peat blocks up to 20 m2 in size. On some northern islands the peat blocks have permafrost cores. The main components of the dried-out peatland vegetation are dwarf shrubs and lichens. The peat stratigraphy reveals two stages of peatland development. In the first stage, the islands were covered with wet cottongrass carpets, which followed the convex shape of the relief. In the second stage, they were occupied by xeromorphic vegetation. We suggest that these polygonal patterned peatlands are the remnants of blanket bogs, whose formation implies a much more humid climate in the historical past. The time of their active development was estimated at 1000–4000 BP from White Sea level changes and radiocarbon dates.

  18. Segmentation-based wavelet transform for still-image compression

    NASA Astrophysics Data System (ADS)

    Mozelle, Gerard; Seghier, Abdellatif; Preteux, Francoise J.

    1996-10-01

    In order to simultaneously address the functionalities required by MPEG-4, including content-based scalability, we introduce a segmentation-based wavelet transform (SBWT). SBWT takes into account both the mathematical properties of multiresolution analysis and the flexibility of region-based approaches for image compression. The associated methodology has two stages: 1) image segmentation into convex polygonal regions; 2) 2D wavelet transform of the signal corresponding to each region. In this paper, we have mathematically studied a method for constructing a multiresolution analysis (V_j(Omega))_{j in N} adapted to a polygonal region, which provides an adaptive region-based filtering. The explicit construction of scaling-function, pre-wavelet and orthonormal wavelet bases defined on a polygon is carried out using the theory of Toeplitz operators. The corresponding expression can be interpreted as a localization property which allows interior and boundary scaling functions to be defined. Concerning orthonormal wavelets and pre-wavelets, a similar expansion is obtained by taking advantage of the properties of the orthogonal projector P_{(V_j(Omega))^perp} from the space V_{j+1}(Omega) onto the space (V_j(Omega))^perp. Finally, the mathematical results provide a simple and fast algorithm adapted to polygonal regions.

  19. A path following algorithm for the graph matching problem.

    PubMed

    Zaslavskiy, Mikhail; Bach, Francis; Vert, Jean-Philippe

    2009-12-01

    We propose a convex-concave programming approach for the labeled weighted graph matching problem. The convex-concave programming formulation is obtained by rewriting the weighted graph matching problem as a least-square problem on the set of permutation matrices and relaxing it to two different optimization problems: a quadratic convex and a quadratic concave optimization problem on the set of doubly stochastic matrices. The concave relaxation has the same global minimum as the initial graph matching problem, but the search for its global minimum is also a hard combinatorial problem. We therefore construct an approximation of the concave problem solution by following a solution path of a convex-concave problem obtained by linear interpolation of the convex and concave formulations, starting from the convex relaxation. This method allows information on graph label similarities to be easily integrated into the optimization problem, and therefore supports labeled weighted graph matching. The algorithm is compared with some of the best performing graph matching methods on four data sets: simulated graphs, QAPLib, retina vessel images, and handwritten Chinese characters. In all cases, the results are competitive with the state of the art.

  20. Home range size and habitat-use pattern of nesting prairie falcons near oil developments in northeastern Wyoming

    Treesearch

    John R. Squires; Stanley H. Anderson; Robert Oakleaf

    1993-01-01

    Movements and habitat-use patterns were evaluated for a small population (n = 6 pairs) of Prairie Falcons (Falco mexicanus) nesting near Gillette, Wyoming. A total of 2462 falcon relocations was documented through telemetry. The average (n = 6) harmonic-mean 95%-contour home-range was 69 km2, whereas the average 75% contour was 26.6 km2. The convex polygon...

  1. Bounding uncertainty in volumetric geometric models for terrestrial lidar observations of ecosystems.

    PubMed

    Paynter, Ian; Genest, Daniel; Peri, Francesco; Schaaf, Crystal

    2018-04-06

    Volumetric models with known biases are shown to provide bounds for the uncertainty in estimations of volume for ecologically interesting objects, observed with a terrestrial laser scanner (TLS) instrument. Bounding cuboids, three-dimensional convex hull polygons, voxels, the Outer Hull Model and Square Based Columns (SBCs) are considered for their ability to estimate the volume of temperate and tropical trees, as well as geomorphological features such as bluffs and saltmarsh creeks. For temperate trees, supplementary geometric models are evaluated for their ability to bound the uncertainty in cylinder-based reconstructions, finding that coarser volumetric methods do not currently constrain volume meaningfully, but may be helpful with further refinement, or in hybridized models. Three-dimensional convex hull polygons consistently overestimate object volume, and SBCs consistently underestimate volume. Voxel estimations vary in their bias, due to the point density of the TLS data, and occlusion, particularly in trees. The response of the models to parametrization is analysed, observing unexpected trends in the SBC estimates for the drumlin dataset. Establishing that this result is due to the resolution of the TLS observations being insufficient to support the resolution of the geometric model, it is suggested that geometric models with predictable outcomes can also highlight data quality issues when they produce illogical results.
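
As a hedged illustration of the voxel estimator discussed above (an assumed sketch, not the authors' code): bin the TLS point cloud into cubic voxels and multiply the occupied-voxel count by the voxel volume. As the abstract notes, point density and occlusion bias this count.

```python
def voxel_volume_estimate(points, voxel_size):
    """Estimate object volume from a 3D point cloud: the number of
    distinct cubic voxels containing at least one point, times the
    volume of a single voxel. Floor division handles negative coords."""
    occupied = {tuple(int(c // voxel_size) for c in p) for p in points}
    return len(occupied) * voxel_size ** 3
```

Coarsening `voxel_size` inflates the estimate (an upper-bound tendency), while sparse or occluded sampling deflates it, which is why a single voxel estimate is not itself a reliable bound.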

  2. Bounding uncertainty in volumetric geometric models for terrestrial lidar observations of ecosystems

    PubMed Central

    Genest, Daniel; Peri, Francesco; Schaaf, Crystal

    2018-01-01

    Volumetric models with known biases are shown to provide bounds for the uncertainty in estimations of volume for ecologically interesting objects, observed with a terrestrial laser scanner (TLS) instrument. Bounding cuboids, three-dimensional convex hull polygons, voxels, the Outer Hull Model and Square Based Columns (SBCs) are considered for their ability to estimate the volume of temperate and tropical trees, as well as geomorphological features such as bluffs and saltmarsh creeks. For temperate trees, supplementary geometric models are evaluated for their ability to bound the uncertainty in cylinder-based reconstructions, finding that coarser volumetric methods do not currently constrain volume meaningfully, but may be helpful with further refinement, or in hybridized models. Three-dimensional convex hull polygons consistently overestimate object volume, and SBCs consistently underestimate volume. Voxel estimations vary in their bias, due to the point density of the TLS data, and occlusion, particularly in trees. The response of the models to parametrization is analysed, observing unexpected trends in the SBC estimates for the drumlin dataset. Establishing that this result is due to the resolution of the TLS observations being insufficient to support the resolution of the geometric model, it is suggested that geometric models with predictable outcomes can also highlight data quality issues when they produce illogical results. PMID:29503722

  3. Home range and residency status of Northern Goshawks breeding in Minnesota

    USGS Publications Warehouse

    Boal, C.W.; Andersen, D.E.; Kennedy, P.L.

    2003-01-01

    We used radio-telemetry to estimate breeding season home-range size of 17 male and 11 female Northern Goshawks (Accipiter gentilis) and combined home ranges of 10 pairs of breeding goshawks in Minnesota. Home-range sizes for male and female goshawks were 2593 and 2494 ha, respectively, using the minimum convex polygon, and 3927 and 5344 ha, respectively, using the 95% fixed kernel. Home ranges of male and female members of 10 goshawk pairs were smaller than combined home-range size of those pairs (mean difference = 3527 ha; 95% CI = 891 to 6164 ha). Throughout the nonbreeding season, the maximum distance from the nest recorded for all but one goshawk was 12.4 km. Goshawks breeding in Minnesota have home ranges similar to or larger than those reported in most other areas. Home-range overlap between members of breeding pairs was typically ≥50%, and both members of breeding pairs were associated with breeding home ranges year round. Goshawk management plans based on estimated home-range size of individual hawks may substantially underestimate the area actually used by a nesting pair.

  4. Generating GPS activity spaces that shed light upon the mobility habits of older adults: a descriptive analysis.

    PubMed

    Hirsch, Jana A; Winters, Meghan; Clarke, Philippa; McKay, Heather

    2014-12-12

    Measuring mobility is critical for understanding neighborhood influences on older adults' health and functioning. Global Positioning Systems (GPS) may represent an important opportunity to measure, describe, and compare mobility patterns in older adults. We generated three types of activity spaces (Standard Deviation Ellipse, Minimum Convex Polygon, Daily Path Area) using GPS data from 95 older adults in Vancouver, Canada. Calculated activity space areas and compactness were compared across sociodemographic and resource characteristics. Area measures derived from the three different approaches to developing activity spaces were highly correlated. Participants who were younger, lived in less walkable neighborhoods, had a valid driver's license, had access to a vehicle, or had physical support to go outside of their homes had larger activity spaces. Mobility space compactness measures also differed by sociodemographic and resource characteristics. This research extends the literature by demonstrating that GPS tracking can be used as a valuable tool to better understand the geographic mobility patterns of older adults. This study informs potential ways to maintain older adult independence by identifying factors that influence geographic mobility.
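
A Minimum Convex Polygon activity space is simply the convex hull of the GPS fixes, and its area is the mobility measure. A minimal sketch (illustrative, not the study's code; `mcp_area` is a hypothetical helper and coordinates are assumed to be projected, e.g. metres):

```python
def cross(o, a, b):
    """2D cross product of vectors o->a and o->b."""
    return (a[0]-o[0])*(b[1]-o[1]) - (a[1]-o[1])*(b[0]-o[0])

def convex_hull(pts):
    """Andrew's monotone-chain convex hull; returns hull vertices CCW."""
    pts = sorted(set(pts))
    if len(pts) <= 2:
        return pts
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

def mcp_area(fixes):
    """Area of the 100% Minimum Convex Polygon around GPS fixes."""
    hull = convex_hull(fixes)
    if len(hull) < 3:
        return 0.0
    s = 0.0
    for i in range(len(hull)):
        x1, y1 = hull[i]
        x2, y2 = hull[(i + 1) % len(hull)]
        s += x1 * y2 - x2 * y1
    return abs(s) / 2.0
```

The Standard Deviation Ellipse and Daily Path Area measures used in the study require additional statistics (covariance of fixes, buffered GPS tracks) and are not sketched here.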

  5. Home range characteristics of Mexican Spotted Owls in the canyonlands of Utah

    USGS Publications Warehouse

    Willey, D.W.; van Riper, Charles

    2007-01-01

    We studied home-range characteristics of adult Mexican Spotted Owls (Strix occidentalis lucida) in southern Utah. Twenty-eight adult owls were radio-tracked using a ground-based telemetry system during 1991-95. Five males and eight females molted tail feathers and dropped transmitters within 4 wk. We estimated cumulative home ranges for 15 Spotted Owls (12 males, 3 females). The mean estimate of cumulative home-range size was not statistically different between the minimum convex polygon and adaptive kernel (AK) 95% isopleth. Both estimators yielded relatively high SD, and male and female range sizes varied widely. For 12 owls tracked during both the breeding and nonbreeding seasons, the mean size of the AK 95% nonbreeding home range was 49% larger than the breeding home-range size. The median AK 75% home-range isopleth (272 ha) we observed was similar in size to Protected Activity Centers (PACs) recommended by a recovery team. Our results lend support to the PAC concept and we support continued use of PACs to conserve Spotted Owl habitat in Utah. © 2007 The Raptor Research Foundation, Inc.

  6. Location of planar targets in three space from monocular images

    NASA Technical Reports Server (NTRS)

    Cornils, Karin; Goode, Plesent W.

    1987-01-01

    Many pieces of existing and proposed space hardware that would be targets of interest for a telerobot can be represented as planar or near-planar surfaces. Examples include the biostack modules on the Long Duration Exposure Facility, the panels on Solar Max, large diameter struts, and refueling receptacles. Robust and temporally efficient methods for locating such objects with sufficient accuracy are therefore worth developing. Two techniques that derive the orientation and location of an object from its monocular image are discussed and the results of experiments performed to determine translational and rotational accuracy are presented. Both the quadrangle projection and elastic matching techniques extract three-space information using a minimum of four identifiable target points and the principles of the perspective transformation. The selected points must describe a convex polygon whose geometric characteristics are prespecified in a data base. The rotational and translational accuracy of both techniques was tested at various ranges. This experiment is representative of the sensing requirements involved in a typical telerobot target acquisition task. Both techniques determined target location to an accuracy sufficient for consistent and efficient acquisition by the telerobot.

  7. Convex central configurations for the n-body problem

    NASA Astrophysics Data System (ADS)

    Xia, Zhihong

    We give a simple proof of a classical result of MacMillan and Bartky (Trans. Amer. Math. Soc. 34 (1932) 838) which states that, for any four positive masses and any assigned order, there is a convex planar central configuration. Moreover, we show that the central configurations we find correspond to local minima of the potential function with fixed moment of inertia. This allows us to show that there are at least six local minimum central configurations for the planar four-body problem. We also show that for any assigned order of five masses, there is at least one convex spatial central configuration of local minimum type. Our method also applies to some other cases.

  8. Data decomposition method for parallel polygon rasterization considering load balancing

    NASA Astrophysics Data System (ADS)

    Zhou, Chen; Chen, Zhenjie; Liu, Yongxue; Li, Feixue; Cheng, Liang; Zhu, A.-xing; Li, Manchun

    2015-12-01

    It is essential to adopt parallel computing technology to rapidly rasterize massive polygon data. In parallel rasterization, it is difficult to design an effective data decomposition method. Conventional methods ignore load balancing of polygon complexity in parallel rasterization and thus fail to achieve high parallel efficiency. In this paper, a novel data decomposition method based on polygon complexity (DMPC) is proposed. First, four factors that possibly affect the rasterization efficiency were investigated. Then, a metric represented by the boundary number and raster pixel number in the minimum bounding rectangle was developed to calculate the complexity of each polygon. Using this metric, polygons were rationally allocated according to the polygon complexity, and each process could achieve balanced loads of polygon complexity. To validate the efficiency of DMPC, it was used to parallelize different polygon rasterization algorithms and tested on different datasets. Experimental results showed that DMPC could effectively parallelize polygon rasterization algorithms. Furthermore, the implemented parallel algorithms with DMPC could achieve good speedup ratios of at least 15.69 and generally outperformed conventional decomposition methods in terms of parallel efficiency and load balancing. In addition, the results showed that DMPC exhibited consistently better performance for different spatial distributions of polygons.
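
The decomposition strategy can be sketched as follows (hypothetical Python, not the paper's DMPC implementation: `complexity` stands in for the boundary-number times raster-pixel-number metric on the minimum bounding rectangle, and allocation uses a greedy largest-first, least-loaded heuristic):

```python
def complexity(polygon, cell):
    """Stand-in complexity metric: vertex count times the approximate
    pixel count of the polygon's bounding box at resolution `cell`."""
    xs = [p[0] for p in polygon]
    ys = [p[1] for p in polygon]
    px = max(1, round((max(xs) - min(xs)) / cell))
    py = max(1, round((max(ys) - min(ys)) / cell))
    return len(polygon) * px * py

def decompose(polygons, n_procs, cell=1.0):
    """Assign polygon indices to processes, heaviest polygon first,
    always to the currently least-loaded process (LPT heuristic)."""
    scored = sorted(((complexity(p, cell), i) for i, p in enumerate(polygons)),
                    reverse=True)
    loads = [0] * n_procs
    buckets = [[] for _ in range(n_procs)]
    for c, i in scored:
        k = loads.index(min(loads))   # least-loaded process so far
        loads[k] += c
        buckets[k].append(i)
    return buckets, loads
```

Balancing on such a complexity score, rather than on raw polygon counts, is the core idea the paper credits for its improved parallel efficiency.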

  9. Wood industrial application for quality control using image processing

    NASA Astrophysics Data System (ADS)

    Ferreira, M. J. O.; Neves, J. A. C.

    1994-11-01

    This paper describes an application of image processing for the furniture industry. It uses as input data images acquired directly from wood planks on which defects were previously marked by an operator. A set of image processing algorithms separates and codes each defect and computes a polygonal approximation of the lines representing them. For this purpose we developed a pattern classification algorithm and a new technique for segmenting defects by carving the convex hull of the binary shape representing each isolated defect.

  10. Higher order solution of the Euler equations on unstructured grids using quadratic reconstruction

    NASA Technical Reports Server (NTRS)

    Barth, Timothy J.; Frederickson, Paul O.

    1990-01-01

    High order accurate finite-volume schemes for solving the Euler equations of gasdynamics are developed. Central to the development of these methods are the construction of a k-exact reconstruction operator given cell-averaged quantities and the use of high order flux quadrature formulas. General polygonal control volumes (with curved boundary edges) are considered. The formulations presented make no explicit assumption as to complexity or convexity of control volumes. Numerical examples are presented for Ringleb flow to validate the methodology.

  11. Space use and movements of moose in Massachusetts: implications for conservation of large mammals in a fragmented environment

    USGS Publications Warehouse

    Wattles, David W.; DeStefano, Stephen

    2013-01-01

    Moose (Alces alces) have recently re-occupied a portion of their range in the temperate deciduous forest of the northeastern United States after a >200 year absence. In southern New England, moose encounter different forest types, more human development, and higher temperatures than in other parts of their geographic range in North America. We analyzed seasonal minimum convex polygon home ranges, utilization distributions, movement rates, and home range composition of GPS-collared moose in Massachusetts. Seasonal home range sizes were not different for males and females and were within the range reported for low latitudes elsewhere in North America. Seasonal movement patterns reflected the seasonal changes in metabolic rate and the influence of the species’ reproductive cycle and weather. Home ranges consisted almost entirely of forested habitat, included large amounts of conservation land, and had lower road densities as compared to the landscape as a whole, indicating that human development may be a limiting factor for moose in the region. The size and configuration of home ranges, seasonal movement patterns, and use relative to human development have implications for conservation of moose and other wide-ranging species in more highly developed portions of their ranges.

  12. Baird's tapir density in high elevation forests of the Talamanca region of Costa Rica.

    PubMed

    González-Maya, José F; Schipper, Jan; Polidoro, Beth; Hoepker, Annelie; Zárrate-Charry, Diego; Belant, Jerrold L

    2012-12-01

    Baird's tapir (Tapirus bairdii) is currently endangered throughout its neotropical range with an expected population decline >50% in the next 30 years. We present the first density estimation of Baird's tapir for the Talamanca mountains of Costa Rica, and one of the first for the country. Ten stations with paired cameras were established in Valle del Silencio within Parque Internacional La Amistad (PILA). Seventy-seven tapir pictures of 15 individuals comprising 25 capture-recapture events were analyzed using mark-recapture techniques. The 100% minimum convex polygon of the sampled area was 5.7 km2 and the effective sampled area, using half the mean maximum distance moved by tapirs, was 7.16 km2. We estimated a tapir density of 2.93 individuals/km2, which represents the highest density reported for this species. Intermountain valleys can represent unique and important habitats for large mammal species. However, the extent of isolation of this population, potentially constrained by steep slopes of the cordillera, remains unknown. Further genetic and movement studies are required to understand meta-population dynamics and connectivity between lowland and highland areas for Baird's tapir conservation in Costa Rica. © 2012 Wiley Publishing Asia Pty Ltd, ISZS and IOZ/CAS.

  13. Facility Layout Problems Using Bays: A Survey

    NASA Astrophysics Data System (ADS)

    Davoudpour, Hamid; Jaafari, Amir Ardestani; Farahani, Leila Najafabadi

    2010-06-01

    Layout design is one of the most important activities carried out by industrial engineers. Most of these problems are NP-hard. In a basic layout design, each cell is represented by a rectilinear, but not necessarily convex, polygon. The set of fully packed adjacent polygons is known as a block layout (Asef-Vaziri and Laporte 2007). Block layouts divide into slicing-tree and bay layouts. In a bay layout, departments are located in vertical columns or horizontal rows, called bays. Bay layouts are used in the real world, especially in settings such as semiconductor fabrication and aisle design. There are several reviews of facility layout; however, none of them focuses on bay layout. The literature analysis given here is not limited to specific considerations about bay layout design. We present a state-of-the-art review of bay layout, considering issues such as the objectives used, the solution techniques, and the integration methods.

  14. Efficient Boundary Extraction of BSP Solids Based on Clipping Operations.

    PubMed

    Wang, Charlie C L; Manocha, Dinesh

    2013-01-01

    We present an efficient algorithm to extract the manifold surface that approximates the boundary of a solid represented by a Binary Space Partition (BSP) tree. Our polygonization algorithm repeatedly performs clipping operations on volumetric cells that correspond to a spatial convex partition and computes the boundary by traversing the connected cells. We use point-based representations along with finite-precision arithmetic to improve the efficiency and generate the B-rep approximation of a BSP solid. The core of our polygonization method is a novel clipping algorithm that uses a set of logical operations to make it resistant to degeneracies resulting from limited precision of floating-point arithmetic. The overall BSP to B-rep conversion algorithm can accurately generate boundaries with sharp and small features, and is faster than prior methods. At the end of this paper, we use this algorithm for a few geometric processing applications including Boolean operations, model repair, and mesh reconstruction.
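
The paper's robust point-based 3D clipping is not reproduced here; as a hedged 2D sketch of the underlying operation, the following clips a convex polygon against a half-plane (one Sutherland-Hodgman stage), the building block repeated per BSP cell.

```python
def clip_halfplane(poly, a, b, c):
    """Keep the part of convex polygon `poly` satisfying a*x + b*y <= c.
    Vertices are CCW; the result is again a convex polygon (possibly empty)."""
    out = []
    n = len(poly)
    for i in range(n):
        p, q = poly[i], poly[(i + 1) % n]
        fp = a * p[0] + b * p[1] - c
        fq = a * q[0] + b * q[1] - c
        if fp <= 0:                        # p is inside (or on the line)
            out.append(p)
        if (fp < 0 < fq) or (fq < 0 < fp):  # edge strictly crosses the line
            t = fp / (fp - fq)
            out.append((p[0] + t * (q[0] - p[0]), p[1] + t * (q[1] - p[1])))
    return out
```

Repeating this for every plane of a BSP node carves the node's convex cell; the degeneracy-resistant logical operations of the paper address exactly the floating-point sign tests (`fp <= 0`) that this naive version leaves exposed.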

  15. Multiwavelength micromirrors in the cuticle of scarab beetle Chrysina gloriosa.

    PubMed

    Agez, Gonzague; Bayon, Chloé; Mitov, Michel

    2017-01-15

    Beetles from the genus Chrysina show vivid reflections from bright green to metallic silver-gold as a consequence of the cholesteric liquid crystal organization of chitin molecules. Particularly, the cuticle of Chrysina gloriosa exhibits green and silver stripes. By combining confocal microscopy and spectrophotometry, scanning electron microscopy and numerical simulations, the relationship between the reflectance and the structural parameters for both stripes at the micro- and nanoscales is established. Over the visible and near IR spectra, polygonal cells in tessellated green stripes behave as multiwavelength selective micro-mirrors and the silver stripes as specular broadband mirrors. Thermoregulation, conspecifics or intra-species communication, or camouflage against predators are discussed as possible functions. As a prerequisite to bio-inspired artificial replicas, the physical characteristics of the polygonal texture in Chrysina gloriosa cuticle are compared to their equivalents in synthetic cholesteric oligomers and their fundamental differences are ascertained. It is shown that the cuticle has concave cells whereas the artificial films have convex cells, contrary to expectation and assumption in the literature. The present results may provide inspiration for fabricating multiwavelength selective micromirrors or spatial wavelength-specific light modulators. Many insects have a tessellated carapace with bumps, pits or indentations. Little is known about the physical properties of these geometric variations, and their biological functions are unknown or still debated. We show that the polygonal cells in scarab beetle Chrysina gloriosa behave as multiwavelength selective micromirrors over the visible and infrared spectra, with a variety of spatial patterns. In the context of biomimetic materials, we demonstrate that the carapace has concave cells whereas the artificial films have convex cells, contrary to expectation in the literature. 
Thermoregulation, communication or camouflage are discussed as advanced functions. Results may provide inspiration for fabricating spatial wavelength-specific light modulators and optical packet switching in routing technologies. Copyright © 2016 Acta Materialia Inc. Published by Elsevier Ltd. All rights reserved.

  16. A framework for global terrain classification using 250-m DEMs to predict geohazards

    NASA Astrophysics Data System (ADS)

    Iwahashi, J.; Matsuoka, M.; Yong, A.

    2016-12-01

    Geomorphology is key for identifying factors that control geohazards induced by landslides, liquefaction, and ground shaking. To systematically identify landforms that affect these hazards, Iwahashi and Pike (2007; IP07) introduced an automated terrain classification scheme using 1-km-scale Shuttle Radar Topography Mission (SRTM) digital elevation models (DEMs). The IP07 classes describe 16 categories of terrain types and were used as a proxy for predicting ground motion amplification (Yong et al., 2012; Seyhan et al., 2014; Stewart et al., 2014; Yong, 2016). These classes, however, were not sufficiently resolved because coarse-scaled SRTM DEMs were the basis for the categories (Yong, 2016). Thus, we develop a new framework consisting of more detailed polygonal global terrain classes to improve estimations of soil-type and material stiffness. We first prepare high resolution 250-m DEMs derived from the 2010 Global Multi-resolution Terrain Elevation Data (GMTED2010). As in IP07, we calculate three geometric signatures (slope, local convexity and surface texture) from the DEMs. We create additional polygons by using the same signatures and multi-resolution segmentation techniques on the GMTED2010. We consider two types of surface texture thresholds in different window sizes (3x3 and 13x13 pixels), in addition to slope and local convexity, to classify pixels within the DEM. Finally, we apply the k-means clustering and thresholding methods to the 250-m DEM and produce more detailed polygonal terrain classes. We compare the new terrain classification maps of Japan and California with geologic, aerial photography, and landslide distribution maps, and visually find good correspondence of key features. To predict ground motion amplification, we apply the Yong (2016) method for estimating VS30. 
The systematic classification of geomorphology has the potential to provide a better understanding of the susceptibility to geohazards, which is especially vital in populated areas.

  17. Symmetric caging formation for convex polygonal object transportation by multiple mobile robots based on fuzzy sliding mode control.

    PubMed

    Dai, Yanyan; Kim, YoonGu; Wee, SungGil; Lee, DongHa; Lee, SukGyu

    2016-01-01

    In this paper, the problem of object caging and transporting is considered for multiple mobile robots. With the consideration of minimizing the number of robots and decreasing the rotation of the object, the proper points are calculated and assigned to the multiple mobile robots to allow them to form a symmetric caging formation. The caging formation guarantees that all of the Euclidean distances between any two adjacent robots are smaller than the minimal width of the polygonal object so that the object cannot escape. In order to avoid collision among robots, the robot-radius parameter is utilized to design the caging formation, and the A* algorithm is used so that the mobile robots can move to the assigned points. In order to avoid obstacles, the robots and the object are regarded as a rigid body to which the artificial potential field method is applied. The fuzzy sliding mode control method is applied for tracking control of the nonholonomic mobile robots. Finally, the simulation and experimental results show that multiple mobile robots are able to cage and transport the polygonal object to the goal position, avoiding obstacles. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.

  18. Calculus domains modelled using an original bool algebra based on polygons

    NASA Astrophysics Data System (ADS)

    Oanta, E.; Panait, C.; Raicu, A.; Barhalescu, M.; Axinte, T.

    2016-08-01

    Analytical and numerical computer-based models require analytical definitions of the calculus domains. The paper presents a method to model a calculus domain based on a Boolean algebra which uses solid and hollow polygons. The general calculus relations of the geometrical characteristics that are widely used in mechanical engineering are tested using several shapes of the calculus domain, in order to draw conclusions regarding the most effective methods to discretize the domain. The paper also tests the results of several commercial CAD software applications which are able to compute the geometrical characteristics, and interesting conclusions are drawn. The tests also targeted the accuracy of the results versus the number of nodes on the curved boundary of the cross-section. The study required the development of original software consisting of more than 1700 lines of computer code. In comparison with other calculus methods, discretization using convex polygons is a simpler approach. Moreover, this method does not lead to the very large numbers that the spline approximation did, which required special software packages offering multiple, arbitrary precision. The knowledge resulting from this study may be used to develop complex computer-based models in engineering.
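
The geometrical characteristics in question can be computed directly from a polygonal boundary. A minimal sketch (not the authors' software) of the shoelace/Green's-theorem formulas for area and centroid; a hollow region can be represented by appending its boundary with opposite orientation, which contributes negative area:

```python
def area_and_centroid(pts):
    """Signed area and centroid of a simple polygon given as CCW
    (x, y) vertices, via the shoelace / Green's theorem formulas."""
    a = cx = cy = 0.0
    n = len(pts)
    for i in range(n):
        x1, y1 = pts[i]
        x2, y2 = pts[(i + 1) % n]
        w = x1 * y2 - x2 * y1      # twice the signed area of one fan triangle
        a += w
        cx += (x1 + x2) * w
        cy += (y1 + y2) * w
    a /= 2.0
    return a, (cx / (6.0 * a), cy / (6.0 * a))
```

Second moments of area follow from analogous boundary sums, which is why the polygon-based discretization in the paper needs only modest arithmetic per vertex.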

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Livine, Etera R.

    We introduce the set of framed (convex) polyhedra with N faces as the symplectic quotient C{sup 2N}//SU(2). A framed polyhedron is then parametrized by N spinors living in C{sup 2} satisfying suitable closure constraints and defines a usual convex polyhedron plus extra U(1) phases attached to each face. We show that there is a natural action of the unitary group U(N) on this phase space, which changes the shape of faces and allows to map any (framed) polyhedron onto any other with the same total (boundary) area. This identifies the space of framed polyhedra to the Grassmannian space U(N)/ (SU(2)×U(N−2)).more » We show how to write averages of geometrical observables (polynomials in the faces' area and the angles between them) over the ensemble of polyhedra (distributed uniformly with respect to the Haar measure on U(N)) as polynomial integrals over the unitary group and we provide a few methods to compute these integrals systematically. We also use the Itzykson-Zuber formula from matrix models as the generating function for these averages and correlations. In the quantum case, a canonical quantization of the framed polyhedron phase space leads to the Hilbert space of SU(2) intertwiners (or, in other words, SU(2)-invariant states in tensor products of irreducible representations). The total boundary area as well as the individual face areas are quantized as half-integers (spins), and the Hilbert spaces for fixed total area form irreducible representations of U(N). We define semi-classical coherent intertwiner states peaked on classical framed polyhedra and transforming consistently under U(N) transformations. And we show how the U(N) character formula for unitary transformations is to be considered as an extension of the Itzykson-Zuber to the quantum level and generates the traces of all polynomial observables over the Hilbert space of intertwiners. 
We finally apply the same formalism to two dimensions and show that classical (convex) polygons can be described in a similar fashion, trading the unitary group for the orthogonal group. We conclude with a discussion of the possible (deformation) dynamics that one can define on the space of polygons or polyhedra. This work is a priori useful in the context of discrete geometry, but it should hopefully also be relevant to (loop) quantum gravity in 2+1 and 3+1 dimensions when the quantum geometry is defined in terms of gluings of (quantized) polygons and polyhedra.
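The closure constraint underlying this construction is, classically, the Minkowski condition: the outward area vectors of a closed polyhedron's faces sum to zero. A small numerical check on a unit cube (purely illustrative; this is the vector form of the constraint, not the paper's spinorial C^{2N} parametrization):

```python
def face_area_vector(face):
    """Area vector (outward unit normal times area) of a planar 3D polygon,
    computed as (1/2) * sum of v_i x v_{i+1}; the result is origin-independent."""
    sx = sy = sz = 0.0
    n = len(face)
    for i in range(n):
        ax, ay, az = face[i]
        bx, by, bz = face[(i + 1) % n]
        sx += ay * bz - az * by
        sy += az * bx - ax * bz
        sz += ax * by - ay * bx
    return (0.5 * sx, 0.5 * sy, 0.5 * sz)

# Unit cube; each face is ordered counterclockwise as seen from outside.
cube = [
    [(0,0,0), (0,1,0), (1,1,0), (1,0,0)],  # z = 0, outward normal -z
    [(0,0,1), (1,0,1), (1,1,1), (0,1,1)],  # z = 1, outward normal +z
    [(0,0,0), (0,0,1), (0,1,1), (0,1,0)],  # x = 0, outward normal -x
    [(1,0,0), (1,1,0), (1,1,1), (1,0,1)],  # x = 1, outward normal +x
    [(0,0,0), (1,0,0), (1,0,1), (0,0,1)],  # y = 0, outward normal -y
    [(0,1,0), (0,1,1), (1,1,1), (1,1,0)],  # y = 1, outward normal +y
]
vecs = [face_area_vector(f) for f in cube]
closure = tuple(sum(v[k] for v in vecs) for k in range(3))
# closure is (0, 0, 0): the six area vectors of the closed polyhedron cancel.
```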

  20. Maximally dense packings of two-dimensional convex and concave noncircular particles.

    PubMed

    Atkinson, Steven; Jiao, Yang; Torquato, Salvatore

    2012-09-01

    Dense packings of hard particles have important applications in many fields, including condensed matter physics, discrete geometry, and cell biology. In this paper, we employ a stochastic search implementation of the Torquato-Jiao adaptive-shrinking-cell (ASC) optimization scheme [Nature (London) 460, 876 (2009)] to find maximally dense particle packings in d-dimensional Euclidean space R(d). While the original implementation was designed to study spheres and convex polyhedra in d≥3, our implementation focuses on d=2 and extends the algorithm to include both concave polygons and certain complex convex or concave nonpolygonal particle shapes. We verify the robustness of this packing protocol by successfully reproducing the known putative optimal packings of congruent copies of regular pentagons and octagons, then employ it to suggest dense packing arrangements of congruent copies of certain families of concave crosses, convex and concave curved triangles (incorporating shapes resembling the Mercedes-Benz logo), and "moonlike" shapes. Analytical constructions are determined subsequently to obtain the densest known packings of these particle shapes. For the examples considered, we find that the densest packings of both convex and concave particles with central symmetry are achieved by their corresponding optimal Bravais lattice packings; for particles lacking central symmetry, the densest packings obtained are nonlattice periodic packings, which are consistent with recently-proposed general organizing principles for hard particles. Moreover, we find that the densest known packings of certain curved triangles are periodic with a four-particle basis, and we find that the densest known periodic packings of certain moonlike shapes possess no inherent symmetries. Our work adds to the growing evidence that particle shape can be used as a tuning parameter to achieve a diversity of packing structures.
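For a Bravais lattice packing with one particle per fundamental cell, the packing fraction is simply the particle area divided by the cell area |det[v1 v2]|. A sketch checked against the classic triangular-lattice packing of disks, whose density π/(2√3) ≈ 0.9069 is the known optimum for circles (the lattice vectors below are the standard ones for that case, not shapes from the paper):

```python
import math

def lattice_packing_fraction(particle_area, v1, v2):
    """Packing fraction of a 2D Bravais lattice packing: one particle of the
    given area per fundamental cell spanned by lattice vectors v1 and v2."""
    cell_area = abs(v1[0] * v2[1] - v1[1] * v2[0])  # |det [v1 v2]|
    return particle_area / cell_area

# Unit disks (area pi) with centers on the triangular lattice, spacing 2.
phi = lattice_packing_fraction(math.pi, (2.0, 0.0), (1.0, math.sqrt(3.0)))
# phi equals pi / (2 * sqrt(3)), approximately 0.9069
```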

  2. Design optimization of the S-frame to improve crashworthiness

    NASA Astrophysics Data System (ADS)

    Liu, Shu-Tian; Tong, Ze-Qi; Tang, Zhi-Liang; Zhang, Zong-Hua

    2014-08-01

    In this paper, the S-frames, the front side rail structures of automobiles, were investigated for crashworthiness. Various cross-sections, including regular polygon, non-convex polygon and multi-cell sections with inner stiffeners, were investigated in terms of the energy absorption of S-frames. It was determined through extensive numerical simulation that a multi-cell S-frame with double vertical internal stiffeners can absorb more energy than the other configurations. Shape optimization was also carried out to improve the energy absorption of the S-frame with a rectangular section. A central composite design of experiments and the sequential response surface method (SRSM) were adopted to construct the approximate design sub-problem, which was then solved by the feasible direction method. An innovative double S-frame was obtained from the optimal result. The optimum configuration of the S-frame was crushed numerically and more plastic hinges as well as shear zones were observed during the crush process. The energy absorption efficiency of the structure with the optimal configuration was improved compared to the initial configuration.

  3. Characteristics of a ringtail (Bassariscus astutus) population in Trans Pecos, Texas

    USGS Publications Warehouse

    Ackerson, B.K.; Harveson, L.A.

    2006-01-01

    Despite the common occurrence of ringtails (Bassariscus astutus), few studies have been conducted to assess population characteristics. The objectives of this study were to determine (1) habitat selection, (2) home range, (3) denning characteristics, and (4) food habits of ringtails in the Trans Pecos region of west Texas. Seventeen ringtails were captured between November 1999 and January 2001 using Havahart live box traps. Second- and third-order habitat selection was determined for a ringtail population using range sites, slope, elevation, and vegetation communities. Diets were determined from volumetric scat analysis. The mean summer and winter range sizes (100% Minimum Convex Polygon [MCP]) for ringtails (n = 5) were 0.28 ± 0.163 km2 and 0.63 ± 0.219 km2, respectively. Overlap between ringtail ranges averaged 33.3%. Ringtails preferred catclaw (Mimosa biuncifera), persimmon (Diospyros texana), oak (Quercus sp.) bottom and catclaw/goldeneye (Viguiera stenoloba), sideoats (Bouteloua curtipendula) slope communities. Rock dens were used exclusively by ringtails, with 80.6% of dens found on slopes between 30-60%. Plant (seeds and miscellaneous vegetation) and animal material were found in 74.6 and 86.6% of scats, respectively. Findings suggest that ringtails in Trans Pecos, Texas, are an important component of the ecosystem and that management practices should conserve canyon habitats and adjacent slopes for ringtails.

  4. Minimizing effects of methodological decisions on interpretation and prediction in species distribution studies: An example with background selection

    USGS Publications Warehouse

    Jarnevich, Catherine S.; Talbert, Marian; Morisette, Jeffrey T.; Aldridge, Cameron L.; Brown, Cynthia; Kumar, Sunil; Manier, Daniel; Talbert, Colin; Holcombe, Tracy R.

    2017-01-01

    Evaluating the conditions where a species can persist is an important question in ecology both to understand tolerances of organisms and to predict distributions across landscapes. Presence data combined with background or pseudo-absence locations are commonly used with species distribution modeling to develop these relationships. However, there is not a standard method to generate background or pseudo-absence locations, and method choice affects model outcomes. We evaluated combinations of both model algorithms (simple and complex generalized linear models, multivariate adaptive regression splines, Maxent, boosted regression trees, and random forest) and background methods (random, minimum convex polygon, and continuous and binary kernel density estimator (KDE)) to assess the sensitivity of model outcomes to choices made. We evaluated six questions related to model results, including five beyond the common comparison of model accuracy assessment metrics (biological interpretability of response curves, cross-validation robustness, independent data accuracy and robustness, and prediction consistency). For our case study with cheatgrass in the western US, random forest was least sensitive to background choice and the binary KDE method was least sensitive to model algorithm choice. While this outcome may not hold for other locations or species, the methods we used can be implemented to help determine appropriate methodologies for particular research questions.
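One common way to implement the minimum-convex-polygon background method compared above is to take the convex hull of the presence points and draw pseudo-absence points uniformly inside it by rejection sampling. A minimal sketch using Andrew's monotone chain hull and a point-in-convex-polygon test (generic, not the authors' code; function names are illustrative):

```python
import random

def convex_hull(points):
    """Minimum convex polygon (Andrew's monotone chain), returned counterclockwise."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

def in_convex(poly, p):
    """True if p lies inside or on a counterclockwise convex polygon."""
    n = len(poly)
    for i in range(n):
        a, b = poly[i], poly[(i + 1) % n]
        if (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0]) < 0:
            return False
    return True

def mcp_background(presences, n, rng):
    """Sample n background points uniformly inside the MCP of the presences
    by rejection sampling from the hull's bounding box."""
    hull = convex_hull(presences)
    xs = [p[0] for p in hull]
    ys = [p[1] for p in hull]
    out = []
    while len(out) < n:
        p = (rng.uniform(min(xs), max(xs)), rng.uniform(min(ys), max(ys)))
        if in_convex(hull, p):
            out.append(p)
    return out
```

In practice the presence coordinates would come from occurrence records and the sampled points would be passed to the model algorithm as background data.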

  5. Fidelity and persistence of Ring-billed (Larus delawarensis) and Herring (Larus argentatus) gulls to wintering sites

    USGS Publications Warehouse

    Clark, Daniel E.; Koenen, Kiana K. G.; Whitney, Jillian J.; MacKenzie, Kenneth G.; DeStefano, Stephen

    2016-01-01

    While the breeding ecology of gulls (Laridae) has been well studied, their movements and spatial organization during the non-breeding season are poorly understood. The seasonal movements, winter-site fidelity, and site persistence of Ring-billed (Larus delawarensis) and Herring (L. argentatus) gulls to wintering areas were studied from 2008 to 2012. Satellite transmitters were deployed on Ring-billed Gulls (n = 21) and Herring Gulls (n = 14). Ten Ring-billed and six Herring gulls were tracked over multiple winters and > 300 wing-tagged Ring-billed Gulls were followed to determine winter-site fidelity and persistence. Home range overlap for individuals between years ranged from 0 to 1.0 (95% minimum convex polygon) and from 0.31 to 0.79 (kernel utilization distributions). Ring-billed and Herring gulls remained at local wintering sites during the non-breeding season for 20–167 days and 74–161 days, respectively. The probability of a tagged Ring-billed Gull returning to the same site in subsequent winters was high; conversely, there was a low probability of a Ring-billed Gull returning to a different site. Ring-billed and Herring gulls exhibited high winter-site fidelity, but exhibited variable site persistence during the winter season, leading to a high probability of encountering the same individuals in subsequent winters.

  6. Patterns of species richness and the center of diversity in modern Indo-Pacific larger foraminifera.

    PubMed

    Förderer, Meena; Rödder, Dennis; Langer, Martin R

    2018-05-29

    Symbiont-bearing Larger Benthic Foraminifera (LBF) are ubiquitous components of shallow tropical and subtropical environments and contribute substantially to carbonaceous reef and shelf sediments. Climate change is dramatically affecting carbonate producing organisms and threatens the diversity and structural integrity of coral reef ecosystems. Recent invertebrate and vertebrate surveys have identified the Coral Triangle as the planet's richest center of marine life delineating the region as a top priority for conservation. We compiled and analyzed extensive occurrence records for 68 validly recognized species of LBF from the Indian and Pacific Ocean, established individual range maps and applied Minimum Convex Polygon (MCP) and Species Distribution Model (SDM) methodologies to create the first ocean-wide species richness maps. SDM output was further used for visualizing latitudinal and longitudinal diversity gradients. Our findings provide strong support for assigning the tropical Central Indo-Pacific as the world's species-richest marine region with the Central Philippines emerging as the bullseye of LBF diversity. Sea surface temperature and nutrient content were identified as the most influential environmental constraints exerting control over the distribution of LBF. Our findings contribute to the completion of worldwide research on tropical marine biodiversity patterns and the identification of targeting centers for conservation efforts.

  7. Habitat use and home range of the Laysan Teal on Laysan Island, Hawaii

    USGS Publications Warehouse

    Reynolds, M.H.

    2004-01-01

    The 24-hour habitat use and home range of the Laysan Teal (Anas laysanensis), an endemic dabbling duck in Hawaii, were studied using radio telemetry during 1998-2000. Radios were retained for a mean of 40 days (0-123 d; 73 adult birds radio-tagged). Comparisons of daily habitat use were made for birds in the morning, day, evening, and night. Most birds showed strong evidence of selective habitat use. Adults preferred the terrestrial vegetation (88%), and avoided the lake and wetlands during the day. At night, 63% of the birds selected the lake and wetlands. Nocturnal habitat use differed significantly between the non-breeding and breeding seasons, while the lake and wetland habitats were used more frequently during the non-breeding season. Most individuals showed strong site fidelity during the study, but habitat selection varied between individuals. Mean home range size was 9.78 ha (SE ± 2.6) using the fixed kernel estimator (95% kernel; 15 birds, each with >25 locations). The average minimum convex polygon size was 24 ha (SE ± 5.6). The mean distance traveled between tracking locations was 178 m (SE ± 30.5), with travel distances between points ranging up to 1,649 m. Tracking duration varied from 31 to 121 days per bird (mean tracking duration 75 days).

  8. Characterizing Time Series Data Diversity for Wind Forecasting: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hodge, Brian S; Chartan, Erol Kevin; Feng, Cong

    Wind forecasting plays an important role in integrating variable and uncertain wind power into the power grid. Various forecasting models have been developed to improve forecasting accuracy. However, it is challenging to accurately compare the true forecasting performances of different methods and forecasters due to the lack of diversity in forecasting test datasets. This paper proposes a time series characteristic analysis approach to visualize and quantify wind time series diversity. The developed method first calculates six time series characteristic indices from various perspectives. Then principal component analysis is performed to reduce the data dimension while preserving the important information. The diversity of the time series dataset is visualized by the geometric distribution of the newly constructed principal component space. The volume of the 3-dimensional (3D) convex polytope (or the length of the 1D number axis, or the area of the 2D convex polygon) is used to quantify the time series data diversity. The method is tested with five datasets with various degrees of diversity.

  9. Global terrain classification using Multiple-Error-Removed Improved-Terrain (MERIT) to address susceptibility of landslides and other geohazards

    NASA Astrophysics Data System (ADS)

    Iwahashi, J.; Yamazaki, D.; Matsuoka, M.; Thamarux, P.; Herrick, J.; Yong, A.; Mital, U.

    2017-12-01

    A seamless model of landform classifications with regional accuracy will be a powerful platform for geophysical studies that forecast geologic hazards. Spatial variability as a function of landform on a global scale was captured in the automated classifications of Iwahashi and Pike (2007) and additional developments are presented here that incorporate more accurate depictions using higher-resolution elevation data than the original 1-km scale Shuttle Radar Topography Mission digital elevation model (DEM). We create polygon-based terrain classifications globally by using the 280-m DEM interpolated from the Multi-Error-Removed Improved-Terrain DEM (MERIT; Yamazaki et al., 2017). The multi-scale pixel-image analysis method, known as Multi-resolution Segmentation (Baatz and Schäpe, 2000), is first used to classify the terrains based on geometric signatures (slope and local convexity) calculated from the 280-m DEM. Next, we apply the machine learning method of "k-means clustering" to prepare the polygon-based classification at the global scale using slope, local convexity and surface texture. We then group the divisions with similar properties by hierarchical clustering and other statistical analyses using geological and geomorphological data of the area where landslides and earthquakes are frequent (e.g. Japan and California). We find the 280-m DEM resolution is only partially sufficient for classifying plains. We nevertheless observe that the categories correspond to reported landslide and liquefaction features at the global scale, suggesting that our model is an appropriate platform to forecast ground failure. To predict seismic amplification, we estimate site conditions using the time-averaged shear-wave velocity in the upper 30 m (VS30) measurements compiled by Yong et al. (2016) and the terrain model developed by Yong (2016; Y16). 
We plan to test our method on finer resolution DEMs and report our findings to obtain a more globally consistent terrain model as there are known errors in DEM derivatives at higher-resolutions. We expect the improvement in DEM resolution (4 times greater detail) and the combination of regional and global coverage will yield a consistent dataset of polygons that have the potential to improve relations to the Y16 estimates significantly.
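The k-means clustering step described above assigns each terrain unit to the nearest of k cluster centroids in feature space and iterates until the centroids stop moving. A toy sketch of Lloyd's algorithm on (slope, convexity, texture)-like feature triples (the values below are illustrative, not MERIT data):

```python
def kmeans(points, centroids, iters=100):
    """Plain Lloyd's algorithm. points and the initial centroids are tuples of
    equal dimension; returns (final centroids, cluster label per point)."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    labels = [0] * len(points)
    for _ in range(iters):
        # Assignment step: nearest centroid by squared Euclidean distance.
        labels = [min(range(len(centroids)), key=lambda j: dist2(p, centroids[j]))
                  for p in points]
        # Update step: each centroid moves to the mean of its members.
        new = []
        for j in range(len(centroids)):
            members = [p for p, l in zip(points, labels) if l == j]
            if members:
                new.append(tuple(sum(c) / len(members) for c in zip(*members)))
            else:
                new.append(centroids[j])  # keep an empty cluster's centroid
        if new == centroids:
            break  # converged
        centroids = new
    return centroids, labels
```

With two well-separated groups of feature vectors and one seed centroid near each, the algorithm recovers the grouping in a couple of iterations.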

  10. A Polynomial-Based Nonlinear Least Squares Optimized Preconditioner for Continuous and Discontinuous Element-Based Discretizations of the Euler Equations

    DTIC Science & Technology

    2014-01-01

    system (here using left-preconditioning) (KÃ)x = Kb̃, (3.1) where K is a low-order polynomial in Ã given by K = s(Ã) = ∑_{i=0}^{m} k_i Ã^i, (3.2) and has a... system with a complex spectrum, region E in the complex plane must be some convex form (e.g., an ellipse or polygon) that approximately encloses the... preconditioners with p = 2 and p = 20 on the spectrum of the preconditioned system matrices KÃ and KH̃ for both CG Schur-complement form and DG form cases

  11. [Displacements of the green iguana (Iguana iguana) (Squamata: Iguanidae) during the dry season in La Palma, Veracruz, Mexico].

    PubMed

    Morales-Mávil, Jorge E; Vogt, Richard C; Gadsden-Esparza, Héctor

    2007-06-01

    The green iguana (Iguana iguana) is said to be primarily sedentary, although the females travel long distances to nest. Displacement patterns must be known to help predict the effects of environmental disturbance on iguanas' survival. We studied nesting season (February-July) movements in La Palma, Los Tuxtlas, Veracruz, Mexico (18°33' N, 95°03' W). Individual movements and activity were monitored by radio tracking. The transmitters were implanted surgically in eight adult iguanas (four males and four females). Snout-vent length (SVL) was used to determine the relationship between body size and home range size. To estimate the size of the home range, three or more points were used. Minimum convex polygon estimates of home range were calculated with McPAAL. The iguanas were radio-located on 23 to 30 occasions, mainly in trees (56% between 3-9 m); only 4% were located below a height of 3 m (forest floor). The mean occupied area was larger for males (9,158.06 ± 3,025.3 m2 vs. 6,591.24 ± 4,001.1 m2), although the differences were not significant (t = 0.51, p > 0.05). SVL was correlated with home range (r = 0.76; df = 7; p < 0.05). Breeding males defended their home range vigorously against other adult males. We observed one separate male home range and large portions of overlap between the sexes. The home range generally formed a conglomerate of polygons and only two had linear shapes along the river: apparently iguanas use the riparian vegetation for foraging. The females display two strategies for nesting: 1) moving to the sandy area near the sea or, 2) laying eggs near the river, in loam. Iguanas responded to habitat fragmentation and reduction by modifying their nesting strategy.

  12. Sampling large random knots in a confined space

    NASA Astrophysics Data System (ADS)

    Arsuaga, J.; Blackstone, T.; Diao, Y.; Hinson, K.; Karadayi, E.; Saito, M.

    2007-09-01

    DNA knots formed under extreme conditions of condensation, as in bacteriophage P4, are difficult to analyze experimentally and theoretically. In this paper, we propose to use the uniform random polygon model as a supplementary method to the existing methods for generating random knots in confinement. The uniform random polygon model allows us to sample knots with large crossing numbers and also to generate large diagrammatically prime knot diagrams. We show numerically that uniform random polygons sample knots with large minimum crossing numbers and certain complicated knot invariants (as those observed experimentally). We do this in terms of the knot determinants or colorings. Our numerical results suggest that the average determinant of a uniform random polygon of n vertices grows faster than O(e^{n^2}). We also investigate the complexity of prime knot diagrams. We show rigorously that the probability that a randomly selected 2D uniform random polygon of n vertices is almost diagrammatically prime goes to 1 as n goes to infinity. Furthermore, the average number of crossings in such a diagram is of the order O(n^2). Therefore, two-dimensional uniform random polygons offer an effective way of sampling large (prime) knots, which can be useful in various applications.
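The uniform random polygon model is simple to sample: draw n vertices independently and uniformly in the confining cube and join them in order, closing the last vertex back to the first. The quadratic growth of crossings in a 2D projection can then be probed directly by counting transversal self-intersections. A hedged sketch (generic implementation, not the authors' code):

```python
import random

def uniform_random_polygon(n, rng):
    """n vertices i.i.d. uniform in the unit cube, joined in order and closed."""
    return [(rng.random(), rng.random(), rng.random()) for _ in range(n)]

def projected_crossings(poly):
    """Count transversal self-crossings of the xy-projection of a closed
    polygon, over all non-adjacent edge pairs (generic position assumed)."""
    def orient(a, b, c):
        return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])
    n = len(poly)
    edges = [(poly[i], poly[(i + 1) % n]) for i in range(n)]
    count = 0
    for i in range(n):
        for j in range(i + 2, n):
            if i == 0 and j == n - 1:
                continue  # these two edges are adjacent around the closure
            (p, q), (r, s) = edges[i], edges[j]
            if orient(p, q, r) * orient(p, q, s) < 0 and \
               orient(r, s, p) * orient(r, s, q) < 0:
                count += 1
    return count
```

Averaging `projected_crossings` over many sampled polygons for increasing n would exhibit the roughly quadratic growth the abstract describes.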

  13. LoCoH: Non-parametric kernel methods for constructing home ranges and utilization distributions

    USGS Publications Warehouse

    Getz, Wayne M.; Fortmann-Roe, Scott; Cross, Paul C.; Lyons, Andrew J.; Ryan, Sadie J.; Wilmers, Christopher C.

    2007-01-01

    Parametric kernel methods currently dominate the literature regarding the construction of animal home ranges (HRs) and utilization distributions (UDs). These methods frequently fail to capture the kinds of hard boundaries common to many natural systems. Recently a local convex hull (LoCoH) nonparametric kernel method, which generalizes the minimum convex polygon (MCP) method, was shown to be more appropriate than parametric kernel methods for constructing HRs and UDs, because of its ability to identify hard boundaries (e.g., rivers, cliff edges) and convergence to the true distribution as sample size increases. Here we extend the LoCoH in two ways: "fixed sphere-of-influence," or r-LoCoH (kernels constructed from all points within a fixed radius r of each reference point), and an "adaptive sphere-of-influence," or a-LoCoH (kernels constructed from all points within a radius a such that the distances of all points within the radius to the reference point sum to a value less than or equal to a), and compare them to the original "fixed-number-of-points," or k-LoCoH (all kernels constructed from k-1 nearest neighbors of root points). We also compare these nonparametric LoCoH methods to parametric kernel methods using manufactured data and data collected from GPS collars on African buffalo in the Kruger National Park, South Africa. Our results demonstrate that LoCoH methods are superior to parametric kernel methods in estimating areas used by animals, excluding unused areas (holes) and, generally, in constructing UDs and HRs arising from the movement of animals influenced by hard boundaries and irregular structures (e.g., rocky outcrops). We also demonstrate that a-LoCoH is generally superior to k- and r-LoCoH (with software for all three methods available at http://locoh.cnr.berkeley.edu).

  14. Infrared thermography of the udder surface of dairy cattle: characteristics, methods, and correlation with rectal temperature.

    PubMed

    Metzner, Moritz; Sauter-Louis, Carola; Seemueller, Andrea; Petzl, Wolfram; Klee, Wolfgang

    2014-01-01

    Thermograms of the caudal udder surface were taken of five healthy cows before and after inoculation of Escherichia coli into the right hind quarter. Images in clinically normal udder quarters from cows without fever (CN) were compared with those post inoculation when cows had fever (⩾ 39.5°C) and showed elevation of somatic cell counts (⩾ 400,000 cells/mL) in the inoculated quarter (CM). Using graphic software tools, different geometric analysis tools (GATs: polygons, rectangles, lines) were set within the thermographic images. The following descriptive parameters (DPs) were employed: minimum value ('min'), maximum value ('max'), range ('max-min'), and arithmetic mean ('am'). Surface temperatures in group CN were between 34.1°C ('polygons'/'min') and 37.9°C ('polygons'/'max'), and in group CM between 34.5°C ('polygons'/'min') and 40.0°C ('polygons'/'max'). The greatest differences in the temperatures between CN and CM (2.06°C) were found in 'polygons' and 'rectangles' using 'max'. The smallest coefficient of variation in triplicate determinations was found in GAT 'polygons' with DP 'max' (Tmax) (0.15%), and the relationship to the rectal body temperature (Tr) could be described by Tr=5.68+0.874*Tmax. The results show that significant changes can be displayed best using the GAT 'polygons' and the DP 'max'. These methods should be considered for automated monitoring of udder health in dairy cows. Copyright © 2013 Elsevier Ltd. All rights reserved.

  15. Novel dynamic peak and distribution plantar pressure measures on diabetic patients during walking.

    PubMed

    Al-Angari, Haitham M; Khandoker, Ahsan H; Lee, Sungmun; Almahmeed, Wael; Al Safar, Habiba S; Jelinek, Herbert F; Khalaf, Kinda

    2017-01-01

    Diabetic peripheral neuropathy (DPN) is a common complication leading to foot ulceration and amputation. Several kinematic, kinetic and plantar pressure measures have been proposed for DPN detection, however findings have been inconsistent. In this work, we present new shape features that capture variations in the plantar pressure using shape and entropy measures to the study of patients with retinopathy, DPN and nephropathy, and a control diabetic group with no complications. The change in the peak plantar pressure (PPP) position with each step for both feet was represented as a convex polygon, asymmetry index, area of the convex polygon, 2nd wavelet moment (WM2) and sample entropy (SamEn). WM2 and the SamEn were more sensitive in capturing variations due to presence of complications than the area and asymmetry measures. WM2 of the left heel (median: 1st IQ, 3rd IQ): 8.27 (4.6,14.8) and left forefoot: 9.2 (2.4,16) were significantly lower for the DPN group compared to the control (CONT) group (heel 11.9 (5.0,16.4); forefoot: 10.3 (4.4,21.3), p < 0.05). SamEn for the DPN group was significantly lower in the right foot compared to the left foot (1.3 (1.26, 1.37) and 1.33 (1.26,1.4), p < 0.01) compared to CONT (right foot: 1.37 (1.24,1.45) and left foot: 1.34 (1.25,1.42), P < 0.05). These new shape and regularity features have shown promising results in detecting diabetic peripheral neuropathy and warrant further investigation. Copyright © 2016 Elsevier B.V. All rights reserved.
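Sample entropy (SamEn), one of the regularity measures used above, compares how often templates of length m repeat within a tolerance r against how often their length-(m+1) extensions do; lower values indicate a more regular signal. A standard textbook implementation (m = 2 is a common default; the abstract does not state the authors' parameter choices):

```python
import math

def sample_entropy(x, m=2, r=0.2):
    """SampEn(m, r) of a sequence x: -ln(A/B), where B counts pairs of
    m-length templates within Chebyshev distance r, and A counts the same
    for length m+1. Self-matches are excluded, and both counts use the
    first len(x)-m template start indices so they are directly comparable."""
    n = len(x)
    num = n - m  # shared template index set for both lengths
    def matches(length):
        t = [x[i:i + length] for i in range(num)]
        c = 0
        for i in range(num):
            for j in range(i + 1, num):
                if max(abs(a - b) for a, b in zip(t[i], t[j])) <= r:
                    c += 1
        return c
    B, A = matches(m), matches(m + 1)
    return -math.log(A / B) if A > 0 and B > 0 else float("inf")
```

A perfectly regular signal (constant or strictly periodic) yields SampEn of exactly 0, since every m-match extends to an (m+1)-match; irregular signals yield larger values.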

  16. Lithology-dependent minimum horizontal stress and in-situ stress estimate

    NASA Astrophysics Data System (ADS)

    Zhang, Yushuai; Zhang, Jincai

    2017-04-01

    Based on the generalized Hooke's law with coupled stresses and pore pressure, the minimum horizontal stress is solved with the assumption that the vertical, minimum and maximum horizontal stresses are in equilibrium in the subsurface formations. From this derivation, we find that the uniaxial strain method gives the minimum value, or lower bound, of the minimum stress. Using Anderson's faulting theory and this lower bound of the minimum horizontal stress, the coefficient of friction of the fault is derived. It shows that the coefficient of friction may have a much smaller value than what is commonly assumed (e.g., μf = 0.6-0.7) for in-situ stress estimates. Using the derived coefficient of friction, an improved stress polygon is drawn, which can reduce the uncertainty of in-situ stress calculation by narrowing the area of the conventional stress polygon. It also shows that the coefficient of friction of the fault depends on lithology. For example, if the formation in the fault is composed of weak shales, then the coefficient of friction of the fault may be small (as low as μf = 0.2). This implies that such a fault is weaker and more likely to undergo shear failure than a fault composed of sandstones. To keep the weak fault from shear sliding, it needs to have a higher minimum stress and a lower shear stress. That is, the critically stressed weak fault maintains a higher minimum stress, which explains why a low shear stress appears in the frictionally weak fault.
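In Anderson's faulting theory the stress polygon is bounded by the frictional limit q = ((μf² + 1)^(1/2) + μf)² on the ratio of effective principal stresses. A sketch of those bounds, showing that a weaker fault (smaller μf) forces a higher minimum stress, as argued above (the Sv and Pp values are illustrative, not from the paper):

```python
import math

def friction_limit(mu):
    """Frictional limit q on the effective principal stress ratio
    sigma1'/sigma3' for a critically oriented fault (Anderson theory)."""
    return (math.sqrt(mu * mu + 1.0) + mu) ** 2

def shmin_lower_bound(sv, pp, mu):
    """Lowest Shmin consistent with normal faulting: (Sv-Pp)/(Shmin-Pp) <= q."""
    return pp + (sv - pp) / friction_limit(mu)

def shmax_upper_bound(sv, pp, mu):
    """Highest SHmax consistent with reverse faulting: (SHmax-Pp)/(Sv-Pp) <= q."""
    return pp + (sv - pp) * friction_limit(mu)

# Illustrative numbers (MPa): a weak fault (mu = 0.2) requires a higher
# minimum stress than a strong fault (mu = 0.6), narrowing the polygon.
sv, pp = 100.0, 40.0
weak = shmin_lower_bound(sv, pp, 0.2)
strong = shmin_lower_bound(sv, pp, 0.6)
```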

  17. Maxis-A rezoning and remapping code in two dimensional cylindrical geometry

    NASA Astrophysics Data System (ADS)

    Lin, Zhiwei; Jiang, Shaoen; Zhang, Lu; Kuang, Longyu; Li, Hang

    2018-06-01

    This paper presents the new version of our code Maxis (Lin et al., 2011). Maxis is a local rezoning and remapping code in two dimensional cylindrical geometry, which can be employed to address the grid distortion problem of unstructured meshes. The new version of Maxis is mostly programmed in the C language which considerably improves its computational efficiency with respect to the former Matlab version. A new algorithm for determining the intersection of two arbitrary convex polygons is also incorporated into the new version. Some additional linking functions are further provided in the new version for the purpose of combining Maxis and MULTI2D.
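A standard way to determine the intersection of two arbitrary convex polygons, the operation the new Maxis algorithm performs during remapping, is to clip one polygon successively against each directed edge of the other (Sutherland-Hodgman clipping). This is a generic sketch of that technique, not the code actually used in Maxis:

```python
def convex_intersection(P, Q):
    """Intersection of two convex polygons given as counterclockwise vertex
    lists: clip P against each directed edge of Q, keeping the left half-plane."""
    def side(a, b, p):
        return (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0])
    out = list(P)
    m = len(Q)
    for i in range(m):
        a, b = Q[i], Q[(i + 1) % m]
        inp, out = out, []
        n = len(inp)
        for j in range(n):
            p, q = inp[j], inp[(j + 1) % n]
            dp, dq = side(a, b, p), side(a, b, q)
            if dp >= 0:
                out.append(p)  # p is on the kept side of the clipping line
            if (dp > 0 > dq) or (dp < 0 < dq):
                t = dp / (dp - dq)  # crossing point of edge pq with line ab
                out.append((p[0] + t * (q[0] - p[0]), p[1] + t * (q[1] - p[1])))
        if not out:
            return []  # polygons do not overlap
    return out

def polygon_area(poly):
    """Shoelace area of a counterclockwise polygon."""
    n = len(poly)
    return 0.5 * sum(poly[i][0] * poly[(i + 1) % n][1]
                     - poly[(i + 1) % n][0] * poly[i][1] for i in range(n))
```

For remapping, the area of each clipped overlap region weights the quantity transferred from an old cell to a new one.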

  18. Storm Physics and Lightning Properties over Northern Alabama during DC3

    NASA Astrophysics Data System (ADS)

    Matthee, R.; Carey, L. D.; Bain, A. L.

    2013-12-01

    The Deep Convective Clouds and Chemistry (DC3) experiment seeks to examine the relationship between deep moist convection (DMC) and the production of nitrogen oxides (NOx) via lightning (LNOx). The focus of this study will be to examine integrated storm microphysics and lightning properties of DMC across northern Alabama (NA) during the DC3 campaign through use of polarimetric radar [UAHuntsville's Advanced Radar for Meteorological and Operational Radar (ARMOR)] and lightning mapping [National Aeronautical and Space Administration's (NASA) north Alabama Lightning Mapping Array (NA LMA)] platforms. Specifically, ARMOR and NA LMA are being used to explore the ability of radar inferred microphysical (e.g., ice mass, graupel volume) measurements to parameterize flash rates (F) and flash area for estimation of LNOx production in cloud resolving models. The flash area was calculated by using the 'convex hull' method. This method essentially draws a polygon around all the sources that comprise a flash; the convex hull area, i.e., the minimum polygon that circumscribes the flash extent, is then calculated. Two storms have been analyzed so far; one on 21 May 2012 (S1) and another on 11 June 2012 (S2), both of which were aircraft-penetrated during DC3. For S1 and S2, radar reflectivity (Z) estimates of precipitation ice mass (M) within the mixed-phase zone (-10°C to -40°C) were well correlated to the trend of lightning flash rate. However, a useful radar-based F parameterization must provide accurate quantification of rates in addition to proper trends. The difference reflectivity was used to estimate Z associated with ice and then a single Z-M relation was employed to calculate M in the mixed-phase zone. Using this approach it was estimated that S1 produced an order of magnitude greater M, but produced about a third of the total amount of flashes compared to S2. 
Expectations based on the non-inductive charging (NIC) theory suggest that the M-to-F ratio (M/F) should be stable from storm-to-storm, amongst other factors, all else being equal. Further investigation revealed that the mean mixed-phase Z was 11 dB higher in S1 compared to S2, suggesting larger diameters and lower concentrations of ice particles in S1. Reduction by an order of magnitude of the intercept parameter (N0) of an assumed exponential ice particle size distribution within the Z-M relation for S1 resulted in a proportional reduction in S1's inferred M and therefore a more comparable M/F ratio between the storms. Flash statistics between S1 and S2 revealed the following: S1 produced 1.92 flashes/minute and a total of 102 flashes, while S2 produced 3.45 flashes/minute and a total of 307 flashes. On average, S1 (S2) produced 212 (78) sources per flash and an average flash area of 89.53 km2 (53.85 km2). Thus, S1 produced fewer flashes, a lower F, but more sources per flash and larger flash areas as compared to S2. Ongoing analysis is exploring the tuning of N0 within the Z-M relation by the mean Z in the mixed-phase zone. The suitability of various M estimates and other radar properties (graupel volume, ice fluxes, anvil ice mass) for parameterizing F, flash area and LNOX will be investigated on different storm types across NA.
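The minimum convex polygon ("convex hull") area computation described above can be sketched in pure Python; the point set below is a toy stand-in for a flash's mapped sources, not DC3 data:

```python
# Minimum convex polygon ("convex hull") area of a set of 2D points,
# as used above to bound the mapped sources of a lightning flash.
# Sketch: Andrew's monotone chain hull + shoelace area, pure Python.

def cross(o, a, b):
    """z-component of (a - o) x (b - o); > 0 means a left turn."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def convex_hull(points):
    """Return hull vertices in counter-clockwise order."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

def polygon_area(vertices):
    """Shoelace formula; vertices in order, either orientation."""
    n = len(vertices)
    s = sum(vertices[i][0] * vertices[(i + 1) % n][1]
            - vertices[(i + 1) % n][0] * vertices[i][1] for i in range(n))
    return abs(s) / 2.0

# Toy "flash sources" in km coordinates: the hull is the unit square.
sources = [(0, 0), (1, 0), (1, 1), (0, 1), (0.5, 0.5), (0.2, 0.7)]
hull = convex_hull(sources)
print(polygon_area(hull))  # 1.0 km^2
```

The same hull-plus-area routine is what minimum-convex-polygon home range estimators in the later records compute over animal location fixes.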

  19. Home range and use of habitat of western yellow-billed cuckoos on the middle Rio Grande, New Mexico

    USGS Publications Warehouse

    Sechrist, Juddson; Ahlers, Darrell; Potak Zehfuss, Katherine; Doster, Robert; Paxton, Eben H.; Ryan, Vicky M.

    2013-01-01

    The western yellow-billed cuckoo (Coccyzus americanus occidentalis) is a Distinct Population Segment that has been proposed for listing under the Endangered Species Act, yet very little is known about its spatial use on the breeding grounds. We implemented a study, using radio telemetry, of home range and use of habitat for breeding cuckoos along the Middle Rio Grande in central New Mexico in 2007 and 2008. Nine of 13 cuckoos were tracked for sufficient time to generate estimates of home range. Overall size of home ranges for the 2 years was 91 ha for a minimum-convex-polygon estimate and 62 ha for a 95%-kernel-home-range estimate. Home ranges varied considerably among individuals, highlighting variability in spatial use by cuckoos. Additionally, use of habitat differed between core areas and overall home ranges, but the differences were nonsignificant. Home ranges calculated for western yellow-billed cuckoos on the Middle Rio Grande are larger than those in other southwestern riparian areas. Based on calculated home ranges and availability of riparian habitat in the study area, we estimate that the study area is capable of supporting 82-99 nonoverlapping home ranges of cuckoos. Spatial data from this study should contribute to the understanding of the requirements of area and habitat of this species for management of resources and help facilitate recovery if a listing occurs.

  20. An Innovative Context-Based Crystal-Growth Activity Space Method for Environmental Exposure Assessment: A Study Using GIS and GPS Trajectory Data Collected in Chicago.

    PubMed

    Wang, Jue; Kwan, Mei-Po; Chai, Yanwei

    2018-04-09

    Scholars in the fields of health geography, urban planning, and transportation studies have long attempted to understand the relationships among human movement, environmental context, and accessibility. One fundamental question for this research area is how to measure individual activity space, which is an indicator of where and how people have contact with their social and physical environments. Conventionally, standard deviational ellipses, road network buffers, minimum convex polygons, and kernel density surfaces have been used to represent people's activity space, but they all have shortcomings. Inconsistent findings of the effects of environmental exposures on health behaviors/outcomes suggest that the reliability of existing studies may be affected by the uncertain geographic context problem (UGCoP). This paper proposes the context-based crystal-growth activity space as an innovative method for generating individual activity space based on both GPS trajectories and the environmental context. This method not only considers people's actual daily activity patterns based on GPS tracks but also takes into account the environmental context which either constrains or encourages people's daily activity. Using GPS trajectory data collected in Chicago, the results indicate that the proposed new method generates more reasonable activity space when compared to other existing methods. This can help mitigate the UGCoP in environmental health studies.

  1. Use of satellite telemetry for study of a gyrfalcon in Greenland

    USGS Publications Warehouse

    Klugman, S.S.; Fuller, M.R.; Howey, P.W.; Yates, M.A.; Oar, J.J.; Seegar, J.M.; Seegar, W.S.; Mattox, G.M.; Maechtle, T.L.

    1993-01-01

    Long-term research in Greenland has yielded 18 years of incidental sightings and 2 years of surveys and observations of gyrfalcons (Falco rusticolus) around Sondrestromfjord, Greenland. Gyrfalcons nest on cliffs along fjords and near rivers and lakes throughout our 2590 sq. km study area. Nestlings are present mid-June to July. In 1990, we marked one adult female gyrfalcon with a 65 g radio-transmitter to obtain location estimates via the ARGOS polar-orbiting satellite system. The unit transmitted 8 hours/day every two days. We obtained 145 locations during 5 weeks of the nestling and fledgling stages of breeding. We collected 1-9 locations/day, with a mean of 4/day. We calculated home range estimates based on the Minimum Convex Polygon (MCP) and Harmonic Mean (HM) methods and tested subsets of the data based on location quality and number of transmission hours per day. Home range estimated by MCP using higher-quality locations was approximately 589 sq. km. Home range estimates were larger when lower-quality locations were included in the estimates. Estimates based on data collected for 4 hours/day were similar to those for 8 hours/day. In the future, it might be possible to extend battery life of the transmitters by reducing the number of transmission hours/day. A longer-lived transmitter could provide information on movements and home ranges throughout the year.

  2. Modelling ranging behaviour of female orang-utans: a case study in Tuanan, Central Kalimantan, Indonesia.

    PubMed

    Wartmann, Flurina M; Purves, Ross S; van Schaik, Carel P

    2010-04-01

    Quantification of the spatial needs of individuals and populations is vitally important for management and conservation. Geographic information systems (GIS) have recently become important analytical tools in wildlife biology, improving our ability to understand animal movement patterns, especially when very large data sets are collected. This study aims at combining the field of GIS with primatology to model and analyse space-use patterns of wild orang-utans. Home ranges of female orang-utans in the Tuanan Mawas forest reserve in Central Kalimantan, Indonesia were modelled with kernel density estimation methods. Kernel results were compared with minimum convex polygon estimates, and were found to perform better, because they were less sensitive to sample size and produced more reliable estimates. Furthermore, daily travel paths were calculated from 970 complete follow days. Annual ranges for the resident females were approximately 200 ha and remained stable over several years; total home range size was estimated to be 275 ha. On average, each female shared a third of her home range with each neighbouring female. Orang-utan females in Tuanan built their night nest on average 414 m away from the morning nest, whereas average daily travel path length was 777 m. A significant effect of fruit availability on day path length was found. Sexually active females covered longer distances per day and may also temporarily expand their ranges.
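A minimal fixed-bandwidth kernel density estimate of the kind compared against minimum convex polygons above can be sketched as follows; the bandwidth, grid resolution, and location fixes are illustrative assumptions, not values from the Tuanan study:

```python
# Minimal fixed-bandwidth 2D Gaussian kernel density estimate (KDE) of a
# utilization distribution, with a grid-based 95% "home range" area -- the
# kind of quantity compared with minimum convex polygon estimates above.
# Bandwidth, grid cell size, and fixes are illustrative assumptions.
import math

def kde(x, y, fixes, h):
    """Density at (x, y) from location fixes; Gaussian kernel, bandwidth h."""
    s = sum(math.exp(-((x - fx) ** 2 + (y - fy) ** 2) / (2 * h * h))
            for fx, fy in fixes)
    return s / (len(fixes) * 2 * math.pi * h * h)

def utilization_area(fixes, h, cell=0.1, pad=3.0, level=0.95):
    """Area of the smallest set of grid cells holding `level` of the mass."""
    xs = [p[0] for p in fixes]
    ys = [p[1] for p in fixes]
    x0, x1 = min(xs) - pad * h, max(xs) + pad * h
    y0, y1 = min(ys) - pad * h, max(ys) + pad * h
    dens = []
    x = x0
    while x < x1:
        y = y0
        while y < y1:
            dens.append(kde(x + cell / 2, y + cell / 2, fixes, h))
            y += cell
        x += cell
    dens.sort(reverse=True)
    total = sum(dens)  # grid mass used as the normalizing total
    acc, cells = 0.0, 0
    for d in dens:
        acc += d
        cells += 1
        if acc >= level * total:
            break
    return cells * cell * cell  # number of cells times cell area

fixes = [(0.0, 0.0), (1.0, 0.2), (0.4, 0.9), (0.8, 0.5), (0.1, 0.6)]
area95 = utilization_area(fixes, h=0.3)
```

Unlike the MCP, which always returns the area of one convex polygon around all fixes, this density-based area shrinks toward the actually used cells, which is why kernel estimates are less sensitive to outlying fixes.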

  3. An Innovative Context-Based Crystal-Growth Activity Space Method for Environmental Exposure Assessment: A Study Using GIS and GPS Trajectory Data Collected in Chicago

    PubMed Central

    Chai, Yanwei

    2018-01-01

    Scholars in the fields of health geography, urban planning, and transportation studies have long attempted to understand the relationships among human movement, environmental context, and accessibility. One fundamental question for this research area is how to measure individual activity space, which is an indicator of where and how people have contact with their social and physical environments. Conventionally, standard deviational ellipses, road network buffers, minimum convex polygons, and kernel density surfaces have been used to represent people’s activity space, but they all have shortcomings. Inconsistent findings of the effects of environmental exposures on health behaviors/outcomes suggest that the reliability of existing studies may be affected by the uncertain geographic context problem (UGCoP). This paper proposes the context-based crystal-growth activity space as an innovative method for generating individual activity space based on both GPS trajectories and the environmental context. This method not only considers people’s actual daily activity patterns based on GPS tracks but also takes into account the environmental context which either constrains or encourages people’s daily activity. Using GPS trajectory data collected in Chicago, the results indicate that the proposed new method generates more reasonable activity space when compared to other existing methods. This can help mitigate the UGCoP in environmental health studies. PMID:29642530

  4. An Origami Approximation to the Cosmic Web

    NASA Astrophysics Data System (ADS)

    Neyrinck, Mark C.

    2016-10-01

    The powerful Lagrangian view of structure formation was essentially introduced to cosmology by Zel'dovich. In the current cosmological paradigm, a dark-matter-sheet 3D manifold, inhabiting 6D position-velocity phase space, was flat (with vanishing velocity) at the big bang. Afterward, gravity stretched and bunched the sheet together in different places, forming a cosmic web when projected to the position coordinates. Here, I explain some properties of an origami approximation, in which the sheet does not stretch or contract (an assumption that is false in general), but is allowed to fold. Even without stretching, the sheet can form an idealized cosmic web, with convex polyhedral voids separated by straight walls and filaments, joined by convex polyhedral nodes. The nodes form in `polygonal' or `polyhedral' collapse, somewhat like spherical/ellipsoidal collapse, except incorporating simultaneous filament and wall formation. The origami approximation allows phase-space geometries of nodes, filaments, and walls to be more easily understood, and may aid in understanding spin correlations between nearby galaxies. This contribution explores kinematic origami-approximation models giving velocity fields for the first time.

  5. Thermal tolerance of the invasive Belonesox belizanus, pike killifish, throughout ontogeny.

    PubMed

    Kerfoot, James Roy

    2012-06-01

    The goal of this study was to characterize the variability of thermal tolerances between life-history stages of the invasive Belonesox belizanus and attempt to describe the most likely stage of dispersal across south Florida. In the laboratory, individuals were acclimated to three temperatures (20, 25, or 30°C). Upper and lower lethal thermal limits and temperatures at which feeding ceased were measured for neonates, juveniles, and adults. Thermal tolerance polygons were developed to represent the thermal tolerance range of each life-history stage. Results indicated that across acclimation temperatures upper lethal thermal limits were similar for all three stages (38°C). However, minimum lethal thermal limits were significantly different at the 30°C acclimation temperature, where juveniles (9°C) had an approximately 2.0°C and 4.0°C lower minimum lethal thermal limit compared with adults and neonates, respectively. According to thermal tolerance polygons, juveniles had an average tolerance polygonal area almost 20°C² larger than adults, indicating the greatest thermal tolerance of the three life-history stages. Variation in cessation of feeding temperatures indicated no significant difference between juveniles and adults. Overall, results of this study imply that juvenile B. belizanus may be equipped with the physiological flexibility to exercise habitat choice and reduce potential intraspecific competition with adults for limited food resources. Given its continued dispersal, the minimum thermal limit of juveniles may aid in continued dispersal of this species, especially during average winter temperatures throughout Florida where juveniles could act to preserve remnant populations until seasonal temperatures increase. © 2012 WILEY PERIODICALS, INC.

  6. Man-made objects cuing in satellite imagery

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Skurikhin, Alexei N

    2009-01-01

    We present a multi-scale framework for man-made structure cuing in satellite image regions. The approach is based on a hierarchical image segmentation followed by structural analysis. A hierarchical segmentation produces an image pyramid that contains a stack of irregular image partitions, represented as polygonized pixel patches, of successively reduced levels of detail (LODs). We start from the over-segmented image represented by polygons attributed with spectral and texture information. The image is represented as a proximity graph with vertices corresponding to the polygons and edges reflecting polygon relations. This is followed by iterative graph contraction based on Borůvka's Minimum Spanning Tree (MST) construction algorithm. The graph contractions merge the patches based on their pairwise spectral and texture differences. Concurrently with the construction of the irregular image pyramid, structural analysis is done on the agglomerated patches. Man-made object cuing is based on the analysis of shape properties of the constructed patches and their spatial relations. The presented framework can be used as a pre-scanning tool for wide-area monitoring to quickly guide further analysis to regions of interest.
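The Borůvka-style MST construction that drives the patch merging can be illustrated on a toy graph; vertices stand for polygon patches and weights for pairwise spectral/texture dissimilarity (the numbers are made up, not the paper's features):

```python
# Toy sketch of Boruvka's MST construction, the scheme used above to merge
# adjacent image patches. Each round, every component picks its cheapest
# outgoing edge and the chosen edges are contracted simultaneously.

def boruvka_mst(n, edges):
    """n vertices, edges as (weight, u, v) tuples. Returns list of MST edges."""
    parent = list(range(n))

    def find(x):  # union-find with path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    mst = []
    while len(mst) < n - 1:
        cheapest = {}  # component root -> cheapest edge leaving it
        for w, u, v in edges:
            ru, rv = find(u), find(v)
            if ru == rv:
                continue
            for r in (ru, rv):
                if r not in cheapest or w < cheapest[r][0]:
                    cheapest[r] = (w, u, v)
        if not cheapest:
            break  # graph is disconnected
        for w, u, v in cheapest.values():
            ru, rv = find(u), find(v)
            if ru != rv:  # skip duplicates chosen by both endpoints
                parent[ru] = rv
                mst.append((w, u, v))
    return mst

# 4 patches; weight = dissimilarity between adjacent patches.
edges = [(1, 0, 1), (4, 1, 2), (3, 0, 2), (2, 2, 3), (5, 1, 3)]
mst = boruvka_mst(4, edges)
print(sorted(mst))  # [(1, 0, 1), (2, 2, 3), (3, 0, 2)]
```

In the segmentation setting, each contraction round corresponds to one coarser pyramid level: the most similar neighboring patches merge first, in parallel across the whole image.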

  7. Efficient Compressed Sensing Based MRI Reconstruction using Nonconvex Total Variation Penalties

    NASA Astrophysics Data System (ADS)

    Lazzaro, D.; Loli Piccolomini, E.; Zama, F.

    2016-10-01

    This work addresses the problem of Magnetic Resonance Image reconstruction from highly sub-sampled measurements in the Fourier domain. It is modeled as a constrained minimization problem, where the objective function is a non-convex function of the gradient of the unknown image and the constraints are given by the data fidelity term. We propose an algorithm, Fast Non Convex Reweighted (FNCR), where the constrained problem is solved by a reweighting scheme, as a strategy to overcome the non-convexity of the objective function, with an adaptive adjustment of the penalization parameter. We propose a fast iterative algorithm and prove that it converges to a local minimum because the constrained problem satisfies the Kurdyka-Lojasiewicz property. Moreover, the adaptation of the non-convex ℓ0 approximation and penalization parameters by means of a continuation technique allows us to obtain good-quality solutions, avoiding getting stuck in unwanted local minima. Some numerical experiments performed on MRI sub-sampled data show the efficiency of the algorithm and the accuracy of the solution.

  8. Spatiotemporal requirements of the Hainan gibbon: Does home range constrain recovery of the world's rarest ape?

    PubMed

    Bryant, Jessica V; Zeng, Xingyuan; Hong, Xiaojiang; Chatterjee, Helen J; Turvey, Samuel T

    2017-03-01

    Conservation management requires an evidence-based approach, as uninformed decisions can signify the difference between species recovery and loss. The Hainan gibbon, the world's rarest ape, reportedly exploits the largest home range of any gibbon species, with these apparently large spatial requirements potentially limiting population recovery. However, previous home range assessments rarely reported survey methods, effort, or analytical approaches, hindering critical evaluation of estimate reliability. For extremely rare species where data collection is challenging, it also is unclear what impact such limitations have on estimating home range requirements. We re-evaluated Hainan gibbon spatial ecology using 75 hr of observations from 35 contact days over 93 field-days across dry (November 2010-February 2011) and wet (June 2011-September 2011) seasons. We calculated home range area for three social groups (N = 21 individuals) across the sampling period, seasonal estimates for one group (based on 24 days of observation; 12 days per season), and between-group home range overlap using multiple approaches (Minimum Convex Polygon, Kernel Density Estimation, Local Convex Hull, Brownian Bridge Movement Model), and assessed estimate reliability and representativeness using three approaches (Incremental Area Analysis, spatial concordance, and exclusion of expected holes). We estimated a yearly home range of 1-2 km2, with 1.49 km2 closest to the median of all estimates. Although Hainan gibbon spatial requirements are relatively large for gibbons, our new estimates are smaller than previous estimates used to explain the species' limited recovery, suggesting that habitat availability may be less important in limiting population growth. We argue that other ecological, genetic, and/or anthropogenic factors are more likely to constrain Hainan gibbon recovery, and conservation attention should focus on elucidating and managing these factors. 
Re-evaluation reveals Hainan gibbon home range as c. 1-2 km2. Hainan gibbon home range is, therefore, similar to that of other Nomascus gibbons. Limited data for extremely rare species do not necessarily prevent derivation of robust home range estimates. © 2016 Wiley Periodicals, Inc.

  9. Autonomous optimal trajectory design employing convex optimization for powered descent on an asteroid

    NASA Astrophysics Data System (ADS)

    Pinson, Robin Marie

    Mission proposals that land spacecraft on asteroids are becoming increasingly popular. However, in order to have a successful mission the spacecraft must reliably and softly land at the intended landing site with pinpoint precision. The problem under investigation is how to design a propellant (fuel) optimal powered descent trajectory that can be quickly computed onboard the spacecraft, without interaction from ground control. The goal is to autonomously design the optimal powered descent trajectory onboard the spacecraft immediately prior to the descent burn for use during the burn. Compared to a planetary powered landing problem, the challenges that arise from designing an asteroid powered descent trajectory include complicated nonlinear gravity fields, small rotating bodies, and low thrust vehicles. The nonlinear gravity fields cannot be represented by a constant gravity model nor a Newtonian model. The trajectory design algorithm needs to be robust and efficient to guarantee a designed trajectory and complete the calculations in a reasonable time frame. This research investigates the following questions: Can convex optimization be used to design the minimum propellant powered descent trajectory for a soft landing on an asteroid? Is this method robust and reliable to allow autonomy onboard the spacecraft without interaction from ground control? This research designed a convex optimization based method that rapidly generates the propellant optimal asteroid powered descent trajectory. The solution to the convex optimization problem is the thrust magnitude and direction, which designs and determines the trajectory. The propellant optimal problem was formulated as a second order cone program, a subset of convex optimization, through relaxation techniques by including a slack variable, change of variables, and incorporation of the successive solution method. 
Convex optimization solvers, especially second order cone programs, are robust, reliable, and are guaranteed to find the global minimum provided one exists. In addition, an outer optimization loop using Brent's method determines the optimal flight time corresponding to the minimum propellant usage over all flight times. Inclusion of additional trajectory constraints, solely vertical motion near the landing site and glide slope, were evaluated. Through a theoretical proof involving the Minimum Principle from Optimal Control Theory and the Karush-Kuhn-Tucker conditions it was shown that the relaxed problem is identical to the original problem at the minimum point. Therefore, the optimal solution of the relaxed problem is an optimal solution of the original problem, referred to as lossless convexification. A key finding is that this holds for all levels of gravity model fidelity. The designed thrust magnitude profiles were the bang-bang predicted by Optimal Control Theory. The first high fidelity gravity model employed was the 2x2 spherical harmonics model assuming a perfect triaxial ellipsoid and placement of the coordinate frame at the asteroid's center of mass and aligned with the semi-major axes. The spherical harmonics model is not valid inside the Brillouin sphere and this becomes relevant for irregularly shaped asteroids. Then, a higher fidelity model was implemented combining the 4x4 spherical harmonics gravity model with the interior spherical Bessel gravity model. All gravitational terms in the equations of motion are evaluated with the position vector from the previous iteration, creating the successive solution method. Methodology success was shown by applying the algorithm to three triaxial ellipsoidal asteroids with four different rotation speeds using the 2x2 gravity model. Finally, the algorithm was tested using the irregularly shaped asteroid, Castalia.

  10. The Uncertain Geographic Context Problem in the Analysis of the Relationships between Obesity and the Built Environment in Guangzhou

    PubMed Central

    Zhao, Pengxiang; Zhou, Suhong

    2018-01-01

    Traditionally, static units of analysis such as administrative units are used when studying obesity. However, using these fixed contextual units ignores environmental influences experienced by individuals in areas beyond their residential neighborhood and may render the results unreliable. This problem has been articulated as the uncertain geographic context problem (UGCoP). This study investigates the UGCoP through exploring the relationships between the built environment and obesity based on individuals’ activity space. First, a survey was conducted to collect individuals’ daily activity and weight information in Guangzhou in January 2016. Then, the data were used to calculate and compare the values of several built environment variables based on seven activity space delineations, including home buffers, workplace buffers (WPB), fitness place buffers (FPB), the standard deviational ellipse at two standard deviations (SDE2), the weighted standard deviational ellipse at two standard deviations (WSDE2), the minimum convex polygon (MCP), and road network buffers (RNB). Lastly, we conducted comparative analysis and regression analysis based on different activity space measures. The results indicate that significant differences exist between variables obtained with different activity space delineations. Further, regression analyses show that the activity space delineations used in the analysis have a significant influence on the results concerning the relationships between the built environment and obesity. The study sheds light on the UGCoP in analyzing the relationships between obesity and the built environment. PMID:29439392
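One of the activity-space delineations compared above, the standard deviational ellipse (SDE2 when scaled by two standard deviations), can be sketched with the usual centrographic formulas; the activity points below are invented for illustration:

```python
# Sketch of the standard deviational ellipse (SDE), one of the activity-space
# delineations compared above; scaling by n_std = 2 gives SDE2. Formulas are
# the standard centrographic ones; the activity points are made up.
import math

def sde(points, n_std=2.0):
    """Return (centre, rotation angle, semi-axis sx, semi-axis sy)."""
    n = len(points)
    mx = sum(p[0] for p in points) / n
    my = sum(p[1] for p in points) / n
    dx = [p[0] - mx for p in points]
    dy = [p[1] - my for p in points]
    sxx = sum(d * d for d in dx)
    syy = sum(d * d for d in dy)
    sxy = sum(a * b for a, b in zip(dx, dy))
    # Orientation of the major axis (standard SDE rotation formula).
    theta = 0.5 * math.atan2(2 * sxy, sxx - syy)
    c, s = math.cos(theta), math.sin(theta)
    # Standard deviations along the rotated axes.
    sig_x = math.sqrt(sum((c * a + s * b) ** 2 for a, b in zip(dx, dy)) / n)
    sig_y = math.sqrt(sum((-s * a + c * b) ** 2 for a, b in zip(dx, dy)) / n)
    return (mx, my), theta, n_std * sig_x, n_std * sig_y

# Perfectly collinear activity points: the ellipse degenerates to a segment.
pts = [(2, 1), (4, 2), (6, 3), (8, 4), (10, 5)]
centre, theta, ax, ay = sde(pts)
area = math.pi * ax * ay  # ellipse area; 0 here because ay = 0
```

The minimum convex polygon and network-buffer delineations in the same comparison would instead be computed from the hull of, or a buffer around, the same point set, which is exactly why the built-environment variables differ between delineations.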

  11. Spatial Ecology of Blanding’s Turtles (Emydoidea blandingii) in Southcentral New Hampshire with Implications to Road Mortality

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Walston, Leroy J.; Najjar, Stephen J.; LaGory, Kirk E.

    Understanding the spatial ecology and habitat requirements of rare turtle species and the factors that threaten their populations is important for the success of long-term conservation programs. We present results of an eight-year field study in which we used radiotelemetry to monitor the activity and habitat use of 23 adult (male, n = 7; female, n = 16) Blanding’s turtles in southcentral New Hampshire. We found that females occupied home ranges (as defined by minimum convex polygons) that were approximately two times larger than the home ranges of males. Despite the sex difference in home range size, we found no sex difference in core area size (defined as the 50% kernel density estimate). We found that activity patterns varied by season, with increased activity each month after hibernation, and peak activity coinciding with the late spring-early summer nesting season. We observed sex-based and seasonal differences in wetland use. Males appeared to prefer emergent and scrub-shrub wetlands in each season, whereas females preferred scrub-shrub wetlands in spring and ponds in summer and fall. We identified road mortality risk as a potentially important threat for this population because females crossed roads ten times more frequently than males (based on proportion of observations). The preservation of wetland networks, as well as the implementation of measures to minimize road mortality, are important considerations for the long-term persistence of this population.

  12. Movements of wild pigs in Louisiana and Mississippi, 2011-13

    USGS Publications Warehouse

    Hartley, Stephen B.; Goatcher, Buddy L.; Sapkota, Sijan

    2015-01-01

    The prolific breeding capability, behavioral adaptation, and adverse environmental impacts of invasive wild pigs (Sus scrofa) have increased efforts towards managing their populations and understanding their movements. Currently, little is known about wild pig populations and movements in Louisiana and Mississippi. From 2011 to 2013, the U.S. Geological Survey investigated spatial and temporal movements of wild pigs in both marsh and nonmarsh physiographic regions. Twenty-one Global Positioning System satellite telemetry tracking collars were fitted to adult wild pigs captured with trained dogs and released. Coordinates of their locations were recorded hourly. We collected 16,674 hourly data points, including date, time, air temperature, and position, during the 3-year study. Solar and lunar attributes, such as sun and moon phases and azimuth angles, were not significantly related to wild pig movements. Movements were significantly negatively correlated with air temperature. Differences in movements between seasons and years were observed. On average, movements of boars were significantly greater than those of sows. Average home range, determined by using a minimum convex polygon as a proxy, was 911 hectares for boars, whereas average home range for sows was 116 hectares. Wild pigs in marsh habitat traveled shorter distances relative to those from more arid, nonmarsh habitats. Overall, results of this study indicate that wild pigs in Louisiana and Mississippi have small home ranges. These small home ranges suggest that natural movements have not been a major factor in the recent broad-scale range expansion observed in this species in the United States.

  13. SU-F-T-340: Direct Editing of Dose Volume Histograms: Algorithms and a Unified Convex Formulation for Treatment Planning with Dose Constraints

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ungun, B; Stanford University School of Medicine, Stanford, CA; Fu, A

    2016-06-15

    Purpose: To develop a procedure for including dose constraints in convex programming-based approaches to treatment planning, and to support dynamic modification of such constraints during planning. Methods: We present a mathematical approach that allows mean dose, maximum dose, minimum dose and dose volume (i.e., percentile) constraints to be appended to any convex formulation of an inverse planning problem. The first three constraint types are convex and readily incorporated. Dose volume constraints are not convex, however, so we introduce a convex restriction that is related to CVaR-based approaches previously proposed in the literature. To compensate for the conservatism of this restriction, we propose a new two-pass algorithm that solves the restricted problem on a first pass and uses this solution to form exact constraints on a second pass. In another variant, we introduce slack variables for each dose constraint to prevent the problem from becoming infeasible when the user specifies an incompatible set of constraints. We implement the proposed methods in Python using the convex programming package cvxpy in conjunction with the open source convex solvers SCS and ECOS. Results: We show, for several cases taken from the clinic, that our proposed method meets specified constraints (often with margin) when they are feasible. Constraints are met exactly when we use the two-pass method, and infeasible constraints are replaced with the nearest feasible constraint when slacks are used. Finally, we introduce ConRad, a Python-embedded free software package for convex radiation therapy planning. ConRad implements the methods described above and offers a simple interface for specifying prescriptions and dose constraints. Conclusion: This work demonstrates the feasibility of using modifiable dose constraints in a convex formulation, making it practical to guide the treatment planning process with interactively specified dose constraints. 
This work was supported by the Stanford BioX Graduate Fellowship and NIH Grant 5R01CA176553.
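The dose-volume (percentile) constraint and its conservative CVaR-style convex restriction described in the abstract can be illustrated with a simple feasibility check on hypothetical voxel doses; this sketches the constraint logic only, not the ConRad implementation:

```python
# Sketch of the dose-volume ("percentile") constraint discussed above and a
# conservative CVaR-style restriction of it. Voxel doses are hypothetical.

def dvh_ok(doses, d_max, frac):
    """Exact (non-convex) check: at most `frac` of voxels exceed d_max."""
    over = sum(1 for d in doses if d > d_max)
    return over <= frac * len(doses)

def cvar_ok(doses, d_max, frac):
    """Conservative convex surrogate: mean of the worst `frac` tail <= d_max.
    cvar_ok(...) == True implies dvh_ok(...) == True, but not conversely,
    which is why the two-pass algorithm above tightens to exact constraints."""
    k = max(1, int(round(frac * len(doses))))
    worst = sorted(doses, reverse=True)[:k]
    return sum(worst) / k <= d_max

doses = [10, 20, 30, 35, 50, 52, 55, 58, 60, 62]  # Gy, 10 voxels
# "At most 30% of voxels above 55 Gy": 3 voxels (58, 60, 62) exceed -> holds.
print(dvh_ok(doses, 55, 0.3))   # True
# CVaR surrogate: mean of worst 3 voxels = (62 + 60 + 58)/3 = 60 > 55.
print(cvar_ok(doses, 55, 0.3))  # False: conservatively rejected
```

The gap between the two answers on the same plan is exactly the conservatism the abstract's two-pass scheme compensates for: pass one enforces the CVaR surrogate, pass two replaces it with the exact voxel-count constraint.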

  14. Quasi-conformal mapping with genetic algorithms applied to coordinate transformations

    NASA Astrophysics Data System (ADS)

    González-Matesanz, F. J.; Malpica, J. A.

    2006-11-01

    In this paper, piecewise conformal mapping for the transformation of geodetic coordinates is studied. An algorithm, which is an improved version of a previous algorithm published by Lippus [2004a. On some properties of piecewise conformal mappings. Eesti NSV Teaduste Akadeemia Toimetised Füüsika-Matemaatika 53, 92-98; 2004b. Transformation of coordinates using piecewise conformal mapping. Journal of Geodesy 78 (1-2), 40] is presented; the improvement comes from using a genetic algorithm to partition the complex plane into convex polygons, whereas the original one did so manually. As a case study, the method is applied to the transformation between the Spanish datums ED50 and ETRS89, and both its advantages and disadvantages are discussed herein.

  15. 2D automatic body-fitted structured mesh generation using advancing extraction method

    NASA Astrophysics Data System (ADS)

    Zhang, Yaoxin; Jia, Yafei

    2018-01-01

    This paper presents an automatic mesh generation algorithm for body-fitted structured meshes in Computational Fluid Dynamics (CFD) analysis using the Advancing Extraction Method (AEM). The method is applicable to two-dimensional domains with complex geometries, which have a hierarchical tree-like topology with extrusion-like structures (i.e., branches or tributaries) and intrusion-like structures (i.e., peninsulas or dikes). With the AEM, the hierarchical levels of sub-domains can be identified, and the block boundary of each sub-domain, in convex polygon shape, in each level can be extracted in an advancing scheme. In this paper, several examples are used to illustrate the effectiveness and applicability of the proposed algorithm for automatic structured mesh generation, and the implementation of the method.

  16. Bypassing the Limits of L1 Regularization: Convex Sparse Signal Processing Using Non-Convex Regularization

    NASA Astrophysics Data System (ADS)

    Parekh, Ankit

Sparsity has become the basis of some important signal processing methods over the last ten years. Many signal processing problems (e.g., denoising, deconvolution, non-linear component analysis) can be expressed as inverse problems. Sparsity is invoked through the formulation of an inverse problem with suitably designed regularization terms. The regularization terms alone encode sparsity into the problem formulation. Often, the ℓ1 norm is used to induce sparsity, so much so that ℓ1 regularization is considered to be `modern least-squares'. The use of the ℓ1 norm as a sparsity-inducing regularizer leads to a convex optimization problem, which has several benefits: the absence of extraneous local minima and a well-developed theory of globally convergent algorithms, even for large-scale problems. Convex regularization via the ℓ1 norm, however, tends to under-estimate the non-zero values of sparse signals. In order to estimate the non-zero values more accurately, non-convex regularization is often favored over convex regularization. However, non-convex regularization generally leads to non-convex optimization, which suffers from numerous issues: convergence may be guaranteed only to a stationary point, problem-specific parameters may be difficult to set, and the solution is sensitive to the initialization of the algorithm. The first part of this thesis is aimed toward combining the benefits of non-convex regularization and convex optimization to estimate sparse signals more effectively. To this end, we propose to use parameterized non-convex regularizers with designated non-convexity and provide a range for the non-convex parameter so as to ensure that the objective function is strictly convex. By ensuring convexity of the objective function (the sum of the data-fidelity term and the non-convex regularizer), we can make use of a wide variety of convex optimization algorithms to obtain the unique global minimum reliably.
The second part of this thesis proposes a non-linear signal decomposition technique for an important biomedical signal processing problem: the detection of sleep spindles and K-complexes in human sleep electroencephalography (EEG). We propose a non-linear model for the EEG consisting of three components: (1) a transient (sparse piecewise constant) component, (2) a low-frequency component, and (3) an oscillatory component. The oscillatory component admits a sparse time-frequency representation. Using a convex objective function, we propose a fast non-linear optimization algorithm to estimate the three components in the proposed signal model. The low-frequency and oscillatory components are then used to estimate the K-complexes and sleep spindles, respectively. The proposed detection method is shown to outperform several state-of-the-art automated sleep spindle detection methods.
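The bias gap between convex ℓ1 and convexity-preserving non-convex regularization is easiest to see in the scalar case. The sketch below is illustrative only, not the thesis's exact formulation: it uses the standard minimax-concave (MC) penalty φ(x; a), for which the scalar denoising cost 0.5(y − x)² + λφ(x; a) remains strictly convex whenever aλ < 1, and whose minimizer is the "firm" threshold.

```python
import math

def soft_threshold(y, lam):
    """Prox of the l1 norm: every surviving value is shrunk by lam (biased)."""
    return math.copysign(max(abs(y) - lam, 0.0), y)

def firm_threshold(y, lam, a):
    """Prox of the minimax-concave (MC) penalty with non-convexity parameter a.
    The scalar cost 0.5*(y - x)**2 + lam*phi(x; a) stays strictly convex when
    a*lam < 1, the kind of parameter range the thesis exploits."""
    assert a * lam < 1.0, "non-convexity parameter outside the convexity range"
    if abs(y) <= lam:
        return 0.0                      # small values are set to zero, as with soft
    if abs(y) >= 1.0 / a:
        return float(y)                 # large values pass through unbiased
    return math.copysign((abs(y) - lam) / (1.0 - a * lam), y)
```

For λ = 1 and a = 0.4, soft thresholding maps y = 3 to 2 (a bias of λ), while the firm threshold returns 3 exactly; intermediate values are shrunk less than under soft thresholding.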

  17. Hierarchical image feature extraction by an irregular pyramid of polygonal partitions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Skurikhin, Alexei N

    2008-01-01

We present an algorithmic framework for hierarchical image segmentation and feature extraction. We build a successive fine-to-coarse hierarchy of irregular polygonal partitions of the original image. This multiscale hierarchy forms the basis for object-oriented image analysis. The framework incorporates the Gestalt principles of visual perception, such as proximity and closure, and exploits spectral and textural similarities of polygonal partitions, while iteratively grouping them until dissimilarity criteria are exceeded. Seed polygons are built upon a triangular mesh composed of irregularly sized triangles, whose spatial arrangement is adapted to the image content. This is achieved by building the triangular mesh on top of detected spectral discontinuities (such as edges), which form a network of constraints for the Delaunay triangulation. The image is then represented as a spatial network in the form of a graph with vertices corresponding to the polygonal partitions and edges reflecting their relations. The iterative agglomeration of partitions into object-oriented segments is formulated as Minimum Spanning Tree (MST) construction. An important characteristic of the approach is that the agglomeration of polygonal partitions is constrained by the detected edges; thus the shapes of agglomerated partitions are more likely to correspond to the outlines of real-world objects. The constructed partitions and their spatial relations are characterized using spectral, textural and structural features based on proximity graphs. The framework allows searching for object-oriented features of interest across multiple levels of detail of the built hierarchy and can be generalized to the multi-criteria MST to account for multiple criteria important for an application.
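The MST-formulated agglomeration step can be sketched as a greedy Kruskal-style merge over the partition graph. This is a hedged, generic illustration: the function name, edge format, and single dissimilarity threshold are assumptions for the sketch, not the paper's multi-criteria implementation.

```python
def agglomerate(n, edges, max_dissimilarity):
    """Kruskal + union-find merge of regions 0..n-1.
    edges: list of (dissimilarity, i, j) tuples between adjacent regions.
    Edges are taken in increasing dissimilarity (the MST order) and merging
    stops once an edge exceeds the threshold, mimicking the stopping rule
    'group until dissimilarity criteria are exceeded'."""
    parent = list(range(n))

    def find(x):                       # path-halving union-find lookup
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    for w, i, j in sorted(edges):
        if w > max_dissimilarity:      # remaining edges are even larger
            break
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj            # merge the two segments
    return [find(i) for i in range(n)]  # segment label per region
```

With four regions and edges [(0.1, 0, 1), (0.2, 1, 2), (0.9, 2, 3)] and a threshold of 0.5, regions 0, 1, 2 merge into one segment while region 3 stays separate.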

  18. An improved 2D MoF method by using high order derivatives

    NASA Astrophysics Data System (ADS)

    Chen, Xiang; Zhang, Xiong

    2017-11-01

The MoF (Moment of Fluid) method is one of the most accurate approaches among various interface reconstruction algorithms. Like other second-order methods, the MoF method needs to solve an implicit optimization problem to obtain the optimal approximate interface, so an iteration process is inevitable under most circumstances. In order to solve the optimization efficiently, the properties of the objective function are worth studying. In 2D problems, the first-order derivative has been deduced and applied in previous research. In this paper, the higher-order derivatives of the objective function are deduced on the convex polygon. We show that the nth (n ≥ 2) order derivatives are discontinuous, and that the number of discontinuous points is twice the number of polygon edges. A rotation algorithm is proposed to successively calculate these discontinuous points, so that the target interval in which the optimal solution is located can be determined. Since the high-order derivatives of the objective function are continuous in the target interval, iteration schemes based on high-order derivatives can be used to improve the convergence rate. Moreover, when iterating in the target interval, the value of the objective function and its derivatives can be updated directly without explicitly solving the volume conservation equation. The direct update further improves efficiency, especially as the number of polygon edges increases. Halley's method, which is based on the first three derivatives, is applied as the iteration scheme in this paper, and the numerical results indicate that the CPU time is about half that of the previous method on the quadrilateral cell and about one sixth on the decagon cell.
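As a hedged illustration of the iteration scheme named above, the generic Halley update for a root of f is x ← x − 2ff′ / (2f′² − ff″). In the MoF setting, f would itself be the first derivative of the objective restricted to the target interval, so the scheme consumes the first three derivatives of the objective; the sketch below is the generic root-finder, not the paper's implementation.

```python
def halley(f, df, d2f, x0, tol=1e-12, max_iter=50):
    """Halley's method for f(x) = 0: cubically convergent when f is smooth
    and the starting point is reasonable. df and d2f are the first and
    second derivatives of f."""
    x = x0
    for _ in range(max_iter):
        fx, dfx, d2fx = f(x), df(x), d2f(x)
        step = 2.0 * fx * dfx / (2.0 * dfx * dfx - fx * d2fx)
        x -= step
        if abs(step) < tol:
            break
    return x
```

For example, solving x³ − 2 = 0 from x0 = 1.5 converges to the cube root of 2 in a handful of iterations.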

  19. Computation of nonparametric convex hazard estimators via profile methods.

    PubMed

    Jankowski, Hanna K; Wellner, Jon A

    2009-05-01

    This paper proposes a profile likelihood algorithm to compute the nonparametric maximum likelihood estimator of a convex hazard function. The maximisation is performed in two steps: First the support reduction algorithm is used to maximise the likelihood over all hazard functions with a given point of minimum (or antimode). Then it is shown that the profile (or partially maximised) likelihood is quasi-concave as a function of the antimode, so that a bisection algorithm can be applied to find the maximum of the profile likelihood, and hence also the global maximum. The new algorithm is illustrated using both artificial and real data, including lifetime data for Canadian males and females.
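Quasi-concavity of the profile likelihood is exactly the property that licenses an interval-shrinking search over the antimode. The sketch below is a generic stand-in (a ternary search rather than the authors' bisection, and f is any unimodal function, not the actual profiled likelihood):

```python
def maximize_unimodal(f, lo, hi, tol=1e-8):
    """Ternary search for the maximizer of a quasi-concave (unimodal)
    function on [lo, hi]. Each iteration discards the third of the interval
    that provably cannot contain the maximum."""
    while hi - lo > tol:
        m1 = lo + (hi - lo) / 3.0
        m2 = hi - (hi - lo) / 3.0
        if f(m1) < f(m2):
            lo = m1        # maximum lies in [m1, hi]
        else:
            hi = m2        # maximum lies in [lo, m2]
    return 0.5 * (lo + hi)
```

Maximizing the unimodal f(x) = −(x − 2.3)² on [0, 10] recovers the peak at 2.3; in the paper's setting, each evaluation of f would be one run of the support reduction algorithm with the antimode held fixed.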

  20. Long-term post-fire effects on spatial ecology and reproductive output of female Agassiz’s desert tortoises (Gopherus agassizii) at a wind energy facility near Palm Springs, California, USA

    USGS Publications Warehouse

    Lovich, Jeffrey E.; Ennen, Joshua R.; Madrak, Sheila V.; Loughran, Caleb L.; Meyer, Katherin P.; Arundel, Terence R.; Bjurlin, Curtis D.

    2011-01-01

    We studied the long-term response of a cohort of eight female Agassiz’s desert tortoises (Gopherus agassizii) during the first 15 years following a large fire at a wind energy generation facility near Palm Springs, California, USA. The fire burned a significant portion of the study site in 1995. Tortoise activity areas were mapped using minimum convex polygons for a proximate post-fire interval from 1997 to 2000, and a long-term post-fire interval from 2009 to 2010. In addition, we measured the annual reproductive output of eggs each year and monitored the body condition of tortoises over time. One adult female tortoise was killed by the fire and five tortoises bore exposure scars that were not fatal. Despite predictions that tortoises would make the short-distance movements from burned to nearby unburned habitats, most activity areas and their centroids remained in burned areas for the duration of the study. The percentage of activity area burned did not differ significantly between the two monitoring periods. Annual reproductive output and measures of body condition remained statistically similar throughout the monitoring period. Despite changes in plant composition, conditions at this site appeared to be suitable for survival of tortoises following a major fire. High productivity at the site may have buffered tortoises from the adverse impacts of fire if they were not killed outright. Tortoise populations at less productive desert sites may not have adequate resources to sustain normal activity areas, reproductive output, and body conditions following fire.

  1. Ecology and behavior of the Midget Faded Rattlesnake (Crotalus oreganus concolor) in Wyoming

    USGS Publications Warehouse

    Parker, J.M.; Anderson, S.H.

    2007-01-01

We conducted a three-year study to describe the ecology and behavior of the Midget Faded Rattlesnake, Crotalus oreganus concolor. We encountered 426 and telemetered 50 C. o. concolor between 2000 and 2002. We found that their primary diet was lizards (associated with rock outcrops), though they also consume small mammals and birds. They den in aggregations, although in low numbers compared to other subspecies. Movements and activity ranges were among the largest reported for rattlesnakes. Minimum convex polygon area was 117.8 ha for males, 63.9 ha for nongravid females, and 4.8 ha for gravid females. Mean distances traveled per year were 2122.0 m for males, 1956.0 m for nongravid females, and 296.7 m for gravid and postpartum females. Following emergence from hibernation, they spent several weeks shedding, often in aggregations, before migration, and migrations occurred in early summer. Most snakes made straight-line movements to and from discrete summer activity ranges where short, multidirectional movements ensued, although others made multidirectional movements throughout the active season. We observed mating behavior between 21 July and 12 August. Gravid females gave birth during the third week of August. Mean clutch size was 4.17 (range 2-7). We found that the sex ratio was skewed toward females (1:1.24), and the snakes were sexually dimorphic in size (male SVL = 44.1 cm; female SVL = 40.8 cm). Our data further illustrate the diversity within the large group of Western Rattlesnakes (Crotalus viridis). Copyright 2007 Society for the Study of Amphibians and Reptiles.
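A 100% minimum convex polygon estimate like the activity-range areas above is simply the area of the convex hull of the telemetry fixes. A minimal sketch, assuming the fixes are already projected to planar coordinates in metres (e.g., UTM); the sample data are hypothetical, not from this study:

```python
def convex_hull(points):
    """Andrew's monotone chain: hull vertices in counter-clockwise order."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts

    def cross(o, a, b):
        return (a[0]-o[0])*(b[1]-o[1]) - (a[1]-o[1])*(b[0]-o[0])

    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

def mcp_area_ha(fixes):
    """100% minimum convex polygon area, in hectares, from metre fixes."""
    hull = convex_hull(fixes)
    area = 0.0
    for (x1, y1), (x2, y2) in zip(hull, hull[1:] + hull[:1]):
        area += x1 * y2 - x2 * y1          # shoelace formula
    return abs(area) / 2.0 / 10_000.0      # m^2 -> ha
```

For example, fixes spanning a 1000 m × 1000 m square (with interior points ignored by the hull) yield an MCP of 100 ha.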

  2. Using GPS Data to Study Neighborhood Walkability and Physical Activity.

    PubMed

    Rundle, Andrew G; Sheehan, Daniel M; Quinn, James W; Bartley, Katherine; Eisenhower, Donna; Bader, Michael M D; Lovasi, Gina S; Neckerman, Kathryn M

    2016-03-01

    Urban form characteristics intended to support pedestrian activity, collectively referred to as neighborhood walkability, are thought to increase total physical activity. However, little is known about how neighborhood walkability influences utilization of neighborhood space by residents and their overall physical activity. Sociodemographic information and data on mobility and physical activity over 1-week periods measured by GPS loggers and accelerometers were collected from 803 residents of New York City between November 2010 and November 2011. Potentially accessible neighborhood areas were defined as land area within a 1-kilometer distance of the subject's home (radial buffer) and within a 1-kilometer journey on the street network from the home (network buffer). To define actual areas utilized by subjects, a minimum convex polygon was plotted around GPS waypoints falling within 1 kilometer of the home. A neighborhood walkability scale was calculated for each neighborhood area. Data were analyzed in 2014. Total residential neighborhood space utilized by subjects was significantly associated with street intersection density and was significantly negatively associated with residential density and subway stop density within 1 kilometer of the home. Walkability scale scores were significantly higher within utilized as compared with non-utilized neighborhood areas. Neighborhood walkability in the utilized neighborhood area was positively associated with total weekly physical activity (32% [95% CI=17%, 49%] more minutes of moderate-equivalent physical activity across the interquartile range of walkability). Neighborhood walkability is associated with neighborhood spaces utilized by residents and total weekly physical activity. Copyright © 2016 American Journal of Preventive Medicine. Published by Elsevier Inc. All rights reserved.

  3. Defining space use and movements of Canada lynx with global positioning system telemetry

    USGS Publications Warehouse

    Burdett, C.L.; Moen, R.A.; Niemi, G.J.; Mech, L.D.

    2007-01-01

Space use and movements of Canada lynx (Lynx canadensis) are difficult to study with very-high-frequency radiocollars. We deployed global positioning system (GPS) collars on 11 lynx in Minnesota to study their seasonal space-use patterns. We estimated home ranges with minimum-convex-polygon and fixed-kernel methods and estimated core areas with area/probability curves. Fixed-kernel home ranges of males (range = 29-522 km2) were significantly larger than those of females (range = 5-95 km2) annually and during the denning season. Some male lynx increased movements during March, the month most influenced by breeding activity. Lynx core areas were predicted by the 60% fixed-kernel isopleth in most seasons. The mean core-area size of males (range = 6-190 km2) was significantly larger than that of females (range = 1-19 km2) annually and during denning. Most female lynx were reproductive animals with reduced movements, whereas males often ranged widely between Minnesota and Ontario. Sensitivity analyses examining the effect of location frequency on home-range size suggest that the home-range sizes of breeding females are less sensitive to sample size than those of males. Longer periods between locations decreased home-range and core-area overlap relative to the home range estimated from daily locations. GPS collars improve our understanding of space use and movements by lynx by increasing the spatial extent and temporal frequency of monitoring and allowing home ranges to be estimated over short periods that are relevant to life-history characteristics. © 2007 American Society of Mammalogists.

  4. Finding Out Critical Points For Real-Time Path Planning

    NASA Astrophysics Data System (ADS)

    Chen, Wei

    1989-03-01

Path planning for a mobile robot is a classic topic, but path planning under a real-time environment is a different issue. The system resources, including sampling time, processing time, inter-process communication time, and memory space, are very limited for this type of application. This paper presents a method which abstracts the world representation from the sensory data and decides which point will be a potentially critical point to span the world map, using incomplete knowledge about the physical world and heuristic rules. Without any previous knowledge or map of the workspace, the robot determines the world map by roving through the workspace. The computational complexity for building and searching such a map is not more than O(n²). The find-path problem is well-known in robotics. Given an object with an initial location and orientation, a goal location and orientation, and a set of obstacles located in space, the problem is to find a continuous path for the object from the initial position to the goal position which avoids collisions with obstacles along the way. There are many methods for finding a collision-free path in a given environment. Techniques for solving this problem can be classified into three approaches: 1) the configuration space approach [1],[2],[3], which represents the polygonal obstacles by vertices in a graph. The idea is to determine those parts of the free space which a reference point of the moving object can occupy without colliding with any obstacles. A path is then found for the reference point through this truly free space. Dealing with rotations turns out to be a major difficulty with this approach, requiring complex geometric algorithms which are computationally expensive. 2) the direct representation of the free space using basic shape primitives such as convex polygons [4] and overlapping generalized cones [5].
3) the combination of techniques 1 and 2 [6], by which the space is divided into primary convex regions, overlap regions and obstacle regions; obstacle boundaries with attribute values are then represented by the vertices of a hypergraph. The primary convex regions and overlap regions are represented by hyperedges, and the centroids of the overlaps form the critical points. The difficulty lies in generating the segment graph and estimating the minimum path width. All the techniques mentioned above need previous knowledge about the world to plan a path, and their computational cost is not low. They are not usable in an unknown and uncertain environment. Due to limited system resources such as CPU time, memory size and knowledge about the specific application in an intelligent system (such as a mobile robot), it is necessary to use algorithms that provide a good decision which is feasible with the available resources in real time, rather than the best answer that could be achieved in unlimited time with unlimited resources. A real-time path planner should meet the following requirements: - Quickly abstract the representation of the world from the sensory data without any previous knowledge about the robot environment. - Easily update the world model to spell out the global-path map and to reflect changes in the robot environment. - Decide where the robot must go and which direction the range sensor should point in real time with limited resources. The method presented here assumes that the data from the range sensors has been processed by a signal processing unit. The path planner will guide the scan of the range sensor, find critical points, decide where the robot should go and which point is a potential critical point, generate the path map and monitor the robot as it moves to the given point. The program runs recursively until the goal is reached or the whole workspace has been roved through.

  5. The nucleolus is well-posed

    NASA Astrophysics Data System (ADS)

    Fragnelli, Vito; Patrone, Fioravante; Torre, Anna

    2006-02-01

The lexicographic order is not representable by a real-valued function, contrary to many other orders or preorders. So, standard tools and results for well-posed minimum problems cannot be used. We prove that under suitable hypotheses it is nevertheless possible to guarantee the well-posedness of a lexicographic minimum over a compact or convex set. This result allows us to prove that some game-theoretical solution concepts based on the lexicographic order are well-posed: in particular, this is true for the nucleolus.

  6. Concentric Crater Fill in Utopia Planitia: Timing and Transitions Between Glacial and Periglacial Processes.

    NASA Astrophysics Data System (ADS)

    Levy, J.; Head, J.

    2008-09-01

    Concentric crater fill (CCF), lobate debris aprons (LDA), and lineated valley fill (LVF) have long been used as indicators of ground ice on Mars [1-3]. Formation models for these features range from aeolian modification [4], to rock-glacier processes [5], to debris-covered glacier processes [6-7], but are now largely constrained by the detection of material within lobate debris aprons that is 100s of meters thick, and which has dielectric properties consistent with water ice [8-9]. At ~30 cm/pixel HiRISE resolution, LVF, LDA, and CCF show complex surface textures, termed "brain coral terrain" [9], or, succinctly, "brain terrain" (BT) [10]. Polygonally patterned ground commonly is present in proximity to brain terrain, overlying it as "brain terrain-covering" polygons (BTC) [10]. Here we document spatial patterns of BT and BTC morphology present in four CCF-filled, ~10 km diameter craters in Utopia Planitia. We then evaluate formation processes for BT and BTC units. Brain Terrain (BT) Morphology At HiRISE resolution (~30 cm/pixel), concentric crater fill brain terrain displays a complex surface texture. Two distinct sub-textures are commonly present in brain terrain [9]: filled brain terrain (FBT) and hollow brain terrain (HBT) (Figure 1). Filled brain terrain (FBT) is composed of arcuate and cuspate mounds, commonly ~10-20 m wide and 10 - <100 m long. Some FBT mounds have surface grooves located near the centreline of the long axis. FBT mounds occur singly, or in linked groups. FBT mounds are commonly oriented in lineations which are concentric to the crater in which the unit is present. FBT mound lineation spacing is variable, but commonly has a wavelength of ~20 m. FBT is commonly present on undulating topography, at the top of concentric ridges (and sometimes in the concentric valleys between ridges). 
Hollow brain terrain (HBT) is composed of arcuate and cuspate features that are delimited by a convex-up boundary band, commonly ~4-6 m wide, surrounding a depression. HBT are of similar dimensions to FBT, but are seldom longer than ~100 m. HBT boundary bands are commonly parallel along the long axis, but may be tightly rounded or gradually tapered along the short axis. HBT features occur singly, or in linked groups. HBT features are commonly oriented in lineations which are concentric to the crater in which the unit is present. HBT lineation spacing is variable, but commonly has a wavelength of ~20 m. HBT is commonly present at the contact between BT and BTC units, particularly in topographic lows between FBT-covered ridges and surrounding isolated FBT-covered hills embayed by BTC material. Brain Terrain Covering (BTC) Polygon Morphology Polygonally patterned ground present in proximity to BT, and commonly overlying it, constitutes the brain terrain covering (BTC) unit. BTC polygons have two distinct morphologies: high centred (HC-BTC) and low centred (LC-BTC) (Figure 1). High centred BTC polygons (HC-BTC) are composed of depressed surface troughs which intersect at both near-orthogonal and near-hexagonal intersections, forming polygons that are topographically high relative to their boundaries. HC-BTC polygons are commonly ~10 m in diameter, and have little topographic relief, although most have slightly convex-up interiors, based on shadow observations. HC-BTC troughs are commonly ~2-3 m across. HC-BTC polygons are present in a unit which has a lower albedo than brain terrain, and which can be up to ~40 m thick, based on MOLA point measurements. The BTC unit is generally flat, and is bounded by gently sloping margins, as well as by steeply scarped, scalloped margins.
Low centred BTC polygons (LC-BTC) are composed of troughs with raised shoulders that intersect at near-orthogonal and near-hexagonal intersections, forming polygons with depressed centres, relative to the raised rims. LC-BTC polygons are commonly ~10 m in diameter, and have smooth, flat, depressed interiors. LC-BTC polygon shouldered troughs are commonly ~3-4 m wide. LC-BTC are found at the fringes of the BTC unit, at both low-angle and scalloped margins. Spatial Relationships Between Units FBT surfaces are much more common than HBT surfaces, and HC-BTC surfaces are much more common than LC-BTC surfaces. Contacts between FBT and HBT are gradational, consisting of FBT mounds which are partially hollow, or which transition into HBT-like boundary bands. Contacts between HC-BTC and LC-BTC are gradational on gentle slopes, and abrupt on steeply scalloped slopes. BTC surfaces are commonly found at the foot of crater wall interior slopes, and in topographic lows between BT-surfaced concentric ridges. BTC material is commonly draped on, and inter-fingered between, FBT mounds and HBT boundary walls at contacts between the two units, suggesting that BTC units superpose, and in places, embay BT units. FBT-covered hill surfaces are commonly ringed by HBT, which is in turn ringed by LC-BTC, and/or HC-BTC. FBT-covered concentric ridges are commonly flanked by HBT in the lows between ridges, particularly in lows which also have exposures of LC- or HC-BTC polygons. Discussion Crater counts on BT material indicate an age of ~100 MY, consistent with counts on LDA [11]; crater counts on BTC units indicate an age of ~1 MY. This age difference suggests that BT and BTC are stratigraphically distinct units that were deposited at markedly different times. The small exposures of HBT and LC-BTC make distinguishing ages for these textures from ages of the more common FBT and HC-BTC surfaces impossible.
However, the gradational contacts between each sub-texture, on both steep and gentle slopes, suggest that modification of two distinct units, rather than exposure of four radically different layers, accounts for the differences between sub-textures. On the basis of these observations, we propose the following formation sequence for BT and BTC units. BTC units are an atmospherically-emplaced, ice-rich deposit, temporally associated with recent latitude-dependent mantling events [12-14], containing sufficient dusty material to generate a surficial lag deposit during sublimation of near-surface ice [e.g., 15]. Thermal contraction cracking generates polygonal fractures, which initially enhance sublimation at polygon margins, generating HC-BTC polygons [15]. The lack of strongly lineated BTC polygons suggests that BTC deposits have not significantly flowed on ~1 MY timescales [e.g., 4, 16]. Infilling of polygonal fractures by overlying lag deposit fines generates subsurface wedges, which concentrate non-icy material in polygon troughs (insulating underlying ice-rich material), and which gradually results in the generation of raised shoulders along polygon troughs due to thermal expansion, a process analogous to sand-wedge formation in terrestrial polygons [17]. As sublimation continues, concentration of ice-free material at trough boundaries results in relatively greater sublimation at polygon interiors, generating lowered polygon interiors and relatively raised polygon margins (LC-BTC polygons), a process analogous to the formation of "fortress polygons" [18], but resulting from solid-vapour transitions, rather than solid-liquid transitions. Ice may still be preserved beneath thickened lags at LC-BTC polygon boundaries. A comparable process can be invoked to account for the formation of CCF brain terrain (BT).
Based on MOLA point topography measurements and typical martian crater depth-diameter ratios [19], the analyzed CCF-filled craters are up to 80% filled, accounting for volumes up to 800 m thick, which may be ice-rich [8-9]. Thick accumulations of ice-rich material could readily flow on ~100 MY timescales under current martian conditions, sufficient to produce observed strains (e.g., ~60%, observed in one deformed crater). Internal glacio-tectonic stresses, coupled with surface thermal contraction stresses, would fracture the developing BT surface, resulting in the generation of oriented, lineated fracture networks, analogous to those observed on flowing debris-covered glaciers on Earth [20]. Inversion of polygon topography at polygon margins due to differential sublimation would result in the formation of raised mounds and chains of mounds, some of which may preserve the original surface trough (e.g., the axial furrows observed in some BT mound chains). Deposition of thin layers of low-albedo BTC material on ice-cored FBT mounds could result in enhanced sublimation of residual ice cores [21], leading to collapse of FBT mounds and generating HBT features at contacts between BT and BTC units. References. [1] Squyres, S. (1979) JGR, 84, 8087-8096. [2] Squyres, S. & Carr, M. (1986) Sci., 231, 249-252. [3] Lucchita, B. (1984) JGR, 89, 409-418. [4] Zimbelman, J. et al. (1989) LPSC19, 397-407. [5] Mangold, N. & Allemand, P. (2001) GRL, 28, 3407-3410. [6] Head, J. et al. (2006) GRL, doi: 10.1029/2005GL024360. [7] Levy, J. et al. (2007) JGR, doi: 10.1029/2006JE002852. [8] Holt, J. et al. (2008) LPSC39, #2441. [9] Plaut, J. et al. (2008) LPSC39, #1391. [9] Dobrea, N. et al. (2007) 7th Mars, #3358. [10] Levy, J. et al. (2008) LPSC39, #1171. [11] Mangold, N. (2003) JGR, doi:10.1029/2002JE001885. [12] Mustard, J. et al. (2001) Nature, 412, 411-414. [13] Head, J. et al. (2003) Nature, 426, 797-802. [14] Kreslavsky, M. & Head, J. (2006) M&PS, 41, 1633-1646. [15] Marchant, D. et al.
(2002), GSAB, 114, 718-730. [16] Milliken, R. et al. (2003) JGR, doi: 10.1029/2002JE002005. [17] Pewe, T. (1959) Am. J. Sci., 257, 545-552. [18] Root, J. (1975) Geol. Surv. Can., 75-1B, 181. [19] Garvin, J. et al. (2002) LPSC33, #1255. [20] Levy, J. et al. (2006) Ant. Sci., doi: 10.1017/ S0954102006000435. [21] Williams, K. et al. (2008) Icarus, In Press.

  7. Use of multi-sensor active fire detections to map fires in the United States: the future of monitoring trends in burn severity

    USGS Publications Warehouse

    Picotte, Joshua J.; Coan, Michael; Howard, Stephen M.

    2014-01-01

The effort to utilize satellite-based MODIS, AVHRR, and GOES fire detections from the Hazard Monitoring System (HMS) to identify undocumented fires in Florida and improve the Monitoring Trends in Burn Severity (MTBS) mapping process has yielded promising results. This method was augmented using regression tree models to identify burned/not-burned pixels (BnB) in every Landsat scene (1984–2012) in Worldwide Referencing System 2 Path/Rows 16/40, 17/39, and 18/39. The burned area delineations were combined with the HMS detections to create burned area polygons attributed with their date of fire detection. Within our study area, we processed 88,000 HMS points (2003–2012) and 1,800 Landsat scenes to identify approximately 300,000 burned area polygons. Six percent of these burned area polygons were larger than the 500-acre MTBS minimum size threshold. From this study, we conclude that the process can significantly improve understanding of fire occurrence and improve the efficiency and timeliness of assessing its impacts upon the landscape.

  8. Joint terminals and relay optimization for two-way power line information exchange systems with QoS constraints

    NASA Astrophysics Data System (ADS)

    Wu, Xiaolin; Rong, Yue

    2015-12-01

    The quality-of-service (QoS) criteria (measured in terms of the minimum capacity requirement in this paper) are very important to practical indoor power line communication (PLC) applications as they greatly affect the user experience. With a two-way multicarrier relay configuration, in this paper we investigate the joint terminals and relay power optimization for the indoor broadband PLC environment, where the relay node works in the amplify-and-forward (AF) mode. As the QoS-constrained power allocation problem is highly non-convex, the globally optimal solution is computationally intractable to obtain. To overcome this challenge, we propose an alternating optimization (AO) method to decompose this problem into three convex/quasi-convex sub-problems. Simulation results demonstrate the fast convergence of the proposed algorithm under practical PLC channel conditions. Compared with the conventional bidirectional direct transmission (BDT) system, the relay-assisted two-way information exchange (R2WX) scheme can meet the same QoS requirement with less total power consumption.
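The alternating optimization (AO) idea, decomposing an intractable joint problem into sub-problems that are each convex in one block of variables, can be illustrated on a toy biconvex function. This is a hedged sketch, not the paper's PLC power-allocation problem: the function f and its closed-form block updates are invented for illustration only.

```python
def alternating_minimize(x0, y0, iters=200):
    """Alternating optimization on f(x, y) = (x*y - 1)**2 + x**2 + y**2.
    f is non-convex jointly, but convex in x for fixed y and vice versa,
    so each block update is an exact closed-form minimizer and the
    objective can never increase, the property AO methods rely on."""
    f = lambda x, y: (x * y - 1.0) ** 2 + x ** 2 + y ** 2
    x, y = x0, y0
    history = [f(x, y)]
    for _ in range(iters):
        x = y / (y * y + 1.0)   # argmin over x with y fixed (set df/dx = 0)
        y = x / (x * x + 1.0)   # argmin over y with x fixed (set df/dy = 0)
        history.append(f(x, y))
    return x, y, history
```

Starting from (2, 1), the recorded objective values decrease monotonically toward the global minimum value of 1; in the paper the same template is run over three convex/quasi-convex sub-problems instead of two scalar blocks.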

  9. A Real-Time Reaction Obstacle Avoidance Algorithm for Autonomous Underwater Vehicles in Unknown Environments

    PubMed Central

    Yan, Zheping; Li, Jiyun; Zhang, Gengshi; Wu, Yi

    2018-01-01

    A novel real-time reaction obstacle avoidance algorithm (RRA) is proposed for autonomous underwater vehicles (AUVs) that must adapt to unknown complex terrains, based on forward looking sonar (FLS). To accomplish this algorithm, obstacle avoidance rules are planned, and the RRA process is split into five steps so that AUVs can rapidly respond to various environmental obstacles. The largest polar angle algorithm (LPAA) is designed to change a detected obstacle’s irregular outline into a convex polygon, which simplifies the obstacle avoidance process. A solution based on an outline memory algorithm is designed to solve the trapping problem in U-shaped obstacle avoidance. Finally, simulations in three unknown obstacle scenes are carried out to demonstrate the performance of this algorithm; the obtained obstacle avoidance trajectories are safe, smooth, and near-optimal. PMID:29393915

  10. A Real-Time Reaction Obstacle Avoidance Algorithm for Autonomous Underwater Vehicles in Unknown Environments.

    PubMed

    Yan, Zheping; Li, Jiyun; Zhang, Gengshi; Wu, Yi

    2018-02-02

    A novel real-time reaction obstacle avoidance algorithm (RRA) is proposed for autonomous underwater vehicles (AUVs) that must adapt to unknown complex terrains, based on forward looking sonar (FLS). To accomplish this algorithm, obstacle avoidance rules are planned, and the RRA process is split into five steps so that AUVs can rapidly respond to various environmental obstacles. The largest polar angle algorithm (LPAA) is designed to change a detected obstacle's irregular outline into a convex polygon, which simplifies the obstacle avoidance process. A solution based on an outline memory algorithm is designed to solve the trapping problem in U-shaped obstacle avoidance. Finally, simulations in three unknown obstacle scenes are carried out to demonstrate the performance of this algorithm; the obtained obstacle avoidance trajectories are safe, smooth, and near-optimal.
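The LPAA itself is not specified in the abstract; as a sketch of the step it performs, replacing an irregular detected outline with an enclosing convex polygon, a standard monotone-chain convex hull can serve (a stand-in technique, not the paper's algorithm):

```python
# Sketch: turning an irregular detected outline into a convex polygon.
# The paper's largest polar angle algorithm (LPAA) is not reproduced here;
# Andrew's monotone chain convex hull illustrates the same simplification.

def convex_outline(points):
    """Return the convex polygon (hull) enclosing 2-D outline points, CCW."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts

    def cross(o, a, b):
        # z-component of (a - o) x (b - o); positive means a left turn
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    # Drop the last point of each chain (it repeats the other chain's start).
    return lower[:-1] + upper[:-1]
```

Avoidance planning against the hull is simpler than against the raw outline because a convex boundary has no concavities in which a vehicle can become trapped.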

  11. Parallel Geospatial Data Management for Multi-Scale Environmental Data Analysis on GPUs

    NASA Astrophysics Data System (ADS)

    Wang, D.; Zhang, J.; Wei, Y.

    2013-12-01

    As the spatial and temporal resolutions of Earth observatory data and Earth system simulation outputs increase, in-situ and/or post-processing of such large amounts of geospatial data increasingly becomes a bottleneck in scientific inquiries into Earth systems and their human impacts. Existing geospatial techniques based on outdated computing models (e.g., serial algorithms and disk-resident systems), as implemented in many commercial and open source packages, are incapable of processing large-scale geospatial data at the desired level of performance. In this study, we have developed a set of parallel data structures and algorithms capable of utilizing the massively data-parallel computing power available on commodity Graphics Processing Units (GPUs) for a popular geospatial technique called Zonal Statistics. Given two input datasets, one representing measurements (e.g., temperature or precipitation) and the other representing polygonal zones (e.g., ecological or administrative zones), Zonal Statistics computes major statistics (or complete distribution histograms) of the measurements in all zones. Our technique has four steps, and each step can be mapped to GPU hardware by identifying its inherent data parallelism. First, the raster is divided into blocks and per-block histograms are derived. Second, the Minimum Bounding Rectangles (MBRs) of polygons are computed and spatially matched with raster blocks; matched polygon-block pairs are tested, and blocks that are either inside or intersect with polygons are identified. Third, per-block histograms are aggregated to polygons for blocks that are completely within polygons. Finally, for blocks that intersect polygon boundaries, all raster cells within those blocks are examined using a point-in-polygon test, and cells that fall within polygons are used to update the corresponding histograms. 
As the task becomes I/O-bound after applying spatial indexing and GPU hardware acceleration, we have developed a GPU-based data compression technique by reusing our previous work on Bitplane Quadtree (BPQ-Tree) based indexing of binary bitmaps. Results have shown that our GPU-based parallel Zonal Statistics technique, applied to 3000+ US counties over 20+ billion NASA SRTM 30-meter resolution Digital Elevation Model (DEM) raster cells, has achieved impressive end-to-end runtimes: 101 seconds and 46 seconds on a low-end workstation equipped with an Nvidia GTX Titan GPU using cold and hot cache, respectively; 60-70 seconds using a single OLCF TITAN computing node; and 10-15 seconds using 8 nodes. Our experimental results clearly show the potential of using high-end computing facilities for large-scale geospatial processing.
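The four-step pipeline above ultimately accumulates per-zone histograms, with a point-in-polygon test resolving boundary cells. A minimal serial sketch of that accumulation (hypothetical toy data structures; the GPU version parallelizes this work over blocks) might look like:

```python
# Minimal serial sketch of Zonal Statistics: each raster cell whose centre
# falls inside a zone polygon updates that zone's histogram. Data structures
# are hypothetical toys; the abstract's pipeline parallelizes this on GPUs.

def point_in_polygon(x, y, poly):
    """Even-odd ray-casting test; poly is a list of (x, y) vertices."""
    inside = False
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        if (y1 > y) != (y2 > y):  # edge straddles the horizontal ray
            if x < x1 + (y - y1) * (x2 - x1) / (y2 - y1):
                inside = not inside
    return inside

def zonal_histogram(raster, zones, n_bins, vmin, vmax):
    """raster: {(col, row): value}; zones: {zone_id: polygon vertex list}."""
    hists = {z: [0] * n_bins for z in zones}
    width = (vmax - vmin) / n_bins
    for (cx, cy), v in raster.items():
        b = min(int((v - vmin) / width), n_bins - 1)  # clamp top edge
        for z, poly in zones.items():
            if point_in_polygon(cx + 0.5, cy + 0.5, poly):  # cell centre
                hists[z][b] += 1
    return hists
```

The per-block histogram and MBR-matching steps in the abstract exist precisely to avoid running this innermost point-in-polygon test for the vast majority of cells.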

  12. Size-Constrained Region Merging: A New Tool to Derive Basic Landcover Units from Remote Sensing Imagery

    NASA Astrophysics Data System (ADS)

    Castilla, G.

    2004-09-01

    Landcover maps typically represent the territory as a mosaic of contiguous units (polygons) that are assumed to correspond to geographic entities (e.g., lakes, forests, or villages). They may also be viewed as representing a particular level of a landscape hierarchy where each polygon is a holon: an object made of subobjects and part of a superobject. The focal level portrayed in the map is distinguished from other levels by the average size of the objects composing it. Moreover, the focal level is bounded by the minimum size that objects of this level are supposed to have. Based on this framework, we have developed a segmentation method that defines a partition of a multiband image such that (i) the mean size of segments is close to the one specified; (ii) each segment exceeds the required minimum size; and (iii) the internal homogeneity of segments is maximal given the size constraints. This paper briefly describes the method, focusing on its region merging stage. The most distinctive feature of the latter is that, while the merging sequence is ordered by increasing dissimilarity as in conventional methods, there is no need to define a threshold on the dissimilarity measure between adjacent segments.

  13. Investigations into the tensile failure of doubly-convex cylindrical tablets under diametral loading using finite element methodology.

    PubMed

    Podczeck, Fridrun; Drake, Kevin R; Newton, J Michael

    2013-09-15

    In the literature, various solutions exist for calculating the diametral compression tensile strength of doubly-convex tablets, and each approach is based on experimental data obtained from a single material (gypsum, microcrystalline cellulose) only. The solutions are represented by complex equations and further differ for elastic and elasto-plastic behaviour of the compacts. The aim of this work was to develop a general equation that is applicable independently of deformation behaviour and which is based only on simple tablet dimensions such as diameter and total tablet thickness. With the help of 3D-FEM analysis, the tensile failure stress of doubly-convex tablets with central-cylinder-thickness to diameter ratios W/D between 0.06 and 0.50 and face-curvature ratios D/R between 0.25 and 1.85 was evaluated. Both elastic and elasto-plastic deformation behaviour were considered. The results of 80 individual simulations were combined and showed that the tensile failure stress σt of doubly-convex tablets can be calculated from σt=(2P/πDW)(W/T)=2P/πDT, with P being the failure load, D the diameter, W the central cylinder thickness, and T the total thickness of the tablet. This equation converts into the standard Brazilian equation (σt=2P/πDW) when W equals T, i.e. it is equally valid for flat cylindrical tablets. In practice, the use of this new equation removes the need for complex measurements of tablet dimensions, because it only requires values for diameter and total tablet thickness. It also allows setting of standards for the mechanical strength of doubly-convex tablets. The new equation holds both for elastic and elasto-plastic deformation behaviour of the tablets under load. It is valid for all combinations of W/D-ratios between 0.06 and 0.50 with D/R-ratios between 0.00 and 1.85, except for W/D=0.50 in combination with D/R-ratios of 1.85 and 1.43, and for W/D-ratios of 0.40 and 0.30 in combination with D/R=1.85. 
FEM analysis indicated a tendency to failure by capping, or even more complex failure patterns, in these exceptional cases. The FEM results further indicated that, in general, W/D-ratios between 0.15 and 0.20 are favourable when the overall size and shape of the tablets are modified to give maximum tablet tensile strength. However, the maximum tensile stress of doubly-convex tablets will never exceed that of a flat-faced cylindrical tablet of similar W/D-ratio. The lowest tensile stress depends on the W/D-ratio. For the thinnest central cylinder thickness, this minimum stress occurs at D/R=0.50; for W/D-ratios between 0.10 and 0.20, the D/R-ratio for the minimum tensile stress increases to 0.67; and for all other central cylinder thicknesses, the minimum tensile stress is found at D/R=1.00. Copyright © 2013 Elsevier B.V. All rights reserved.
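The generalized formula from this abstract, σt = 2P/(πDT), is simple enough to state as a small helper (hypothetical function name; validity ranges as reported in the study):

```python
# Sketch of the generalized diametral-compression formula from the abstract:
# sigma_t = 2P / (pi * D * T), which reduces to the standard Brazilian
# equation for flat cylindrical tablets when W == T. Hypothetical helper name.
import math

def tensile_failure_stress(P, D, T):
    """Tensile failure stress of a doubly-convex tablet.

    P: failure load, D: tablet diameter, T: total tablet thickness
    (consistent units). Per the study, valid for W/D in 0.06-0.50 and
    D/R in 0.00-1.85, excluding the listed capping-prone combinations.
    """
    return 2.0 * P / (math.pi * D * T)
```

Note that W, the central cylinder thickness, cancels out entirely, which is the practical appeal: only the diameter and total thickness need to be measured.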

  14. Analytical design of a hyper-spectral imaging spectrometer utilizing a convex grating

    NASA Astrophysics Data System (ADS)

    Kim, Seo H.; Kong, Hong J.; Ku, Hana; Lee, Jun H.

    2012-09-01

    This paper describes a new design method for hyper-spectral imaging spectrometers utilizing a convex grating. Hyper-spectral imaging (HSI) systems are powerful tools in the field of remote sensing; HSI systems collect at least 100 spectral bands of 10-20 nm width. Because the spectral signature is distinct and unique for each material, it should be possible to discriminate between one material and another based on differences in their spectral signatures. We mathematically analyzed the parameters for an intelligent initial design, the main concept of which is the derivation of the "ring of minimum aberration without vignetting". This work is a kind of analytical design of an Offner imaging spectrometer. Several experimental methods will also be devised to evaluate the performance of the imaging spectrometer.

  15. Migration, foraging, and residency patterns for Northern Gulf loggerheads: implications of local threats and international movements

    USGS Publications Warehouse

    Hart, Kristen M.; Lamont, Margaret M.; Sartain-Iverson, Autumn R.; Fujisaki, Ikuko

    2014-01-01

    Northern Gulf of Mexico (NGoM) loggerheads (Caretta caretta) make up one of the smallest subpopulations of this threatened species and have declining nest numbers. We used satellite telemetry and a switching state-space model to identify distinct foraging areas used by 59 NGoM loggerheads tagged during 2010–2013. We tagged turtles after nesting at three sites, 1 in Alabama (Gulf Shores; n = 37) and 2 in Florida (St. Joseph Peninsula; n = 20 and Eglin Air Force Base; n = 2). Peak migration time was 22 July to 9 August during which >40% of turtles were in migration mode; the mean post-nesting migration period was 23.0 d (±13.8 d SD). After displacement from nesting beaches, 44 turtles traveled to foraging sites where they remained resident throughout tracking durations. Selected foraging locations were variable distances from tagging sites, and in 5 geographic regions; no turtles selected foraging sites outside the Gulf of Mexico (GoM). Foraging sites delineated using 50% kernel density estimation were located a mean distance of 47.6 km from land and in water with mean depth of −32.5 m; other foraging sites, delineated using minimum convex polygons, were located a mean distance of 43.0 km from land and in water with a mean depth of −24.9 m. Foraging sites overlapped with known trawling activities, oil and gas extraction activities, and the footprint of surface oiling during the 2010 Deepwater Horizon oil spill (n = 10). Our results highlight the year-round use of habitats in the GoM by loggerheads that nest in the NGoM. Our findings indicate that protection of females in this subpopulation requires both international collaborations and management of threats that spatially overlap with distinct foraging habitats.
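The minimum convex polygon (MCP) delineation used here and throughout these records is the convex hull of the tracked fixes together with its planar area. A sketch assuming the fixes are already projected to planar coordinates (e.g., km east/north; hypothetical helper name):

```python
# Sketch: a 100% minimum convex polygon (MCP) home range is the convex hull
# of the relocation fixes; its area follows from the shoelace formula.
# Assumes fixes are already projected to planar coordinates (e.g., km).

def mcp_area(fixes):
    """Area of the 100% MCP around (x, y) fixes, in squared input units."""
    pts = sorted(set(fixes))

    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    # Monotone-chain convex hull (counter-clockwise vertex order).
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    hull = lower[:-1] + upper[:-1]

    # Shoelace formula over the hull vertices.
    area = 0.0
    for i in range(len(hull)):
        x1, y1 = hull[i]
        x2, y2 = hull[(i + 1) % len(hull)]
        area += x1 * y2 - x2 * y1
    return abs(area) / 2.0
```

Because the MCP is driven entirely by the outermost fixes, a single excursion can inflate the estimate, which is one reason these studies report kernel density estimates alongside MCPs.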

  16. Movements and Habitat-Use of Loggerhead Sea Turtles in the Northern Gulf of Mexico during the Reproductive Period

    PubMed Central

    Hart, Kristen M.; Lamont, Margaret M.; Sartain, Autumn R.; Fujisaki, Ikuko; Stephens, Brail S.

    2013-01-01

    Nesting strategies and use of important in-water habitats for far-ranging marine turtles can be determined using satellite telemetry. Because of a lack of information on habitat-use by marine turtles in the northern Gulf of Mexico, we used satellite transmitters in 2010 through 2012 to track movements of 39 adult female breeding loggerhead turtles (Caretta caretta) tagged on nesting beaches at three sites in Florida and Alabama. During the nesting season, recaptured turtles emerged to nest 1 to 5 times, with mean distance between emergences of 27.5 km; however, several turtles nested on beaches separated by ∼250 km within a single season. Mean total distances traveled throughout inter-nesting periods for all turtles was 1422.0±930.8 km. In-water inter-nesting sites, delineated using 50% kernel density estimation (KDE), were located a mean distance of 33.0 km from land, in water with mean depth of −31.6 m; other in-water inter-nesting sites, delineated using minimum convex polygon (MCP) approach, were located a mean 13.8 km from land and in water with a mean depth of −15.8 m. Mean size of in-water inter-nesting habitats were 61.9 km2 (50% KDEs, n = 10) and 741.4 km2 (MCPs, n = 30); these areas overlapped significantly with trawling and oil and gas extraction activities. Abundance estimates for this nesting subpopulation may be inaccurate in light of how much spread there is between nests of the same individual. Further, our results also have consequences for critical habitat designations for northern Gulf loggerheads, as protection of one nesting beach would not encompass the entire range used by turtles during breeding seasons. PMID:23843971

  17. Seasonal movement, residency, and migratory patterns of Wilson's Snipe (Gallinago delicata)

    USGS Publications Warehouse

    Cline, Brittany B.; Haig, Susan M.

    2011-01-01

    Cross-seasonal studies of avian movement establish links between geographically distinct wintering, breeding, and migratory stopover locations, or assess site fidelity and movement between distinct phases of the annual cycle. Far fewer studies have investigated individual movement patterns within and among seasons over an annual cycle. Within western Oregon's Willamette Valley throughout 2007, we quantified intra- and interseasonal movement patterns, fidelity (regional and local), and migratory patterns of 37 radiomarked Wilson's Snipe (Gallinago delicata) to elucidate residency in a region of breeding- and wintering-range overlap. Telemetry revealed complex regional population structure, including winter residents (74%), winter transients (14%), summer residents (9%), and one year-round resident breeder (3%). Results indicated a lack of connectivity between winter and summer capture populations, some evidence of partial migration, and between-season fidelity to the region (winter-resident return; subsequent fall). Across seasons, the extent of movements and use of multiple wetland sites suggested that Wilson's Snipe were capable of exploratory movements but more regularly perceived local and fine-scale segments of the landscape as connected. Movements differed significantly by season and residency; individuals exhibited contracted movements during late winter and more expansive movements during precipitation-limited periods (late spring, summer, fall). Mean home-range size was 3.5 ± 0.93 km2 (100% minimum convex polygon [MCP]) and 1.6 ± 0.42 km2 (95% fixed kernel) and did not vary by sex; however, home range varied markedly by season (range of 100% MCPs: 1.04–7.56 km2). The results highlight the need to consider seasonal and interspecific differences in shorebird life histories and space-use requirements when developing regional wetland conservation plans.

  18. Movements and habitat-use of loggerhead sea turtles in the northern Gulf of Mexico during the reproductive period

    USGS Publications Warehouse

    Hart, Kristen M.; Lamont, Margaret M.; Sartain-Iverson, Autumn R.; Fujisaki, Ikuko; Stephens, Brail S.

    2013-01-01

    Nesting strategies and use of important in-water habitats for far-ranging marine turtles can be determined using satellite telemetry. Because of a lack of information on habitat-use by marine turtles in the northern Gulf of Mexico, we used satellite transmitters in 2010 through 2012 to track movements of 39 adult female breeding loggerhead turtles (Caretta caretta) tagged on nesting beaches at three sites in Florida and Alabama. During the nesting season, recaptured turtles emerged to nest 1 to 5 times, with mean distance between emergences of 27.5 km; however, several turtles nested on beaches separated by ~250 km within a single season. Mean total distances traveled throughout inter-nesting periods for all turtles was 1422.0±930.8 km. In-water inter-nesting sites, delineated using 50% kernel density estimation (KDE), were located a mean distance of 33.0 km from land, in water with mean depth of −31.6 m; other in-water inter-nesting sites, delineated using minimum convex polygon (MCP) approach, were located a mean 13.8 km from land and in water with a mean depth of −15.8 m. Mean size of in-water inter-nesting habitats were 61.9 km2 (50% KDEs, n = 10) and 741.4 km2 (MCPs, n = 30); these areas overlapped significantly with trawling and oil and gas extraction activities. Abundance estimates for this nesting subpopulation may be inaccurate in light of how much spread there is between nests of the same individual. Further, our results also have consequences for critical habitat designations for northern Gulf loggerheads, as protection of one nesting beach would not encompass the entire range used by turtles during breeding seasons.

  19. Satellite tracking reveals habitat use by juvenile green sea turtles Chelonia mydas in the Everglades, Florida, USA

    USGS Publications Warehouse

    Hart, Kristen M.; Fujisaki, Ikuko

    2010-01-01

    We tracked the movements of 6 juvenile green sea turtles captured in coastal areas of southwest Florida within Everglades National Park (ENP) using satellite transmitters for periods of 27 to 62 d in 2007 and 2008 (mean ± SD: 47.7 ± 12.9 d). Turtles ranged in size from 33.4 to 67.5 cm straight carapace length (45.7 ± 12.9 cm) and 4.4 to 40.8 kg in mass (16.0 ± 13.8 kg). These data represent the first satellite tracking data gathered on juveniles of this endangered species at this remote study site, which may represent an important developmental habitat and foraging ground. Satellite tracking results suggested that these immature turtles were resident for several months very close to capture and release sites, in waters from 0 to 10 m in depth. Mean home range for this springtime tracking period as represented by minimum convex polygon (MCP) was 1004.9 ± 618.8 km2 (range 374.1 to 2060.1 km2), with 4 of 6 individuals spending a significant proportion of time within the ENP boundaries in 2008 in areas with dense patches of marine algae. Core use areas determined by 50% kernel density estimates (KDE) ranged from 5.0 to 54.4 km2, with a mean of 22.5 ± 22.1 km2. Overlap of 50% KDE plots for 6 turtles confirmed use of shallow-water nearshore habitats ≤0.6 m deep within the park boundary. Delineating specific habitats used by juvenile green turtles in this and other remote coastal areas with protected status will help conservation managers to prioritize their efforts and increase efficacy in protecting endangered species.

  20. Migration, foraging, and residency patterns for Northern Gulf loggerheads: implications of local threats and international movements.

    PubMed

    Hart, Kristen M; Lamont, Margaret M; Sartain, Autumn R; Fujisaki, Ikuko

    2014-01-01

    Northern Gulf of Mexico (NGoM) loggerheads (Caretta caretta) make up one of the smallest subpopulations of this threatened species and have declining nest numbers. We used satellite telemetry and a switching state-space model to identify distinct foraging areas used by 59 NGoM loggerheads tagged during 2010-2013. We tagged turtles after nesting at three sites, 1 in Alabama (Gulf Shores; n = 37) and 2 in Florida (St. Joseph Peninsula; n = 20 and Eglin Air Force Base; n = 2). Peak migration time was 22 July to 9 August during which >40% of turtles were in migration mode; the mean post-nesting migration period was 23.0 d (±13.8 d SD). After displacement from nesting beaches, 44 turtles traveled to foraging sites where they remained resident throughout tracking durations. Selected foraging locations were variable distances from tagging sites, and in 5 geographic regions; no turtles selected foraging sites outside the Gulf of Mexico (GoM). Foraging sites delineated using 50% kernel density estimation were located a mean distance of 47.6 km from land and in water with mean depth of -32.5 m; other foraging sites, delineated using minimum convex polygons, were located a mean distance of 43.0 km from land and in water with a mean depth of -24.9 m. Foraging sites overlapped with known trawling activities, oil and gas extraction activities, and the footprint of surface oiling during the 2010 Deepwater Horizon oil spill (n = 10). Our results highlight the year-round use of habitats in the GoM by loggerheads that nest in the NGoM. Our findings indicate that protection of females in this subpopulation requires both international collaborations and management of threats that spatially overlap with distinct foraging habitats.

  1. Movements and habitat-use of loggerhead sea turtles in the northern Gulf of Mexico during the reproductive period.

    PubMed

    Hart, Kristen M; Lamont, Margaret M; Sartain, Autumn R; Fujisaki, Ikuko; Stephens, Brail S

    2013-01-01

    Nesting strategies and use of important in-water habitats for far-ranging marine turtles can be determined using satellite telemetry. Because of a lack of information on habitat-use by marine turtles in the northern Gulf of Mexico, we used satellite transmitters in 2010 through 2012 to track movements of 39 adult female breeding loggerhead turtles (Caretta caretta) tagged on nesting beaches at three sites in Florida and Alabama. During the nesting season, recaptured turtles emerged to nest 1 to 5 times, with mean distance between emergences of 27.5 km; however, several turtles nested on beaches separated by ~250 km within a single season. Mean total distances traveled throughout inter-nesting periods for all turtles was 1422.0 ± 930.8 km. In-water inter-nesting sites, delineated using 50% kernel density estimation (KDE), were located a mean distance of 33.0 km from land, in water with mean depth of -31.6 m; other in-water inter-nesting sites, delineated using minimum convex polygon (MCP) approach, were located a mean 13.8 km from land and in water with a mean depth of -15.8 m. Mean size of in-water inter-nesting habitats were 61.9 km2 (50% KDEs, n = 10) and 741.4 km2 (MCPs, n = 30); these areas overlapped significantly with trawling and oil and gas extraction activities. Abundance estimates for this nesting subpopulation may be inaccurate in light of how much spread there is between nests of the same individual. Further, our results also have consequences for critical habitat designations for northern Gulf loggerheads, as protection of one nesting beach would not encompass the entire range used by turtles during breeding seasons.

  2. Inter-nesting movements and habitat-use of adult female Kemp’s ridley turtles in the Gulf of Mexico

    USGS Publications Warehouse

    Shaver, Donna J.; Hart, Kristen M.; Fujisaki, Ikuko; Bucklin, David N.; Iverson, Autumn; Rubio, Cynthia; Backof, Thomas F.; Burchfield, Patrick M.; Gonzales Diaz Miron, Raul de Jesus; Dutton, Peter H.; Frey, Amy; Peña, Jaime; Gamez, Daniel Gomez; Martinez, Hector J.; Ortiz, Jaime

    2017-01-01

    Species vulnerability is increased when individuals congregate in restricted areas for breeding; yet, breeding habitats are not well defined for many marine species. Identification and quantification of these breeding habitats are essential to effective conservation. Satellite telemetry and switching state-space modeling (SSM) were used to define inter-nesting habitat of endangered Kemp’s ridley turtles (Lepidochelys kempii) in the Gulf of Mexico. Turtles were outfitted with satellite transmitters after nesting at Padre Island National Seashore, Texas, USA, from 1998 through 2013 (n = 60); Rancho Nuevo, Tamaulipas, Mexico, during 2010 and 2011 (n = 11); and Tecolutla, Veracruz, Mexico, during 2012 and 2013 (n = 11). These sites span the range of nearly all nesting by this species. Inter-nesting habitat lies in a narrow band of nearshore western Gulf of Mexico waters in the USA and Mexico, with mean water depth of 14 to 19 m within a mean distance to shore of 6 to 11 km as estimated by 50% kernel density estimate, α-Hull, and minimum convex polygon methodologies. Turtles tracked during the inter-nesting period moved, on average, 17.5 km/day and a mean total distance of 398 km. Mean home ranges occupied were 725 to 2948 km2. Our results indicate that these nearshore western Gulf waters represent critical inter-nesting habitat for this species, where threats such as shrimp trawling and oil and gas platforms also occur. Up to half of all adult female Kemp’s ridleys occupy this habitat for weeks to months during each nesting season. Because inter-nesting habitat for this species is concentrated in nearshore waters of the western Gulf of Mexico in both Mexico and the USA, international collaboration is needed to protect this essential habitat and the turtles occurring within it.

  3. Movement behavior, dispersal, and the potential for localized management of deer in a suburban environment

    USGS Publications Warehouse

    Porter, W.F.; Underwood, H.B.; Woodard, J.L.

    2004-01-01

    We examined the potential for localized management of white-tailed deer (Odocoileus virginianus) to be successful by measuring movements, testing site fidelity, and modeling the effects of dispersal. Fifty-nine females were radiomarked and tracked during 1997 through 2000 in Irondequoit, New York, USA, a suburb of Rochester. We constructed home ranges for those deer with ≥18 relocations/season. Fifty percent minimum convex polygons (MCP) averaged 3.9 (SE = 0.53) ha in the summer and 5.3 (SE = 0.80) ha in the winter. Deer showed strong fidelity to both summer and winter home ranges, and 30 of 31 females showed overlap of summer and winter home ranges. Annual survival was 64%; the major cause of mortality was deer-automobile collisions. Average annual dispersal rates were <15% for yearlings and adults. Using matrix population modeling, we explored the role of female dispersal in sustaining different management objectives in adjacent locales of approximately 1,000 ha. Modeling showed that if female dispersal was 8%, culling would have to reduce annual survival to 58% to maintain a population just under ecological carrying capacity and reduce survival to 42% to keep the population at one-half carrying capacity. With the same dispersal, contraception would need to be effective in 32% of females if the population is near carrying capacity and 68% if the population is at one-half of carrying capacity. Movement behavior data and modeling results lend support to the use of a localized approach to management of females that emphasizes neighborhood-scale manipulation of deer populations, but our research suggests that dispersal rates in females could be critical to long-term success.

  4. Preliminary data used to assess the accuracy of estimating female white-tailed deer diel birthing-season home ranges using only daytime locations

    USGS Publications Warehouse

    Barber-Meyer, Shannon M.; Mech, L. David

    2014-01-01

    Because many white-tailed deer (Odocoileus virginianus) home-range and habitat-use studies rely only on daytime radio-tracking data, we were interested in whether diurnal data sufficiently represented diel home ranges. We analyzed home-range and core-use size and overlap of 8 adult-female Global-Positioning-System-collared deer during May and June 2001 and 2002 in the Superior National Forest, Minnesota, USA. We used 2 traditional means of analysis, minimum-convex polygons (MCP) and fixed kernels (95% FK for home range and 50% FK for core use), and 2 methods to partition day and night location data: (1) daytime = 0800-2000 h versus nighttime = 2000-0800 h and (2) sunup versus sundown. We found no statistical difference in size of home-range and core-use areas across day and night comparisons; however, in terms of spatial overlap, approximately 30% of night-range areas on average were not accounted for using daytime locations, with even greater differences between core-use areas (on average approximately 50%). We conclude that diurnal data do not adequately describe diel adult-female deer May-June home ranges due to differences in spatial overlap (location). We suggest research to determine (1) if our findings hold in other circumstances (e.g., exclusive of the parturition period, other age classes, etc.), (2) if our conclusions generalize under other conditions (e.g., across deer range, varying seasons, etc.), (3) if habitat-use conclusions are affected by the incomplete overlap between diurnal and diel data, (4) how many nocturnal locations must be included to generate sufficient overlap, and (5) the influence of using other kernel sizes (e.g., 75%, 90%).

  5. Inter-nesting movements and habitat-use of adult female Kemp's ridley turtles in the Gulf of Mexico.

    PubMed

    Shaver, Donna J; Hart, Kristen M; Fujisaki, Ikuko; Bucklin, David; Iverson, Autumn R; Rubio, Cynthia; Backof, Thomas F; Burchfield, Patrick M; de Jesus Gonzales Diaz Miron, Raul; Dutton, Peter H; Frey, Amy; Peña, Jaime; Gomez Gamez, Daniel; Martinez, Hector J; Ortiz, Jaime

    2017-01-01

    Species vulnerability is increased when individuals congregate in restricted areas for breeding; yet, breeding habitats are not well defined for many marine species. Identification and quantification of these breeding habitats are essential to effective conservation. Satellite telemetry and switching state-space modeling (SSM) were used to define inter-nesting habitat of endangered Kemp's ridley turtles (Lepidochelys kempii) in the Gulf of Mexico. Turtles were outfitted with satellite transmitters after nesting at Padre Island National Seashore, Texas, USA, from 1998 through 2013 (n = 60); Rancho Nuevo, Tamaulipas, Mexico, during 2010 and 2011 (n = 11); and Tecolutla, Veracruz, Mexico, during 2012 and 2013 (n = 11). These sites span the range of nearly all nesting by this species. Inter-nesting habitat lies in a narrow band of nearshore western Gulf of Mexico waters in the USA and Mexico, with mean water depth of 14 to 19 m within a mean distance to shore of 6 to 11 km as estimated by 50% kernel density estimate, α-Hull, and minimum convex polygon methodologies. Turtles tracked during the inter-nesting period moved, on average, 17.5 km/day and a mean total distance of 398 km. Mean home ranges occupied were 725 to 2948 km2. Our results indicate that these nearshore western Gulf waters represent critical inter-nesting habitat for this species, where threats such as shrimp trawling and oil and gas platforms also occur. Up to half of all adult female Kemp's ridleys occupy this habitat for weeks to months during each nesting season. Because inter-nesting habitat for this species is concentrated in nearshore waters of the western Gulf of Mexico in both Mexico and the USA, international collaboration is needed to protect this essential habitat and the turtles occurring within it.

  6. Energy Efficiency Maximization for WSNs with Simultaneous Wireless Information and Power Transfer

    PubMed Central

    Yu, Hongyan; Zhang, Yongqiang; Guo, Songtao; Yang, Yuanyuan; Ji, Luyue

    2017-01-01

    Recently, the simultaneous wireless information and power transfer (SWIPT) technique has been regarded as a promising approach to enhance the performance of wireless sensor networks with limited energy supply. However, from a green communication perspective, energy efficiency optimization for SWIPT system design has not been investigated in Wireless Rechargeable Sensor Networks (WRSNs). In this paper, we consider the tradeoffs between energy efficiency and three factors, namely spectral efficiency, transmit power, and outage target rate, for two different receiver modes, i.e., power splitting (PS) and time switching (TS). Moreover, we formulate the energy efficiency maximization problem, subject to constraints on minimum Quality of Service (QoS), minimum harvested energy, and maximum transmission power, as a non-convex optimization problem. In particular, we focus on optimizing the power control and power allocation policy in the PS and TS modes to maximize the energy efficiency of data transmission. For each mode, we propose a corresponding algorithm that characterizes the non-convex optimization problem while taking into account the circuit power consumption and the harvested energy. By exploiting nonlinear fractional programming and Lagrangian dual decomposition, we propose suboptimal iterative algorithms to obtain solutions of the non-convex optimization problems. Furthermore, we derive the outage probability and effective throughput for scenarios in which the transmitter has no, or only partial, knowledge of the receiver's channel state information (CSI). Simulation results illustrate that the proposed iterative algorithm converges to optimal solutions within a small number of iterations and reveals the tradeoffs between energy efficiency and spectral efficiency, transmit power, and outage target rate. PMID:28820496

  7. Energy Efficiency Maximization for WSNs with Simultaneous Wireless Information and Power Transfer.

    PubMed

    Yu, Hongyan; Zhang, Yongqiang; Guo, Songtao; Yang, Yuanyuan; Ji, Luyue

    2017-08-18

    Recently, the simultaneous wireless information and power transfer (SWIPT) technique has been regarded as a promising approach to enhance the performance of wireless sensor networks with limited energy supply. However, from a green communication perspective, energy efficiency optimization for SWIPT system design has not been investigated in Wireless Rechargeable Sensor Networks (WRSNs). In this paper, we consider the tradeoffs between energy efficiency and three factors, namely spectral efficiency, transmit power, and outage target rate, for two different receiver modes, i.e., power splitting (PS) and time switching (TS). Moreover, we formulate the energy efficiency maximization problem, subject to constraints on minimum Quality of Service (QoS), minimum harvested energy, and maximum transmission power, as a non-convex optimization problem. In particular, we focus on optimizing the power control and power allocation policy in the PS and TS modes to maximize the energy efficiency of data transmission. For each mode, we propose a corresponding algorithm that characterizes the non-convex optimization problem while taking into account the circuit power consumption and the harvested energy. By exploiting nonlinear fractional programming and Lagrangian dual decomposition, we propose suboptimal iterative algorithms to obtain solutions of the non-convex optimization problems. Furthermore, we derive the outage probability and effective throughput for scenarios in which the transmitter has no, or only partial, knowledge of the receiver's channel state information (CSI). Simulation results illustrate that the proposed iterative algorithm converges to optimal solutions within a small number of iterations and reveals the tradeoffs between energy efficiency and spectral efficiency, transmit power, and outage target rate.
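    The nonlinear fractional programming step mentioned in the abstract is commonly handled with Dinkelbach's method, which turns the ratio maximization (rate over consumed power) into a sequence of parametric subproblems. The sketch below assumes a single-link Shannon rate with a hypothetical channel gain and circuit power, and solves the inner subproblem by grid search; the paper's actual SWIPT model with PS/TS splitting is considerably richer.

```python
import math

def rate(p, g=2.0):
    # Shannon rate for a hypothetical channel gain g
    return math.log2(1.0 + g * p)

def dinkelbach(p_max=10.0, p_circuit=1.0, tol=1e-9, grid=20001):
    """Maximize rate(p) / (p_circuit + p) over 0 <= p <= p_max
    via Dinkelbach's parametric iteration."""
    ps = [p_max * i / (grid - 1) for i in range(grid)]
    q = 0.0
    for _ in range(100):
        # inner subproblem: maximize rate(p) - q * (p_circuit + p)
        p_star = max(ps, key=lambda p: rate(p) - q * (p_circuit + p))
        q_new = rate(p_star) / (p_circuit + p_star)
        if abs(q_new - q) < tol:
            return q_new, p_star
        q = q_new
    return q, p_star
```

    The parameter q converges monotonically to the maximal energy efficiency; in practice the inner subproblem would be solved in closed form or by a convex solver rather than by grid search.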

  8. Pareto-front shape in multiobservable quantum control

    NASA Astrophysics Data System (ADS)

    Sun, Qiuyang; Wu, Re-Bing; Rabitz, Herschel

    2017-03-01

    Many scenarios in the sciences and engineering require simultaneous optimization of multiple objective functions, which are usually conflicting or competing. In such problems the Pareto front, where none of the individual objectives can be further improved without degrading some others, shows the tradeoff relations between the competing objectives. This paper analyzes the Pareto-front shape for the problem of quantum multiobservable control, i.e., optimizing the expectation values of multiple observables in the same quantum system. Analytic and numerical results demonstrate that with two commuting observables the Pareto front is a convex polygon consisting of flat segments only, while with noncommuting observables the Pareto front includes convexly curved segments. We also assess the capability of a weighted-sum method to continuously capture the points along the Pareto front. Illustrative examples with realistic physical conditions are presented, including NMR control experiments on a ¹H-¹³C two-spin system with two commuting or noncommuting observables.
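    The flat-segment behavior described above is easy to reproduce numerically: on a polygonal Pareto front, a weighted-sum scan only ever recovers the vertices, never the interior points of a flat segment. A small self-contained illustration with hypothetical objective pairs (not the paper's quantum objectives):

```python
def weighted_sum_scan(points, n_weights=101):
    """Scan scalarizations w*f1 + (1-w)*f2 and collect the maximizers."""
    found = set()
    for i in range(n_weights):
        w = i / (n_weights - 1)
        found.add(max(points, key=lambda p: w * p[0] + (1 - w) * p[1]))
    return found

# Two Pareto vertices, one point mid-way along the flat segment between
# them, and one dominated point.
pts = [(1.0, 0.0), (0.0, 1.0), (0.5, 0.5), (0.2, 0.2)]
```

    No choice of weight makes the mid-segment point the unique maximizer, which is exactly why weighted-sum methods cannot continuously trace a polygonal front.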

  9. Spatiotemporal Interpolation Methods for Solar Event Trajectories

    NASA Astrophysics Data System (ADS)

    Filali Boubrahimi, Soukaina; Aydin, Berkay; Schuh, Michael A.; Kempton, Dustin; Angryk, Rafal A.; Ma, Ruizhe

    2018-05-01

    This paper introduces four spatiotemporal interpolation methods that enrich complex, evolving region trajectories that are reported from a variety of ground-based and space-based solar observatories every day. Our interpolation module takes an existing solar event trajectory as its input and generates an enriched trajectory with any number of additional time–geometry pairs created by the most appropriate method. To this end, we designed four different interpolation techniques: MBR-Interpolation (Minimum Bounding Rectangle Interpolation), CP-Interpolation (Complex Polygon Interpolation), FI-Interpolation (Filament Polygon Interpolation), and Areal-Interpolation, which are presented here in detail. These techniques leverage k-means clustering, centroid shape signature representation, dynamic time warping, linear interpolation, and shape buffering to generate the additional polygons of an enriched trajectory. Using ground-truth objects, interpolation effectiveness is evaluated through a variety of measures based on several important characteristics that include spatial distance, area overlap, and shape (boundary) similarity. To our knowledge, this is the first research effort of this kind that attempts to address the broad problem of spatiotemporal interpolation of solar event trajectories. We conclude with a brief outline of future research directions and opportunities for related work in this area.
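    As a toy illustration of the simplest of these ideas, MBR-Interpolation can be sketched as a linear blend between the minimum bounding rectangles reported at two times; the paper's actual pipeline (clustering, shape signatures, dynamic time warping) is considerably richer, and the coordinates here are hypothetical.

```python
def mbr(points):
    """Minimum bounding rectangle (xmin, ymin, xmax, ymax) of a point set."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    return (min(xs), min(ys), max(xs), max(ys))

def interpolate_mbr(box_a, box_b, t):
    """Linear blend of two (xmin, ymin, xmax, ymax) boxes at t in [0, 1]."""
    return tuple((1 - t) * a + t * b for a, b in zip(box_a, box_b))
```

    Intermediate boxes generated this way supply the extra time-geometry pairs of an enriched trajectory in the degenerate case where each region is summarized by its bounding rectangle.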

  10. Powered Descent Guidance with General Thrust-Pointing Constraints

    NASA Technical Reports Server (NTRS)

    Carson, John M., III; Acikmese, Behcet; Blackmore, Lars

    2013-01-01

    The Powered Descent Guidance (PDG) algorithm and software for generating Mars pinpoint or precision landing guidance profiles has been enhanced to incorporate thrust-pointing constraints. Pointing constraints would typically be needed for onboard sensor and navigation systems that have specific field-of-view requirements to generate valid ground proximity and terrain-relative state measurements. The original PDG algorithm was designed to enforce both control and state constraints, including maximum and minimum thrust bounds, avoidance of the ground or descent within a glide slope cone, and maximum speed limits. The thrust-bound and thrust-pointing constraints within PDG are non-convex, which in general requires nonlinear optimization methods to generate solutions. The short duration of Mars powered descent requires guaranteed PDG convergence to a solution within a finite time; however, nonlinear optimization methods have no guarantees of convergence to the global optimum or convergence within finite computation time. A lossless convexification developed for the original PDG algorithm relaxed the non-convex thrust bound constraints. This relaxation was theoretically proven to provide valid and optimal solutions for the original, non-convex problem within a convex framework. As with the thrust bound constraint, a relaxation of the thrust-pointing constraint also provides a lossless convexification that ensures the enhanced relaxed PDG algorithm remains convex and retains validity for the original non-convex problem. The enhanced PDG algorithm provides guidance profiles for pinpoint and precision landing that minimize fuel usage, minimize landing error to the target, and ensure satisfaction of all position and control constraints, including thrust bounds and now thrust-pointing constraints.

  11. Reentry trajectory optimization with waypoint and no-fly zone constraints using multiphase convex programming

    NASA Astrophysics Data System (ADS)

    Zhao, Dang-Jun; Song, Zheng-Yu

    2017-08-01

    This study proposes a multiphase convex programming approach for rapid reentry trajectory generation that satisfies path, waypoint and no-fly zone (NFZ) constraints for Common Aero Vehicles (CAVs). Because the time when the vehicle reaches a waypoint is unknown, the trajectory is divided into several phases according to the prescribed waypoints, rendering a multiphase optimization problem with free final time. Because rapidity is required, a minimum-flight-time index for each phase is preferred over other indices in this research. Sequential linearization is used to approximate the nonlinear dynamics of the vehicle as well as the nonlinear concave path constraints on heat rate, dynamic pressure, and normal load; meanwhile, convexification techniques are proposed to relax the concave constraints on the control variables. The original multiphase optimization problem is then reformulated as a standard second-order cone programming problem. Theoretical analysis shows that the original problem and the converted problem have the same solution. Numerical results demonstrate that the proposed approach is efficient and effective.

  12. Spatial ecology and behavior of eastern box turtles on the hardwood ecosystem experiment: pre-treatment results

    Treesearch

    Andrea F. Currylow; Brian J. MacGowan; Rod N. Williams

    2013-01-01

    To understand better how eastern box turtles (Terrapene carolina carolina) are affected by forest management practices, we monitored movements of box turtles prior to silvicultural treatments within the Hardwood Ecosystem Experiment (HEE) in Indiana. During 2007 and 2008, we tracked 23-28 turtles on six units of the HEE. Estimated minimum convex...

  13. Combined-probability space and certainty or uncertainty relations for a finite-level quantum system

    NASA Astrophysics Data System (ADS)

    Sehrawat, Arun

    2017-08-01

    The Born rule provides a probability vector (distribution) with a quantum state for a measurement setting. For two settings, we have a pair of vectors from the same quantum state. Each pair forms a combined-probability vector that obeys certain quantum constraints, which are triangle inequalities in our case. Such a restricted set of combined vectors, called the combined-probability space, is presented here for a d-level quantum system (qudit). The combined space is a compact convex subset of a Euclidean space, and all its extreme points come from a family of parametric curves. Considering a suitable concave function on the combined space to estimate the uncertainty, we deliver an uncertainty relation by finding its global minimum on the curves for a qudit. If one chooses an appropriate concave (or convex) function, then there is no need to search for the absolute minimum (maximum) over the whole space; it will be on the parametric curves. So these curves are quite useful for establishing an uncertainty (or a certainty) relation for a general pair of settings. We also demonstrate that many known tight certainty or uncertainty relations for a qubit can be obtained with the triangle inequalities.

  14. A Subspace Semi-Definite programming-based Underestimation (SSDU) method for stochastic global optimization in protein docking*

    PubMed Central

    Nan, Feng; Moghadasi, Mohammad; Vakili, Pirooz; Vajda, Sandor; Kozakov, Dima; Ch. Paschalidis, Ioannis

    2015-01-01

    We propose a new stochastic global optimization method targeting protein docking problems. The method is based on finding a general convex polynomial underestimator to the binding energy function in a permissive subspace that possesses a funnel-like structure. We use Principal Component Analysis (PCA) to determine such permissive subspaces. The problem of finding the general convex polynomial underestimator is reduced to the problem of ensuring that a certain polynomial is a Sum-of-Squares (SOS), which can be done via semi-definite programming. The underestimator is then used to bias sampling of the energy function in order to recover a deep minimum. We show that the proposed method significantly improves the quality of docked conformations compared to existing methods. PMID:25914440

  15. Longitudinal aerodynamic performance of a series of power-law and minimum wave drag bodies at Mach 6 and several Reynolds numbers

    NASA Technical Reports Server (NTRS)

    Ashby, G. C., Jr.

    1974-01-01

    Experimental data have been obtained for two series of bodies at Mach 6 and Reynolds numbers, based on model length, from 1.4 million to 9.5 million. One series consisted of axisymmetric power-law bodies geometrically constrained for constant length and base diameter with values of the exponent n of 0.25, 0.5, 0.6, 0.667, 0.75, and 1.0. The other series consisted of positively and negatively cambered bodies of polygonal cross section, each having a constant longitudinal area distribution conforming to that required for minimizing zero-lift wave drag at hypersonic speeds under the geometric constraints of given length and volume. At the highest Reynolds number, the power-law body for minimum drag is blunter (exponent n lower) than predicted by inviscid theory (n approximately 0.6 instead of n = 0.667); however, the peak value of lift-drag ratio occurs at n = 0.667. Viscous effects were present on the bodies of polygonal cross section but were less pronounced than those on the power-law bodies. The trapezoidal bodies with maximum width at the bottom were found to have the highest maximum lift-drag ratio and the lowest minimum drag.

  16. Scaling of Convex Hull Volume to Body Mass in Modern Primates, Non-Primate Mammals and Birds

    PubMed Central

    Brassey, Charlotte A.; Sellers, William I.

    2014-01-01

    The volumetric method of ‘convex hulling’ has recently been put forward as a mass prediction technique for fossil vertebrates. Convex hulling involves the calculation of minimum convex hull volumes (volCH) from the complete mounted skeletons of modern museum specimens, which are subsequently regressed against body mass (Mb) to derive predictive equations for extinct species. The convex hulling technique has recently been applied to estimate body mass in giant sauropods and fossil ratites, however the biomechanical signal contained within volCH has remained unclear. Specifically, when volCH scaling departs from isometry in a group of vertebrates, how might this be interpreted? Here we derive predictive equations for primates, non-primate mammals and birds and compare the scaling behaviour of Mb to volCH between groups. We find predictive equations to be characterised by extremely high correlation coefficients (r² = 0.97–0.99) and low mean percentage prediction error (11–20%). Results suggest non-primate mammals scale body mass to volCH isometrically (b = 0.92, 95% CI = 0.85–1.00, p = 0.08). Birds scale body mass to volCH with negative allometry (b = 0.81, 95% CI = 0.70–0.91, p = 0.011) and apparent density (volCH/Mb) therefore decreases with mass (r² = 0.36, p<0.05). In contrast, primates scale body mass to volCH with positive allometry (b = 1.07, 95% CI = 1.01–1.12, p = 0.05) and apparent density therefore increases with size (r² = 0.46, p = 0.025). We interpret such departures from isometry in the context of the ‘missing mass’ of soft tissues that are excluded from the convex hulling process. We conclude that the convex hulling technique can be justifiably applied to the fossil record when a large proportion of the skeleton is preserved. However we emphasise the need for future studies to quantify interspecific variation in the distribution of soft tissues such as muscle, integument and body fat. PMID:24618736
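    The scaling exponents b reported above come from ordinary least-squares regression in log-log space. A minimal sketch with synthetic isometric data (hypothetical values, not the study's specimens), where the fitted slope should come out as exactly 1:

```python
import math

def loglog_slope(x, y):
    """OLS slope b of log10(y) = a + b*log10(x), i.e. the allometric exponent."""
    lx = [math.log10(v) for v in x]
    ly = [math.log10(v) for v in y]
    n = len(lx)
    mx = sum(lx) / n
    my = sum(ly) / n
    sxx = sum((v - mx) ** 2 for v in lx)
    sxy = sum((u - mx) * (v - my) for u, v in zip(lx, ly))
    return sxy / sxx
```

    Fitting body mass against convex hull volume this way, b = 1 indicates isometry, b < 1 negative allometry, and b > 1 positive allometry, matching the comparisons in the abstract.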

  17. Parameter estimation of history-dependent leaky integrate-and-fire neurons using maximum-likelihood methods

    PubMed Central

    Dong, Yi; Mihalas, Stefan; Russell, Alexander; Etienne-Cummings, Ralph; Niebur, Ernst

    2012-01-01

    When a neuronal spike train is observed, what can we say about the properties of the neuron that generated it? A natural way to answer this question is to make an assumption about the type of neuron, select an appropriate model for this type, and then to choose the model parameters as those that are most likely to generate the observed spike train. This is the maximum likelihood method. If the neuron obeys simple integrate and fire dynamics, Paninski, Pillow, and Simoncelli (2004) showed that its negative log-likelihood function is convex and that its unique global minimum can thus be found by gradient descent techniques. The global minimum property requires independence of spike time intervals. Lack of history dependence is, however, an important constraint that is not fulfilled in many biological neurons which are known to generate a rich repertoire of spiking behaviors that are incompatible with history independence. Therefore, we expanded the integrate and fire model by including one additional variable, a variable threshold (Mihalas & Niebur, 2009) allowing for history-dependent firing patterns. This neuronal model produces a large number of spiking behaviors while still being linear. Linearity is important as it maintains the distribution of the random variables and still allows for maximum likelihood methods to be used. In this study we show that, although convexity of the negative log-likelihood is not guaranteed for this model, the minimum of the negative log-likelihood function yields a good estimate for the model parameters, in particular if the noise level is treated as a free parameter. Furthermore, we show that a nonlinear function minimization method (r-algorithm with space dilation) frequently reaches the global minimum. PMID:21851282
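    The convexity argument can be illustrated on a deliberately simplified model: for an exponential inter-spike-interval likelihood (far simpler than the history-dependent model in the paper), the negative log-likelihood is convex in the firing rate, so plain gradient descent recovers the maximum-likelihood estimate. All values and the learning rate here are hypothetical.

```python
def exp_nll_grad(lam, intervals):
    """Gradient of NLL(lam) = -n*log(lam) + lam*sum(t_i) for
    exponentially distributed inter-spike intervals."""
    n = len(intervals)
    return -n / lam + sum(intervals)

def fit_rate(intervals, lr=0.01, steps=20000, lam0=1.0):
    """Gradient descent on the convex negative log-likelihood."""
    lam = lam0
    for _ in range(steps):
        lam -= lr * exp_nll_grad(lam, intervals)
        lam = max(lam, 1e-9)  # keep the rate positive
    return lam
```

    For this model the global minimum is known in closed form (the reciprocal of the mean interval), which makes it a convenient check that the descent has converged.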

  18. Wave-Based Algorithms and Bounds for Target Support Estimation

    DTIC Science & Technology

    2015-05-15

    vector electromagnetic formalism in [5]. This theory leads to three main variants of the optical theorem detector, in particular, three alternative...further expands the applicability for transient pulse change detection of arbitrary nonlinear-media and time-varying targets [9]. This report... electromagnetic methods a new methodology to estimate the minimum convex source region and the (possibly nonconvex) support of a scattering target from knowledge of

  19. Direct statistical modeling and its implications for predictive mapping in mining exploration

    NASA Astrophysics Data System (ADS)

    Sterligov, Boris; Gumiaux, Charles; Barbanson, Luc; Chen, Yan; Cassard, Daniel; Cherkasov, Sergey; Zolotaya, Ludmila

    2010-05-01

    Recent advances in geosciences make more and more multidisciplinary data available for mining exploration. This has allowed the development of methodologies for computing forecast ore maps from the statistical combination of such different input parameters, all based on inverse problem theory. Numerous statistical methods (e.g. the algebraic method, weight of evidence, the Siris method, etc.), with varying degrees of complexity in their development and implementation, have been proposed and/or adapted for ore geology purposes. In the literature, such approaches are often presented through applications to natural examples, and the results can be specific to local characteristics. Moreover, though crucial for statistical computations, the minimum requirements on the input parameters (minimum number of data points, spatial distribution of objects, etc.) are often only poorly stated. Consequently, it is often difficult to choose between one method and another for a specific question. In this study, a direct statistical modeling approach is developed in order to i) evaluate the constraints on the input parameters and ii) test the validity of different existing inversion methods. The approach focuses on the analysis of spatial relationships between the locations of points and various objects (e.g. polygons and/or polylines), which is particularly well adapted to constraining the influence of intrusive bodies, such as granites, and of faults or ductile shear zones on the spatial location of ore deposits (point objects). The method is designed to be dimensionless with respect to scale. In this approach, both the spatial distribution and the topology of the objects (polygons and polylines) can be parametrized by the user (e.g. density of objects, length, surface, orientation, clustering). The distance of points to a given type of object (polygons or polylines) is then given by a probability distribution.
    The locations of points are computed assuming either independence or different grades of dependency between the two probability distributions. The results show that i) the mean polygon surface area, the mean polyline length, the number of objects, and their clustering are critical, and ii) the validity of the different tested inversion methods depends strongly on the relative importance of, and the dependency between, the parameters used. In addition, this combined approach of direct and inverse modeling offers an opportunity to test the robustness of the inferred point-distribution laws with respect to the quality of the input data set.

  20. Pricing of Water Resources With Depletable Externality: The Effects of Pollution Charges

    NASA Astrophysics Data System (ADS)

    Kitabatake, Yoshifusa

    1990-04-01

    Abstracting from a real-world situation, the paper views water resources as a depletable capital asset that yields a stream of services such as water supply and the assimilation of pollution discharge. The concept of a concave or convex water resource depletion function is then introduced and applied to a general two-sector, three-factor model. The main theoretical contribution is to prove that when the water resource depletion function is a concave rather than a convex function of pollution, it is more likely that gross regional income will increase under a higher pollution charge policy. Concavity of the function implies that as the pollution released increases, the ability to supply water at a certain minimum quality level diminishes ever faster. A numerical example is also provided.

  1. National Park Service Vegetation Mapping Inventory Program: Appalachian National Scenic Trail vegetation mapping project

    USGS Publications Warehouse

    Hop, Kevin D.; Strassman, Andrew C.; Hall, Mark; Menard, Shannon; Largay, Ery; Sattler, Stephanie; Hoy, Erin E.; Ruhser, Janis; Hlavacek, Enrika; Dieck, Jennifer

    2017-01-01

    The National Park Service (NPS) Vegetation Mapping Inventory (VMI) Program classifies, describes, and maps existing vegetation of national park units for the NPS Natural Resource Inventory and Monitoring (I&M) Program. The NPS VMI Program is managed by the NPS I&M Division and provides baseline vegetation information to the NPS Natural Resource I&M Program. The U.S. Geological Survey Upper Midwest Environmental Sciences Center, NatureServe, NPS Northeast Temperate Network, and NPS Appalachian National Scenic Trail (APPA) have completed vegetation classification and mapping of APPA for the NPS VMI Program. Mappers, ecologists, and botanists collaborated to affirm vegetation types within the U.S. National Vegetation Classification (USNVC) of APPA and to determine how best to map the vegetation types by using aerial imagery. Analyses of data from 1,618 vegetation plots were used to describe USNVC associations of APPA. Data from 289 verification sites were collected to test the field key to vegetation associations and the application of vegetation associations to a sample set of map polygons. Data from 269 validation sites were collected to assess vegetation mapping prior to submitting the vegetation map for accuracy assessment (AA). Data from 3,265 AA sites were collected, of which 3,204 were used to test accuracy of the vegetation map layer. Collectively, these datasets affirmed 280 USNVC associations for the APPA vegetation mapping project. To map the vegetation and land cover of APPA, 169 map classes were developed. The 169 map classes consist of 150 that represent natural (including ruderal) vegetation types in the USNVC, 11 that represent cultural (agricultural and developed) vegetation types in the USNVC, 5 that represent natural landscapes with catastrophic disturbance or some other modification to natural vegetation preventing accurate classification in the USNVC, and 3 that represent nonvegetated water (non-USNVC). 
    Features were interpreted from viewing 4-band digital aerial imagery using digital onscreen three-dimensional stereoscopic workflow systems in geographic information systems (GIS). (Digital aerial imagery was collected each fall during 2009–11 to capture leaf-phenology change of hardwood trees across the latitudinal range of APPA.) The interpreted data were digitally and spatially referenced, thus making the spatial-database layers usable in GIS. Polygon units were mapped to either a 0.5-hectare (ha) or 0.25-ha minimum mapping unit, depending on vegetation type or scenario; however, polygon units were mapped to 0.1 ha for alpine vegetation. A geodatabase containing various feature-class layers and tables provides locations and supporting data for USNVC vegetation types (vegetation map layer), vegetation plots, verification sites, validation sites, AA sites, project boundary extent and zones, and aerial image centers and flight lines. The feature-class layer and related tables of the vegetation map layer provide 30,395 polygons of detailed attribute data covering 110,919.7 ha, with an average polygon size of 3.6 ha; the vegetation map coincides closely with the administrative boundary for APPA. Summary reports generated from the vegetation map layer of the map classes representing USNVC natural (including ruderal) vegetation types apply to 28,242 polygons (92.9% of polygons) and cover 106,413.0 ha (95.9%) of the map extent for APPA. The map layer indicates APPA to be 92.4% forest and woodland (102,480.8 ha), 1.7% shrubland (1,866.3 ha), and 1.8% herbaceous cover (2,065.9 ha). Map classes representing park-special vegetation (undefined in the USNVC) apply to 58 polygons (0.2% of polygons) and cover 404.3 ha (0.4%) of the map extent. Map classes representing USNVC cultural types apply to 1,777 polygons (5.8% of polygons) and cover 2,516.3 ha (2.3%) of the map extent. 
Map classes representing nonvegetated water (non-USNVC) apply to 332 polygons (1.1% of polygons) and cover 1,586.2 ha (1.4%) of the map extent.

  2. Enhancements on the Convex Programming Based Powered Descent Guidance Algorithm for Mars Landing

    NASA Technical Reports Server (NTRS)

    Acikmese, Behcet; Blackmore, Lars; Scharf, Daniel P.; Wolf, Aron

    2008-01-01

    In this paper, we present enhancements on the powered descent guidance algorithm developed for Mars pinpoint landing. The guidance algorithm solves the powered descent minimum fuel trajectory optimization problem via a direct numerical method. Our main contribution is to formulate the trajectory optimization problem, which has nonconvex control constraints, as a finite dimensional convex optimization problem, specifically as a finite dimensional second order cone programming (SOCP) problem. SOCP is a subclass of convex programming, and there are efficient SOCP solvers with deterministic convergence properties. Hence, the resulting guidance algorithm can potentially be implemented onboard a spacecraft for real-time applications. In particular, this paper discusses the algorithmic improvements obtained by: (i) using an efficient approach to choose the optimal time-of-flight; (ii) using a computationally inexpensive way to detect the feasibility/infeasibility of the problem due to the thrust-to-weight constraint; (iii) incorporating the rotation rate of the planet into the problem formulation; (iv) developing additional constraints on the position and velocity to guarantee no subsurface flight between the time samples of the temporal discretization; (v) developing a fuel-limited targeting algorithm; and (vi) initial results on developing an onboard table lookup method to obtain almost fuel-optimal solutions in real time.

  3. Visualization of Uncertainty

    NASA Astrophysics Data System (ADS)

    Jones, P. W.; Strelitz, R. A.

    2012-12-01

    The output of a simulation is best comprehended through the agency and methods of visualization, but a vital component of good science is knowledge of uncertainty. While great strides have been made in the quantification of uncertainty, especially in simulation, there is still a notable gap: there is no widely accepted means of simultaneously viewing the data and the associated uncertainty in one pane. Visualization saturates the screen, using the full range of color, shadow, opacity and tricks of perspective to display even a single variable. There is no room left in the visualization expert's repertoire for uncertainty. We present a method of visualizing uncertainty without sacrificing the clarity and power of the underlying visualization that works as well in 3-D and time-varying visualizations as it does in 2-D. At its heart, it relies on a principal tenet of continuum mechanics, replacing the notion of value at a point with a more diffuse notion of density as a measure of content in a region. First, the uncertainties calculated or tabulated at each point are transformed into a piecewise continuous field of uncertainty density. We next compute a weighted Voronoi tessellation into a user-specified number N of convex polygonal/polyhedral cells such that each cell contains the same amount of uncertainty as defined by that density field. The problem thus devolves into a minimization. Computing such a spatial decomposition is O(N²), and it can be computed iteratively, making updates over time both easy and fast. The polygonal mesh does not interfere with the visualization of the data and can be easily toggled on or off. In this representation, a small cell implies a great concentration of uncertainty, and conversely. The content-weighted polygons are identical to the cartograms familiar to the information visualization community for depicting quantities such as voting results per state.
    Furthermore, one can dispense with the mesh or edges entirely, replacing them with symbols or glyphs at the generating points (effectively the centers of the polygons). The methodology readily admits rigorous statistical analysis using standard components found in R and is thus entirely compatible with the visualization packages we use (VisIt and/or ParaView), the language we use (Python) and the UVCDAT environment that provides the programmer and analyst workbench. We will demonstrate the power and effectiveness of this methodology in climate studies. We will further argue that our method of defining (or predicting) values in a region has many advantages over the traditional visualization notion of value at a point.
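    The equal-content idea behind the tessellation can be shown in one dimension, where the construction reduces to inverting the cumulative uncertainty: split the domain so each cell holds the same integrated density. A minimal sketch (the 2-D/3-D weighted Voronoi case treated in the abstract is of course harder); the density functions below are hypothetical.

```python
def equal_content_breaks(density, n_cells, grid=10000):
    """Split [0, 1] into n_cells, each holding an equal share of
    the integrated density, via a trapezoid cumulative integral."""
    xs = [i / grid for i in range(grid + 1)]
    cum = [0.0]
    for i in range(grid):
        cum.append(cum[-1] + 0.5 * (density(xs[i]) + density(xs[i + 1])) / grid)
    total = cum[-1]
    breaks = [0.0]
    for k in range(1, n_cells):
        target = total * k / n_cells
        # first grid point whose cumulative content reaches the target
        j = next(i for i, c in enumerate(cum) if c >= target)
        breaks.append(xs[j])
    breaks.append(1.0)
    return breaks
```

    Under a uniform density the breaks are evenly spaced; where the density is concentrated, the cells shrink, which is exactly the "small cell implies high uncertainty" reading described above.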

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Klima, Matej; Kucharik, Milan; Shashkov, Mikhail Jurievich

    We analyze several new and existing approaches for limiting tensor quantities in the context of deviatoric stress remapping in an ALE numerical simulation of elastic flow. Remapping and limiting of the tensor component-by-component is shown to violate radial symmetry of derived variables such as elastic energy or force. Therefore, we have extended the symmetry-preserving Vector Image Polygon algorithm, originally designed for limiting vector variables. This limiter constrains the vector (in our case a vector of independent tensor components) within the convex hull formed by the vectors from surrounding cells – an equivalent of the discrete maximum principle in scalar variables. We compare this method with a limiter designed specifically for deviatoric stress limiting, which aims to constrain the J2 invariant that is proportional to the specific elastic energy and to scale the tensor accordingly. We also propose a method which involves remapping and limiting the J2 invariant independently using known scalar techniques. The deviatoric stress tensor is then scaled to match this remapped invariant, which guarantees conservation in terms of elastic energy.
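    The J2-matching idea in the last step can be sketched in a few lines (an illustrative reconstruction, not the authors' code): compute J2 = ½ s:s of a deviatoric tensor and rescale the tensor so its invariant matches a separately remapped target; scaling leaves the trace, and hence the deviatoric character, unchanged.

```python
import numpy as np

def j2(s):
    """Second invariant of a deviatoric stress tensor: J2 = 0.5 * s:s."""
    return 0.5 * np.tensordot(s, s)

def scale_to_invariant(s, j2_target):
    """Scale a deviatoric tensor so its J2 matches a remapped target."""
    j2_cur = j2(s)
    if j2_cur == 0.0:
        return s
    return s * np.sqrt(j2_target / j2_cur)

# example deviatoric (traceless) tensor
s = np.array([[2.0,  1.0,  0.0],
              [1.0, -1.0,  0.0],
              [0.0,  0.0, -1.0]])
s2 = scale_to_invariant(s, 2.0 * j2(s))   # double the invariant
```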

  5. Towards automated human gait disease classification using phase space representation of intrinsic mode functions

    NASA Astrophysics Data System (ADS)

    Pratiher, Sawon; Patra, Sayantani; Pratiher, Souvik

    2017-06-01

    A novel analytical methodology for segregating healthy and neurological-disorder gait patterns is proposed by employing a set of oscillating components called intrinsic mode functions (IMFs). These IMFs are generated by empirical mode decomposition of the gait time series, and the Hilbert-transformed analytic signal representation forms the complex-plane trace of the elliptically shaped analytic IMFs. The area measure and the relative change in the centroid position of the polygon formed by the convex hull of these analytic IMFs are taken as the discriminative features. Classification accuracy of 79.31% with an ensemble-learning AdaBoost classifier validates the adequacy of the proposed methodology for a computer-aided diagnostic (CAD) system for gait pattern identification. The efficacy of several potential biomarkers, such as the bandwidths of the amplitude-modulation and frequency-modulation IMFs and their mean frequencies from the Fourier-Bessel expansion of each analytic IMF, is also discussed for gait pattern identification and classification.
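    The convex-hull features of an analytic signal can be computed as sketched below (an illustrative fragment assuming SciPy is available; the function name and the test signal are not from the paper). For a pure sinusoid the analytic-signal trace is a unit circle, so the hull area is close to π.

```python
import numpy as np
from scipy.signal import hilbert
from scipy.spatial import ConvexHull

def hull_features(imf):
    """Area and centroid of the convex hull of an IMF's analytic-signal trace."""
    z = hilbert(imf)                        # analytic signal: imf + i * H(imf)
    pts = np.column_stack([z.real, z.imag])
    hull = ConvexHull(pts)
    centroid = pts[hull.vertices].mean(axis=0)
    return hull.volume, centroid            # in 2-D, ConvexHull.volume is the area

t = np.linspace(0.0, 1.0, 1000, endpoint=False)
area, centroid = hull_features(np.sin(2 * np.pi * 5 * t))
```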

  6. Plane stress problems using hysteretic rigid body spring network models

    NASA Astrophysics Data System (ADS)

    Sofianos, Christos D.; Koumousis, Vlasis K.

    2017-10-01

    In this work, a discrete numerical scheme is presented capable of modeling the hysteretic behavior of 2D structures. Rigid Body Spring Network (RBSN) models, first proposed by Kawai (Nucl Eng Des 48(1):207-229, 1978), are extended to account for hysteretic elastoplastic behavior. Discretization is based on Voronoi tessellation, as proposed specifically for RBSN models to ensure uniformity. As a result, the structure is discretized into convex polygons that form the discrete rigid bodies of the model. These are connected with three zero-length, i.e., single-node, springs in the middle of their common facets. The springs follow the smooth hysteretic Bouc-Wen model, which efficiently incorporates classical plasticity with no direct reference to a yield surface. Numerical results for both static and dynamic loadings are presented, which validate the proposed simplified spring-mass formulation. In addition, they verify the model's applicability in determining primarily the displacement field and plastic zones compared to the standard elastoplastic finite element method.
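    The smooth Bouc-Wen evolution law used for the springs can be sketched with a simple explicit integration (parameters and the driving displacement below are illustrative, not values from the paper):

```python
import numpy as np

def bouc_wen(x, dt, A=1.0, beta=0.5, gamma=0.5, n=1.0):
    """Integrate the Bouc-Wen hysteretic variable z for a displacement history x:

        dz/dt = A*v - beta*|v|*|z|**(n-1)*z - gamma*v*|z|**n,   v = dx/dt

    using explicit Euler. With A=1 and beta+gamma=1, z saturates at +/-1.
    """
    z = np.zeros_like(x)
    for i in range(1, len(x)):
        v = (x[i] - x[i - 1]) / dt
        dz = A * v - beta * abs(v) * abs(z[i - 1]) ** (n - 1) * z[i - 1] \
             - gamma * v * abs(z[i - 1]) ** n
        z[i] = z[i - 1] + dz * dt
    return z

t = np.linspace(0.0, 4.0 * np.pi, 2000)
x = np.sin(t)                    # cyclic displacement loading
z = bouc_wen(x, t[1] - t[0])     # hysteretic restoring variable
```

Plotting z against x would trace the familiar hysteresis loops with no explicit yield surface.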

  7. Map Projection Induced Variations in Locations of Polygon Geofence Edges

    NASA Technical Reports Server (NTRS)

    Neeley, Paula; Narkawicz, Anthony

    2017-01-01

    This paper estimates answers to the following question under various constraints: if a geofencing algorithm uses a map projection to determine whether a position is inside or outside a polygonal region, how far inside or outside the polygon can a point be while the algorithm returns the opposite, and therefore incorrect, answer? Geofencing systems for unmanned aircraft systems (UAS) often model stay-in and stay-out regions using 2D polygons with minimum and maximum altitudes. The vertices of the polygons are typically input as latitude-longitude pairs, and the edges as paths between adjacent vertices. There are numerous ways to generate these paths, resulting in numerous potential locations for the edges of stay-in and stay-out regions. These paths may be geodesics on a spherical model of the earth or geodesics on the WGS84 reference ellipsoid. In geofencing applications that use map projections, these paths are inverse images of straight lines in the projected plane. This projected plane may be a projection of a spherical earth model onto a tangent plane, called an orthographic projection. Alternatively, it may be a projection in which straight lines in the projected plane correspond to straight lines in the latitude-longitude coordinate system, called a Plate Carrée projection. This paper estimates distances between different edge paths and an oracle path, which is a geodesic on either the spherical earth or the WGS84 ellipsoidal earth. The paper thus estimates how far apart different edge paths can be rather than comparing their path lengths; the comparison is between the actual locations of the edges between vertices. For edges drawn using orthographic projections, this maximum distance increases as the distance from the polygon vertices to the projection point increases. 
For edges drawn using Plate Carrée projections, this maximum distance increases as the vertices move farther from the equator. Distances between geodesics on a spherical earth and a WGS84 ellipsoidal earth are also analyzed, using the WGS84 ellipsoid as the oracle. Bounds on the 2D distance between a straight line and a great circle path, in an orthographically projected plane rather than on the surface of the earth, have been formally verified in the PVS theorem prover, meaning that they are mathematically correct in the absence of floating point errors.
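    The latitude effect for Plate Carrée edges can be checked numerically with a small sketch (a spherical-earth illustration, not the paper's formally verified bounds; the sampled-parameter deviation below is a simple proxy for the true edge-to-edge distance):

```python
import numpy as np

def to_unit(lat, lon):
    """Latitude/longitude in degrees to a unit vector on a spherical earth."""
    lat, lon = np.radians(lat), np.radians(lon)
    return np.array([np.cos(lat) * np.cos(lon),
                     np.cos(lat) * np.sin(lon),
                     np.sin(lat)])

def slerp(p, q, t):
    """Great-circle (geodesic) interpolation between unit vectors p and q."""
    w = np.arccos(np.clip(p @ q, -1.0, 1.0))
    return (np.sin((1 - t) * w) * p + np.sin(t * w) * q) / np.sin(w)

def max_deviation_km(lat1, lon1, lat2, lon2, R=6371.0, steps=201):
    """Max deviation (sampled at matched parameter t) between the great-circle
    edge and the straight latitude-longitude (Plate Carree) edge."""
    worst = 0.0
    for t in np.linspace(0.01, 0.99, steps):
        g = slerp(to_unit(lat1, lon1), to_unit(lat2, lon2), t)
        s = to_unit(lat1 + t * (lat2 - lat1), lon1 + t * (lon2 - lon1))
        worst = max(worst, R * np.arccos(np.clip(g @ s, -1.0, 1.0)))
    return worst

# an east-west edge spanning 10 degrees of longitude
low = max_deviation_km(0.0, 0.0, 0.0, 10.0)     # on the equator
high = max_deviation_km(60.0, 0.0, 60.0, 10.0)  # at 60 degrees north
```

On the equator the two edges coincide; at 60°N the great circle bulges several kilometres poleward of the constant-latitude line.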

  8. Comparison of two non-convex mixed-integer nonlinear programming algorithms applied to autoregressive moving average model structure and parameter estimation

    NASA Astrophysics Data System (ADS)

    Uilhoorn, F. E.

    2016-10-01

    In this article, the stochastic modelling approach proposed by Box and Jenkins is treated as a mixed-integer nonlinear programming (MINLP) problem solved with a mesh adaptive direct search and a real-coded genetic class of algorithms. The aim is to estimate the real-valued parameters and the non-negative integer correlation structure of stationary autoregressive moving average (ARMA) processes. The maximum likelihood function of the stationary ARMA process is embedded in Akaike's information criterion and the Bayesian information criterion, whereas the estimation procedure is based on Kalman filter recursions. The constraints imposed on the objective function enforce stability and invertibility. The best ARMA model is regarded as the global minimum of the non-convex MINLP problem. The robustness and computational performance of the MINLP solvers are compared with brute-force enumeration. Numerical experiments are done for existing time series and one new data set.
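    The brute-force-enumeration baseline can be illustrated with a stripped-down sketch (AR-only, least squares instead of Kalman-filter likelihood, purely illustrative): enumerate the integer order, score each fit with AIC, and take the global minimum.

```python
import numpy as np

rng = np.random.default_rng(1)

def fit_ar(x, p):
    """Least-squares AR(p) fit; returns the residual variance."""
    if p == 0:
        return float(np.var(x))
    X = np.column_stack([x[p - 1 - i:len(x) - 1 - i] for i in range(p)])
    y = x[p:]
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return float(np.var(y - X @ coef))

def best_order_aic(x, pmax=5):
    """Brute-force enumeration of the integer order, with AIC as objective."""
    n = len(x)
    aic = [n * np.log(fit_ar(x, p)) + 2 * (p + 1) for p in range(pmax + 1)]
    return int(np.argmin(aic))

# synthetic AR(2) series; AIC enumeration should not underfit the order
x = np.zeros(2000)
for i in range(2, len(x)):
    x[i] = 0.6 * x[i - 1] - 0.3 * x[i - 2] + rng.normal()
p = best_order_aic(x)
```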

  9. Primal-dual convex optimization in large deformation diffeomorphic metric mapping: LDDMM meets robust regularizers

    NASA Astrophysics Data System (ADS)

    Hernandez, Monica

    2017-12-01

    This paper proposes a method for primal-dual convex optimization in variational large deformation diffeomorphic metric mapping problems formulated with robust regularizers and robust image similarity metrics. The method is based on the Chambolle-Pock primal-dual algorithm for solving general convex optimization problems. Diagonal preconditioning is used to ensure the convergence of the algorithm to the global minimum. We consider three robust regularizers likely to provide acceptable results in diffeomorphic registration: Huber, V-Huber and total generalized variation. The Huber norm is used in the image similarity term. The primal-dual equations are derived for the stationary and the non-stationary parameterizations of diffeomorphisms. The resulting algorithms have been implemented to run on the GPU using CUDA. For the most memory-consuming methods, we have developed a multi-GPU implementation. The GPU implementations allowed us to perform an exhaustive evaluation study on the NIREP and LPBA40 databases. The experiments showed that, for all the considered regularizers, the proposed method converges to diffeomorphic solutions while better preserving discontinuities at the boundaries of objects compared to baseline diffeomorphic registration methods. In most cases, the evaluation showed a competitive performance for the robust regularizers, close to the performance of the baseline diffeomorphic registration methods.
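    The Chambolle-Pock iteration itself is easy to demonstrate on a toy problem that is unrelated to registration but uses the same primal-dual structure (a minimal sketch under illustrative parameters): 1-D total-variation denoising, min_x ½‖x − f‖² + λ‖Dx‖₁ with D the forward difference.

```python
import numpy as np

rng = np.random.default_rng(2)

def tv_denoise_cp(f, lam=0.3, tau=0.4, sigma=0.4, iters=300):
    """Chambolle-Pock primal-dual iteration for 1-D TV denoising.
    Step sizes satisfy tau*sigma*||D||^2 <= 1 since ||D||^2 <= 4."""
    x = f.copy()
    xbar = f.copy()
    y = np.zeros(len(f) - 1)
    for _ in range(iters):
        # dual ascent, then projection onto the l-inf ball of radius lam
        y = np.clip(y + sigma * np.diff(xbar), -lam, lam)
        # primal descent: apply D^T y, then the prox of the quadratic data term
        x_old = x
        dty = np.zeros_like(f)
        dty[:-1] -= y
        dty[1:] += y
        x = (x_old - tau * dty + tau * f) / (1.0 + tau)
        xbar = 2.0 * x - x_old          # over-relaxation step
    return x

clean = np.repeat([0.0, 1.0], 100)               # piecewise-constant signal
noisy = clean + 0.2 * rng.normal(size=200)
rec = tv_denoise_cp(noisy)
```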

  10. Distance estimation and collision prediction for on-line robotic motion planning

    NASA Technical Reports Server (NTRS)

    Kyriakopoulos, K. J.; Saridis, G. N.

    1991-01-01

    An efficient method for computing the minimum distance and predicting collisions between moving objects is presented. This problem has been incorporated in the framework of an on-line motion planning algorithm to satisfy collision avoidance between a robot and moving objects modeled as convex polyhedra. First, the deterministic problem, in which the information about the objects is assumed to be certain, is examined. If, instead of the Euclidean norm, the L(sub 1) or L(sub infinity) norm is used to represent distance, the problem becomes a linear programming problem. The stochastic problem is then formulated, where the uncertainty is induced by sensing and the unknown dynamics of the moving obstacles. Two problems are considered: (1) filtering of the minimum distance between the robot and the moving object at the present time; and (2) prediction of the minimum distance in the future, in order to predict possible collisions with the moving obstacles and estimate the collision time.
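    The LP reformulation can be sketched for the L-infinity norm in 2-D (an illustrative sketch assuming SciPy's `linprog`, not the paper's formulation): a point inside a convex polygon is a convex combination of its vertices, so minimizing the largest coordinate difference between one point in each polygon is linear.

```python
import numpy as np
from scipy.optimize import linprog

def min_dist_linf(V1, V2):
    """Minimum L-infinity distance between two convex polygons, each given
    by its vertex array (rows), posed as a linear program."""
    m, k = len(V1), len(V2)
    n = m + k + 1                         # variables: lambda, mu, t
    c = np.zeros(n)
    c[-1] = 1.0                           # minimize t
    A_ub, b_ub = [], []
    for d in range(2):                    # enforce |(p - q)_d| <= t
        diff = np.concatenate([V1[:, d], -V2[:, d]])
        A_ub.append(np.concatenate([diff, [-1.0]]))
        A_ub.append(np.concatenate([-diff, [-1.0]]))
        b_ub += [0.0, 0.0]
    A_eq = np.zeros((2, n))               # lambda and mu each sum to 1
    A_eq[0, :m] = 1.0
    A_eq[1, m:m + k] = 1.0
    res = linprog(c, A_ub=np.array(A_ub), b_ub=b_ub,
                  A_eq=A_eq, b_eq=[1.0, 1.0], bounds=[(0, None)] * n)
    return res.fun

# two unit squares whose facing edges are 2 apart -> L-inf distance 2
sq = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
dist = min_dist_linf(sq, sq + np.array([3.0, 0.0]))
```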

  11. Use of released pigs as sentinels for Mycobacterium bovis.

    PubMed

    Nugent, Graham; Whitford, Jackie; Young, Nigel

    2002-10-01

    Identifying the presence of bovine tuberculosis (TB; Mycobacterium bovis) in wildlife is crucial in guiding management aimed at eradicating the disease from New Zealand. Unfortunately, surveys of the principal wildlife host, the introduced brushtail possum (Trichosurus vulpecula), require large samples (> 95% of the population) before they can provide reasonable confidence that the disease is absent. In this study, we tested the feasibility of using a more wide-ranging species, feral pig (Sus scrofa), as an alternative sentinel capable of indicating TB presence. In January 2000, 17 pigs in four groups were released into a forested area with a low density of possums in which TB was known to be present. The pigs were radiotracked at 2 wk intervals from February to October 2000, and some of them were killed and necropsied at various intervals after release. Of the 15 pigs successfully recovered and necropsied, one killed 2 mo after release had no gross lesions typical of TB, and the only other pig killed at that time had greatly enlarged mandibular lymph nodes. The remainder were killed at longer intervals after release and all had gross lesions typical of TB. Mycobacterium bovis was isolated from all 15 pigs by mycobacterial culture. Home range sizes of pigs varied widely and increased with the length of time the pigs were in the forest, with minimum convex polygon range-size estimates averaging 10.7 km2 (range 4.7-20.3 km2) for the pigs killed after 6 mo. A 6 km radius around the kill site of each pig would have encompassed 95% of all of their previous locations at which they could have become infected. However, one pig shifted 35 km, highlighting the main limitation of using unmarked feral pigs as sentinels. This trial indicates use of resident and/or released free-ranging pigs is a feasible alternative to direct prevalence surveys of possums for detecting TB presence.
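    The minimum-convex-polygon range estimate used above (and in several of the following records) is simply the area of the convex hull of the location fixes; a common variant drops the outermost 5% of fixes first. A minimal sketch, assuming SciPy is available and using simulated fixes rather than real telemetry:

```python
import numpy as np
from scipy.spatial import ConvexHull

def mcp_area(locations):
    """100% minimum convex polygon area from (x, y) fixes."""
    return ConvexHull(locations).volume   # in 2-D, ConvexHull.volume is the area

def mcp95_area(locations):
    """95% MCP: drop the 5% of fixes farthest from the centroid first."""
    c = locations.mean(axis=0)
    d = np.linalg.norm(locations - c, axis=1)
    return ConvexHull(locations[d <= np.quantile(d, 0.95)]).volume

rng = np.random.default_rng(3)
fixes = rng.normal(scale=1.5, size=(200, 2))   # hypothetical GPS fixes (km)
area = mcp_area(fixes)                         # area in km^2
area95 = mcp95_area(fixes)
```

Because the MCP is driven entirely by the outermost fixes, the 95% variant is markedly less sensitive to excursions like the 35 km shift reported above.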

  12. Better Few than Hungry: Flexible Feeding Ecology of Collared Lemurs Eulemur collaris in Littoral Forest Fragments

    PubMed Central

    Donati, Giuseppe; Kesch, Kristina; Ndremifidy, Kelard; Schmidt, Stacey L.; Ramanamanjato, Jean-Baptiste; Borgognini-Tarli, Silvana M.; Ganzhorn, Joerg U.

    2011-01-01

    Background Frugivorous primates are known to encounter many problems to cope with habitat degradation, due to the fluctuating spatial and temporal distribution of their food resources. Since lemur communities evolved strategies to deal with periods of food scarcity, these primates are expected to be naturally adapted to fluctuating ecological conditions and to tolerate a certain degree of habitat changes. However, behavioral and ecological strategies adopted by frugivorous lemurs to survive in secondary habitats have been little investigated. Here, we compared the behavioral ecology of collared lemurs (Eulemur collaris) in a degraded fragment of littoral forest of south-east Madagascar, Mandena, with that of their conspecifics in a more intact habitat, Sainte Luce. Methodology/Principal Findings Lemur groups in Mandena and in Sainte Luce were censused in 2004/2007 and in 2000, respectively. Data were collected via instantaneous sampling on five lemur groups totaling 1,698 observation hours. The Shannon index was used to determine dietary diversity and nutritional analyses were conducted to assess food quality. All feeding trees were identified and measured, and ranging areas determined via the minimum convex polygon. In the degraded area lemurs were able to modify several aspects of their feeding strategies by decreasing group size and by increasing feeding time, ranging areas, and number of feeding trees. The above strategies were apparently able to counteract a clear reduction in both food quality and size of feeding trees. Conclusions/Significance Our findings indicate that collared lemurs in littoral forest fragments modified their behavior to cope with the pressures of fluctuating resource availability. The observed flexibility is likely to be an adaptation to Malagasy rainforests, which are known to undergo periods of fruit scarcity and low productivity. 
These results should be carefully considered when relocating lemurs or when selecting suitable areas for their conservation. PMID:21625557

  13. Home Range and Ranging Behaviour of Bornean Elephant (Elephas maximus borneensis) Females

    PubMed Central

    Alfred, Raymond; Ahmad, Abd Hamid; Payne, Junaidi; Williams, Christy; Ambu, Laurentius Nayan; How, Phua Mui; Goossens, Benoit

    2012-01-01

    Background Home range is defined as the extent and location of the area covered annually by a wild animal in its natural habitat. Studies of African and Indian elephants in landscapes of largely open habitats have indicated that the sizes of the home range are determined not only by the food supplies and seasonal changes, but also by numerous other factors including availability of water sources, habitat loss and the existence of man-made barriers. The home range size for the Bornean elephant had never been investigated before. Methodology/Principal Findings The first satellite tracking program to investigate the movement of wild Bornean elephants in Sabah was initiated in 2005. Five adult female elephants were immobilized and neck collars were fitted with tracking devices. The sizes of their home range and movement patterns were determined using location data gathered from a satellite tracking system and analyzed by using the Minimum Convex Polygon and Harmonic Mean methods. Home range size was estimated to be 250 to 400 km2 in a non-fragmented forest and 600 km2 in a fragmented forest. The ranging behavior was influenced by the size of the natural forest habitat and the availability of permanent water sources. The movement pattern was influenced by human disturbance and the need to move from one feeding site to another. Conclusions/Significance Home range and movement rate were influenced by the degree of habitat fragmentation. Once habitat was cleared or converted, the availability of food plants and water sources were reduced, forcing the elephants to travel to adjacent forest areas. Therefore movement rate in fragmented forest was higher than in the non-fragmented forest. Finally, in fragmented habitat human and elephant conflict occurrences were likely to be higher, due to increased movement bringing elephants into contact more often with humans. PMID:22347469

  14. Evaluation of Argos Telemetry Accuracy in the High-Arctic and Implications for the Estimation of Home-Range Size

    PubMed Central

    Christin, Sylvain; St-Laurent, Martin-Hugues; Berteaux, Dominique

    2015-01-01

    Animal tracking through Argos satellite telemetry has enormous potential to test hypotheses in animal behavior, evolutionary ecology, or conservation biology. Yet the applicability of this technique cannot be fully assessed because no clear picture exists as to the conditions influencing the accuracy of Argos locations. Latitude, type of environment, and transmitter movement are among the main candidate factors affecting accuracy. A posteriori data filtering can remove “bad” locations, but testing is still needed to refine filters. First, we evaluate experimentally the accuracy of Argos locations in a polar terrestrial environment (Nunavut, Canada), with both static and mobile transmitters transported by humans and coupled to GPS transmitters. We report static errors among the lowest published. However, the 68th error percentiles of mobile transmitters were 1.7 to 3.8 times greater than those of static transmitters. Second, we test how different filtering methods influence the quality of Argos location datasets. Accuracy of location datasets was best improved by retaining only locations of the best classes (LC3 and LC2), while the Douglas Argos filter and a homemade speed filter yielded similar performance while retaining more locations. All filters effectively reduced the 68th error percentiles. Finally, we assess how location error impacted, at six spatial scales, two common estimators of home-range size (a proxy of animal space use behavior synthesizing movements): the minimum convex polygon and the fixed kernel estimator. Location error led to a sometimes dramatic overestimation of home-range size, especially at very local scales. We conclude that Argos telemetry is appropriate to study medium-size terrestrial animals in polar environments, but recommend that location errors always be measured and evaluated against research hypotheses, and that data always be filtered before analysis. 
How movement speed of transmitters affects location error needs additional research. PMID:26545245
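    A speed filter of the kind mentioned above can be sketched in a few lines (an illustrative greedy filter with a made-up threshold, not the authors' "homemade" filter): each fix is kept only if the speed implied by the distance from the last kept fix is plausible for the animal.

```python
import numpy as np

def speed_filter(t_s, x_m, y_m, vmax_ms=2.5):
    """Greedy speed filter: drop fixes implying travel faster than vmax.

    t_s: fix times in seconds; x_m, y_m: projected coordinates in metres.
    (vmax of 2.5 m/s is an illustrative threshold, not from the paper.)
    """
    keep = [0]
    for i in range(1, len(t_s)):
        j = keep[-1]
        dist = np.hypot(x_m[i] - x_m[j], y_m[i] - y_m[j])
        if dist / (t_s[i] - t_s[j]) <= vmax_ms:
            keep.append(i)
    return np.array(keep)

t = np.arange(6) * 600.0                             # one fix every 10 min
x = np.array([0.0, 100.0, 50000.0, 200.0, 300.0, 400.0])  # one wild outlier
y = np.zeros(6)
idx = speed_filter(t, x, y)                          # indices of kept fixes
```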

  15. Does Plan B work? Home range estimations from stored on board and transmitted data sets produced by GPS-telemetry in the Colombian Amazon.

    PubMed

    Cabrera, Jaime A; Molina, Eduardo; González, Tania; Armenteras, Dolors

    2016-12-01

    Telemetry based on Global Positioning Systems (GPS) makes it possible to gather large quantities of information at a very fine scale and to work with species that were impossible to study in the past. When working with GPS telemetry, the option of storing data on board can be more desirable than relying solely on satellite-transmitted data, due to the increase in the number of locations available for analysis. Nonetheless, the uncertainty in retrieving the collar unit makes satellite-transmitted technologies something to take into account. Therefore, differences between store-on-board (SoB) and satellite-transmitted (IT) data sets need to be considered. Differences between SoB and IT data collected from two lowland tapirs (Tapirus terrestris) were explored by calculating home range areas with three different methods: the Minimum Convex Polygon (MCP), the Fixed Kernel Density Estimator (KDE) and Brownian Bridges (BB). Results showed that SoB and IT data sets for the same individual were similar, with fix success ranging from 63% to 85%, respectively, and 16 m to 17 m horizontal errors. Depending on the total number of locations available for each individual, the estimated home ranges showed differences of between 2.7% and 79.3% for the 50% probability contour and between 9.9% and 61.8% for the 95% probability contour. These differences imply variations in the spatial coincidence of the estimated home ranges. We conclude that the use of IT data is not a good option for the estimation of home range areas if the collar settings have not been designed specifically for this use. Nonetheless, geographical representations of the IT-based estimators can be of great help in identifying areas of use, besides assisting in locating the collar for retrieval at the end of the field season and serving as a proximate backup when collars disappear.

  16. Inter-nesting movements and habitat-use of adult female Kemp’s ridley turtles in the Gulf of Mexico

    PubMed Central

    Hart, Kristen M.; Fujisaki, Ikuko; Bucklin, David; Iverson, Autumn R.; Rubio, Cynthia; Backof, Thomas F.; Burchfield, Patrick M.; de Jesus Gonzales Diaz Miron, Raul; Dutton, Peter H.; Frey, Amy; Peña, Jaime; Gomez Gamez, Daniel; Martinez, Hector J.; Ortiz, Jaime

    2017-01-01

    Species vulnerability is increased when individuals congregate in restricted areas for breeding; yet, breeding habitats are not well defined for many marine species. Identification and quantification of these breeding habitats are essential to effective conservation. Satellite telemetry and switching state-space modeling (SSM) were used to define inter-nesting habitat of endangered Kemp’s ridley turtles (Lepidochelys kempii) in the Gulf of Mexico. Turtles were outfitted with satellite transmitters after nesting at Padre Island National Seashore, Texas, USA, from 1998 through 2013 (n = 60); Rancho Nuevo, Tamaulipas, Mexico, during 2010 and 2011 (n = 11); and Tecolutla, Veracruz, Mexico, during 2012 and 2013 (n = 11). These sites span the range of nearly all nesting by this species. Inter-nesting habitat lies in a narrow band of nearshore western Gulf of Mexico waters in the USA and Mexico, with mean water depth of 14 to 19 m within a mean distance to shore of 6 to 11 km as estimated by 50% kernel density estimate, α-Hull, and minimum convex polygon methodologies. Turtles tracked during the inter-nesting period moved, on average, 17.5 km/day and a mean total distance of 398 km. Mean home ranges occupied were 725 to 2948 km2. Our results indicate that these nearshore western Gulf waters represent critical inter-nesting habitat for this species, where threats such as shrimp trawling and oil and gas platforms also occur. Up to half of all adult female Kemp’s ridleys occupy this habitat for weeks to months during each nesting season. Because inter-nesting habitat for this species is concentrated in nearshore waters of the western Gulf of Mexico in both Mexico and the USA, international collaboration is needed to protect this essential habitat and the turtles occurring within it. PMID:28319178

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    LaGory, K. E.; Walston, L. J.; Goulet, C

    The decline of many snake populations is attributable to habitat loss, and knowledge of habitat use is critical to their conservation. Resource characteristics (e.g., relative availability of different habitat types, soils, and slopes) within a landscape are scale-dependent and may not be equal across multiple spatial scales. Thus, it is important to identify the relevant spatial scales at which resource selection occurs. We conducted a radiotelemetry study of eastern hognose snake (Heterodon platirhinos) home range size and resource use at different hierarchical spatial scales. We present the results for 8 snakes radiotracked during a 2-year study at New Boston Air Force Station (NBAFS) in southern New Hampshire, USA, where the species is listed by the state as endangered. Mean home range size (minimum convex polygon) at NBAFS (51.7 ± 14.7 ha) was similar to that reported in other parts of the species range. Radiotracked snakes exhibited different patterns of resource use at different spatial scales. At the landscape scale (selection of locations within the landscape), snakes overutilized old-field and forest edge habitats and underutilized forested habitats and wetlands relative to availability. At this scale, snakes also overutilized areas containing sandy loam soils and areas with lower slope (mean slope = 5.2% at snake locations vs. 6.7% at random locations). We failed to detect some of these patterns of resource use at the home range scale (i.e., within the home range). Our ability to detect resource selection by the snakes only at the landscape scale is likely the result of greater heterogeneity in macrohabitat features at the broader landscape scale. From a management perspective, future studies of habitat selection for rare species should include measurement of available habitat at spatial scales larger than the home range. 
We suggest that the maintenance of open early successional habitats as a component of forested landscapes will be critical for the persistence of eastern hognose snake populations in the northeastern United States.

  18. Migration, Foraging, and Residency Patterns for Northern Gulf Loggerheads: Implications of Local Threats and International Movements

    PubMed Central

    Hart, Kristen M.; Lamont, Margaret M.; Sartain, Autumn R.; Fujisaki, Ikuko

    2014-01-01

    Northern Gulf of Mexico (NGoM) loggerheads (Caretta caretta) make up one of the smallest subpopulations of this threatened species and have declining nest numbers. We used satellite telemetry and a switching state-space model to identify distinct foraging areas used by 59 NGoM loggerheads tagged during 2010–2013. We tagged turtles after nesting at three sites, 1 in Alabama (Gulf Shores; n = 37) and 2 in Florida (St. Joseph Peninsula; n = 20 and Eglin Air Force Base; n = 2). Peak migration time was 22 July to 9 August during which >40% of turtles were in migration mode; the mean post-nesting migration period was 23.0 d (±13.8 d SD). After displacement from nesting beaches, 44 turtles traveled to foraging sites where they remained resident throughout tracking durations. Selected foraging locations were variable distances from tagging sites, and in 5 geographic regions; no turtles selected foraging sites outside the Gulf of Mexico (GoM). Foraging sites delineated using 50% kernel density estimation were located a mean distance of 47.6 km from land and in water with mean depth of −32.5 m; other foraging sites, delineated using minimum convex polygons, were located a mean distance of 43.0 km from land and in water with a mean depth of −24.9 m. Foraging sites overlapped with known trawling activities, oil and gas extraction activities, and the footprint of surface oiling during the 2010 Deepwater Horizon oil spill (n = 10). Our results highlight the year-round use of habitats in the GoM by loggerheads that nest in the NGoM. Our findings indicate that protection of females in this subpopulation requires both international collaborations and management of threats that spatially overlap with distinct foraging habitats. PMID:25076053

  19. Computational Role of Tunneling in a Programmable Quantum Annealer

    NASA Technical Reports Server (NTRS)

    Boixo, Sergio; Smelyanskiy, Vadim; Shabani, Alireza; Isakov, Sergei V.; Dykman, Mark; Amin, Mohammad; Mohseni, Masoud; Denchev, Vasil S.; Neven, Hartmut

    2016-01-01

    Quantum tunneling is a phenomenon in which a quantum state tunnels through energy barriers above the energy of the state itself. Tunneling has been hypothesized as an advantageous physical resource for optimization. Here we present the first experimental evidence of a computational role of multiqubit quantum tunneling in the evolution of a programmable quantum annealer. We developed a theoretical model based on a NIBA quantum master equation to describe the multiqubit dissipative cotunneling effects under the complex noise characteristics of such quantum devices. We start by considering a computational primitive, the simplest non-convex optimization problem, consisting of just one global and one local minimum. The quantum evolutions enable tunneling to the global minimum, while the corresponding classical paths are trapped in a false minimum. In our study the non-convex potentials are realized by frustrated networks of qubit clusters with strong intra-cluster coupling. We show that the collective effect of the quantum environment is suppressed in the critical phase during the evolution where quantum tunneling decides the right path to solution. In a later stage dissipation facilitates the multiqubit cotunneling leading to the solution state. The predictions of the model accurately describe the experimental data from the D-Wave II quantum annealer at NASA Ames. In our computational primitive the temperature dependence of the probability of success in the quantum model is opposite to that of the classical paths with thermal hopping. Specifically, we provide an analysis of an optimization problem with sixteen qubits, demonstrating eight-qubit cotunneling that increases success probabilities. Furthermore, we report results for larger problems with up to 200 qubits that contain the primitive as subproblems.

  20. Microtopographic characterization of ice-wedge polygon landscape in Barrow, Alaska: a digital map of troughs, rims, centers derived from high resolution (0.25 m) LiDAR data

    DOE Data Explorer

    Gangodagamage, Chandana; Wullschleger, Stan

    2014-07-03

    The dataset represents a microtopographic characterization of the ice-wedge polygon landscape in Barrow, Alaska. Three microtopographic features are delineated using a 0.25 m high-resolution digital elevation dataset derived from LiDAR. Troughs, rims, and centers are the three categories in this classification scheme. The polygon troughs are the surface expression of the ice wedges and lie at lower elevations than the polygon interior. The elevated shoulders of the polygon interior immediately adjacent to the polygon troughs are the polygon rims for low-center polygons; in the case of high-center polygons, these features are topographic highs. In this classification scheme, both topographic highs and rims are classed as polygon rims. The next version of the dataset will include a more refined classification scheme with separate classes for rims and topographic highs. The interior part of the polygon just inside the polygon rims constitutes the polygon centers.

  1. Model-based multiple patterning layout decomposition

    NASA Astrophysics Data System (ADS)

    Guo, Daifeng; Tian, Haitong; Du, Yuelin; Wong, Martin D. F.

    2015-10-01

    As one of the most promising next-generation lithography technologies, multiple patterning lithography (MPL) plays an important role in the attempts to keep pace with the 10 nm technology node and beyond. As feature sizes keep shrinking, it has become impossible to print dense layouts within one single exposure. As a result, MPL such as double patterning lithography (DPL) and triple patterning lithography (TPL) has been widely adopted. There is a large volume of literature on DPL/TPL layout decomposition, and the current approach is to formulate the problem as a classical graph-coloring problem: layout features (polygons) are represented by vertices in a graph G, and there is an edge between two vertices if and only if the distance between the two corresponding features is less than a minimum distance threshold value dmin. The problem is to color the vertices of G using k colors (k = 2 for DPL, k = 3 for TPL) such that no two vertices connected by an edge are given the same color. This is a rule-based approach, which imposes a geometric distance as a minimum constraint to simply decompose polygons within that distance into different masks. It is not desirable in practice because this criterion cannot completely capture the behavior of the optics. For example, it lacks sufficient information such as the optical source characteristics and the effects between polygons beyond the minimum distance. To remedy the deficiency, a model-based layout decomposition approach that bases the decomposition criteria on simulation results was first introduced at SPIE 2013.1 However, that algorithm1 is based on simplified assumptions about the optical simulation model and therefore its usage on real layouts is limited. Recently ASML2 also proposed a model-based approach to layout decomposition by iteratively simulating the layout, which requires excessive computational resources and may lead to sub-optimal solutions. The approach2 also potentially generates too many stitches. 
In this paper, we propose a model-based MPL layout decomposition method using a pre-simulated library of frequent layout patterns. Instead of using the graph G in the standard graph-coloring formulation, we build an expanded graph H in which each vertex represents a group of adjacent features together with a coloring solution. By utilizing the library and running sophisticated graph algorithms on H, our approach obtains optimal decomposition results efficiently. Our model-based solution achieves a practical mask design that significantly improves the lithography quality on the wafer compared to rule-based decomposition.
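As a rough illustration of the rule-based graph-coloring formulation described above (not the authors' model-based method), the sketch below builds a conflict graph from feature spacings and greedily assigns masks. The feature coordinates, dmin value, and the centroid-distance simplification are all illustrative assumptions:

```python
import itertools
import math

def build_conflict_graph(features, dmin):
    """Edge between two features iff their distance is below dmin.
    Centroid distance is an illustrative simplification; real decomposers
    measure polygon-to-polygon spacing."""
    adj = {i: set() for i in range(len(features))}
    for i, j in itertools.combinations(range(len(features)), 2):
        if math.dist(features[i], features[j]) < dmin:
            adj[i].add(j)
            adj[j].add(i)
    return adj

def greedy_color(adj, k):
    """Assign one of k masks per vertex, highest degree first; returns None if
    this heuristic runs out of colors (k-coloring is NP-hard for k >= 3)."""
    color = {}
    for v in sorted(adj, key=lambda u: -len(adj[u])):
        used = {color[u] for u in adj[v] if u in color}
        free = [c for c in range(k) if c not in used]
        if not free:
            return None
        color[v] = free[0]
    return color

# Three features in a row, each within dmin of its neighbour: DPL (k = 2) works.
features = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0)]
adj = build_conflict_graph(features, dmin=1.5)
masks = greedy_color(adj, k=2)
```

The greedy pass illustrates why the rule-based formulation is cheap but inflexible: the decision depends only on the distance graph, with none of the optical information the model-based method brings in.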

  2. Roaming behaviour and home range estimation of domestic dogs in Aboriginal and Torres Strait Islander communities in northern Australia using four different methods.

    PubMed

    Dürr, Salome; Ward, Michael P

    2014-11-15

Disease transmission parameters are the core of epidemic models, but are difficult to estimate, especially in the absence of outbreak data. Investigation of roaming behaviour, home range (HR) and utilization distribution (UD) can provide the foundation for such parameter estimation in free-ranging animals. The objectives of this study were to estimate the HR and UD of 69 domestic dogs in six Aboriginal and Torres Strait Islander communities in northern Australia and to compare four different methods (the minimum convex polygon, MCP; location-based kernel density estimation, LKDE; the biased random bridge, BRB; and the Time Local Convex Hull, T-LoCoH) for investigating UD and estimating HR sizes. Global positioning system (GPS) collars were attached to community dogs for a period of 1-3 days and positions (fixes) were recorded every minute. Median core HRs (50% isopleth) of the 69 dogs were estimated to range from 0.2 to 0.4 ha and the more extended HRs (95% isopleth) from 2.5 to 5.3 ha, depending on the method used. The HR and UD shapes were generally circular around the dog owner's house. However, some individuals roamed much more widely, with HR sizes of 40-104 ha, covering large areas of their community or occasionally beyond. These far-roaming dogs are of particular interest for infectious disease transmission. Occasionally, dogs were taken between communities and out of communities for hunting, which enables contact between dogs of different communities and with wildlife (such as dingoes). BRB and T-LoCoH are the only two methods applied here that integrate the consecutiveness of GPS locations into the analysis, a substantial advantage. The recently developed BRB method produced significantly larger HR estimates than the other methods; however, the variability of HR sizes was lower compared to the other methods.
Advantages of the BRB method include a more realistic analytical approach (kernel density estimation based on movements rather than on locations), the ability to deal with irregular time periods between consecutive GPS fixes, and parameter specification that respects the characteristics of the GPS unit used to collect the data. The BRB method was therefore the most suitable method for UD estimation in this dataset. The results of this study can further be used to estimate contact rates between dogs within and between communities, a foundation for estimating transmission parameters for canine infectious disease models, such as a rabies spread model in Australia. Crown Copyright © 2014. Published by Elsevier B.V. All rights reserved.
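The MCP baseline compared in this study can be sketched as the area of the convex hull of all GPS fixes. This minimal pure-Python version (with hypothetical coordinates in metres) computes a 100% MCP; the 50% and 95% isopleths reported above additionally discard outlying fixes, which is omitted here:

```python
def convex_hull(points):
    """Andrew's monotone chain; returns hull vertices in counter-clockwise order."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

def polygon_area(verts):
    """Shoelace formula."""
    n = len(verts)
    s = sum(verts[i][0] * verts[(i + 1) % n][1]
            - verts[(i + 1) % n][0] * verts[i][1] for i in range(n))
    return abs(s) / 2.0

# Hypothetical GPS fixes projected to metres; 10,000 m^2 = 1 ha.
fixes = [(0, 0), (100, 0), (100, 100), (0, 100), (50, 50), (20, 80)]
mcp_ha = polygon_area(convex_hull(fixes)) / 10_000
```

Interior fixes do not affect the MCP at all, which is one reason the hull-based estimate is insensitive to where a dog actually spends its time, unlike the kernel-based UD methods compared in the study.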

  3. Spatially-global integration of closed, fragmented contours by finding the shortest-path in a log-polar representation

    PubMed Central

    Kwon, TaeKyu; Agrawal, Kunal; Li, Yunfeng; Pizlo, Zygmunt

    2015-01-01

Finding the occluding contours of objects in real 2D retinal images of natural 3D scenes requires determining which contour fragments are relevant and the order in which they should be connected. We developed a model that finds the closed contour represented in the image by solving a shortest-path problem that uses a log-polar representation of the image, the kind of representation known to exist in area V1 of the primate cortex. The shortest path in a log-polar representation favors smooth, convex and closed contours in the retinal image that have the smallest number of gaps. This approach is practical because finding a globally optimal solution to a shortest-path problem is computationally easy. Our model was tested in four psychophysical experiments. In the first two experiments, the subject was presented with a fragmented convex or concave polygon target among a large number of unrelated pieces of contour (distracters). The density of these pieces of contour was uniform over the screen to minimize spatially-local cues. The orientation of each target contour fragment was randomly perturbed by varying levels of jitter. Subjects drew a closed contour representing the target's contour on a screen. The subjects' performance was nearly perfect when the jitter level was low, and deteriorated as jitter levels increased. The performance of our model was very similar to our subjects'. In two subsequent experiments, the subject was asked to discriminate a briefly-presented egg-shaped object while maintaining fixation at several different positions relative to the closed contour of the shape. The subject's discrimination performance was affected by the fixation position in much the same way as the model's. PMID:26241462
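The globally-optimal shortest-path step the model relies on can be illustrated with a plain Dijkstra search over a graph of contour fragments. The fragment graph and the gap costs below are toy assumptions, not the paper's log-polar cost function:

```python
import heapq

def shortest_path(graph, source, target):
    """Dijkstra on a dict-of-dicts graph: graph[u][v] = cost of connecting
    fragment u to fragment v (e.g. a penalty on the gap to be bridged)."""
    dist = {source: 0.0}
    prev = {}
    pq = [(0.0, source)]
    done = set()
    while pq:
        d, u = heapq.heappop(pq)
        if u in done:
            continue
        done.add(u)
        if u == target:
            break
        for v, w in graph.get(u, {}).items():
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(pq, (nd, v))
    path = [target]
    while path[-1] != source:
        path.append(prev[path[-1]])
    return list(reversed(path)), dist[target]

# Toy fragment graph: edge weights stand in for gap penalties.
graph = {"A": {"B": 1.0, "C": 4.0}, "B": {"C": 1.0, "D": 5.0}, "C": {"D": 1.0}}
path, cost = shortest_path(graph, "A", "D")
```

The computational ease claimed in the abstract comes from exactly this property: Dijkstra finds the global optimum in near-linear time, with no combinatorial search over fragment orderings.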

  4. Modelling airborne dispersion for disaster management

    NASA Astrophysics Data System (ADS)

    Musliman, I. A.; Yohnny, L.

    2017-05-01

Industrial disasters, like any other disasters, can happen anytime, anywhere and in any form. An airborne industrial disaster is a catastrophic event involving the release of particles such as chemicals and industrial wastes into the environment in gaseous form, for instance gas leakages. Unlike solid and liquid materials, gases are often colourless and odourless, and the particles are too tiny to be visible to the naked eye; hence it is difficult to detect the presence of the gases and to tell the dispersion and location of the substance. This study develops an application prototype to perform simulation modelling of the gas particles, to determine the dispersion of the gas particles and to identify the coverage of the affected area. The prototype adopts the Lagrangian Particle Dispersion (LPD) model to calculate the positions of the gas particles under the influence of the wind (including the wind induced by the rotation of the Earth) and turbulent velocity components, and a convex hull algorithm to identify the convex points of the gas cloud and form the polygon of the coverage area. The application performs intersection and overlay analysis over a set of landuse data for the Pasir Gudang, Johor industrial and residential area. Results from the analysis indicate the percentage and extent of the affected area, and are useful for disaster management to evacuate people from the affected area. The developed application can significantly increase the efficiency of emergency handling during a crisis. For example, by using a simulation model, emergency handlers can predict what is going to happen next, so people can be well informed and preparation works can be done earlier and better. Subsequently, this application helps greatly in the decision-making process.
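A minimal sketch of one Lagrangian particle step, assuming a constant mean wind and an uncorrelated Gaussian turbulent component (real LPD models use correlated turbulence closures, and all numbers here are made up); the convex hull stage described above would then be applied to the final particle positions:

```python
import random

def lpd_step(positions, wind, sigma, dt):
    """One Lagrangian step: advect each particle by the mean wind and add an
    uncorrelated Gaussian turbulent kick (a simplifying assumption)."""
    return [(x + wind[0] * dt + random.gauss(0.0, sigma) * dt ** 0.5,
             y + wind[1] * dt + random.gauss(0.0, sigma) * dt ** 0.5)
            for x, y in positions]

random.seed(0)
particles = [(0.0, 0.0)] * 500          # release 500 particles at the source
for _ in range(100):                     # 10 s of transport at dt = 0.1 s
    particles = lpd_step(particles, wind=(2.0, 0.5), sigma=1.0, dt=0.1)
mean_x = sum(p[0] for p in particles) / len(particles)
```

The cloud drifts downwind at the mean wind speed while its spread grows with the square root of time, which is what the hull polygon of the coverage area then captures.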

  5. Method and system for detecting polygon boundaries of structures in images as particle tracks through fields of corners and pixel gradients

    DOEpatents

    Paglieroni, David W [Pleasanton, CA; Manay, Siddharth [Livermore, CA

    2011-12-20

A stochastic method and system for detecting polygon structures in images, by detecting a set of best matching corners of predetermined acuteness .alpha. of a polygon model from a set of similarity scores based on GDM features of corners, and tracking polygon boundaries as particle tracks using a sequential Monte Carlo approach. The tracking involves initializing polygon boundary tracking by selecting pairs of corners from the set of best matching corners to define a first side of a corresponding polygon boundary; tracking all intermediate sides of the polygon boundaries using a particle filter; and terminating polygon boundary tracking by determining the last side of the tracked polygon boundaries to close the polygon boundaries. The particle tracks are then blended to determine polygon matches, which may be made available, such as to a user, for ranking and inspection.

  6. Classification of crops across heterogeneous agricultural landscape in Kenya using AisaEAGLE imaging spectroscopy data

    NASA Astrophysics Data System (ADS)

    Piiroinen, Rami; Heiskanen, Janne; Mõttus, Matti; Pellikka, Petri

    2015-07-01

Land use practices are changing at a fast pace in the tropics. In sub-Saharan Africa, forests, woodlands and bushlands are being transformed for agricultural use to produce food for the rapidly growing population. The objective of this study was to assess the prospects of mapping the common agricultural crops in a highly heterogeneous study area in south-eastern Kenya using high spatial and spectral resolution AisaEAGLE imaging spectroscopy data. Minimum noise fraction transformation was used to pack the coherent information into a smaller set of bands, and the data were classified with the support vector machine (SVM) algorithm. A total of 35 plant species were mapped in the field and the seven most dominant ones were used as classification targets. Five of the targets were agricultural crops. The overall accuracy (OA) for the classification was 90.8%. To assess the possibility of excluding the remaining 28 plant species from the classification results, 10 different probability thresholds (PT) were tested with the SVM. The impact of the PT was assessed with validation polygons of all 35 mapped plant species. The results showed that as the PT was increased, more pixels were excluded from non-target polygons than from the polygons of the seven classification targets. This increased the OA and reduced salt-and-pepper effects in the classification results. Very high spatial resolution imagery and a pixel-based classification approach worked well for small targets such as maize, while classes mixed at the edges of the tree crowns.
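The probability-threshold (PT) rejection step can be sketched independently of the SVM itself: given per-pixel class probabilities (assumed available, e.g. from Platt scaling of SVM scores), pixels whose best class falls below the PT are excluded. The probabilities below are made up:

```python
def classify_with_threshold(pixel_probs, pt):
    """Assign each pixel its arg-max class, but only when that class's
    probability clears the threshold pt; otherwise exclude the pixel (None),
    treating it as a likely non-target species."""
    labels = []
    for probs in pixel_probs:
        best = max(range(len(probs)), key=probs.__getitem__)
        labels.append(best if probs[best] >= pt else None)
    return labels

# Made-up per-pixel class probabilities for three target classes.
pixels = [[0.90, 0.05, 0.05],   # confident -> kept
          [0.40, 0.35, 0.25],   # ambiguous -> excluded at pt = 0.6
          [0.20, 0.70, 0.10]]   # confident -> kept
labels = classify_with_threshold(pixels, pt=0.6)
```

Raising pt trades coverage for confidence, which is exactly the mechanism behind the reported gain in OA and the reduced salt-and-pepper noise.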

  7. Hodograph analysis in aircraft trajectory optimization

    NASA Technical Reports Server (NTRS)

    Cliff, Eugene M.; Seywald, Hans; Bless, Robert R.

    1993-01-01

An account is given of key geometrical concepts involved in the use of the hodograph as an optimal control theory resource that furnishes a framework for geometrical interpretation of the minimum principle. Attention is given to the effects of different convexity properties of the hodograph, which bear on the existence of solutions and on such control types as chattering, 'bang-bang', and singular control. Illustrative aircraft trajectory optimization problems are examined in view of this use of the hodograph.

  8. CALIBRATING NON-CONVEX PENALIZED REGRESSION IN ULTRA-HIGH DIMENSION.

    PubMed

    Wang, Lan; Kim, Yongdai; Li, Runze

    2013-10-01

    We investigate high-dimensional non-convex penalized regression, where the number of covariates may grow at an exponential rate. Although recent asymptotic theory established that there exists a local minimum possessing the oracle property under general conditions, it is still largely an open problem how to identify the oracle estimator among potentially multiple local minima. There are two main obstacles: (1) due to the presence of multiple minima, the solution path is nonunique and is not guaranteed to contain the oracle estimator; (2) even if a solution path is known to contain the oracle estimator, the optimal tuning parameter depends on many unknown factors and is hard to estimate. To address these two challenging issues, we first prove that an easy-to-calculate calibrated CCCP algorithm produces a consistent solution path which contains the oracle estimator with probability approaching one. Furthermore, we propose a high-dimensional BIC criterion and show that it can be applied to the solution path to select the optimal tuning parameter which asymptotically identifies the oracle estimator. The theory for a general class of non-convex penalties in the ultra-high dimensional setup is established when the random errors follow the sub-Gaussian distribution. Monte Carlo studies confirm that the calibrated CCCP algorithm combined with the proposed high-dimensional BIC has desirable performance in identifying the underlying sparsity pattern for high-dimensional data analysis.
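The CCCP idea for a non-convex penalty can be illustrated in the simplest setting, an orthonormal design, where each weighted-l1 subproblem reduces to coordinate-wise soft-thresholding of the OLS estimates. This is an LLA/CCCP-style sketch with the SCAD penalty and illustrative numbers, not the calibrated algorithm of the paper:

```python
import math

def scad_deriv(t, lam, a=3.7):
    """Derivative of the SCAD penalty (Fan & Li) for t >= 0."""
    if t <= lam:
        return lam
    return max(a * lam - t, 0.0) / (a - 1.0)

def soft(z, thr):
    """Soft-thresholding operator."""
    return math.copysign(max(abs(z) - thr, 0.0), z)

def cccp_scad(z, lam, iters=20):
    """CCCP/LLA-style updates for SCAD under an orthonormal design: each
    weighted-l1 subproblem is exact coordinate-wise soft-thresholding of the
    OLS estimates z, with thresholds re-linearized at the current iterate."""
    beta = [soft(zj, lam) for zj in z]                 # lasso initializer
    for _ in range(iters):
        beta = [soft(zj, scad_deriv(abs(bj), lam))
                for zj, bj in zip(z, beta)]
    return beta

z = [5.0, 0.3, -4.0, 0.1]      # hypothetical OLS estimates
beta = cccp_scad(z, lam=1.0)   # large signals end up unshrunk, noise at zero
```

Unlike the lasso initializer (which shrinks 5.0 down to 4.0), the converged SCAD estimates leave large coefficients unbiased while keeping small ones at exactly zero, the oracle-like behavior the calibration theory above is concerned with.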

  9. CALIBRATING NON-CONVEX PENALIZED REGRESSION IN ULTRA-HIGH DIMENSION

    PubMed Central

    Wang, Lan; Kim, Yongdai; Li, Runze

    2014-01-01

    We investigate high-dimensional non-convex penalized regression, where the number of covariates may grow at an exponential rate. Although recent asymptotic theory established that there exists a local minimum possessing the oracle property under general conditions, it is still largely an open problem how to identify the oracle estimator among potentially multiple local minima. There are two main obstacles: (1) due to the presence of multiple minima, the solution path is nonunique and is not guaranteed to contain the oracle estimator; (2) even if a solution path is known to contain the oracle estimator, the optimal tuning parameter depends on many unknown factors and is hard to estimate. To address these two challenging issues, we first prove that an easy-to-calculate calibrated CCCP algorithm produces a consistent solution path which contains the oracle estimator with probability approaching one. Furthermore, we propose a high-dimensional BIC criterion and show that it can be applied to the solution path to select the optimal tuning parameter which asymptotically identifies the oracle estimator. The theory for a general class of non-convex penalties in the ultra-high dimensional setup is established when the random errors follow the sub-Gaussian distribution. Monte Carlo studies confirm that the calibrated CCCP algorithm combined with the proposed high-dimensional BIC has desirable performance in identifying the underlying sparsity pattern for high-dimensional data analysis. PMID:24948843

  10. Worst-Case Energy Efficiency Maximization in a 5G Massive MIMO-NOMA System.

    PubMed

    Chinnadurai, Sunil; Selvaprabhu, Poongundran; Jeong, Yongchae; Jiang, Xueqin; Lee, Moon Ho

    2017-09-18

In this paper, we examine the robust beamforming design to tackle the energy efficiency (EE) maximization problem in a 5G massive multiple-input multiple-output (MIMO)-non-orthogonal multiple access (NOMA) downlink system with imperfect channel state information (CSI) at the base station. A novel joint user pairing and dynamic power allocation (JUPDPA) algorithm is proposed to minimize the inter-user interference and also to enhance the fairness between the users. This work assumes imperfect CSI by adding uncertainties to the channel matrices with a worst-case model, i.e., the ellipsoidal uncertainty model (EUM). A fractional non-convex optimization problem is formulated to maximize the EE subject to the transmit power constraints and the minimum rate requirement for the cell-edge user. The designed problem is difficult to solve due to its nonlinear fractional objective function. We first employ the properties of fractional programming to transform the non-convex problem into its equivalent parametric form. Then, an efficient iterative algorithm based on the constrained concave-convex procedure (CCCP) is proposed that solves and achieves convergence to a stationary point of the above problem. Finally, Dinkelbach's algorithm is employed to determine the maximum energy efficiency. Comprehensive numerical results illustrate that the proposed scheme attains higher worst-case energy efficiency as compared with the existing NOMA schemes and the conventional orthogonal multiple access (OMA) scheme.
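Dinkelbach's algorithm for the fractional EE objective can be sketched on a toy single-link energy-efficiency problem; the rate and power models and all numbers below are hypothetical, and the inner maximization is done by grid search rather than a convex solver:

```python
import math

def dinkelbach(f, g, grid, tol=1e-9, max_iter=50):
    """Dinkelbach's algorithm for max_x f(x)/g(x) with g > 0: repeatedly
    maximize F(lam) = f(x) - lam*g(x) (here by grid search) and update
    lam = f(x*)/g(x*) until F(lam) reaches zero."""
    lam = 0.0
    for _ in range(max_iter):
        x = max(grid, key=lambda p: f(p) - lam * g(p))
        if abs(f(x) - lam * g(x)) < tol:
            break
        lam = f(x) / g(x)
    return x, f(x) / g(x)

# Toy single-link energy-efficiency problem (hypothetical numbers):
# rate in bits/s/Hz over total power = transmit + circuit power.
f = lambda p: math.log2(1.0 + 10.0 * p)       # channel gain 10
g = lambda p: p + 0.5                          # 0.5 W circuit power
grid = [i / 1000.0 for i in range(5001)]       # p in [0, 5] W
p_opt, ee_opt = dinkelbach(f, g, grid)
```

Each update strictly increases lam, and the zero of F(lam) coincides with the maximum ratio, which is why the parametric transformation in the abstract reduces the fractional problem to a sequence of non-fractional ones.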

  11. Worst-Case Energy Efficiency Maximization in a 5G Massive MIMO-NOMA System

    PubMed Central

    Jeong, Yongchae; Jiang, Xueqin; Lee, Moon Ho

    2017-01-01

In this paper, we examine the robust beamforming design to tackle the energy efficiency (EE) maximization problem in a 5G massive multiple-input multiple-output (MIMO)-non-orthogonal multiple access (NOMA) downlink system with imperfect channel state information (CSI) at the base station. A novel joint user pairing and dynamic power allocation (JUPDPA) algorithm is proposed to minimize the inter-user interference and also to enhance the fairness between the users. This work assumes imperfect CSI by adding uncertainties to the channel matrices with a worst-case model, i.e., the ellipsoidal uncertainty model (EUM). A fractional non-convex optimization problem is formulated to maximize the EE subject to the transmit power constraints and the minimum rate requirement for the cell-edge user. The designed problem is difficult to solve due to its nonlinear fractional objective function. We first employ the properties of fractional programming to transform the non-convex problem into its equivalent parametric form. Then, an efficient iterative algorithm based on the constrained concave-convex procedure (CCCP) is proposed that solves and achieves convergence to a stationary point of the above problem. Finally, Dinkelbach’s algorithm is employed to determine the maximum energy efficiency. Comprehensive numerical results illustrate that the proposed scheme attains higher worst-case energy efficiency as compared with the existing NOMA schemes and the conventional orthogonal multiple access (OMA) scheme. PMID:28927019

  12. A Frost Enhanced Landscape

    NASA Image and Video Library

    2015-12-23

    The arc of hills in this image from NASA Mars Reconnaissance Orbiter spacecraft is the rim of an old and infilled impact crater. The sediments that were deposited within the crater have since formed polygonal cracks due to repeated cycles of freezing and thawing. The process of polygon formation is common at these polar latitudes, but polygons are not always as striking as they are here. In this image, the polygons have been highlighted by persistent frost in the cracks. The crater rim constrains the polygon formation within the crater close to the rim, creating a spoke and ring pattern of cracks. This leads to more rectangular polygons than those near the center of the crater. The polygons close to the center of the crater display a more typical pattern. A closer look shows some of these central polygons, which have smaller polygons within them, and smaller polygons within those smaller polygons, which makes for a natural fractal. http://photojournal.jpl.nasa.gov/catalog/PIA20289

  13. Trajectory Design Employing Convex Optimization for Landing on Irregularly Shaped Asteroids

    NASA Technical Reports Server (NTRS)

    Pinson, Robin M.; Lu, Ping

    2016-01-01

Mission proposals that land on asteroids are becoming popular. However, for a mission to succeed, the spacecraft must reliably and softly land at the intended landing site. The problem under investigation is how to design a fuel-optimal powered descent trajectory that can be quickly computed on-board the spacecraft, without interaction from ground control. An optimal trajectory designed immediately prior to the descent burn has many advantages. These include the ability to use the actual vehicle starting state as the initial condition in the trajectory design and the ease of updating the landing target site if the original landing site is no longer viable. For long trajectories, the trajectory can be updated periodically by redesigning the optimal trajectory based on current vehicle conditions to improve guidance performance. One of the key drivers for being completely autonomous is the infrequent and delayed communication between ground control and the vehicle. Challenges that arise in designing an asteroid powered descent trajectory include complicated nonlinear gravity fields, small rotating bodies and low-thrust vehicles. Two previous studies form the background to the current investigation. The first looked in depth at applying convex optimization to a powered descent trajectory on Mars, with promising results.1, 2 It showed that the powered descent equations of motion can be relaxed and formed into a convex optimization problem, and that the optimal solution of the relaxed problem is indeed a feasible solution to the original problem. This analysis used a constant gravity field. The second applied a successive solution process to formulate a second-order cone program that designs rendezvous and proximity operations trajectories.3, 4 These trajectories included a Newtonian gravity model. The equivalence of the solutions between the relaxed and the original problem is theoretically established.
The proposed solution for designing the asteroid powered descent trajectory is to use convex optimization, a gravity model with higher fidelity than Newtonian, and an iterative solution process to design the fuel-optimal trajectory. The solution to the convex optimization problem is the thrust profile, magnitude and direction, that yields the minimum-fuel trajectory for a soft landing at the target site, subject to various mission and operational constraints. The equations of motion are formulated in a rotating coordinate system and include a high-fidelity gravity model. The vehicle's thrust magnitude can vary between maximum and minimum bounds during the burn. Constraints are also included to ensure that the vehicle does not run out of propellant or descend below the asteroid's surface, and to enforce any vehicle pointing requirements. The equations of motion are discretized and propagated with the trapezoidal rule in order to produce equality constraints for the optimization problem. These equality constraints allow the optimization algorithm to solve the entire problem without including a propagator inside the optimization algorithm.
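The trapezoidal transcription of the dynamics into equality constraints can be sketched on a double integrator (a stand-in for the full rotating-frame dynamics; all numbers are illustrative). In the actual solver these residuals, set to zero, become the linear equality constraints handed to the convex optimizer:

```python
def trapezoid_residuals(r, v, a, dt):
    """Residuals of the trapezoidal transcription of the double-integrator
    dynamics r' = v, v' = a; all residuals are zero exactly when the discrete
    trajectory satisfies the transcribed equations of motion."""
    res = []
    for k in range(len(r) - 1):
        res.append(r[k + 1] - r[k] - 0.5 * dt * (v[k] + v[k + 1]))
        res.append(v[k + 1] - v[k] - 0.5 * dt * (a[k] + a[k + 1]))
    return res

# Under constant acceleration the analytic trajectory (v linear, r quadratic)
# satisfies the trapezoidal constraints exactly.
dt, acc, N = 0.1, 2.0, 11
t = [k * dt for k in range(N)]
r = [0.5 * acc * tk ** 2 for tk in t]
v = [acc * tk for tk in t]
a = [acc] * N
res = trapezoid_residuals(r, v, a, dt)
```

Because the discretized dynamics appear as algebraic constraints, the optimizer never needs to call a numerical propagator; it simply enforces these residuals alongside the thrust bounds and pointing constraints.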

  14. Point Cloud Classification of Tesserae from Terrestrial Laser Data Combined with Dense Image Matching for Archaeological Information Extraction

    NASA Astrophysics Data System (ADS)

    Poux, F.; Neuville, R.; Billen, R.

    2017-08-01

Reasoning from information extraction given by point cloud data mining allows contextual adaptation and fast decision making. However, to achieve this perceptive level, a point cloud must be semantically rich, retaining relevant information for the end user. This paper presents an automatic knowledge-based method for pre-processing multi-sensory data and classifying a hybrid point cloud from both terrestrial laser scanning and dense image matching. Using 18 features, including the sensors' biased data, each tessera in the high-density point cloud from the 3D-captured complex mosaics of Germigny-des-prés (France) is segmented via a colour-based multi-scale abstraction that extracts connectivity. A 2D surface and outline polygon of each tessera is generated by RANSAC plane extraction and convex hull fitting. Knowledge is then used to classify every tessera based on its size, surface, shape, material properties and its neighbours' classes. The detection and semantic enrichment method shows promising results of 94% correct semantization, a first step toward the creation of an archaeological smart point cloud.
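The RANSAC plane-extraction step can be sketched as follows, fitting z = ax + by + c by repeatedly sampling three points and keeping the model with the most inliers; the synthetic tessera points, outliers and inlier threshold are illustrative assumptions:

```python
import random

def ransac_plane(points, n_iter=200, thresh=0.05, seed=1):
    """Fit z = a*x + b*y + c by RANSAC: sample 3 points, solve the plane by
    Cramer's rule, and keep the model with the most inliers."""
    rng = random.Random(seed)
    best, best_inliers = None, []
    for _ in range(n_iter):
        (x1, y1, z1), (x2, y2, z2), (x3, y3, z3) = rng.sample(points, 3)
        det = x1 * (y2 - y3) - y1 * (x2 - x3) + (x2 * y3 - x3 * y2)
        if abs(det) < 1e-9:
            continue                      # (near-)collinear sample
        a = (z1 * (y2 - y3) - y1 * (z2 - z3) + (z2 * y3 - z3 * y2)) / det
        b = (x1 * (z2 - z3) - z1 * (x2 - x3) + (x2 * z3 - x3 * z2)) / det
        c = (x1 * (y2 * z3 - y3 * z2) - y1 * (x2 * z3 - x3 * z2)
             + z1 * (x2 * y3 - x3 * y2)) / det
        inliers = [p for p in points
                   if abs(a * p[0] + b * p[1] + c - p[2]) < thresh]
        if len(inliers) > len(best_inliers):
            best, best_inliers = (a, b, c), inliers
    return best, best_inliers

# Synthetic tessera surface z = 0.5x - 0.2y + 1 plus two gross outliers.
points = [(0.1 * i, 0.1 * j, 0.5 * (0.1 * i) - 0.2 * (0.1 * j) + 1.0)
          for i in range(10) for j in range(10)]
points += [(0.5, 0.5, 5.0), (0.2, 0.8, -3.0)]
(a, b, c), inliers = ransac_plane(points)
```

The inlier set then defines the tessera's supporting plane; projecting the inliers onto it and taking their convex hull yields the 2D outline polygon described above.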

  15. Mechanical characterization of disordered and anisotropic cellular monolayers

    NASA Astrophysics Data System (ADS)

    Nestor-Bergmann, Alexander; Johns, Emma; Woolner, Sarah; Jensen, Oliver E.

    2018-05-01

    We consider a cellular monolayer, described using a vertex-based model, for which cells form a spatially disordered array of convex polygons that tile the plane. Equilibrium cell configurations are assumed to minimize a global energy defined in terms of cell areas and perimeters; energy is dissipated via dynamic area and length changes, as well as cell neighbor exchanges. The model captures our observations of an epithelium from a Xenopus embryo showing that uniaxial stretching induces spatial ordering, with cells under net tension (compression) tending to align with (against) the direction of stretch, but with the stress remaining heterogeneous at the single-cell level. We use the vertex model to derive the linearized relation between tissue-level stress, strain, and strain rate about a deformed base state, which can be used to characterize the tissue's anisotropic mechanical properties; expressions for viscoelastic tissue moduli are given as direct sums over cells. When the base state is isotropic, the model predicts that tissue properties can be tuned to a regime with high elastic shear resistance but low resistance to area changes, or vice versa.
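A common form of the vertex-model energy can be evaluated per cell from polygon areas and perimeters; this is one standard choice, and the paper's exact functional and parameter values may differ:

```python
def polygon_area_perimeter(verts):
    """Shoelace area and edge-length perimeter of a polygon (vertex list)."""
    n = len(verts)
    area = perim = 0.0
    for i in range(n):
        x1, y1 = verts[i]
        x2, y2 = verts[(i + 1) % n]
        area += x1 * y2 - x2 * y1
        perim += ((x2 - x1) ** 2 + (y2 - y1) ** 2) ** 0.5
    return abs(area) / 2.0, perim

def monolayer_energy(cells, K=1.0, A0=1.0, Gamma=0.04):
    """Global vertex-model energy, one standard form:
    E = sum_c [ K/2 (A_c - A0)^2 + Gamma/2 P_c^2 ].
    K, A0 and Gamma here are illustrative values, not the paper's."""
    E = 0.0
    for verts in cells:
        A, P = polygon_area_perimeter(verts)
        E += 0.5 * K * (A - A0) ** 2 + 0.5 * Gamma * P ** 2
    return E

unit_square = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
E = monolayer_energy([unit_square])   # area term vanishes; perimeter term remains
```

Equilibrium configurations minimize this energy over vertex positions; the paper's tissue moduli are obtained by linearizing exactly such a sum over cells about a deformed base state.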

  16. Origin of giant Martian polygons

    NASA Technical Reports Server (NTRS)

    Mcgill, George E.; Hills, L. S.

    1992-01-01

Extensive areas of the Martian northern plains in Utopia and Acidalia planitiae are characterized by 'polygonal terrane'. Polygonal terrane consists of material cut by complex troughs defining a pattern resembling mudcracks, columnar joints, or frost-wedge polygons on earth. However, the Martian polygons are orders of magnitude larger than these potential earth analogues, leading to severe mechanical difficulties for genetic models based on simple analogy arguments. Plate-bending and finite element models indicate that shrinkage of desiccating sediment or cooling volcanics accompanied by differential compaction over buried topography can account for the stresses responsible for polygon troughs as well as the large size of the polygons. Although trough widths and depths relate primarily to shrinkage, the large scale of the polygonal pattern relates to the spacing between topographic elevations on the surface buried beneath polygonal terrane material. Geological relationships favor a sedimentary origin for polygonal terrane material, but our model is not dependent on the specific genesis. Our analysis also suggests that the polygons must have formed at a geologically rapid rate.

  17. Microtopographic and depth controls on active layer chemistry in Arctic polygonal ground

    DOE PAGES

    Newman, Brent D.; Throckmorton, Heather M.; Graham, David E.; ...

    2015-03-24

Polygonal ground is a signature characteristic of Arctic lowlands, and carbon release from permafrost thaw can alter feedbacks to Arctic ecosystems and climate. This study describes the first comprehensive spatial examination of active layer biogeochemistry that extends across high- and low-centered ice wedge polygons, their features, and with depth. Water chemistry measurements of 54 analytes were made on surface and active layer pore waters collected near Barrow, Alaska, USA. Significant differences were observed between high- and low-centered polygons, suggesting that polygon types may be useful for landscape-scale geochemical classification. However, differences were found for polygon features (centers and troughs) for analytes that were not significant for polygon type, suggesting that finer-scale features affect biogeochemistry differently from polygon types. Depth variations were also significant, demonstrating important multidimensional aspects of polygonal ground biogeochemistry. These results have major implications for understanding how polygonal ground ecosystems function, and how they may respond to future change.

  18. Evaluating the Variations in the Flood Susceptibility Maps Accuracies due to the Alterations in the Type and Extent of the Flood Inventory

    NASA Astrophysics Data System (ADS)

    Tehrany, M. Sh.; Jones, S.

    2017-10-01

This paper explores the influence of the extent and density of the inventory data on the final outcomes. This study aimed to examine the impact of different formats and extents of the flood inventory data on the final susceptibility map. The extreme 2011 Brisbane flood event was used as the case study. Logistic regression (LR) was selected to perform the modelling, as it is a well-known algorithm in natural hazard modelling due to its interpretability, rapid processing time and accurate measurement approach. The LR model was applied using polygon and point formats of the inventory data. Random samples of 1000, 700, 500, 300, 100 and 50 points were selected, and susceptibility mapping was undertaken using each group of random points. The resultant maps were assessed visually and statistically using the Area Under the Curve (AUC) method. The prediction rates measured for the susceptibility maps produced by the polygon format and by 1000, 700, 500, 300, 100 and 50 random points were 63 %, 76 %, 88 %, 80 %, 74 %, 71 % and 65 % respectively. Evidently, using the polygon format of the inventory data did not lead to reasonable outcomes. In the case of random points, raising the number of points increased the prediction rates, except for 1000 points. Hence, minimum and maximum thresholds for the extent of the inventory must be set prior to the analysis. It is concluded that the extent and format of the inventory data are two of the influential components in the precision of the modelling.
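The LR-plus-AUC evaluation loop can be sketched with a hand-rolled logistic regression and a rank-based AUC; the one-dimensional "conditioning factor" data below are synthetic, standing in for the real flood conditioning factors:

```python
import math
import random

def train_logistic(X, y, lr=0.1, epochs=2000):
    """Logistic regression fitted by plain batch gradient descent."""
    w = [0.0] * len(X[0])
    b = 0.0
    n = len(X)
    for _ in range(epochs):
        gw = [0.0] * len(w)
        gb = 0.0
        for xi, yi in zip(X, y):
            p = 1.0 / (1.0 + math.exp(-(sum(wj * xj for wj, xj in zip(w, xi)) + b)))
            err = p - yi
            for j, xj in enumerate(xi):
                gw[j] += err * xj
            gb += err
        w = [wj - lr * gj / n for wj, gj in zip(w, gw)]
        b -= lr * gb / n
    return w, b

def auc(scores, labels):
    """Area under the ROC curve via the rank (Mann-Whitney) formulation."""
    pos = [s for s, l in zip(scores, labels) if l == 1]
    neg = [s for s, l in zip(scores, labels) if l == 0]
    wins = sum((p > q) + 0.5 * (p == q) for p in pos for q in neg)
    return wins / (len(pos) * len(neg))

# Synthetic 1-D conditioning factor (say, elevation): low values tend to flood.
random.seed(2)
X = [[random.gauss(-1.0, 1.0)] for _ in range(100)] \
    + [[random.gauss(1.0, 1.0)] for _ in range(100)]
y = [1] * 100 + [0] * 100
w, b = train_logistic(X, y)
scores = [1.0 / (1.0 + math.exp(-(w[0] * xi[0] + b))) for xi in X]
```

In the study the same pipeline is repeated for each inventory sample size, and the resulting AUC values (the prediction rates quoted above) are compared across sampling schemes.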

  19. Polygons and Their Circles

    ERIC Educational Resources Information Center

    Stephenson, Paul

    2009-01-01

    In order to find its circumference, Archimedes famously boxed the circle between two polygons. Ending the first of a series of articles (MT179) with an aside, Francis Lopez-Real reverses the situation to ask: Which polygons can be boxed between two circles? (The official term for such polygons is "bicentric".) The sides of these polygons are…

  20. Interactions between Polygonal Normal Faults and Larger Normal Faults, Offshore Nova Scotia, Canada

    NASA Astrophysics Data System (ADS)

    Pham, T. Q. H.; Withjack, M. O.; Hanafi, B. R.

    2017-12-01

    Polygonal faults, small normal faults with polygonal arrangements that form in fine-grained sedimentary rocks, can influence ground-water flow and hydrocarbon migration. Using well and 3D seismic-reflection data, we have examined the interactions between polygonal faults and larger normal faults on the passive margin of offshore Nova Scotia, Canada. The larger normal faults strike approximately E-W to NE-SW. Growth strata indicate that the larger normal faults were active in the Late Cretaceous (i.e., during the deposition of the Wyandot Formation) and during the Cenozoic. The polygonal faults were also active during the Cenozoic because they affect the top of the Wyandot Formation, a fine-grained carbonate sedimentary rock, and the overlying Cenozoic strata. Thus, the larger normal faults and the polygonal faults were both active during the Cenozoic. The polygonal faults far from the larger normal faults have a wide range of orientations. Near the larger normal faults, however, most polygonal faults have preferred orientations, either striking parallel or perpendicular to the larger normal faults. Some polygonal faults nucleated at the tip of a larger normal fault, propagated outward, and linked with a second larger normal fault. The strike of these polygonal faults changed as they propagated outward, ranging from parallel to the strike of the original larger normal fault to orthogonal to the strike of the second larger normal fault. These polygonal faults hard-linked the larger normal faults at and above the level of the Wyandot Formation but not below it. We argue that the larger normal faults created stress-enhancement and stress-reorientation zones for the polygonal faults. Numerous small, polygonal faults formed in the stress-enhancement zones near the tips of larger normal faults. Stress-reorientation zones surrounded the larger normal faults far from their tips. 
Fewer polygonal faults are present in these zones, and, more importantly, most polygonal faults in these zones were either parallel or perpendicular to the larger faults.

  1. A Mixed-dimensional Model for Determining the Impact of Permafrost Polygonal Ground Degradation on Arctic Hydrology.

    NASA Astrophysics Data System (ADS)

    Coon, E.; Jan, A.; Painter, S. L.; Moulton, J. D.; Wilson, C. J.

    2017-12-01

    Many permafrost-affected regions in the Arctic manifest polygonal patterned ground, which contains large carbon stores and is vulnerable to climate change as warming temperatures drive melting ice wedges, polygon degradation, and thawing of the underlying carbon-rich soils. Understanding the fate of this carbon is difficult. The system is controlled by complex, nonlinear physics coupling biogeochemistry, thermal-hydrology, and geomorphology, and there is a strong spatial scale separation between microtopography (at the scale of an individual polygon) and landscape change (at the scale of many thousands of polygons). Physics-based models have come a long way and are now capable of representing the diverse set of processes, but only on individual polygons or a few polygons. Empirical models have been used to upscale across land types, including ecotypes evolving from low-centered (pristine) polygons to high-centered (degraded) polygons, and do so over large spatial extents, but are limited in their ability to discern causal process mechanisms. Here we present a novel strategy that uses physics-based models across scales, bringing together multiple capabilities to capture polygon degradation under a warming climate and its impacts on thermal-hydrology. We use fine-scale simulations on individual polygons to motivate a mixed-dimensional strategy that couples one-dimensional columns, representing each individual polygon, through two-dimensional surface flow. A subgrid model is used to incorporate the effects of surface microtopography on surface flow; this model is described and calibrated to fine-scale simulations. Critically, a subsidence model that tracks volume loss in bulk ice wedges is used to alter the subsurface structure and subgrid parameters, enabling the inclusion of the feedbacks associated with polygon degradation. 
This combined strategy results in a model that is able to capture the key features of polygon permafrost degradation, but in a simulation across a large spatial extent of polygonal tundra.

  2. Properties of Tangential and Cyclic Polygons: An Application of Circulant Matrices

    ERIC Educational Resources Information Center

    Leung, Allen; Lopez-Real, Francis

    2003-01-01

    In this paper, the properties of tangential and cyclic polygons proposed by Lopez-Real are proved rigorously using the theory of circulant matrices. In particular, the concepts of slippable tangential polygons and conformable cyclic polygons are defined. It is shown that an n-sided tangential (or cyclic) polygon P[subscript n] with n even is…

  3. Distance estimation and collision prediction for on-line robotic motion planning

    NASA Technical Reports Server (NTRS)

    Kyriakopoulos, K. J.; Saridis, G. N.

    1992-01-01

    An efficient method for computing the minimum distance and predicting collisions between moving objects is presented. This problem is incorporated into the framework of an on-line motion-planning algorithm to achieve collision avoidance between a robot and moving objects modeled as convex polyhedra. First, the deterministic problem, in which the information about the objects is assumed to be certain, is examined. L(1) or L(infinity) norms are used to represent distance, and the problem becomes a linear programming problem. The stochastic problem is then formulated, in which uncertainty is induced by sensing and by the unknown dynamics of the moving obstacles. Two problems are considered: first, filtering of the distance between the robot and the moving object at the present time; second, prediction of the minimum distance in the future, in order to estimate the collision time.
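
    The abstract casts the distance computation as a linear program in the L(1) or L(infinity) norm; as a simpler, hypothetical illustration (not the paper's algorithm), the minimum Euclidean distance between two disjoint convex polygons can be found by checking every vertex of one against every edge of the other:

```python
def point_segment_dist(p, a, b):
    # Distance from point p to segment ab (projection parameter clamped to [0, 1]).
    ax, ay = a
    bx, by = b
    px, py = p
    dx, dy = bx - ax, by - ay
    denom = dx * dx + dy * dy
    t = 0.0 if denom == 0 else max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / denom))
    cx, cy = ax + t * dx, ay + t * dy
    return ((px - cx) ** 2 + (py - cy) ** 2) ** 0.5

def convex_poly_dist(P, Q):
    # Minimum distance between two disjoint convex polygons (vertex lists):
    # the closest pair of points always involves a vertex of one polygon
    # and an edge (or vertex) of the other.
    best = float("inf")
    for A, B in ((P, Q), (Q, P)):
        for p in A:
            for i in range(len(B)):
                best = min(best, point_segment_dist(p, B[i], B[(i + 1) % len(B)]))
    return best
```

For convex, non-overlapping polygons this O(mn) scan is exact; the LP formulation in the paper generalizes the idea to polyhedra and to other norms.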

  4. Perceptions and Expected Immediate Reactions to Severe Storm Displays.

    PubMed

    Jon, Ihnji; Huang, Shih-Kai; Lindell, Michael K

    2017-11-09

    The National Weather Service has adopted warning polygons that more specifically indicate the risk area than its previous county-wide warnings. However, these polygons are not defined in terms of numerical strike probabilities (p_s). To better understand people's interpretations of warning polygons, 167 participants were shown 23 hypothetical scenarios in one of three information conditions: polygon-only (Condition A), polygon + tornadic storm cell (Condition B), and polygon + tornadic storm cell + flanking nontornadic storm cells (Condition C). Participants judged each polygon's p_s and reported the likelihood of taking nine different response actions. The polygon-only condition replicated the results of previous studies; p_s was highest at the polygon's centroid and declined in all directions from there. The two conditions displaying storm cells differed from the polygon-only condition only in having p_s just as high at the polygon's edge nearest the storm cell as at its centroid. Overall, p_s values were positively correlated with expectations of continuing normal activities, seeking information from social sources, seeking shelter, and evacuating by car. These results indicate that participants make more appropriate p_s judgments when polygons are presented in their natural context of radar displays than when they are presented in isolation. However, the fact that p_s judgments had moderately positive correlations with both sheltering (a generally appropriate response) and evacuation (a generally inappropriate response) suggests that experiment participants experience the same ambivalence about these two protective actions as people threatened by actual tornadoes. © 2017 Society for Risk Analysis.

  5. Centroid of a Polygon--Three Views.

    ERIC Educational Resources Information Center

    Shilgalis, Thomas W.; Benson, Carol T.

    2001-01-01

    Investigates the idea of the center of mass of a polygon and illustrates centroids of polygons. Connects physics, mathematics, and technology to produce results that serve to generalize the notion of centroid to polygons other than triangles. (KHR)
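
    The centroid the article generalizes beyond triangles can be computed with the standard area-weighted (shoelace) formula; a minimal sketch for a simple polygon given as a vertex list:

```python
def polygon_centroid(verts):
    # Area-weighted centroid of a simple polygon via the shoelace formula.
    # A  = (1/2) * sum of cross products of consecutive vertices;
    # Cx = (1/6A) * sum (x_i + x_{i+1}) * cross_i, likewise for Cy.
    A = cx = cy = 0.0
    n = len(verts)
    for i in range(n):
        x0, y0 = verts[i]
        x1, y1 = verts[(i + 1) % n]
        cross = x0 * y1 - x1 * y0
        A += cross
        cx += (x0 + x1) * cross
        cy += (y0 + y1) * cross
    A *= 0.5
    return cx / (6.0 * A), cy / (6.0 * A)
```

For a triangle this reduces to the familiar average of the three vertices; for general polygons it differs from the plain vertex average, which is the point the article's "three views" contrast.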

  6. System and method for the adaptive mapping of matrix data to sets of polygons

    NASA Technical Reports Server (NTRS)

    Burdon, David (Inventor)

    2003-01-01

    A system and method for converting bitmapped data, for example, weather data or thermal imaging data, to polygons is disclosed. The conversion of the data into polygons creates smaller data files. The invention is adaptive in that it allows for a variable degree of fidelity of the polygons. Matrix data is obtained. A color value is obtained. The color value is a variable used in the creation of the polygons. A list of cells to check is determined based on the color value. The list of cells to check is examined in order to determine a boundary list. The boundary list is then examined to determine vertices. The determination of the vertices is based on a prescribed maximum distance. When drawn, the ordered list of vertices creates polygons which depict the cell data. The data files which include the vertices for the polygons are much smaller than the corresponding cell data files. The fidelity of the polygon representation can be adjusted by repeating the logic with varying fidelity values to achieve a given maximum file size or a maximum number of vertices per polygon.
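
    The patent's exact vertex-determination procedure is not spelled out in the abstract, but its "prescribed maximum distance" step resembles standard polyline simplification. A Ramer-Douglas-Peucker sketch (an assumption, not the patented method) shows how a distance tolerance trades vertex count against fidelity:

```python
def rdp(points, eps):
    # Ramer-Douglas-Peucker simplification: keep the endpoints, then
    # recursively keep the point farthest from the chord if it lies
    # more than eps away; otherwise drop all interior points.
    if len(points) < 3:
        return points[:]
    (x0, y0), (x1, y1) = points[0], points[-1]
    dx, dy = x1 - x0, y1 - y0
    norm = (dx * dx + dy * dy) ** 0.5 or 1.0
    dmax, idx = 0.0, 0
    for i in range(1, len(points) - 1):
        px, py = points[i]
        d = abs(dx * (y0 - py) - dy * (x0 - px)) / norm  # perpendicular distance
        if d > dmax:
            dmax, idx = d, i
    if dmax <= eps:
        return [points[0], points[-1]]
    left = rdp(points[:idx + 1], eps)
    right = rdp(points[idx:], eps)
    return left[:-1] + right  # avoid duplicating the split point
```

A larger eps yields fewer vertices (smaller files, lower fidelity), mirroring the patent's adjustable fidelity parameter.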

  7. A fast ergodic algorithm for generating ensembles of equilateral random polygons

    NASA Astrophysics Data System (ADS)

    Varela, R.; Hinson, K.; Arsuaga, J.; Diao, Y.

    2009-03-01

    Knotted structures are commonly found in circular DNA and along the backbone of certain proteins. In order to properly estimate properties of these three-dimensional structures it is often necessary to generate large ensembles of simulated closed chains (i.e. polygons) of equal edge lengths (such polygons are called equilateral random polygons). However, finding efficient algorithms that properly sample the space of equilateral random polygons is a difficult problem. Currently there are no proven algorithms that generate equilateral random polygons according to their theoretical distribution. In this paper we propose a method that generates equilateral random polygons in a 'step-wise uniform' way. We prove that this method is ergodic in the sense that any given equilateral random polygon can be generated by this method, and we show that the time needed to generate an equilateral random polygon of length n is linear in n. These two properties make this algorithm a significant improvement over existing generating methods. Detailed numerical comparisons of our algorithm with other widely used algorithms are provided.

  8. An efficient auto TPT stitch guidance generation for optimized standard cell design

    NASA Astrophysics Data System (ADS)

    Samboju, Nagaraj C.; Choi, Soo-Han; Arikati, Srini; Cilingir, Erdem

    2015-03-01

    As the technology continues to shrink below 14nm, triple patterning lithography (TPT) is a worthwhile lithography methodology for printing dense layers such as Metal1. However, this increases the complexity of standard cell design, as it is very difficult to develop a TPT-compliant layout without compromising on area. Hence, it is important to have an accurate stitch-generation methodology to meet the standard cell area requirement defined by the technology shrink factor. In this paper, we present an efficient auto TPT stitch guidance generation technique for optimized standard cell design. The basic idea is to first identify the conflicting polygons based on the Fix Guidance [1] solution developed by Synopsys. Fix Guidance is a reduced sub-graph containing a minimum set of edges along with the connecting polygons; by eliminating these edges in a design, 3-color conflicts can be resolved. Once the conflicting polygons are identified using this method, they are categorized into four types [2] (Type 1 to 4). The categorization is based on the number of interactions a polygon has with the coloring links and the triangle loops of the fix guidance. For each type, a certain criterion for the keep-out region is defined, based on which the final stitch guidance locations are generated. This technique provides various possible stitch locations to the user and helps the user select the best stitch location considering both design flexibility (max. pin access/small area) and process preferences. Based on this technique, a standard cell library for place and route (P and R) can be developed with colorless data and a stitch marker defined by the designer using our proposed method. After P and R, the full chip (block) would contain the colorless data and standard cell stitch markers only. These stitch markers are considered as "must be stitch" candidates. 
Hence during full chip decomposition it is not required to generate and select the stitch markers again for the complete data; therefore, the proposed method reduces the decomposition time significantly.

  9. A new Euler scheme based on harmonic-polygon approach for solving first order ordinary differential equation

    NASA Astrophysics Data System (ADS)

    Yusop, Nurhafizah Moziyana Mohd; Hasan, Mohammad Khatim; Wook, Muslihah; Amran, Mohd Fahmi Mohamad; Ahmad, Siti Rohaidah

    2017-10-01

    There are many benefits to improving the Euler scheme for solving ordinary differential equation problems; among them are simple implementation and low computational cost. However, the limited accuracy of the Euler scheme persuades scholars to use more complex methods. Therefore, the main purpose of this research is to show the construction of a new modified Euler scheme that improves the accuracy of the polygon scheme at various step sizes. The new scheme combines the polygon scheme with the harmonic mean concept and is called the Harmonic-Polygon scheme. This Harmonic-Polygon scheme can provide advantages that the plain Euler scheme cannot offer when solving ordinary differential equation problems. Four sets of problems are solved via the Harmonic-Polygon scheme. Findings show that the new scheme can produce much more accurate results.
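
    The abstract does not state the scheme's formula. A plausible sketch, assuming the Harmonic-Polygon scheme replaces the arithmetic mean of the two slopes in the improved Euler (Heun/polygon) step with their harmonic mean:

```python
def heun_harmonic(f, x0, y0, h, steps):
    # One possible "harmonic-polygon" step (an assumption, the paper's
    # exact scheme may differ): predict with Euler, then advance with the
    # harmonic mean of the slopes at both ends instead of their average.
    xs, ys = [x0], [y0]
    x, y = x0, y0
    for _ in range(steps):
        k1 = f(x, y)              # slope at the start of the step
        k2 = f(x + h, y + h * k1) # slope at the Euler-predicted endpoint
        # Harmonic mean 2*k1*k2/(k1+k2); undefined when k1 + k2 == 0,
        # so a production version would need a fallback to plain Euler there.
        y += h * (2.0 * k1 * k2) / (k1 + k2)
        x += h
        xs.append(x)
        ys.append(y)
    return xs, ys
```

On y' = y, y(0) = 1 with h = 0.1, ten steps of this scheme land noticeably closer to e than plain Euler's (1.1)^10, consistent with the accuracy gains the abstract reports.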

  10. Effects of sample size and sampling frequency on studies of brown bear home ranges and habitat use

    USGS Publications Warehouse

    Arthur, Steve M.; Schwartz, Charles C.

    1999-01-01

    We equipped 9 brown bears (Ursus arctos) on the Kenai Peninsula, Alaska, with collars containing both conventional very-high-frequency (VHF) transmitters and global positioning system (GPS) receivers programmed to determine an animal's position at 5.75-hr intervals. We calculated minimum convex polygon (MCP) and fixed and adaptive kernel home ranges for randomly selected subsets of the GPS data to examine the effects of sample size on accuracy and precision of home range estimates. We also compared results obtained by weekly aerial radiotracking versus more frequent GPS locations to test for biases in conventional radiotracking data. Home ranges based on the MCP were 20-606 km2 (x̄ = 201) for aerial radiotracking data (n = 12-16 locations/bear) and 116-1,505 km2 (x̄ = 522) for the complete GPS data sets (n = 245-466 locations/bear). Fixed kernel home ranges were 34-955 km2 (x̄ = 224) for radiotracking data and 16-130 km2 (x̄ = 60) for the GPS data. Differences between means for radiotracking and GPS data were due primarily to the larger samples provided by the GPS data. Means did not differ between radiotracking data and equivalent-sized subsets of GPS data (P > 0.10). For the MCP, home range area increased and variability decreased asymptotically with number of locations. For the kernel models, both area and variability decreased with increasing sample size. Simulations suggested that the MCP and kernel models required >60 and >80 locations, respectively, for estimates to be both accurate (change in area <1%/additional location) and precise (CV < 50%). Although the radiotracking data appeared unbiased, except for the relationship between area and sample size, these data failed to indicate some areas that likely were important to bears. Our results suggest that the usefulness of conventional radiotracking data may be limited by potential biases and variability due to small samples. 
Investigators who use home range estimates in statistical tests should consider the effects of variability of those estimates. Use of GPS-equipped collars can facilitate obtaining larger samples of unbiased data and improve the accuracy and precision of home range estimates.
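
    The MCP home ranges discussed above are simply convex hulls of the location fixes; a minimal sketch (Andrew's monotone chain plus the shoelace area; coordinates are assumed to be planar, e.g. UTM):

```python
def convex_hull(points):
    # Andrew's monotone chain: returns hull vertices in counter-clockwise order.
    pts = sorted(set(points))
    if len(pts) < 3:
        return pts

    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

def mcp_area(locations):
    # Area of the minimum convex polygon enclosing all fixes (shoelace formula).
    hull = convex_hull(locations)
    a = 0.0
    n = len(hull)
    for i in range(n):
        x0, y0 = hull[i]
        x1, y1 = hull[(i + 1) % n]
        a += x0 * y1 - x1 * y0
    return abs(a) / 2.0
```

Because the hull only grows as fixes are added, MCP area increases monotonically with sample size, which is exactly the asymptotic behavior the study documents.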

  11. Home range, den selection and habitat use of Carolina northern flying squirrels (Glaucomys sabrinus coloratus)

    USGS Publications Warehouse

    Diggins, Corinne A.; Silvis, Alexander; Kelly, Christine A.; Ford, W. Mark

    2017-01-01

    Context: Understanding habitat selection is important for determining conservation and management strategies for endangered species. The Carolina northern flying squirrel (CNFS; Glaucomys sabrinus coloratus) is an endangered subspecies found in the high-elevation montane forests of the southern Appalachians, USA. The primary use of nest boxes to monitor CNFS has provided biased information on habitat use for this subspecies, as nest boxes are typically placed in suitable denning habitat. Aims: We conducted a radio-telemetry study on CNFS to determine home range, den site selection and habitat use at multiple spatial scales. Methods: We radio-collared 21 CNFS in 2012 and 2014–15. We tracked squirrels to diurnal den sites and during night-time activity. Key results: The MCP (minimum convex polygon) home range at 95% for males was 5.2 ± 1.2 ha and for females was 4.0 ± 0.7 ha. The BRB (biased random bridge) home range at 95% for males was 10.8 ± 3.8 ha and for females was 8.3 ± 2.1 ha. Den site (n = 81) selection occurred more frequently in montane conifer-dominated forests (81.4%) than in northern hardwood forests or conifer–northern hardwood forests (9.9% and 8.7%, respectively). We assessed habitat selection using Euclidean distance-based analysis at the 2nd-order and 3rd-order scales. We found that squirrels selected habitat non-randomly at both scales. Conclusions: At both spatial scales, CNFS preferentially selected montane conifer forests more than expected based on availability on the landscape. Squirrels selected neither for nor against northern hardwood forests, regardless of availability on the landscape. Additionally, CNFS denned in montane conifer forests more than in other habitat types. Implications: Our results highlight the importance of montane conifer to CNFS in the southern Appalachians. 
Management and restoration activities that increase the quality, connectivity and extent of this naturally rare forest type may be important for long-term conservation of this subspecies, especially with the impending threat of anthropogenic climate change.

  12. New Support for Hypotheses of an Ancient Ocean on Mars

    NASA Technical Reports Server (NTRS)

    Oehler, Dorothy Z.; Allen, Carlton C.

    2013-01-01

    A new analog for the giant polygons in the Chryse-Acidalia area suggests that those features may have formed in a major body of water - likely a Late Hesperian to Early Amazonian ocean. This analog, terrestrial polygons in subsea, passive-margin basins, derives from 3D seismic data that show similar-scale polygonal fault systems in the subsurface of more than 50 terrestrial offshore basins. The terrestrial and martian polygons share similar sizes, basin-wide distributions, tectonic settings, and association with expected fine-grained sediments. Late Hesperian deposition from outflow floods may have triggered formation of these polygons by providing the thick, rapidly-deposited, fine-grained sediments necessary for polygonal fracturing. The restriction of densely occurring polygons to elevations below approx. -4000 m to -4100 m supports inferences that a body of water controlled their formation. Those same elevations appear to restrict the occurrence of polygons in Utopia Planitia, suggesting that this analog may apply to Utopia as well and that similar processes may have occurred across the martian lowlands.

  13. Clastic polygonal networks around Lyot crater, Mars: Possible formation mechanisms from morphometric analysis

    NASA Astrophysics Data System (ADS)

    Brooker, L. M.; Balme, M. R.; Conway, S. J.; Hagermann, A.; Barrett, A. M.; Collins, G. S.; Soare, R. J.

    2018-03-01

    Polygonal networks of patterned ground are a common feature in cold-climate environments. They can form through the thermal contraction of ice-cemented sediment (i.e. formed from fractures), or the freezing and thawing of ground ice (i.e. formed by patterns of clasts, or ground deformation). The characteristics of these landforms provide information about environmental conditions. Analogous polygonal forms have been observed on Mars leading to inferences about environmental conditions. We have identified clastic polygonal features located around Lyot crater, Mars (50°N, 30°E). These polygons are unusually large (>100 m diameter) compared to terrestrial clastic polygons, and contain very large clasts, some of which are up to 15 metres in diameter. The polygons are distributed in a wide arc around the eastern side of Lyot crater, at a consistent distance from the crater rim. Using high-resolution imaging data, we digitised these features to extract morphological information. These data are compared to existing terrestrial and Martian polygon data to look for similarities and differences and to inform hypotheses concerning possible formation mechanisms. Our results show the clastic polygons do not have any morphometric features that indicate they are similar to terrestrial sorted, clastic polygons formed by freeze-thaw processes. They are too large, do not show the expected variation in form with slope, and have clasts that do not scale in size with polygon diameter. However, the clastic networks are similar in network morphology to thermal contraction cracks, and there is a potential direct Martian analogue in a sub-type of thermal contraction polygons located in Utopia Planitia. 
Based upon our observations, we reject the hypothesis that the polygons located around Lyot formed as freeze-thaw polygons and instead put forward an alternative mechanism: they result from the infilling of earlier thermal contraction cracks by wind-blown material, which then became compressed and/or cemented, resulting in a resistant fill. Erosion then leads to preservation of these polygons in positive relief, while later weathering results in the fracturing of the fill material to form angular clasts. These results suggest that there was an extensive area of ice-rich terrain, the extent of which is linked to ejecta from Lyot crater.

  14. Quantitative Investigations of Polygonal Ground in Continental Antarctica: Terrestrial Analogues for Polygons on Mars

    NASA Astrophysics Data System (ADS)

    Sassenroth, Cynthia; Hauber, Ernst; Schmitz, Nicole; de Vera, Jean Pierre

    2017-04-01

    Polygonally fractured ground is widespread at middle and high latitudes on Mars. The latitude dependence and the morphologic similarity to terrestrial patterned ground in permafrost regions may indicate formation as thermal contraction cracks, but the exact formation mechanisms are still unclear. In particular, it is debated whether freeze-thaw processes and liquid water are required to generate the observed features. This study quantitatively investigates polygonal networks in ice-free parts of continental Antarctica to help distinguish between different hypotheses of their origin on Mars. The study site is located in the Helliwell Hills in Northern Victoria Land (71.73°S/161.38°E) and was visited in the framework of the GANOVEX XI expedition during the austral summer of 2015/2016. The local bedrock consists mostly of sediments (sandstones) of the Beacon Supergroup and mafic igneous intrusions (Ferrar Dolerites). The surfaces are covered by glacial drift consisting of clasts with diverse lithologies. Thermal contraction cracks are ubiquitous. We mapped polygons in the northern part of the Helliwell Hills in a GIS environment on the basis of high-resolution satellite images with a pixel size of 50 cm. The measured spatial parameters include polygon area, perimeter, length, width, circularity and aspect. We also analyzed the connectivity of enclosed polygons within a polygon network. The polygons do not display significant local relief, but overall the polygon centers are slightly higher than the bounding cracks (i.e. high-center polygons). Polygon sizes vary widely with geographical location, between 10 m2 and >900 m2. In planar and level areas, thermal contraction cracks tend to be well connected as hexagonal or irregular polygonal networks without a preferred alignment. In contrast, polygonal networks on slopes form elongated, orthogonal primary cracks, which are either parallel or transverse to the steepest topographic gradient. 
During fieldwork, excavations were made in the center of polygons and across the bounding cracks. Typically, the uppermost 40 cm of regolith are dry and unconsolidated. Below that, there is commonly a sharp transition to ice-cemented material or very clear ice with no bubbles. Soil profiles were recorded, and sediment samples were taken and analyzed for their grain size composition with laser diffractometric measurement methods. External factors such as slope gradient and orientation, insolation and composition of surface and subsurface materials were included in the analysis.

  15. Justification of the Shape of a Non-Circular Cross-Section for Drilling With a Roller Cutter

    NASA Astrophysics Data System (ADS)

    Buyalich, Gennady; Husnutdinov, Mikhail

    2017-11-01

    The parameters of the shape of a non-circular cross-section affect not only the blasting process, but also the design of the tool and the drilling process. In the conditions of open-pit mining, it is reasonable to use a roller cutter to produce a non-circular cross-section of blasting holes. With regard to the roller cutter, the impact of the cross-section shape on the oscillations of the axial force arising during its rotation is determined. The rational form of the non-circular cross-section of the borehole, in terms of bit design, is found to be a polygonal shape with rounded corners at the junctions of the borehole walls and with convex walls, which ensures a smaller range of the total axial force and of the torque deflecting the bit from the axis of its rotation. It is shown that the ratio of the number of cutters to the number of borehole corners must be taken into account when justifying the shape of the cross-section, both from the point of view of the effectiveness of the explosion action and from the point of view of the rational design of the bit.

  16. Small-scale polygons on Mars

    NASA Technical Reports Server (NTRS)

    Lucchitta, B. K.

    1984-01-01

    Polygonal-fracture patterns on the martian surface were discovered on Viking Orbiter images. The polygons are 2-20 km in diameter, much larger than those of known patterned ground on Earth. New observations show, however, that polygons exist on Mars that have diameters similar to those of ice-wedge polygons on Earth (generally a few meters to more than 100 m). Various explanations for the origin of these crustal features are examined: seasonal desiccation and thermal-contraction cracking in ice-rich ground. It is difficult to ascertain whether the polygons are forming today or are relics from the past. The crispness of some cracks suggests a recent origin. On the other hand, the absence of upturned edges (which would indicate actively forming ice wedges), the locally disintegrating ground, and a few possibly superposed rayed craters indicate that the polygons are not forming at present.

  17. Estimating 3D positions and velocities of projectiles from monocular views.

    PubMed

    Ribnick, Evan; Atev, Stefan; Papanikolopoulos, Nikolaos P

    2009-05-01

    In this paper, we consider the problem of localizing a projectile in 3D based on its apparent motion in a stationary monocular view. A thorough theoretical analysis is developed, from which we establish the minimum conditions for the existence of a unique solution. The theoretical results obtained have important implications for applications involving projectile motion. A robust, nonlinear optimization-based formulation is proposed, and the use of a local optimization method is justified by detailed examination of the local convexity structure of the cost function. The potential of this approach is validated by experimental results.

  18. A modified two-layer iteration via a boundary point approach to generalized multivalued pseudomonotone mixed variational inequalities.

    PubMed

    Saddeek, Ali Mohamed

    2017-01-01

    Most mathematical models arising in stationary filtration processes as well as in the theory of soft shells can be described by single-valued or generalized multivalued pseudomonotone mixed variational inequalities with proper convex nondifferentiable functionals. Therefore, for finding the minimum norm solution of such inequalities, the current paper attempts to introduce a modified two-layer iteration via a boundary point approach and to prove its strong convergence. The results here improve and extend the corresponding recent results announced by Badriev, Zadvornov and Saddeek (Differ. Equ. 37:934-942, 2001).

  19. Variational Quantum Tomography with Incomplete Information by Means of Semidefinite Programs

    NASA Astrophysics Data System (ADS)

    Maciel, Thiago O.; Cesário, André T.; Vianna, Reinaldo O.

    We introduce a new method to reconstruct unknown quantum states from incomplete and noisy information. The method is a linear convex optimization problem, and therefore has a unique minimum, which can be efficiently solved with semidefinite programs. Numerical simulations indicate that the estimated state overestimates neither the purity nor the expectation values of optimal entanglement witnesses. The convergence properties of the method are similar to those of compressed-sensing approaches, in the sense that, in order to reconstruct low-rank states, it needs just a fraction of the effort corresponding to an informationally complete measurement.

  20. Eshelby's problem of non-elliptical inclusions

    NASA Astrophysics Data System (ADS)

    Zou, Wennan; He, Qichang; Huang, Mojia; Zheng, Quanshui

    2010-03-01

    The Eshelby problem consists in determining the strain field of an infinite linearly elastic homogeneous medium due to a uniform eigenstrain prescribed over a subdomain, called inclusion, of the medium. The salient feature of Eshelby's solution for an ellipsoidal inclusion is that the strain tensor field inside the latter is uniform. This uniformity has the important consequence that the solution to the fundamental problem of determination of the strain field in an infinite linearly elastic homogeneous medium containing an embedded ellipsoidal inhomogeneity and subjected to remote uniform loading can be readily deduced from Eshelby's solution for an ellipsoidal inclusion upon imposing appropriate uniform eigenstrains. Based on this result, most of the existing micromechanics schemes dedicated to estimating the effective properties of inhomogeneous materials have been nevertheless applied to a number of materials of practical interest where inhomogeneities are in reality non-ellipsoidal. Aiming to examine the validity of the ellipsoidal approximation of inhomogeneities underlying various micromechanics schemes, we first derive a new boundary integral expression for calculating Eshelby's tensor field (ETF) in the context of two-dimensional isotropic elasticity. The simple and compact structure of the new boundary integral expression leads us to obtain the explicit expressions of ETF and its average for a wide variety of non-elliptical inclusions including arbitrary polygonal ones and those characterized by the finite Laurent series. 
In light of these new analytical results, we show that: (i) the elliptical approximation to the average of ETF is valid for a convex non-elliptical inclusion but becomes unacceptable for a non-convex non-elliptical inclusion; (ii) in general, the Eshelby tensor field inside a non-elliptical inclusion is quite non-uniform and cannot be replaced by its average; (iii) the substitution of the generalized Eshelby tensor involved in various micromechanics schemes by the average Eshelby tensor for non-elliptical inhomogeneities is in general inadmissible.

  1. National Park Service Vegetation Inventory Program, Cuyahoga Valley National Park, Ohio

    USGS Publications Warehouse

    Hop, Kevin D.; Drake, J.; Strassman, Andrew C.; Hoy, Erin E.; Menard, Shannon; Jakusz, J.W.; Dieck, J.J.

    2013-01-01

    The National Park Service (NPS) Vegetation Inventory Program (VIP) is an effort to classify, describe, and map existing vegetation of national park units for the NPS Natural Resource Inventory and Monitoring (I&M) Program. The NPS VIP is managed by the NPS Biological Resources Management Division and provides baseline vegetation information to the NPS Natural Resource I&M Program. The U.S. Geological Survey (USGS) Vegetation Characterization Program plays a cooperative role in the NPS VIP. The USGS Upper Midwest Environmental Sciences Center, NatureServe, and NPS Cuyahoga Valley National Park (CUVA) have completed vegetation classification and mapping of CUVA. Mappers, ecologists, and botanists collaborated to identify and describe vegetation types within the National Vegetation Classification Standard (NVCS) and to determine how best to map them by using aerial imagery. The team collected data from 221 vegetation plots within CUVA to develop detailed descriptions of vegetation types. Data from 50 verification sites were also collected to test both the key to vegetation types and the application of vegetation types to a sample set of map polygons. Furthermore, data from 647 accuracy assessment (AA) sites were collected (of which 643 were used to test accuracy of the vegetation map layer). These data sets led to the identification of 45 vegetation types at the association level in the NVCS at CUVA. A total of 44 map classes were developed to map the vegetation and general land cover of CUVA, including the following: 29 map classes represent natural/semi-natural vegetation types in the NVCS, 12 map classes represent cultural vegetation (agricultural and developed) in the NVCS, and 3 map classes represent non-vegetation features (open-water bodies). 
Features were interpreted from viewing color-infrared digital aerial imagery dated October 2010 (during peak leaf-phenology change of trees) via digital onscreen three-dimensional stereoscopic workflow systems in geographic information systems (GIS). The interpreted data were digitally and spatially referenced, thus making the spatial database layers usable in GIS. Polygon units were mapped to either a 0.5 ha or 0.25 ha minimum mapping unit, depending on vegetation type. A geodatabase containing various feature-class layers and tables shows the locations of vegetation types and general land cover (vegetation map), vegetation plot samples, verification sites, AA sites, project boundary extent, and aerial photographic centers. The feature-class layer and related tables for the CUVA vegetation map provide 4,640 polygons of detailed attribute data covering 13,288.4 ha, with an average polygon size of 2.9 ha. Summary reports generated from the vegetation map layer show that map classes representing natural/semi-natural types in the NVCS apply to 4,151 polygons (89.4% of polygons) and cover 11,225.0 ha (84.5%) of the map extent. Of these polygons, the map layer shows CUVA to be 74.4% forest (9,888.8 ha), 2.5% shrubland (329.7 ha), and 7.6% herbaceous vegetation cover (1,006.5 ha). Map classes representing cultural types in the NVCS apply to 435 polygons (9.4% of polygons) and cover 1,825.7 ha (13.7%) of the map extent. Map classes representing non-NVCS units (open water) apply to 54 polygons (1.2% of polygons) and cover 237.7 ha (1.8%) of the map extent. A thematic AA study was conducted of map classes representing natural/semi-natural types in the NVCS. Results present an overall accuracy of 80.7% (kappa index of 79.5%) based on data from 643 of the 647 AA sites. 
Most individual map-class themes exceed the NPS VIP standard of 80% with a 90% confidence interval. The CUVA vegetation mapping project delivers many geospatial and vegetation data products in hardcopy and/or digital formats. These products consist of an in-depth project report discussing methods and results, which include descriptions and a dichotomous key to vegetation types, map classification and map-class descriptions, and a contingency table showing AA results. The suite of products also includes a database of vegetation plots, verification sites, and AA sites; digital pictures of field sites; field data sheets; aerial photographic imagery; hardcopy and digital maps; and a geodatabase of vegetation types and land cover (map layer), fieldwork locations (vegetation plots, verification sites, and AA sites), aerial photographic index, project boundary, and metadata. All geospatial products are projected in Universal Transverse Mercator, Zone 17, by using the North American Datum of 1983. Information on the NPS VIP and completed park mapping projects is located on the Internet at and .

  2. Constrained map-based inventory estimation

    Treesearch

    Paul C. Van Deusen; Francis A. Roesch

    2007-01-01

    A region can conceptually be tessellated into polygons at different scales or resolutions. Likewise, samples can be taken from the region to determine the value of a polygon variable for each scale. Sampled polygons can be used to estimate values for other polygons at the same scale. However, estimates should be compatible across the different scales. Estimates are...

  3. Area of Lattice Polygons

    ERIC Educational Resources Information Center

    Scott, Paul

    2006-01-01

    A lattice is a (rectangular) grid of points, usually pictured as occurring at the intersections of two orthogonal sets of parallel, equally spaced lines. Polygons that have lattice points as vertices are called lattice polygons. It is clear that lattice polygons come in various shapes and sizes. A very small lattice triangle may cover just 3…

  4. A simple algorithm for computing positively weighted straight skeletons of monotone polygons.

    PubMed

    Biedl, Therese; Held, Martin; Huber, Stefan; Kaaser, Dominik; Palfrader, Peter

    2015-02-01

    We study the characteristics of straight skeletons of monotone polygonal chains and use them to devise an algorithm for computing positively weighted straight skeletons of monotone polygons. Our algorithm runs in [Formula: see text] time and [Formula: see text] space, where n denotes the number of vertices of the polygon.

  5. A formational model for the polygonal terrains of Mars: Taking a crack at the genesis of the Martian polygons

    NASA Technical Reports Server (NTRS)

    Wenrich, M. L.; Christensen, P. R.

    1993-01-01

The mechanism for the genesis of the polygonal terrains in Acidalia and Utopia Planitia has long been sought; however, no completely satisfying model has been put forth that characterizes the evolution of these complexly patterned terrains. The polygons are roughly hexagonal, but some are not entirely enclosed by fractures. These polygonal features range in width from approximately 5 to 20 km. Several origins were proposed that describe the polygon borders as desiccation cracks, columnar jointing in a cooled lava, or frost-wedge features. These tension-induced cracking hypotheses were addressed by Pechmann, who convincingly disputes these mechanisms of formation based on scale magnitude difficulties and morphology. Pechmann suggests instead that the cracks delineating the 5-20-km-wide polygons on the northern plains of Mars are graben resulting from deep-seated, uniform, horizontal tension. The difficulty with this hypothesis is that no analogous polygonal forms are known to have originated by tectonism on Earth. McGill and Hills propose that the polygonal terrains on Mars resulted from either rapid desiccation of sediments or cooling of volcanics coupled with differential compaction of the material over a buried irregular topographic surface. They suggest that fracturing was enhanced over the areas of positive relief and was suppressed above the topographic lows. McGill and Hills suggest that the spacing of the topographic highs primarily controls the size of the Martian polygons and that the physics of the shrinkage process is a secondary concern. Ray et al. conducted a terrestrial study of patterned ground in periglacial areas of the U.S. to determine the process responsible for polygonal ground formation. They developed a model for polygon formation in which convection of seasonal melt water above a permafrost layer, driven by an unstable density stratification, differentially melts the permafrost interface, causing it to become undulatory.

  6. Giant polygons and mounds in the lowlands of Mars: signatures of an ancient ocean?

    PubMed

    Oehler, Dorothy Z; Allen, Carlton C

    2012-06-01

    This paper presents the hypothesis that the well-known giant polygons and bright mounds of the martian lowlands may be related to a common process-a process of fluid expulsion that results from burial of fine-grained sediments beneath a body of water. Specifically, we hypothesize that giant polygons and mounds in Chryse and Acidalia Planitiae are analogous to kilometer-scale polygons and mud volcanoes in terrestrial, marine basins and that the co-occurrence of masses of these features in Chryse and Acidalia may be the signature of sedimentary processes in an ancient martian ocean. We base this hypothesis on recent data from both Earth and Mars. On Earth, 3-D seismic data illustrate kilometer-scale polygons that may be analogous to the giant polygons on Mars. The terrestrial polygons form in fine-grained sediments that have been deposited and buried in passive-margin, marine settings. These polygons are thought to result from compaction/dewatering, and they are commonly associated with fluid expulsion features, such as mud volcanoes. On Mars, in Chryse and Acidalia Planitiae, orbital data demonstrate that giant polygons and mounds have overlapping spatial distributions. There, each set of features occurs within a geological setting that is seemingly analogous to that of the terrestrial, kilometer-scale polygons (broad basin of deposition, predicted fine-grained sediments, and lack of significant horizontal stress). Regionally, the martian polygons and mounds both show a correlation to elevation, as if their formation were related to past water levels. Although these observations are based on older data with incomplete coverage, a similar correlation to elevation has been established in one local area studied in detail with newer higher-resolution data. 
Further mapping with the latest data sets should more clearly elucidate the relationship(s) of the polygons and mounds to elevation over the entire Chryse-Acidalia region and thereby provide more insight into this hypothesis.

  7. Ice-Wedge Polygon Formation Impacts Permafrost Carbon Storage and Vulnerability to Top-Down Thaw in Arctic Coastal Plain Soils

    NASA Astrophysics Data System (ADS)

    Jastrow, J. D.; Matamala, R.; Ping, C. L.; Vugteveen, T. W.; Lederhouse, J. S.; Michaelson, G. J.; Mishra, U.

    2017-12-01

Ice-wedge polygons are ubiquitous, patterned ground features throughout Arctic coastal plains and river deltas. The progressive expansion of ice wedges influences polygon development and strongly affects cryoturbation and soil formation. Thus, we hypothesized that polygon type impacts the distribution and composition of soil organic carbon (C) stocks across the landscape and that such information can improve estimates of permafrost C stocks vulnerable to active layer thickening and increased decomposition due to climatic change. We quantified the distribution of soil C across entire polygon profiles (2-m depth) for three developmental types - flat-centered (FCP), low-centered (LCP), and high-centered (HCP) polygons (3 replicates of each) - formed on glaciomarine sediments within and near the Barrow Environmental Observatory at the northern tip of Alaska. Active layer thickness averaged 45 cm and did not vary among polygon types. Similarly, active layer C stocks were unaffected by polygon type, but permafrost C stocks increased from FCPs to LCPs to HCPs despite greater ice volumes in HCPs. These differences were due to a greater presence of organic horizons in the upper permafrost of LCPs and, especially, HCPs. On average, C stocks in polygon interiors were double those of troughs, on a square meter basis. However, HCPs were physically smaller than LCPs and FCPs, which affected estimates of C stocks at the landscape scale. Accounting for the number of polygons per unit area and the proportional distribution of troughs versus interiors, we estimated permafrost C stocks (2-m depth) increased from 259 Mg C ha⁻¹ in FCPs to 366 Mg C ha⁻¹ in HCPs. Active layer C stocks did not differ among polygon types and averaged 328 Mg C ha⁻¹. We used our detailed polygon profiles to investigate the impact of active layer deepening as projected by Earth system models under future climate scenarios. 
Because HCPs have a greater proportion of upper permafrost C stocks in organic horizons, permafrost C in areas dominated by this polygon type may be at greater risk for destabilization. Thus, accounting for geospatial distributions of ice-wedge polygon types and associated variations in C stocks and composition could improve observational estimates of regional C stocks and their vulnerability to changing climatic conditions.

  8. Second-order optimality conditions for problems with C1 data

    NASA Astrophysics Data System (ADS)

    Ginchev, Ivan; Ivanov, Vsevolod I.

    2008-04-01

In this paper we obtain second-order optimality conditions of Karush-Kuhn-Tucker and Fritz John type for a problem with inequality constraints and a set constraint in nonsmooth settings, using second-order directional derivatives. In the necessary conditions we suppose that the objective function and the active constraints are continuously differentiable, but their gradients are not necessarily locally Lipschitz. In the sufficient conditions for a global minimum we assume that the objective function is differentiable at the reference point and second-order pseudoconvex there, a notion introduced by the authors [I. Ginchev, V.I. Ivanov, Higher-order pseudoconvex functions, in: I.V. Konnov, D.T. Luc, A.M. Rubinov (Eds.), Generalized Convexity and Related Topics, in: Lecture Notes in Econom. and Math. Systems, vol. 583, Springer, 2007, pp. 247-264], and that the constraints are both differentiable and quasiconvex at that point. In the sufficient conditions for an isolated local minimum of order two we suppose that the problem belongs to the class C1,1. We show that they do not hold for C1 problems that are not C1,1 ones. Finally, a new notion, parabolic local minimum, is defined and applied to extend the sufficient conditions for an isolated local minimum from problems with C1,1 data to problems with C1 data.

  9. Isometric deformations of planar quadrilaterals with constant index

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zaputryaeva, E S

    We consider isometric deformations (motions) of polygons (so-called carpenter's rule problem) in the case of self-intersecting polygons with the additional condition that the index of the polygon is preserved by the motion. We provide general information about isometric deformations of planar polygons and give a complete solution of the carpenter's problem for quadrilaterals. Bibliography: 17 titles.
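The index that these motions preserve is the polygon's turning number: the sum of signed exterior angles divided by 2π. The sketch below is illustrative only (function name and example coordinates are not from the paper); it computes the index of a convex and of a self-intersecting quadrilateral:

```python
import math

def turning_number(poly):
    """Index of a closed polygon: sum of signed exterior angles / (2*pi).

    `poly` is a list of (x, y) vertices; the closing edge back to the
    first vertex is implied. Vertices are assumed pairwise distinct.
    """
    n = len(poly)
    # Edge direction vectors, including the closing edge.
    edges = [(poly[(i + 1) % n][0] - poly[i][0],
              poly[(i + 1) % n][1] - poly[i][1]) for i in range(n)]
    total = 0.0
    for i in range(n):
        (x1, y1), (x2, y2) = edges[i], edges[(i + 1) % n]
        # Signed turn from edge i to edge i+1 via atan2(cross, dot).
        total += math.atan2(x1 * y2 - y1 * x2, x1 * x2 + y1 * y2)
    return round(total / (2 * math.pi))

# A counterclockwise square has index 1; a self-intersecting
# "bowtie" quadrilateral has index 0.
print(turning_number([(0, 0), (1, 0), (1, 1), (0, 1)]))  # 1
print(turning_number([(0, 0), (1, 1), (1, 0), (0, 1)]))  # 0
```

A motion that preserves this integer cannot, for example, deform the bowtie (index 0) into the convex square (index 1).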

  10. Programming in Polygon R&D: Explorations with a Spatial Language II

    ERIC Educational Resources Information Center

    Morey, Jim

    2006-01-01

    This paper introduces the language associated with a polygon microworld called Polygon R&D, which has the mathematical crispness of Logo and has the discreteness and simplicity of a Turing machine. In this microworld, polygons serve two purposes: as agents (similar to the turtles in Logo), and as data (landmarks in the plane). Programming the…

  11. Random packing of regular polygons and star polygons on a flat two-dimensional surface.

    PubMed

    Cieśla, Michał; Barbasz, Jakub

    2014-08-01

Random packing of unoriented regular polygons and star polygons on a two-dimensional flat continuous surface is studied numerically using the random sequential adsorption algorithm. The obtained results are analyzed to determine the saturated random packing ratio as well as its density autocorrelation function. Additionally, the kinetics of packing growth and the available surface function are measured. In general, stars give lower packing ratios than polygons, but when the number of vertices is large enough, both shapes approach disks and, therefore, the properties of their packings reproduce already known results for disks.
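Random sequential adsorption can be sketched in its simplest isotropic case, equal disks on the unit square (the limiting shape that both polygons and stars approach as the vertex count grows). This is an illustrative toy under that simplification, not the authors' code:

```python
import math
import random

def rsa_disks(radius=0.05, attempts=20000, seed=1):
    """Random sequential adsorption (RSA) of equal disks on the unit square.

    Each trial draws a uniformly random center; the disk is accepted only
    if it overlaps no previously accepted disk. Returns the covered area
    fraction (packing ratio), ignoring boundary effects for simplicity.
    """
    rng = random.Random(seed)
    centers = []
    d2 = (2 * radius) ** 2  # squared center distance at contact
    for _ in range(attempts):
        x, y = rng.random(), rng.random()
        if all((x - cx) ** 2 + (y - cy) ** 2 >= d2 for cx, cy in centers):
            centers.append((x, y))
    return len(centers) * math.pi * radius ** 2

# The ratio approaches (from below) the known disk RSA saturation
# density of about 0.547 as the number of attempts grows.
print(round(rsa_disks(), 3))
```

Measuring the accepted fraction as a function of attempt number gives the packing-growth kinetics the abstract refers to.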

  12. Tessellating the Sphere with Regular Polygons

    ERIC Educational Resources Information Center

    Soto-Johnson, Hortensia; Bechthold, Dawn

    2004-01-01

Tessellations in the Euclidean plane and regular polygons that tessellate the sphere are reviewed. The regular polygons that can possibly tessellate the sphere are spherical triangles, squares, and pentagons.
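The restriction to triangles, squares, and pentagons follows from the spherical angle-excess condition: q regular p-gons can meet at each vertex of a spherical tessellation only if 1/p + 1/q > 1/2. A quick enumeration (illustrative, not from the article) recovers the five Platonic cases:

```python
# Regular p-gons, q meeting at each vertex, tessellate the sphere iff
# 1/p + 1/q > 1/2 (spherical excess condition), with p, q >= 3.
# The upper bound 10 is safe: for p >= 6 no q >= 3 satisfies the inequality.
solutions = [(p, q) for p in range(3, 10) for q in range(3, 10)
             if 1 / p + 1 / q > 1 / 2]
print(solutions)
# → [(3, 3), (3, 4), (3, 5), (4, 3), (5, 3)]
```

Only p = 3, 4, 5 appear, i.e. the tessellating faces are triangles, squares, and pentagons, matching the article's conclusion.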

  13. Calculating the Areas of Polygons with a Smartphone Light Sensor

    ERIC Educational Resources Information Center

    Kapucu, Serkan; Simsek, Mertkan; Öçal, Mehmet Fatih

    2017-01-01

    This study explores finding the areas of polygons with a smartphone light sensor. A square and an irregular pentagon were chosen as our polygons. During the activity, the LED light was placed at the vertices of our polygons, and the illuminance values of this LED light were detected by the smartphone light sensor. The smartphone was placed on a…
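Independently of the light-sensor method, the areas of such polygons can be checked with the standard shoelace formula; the coordinates below are illustrative, not the article's shapes:

```python
def shoelace_area(vertices):
    """Area of a simple polygon from its vertices (shoelace formula)."""
    n = len(vertices)
    s = 0.0
    for i in range(n):
        x1, y1 = vertices[i]
        x2, y2 = vertices[(i + 1) % n]  # wrap around to close the polygon
        s += x1 * y2 - x2 * y1
    return abs(s) / 2.0

# A 2x2 square and an irregular pentagon, echoing the activity's shapes.
print(shoelace_area([(0, 0), (2, 0), (2, 2), (0, 2)]))           # 4.0
print(shoelace_area([(0, 0), (4, 0), (5, 2), (2, 4), (-1, 2)]))  # 16.0
```

This gives a reference value against which a sensor-based estimate could be compared.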

  14. National Park Service Vegetation Mapping Inventory Program: Natchez Trace Parkway vegetation mapping project report

    USGS Publications Warehouse

    Hop, Kevin D.; Strassman, Andrew C.; Nordman, Carl; Pyne, Milo; White, Rickie; Jakusz, Joseph; Hoy, Erin E.; Dieck, Jennifer

    2016-01-01

The National Park Service (NPS) Vegetation Mapping Inventory (VMI) Program is an effort to classify, describe, and map existing vegetation of national park units for the NPS Natural Resource Inventory and Monitoring (I&M) Program. The NPS VMI Program is managed by the NPS I&M Division and provides baseline vegetation information to the NPS Natural Resource I&M Program. The U.S. Geological Survey Upper Midwest Environmental Sciences Center, NatureServe, NPS Gulf Coast Network, and NPS Natchez Trace Parkway (NATR; also referred to as Parkway) have completed vegetation classification and mapping of NATR for the NPS VMI Program. Mappers, ecologists, and botanists collaborated to affirm vegetation types within the U.S. National Vegetation Classification (USNVC) of NATR and to determine how best to map them by using aerial imagery. Analyses of data from 589 vegetation plots had been used to describe an initial 99 USNVC associations in the Parkway; this classification work was completed prior to beginning this NATR vegetation mapping project. Data were collected during this project from another eight quick plots to support new vegetation types not previously identified at the Parkway. Data from 120 verification sites were collected to test the field key to vegetation associations and the application of vegetation associations to a sample set of map polygons. Furthermore, data from 900 accuracy assessment (AA) sites were collected (of which 894 were used to test accuracy of the vegetation map layer). Collectively, these datasets affirmed 122 USNVC associations at NATR. To map the vegetation and open water of NATR, 63 map classes were developed, 
including the following: 54 map classes represent natural (including ruderal) vegetation types in the USNVC, 5 map classes represent cultural (agricultural and developed) vegetation types in the USNVC, 3 map classes represent nonvegetation open-water bodies (non-USNVC), and 1 map class represents landscapes that had received tornado damage a few months prior to the time of aerial imagery collection. Features were interpreted from viewing 4-band digital aerial imagery by means of digital onscreen three-dimensional stereoscopic workflow systems in geographic information systems. (The aerial imagery was collected during mid-October 2011 for the northern reach of the Parkway and mid-November 2011 for the southern reach of the Parkway to capture peak leaf-phenology of trees.) The interpreted data were digitally and spatially referenced, thus making the spatial-database layers usable in geographic information systems. Polygon units were mapped to either a 0.5 hectare (ha) or 0.25 ha minimum mapping unit, depending on vegetation type or scenario. A geodatabase containing various feature-class layers and tables presents the locations of USNVC vegetation types (vegetation map), vegetation plot samples, verification sites, AA sites, project boundary extent, and aerial image centers. The feature-class layer and related tables for the vegetation map provide 13,529 polygons of detailed attribute data covering 21,655.5 ha, with an average polygon size of 1.6 ha; the vegetation map coincides closely with the administrative boundary for NATR. Summary reports generated from the vegetation map layer show that map classes representing USNVC natural (including ruderal) vegetation types apply to 12,648 polygons (93.5% of polygons) and cover 18,542.7 ha (85.6%) of the map extent for NATR. The map layer indicates the Parkway to be 70.5% forest and woodland (15,258.7 ha), 0.3% shrubland (63.0 ha), and 14.9% herbaceous cover (3,221.0 ha). 
Map classes representing USNVC cultural types apply to 678 polygons (5.0% of polygons) and cover 2,413.9 ha (11.1%) of the map extent.

  15. Martian Oceans: Old Debate - New Insights

    NASA Technical Reports Server (NTRS)

    Oehler, Dorothy Z.; Allen, Carlton C.

    2014-01-01

The possibility of an ancient ocean in the northern lowlands of Mars has been discussed for decades [1-14], but the subject remains controversial [15-20]. Among the many unique features of the northern lowlands is the extensive development of "giant polygons" - polygonal landforms that range from 1 to 20 km across. The kilometer-scale size of these features distinguishes them from a variety of smaller polygons (usually < 250 m) on Mars that have been compared to terrestrial analogs such as ice-wedge and desiccation features. However, until recently, geologists were aware of no examples of polygons on Earth comparable in scale to the giant polygons of Mars, so there were no good analogs from which to draw interpretations. That picture has changed with 3D seismic data acquired by the petroleum industry in exploration of offshore basins. The new data reveal kilometer-scale polygonal features in more than 50 offshore basins on Earth. These features provide a credible analog for the giant polygons of Mars.

  16. Image reconstruction and scan configurations enabled by optimization-based algorithms in multispectral CT

    NASA Astrophysics Data System (ADS)

    Chen, Buxin; Zhang, Zheng; Sidky, Emil Y.; Xia, Dan; Pan, Xiaochuan

    2017-11-01

Optimization-based algorithms for image reconstruction in multispectral (or photon-counting) computed tomography (MCT) remain a topic of active research. The challenge of optimization-based image reconstruction in MCT stems from the inherently non-linear data model, which can lead to a non-convex optimization program for which no mathematically exact solver seems to exist for achieving globally optimal solutions. In this work, based upon a non-linear data model, we design a non-convex optimization program, derive its first-order-optimality conditions, and propose an algorithm to solve the program for image reconstruction in MCT. In addition to consideration of image reconstruction for the standard scan configuration, the emphasis is on investigating the algorithm's potential for enabling non-standard scan configurations with no or minimal hardware modification to existing CT systems, which has potential practical implications for lowered hardware cost, enhanced scanning flexibility, and reduced imaging dose/time in MCT. Numerical studies are carried out for verification of the algorithm and its implementation, and for a preliminary demonstration and characterization of the algorithm in reconstructing images and in enabling non-standard configurations with varying scanning angular range and/or x-ray illumination coverage in MCT.

  17. Perceptually stable regions for arbitrary polygons.

    PubMed

    Rocha, J

    2003-01-01

Zou and Yan have recently developed a skeletonization algorithm for digital shapes based on a regularity/singularity analysis; they use the polygon whose vertices are the boundary pixels of the image to compute a constrained Delaunay triangulation (CDT) in order to find local symmetries and stable regions. Their method has produced good results, but it is slow since its complexity depends on the number of contour pixels. This paper presents an extension of their technique to handle arbitrary polygons, not only polygons with short edges. Consequently, not only can we achieve results as good as theirs for digital images, but we can also compute skeletons of polygons with any number of edges. Since we can handle polygonal approximations of figures, the skeletons are more resilient to noise and faster to process.

  18. Linking of uniform random polygons in confined spaces

    NASA Astrophysics Data System (ADS)

    Arsuaga, J.; Blackstone, T.; Diao, Y.; Karadayi, E.; Saito, M.

    2007-03-01

In this paper, we study the topological entanglement of uniform random polygons in a confined space. We derive the formula for the mean squared linking number of such polygons. For a fixed simple closed curve in the confined space, we rigorously show that the linking probability between this curve and a uniform random polygon of n vertices is at least 1 − O(1/√n). Our numerical study also indicates that the linking probability between two uniform random polygons (in a confined space), of m and n vertices respectively, is bounded below by 1 − O(1/√(mn)). In particular, the linking probability between two uniform random polygons, both of n vertices, is bounded below by 1 − O(1/n).

  19. A digital map of the high center (HC) and low center (LC) polygon boundaries delineated from high resolution LiDAR data for Barrow, Alaska

    DOE Data Explorer

    Gangodagamage, Chandana; Wullschleger, Stan

    2014-07-03

This dataset represents a map of the high center (HC) and low center (LC) polygon boundaries delineated from high-resolution LiDAR data for the arctic coastal plain at Barrow, Alaska. The polygon troughs are considered the surface expression of the ice wedges. The troughs lie at lower elevations than the polygon interiors. The trough widths were initially identified from the LiDAR data, and the boundary between two polygons was assumed to lie along the lowest elevations within the trough between them.
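Along a one-dimensional elevation transect, the delineation rule described above (boundary at the lowest elevation of the trough between two polygons) reduces to an argmin over the trough window. A minimal sketch with synthetic elevations, not values from the dataset:

```python
def boundary_index(elevations, trough_start, trough_end):
    """Index of the polygon boundary along a 1-D elevation transect.

    Following the rule described for the Barrow map, the boundary between
    two adjacent polygons is placed at the lowest elevation within the
    trough window [trough_start, trough_end). Elevations are in meters.
    """
    window = elevations[trough_start:trough_end]
    return trough_start + window.index(min(window))

# Synthetic transect: two polygon interiors separated by a trough
# dipping to 3.9 m around index 5 (illustrative values only).
transect = [4.6, 4.6, 4.5, 4.2, 4.0, 3.9, 4.1, 4.4, 4.6, 4.7]
print(boundary_index(transect, 3, 8))  # → 5
```

The actual product applies this idea in two dimensions across the LiDAR raster rather than along single transects.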

  20. Center for Automatic Target Recognition Research. Delivery Order 0005: Image Georegistration, Camera Calibration, and Dismount Categorization in Support of DEBU from Layered Sensing

    DTIC Science & Technology

    2011-07-01

Figure 17: CAESAR Data. The leftmost image is a color polygon rendering of a subject using 316,691 polygon faces and 161,951 points. The small white dots on the surface of the subject are landmark points.

  1. Distribution and Degradation State of Soil Organic Carbon Stocks in Ice Wedge Polygons of the Arctic Coastal Plain, Alaska

    NASA Astrophysics Data System (ADS)

    Jastrow, J. D.; Ping, C. L.; Deck, C. B.; Matamala, R.; Vugteveen, T. W.; Lederhouse, J. S.; Michaelson, G. J.

    2016-12-01

Estimates of the amount of organic carbon (C) stored in permafrost-region soils and of its susceptibility to mobilization with changing climate are improving, but uncertainties remain high, affecting the ability to reliably predict regional C-climate feedbacks. In lowland permafrost soils, much of the organic matter exists in a poorly degraded state and is often weakly associated with soil minerals due to the cold, wet environment and cryoturbation. Thus, the impacts of warming and permafrost thaw likely will depend, at least initially, on the past history of soil organic matter (SOM) degradation. Ice wedge polygons are ubiquitous, patterned ground features throughout Arctic coastal plain regions and are large enough (5-30 m across) that a better three-dimensional understanding of their C stocks and relative degradation state could improve geospatial upscaling of observational data and contribute benchmarks for constraining model parameters. We investigated the distribution and existing degradation state of SOM to a depth of 2 meters across three polygon types on the Arctic Coastal Plain of Alaska: flat-centered (FCP), low-centered (LCP), and high-centered (HCP) polygons, with each type replicated 3 times. To assess the relative degradation state of SOM, we used particle size fractionation to isolate fibric (coarse) from more degraded (fine) particulate organic matter and separated mineral-associated organic matter into silt- and clay-sized fractions. We found variations in the thickness and quality of surface organic layers for different polygon types. Below the active layer, organic-rich cryoturbated layers were located in the transition zone and fingered down into the upper permafrost. Soil organic C stocks varied across individual polygons and differed among polygon types, with HCPs generally having the largest C stocks. The relative degradation state of SOM also varied spatially and vertically within polygons and differed among polygon types. 
Our findings suggest that accounting for polygon-scale (wedge to center to wedge) and landscape-scale (polygon type) variations could help reduce the uncertainties in observational estimates of soil C stocks and their degradation state for areas dominated by ice wedge polygons.

  2. Meter-scale thermal contraction crack polygons on the nucleus of comet 67P/Churyumov-Gerasimenko

    NASA Astrophysics Data System (ADS)

    Auger, A.-T.; Groussin, O.; Jorda, L.; El-Maarry, M. R.; Bouley, S.; Séjourné, A.; Gaskell, R.; Capanna, C.; Davidsson, B.; Marchi, S.; Höfner, S.; Lamy, P. L.; Sierks, H.; Barbieri, C.; Rodrigo, R.; Koschny, D.; Rickman, H.; Keller, H. U.; Agarwal, J.; A'Hearn, M. F.; Barucci, M. A.; Bertaux, J.-L.; Bertini, I.; Cremonese, G.; Da Deppo, V.; Debei, S.; De Cecco, M.; Fornasier, S.; Fulle, M.; Gutiérrez, P. J.; Güttler, C.; Hviid, S.; Ip, W.-H.; Knollenberg, J.; Kramm, J.-R.; Kührt, E.; Küppers, M.; Lara, L. M.; Lazzarin, M.; Lopez Moreno, J. J.; Marzari, F.; Massironi, M.; Michalik, H.; Naletto, G.; Oklay, N.; Pommerol, A.; Sabau, L.; Thomas, N.; Tubiana, C.; Vincent, J.-B.; Wenzel, K.-P.

    2018-02-01

We report on the detection and characterization of more than 6300 polygons on the surface of the nucleus of comet 67P/Churyumov-Gerasimenko, using images acquired by the OSIRIS camera onboard Rosetta between August 2014 and March 2015. They are found in consolidated terrains and grouped in localized networks. They are present at all latitudes (from North to South) and longitudes (head, neck, and body), sometimes on pit walls or following lineaments. About 1.5% of the observed surface is covered by polygons. Polygons have a homogeneous size across the nucleus, with 90% of them in the size range 1-5 m and a mean size of 3.0 ± 1.4 m. They show different morphologies, depending on the width and depth of their trough. They are found in networks with 3- or 4-crack intersection nodes. The polygons observed on 67P are consistent with thermal contraction crack polygons formed by the diurnal or seasonal temperature variations in a hard (MPa) and consolidated sintered layer of water ice, located a few centimeters below the surface. Our thermal analysis shows an evolution of thermal contraction crack polygons according to the local thermal environment, with more evolved polygons (i.e. deeper and larger troughs) where the temperature and the diurnal and seasonal temperature range are the highest. Thermal contraction crack polygons are young surface morphologies that probably formed after the injection of 67P into the inner solar system, typically 100,000 years ago, and could be as young as a few orbital periods, following the decrease of its perihelion distance from 2.7 to 1.3 AU in 1959. Meter-scale thermal contraction crack polygons should be common features on the nucleus of Jupiter family comets.

  3. CONVEX mini manual

    NASA Technical Reports Server (NTRS)

    Tennille, Geoffrey M.; Howser, Lona M.

    1993-01-01

The use of the CONVEX computers that are an integral part of the Supercomputing Network Subsystems (SNS) of the Central Scientific Computing Complex of LaRC is briefly described. Features of the CONVEX computers that are significantly different from the CRAY supercomputers are covered, including: FORTRAN, C, architecture of the CONVEX computers, the CONVEX environment, batch job submittal, debugging, performance analysis, utilities unique to CONVEX, and documentation. This revision reflects the addition of the Applications Compiler and the X-based debugger, CXdb. The document is intended for all CONVEX users as a ready reference to frequently asked questions and to the more detailed information contained in the vendor manuals. It is appropriate for both the novice and the experienced user.

  4. South Polar Polygons

    NASA Technical Reports Server (NTRS)

    2004-01-01

4 March 2004 This Mars Global Surveyor (MGS) Mars Orbiter Camera (MOC) image shows a spectacular suite of large and small polygons in the south polar region. On Earth, polygons such as these would be indicators of the presence of ground ice. Whether this is true for Mars remains to be determined, but it is interesting to note that these polygons do occur in a region identified by the Mars Odyssey Gamma Ray Spectrometer (GRS) team as a place with possible ground ice. The polygons are in an old impact crater located near 62.9°S, 281.4°W. This 1.5 meter (5 ft.) per pixel view covers an area 3 km (1.9 mi) wide and is illuminated by sunlight from the upper left. To see the smaller set of polygons, the reader must view the full-resolution image.

  5. Ares Vallis Polygons

    NASA Image and Video Library

    2002-12-04

    The jumble of eroded ridges and mesas seen in this NASA Mars Odyssey image occurs within Ares Vallis, one of the largest catastrophic outflow channels on the planet. Floods raged through this channel, pouring out into the Chryse Basin to the north. Close inspection of the THEMIS image reveals polygonal shapes on the floor of the channel system. Polygonal terrain on Mars is fairly common although the variety of forms and scales of the polygons suggests multiple modes of origin. Those in Ares Vallis resemble giant desiccation polygons that form in soils on Earth when a moist layer at depth dries out. While polygons can form in icy soils (permafrost) and even lava flows, their presence in a channel thought to have been carved by flowing water is at least consistent with a mode of origin that involved liquid water. http://photojournal.jpl.nasa.gov/catalog/PIA04019

  6. Nonlinear regimes on polygonal hydraulic jumps

    NASA Astrophysics Data System (ADS)

    Rojas, Nicolas

    2016-11-01

    This work extends previous leading and higher order results on the polygonal hydraulic jump in the framework of inertial lubrication theory. The rotation of steady polygonal jumps is observed in the transition from one wavenumber to the next, induced by a change in height of an external obstacle near the outer edge. In a previous publication, the study of stationary polygons was carried out under the assumption that the reference frame rotates with the polygons when the number of corners changes, in order to preserve their orientation. In this research work I provide a Hamiltonian approach and the stability analysis of the nonlinear oscillator that describes the polygonal structures at the jump interface, in addition to a perturbation method that makes it possible to explain, for instance, the diversity of patterns found in experiments. GRASP, Institute of Physics, University of Liege, Belgium.

  7. Processing convexity and concavity along a 2-D contour: figure-ground, structural shape, and attention.

    PubMed

    Bertamini, Marco; Wagemans, Johan

    2013-04-01

    Interest in convexity has a long history in vision science. For smooth contours in an image, it is possible to code regions of positive (convex) and negative (concave) curvature, and this provides useful information about solid shape. We review a large body of evidence on the role of this information in perception of shape and in attention. This includes evidence from behavioral, neurophysiological, imaging, and developmental studies. A review is necessary to analyze the evidence on how convexity affects (1) separation between figure and ground, (2) part structure, and (3) attention allocation. Despite some broad agreement on the importance of convexity in these areas, there is a lack of consensus on the interpretation of specific claims, for example on the contribution of convexity to metric depth and on the automatic directing of attention to convexities or to concavities. The focus is on convexity and concavity along a 2-D contour, not convexity and concavity in 3-D, but the important link between the two is discussed. We conclude that there is good evidence for the role of convexity information in figure-ground organization and in parsing, but other, more specific claims are not (yet) well supported.

  8. A General Iterative Shrinkage and Thresholding Algorithm for Non-convex Regularized Optimization Problems.

    PubMed

    Gong, Pinghua; Zhang, Changshui; Lu, Zhaosong; Huang, Jianhua Z; Ye, Jieping

    2013-01-01

    Non-convex sparsity-inducing penalties have recently received considerable attention in sparse learning. Recent theoretical investigations have demonstrated their superiority over their convex counterparts in several sparse learning settings. However, solving the non-convex optimization problems associated with non-convex penalties remains a big challenge. A commonly used approach is the Multi-Stage (MS) convex relaxation (or DC programming), which relaxes the original non-convex problem to a sequence of convex problems. This approach is usually not very practical for large-scale problems because its computational cost is a multiple of that of solving a single convex problem. In this paper, we propose a General Iterative Shrinkage and Thresholding (GIST) algorithm to solve the non-convex optimization problem for a large class of non-convex penalties. The GIST algorithm iteratively solves a proximal operator problem, which in turn has a closed-form solution for many commonly used penalties. At each outer iteration of the algorithm, we use a line search initialized by the Barzilai-Borwein (BB) rule that allows an appropriate step size to be found quickly. The paper also presents a detailed convergence analysis of the GIST algorithm. The efficiency of the proposed algorithm is demonstrated by extensive experiments on large-scale data sets.
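    The GIST iteration is compact enough to sketch. The toy below is an illustration under stated assumptions, not the authors' implementation: it pairs a least-squares loss with the non-convex l0 penalty, whose proximal operator is hard thresholding, and initializes a doubling line search with the Barzilai-Borwein step. All names (`gist_l0`, the problem sizes, the seed) are invented for the example.

```python
import numpy as np

def gist_l0(A, b, lam, max_iter=200, tol=1e-8):
    """GIST-style sketch for min 0.5*||Ax - b||^2 + lam*||x||_0.
    The proximal operator of the l0 penalty is hard thresholding."""
    obj = lambda z: 0.5 * np.sum((A @ z - b) ** 2) + lam * np.count_nonzero(z)
    x = np.zeros(A.shape[1])
    grad = A.T @ (A @ x - b)
    t = 1.0
    for _ in range(max_iter):
        x_old, g_old = x, grad
        while True:  # backtracking line search: double t until the objective drops
            u = x_old - g_old / t
            x = np.where(u ** 2 > 2.0 * lam / t, u, 0.0)  # hard threshold
            if obj(x) <= obj(x_old) - 1e-10 or t > 1e12:
                break
            t *= 2.0
        grad = A.T @ (A @ x - b)
        s, y = x - x_old, grad - g_old
        if np.linalg.norm(s) < tol:
            break
        # Barzilai-Borwein rule initializes the step for the next outer iteration
        t = max(s @ y / max(s @ s, 1e-16), 1e-8)
    return x

# toy noiseless sparse regression (sizes, seed, and sparsity are arbitrary)
rng = np.random.default_rng(0)
x_true = np.zeros(15)
x_true[[2, 7, 11]] = [3.0, -2.0, 4.0]
A = rng.standard_normal((40, 15))
b = A @ x_true
x_hat = gist_l0(A, b, lam=0.01)
```

    With a small penalty weight and noiseless data the iterate should land on, or very near, the sparse least-squares solution; swapping in another penalty only changes the thresholding line.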

  9. Profile convexities in bedrock and alluvial streams

    NASA Astrophysics Data System (ADS)

    Phillips, Jonathan D.; Lutz, J. David

    2008-12-01

    Longitudinal profiles of bedrock streams in central Kentucky, and of coastal plain streams in southeast Texas, were analyzed to determine the extent to which they exhibit smoothly concave profiles and to relate profile convexities to environmental controls. None of the Kentucky streams have smoothly concave profiles. Because all observed knickpoints are associated with vertical joints, if they are migrating it either occurs rapidly between vertical joints, or migrating knickpoints become stalled at structural features. These streams have been adjusting to downcutting of the Kentucky River for at least 1.3 Ma, suggesting that the time required to produce a concave profile is long compared to the typical timescale of environmental change. A graded concave longitudinal profile is not a reasonable prediction or benchmark condition for these streams. The characteristic profile forms of the Kentucky River gorge area are contingent on a particular combination of lithology, structure, hydrologic regime, and geomorphic history, and therefore do not represent any general type of equilibrium state. Few stream profiles in SE Texas conform to the ideal of the smoothly, strongly concave profile. Major convexities are caused by inherited topography, geologic controls, recent and contemporary geomorphic processes, and anthropic effects. Both the legacy of Quaternary environmental change and ongoing changes make it unlikely that consistent boundary conditions will exist for long. Further, the few exceptions within the study area (i.e., strongly and smoothly concave longitudinal profiles) suggest that ample time has elapsed for strongly concave profiles to develop and that such profiles do not necessarily represent any mutual adjustments between slope, transport capacity, and sediment supply.
The simplest explanation of any tendency toward concavity is related to basic constraints on channel steepness associated with geomechanical stability and minimum slopes necessary to convey flow. This constrained gradient concept (CGC) can explain the general tendency toward concavity in channels of sufficient size, with minimal lithological constraints and with sufficient time for adjustment. Unlike grade- or equilibrium-based theories, the CGC results in interpretations of convex or low-concavity profiles or reaches in terms of local environmental constraints and geomorphic histories rather than as "disequilibrium" features.

  10. Small-Scale Polygons and the History of Ground Ice on Mars

    NASA Technical Reports Server (NTRS)

    Mellon, Michael T.

    2000-01-01

    This research has laid a foundation for continued study of permafrost polygons on Mars using the models and understanding discussed here. Further study of polygonal patterns on Mars is proceeding (under new funding) and is expected to reveal more results about the origin of observed martian polygons and what information they contain regarding the recent history of the martian climate and of water ice on Mars.

  11. Mathematical analysis on the cosets of subgroup in the group of E-convex sets

    NASA Astrophysics Data System (ADS)

    Abbas, Nada Mohammed; Ajeena, Ruma Kareem K.

    2018-05-01

    In this work, an analysis of the cosets of a subgroup in the group of L-convex sets is presented as a new and powerful tool in the topics of convex analysis and abstract algebra. The properties of these cosets on L-convex sets are proved mathematically. The most important theorem for finite groups in the theory of L-convex sets, Lagrange's Theorem, is proved. In addition, a mathematical proof for the quotient group of L-convex sets is presented.

  12. Surface Aesthetics and Analysis.

    PubMed

    Çakır, Barış; Öreroğlu, Ali Rıza; Daniel, Rollin K

    2016-01-01

    Surface aesthetics of an attractive nose result from certain lines, shadows, and highlights with specific proportions and breakpoints. Analysis emphasizes geometric polygons as aesthetic subunits. Evaluation of the complete nasal surface aesthetics is achieved using geometric polygons to define the existing deformity and aesthetic goals. The relationship between the dome triangles, interdomal triangle, facet polygons, and infralobular polygon are integrated to form the "diamond shape" light reflection on the nasal tip. The principles of geometric polygons allow the surgeon to analyze the deformities of the nose, define an operative plan to achieve specific goals, and select the appropriate operative technique. Copyright © 2016 Elsevier Inc. All rights reserved.

  13. Sex differences in mental rotation with polygons of different complexity: Do men utilize holistic processes whereas women prefer piecemeal ones?

    PubMed

    Heil, Martin; Jansen-Osmann, Petra

    2008-05-01

    Sex differences in mental rotation were investigated as a function of stimulus complexity with a sample size of N = 72. Replicating earlier findings with polygons, mental rotation was faster for males than for females, and reaction time increased with more complex polygons. Additionally, sex differences increased for complex polygons. Most importantly, however, mental rotation speed decreased with increasing complexity for women but did not change for men. Thus, the sex effects reflect a difference in strategy, with women mentally rotating the polygons in an analytic, piecemeal fashion and men using a holistic mode of mental rotation.

  14. Surface and Active Layer Pore Water Chemistry from Ice Wedge Polygons, Barrow, Alaska, 2013-2014

    DOE Data Explorer

    David E. Graham; Baohua Gu; Elizabeth M. Herndon; Stan D. Wullschleger; Ziming Yang; Liyuan Liang

    2016-11-10

    This data set reports the results of spatial surveys of aqueous geochemistry conducted at Intensive Site 1 of the Barrow Environmental Observatory in 2013 and 2014 (Herndon et al., 2015). Surface water and soil pore water samples were collected from multiple depths within the tundra active layer of different microtopographic features (troughs, ridges, centers) of a low-centered polygon (area A), high-centered polygon (area B), flat-centered polygon (area C), and transitional polygon (area D). Reported analytes include dissolved organic and inorganic carbon, dissolved carbon dioxide and methane, major inorganic anions, and major and minor cations.

  15. Distributed Optimization for a Class of Nonlinear Multiagent Systems With Disturbance Rejection.

    PubMed

    Wang, Xinghu; Hong, Yiguang; Ji, Haibo

    2016-07-01

    The paper studies the distributed optimization problem for a class of nonlinear multiagent systems in the presence of external disturbances. To solve the problem, we need to achieve optimal multiagent consensus based on local cost function information and neighboring information and, at the same time, to reject local disturbance signals modeled by an exogenous system. With convex analysis and the internal model approach, we propose a distributed optimization controller for heterogeneous and nonlinear agents in the form of continuous-time minimum-phase systems with unity relative degree. We prove that the proposed design solves the exact optimization problem while rejecting disturbances.
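    The structure of the problem (local costs, neighbor information, consensus on the minimizer of the summed cost) can be illustrated with a far simpler discrete-time relative of the authors' continuous-time controller: plain distributed gradient descent with a doubly stochastic mixing matrix and no disturbances. The ring graph, the quadratic costs, and all names below are invented for the sketch.

```python
import numpy as np

def distributed_gradient(local_grads, W, x0, steps=2000):
    """Each agent averages with its neighbors through the doubly stochastic
    matrix W, then takes a diminishing step along its own local gradient."""
    x = np.array(x0, dtype=float)
    for k in range(steps):
        alpha = 1.0 / (k + 10)  # diminishing step size
        x = W @ x - alpha * np.array([g(xi) for g, xi in zip(local_grads, x)])
    return x

# agent i minimizes f_i(x) = (x - a_i)^2, so the sum is minimized at mean(a)
a = np.array([1.0, 3.0, 5.0, 7.0])
grads = [lambda x, ai=ai: 2.0 * (x - ai) for ai in a]  # default arg pins each a_i
W = np.array([[0.50, 0.25, 0.00, 0.25],   # 4-agent ring; symmetric,
              [0.25, 0.50, 0.25, 0.00],   # rows and columns sum to 1
              [0.00, 0.25, 0.50, 0.25],
              [0.25, 0.00, 0.25, 0.50]])
x_final = distributed_gradient(grads, W, x0=a)  # all agents approach 4.0
```

    The mixing step drives the agents toward agreement while the gradient step drags the agreement point toward the global minimizer; the paper's contribution is doing this for nonlinear dynamics with disturbance rejection, which this toy does not attempt.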

  16. A model for the origin of Martian polygonal terrain

    NASA Technical Reports Server (NTRS)

    Mcgill, G. E.

    1993-01-01

    Extensive areas of the Martian northern plains in Utopia and Acidalia Planitiae are characterized by 'polygonal terrain.' Polygonal terrain consists of material cut by complex troughs defining a pattern resembling mudcracks, columnar joints, or frost-wedge polygons on the Earth. However, the Martian polygons are orders of magnitude larger than these potential Earth analogs, leading to severe mechanical difficulties for genetic models based on simple analogy arguments. Stratigraphic studies show that the polygonally fractured material in Utopia Planitia was deposited on a land surface with significant topography, including scattered knobs and mesas, fragments of ancient crater rims, and fresh younger craters. Sediments or volcanics deposited over topographically irregular surfaces can experience differential compaction producing drape folds. Bending stresses due to these drape folds would be superposed on the pervasive tensile stresses due to desiccation or cooling, such that the probability of fracturing is enhanced above buried topographic highs and suppressed above buried topographic lows. Thus it was proposed that the scale of the Martian polygons is controlled by the spacing of topographic highs on the buried surface rather than by the physics of the shrinkage process.

  17. Quasi-static responses and variational principles in gradient plasticity

    NASA Astrophysics Data System (ADS)

    Nguyen, Quoc-Son

    2016-12-01

    Gradient models have been much discussed in the literature for the study of time-dependent or time-independent processes such as visco-plasticity, plasticity and damage. This paper is devoted to the theory of Standard Gradient Plasticity at small strain. A general and consistent mathematical description available for common time-independent behaviours is presented. Our attention is focussed on the derivation of general results such as the description of the governing equations for the global response and the derivation of related variational principles in terms of the energy and the dissipation potentials. It is shown that the quasi-static response under a loading path is a solution of an evolution variational inequality as in classical plasticity. The rate problem and the rate minimum principle are revisited. A time-discretization by the implicit scheme of the evolution equation leads to the increment problem. An increment of the response associated with a load increment is a solution of a variational inequality and also satisfies a minimum principle if the energy potential is convex. The increment minimum principle deals with stable solutions of the variational inequality. Some numerical methods are discussed in view of the numerical simulation of the quasi-static response.

  18. Computation of the target state and feedback controls for time optimal consensus in multi-agent systems

    NASA Astrophysics Data System (ADS)

    Mulla, Ameer K.; Patil, Deepak U.; Chakraborty, Debraj

    2018-02-01

    N identical agents with bounded inputs aim to reach a common target state (consensus) in the minimum possible time. Algorithms for computing this time-optimal consensus point, the control law to be used by each agent, and the time taken for the consensus to occur are proposed. Two types of multi-agent systems are considered, namely (1) coupled single-integrator agents on a plane and (2) double-integrator agents on a line. At the initial time instant, each agent is assumed to have access to the state information of all the other agents. An algorithm, using convexity of attainable sets and Helly's theorem, is proposed to compute the final consensus target state and the minimum time to achieve this consensus. Further, parts of the computation are parallelised amongst the agents such that each agent has to perform computations of O(N²) run time complexity. Finally, local feedback time-optimal control laws are synthesised to drive each agent to the target point in minimum time. During this part of the operation, the controller for each agent uses measurements of only its own states and does not need to communicate with any neighbouring agents.

  19. Trajectory optimization and guidance for an aerospace plane

    NASA Technical Reports Server (NTRS)

    Mease, Kenneth D.; Vanburen, Mark A.

    1989-01-01

    The first step in the approach to developing guidance laws for a horizontal take-off, air breathing single-stage-to-orbit vehicle is to characterize the minimum-fuel ascent trajectories. The capability to generate constrained, minimum fuel ascent trajectories for a single-stage-to-orbit vehicle was developed. A key component of this capability is the general purpose trajectory optimization program OTIS. The pre-production version, OTIS 0.96 was installed and run on a Convex C-1. A propulsion model was developed covering the entire flight envelope of a single-stage-to-orbit vehicle. Three separate propulsion modes, corresponding to an after burning turbojet, a ramjet and a scramjet, are used in the air breathing propulsion phase. The Generic Hypersonic Aerodynamic Model Example aerodynamic model of a hypersonic air breathing single-stage-to-orbit vehicle was obtained and implemented. Preliminary results pertaining to the effects of variations in acceleration constraints, available thrust level and fuel specific impulse on the shape of the minimum-fuel ascent trajectories were obtained. The results show that, if the air breathing engines are sized for acceleration to orbital velocity, it is the acceleration constraint rather than the dynamic pressure constraint that is active during ascent.

  20. Planetary science: Pluto's polygons explained

    NASA Astrophysics Data System (ADS)

    Dombard, Andrew J.; O'Hara, Sean

    2016-06-01

    The Sputnik Planum basin of Pluto contains a sheet of nitrogen ice, the surface of which is divided into irregular polygons tens of kilometres across. Two studies reveal that vigorous convection causes these polygons. See Letters p.79 & 82

  1. Lineament and polygon patterns on Europa

    NASA Technical Reports Server (NTRS)

    Pieri, D. C.

    1981-01-01

    A classification scheme is presented for the lineaments and associated polygonal patterns observed on the surface of Europa, and the frequency distribution of the polygons is discussed in terms of the stress-relief fracturing of the surface. The lineaments are divided on the basis of albedo, morphology, orientation and characteristic geometry into eight groups based on Voyager 2 images taken at a best resolution of 4 km. The lineaments in turn define a system of polygons varying in size from small reticulate patterns at the limit of resolution to individuals of 1,000,000 sq km. Preliminary analysis of polygon side frequency distributions reveals a class of polygons with statistics similar to those found in complex terrestrial terrains, particularly in areas of well-oriented stresses, a class with similar statistics around the antijovian point, and a class with a distribution similar to those seen in terrestrial tensional fracture patterns. Speculations concerning the processes giving rise to the lineament patterns are presented.

  2. Small-scale martian polygonal terrain: Implications for liquid surface water

    USGS Publications Warehouse

    Seibert, N.M.; Kargel, J.S.

    2001-01-01

    Images from the Mars Orbiter Camera (MOC) through August 1999 were analyzed for the global distribution of small-scale polygonal terrain not clearly resolved in Viking Orbiter imagery. With very few exceptions, small-scale polygonal terrain occurs at middle to high latitudes of the northern and southern hemispheres in Hesperian-age geologic units. The largest concentration of this terrain occurs in the Utopia basin in close association with scalloped depressions (interpreted as thermokarst) and appears to represent an Amazonian event. The morphology and occurrence of small polygonal terrain suggest they are either mud desiccation cracks or ice-wedge polygons. Because the small-scale polygons in Utopia and Argyre Planitiae are associated with other cold-climate permafrost or glacial features, an ice-wedge model is preferred for these areas. Both cracking mechanisms work most effectively in water- or ice-rich fine-grained material and may imply the seasonal or episodic existence of liquid water at the surface.

  3. BFACF-style algorithms for polygons in the body-centered and face-centered cubic lattices

    NASA Astrophysics Data System (ADS)

    Janse van Rensburg, E. J.; Rechnitzer, A.

    2011-04-01

    In this paper, the elementary moves of the BFACF-algorithm (Aragão de Carvalho and Caracciolo 1983 Phys. Rev. B 27 1635-45, Aragão de Carvalho and Caracciolo 1983 Nucl. Phys. B 215 209-48, Berg and Foerster 1981 Phys. Lett. B 106 323-6) for lattice polygons are generalized to elementary moves of BFACF-style algorithms for lattice polygons in the body-centered (BCC) and face-centered (FCC) cubic lattices. We prove that the ergodicity classes of these new elementary moves coincide with the knot types of unrooted polygons in the BCC and FCC lattices, and so extend a similar result for the cubic lattice (see Janse van Rensburg and Whittington (1991 J. Phys. A: Math. Gen. 24 5553-67)). Implementations of these algorithms for knotted polygons using the GAS algorithm produce estimates of the minimal length of knotted polygons in the BCC and FCC lattices.

  4. Chromatically corrected virtual image visual display. [reducing eye strain in flight simulators

    NASA Technical Reports Server (NTRS)

    Kahlbaum, W. M., Jr. (Inventor)

    1980-01-01

    An in-line, three element, large diameter, optical display lens is disclosed which has a front convex-convex element, a central convex-concave element, and a rear convex-convex element. The lens, used in flight simulators, magnifies an image presented on a television monitor and, by causing light rays leaving the lens to be in essentially parallel paths, reduces eye strain of the simulator operator.

  5. Nash points, Ky Fan inequality and equilibria of abstract economies in Max-Plus and B-convexity

    NASA Astrophysics Data System (ADS)

    Briec, Walter; Horvath, Charles

    2008-05-01

    B-convexity was introduced in [W. Briec, C. Horvath, B-convexity, Optimization 53 (2004) 103-127]. Separation and Hahn-Banach like theorems can be found in [G. Adilov, A.M. Rubinov, B-convex sets and functions, Numer. Funct. Anal. Optim. 27 (2006) 237-257] and [W. Briec, C.D. Horvath, A. Rubinov, Separation in B-convexity, Pacific J. Optim. 1 (2005) 13-30]. We show here that all the basic results related to fixed point theorems are available in B-convexity. Ky Fan inequality, existence of Nash equilibria and existence of equilibria for abstract economies are established in the framework of B-convexity. Monotone analysis, or analysis on Maslov semimodules [V.N. Kolokoltsov, V.P. Maslov, Idempotent Analysis and Its Applications, Math. Appl., vol. 401, Kluwer Academic, 1997; V.P. Litvinov, V.P. Maslov, G.B. Shpitz, Idempotent functional analysis: An algebraic approach, Math. Notes 69 (2001) 696-729; V.P. Maslov, S.N. Samborski (Eds.), Idempotent Analysis, Advances in Soviet Mathematics, Amer. Math. Soc., Providence, RI, 1992], is the natural framework for these results. From this point of view Max-Plus convexity and B-convexity are isomorphic Maslov semimodule structures over isomorphic semirings. Therefore all the results of this paper hold in the context of Max-Plus convexity.

  6. The prediction of leaf area index from forest polygons decomposed through the integration of remote sensing, GIS, UNIX, and C

    NASA Astrophysics Data System (ADS)

    Wulder, M. A.

    1998-03-01

    Forest stand data are normally stored in a geographic information system (GIS) on the basis of areas of similar species combinations. Polygons are created based upon species assemblages and given labels relating the percentage of areal coverage by each significant species type within the specified area. As a result, estimates of leaf area index (LAI) from the digital numbers found within GIS-stored polygons lack accuracy, as the predictive equations for LAI are normally developed for individual species, not species assemblages. A Landsat TM image was acquired to enable a classification which allows for the decomposition of forest-stand polygons into greater species detail. Knowledge of the actual internal composition of the stand polygons provides for computation of LAI values based upon the appropriate predictive equation, resulting in higher accuracy of these estimates. To accomplish this goal it was necessary to extract, for each cover type in each polygon, descriptive values to represent the digital numbers located in that portion of the polygon. The classified image dictates the species composition of the various portions of the polygon, and within these areas the raster pixel values are tabulated and averaged. Due to a lack of existing software tools to assess the raster values occurring within GIS polygons, a combination of remote sensing, GIS, UNIX, and specifically coded C programs was necessary. Such tools are frequently used by the spatial analyst and indicate the complexity of what may appear to be a straightforward spatial analysis problem.
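    The core tabulation described above (average the digital numbers of each cover type inside each polygon) is, in modern terms, a per-polygon, per-class zonal statistic. A minimal sketch with invented toy rasters, not the original UNIX/C tooling; `dn`, `poly`, and `cls` stand in for a TM band, the rasterized stand polygons, and the classified image:

```python
import numpy as np

def mean_dn_per_polygon_class(dn, polygon_id, class_id):
    """For each (polygon, cover class) pair, average the raster digital
    numbers (DN) falling inside that part of the polygon."""
    out = {}
    for pid in np.unique(polygon_id):
        for cid in np.unique(class_id[polygon_id == pid]):
            mask = (polygon_id == pid) & (class_id == cid)
            out[(pid, cid)] = dn[mask].mean()
    return out

# toy 4x4 scene: two polygons (left/right), two species classes (top/bottom)
dn = np.array([[10, 12, 30, 32],
               [10, 14, 30, 34],
               [20, 22, 40, 42],
               [20, 24, 40, 44]])
poly = np.array([[1, 1, 2, 2]] * 4)
cls = np.array([[1, 1, 1, 1],
                [1, 1, 1, 1],
                [2, 2, 2, 2],
                [2, 2, 2, 2]])
means = mean_dn_per_polygon_class(dn, poly, cls)
```

    Each mean can then be fed into the species-specific LAI regression for that class, which is the decomposition step the abstract argues for.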

  7. Numerical investigations of microtopographic influence on the near surface thermal regime and thermokarst development in ice wedge polygons

    NASA Astrophysics Data System (ADS)

    Abolt, C.; Young, M.; Atchley, A. L.; Harp, D. R.

    2017-12-01

    Permafrost degradation in ice wedge polygon terrain has accelerated in the last three decades, resulting in drastic changes to tundra hydrology which may impact rates of soil organic carbon mobilization. The goal of this research is to determine to what extent the near surface thermal regime, and hence the vulnerability of the upper permafrost, may be controlled by surface topography in ice wedge polygons. The central hypothesis is that energy is preferentially transferred into the polygon subsurface in summer at low, wet zones (such as low-centered polygon centers and troughs), then released to the atmosphere in winter through elevated zones (such as rims) that are less insulated by snowpack. Disturbance to the approximate balance between these seasonal energy fluxes may help explain the onset and development of thermokarst. In this work, we present a numerical model of thermal hydrology in a low-centered polygon near Prudhoe Bay, Alaska, constructed within the Advanced Terrestrial Simulator, a state-of-the-art code that couples a meteorologically driven surface energy balance with equations for surface and subsurface conservation of mass and energy. The model is calibrated against a year of daily ground temperature observations throughout the polygon and used to quantify meter-scale zonation in the subsurface thermal budget. The amount of relief in the rims and the trough of the simulated polygon is then manipulated, and simulations are repeated including a pulse of one warm year, to explore the extent to which topography may influence the response of permafrost to increased air temperatures. Results suggest that nearly 25% of energy entering the ground at the polygon center during summer may be released back to the atmosphere through the rims in winter, producing a modest effect on active layer thickness throughout the polygon.
Simulated polygons with deeper, wetter troughs have only marginally thicker active layers than other polygons in average years, but are the most vulnerable to additional permafrost degradation during warm summers. The results confirm and expand upon current conceptual understanding of positive feedbacks during thermokarst development, and are compatible with historical observations indicating that ice wedge degradation tends to occur in discrete pulses, rather than as a gradual process.

  8. Scoliosis convexity and organ anatomy are related.

    PubMed

    Schlösser, Tom P C; Semple, Tom; Carr, Siobhán B; Padley, Simon; Loebinger, Michael R; Hogg, Claire; Castelein, René M

    2017-06-01

    Primary ciliary dyskinesia (PCD) is a respiratory syndrome in which 'random' organ orientation can occur, with approximately 46% of patients developing situs inversus totalis at organogenesis. The aim of this study was to explore the relationship between organ anatomy and curve convexity by studying the prevalence and convexity of idiopathic scoliosis in PCD patients with and without situs inversus. Chest radiographs of PCD patients were systematically screened for the existence of significant lateral spinal deviation using the Cobb angle. Positive values represented right-sided convexity. Curve convexity and Cobb angles were compared between PCD patients with situs inversus and normal anatomy. A total of 198 PCD patients were screened. The prevalence of scoliosis (Cobb >10°) and significant spinal asymmetry (Cobb 5-10°) was 8 and 23%, respectively. Curve convexity and Cobb angle were significantly different within both groups between situs inversus patients and patients with normal anatomy (P ≤ 0.009). Moreover, curve convexity correlated significantly with organ orientation (P < 0.001; ϕ = 0.882): in 16 PCD patients with scoliosis (8 situs inversus and 8 normal anatomy), except for one case, matching of curve convexity and orientation of organ anatomy was observed: convexity of the curve was opposite to organ orientation. This study supports our hypothesis on the correlation between organ anatomy and curve convexity in scoliosis: the convexity of the thoracic curve is predominantly to the right in PCD patients who were 'randomized' to normal organ anatomy and to the left in patients with situs inversus totalis.
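    The reported association (ϕ = 0.882) is a phi coefficient on a 2×2 table of organ orientation versus curve convexity. As a sanity check, one hypothetical table consistent with the abstract's description (16 scoliosis cases, one mismatch) reproduces that value; the table itself is a reconstruction for illustration, not data from the paper:

```python
import math

def phi_coefficient(a, b, c, d):
    """Phi coefficient of the 2x2 contingency table [[a, b], [c, d]]."""
    num = a * d - b * c
    den = math.sqrt((a + b) * (c + d) * (a + c) * (b + d))
    return num / den

# hypothetical split: 8 normal-anatomy patients all right-convex,
# 8 situs inversus patients with 7 left-convex and 1 right-convex
phi = phi_coefficient(8, 0, 1, 7)  # ~0.882
```

    The phi coefficient is simply Pearson's r applied to two binary variables, which is why a single mismatched case out of 16 already pulls it below 1.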

  9. Supramolecule-to-supramolecule transformations of coordination-driven self-assembled polygons.

    PubMed

    Zhao, Liang; Northrop, Brian H; Stang, Peter J

    2008-09-10

    Two types of supramolecular transformations, wherein a self-assembled Pt(II)-pyridyl metal-organic polygon is controllably converted into an alternative polygon, have been achieved through the reaction between cobalt carbonyl and the acetylene moiety of a dipyridyl donor ligand. A [6 + 6] hexagon is transformed into two [3 + 3] hexagons, and a triangle-square mixture is converted into [2 + 2] rhomboids. ¹H and ³¹P NMR spectra are used to track the transformation process and evaluate the yield of new self-assembled polygons. Such transformed species are identified by electrospray ionization (ESI) mass spectrometry. This new kind of supramolecule-to-supramolecule transformations provides a viable means for constructing, and then converting, new self-assembled polygons.

  10. Use of Convexity in Ostomy Care

    PubMed Central

    Salvadalena, Ginger; Pridham, Sue; Droste, Werner; McNichol, Laurie; Gray, Mikel

    2017-01-01

    Ostomy skin barriers that incorporate a convexity feature have been available in the marketplace for decades, but limited resources are available to guide clinicians in selection and use of convex products. Given the widespread use of convexity, and the need to provide practical guidelines for appropriate use of pouching systems with convex features, an international consensus panel was convened to provide consensus-based guidance for this aspect of ostomy practice. Panelists were provided with a summary of relevant literature in advance of the meeting; these articles were used to generate and reach consensus on 26 statements during a 1-day meeting. Consensus was achieved when 80% of panelists agreed on a statement using an anonymous electronic response system. The 26 statements provide guidance for convex product characteristics, patient assessment, convexity use, and outcomes. PMID:28002174

  11. Geometric convex cone volume analysis

    NASA Astrophysics Data System (ADS)

    Li, Hsiao-Chi; Chang, Chein-I.

    2016-05-01

Convexity is a major concept used to design and develop endmember finding algorithms (EFAs). For abundance-unconstrained techniques, the Pixel Purity Index (PPI) and Automatic Target Generation Process (ATGP), which use Orthogonal Projection (OP) as a criterion, are commonly used methods. For abundance partially constrained techniques, Convex Cone Analysis is generally preferred; it makes use of convex cones to impose the Abundance Non-negativity Constraint (ANC). For abundance fully constrained techniques, N-FINDR and the Simplex Growing Algorithm (SGA) are the most popular methods; they use simplex volume as a criterion to impose the ANC and the Abundance Sum-to-one Constraint (ASC). This paper analyzes an issue encountered in volume calculation, with a hyperplane introduced to illustrate the idea of a bounded convex cone. Geometric Convex Cone Volume Analysis (GCCVA) projects the boundary vectors of a convex cone orthogonally onto a hyperplane to reduce the effect of background signatures, and a geometric volume approach is applied to address the issue arising from volume calculation and to further improve the performance of convex cone-based EFAs.
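
    The simplex-volume criterion that N-FINDR and SGA rely on reduces to a determinant computation. Below is a minimal generic sketch (not the paper's GCCVA implementation); the Gram-determinant form is used so that vertices may live in a space of higher dimension than the simplex itself:

```python
import numpy as np
from math import factorial

def simplex_volume(vertices):
    """Volume of a k-simplex given its k+1 vertices (one per row),
    via the Gram determinant of the edge vectors."""
    v = np.asarray(vertices, dtype=float)
    edges = v[1:] - v[0]                      # k edge vectors from the first vertex
    k = edges.shape[0]                        # simplex dimension
    gram = edges @ edges.T                    # works even for vertices embedded in R^n, n > k
    return np.sqrt(abs(np.linalg.det(gram))) / factorial(k)

# Unit right triangle in the plane: area = 0.5
print(simplex_volume([[0, 0], [1, 0], [0, 1]]))  # → 0.5
```

N-FINDR-style algorithms search for the vertex set that maximizes this volume over the pixel vectors.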

  12. The effects of a convex rear-view mirror on ocular accommodative responses.

    PubMed

    Nagata, Tatsuo; Iwasaki, Tsuneto; Kondo, Hiroyuki; Tawara, Akihiko

    2013-11-01

Convex mirrors are universally used as rear-view mirrors in automobiles. However, the ocular accommodative responses during the use of these mirrors have not yet been examined. This study investigated the effects of a convex mirror on the ocular accommodative system. Seven young adults with normal visual function were asked to binocularly view an object in a convex or a plane mirror. The accommodative responses were measured with an infrared optometer. The average accommodation of all subjects while viewing the object in the convex mirror was significantly nearer than in the plane mirror, although all subjects perceived the position of the object in the convex mirror as being farther away. Moreover, the fluctuations of accommodation were significantly larger for the convex mirror. The convex mirror caused a 'false recognition of distance', which induced the large accommodative fluctuations and blurred vision. Manufacturers should consider ocular accommodative responses as a new indicator for improving automotive safety.
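
    The mismatch between accommodation and perceived distance is consistent with elementary mirror optics: a convex mirror forms a virtual image far closer to the eye than the object it reflects. A sketch with purely illustrative numbers (the study's mirror geometry is not given in the abstract):

```python
def convex_mirror_image_distance(object_distance_m, radius_of_curvature_m):
    """Mirror equation 1/do + 1/di = 1/f, with f = -R/2 for a convex mirror.
    A negative result means a virtual image located behind the mirror."""
    f = -radius_of_curvature_m / 2.0
    return 1.0 / (1.0 / f - 1.0 / object_distance_m)

# A car 10 m away, seen in a convex mirror with R = 1 m (illustrative values):
di = convex_mirror_image_distance(10.0, 1.0)
print(round(di, 3))  # → -0.476 : virtual image only ~0.48 m behind the mirror
```

So the eye must accommodate to roughly half a metre even though the object is 10 m away, matching the "nearer" accommodation reported above.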

  13. Pluto's Polygonal Terrain Places Lower Limit on Planetary Heat Flow

    NASA Astrophysics Data System (ADS)

    Trowbridge, A.; Steckloff, J. K.; Melosh, H., IV; Freed, A. M.

    2015-12-01

During its recent flyby of Pluto, New Horizons imaged an icy plains region (Sputnik Planum) whose surface is divided into polygonal blocks, ca. 20-30 km across, bordered by what appear to be shallow troughs. The lack of craters within these plains suggests they are relatively young, implying that the underlying material is recently active. The scale of these features argues against an origin by cooling and contraction. Here we investigate the alternative scenario that they are the surface manifestation of shallow convection in a thick layer of nitrogen ice. Typical Rayleigh-Bénard convective cells are approximately three times wider than the depth of the convecting layer, implying a layer depth of ca. 7-10 km. Our convection hypothesis requires that the Rayleigh number exceed a minimum of about 1000 in the nitrogen ice layer. We coupled a parameterized convection model with a temperature-dependent rheology of nitrogen ice (Yamashita, 2008), finding a Rayleigh number 1500 to 7500 times critical for a plausible range of heat flows for Pluto's interior. The computed range of heat flow (3.5-5.2 mW/m2) is consistent with the radiogenic heat generated by a carbonaceous chondrite (CC) core implied by Pluto's bulk density. The minimum heat flow at the critical Rayleigh number is 0.13 mW/m2. Our model implies a core temperature of 44 K in the interior of the convecting layer. This is very close to the exothermic β-α phase transition in nitrogen ice at 35.6 K (for pure N2 ice; dissolved CO can increase this, depending on its concentration), suggesting that the warm cores of the rising convective cells may be β phase, whereas the cooler sinking limbs may be α phase. This transition may thus be observable due to the large difference in their spectral signature. Further applying our model to Pluto's putative water ice mantle, the heat flow from CC is consistent with convection in Pluto's mantle and the activity observed on its surface.
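
    The convection criterion above can be illustrated with the textbook Rayleigh number, Ra = ρ g α ΔT d³ / (κ η). In the sketch below, every parameter except Pluto's surface gravity and the quoted layer depth is an assumed placeholder for illustration, not an input from the paper:

```python
def rayleigh_number(rho, g, alpha, dT, d, kappa, eta):
    """Classic Rayleigh number for a fluid layer of thickness d:
    Ra = rho * g * alpha * dT * d^3 / (kappa * eta)."""
    return rho * g * alpha * dT * d**3 / (kappa * eta)

# Illustrative nitrogen-ice-like values (assumptions, NOT the paper's inputs):
Ra = rayleigh_number(rho=1000.0,   # density, kg/m^3 (assumed)
                     g=0.62,       # Pluto surface gravity, m/s^2
                     alpha=2e-3,   # thermal expansivity, 1/K (assumed)
                     dT=8.0,       # temperature drop across the layer, K (assumed)
                     d=8e3,        # layer depth, within the 7-10 km quoted above
                     kappa=1e-6,   # thermal diffusivity, m^2/s (assumed)
                     eta=1e14)     # viscosity, Pa*s (assumed)
print(Ra > 1000)  # → True : convection requires Ra above the critical value ~1000
```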

  14. Polygonal deformation bands in sandstone

    NASA Astrophysics Data System (ADS)

    Antonellini, Marco; Nella Mollema, Pauline

    2017-04-01

We report for the first time the occurrence of polygonal faults in sandstone, which is compelling given that layer-bound polygonal fault systems have been observed so far only in fine-grained sediments such as clay and chalk. The polygonal faults are dm-wide zones of shear deformation bands that developed under shallow burial conditions in the lower portion of the Jurassic Entrada Fm (Utah, USA). The edges of the polygons are 1 to 5 meters long. The shear deformation bands are organized as conjugate faults along each edge of the polygon and form characteristic horst-like structures. The individual deformation bands have slip magnitudes ranging from a few mm to 1.5 cm; the cumulative average slip magnitude in a zone is up to 10 cm. The deformation band heaves, in aggregate, accommodate a small isotropic horizontal extension (strain < 0.005). The individual shear deformation bands show abutting T-junctions, veering, curving, and merging where they mechanically interact. Crosscutting relationships are rare. The interactions of the deformation bands are similar to those of mode I opening fractures. Density inversion, which takes place where under-compacted and over-pressurized layers (Carmel Fm) lie below normally compacted sediments (Entrada Sandstone), may be an important process for the formation of polygonal deformation bands. The gravitational sliding and soft-sediment structures typically observed within the Carmel Fm support this hypothesis. Soft-sediment deformation may induce polygonal faulting in the section of the Entrada Sandstone just above the Carmel Fm. The permeability of the polygonal deformation bands is approximately 10^-14 to 10^-13 m^2, which is less than the permeability of the host Entrada Sandstone (range 10^-12 to 10^-11 m^2). The documented fault networks have important implications for evaluating the geometry of km-scale polygonal fault systems in the subsurface and top seal integrity, as well as for constraining paleo-tectonic stress regimes.

  15. Water polygons in high-resolution protein crystal structures.

    PubMed

    Lee, Jonas; Kim, Sung-Hou

    2009-07-01

We have analyzed the interstitial water (ISW) structures in 1500 protein crystal structures deposited in the Protein Data Bank that have greater than 1.5 Å resolution and less than 90% sequence similarity with each other. We observed varieties of polygonal water structures composed of three to eight water molecules. These polygons may represent the time- and space-averaged structures of "stable" water oligomers present in liquid water, and their presence as well as relative population may be relevant in understanding physical properties of liquid water at a given temperature. On average, 13% of ISWs are localized enough to be visible by X-ray diffraction. Of those, an average of 78% are water molecules in the first water layer on the protein surface. Of the localized ISWs beyond the first layer, almost half form water polygons such as trigons and tetragons, as well as the expected pentagons, hexagons, higher polygons, partial dodecahedrons, and disordered networks. Most of the octagons and nonagons are formed by fusion of smaller polygons. The trigons are most commonly observed. We suggest that our observation provides an experimental basis for including these water polygon structures in correlating and predicting various water properties in the liquid state.

  16. Water polygons in high-resolution protein crystal structures

    PubMed Central

    Lee, Jonas; Kim, Sung-Hou

    2009-01-01

We have analyzed the interstitial water (ISW) structures in 1500 protein crystal structures deposited in the Protein Data Bank that have greater than 1.5 Å resolution and less than 90% sequence similarity with each other. We observed varieties of polygonal water structures composed of three to eight water molecules. These polygons may represent the time- and space-averaged structures of “stable” water oligomers present in liquid water, and their presence as well as relative population may be relevant in understanding physical properties of liquid water at a given temperature. On average, 13% of ISWs are localized enough to be visible by X-ray diffraction. Of those, an average of 78% are water molecules in the first water layer on the protein surface. Of the localized ISWs beyond the first layer, almost half form water polygons such as trigons and tetragons, as well as the expected pentagons, hexagons, higher polygons, partial dodecahedrons, and disordered networks. Most of the octagons and nonagons are formed by fusion of smaller polygons. The trigons are most commonly observed. We suggest that our observation provides an experimental basis for including these water polygon structures in correlating and predicting various water properties in the liquid state. PMID:19551896

  17. Aberdeen polygons: computer displays of physiological profiles for intensive care.

    PubMed

    Green, C A; Logie, R H; Gilhooly, K J; Ross, D G; Ronald, A

    1996-03-01

The clinician in an intensive therapy unit is regularly presented with a range of information about the current physiological state of the patients under care. This information typically comes from a variety of sources and in a variety of formats. A more integrated form of display incorporating several physiological parameters may therefore be helpful. Three experiments are reported that explored the potential use of analogue polygon diagrams to display physiological data from patients undergoing intensive therapy. Experiment 1 demonstrated that information can be extracted readily from such diagrams comprising 8- or 10-sided polygons, but with an advantage for simpler polygons and for information displayed at the top of the diagram. Experiment 2 showed that colour coding removed these biases toward simpler polygons and the top of the diagram, and also speeded processing. Experiment 3 used polygons displaying patterns of physiological data that were consistent with typical conditions observed in the intensive care unit. It was found that physicians can readily learn to recognize these patterns and to diagnose both the nature and severity of the patient's physiological state. These polygon diagrams appear to have considerable potential for providing on-line summary information about a patient's physiological state.

  18. Thermal convection in three-dimensional fractured porous media

    NASA Astrophysics Data System (ADS)

    Mezon, C.; Mourzenko, V. V.; Thovert, J.-F.; Antoine, R.; Fontaine, F.; Finizola, A.; Adler, P. M.

    2018-01-01

    Thermal convection is numerically computed in three-dimensional (3D) fluid saturated isotropically fractured porous media. Fractures are randomly inserted as two-dimensional (2D) convex polygons. Flow is governed by Darcy's 2D and 3D laws in the fractures and in the porous medium, respectively; exchanges take place between these two structures. Results for unfractured porous media are in agreement with known theoretical predictions. The influence of parameters such as the fracture aperture (or fracture transmissivity) and the fracture density on the heat released by the whole system is studied for Rayleigh numbers up to 150 in cubic boxes with closed-top conditions. Then, fractured media are compared to homogeneous porous media with the same macroscopic properties. Three major results could be derived from this study. The behavior of the system, in terms of heat release, is determined as a function of fracture density and fracture transmissivity. First, the increase in the output flux with fracture density is linear over the range of fracture density tested. Second, the increase in output flux as a function of fracture transmissivity shows the importance of percolation. Third, results show that the effective approach is not always valid, and that the mismatch between the full calculations and the effective medium approach depends on the fracture density in a crucial way.
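
    The matrix flow in such models follows Darcy's law, q = -(k/μ)∇P. A one-line sketch with assumed property values (illustrative only, not those of the study):

```python
def darcy_flux(k, mu, dp_dx):
    """Darcy velocity q = -(k / mu) * dP/dx for single-phase flow
    through a porous medium of permeability k and fluid viscosity mu."""
    return -(k / mu) * dp_dx

# Illustrative values: water (mu = 1e-3 Pa*s) in a matrix of k = 1e-12 m^2
# driven by a pressure gradient of -1e4 Pa/m
q = darcy_flux(k=1e-12, mu=1e-3, dp_dx=-1e4)
print(q)  # ≈ 1e-5 m/s
```

In the fractured model above, the same law applies in 2D within each fracture, with the fracture transmissivity playing the role of k times aperture.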

  19. Broad Band Intra-Cavity Total Reflection Chemical Sensor

    DOEpatents

    Pipino, Andrew C. R.

    1998-11-10

A broadband, ultrahigh-sensitivity chemical sensor is provided that allows detection through utilization of a small, extremely low-loss, monolithic optical cavity. The cavity is fabricated from highly transparent optical material in the shape of a regular polygon with one or more convex facets to form a stable resonator for ray trajectories sustained by total internal reflection. Optical radiation enters and exits the monolithic cavity by photon tunneling, in which two totally reflecting surfaces are brought into close proximity. In the presence of absorbing material, the loss per pass is increased since the evanescent waves that exist exterior to the cavity, at points where the circulating pulse is totally reflected, are absorbed. The decay rate of an injected pulse is determined by coupling out an infinitesimal fraction of the pulse to produce an intensity-versus-time decay curve. Since the change in the decay rate resulting from absorption is inversely proportional to the magnitude of absorption, a quantitative sensor of concentration or absorption cross-section with 1 part-per-million/pass or better sensitivity is obtained. The broadband nature of total internal reflection permits a single device to be used over a broad wavelength range. The absorption spectrum of the surrounding medium can thereby be obtained as a measurement of inverse decay time as a function of wavelength.
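
    The decay-rate measurement described here is the ring-down principle: the absorber's extra loss follows from the decay times measured with and without it. A sketch assuming the standard cavity ring-down relation, with illustrative numbers (not the patent's):

```python
def absorption_per_pass(tau_empty, tau_sample, round_trip_time):
    """Extra fractional loss per round trip caused by an absorber,
    from ring-down decay times without (tau_empty) and with (tau_sample) it:
    A = t_rt * (1/tau_sample - 1/tau_empty)."""
    return round_trip_time * (1.0 / tau_sample - 1.0 / tau_empty)

# Illustrative: decay time drops from 10 us to 8 us, 1 ns round trip
loss = absorption_per_pass(tau_empty=10e-6, tau_sample=8e-6, round_trip_time=1e-9)
print(loss)  # ≈ 2.5e-5, i.e. ~25 ppm extra loss per round trip
```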

  20. Cooperative terrain model acquisition by a team of two or three point-robots

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rao, N.S.V.; Protopopescu, V.; Manickam, N.

    1996-04-01

We address the model acquisition problem for an unknown planar terrain by a team of two or three robots. The terrain is cluttered by a finite number of polygonal obstacles whose shapes and positions are unknown. The robots are point-sized and equipped with visual sensors which acquire all visible parts of the terrain by scan operations executed from their locations. The robots communicate with each other via wireless connection. The performance is measured by the number of the sensor (scan) operations, which are assumed to be the most time-consuming of all the robot operations. We employ the restricted visibility graph methods in a hierarchical setup. For terrains with convex obstacles and for teams of n(= 2, 3) robots, we prove that the sensing time is reduced by a factor of 1/n. For terrains with concave corners, the performance of the algorithm depends on the number of concave regions and their depths. A hierarchical decomposition of the restricted visibility graph into n-connected and (n - 1)-or-less connected components is considered. The performance for the n(= 2, 3) robot team is expressed in terms of the sizes of n-connected components, and the sizes and diameters of (n - 1)-or-less connected components.

  1. A study of the microstructure of a rapidly solidified nickel-base superalloy modified with boron. M.S. Thesis. Final Contractor Report

    NASA Technical Reports Server (NTRS)

    Speck, J. S.

    1986-01-01

    The microstructures of melt-spun superalloy ribbons with variable boron levels have been studied by transmission electron microscopy. The base alloy was of approximate composition Ni-11% Cr-5%Mo-5%Al-4%Ti with boron levels of 0.06, 0.12, and 0.60 percent (all by weight). Thirty micron thick ribbons display an equiaxed chill zone near the wheel contact side which develops into primary dendrite arms in the ribbon center. Secondary dendrite arms are observed near the ribbon free surface. In the higher boron bearing alloys, boride precipitates are observed along grain boundaries. A concerted effort has been made to elucidate true grain shapes by the use of bright field/dark field microscopy. In the low boron alloy, grain shapes are often convex, and grain faces are flat. Boundary faces frequently have large curvature, and grain shapes form concave polygons in the higher boron level alloys. It is proposed that just after solidification, in all of the alloys studied, grain shapes were initially concave and boundaries were wavy. Boundary straightening is presumed to occur on cooling in the low boron alloy. Boundary migration is precluded in the higher boron alloys by fast precipitation of borides at internal interfaces.

  2. Revisiting separation properties of convex fuzzy sets

    USDA-ARS?s Scientific Manuscript database

Separation of convex sets by hyperplanes has been extensively studied on crisp sets. In a seminal paper, separability and convexity are investigated; however, there is a flaw in the definition of the degree of separation. We revisited separation on convex fuzzy sets that have level-wise (crisp) disjointne...

  3. Technique for identifying, tracing, or tracking objects in image data

    DOEpatents

    Anderson, Robert J [Albuquerque, NM; Rothganger, Fredrick [Albuquerque, NM

    2012-08-28

    A technique for computer vision uses a polygon contour to trace an object. The technique includes rendering a polygon contour superimposed over a first frame of image data. The polygon contour is iteratively refined to more accurately trace the object within the first frame after each iteration. The refinement includes computing image energies along lengths of contour lines of the polygon contour and adjusting positions of the contour lines based at least in part on the image energies.
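
    A minimal sketch of the "image energy along a contour line" idea, using mean gradient magnitude along a segment as the energy. This is an assumption for illustration; the patent does not specify its energy term:

```python
import numpy as np

def segment_image_energy(image, p0, p1, samples=50):
    """Mean gradient magnitude sampled along the segment p0 -> p1
    (points given as (row, col); nearest-pixel sampling)."""
    gy, gx = np.gradient(image.astype(float))      # derivatives along rows, cols
    mag = np.hypot(gx, gy)                         # gradient magnitude
    t = np.linspace(0.0, 1.0, samples)
    ys = np.clip(np.round(p0[0] + t * (p1[0] - p0[0])).astype(int), 0, image.shape[0] - 1)
    xs = np.clip(np.round(p0[1] + t * (p1[1] - p0[1])).astype(int), 0, image.shape[1] - 1)
    return mag[ys, xs].mean()

# A vertical step edge: a contour line lying on the edge scores higher
# than one crossing a flat region, so refinement would keep it there.
img = np.zeros((32, 32)); img[:, 16:] = 1.0
print(segment_image_energy(img, (2, 16), (29, 16)) >
      segment_image_energy(img, (2, 4), (29, 4)))  # → True
```

An iterative refinement loop would perturb each contour line and keep the position with the higher energy, which is the adjustment step the abstract describes.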

  4. Use of Convexity in Ostomy Care: Results of an International Consensus Meeting.

    PubMed

    Hoeflok, Jo; Salvadalena, Ginger; Pridham, Sue; Droste, Werner; McNichol, Laurie; Gray, Mikel

    Ostomy skin barriers that incorporate a convexity feature have been available in the marketplace for decades, but limited resources are available to guide clinicians in selection and use of convex products. Given the widespread use of convexity, and the need to provide practical guidelines for appropriate use of pouching systems with convex features, an international consensus panel was convened to provide consensus-based guidance for this aspect of ostomy practice. Panelists were provided with a summary of relevant literature in advance of the meeting; these articles were used to generate and reach consensus on 26 statements during a 1-day meeting. Consensus was achieved when 80% of panelists agreed on a statement using an anonymous electronic response system. The 26 statements provide guidance for convex product characteristics, patient assessment, convexity use, and outcomes.

  5. On the mean and variance of the writhe of random polygons.

    PubMed

    Portillo, J; Diao, Y; Scharein, R; Arsuaga, J; Vazquez, M

We here address two problems concerning the writhe of random polygons. First, we study the behavior of the mean writhe as a function of length. Second, we study the variance of the writhe. Suppose that we are dealing with a set of random polygons with the same length and knot type, which could be the model of some circular DNA with the same topological property. In general, a simple way of detecting chirality of this knot type is to compute the mean writhe of the polygons; if the mean writhe is non-zero then the knot is chiral. How accurate is this method? For example, if for a specific knot type K the mean writhe decreased to zero as the length of the polygons increased, then this method would be limited in the case of long polygons. Furthermore, we conjecture that the sign of the mean writhe is a topological invariant of chiral knots. This sign appears to be the same as that of an "ideal" conformation of the knot. We provide numerical evidence to support these claims, and we propose a new nomenclature of knots based on the sign of their expected writhes. This nomenclature can be of particular interest to applied scientists. The second part of our study focuses on the variance of the writhe, a problem that has not received much attention in the past. In this case, we focused on the equilateral random polygons. We give numerical as well as analytical evidence to show that the variance of the writhe of equilateral random polygons (of length n) behaves as a linear function of the length of the equilateral random polygon.
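
    The writhe being averaged here can be estimated numerically from the Gauss double integral, Wr = (1/4π) ∮∮ (t1 × t2) · (r1 - r2) / |r1 - r2|^3 ds1 ds2. A discretized sketch follows (exact closed-form segment-pair formulas exist for polygons and are what production codes use; this brute-force version is only for illustration):

```python
import numpy as np

def writhe_estimate(vertices, subdiv=10):
    """Numerical estimate of the writhe of a closed polygon in 3D,
    by subdividing each edge and summing the discretized Gauss integral."""
    v = np.asarray(vertices, dtype=float)
    n = len(v)
    pts, tans = [], []
    for i in range(n):
        a, b = v[i], v[(i + 1) % n]
        t = np.linspace(0.0, 1.0, subdiv, endpoint=False)[:, None]
        pts.append(a + t * (b - a))                      # sample points on the edge
        tans.append(np.tile((b - a) / subdiv, (subdiv, 1)))  # tangent * arc element
    r, dr = np.vstack(pts), np.vstack(tans)
    w = 0.0
    for i in range(len(r)):
        d = r[i] - r
        dist3 = np.einsum('ij,ij->i', d, d) ** 1.5
        num = np.einsum('ij,ij->i', np.cross(dr[i], dr), d)
        mask = dist3 > 1e-12                             # skip the i == j term
        w += np.sum(num[mask] / dist3[mask])
    return w / (4.0 * np.pi)

# Sanity check: any planar polygon has writhe exactly 0
square = [[0, 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0]]
print(round(writhe_estimate(square), 6))  # → 0.0
```

Averaging this quantity over an ensemble of sampled polygons of fixed knot type gives the mean writhe whose sign the abstract discusses.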

  6. Isotopic insights into methane production, oxidation, and emissions in Arctic polygon tundra.

    PubMed

    Vaughn, Lydia J S; Conrad, Mark E; Bill, Markus; Torn, Margaret S

    2016-10-01

Arctic wetlands are currently net sources of atmospheric CH4. Due to their complex biogeochemical controls and high spatial and temporal variability, current net CH4 emissions and gross CH4 processes have been difficult to quantify, and their predicted responses to climate change remain uncertain. We investigated CH4 production, oxidation, and surface emissions in Arctic polygon tundra, across a wet-to-dry permafrost degradation gradient from low-centered (intact) to flat- and high-centered (degraded) polygons. From three microtopographic positions (polygon centers, rims, and troughs) along the permafrost degradation gradient, we measured surface CH4 and CO2 fluxes, concentrations and stable isotope compositions of CH4 and DIC at three depths in the soil, and soil moisture and temperature. More degraded sites had lower CH4 emissions, a different primary methanogenic pathway, and greater CH4 oxidation than did intact permafrost sites, to a greater degree than soil moisture or temperature could explain. Surface CH4 flux decreased from 64 nmol m(-2) s(-1) in intact polygons to 7 nmol m(-2) s(-1) in degraded polygons, and stable isotope signatures of CH4 and DIC showed that acetate cleavage dominated CH4 production in low-centered polygons, while CO2 reduction was the primary pathway in degraded polygons. We see evidence that differences in water flow and vegetation between intact and degraded polygons contributed to these observations. In contrast to many previous studies, these findings document a mechanism whereby permafrost degradation can lead to local decreases in tundra CH4 emissions.

  7. On the mean and variance of the writhe of random polygons

    PubMed Central

    Portillo, J.; Diao, Y.; Scharein, R.; Arsuaga, J.; Vazquez, M.

    2013-01-01

We here address two problems concerning the writhe of random polygons. First, we study the behavior of the mean writhe as a function of length. Second, we study the variance of the writhe. Suppose that we are dealing with a set of random polygons with the same length and knot type, which could be the model of some circular DNA with the same topological property. In general, a simple way of detecting chirality of this knot type is to compute the mean writhe of the polygons; if the mean writhe is non-zero then the knot is chiral. How accurate is this method? For example, if for a specific knot type K the mean writhe decreased to zero as the length of the polygons increased, then this method would be limited in the case of long polygons. Furthermore, we conjecture that the sign of the mean writhe is a topological invariant of chiral knots. This sign appears to be the same as that of an “ideal” conformation of the knot. We provide numerical evidence to support these claims, and we propose a new nomenclature of knots based on the sign of their expected writhes. This nomenclature can be of particular interest to applied scientists. The second part of our study focuses on the variance of the writhe, a problem that has not received much attention in the past. In this case, we focused on the equilateral random polygons. We give numerical as well as analytical evidence to show that the variance of the writhe of equilateral random polygons (of length n) behaves as a linear function of the length of the equilateral random polygon. PMID:25685182

  8. An investigation into the mechanism of the polygonal wear of metro train wheels and its effect on the dynamic behaviour of a wheel/rail system

    NASA Astrophysics Data System (ADS)

    Jin, Xuesong; Wu, Lei; Fang, Jianying; Zhong, Shuoqiao; Ling, Liang

    2012-12-01

This paper presents a detailed investigation into the mechanism of the polygonal wear of metro train wheels, based on extensive experiments conducted in the field. The purpose of the experimental investigation is to determine where the resonant frequency that causes the polygonal wear of the wheels originates. The experiments include modal tests of a vehicle, its parts, and the tracks; a dynamic behaviour test of the vehicle in operation; and an observation test of the development of polygonal wear on the wheels. The tracks tested include viaduct and tunnel tracks. The structural modal tests show that the average passing frequency of a polygonal wheel is close to the first bending resonant frequency of the wheelset, which was identified by the wheelset modal test and verified by finite element analysis of the wheelset. Also, the dynamic behaviour test of the vehicle in operation shows that the main frequencies of the vertical acceleration vibration of the axle boxes are dominant and close to the passing frequency of a polygonal wheel, indicating that the first bending resonance of the wheelset is strongly excited during operation. The observation test of polygonal wear development indicates an increase in the rate of polygonal wear of the wheels after re-profiling. This paper also describes the dynamic models of the metro vehicle coupled with the ballasted track and the slab track used to analyse the effect of the polygonal wear of the wheels on the wheel/rail normal forces.
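
    The "passing frequency of a polygonal wheel" referred to above is f = n * v / (π * D) for an order-n polygon on a wheel of diameter D rolling at speed v. A sketch with illustrative metro values (the paper's actual wheel diameter, speed, and polygon order are not given in the abstract):

```python
import math

def polygon_passing_frequency(order, speed_mps, wheel_diameter_m):
    """Excitation frequency produced by an order-n polygonal wheel:
    f = n * v / (pi * D)."""
    return order * speed_mps / (math.pi * wheel_diameter_m)

# e.g. a 9th-order polygon on a 0.84 m wheel at 70 km/h (assumed values)
f = polygon_passing_frequency(9, 70 / 3.6, 0.84)
print(round(f, 1))  # ≈ 66 Hz with these assumed values
```

Polygonal wear develops when this frequency coincides with a structural resonance, here the first bending mode of the wheelset.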

  9. High Latitude Polygons

    NASA Technical Reports Server (NTRS)

    2005-01-01

    26 September 2005 This Mars Global Surveyor (MGS) Mars Orbiter Camera (MOC) image shows polygonal patterned ground on a south high-latitude plain. The outlines of the polygons, like the craters and hills in this region, are somewhat enhanced by the presence of bright frost left over from the previous winter. On Earth, polygons at high latitudes would usually be attributed to the seasonal freezing and thawing cycles of ground ice. The origin of similar polygons on Mars is less certain, but might also be an indicator of ground ice.

Location near: 75.3°S, 113.2°W Image width: 3 km (1.9 mi) Illumination from: upper left Season: Southern Spring

  10. Supramolecule-to-Supramolecule Transformations of Coordination-Driven Self-Assembled Polygons

    PubMed Central

    Zhao, Liang; Northrop, Brian H.; Stang, Peter J.

    2009-01-01

Two types of supramolecular transformations, wherein a self-assembled Pt(II)-pyridyl metal-organic polygon is controllably converted into an alternative polygon, have been achieved through the reaction between cobalt carbonyl and the acetylene moiety of a dipyridyl donor ligand. A [6+6] hexagon is transformed into two [3+3] hexagons and a triangle-square mixture is converted into [2+2] rhomboids. 1H and 31P NMR spectra are used to track the transformation process and evaluate the yield of new self-assembled polygons. Such transformed species are identified by electrospray ionization (ESI) mass spectrometry. This new kind of supramolecule-to-supramolecule transformation provides a viable means for constructing, and then converting, new self-assembled polygons. PMID:18702485

  11. Average size of random polygons with fixed knot topology.

    PubMed

    Matsuda, Hiroshi; Yao, Akihisa; Tsukahara, Hiroshi; Deguchi, Tetsuo; Furuta, Ko; Inami, Takeo

    2003-07-01

We have evaluated by numerical simulation the average size R(K) of random polygons of fixed knot topology K = 3_1, 3_1#4_1, and we have confirmed the scaling law R^2(K) ~ N^(2ν(K)) for the number N of polygonal nodes over a wide range, N = 100-2200. The best fit gives 2ν(K) ≈ 1.11-1.16, with good fitting curves over the whole range of N. The estimate of 2ν(K) is consistent with the exponent of self-avoiding polygons. In a limited range of N (N ≳ 600), however, we have another fit with 2ν(K) ≈ 1.01-1.07, which is close to the exponent of random polygons.
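
    The quoted exponents come from fitting the scaling law R^2(K) ~ N^(2ν(K)) on a log-log scale. A sketch of that fit on synthetic data (the paper's actual R^2(K) values are not reproduced here; the exponent 1.17 is built into the fake data so the fit can be checked):

```python
import numpy as np

# Synthetic (N, R^2) data obeying an exact power law with exponent 2*nu = 1.17
N = np.array([100, 200, 400, 800, 1600, 2200], dtype=float)
R2 = 0.3 * N ** 1.17

# A log-log linear fit recovers the exponent as the slope
slope, intercept = np.polyfit(np.log(N), np.log(R2), 1)
print(round(slope, 2))  # → 1.17 : the exponent used to generate the data
```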

  12. Acid-Base Titration Curves of Soils from a Low-Centered Polygon, Barrow, Alaska, 2013

    DOE Data Explorer

    Jianqiu Zheng; David Graham

    2017-12-05

    This dataset provides pH titration data of soils from a low-centered polygon center. The soil core was collected in 2013 from a low-centered polygon center from the NGEE-Arctic Intensive Study Site 1, Barrow, Alaska.

  13. CONNECTICUT GROUND WATER QUALITY CLASSIFICATIONS

    EPA Science Inventory

    This is a 1:24,000-scale datalayer of Ground Water Quality Classifications in Connecticut. It is a polygon Shapefile that includes polygons for GA, GAA, GAAs, GB, GC and other related ground water quality classes. Each polygon is assigned a ground water quality class, which is s...

  14. Ice Wedge Polygon Bromide Tracer Experiment in Subsurface Flow, Barrow, Alaska, 2015-2016

    DOE Data Explorer

    Nathan Wales

    2018-02-15

    Time series of bromide tracer concentrations at several points within a low-centered polygon and a high-centered polygon. Concentration values were obtained from the analysis of water samples via ion chromatography with an accuracy of 0.01 mg/l.

  15. Partial polygon pruning of hydrographic features in automated generalization

    USGS Publications Warehouse

    Stum, Alexander K.; Buttenfield, Barbara P.; Stanislawski, Larry V.

    2017-01-01

    This paper demonstrates a working method to automatically detect and prune portions of waterbody polygons to support creation of a multi-scale hydrographic database. Water features are known to be sensitive to scale change; and thus multiple representations are required to maintain visual and geographic logic at smaller scales. Partial pruning of polygonal features—such as long and sinuous reservoir arms, stream channels that are too narrow at the target scale, and islands that begin to coalesce—entails concurrent management of the length and width of polygonal features as well as integrating pruned polygons with other generalized point and linear hydrographic features to maintain stream network connectivity. The implementation follows data representation standards developed by the U.S. Geological Survey (USGS) for the National Hydrography Dataset (NHD). Portions of polygonal rivers, streams, and canals are automatically characterized for width, length, and connectivity. This paper describes an algorithm for automatic detection and subsequent processing, and shows results for a sample of NHD subbasins in different landscape conditions in the United States.
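
    Width characterization of long, narrow polygons can be illustrated with a crude area/perimeter proxy. This is not the USGS algorithm (which the paper describes only at a high level), just a sketch of the kind of width test that flags features as too narrow for a target scale:

```python
def effective_width(area, perimeter):
    """Crude width proxy for a long, narrow polygon: w ≈ 2*A/P,
    exact in the limit of an infinitely long rectangle."""
    return 2.0 * area / perimeter

# A 1000 m x 20 m channel polygon: effective width just under 20 m,
# so it would fail, say, a 24 m minimum-width rule at the target scale.
w = effective_width(1000 * 20, 2 * (1000 + 20))
print(round(w, 1))  # → 19.6
```

A pruning pass would convert such a polygon to a centerline (or drop it) while reconnecting it to the surrounding stream network.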

  16. Exposed Ice in the Northern Mid-Latitudes of Mars

    NASA Technical Reports Server (NTRS)

    Allen, Carlton C.

    2007-01-01

    Ice-Rich Layer: Polygonal features with dimensions of approximately 100 meters, bounded by cracks, are commonly observed on the martian northern plains. These features are generally attributed to thermal cracking of ice-rich sediments, in direct analogy to polygons in terrestrial polar regions. We mapped polygons in the northern mid-latitudes (30 to 65 N) using MOC and HiRISE images. Polygons are scattered across the northern plains, with a particular concentration in western Utopia Planitia. This region largely overlaps the Late Amazonian Astapus Colles unit, characterized by polygonal terrain and nested pits consistent with periglacial and thermokarst origins. Bright and Dark Polygonal Cracks: An examination of all MOC images (1997 through 2003) covering the study area demonstrated that, at latitudes of 55 to 65 N, most of the imaged polygons show bright bounding cracks. We interpret these bright cracks as exposed ice. Between 40 and 55 N, most of the imaged polygons show dark bounding cracks. These are interpreted as polygons from which the exposed ice has been removed by sublimation. The long-term stability limit for exposed ice, even in deep cracks, apparently lies near 55 N. Bright and Dark Spots: Many HiRISE and MOC frames showing polygons in the northern plains also show small numbers of bright and dark spots, particularly in western Utopia Planitia. Many of the spots are closely associated with collapse features suggestive of thermokarst. The spots range from tens to approximately 100 meters in diameter. The bright spots are interpreted as exposed ice, due to their prevalence on terrain mapped as ice rich. The dark spots are interpreted as former bright spots, which have darkened as the exposed ice is lost by sublimation. The bright spots may be the martian equivalents of pingos, ice-cored mounds found in periglacial regions on Earth. 
Terrestrial pingos from which the ice core has melted often collapse to form depressions similar to the martian dark spots. Future Observations: The SHARAD radar should be able to confirm the presence and measure the depth of the interpreted ice-rich layer that forms the Astapus Colles unit. If this layer is confirmed it will strengthen the interpretation of bright polygon cracks and bright spots as exposed ice. HiRISE images of the northern plains are showing unprecedented details of the polygonal cracks. Future HiRISE images that include bright spots, compared to MOC images taken years earlier, will illustrate the temporal stability of the spots. The CRISM spectrometer, with multiple spectral bands and a spatial resolution around 20 meters, should allow mineralogical identification of the material exposed in the polygonal bounding cracks and in the bright spots.

  17. Airfoil

    DOEpatents

    Ristau, Neil; Siden, Gunnar Leif

    2015-07-21

    An airfoil includes a leading edge, a trailing edge downstream from the leading edge, a pressure surface between the leading and trailing edges, and a suction surface between the leading and trailing edges and opposite the pressure surface. A first convex section on the suction surface decreases in curvature downstream from the leading edge, and a throat on the suction surface is downstream from the first convex section. A second convex section is on the suction surface downstream from the throat, and a first convex segment of the second convex section increases in curvature.

  18. Experimental investigation into the mechanism of the polygonal wear of electric locomotive wheels

    NASA Astrophysics Data System (ADS)

    Tao, Gongquan; Wang, Linfeng; Wen, Zefeng; Guan, Qinghua; Jin, Xuesong

    2018-06-01

Experiments were conducted at field sites to investigate the mechanism of the polygonal wear of electric locomotive wheels, and the polygonal wear pattern of electric locomotive wheels was obtained. Moreover, two on-track tests were carried out to investigate the vibration characteristics of the electric locomotive's key components. The wheel out-of-roundness measurements show that most electric locomotive wheels exhibit polygonal wear, with a main centre wavelength in the 1/3 octave bands of 200 mm and/or 160 mm. The vibration test results indicate that the dominant frequency of the vertical acceleration measured on the axle box is approximately equal to the passing frequency of a polygonal wheel, and does not vary with locomotive speed during acceleration. Wheelset modal analysis using the finite element method (FEM) indicates that the first bending resonant frequency of the wheelset is quite close to the main vibration frequency of the axle box; the FEM results are verified by experimental modal analysis of the wheelset. Moreover, different test plans were designed to verify whether the braking system and the locomotive's adhesion control significantly influence the wheel polygon. The results indicate that they are not responsible for the initiation of the wheel polygon. The first bending resonance of the wheelset is easily excited during locomotive operation, and it is the root cause of wheel polygons with a centre wavelength of 200 mm in the 1/3 octave bands.
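The relationship underlying these observations is the simple kinematic one between running speed, excitation frequency, and polygon wavelength, lambda = v / f. A sketch with assumed numbers (the resonance frequency and speed below are illustrative, not the paper's measured values):

```python
def polygon_wavelength(speed_kmh, resonance_hz):
    """Wavelength selected by a fixed structural resonance: lambda = v / f.
    Conversely, an existing polygon of wavelength lambda excites the
    vehicle at the passing frequency f = v / lambda."""
    return speed_kmh / 3.6 / resonance_hz

# An assumed first bending resonance near 80 Hz at an assumed running speed
# of 57.6 km/h (16 m/s) selects a wavelength of about 0.2 m (200 mm):
print(polygon_wavelength(57.6, 80.0))  # approximately 0.2
```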

  19. Superiorization with level control

    NASA Astrophysics Data System (ADS)

    Cegielski, Andrzej; Al-Musallam, Fadhel

    2017-04-01

The convex feasibility problem is to find a common point of a finite family of closed convex subsets. In many applications one requires something more, namely finding a common point of closed convex subsets which minimizes a continuous convex function. The latter requirement leads to the superiorization methodology, which sits between methods for the convex feasibility problem and methods for convex constrained minimization. Inspired by the superiorization idea, we introduce a method which sequentially applies a long-step algorithm to a sequence of convex feasibility problems; the method employs quasi-nonexpansive operators as well as subgradient projections with level control and does not require evaluation of the metric projection. We replace a perturbation of the iterations (applied in the superiorization methodology) by a perturbation of the current level in minimizing the objective function. We consider the method in Euclidean space in order to guarantee strong convergence, although the method is well defined in a Hilbert space.
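One ingredient of the method, a subgradient projection step with level control, can be sketched as follows (a scalar toy illustration, not the authors' full long-step algorithm):

```python
def level_step(x, f, subgrad, level):
    """One subgradient projection with level control: when f(x) exceeds the
    current level, step toward the level set {f <= level} along the
    subgradient; otherwise leave x unchanged."""
    fx = f(x)
    if fx <= level:
        return x
    g = subgrad(x)
    return x - (fx - level) / (g * g) * g

# Toy problem: f(x) = x**2 with level 1.0; iterates approach the level set.
f = lambda x: x * x
g = lambda x: 2 * x
x = 2.0
for _ in range(50):
    x = level_step(x, f, g, level=1.0)
# x has moved from 2.0 to approximately 1.0, the boundary of {f <= 1}
```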

20. Hermite-Hadamard type inequality for φ_h-convex stochastic processes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sarıkaya, Mehmet Zeki, E-mail: sarikayamz@gmail.com; Kiriş, Mehmet Eyüp, E-mail: kiris@aku.edu.tr; Çelik, Nuri, E-mail: ncelik@bartin.edu.tr

    2016-04-18

The main aim of the present paper is to introduce φ_h-convex stochastic processes and we investigate main properties of these mappings. Moreover, we prove the Hadamard-type inequalities for φ_h-convex stochastic processes. We also give some new general inequalities for φ_h-convex stochastic processes.
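For reference, the classical Hermite-Hadamard inequality that these results generalize states that for a convex function f on [a, b],

```latex
f\!\left(\frac{a+b}{2}\right) \;\le\; \frac{1}{b-a}\int_a^b f(t)\,dt \;\le\; \frac{f(a)+f(b)}{2}.
```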

  1. A Bayesian observer replicates convexity context effects in figure-ground perception.

    PubMed

    Goldreich, Daniel; Peterson, Mary A

    2012-01-01

    Peterson and Salvagio (2008) demonstrated convexity context effects in figure-ground perception. Subjects shown displays consisting of unfamiliar alternating convex and concave regions identified the convex regions as foreground objects progressively more frequently as the number of regions increased; this occurred only when the concave regions were homogeneously colored. The origins of these effects have been unclear. Here, we present a two-free-parameter Bayesian observer that replicates convexity context effects. The Bayesian observer incorporates two plausible expectations regarding three-dimensional scenes: (1) objects tend to be convex rather than concave, and (2) backgrounds tend (more than foreground objects) to be homogeneously colored. The Bayesian observer estimates the probability that a depicted scene is three-dimensional, and that the convex regions are figures. It responds stochastically by sampling from its posterior distributions. Like human observers, the Bayesian observer shows convexity context effects only for images with homogeneously colored concave regions. With optimal parameter settings, it performs similarly to the average human subject on the four display types tested. We propose that object convexity and background color homogeneity are environmental regularities exploited by human visual perception; vision achieves figure-ground perception by interpreting ambiguous images in light of these and other expected regularities in natural scenes.

  2. Tumor segmentation of multi-echo MR T2-weighted images with morphological operators

    NASA Astrophysics Data System (ADS)

    Torres, W.; Martín-Landrove, M.; Paluszny, M.; Figueroa, G.; Padilla, G.

    2009-02-01

In the present work, an automatic brain tumor segmentation procedure based on mathematical morphology is proposed. The approach considers sequences of eight multi-echo MR T2-weighted images. The relaxation time T2 characterizes the relaxation of water protons in the brain tissue: white matter, gray matter, cerebrospinal fluid (CSF) or pathological tissue. Image data is initially regularized by the application of a log-convex filter in order to adjust its geometrical properties to those of noiseless data, which exhibits monotonically decreasing convex behavior. The regularized data is then analyzed by means of an 8-dimensional morphological eccentricity filter. In the first stage, the filter was used for the spatial homogenization of the tissues in the image, replacing each pixel by the most representative pixel within its structuring element, i.e., the one which exhibits the minimum total distance to all members of the structuring element. On the filtered images, the relaxation time T2 is estimated by means of a least-squares regression algorithm and the histogram of T2 is determined. The T2 histogram was partitioned using the watershed morphological operator; relaxation time classes were established and used for tissue classification and segmentation of the image. The method was validated on 15 sets of MRI data with excellent results.
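The "most representative pixel" replacement can be sketched as follows (an illustrative reimplementation under assumptions; the echo vectors are 8-dimensional in the paper, 2-dimensional in this toy example):

```python
def eccentricity_filter(img, radius=1):
    """Replace each pixel's echo vector by the most representative vector in
    its (2*radius+1)^2 neighborhood: the member with the minimum total
    Euclidean distance to all other members. img is a list of rows of
    C-dimensional tuples (C = 8 for the paper's eight-echo data)."""
    H, W = len(img), len(img[0])

    def dist(a, b):
        return sum((u - v) ** 2 for u, v in zip(a, b)) ** 0.5

    out = [[None] * W for _ in range(H)]
    for i in range(H):
        for j in range(W):
            nbhd = [img[r][c]
                    for r in range(max(0, i - radius), min(H, i + radius + 1))
                    for c in range(max(0, j - radius), min(W, j + radius + 1))]
            # pick the neighborhood member with the smallest total distance
            out[i][j] = min(nbhd, key=lambda p: sum(dist(p, q) for q in nbhd))
    return out

# A single noisy pixel in a homogeneous patch is replaced by the majority value:
img = [[(1.0, 1.0)] * 3 for _ in range(3)]
img[1][1] = (10.0, 10.0)
print(eccentricity_filter(img)[1][1])  # (1.0, 1.0)
```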

  3. Fractional Programming for Communication Systems—Part I: Power Control and Beamforming

    NASA Astrophysics Data System (ADS)

    Shen, Kaiming; Yu, Wei

    2018-05-01

This two-part paper explores the use of fractional programming (FP) in the design and optimization of communication systems. Part I of this paper focuses on FP theory and on solving continuous problems. The main theoretical contribution is a novel quadratic transform technique for tackling the multiple-ratio concave-convex FP problem, in contrast to conventional FP techniques that mostly can only deal with the single-ratio or the max-min-ratio case. Multiple-ratio FP problems are important for the optimization of communication networks, because system-level design often involves multiple signal-to-interference-plus-noise ratio terms. This paper considers the applications of FP to solving continuous problems in communication system design, particularly for power control, beamforming, and energy efficiency maximization. These application cases illustrate that the proposed quadratic transform can greatly facilitate the optimization involving ratios by recasting the original nonconvex problem as a sequence of convex problems. This FP-based problem reformulation gives rise to an efficient iterative optimization algorithm with provable convergence to a stationary point. The paper further demonstrates close connections between the proposed FP approach and other well-known algorithms in the literature, such as the fixed-point iteration and the weighted minimum mean-square-error beamforming. The optimization of discrete problems is discussed in Part II of this paper.
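For a single ratio, the quadratic transform replaces max A(x)/B(x) with max over (x, y) of 2*y*sqrt(A(x)) - y**2*B(x), where the optimal y has the closed form sqrt(A(x))/B(x). A minimal sketch (grid search stands in for the convex solve, and the toy ratio is an assumption):

```python
import math

def maximize_ratio(A, B, x_grid, iters=20):
    """Single-ratio quadratic transform: alternate the closed-form update
    y = sqrt(A(x))/B(x) with maximizing the surrogate
    2*y*sqrt(A(x)) - y**2*B(x) over x (grid search here, standing in for
    the convex solve of each reformulated subproblem)."""
    x = x_grid[0]
    for _ in range(iters):
        y = math.sqrt(A(x)) / B(x)
        x = max(x_grid, key=lambda xx: 2 * y * math.sqrt(A(xx)) - y * y * B(xx))
    return x

# Toy SINR-like ratio x / (x**2 + 1) on a grid; the maximizer is x = 1.
grid = [0.01 * k for k in range(1, 301)]
x_star = maximize_ratio(lambda x: x, lambda x: x * x + 1, grid)
print(round(x_star, 2))  # 1.0
```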

  4. A Periglacial Analog for Landforms in Gale Crater, Mars

    NASA Technical Reports Server (NTRS)

    Oehler, Dorothy Z.

    2013-01-01

    Several features in a high thermal inertia (TI) unit at Gale crater can be interpreted within a periglacial framework. These features include polygonally fractured terrain (cf. ice-wedge polygons), circumferential patterns of polygonal fractures (cf. relict pingos with ice-wedge polygons on their surfaces), irregularly-shaped and clustered depressions (cf. remnants of collapsed pingos and ephemeral lakes), and a general hummocky topography (cf. thermokarst). This interpretation would imply a major history of water and ice in Gale crater, involving permafrost, freeze-thaw cycles, and perhaps ponded surface water.

  5. Morphometric analysis of polygonal cracking patterns in desiccated starch slurries

    NASA Astrophysics Data System (ADS)

    Akiba, Yuri; Magome, Jun; Kobayashi, Hiroshi; Shima, Hiroyuki

    2017-08-01

We investigate the geometry of two-dimensional polygonal cracking that forms on the air-exposed surface of dried starch slurries. Two different kinds of starches, made from potato and corn, exhibited distinct crack evolution, and there were contrasting effects of slurry thickness on the probability distribution of the polygonal cell area. The experimental findings are believed to result from differences in the shape and size of the starch grains, which strongly influence the capillary transport of water and the tensile stress field that drives the polygonal cracking.

  6. Conformal array design on arbitrary polygon surface with transformation optics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Deng, Li, E-mail: dengl@bupt.edu.cn; Hong, Weijun, E-mail: hongwj@bupt.edu.cn; Zhu, Jianfeng

    2016-06-15

A transformation-optics based method to design a conformal antenna array on an arbitrary polygon surface is proposed and demonstrated in this paper. This conformal antenna array can be adjusted to behave equivalently to a uniformly spaced linear array by applying an appropriate transformation medium. A typical example of a general arbitrary-polygon conformal array, not limited to a circular array, is presented, verifying the proposed approach. In summary, the novel arbitrary-polygon surface conformal array can be utilized in array synthesis and beam-forming while maintaining all the benefits of a linear array.

  7. Comic image understanding based on polygon detection

    NASA Astrophysics Data System (ADS)

    Li, Luyuan; Wang, Yongtao; Tang, Zhi; Liu, Dong

    2013-01-01

Comic image understanding aims to automatically decompose scanned comic page images into storyboards and then identify their reading order, which is the key technique for producing digital comic documents that are suitable for reading on mobile devices. In this paper, we propose a novel comic image understanding method based on polygon detection. First, we segment a comic page image into storyboards by finding the polygonal enclosing box of each storyboard. Then, each storyboard can be represented by a polygon, and their reading order is determined by analyzing the relative geometric relationship between each pair of polygons. The proposed method is tested on 2000 comic images from ten printed comic series, and the experimental results demonstrate that it works well on different types of comic images.
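A minimal stand-in for the reading-order step, using bounding boxes and a right-to-left manga convention (both assumptions; the paper derives the order from the pairwise geometry of the detected polygons):

```python
def reading_order(boxes, row_tol=20, rtl=True):
    """Order storyboard bounding boxes (x, y, w, h) top-to-bottom, then
    right-to-left within a row (manga convention; an assumption here).
    Boxes whose top edges differ by less than row_tol pixels are treated
    as one row."""
    boxes = sorted(boxes, key=lambda b: b[1])   # sort by top edge
    rows, current = [], [boxes[0]]
    for b in boxes[1:]:
        if abs(b[1] - current[0][1]) < row_tol:
            current.append(b)                   # same row
        else:
            rows.append(current)                # start a new row
            current = [b]
    rows.append(current)
    order = []
    for row in rows:
        order.extend(sorted(row, key=lambda b: b[0], reverse=rtl))
    return order

# Two panels on the top row (read right first), one panel below:
panels = [(0, 0, 100, 100), (110, 0, 100, 100), (0, 110, 100, 100)]
print(reading_order(panels))
```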

  8. Zernike-like systems in polygons and polygonal facets.

    PubMed

    Ferreira, Chelo; López, José L; Navarro, Rafael; Sinusía, Ester Pérez

    2015-07-20

Zernike polynomials are commonly used to represent the wavefront phase on circular optical apertures, since they form a complete and orthonormal basis on the unit disk. In [Opt. Lett. 32, 74 (2007)] we introduced a new Zernike basis for elliptic and annular optical apertures based on an appropriate diffeomorphism between the unit disk and the ellipse and the annulus. Here, we present a generalization of this Zernike basis for a variety of important optical apertures, paying special attention to polygons and the polygonal facets present in segmented mirror telescopes. In contrast to ad hoc solutions, most of them based on the Gram–Schmidt orthonormalization method, here we consider a piecewise diffeomorphism that transforms the unit disk into the polygon under consideration. We use this mapping to define a Zernike-like orthonormal system over the polygon. We also consider ensembles of polygonal facets that are essential in the design of segmented mirror telescopes. This generalization, based on in-plane warping of the basis functions, provides a unique solution, and what is more important, it guarantees a reasonable level of invariance of the mathematical properties and the physical meaning of the initial basis functions. Both the general form and the explicit expressions for a typical example of telescope optical aperture are provided.

  9. Vigorous convection as the explanation for Pluto's polygonal terrain.

    PubMed

    Trowbridge, A J; Melosh, H J; Steckloff, J K; Freed, A M

    2016-06-02

Pluto's surface is surprisingly young and geologically active. One of its youngest terrains is the near-equatorial region informally named Sputnik Planum, which is a topographic basin filled by nitrogen (N2) ice mixed with minor amounts of CH4 and CO ices. Nearly the entire surface of the region is divided into irregular polygons about 20-30 kilometres in diameter, whose centres rise tens of metres above their sides. The edges of this region exhibit bulk flow features without polygons. Both thermal contraction and convection have been proposed to explain this terrain, but polygons formed from thermal contraction (analogous to ice-wedges or mud-crack networks) of N2 are inconsistent with the observations on Pluto of non-brittle deformation within the N2-ice sheet. Here we report a parameterized convection model to compute the Rayleigh number of the N2 ice and show that it is vigorously convecting, making Rayleigh-Bénard convection the most likely explanation for these polygons. The diameter of Sputnik Planum's polygons and the dimensions of the 'floating mountains' (the hills of water ice along the edges of the polygons) suggest that its N2 ice is about ten kilometres thick. The estimated convection velocity of 1.5 centimetres a year indicates a surface age of only around a million years.
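The vigor test reduces to evaluating the Rayleigh number Ra = rho*g*alpha*dT*d^3 / (kappa*eta) against the critical value of order 10^3. A sketch with illustrative nitrogen-ice numbers (assumed values, not the paper's fitted parameters):

```python
def rayleigh_number(rho, g, alpha, dT, d, kappa, eta):
    """Ra = rho * g * alpha * dT * d**3 / (kappa * eta). Convection is
    expected when Ra exceeds the critical value (order 1e3 for
    Rayleigh-Benard convection)."""
    return rho * g * alpha * dT * d**3 / (kappa * eta)

# Illustrative (assumed) solid-nitrogen values for a ~10 km deep layer:
Ra = rayleigh_number(rho=1000.0,   # kg/m^3, solid N2 density
                     g=0.62,       # m/s^2, Pluto's surface gravity
                     alpha=2e-3,   # 1/K, thermal expansivity (assumed)
                     dT=10.0,      # K, temperature contrast (assumed)
                     d=10e3,       # m, layer thickness from the abstract
                     kappa=1e-6,   # m^2/s, thermal diffusivity (assumed)
                     eta=1e9)      # Pa*s, N2-ice viscosity (assumed)
print(Ra > 1e3)  # True: far supercritical, i.e. vigorous convection
```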

  10. Evidence for an Ancient Periglacial Climate in Gale Crater, Mars

    NASA Astrophysics Data System (ADS)

    Fairén, A. G.; Oehler, D. Z.; Mangold, N.; Hallet, B.; Le Deit, L.; Williams, A.; Sletten, R. S.; Martínez-Frías, J.

    2016-12-01

Decameter-scale polygons occur extensively in the lower Peace Vallis Fan of Gale crater, in the Bedded Fractured (BF) Unit, north of Yellowknife Bay (YKB) that was examined and drilled by the Curiosity rover. To gain insight into the origin of these polygons, we studied image data from the Context (CTX) and High Resolution Imaging Science Experiment (HiRISE) cameras on Mars Reconnaissance Orbiter and compared results to the geology of the fan. The polygons are 4 to 30 m across, square to rectangular, and defined by 0.5 to 4 m-wide linear troughs that probably reflect cm-wide, quasi-vertical fractures below the surface. Polygon networks are typically orthogonal systems, with occasional circularly organized patterns, hundreds of meters across. We evaluated multiple hypotheses for the origin of the polygons and concluded that thermal-contraction fracturing during cooling of ice-rich permafrost is most consistent with the sedimentary nature of the BF Unit, the morphology/geometry of the polygons, their restriction to the coarse-grained Gillespie Lake Member, and geologic context. Most of these polygons are confined to the Hesperian BF Unit and appear to be ancient, though individual polygon fractures may have been reactivated in more recent periods, perhaps due to stresses developed with exhumation or as the planet grew colder and drier. Some of the circular networks resemble ice-wedge polygons in thermokarst depressions and collapsed pingos, as seen in periglacial environments of the Arctic. An analog to collapsed pingos could be supported by modeling work of Andrews-Hanna et al. (2012, LPSC; 2012, 3rd Conf. Early Mars) suggesting that Gale was uniquely positioned for significant influx of ground water early in its history. Also, results from Curiosity demonstrating limited chemical weathering and a past freshwater lake in YKB (Grotzinger et al., 2014, Science 343) would be consistent with an early periglacial setting. 
Our conclusions support an ancient, cold and wet periglacial landscape (Fairén et al., 2014, PSS 93-94) in this part of Gale - one with ice-wedge polygons, thermokarst features, ponded water, and possible ice-covered lakes and pingos in various stages of growth and decay (Oehler et al., 2016, Icarus 277).

  11. Measured Two-Dimensional Ice-Wedge Polygon Thermal and Active Layer Dynamics

    NASA Astrophysics Data System (ADS)

    Cable, W.; Romanovsky, V. E.; Busey, R.

    2016-12-01

Ice-wedge polygons are perhaps the most dominant permafrost-related features in the arctic landscape. The microtopography of these features, which includes rims, troughs, and high and low polygon centers, alters the local hydrology. During winter, wind redistribution of snow leads to an increased snowpack depth in the low areas, while the slightly higher areas often have very thin snow cover, leading to differences across the landscape in vegetation communities and soil moisture between higher and lower areas. To investigate the effect of microtopography-caused variation in surface conditions on the ground thermal regime, we established temperature transects, composed of five vertical array thermistor probes (VATP), across four different development stages of ice-wedge polygons near Barrow, Alaska. Each VATP had 16 thermistors from the surface to a depth of 1.5 m, for a total of 80 temperature measurements per polygon. We found snow cover timing and depth, and active layer soil moisture, to be major controlling factors in the observed thermal regimes. In troughs and in the centers of low-centered polygons, the combined effect of typically saturated soils and increased snow accumulation resulted in the highest mean annual ground temperatures (MAGT) and latest freezeback dates. In contrast, the centers of high-centered polygons, with thinner snow cover and a drier active layer, had the lowest MAGT, the earliest freezeback dates, and the shallowest active layers. Refreezing of the active layer initiated at nearly the same time for all locations and polygons; however, we found large differences in the proportion of downward versus upward freezing and in the length of time required to complete the refreezing process between polygon types and locations. 
Using our four polygon stages as a space-for-time substitution, we conclude that ice-wedge degradation resulting in surface subsidence and trough deepening can lead to overall drying of the active layer and increased skewness of snow distribution, which in turn leads to shallower active layers, earlier freezeback dates, and lower MAGT. We also find that the large variations in active layer dynamics (active layer depth, downward vs. upward freezing, and freezeback date) are important considerations for understanding and scaling biological processes occurring in these landscapes.

  12. Location memory for dots in polygons versus cities in regions: evaluating the category adjustment model.

    PubMed

    Friedman, Alinda; Montello, Daniel R; Burte, Heather

    2012-09-01

We conducted 3 experiments to examine the category adjustment model (Huttenlocher, Hedges, & Duncan, 1991) in circumstances in which the category boundaries were irregular schematized polygons made from outlines of maps. For the first time, accuracy was tested when only perceptual and/or existing long-term memory information about identical locations was cued. Participants from Alberta, Canada and California received 1 of 3 conditions: dots-only, in which a dot appeared within the polygon, and after a 4-s dynamic mask the empty polygon appeared and the participant indicated where the dot had been; dots-and-names, in which participants were told that the first polygon represented Alberta/California and that each dot was in the correct location for the city whose name appeared outside the polygon; and names-only, in which there was no first polygon, and participants clicked on the city locations from extant memory alone. Location recall in the dots-only and dots-and-names conditions did not differ from each other and had small but significant directional errors that pointed away from the centroids of the polygons. In contrast, the names-only condition had large and significant directional errors that pointed toward the centroids. Experiments 2 and 3 eliminated the distribution of stimuli and overall screen position as causal factors. The data suggest that in the "classic" category adjustment paradigm, it is difficult to determine a priori when Bayesian cue combination is applicable, making Bayesian analysis less useful as a theoretical approach to location estimation.
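The category adjustment model at the heart of these experiments blends a noisy fine-grained memory with the category prototype by precision weighting. A textbook-style sketch (not the authors' fitted model):

```python
def cam_estimate(x, centroid, sigma_memory, sigma_category):
    """Category adjustment model estimate: a precision-weighted blend of the
    fine-grained memory trace x and the category prototype (here, the
    polygon centroid). Noisier memory (larger sigma_memory) weights the
    prototype more, biasing reports toward the centroid."""
    w = sigma_memory ** 2 / (sigma_memory ** 2 + sigma_category ** 2)
    return (1 - w) * x + w * centroid

# A noisy memory of a dot at x = 10 is pulled strongly toward a centroid at 0:
print(cam_estimate(10.0, 0.0, sigma_memory=3.0, sigma_category=1.0))  # ~1.0
```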

13. A simple algorithm for computing positively weighted straight skeletons of monotone polygons

    PubMed Central

    Biedl, Therese; Held, Martin; Huber, Stefan; Kaaser, Dominik; Palfrader, Peter

    2015-01-01

We study the characteristics of straight skeletons of monotone polygonal chains and use them to devise an algorithm for computing positively weighted straight skeletons of monotone polygons. Our algorithm runs in O(n log n) time and O(n) space, where n denotes the number of vertices of the polygon. PMID:25648376

  14. WRIS: a resource information system for wildland management

    Treesearch

    Robert M. Russell; David A. Sharpnack; Elliot Amidon

    1975-01-01

    WRIS (Wildland Resource Information System) is a computer system for processing, storing, retrieving, updating, and displaying geographic data. The polygon, representing a land area boundary, forms the building block of WRIS. Polygons form a map. Maps are digitized manually or by automatic scanning. Computer programs can extract and produce polygon maps and can overlay...

  15. Origin of the Polygons and Underground Structures in Western Utopia Planitia on Mars

    NASA Technical Reports Server (NTRS)

    Yoshikawa, K.

    2002-01-01

    The area of lower albedo (Hvm) has a higher density of polygonal patterns. These patterns potentially suggest that 1) the polygonal pattern is caused primarily by ground heaving and collapsing, 2) lower albedo materials had higher tensile strength. Additional information is contained in the original extended abstract.

  16. Analysis of the Misconceptions of 7th Grade Students on Polygons and Specific Quadrilaterals

    ERIC Educational Resources Information Center

    Ozkan, Mustafa; Bal, Ayten Pinar

    2017-01-01

    Purpose: This study will find out student misconceptions about geometrical figures, particularly polygons and quadrilaterals. Thus, it will offer insights into teaching these concepts. The objective of this study, the question of "What are the misconceptions of seventh grade students on polygons and quadrilaterals?" constitutes the…

  17. The role of convexity in perception of symmetry and in visual short-term memory.

    PubMed

    Bertamini, Marco; Helmy, Mai Salah; Hulleman, Johan

    2013-01-01

    Visual perception of shape is affected by coding of local convexities and concavities. For instance, a recent study reported that deviations from symmetry carried by convexities were easier to detect than deviations carried by concavities. We removed some confounds and extended this work from a detection of reflection of a contour (i.e., bilateral symmetry), to a detection of repetition of a contour (i.e., translational symmetry). We tested whether any convexity advantage is specific to bilateral symmetry in a two-interval (Experiment 1) and a single-interval (Experiment 2) detection task. In both, we found a convexity advantage only for repetition. When we removed the need to choose which region of the contour to monitor (Experiment 3) the effect disappeared. In a second series of studies, we again used shapes with multiple convex or concave features. Participants performed a change detection task in which only one of the features could change. We did not find any evidence that convexities are special in visual short-term memory, when the to-be-remembered features only changed shape (Experiment 4), when they changed shape and changed from concave to convex and vice versa (Experiment 5), or when these conditions were mixed (Experiment 6). We did find a small advantage for coding convexity as well as concavity over an isolated (and thus ambiguous) contour. The latter is consistent with the known effect of closure on processing of shape. We conclude that convexity plays a role in many perceptual tasks but that it does not have a basic encoding advantage over concavity.

  18. Measuring Historical Coastal Change using GIS and the Change Polygon Approach

    USGS Publications Warehouse

    Smith, M.J.; Cromley, R.G.

    2012-01-01

This study compares two automated approaches, the transect-from-baseline technique and a new change polygon method, for quantifying historical coastal change over time. The study shows that the transect-from-baseline technique is complicated by the choice of a proper baseline as well as by transects that intersect with each other rather than with the nearest shoreline. The change polygon method captures the full spatial difference between the positions of the two shorelines, and average coastal change is defined as the ratio of the net area divided by the shoreline length. Although the change polygon method is sensitive to the definition and measurement of shoreline length, the results are more invariant to parameter changes than those of the transect-from-baseline method, suggesting that the change polygon technique may be a more robust coastal change method. © 2012 Blackwell Publishing Ltd.
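The headline statistic of the change polygon method, net area between the two shoreline positions divided by shoreline length, can be sketched as follows (a minimal implementation assuming simple, non-self-intersecting shoreline traces; using the earlier shoreline's length is one of several choices the method is sensitive to):

```python
def polygon_area(pts):
    """Signed shoelace area of a closed polygon given as (x, y) vertices."""
    s = 0.0
    for i in range(len(pts)):
        x0, y0 = pts[i]
        x1, y1 = pts[(i + 1) % len(pts)]
        s += x0 * y1 - x1 * y0
    return s / 2.0

def average_coastal_change(shoreline_t1, shoreline_t2):
    """Change-polygon statistic: close the polygon formed by the two
    shoreline traces and divide its net area by the t1 shoreline length."""
    ring = shoreline_t1 + shoreline_t2[::-1]
    area = abs(polygon_area(ring))
    length = sum(((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5
                 for (x0, y0), (x1, y1) in zip(shoreline_t1, shoreline_t1[1:]))
    return area / length

# A shoreline that retreats uniformly by 5 m over a 100 m stretch:
old = [(0.0, 0.0), (100.0, 0.0)]
new = [(0.0, -5.0), (100.0, -5.0)]
print(average_coastal_change(old, new))  # 5.0
```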

  19. SILHOUETTE - HIDDEN LINE COMPUTER CODE WITH GENERALIZED SILHOUETTE SOLUTION

    NASA Technical Reports Server (NTRS)

    Hedgley, D. R.

    1994-01-01

    Flexibility in choosing how to display computer-generated three-dimensional drawings has become increasingly important in recent years. A major consideration is the enhancement of the realism and aesthetics of the presentation. A polygonal representation of objects, even with hidden lines removed, is not always desirable. A more pleasing pictorial representation often can be achieved by removing some of the remaining visible lines, thus creating silhouettes (or outlines) of selected surfaces of the object. Additionally, it should be noted that this silhouette feature allows warped polygons. This means that any polygon can be decomposed into constituent triangles. Considering these triangles as members of the same family will present a polygon with no interior lines, and thus removes the restriction of flat polygons. SILHOUETTE is a program for calligraphic drawings that can render any subset of polygons as a silhouette with respect to itself. The program is flexible enough to be applicable to every class of object. SILHOUETTE offers all possible combinations of silhouette and nonsilhouette specifications for an arbitrary solid. Thus, it is possible to enhance the clarity of any three-dimensional scene presented in two dimensions. Input to the program can be line segments or polygons. Polygons designated with the same number will be drawn as a silhouette of those polygons. SILHOUETTE is written in FORTRAN 77 and requires a graphics package such as DI-3000. The program has been implemented on a DEC VAX series computer running VMS and used 65K of virtual memory without a graphics package linked in. The source code is intended to be machine independent. This program is available on a 5.25 inch 360K MS-DOS format diskette (standard distribution) and is also available on a 9-track 1600 BPI ASCII CARD IMAGE magnetic tape. SILHOUETTE was developed in 1986 and was last updated in 1992.
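The triangle decomposition mentioned above can be sketched with a simple fan triangulation (valid for convex polygons; the warped, non-flat polygons SILHOUETTE accepts would need a more general triangulator):

```python
def fan_triangulate(polygon):
    """Decompose a polygon (vertex list) into constituent triangles fanned
    from the first vertex. Tagging the resulting triangles as members of
    one family, as SILHOUETTE does, renders the polygon with no interior
    lines."""
    return [(polygon[0], polygon[i], polygon[i + 1])
            for i in range(1, len(polygon) - 1)]

# A quadrilateral splits into two triangles sharing the first vertex:
square = [(0, 0), (1, 0), (1, 1), (0, 1)]
print(len(fan_triangulate(square)))  # 2
```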

  20. Polar Polygon Patterns

    NASA Technical Reports Server (NTRS)

    2003-01-01

    MGS MOC Release No. MOC2-338, 22 April 2003

    This Mars Global Surveyor (MGS) Mars Orbiter Camera (MOC) image was taken during southern spring, as the seasonal carbon dioxide frost cap was subliming away. Frost remaining in shallow cracks and depressions reveals a fantastic polygonal pattern. Similar polygons occur in the Earth's arctic and antarctic regions; on Earth such polygons are related to the freeze and thaw of ground ice. The picture covers an area about 3 km (about 1.9 mi) wide near 71.9°S, 11.1°W. Sunlight illuminates the scene from the left.

  1. Integrating Carbon Flux Measurements with Hydrologic and Thermal Responses in a Low Centered Ice-Wedge Polygon near Prudhoe Bay, AK

    NASA Astrophysics Data System (ADS)

    Larson, T.; Young, M.; Caldwell, T. G.; Abolt, C.

    2014-12-01

    Substantial attention is being devoted to soil organic carbon (SOC) dynamics in polar regions, given the potential impacts of CO2 and methane (CH4) release into the atmosphere. In this study, which is part of a broader effort to quantify carbon loss pathways in patterned Arctic permafrost soils, CH4 and CO2 flux measurements were recorded from a site approximately 30 km south of Deadhorse, Alaska, and 1 km west of the Dalton Highway. Samples were collected in late July 2014 using six static flux chambers located within a single low-centered ice-wedge polygon. Three flux chambers were co-located (within a 1 m triangle of each other) near the center of the polygon and three were co-located (along a 1.5 m line) on the ridge adjacent to a trough. Soil in the center of the polygon was 100% water saturated, whereas water saturation measured on the ridge ranged between 25-50%. Depth to the ice table was approximately 50 cm near the center of the polygon and 40 cm at the ridge. Temperature depth probes were installed within the center and ridge of the polygon. Nine gas measurements were collected from each chamber over a 24 h period, stored in helium-purged Exetainer vials, shipped to a laboratory, and analyzed using gas chromatography. Measured cumulative methane fluxes were linear over the 24 h period, demonstrating constant methane production, but considerable spatial variability in flux was observed (0.1 to 4.7 mg hr^-1 m^-2 in the polygon center, and 0.003 to 0.36 mg hr^-1 m^-2 on the polygon ridge). Shallow soil temperatures varied between 1.3 and 9.8°C in the center and 0.6 to 7.5°C in the rim of the polygon. Air temperatures varied between 1.3 and 4.6°C. CO2 fluxes were greater than methane fluxes and more consistent at each co-location, ranging from 21.7 to 36.6 mg hr^-1 m^-2 near the polygon centers and 3.5 to 29.1 mg hr^-1 m^-2 on the drier polygon ridge. Results are consistent with previous observations that methanogenesis is favored in a water-saturated active layer.
    The independence of CH4 and CO2 fluxes suggests that different mechanisms may affect their formation and transport. Ongoing work on DOC and acetate concentrations may further elucidate the sources of the CH4 and CO2 fluxes. Results will be used to benchmark vertical SOC transport and active layer dynamics models, and then integrated into a Lidar-based geomorphic model for ice-wedge polygon terrain.

  2. A method of minimum volume simplex analysis constrained unmixing for hyperspectral image

    NASA Astrophysics Data System (ADS)

    Zou, Jinlin; Lan, Jinhui; Zeng, Yiliang; Wu, Hongtao

    2017-07-01

    The signal recorded by a low-resolution hyperspectral remote sensor for a given pixel, even setting aside the effects of complex terrain, is a mixture of substances. To improve the accuracy of classification and sub-pixel object detection, hyperspectral unmixing (HU) has become a frontier topic in remote sensing. Geometry-based unmixing algorithms have become popular because hyperspectral images possess abundant spectral information and the mixing model is easy to understand. However, most such algorithms rely on the pure-pixel assumption, and since the nonlinear mixing model is complex, it is hard to obtain the optimal endmembers, especially for highly mixed spectral data. To provide a simple but accurate method, we propose a minimum volume simplex analysis constrained (MVSAC) unmixing algorithm. The proposed approach combines the algebraic constraints inherent to the convex minimum-volume formulation with a soft abundance constraint. By considering the abundance fractions, we can obtain the pure endmember set and the corresponding abundance fractions, and the final unmixing result is closer to reality and more accurate. We illustrate the performance of the proposed algorithm by unmixing simulated and real hyperspectral data; the results indicate that the proposed method obtains the distinct signatures correctly without redundant endmembers and yields much better performance than pure-pixel-based algorithms.
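    As a minimal illustration of the linear mixing model underlying such geometry-based unmixing methods (not the MVSAC algorithm itself), the abundance of one of two known endmembers in a pixel can be recovered in closed form under the sum-to-one constraint. The function name and the two-endmember restriction are illustrative assumptions.

```python
def unmix_two_endmembers(x, e1, e2):
    """Estimate the abundance a of endmember e1 in pixel spectrum x under
    the linear mixing model x ~ a*e1 + (1 - a)*e2, via least squares with
    the sum-to-one constraint, then clipped to [0, 1] for nonnegativity.
    A didactic two-endmember sketch, not the paper's MVSAC method."""
    d = [p - q for p, q in zip(e1, e2)]                      # e1 - e2
    num = sum((xi - qi) * di for xi, qi, di in zip(x, e2, d))
    den = sum(di * di for di in d)
    a = num / den
    return min(1.0, max(0.0, a))

e1, e2 = [1.0, 0.0, 0.5], [0.0, 1.0, 0.5]
x = [0.7, 0.3, 0.5]                  # a 70/30 mixture of e1 and e2
a = unmix_two_endmembers(x, e1, e2)  # ~0.7
```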

  3. Generalized Bregman distances and convergence rates for non-convex regularization methods

    NASA Astrophysics Data System (ADS)

    Grasmair, Markus

    2010-11-01

    We generalize the notion of Bregman distance using concepts from abstract convexity in order to derive convergence rates for Tikhonov regularization with non-convex regularization terms. In particular, we study the non-convex regularization of linear operator equations on Hilbert spaces, showing that the conditions required for the application of the convergence rates results are strongly related to the standard range conditions from the convex case. Moreover, we consider the setting of sparse regularization, where we show that a rate of order δ^{1/p} holds if the regularization term has a slightly faster growth at zero than |t|^p.
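    For reference, the classical Bregman distance that the paper generalizes is, for a convex regularization term R and a subgradient ξ ∈ ∂R(y),

```latex
D_\xi(x, y) = R(x) - R(y) - \langle \xi,\, x - y \rangle, \qquad \xi \in \partial R(y).
```

    For non-convex R the subdifferential ∂R(y) may be empty, which is precisely what motivates replacing it with notions from abstract convexity.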

  4. Kinematics of polygonal fault systems: observations from the northern North Sea

    NASA Astrophysics Data System (ADS)

    Wrona, Thilo; Magee, Craig; Jackson, Christopher A.-L.; Huuse, Mads; Taylor, Kevin G.

    2017-12-01

    Layer-bound, low-displacement normal faults, arranged into a broadly polygonal pattern, are common in many sedimentary basins. Despite having constrained their gross geometry, we have a relatively poor understanding of the processes controlling the nucleation and growth (i.e. the kinematics) of polygonal fault systems. In this study we use high-resolution 3-D seismic reflection and borehole data from the northern North Sea to undertake a detailed kinematic analysis of faults forming part of a seismically well-imaged polygonal fault system hosted within the up to 1000 m thick, Early Palaeocene-to-Middle Miocene mudstones of the Hordaland Group. Growth strata and displacement-depth profiles indicate faulting commenced during the Eocene to early Oligocene, with reactivation possibly occurring in the late Oligocene to middle Miocene. Mapping the position of displacement maxima on 137 polygonal faults suggests that the majority (64%) nucleated in the lower 500 m of the Hordaland Group. The uniform distribution of polygonal fault strikes in the area indicates that nucleation and growth were not driven by gravity or far-field tectonic extension as has previously been suggested. Instead, fault growth was likely facilitated by low coefficients of residual friction on existing slip surfaces, and probably involved significant layer-parallel contraction (strains of 0.01-0.19) of the host strata. To summarize, our kinematic analysis provides new insights into the spatial and temporal evolution of polygonal fault systems.

  5. The mere exposure effect for visual image.

    PubMed

    Inoue, Kazuya; Yagi, Yoshihiko; Sato, Nobuya

    2018-02-01

    The mere exposure effect refers to a phenomenon in which repeated stimuli are evaluated more positively than novel stimuli. We investigated whether this effect occurs for internally generated visual representations (i.e., visual images). In an exposure phase, a 5 × 5 dot array was presented, and a pair of dots corresponding to the neighboring vertices of an invisible polygon was sequentially flashed (in red), tracing an invisible polygon. In Experiments 1, 2, and 4, participants visualized and memorized the shapes of invisible polygons based on different sequences of flashed dots, whereas in Experiment 3, participants only memorized the positions of these dots. In a subsequent rating phase, participants visualized the shape of the invisible polygon from the allocation of numerical characters to its vertices, and then rated their preference for the invisible polygons (Experiments 1, 2, and 3). In contrast, in Experiment 4, participants rated their preference for visible polygons. Results showed that the mere exposure effect appeared only when participants visualized the shape of invisible polygons in both the exposure and rating phases (Experiments 1 and 2), suggesting that the mere exposure effect occurred for internalized visual images. This implies that the sensory inputs from repeated stimuli play a minor role in the mere exposure effect. The absence of the mere exposure effect in Experiment 4 suggests that the consistency of processing between exposure and rating phases plays an important role in the mere exposure effect.

  6. Vigorous convection as the explanation for Pluto’s polygonal terrain

    NASA Astrophysics Data System (ADS)

    Trowbridge, A. J.; Melosh, H. J.; Steckloff, J. K.; Freed, A. M.

    2016-06-01

    Pluto’s surface is surprisingly young and geologically active. One of its youngest terrains is the near-equatorial region informally named Sputnik Planum, which is a topographic basin filled by nitrogen (N2) ice mixed with minor amounts of CH4 and CO ices. Nearly the entire surface of the region is divided into irregular polygons about 20-30 kilometres in diameter, whose centres rise tens of metres above their sides. The edges of this region exhibit bulk flow features without polygons. Both thermal contraction and convection have been proposed to explain this terrain, but polygons formed from thermal contraction (analogous to ice-wedges or mud-crack networks) of N2 are inconsistent with the observations on Pluto of non-brittle deformation within the N2-ice sheet. Here we report a parameterized convection model to compute the Rayleigh number of the N2 ice and show that it is vigorously convecting, making Rayleigh-Bénard convection the most likely explanation for these polygons. The diameter of Sputnik Planum’s polygons and the dimensions of the ‘floating mountains’ (the hills of water ice along the edges of the polygons) suggest that its N2 ice is about ten kilometres thick. The estimated convection velocity of 1.5 centimetres a year indicates a surface age of only around a million years.
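    The Rayleigh number at the heart of such a parameterized convection model is a single formula, Ra = ρ g α ΔT D³ / (κ η). The sketch below uses order-of-magnitude placeholder values (not the paper's measured N2-ice properties) merely to show that plausible inputs put Ra far above the critical value of order 10³.

```python
def rayleigh_number(rho, g, alpha, delta_t, depth, kappa, eta):
    """Classical Rayleigh number Ra = rho*g*alpha*dT*D^3 / (kappa*eta)
    for a layer of density rho, thermal expansivity alpha, thickness
    depth, thermal diffusivity kappa, and dynamic viscosity eta."""
    return rho * g * alpha * delta_t * depth**3 / (kappa * eta)

# Illustrative order-of-magnitude inputs (NOT the paper's values):
# rho ~ 1000 kg/m^3, Pluto gravity 0.62 m/s^2, alpha ~ 2e-3 1/K,
# dT ~ 20 K, layer ~ 10 km, kappa ~ 1e-6 m^2/s, eta ~ 1e9 Pa s.
ra = rayleigh_number(rho=1.0e3, g=0.62, alpha=2.0e-3, delta_t=20.0,
                     depth=1.0e4, kappa=1.0e-6, eta=1.0e9)
# Ra ~ 2.5e10, far above the critical value (~10^3): vigorous convection.
```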

  7. Tile Patterns with LOGO--Part III: Tile Patterns from Mult Tiles Using Logo.

    ERIC Educational Resources Information Center

    Clason, Robert G.

    1991-01-01

    A mult tile is a set of polygons each of which can be dissected into smaller polygons similar to the original set of polygons. Using a recursive LOGO method that requires solutions to various geometry and trigonometry problems, dissections of mult tiles are carried out repeatedly to produce tile patterns. (MDH)
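    The recursive dissection idea can be sketched outside LOGO with the simplest possible self-similar tile, a square that dissects into four half-scale squares. The article's dissections involve general polygon sets, so this is only a structural analogy.

```python
def dissect(tile, depth):
    """Recursively dissect a square tile (x, y, size) into four similar
    half-scale sub-squares, returning the leaf tiles. A stand-in for the
    article's LOGO-based mult-tile dissection, which uses general polygons."""
    if depth == 0:
        return [tile]
    x, y, s = tile
    h = s / 2
    children = [(x, y, h), (x + h, y, h), (x, y + h, h), (x + h, y + h, h)]
    return [leaf for c in children for leaf in dissect(c, depth - 1)]

leaves = dissect((0.0, 0.0, 1.0), depth=3)   # 4**3 = 64 leaf tiles
```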

  8. First Evaluation of the New Thin Convex Probe Endobronchial Ultrasound Scope: A Human Ex Vivo Lung Study.

    PubMed

    Patel, Priya; Wada, Hironobu; Hu, Hsin-Pei; Hirohashi, Kentaro; Kato, Tatsuya; Ujiie, Hideki; Ahn, Jin Young; Lee, Daiyoon; Geddie, William; Yasufuku, Kazuhiro

    2017-04-01

    Endobronchial ultrasonography (EBUS)-guided transbronchial needle aspiration allows for sampling of mediastinal lymph nodes. The external diameter, rigidity, and angulation of the convex probe EBUS renders limited accessibility. This study compares the accessibility and transbronchial needle aspiration capability of the prototype thin convex probe EBUS against the convex probe EBUS in human ex vivo lungs rejected for transplant. The prototype thin convex probe EBUS (BF-Y0055; Olympus, Tokyo, Japan) with a thinner tip (5.9 mm), greater upward angle (170 degrees), and decreased forward oblique direction of view (20 degrees) was compared with the current convex probe EBUS (6.9-mm tip, 120 degrees, and 35 degrees, respectively). Accessibility and transbronchial needle aspiration capability was assessed in ex vivo human lungs declined for lung transplant. The distance of maximum reach and sustainable endoscopic limit were measured. Transbronchial needle aspiration capability was assessed using the prototype 25G aspiration needle in segmental lymph nodes. In all evaluated lungs (n = 5), the thin convex probe EBUS demonstrated greater reach and a higher success rate, averaging 22.1 mm greater maximum reach and 10.3 mm further endoscopic visibility range than convex probe EBUS, and could assess selectively almost all segmental bronchi (98% right, 91% left), demonstrating nearly twice the accessibility as the convex probe EBUS (48% right, 47% left). The prototype successfully enabled cytologic assessment of subsegmental lymph nodes with adequate quality using the dedicated 25G aspiration needle. Thin convex probe EBUS has greater accessibility to peripheral airways in human lungs and is capable of sampling segmental lymph nodes using the aspiration needle. That will allow for more precise assessment of N1 nodes and, possibly, intrapulmonary lesions normally inaccessible to the conventional convex probe EBUS.

  9. Near-Surface Profiles of Water Stable Isotope Components and Indicated Transitional History of Ice-Wedge Polygons Near Barrow

    NASA Astrophysics Data System (ADS)

    Iwahana, G.; Wilson, C.; Newman, B. D.; Heikoop, J. M.; Busey, R.

    2017-12-01

    Wetlands associated with ice-wedge polygons are commonly distributed across the Arctic Coastal Plain of northern Alaska, a region underlain by continuous permafrost. The micro-topography of ice-wedge polygons controls local hydrology, and this micro-topography can be altered by factors such as surface vegetation, wetness, freeze-thaw cycles, and permafrost degradation/aggradation under climate change. Understanding the status of these wetlands in the near future is important because it determines the biogeochemical cycle, which drives the release of greenhouse gases from the ground. However, the transitional regime of ice-wedge polygons under the changing climate is not fully understood. In this study, we analyzed the geochemistry of water extracted from frozen soil cores sampled down to about 1 m depth in March 2014 at NGEE-Arctic sites in the Barrow Environmental Observatory. The cores were sampled from the troughs, rims, and centers of five different low-centered or flat-centered polygons. The frozen cores were divided into 5-10 cm sections for each location and thawed in sealed plastic bags, and the extracted water was stored in vials. Comparison of the geochemical profiles indicated connection of soil water in the active layer at different locations within a polygon, while it revealed that distinctly different water has been stored in the permafrost layer at the troughs, rims, and centers of some polygons. Profiles of volumetric water content (VWC) showed clear signals of freeze-up desiccation in the middle of saturated active layers as low-VWC anomalies at most sampling points. Water in the active layer and near-surface permafrost was classified into four categories: ice wedge, fresh meteoric, transitional, and highly fractionated water. The overall results suggest prolonged separation of water in the active layer at the centers of low-centered polygons, without lateral connection of water paths in the past.

  10. Structural characterization of the packings of granular regular polygons.

    PubMed

    Wang, Chuncheng; Dong, Kejun; Yu, Aibing

    2015-12-01

    By using a recently developed method for discrete modeling of nonspherical particles, we simulate the random packings of granular regular polygons with three to 11 edges under gravity. The effects of shape and friction on the packing structures are investigated by various structural parameters, including packing fraction, the radial distribution function, coordination number, Voronoi tessellation, and bond-orientational order. We find that packing fraction is generally higher for geometrically nonfrustrated regular polygons, and can be increased by the increase of edge number and decrease of friction. The changes of packing fraction are linked with those of the microstructures, such as the variations of the translational and orientational orders and local configurations. In particular, the free areas of Voronoi tessellations (which are related to local packing fractions) can be described by log-normal distributions for all polygons. The quantitative analyses establish a clearer picture for the packings of regular polygons.

  11. Polygons near Lyot Crater

    NASA Technical Reports Server (NTRS)

    2003-01-01

    MGS MOC Release No. MOC2-564, 4 December 2003

    This Mars Global Surveyor (MGS) Mars Orbiter Camera (MOC) image shows patterned ground, arranged in the form of polygons, on the undulating plains associated with ejecta from the Lyot impact crater on the martian northern plains. This picture was acquired in October 2003 and shows that the polygon margins are ridges with large boulders--shown here as dark dots--on them. On Earth, polygon patterns like this are created in arctic and antarctic regions where there is ice in the ground. The seasonal and longer-term cycles of freezing and thawing of the ice-rich ground cause these features to form over time. Whether the same is true for Mars is unknown. The polygons are located near 54.6°N, 326.6°W. The image covers an area 3 km (1.9 mi) wide and is illuminated from the lower left.

  12. FAST TRACK COMMUNICATION: The unusual asymptotics of three-sided prudent polygons

    NASA Astrophysics Data System (ADS)

    Beaton, Nicholas R.; Flajolet, Philippe; Guttmann, Anthony J.

    2010-08-01

    We have studied the area-generating function of prudent polygons on the square lattice. Exact solutions are obtained for the generating function of two-sided and three-sided prudent polygons, and a functional equation is found for four-sided prudent polygons. This is used to generate series coefficients in polynomial time, and these are analysed to determine the asymptotics numerically. A careful asymptotic analysis of the three-sided polygons produces a most surprising result. A transcendental critical exponent is found, and the leading amplitude is not quite a constant, but is a constant plus a small oscillatory component with an amplitude approximately 10^-8 times that of the leading amplitude. This effect cannot be seen by any standard numerical analysis, but it may be present in other models. If so, it changes our whole view of the asymptotic behaviour of lattice models.

  13. Nonexpansiveness of a linearized augmented Lagrangian operator for hierarchical convex optimization

    NASA Astrophysics Data System (ADS)

    Yamagishi, Masao; Yamada, Isao

    2017-04-01

    Hierarchical convex optimization concerns two-stage optimization problems: the first stage problem is a convex optimization; the second stage problem is the minimization of a convex function over the solution set of the first stage problem. For the hierarchical convex optimization, the hybrid steepest descent method (HSDM) can be applied, where the solution set of the first stage problem must be expressed as the fixed point set of a certain nonexpansive operator. In this paper, we propose a nonexpansive operator that yields a computationally efficient update when it is plugged into the HSDM. The proposed operator is inspired by the update of the linearized augmented Lagrangian method. It is applicable to characterize the solution set of recent sophisticated convex optimization problems found in the context of inverse problems, where the sum of multiple proximable convex functions involving linear operators must be minimized to incorporate preferable properties into the minimizers. For such a problem formulation, there has not yet been reported any nonexpansive operator that yields an update free from the inversions of linear operators in cases where it is utilized in the HSDM. Unlike previously known nonexpansive operators, the proposed operator yields an inversion-free update in such cases. As an application of the proposed operator plugged into the HSDM, we also present, in the context of the so-called superiorization, an algorithmic solution to a convex optimization problem over the generalized convex feasible set where the intersection of the hard constraints is not necessarily simple.
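    The hybrid steepest descent method (HSDM) mentioned here can be sketched on a scalar toy problem: the first-stage solution set is the fixed-point set of a nonexpansive operator (here a projection onto an interval), and a diminishing-step gradient correction steers the iterates toward the second-stage minimizer. This is a generic HSDM illustration, not the paper's linearized augmented Lagrangian operator.

```python
def hsdm(project, grad, x0, steps=2000):
    """Hybrid steepest descent: x_{k+1} = T(x_k) - lam_k * grad(T(x_k)),
    where T is a nonexpansive operator whose fixed points solve the
    first-stage problem and lam_k = 1/(k+2) is a diminishing step.
    A scalar toy, not the paper's proposed operator."""
    x = x0
    for k in range(steps):
        t = project(x)
        x = t - (1.0 / (k + 2)) * grad(t)
    return project(x)

# First stage: x in [1, 3]. Second stage: minimize x**2 over that set.
# The hierarchical solution is x* = 1.
sol = hsdm(project=lambda x: min(3.0, max(1.0, x)),
           grad=lambda x: 2.0 * x, x0=5.0)
```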

  14. Radius of convexity of a certain class of close-to-convex functions

    NASA Astrophysics Data System (ADS)

    Yahya, Abdullah; Soh, Shaharuddin Cik

    2017-11-01

    In the present paper, we consider and investigate a certain class of close-to-convex functions defined in the unit disk U = {z : |z| < 1} by Re { e^{iα} z f'(z) / ( f(z) - f(-z) ) } > δ, where |α| < π, cos(α) > δ and 0 ≤ δ < 1. Furthermore, we obtain a preliminary bound for f'(z) and determine a result for the radius of convexity.

  15. Convex Graph Invariants

    DTIC Science & Technology

    2010-12-02

    Chandrasekaran, Venkat; Parrilo, Pablo A.; Willsky, Alan S. Laboratory for Information and Decision Systems, Department of… In this paper we study convex graph invariants, which are graph invariants that are convex functions of the adjacency matrix of a graph. Some examples…

  16. Allometric relationships between traveltime channel networks, convex hulls, and convexity measures

    NASA Astrophysics Data System (ADS)

    Tay, Lea Tien; Sagar, B. S. Daya; Chuah, Hean Teik

    2006-06-01

    The channel network (S) is a nonconvex set, while its basin [C(S)] is convex. We remove open-end points of the channel connectivity network iteratively to generate a traveltime sequence of networks (Sn). The convex hulls of these traveltime networks provide an interesting topological quantity, which has not been noted thus far. We compute lengths of shrinking traveltime networks L(Sn) and areas of corresponding convex hulls C(Sn), the ratios of which provide convexity measures CM(Sn) of traveltime networks. A statistically significant scaling relationship is found for a model network in the form L(Sn) ~ A[C(Sn)]^0.57. From the plots of the lengths of these traveltime networks and the areas of their corresponding convex hulls as functions of convexity measures, new power law relations are derived. Such relations for a model network are CM(Sn) ~ ? and CM(Sn) ~ ?. In addition to the model study, these relations for networks derived from seven subbasins of the Cameron Highlands region of Peninsular Malaysia are provided. Further studies are needed on a large number of channel networks of distinct sizes and topologies to understand the relationships of these new exponents with other scaling exponents that define the scaling structure of river networks.
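    The basic quantities in the abstract, the convex hull of a point set and the area it encloses, can be computed with Andrew's monotone-chain algorithm and the shoelace formula; dividing a network's length by its hull area would then give a convexity measure in the spirit of CM(Sn). The point set below is illustrative.

```python
def convex_hull(points):
    """Andrew's monotone-chain convex hull, returned counter-clockwise."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    def half(seq):
        h = []
        for p in seq:
            # Pop while the last turn is clockwise or collinear.
            while len(h) >= 2 and ((h[-1][0] - h[-2][0]) * (p[1] - h[-2][1]) -
                                   (h[-1][1] - h[-2][1]) * (p[0] - h[-2][0])) <= 0:
                h.pop()
            h.append(p)
        return h[:-1]
    return half(pts) + half(pts[::-1])

def polygon_area(poly):
    """Shoelace formula for the area of a simple polygon."""
    return abs(sum(poly[i][0] * poly[(i + 1) % len(poly)][1] -
                   poly[(i + 1) % len(poly)][0] * poly[i][1]
                   for i in range(len(poly)))) / 2.0

pts = [(0, 0), (2, 0), (2, 2), (0, 2), (1, 1)]   # interior point is dropped
hull = convex_hull(pts)
area = polygon_area(hull)                        # 4.0 for the 2x2 square
```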

  17. Time-frequency filtering and synthesis from convex projections

    NASA Astrophysics Data System (ADS)

    White, Langford B.

    1990-11-01

    This paper describes the application of the theory of projections onto convex sets to time-frequency filtering and synthesis problems. We show that the class of Wigner-Ville Distributions (WVD) of L2 signals form the boundary of a closed convex subset of L2(R2). This result is obtained by considering the convex set of states on the Heisenberg group of which the ambiguity functions form the extreme points. The form of the projection onto the set of WVDs is deduced. Various linear and non-linear filtering operations are incorporated by formulation as convex projections. An example algorithm for simultaneous time-frequency filtering and synthesis is suggested.
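    The method of projections onto convex sets (POCS) that the paper applies can be sketched with two simple convex sets in the plane, a disk and a half-plane, standing in for the far more intricate set bounded by Wigner-Ville distributions; alternating the two projections converges to a point of the intersection. The sets and starting point are illustrative assumptions.

```python
import math

def project_disk(p, r=2.0):
    """Project onto the disk of radius r centered at the origin."""
    n = math.hypot(p[0], p[1])
    return p if n <= r else (p[0] * r / n, p[1] * r / n)

def project_halfplane(p, c=1.0):
    """Project onto the convex half-plane {(x, y) : x >= c}."""
    return (max(p[0], c), p[1])

# Alternating projections (POCS) converge to a point in the intersection
# of the two convex sets; here that limit is (1, sqrt(3)).
p = (0.0, 5.0)
for _ in range(200):
    p = project_disk(project_halfplane(p))
```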

  18. Abandoned Uranium Mines (AUM) Site Screening Map Service, 2016, US EPA Region 9

    EPA Pesticide Factsheets

    As described in detail in the Five-Year Report, US EPA completed on-the-ground screening of 521 abandoned uranium mine areas. US EPA and the Navajo EPA are using the Comprehensive Database and Atlas to determine which mines should be cleaned up first. US EPA continues to research and identify Potentially Responsible Parties (PRPs) under Superfund to contribute to the costs of cleanup efforts. This US EPA Region 9 web service contains the following map layers: Abandoned Uranium Mines, Priority Mines, Tronox Mines, Navajo Environmental Response Trust Mines, Mines with Enforcement Actions, Superfund AUM Regions, Navajo Nation Administrative Boundaries and Chapter Houses. Mine points have a maximum scale of 1:220,000, while Mine polygons have a minimum scale of 1:220,000. Chapter houses have a minimum scale of 1:200,000. BLM Land Status has a minimum scale of 1:150,000. Full FGDC metadata records for each layer can be found by clicking the layer name at the web service endpoint and viewing the layer description. Data used to create this web service are available for download at https://edg.epa.gov/metadata/catalog/data/data.page. Security Classification: Public. Access Constraints: None. Use Constraints: None. Please check sources, scale, accuracy, currentness and other available information. Please confirm that you are using the most recent copy of both data and metadata. Acknowledgement of the EPA would be appreciated.

  19. Finding the Maximal Area of Bounded Polygons in a Circle

    ERIC Educational Resources Information Center

    Rokach, Arie

    2005-01-01

    The article deals with the area of polygons that are inscribed in a given circle. Naturally, the following question arises: among all n-polygons inscribed in a given circle, which one has the biggest area? Intuitively, the answer may be guessed. The problem is suitable for secondary students, without any use of calculus, but only using very…

  20. ON THE THEORY AND PROCEDURE FOR CONSTRUCTING A MINIMAL-LENGTH, AREA-CONSERVING FREQUENCY POLYGON FROM GROUPED DATA.

    ERIC Educational Resources Information Center

    CASE, C. MARSTON

    This paper is concerned with graphic presentation and analysis of grouped observations. It presents a method and supporting theory for the construction of an area-conserving, minimal-length frequency polygon corresponding to a given histogram. Traditionally, the concept of a frequency polygon corresponding to a given histogram has referred to that…
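    For context, even the traditional equal-width frequency polygon, with bin-midpoint heights anchored to zero half a bin beyond each end, conserves the histogram's total area; CASE's minimal-length construction is more elaborate and is not reproduced here. The example data are illustrative.

```python
def midpoint_frequency_polygon(edges, counts):
    """Vertices of the classical frequency polygon for an equal-width
    histogram: bin-midpoint heights, anchored to zero half a bin beyond
    each end. (The paper's minimal-length construction is different.)"""
    w = edges[1] - edges[0]
    xs = ([edges[0] - w / 2] +
          [(a + b) / 2 for a, b in zip(edges, edges[1:])] +
          [edges[-1] + w / 2])
    ys = [0.0] + list(counts) + [0.0]
    return list(zip(xs, ys))

def area_under(poly):
    """Trapezoidal area under a piecewise-linear curve."""
    return sum((x2 - x1) * (y1 + y2) / 2
               for (x1, y1), (x2, y2) in zip(poly, poly[1:]))

edges, counts = [0, 1, 2, 3], [2, 4, 2]
poly = midpoint_frequency_polygon(edges, counts)
# For equal-width bins the polygon's area equals the histogram's area (8).
```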

  1. Polygons on Crater Floor

    NASA Technical Reports Server (NTRS)

    2003-01-01

    MGS MOC Release No. MOC2-357, 11 May 2003

    This Mars Global Surveyor (MGS) Mars Orbiter Camera (MOC) picture shows a pattern of polygons on the floor of a northern plains impact crater. These landforms are common on crater floors at high latitudes on Mars. Similar polygons occur in the arctic and antarctic regions of Earth, where they indicate the presence and freeze-thaw cycling of ground ice. Whether the polygons on Mars also indicate water ice in the ground is uncertain. The image is located in a crater at 64.8°N, 292.7°W. Sunlight illuminates the scene from the lower left.

  2. A Polygon Model for Wireless Sensor Network Deployment with Directional Sensing Areas

    PubMed Central

    Wu, Chun-Hsien; Chung, Yeh-Ching

    2009-01-01

    The modeling of the sensing area of a sensor node is essential for the deployment algorithm of wireless sensor networks (WSNs). In this paper, a polygon model is proposed for sensor nodes with directional sensing areas. In addition, a WSN deployment algorithm is presented with topology control and scoring mechanisms to maintain network connectivity and improve the sensing coverage rate. To evaluate the proposed polygon model and WSN deployment algorithm, a simulation is conducted. The simulation results show that the proposed polygon model outperforms the existing disk model and circular sector model in terms of the maximum sensing coverage rate. PMID:22303159
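    A directional sensing area can be modeled as a polygon by sampling the sector arc, after which coverage of a target point reduces to a standard ray-casting point-in-polygon test. The sensor radius, angles, and arc resolution below are illustrative assumptions, not the paper's parameters.

```python
import math

def sector_polygon(cx, cy, r, start, end, n=8):
    """Approximate a directional sensing sector by a polygon:
    the apex plus n+1 sample points along the arc (angles in radians)."""
    pts = [(cx, cy)]
    for i in range(n + 1):
        a = start + (end - start) * i / n
        pts.append((cx + r * math.cos(a), cy + r * math.sin(a)))
    return pts

def point_in_polygon(p, poly):
    """Ray-casting point-in-polygon test (odd number of edge crossings)."""
    x, y = p
    inside = False
    for (x1, y1), (x2, y2) in zip(poly, poly[1:] + poly[:1]):
        if (y1 > y) != (y2 > y) and x < x1 + (y - y1) * (x2 - x1) / (y2 - y1):
            inside = not inside
    return inside

# Hypothetical sensor at the origin covering the first quadrant to range 10.
sensor = sector_polygon(0.0, 0.0, 10.0, 0.0, math.pi / 2)
covered = point_in_polygon((3.0, 3.0), sensor)    # inside the sector
missed = point_in_polygon((-3.0, 3.0), sensor)    # outside the sector
```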

  3. Mola Topography Supports Drape-Folding Models for Polygonal Terrain of Utopia Planitia, Mars

    NASA Technical Reports Server (NTRS)

    McGill, George E.; Buczkowski, D. L.

    2002-01-01

    One of the most important questions we ask about Mars is whether or not there have ever been large bodies of standing water on the surface. The polygonal terrains of Utopia and Acidalia Planitiae are located in the lowest parts of the northern lowlands, the most logical places for water to pond and sediments to accumulate. Showing that polygonal terrain is sedimentary in origin would represent strong evidence in favor of a northern ocean. A number of hypotheses for the origin of the giant martian polygons have been proposed, from the cooling of lava to frost wedging to the desiccation of wet sediments, but Pechman showed that none of these familiar processes could be scaled up to martian dimensions. Two models for polygon origin attempt to explain the scale of the martian polygons by postulating drape folding of a cover material, either sedimentary or volcanic, over an uneven, buried surface. The drape folding would produce bending stresses in the surface layers that increase the probability of fracturing over drape anticlines and suppress the probability of fracturing over drape synclines. However, both models require an additional source of extensional strain to produce the total strain needed to produce the observed troughs.

  4. A finite-time exponent for random Ehrenfest gas

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moudgalya, Sanjay; Chandra, Sarthak; Jain, Sudhir R., E-mail: srjain@barc.gov.in

    2015-10-15

    We consider the motion of a system of free particles moving on a plane with regular hard polygonal scatterers arranged in a random manner. Calling this the Ehrenfest gas, which is known to have a zero Lyapunov exponent, we propose a finite-time exponent to characterize its dynamics. As the number of sides of the polygon goes to infinity, when the polygon tends to a circle, we recover the usual Lyapunov exponent for the Lorentz gas from the exponent proposed here. To obtain this result, we generalize the reflection law of a beam of rays incident on a polygonal scatterer in a way that the formula for the circular scatterer is recovered in the limit of an infinite number of vertices. Thus, chaos emerges from pseudochaos in an appropriate limit. Highlights: • We present a finite-time exponent for particles moving in a plane containing polygonal scatterers. • The exponent found recovers the Lyapunov exponent in the limit of the polygon becoming a circle. • Our findings unify pseudointegrable and chaotic scattering via a generalized collision rule. • Stretch and fold : shuffle and cut :: Lyapunov : finite-time exponent :: fluid : granular mixing.
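    The ordinary reflection law that the paper generalizes for polygonal scatterers is the specular rule r = d - 2(d·n)n; only this standard law is sketched below, not the paper's generalized beam-reflection rule.

```python
def reflect(d, n):
    """Specular reflection of direction vector d off a surface with unit
    normal n: r = d - 2 (d . n) n. The paper generalizes this collision
    rule so the circular-scatterer formula is recovered in the limit of
    infinitely many vertices; only the standard law is shown here."""
    dot = d[0] * n[0] + d[1] * n[1]
    return (d[0] - 2 * dot * n[0], d[1] - 2 * dot * n[1])

r = reflect((1.0, -1.0), (0.0, 1.0))   # bounce off a horizontal wall
```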

  5. Clusters in irregular areas and lattices.

    PubMed

    Wieczorek, William F; Delmerico, Alan M; Rogerson, Peter A; Wong, David W S

    2012-01-01

    Geographic areas of different sizes and shapes of polygons that represent counts or rate data are often encountered in social, economic, health, and other information. Often political or census boundaries are used to define these areas because the information is available only for those geographies. Therefore, these types of boundaries are frequently used to define neighborhoods in spatial analyses using geographic information systems and related approaches such as multilevel models. When point data can be geocoded, it is possible to examine the impact of polygon shape on spatial statistical properties, such as clustering. We utilized point data (alcohol outlets) to examine the issue of polygon shape and size on visualization and statistical properties. The point data were allocated to regular lattices (hexagons and squares) and census areas for zip-code tabulation areas and tracts. The number of units in the lattices was set to be similar to the number of tract and zip-code areas. A spatial clustering statistic and visualization were used to assess the impact of polygon shape for zip- and tract-sized units. Results showed substantial similarities and notable differences across shape and size. The specific circumstances of a spatial analysis that aggregates points to polygons will determine the size and shape of the areal units to be used. The irregular polygons of census units may reflect underlying characteristics that could be missed by large regular lattices. Future research to examine the potential for using a combination of irregular polygons and regular lattices would be useful.
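    Allocating point data to a regular lattice, one of the aggregations compared in the paper, can be sketched by integer cell indexing. Square cells are used here for simplicity; hexagonal cells or census polygons would need a point-in-polygon test instead. The coordinates are illustrative.

```python
def bin_points_to_squares(points, cell):
    """Aggregate point events into a regular square lattice by integer
    cell indices; a minimal stand-in for allocating geocoded points
    (e.g., alcohol outlets) to areal units of different shapes and sizes."""
    counts = {}
    for x, y in points:
        key = (int(x // cell), int(y // cell))
        counts[key] = counts.get(key, 0) + 1
    return counts

pts = [(0.2, 0.3), (0.8, 0.9), (1.5, 0.1), (3.2, 3.9)]
counts = bin_points_to_squares(pts, cell=1.0)
# Cell (0, 0) holds two points; (1, 0) and (3, 3) hold one each.
```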

  6. Image segmentation by hierarchical agglomeration of polygons using ecological statistics

    DOEpatents

    Prasad, Lakshman; Swaminarayan, Sriram

    2013-04-23

    A method for rapid hierarchical image segmentation based on perceptually driven contour completion and scene statistics is disclosed. The method begins with an initial fine-scale segmentation of an image, such as obtained by perceptual completion of partial contours into polygonal regions using region-contour correspondences established by Delaunay triangulation of edge pixels as implemented in VISTA. The resulting polygons are analyzed with respect to their size and color/intensity distributions and the structural properties of their boundaries. Statistical estimates of granularity of size, similarity of color, texture, and saliency of intervening boundaries are computed and formulated into logical (Boolean) predicates. The combined satisfiability of these Boolean predicates by a pair of adjacent polygons at a given segmentation level qualifies them for merging into a larger polygon representing a coarser, larger-scale feature of the pixel image and collectively obtains the next level of polygonal segments in a hierarchy of fine-to-coarse segmentations. The iterative application of this process precipitates textured regions as polygons with highly convolved boundaries and helps distinguish them from objects which typically have more regular boundaries. The method yields a multiscale decomposition of an image into constituent features that enjoy a hierarchical relationship with features at finer and coarser scales. This provides a traversable graph structure from which feature content and context in terms of other features can be derived, aiding in automated image understanding tasks. The method disclosed is highly efficient and can be used to decompose and analyze large images.

  7. CPU timing routines for a CONVEX C220 computer system

    NASA Technical Reports Server (NTRS)

    Bynum, Mary Ann

    1989-01-01

    The timing routines available on the CONVEX C220 computer system in the Structural Mechanics Division (SMD) at NASA Langley Research Center are examined. The function of the timing routines, the use of the timing routines in sequential, parallel, and vector code, and the interpretation of the results from the timing routines with respect to the CONVEX model of computing are described. The timing routines available on the SMD CONVEX fall into two groups. The first group includes standard timing routines generally available with UNIX 4.3 BSD operating systems, while the second group includes routines unique to the SMD CONVEX. The standard timing routines described in this report are /bin/csh time, /bin/time, etime, and ctime. The routines unique to the SMD CONVEX are getinfo, second, cputime, toc, and a parallel profiling package made up of palprof, palinit, and palsum.

  8. Image deblurring based on nonlocal regularization with a non-convex sparsity constraint

    NASA Astrophysics Data System (ADS)

    Zhu, Simiao; Su, Zhenming; Li, Lian; Yang, Yi

    2018-04-01

    In recent years, nonlocal regularization methods for image restoration (IR) have drawn more and more attention due to the promising results obtained when compared to the traditional local regularization methods. Despite the success of this technique, in order to obtain computational efficiency, a convex regularizing functional is exploited in most existing methods, which is equivalent to imposing a convex prior on the nonlocal difference operator output. However, our conducted experiment illustrates that the empirical distribution of the output of the nonlocal difference operator, especially in the seminal work of Kheradmand et al., should be characterized with an extremely heavy-tailed distribution rather than a convex distribution. Therefore, in this paper, we propose a nonlocal regularization-based method with a non-convex sparsity constraint for image deblurring. Finally, an effective algorithm is developed to solve the corresponding non-convex optimization problem. The experimental results demonstrate the effectiveness of the proposed method.

  9. Global optimization methods for engineering design

    NASA Technical Reports Server (NTRS)

    Arora, Jasbir S.

    1990-01-01

    The problem is to find a global minimum for Problem P. Necessary and sufficient conditions are available for local optimality. However, a global solution can be assured only under the assumption of convexity of the problem. If the constraint set S is compact and the cost function is continuous on it, existence of a global minimum is guaranteed. However, in view of the fact that no global optimality conditions are available, a global solution can be found only by an exhaustive search to satisfy the inequality. The exhaustive search can be organized in such a way that the entire design space need not be searched for the solution; this reduces the computational burden somewhat. It is concluded that the zooming algorithm for global optimization appears to be a good alternative to stochastic methods. More testing is needed, and a general, robust, and efficient local minimizer is required. IDESIGN, which is based on a sequential quadratic programming algorithm, was used in all numerical calculations, and since the feasible set keeps shrinking, a good algorithm to find an initial feasible point is required. Such algorithms need to be developed and evaluated.
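
    The exhaustive-search idea can be sketched as a brute-force grid search over a rectangular design space. The cost function and bounds below are illustrative, not the report's Problem P, and the pruning/zooming strategies the report alludes to are not reproduced:

```python
import itertools

def exhaustive_grid_search(cost, bounds, steps=50):
    """Brute-force global minimizer over a rectangular design space.

    bounds: list of (lo, hi) per design variable.
    Returns (best_point, best_cost). Cost grows as steps ** len(bounds),
    which is why exhaustive search must be organized to avoid scanning
    the entire design space in practice.
    """
    axes = [[lo + (hi - lo) * i / (steps - 1) for i in range(steps)]
            for lo, hi in bounds]
    return min(((p, cost(p)) for p in itertools.product(*axes)),
               key=lambda pc: pc[1])

# Hypothetical smooth cost with its global minimum at the origin.
f = lambda p: p[0] ** 2 + p[1] ** 2 + 0.3 * p[0] * p[1]
best, val = exhaustive_grid_search(f, [(-2, 2), (-2, 2)], steps=41)
```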

  10. Balancing building and maintenance costs in growing transport networks

    NASA Astrophysics Data System (ADS)

    Bottinelli, Arianna; Louf, Rémi; Gherardi, Marco

    2017-09-01

    The costs associated to the length of links impose unavoidable constraints to the growth of natural and artificial transport networks. When future network developments cannot be predicted, the costs of building and maintaining connections cannot be minimized simultaneously, requiring competing optimization mechanisms. Here, we study a one-parameter nonequilibrium model driven by an optimization functional, defined as the convex combination of building cost and maintenance cost. By varying the coefficient of the combination, the model interpolates between global and local length minimization, i.e., between minimum spanning trees and a local version known as dynamical minimum spanning trees. We show that cost balance within this ensemble of dynamical networks is a sufficient ingredient for the emergence of tradeoffs between the network's total length and transport efficiency, and of optimal strategies of construction. At the transition between two qualitatively different regimes, the dynamics builds up power-law distributed waiting times between global rearrangements, indicating a point of nonoptimality. Finally, we use our model as a framework to analyze empirical ant trail networks, showing its relevance as a null model for cost-constrained network formation.
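
    The global length-minimization limit of this model is the Euclidean minimum spanning tree, which can be sketched with Prim's algorithm. This is a minimal illustration of that one limit only; the paper's one-parameter convex combination of building and maintenance costs, and its dynamical variant, are not reproduced here.

```python
import math

def prim_mst_length(points):
    """Total length of the Euclidean minimum spanning tree (Prim's algorithm).

    points: list of (x, y) node positions, at least two nodes.
    """
    n = len(points)
    dist = lambda a, b: math.dist(points[a], points[b])
    total = 0.0
    # Cheapest known connection from each unvisited node to the growing tree.
    best = {i: dist(0, i) for i in range(1, n)}
    while best:
        j = min(best, key=best.get)   # attach the cheapest node next
        total += best.pop(j)
        for k in best:
            best[k] = min(best[k], dist(j, k))
    return total

# Hypothetical node positions; the MST of a unit square's corners has length 3.
print(prim_mst_length([(0, 0), (1, 0), (1, 1), (0, 1)]))  # 3.0
```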

  11. The Cartilage Warp Prevention Suture.

    PubMed

    Guyuron, Bahman; Wang, Derek Z; Kurlander, David E

    2018-06-01

    Costal cartilage graft warping can challenge rhinoplasty surgeons and compromise outcomes. We propose a technique, the "warp control suture," for eliminating cartilage warp and examine outcomes in a pilot group. The warp control suture is performed in the following manner: Harvested cartilage is cut to the desired shape and immersed in saline to induce warping. A 4-0 or 5-0 PDS suture, depending on the thickness of the cartilage, is passed from the convex to the concave side and then from the concave to the convex side several times, about 5-6 mm apart, finally tying the suture on the convex side with sufficient tension to straighten the cartilage. First, an ex vivo experiment was performed in 10 specimens from 10 different patients. Excess cartilage was sutured and returned to saline for a minimum of 15 min and then assessed for warping compared to cartilage cut in the identical shape also soaked in saline. Then, charts of nine subsequent patients who received the warp control suture on 16 cartilage grafts by the senior author (BG) were retrospectively reviewed. Inclusion of study subjects required at least 6 months of follow-up with standard rhinoplasty photographs. Postoperative complications and evidence of warping were recorded. In the ex vivo experiment, none of the 10 segments demonstrated warping after replacement in saline, whereas all the matching segments demonstrated significant additional warping. Clinically, no postoperative warping was observed in any of the nine patients at least 6 months postoperatively. One case of minor infection was observed in an area away from the graft and treated with antibiotics. No warping or other complications were noted. The warp control suture technique presented here effectively straightens warped cartilage grafts and prevents additional warping. This journal requires that authors assign a level of evidence to each article. 
For a full description of these Evidence-Based Medicine ratings, please refer to the Table of Contents or the online Instructions to Authors www.springer.com/00266 .

  12. Preliminary investigations into macroscopic attenuated total reflection-fourier transform infrared imaging of intact spherical domains: spatial resolution and image distortion.

    PubMed

    Everall, Neil J; Priestnall, Ian M; Clarke, Fiona; Jayes, Linda; Poulter, Graham; Coombs, David; George, Michael W

    2009-03-01

    This paper describes preliminary investigations into the spatial resolution of macro attenuated total reflection (ATR) Fourier transform infrared (FT-IR) imaging and the distortions that arise when imaging intact, convex domains, using spheres as an extreme example. The competing effects of shallow evanescent wave penetration and blurring due to finite spatial resolution meant that spheres within the range 20-140 microm all appeared to be approximately the same size ( approximately 30-35 microm) when imaged with a numerical aperture (NA) of approximately 0.2. A very simple model was developed that predicted this extreme insensitivity to particle size. On the basis of these studies, it is anticipated that ATR imaging at this NA will be insensitive to the size of intact highly convex objects. A higher numerical aperture device should give a better estimate of the size of small spheres, owing to superior spatial resolution, but large spheres should still appear undersized due to the shallow sampling depth. An estimate of the point spread function (PSF) was required in order to develop and apply the model. The PSF was measured by imaging a sharp interface; assuming an Airy profile, the PSF width (distance from central maximum to first minimum) was estimated to be approximately 20 and 30 microm for IR bands at 1600 and 1000 cm(-1), respectively. This work has two significant limitations. First, underestimation of domain size only arises when imaging intact convex objects; if surfaces are prepared that randomly and representatively section through domains, the images can be analyzed to calculate parameters such as domain size, area, and volume. Second, the model ignores reflection and refraction and assumes weak absorption; hence, the predicted intensity profiles are not expected to be accurate; they merely give a rough estimate of the apparent sphere size. Much further work is required to place the field of quantitative ATR-FT-IR imaging on a sound basis.

  13. Self-assembly of a constitutional dynamic library of Cu(II) coordination polygons and reversible sorting by crystallization.

    PubMed

    Rancan, Marzio; Tessarolo, Jacopo; Zanonato, Pier Luigi; Seraglia, Roberta; Quici, Silvio; Armelao, Lidia

    2013-06-07

    A small coordination constitutional dynamic library (CDL) is self-assembled from Cu(2+) ions and the ortho bis-(3-acetylacetone)benzene ligand. Two coordination polygons, a rhomboid and a triangle, establish a dynamic equilibrium. Quantitative sorting of the rhomboidal polygon is reversibly obtained by crystallization. Thermodynamic and kinetic aspects ruling the CDL system have been elucidated.

  14. The Effect of Inquiry-Based Explorations in a Dynamic Geometry Environment on Sixth Grade Students' Achievements in Polygons

    ERIC Educational Resources Information Center

    Erbas, Ayhan Kursat; Yenmez, Arzu Aydogan

    2011-01-01

    The purpose of this study was to investigate the effects of using a dynamic geometry environment (DGE) together with inquiry-based explorations on the sixth grade students' achievements in polygons and congruency and similarity of polygons. Two groups of sixth grade students were selected for this study: an experimental group composed of 66…

  15. Design and analysis of a curved cylindrical Fresnel lens that produces high irradiance uniformity on the solar cell.

    PubMed

    González, Juan C

    2009-04-10

    A new type of convex Fresnel lens for linear photovoltaic concentration systems is presented. The lens designed with this method reaches 100% of geometrical optical efficiency, and the ratio (Aperture area)/(Receptor area) is up to 75% of the theoretical limit. The main goal of the design is high uniformity of the radiation on the cell surface for each input angle inside the acceptance. The ratio between the maximum and the minimum irradiance on points of the solar cell is less than 2. The lens has been designed with the simultaneous multiple surfaces (SMS) method of nonimaging optics, and ray tracing techniques have been used to characterize its performance for linear symmetry systems.

  16. Analysis of effect of the solubility on gas exchange in nonhomogeneous lungs

    NASA Technical Reports Server (NTRS)

    Colburn, W. E., Jr.; Evans, J. W.; West, J. B.

    1974-01-01

    A comparison is made of the gas exchange in nonhomogeneous lung models and in homogeneous lung models with the same total blood flow and ventilation. It is shown that, for gases with linear dissociation curves, the ratio of the rate of gas transfer of the inhomogeneous lung model to that of the homogeneous lung model, as a function of gas solubility, always has the following qualitative features. This ratio is 1 for a gas with zero solubility and decreases to a single minimum. It subsequently rises to approach 1 as the solubility tends to infinity. The early portion of the graph of this function is convex; then, after a single inflection point, it is concave.

  17. CVXPY: A Python-Embedded Modeling Language for Convex Optimization.

    PubMed

    Diamond, Steven; Boyd, Stephen

    2016-04-01

    CVXPY is a domain-specific language for convex optimization embedded in Python. It allows the user to express convex optimization problems in a natural syntax that follows the math, rather than in the restrictive standard form required by solvers. CVXPY makes it easy to combine convex optimization with high-level features of Python such as parallelism and object-oriented design. CVXPY is available at http://www.cvxpy.org/ under the GPL license, along with documentation and examples.

  18. Usefulness of the convexity apparent hyperperfusion sign in 123I-iodoamphetamine brain perfusion SPECT for the diagnosis of idiopathic normal pressure hydrocephalus.

    PubMed

    Ohmichi, Takuma; Kondo, Masaki; Itsukage, Masahiro; Koizumi, Hidetaka; Matsushima, Shigenori; Kuriyama, Nagato; Ishii, Kazunari; Mori, Etsuro; Yamada, Kei; Mizuno, Toshiki; Tokuda, Takahiko

    2018-03-16

    OBJECTIVE The gold standard for the diagnosis of idiopathic normal pressure hydrocephalus (iNPH) is the CSF removal test. For elderly patients, however, a less invasive diagnostic method is required. On MRI, high-convexity tightness was reported to be an important finding for the diagnosis of iNPH. On SPECT, patients with iNPH often show hyperperfusion of the high-convexity area. The authors tested 2 hypotheses regarding the SPECT finding: 1) it is relative hyperperfusion reflecting the increased gray matter density of the convexity, and 2) it is useful for the diagnosis of iNPH. The authors termed the SPECT finding the convexity apparent hyperperfusion (CAPPAH) sign. METHODS Two clinical studies were conducted. In study 1, SPECT was performed for 20 patients suspected of having iNPH, and regional cerebral blood flow (rCBF) of the high-convexity area was examined using quantitative analysis. Clinical differences between patients with the CAPPAH sign (CAP) and those without it (NCAP) were also compared. In study 2, the CAPPAH sign was retrospectively assessed in 30 patients with iNPH and 19 healthy controls using SPECT images and 3D stereotactic surface projection. RESULTS In study 1, rCBF of the high-convexity area of the CAP group was calculated as 35.2-43.7 ml/min/100 g, which is not higher than normal values of rCBF determined by SPECT. The NCAP group showed lower cognitive function and weaker responses to the removal of CSF than the CAP group. In study 2, the CAPPAH sign was positive only in patients with iNPH (24/30) and not in controls (sensitivity 80%, specificity 100%). The coincidence rate between tight high convexity on MRI and the CAPPAH sign was very high (28/30). CONCLUSIONS Patients with iNPH showed hyperperfusion of the high-convexity area on SPECT; however, the presence of the CAPPAH sign did not indicate real hyperperfusion of rCBF in the high-convexity area. 
The authors speculated that patients with iNPH without the CAPPAH sign, despite showing tight high convexity on MRI, might have comorbidities such as Alzheimer's disease.

  19. Home range and use of diurnal shelters by the Etendeka round-eared sengi, a newly discovered Namibian endemic desert mammal.

    PubMed

    Rathbun, Galen B; Dumbacher, John P

    2015-01-01

    To understand habitat use by the newly described Etendeka round-eared sengi (Macroscelides micus) in northwestern Namibia, we radio-tracked five individuals for nearly a month. Home ranges (100% convex polygons) in the rocky desert habitat were remarkably large (mean 14.9 ha) when compared to sengi species in more mesic habitats (<1.5 ha). The activity pattern of M. micus was strictly nocturnal, which contrasts to the normal diurnal or crepuscular activity of other sengis. The day shelters of M. micus were under single rocks and they likely were occupied by single sengis. One tagged sengi used 22 different day shelters during the study. On average, only 7% of the day shelters were used more than once by the five tagged sengis. The shelters were also unusual for a small mammal in that they were unmodified in terms of excavation or nesting material. Shelter entrances were significantly oriented to face south by south west (average 193°), away from the angle of the prevailing midday sun. This suggests that solar radiation is probably an important aspect of M. micus thermal ecology, similar to other sengis. Compared to published data on other sengis, M. micus generally conforms to the unique sengi adaptive syndrome, but with modifications related to its hyper-arid habitat.
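
    A 100% minimum convex polygon home range is simply the convex hull of the location fixes. A pure-Python sketch using Andrew's monotone chain and the shoelace formula follows; the sample fixes are hypothetical coordinates in metres, not data from this study.

```python
def convex_hull(points):
    """Andrew's monotone chain; returns hull vertices in counterclockwise order."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    cross = lambda o, a, b: ((a[0] - o[0]) * (b[1] - o[1]) -
                             (a[1] - o[1]) * (b[0] - o[0]))
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

def polygon_area(vertices):
    """Shoelace formula; vertices in order, area in squared input units."""
    n = len(vertices)
    s = sum(vertices[i][0] * vertices[(i + 1) % n][1] -
            vertices[(i + 1) % n][0] * vertices[i][1] for i in range(n))
    return abs(s) / 2.0

# Hypothetical radio-tracking fixes (x, y) in metres; 1 ha = 10,000 m^2.
fixes = [(0, 0), (400, 0), (400, 300), (0, 300), (200, 150), (100, 50)]
mcp_ha = polygon_area(convex_hull(fixes)) / 10_000
print(mcp_ha)  # 12.0 (hectares)
```

    Interior fixes drop out of the hull, which is why the MCP estimate is sensitive to a few peripheral locations.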

  20. Frosty Polygons

    NASA Technical Reports Server (NTRS)

    2004-01-01

    16 January 2004 Looking somewhat like a roadmap, this 3 km (1.9 mi) wide view of a cratered plain in the martian south polar region shows a plethora of cracks that form polygonal patterns. This Mars Global Surveyor (MGS) Mars Orbiter Camera (MOC) image is located near 78.9°S, 357.3°W. Polygons such as these, where they are found on Earth, would be indicators of the presence of subsurface ice. Whether the same is true for Mars is uncertain. What is certain is that modern, seasonal frost on the surface enhances the appearance of the polygons, as the frost persists longer in the cracks than on the adjacent plains. This southern springtime image is illuminated by sunlight from the upper left.

  1. Design and simulation of MEMS-actuated adjustable optical wedge for laser beam scanners

    NASA Astrophysics Data System (ADS)

    Bahgat, Ahmed S.; Zaki, Ahmed H.; Abdo Mohamed, Mohamed; El Sherif, Ashraf Fathy

    2018-01-01

    This paper introduces both the optical and mechanical design and simulation of a large-static-deflection MOEMS actuator. The designed device is in the form of an adjustable optical wedge (AOW) laser scanner. The AOW is formed of a 1.5-mm-diameter plano-convex lens separated by an air gap from a fixed plano-concave lens. The convex lens is actuated by a staggered vertical comb drive and suspended by a rectangular cross-section torsion beam. An optical analysis and simulation of the air-separated AOW, as well as a detailed design, analysis, and static simulation of the comb drive, are introduced. The dynamic step response of the full system is also introduced. The analytical solution showed good agreement with the simulation results. A general global minimum optimization algorithm is applied to the comb-drive design to minimize the driving voltage. A maximum comb-drive mechanical deflection angle of 12 deg in each direction was obtained under a DC actuation voltage of 32 V with a settling time of 90 ms, leading to 1-mm one-dimensional (1-D) steering of the laser beam with a continuous optical scan angle of 5 deg in each direction. This optimization process provided a design with larger deflection and a smaller driving voltage compared with other conventional devices. This enhancement could lead to better performance of MOEMS-based laser beam scanners for imaging and low-speed applications.

  2. WE-AB-209-07: Explicit and Convex Optimization of Plan Quality Metrics in Intensity-Modulated Radiation Therapy Treatment Planning

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Engberg, L; KTH Royal Institute of Technology, Stockholm; Eriksson, K

    Purpose: To formulate objective functions of a multicriteria fluence map optimization model that correlate well with plan quality metrics, and to solve this multicriteria model by convex approximation. Methods: In this study, objectives of a multicriteria model are formulated to explicitly either minimize or maximize a dose-at-volume measure. Given the widespread agreement that dose-at-volume levels play important roles in plan quality assessment, these objectives correlate well with plan quality metrics. This is in contrast to the conventional objectives, which are to maximize clinical goal achievement by relating to deviations from given dose-at-volume thresholds: while balancing the new objectives means explicitly balancing dose-at-volume levels, balancing the conventional objectives effectively means balancing deviations. Constituted by the inherently non-convex dose-at-volume measure, the new objectives are approximated by the convex mean-tail-dose measure (CVaR measure), yielding a convex approximation of the multicriteria model. Results: Advantages of using the convex approximation are investigated through juxtaposition with the conventional objectives in a computational study of two patient cases. Clinical goals of each case respectively point out three ROI dose-at-volume measures to be considered for plan quality assessment. This is translated in the convex approximation into minimizing three mean-tail-dose measures. Evaluations of the three ROI dose-at-volume measures on Pareto optimal plans are used to represent plan quality of the Pareto sets. Besides providing increased accuracy in terms of feasibility of solutions, the convex approximation generates Pareto sets with overall improved plan quality. In one case, the Pareto set generated by the convex approximation entirely dominates that generated with the conventional objectives. 
Conclusion: The initial computational study indicates that the convex approximation outperforms the conventional objectives in aspects of accuracy and plan quality.
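
    The mean-tail-dose (CVaR) surrogate mentioned in the abstract can be illustrated with a toy computation. The dose values below are hypothetical, and the fractional-voxel weighting of the full CVaR definition is omitted from this sketch.

```python
def upper_mean_tail_dose(doses, tail_fraction):
    """Average of the hottest `tail_fraction` of voxel doses.

    This mean-tail-dose is convex in the dose vector and upper-bounds the
    corresponding dose-at-volume level, which is what makes it usable as a
    convex surrogate objective. Fractional tail voxels are simply rounded
    here, a simplification relative to the exact CVaR definition.
    """
    k = max(1, int(round(tail_fraction * len(doses))))
    return sum(sorted(doses, reverse=True)[:k]) / k

# Hypothetical voxel doses (Gy) in a region of interest.
doses = [10, 20, 30, 40, 50, 60, 70, 80, 90, 100]
print(upper_mean_tail_dose(doses, 0.2))  # mean of the two hottest voxels: 95.0
```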

  3. Self-Assembly of Flux-Closure Polygons from Magnetite Nanocubes.

    PubMed

    Szyndler, Megan W; Corn, Robert M

    2012-09-06

    Well-defined nanoscale flux-closure polygons (nanogons) have been fabricated on hydrophilic surfaces from the face-to-face self-assembly of magnetite nanocubes. Uniform ferrimagnetic magnetite nanocubes (∼86 nm) were synthesized and characterized with a combination of electron microscopy, diffraction, and magnetization measurements. The nanocubes were subsequently cast onto hydrophilic substrates, wherein the cubes lined up face-to-face and formed a variety of polygons due to magnetostatic and hydrophobic interactions. The generated surfaces consist primarily of three- and four-sided nanogons; polygons ranging from two to six sides were also observed. Further examination of the nanogons showed that the constraints of the face-to-face assembly of nanocubes often led to bowed sides, strained cube geometries, and mismatches at the acute angle vertices. Additionally, extra nanocubes were often present at the vertices, suggesting the presence of external magnetostatic fields at the polygon corners. These nanogons are inimitable nanoscale magnetic structures with potential applications in the areas of magnetic memory storage and high-frequency magnetics.

  4. Total curvature and total torsion of knotted random polygons in confinement

    NASA Astrophysics Data System (ADS)

    Diao, Yuanan; Ernst, Claus; Rawdon, Eric J.; Ziegler, Uta

    2018-04-01

    Knots in nature are typically confined spatially. The confinement affects the possible configurations, which in turn affects the spectrum of possible knot types as well as the geometry of the configurations within each knot type. The goal of this paper is to determine how confinement, length, and knotting affect the total curvature and total torsion of random polygons. Previously published papers have investigated these effects in the unconstrained case. In particular, we analyze how the total curvature and total torsion are affected by (1) varying the length of polygons within a fixed confinement radius and (2) varying the confinement radius of polygons with a fixed length. We also compare the total curvature and total torsion of groups of knots with similar complexity (measured as crossing number). While some of our results fall in line with what has been observed in the studies of the unconfined random polygons, a few surprising results emerge from our study, showing some properties that are unique due to the effect of knotting in confinement.
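
    The total curvature of a polygon is the sum of its turning angles, the exterior angles between successive edge vectors. A small pure-Python sketch follows; the square test polygon is illustrative, and total torsion would analogously sum the dihedral angles between successive edge planes.

```python
import math

def total_curvature(vertices):
    """Sum of turning angles (radians) at the vertices of a closed 3D polygon.

    For a planar convex polygon this equals 2*pi; for random polygons in
    confinement it grows with length and knot complexity.
    """
    n = len(vertices)
    edges = [tuple(vertices[(i + 1) % n][k] - vertices[i][k] for k in range(3))
             for i in range(n)]
    total = 0.0
    for i in range(n):
        u, v = edges[i - 1], edges[i]   # incoming and outgoing edge at vertex i
        cosang = (sum(a * b for a, b in zip(u, v)) /
                  (math.hypot(*u) * math.hypot(*v)))
        total += math.acos(max(-1.0, min(1.0, cosang)))  # clamp rounding error
    return total

# A planar square (embedded in 3D): total curvature is exactly 2*pi.
square = [(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0)]
print(total_curvature(square))  # ~6.2832
```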

  5. Statistical and hydrodynamic properties of double-ring polymers with a fixed linking number between twin rings.

    PubMed

    Uehara, Erica; Deguchi, Tetsuo

    2014-01-28

    For a double-ring polymer in solution we evaluate the mean-square radius of gyration and the diffusion coefficient through simulation of off-lattice self-avoiding double polygons consisting of cylindrical segments with radius r_ex of unit length. Here, a self-avoiding double polygon consists of twin self-avoiding polygons which are connected by a cylindrical segment. We show numerically that several statistical and dynamical properties of double-ring polymers in solution depend on the linking number of the constituent twin ring polymers. The ratio of the mean-square radius of gyration of self-avoiding double polygons with zero linking number to that of no topological constraint is larger than 1, in particular, when the radius of cylindrical segments r_ex is small. However, the ratio is almost constant with respect to the number of vertices, N, and does not depend on N. The large-N behavior of topological swelling is thus quite different from the case of knotted random polygons.

  6. Knot probability of polygons subjected to a force: a Monte Carlo study

    NASA Astrophysics Data System (ADS)

    Janse van Rensburg, E. J.; Orlandini, E.; Tesi, M. C.; Whittington, S. G.

    2008-01-01

    We use Monte Carlo methods to study the knot probability of lattice polygons on the cubic lattice in the presence of an external force f. The force is coupled to the span of the polygons along a lattice direction, say the z-direction. If the force is negative polygons are squeezed (the compressive regime), while positive forces tend to stretch the polygons along the z-direction (the tensile regime). For sufficiently large positive forces we verify that the Pincus scaling law in the force-extension curve holds. At a fixed number of edges n the knot probability is a decreasing function of the force. For a fixed force the knot probability approaches unity as 1 - exp(-α0(f)n + o(n)), where α0(f) is positive and a decreasing function of f. We also examine the average of the absolute value of the writhe and we verify the square root growth law (known for f = 0) for all values of f.

  7. Polygonal shaft hole rotor

    DOEpatents

    Hussey, John H.; Rose, John Scott; Meystrik, Jeffrey J.; White, Kent Lee

    2001-01-23

    A laminated rotor for an induction motor has a plurality of ferro-magnetic laminations mounted axially on a rotor shaft. Each of the plurality of laminations has a central aperture in the shape of a polygon with sides of equal length. The laminations are alternatingly rotated 180° from one another so that the straight sides of the polygon-shaped apertures are misaligned. As a circular rotor shaft is press fit into a stack of laminations, the point of maximum interference occurs at the midpoints of the sides of the polygon (i.e., at the smallest radius of the central apertures of the laminations). Because the laminations are alternatingly rotated, the laminate material at the points of maximum interference yields relatively easily into the vertices (i.e., the greatest radius of the central aperture) of the polygonal central aperture of the next lamination as the shaft is inserted into the stack of laminations. Because of this yielding process, the amount of force required to insert the shaft is reduced, and a tighter fit is achieved.

  8. Generating equilateral random polygons in confinement

    NASA Astrophysics Data System (ADS)

    Diao, Y.; Ernst, C.; Montemayor, A.; Ziegler, U.

    2011-10-01

    One challenging problem in biology is to understand the mechanism of DNA packing in a confined volume such as a cell. It is known that confined circular DNA is often knotted and hence the topology of the extracted (and relaxed) circular DNA can be used as a probe of the DNA packing mechanism. However, in order to properly estimate the topological properties of the confined circular DNA structures using mathematical models, it is necessary to generate large ensembles of simulated closed chains (i.e. polygons) of equal edge lengths that are confined in a volume such as a sphere of certain fixed radius. Finding efficient algorithms that properly sample the space of such confined equilateral random polygons is a difficult problem. In this paper, we propose a method that generates confined equilateral random polygons based on their probability distribution. This method requires the creation of a large database initially. However, once the database has been created, a confined equilateral random polygon of length n can be generated in linear time in terms of n. The errors introduced by the method can be controlled and reduced by the refinement of the database. Furthermore, our numerical simulations indicate that these errors are unbiased and tend to cancel each other in a long polygon.

  9. Polygonal Craters on Dwarf-Planet Ceres

    NASA Astrophysics Data System (ADS)

    Otto, K. A.; Jaumann, R.; Krohn, K.; Buczkowski, D. L.; von der Gathen, I.; Kersten, E.; Mest, S. C.; Preusker, F.; Roatsch, T.; Schenk, P. M.; Schröder, S.; Schulzeck, F.; Scully, J. E. C.; Stepahn, K.; Wagner, R.; Williams, D. A.; Raymond, C. A.; Russell, C. T.

    2015-10-01

    With a diameter of approximately 950 km and a mass of about 1/3 of the total mass of the asteroid belt, (1) Ceres is the largest and most massive object in the Main Asteroid Belt. As an intact proto-planet, Ceres is key to understanding the origin and evolution of the terrestrial planets [1]. In particular, the role of water during planet formation is of interest, because the differentiated dwarf-planet is thought to possess a water-rich mantle overlying a rocky core [2]. The Dawn spacecraft arrived at Ceres in March this year after completing its mission at (4) Vesta. At Ceres, the on-board Framing Camera (FC) collected image data which revealed a large variety of impact crater morphologies, including polygonal craters (Figure 1). Polygonal craters show straight rim sections aligned to form an angular shape. They are commonly associated with fractures in the target material. Simple polygonal craters develop during the excavation stage, when the excavation flow propagates faster along preexisting fractures [3, 5]. Complex polygonal craters adopt their shape during the modification stage, when slumping along fractures is favoured [3]. Polygonal craters are known from a variety of planetary bodies including Earth [e.g. 4], the Moon [e.g. 5], Mars [e.g. 6], Mercury [e.g. 7], Venus [e.g. 8] and outer Solar System icy satellites [e.g. 9].

  10. CVXPY: A Python-Embedded Modeling Language for Convex Optimization

    PubMed Central

    Diamond, Steven; Boyd, Stephen

    2016-01-01

    CVXPY is a domain-specific language for convex optimization embedded in Python. It allows the user to express convex optimization problems in a natural syntax that follows the math, rather than in the restrictive standard form required by solvers. CVXPY makes it easy to combine convex optimization with high-level features of Python such as parallelism and object-oriented design. CVXPY is available at http://www.cvxpy.org/ under the GPL license, along with documentation and examples. PMID:27375369

  11. Multi-Level Building Reconstruction for Automatic Enhancement of High Resolution Dsms

    NASA Astrophysics Data System (ADS)

    Arefi, H.; Reinartz, P.

    2012-07-01

    In this article a multi-level approach is proposed for reconstruction-based improvement of high resolution Digital Surface Models (DSMs). The concept of Levels of Detail (LOD) defined by the CityGML standard is taken as the basis for the abstraction levels of building roof structures. Here, LOD1 and LOD2, which correspond to prismatic and parametric roof shapes, are reconstructed. Besides proposing a new approach for automatic LOD1 and LOD2 generation from high resolution DSMs, the algorithm contains two generalization levels, namely horizontal and vertical. Both generalization levels are applied to the prismatic model of buildings. The horizontal generalization allows controlling the approximation level of building footprints, which is similar to the cartographic generalization concept used for urban maps. In vertical generalization, the prismatic model is formed using an individual building height and continues to include all flat structures located at different height levels. The concept of LOD1 generation is based on approximation of the building footprints by rectangular or non-rectangular polygons. For a rectangular building containing one main orientation, a method based on the Minimum Bounding Rectangle (MBR) is employed. In contrast, a Combined Minimum Bounding Rectangle (CMBR) approach is proposed for regularization of non-rectilinear polygons, i.e. buildings without perpendicular edge directions. Both MBR- and CMBR-based approaches are iteratively employed on building segments to reduce the original building footprints to a minimum number of nodes with maximum similarity to the original shapes. A model-driven approach based on the analysis of the 3D points of DSMs in a 2D projection plane is proposed for LOD2 generation. Accordingly, a building block is divided into smaller parts according to the direction and number of existing ridge lines.
The 3D model is derived for each building part and finally, a complete parametric model is formed by merging all the 3D models of the individual parts and adjusting the nodes after the merging step. In order to provide an enhanced DSM, a surface model is provided for each building by interpolation of the internal points of the generated models. All interpolated models are situated on a Digital Terrain Model (DTM) of the corresponding area to shape the enhanced DSM. The proposed DSM enhancement approach has been tested on a dataset from the Munich central area. The original DSM is created using robust stereo matching of WorldView-2 stereo images. A quantitative assessment of the new DSM, comparing the heights of the ridges and eaves, shows a standard deviation of better than 50 cm.
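The paper's MBR/CMBR regularization itself is not given in the abstract. As a hypothetical sketch of the underlying minimum-bounding-rectangle computation, one can try each candidate edge direction, rotate the points into that frame, and keep the axis-aligned box of smallest area (brute force over all point pairs for clarity; rotating calipers over the convex hull is the efficient version, since the optimal rectangle is flush with a hull edge):

```python
import math

def min_area_bounding_rect(points):
    """Minimum-area bounding rectangle of a 2D point set.

    O(n^3) brute-force sketch: every pair of points defines a candidate
    orientation; the optimum is flush with a convex-hull edge, which is
    among these candidates. Returns (area, angle_radians).
    """
    best = (float("inf"), 0.0)
    pts = list(points)
    n = len(pts)
    for i in range(n):
        for j in range(i + 1, n):
            theta = math.atan2(pts[j][1] - pts[i][1], pts[j][0] - pts[i][0])
            c, s = math.cos(-theta), math.sin(-theta)
            # rotate all points so the candidate edge is horizontal
            rx = [x * c - y * s for x, y in pts]
            ry = [x * s + y * c for x, y in pts]
            area = (max(rx) - min(rx)) * (max(ry) - min(ry))
            if area < best[0]:
                best = (area, theta)
    return best
```

For the axis-aligned footprint corners (0,0), (4,0), (4,2), (0,2) this recovers the rectangle of area 8 regardless of extra interior points.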

  12. Methods for rapidly processing angular masks of next-generation galaxy surveys

    NASA Astrophysics Data System (ADS)

    Swanson, M. E. C.; Tegmark, Max; Hamilton, Andrew J. S.; Hill, J. Colin

    2008-07-01

    As galaxy surveys become larger and more complex, keeping track of the completeness, magnitude limit and other survey parameters as a function of direction on the sky becomes an increasingly challenging computational task. For example, typical angular masks of the Sloan Digital Sky Survey contain about N = 300000 distinct spherical polygons. Managing masks with such large numbers of polygons becomes intractably slow, particularly for tasks that run in O(N²) time with a naive algorithm, such as finding which polygons overlap each other. Here we present a `divide-and-conquer' solution to this challenge: we first split the angular mask into pre-defined regions called `pixels', such that each polygon is in only one pixel, and then perform further computations, such as checking for overlap, on the polygons within each pixel separately. This reduces O(N²) tasks to O(N), and also reduces the important task of determining in which polygon(s) a point on the sky lies from O(N) to O(1), resulting in significant computational speedup. Additionally, we present a method to efficiently convert any angular mask to and from the popular HEALPix format. This method can be generically applied to convert to and from any desired spherical pixelization. We have implemented these techniques in a new version of the MANGLE software package, which is freely available at http://space.mit.edu/home/tegmark/mangle/, along with complete documentation and example applications. These new methods should prove quite useful to the astronomical community, and since MANGLE is a generic tool for managing angular masks on a sphere, it has the potential to benefit terrestrial mapmaking applications as well.
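MANGLE itself works with spherical polygons; as a toy planar illustration of the same divide-and-conquer idea (all names below are illustrative, not MANGLE's API), one can register convex polygons into grid cells by bounding box, so that a point query only tests the handful of polygons sharing its cell instead of all N:

```python
from collections import defaultdict

def point_in_convex(poly, p):
    """Convex polygon with counter-clockwise vertices: p is inside iff it is
    on or to the left of every directed edge."""
    n = len(poly)
    for i in range(n):
        (x1, y1), (x2, y2) = poly[i], poly[(i + 1) % n]
        if (x2 - x1) * (p[1] - y1) - (y2 - y1) * (p[0] - x1) < 0:
            return False
    return True

class PixelizedMask:
    """Bin polygons into square 'pixels' by bounding box for O(1)-ish lookup."""
    def __init__(self, polygons, cell=1.0):
        self.cell = cell
        self.polygons = polygons
        self.grid = defaultdict(list)
        for idx, poly in enumerate(polygons):
            xs = [v[0] for v in poly]
            ys = [v[1] for v in poly]
            for gx in range(int(min(xs) // cell), int(max(xs) // cell) + 1):
                for gy in range(int(min(ys) // cell), int(max(ys) // cell) + 1):
                    self.grid[(gx, gy)].append(idx)

    def polygons_containing(self, p):
        # only the polygons registered in p's cell need the exact test
        key = (int(p[0] // self.cell), int(p[1] // self.cell))
        return [i for i in self.grid[key] if point_in_convex(self.polygons[i], p)]
```

Note that MANGLE's scheme is stricter than this sketch: each polygon is split so it lies in exactly one pixel, which also accelerates polygon-polygon overlap checks.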

  13. The polygonal model: A simple representation of biomolecules as a tool for teaching metabolism.

    PubMed

    Bonafe, Carlos Francisco Sampaio; Bispo, Jose Ailton Conceição; de Jesus, Marcelo Bispo

    2018-01-01

    Metabolism involves numerous reactions and organic compounds that the student must master to understand adequately the processes involved. Part of biochemical learning should include some knowledge of the structure of biomolecules, although the acquisition of such knowledge can be time-consuming and may require significant effort from the student. In this report, we describe the "polygonal model" as a new means of graphically representing biomolecules. This model is based on the use of geometric figures such as open triangles, squares, and circles to represent hydroxyl, carbonyl, and carboxyl groups, respectively. The usefulness of the polygonal model was assessed by undergraduate students in a classroom activity that consisted of "transforming" molecules from Fischer models to polygonal models and vice versa. The survey was applied to 135 undergraduate Biology and Nursing students. Students found the model easy to use, and we noted that it allowed identification of students' misconceptions in basic concepts of organic chemistry, such as in stereochemistry and organic groups, that could then be corrected. The students considered the polygonal model easier and faster for representing molecules than Fischer representations, without loss of information. These findings indicate that the polygonal model can facilitate the teaching of metabolism when the structures of biomolecules are discussed. Overall, the polygonal model promoted contact with chemical structures, e.g. through drawing activities, and encouraged student-student dialog, thereby facilitating biochemical learning. © 2017 The International Union of Biochemistry and Molecular Biology, 46(1):66-75, 2018.

  14. Duality of caustics in Minkowski billiards

    NASA Astrophysics Data System (ADS)

    Artstein-Avidan, S.; Florentin, D. I.; Ostrover, Y.; Rosen, D.

    2018-04-01

    In this paper we study convex caustics in Minkowski billiards. We show that for the Euclidean billiard dynamics in a planar smooth, centrally symmetric, strictly convex body K, for every convex caustic which K possesses, the ‘dual’ billiard dynamics in which the table is the Euclidean unit ball and the geometry that governs the motion is induced by the body K, possesses a dual convex caustic. Such a pair of caustics are dual in a strong sense, and in particular they have the same perimeter, Lazutkin parameter (both measured with respect to the corresponding geometries), and rotation number. We show moreover that for general Minkowski billiards this phenomenon fails, and one can construct a smooth caustic in a Minkowski billiard table which possesses no dual convex caustic.

  15. Multi-Stage Convex Relaxation Methods for Machine Learning

    DTIC Science & Technology

    2013-03-01

    Many problems in machine learning can be naturally formulated as non-convex optimization problems. However, such direct nonconvex formulations have...original nonconvex formulation. We will develop theoretical properties of this method and algorithmic consequences. Related convex and nonconvex machine learning methods will also be investigated.

  16. On approximation and energy estimates for delta 6-convex functions.

    PubMed

    Saleem, Muhammad Shoaib; Pečarić, Josip; Rehman, Nasir; Khan, Muhammad Wahab; Zahoor, Muhammad Sajid

    2018-01-01

    The smooth approximation and weighted energy estimates for delta 6-convex functions are derived in this research. Moreover, we conclude that if 6-convex functions are closed in uniform norm, then their third derivatives are closed in weighted [Formula: see text]-norm.

  17. Nonconvex Sparse Logistic Regression With Weakly Convex Regularization

    NASA Astrophysics Data System (ADS)

    Shen, Xinyue; Gu, Yuantao

    2018-06-01

    In this work we propose to fit a sparse logistic regression model by a weakly convex regularized nonconvex optimization problem. The idea is based on the finding that a weakly convex function as an approximation of the $\ell_0$ pseudo norm is able to better induce sparsity than the commonly used $\ell_1$ norm. For a class of weakly convex sparsity inducing functions, we prove the nonconvexity of the corresponding sparse logistic regression problem, and study its local optimality conditions and the choice of the regularization parameter to exclude trivial solutions. Despite the nonconvexity, a method based on proximal gradient descent is used to solve the general weakly convex sparse logistic regression, and its convergence behavior is studied theoretically. Then the general framework is applied to a specific weakly convex function, and a necessary and sufficient local optimality condition is provided. The solution method is instantiated in this case as an iterative firm-shrinkage algorithm, and its effectiveness is demonstrated in numerical experiments by both randomly generated and real datasets.
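The paper's exact proximal operator is not reproduced in the abstract. As a hypothetical sketch, the classical firm-shrinkage threshold of Gao and Bruce is the kind of coordinate-wise map an iterative firm-shrinkage step applies; shown next to the soft threshold (the proximal operator of the $\ell_1$ norm) that it generalizes. The specific parametrization in the paper may differ:

```python
import math

def soft_threshold(x, lam):
    """Prox of lam * |x|: shrink toward zero by lam, clip to zero below lam."""
    return math.copysign(max(abs(x) - lam, 0.0), x)

def firm_threshold(x, lam, mu):
    """Firm shrinkage (Gao & Bruce), requires mu > lam:
    zero below lam, identity above mu, linear interpolation in between.
    Unlike soft thresholding, large coefficients pass through unbiased."""
    ax = abs(x)
    if ax <= lam:
        return 0.0
    if ax <= mu:
        return math.copysign(mu * (ax - lam) / (mu - lam), x)
    return x
```

With lam = 1 and mu = 2, an input of 1.5 shrinks to 1.0, while an input of 3.0 is left untouched, illustrating the reduced bias relative to soft thresholding.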

  18. Assessing the influence of lower facial profile convexity on perceived attractiveness in the orthognathic patient, clinician, and layperson.

    PubMed

    Naini, Farhad B; Donaldson, Ana Nora A; McDonald, Fraser; Cobourne, Martyn T

    2012-09-01

    The aim was a quantitative evaluation of how the severity of lower facial profile convexity influences perceived attractiveness. The lower facial profile of an idealized image was altered incrementally between 14° and -16°. Images were rated on a Likert scale by orthognathic patients, laypeople, and clinicians. Attractiveness ratings were greater for straight profiles than for convex or concave ones, with no significant difference between convex and concave profiles. Ratings decreased by 0.23 of a level for every degree increase in the convexity angle. Class II/III patients gave significantly reduced ratings of attractiveness and had a greater desire for surgery than class I. A straight profile is perceived as most attractive and greater degrees of convexity or concavity are deemed progressively less attractive, but a range of 10° to -12° may be deemed acceptable; beyond these values surgical correction is desired. Patients are most critical, and clinicians are more critical than laypeople. Copyright © 2012 Elsevier Inc. All rights reserved.

  19. The spectral positioning algorithm of new spectrum vehicle based on convex programming in wireless sensor network

    NASA Astrophysics Data System (ADS)

    Zhang, Yongjun; Lu, Zhixin

    2017-10-01

    Spectrum resources are very precious, so it is increasingly important to locate interference signals rapidly. Convex programming algorithms are often used for localization in wireless sensor networks. However, the traditional convex programming algorithm suffers from excessive overlap of wireless sensor nodes, which leads to low positioning accuracy, so this paper proposes a new algorithm. Building on the traditional convex programming algorithm, the spectrum vehicle dispatches unmanned aerial vehicles (UAVs) that record data periodically along different trajectories. According to the probability density distribution, the positioning area is segmented to further reduce the location region. Because the algorithm only adds the communication of power values between the unknown node and the sensor nodes, the simplicity and real-time performance of the convex programming algorithm are largely preserved. The experimental results show that the improved algorithm achieves better positioning accuracy than the original convex programming algorithm.

  20. Stoichiometric control of multiple different tectons in coordination-driven self-assembly: preparation of fused metallacyclic polygons.

    PubMed

    Lee, Junseong; Ghosh, Koushik; Stang, Peter J

    2009-09-02

    We present a general strategy for the synthesis of stable, multicomponent fused polygon complexes in which coordination-driven self-assembly allows for single supramolecular species to be formed from multicomponent self-assembly and the shape of the obtained polygons can be controlled simply by changing the ratio of individual components. The compounds have been characterized by multinuclear NMR spectroscopy and electrospray ionization mass spectrometry.

  1. In-gap corner states in core-shell polygonal quantum rings.

    PubMed

    Sitek, Anna; Ţolea, Mugurel; Niţă, Marian; Serra, Llorenç; Gudmundsson, Vidar; Manolescu, Andrei

    2017-01-10

    We study Coulomb interacting electrons confined in polygonal quantum rings. We focus on the interplay of localization at the polygon corners and Coulomb repulsion. Remarkably, the Coulomb repulsion allows the formation of in-gap states, i.e., corner-localized states of electron pairs or clusters shifted to energies that were forbidden for non-interacting electrons, but below the energies of corner-side-localized states. We specify conditions allowing optical excitation to those states.

  2. In-gap corner states in core-shell polygonal quantum rings

    NASA Astrophysics Data System (ADS)

    Sitek, Anna; Ţolea, Mugurel; Niţă, Marian; Serra, Llorenç; Gudmundsson, Vidar; Manolescu, Andrei

    2017-01-01

    We study Coulomb interacting electrons confined in polygonal quantum rings. We focus on the interplay of localization at the polygon corners and Coulomb repulsion. Remarkably, the Coulomb repulsion allows the formation of in-gap states, i.e., corner-localized states of electron pairs or clusters shifted to energies that were forbidden for non-interacting electrons, but below the energies of corner-side-localized states. We specify conditions allowing optical excitation to those states.

  3. 3-D surface reconstruction of patient specific anatomic data using a pre-specified number of polygons.

    PubMed

    Aharon, S; Robb, R A

    1997-01-01

    Virtual reality environments provide highly interactive, natural control of the visualization process, significantly enhancing the scientific value of the data produced by medical imaging systems. Due to the computational and real time display update requirements of virtual reality interfaces, however, the complexity of organ and tissue surfaces which can be displayed is limited. In this paper, we present a new algorithm for the production of a polygonal surface containing a pre-specified number of polygons from patient or subject specific volumetric image data. The advantage of this new algorithm is that it effectively tiles complex structures with a specified number of polygons selected to optimize the trade-off between surface detail and real-time display rates.
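The paper's 3-D tiling algorithm is not given in the abstract. As a hypothetical 2-D analogue of reducing a shape to a pre-specified element budget, a minimal Visvalingam-style decimation repeatedly removes the vertex whose removal changes the shape least, i.e. the one forming the smallest triangle with its two neighbours, until the target count is reached (names are illustrative, not from the paper):

```python
def triangle_area(a, b, c):
    """Area of triangle abc via the cross product of its edge vectors."""
    return abs((b[0]-a[0]) * (c[1]-a[1]) - (c[0]-a[0]) * (b[1]-a[1])) / 2.0

def decimate_polygon(poly, target):
    """Reduce a closed polygon to `target` vertices by repeatedly deleting
    the vertex whose neighbouring triangle has the smallest area."""
    pts = list(poly)
    while len(pts) > max(target, 3):
        n = len(pts)
        k = min(range(n),
                key=lambda i: triangle_area(pts[i - 1], pts[i], pts[(i + 1) % n]))
        del pts[k]
    return pts
```

Collinear vertices form zero-area triangles and are dropped first, so a square with edge midpoints decimated to four vertices returns exactly its corners. The same smallest-change-first greedy idea underlies many mesh decimation schemes that trade surface detail for display rate.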

  4. Maximum and minimum entropy states yielding local continuity bounds

    NASA Astrophysics Data System (ADS)

    Hanson, Eric P.; Datta, Nilanjana

    2018-04-01

    Given an arbitrary quantum state σ, we obtain an explicit construction of a state ρ*_ε(σ) [respectively, ρ_{*,ε}(σ)] which has the maximum (respectively, minimum) entropy among all states which lie in a specified neighborhood (ε-ball) of σ. Computing the entropy of these states leads to a local strengthening of the continuity bound of the von Neumann entropy, i.e., the Audenaert-Fannes inequality. Our bound is local in the sense that it depends on the spectrum of σ. The states ρ*_ε(σ) and ρ_{*,ε}(σ) depend only on the geometry of the ε-ball and are in fact optimizers for a larger class of entropies. These include the Rényi entropy and the minimum- and maximum-entropies, providing explicit formulas for certain smoothed quantities. This allows us to obtain local continuity bounds for these quantities as well. In obtaining this bound, we first derive a more general result which may be of independent interest, namely, a necessary and sufficient condition under which a state maximizes a concave and Gâteaux-differentiable function in an ε-ball around a given state σ. Examples of such a function include the von Neumann entropy and the conditional entropy of bipartite states. Our proofs employ tools from the theory of convex optimization under non-differentiable constraints, in particular Fermat's rule, and majorization theory.
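The paper's local bound is not reproducible from the abstract alone, but the global bound it strengthens, the Audenaert-Fannes inequality |S(ρ) - S(σ)| ≤ T log₂(d-1) + h(T) with T the trace distance and h the binary entropy, can be checked numerically. A hypothetical sketch restricted to commuting qubit states ρ = diag(p, 1-p), σ = diag(q, 1-q), where S reduces to the binary entropy and T = |p - q|, and the first term vanishes since d = 2:

```python
import math

def binary_entropy(t):
    """h(t) in bits; h(0) = h(1) = 0."""
    if t <= 0.0 or t >= 1.0:
        return 0.0
    return -t * math.log2(t) - (1 - t) * math.log2(1 - t)

def audenaert_fannes_holds(p, q):
    """Check |S(rho) - S(sigma)| <= h(T) for commuting qubit states
    rho = diag(p, 1-p), sigma = diag(q, 1-q), trace distance T = |p - q|."""
    T = abs(p - q)
    lhs = abs(binary_entropy(p) - binary_entropy(q))
    rhs = binary_entropy(T)   # the T * log2(d - 1) term vanishes for d = 2
    return lhs <= rhs + 1e-12
```

The paper's improvement is to make the right-hand side depend on the spectrum of σ rather than only on the ball radius, which tightens the bound away from maximally mixed states.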

  5. Whistle description of Irrawaddy dolphins (Orcaella brevirostris) in Bay of Brunei, Sarawak, Malaysia.

    PubMed

    Muhamad, Hairul Masrini; Xu, Xiaomei; Zhang, Xuelei; Jaaman, Saifullah Arifin; Muda, Azmi Marzuki

    2018-05-01

    Studies of Irrawaddy dolphin acoustics assist in understanding the behaviour of the species and thereby its conservation. Whistle signals emitted by Irrawaddy dolphins within the Bay of Brunei in Malaysian waters were characterized. A total of 199 whistles were analysed from seven sightings between January and April 2016. Six types of whistle contours, named constant, upsweep, downsweep, concave, convex, and sine, were detected when the dolphins engaged in traveling, foraging, and socializing activities. The whistle durations ranged between 0.06 and 3.86 s. The minimum frequency recorded was 443 Hz [Mean = 6000 Hz, standard deviation (SD) = 2320 Hz] and the maximum frequency recorded was 16 071 Hz (Mean = 7139 Hz, SD = 2522 Hz). The mean frequency range (F.R.) for the whistles was 1148 Hz (Minimum F.R. = 0 Hz, Maximum F.R. = 4446 Hz; SD = 876 Hz). Whistles in the Bay of Brunei were compared with populations recorded from the waters of Matang and Kalimantan. The comparisons showed differences in whistle duration, minimum frequency, start frequency, and number of inflection points. Variation in whistle occurrence and frequency may be associated with surface behaviour, ambient noise, and recording limitations. This will be an important element when planning a monitoring program.

  6. PLOT3D/AMES, GENERIC UNIX VERSION USING DISSPLA (WITH TURB3D)

    NASA Technical Reports Server (NTRS)

    Buning, P.

    1994-01-01

    PLOT3D is an interactive graphics program designed to help scientists visualize computational fluid dynamics (CFD) grids and solutions. Today, supercomputers and CFD algorithms can provide scientists with simulations of such highly complex phenomena that obtaining an understanding of the simulations has become a major problem. Tools which help the scientist visualize the simulations can be of tremendous aid. PLOT3D/AMES offers more functions and features, and has been adapted for more types of computers than any other CFD graphics program. Version 3.6b+ is supported for five computers and graphic libraries. Using PLOT3D, CFD physicists can view their computational models from any angle, observing the physics of problems and the quality of solutions. As an aid in designing aircraft, for example, PLOT3D's interactive computer graphics can show vortices, temperature, reverse flow, pressure, and dozens of other characteristics of air flow during flight. As critical areas become obvious, they can easily be studied more closely using a finer grid. PLOT3D is part of a computational fluid dynamics software cycle. First, a program such as 3DGRAPE (ARC-12620) helps the scientist generate computational grids to model an object and its surrounding space. Once the grids have been designed and parameters such as the angle of attack, Mach number, and Reynolds number have been specified, a "flow-solver" program such as INS3D (ARC-11794 or COS-10019) solves the system of equations governing fluid flow, usually on a supercomputer. Grids sometimes have as many as two million points, and the "flow-solver" produces a solution file which contains density, x- y- and z-momentum, and stagnation energy for each grid point. With such a solution file and a grid file containing up to 50 grids as input, PLOT3D can calculate and graphically display any one of 74 functions, including shock waves, surface pressure, velocity vectors, and particle traces. 
PLOT3D's 74 functions are organized into five groups: 1) Grid Functions for grids, grid-checking, etc.; 2) Scalar Functions for contour or carpet plots of density, pressure, temperature, Mach number, vorticity magnitude, helicity, etc.; 3) Vector Functions for vector plots of velocity, vorticity, momentum, and density gradient, etc.; 4) Particle Trace Functions for rake-like plots of particle flow or vortex lines; and 5) Shock locations based on pressure gradient. TURB3D is a modification of PLOT3D which is used for viewing CFD simulations of incompressible turbulent flow. Input flow data consists of pressure, velocity and vorticity. Typical quantities to plot include local fluctuations in flow quantities and turbulent production terms, plotted in physical or wall units. PLOT3D/TURB3D includes both TURB3D and PLOT3D because the operation of TURB3D is identical to PLOT3D, and there is no additional sample data or printed documentation for TURB3D. Graphical capabilities of PLOT3D version 3.6b+ vary among the implementations available through COSMIC. Customers are encouraged to purchase and carefully review the PLOT3D manual before ordering the program for a specific computer and graphics library. There is only one manual for use with all implementations of PLOT3D, and although this manual generally assumes that the Silicon Graphics Iris implementation is being used, informative comments concerning other implementations appear throughout the text. With all implementations, the visual representation of the object and flow field created by PLOT3D consists of points, lines, and polygons. Points can be represented with dots or symbols, color can be used to denote data values, and perspective is used to show depth. Differences among implementations impact the program's ability to use graphical features that are based on 3D polygons, the user's ability to manipulate the graphical displays, and the user's ability to obtain alternate forms of output. 
The UNIX/DISSPLA implementation of PLOT3D supports 2-D polygons as well as 2-D and 3-D lines, but does not support graphics features requiring 3-D polygons (shading and hidden line removal, for example). Views can be manipulated using keyboard commands. This version of PLOT3D is potentially able to produce files for a variety of output devices; however, site-specific capabilities will vary depending on the device drivers supplied with the user's DISSPLA library. The version 3.6b+ UNIX/DISSPLA implementations of PLOT3D (ARC-12788) and PLOT3D/TURB3D (ARC-12778) were developed for use on computers running UNIX SYSTEM 5 with BSD 4.3 extensions. The standard distribution media for each of these programs is a 9-track, 6250 bpi magnetic tape in TAR format. Customers purchasing one implementation version of PLOT3D or PLOT3D/TURB3D will be given a $200 discount on each additional implementation version ordered at the same time. Version 3.6b+ of PLOT3D and PLOT3D/TURB3D are also supported for the following computers and graphics libraries: (1) generic UNIX Supercomputer and IRIS, suitable for CRAY 2/UNICOS, CONVEX, Alliant with remote IRIS 2xxx/3xxx or IRIS 4D (ARC-12779, ARC-12784); (2) Silicon Graphics IRIS 2xxx/3xxx or IRIS 4D (ARC-12783, ARC-12782); (3) VAX computers running VMS Version 5.0 and DISSPLA Version 11.0 (ARC-12777, ARC-12781); and (4) Apollo computers running UNIX and GMR3D Version 2.0 (ARC-12789, ARC-12785, which have no capability to put text on plots). Silicon Graphics Iris, IRIS 4D, and IRIS 2xxx/3xxx are trademarks of Silicon Graphics Incorporated. VAX and VMS are trademarks of Digital Equipment Corporation. DISSPLA is a trademark of Computer Associates. CRAY 2 and UNICOS are trademarks of CRAY Research, Incorporated. CONVEX is a trademark of Convex Computer Corporation. Alliant is a trademark of Alliant. Apollo and GMR3D are trademarks of Hewlett-Packard, Incorporated. System 5 is a trademark of Bell Labs, Incorporated. 
BSD4.3 is a trademark of the University of California at Berkeley. UNIX is a registered trademark of AT&T.

  7. PLOT3D/AMES, GENERIC UNIX VERSION USING DISSPLA (WITHOUT TURB3D)

    NASA Technical Reports Server (NTRS)

    Buning, P.

    1994-01-01

    PLOT3D is an interactive graphics program designed to help scientists visualize computational fluid dynamics (CFD) grids and solutions. Today, supercomputers and CFD algorithms can provide scientists with simulations of such highly complex phenomena that obtaining an understanding of the simulations has become a major problem. Tools which help the scientist visualize the simulations can be of tremendous aid. PLOT3D/AMES offers more functions and features, and has been adapted for more types of computers than any other CFD graphics program. Version 3.6b+ is supported for five computers and graphic libraries. Using PLOT3D, CFD physicists can view their computational models from any angle, observing the physics of problems and the quality of solutions. As an aid in designing aircraft, for example, PLOT3D's interactive computer graphics can show vortices, temperature, reverse flow, pressure, and dozens of other characteristics of air flow during flight. As critical areas become obvious, they can easily be studied more closely using a finer grid. PLOT3D is part of a computational fluid dynamics software cycle. First, a program such as 3DGRAPE (ARC-12620) helps the scientist generate computational grids to model an object and its surrounding space. Once the grids have been designed and parameters such as the angle of attack, Mach number, and Reynolds number have been specified, a "flow-solver" program such as INS3D (ARC-11794 or COS-10019) solves the system of equations governing fluid flow, usually on a supercomputer. Grids sometimes have as many as two million points, and the "flow-solver" produces a solution file which contains density, x- y- and z-momentum, and stagnation energy for each grid point. With such a solution file and a grid file containing up to 50 grids as input, PLOT3D can calculate and graphically display any one of 74 functions, including shock waves, surface pressure, velocity vectors, and particle traces. 
PLOT3D's 74 functions are organized into five groups: 1) Grid Functions for grids, grid-checking, etc.; 2) Scalar Functions for contour or carpet plots of density, pressure, temperature, Mach number, vorticity magnitude, helicity, etc.; 3) Vector Functions for vector plots of velocity, vorticity, momentum, and density gradient, etc.; 4) Particle Trace Functions for rake-like plots of particle flow or vortex lines; and 5) Shock locations based on pressure gradient. TURB3D is a modification of PLOT3D which is used for viewing CFD simulations of incompressible turbulent flow. Input flow data consists of pressure, velocity and vorticity. Typical quantities to plot include local fluctuations in flow quantities and turbulent production terms, plotted in physical or wall units. PLOT3D/TURB3D includes both TURB3D and PLOT3D because the operation of TURB3D is identical to PLOT3D, and there is no additional sample data or printed documentation for TURB3D. Graphical capabilities of PLOT3D version 3.6b+ vary among the implementations available through COSMIC. Customers are encouraged to purchase and carefully review the PLOT3D manual before ordering the program for a specific computer and graphics library. There is only one manual for use with all implementations of PLOT3D, and although this manual generally assumes that the Silicon Graphics Iris implementation is being used, informative comments concerning other implementations appear throughout the text. With all implementations, the visual representation of the object and flow field created by PLOT3D consists of points, lines, and polygons. Points can be represented with dots or symbols, color can be used to denote data values, and perspective is used to show depth. Differences among implementations impact the program's ability to use graphical features that are based on 3D polygons, the user's ability to manipulate the graphical displays, and the user's ability to obtain alternate forms of output. 
The UNIX/DISSPLA implementation of PLOT3D supports 2-D polygons as well as 2-D and 3-D lines, but does not support graphics features requiring 3-D polygons (shading and hidden line removal, for example). Views can be manipulated using keyboard commands. This version of PLOT3D is potentially able to produce files for a variety of output devices; however, site-specific capabilities will vary depending on the device drivers supplied with the user's DISSPLA library. The version 3.6b+ UNIX/DISSPLA implementations of PLOT3D (ARC-12788) and PLOT3D/TURB3D (ARC-12778) were developed for use on computers running UNIX SYSTEM 5 with BSD 4.3 extensions. The standard distribution media for each of these programs is a 9-track, 6250 bpi magnetic tape in TAR format. Customers purchasing one implementation version of PLOT3D or PLOT3D/TURB3D will be given a $200 discount on each additional implementation version ordered at the same time. Version 3.6b+ of PLOT3D and PLOT3D/TURB3D are also supported for the following computers and graphics libraries: (1) generic UNIX Supercomputer and IRIS, suitable for CRAY 2/UNICOS, CONVEX, Alliant with remote IRIS 2xxx/3xxx or IRIS 4D (ARC-12779, ARC-12784); (2) Silicon Graphics IRIS 2xxx/3xxx or IRIS 4D (ARC-12783, ARC-12782); (3) VAX computers running VMS Version 5.0 and DISSPLA Version 11.0 (ARC-12777, ARC-12781); and (4) Apollo computers running UNIX and GMR3D Version 2.0 (ARC-12789, ARC-12785, which have no capability to put text on plots). Silicon Graphics Iris, IRIS 4D, and IRIS 2xxx/3xxx are trademarks of Silicon Graphics Incorporated. VAX and VMS are trademarks of Digital Equipment Corporation. DISSPLA is a trademark of Computer Associates. CRAY 2 and UNICOS are trademarks of CRAY Research, Incorporated. CONVEX is a trademark of Convex Computer Corporation. Alliant is a trademark of Alliant. Apollo and GMR3D are trademarks of Hewlett-Packard, Incorporated. System 5 is a trademark of Bell Labs, Incorporated. 
BSD4.3 is a trademark of the University of California at Berkeley. UNIX is a registered trademark of AT&T.

  8. Probabilistic Guidance of Swarms using Sequential Convex Programming

    DTIC Science & Technology

    2014-01-01

    quadcopter fleet [24]. In this paper, sequential convex programming (SCP) [25] is implemented using model predictive control (MPC) to provide real-time...in order to make Problem 1 convex. The details for convexifying this problem can be found in [26]. The main steps are discretizing the problem using

  9. Rapid figure-ground responses to stereograms reveal an advantage for a convex foreground.

    PubMed

    Bertamini, Marco; Lawson, Rebecca

    2008-01-01

Convexity has long been recognised as a factor that affects figure-ground segmentation, even when pitted against other factors such as symmetry [Kanizsa and Gerbino, 1976, in Art and Artefacts, Ed. M Henle (New York: Springer), pp 25-32]. It is accepted in the literature that the difference between concave and convex contours is important for the visual system, and that there is a prior expectation favouring convexities as figure. We used bipartite stimuli and a simple task in which observers had to report whether the foreground was on the left or the right. We report objective evidence that supports the idea that convexity affects figure-ground assignment, even though our stimuli were not pictorial, in that depth order was specified unambiguously by binocular disparity.

  10. A Fast Algorithm of Convex Hull Vertices Selection for Online Classification.

    PubMed

    Ding, Shuguang; Nie, Xiangli; Qiao, Hong; Zhang, Bo

    2018-04-01

    Reducing samples through convex hull vertices selection (CHVS) within each class is an important and effective method for online classification problems, since the classifier can be trained rapidly with the selected samples. However, the process of CHVS is NP-hard. In this paper, we propose a fast algorithm to select the convex hull vertices, based on the convex hull decomposition and the property of projection. In the proposed algorithm, the quadratic minimization problem of computing the distance between a point and a convex hull is converted into a linear equation problem with a low computational complexity. When the data dimension is high, an approximate, instead of exact, convex hull is allowed to be selected by setting an appropriate termination condition in order to delete more nonimportant samples. In addition, the impact of outliers is also considered, and the proposed algorithm is improved by deleting the outliers in the initial procedure. Furthermore, a dimension convention technique via the kernel trick is used to deal with nonlinearly separable problems. An upper bound is theoretically proved for the difference between the support vector machines based on the approximate convex hull vertices selected and all the training samples. Experimental results on both synthetic and real data sets show the effectiveness and validity of the proposed algorithm.
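As a minimal sketch of the hull-vertex selection idea (not the authors' fast algorithm, and with made-up data), standard computational-geometry tools can reduce a class of samples to its convex hull vertices:

```python
import numpy as np
from scipy.spatial import ConvexHull

# One class of 2-D training samples (synthetic, for illustration only).
rng = np.random.default_rng(0)
points = rng.standard_normal((200, 2))

# Keep only the convex hull vertices: every discarded point is a
# convex combination of the kept ones, so the class region is preserved.
hull = ConvexHull(points)
selected = points[hull.vertices]

print(len(points), "samples reduced to", len(selected), "hull vertices")
```

In high dimensions the exact hull grows quickly, which is why the paper resorts to approximate hulls and early termination.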

  11. Polygonal tundra geomorphological change in response to warming alters future CO2 and CH4 flux on the Barrow Peninsula.

    PubMed

    Lara, Mark J; McGuire, A David; Euskirchen, Eugenie S; Tweedie, Craig E; Hinkel, Kenneth M; Skurikhin, Alexei N; Romanovsky, Vladimir E; Grosse, Guido; Bolton, W Robert; Genet, Helene

    2015-04-01

The landscape of the Barrow Peninsula in northern Alaska is thought to have formed over centuries to millennia, and is now dominated by ice-wedge polygonal tundra that spans drained thaw-lake basins and interstitial tundra. In nearby tundra regions, studies have identified a rapid increase in thermokarst formation (i.e., pits) over recent decades in response to climate warming, facilitating changes in polygonal tundra geomorphology. We assessed the future impact of 100 years of tundra geomorphic change on peak growing season carbon exchange in response to: (i) landscape succession associated with the thaw-lake cycle; and (ii) low, moderate, and extreme scenarios of thermokarst pit formation (10%, 30%, and 50%) reported for Alaskan arctic tundra sites. We developed a 30 × 30 m resolution tundra geomorphology map (overall accuracy: 75%; Kappa: 0.69) for our ~1800 km² study area, composed of ten classes: drained slope, high center polygon, flat center polygon, low center polygon, coalescent low center polygon, polygon trough, meadow, ponds, rivers, and lakes, to determine their spatial distribution across the Barrow Peninsula. Land-atmosphere CO2 and CH4 flux data were collected for the summers of 2006-2010 at eighty-two sites near Barrow, across the mapped classes. The developed geomorphic map was used for the regional assessment of carbon flux.
Results indicate that (i) at present, during peak growing season on the Barrow Peninsula, CO2 uptake occurs at -902.3 × 10^6 g C-CO2 day^-1 (95% CI: -438.3 to -1366 × 10^6 g C-CO2 day^-1) and CH4 flux at 28.9 × 10^6 g C-CH4 day^-1 (95% CI: 12.9 to 44.9 × 10^6 g C-CH4 day^-1); (ii) one century of future landscape change associated with the thaw-lake cycle only slightly alters CO2 and CH4 exchange; while (iii) moderate increases in thermokarst pits would strengthen both CO2 uptake (-166.9 × 10^6 g C-CO2 day^-1) and CH4 flux (2.8 × 10^6 g C-CH4 day^-1) with geomorphic change from low to high center polygons, cumulatively resulting in an estimated negative feedback to warming during peak growing season. © 2014 John Wiley & Sons Ltd.

  12. A review of accuracy assessment for object-based image analysis: From per-pixel to per-polygon approaches

    NASA Astrophysics Data System (ADS)

    Ye, Su; Pontius, Robert Gilmore; Rakshit, Rahul

    2018-07-01

    Object-based image analysis (OBIA) has gained widespread popularity for creating maps from remotely sensed data. Researchers routinely claim that OBIA procedures outperform pixel-based procedures; however, it is not immediately obvious how to evaluate the degree to which an OBIA map compares to reference information in a manner that accounts for the fact that the OBIA map consists of objects that vary in size and shape. Our study reviews 209 journal articles concerning OBIA published between 2003 and 2017. We focus on the three stages of accuracy assessment: (1) sampling design, (2) response design and (3) accuracy analysis. First, we report the literature's overall characteristics concerning OBIA accuracy assessment. Simple random sampling was the most used method among probability sampling strategies, slightly more than stratified sampling. Office interpreted remotely sensed data was the dominant reference source. The literature reported accuracies ranging from 42% to 96%, with an average of 85%. A third of the articles failed to give sufficient information concerning accuracy methodology such as sampling scheme and sample size. We found few studies that focused specifically on the accuracy of the segmentation. Second, we identify a recent increase of OBIA articles in using per-polygon approaches compared to per-pixel approaches for accuracy assessment. We clarify the impacts of the per-pixel versus the per-polygon approaches respectively on sampling, response design and accuracy analysis. Our review defines the technical and methodological needs in the current per-polygon approaches, such as polygon-based sampling, analysis of mixed polygons, matching of mapped with reference polygons and assessment of segmentation accuracy. Our review summarizes and discusses the current issues in object-based accuracy assessment to provide guidance for improved accuracy assessments for OBIA.
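To make the reported accuracy statistics concrete, overall accuracy and Cohen's Kappa can be computed from a confusion matrix as follows (a synthetic three-class matrix, not data from any reviewed study):

```python
import numpy as np

# Synthetic 3-class confusion matrix (rows = reference, cols = mapped);
# the numbers are illustrative, not taken from any study reviewed here.
cm = np.array([[50,  5,  5],
               [ 4, 60,  6],
               [ 6,  4, 60]])

n = cm.sum()
overall = np.trace(cm) / n                    # fraction of agreeing samples
pe = (cm.sum(0) * cm.sum(1)).sum() / n**2     # agreement expected by chance
kappa = (overall - pe) / (1 - pe)             # Cohen's Kappa

print("overall accuracy:", round(overall, 3), " kappa:", round(kappa, 3))
```

The same arithmetic applies per-polygon; the difference lies in what counts as a sample unit and how mixed polygons are tallied.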

  13. Evapotranspiration across plant types and geomorphological units in polygonal Arctic tundra

    NASA Astrophysics Data System (ADS)

    Raz-Yaseef, Naama; Young-Robertson, Jessica; Rahn, Thom; Sloan, Victoria; Newman, Brent; Wilson, Cathy; Wullschleger, Stan D.; Torn, Margaret S.

    2017-10-01

Coastal tundra ecosystems are relatively flat, and yet display large spatial variability in ecosystem traits. The microtopographical differences in polygonal geomorphology produce heterogeneity in permafrost depth, soil temperature, soil moisture, soil geochemistry, and plant distribution. Few measurements have been made, however, of how water fluxes vary across polygonal tundra plant types, limiting our ability to understand and model these ecosystems. Our objective was to investigate how plant distribution and geomorphological location affect actual evapotranspiration (ET). These effects are especially critical in light of the rapid change polygonal tundra systems are experiencing with Arctic warming. At a field site near Barrow, Alaska, USA, we investigated the relationships between ET and plant cover in 2014 and 2015. ET was measured at a range of spatial and temporal scales using: (1) an eddy covariance flux tower for continuous landscape-scale monitoring; (2) an automated clear surface chamber over dry vegetation in a fixed location for continuous plot-scale monitoring; and (3) manual measurements with a clear portable chamber in approximately 60 locations across the landscape. We found that variation in environmental conditions and plant community composition, driven by microtopographical features, has a significant influence on ET. Among plant types, ET from moss-covered and inundated areas was more than twice that from other plant types. ET from troughs and low polygonal centers was significantly higher than from high polygonal centers. ET varied seasonally, with peak fluxes of 0.14 mm h^-1 in July. Despite 24 hours of daylight in summer, diurnal fluctuations in incoming solar radiation and plant processes produced a diurnal cycle in ET.
Combining the patterns we observed with projections for the impact of permafrost degradation on polygonal structure suggests that microtopographic changes associated with permafrost thaw have the potential to alter tundra ecosystem ET.

  14. Modeling the spatio-temporal variability in subsurface thermal regimes across a low-relief polygonal tundra landscape: Modeling Archive

    DOE Data Explorer

    Kumar, Jitendra; Collier, Nathan; Bisht, Gautam; Mills, Richard T.; Thornton, Peter E.; Iversen, Colleen M.; Romanovsky, Vladimir

    2016-01-27

This Modeling Archive is in support of an NGEE Arctic discussion paper under review and available at http://www.the-cryosphere-discuss.net/tc-2016-29/. Vast carbon stocks stored in permafrost soils of Arctic tundra are at risk of release to the atmosphere under a warming climate. Ice-wedge polygons in the low-gradient polygonal tundra create a complex mosaic of microtopographic features. The microtopography plays a critical role in regulating the fine-scale variability in thermal and hydrological regimes in the polygonal tundra landscape underlain by continuous permafrost. Modeling of the thermal regimes of this sensitive ecosystem is essential for understanding the landscape behaviour under current as well as changing climate. We present here an end-to-end effort for high resolution numerical modeling of thermal hydrology at real-world field sites, utilizing the best available data to characterize and parameterize the models. We develop approaches to model the thermal hydrology of polygonal tundra and apply them at four study sites at Barrow, Alaska, spanning low-centered, transitional, and high-centered polygons and representative of the broad polygonal tundra landscape. A multi-phase subsurface thermal hydrology model (PFLOTRAN) was developed and applied to study the thermal regimes at the four sites. Using a high resolution LiDAR DEM, microtopographic features of the landscape were characterized and represented in the high resolution model mesh. The best available soil data from field observations and literature were utilized to represent the complex heterogeneous subsurface in the numerical model. This data collection provides the complete set of input files, forcing data sets and computational meshes for simulations using PFLOTRAN for four sites at the Barrow Environmental Observatory. It also documents the complete computational workflow for this modeling study to allow verification, reproducibility and follow-up studies.

  15. In-gap corner states in core-shell polygonal quantum rings

    PubMed Central

    Sitek, Anna; Ţolea, Mugurel; Niţă, Marian; Serra, Llorenç; Gudmundsson, Vidar; Manolescu, Andrei

    2017-01-01

    We study Coulomb interacting electrons confined in polygonal quantum rings. We focus on the interplay of localization at the polygon corners and Coulomb repulsion. Remarkably, the Coulomb repulsion allows the formation of in-gap states, i.e., corner-localized states of electron pairs or clusters shifted to energies that were forbidden for non-interacting electrons, but below the energies of corner-side-localized states. We specify conditions allowing optical excitation to those states. PMID:28071750

  16. Point Relay Scanner Utilizing Ellipsoidal Mirrors

    NASA Technical Reports Server (NTRS)

    Manhart, Paul K. (Inventor); Pagano, Robert J. (Inventor)

    1997-01-01

A scanning system uses a polygonal mirror assembly with each facet of the polygon having an ellipsoidal mirror located thereon. One focal point of each ellipsoidal mirror is located at a common point on the axis of rotation of the polygonal mirror assembly. As the mirror assembly rotates, a second focal point of the ellipsoidal mirrors traces out a scan line. The scanner can be utilized for scanned output display of information or for scanning information to be detected.

  17. Analysis of Elastic and Electrical Fields in Quantum Structures by Novel Green’s Functions and Related Boundary Integral Methods

    DTIC Science & Technology

    2010-12-01

    arbitrarily shaped polygon QWR inclusion/inhomogeneity with eigenstrain ∗Ijγ in an anisotropic substrate... eigenstrain *ijγ is applied to the QWR which is an arbitrarily shaped polygon .................................. 42 3.2 A square InAs QWR embedded in...the QWR domain V and to 0 outside. Figure 2.1 An arbitrarily shaped polygon QWR inclusion/inhomogeneity with eigenstrain ∗Ijγ in an anisotropic

  18. Global regularizing flows with topology preservation for active contours and polygons.

    PubMed

    Sundaramoorthi, Ganesh; Yezzi, Anthony

    2007-03-01

    Active contour and active polygon models have been used widely for image segmentation. In some applications, the topology of the object(s) to be detected from an image is known a priori, despite a complex unknown geometry, and it is important that the active contour or polygon maintain the desired topology. In this work, we construct a novel geometric flow that can be added to image-based evolutions of active contours and polygons in order to preserve the topology of the initial contour or polygon. We emphasize that, unlike other methods for topology preservation, the proposed geometric flow continually adjusts the geometry of the original evolution in a gradual and graceful manner so as to prevent a topology change long before the curve or polygon becomes close to topology change. The flow also serves as a global regularity term for the evolving contour, and has smoothness properties similar to curvature flow. These properties of gradually adjusting the original flow and global regularization prevent geometrical inaccuracies common with simple discrete topology preservation schemes. The proposed topology preserving geometric flow is the gradient flow arising from an energy that is based on electrostatic principles. The evolution of a single point on the contour depends on all other points of the contour, which is different from traditional curve evolutions in the computer vision literature.

  19. Global and local indicators of spatial association between points and polygons: A study of land use change

    NASA Astrophysics Data System (ADS)

    Guo, Luo; Du, Shihong; Haining, Robert; Zhang, Lianjun

    2013-04-01

    The existing indicators related to spatial association, especially the K function, can measure only the same dimension of vector data, such as points, lines and polygons, respectively. We develop four new indicators that can analyze and model spatial association for the mixture of different dimensions of vector data, such as lines and points, points and polygons, lines and polygons. The four indicators can measure the spatial association between points and polygons from both global and local perspectives. We also apply the presented methods to investigate the association of temples and villages on land-use change at multiple distance scales in the Guoluo Tibetan Autonomous Prefecture in Qinghai Province, PR China. Global indicators show that temples are positively associated with land-use change at large spatial distances (e.g., >6000 m), while the association between villages and land-use change is insignificant at all distance scales. Thus temples, as religious and cultural centers, have a stronger association with land-use change than the places where people live. However, local indicators show that these associations vary significantly in different sub-areas of the study region. Furthermore, the association of temples with land-use change is also dependent on the specific type of land-use change. The case study demonstrates that the presented indicators are powerful tools for analyzing the spatial association between points and polygons.

  20. Robust polygon recognition method with similarity invariants applied to star identification

    NASA Astrophysics Data System (ADS)

    Hernández, E. Antonio; Alonso, Miguel A.; Chávez, Edgar; Covarrubias, David H.; Conte, Roberto

    2017-02-01

In the star identification process the goal is to recognize a star by using the celestial bodies in its vicinity as context. An additional requirement is to avoid having to perform an exhaustive scan of the star database. In this paper we present a novel approach to star identification using similarity invariants. More specifically, the proposed algorithm defines a polygon for each star, using the neighboring celestial bodies in the field of view as vertices. The mapping is insensitive to similarity transformations; that is, the image of the polygon under the transformation is not affected by rotation, scaling or translation. Each polygon is associated with an essentially unique complex number. We perform an exhaustive experimental validation of the proposed algorithm using synthetic data generated from the star catalog with uniformly-distributed positional noise introduced to each star. The star identification method that we present is proven to be robust, achieving a recognition rate of 99.68% when noise levels of up to ±424 μrad are introduced to the locations of the stars. In our tests the proposed algorithm shows that if a polygon match is found, it always corresponds to the star under analysis; no mismatches are found. In its present form our method cannot identify polygons in cases where there are missing or false stars in the analyzed images; in those situations it only indicates that no match was found.
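One classical way to build such a similarity invariant (a sketch of the general idea, not necessarily the authors' exact mapping) treats the polygon vertices as complex numbers; ratios of vertex differences are unchanged by any map z -> a*z + b, i.e. by rotation, scaling and translation:

```python
import numpy as np

def signature(vertices):
    """Complex-ratio signature of a polygon: invariant under any
    similarity transform z -> a*z + b, since a and b cancel."""
    z = np.asarray(vertices, dtype=complex)
    return (z[2:] - z[0]) / (z[1] - z[0])

poly = np.array([0 + 0j, 2 + 0j, 3 + 2j, 1 + 3j])

# Apply an arbitrary rotation+scaling (a) and translation (b).
a, b = 0.5 * np.exp(1j * 0.7), 5 - 2j
transformed = a * poly + b

# The signatures agree, so the transformed polygon is recognized.
print(signature(poly))
print(signature(transformed))
```

A database lookup then compares signatures rather than raw star positions, which is why no exhaustive catalog scan is needed.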

  1. Area collapse algorithm computing new curve of 2D geometric objects

    NASA Astrophysics Data System (ADS)

    Buczek, Michał Mateusz

    2017-06-01

    The processing of cartographic data demands human involvement. Up-to-date algorithms try to automate a part of this process. The goal is to obtain a digital model, or additional information about shape and topology of input geometric objects. A topological skeleton is one of the most important tools in the branch of science called shape analysis. It represents topological and geometrical characteristics of input data. Its plot depends on using algorithms such as medial axis, skeletonization, erosion, thinning, area collapse and many others. Area collapse, also known as dimension change, replaces input data with lower-dimensional geometric objects like, for example, a polygon with a polygonal chain, a line segment with a point. The goal of this paper is to introduce a new algorithm for the automatic calculation of polygonal chains representing a 2D polygon. The output is entirely contained within the area of the input polygon, and it has a linear plot without branches. The computational process is automatic and repeatable. The requirements of input data are discussed. The author analyzes results based on the method of computing ends of output polygonal chains. Additional methods to improve results are explored. The algorithm was tested on real-world cartographic data received from BDOT/GESUT databases, and on point clouds from laser scanning. An implementation for computing hatching of embankment is described.

  2. Overview of fast algorithm in 3D dynamic holographic display

    NASA Astrophysics Data System (ADS)

    Liu, Juan; Jia, Jia; Pan, Yijie; Wang, Yongtian

    2013-08-01

3D dynamic holographic display is one of the most attractive techniques for achieving real 3D vision with full depth cues without any extra devices. However, a huge amount of 3D information and data must be processed and computed in real time to generate the hologram in 3D dynamic holographic display, which is a challenge even for the most advanced computers. Many fast algorithms have been proposed to speed up the calculation and reduce memory usage, such as the look-up table (LUT), compressed look-up table (C-LUT), split look-up table (S-LUT), and novel look-up table (N-LUT) approaches based on the point-based method, and the fully analytical and one-step methods based on the polygon-based method. In this presentation, we overview various fast algorithms based on the point-based method and the polygon-based method, and focus on the fast algorithms with low memory usage: the C-LUT, and the one-step polygon-based method using the 2D Fourier analysis of the 3D affine transformation. The numerical simulations and the optical experiments are presented, and several other algorithms are compared. The results show that the C-LUT algorithm and the one-step polygon-based method are efficient methods for saving calculation time. It is believed that these methods could be used in real-time 3D holographic display in the future.

  3. Area-based tests for association between spatial patterns

    NASA Astrophysics Data System (ADS)

    Maruca, Susan L.; Jacquez, Geoffrey M.

Edge effects pervade natural systems, and the processes that determine spatial heterogeneity (e.g. physical, geochemical, biological, ecological factors) occur on diverse spatial scales. Hence, tests for association between spatial patterns should be unbiased by edge effects and be based on null spatial models that incorporate the spatial heterogeneity characteristic of real-world systems. This paper develops probabilistic pattern association tests that are appropriate when edge effects are present, polygon size is heterogeneous, and the number of polygons varies from one classification to another. The tests are based on the amount of overlap between polygons in each of two partitions. Unweighted and area-weighted versions of the statistics are developed and verified using scenarios representing both polygon overlap and avoidance at different spatial scales and for different distributions of polygon sizes. These statistics were applied to Soda Butte Creek, Wyoming, to determine whether stream microhabitats, such as riffles, pools and glides, can be identified remotely using high spatial resolution hyperspectral imagery. These new "spatially explicit" techniques provide information and insights that cannot be obtained from the spectral information alone.

  4. The Knaster-Kuratowski-Mazurkiewicz theorem and abstract convexities

    NASA Astrophysics Data System (ADS)

    Cain, George L., Jr.; González, Luis

    2008-02-01

The Knaster-Kuratowski-Mazurkiewicz covering theorem (KKM) is the basic ingredient in the proofs of many so-called "intersection" theorems and related fixed point theorems (including the famous Brouwer fixed point theorem). The KKM theorem was extended from Rn to Hausdorff linear spaces by Ky Fan. There has subsequently been a plethora of attempts at extending KKM-type results to arbitrary topological spaces. Virtually all of these involve the introduction of some sort of abstract convexity structure for a topological space; among others we could mention H-spaces and G-spaces. We have introduced a new abstract convexity structure that generalizes the concept of a metric space with a convex structure, introduced by E. Michael in [E. Michael, Convex structures and continuous selections, Canad. J. Math. 11 (1959) 556-575], and called a topological space endowed with this structure an M-space. In an article by Sehie Park and Hoonjoo Kim [S. Park, H. Kim, Coincidence theorems for admissible multifunctions on generalized convex spaces, J. Math. Anal. Appl. 197 (1996) 173-187], the concepts of G-spaces and metric spaces with Michael's convex structure were mentioned together, but no kind of relationship was shown. In this article, we prove that G-spaces and M-spaces are closely related. We also introduce here the concept of an L-space, which is inspired by the MC-spaces of J.V. Llinares [J.V. Llinares, Unified treatment of the problem of existence of maximal elements in binary relations: A characterization, J. Math. Econom. 29 (1998) 285-302], and establish relationships between the convexities of these spaces and those previously mentioned.

  5. Generation of oculomotor images during tasks requiring visual recognition of polygons.

    PubMed

    Olivier, G; de Mendoza, J L

    2001-06-01

    This paper concerns the contribution of mentally simulated ocular exploration to generation of a visual mental image. In Exp. 1, repeated exploration of the outlines of an irregular decagon allowed an incidental learning of the shape. Analyses showed subjects memorized their ocular movements rather than the polygon. In Exp. 2, exploration of a reversible figure such as a Necker cube varied in opposite directions. Then, both perspective possibilities are presented. The perspective the subjects recognized depended on the way they explored the ambiguous figure. In both experiments, during recognition the subjects recalled a visual mental image of the polygon they compared with the different polygons proposed for recognition. To interpret the data, hypotheses concerning common processes underlying both motor intention of ocular movements and generation of a visual image are suggested.

  6. Trajectory data privacy protection based on differential privacy mechanism

    NASA Astrophysics Data System (ADS)

    Gu, Ke; Yang, Lihao; Liu, Yongzhi; Liao, Niandong

    2018-05-01

In this paper, we propose a trajectory data privacy protection scheme based on a differential privacy mechanism. In the proposed scheme, the algorithm first selects the points to be protected from the user's trajectory data; secondly, the algorithm forms a polygon from the protected points and the adjacent, frequently accessed points selected from the accessing-point database, and calculates the polygon centroids; finally, noise is added to the polygon centroids by the differential privacy method, the noisy centroids replace the protected points, and the algorithm constructs and issues the new trajectory data. The experiments show that the running time of the proposed algorithms is short, the privacy protection of the scheme is effective, and the data usability of the scheme is high.
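The centroid-perturbation step can be sketched minimally as a vertex centroid plus Laplace noise (the epsilon and sensitivity values below are illustrative assumptions, not the paper's parameters):

```python
import numpy as np

def dp_centroid(points, epsilon, sensitivity, seed=1):
    """Vertex centroid of a polygon with Laplace noise added, a minimal
    sketch of the differential-privacy step described above. The vertex
    centroid (mean of vertices) is used for simplicity; epsilon and
    sensitivity are illustrative, not values from the paper."""
    centroid = np.mean(points, axis=0)
    rng = np.random.default_rng(seed)
    noise = rng.laplace(0.0, sensitivity / epsilon, size=centroid.shape)
    return centroid + noise

# A square polygon around a protected trajectory point.
polygon = np.array([[0.0, 0.0], [4.0, 0.0], [4.0, 4.0], [0.0, 4.0]])
noisy = dp_centroid(polygon, epsilon=1.0, sensitivity=0.1)
print(noisy)   # released in place of the protected point
```

Smaller epsilon means larger noise and stronger privacy; the paper's experiments trade this off against trajectory usability.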

  7. The Band around a Convex Body

    ERIC Educational Resources Information Center

    Swanson, David

    2011-01-01

We give elementary proofs of formulas for the area and perimeter of a planar convex body surrounded by a band of uniform thickness. The primary tool is an integral formula for the perimeter of a convex body which describes the perimeter in terms of the projections of the body onto lines in the plane.
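The formulas in question are the classical parallel-body (Steiner) relations: for a planar convex body with area $A$ and perimeter $P$ surrounded by a band of uniform thickness $d$, the band's area and the outer boundary's perimeter are

```latex
A_{\text{band}} = P\,d + \pi d^{2}, \qquad P_{\text{outer}} = P + 2\pi d .
```

The $\pi d^{2}$ term is exactly the area swept at the corners, which is why the result is independent of the body's shape.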

  8. Nested Conjugate Gradient Algorithm with Nested Preconditioning for Non-linear Image Restoration.

    PubMed

    Skariah, Deepak G; Arigovindan, Muthuvel

    2017-06-19

We develop a novel optimization algorithm, which we call the Nested Non-Linear Conjugate Gradient algorithm (NNCG), for image restoration based on quadratic data fitting and smooth non-quadratic regularization. The algorithm is constructed as a nesting of two conjugate gradient (CG) iterations. The outer iteration is constructed as a preconditioned non-linear CG algorithm; the preconditioning is performed by the inner CG iteration, which is linear. The inner CG iteration, which performs preconditioning for the outer CG iteration, is itself accelerated by another FFT-based non-iterative preconditioner. We prove that the method converges to a stationary point for both convex and non-convex regularization functionals. We demonstrate experimentally that the proposed method outperforms the well-known majorization-minimization method used for convex regularization, and a non-convex inertial-proximal method for non-convex regularization functionals.
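The inner iteration the abstract refers to is a linear CG solve; a textbook version (not the authors' preconditioned implementation) looks like this:

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iter=100):
    """Textbook linear CG for a symmetric positive definite A; stands in
    for the inner iteration described above, not the authors' code."""
    x = np.zeros_like(b)
    r = b - A @ x          # residual
    p = r.copy()           # search direction
    rs = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p   # conjugate update of the direction
        rs = rs_new
    return x

A = np.array([[4.0, 1.0], [1.0, 3.0]])   # small SPD system
b = np.array([1.0, 2.0])
x = conjugate_gradient(A, b)
print(x)   # satisfies A @ x ≈ b
```

In NNCG this solve is applied approximately, as a preconditioner inside each outer non-linear CG step.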

  9. Convex set and linear mixing model

    NASA Technical Reports Server (NTRS)

    Xu, P.; Greeley, R.

    1993-01-01

    A major goal of optical remote sensing is to determine surface compositions of the earth and other planetary objects. For assessment of composition, single pixels in multi-spectral images usually record a mixture of the signals from various materials within the corresponding surface area. In this report, we introduce a closed and bounded convex set as a mathematical model for linear mixing. This model has a clear geometric implication because the closed and bounded convex set is a natural generalization of a triangle in n-space. The endmembers are extreme points of the convex set. Every point in the convex closure of the endmembers is a linear mixture of those endmembers, which is exactly how linear mixing is defined. With this model, some general criteria for selecting endmembers could be described. This model can lead to a better understanding of linear mixing models.
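A common computational counterpart of this model (a sketch with made-up spectra, not the authors' procedure) solves for nonnegative abundances that sum to one, i.e. the convex coefficients of the endmembers:

```python
import numpy as np
from scipy.optimize import nnls

# Endmember spectra as columns (3 bands, 2 endmembers); values are
# made up for illustration, not from any instrument.
E = np.array([[1.0, 0.2],
              [0.5, 0.8],
              [0.1, 0.9]])
true_ab = np.array([0.3, 0.7])   # abundances: nonnegative, sum to 1
pixel = E @ true_ab              # perfectly mixed pixel signal

# Enforce the sum-to-one constraint softly by appending a heavily
# weighted row of ones, then solve nonnegative least squares.
w = 1e3
E_aug = np.vstack([E, w * np.ones(2)])
p_aug = np.append(pixel, w)
abundances, _ = nnls(E_aug, p_aug)
print(abundances)   # recovers a point in the convex hull of the endmembers
```

The recovered abundance vector is exactly a point of the closed, bounded convex set the abstract describes: the convex hull of the endmembers.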

  10. Curved butterfly bileaflet prosthetic cardiac valve

    DOEpatents

    McQueen, David M.; Peskin, Charles S.

    1991-06-25

An annular valve body having a central passageway for the flow of blood therethrough with two curved leaflets each of which is pivotally supported on an eccentrically positioned axis in the central passageway for moving between a closed position and an open position. The leaflets are curved in a plane normal to the eccentric axis and positioned with the convex sides of the leaflets facing each other when the leaflets are in the open position. Various parameters such as the curvature of the leaflets, the location of the eccentric axis, and the maximum opening angle of the leaflets are optimized according to the following performance criteria: maximize the minimum peak velocity through the valve, maximize the net stroke volume, and minimize the mean forward pressure difference, thereby reducing thrombosis and improving the hemodynamic performance.

  11. Shaping asteroid models using genetic evolution (SAGE)

    NASA Astrophysics Data System (ADS)

    Bartczak, P.; Dudziński, G.

    2018-02-01

    In this work, we present SAGE (shaping asteroid models using genetic evolution), an asteroid modelling algorithm based solely on photometric lightcurve data. It produces non-convex shapes, orientations of the rotation axes and rotational periods of asteroids. The main concept behind a genetic evolution algorithm is to produce random populations of shapes and spin-axis orientations by mutating a seed shape and iterating the process until it converges to a stable global minimum. We tested SAGE on five artificial shapes. We also modelled asteroids 433 Eros and 9 Metis, since ground truth observations for them exist, allowing us to validate the models. We compared the derived shape of Eros with the NEAR Shoemaker model and that of Metis with adaptive optics and stellar occultation observations since other models from various inversion methods were available for Metis.
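The mutate-and-keep-the-fittest loop at the heart of a genetic evolution scheme can be sketched on a toy 1-D objective (illustrative parameters only; SAGE itself mutates 3-D shapes and scores them against lightcurves):

```python
import numpy as np

# Toy "genetic evolution" loop in the spirit described above: mutate a
# population around the current best candidate and keep the fittest.
rng = np.random.default_rng(42)

def fitness(x):
    return (x - 3.0) ** 2          # toy objective; global minimum at x = 3

best = 10.0                        # seed candidate
for generation in range(200):
    # Random mutations of the seed form the next population.
    population = best + rng.normal(0.0, 0.5, size=20)
    candidate = min(population, key=fitness)
    if fitness(candidate) < fitness(best):
        best = candidate           # survival of the fittest

print(best)   # converges near the minimum
```

Iterating mutation and selection until the score stops improving is what the abstract means by converging to a stable global minimum.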

  12. On the constrained minimization of smooth Kurdyka—Łojasiewicz functions with the scaled gradient projection method

    NASA Astrophysics Data System (ADS)

    Prato, Marco; Bonettini, Silvia; Loris, Ignace; Porta, Federica; Rebegoldi, Simone

    2016-10-01

    The scaled gradient projection (SGP) method is a first-order optimization method applicable to the constrained minimization of smooth functions and exploiting a scaling matrix multiplying the gradient and a variable steplength parameter to improve the convergence of the scheme. For a general nonconvex function, the limit points of the sequence generated by SGP have been proved to be stationary, while in the convex case and with some restrictions on the choice of the scaling matrix the sequence itself converges to a constrained minimum point. In this paper we extend these convergence results by showing that the SGP sequence converges to a limit point provided that the objective function satisfies the Kurdyka-Łojasiewicz property at each point of its domain and its gradient is Lipschitz continuous.
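
    A minimal sketch of the SGP iteration on a box-constrained quadratic, assuming a fixed diagonal scaling and a constant steplength (the paper analyses variable choices of both):

```python
# Scaled gradient projection sketch: x <- P_C(x - alpha * D * grad f(x)),
# with C a box and D a diagonal scaling matrix. Illustrative only.
def project_box(x, lo, hi):
    return [min(max(xi, lo), hi) for xi in x]

def sgp(grad, x0, scale, alpha=0.1, iters=500, lo=0.0, hi=10.0):
    x = list(x0)
    for _ in range(iters):
        g = grad(x)
        step = [alpha * d * gi for d, gi in zip(scale, g)]
        x = project_box([xi - si for xi, si in zip(x, step)], lo, hi)
    return x

# f(x) = (x0 - 3)^2 + (x1 + 1)^2 constrained to [0, 10]^2;
# the constrained minimiser is (3, 0).
grad = lambda x: [2 * (x[0] - 3.0), 2 * (x[1] + 1.0)]
x_star = sgp(grad, [5.0, 5.0], scale=[1.0, 0.5])
```

Note how the second coordinate settles exactly on the boundary of the feasible box, which is what the projection step is for.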

  13. Methods to determine the growth domain in a multidimensional environmental space.

    PubMed

    Le Marc, Yvan; Pin, Carmen; Baranyi, József

    2005-04-15

    Data from a database on microbial responses to the food environment (ComBase, see www.combase.cc) were used to study the boundary of growth of several pathogens (Aeromonas hydrophila, Escherichia coli, Listeria monocytogenes, Yersinia enterocolitica). Two methods were used to evaluate the growth/no growth interface. The first one is an application of the Minimum Convex Polyhedron (MCP) introduced by Baranyi et al. [Baranyi, J., Ross, T., McMeekin, T., Roberts, T.A., 1996. The effect of parameterisation on the performance of empirical models used in Predictive Microbiology. Food Microbiol. 13, 83-91.]. The second method applies logistic regression to define the boundary of growth. The combination of these two different techniques can be a useful tool to handle the problem of extrapolating predictive models at the growth limits.
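
    A two-dimensional sketch of the MCP idea, assuming (hypothetically) that a new environmental condition lies in the growth domain when it falls inside the convex hull of observed growth-permitting conditions; the hull code is Andrew's monotone chain, not the ComBase implementation:

```python
# Convex hull of observed growth conditions, then a point-in-convex-polygon
# membership test. The (temperature, pH) data below are invented.
def cross(o, a, b):
    return (a[0]-o[0])*(b[1]-o[1]) - (a[1]-o[1])*(b[0]-o[0])

def convex_hull(points):                     # Andrew's monotone chain, CCW
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

def inside(hull, p):
    # p is inside a CCW convex polygon when it is left of every edge
    n = len(hull)
    return all(cross(hull[i], hull[(i+1) % n], p) >= 0 for i in range(n))

# (temperature, pH) pairs where growth was observed
growth = [(5, 5.0), (5, 7.0), (40, 5.0), (40, 7.0), (20, 4.5), (20, 7.5)]
hull = convex_hull(growth)
```

The MCP is deliberately conservative: any condition outside the hull of observed growth data is flagged as extrapolation.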

  14. Polar Polygons

    NASA Technical Reports Server (NTRS)

    2005-01-01

    18 August 2005 This Mars Global Surveyor (MGS) Mars Orbiter Camera (MOC) image shows dark-outlined polygons on a frost-covered surface in the south polar region of Mars. In summer, this surface would not be bright and the polygons would not have dark outlines--these are a product of the presence of seasonal frost.

    Location near: 77.2°S, 204.8°W Image width: 3 km (1.9 mi) Illumination from: upper left Season: Southern Spring

  15. Polygons of global undersea features for geographic searches

    USGS Publications Warehouse

    Hartwell, Stephen R.; Wingfield, Dana K.; Allwardt, Alan O.; Lightsom, Frances L.; Wong, Florence L.

    2018-01-01

    A shapefile of 311 undersea features from all major oceans and seas has been created as an aid for retrieving georeferenced information resources. Geospatial information systems with the capability to search user-defined, polygonal geographic areas will be able to utilize this shapefile or secondary products derived from it, such as linked data based on well-known text representations of the individual polygons within the shapefile. Version 1.1 of this report also includes a linked data representation of 299 of these features and their spatial extents.
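
    A sketch of how such polygons support geographic searches, using an invented feature and a plain bounding-box prefilter; the WKT layout follows the standard `POLYGON ((lon lat, ...))` form, but the feature name and coordinates are illustrative only, not taken from the USGS product:

```python
# Turn a feature's vertex ring into well-known text (WKT) and filter
# features against a user-defined lon/lat search rectangle.
def to_wkt(ring):
    coords = ", ".join(f"{lon} {lat}" for lon, lat in ring)
    return f"POLYGON (({coords}))"

def bbox(ring):
    lons, lats = zip(*ring)
    return min(lons), min(lats), max(lons), max(lats)

def bbox_overlaps(a, b):
    # axis-aligned rectangle intersection test
    return a[0] <= b[2] and b[0] <= a[2] and a[1] <= b[3] and b[1] <= a[3]

features = {
    "Mendocino Ridge": [(-127, 40), (-124, 40), (-124, 41),
                        (-127, 41), (-127, 40)],
}
search = (-126, 39, -123, 42)            # lon_min, lat_min, lon_max, lat_max
hits = [name for name, ring in features.items()
        if bbox_overlaps(bbox(ring), search)]
```

A real system would follow the bounding-box prefilter with an exact polygon intersection test.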

  16. Operating System Support for Mobile Interactive Applications

    DTIC Science & Technology

    2002-08-01

    Scene and polygon counts recovered from the report's table: Taj Mahal, 127406 polygons; Café, 138598; Notre Dame, 160206; Buckingham Palace (interior), 235572. [Figure residue removed: plots of rendering demand in millions of cycles versus the number of polygons rendered for the four scenes, for (a) a random camera position and (b) a fixed camera position; the x-axis gives the number of polygons rendered relative to the original model size.]

  17. ARCGRAPH SYSTEM - AMES RESEARCH GRAPHICS SYSTEM

    NASA Technical Reports Server (NTRS)

    Hibbard, E. A.

    1994-01-01

    Ames Research Graphics System, ARCGRAPH, is a collection of libraries and utilities which assist researchers in generating, manipulating, and visualizing graphical data. In addition, ARCGRAPH defines a metafile format that contains device independent graphical data. This file format is used with various computer graphics manipulation and animation packages at Ames, including SURF (COSMIC Program ARC-12381) and GAS (COSMIC Program ARC-12379). In its full configuration, the ARCGRAPH system consists of a two-stage pipeline which may be used to output graphical primitives. Stage one is associated with the graphical primitives (i.e. moves, draws, color, etc.) along with the creation and manipulation of the metafiles. Five distinct data filters make up stage one. They are: 1) PLO which handles all 2D vector primitives, 2) POL which handles all 3D polygonal primitives, 3) RAS which handles all 2D raster primitives, 4) VEC which handles all 3D vector primitives, and 5) PO2 which handles all 2D polygonal primitives. Stage two is associated with the process of displaying graphical primitives on a device. To generate the various graphical primitives, create and reprocess ARCGRAPH metafiles, and access the device drivers in the VDI (Video Device Interface) library, users link their applications to ARCGRAPH's GRAFIX library routines. Both FORTRAN and C language versions of the GRAFIX and VDI libraries exist for enhanced portability within these respective programming environments. The ARCGRAPH libraries were developed on a VAX running VMS.
Minor documented modification of various routines, however, allows the system to run on the following computers: Cray X-MP running COS (no C version); Cray 2 running UNICOS; DEC VAX running BSD 4.3 UNIX or Ultrix; SGI IRIS Turbo running GL2-W3.5 and GL2-W3.6; Convex C1 running UNIX; Amdahl 5840 running UTS; Alliant FX8 running UNIX; Sun 3/160 running UNIX (no native device driver); Stellar GS1000 running Stellix (no native device driver); and an SGI IRIS 4D running IRIX (no native device driver). Currently, with version 7.0 of ARCGRAPH, the VDI library supports the following output devices: a VT100 terminal with a RETRO-GRAPHICS board installed, a VT240 using its Tektronix 4010 emulation capability, an SGI IRIS Turbo using the native GL2 library, a Tektronix 4010, a Tektronix 4105, and a Tektronix 4014. ARCGRAPH version 7.0 was developed in 1988.

  18. Matching soil grid unit resolutions with polygon unit scales for DNDC modelling of regional SOC pool

    NASA Astrophysics Data System (ADS)

    Zhang, H. D.; Yu, D. S.; Ni, Y. L.; Zhang, L. M.; Shi, X. Z.

    2015-03-01

    Matching soil grid unit resolution with polygon unit map scale is important because both strongly influence the uncertainty of regional soil organic carbon (SOC) pool simulations. A series of soil grid units at varying cell sizes was derived from soil polygon units at six map scales: 1:50 000 (C5), 1:200 000 (D2), 1:500 000 (P5), 1:1 000 000 (N1), 1:4 000 000 (N4) and 1:14 000 000 (N14), in the Tai Lake region of China. Soil units in both formats were used for regional SOC pool simulation with the process-based DeNitrification-DeComposition (DNDC) model, with runs spanning 1982 to 2000 at each of the six map scales. Four indices of surface paddy soils simulated with the DNDC, soil type number (STN), area (AREA), average SOC density (ASOCD) and total SOC stocks (SOCS), were derived from the soil polygon and grid units, respectively. Relative to the four index values (IV) from the parent polygon units, the variation of each index value (VIV, %) in the grid units was used to assess dataset accuracy and redundancy, which reflect uncertainty in the simulation of SOC. Optimal soil grid unit resolutions, matching the map scales of the parent soil polygon units, were thereby generated and suggested for DNDC simulation of the regional SOC pool. At the optimal raster resolution, the soil grid unit dataset holds the same accuracy as its parent polygon unit dataset without any redundancy, when VIV < 1% for all four indices is adopted as the criterion. A quadratic regression model, y = −8.0 × 10⁻⁶x² + 0.228x + 0.211 (R² = 0.9994, p < 0.05), describes the relationship between optimal soil grid unit resolution (y, km) and soil polygon unit map scale (1:x). This knowledge may serve for grid partitioning of regions in investigations and simulations of SOC pool dynamics at a given map scale.
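
    The VIV-based acceptance test described above reduces to a small calculation; the index values below are invented for illustration:

```python
# Variation of an index value (VIV, %) of a grid-unit dataset relative to
# its parent polygon-unit dataset, and the VIV < 1% acceptance criterion.
def viv_percent(grid_value, polygon_value):
    return abs(grid_value - polygon_value) / polygon_value * 100.0

def resolution_acceptable(grid_indices, polygon_indices, threshold=1.0):
    # accept a grid resolution when VIV < threshold for all four indices
    # (STN, AREA, ASOCD, SOCS)
    return all(viv_percent(g, p) < threshold
               for g, p in zip(grid_indices, polygon_indices))

polygon = [100, 1000.0, 5.00, 50.0]      # STN, AREA, ASOCD, SOCS (invented)
fine_grid = [100, 1005.0, 5.01, 50.1]    # all VIV <= 0.5%
coarse_grid = [97, 1030.0, 5.20, 48.0]   # AREA alone is off by 3%
```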

  19. Modeling the spatiotemporal variability in subsurface thermal regimes across a low-relief polygonal tundra landscape

    DOE PAGES

    Kumar, Jitendra; Collier, Nathan; Bisht, Gautam; ...

    2016-09-27

    Vast carbon stocks stored in permafrost soils of Arctic tundra are under risk of release to the atmosphere under warming climate scenarios. Ice-wedge polygons in the low-gradient polygonal tundra create a complex mosaic of microtopographic features. This microtopography plays a critical role in regulating the fine-scale variability in thermal and hydrological regimes in the polygonal tundra landscape underlain by continuous permafrost. Modeling of thermal regimes of this sensitive ecosystem is essential for understanding the landscape behavior under the current as well as changing climate. Here, we present an end-to-end effort for high-resolution numerical modeling of thermal hydrology at real-world field sites, utilizing the best available data to characterize and parameterize the models. We also develop approaches to model the thermal hydrology of polygonal tundra and apply them at four study sites near Barrow, Alaska, spanning across low to transitional to high-centered polygons, representing a broad polygonal tundra landscape. A multiphase subsurface thermal hydrology model (PFLOTRAN) was developed and applied to study the thermal regimes at four sites. Using a high-resolution lidar digital elevation model (DEM), microtopographic features of the landscape were characterized and represented in the high-resolution model mesh. The best available soil data from field observations and literature were utilized to represent the complex heterogeneous subsurface in the numerical model. Simulation results demonstrate the ability of the developed modeling approach to capture – without recourse to model calibration – several aspects of the complex thermal regimes across the sites, and provide insights into the critical role of polygonal tundra microtopography in regulating the thermal dynamics of the carbon-rich permafrost soils.
Moreover, areas of significant disagreement between model results and observations highlight the importance of field-based observations of soil thermal and hydraulic properties for modeling-based studies of permafrost thermal dynamics, and provide motivation and guidance for future observations that will help address model and data gaps affecting our current understanding of the system.

  20. Anatomical study of the pelvis in patients with adolescent idiopathic scoliosis

    PubMed Central

    Qiu, Xu-Sheng; Zhang, Jun-Jie; Yang, Shang-Wen; Lv, Feng; Wang, Zhi-Wei; Chiew, Jonathan; Ma, Wei-Wei; Qiu, Yong

    2012-01-01

    Standing posterior–anterior (PA) radiographs from our clinical practice show that the concave and convex ilia are not always symmetrical in patients with adolescent idiopathic scoliosis (AIS). Transverse pelvic rotation may explain this observation, or pelvic asymmetry may be responsible. The present study investigated pelvic symmetry by examining the volume and linear measurements of the two hipbones in patients with AIS. Forty-two female patients with AIS were recruited for the study. Standing PA radiographs (covering the thoracic and lumbar spinal regions and the entire pelvis), CT scans and 3D reconstructions of the pelvis were obtained for all subjects. The concave/convex ratio of the inferior ilium at the sacroiliac joint medially (SI) and the anterior superior iliac spine laterally (ASIS) were measured on PA radiographs. Hipbone volumes and several distortion and abduction parameters were measured by post-processing software. The concave/convex ratio of SI–ASIS on PA radiographs was 0.97, which was significantly < 1 (P < 0.001). The concave and convex hipbone volumes were comparable in patients with AIS. The hipbone volumes were 257.3 ± 43.5 cm3 and 256.9 ± 42.6 cm3 at the concave and convex sides, respectively (P > 0.05). Furthermore, all distortion and abduction parameters were comparable between the convex and concave sides. Therefore, the present study showed that there was no pelvic asymmetry in patients with AIS, although the concave/convex ratio of SI–ASIS on PA radiographs was significantly < 1. The clinical phenomenon of asymmetrical concave and convex ilia in patients with AIS in preoperative standing PA radiographs may be caused by transverse pelvic rotation, but it is not due to developmental asymmetry or distortion of the pelvis. PMID:22133294

  2. On the convexity of ROC curves estimated from radiological test results

    PubMed Central

    Pesce, Lorenzo L.; Metz, Charles E.; Berbaum, Kevin S.

    2010-01-01

    Rationale and Objectives Although an ideal observer’s receiver operating characteristic (ROC) curve must be convex — i.e., its slope must decrease monotonically — published fits to empirical data often display “hooks.” Such fits sometimes are accepted on the basis of an argument that experiments are done with real, rather than ideal, observers. However, the fact that ideal observers must produce convex curves does not imply that convex curves describe only ideal observers. This paper aims to identify the practical implications of non-convex ROC curves and the conditions that can lead to empirical and/or fitted ROC curves that are not convex. Materials and Methods This paper views non-convex ROC curves from historical, theoretical and statistical perspectives, which we describe briefly. We then consider population ROC curves with various shapes and analyze the types of medical decisions that they imply. Finally, we describe how sampling variability and curve-fitting algorithms can produce ROC curve estimates that include hooks. Results We show that hooks in population ROC curves imply the use of an irrational decision strategy, even when the curve doesn’t cross the chance line, and therefore usually are untenable in medical settings. Moreover, we sketch a simple approach to improve any non-convex ROC curve by adding statistical variation to the decision process. Finally, we sketch how to test whether hooks present in ROC data are likely to have been caused by chance alone and how some hooked ROCs found in the literature can be easily explained as fitting artifacts or modeling issues. Conclusion In general, ROC curve fits that show hooks should be looked upon with suspicion unless other arguments justify their presence. PMID:20599155
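
    Convexity in this ROC sense (monotonically decreasing slope) is easy to check on an empirical curve; a small illustrative sketch, not code from the paper:

```python
# An empirical ROC curve, given as (FPF, TPF) operating points including
# (0,0) and (1,1), is "convex" in the ROC-literature sense when the slopes
# of successive segments never increase.
def is_convex_roc(points):
    pts = sorted(points)
    slopes = [(y2 - y1) / (x2 - x1)
              for (x1, y1), (x2, y2) in zip(pts, pts[1:])]
    return all(s1 >= s2 - 1e-12 for s1, s2 in zip(slopes, slopes[1:]))

proper = [(0, 0), (0.1, 0.5), (0.4, 0.8), (1, 1)]     # slopes 5, 1, 1/3
hooked = [(0, 0), (0.3, 0.2), (0.6, 0.9), (1, 1)]     # slope rises mid-curve
```

The second curve shows a "hook": its middle segment is steeper than its first, the signature the paper argues is usually untenable in medical settings.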

  3. 12 CFR 905.26 - Official logo and seal.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    .... (a) Description. The logo is a disc with its center consisting of three polygons arranged in an irregular line partially overlapping—each polygon drawn in a manner resembling a silhouette of a pitched...

  4. Long-term repetition priming with symmetrical polygons and words.

    PubMed

    Kersteen-Tucker, Z

    1991-01-01

    In two different tasks, subjects were asked to make lexical decisions (word or nonword) and symmetry judgments (symmetrical or nonsymmetrical) about two-dimensional polygons. In both tasks, every stimulus was repeated at one of four lags (0, 1, 4, or 8 items interposed between the first and second stimulus presentations). This paradigm, known as repetition priming, revealed comparable short-term priming (Lag 0) and long-term priming (Lags 1, 4, and 8) both for symmetrical polygons and for words. A shorter term component (Lags 0 and 1) of priming was observed for nonwords, and only very short-term priming (Lag 0) was observed for nonsymmetrical polygons. These results indicate that response facilitation accruing from repeated exposure can be observed for stimuli that have no preexisting memory representations and suggest that perceptual factors contribute to repetition-priming effects.

  5. The finite cell method for polygonal meshes: poly-FCM

    NASA Astrophysics Data System (ADS)

    Duczek, Sascha; Gabbert, Ulrich

    2016-10-01

    In the current article, we extend the two-dimensional version of the finite cell method (FCM), which has so far only been used for structured quadrilateral meshes, to unstructured polygonal discretizations. To this end, the adaptive quadtree-based numerical integration technique is reformulated and the notion of generalized barycentric coordinates is introduced. We show that the resulting polygonal (poly-)FCM approach retains the optimal rates of convergence if and only if the geometry of the structure is adequately resolved. The main advantage of the proposed method is that it inherits the ability of polygonal finite elements for local mesh refinement and for the construction of transition elements (e.g. conforming quadtree meshes without hanging nodes). These properties along with the performance of the poly-FCM are illustrated by means of several benchmark problems for both static and dynamic cases.
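
    Generalized barycentric coordinates for convex polygons can be computed, for example, with the classical Wachspress construction (shown here as a generic sketch; the paper does not commit to this specific variant):

```python
# Wachspress coordinates for a point x strictly inside a convex polygon:
# w_i = A(v_{i-1}, v_i, v_{i+1}) / (A(x, v_{i-1}, v_i) * A(x, v_i, v_{i+1})),
# normalised to sum to one. They are non-negative and reproduce x linearly.
def tri_area(a, b, c):
    return 0.5 * ((b[0]-a[0])*(c[1]-a[1]) - (b[1]-a[1])*(c[0]-a[0]))

def wachspress(poly, x):
    n = len(poly)
    w = []
    for i in range(n):
        vm, v, vp = poly[i-1], poly[i], poly[(i+1) % n]
        w.append(tri_area(vm, v, vp)
                 / (tri_area(x, vm, v) * tri_area(x, v, vp)))
    s = sum(w)
    return [wi / s for wi in w]

square = [(0, 0), (1, 0), (1, 1), (0, 1)]
lam = wachspress(square, (0.5, 0.5))     # all four weights equal 1/4
```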

  6. Analytical approach of laser beam propagation in the hollow polygonal light pipe.

    PubMed

    Zhu, Guangzhi; Zhu, Xiao; Zhu, Changhong

    2013-08-10

    An analytical method of researching the light distribution properties on the output end of a hollow n-sided polygonal light pipe and a light source with a Gaussian distribution is developed. The mirror transformation matrices and a special algorithm of removing void virtual images are created to acquire the location and direction vector of each effective virtual image on the entrance plane. The analytical method is demonstrated by Monte Carlo ray tracing. At the same time, four typical cases are discussed. The analytical results indicate that the uniformity of light distribution varies with the structural and optical parameters of the hollow n-sided polygonal light pipe and light source with a Gaussian distribution. The analytical approach will be useful to design and choose the hollow n-sided polygonal light pipe, especially for high-power laser beam homogenization techniques.
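
    The elementary building block of such mirror transformations is reflecting the source point across a mirror line to obtain a virtual image; a generic 2D sketch (not the paper's matrix formulation):

```python
# Reflect point p across the line through a and b: project p onto the
# line, then step the same distance to the other side.
def reflect(p, a, b):
    dx, dy = b[0] - a[0], b[1] - a[1]
    d2 = dx * dx + dy * dy
    t = ((p[0] - a[0]) * dx + (p[1] - a[1]) * dy) / d2
    foot = (a[0] + t * dx, a[1] + t * dy)     # orthogonal projection
    return (2 * foot[0] - p[0], 2 * foot[1] - p[1])
```

Applying this repeatedly across the n walls of the polygonal pipe generates the lattice of virtual sources, from which the void (geometrically unreachable) images must then be pruned as the abstract describes.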

  7. Application of modified VICAR/IBIS GIS to analysis of July 1991 Flevoland AIRSAR data

    NASA Technical Reports Server (NTRS)

    Norikane, L.; Broek, B.; Freeman, A.

    1992-01-01

    Three overflights of the Flevoland calibration/agricultural site were made by the JPL Airborne Synthetic Aperture Radar (AIRSAR) on 3, 12, and 28 July 1991 as part of MAC-Europe '92. A polygon map was generated at TNO-FEL which overlayed the slant range projected July 3 data set. Each polygon was identified by a sequence of points and a crop label. The polygon map was composed of 452 uniquely identified polygons and 15 different crop types. Analysis of the data was done using our modified Video Image Communication and Retrieval/Image Based Information System Geographic Information System (VICAR/IBIS GIS). This GIS is an extension of the VICAR/IBIS GIS first developed by Bryant in the 1970's which is itself an extension of the VICAR image processing system also developed at JPL.

  8. Spectral analysis of point-vortex dynamics: first application to vortex polygons in a circular domain

    NASA Astrophysics Data System (ADS)

    Speetjens, M. F. M.; Meleshko, V. V.; van Heijst, G. J. F.

    2014-06-01

    The present study addresses the classical problem of the dynamics and stability of a cluster of N point vortices of equal strength arranged in a polygonal configuration (‘N-vortex polygons’). In unbounded domains, such N-vortex polygons are unconditionally stable for N ≤ 7. Confinement in a circular domain tightens the stability conditions to N ≤ 6 and a maximum polygon size relative to the domain radius. This work expands on existing studies of stability and integrability by giving a first exploratory spectral analysis of the dynamics of N-vortex polygons in circular domains. Key to this is that the spectral signature of the time evolution of vortex positions reflects their qualitative behaviour. Expressing vortex motion by a generic evolution operator (the so-called Koopman operator) provides a rigorous framework for such spectral analyses. This paves the way to further differentiation and classification of point-vortex behaviour beyond stability and integrability. The concept of Koopman-based spectral analysis is demonstrated for N-vortex polygons. This reveals that conditional stability can be seen as a local form of integrability and confirms an important generic link between spectrum and dynamics: discrete spectra imply regular (quasi-periodic) motion; continuous (sub-)spectra imply chaotic motion. Moreover, this exposes rich nonlinear dynamics such as intermittency between regular and chaotic motion and quasi-coherent structures formed by chaotic vortices. Dedicated to the memory of Slava Meleshko, a dear friend and inspiring colleague.
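
    The underlying point-vortex dynamics in a circular domain can be sketched with the standard image-vortex model; the explicit-Euler integration and parameter values below are illustrative, not the paper's code:

```python
import cmath

# Point vortices of strength gamma in the unit disc: each vortex is advected
# by the other vortices plus opposite-sign images at 1/conj(z_j), which
# enforce the no-through-flow condition on the circular boundary.
def velocities(z, gamma=1.0):
    v = []
    for k, zk in enumerate(z):
        s = 0j
        for j, zj in enumerate(z):
            if j != k:
                s += gamma / (zk - zj)                    # direct interaction
            s -= gamma / (zk - 1.0 / zj.conjugate())      # image vortex
        # s/(2*pi*i) gives u - i*v; conjugate to get u + i*v
        v.append((s / (2j * cmath.pi)).conjugate())
    return v

# An equilateral 3-vortex polygon at radius 0.3 should rotate rigidly,
# so each vortex stays (approximately) on its initial circle.
z = [0.3 * cmath.exp(2j * cmath.pi * k / 3) for k in range(3)]
dt = 1e-3
for _ in range(1000):
    z = [zk + dt * vk for zk, vk in zip(z, velocities(z))]
```

The small radius drift that remains is the explicit-Euler discretization error, not a physical instability.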

  9. Influence of polygonal wear of railway wheels on the wheel set axle stress

    NASA Astrophysics Data System (ADS)

    Wu, Xingwen; Chi, Maoru; Wu, Pingbo

    2015-11-01

    The coupled vehicle/track dynamic model with a flexible wheel set was developed to investigate the effects of polygonal wear on the dynamic stresses of the wheel set axle. In the model, the railway vehicle was modelled by rigid multibody dynamics. The wheel set was established by the finite element method to analyse the high-frequency oscillation and dynamic stress of the wheel set axle induced by polygonal wear, based on the modal stress recovery method. The slab track model was taken into account, in which the rail was described by a Timoshenko beam and three-dimensional solid finite elements were employed to establish the concrete slab. Furthermore, the modal superposition method was adopted to calculate the dynamic response of the track. The wheel/rail normal forces and tangent forces were determined by Hertz nonlinear contact theory and the Shen-Hedrick-Elkins model, respectively. Using the coupled vehicle/track dynamic model, the dynamic stresses of the wheel set axle under both ideal and measured polygonal wear were investigated. The results show that the amplitude of the wheel/rail normal forces and the dynamic stress of the wheel set axle increase as the vehicle speed rises. Moreover, the impact loads induced by polygonal wear can excite resonance of the wheel set axle. In the resonance region, the amplitude of the dynamic stress of the wheel set axle increases considerably compared with normal conditions.
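
    The speed dependence of the polygonal-wear excitation follows the standard kinematic relation f = n·v / (2πR) for nth-order wear on a wheel of rolling radius R; the numbers below are illustrative, not taken from the paper:

```python
import math

# Excitation frequency of nth-order polygonal wear: n wear lobes pass the
# contact point once per wheel revolution, so f = n * v / (2 * pi * R).
def excitation_hz(order, speed_mps, wheel_radius_m):
    return order * speed_mps / (2 * math.pi * wheel_radius_m)

# e.g. 20th-order wear at 300 km/h on a 0.43 m wheel lands in the
# several-hundred-hertz range where axle bending modes can sit
f = excitation_hz(20, 300 / 3.6, 0.43)
```

Resonance occurs when this excitation frequency sweeps through a natural frequency of the flexible wheel set, which is why the stress amplification is speed dependent.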

  10. Investigations into the shape-preserving interpolants using symbolic computation

    NASA Technical Reports Server (NTRS)

    Lam, Maria

    1988-01-01

    Shape representation is a central issue in computer graphics and computer-aided geometric design. Many physical phenomena involve curves and surfaces that are monotone (in some directions) or convex. The corresponding representation problem is: given monotone or convex data, find a monotone or convex interpolant. Standard interpolants need not be monotone or convex even though they match monotone or convex data. Most methods of investigating this problem utilize quadratic splines or Hermite polynomials. In this investigation, a similar approach is adopted. These methods require derivative information at the given data points, and the key to the problem is the selection of the derivative values to be assigned to those points. Schemes for choosing derivatives were examined. Along the way, fitting given data points with a conic section was also investigated as part of the effort to study shape-preserving quadratic splines.

  11. Congruency effects in dot comparison tasks: convex hull is more important than dot area.

    PubMed

    Gilmore, Camilla; Cragg, Lucy; Hogan, Grace; Inglis, Matthew

    2016-11-16

    The dot comparison task, in which participants select the more numerous of two dot arrays, has become the predominant method of assessing Approximate Number System (ANS) acuity. Creation of the dot arrays requires the manipulation of visual characteristics, such as dot size and convex hull. For the task to provide a valid measure of ANS acuity, participants must ignore these characteristics and respond on the basis of number. Here, we report two experiments that explore the influence of dot area and convex hull on participants' accuracy on dot comparison tasks. We found that individuals' ability to ignore dot area information increases with age and display time. However, the influence of convex hull information remains stable across development and with additional time. This suggests that convex hull information is more difficult to inhibit when making judgements about numerosity and therefore it is crucial to control this when creating dot comparison tasks.
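
    A stimulus-construction step implied by such designs is equalising total dot area across arrays of different numerosity while letting convex hull vary; a generic sketch (not the authors' stimulus code):

```python
import math

# Scale dot radii so an array matches a target total dot area. Two arrays
# built this way differ in number but not in cumulative area, so any
# area-based congruency effect is removed while convex hull remains free.
def equalise_total_area(radii, target_area):
    current = sum(math.pi * r * r for r in radii)
    scale = math.sqrt(target_area / current)
    return [r * scale for r in radii]

a = [3.0, 4.0, 5.0]                        # 3 dots
b = equalise_total_area([2.0] * 8,         # 8 dots, rescaled to match
                        sum(math.pi * r * r for r in a))
```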

  12. Space ultra-vacuum facility and method of operation

    NASA Technical Reports Server (NTRS)

    Naumann, Robert J. (Inventor)

    1988-01-01

    A wake shield space processing facility (10) for maintaining ultra-high levels of vacuum is described. The wake shield (12) is a truncated hemispherical section having a convex side (14) and a concave side (24). Material samples (68) to be processed are located on the convex side of the shield, which faces in the wake direction during operation in orbit. Necessary processing fixtures (20) and (22) are also located on the convex side. Support equipment, including power supplies (40, 42), CMG package (46) and electronic control package (44), is located on the concave side (24) of the shield, facing the ram direction. Prior to operation in orbit, the wake shield is oriented in reverse, with the convex side facing the ram direction, to provide cleaning by exposure to ambient atomic oxygen. The shield is then baked out by being pointed directly at the sun to obtain heating for a suitable period.

  13. Polygonal tundra geomorphological change in response to warming alters future CO2 and CH4 flux on the Barrow Peninsula

    USGS Publications Warehouse

    Lara, Mark J.; McGuire, A. David; Euskirchen, Eugénie S.; Tweedie, Craig E.; Hinkel, Kenneth M.; Skurikhin, Alexei N.; Romanovsky, Vladimir E.; Grosse, Guido; Bolton, W. Robert; Genet, Helene

    2015-01-01

    The landscape of the Barrow Peninsula in northern Alaska is thought to have formed over centuries to millennia, and is now dominated by ice-wedge polygonal tundra that spans drained thaw-lake basins and interstitial tundra. In nearby tundra regions, studies have identified a rapid increase in thermokarst formation (i.e., pits) over recent decades in response to climate warming, facilitating changes in polygonal tundra geomorphology. We assessed the future impact of 100 years of tundra geomorphic change on peak growing season carbon exchange in response to: (i) landscape succession associated with the thaw-lake cycle; and (ii) low, moderate, and extreme scenarios of thermokarst pit formation (10%, 30%, and 50%) reported for Alaskan arctic tundra sites. We developed a 30 × 30 m resolution tundra geomorphology map (overall accuracy: 75%; Kappa: 0.69) for our ~1800 km² study area, composed of ten classes: drained slope, high-center polygon, flat-center polygon, low-center polygon, coalescent low-center polygon, polygon trough, meadow, ponds, rivers, and lakes, to determine their spatial distribution across the Barrow Peninsula. Land-atmosphere CO2 and CH4 flux data were collected for the summers of 2006–2010 at eighty-two sites near Barrow, across the mapped classes. The developed geomorphic map was used for the regional assessment of carbon flux.
Results indicate that (i) at present, during peak growing season on the Barrow Peninsula, CO2 uptake occurs at −902.3 × 10⁶ g C-CO2 day⁻¹ (95% CI: −438.3 to −1366 × 10⁶ g C-CO2 day⁻¹) and CH4 flux at 28.9 × 10⁶ g C-CH4 day⁻¹ (95% CI: 12.9 to 44.9 × 10⁶ g C-CH4 day⁻¹); (ii) one century of future landscape change associated with the thaw-lake cycle only slightly alters CO2 and CH4 exchange; while (iii) moderate increases in thermokarst pits would strengthen both CO2 uptake (−166.9 × 10⁶ g C-CO2 day⁻¹) and CH4 flux (2.8 × 10⁶ g C-CH4 day⁻¹) with geomorphic change from low- to high-center polygons, cumulatively resulting in an estimated negative feedback to warming during peak growing season.
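
    The regional assessment amounts to area-weighting per-class fluxes over the mapped geomorphic classes; the class areas and fluxes below are invented for illustration, not the paper's values:

```python
# Area-weighted upscaling of per-class chamber fluxes to a regional total:
# flux_per_m2 in gC m^-2 day^-1, areas in km^2, result in gC day^-1
# (negative totals indicate net uptake).
def regional_flux(area_km2, flux_per_m2):
    return sum(area_km2[c] * 1e6 * flux_per_m2[c] for c in area_km2)

area = {"low-center polygon": 400.0,      # km^2, illustrative
        "high-center polygon": 250.0,
        "polygon trough": 150.0}
co2 = {"low-center polygon": -0.8,        # gC-CO2 m^-2 day^-1, illustrative
       "high-center polygon": -0.3,
       "polygon trough": -0.5}
total = regional_flux(area, co2)
```

A geomorphic-change scenario is then just a different `area` dictionary evaluated against the same per-class fluxes.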

  14. Skin injury model classification based on shape vector analysis

    PubMed Central

    2012-01-01

    Background: Skin injuries can be crucial in judicial decision making. Forensic experts base their classification on subjective opinions. This study investigates whether known classes of simulated skin injuries are correctly classified statistically based on 3D surface models and derived numerical shape descriptors. Methods: Skin injury surface characteristics are simulated with plasticine. Six injury classes – abrasions, incised wounds, gunshot entry wounds, smooth and textured strangulation marks, and patterned injuries – with 18 instances each are used for a k-fold cross validation with six partitions. Deformed plasticine models are captured with a 3D surface scanner. Mean curvature is estimated for each polygon surface vertex. Subsequently, distance distributions and derived aspect ratios, convex hulls, concentric spheres, hyperbolic points and Fourier transforms are used to generate 1284-dimensional shape vectors. Subsequent descriptor reduction maximizing the SNR (signal-to-noise ratio) results in an average of 41 descriptors (varying across k-folds). With the non-normal multivariate distribution of the heteroskedastic data, the requirements for LDA (linear discriminant analysis) are not met. Thus, the shrinkage parameters of RDA (regularized discriminant analysis) are optimized, yielding the best performance with λ = 0.99 and γ = 0.001. Results: Receiver Operating Characteristic analysis of a descriptive RDA yields an ideal Area Under the Curve of 1.0 for all six categories. Predictive RDA results in an average CRR (correct recognition rate) of 97.22% under a six-partition k-fold. Adding uniform noise within the range of one standard deviation degrades the average CRR to 71.3%. Conclusions: Digitized 3D surface shape data can be used to automatically classify idealized shape models of simulated skin injuries.
Deriving well-established descriptors such as histograms, the saddle shape of hyperbolic points, or convex hulls, with subsequent reduction of dimensionality while maximizing SNR, seems to work well for the data at hand, as predictive RDA results in a CRR of 97.22%. An objective basis for discriminating non-overlapping hypotheses or categories is a major issue in medicolegal skin injury analysis, and that is where this method appears to be strong. Technical surface quality is important in that adding noise clearly degrades the CRR. Trial registration: This study does not cover the results of a controlled health care intervention as only plasticine was used. Thus, there was no trial registration. PMID:23497357
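As a hedged illustration of descriptors of this kind, the sketch below builds a tiny shape vector for a 2D point sample from two of the named ingredients: a convex hull (Andrew's monotone chain) and a centroid-distance histogram. The helper names are invented for illustration; this is not the authors' pipeline, which works on 3D scanned meshes and curvature estimates.

```python
import math

def convex_hull(points):
    """Andrew's monotone chain; returns hull vertices in CCW order."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

def shape_descriptor(points, bins=8):
    """Concatenate the hull area and a normalized centroid-distance histogram."""
    hull = convex_hull(points)
    # shoelace formula over the hull vertices
    area = 0.5 * abs(sum(x1 * y2 - x2 * y1 for (x1, y1), (x2, y2)
                         in zip(hull, hull[1:] + hull[:1])))
    cx = sum(x for x, _ in points) / len(points)
    cy = sum(y for _, y in points) / len(points)
    d = [math.hypot(x - cx, y - cy) for x, y in points]
    dmax = max(d) or 1.0
    hist = [0] * bins
    for v in d:
        hist[min(int(bins * v / dmax), bins - 1)] += 1
    return [area] + [h / len(points) for h in hist]
```

A real pipeline would add many more channels (Fourier coefficients, curvature statistics) before descriptor reduction; the hull and histogram here only convey the flavor.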

  15. Use of laterally placed vacuum drains for management of aural hematomas in five dogs.

    PubMed

    Pavletic, Michael M

    2015-01-01

Five dogs (a Newfoundland, Golden Retriever, Shiba Inu, Staffordshire Terrier, and Vizsla) were referred for evaluation and treatment of unilateral aural hematomas within a week after their formation. Aural hematomas involved the left (3) or right (2) ears. With patients under anesthesia, the aural hematomas were approached surgically from the convex, or lateral, pinnal surface. Two small incisions were used to position a vacuum drain into the incised hematoma cavity. The drain exited at the base of the pinna and adjacent cervical skin. The free end of the drain was attached to a vacuum reservoir for 18 to 21 days. Drains and skin sutures were removed at this time along with the protective Elizabethan collar. All hematomas resolved and surgical sites healed during the minimum 6-month follow-up period. Cosmetic results were considered excellent in 4 of 5 patients. Slight wrinkling of the pinna in 1 patient resulted from asymmetric enlargement of the cartilaginous walls of the hematoma, where vacuum application resulted in a slight folding of the redundant lateral cartilage wall. The described treatment was efficient, economical, and minimally invasive and required no bandaging or wound care. Placement of the drain tubing on the convex (lateral) aspect sheltered the system from displacement by patients with an Elizabethan collar in place. Overall cosmetic results were excellent; asymmetric enlargement of the cartilaginous walls of the hematoma with slight folding of the pinna was seen in 1 patient.

  16. Optimal Path Determination for Flying Vehicle to Search an Object

    NASA Astrophysics Data System (ADS)

    Heru Tjahjana, R.; Heri Soelistyo U, R.; Ratnasari, L.; Irawanto, B.

    2018-01-01

In this paper, a method to determine the optimal path for a flying vehicle to search for an object is proposed. The background of the paper is controlling an air vehicle searching for an object, and optimal path determination is one of the most popular problems in optimization. The paper describes a control design model for a flying vehicle searching for an object and focuses on the optimal path used in the search. An optimal control model is used to make the vehicle move along an optimal path; if the vehicle moves along the optimal path, then the path to reach the searched object is also optimal. The cost functional is one of the most important elements of optimal control design; here, the cost functional makes the air vehicle reach the object as quickly as possible. The axis reference of the flying vehicle uses the N-E-D (North-East-Down) coordinate system. The results of this paper are analytically proved theorems stating that the cost functional makes the control optimal and makes the vehicle move along an optimal path. The paper also shows that the cost functional used is convex; this convexity guarantees the existence of an optimal control. Some simulations showing an optimal path for the flying vehicle are also presented. The optimization method used to find the optimal control and optimal path is the Pontryagin Minimum Principle.

  17. An algorithm for calculating minimum Euclidean distance between two geographic features

    NASA Astrophysics Data System (ADS)

    Peuquet, Donna J.

    1992-09-01

An efficient algorithm is presented for determining the shortest Euclidean distance between two features of arbitrary shape that are represented in quadtree form. These features may be disjoint point sets, lines, or polygons. It is assumed that the features do not overlap. Features may also be intertwined, and polygons may be complex (i.e., have holes). Utilizing the spatial divide-and-conquer approach inherent in the quadtree data model, the basic rationale is to quickly narrow in on the portions of each feature that lie on a facing edge relative to the other feature, and to minimize the number of point-to-point Euclidean distance calculations that must be performed. Besides offering an efficient, grid-based alternative solution, another unique and useful aspect of the current algorithm is that it can be used for rapidly calculating distance approximations at coarser levels of resolution. The overall process can be viewed as a top-down parallel search. Using one list of leafcode addresses for each of the two features as input, the algorithm is implemented by successively dividing these lists into four sublists for each descendant quadrant. The algorithm consists of two primary phases. The first determines facing adjacent quadrant pairs where part or all of the two features are separated between the two quadrants, respectively. The second phase then determines the closest pixel-level subquadrant pairs within each facing quadrant pair at the lowest level. The key element of the second phase is a quick distance-estimate heuristic for further elimination of locations that are not as near as neighboring locations.
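The divide-and-conquer idea can be caricatured in a few lines (an illustrative simplification over raw point sets, not Peuquet's leafcode-based implementation): split each feature into quadrants, visit facing pairs nearest-first by a bounding-box lower bound, and prune pairs that cannot beat the best distance found so far.

```python
import math

def bbox(points):
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    return min(xs), min(ys), max(xs), max(ys)

def bbox_dist(ba, bb):
    """Lower bound on the distance between any point of one box and any of the other."""
    dx = max(bb[0] - ba[2], ba[0] - bb[2], 0.0)
    dy = max(bb[1] - ba[3], ba[1] - bb[3], 0.0)
    return math.hypot(dx, dy)

def split(points):
    """Partition points into the (up to four) quadrants of their bounding box."""
    x0, y0, x1, y1 = bbox(points)
    mx, my = (x0 + x1) / 2.0, (y0 + y1) / 2.0
    quads = [[], [], [], []]
    for x, y in points:
        quads[(x > mx) + 2 * (y > my)].append((x, y))
    return [q for q in quads if q]

def min_distance(a, b, best=math.inf):
    """Shortest Euclidean distance between two disjoint point features."""
    if bbox_dist(bbox(a), bbox(b)) >= best:
        return best  # this facing pair cannot improve the current minimum
    sa_list, sb_list = split(a), split(b)
    if len(a) * len(b) <= 16 or (len(sa_list) == 1 and len(sb_list) == 1):
        return min(best, min(math.dist(p, q) for p in a for q in b))
    # visit facing sub-quadrant pairs nearest-first so pruning bites early
    pairs = sorted(((sa, sb) for sa in sa_list for sb in sb_list),
                   key=lambda t: bbox_dist(bbox(t[0]), bbox(t[1])))
    for sa, sb in pairs:
        best = min_distance(sa, sb, best)
    return best
```

Stopping the recursion at a coarser cell size would yield the distance approximations mentioned in the abstract.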

  18. Calculating and controlling the error of discrete representations of Pareto surfaces in convex multi-criteria optimization.

    PubMed

    Craft, David

    2010-10-01

    A discrete set of points and their convex combinations can serve as a sparse representation of the Pareto surface in multiple objective convex optimization. We develop a method to evaluate the quality of such a representation, and show by example that in multiple objective radiotherapy planning, the number of Pareto optimal solutions needed to represent Pareto surfaces of up to five dimensions grows at most linearly with the number of objectives. The method described is also applicable to the representation of convex sets. Copyright © 2009 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
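A minimal sketch of such a quality measure, for an assumed toy bi-objective problem (minimize x² and (x−1)²; all names are illustrative, not from the paper): take k Pareto-optimal points and measure the worst gap between the true Pareto curve and the piecewise-linear interpolation of those points.

```python
def pareto_point(x):
    """Toy bi-objective problem: minimize (x^2, (x-1)^2); Pareto set is x in [0, 1]."""
    return (x * x, (x - 1) ** 2)

def representation_error(k, samples=1000):
    """Max vertical gap between the true Pareto curve and the piecewise-linear
    interpolation of k Pareto-optimal points (a simple error measure for a
    discrete representation of a convex Pareto surface)."""
    knots = [pareto_point(i / (k - 1)) for i in range(k)]
    err = 0.0
    for j in range(samples + 1):
        f1, f2 = pareto_point(j / samples)
        # find the bracketing pair of knots in f1 and interpolate linearly
        for (a1, a2), (b1, b2) in zip(knots, knots[1:]):
            if a1 <= f1 <= b1:
                t = 0.0 if b1 == a1 else (f1 - a1) / (b1 - a1)
                err = max(err, (a2 + t * (b2 - a2)) - f2)
                break
    return err
```

Because the toy Pareto curve is convex, the chords lie above it, so the gap is nonnegative and shrinks as more points are added.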

  19. Splineless coupling means

    DOEpatents

    Heitmann, Arnold M.; Lord, Jr., Richard E.

    1982-01-01

    In the first embodiment, the invention comprises an imperforate turbine wheel having a hub of polygonal cross-section engageable with a hollow shaft of polygonal conformation, and a thrust collar and bolt for fastening the shaft and wheel together.

  20. Inhibitory competition in figure-ground perception: context and convexity.

    PubMed

    Peterson, Mary A; Salvagio, Elizabeth

    2008-12-15

Convexity has long been considered a potent cue as to which of two regions on opposite sides of an edge is the shaped figure. Experiment 1 shows that for a single edge, there is only a weak bias toward seeing the figure on the convex side. Experiments 1-3 show that the bias toward seeing the convex side as figure increases as the number of edges delimiting alternating convex and concave regions increases, provided that the concave regions are homogeneous in color. The results of Experiments 2 and 3 rule out a probability summation explanation for these context effects. Taken together, the results of Experiments 1-3 show that the homogeneity versus heterogeneity of the convex regions is irrelevant. Experiment 4 shows that homogeneity of alternating regions is not sufficient for context effects; a cue that favors the perception of the intervening regions as figures is necessary. Thus homogeneity alone does not operate as a background cue. We interpret our results within a model of figure-ground perception in which shape properties on opposite sides of an edge compete for representation and the competitive strength of weak competitors is further reduced when they are homogeneous.

  1. Natural-Scene Statistics Predict How the Figure–Ground Cue of Convexity Affects Human Depth Perception

    PubMed Central

    Fowlkes, Charless C.; Banks, Martin S.

    2010-01-01

The shape of the contour separating two regions strongly influences judgments of which region is “figure” and which is “ground.” Convexity and other figure–ground cues are generally assumed to indicate only which region is nearer, but nothing about how much the regions are separated in depth. To determine the depth information conveyed by convexity, we examined natural scenes and found that depth steps across surfaces with convex silhouettes are likely to be larger than steps across surfaces with concave silhouettes. In a psychophysical experiment, we found that humans exploit this correlation. For a given binocular disparity, observers perceived more depth when the near surface's silhouette was convex rather than concave. We estimated the depth distributions observers used in making those judgments: they were similar to the natural-scene distributions. Our findings show that convexity should be reclassified as a metric depth cue. They also suggest that the dichotomy between metric and nonmetric depth cues is false and that the depth information provided by many cues should be evaluated with respect to natural-scene statistics. Finally, the findings provide an explanation for why figure–ground cues modulate the responses of disparity-sensitive cells in visual cortex. PMID:20505093

  2. Convex Banding of the Covariance Matrix

    PubMed Central

    Bien, Jacob; Bunea, Florentina; Xiao, Luo

    2016-01-01

    We introduce a new sparse estimator of the covariance matrix for high-dimensional models in which the variables have a known ordering. Our estimator, which is the solution to a convex optimization problem, is equivalently expressed as an estimator which tapers the sample covariance matrix by a Toeplitz, sparsely-banded, data-adaptive matrix. As a result of this adaptivity, the convex banding estimator enjoys theoretical optimality properties not attained by previous banding or tapered estimators. In particular, our convex banding estimator is minimax rate adaptive in Frobenius and operator norms, up to log factors, over commonly-studied classes of covariance matrices, and over more general classes. Furthermore, it correctly recovers the bandwidth when the true covariance is exactly banded. Our convex formulation admits a simple and efficient algorithm. Empirical studies demonstrate its practical effectiveness and illustrate that our exactly-banded estimator works well even when the true covariance matrix is only close to a banded matrix, confirming our theoretical results. Our method compares favorably with all existing methods, in terms of accuracy and speed. We illustrate the practical merits of the convex banding estimator by showing that it can be used to improve the performance of discriminant analysis for classifying sound recordings. PMID:28042189

  3. Convex Banding of the Covariance Matrix.

    PubMed

    Bien, Jacob; Bunea, Florentina; Xiao, Luo

    2016-01-01

    We introduce a new sparse estimator of the covariance matrix for high-dimensional models in which the variables have a known ordering. Our estimator, which is the solution to a convex optimization problem, is equivalently expressed as an estimator which tapers the sample covariance matrix by a Toeplitz, sparsely-banded, data-adaptive matrix. As a result of this adaptivity, the convex banding estimator enjoys theoretical optimality properties not attained by previous banding or tapered estimators. In particular, our convex banding estimator is minimax rate adaptive in Frobenius and operator norms, up to log factors, over commonly-studied classes of covariance matrices, and over more general classes. Furthermore, it correctly recovers the bandwidth when the true covariance is exactly banded. Our convex formulation admits a simple and efficient algorithm. Empirical studies demonstrate its practical effectiveness and illustrate that our exactly-banded estimator works well even when the true covariance matrix is only close to a banded matrix, confirming our theoretical results. Our method compares favorably with all existing methods, in terms of accuracy and speed. We illustrate the practical merits of the convex banding estimator by showing that it can be used to improve the performance of discriminant analysis for classifying sound recordings.
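A fixed-weight caricature of the tapering idea can be sketched as follows (a hand-chosen linear taper applied to a simulated AR(1)-style covariance; this is not the paper's data-adaptive convex banding estimator, whose weights are chosen by solving an optimization problem).

```python
import numpy as np

def banded_taper(sample_cov, bandwidth):
    """Taper a sample covariance with a banded Toeplitz weight matrix:
    weights fall linearly from 1 on the diagonal to 0 beyond `bandwidth`."""
    p = sample_cov.shape[0]
    lag = np.abs(np.subtract.outer(np.arange(p), np.arange(p)))
    weights = np.clip(1.0 - lag / (bandwidth + 1.0), 0.0, 1.0)
    return sample_cov * weights

# AR(1)-style truth: covariance decays geometrically with the lag |i - j|,
# so the variables have a natural ordering and tapering is appropriate.
rng = np.random.default_rng(0)
p, n = 20, 200
truth = 0.5 ** np.abs(np.subtract.outer(np.arange(p), np.arange(p)))
x = rng.multivariate_normal(np.zeros(p), truth, size=n)
s = np.cov(x, rowvar=False)
s_tapered = banded_taper(s, bandwidth=5)
```

The taper zeroes noisy long-lag entries while leaving the diagonal untouched; the convex formulation in the paper additionally adapts the bandwidth to the data.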

  4. Subsurface Temperature, Moisture, Thermal Conductivity and Heat Flux, Barrow, Area A, B, C, D

    DOE Data Explorer

    Cable, William; Romanovsky, Vladimir

    2014-03-31

Subsurface temperature data are being collected along a transect from the center of the polygon through the trough (and to the center of the adjacent polygon for Area D). Each transect has five 1.5 m vertical-array thermistor probes with 16 thermistors each. This dataset also includes soil pits that have been instrumented for temperature, water content, thermal conductivity, and heat flux at the permafrost table. Area C has a shallow borehole, 2.5 m deep, instrumented in the center of the polygon.

  5. Refining image segmentation by polygon skeletonization

    NASA Technical Reports Server (NTRS)

    Clarke, Keith C.

    1987-01-01

    A skeletonization algorithm was encoded and applied to a test data set of land-use polygons taken from a USGS digital land use dataset at 1:250,000. The distance transform produced by this method was instrumental in the description of the shape, size, and level of generalization of the outlines of the polygons. A comparison of the topology of skeletons for forested wetlands and lakes indicated that some distinction based solely upon the shape properties of the areas is possible, and may be of use in an intelligent automated land cover classification system.

  6. Polygon Patterns

    NASA Technical Reports Server (NTRS)

    2003-01-01

    MGS MOC Release No. MOC2-511, 12 October 2003

This August 2003 Mars Global Surveyor (MGS) Mars Orbiter Camera (MOC) image shows polygon patterns, enhanced by frost in the cracks that outline the polygon forms, in the south polar region of Mars. On Earth, patterns such as this usually indicate the presence of ice in the subsurface. The same might be true for Mars. This picture is located near 70.6°S, 309.5°W, and covers an area 3 km (1.9 mi) wide. The image is illuminated by sunlight from the upper left.

  7. Indications of Subsurface Ice: Polygons on the Northern Plains

    NASA Technical Reports Server (NTRS)

    1999-01-01

Someone's kitchen floor? A stone patio? This picture actually does show a floor--the floor of an old impact crater on the northern plains of Mars. Each 'tile' is somewhat larger than a football field. Polygonal patterns are familiar to Mars geologists because they are also common in arctic and antarctic environments on Earth. Typically, such polygons result from the stresses induced in frozen ground by the freeze-thaw cycles of subsurface ice. This picture was taken by MOC in May 1999 and is illuminated from the lower left.

  8. Coverage of Continuous Regions in Euclidean Space Using Homogeneous Resources with Application to the Allocation of the Phased Array Radar Systems

    DTIC Science & Technology

    2011-06-01

centered at OPi with radius β. Let δ∗ be the coverage level of the entire polygon P, AP be the area of polygon P, and APi be the area of sub-polygon Pi. (Tabulated numerical results in this extract are garbled and omitted.)

  9. Statistical estimation via convex optimization for trending and performance monitoring

    NASA Astrophysics Data System (ADS)

    Samar, Sikandar

This thesis presents an optimization-based statistical estimation approach to find unknown trends in noisy data. A Bayesian framework is used to explicitly take into account prior information about the trends via trend models and constraints. The main focus is on convex formulation of the Bayesian estimation problem, which allows efficient computation of (globally) optimal estimates. There are two main parts of this thesis. The first part formulates trend estimation in systems described by known detailed models as a convex optimization problem. Statistically optimal estimates are then obtained by maximizing a concave log-likelihood function subject to convex constraints. We consider the problem of increasing problem dimension as more measurements become available, and introduce a moving horizon framework to enable recursive estimation of the unknown trend by solving a fixed-size convex optimization problem at each horizon. We also present a distributed estimation framework, based on the dual decomposition method, for a system formed by a network of complex sensors with local (convex) estimation. Two specific applications of the convex optimization-based Bayesian estimation approach are described in the second part of the thesis. Batch estimation for parametric diagnostics in a flight control simulation of a space launch vehicle is shown to detect incipient fault trends despite the natural masking properties of feedback in the guidance and control loops. The moving horizon approach is used to estimate time-varying fault parameters in a detailed nonlinear simulation model of an unmanned aerial vehicle. Excellent performance is demonstrated in the presence of winds and turbulence.
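The moving-horizon idea can be reduced to a small quadratic sketch (illustrative only; the thesis treats general convex log-likelihoods and constraints): at each time step a fixed-size regularized least-squares problem is solved over the latest window of measurements, and only the newest trend estimate is kept.

```python
import numpy as np

def moving_horizon_trend(y, horizon=20, smooth=5.0):
    """At each time step, estimate the current trend value by solving a
    fixed-size smoothing problem over the most recent `horizon` samples:
    minimize ||x - w||^2 + smooth * ||D x||^2, with D the first-difference
    operator on the window w. A quadratic toy version of the idea."""
    y = np.asarray(y, dtype=float)
    est = np.empty(len(y))
    for t in range(len(y)):
        w = y[max(0, t - horizon + 1):t + 1]
        m = len(w)
        D = np.diff(np.eye(m), axis=0)          # (m-1) x m first differences
        x = np.linalg.solve(np.eye(m) + smooth * D.T @ D, w)
        est[t] = x[-1]                          # keep only the newest estimate
    return est
```

Because the problem size is fixed at `horizon`, the per-step cost does not grow as measurements accumulate, which is the point of the moving-horizon framework.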

  10. Piecewise convexity of artificial neural networks.

    PubMed

    Rister, Blaine; Rubin, Daniel L

    2017-10-01

    Although artificial neural networks have shown great promise in applications including computer vision and speech recognition, there remains considerable practical and theoretical difficulty in optimizing their parameters. The seemingly unreasonable success of gradient descent methods in minimizing these non-convex functions remains poorly understood. In this work we offer some theoretical guarantees for networks with piecewise affine activation functions, which have in recent years become the norm. We prove three main results. First, that the network is piecewise convex as a function of the input data. Second, that the network, considered as a function of the parameters in a single layer, all others held constant, is again piecewise convex. Third, that the network as a function of all its parameters is piecewise multi-convex, a generalization of biconvexity. From here we characterize the local minima and stationary points of the training objective, showing that they minimize the objective on certain subsets of the parameter space. We then analyze the performance of two optimization algorithms on multi-convex problems: gradient descent, and a method which repeatedly solves a number of convex sub-problems. We prove necessary convergence conditions for the first algorithm and both necessary and sufficient conditions for the second, after introducing regularization to the objective. Finally, we remark on the remaining difficulty of the global optimization problem. Under the squared error objective, we show that by varying the training data, a single rectifier neuron admits local minima arbitrarily far apart, both in objective value and parameter space. Copyright © 2017 Elsevier Ltd. All rights reserved.
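The first result, that the network is affine (hence trivially convex) in the input throughout each fixed activation region, can be checked numerically with a tiny rectifier network; the weights and points below are made up for illustration and are not from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
W = rng.normal(size=(8, 2))   # hidden-layer weights
b = rng.normal(size=8)        # hidden-layer biases
v = rng.normal(size=8)        # output weights

def net(x):
    """Tiny one-hidden-layer rectifier network."""
    return v @ np.maximum(W @ x + b, 0.0)

def pattern(x):
    """Which hidden units are active at x: the signature of x's linear region."""
    return (W @ x + b) > 0

# On the region sharing x0's activation pattern, max(z, 0) = p * z with p
# fixed, so the network collapses to the affine map x -> A @ x + c.
x0 = np.array([0.3, -0.2])
p0 = pattern(x0).astype(float)
A = (v * p0) @ W
c = (v * p0) @ b
```

Stitching such affine pieces together over all activation patterns is what makes the network piecewise convex in the input.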

  11. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kumar, Jitendra; Collier, Nathan; Bisht, Gautam

Vast carbon stocks stored in permafrost soils of Arctic tundra are under risk of release to the atmosphere under warming climate scenarios. Ice-wedge polygons in the low-gradient polygonal tundra create a complex mosaic of microtopographic features. This microtopography plays a critical role in regulating the fine-scale variability in thermal and hydrological regimes in the polygonal tundra landscape underlain by continuous permafrost. Modeling of thermal regimes of this sensitive ecosystem is essential for understanding the landscape behavior under the current as well as changing climate. Here, we present an end-to-end effort for high-resolution numerical modeling of thermal hydrology at real-world field sites, utilizing the best available data to characterize and parameterize the models. We also develop approaches to model the thermal hydrology of polygonal tundra and apply them at four study sites near Barrow, Alaska, spanning across low to transitional to high-centered polygons, representing a broad polygonal tundra landscape. A multiphase subsurface thermal hydrology model (PFLOTRAN) was developed and applied to study the thermal regimes at four sites. Using a high-resolution lidar digital elevation model (DEM), microtopographic features of the landscape were characterized and represented in the high-resolution model mesh. The best available soil data from field observations and literature were utilized to represent the complex heterogeneous subsurface in the numerical model. Simulation results demonstrate the ability of the developed modeling approach to capture, without recourse to model calibration, several aspects of the complex thermal regimes across the sites, and provide insights into the critical role of polygonal tundra microtopography in regulating the thermal dynamics of the carbon-rich permafrost soils.
Moreover, areas of significant disagreement between model results and observations highlight the importance of field-based observations of soil thermal and hydraulic properties for modeling-based studies of permafrost thermal dynamics, and provide motivation and guidance for future observations that will help address model and data gaps affecting our current understanding of the system.

  12. Effects of Unsaturated Microtopography on Nitrate Concentrations in Tundra Ecosystems: Examples from Polygonal Terrain and Degraded Peat Plateaus

    NASA Astrophysics Data System (ADS)

    Heikoop, J. M.; Arendt, C. A.; Newman, B. D.; Charsley-Groffman, L.; Perkins, G.; Wilson, C. J.; Wullschleger, S.

    2017-12-01

    Under the auspices of the Next Generation Ecosystem Experiment - Arctic, we have been studying hydrogeochemical signals in Alaskan tundra ecosystems underlain by continuous permafrost (Barrow Environmental Observatory (BEO)) and discontinuous permafrost (Seward Peninsula). The Barrow site comprises largely saturated tundra associated with the low gradient Arctic Coastal Plain. Polygonal microtopography, however, can result in slightly raised areas that are unsaturated. In these areas we have previously demonstrated production and accumulation of nitrate, which, based on nitrate isotopic analysis, derives from microbial degradation. Our Seward Peninsula site is located in a much steeper and generally well-drained watershed. In lower-gradient areas at the top and bottom of the watershed, however, the tundra is generally saturated, likely because of the presence of underlying discontinuous permafrost inhibiting infiltration. These settings also contain microtopographic features, though in the form of degraded peat plateaus surrounded by wet graminoid sag ponds. Despite being very different microtopographic features in a very different setting with distinct vegetation, qualitatively similar nitrate accumulation patterns as seen in polygonal terrain were observed. The highest nitrate pore water concentration observed in an unsaturated peat plateau was approximately 5 mg/L, whereas subsurface pore water concentrations in surrounding sag ponds were generally below the limit of detection. Nitrate isotopes indicate this nitrate results from microbial mineralization and nitrification based on comparison to the nitrate isotopic composition of reduced nitrogen sources in the environment and the oxygen isotope composition of site pore water. 
Nitrate concentrations were most similar to those found in low-center polygon rims and flat-centered polygon centers at the BEO, but were significantly lower than the maximum concentrations seen in the highest and driest polygonal features, the centers of high-centered polygons. Combined, these results suggest that moisture content is a significant control on nitrate production and accumulation in tundra ecosystems and that unsaturated microtopography represents hot spots for microbial decomposition.

  13. Microtopographic control on the ground thermal regime in ice wedge polygons

    NASA Astrophysics Data System (ADS)

    Abolt, Charles J.; Young, Michael H.; Atchley, Adam L.; Harp, Dylan R.

    2018-06-01

    The goal of this research is to constrain the influence of ice wedge polygon microtopography on near-surface ground temperatures. Ice wedge polygon microtopography is prone to rapid deformation in a changing climate, and cracking in the ice wedge depends on thermal conditions at the top of the permafrost; therefore, feedbacks between microtopography and ground temperature can shed light on the potential for future ice wedge cracking in the Arctic. We first report on a year of sub-daily ground temperature observations at 5 depths and 9 locations throughout a cluster of low-centered polygons near Prudhoe Bay, Alaska, and demonstrate that the rims become the coldest zone of the polygon during winter, due to thinner snowpack. We then calibrate a polygon-scale numerical model of coupled thermal and hydrologic processes against this dataset, achieving an RMSE of less than 1.1 °C between observed and simulated ground temperature. Finally, we conduct a sensitivity analysis of the model by systematically manipulating the height of the rims and the depth of the troughs and tracking the effects on ice wedge temperature. The results indicate that winter temperatures in the ice wedge are sensitive to both rim height and trough depth, but more sensitive to rim height. Rims act as preferential outlets of subsurface heat; increasing rim size decreases winter temperatures in the ice wedge. Deeper troughs lead to increased snow entrapment, promoting insulation of the ice wedge. The potential for ice wedge cracking is therefore reduced if rims are destroyed or if troughs subside, due to warmer conditions in the ice wedge. These findings can help explain the origins of secondary ice wedges in modern and ancient polygons. The findings also imply that the potential for re-establishing rims in modern thermokarst-affected terrain will be limited by reduced cracking activity in the ice wedges, even if regional air temperatures stabilize.

  14. Splineless coupling means

    DOEpatents

    Heitmann, A.M.; Lord, R.E. Jr.

    1982-07-20

    In the first embodiment, the invention comprises an imperforate turbine wheel having a hub of polygonal cross-section engageable with a hollow shaft of polygonal conformation, and a thrust collar and bolt for fastening the shaft and wheel together. 4 figs.

  15. Polygons on a rotating fluid surface.

    PubMed

    Jansson, Thomas R N; Haspang, Martin P; Jensen, Kåre H; Hersen, Pascal; Bohr, Tomas

    2006-05-05

We report a novel and spectacular instability of a fluid surface in a rotating system. In a flow driven by rotating the bottom plate of a partially filled, stationary cylindrical container, the shape of the free surface can spontaneously break the axial symmetry and assume the form of a polygon rotating rigidly with a speed different from that of the plate. With water, we have observed polygons with up to 6 corners. It has been known for many years that such flows are prone to symmetry breaking, but apparently the polygonal surface shapes have never been observed. The creation of rotating internal waves in a similar setup was observed for much lower rotation rates, where the free surface remains essentially flat [J. M. Lopez, J. Fluid Mech. 502, 99 (2004)]. We speculate that the instability is caused by the strong azimuthal shear due to the stationary walls and that it is triggered by minute wobbling of the rotating plate.

  16. Scaling behavior of knotted random polygons and self-avoiding polygons: Topological swelling with enhanced exponent.

    PubMed

    Uehara, Erica; Deguchi, Tetsuo

    2017-12-07

We show that the average size of self-avoiding polygons (SAPs) with a fixed knot is much larger than that with no topological constraint if the excluded volume is small and the number of segments is large. We call this topological swelling. We argue for an "enhancement" of the scaling exponent for random polygons with a fixed knot. We study them systematically through SAPs consisting of hard cylindrical segments with various values of the segment radius. Here, by the average size we mean the mean-square radius of gyration. Furthermore, we show numerically that the topological balance length of a composite knot is given by the sum of those of all constituent prime knots. Here we define the topological balance length of a knot as the number of segments at which topological entropic repulsion is balanced with the knot complexity in the average size. The additivity suggests the local knot picture.
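As a rough illustration of the quantity under study, the sketch below computes the mean-square radius of gyration of a closed random chain (closed by subtracting the mean step; unlike the paper's model it is neither equilateral nor knot-controlled).

```python
import random

def random_closed_polygon(n, seed=42):
    """Closed random chain in 3D: draw n Gaussian steps, then subtract the
    mean step from each so the walk returns to its starting point.
    (Illustrative only: not equilateral and with no control of knot type.)"""
    rng = random.Random(seed)
    steps = [(rng.gauss(0, 1), rng.gauss(0, 1), rng.gauss(0, 1))
             for _ in range(n)]
    mx = sum(s[0] for s in steps) / n
    my = sum(s[1] for s in steps) / n
    mz = sum(s[2] for s in steps) / n
    pos, verts = (0.0, 0.0, 0.0), []
    for sx, sy, sz in steps:
        pos = (pos[0] + sx - mx, pos[1] + sy - my, pos[2] + sz - mz)
        verts.append(pos)
    return verts

def square_radius_of_gyration(verts):
    """Mean-square distance of the vertices from their centroid."""
    n = len(verts)
    cx = sum(v[0] for v in verts) / n
    cy = sum(v[1] for v in verts) / n
    cz = sum(v[2] for v in verts) / n
    return sum((x - cx) ** 2 + (y - cy) ** 2 + (z - cz) ** 2
               for x, y, z in verts) / n
```

The paper's measurements average this quantity over ensembles of self-avoiding polygons with a fixed knot type, which requires far more careful sampling than this toy construction.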

  17. Pizza again? On the division of polygons into sections with a common origin

    NASA Astrophysics Data System (ADS)

    Sinitsky, Ilya; Stupel, Moshe; Sinitsky, Marina

    2018-02-01

The paper explores the division of a polygon into equal-area pieces using line segments originating at a common point. The mathematical background of the proposed method is very simple and belongs to secondary school geometry. Simple examples dividing a square into two, four or eight congruent pieces provide a starting point for discovering how to divide a regular polygon into any number of equal-area pieces using line segments originating from the centre. Moreover, it turns out that there are infinitely many ways to do the division. Discovering the basic invariant involved allows application of the same procedure to divide any tangential polygon, and after suitable adjustment it can also be used for rectangles and parallelograms. Further generalization offers many additional solutions of the problem, and some of them are presented for the case of an arbitrary triangle and a square. Links to dynamic demonstrations in GeoGebra serve to illustrate the main results.
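The underlying invariant can be sketched for the unit square (illustrative code, not from the paper): segments from the centre to n boundary points spaced at equal arc length produce n equal-area pieces, for any starting offset, because each piece's area is half the inradius times its share of the perimeter.

```python
import math

def point_on_square(t):
    """Point at arc-length fraction t (mod 1) along the unit-square boundary,
    traversed counter-clockwise from the origin."""
    s = (t % 1.0) * 4.0
    side, u = int(s) % 4, s - int(s)
    return [(u, 0.0), (1.0, u), (1.0 - u, 1.0), (0.0, 1.0 - u)][side]

def area(poly):
    """Shoelace area of a simple polygon given as a vertex list."""
    return 0.5 * abs(sum(x1 * y2 - x2 * y1 for (x1, y1), (x2, y2)
                         in zip(poly, poly[1:] + poly[:1])))

def divide_square(n, offset=0.13):
    """Cut the unit square into n pieces with segments from the centre to n
    boundary points spaced at equal arc length; every piece has area 1/n
    because each piece's area is (inradius / 2) x its boundary-arc length."""
    centre = (0.5, 0.5)
    pieces = []
    for i in range(n):
        t0, t1 = offset + i / n, offset + (i + 1) / n
        poly = [centre, point_on_square(t0)]
        k = math.floor(t0 * 4) + 1
        while k / 4.0 < t1:            # square corners between the two cuts
            poly.append(point_on_square(k / 4.0))
            k += 1
        poly.append(point_on_square(t1))
        pieces.append(poly)
    return pieces
```

Changing `offset` slides all the cut points along the boundary, which is one way to see that infinitely many such divisions exist.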

  18. Landscape topography structures the soil microbiome in arctic polygonal tundra.

    PubMed

    Taş, Neslihan; Prestat, Emmanuel; Wang, Shi; Wu, Yuxin; Ulrich, Craig; Kneafsey, Timothy; Tringe, Susannah G; Torn, Margaret S; Hubbard, Susan S; Jansson, Janet K

    2018-02-22

In the Arctic, environmental factors governing microbial degradation of soil carbon (C) in active layer and permafrost are poorly understood. Here we determined the functional potential of soil microbiomes horizontally and vertically across a cryoperturbed polygonal landscape in Alaska. With comparative metagenomics, genome binning of novel microbes, and gas flux measurements we show that microbial greenhouse gas (GHG) production is strongly correlated to landscape topography. Active layer and permafrost harbor contrasting microbiomes, with increasing amounts of Actinobacteria correlating with decreasing soil C in permafrost. While microbial functions such as fermentation and methanogenesis were dominant in wetter polygons, in drier polygons genes for C mineralization and CH4 oxidation were abundant. The active layer microbiome was poised to assimilate N and not to release N2O, reflecting low N2O flux measurements. These results provide mechanistic links of microbial metabolism to GHG fluxes that are needed for the refinement of model predictions.

  19. Spiders from Mars?

    NASA Technical Reports Server (NTRS)

    2003-01-01

    MGS MOC Release No. MOC2-426, 19 July 2003

    No, this is not a picture of a giant, martian spider web. This Mars Global Surveyor (MGS) Mars Orbiter Camera (MOC) image shows a plethora of polygonal features on the floor of a northern hemisphere impact crater near 65.6°N, 327.7°W. The picture was acquired during spring, after the seasonal carbon dioxide frost cap had largely migrated through the region. At the time the picture was taken, remnants of seasonal frost remained on the crater rim and on the edges of the troughs that bound each of the polygons. Frost often provides a helpful hint as to where polygons and patterned ground occur. The polygons, if they were on Earth, would indicate the presence of freeze-thaw cycles in ground ice. Although uncertain, the same might be true of Mars. Sunlight illuminates the scene from the lower left.

  20. Scaling behavior of knotted random polygons and self-avoiding polygons: Topological swelling with enhanced exponent

    NASA Astrophysics Data System (ADS)

    Uehara, Erica; Deguchi, Tetsuo

    2017-12-01

    We show that the average size of self-avoiding polygons (SAPs) with a fixed knot is much larger than that with no topological constraint if the excluded volume is small and the number of segments is large. We call this topological swelling. We argue for an "enhancement" of the scaling exponent for random polygons with a fixed knot. We study them systematically through SAPs consisting of hard cylindrical segments with various values of the segment radius. Here, by the average size we mean the mean-square radius of gyration. Furthermore, we show numerically that the topological balance length of a composite knot is given by the sum of those of all constituent prime knots. Here we define the topological balance length of a knot as the number of segments at which topological entropic repulsion is balanced with the knot complexity in the average size. The additivity suggests the local knot picture.

  1. Fast algorithms of constrained Delaunay triangulation and skeletonization for band images

    NASA Astrophysics Data System (ADS)

    Zeng, Wei; Yang, ChengLei; Meng, XiangXu; Yang, YiJun; Yang, XiuKun

    2004-09-01

    For the boundary polygons of band images, a fast constrained Delaunay triangulation algorithm is presented, and based on it an efficient skeletonization algorithm is designed. In the triangulation process, the characteristics of the uniform grid structure and of band polygons are exploited to speed up the search for the third vertex of an edge within its local range when forming a Delaunay triangle. The final skeleton of the band image is derived by reducing each triangle to local skeleton lines according to its topology. The algorithm has a simple data structure and is easy to understand and implement. Moreover, it can deal with multiply connected polygons on the fly. Experiments show a nearly linear dependence between triangulation time and the size of randomly generated band polygons. Correspondingly, the skeletonization algorithm also improves on previously known results in terms of running time. Some practical examples are given in the paper.
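The "reduce each triangle to local skeleton lines according to its topology" step can be illustrated on a toy example. Triangles are classified by how many edges are internal (shared with another triangle): a "sleeve" triangle contributes the segment joining its two internal-edge midpoints, a terminal triangle connects its single midpoint to the opposite vertex, and a junction triangle stars out from its centroid. A hand-triangulated rectangular strip stands in for a constrained Delaunay triangulation here (the triangulation itself is assumed, not produced by the paper's algorithm):

```python
from collections import defaultdict

# Hypothetical band polygon: a 4x1 rectangular strip, triangulated by hand.
pts = [(float(x), 0.0) for x in range(5)] + [(float(x), 1.0) for x in range(5)]
# indices: bottom row 0..4, top row 5..9
tris = []
for i in range(4):
    tris.append((i, i + 1, i + 5))       # lower-left triangle of each cell
    tris.append((i + 1, i + 6, i + 5))   # upper-right triangle of each cell

# An edge is internal iff it is shared by exactly two triangles.
edge_count = defaultdict(int)
for t in tris:
    for e in ((t[0], t[1]), (t[1], t[2]), (t[2], t[0])):
        edge_count[frozenset(e)] += 1

def midpoint(e):
    a, b = tuple(e)
    return ((pts[a][0] + pts[b][0]) / 2, (pts[a][1] + pts[b][1]) / 2)

segments = []
for t in tris:
    internal = [frozenset(e) for e in ((t[0], t[1]), (t[1], t[2]), (t[2], t[0]))
                if edge_count[frozenset(e)] == 2]
    mids = [midpoint(e) for e in internal]
    if len(mids) == 2:        # "sleeve": join the two internal-edge midpoints
        segments.append((mids[0], mids[1]))
    elif len(mids) == 1:      # terminal: join midpoint to the opposite vertex
        far = [v for v in t if v not in internal[0]]
        segments.append((mids[0], pts[far[0]]))
    elif len(mids) == 3:      # junction: star from the centroid
        cx = sum(pts[v][0] for v in t) / 3
        cy = sum(pts[v][1] for v in t) / 3
        segments += [((cx, cy), m) for m in mids]

print(len(segments))  # -> 8, one skeleton segment per triangle in this strip
```

For this strip the sleeve-triangle midpoints all lie on the medial line y = 0.5, which is the expected skeleton of an elongated band.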

  2. 3D DEM analyses of the 1963 Vajont rock slide

    NASA Astrophysics Data System (ADS)

    Boon, Chia Weng; Houlsby, Guy; Utili, Stefano

    2013-04-01

    The 1963 Vajont rock slide has been modelled using the distinct element method (DEM). The open-source DEM code, YADE (Kozicki & Donzé, 2008), was used together with the contact detection algorithm proposed by Boon et al. (2012). The critical sliding friction angle at the slide surface was sought using a strength reduction approach. A shear-softening contact model was used to model the shear resistance of the clayey layer at the slide surface. The results suggest that the critical sliding friction angle can be conservative if stability analyses are calculated based on the peak friction angles. The water table was assumed to be horizontal and the pore pressure at the clay layer was assumed to be hydrostatic. The influence of reservoir filling was marginal, increasing the sliding friction angle by only 1.6°. The results of the DEM calculations were found to be sensitive to the orientations of the bedding planes and cross-joints. Finally, the failure mechanism was investigated and arching was found to be present at the bend of the chair-shaped slope. References Boon C.W., Houlsby G.T., Utili S. (2012). A new algorithm for contact detection between convex polygonal and polyhedral particles in the discrete element method. Computers and Geotechnics, vol 44, 73-82, doi.org/10.1016/j.compgeo.2012.03.012. Kozicki, J., & Donzé, F. V. (2008). A new open-source software developed for numerical simulations using discrete modeling methods. Computer Methods in Applied Mechanics and Engineering, 197(49-50), 4429-4443.

  3. On equivalent characterizations of convexity of functions

    NASA Astrophysics Data System (ADS)

    Gkioulekas, Eleftherios

    2013-04-01

    A detailed development of the theory of convex functions, not often found in complete form in most textbooks, is given. We adopt the strict secant line definition as the definitive definition of convexity. We then show that for differentiable functions, this definition is logically equivalent to the first derivative monotonicity definition and the tangent line definition. Consequently, for differentiable functions, all three characterizations are logically equivalent.
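For a function f differentiable on an interval I, the three characterizations can be written side by side (standard notation, not quoted from the article):

```latex
% Secant condition (non-strict form of the definition of convexity):
f(\lambda a + (1-\lambda) b) \le \lambda f(a) + (1-\lambda) f(b),
\qquad a, b \in I,\ 0 < \lambda < 1.

% First-derivative monotonicity: f' is nondecreasing on I:
x_1 < x_2 \implies f'(x_1) \le f'(x_2).

% Tangent-line condition: the graph lies above every tangent:
f(x) \ge f(a) + f'(a)(x - a), \qquad x, a \in I.
```

Strict convexity, as in the article's secant definition, replaces each ≤ by < (for distinct points) and "nondecreasing" by "increasing".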

  4. Asymmetric Bulkheads for Cylindrical Pressure Vessels

    NASA Technical Reports Server (NTRS)

    Ford, Donald B.

    2007-01-01

    Asymmetric bulkheads are proposed for the ends of vertically oriented cylindrical pressure vessels. These bulkheads, which would feature both convex and concave contours, would offer advantages over purely convex, purely concave, and flat bulkheads (see figure). Intended originally to be applied to large tanks that hold propellant liquids for launching spacecraft, the asymmetric-bulkhead concept may also be attractive for terrestrial pressure vessels for which there are requirements to maximize volumetric and mass efficiencies. A description of the relative advantages and disadvantages of prior symmetric bulkhead configurations is prerequisite to understanding the advantages of the proposed asymmetric configuration: In order to obtain adequate strength, flat bulkheads must be made thicker, relative to concave and convex bulkheads; the difference in thickness is such that, other things being equal, pressure vessels with flat bulkheads must be made heavier than ones with concave or convex bulkheads. Convex bulkhead designs increase overall tank lengths, thereby necessitating additional supporting structure for keeping tanks vertical. Concave bulkhead configurations increase tank lengths and detract from volumetric efficiency, even though they do not necessitate additional supporting structure. The shape of a bulkhead affects the proportion of residual fluid in a tank, that is, the portion of fluid that unavoidably remains in the tank during outflow and hence cannot be used. In this regard, a flat bulkhead is disadvantageous in two respects: (1) It lacks a single low point for optimum placement of an outlet and (2) a vortex that forms at the outlet during outflow prevents a relatively large amount of fluid from leaving the tank. A concave bulkhead also lacks a single low point for optimum placement of an outlet.
Like purely concave and purely convex bulkhead configurations, the proposed asymmetric bulkhead configurations would be more mass-efficient than is the flat bulkhead configuration. In comparison with both purely convex and purely concave configurations, the proposed asymmetric configurations would offer greater volumetric efficiency. Relative to a purely convex bulkhead configuration, the corresponding asymmetric configuration would result in a shorter tank, thus demanding less supporting structure. An asymmetric configuration provides a low point for optimum location of a drain, and the convex shape at the drain location minimizes the amount of residual fluid.

  5. Fast and Exact Fiber Surfaces for Tetrahedral Meshes.

    PubMed

    Klacansky, Pavol; Tierny, Julien; Carr, Hamish; Zhao Geng

    2017-07-01

    Isosurfaces are fundamental geometrical objects for the analysis and visualization of volumetric scalar fields. Recent work has generalized them to bivariate volumetric fields with fiber surfaces, the pre-image of polygons in range space. However, the existing algorithm for their computation is approximate, and is limited to closed polygons. Moreover, its runtime performance does not allow instantaneous updates of the fiber surfaces upon user edits of the polygons. Overall, these limitations prevent a reliable and interactive exploration of the space of fiber surfaces. This paper introduces the first algorithm for the exact computation of fiber surfaces in tetrahedral meshes. It assumes no restriction on the topology of the input polygon, handles degenerate cases and better captures sharp features induced by polygon bends. The algorithm also allows visualization of individual fibers on the output surface, better illustrating their relationship with data features in range space. To enable truly interactive exploration sessions, we further improve the runtime performance of this algorithm. In particular, we show that it is trivially parallelizable and that it scales nearly linearly with the number of cores. Further, we study acceleration data-structures both in geometrical domain and range space and we show how to generalize interval trees used in isosurface extraction to fiber surface extraction. Experiments demonstrate the superiority of our algorithm over previous work, both in terms of accuracy and running time, with up to two orders of magnitude speedups. This improvement enables interactive edits of range polygons with instantaneous updates of the fiber surface for exploration purpose. A VTK-based reference implementation is provided as additional material to reproduce our results.

  6. Hierarchical Regularization of Polygons for Photogrammetric Point Clouds of Oblique Images

    NASA Astrophysics Data System (ADS)

    Xie, L.; Hu, H.; Zhu, Q.; Wu, B.; Zhang, Y.

    2017-05-01

    Despite the success of multi-view stereo (MVS) reconstruction from massive oblique images at city scale, only point clouds and triangulated meshes are available from existing MVS pipelines; these are laden with topological defects, free of semantic information, and hard to edit and manipulate interactively in further applications. On the other hand, 2D polygons and polygonal models are still the industrial standard. However, extraction of 2D polygons from MVS point clouds is a non-trivial task, given that the boundaries of the detected planes are zigzagged and regularities, such as parallelism and orthogonality, are not preserved. Aiming to solve these issues, this paper proposes a hierarchical polygon regularization method for photogrammetric point clouds from existing MVS pipelines, which comprises local and global levels. After boundary-point extraction, e.g. using alpha shapes, the local level consolidates the original points by refining their orientation and position using linear priors. The points are then grouped into local segments by forward searching. At the global level, regularities are enforced through a labeling process that encourages segments to share the same label, where a shared label indicates that the segments are parallel or orthogonal. This is formulated as a Markov random field and solved efficiently. Preliminary results are obtained with point clouds from aerial oblique images and compared with two classical regularization methods, revealing that the proposed method is more powerful in abstracting a single building and is promising for further 3D polygonal model reconstruction and GIS applications.

  7. SU-D-BRA-04: Computerized Framework for Marker-Less Localization of Anatomical Feature Points in Range Images Based On Differential Geometry Features for Image-Guided Radiation Therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Soufi, M; Arimura, H; Toyofuku, F

    Purpose: To propose a computerized framework for localization of anatomical feature points on the patient surface in infrared-ray based range images by using differential geometry (curvature) features. Methods: The general concept was to reconstruct the patient surface by using a mathematical modeling technique for the computation of differential geometry features that characterize the local shapes of the patient surfaces. A region of interest (ROI) was first extracted based on a template matching technique applied to amplitude (grayscale) images. The extracted ROI was preprocessed to reduce temporal and spatial noise by using Kalman and bilateral filters, respectively. Next, a smooth patient surface was reconstructed by using a non-uniform rational basis spline (NURBS) model. Finally, differential geometry features, i.e. the shape index and curvedness features, were computed for localizing the anatomical feature points. The proposed framework was trained for optimizing the shape index and curvedness thresholds and tested on range images of an anthropomorphic head phantom. The range images were acquired by an infrared-ray based time-of-flight (TOF) camera. The localization accuracy was evaluated by measuring the mean of minimum Euclidean distances (MMED) between reference (ground truth) points and the feature points localized by the proposed framework. The evaluation was performed for points localized on convex regions (e.g. apex of nose) and concave regions (e.g. nasofacial sulcus). Results: The proposed framework localized anatomical feature points on convex and concave anatomical landmarks with MMEDs of 1.91±0.50 mm and 3.70±0.92 mm, respectively. A statistically significant difference was obtained between the feature points on the convex and concave regions (P<0.001). Conclusion: Our study has shown the feasibility of differential geometry features for localization of anatomical feature points on the patient surface in range images. The proposed framework might be useful for tasks involving feature-based image registration in range-image guided radiation therapy.

  8. Determinants of the biomechanical and radiological outcome of surgical correction of adolescent idiopathic scoliosis surgery: the role of rod properties and patient characteristics.

    PubMed

    Giudici, Fabrizio; Galbusera, Fabio; Zagra, Antonino; Wilke, Hans-Joachim; Archetti, Marino; Scaramuzzo, Laura

    2017-10-01

    The aim of the study was to evaluate the role of the mechanical properties of the rod and of the characteristics of the patients (age, skeletal maturity, BMI, and Lenke type) in determining the deformity correction, its maintenance over time, and the risk of mechanical failure of the instrumentation. From March 2011 to December 2014, 120 patients affected by AIS underwent posterior instrumented fusion. Two 5.5-mm CoCr rods were implanted in all patients. For every patient, age, sex, Risser grade, Lenke curve type, flexibility of the main curve, body mass index (BMI), and percentage of correction were recorded. In all patients, the Cobb angle value and rod curvature angle (RC) were evaluated. RC changes were registered and correlated with each factor to assess statistical significance in a multivariate analysis. A biomechanical model was constructed to study the influence of rod diameter and material, as well as the density of the anchoring implants, in determining stress and deformation of rods after contouring and implantation. Radiographic and biomechanical analysis showed a different mean rod deformation for the concave and convex sides: 7.8° and 3.9°, respectively. The mean RC value at immediate follow-up was 21.8° for the concave side and 14.6° for the convex. At 2-year minimum follow-up, the RC value increased by 1.5° only for the concave side. At 3.5-year mean follow-up, the RC value increased by 2.7° (p = 0.003) for the concave side and by 1.3° for the convex (p = 0.06). The use of the stiffest material as well as of the lowest diameter resulted in higher stresses in the rods. The use of either a low or a high instrumentation density resulted only in minor differences in the loss of correction. Rod diameter and material, as well as patient characteristics such as BMI, age, and Risser grade, play an important role in deformity correction and its maintenance over time.

  9. Convex-hull mass estimates of the dodo (Raphus cucullatus): application of a CT-based mass estimation technique

    PubMed Central

    O’Mahoney, Thomas G.; Kitchener, Andrew C.; Manning, Phillip L.; Sellers, William I.

    2016-01-01

    The external appearance of the dodo (Raphus cucullatus, Linnaeus, 1758) has been a source of considerable intrigue, as contemporaneous accounts or depictions are rare. The body mass of the dodo has been particularly contentious, with the flightless pigeon alternatively reconstructed as slim or fat depending upon the skeletal metric used as the basis for mass prediction. Resolving this dichotomy and obtaining a reliable estimate for mass is essential before future analyses regarding dodo life history, physiology or biomechanics can be conducted. Previous mass estimates of the dodo have relied upon predictive equations based upon hind limb dimensions of extant pigeons. Yet the hind limb proportions of the dodo have been found to differ considerably from those of its modern relatives, particularly with regard to midshaft diameter. Therefore, application of predictive equations to unusually robust fossil skeletal elements may bias mass estimates. We present a whole-body computed tomography (CT)-based mass estimation technique for application to the dodo. We generate 3D volumetric renders of the articulated skeletons of 20 species of extant pigeons, and wrap minimum-fit ‘convex hulls’ around their bony extremities. Convex hull volume is subsequently regressed against mass to generate predictive models based upon whole skeletons. Our best-performing predictive model is characterized by high correlation coefficients and low mean squared error (a = − 2.31, b = 0.90, r2 = 0.97, MSE = 0.0046). When applied to articulated composite skeletons of the dodo (National Museums Scotland, NMS.Z.1993.13; Natural History Museum, NHMUK A.9040 and S/1988.50.1), we estimate eviscerated body masses of 8–10.8 kg. When accounting for missing soft tissues, this may equate to live masses of 10.6–14.3 kg. Mass predictions presented here overlap at the lower end of those previously published, and support recent suggestions of a relatively slim dodo.
CT-based reconstructions provide a means of objectively estimating mass and body segment properties of extinct species using whole articulated skeletons. PMID:26788418
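The prediction step implied by the reported regression can be sketched with standard tools; the volume units and the use of scipy are assumptions here, so treat the numbers as illustrative only:

```python
import numpy as np
from scipy.spatial import ConvexHull

# Reported model: log10(mass) = a + b * log10(convex hull volume),
# with a = -2.31, b = 0.90 (volume units as in the original study, assumed).
a, b = -2.31, 0.90

def predict_mass(points):
    """Predict body mass from the convex hull volume of a 3D skeleton point cloud."""
    volume = ConvexHull(points).volume
    return 10.0 ** (a + b * np.log10(volume))

# Toy point cloud: the corners of a unit cube, whose hull volume is exactly 1,
# so the prediction reduces to 10**a.
corners = np.array([(x, y, z) for x in (0, 1) for y in (0, 1) for z in (0, 1)],
                   dtype=float)
print(predict_mass(corners))
```

Because the regression is linear in log-log space, scaling the skeleton up scales the predicted mass by the volume ratio raised to the power b.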

  10. Telidon Videotex presentation level protocol: Augmented picture description instructions

    NASA Astrophysics Data System (ADS)

    Obrien, C. D.; Brown, H. G.; Smirle, J. C.; Lum, Y. F.; Kukulka, J. Z.; Kwan, A.

    1982-02-01

    The Telidon Videotex system is a method by which graphic and textual information and transactional services can be accessed from information sources by the general public. In order to transmit information to a Telidon terminal at a minimum bandwidth, and in a manner independent of the type of communications channel, a coding scheme was devised which permits the encoding of a picture into the geometric drawing elements which compose it. These picture description instructions are an alpha geometric coding model and are based on the primitives of POINT, LINE, ARC, RECTANGLE, POLYGON, and INCREMENT. Text is encoded as (ASCII) characters along with a supplementary table of accents and special characters. A mosaic shape table is included for compatibility. A detailed specification of the coding scheme and a description of the principles which make it independent of communications channel and display hardware are provided.

  11. Elmo bumpy square plasma confinement device

    DOEpatents

    Owen, L.W.

    1985-01-01

    The invention is an Elmo bumpy type plasma confinement device having a polygonal configuration of closed magnet field lines for improved plasma confinement. In the preferred embodiment, the device is of a square configuration which is referred to as an Elmo bumpy square (EBS). The EBS is formed by four linear magnetic mirror sections each comprising a plurality of axisymmetric assemblies connected in series and linked by 90° sections of high magnetic field toroidal solenoid type field generating coils. These coils provide corner confinement with a minimum of radial dispersion of the confined plasma to minimize the detrimental effects of the toroidal curvature of the magnetic field. Each corner is formed by a plurality of circular or elliptical coils aligned about the corner radius to provide maximum continuity in the closing of the magnetic field lines about the square configuration confining the plasma within a vacuum vessel located within the various coils forming the square configuration confinement geometry.

  12. Convexity and concavity constants in Lorentz and Marcinkiewicz spaces

    NASA Astrophysics Data System (ADS)

    Kaminska, Anna; Parrish, Anca M.

    2008-07-01

    We provide here the formulas for the q-convexity and q-concavity constants for function and sequence Lorentz spaces associated to either decreasing or increasing weights. This also yields the formula for the q-convexity constants in function and sequence Marcinkiewicz spaces. In this paper we extend and enhance the results from [G.J.O. Jameson, The q-concavity constants of Lorentz sequence spaces and related inequalities, Math. Z. 227 (1998) 129-142] and [A. Kaminska, A.M. Parrish, The q-concavity and q-convexity constants in Lorentz spaces, in: Banach Spaces and Their Applications in Analysis, Conference in Honor of Nigel Kalton, May 2006, Walter de Gruyter, Berlin, 2007, pp. 357-373].

  13. Convexity of quantum χ²-divergence.

    PubMed

    Hansen, Frank

    2011-06-21

    The general quantum χ²-divergence has recently been introduced by Temme et al. [Temme K, Kastoryano M, Ruskai M, Wolf M, Verstrate F (2010) J Math Phys 51:122201] and applied to quantum channels (quantum Markov processes). The quantum χ²-divergence is not unique, as opposed to the classical χ²-divergence, but depends on the choice of quantum statistics. It was noticed that the elements in a particular one-parameter family of quantum χ²-divergences are convex functions in the density matrices (ρ,σ), thus mirroring the convexity of the classical χ²(p,q)-divergence in probability distributions (p,q). We prove that any quantum χ²-divergence is a convex function in its two arguments.

  14. A Sparse Representation-Based Deployment Method for Optimizing the Observation Quality of Camera Networks

    PubMed Central

    Wang, Chang; Qi, Fei; Shi, Guangming; Wang, Xiaotian

    2013-01-01

    Deployment is a critical issue affecting the quality of service of camera networks. The deployment aims at covering the whole scene, which may contain obstacles that occlude the line of sight, with the fewest cameras while meeting an expected observation quality. This is generally formulated as a non-convex optimization problem, which is hard to solve in polynomial time. In this paper, we propose an efficient convex solution for deployment optimizing the observation quality based on a novel anisotropic sensing model of cameras, which provides a reliable measurement of the observation quality. The deployment is formulated as the selection of a subset of nodes from a redundant initial deployment with numerous cameras, which is an ℓ0 minimization problem. Then, we relax this non-convex optimization to a convex ℓ1 minimization employing sparse representation. Therefore, a high-quality deployment is efficiently obtained via convex optimization. Simulation results confirm the effectiveness of the proposed camera deployment algorithms. PMID:23989826
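The ℓ0-to-ℓ1 relaxation can be sketched on a toy instance; the coverage matrix, the LP solver, and the rounding rule below are illustrative assumptions, not the paper's sensing model:

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical coverage matrix: A[i, j] = 1 if candidate camera j observes
# target i with sufficient quality. The l0 problem "use fewest cameras that
# cover every target" is relaxed to l1: minimize sum(x) s.t. A @ x >= 1,
# 0 <= x <= 1, then rounded back to a feasible 0/1 selection.
A = np.array([
    [1, 0, 1, 0, 0],   # target 0: seen by cameras 0, 2
    [1, 0, 0, 1, 0],   # target 1: seen by cameras 0, 3
    [1, 1, 0, 0, 0],   # target 2: seen by cameras 0, 1
    [0, 1, 0, 0, 1],   # target 3: seen by cameras 1, 4
])

n = A.shape[1]
res = linprog(c=np.ones(n), A_ub=-A, b_ub=-np.ones(A.shape[0]),
              bounds=[(0, 1)] * n)
x = res.x

# Simple rounding: keep cameras in decreasing x until every target is covered.
chosen = []
covered = np.zeros(A.shape[0], dtype=bool)
for j in np.argsort(-x):
    if covered.all():
        break
    chosen.append(int(j))
    covered |= A[:, j].astype(bool)

print(sorted(chosen))
```

On this instance the relaxation is tight: camera 0 must be selected at the LP optimum, and one of cameras 1 or 4 completes the cover, so the rounded selection uses two cameras.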

  15. Entropy and convexity for nonlinear partial differential equations

    PubMed Central

    Ball, John M.; Chen, Gui-Qiang G.

    2013-01-01

    Partial differential equations are ubiquitous in almost all applications of mathematics, where they provide a natural mathematical description of many phenomena involving change in physical, chemical, biological and social processes. The concept of entropy originated in thermodynamics and statistical physics during the nineteenth century to describe the heat exchanges that occur in the thermal processes in a thermodynamic system, while the original notion of convexity is for sets and functions in mathematics. Since then, entropy and convexity have become two of the most important concepts in mathematics. In particular, nonlinear methods via entropy and convexity have been playing an increasingly important role in the analysis of nonlinear partial differential equations in recent decades. This opening article of the Theme Issue is intended to provide an introduction to entropy, convexity and related nonlinear methods for the analysis of nonlinear partial differential equations. We also provide a brief discussion about the content and contributions of the papers that make up this Theme Issue. PMID:24249768

  16. Stochastic Dual Algorithm for Voltage Regulation in Distribution Networks with Discrete Loads: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dall-Anese, Emiliano; Zhou, Xinyang; Liu, Zhiyuan

    This paper considers distribution networks with distributed energy resources and discrete-rate loads, and designs an incentive-based algorithm that allows the network operator and the customers to pursue given operational and economic objectives, while concurrently ensuring that voltages are within prescribed limits. Four major challenges include: (1) the non-convexity from discrete decision variables, (2) the non-convexity due to a Stackelberg game structure, (3) unavailable private information from customers, and (4) different update frequencies for the two types of devices. In this paper, we first apply a convex relaxation to the discrete variables, then reformulate the non-convex structure into a convex optimization problem together with a pricing/reward signal design, and propose a distributed stochastic dual algorithm for solving the reformulated problem while restoring feasible power rates for discrete devices. By doing so, we are able to achieve the solution of the reformulated problem in a statistical sense without exposing any private customer information. Stability of the proposed schemes is analytically established and numerically corroborated.

  17. H∞ memory feedback control with input limitation minimization for offshore jacket platform stabilization

    NASA Astrophysics Data System (ADS)

    Yang, Jia Sheng

    2018-06-01

    In this paper, we investigate an H∞ memory controller with input limitation minimization (HMCIM) for offshore jacket platform stabilization. The main objective of this study is to reduce control consumption and protect the actuator while satisfying the system performance requirements. First, we introduce a dynamic model of the offshore platform retaining the low-order main modes, based on a mode reduction method from numerical analysis. Then, based on H∞ control theory and matrix inequality techniques, we develop a novel H∞ memory controller with input limitation. Furthermore, a non-convex optimization model to minimize input energy consumption is proposed. Since this non-convex model is difficult to solve directly, we use a relaxation method with matrix operations to transform it into a convex optimization model, which can then be solved by a standard convex optimization solver in MATLAB or CPLEX. Finally, several numerical examples are given to validate the proposed models and methods.

  18. Entropy and convexity for nonlinear partial differential equations.

    PubMed

    Ball, John M; Chen, Gui-Qiang G

    2013-12-28

    Partial differential equations are ubiquitous in almost all applications of mathematics, where they provide a natural mathematical description of many phenomena involving change in physical, chemical, biological and social processes. The concept of entropy originated in thermodynamics and statistical physics during the nineteenth century to describe the heat exchanges that occur in the thermal processes in a thermodynamic system, while the original notion of convexity is for sets and functions in mathematics. Since then, entropy and convexity have become two of the most important concepts in mathematics. In particular, nonlinear methods via entropy and convexity have been playing an increasingly important role in the analysis of nonlinear partial differential equations in recent decades. This opening article of the Theme Issue is intended to provide an introduction to entropy, convexity and related nonlinear methods for the analysis of nonlinear partial differential equations. We also provide a brief discussion about the content and contributions of the papers that make up this Theme Issue.

  19. The roles of the convex hull and the number of potential intersections in performance on visually presented traveling salesperson problems.

    PubMed

    Vickers, Douglas; Lee, Michael D; Dry, Matthew; Hughes, Peter

    2003-10-01

    The planar Euclidean version of the traveling salesperson problem requires finding the shortest tour through a two-dimensional array of points. MacGregor and Ormerod (1996) have suggested that people solve such problems by using a global-to-local perceptual organizing process based on the convex hull of the array. We review evidence for and against this idea, before considering an alternative, local-to-global perceptual process, based on the rapid automatic identification of nearest neighbors. We compare these approaches in an experiment in which the effects of number of convex hull points and number of potential intersections on solution performance are measured. Performance worsened with more points on the convex hull and with fewer potential intersections. A measure of response uncertainty was unaffected by the number of convex hull points but increased with fewer potential intersections. We discuss a possible interpretation of these results in terms of a hierarchical solution process based on linking nearest neighbor clusters.
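Both candidate processes can be made concrete in a few lines: the convex hull (global structure) via Andrew's monotone chain, and a nearest-neighbour tour (local structure). This is a generic sketch of the two constructs, not the authors' experimental procedure:

```python
import math
import random

def convex_hull(points):
    """Andrew's monotone chain; returns hull vertices in counter-clockwise order."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

def nearest_neighbour_tour(points):
    """Local-to-global heuristic: repeatedly hop to the nearest unvisited city."""
    tour = [points[0]]
    todo = set(points[1:])
    while todo:
        nxt = min(todo, key=lambda q: math.dist(tour[-1], q))
        todo.remove(nxt)
        tour.append(nxt)
    return tour

random.seed(1)
cities = [(random.random(), random.random()) for _ in range(30)]
hull = convex_hull(cities)
tour = nearest_neighbour_tour(cities)
length = sum(math.dist(tour[i], tour[(i + 1) % len(tour)]) for i in range(len(tour)))
print(len(hull), round(length, 2))
```

A classic fact relevant to the hull hypothesis is that an optimal tour visits the convex hull vertices in their boundary order; the nearest-neighbour tour carries no such guarantee.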

  20. Digital transceiver design for two-way AF-MIMO relay systems with imperfect CSI

    NASA Astrophysics Data System (ADS)

    Hu, Chia-Chang; Chou, Yu-Fei; Chen, Kui-He

    2013-09-01

    In this paper, combined optimization of the terminal precoders/equalizers and the single-relay precoder is proposed for an amplify-and-forward (AF) multiple-input multiple-output (MIMO) two-way single-relay system with correlated channel uncertainties. Both the terminal transceivers and the relay precoding matrix are designed based on the minimum mean square error (MMSE) criterion when terminals are unable to completely cancel self-interference due to imperfect correlated channel state information (CSI). This robust joint optimization problem of beamforming and precoding matrices under power constraints is neither concave nor convex, so a nonlinear matrix-form conjugate gradient (MCG) algorithm is applied to find locally optimal solutions. Simulation results show that the robust transceiver design effectively mitigates the bit-error-rate (BER) degradation caused by correlated channel uncertainties and residual self-interference.

  1. Variational energy principle for compressible, baroclinic flow. 1: First and second variations of total kinetic action

    NASA Technical Reports Server (NTRS)

    Schmid, L. A.

    1977-01-01

    The case of a cold gas in the absence of external force fields is considered. Since the only energy involved is kinetic energy, the total kinetic action (i.e., the space-time integral of the kinetic energy density) should serve as the total free-energy functional in this case, and as such should be a local minimum for all possible fluctuations about stable flow. This conjecture is tested by calculating explicit, manifestly covariant expressions for the first and second variations of the total kinetic action in the context of Lagrangian kinematics. The general question of the correlation between physical stability and the convexity of any action integral that can be interpreted as the total free-energy functional of the flow is discussed and illustrated for the cases of rectilinear and rotating shearing flows.

  2. Convexity Conditions and the Legendre-Fenchel Transform for the Product of Finitely Many Positive Definite Quadratic Forms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhao Yunbin, E-mail: zhaoyy@maths.bham.ac.u

    2010-12-15

    While the product of finitely many convex functions has been investigated in the field of global optimization, some fundamental issues such as the convexity condition and the Legendre-Fenchel transform for the product function remain unresolved. Focusing on quadratic forms, this paper addresses the question: When is the product of finitely many positive definite quadratic forms convex, and what is its Legendre-Fenchel transform? First, we show that the convexity of the product is determined intrinsically by the condition numbers of so-called 'scaled matrices' associated with the quadratic forms involved. The main result states that if the condition numbers of these scaled matrices are bounded above by an explicit constant (which depends only on the number of quadratic forms involved), then the product function is convex. Second, we prove that the Legendre-Fenchel transform for the product of positive definite quadratic forms can be expressed, and that computing the transform amounts to solving a system of equations with a special structure (equivalently, finding a Brouwer fixed point of a mapping). Thus, a question broader than the open 'Question 11' in Hiriart-Urruty (SIAM Rev. 49, 225-273, 2007) is addressed in this paper.
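
The convexity question can be probed numerically: for f(x) = (xᵀAx)(xᵀBx) with symmetric A and B, the Hessian follows from standard calculus, and sampling its smallest eigenvalue tests convexity pointwise. A minimal sketch for the well-conditioned case A = B = I, where f = ||x||⁴ is known to be convex (the sampling approach and matrices are illustrative, not the paper's analysis):

```python
import numpy as np

rng = np.random.default_rng(1)

def product_hessian(A, B, x):
    """Hessian of f(x) = (x^T A x)(x^T B x) for symmetric A, B
    (standard calculus: differentiate the product rule expression twice)."""
    Ax, Bx = A @ x, B @ x
    qA, qB = x @ Ax, x @ Bx
    return 2 * qB * A + 2 * qA * B + 4 * np.outer(Ax, Bx) + 4 * np.outer(Bx, Ax)

# A = B = I gives f = ||x||^4, which is convex, so the Hessian
# 4*||x||^2*I + 8*x*x^T should be positive definite at every sample.
A = B = np.eye(3)
min_eig = min(np.linalg.eigvalsh(product_hessian(A, B, rng.standard_normal(3))).min()
              for _ in range(100))
print(min_eig > 0)  # True
```

Repeating the experiment with badly conditioned scaled matrices would expose nonconvexity (a negative eigenvalue at some sample), which is exactly the regime the paper's condition-number bound excludes.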

  3. A parallel Discrete Element Method to model collisions between non-convex particles

    NASA Astrophysics Data System (ADS)

    Rakotonirina, Andriarimina Daniel; Delenne, Jean-Yves; Wachs, Anthony

    2017-06-01

    In many dry granular and suspension flow configurations, particles can be highly non-spherical. It is now well established in the literature that particle shape affects the flow dynamics and the microstructure of the particle assembly in assorted ways, e.g., the compacity of a packed bed or heap, dilation under shear, resistance to shear, momentum transfer between translational and angular motions, and the ability to form arches and block the flow. In this talk, we suggest an accurate and efficient way to model collisions between particles of (almost) arbitrary shape. For that purpose, we develop a Discrete Element Method (DEM) combined with a soft-particle contact model. The collision detection algorithm handles contacts between bodies of various shapes and sizes. For non-convex bodies, our strategy is based on decomposing a non-convex body into a set of convex ones. Our novel method can therefore be called a "glued-convex" method (in the sense of clumping convex bodies together), as an extension of the popular "glued-spheres" method, and is implemented in our granular dynamics code Grains3D. Since the whole problem is solved explicitly, our fully MPI-parallelized code Grains3D exhibits very high scalability when dynamic load balancing is not required. In particular, simulations on up to a few thousand cores, in configurations involving up to a few tens of millions of particles, can readily be performed. We apply our enhanced numerical model to (i) the collapse of a granular column made of convex particles and (ii) the microstructure of a heap of non-convex particles in a cylindrical reactor.
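
The elementary building block that a glued-spheres (and by extension glued-convex) DEM composes is a soft pairwise contact. A minimal linear-spring normal-force sketch between two spheres is shown below; the stiffness value is illustrative, and a real code such as Grains3D uses far richer contact laws (damping, friction, tangential history).

```python
import math

def contact_force(c1, r1, c2, r2, k=1.0e4):
    """Repulsive normal force on sphere 1 from its overlap with sphere 2.
    Linear spring model: F = k * overlap along the contact normal; zero if apart."""
    d = math.dist(c1, c2)
    overlap = r1 + r2 - d
    if overlap <= 0.0 or d == 0.0:
        return (0.0, 0.0, 0.0)
    n = tuple((a - b) / d for a, b in zip(c1, c2))  # unit normal toward sphere 1
    return tuple(k * overlap * ni for ni in n)

# Two unit spheres with centers 1.5 apart overlap by 0.5:
print(contact_force((0.0, 0.0, 0.0), 1.0, (1.5, 0.0, 0.0), 1.0))
# (-5000.0, 0.0, 0.0)
```

In a glued-convex composite, forces like this are evaluated per convex sub-body and summed onto the parent rigid body, which is what lets non-convex shapes reuse convex-convex contact detection.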

  4. Polygon patterns on Europa

    NASA Technical Reports Server (NTRS)

    Smalley, I. J.

    1981-01-01

    The formation of polygon patterns in the development of crack networks in cooling basalt flows and similar contracting systems, and under natural conditions in an essentially unbounded basalt flow, is analyzed, and the characteristics of hexagonal and pentagonal patterns in isotropic stress fields are discussed.

  5. CONNECTICUT SURFACE WATER QUALITY CLASSIFICATIONS

    EPA Science Inventory

    This is a 1:24,000-scale datalayer of Surface Water Quality Classifications for Connecticut. It comprises two Shapefiles with line and polygon features. Both Shapefiles must be used together with the Hydrography datalayer. The polygon Shapefile includes surface water qual...

  6. Polygon Pictures in QuarkXPress.

    ERIC Educational Resources Information Center

    Osterer, Irv

    1999-01-01

    Describes an activity where students draw and fill simple and complex shapes by utilizing the polygon tool in QuarkXPress to create graphics. Explains that this activity enables students to learn how to use a variety of functions in the QuarkXPress program. (CMK)

  7. Polygons and Craters

    NASA Technical Reports Server (NTRS)

    2005-01-01

    3 September 2005 This Mars Global Surveyor (MGS) Mars Orbiter Camera (MOC) image shows polygons enhanced by subliming seasonal frost in the martian south polar region. Polygons similar to these occur in frozen ground at high latitudes on Earth, suggesting that perhaps their presence on Mars is also a sign that there is or once was ice in the shallow subsurface. The circular features are degraded meteor impact craters.

    Location near: 72.2°S, 310.3°W. Image width: 3 km (1.9 mi). Illumination from: upper left. Season: Southern Spring.

  8. Slow relaxation in weakly open rational polygons.

    PubMed

    Kokshenev, Valery B; Vicentini, Eduardo

    2003-07-01

    The interplay between the regular (piecewise-linear) and irregular (vertex-angle) boundary effects in nonintegrable rational polygonal billiards (of m equal sides) is discussed. Decay dynamics in polygons (of perimeter P(m) and small opening Delta) is analyzed through the late-time survival probability S_m(t) ~ t^(-delta). Two distinct slow relaxation channels are established. The primary, universal channel exhibits relaxation of regular sliding orbits, with delta = 1. The secondary channel, with delta > 1, opens when m > P(m)/Delta. It originates from dual vertex order-disorder effects and is due to relaxation of chaoticlike excitations.
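
The late-time exponent delta can be recovered from survival-probability data by a least-squares fit in log-log coordinates. A minimal sketch on synthetic, noise-free data (the data are illustrative, not from the paper):

```python
import numpy as np

# Synthetic survival probability obeying S(t) ~ t^(-delta) with delta = 1,
# the primary-channel value quoted in the abstract.
t = np.logspace(1, 4, 50)
delta_true = 1.0
S = 0.5 * t ** (-delta_true)

# A power law is a straight line in log-log coordinates; its slope is -delta.
slope, intercept = np.polyfit(np.log(t), np.log(S), 1)
print(round(-slope, 3))  # 1.0
```

With real (noisy, finite-window) data one would restrict the fit to the late-time regime where the power law actually holds, since early-time transients bias the slope.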

  9. ROE Carbon Storage - Forest Biomass

    EPA Pesticide Factsheets

    This polygon dataset depicts the density of forest biomass in counties across the United States, in terms of metric tons of carbon per square mile of land area. These data were provided in spreadsheet form by the U.S. Department of Agriculture (USDA) Forest Service. To produce the Web mapping application, EPA joined the spreadsheet with a shapefile of U.S. county (and county equivalent) boundaries downloaded from the U.S. Census Bureau. EPA calculated biomass density based on the area of each county polygon. These data sets were converted into a single polygon feature class inside a file geodatabase.
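
Computing a density from a polygon's area, as EPA did per county, reduces in the planar case to the shoelace formula. The sketch below uses purely illustrative numbers, not values from the dataset (real county polygons are in geographic coordinates and need a projected area calculation):

```python
def polygon_area(vertices):
    """Area of a simple polygon via the shoelace formula."""
    n = len(vertices)
    s = sum(vertices[i][0] * vertices[(i + 1) % n][1]
            - vertices[(i + 1) % n][0] * vertices[i][1]
            for i in range(n))
    return abs(s) / 2.0

# Toy rectangular "county" of 12 square miles holding 600 metric tons of carbon:
county = [(0, 0), (4, 0), (4, 3), (0, 3)]
carbon_tons = 600.0
print(carbon_tons / polygon_area(county))  # 50.0 tons per square mile
```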

  10. Binary space partitioning trees and their uses

    NASA Technical Reports Server (NTRS)

    Bell, Bradley N.

    1989-01-01

    Binary Space Partitioning (BSP) trees have qualities that make them useful for solving many graphics-related problems. The purpose here is to describe what a BSP tree is and how it can be used to solve the problems of hidden surface removal and constructive solid geometry. The BSP tree is based on the idea that a plane acting as a divider subdivides space into two parts, one on its positive side and the other on its negative side. A polygonal solid is then represented as the volume defined by the collective interior half-spaces of the solid's bounding surfaces. The way the tree is organized lends itself well to sorting polygons relative to an arbitrary point in 3-space. The tree can be traversed for depth sorting fast enough to provide hidden surface removal at interactive speeds. Because a BSP tree represents a polygonal solid as a bounded volume, it is also quite useful for performing the Boolean operations used in constructive solid geometry: polygons can be classified as they are subdivided, which simplifies the implementation of such operations.
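
The plane-side classification at the heart of every BSP operation can be sketched directly: a plane with normal n and offset d splits space, and a point is classified by the sign of n·p − d. This is a minimal sketch of that single step (node layout and traversal order vary by implementation):

```python
def classify(point, normal, d, eps=1e-9):
    """Classify a point against the plane n . p = d:
    'front' (positive side), 'back' (negative side), or 'on' (within eps)."""
    s = sum(n * p for n, p in zip(normal, point)) - d
    if s > eps:
        return "front"
    if s < -eps:
        return "back"
    return "on"

plane_n, plane_d = (0.0, 0.0, 1.0), 0.0  # the z = 0 plane
print(classify((1.0, 2.0, 3.0), plane_n, plane_d))   # front
print(classify((1.0, 2.0, -3.0), plane_n, plane_d))  # back
print(classify((1.0, 2.0, 0.0), plane_n, plane_d))   # on
```

Depth sorting for hidden surface removal follows from this test: at each node, recurse first into the subtree on the far side of the plane from the viewpoint, emit the node's polygons, then recurse into the near side.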

  11. Polygonal crack patterns by drying thin films under quasi-two-dimensional confinement

    NASA Astrophysics Data System (ADS)

    Ma, Xiaolei; Lowensohn, Janna; Burton, Justin

    Crack patterns, such as the T/Y junction cracks in dried mud, are ubiquitous in nature. Although the conditions for cracking in solids are well known, cracks in colloidal and granular systems are more complex. Here we report the formation of polygonal cracks by drying thin films of corn starch (∼10 μm particle diameter) under quasi-2D confinement. We find there are two drying stages before the films are completely dried. Initially, a compaction front invades the film. Then a second drying stage "percolates" throughout the film with a characteristic branching pattern, leading to a dense packing of particles connected by liquid capillary bridges. Finally, polygonal cracks appear as the remaining liquid dries. The same drying kinetics occur for films of different thickness, h, except that fractal-like fracture patterns form in thin films, where the thickness is comparable to the particle size, while polygons form in thick films with many layers of particles. We also find that the average area of the polygons, A, in fully dried films scales with the thickness as A ∼ h^β, where β ≈ 1.5, and the prefactor depends on the initial packing fraction of the suspension. This form is consistent with a simple energy-balance criterion for crack formation.

  12. Nanopatterning by molecular polygons.

    PubMed

    Jester, Stefan-S; Sigmund, Eva; Höger, Sigurd

    2011-07-27

    Molecular polygons with three to six sides and binary mixtures thereof form long-range ordered patterns at the TCB/HOPG interface. This includes also the 2D crystallization of pentagons. The results provide an insight into how the symmetry of molecules is translated into periodic structures.

  13. Ergodicity of two hard balls in integrable polygons

    NASA Astrophysics Data System (ADS)

    Bálint, Péter; Troubetzkoy, Serge

    2004-11-01

    We prove the hyperbolicity, ergodicity and thus the Bernoulli property of two hard balls in one of the following four polygons: the square, the equilateral triangle, the 45°-45°-90° triangle or the 30°-60°-90° triangle.

  14. Computational Efficiency of the Simplex Embedding Method in Convex Nondifferentiable Optimization

    NASA Astrophysics Data System (ADS)

    Kolosnitsyn, A. V.

    2018-02-01

    The simplex embedding method for solving convex nondifferentiable optimization problems is considered. Modifications of this method, based on shifting the cutting plane so as to cut off the maximum number of simplex vertices, are described. These modifications speed up the solution process. A numerical comparison of the efficiency of the proposed modifications, based on the solution of benchmark convex nondifferentiable optimization problems, is presented.

  15. Another convex combination of product states for the separable Werner state

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Azuma, Hiroo; Ban, Masashi; CREST, Japan Science and Technology Agency, 1-1-9 Yaesu, Chuo-ku, Tokyo 103-0028

    2006-03-15

    In this paper, we write down the separable Werner state in a two-qubit system explicitly as a convex combination of product states, which is different from the convex combination obtained by Wootters' method. The Werner state in a two-qubit system has a single real parameter and varies from inseparable to separable according to the value of its parameter. We derive a hidden variable model that is induced by our decomposed form for the separable Werner state. From our explicit form of the convex combination of product states, we understand the following: The critical point of the parameter for separability of the Werner state comes from positivity of local density operators of the qubits.

  16. Thermal Protection System with Staggered Joints

    NASA Technical Reports Server (NTRS)

    Simon, Xavier D. (Inventor); Robinson, Michael J. (Inventor); Andrews, Thomas L. (Inventor)

    2014-01-01

    The thermal protection system disclosed herein is suitable for use with a spacecraft such as a reentry module or vehicle, where the spacecraft has a convex surface to be protected. An embodiment of the thermal protection system includes a plurality of heat resistant panels, each having an outer surface configured for exposure to atmosphere, an inner surface opposite the outer surface and configured for attachment to the convex surface of the spacecraft, and a joint edge defined between the outer surface and the inner surface. The joint edges of adjacent ones of the heat resistant panels are configured to mate with each other to form staggered joints that run between the peak of the convex surface and the base section of the convex surface.

  17. A fast adaptive convex hull algorithm on two-dimensional processor arrays with a reconfigurable BUS system

    NASA Technical Reports Server (NTRS)

    Olariu, S.; Schwing, J.; Zhang, J.

    1991-01-01

    A bus system that can change dynamically to suit computational needs is referred to as reconfigurable. We present a fast adaptive convex hull algorithm on a two-dimensional processor array with a reconfigurable bus system (2-D PARBS, for short). Specifically, we show that computing the convex hull of a planar set of n points takes O(log n/log m) time on a 2-D PARBS of size mn x n with 3 ≤ m ≤ n. Our result implies that the convex hull of n points in the plane can be computed in O(1) time on a 2-D PARBS of size n^1.5 x n.
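
For comparison with the O(1) parallel result, the standard sequential baseline is Andrew's monotone chain at O(n log n). A minimal sketch follows (this is the classical algorithm, not the PARBS algorithm of the abstract):

```python
def convex_hull(points):
    """Andrew's monotone chain: O(n log n) sequential convex hull.
    Returns hull vertices in counterclockwise order without repeats."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts

    def cross(o, a, b):
        # z-component of (a - o) x (b - o); > 0 means a left turn
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    def half(seq):
        # Build one hull chain, popping points that create a non-left turn.
        h = []
        for p in seq:
            while len(h) >= 2 and cross(h[-2], h[-1], p) <= 0:
                h.pop()
            h.append(p)
        return h

    lower, upper = half(pts), half(reversed(pts))
    return lower[:-1] + upper[:-1]  # drop duplicated endpoints

square_plus_center = [(0, 0), (2, 0), (2, 2), (0, 2), (1, 1)]
print(convex_hull(square_plus_center))  # [(0, 0), (2, 0), (2, 2), (0, 2)]
```

The interior point (1, 1) is correctly discarded; the sort dominates the running time, which is exactly the bottleneck the reconfigurable-bus construction parallelizes away.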

  18. The concave cusp as a determiner of figure-ground.

    PubMed

    Stevens, K A; Brookes, A

    1988-01-01

    The tendency to interpret as figure, relative to background, those regions that are lighter, smaller, and, especially, more convex is well known. Wherever convex opaque objects abut or partially occlude one another in an image, the points of contact between the silhouettes form concave cusps, each indicating the local assignment of figure versus ground across the contour segments. It is proposed that this local geometric feature is a preattentive determiner of figure-ground perception and that it contributes to the previously observed tendency for convexity preference. Evidence is presented that figure-ground assignment can be determined solely on the basis of the concave cusp feature, and that the salience of the cusp derives from local geometry and not from adjacent contour convexity.
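
The local geometry behind the cusp feature can be sketched with a sign test: for a contour traversed counterclockwise, the 2D cross product of the edges meeting at a vertex distinguishes convex from concave (reflex) vertices, the latter being cusp candidates where silhouettes meet. This sketch and its traversal convention are illustrative assumptions, not the authors' model.

```python
def vertex_type(prev_pt, pt, next_pt):
    """Classify a contour vertex by the turn direction of its adjacent edges,
    assuming counterclockwise traversal of the figure's outline."""
    ax, ay = pt[0] - prev_pt[0], pt[1] - prev_pt[1]
    bx, by = next_pt[0] - pt[0], next_pt[1] - pt[1]
    cross = ax * by - ay * bx
    if cross > 0:
        return "convex"
    if cross < 0:
        return "concave"  # reflex vertex: a concave-cusp candidate
    return "straight"

print(vertex_type((0, 0), (1, -1), (2, 0)))  # convex
print(vertex_type((0, 0), (1, 1), (2, 0)))   # concave
```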

  19. Turf Hummocks in Arctic Canada: Characteristics and Development

    NASA Astrophysics Data System (ADS)

    Tarnocai, C.; Walker, D. A.; Broll, G.

    2006-12-01

    Turf hummocks, which occur commonly in the Arctic, were studied in three ecoclimatic regions, ranging from Banks Island in the Mid-Arctic, through Ellef Ringnes and Prince Patrick islands in the Oceanic High Arctic, to Ellesmere Island in the High Arctic. These hummocks are dome-shaped features that generally occur on 5-20% slopes and are associated with silty loam soils. They are generally 11-40 cm high, 18-60 cm in diameter, and have a thaw depth of 30-50 cm. The organic carbon and total nitrogen contents of the organic-rich soil horizons are high. Soil temperatures under the tops of hummocks are 3-5°C higher than under the adjoining interhummock troughs. The combination of these factors provides a much more favorable soil environment for biological activity, including plant growth, than does the surrounding area. The vegetation cover on these turf hummocks is dominantly mosses and lichens with Luzula sp. on Prince Patrick and Ellef Ringnes islands and Dryas integrifolia and Cassiope tetragona on Banks and Ellesmere islands. The development of turf hummocks is usually initiated by small polygons, whose diameters determine the initial diameters of the hummocks. Establishment of vegetation on these small polygons provides the next step in their development and, if eolian material is available, the vegetation captures this material and the hummock builds up. The internal morphology of turf hummocks reveals multiple buried, organic-rich layers representing former hummock surfaces. The stone- and gravel-free silty loam composing the soil horizons between these organic-rich layers is very different from the underlying materials composing the former small polygon. These soil horizons also contain a high amount of well-decomposed organic matter that is dispersed uniformly throughout the horizons. Radiocarbon dates for the buried organic layers suggest a gradual build-up process in which the age of the organic layers increases with depth. A minimum of 1200-2000 years is required for the turf hummocks to develop to their present stage. Data obtained from the multiple organic-rich layers suggest that each former hummock surface was stable for 100 years or more. This paper provides information about the internal and external morphology and thermal properties of the turf hummocks, and a model for their development.

  20. Terrain modeling for real-time simulation

    NASA Astrophysics Data System (ADS)

    Devarajan, Venkat; McArthur, Donald E.

    1993-10-01

    There are many applications, such as pilot training, mission rehearsal, and hardware-in-the-loop simulation, which require the generation of realistic images of terrain and man-made objects in real time. One approach to meeting this requirement is to drape photo-texture over a planar polygon model of the terrain. The real-time system then computes, for each pixel of the output image, an address in a texture map based on the intersection of the line-of-sight vector with the terrain model. High-quality image generation requires that the terrain be modeled with a fine mesh of polygons, while hardware costs limit the number of polygons that may be displayed per scene. The trade-off between these conflicting requirements must be made in real time because it depends on the changing position and orientation of the pilot's eye point or simulated sensor. The traditional approach is to develop a database consisting of multiple levels of detail (LOD) and then select LODs for display as a function of range. This approach can lead to both anomalies in the displayed scene and inefficient use of resources. An approach has been developed in which the terrain is modeled with a set of nested polygons organized as a tree, with each node corresponding to a polygon. This tree is pruned to select the optimum set of nodes for each eye-point position. As the point of view moves, the visibility of some nodes drops below the limit of perception, and those nodes may be deleted, while new nodes must be added in regions near the eye point. An analytical model has been developed to determine the number of polygons required for display. This model leads to quantitative performance measures of the triangulation algorithm, which is useful for optimizing system performance with a limited display capability.
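
The "traditional approach" mentioned above, selecting an LOD purely as a function of range, can be sketched in a few lines; the thresholds below are illustrative assumptions, not values from the paper.

```python
def select_lod(distance, thresholds=(100.0, 500.0, 2000.0)):
    """Range-based LOD selection for a terrain tile: 0 is the finest mesh,
    and each threshold crossed moves to the next coarser level."""
    for lod, limit in enumerate(thresholds):
        if distance < limit:
            return lod
    return len(thresholds)  # coarsest level beyond the last threshold

print([select_lod(d) for d in (50, 300, 1500, 9000)])  # [0, 1, 2, 3]
```

The hard switch at each threshold is precisely what produces the popping anomalies the abstract criticizes; the nested-polygon tree replaces these discrete jumps with per-node pruning against a perceptual limit.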

Top