NASA Astrophysics Data System (ADS)
Skala, Vaclav
2016-06-01
There are many space subdivision and space partitioning techniques used in many algorithms to speed up computations. They mostly rely on orthogonal space subdivision or on hierarchical data structures, e.g. BSP trees, quadtrees, octrees, k-d trees, and bounding volume hierarchies. However, in some applications a non-orthogonal space subdivision can offer new ways of actual speed-up. In the case of a convex polygon in E2, a simple point-in-polygon test has O(N) complexity, and the optimal algorithm has O(log N) computational complexity. In the E3 case, the complexity is O(N) even for a convex polyhedron, as no ordering is defined. New Point-in-Convex Polygon and Point-in-Convex Polyhedron algorithms are presented, based on space subdivision in the preprocessing stage, resulting in O(1) run-time complexity. The presented approach is simple to implement. Due to the principle of duality, dual problems, e.g. line-convex polygon intersection and line clipping, can be solved similarly.
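For context, the classical O(log N) baseline the abstract contrasts with can be sketched in a few lines: a binary search over the triangle fan rooted at one polygon vertex. This is the textbook method, not Skala's O(1) space-subdivision algorithm; the function names are illustrative.

```python
def cross(o, a, b):
    """z-component of the cross product (a - o) x (b - o)."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def point_in_convex_polygon(poly, p):
    """O(log N) membership test; poly is a list of vertices in CCW order
    (points on the boundary count as inside)."""
    n = len(poly)
    # reject points outside the wedge spanned at vertex 0
    if cross(poly[0], poly[1], p) < 0 or cross(poly[0], poly[n - 1], p) > 0:
        return False
    # binary search for the fan triangle (v0, v_lo, v_{lo+1}) containing p
    lo, hi = 1, n - 1
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if cross(poly[0], poly[mid], p) >= 0:
            lo = mid
        else:
            hi = mid
    return cross(poly[lo], poly[lo + 1], p) >= 0

square = [(0, 0), (1, 0), (1, 1), (0, 1)]
assert point_in_convex_polygon(square, (0.5, 0.5))
assert not point_in_convex_polygon(square, (1.5, 0.5))
```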
Performance Analysis of the Unitree Central File
NASA Technical Reports Server (NTRS)
Pentakalos, Odysseas I.; Flater, David
1994-01-01
This report consists of two parts. The first part briefly comments on the documentation status of two major systems at NASA's Center for Computational Sciences, specifically the Cray C98 and the Convex C3830. The second part describes the work done on improving the performance of file transfers between the Unitree Mass Storage System running on the Convex file server and the users' workstations distributed over a large geographic area.
Scalable Metropolis Monte Carlo for simulation of hard shapes
NASA Astrophysics Data System (ADS)
Anderson, Joshua A.; Eric Irrgang, M.; Glotzer, Sharon C.
2016-07-01
We design and implement a scalable hard particle Monte Carlo simulation toolkit (HPMC), and release it open source as part of HOOMD-blue. HPMC runs in parallel on many CPUs and many GPUs using domain decomposition. We employ BVH trees instead of cell lists on the CPU for fast performance, especially with large particle size disparity, and optimize inner loops with SIMD vector intrinsics on the CPU. Our GPU kernel proposes many trial moves in parallel on a checkerboard and uses a block-level queue to redistribute work among threads and avoid divergence. HPMC supports a wide variety of shape classes, including spheres/disks, unions of spheres, convex polygons, convex spheropolygons, concave polygons, ellipsoids/ellipses, convex polyhedra, convex spheropolyhedra, spheres cut by planes, and concave polyhedra. NVT and NPT ensembles can be run in 2D or 3D triclinic boxes. Additional integration schemes permit Frenkel-Ladd free energy computations and implicit depletant simulations. In a benchmark system of a fluid of 4096 pentagons, HPMC performs 10 million sweeps in 10 min on 96 CPU cores on XSEDE Comet. The same simulation would take 7.6 h in serial. HPMC also scales to large system sizes, and the same benchmark with 16.8 million particles runs in 1.4 h on 2048 GPUs on OLCF Titan.
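The core rejection rule that HPMC parallelizes is easy to state in serial form. Below is a minimal sketch of one Metropolis sweep for hard disks in a periodic box; all names and parameters are illustrative, and none of HPMC's domain decomposition, BVH acceleration, or GPU queueing appears here.

```python
import numpy as np

rng = np.random.default_rng(42)

def mc_sweep(pos, sigma, box, max_disp=0.1):
    """One serial Metropolis sweep for hard disks of diameter sigma in a
    periodic square box of side `box`: each trial displacement is accepted
    only if it creates no overlap (the hard-particle rejection rule)."""
    n = len(pos)
    for i in rng.permutation(n):
        trial = (pos[i] + rng.uniform(-max_disp, max_disp, 2)) % box
        d = pos - trial
        d -= box * np.round(d / box)          # minimum-image convention
        r2 = np.einsum('ij,ij->i', d, d)
        r2[i] = np.inf                        # ignore self-distance
        if r2.min() >= sigma ** 2:            # no overlap: accept the move
            pos[i] = trial
    return pos
```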
Scalable splitting algorithms for big-data interferometric imaging in the SKA era
NASA Astrophysics Data System (ADS)
Onose, Alexandru; Carrillo, Rafael E.; Repetti, Audrey; McEwen, Jason D.; Thiran, Jean-Philippe; Pesquet, Jean-Christophe; Wiaux, Yves
2016-11-01
In the context of next-generation radio telescopes, like the Square Kilometre Array (SKA), the efficient processing of large-scale data sets is extremely important. Convex optimization tasks under the compressive sensing framework have recently emerged and provide both enhanced image reconstruction quality and scalability to increasingly larger data sets. We focus herein mainly on scalability and propose two new convex optimization algorithmic structures able to solve the convex optimization tasks arising in radio-interferometric imaging. They rely on proximal splitting and forward-backward iterations and can be seen, by analogy with the CLEAN major-minor cycle, as running sophisticated CLEAN-like iterations in parallel in multiple data, prior, and image spaces. Both methods support any convex regularization function, in particular the well-studied ℓ1 priors promoting image sparsity in an adequate domain. Tailored for big data, they employ parallel and distributed computations to achieve scalability in terms of memory and computational requirements. One of them also exploits randomization, over data blocks at each iteration, offering further flexibility. We present simulation results showing the feasibility of the proposed methods as well as their advantages compared to state-of-the-art algorithmic solvers. Our MATLAB code is available online on GitHub.
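The building block behind both algorithmic structures is a forward-backward iteration: a gradient step on the data-fidelity term followed by a proximal step on the prior. A minimal serial sketch for the ℓ1-regularized least-squares case follows; the paper's contribution lies in splitting and distributing such iterations over data blocks, which this sketch does not attempt.

```python
import numpy as np

def soft_threshold(x, t):
    """Proximal operator of t * ||x||_1."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def forward_backward_l1(Phi, y, lam, n_iter=200):
    """Minimize 0.5*||Phi x - y||^2 + lam*||x||_1 by forward-backward
    splitting: a gradient step on the data term, then the l1 prox."""
    step = 1.0 / np.linalg.norm(Phi, 2) ** 2   # 1 / Lipschitz constant
    x = np.zeros(Phi.shape[1])
    for _ in range(n_iter):
        grad = Phi.T @ (Phi @ x - y)                      # forward step
        x = soft_threshold(x - step * grad, step * lam)   # backward step
    return x
```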
Long, Leroy L; Srinivasan, Manoj
2013-04-06
On a treadmill, humans switch from walking to running beyond a characteristic transition speed. Here, we study human choice between walking and running in a more ecological (non-treadmill) setting. We asked subjects to travel a given distance overground in a given allowed time duration. During this task, the subjects carried, and could look at, a stopwatch that counted down to zero. As expected, if the total time available is large, humans walk the whole distance. If the time available is small, humans mostly run. For an intermediate total time, humans often use a mixture of walking at a slow speed and running at a higher speed. With analytical and computational optimization, we show that using a walk-run mixture at intermediate speeds and a walk-rest mixture at the lowest average speeds is predicted by metabolic energy minimization, even with costs for transients, a consequence of non-convex energy curves. Thus, sometimes, steady locomotion may not be energy optimal, and not preferred, even in the absence of fatigue. Assuming similar non-convex energy curves, we conjecture that similar walk-run mixtures may be energetically beneficial to children following a parent and animals on long leashes. Humans and other animals might also benefit energetically from alternating between moving forward and standing still on a slow and sufficiently long treadmill.
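The geometric argument is that mixing two speeds realizes any cost on the lower convex envelope of the energy-versus-speed curve, so wherever the curve is non-convex a mixture beats steady movement. A small sketch, with an entirely hypothetical energy curve standing in for measured data:

```python
import numpy as np

def lower_envelope(v, E):
    """Lower convex envelope of a sampled cost curve, via a monotone-chain
    lower hull; a two-speed mixture realizes any point on this envelope."""
    hull = []
    for p in sorted(zip(v, E)):
        while len(hull) >= 2:
            (x1, y1), (x2, y2) = hull[-2], hull[-1]
            # drop hull[-1] if it lies above the chord hull[-2] -> p
            if (x2 - x1) * (p[1] - y1) - (y2 - y1) * (p[0] - x1) < 0:
                hull.pop()
            else:
                break
        hull.append(p)
    return hull

# hypothetical non-convex energy curve: wherever the envelope lies below
# the curve, a walk-run (or walk-rest) mixture beats moving steadily at
# that average speed
v = np.linspace(0.1, 5.0, 200)
E = 2.0 + 0.5 * v ** 2 + 1.5 * np.exp(-5 * (v - 2.2) ** 2)
envelope = lower_envelope(v, E)
```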
NASA Technical Reports Server (NTRS)
Tennille, Geoffrey M.; Howser, Lona M.
1993-01-01
The use of the CONVEX computers that are an integral part of the Supercomputing Network Subsystems (SNS) of the Central Scientific Computing Complex of LaRC is briefly described. Features of the CONVEX computers that are significantly different from the CRAY supercomputers are covered, including: FORTRAN, C, architecture of the CONVEX computers, the CONVEX environment, batch job submittal, debugging, performance analysis, utilities unique to CONVEX, and documentation. This revision reflects the addition of the Applications Compiler and X-based debugger, CXdb. The document is intended for all CONVEX users as a ready reference to frequently asked questions and to more detailed information contained in the vendor manuals. It is appropriate for both the novice and the experienced user.
Improving the growth of CZT crystals for radiation detectors: a modeling perspective
NASA Astrophysics Data System (ADS)
Derby, Jeffrey J.; Zhang, Nan; Yeckel, Andrew
2012-10-01
The availability of large, single crystals of cadmium zinc telluride (CZT) with uniform properties is key to improving the performance of gamma radiation detectors fabricated from them. Towards this goal, we discuss results obtained by computational models that provide a deeper understanding of crystal growth processes and how the growth of CZT can be improved. In particular, we discuss methods that may be implemented to lessen the deleterious interactions between the ampoule wall and the growing crystal via engineering a convex solidification interface. For vertical Bridgman growth, a novel, bell-curve furnace temperature profile is predicted to achieve macroscopically convex solid-liquid interface shapes during melt growth of CZT in a multiple-zone furnace. This approach represents a significant advance over traditional gradient-freeze profiles, which always yield concave interface shapes, and static heat transfer designs, such as pedestal design, that achieve convex interfaces over only a small portion of the growth run. Importantly, this strategy may be applied to any Bridgman configuration that utilizes multiple, controllable heating zones. Realizing a convex solidification interface via this adaptive bell-curve furnace profile is postulated to result in better crystallinity and higher yields than conventional CZT growth techniques.
NASA Technical Reports Server (NTRS)
Cullimore, B.
1994-01-01
SINDA, the Systems Improved Numerical Differencing Analyzer, is a software system for solving lumped parameter representations of physical problems governed by diffusion-type equations. SINDA was originally designed for analyzing thermal systems represented in electrical analog, lumped parameter form, although its use may be extended to include other classes of physical systems which can be modeled in this form. As a thermal analyzer, SINDA can handle such interrelated phenomena as sublimation, diffuse radiation within enclosures, transport delay effects, and sensitivity analysis. FLUINT, the FLUid INTegrator, is an advanced one-dimensional fluid analysis program that solves arbitrary fluid flow networks. The working fluids can be single phase vapor, single phase liquid, or two phase. The SINDA'85/FLUINT system permits the mutual influences of thermal and fluid problems to be analyzed.

The SINDA system consists of a programming language, a preprocessor, and a subroutine library. The SINDA language is designed for working with lumped parameter representations and finite difference solution techniques. The preprocessor accepts programs written in the SINDA language and converts them into standard FORTRAN. The SINDA library consists of a large number of FORTRAN subroutines that perform a variety of commonly needed actions. The use of these subroutines can greatly reduce the programming effort required to solve many problems. A complete run of a SINDA'85/FLUINT model is a four-step process. First, the user's desired model is run through the preprocessor, which writes out data files for the processor to read and translates the user's program code. Second, the translated code is compiled. The third step requires linking the user's code with the processor library. Finally, the processor is executed.

SINDA'85/FLUINT supports models with up to 20,000 nodes, 100,000 conductors, 100 thermal submodels, and 10 fluid submodels. SINDA'85/FLUINT can also model two phase flow, capillary devices, user defined fluids, gravity and acceleration body forces on a fluid, and variable volumes. SINDA'85/FLUINT offers both explicit and implicit numerical solution techniques: the finite difference formulation of the explicit method is the forward-difference explicit approximation, and the formulation of the implicit method is the Crank-Nicolson approximation. The program allows simulation of non-uniform heating and facilitates modeling thin-walled heat exchangers. The ability to model non-equilibrium behavior within two-phase volumes is included. Recent improvements to the program were made in modeling real evaporator-pumps and other capillary-assist evaporators.

SINDA'85/FLUINT is available by license for a period of ten (10) years to approved licensees. The licensed program product includes the source code and one copy of the supporting documentation. Additional copies of the documentation may be purchased separately at any time. SINDA'85/FLUINT is written in FORTRAN 77. Version 2.3 has been implemented on Cray series computers running UNICOS, CONVEX computers running CONVEX OS, and DEC RISC computers running ULTRIX. Binaries are included with the Cray version only. The Cray version of SINDA'85/FLUINT also contains SINGE, an additional graphics program developed at Johnson Space Center. Both source and executable code are provided for SINGE. Users wishing to create their own SINGE executable will also need the NASA Device Independent Graphics Library (NASADIG, previously known as SMDDIG; UNIX version, MSC-22001).
The Cray and CONVEX versions of SINDA'85/FLUINT are available on 9-track 1600 BPI UNIX tar format magnetic tapes. The CONVEX version is also available on a .25 inch streaming magnetic tape cartridge in UNIX tar format. The DEC RISC ULTRIX version is available on a TK50 magnetic tape cartridge in UNIX tar format. SINDA was developed in 1971; fluid capability was first added in 1975. SINDA'85/FLUINT version 2.3 was released in 1990.
NASADIG - NASA DEVICE INDEPENDENT GRAPHICS LIBRARY (AMDAHL VERSION)
NASA Technical Reports Server (NTRS)
Rogers, J. E.
1994-01-01
The NASA Device Independent Graphics Library, NASADIG, can be used with many computer-based engineering and management applications. The library gives the user the opportunity to translate data into effective graphic displays for presentation. The software offers many features which allow the user flexibility in creating graphics. These include two-dimensional plots, subplot projections in 3D-space, surface contour line plots, and surface contour color-shaded plots. Routines for three-dimensional plotting, wireframe surface plots, surface plots with hidden line removal, and surface contour line plots are provided. Other features include polar and spherical coordinate plotting, world map plotting utilizing either cylindrical equidistant or Lambert equal area projection, plot translation, plot rotation, plot blowup, splines and polynomial interpolation, area blanking control, multiple log/linear axes, legends and text control, curve thickness control, and multiple text fonts (18 regular, 4 bold). NASADIG contains several groups of subroutines. Included are subroutines for plot area and axis definition; text set-up and display; area blanking; line style set-up, interpolation, and plotting; color shading and pattern control; legend, text block, and character control; device initialization; mixed alphabets setting; and other useful functions. The usefulness of many routines is dependent on the prior definition of basic parameters. The program's control structure uses a serial-level construct with each routine restricted for activation at some prescribed level(s) of problem definition. NASADIG provides the following output device drivers: Selanar 100XL, VECTOR Move/Draw ASCII and PostScript files, Tektronix 40xx, 41xx, and 4510 Rasterizer, DEC VT-240 (4014 mode), IBM AT/PC compatible with SmartTerm 240 emulator, HP Lasergrafix Film Recorder, QMS 800/1200, DEC LN03+ Laserprinters, and HP LaserJet (Series III). NASADIG is written in FORTRAN and is available for several platforms. NASADIG 5.7 is available for DEC VAX series computers running VMS 5.0 or later (MSC-21801), Cray X-MP and Y-MP series computers running UNICOS (COS-10049), and Amdahl 5990 mainframe computers running UTS (COS-10050). NASADIG 5.1 is available for UNIX-based operating systems (MSC-22001). The UNIX version has been successfully implemented on Sun4 series computers running SunOS, SGI IRIS computers running IRIX, Hewlett Packard 9000 computers running HP-UX, and Convex computers running Convex OS (MSC-22001). The standard distribution medium for MSC-21801 is a set of two 6250 BPI 9-track magnetic tapes in DEC VAX BACKUP format. It is also available on a set of two TK50 tape cartridges in DEC VAX BACKUP format. The standard distribution medium for COS-10049 and COS-10050 is a 6250 BPI 9-track magnetic tape in UNIX tar format. Other distribution media and formats may be available upon request. The standard distribution medium for MSC-22001 is a .25 inch streaming magnetic tape cartridge (Sun QIC-24) in UNIX tar format. Alternate distribution media and formats are available upon request. With minor modification, the UNIX source code can be ported to other platforms including IBM PC/AT series computers and compatibles. NASADIG is also available bundled with TRASYS, the Thermal Radiation Analysis System (COS-10026, DEC VAX version; COS-10040, CRAY version).
NASADIG - NASA DEVICE INDEPENDENT GRAPHICS LIBRARY (UNIX VERSION)
NASA Technical Reports Server (NTRS)
Rogers, J. E.
1994-01-01
The NASA Device Independent Graphics Library, NASADIG, can be used with many computer-based engineering and management applications. The library gives the user the opportunity to translate data into effective graphic displays for presentation. The software offers many features which allow the user flexibility in creating graphics. These include two-dimensional plots, subplot projections in 3D-space, surface contour line plots, and surface contour color-shaded plots. Routines for three-dimensional plotting, wireframe surface plots, surface plots with hidden line removal, and surface contour line plots are provided. Other features include polar and spherical coordinate plotting, world map plotting utilizing either cylindrical equidistant or Lambert equal area projection, plot translation, plot rotation, plot blowup, splines and polynomial interpolation, area blanking control, multiple log/linear axes, legends and text control, curve thickness control, and multiple text fonts (18 regular, 4 bold). NASADIG contains several groups of subroutines. Included are subroutines for plot area and axis definition; text set-up and display; area blanking; line style set-up, interpolation, and plotting; color shading and pattern control; legend, text block, and character control; device initialization; mixed alphabets setting; and other useful functions. The usefulness of many routines is dependent on the prior definition of basic parameters. The program's control structure uses a serial-level construct with each routine restricted for activation at some prescribed level(s) of problem definition. NASADIG provides the following output device drivers: Selanar 100XL, VECTOR Move/Draw ASCII and PostScript files, Tektronix 40xx, 41xx, and 4510 Rasterizer, DEC VT-240 (4014 mode), IBM AT/PC compatible with SmartTerm 240 emulator, HP Lasergrafix Film Recorder, QMS 800/1200, DEC LN03+ Laserprinters, and HP LaserJet (Series III). NASADIG is written in FORTRAN and is available for several platforms. NASADIG 5.7 is available for DEC VAX series computers running VMS 5.0 or later (MSC-21801), Cray X-MP and Y-MP series computers running UNICOS (COS-10049), and Amdahl 5990 mainframe computers running UTS (COS-10050). NASADIG 5.1 is available for UNIX-based operating systems (MSC-22001). The UNIX version has been successfully implemented on Sun4 series computers running SunOS, SGI IRIS computers running IRIX, Hewlett Packard 9000 computers running HP-UX, and Convex computers running Convex OS (MSC-22001). The standard distribution medium for MSC-21801 is a set of two 6250 BPI 9-track magnetic tapes in DEC VAX BACKUP format. It is also available on a set of two TK50 tape cartridges in DEC VAX BACKUP format. The standard distribution medium for COS-10049 and COS-10050 is a 6250 BPI 9-track magnetic tape in UNIX tar format. Other distribution media and formats may be available upon request. The standard distribution medium for MSC-22001 is a .25 inch streaming magnetic tape cartridge (Sun QIC-24) in UNIX tar format. Alternate distribution media and formats are available upon request. With minor modification, the UNIX source code can be ported to other platforms including IBM PC/AT series computers and compatibles. NASADIG is also available bundled with TRASYS, the Thermal Radiation Analysis System (COS-10026, DEC VAX version; COS-10040, CRAY version).
NASA Astrophysics Data System (ADS)
Rosenberg, D. E.; Alafifi, A.
2016-12-01
Water resources systems analysis often focuses on finding optimal solutions. Yet an optimal solution is optimal only for the modelled issues, and managers often seek near-optimal alternatives that address un-modelled objectives, preferences, limits, uncertainties, and other issues. Early on, Modelling to Generate Alternatives (MGA) formalized the near-optimal region as the region comprising the original problem constraints plus a new constraint that allowed performance within a specified tolerance of the optimal objective function value. MGA identified a few maximally-different alternatives from the near-optimal region. Subsequent work applied Markov Chain Monte Carlo (MCMC) sampling to generate a larger number of alternatives that span the near-optimal region of linear problems, or select portions for non-linear problems. We extend the MCMC Hit-And-Run method to generate alternatives that span the full extent of the near-optimal region for non-linear, non-convex problems. First, start at a feasible hit point within the near-optimal region, then run a random distance in a random direction to a new hit point. Next, repeat until generating the desired number of alternatives. The key step at each iterate is to run a random distance along the line in the specified direction to a new hit point. If linear equality constraints exist, we construct an orthogonal basis and use a null space transformation to confine hits and runs to a lower-dimensional space. Linear inequality constraints define the convex bounds on the line that runs through the current hit point in the specified direction. We then use slice sampling to identify a new hit point along the line within bounds defined by the non-linear inequality constraints. This technique is computationally efficient compared to prior near-optimal alternative generation techniques such as MGA, MCMC Metropolis-Hastings, evolutionary, or firefly algorithms, because search at each iteration is confined to the hit line, the algorithm can move in one step to any point in the near-optimal region, and each iterate generates a new, feasible alternative. We use the method to generate alternatives that span the near-optimal regions of simple and more complicated water management problems and that may be preferred to optimal solutions. We also discuss extensions to handle non-linear equality constraints.
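A minimal sketch of the underlying Hit-And-Run step for the purely linear-inequality (convex polytope) case is shown below; the paper's extension handles non-linear constraints via slice sampling and linear equality constraints via a null-space transformation, neither of which appears here. All names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def hit_and_run(x, A, b, n_samples=1000):
    """Hit-And-Run sampling of a bounded polytope {x : A @ x <= b},
    starting from a feasible x: pick a random direction, compute the
    feasible chord through x, jump to a uniform point on it, repeat."""
    samples = []
    for _ in range(n_samples):
        d = rng.normal(size=x.size)
        d /= np.linalg.norm(d)
        # A @ (x + t*d) <= b  <=>  t * (A @ d) <= b - A @ x
        Ad, slack = A @ d, b - A @ x
        t_hi = np.min(slack[Ad > 0] / Ad[Ad > 0])
        t_lo = np.max(slack[Ad < 0] / Ad[Ad < 0])
        x = x + rng.uniform(t_lo, t_hi) * d
        samples.append(x)
    return np.array(samples)

# example: uniform samples from the triangle x >= 0, y >= 0, x + y <= 1
A = np.array([[-1.0, 0.0], [0.0, -1.0], [1.0, 1.0]])
b = np.array([0.0, 0.0, 1.0])
pts = hit_and_run(np.array([0.25, 0.25]), A, b)
```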
CPU timing routines for a CONVEX C220 computer system
NASA Technical Reports Server (NTRS)
Bynum, Mary Ann
1989-01-01
The timing routines available on the CONVEX C220 computer system in the Structural Mechanics Division (SMD) at NASA Langley Research Center are examined. The function of the timing routines, the use of the timing routines in sequential, parallel, and vector code, and the interpretation of the results from the timing routines with respect to the CONVEX model of computing are described. The timing routines available on the SMD CONVEX fall into two groups. The first group includes standard timing routines generally available with UNIX 4.3 BSD operating systems, while the second group includes routines unique to the SMD CONVEX. The standard timing routines described in this report are /bin/csh time,/bin/time, etime, and ctime. The routines unique to the SMD CONVEX are getinfo, second, cputime, toc, and a parallel profiling package made up of palprof, palinit, and palsum.
Park, Peter J; Bell, M A
2010-06-01
We tested the hypothesis that increased telencephalon size has evolved in threespine stickleback fish (Gasterosteus aculeatus) from structurally complex habitats using field-caught samples from one sea-run (ancestral) and 18 ecologically diverse freshwater (descendant) populations. Freshwater habitats ranged from shallow, structurally complex lakes with benthic-foraging stickleback (benthics), to deeper, structurally simple lakes in which stickleback depend more heavily on plankton for prey (generalists). Contrary to our expectations, benthics had smaller telencephala than generalists, but the telencephala of the sea-run and benthic populations were more convex laterally. Convex telencephalon shape may indicate enlargement of the dorsolateral region, which is homologous with the tetrapod hippocampus. Telencephalon morphology is also sexually dimorphic, with larger, less convex telencephala in males. Freshwater stickleback from structurally complex habitats have retained the ancestral telencephalon morphology, but populations that feed more in open habitats on plankton have evolved larger, laterally concave telencephala.
Investigations into the shape-preserving interpolants using symbolic computation
NASA Technical Reports Server (NTRS)
Lam, Maria
1988-01-01
Shape representation is a central issue in computer graphics and computer-aided geometric design. Many physical phenomena involve curves and surfaces that are monotone (in some directions) or are convex. The corresponding representation problem is: given monotone or convex data, find a monotone or convex interpolant. Standard interpolants need not be monotone or convex even though they match monotone or convex data. Most methods of investigation of this problem utilize quadratic splines or Hermite polynomials, and a similar approach is adopted in this investigation. These methods require derivative information at the given data points, and the key to the problem is the selection of the derivative values to be assigned to the given data points. Schemes for choosing derivatives were examined. Along the way, fitting given data points by a conic section was also investigated as part of the effort to study shape-preserving quadratic splines.
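One widely used scheme of this kind, selecting derivative values at the data points so that a cubic Hermite interpolant preserves monotonicity, is the harmonic-mean (Fritsch-Butland) rule. The sketch below illustrates the approach in general and is not a reconstruction of the author's specific scheme.

```python
import numpy as np

def monotone_slopes(x, y):
    """Choose derivative values at the data points so that a cubic Hermite
    interpolant preserves monotonicity: the harmonic mean of adjacent
    secant slopes (Fritsch-Butland rule), and zero at local extrema."""
    delta = np.diff(y) / np.diff(x)          # secant slopes
    m = np.zeros(len(y))
    for i in range(1, len(y) - 1):
        if delta[i - 1] * delta[i] > 0:      # locally monotone data
            m[i] = 2.0 / (1.0 / delta[i - 1] + 1.0 / delta[i])
        # else: local extremum, keep m[i] = 0
    m[0], m[-1] = delta[0], delta[-1]        # simple one-sided end choices
    return m
```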
Computing quantum discord is NP-complete
NASA Astrophysics Data System (ADS)
Huang, Yichen
2014-03-01
We study the computational complexity of quantum discord (a measure of quantum correlation beyond entanglement), and prove that computing quantum discord is NP-complete. Therefore, quantum discord is computationally intractable: the running time of any algorithm for computing quantum discord is believed to grow exponentially with the dimension of the Hilbert space so that computing quantum discord in a quantum system of moderate size is not possible in practice. As by-products, some entanglement measures (namely entanglement cost, entanglement of formation, relative entropy of entanglement, squashed entanglement, classical squashed entanglement, conditional entanglement of mutual information, and broadcast regularization of mutual information) and constrained Holevo capacity are NP-hard/NP-complete to compute. These complexity-theoretic results are directly applicable in common randomness distillation, quantum state merging, entanglement distillation, superdense coding, and quantum teleportation; they may offer significant insights into quantum information processing. Moreover, we prove the NP-completeness of two typical problems: linear optimization over classical states and detecting classical states in a convex set, providing evidence that working with classical states is generically computationally intractable.
NASA Technical Reports Server (NTRS)
Olariu, S.; Schwing, J.; Zhang, J.
1991-01-01
A bus system that can change dynamically to suit computational needs is referred to as reconfigurable. We present a fast adaptive convex hull algorithm on a two-dimensional processor array with a reconfigurable bus system (2-D PARBS, for short). Specifically, we show that computing the convex hull of a planar set of n points takes O(log n/log m) time on a 2-D PARBS of size mn × n with 3 ≤ m ≤ n. Our result implies that the convex hull of n points in the plane can be computed in O(1) time on a 2-D PARBS of size n^1.5 × n.
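For comparison, the serial baseline that such parallel algorithms compete with is Andrew's monotone chain, an O(n log n) scan over the sorted points; a compact sketch:

```python
def convex_hull(points):
    """Andrew's monotone chain on a list of (x, y) tuples: returns the
    hull vertices in counter-clockwise order, endpoints not repeated."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts

    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

print(convex_hull([(0, 0), (2, 0), (1, 1), (2, 2), (0, 2)]))
# [(0, 0), (2, 0), (2, 2), (0, 2)]
```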
Thermal Protection System with Staggered Joints
NASA Technical Reports Server (NTRS)
Simon, Xavier D. (Inventor); Robinson, Michael J. (Inventor); Andrews, Thomas L. (Inventor)
2014-01-01
The thermal protection system disclosed herein is suitable for use with a spacecraft such as a reentry module or vehicle, where the spacecraft has a convex surface to be protected. An embodiment of the thermal protection system includes a plurality of heat resistant panels, each having an outer surface configured for exposure to atmosphere, an inner surface opposite the outer surface and configured for attachment to the convex surface of the spacecraft, and a joint edge defined between the outer surface and the inner surface. The joint edges of adjacent ones of the heat resistant panels are configured to mate with each other to form staggered joints that run between the peak of the convex surface and the base section of the convex surface.
A new convexity measure for polygons.
Zunic, Jovisa; Rosin, Paul L
2004-07-01
Convexity estimators are commonly used in the analysis of shape. In this paper, we define and evaluate a new convexity measure for planar regions bounded by polygons. The new convexity measure can be understood as a "boundary-based" measure and, in accordance with this, it is more sensitive to measured boundary defects than the so-called "area-based" convexity measures. When compared with the convexity measure defined as the ratio between the Euclidean perimeter of the convex hull of the measured shape and the Euclidean perimeter of the measured shape, the new convexity measure also shows some advantages, particularly for shapes with holes. The new convexity measure has the following desirable properties: 1) the estimated convexity is always a number from (0, 1], 2) the estimated convexity is 1 if and only if the measured shape is convex, 3) there are shapes whose estimated convexity is arbitrarily close to 0, 4) the new convexity measure is invariant under similarity transformations, and 5) there is a simple and fast procedure for computing the new convexity measure.
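As a point of reference, the competing perimeter-based measure the paper compares against (hull perimeter over shape perimeter) is simple to compute. The sketch below assumes scipy is available and implements that classical measure, not the paper's new one.

```python
import numpy as np
from scipy.spatial import ConvexHull

def perimeter(poly):
    """Euclidean perimeter of a closed polygon given as an (N, 2) array."""
    return np.sum(np.linalg.norm(np.roll(poly, -1, axis=0) - poly, axis=1))

def convexity_perimeter(poly):
    """Perimeter-based convexity: hull perimeter over shape perimeter.
    Lies in (0, 1] and equals 1 exactly when the polygon is convex."""
    hull = poly[ConvexHull(poly).vertices]   # hull vertices in CCW order
    return perimeter(hull) / perimeter(poly)

# an L-shaped (non-convex) polygon scores below 1
L = np.array([[0, 0], [2, 0], [2, 1], [1, 1], [1, 2], [0, 2]], dtype=float)
print(convexity_perimeter(L))   # about 0.93
```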
Computing convex quadrangulations
Schiffer, T.; Aurenhammer, F.; Demuth, M.
2012-01-01
We use projected Delaunay tetrahedra and a maximum independent set approach to compute large subsets of convex quadrangulations on a given set of points in the plane. The new method improves over the popular pairing method based on triangulating the point set. PMID:22389540
Butnariu, Dan; Censor, Yair; Gurfil, Pini; Hadar, Ethan
2010-01-01
We study some methods of subgradient projections for solving a convex feasibility problem with general (not necessarily hyperplanes or half-spaces) convex sets in the inconsistent case and propose a strategy that controls the relaxation parameters in a specific self-adapting manner. This strategy leaves enough user-flexibility but gives a mathematical guarantee for the algorithm’s behavior in the inconsistent case. We present numerical results of computational experiments that illustrate the computational advantage of the new method. PMID:20182556
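The skeleton of such methods is a cyclic sequence of relaxed projections onto the individual sets. In the minimal sketch below, exact projections stand in for the paper's subgradient projections, and the relaxation parameter is held fixed; the paper's point is precisely a self-adapting control of that parameter, which is not reproduced here.

```python
import numpy as np

def cyclic_projections(x, projections, n_iter=100, relax=1.0):
    """Cyclic relaxed projections for 'find x in the intersection of
    C_1..C_m', each set given by its projection operator. The relaxation
    parameter relax in (0, 2) is simply held fixed."""
    for _ in range(n_iter):
        for P in projections:
            x = x + relax * (P(x) - x)
    return x

# example: a halfspace {x : a.x <= b} intersected with a ball of radius r
a, b, r = np.array([1.0, 1.0]), 1.0, 2.0
proj_half = lambda x: x if a @ x <= b else x - (a @ x - b) / (a @ a) * a
proj_ball = lambda x: x if np.linalg.norm(x) <= r else r * x / np.linalg.norm(x)
x = cyclic_projections(np.array([5.0, 4.0]), [proj_half, proj_ball])
```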
SNS programming environment user's guide
NASA Technical Reports Server (NTRS)
Tennille, Geoffrey M.; Howser, Lona M.; Humes, D. Creig; Cronin, Catherine K.; Bowen, John T.; Drozdowski, Joseph M.; Utley, Judith A.; Flynn, Theresa M.; Austin, Brenda A.
1992-01-01
The computing environment of the Supercomputing Network Subsystem (SNS) of the Central Scientific Computing Complex of NASA Langley is briefly described. The major SNS computers are a CRAY-2, a CRAY Y-MP, a CONVEX C-210, and a CONVEX C-220. The software common to all of these computers is described, including: the UNIX operating system, computer graphics, networking utilities, mass storage, and mathematical libraries. Also described are file management, validation, SNS configuration, documentation, and customer services.
LARCRIM user's guide, version 1.0
NASA Technical Reports Server (NTRS)
Davis, John S.; Heaphy, William J.
1993-01-01
LARCRIM is a relational database management system (RDBMS) which performs the conventional duties of an RDBMS with the added feature that it can store attributes which consist of arrays or matrices. This makes it particularly valuable for scientific data management. It is accessible as a stand-alone system and through an application program interface. The stand-alone system may be executed in two modes: menu or command. The menu mode prompts the user for the input required to create, update, and/or query the database. The command mode requires the direct input of LARCRIM commands. Although LARCRIM is an update of an old database family, its performance on modern computers is quite satisfactory. LARCRIM is written in FORTRAN 77 and runs under the UNIX operating system. Versions have been released for the following computers: SUN (3 & 4), Convex, IRIS, Hewlett-Packard, CRAY 2 & Y-MP.
A Fast Algorithm of Convex Hull Vertices Selection for Online Classification.
Ding, Shuguang; Nie, Xiangli; Qiao, Hong; Zhang, Bo
2018-04-01
Reducing samples through convex hull vertices selection (CHVS) within each class is an important and effective method for online classification problems, since the classifier can be trained rapidly with the selected samples. However, the process of CHVS is NP-hard. In this paper, we propose a fast algorithm to select the convex hull vertices, based on the convex hull decomposition and the property of projection. In the proposed algorithm, the quadratic minimization problem of computing the distance between a point and a convex hull is converted into a linear equation problem with a low computational complexity. When the data dimension is high, an approximate, instead of exact, convex hull is allowed to be selected by setting an appropriate termination condition in order to delete more unimportant samples. In addition, the impact of outliers is also considered, and the proposed algorithm is improved by deleting the outliers in the initial procedure. Furthermore, a dimension conversion technique via the kernel trick is used to deal with nonlinearly separable problems. An upper bound is theoretically proved for the difference between the support vector machines based on the approximate convex hull vertices selected and all the training samples. Experimental results on both synthetic and real data sets show the effectiveness and validity of the proposed algorithm.
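The inner subproblem, the distance from a point to a convex hull, is a small quadratic program. The paper converts it into a linear-equation problem; as a generic illustration of the subproblem itself (not the paper's method), a Frank-Wolfe iteration also avoids a full QP solver. Names and parameters are illustrative.

```python
import numpy as np

def dist_to_hull(p, V, n_iter=200):
    """Distance from p to conv(V) by Frank-Wolfe: keep a feasible point x
    in the hull and repeatedly step toward the vertex minimizing the
    linearized objective, avoiding a full quadratic-program solver."""
    x = V[0].astype(float)
    for k in range(1, n_iter + 1):
        g = x - p                          # gradient of 0.5 * ||x - p||^2
        s = V[np.argmin(V @ g)]            # linear minimization over conv(V)
        x += (2.0 / (k + 2)) * (s - x)     # standard FW step size
    return np.linalg.norm(x - p)

V = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
print(dist_to_hull(np.array([1.0, 1.0]), V))   # ~0.707, distance to the edge
```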
NASA Astrophysics Data System (ADS)
Manstetten, Paul; Filipovic, Lado; Hössinger, Andreas; Weinbub, Josef; Selberherr, Siegfried
2017-02-01
We present a computationally efficient framework to compute the neutral flux in high aspect ratio structures during three-dimensional plasma etching simulations. The framework is based on a one-dimensional radiosity approach and is applicable to simulations of convex rotationally symmetric holes and convex symmetric trenches with a constant cross-section. The framework is intended to replace the full three-dimensional simulation step required to calculate the neutral flux during plasma etching simulations. Especially for high aspect ratio structures, the computational effort required to perform the full three-dimensional simulation of the neutral flux at the desired spatial resolution conflicts with practical simulation time constraints. Our results are in agreement with those obtained by three-dimensional Monte Carlo based ray tracing simulations for various aspect ratios and convex geometries. With this framework we present a comprehensive analysis of the influence of the geometrical properties of high aspect ratio structures, as well as of the particle sticking probability, on the neutral particle flux.
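At the heart of any radiosity approach is a linear flux balance between wall segments. A generic sketch of that solve is below, assuming the view-factor matrix F is already known; computing F for the rotationally symmetric geometries is where the paper's framework does its actual work, and none of that is reproduced here.

```python
import numpy as np

def radiosity_flux(F, emission, reflectivity):
    """Solve the radiosity balance B = E + diag(rho) @ F @ B for the wall
    segments of a discretized structure, where F[i, j] is the view factor
    from segment i to segment j, E is the direct (source) flux, and rho is
    the re-emission probability of each segment."""
    n = len(emission)
    return np.linalg.solve(np.eye(n) - reflectivity[:, None] * F, emission)
```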
Evaluating convex roof entanglement measures.
Tóth, Géza; Moroder, Tobias; Gühne, Otfried
2015-04-24
We show a powerful method to compute entanglement measures based on convex roof constructions. In particular, our method is applicable to measures that, for pure states, can be written as low order polynomials of operator expectation values. We show how to compute the linear entropy of entanglement, the linear entanglement of assistance, and a bound on the dimension of the entanglement for bipartite systems. We discuss how to obtain the convex roof of the three-tangle for three-qubit states. We also show how to calculate the linear entropy of entanglement and the quantum Fisher information based on partial information or device independent information. We demonstrate the usefulness of our method by concrete examples.
Gong, Pinghua; Zhang, Changshui; Lu, Zhaosong; Huang, Jianhua Z; Ye, Jieping
2013-01-01
Non-convex sparsity-inducing penalties have recently received considerable attention in sparse learning. Recent theoretical investigations have demonstrated their superiority over the convex counterparts in several sparse learning settings. However, solving the non-convex optimization problems associated with non-convex penalties remains a big challenge. A commonly used approach is the Multi-Stage (MS) convex relaxation (or DC programming), which relaxes the original non-convex problem to a sequence of convex problems. This approach is usually not very practical for large-scale problems because its computational cost is a multiple of that of solving a single convex problem. In this paper, we propose a General Iterative Shrinkage and Thresholding (GIST) algorithm to solve the non-convex optimization problem for a large class of non-convex penalties. The GIST algorithm iteratively solves a proximal operator problem, which in turn has a closed-form solution for many commonly used penalties. At each outer iteration of the algorithm, we use a line search initialized by the Barzilai-Borwein (BB) rule that allows finding an appropriate step size quickly. The paper also presents a detailed convergence analysis of the GIST algorithm. The efficiency of the proposed algorithm is demonstrated by extensive experiments on large-scale data sets.
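A compact sketch of a GIST-style iteration follows, using the ℓ1 penalty as a stand-in so the proximal step has the familiar closed form; the paper targets non-convex penalties (whose proximal operators are also closed-form) and wraps the BB-initialized step in a monotone line search, which this sketch omits.

```python
import numpy as np

def gist_l1(A, y, lam, n_iter=100):
    """GIST-style iteration for min_x 0.5*||A x - y||^2 + lam*||x||_1.
    Each step solves a proximal operator problem in closed form (soft
    thresholding here), with the step size initialized by the
    Barzilai-Borwein rule."""
    x = np.zeros(A.shape[1])
    grad = A.T @ (A @ x - y)
    t = 1.0
    for _ in range(n_iter):
        z = x - grad / t
        x_new = np.sign(z) * np.maximum(np.abs(z) - lam / t, 0.0)  # prox step
        grad_new = A.T @ (A @ x_new - y)
        dx, dg = x_new - x, grad_new - grad
        if dx @ dx > 0:
            t = max((dx @ dg) / (dx @ dx), 1e-12)   # BB initialization
        x, grad = x_new, grad_new
    return x
```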
A search asymmetry reversed by figure-ground assignment.
Humphreys, G W; Müller, H
2000-05-01
We report evidence demonstrating that a search asymmetry favoring concave over convex targets can be reversed by altering the figure-ground assignment of edges in shapes. Visual search for a concave target among convex distractors is faster than search for a convex target among concave distractors (a search asymmetry). By using shapes with ambiguous local figure-ground relations, we demonstrated that search can be efficient (with search slopes around 10 ms/item) or inefficient (with search slopes around 30-40 ms/item) with the same stimuli, depending on whether edges are assigned to concave or convex "figures." This assignment process can operate in a top-down manner, according to the task set. The results suggest that attention is allocated to spatial regions following the computation of figure-ground relations in parallel across the elements present. This computation can also be modulated by top-down processes.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Engberg, L; KTH Royal Institute of Technology, Stockholm; Eriksson, K
Purpose: To formulate objective functions of a multicriteria fluence map optimization model that correlate well with plan quality metrics, and to solve this multicriteria model by convex approximation.
Methods: In this study, objectives of a multicriteria model are formulated to explicitly either minimize or maximize a dose-at-volume measure. Given the widespread agreement that dose-at-volume levels play important roles in plan quality assessment, these objectives correlate well with plan quality metrics. This is in contrast to the conventional objectives, which are to maximize clinical goal achievement by relating to deviations from given dose-at-volume thresholds: while balancing the new objectives means explicitly balancing dose-at-volume levels, balancing the conventional objectives effectively means balancing deviations. Constituted by the inherently non-convex dose-at-volume measure, the new objectives are approximated by the convex mean-tail-dose measure (CVaR measure), yielding a convex approximation of the multicriteria model.
Results: Advantages of using the convex approximation are investigated through juxtaposition with the conventional objectives in a computational study of two patient cases. Clinical goals of each case respectively point out three ROI dose-at-volume measures to be considered for plan quality assessment. This is translated in the convex approximation into minimizing three mean-tail-dose measures. Evaluations of the three ROI dose-at-volume measures on Pareto optimal plans are used to represent plan quality of the Pareto sets. Besides providing increased accuracy in terms of feasibility of solutions, the convex approximation generates Pareto sets with overall improved plan quality. In one case, the Pareto set generated by the convex approximation entirely dominates that generated with the conventional objectives.
Conclusion: The initial computational study indicates that the convex approximation outperforms the conventional objectives in aspects of accuracy and plan quality.
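The mean-tail-dose (CVaR) measure that replaces dose-at-volume is straightforward to evaluate on a dose vector: it is the mean of the hottest fraction v of voxels, and it upper-bounds the corresponding dose-at-volume level. A small illustrative sketch, with an entirely hypothetical dose distribution:

```python
import numpy as np

def mean_tail_dose(dose, v):
    """Upper mean-tail-dose (a CVaR-type measure): the mean of the hottest
    fraction v of the voxel doses. It is convex in the dose vector and
    upper-bounds the corresponding dose-at-volume level."""
    k = max(1, int(np.ceil(v * dose.size)))
    return float(np.mean(np.sort(dose)[-k:]))

# hypothetical dose distribution: the v = 5% mean-tail-dose bounds D_5%
dose = np.random.default_rng(1).gamma(shape=50.0, scale=1.0, size=10000)
print(mean_tail_dose(dose, 0.05))
```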
Computational Efficiency of the Simplex Embedding Method in Convex Nondifferentiable Optimization
NASA Astrophysics Data System (ADS)
Kolosnitsyn, A. V.
2018-02-01
The simplex embedding method for solving convex nondifferentiable optimization problems is considered. Modifications of this method are described that shift the cutting plane so as to cut off the maximum number of simplex vertices; these modifications speed up the solution of the problem. A numerical comparison of the efficiency of the proposed modifications, based on the numerical solution of benchmark convex nondifferentiable optimization problems, is presented.
Image deblurring based on nonlocal regularization with a non-convex sparsity constraint
NASA Astrophysics Data System (ADS)
Zhu, Simiao; Su, Zhenming; Li, Lian; Yang, Yi
2018-04-01
In recent years, nonlocal regularization methods for image restoration (IR) have drawn more and more attention due to the promising results obtained when compared to the traditional local regularization methods. Despite the success of this technique, most existing methods exploit a convex regularizing functional in order to obtain computational efficiency, which is equivalent to imposing a convex prior on the nonlocal difference operator output. However, our experiments illustrate that the empirical distribution of the output of the nonlocal difference operator, especially in the seminal work of Kheradmand et al., should be characterized by an extremely heavy-tailed distribution rather than a convex one. Therefore, in this paper, we propose a nonlocal regularization-based method with a non-convex sparsity constraint for image deblurring. Finally, an effective algorithm is developed to solve the corresponding non-convex optimization problem. The experimental results demonstrate the effectiveness of the proposed method.
Minimal norm constrained interpolation. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Irvine, L. D.
1985-01-01
In computational fluid dynamics and in CAD/CAM, a physical boundary is usually known only discretely and most often must be approximated. An acceptable approximation preserves the salient features of the data such as convexity and concavity. In this dissertation, a smooth interpolant which is locally concave where the data are concave and locally convex where the data are convex is described. The interpolant is found by posing and solving a minimization problem whose solution is a piecewise cubic polynomial. The problem is solved indirectly by using the Peano kernel theorem to recast it into an equivalent minimization problem having the second derivative of the interpolant as the solution. This approach leads to the solution of a nonlinear system of equations. It is shown that Newton's method is an exceptionally attractive and efficient method for solving the nonlinear system of equations. Examples of shape-preserving interpolants, as well as convergence results obtained by using Newton's method, are also shown. A FORTRAN program to compute these interpolants is listed. The problem of computing the interpolant of minimal norm from a convex cone in a normed dual space is also discussed. An extension of de Boor's work on minimal norm unconstrained interpolation is presented.
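The role Newton's method plays here, solving the nonlinear system for the interpolant's second-derivative values, is the standard iteration sketched below; this is the generic method, not the dissertation's specific system.

```python
import numpy as np

def newton_system(F, J, x0, tol=1e-10, max_iter=50):
    """Newton's method for F(x) = 0: solve J(x) dx = -F(x), update, repeat
    until the step is small."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        dx = np.linalg.solve(J(x), -F(x))
        x += dx
        if np.linalg.norm(dx) < tol:
            break
    return x

# example: intersection of the unit circle with the line x0 = x1
F = lambda x: np.array([x[0] ** 2 + x[1] ** 2 - 1.0, x[0] - x[1]])
J = lambda x: np.array([[2 * x[0], 2 * x[1]], [1.0, -1.0]])
print(newton_system(F, J, [1.0, 0.5]))   # ~ [0.7071, 0.7071]
```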
Preconditioning 2D Integer Data for Fast Convex Hull Computations.
Cadenas, José Oswaldo; Megson, Graham M; Luengo Hendriks, Cris L
2016-01-01
In order to accelerate computing the convex hull on a set of n points, a heuristic procedure is often applied to reduce the number of points to a set of s points, s ≤ n, which also contains the same hull. We present an algorithm to precondition 2D data with integer coordinates bounded by a box of size p × q before building a 2D convex hull, with three distinct advantages. First, we prove that under the condition min(p, q) ≤ n the algorithm executes in time within O(n); second, no explicit sorting of data is required; and third, the reduced set of s points forms a simple polygonal chain and thus can be directly pipelined into an O(n) time convex hull algorithm. This paper empirically evaluates and quantifies the speedup gained by preconditioning a set of points by a method based on the proposed algorithm before using common convex hull algorithms to build the final hull. A speedup factor of at least four is consistently found from experiments on various datasets when the condition min(p, q) ≤ n holds; the smaller the ratio min(p, q)/n is in the dataset, the greater the speedup factor achieved.
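One O(n) thinning pass in this spirit keeps, for each of the p integer x-values, only the lowest and highest y: interior points of a column cannot be hull vertices, so the hull is unchanged, and the survivors swept left to right along the minima and back along the maxima essentially form a simple chain ready for a linear-time hull pass. This sketch follows the paper's idea but is not its exact procedure; it assumes all x-coordinates lie in [0, p).

```python
import numpy as np

def precondition_integer_points(points, p):
    """Single O(n) pass over (x, y) integer points with 0 <= x < p: keep
    only the lowest and highest y in each column. Interior points of a
    column cannot be hull vertices, so the convex hull is unchanged."""
    ymin = np.full(p, np.iinfo(np.int64).max)
    ymax = np.full(p, np.iinfo(np.int64).min)
    for x, y in points:                      # no sorting required
        if y < ymin[x]: ymin[x] = y
        if y > ymax[x]: ymax[x] = y
    # sweep left to right along the minima, back along the maxima
    lower = [(x, int(ymin[x])) for x in range(p) if ymin[x] <= ymax[x]]
    upper = [(x, int(ymax[x])) for x in range(p - 1, -1, -1)
             if ymax[x] > ymin[x]]
    return lower + upper                     # a (near-)simple chain
```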
High-frequency electromagnetic scarring in three-dimensional axisymmetric convex cavities
Warne, Larry K.; Jorgenson, Roy E.
2016-04-13
This article examines the localization of high-frequency electromagnetic fields in three-dimensional axisymmetric cavities along periodic paths between opposing sides of the cavity. When these orbits lead to unstable localized modes, they are known as scars. This article treats the case where the opposing sides, or mirrors, are convex. Particular attention is focused on the normalization through the electromagnetic energy theorem. Both projections of the field along the scarred orbit as well as field point statistics are examined. Statistical comparisons are made with a numerical calculation of the scars run with an axisymmetric simulation.
NASA Astrophysics Data System (ADS)
Mulla, Ameer K.; Patil, Deepak U.; Chakraborty, Debraj
2018-02-01
N identical agents with bounded inputs aim to reach a common target state (consensus) in the minimum possible time. Algorithms are proposed for computing this time-optimal consensus point, the control law to be used by each agent, and the time taken for the consensus to occur. Two types of multi-agent systems are considered, namely (1) coupled single-integrator agents on a plane and (2) double-integrator agents on a line. At the initial time instant, each agent is assumed to have access to the state information of all the other agents. An algorithm, using convexity of attainable sets and Helly's theorem, is proposed to compute the final consensus target state and the minimum time to achieve this consensus. Further, parts of the computation are parallelised amongst the agents such that each agent has to perform computations of O(N^2) run time complexity. Finally, local feedback time-optimal control laws are synthesised to drive each agent to the target point in minimum time. During this part of the operation, the controller for each agent uses measurements of only its own states and does not need to communicate with any neighbouring agents.
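For the planar single-integrator case with speed bound v_max (and ignoring the coupling in the paper's model), each agent's attainable set at time t is a disk of radius v_max·t, so the minimum consensus time is set by the smallest circle enclosing the initial positions. The sketch below uses a generic optimizer in place of the paper's Helly-theorem construction; all names are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

def min_time_consensus(positions, v_max):
    """Minimum-time consensus point for planar agents with speed bound
    v_max: the attainable set of each agent at time t is a disk of radius
    v_max * t, so the optimum is the center of the smallest circle
    enclosing the initial positions."""
    p = np.asarray(positions, dtype=float)
    obj = lambda x: np.max(np.linalg.norm(p - x, axis=1))
    res = minimize(obj, p.mean(axis=0), method='Nelder-Mead')
    return res.x, obj(res.x) / v_max         # consensus point, minimum time

pos = [(0.0, 0.0), (4.0, 0.0), (0.0, 3.0)]
x_star, t_min = min_time_consensus(pos, v_max=1.0)   # center (2, 1.5), t = 2.5
```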
ARCGRAPH SYSTEM - AMES RESEARCH GRAPHICS SYSTEM
NASA Technical Reports Server (NTRS)
Hibbard, E. A.
1994-01-01
Ames Research Graphics System, ARCGRAPH, is a collection of libraries and utilities which assist researchers in generating, manipulating, and visualizing graphical data. In addition, ARCGRAPH defines a metafile format that contains device independent graphical data. This file format is used with various computer graphics manipulation and animation packages at Ames, including SURF (COSMIC Program ARC-12381) and GAS (COSMIC Program ARC-12379). In its full configuration, the ARCGRAPH system consists of a two stage pipeline which may be used to output graphical primitives. Stage one is associated with the graphical primitives (i.e. moves, draws, color, etc.) along with the creation and manipulation of the metafiles. Five distinct data filters make up stage one. They are: 1) PLO which handles all 2D vector primitives, 2) POL which handles all 3D polygonal primitives, 3) RAS which handles all 2D raster primitives, 4) VEC which handles all 3D vector primitives, and 5) PO2 which handles all 2D polygonal primitives. Stage two is associated with the process of displaying graphical primitives on a device. To generate the various graphical primitives, create and reprocess ARCGRAPH metafiles, and access the device drivers in the VDI (Video Device Interface) library, users link their applications to ARCGRAPH's GRAFIX library routines. Both FORTRAN and C language versions of the GRAFIX and VDI libraries exist for enhanced portability within these respective programming environments. The ARCGRAPH libraries were developed on a VAX running VMS. Minor documented modification of various routines, however, allows the system to run on the following computers: Cray X-MP running COS (no C version); Cray 2 running UNICOS; DEC VAX running BSD 4.3 UNIX or Ultrix; SGI IRIS Turbo running GL2-W3.5 and GL2-W3.6; Convex C1 running UNIX; Amdahl 5840 running UTS; Alliant FX8 running UNIX; Sun 3/160 running UNIX (no native device driver); Stellar GS1000 running Stellex (no native device driver); and an SGI IRIS 4D running IRIX (no native device driver). Currently with version 7.0 of ARCGRAPH, the VDI library supports the following output devices: a VT100 terminal with a RETRO-GRAPHICS board installed, a VT240 using the Tektronix 4010 emulation capability, an SGI IRIS turbo using the native GL2 library, a Tektronix 4010, a Tektronix 4105, and the Tektronix 4014. ARCGRAPH version 7.0 was developed in 1988.
Computationally efficient stochastic optimization using multiple realizations
NASA Astrophysics Data System (ADS)
Bayer, P.; Bürger, C. M.; Finkel, M.
2008-02-01
The presented study is concerned with computationally efficient methods for solving stochastic optimization problems involving multiple equally probable realizations of uncertain parameters. A new and straightforward technique is introduced that is based on dynamically ordering the stack of realizations during the search procedure. The rationale is that a small number of critical realizations govern the output of a reliability-based objective function. By utilizing a problem typical of designing a water supply well field, several variants of this "stack ordering" approach are tested. The results are statistically assessed, in terms of optimality and nominal reliability. This study demonstrates that simply ordering a given stack of 500 realizations while applying an evolutionary search algorithm can save about half of the model runs without compromising the optimization procedure. More advanced variants of stack ordering can, if properly configured, save more than 97% of the computational effort that would be required if the entire number of realizations were considered. The findings herein are promising for similar problems of water management and reliability-based design in general, and particularly for non-convex problems that require heuristic search techniques.
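A minimal sketch of one stack-ordering variant: test a candidate design against the realizations in their current stack order, abort at the first critical (failing) realization, and promote that realization to the top so that future infeasible candidates are rejected after very few model runs. All names and the callable signature are illustrative, not the paper's implementation.

```python
def stack_ordered_check(design, realizations, simulate, threshold):
    """Test a candidate design against an ordered stack of realizations;
    abort at the first critical (failing) realization and promote it, so
    later infeasible candidates fail after very few model runs. Returns
    (feasible for all realizations?, number of model runs used)."""
    for idx, real in enumerate(realizations):
        if simulate(design, real) > threshold:       # critical realization
            realizations.insert(0, realizations.pop(idx))
            return False, idx + 1
    return True, len(realizations)
```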
Fast intersection detection algorithm for PC-based robot off-line programming
NASA Astrophysics Data System (ADS)
Fedrowitz, Christian H.
1994-11-01
This paper presents a method for fast and reliable collision detection in complex production cells. The algorithm is part of the PC-based robot off-line programming system of the University of Siegen (Ropsus). The method is based on a solid model which is managed by a simplified constructive solid geometry model (CSG model). The collision detection problem is divided into two steps. In the first step the complexity of the problem is reduced in linear time. In the second step the remaining solids are tested for intersection. For this the Simplex algorithm, known from linear optimization, is used: it computes a point common to two convex polyhedra, and the polyhedra intersect if such a point exists. With the simplified geometrical model of Ropsus, this step also runs in linear time. In conjunction with the first step, the resulting collision detection algorithm requires linear time overall. Moreover, it computes the resulting intersection polyhedron using the dual transformation.
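The second step reduces to a linear feasibility question: do the two convex polyhedra, each given as a set of linear inequalities, share a point? A sketch using an off-the-shelf simplex-style LP solver (scipy's linprog) rather than the author's implementation:

```python
import numpy as np
from scipy.optimize import linprog

def polyhedra_intersect(A1, b1, A2, b2):
    """Do {x : A1 x <= b1} and {x : A2 x <= b2} intersect? Stack the
    inequalities and ask an LP solver for any feasible point; a common
    point exists exactly when the combined system is feasible."""
    A = np.vstack([A1, A2])
    b = np.concatenate([b1, b2])
    c = np.zeros(A.shape[1])                 # feasibility only: zero cost
    res = linprog(c, A_ub=A, b_ub=b, bounds=[(None, None)] * A.shape[1])
    return res.success

# two axis-aligned unit boxes offset by 0.5: they do intersect
A_box = np.array([[1., 0.], [-1., 0.], [0., 1.], [0., -1.]])
print(polyhedra_intersect(A_box, np.array([1., 0., 1., 0.]),
                          A_box, np.array([1.5, -0.5, 1.5, -0.5])))  # True
```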
Allometric relationships between traveltime channel networks, convex hulls, and convexity measures
NASA Astrophysics Data System (ADS)
Tay, Lea Tien; Sagar, B. S. Daya; Chuah, Hean Teik
2006-06-01
The channel network (S) is a nonconvex set, while its basin [C(S)] is convex. We remove open-end points of the channel connectivity network iteratively to generate a traveltime sequence of networks (S_n). The convex hulls of these traveltime networks provide an interesting topological quantity, which has not been noted thus far. We compute lengths of shrinking traveltime networks L(S_n) and areas of corresponding convex hulls C(S_n), the ratios of which provide convexity measures CM(S_n) of traveltime networks. A statistically significant scaling relationship is found for a model network in the form L(S_n) ~ A[C(S_n)]^0.57. From the plots of the lengths of these traveltime networks and the areas of their corresponding convex hulls as functions of convexity measures, new power law relations are derived. Such relations for a model network are CM(S_n) ~ ? and CM(S_n) ~ ?. In addition to the model study, these relations for networks derived from seven subbasins of the Cameron Highlands region of Peninsular Malaysia are provided. Further studies are needed on a large number of channel networks of distinct sizes and topologies to understand the relationships of these new exponents with other scaling exponents that define the scaling structure of river networks.
Computed tomography of the azygo-oesophageal recess. Normal appearances.
Lund, G; Lien, H H
1982-01-01
Computed tomography of the azygo-oesophageal recess was performed in 85 normal subjects. The recess was convex towards the left or had an approximately straight left wall. Convexity towards the right did not occur. Localized bulges caused by the azygos vein, oesophagus and aorta were frequent. The recess became gradually deeper caudally in patients below 50 years of age. Above that age a marked posterior extension of the heart and a prevertebral position of the aorta often caused a localized shallowing at the level of the inferior pulmonary veins or the ventricles.
Nonexpansiveness of a linearized augmented Lagrangian operator for hierarchical convex optimization
NASA Astrophysics Data System (ADS)
Yamagishi, Masao; Yamada, Isao
2017-04-01
Hierarchical convex optimization concerns two-stage optimization problems: the first stage problem is a convex optimization; the second stage problem is the minimization of a convex function over the solution set of the first stage problem. For the hierarchical convex optimization, the hybrid steepest descent method (HSDM) can be applied, where the solution set of the first stage problem must be expressed as the fixed point set of a certain nonexpansive operator. In this paper, we propose a nonexpansive operator that yields a computationally efficient update when it is plugged into the HSDM. The proposed operator is inspired by the update of the linearized augmented Lagrangian method. It is applicable to characterize the solution set of recent sophisticated convex optimization problems found in the context of inverse problems, where the sum of multiple proximable convex functions involving linear operators must be minimized to incorporate preferable properties into the minimizers. For such a problem formulation, there has not yet been reported any nonexpansive operator that yields an update free from the inversions of linear operators in cases where it is utilized in the HSDM. Unlike previously known nonexpansive operators, the proposed operator yields an inversion-free update in such cases. As an application of the proposed operator plugged into the HSDM, we also present, in the context of the so-called superiorization, an algorithmic solution to a convex optimization problem over the generalized convex feasible set where the intersection of the hard constraints is not necessarily simple.
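In compact form (notation assumed here, not taken from the paper), the two-stage problem and the schematic HSDM update read:

```latex
\min_{x \in S^\star} \psi(x),
\qquad
S^\star := \operatorname*{arg\,min}_{x} \varphi(x) = \operatorname{Fix}(T),
\qquad
x_{n+1} = T(x_n) - \lambda_{n+1} \nabla\psi\bigl(T(x_n)\bigr),
```

where T is a nonexpansive operator whose fixed point set equals the first-stage solution set and (λ_n) is a slowly vanishing step sequence; the paper's contribution is a choice of T whose evaluation avoids inverting the linear operators.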
Modified surface testing method for large convex aspheric surfaces based on diffraction optics.
Zhang, Haidong; Wang, Xiaokun; Xue, Donglin; Zhang, Xuejun
2017-12-01
Large convex aspheric optical elements are widely used in advanced optical systems and present a challenging metrology problem. Conventional testing methods gradually fail to satisfy the demand as the definition of "large" expands. A modified method is proposed in this paper, which utilizes a relatively small computer-generated hologram and a feasible illumination lens to measure large convex aspherics. Two example systems are designed to demonstrate the applicability, and the sensitivity of the configuration is analyzed, showing that its accuracy can be better than 6 nm with careful alignment and calibration of the illumination lens in advance. Design examples and analysis show that this configuration is applicable to measuring large convex aspheric surfaces.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ghomi, Pooyan Shirvani; Zinchenko, Yuriy
2014-08-15
Purpose: To compare methods to incorporate the Dose Volume Histogram (DVH) curves into the treatment planning optimization. Method: The performance of three methods, namely, the conventional Mixed Integer Programming (MIP) model, a convex moment-based constrained optimization approach, and an unconstrained convex moment-based penalty approach, is compared using anonymized data of a prostate cancer patient. Three plans were generated using the corresponding optimization models. Four Organs at Risk (OARs) and one tumor were involved in the treatment planning. The OARs and tumor were discretized into a total of 50,221 voxels. The number of beamlets was 943. We used the commercially available optimization software Gurobi and Matlab to solve the models. Plan comparison was done by recording the model runtime followed by visual inspection of the resulting dose volume histograms. Conclusion: We demonstrate the effectiveness of the moment-based approaches to replicate the set of prescribed DVH curves. The unconstrained convex moment-based penalty approach is concluded to have the greatest potential to reduce the computational effort and holds a promise of substantial computational speedup.
Convex relaxations for gas expansion planning
Borraz-Sanchez, Conrado; Bent, Russell Whitford; Backhaus, Scott N.; ...
2016-01-01
Expansion of natural gas networks is a critical process involving substantial capital expenditures with complex decision-support requirements. Here, given the non-convex nature of gas transmission constraints, global optimality and infeasibility guarantees can only be offered by global optimisation approaches. Unfortunately, state-of-the-art global optimisation solvers are unable to scale up to real-world size instances. In this study, we present a convex mixed-integer second-order cone relaxation for the gas expansion planning problem under steady-state conditions. The underlying model offers tight lower bounds with high computational efficiency. In addition, the optimal solution of the relaxation can often be used to derive high-quality solutions to the original problem, leading to provably tight optimality gaps and, in some cases, globally optimal solutions. The convex relaxation is based on a few key ideas, including the introduction of flux direction variables, exact McCormick relaxations, on/off constraints, and integer cuts. Numerical experiments are conducted on the traditional Belgian gas network, as well as other real larger networks. The results demonstrate both the accuracy and computational speed of the relaxation and its ability to produce high-quality solutions.
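For reference, the exact McCormick envelope of a bilinear term w = xy on a box (a standard construction; the concrete constraints in the paper's model are more elaborate) is:

```latex
w \ge x_L y + x y_L - x_L y_L, \qquad
w \ge x_U y + x y_U - x_U y_U, \\
w \le x_U y + x y_L - x_U y_L, \qquad
w \le x_L y + x y_U - x_L y_U,
```

for x in [x_L, x_U] and y in [y_L, y_U]; these four linear inequalities are the convex and concave envelopes of the product and are exact at the corners of the box.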
Sidky, Emil Y.; Jørgensen, Jakob H.; Pan, Xiaochuan
2012-01-01
The primal-dual optimization algorithm developed in Chambolle and Pock (CP), 2011 is applied to various convex optimization problems of interest in computed tomography (CT) image reconstruction. This algorithm allows for rapid prototyping of optimization problems for the purpose of designing iterative image reconstruction algorithms for CT. The primal-dual algorithm is briefly summarized in the article, and its potential for prototyping is demonstrated by explicitly deriving CP algorithm instances for many optimization problems relevant to CT. An example application modeling breast CT with low-intensity X-ray illumination is presented. PMID:22538474
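For orientation, the CP iteration for the generic problem min_x F(Kx) + G(x) takes the following standard form (as in Chambolle and Pock, 2011):

```latex
y^{n+1} = \operatorname{prox}_{\sigma F^{*}}\!\bigl(y^{n} + \sigma K \bar{x}^{n}\bigr), \qquad
x^{n+1} = \operatorname{prox}_{\tau G}\!\bigl(x^{n} - \tau K^{*} y^{n+1}\bigr), \qquad
\bar{x}^{n+1} = x^{n+1} + \theta\,(x^{n+1} - x^{n}),
```

with θ = 1 and στ‖K‖² < 1 guaranteeing convergence; prototyping then amounts to identifying K, F, and G for each CT model and supplying the two proximal maps.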
NASA Astrophysics Data System (ADS)
Shah, S.; Gray, F.; Yang, J.; Crawshaw, J.; Boek, E.
2016-12-01
Advances in 3D pore-scale imaging and computational methods have allowed an exceptionally detailed quantitative and qualitative analysis of the fluid flow in complex porous media. A fundamental problem in pore-scale imaging and modelling is how to represent and model the range of scales encountered in porous media, starting from the smallest pore spaces. In this study, a novel method is presented for determining the representative elementary volume (REV) of a rock for several parameters simultaneously. We calculate the two main macroscopic petrophysical parameters, porosity and single-phase permeability, using micro CT imaging and Lattice Boltzmann (LB) simulations for 14 different porous media, including sandpacks, sandstones and carbonates. The concept of the `Convex Hull' is then applied to calculate the REV for both parameters simultaneously using a plot of the area of the convex hull as a function of the sub-volume, capturing the different scales of heterogeneity from the pore-scale imaging. The results also show that the area of the convex hull (for well-chosen parameters such as the log of the permeability and the porosity) decays exponentially with sub-sample size suggesting a computationally efficient way to determine the system size needed to calculate the parameters to high accuracy (small convex hull area). Finally we propose using a characteristic length such as the pore size to choose an efficient absolute voxel size for the numerical rock.
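A minimal sketch of the hull-area diagnostic (SciPy-based; variable names are illustrative): collect (porosity, log10 permeability) pairs from many sub-volumes of one size and measure the area of their 2D convex hull; the REV is reached once this area falls below a tolerance.

```python
import numpy as np
from scipy.spatial import ConvexHull

def hull_area(porosity, log10_perm):
    """Area of the 2D convex hull of (porosity, log10 k) sub-volume samples.

    Note: for 2D input, ConvexHull.volume is the enclosed area
    (ConvexHull.area would be the perimeter)."""
    pts = np.column_stack([porosity, log10_perm])
    return ConvexHull(pts).volume
```

Repeating this at increasing sub-volume sizes and fitting the observed exponential decay then indicates, from only a few small simulations, the system size needed for a target accuracy.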
NASA Astrophysics Data System (ADS)
Oda, Masahiro; Kitasaka, Takayuki; Furukawa, Kazuhiro; Watanabe, Osamu; Ando, Takafumi; Goto, Hidemi; Mori, Kensaku
2011-03-01
The purpose of this paper is to present a new method to detect ulcers, which are one of the symptoms of Crohn's disease, from CT images. Crohn's disease is an inflammatory disease of the digestive tract that commonly affects the small intestine. An optical or a capsule endoscope is used for small intestine examinations; however, these endoscopes cannot pass through intestinal stenosis parts in some cases. A CT image based diagnosis allows a physician to observe the whole intestine even if intestinal stenosis exists. However, because of the complicated shape of the small and large intestines, understanding the shapes of the intestines and the positions of lesions is difficult in CT image based diagnosis. A computer-aided diagnosis system for Crohn's disease with automated lesion detection is required for efficient diagnosis. We propose an automated method to detect ulcers from CT images. Longitudinal ulcers roughen the surface of the small and large intestinal wall. The rough surface consists of a combination of convex and concave parts on the intestinal wall. We detect convex and concave parts on the intestinal wall by blob and inverse-blob structure enhancement filters. Many convex and concave parts concentrate on the roughened parts. We introduce a roughness value to differentiate convex and concave parts concentrated on the roughened parts from the others on the intestinal wall. The roughness value effectively reduces false positives in ulcer detection. Experimental results showed that the proposed method can detect convex and concave parts on the ulcers.
Certification of computational results
NASA Technical Reports Server (NTRS)
Sullivan, Gregory F.; Wilson, Dwight S.; Masson, Gerald M.
1993-01-01
A conceptually novel and powerful technique to achieve fault detection and fault tolerance in hardware and software systems is described. When used for software fault detection, this new technique uses time and software redundancy and can be outlined as follows. In the initial phase, a program is run to solve a problem and store the result. In addition, this program leaves behind a trail of data called a certification trail. In the second phase, another program is run which solves the original problem again. This program, however, has access to the certification trail left by the first program. Because of the availability of the certification trail, the second phase can be performed by a less complex program and can execute more quickly. In the final phase, the two results are compared and if they agree the results are accepted as correct; otherwise an error is indicated. An essential aspect of this approach is that the second program must always generate either an error indication or a correct output even when the certification trail it receives from the first program is incorrect. The certification trail approach to fault tolerance is formalized and realizations of it are illustrated by considering algorithms for the following problems: convex hull, sorting, and shortest path. Cases in which the second phase can be run concurrently with the first and act as a monitor are discussed. The certification trail approach is compared to other approaches to fault tolerance.
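The sorting instance makes the idea concrete. A hedged sketch (details assumed, not the paper's exact realization): the first program records the sorting permutation as the trail; the second program only has to verify, in linear time, that the trail is a permutation yielding a nondecreasing sequence, and signals an error otherwise.

```python
def first_phase(a):
    trail = sorted(range(len(a)), key=a.__getitem__)   # permutation = trail
    return [a[i] for i in trail], trail

def second_phase(a, trail):
    n, seen = len(a), [False] * len(a)
    for i in trail:                       # trail must be a permutation ...
        if not 0 <= i < n or seen[i]:
            return None                   # ... otherwise indicate an error
        seen[i] = True
    out = [a[i] for i in trail]
    if any(x > y for x, y in zip(out, out[1:])):
        return None                       # trail does not sort: error
    return out                            # agrees with phase one => accept
```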
The role of convexity in perceptual completion: beyond good continuation.
Liu, Z; Jacobs, D W; Basri, R
1999-01-01
Since the seminal work of the Gestalt psychologists, there has been great interest in understanding what factors determine the perceptual organization of images. While the Gestaltists demonstrated the significance of grouping cues such as similarity, proximity and good continuation, it has not been well understood whether their catalog of grouping cues is complete, in part due to the paucity of effective methodologies for examining the significance of various grouping cues. We describe a novel, objective method to study perceptual grouping of planar regions separated by an occluder. We demonstrate that the stronger the grouping between two such regions, the harder it will be to resolve their relative stereoscopic depth. We use this new method to call into question many existing theories of perceptual completion (Ullman, S. (1976). Biological Cybernetics, 25, 1-6; Shashua, A., & Ullman, S. (1988). 2nd International Conference on Computer Vision (pp. 321-327); Parent, P., & Zucker, S. (1989). IEEE Transactions on Pattern Analysis and Machine Intelligence, 11, 823-839; Kellman, P. J., & Shipley, T. F. (1991). Cognitive psychology, Liveright, New York; Heitger, R., & von der Heydt, R. (1993). A computational model of neural contour processing, figure-ground segregation and illusory contours. In International Conference on Computer Vision (pp. 32-40); Mumford, D. (1994). Algebraic geometry and its applications, Springer, New York; Williams, L. R., & Jacobs, D. W. (1997). Neural Computation, 9, 837-858) that are based on Gestalt grouping cues by demonstrating that convexity plays a strong role in perceptual completion. In some cases convexity dominates the effects of the well known Gestalt cue of good continuation. While convexity has been known to play a role in figure/ground segmentation (Rubin, 1927; Kanizsa & Gerbino, 1976), this is the first demonstration of its importance in perceptual completion.
NASA Astrophysics Data System (ADS)
Rizzatti, Eduardo O.; Barbosa, Marco Aurélio A.; Barbosa, Marcia C.
2018-02-01
The pressure versus temperature phase diagram of a system of particles interacting through a multiscale shoulder-like potential is exactly computed in one dimension. The N-shoulder potential exhibits N density anomaly regions in the phase diagram if the length scales can be connected by a convex curve. The result is analyzed in terms of the convexity of the Gibbs free energy.
Clearance detector and method for motion and distance
Xavier, Patrick G [Albuquerque, NM
2011-08-09
A method for correct and efficient detection of clearances between three-dimensional bodies in computer-based simulations, where one or both of the volumes is subject to translations and/or rotations. The method conservatively determines the size of such clearances and whether there is a collision between the bodies. Given two bodies, each of which is undergoing separate motions, the method utilizes bounding-volume hierarchy representations for the two bodies, and mappings and inverse mappings for the motions of the two bodies. The method uses the representations, mappings and direction vectors to determine the directionally furthest locations of points on the convex hulls of the volumes virtually swept by the bodies, and hence the clearance between the bodies, without having to calculate the convex hulls of the bodies. The method includes clearance detection for bodies comprising convex geometrical primitives and more specific techniques for bodies comprising convex polyhedra.
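The directional queries admit a very small sketch (array names assumed): the furthest point of a convex set along a direction d is a maximum dot product over any point set whose hull is that set, so a positive separation along any direction gives a conservative clearance bound without ever constructing the hull.

```python
import numpy as np

def support(points, d):
    """Directionally furthest point; equals the hull's support point."""
    return points[np.argmax(points @ d)]

def directional_gap(swept_a, swept_b, d):
    """Positive value = conservative lower bound on the clearance along d."""
    d = d / np.linalg.norm(d)
    return np.min(swept_b @ d) - np.max(swept_a @ d)
```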
Cavity Versus Ligand Shape Descriptors: Application to Urokinase Binding Pockets.
Cerisier, Natacha; Regad, Leslie; Triki, Dhoha; Camproux, Anne-Claude; Petitjean, Michel
2017-11-01
We analyzed 78 binding pockets of the human urokinase plasminogen activator (uPA) catalytic domain extracted from a data set of crystallized uPA-ligand complexes. These binding pockets were computed with an original geometric method that does NOT involve any arbitrary parameter, such as cutoff distances, angles, and so on. We measured the deviation from convexity of each pocket shape with the pocket convexity index (PCI). We defined a new pocket descriptor called distributional sphericity coefficient (DISC), which indicates to which extent the protein atoms of a given pocket lie on the surface of a sphere. The DISC values were computed with the freeware PCI. The pocket descriptors and their high correspondences with ligand descriptors are crucial for polypharmacology prediction. We found that the protein heavy atoms lining the urokinases binding pockets are either located on the surface of their convex hull or lie close to this surface. We also found that the radii of the urokinases binding pockets and the radii of their ligands are highly correlated (r = 0.9).
Computation of nonparametric convex hazard estimators via profile methods.
Jankowski, Hanna K; Wellner, Jon A
2009-05-01
This paper proposes a profile likelihood algorithm to compute the nonparametric maximum likelihood estimator of a convex hazard function. The maximisation is performed in two steps: First the support reduction algorithm is used to maximise the likelihood over all hazard functions with a given point of minimum (or antimode). Then it is shown that the profile (or partially maximised) likelihood is quasi-concave as a function of the antimode, so that a bisection algorithm can be applied to find the maximum of the profile likelihood, and hence also the global maximum. The new algorithm is illustrated using both artificial and real data, including lifetime data for Canadian males and females.
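The outer bisection is generic and can be sketched independently of the support-reduction inner solver (profile below stands in for the partially maximised likelihood at a candidate antimode):

```python
def argmax_quasiconcave(profile, lo, hi):
    """Locate the maximiser of a quasi-concave sequence on {lo, ..., hi}."""
    while lo < hi:
        mid = (lo + hi) // 2
        if profile(mid) < profile(mid + 1):
            lo = mid + 1          # maximum lies strictly to the right
        else:
            hi = mid              # maximum is at mid or to the left
    return lo
```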
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhao Yunbin, E-mail: zhaoyy@maths.bham.ac.u
2010-12-15
While the product of finitely many convex functions has been investigated in the field of global optimization, some fundamental issues such as the convexity condition and the Legendre-Fenchel transform for the product function remain unresolved. Focusing on quadratic forms, this paper is aimed at addressing the question: When is the product of finitely many positive definite quadratic forms convex, and what is the Legendre-Fenchel transform for it? First, we show that the convexity of the product is determined intrinsically by the condition numbers of the so-called 'scaled matrices' associated with the quadratic forms involved. The main result claims that if the condition numbers of these scaled matrices are bounded above by an explicit constant (which depends only on the number of quadratic forms involved), then the product function is convex. Second, we prove that the Legendre-Fenchel transform for the product of positive definite quadratic forms can be expressed, and the computation of the transform amounts to finding the solution to a system of equations (or equally, finding a Brouwer's fixed point of a mapping) with a special structure. Thus, a broader question than the open 'Question 11' in Hiriart-Urruty (SIAM Rev. 49, 225-273, 2007) is addressed in this paper.
Method and Apparatus for Powered Descent Guidance
NASA Technical Reports Server (NTRS)
Acikmese, Behcet (Inventor); Blackmore, James C. L. (Inventor); Scharf, Daniel P. (Inventor)
2013-01-01
A method and apparatus for landing a spacecraft having thrusters with non-convex constraints is described. The method first computes a solution to a minimum-error landing problem under convexified constraints, then applies that solution to a minimum-fuel landing problem under the same convexified constraints. The result is a minimum-error, minimum-fuel solution that is also feasible for the analogous system with non-convex thruster constraints.
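The non-convexity referred to is typically the lower bound on thrust magnitude. In the powered-descent literature this is convexified with a slack variable (schematic form, notation assumed):

```latex
0 < \rho_1 \le \lVert T(t) \rVert \le \rho_2
\quad\longrightarrow\quad
\lVert T(t) \rVert \le \Gamma(t), \qquad \rho_1 \le \Gamma(t) \le \rho_2,
```

which is a second-order cone constraint; under the usual conditions the relaxation is lossless, i.e. the convexified optimum also satisfies the original non-convex bounds.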
Statistical estimation via convex optimization for trending and performance monitoring
NASA Astrophysics Data System (ADS)
Samar, Sikandar
This thesis presents an optimization-based statistical estimation approach to find unknown trends in noisy data. A Bayesian framework is used to explicitly take into account prior information about the trends via trend models and constraints. The main focus is on a convex formulation of the Bayesian estimation problem, which allows efficient computation of (globally) optimal estimates. There are two main parts of this thesis. The first part formulates trend estimation in systems described by known detailed models as a convex optimization problem. Statistically optimal estimates are then obtained by maximizing a concave log-likelihood function subject to convex constraints. We consider the problem of increasing problem dimension as more measurements become available, and introduce a moving horizon framework to enable recursive estimation of the unknown trend by solving a fixed-size convex optimization problem at each horizon. We also present a distributed estimation framework, based on the dual decomposition method, for a system formed by a network of complex sensors with local (convex) estimation. Two specific applications of the convex optimization-based Bayesian estimation approach are described in the second part of the thesis. Batch estimation for parametric diagnostics in a flight control simulation of a space launch vehicle is shown to detect incipient fault trends despite the natural masking properties of feedback in the guidance and control loops. The moving horizon approach is used to estimate time-varying fault parameters in a detailed nonlinear simulation model of an unmanned aerial vehicle. Excellent performance is demonstrated in the presence of winds and turbulence.
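A toy instance of the framework (a hedged sketch with assumed model, weights, and file name; not the thesis code): Gaussian noise makes the negative log-likelihood a convex quadratic, a smoothness prior enters the objective, and a monotone-trend constraint stays linear, so the estimate is globally optimal.

```python
import cvxpy as cp
import numpy as np

y = np.loadtxt("measurements.txt")      # hypothetical noisy measurements
x = cp.Variable(len(y))                 # unknown trend
nll = cp.sum_squares(y - x)             # Gaussian -log-likelihood (scaled)
smooth = cp.sum_squares(cp.diff(x, 2))  # prior: slowly varying trend
prob = cp.Problem(cp.Minimize(nll + 10.0 * smooth),
                  [cp.diff(x, 1) >= 0])  # prior knowledge: non-decreasing
prob.solve()                             # x.value is the optimal estimate
```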
Piecewise convexity of artificial neural networks.
Rister, Blaine; Rubin, Daniel L
2017-10-01
Although artificial neural networks have shown great promise in applications including computer vision and speech recognition, there remains considerable practical and theoretical difficulty in optimizing their parameters. The seemingly unreasonable success of gradient descent methods in minimizing these non-convex functions remains poorly understood. In this work we offer some theoretical guarantees for networks with piecewise affine activation functions, which have in recent years become the norm. We prove three main results. First, that the network is piecewise convex as a function of the input data. Second, that the network, considered as a function of the parameters in a single layer, all others held constant, is again piecewise convex. Third, that the network as a function of all its parameters is piecewise multi-convex, a generalization of biconvexity. From here we characterize the local minima and stationary points of the training objective, showing that they minimize the objective on certain subsets of the parameter space. We then analyze the performance of two optimization algorithms on multi-convex problems: gradient descent, and a method which repeatedly solves a number of convex sub-problems. We prove necessary convergence conditions for the first algorithm and both necessary and sufficient conditions for the second, after introducing regularization to the objective. Finally, we remark on the remaining difficulty of the global optimization problem. Under the squared error objective, we show that by varying the training data, a single rectifier neuron admits local minima arbitrarily far apart, both in objective value and parameter space. Copyright © 2017 Elsevier Ltd. All rights reserved.
Taming the Wild: A Unified Analysis of Hogwild!-Style Algorithms.
De Sa, Christopher; Zhang, Ce; Olukotun, Kunle; Ré, Christopher
2015-12-01
Stochastic gradient descent (SGD) is a ubiquitous algorithm for a variety of machine learning problems. Researchers and industry have developed several techniques to optimize SGD's runtime performance, including asynchronous execution and reduced precision. Our main result is a martingale-based analysis that enables us to capture the rich noise models that may arise from such techniques. Specifically, we use our new analysis in three ways: (1) we derive convergence rates for the convex case (Hogwild!) with relaxed assumptions on the sparsity of the problem; (2) we analyze asynchronous SGD algorithms for non-convex matrix problems including matrix completion; and (3) we design and analyze an asynchronous SGD algorithm, called Buckwild!, that uses lower-precision arithmetic. We show experimentally that our algorithms run efficiently for a variety of problems on modern hardware.
Integrating NOE and RDC using sum-of-squares relaxation for protein structure determination.
Khoo, Y; Singer, A; Cowburn, D
2017-07-01
We revisit the problem of protein structure determination from geometrical restraints from NMR, using convex optimization. It is well-known that the NP-hard distance geometry problem of determining atomic positions from pairwise distance restraints can be relaxed into a convex semidefinite program (SDP). However, often the NOE distance restraints are too imprecise and sparse for accurate structure determination. Residual dipolar coupling (RDC) measurements provide additional geometric information on the angles between atom-pair directions and axes of the principal-axis-frame. The optimization problem involving RDC is highly non-convex and requires a good initialization even within the simulated annealing framework. In this paper, we model the protein backbone as an articulated structure composed of rigid units. Determining the rotation of each rigid unit gives the full protein structure. We propose solving the non-convex optimization problems using the sum-of-squares (SOS) hierarchy, a hierarchy of convex relaxations with increasing complexity and approximation power. Unlike classical global optimization approaches, SOS optimization returns a certificate of optimality if the global optimum is found. Based on the SOS method, we propose two algorithms, RDC-SOS and RDC-NOE-SOS, which have polynomial time complexity in the number of amino-acid residues and run efficiently on a standard desktop. In many instances, the proposed methods exactly recover the solution to the original non-convex optimization problem. To the best of our knowledge this is the first time SOS relaxation is introduced to solve non-convex optimization problems in structural biology. We further introduce a statistical tool, the Cramér-Rao bound (CRB), to provide an information-theoretic bound on the highest resolution one can hope to achieve when determining protein structure from noisy measurements using any unbiased estimator. Our simulation results show that when the RDC measurements are corrupted by Gaussian noise of realistic variance, both SOS-based algorithms attain the CRB. We successfully apply our method in a divide-and-conquer fashion to determine the structure of ubiquitin from experimental NOE and RDC measurements obtained in two alignment media, achieving more accurate and faster reconstructions compared to the current state of the art.
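For context, the first level of the hierarchy already illustrates the certificate: minimizing a polynomial p is relaxed to the largest γ for which p minus γ admits an SOS decomposition, which is a semidefinite program (standard formulation, not specific to this paper):

```latex
p^{\star} = \min_{x} p(x) \;\ge\; \max\,\{\, \gamma : p(x) - \gamma \in \Sigma[x] \,\},
```

where Σ[x] is the cone of sum-of-squares polynomials; raising the allowed degree tightens the bound, and a rank condition on the SDP solution certifies that the global optimum has been found.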
ɛ-subgradient algorithms for bilevel convex optimization
NASA Astrophysics Data System (ADS)
Helou, Elias S.; Simões, Lucas E. A.
2017-05-01
This paper introduces and studies the convergence properties of a new class of explicit ɛ-subgradient methods for the task of minimizing a convex function over a set of minimizers of another convex minimization problem. The general algorithm specializes to some important cases, such as first-order methods applied to a varying objective function, which have computationally cheap iterations. We present numerical experimentation concerning certain applications where the theoretical framework encompasses efficient algorithmic techniques, enabling the use of the resulting methods to solve very large practical problems arising in tomographic image reconstruction. ES Helou was supported by FAPESP grants 2013/07375-0 and 2013/16508-3 and CNPq grant 311476/2014-7. LEA Simões was supported by FAPESP grants 2011/02219-4 and 2013/14615-7.
Safe Onboard Guidance and Control Under Probabilistic Uncertainty
NASA Technical Reports Server (NTRS)
Blackmore, Lars James
2011-01-01
An algorithm was developed that determines the fuel-optimal spacecraft guidance trajectory that takes into account uncertainty, in order to guarantee that mission safety constraints are satisfied with the required probability. The algorithm uses convex optimization to solve for the optimal trajectory. Convex optimization is amenable to onboard solution due to its excellent convergence properties. The algorithm is novel because, unlike prior approaches, it does not require time-consuming evaluation of multivariate probability densities. Instead, it uses a new mathematical bounding approach to ensure that probability constraints are satisfied, and it is shown that the resulting optimization is convex. Empirical results show that the approach is many orders of magnitude less conservative than existing set conversion techniques, for a small penalty in computation time.
CHRR: coordinate hit-and-run with rounding for uniform sampling of constraint-based models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Haraldsdóttir, Hulda S.; Cousins, Ben; Thiele, Ines
In constraint-based metabolic modelling, physical and biochemical constraints define a polyhedral convex set of feasible flux vectors. Uniform sampling of this set provides an unbiased characterization of the metabolic capabilities of a biochemical network. However, reliable uniform sampling of genome-scale biochemical networks is challenging due to their high dimensionality and inherent anisotropy. Here, we present an implementation of a new sampling algorithm, coordinate hit-and-run with rounding (CHRR). This algorithm is based on the provably efficient hit-and-run random walk and crucially uses a preprocessing step to round the anisotropic flux set. CHRR provably converges to a uniform stationary sampling distribution. We apply it to metabolic networks of increasing dimensionality. We show that it converges several times faster than a popular artificial centering hit-and-run algorithm, enabling reliable and tractable sampling of genome-scale biochemical networks.
CHRR: coordinate hit-and-run with rounding for uniform sampling of constraint-based models
Haraldsdóttir, Hulda S.; Cousins, Ben; Thiele, Ines; ...
2017-01-31
In constraint-based metabolic modelling, physical and biochemical constraints define a polyhedral convex set of feasible flux vectors. Uniform sampling of this set provides an unbiased characterization of the metabolic capabilities of a biochemical network. However, reliable uniform sampling of genome-scale biochemical networks is challenging due to their high dimensionality and inherent anisotropy. Here, we present an implementation of a new sampling algorithm, coordinate hit-and-run with rounding (CHRR). This algorithm is based on the provably efficient hit-and-run random walk and crucially uses a preprocessing step to round the anisotropic flux set. CHRR provably converges to a uniform stationary sampling distribution. We apply it to metabolic networks of increasing dimensionality. We show that it converges several times faster than a popular artificial centering hit-and-run algorithm, enabling reliable and tractable sampling of genome-scale biochemical networks.
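Stripped of the rounding step, the coordinate hit-and-run core is short. A sketch assuming a bounded polytope {v : Av <= b} and a strictly interior starting point:

```python
import numpy as np

def chrr_walk(A, b, x0, n_samples, rng=np.random.default_rng(0)):
    x, d, out = x0.astype(float).copy(), A.shape[1], []
    for _ in range(n_samples):
        i = rng.integers(d)                  # random coordinate direction
        a = A[:, i]
        with np.errstate(divide="ignore", invalid="ignore"):
            t = (b - A @ x) / a              # step at which each facet is hit
        t_hi = t[a > 0].min()                # nearest facet, forward
        t_lo = t[a < 0].max()                # nearest facet, backward
        x = x.copy()
        x[i] += rng.uniform(t_lo, t_hi)      # uniform point on the chord
        out.append(x)
    return np.array(out)
```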
Cavity Versus Ligand Shape Descriptors: Application to Urokinase Binding Pockets
Cerisier, Natacha; Regad, Leslie; Triki, Dhoha; Camproux, Anne-Claude
2017-01-01
We analyzed 78 binding pockets of the human urokinase plasminogen activator (uPA) catalytic domain extracted from a data set of crystallized uPA–ligand complexes. These binding pockets were computed with an original geometric method that does NOT involve any arbitrary parameter, such as cutoff distances, angles, and so on. We measured the deviation from convexity of each pocket shape with the pocket convexity index (PCI). We defined a new pocket descriptor called distributional sphericity coefficient (DISC), which indicates to which extent the protein atoms of a given pocket lie on the surface of a sphere. The DISC values were computed with the freeware PCI. The pocket descriptors and their high correspondences with ligand descriptors are crucial for polypharmacology prediction. We found that the protein heavy atoms lining the urokinases binding pockets are either located on the surface of their convex hull or lie close to this surface. We also found that the radii of the urokinases binding pockets and the radii of their ligands are highly correlated (r = 0.9). PMID:28570103
NASA Astrophysics Data System (ADS)
Shah, S. M.; Crawshaw, J. P.; Gray, F.; Yang, J.; Boek, E. S.
2017-06-01
In the last decade, the study of fluid flow in porous media has developed considerably due to the combination of X-ray Micro Computed Tomography (micro-CT) and advances in computational methods for solving complex fluid flow equations directly or indirectly on reconstructed three-dimensional pore space images. In this study, we calculate porosity and single phase permeability using micro-CT imaging and Lattice Boltzmann (LB) simulations for 8 different porous media: beadpacks (with bead sizes 50 μm and 350 μm), sandpacks (LV60 and HST95), sandstones (Berea, Clashach and Doddington) and a carbonate (Ketton). Combining the observed porosity and calculated single phase permeability, we shed new light on the existence and size of the Representative Element of Volume (REV) capturing the different scales of heterogeneity from the pore-scale imaging. Our study applies the concept of the 'Convex Hull' to calculate the REV by considering the two main macroscopic petrophysical parameters, porosity and single phase permeability, simultaneously. The shape of the hull can be used to identify strong correlation between the parameters or greatly differing convergence rates. To further enhance computational efficiency we note that the area of the convex hull (for well-chosen parameters such as the log of the permeability and the porosity) decays exponentially with sub-sample size so that only a few small simulations are needed to determine the system size needed to calculate the parameters to high accuracy (small convex hull area). Finally we propose using a characteristic length such as the pore size to choose an efficient absolute voxel size for the numerical rock.
Radio Synthesis Imaging - A High Performance Computing and Communications Project
NASA Astrophysics Data System (ADS)
Crutcher, Richard M.
The National Science Foundation has funded a five-year High Performance Computing and Communications project at the National Center for Supercomputing Applications (NCSA) for the direct implementation of several of the computing recommendations of the Astronomy and Astrophysics Survey Committee (the "Bahcall report"). This paper is a summary of the project goals and a progress report. The project will implement a prototype of the next generation of astronomical telescope systems - remotely located telescopes connected by high-speed networks to very high performance, scalable architecture computers and on-line data archives, which are accessed by astronomers over Gbit/sec networks. Specifically, a data link has been installed between the BIMA millimeter-wave synthesis array at Hat Creek, California and NCSA at Urbana, Illinois for real-time transmission of data to NCSA. Data are automatically archived, and may be browsed and retrieved by astronomers using the NCSA Mosaic software. In addition, an on-line digital library of processed images will be established. BIMA data will be processed on a very high performance distributed computing system, with I/O, user interface, and most of the software system running on the NCSA Convex C3880 supercomputer or Silicon Graphics Onyx workstations connected by HiPPI to the high performance, massively parallel Thinking Machines Corporation CM-5. The very computationally intensive algorithms for calibration and imaging of radio synthesis array observations will be optimized for the CM-5 and new algorithms which utilize the massively parallel architecture will be developed. Code running simultaneously on the distributed computers will communicate using the Data Transport Mechanism developed by NCSA. The project will also use the BLANCA Gbit/s testbed network between Urbana and Madison, Wisconsin to connect an Onyx workstation in the University of Wisconsin Astronomy Department to the NCSA CM-5, for development of long-distance distributed computing. Finally, the project is developing 2D and 3D visualization software as part of the international AIPS++ project. This research and development project is being carried out by a team of experts in radio astronomy, algorithm development for massively parallel architectures, high-speed networking, database management, and Thinking Machines Corporation personnel. The development of this complete software, distributed computing, and data archive and library solution to the radio astronomy computing problem will advance our expertise in high performance computing and communications technology and the application of these techniques to astronomical data processing.
L2CXCV: A Fortran 77 package for least squares convex/concave data smoothing
NASA Astrophysics Data System (ADS)
Demetriou, I. C.
2006-04-01
Fortran 77 software is given for least squares smoothing to data values contaminated by random errors subject to one sign change in the second divided differences of the smoothed values, where the location of the sign change is also an unknown of the optimization problem. A highly useful description of the constraints is that they follow from the assumption of initially increasing and subsequently decreasing rates of change, or vice versa, of the process considered. The underlying algorithm partitions the data into two disjoint sets of adjacent data and calculates the required fit by solving a strictly convex quadratic programming problem for each set. The piecewise linear interpolant to the fit is convex on the first set and concave on the other one. The partition into suitable sets is achieved by a finite iterative algorithm, which is made quite efficient because of the interactions of the quadratic programming problems on consecutive data. The algorithm obtains the solution by employing no more quadratic programming calculations over subranges of data than twice the number of the divided differences constraints. The quadratic programming technique makes use of active sets and takes advantage of a B-spline representation of the smoothed values that allows some efficient updating procedures. The entire code required to implement the method is 2920 Fortran lines. The package has been tested on a variety of data sets and it has performed very efficiently, terminating in an overall number of active set changes over subranges of data that is only proportional to the number of data. The results suggest that the package can be used for very large numbers of data values. Some examples with output are provided to help new users and exhibit certain features of the software. Important applications of the smoothing technique may be found in calculating a sigmoid approximation, which is a common topic in various contexts in applications in disciplines like physics, economics, biology and engineering. Distribution material that includes single and double precision versions of the code, driver programs, technical details of the implementation of the software package and test examples that demonstrate the use of the software is available in an accompanying ASCII file.
Program summary
Title of program: L2CXCV
Catalogue identifier: ADXM_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADXM_v1_0
Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland
Computer: PC Intel Pentium, Sun Sparc Ultra 5, Hewlett-Packard HP UX 11.0
Operating system: WINDOWS 98, 2000, Unix/Solaris 7, Unix/HP UX 11.0
Programming language used: FORTRAN 77
Memory required to execute with typical data: O(n), where n is the number of data
No. of bits in a byte: 8
No. of lines in distributed program, including test data, etc.: 29 349
No. of bytes in distributed program, including test data, etc.: 1 276 663
No. of processors used: 1
Has the code been vectorized or parallelized?: no
Distribution format: default tar.gz
Separate documentation available: Yes
Nature of physical problem: Analysis of processes that show initially increasing and then decreasing rates of change (sigmoid shape), as, for example, in heat curves, reactor stability conditions, evolution curves, photoemission yields, growth models, utility functions, etc. Identifying an unknown convex/concave (sigmoid) function from some measurements of its values that contain random errors. Also, identifying the inflection point of this sigmoid function.
Method of solution: Univariate data smoothing by minimizing the sum of the squares of the residuals (least squares approximation) subject to the condition that the second order divided differences of the smoothed values change sign at most once. Ideally, this is the number of sign changes in the second derivative of the underlying function. The remarkable property of the smoothed values is that they consist of one separate section of optimal components that give nonnegative second divided differences (convexity) and one separate section of optimal components that give nonpositive second divided differences (concavity). The solution process finds the joint (that is, the inflection point estimate of the underlying function) of the sections automatically. The underlying method is iterative, each iteration solving a structured strictly convex quadratic programming problem in order to obtain a convex or a concave section over a subrange of data.
Restrictions on the complexity of the problem: Number of data, n, is not limited in the software package, but is limited to 2000 in the main driver. The total work of the method requires 2n-2 structured quadratic programming calculations over subranges of data, which in practice does not exceed the amount of O(n) computer operations.
Typical running times: CPU time on a PC with an Intel 733 MHz processor operating in Windows 98: about 2 s to smooth n=1000 noisy measurements that follow the shape of the sine function over one period.
Summary: L2CXCV is a package of Fortran 77 subroutines for least squares smoothing to n univariate data values contaminated by random errors subject to one sign change in the second divided differences of the smoothed values, where the location of the sign change is unknown. The piecewise linear interpolant to the smoothed values gives a convex/concave fit to the data. The underlying algorithm is based on the property that in this best convex/concave fit, the convex and the concave section are both optimal and separate. The algorithm is iterative, each iteration solving a strictly convex quadratic programming problem for the best convex fit to the first k data, starting from the best convex fit to the first k-1 data. By reversing the order and sign of the data, the algorithm obtains the best concave fit to the last n-k data. Then it chooses that k as the optimal position of the required sign change (which defines the inflection point of the fit), if the convex and the concave components to the first k and the last n-k data, respectively, form a convex/concave vector that gives the least sum of squares of residuals. In effect the algorithm requires at most 2n-2 quadratic programming calculations over subranges of data. The package employs a technique for quadratic programming, which takes advantage of a B-spline representation of the smoothed values and makes use of some efficient O(k) updating procedures, where k is the number of data of a subrange. The package has been tested on a variety of data sets and it has performed very efficiently, terminating in an overall number of active set changes that is about n, thus exhibiting quadratic performance in n. The Fortran codes have been designed to minimize the use of computing resources. Attention has been given to computer rounding error details, which are essential to the robustness of the software package. Numerical examples with output are provided to help the use of the software and exhibit certain features of the method.
Distribution material that includes driver programs, technical details of the installation of the package and test examples that demonstrate the use of the software is available in an ASCII file that accompanies this work.
NASA Technical Reports Server (NTRS)
Chan, Gordon C.; Turner, Horace Q.
1990-01-01
COSMIC/NASTRAN, as it is supported and maintained by COSMIC, runs on four main-frame computers - CDC, VAX, IBM and UNIVAC. COSMIC/NASTRAN on other computers, such as CRAY, AMDAHL, PRIME, CONVEX, etc., is available commercially from a number of third party organizations. All these computers, with their own one-of-a-kind operating systems, make NASTRAN machine dependent. The job control language (JCL), the file management, and the program execution procedure of these computers are vastly different, although 95 percent of NASTRAN source code was written in standard ANSI FORTRAN 77. The advantage of the UNIX operating system is that it has no machine boundary. UNIX is becoming widely used in many workstations, mini's, super-PC's, and even some main-frame computers. NASTRAN for the UNIX operating system is definitely the way to go in the future, and makes NASTRAN available to a host of computers, big and small. Since 1985, many NASTRAN improvements and enhancements were made to conform to the ANSI FORTRAN 77 standards. A major UNIX migration effort was incorporated into COSMIC NASTRAN 1990 release. As a pioneer work for the UNIX environment, a version of COSMIC 89 NASTRAN was officially released in October 1989 for DEC ULTRIX VAXstation 3100 (with VMS extensions). A COSMIC 90 NASTRAN version for DEC ULTRIX DECstation 3100 (with RISC) is planned for April 1990 release. Both workstations are UNIX based computers. The COSMIC 90 NASTRAN will be made available on a TK50 tape for the DEC ULTRIX workstations. Previously in 1988, an 88 NASTRAN version was tested successfully on a SiliconGraphics workstation.
Instability and sound emission from a flow over a curved surface
NASA Technical Reports Server (NTRS)
Maestrello, L.; Parikh, P.; Bayliss, A.
1988-01-01
The growth and decay of a wavepacket convecting in a boundary layer over a concave-convex surface is studied numerically using direct computations of the Navier-Stokes equations. The resulting sound radiation is computed using the linearized Euler equations with the pressure from the Navier-Stokes solution as a time-dependent boundary condition. It is shown that on the concave portion the amplitude of the wavepacket increases and its bandwidth broadens while on the convex portion some of the components in the packet are stabilized. The pressure field decays exponentially away from the surface and then algebraically exhibits a decay characteristic of acoustic waves in two dimensions. The far-field acoustic pressure exhibits a peak at a frequency corresponding to the inflow instability frequency.
An Exact, Compressible One-Dimensional Riemann Solver for General, Convex Equations of State
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kamm, James Russell
2015-03-05
This note describes an algorithm with which to compute numerical solutions to the one-dimensional, Cartesian Riemann problem for compressible flow with general, convex equations of state. While high-level descriptions of this approach are to be found in the literature, this note contains most of the necessary details required to write software for this problem. This explanation corresponds to the approach used in the source code that evaluates solutions for the 1D, Cartesian Riemann problem with a JWL equation of state in the ExactPack package [16, 29]. Numerical examples are given with the proposed computational approach for a polytropic equation of state and for the JWL equation of state.
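For the polytropic special case the key computation fits in a few lines (Toro's classical formulation, not the ExactPack JWL machinery): the star-region pressure is the root of the sum of the two wave relations plus the velocity jump.

```python
import numpy as np
from scipy.optimize import brentq

def f_K(p, rho, pk, g):
    """Velocity change across the left/right wave (shock or rarefaction)."""
    if p > pk:                                    # shock branch
        A, B = 2.0 / ((g + 1) * rho), (g - 1) / (g + 1) * pk
        return (p - pk) * np.sqrt(A / (p + B))
    a = np.sqrt(g * pk / rho)                     # rarefaction branch
    return 2 * a / (g - 1) * ((p / pk) ** ((g - 1) / (2 * g)) - 1)

def star_pressure(rhoL, uL, pL, rhoR, uR, pR, g=1.4):
    func = lambda p: f_K(p, rhoL, pL, g) + f_K(p, rhoR, pR, g) + uR - uL
    return brentq(func, 1e-12, 1e3 * max(pL, pR))

# Sod shock tube: the star-region pressure is about 0.3031.
print(star_pressure(1.0, 0.0, 1.0, 0.125, 0.0, 0.1))
```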
Footstep Planning on Uneven Terrain with Mixed-Integer Convex Optimization
2014-08-01
Massachusetts Institute of Technology, Computer Science and Artificial Intelligence Laboratory, Cambridge, MA 02139. Supported by the MIT Energy Initiative, MIT CSAIL, and the DARPA Robotics Challenge. Robin Deits is with the Computer Science and Artificial Intelligence Laboratory.
Physical-geometric optics method for large size faceted particles.
Sun, Bingqiang; Yang, Ping; Kattawar, George W; Zhang, Xiaodong
2017-10-02
A new physical-geometric optics method is developed to compute the single-scattering properties of faceted particles. It incorporates a general absorption vector to accurately account for inhomogeneous wave effects, and subsequently yields analytical formulas that are effective and computationally efficient for absorptive scattering particles. A bundle of rays incident on a certain facet can be traced as a single beam. For a beam incident on multiple facets, a systematic beam-splitting technique based on computer graphics is used to split the original beam into several sub-beams so that each sub-beam is incident only on an individual facet. The new beam-splitting technique significantly reduces the computational burden. The present physical-geometric optics method can be generalized to arbitrary faceted particles with either convex or concave shapes and with a homogeneous or an inhomogeneous (e.g., a particle with a core) composition. The single-scattering properties of irregular convex homogeneous and inhomogeneous hexahedra are simulated and compared to their counterparts from two other methods including a numerically rigorous method.
Force user's manual: A portable, parallel FORTRAN
NASA Technical Reports Server (NTRS)
Jordan, Harry F.; Benten, Muhammad S.; Arenstorf, Norbert S.; Ramanan, Aruna V.
1990-01-01
The use of Force, a parallel, portable FORTRAN, on shared-memory parallel computers is described. Force simplifies writing code for parallel computers and, once the parallel code is written, it is easily ported to computers on which Force is installed. Although Force is nearly the same for all computers, specific details are included for the Cray-2, Cray-YMP, Convex 220, Flex/32, Encore, Sequent, and Alliant computers on which it is installed.
Numerical simulations of inductive-heated float-zone growth
NASA Technical Reports Server (NTRS)
Chan, Y. T.; Choi, S. K.
1992-01-01
The present work provides an improved fluid flow and heat-transfer modeling of float-zone growth by introducing a RF heating model so that an ad hoc heating temperature profile is not necessary. Numerical simulations were carried out to study the high-temperature float-zone growth of titanium carbide single crystal. The numerical results showed that the thermocapillary convection occurring inside the molten zone tends to increase the convexity of the melt-crystal interface and decrease the maximum temperature of the molten zone, while the natural convection tends to reduce the stability of the molten zone by increasing its height. It was found that the increase of induced heating due to the increase of applied RF voltage is reduced by the decrease of zone diameter. Surface tension plays an important role in controlling the amount of induced heating. Finally, a comparison of the computed shape of the free surface with a digital image obtained during a growth run showed adequate agreement.
On the complexity of a combined homotopy interior method for convex programming
NASA Astrophysics Data System (ADS)
Yu, Bo; Xu, Qing; Feng, Guochen
2007-03-01
In [G.C. Feng, Z.H. Lin, B. Yu, Existence of an interior pathway to a Karush-Kuhn-Tucker point of a nonconvex programming problem, Nonlinear Anal. 32 (1998) 761-768; G.C. Feng, B. Yu, Combined homotopy interior point method for nonlinear programming problems, in: H. Fujita, M. Yamaguti (Eds.), Advances in Numerical Mathematics, Proceedings of the Second Japan-China Seminar on Numerical Mathematics, Lecture Notes in Numerical and Applied Analysis, vol. 14, Kinokuniya, Tokyo, 1995, pp. 9-16; Z.H. Lin, B. Yu, G.C. Feng, A combined homotopy interior point method for convex programming problem, Appl. Math. Comput. 84 (1997) 193-211.], a combined homotopy was constructed for solving non-convex programming and convex programming with weaker conditions, without assuming the logarithmic barrier function to be strictly convex and the solution set to be bounded. It was proven that a smooth interior path from an interior point of the feasible set to a K-K-T point of the problem exists. This shows that combined homotopy interior point methods can solve problems that commonly used interior point methods cannot solve. However, so far, there is no result on its complexity, even for linear programming. The main difficulty is that the objective function is not monotonically decreasing on the combined homotopy path. In this paper, by taking a piecewise technique, under commonly used conditions, polynomiality of a combined homotopy interior point method is given for convex nonlinear programming.
A distributed approach to the OPF problem
NASA Astrophysics Data System (ADS)
Erseghe, Tomaso
2015-12-01
This paper presents a distributed approach to optimal power flow (OPF) in an electrical network, suitable for application in a future smart grid scenario where access to resource and control is decentralized. The non-convex OPF problem is solved by an augmented Lagrangian method, similar to the widely known ADMM algorithm, with the key distinction that penalty parameters are constantly increased. A (weak) assumption on local solver reliability is required to always ensure convergence. A certificate of convergence to a local optimum is available in the case of bounded penalty parameters. For moderate sized networks (up to 300 nodes, and even in the presence of a severe partition of the network), the approach guarantees a performance very close to the optimum, with an appreciably fast convergence speed. The generality of the approach makes it applicable to any (convex or non-convex) distributed optimization problem in networked form. In the comparison with the literature, mostly focused on convex SDP approximations, the chosen approach guarantees adherence to the reference problem, and it also requires a smaller local computational complexity effort.
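Schematically (notation assumed, not taken from the paper), the method iterates the classical augmented Lagrangian updates while growing the penalty:

```latex
x^{k+1} \in \operatorname*{arg\,min}_{x}\; f(x) + \lambda_k^{\top} c(x) + \tfrac{\rho_k}{2}\lVert c(x)\rVert^{2},
\qquad
\lambda_{k+1} = \lambda_k + \rho_k\, c(x^{k+1}),
\qquad
\rho_{k+1} = \beta\,\rho_k,\; \beta > 1,
```

where c(x) = 0 collects the coupling constraints between neighbouring parts of the network; the x-update decomposes across nodes, and the constantly increasing ρ_k is what distinguishes the scheme from plain ADMM.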
The Octree Encoding Method for Efficient Solid Modeling.
1982-08-01
vertex point of a test obel is interior or exterior to the object. If the object is a convex polyhedron, the surface is described by ... values can be generated by adding an offset from a pre-computed table. For a convex polyhedron, if a point is in the positive half-space of all face planes, then it is interior to the polyhedron. If it is in the negative half-space of any face plane, then it is exterior to the object. Otherwise, the point lies on the surface of the polyhedron.
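The half-space test quoted above is easily made concrete (a sketch; N holds outward face normals and d the plane offsets, so the polyhedron is {x : Nx <= d}):

```python
import numpy as np

def classify(point, N, d):
    s = d - N @ point                  # one signed value per face plane
    if np.all(s > 0):
        return "interior"              # positive half-space of every face
    if np.any(s < 0):
        return "exterior"              # negative half-space of some face
    return "boundary"                  # on at least one face plane
```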
A Walking Method for Non-Decomposition Intersection and Union of Arbitrary Polygons and Polyhedrons
DOE Office of Scientific and Technical Information (OSTI.GOV)
Graham, M.; Yao, J.
We present a method for computing the intersection and union of non-convex polyhedrons without decomposition in O(n log n) time, where n is the total number of faces of both polyhedrons. We include an accompanying Python package which addresses many of the practical issues associated with implementation and serves as a proof of concept. The key to the method is that by considering the edges of the original objects and the intersections between faces as walking routes, we can efficiently find the boundary of the intersection of arbitrary objects using directional walks, thus handling the concave case in a natural manner. The method also easily extends to plane slicing and non-convex polyhedron unions, and both the polyhedron and its constituent faces may be non-convex.
Asteroid models from the Lowell photometric database
NASA Astrophysics Data System (ADS)
Ďurech, J.; Hanuš, J.; Oszkiewicz, D.; Vančo, R.
2016-03-01
Context. Information about shapes and spin states of individual asteroids is important for the study of the whole asteroid population. For asteroids from the main belt, most of the shape models available now have been reconstructed from disk-integrated photometry by the lightcurve inversion method. Aims: We want to significantly enlarge the current sample (~350) of available asteroid models. Methods: We use the lightcurve inversion method to derive new shape models and spin states of asteroids from the sparse-in-time photometry compiled in the Lowell Photometric Database. To speed up the time-consuming process of scanning the period parameter space through the use of convex shape models, we use the distributed computing project Asteroids@home, running on the Berkeley Open Infrastructure for Network Computing (BOINC) platform. This way, the period-search interval is divided into hundreds of smaller intervals. These intervals are scanned separately by different volunteers and then joined together. We also use an alternative, faster approach when searching for the best-fit period, using a triaxial ellipsoid model. In this way, we can independently confirm periods found with convex models and also find rotation periods for some of those asteroids for which the convex-model approach gives too many solutions. Results: From the analysis of Lowell photometric data of the first 100 000 numbered asteroids, we derived 328 new models. This almost doubles the number of available models. We tested the reliability of our results by comparing models that were derived from purely Lowell data with those based on dense lightcurves, and we found that the rate of false-positive solutions is very low. We also present updated plots of the distribution of spin obliquities and pole ecliptic longitudes that confirm previous findings about a non-uniform distribution of spin axes. However, the models reconstructed from noisy sparse data are heavily biased towards more elongated bodies with high lightcurve amplitudes. Conclusions: The Lowell Photometric Database is a rich and reliable source of information about the spin states of asteroids. We expect hundreds of other asteroid models for asteroids with numbers larger than 100 000 to be derivable from this data set. More models can be reconstructed when Lowell data are merged with other photometry. Tables 1 and 2 are only available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (ftp://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/587/A48
NASA Technical Reports Server (NTRS)
Tuey, R. C.
1972-01-01
Computer solutions of linear programming problems are outlined. Information covers vector spaces, convex sets, and matrix algebra elements for solving simultaneous linear equations. Dual problems, reduced cost analysis, ranges, and error analysis are illustrated.
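As a concrete illustration of the elements listed above (simultaneous linear constraints, dual values), here is a small linear program solved with SciPy; the problem data are illustrative, not from the report:

```python
from scipy.optimize import linprog

# Maximize 3x + 5y subject to x <= 4, 2y <= 12, 3x + 2y <= 18, x, y >= 0.
# linprog minimizes, so the objective is negated.
c = [-3.0, -5.0]
A_ub = [[1.0, 0.0],
        [0.0, 2.0],
        [3.0, 2.0]]
b_ub = [4.0, 12.0, 18.0]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 2, method="highs")
print(res.x)                  # optimal point, here (2, 6)
print(-res.fun)               # optimal value, here 36
print(res.ineqlin.marginals)  # dual values (sensitivities) of the constraints
```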
Li, Qian-Yi; Zhong, Gui-Bin; Liu, Zu-de; Lao, Li-Feng
2017-08-01
To investigate the effect of asymmetric tension on idiopathic scoliosis (IS) and to understand its pathogenic mechanism. The rodent model of scoliosis was established using Sprague-Dawley rats with left rib-tethering from T6 to T12, tail and shoulder amputation, and high-cage feeding. Vertebral epiphyseal cartilage plates were harvested from the convex and concave sides. To analyze differences between the convex and concave sides, finite element analysis was carried out to determine the mechanical stress. Protein expression on the epiphyseal cartilage was evaluated by western blot. Micro-CT was performed to evaluate the bone quality of the vertebrae on both sides. Scoliosis curves were present in X-ray radiographs of the rats. Finite element analysis was carried out on the axial and transverse tension of the spine. Stresses on the convex side were -170.14, -373.18, and -3832.32 MPa (X, Y, and Z axes, respectively), while the concave side showed stresses of 361.99, 605.55, and 3661.95 MPa. Collagen type II, collagen type X, Sox9, RunX2, VEGF, and aggrecan were expressed significantly more on the convex side (P < 0.05). There was asymmetric protein expression on the epiphyseal cartilage plate at the molecular level. Compared with the convex side, the concave side had significantly lower values of BV/TV and Tb.N, but a higher value of Tb.Sp (P < 0.05). There was asymmetry of bone quality in the micro-architecture. In this study, asymmetric tension contributed to asymmetry in protein expression and bone quality on vertebral epiphyseal plates, ultimately resulting in asymmetry of anatomy. In addition, asymmetry of anatomy aggravated asymmetric tension. This is the first study to show that there is an asymmetric vicious circle in IS. © 2017 Chinese Orthopaedic Association and John Wiley & Sons Australia, Ltd.
NASA Astrophysics Data System (ADS)
Hernandez, Monica
2017-12-01
This paper proposes a method for primal-dual convex optimization in variational large deformation diffeomorphic metric mapping problems formulated with robust regularizers and robust image similarity metrics. The method is based on the Chambolle-Pock primal-dual algorithm for solving general convex optimization problems. Diagonal preconditioning is used to ensure the convergence of the algorithm to the global minimum. We consider three robust regularizers likely to provide acceptable results in diffeomorphic registration: Huber, V-Huber, and total generalized variation. The Huber norm is used in the image similarity term. The primal-dual equations are derived for the stationary and the non-stationary parameterizations of diffeomorphisms. The resulting algorithms have been implemented for execution on the GPU using CUDA. For the most memory-consuming methods, we have developed a multi-GPU implementation. The GPU implementations allowed us to perform an exhaustive evaluation study on the NIREP and LPBA40 databases. The experiments showed that, for all the considered regularizers, the proposed method converges to diffeomorphic solutions while better preserving discontinuities at the boundaries of the objects compared to baseline diffeomorphic registration methods. In most cases, the evaluation showed a competitive performance for the robust regularizers, close to the performance of the baseline diffeomorphic registration methods.
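To make the primal-dual machinery concrete, here is a minimal, generic Chambolle-Pock iteration for TV-regularized denoising. This is not the paper's registration energy: the ROF data term and step sizes are illustrative stand-ins for the robust regularizers and preconditioning discussed above.

```python
import numpy as np

def grad(u):
    # Forward differences with Neumann boundary conditions.
    gx = np.zeros_like(u); gy = np.zeros_like(u)
    gx[:, :-1] = u[:, 1:] - u[:, :-1]
    gy[:-1, :] = u[1:, :] - u[:-1, :]
    return gx, gy

def div(px, py):
    # Negative adjoint of grad.
    dx = np.zeros_like(px); dy = np.zeros_like(py)
    dx[:, 0] = px[:, 0]; dx[:, 1:-1] = px[:, 1:-1] - px[:, :-2]; dx[:, -1] = -px[:, -2]
    dy[0, :] = py[0, :]; dy[1:-1, :] = py[1:-1, :] - py[:-2, :]; dy[-1, :] = -py[-2, :]
    return dx + dy

def cp_tv_denoise(f, lam=0.1, n_iter=300):
    # min_u lam*TV(u) + 0.5*||u - f||^2 via the Chambolle-Pock iteration.
    tau = sigma = 1.0 / np.sqrt(8.0)   # tau*sigma*||grad||^2 <= 1
    u = f.copy(); u_bar = f.copy()
    px = np.zeros_like(f); py = np.zeros_like(f)
    for _ in range(n_iter):
        # Dual step: ascent, then projection onto {|p| <= lam} pointwise.
        gx, gy = grad(u_bar)
        px += sigma * gx; py += sigma * gy
        scale = np.maximum(1.0, np.sqrt(px**2 + py**2) / lam)
        px /= scale; py /= scale
        # Primal step: proximal map of the quadratic data term.
        u_old = u
        u = (u + tau * div(px, py) + tau * f) / (1.0 + tau)
        # Over-relaxation (theta = 1).
        u_bar = 2.0 * u - u_old
    return u

noisy = np.random.default_rng(0).normal(0.0, 0.1, (64, 64)) + 1.0
print(cp_tv_denoise(noisy).std())   # smoothed output has reduced variance
```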
Trajectory Design Employing Convex Optimization for Landing on Irregularly Shaped Asteroids
NASA Technical Reports Server (NTRS)
Pinson, Robin M.; Lu, Ping
2016-01-01
Mission proposals that land on asteroids are becoming popular. However, in order to have a successful mission the spacecraft must reliably and softly land at the intended landing site. The problem under investigation is how to design a fuel-optimal powered descent trajectory that can be quickly computed on-board the spacecraft, without interaction from ground control. An optimal trajectory designed immediately prior to the descent burn has many advantages. These advantages include the ability to use the actual vehicle starting state as the initial condition in the trajectory design and the ease of updating the landing target site if the original landing site is no longer viable. For long trajectories, the trajectory can be updated periodically by a redesign of the optimal trajectory based on current vehicle conditions to improve the guidance performance. One of the key drivers for being completely autonomous is the infrequent and delayed communication between ground control and the vehicle. Challenges that arise from designing an asteroid powered descent trajectory include complicated nonlinear gravity fields, small rotating bodies, and low-thrust vehicles. Two previous studies form the background to the current investigation. The first looked in-depth at applying convex optimization to a powered descent trajectory on Mars, with promising results [1, 2]. It showed that the powered descent equations of motion can be relaxed and formed into a convex optimization problem, and that the optimal solution of the relaxed problem is indeed a feasible solution to the original problem. This analysis used a constant gravity field. The second applied a successive solution process to formulate a second-order cone program that designs rendezvous and proximity operations trajectories [3, 4]. These trajectories included a Newtonian gravity model. The equivalence of the solutions between the relaxed and the original problem is theoretically established. The proposed solution for designing the asteroid powered descent trajectory is to use convex optimization, a gravity model with higher fidelity than Newtonian, and an iterative solution process to design the fuel-optimal trajectory. The solution to the convex optimization problem is the thrust profile, magnitude and direction, that will yield the minimum-fuel trajectory for a soft landing at the target site, subject to various mission and operational constraints. The equations of motion are formulated in a rotating coordinate system and include a high-fidelity gravity model. The vehicle's thrust magnitude can vary between maximum and minimum bounds during the burn. Constraints are also included to ensure that the vehicle does not run out of propellant or descend below the asteroid's surface, and to satisfy any vehicle pointing requirements. The equations of motion are discretized and propagated with the trapezoidal rule in order to produce equality constraints for the optimization problem. These equality constraints allow the optimization algorithm to solve the entire problem, without including a propagator inside the optimization algorithm.
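A compact sketch of the kind of convex subproblem described here, written with CVXPY. For brevity it assumes constant gravity, constant mass, and thrust expressed as acceleration, unlike the paper's rotating-frame, high-fidelity gravity model and iterative process; all numbers are illustrative:

```python
import cvxpy as cp
import numpy as np

N, dt = 60, 10.0                       # discretization (illustrative)
g = np.array([0.0, 0.0, -0.005])       # constant small-body gravity, m/s^2
r0 = np.array([200.0, 100.0, 500.0])   # initial position, m
v0 = np.array([-5.0, 0.0, -10.0])      # initial velocity, m/s
rho1, rho2 = 0.001, 0.1                # thrust-acceleration bounds, m/s^2

r = cp.Variable((3, N + 1))            # position
v = cp.Variable((3, N + 1))            # velocity
T = cp.Variable((3, N + 1))            # thrust acceleration
G = cp.Variable(N + 1)                 # slack: relaxed thrust magnitude

cons = [r[:, 0] == r0, v[:, 0] == v0,
        r[:, N] == 0, v[:, N] == 0]    # soft landing at the target
for k in range(N):
    # Trapezoidal propagation as equality constraints, as in the text.
    cons += [v[:, k+1] == v[:, k] + (dt/2) * (T[:, k] + T[:, k+1]) + dt * g,
             r[:, k+1] == r[:, k] + (dt/2) * (v[:, k] + v[:, k+1])]
for k in range(N + 1):
    # Lossless convexification: ||T|| <= G with linear bounds on G
    # replaces the non-convex constraint rho1 <= ||T|| <= rho2.
    cons += [cp.norm(T[:, k]) <= G[k], G[k] >= rho1, G[k] <= rho2,
             r[2, k] >= 0]             # stay above the surface

prob = cp.Problem(cp.Minimize(cp.sum(G) * dt), cons)   # fuel-use proxy
prob.solve()
print(prob.status, prob.value)
```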
Fast globally optimal segmentation of 3D prostate MRI with axial symmetry prior.
Qiu, Wu; Yuan, Jing; Ukwatta, Eranga; Sun, Yue; Rajchl, Martin; Fenster, Aaron
2013-01-01
We propose a novel global optimization approach to segmenting a given 3D prostate T2w magnetic resonance (MR) image, which enforces the inherent axial symmetry of the prostate shape and simultaneously performs a sequence of 2D axial slice-wise segmentations with a global 3D coherence prior. We show that the proposed challenging combinatorial optimization problem can be solved globally and exactly by means of convex relaxation. In this regard, we introduce a novel coupled continuous max-flow model, which is dual to the studied convex relaxed optimization formulation and leads to an efficient augmented-multiplier algorithm based on modern convex optimization theory. Moreover, the new continuous max-flow based algorithm was implemented on GPUs to achieve a substantial improvement in computation. Experimental results using public and in-house datasets demonstrate great advantages of the proposed method in terms of both accuracy and efficiency.
A trait-based test for habitat filtering: Convex hull volume
Cornwell, W.K.; Schwilk, D.W.; Ackerly, D.D.
2006-01-01
Community assembly theory suggests that two processes affect the distribution of trait values within communities: competition and habitat filtering. Within a local community, competition leads to ecological differentiation of coexisting species, while habitat filtering reduces the spread of trait values, reflecting shared ecological tolerances. Many statistical tests for the effects of competition exist in the literature, but measures of habitat filtering are less well-developed. Here, we present convex hull volume, a construct from computational geometry, which provides an n-dimensional measure of the volume of trait space occupied by species in a community. Combined with ecological null models, this measure offers a useful test for habitat filtering. We use convex hull volume and a null model to analyze California woody-plant trait and community data. Our results show that observed plant communities occupy less trait space than expected from random assembly, a result consistent with habitat filtering. © 2006 by the Ecological Society of America.
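The construct is simple to reproduce. A minimal sketch of the convex hull volume test with a random-assembly null model, using synthetic trait data (all names and numbers are illustrative):

```python
import numpy as np
from scipy.spatial import ConvexHull

rng = np.random.default_rng(42)
pool = rng.normal(size=(200, 3))        # regional species pool, 3 traits
community = pool[:15] * 0.4             # observed community: filtered traits

obs_vol = ConvexHull(community).volume  # n-dimensional trait-space volume

# Null model: communities of equal richness assembled at random from the pool.
null_vols = np.array([
    ConvexHull(pool[rng.choice(len(pool), 15, replace=False)]).volume
    for _ in range(999)
])

# Habitat filtering is suggested when the observed volume is unusually small.
p_value = (np.sum(null_vols <= obs_vol) + 1) / (len(null_vols) + 1)
print(obs_vol, null_vols.mean(), p_value)
```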
Bergeest, Jan-Philip; Rohr, Karl
2012-10-01
In high-throughput applications, accurate and efficient segmentation of cells in fluorescence microscopy images is of central importance for the quantification of protein expression and the understanding of cell function. We propose an approach for segmenting cell nuclei which is based on active contours using level sets and convex energy functionals. Compared to previous work, our approach determines the global solution. Thus, the approach does not suffer from local minima and the segmentation result does not depend on the initialization. We consider three different well-known energy functionals for active contour-based segmentation and introduce convex formulations of these functionals. We also suggest a numeric approach for efficiently computing the solution. The performance of our approach has been evaluated using fluorescence microscopy images from different experiments comprising different cell types. We have also performed a quantitative comparison with previous segmentation approaches. Copyright © 2012 Elsevier B.V. All rights reserved.
Computation of convex bounds for present value functions with random payments
NASA Astrophysics Data System (ADS)
Ahcan, Ales; Darkiewicz, Grzegorz; Goovaerts, Marc; Hoedemakers, Tom
2006-02-01
In this contribution we study the distribution of the present value function of a series of random payments in a stochastic financial environment. Such distributions occur naturally in a wide range of applications within fields of insurance and finance. We obtain accurate approximations by developing upper and lower bounds in the convex-order sense for present value functions. Technically speaking, our methodology is an extension of the results of Dhaene et al. [Insur. Math. Econom. 31(1) (2002) 3-33, Insur. Math. Econom. 31(2) (2002) 133-161] to the case of scalar products of mutually independent random vectors.
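The flavor of these convex-order bounds can be demonstrated numerically. The comonotonic upper bound of Dhaene et al. replaces a sum of dependent terms by marginal quantile functions evaluated at a single uniform variable; a sketch with lognormal discount factors (parameters are illustrative):

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(7)
mu = np.array([-0.05, -0.10, -0.15, -0.20])   # log-mean of each term
sig = np.array([0.10, 0.15, 0.20, 0.25])      # log-sd of each term
n = 200_000

# Comonotonic upper bound: all terms driven by one uniform U,
# S_c = sum_i F_i^{-1}(U); this dominates S in the convex-order sense.
z = norm.ppf(rng.uniform(size=n))
S_c = np.sum(np.exp(mu[:, None] + sig[:, None] * z), axis=0)

# Reference: the same sum with independent components.
Z = rng.standard_normal((len(mu), n))
S_ind = np.sum(np.exp(mu[:, None] + sig[:, None] * Z), axis=0)

print(S_c.mean(), S_ind.mean())   # equal means (same marginals)
print(S_c.var(), S_ind.var())     # the comonotonic variance is larger
```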
Weighted mining of massive collections of p-values by convex optimization.
Dobriban, Edgar
2018-06-01
Researchers in data-rich disciplines-think of computational genomics and observational cosmology-often wish to mine large bodies of p-values looking for significant effects, while controlling the false discovery rate or family-wise error rate. Increasingly, researchers also wish to prioritize certain hypotheses, for example, those thought to have larger effect sizes, by upweighting, and to impose constraints on the underlying mining, such as monotonicity along a certain sequence. We introduce Princessp, a principled method for performing weighted multiple testing by constrained convex optimization. Our method elegantly allows one to prioritize certain hypotheses through upweighting and to discount others through downweighting, while constraining the underlying weights involved in the mining process. When the p-values derive from monotone likelihood ratio families such as the Gaussian means model, the new method allows exact solution of an important optimal weighting problem previously thought to be non-convex and computationally infeasible. Our method scales to massive data set sizes. We illustrate the applications of Princessp on a series of standard genomics data sets and offer comparisons with several previous 'standard' methods. Princessp offers both ease of operation and the ability to scale to extremely large problem sizes. The method is available as open-source software from github.com/dobriban/pvalue_weighting_matlab (accessed 11 October 2017).
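To fix ideas on weighted multiple testing (this is the classical weighted Bonferroni rule, not the Princessp optimization itself), a hypothesis is rejected when p_i <= alpha * w_i / m for nonnegative weights averaging one; the union bound then controls the family-wise error rate at alpha. A minimal sketch with synthetic data:

```python
import numpy as np

rng = np.random.default_rng(3)
m = 1000
p = rng.uniform(size=m)
p[:20] = rng.uniform(0, 1e-4, size=20)   # a few true signals (synthetic)

# Prior weights: upweight hypotheses believed to carry larger effects.
w = np.ones(m)
w[:100] = 5.0
w *= m / w.sum()                         # normalize so the mean weight is 1

alpha = 0.05
reject = p <= alpha * w / m              # weighted Bonferroni (FWER <= alpha)
print(reject.sum(), reject[:20].sum())
```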
Sagiyama, Koki; Rudraraju, Shiva; Garikipati, Krishna
2016-09-13
Here, we consider solid state phase transformations that are caused by free energy densities with domains of non-convexity in strain-composition space; we refer to the non-convex domains as mechano-chemical spinodals. The non-convexity with respect to composition and strain causes segregation into phases with different crystal structures. We build on an existing model that couples the classical Cahn-Hilliard model with Toupin's theory of gradient elasticity at finite strains. Both systems are represented by fourth-order, nonlinear, partial differential equations. The goal of this work is to develop unconditionally stable, second-order accurate time-integration schemes, motivated by the need to carry out large-scale computations of dynamically evolving microstructures in three dimensions. We also introduce reduced formulations naturally derived from these proposed schemes for faster computations that are still second-order accurate. Although our method is developed and analyzed here for a specific class of mechano-chemical problems, one can readily apply the same method to develop unconditionally stable, second-order accurate schemes for any problems for which free energy density functions are multivariate polynomials of solution components and component gradients. Apart from an analysis and construction of methods, we present a suite of numerical results that demonstrate the schemes in action.
First-order convex feasibility algorithms for x-ray CT
Sidky, Emil Y.; Jørgensen, Jakob S.; Pan, Xiaochuan
2013-01-01
Purpose: Iterative image reconstruction (IIR) algorithms in computed tomography (CT) are based on algorithms for solving a particular optimization problem. Design of the IIR algorithm, therefore, is aided by knowledge of the solution to the optimization problem on which it is based. Oftentimes, however, it is impractical to achieve an accurate solution to the optimization of interest, which complicates the design of IIR algorithms. This issue is particularly acute for CT with a limited angular-range scan, which leads to poorly conditioned system matrices and difficult-to-solve optimization problems. In this paper, we develop IIR algorithms which solve a certain type of optimization called convex feasibility. The convex feasibility approach can provide alternatives to unconstrained optimization approaches and at the same time allow for rapidly convergent algorithms for their solution, thereby facilitating the IIR algorithm design process. Methods: An accelerated version of the Chambolle-Pock (CP) algorithm is adapted to various convex feasibility problems of potential interest to IIR in CT. One of the proposed problems is seen to be equivalent to least-squares minimization, and two other problems provide alternatives to penalized least-squares minimization. Results: The accelerated CP algorithms are demonstrated on a simulation of circular fan-beam CT with a limited scanning arc of 144°. The CP algorithms are seen in the empirical results to converge to the solution of their respective convex feasibility problems. Conclusions: Formulation of convex feasibility problems can provide a useful alternative to unconstrained optimization when designing IIR algorithms for CT. The approach is amenable to recent methods for accelerating first-order algorithms which may be particularly useful for CT with limited angular-range scanning. The present paper demonstrates the methodology, and future work will illustrate its utility in actual CT application. PMID:23464295
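The convex feasibility viewpoint can be illustrated with the simplest member of this family, projections onto convex sets (POCS); the accelerated CP algorithms in the paper are faster relatives of the same idea. A sketch with two sets, an affine data-consistency set and the nonnegativity cone (a toy linear system, not a CT geometry):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(30, 60))        # underdetermined "system matrix" (toy)
x_true = np.abs(rng.normal(size=60)) # a nonnegative ground truth
b = A @ x_true

# Precompute the factor for projection onto the affine set {x : Ax = b}.
AAt_inv = np.linalg.inv(A @ A.T)

def proj_affine(x):
    return x - A.T @ (AAt_inv @ (A @ x - b))

def proj_nonneg(x):
    return np.maximum(x, 0.0)

x = np.zeros(60)
for _ in range(500):
    x = proj_nonneg(proj_affine(x))  # alternate projections onto the two sets

print(np.linalg.norm(A @ x - b), x.min())  # both near zero at convergence
```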
Asteroid shape and spin statistics from convex models
NASA Astrophysics Data System (ADS)
Torppa, J.; Hentunen, V.-P.; Pääkkönen, P.; Kehusmaa, P.; Muinonen, K.
2008-11-01
We introduce techniques for characterizing convex shape models of asteroids with a small number of parameters, and apply these techniques to a set of 87 models from convex inversion. We present three different approaches for determining the overall dimensions of an asteroid. With the first technique, we measured the dimensions of the shapes in the direction of the rotation axis and in the equatorial plane and with the two other techniques, we derived the best-fit ellipsoid. We also computed the inertia matrix of the model shape to test how well it represents the target asteroid, i.e., to find indications of possible non-convex features or albedo variegation, which the convex shape model cannot reproduce. We used shape models for 87 asteroids to perform statistical analyses and to study dependencies between shape and rotation period, size, and taxonomic type. We detected correlations, but more data are required, especially on small and large objects, as well as slow and fast rotators, to reach a more thorough understanding about the dependencies. Results show, e.g., that convex models of asteroids are not that far from ellipsoids in the root-mean-square sense, even though clearly irregular features are present. We also present new spin and shape solutions for Asteroids (31) Euphrosyne, (54) Alexandra, (79) Eurynome, (93) Minerva, (130) Elektra, (376) Geometria, (471) Papagena, and (776) Berbericia. We used a so-called semi-statistical approach to obtain a set of possible spin state solutions. The number of solutions depends on the abundance of the data, which for Eurynome, Elektra, and Geometria was extensive enough for determining an unambiguous spin and shape solution. Data of Euphrosyne, on the other hand, provided a wide distribution of possible spin solutions, whereas the rest of the targets have two or three possible solutions.
Modeling IrisCode and its variants as convex polyhedral cones and its security implications.
Kong, Adams Wai-Kin
2013-03-01
IrisCode, developed by Daugman, in 1993, is the most influential iris recognition algorithm. A thorough understanding of IrisCode is essential, because over 100 million persons have been enrolled by this algorithm and many biometric personal identification and template protection methods have been developed based on IrisCode. This paper indicates that a template produced by IrisCode or its variants is a convex polyhedral cone in a hyperspace. Its central ray, being a rough representation of the original biometric signal, can be computed by a simple algorithm, which can often be implemented in one Matlab command line. The central ray is an expected ray and also an optimal ray of an objective function on a group of distributions. This algorithm is derived from geometric properties of a convex polyhedral cone but does not rely on any prior knowledge (e.g., iris images). The experimental results show that biometric templates, including iris and palmprint templates, produced by different recognition methods can be matched through the central rays in their convex polyhedral cones and that templates protected by a method extended from IrisCode can be broken into. These experimental results indicate that, without a thorough security analysis, convex polyhedral cone templates cannot be assumed secure. Additionally, the simplicity of the algorithm implies that even junior hackers without knowledge of advanced image processing and biometric databases can still break into protected templates and reveal relationships among templates produced by different recognition methods.
Photometric survey, modelling, and scaling of long-period and low-amplitude asteroids
NASA Astrophysics Data System (ADS)
Marciniak, A.; Bartczak, P.; Müller, T.; Sanabria, J. J.; Alí-Lagoa, V.; Antonini, P.; Behrend, R.; Bernasconi, L.; Bronikowska, M.; Butkiewicz-Bąk, M.; Cikota, A.; Crippa, R.; Ditteon, R.; Dudziński, G.; Duffard, R.; Dziadura, K.; Fauvaud, S.; Geier, S.; Hirsch, R.; Horbowicz, J.; Hren, M.; Jerosimic, L.; Kamiński, K.; Kankiewicz, P.; Konstanciak, I.; Korlevic, P.; Kosturkiewicz, E.; Kudak, V.; Manzini, F.; Morales, N.; Murawiecka, M.; Ogłoza, W.; Oszkiewicz, D.; Pilcher, F.; Polakis, T.; Poncy, R.; Santana-Ros, T.; Siwak, M.; Skiff, B.; Sobkowiak, K.; Stoss, R.; Żejmo, M.; Żukowski, K.
2018-02-01
Context. The available set of spin and shape modelled asteroids is strongly biased against slowly rotating targets and those with low lightcurve amplitudes. This is due to the observing selection effects. As a consequence, the current picture of asteroid spin axis distribution, rotation rates, radiometric properties, or aspects related to the object's internal structure might be affected too. Aims: To counteract these selection effects, we are running a photometric campaign of a large sample of main belt asteroids omitted in most previous studies. Using least chi-squared fitting we determined synodic rotation periods and verified previous determinations. When a dataset for a given target was sufficiently large and varied, we performed spin and shape modelling with two different methods to compare their performance. Methods: We used the convex inversion method and the non-convex SAGE algorithm, applied on the same datasets of dense lightcurves. Both methods search for the lowest deviations between observed and modelled lightcurves, though using different approaches. Unlike convex inversion, the SAGE method allows for the existence of valleys and indentations on the shapes based only on lightcurves. Results: We obtain detailed spin and shape models for the first five targets of our sample: (159) Aemilia, (227) Philosophia, (329) Svea, (478) Tergeste, and (487) Venetia. When compared to stellar occultation chords, our models obtained an absolute size scale and major topographic features of the shape models were also confirmed. When applied to thermophysical modelling (TPM), they provided a very good fit to the infrared data and allowed their size, albedo, and thermal inertia to be determined. Conclusions: Convex and non-convex shape models provide comparable fits to lightcurves. However, some non-convex models fit notably better to stellar occultation chords and to infrared data in sophisticated thermophysical modelling (TPM). In some cases TPM showed strong preference for one of the spin and shape solutions. Also, we confirmed that slowly rotating asteroids tend to have higher-than-average values of thermal inertia, which might be caused by properties of the surface layers underlying the skin depth. The photometric data is only available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (http://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/610/A7
Dislocation dynamics in non-convex domains using finite elements with embedded discontinuities
NASA Astrophysics Data System (ADS)
Romero, Ignacio; Segurado, Javier; LLorca, Javier
2008-04-01
The standard strategy developed by Van der Giessen and Needleman (1995 Modelling Simul. Mater. Sci. Eng. 3 689) to simulate dislocation dynamics in two-dimensional finite domains was modified to account for the effect of dislocations leaving the crystal through a free surface in the case of arbitrary non-convex domains. The new approach incorporates the displacement jumps across the slip segments of the dislocations that have exited the crystal within the finite element analysis carried out to compute the image stresses on the dislocations due to the finite boundaries. This is done in a simple computationally efficient way by embedding the discontinuities in the finite element solution, a strategy often used in the numerical simulation of crack propagation in solids. Two academic examples are presented to validate and demonstrate the extended model and its implementation within a finite element program is detailed in the appendix.
Development of Analysis Tools for Certification of Flight Control Laws
2009-03-31
Block clustering based on difference of convex functions (DC) programming and DC algorithms.
Le, Hoai Minh; Le Thi, Hoai An; Dinh, Tao Pham; Huynh, Van Ngai
2013-10-01
We investigate difference of convex functions (DC) programming and the DC algorithm (DCA) to solve the block clustering problem in the continuous framework, which traditionally requires solving a hard combinatorial optimization problem. DC reformulation techniques and exact penalty in DC programming are developed to build an appropriate equivalent DC program of the block clustering problem. They lead to an elegant and explicit DCA scheme for the resulting DC program. Computational experiments show the robustness and efficiency of the proposed algorithm and its superiority over standard algorithms such as two-mode K-means, two-mode fuzzy clustering, and block classification EM.
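The DCA template itself is simple: split the objective as f = g - h with g and h convex, then repeatedly linearize h at the current iterate and minimize the remaining convex function. A toy sketch (not the paper's block clustering DC program; the objective is illustrative):

```python
import numpy as np

# Toy DC objective: f(x) = ||x - a||^2 - lam*||x||_1, written as g - h with
# g(x) = ||x - a||^2 (convex) and h(x) = lam*||x||_1 (convex).
a = np.array([0.3, -1.5, 0.0, 2.0])
lam = 0.8

x = np.zeros_like(a)
for k in range(20):
    y = lam * np.sign(x)            # subgradient of h at the current iterate
    # DCA step: x^{k+1} = argmin_x g(x) - <y, x>, i.e. 2(x - a) - y = 0.
    x_new = a + y / 2.0
    if np.allclose(x_new, x):
        break
    x = x_new

f = np.sum((x - a) ** 2) - lam * np.abs(x).sum()
print(x, f)   # a critical point of the DC program
```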
CHRR: coordinate hit-and-run with rounding for uniform sampling of constraint-based models.
Haraldsdóttir, Hulda S; Cousins, Ben; Thiele, Ines; Fleming, Ronan M T; Vempala, Santosh
2017-06-01
In constraint-based metabolic modelling, physical and biochemical constraints define a polyhedral convex set of feasible flux vectors. Uniform sampling of this set provides an unbiased characterization of the metabolic capabilities of a biochemical network. However, reliable uniform sampling of genome-scale biochemical networks is challenging due to their high dimensionality and inherent anisotropy. Here, we present an implementation of a new sampling algorithm, coordinate hit-and-run with rounding (CHRR). This algorithm is based on the provably efficient hit-and-run random walk and crucially uses a preprocessing step to round the anisotropic flux set. CHRR provably converges to a uniform stationary sampling distribution. We apply it to metabolic networks of increasing dimensionality. We show that it converges several times faster than a popular artificial centering hit-and-run algorithm, enabling reliable and tractable sampling of genome-scale biochemical networks. https://github.com/opencobra/cobratoolbox. ronan.mt.fleming@gmail.com or vempala@cc.gatech.edu. Supplementary data are available at Bioinformatics online. © The Author 2017. Published by Oxford University Press.
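The core walk is easy to sketch for a generic bounded polytope {x : Ax <= b} (illustrative code, not the COBRA Toolbox implementation, which also performs the crucial rounding step):

```python
import numpy as np

def chrr_step(x, A, b, rng):
    """One coordinate hit-and-run step inside the polytope {x : A x <= b}."""
    i = rng.integers(x.size)      # random coordinate direction e_i
    slack = b - A @ x             # nonnegative at a feasible point
    col = A[:, i]
    # Feasible segment x + t*e_i: col*t <= slack for every row; for a
    # bounded polytope both chord endpoints are finite.
    pos, neg = col > 0, col < 0
    hi = np.min(slack[pos] / col[pos])
    lo = np.max(slack[neg] / col[neg])
    x = x.copy()
    x[i] += rng.uniform(lo, hi)   # uniform point on the chord
    return x

# Example: uniform samples from the triangle x >= 0, y >= 0, x + y <= 1.
A = np.array([[-1.0, 0.0], [0.0, -1.0], [1.0, 1.0]])
b = np.array([0.0, 0.0, 1.0])
rng = np.random.default_rng(0)

x = np.array([0.25, 0.25])        # strictly feasible starting point
samples = np.empty((10000, 2))
for k in range(10000):
    x = chrr_step(x, A, b, rng)
    samples[k] = x
print(samples.mean(axis=0))       # approaches the centroid (1/3, 1/3)
```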
DOE Office of Scientific and Technical Information (OSTI.GOV)
Budzevich, M; Grove, O; Balagurunathan, Y
Purpose: To assess the reproducibility of quantitative structural features using images from the computed tomography thoracic FDA phantom database under different scanning conditions. Methods: Development of quantitative image features to describe lesion shape and size, beyond conventional RECIST measures, is an evolving area of research in need of benchmarking standards. Gavrielides et al. (2010) scanned an FDA-developed thoracic phantom with nodules of various Hounsfield unit (HU) values, shapes, and sizes close to vascular structures using several scanners and varying scanning conditions/parameters; these images are in the public domain. We tested six structural features, namely Convexity, Perimeter, Major Axis, Minor Axis, Extent Mean, and Eccentricity, to characterize lung nodules. Convexity measures lesion irregularity referenced to a convex surface. Previously, we showed it to have prognostic value in lung adenocarcinoma. The above metrics and RECIST measures were evaluated on three spiculated (8mm/-300HU, 12mm/+30HU and 15mm/+30HU) and two non-spiculated (8mm/+100HU and 10mm/+100HU) nodules (from layout 2) imaged at three different mAs values: 25, 100 and 200 mAs, on a Philips scanner (16-slice Mx8000-IDT; 3mm slice thickness). The nodules were segmented semi-automatically using a commercial software tool; the same HU range was used for all nodules. Results: Analysis showed convexity having the lowest maximum coefficient of variation (MCV): 1.1% and 0.6% for spiculated and non-spiculated nodules, respectively, much lower compared to RECIST Major and Minor axes whose MCV were 10.1% and 13.4% for spiculated, and 1.9% and 2.3% for non-spiculated nodules, respectively, across the various mAs. MCVs were consistently larger for spiculated nodules. In general, the dependence of structural features on mAs (noise) was low. Conclusion: The FDA phantom CT database may be used for benchmarking of structural features for various scanners and scanning conditions; we used only a small fraction of available data. Our feature convexity outperformed other structural features including RECIST measures.
Equivalent Relaxations of Optimal Power Flow
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bose, S; Low, SH; Teeraratkul, T
2015-03-01
Several convex relaxations of the optimal power flow (OPF) problem have recently been developed using both bus injection models and branch flow models. In this paper, we prove relations among three convex relaxations: a semidefinite relaxation that computes a full matrix, a chordal relaxation based on a chordal extension of the network graph, and a second-order cone relaxation that computes the smallest partial matrix. We prove a bijection between the feasible sets of the OPF in the bus injection model and the branch flow model, establishing the equivalence of these two models and their second-order cone relaxations. Our results imply that, for radial networks, all these relaxations are equivalent and one should always solve the second-order cone relaxation. For mesh networks, the semidefinite relaxation and the chordal relaxation are equally tight and both are strictly tighter than the second-order cone relaxation. Therefore, for mesh networks, one should either solve the chordal relaxation or the SOCP relaxation, trading off tightness and the required computational effort. Simulations are used to illustrate these results.
NASA Astrophysics Data System (ADS)
Wu, Xiaolin; Rong, Yue
2015-12-01
The quality-of-service (QoS) criteria (measured in terms of the minimum capacity requirement in this paper) are very important to practical indoor power line communication (PLC) applications as they greatly affect the user experience. With a two-way multicarrier relay configuration, in this paper we investigate the joint terminal and relay power optimization for the indoor broadband PLC environment, where the relay node works in the amplify-and-forward (AF) mode. As the QoS-constrained power allocation problem is highly non-convex, the globally optimal solution is computationally intractable to obtain. To overcome this challenge, we propose an alternating optimization (AO) method to decompose this problem into three convex/quasi-convex sub-problems. Simulation results demonstrate the fast convergence of the proposed algorithm under practical PLC channel conditions. Compared with the conventional bidirectional direct transmission (BDT) system, the relay-assisted two-way information exchange (R2WX) scheme can meet the same QoS requirement with less total power consumption.
Reduction of shock induced noise in imperfectly expanded supersonic jets using convex optimization
NASA Astrophysics Data System (ADS)
Adhikari, Sam
2007-11-01
Imperfectly expanded jets generate screech noise. The imbalance between the backpressure and the exit pressure of the imperfectly expanded jets produces shock cells and expansion or compression waves from the nozzle. The instability waves and the shock cells interact to generate the screech sound. The mathematical model consists of cylindrical coordinate based full Navier-Stokes equations and large-eddy-simulation turbulence modeling. Analytical and computational analysis of the three-dimensional helical effects provides a model that relates several parameters with shock cell patterns, screech frequency, and the distribution of shock generation locations. Convex optimization techniques minimize the shock cell patterns and the instability waves. The objective functions are (convex) quadratic and the constraint functions are affine. In the quadratic optimization programs, minimization of the quadratic functions over a set of polyhedrons provides the optimal result. Various industry-standard methods such as regression analysis, distance between polyhedra, bounding variance, Markowitz optimization, and second-order cone programming are used for the quadratic optimization.
Morphological decomposition of 2-D binary shapes into convex polygons: a heuristic algorithm.
Xu, J
2001-01-01
In many morphological shape decomposition algorithms, either a shape can only be decomposed into shape components of extremely simple forms or a time-consuming search process is employed to determine a decomposition. In this paper, we present a morphological shape decomposition algorithm that decomposes a two-dimensional (2-D) binary shape into a collection of convex polygonal components. A single convex polygonal approximation for a given image is first identified. This first component is determined incrementally by selecting a sequence of basic shape primitives. These shape primitives are chosen based on shape information extracted from the given shape at different scale levels. Additional shape components are identified recursively from the difference image between the given image and the first component. Simple operations are used to repair certain concavities caused by the set difference operation. The resulting hierarchical structure provides descriptions for the given shape at different detail levels. The experiments show that the decomposition results produced by the algorithm seem to be in good agreement with the natural structures of the given shapes. The computational cost of the algorithm is significantly lower than that of an earlier search-based convex decomposition algorithm. Compared to nonconvex decomposition algorithms, our algorithm allows accurate approximations for the given shapes at low coding costs.
Direct single-layered fabrication of 3D concavo-convex patterns in nano-stereolithography
NASA Astrophysics Data System (ADS)
Lim, T. W.; Park, S. H.; Yang, D. Y.; Kong, H. J.; Lee, K. S.
2006-09-01
A nano-surfacing process (NSP) is proposed to directly fabricate three-dimensional (3D) concavo-convex-shaped microstructures such as micro-lens arrays using two-photon polymerization (TPP), a promising technique for fabricating arbitrary 3D highly functional micro-devices. In TPP, commonly utilized methods for fabricating complex 3D microstructures to date are based on a layer-by-layer accumulating technique employing two-dimensional sliced data derived from 3D computer-aided design data. As such, this approach requires much time and effort for precise fabrication. In this work, a novel single-layer exposure method is proposed in order to improve the fabricating efficiency for 3D concavo-convex-shaped microstructures. In the NSP, 3D microstructures are divided into 13 sub-regions horizontally with consideration of the heights. Those sub-regions are then expressed as 13 characteristic colors, after which a multi-voxel matrix (MVM) is composed with the characteristic colors. Voxels with various heights and diameters are generated to construct 3D structures using a MVM scanning method. Some 3D concavo-convex-shaped microstructures were fabricated to estimate the usefulness of the NSP, and the results show that it readily enables the fabrication of single-layered 3D microstructures.
Convex Hull Aided Registration Method (CHARM).
Fan, Jingfan; Yang, Jian; Zhao, Yitian; Ai, Danni; Liu, Yonghuai; Wang, Ge; Wang, Yongtian
2017-09-01
Non-rigid registration finds many applications such as photogrammetry, motion tracking, model retrieval, and object recognition. In this paper we propose a novel convex hull aided registration method (CHARM) to match two point sets subject to a non-rigid transformation. First, two convex hulls are extracted from the source and target respectively. Then, all points of the point sets are projected onto the reference plane through each triangular facet of the hulls. From these projections, invariant features are extracted and matched optimally. The matched feature point pairs are mapped back onto the triangular facets of the convex hulls to remove outliers that are outside any relevant triangular facet. The rigid transformation from the source to the target is robustly estimated by the random sample consensus (RANSAC) scheme through minimizing the distance between the matched feature point pairs. Finally, these feature points are utilized as the control points to achieve non-rigid deformation in the form of thin-plate spline of the entire source point set towards the target one. The experimental results based on both synthetic and real data show that the proposed algorithm outperforms several state-of-the-art ones with respect to sampling, rotational angle, and data noise. In addition, the proposed CHARM algorithm also shows higher computational efficiency compared to these methods.
NASA Technical Reports Server (NTRS)
Tarshish, Adina; Salmon, Ellen
1994-01-01
In October 1992, the NASA Center for Computational Sciences made its Convex-based UniTree system generally available to users. The ensuing months saw growth in every area. Within 26 months, data under UniTree control grew from nil to over 12 terabytes, nearly all of it stored on robotically mounted tape. HiPPI/UltraNet was added to enhance connectivity, and later HiPPI/TCP was added as well. Disks and robotic tape silos were added to those already under UniTree's control, and 18-track tapes were upgraded to 36-track. The primary data source for UniTree, the facility's Cray Y-MP/4-128, first doubled its processing power and then was replaced altogether by a C98/6-256 with nearly two-and-a-half times the Y-MP's combined peak gigaflops. The Convex/UniTree software was upgraded from version 1.5 to 1.7.5, and then to 1.7.6. Finally, the server itself, a Convex C3240, was upgraded to a C3830 with a second I/O bay, doubling the C3240's memory and capacity for I/O. This paper describes insights gained and reinforced with the burgeoning demands on the UniTree storage system and the significant increases in performance gained from the many upgrades.
Learning Incoherent Sparse and Low-Rank Patterns from Multiple Tasks
Chen, Jianhui; Liu, Ji; Ye, Jieping
2013-01-01
We consider the problem of learning incoherent sparse and low-rank patterns from multiple tasks. Our approach is based on a linear multi-task learning formulation, in which the sparse and low-rank patterns are induced by a cardinality regularization term and a low-rank constraint, respectively. This formulation is non-convex; we convert it into its convex surrogate, which can be routinely solved via semidefinite programming for small-size problems. We propose to employ the general projected gradient scheme to efficiently solve such a convex surrogate; however, in the optimization formulation, the objective function is non-differentiable and the feasible domain is non-trivial. We present the procedures for computing the projected gradient and ensuring the global convergence of the projected gradient scheme. The computation of projected gradient involves a constrained optimization problem; we show that the optimal solution to such a problem can be obtained via solving an unconstrained optimization subproblem and an Euclidean projection subproblem. We also present two projected gradient algorithms and analyze their rates of convergence in details. In addition, we illustrate the use of the presented projected gradient algorithms for the proposed multi-task learning formulation using the least squares loss. Experimental results on a collection of real-world data sets demonstrate the effectiveness of the proposed multi-task learning formulation and the efficiency of the proposed projected gradient algorithms. PMID:24077658
Powered Descent Guidance with General Thrust-Pointing Constraints
NASA Technical Reports Server (NTRS)
Carson, John M., III; Acikmese, Behcet; Blackmore, Lars
2013-01-01
The Powered Descent Guidance (PDG) algorithm and software for generating Mars pinpoint or precision landing guidance profiles have been enhanced to incorporate thrust-pointing constraints. Pointing constraints would typically be needed for onboard sensor and navigation systems that have specific field-of-view requirements to generate valid ground proximity and terrain-relative state measurements. The original PDG algorithm was designed to enforce both control and state constraints, including maximum and minimum thrust bounds, avoidance of the ground or descent within a glide slope cone, and maximum speed limits. The thrust-bound and thrust-pointing constraints within PDG are non-convex, which in general requires nonlinear optimization methods to generate solutions. The short duration of Mars powered descent requires guaranteed PDG convergence to a solution within a finite time; however, nonlinear optimization methods have no guarantees of convergence to the global optimal or convergence within finite computation time. A lossless convexification developed for the original PDG algorithm relaxed the non-convex thrust bound constraints. This relaxation was theoretically proven to provide valid and optimal solutions for the original, non-convex problem within a convex framework. As with the thrust bound constraint, a relaxation of the thrust-pointing constraint also provides a lossless convexification that ensures the enhanced relaxed PDG algorithm remains convex and retains validity for the original nonconvex problem. The enhanced PDG algorithm provides guidance profiles for pinpoint and precision landing that minimize fuel usage, minimize landing error to the target, and ensure satisfaction of all position and control constraints, including thrust bounds and now thrust-pointing constraints.
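In the notation of this line of work, the relaxation has roughly the following shape (a sketch, with our symbols: $\Gamma$ is the introduced slack, $\theta$ the pointing half-angle, $\hat n$ the nominal thrust direction):

$$
\rho_1 \le \|T(t)\| \le \rho_2,\quad \hat n^{\mathsf T} T(t) \ge \|T(t)\|\cos\theta
\;\;\longrightarrow\;\;
\|T(t)\| \le \Gamma(t),\quad \rho_1 \le \Gamma(t) \le \rho_2,\quad \hat n^{\mathsf T} T(t) \ge \Gamma(t)\cos\theta .
$$

The left-hand constraints are non-convex (a lower bound on a norm); the right-hand set is convex, and the lossless-convexification results guarantee that at the relaxed optimum $\|T\| = \Gamma$, so the original constraints are recovered.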
DOE Office of Scientific and Technical Information (OSTI.GOV)
Baker, Kyri; Dall'Anese, Emiliano; Summers, Tyler
This paper outlines a data-driven, distributionally robust approach to solve chance-constrained AC optimal power flow problems in distribution networks. Uncertain forecasts for loads and power generated by photovoltaic (PV) systems are considered, with the goal of minimizing PV curtailment while meeting power flow and voltage regulation constraints. A data-driven approach is utilized to develop a distributionally robust conservative convex approximation of the chance constraints; in particular, the mean and covariance matrix of the forecast errors are updated online and leveraged to enforce voltage regulation with a predetermined probability via Chebyshev-based bounds. By combining an accurate linear approximation of the AC power flow equations with the distributionally robust chance-constraint reformulation, the resulting optimization problem becomes convex and computationally tractable.
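The Chebyshev-based reformulation referred to here follows the usual one-sided (Cantelli) pattern: for any distribution of a voltage $v$ with mean $\mu$ and variance $\sigma^2$, the chance constraint is enforced by a deterministic tightening (a sketch; the notation is ours):

$$
\Pr[v \le v^{\max}] \ge 1-\epsilon
\quad\Longleftarrow\quad
\mu + \sqrt{\tfrac{1-\epsilon}{\epsilon}}\,\sigma \;\le\; v^{\max},
$$

with $\mu$ and $\sigma$ computed online from the forecast-error mean and covariance, which is what makes the approximation distributionally robust rather than tied to a Gaussian assumption.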
Method and system for diagnostics of apparatus
NASA Technical Reports Server (NTRS)
Gorinevsky, Dimitry (Inventor)
2012-01-01
Proposed is a method, implemented in software, for estimating fault state of an apparatus outfitted with sensors. At each execution period the method processes sensor data from the apparatus to obtain a set of parity parameters, which are further used for estimating fault state. The estimation method formulates a convex optimization problem for each fault hypothesis and employs a convex solver to compute fault parameter estimates and fault likelihoods for each fault hypothesis. The highest likelihoods and corresponding parameter estimates are transmitted to a display device or an automated decision and control system. The obtained accurate estimate of fault state can be used to improve safety, performance, or maintenance processes for the apparatus.
Jensen-Bregman LogDet Divergence for Efficient Similarity Computations on Positive Definite Tensors
2012-05-02
function of Legendre-type on int(dom S) [29]. From (7) the following properties of dφ(x, y) are apparent: strict convexity in x; asymmetry; non...tensor imaging. An important task in all of these applications is to compute the distance between covariance matrices using a (dis)similarity function, for which the natural
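The divergence discussed in these excerpts is simple to compute. A sketch of the Jensen-Bregman LogDet divergence between symmetric positive definite matrices, using its standard definition and slogdet for numerical stability (the matrices below are synthetic):

```python
import numpy as np

def jbld(X, Y):
    """Jensen-Bregman LogDet divergence between SPD matrices X and Y:
    J(X, Y) = logdet((X + Y)/2) - 0.5 * logdet(X @ Y)."""
    _, ld_mid = np.linalg.slogdet((X + Y) / 2.0)
    _, ld_x = np.linalg.slogdet(X)
    _, ld_y = np.linalg.slogdet(Y)
    return ld_mid - 0.5 * (ld_x + ld_y)

rng = np.random.default_rng(0)
A = rng.normal(size=(5, 5)); X = A @ A.T + 0.5 * np.eye(5)
B = rng.normal(size=(5, 5)); Y = B @ B.T + 0.5 * np.eye(5)

print(jbld(X, Y), jbld(Y, X))   # symmetric in its arguments
print(jbld(X, X))               # zero when the arguments coincide
```

Note that the symmetrized Jensen-Bregman divergence computed here is distinct from the underlying Bregman LogDet divergence dφ, which, as the excerpt states, is asymmetric.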
A Fourier dimensionality reduction model for big data interferometric imaging
NASA Astrophysics Data System (ADS)
Vijay Kartik, S.; Carrillo, Rafael E.; Thiran, Jean-Philippe; Wiaux, Yves
2017-06-01
Data dimensionality reduction in radio interferometry can provide savings of computational resources for image reconstruction through reduced memory footprints and lighter computations per iteration, which is important for the scalability of imaging methods to the big data setting of the next-generation telescopes. This article sheds new light on dimensionality reduction from the perspective of compressed sensing theory and studies its interplay with imaging algorithms designed in the context of convex optimization. We propose a post-gridding linear data embedding to the space spanned by the left singular vectors of the measurement operator, providing a dimensionality reduction below image size. This embedding preserves the null space of the measurement operator and hence its sampling properties are also preserved in light of compressed sensing theory. We show that this can be approximated by first computing the dirty image and then applying a weighted subsampled discrete Fourier transform to obtain the final reduced data vector. This Fourier dimensionality reduction model ensures a fast implementation of the full measurement operator, essential for any iterative image reconstruction method. The proposed reduction also preserves the independent and identically distributed Gaussian properties of the original measurement noise. For convex optimization-based imaging algorithms, this is key to justify the use of the standard ℓ2-norm as the data fidelity term. Our simulations confirm that this dimensionality reduction approach can be leveraged by convex optimization algorithms with no loss in imaging quality relative to reconstructing the image from the complete visibility data set. Reconstruction results in simulation settings with no direction dependent effects or calibration errors show promising performance of the proposed dimensionality reduction. Further tests on real data are planned as an extension of the current work. MATLAB code implementing the proposed reduction method is available on GitHub.
PLOT3D/AMES, GENERIC UNIX VERSION USING DISSPLA (WITH TURB3D)
NASA Technical Reports Server (NTRS)
Buning, P.
1994-01-01
PLOT3D is an interactive graphics program designed to help scientists visualize computational fluid dynamics (CFD) grids and solutions. Today, supercomputers and CFD algorithms can provide scientists with simulations of such highly complex phenomena that obtaining an understanding of the simulations has become a major problem. Tools which help the scientist visualize the simulations can be of tremendous aid. PLOT3D/AMES offers more functions and features, and has been adapted for more types of computers than any other CFD graphics program. Version 3.6b+ is supported for five computers and graphic libraries. Using PLOT3D, CFD physicists can view their computational models from any angle, observing the physics of problems and the quality of solutions. As an aid in designing aircraft, for example, PLOT3D's interactive computer graphics can show vortices, temperature, reverse flow, pressure, and dozens of other characteristics of air flow during flight. As critical areas become obvious, they can easily be studied more closely using a finer grid. PLOT3D is part of a computational fluid dynamics software cycle. First, a program such as 3DGRAPE (ARC-12620) helps the scientist generate computational grids to model an object and its surrounding space. Once the grids have been designed and parameters such as the angle of attack, Mach number, and Reynolds number have been specified, a "flow-solver" program such as INS3D (ARC-11794 or COS-10019) solves the system of equations governing fluid flow, usually on a supercomputer. Grids sometimes have as many as two million points, and the "flow-solver" produces a solution file which contains density, x- y- and z-momentum, and stagnation energy for each grid point. With such a solution file and a grid file containing up to 50 grids as input, PLOT3D can calculate and graphically display any one of 74 functions, including shock waves, surface pressure, velocity vectors, and particle traces. PLOT3D's 74 functions are organized into five groups: 1) Grid Functions for grids, grid-checking, etc.; 2) Scalar Functions for contour or carpet plots of density, pressure, temperature, Mach number, vorticity magnitude, helicity, etc.; 3) Vector Functions for vector plots of velocity, vorticity, momentum, and density gradient, etc.; 4) Particle Trace Functions for rake-like plots of particle flow or vortex lines; and 5) Shock locations based on pressure gradient. TURB3D is a modification of PLOT3D which is used for viewing CFD simulations of incompressible turbulent flow. Input flow data consists of pressure, velocity and vorticity. Typical quantities to plot include local fluctuations in flow quantities and turbulent production terms, plotted in physical or wall units. PLOT3D/TURB3D includes both TURB3D and PLOT3D because the operation of TURB3D is identical to PLOT3D, and there is no additional sample data or printed documentation for TURB3D. Graphical capabilities of PLOT3D version 3.6b+ vary among the implementations available through COSMIC. Customers are encouraged to purchase and carefully review the PLOT3D manual before ordering the program for a specific computer and graphics library. There is only one manual for use with all implementations of PLOT3D, and although this manual generally assumes that the Silicon Graphics Iris implementation is being used, informative comments concerning other implementations appear throughout the text. 
With all implementations, the visual representation of the object and flow field created by PLOT3D consists of points, lines, and polygons. Points can be represented with dots or symbols, color can be used to denote data values, and perspective is used to show depth. Differences among implementations impact the program's ability to use graphical features that are based on 3D polygons, the user's ability to manipulate the graphical displays, and the user's ability to obtain alternate forms of output. The UNIX/DISSPLA implementation of PLOT3D supports 2-D polygons as well as 2-D and 3-D lines, but does not support graphics features requiring 3-D polygons (shading and hidden line removal, for example). Views can be manipulated using keyboard commands. This version of PLOT3D is potentially able to produce files for a variety of output devices; however, site-specific capabilities will vary depending on the device drivers supplied with the user's DISSPLA library. The version 3.6b+ UNIX/DISSPLA implementations of PLOT3D (ARC-12788) and PLOT3D/TURB3D (ARC-12778) were developed for use on computers running UNIX SYSTEM 5 with BSD 4.3 extensions. The standard distribution medium for each of these programs is a 9-track, 6250 bpi magnetic tape in TAR format. Customers purchasing one implementation version of PLOT3D or PLOT3D/TURB3D will be given a $200 discount on each additional implementation version ordered at the same time. Version 3.6b+ of PLOT3D and PLOT3D/TURB3D are also supported for the following computers and graphics libraries: (1) generic UNIX Supercomputer and IRIS, suitable for CRAY 2/UNICOS, CONVEX, Alliant with remote IRIS 2xxx/3xxx or IRIS 4D (ARC-12779, ARC-12784); (2) Silicon Graphics IRIS 2xxx/3xxx or IRIS 4D (ARC-12783, ARC-12782); (3) VAX computers running VMS Version 5.0 and DISSPLA Version 11.0 (ARC-12777, ARC-12781); and (4) Apollo computers running UNIX and GMR3D Version 2.0 (ARC-12789, ARC-12785, which have no capabilities to put text on plots). Silicon Graphics Iris, IRIS 4D, and IRIS 2xxx/3xxx are trademarks of Silicon Graphics Incorporated. VAX and VMS are trademarks of Digital Equipment Corporation. DISSPLA is a trademark of Computer Associates. CRAY 2 and UNICOS are trademarks of CRAY Research, Incorporated. CONVEX is a trademark of Convex Computer Corporation. Alliant is a trademark of Alliant. Apollo and GMR3D are trademarks of Hewlett-Packard, Incorporated. System 5 is a trademark of Bell Labs, Incorporated. BSD4.3 is a trademark of the University of California at Berkeley. UNIX is a registered trademark of AT&T.
PLOT3D/AMES, GENERIC UNIX VERSION USING DISSPLA (WITHOUT TURB3D)
NASA Technical Reports Server (NTRS)
Buning, P.
1994-01-01
PLOT3D is an interactive graphics program designed to help scientists visualize computational fluid dynamics (CFD) grids and solutions. Today, supercomputers and CFD algorithms can provide scientists with simulations of such highly complex phenomena that obtaining an understanding of the simulations has become a major problem. Tools which help the scientist visualize the simulations can be of tremendous aid. PLOT3D/AMES offers more functions and features, and has been adapted for more types of computers than any other CFD graphics program. Version 3.6b+ is supported for five computers and graphic libraries. Using PLOT3D, CFD physicists can view their computational models from any angle, observing the physics of problems and the quality of solutions. As an aid in designing aircraft, for example, PLOT3D's interactive computer graphics can show vortices, temperature, reverse flow, pressure, and dozens of other characteristics of air flow during flight. As critical areas become obvious, they can easily be studied more closely using a finer grid. PLOT3D is part of a computational fluid dynamics software cycle. First, a program such as 3DGRAPE (ARC-12620) helps the scientist generate computational grids to model an object and its surrounding space. Once the grids have been designed and parameters such as the angle of attack, Mach number, and Reynolds number have been specified, a "flow-solver" program such as INS3D (ARC-11794 or COS-10019) solves the system of equations governing fluid flow, usually on a supercomputer. Grids sometimes have as many as two million points, and the "flow-solver" produces a solution file which contains density, x- y- and z-momentum, and stagnation energy for each grid point. With such a solution file and a grid file containing up to 50 grids as input, PLOT3D can calculate and graphically display any one of 74 functions, including shock waves, surface pressure, velocity vectors, and particle traces. PLOT3D's 74 functions are organized into five groups: 1) Grid Functions for grids, grid-checking, etc.; 2) Scalar Functions for contour or carpet plots of density, pressure, temperature, Mach number, vorticity magnitude, helicity, etc.; 3) Vector Functions for vector plots of velocity, vorticity, momentum, and density gradient, etc.; 4) Particle Trace Functions for rake-like plots of particle flow or vortex lines; and 5) Shock locations based on pressure gradient. TURB3D is a modification of PLOT3D which is used for viewing CFD simulations of incompressible turbulent flow. Input flow data consists of pressure, velocity and vorticity. Typical quantities to plot include local fluctuations in flow quantities and turbulent production terms, plotted in physical or wall units. PLOT3D/TURB3D includes both TURB3D and PLOT3D because the operation of TURB3D is identical to PLOT3D, and there is no additional sample data or printed documentation for TURB3D. Graphical capabilities of PLOT3D version 3.6b+ vary among the implementations available through COSMIC. Customers are encouraged to purchase and carefully review the PLOT3D manual before ordering the program for a specific computer and graphics library. There is only one manual for use with all implementations of PLOT3D, and although this manual generally assumes that the Silicon Graphics Iris implementation is being used, informative comments concerning other implementations appear throughout the text. 
With all implementations, the visual representation of the object and flow field created by PLOT3D consists of points, lines, and polygons. Points can be represented with dots or symbols, color can be used to denote data values, and perspective is used to show depth. Differences among implementations affect the program's ability to use graphical features that are based on 3-D polygons, the user's ability to manipulate the graphical displays, and the user's ability to obtain alternate forms of output. The UNIX/DISSPLA implementation of PLOT3D supports 2-D polygons as well as 2-D and 3-D lines, but does not support graphics features requiring 3-D polygons (shading and hidden line removal, for example). Views can be manipulated using keyboard commands. This version of PLOT3D is potentially able to produce files for a variety of output devices; however, site-specific capabilities will vary depending on the device drivers supplied with the user's DISSPLA library. The version 3.6b+ UNIX/DISSPLA implementations of PLOT3D (ARC-12788) and PLOT3D/TURB3D (ARC-12778) were developed for use on computers running UNIX System 5 with BSD 4.3 extensions. The standard distribution medium for each of these programs is a 9-track, 6250 bpi magnetic tape in TAR format. Customers purchasing one implementation version of PLOT3D or PLOT3D/TURB3D will be given a $200 discount on each additional implementation version ordered at the same time. Version 3.6b+ of PLOT3D and PLOT3D/TURB3D are also supported for the following computers and graphics libraries: (1) generic UNIX supercomputer and IRIS, suitable for CRAY 2/UNICOS, CONVEX, and Alliant with remote IRIS 2xxx/3xxx or IRIS 4D (ARC-12779, ARC-12784); (2) Silicon Graphics IRIS 2xxx/3xxx or IRIS 4D (ARC-12783, ARC-12782); (3) VAX computers running VMS Version 5.0 and DISSPLA Version 11.0 (ARC-12777, ARC-12781); and (4) Apollo computers running UNIX and GMR3D Version 2.0 (ARC-12789, ARC-12785, which have no capability to put text on plots). Silicon Graphics Iris, IRIS 4D, and IRIS 2xxx/3xxx are trademarks of Silicon Graphics Incorporated. VAX and VMS are trademarks of Digital Equipment Corporation. DISSPLA is a trademark of Computer Associates. CRAY 2 and UNICOS are trademarks of CRAY Research, Incorporated. CONVEX is a trademark of Convex Computer Corporation. Alliant is a trademark of Alliant. Apollo and GMR3D are trademarks of Hewlett-Packard, Incorporated. System 5 is a trademark of Bell Labs, Incorporated. BSD 4.3 is a trademark of the University of California at Berkeley. UNIX is a registered trademark of AT&T.
Nonconvex model predictive control for commercial refrigeration
NASA Astrophysics Data System (ADS)
Gybel Hovgaard, Tobias; Boyd, Stephen; Larsen, Lars F. S.; Bagterp Jørgensen, John
2013-08-01
We consider the control of a commercial multi-zone refrigeration system that consists of several cooling units sharing a common compressor and is used to cool multiple areas or rooms. In each time period we choose the cooling capacity of each unit and a common evaporation temperature. The goal is to minimise the total energy cost, using real-time electricity prices, while obeying temperature constraints on the zones. We propose a variation on model predictive control to achieve this goal. When the right variables are used, the dynamics of the system are linear and the constraints are convex. The cost function, however, is nonconvex due to the temperature dependence of thermodynamic efficiency. To handle this nonconvexity we propose a sequential convex optimisation method, which typically converges within about five iterations. We employ a fast convex quadratic programming solver to carry out the iterations, which is more than fast enough to run in real time. We demonstrate our method on a realistic model, with a full year simulation and 15-minute time periods, using historical electricity prices and weather data, as well as random variations in thermal load. These simulations show substantial cost savings, on the order of 30%, compared to a standard thermostat-based control system. Perhaps more importantly, we see that the method exhibits sophisticated response to real-time variations in electricity prices. This demand response is critical to help balance real-time uncertainties in generation capacity associated with large penetration of intermittent renewable energy sources in a future smart grid.
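The sequential convex optimisation step is simple to sketch. Below is a minimal illustration of the convex-concave idea on a toy one-dimensional cost, not the paper's refrigeration model: the concave part of the objective is linearised at the current iterate and the resulting convex surrogate is minimised repeatedly (the cost, bounds, starting point, and iteration count are all illustrative assumptions).

```python
# Minimal sketch of sequential convex optimisation (convex-concave procedure)
# on the toy cost f(x) = x**4 - x**2: keep the convex term x**4, linearise
# the concave term -x**2 at the current iterate xk, and re-solve.
import cvxpy as cp

def sequential_convex_min(x0, iters=10):
    xk = x0
    for _ in range(iters):
        x = cp.Variable()
        # Convex surrogate: -x**2 is replaced by its tangent at xk.
        surrogate = x**4 + (-xk**2 - 2 * xk * (x - xk))
        cp.Problem(cp.Minimize(surrogate), [x >= -2, x <= 2]).solve()
        xk = float(x.value)
    return xk

print(sequential_convex_min(1.5))  # approaches a stationary point, x ~ 0.707
```

Each subproblem is convex, so a fast solver can be used in every iteration, mirroring the paper's use of a fast convex QP solver inside the sequential scheme.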
Graph Matching: Relax at Your Own Risk.
Lyzinski, Vince; Fishkind, Donniell E; Fiori, Marcelo; Vogelstein, Joshua T; Priebe, Carey E; Sapiro, Guillermo
2016-01-01
Graph matching - aligning a pair of graphs to minimize their edge disagreements - has received widespread attention from both theoretical and applied communities over the past several decades, including combinatorics, computer vision, and connectomics. This attention can be partially attributed to its computational difficulty. Although many heuristics have previously been proposed in the literature to approximately solve graph matching, very few have any theoretical support for their performance. A common technique is to relax the discrete problem to a continuous problem, thereby enabling practitioners to bring gradient-descent-type algorithms to bear. We prove that an indefinite relaxation (when solved exactly) almost always discovers the optimal permutation, while a common convex relaxation almost always fails to discover the optimal permutation. These theoretical results suggest that initializing the indefinite algorithm with the convex optimum might yield improved practical performance. Indeed, experimental results illuminate and corroborate these theoretical findings, demonstrating that excellent results are achieved in both benchmark and real data problems by amalgamating the two approaches.
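The indefinite relaxation studied here underlies the FAQ graph-matching heuristic (due to an overlapping set of authors), and an implementation ships with SciPy; a minimal sketch, assuming a SciPy version that provides scipy.optimize.quadratic_assignment:

```python
# Match a random graph to a permuted copy of itself with the FAQ solver,
# which optimises the indefinite relaxation discussed above.
import numpy as np
from scipy.optimize import quadratic_assignment

rng = np.random.default_rng(0)
n = 20
A = rng.integers(0, 2, (n, n))
A = np.triu(A, 1)
A = (A + A.T).astype(float)                      # random undirected graph
perm = rng.permutation(n)
B = A[np.ix_(perm, perm)]                        # isomorphic copy of A

res = quadratic_assignment(A, B, method="faq", options={"maximize": True})
P = np.eye(n)[res.col_ind]                       # permutation matrix
disagreements = int(np.abs(A - P @ B @ P.T).sum()) // 2
print(disagreements)  # 0 means a perfect alignment; FAQ is a heuristic,
                      # so a small nonzero value is possible on hard inputs
```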
NASA Technical Reports Server (NTRS)
Kriegsmann, Gregory A.; Taflove, Allen; Umashankar, Koradar R.
1987-01-01
A new formulation of electromagnetic wave scattering by convex, two-dimensional conducting bodies is reported. This formulation, called the on-surface radiation condition (OSRC) approach, is based upon an expansion of the radiation condition applied directly on the surface of a scatterer. It is shown that application of a suitable radiation condition directly on the surface of a convex conducting scatterer can lead to substantial simplification of the frequency-domain integral equation for the scattered field, which is reduced to just a line integral. For the transverse magnetic case, the integrand is known explicitly. For the transverse electric case, the integrand can be easily constructed by solving an ordinary differential equation around the scatterer surface contour. Examples are provided which show that OSRC yields computed near and far fields which approach the exact results for canonical shapes such as the circular cylinder, square cylinder, and strip. Electrical sizes for the examples are ka = 5 and ka = 10. The new OSRC formulation of scattering may offer a useful alternative to existing integral-equation and uniform high-frequency approaches for convex cylinders larger than ka = 1. Structures with edges or corners can also be analyzed, although more work is needed to incorporate the physics of singular currents at these discontinuities. Convex dielectric structures can also be treated using OSRC.
Geometric approach to segmentation and protein localization in cell culture assays.
Raman, S; Maxwell, C A; Barcellos-Hoff, M H; Parvin, B
2007-01-01
Cell-based fluorescence imaging assays are heterogeneous and require the collection of a large number of images for detailed quantitative analysis. Complexities arise as a result of variation in spatial nonuniformity, shape, overlapping compartments, and scale (size). A new technique and methodology have been developed and tested for delineating subcellular morphology and partitioning overlapping compartments at multiple scales. This system is packaged as an integrated software platform for quantifying images that are obtained through fluorescence microscopy. The proposed methods are model-based, leveraging geometric shape properties of subcellular compartments and corresponding protein localization. From the morphological perspective, a convexity constraint is imposed to delineate and partition nuclear compartments. From the protein localization perspective, radial symmetry is imposed to localize punctate protein events at submicron resolution. The convexity constraint is imposed against boundary information, which is extracted through a combination of zero-crossing and gradient operators. If the convexity constraint fails for a boundary, positive curvature maxima are localized along the contour and the entire blob is partitioned into disjoint convex objects, each representing an individual nuclear compartment, by enforcing geometric constraints. Nuclear compartments provide the context for protein localization, which may be diffuse or punctate. Punctate signals are localized through iterative voting and radial symmetry for improved reliability and robustness. The technique has been tested against 196 images that were generated to study centrosome abnormalities. The computed representations are compared against manual counts for validation.
Convex geometry of quantum resource quantification
NASA Astrophysics Data System (ADS)
Regula, Bartosz
2018-01-01
We introduce a framework unifying the mathematical characterisation of different measures of general quantum resources and allowing for a systematic way to define a variety of faithful quantifiers for any given convex quantum resource theory. The approach allows us to describe many commonly used measures such as matrix norm-based quantifiers, robustness measures, convex roof-based measures, and witness-based quantifiers together in a common formalism based on the convex geometry of the underlying sets of resource-free states. We establish easily verifiable criteria for a measure to possess desirable properties such as faithfulness and strong monotonicity under relevant free operations, and show that many quantifiers obtained in this framework indeed satisfy them for any considered quantum resource. We derive various bounds and relations between the measures, generalising and providing significantly simplified proofs of results found in the resource theories of quantum entanglement and coherence. We also prove that the quantification of resources in this framework simplifies for pure states, allowing us to obtain more easily computable forms of the considered measures, and show that many of them are in fact equal on pure states. Further, we investigate the dual formulation of resource quantifiers, which provide a characterisation of the sets of resource witnesses. We present an explicit application of the results to the resource theories of multi-level coherence, entanglement of Schmidt number k, multipartite entanglement, as well as magic states, providing insight into the quantification of the four resources by establishing novel quantitative relations and introducing new quantifiers, such as a measure of entanglement of Schmidt number k which generalises the convex roof-extended negativity, and a measure of k-coherence which generalises the …
Machine characterization and benchmark performance prediction
NASA Technical Reports Server (NTRS)
Saavedra-Barrera, Rafael H.
1988-01-01
From runs of standard benchmarks or benchmark suites, it is not possible to characterize the machine or to predict the run time of other benchmarks which have not been run. A new approach to benchmarking and machine characterization is reported. The creation and use of a machine analyzer is described, which measures the performance of a given machine on FORTRAN source language constructs. The machine analyzer yields a set of parameters which characterize the machine and spotlight its strong and weak points. Also described is a program analyzer, which analyzes FORTRAN programs and determines the frequency of execution of each of the same set of source language operations. It is then shown that by combining a machine characterization and a program characterization, we are able to predict with good accuracy the run time of a given benchmark on a given machine. Characterizations are provided for the Cray X-MP/48, Cyber 205, IBM 3090/200, Amdahl 5840, Convex C-1, VAX 8600, VAX 11/785, VAX 11/780, SUN 3/50, and IBM RT-PC/125, and for the following benchmark programs or suites: Los Alamos (BMK8A1), Baskett, Linpack, Livermore Loops, Mandelbrot Set, NAS Kernels, Shell Sort, Smith, Whetstone, and Sieve of Eratosthenes.
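The prediction step amounts to an inner product between the program analyzer's operation counts and the machine analyzer's per-operation times; a minimal sketch with made-up operation names and numbers:

```python
# Predicted run time = dot(operation counts, per-operation times).
# All operation names and numbers below are illustrative, not measured.
import numpy as np

ops = ["fp_add", "fp_mul", "fp_div", "mem_load", "branch"]
machine_ns = np.array([6.0, 8.0, 40.0, 12.0, 4.0])              # ns per op
program_counts = np.array([2.1e9, 1.8e9, 0.2e9, 3.5e9, 0.9e9])  # op counts

predicted_seconds = float(program_counts @ machine_ns) * 1e-9
print(f"predicted run time: {predicted_seconds:.1f} s")
```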
Spectral Regularization Algorithms for Learning Large Incomplete Matrices.
Mazumder, Rahul; Hastie, Trevor; Tibshirani, Robert
2010-03-01
We use convex relaxation techniques to provide a sequence of regularized low-rank solutions for large-scale matrix completion problems. Using the nuclear norm as a regularizer, we provide a simple and very efficient convex algorithm for minimizing the reconstruction error subject to a bound on the nuclear norm. Our algorithm Soft-Impute iteratively replaces the missing elements with those obtained from a soft-thresholded SVD. With warm starts this allows us to efficiently compute an entire regularization path of solutions on a grid of values of the regularization parameter. The computationally intensive part of our algorithm is in computing a low-rank SVD of a dense matrix. Exploiting the problem structure, we show that the task can be performed with a complexity linear in the matrix dimensions. Our semidefinite-programming algorithm is readily scalable to large matrices: for example, it can obtain a rank-80 approximation of a 10^6 × 10^6 incomplete matrix with 10^5 observed entries in 2.5 hours, and can fit a rank-40 approximation to the full Netflix training set in 6.6 hours. Our methods show very good performance both in training and test error when compared to other competitive state-of-the-art techniques.
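The Soft-Impute iteration itself is compact; a dense toy sketch (the paper's contribution includes making the SVD step scale far beyond what a dense SVD allows):

```python
# Soft-Impute sketch: fill missing entries with the current estimate,
# then soft-threshold the singular values of the completed matrix.
import numpy as np

def soft_impute(X, mask, lam, iters=200):
    """X: data with zeros at missing entries; mask: True where observed."""
    Z = np.zeros_like(X)
    for _ in range(iters):
        filled = np.where(mask, X, Z)            # plug in current estimate
        U, s, Vt = np.linalg.svd(filled, full_matrices=False)
        Z = (U * np.maximum(s - lam, 0.0)) @ Vt  # soft-threshold spectrum
    return Z

rng = np.random.default_rng(1)
M = rng.standard_normal((50, 3)) @ rng.standard_normal((3, 40))  # rank 3
mask = rng.random(M.shape) < 0.5                 # observe about half
Z = soft_impute(np.where(mask, M, 0.0), mask, lam=1.0)
print(np.linalg.norm((Z - M)[~mask]) / np.linalg.norm(M[~mask]))
```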
Spectral Regularization Algorithms for Learning Large Incomplete Matrices
Mazumder, Rahul; Hastie, Trevor; Tibshirani, Robert
2010-01-01
We use convex relaxation techniques to provide a sequence of regularized low-rank solutions for large-scale matrix completion problems. Using the nuclear norm as a regularizer, we provide a simple and very efficient convex algorithm for minimizing the reconstruction error subject to a bound on the nuclear norm. Our algorithm Soft-Impute iteratively replaces the missing elements with those obtained from a soft-thresholded SVD. With warm starts this allows us to efficiently compute an entire regularization path of solutions on a grid of values of the regularization parameter. The computationally intensive part of our algorithm is in computing a low-rank SVD of a dense matrix. Exploiting the problem structure, we show that the task can be performed with a complexity linear in the matrix dimensions. Our semidefinite-programming algorithm is readily scalable to large matrices: for example, it can obtain a rank-80 approximation of a 10^6 × 10^6 incomplete matrix with 10^5 observed entries in 2.5 hours, and can fit a rank-40 approximation to the full Netflix training set in 6.6 hours. Our methods show very good performance both in training and test error when compared to other competitive state-of-the-art techniques. PMID:21552465
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mehrotra, Sanjay
2016-09-07
The support from this grant resulted in seven published papers and a technical report. Two papers are published in SIAM J. on Optimization [87, 88]; two papers are published in IEEE Transactions on Power Systems [77, 78]; one paper is published in Smart Grid [79]; one paper is published in Computational Optimization and Applications [44]; and one in INFORMS J. on Computing [67]. The works in [44, 67, 87, 88] were funded primarily by this DOE grant. The applied papers in [77, 78, 79] were also supported through a subcontract from the Argonne National Lab. We start by presenting our main research results on the scenario generation problem in Sections 1–2. We present our algorithmic results on interior point methods for convex optimization problems in Section 3. We describe a new ‘central’ cutting surface algorithm developed for solving large scale convex programming problems (as is the case with our proposed research) with a semi-infinite number of constraints in Section 4. In Sections 5–6 we present our work on two application problems of interest to DOE.
How tight are beetle hugs? Attachment in mating leaf beetles
NASA Astrophysics Data System (ADS)
Voigt, Dagmar; Tsipenyuk, Alexey; Varenberg, Michael
2017-09-01
Similar to other leaf beetles, rosemary beetles Chrysolina americana exhibit a distinct sexual dimorphism in tarsal attachment setae. Setal discoid terminals occur only in males, and they have been previously associated with long-term attachment to the female's back (elytra) during copulation and mate guarding. For the first time, we studied living males and females clinging to the female's elytra. Pull-off force measurements with a custom-made tribometer featuring a self-aligning sample holder confirmed stronger attachment to female elytra than to glass in both males and females, corresponding to 45 and 30 times the body weight, respectively. In line with previous studies, males generated significantly higher forces than females on convex elytra and flat glass, by factors of 1.2 and 6.8, respectively. Convex substrates like elytra seem to improve the attachment ability of rosemary beetles, because they can hold on more strongly due to favourable shear angles of the legs, tarsi and adhesive setae. A self-aligning sample holder is found to be suitable for running force measurements on living biological samples.
Real-Time Generation of the Footprints both on Floor and Ground
NASA Astrophysics Data System (ADS)
Hirano, Yousuke; Tanaka, Toshimitsu; Sagawa, Yuji
This paper presents a real-time method for generating varied footprints that reflect the state of walking, and extends it to cover both hard floors and soft ground. Results of the previous method were not very realistic, because that method places the same simple footprint along the motion path. Our method runs filters on the original footprint pattern on the GPU, and then gradates the intensity of the pattern in two directions in order to create partially dark footprints. The parameters of the filter and the gradation are varied with movement speed and direction. The pattern is mapped onto a polygon. If the walker is pigeon-toed or bandy-legged, the polygon is rotated inward or outward, respectively. Finally, it is placed on the floor. Footprints on soft ground are concavities and convexities caused by walking. Thus an original pattern of footprints on the ground is defined as a height map. The height map is modified using the filter and gradation operations developed for floor footprints. The height map is then converted to a bump map to display the concavity and convexity of footprints quickly, as sketched below.
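The height-map-to-bump-map step can be sketched with finite differences, realised here as a normal map, one common form of bump map; the footprint shape and the strength parameter below are illustrative assumptions:

```python
# Convert a footprint height map into a normal (bump) map so the
# indentation can be shaded cheaply at display time.
import numpy as np

def height_to_normal_map(height, strength=1.0):
    """height: 2-D float array; returns an H x W x 3 array of unit normals."""
    dz_dy, dz_dx = np.gradient(height)
    n = np.dstack((-strength * dz_dx, -strength * dz_dy,
                   np.ones_like(height)))
    return n / np.linalg.norm(n, axis=2, keepdims=True)

# Toy footprint: a shallow elliptical depression in the ground.
y, x = np.mgrid[-1:1:64j, -1:1:64j]
depression = -np.exp(-((x / 0.4) ** 2 + (y / 0.8) ** 2))
print(height_to_normal_map(depression).shape)    # (64, 64, 3)
```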
How tight are beetle hugs? Attachment in mating leaf beetles.
Voigt, Dagmar; Tsipenyuk, Alexey; Varenberg, Michael
2017-09-01
Similar to other leaf beetles, rosemary beetles Chrysolina americana exhibit a distinct sexual dimorphism in tarsal attachment setae. Setal discoid terminals occur only in males, and they have been previously associated with long-term attachment to the female's back (elytra) during copulation and mate guarding. For the first time, we studied living males and females clinging to the female's elytra. Pull-off force measurements with a custom-made tribometer featuring a self-aligning sample holder confirmed stronger attachment to female elytra than to glass in both males and females, corresponding to 45 and 30 times the body weight, respectively. In line with previous studies, males generated significantly higher forces than females on convex elytra and flat glass, by factors of 1.2 and 6.8, respectively. Convex substrates like elytra seem to improve the attachment ability of rosemary beetles, because they can hold on more strongly due to favourable shear angles of the legs, tarsi and adhesive setae. A self-aligning sample holder is found to be suitable for running force measurements on living biological samples.
Preconditioned Alternating Projection Algorithms for Maximum a Posteriori ECT Reconstruction
Krol, Andrzej; Li, Si; Shen, Lixin; Xu, Yuesheng
2012-01-01
We propose a preconditioned alternating projection algorithm (PAPA) for solving the maximum a posteriori (MAP) emission computed tomography (ECT) reconstruction problem. Specifically, we formulate the reconstruction problem as a constrained convex optimization problem with total variation (TV) regularization. We then characterize the solution of the constrained convex optimization problem and show that it satisfies a system of fixed-point equations defined in terms of two proximity operators arising from the convex functions that define the TV-norm and the constraint involved in the problem. The characterization (of the solution) via the proximity operators that define two projection operators naturally leads to an alternating projection algorithm for finding the solution. For efficient numerical computation, we introduce to the alternating projection algorithm a preconditioning matrix (the EM-preconditioner) for the dense system matrix involved in the optimization problem. We theoretically prove convergence of the preconditioned alternating projection algorithm. In numerical experiments, the performance of our algorithms, with an appropriately selected preconditioning matrix, is compared with the performance of the conventional MAP expectation-maximization (MAP-EM) algorithm with a TV regularizer (EM-TV) and that of the recently developed nested EM-TV algorithm for ECT reconstruction. Based on the numerical experiments performed in this work, we observe that the alternating projection algorithm with the EM-preconditioner significantly outperforms EM-TV in all aspects, including convergence speed, noise in the reconstructed images, and image quality. It also outperforms nested EM-TV in convergence speed while providing comparable image quality. PMID:23271835
Preconditioned alternating projection algorithms for maximum a posteriori ECT reconstruction
NASA Astrophysics Data System (ADS)
Krol, Andrzej; Li, Si; Shen, Lixin; Xu, Yuesheng
2012-11-01
We propose a preconditioned alternating projection algorithm (PAPA) for solving the maximum a posteriori (MAP) emission computed tomography (ECT) reconstruction problem. Specifically, we formulate the reconstruction problem as a constrained convex optimization problem with total variation (TV) regularization. We then characterize the solution of the constrained convex optimization problem and show that it satisfies a system of fixed-point equations defined in terms of two proximity operators arising from the convex functions that define the TV-norm and the constraint involved in the problem. The characterization (of the solution) via the proximity operators that define two projection operators naturally leads to an alternating projection algorithm for finding the solution. For efficient numerical computation, we introduce to the alternating projection algorithm a preconditioning matrix (the EM-preconditioner) for the dense system matrix involved in the optimization problem. We theoretically prove convergence of PAPA. In numerical experiments, the performance of our algorithms, with an appropriately selected preconditioning matrix, is compared with the performance of the conventional MAP expectation-maximization (MAP-EM) algorithm with a TV regularizer (EM-TV) and that of the recently developed nested EM-TV algorithm for ECT reconstruction. Based on the numerical experiments performed in this work, we observe that the alternating projection algorithm with the EM-preconditioner significantly outperforms EM-TV in all aspects, including convergence speed, noise in the reconstructed images, and image quality. It also outperforms nested EM-TV in convergence speed while providing comparable image quality.
Trajectory Design Employing Convex Optimization for Landing on Irregularly Shaped Asteroids
NASA Technical Reports Server (NTRS)
Pinson, Robin M.; Lu, Ping
2016-01-01
Mission proposals that land spacecraft on asteroids are becoming increasingly popular. However, in order to have a successful mission the spacecraft must reliably and softly land at the intended landing site with pinpoint precision. The problem under investigation is how to design a propellant optimal powered descent trajectory that can be quickly computed onboard the spacecraft, without interaction from the ground control. The propellant optimal control problem in this work is to determine the optimal finite thrust vector to land the spacecraft at a specified location, in the presence of a highly nonlinear gravity field, subject to various mission and operational constraints. The proposed solution uses convex optimization, a gravity model with higher fidelity than Newtonian, and an iterative solution process for a fixed final time problem. In addition, a second optimization method is wrapped around the convex optimization problem to determine the optimal flight time that yields the lowest propellant usage over all flight times. Gravity models designed for irregularly shaped asteroids are investigated. Success of the algorithm is demonstrated by designing powered descent trajectories for the elongated binary asteroid Castalia.
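The outer optimisation over flight time can be as simple as a one-dimensional search wrapped around the convex solve; a sketch using golden-section search, where fuel_used is a stand-in for solving the fixed-final-time convex problem and unimodality of propellant versus flight time is an assumption:

```python
# Golden-section search over flight time tf; each evaluation of fuel_used
# would, in the real algorithm, solve the fixed-final-time convex problem.
import math

def fuel_used(tf):
    # Stand-in for the inner convex solve: a made-up unimodal curve.
    return (tf - 42.0) ** 2 / 50.0 + 10.0

def golden_section(f, a, b, tol=1e-3):
    g = (math.sqrt(5.0) - 1.0) / 2.0
    c, d = b - g * (b - a), a + g * (b - a)
    while b - a > tol:
        if f(c) < f(d):              # minimum lies in [a, d]
            b, d = d, c
            c = b - g * (b - a)
        else:                        # minimum lies in [c, b]
            a, c = c, d
            d = a + g * (b - a)
    return (a + b) / 2.0

print(golden_section(fuel_used, 10.0, 120.0))   # ~42, the cheapest flight time
```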
A vectorized Lanczos eigensolver for high-performance computers
NASA Technical Reports Server (NTRS)
Bostic, Susan W.
1990-01-01
The computational strategies used to implement a Lanczos-based-method eigensolver on the latest generation of supercomputers are described. Several examples of structural vibration and buckling problems are presented that show the effects of using optimization techniques to increase the vectorization of the computational steps. The data storage and access schemes and the tools and strategies that best exploit the computer resources are presented. The method is implemented on the Convex C220, the Cray 2, and the Cray Y-MP computers. Results show that very good computation rates are achieved for the most computationally intensive steps of the Lanczos algorithm and that the Lanczos algorithm is many times faster than other methods extensively used in the past.
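For reference, the basic Lanczos recurrence that such a solver vectorizes, in a plain NumPy sketch (no reorthogonalisation, restarting, or shift-invert machinery, which a production structural eigensolver would add):

```python
# Build the m x m tridiagonal Lanczos matrix T; its extreme eigenvalues
# (Ritz values) approximate the extreme eigenvalues of symmetric A.
import numpy as np

def lanczos_tridiag(A, m, seed=0):
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    q = rng.standard_normal(n)
    q /= np.linalg.norm(q)
    q_prev = np.zeros(n)
    alpha, beta = np.zeros(m), np.zeros(m - 1)
    for j in range(m):
        w = A @ q                                # the dominant cost: A times q
        alpha[j] = q @ w
        w -= alpha[j] * q + (beta[j - 1] * q_prev if j > 0 else 0.0)
        if j < m - 1:
            beta[j] = np.linalg.norm(w)
            q_prev, q = q, w / beta[j]
    return np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)

rng = np.random.default_rng(1)
A = rng.standard_normal((200, 200))
A = (A + A.T) / 2
T = lanczos_tridiag(A, 40)
print(np.linalg.eigvalsh(T)[-3:])   # compare with np.linalg.eigvalsh(A)[-3:]
```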
Eom, Ki Seong; Kim, Tae Young; Park, Jong Tae
2009-04-01
We report the case of a 78-year-old man with chronic subdural haematoma (CSDH) who presented with impairment in recent memory and gait disturbance. He underwent burr-hole craniostomy with a closed-drainage system. A computed tomography scan conducted on postoperative day 3 demonstrated an acute epidural haematoma over the contralateral frontoparietal convexity. Craniotomy and haematoma evacuation were immediately performed. The haematoma was located between the outer and inner dura mater that each comprise a single layer. To our knowledge, this is the first reported case of an acute haematoma located between the separated dura mater that occurred following drainage of a contralateral CSDH, and it is the second reported case of interdural haematoma over the cerebral convexity.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dall'Anese, Emiliano; Baker, Kyri; Summers, Tyler
The paper focuses on distribution systems featuring renewable energy sources and energy storage devices, and develops an optimal power flow (OPF) approach to optimize the system operation in spite of forecasting errors. The proposed method builds on a chance-constrained multi-period AC OPF formulation, where probabilistic constraints are utilized to enforce voltage regulation with a prescribed probability. To enable a computationally affordable solution approach, a convex reformulation of the OPF task is obtained by resorting to i) pertinent linear approximations of the power flow equations, and ii) convex approximations of the chance constraints. Particularly, the approximate chance constraints provide conservative bounds that hold for arbitrary distributions of the forecasting errors. An adaptive optimization strategy is then obtained by embedding the proposed OPF task into a model predictive control framework.
SLOPE—ADAPTIVE VARIABLE SELECTION VIA CONVEX OPTIMIZATION
Bogdan, Małgorzata; van den Berg, Ewout; Sabatti, Chiara; Su, Weijie; Candès, Emmanuel J.
2015-01-01
We introduce a new estimator for the vector of coefficients β in the linear model y = Xβ + z, where X has dimensions n × p with p possibly larger than n. SLOPE, short for Sorted L-One Penalized Estimation, is the solution to min_{b ∈ ℝ^p} (1/2)‖y − Xb‖²_{ℓ2} + λ1|b|(1) + λ2|b|(2) + ⋯ + λp|b|(p), where λ1 ≥ λ2 ≥ … ≥ λp ≥ 0 and |b|(1) ≥ |b|(2) ≥ ⋯ ≥ |b|(p) are the decreasing absolute values of the entries of b. This is a convex program and we demonstrate a solution algorithm whose computational complexity is roughly comparable to that of classical ℓ1 procedures such as the Lasso. Here, the regularizer is a sorted ℓ1 norm, which penalizes the regression coefficients according to their rank: the higher the rank (that is, the stronger the signal), the larger the penalty. This is similar to the Benjamini and Hochberg [J. Roy. Statist. Soc. Ser. B 57 (1995) 289–300] procedure (BH), which compares more significant p-values with more stringent thresholds. One notable choice of the sequence {λi} is given by the BH critical values λBH(i) = z(1 − i·q/(2p)), where q ∈ (0, 1) and z(α) is the quantile of a standard normal distribution. SLOPE aims to provide finite sample guarantees on the selected model; of special interest is the false discovery rate (FDR), defined as the expected proportion of irrelevant regressors among all selected predictors. Under orthogonal designs, SLOPE with λBH provably controls FDR at level q. Moreover, it also appears to have appreciable inferential properties under more general designs X while having substantial power, as demonstrated in a series of experiments running on both simulated and real data. PMID:26709357
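Two pieces of the method are easy to sketch: the BH-style lambda sequence, and the proximal operator of the sorted ℓ1 norm, the workhorse of ISTA-type SLOPE solvers, which reduces to an isotonic regression. The sketch assumes SciPy and scikit-learn are available; it is the prox step only, not a full solver:

```python
# BH-style lambda sequence and the prox of the sorted-l1 penalty.
import numpy as np
from scipy.stats import norm
from sklearn.isotonic import isotonic_regression

def lambda_bh(p, q=0.1):
    i = np.arange(1, p + 1)
    return norm.ppf(1.0 - i * q / (2.0 * p))     # z(1 - i*q/(2p)), decreasing

def prox_sorted_l1(b, lam):
    """argmin_x 0.5*||x-b||^2 + sum_i lam_i*|x|_(i), lam nonincreasing."""
    sign, absb = np.sign(b), np.abs(b)
    order = np.argsort(absb)[::-1]               # |b| in decreasing order
    # Project |b|_(i) - lam_i onto the nonincreasing cone, clip at zero.
    x = np.maximum(isotonic_regression(absb[order] - lam, increasing=False),
                   0.0)
    out = np.empty_like(b)
    out[order] = x                               # undo the sort
    return sign * out

lam = lambda_bh(p=5, q=0.1)
print(prox_sorted_l1(np.array([3.0, -1.0, 2.5, 0.2, -4.0]), lam))
```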
Universal portfolios generated by the Bregman divergence
NASA Astrophysics Data System (ADS)
Tan, Choon Peng; Kuang, Kee Seng
2017-04-01
The Bregman divergence of two probability vectors is a stronger form of the f-divergence introduced by Csiszar. Two versions of the Bregman universal portfolio are presented by exploiting the mean-value theorem. The explicit form of the Bregman universal portfolio generated by a function of a convex polynomial is derived and studied empirically. This portfolio can be regarded as another generalization of the well-known Helmbold portfolio. By running the portfolios on selected stock-price data sets from the local stock exchange, it is shown that the portfolios can increase the wealth of the investor.
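For context, the Helmbold portfolio generalised here is an exponentiated-gradient multiplicative update; a sketch with made-up price relatives (the learning rate eta is an illustrative choice):

```python
# Helmbold (exponentiated-gradient) universal portfolio on synthetic data.
import numpy as np

def eg_portfolio_wealth(price_relatives, eta=0.05):
    """price_relatives: T x n array; x[t, i] = closing/opening price ratio."""
    T, n = price_relatives.shape
    b = np.full(n, 1.0 / n)                  # start from the uniform portfolio
    wealth = 1.0
    for x in price_relatives:
        wealth *= b @ x                      # one period's growth factor
        b = b * np.exp(eta * x / (b @ x))    # multiplicative update
        b /= b.sum()                         # renormalise onto the simplex
    return wealth

rng = np.random.default_rng(0)
X = 1.0 + 0.01 * rng.standard_normal((250, 5))   # ~one trading year, 5 assets
print(eg_portfolio_wealth(X))
```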
Nozoe, Masafumi; Mase, Kyoshi; Murakami, Shigefumi; Okada, Makoto; Ogino, Tomoyuki; Matsushita, Kazuhiro; Takashima, Sachie; Yamamoto, Noriyasu; Fukuda, Yoshihiro; Domen, Kazuhisa
2013-10-01
Assessment of the degree of air-flow obstruction is important for determining the treatment strategy in COPD patients. However, in some elderly COPD patients, measuring FVC is impossible because of cognitive dysfunction or severe dyspnea. In such patients a simple test of airways obstruction requiring only a short run of tidal breathing would be useful. We studied whether the spontaneous expiratory flow-volume (SEFV) curve pattern reflects the degree of air-flow obstruction in elderly COPD patients. In 34 elderly subjects (mean ± SD age 80 ± 7 y) with stable COPD (percent-of-predicted FEV1 39.0 ± 18.5%), and 12 age-matched healthy subjects, we measured FVC and recorded flow-volume curves during quiet breathing. We studied the SEFV curve patterns (concavity/convexity), spirometry results, breathing patterns, and demographics. The SEFV curve concavity/convexity prediction accuracy was examined by calculating receiver operating characteristic curves, cutoff values, area under the curve, sensitivity, and specificity. Fourteen subjects with COPD had a concave SEFV curve. All the healthy subjects had convex SEFV curves. The COPD subjects who had concave SEFV curves often had very severe airway obstruction. A percent-of-predicted FEV1 cutoff of 32.4% was the most powerful predictor of SEFV curve concavity (area under the curve 0.92, 95% CI 0.83-1.00), with the highest sensitivity (0.93) and specificity (0.88). Concavity of the SEFV curve obtained during tidal breathing may be a useful test for determining the presence of very severe obstruction in elderly patients unable to perform a satisfactory FVC maneuver.
An exact general remeshing scheme applied to physically conservative voxelization
Powell, Devon; Abel, Tom
2015-05-21
We present an exact general remeshing scheme to compute analytic integrals of polynomial functions over the intersections between convex polyhedral cells of old and new meshes. In physics applications this allows one to ensure global mass, momentum, and energy conservation while applying higher-order polynomial interpolation. We elaborate on applications of our algorithm arising in the analysis of cosmological N-body data, computer graphics, and continuum mechanics problems. We focus on the particular case of remeshing tetrahedral cells onto a Cartesian grid such that the volume integral of the polynomial density function given on the input mesh is guaranteed to equal the corresponding integral over the output mesh. We refer to this as “physically conservative voxelization.” At the core of our method is an algorithm for intersecting two convex polyhedra by successively clipping one against the faces of the other. This algorithm is an implementation of the ideas presented abstractly by Sugihara [48], who suggests using the planar graph representations of convex polyhedra to ensure topological consistency of the output. This makes our implementation robust to geometric degeneracy in the input. We employ a simplicial decomposition to calculate moment integrals up to quadratic order over the resulting intersection domain. We also address practical issues arising in a software implementation, including numerical stability in geometric calculations, management of cancellation errors, and extension to two dimensions. In a comparison to recent work, we show substantial performance gains. We provide a C implementation intended to be a fast, accurate, and robust tool for geometric calculations on polyhedral mesh elements.
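The successive-clipping idea at the core of the method is easiest to see in two dimensions, where clipping a convex polygon against one half-plane takes a few lines (the paper's 3-D version additionally maintains the planar-graph representation of the polyhedron for robustness to degeneracy):

```python
# Sutherland-Hodgman-style clipping of a convex polygon by one half-plane;
# clipping against each face of another convex region in turn yields the
# intersection, the 2-D analogue of the polyhedron clipping described above.
import numpy as np

def clip_halfplane(poly, n, d):
    """Keep the part of convex polygon `poly` where n.p + d >= 0."""
    out = []
    for i in range(len(poly)):
        p, q = poly[i], poly[(i + 1) % len(poly)]
        fp, fq = n @ p + d, n @ q + d
        if fp >= 0:
            out.append(p)                      # vertex is inside
        if fp * fq < 0:                        # edge crosses the boundary
            t = fp / (fp - fq)
            out.append(p + t * (q - p))        # add the intersection point
    return np.array(out)

square = np.array([[0., 0.], [1., 0.], [1., 1.], [0., 1.]])
print(clip_halfplane(square, n=np.array([-1.0, -1.0]), d=1.5))  # cut x+y<=1.5
```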
On Finding Shortest Paths on Convex Polyhedra.
1985-05-01
Computer Science Technical Report Series, University of Maryland, College Park, Maryland 20742. A planar layout can be physically interpreted as cutting the polyhedron along the ridges and unfolding the resulting object onto the plane.
Direct numerical simulation of curved turbulent channel flow
NASA Technical Reports Server (NTRS)
Moser, R. D.; Moin, P.
1984-01-01
Low Reynolds number, mildly curved, turbulent channel flow has been simulated numerically without subgrid scale models. A new spectral numerical method developed for this problem was used, and the computations were performed with 2 million degrees of freedom. A variety of statistical and structural information has been extracted from the computed flow fields. These include mean velocity, turbulence stresses, velocity skewness and flatness factors, space-time correlations and spectra, all the terms in the Reynolds stress balance equations, and contour and vector plots of instantaneous velocity fields. The effects of curvature on this flow were determined by comparing the concave and convex sides of the channel. The observed effects are consistent with experimental observations for mild curvature. The most significant difference in the turbulence statistics between the concave and convex sides was in the Reynolds shear stress. This was accompanied by significant differences in the terms of the Reynolds shear stress balance equations. In addition, it was found that stationary Taylor-Görtler vortices were present and that they had a significant effect on the flow by contributing to the mean Reynolds shear stress, and by affecting the underlying turbulence.
ERIC Educational Resources Information Center
Scott, Paul
2006-01-01
A "convex" polygon is one with no re-entrant angles. Alternatively one can use the standard convexity definition, asserting that for any two points of the convex polygon, the line segment joining them is contained completely within the polygon. In this article, the author provides a solution to a problem involving convex lattice polygons.
Distribution-Agnostic Stochastic Optimal Power Flow for Distribution Grids: Preprint
DOE Office of Scientific and Technical Information (OSTI.GOV)
Baker, Kyri; Dall'Anese, Emiliano; Summers, Tyler
2016-09-01
This paper outlines a data-driven, distributionally robust approach to solve chance-constrained AC optimal power flow problems in distribution networks. Uncertain forecasts for loads and power generated by photovoltaic (PV) systems are considered, with the goal of minimizing PV curtailment while meeting power flow and voltage regulation constraints. A data-driven approach is utilized to develop a distributionally robust conservative convex approximation of the chance constraints; particularly, the mean and covariance matrix of the forecast errors are updated online, and leveraged to enforce voltage regulation with predetermined probability via Chebyshev-based bounds. By combining an accurate linear approximation of the AC power flow equations with the distributionally robust chance constraint reformulation, the resulting optimization problem becomes convex and computationally tractable.
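The Chebyshev-based reformulation turns each chance constraint into a deterministic tightening; a minimal sketch using the one-sided (Cantelli) inequality, with illustrative numbers:

```python
# Enforce Pr[v + e <= v_max] >= 1 - eps for ANY error distribution with the
# given mean and standard deviation, via the one-sided Chebyshev bound.
import math

def tightened_limit(v_max, mu, sigma, eps):
    k = math.sqrt((1.0 - eps) / eps)    # distributionally robust safety factor
    return v_max - mu - k * sigma       # deterministic constraint: v <= this

# 5% violation probability, zero-mean forecast error with sigma = 0.01 p.u.
print(tightened_limit(v_max=1.05, mu=0.0, sigma=0.01, eps=0.05))  # ~1.006
```

As the mean and covariance estimates are updated online, the tightened limit moves with them, which is what makes the strategy adaptive.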
Reduced rank regression via adaptive nuclear norm penalization
Chen, Kun; Dong, Hongbo; Chan, Kung-Sik
2014-01-01
We propose an adaptive nuclear norm penalization approach for low-rank matrix approximation, and use it to develop a new reduced rank estimation method for high-dimensional multivariate regression. The adaptive nuclear norm is defined as the weighted sum of the singular values of the matrix, and it is generally non-convex under the natural restriction that the weight decreases with the singular value. However, we show that the proposed non-convex penalized regression method has a global optimal solution obtained from an adaptively soft-thresholded singular value decomposition. The method is computationally efficient, and the resulting solution path is continuous. The rank consistency of and prediction/estimation performance bounds for the estimator are established for a high-dimensional asymptotic regime. Simulation studies and an application in genetics demonstrate its efficacy. PMID:25045172
Energy optimization in mobile sensor networks
NASA Astrophysics Data System (ADS)
Yu, Shengwei
Mobile sensor networks are considered to consist of a network of mobile robots, each of which has computation, communication and sensing capabilities. Energy efficiency is a critical issue in mobile sensor networks, especially since mobility (i.e., locomotion control), routing (i.e., communications) and sensing are characteristics of mobile robots that can all be exploited for energy optimization. This thesis focuses on the problem of energy optimization of mobile robotic sensor networks, and the research results can be extended to energy optimization of a network of mobile robots that monitors the environment, or a team of mobile robots that transports materials from station to station in a manufacturing environment. On the energy optimization of mobile robotic sensor networks, our research focuses on the investigation and development of distributed optimization algorithms to exploit the mobility of robotic sensor nodes for network lifetime maximization. In particular, the thesis studies these five problems: 1. Network-lifetime maximization by controlling positions of networked mobile sensor robots based on local information with distributed optimization algorithms; 2. Lifetime maximization of mobile sensor networks with energy harvesting modules; 3. Lifetime maximization using joint design of mobility and routing; 4. Optimal control for network energy minimization; 5. Network lifetime maximization in mobile visual sensor networks. In addressing the first problem, we consider only the mobility strategies of the robotic relay nodes in a mobile sensor network in order to maximize its network lifetime. By using variable substitutions, the original problem is converted into a convex problem, and a variant of the sub-gradient method for saddle-point computation is developed for solving this problem. An optimal solution is obtained by the method. Computer simulations show that mobility of robotic sensors can significantly prolong the lifetime of the whole robotic sensor network while consuming a negligible amount of energy on mobility. For the second problem, the formulation is extended to accommodate mobile robotic nodes with energy harvesting capability, which makes it a non-convex optimization problem. The non-convexity issue is tackled by using the existing sequential convex approximation method, based on which we propose a novel modified sequential convex approximation procedure with fast convergence. For the third problem, the proposed procedure is used to solve another challenging non-convex problem, which results in utilizing mobility and routing simultaneously in mobile robotic sensor networks to prolong the network lifetime. The results indicate that joint design of mobility and routing has an edge over other methods in prolonging network lifetime, which is also the justification for the use of mobility in mobile sensor networks for energy efficiency purposes. For the fourth problem, we include the dynamics of the robotic nodes in the problem by modeling the networked robotic system using hybrid systems theory. A novel distributed method for the networked hybrid system is used to solve for the optimal motion trajectories of the robotic nodes and the optimal network links, questions not answered by previous approaches. Finally, the fact that mobility is more effective in prolonging network lifetime for a data-intensive network leads us to apply our methods to study mobile visual sensor networks, which are useful in many applications.
We investigate the joint design of mobility, data routing, and encoding power to help improve video quality while maximizing the network lifetime. This study leads to a better understanding of the role mobility can play in data-intensive surveillance sensor networks.
Pashaei, Ali; Bayer, Jason; Meillet, Valentin; Dubois, Rémi; Vigmond, Edward
2015-03-01
To show how atrial fibrillation rotor activity on the heart surface manifests as phase on the torso, fibrillation was induced on a geometrically accurate computer model of the human atria. The Hilbert transform, time embedding, and filament detection were compared. Electrical activity on the epicardium was used to compute potentials on different surfaces from the atria to the torso. The Hilbert transform produces erroneous phase when pacing for longer than the action potential duration. The number of phase singularities, frequency content, and the dominant frequency decreased with distance from the heart, except for the convex hull.
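The Hilbert-transform phase used in the comparison is a one-liner with SciPy; a sketch on a toy signal (the sampling rate and frequency are illustrative assumptions):

```python
# Instantaneous phase of a signal via the analytic signal (Hilbert transform).
import numpy as np
from scipy.signal import hilbert

fs = 1000.0                                 # Hz, assumed sampling rate
t = np.arange(0.0, 2.0, 1.0 / fs)
egm = np.sin(2 * np.pi * 6.0 * t)           # toy 6 Hz fibrillation-like signal

analytic = hilbert(egm - egm.mean())        # remove the mean before transforming
phase = np.angle(analytic)                  # instantaneous phase in (-pi, pi]
print(phase[:5])
```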
Baxter, John S. H.; Inoue, Jiro; Drangova, Maria; Peters, Terry M.
2016-01-01
Optimization-based segmentation approaches deriving from discrete graph-cuts and continuous max-flow have become increasingly nuanced, allowing for topological and geometric constraints on the resulting segmentation while retaining global optimality. However, these two considerations, topological and geometric, have yet to be combined in a unified manner. The concept of “shape complexes,” which combine geodesic star convexity with extendable continuous max-flow solvers, is presented. These shape complexes allow more complicated shapes to be created through the use of multiple labels and super-labels, with geodesic star convexity governed by a topological ordering. These problems can be optimized using extendable continuous max-flow solvers. Previous approaches required computationally expensive coordinate system warping, which is ill-defined and ambiguous in the general case. These shape complexes are demonstrated in a set of synthetic images as well as vessel segmentation in ultrasound, valve segmentation in ultrasound, and atrial wall segmentation from contrast-enhanced CT. Shape complexes represent an extendable tool alongside other continuous max-flow methods that may be suitable for a wide range of medical image segmentation problems. PMID:28018937
Enhancements on the Convex Programming Based Powered Descent Guidance Algorithm for Mars Landing
NASA Technical Reports Server (NTRS)
Acikmese, Behcet; Blackmore, Lars; Scharf, Daniel P.; Wolf, Aron
2008-01-01
In this paper, we present enhancements on the powered descent guidance algorithm developed for Mars pinpoint landing. The guidance algorithm solves the powered descent minimum fuel trajectory optimization problem via a direct numerical method. Our main contribution is to formulate the trajectory optimization problem, which has nonconvex control constraints, as a finite dimensional convex optimization problem, specifically as a finite dimensional second order cone programming (SOCP) problem. SOCP is a subclass of convex programming, and there are efficient SOCP solvers with deterministic convergence properties. Hence, the resulting guidance algorithm can potentially be implemented onboard a spacecraft for real-time applications. Particularly, this paper discusses the algorithmic improvements obtained by: (i) Using an efficient approach to choose the optimal time-of-flight; (ii) Using a computationally inexpensive way to detect the feasibility/ infeasibility of the problem due to the thrust-to-weight constraint; (iii) Incorporating the rotation rate of the planet into the problem formulation; (iv) Developing additional constraints on the position and velocity to guarantee no-subsurface flight between the time samples of the temporal discretization; (v) Developing a fuel-limited targeting algorithm; (vi) Initial result on developing an onboard table lookup method to obtain almost fuel optimal solutions in real-time.
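For orientation, a minimal sketch of the kind of finite-dimensional convex descent problem being solved, written with CVXPY: uniform gravity stands in for the Mars model, the flight time is fixed (matching the fixed-time subproblem in improvement (i)), the nonconvex lower thrust bound and its convexification are omitted, and all numbers are illustrative.

```python
# Fixed-final-time minimum-fuel descent as an SOCP (toy, uniform gravity).
import cvxpy as cp
import numpy as np

N, dt = 50, 1.0                              # 50 s flight time, 1 s steps
g = np.array([0.0, 0.0, -3.71])              # Mars surface gravity, m/s^2
T_max = 12.0                                 # max thrust acceleration, m/s^2

r = cp.Variable((N + 1, 3))                  # position
v = cp.Variable((N + 1, 3))                  # velocity
u = cp.Variable((N, 3))                      # thrust acceleration

cons = [r[0] == [2000.0, 400.0, 1500.0], v[0] == [0.0, 0.0, -75.0],
        r[N] == [0.0, 0.0, 0.0], v[N] == [0.0, 0.0, 0.0],
        r[:, 2] >= 0]                        # no subsurface flight (at nodes)
for k in range(N):
    cons += [v[k + 1] == v[k] + dt * (u[k] + g),
             r[k + 1] == r[k] + dt * (v[k] + v[k + 1]) / 2,
             cp.norm(u[k]) <= T_max]         # second-order cone constraint

prob = cp.Problem(cp.Minimize(sum(cp.norm(u[k]) for k in range(N)) * dt), cons)
prob.solve()
print(prob.status, prob.value)
```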
A theoretical stochastic control framework for adapting radiotherapy to hypoxia
NASA Astrophysics Data System (ADS)
Saberian, Fatemeh; Ghate, Archis; Kim, Minsun
2016-10-01
Hypoxia, that is, insufficient oxygen partial pressure, is a known cause of reduced radiosensitivity in solid tumors, and especially in head-and-neck tumors. It is thus believed to adversely affect the outcome of fractionated radiotherapy. Oxygen partial pressure varies spatially and temporally over the treatment course and exhibits inter-patient and intra-tumor variation. Emerging advances in non-invasive functional imaging offer the future possibility of adapting radiotherapy plans to this uncertain spatiotemporal evolution of hypoxia over the treatment course. We study the potential benefits of such adaptive planning via a theoretical stochastic control framework using computer-simulated evolution of hypoxia on computer-generated test cases in head-and-neck cancer. The exact solution of the resulting control problem is computationally intractable. We develop an approximation algorithm, called certainty equivalent control, that calls for the solution of a sequence of convex programs over the treatment course; dose-volume constraints are handled using a simple constraint generation method. These convex programs are solved using an interior point algorithm with a logarithmic barrier via Newton’s method and backtracking line search. Convexity of various formulations in this paper is guaranteed by a sufficient condition on radiobiological tumor-response parameters. This condition is expected to hold for head-and-neck tumors and for other similarly responding tumors where the linear dose-response parameter is larger than the quadratic dose-response parameter. We perform numerical experiments on four test cases by using a first-order vector autoregressive process with exponential and rational-quadratic covariance functions from the spatiotemporal statistics literature to simulate the evolution of hypoxia. Our results suggest that dynamic planning could lead to a considerable improvement in the number of tumor cells remaining at the end of the treatment course. Through these simulations, we also gain insights into when and why dynamic planning is likely to yield the largest benefits.
Dimensionality Reduction in Big Data with Nonnegative Matrix Factorization
2017-06-20
NMF is fundamental in applications of data mining, signal processing, computer vision, bioinformatics, etc. Fundamentally, NMF has two main purposes: first, it reduces the dimensionality of the data. In the algorithm described, the variables are rescaled as Q = H/√(diag(H) diag(H)^T) and q = h/√diag(H), so that the shape of the objective becomes more spherical (∂²g/∂y_i² = 1 for all i, and g(y) is convex), and the resulting nonnegative quadratic program (NQP), minimizing f(x), is solved in parallel threads.
Solution Methods for Stochastic Dynamic Linear Programs.
1980-12-01
Scattering from thin dielectric straps surrounding a perfectly conducting structure
NASA Technical Reports Server (NTRS)
Al-Hekail, Zeyad; Gupta, Inder J.
1989-01-01
A method to calculate the electromagnetic fields scattered from a dielectric strap wrapped around a convex conducting structure is presented. A moment method technique is used to find the current excited within the strap by the incident plane wave. Then, the Uniform Geometrical Theory of Diffraction (UTD) is used to compute the fields scattered by the strap. Reasonable agreement was obtained between the computed and measured results. The results of this study are useful in evaluating straps as a target support structure for scattering measurements.
NASA Technical Reports Server (NTRS)
Salmon, Ellen
1996-01-01
The data storage and retrieval demands of space and Earth sciences researchers have made the NASA Center for Computational Sciences (NCCS) Mass Data Storage and Delivery System (MDSDS) one of the world's most active Convex UniTree systems. Science researchers formed the NCCS's Computer Environments and Research Requirements Committee (CERRC) to relate their projected supercomputing and mass storage requirements through the year 2000. Using the CERRC guidelines and observations of current usage, some detailed projections of requirements for MDSDS network bandwidth and mass storage capacity and performance are presented.
A study of workstation computational performance for real-time flight simulation
NASA Technical Reports Server (NTRS)
Maddalon, Jeffrey M.; Cleveland, Jeff I., II
1995-01-01
With recent advances in microprocessor technology, some have suggested that modern workstations provide enough computational power to properly operate a real-time simulation. This paper presents the results of a computational benchmark, based on actual real-time flight simulation code used at Langley Research Center, which was executed on various workstation-class machines. The benchmark was executed on different machines from several companies including: CONVEX Computer Corporation, Cray Research, Digital Equipment Corporation, Hewlett-Packard, Intel, International Business Machines, Silicon Graphics, and Sun Microsystems. The machines are compared by their execution speed, computational accuracy, and porting effort. The results of this study show that the raw computational power needed for real-time simulation is now offered by workstations.
Another Program For Generating Interactive Graphics
NASA Technical Reports Server (NTRS)
Costenbader, Jay; Moleski, Walt; Szczur, Martha; Howell, David; Engelberg, Norm; Li, Tin P.; Misra, Dharitri; Miller, Philip; Neve, Leif; Wolf, Karl;
1991-01-01
VAX/Ultrix version of Transportable Applications Environment Plus (TAE+) computer program provides integrated, portable software environment for developing and running interactive window, text, and graphical-object-based application software systems. Enables programmer or nonprogrammer to construct easily custom software interface between user and application program and to move resulting interface program and its application program to different computers. When used throughout company for wide range of applications, makes both application program and computer seem transparent, with noticeable improvements in learning curve. Available in forms suitable for the following six groups of computers: DEC VAXstation and other VMS VAX computers, Macintosh II computers running A/UX, Apollo Domain Series 3000, DEC VAX and reduced-instruction-set-computer workstations running Ultrix, Sun 3- and 4-series workstations running SunOS and IBM RT/PCs and PS/2 computers running AIX, and HP 9000 S
Bertamini, Marco; Wagemans, Johan
2013-04-01
Interest in convexity has a long history in vision science. For smooth contours in an image, it is possible to code regions of positive (convex) and negative (concave) curvature, and this provides useful information about solid shape. We review a large body of evidence on the role of this information in perception of shape and in attention. This includes evidence from behavioral, neurophysiological, imaging, and developmental studies. A review is necessary to analyze the evidence on how convexity affects (1) separation between figure and ground, (2) part structure, and (3) attention allocation. Despite some broad agreement on the importance of convexity in these areas, there is a lack of consensus on the interpretation of specific claims--for example, on the contribution of convexity to metric depth and on the automatic directing of attention to convexities or to concavities. The focus is on convexity and concavity along a 2-D contour, not convexity and concavity in 3-D, but the important link between the two is discussed. We conclude that there is good evidence for the role of convexity information in figure-ground organization and in parsing, but other, more specific claims are not (yet) well supported.
Mathematical analysis on the cosets of subgroup in the group of E-convex sets
NASA Astrophysics Data System (ADS)
Abbas, Nada Mohammed; Ajeena, Ruma Kareem K.
2018-05-01
In this work, an analysis of the cosets of a subgroup of the group of E-convex sets is presented as a new and powerful tool in the topics of convex analysis and abstract algebra. The properties of these cosets on E-convex sets are proved mathematically. The most important theorem on finite groups in the theory of E-convex sets, Lagrange’s Theorem, is proved. In addition, a mathematical proof of the quotient group of E-convex sets is presented.
Shirota, Go; Gonoi, Wataru; Ishida, Masanori; Okuma, Hidemi; Shintani, Yukako; Abe, Hiroyuki; Takazawa, Yutaka; Ikemura, Masako; Fukayama, Masashi; Ohtomo, Kuni
2015-01-01
The purpose of this study was to evaluate the brain by postmortem computed tomography (PMCT) versus antemortem computed tomography (AMCT) using brains from the same patients. We studied 36 nontraumatic subjects who underwent AMCT, PMCT, and pathological autopsy in our hospital between April 2009 and December 2013. PMCT was performed within 20 h after death, followed by pathological autopsy including the brain. Autopsy confirmed the absence of intracranial disorders that might be related to the cause of death or might affect measurements in our study. Width of the third ventricle, width of the central sulcus, and attenuation in gray matter (GM) and white matter (WM) from the same area of the basal ganglia, centrum semiovale, and high convexity were statistically compared between AMCT and PMCT. Both the width of the third ventricle and that of the central sulcus were significantly smaller in PMCT than in AMCT (P < 0.0001). GM attenuation increased after death at the level of the centrum semiovale and high convexity, but the differences were not statistically significant considering the differences in attenuation among the different computed tomography scanners. WM attenuation significantly increased after death at all levels (P < 0.0001). The differences were larger than the differences among scanners. The GM/WM attenuation ratio was significantly lower on PMCT than on AMCT at all levels (P < 0.0001). PMCT showed an increase in WM attenuation, loss of GM-WM differentiation, and brain swelling, evidenced by a decrease in the size of ventricles and sulci.
Optimal Power Flow for Distribution Systems under Uncertain Forecasts: Preprint
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dall'Anese, Emiliano; Baker, Kyri; Summers, Tyler
2016-12-01
The paper focuses on distribution systems featuring renewable energy sources and energy storage devices, and develops an optimal power flow (OPF) approach to optimize the system operation in spite of forecasting errors. The proposed method builds on a chance-constrained multi-period AC OPF formulation, where probabilistic constraints are utilized to enforce voltage regulation with a prescribed probability. To enable a computationally affordable solution approach, a convex reformulation of the OPF task is obtained by resorting to i) pertinent linear approximations of the power flow equations, and ii) convex approximations of the chance constraints. Particularly, the approximate chance constraints provide conservative bounds that hold for arbitrary distributions of the forecasting errors. An adaptive optimization strategy is then obtained by embedding the proposed OPF task into a model predictive control framework.
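To make the constraint-tightening idea concrete, here is a minimal sketch (ours, not the authors' implementation) of a distribution-free chance constraint on voltages: a one-sided Chebyshev (Cantelli) bound turns Pr(v <= vmax) >= 1 - eps into a deterministic margin added to a linearized power-flow model. The model, numbers, and variable names below are illustrative assumptions.

```python
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(0)
n = 5                                   # toy feeder with 5 controllable nodes
R = 0.01 * rng.random((n, n))           # assumed linearized voltage sensitivities
v0, vmax, sigma_v, eps = 1.00, 1.05, 0.005, 0.05

# Cantelli bound: Pr(v > mean + k*sigma) <= 1/(1 + k^2), so k = sqrt((1-eps)/eps)
# guarantees the chance constraint for *any* zero-mean error distribution.
k = np.sqrt((1 - eps) / eps)

p = cp.Variable(n)                      # controllable injections
cost = cp.sum_squares(p - 0.5)          # stand-in operating cost
constraints = [v0 + R @ p + k * sigma_v <= vmax, p >= 0]
cp.Problem(cp.Minimize(cost), constraints).solve()
print(p.value)
```

The tightened constraint is linear in p, so the whole program stays convex, which is the point of the reformulation described in the abstract.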
Chromatically corrected virtual image visual display. [reducing eye strain in flight simulators
NASA Technical Reports Server (NTRS)
Kahlbaum, W. M., Jr. (Inventor)
1980-01-01
An in-line, three element, large diameter, optical display lens is disclosed which has a front convex-convex element, a central convex-concave element, and a rear convex-convex element. The lens, used in flight simulators, magnifies an image presented on a television monitor and, by causing light rays leaving the lens to be in essentially parallel paths, reduces eye strain of the simulator operator.
Nash points, Ky Fan inequality and equilibria of abstract economies in Max-Plus and B-convexity
NASA Astrophysics Data System (ADS)
Briec, Walter; Horvath, Charles
2008-05-01
B-convexity was introduced in [W. Briec, C. Horvath, B-convexity, Optimization 53 (2004) 103-127]. Separation and Hahn-Banach like theorems can be found in [G. Adilov, A.M. Rubinov, B-convex sets and functions, Numer. Funct. Anal. Optim. 27 (2006) 237-257] and [W. Briec, C.D. Horvath, A. Rubinov, Separation in B-convexity, Pacific J. Optim. 1 (2005) 13-30]. We show here that all the basic results related to fixed point theorems are available in B-convexity. Ky Fan inequality, existence of Nash equilibria and existence of equilibria for abstract economies are established in the framework of B-convexity. Monotone analysis, or analysis on Maslov semimodules [V.N. Kolokoltsov, V.P. Maslov, Idempotent Analysis and Its Applications, Math. Appl., vol. 401, Kluwer Academic, 1997; G.L. Litvinov, V.P. Maslov, G.B. Shpiz, Idempotent functional analysis: An algebraic approach, Math. Notes 69 (2001) 696-729; V.P. Maslov, S.N. Samborski (Eds.), Idempotent Analysis, Advances in Soviet Mathematics, Amer. Math. Soc., Providence, RI, 1992], is the natural framework for these results. From this point of view Max-Plus convexity and B-convexity are isomorphic Maslov semimodule structures over isomorphic semirings. Therefore all the results of this paper hold in the context of Max-Plus convexity.
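For readers less familiar with the semimodule language, the display below (our paraphrase of the standard definition, not a formula quoted from the paper) spells out what a Max-Plus convex set is over the semiring (R ∪ {-∞}, ⊕ = max, ⊗ = +):

```latex
% A set C is Max-Plus convex when every Max-Plus ``convex combination''
% of its points, with coefficients satisfying \lambda \oplus \mu = 0, stays in C:
\[
x, y \in C, \quad \lambda \oplus \mu = \max(\lambda, \mu) = 0
\;\Longrightarrow\;
(\lambda \otimes x) \oplus (\mu \otimes y) = \max(\lambda + x,\; \mu + y) \in C.
\]
```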
NASA Astrophysics Data System (ADS)
Parekh, Ankit
Sparsity has become the basis of some important signal processing methods over the last ten years. Many signal processing problems (e.g., denoising, deconvolution, non-linear component analysis) can be expressed as inverse problems. Sparsity is invoked through the formulation of an inverse problem with suitably designed regularization terms. The regularization terms alone encode sparsity into the problem formulation. Often, the ℓ1 norm is used to induce sparsity, so much so that ℓ1 regularization is considered to be `modern least-squares'. The use of ℓ1 norm, as a sparsity-inducing regularizer, leads to a convex optimization problem, which has several benefits: the absence of extraneous local minima, well developed theory of globally convergent algorithms, even for large-scale problems. Convex regularization via the ℓ1 norm, however, tends to under-estimate the non-zero values of sparse signals. In order to estimate the non-zero values more accurately, non-convex regularization is often favored over convex regularization. However, non-convex regularization generally leads to non-convex optimization, which suffers from numerous issues: convergence may be guaranteed to only a stationary point, problem specific parameters may be difficult to set, and the solution is sensitive to the initialization of the algorithm. The first part of this thesis is aimed toward combining the benefits of non-convex regularization and convex optimization to estimate sparse signals more effectively. To this end, we propose to use parameterized non-convex regularizers with designated non-convexity and provide a range for the non-convex parameter so as to ensure that the objective function is strictly convex. By ensuring convexity of the objective function (sum of data-fidelity and non-convex regularizer), we can make use of a wide variety of convex optimization algorithms to obtain the unique global minimum reliably. The second part of this thesis proposes a non-linear signal decomposition technique for an important biomedical signal processing problem: the detection of sleep spindles and K-complexes in human sleep electroencephalography (EEG). We propose a non-linear model for the EEG consisting of three components: (1) a transient (sparse piecewise constant) component, (2) a low-frequency component, and (3) an oscillatory component. The oscillatory component admits a sparse time-frequency representation. Using a convex objective function, we propose a fast non-linear optimization algorithm to estimate the three components in the proposed signal model. The low-frequency and oscillatory components are then used to estimate the K-complexes and sleep spindles respectively. The proposed detection method is shown to outperform several state-of-the-art automated sleep spindles detection methods.
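As one concrete instance of the convexity-preserving strategy sketched in the first part, the code below applies firm thresholding, the proximal operator associated with the minimax-concave (MC) penalty, to a toy denoising problem; keeping the non-convexity parameter a strictly below 1 keeps the least-squares-plus-penalty objective strictly convex. This is a generic illustration of the idea, not code from the thesis.

```python
import numpy as np

def firm_threshold(y, lam, a):
    """Elementwise firm threshold, the prox of the minimax-concave penalty.
    For 0 < a < 1, the 1D objective 0.5*(y - x)**2 + lam*phi_MC(x; a) stays
    strictly convex, so this non-convex penalty still has a unique minimizer."""
    y = np.asarray(y, dtype=float)
    mag, out = np.abs(y), np.zeros_like(y)
    mid = (mag > lam) & (mag <= lam / a)              # shrinkage zone
    out[mid] = np.sign(y[mid]) * (mag[mid] - lam) / (1.0 - a)
    out[mag > lam / a] = y[mag > lam / a]             # identity zone: no bias
    return out

y = np.array([-3.0, -1.2, 0.4, 0.9, 2.5])
print(firm_threshold(y, lam=1.0, a=0.5))   # small entries -> 0, large kept intact
```

Unlike soft thresholding, large coefficients pass through unshrunk, which is exactly the reduced estimation bias that motivates non-convex regularizers.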
Scoliosis convexity and organ anatomy are related.
Schlösser, Tom P C; Semple, Tom; Carr, Siobhán B; Padley, Simon; Loebinger, Michael R; Hogg, Claire; Castelein, René M
2017-06-01
Primary ciliary dyskinesia (PCD) is a respiratory syndrome in which 'random' organ orientation can occur; with approximately 46% of patients developing situs inversus totalis at organogenesis. The aim of this study was to explore the relationship between organ anatomy and curve convexity by studying the prevalence and convexity of idiopathic scoliosis in PCD patients with and without situs inversus. Chest radiographs of PCD patients were systematically screened for existence of significant lateral spinal deviation using the Cobb angle. Positive values represented right-sided convexity. Curve convexity and Cobb angles were compared between PCD patients with situs inversus and normal anatomy. A total of 198 PCD patients were screened. The prevalence of scoliosis (Cobb >10°) and significant spinal asymmetry (Cobb 5-10°) was 8 and 23%, respectively. Curve convexity and Cobb angle were significantly different within both groups between situs inversus patients and patients with normal anatomy (P ≤ 0.009). Moreover, curve convexity correlated significantly with organ orientation (P < 0.001; ϕ = 0.882): In 16 PCD patients with scoliosis (8 situs inversus and 8 normal anatomy), except for one case, matching of curve convexity and orientation of organ anatomy was observed: convexity of the curve was opposite to organ orientation. This study supports our hypothesis on the correlation between organ anatomy and curve convexity in scoliosis: the convexity of the thoracic curve is predominantly to the right in PCD patients that were 'randomized' to normal organ anatomy and to the left in patients with situs inversus totalis.
Use of Convexity in Ostomy Care
Salvadalena, Ginger; Pridham, Sue; Droste, Werner; McNichol, Laurie; Gray, Mikel
2017-01-01
Ostomy skin barriers that incorporate a convexity feature have been available in the marketplace for decades, but limited resources are available to guide clinicians in selection and use of convex products. Given the widespread use of convexity, and the need to provide practical guidelines for appropriate use of pouching systems with convex features, an international consensus panel was convened to provide consensus-based guidance for this aspect of ostomy practice. Panelists were provided with a summary of relevant literature in advance of the meeting; these articles were used to generate and reach consensus on 26 statements during a 1-day meeting. Consensus was achieved when 80% of panelists agreed on a statement using an anonymous electronic response system. The 26 statements provide guidance for convex product characteristics, patient assessment, convexity use, and outcomes. PMID:28002174
Geometric convex cone volume analysis
NASA Astrophysics Data System (ADS)
Li, Hsiao-Chi; Chang, Chein-I.
2016-05-01
Convexity is a major concept used to design and develop endmember finding algorithms (EFAs). For abundance unconstrained techniques, Pixel Purity Index (PPI) and Automatic Target Generation Process (ATGP), which use Orthogonal Projection (OP) as a criterion, are commonly used methods. For abundance partially constrained techniques, Convex Cone Analysis is generally preferred, which makes use of convex cones to impose the Abundance Non-negativity Constraint (ANC). For abundance fully constrained techniques, N-FINDR and the Simplex Growing Algorithm (SGA) are the most popular methods, which use simplex volume as a criterion to impose the ANC and the Abundance Sum-to-one Constraint (ASC). This paper analyzes an issue encountered in volume calculation, with a hyperplane introduced to illustrate the idea of a bounded convex cone. Geometric Convex Cone Volume Analysis (GCCVA) projects the boundary vectors of a convex cone orthogonally onto a hyperplane to reduce the effect of background signatures, and a geometric volume approach is applied to address the issue arising from volume calculation and further improve the performance of convex cone-based EFAs.
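The geometric core of GCCVA, orthogonal projection of the cone's boundary vectors onto a hyperplane, is elementary linear algebra; a minimal sketch (function and variable names are ours) for the hyperplane {x : n·x = c}:

```python
import numpy as np

def project_onto_hyperplane(V, n, c):
    """Orthogonally project each row of V onto the hyperplane n.x = c."""
    n = np.asarray(n, dtype=float)
    d = (V @ n - c) / (n @ n)        # signed offsets scaled by ||n||^2
    return V - np.outer(d, n)        # subtract each row's normal component

V = np.array([[1.0, 2.0, 2.0], [3.0, 0.0, 1.0]])   # cone boundary vectors (rows)
print(project_onto_hyperplane(V, n=[0.0, 0.0, 1.0], c=1.0))  # flatten onto z = 1
```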
The effects of a convex rear-view mirror on ocular accommodative responses.
Nagata, Tatsuo; Iwasaki, Tsuneto; Kondo, Hiroyuki; Tawara, Akihiko
2013-11-01
Convex mirrors are universally used as rear-view mirrors in automobiles. However, the ocular accommodative responses during the use of these mirrors have not yet been examined. This study investigated the effects of a convex mirror on the ocular accommodative system. Seven young adults with normal visual functions were instructed to binocularly watch an object in a convex or plane mirror. The accommodative responses were measured with an infrared optometer. The average accommodation of all subjects while viewing the object in the convex mirror was significantly nearer than in the plane mirror, although all subjects perceived the position of the object in the convex mirror as being farther away. Moreover, the fluctuations of accommodation were significantly larger for the convex mirror. The convex mirror caused a 'false recognition of distance', which induced the large accommodative fluctuations and blurred vision. Manufacturers should consider the ocular accommodative responses as a new indicator for increasing automotive safety. Copyright © 2013 Elsevier Ltd and The Ergonomics Society. All rights reserved.
Designing Robust and Resilient Tactical MANETs
2014-09-25
Bounds on the Throughput Efficiency of Greedy Maximal Scheduling in Wireless Networks, IEEE/ACM Transactions on Networking (06 2011). Wireless Sensor Networks and Effects of Long Range Dependent Data, Special IWSM Issue of Sequential Analysis (11 2012). A. D. Dominguez..., Bushnell, R. Poovendran, A Convex Optimization Approach for Clone Detection in Wireless Sensor Networks, Pervasive and Mobile Computing (01 2012).
BROJA-2PID: A Robust Estimator for Bivariate Partial Information Decomposition
NASA Astrophysics Data System (ADS)
Makkeh, Abdullah; Theis, Dirk; Vicente, Raul
2018-04-01
Makkeh, Theis, and Vicente found in [8] that the Cone Programming model is the most robust for computing the Bertschinger et al. partial information decomposition (BROJA PID) measure [1]. We developed production-quality, robust software that computes the BROJA PID measure based on the Cone Programming model. In this paper, we prove the important property of strong duality for the Cone Program and prove an equivalence between the Cone Program and the original Convex problem. We then describe our software in detail and explain how to use it.
GTD analysis of airborne antennas radiating in the presence of lossy dielectric layers
NASA Technical Reports Server (NTRS)
Rojas-Teran, R. G.; Burnside, W. D.
1981-01-01
The patterns of monopole or aperture antennas mounted on a perfectly conducting convex surface radiating in the presence of a dielectric or metal plate are computed. The geometrical theory of diffraction is used to analyze the radiating system and is extended here to include diffraction by flat dielectric slabs. Modified edge diffraction coefficients valid for wedges whose walls are lossy or lossless thin dielectric or perfectly conducting plates are developed. The width of the dielectric plates cannot exceed a quarter of a wavelength in free space, and the interior angle of the wedge is assumed to be close to 0 deg or 180 deg. Systematic methods for computing the individual components of the total high frequency field are discussed. The accuracy of the solutions is demonstrated by comparisons with measured results, where a 2 lambda by 4 lambda prolate spheroid is used as the convex surface. A jump or kink appears in the calculated pattern when important higher-order terms are not included in the final solution. The most immediate application of the results presented here is in the modelling of structures such as aircraft, which are composed of nonmetallic parts that play a significant role in the pattern.
Revisiting separation properties of convex fuzzy sets
USDA-ARS?s Scientific Manuscript database
Separation of convex sets by hyperplanes has been extensively studied on crisp sets. In a seminal paper separability and convexity are investigated, however there is a flaw on the definition of degree of separation. We revisited separation on convex fuzzy sets that have level-wise (crisp) disjointne...
Kulesza, Joel A.; Solomon, Clell J.; Kiedrowski, Brian C.
2018-01-02
This paper presents a new method for performing angular biasing in Monte Carlo radiation transport codes using arbitrary convex polyhedra to define regions of interest toward which to project particles (DXTRAN regions). The method is derived and is implemented using axis-aligned right parallelepipeds (AARPPs) and arbitrary convex polyhedra. Attention is also paid to possible numerical complications and areas for future refinement. A series of test problems are executed with void, purely absorbing, purely scattering, and 50% absorbing/50% scattering materials. For all test problems tally results using AARPP and polyhedral DXTRAN regions agree with analog and/or spherical DXTRAN results within statistical uncertainties. In cases with significant scattering the figure of merit (FOM) using AARPP or polyhedral DXTRAN regions is lower than with spherical regions despite the ability to closely fit the tally region. This is because spherical DXTRAN processing is computationally less expensive than AARPP or polyhedral DXTRAN processing. Thus, it is recommended that the speed of spherical regions be considered versus the ability to closely fit the tally region with an AARPP or arbitrary polyhedral region. It is also recommended that short calculations be made prior to final calculations to compare the FOM for the various DXTRAN geometries because of the influence of the scattering behavior.
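A convex-polyhedral DXTRAN region is naturally stored as an intersection of half-spaces, which makes the membership test a handful of inequalities. The sketch below shows that test in isolation (our illustration, not the transport-code implementation); A and b are assumed to hold the outward face normals and offsets:

```python
import numpy as np

def inside_convex_polyhedron(A, b, p, tol=1e-12):
    """True if point p satisfies A @ p <= b for every face half-space."""
    return bool(np.all(A @ p <= b + tol))

# Unit cube [0, 1]^3 as the intersection of six half-spaces
A = np.vstack([np.eye(3), -np.eye(3)])
b = np.array([1.0, 1.0, 1.0, 0.0, 0.0, 0.0])
print(inside_convex_polyhedron(A, b, np.array([0.5, 0.5, 0.5])))  # True
print(inside_convex_polyhedron(A, b, np.array([1.5, 0.5, 0.5])))  # False
```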
Maximum Margin Clustering of Hyperspectral Data
NASA Astrophysics Data System (ADS)
Niazmardi, S.; Safari, A.; Homayouni, S.
2013-09-01
In recent decades, large margin methods such as Support Vector Machines (SVMs) have been considered the state-of-the-art of supervised learning methods for classification of hyperspectral data. However, the results of these algorithms mainly depend on the quality and quantity of available training data. To tackle the problems associated with the training data, researchers have put effort into extending the capability of large margin algorithms to unsupervised learning. One of the recently proposed algorithms is Maximum Margin Clustering (MMC). MMC is an unsupervised SVM algorithm that simultaneously estimates both the labels and the hyperplane parameters. Nevertheless, the optimization of the MMC algorithm is a non-convex problem. Most of the existing MMC methods rely on reformulating and relaxing the non-convex optimization problem as semi-definite programs (SDP), which are computationally very expensive and can only handle small data sets. Moreover, most of these algorithms address only two-class classification, which cannot be used for classification of remotely sensed data. In this paper, a new MMC algorithm is used that solves the original non-convex problem using an alternating optimization method. This algorithm is also extended for multi-class classification and its performance is evaluated. The results of the proposed algorithm show that it has acceptable performance for hyperspectral data clustering.
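For intuition only, a deliberately naive alternating-optimization loop for MMC is sketched below: fix the labels and fit a linear SVM, then fix the SVM and relabel points by its decision function, with a median split standing in for a class-balance constraint (without one, the trivial single-cluster labeling maximizes the margin). This toy loop illustrates the alternation; it is not the algorithm evaluated in the paper.

```python
import numpy as np
from sklearn.svm import LinearSVC

def naive_mmc(X, n_iter=20, seed=0):
    """Toy maximum margin clustering by alternating optimization."""
    rng = np.random.default_rng(seed)
    y = rng.choice([-1, 1], size=len(X))           # random initial labels
    for _ in range(n_iter):
        svm = LinearSVC(C=1.0).fit(X, y)           # step 1: fit hyperplane
        scores = svm.decision_function(X)
        y_new = np.where(scores > np.median(scores), 1, -1)  # step 2: relabel
        if np.array_equal(y_new, y):
            break                                  # labels stabilized
        y = y_new
    return y
```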
Detection of Convexity and Concavity in Context
ERIC Educational Resources Information Center
Bertamini, Marco
2008-01-01
Sensitivity to shape changes was measured, in particular detection of convexity and concavity changes. The available data are contradictory. The author used a change detection task and simple polygons to systematically manipulate convexity/concavity. Performance was high for detecting a change of sign (a new concave vertex along a convex contour…
Ristau, Neil; Siden, Gunnar Leif
2015-07-21
An airfoil includes a leading edge, a trailing edge downstream from the leading edge, a pressure surface between the leading and trailing edges, and a suction surface between the leading and trailing edges and opposite the pressure surface. A first convex section on the suction surface decreases in curvature downstream from the leading edge, and a throat on the suction surface is downstream from the first convex section. A second convex section is on the suction surface downstream from the throat, and a first convex segment of the second convex section increases in curvature.
The Weekly Fab Five: Things You Should Do Every Week To Keep Your Computer Running in Tip-Top Shape.
ERIC Educational Resources Information Center
Crispen, Patrick
2001-01-01
Describes five steps that school librarians should follow every week to keep their computers running at top efficiency. Explains how to update virus definitions; run Windows Update; run ScanDisk to repair errors on the hard drive; run a disk defragmenter; and back up all data. (LRW)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, L; Han, Y; Jin, M
Purpose: To develop an iterative reconstruction method for X-ray CT, in which the reconstruction can quickly converge to the desired solution with much reduced projection views. Methods: The reconstruction is formulated as a convex feasibility problem, i.e. the solution is an intersection of three convex sets: 1) data fidelity (DF) set – the L2 norm of the difference of observed projections and those from the reconstructed image is no greater than an error bound; 2) non-negativity of image voxels (NN) set; and 3) piecewise constant (PC) set – the total variation (TV) of the reconstructed image is no greater than an upper bound. The solution can be found by applying projection onto convex sets (POCS) sequentially for these three convex sets. Specifically, the algebraic reconstruction technique and setting negative voxels as zero are used for projection onto the DF and NN sets, respectively, while the projection onto the PC set is achieved by solving a standard Rudin, Osher, and Fatemi (ROF) model. The proposed method is named full sequential POCS (FS-POCS); it is tested using the Shepp-Logan phantom and the Catphan600 phantom and compared with two similar algorithms, TV-POCS and CP-TV. Results: Using the Shepp-Logan phantom, the root mean square error (RMSE) of reconstructed images changing along with the number of iterations is used as the convergence measurement. In general, FS-POCS converges faster than TV-POCS and CP-TV, especially with fewer projection views. FS-POCS can also achieve accurate reconstruction of cone-beam CT of the Catphan600 phantom using only 54 views, comparable to that of FDK using 364 views. Conclusion: We developed an efficient iterative reconstruction for sparse-view CT using full sequential POCS. The simulation and physical phantom data demonstrated the computational efficiency and effectiveness of FS-POCS.
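The alternation over the three sets can be sketched compactly. In the illustration below (ours), a dense matrix stands in for the CT system, a Landweber-type gradient step moves toward the data-fidelity set, clipping enforces non-negativity, and an off-the-shelf ROF/TV denoising step approximates the projection onto the bounded-TV set:

```python
import numpy as np
from skimage.restoration import denoise_tv_chambolle

def fs_pocs_sketch(A, y, shape, n_iter=50, eps=1e-3, tv_weight=0.1):
    """Schematic sequential POCS over the DF, NN, and PC convex sets."""
    x = np.zeros(A.shape[1])
    L = np.linalg.norm(A, 2) ** 2                 # Lipschitz constant of A^T A
    for _ in range(n_iter):
        r = y - A @ x
        if np.linalg.norm(r) > eps:               # outside the DF set:
            x = x + A.T @ r / L                   # take a step toward it
        x = np.maximum(x, 0.0)                    # project onto the NN set
        x = denoise_tv_chambolle(x.reshape(shape), weight=tv_weight).ravel()
    return x.reshape(shape)
```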
Fast alternating projection methods for constrained tomographic reconstruction
Liu, Li; Han, Yongxin
2017-01-01
The alternating projection algorithms are easy to implement and effective for large-scale complex optimization problems, such as constrained reconstruction of X-ray computed tomography (CT). A typical method is to use projection onto convex sets (POCS) for data fidelity and nonnegative constraints combined with total variation (TV) minimization (so-called TV-POCS) for sparse-view CT reconstruction. However, this type of method relies on empirically selected parameters for satisfactory reconstruction, is generally slow, and lacks convergence analysis. In this work, we use a convex feasibility set approach to address the problems associated with TV-POCS and propose a framework using full sequential alternating projections or POCS (FS-POCS) to find the solution in the intersection of convex constraints of bounded TV function, bounded data fidelity error and non-negativity. The rationale behind FS-POCS is that the mathematically optimal solution of the constrained objective function may not be the physically optimal solution. The breakdown of constrained reconstruction into an intersection of several feasible sets can lead to faster convergence and better quantification of reconstruction parameters in a physically meaningful way rather than through empirical trial-and-error. In addition, for large-scale optimization problems, first order methods are usually used. Not only is the condition for convergence of gradient-based methods derived, but a primal-dual hybrid gradient (PDHG) method is also used for fast convergence of bounded TV. The newly proposed FS-POCS is evaluated and compared with TV-POCS and another convex feasibility projection method (CPTV) using both digital phantom and pseudo-real CT data to show its superior performance on reconstruction speed, image quality and quantification. PMID:28253298
DOE Office of Scientific and Technical Information (OSTI.GOV)
Azunre, P.
In this paper, two novel techniques for bounding the solutions of parametric weakly coupled second-order semilinear parabolic partial differential equations are developed. The first provides a theorem to construct interval bounds, while the second provides a theorem to construct lower bounds that are convex and upper bounds that are concave in the parameter. The convex/concave bounds can be significantly tighter than the interval bounds because of the wrapping effect suffered by interval analysis in dynamical systems. Both types of bounds are computationally cheap to construct, requiring the solution of auxiliary systems twice and four times larger than the original system, respectively. An illustrative numerical example of bound construction and use for deterministic global optimization within a simple serial branch-and-bound algorithm, implemented numerically using interval arithmetic and a generalization of McCormick's relaxation technique, is presented. Finally, problems within the important class of reaction-diffusion systems may be optimized with these tools.
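As background on the McCormick relaxation invoked above, the standard envelope replaces a bilinear term w = xy over the box [x_L, x_U] × [y_L, y_U] with four linear inequalities; tighter interval bounds on x and y therefore yield tighter envelopes:

```latex
\[
\begin{aligned}
w &\ge x_L y + x\,y_L - x_L y_L, &\qquad w &\ge x_U y + x\,y_U - x_U y_U,\\
w &\le x_U y + x\,y_L - x_U y_L, &\qquad w &\le x_L y + x\,y_U - x_L y_U.
\end{aligned}
\]
```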
Convex formulation of multiple instance learning from positive and unlabeled bags.
Bao, Han; Sakai, Tomoya; Sato, Issei; Sugiyama, Masashi
2018-05-24
Multiple instance learning (MIL) is a variation of traditional supervised learning problems where data (referred to as bags) are composed of sub-elements (referred to as instances) and only bag labels are available. MIL has a variety of applications such as content-based image retrieval, text categorization, and medical diagnosis. Most of the previous work on MIL assumes that training bags are fully labeled. However, it is often difficult to obtain a sufficient number of labeled bags in practical situations, while many unlabeled bags are available. A learning framework called PU classification (positive and unlabeled classification) can address this problem. In this paper, we propose a convex PU classification method to solve an MIL problem. We experimentally show that the proposed method achieves better performance with significantly lower computation costs than an existing method for PU-MIL. Copyright © 2018 Elsevier Ltd. All rights reserved.
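For context, the convex PU construction that such methods build on can be stated briefly (this is the generic PU risk, not the paper's MIL-specific objective): with class prior π and a surrogate loss ℓ satisfying ℓ(z) - ℓ(-z) = -z (e.g., the logistic or double hinge loss), the troublesome difference of losses on positive data collapses to a linear term, leaving a risk that is convex in the decision function f:

```latex
\[
R(f) = \pi\,\mathbb{E}_{P}[\ell(f(X))] - \pi\,\mathbb{E}_{P}[\ell(-f(X))]
       + \mathbb{E}_{U}[\ell(-f(X))]
     = -\pi\,\mathbb{E}_{P}[f(X)] + \mathbb{E}_{U}[\ell(-f(X))].
\]
```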
NASA Astrophysics Data System (ADS)
Uilhoorn, F. E.
2016-10-01
In this article, the stochastic modelling approach proposed by Box and Jenkins is treated as a mixed-integer nonlinear programming (MINLP) problem solved with a mesh adaptive direct search and a real-coded genetic class of algorithms. The aim is to estimate the real-valued parameters and non-negative integer, correlated structure of stationary autoregressive moving average (ARMA) processes. The maximum likelihood function of the stationary ARMA process is embedded in Akaike's information criterion and the Bayesian information criterion, whereas the estimation procedure is based on Kalman filter recursions. The constraints imposed on the objective function enforce stability and invertibility. The best ARMA model is regarded as the global minimum of the non-convex MINLP problem. The robustness and computational performance of the MINLP solvers are compared with brute-force enumeration. Numerical experiments are done for existing time series and one new data set.
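The brute-force baseline mentioned above amounts to enumerating candidate orders and scoring each maximum-likelihood (Kalman-filter-based) fit by an information criterion. A minimal sketch with statsmodels (our illustration, not the authors' code):

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

def brute_force_arma(y, max_p=4, max_q=4):
    """Fit every ARMA(p, q) up to the given orders; keep the lowest AIC."""
    best_order, best_aic = None, np.inf
    for p in range(max_p + 1):
        for q in range(max_q + 1):
            if p == q == 0:
                continue
            try:
                res = ARIMA(y, order=(p, 0, q)).fit()
                if res.aic < best_aic:
                    best_order, best_aic = (p, q), res.aic
            except Exception:
                pass                    # skip fits that fail to converge
    return best_order, best_aic
```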
Chance-Constrained AC Optimal Power Flow for Distribution Systems With Renewables
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dall'Anese, Emiliano; Baker, Kyri; Summers, Tyler
This paper focuses on distribution systems featuring renewable energy sources (RESs) and energy storage systems, and presents an AC optimal power flow (OPF) approach to optimize system-level performance objectives while coping with uncertainty in both RES generation and loads. The proposed method hinges on a chance-constrained AC OPF formulation where probabilistic constraints are utilized to enforce voltage regulation with prescribed probability. A computationally more affordable convex reformulation is developed by resorting to suitable linear approximations of the AC power-flow equations as well as convex approximations of the chance constraints. The approximate chance constraints provide conservative bounds that hold for arbitrary distributions of the forecasting errors. An adaptive strategy is then obtained by embedding the proposed AC OPF task into a model predictive control framework. Finally, a distributed solver is developed to strategically distribute the solution of the optimization problems across utility and customers.
Superiorization with level control
NASA Astrophysics Data System (ADS)
Cegielski, Andrzej; Al-Musallam, Fadhel
2017-04-01
The convex feasibility problem is to find a common point of a finite family of closed convex subsets. In many applications one requires something more, namely finding a common point of closed convex subsets which minimizes a continuous convex function. The latter requirement leads to an application of the superiorization methodology, which sits between methods for the convex feasibility problem and methods for convex constrained minimization. Inspired by the superiorization idea, we introduce a method which sequentially applies a long-step algorithm to a sequence of convex feasibility problems; the method employs quasi-nonexpansive operators as well as subgradient projections with level control and does not require evaluation of the metric projection. We replace a perturbation of the iterates (applied in the superiorization methodology) by a perturbation of the current level in minimizing the objective function. We consider the method in the Euclidean space in order to guarantee strong convergence, although the method is well defined in a Hilbert space.
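The workhorse step, a subgradient projection with level control, is short enough to state directly. A sketch (ours), using a unit relaxation parameter, that leaves the point unchanged once the objective is at or below the current level:

```python
import numpy as np

def subgradient_projection_step(x, f, subgrad, level):
    """Project x onto the half-space {z : f(x) + g.(z - x) <= level},
    a superset of the sublevel set {z : f(z) <= level} by convexity of f."""
    fx, g = f(x), subgrad(x)
    if fx <= level:
        return x                              # already below the level
    return x - (fx - level) / (g @ g) * g     # assumes g != 0 here
```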
Hermite-Hadamard type inequality for φ_h-convex stochastic processes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sarıkaya, Mehmet Zeki, E-mail: sarikayamz@gmail.com; Kiriş, Mehmet Eyüp, E-mail: kiris@aku.edu.tr; Çelik, Nuri, E-mail: ncelik@bartin.edu.tr
2016-04-18
The main aim of the present paper is to introduce φ_h-convex stochastic processes, and we investigate the main properties of these mappings. Moreover, we prove Hadamard-type inequalities for φ_h-convex stochastic processes. We also give some new general inequalities for φ_h-convex stochastic processes.
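For reference, the classical Hermite-Hadamard inequality that such results generalize reads, for a convex f on [a, b]:

```latex
\[
f\!\left(\frac{a+b}{2}\right) \;\le\; \frac{1}{b-a}\int_a^b f(t)\,dt
\;\le\; \frac{f(a)+f(b)}{2}.
\]
```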
A Bayesian observer replicates convexity context effects in figure-ground perception.
Goldreich, Daniel; Peterson, Mary A
2012-01-01
Peterson and Salvagio (2008) demonstrated convexity context effects in figure-ground perception. Subjects shown displays consisting of unfamiliar alternating convex and concave regions identified the convex regions as foreground objects progressively more frequently as the number of regions increased; this occurred only when the concave regions were homogeneously colored. The origins of these effects have been unclear. Here, we present a two-free-parameter Bayesian observer that replicates convexity context effects. The Bayesian observer incorporates two plausible expectations regarding three-dimensional scenes: (1) objects tend to be convex rather than concave, and (2) backgrounds tend (more than foreground objects) to be homogeneously colored. The Bayesian observer estimates the probability that a depicted scene is three-dimensional, and that the convex regions are figures. It responds stochastically by sampling from its posterior distributions. Like human observers, the Bayesian observer shows convexity context effects only for images with homogeneously colored concave regions. With optimal parameter settings, it performs similarly to the average human subject on the four display types tested. We propose that object convexity and background color homogeneity are environmental regularities exploited by human visual perception; vision achieves figure-ground perception by interpreting ambiguous images in light of these and other expected regularities in natural scenes.
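A schematic of such an observer fits in a few lines. The likelihood values below are invented placeholders rather than the paper's fitted parameters, but the structure is the same: score two scene hypotheses against convexity and color-homogeneity priors, then respond stochastically by sampling the posterior.

```python
import numpy as np

def figure_ground_response(convex_is_figure_prior=0.9,
                           homog_background_prior=0.8,
                           concave_regions_homogeneous=True, rng=None):
    """Toy Bayesian observer: H1 = convex regions are figures, H0 = reverse."""
    rng = rng or np.random.default_rng()
    # H1 is supported by the convexity prior, and by color homogeneity of the
    # concave (background, under H1) regions when it is observed.
    like_h1 = convex_is_figure_prior * (
        homog_background_prior if concave_regions_homogeneous
        else 1.0 - homog_background_prior)
    like_h0 = (1.0 - convex_is_figure_prior) * 0.5   # flat color prior under H0
    posterior_h1 = like_h1 / (like_h1 + like_h0)
    return rng.random() < posterior_h1               # sampled, not maximized
```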
Program For Generating Interactive Displays
NASA Technical Reports Server (NTRS)
Costenbader, Jay; Moleski, Walt; Szczur, Martha; Howell, David; Engelberg, Norm; Li, Tin P.; Misra, Dharitri; Miller, Philip; Neve, Leif; Wolf, Karl;
1991-01-01
Sun/Unix version of Transportable Applications Environment Plus (TAE+) computer program provides integrated, portable software environment for developing and running interactive window, text, and graphical-object-based application software systems. Enables programmer or nonprogrammer to easily construct custom software interface between user and application program and to move resulting interface program and its application program to different computers. TAE+ is viewed as productivity tool for application developers and application end users, who benefit from resultant consistent and well-designed user interface sheltering them from intricacies of computer. Available in form suitable for following six different groups of computers: DEC VAX station and other VMS VAX computers, Macintosh II computers running AUX, Apollo Domain Series 3000, DEC VAX and reduced-instruction-set-computer workstations running Ultrix, Sun 3- and 4-series workstations running Sun OS and IBM RT/PC and PS/2 compute
A Web-based Distributed Voluntary Computing Platform for Large Scale Hydrological Computations
NASA Astrophysics Data System (ADS)
Demir, I.; Agliamzanov, R.
2014-12-01
Distributed volunteer computing can enable researchers and scientists to form large parallel computing environments that utilize the computing power of millions of computers on the Internet, and use them toward running large-scale environmental simulations and models to serve the common good of local communities and the world. Recent developments in web technologies and standards allow client-side scripting languages to run at speeds close to native applications, and to utilize the power of Graphics Processing Units (GPU). Using a client-side scripting language like JavaScript, we have developed an open distributed computing framework that makes it easy for researchers to write their own hydrologic models and run them on volunteer computers. Website owners can easily enable their sites so that visitors can volunteer their computing resources to help run advanced hydrological models and simulations. Using a web-based system allows users to start volunteering their computational resources within seconds, without installing any software. The framework distributes the model simulation to thousands of nodes in small spatial and computational sizes. A relational database system is utilized for managing data connections and queue management for the distributed computing nodes. In this paper, we present a web-based distributed volunteer computing platform to enable large-scale hydrological simulations and model runs in an open and integrated environment.
Interactive effects of age and exercise on adiposity measures of 41,582 physically active women
DOE Office of Scientific and Technical Information (OSTI.GOV)
Williams, Paul T.; Satariano William A.
2004-06-01
The objective of this report is to assess in women whether exercise affects the estimated age-related increase in adiposity, and contrariwise, whether age affects the estimated exercise-related decrease in adiposity. Cross-sectional analyses were performed on 64,911 female runners who provided data on their body mass index (97.6 percent), waist (91.1 percent), and chest circumferences (77.9 percent). Age affected the relationships between vigorous exercise and adiposity. The decline in BMI per km/wk run was linear in 18-25 year olds (-0.023 ± 0.002 kg/m2 per km run) and became increasingly nonlinear (convex or upwardly concave) with age. The waist, hip and chest circumferences declined significantly with running distance across all age groups, but the declines were 52-58 percent greater in older than younger women (P < 10^-5). The relationships between body circumferences and running distance became increasingly convex (upwardly concave) in older women. Conversely, vigorous exercise diminished the apparent increase in adiposity with age. The rise in average BMI with age was greatest in women who ran less than 8 km/wk (0.065 ± 0.005 kg/m2 per y), intermediate for women who ran 8-16 km/wk (0.025 ± 0.004 kg/m2 per y) or 16-32 km/wk (0.022 ± 0.003 kg/m2 per y), and least in those who averaged over 32 km/wk (0.017 ± 0.001 kg/m2 per y). Before age 45, waist circumference rose 0.055 ± 0.026 cm per year for those who ran 0-8 km/wk, showed no significant change for those who ran 8-40 km/wk, and declined -0.057 ± 0.012 and -0.069 ± 0.014 cm per year in those who ran 40-56 and over 56 km/wk. The rise in hip and chest circumferences with age was significantly greater in women who ran under eight km/wk than in longer distance runners for hip (0.231 ± 0.018 vs 0.136 ± 0.004 cm/year) and chest circumferences (0.137 ± 0.013 vs 0.053 ± 0.003 cm/year). These cross-sectional associations suggest that in women, age and vigorous exercise interact with each other in affecting adiposity. The extent that these cross-sectional associations are causally related to vigorous exercise or are the consequence of self-selection remains to be determined.
GPU-accelerated regularized iterative reconstruction for few-view cone beam CT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Matenine, Dmitri, E-mail: dmitri.matenine.1@ulaval.ca; Goussard, Yves, E-mail: yves.goussard@polymtl.ca; Després, Philippe, E-mail: philippe.despres@phy.ulaval.ca
2015-04-15
Purpose: The present work proposes an iterative reconstruction technique designed for x-ray transmission computed tomography (CT). The main objective is to provide a model-based solution to the cone-beam CT reconstruction problem, yielding accurate low-dose images via few-views acquisitions in clinically acceptable time frames. Methods: The proposed technique combines a modified ordered subsets convex (OSC) algorithm and the total variation minimization (TV) regularization technique and is called OSC-TV. The number of subsets of each OSC iteration follows a reduction pattern in order to ensure the best performance of the regularization method. Considering the high computational cost of the algorithm, it is implemented on a graphics processing unit, using parallelization to accelerate computations. Results: The reconstructions were performed on computer-simulated as well as human pelvic cone-beam CT projection data and image quality was assessed. In terms of convergence and image quality, OSC-TV performs well in reconstruction of low-dose cone-beam CT data obtained via a few-view acquisition protocol. It compares favorably to the few-view TV-regularized projections onto convex sets (POCS-TV) algorithm. It also appears to be a viable alternative to full-dataset filtered backprojection. Execution times are 1–2 min and are compatible with the typical clinical workflow for nonreal-time applications. Conclusions: Considering the image quality and execution times, this method may be useful for reconstruction of low-dose clinical acquisitions. It may be of particular benefit to patients who undergo multiple acquisitions by reducing the overall imaging radiation dose and associated risks.
Affine invariants of convex polygons.
Flusser, Jan
2002-01-01
In this correspondence, we prove that the affine invariants, for image registration and object recognition, proposed recently by Yang and Cohen (see ibid., vol.8, no.7, p.934-46, July 1999) are algebraically dependent. We show how to select an independent and complete set of the invariants. The use of this new set leads to a significant reduction of the computing complexity without decreasing the discrimination power.
What Makes Industries Strategic
1990-01-01
1988, America’s dollar GNP per employee fell below the average of the next six largest market economies for the first time in this century (chart... manufacturing value added divided by full-time equivalent employees (with and without SIC 35, which contains computers). Chart 2. Productivity in... been available in English. Employees at Convex, a mini-supercomputer maker, had to learn the Japanese alphabet before they realized the opportunity
A ‘Generalized Distance’ Estimation Procedure for Intra-Urban Interaction
The estimation of urban and regional travel patterns has been a necessary part of current efforts to establish land use guidelines for the Texas... paper details computational experience with travel estimation within Corpus Christi, Texas, using a new convex programming approach of Charnes, Raike and Bettinger. It is found that available estimation techniques necessarily result in non-integer solutions. A mathematical device is therefore...
The role of convexity in perception of symmetry and in visual short-term memory.
Bertamini, Marco; Helmy, Mai Salah; Hulleman, Johan
2013-01-01
Visual perception of shape is affected by coding of local convexities and concavities. For instance, a recent study reported that deviations from symmetry carried by convexities were easier to detect than deviations carried by concavities. We removed some confounds and extended this work from a detection of reflection of a contour (i.e., bilateral symmetry), to a detection of repetition of a contour (i.e., translational symmetry). We tested whether any convexity advantage is specific to bilateral symmetry in a two-interval (Experiment 1) and a single-interval (Experiment 2) detection task. In both, we found a convexity advantage only for repetition. When we removed the need to choose which region of the contour to monitor (Experiment 3) the effect disappeared. In a second series of studies, we again used shapes with multiple convex or concave features. Participants performed a change detection task in which only one of the features could change. We did not find any evidence that convexities are special in visual short-term memory, when the to-be-remembered features only changed shape (Experiment 4), when they changed shape and changed from concave to convex and vice versa (Experiment 5), or when these conditions were mixed (Experiment 6). We did find a small advantage for coding convexity as well as concavity over an isolated (and thus ambiguous) contour. The latter is consistent with the known effect of closure on processing of shape. We conclude that convexity plays a role in many perceptual tasks but that it does not have a basic encoding advantage over concavity.
Generalized Bregman distances and convergence rates for non-convex regularization methods
NASA Astrophysics Data System (ADS)
Grasmair, Markus
2010-11-01
We generalize the notion of Bregman distance using concepts from abstract convexity in order to derive convergence rates for Tikhonov regularization with non-convex regularization terms. In particular, we study the non-convex regularization of linear operator equations on Hilbert spaces, showing that the conditions required for the application of the convergence rates results are strongly related to the standard range conditions from the convex case. Moreover, we consider the setting of sparse regularization, where we show that a rate of order δ^(1/p) holds, if the regularization term has a slightly faster growth at zero than |t|^p.
PLOT3D/AMES, APOLLO UNIX VERSION USING GMR3D (WITHOUT TURB3D)
NASA Technical Reports Server (NTRS)
Buning, P.
1994-01-01
PLOT3D is an interactive graphics program designed to help scientists visualize computational fluid dynamics (CFD) grids and solutions. Today, supercomputers and CFD algorithms can provide scientists with simulations of such highly complex phenomena that obtaining an understanding of the simulations has become a major problem. Tools which help the scientist visualize the simulations can be of tremendous aid. PLOT3D/AMES offers more functions and features, and has been adapted for more types of computers than any other CFD graphics program. Version 3.6b+ is supported for five computers and graphic libraries. Using PLOT3D, CFD physicists can view their computational models from any angle, observing the physics of problems and the quality of solutions. As an aid in designing aircraft, for example, PLOT3D's interactive computer graphics can show vortices, temperature, reverse flow, pressure, and dozens of other characteristics of air flow during flight. As critical areas become obvious, they can easily be studied more closely using a finer grid. PLOT3D is part of a computational fluid dynamics software cycle. First, a program such as 3DGRAPE (ARC-12620) helps the scientist generate computational grids to model an object and its surrounding space. Once the grids have been designed and parameters such as the angle of attack, Mach number, and Reynolds number have been specified, a "flow-solver" program such as INS3D (ARC-11794 or COS-10019) solves the system of equations governing fluid flow, usually on a supercomputer. Grids sometimes have as many as two million points, and the "flow-solver" produces a solution file which contains density, x- y- and z-momentum, and stagnation energy for each grid point. With such a solution file and a grid file containing up to 50 grids as input, PLOT3D can calculate and graphically display any one of 74 functions, including shock waves, surface pressure, velocity vectors, and particle traces. PLOT3D's 74 functions are organized into five groups: 1) Grid Functions for grids, grid-checking, etc.; 2) Scalar Functions for contour or carpet plots of density, pressure, temperature, Mach number, vorticity magnitude, helicity, etc.; 3) Vector Functions for vector plots of velocity, vorticity, momentum, and density gradient, etc.; 4) Particle Trace Functions for rake-like plots of particle flow or vortex lines; and 5) Shock locations based on pressure gradient. TURB3D is a modification of PLOT3D which is used for viewing CFD simulations of incompressible turbulent flow. Input flow data consists of pressure, velocity and vorticity. Typical quantities to plot include local fluctuations in flow quantities and turbulent production terms, plotted in physical or wall units. PLOT3D/TURB3D includes both TURB3D and PLOT3D because the operation of TURB3D is identical to PLOT3D, and there is no additional sample data or printed documentation for TURB3D. Graphical capabilities of PLOT3D version 3.6b+ vary among the implementations available through COSMIC. Customers are encouraged to purchase and carefully review the PLOT3D manual before ordering the program for a specific computer and graphics library. There is only one manual for use with all implementations of PLOT3D, and although this manual generally assumes that the Silicon Graphics Iris implementation is being used, informative comments concerning other implementations appear throughout the text. 
With all implementations, the visual representation of the object and flow field created by PLOT3D consists of points, lines, and polygons. Points can be represented with dots or symbols, color can be used to denote data values, and perspective is used to show depth. Differences among implementations impact the program's ability to use graphical features that are based on 3D polygons, the user's ability to manipulate the graphical displays, and the user's ability to obtain alternate forms of output. The Apollo implementation of PLOT3D uses some of the capabilities of Apollo's 3-dimensional graphics hardware, but does not take advantage of the shading and hidden line/surface removal capabilities of the Apollo DN10000. Although this implementation does not offer a capability for putting text on plots, it does support the use of a mouse to translate, rotate, or zoom in on views. The version 3.6b+ Apollo implementations of PLOT3D (ARC-12789) and PLOT3D/TURB3D (ARC-12785) were developed for use on Apollo computers running UNIX System V with BSD 4.3 extensions and the graphics library GMR3D Version 2.0. The standard distribution media for each of these programs is a 9-track, 6250 bpi magnetic tape in TAR format. Customers purchasing one implementation version of PLOT3D or PLOT3D/TURB3D will be given a $200 discount on each additional implementation version ordered at the same time. Version 3.6b+ of PLOT3D and PLOT3D/TURB3D are also supported for the following computers and graphics libraries: 1) generic UNIX Supercomputer and IRIS, suitable for CRAY 2/UNICOS, CONVEX, and Alliant with remote IRIS 2xxx/3xxx or IRIS 4D (ARC-12779, ARC-12784); 2) VAX computers running VMS Version 5.0 and DISSPLA Version 11.0 (ARC-12777, ARC-12781); 3) generic UNIX and DISSPLA Version 11.0 (ARC-12788, ARC-12778); and (4) Silicon Graphics IRIS 2xxx/3xxx or IRIS 4D workstations (ARC-12783, ARC-12782). Silicon Graphics Iris, IRIS 4D, and IRIS 2xxx/3xxx are trademarks of Silicon Graphics Incorporated. VAX and VMS are trademarks of Digital Electronics Corporation. DISSPLA is a trademark of Computer Associates. CRAY 2 and UNICOS are trademarks of CRAY Research, Incorporated. CONVEX is a trademark of Convex Computer Corporation. Alliant is a trademark of Alliant. Apollo and GMR3D are trademarks of Hewlett-Packard, Incorporated. UNIX is a registered trademark of AT&T.
PLOT3D/AMES, SGI IRIS VERSION (WITHOUT TURB3D)
NASA Technical Reports Server (NTRS)
Buning, P.
1994-01-01
PLOT3D is an interactive graphics program designed to help scientists visualize computational fluid dynamics (CFD) grids and solutions. Today, supercomputers and CFD algorithms can provide scientists with simulations of such highly complex phenomena that obtaining an understanding of the simulations has become a major problem. Tools which help the scientist visualize the simulations can be of tremendous aid. PLOT3D/AMES offers more functions and features, and has been adapted for more types of computers than any other CFD graphics program. Version 3.6b+ is supported for five computers and graphic libraries. Using PLOT3D, CFD physicists can view their computational models from any angle, observing the physics of problems and the quality of solutions. As an aid in designing aircraft, for example, PLOT3D's interactive computer graphics can show vortices, temperature, reverse flow, pressure, and dozens of other characteristics of air flow during flight. As critical areas become obvious, they can easily be studied more closely using a finer grid. PLOT3D is part of a computational fluid dynamics software cycle. First, a program such as 3DGRAPE (ARC-12620) helps the scientist generate computational grids to model an object and its surrounding space. Once the grids have been designed and parameters such as the angle of attack, Mach number, and Reynolds number have been specified, a "flow-solver" program such as INS3D (ARC-11794 or COS-10019) solves the system of equations governing fluid flow, usually on a supercomputer. Grids sometimes have as many as two million points, and the "flow-solver" produces a solution file which contains density, x- y- and z-momentum, and stagnation energy for each grid point. With such a solution file and a grid file containing up to 50 grids as input, PLOT3D can calculate and graphically display any one of 74 functions, including shock waves, surface pressure, velocity vectors, and particle traces. PLOT3D's 74 functions are organized into five groups: 1) Grid Functions for grids, grid-checking, etc.; 2) Scalar Functions for contour or carpet plots of density, pressure, temperature, Mach number, vorticity magnitude, helicity, etc.; 3) Vector Functions for vector plots of velocity, vorticity, momentum, and density gradient, etc.; 4) Particle Trace Functions for rake-like plots of particle flow or vortex lines; and 5) Shock locations based on pressure gradient. TURB3D is a modification of PLOT3D which is used for viewing CFD simulations of incompressible turbulent flow. Input flow data consists of pressure, velocity and vorticity. Typical quantities to plot include local fluctuations in flow quantities and turbulent production terms, plotted in physical or wall units. PLOT3D/TURB3D includes both TURB3D and PLOT3D because the operation of TURB3D is identical to PLOT3D, and there is no additional sample data or printed documentation for TURB3D. Graphical capabilities of PLOT3D version 3.6b+ vary among the implementations available through COSMIC. Customers are encouraged to purchase and carefully review the PLOT3D manual before ordering the program for a specific computer and graphics library. There is only one manual for use with all implementations of PLOT3D, and although this manual generally assumes that the Silicon Graphics Iris implementation is being used, informative comments concerning other implementations appear throughout the text. 
With all implementations, the visual representation of the object and flow field created by PLOT3D consists of points, lines, and polygons. Points can be represented with dots or symbols, color can be used to denote data values, and perspective is used to show depth. Differences among implementations impact the program's ability to use graphical features that are based on 3D polygons, the user's ability to manipulate the graphical displays, and the user's ability to obtain alternate forms of output. In each of these areas, the IRIS implementation of PLOT3D offers advanced features which aid visualization efforts. Shading and hidden line/surface removal can be used to enhance depth perception and other aspects of the graphical displays. A mouse can be used to translate, rotate, or zoom in on views. Files for several types of output can be produced. Two animation options are even offered: creation of simple animation sequences without the need for other software; and, creation of files for use in GAS (Graphics Animation System, ARC-12379), an IRIS program which offers more complex rendering and animation capabilities and can record images to digital disk, video tape, or 16-mm film. The version 3.6b+ SGI implementations of PLOT3D (ARC-12783) and PLOT3D/TURB3D (ARC-12782) were developed for use on Silicon Graphics IRIS 2xxx/3xxx or IRIS 4D workstations. These programs are each distributed on one .25 inch magnetic tape cartridge in IRIS TAR format. Customers purchasing one implementation version of PLOT3D or PLOT3D/TURB3D will be given a $200 discount on each additional implementation version ordered at the same time. Version 3.6b+ of PLOT3D and PLOT3D/TURB3D are also supported for the following computers and graphics libraries: (1) generic UNIX Supercomputer and IRIS, suitable for CRAY 2/UNICOS, CONVEX, and Alliant with remote IRIS 2xxx/3xxx or IRIS 4D (ARC-12779, ARC-12784); (2) VAX computers running VMS Version 5.0 and DISSPLA Version 11.0 (ARC-12777,ARC-12781); (3) generic UNIX and DISSPLA Version 11.0 (ARC-12788, ARC-12778); and (4) Apollo computers running UNIX and GMR3D Version 2.0 (ARC-12789, ARC-12785 which have no capabilities to put text on plots). Silicon Graphics Iris, IRIS 4D, and IRIS 2xxx/3xxx are trademarks of Silicon Graphics Incorporated. VAX and VMS are trademarks of Digital Electronics Corporation. DISSPLA is a trademark of Computer Associates. CRAY 2 and UNICOS are trademarks of CRAY Research, Incorporated. CONVEX is a trademark of Convex Computer Corporation. Alliant is a trademark of Alliant. Apollo and GMR3D are trademarks of Hewlett-Packard, Incorporated. UNIX is a registered trademark of AT&T.
PLOT3D/AMES, APOLLO UNIX VERSION USING GMR3D (WITH TURB3D)
NASA Technical Reports Server (NTRS)
Buning, P.
1994-01-01
PLOT3D is an interactive graphics program designed to help scientists visualize computational fluid dynamics (CFD) grids and solutions. Today, supercomputers and CFD algorithms can provide scientists with simulations of such highly complex phenomena that obtaining an understanding of the simulations has become a major problem. Tools which help the scientist visualize the simulations can be of tremendous aid. PLOT3D/AMES offers more functions and features, and has been adapted for more types of computers, than any other CFD graphics program. Version 3.6b+ is supported for five computers and graphics libraries. Using PLOT3D, CFD physicists can view their computational models from any angle, observing the physics of problems and the quality of solutions. As an aid in designing aircraft, for example, PLOT3D's interactive computer graphics can show vortices, temperature, reverse flow, pressure, and dozens of other characteristics of air flow during flight. As critical areas become obvious, they can easily be studied more closely using a finer grid. PLOT3D is part of a computational fluid dynamics software cycle. First, a program such as 3DGRAPE (ARC-12620) helps the scientist generate computational grids to model an object and its surrounding space. Once the grids have been designed and parameters such as the angle of attack, Mach number, and Reynolds number have been specified, a "flow-solver" program such as INS3D (ARC-11794 or COS-10019) solves the system of equations governing fluid flow, usually on a supercomputer. Grids sometimes have as many as two million points, and the "flow-solver" produces a solution file which contains density, x-, y-, and z-momentum, and stagnation energy for each grid point. With such a solution file and a grid file containing up to 50 grids as input, PLOT3D can calculate and graphically display any one of 74 functions, including shock waves, surface pressure, velocity vectors, and particle traces. PLOT3D's 74 functions are organized into five groups: 1) Grid Functions for grids, grid-checking, etc.; 2) Scalar Functions for contour or carpet plots of density, pressure, temperature, Mach number, vorticity magnitude, helicity, etc.; 3) Vector Functions for vector plots of velocity, vorticity, momentum, and density gradient, etc.; 4) Particle Trace Functions for rake-like plots of particle flow or vortex lines; and 5) Shock locations based on pressure gradient. TURB3D is a modification of PLOT3D which is used for viewing CFD simulations of incompressible turbulent flow. Input flow data consist of pressure, velocity, and vorticity. Typical quantities to plot include local fluctuations in flow quantities and turbulent production terms, plotted in physical or wall units. PLOT3D/TURB3D includes both TURB3D and PLOT3D because the operation of TURB3D is identical to PLOT3D, and there is no additional sample data or printed documentation for TURB3D. Graphical capabilities of PLOT3D version 3.6b+ vary among the implementations available through COSMIC. Customers are encouraged to purchase and carefully review the PLOT3D manual before ordering the program for a specific computer and graphics library. There is only one manual for use with all implementations of PLOT3D, and although this manual generally assumes that the Silicon Graphics Iris implementation is being used, informative comments concerning other implementations appear throughout the text.
With all implementations, the visual representation of the object and flow field created by PLOT3D consists of points, lines, and polygons. Points can be represented with dots or symbols, color can be used to denote data values, and perspective is used to show depth. Differences among implementations affect the program's ability to use graphical features that are based on 3D polygons, the user's ability to manipulate the graphical displays, and the user's ability to obtain alternate forms of output. The Apollo implementation of PLOT3D uses some of the capabilities of Apollo's 3-dimensional graphics hardware, but does not take advantage of the shading and hidden line/surface removal capabilities of the Apollo DN10000. Although this implementation does not offer a capability for putting text on plots, it does support the use of a mouse to translate, rotate, or zoom in on views. The version 3.6b+ Apollo implementations of PLOT3D (ARC-12789) and PLOT3D/TURB3D (ARC-12785) were developed for use on Apollo computers running UNIX System V with BSD 4.3 extensions and the graphics library GMR3D Version 2.0. The standard distribution medium for each of these programs is a 9-track, 6250 bpi magnetic tape in TAR format. Customers purchasing one implementation version of PLOT3D or PLOT3D/TURB3D will be given a $200 discount on each additional implementation version ordered at the same time. Version 3.6b+ of PLOT3D and PLOT3D/TURB3D are also supported for the following computers and graphics libraries: (1) generic UNIX Supercomputer and IRIS, suitable for CRAY 2/UNICOS, CONVEX, and Alliant with remote IRIS 2xxx/3xxx or IRIS 4D (ARC-12779, ARC-12784); (2) VAX computers running VMS Version 5.0 and DISSPLA Version 11.0 (ARC-12777, ARC-12781); (3) generic UNIX and DISSPLA Version 11.0 (ARC-12788, ARC-12778); and (4) Silicon Graphics IRIS 2xxx/3xxx or IRIS 4D workstations (ARC-12783, ARC-12782). Silicon Graphics Iris, IRIS 4D, and IRIS 2xxx/3xxx are trademarks of Silicon Graphics Incorporated. VAX and VMS are trademarks of Digital Equipment Corporation. DISSPLA is a trademark of Computer Associates. CRAY 2 and UNICOS are trademarks of CRAY Research, Incorporated. CONVEX is a trademark of Convex Computer Corporation. Alliant is a trademark of Alliant. Apollo and GMR3D are trademarks of Hewlett-Packard, Incorporated. UNIX is a registered trademark of AT&T.
PLOT3D/AMES, SGI IRIS VERSION (WITH TURB3D)
NASA Technical Reports Server (NTRS)
Buning, P.
1994-01-01
PLOT3D is an interactive graphics program designed to help scientists visualize computational fluid dynamics (CFD) grids and solutions. Today, supercomputers and CFD algorithms can provide scientists with simulations of such highly complex phenomena that obtaining an understanding of the simulations has become a major problem. Tools which help the scientist visualize the simulations can be of tremendous aid. PLOT3D/AMES offers more functions and features, and has been adapted for more types of computers, than any other CFD graphics program. Version 3.6b+ is supported for five computers and graphics libraries. Using PLOT3D, CFD physicists can view their computational models from any angle, observing the physics of problems and the quality of solutions. As an aid in designing aircraft, for example, PLOT3D's interactive computer graphics can show vortices, temperature, reverse flow, pressure, and dozens of other characteristics of air flow during flight. As critical areas become obvious, they can easily be studied more closely using a finer grid. PLOT3D is part of a computational fluid dynamics software cycle. First, a program such as 3DGRAPE (ARC-12620) helps the scientist generate computational grids to model an object and its surrounding space. Once the grids have been designed and parameters such as the angle of attack, Mach number, and Reynolds number have been specified, a "flow-solver" program such as INS3D (ARC-11794 or COS-10019) solves the system of equations governing fluid flow, usually on a supercomputer. Grids sometimes have as many as two million points, and the "flow-solver" produces a solution file which contains density, x-, y-, and z-momentum, and stagnation energy for each grid point. With such a solution file and a grid file containing up to 50 grids as input, PLOT3D can calculate and graphically display any one of 74 functions, including shock waves, surface pressure, velocity vectors, and particle traces. PLOT3D's 74 functions are organized into five groups: 1) Grid Functions for grids, grid-checking, etc.; 2) Scalar Functions for contour or carpet plots of density, pressure, temperature, Mach number, vorticity magnitude, helicity, etc.; 3) Vector Functions for vector plots of velocity, vorticity, momentum, and density gradient, etc.; 4) Particle Trace Functions for rake-like plots of particle flow or vortex lines; and 5) Shock locations based on pressure gradient. TURB3D is a modification of PLOT3D which is used for viewing CFD simulations of incompressible turbulent flow. Input flow data consist of pressure, velocity, and vorticity. Typical quantities to plot include local fluctuations in flow quantities and turbulent production terms, plotted in physical or wall units. PLOT3D/TURB3D includes both TURB3D and PLOT3D because the operation of TURB3D is identical to PLOT3D, and there is no additional sample data or printed documentation for TURB3D. Graphical capabilities of PLOT3D version 3.6b+ vary among the implementations available through COSMIC. Customers are encouraged to purchase and carefully review the PLOT3D manual before ordering the program for a specific computer and graphics library. There is only one manual for use with all implementations of PLOT3D, and although this manual generally assumes that the Silicon Graphics Iris implementation is being used, informative comments concerning other implementations appear throughout the text.
With all implementations, the visual representation of the object and flow field created by PLOT3D consists of points, lines, and polygons. Points can be represented with dots or symbols, color can be used to denote data values, and perspective is used to show depth. Differences among implementations affect the program's ability to use graphical features that are based on 3D polygons, the user's ability to manipulate the graphical displays, and the user's ability to obtain alternate forms of output. In each of these areas, the IRIS implementation of PLOT3D offers advanced features which aid visualization efforts. Shading and hidden line/surface removal can be used to enhance depth perception and other aspects of the graphical displays. A mouse can be used to translate, rotate, or zoom in on views. Files for several types of output can be produced. Two animation options are offered: creation of simple animation sequences without the need for other software, and creation of files for use in GAS (Graphics Animation System, ARC-12379), an IRIS program which offers more complex rendering and animation capabilities and can record images to digital disk, video tape, or 16-mm film. The version 3.6b+ SGI implementations of PLOT3D (ARC-12783) and PLOT3D/TURB3D (ARC-12782) were developed for use on Silicon Graphics IRIS 2xxx/3xxx or IRIS 4D workstations. These programs are each distributed on one 0.25-inch magnetic tape cartridge in IRIS TAR format. Customers purchasing one implementation version of PLOT3D or PLOT3D/TURB3D will be given a $200 discount on each additional implementation version ordered at the same time. Version 3.6b+ of PLOT3D and PLOT3D/TURB3D are also supported for the following computers and graphics libraries: (1) generic UNIX Supercomputer and IRIS, suitable for CRAY 2/UNICOS, CONVEX, and Alliant with remote IRIS 2xxx/3xxx or IRIS 4D (ARC-12779, ARC-12784); (2) VAX computers running VMS Version 5.0 and DISSPLA Version 11.0 (ARC-12777, ARC-12781); (3) generic UNIX and DISSPLA Version 11.0 (ARC-12788, ARC-12778); and (4) Apollo computers running UNIX and GMR3D Version 2.0 (ARC-12789, ARC-12785; these versions have no capability to put text on plots). Silicon Graphics Iris, IRIS 4D, and IRIS 2xxx/3xxx are trademarks of Silicon Graphics Incorporated. VAX and VMS are trademarks of Digital Equipment Corporation. DISSPLA is a trademark of Computer Associates. CRAY 2 and UNICOS are trademarks of CRAY Research, Incorporated. CONVEX is a trademark of Convex Computer Corporation. Alliant is a trademark of Alliant. Apollo and GMR3D are trademarks of Hewlett-Packard, Incorporated. UNIX is a registered trademark of AT&T.
Patel, Priya; Wada, Hironobu; Hu, Hsin-Pei; Hirohashi, Kentaro; Kato, Tatsuya; Ujiie, Hideki; Ahn, Jin Young; Lee, Daiyoon; Geddie, William; Yasufuku, Kazuhiro
2017-04-01
Endobronchial ultrasonography (EBUS)-guided transbronchial needle aspiration allows for sampling of mediastinal lymph nodes. The external diameter, rigidity, and angulation of the convex probe EBUS limit its accessibility. This study compares the accessibility and transbronchial needle aspiration capability of the prototype thin convex probe EBUS against the convex probe EBUS in human ex vivo lungs rejected for transplant. The prototype thin convex probe EBUS (BF-Y0055; Olympus, Tokyo, Japan), with a thinner tip (5.9 mm), greater upward angle (170 degrees), and decreased forward oblique direction of view (20 degrees), was compared with the current convex probe EBUS (6.9-mm tip, 120 degrees, and 35 degrees, respectively). Accessibility and transbronchial needle aspiration capability were assessed in ex vivo human lungs declined for lung transplant. The distance of maximum reach and the sustainable endoscopic limit were measured. Transbronchial needle aspiration capability was assessed using the prototype 25G aspiration needle in segmental lymph nodes. In all evaluated lungs (n = 5), the thin convex probe EBUS demonstrated greater reach and a higher success rate, averaging 22.1 mm greater maximum reach and a 10.3 mm further endoscopic visibility range than the convex probe EBUS, and could selectively assess almost all segmental bronchi (98% right, 91% left), demonstrating nearly twice the accessibility of the convex probe EBUS (48% right, 47% left). The prototype successfully enabled cytologic assessment of subsegmental lymph nodes with adequate quality using the dedicated 25G aspiration needle. The thin convex probe EBUS has greater accessibility to peripheral airways in human lungs and is capable of sampling segmental lymph nodes using the aspiration needle. This will allow for more precise assessment of N1 nodes and, possibly, intrapulmonary lesions normally inaccessible to the conventional convex probe EBUS. Copyright © 2017 The Society of Thoracic Surgeons. Published by Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Jakovetic, Dusan; Xavier, João; Moura, José M. F.
2011-08-01
We study distributed optimization in networked systems, where nodes cooperate to find the optimal quantity of common interest, x = x*. The objective function of the corresponding optimization problem is the sum of the nodes' private convex objectives (each known only to its node), and each node imposes a private convex constraint on the allowed values of x. We solve this problem for generic connected network topologies with asymmetric random link failures with a novel distributed, decentralized algorithm. We refer to this algorithm as AL-G (augmented Lagrangian gossiping), and to its variants as AL-MG (augmented Lagrangian multi-neighbor gossiping) and AL-BG (augmented Lagrangian broadcast gossiping). The AL-G algorithm is based on the augmented Lagrangian dual function. Dual variables are updated by the standard method of multipliers, at a slow time scale. To update the primal variables, we propose a novel Gauss-Seidel-type randomized algorithm, at a fast time scale. AL-G uses unidirectional gossip communication only between immediate neighbors in the network, and is resilient to random link failures. For networks with reliable communication (i.e., no failures), the simplified AL-BG algorithm reduces communication, computation, and data storage cost. We prove convergence for all proposed algorithms and demonstrate by simulations their effectiveness on two applications: l1-regularized logistic regression for classification and cooperative spectrum sensing for cognitive radio networks.
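To make the two-time-scale mechanics above concrete, here is a toy two-node sketch in Python of the augmented Lagrangian method of multipliers with Gauss-Seidel primal updates; it is a generic illustration with made-up quadratic objectives, not the AL-G algorithm itself (no gossip protocol, link failures, or private constraints are modeled):

# Minimize f1(x) + f2(x), with f_i(x) = a_i/2 * (x - b_i)^2, via local
# copies x1, x2 and a consensus constraint x1 == x2.
a1, b1 = 1.0, 0.0     # node 1's private objective (made-up data)
a2, b2 = 3.0, 4.0     # node 2's private objective (made-up data)
rho, lam = 1.0, 0.0   # penalty parameter and dual variable
x1 = x2 = 0.0

for _ in range(200):
    # Gauss-Seidel primal updates (fast time scale): each node minimizes
    # the augmented Lagrangian over its own variable, given its neighbor's.
    x1 = (a1 * b1 - lam + rho * x2) / (a1 + rho)
    x2 = (a2 * b2 + lam + rho * x1) / (a2 + rho)
    lam += rho * (x1 - x2)   # method-of-multipliers dual update (slow time scale)

print(x1, x2)                           # both approach the consensus optimum
print((a1 * b1 + a2 * b2) / (a1 + a2))  # closed-form optimum: 3.0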
Geomorphological control on variably saturated hillslope hydrology and slope instability
Formetta, Giuseppe; Simoni, Silvia; Godt, Jonathan W.; Lu, Ning; Rigon, Riccardo
2016-01-01
In steep topography, the processes governing variably saturated subsurface hydrologic response and the interparticle stresses leading to shallow landslide initiation are physically linked. However, these processes are usually analyzed separately. Here, we take a combined approach, simultaneously analyzing the influence of topography on both hillslope hydrology and the effective stress field within the hillslope itself. Clearly, runoff and saturated groundwater flow are dominated by gravity and, ultimately, by topography. Less clear is how landscape morphology influences flows in the vadose zone, where transient fluxes are usually taken to be vertical. We aim to assess and quantify the impact of topography on both saturated and unsaturated hillslope hydrology and its effects on shallow slope stability. Three real hillslope morphologies (concave, convex, and planar) are analyzed using a 3-D, physically based, distributed model coupled with a module that computes the probability of failure under the infinite slope assumption. The results of the analyses, which included an uncertainty analysis over the parameters, show that convex and planar slopes are more stable than concave slopes. Specifically, under the same initial, boundary, and infiltration conditions, the percentage of unstable area ranges from 1.3% for the planar hillslope and 21% for the convex one up to a maximum of 33% for the concave morphology. The results are supported by a sensitivity analysis carried out to examine the effect of initial conditions and rainfall intensity.
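For orientation, the infinite-slope factor of safety underlying such probability-of-failure modules is a classical textbook expression; the Python sketch below evaluates it with illustrative parameter values, not the paper's data:

import numpy as np

def factor_of_safety(c, phi, gamma, z, beta, u):
    # c: effective cohesion [Pa]; phi: friction angle [rad];
    # gamma: soil unit weight [N/m^3]; z: failure-plane depth [m];
    # beta: slope angle [rad]; u: pore-water pressure at depth z [Pa].
    resisting = c + (gamma * z * np.cos(beta) ** 2 - u) * np.tan(phi)
    driving = gamma * z * np.sin(beta) * np.cos(beta)
    return resisting / driving   # FS < 1 indicates instability

# Wetter conditions (higher pore pressure u) push FS toward failure:
print(factor_of_safety(2e3, np.radians(32), 1.9e4, 1.5, np.radians(35), 0.0))  # ~1.04
print(factor_of_safety(2e3, np.radians(32), 1.9e4, 1.5, np.radians(35), 8e3))  # ~0.67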
Lanczos eigensolution method for high-performance computers
NASA Technical Reports Server (NTRS)
Bostic, Susan W.
1991-01-01
The theory, computational analysis, and applications of a Lanczos algorithm for high-performance computers are presented. The computationally intensive steps of the algorithm are identified as the matrix factorization, the forward/backward equation solution, and the matrix-vector multiplications. These computational steps are optimized to exploit the vector and parallel capabilities of high-performance computers. The savings in computational time from applying optimization techniques such as variable-band and sparse data storage and access, loop unrolling, use of local memory, and compiler directives are presented. Two large-scale structural analysis applications are described: the buckling of a composite blade-stiffened panel with a cutout, and the vibration analysis of a high-speed civil transport. The sequential computational time of 181.6 seconds for the panel problem executed on a CONVEX computer was decreased to 14.1 seconds with the optimized vector algorithm. The best computational time of 23 seconds for the transport problem, with 17,000 degrees of freedom, was obtained on the Cray Y-MP using an average of 3.63 processors.
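The core iteration is compact enough to sketch. The following is a textbook Lanczos tridiagonalization in Python/NumPy, with no reorthogonalization, vectorization, or sparse storage, so it is a conceptual sketch rather than the optimized code the report describes:

import numpy as np

def lanczos(A, q0, m):
    # m steps of Lanczos for symmetric A; returns the diagonal (alpha) and
    # off-diagonal (beta) of the tridiagonal matrix T_m, whose extremal
    # eigenvalues approximate those of A.
    q = q0 / np.linalg.norm(q0)
    q_prev = np.zeros_like(q)
    alpha, beta = [], []
    b = 0.0
    for _ in range(m):
        w = A @ q - b * q_prev        # the matrix-vector multiplication step
        a = q @ w
        w -= a * q
        b = np.linalg.norm(w)
        alpha.append(a)
        beta.append(b)
        if b == 0.0:
            break
        q_prev, q = q, w / b
    return np.array(alpha), np.array(beta[:-1])

rng = np.random.default_rng(0)
M = rng.standard_normal((500, 500))
A = (M + M.T) / 2
a, b = lanczos(A, rng.standard_normal(500), 60)
T = np.diag(a) + np.diag(b, 1) + np.diag(b, -1)
print(np.linalg.eigvalsh(T)[-1], np.linalg.eigvalsh(A)[-1])  # nearly equal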
Radius of convexity of a certain class of close-to-convex functions
NASA Astrophysics Data System (ADS)
Yahya, Abdullah; Soh, Shaharuddin Cik
2017-11-01
In the present paper, we consider and investigate a certain class of close-to-convex functions defined in the unit disk U = {z : |z| < 1}, namely functions satisfying Re{ e^{iα} z f'(z) / (f(z) - f(-z)) } > δ, where |α| < π, cos(α) > δ, and 0 ≤ δ < 1. Furthermore, we obtain a preliminary bound for f'(z) and determine the radius of convexity.
Convex Graph Invariants
Chandrasekaran, Venkat; Parrilo, Pablo A.; Willsky, Alan S.
2010-12-02
In this paper we study convex graph invariants, which are graph invariants that are convex functions of the adjacency matrix of a graph. Some examples
Monte Carlo Simulations for a LEP Experiment with UNIX Workstation Clusters
NASA Astrophysics Data System (ADS)
Bonesini, M.; Calegari, A.; Rossi, P.; Rossi, V.
Modular systems of RISC-CPU-based computers have been implemented for large-scale production of Monte Carlo simulated events for the DELPHI experiment at CERN. From a pilot system based on DEC 5000 CPUs, a full-size system based on a CONVEX C3820 UNIX supercomputer and a cluster of HP 735 workstations has been put into operation as a joint effort between INFN Milano and CILEA.
Distributed Matrix Completion: Applications to Cooperative Positioning in Noisy Environments
2013-12-11
positioning, and a gossip version of low-rank approximation were developed. A convex relaxation for positioning in the presence of noise was shown...computing the leading eigenvectors of a large data matrix through gossip algorithms. A new algorithm is proposed that amounts to iteratively multiplying...generalization of gossip algorithms for consensus. The algorithms outperform state-of-the-art methods in a communication-limited scenario. Positioning via
Application of sound and temperature to control boundary-layer transition
NASA Technical Reports Server (NTRS)
Maestrello, Lucio; Parikh, Paresh; Bayliss, A.; Huang, L. S.; Bryant, T. D.
1987-01-01
The growth and decay of a wave packet convecting in a boundary layer over a concave-convex surface, and its active control by localized surface heating, are studied numerically using direct computations of the Navier-Stokes equations. The resulting sound radiation is computed using the linearized Euler equations with the pressure from the Navier-Stokes solution as a time-dependent boundary condition. It is shown that on the concave portion the amplitude of the wave packet increases and its bandwidth broadens, while on the convex portion some of the components in the packet are stabilized. The pressure field decays exponentially away from the surface and then algebraically, exhibiting a decay characteristic of acoustic waves in two dimensions. The far-field acoustic behavior exhibits super-directivity, with beaming downstream. Active control by surface heating is shown to reduce the growth of the wave packet but to have little effect on the acoustic far field for the cases considered. Active control by sound emanating from the surface of an airfoil in the vicinity of the leading edge is investigated experimentally. The purpose is to control the separated region at high angles of attack. The results show that injection of sound at the shedding frequency of the flow is effective in increasing lift and reducing drag.
A fast 4D cone beam CT reconstruction method based on the OSC-TV algorithm.
Mascolo-Fortin, Julia; Matenine, Dmitri; Archambault, Louis; Després, Philippe
2018-01-01
Four-dimensional cone beam computed tomography allows for temporally resolved imaging with useful applications in radiotherapy, but raises particular challenges in terms of image quality and computation time. The purpose of this work is to develop a fast and accurate 4D algorithm by adapting a GPU-accelerated ordered subsets convex algorithm (OSC), combined with the total variation minimization regularization technique (TV). Different initialization schemes were studied to adapt the OSC-TV algorithm to 4D reconstruction: each respiratory phase was initialized either with a 3D reconstruction or a blank image. Reconstruction algorithms were tested on a dynamic numerical phantom and on a clinical dataset. 4D iterations were implemented for a cluster of 8 GPUs. All developed methods allowed for an adequate visualization of the respiratory movement and compared favorably to the McKinnon-Bates and adaptive steepest descent projection onto convex sets algorithms, while the 4D reconstructions initialized from a prior 3D reconstruction led to better overall image quality. The most suitable adaptation of OSC-TV to 4D CBCT was found to be a combination of a prior FDK reconstruction and a 4D OSC-TV reconstruction with a reconstruction time of 4.5 minutes. This relatively short reconstruction time could facilitate a clinical use.
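The role of the TV term in such a scheme can be previewed in isolation with an off-the-shelf denoiser. This generic illustration uses scikit-image (assuming it is installed); it is not the authors' GPU-accelerated OSC-TV code:

import numpy as np
from skimage.restoration import denoise_tv_chambolle

rng = np.random.default_rng(1)
phantom = np.zeros((128, 128))
phantom[32:96, 32:96] = 1.0                       # piecewise-constant object
noisy = phantom + 0.3 * rng.standard_normal(phantom.shape)

# TV minimization favors piecewise-constant images: it suppresses noise
# while preserving edges, which is its role between OSC update passes.
clean = denoise_tv_chambolle(noisy, weight=0.15)
print(np.abs(noisy - phantom).mean(), np.abs(clean - phantom).mean())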
Time-frequency filtering and synthesis from convex projections
NASA Astrophysics Data System (ADS)
White, Langford B.
1990-11-01
This paper describes the application of the theory of projections onto convex sets to time-frequency filtering and synthesis problems. We show that the class of Wigner-Ville distributions (WVDs) of L2 signals forms the boundary of a closed convex subset of L2(R2). This result is obtained by considering the convex set of states on the Heisenberg group, of which the ambiguity functions form the extreme points. The form of the projection onto the set of WVDs is deduced. Various linear and non-linear filtering operations are incorporated by formulating them as convex projections. An example algorithm for simultaneous time-frequency filtering and synthesis is suggested.
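The projection-onto-convex-sets (POCS) mechanism itself is easy to demonstrate. Below is a generic Python toy that alternates Euclidean projections between two simple convex sets; the paper's sets, built from WVD constraints, are of course more elaborate:

import numpy as np

# Convex sets: C1 = {x : x >= 0} and the affine set C2 = {x : sum(x) = 1}.
def proj_C1(x):
    return np.maximum(x, 0.0)            # projection onto the nonnegative orthant

def proj_C2(x):
    return x + (1.0 - x.sum()) / x.size  # projection onto the hyperplane

x = np.random.default_rng(2).standard_normal(5)
for _ in range(200):
    x = proj_C2(proj_C1(x))   # alternating projections converge to a point
                              # in the intersection (here, the simplex)
print(x, x.sum())             # nonnegative to numerical tolerance, sums to 1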
System and method for controlling power consumption in a computer system based on user satisfaction
Yang, Lei; Dick, Robert P; Chen, Xi; Memik, Gokhan; Dinda, Peter A; Shy, Alex; Ozisikyilmaz, Berkin; Mallik, Arindam; Choudhary, Alok
2014-04-22
Systems and methods for controlling power consumption in a computer system. For each of a plurality of interactive applications, the method changes a frequency at which a processor of the computer system runs, receives an indication of user satisfaction, determines a relationship between the changed frequency and the user satisfaction of the interactive application, and stores the determined relationship information. The determined relationship can distinguish between different users and different interactive applications. A frequency may be selected from the discrete frequencies at which the processor of the computer system runs based on the determined relationship information for a particular user and a particular interactive application running on the processor of the computer system. The processor may be adapted to run at the selected frequency.
Computational Role of Tunneling in a Programmable Quantum Annealer
NASA Technical Reports Server (NTRS)
Boixo, Sergio; Smelyanskiy, Vadim; Shabani, Alireza; Isakov, Sergei V.; Dykman, Mark; Amin, Mohammad; Mohseni, Masoud; Denchev, Vasil S.; Neven, Hartmut
2016-01-01
Quantum tunneling is a phenomenon in which a quantum state tunnels through energy barriers above the energy of the state itself. Tunneling has been hypothesized as an advantageous physical resource for optimization. Here we present the first experimental evidence of a computational role of multiqubit quantum tunneling in the evolution of a programmable quantum annealer. We developed a theoretical model based on a NIBA quantum master equation to describe the multiqubit dissipative cotunneling effects under the complex noise characteristics of such quantum devices. We start by considering a computational primitive, the simplest non-convex optimization problem, consisting of just one global and one local minimum. The quantum evolutions enable tunneling to the global minimum while the corresponding classical paths are trapped in a false minimum. In our study the non-convex potentials are realized by frustrated networks of qubit clusters with strong intra-cluster coupling. We show that the collective effect of the quantum environment is suppressed in the critical phase during the evolution where quantum tunneling decides the right path to solution. In a later stage dissipation facilitates the multiqubit cotunneling leading to the solution state. The predictions of the model accurately describe the experimental data from the D-Wave II quantum annealer at NASA Ames. In our computational primitive the temperature dependence of the probability of success in the quantum model is opposite to that of the classical paths with thermal hopping. Specifically, we provide an analysis of an optimization problem with sixteen qubits, demonstrating eight-qubit cotunneling that increases success probabilities. Furthermore, we report results for larger problems with up to 200 qubits that contain the primitive as subproblems.
A formulation of a matrix sparsity approach for the quantum ordered search algorithm
NASA Astrophysics Data System (ADS)
Parmar, Jupinder; Rahman, Saarim; Thiara, Jaskaran
One specific subset of quantum algorithms is Grover's Ordered Search Problem (OSP), the quantum counterpart of the classical binary search algorithm, which utilizes oracle functions to produce a specified value within an ordered database. Classically, the optimal algorithm is known to have log2 N complexity; however, Grover's algorithm has been found to have an optimal complexity between the lower bound of (ln N - 1)/π ≈ 0.221 log2 N and the upper bound of 0.433 log2 N. We sought to lower the known upper bound of the OSP. Following Farhi et al. [MIT-CTP 2815 (1999), arXiv:quant-ph/9901059], the OSP can be resolved into a translationally invariant algorithm to create quantum query algorithm constraints. With these constraints, one can find Laurent polynomials for various numbers of queries k and database sizes N, thus finding larger recursive sets that solve the OSP and effectively reduce the upper bound. These polynomials are found to be convex functions, allowing one to make use of convex optimization to find an improvement on the known bounds. According to Childs et al. [Phys. Rev. A 75 (2007) 032335], semidefinite programming, a subset of convex optimization, can solve the particular problem represented by the constraints. We implemented a program abiding by their formulation of a semidefinite program (SDP) and found that it takes an immense amount of storage and time to compute. To combat this setback, we formulated an approach to improve the results of the SDP using matrix sparsity. Through the development of this approach, along with an implementation of a rudimentary solver, we demonstrate how matrix sparsity reduces the amount of time and storage required to compute the SDP, overall ensuring that further improvements will likely be made to reach the theorized lower bound.
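The storage side of that argument is easy to make concrete. The following generic SciPy comparison (not the authors' SDP solver) shows how exploiting sparsity shrinks the memory footprint of a mostly-zero matrix:

import numpy as np
from scipy import sparse

n = 4000
dense = np.eye(n)                 # extreme case: only the diagonal is nonzero
sp = sparse.csr_matrix(dense)     # compressed sparse row storage

dense_bytes = dense.nbytes
sparse_bytes = sp.data.nbytes + sp.indices.nbytes + sp.indptr.nbytes
print(dense_bytes // 2**20, "MiB dense")    # about 122 MiB
print(sparse_bytes // 2**10, "KiB sparse")  # tens of KiB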
Hao, Xiao-Hu; Zhang, Gui-Jun; Zhou, Xiao-Gen; Yu, Xu-Feng
2016-01-01
To address the problem of searching protein conformational space in ab initio protein structure prediction, a novel method using abstract convex underestimation (ACUE), based on the framework of evolutionary algorithms, was proposed. Computing such conformations, essential for associating structural and functional information with gene sequences, is challenging due to the high dimensionality and rugged energy surface of the protein conformational space. As a consequence, the dimension of the protein conformational space should be reduced to a proper level. In this paper, the high-dimensional original conformational space was converted into a feature space whose dimension is considerably reduced by a feature extraction technique, and the underestimate space could then be constructed according to abstract convex theory. Thus, the entropy effect caused by searching in the high-dimensional conformational space could be avoided through such conversion. Tight lower-bound estimate information was obtained to guide the searching direction, and invalid searching areas in which the global optimal solution is not located could be eliminated in advance. Moreover, instead of expensively calculating the energy of conformations in the original conformational space, the estimate value is employed to judge whether a conformation is worth exploring, reducing evaluation time and thereby making the computational cost lower and the searching process more efficient. Additionally, fragment assembly and the Monte Carlo method are combined to generate a series of metastable conformations by sampling the conformational space. The proposed method provides a novel technique for solving the searching problem of protein conformational space. Twenty small-to-medium structurally diverse proteins were tested, and the proposed ACUE method was compared with ItFix, HEA, Rosetta, and the developed method LEDE without underestimate information. Test results show that the ACUE method can more rapidly and more efficiently obtain the near-native protein structure.
CVXPY: A Python-Embedded Modeling Language for Convex Optimization.
Diamond, Steven; Boyd, Stephen
2016-04-01
CVXPY is a domain-specific language for convex optimization embedded in Python. It allows the user to express convex optimization problems in a natural syntax that follows the math, rather than in the restrictive standard form required by solvers. CVXPY makes it easy to combine convex optimization with high-level features of Python such as parallelism and object-oriented design. CVXPY is available at http://www.cvxpy.org/ under the GPL license, along with documentation and examples.
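For readers unfamiliar with the package, a minimal usage sketch (standard CVXPY idioms; the problem data are made up) looks like this:

import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
A, b = rng.standard_normal((20, 10)), rng.standard_normal(20)

x = cp.Variable(10)
# The problem is written the way the math reads: a least-squares objective
# with simplex constraints, not a solver's restrictive standard form.
prob = cp.Problem(cp.Minimize(cp.sum_squares(A @ x - b)),
                  [x >= 0, cp.sum(x) == 1])
prob.solve()
print(prob.status, prob.value)
print(x.value)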
Performance Measurement, Visualization and Modeling of Parallel and Distributed Programs
NASA Technical Reports Server (NTRS)
Yan, Jerry C.; Sarukkai, Sekhar R.; Mehra, Pankaj; Lum, Henry, Jr. (Technical Monitor)
1994-01-01
This paper presents a methodology for debugging the performance of message-passing programs on both tightly coupled and loosely coupled distributed-memory machines. The AIMS (Automated Instrumentation and Monitoring System) toolkit, a suite of software tools for measurement and analysis of performance, is introduced and its application illustrated using several benchmark programs drawn from the field of computational fluid dynamics. AIMS includes (i) Xinstrument, a powerful source-code instrumentor, which supports both Fortran77 and C as well as a number of different message-passing libraries including Intel's NX, Thinking Machines' CMMD, and PVM; (ii) Monitor, a library of timestamping and trace-collection routines that run on supercomputers (such as Intel's iPSC/860, Delta, and Paragon and Thinking Machines' CM5) as well as on networks of workstations (including Convex Cluster and SparcStations connected by a LAN); (iii) Visualization Kernel, a trace-animation facility that supports source-code clickback, simultaneous visualization of computation and communication patterns, as well as analysis of data movements; (iv) Statistics Kernel, an advanced profiling facility that associates a variety of performance data with various syntactic components of a parallel program; (v) Index Kernel, a diagnostic tool that helps pinpoint performance bottlenecks through the use of abstract indices; (vi) Modeling Kernel, a facility for automated modeling of message-passing programs that supports both simulation-based and analytical approaches to performance prediction and scalability analysis; (vii) Intrusion Compensator, a utility for recovering true performance from observed performance by removing the overheads of monitoring and their effects on the communication pattern of the program; and (viii) Compatibility Tools, which convert AIMS-generated traces into formats used by other performance-visualization tools, such as ParaGraph, Pablo, and certain AVS/Explorer modules.
Ohmichi, Takuma; Kondo, Masaki; Itsukage, Masahiro; Koizumi, Hidetaka; Matsushima, Shigenori; Kuriyama, Nagato; Ishii, Kazunari; Mori, Etsuro; Yamada, Kei; Mizuno, Toshiki; Tokuda, Takahiko
2018-03-16
OBJECTIVE The gold standard for the diagnosis of idiopathic normal pressure hydrocephalus (iNPH) is the CSF removal test. For elderly patients, however, a less invasive diagnostic method is required. On MRI, high-convexity tightness was reported to be an important finding for the diagnosis of iNPH. On SPECT, patients with iNPH often show hyperperfusion of the high-convexity area. The authors tested 2 hypotheses regarding the SPECT finding: 1) it is relative hyperperfusion reflecting the increased gray matter density of the convexity, and 2) it is useful for the diagnosis of iNPH. The authors termed the SPECT finding the convexity apparent hyperperfusion (CAPPAH) sign. METHODS Two clinical studies were conducted. In study 1, SPECT was performed for 20 patients suspected of having iNPH, and regional cerebral blood flow (rCBF) of the high-convexity area was examined using quantitative analysis. Clinical differences between patients with the CAPPAH sign (CAP) and those without it (NCAP) were also compared. In study 2, the CAPPAH sign was retrospectively assessed in 30 patients with iNPH and 19 healthy controls using SPECT images and 3D stereotactic surface projection. RESULTS In study 1, rCBF of the high-convexity area of the CAP group was calculated as 35.2-43.7 ml/min/100 g, which is not higher than normal values of rCBF determined by SPECT. The NCAP group showed lower cognitive function and weaker responses to the removal of CSF than the CAP group. In study 2, the CAPPAH sign was positive only in patients with iNPH (24/30) and not in controls (sensitivity 80%, specificity 100%). The coincidence rate between tight high convexity on MRI and the CAPPAH sign was very high (28/30). CONCLUSIONS Patients with iNPH showed hyperperfusion of the high-convexity area on SPECT; however, the presence of the CAPPAH sign did not indicate real hyperperfusion of rCBF in the high-convexity area. The authors speculated that patients with iNPH without the CAPPAH sign, despite showing tight high convexity on MRI, might have comorbidities such as Alzheimer's disease.
The growth of the UniTree mass storage system at the NASA Center for Computational Sciences
NASA Technical Reports Server (NTRS)
Tarshish, Adina; Salmon, Ellen
1993-01-01
In October 1992, the NASA Center for Computational Sciences made its Convex-based UniTree system generally available to users. The ensuing months saw the growth of near-online data from nil to nearly three terabytes, a doubling of the number of CPUs on the facility's Cray Y-MP (the primary data source for UniTree), and the necessity for an aggressive regimen for repacking sparse tapes and hierarchical 'vaulting' of old files to freestanding tape. Connectivity was enhanced as well with the addition of UltraNet HiPPI. This paper describes the increasing demands placed on the storage system's performance and throughput that resulted from the significant augmentation of compute-server processor power and network speed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ungun, B; Stanford University School of Medicine, Stanford, CA; Fu, A
2016-06-15
Purpose: To develop a procedure for including dose constraints in convex programming-based approaches to treatment planning, and to support dynamic modification of such constraints during planning. Methods: We present a mathematical approach that allows mean dose, maximum dose, minimum dose and dose volume (i.e., percentile) constraints to be appended to any convex formulation of an inverse planning problem. The first three constraint types are convex and readily incorporated. Dose volume constraints are not convex, however, so we introduce a convex restriction that is related to CVaR-based approaches previously proposed in the literature. To compensate for the conservatism of this restriction, we propose a new two-pass algorithm that solves the restricted problem on a first pass and uses this solution to form exact constraints on a second pass. In another variant, we introduce slack variables for each dose constraint to prevent the problem from becoming infeasible when the user specifies an incompatible set of constraints. We implement the proposed methods in Python using the convex programming package cvxpy in conjunction with the open source convex solvers SCS and ECOS. Results: We show, for several cases taken from the clinic, that our proposed method meets specified constraints (often with margin) when they are feasible. Constraints are met exactly when we use the two-pass method, and infeasible constraints are replaced with the nearest feasible constraint when slacks are used. Finally, we introduce ConRad, a Python-embedded free software package for convex radiation therapy planning. ConRad implements the methods described above and offers a simple interface for specifying prescriptions and dose constraints. Conclusion: This work demonstrates the feasibility of using modifiable dose constraints in a convex formulation, making it practical to guide the treatment planning process with interactively specified dose constraints. This work was supported by the Stanford BioX Graduate Fellowship and NIH Grant 5R01CA176553.
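A rough sketch of how such constraints attach to a convex plan formulation, using CVXPY with entirely synthetic dose-influence data (the matrices, voxel counts, and dose levels are hypothetical, and this is not ConRad's API):

import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
n_beamlets = 60
A_tumor = np.abs(rng.standard_normal((100, n_beamlets)))     # tumor voxel rows
A_oar = 0.3 * np.abs(rng.standard_normal((80, n_beamlets)))  # organ-at-risk rows

x = cp.Variable(n_beamlets, nonneg=True)   # beamlet weights
d_t, d_o = A_tumor @ x, A_oar @ x          # voxel doses

prob = cp.Problem(
    cp.Minimize(cp.sum_squares(d_t - 60.0)),  # track a 60 Gy prescription
    [cp.sum(d_o) / 80 <= 25.0,                # mean-dose constraint (convex)
     cp.max(d_o) <= 40.0,                     # max-dose constraint (convex)
     cp.min(d_t) >= 50.0])                    # min-dose constraint (convex feasible set)
prob.solve()
print(prob.status, prob.value)

If the user specifies mutually incompatible limits, the solver reports infeasibility, which is exactly the situation the slack-variable variant described above is designed to absorb.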
Stress analyses for the glass joints of contemporary sodium sulfur batteries
NASA Astrophysics Data System (ADS)
Jung, Keeyoung; Lee, Solki; Kim, Goun; Kim, Chang-Soo
2014-12-01
During the manufacturing and thermal cycles of advanced contemporary large-sized sodium sulfur (NaS) batteries, thermally driven stresses can be applied to the glass sealing joints, which may result in catastrophic cell failure. To minimize the thermal stresses at the joints, there is a need to develop a method to properly estimate the maximum thermal stresses by varying the material properties and shapes of the sealing area, and thereby determine the properties and shape of the sealing material at the joints. In the present study, the optimum coefficient of thermal expansion (CTE) of the glass sealant and the end shape of the glass sealing area (i.e., concave, flat, and convex shapes) have been determined using the finite-element analysis (FEA) computation technique. The results showed that a CTE value of 7.8 × 10^-6 K^-1 with a convex end shape would give the lowest stress concentration in the vicinity of the glass sealing joints for the prototype tubular NaS cell design adopted in this work.
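For reference, a standard first-order estimate of such thermal mismatch stress (a textbook biaxial-constraint formula, not taken from the paper) is

σ ≈ E Δα ΔT / (1 − ν)

where E and ν are the Young's modulus and Poisson's ratio of the sealing glass, Δα is the CTE mismatch with the adjoining component, and ΔT is the temperature excursion. It makes explicit why tuning the CTE, i.e. the Δα term, drives the stress reduction reported above.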
Yan, Yongsheng; Wang, Haiyan; Shen, Xiaohong; Leng, Bing; Li, Shuangquan
2018-05-21
Energy readings are an efficient and attractive measure for collaborative acoustic source localization in practical applications, owing to their savings in both energy and computational capability. Maximum likelihood problems that fuse received acoustic energy readings transmitted from local sensors are derived. Aiming to efficiently solve the non-convex objective of the optimization problem, we present an approximate estimator of the original problem. Then, a direct norm relaxation and a semidefinite relaxation, respectively, are utilized to derive second-order cone programs, semidefinite programs, or mixtures of the two for both sensor self-localization and source localization. Furthermore, by taking colored energy reading noise into account, several minimax optimization problems are formulated, which are also relaxed, via the direct norm relaxation and the semidefinite relaxation respectively, into convex optimization problems. A performance comparison with existing acoustic energy-based source localization methods is given, and the results show the validity of the proposed methods.
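For intuition, the underlying estimation problem can be posed with an isotropic energy-decay model and solved by generic nonlinear least squares, as in the Python sketch below; the paper's contribution is the convex SOCP/SDP relaxations, which this sketch does not implement, and all positions and noise levels here are made up:

import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(3)
sensors = np.array([[0., 0.], [10., 0.], [0., 10.], [10., 10.], [5., 0.]])
src, power = np.array([3.0, 6.0]), 50.0

# Isotropic model: reading_i = power / ||src - sensor_i||^2, with noise.
readings = power / np.sum((sensors - src) ** 2, axis=1)
readings *= 1.0 + 0.02 * rng.standard_normal(len(sensors))

def residuals(p):
    pos, s = p[:2], p[2]
    return s / np.sum((sensors - pos) ** 2, axis=1) - readings

fit = least_squares(residuals, x0=[5.0, 5.0, 10.0])
print(fit.x[:2], fit.x[2])   # recovered source location and power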
Direct single-shot phase retrieval from the diffraction pattern of separated objects
DOE Office of Scientific and Technical Information (OSTI.GOV)
Leshem, Ben; Xu, Rui; Dallal, Yehonatan
The non-crystallographic phase problem arises in numerous scientific and technological fields. An important application is coherent diffractive imaging. Recent advances in X-ray free-electron lasers allow capturing of the diffraction pattern from a single nanoparticle before it disintegrates, in so-called ‘diffraction before destruction’ experiments. Presently, the phase is reconstructed by iterative algorithms, imposing a non-convex computational challenge, or by Fourier holography, requiring a well-characterized reference field. Here we present a convex scheme for single-shot phase retrieval for two (or more) sufficiently separated objects, demonstrated in two dimensions. In our approach, the objects serve as unknown references to one another, reducing the phase problem to a solvable set of linear equations. We establish our method numerically and experimentally in the optical domain and demonstrate a proof-of-principle single-shot coherent diffractive imaging using X-ray free-electron laser pulses. Lastly, our scheme alleviates several limitations of current methods, offering a new pathway towards direct reconstruction of complex objects.
Direct single-shot phase retrieval from the diffraction pattern of separated objects
Leshem, Ben; Xu, Rui; Dallal, Yehonatan; ...
2016-02-22
The non-crystallographic phase problem arises in numerous scientific and technological fields. An important application is coherent diffractive imaging. Recent advances in X-ray free-electron lasers allow capturing of the diffraction pattern from a single nanoparticle before it disintegrates, in so-called ‘diffraction before destruction’ experiments. Presently, the phase is reconstructed by iterative algorithms, imposing a non-convex computational challenge, or by Fourier holography, requiring a well-characterized reference field. Here we present a convex scheme for single-shot phase retrieval for two (or more) sufficiently separated objects, demonstrated in two dimensions. In our approach, the objects serve as unknown references to one another, reducing the phase problem to a solvable set of linear equations. We establish our method numerically and experimentally in the optical domain and demonstrate a proof-of-principle single-shot coherent diffractive imaging using X-ray free-electron laser pulses. Lastly, our scheme alleviates several limitations of current methods, offering a new pathway towards direct reconstruction of complex objects.
Yan, Yongsheng; Wang, Haiyan; Shen, Xiaohong; Leng, Bing; Li, Shuangquan
2018-01-01
Energy readings are an efficient and attractive measure for collaborative acoustic source localization in practical applications, owing to their savings in both energy and computational capability. Maximum likelihood problems that fuse received acoustic energy readings transmitted from local sensors are derived. Aiming to efficiently solve the non-convex objective of the optimization problem, we present an approximate estimator of the original problem. Then, a direct norm relaxation and a semidefinite relaxation, respectively, are utilized to derive second-order cone programs, semidefinite programs, or mixtures of the two for both sensor self-localization and source localization. Furthermore, by taking colored energy reading noise into account, several minimax optimization problems are formulated, which are also relaxed, via the direct norm relaxation and the semidefinite relaxation respectively, into convex optimization problems. A performance comparison with existing acoustic energy-based source localization methods is given, and the results show the validity of the proposed methods. PMID:29883410
Beyond union of subspaces: Subspace pursuit on Grassmann manifold for data representation
Shen, Xinyue; Krim, Hamid; Gu, Yuantao
2016-03-01
Discovering the underlying structure of a high-dimensional signal or big data has always been a challenging topic, and has become harder to tackle especially when the observations are exposed to arbitrary sparse perturbations. Here in this paper, built on the model of a union of subspaces (UoS) with sparse outliers and inspired by a basis pursuit strategy, we exploit the fundamental structure of a Grassmann manifold and propose a new technique for pursuing the subspaces systematically by solving a non-convex optimization problem using the alternating direction method of multipliers. This problem is further complicated by non-convex constraints on the Grassmann manifold, as well as by the bilinearity in the penalty caused by the subspace bases and coefficients. Nevertheless, numerical experiments verify that the proposed algorithm, which provides elegant solutions to the sub-problems in each step, is able to de-couple the subspaces and pursue each of them under time-efficient parallel computation.
Azunre, P.
2016-09-21
Here in this paper, two novel techniques for bounding the solutions of parametric weakly coupled second-order semilinear parabolic partial differential equations are developed. The first provides a theorem to construct interval bounds, while the second provides a theorem to construct lower bounds convex and upper bounds concave in the parameter. The convex/concave bounds can be significantly tighter than the interval bounds because of the wrapping effect suffered by interval analysis in dynamical systems. Both types of bounds are computationally cheap to construct, requiring solving auxiliary systems twice and four times larger than the original system, respectively. An illustrative numerical example of bound construction and use for deterministic global optimization within a simple serial branch-and-bound algorithm, implemented numerically using interval arithmetic and a generalization of McCormick's relaxation technique, is presented. Finally, problems within the important class of reaction-diffusion systems may be optimized with these tools.
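The looseness that motivates the convex/concave bounds is easy to reproduce. The minimal hand-rolled interval sketch in Python below shows the dependency problem, which compounds into the wrapping effect in dynamical systems (real implementations also use directed rounding, omitted here):

class Interval:
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi
    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)
    def __sub__(self, other):
        return Interval(self.lo - other.hi, self.hi - other.lo)
    def __repr__(self):
        return f"[{self.lo}, {self.hi}]"

x = Interval(-1.0, 1.0)
# Interval arithmetic forgets that both operands are the same x:
print(x - x)   # [-2.0, 2.0] instead of the exact [0.0, 0.0]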
CVXPY: A Python-Embedded Modeling Language for Convex Optimization
Diamond, Steven; Boyd, Stephen
2016-01-01
CVXPY is a domain-specific language for convex optimization embedded in Python. It allows the user to express convex optimization problems in a natural syntax that follows the math, rather than in the restrictive standard form required by solvers. CVXPY makes it easy to combine convex optimization with high-level features of Python such as parallelism and object-oriented design. CVXPY is available at http://www.cvxpy.org/ under the GPL license, along with documentation and examples. PMID:27375369
NASA Astrophysics Data System (ADS)
Park, Jisang
In this dissertation, we investigate MIMO stability margin inference for a large number of controllers using pre-established stability margins of a small number of nu-gap-wise adjacent controllers. The generalized stability margin and the nu-gap metric are inherently able to handle MIMO system analysis without the need to repeat multiple channel-by-channel SISO analyses. This research consists of three parts: (i) development of a decision support tool for inference of the stability margin, (ii) computational considerations for yielding the maximal stability margin with the minimal nu-gap metric in a less conservative manner, and (iii) experiment design for estimating the generalized stability margin with an assured error bound. A modern problem from aerospace control involves the certification of a large set of potential controllers with either a single plant or a fleet of potential plant systems, with both plants and controllers being MIMO and, for the moment, linear. Experiments on a limited number of controller/plant pairs should establish the stability and a certain level of margin of the complete set. We consider this certification problem for a set of controllers and provide algorithms for selecting an efficient subset for testing. This is done for a finite set of candidate controllers and, at least for SISO plants, for an infinite set. In doing this, the nu-gap metric is the main tool. We provide a theorem restricting the radius of a ball in parameter space so that any controller whose parameters are contained in the ball guarantees a prescribed level of stability and performance. Computational examples are given, including one of certification of an aircraft engine controller. The overarching aim is to introduce truly MIMO margin calculations and to understand their efficacy in certifying stability over a set of controllers and in replacing legacy single-loop gain and phase margin calculations. We consider methods for the computation of maximal MIMO stability margins b_{P,C}, minimal nu-gap metrics δ_ν, and the maximal difference between these two values, through the use of scaling and weighting functions. We propose simultaneous scaling selections that attempt to maximize the generalized stability margin and minimize the nu-gap. The minimization of the nu-gap by scaling involves a non-convex optimization; we modify the XY-centering algorithm to handle this non-convexity. This is done for applications in controller certification. Estimating the generalized stability margin with an accurate error bound has a significant impact on controller certification. We analyze an error bound of the generalized stability margin as the infinity norm of the MIMO empirical transfer function estimate (ETFE). Input signal design to reduce the error of the estimate is also studied. We suggest running the system for a certain amount of time prior to recording each output data set. The assured upper bound of the estimation error can be tuned by the amount of this pre-experiment.
A 'range test' for determining scatterers with unknown physical properties
NASA Astrophysics Data System (ADS)
Potthast, Roland; Sylvester, John; Kusiak, Steven
2003-06-01
We describe a new scheme for determining the convex scattering support of an unknown scatterer when the physical properties of the scatterer are not known. The convex scattering support is a subset of the scatterer and provides information about its location and estimates of its shape. For convex polygonal scatterers the scattering support coincides with the scatterer and we obtain full shape reconstructions. The method is formulated for the reconstruction of scatterers from the far field pattern for one or a few incident waves. It is non-iterative in nature and belongs to the class of recently derived generalized sampling schemes such as the 'no response test' of Luke-Potthast. The range test operates by testing whether it is possible to analytically continue a far field to the exterior of any test domain Ω_test. By intersecting the convex hulls of various test domains we can produce a minimal convex set, the convex scattering support, which must be contained in the convex hull of the support of any scatterer that produces the given far field. The convex scattering support is calculated by testing the range of special integral operators for a sampling set of test domains. The numerical results can be used as an approximation of the support of the unknown scatterer. We prove convergence and regularity of the scheme and show numerical examples for sound-soft, sound-hard, and medium scatterers. The range test can be applied to non-convex scatterers as well: we can conclude that an Ω_test which passes the range test has a non-empty intersection with the infinity-support (the complement of the unbounded component of the complement of the support) of the true scatterer, but we cannot find a minimal set which must be contained therein.
NASA Astrophysics Data System (ADS)
Kadukova, Maria; Grudinin, Sergei
2018-01-01
The 2016 D3R Grand Challenge 2 provided an opportunity to test multiple protein-ligand docking protocols on a set of ligands bound to the farnesoid X receptor, for which many experimental structures are available. We participated in Stage 1 of the Challenge, devoted to docking pose predictions, with a mean RMSD of 2.9 Å over our submitted poses. Here we present a thorough analysis of our docking predictions made with AutoDock Vina and the Convex-PL rescoring potential by reproducing our submission protocol and running a series of additional molecular docking experiments. We conclude that a correct receptor structure, or more precisely, the structure of the binding pocket, plays the crucial role in the success of our docking studies. We have also noticed the important role of local ligand geometry, which seems to be insufficiently discussed in the literature. We succeeded in improving our results to a mean RMSD of 2.15-2.33 Å, depending on the ligand models, by docking each ligand to all available homologous receptors. Overall, for docking ligands of diverse chemical series we suggest docking each ligand to a set of multiple receptors that are homologous to the target.
Duality of caustics in Minkowski billiards
NASA Astrophysics Data System (ADS)
Artstein-Avidan, S.; Florentin, D. I.; Ostrover, Y.; Rosen, D.
2018-04-01
In this paper we study convex caustics in Minkowski billiards. We show that for the Euclidean billiard dynamics in a planar smooth, centrally symmetric, strictly convex body K, for every convex caustic which K possesses, the ‘dual’ billiard dynamics in which the table is the Euclidean unit ball and the geometry that governs the motion is induced by the body K, possesses a dual convex caustic. Such a pair of caustics are dual in a strong sense, and in particular they have the same perimeter, Lazutkin parameter (both measured with respect to the corresponding geometries), and rotation number. We show moreover that for general Minkowski billiards this phenomenon fails, and one can construct a smooth caustic in a Minkowski billiard table which possesses no dual convex caustic.
Finding Out Critical Points For Real-Time Path Planning
NASA Astrophysics Data System (ADS)
Chen, Wei
1989-03-01
Path planning for a mobile robot is a classic topic, but path planning in a real-time environment is a different issue. The system resources, including sampling time, processing time, interprocess communication time, and memory space, are very limited for this type of application. This paper presents a method which abstracts the world representation from the sensory data and decides which point is a potentially critical point for spanning the world map, using incomplete knowledge about the physical world and heuristic rules. Without any previous knowledge or map of the workspace, the robot determines the world map by roving through the workspace. The computational complexity of building and searching such a map is not more than O(n²). The find-path problem is well known in robotics. Given an object with an initial location and orientation, a goal location and orientation, and a set of obstacles located in space, the problem is to find a continuous path for the object from the initial position to the goal position which avoids collisions with obstacles along the way. There are many methods for finding a collision-free path in a given environment. Techniques for solving this problem can be classified into three approaches: 1) the configuration space approach [1],[2],[3], which represents the polygonal obstacles by vertices in a graph. The idea is to determine those parts of the free space which a reference point of the moving object can occupy without colliding with any obstacles. A path is then found for the reference point through this truly free space. Dealing with rotations turns out to be a major difficulty with this approach, requiring complex geometric algorithms which are computationally expensive. 2) the direct representation of the free space using basic shape primitives such as convex polygons [4] and overlapping generalized cones [5]. 3) the combination of techniques 1 and 2 [6], by which the space is divided into primary convex regions, overlap regions and obstacle regions; obstacle boundaries with attribute values are then represented by the vertices of a hypergraph. The primary convex regions and overlap regions are represented by hyperedges, and the centroids of the overlaps form the critical points. The difficulty lies in generating the segment graph and estimating the minimum path width. All the techniques mentioned above require previous knowledge about the world to plan a path, and their computational cost is not low; they are not usable in an unknown and uncertain environment. Due to limited system resources such as CPU time, memory size and knowledge about the specific application in an intelligent system (such as a mobile robot), it is necessary to use algorithms that provide a good decision which is feasible with the available resources in real time, rather than the best answer that could be achieved in unlimited time with unlimited resources. A real-time path planner should meet the following requirements: - Quickly abstract the representation of the world from the sensory data without any previous knowledge about the robot environment. - Easily update the world model to spell out the global-path map and to reflect changes in the robot environment. - Decide in real time, with limited resources, where the robot must go and in which direction the range sensor should point. The method presented here assumes that the data from the range sensors has been processed by a signal processing unit.
The path planner guides the scan of the range sensor, finds critical points, decides where the robot should go and which point is a potential critical point, generates the path map, and monitors the robot as it moves to the given point. The program runs recursively until the goal is reached or the whole workspace has been roved through.
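The complexity claim is easy to see in code. The sketch below illustrates only the build-and-search step over already-chosen critical points (the critical-point selection heuristic is the paper's contribution and is not reproduced here); the visibility edges are assumed to be supplied by the sensing layer. With an array-based Dijkstra implementation the search is O(n²), matching the stated bound; the heap-based version below is simply the idiomatic choice.

    import heapq, math

    def shortest_path(points, edges, start, goal):
        """Dijkstra over a graph of critical points.
        points: list of (x, y) coordinates.
        edges: dict mapping a node index to the indices of mutually
        visible (collision-free) neighbor nodes."""
        dist = {start: 0.0}
        prev = {}
        pq = [(0.0, start)]
        while pq:
            d, u = heapq.heappop(pq)
            if u == goal:
                break
            if d > dist.get(u, math.inf):
                continue  # stale queue entry
            for v in edges.get(u, ()):
                nd = d + math.dist(points[u], points[v])  # Euclidean edge length
                if nd < dist.get(v, math.inf):
                    dist[v], prev[v] = nd, u
                    heapq.heappush(pq, (nd, v))
        path, node = [goal], goal
        while node != start:
            node = prev[node]
            path.append(node)
        return path[::-1]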
Control of sound radiation from a wavepacket over a curved surface
NASA Technical Reports Server (NTRS)
Maestrello, Lucio; El Hady, Nabil M.
1989-01-01
Active control of acoustic pressure in the far field resulting from the growth and decay of a wavepacket convecting in a boundary layer over a concave-convex surface is investigated numerically using direct computations of the Navier-Stokes equations. The resulting sound radiation is computed using linearized Euler equations with the pressure from the Navier-Stokes solution as a time-dependent boundary condition. The acoustic far field exhibits a directivity-type behavior that points upstream of the flow direction. A fixed control algorithm is used in which the attenuation signal is synthesized by a filter that actively adapts it to the amplitude-time response of the outgoing acoustic wave.
Multi-Stage Convex Relaxation Methods for Machine Learning
2013-03-01
Many problems in machine learning can be naturally formulated as non-convex optimization problems. However, such direct nonconvex formulations have...original nonconvex formulation. We will develop theoretical properties of this method and algorithmic consequences. Related convex and nonconvex machine learning methods will also be investigated.
On approximation and energy estimates for delta 6-convex functions.
Saleem, Muhammad Shoaib; Pečarić, Josip; Rehman, Nasir; Khan, Muhammad Wahab; Zahoor, Muhammad Sajid
2018-01-01
Smooth approximations and weighted energy estimates for delta 6-convex functions are derived in this research. Moreover, we conclude that if 6-convex functions are close in uniform norm, then their third derivatives are close in weighted [Formula: see text]-norm.
Nonconvex Sparse Logistic Regression With Weakly Convex Regularization
NASA Astrophysics Data System (ADS)
Shen, Xinyue; Gu, Yuantao
2018-06-01
In this work we propose to fit a sparse logistic regression model by a weakly convex regularized nonconvex optimization problem. The idea is based on the finding that a weakly convex function as an approximation of the $\ell_0$ pseudo norm is able to better induce sparsity than the commonly used $\ell_1$ norm. For a class of weakly convex sparsity inducing functions, we prove the nonconvexity of the corresponding sparse logistic regression problem, and study its local optimality conditions and the choice of the regularization parameter to exclude trivial solutions. Despite the nonconvexity, a method based on proximal gradient descent is used to solve the general weakly convex sparse logistic regression, and its convergence behavior is studied theoretically. Then the general framework is applied to a specific weakly convex function, and a necessary and sufficient local optimality condition is provided. The solution method is instantiated in this case as an iterative firm-shrinkage algorithm, and its effectiveness is demonstrated in numerical experiments by both randomly generated and real datasets.
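As a concrete illustration of the iterative firm-shrinkage idea, here is a minimal proximal-gradient sketch. It is a simplification under stated assumptions, not the authors' implementation: the firm-thresholding operator below is the classical Gao-Bruce form, the step size is fixed rather than chosen by the paper's analysis, and the parameters lam and mu are placeholders.

    import numpy as np

    def firm_shrinkage(x, lam, mu):
        """Firm thresholding (Gao-Bruce): zero below lam, identity above mu,
        linear interpolation in between; the proximal operator of a weakly
        convex sparsity-inducing penalty."""
        mid = np.sign(x) * mu * (np.abs(x) - lam) / (mu - lam)
        out = np.where(np.abs(x) <= lam, 0.0, mid)
        return np.where(np.abs(x) > mu, x, out)

    def sparse_logistic_regression(A, y, lam=0.1, mu=1.0, step=0.01, iters=500):
        """Proximal gradient descent for weakly convex sparse logistic
        regression. A: (m, n) design matrix; y: labels in {-1, +1}."""
        m, n = A.shape
        w = np.zeros(n)
        for _ in range(iters):
            z = y * (A @ w)
            grad = -(A.T @ (y / (1.0 + np.exp(z)))) / m  # gradient of logistic loss
            w = firm_shrinkage(w - step * grad, step * lam, mu)
        return w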
Naini, Farhad B; Donaldson, Ana Nora A; McDonald, Fraser; Cobourne, Martyn T
2012-09-01
The aim was a quantitative evaluation of how the severity of lower facial profile convexity influences perceived attractiveness. The lower facial profile of an idealized image was altered incrementally between 14° and −16°. Images were rated on a Likert scale by orthognathic patients, laypeople, and clinicians. Attractiveness ratings were greater for straight profiles than for convex/concave ones, with no significant difference between convex and concave profiles. Ratings decreased by 0.23 of a level for every degree increase in the convexity angle. Class II/III patients gave significantly reduced ratings of attractiveness and had a greater desire for surgery than class I. A straight profile is perceived as most attractive, and greater degrees of convexity or concavity are deemed progressively less attractive, but a range of 10° to −12° may be deemed acceptable; beyond these values surgical correction is desired. Patients are most critical, and clinicians are more critical than laypeople. Copyright © 2012 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Zhang, Yongjun; Lu, Zhixin
2017-10-01
Spectrum resources are very precious, so it is increasingly important to locate interference signals rapidly. Convex programming algorithms are often used as localization algorithms in wireless sensor networks. However, the traditional convex programming algorithm suffers from too much overlap between wireless sensor nodes, which leads to low positioning accuracy, so this paper proposes a new algorithm. It is mainly based on the traditional convex programming algorithm: the spectrum vehicle dispatches unmanned aerial vehicles (UAVs) that record data periodically along different trajectories. According to the probability density distribution, the positioning area is segmented to further reduce the location area. Because the algorithm only adds the communication of power values between the unknown node and the sensor nodes, the advantages of the convex programming algorithm are essentially preserved, keeping the method simple and real-time. The experimental results show that the improved algorithm has better positioning accuracy than the original convex programming algorithm.
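For readers unfamiliar with convex-programming localization, the following toy sketch shows the baseline idea the paper builds on: the unknown node must lie in the intersection of convex (disk) constraints derived from connectivity or power measurements, and a simple estimate is the center of the feasible region's bounding box. The numbers and the use of the cvxpy modeling package are assumptions for illustration; the paper's UAV trajectories and probability-density segmentation are not reproduced.

    import cvxpy as cp
    import numpy as np

    anchors = np.array([[0.0, 0.0], [10.0, 0.0], [5.0, 8.0]])  # sensor positions (toy)
    radii = np.array([6.0, 7.0, 5.0])                          # range bounds (toy)

    p = cp.Variable(2)
    constraints = [cp.norm(p - a) <= r for a, r in zip(anchors, radii)]

    # Bounding box of the convex feasible region, found by four small convex solves.
    lo, hi = np.zeros(2), np.zeros(2)
    for i in range(2):
        lo[i] = cp.Problem(cp.Minimize(p[i]), constraints).solve()
        hi[i] = cp.Problem(cp.Maximize(p[i]), constraints).solve()

    estimate = (lo + hi) / 2  # box center as the position estimate
    print(estimate)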
Simulating three dimensional wave run-up over breakwaters covered by antifer units
NASA Astrophysics Data System (ADS)
Najafi-Jilani, A.; Niri, M. Zakiri; Naderi, Nader
2014-06-01
The paper presents a numerical analysis of wave run-up over rubble-mound breakwaters covered by antifer units, using a technique integrating Computer-Aided Design (CAD) and Computational Fluid Dynamics (CFD) software. Direct application of the Navier-Stokes equations within the armour blocks is used to provide a more reliable approach to simulating wave run-up over breakwaters. A well-tested Reynolds-averaged Navier-Stokes (RANS) Volume of Fluid (VOF) code (Flow-3D) was adopted for the CFD computations. The computed results were compared with experimental data to check the validity of the model. Numerical results showed that the direct three-dimensional (3D) simulation method can deliver accurate results for wave run-up over rubble-mound breakwaters. The results showed that the placement pattern of the antifer units had a great impact on wave run-up: changing the placement pattern from regular to double pyramid reduced the wave run-up by approximately 30%. Analysis was carried out to investigate the influences of surface roughness, energy dissipation in the pores of the armour layer, and reduced wave run-up due to inflow into the armour and stone layers.
WinSCP for Windows File Transfers | High-Performance Computing | NREL
WinSCP can be used to securely transfer files between your local computer running Microsoft Windows and a remote computer running Linux.
NASA Astrophysics Data System (ADS)
Gaddy, Melissa R.; Yıldız, Sercan; Unkelbach, Jan; Papp, Dávid
2018-01-01
Spatiotemporal fractionation schemes, that is, treatments delivering different dose distributions in different fractions, can potentially lower treatment side effects without compromising tumor control. This can be achieved by hypofractionating parts of the tumor while delivering approximately uniformly fractionated doses to the surrounding tissue. Plan optimization for such treatments is based on biologically effective dose (BED); however, this leads to computationally challenging nonconvex optimization problems. Optimization methods that are in current use yield only locally optimal solutions, and it has hitherto been unclear whether these plans are close to the global optimum. We present an optimization framework to compute rigorous bounds on the maximum achievable normal tissue BED reduction for spatiotemporal plans. The approach is demonstrated on liver tumors, where the primary goal is to reduce mean liver BED without compromising any other treatment objective. The BED-based treatment plan optimization problems are formulated as quadratically constrained quadratic programming (QCQP) problems. First, a conventional, uniformly fractionated reference plan is computed using convex optimization. Then, a second, nonconvex, QCQP model is solved to local optimality to compute a spatiotemporally fractionated plan that minimizes mean liver BED, subject to the constraints that the plan is no worse than the reference plan with respect to all other planning goals. Finally, we derive a convex relaxation of the second model in the form of a semidefinite programming problem, which provides a rigorous lower bound on the lowest achievable mean liver BED. The method is presented on five cases with distinct geometries. The computed spatiotemporal plans achieve 12-35% mean liver BED reduction over the optimal uniformly fractionated plans. This reduction corresponds to 79-97% of the gap between the mean liver BED of the uniform reference plans and our lower bounds on the lowest achievable mean liver BED. The results indicate that spatiotemporal treatments can achieve substantial reductions in normal tissue dose and BED, and that local optimization techniques provide high-quality plans that are close to realizing the maximum potential normal tissue dose reduction.
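The paper's lower-bounding step is an instance of the standard semidefinite (Shor) relaxation of a QCQP. The sketch below shows that generic construction using the cvxpy modeling package; it is not the paper's BED model, and all matrices are placeholders supplied by the caller.

    import cvxpy as cp
    import numpy as np

    def shor_lower_bound(Q0, q0, Qs, qs, bs):
        """Shor SDP relaxation of  min x'Q0 x + q0'x  s.t.  x'Qi x + qi'x <= bi.
        Lifting xx' into a PSD block gives a convex problem whose optimum is
        a rigorous lower bound on the (possibly nonconvex) QCQP optimum."""
        n = Q0.shape[0]
        Z = cp.Variable((n + 1, n + 1), PSD=True)  # Z plays the role of [[X, x], [x', 1]]
        X, x = Z[:n, :n], Z[:n, n]
        cons = [Z[n, n] == 1]
        for Qi, qi, bi in zip(Qs, qs, bs):
            cons.append(cp.trace(Qi @ X) + qi @ x <= bi)
        prob = cp.Problem(cp.Minimize(cp.trace(Q0 @ X) + q0 @ x), cons)
        return prob.solve()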
NASA Astrophysics Data System (ADS)
Shen, Zhengwei; Cheng, Lishuang
2017-09-01
Total variation (TV)-based image deblurring methods can produce staircase artifacts in homogeneous regions of the latent images recovered from the degraded images, while wavelet/frame-based image deblurring methods lead to spurious noise spikes and pseudo-Gibbs artifacts in the vicinity of discontinuities of the latent images. To suppress these artifacts efficiently, we propose a nonconvex composite wavelet/frame and TV-based image deblurring model. In this model, the wavelet/frame-based and TV-based methods may complement each other, which is verified by theoretical analysis and experimental results. To further improve the quality of the latent images, a nonconvex penalty function is used as the regularization term of the model, which induces a more strongly sparse solution and more accurately estimates the relatively large gradients or wavelet/frame coefficients of the latent images. In addition, by choosing a suitable parameter for the nonconvex penalty function, each subproblem split off by the alternating direction method of multipliers algorithm from the proposed model can be guaranteed to be a convex optimization problem; hence, each subproblem can converge to a global optimum. The mean doubly augmented Lagrangian and the isotropic split Bregman algorithms are used to solve these convex subproblems, where a designed proximal operator is used to reduce the computational complexity of the algorithms. Extensive numerical experiments indicate that the proposed model and algorithms are comparable to other state-of-the-art models and methods.
PLOT3D/AMES, UNIX SUPERCOMPUTER AND SGI IRIS VERSION (WITHOUT TURB3D)
NASA Technical Reports Server (NTRS)
Buning, P.
1994-01-01
PLOT3D is an interactive graphics program designed to help scientists visualize computational fluid dynamics (CFD) grids and solutions. Today, supercomputers and CFD algorithms can provide scientists with simulations of such highly complex phenomena that obtaining an understanding of the simulations has become a major problem. Tools which help the scientist visualize the simulations can be of tremendous aid. PLOT3D/AMES offers more functions and features, and has been adapted for more types of computers than any other CFD graphics program. Version 3.6b+ is supported for five computers and graphic libraries. Using PLOT3D, CFD physicists can view their computational models from any angle, observing the physics of problems and the quality of solutions. As an aid in designing aircraft, for example, PLOT3D's interactive computer graphics can show vortices, temperature, reverse flow, pressure, and dozens of other characteristics of air flow during flight. As critical areas become obvious, they can easily be studied more closely using a finer grid. PLOT3D is part of a computational fluid dynamics software cycle. First, a program such as 3DGRAPE (ARC-12620) helps the scientist generate computational grids to model an object and its surrounding space. Once the grids have been designed and parameters such as the angle of attack, Mach number, and Reynolds number have been specified, a "flow-solver" program such as INS3D (ARC-11794 or COS-10019) solves the system of equations governing fluid flow, usually on a supercomputer. Grids sometimes have as many as two million points, and the "flow-solver" produces a solution file which contains density, x- y- and z-momentum, and stagnation energy for each grid point. With such a solution file and a grid file containing up to 50 grids as input, PLOT3D can calculate and graphically display any one of 74 functions, including shock waves, surface pressure, velocity vectors, and particle traces. PLOT3D's 74 functions are organized into five groups: 1) Grid Functions for grids, grid-checking, etc.; 2) Scalar Functions for contour or carpet plots of density, pressure, temperature, Mach number, vorticity magnitude, helicity, etc.; 3) Vector Functions for vector plots of velocity, vorticity, momentum, and density gradient, etc.; 4) Particle Trace Functions for rake-like plots of particle flow or vortex lines; and 5) Shock locations based on pressure gradient. TURB3D is a modification of PLOT3D which is used for viewing CFD simulations of incompressible turbulent flow. Input flow data consists of pressure, velocity and vorticity. Typical quantities to plot include local fluctuations in flow quantities and turbulent production terms, plotted in physical or wall units. PLOT3D/TURB3D includes both TURB3D and PLOT3D because the operation of TURB3D is identical to PLOT3D, and there is no additional sample data or printed documentation for TURB3D. Graphical capabilities of PLOT3D version 3.6b+ vary among the implementations available through COSMIC. Customers are encouraged to purchase and carefully review the PLOT3D manual before ordering the program for a specific computer and graphics library. There is only one manual for use with all implementations of PLOT3D, and although this manual generally assumes that the Silicon Graphics Iris implementation is being used, informative comments concerning other implementations appear throughout the text. 
With all implementations, the visual representation of the object and flow field created by PLOT3D consists of points, lines, and polygons. Points can be represented with dots or symbols, color can be used to denote data values, and perspective is used to show depth. Differences among implementations impact the program's ability to use graphical features that are based on 3D polygons, the user's ability to manipulate the graphical displays, and the user's ability to obtain alternate forms of output. In addition to providing the advantages of performing complex calculations on a supercomputer, the Supercomputer/IRIS implementation of PLOT3D offers advanced 3-D, view manipulation, and animation capabilities. Shading and hidden line/surface removal can be used to enhance depth perception and other aspects of the graphical displays. A mouse can be used to translate, rotate, or zoom in on views. Files for several types of output can be produced. Two animation options are available. Simple animation sequences can be created on the IRIS, or, if an appropriately modified version of ARCGRAPH (ARC-12350) is accessible on the supercomputer, files can be created for use in GAS (Graphics Animation System, ARC-12379), an IRIS program which offers more complex rendering and animation capabilities and options for recording images to digital disk, video tape, or 16-mm film. The version 3.6b+ Supercomputer/IRIS implementations of PLOT3D (ARC-12779) and PLOT3D/TURB3D (ARC-12784) are suitable for use on CRAY 2/UNICOS, CONVEX, and ALLIANT computers with a remote Silicon Graphics IRIS 2xxx/3xxx or IRIS 4D workstation. These programs are distributed on .25 inch magnetic tape cartridges in IRIS TAR format. Customers purchasing one implementation version of PLOT3D or PLOT3D/TURB3D will be given a $200 discount on each additional implementation version ordered at the same time. Version 3.6b+ of PLOT3D and PLOT3D/TURB3D are also supported for the following computers and graphics libraries: (1) Silicon Graphics IRIS 2xxx/3xxx or IRIS 4D workstations (ARC-12783, ARC-12782); (2) VAX computers running VMS Version 5.0 and DISSPLA Version 11.0 (ARC-12777, ARC-12781); (3) generic UNIX and DISSPLA Version 11.0 (ARC-12788, ARC-12778); and (4) Apollo computers running UNIX and GMR3D Version 2.0 (ARC-12789, ARC-12785 - which have no capabilities to put text on plots). Silicon Graphics Iris, IRIS 4D, and IRIS 2xxx/3xxx are trademarks of Silicon Graphics Incorporated. VAX and VMS are trademarks of Digital Equipment Corporation. DISSPLA is a trademark of Computer Associates. CRAY 2 and UNICOS are trademarks of CRAY Research, Incorporated. CONVEX is a trademark of Convex Computer Corporation. Alliant is a trademark of Alliant. Apollo, DN10000, and GMR3D are trademarks of Hewlett-Packard, Incorporated. System V is a trademark of Bell Labs, Incorporated. BSD4.3 is a trademark of the University of California at Berkeley. UNIX is a registered trademark of AT&T.
PLOT3D/AMES, UNIX SUPERCOMPUTER AND SGI IRIS VERSION (WITH TURB3D)
NASA Technical Reports Server (NTRS)
Buning, P.
1994-01-01
PLOT3D is an interactive graphics program designed to help scientists visualize computational fluid dynamics (CFD) grids and solutions. Today, supercomputers and CFD algorithms can provide scientists with simulations of such highly complex phenomena that obtaining an understanding of the simulations has become a major problem. Tools which help the scientist visualize the simulations can be of tremendous aid. PLOT3D/AMES offers more functions and features, and has been adapted for more types of computers than any other CFD graphics program. Version 3.6b+ is supported for five computers and graphic libraries. Using PLOT3D, CFD physicists can view their computational models from any angle, observing the physics of problems and the quality of solutions. As an aid in designing aircraft, for example, PLOT3D's interactive computer graphics can show vortices, temperature, reverse flow, pressure, and dozens of other characteristics of air flow during flight. As critical areas become obvious, they can easily be studied more closely using a finer grid. PLOT3D is part of a computational fluid dynamics software cycle. First, a program such as 3DGRAPE (ARC-12620) helps the scientist generate computational grids to model an object and its surrounding space. Once the grids have been designed and parameters such as the angle of attack, Mach number, and Reynolds number have been specified, a "flow-solver" program such as INS3D (ARC-11794 or COS-10019) solves the system of equations governing fluid flow, usually on a supercomputer. Grids sometimes have as many as two million points, and the "flow-solver" produces a solution file which contains density, x- y- and z-momentum, and stagnation energy for each grid point. With such a solution file and a grid file containing up to 50 grids as input, PLOT3D can calculate and graphically display any one of 74 functions, including shock waves, surface pressure, velocity vectors, and particle traces. PLOT3D's 74 functions are organized into five groups: 1) Grid Functions for grids, grid-checking, etc.; 2) Scalar Functions for contour or carpet plots of density, pressure, temperature, Mach number, vorticity magnitude, helicity, etc.; 3) Vector Functions for vector plots of velocity, vorticity, momentum, and density gradient, etc.; 4) Particle Trace Functions for rake-like plots of particle flow or vortex lines; and 5) Shock locations based on pressure gradient. TURB3D is a modification of PLOT3D which is used for viewing CFD simulations of incompressible turbulent flow. Input flow data consists of pressure, velocity and vorticity. Typical quantities to plot include local fluctuations in flow quantities and turbulent production terms, plotted in physical or wall units. PLOT3D/TURB3D includes both TURB3D and PLOT3D because the operation of TURB3D is identical to PLOT3D, and there is no additional sample data or printed documentation for TURB3D. Graphical capabilities of PLOT3D version 3.6b+ vary among the implementations available through COSMIC. Customers are encouraged to purchase and carefully review the PLOT3D manual before ordering the program for a specific computer and graphics library. There is only one manual for use with all implementations of PLOT3D, and although this manual generally assumes that the Silicon Graphics Iris implementation is being used, informative comments concerning other implementations appear throughout the text. 
With all implementations, the visual representation of the object and flow field created by PLOT3D consists of points, lines, and polygons. Points can be represented with dots or symbols, color can be used to denote data values, and perspective is used to show depth. Differences among implementations impact the program's ability to use graphical features that are based on 3D polygons, the user's ability to manipulate the graphical displays, and the user's ability to obtain alternate forms of output. In addition to providing the advantages of performing complex calculations on a supercomputer, the Supercomputer/IRIS implementation of PLOT3D offers advanced 3-D, view manipulation, and animation capabilities. Shading and hidden line/surface removal can be used to enhance depth perception and other aspects of the graphical displays. A mouse can be used to translate, rotate, or zoom in on views. Files for several types of output can be produced. Two animation options are available. Simple animation sequences can be created on the IRIS, or, if an appropriately modified version of ARCGRAPH (ARC-12350) is accessible on the supercomputer, files can be created for use in GAS (Graphics Animation System, ARC-12379), an IRIS program which offers more complex rendering and animation capabilities and options for recording images to digital disk, video tape, or 16-mm film. The version 3.6b+ Supercomputer/IRIS implementations of PLOT3D (ARC-12779) and PLOT3D/TURB3D (ARC-12784) are suitable for use on CRAY 2/UNICOS, CONVEX, and ALLIANT computers with a remote Silicon Graphics IRIS 2xxx/3xxx or IRIS 4D workstation. These programs are distributed on .25 inch magnetic tape cartridges in IRIS TAR format. Customers purchasing one implementation version of PLOT3D or PLOT3D/TURB3D will be given a $200 discount on each additional implementation version ordered at the same time. Version 3.6b+ of PLOT3D and PLOT3D/TURB3D are also supported for the following computers and graphics libraries: (1) Silicon Graphics IRIS 2xxx/3xxx or IRIS 4D workstations (ARC-12783, ARC-12782); (2) VAX computers running VMS Version 5.0 and DISSPLA Version 11.0 (ARC-12777, ARC-12781); (3) generic UNIX and DISSPLA Version 11.0 (ARC-12788, ARC-12778); and (4) Apollo computers running UNIX and GMR3D Version 2.0 (ARC-12789, ARC-12785 - which have no capabilities to put text on plots). Silicon Graphics Iris, IRIS 4D, and IRIS 2xxx/3xxx are trademarks of Silicon Graphics Incorporated. VAX and VMS are trademarks of Digital Equipment Corporation. DISSPLA is a trademark of Computer Associates. CRAY 2 and UNICOS are trademarks of CRAY Research, Incorporated. CONVEX is a trademark of Convex Computer Corporation. Alliant is a trademark of Alliant. Apollo, DN10000, and GMR3D are trademarks of Hewlett-Packard, Incorporated. System V is a trademark of Bell Labs, Incorporated. BSD4.3 is a trademark of the University of California at Berkeley. UNIX is a registered trademark of AT&T.
RAPPORT: running scientific high-performance computing applications on the cloud.
Cohen, Jeremy; Filippis, Ioannis; Woodbridge, Mark; Bauer, Daniela; Hong, Neil Chue; Jackson, Mike; Butcher, Sarah; Colling, David; Darlington, John; Fuchs, Brian; Harvey, Matt
2013-01-28
Cloud computing infrastructure is now widely used in many domains, but one area where there has been more limited adoption is research computing, in particular for running scientific high-performance computing (HPC) software. The Robust Application Porting for HPC in the Cloud (RAPPORT) project took advantage of existing links between computing researchers and application scientists in the fields of bioinformatics, high-energy physics (HEP) and digital humanities, to investigate running a set of scientific HPC applications from these domains on cloud infrastructure. In this paper, we focus on the bioinformatics and HEP domains, describing the applications and target cloud platforms. We conclude that, while there are many factors that need consideration, there is no fundamental impediment to the use of cloud infrastructure for running many types of HPC applications and, in some cases, there is potential for researchers to benefit significantly from the flexibility offered by cloud platforms.
GEANT4 distributed computing for compact clusters
NASA Astrophysics Data System (ADS)
Harrawood, Brian P.; Agasthya, Greeshma A.; Lakshmanan, Manu N.; Raterman, Gretchen; Kapadia, Anuj J.
2014-11-01
A new technique for distribution of GEANT4 processes is introduced to simplify running a simulation in a parallel environment such as a tightly coupled computer cluster. Using a new C++ class derived from the GEANT4 toolkit, multiple runs forming a single simulation are managed across a local network of computers with a simple inter-node communication protocol. The class is integrated with the GEANT4 toolkit and is designed to scale from a single symmetric multiprocessing (SMP) machine to compact clusters ranging in size from tens to thousands of nodes. User-designed 'work tickets' are distributed to clients using a client-server work flow model to specify the parameters for each individual run of the simulation. The new g4DistributedRunManager class was developed and well tested in the course of our Neutron Stimulated Emission Computed Tomography (NSECT) experiments. It will be useful for anyone running GEANT4 on large discrete data sets, such as covering a range of angles in computed tomography, calculating dose delivery with multiple fractions, or simply speeding the throughput of a single model.
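The work-ticket pattern described above is independent of GEANT4 itself. The self-contained Python sketch below illustrates only the client-server ticket flow; g4DistributedRunManager is a C++ class and its actual protocol is not reproduced, so the host, port, and ticket fields here are assumptions.

    import json, socket, sys

    HOST, PORT = "127.0.0.1", 5555  # placeholder endpoint

    def serve(tickets):
        """Hand one JSON 'work ticket' (per-run parameters) to each client."""
        with socket.create_server((HOST, PORT)) as srv:
            for ticket in tickets:
                conn, _ = srv.accept()
                with conn:
                    conn.sendall(json.dumps(ticket).encode())

    def work():
        """Client: fetch a ticket, then run one piece of the simulation with it."""
        with socket.create_connection((HOST, PORT)) as s:
            ticket = json.loads(s.recv(4096).decode())
        print("would run one simulation run with parameters:", ticket)

    if __name__ == "__main__":
        if sys.argv[1:] == ["server"]:
            # e.g. one ticket per projection angle of a tomography scan
            serve([{"run": i, "angle_deg": 360.0 * i / 8} for i in range(8)])
        else:
            work()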
NQS - NETWORK QUEUING SYSTEM, VERSION 2.0 (UNIX VERSION)
NASA Technical Reports Server (NTRS)
Walter, H.
1994-01-01
The Network Queuing System, NQS, is a versatile batch and device queuing facility for a single Unix computer or a group of networked computers. With the Unix operating system as a common interface, the user can invoke the NQS collection of user-space programs to move batch and device jobs freely around the different computer hardware tied into the network. NQS provides facilities for remote queuing, request routing, remote status, queue status controls, batch request resource quota limits, and remote output return. This program was developed as part of an effort aimed at tying together diverse UNIX based machines into NASA's Numerical Aerodynamic Simulator Processing System Network. This revision of NQS allows for creating, deleting, adding and setting of complexes that aid in limiting the number of requests to be handled at one time. It also has improved device-oriented queues along with some revision of the displays. NQS was designed to meet the following goals: 1) Provide for the full support of both batch and device requests. 2) Support all of the resource quotas enforceable by the underlying UNIX kernel implementation that are relevant to any particular batch request and its corresponding batch queue. 3) Support remote queuing and routing of batch and device requests throughout the NQS network. 4) Support queue access restrictions through user and group access lists for all queues. 5) Enable networked output return of both output and error files to possibly remote machines. 6) Allow mapping of accounts across machine boundaries. 7) Provide friendly configuration and modification mechanisms for each installation. 8) Support status operations across the network, without requiring a user to log in on remote target machines. 9) Provide for file staging or copying of files for movement to the actual execution machine. To support batch and device requests, NQS v.2 implements three queue types--batch, device and pipe. Batch queues hold and prioritize batch requests; device queues hold and prioritize device requests; pipe queues transport both batch and device requests to other batch, device, or pipe queues at local or remote machines. Unique to batch queues are resource quota limits that restrict the amounts of different resources that a batch request can consume during execution. Unique to each device queue is a set of one or more devices, such as a line printer, to which requests can be sent for execution. Pipe queues have associated destinations to which they route and deliver requests. If the proper destination machine is down or unreachable, pipe queues are able to requeue the request and deliver it later when the destination is available. All NQS network conversations are performed using the Berkeley socket mechanism as ported into the respective vendor kernels. NQS is written in C language. The generic UNIX version (ARC-13179) has been successfully implemented on a variety of UNIX platforms, including Sun3 and Sun4 series computers, SGI IRIS computers running IRIX 3.3, DEC computers running ULTRIX 4.1, AMDAHL computers running UTS 1.3 and 2.1, platforms running BSD 4.3 UNIX. The IBM RS/6000 AIX version (COS-10042) is a vendor port. NQS 2.0 will also communicate with the Cray Research, Inc. and Convex, Inc. versions of NQS. The standard distribution medium for either machine version of NQS 2.0 is a 60Mb, QIC-24, .25 inch streaming magnetic tape cartridge in UNIX tar format. Upon request the generic UNIX version (ARC-13179) can be provided in UNIX tar format on alternate media. 
Please contact COSMIC to discuss the availability and cost of media to meet your specific needs. An electronic copy of the NQS 2.0 documentation is included on the program media. NQS 2.0 was released in 1991. The IBM RS/6000 port of NQS was developed in 1992. IRIX is a trademark of Silicon Graphics Inc. IRIS is a registered trademark of Silicon Graphics Inc. UNIX is a registered trademark of UNIX System Laboratories Inc. Sun3 and Sun4 are trademarks of Sun Microsystems Inc. DEC and ULTRIX are trademarks of Digital Equipment Corporation.
Probabilistic Guidance of Swarms using Sequential Convex Programming
2014-01-01
quadcopter fleet [24]. In this paper, sequential convex programming (SCP) [25] is implemented using model predictive control (MPC) to provide real-time...in order to make Problem 1 convex. The details for convexifying this problem can be found in [26]. The main steps are discretizing the problem using
Rapid figure-ground responses to stereograms reveal an advantage for a convex foreground.
Bertamini, Marco; Lawson, Rebecca
2008-01-01
Convexity has long been recognised as a factor that affects figure - ground segmentation, even when pitted against other factors such as symmetry [Kanizsa and Gerbino, 1976 Art and Artefacts Ed.M Henle (New York: Springer) pp 25-32]. It is accepted in the literature that the difference between concave and convex contours is important for the visual system, and that there is a prior expectation favouring convexities as figure. We used bipartite stimuli and a simple task in which observers had to report whether the foreground was on the left or the right. We report objective evidence that supports the idea that convexity affects figure-ground assignment, even though our stimuli were not pictorial in that depth order was specified unambiguously by binocular disparity.
Computational Methods for Feedback Controllers for Aerodynamics Flow Applications
2007-08-15
Iteration #, and y-translation by:
>> Fy = [unf(:,8); runA(:,8); runB(:,8); runC(:,8); runD(:,8); runE(:,8)];
>> Oy = [unf(:,23); runA(:,23); runB(:,23); runC(:,23); runD(:,23); runE(:,23)];
>> Iter = [unf(:,1); runA(:,1); runB(:,1); runC(:,1); runD(:,1); runE(:,1)];
>> plot(Fy)
Cobalt version 4.0
Proposal for grid computing for nuclear applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Idris, Faridah Mohamad; Ismail, Saaidi; Haris, Mohd Fauzi B.
2014-02-12
The use of computer clusters for computational sciences, including computational physics, is vital as it provides the computing power to crunch big numbers at a faster rate. In compute-intensive applications that require high resolution, such as Monte Carlo simulation, the use of computer clusters in a grid form, which supplies computational power to any node within the grid that needs it, has now become a necessity. In this paper, we describe how clusters running a specific application can use resources within the grid to run the application and speed up the computing process.
Robust, Adaptive Radar Detection and Estimation
2015-07-21
cost function is not a convex function in R, we apply a transformation of variables, i.e., let X = σ²R⁻¹ and S′ = (1/σ²)S. Then, the revised cost function in... Σᵢ vᵢvᵢᴴ. We apply this inverse covariance matrix in computing the SINR as well as the estimator variance. • Rank-Constrained Maximum Likelihood: Our... even as almost all available training samples are corrupted. Probability of Detection vs. SNR: We apply three test statistics, the normalized matched
Engberg, Lovisa; Forsgren, Anders; Eriksson, Kjell; Hårdemark, Björn
2017-06-01
To formulate convex planning objectives of treatment plan multicriteria optimization with explicit relationships to the dose-volume histogram (DVH) statistics used in plan quality evaluation. Conventional planning objectives are designed to minimize the violation of DVH statistics thresholds using penalty functions. Although successful in guiding the DVH curve towards these thresholds, conventional planning objectives offer limited control of the individual points on the DVH curve (doses-at-volume) used to evaluate plan quality. In this study, we abandon the usual penalty-function framework and propose planning objectives that more closely relate to DVH statistics. The proposed planning objectives are based on mean-tail-dose, resulting in convex optimization. We also demonstrate how to adapt a standard optimization method to the proposed formulation in order to obtain a substantial reduction in computational cost. We investigated the potential of the proposed planning objectives as tools for optimizing DVH statistics through juxtaposition with the conventional planning objectives on two patient cases. Sets of treatment plans with differently balanced planning objectives were generated using either the proposed or the conventional approach. Dominance in the sense of better distributed doses-at-volume was observed in plans optimized within the proposed framework. The initial computational study indicates that the DVH statistics are better optimized and more efficiently balanced using the proposed planning objectives than using the conventional approach. © 2017 American Association of Physicists in Medicine.
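Mean-tail-dose, the average dose received by the hottest (or coldest) fraction of a structure's volume, is what makes the proposed objectives convex: up to convention it is the conditional value-at-risk (CVaR) of the dose distribution and admits the Rockafellar-Uryasev linear formulation. The sketch below, with a random placeholder dose-influence matrix and the cvxpy modeling package (both assumptions, not the authors' code), minimizes an upper mean-tail-dose.

    import cvxpy as cp
    import numpy as np

    np.random.seed(0)
    n_vox, n_beam = 500, 40
    D = np.abs(np.random.randn(n_vox, n_beam))  # placeholder dose-influence matrix
    v = 0.05                                    # tail fraction: hottest 5% of voxels

    x = cp.Variable(n_beam, nonneg=True)        # beamlet weights
    d = D @ x                                   # voxel doses
    alpha = cp.Variable()
    # Rockafellar-Uryasev form: upper mean-tail-dose = min over alpha of this expression
    mtd = alpha + cp.sum(cp.pos(d - alpha)) / (v * n_vox)

    # Toy constraint standing in for the remaining planning goals
    prob = cp.Problem(cp.Minimize(mtd), [cp.sum(d) / n_vox >= 20.0])
    prob.solve()
    print("optimized mean-tail-dose:", prob.value)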
NASA Astrophysics Data System (ADS)
Kibria, Mirza Golam; Villardi, Gabriel Porto; Ishizu, Kentaro; Kojima, Fumihide; Yano, Hiroyuki
2016-12-01
In this paper, we study inter-operator spectrum sharing and intra-operator resource allocation in shared spectrum access communication systems and propose efficient dynamic solutions to both the inter-operator and intra-operator resource allocation optimization problems. For inter-operator spectrum sharing, we present two competent approaches, namely subcarrier-gain-based sharing and fragmentation-based sharing, which carry out fair and flexible allocation of the available shareable spectrum among the operators subject to certain well-defined sharing rules, traffic demands, and channel propagation characteristics. The subcarrier-gain-based spectrum sharing scheme has been found to be more efficient in terms of achieved throughput, whereas fragmentation-based sharing is more attractive in terms of computational complexity. For intra-operator resource allocation, we consider a resource allocation problem with users' dissimilar service requirements, where the operator simultaneously supports users with delay-constrained and non-delay-constrained service requirements. This optimization problem is a mixed-integer non-linear programming problem and non-convex, which is computationally very expensive, and its complexity grows exponentially with the number of integer variables. We propose a less complex and efficient suboptimal solution based on exact linearization, linear approximation, and convexification techniques for the non-linear and/or non-convex objective functions and constraints. Extensive simulation performance analysis validates the efficiency of the proposed solution.
The Knaster-Kuratowski-Mazurkiewicz theorem and abstract convexities
NASA Astrophysics Data System (ADS)
Cain, George L., Jr.; González, Luis
2008-02-01
The Knaster-Kuratowski-Mazurkiewicz covering theorem (KKM) is the basic ingredient in the proofs of many so-called "intersection" theorems and related fixed point theorems (including the famous Brouwer fixed point theorem). The KKM theorem was extended from Rn to Hausdorff linear spaces by Ky Fan. There has subsequently been a plethora of attempts at extending KKM-type results to arbitrary topological spaces. Virtually all of these involve the introduction of some sort of abstract convexity structure for a topological space; among others we could mention H-spaces and G-spaces. We have introduced a new abstract convexity structure that generalizes the concept of a metric space with a convex structure, introduced by E. Michael in [E. Michael, Convex structures and continuous selections, Canad. J. Math. 11 (1959) 556-575], and called a topological space endowed with this structure an M-space. In an article by Sehie Park and Hoonjoo Kim [S. Park, H. Kim, Coincidence theorems for admissible multifunctions on generalized convex spaces, J. Math. Anal. Appl. 197 (1996) 173-187], the concepts of G-spaces and metric spaces with Michael's convex structure were mentioned together, but no relationship between them was shown. In this article, we prove that G-spaces and M-spaces are closely related. We also introduce the concept of an L-space, which is inspired by the MC-spaces of J.V. Llinares [J.V. Llinares, Unified treatment of the problem of existence of maximal elements in binary relations: A characterization, J. Math. Econom. 29 (1998) 285-302], and establish relationships between the convexities of these spaces and the spaces previously mentioned.
Estimation of Faults in DC Electrical Power System
NASA Technical Reports Server (NTRS)
Gorinevsky, Dimitry; Boyd, Stephen; Poll, Scott
2009-01-01
This paper demonstrates a novel optimization-based approach to estimating fault states in a DC power system. Potential faults changing the circuit topology are included along with faulty measurements. Our approach can be considered as a relaxation of the mixed estimation problem. We develop a linear model of the circuit and pose a convex problem for estimating the faults and other hidden states. A sparse fault vector solution is computed by using ℓ1 regularization. The solution is computed reliably and efficiently, and gives accurate diagnostics on the faults. We demonstrate a real-time implementation of the approach for an instrumented electrical power system testbed, the ADAPT testbed at NASA ARC. The estimates are computed in milliseconds on a PC. The approach performs well despite unmodeled transients and other modeling uncertainties present in the system.
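The estimation step has the familiar shape of an ℓ1-regularized least-squares (lasso-type) problem. The toy sketch below, using the cvxpy modeling package, conveys that shape only; the matrices, dimensions, and noise level are made up and bear no relation to the ADAPT testbed model.

    import cvxpy as cp
    import numpy as np

    # Toy linear measurement model  y = A x + B f + noise, with f a sparse fault vector.
    np.random.seed(1)
    m, n, k = 60, 20, 20
    A = np.random.randn(m, n)                  # placeholder circuit model
    B = np.random.randn(m, k)                  # placeholder fault signatures
    x_true = np.random.randn(n)
    f_true = np.zeros(k); f_true[3] = 2.5      # one injected fault
    y = A @ x_true + B @ f_true + 0.01 * np.random.randn(m)

    x, f = cp.Variable(n), cp.Variable(k)
    lam = 0.1
    # The l1 penalty on f promotes a sparse fault estimate; the rest is least squares.
    prob = cp.Problem(cp.Minimize(cp.sum_squares(y - A @ x - B @ f) + lam * cp.norm1(f)))
    prob.solve()
    print("estimated fault vector:", np.round(f.value, 2))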
ERIC Educational Resources Information Center
Swanson, David
2011-01-01
We give elementary proofs of formulas for the area and perimeter of a planar convex body surrounded by a band of uniform thickness. The primary tool is an integral formula for the perimeter of a convex body which describes the perimeter in terms of the projections of the body onto lines in the plane.
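For reference, the formulas in question are the classical Steiner-type relations for the outer parallel body of a planar convex body. Writing P and A for the perimeter and area of the body and d for the band thickness (my notation, not necessarily the article's):

    P_{\text{band}} = P + 2\pi d,
    \qquad
    A_{\text{outer}} = A + P d + \pi d^{2},

so the band itself has area P d + \pi d^{2}. The 2\pi d term reflects that, going once around the body, the rounded outer corners contribute in total one full circle of radius d.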
A path following algorithm for the graph matching problem.
Zaslavskiy, Mikhail; Bach, Francis; Vert, Jean-Philippe
2009-12-01
We propose a convex-concave programming approach for the labeled weighted graph matching problem. The convex-concave programming formulation is obtained by rewriting the weighted graph matching problem as a least-squares problem on the set of permutation matrices and relaxing it to two different optimization problems: a quadratic convex and a quadratic concave optimization problem on the set of doubly stochastic matrices. The concave relaxation has the same global minimum as the initial graph matching problem, but the search for its global minimum is also a hard combinatorial problem. We therefore construct an approximation of the concave problem solution by following a solution path of a convex-concave problem obtained by linear interpolation of the convex and concave formulations, starting from the convex relaxation. This method allows the information on graph label similarities to be easily integrated into the optimization problem, and therefore permits labeled weighted graph matching. The algorithm is compared with some of the best performing graph matching methods on four data sets: simulated graphs, QAPLib, retina vessel images, and handwritten Chinese characters. In all cases, the results are competitive with the state of the art.
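The path-following idea is easy to prototype. In the sketch below the convex and concave terms are illustrative stand-ins (a least-squares term ||AP - PB||_F^2 and -||P||_F^2, respectively), not the paper's exact relaxations, and Sinkhorn normalization is used as a cheap surrogate for projection onto the doubly stochastic matrices.

    import numpy as np

    def sinkhorn(P, iters=50):
        """Alternate row/column normalization, driving a positive matrix
        toward the set of doubly stochastic matrices."""
        for _ in range(iters):
            P = P / P.sum(axis=1, keepdims=True)
            P = P / P.sum(axis=0, keepdims=True)
        return P

    def path_following_match(A, B, steps=20, inner=100, eta=1e-3):
        """Track minimizers of F_lam = (1 - lam) * F0 + lam * F1 from the
        convex end (lam = 0) to the concave end (lam = 1), whose local
        minima lie at permutation matrices."""
        n = A.shape[0]
        P = np.full((n, n), 1.0 / n)  # barycenter of the Birkhoff polytope
        for lam in np.linspace(0.0, 1.0, steps):
            for _ in range(inner):
                R = A @ P - P @ B
                g0 = 2 * (A.T @ R - R @ B.T)  # gradient of ||AP - PB||_F^2
                g1 = -2 * P                   # gradient of -||P||_F^2
                P = P - eta * ((1 - lam) * g0 + lam * g1)
                P = sinkhorn(np.clip(P, 1e-12, None))
        return P

A final rounding step (e.g. the Hungarian algorithm via scipy.optimize.linear_sum_assignment) converts the near-permutation P into a hard matching.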
PLOT3D/AMES, DEC VAX VMS VERSION USING DISSPLA (WITHOUT TURB3D)
NASA Technical Reports Server (NTRS)
Buning, P. G.
1994-01-01
PLOT3D is an interactive graphics program designed to help scientists visualize computational fluid dynamics (CFD) grids and solutions. Today, supercomputers and CFD algorithms can provide scientists with simulations of such highly complex phenomena that obtaining an understanding of the simulations has become a major problem. Tools which help the scientist visualize the simulations can be of tremendous aid. PLOT3D/AMES offers more functions and features, and has been adapted for more types of computers than any other CFD graphics program. Version 3.6b+ is supported for five computers and graphic libraries. Using PLOT3D, CFD physicists can view their computational models from any angle, observing the physics of problems and the quality of solutions. As an aid in designing aircraft, for example, PLOT3D's interactive computer graphics can show vortices, temperature, reverse flow, pressure, and dozens of other characteristics of air flow during flight. As critical areas become obvious, they can easily be studied more closely using a finer grid. PLOT3D is part of a computational fluid dynamics software cycle. First, a program such as 3DGRAPE (ARC-12620) helps the scientist generate computational grids to model an object and its surrounding space. Once the grids have been designed and parameters such as the angle of attack, Mach number, and Reynolds number have been specified, a "flow-solver" program such as INS3D (ARC-11794 or COS-10019) solves the system of equations governing fluid flow, usually on a supercomputer. Grids sometimes have as many as two million points, and the "flow-solver" produces a solution file which contains density, x- y- and z-momentum, and stagnation energy for each grid point. With such a solution file and a grid file containing up to 50 grids as input, PLOT3D can calculate and graphically display any one of 74 functions, including shock waves, surface pressure, velocity vectors, and particle traces. PLOT3D's 74 functions are organized into five groups: 1) Grid Functions for grids, grid-checking, etc.; 2) Scalar Functions for contour or carpet plots of density, pressure, temperature, Mach number, vorticity magnitude, helicity, etc.; 3) Vector Functions for vector plots of velocity, vorticity, momentum, and density gradient, etc.; 4) Particle Trace Functions for rake-like plots of particle flow or vortex lines; and 5) Shock locations based on pressure gradient. TURB3D is a modification of PLOT3D which is used for viewing CFD simulations of incompressible turbulent flow. Input flow data consists of pressure, velocity and vorticity. Typical quantities to plot include local fluctuations in flow quantities and turbulent production terms, plotted in physical or wall units. PLOT3D/TURB3D includes both TURB3D and PLOT3D because the operation of TURB3D is identical to PLOT3D, and there is no additional sample data or printed documentation for TURB3D. Graphical capabilities of PLOT3D version 3.6b+ vary among the implementations available through COSMIC. Customers are encouraged to purchase and carefully review the PLOT3D manual before ordering the program for a specific computer and graphics library. There is only one manual for use with all implementations of PLOT3D, and although this manual generally assumes that the Silicon Graphics Iris implementation is being used, informative comments concerning other implementations appear throughout the text. 
With all implementations, the visual representation of the object and flow field created by PLOT3D consists of points, lines, and polygons. Points can be represented with dots or symbols, color can be used to denote data values, and perspective is used to show depth. Differences among implementations impact the program's ability to use graphical features that are based on 3D polygons, the user's ability to manipulate the graphical displays, and the user's ability to obtain alternate forms of output. The VAX/VMS/DISSPLA implementation of PLOT3D supports 2-D polygons as well as 2-D and 3-D lines, but does not support graphics features requiring 3-D polygons (shading and hidden line removal, for example). Views can be manipulated using keyboard commands. This version of PLOT3D is potentially able to produce files for a variety of output devices; however, site-specific capabilities will vary depending on the device drivers supplied with the user's DISSPLA library. If ARCGRAPH (ARC-12350) is installed on the user's VAX, the VMS/DISSPLA version of PLOT3D can also be used to create files for use in GAS (Graphics Animation System, ARC-12379), an IRIS program capable of animating and recording images on film. The version 3.6b+ VMS/DISSPLA implementations of PLOT3D (ARC-12777) and PLOT3D/TURB3D (ARC-12781) were developed for use on VAX computers running VMS Version 5.0 and DISSPLA Version 11.0. The standard distribution media for each of these programs is a 9-track, 6250 bpi magnetic tape in DEC VAX BACKUP format. Customers purchasing one implementation version of PLOT3D or PLOT3D/TURB3D will be given a $200 discount on each additional implementation version ordered at the same time. Version 3.6b+ of PLOT3D and PLOT3D/TURB3D are also supported for the following computers and graphics libraries: (1) generic UNIX Supercomputer and IRIS, suitable for CRAY 2/UNICOS, CONVEX, and Alliant with remote IRIS 2xxx/3xxx or IRIS 4D (ARC-12779, ARC-12784); (2) Silicon Graphics IRIS 2xxx/3xxx or IRIS 4D (ARC-12783, ARC-12782); (3) generic UNIX and DISSPLA Version 11.0 (ARC-12788, ARC-12778); and (4) Apollo computers running UNIX and GMR3D Version 2.0 (ARC-12789, ARC-12785 - which have no capabilities to put text on plots). Silicon Graphics Iris, IRIS 4D, and IRIS 2xxx/3xxx are trademarks of Silicon Graphics Incorporated. VAX and VMS are trademarks of Digital Equipment Corporation. DISSPLA is a trademark of Computer Associates. CRAY 2 and UNICOS are trademarks of CRAY Research, Incorporated. CONVEX is a trademark of Convex Computer Corporation. Alliant is a trademark of Alliant. Apollo and GMR3D are trademarks of Hewlett-Packard, Incorporated. UNIX is a registered trademark of AT&T.
PLOT3D/AMES, DEC VAX VMS VERSION USING DISSPLA (WITH TURB3D)
NASA Technical Reports Server (NTRS)
Buning, P.
1994-01-01
PLOT3D is an interactive graphics program designed to help scientists visualize computational fluid dynamics (CFD) grids and solutions. Today, supercomputers and CFD algorithms can provide scientists with simulations of such highly complex phenomena that obtaining an understanding of the simulations has become a major problem. Tools which help the scientist visualize the simulations can be of tremendous aid. PLOT3D/AMES offers more functions and features, and has been adapted for more types of computers than any other CFD graphics program. Version 3.6b+ is supported for five computers and graphic libraries. Using PLOT3D, CFD physicists can view their computational models from any angle, observing the physics of problems and the quality of solutions. As an aid in designing aircraft, for example, PLOT3D's interactive computer graphics can show vortices, temperature, reverse flow, pressure, and dozens of other characteristics of air flow during flight. As critical areas become obvious, they can easily be studied more closely using a finer grid. PLOT3D is part of a computational fluid dynamics software cycle. First, a program such as 3DGRAPE (ARC-12620) helps the scientist generate computational grids to model an object and its surrounding space. Once the grids have been designed and parameters such as the angle of attack, Mach number, and Reynolds number have been specified, a "flow-solver" program such as INS3D (ARC-11794 or COS-10019) solves the system of equations governing fluid flow, usually on a supercomputer. Grids sometimes have as many as two million points, and the "flow-solver" produces a solution file which contains density, x- y- and z-momentum, and stagnation energy for each grid point. With such a solution file and a grid file containing up to 50 grids as input, PLOT3D can calculate and graphically display any one of 74 functions, including shock waves, surface pressure, velocity vectors, and particle traces. PLOT3D's 74 functions are organized into five groups: 1) Grid Functions for grids, grid-checking, etc.; 2) Scalar Functions for contour or carpet plots of density, pressure, temperature, Mach number, vorticity magnitude, helicity, etc.; 3) Vector Functions for vector plots of velocity, vorticity, momentum, and density gradient, etc.; 4) Particle Trace Functions for rake-like plots of particle flow or vortex lines; and 5) Shock locations based on pressure gradient. TURB3D is a modification of PLOT3D which is used for viewing CFD simulations of incompressible turbulent flow. Input flow data consists of pressure, velocity and vorticity. Typical quantities to plot include local fluctuations in flow quantities and turbulent production terms, plotted in physical or wall units. PLOT3D/TURB3D includes both TURB3D and PLOT3D because the operation of TURB3D is identical to PLOT3D, and there is no additional sample data or printed documentation for TURB3D. Graphical capabilities of PLOT3D version 3.6b+ vary among the implementations available through COSMIC. Customers are encouraged to purchase and carefully review the PLOT3D manual before ordering the program for a specific computer and graphics library. There is only one manual for use with all implementations of PLOT3D, and although this manual generally assumes that the Silicon Graphics Iris implementation is being used, informative comments concerning other implementations appear throughout the text. 
With all implementations, the visual representation of the object and flow field created by PLOT3D consists of points, lines, and polygons. Points can be represented with dots or symbols, color can be used to denote data values, and perspective is used to show depth. Differences among implementations impact the program's ability to use graphical features that are based on 3D polygons, the user's ability to manipulate the graphical displays, and the user's ability to obtain alternate forms of output. The VAX/VMS/DISSPLA implementation of PLOT3D supports 2-D polygons as well as 2-D and 3-D lines, but does not support graphics features requiring 3-D polygons (shading and hidden line removal, for example). Views can be manipulated using keyboard commands. This version of PLOT3D is potentially able to produce files for a variety of output devices; however, site-specific capabilities will vary depending on the device drivers supplied with the user's DISSPLA library. If ARCGRAPH (ARC-12350) is installed on the user's VAX, the VMS/DISSPLA version of PLOT3D can also be used to create files for use in GAS (Graphics Animation System, ARC-12379), an IRIS program capable of animating and recording images on film. The version 3.6b+ VMS/DISSPLA implementations of PLOT3D (ARC-12777) and PLOT3D/TURB3D (ARC-12781) were developed for use on VAX computers running VMS Version 5.0 and DISSPLA Version 11.0. The standard distribution media for each of these programs is a 9-track, 6250 bpi magnetic tape in DEC VAX BACKUP format. Customers purchasing one implementation version of PLOT3D or PLOT3D/TURB3D will be given a $200 discount on each additional implementation version ordered at the same time. Version 3.6b+ of PLOT3D and PLOT3D/TURB3D are also supported for the following computers and graphics libraries: (1) generic UNIX Supercomputer and IRIS, suitable for CRAY 2/UNICOS, CONVEX, and Alliant with remote IRIS 2xxx/3xxx or IRIS 4D (ARC-12779, ARC-12784); (2) Silicon Graphics IRIS 2xxx/3xxx or IRIS 4D (ARC-12783, ARC-12782); (3) generic UNIX and DISSPLA Version 11.0 (ARC-12788, ARC-12778); and (4) Apollo computers running UNIX and GMR3D Version 2.0 (ARC-12789, ARC-12785 - which have no capabilities to put text on plots). Silicon Graphics Iris, IRIS 4D, and IRIS 2xxx/3xxx are trademarks of Silicon Graphics Incorporated. VAX and VMS are trademarks of Digital Equipment Corporation. DISSPLA is a trademark of Computer Associates. CRAY 2 and UNICOS are trademarks of CRAY Research, Incorporated. CONVEX is a trademark of Convex Computer Corporation. Alliant is a trademark of Alliant. Apollo and GMR3D are trademarks of Hewlett-Packard, Incorporated. UNIX is a registered trademark of AT&T.
SSL - THE SIMPLE SOCKETS LIBRARY
NASA Technical Reports Server (NTRS)
Campbell, C. E.
1994-01-01
The Simple Sockets Library (SSL) allows C programmers to develop systems of cooperating programs using Berkeley streaming Sockets running under the TCP/IP protocol over Ethernet. The SSL provides a simple way to move information between programs running on the same or different machines and does so with little overhead. The SSL can create three types of Sockets: namely a server, a client, and an accept Socket. The SSL's Sockets are designed to be used in a fashion reminiscent of the use of FILE pointers, so that a C programmer who is familiar with reading and writing files will immediately feel comfortable with reading and writing with Sockets. The SSL consists of three parts: the library, the PortMaster, and utilities. The user of the SSL accesses it by linking programs to the SSL library. The PortMaster initializes connections between clients and servers. The PortMaster also supports a "firewall" facility to keep out socket requests from unapproved machines. The "firewall" is a file which contains Internet addresses for all approved machines. Three utilities are provided with the SSL. SKTDBG can be used to debug programs that make use of the SSL. SPMTABLE lists the servers and port numbers on requested machine(s). SRMSRVR tells the PortMaster to forcibly remove a server name from its list. The package also includes two example programs: multiskt.c, which makes multiple accepts on one server, and sktpoll.c, which repeatedly attempts to connect a client to some server at one-second intervals. SSL is a machine-independent library written in the C language for computers connected via Ethernet using the TCP/IP protocol. It has been successfully compiled and implemented on a variety of platforms, including Sun series computers running SunOS, DEC VAX series computers running VMS, SGI computers running IRIX, DECstations running ULTRIX, DEC Alpha AXPs running OSF/1, IBM RS/6000 computers running AIX, IBM PC and compatibles running BSD/386 UNIX, and HP Apollo 3000/4000/9000/400T computers running HP-UX. SSL requires 45K of RAM to run under SunOS and 80K of RAM to run under VMS. For use on IBM PC series computers and compatibles running DOS, SSL requires Microsoft C 6.0 and the Wollongong TCP/IP package. Source code for sample programs and debugging tools is provided. The documentation is available on the distribution medium in TeX and PostScript formats. The standard distribution medium for SSL is a .25 inch streaming magnetic tape cartridge (QIC-24) in UNIX tar format. It is also available on a 3.5 inch diskette in UNIX tar format and a 5.25 inch 360K MS-DOS format diskette. The SSL was developed in 1992 and updated in 1993.
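A sense of the intended FILE-pointer-style usage can be given with a short sketch. The SSL itself is a C library whose function names are not listed in this abstract, so the sketch below instead uses Python's standard socket module (a thin wrapper over the same Berkeley sockets) to mimic the server/client/accept pattern; none of these calls are SSL's own API.

```python
import socket

# Server: create, bind, listen, accept, then read/write like a file.
def run_server(port=5000):
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)   # "server Socket"
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("", port))
    srv.listen(1)
    conn, addr = srv.accept()              # "accept Socket" for one client
    with conn.makefile("rwb") as f:        # file-like wrapper, FILE-pointer style
        line = f.readline()
        f.write(b"echo: " + line)
        f.flush()
    srv.close()

# Client: connect, then converse over the same stream interface.
def run_client(host="localhost", port=5000):
    cli = socket.create_connection((host, port))   # "client Socket"
    with cli.makefile("rwb") as f:
        f.write(b"hello\n")
        f.flush()
        print(f.readline())
```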
Nested Conjugate Gradient Algorithm with Nested Preconditioning for Non-linear Image Restoration.
Skariah, Deepak G; Arigovindan, Muthuvel
2017-06-19
We develop a novel optimization algorithm, which we call the Nested Non-Linear Conjugate Gradient algorithm (NNCG), for image restoration based on quadratic data fitting and smooth non-quadratic regularization. The algorithm is constructed as a nesting of two conjugate gradient (CG) iterations. The outer iteration is a preconditioned non-linear CG algorithm; the preconditioning is performed by the inner CG iteration, which is linear. The inner CG iteration, which performs the preconditioning for the outer CG iteration, is itself accelerated by another FFT-based non-iterative preconditioner. We prove that the method converges to a stationary point for both convex and non-convex regularization functionals. We demonstrate experimentally that the proposed method outperforms the well-known majorization-minimization method for convex regularization, and a non-convex inertial-proximal method for non-convex regularization functionals.
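The nesting can be sketched compactly. The following is a minimal skeleton, not the authors' NNCG: it omits their FFT-based inner preconditioner and uses a toy smooth regularizer phi(t) = sqrt(t^2 + eps), with the inner linear CG approximately solving the preconditioning system for each outer preconditioned nonlinear CG (Polak-Ribiere) step.

```python
import numpy as np

def cg(apply_M, r, iters=20, tol=1e-12):
    """Inner linear CG: approximately solve M z = r given a matvec apply_M."""
    z = np.zeros_like(r)
    res = r.copy()                      # residual of M z = r at z = 0
    p = res.copy()
    rs = res @ res
    for _ in range(iters):
        Mp = apply_M(p)
        alpha = rs / (p @ Mp)
        z += alpha * p
        res -= alpha * Mp
        rs_new = res @ res
        if rs_new < tol:
            break
        p = res + (rs_new / rs) * p
        rs = rs_new
    return z

# Toy objective: F(x) = 0.5||Ax - b||^2 + lam * sum(sqrt(x_i^2 + eps))
rng = np.random.default_rng(0)
A = rng.standard_normal((80, 40))
b = rng.standard_normal(80)
lam, eps = 0.1, 1e-3

F = lambda x: 0.5 * np.sum((A @ x - b) ** 2) + lam * np.sum(np.sqrt(x**2 + eps))
grad = lambda x: A.T @ (A @ x - b) + lam * x / np.sqrt(x**2 + eps)

def precond_matvec(x):
    """Matvec with a local Hessian-like SPD operator, used as preconditioner M."""
    d = lam * eps / (x**2 + eps) ** 1.5          # second derivative of phi
    return lambda v: A.T @ (A @ v) + d * v

x = np.zeros(40)
g = grad(x)
z = cg(precond_matvec(x), g)
p = -z
for _ in range(50):
    if g @ p >= 0:                               # safeguard: restart on non-descent
        p = -z
    t, f0, slope = 1.0, F(x), g @ p
    while F(x + t * p) > f0 + 1e-4 * t * slope:  # backtracking line search
        t *= 0.5
    x = x + t * p
    g_new = grad(x)
    z_new = cg(precond_matvec(x), g_new)
    beta = max(0.0, (g_new @ (z_new - z)) / (g @ z))   # preconditioned PR+
    p = -z_new + beta * p
    g, z = g_new, z_new

print(np.linalg.norm(grad(x)))   # gradient norm shrinks toward a stationary point
```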
Convex set and linear mixing model
NASA Technical Reports Server (NTRS)
Xu, P.; Greeley, R.
1993-01-01
A major goal of optical remote sensing is to determine surface compositions of the earth and other planetary objects. For assessment of composition, single pixels in multi-spectral images usually record a mixture of the signals from various materials within the corresponding surface area. In this report, we introduce a closed and bounded convex set as a mathematical model for linear mixing. This model has a clear geometric implication because the closed and bounded convex set is a natural generalization of a triangle in n-space. The endmembers are extreme points of the convex set. Every point in the convex closure of the endmembers is a linear mixture of those endmembers, which is exactly how linear mixing is defined. With this model, some general criteria for selecting endmembers could be described. This model can lead to a better understanding of linear mixing models.
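As a concrete illustration of the geometry, consider a toy example in which the endmembers are the extreme points and a pixel is their convex combination; the abundances can then be recovered by constrained least squares. The spectra below are made up, and nonnegative least squares with an appended sum-to-one row is just one simple way to impose the convexity constraints.

```python
import numpy as np
from scipy.optimize import nnls

# Endmembers as extreme points of the convex set (4 bands x 3 endmembers, made up).
E = np.array([[0.9, 0.1, 0.3],
              [0.2, 0.8, 0.4],
              [0.1, 0.3, 0.9],
              [0.4, 0.2, 0.6]])

a_true = np.array([0.5, 0.3, 0.2])     # abundances: nonnegative, summing to 1
pixel = E @ a_true                     # linear mixture = convex combination

# Recover abundances: nonnegativity from nnls, sum-to-one as an extra equation.
A = np.vstack([E, np.ones((1, 3))])
b = np.append(pixel, 1.0)
a_hat, _ = nnls(A, b)
print(np.round(a_hat, 3))              # ~ [0.5, 0.3, 0.2]
```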
Computing the Feasible Spaces of Optimal Power Flow Problems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Molzahn, Daniel K.
2017-03-15
The solution to an optimal power flow (OPF) problem provides a minimum cost operating point for an electric power system. The performance of OPF solution techniques strongly depends on the problem's feasible space. This paper presents an algorithm that is guaranteed to compute the entire feasible spaces of small OPF problems to within a specified discretization tolerance. Specifically, the feasible space is computed by discretizing certain of the OPF problem's inequality constraints to obtain a set of power flow equations. All solutions to the power flow equations at each discretization point are obtained using the Numerical Polynomial Homotopy Continuation (NPHC) algorithm. To improve computational tractability, "bound tightening" and "grid pruning" algorithms use convex relaxations to preclude consideration of many discretization points that are infeasible for the OPF problem. Here, the proposed algorithm is used to generate the feasible spaces of two small test cases.
Simulation of LHC events on a million threads
NASA Astrophysics Data System (ADS)
Childers, J. T.; Uram, T. D.; LeCompte, T. J.; Papka, M. E.; Benjamin, D. P.
2015-12-01
Demand for Grid resources is expected to double during LHC Run II as compared to Run I; the capacity of the Grid, however, will not double. The HEP community must consider how to bridge this computing gap by targeting larger compute resources and using the available compute resources as efficiently as possible. Argonne's Mira, the fifth fastest supercomputer in the world, can run roughly five times the number of parallel processes that the ATLAS experiment typically uses on the Grid. We ported Alpgen, a serial x86 code, to run as a parallel application under MPI on the Blue Gene/Q architecture. By analysis of the Alpgen code, we reduced the memory footprint to allow running 64 threads per node, utilizing the four hardware threads available per core on the PowerPC A2 processor. Event generation and unweighting, typically run as independent serial phases, are coupled together in a single job in this scenario, reducing intermediate writes to the filesystem. By these optimizations, we have successfully run LHC proton-proton physics event generation at the scale of a million threads, filling two-thirds of Mira.
Anatomical study of the pelvis in patients with adolescent idiopathic scoliosis
Qiu, Xu-Sheng; Zhang, Jun-Jie; Yang, Shang-Wen; Lv, Feng; Wang, Zhi-Wei; Chiew, Jonathan; Ma, Wei-Wei; Qiu, Yong
2012-01-01
Standing posterior–anterior (PA) radiographs from our clinical practice show that the concave and convex ilia are not always symmetrical in patients with adolescent idiopathic scoliosis (AIS). Transverse pelvic rotation may explain this observation, or pelvic asymmetry may be responsible. The present study investigated pelvic symmetry by examining the volume and linear measurements of the two hipbones in patients with AIS. Forty-two female patients with AIS were recruited for the study. Standing PA radiographs (covering the thoracic and lumbar spinal regions and the entire pelvis), CT scans and 3D reconstructions of the pelvis were obtained for all subjects. The concave/convex ratio of the inferior ilium at the sacroiliac joint medially (SI) and the anterior superior iliac spine laterally (ASIS) were measured on PA radiographs. Hipbone volumes and several distortion and abduction parameters were measured by post-processing software. The concave/convex ratio of SI–ASIS on PA radiographs was 0.97, which was significantly < 1 (P < 0.001). The concave and convex hipbone volumes were comparable in patients with AIS. The hipbone volumes were 257.3 ± 43.5 cm3 and 256.9 ± 42.6 cm3 at the concave and convex sides, respectively (P > 0.05). Furthermore, all distortion and abduction parameters were comparable between the convex and concave sides. Therefore, the present study showed that there was no pelvic asymmetry in patients with AIS, although the concave/convex ratio of SI–ASIS on PA radiographs was significantly < 1. The clinical phenomenon of asymmetrical concave and convex ilia in patients with AIS in preoperative standing PA radiographs may be caused by transverse pelvic rotation, but it is not due to developmental asymmetry or distortion of the pelvis. PMID:22133294
On the convexity of ROC curves estimated from radiological test results
Pesce, Lorenzo L.; Metz, Charles E.; Berbaum, Kevin S.
2010-01-01
Rationale and Objectives Although an ideal observer’s receiver operating characteristic (ROC) curve must be convex — i.e., its slope must decrease monotonically — published fits to empirical data often display “hooks.” Such fits sometimes are accepted on the basis of an argument that experiments are done with real, rather than ideal, observers. However, the fact that ideal observers must produce convex curves does not imply that convex curves describe only ideal observers. This paper aims to identify the practical implications of non-convex ROC curves and the conditions that can lead to empirical and/or fitted ROC curves that are not convex. Materials and Methods This paper views non-convex ROC curves from historical, theoretical and statistical perspectives, which we describe briefly. We then consider population ROC curves with various shapes and analyze the types of medical decisions that they imply. Finally, we describe how sampling variability and curve-fitting algorithms can produce ROC curve estimates that include hooks. Results We show that hooks in population ROC curves imply the use of an irrational decision strategy, even when the curve doesn’t cross the chance line, and therefore usually are untenable in medical settings. Moreover, we sketch a simple approach to improve any non-convex ROC curve by adding statistical variation to the decision process. Finally, we sketch how to test whether hooks present in ROC data are likely to have been caused by chance alone and how some hooked ROCs found in the literature can be easily explained as fitting artifacts or modeling issues. Conclusion In general, ROC curve fits that show hooks should be looked upon with suspicion unless other arguments justify their presence. PMID:20599155
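One standard way to "repair" a hooked empirical curve is to replace it with its upper convex hull (the ROC convex hull), whose interior points are realizable by randomizing between adjacent operating points, essentially the statistical-variation fix sketched in the abstract. A minimal monotone-chain version, with made-up operating points:

```python
import numpy as np

def roc_convex_hull(fpr, tpr):
    """Upper convex hull of empirical ROC points, with the trivial endpoints
    (0,0) and (1,1) appended; hooked points fall strictly below the hull."""
    pts = np.vstack([[0, 0], np.column_stack([fpr, tpr]), [1, 1]])
    pts = pts[np.lexsort((pts[:, 1], pts[:, 0]))]   # sort by FPR, then TPR
    hull = []
    for p in pts:
        while len(hull) >= 2:
            (x1, y1), (x2, y2) = hull[-2], hull[-1]
            # Pop hull[-1] if it lies on or below the chord hull[-2] -> p.
            if (x2 - x1) * (p[1] - y1) >= (p[0] - x1) * (y2 - y1):
                hull.pop()
            else:
                break
        hull.append((p[0], p[1]))
    return np.array(hull)

fpr = np.array([0.1, 0.3, 0.5, 0.7])
tpr = np.array([0.4, 0.5, 0.9, 0.95])   # the point (0.3, 0.5) creates a "hook"
print(roc_convex_hull(fpr, tpr))        # hooked point is dropped from the hull
```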
Growth of benzil crystals by vertical dynamic gradient freeze technique in a transparent furnace
NASA Astrophysics Data System (ADS)
Lan, C. W.; Song, C. R.
1997-09-01
The vertical dynamic gradient freeze technique using a transparent furnace was applied to the growth of benzil single crystals. A flat-bottom ampoule with a <0001> seed was used for growth. During crystal growth, dynamic heating profiles were controlled through a computer, and the growth interface was recorded by a CCD camera. Computer simulation was also conducted, and the calculated convex interface and dynamic growth rate were consistent with the observed ones for various growth conditions. Conditions for growing single crystals were also determined; they were mainly limited by constitutional supercooling. The grown crystals were clear in appearance, and their optical absorption spectra were insensitive to growth conditions and post-annealing.
Congruency effects in dot comparison tasks: convex hull is more important than dot area.
Gilmore, Camilla; Cragg, Lucy; Hogan, Grace; Inglis, Matthew
2016-11-16
The dot comparison task, in which participants select the more numerous of two dot arrays, has become the predominant method of assessing Approximate Number System (ANS) acuity. Creation of the dot arrays requires the manipulation of visual characteristics, such as dot size and convex hull. For the task to provide a valid measure of ANS acuity, participants must ignore these characteristics and respond on the basis of number. Here, we report two experiments that explore the influence of dot area and convex hull on participants' accuracy on dot comparison tasks. We found that individuals' ability to ignore dot area information increases with age and display time. However, the influence of convex hull information remains stable across development and with additional time. This suggests that convex hull information is more difficult to inhibit when making judgements about numerosity and therefore it is crucial to control this when creating dot comparison tasks.
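For concreteness, the two stimulus characteristics at issue can be computed directly from a dot array. A minimal sketch with made-up parameters (here the hull is taken over dot centers; a more careful version would buffer the hull outward by the dot radii):

```python
import numpy as np
from scipy.spatial import ConvexHull

rng = np.random.default_rng(0)
n_dots = 20
centers = rng.uniform(0, 100, size=(n_dots, 2))   # dot positions (arbitrary units)
radii = rng.uniform(1.0, 3.0, size=n_dots)        # dot sizes

dot_area = float(np.sum(np.pi * radii ** 2))      # total dot area
hull = ConvexHull(centers)                        # convex hull of the array
print(dot_area, hull.volume)                      # in 2-D, .volume is the hull area
```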
Space ultra-vacuum facility and method of operation
NASA Technical Reports Server (NTRS)
Naumann, Robert J. (Inventor)
1988-01-01
A wake shield space processing facility (10) for maintaining ultra-high levels of vacuum is described. The wake shield (12) is a truncated hemispherical section having a convex side (14) and a concave side (24). Material samples (68) to be processed are located on the convex side of the shield, which faces in the wake direction during operation in orbit. Necessary processing fixtures (20) and (22) are also located on the convex side. Support equipment, including power supplies (40, 42), a CMG package (46), and an electronic control package (44), is located on the concave side (24) of the shield, facing the ram direction. Prior to operation in orbit, the wake shield is oriented in reverse with the convex side facing the ram direction to provide cleaning by exposure to ambient atomic oxygen. The shield is then baked out by being pointed directly at the sun to obtain heating for a suitable period.
Computational Approaches to Simulation and Optimization of Global Aircraft Trajectories
NASA Technical Reports Server (NTRS)
Ng, Hok Kwan; Sridhar, Banavar
2016-01-01
This study examines three possible approaches to improving the speed of generating wind-optimal routes for air traffic at the national or global level: (a) using the resources of a supercomputer, (b) running the computations on multiple commercially available computers, and (c) implementing the same algorithms in NASA's Future ATM Concepts Evaluation Tool (FACET); each is compared with a standard implementation run on a single CPU. Wind-optimal aircraft trajectories are computed using global air traffic schedules. The run time and wait time on the supercomputer for trajectory optimization using various numbers of CPUs ranging from 80 to 10,240 units are compared with the total computational time for running the same computation on a single desktop computer and on multiple commercially available computers, to assess the potential computational enhancement from parallel processing on computer clusters. This study also re-implements the trajectory optimization algorithm for further reduction of computational time through algorithm modifications, and integrates it with FACET so that the new features, which calculate time-optimal routes between worldwide airport pairs in a wind field, can be used with existing FACET applications. The implementations of the trajectory optimization algorithms use the MATLAB, Python, and Java programming languages. The performance evaluations are done by comparing computational efficiencies and are based on the potential application of optimized trajectories. The paper shows that in the absence of special privileges on a supercomputer, a cluster of commercially available computers provides a feasible approach for national and global air traffic system studies.
Unsteady transonic flow analysis for low aspect ratio, pointed wings.
NASA Technical Reports Server (NTRS)
Kimble, K. R.; Ruo, S. Y.; Wu, J. M.; Liu, D. Y.
1973-01-01
Oswatitsch and Keune's parabolic method for steady transonic flow is applied and extended to thin slender wings oscillating in the sonic flow field. The parabolic constant for the wing was determined from the equivalent body of revolution. Laplace transform methods were used to derive the asymptotic equations for pressure coefficient, and the Adams-Sears iterative procedure was employed to solve the equations. A computer program was developed to find the pressure distributions, generalized force coefficients, and stability derivatives for delta, convex, and concave wing planforms.
A Finite Element Analysis of a Class of Problems in Elasto-Plasticity with Hidden Variables.
1985-09-01
Final report (DTIC accession form partially illegible), Texas Institute for Computational Mechanics, Austin; J. T. Oden. Keywords: elastoplasticity; finite deformations; non-convex analysis; finite element methods; metal forming.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mota, Alejandro; Tezaur, Irina; Alleman, Coleman
2017-12-06
This corrigendum clarifies the conditions under which the proof of convergence of Theorem 1 from the original article is valid. We erroneously stated, as one of the conditions for the Schwarz alternating method to converge, that the energy functional must be strictly convex for the solid mechanics problem. We have relaxed that assumption and changed the corresponding parts of the text accordingly. None of the results or other parts of the original article are affected.
Square Deal: Lower Bounds and Improved Relaxations for Tensor Recovery
2013-08-16
In the experiments, the problem size n ranges from 10 to 30 in increments of 1 and the observation ratio ρ from 0.01 to 0.2 in increments of 0.01; for each (ρ, n) pair, 5 test instances are simulated.
Craft, David
2010-10-01
A discrete set of points and their convex combinations can serve as a sparse representation of the Pareto surface in multiple objective convex optimization. We develop a method to evaluate the quality of such a representation, and show by example that in multiple objective radiotherapy planning, the number of Pareto optimal solutions needed to represent Pareto surfaces of up to five dimensions grows at most linearly with the number of objectives. The method described is also applicable to the representation of convex sets. Copyright © 2009 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
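A toy two-objective version of the quality measure: sample a convex Pareto front sparsely, approximate it by convex combinations (piecewise-linear interpolation) of the samples, and report the worst-case gap. The front below is synthetic, not from a radiotherapy planning problem.

```python
import numpy as np

# "True" convex Pareto front for two objectives, parameterized densely.
t = np.linspace(0.0, 1.0, 1001)
front = np.column_stack([t, (1.0 - np.sqrt(t)) ** 2])   # convex trade-off curve

# Sparse representation: a few Pareto-optimal points plus their convex
# combinations, i.e. piecewise-linear interpolation between adjacent samples.
samples = front[::250]                                   # 5 sampled points
approx = np.interp(front[:, 0], samples[:, 0], samples[:, 1])

# Quality of the representation: worst-case vertical gap to the true front
# (chords of a convex curve lie above it, so the gap is nonnegative).
print(float(np.max(approx - front[:, 1])))
```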
Inhibitory competition in figure-ground perception: context and convexity.
Peterson, Mary A; Salvagio, Elizabeth
2008-12-15
Convexity has long been considered a potent cue as to which of two regions on opposite sides of an edge is the shaped figure. Experiment 1 shows that for a single edge, there is only a weak bias toward seeing the figure on the convex side. Experiments 1-3 show that the bias toward seeing the convex side as figure increases as the number of edges delimiting alternating convex and concave regions increases, provided that the concave regions are homogeneous in color. The results of Experiments 2 and 3 rule out a probability summation explanation for these context effects. Taken together, the results of Experiments 1-3 show that the homogeneity versus heterogeneity of the convex regions is irrelevant. Experiment 4 shows that homogeneity of alternating regions is not sufficient for context effects; a cue that favors the perception of the intervening regions as figures is necessary. Thus homogeneity does not alone operate as a background cue. We interpret our results within a model of figure-ground perception in which shape properties on opposite sides of an edge compete for representation and the competitive strength of weak competitors is further reduced when they are homogeneous.
Fowlkes, Charless C.; Banks, Martin S.
2010-01-01
The shape of the contour separating two regions strongly influences judgments of which region is "figure" and which is "ground." Convexity and other figure-ground cues are generally assumed to indicate only which region is nearer, but nothing about how much the regions are separated in depth. To determine the depth information conveyed by convexity, we examined natural scenes and found that depth steps across surfaces with convex silhouettes are likely to be larger than steps across surfaces with concave silhouettes. In a psychophysical experiment, we found that humans exploit this correlation. For a given binocular disparity, observers perceived more depth when the near surface's silhouette was convex rather than concave. We estimated the depth distributions observers used in making those judgments: they were similar to the natural-scene distributions. Our findings show that convexity should be reclassified as a metric depth cue. They also suggest that the dichotomy between metric and nonmetric depth cues is false and that the depth information provided by many cues should be evaluated with respect to natural-scene statistics. Finally, the findings provide an explanation for why figure-ground cues modulate the responses of disparity-sensitive cells in visual cortex. PMID:20505093
Convex Banding of the Covariance Matrix
Bien, Jacob; Bunea, Florentina; Xiao, Luo
2016-01-01
We introduce a new sparse estimator of the covariance matrix for high-dimensional models in which the variables have a known ordering. Our estimator, which is the solution to a convex optimization problem, is equivalently expressed as an estimator which tapers the sample covariance matrix by a Toeplitz, sparsely-banded, data-adaptive matrix. As a result of this adaptivity, the convex banding estimator enjoys theoretical optimality properties not attained by previous banding or tapered estimators. In particular, our convex banding estimator is minimax rate adaptive in Frobenius and operator norms, up to log factors, over commonly-studied classes of covariance matrices, and over more general classes. Furthermore, it correctly recovers the bandwidth when the true covariance is exactly banded. Our convex formulation admits a simple and efficient algorithm. Empirical studies demonstrate its practical effectiveness and illustrate that our exactly-banded estimator works well even when the true covariance matrix is only close to a banded matrix, confirming our theoretical results. Our method compares favorably with all existing methods, in terms of accuracy and speed. We illustrate the practical merits of the convex banding estimator by showing that it can be used to improve the performance of discriminant analysis for classifying sound recordings. PMID:28042189
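The estimator itself is the solution of a convex program, but its tapering interpretation is easy to illustrate: multiply the sample covariance entrywise by a banded Toeplitz weight matrix. The sketch below uses the classical fixed linear taper, not the paper's data-adaptive convex-banding solution, and made-up dimensions:

```python
import numpy as np

rng = np.random.default_rng(0)
p, n, K = 30, 200, 4                      # dimension, samples, true bandwidth

# True covariance: banded, with AR-like decay cut off at bandwidth K.
idx = np.arange(p)
dist = np.abs(idx[:, None] - idx[None, :])
Sigma = np.where(dist <= K, 0.6 ** dist, 0.0)

X = rng.multivariate_normal(np.zeros(p), Sigma, size=n)
S = np.cov(X, rowvar=False)               # sample covariance

def taper_band(S, k):
    """Taper S with a Toeplitz banding weight matrix: weight 1 within lag k,
    decaying linearly to 0 by lag 2k (the classical tapered estimator)."""
    w = np.clip(2.0 - dist / float(k), 0.0, 1.0) * (dist <= 2 * k)
    return S * w

for k in (2, 4, 8):
    err = np.linalg.norm(taper_band(S, k) - Sigma, 2)   # operator-norm error
    print(k, round(err, 3))
```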
Ternary alloy material prediction using genetic algorithm and cluster expansion
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Chong
2015-12-01
This thesis summarizes our study of crystal structure prediction for the Fe-V-Si system using a genetic algorithm and cluster expansion. Our goal is to explore and search for new stable compounds. We started from the ten known experimental phases and calculated the formation energies of those compounds using the density functional theory (DFT) package VASP. The convex hull was generated from the DFT calculations of the experimentally known phases. We then performed random searches on some metal-rich (Fe and V) compositions and found that the lowest-energy structures had a body-centered cubic (bcc) underlying lattice, on which we carried out systematic computational searches using the genetic algorithm and cluster expansion. Among the hundreds of searched compositions, thirteen were selected and their DFT formation energies were obtained with VASP. The stability of those thirteen compounds was checked against the experimental convex hull. We found that the composition 24-8-16, i.e., Fe3VSi2, is a new stable phase, which can be very inspiring for future experiments.
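The hull-stability test is simple to sketch. The toy below uses a hypothetical binary A-B system with made-up formation energies rather than the ternary Fe-V-Si hull (a 3-D construction of the same kind): a phase is stable exactly when its energy above the lower convex hull is zero.

```python
import numpy as np
from scipy.spatial import ConvexHull

# Hypothetical binary A-B data: x = fraction of B, E = formation energy (eV/atom).
points = np.array([
    [0.00,  0.00],    # pure A (reference)
    [0.25, -0.30],
    [0.50, -0.45],
    [0.75, -0.20],
    [1.00,  0.00],    # pure B (reference)
])

hull = ConvexHull(points)
# Keep only the lower hull: facets whose outward normal points downward in E.
lower = [s for s, eq in zip(hull.simplices, hull.equations) if eq[1] < 0]

def energy_above_hull(x, e):
    """Vertical distance from (x, e) to the lower convex hull (0 => stable)."""
    for simplex in lower:
        (x1, e1), (x2, e2) = points[simplex]
        if min(x1, x2) <= x <= max(x1, x2) and x1 != x2:
            e_hull = e1 + (e2 - e1) * (x - x1) / (x2 - x1)
            return e - e_hull
    return float("nan")

print(energy_above_hull(0.50, -0.45))   # 0.0: on the hull, stable
print(energy_above_hull(0.40, -0.30))   # > 0: decomposes into hull phases
```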
Adaptive convex combination approach for the identification of improper quaternion processes.
Ujang, Bukhari Che; Jahanchahi, Cyrus; Took, Clive Cheong; Mandic, Danilo P
2014-01-01
Data-adaptive optimal modeling and identification of real-world vector sensor data is provided by combining the fractional tap-length (FT) approach with model order selection in the quaternion domain. To account rigorously for the generality of such processes, both second-order circular (proper) and noncircular (improper), the proposed approach in this paper combines the FT length optimization with both the strictly linear quaternion least mean square (QLMS) and widely linear QLMS (WL-QLMS). A collaborative approach based on QLMS and WL-QLMS is shown to both identify the type of processes (proper or improper) and to track their optimal parameters in real time. Analysis shows that monitoring the evolution of the convex mixing parameter within the collaborative approach allows us to track the improperness in real time. Further insight into the properties of those algorithms is provided by establishing a relationship between the steady-state error and optimal model order. The approach is supported by simulations on model order selection and identification of both strictly linear and widely linear quaternion-valued systems, such as those routinely used in renewable energy (wind) and human-centered computing (biomechanics).
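The collaborative convex-combination idea has a simple real-valued analogue (the quaternion and widely linear details are omitted): two LMS filters with different step sizes run in parallel, each adapting on its own error, while a sigmoid-parameterized mixing weight lambda adapts on the combined error. All signals and step sizes below are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
N, L = 20000, 4
w_true = np.array([0.5, -0.3, 0.2, 0.1])           # unknown system to identify
x = rng.standard_normal(N)
d = np.convolve(x, w_true, mode="full")[:N] + 0.01 * rng.standard_normal(N)

w1, w2 = np.zeros(L), np.zeros(L)                   # fast and slow LMS filters
mu1, mu2, mu_a, a = 0.05, 0.005, 10.0, 0.0
for n in range(L - 1, N):
    u = x[n - L + 1:n + 1][::-1]                    # regressor, most recent first
    y1, y2 = w1 @ u, w2 @ u
    lam = 1.0 / (1.0 + np.exp(-a))                  # convex mixing parameter
    e1, e2 = d[n] - y1, d[n] - y2
    e = lam * e1 + (1.0 - lam) * e2                 # error of the combined filter
    w1 += mu1 * e1 * u                              # each component adapts on its
    w2 += mu2 * e2 * u                              # own error
    a += mu_a * e * (y1 - y2) * lam * (1.0 - lam)   # gradient step on mixing
    a = np.clip(a, -4.0, 4.0)                       # keep lambda away from 0/1

print(round(float(1.0 / (1.0 + np.exp(-a))), 3))    # tracks the better filter
```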
NASA Technical Reports Server (NTRS)
Reynolds, R.; White, C.
1986-01-01
A computer model capable of analyzing the flow field in the transition liner of small gas turbine engines is developed. A FORTRAN code has been assembled from existing codes and physical submodels and used to predict the flow in several test geometries which contain characteristics similar to transition liners, and for which experimental data was available. Comparisons between the predictions and measurements indicate that the code produces qualitative results but that the turbulence models, both k-epsilon and algebraic Reynolds stress, underestimate the cross-stream diffusion. The code has also been used to perform a numerical experiment to examine the effect of a variety of parameters on the mixing process in transition liners. Comparisons illustrate that geometries with significant curvature show a drift of the jet trajectory toward the convex wall, and that jets located on the convex wall of the liner exhibit weaker wake-region vortices and decreased penetration compared to jets located on concave walls. Also shown were the approximate equivalency of angled slots and round holes, and a technique by which jet mixing correlations developed for rectangular channels can be used for can geometries.
NASA Astrophysics Data System (ADS)
Osterloh, Andreas
2016-12-01
Here I present a method by which the intersections of a certain rank-2 density matrix with the zero polytope can be calculated exactly. This is a purely geometrical procedure and is thereby applicable to obtaining the zeros of SL- and SU-invariant entanglement measures of arbitrary polynomial degree. I explain this method in detail for a recently unsolved problem. In particular, I show how a three-dimensional view, namely in terms of the Bloch-sphere analogy, solves this problem immediately. To this end, I determine the zero polytope of the three-tangle, which is an exact result up to computer accuracy, and calculate upper bounds to its convex roof which lie below the linearized upper bound. The zeros of the three-tangle induced in this case by the zero polytope (zero simplex) are exact values. I apply this procedure to a superposition of the four-qubit Greenberger-Horne-Zeilinger and W states. It can, however, be applied to every case one has under consideration, including an arbitrary polynomial convex-roof measure of entanglement and arbitrary local dimension.
Boattail juncture shaping for spin-stabilized rounds in supersonic flight
NASA Astrophysics Data System (ADS)
Jiajan, W.; Chue, R. S. M.; Nguyen, T.; Yu, S. C. M.
2015-03-01
In this paper, the effects of boattail junction shaping on the aerodynamic drag and stability of supersonic spin-stabilized rounds are investigated using computational fluid dynamics. For a generic round body comprising a secant-ogive nose, a cylindrical body, and a conical boattail, the shaping technique was achieved by adding a convex surface of varying radius of curvature to the junction between the cylindrical body and the boattail. It was shown through numerical simulations that this shaping technique can provide a reduction in aerodynamic drag of up to 5.4 % without destabilizing the round bodies when the radius of curvature is less than 8.8 times the diameter of the cylindrical body. The more gradual change of the flow characteristics, e.g., the pressure over the convex surface, was identified as the main reason for the drag reduction. A unique aspect of the current work is that stability is treated as an integral part of the performance assessment. It was also found that the dynamic instability encountered at large radii of curvature is due to the Magnus effects.
Algorithms for Maneuvering Spacecraft Around Small Bodies
NASA Technical Reports Server (NTRS)
Acikmese, A. Bechet; Bayard, David
2006-01-01
A document describes mathematical derivations and applications of autonomous guidance algorithms for maneuvering spacecraft in the vicinities of small astronomical bodies like comets or asteroids. These algorithms compute fuel- or energy-optimal trajectories for typical maneuvers by solving the associated optimal-control problems with relevant control and state constraints. In the derivations, these problems are converted from their original continuous (infinite-dimensional) forms to finite-dimensional forms through (1) discretization of the time axis and (2) spectral discretization of control inputs via a finite number of Chebyshev basis functions. In these doubly discretized problems, the Chebyshev coefficients are the variables. These problems are, variously, either convex programming problems or programming problems that can be convexified. The resulting discrete problems are convex parameter-optimization problems; this is desirable because one can take advantage of very efficient and robust algorithms that have been developed previously and are well established for solving such problems. These algorithms are fast, do not require initial guesses, and always converge to global optima. Following the derivations, the algorithms are demonstrated by applying them to numerical examples of flyby, descent-to-hover, and ascent-from-hover maneuvers.
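After discretization, such maneuvers reduce to convex programs. Below is a minimal time-discretized (not Chebyshev-spectral) descent-to-hover toy in one dimension, written with the cvxpy modeling package; the dynamics constants, thrust bound, and horizon are all made up for illustration.

```python
import cvxpy as cp

# One-dimensional descent-to-hover: double-integrator dynamics with a toy
# constant gravity term, minimizing a fuel proxy sum |u[k]| * dt.
T, dt, g = 60, 1.0, 0.001            # steps, step size [s], gravity [km/s^2]

r = cp.Variable(T + 1)               # altitude [km]
v = cp.Variable(T + 1)               # velocity [km/s]
u = cp.Variable(T)                   # commanded thrust acceleration [km/s^2]

cons = [r[0] == 1.0, v[0] == 0.0,    # start 1 km up, at rest
        r[T] == 0.0, v[T] == 0.0,    # hover at the surface
        cp.abs(u) <= 0.01]           # thrust bound
for k in range(T):                   # forward-Euler discretized dynamics
    cons += [r[k + 1] == r[k] + dt * v[k],
             v[k + 1] == v[k] + dt * (u[k] - g)]

prob = cp.Problem(cp.Minimize(dt * cp.sum(cp.abs(u))), cons)
prob.solve()
print(prob.status, prob.value)       # globally optimal, no initial guess needed
```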
A convex optimization method for self-organization in dynamic (FSO/RF) wireless networks
NASA Astrophysics Data System (ADS)
Llorca, Jaime; Davis, Christopher C.; Milner, Stuart D.
2008-08-01
Next generation communication networks are becoming increasingly complex systems. Previously, we presented a novel physics-based approach to model dynamic wireless networks as physical systems which react to local forces exerted on network nodes. We showed that under clear atmospheric conditions the network communication energy can be modeled as the potential energy of an analogous spring system and presented a distributed mobility control algorithm where nodes react to local forces driving the network to energy minimizing configurations. This paper extends our previous work by including the effects of atmospheric attenuation and transmitted power constraints in the optimization problem. We show how our new formulation still results in a convex energy minimization problem. Accordingly, an updated force-driven mobility control algorithm is presented. Forces on mobile backbone nodes are computed as the negative gradient of the new energy function. Results show how in the presence of atmospheric obscuration stronger forces are exerted on network nodes that make them move closer to each other, avoiding loss of connectivity. We show results in terms of network coverage and backbone connectivity and compare the developed algorithms for different scenarios.
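The clear-atmosphere case is easy to sketch: with the link energy modeled as a spring potential sum of w_ij * ||p_i - p_j||^2, nodes moving along the negative gradient contract the backbone links. The topology, weights, and step size below are made up, and the attenuation and transmitted-power constraints of the full formulation are omitted.

```python
import numpy as np

rng = np.random.default_rng(1)
P = rng.uniform(0.0, 10.0, size=(6, 2))       # backbone node positions (plane)
w = {(0, 1): 1.0, (1, 2): 1.0, (2, 3): 1.0, (3, 4): 1.0, (4, 5): 1.0}  # links

def energy(P):
    """Spring-system analogue of the network communication energy."""
    return sum(wij * np.sum((P[i] - P[j]) ** 2) for (i, j), wij in w.items())

def forces(P):
    """Force on each node = negative gradient of the energy at that node."""
    F = np.zeros_like(P)
    for (i, j), wij in w.items():
        f = -2.0 * wij * (P[i] - P[j])
        F[i] += f
        F[j] -= f
    return F

step = 0.05
for _ in range(200):
    P = P + step * forces(P)                  # nodes react to local forces

print(round(energy(P), 4))                    # link energy shrinks toward zero
```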
Research on allocation efficiency of the daisy chain allocation algorithm
NASA Astrophysics Data System (ADS)
Shi, Jingping; Zhang, Weiguo
2013-03-01
With the improvement of aircraft performance in reliability, maneuverability, and survivability, the number of control effectors has increased considerably. How to distribute the three-axis moments among the control surfaces reasonably becomes an important problem. The daisy chain method is simple and easy to implement in the design of the allocation system, but it cannot solve the allocation problem over the entire attainable moment subset. For the lateral-directional allocation problem, the allocation efficiency of the daisy chain can be directly measured by the area of its subset of attainable moments. Because of the non-linear allocation characteristic, the subset of attainable moments of the daisy-chain method is a complex non-convex polygon, which is difficult to compute directly. By analyzing the two-dimensional allocation problem with a "micro-element" idea, a numerical calculation algorithm is proposed to compute the area of the non-convex polygon. In order to improve the allocation efficiency of the algorithm, a genetic algorithm with the allocation efficiency chosen as the fitness function is proposed to find the best pseudo-inverse matrix.
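The "micro-element" idea amounts to covering the polygon with small cells and counting those whose centers fall inside, which works for non-convex regions where an ordered-vertex (shoelace) decomposition is awkward to obtain from raw constraint data. A self-contained sketch with a made-up non-convex polygon (exact area 3):

```python
import numpy as np

def point_in_polygon(pt, poly):
    """Ray-casting test; poly is an (N, 2) array of vertices in order."""
    x, y = pt
    inside = False
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

def polygon_area_grid(poly, h=0.02):
    """'Micro-element' area: count h-by-h cells whose centers lie inside."""
    poly = np.asarray(poly, float)
    xmin, ymin = poly.min(axis=0)
    xmax, ymax = poly.max(axis=0)
    xs = np.arange(xmin + h / 2, xmax, h)
    ys = np.arange(ymin + h / 2, ymax, h)
    count = sum(point_in_polygon((x, y), poly) for x in xs for y in ys)
    return count * h * h

# Non-convex (arrowhead) polygon; the exact shoelace area is 3.0.
poly = [(0, 0), (2, 0), (1, 1), (2, 2), (0, 2)]
print(polygon_area_grid(poly))    # ~3.0, converging as h shrinks
```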
Exploring metabolic pathways in genome-scale networks via generating flux modes.
Rezola, A; de Figueiredo, L F; Brock, M; Pey, J; Podhorski, A; Wittmann, C; Schuster, S; Bockmayr, A; Planes, F J
2011-02-15
The reconstruction of metabolic networks at the genome scale has allowed the analysis of metabolic pathways at an unprecedented level of complexity. Elementary flux modes (EFMs) are an appropriate concept for such analysis. However, their number grows in a combinatorial fashion as the size of the metabolic network increases, which renders the application of the EFM approach to large metabolic networks difficult. Novel methods are expected to deal with such complexity. In this article, we present a novel optimization-based method for determining a minimal generating set of EFMs, i.e. a convex basis. We show that a subset of elements of this convex basis can be effectively computed even in large metabolic networks. Our method was applied to examine the structure of pathways producing lysine in Escherichia coli. We obtained a more varied and informative set of pathways in comparison with existing methods. In addition, an alternative pathway to produce lysine was identified using a detour via propionyl-CoA, which shows the predictive power of our novel approach. The source code in C++ is available upon request.
Turbulence Model Predictions of Strongly Curved Flow in a U-Duct
NASA Technical Reports Server (NTRS)
Rumsey, Christopher L.; Gatski, Thomas B.; Morrison, Joseph H.
2000-01-01
The ability of three types of turbulence models to accurately predict the effects of curvature on the flow in a U-duct is studied. An explicit algebraic stress model performs slightly better than one- or two-equation linear eddy viscosity models, although it is necessary to fully account for the variation of the production-to-dissipation-rate ratio in the algebraic stress model formulation. In their original formulations, none of these turbulence models fully captures the suppressed turbulence near the convex wall, whereas a full Reynolds stress model does. Some of the underlying assumptions used in the development of algebraic stress models are investigated and compared with the computed flowfield from the full Reynolds stress model. Through this analysis, the assumption of Reynolds stress anisotropy equilibrium used in the algebraic stress model formulation is found to be incorrect in regions of strong curvature. By accounting for the local variation of the principal axes of the strain rate tensor, the explicit algebraic stress model correctly predicts the suppressed turbulence in the outer part of the boundary layer near the convex wall.
CAD-based Automatic Modeling Method for Geant4 geometry model Through MCAM
NASA Astrophysics Data System (ADS)
Wang, Dong; Nie, Fanzhi; Wang, Guozhong; Long, Pengcheng; LV, Zhongliang
2014-06-01
Geant4 is a widely used Monte Carlo transport simulation package. Before a Geant4 calculation, the geometry model must be established, described either in the Geometry Description Markup Language (GDML) or in C++. However, manually describing models in GDML is time-consuming and error-prone. Automatic modeling methods have been developed recently, but most existing programs have shortcomings: some are not accurate, or are adapted only to specific CAD formats. To convert CAD models to GDML accurately, a Geant4 computer-aided design (CAD) based modeling method was developed for automatically converting complex CAD geometry models into GDML geometry models. The essence of this method is the translation between CAD models represented by boundary representation (B-REP) and GDML models represented by constructive solid geometry (CSG). First, the CAD model is decomposed into several simple solids, each having only one closed shell. Each simple solid is then decomposed into a set of convex shells, and the corresponding GDML convex basic solids are generated from the boundary surfaces obtained from the topological characteristics of each convex shell. After these solids are generated, the GDML model is assembled through a series of Boolean operations. This method was adopted in the CAD/Image-based Automatic Modeling Program for Neutronics & Radiation Transport (MCAM) and tested with several models, including the examples in the Geant4 installation package. The results showed that this method converts standard CAD models accurately and can be used for Geant4 automatic modeling.
Fast Algorithms for Designing Unimodular Waveform(s) With Good Correlation Properties
NASA Astrophysics Data System (ADS)
Li, Yongzhe; Vorobyov, Sergiy A.
2018-03-01
In this paper, we develop new fast and efficient algorithms for designing single or multiple unimodular waveforms/codes with good auto- and cross-correlation or weighted correlation properties, which are highly desired in radar and communication systems. The waveform design is based on minimizing the integrated sidelobe level (ISL) and the weighted ISL (WISL) of the waveforms. As the corresponding optimization problems can quickly grow to large scale as the code length and the number of waveforms increase, the main issue turns out to be the development of fast large-scale optimization techniques. A further difficulty is that the corresponding optimization problems are non-convex while the required accuracy is high. We therefore formulate the ISL and WISL minimization problems as non-convex quartic optimization problems in the frequency domain, and then simplify them into quadratic problems using the majorization-minimization technique, which is one of the basic techniques for addressing large-scale and/or non-convex optimization problems. In designing our fast algorithms, we identify and exploit inherent algebraic structures in the objective functions to rewrite them in quartic form and, in the case of WISL minimization, to derive an alternative quartic form which allows the quartic-quadratic transformation to be applied. Our algorithms are applicable to large-scale unimodular waveform design problems, as they are proved to have lower or comparable computational burden (analyzed theoretically) and faster convergence speed (confirmed by comprehensive simulations) than the state-of-the-art algorithms. In addition, the waveforms designed by our algorithms demonstrate better correlation properties than their counterparts.
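For reference, the ISL objective is cheap to evaluate via FFTs. The sketch below computes the aperiodic autocorrelation of a unimodular sequence and one common form of its ISL, comparing a random-phase code with a quadratic-phase (Chu-type) code, which typically has much lower sidelobes; it is an evaluation helper, not the paper's majorization-minimization algorithm.

```python
import numpy as np

def isl(seq):
    """Integrated sidelobe level: sum of |r(k)|^2 over nonzero lags, where r is
    the aperiodic autocorrelation, computed via a zero-padded FFT."""
    n = len(seq)
    f = np.fft.fft(seq, 2 * n)
    r = np.fft.ifft(np.abs(f) ** 2)[:n]        # lags 0 .. n-1
    return float(np.sum(np.abs(r[1:]) ** 2))

n = 64
rng = np.random.default_rng(0)
x = np.exp(1j * rng.uniform(0, 2 * np.pi, n))  # random-phase unimodular code
k = np.arange(n)
z = np.exp(1j * np.pi * k ** 2 / n)            # quadratic-phase (Chu-type) code

print(isl(x), isl(z))   # the quadratic-phase code typically has far lower ISL
```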
A compressed sensing based 3D resistivity inversion algorithm for hydrogeological applications
NASA Astrophysics Data System (ADS)
Ranjan, Shashi; Kambhammettu, B. V. N. P.; Peddinti, Srinivasa Rao; Adinarayana, J.
2018-04-01
Image reconstruction from discrete electrical responses poses a number of computational and mathematical challenges. Application of smoothness-constrained regularized inversion from limited measurements may fail to detect resistivity anomalies and sharp interfaces separated by hydrostratigraphic units. Under favourable conditions, compressed sensing (CS) can be thought of as an alternative for reconstructing image features by finding sparse solutions to highly underdetermined linear systems. This paper deals with the development of a CS-assisted, 3-D resistivity inversion algorithm for use by hydrogeologists and groundwater scientists. A CS-based l1-regularized least squares algorithm was applied to solve the resistivity inversion problem. Sparseness in the model update vector is introduced through block-oriented discrete cosine transformation, with recovery of the signal achieved through convex optimization. The equivalent quadratic program was solved using a primal-dual interior point method. Applicability of the proposed algorithm was demonstrated using synthetic and field examples drawn from hydrogeology. The proposed algorithm outperformed the conventional (smoothness-constrained) least squares method in recovering the model parameters with much less data, yet preserving the sharp resistivity fronts separated by geologic layers. Resistivity anomalies represented by discrete homogeneous blocks embedded in contrasting geologic layers were better imaged using the proposed algorithm. In comparison to the conventional algorithm, CS resulted in an efficient (an increase in R2 from 0.62 to 0.78; a decrease in RMSE from 125.14 Ω-m to 72.46 Ω-m), reliable, and fast-converging (run time decreased by about 25%) solution.
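The paper solves the l1-regularized least-squares subproblem with a primal-dual interior point method; a simpler proximal alternative that conveys the same sparsity mechanism is iterative soft-thresholding (ISTA). A self-contained sketch on a synthetic underdetermined system (all sizes and the regularization weight are made up):

```python
import numpy as np

def ista(A, b, lam, n_iter=500):
    """Iterative soft-thresholding for min_x 0.5*||Ax - b||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2           # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = A.T @ (A @ x - b)               # gradient of the quadratic term
        z = x - g / L
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)   # soft threshold
    return x

rng = np.random.default_rng(0)
m, n, k = 60, 200, 5                        # highly underdetermined: m << n
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
b = A @ x_true + 0.01 * rng.standard_normal(m)

x_hat = ista(A, b, lam=0.02)
print(np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))  # small rel. error
```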
Analyzing Spacecraft Telecommunication Systems
NASA Technical Reports Server (NTRS)
Kordon, Mark; Hanks, David; Gladden, Roy; Wood, Eric
2004-01-01
Multi-Mission Telecom Analysis Tool (MMTAT) is a C-language computer program for analyzing proposed spacecraft telecommunication systems. MMTAT utilizes parameterized input and computational models that can be run on standard desktop computers to perform fast and accurate analyses of telecommunication links. MMTAT is easy to use and can easily be integrated with other software applications and run as part of almost any computational simulation. It is distributed as either a stand-alone application program with a graphical user interface or a linkable library with a well-defined set of application programming interface (API) calls. As a stand-alone program, MMTAT provides both textual and graphical output. The graphs make it possible to understand, quickly and easily, how telecommunication performance varies with variations in input parameters. A delimited text file that can be read by any spreadsheet program is generated at the end of each run. The API in the linkable-library form of MMTAT enables the user to control simulation software and to change parameters during a simulation run. Results can be retrieved either at the end of a run or by use of a function call at any time step.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Soufi, M; Arimura, H; Toyofuku, F
Purpose: To propose a computerized framework for localization of anatomical feature points on the patient surface in infrared-ray based range images by using differential geometry (curvature) features. Methods: The general concept was to reconstruct the patient surface by using a mathematical modeling technique for the computation of differential geometry features that characterize the local shapes of the patient surfaces. A region of interest (ROI) was first extracted based on a template matching technique applied to amplitude (grayscale) images. The extracted ROI was preprocessed to reduce temporal and spatial noise by using Kalman and bilateral filters, respectively. Next, a smooth patient surface was reconstructed by using a non-uniform rational basis spline (NURBS) model. Finally, differential geometry features, i.e., the shape index and curvedness features, were computed to localize the anatomical feature points. The proposed framework was trained to optimize the shape index and curvedness thresholds and tested on range images of an anthropomorphic head phantom. The range images were acquired by an infrared-ray based time-of-flight (TOF) camera. The localization accuracy was evaluated by measuring the mean of minimum Euclidean distances (MMED) between reference (ground truth) points and the feature points localized by the proposed framework. The evaluation was performed for points localized on convex regions (e.g., apex of nose) and concave regions (e.g., nasofacial sulcus). Results: The proposed framework localized anatomical feature points on convex and concave anatomical landmarks with MMEDs of 1.91±0.50 mm and 3.70±0.92 mm, respectively. A statistically significant difference was obtained between the feature points on the convex and concave regions (P<0.001). Conclusion: Our study has shown the feasibility of differential geometry features for localization of anatomical feature points on the patient surface in range images. The proposed framework might be useful for tasks involving feature-based image registration in range-image guided radiation therapy.
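The two differential-geometry features used here have standard closed forms in terms of the principal curvatures (sign conventions vary between references). A small helper, with made-up curvature values standing in for convex (apex of nose) and concave (nasofacial sulcus) landmarks:

```python
import numpy as np

def shape_index_curvedness(k1, k2):
    """Koenderink-style shape index s in [-1, 1] and curvedness c from the
    principal curvatures; convention: s near +1 is a convex cap, s near -1
    a concave cup, s near 0 a saddle."""
    k1, k2 = np.maximum(k1, k2), np.minimum(k1, k2)   # enforce k1 >= k2
    s = (2.0 / np.pi) * np.arctan2(k1 + k2, k1 - k2)
    c = np.sqrt((k1 ** 2 + k2 ** 2) / 2.0)
    return s, c

print(shape_index_curvedness(0.8, 0.6))    # cap-like (convex landmark): s ~ +1
print(shape_index_curvedness(-0.2, -0.7))  # cup/rut-like (concave): s < 0
print(shape_index_curvedness(0.5, -0.5))   # saddle: s ~ 0
```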
NASA Astrophysics Data System (ADS)
Rathsam, Jonathan
This dissertation seeks to advance the current state of computer-based sound field simulations for room acoustics. The first part of the dissertation assesses the reliability of geometric sound-field simulations, which are approximate in nature. The second part of the dissertation uses the rigorous boundary element method (BEM) to learn more about reflections from finite reflectors: planar and non-planar. Acoustical designers commonly use geometric simulations to predict sound fields quickly. Geometric simulation of reflections from rough surfaces is still under refinement. The first project in this dissertation investigates the scattering coefficient, which quantifies the degree of diffuse reflection from rough surfaces. The main result is that predicted reverberation time varies inversely with scattering coefficient if the sound field is nondiffuse. Additional results include a flow chart that enables acoustical designers to gauge how sensitive predicted results are to their choice of scattering coefficient. Geometric acoustics is a high-frequency approximation to wave acoustics. At low frequencies, more pronounced wave phenomena cause deviations between real-world values and geometric predictions. Acoustical designers encounter the limits of geometric acoustics in particular when simulating the low frequency response from finite suspended reflector panels. This dissertation uses the rigorous BEM to develop an improved low-frequency radiation model for smooth, finite reflectors. The improved low frequency model is suggested in two forms for implementation in geometric models. Although BEM simulations require more computation time than geometric simulations, BEM results are highly accurate. The final section of this dissertation uses the BEM to investigate the sound field around non-planar reflectors. The author has added convex edges rounded away from the source side of finite, smooth reflectors to minimize coloration of reflections caused by interference from boundary waves. Although the coloration could not be fully eliminated, the convex edge increases the sound energy reflected into previously nonspecular zones. This excess reflected energy is marginally audible using a standard of 20 dB below direct sound energy. The convex-edged panel is recommended for use when designers want to extend reflected energy spatially beyond the specular reflection zone of a planar panel.
On equivalent characterizations of convexity of functions
NASA Astrophysics Data System (ADS)
Gkioulekas, Eleftherios
2013-04-01
A detailed development of the theory of convex functions, not often found in complete form in most textbooks, is given. We adopt the strict secant line definition as the definitive definition of convexity. We then show that for differentiable functions, this definition becomes logically equivalent with the first derivative monotonicity definition and the tangent line definition. Consequently, for differentiable functions, all three characterizations are logically equivalent.
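For a differentiable function f on an interval I, the three characterizations discussed can be written as follows (standard non-strict forms, given here for orientation; the paper takes the strict-inequality secant version, for a ≠ b and t ∈ (0,1), as the definition):

```latex
\begin{align*}
&\text{(secant)}       && f\big((1-t)a + tb\big) \le (1-t)\,f(a) + t\,f(b)
                          \quad \forall\, a, b \in I,\ t \in [0,1],\\
&\text{(monotonicity)} && f' \ \text{is nondecreasing on } I,\\
&\text{(tangent)}      && f(y) \ge f(x) + f'(x)\,(y - x)
                          \quad \forall\, x, y \in I.
\end{align*}
```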
Ukwatta, Eranga; Yuan, Jing; Qiu, Wu; Rajchl, Martin; Chiu, Bernard; Fenster, Aaron
2015-12-01
Three-dimensional (3D) measurements of peripheral arterial disease (PAD) plaque burden extracted from fast black-blood magnetic resonance (MR) images have shown to be more predictive of clinical outcomes than PAD stenosis measurements. To this end, accurate segmentation of the femoral artery lumen and outer wall is required for generating volumetric measurements of PAD plaque burden. Here, we propose a semi-automated algorithm to jointly segment the femoral artery lumen and outer wall surfaces from 3D black-blood MR images, which are reoriented and reconstructed along the medial axis of the femoral artery to obtain improved spatial coherence between slices of the long, thin femoral artery and to reduce computation time. The developed segmentation algorithm enforces two priors in a global optimization manner: the spatial consistency between the adjacent 2D slices and the anatomical region order between the femoral artery lumen and outer wall surfaces. The formulated combinatorial optimization problem for segmentation is solved globally and exactly by means of convex relaxation using a coupled continuous max-flow (CCMF) model, which is a dual formulation to the convex relaxed optimization problem. In addition, the CCMF model directly derives an efficient duality-based algorithm based on the modern multiplier augmented optimization scheme, which has been implemented on a GPU for fast computation. The computed segmentations from the developed algorithm were compared to manual delineations from experts using 20 black-blood MR images. The developed algorithm yielded both high accuracy (Dice similarity coefficients ≥ 87% for both the lumen and outer wall surfaces) and high reproducibility (intra-class correlation coefficient of 0.95 for generating vessel wall area), while outperforming the state-of-the-art method in terms of computational time by a factor of ≈ 20. Copyright © 2015 Elsevier B.V. All rights reserved.
Asymmetric Bulkheads for Cylindrical Pressure Vessels
NASA Technical Reports Server (NTRS)
Ford, Donald B.
2007-01-01
Asymmetric bulkheads are proposed for the ends of vertically oriented cylindrical pressure vessels. These bulkheads, which would feature both convex and concave contours, would offer advantages over purely convex, purely concave, and flat bulkheads (see figure). Intended originally to be applied to large tanks that hold propellant liquids for launching spacecraft, the asymmetric-bulkhead concept may also be attractive for terrestrial pressure vessels for which there are requirements to maximize volumetric and mass efficiencies. A description of the relative advantages and disadvantages of prior symmetric bulkhead configurations is a prerequisite to understanding the advantages of the proposed asymmetric configuration: In order to obtain adequate strength, flat bulkheads must be made thicker, relative to concave and convex bulkheads; the difference in thickness is such that, other things being equal, pressure vessels with flat bulkheads must be made heavier than ones with concave or convex bulkheads. Convex bulkhead designs increase overall tank lengths, thereby necessitating additional supporting structure for keeping tanks vertical. Concave bulkhead configurations increase tank lengths and detract from volumetric efficiency, even though they do not necessitate additional supporting structure. The shape of a bulkhead affects the proportion of residual fluid in a tank, that is, the portion of fluid that unavoidably remains in the tank during outflow and hence cannot be used. In this regard, a flat bulkhead is disadvantageous in two respects: (1) It lacks a single low point for optimum placement of an outlet and (2) a vortex that forms at the outlet during outflow prevents a relatively large amount of fluid from leaving the tank. A concave bulkhead also lacks a single low point for optimum placement of an outlet. Like purely concave and purely convex bulkhead configurations, the proposed asymmetric bulkhead configurations would be more mass-efficient than is the flat bulkhead configuration. In comparison with both purely convex and purely concave configurations, the proposed asymmetric configurations would offer greater volumetric efficiency. Relative to a purely convex bulkhead configuration, the corresponding asymmetric configuration would result in a shorter tank, thus demanding less supporting structure. An asymmetric configuration provides a low point for optimum location of a drain, and the convex shape at the drain location minimizes the amount of residual fluid.
Influence of implant rod curvature on sagittal correction of scoliosis deformity.
Salmingo, Remel Alingalan; Tadano, Shigeru; Abe, Yuichiro; Ito, Manabu
2014-08-01
Deformation of in vivo-implanted rods could alter the scoliosis sagittal correction. To our knowledge, no previous authors have investigated the influence of implanted-rod deformation on the sagittal deformity correction during scoliosis surgery. To analyze the changes of the implant rod's angle of curvature during surgery and establish its influence on sagittal correction of scoliosis deformity. A retrospective analysis of the preoperative and postoperative implant rod geometry and angle of curvature was conducted. Twenty adolescent idiopathic scoliosis patients underwent surgery. Average age at the time of operation was 14 years. The preoperative and postoperative implant rod angle of curvature expressed in degrees was obtained for each patient. Two implant rods were attached to the concave and convex side of the spinal deformity. The preoperative implant rod geometry was measured before surgical implantation. The postoperative implant rod geometry after surgery was measured by computed tomography. The implant rod angle of curvature at the sagittal plane was obtained from the implant rod geometry. The angle of curvature between the implant rod extreme ends was measured before implantation and after surgery. The sagittal curvature between the corresponding spinal levels of healthy adolescents obtained by previous studies was compared with the implant rod angle of curvature to evaluate the sagittal curve correction. The difference between the postoperative implant rod angle of curvature and normal spine sagittal curvature of the corresponding instrumented level was used to evaluate over- or undercorrection of the sagittal deformity. The implant rods at the concave side of deformity of all patients were significantly deformed after surgery. The average degree of rod deformation Δθ at the concave and convex sides was 15.8° and 1.6°, respectively. The average preoperative and postoperative implant rod angle of curvature at the concave side was 33.6° and 17.8°, respectively. The average preoperative and postoperative implant rod angle of curvature at the convex side was 25.5° and 23.9°, respectively. A significant relationship was found between the degree of rod deformation and preoperative implant rod angle of curvature (r=0.60, p<.005). The implant rods at the convex side of all patients did not have significant deformation. The results indicate that the postoperative sagittal outcome could be predicted from the initial rod shape. Changes in implant rod angle of curvature may lead to over- or undercorrection of the sagittal curve. Rod deformation at the concave side suggests that corrective forces acting on that side are greater than the convex side. Copyright © 2014 Elsevier Inc. All rights reserved.
Design for Run-Time Monitor on Cloud Computing
NASA Astrophysics Data System (ADS)
Kang, Mikyung; Kang, Dong-In; Yun, Mira; Park, Gyung-Leen; Lee, Junghoon
Cloud computing is a new information technology trend that moves computing and data away from desktops and portable PCs into large data centers. The basic principle of cloud computing is to deliver applications as services over the Internet as well as infrastructure. A cloud is a type of parallel and distributed system consisting of a collection of inter-connected and virtualized computers that are dynamically provisioned and presented as one or more unified computing resources. The large-scale distributed applications on a cloud require adaptive service-based software, which has the capability of monitoring the system status change, analyzing the monitored information, and adapting its service configuration while considering tradeoffs among multiple QoS features simultaneously. In this paper, we design Run-Time Monitor (RTM), which is system software to monitor the application behavior at run-time, analyze the collected information, and optimize resources on cloud computing. RTM monitors application software through library instrumentation as well as underlying hardware through a performance counter, optimizing its computing configuration based on the analyzed data.
Cloud Computing for Complex Performance Codes.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Appel, Gordon John; Hadgu, Teklu; Klein, Brandon Thorin
This report describes the use of cloud computing services for running complex public domain performance assessment problems. The work consisted of two phases: Phase 1 was to demonstrate that complex codes, on several differently configured servers, could run and compute trivial small-scale problems in a commercial cloud infrastructure. Phase 2 focused on proving that non-trivial large-scale problems could be computed in the commercial cloud environment. The cloud computing effort was successfully applied using codes of interest to the geohydrology and nuclear waste disposal modeling community.
NASA Astrophysics Data System (ADS)
Myre, Joseph M.
Heterogeneous computing systems have recently come to the forefront of the High-Performance Computing (HPC) community's interest. HPC computer systems that incorporate special purpose accelerators, such as Graphics Processing Units (GPUs), are said to be heterogeneous. Large scale heterogeneous computing systems have consistently ranked highly on the Top500 list since the beginning of the heterogeneous computing trend. By using heterogeneous computing systems that consist of both general purpose processors and special-purpose accelerators, the speed and problem size of many simulations could be dramatically increased. Ultimately this results in enhanced simulation capabilities that allow, in some cases for the first time, the execution of parameter space and uncertainty analyses, model optimizations, and other inverse modeling techniques that are critical for scientific discovery and engineering analysis. However, simplifying the usage and optimization of codes for heterogeneous computing systems remains a challenge. This is particularly true for scientists and engineers for whom understanding HPC architectures and undertaking performance analysis may not be primary research objectives. To enable scientists and engineers to remain focused on their primary research objectives, a modular environment for geophysical inversion and run-time autotuning on heterogeneous computing systems is presented. This environment is composed of three major components: 1) CUSH---a framework for reducing the complexity of programming heterogeneous computer systems, 2) geophysical inversion routines which can be used to characterize physical systems, and 3) run-time autotuning routines designed to determine configurations of heterogeneous computing systems in an attempt to maximize the performance of scientific and engineering codes. Using three case studies, a lattice-Boltzmann method, a non-negative least squares inversion, and a finite-difference fluid flow method, it is shown that this environment provides scientists and engineers with means to reduce the programmatic complexity of their applications, to perform geophysical inversions for characterizing physical systems, and to determine high-performing run-time configurations of heterogeneous computing systems using a run-time autotuner.
Nonlinear Analysis of a Bolted Marine Riser Connector Using NASTRAN Substructuring
NASA Technical Reports Server (NTRS)
Fox, G. L.
1984-01-01
Results of an investigation of the behavior of a bolted, flange-type marine riser connector are reported. The method used to account for the nonlinear effect of connector separation due to bolt preload and axial tension load is described. The automated multilevel substructuring capability of COSMIC/NASTRAN was employed at considerable savings in computer run time. Simplified formulas for computer resources, i.e., computer run times for modules SDCOMP, FBS, and MPYAD, as well as disk storage space, are presented. Actual run time data on a VAX-11/780 is compared with the formulas presented.
Convexity and concavity constants in Lorentz and Marcinkiewicz spaces
NASA Astrophysics Data System (ADS)
Kaminska, Anna; Parrish, Anca M.
2008-07-01
We provide here the formulas for the q-convexity and q-concavity constants for function and sequence Lorentz spaces associated to either decreasing or increasing weights. This also yields the formula for the q-convexity constants in function and sequence Marcinkiewicz spaces. In this paper we extend and enhance the results from [G.J.O. Jameson, The q-concavity constants of Lorentz sequence spaces and related inequalities, Math. Z. 227 (1998) 129-142] and [A. Kaminska, A.M. Parrish, The q-concavity and q-convexity constants in Lorentz spaces, in: Banach Spaces and Their Applications in Analysis, Conference in Honor of Nigel Kalton, May 2006, Walter de Gruyter, Berlin, 2007, pp. 357-373].
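As background, the quantities in question are usually defined as follows; this textbook definition is supplied here for context and is not taken from the paper. A Banach lattice X is q-convex (1 ≤ q < ∞) if there is a constant M such that
\[
\Big\| \Big( \sum_{k=1}^{n} |x_k|^q \Big)^{1/q} \Big\|_X \le M \Big( \sum_{k=1}^{n} \|x_k\|_X^q \Big)^{1/q}
\]
for all finite sequences x_1, ..., x_n in X; the q-convexity constant is the least such M, and q-concavity is defined by reversing the inequality.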
Convexity of quantum χ²-divergence.
Hansen, Frank
2011-06-21
The general quantum χ²-divergence has recently been introduced by Temme et al. [Temme K, Kastoryano M, Ruskai M, Wolf M, Verstraete F (2010) J Math Phys 51:122201] and applied to quantum channels (quantum Markov processes). The quantum χ²-divergence is not unique, as opposed to the classical χ²-divergence, but depends on the choice of quantum statistics. It was noticed that the elements in a particular one-parameter family of quantum χ²-divergences are convex functions in the density matrices (ρ,σ), thus mirroring the convexity of the classical χ²(p,q)-divergence in probability distributions (p,q). We prove that any quantum χ²-divergence is a convex function in its two arguments.
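For orientation, the one-parameter family referred to is commonly written as follows; the notation is an assumption based on the cited work, not a quotation from this abstract. With Δ = ρ − σ,
\[
\chi^2_{\alpha}(\rho,\sigma) = \operatorname{tr}\!\left[ \sigma^{-\alpha}\, \Delta\, \sigma^{\alpha-1}\, \Delta \right], \qquad \alpha \in [0,1],
\]
which reduces to the classical χ²(p,q) = Σ_i (p_i − q_i)²/q_i whenever ρ and σ commute.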
Scalable computing for evolutionary genomics.
Prins, Pjotr; Belhachemi, Dominique; Möller, Steffen; Smant, Geert
2012-01-01
Genomic data analysis in evolutionary biology is becoming so computationally intensive that analysis of multiple hypotheses and scenarios takes too long on a single desktop computer. In this chapter, we discuss techniques for scaling computations through parallelization of calculations, after giving a quick overview of advanced programming techniques. Unfortunately, parallel programming is difficult and requires special software design. The alternative, especially attractive for legacy software, is to introduce poor man's parallelization by running whole programs in parallel as separate processes, using job schedulers. Such pipelines are often deployed on bioinformatics computer clusters. Recent advances in PC virtualization have made it possible to run a full computer operating system, with all of its installed software, on top of another operating system, inside a "box," or virtual machine (VM). Such a VM can flexibly be deployed on multiple computers, in a local network, e.g., on existing desktop PCs, and even in the Cloud, to create a "virtual" computer cluster. Many bioinformatics applications in evolutionary biology can be run in parallel, running processes in one or more VMs. Here, we show how a ready-made bioinformatics VM image, named BioNode, effectively creates a computing cluster and pipeline in a few steps. This allows researchers to scale up computations from their desktop, using available hardware, anytime it is required. BioNode is based on Debian Linux and can run on networked PCs and in the Cloud. Over 200 bioinformatics and statistical software packages, of interest to evolutionary biology, are included, such as PAML, Muscle, MAFFT, MrBayes, and BLAST. Most of these software packages are maintained through the Debian Med project. In addition, BioNode contains convenient configuration scripts for parallelizing bioinformatics software. Where Debian Med encourages packaging free and open source bioinformatics software through one central project, BioNode encourages creating free and open source VM images, for multiple targets, through one central project. BioNode can be deployed on Windows, OSX, Linux, and in the Cloud. Next to the downloadable BioNode images, we provide tutorials online, which empower bioinformaticians to install and run BioNode in different environments, as well as information for future initiatives, on creating and building such images.
Wang, Chang; Qi, Fei; Shi, Guangming; Wang, Xiaotian
2013-01-01
Deployment is a critical issue affecting the quality of service of camera networks. The deployment aims at adopting the least number of cameras to cover the whole scene, which may contain obstacles that occlude the line of sight, with the expected observation quality. This is generally formulated as a non-convex optimization problem, which is hard to solve in polynomial time. In this paper, we propose an efficient convex solution for deployment optimizing the observation quality based on a novel anisotropic sensing model of cameras, which provides a reliable measurement of the observation quality. The deployment is formulated as the selection of a subset of nodes from a redundant initial deployment with numerous cameras, which is an ℓ0 minimization problem. Then, we relax this non-convex optimization to a convex ℓ1 minimization employing the sparse representation. Therefore, the high quality deployment is efficiently obtained via convex optimization. Simulation results confirm the effectiveness of the proposed camera deployment algorithms. PMID:23989826
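A minimal sketch of the ℓ0-to-ℓ1 relaxation pattern described above, in Python with SciPy; the coverage matrix, demand vector, and selection threshold are toy stand-ins, not the paper's anisotropic sensing model:

    import numpy as np
    from scipy.optimize import linprog

    rng = np.random.default_rng(0)
    n_cand, n_targets = 30, 15
    # A[i, j] = 1 if candidate camera j observes target point i (toy coverage model)
    A = (rng.random((n_targets, n_cand)) < 0.3).astype(float)
    b = np.ones(n_targets)  # every target must be covered at least once

    # l1 surrogate for the l0 objective: minimize sum(w) s.t. A w >= b, 0 <= w <= 1
    res = linprog(c=np.ones(n_cand), A_ub=-A, b_ub=-b,
                  bounds=[(0.0, 1.0)] * n_cand, method="highs")
    selected = np.flatnonzero(res.x > 1e-6)  # cameras with nonzero weight
    print(res.status, len(selected), "candidate cameras retained")

Sparse weights from the linear program indicate which candidate cameras to keep; the remaining (near-zero) candidates are discarded.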
Entropy and convexity for nonlinear partial differential equations
Ball, John M.; Chen, Gui-Qiang G.
2013-01-01
Partial differential equations are ubiquitous in almost all applications of mathematics, where they provide a natural mathematical description of many phenomena involving change in physical, chemical, biological and social processes. The concept of entropy originated in thermodynamics and statistical physics during the nineteenth century to describe the heat exchanges that occur in the thermal processes in a thermodynamic system, while the original notion of convexity is for sets and functions in mathematics. Since then, entropy and convexity have become two of the most important concepts in mathematics. In particular, nonlinear methods via entropy and convexity have been playing an increasingly important role in the analysis of nonlinear partial differential equations in recent decades. This opening article of the Theme Issue is intended to provide an introduction to entropy, convexity and related nonlinear methods for the analysis of nonlinear partial differential equations. We also provide a brief discussion about the content and contributions of the papers that make up this Theme Issue. PMID:24249768
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dall-Anese, Emiliano; Zhou, Xinyang; Liu, Zhiyuan
This paper considers distribution networks with distributed energy resources and discrete-rate loads, and designs an incentive-based algorithm that allows the network operator and the customers to pursue given operational and economic objectives, while concurrently ensuring that voltages are within prescribed limits. Four major challenges include: (1) the non-convexity from discrete decision variables, (2) the non-convexity due to a Stackelberg game structure, (3) unavailable private information from customers, and (4) different update frequencies for the two types of devices. In this paper, we first make a convex relaxation for the discrete variables, then reformulate the non-convex structure into a convex optimization problem together with a pricing/reward signal design, and propose a distributed stochastic dual algorithm for solving the reformulated problem while restoring feasible power rates for discrete devices. By doing so, we are able to statistically achieve the solution of the reformulated problem without exposure of any private information from customers. Stability of the proposed schemes is analytically established and numerically corroborated.
NASA Astrophysics Data System (ADS)
Yang, Jia Sheng
2018-06-01
In this paper, we investigate an H∞ memory controller with input limitation minimization (HMCIM) for the stabilization of offshore jacket platforms. The main objective of this study is to reduce the control consumption as well as protect the actuator while satisfying the requirement of the system performance. First, we introduce a dynamic model of an offshore platform with low order main modes based on a mode reduction method in numerical analysis. Then, based on H∞ control theory and matrix inequality techniques, we develop a novel H∞ memory controller with input limitation. Furthermore, a non-convex optimization model to minimize input energy consumption is proposed. Since it is difficult to solve this non-convex optimization model by standard optimization algorithms, we use a relaxation method with matrix operations to transform this non-convex optimization model into a convex optimization model. Thus, it can be solved by a standard convex optimization solver in MATLAB or CPLEX. Finally, several numerical examples are given to validate the proposed models and methods.
Vickers, Douglas; Lee, Michael D; Dry, Matthew; Hughes, Peter
2003-10-01
The planar Euclidean version of the traveling salesperson problem requires finding the shortest tour through a two-dimensional array of points. MacGregor and Ormerod (1996) have suggested that people solve such problems by using a global-to-local perceptual organizing process based on the convex hull of the array. We review evidence for and against this idea, before considering an alternative, local-to-global perceptual process, based on the rapid automatic identification of nearest neighbors. We compare these approaches in an experiment in which the effects of number of convex hull points and number of potential intersections on solution performance are measured. Performance worsened with more points on the convex hull and with fewer potential intersections. A measure of response uncertainty was unaffected by the number of convex hull points but increased with fewer potential intersections. We discuss a possible interpretation of these results in terms of a hierarchical solution process based on linking nearest neighbor clusters.
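The two candidate cues compared in the experiment, convex hull size and nearest-neighbor structure, are easy to compute for a given point array; a small illustrative sketch in Python with random toy data, not the authors' stimuli:

    import numpy as np
    from scipy.spatial import ConvexHull, cKDTree

    rng = np.random.default_rng(1)
    pts = rng.random((20, 2))  # a 20-point planar TSP array

    # global-to-local cue: how many points lie on the convex hull
    hull = ConvexHull(pts)
    print("points on convex hull:", len(hull.vertices))

    # local-to-global cue: each point's nearest neighbor
    tree = cKDTree(pts)
    dist, idx = tree.query(pts, k=2)   # k=2: the point itself plus its nearest other point
    nearest = idx[:, 1]
    print("mutual nearest-neighbor pairs:",
          sum(1 for i, j in enumerate(nearest) if nearest[j] == i) // 2)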
Fingerprinting Communication and Computation on HPC Machines
DOE Office of Scientific and Technical Information (OSTI.GOV)
Peisert, Sean
2010-06-02
How do we identify what is actually running on high-performance computing systems? Names of binaries, dynamic libraries loaded, or other elements in a submission to a batch queue can give clues, but binary names can be changed, and libraries provide limited insight and resolution on the code being run. In this paper, we present a method for "fingerprinting" code running on HPC machines using elements of communication and computation. We then discuss how that fingerprint can be used to determine if the code is consistent with certain other types of codes, what a user usually runs, or what the user requested an allocation to do. In some cases, our techniques enable us to fingerprint HPC codes using runtime MPI data with a high degree of accuracy.
CudaChain: an alternative algorithm for finding 2D convex hulls on the GPU.
Mei, Gang
2016-01-01
This paper presents an alternative GPU-accelerated convex hull algorithm and a novel Sorting-based Preprocessing Approach (SPA) for planar point sets. The proposed convex hull algorithm, termed CudaChain, consists of two stages: (1) two rounds of preprocessing performed on the GPU and (2) the finalization of calculating the expected convex hull on the CPU. Interior points lying inside a quadrilateral formed by four extreme points are first discarded, and then the remaining points are distributed into several (typically four) subregions. For each subset of points, they are first sorted in parallel; then the second round of discarding is performed using SPA; and finally a simple chain is formed for the current remaining points. A simple polygon can be easily generated by directly connecting all the chains in the subregions. The expected convex hull of the input points can be finally obtained by calculating the convex hull of the simple polygon. The library Thrust is utilized to realize the parallel sorting, reduction, and partitioning for better efficiency and simplicity. Experimental results show that: (1) SPA can very effectively detect and discard the interior points; and (2) CudaChain achieves 5×-6× speedups over the famous Qhull implementation for 20M points.
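A CPU-side sketch of the first discarding round, relying on the standard observation that points strictly inside the quadrilateral of the four extreme points can never be hull vertices; this is illustrative Python, not the CUDA/Thrust implementation:

    import numpy as np

    def discard_interior(pts):
        # Quadrilateral through the extreme points in counter-clockwise order:
        # leftmost, bottommost, rightmost, topmost.
        quad = np.array([pts[pts[:, 0].argmin()], pts[pts[:, 1].argmin()],
                         pts[pts[:, 0].argmax()], pts[pts[:, 1].argmax()]])
        keep = np.zeros(len(pts), dtype=bool)
        for a, b in zip(quad, np.roll(quad, -1, axis=0)):
            # A point strictly inside the CCW quadrilateral lies strictly to the
            # left of every edge (positive cross product); keep everything else.
            cross = (b[0] - a[0]) * (pts[:, 1] - a[1]) - (b[1] - a[1]) * (pts[:, 0] - a[0])
            keep |= cross <= 0.0
        return pts[keep]

    pts = np.random.default_rng(2).random((1_000_000, 2))
    print(len(discard_interior(pts)), "of", len(pts), "points survive the first round")

The surviving points can then be sorted and reduced further before the final hull computation, as the abstract describes.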
A parallel Discrete Element Method to model collisions between non-convex particles
NASA Astrophysics Data System (ADS)
Rakotonirina, Andriarimina Daniel; Delenne, Jean-Yves; Wachs, Anthony
2017-06-01
In many dry granular and suspension flow configurations, particles can be highly non-spherical. It is now well established in the literature that particle shape affects the flow dynamics or the microstructure of the particle assembly in assorted ways, e.g. the compactness of a packed bed or heap, dilation under shear, resistance to shear, momentum transfer between translational and angular motions, and the ability to form arches and block the flow. In this talk, we suggest an accurate and efficient way to model collisions between particles of (almost) arbitrary shape. For that purpose, we develop a Discrete Element Method (DEM) combined with a soft particle contact model. The collision detection algorithm handles contacts between bodies of various shape and size. For non-convex bodies, our strategy is based on decomposing a non-convex body into a set of convex ones. Therefore, our novel method can be called the "glued-convex method" (in the sense of clumping convex bodies together), as an extension of the popular "glued-spheres" method, and is implemented in our own granular dynamics code Grains3D. Since the whole problem is solved explicitly, our fully-MPI parallelized code Grains3D exhibits a very high scalability when dynamic load balancing is not required. In particular, simulations on up to a few thousand cores in configurations involving up to a few tens of millions of particles can readily be performed. We apply our enhanced numerical model to (i) the collapse of a granular column made of convex particles and (ii) the microstructure of a heap of non-convex particles in a cylindrical reactor.
Robot computer problem solving system
NASA Technical Reports Server (NTRS)
Becker, J. D.; Merriam, E. W.
1974-01-01
The conceptual, experimental, and practical phases of developing a robot computer problem solving system are outlined. Robot intelligence, conversion of the programming language SAIL to run under the TENEX monitor, and the use of the network to run several cooperating jobs at different sites are discussed.
Active Nodal Task Seeking for High-Performance, Ultra-Dependable Computing
1994-07-01
implementation. Figure 1 shows a hardware organization of ANTS: stand-alone computing nodes interconnected by buses. 2.1 Run Time Partitioning The...nodes in 14 respond to changing loads [27] or system reconfiguration [26]. Existing techniques are all source-initiated or server-initiated [27]. 5.1...short-running task segments. The task segments must be short-running in order that processors will become available often enough to satisfy changing
Parallel computing for automated model calibration
DOE Office of Scientific and Technical Information (OSTI.GOV)
Burke, John S.; Danielson, Gary R.; Schulz, Douglas A.
2002-07-29
Natural resources model calibration is a significant burden on computing and staff resources in modeling efforts. Most assessments must consider multiple calibration objectives (for example magnitude and timing of stream flow peak). An automated calibration process that allows real time updating of data/models, allowing scientists to focus effort on improving models, is needed. We are in the process of building a fully featured multi objective calibration tool capable of processing multiple models cheaply and efficiently using null cycle computing. Our parallel processing and calibration software routines have been written generically, but our focus has been on natural resources model calibration. So far, the natural resources models have been friendly to parallel calibration efforts in that they require no inter-process communication, only need a small amount of input data and only output a small amount of statistical information for each calibration run. A typical auto calibration run might involve running a model 10,000 times with a variety of input parameters and summary statistical output. In the past model calibration has been done against individual models for each data set. The individual model runs are relatively fast, ranging from seconds to minutes. The process was run on a single computer using a simple iterative process. We have completed two Auto Calibration prototypes and are currently designing a more feature rich tool. Our prototypes have focused on running the calibration in a distributed computing cross platform environment. They allow incorporation of "smart" calibration parameter generation (using artificial intelligence processing techniques). Null cycle computing similar to SETI@Home has also been a focus of our efforts. This paper details the design of the latest prototype and discusses our plans for the next revision of the software.
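The run pattern described here (many independent runs, small inputs, small statistical outputs) maps directly onto a process pool; a minimal Python sketch with a stand-in model function, not the authors' tool:

    import numpy as np
    from concurrent.futures import ProcessPoolExecutor

    def run_model(params):
        # Stand-in for one calibration run: simulate, then return summary
        # statistics (e.g., magnitude and timing of the stream-flow peak).
        k, s = params
        t = np.linspace(0.0, 10.0, 1001)
        flow = s * t * np.exp(-k * t)          # toy hydrograph
        return {"params": tuple(params), "peak": flow.max(), "t_peak": t[flow.argmax()]}

    if __name__ == "__main__":
        rng = np.random.default_rng(3)
        trial_params = list(rng.uniform(0.1, 2.0, size=(10_000, 2)))
        with ProcessPoolExecutor() as pool:    # workers pull runs as they free up
            results = list(pool.map(run_model, trial_params, chunksize=64))
        best = max(results, key=lambda r: r["peak"])  # replace with a real objective
        print(best)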
The Impact and Promise of Open-Source Computational Material for Physics Teaching
NASA Astrophysics Data System (ADS)
Christian, Wolfgang
2017-01-01
A computer-based modeling approach to teaching must be flexible because students and teachers have different skills and varying levels of preparation. Learning how to run the ``software du jour'' is not the objective for integrating computational physics material into the curriculum. Learning computational thinking, how to use computation and computer-based visualization to communicate ideas, how to design and build models, and how to use ready-to-run models to foster critical thinking is the objective. Our computational modeling approach to teaching is a research-proven pedagogy that predates computers. It attempts to enhance student achievement through the Modeling Cycle. This approach was pioneered by Robert Karplus and the SCIS Project in the 1960s and 70s and later extended by the Modeling Instruction Program led by Jane Jackson and David Hestenes at Arizona State University. This talk describes a no-cost open-source computational approach aligned with a Modeling Cycle pedagogy. Our tools, curricular material, and ready-to-run examples are freely available from the Open Source Physics Collection hosted on the AAPT-ComPADRE digital library. Examples will be presented.
Colt: an experiment in wormhole run-time reconfiguration
NASA Astrophysics Data System (ADS)
Bittner, Ray; Athanas, Peter M.; Musgrove, Mark
1996-10-01
Wormhole run-time reconfiguration (RTR) is an attempt to create a refined computing paradigm for high performance computational tasks. By combining concepts from field programmable gate array (FPGA) technologies with data flow computing, the Colt/Stallion architecture achieves high utilization of hardware resources, and facilitates rapid run-time reconfiguration. Targeted mainly at DSP-type operations, the Colt integrated circuit -- a prototype wormhole RTR device -- compares favorably to contemporary DSP alternatives in terms of silicon area consumed per unit computation and in computing performance. Although emphasis has been placed on signal processing applications, general purpose computation has not been overlooked. Colt is a prototype that defines an architecture not only at the chip level but also in terms of an overall system design. As this system is realized, the concept of wormhole RTR will be applied to numerical computation and DSP applications including those common to image processing, communications systems, digital filters, acoustic processing, real-time control systems and simulation acceleration.
Data reduction using cubic rational B-splines
NASA Technical Reports Server (NTRS)
Chou, Jin J.; Piegl, Les A.
1992-01-01
A geometric method is proposed for fitting rational cubic B-spline curves to data that represent smooth curves, including intersection or silhouette lines. The algorithm is based on the convex hull and the variation diminishing properties of Bezier/B-spline curves. The algorithm has the following structure: it tries to fit one Bezier segment to the entire data set, and if this is impossible it subdivides the data set and reconsiders the subset. After accepting the subset, the algorithm tries to find the longest run of points within a tolerance and then approximates this set with a Bezier cubic segment. The algorithm applies this procedure repeatedly to the rest of the data points until all points are fitted. It is concluded that the algorithm delivers fitting curves which approximate the data with high accuracy even in cases with large tolerances.
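A simplified sketch of the longest-run step, using plain least-squares cubics in place of the paper's rational cubic B-spline segments; the segment model and tolerance test here are stand-ins for the geometric method described above:

    import numpy as np

    def fit_longest_runs(x, y, tol=1e-3):
        # Greedily fit cubic segments: extend each run while the least-squares
        # cubic stays within tol of every point, then start a new segment.
        segments, start = [], 0
        while start < len(x) - 3:              # a cubic needs at least 4 points
            end = start + 4
            coeffs = np.polyfit(x[start:end], y[start:end], 3)
            while end < len(x):
                trial = np.polyfit(x[start:end + 1], y[start:end + 1], 3)
                if np.abs(np.polyval(trial, x[start:end + 1]) - y[start:end + 1]).max() > tol:
                    break
                coeffs, end = trial, end + 1
            segments.append((start, end, coeffs))
            start = end                         # any tail shorter than 4 points is left unfitted
        return segments

    x = np.linspace(0, 2 * np.pi, 400)
    segs = fit_longest_runs(x, np.sin(x), tol=1e-4)
    print(len(segs), "cubic segments")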
A simple smoothness indicator for the WENO scheme with adaptive order
NASA Astrophysics Data System (ADS)
Huang, Cong; Chen, Li Li
2018-01-01
The fifth order WENO scheme with adaptive order is competent for solving hyperbolic conservation laws; its reconstruction is a convex combination of a fifth order linear reconstruction and three third order linear reconstructions. Note that, on a uniform mesh, the computational cost of the smoothness indicator for the fifth order linear reconstruction is comparable with the sum of those for the three third order linear reconstructions, and is thus too heavy; on a non-uniform mesh, the explicit form of the smoothness indicator for the fifth order linear reconstruction is difficult to obtain, and its computational cost is much heavier than on a uniform mesh. In order to overcome these problems, a simple smoothness indicator for the fifth order linear reconstruction is proposed in this paper.
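For context, the smoothness indicator in question is conventionally the Jiang-Shu functional; this standard formula is added as background and is not the new indicator proposed in the paper. For a reconstruction polynomial p_j(x) of degree r on the cell [x_{i-1/2}, x_{i+1/2}] with mesh width Δx,
\[
\beta_j = \sum_{l=1}^{r} \Delta x^{\,2l-1} \int_{x_{i-1/2}}^{x_{i+1/2}} \left( \frac{d^{l} p_j(x)}{dx^{l}} \right)^{2} dx,
\]
so evaluating β for the degree-four (fifth order) polynomial involves many more terms than for the degree-two (third order) ones, which is the cost imbalance the paper addresses.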
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dall-Anese, Emiliano; Simonetto, Andrea
This paper focuses on the design of online algorithms based on prediction-correction steps to track the optimal solution of a time-varying constrained problem. Existing prediction-correction methods have been shown to work well for unconstrained convex problems and for settings where obtaining the inverse of the Hessian of the cost function can be computationally affordable. The prediction-correction algorithm proposed in this paper addresses the limitations of existing methods by tackling constrained problems and by designing a first-order prediction step that relies on the Hessian of the cost function (and does not require the computation of its inverse). Analytical results are established to quantify the tracking error. Numerical simulations corroborate the analytical results and showcase the performance and benefits of the algorithms.
Virtualization and cloud computing in dentistry.
Chow, Frank; Muftu, Ali; Shorter, Richard
2014-01-01
The use of virtualization and cloud computing has changed the way we use computers. Virtualization is a method of placing software called a hypervisor on the hardware of a computer or a host operating system. It allows a guest operating system to run on top of the physical computer as a virtual machine (i.e., a virtual computer). Virtualization allows multiple virtual computers to run on top of one physical computer and to share its hardware resources, such as printers, scanners, and modems. This increases the efficient use of the computer by decreasing costs (e.g., hardware, electricity, administration, and management) since only one physical computer is needed and running. This virtualization platform is the basis for cloud computing. It has expanded into areas of server and storage virtualization. One of the commonly used dental storage systems is cloud storage. Patient information is encrypted as required by the Health Insurance Portability and Accountability Act (HIPAA) and stored on off-site private cloud services for a monthly service fee. As computer costs continue to increase, so too will the need for more storage and processing power. Virtual and cloud computing will be a method for dentists to minimize costs and maximize computer efficiency in the near future. This article will provide some useful information on current uses of cloud computing.
Framework for architecture-independent run-time reconfigurable applications
NASA Astrophysics Data System (ADS)
Lehn, David I.; Hudson, Rhett D.; Athanas, Peter M.
2000-10-01
Configurable Computing Machines (CCMs) have emerged as a technology with the computational benefits of custom ASICs as well as the flexibility and reconfigurability of general-purpose microprocessors. Significant effort from the research community has focused on techniques to move this reconfigurability from a rapid application development tool to a run-time tool. This requires the ability to change the hardware design while the application is executing and is known as Run-Time Reconfiguration (RTR). Widespread acceptance of run-time reconfigurable custom computing depends upon the existence of high-level automated design tools. Such tools must reduce the designer's effort to port applications between different platforms as the architecture, hardware, and software evolve. A Java implementation of a high-level application framework, called Janus, is presented here. In this environment, developers create Java classes that describe the structural behavior of an application. The framework allows hardware and software modules to be freely mixed and interchanged. A compilation phase of the development process analyzes the structure of the application and adapts it to the target platform. Janus is capable of structuring the run-time behavior of an application to take advantage of the memory and computational resources available.
Another convex combination of product states for the separable Werner state
DOE Office of Scientific and Technical Information (OSTI.GOV)
Azuma, Hiroo; Ban, Masashi; CREST, Japan Science and Technology Agency, 1-1-9 Yaesu, Chuo-ku, Tokyo 103-0028
2006-03-15
In this paper, we write down the separable Werner state in a two-qubit system explicitly as a convex combination of product states, which is different from the convex combination obtained by Wootters' method. The Werner state in a two-qubit system has a single real parameter and varies from inseparable to separable according to the value of its parameter. We derive a hidden variable model that is induced by our decomposed form for the separable Werner state. From our explicit form of the convex combination of product states, we understand the following: The critical point of the parameter for separability of the Werner state comes from positivity of local density operators of the qubits.
The concave cusp as a determiner of figure-ground.
Stevens, K A; Brookes, A
1988-01-01
The tendency to interpret as figure, relative to background, those regions that are lighter, smaller, and, especially, more convex is well known. Wherever convex opaque objects abut or partially occlude one another in an image, the points of contact between the silhouettes form concave cusps, each indicating the local assignment of figure versus ground across the contour segments. It is proposed that this local geometric feature is a preattentive determiner of figure-ground perception and that it contributes to the previously observed tendency for convexity preference. Evidence is presented that figure-ground assignment can be determined solely on the basis of the concave cusp feature, and that the salience of the cusp derives from local geometry and not from adjacent contour convexity.
Progress in Machine Learning Studies for the CMS Computing Infrastructure
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bonacorsi, Daniele; Kuznetsov, Valentin; Magini, Nicolo
Here, computing systems for LHC experiments developed together with Grids worldwide. While a complete description of the original Grid-based infrastructure and services for LHC experiments and its recent evolutions can be found elsewhere, it is worth mentioning here the scale of the computing resources needed to fulfill the needs of LHC experiments in Run-1 and Run-2 so far.
Multitasking the code ARC3D. [for computational fluid dynamics
NASA Technical Reports Server (NTRS)
Barton, John T.; Hsiung, Christopher C.
1986-01-01
The CRAY multitasking system was developed in order to utilize all four processors and sharply reduce the wall clock run time. This paper describes the techniques used to modify the computational fluid dynamics code ARC3D for this run and analyzes the achieved speedup. The ARC3D code solves either the Euler or thin-layer N-S equations using an implicit approximate factorization scheme. Results indicate that multitask processing can be used to achieve wall clock speedup factors of over three times, depending on the nature of the program code being used. Multitasking appears to be particularly advantageous for large-memory problems running on multiple CPU computers.
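The reported speedup of just over three on four processors is consistent with Amdahl's law, quoted here as standard background rather than from the paper:
\[
S(N) = \frac{1}{(1-p) + p/N},
\]
where p is the parallelizable fraction of the run and N the number of processors; for example, S = 3.1 at N = 4 corresponds to p ≈ 0.90, i.e. roughly 10% of the wall clock work remaining serial.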
NASA Technical Reports Server (NTRS)
Eberhardt, D. S.; Baganoff, D.; Stevens, K.
1984-01-01
Implicit approximate-factored algorithms have certain properties that are suitable for parallel processing. A particular computational fluid dynamics (CFD) code, using this algorithm, is mapped onto a multiple-instruction/multiple-data-stream (MIMD) computer architecture. An explanation of this mapping procedure is presented, as well as some of the difficulties encountered when trying to run the code concurrently. Timing results are given for runs on the Ames Research Center's MIMD test facility which consists of two VAX 11/780's with a common MA780 multi-ported memory. Speedups exceeding 1.9 for characteristic CFD runs were indicated by the timing results.
Design and Development of a Run-Time Monitor for Multi-Core Architectures in Cloud Computing
Kang, Mikyung; Kang, Dong-In; Crago, Stephen P.; Park, Gyung-Leen; Lee, Junghoon
2011-01-01
Cloud computing is a new information technology trend that moves computing and data away from desktops and portable PCs into large data centers. The basic principle of cloud computing is to deliver applications as services over the Internet as well as infrastructure. A cloud is a type of parallel and distributed system consisting of a collection of inter-connected and virtualized computers that are dynamically provisioned and presented as one or more unified computing resources. The large-scale distributed applications on a cloud require adaptive service-based software, which has the capability of monitoring system status changes, analyzing the monitored information, and adapting its service configuration while considering tradeoffs among multiple QoS features simultaneously. In this paper, we design and develop a Run-Time Monitor (RTM) which is a system software to monitor the application behavior at run-time, analyze the collected information, and optimize cloud computing resources for multi-core architectures. RTM monitors application software through library instrumentation as well as underlying hardware through a performance counter optimizing its computing configuration based on the analyzed data. PMID:22163811
A system of nonlinear set valued variational inclusions.
Tang, Yong-Kun; Chang, Shih-Sen; Salahuddin, Salahuddin
2014-01-01
In this paper, we study existence theorems and techniques for finding the solutions of a system of nonlinear set valued variational inclusions in Hilbert spaces. To overcome the difficulties due to the presence of a proper convex lower semicontinuous function ϕ and a mapping g appearing in the considered problems, we use the resolvent operator technique to suggest an iterative algorithm to compute approximate solutions of the system of nonlinear set valued variational inclusions. The convergence of the iterative sequences generated by the algorithm is also proved. 49J40; 47H06.
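For readers unfamiliar with the technique, the resolvent of the subdifferential of ϕ is the standard object
\[
J_{\lambda}^{\partial\varphi}(x) = (I + \lambda\,\partial\varphi)^{-1}(x), \qquad \lambda > 0,
\]
which is single-valued and nonexpansive on a Hilbert space, and iterations of the general form x_{n+1} = J_{\lambda}^{\partial\varphi}(x_n − λ T(x_n)) are typical of this approach. This is supplied as standard background; the paper's actual algorithm involves the set valued mappings and the mapping g and is more elaborate.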
Thermophysical properties of hydrogen along the liquid-vapor coexistence
NASA Astrophysics Data System (ADS)
Osman, S. M.; Sulaiman, N.; Bahaa Khedr, M.
2016-05-01
We present theoretical calculations of the liquid-vapor coexistence (LVC) curve of fluid hydrogen within first order perturbation theory, with a suitable first order quantum correction to the free energy. In the present equation of state, we incorporate the dimerization of the H2 molecule by treating the fluid as a hard convex body fluid. The thermophysical properties of fluid H2 along the LVC curve, including the pressure-temperature dependence, density-temperature asymmetry, volume expansivity, entropy and enthalpy, are calculated and compared with computer simulation and empirical results.
Diffusion Maps and Geometric Harmonics for Automatic Target Recognition (ATR). Volume 2. Appendices
2007-11-01
of the Perron-Frobenius theorem, it suffices to prove that the chain is irreducible and aperiodic. • The irreducibility is a mere consequence of the...of each atom; this is due to the linear programming constraint that the coefficients be nonnegative 4. Chen et al. [20, 21] describe two algorithms for...projection of x onto the convex cone spanned by Ψ(t) with the origin at the apex; we provide details on computing x̃(t) in Section 4.1.3. Let x̃ (t) H
Convex Optimization Methods for Graphs and Statistical Modeling
2011-06-01
of a set obtained by taking nonnegative linear combinations of elements of the set. The cone TC(x) is the set of directions to points in C from the...Proof. The tangent cone at any signed vector x* with respect to the ℓ∞ ball is a rotation of the nonnegative orthant. Thus we only need to compute the...that ξ(B*)/(1−4ξ(B*)μ(A*)) < γ in the second inequality. Sec. A.2. Proofs 167 Proof of Proposition 3.4.2 Based on the Perron-Frobenius theorem [82
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hong, Tianzhen; Buhl, Fred; Haves, Philip
2008-09-20
EnergyPlus is a new generation building performance simulation program offering many new modeling capabilities and more accurate performance calculations integrating building components in sub-hourly time steps. However, EnergyPlus runs much slower than the current generation simulation programs. This has become a major barrier to its widespread adoption by the industry. This paper analyzes EnergyPlus run time from comprehensive perspectives to identify key issues and challenges of speeding up EnergyPlus: studying the historical trends of EnergyPlus run time based on the advancement of computers and code improvements to EnergyPlus, comparing EnergyPlus with DOE-2 to understand and quantify the run time differences, identifying key simulation settings and model features that have significant impacts on run time, and performing code profiling to identify which EnergyPlus subroutines consume the most run time. This paper provides recommendations to improve EnergyPlus run time from the modeler's perspective and on adequate computing platforms. Suggestions of software code and architecture changes to improve EnergyPlus run time based on the code profiling results are also discussed.
Estrella, Consuelo Amor S; Kind, Karen L; Derks, Anna; Xiang, Ruidong; Faulkner, Nicole; Mohrdick, Melina; Fitzsimmons, Carolyn; Kruk, Zbigniew; Grutzner, Frank; Roberts, Claire T; Hiendleder, Stefan
2017-07-01
Placental function impacts growth and development with lifelong consequences for performance and health. We provide novel insights into placental development in the bovine, an important agricultural species and biomedical model. Concepti with defined genetics and sex were recovered from nulliparous dams managed under standardized conditions to study placental gross morphological and histomorphological parameters at the late embryo (Day48) and early accelerated fetal growth (Day153) stages. Placentome number increased 3-fold between Day48 and Day153. The placental barrier was thinner, and volume of placental components, and surface areas and densities were higher at Day153 than Day48. We confirmed two placentome types, flat and convex. At Day48, there were more convex than flat placentomes, and convex placentomes had a lower proportion of maternal connective tissue (P < 0.01). However, this was reversed at Day153, where convex placentomes were lower in number and had greater volume of placental components (P < 0.01 to P < 0.001) and greater surface area (P < 0.001) than flat placentomes. Importantly, embryo (r = 0.50) and fetal (r = 0.30) weight correlated with total number of convex but not flat placentomes. Extensive remodelling of the placenta increases capacity for nutrient exchange to support rapidly increasing embryo-fetal weight from Day48 to Day153. The cellular composition of convex placentomes, and exclusive relationships between convex placentome number and embryo-fetal weight, provide strong evidence for these placentomes as drivers of prenatal growth. The difference in proportion of maternal connective tissue between placentome types at Day48 suggests that this tissue plays a role in determining placentome shape, further highlighting the importance of early placental development. Copyright © 2017 Elsevier Ltd. All rights reserved.
Liu, Chuyu [Newport News, VA; Zhang, Shukui [Yorktown, VA
2011-10-04
A single-lens, bullet-shaped laser beam shaper capable of redistributing an arbitrary beam profile into any desired output profile, comprising a unitary lens comprising: a) a convex front input surface defining a focal point and a flat output portion at the focal point; and b) a cylindrical core portion having a flat input surface coincident with the flat output portion of the first input portion at the focal point and a convex rear output surface remote from the convex front input surface.
Compressed quantum computation using a remote five-qubit quantum computer
NASA Astrophysics Data System (ADS)
Hebenstreit, M.; Alsina, D.; Latorre, J. I.; Kraus, B.
2017-05-01
The notion of compressed quantum computation is employed to simulate the Ising interaction of a one-dimensional chain consisting of n qubits using the universal IBM cloud quantum computer running on log2(n ) qubits. The external field parameter that controls the quantum phase transition of this model translates into particular settings of the quantum gates that generate the circuit. We measure the magnetization, which displays the quantum phase transition, on a two-qubit system, which simulates a four-qubit Ising chain, and show its agreement with the theoretical prediction within a certain error. We also discuss the relevant point of how to assess errors when using a cloud quantum computer with a limited amount of runs. As a solution, we propose to use validating circuits, that is, to run independent controlled quantum circuits of similar complexity to the circuit of interest.
NASA Astrophysics Data System (ADS)
Dmitriev, V. G.
1982-04-01
It is proved that a hypersurface f imbedded in \mathbf{R}^{n + 1}, n \geq 2, which is locally convex at all points except for a closed set E with (n - 1)-dimensional Hausdorff measure \mathcal{K}_{n - 1}(E) = 0, and strictly convex near E, is in fact locally convex everywhere. The author also gives various corollaries. In particular, let M be a complete two-dimensional Riemannian manifold of nonnegative curvature K and E \subset M a closed subset for which \mathcal{K}_1(E) = 0. Assume further that there exists a neighborhood U \supset E such that K(x) > 0 for x \in U \setminus E, f \colon M \to \mathbf{R}^3 is such that f\big\vert _{U \setminus E} is an imbedding, and f\big\vert _{M \setminus E} \in C^{1, \alpha}, \alpha > 2/3. Then f(M) is a complete convex surface in \mathbf{R}^3. This result is a generalization of results in the paper reviewed in MR 51 # 11374. Bibliography: 19 titles.
Turbulent boundary layers subjected to multiple curvatures and pressure gradients
NASA Technical Reports Server (NTRS)
Bandyopadhyay, Promode R.; Ahmed, Anwar
1993-01-01
The effects of abruptly applied cycles of curvature and pressure gradient on turbulent boundary layers are examined experimentally. Two two-dimensional curved test surfaces are considered: one has a sequence of concave and convex longitudinal surface curvatures and the other has a sequence of convex and concave curvatures. The choice of the curvature sequences was motivated by a desire to study the asymmetric response of turbulent boundary layers to convex and concave curvatures. The relaxation of a boundary layer from the effects of these two opposite sequences has been compared. The effect of the accompanying sequences of pressure gradient has also been examined, but the effect of curvature dominates. The growth of internal layers at the curvature junctions has been studied. Measurements of the Gortler and corner vortex systems have been made. The boundary layer recovering from the sequence of concave to convex curvature has a sustained lower skin friction level than that recovering from the sequence of convex to concave curvature. The amplification and suppression of turbulence due to the curvature sequences have also been studied.
Kirkwood-Buff integrals of finite systems: shape effects
NASA Astrophysics Data System (ADS)
Dawass, Noura; Krüger, Peter; Simon, Jean-Marc; Vlugt, Thijs J. H.
2018-06-01
The Kirkwood-Buff (KB) theory provides an important connection between microscopic density fluctuations in liquids and macroscopic properties. Recently, Krüger et al. derived equations for KB integrals for finite subvolumes embedded in a reservoir. Using molecular simulation of finite systems, KB integrals can be computed either from density fluctuations inside such subvolumes, or from integrals of radial distribution functions (RDFs). Here, based on the second approach, we establish a framework to compute KB integrals for subvolumes with arbitrary convex shapes. This requires a geometric function w(x) which depends on the shape of the subvolume, and the relative position inside the subvolume. We present a numerical method to compute w(x) based on Umbrella Sampling Monte Carlo (MC). We compute KB integrals of a liquid with a model RDF for subvolumes with different shapes. KB integrals approach the thermodynamic limit in the same way: for sufficiently large volumes, KB integrals are a linear function of area over volume, which is independent of the shape of the subvolume.
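For the special case of a spherical subvolume the geometric weight has a simple closed form, which makes the RDF route easy to sketch; the weight below assumes the known sphere result from the finite-volume KB literature, w(x) = 1 − 3x/2 + x³/2 with x = r/(2R), and the model RDF is a toy stand-in, not the paper's:

    import numpy as np

    def kb_integral_sphere(r, g, R):
        # Finite-volume KB integral for a spherical subvolume of radius R,
        # integrating (g - 1) against the sphere weight up to r = 2R.
        x = r / (2.0 * R)
        mask = x <= 1.0
        w = 1.0 - 1.5 * x[mask] + 0.5 * x[mask] ** 3
        return np.trapz((g[mask] - 1.0) * w * 4.0 * np.pi * r[mask] ** 2, r[mask])

    # toy model RDF: damped oscillations around 1 (stand-in for a real liquid)
    r = np.linspace(1e-3, 20.0, 4000)
    g = 1.0 + np.exp(-(r - 1.0)) * np.cos(2.0 * np.pi * (r - 1.0)) * (r > 1.0)

    for R in (2.0, 4.0, 8.0):
        print(R, kb_integral_sphere(r, g, R))
    # plotting the result against 1/R should approach the thermodynamic limit linearly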
NASA Astrophysics Data System (ADS)
López, J.; Hernández, J.; Gómez, P.; Faura, F.
2018-02-01
The VOFTools library includes efficient analytical and geometrical routines for (1) area/volume computation, (2) truncation operations that typically arise in VOF (volume of fluid) methods, (3) area/volume conservation enforcement (VCE) in PLIC (piecewise linear interface calculation) reconstruction and (4) computation of the distance from a given point to the reconstructed interface. The computation of a polyhedron volume uses an efficient formula based on a quadrilateral decomposition and a 2D projection of each polyhedron face. The analytical VCE method is based on coupling an interpolation procedure to bracket the solution with an improved final calculation step based on the above volume computation formula. Although the library was originally created to help develop highly accurate advection and reconstruction schemes in the context of VOF methods, it may have more general applications. To assess the performance of the supplied routines, different tests, which are provided in FORTRAN and C, were implemented for several 2D and 3D geometries.
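As a generic illustration of face-decomposition volume computation (not VOFTools' own routine or its API), the divergence theorem gives the volume of a closed polyhedron from a fan triangulation of its faces:

    import numpy as np

    def polyhedron_volume(vertices, faces):
        # Volume of a closed polyhedron with outward-oriented faces:
        # fan-triangulate each face and sum signed tetrahedron volumes
        # v0 . (v1 x v2) / 6 with respect to the origin.
        v = np.asarray(vertices, dtype=float)
        vol = 0.0
        for face in faces:
            for i in range(1, len(face) - 1):
                a, b, c = v[face[0]], v[face[i]], v[face[i + 1]]
                vol += np.dot(a, np.cross(b, c))
        return vol / 6.0

    # unit cube, each face listed counter-clockwise as seen from outside
    verts = [(x, y, z) for x in (0, 1) for y in (0, 1) for z in (0, 1)]
    faces = [(0, 1, 3, 2), (4, 6, 7, 5), (0, 4, 5, 1),
             (2, 3, 7, 6), (0, 2, 6, 4), (1, 5, 7, 3)]
    print(polyhedron_volume(verts, faces))  # 1.0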
Self-Scheduling Parallel Methods for Multiple Serial Codes with Application to WOPWOP
NASA Technical Reports Server (NTRS)
Long, Lyle N.; Brentner, Kenneth S.
2000-01-01
This paper presents a scheme for efficiently running a large number of serial jobs on parallel computers. Two examples are given of computer programs that run relatively quickly, but often they must be run numerous times to obtain all the results needed. It is very common in science and engineering to have codes that are not massive computing challenges in themselves, but due to the number of instances that must be run, they do become large-scale computing problems. The two examples given here represent common problems in aerospace engineering: aerodynamic panel methods and aeroacoustic integral methods. The first example simply solves many systems of linear equations. This is representative of an aerodynamic panel code where someone would like to solve for numerous angles of attack. The complete code for this first example is included in the appendix so that it can be readily used by others as a template. The second example is an aeroacoustics code (WOPWOP) that solves the Ffowcs Williams Hawkings equation to predict the far-field sound due to rotating blades. In this example, one quite often needs to compute the sound at numerous observer locations, hence parallelization is utilized to automate the noise computation for a large number of observers.
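The self-scheduling pattern itself is compact; a minimal sketch (our stand-in for the paper's template, with one linear solve per angle of attack playing the role of a serial panel-code run) could be:

```python
# Minimal self-scheduling sketch (assumed setup, not the paper's code):
# a pool of workers pulls independent serial tasks as they become free.
import numpy as np
from multiprocessing import Pool

N = 500  # panel count (illustrative)
rng = np.random.default_rng(0)
A = rng.standard_normal((N, N)) + N * np.eye(N)  # well-conditioned system

def solve_case(alpha_deg):
    """One 'serial job': build the RHS for this angle of attack and solve."""
    b = np.sin(np.deg2rad(alpha_deg)) * np.ones(N)
    x = np.linalg.solve(A, b)
    return alpha_deg, float(np.linalg.norm(x))

if __name__ == "__main__":
    angles = range(-10, 21)  # 31 independent cases
    with Pool() as pool:
        # imap_unordered hands the next case to whichever worker is idle,
        # which is the essence of self-scheduling.
        for alpha, xnorm in pool.imap_unordered(solve_case, angles):
            print(f"alpha={alpha:+3d} deg  |x|={xnorm:.4f}")
```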
Non-convex dissipation potentials in multiscale non-equilibrium thermodynamics
NASA Astrophysics Data System (ADS)
Janečka, Adam; Pavelka, Michal
2018-04-01
Reformulating a constitutive relation in terms of gradient dynamics (being the derivative of a dissipation potential) brings additional information on the stability, metastability and instability of the dynamics with respect to perturbations of the constitutive relation, called CR-stability. CR-instability is connected to the loss of convexity of the dissipation potential, which makes the Legendre-conjugate dissipation potential multivalued and causes dissipative phase transitions that are induced not by non-convexity of the free energy, but by non-convexity of the dissipation potential. CR-stability of the constitutive relation with respect to perturbations is then manifested by constructing evolution equations for the perturbations in a thermodynamically sound way (CR-extension). As a result, interesting experimental observations of the behavior of complex fluids under shear flow and of the supercritical boiling curve can be explained.
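In generic gradient-dynamics notation (our symbols, not necessarily the paper's), the structure at issue can be summarized as:

```latex
% Gradient dynamics: the thermodynamic flux J follows from a dissipation
% potential \Xi evaluated at the conjugate force X:
J = \frac{\partial \Xi(X)}{\partial X},
\qquad
\Xi^{*}(J) = \sup_{X}\bigl[\, J\,X - \Xi(X) \,\bigr].
% If \Xi is convex, the Legendre transform \Xi^{*} is single-valued and the
% constitutive relation inverts uniquely; if \Xi loses convexity, the sup
% is attained at several X and \Xi^{*} becomes multivalued -- the
% CR-instability mechanism described above.
```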
Counterfactual quantum computation through quantum interrogation
NASA Astrophysics Data System (ADS)
Hosten, Onur; Rakher, Matthew T.; Barreiro, Julio T.; Peters, Nicholas A.; Kwiat, Paul G.
2006-02-01
The logic underlying the coherent nature of quantum information processing often deviates from intuitive reasoning, leading to surprising effects. Counterfactual computation constitutes a striking example: the potential outcome of a quantum computation can be inferred, even if the computer is not run. Relying on similar arguments to interaction-free measurements (or quantum interrogation), counterfactual computation is accomplished by putting the computer in a superposition of `running' and `not running' states, and then interfering the two histories. Conditional on the as-yet-unknown outcome of the computation, it is sometimes possible to counterfactually infer information about the solution. Here we demonstrate counterfactual computation, implementing Grover's search algorithm with an all-optical approach. It was believed that the overall probability of such counterfactual inference is intrinsically limited, so that it could not perform better on average than random guesses. However, using a novel `chained' version of the quantum Zeno effect, we show how to boost the counterfactual inference probability to unity, thereby beating the random guessing limit. Our methods are general and apply to any physical system, as illustrated by a discussion of trapped-ion systems. Finally, we briefly show that, in certain circumstances, counterfactual computation can eliminate errors induced by decoherence.
4273π: Bioinformatics education on low cost ARM hardware
2013-01-01
Background Teaching bioinformatics at universities is complicated by typical computer classroom settings. As well as running software locally and online, students should gain experience of systems administration. For a future career in biology or bioinformatics, the installation of software is a useful skill. We propose that this may be taught by running the course on GNU/Linux running on inexpensive Raspberry Pi computer hardware, for which students may be granted full administrator access. Results We release 4273π, an operating system image for Raspberry Pi based on Raspbian Linux. This includes minor customisations for classroom use and includes our Open Access bioinformatics course, 4273π Bioinformatics for Biologists. This is based on the final-year undergraduate module BL4273, run on Raspberry Pi computers at the University of St Andrews, Semester 1, academic year 2012–2013. Conclusions 4273π is a means to teach bioinformatics, including systems administration tasks, to undergraduates at low cost. PMID:23937194
4273π: bioinformatics education on low cost ARM hardware.
Barker, Daniel; Ferrier, David Ek; Holland, Peter Wh; Mitchell, John Bo; Plaisier, Heleen; Ritchie, Michael G; Smart, Steven D
2013-08-12
Teaching bioinformatics at universities is complicated by typical computer classroom settings. As well as running software locally and online, students should gain experience of systems administration. For a future career in biology or bioinformatics, the installation of software is a useful skill. We propose that this may be taught by running the course on GNU/Linux running on inexpensive Raspberry Pi computer hardware, for which students may be granted full administrator access. We release 4273π, an operating system image for Raspberry Pi based on Raspbian Linux. This includes minor customisations for classroom use and includes our Open Access bioinformatics course, 4273π Bioinformatics for Biologists. This is based on the final-year undergraduate module BL4273, run on Raspberry Pi computers at the University of St Andrews, Semester 1, academic year 2012-2013. 4273π is a means to teach bioinformatics, including systems administration tasks, to undergraduates at low cost.
Statistical fingerprinting for malware detection and classification
Prowell, Stacy J.; Rathgeb, Christopher T.
2015-09-15
A system detects malware in a computing architecture with an unknown pedigree. The system includes a first computing device having a known pedigree and operating free of malware. The first computing device executes a series of instrumented functions that, when executed, provide a statistical baseline representative of the time it takes a known software application to run on a computing device with a known pedigree. A second computing device executes a second series of instrumented functions that, when executed, provide the actual time the known software application takes to run on the second computing device. The system detects malware when there is a difference in execution times between the first and second computing devices.
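A toy sketch of the timing-baseline idea (our simplification of the patented system; the workload, sample sizes, and threshold are all illustrative):

```python
# Flag a machine as suspect when instrumented functions run significantly
# slower than a trusted baseline.
import statistics
import time

def instrumented_workload():
    """Stand-in for one instrumented function of the known application."""
    s = 0
    for i in range(200_000):
        s += i * i
    return s

def time_workload(runs=20):
    samples = []
    for _ in range(runs):
        t0 = time.perf_counter()
        instrumented_workload()
        samples.append(time.perf_counter() - t0)
    return samples

# Phase 1: on the known-clean machine, record the statistical baseline.
baseline = time_workload()
mu, sigma = statistics.mean(baseline), statistics.stdev(baseline)

# Phase 2: on the machine of unknown pedigree, compare execution times.
observed = statistics.mean(time_workload())
z = (observed - mu) / sigma
print(f"z-score = {z:.2f}")
if z > 3.0:  # threshold is a modeling choice, not from the patent
    print("timing anomaly: possible malware interference")
```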
Blind image fusion for hyperspectral imaging with the directional total variation
NASA Astrophysics Data System (ADS)
Bungert, Leon; Coomes, David A.; Ehrhardt, Matthias J.; Rasch, Jennifer; Reisenhofer, Rafael; Schönlieb, Carola-Bibiane
2018-04-01
Hyperspectral imaging is a cutting-edge type of remote sensing used for mapping vegetation properties, rock minerals and other materials. A major drawback of hyperspectral imaging devices is their intrinsic low spatial resolution. In this paper, we propose a method for increasing the spatial resolution of a hyperspectral image by fusing it with an image of higher spatial resolution that was obtained with a different imaging modality. This is accomplished by solving a variational problem in which the regularization functional is the directional total variation. To accommodate possible mis-registrations between the two images, we consider a non-convex blind super-resolution problem where both a fused image and the corresponding convolution kernel are estimated. Using this approach, our model can realign the given images if needed. Our experimental results indicate that the non-convexity is negligible in practice and that reliable solutions can be computed using a variety of different optimization algorithms. Numerical results on real remote sensing data from plant sciences and urban monitoring show the potential of the proposed method and suggest that it is robust with respect to the regularization parameters, mis-registration and the shape of the kernel.
Distortion outage minimization in Nakagami fading using limited feedback
NASA Astrophysics Data System (ADS)
Wang, Chih-Hong; Dey, Subhrakanti
2011-12-01
We focus on a decentralized estimation problem via a clustered wireless sensor network measuring a random Gaussian source where the clusterheads amplify and forward their received signals (from the intra-cluster sensors) over orthogonal independent stationary Nakagami fading channels to a remote fusion center that reconstructs an estimate of the original source. The objective of this paper is to design clusterhead transmit power allocation policies to minimize the distortion outage probability at the fusion center, subject to an expected sum transmit power constraint. In the case when full channel state information (CSI) is available at the clusterhead transmitters, the optimization problem can be shown to be convex and is solved exactly. When only rate-limited channel feedback is available, we design a number of computationally efficient sub-optimal power allocation algorithms to solve the associated non-convex optimization problem. We also derive an approximation for the diversity order of the distortion outage probability in the limit when the average transmission power goes to infinity. Numerical results illustrate that the sub-optimal power allocation algorithms perform very well and can close the outage probability gap between the constant power allocation (no CSI) and full CSI-based optimal power allocation with only 3-4 bits of channel feedback.
Filtered-x generalized mixed norm (FXGMN) algorithm for active noise control
NASA Astrophysics Data System (ADS)
Song, Pucha; Zhao, Haiquan
2018-07-01
The standard adaptive filtering algorithm with a single error norm exhibits a slow convergence rate and poor noise reduction performance in certain environments. To overcome this drawback, a filtered-x generalized mixed norm (FXGMN) algorithm for active noise control (ANC) systems is proposed. The FXGMN algorithm is developed by using a convex mixture of lp and lq norms as the cost function; it can be viewed as a generalized version of most existing adaptive filtering algorithms, and it reduces to a specific algorithm by choosing certain parameters. In particular, it can be used for ANC under Gaussian and non-Gaussian noise environments (including impulsive noise with a symmetric α-stable (SαS) distribution). To further enhance the algorithm performance, namely convergence speed and noise reduction, a convex combination of FXGMN filters (C-FXGMN) is presented. Moreover, the computational complexity of the proposed algorithms is analyzed, and a stability condition for the proposed algorithms is provided. Simulation results show that the proposed FXGMN and C-FXGMN algorithms achieve faster convergence and higher noise reduction than other existing algorithms under various noise input conditions, and the C-FXGMN algorithm outperforms the FXGMN.
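A stripped-down sketch of the mixed-norm idea (our simplification: a plain generalized-mixed-norm LMS for system identification, omitting the filtered-x secondary-path stage of the real ANC algorithm; p, q and λ are illustrative):

```python
# Cost J = lam*|e|^p + (1-lam)*|e|^q, minimized by stochastic gradient.
import numpy as np

rng = np.random.default_rng(1)
M, mu, p, q, lam = 16, 0.01, 1.2, 2.0, 0.5
w_true = rng.standard_normal(M)          # unknown plant to identify
w = np.zeros(M)
x_buf = np.zeros(M)

for n in range(20_000):
    x = rng.standard_normal()
    x_buf = np.roll(x_buf, 1); x_buf[0] = x
    d = w_true @ x_buf + 0.01 * rng.standard_normal()
    e = d - w @ x_buf
    # gradient of the mixed-norm cost w.r.t. w (eps avoids 0**negative)
    ae = abs(e) + 1e-12
    g = (lam * p * ae**(p - 1) + (1 - lam) * q * ae**(q - 1)) * np.sign(e)
    w += mu * g * x_buf

print("final misalignment:",
      np.linalg.norm(w - w_true) / np.linalg.norm(w_true))
```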
Real time gesture based control: A prototype development
NASA Astrophysics Data System (ADS)
Bhargava, Deepshikha; Solanki, L.; Rai, Satish Kumar
2016-03-01
The computer industry is advancing rapidly. Robots have been replacing humans, increasing the efficiency, accessibility and accuracy of systems, and creating new forms of man-machine interaction; the robotics industry is developing many new trends. However, robots still need to be controlled by humans. This paper presents an approach to controlling a motor, and by extension a robot, with hand gestures rather than buttons or other physical input devices. Controlling robots with hand gestures is very popular nowadays. Here, gesture features are applied for detecting and tracking the hand in real time. A principal component analysis (PCA) algorithm is used for identification of a hand gesture, implemented with the OpenCV image processing library. Contours, convex hull, and convexity defects are the gesture features. PCA is a statistical approach used for reducing the number of variables in hand recognition while extracting the most relevant information (features) contained in the hand images. After the hand is detected and recognized, a servo motor is controlled using the hand gesture as an input device (like a mouse or keyboard), reducing human effort.
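The named gesture features map directly onto OpenCV calls; a minimal sketch (our illustration, with an arbitrary depth threshold standing in for tuned parameters):

```python
# Contours, convex hull and convexity defects of a binary hand mask.
import cv2
import numpy as np

def hand_defects(mask):
    """Count deep convexity defects in a binary hand mask (uint8, 0/255)."""
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return 0
    cnt = max(contours, key=cv2.contourArea)        # largest blob = hand
    hull_idx = cv2.convexHull(cnt, returnPoints=False)
    defects = cv2.convexityDefects(cnt, hull_idx)
    if defects is None:
        return 0
    # keep only deep defects (finger valleys); depth is in 1/256 px units
    return int(np.sum(defects[:, 0, 3] / 256.0 > 20.0))

# Toy input: a filled convex disk has no deep defects.
mask = np.zeros((200, 200), np.uint8)
cv2.circle(mask, (100, 100), 60, 255, -1)
print(hand_defects(mask))  # -> 0
```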
Precision platform for convex lens-induced confinement microscopy
NASA Astrophysics Data System (ADS)
Berard, Daniel; McFaul, Christopher M. J.; Leith, Jason S.; Arsenault, Adriel K. J.; Michaud, François; Leslie, Sabrina R.
2013-10-01
We present the conception, fabrication, and demonstration of a versatile, computer-controlled microscopy device which transforms a standard inverted fluorescence microscope into a precision single-molecule imaging station. The device uses the principle of convex lens-induced confinement [S. R. Leslie, A. P. Fields, and A. E. Cohen, Anal. Chem. 82, 6224 (2010)], which employs a tunable imaging chamber to enhance background rejection and extend diffusion-limited observation periods. Using nanopositioning stages, this device achieves repeatable and dynamic control over the geometry of the sample chamber on scales as small as the size of individual molecules, enabling regulation of their configurations and dynamics. Using microfluidics, this device enables serial insertion as well as sample recovery, facilitating temporally controlled, high-throughput measurements of multiple reagents. We report on the simulation and experimental characterization of this tunable chamber geometry, and its influence upon the diffusion and conformations of DNA molecules over extended observation periods. This new microscopy platform has the potential to capture, probe, and influence the configurations of single molecules, with dramatically improved imaging conditions in comparison to existing technologies. These capabilities are of immediate interest to a wide range of research and industry sectors in biotechnology, biophysics, materials, and chemistry.
Noncontact methods for optical testing of convex aspheric mirrors for future large telescopes
NASA Astrophysics Data System (ADS)
Goncharov, Alexander V.; Druzhin, Vladislav V.; Batshev, Vladislav I.
2009-06-01
Non-contact methods for testing of large rotationally symmetric convex aspheric mirrors are proposed. These methods are based on non-null testing with side illumination schemes, in which a narrow collimated beam is reflected from the meridional aspheric profile of a mirror. The figure error of the mirror is deduced from the intensity pattern of the reflected beam obtained on a screen, which is positioned in the tangential plane (containing the optical axis) and perpendicular to the incoming beam. Testing of the entire surface is carried out by rotating the mirror about its optical axis and registering the characteristics of the intensity pattern on the screen. The intensity pattern can be formed using three different techniques: a modified Hartmann test, interference, and a boundary-curve test. All these techniques are well known but have not been used in the proposed side illumination scheme. Analytical expressions characterizing the shape and location of the intensity pattern on the screen or a CCD have been developed for all types of conic surfaces. The main advantage of these testing methods compared with existing methods (Hindle sphere, null lens, computer-generated hologram) is that the reference system does not require large optical components.
An oscillation free shock-capturing method for compressible van der Waals supercritical fluid flows
Pantano, C.; Saurel, R.; Schmitt, T.
2017-02-01
Numerical solutions of the Euler equations using real gas equations of state (EOS) often exhibit serious inaccuracies. The focus here is the van der Waals EOS and its variants (often used in supercritical fluid computations). The problems are not related to a lack of convexity of the EOS since the EOS are considered in their domain of convexity at any mesh point and at any time. The difficulties appear as soon as a density discontinuity is present with the rest of the fluid in mechanical equilibrium and typically result in spurious pressure and velocity oscillations. This is reminiscent of well-known pressure oscillations occurring with ideal gas mixtures when a mass fraction discontinuity is present, which can be interpreted as a discontinuity in the EOS parameters. We are concerned with pressure oscillations that appear just for a single fluid each time a density discontinuity is present. As a result, the combination of density in a nonlinear fashion in the EOS with diffusion by the numerical method results in violation of mechanical equilibrium conditions which are not easy to eliminate, even under grid refinement.
NASA Astrophysics Data System (ADS)
Chen, Buxin; Zhang, Zheng; Sidky, Emil Y.; Xia, Dan; Pan, Xiaochuan
2017-11-01
Optimization-based algorithms for image reconstruction in multispectral (or photon-counting) computed tomography (MCT) remain a topic of active research. The challenge of optimization-based image reconstruction in MCT stems from the inherently non-linear data model that can lead to a non-convex optimization program for which no mathematically exact solver seems to exist for achieving globally optimal solutions. In this work, based upon a non-linear data model, we design a non-convex optimization program, derive its first-order-optimality conditions, and propose an algorithm to solve the program for image reconstruction in MCT. In addition to consideration of image reconstruction for the standard scan configuration, the emphasis is on investigating the algorithm’s potential for enabling non-standard scan configurations with no or minimum hardware modification to existing CT systems, which has potential practical implications for lowered hardware cost, enhanced scanning flexibility, and reduced imaging dose/time in MCT. Numerical studies are carried out for verification of the algorithm and its implementation, and for a preliminary demonstration and characterization of the algorithm in reconstructing images and in enabling non-standard configurations with varying scanning angular range and/or x-ray illumination coverage in MCT.
An optimal algorithm for reconstructing images from binary measurements
NASA Astrophysics Data System (ADS)
Yang, Feng; Lu, Yue M.; Sbaiz, Luciano; Vetterli, Martin
2010-01-01
We have studied a camera with a very large number of binary pixels, referred to as the gigavision camera [1] or the gigapixel digital film camera [2, 3]. Potential advantages of this new camera design include improved dynamic range, thanks to its logarithmic sensor response curve, and reduced exposure time in low-light conditions, due to its highly sensitive photon detection mechanism. We use a maximum likelihood estimator (MLE) to reconstruct a high-quality conventional image from the binary sensor measurements of the gigavision camera. We prove that when the threshold T is "1", the negative log-likelihood function is convex. Therefore, the optimal solution can be found using convex optimization. Based on filter-bank techniques, fast algorithms are given for computing the gradient of the negative log-likelihood function and its products with a vector through the Hessian matrix. We show that with a minor change, our algorithm also works for estimating conventional images from multiple binary images. Numerical experiments with synthetic 1-D signals and images verify the effectiveness and quality of the proposed algorithm. Experimental results also show that estimation performance can be improved by increasing the oversampling factor or the number of binary images.
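For threshold T = 1 the per-pixel likelihood is simple enough to sketch. Assuming Poisson photon arrivals, so that each of K binary subpixels fires with probability 1 − exp(−λ/K) (our notation, not the paper's code):

```python
import numpy as np

rng = np.random.default_rng(2)
K, lam_true = 1024, 80.0
bits = rng.random(K) < 1.0 - np.exp(-lam_true / K)
S = int(bits.sum())  # number of binary subpixels that fired

def nll(lam):
    """Negative log-likelihood of intensity lam; convex for T = 1."""
    p = 1.0 - np.exp(-lam / K)
    return -(S * np.log(p) + (K - S) * (-lam / K))

# For T = 1 the MLE even has a closed form:
lam_hat = -K * np.log(1.0 - S / K)
print(f"true {lam_true:.1f}  MLE {lam_hat:.1f}")

# Numeric check of convexity on a grid (second differences >= 0).
grid = np.linspace(10, 300, 200)
vals = np.array([nll(l) for l in grid])
print("convex:", bool(np.all(np.diff(vals, 2) > -1e-9)))
```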
HEP Computing Tools, Grid and Supercomputers for Genome Sequencing Studies
NASA Astrophysics Data System (ADS)
De, K.; Klimentov, A.; Maeno, T.; Mashinistov, R.; Novikov, A.; Poyda, A.; Tertychnyy, I.; Wenaus, T.
2017-10-01
PanDA, the Production and Distributed Analysis workload management system, was developed to address the data processing and analysis challenges of the ATLAS experiment at the LHC. Recently PanDA has been extended to run HEP scientific applications on Leadership Class Facilities and supercomputers. The success of projects using PanDA beyond HEP and the Grid has drawn attention from other compute-intensive sciences such as bioinformatics. Recent advances in Next Generation Genome Sequencing (NGS) technology have led to increasing streams of sequencing data that need to be processed, analysed and made available for bioinformaticians worldwide. Analysis of genome sequencing data using the popular software pipeline PALEOMIX can take a month even on a powerful computing resource. In this paper we describe the adaptation of the PALEOMIX pipeline to run on a distributed computing environment powered by PanDA. To run the pipeline we split input files into chunks which are processed separately on different nodes as independent PALEOMIX inputs, and finally merge the output files; this is very similar to how ATLAS processes and simulates data. We dramatically decreased the total wall time thanks to automated job (re)submission and brokering within PanDA. Using software tools developed initially for HEP and the Grid can reduce payload execution time for mammoth DNA samples from weeks to days.
RES: Regularized Stochastic BFGS Algorithm
NASA Astrophysics Data System (ADS)
Mokhtari, Aryan; Ribeiro, Alejandro
2014-12-01
RES, a regularized stochastic version of the Broyden-Fletcher-Goldfarb-Shanno (BFGS) quasi-Newton method, is proposed to solve convex optimization problems with stochastic objectives. The use of stochastic gradient descent algorithms is widespread, but the number of iterations required to approximate optimal arguments can be prohibitive in high-dimensional problems. Application of second-order methods, on the other hand, is impracticable because computation of objective function Hessian inverses incurs excessive computational cost. BFGS modifies gradient descent by introducing a Hessian approximation matrix computed from finite gradient differences. RES utilizes stochastic gradients in lieu of deterministic gradients for both the determination of descent directions and the approximation of the objective function's curvature. Since stochastic gradients can be computed at manageable computational cost, RES is realizable and retains the convergence rate advantages of its deterministic counterparts. Convergence results show that lower and upper bounds on the Hessian eigenvalues of the sample functions are sufficient to guarantee convergence to optimal arguments. Numerical experiments showcase reductions in convergence time relative to stochastic gradient descent algorithms and non-regularized stochastic versions of BFGS. An application of RES to the implementation of support vector machines is developed.
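A loose illustration of the ingredients (stochastic gradients, BFGS curvature pairs on a common minibatch, explicit regularization); this is a simplified sketch of the idea, not the exact RES update from the paper:

```python
# Stochastic quasi-Newton on a linear least-squares toy problem.
import numpy as np

rng = np.random.default_rng(3)
d = 10
w_star = rng.standard_normal(d)

def sgrad(w, batch=8):
    """Minibatch gradient of E[(a^T w - a^T w*)^2] / 2."""
    A = rng.standard_normal((batch, d))
    return A.T @ (A @ (w - w_star)) / batch, A

w = np.zeros(d)
H = np.eye(d)                       # inverse-Hessian approximation
delta, Gamma, eps = 1e-3, 1e-2, 0.05

for t in range(2000):
    g, A = sgrad(w)
    step = -eps * (H @ g + Gamma * g)   # regularized quasi-Newton step
    w_new = w + step
    # Gradient difference on the SAME minibatch (a standard trick in
    # stochastic quasi-Newton methods), with a -delta*s regularization.
    g_new = A.T @ (A @ (w_new - w_star)) / len(A)
    s, y = w_new - w, g_new - g - delta * step
    if s @ y > 1e-10:                   # curvature condition
        rho = 1.0 / (s @ y)
        V = np.eye(d) - rho * np.outer(s, y)
        H = V @ H @ V.T + rho * np.outer(s, s)
    w = w_new

print("distance to optimum:", np.linalg.norm(w - w_star))
```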
Running R Statistical Computing Environment Software on the Peregrine
R is a collaborative project for the development of new statistical methodologies and enjoys a large user base; it offers natural language support, though it runs here in an English locale (please consult the distribution for details). The CRAN task view for High Performance Computing describes programming paradigms to better leverage modern HPC systems.
High Resolution Nature Runs and the Big Data Challenge
NASA Technical Reports Server (NTRS)
Webster, W. Phillip; Duffy, Daniel Q.
2015-01-01
NASA's Global Modeling and Assimilation Office at Goddard Space Flight Center is undertaking a series of very computationally intensive Nature Runs and a downscaled reanalysis. The nature runs use the GEOS-5 as an Atmospheric General Circulation Model (AGCM), while the reanalysis uses the GEOS-5 in data assimilation mode. This paper will present computational challenges from three runs, two of which are AGCM runs and one a downscaled reanalysis using the full DAS. The nature runs will be completed at two surface grid resolutions, 7 and 3 kilometers, with 72 vertical levels. The 7 km run spanned 2 years (2005-2006) and produced 4 PB of data, while the 3 km run will span one year and generate 4 PB of data. The downscaled reanalysis (MERRA-II, Modern-Era Reanalysis for Research and Applications) will cover 15 years and generate 1 PB of data. In our efforts to address the big data challenges of climate science, we are moving toward a notion of Climate Analytics-as-a-Service (CAaaS), a specialization of the concept of business-process-as-a-service that is an evolving extension of IaaS, PaaS, and SaaS enabled by cloud computing. In this presentation, we will describe two projects that demonstrate this shift. MERRA Analytic Services (MERRA/AS) is an example of cloud-enabled CAaaS. MERRA/AS enables MapReduce analytics over the MERRA reanalysis data collection by bringing together high-performance computing, scalable data management, and a domain-specific climate data services API. NASA's High-Performance Science Cloud (HPSC) is an example of the type of compute-storage fabric required to support CAaaS. The HPSC comprises a high-speed InfiniBand network, high-performance file systems and object storage, and virtual system environments specific to data-intensive science applications. These technologies provide a new tier in the data and analytic services stack that helps connect earthbound, enterprise-level data and computational resources to new customers and new mobility-driven applications and modes of work. In our experience, CAaaS lowers the barriers and risk to organizational change, fosters innovation and experimentation, and provides the agility required to meet our customers' increasing and changing needs.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Yu, E-mail: yuzhang@smu.edu.cn, E-mail: qianjinfeng08@gmail.com; Wu, Xiuxiu; Yang, Wei
2014-11-01
Purpose: The use of 4D computed tomography (4D-CT) of the lung is important in lung cancer radiotherapy for tumor localization and treatment planning. Sometimes, dense sampling is not acquired along the superior-inferior direction. This results in an interslice thickness that is much greater than the in-plane voxel resolution. Isotropic resolution is necessary for multiplanar display, but the commonly used interpolation operation blurs images. This paper presents a super-resolution (SR) reconstruction method to enhance 4D-CT resolution. Methods: The authors assume that the low-resolution images of different phases at the same position can be regarded as input "frames" from which to reconstruct high-resolution images. The SR technique is used to recover the high-resolution images. Specifically, the Demons deformable registration algorithm is used to estimate the motion field between different "frames." Then, the projection onto convex sets (POCS) approach is applied to reconstruct high-resolution lung images. Results: The performance of the SR algorithm is evaluated using both simulated and real datasets. The method generates clearer lung images and enhances image structure compared with cubic spline interpolation and the back projection (BP) method. Quantitative analysis shows that the proposed algorithm decreases the root mean square error by 40.8% relative to cubic spline interpolation and 10.2% relative to BP. Conclusions: A new algorithm has been developed to improve the resolution of 4D-CT. The algorithm outperforms the cubic spline interpolation and BP approaches by producing images with markedly improved structural clarity and greatly reduced artifacts.
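The POCS step itself is compact; a hedged sketch assuming known (identity) registration, so the paper's Demons motion estimation is omitted entirely:

```python
# Super-resolution by projections onto convex sets: repeatedly enforce
# consistency of the high-res estimate with each low-res "frame".
import numpy as np

def downsample(x, f):                 # box-average decimation
    h, w = x.shape
    return x.reshape(h // f, f, w // f, f).mean(axis=(1, 3))

def upsample(x, f):                   # pixel replication (adjoint-like)
    return np.kron(x, np.ones((f, f)))

rng = np.random.default_rng(4)
f = 4
hi_true = np.clip(rng.standard_normal((64, 64)).cumsum(0).cumsum(1), 0, None)
frames = [downsample(hi_true, f) + 0.01 * rng.standard_normal((16, 16))
          for _ in range(8)]

hi = upsample(np.mean(frames, axis=0), f)    # initial guess
for it in range(50):
    for lo in frames:                        # project onto each data set
        resid = lo - downsample(hi, f)
        hi += upsample(resid, f)             # back-project the residual
print("RMSE:", np.sqrt(np.mean((hi - hi_true) ** 2)))
```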
Improved flight-simulator viewing lens
NASA Technical Reports Server (NTRS)
Kahlbaum, W. M.
1979-01-01
Triplet lens system uses two acrylic plastic double-convex lenses and one polystyrene plastic single convex lens to reduce chromatic distortion and lateral aberration, especially at large field angles, within in-line systems of flight simulators.
Stereotype locally convex spaces
NASA Astrophysics Data System (ADS)
Akbarov, S. S.
2000-08-01
We give complete proofs of some previously announced results in the theory of stereotype (that is, reflexive in the sense of Pontryagin duality) locally convex spaces. These spaces have important applications in topological algebra and functional analysis.
Interface Shape Control Using Localized Heating during Bridgman Growth
NASA Technical Reports Server (NTRS)
Volz, M. P.; Mazuruk, K.; Aggarwal, M. D.; Croll, A.
2008-01-01
Numerical calculations were performed to assess the effect of localized radial heating on the melt-crystal interface shape during vertical Bridgman growth. System parameters examined include the ampoule, melt and crystal thermal conductivities, the magnitude and width of localized heating, and the latent heat of crystallization. Concave interface shapes, typical of semiconductor systems, could be flattened or made convex with localized heating. Although localized heating caused shallower thermal gradients ahead of the interface, the magnitude of the localized heating required for convexity was less than that which resulted in a thermal inversion ahead of the interface. A convex interface shape was most readily achieved with ampoules of lower thermal conductivity. Increasing melt convection tended to flatten the interface, but the amount of radial heating required to achieve a convex interface was essentially independent of the convection intensity.
Pin stack array for thermoacoustic energy conversion
Keolian, Robert M.; Swift, Gregory W.
1995-01-01
A thermoacoustic stack for connecting two heat exchangers in a thermoacoustic energy converter provides a convex fluid-solid interface in a plane perpendicular to an axis for acoustic oscillation of fluid between the two heat exchangers. The convex surfaces increase the ratio of the fluid volume in the effective thermoacoustic volume that is displaced from the convex surface to the fluid volume that is adjacent the surface within which viscous energy losses occur. Increasing the volume ratio results in an increase in the ratio of transferred thermal energy to viscous energy losses, with a concomitant increase in operating efficiency of the thermoacoustic converter. The convex surfaces may be easily provided by a pin array having elements arranged parallel to the direction of acoustic oscillations and with effective radial dimensions much smaller than the thicknesses of the viscous energy loss and thermoacoustic energy transfer volumes.
Mount, D W; Conrad, B
1986-01-01
We have previously described programs for a variety of types of sequence analysis (1-4). These programs have now been integrated into a single package. They are written in the standard C programming language and run on virtually any computer system with a C compiler, such as the IBM/PC and other computers running under the MS/DOS and UNIX operating systems. The programs are widely distributed and may be obtained from the authors as described below. PMID:3753780
NASA Astrophysics Data System (ADS)
Decyk, Viktor K.; Dauger, Dean E.
We have constructed a parallel cluster consisting of Apple Macintosh G4 computers running both Classic Mac OS as well as the Unix-based Mac OS X, and have achieved very good performance on numerically intensive, parallel plasma particle-in-cell simulations. Unlike other Unix-based clusters, no special expertise in operating systems is required to build and run the cluster. This enables us to move parallel computing from the realm of experts to the mainstream of computing.
Numerical models for fluid-grains interactions: opportunities and limitations
NASA Astrophysics Data System (ADS)
Esteghamatian, Amir; Rahmani, Mona; Wachs, Anthony
2017-06-01
In the framework of a multi-scale approach, we develop numerical models for suspension flows. At the micro-scale level, we perform particle-resolved numerical simulations using a Distributed Lagrange Multiplier/Fictitious Domain approach. At the meso-scale level, we use a two-way Euler/Lagrange approach with a Gaussian filtering kernel to model fluid-solid momentum transfer. At both levels, particles are individually tracked in a Lagrangian way and all inter-particle collisions are computed by a Discrete Element/Soft-sphere method. These numerical models have been extended to handle particles of arbitrary shape (non-spherical, angular and even non-convex) as well as to treat heat and mass transfer. All simulation tools are fully MPI-parallel with standard domain decomposition and run on supercomputers with satisfactory scalability on up to a few thousand cores. The main asset of multi-scale analysis is the ability to extend our comprehension of the dynamics of suspension flows based on the knowledge acquired from high-fidelity micro-scale simulations and to use that knowledge to improve the meso-scale model. We illustrate how we can benefit from this strategy for a fluidized bed, where we introduce a stochastic drag force model derived from micro-scale simulations to recover the proper level of particle fluctuations. Conversely, we discuss the limitations of such modelling tools, such as their limited ability to capture lubrication forces and boundary layers in highly inertial flows. We suggest ways to overcome these limitations in order to further enhance the capabilities of the numerical models.
Shear thickening and jamming in suspensions of different particle shapes
NASA Astrophysics Data System (ADS)
Brown, Eric; Zhang, Hanjun; Forman, Nicole; Betts, Douglas; Desimone, Joseph; Maynor, Benjamin; Jaeger, Heinrich
2012-02-01
We investigated the role of particle shape on shear thickening and jamming in densely packed suspensions. Various particle shapes were fabricated, including rods of different aspect ratios and non-convex hooked rods. A rheometer was used to measure shear stress vs. shear rate over a wide range of packing fractions for each shape. Each suspension exhibits qualitatively similar Discontinuous Shear Thickening, in which the logarithmic slope of the stress vs. shear rate has the same scaling for each convex shape and diverges at a critical packing fraction φc. The value of φc varies with particle shape, and coincides with the onset of a yield stress, a.k.a. the jamming transition. This suggests the jamming transition controls shear thickening, and the only effect of particle shape on steady-state bulk rheology of convex particles is a shift of φc. Intriguingly, viscosity curves for non-convex particles do not collapse on the same set as convex particles, showing strong shear thickening over a wider range of packing fraction. Qualitative shape dependence was only found in steady-state rheology when the system was confined to small gaps where large-aspect-ratio particles are forced to order.
Convex Curved Crystal Spectrograph for Pulsed Plasma Sources.
The geometry of a convex curved crystal spectrograph as applied to pulsed plasma sources is presented. Also presented are data from the dense plasma focus, with particular emphasis on the absolute intensity of line radiation.
Using certification trails to achieve software fault tolerance
NASA Technical Reports Server (NTRS)
Sullivan, Gregory F.; Masson, Gerald M.
1993-01-01
A conceptually novel and powerful technique to achieve fault tolerance in hardware and software systems is introduced. When used for software fault tolerance, this new technique uses time and software redundancy and can be outlined as follows. In the initial phase, a program is run to solve a problem and store the result. In addition, this program leaves behind a trail of data called a certification trail. In the second phase, another program is run which solves the original problem again. This program, however, has access to the certification trail left by the first program. Because of the availability of the certification trail, the second phase can be performed by a less complex program and can execute more quickly. In the final phase, the two results are compared; if they agree, they are accepted as correct, otherwise an error is indicated. An essential aspect of this approach is that the second program must always generate either an error indication or a correct output, even when the certification trail it receives from the first program is incorrect. The certification trail approach to fault tolerance was formalized, and it was illustrated by applying it to the fundamental problem of finding a minimum spanning tree. Cases in which the second phase can be run concurrently with the first and act as a monitor are discussed. The certification trail approach was compared to other approaches to fault tolerance. Because of space limitations we have omitted examples of our technique applied to the Huffman tree and convex hull problems. These can be found in the full version of this paper.
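A toy certification trail for sorting (our example; the paper treats minimum spanning trees and other problems) shows the two-phase structure:

```python
# Phase 1 solves the problem and leaves a trail (the sorting permutation);
# phase 2 re-solves it quickly using the trail, and must reject bad trails.
def phase1_sort(data):
    trail = sorted(range(len(data)), key=data.__getitem__)
    result = [data[i] for i in trail]
    return result, trail

def phase2_verify(data, trail):
    """Re-compute the answer cheaply from the trail; reject any bad trail."""
    if sorted(trail) != list(range(len(data))):  # trail is a permutation?
        raise ValueError("trail is not a permutation")
    result = [data[i] for i in trail]
    if any(a > b for a, b in zip(result, result[1:])):
        raise ValueError("trail does not yield sorted order")
    return result

data = [5, 3, 8, 1]
out1, trail = phase1_sort(data)
out2 = phase2_verify(data, trail)
assert out1 == out2 == [1, 3, 5, 8]  # results agree => accept as correct
```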
User's instructions for the cardiovascular Walters model
NASA Technical Reports Server (NTRS)
Croston, R. C.
1973-01-01
The model is a combined, steady-state cardiovascular and thermal model. It was originally developed for interactive use, but was converted to batch-mode simulation for the Sigma 3 computer. The purpose of the model is to compute steady-state circulatory and thermal variables in response to exercise work loads and environmental factors. During a computer simulation run, several selected variables are printed at each time step. End conditions are also printed at the completion of the run.
A Quantum Computing Approach to Model Checking for Advanced Manufacturing Problems
2014-07-01
… amount of time. In summary, the tool we developed succeeded in allowing us to produce good solutions for optimization problems that did not fit … We compared the value of the objective obtained in each run with the known optimal value, and used this information to compute the probability of … success for each given instance. Then we used this information to compute the expected number of repetitions (or runs) needed to obtain the optimal …
NASA Astrophysics Data System (ADS)
Chen, Xiuhong; Huang, Xianglei; Jiao, Chaoyi; Flanner, Mark G.; Raeker, Todd; Palen, Brock
2017-01-01
The suites of numerical models used for simulating the climate of our planet are usually run on dedicated high-performance computing (HPC) resources. This study investigates an alternative to the usual approach, i.e. carrying out climate model simulations in a commercially available cloud computing environment. We test the performance and reliability of running the CESM (Community Earth System Model), a flagship climate model in the United States developed by the National Center for Atmospheric Research (NCAR), on Amazon Web Services (AWS) EC2, the cloud computing environment of Amazon.com, Inc. StarCluster is used to create a virtual computing cluster on AWS EC2 for the CESM simulations. The wall-clock time for one year of CESM simulation on the AWS EC2 virtual cluster is comparable to the time spent for the same simulation on a local dedicated high-performance computing cluster with InfiniBand connections. The CESM simulation scales efficiently with the number of CPU cores on the AWS EC2 virtual cluster environment up to 64 cores. For the standard configuration of the CESM at a spatial resolution of 1.9° latitude by 2.5° longitude, increasing the number of cores from 16 to 64 reduces the wall-clock running time by more than 50%, and the scaling is nearly linear. Beyond 64 cores, the communication latency starts to outweigh the benefit of distributed computing and the parallel speedup becomes nearly unchanged.
Optimal boundary regularity for a singular Monge-Ampère equation
NASA Astrophysics Data System (ADS)
Jian, Huaiyu; Li, You
2018-06-01
In this paper we study the optimal global regularity for a singular Monge-Ampère type equation which arises from a few geometric problems. We find that the global regularity does not depend on the smoothness of the domain, but it does depend on the convexity of the domain. We introduce the notion of (a, η) type to describe the convexity. As a result, we show that the more convex the domain, the better the regularity of the solution. In particular, the regularity is best near angular points.
Compliant tactile sensor that delivers a force vector
NASA Technical Reports Server (NTRS)
Torres-Jara, Eduardo (Inventor)
2010-01-01
Tactile Sensor. The sensor includes a compliant convex surface disposed above a sensor array, the sensor array adapted to respond to deformation of the convex surface to generate a signal related to an applied force vector. The applied force vector has three components to establish the direction and magnitude of an applied force. The compliant convex surface defines a dome with a hollow interior and has a linear relation between displacement and load including a magnet disposed substantially at the center of the dome above a sensor array that responds to magnetic field intensity.
Convexity of level lines of Martin functions and applications
NASA Astrophysics Data System (ADS)
Gallagher, A.-K.; Lebl, J.; Ramachandran, K.
2018-01-01
Let Ω be an unbounded domain in ℝ × ℝ^d. A positive harmonic function u on Ω that vanishes on the boundary of Ω is called a Martin function. In this note, we show that, when Ω is convex, the superlevel sets of a Martin function are also convex. As a consequence we obtain that if, in addition, Ω has certain symmetry with respect to the t-axis and ∂Ω is sufficiently flat, then the maximum of any Martin function along a slice Ω ∩ ({t} × ℝ^d) is attained at (t, 0).
The Compressible Stokes Flows with No-Slip Boundary Condition on Non-Convex Polygons
NASA Astrophysics Data System (ADS)
Kweon, Jae Ryong
2017-03-01
In this paper we study the compressible Stokes equations with no-slip boundary condition on non-convex polygons and show a best regularity result that the solution can have without subtracting corner singularities. This is obtained by a suitable Helmholtz decomposition: u = w + ∇φ_R with div w = 0 and a potential φ_R. Here w is the solution of the incompressible Stokes problem and φ_R is defined by subtracting from the solution of the Neumann problem the leading two corner singularities at non-convex vertices.
Running Batch Jobs on Peregrine | High-Performance Computing | NREL
Using a Resource Feature to Request Different Node Types: Peregrine has several types of compute nodes; a resource feature can be used to request a specific node type, avoiding incompatibility and getting the job running. More information about requesting different node types on Peregrine is available. Queues: in order to meet the needs of different types of jobs, several queues are available on Peregrine.
Host-Nation Operations: Soldier Training on Governance (HOST-G) Training Support Package
2011-07-01
restricted this webpage from running scripts or ActiveX controls that could access your computer. Click here for options…” • If this occurs, select that...scripts and ActiveX controls can be useful, but active content might also harm your computer. Are you sure you want to let this file run active
24 CFR 15.110 - What fees will HUD charge?
Code of Federal Regulations, 2013 CFR
2013-04-01
... duplicating machinery. The computer run time includes the cost of operating a central processing unit for that... Applies. (6) Computer run time (includes only mainframe search time not printing) The direct cost of... estimated fee is more than $250.00 or you have a history of failing to pay FOIA fees to HUD in a timely...
NASA Technical Reports Server (NTRS)
Roberts, Floyd E., III
1994-01-01
Software provides for control and acquisition of data from an optical pyrometer. There are six individual programs in the PYROLASER package, providing a quick and easy way to set up, control, and program a standard Pyrolaser. Temperature and emissivity measurements are either collected as if the Pyrolaser were in manual operating mode, or displayed on real-time strip charts and stored in standard spreadsheet format for post-test analysis. A shell is supplied to allow test-specific macros to be added to the system easily. Written using LabVIEW software for use on Macintosh-series computers running System 6.0.3 or later, Sun Sparc-series computers running OpenWindows 3.0 or MIT's X Window System (X11R4 or X11R5), and IBM PC or compatible computers running Microsoft Windows 3.1 or later.
NASA Technical Reports Server (NTRS)
1972-01-01
The IDAPS (Image Data Processing System) is a user-oriented, computer-based, language and control system, which provides a framework or standard for implementing image data processing applications, simplifies set-up of image processing runs so that the system may be used without a working knowledge of computer programming or operation, streamlines operation of the image processing facility, and allows multiple applications to be run in sequence without operator interaction. The control system loads the operators, interprets the input, constructs the necessary parameters for each application, and calls the application. The overlay feature of the IBSYS loader (IBLDR) provides the means of running multiple operators which would otherwise overflow core storage.
Identification of Program Signatures from Cloud Computing System Telemetry Data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nichols, Nicole M.; Greaves, Mark T.; Smith, William P.
Malicious cloud computing activity can take many forms, including running unauthorized programs in a virtual environment. Detection of these malicious activities while preserving the privacy of the user is an important research challenge. Prior work has shown the potential viability of using cloud service billing metrics as a mechanism for proxy identification of malicious programs. Previously this novel detection method had been evaluated only in a synthetic and isolated computational environment. In this paper we demonstrate the ability of billing metrics to identify programs in an active cloud computing environment, including multiple virtual machines running on the same hypervisor. The open source cloud computing platform OpenStack is used for private cloud management at Pacific Northwest National Laboratory. OpenStack provides a billing tool (Ceilometer) to collect system telemetry measurements. We identify four different programs running on four virtual machines under the same cloud user account. Programs were identified with up to 95% accuracy. This accuracy is dependent on the distinctiveness of telemetry measurements for the specific programs we tested. Future work will examine the scalability of this approach for a larger selection of programs to better understand the uniqueness needed to identify a program. Additionally, future work should address the separation of signatures when multiple programs are running on the same virtual machine.
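A small sketch of the classification step (our illustration, with synthetic billing-style features standing in for the Ceilometer telemetry used in the paper):

```python
# Classify which program generated a telemetry window from simple
# CPU / disk / network features.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(5)

# Synthetic telemetry: 4 programs x 200 windows x 3 features.
means = np.array([[0.8, 0.1, 0.1],    # CPU-bound program
                  [0.2, 0.7, 0.1],    # disk-bound program
                  [0.2, 0.1, 0.7],    # network-bound program
                  [0.4, 0.4, 0.2]])   # mixed workload
X = np.vstack([m + 0.1 * rng.standard_normal((200, 3)) for m in means])
y = np.repeat(np.arange(4), 200)

Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(Xtr, ytr)
print("accuracy:", clf.score(Xte, yte))
```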
Worst case estimation of homology design by convex analysis
NASA Technical Reports Server (NTRS)
Yoshikawa, N.; Elishakoff, Isaac; Nakagiri, S.
1998-01-01
The methodology of homology design is investigated for the optimum design of advanced structures for which the achievement of delicate tasks with the aid of an active control system is demanded. The proposed formulation of homology design, based on finite element sensitivity analysis, necessarily requires the specification of external loadings. A formulation to evaluate the worst case for homology design caused by uncertain fluctuation of loadings is presented by means of the convex model of uncertainty, in which uncertainty variables are assigned to discretized nodal forces and are confined within a conceivable convex hull given as a hyperellipse. The worst case of the distortion from the objective homologous deformation is estimated by the Lagrange multiplier method, searching for the point that maximizes the error index on the boundary of the convex hull. The validity of the proposed method is demonstrated in a numerical example using an eleven-bar truss structure.
QUADRATIC SERENDIPITY FINITE ELEMENTS ON POLYGONS USING GENERALIZED BARYCENTRIC COORDINATES.
Rand, Alexander; Gillette, Andrew; Bajaj, Chandrajit
2014-01-01
We introduce a finite element construction for use on the class of convex, planar polygons and show it obtains a quadratic error convergence estimate. On a convex n-gon, our construction produces 2n basis functions, associated in a Lagrange-like fashion to each vertex and each edge midpoint, by transforming and combining a set of n(n + 1)/2 basis functions known to obtain quadratic convergence. The technique broadens the scope of the so-called 'serendipity' elements, previously studied only for quadrilateral and regular hexahedral meshes, by employing the theory of generalized barycentric coordinates. Uniform a priori error estimates are established over the class of convex quadrilaterals with bounded aspect ratio as well as over the class of convex planar polygons satisfying additional shape regularity conditions to exclude large interior angles and short edges. Numerical evidence is provided on a trapezoidal quadrilateral mesh, previously not amenable to serendipity constructions, and applications to adaptive meshing are discussed.
Transient disturbance growth in flows over convex surfaces
NASA Astrophysics Data System (ADS)
Karp, Michael; Hack, M. J. Philipp
2017-11-01
Flows over curved surfaces occur in a wide range of applications including airfoils, compressor and turbine vanes as well as aerial, naval and ground vehicles. In most of these applications the surface has convex curvature, while concave surfaces are less common. Since monotonic boundary-layer flows over convex surfaces are exponentially stable, they have received considerably less attention than flows over concave walls which are destabilized by centrifugal forces. Non-modal mechanisms may nonetheless enable significant disturbance growth which can make the flow susceptible to secondary instabilities. A parametric investigation of the transient growth and secondary instability of flows over convex surfaces is performed. The specific conditions yielding the maximal transient growth and strongest instability are identified. The effect of wall-normal and spanwise inflection points on the instability process is discussed. Finally, the role and significance of additional parameters, such as the geometry and pressure gradient, is analyzed.
Anomalous dynamics triggered by a non-convex equation of state in relativistic flows
NASA Astrophysics Data System (ADS)
Ibáñez, J. M.; Marquina, A.; Serna, S.; Aloy, M. A.
2018-05-01
The non-monotonicity of the local speed of sound in dense matter at baryon number densities much higher than the nuclear saturation density (n₀ ≈ 0.16 fm⁻³) suggests the possible existence of a non-convex thermodynamics which will lead to a non-convex dynamics. Here, we explore the rich and complex dynamics that an equation of state (EoS) with non-convex regions in the pressure-density plane may develop as a result of genuinely relativistic effects, without a classical counterpart. To this end, we have introduced a phenomenological EoS, the parameters of which can be restricted owing to causality and thermodynamic stability constraints. This EoS can be regarded as a toy model with which we may mimic realistic (and far more complex) EoSs of practical use in the realm of relativistic hydrodynamics.
Providing Assistive Technology Applications as a Service Through Cloud Computing.
Mulfari, Davide; Celesti, Antonio; Villari, Massimo; Puliafito, Antonio
2015-01-01
Users with disabilities interact with Personal Computers (PCs) using Assistive Technology (AT) software solutions. Such applications run on the PC that a person with a disability commonly uses. However, the configuration of AT applications is not trivial at all, especially whenever the user needs to work on a PC that does not allow him/her to rely on his/her AT tools (e.g., at work, at university, in an Internet point). In this paper, we discuss how cloud computing provides a valid technological solution to enhance such a scenario. With the emergence of cloud computing, many applications are executed on top of virtual machines (VMs). Virtualization allows us to achieve a software implementation of a real computer able to execute a standard operating system and any kind of application. In this paper we propose to build personalized VMs running AT programs and settings. By using remote desktop technology, our solution enables users to control their customized virtual desktop environment by means of an HTML5-based web interface running on any computer equipped with a browser, wherever they are.
Signal processing using sparse derivatives with applications to chromatograms and ECG
NASA Astrophysics Data System (ADS)
Ning, Xiaoran
In this thesis, we investigate sparsity in the derivative domain. In particular, we focus on signals that possess up to Mth (M > 0) order sparse derivatives. Effort is devoted to formulating proper penalty functions and optimization problems that capture properties related to sparse derivatives, and to searching for fast, computationally efficient solvers. The effectiveness of these algorithms is demonstrated in two real-world applications. In the first application, we provide an algorithm which jointly addresses the problems of chromatogram baseline correction and noise reduction. The series of chromatogram peaks is modeled as sparse with sparse derivatives, and the baseline is modeled as a low-pass signal. A convex optimization problem is formulated so as to encapsulate these non-parametric models. To account for the positivity of chromatogram peaks, an asymmetric penalty function is utilized alongside symmetric penalty functions. A robust, computationally efficient, iterative algorithm is developed that is guaranteed to converge to the unique optimal solution. The approach, termed Baseline Estimation And Denoising with Sparsity (BEADS), is evaluated and compared with two state-of-the-art methods using both simulated and real chromatogram data. Promising results are obtained. In the second application, a novel electrocardiography (ECG) enhancement algorithm is designed, also based on sparse derivatives. In real medical environments, ECG signals are often contaminated by various kinds of noise or artifacts, for example, morphological changes due to motion artifact or non-stationary noise due to muscular contraction (EMG). Some of these contaminations severely affect the usefulness of ECG signals, especially when computer-aided algorithms are utilized. By solving the proposed convex l1 optimization problem, artifacts are reduced by modeling the clean ECG signal as a sum of two signals whose second- and third-order derivatives (differences) are sparse, respectively. Finally, the algorithm is applied to a QRS detection system and validated using the MIT-BIH Arrhythmia database (109452 annotations), resulting in a sensitivity of Se = 99.87% and a positive prediction of +P = 99.88%.
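The core convex formulation is easy to sketch; a minimal example penalizing sparse second differences with CVXPY (our illustration of the idea, not the thesis' BEADS or ECG algorithm):

```python
# Denoise a signal whose 2nd-order derivative is sparse via an l1 penalty
# on second differences, solved as a convex program.
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(6)
n = 200
t = np.linspace(0, 1, n)
# Piecewise-linear clean signal => sparse second differences.
clean = np.piecewise(t, [t < 0.3, (t >= 0.3) & (t < 0.7), t >= 0.7],
                     [lambda t: t, lambda t: 0.6 - t, lambda t: 2 * t - 1.5])
y = clean + 0.05 * rng.standard_normal(n)

# Second-difference operator D2, of shape (n-2, n).
D2 = np.diff(np.eye(n), 2, axis=0)

x = cp.Variable(n)
lam = 1.0
prob = cp.Problem(cp.Minimize(cp.sum_squares(y - x) + lam * cp.norm1(D2 @ x)))
prob.solve()
print("RMSE noisy   :", np.sqrt(np.mean((y - clean) ** 2)))
print("RMSE denoised:", np.sqrt(np.mean((x.value - clean) ** 2)))
```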
Enhanced Fuel-Optimal Trajectory-Generation Algorithm for Planetary Pinpoint Landing
NASA Technical Reports Server (NTRS)
Acikmese, Behcet; Blackmore, James C.; Scharf, Daniel P.
2011-01-01
An enhanced algorithm is developed that builds on a previous innovation of fuel-optimal powered-descent guidance (PDG) for planetary pinpoint landing. The PDG problem is to compute constrained, fuel-optimal trajectories to land a craft at a prescribed target on a planetary surface, starting from a parachute cut-off point and using a throttleable descent engine. The previous innovation showed that the minimal-fuel PDG problem can be posed as a convex optimization problem, in particular, as a Second-Order Cone Program, which can be solved to global optimality with deterministic convergence properties, and hence is a candidate for onboard implementation. To increase the speed and robustness of this convex PDG algorithm for possible onboard implementation, the following enhancements are incorporated: 1) fast detection of infeasibility (i.e., control authority is not sufficient for soft landing) for subsequent fault response; 2) the use of a piecewise-linear control parameterization, providing smooth solution trajectories and increasing computational efficiency; 3) an enhanced line-search algorithm for optimal time-of-flight, providing quicker convergence and bounding the number of path-planning iterations needed; 4) an additional constraint that analytically guarantees inter-sample satisfaction of glide-slope and non-sub-surface flight constraints, allowing larger discretizations and, hence, faster optimization; and 5) explicit incorporation of the Mars rotation rate into the trajectory computation for improved targeting accuracy. These enhancements allow faster convergence to the fuel-optimal solution and, more importantly, remove the need for a "human-in-the-loop," as constraints will be satisfied over the entire path-planning interval independent of step size (as opposed to just at the discrete time points) and infeasible initial conditions are immediately detected. Finally, while the PDG stage typically lasts only a few minutes, ignoring the rotation rate of Mars can introduce tens of meters of error. By incorporating it, the enhanced PDG algorithm becomes capable of pinpoint targeting.
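As a rough illustration of how such a minimum-fuel descent problem can be posed as a second-order cone program, the sketch below discretizes a 3-DoF double integrator with a thrust-magnitude bound and a glide-slope cone, using cvxpy. It omits mass depletion, the nonconvex lower thrust bound and its relaxation, and the time-of-flight search; all numbers are made up, not mission values.

```python
# Hedged SOCP sketch of a minimum-fuel powered descent (toy parameters).
import numpy as np
import cvxpy as cp

N, dt, m = 60, 1.0, 1500.0
g = np.array([0.0, 0.0, -3.71])          # Mars-like gravity, m/s^2
r = cp.Variable((N + 1, 3))              # position (z is altitude)
v = cp.Variable((N + 1, 3))              # velocity
T = cp.Variable((N, 3))                  # thrust vector per step
Tmax, tan_gs = 12000.0, np.tan(np.radians(60.0))

cons = [r[0] == [400, 200, 1500], v[0] == [-30, 10, -60],
        r[N] == 0, v[N] == 0]
for k in range(N):
    a = T[k] / m + g
    cons += [v[k + 1] == v[k] + dt * a,
             r[k + 1] == r[k] + dt * v[k] + 0.5 * dt ** 2 * a,
             cp.norm(T[k]) <= Tmax,                   # SOC thrust bound
             cp.norm(r[k][:2]) <= tan_gs * r[k][2]]   # glide-slope cone
# total thrust magnitude stands in for propellant use
prob = cp.Problem(cp.Minimize(cp.sum([cp.norm(T[k]) for k in range(N)])), cons)
prob.solve()
print("status:", prob.status, " fuel proxy:", prob.value)
```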
MATH77 - A LIBRARY OF MATHEMATICAL SUBPROGRAMS FOR FORTRAN 77, RELEASE 4.0
NASA Technical Reports Server (NTRS)
Lawson, C. L.
1994-01-01
MATH77 is a high quality library of ANSI FORTRAN 77 subprograms implementing contemporary algorithms for the basic computational processes of science and engineering. The portability of MATH77 meets the needs of present-day scientists and engineers who typically use a variety of computing environments. Release 4.0 of MATH77 contains 454 user-callable and 136 lower-level subprograms. Usage of the user-callable subprograms is described in 69 sections of the 416-page users' manual. The topics covered by MATH77 are indicated by the following list of chapter titles in the users' manual: Mathematical Functions, Pseudo-random Number Generation, Linear Systems of Equations and Linear Least Squares, Matrix Eigenvalues and Eigenvectors, Matrix Vector Utilities, Nonlinear Equation Solving, Curve Fitting, Table Look-Up and Interpolation, Definite Integrals (Quadrature), Ordinary Differential Equations, Minimization, Polynomial Rootfinding, Finite Fourier Transforms, Special Arithmetic, Sorting, Library Utilities, Character-based Graphics, and Statistics. Besides subprograms that are adaptations of public domain software, MATH77 contains a number of unique packages developed by the authors of MATH77. Instances of the latter type include (1) adaptive quadrature, allowing for exceptional generality in multidimensional cases, (2) the ordinary differential equations solver used in spacecraft trajectory computation for JPL missions, (3) univariate and multivariate table look-up and interpolation, allowing for "ragged" tables and providing error estimates, and (4) univariate and multivariate derivative-propagation arithmetic. MATH77 release 4.0 is a subroutine library which has been carefully designed to be usable on any computer system that supports the full ANSI standard FORTRAN 77 language. It has been successfully implemented on a CRAY Y/MP computer running UNICOS, a UNISYS 1100 computer running EXEC 8, a DEC VAX series computer running VMS, a Sun4 series computer running SunOS, a Hewlett-Packard 720 computer running HP-UX, a Macintosh computer running MacOS, and an IBM PC compatible computer running MS-DOS. Accompanying the library is a set of 196 "demo" drivers that exercise all of the user-callable subprograms. The FORTRAN source code for MATH77 comprises 109K lines of code in 375 files with a total size of 4.5Mb. The demo drivers comprise 11K lines of code totaling 418Kb. Forty-four percent of the lines of the library code and 29% of those in the demo code are comment lines. The standard distribution medium for MATH77 is a 0.25-inch streaming magnetic tape cartridge in UNIX tar format. It is also available on a 9-track 1600 BPI magnetic tape in VAX BACKUP format and on a TK50 tape cartridge in VAX BACKUP format. An electronic copy of the documentation is included on the distribution media. Previous releases of MATH77 have been used over a number of years in a variety of JPL applications. MATH77 Release 4.0 was completed in 1992. MATH77 is a copyrighted work with all copyright vested in NASA.
Computer Simulation of Great Lakes-St. Lawrence Seaway Icebreaker Requirements.
1980-01-01
[Table-of-contents fragment from the report: entries 6.22c-6.24c list the results of simulation Runs No. 1-3 for the Taconite and Oil Can task commands, together with the predicted icebreaker fleet by home port and period for each run.]
Display-wide influences on figure-ground perception: the case of symmetry.
Mojica, Andrew J; Peterson, Mary A
2014-05-01
Past research has demonstrated that convex regions are increasingly likely to be perceived as figures as the number of alternating convex and concave regions in test displays increases. This region-number effect depends on both a small preexisting preference for convex over concave objects and the presence of scene characteristics (i.e., uniform fill) that allow the integration of the concave regions into a background object/surface. These factors work together to enable the percept of convex objects in front of a background. We investigated whether region-number effects generalize to another property, symmetry, whose effectiveness as a figure property has been debated. Observers reported which regions they perceived as figures in black-and-white displays with alternating symmetric/asymmetric regions. In Experiments 1 and 2, the displays had articulated outer borders that preserved the symmetry/asymmetry of the outermost regions. Region-number effects were not observed, although symmetric regions were perceived as figures more often than chance. We hypothesized that the articulated outer borders prevented fitting a background interpretation to the asymmetric regions. In Experiment 3, we used straight-edge framelike outer borders and observed region-number effects for symmetry equivalent to those observed for convexity. These results (1) show that display-wide information affects figure assignment at a border, (2) extend the evidence indicating that the ability to fit background as well as foreground interpretations is critical in figure assignment, (3) reveal that symmetry and convexity are equally effective figure cues, and (4) demonstrate that symmetry serves as a figural property only when it is close to fixation.
Automated Laser Cutting In Three Dimensions
NASA Technical Reports Server (NTRS)
Bird, Lisa T.; Yvanovich, Mark A.; Angell, Terry R.; Bishop, Patricia J.; Dai, Weimin; Dobbs, Robert D.; He, Mingli; Minardi, Antonio; Shelton, Bret A.
1995-01-01
Computer-controlled machine-tool system uses laser beam assisted by directed flow of air to cut refractory materials into complex three-dimensional shapes. Velocity, position, and angle of cut varied. In original application, materials in question were thermally insulating thick blankets and tiles used on space shuttle. System shapes tile to concave or convex contours and cuts beveled edges on blanket, without cutting through outer layer of quartz fabric part of blanket. For safety, system entirely enclosed to prevent escape of laser energy. No dust generated during cutting operation - all material vaporized; larger solid chips dislodged from workpiece easily removed later.
Random search optimization based on genetic algorithm and discriminant function
NASA Technical Reports Server (NTRS)
Kiciman, M. O.; Akgul, M.; Erarslanoglu, G.
1990-01-01
The general problem of optimization with arbitrary merit and constraint functions, which could be convex, concave, monotonic, or non-monotonic, is treated using stochastic methods. To improve the efficiency of the random search methods, a genetic algorithm was utilized for the search phase and a discriminant function for the constraint-control phase. The validity of the technique is demonstrated by comparing the results to published test problem results. Numerical experimentation indicated that for cases where a quick near-optimum solution is desired, a general, user-friendly optimization code can be developed without serious penalties in either total computer time or accuracy.
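A compact sketch of the flavor of method described, under stated assumptions: a genetic algorithm drives the random search, while a simple quadratic penalty stands in for the paper's discriminant-function constraint control. The merit function, population size, and operators below are illustrative, not the authors' choices.

```python
# Toy GA-driven random search with penalty-based constraint handling.
import numpy as np

rng = np.random.default_rng(1)

def merit(x):                       # arbitrary non-convex merit function
    return np.sum(x ** 2 - 10 * np.cos(2 * np.pi * x), axis=-1)

def penalty(x):                     # stand-in for constraint control
    return 100.0 * np.maximum(0.0, np.sum(x, axis=-1) - 1.0) ** 2

pop = rng.uniform(-5, 5, size=(40, 3))              # initial population
for gen in range(200):
    fit = merit(pop) + penalty(pop)
    parents = pop[np.argsort(fit)[:20]]             # selection: best half
    pairs = rng.integers(0, 20, size=(40, 2))
    alpha = rng.random((40, 1))                     # blend crossover
    pop = alpha * parents[pairs[:, 0]] + (1 - alpha) * parents[pairs[:, 1]]
    pop += rng.normal(0, 0.1, pop.shape)            # mutation
best = pop[np.argmin(merit(pop) + penalty(pop))]
print("best point found:", best)
```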
Weighted cubic and biharmonic splines
NASA Astrophysics Data System (ADS)
Kvasov, Boris; Kim, Tae-Wan
2017-01-01
In this paper we discuss the design of algorithms for interpolating discrete data by using weighted cubic and biharmonic splines in such a way that the monotonicity and convexity of the data are preserved. We formulate the problem as a differential multipoint boundary value problem and consider its finite-difference approximation. Two algorithms for the automatic selection of shape-control parameters (weights) are presented. For weighted biharmonic splines the resulting system of linear equations can be efficiently solved by combining Gaussian elimination with the successive over-relaxation method or with finite-difference schemes in fractional steps. We consider basic computational aspects and illustrate the main features of this original approach.
The rid-redundant procedure in C-Prolog
NASA Technical Reports Server (NTRS)
Chen, Huo-Yan; Wah, Benjamin W.
1987-01-01
C-Prolog can conveniently be used for logical inference on knowledge bases. However, as with many search methods using backward chaining, a large amount of redundant computation may be produced in recursive calls. To overcome this problem, the 'rid-redundant' procedure was designed to eliminate all redundant computations when running multi-recursive procedures. Experimental results obtained for C-Prolog on the VAX 11/780 computer show an order-of-magnitude improvement in both running time and solvable problem size.
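The idea of ridding redundant computation in multi-recursive procedures is essentially memoization. The Python sketch below is an analogy, not C-Prolog code: caching already-derived results collapses the exponentially many recursive calls of a naive evaluation to a linear number. The recurrence and names are illustrative.

```python
# Memoization as an analogy for the rid-redundant procedure.
from functools import lru_cache

calls = 0

@lru_cache(maxsize=None)            # table of already-derived results
def t(n):
    global calls
    calls += 1
    if n < 2:
        return 1
    return t(n - 1) + 2 * t(n - 2)  # multi-recursive rule

print(t(30), "computed with", calls, "calls")
# With the cache: 31 calls. Without it, the same definition would make
# exponentially many redundant recursive calls.
```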
An Upgrade of the Aeroheating Software ''MINIVER''
NASA Technical Reports Server (NTRS)
Louderback, Pierce
2013-01-01
Detailed computational modeling: CFD is often used to create and execute computational domains. Complexity increases when moving from 2D to 3D geometries. Computational time increases as finer grids are used (for accuracy). A strong tool, but it takes time to set up and run. MINIVER: Uses theoretical and empirical correlations. Orders of magnitude faster to set up and run. Not as accurate as CFD, but gives reasonable estimates. MINIVER's drawbacks: a rigid command-line interface; lackluster, unorganized documentation; no central control, so multiple versions exist and have diverged.
A Functional Description of the Geophysical Data Acquisition System
1990-08-10
[Fragmentary scanned text. Recoverable details: the sample rate must be not less than 50 SPS nor greater than 250 SPS; Section 3.0 (Sensors/Transducers) notes that most research supported by GDAS has primarily involved two sensor types; the SRUN signal from the computer, a pulse train present while the computer is running, feeds a retriggerable one-shot multivibrator whose output drives the RUN lamp on the front panel.]
Network support for system initiated checkpoints
Chen, Dong; Heidelberger, Philip
2013-01-29
A system, method and computer program product for supporting system initiated checkpoints in parallel computing systems. The system and method generates selective control signals to perform checkpointing of system related data in presence of messaging activity associated with a user application running at the node. The checkpointing is initiated by the system such that checkpoint data of a plurality of network nodes may be obtained even in the presence of user applications running on highly parallel computers that include ongoing user messaging activity.
Convergence properties of simple genetic algorithms
NASA Technical Reports Server (NTRS)
Bethke, A. D.; Zeigler, B. P.; Strauss, D. M.
1974-01-01
The essential parameters determining the behaviour of genetic algorithms were investigated. Computer runs were made while systematically varying the parameter values. Results based on the progress curves obtained from these runs are presented along with results based on the variability of the population as the run progresses.
Modeling Subsurface Reactive Flows Using Leadership-Class Computing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mills, Richard T; Hammond, Glenn; Lichtner, Peter
2009-01-01
We describe our experiences running PFLOTRAN - a code for simulation of coupled hydro-thermal-chemical processes in variably saturated, non-isothermal, porous media - on leadership-class supercomputers, including initial experiences running on the petaflop incarnation of Jaguar, the Cray XT5 at the National Center for Computational Sciences at Oak Ridge National Laboratory. PFLOTRAN utilizes fully implicit time-stepping and is built on top of the Portable, Extensible Toolkit for Scientific Computation (PETSc). We discuss some of the hurdles to 'at scale' performance with PFLOTRAN and the progress we have made in overcoming them on leadership-class computer architectures.
Williams, Paul T
2012-01-01
Current physical activity recommendations assume that different activities can be exchanged to produce the same weight-control benefits so long as total energy expended remains the same (exchangeability premise). To this end, they recommend calculating energy expenditure as the product of the time spent performing each activity and the activity's metabolic equivalents (MET), which may be summed to achieve target levels. The validity of the exchangeability premise was assessed using data from the National Runners' Health Study. Physical activity dose was compared to body mass index (BMI) and body circumferences in 33,374 runners who reported usual distance run and pace, and usual times spent running and other exercises per week. MET hours per day (METhr/d) from running was computed from: a) time and intensity, and b) reported distance run (1.02 MET·hours per km). When computed from time and intensity, the declines (slope ± SE) per METhr/d were significantly greater (P < 10^-15) for running than non-running exercise for BMI (slopes ± SE, male: -0.12 ± 0.00 vs. 0.00 ± 0.00; female: -0.12 ± 0.00 vs. -0.01 ± 0.01 kg/m^2 per METhr/d) and waist circumference (male: -0.28 ± 0.01 vs. -0.07 ± 0.01; female: -0.31 ± 0.01 vs. -0.05 ± 0.01 cm per METhr/d). Reported METhr/d of running was 38% to 43% greater when calculated from time and intensity than from distance. Moreover, the declines per METhr/d run were significantly greater when estimated from reported distance for BMI (males: -0.29 ± 0.01; females: -0.27 ± 0.01 kg/m^2 per METhr/d) and waist circumference (males: -0.67 ± 0.02; females: -0.69 ± 0.02 cm per METhr/d) than when computed from time and intensity (cited above). The exchangeability premise was not supported for running vs. non-running exercise. Moreover, distance-based running prescriptions may provide better weight control than time-based prescriptions for running or other activities. Additional longitudinal studies and randomized clinical trials are required to verify these results prospectively.
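A small worked example of the two dose calculations compared above. The 1.02 MET·hours-per-km conversion is taken from the abstract; the daily running time, distance, and MET intensity value are illustrative assumptions.

```python
# Two ways of computing a runner's daily dose, as contrasted above.
time_h = 0.8                   # 48 minutes of running per day (assumed)
met_intensity = 9.8            # approximate MET value for ~10 km/h running
dist_km = 8.0                  # distance covered in that time (assumed)

met_hr_time = time_h * met_intensity    # a) time x intensity
met_hr_dist = 1.02 * dist_km            # b) distance-based conversion
print(f"time x intensity: {met_hr_time:.2f} METhr/d")
print(f"distance-based:   {met_hr_dist:.2f} METhr/d")
# In the study, self-reported time x intensity came out 38% to 43%
# greater than the distance-based estimate for the same runners.
```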
A PICKSC Science Gateway for enabling the common plasma physicist to run kinetic software
NASA Astrophysics Data System (ADS)
Hu, Q.; Winjum, B. J.; Zonca, A.; Youn, C.; Tsung, F. S.; Mori, W. B.
2017-10-01
Computer simulations offer tremendous opportunities for studying plasmas, ranging from simulations for students that illuminate fundamental educational concepts to research-level simulations that advance scientific knowledge. Nevertheless, there is a significant hurdle to using simulation tools. Users must navigate codes and software libraries, determine how to wrangle output into meaningful plots, and oftentimes confront a significant cyberinfrastructure with powerful computational resources. Science gateways offer a Web-based environment to run simulations without needing to learn or manage the underlying software and computing cyberinfrastructure. We discuss our progress on creating a Science Gateway for the Particle-in-Cell and Kinetic Simulation Software Center that enables users to easily run and analyze kinetic simulations with our software. We envision that this technology could benefit a wide range of plasma physicists, both in the use of our simulation tools as well as in its adaptation for running other plasma simulation software. Supported by NSF under Grant ACI-1339893 and by the UCLA Institute for Digital Research and Education.
Creating a Parallel Version of VisIt for Microsoft Windows
DOE Office of Scientific and Technical Information (OSTI.GOV)
Whitlock, B J; Biagas, K S; Rawson, P L
2011-12-07
VisIt is a popular, free interactive parallel visualization and analysis tool for scientific data. Users can quickly generate visualizations from their data, animate them through time, manipulate them, and save the resulting images or movies for presentations. VisIt was designed from the ground up to work on many scales of computers from modest desktops up to massively parallel clusters. VisIt comprises a set of cooperating programs. All programs can be run locally or in client/server mode in which some run locally and some run remotely on compute clusters. The VisIt program most able to harness today's computing power is the VisIt compute engine. The compute engine is responsible for reading simulation data from disk, processing it, and sending results or images back to the VisIt viewer program. In a parallel environment, the compute engine runs several processes, coordinating using the Message Passing Interface (MPI) library. Each MPI process reads some subset of the scientific data and filters the data in various ways to create useful visualizations. By using MPI, VisIt has been able to scale well into the thousands of processors on large computers such as dawn and graph at LLNL. The advent of multicore CPUs has made parallelism the 'new' way to achieve increasing performance. With today's computers having at least 2 cores and in many cases up to 8 and beyond, it is more important than ever to deploy parallel software that can use that computing power not only on clusters but also on the desktop. We have created a parallel version of VisIt for Windows that uses Microsoft's MPI implementation (MSMPI) to process data in parallel on the Windows desktop as well as on a Windows HPC cluster running Microsoft Windows Server 2008. Initial desktop parallel support for Windows was deployed in VisIt 2.4.0. Windows HPC cluster support has been completed and will appear in the VisIt 2.5.0 release. We plan to continue supporting parallel VisIt on Windows so our users will be able to take full advantage of their multicore resources.
NASA Astrophysics Data System (ADS)
Varela Rodriguez, F.
2011-12-01
The control system of each of the four major Experiments at the CERN Large Hadron Collider (LHC) is distributed over up to 160 computers running either Linux or Microsoft Windows. A quick response to abnormal situations of the computer infrastructure is crucial to maximize the physics usage. For this reason, a tool was developed to supervise such a large system, identify errors, and troubleshoot. Although monitoring of the performance of the Linux computers and their processes has been available since the first versions of the tool, only recently has the software package been extended to provide similar functionality for the nodes running Microsoft Windows, as this platform is the most commonly used in the LHC detector control systems. In this paper, the architecture and the functionality of the Windows Management Instrumentation (WMI) client developed to provide centralized monitoring of the nodes running different flavours of the Microsoft platform, as well as the interface to the SCADA software of the control systems, are presented. The tool is currently being commissioned by the Experiments, and it has already proven to be very efficient in optimizing the running systems and in detecting misbehaving processes or nodes.
Computational steering of GEM based detector simulations
NASA Astrophysics Data System (ADS)
Sheharyar, Ali; Bouhali, Othmane
2017-10-01
Gas based detector R&D relies heavily on full simulation of detectors and their optimization before final prototypes can be built and tested. These simulations, in particular those with complex scenarios such as high detector voltages or gases with larger gains, are computationally intensive and may take several days or weeks to complete. Such long-running simulations usually run on high-performance computers in batch mode. If the results reveal unexpected behavior, the simulation might be rerun with different parameters. However, the simulations (or jobs) may have to wait in a queue until they get a chance to run again, because the supercomputer is a shared resource that maintains a queue of other user programs as well and executes them as time and priorities permit. This can result in inefficient resource utilization and an increase in the turnaround time for the scientific experiment. To overcome this issue, monitoring the behavior of a simulation while it is running (live) is essential. In this work, we employ the computational steering technique by coupling the detector simulations with a visualization package named VisIt to enable the exploration of the live data as it is produced by the simulation.
CERN openlab: Engaging industry for innovation in the LHC Run 3-4 R&D programme
NASA Astrophysics Data System (ADS)
Girone, M.; Purcell, A.; Di Meglio, A.; Rademakers, F.; Gunne, K.; Pachou, M.; Pavlou, S.
2017-10-01
LHC Run 3 and Run 4 represent an unprecedented challenge for HEP computing in terms of both data volume and complexity. New approaches are needed for how data is collected and filtered, processed, moved, stored and analysed if these challenges are to be met with a realistic budget. To develop innovative techniques we are fostering relationships with industry leaders. CERN openlab is a unique resource for public-private partnership between CERN and leading Information and Communication Technology (ICT) companies. Its mission is to accelerate the development of cutting-edge solutions to be used by the worldwide HEP community. In 2015, CERN openlab started its phase V with a strong focus on tackling the upcoming LHC challenges. Several R&D programs are ongoing in the areas of data acquisition, networks and connectivity, data storage architectures, computing provisioning, computing platforms and code optimisation, and data analytics. This paper gives an overview of the various innovative technologies that are currently being explored by CERN openlab V and discusses the long-term strategies that are pursued by the LHC communities with the help of industry in closing the technological gap in processing and storage needs expected in Run 3 and Run 4.
NASA Technical Reports Server (NTRS)
Yang, Guowei; Pasareanu, Corina S.; Khurshid, Sarfraz
2012-01-01
This paper introduces memoized symbolic execution (Memoise), a novel approach for more efficient application of forward symbolic execution, which is a well-studied technique for systematic exploration of program behaviors based on bounded execution paths. Our key insight is that application of symbolic execution often requires several successive runs of the technique on largely similar underlying problems, e.g., running it once to check a program to find a bug, fixing the bug, and running it again to check the modified program. Memoise introduces a trie-based data structure that stores the key elements of a run of symbolic execution. Maintenance of the trie during successive runs allows re-use of previously computed results of symbolic execution without the need for re-computing them as is traditionally done. Experiments using our prototype embodiment of Memoise show the benefits it holds in various standard scenarios of using symbolic execution, e.g., with iterative deepening of exploration depth, to perform regression analysis, or to enhance coverage.
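A toy version of the trie idea, to make the mechanism concrete: nodes record sequences of branch decisions from earlier symbolic-execution runs, so a later run can test whether a path prefix was already fully explored. The class and method names are invented for illustration and are not Memoise's actual implementation.

```python
# Toy trie storing explored branch-decision sequences between runs.
class TrieNode:
    def __init__(self):
        self.children = {}      # branch decision -> child node
        self.explored = False   # subtree fully explored in a prior run?

class Trie:
    def __init__(self):
        self.root = TrieNode()

    def insert(self, path):
        """Record a sequence of branch decisions, e.g. ('b1T', 'b2F')."""
        node = self.root
        for decision in path:
            node = node.children.setdefault(decision, TrieNode())
        node.explored = True

    def skip(self, path):
        """True if this path was fully explored in a previous run."""
        node = self.root
        for decision in path:
            if decision not in node.children:
                return False
            node = node.children[decision]
        return node.explored

t = Trie()
t.insert(("b1T", "b2F"))                  # stored from a previous run
print(t.skip(("b1T", "b2F")))             # True: reuse prior result
print(t.skip(("b1T", "b2T")))             # False: must execute this path
```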
Simple, efficient allocation of modelling runs on heterogeneous clusters with MPI
Donato, David I.
2017-01-01
In scientific modelling and computation, the choice of an appropriate method for allocating tasks for parallel processing depends on the computational setting and on the nature of the computation. The allocation of independent but similar computational tasks, such as modelling runs or Monte Carlo trials, among the nodes of a heterogeneous computational cluster is a special case that has not been specifically evaluated previously. A simulation study shows that a method of on-demand (that is, worker-initiated) pulling from a bag of tasks in this case leads to reliably short makespans for computational jobs despite heterogeneity both within and between cluster nodes. A simple reference implementation in the C programming language with the Message Passing Interface (MPI) is provided.
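The worker-initiated pulling scheme can be sketched in a few lines. The version below uses Python's mpi4py rather than the paper's C reference implementation, and the task list and message tags are illustrative; the structure is the point: workers request work, and the coordinator hands out tasks until the bag is empty.

```python
# Bag-of-tasks with on-demand (worker-initiated) pulling via mpi4py.
# Run with e.g.: mpiexec -n 4 python bag_of_tasks.py
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()
TASK, STOP = 0, 1                      # message tags (illustrative)

if rank == 0:                          # coordinator holds the bag of tasks
    tasks = list(range(20))            # stand-in for modelling runs
    status = MPI.Status()
    stopped = 0
    while stopped < size - 1:
        comm.recv(source=MPI.ANY_SOURCE, status=status)   # worker asks
        if tasks:
            comm.send(tasks.pop(), dest=status.Get_source(), tag=TASK)
        else:
            comm.send(None, dest=status.Get_source(), tag=STOP)
            stopped += 1
else:                                  # workers pull tasks on demand
    status = MPI.Status()
    while True:
        comm.send(None, dest=0)        # request work
        task = comm.recv(source=0, status=status)
        if status.Get_tag() == STOP:
            break
        # ... run the modelling task here ...
        print(f"worker {rank} ran task {task}")
```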
Evaluating the Efficacy of the Cloud for Cluster Computation
NASA Technical Reports Server (NTRS)
Knight, David; Shams, Khawaja; Chang, George; Soderstrom, Tom
2012-01-01
Computing requirements vary by industry, and it follows that NASA and other research organizations have computing demands that fall outside the mainstream. While cloud computing made rapid inroads for tasks such as powering web applications, performance issues on highly distributed tasks hindered early adoption for scientific computation. One venture to address this problem is Nebula, NASA's homegrown cloud project tasked with delivering science-quality cloud computing resources. However, another industry development is Amazon's high-performance computing (HPC) instances on Elastic Cloud Compute (EC2) that promises improved performance for cluster computation. This paper presents results from a series of benchmarks run on Amazon EC2 and discusses the efficacy of current commercial cloud technology for running scientific applications across a cluster. In particular, a 240-core cluster of cloud instances achieved 2 TFLOPS on High-Performance Linpack (HPL) at 70% of theoretical computational performance. The cluster's local network also demonstrated sub-100 μs inter-process latency with sustained inter-node throughput in excess of 8 Gbps. Beyond HPL, a real-world Hadoop image processing task from NASA's Lunar Mapping and Modeling Project (LMMP) was run on a 29-instance cluster to process lunar and Martian surface images with sizes on the order of tens of gigapixels. These results demonstrate that while not a rival of dedicated supercomputing clusters, commercial cloud technology is now a feasible option for moderately demanding scientific workloads.
Compliant tactile sensor for generating a signal related to an applied force
NASA Technical Reports Server (NTRS)
Torres-Jara, Eduardo (Inventor)
2012-01-01
Tactile sensor. The sensor includes a compliant convex surface disposed above a sensor array, the sensor array adapted to respond to deformation of the convex surface to generate a signal related to an applied force vector.
Distributed Nash Equilibrium Seeking for Generalized Convex Games with Shared Constraints
NASA Astrophysics Data System (ADS)
Sun, Chao; Hu, Guoqiang
2018-05-01
In this paper, we deal with the problem of finding a Nash equilibrium for a generalized convex game. Each player is associated with a convex cost function and multiple shared constraints. Supposing that each player can exchange information with its neighbors via a connected undirected graph, the objective of this paper is to design a Nash equilibrium seeking law such that each agent minimizes its objective function in a distributed way. Consensus and singular perturbation theories are used to prove the stability of the system. A numerical example is given to show the effectiveness of the proposed algorithms.
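As a much-simplified illustration of equilibrium seeking (a toy, not the authors' consensus-based law with shared constraints): two players with quadratic costs repeatedly descend their own gradients, and for this strongly monotone game the iterates converge to the Nash equilibrium. All coefficients are invented.

```python
# Toy simultaneous gradient play converging to a Nash equilibrium.
import numpy as np

# cost_i(x) = 0.5*(x_i - a_i)^2 + b_i * x_i * x_j, with small coupling b_i
a, b = np.array([2.0, -1.0]), np.array([0.3, 0.2])
x = np.zeros(2)
for _ in range(500):
    grad = np.array([x[0] - a[0] + b[0] * x[1],    # player 1's own gradient
                     x[1] - a[1] + b[1] * x[0]])   # player 2's own gradient
    x -= 0.1 * grad                                # each descends its cost
print("Nash estimate:", x)
# closed-form check: solve the two stationarity conditions jointly
A = np.array([[1.0, b[0]], [b[1], 1.0]])
print("exact equilibrium:", np.linalg.solve(A, a))
```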
Convex Regression with Interpretable Sharp Partitions
Petersen, Ashley; Simon, Noah; Witten, Daniela
2016-01-01
We consider the problem of predicting an outcome variable on the basis of a small number of covariates, using an interpretable yet non-additive model. We propose convex regression with interpretable sharp partitions (CRISP) for this task. CRISP partitions the covariate space into blocks in a data-adaptive way, and fits a mean model within each block. Unlike other partitioning methods, CRISP is fit using a non-greedy approach by solving a convex optimization problem, resulting in low-variance fits. We explore the properties of CRISP, and evaluate its performance in a simulation study and on a housing price data set. PMID:27635120
Optshrink LR + S: accelerated fMRI reconstruction using non-convex optimal singular value shrinkage.
Aggarwal, Priya; Shrivastava, Parth; Kabra, Tanay; Gupta, Anubha
2017-03-01
This paper presents a new accelerated fMRI reconstruction method, namely, the OptShrink LR+S method, which reconstructs undersampled fMRI data using a linear combination of low-rank and sparse components. The low-rank component is estimated using a non-convex optimal singular value shrinkage algorithm, while the sparse component is estimated using convex l1 minimization. The performance of the proposed method is compared with existing state-of-the-art algorithms on a real fMRI dataset. The proposed OptShrink LR+S method yields good qualitative and quantitative results.
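A rough sketch of the low-rank-plus-sparse split is given below. For brevity, plain singular-value thresholding stands in for the paper's OptShrink shrinkage, and the alternating updates, thresholds, and test matrix are illustrative, so this shows only the decomposition pattern, not the published algorithm.

```python
# Low-rank + sparse split via alternating proximal updates (illustrative).
import numpy as np

def soft(x, t):                       # soft-threshold (l1 proximal step)
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def svt(X, t):                        # singular-value thresholding
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(soft(s, t)) @ Vt

rng = np.random.default_rng(0)
Y = rng.standard_normal((64, 32)) * 0.01                 # "sparse"/noise part
Y += np.outer(rng.standard_normal(64), rng.standard_normal(32))  # rank-1 part
L = np.zeros_like(Y)
S = np.zeros_like(Y)
for _ in range(50):                   # alternate the two proximal updates
    L = svt(Y - S, 0.5)
    S = soft(Y - L, 0.05)
print("rank(L) =", np.linalg.matrix_rank(L, tol=1e-3),
      " nnz(S) =", int((np.abs(S) > 1e-6).sum()))
```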
The role of spinal concave–convex biases in the progression of idiopathic scoliosis
Driscoll, Mark; Moreau, Alain; Villemure, Isabelle; Parent, Stefan
2009-01-01
Inadequate understanding of risk factors involved in the progression of idiopathic scoliosis restrains initial treatment to observation until the deformity shows signs of significant aggravation. The purpose of this analysis is to explore whether the concave–convex biases associated with scoliosis (local degeneration of the intervertebral discs, nucleus migration, and local increase in trabecular bone-mineral density of vertebral bodies) may be identified as progressive risk factors. Finite element models of a 26° right thoracic scoliotic spine were constructed based on experimental and clinical observations that included growth dynamics governed by mechanical stimulus. Stress distribution over the vertebral growth plates, progression of Cobb angles, and vertebral wedging were explored in models with and without the biases of concave–convex properties. The inclusion of the bias of concave–convex properties within the model both augmented the asymmetrical loading of the vertebral growth plates by up to 37% and further amplified the progression of Cobb angles and vertebral wedging by as much as 5.9° and 0.8°, respectively. Concave–convex biases are factors that influence the progression of scoliotic curves. Quantifying these parameters in a patient with scoliosis may further provide a better clinical assessment of the risk of progression. PMID:19130096
NASA Astrophysics Data System (ADS)
Torquato, Salvatore; Jiao, Yang
2012-07-01
We have recently devised organizing principles to obtain maximally dense packings of the Platonic and Archimedean solids and certain smoothly shaped convex nonspherical particles [Torquato and Jiao, Phys. Rev. E 81, 041310 (2010)]. Here we generalize them in order to guide one to ascertain the densest packings of other convex nonspherical particles as well as concave shapes. Our generalized organizing principles are explicitly stated as four distinct propositions. All of our organizing principles are applied to and tested against the most comprehensive set of both convex and concave particle shapes examined to date, including Catalan solids, prisms, antiprisms, cylinders, dimers of spheres, and various concave polyhedra. We demonstrate that all of the densest known packings associated with this wide spectrum of nonspherical particles are consistent with our propositions. Among other applications, our general organizing principles enable us to construct analytically the densest known packings of certain convex nonspherical particles, including spherocylinders, “lens-shaped” particles, square pyramids, and rhombic pyramids. Moreover, we show how to apply these principles to infer the high-density equilibrium crystalline phases of hard convex and concave particles. We also discuss the unique packing attributes of maximally random jammed packings of nonspherical particles.
Convex Formulations of Learning from Crowds
NASA Astrophysics Data System (ADS)
Kajino, Hiroshi; Kashima, Hisashi
The use of crowdsourcing services to collect large amounts of labeled data for machine learning has attracted considerable attention, since such services allow one to ask the general public to label data at very low cost through the Internet. The use of crowdsourcing has introduced a new challenge in machine learning, namely, coping with the low quality of crowd-generated data. There have been many recent attempts to address the quality problem of multiple labelers; however, there are two serious drawbacks in the existing approaches, namely, (i) non-convexity and (ii) task homogeneity. Most of the existing methods consider true labels as latent variables, which results in non-convex optimization problems. Also, the existing models assume only single homogeneous tasks, while in realistic situations, clients can offer multiple tasks to crowds and crowd workers can work on different tasks in parallel. In this paper, we propose a convex optimization formulation of learning from crowds by introducing personal models of individual crowd workers without estimating true labels. We further extend the proposed model to multi-task learning based on the resemblance between the proposed formulation and that of an existing multi-task learning model. We also devise efficient iterative methods for solving the convex optimization problems by exploiting conditional independence structures in multiple classifiers.
NASA Astrophysics Data System (ADS)
Pinson, Robin Marie
Mission proposals that land spacecraft on asteroids are becoming increasingly popular. However, in order to have a successful mission the spacecraft must reliably and softly land at the intended landing site with pinpoint precision. The problem under investigation is how to design a propellant (fuel) optimal powered descent trajectory that can be quickly computed onboard the spacecraft, without interaction from ground control. The goal is to autonomously design the optimal powered descent trajectory onboard the spacecraft immediately prior to the descent burn for use during the burn. Compared to a planetary powered landing problem, the challenges that arise from designing an asteroid powered descent trajectory include complicated nonlinear gravity fields, small rotating bodies, and low thrust vehicles. The nonlinear gravity fields cannot be represented by a constant gravity model nor a Newtonian model. The trajectory design algorithm needs to be robust and efficient to guarantee a designed trajectory and complete the calculations in a reasonable time frame. This research investigates the following questions: Can convex optimization be used to design the minimum propellant powered descent trajectory for a soft landing on an asteroid? Is this method robust and reliable to allow autonomy onboard the spacecraft without interaction from ground control? This research designed a convex optimization based method that rapidly generates the propellant optimal asteroid powered descent trajectory. The solution to the convex optimization problem is the thrust magnitude and direction, which designs and determines the trajectory. The propellant optimal problem was formulated as a second order cone program, a subset of convex optimization, through relaxation techniques by including a slack variable, change of variables, and incorporation of the successive solution method. Convex optimization solvers, especially second order cone programs, are robust, reliable, and are guaranteed to find the global minimum provided one exists. In addition, an outer optimization loop using Brent's method determines the optimal flight time corresponding to the minimum propellant usage over all flight times. Inclusion of additional trajectory constraints, solely vertical motion near the landing site and glide slope, were evaluated. Through a theoretical proof involving the Minimum Principle from Optimal Control Theory and the Karush-Kuhn-Tucker conditions it was shown that the relaxed problem is identical to the original problem at the minimum point. Therefore, the optimal solution of the relaxed problem is an optimal solution of the original problem, referred to as lossless convexification. A key finding is that this holds for all levels of gravity model fidelity. The designed thrust magnitude profiles were the bang-bang predicted by Optimal Control Theory. The first high fidelity gravity model employed was the 2x2 spherical harmonics model assuming a perfect triaxial ellipsoid and placement of the coordinate frame at the asteroid's center of mass and aligned with the semi-major axes. The spherical harmonics model is not valid inside the Brillouin sphere and this becomes relevant for irregularly shaped asteroids. Then, a higher fidelity model was implemented combining the 4x4 spherical harmonics gravity model with the interior spherical Bessel gravity model. All gravitational terms in the equations of motion are evaluated with the position vector from the previous iteration, creating the successive solution method. 
Methodology success was shown by applying the algorithm to three triaxial ellipsoidal asteroids with four different rotation speeds using the 2x2 gravity model. Finally, the algorithm was tested using the irregularly shaped asteroid, Castalia.
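The outer time-of-flight search can be illustrated independently of the inner convex solve. In the sketch below, scipy's Brent-method scalar minimizer plays the role of the outer loop described above; the inner SOCP is stubbed out by a smooth stand-in cost, so the shape and numbers are purely illustrative.

```python
# Outer Brent search over flight time, with the inner convex solve stubbed.
from scipy.optimize import minimize_scalar

def propellant_for_tof(tf):
    # Stand-in for "solve the relaxed SOCP with flight time tf and return
    # the propellant used"; this quadratic bowl mimics the typical shape.
    return 50.0 + 0.8 * (tf - 42.0) ** 2 / 42.0

res = minimize_scalar(propellant_for_tof, bracket=(20.0, 40.0, 80.0),
                      method="brent")
print(f"optimal flight time ~ {res.x:.1f} s, propellant proxy {res.fun:.1f}")
```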
Controlling Laboratory Processes From A Personal Computer
NASA Technical Reports Server (NTRS)
Will, H.; Mackin, M. A.
1991-01-01
Computer program provides natural-language process control from an IBM PC or compatible computer. Sets up a process-control system that either runs without an operator or is run by workers who have limited programming skills. Includes three smaller programs. Two of them, written in FORTRAN 77, record data and control research processes. The third program, written in Pascal, generates the FORTRAN subroutines used by the other two programs to identify user commands with device-driving routines written by the user. Also includes a set of input data allowing the user to define user commands to be executed by the computer. Requires a personal computer operating under MS-DOS with suitable hardware interfaces to all controlled devices. Also requires a FORTRAN 77 compiler and device drivers written by the user.
Supersonic Love waves in strong piezoelectrics of symmetry mm2
DOE Office of Scientific and Technical Information (OSTI.GOV)
Darinskii, A. N.; Weihnacht, M.
A study has been made of the Love wave propagation on piezoelectric substrates of symmetry mm2. It has been shown that under certain conditions the velocity of the Love wave exceeds that of shear horizontal (SH) bulk waves in the substrate. This occurs when the slowness curve of SH bulk waves in the substrate either has a concavity or is convex with nearly zero curvature. For such "supersonic" Love waves to appear, it is also required that the substrate as well as the layer be specially oriented and that their material constants fulfill a number of inequalities. Numerical computations have been carried out for a number of structures. The results of numerical computations have been compared with approximate analytical estimations. © 2001 American Institute of Physics.
Adaptive zooming in X-ray computed tomography.
Dabravolski, Andrei; Batenburg, Kees Joost; Sijbers, Jan
2014-01-01
In computed tomography (CT), the source-detector system commonly rotates around the object in a circular trajectory. Such a trajectory does not allow the detector to be exploited fully when scanning elongated objects. The aim is to increase the spatial resolution of the reconstructed image by optimal zooming during scanning. A new approach is proposed in which the full width of the detector is exploited for every projection angle. This approach is based on the use of prior information about the object's convex hull to move the source as close as possible to the object while avoiding truncation of the projections. Experiments show that the proposed approach can significantly improve reconstruction quality, producing reconstructions with smaller errors and revealing more details in the object. The proposed approach can lead to more accurate reconstructions and increased spatial resolution in the object compared to the conventional circular trajectory.
Solution for a bipartite Euclidean traveling-salesman problem in one dimension
NASA Astrophysics Data System (ADS)
Caracciolo, Sergio; Di Gioacchino, Andrea; Gherardi, Marco; Malatesta, Enrico M.
2018-05-01
The traveling-salesman problem is one of the most studied combinatorial optimization problems, because of the simplicity in its statement and the difficulty in its solution. We characterize the optimal cycle for every convex and increasing cost function when the points are thrown independently and with an identical probability distribution in a compact interval. We compute the average optimal cost for every number of points when the distance function is the square of the Euclidean distance. We also show that the average optimal cost is not a self-averaging quantity by explicitly computing the variance of its distribution in the thermodynamic limit. Moreover, we prove that the cost of the optimal cycle is not smaller than twice the cost of the optimal assignment of the same set of points. Interestingly, this bound is saturated in the thermodynamic limit.
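The assignment bound can be checked numerically for small instances. In one dimension with the squared-distance cost (convex and increasing in the distance), the optimal bipartite assignment matches sorted reds to sorted blues; the brute-force search below confirms on a random instance that the optimal alternating cycle costs at least twice that assignment. Instance size and seed are arbitrary.

```python
# Brute-force check: optimal cycle cost >= 2 x optimal assignment cost.
from itertools import permutations
import numpy as np

rng = np.random.default_rng(0)
n = 5
red, blue = np.sort(rng.random(n)), np.sort(rng.random(n))
assign = np.sum((red - blue) ** 2)   # sorted matching is optimal here

def cyc_cost(rp, bp):
    # cycle: red[rp[i]] -> blue[bp[i]] -> red[rp[i+1]] -> ... (closed)
    return sum((red[rp[i]] - blue[bp[i]]) ** 2 +
               (blue[bp[i]] - red[rp[(i + 1) % n]]) ** 2 for i in range(n))

# fix the starting red point (w.l.o.g.) and enumerate all other orders
best = min(cyc_cost((0,) + rp, bp)
           for rp in permutations(range(1, n))
           for bp in permutations(range(n)))
print(f"optimal cycle {best:.4f} >= 2 x assignment {2 * assign:.4f}")
```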
A physics-motivated Centroidal Voronoi Particle domain decomposition method
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fu, Lin, E-mail: lin.fu@tum.de; Hu, Xiangyu Y., E-mail: xiangyu.hu@tum.de; Adams, Nikolaus A., E-mail: nikolaus.adams@tum.de
2017-04-15
In this paper, we propose a novel domain decomposition method for large-scale simulations in continuum mechanics by merging the concepts of Centroidal Voronoi Tessellation (CVT) and Voronoi Particle dynamics (VP). The CVT is introduced to achieve a high-level compactness of the partitioning subdomains by the Lloyd algorithm which monotonically decreases the CVT energy. The number of computational elements between neighboring partitioning subdomains, which scales the communication effort for parallel simulations, is optimized implicitly as the generated partitioning subdomains are convex and simply connected with small aspect-ratios. Moreover, Voronoi Particle dynamics employing physical analogy with a tailored equation of state is developed, which relaxes the particle system towards the target partition with good load balance. Since the equilibrium is computed by an iterative approach, the partitioning subdomains exhibit locality and the incremental property. Numerical experiments reveal that the proposed Centroidal Voronoi Particle (CVP) based algorithm produces high-quality partitioning with high efficiency, independently of computational-element types. Thus it can be used for a wide range of applications in computational science and engineering.
WinHPC System Programming | High-Performance Computing | NREL
[Web-page fragment. Recoverable content: instructions for building and running an MPI program on the WinHPC system, noting the locations of the MS-MPI header (mpi.h) and library (msmpi.lib), how to build from the command line, and how to open the C++ build environment via Start > Intel Software Development Tools > Intel C++ Compiler Professional.]
Prediction-Correction Algorithms for Time-Varying Constrained Optimization
Simonetto, Andrea; Dall'Anese, Emiliano
2017-07-26
This article develops online algorithms to track solutions of time-varying constrained optimization problems. Particularly, resembling workhorse Kalman filtering-based approaches for dynamical systems, the proposed methods involve prediction-correction steps to provably track the trajectory of the optimal solutions of time-varying convex problems. The merits of existing prediction-correction methods have been shown for unconstrained problems and for setups where computing the inverse of the Hessian of the cost function is computationally affordable. This paper addresses the limitations of existing methods by tackling constrained problems and by designing first-order prediction steps that rely on the Hessian of the cost function (and do not require the computation of its inverse). In addition, the proposed methods are shown to improve the convergence speed of existing prediction-correction methods when applied to unconstrained problems. Numerical simulations corroborate the analytical results and showcase performance and benefits of the proposed algorithms. A realistic application of the proposed method to real-time control of energy resources is presented.
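A minimal unconstrained illustration of the prediction-correction template (not the constrained, inverse-free variants developed in the paper): track the minimizer of f(x; t) = 0.5 (x - sin t)^2 by predicting the optimizer's drift between samples and then correcting with a gradient step. Step sizes and horizon are illustrative.

```python
# Prediction-correction tracking of a time-varying minimizer x*(t) = sin t.
import numpy as np

h, steps = 0.1, 100                    # sampling interval, horizon
x = 0.0
for k in range(steps):
    t = k * h
    # prediction: with grad = x - sin(t) and Hessian = 1, the optimizer
    # drifts as xdot = -H^{-1} * d(grad)/dt = cos(t)
    x = x + h * np.cos(t)
    # correction at the new sample time t + h: one gradient step
    x = x - 0.5 * (x - np.sin(t + h))
print("final tracking error:", abs(x - np.sin(steps * h)))
```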
Computational Analysis of the G-III Laminar Flow Glove
NASA Technical Reports Server (NTRS)
Malik, Mujeeb R.; Liao, Wei; Lee-Rausch, Elizabeth M.; Li, Fei; Choudhari, Meelan M.; Chang, Chau-Lyan
2011-01-01
Under NASA's Environmentally Responsible Aviation Project, flight experiments are planned with the primary objective of demonstrating the Discrete Roughness Elements (DRE) technology for passive laminar flow control at chord Reynolds numbers relevant to transport aircraft. In this paper, we present a preliminary computational assessment of the Gulfstream-III (G-III) aircraft wing-glove designed to attain natural laminar flow for the leading-edge sweep angle of 34.6 deg. Analysis for a flight Mach number of 0.75 shows that it should be possible to achieve natural laminar flow for twice the transition Reynolds number ever achieved at this sweep angle. However, the wing-glove needs to be redesigned to effectively demonstrate passive laminar flow control using DREs. As a by-product of the computational assessment, the effect of surface curvature on stationary crossflow disturbances is found to be strongly stabilizing for the current design, and it is suggested that convex surface curvature could be used as a control parameter for natural laminar flow design, provided transition occurs via stationary crossflow disturbances.
Computer-based testing of the modified essay question: the Singapore experience.
Lim, Erle Chuen-Hian; Seet, Raymond Chee-Seong; Oh, Vernon M S; Chia, Boon-Lock; Aw, Marion; Quak, Seng-Hock; Ong, Benjamin K C
2007-11-01
The modified essay question (MEQ), featuring an evolving case scenario, tests a candidate's problem-solving and reasoning ability, rather than mere factual recall. Although it is traditionally conducted as a pen-and-paper examination, our university has run the MEQ using computer-based testing (CBT) since 2003. We describe our experience with running the MEQ examination using the IVLE, or integrated virtual learning environment (https://ivle.nus.edu.sg), provide a blueprint for universities intending to conduct computer-based testing of the MEQ, and detail how our MEQ examination has evolved since its inception. An MEQ committee, comprising specialists in key disciplines from the departments of Medicine and Paediatrics, was formed. We utilized the IVLE, developed for our university in 1998, as the online platform on which we ran the MEQ. We calculated the number of man-hours (academic and support staff) required to run the MEQ examination, using either a computer-based or pen-and-paper format. With the support of our university's information technology (IT) specialists, we have successfully run the MEQ examination online, twice a year, since 2003. Initially, we conducted the examination with short-answer questions only, but have since expanded the MEQ examination to include multiple-choice and extended matching questions. A total of 1268 man-hours was spent in preparing for, and running, the MEQ examination using CBT, compared to 236.5 man-hours to run it using a pen-and-paper format. Despite being more labour-intensive, our students and staff prefer CBT to the pen-and-paper format. The MEQ can be conducted using a computer-based testing scenario, which offers several advantages over a pen-and-paper format. We hope to increase the number of questions and incorporate audio and video files, featuring clinical vignettes, to the MEQ examination in the near future.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kopp, H.J.; Mortensen, G.A.
1978-04-01
Approximately 60% of the full CDC 6600/7600 DATATRAN 2.0 capability was made operational on IBM 360/370 equipment. Sufficient capability was made operational to demonstrate adequate performance for modular program linking applications. Also demonstrated were the basic capabilities and performance required to support moderate-sized data base applications and moderately active scratch input/output applications. Approximately one to two calendar years are required to develop DATATRAN 2.0 capabilities fully for the entire spectrum of applications proposed. Included in the next stage of conversion should be syntax checking and syntax conversion features that would foster greater FORTRAN compatibility between IBM and CDC developed modules. The batch portion of the JOSHUA Modular System, which was developed by Savannah River Laboratory to run on an IBM computer, was examined for the feasibility of conversion to run on a Control Data Corporation (CDC) computer. Portions of the JOSHUA Precompiler were changed so as to be operable on the CDC computer. The Data Manager and Batch Monitor were also examined for conversion feasibility, but no changes were made in them. It appears to be feasible to convert the batch portion of the JOSHUA Modular System to run on a CDC computer with an estimated additional two to three man-years of effort. 9 tables.
Distributed Optimization of Multi-Agent Systems: Framework, Local Optimizer, and Applications
NASA Astrophysics Data System (ADS)
Zu, Yue
A convex optimization problem can be solved in a centralized or distributed manner. Compared with centralized methods based on a single-agent system, distributed algorithms rely on multi-agent systems with information exchanged among connected neighbors, which greatly improves system fault tolerance: a task within a multi-agent system can be completed in the presence of partial agent failures. By problem decomposition, a large-scale problem can be divided into a set of small-scale sub-problems that can be solved in sequence or in parallel, so the computational complexity is greatly reduced by a distributed algorithm in a multi-agent system. Moreover, distributed algorithms allow data to be collected and stored in a distributed fashion, which overcomes the drawbacks of using multicast due to bandwidth limitations. Distributed algorithms have been applied to solving a variety of real-world problems. Our research focuses on the framework and local optimizer design in practical engineering applications. In the first application, we propose a multi-sensor and multi-agent scheme for spatial motion estimation of a rigid body; estimation performance is improved in terms of accuracy and convergence speed. Second, we develop a cyber-physical system and implement distributed computation devices to optimize the in-building evacuation path when a hazard occurs. The proposed Bellman-Ford Dual-Subgradient path planning method relieves congestion in the corridor and exit areas. In the third project, highway traffic flow is managed by adjusting speed limits to minimize fuel consumption and travel time. The optimal control strategy is designed through both centralized and distributed algorithms based on a convex problem formulation. Moreover, a hybrid control scheme is presented for highway-network travel-time minimization. Compared with the uncontrolled case or a conventional highway traffic control strategy, the proposed hybrid control strategy greatly reduces total travel time on the test highway network.
Water resources planning and management : A stochastic dual dynamic programming approach
NASA Astrophysics Data System (ADS)
Goor, Q.; Pinte, D.; Tilmant, A.
2008-12-01
Allocating water between different users and uses, including the environment, is one of the most challenging tasks facing water resources managers and has always been at the heart of Integrated Water Resources Management (IWRM). As water scarcity is expected to increase over time, allocation decisions among the different uses will have to be found taking into account the complex interactions between water and the economy. Hydro-economic optimization models can capture those interactions while prescribing efficient allocation policies. Many hydro-economic models found in the literature are formulated as large-scale nonlinear optimization problems (NLP), seeking to maximize net benefits from system operation while meeting operational and/or institutional constraints and describing the main hydrological processes. However, those models rarely incorporate the uncertainty inherent in the availability of water, essentially because of the computational difficulties associated with stochastic formulations. The purpose of this presentation is to describe a stochastic programming model that can identify economically efficient allocation policies in large-scale multipurpose multireservoir systems. The model is based on stochastic dual dynamic programming (SDDP), an extension of traditional SDP that is not affected by the curse of dimensionality. SDDP identifies efficient allocation policies while accounting for hydrologic uncertainty. The objective function includes the net benefits from the hydropower and irrigation sectors, as well as penalties for not meeting operational and/or institutional constraints. To implement the efficient decomposition scheme that removes the computational burden, the one-stage SDDP problem must be a linear program. Recent developments improve the representation of the nonlinear and mildly non-convex hydropower function through a convex hull approximation of the true hydropower function. The model is illustrated on a cascade of 14 reservoirs in the Nile river basin.
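The one-stage linear program at the heart of SDDP can be sketched as follows, with the expected future value of stored water represented by Benders-type cuts theta <= alpha_j + beta_j * s'. All numbers (storage, inflow, benefit, cuts) are illustrative stand-ins, not data from the Nile case study.

```python
# One-stage SDDP subproblem: maximize immediate release benefit plus a
# cut-based outer approximation of the future value of storage.
import numpy as np
from scipy.optimize import linprog

s, q, benefit = 60.0, 20.0, 2.0        # storage, inflow, benefit per unit released
cuts = [(100.0, 0.0), (80.0, 0.5)]     # (alpha_j, beta_j): theta <= alpha + beta*s'

# Variables: x = [release r, future value theta]; linprog minimizes.
c = [-benefit, -1.0]
# theta <= alpha_j + beta_j*(s + q - r)  <=>  beta_j*r + theta <= alpha_j + beta_j*(s + q)
A_ub = [[beta, 1.0] for _, beta in cuts]
b_ub = [alpha + beta * (s + q) for alpha, beta in cuts]
bounds = [(0.0, s + q), (None, None)]  # release limited by available water

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
r, theta = res.x
print(f"release={r:.1f}, future value={theta:.1f}, stage value={-res.fun:.1f}")
```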
Accelerated Microstructure Imaging via Convex Optimization (AMICO) from diffusion MRI data.
Daducci, Alessandro; Canales-Rodríguez, Erick J; Zhang, Hui; Dyrby, Tim B; Alexander, Daniel C; Thiran, Jean-Philippe
2015-01-15
Microstructure imaging from diffusion magnetic resonance (MR) data represents an invaluable tool to study non-invasively the morphology of tissues and to provide a biological insight into their microstructural organization. In recent years, a variety of biophysical models have been proposed to associate particular patterns observed in the measured signal with specific microstructural properties of the neuronal tissue, such as axon diameter and fiber density. Despite very appealing results showing that the estimated microstructure indices agree very well with histological examinations, existing techniques require computationally very expensive non-linear procedures to fit the models to the data which, in practice, demand the use of powerful computer clusters for large-scale applications. In this work, we present a general framework for Accelerated Microstructure Imaging via Convex Optimization (AMICO) and show how to re-formulate this class of techniques as convenient linear systems which, then, can be efficiently solved using very fast algorithms. We demonstrate this linearization of the fitting problem for two specific models, i.e. ActiveAx and NODDI, providing a very attractive alternative for parameter estimation in those techniques; however, the AMICO framework is general and flexible enough to work also for the wider space of microstructure imaging methods. Results demonstrate that AMICO represents an effective means to accelerate the fit of existing techniques drastically (up to four orders of magnitude faster) while preserving accuracy and precision in the estimated model parameters (correlation above 0.9). We believe that the availability of such ultrafast algorithms will help to accelerate the spread of microstructure imaging to larger cohorts of patients and to study a wider spectrum of neurological disorders. Copyright © 2014 The Authors. Published by Elsevier Inc. All rights reserved.
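The linearization idea can be illustrated with a toy dictionary fit: precompute signal kernels over a grid of candidate parameters, then solve a fast non-negative least-squares problem per voxel. The kernels below are mono-exponential decays, a deliberate simplification of the actual ActiveAx/NODDI response functions.

```python
# AMICO-style sketch: voxel signal ~ dictionary A times non-negative weights.
import numpy as np
from scipy.optimize import nnls

b = np.linspace(0, 3, 30)                       # acquisition "b-values"
diffusivities = np.linspace(0.1, 2.0, 20)       # parameter grid
A = np.exp(-np.outer(b, diffusivities))         # dictionary, one kernel per column

x_true = np.zeros(20); x_true[[3, 12]] = [0.7, 0.3]
y = A @ x_true + 0.01 * np.random.randn(len(b)) # noisy voxel signal

w, residual = nnls(A, y)                        # convex fit, no local minima
estimate = diffusivities @ w / w.sum()          # weighted parameter estimate
print(residual, estimate)
```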
NASA Astrophysics Data System (ADS)
Durech, Josef; Hanus, Josef; Delbo, Marco; Ali-Lagoa, Victor; Carry, Benoit
2014-11-01
Convex shape models and spin vectors of asteroids are now routinely derived from their disk-integrated lightcurves by the lightcurve inversion method of Kaasalainen et al. (2001, Icarus 153, 37). These shape models can then be used in combination with thermal infrared data and a thermophysical model to derive other physical parameters: size, albedo, macroscopic roughness, and thermal inertia of the surface. In this classical two-step approach, the shape and spin parameters are kept fixed during the thermophysical modeling, when the emitted thermal flux is computed from the surface temperature, which is obtained by solving a 1-D heat diffusion equation in sub-surface layers. A novel method of simultaneous inversion of optical and infrared data was presented by Durech et al. (2012, LPI Contribution No. 1667, id.6118). The new algorithm uses the same convex shape representation as the lightcurve inversion but optimizes all relevant physical parameters simultaneously (including the shape, size, rotation vector, thermal inertia, albedo, surface roughness, etc.), which leads to a better fit to the thermal data and a reliable estimation of model uncertainties. We applied this method to selected asteroids using their optical lightcurves from archives and thermal infrared data observed by the Wide-field Infrared Survey Explorer (WISE) satellite. We will (i) show several examples of how well our model fits both optical and infrared data, (ii) discuss the uncertainty of derived parameters (namely the thermal inertia), (iii) compare results obtained with the two-step approach with those obtained by our method, (iv) discuss the advantages of the simultaneous approach over the classical two-step approach, and (v) advertise the possibility of applying this approach to the tens of thousands of asteroids for which enough WISE and optical data exist.
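The 1-D subsurface heat diffusion step that thermophysical models solve can be sketched with an explicit finite-difference scheme; the material constants, surface boundary treatment, and insolation forcing below are illustrative simplifications of a real radiative-balance model.

```python
# Explicit finite-difference sketch of 1-D subsurface heat diffusion.
import numpy as np

nz, nt = 50, 20000
dz, dt, kappa = 0.01, 1.0, 1e-6           # grid step [m], time step [s], diffusivity
period = 20000.0                           # "rotation" period [s]
T = np.full(nz, 250.0)                     # initial temperature profile [K]

assert kappa * dt / dz**2 <= 0.5           # explicit-scheme stability limit
for n in range(nt):
    T[0] = 250.0 + 50.0 * np.sin(2 * np.pi * n * dt / period)  # insolation proxy
    T[1:-1] += kappa * dt / dz**2 * (T[2:] - 2 * T[1:-1] + T[:-2])
    T[-1] = T[-2]                          # zero-flux condition at depth

print(T[:5])  # the diurnal wave decays with depth (thermal skin effect)
```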
Identifying the impact of G-quadruplexes on Affymetrix 3' arrays using cloud computing.
Memon, Farhat N; Owen, Anne M; Sanchez-Graillet, Olivia; Upton, Graham J G; Harrison, Andrew P
2010-01-15
A tetramer quadruplex structure is formed by four parallel strands of DNA/RNA containing runs of guanine. These quadruplexes are able to form because guanine can Hoogsteen hydrogen bond to other guanines, and a tetrad of guanines can form a stable arrangement. Recently we have discovered that probes on Affymetrix GeneChips that contain runs of guanine do not measure gene expression reliably. We associate this finding with the likelihood that quadruplexes are forming on the surface of GeneChips. In order to cope with the rapidly expanding size of GeneChip array datasets in the public domain, we are exploring the use of cloud computing to replicate our experiments on 3' arrays to look at the effect of the location of G-spots (runs of guanines). Cloud computing is a recently introduced high-performance solution that takes advantage of the computational infrastructure of large organisations such as Amazon and Google. We expect that cloud computing will become widely adopted because it enables bioinformaticians to avoid capital expenditure on expensive computing resources and to pay a cloud computing provider only for what is used. Moreover, beyond financial efficiency, cloud computing is an ecologically friendly technology; it enables efficient data-sharing, and we expect it to be faster for development purposes. Here we propose the advantageous use of cloud computing to perform a large data-mining analysis of public domain 3' arrays.
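Flagging probes that contain G-spots reduces, at its core, to detecting runs of guanine; a minimal sketch, assuming a threshold of four consecutive guanines (matching the tetrad structure) and made-up probe sequences:

```python
# Flag probe sequences containing a run of four or more guanines.
import re

G_RUN = re.compile(r"G{4,}")

probes = ["ACGTGGGGTACGT", "ACGTACGTACGTA", "GGGGGACGTACGT"]
for seq in probes:
    match = G_RUN.search(seq)
    print(seq, "-> G-run at position", match.start() if match else None)
```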
Katz, Jonathan E
2017-01-01
Laboratories tend to be amenable environments for long-term reliable operation of scientific measurement equipment. Indeed, it is not uncommon to find equipment 5, 10, or even 20+ years old still being routinely used in labs. Unfortunately, the Achilles heel for many of these devices is the control/data acquisition computer. Often these computers run older operating systems (e.g., Windows XP) and, while they might only use standard network, USB, or serial ports, they require proprietary software to be installed. Even if the original installation disks can be found, reinstalling is a burdensome process fraught with "gotchas" that can derail it: lost license keys, incompatible hardware, forgotten configuration settings, etc. If you have legacy instrumentation running, the computer is the ticking time bomb waiting to put a halt to your operation. In this chapter, I describe how to virtualize your currently running control computer. This virtualized computer "image" is easy to maintain, easy to back up, and easy to redeploy. I have used this multiple times in my own lab to greatly improve the robustness of my legacy devices. After completing the steps in this chapter, you will have your original control computer as well as a virtual instance of that computer with all the software installed, ready to control your hardware should your original computer ever be decommissioned.
Resource Efficient Hardware Architecture for Fast Computation of Running Max/Min Filters
Torres-Huitzil, Cesar
2013-01-01
Running max/min filters on rectangular kernels are widely used in many digital signal and image processing applications. Filtering with a k × k kernel requires k² − 1 comparisons per sample for a direct implementation; thus, performance scales expensively with the kernel size k. Faster computations can be achieved by kernel decomposition and using constant time one-dimensional algorithms on custom hardware. This paper presents a hardware architecture for real-time computation of running max/min filters based on the van Herk/Gil-Werman (HGW) algorithm. The proposed architecture design uses less computation and memory resources than previously reported architectures when targeted to Field Programmable Gate Array (FPGA) devices. Implementation results show that the architecture is able to compute max/min filters, on 1024 × 1024 images with up to 255 × 255 kernels, in around 8.4 milliseconds, 120 frames per second, at a clock frequency of 250 MHz. The implementation is highly scalable for the kernel size with good performance/area tradeoff suitable for embedded applications. The applicability of the architecture is shown for local adaptive image thresholding. PMID:24288456
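For reference, the HGW algorithm on which the architecture is based can be sketched in a few lines: each block of length k gets a forward (prefix) and backward (suffix) running max, and any window max is then the max of two precomputed values, so the cost per sample is constant in k. This is a plain software sketch, not the paper's hardware design.

```python
# 1-D running max over windows of length k (van Herk/Gil-Werman).
import numpy as np

def hgw_max_filter(a, k):
    n = len(a)
    pad = (-n) % k                         # pad to a multiple of the block size
    x = np.concatenate([a, np.full(pad, -np.inf)])
    m = len(x)
    s = np.empty(m)                        # prefix max within each block
    r = np.empty(m)                        # suffix max within each block
    for blk in range(0, m, k):
        s[blk] = x[blk]
        for i in range(blk + 1, blk + k):
            s[i] = max(s[i - 1], x[i])
        r[blk + k - 1] = x[blk + k - 1]
        for i in range(blk + k - 2, blk - 1, -1):
            r[i] = max(r[i + 1], x[i])
    # The window [i, i+k-1] crosses at most one block boundary:
    # r[i] covers its left part, s[i+k-1] the right part.
    return np.array([max(r[i], s[i + k - 1]) for i in range(n - k + 1)])

print(hgw_max_filter(np.array([3.0, 1.0, 4.0, 1.0, 5.0, 9.0, 2.0]), 3))
```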
Energy Frontier Research With ATLAS: Final Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Butler, John; Black, Kevin; Ahlen, Steve
2016-06-14
The Boston University (BU) group is playing key roles across the ATLAS experiment: in detector operations, the online trigger, the upgrade, computing, and physics analysis. Our team has been critical to the maintenance and operations of the muon system since its installation. During Run 1 we led the muon trigger group and that responsibility continues into Run 2. BU maintains and operates the ATLAS Northeast Tier 2 computing center. We are actively engaged in the analysis of ATLAS data from Run 1 and Run 2. Physics analyses we have contributed to include Standard Model measurements (W and Z cross sections, $t\bar{t}$ differential cross sections, $WWW^*$ production), evidence for the Higgs decaying to $\tau^+\tau^-$, and searches for new phenomena (technicolor, Z' and W', vector-like quarks, dark matter).
Automatic Data Filter Customization Using a Genetic Algorithm
NASA Technical Reports Server (NTRS)
Mandrake, Lukas
2013-01-01
This work predicts whether a retrieval algorithm will usefully determine CO2 concentration from an input spectrum of GOSAT (Greenhouse Gases Observing Satellite). This was done to eliminate needless runtime on atmospheric soundings that would never yield useful results. A space of 50 dimensions was examined for predictive power on the final CO2 results. Retrieval algorithms are frequently expensive to run, and wasted effort defeats requirements and expends needless resources. This algorithm could be used to help predict and filter unneeded runs in any computationally expensive regime. Traditional methods such as Fisher discriminant analysis and decision trees can attempt to predict whether a sounding will be properly processed. However, this work sought to detect a subsection of the dimensional space that can simply be filtered out to eliminate unwanted runs. LDAs (linear discriminant analyses) and other systems examine the entire data set and judge a "best fit," giving equal weight to complex and problematic regions as well as simple, clear-cut regions. In this implementation, a genetic space of "left" and "right" thresholds outside of which all data are rejected was defined. These left/right pairs are created for each of the 50 input dimensions. A genetic algorithm then runs through countless potential filter settings using a JPL computer cluster, optimizing the tossed-out data's yield (proper vs. improper run removal) and the number of points tossed. This solution is robust to an arbitrary decision boundary within the data and avoids the global optimization problem of whole-dataset fitting using LDA or decision trees. It filters out runs that would not have produced useful CO2 values, saving needless computation. This would be an algorithmic preprocessing improvement to any computationally expensive system.
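A minimal sketch of the approach, assuming synthetic features and labels: each individual encodes a (left, right) rejection threshold per dimension, and fitness rewards rejecting improper runs while penalizing the loss of proper ones. Population size, mutation scale, and the fitness weights are all illustrative.

```python
# Evolve per-dimension (left, right) rejection thresholds with a simple GA.
import numpy as np

rng = np.random.default_rng(0)
n_dim, pop_size, n_gen = 5, 40, 60
X = rng.normal(size=(1000, n_dim))            # sounding features
good = np.abs(X).max(axis=1) < 2.0            # synthetic label: "useful run"

def fitness(thresholds):
    lo, hi = thresholds[:, 0], thresholds[:, 1]
    rejected = ~(((X > lo) & (X < hi)).all(axis=1))
    # reward rejecting bad runs, heavily penalize rejecting good ones
    return (rejected & ~good).sum() - 5.0 * (rejected & good).sum()

pop = rng.uniform(-3, 3, size=(pop_size, n_dim, 2))
pop.sort(axis=2)                               # enforce left <= right
for _ in range(n_gen):
    scores = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(scores)[-pop_size // 2:]]      # truncation selection
    children = parents + rng.normal(0, 0.1, parents.shape)  # Gaussian mutation
    children.sort(axis=2)
    pop = np.concatenate([parents, children])

best = pop[np.argmax([fitness(ind) for ind in pop])]
print(best)                                    # evolved filter thresholds
```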
Open-source meteor detection software for low-cost single-board computers
NASA Astrophysics Data System (ADS)
Vida, D.; Zubović, D.; Šegon, D.; Gural, P.; Cupec, R.
2016-01-01
This work aims to overcome the current price threshold of meteor stations which can sometimes deter meteor enthusiasts from owning one. In recent years small card-sized computers became widely available and are used for numerous applications. To utilize such computers for meteor work, software which can run on them is needed. In this paper we present a detailed description of newly-developed open-source software for fireball and meteor detection optimized for running on low-cost single board computers. Furthermore, an update on the development of automated open-source software which will handle video capture, fireball and meteor detection, astrometry and photometry is given.
How to Build an AppleSeed: A Parallel Macintosh Cluster for Numerically Intensive Computing
NASA Astrophysics Data System (ADS)
Decyk, V. K.; Dauger, D. E.
We have constructed a parallel cluster consisting of a mixture of Apple Macintosh G3 and G4 computers running the Mac OS, and have achieved very good performance on numerically intensive, parallel plasma particle-in-cell simulations. A subset of the MPI message-passing library was implemented in Fortran77 and C. This library enabled us to port code, without modification, from other parallel processors to the Macintosh cluster. Unlike Unix-based clusters, no special expertise in operating systems is required to build and run the cluster. This enables us to move parallel computing from the realm of experts to the mainstream of computing.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Venkata, Manjunath Gorentla; Aderholdt, William F
Pre-exascale systems are expected to have a significant amount of hierarchical and heterogeneous on-node memory, and this trend in system architecture is expected to continue into the exascale era. Along with hierarchical-heterogeneous memory, such systems typically have a high-performing network and a compute accelerator. This system architecture is effective not only for running traditional High Performance Computing (HPC) applications (Big-Compute), but also for running data-intensive HPC applications and Big-Data applications. As a consequence, there is a growing desire to have a single system serve the needs of both Big-Compute and Big-Data applications. Though the system architecture supports the convergence of Big-Compute and Big-Data, the programming models and software layer have yet to evolve to support either hierarchical-heterogeneous memory systems or this convergence. We developed a programming abstraction to address this problem. The abstraction is implemented as a software library and runs on pre-exascale and exascale systems supporting current and emerging system architectures. Using distributed data structures as a central concept, it provides (1) a simple, usable, and portable abstraction for hierarchical-heterogeneous memory and (2) a unified programming abstraction for Big-Compute and Big-Data applications.
Manual for automatic generation of finite element models of spiral bevel gears in mesh
NASA Technical Reports Server (NTRS)
Bibel, G. D.; Reddy, S.; Kumar, A.
1994-01-01
The goal of this research is to develop computer programs that generate finite element models suitable for 3D contact analysis of face-milled spiral bevel gears in mesh. A pinion tooth and a gear tooth are created and put in mesh. There are two programs, Points.f and Pat.f, to perform the analysis. Points.f is based on the equation of meshing for spiral bevel gears. It uses machine tool settings to solve for an N x M mesh of points on the four surfaces: pinion concave and convex, and gear concave and convex. Points.f creates the file POINTS.OUT, an ASCII file containing N x M points for each surface. (N is the number of node points along the length of the tooth, and M is nodes along the height.) Pat.f reads POINTS.OUT and creates the file tl.out. Tl.out is a series of PATRAN input commands. In addition to the mesh density on the tooth face, additional user-specified variables are the number of finite elements through the thickness and the number of finite elements along the tooth full fillet. A full fillet is assumed to exist for both the pinion and gear.
Qiu, Wu; Yuan, Jing; Ukwatta, Eranga; Sun, Yue; Rajchl, Martin; Fenster, Aaron
2014-04-01
We propose a novel global optimization-based approach to segmentation of 3-D prostate transrectal ultrasound (TRUS) and T2 weighted magnetic resonance (MR) images, enforcing inherent axial symmetry of prostate shapes to simultaneously adjust a series of 2-D slice-wise segmentations in a "global" 3-D sense. We show that the introduced challenging combinatorial optimization problem can be solved globally and exactly by means of convex relaxation. In this regard, we propose a novel coherent continuous max-flow model (CCMFM), which derives a new and efficient duality-based algorithm, leading to a GPU-based implementation to achieve high computational speeds. Experiments with 25 3-D TRUS images and 30 3-D T2w MR images from our dataset, and 50 3-D T2w MR images from a public dataset, demonstrate that the proposed approach can segment a 3-D prostate TRUS/MR image within 5-6 s including 4-5 s for initialization, yielding a mean Dice similarity coefficient of 93.2%±2.0% for 3-D TRUS images and 88.5%±3.5% for 3-D MR images. The proposed method also yields relatively low intra- and inter-observer variability introduced by user manual initialization, suggesting a high reproducibility, independent of observers.
On Data Transfers Over Wide-Area Dedicated Connections
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rao, Nageswara S.; Liu, Qiang
Dedicated wide-area network connections are employed in big data and high-performance computing scenarios, since the absence of cross-traffic promises to make it easier to analyze and optimize data transfers over them. However, nonlinear transport dynamics and end-system complexity due to multi-core hosts and distributed file systems make these tasks surprisingly challenging. We present an overview of methods to analyze memory and disk file transfers using extensive measurements over 10 Gbps physical and emulated connections with 0–366 ms round trip times (RTTs). For memory transfers, we derive performance profiles of TCP and UDT throughput as a function of RTT, which show concave regions in contrast to entirely convex regions predicted by previous models. These highly desirable concave regions can be expanded by utilizing large buffers and more parallel flows. We also present Poincaré maps and Lyapunov exponents of TCP and UDT throughput traces that indicate complex throughput dynamics. For disk file transfers, we show that throughput can be optimized using a combination of parallel I/O and network threads under direct I/O mode. Our initial throughput measurements of Lustre filesystems mounted over long-haul connections using LNet routers show convex profiles indicative of I/O limits.
Ji, Haoran; Wang, Chengshan; Li, Peng; ...
2017-09-20
The integration of distributed generators (DGs) exacerbates the feeder power flow fluctuation and load unbalanced condition in active distribution networks (ADNs). The unbalanced feeder load causes inefficient use of network assets and network congestion during system operation. The flexible interconnection based on the multi-terminal soft open point (SOP) significantly benefits the operation of ADNs. The multi-terminal SOP, which is a controllable power electronic device installed to replace the normally open point, provides accurate active and reactive power flow control to enable the flexible connection of feeders. An enhanced SOCP-based method for feeder load balancing using the multi-terminal SOP is proposed in this paper. Furthermore, by regulating the operation of the multi-terminal SOP, the proposed method can mitigate the unbalanced condition of feeder load and simultaneously reduce the power losses of ADNs. Then, the original non-convex model is converted into a second-order cone programming (SOCP) model using convex relaxation. In order to tighten the SOCP relaxation and improve the computation efficiency, an enhanced SOCP-based approach is developed to solve the proposed model. Finally, case studies are performed on the modified IEEE 33-node system to verify the effectiveness and efficiency of the proposed method.
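The core relaxation step can be sketched with cvxpy on a single toy branch: the non-convex branch flow equality l = (P² + Q²)/v is relaxed to the inequality l ≥ (P² + Q²)/v, which is second-order-cone representable. This is not the paper's full ADN model; the line parameters and load are made up.

```python
# SOCP relaxation of a single branch flow constraint (toy data).
import cvxpy as cp

v = cp.Variable(nonneg=True)   # squared voltage magnitude
l = cp.Variable(nonneg=True)   # squared branch current
P = cp.Variable()              # branch active power
Q = cp.Variable()              # branch reactive power
r, x_reac, load = 0.01, 0.02, 1.0

constraints = [
    cp.quad_over_lin(cp.vstack([P, Q]), v) <= l,  # relaxed flow equation
    P - r * l == load,                            # toy active power balance
    Q - x_reac * l == 0.2 * load,                 # toy reactive power balance
    v >= 0.9, v <= 1.1,                           # voltage limits
]
prob = cp.Problem(cp.Minimize(r * l), constraints)  # minimize branch loss
prob.solve()
print(prob.status, l.value)
```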
NASA Technical Reports Server (NTRS)
Barth, Timothy; Saini, Subhash (Technical Monitor)
1999-01-01
This talk considers simplified finite element discretization techniques for first-order systems of conservation laws equipped with a convex (entropy) extension. Using newly developed techniques in entropy symmetrization theory, simplified forms of the Galerkin least-squares (GLS) and the discontinuous Galerkin (DG) finite element method have been developed and analyzed. The use of symmetrization variables yields numerical schemes which inherit global entropy stability properties of the PDE system. Central to the development of the simplified GLS and DG methods is the Eigenvalue Scaling Theorem, which characterizes right symmetrizers of an arbitrary first-order hyperbolic system in terms of scaled eigenvectors of the corresponding flux Jacobian matrices. A constructive proof is provided for the Eigenvalue Scaling Theorem with detailed consideration given to the Euler, Navier-Stokes, and magnetohydrodynamic (MHD) equations. Linear and nonlinear energy stability is proven for the simplified GLS and DG methods. Spatial convergence properties of the simplified GLS and DG methods are numerically evaluated via the computation of Ringleb flow on a sequence of successively refined triangulations. Finally, we consider a posteriori error estimates for the GLS and DG discretizations assuming error functionals related to the integrated lift and drag of a body. Sample calculations in 2D are shown to validate the theory and implementation.
Gálvez, Akemi; Iglesias, Andrés
2013-01-01
Fitting spline curves to data points is a very important issue in many applied fields. It is also challenging, because these curves typically depend on many continuous variables in a highly interrelated nonlinear way. In general, it is not possible to compute these parameters analytically, so the problem is formulated as a continuous nonlinear optimization problem, for which traditional optimization techniques usually fail. This paper presents a new bioinspired method to tackle this issue. In this method, optimization is performed through a combination of two techniques. Firstly, we apply the indirect approach to the knots, in which they are not initially the subject of optimization but precomputed with a coarse approximation scheme. Secondly, a powerful bioinspired metaheuristic technique, the firefly algorithm, is applied to optimization of data parameterization; then, the knot vector is refined by using De Boor's method, thus yielding a better approximation to the optimal knot vector. This scheme converts the original nonlinear continuous optimization problem into a convex optimization problem, solved by singular value decomposition. Our method is applied to some illustrative real-world examples from the CAD/CAM field. Our experimental results show that the proposed scheme can solve the original continuous nonlinear optimization problem very efficiently.
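For readers unfamiliar with the metaheuristic, a minimal firefly-algorithm sketch on a generic continuous objective follows; the spline-specific encoding of data parameters and the De Boor refinement step are omitted, and all coefficients are illustrative.

```python
# Core firefly algorithm: dimmer fireflies move toward brighter ones, with
# attraction decaying in distance plus a cooled random walk.
import numpy as np

rng = np.random.default_rng(1)
n_fireflies, n_dim, n_iter = 20, 4, 100
beta0, gamma, alpha = 1.0, 1.0, 0.05     # attraction, absorption, noise

def objective(x):                         # stand-in for the spline fitting error
    return np.sum((x - 0.3) ** 2)

X = rng.uniform(0, 1, size=(n_fireflies, n_dim))
for _ in range(n_iter):
    I = np.array([objective(x) for x in X])   # lower error = brighter here
    for i in range(n_fireflies):
        for j in range(n_fireflies):
            if I[j] < I[i]:                   # move firefly i toward brighter j
                r2 = np.sum((X[i] - X[j]) ** 2)
                beta = beta0 * np.exp(-gamma * r2)
                X[i] += beta * (X[j] - X[i]) + alpha * (rng.random(n_dim) - 0.5)
    alpha *= 0.97                             # cool the random walk

print(X[np.argmin([objective(x) for x in X])])  # near the optimum (0.3, ...)
```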
Gálvez, Akemi; Iglesias, Andrés
2013-01-01
Fitting spline curves to data points is a very important issue in many applied fields. It is also challenging, because these curves typically depend on many continuous variables in a highly interrelated nonlinear way. In general, it is not possible to compute these parameters analytically, so the problem is formulated as a continuous nonlinear optimization problem, for which traditional optimization techniques usually fail. This paper presents a new bioinspired method to tackle this issue. In this method, optimization is performed through a combination of two techniques. Firstly, we apply the indirect approach to the knots, in which they are not initially the subject of optimization but precomputed with a coarse approximation scheme. Secondly, a powerful bioinspired metaheuristic technique, the firefly algorithm, is applied to optimization of data parameterization; then, the knot vector is refined by using De Boor's method, thus yielding a better approximation to the optimal knot vector. This scheme converts the original nonlinear continuous optimization problem into a convex optimization problem, solved by singular value decomposition. Our method is applied to some illustrative real-world examples from the CAD/CAM field. Our experimental results show that the proposed scheme can solve the original continuous nonlinear optimization problem very efficiently. PMID:24376380
A novel 3D shape descriptor for automatic retrieval of anatomical structures from medical images
NASA Astrophysics Data System (ADS)
Nunes, Fátima L. S.; Bergamasco, Leila C. C.; Delmondes, Pedro H.; Valverde, Miguel A. G.; Jackowski, Marcel P.
2017-03-01
Content-based image retrieval (CBIR) aims at retrieving from a database objects that are similar to an object provided by a query, by taking into consideration a set of extracted features. While CBIR has been widely applied in the two-dimensional image domain, the retrieval of 3D objects from medical image datasets using CBIR remains to be explored. In this context, the development of descriptors that can capture information specific to organs or structures is desirable. In this work, we focus on the retrieval of two anatomical structures commonly imaged by Magnetic Resonance Imaging (MRI) and Computed Tomography (CT) techniques, the left ventricle of the heart and blood vessels. Towards this aim, we developed the Area-Distance Local Descriptor (ADLD), a novel 3D local shape descriptor that employs mesh geometry information, namely facet area and distance from centroid to surface, to identify shape changes. Because ADLD only considers surface meshes extracted from volumetric medical images, it substantially diminishes the amount of data to be analyzed. A 90% precision rate was obtained when retrieving both convex (left ventricle) and non-convex structures (blood vessels), allowing for detection of abnormalities associated with changes in shape. Thus, ADLD has the potential to aid in the diagnosis of a wide range of vascular and cardiac diseases.
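The two per-facet quantities ADLD builds on, triangle area and distance from the mesh centroid, are straightforward to compute; a sketch on a toy tetrahedron, with an area-weighted histogram standing in for the full descriptor:

```python
# Per-facet area and centroid-to-facet distance on a toy triangle mesh.
import numpy as np

verts = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]], float)
faces = np.array([[0, 1, 2], [0, 1, 3], [0, 2, 3], [1, 2, 3]])

centroid = verts.mean(axis=0)
v0, v1, v2 = (verts[faces[:, k]] for k in range(3))
areas = 0.5 * np.linalg.norm(np.cross(v1 - v0, v2 - v0), axis=1)
face_centers = (v0 + v1 + v2) / 3.0
dists = np.linalg.norm(face_centers - centroid, axis=1)

# Crude shape signature: area-weighted histogram of centroid distances.
descriptor, _ = np.histogram(dists, bins=8, weights=areas, density=True)
print(descriptor)
```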
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ji, Haoran; Wang, Chengshan; Li, Peng
The integration of distributed generators (DGs) exacerbates the feeder power flow fluctuation and load unbalanced condition in active distribution networks (ADNs). The unbalanced feeder load causes inefficient use of network assets and network congestion during system operation. The flexible interconnection based on the multi-terminal soft open point (SOP) significantly benefits the operation of ADNs. The multi-terminal SOP, which is a controllable power electronic device installed to replace the normally open point, provides accurate active and reactive power flow control to enable the flexible connection of feeders. An enhanced SOCP-based method for feeder load balancing using the multi-terminal SOP is proposed in this paper. Furthermore, by regulating the operation of the multi-terminal SOP, the proposed method can mitigate the unbalanced condition of feeder load and simultaneously reduce the power losses of ADNs. Then, the original non-convex model is converted into a second-order cone programming (SOCP) model using convex relaxation. In order to tighten the SOCP relaxation and improve the computation efficiency, an enhanced SOCP-based approach is developed to solve the proposed model. Finally, case studies are performed on the modified IEEE 33-node system to verify the effectiveness and efficiency of the proposed method.
An efficient self-organizing map designed by genetic algorithms for the traveling salesman problem.
Jin, Hui-Dong; Leung, Kwong-Sak; Wong, Man-Leung; Xu, Z B
2003-01-01
As a typical combinatorial optimization problem, the traveling salesman problem (TSP) has attracted extensive research interest. In this paper, we develop a self-organizing map (SOM) with a novel learning rule. It is called the integrated SOM (ISOM) since its learning rule integrates the three learning mechanisms in the SOM literature. Within a single learning step, the excited neuron is first dragged toward the input city, then pushed to the convex hull of the TSP, and finally drawn toward the middle point of its two neighboring neurons. A genetic algorithm is successfully specified to determine the elaborate coordination among the three learning mechanisms as well as the suitable parameter setting. The evolved ISOM (eISOM) is examined on three sets of TSP to demonstrate its power and efficiency. The computation complexity of the eISOM is quadratic, which is comparable to other SOM-like neural networks. Moreover, the eISOM can generate more accurate solutions than several typical approaches for TSP including the SOM developed by Budinich, the expanding SOM, the convex elastic net, and the FLEXMAP algorithm. Though its solution accuracy is not yet comparable to some sophisticated heuristics, the eISOM is one of the most accurate neural networks for the TSP.
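The integrated learning step can be sketched as follows; note the convex-hull push is approximated here by a radial move away from the ring centroid, and the coefficients are illustrative rather than the GA-evolved ones from the paper.

```python
# One ISOM-style learning sweep: for each city, the excited (winning)
# neuron is dragged toward the city, pushed outward toward the hull, and
# relaxed toward the midpoint of its two ring neighbors.
import numpy as np

rng = np.random.default_rng(2)
cities = rng.random((30, 2))
ring = rng.random((60, 2))                  # neuron ring encoding the tour
lr, hull_push, smooth = 0.5, 0.05, 0.1

for city in cities:
    w = np.argmin(np.linalg.norm(ring - city, axis=1))   # excited neuron
    ring[w] += lr * (city - ring[w])                     # drag toward the city
    outward = ring[w] - ring.mean(axis=0)
    ring[w] += hull_push * outward                       # push toward the hull
    midpoint = 0.5 * (ring[(w - 1) % len(ring)] + ring[(w + 1) % len(ring)])
    ring[w] += smooth * (midpoint - ring[w])             # draw to the midpoint

print(ring[:3])
```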
Directional Convexity and Finite Optimality Conditions.
1984-03-01
Keywords: control system, necessary conditions for optimality. ... the assumption that R(T) is convex would then imply x(u,T) ∈ int R(T).
Localized Multiple Kernel Learning A Convex Approach
2016-11-22
... data. All the aforementioned approaches to localized MKL are formulated in terms of non-convex optimization problems, and deep theoretical ...
Huang, Kuo -Ling; Mehrotra, Sanjay
2016-11-08
We present a homogeneous algorithm equipped with a modified potential function for the monotone complementarity problem. We show that this potential function is reduced by at least a constant amount if a scaled Lipschitz condition (SLC) is satisfied. A practical algorithm based on this potential function is implemented in a software package named iOptimize. The implementation in iOptimize maintains global linear and polynomial time convergence properties, while achieving practical performance. It either successfully solves the problem, or concludes that the SLC is not satisfied. When compared with the mature software package MOSEK (barrier solver version 6.0.0.106), iOptimize solves convex quadratic programming problems, convex quadratically constrained quadratic programming problems, and general convex programming problems in fewer iterations. Moreover, several problems for which MOSEK fails are solved to optimality. In addition, we also find that iOptimize detects infeasibility more reliably than the general nonlinear solvers Ipopt (version 3.9.2) and Knitro (version 8.0).
NASA Astrophysics Data System (ADS)
Solov'ev, V. A.; Chernov, M. Yu; Baidakova, M. V.; Kirilenko, D. A.; Yagovkina, M. A.; Sitnikova, A. A.; Komissarova, T. A.; Kop'ev, P. S.; Ivanov, S. V.
2018-01-01
This paper presents a study of structural properties of InGaAs/InAlAs quantum well (QW) heterostructures with convex-graded InxAl1-xAs (x = 0.05-0.79) metamorphic buffer layers (MBLs) grown by molecular beam epitaxy on GaAs substrates. Mechanisms of elastic strain relaxation in the convex-graded MBLs were studied by the X-ray reciprocal space mapping combined with the data of spatially-resolved selected area electron diffraction implemented in a transmission electron microscope. The strain relaxation degree was approximated for the structures with different values of an In step-back. Strong contribution of the strain relaxation via lattice tilt in addition to the formation of the misfit dislocations has been observed for the convex-graded InAlAs MBL, which results in a reduced threading dislocation density in the QW region as compared to a linear-graded MBL.
Efficient Compressed Sensing Based MRI Reconstruction using Nonconvex Total Variation Penalties
NASA Astrophysics Data System (ADS)
Lazzaro, D.; Loli Piccolomini, E.; Zama, F.
2016-10-01
This work addresses the problem of Magnetic Resonance Image Reconstruction from highly sub-sampled measurements in the Fourier domain. It is modeled as a constrained minimization problem, where the objective function is a non-convex function of the gradient of the unknown image and the constraints are given by the data fidelity term. We propose an algorithm, Fast Non Convex Reweighted (FNCR), in which the constrained problem is solved by a reweighting scheme, as a strategy to overcome the non-convexity of the objective function, with an adaptive adjustment of the penalization parameter. We propose a fast iterative algorithm and prove that it converges to a local minimum because the constrained problem satisfies the Kurdyka-Lojasiewicz property. Moreover, the adaptation of the non-convex l0 approximation and penalization parameters by means of a continuation technique allows us to obtain good quality solutions, avoiding getting stuck in unwanted local minima. Some numerical experiments performed on MRI sub-sampled data show the efficiency of the algorithm and the accuracy of the solution.
Liquid phase heteroepitaxial growth on convex substrate using binary phase field crystal model
NASA Astrophysics Data System (ADS)
Lu, Yanli; Zhang, Tinghui; Chen, Zheng
2018-06-01
The liquid phase heteroepitaxial growth on a convex substrate is investigated with the binary phase field crystal (PFC) model. The paper focuses on the transformation of the morphology of epitaxial films on convex substrates with two different radiuses of curvature (Ω), as well as the influence of substrate vicinal angles on film growth. It is found that film growth goes through different stages on convex substrates with different radiuses of curvature. For Ω = 512Δx, epitaxial film growth includes four stages: island growth coupled with layer-by-layer growth, layer-by-layer growth, island growth coupled with layer-by-layer growth, and layer-by-layer growth. For Ω = 1024Δx, film growth only goes through island growth and layer-by-layer growth. The substrate vicinal angle (π) is also an important parameter for epitaxial film growth: we find the film can grow well when π = 2° for Ω = 512Δx, while the optimized film is obtained when π = 4° for Ω = 512Δx.
QUADRATIC SERENDIPITY FINITE ELEMENTS ON POLYGONS USING GENERALIZED BARYCENTRIC COORDINATES
RAND, ALEXANDER; GILLETTE, ANDREW; BAJAJ, CHANDRAJIT
2013-01-01
We introduce a finite element construction for use on the class of convex, planar polygons and show it obtains a quadratic error convergence estimate. On a convex n-gon, our construction produces 2n basis functions, associated in a Lagrange-like fashion to each vertex and each edge midpoint, by transforming and combining a set of n(n + 1)/2 basis functions known to obtain quadratic convergence. The technique broadens the scope of the so-called ‘serendipity’ elements, previously studied only for quadrilateral and regular hexahedral meshes, by employing the theory of generalized barycentric coordinates. Uniform a priori error estimates are established over the class of convex quadrilaterals with bounded aspect ratio as well as over the class of convex planar polygons satisfying additional shape regularity conditions to exclude large interior angles and short edges. Numerical evidence is provided on a trapezoidal quadrilateral mesh, previously not amenable to serendipity constructions, and applications to adaptive meshing are discussed. PMID:25301974
Torsional deformity of apical vertebra in adolescent idiopathic scoliosis.
Kotwicki, Tomasz; Napiontek, Marek
2002-01-01
CT scans of structural thoracic idiopathic scoliosis were reviewed in nine patients admitted to our department for scoliosis surgery. The apical vertebra scans were chosen and the following parameters were evaluated: 1) alpha angle formed by the axis of vertebra and the axis of spinous process 2) beta concave and beta convex angle between the spinous process and the left and right transverse process, respectively, 3) gamma concave and gamma convex angle between the axis of vertebra and the left and right transverse process, respectively, 4) the rotation angle to the sagittal plane. The constant deviation of the spinous process towards the convex side of the curve was observed. The vertebral body itself was distorted towards the concavity of the curve. The angle between the spinous process and the transverse process was smaller on the convex side of the curve. The torsional, intravertebral deformity of the apical vertebra was a factor acting in the direction opposite to the rotation, in the sense to reduce the deformity of the spine in idiopathic scoliosis.
NASA Astrophysics Data System (ADS)
Peterson, Jeffrey H.; Derby, Jeffrey J.
2017-06-01
A unifying idea is presented for the engineering of convex melt-solid interface shapes in Bridgman crystal growth systems. Previous approaches to interface control are discussed with particular attention paid to the idea of a "booster" heater. Proceeding from the idea that a booster heater promotes a converging heat flux geometry and from the energy conservation equation, we show that a convex interface shape will naturally result when the interface is located in regions of the furnace where the axial thermal profile exhibits negative curvature, i.e., where d²T/dz² < 0. This criterion is effective in explaining prior literature results on interface control and promising for the evaluation of new furnace designs. We posit that the negative curvature criterion may be applicable to the characterization of growth systems via temperature measurements in an empty furnace, providing insight about the potential for achieving a convex interface shape, without growing a crystal or conducting simulations.
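Applying the criterion to an empty-furnace measurement amounts to estimating d²T/dz² from sampled axial temperatures; a small sketch with a synthetic profile:

```python
# Flag furnace regions where the axial profile has negative curvature.
import numpy as np

z = np.linspace(0.0, 0.2, 41)                    # axial position [m]
T = 1000.0 + 300.0 * np.tanh((0.1 - z) / 0.03)   # synthetic profile [K]

dT = np.gradient(T, z)
d2T = np.gradient(dT, z)
convex_ok = d2T < 0.0
print(z[convex_ok])  # positions where the criterion favors a convex interface
```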
New Convex and Spherical Structures of Bare Boron Clusters
NASA Astrophysics Data System (ADS)
Boustani, Ihsan
1997-10-01
New stable structures of bare boron clusters can easily be obtained and constructed with the help of an "Aufbau Principle" suggested by a systematic ab initio HF-SCF and direct CI study. It is concluded that boron cluster formation can be established by elemental units of pentagonal and hexagonal pyramids. New convex and small spherical clusters different from the classical known forms of boron crystal structures are obtained by a combination of both basic units. Convex structures simulate boron surfaces which can be considered as segments of open or closed spheres. Both convex clusters B16 and B46 have energies close to those of their conjugate quasi-planar clusters, which are relatively stable and can be considered to act as a calibration mark. The closed spherical clusters B12, B22, B32, and B42 are less stable than the corresponding conjugated quasi-planar structures. As a consequence, highly stable spherical boron clusters can systematically be predicted when their conjugate quasi-planar clusters are determined and energies are compared.
User's guide to the Octopus computer network (the SHOC manual)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schneider, C.; Thompson, D.; Whitten, G.
1977-07-18
This guide explains how to enter, run, and debug programs on the Octopus network. It briefly describes the network's operation, and directs the reader to other documents for further information. It stresses those service programs that will be most useful in the long run; ''quick'' methods that have little flexibility are not discussed. The Octopus timesharing network gives the user access to four CDC 7600 computers, two CDC STAR computers, and a broad array of peripheral equipment, from any of 800 or so remote terminals. 16 figures, 7 tables.
User's guide to the Octopus computer network (the SHOC manual)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schneider, C.; Thompson, D.; Whitten, G.
1976-10-07
This guide explains how to enter, run, and debug programs on the Octopus network. It briefly describes the network's operation, and directs the reader to other documents for further information. It stresses those service programs that will be most useful in the long run; ''quick'' methods that have little flexibility are not discussed. The Octopus timesharing network gives the user access to four CDC 7600 computers, two CDC STAR computers, and a broad array of peripheral equipment, from any of 800 or so remote terminals. 8 figures, 4 tables.
User's guide to the Octopus computer network (the SHOC manual)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schneider, C.; Thompson, D.; Whitten, G.
1975-06-02
This guide explains how to enter, run, and debug programs on the Octopus network. It briefly describes the network's operation, and directs the reader to other documents for further information. It stresses those service programs that will be most useful in the long run; ''quick'' methods that have little flexibility are not discussed. The Octopus timesharing network gives the user access to four CDC 7600 computers and a broad array of peripheral equipment, from any of 800 remote terminals. Octopus will soon include the Laboratory's STAR-100 computers. 9 figures, 5 tables. (auth)
Scaling of Convex Hull Volume to Body Mass in Modern Primates, Non-Primate Mammals and Birds
Brassey, Charlotte A.; Sellers, William I.
2014-01-01
The volumetric method of ‘convex hulling’ has recently been put forward as a mass prediction technique for fossil vertebrates. Convex hulling involves the calculation of minimum convex hull volumes (vol_CH) from the complete mounted skeletons of modern museum specimens, which are subsequently regressed against body mass (M_b) to derive predictive equations for extinct species. The convex hulling technique has recently been applied to estimate body mass in giant sauropods and fossil ratites, however the biomechanical signal contained within vol_CH has remained unclear. Specifically, when vol_CH scaling departs from isometry in a group of vertebrates, how might this be interpreted? Here we derive predictive equations for primates, non-primate mammals and birds and compare the scaling behaviour of M_b to vol_CH between groups. We find predictive equations to be characterised by extremely high correlation coefficients (r² = 0.97–0.99) and low mean percentage prediction error (11–20%). Results suggest non-primate mammals scale body mass to vol_CH isometrically (b = 0.92, 95% CI = 0.85–1.00, p = 0.08). Birds scale body mass to vol_CH with negative allometry (b = 0.81, 95% CI = 0.70–0.91, p = 0.011) and apparent density (vol_CH/M_b) therefore decreases with mass (r² = 0.36, p < 0.05). In contrast, primates scale body mass to vol_CH with positive allometry (b = 1.07, 95% CI = 1.01–1.12, p = 0.05) and apparent density therefore increases with size (r² = 0.46, p = 0.025). We interpret such departures from isometry in the context of the ‘missing mass’ of soft tissues that are excluded from the convex hulling process. We conclude that the convex hulling technique can be justifiably applied to the fossil record when a large proportion of the skeleton is preserved. However we emphasise the need for future studies to quantify interspecific variation in the distribution of soft tissues such as muscle, integument and body fat. PMID:24618736
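The workflow can be sketched with scipy: compute minimum convex hull volumes from digitized point clouds, then regress log mass on log volume; the point clouds and masses below are synthetic stand-ins for mounted skeletons.

```python
# Convex-hulling sketch: hull volumes from point clouds + log-log regression.
import numpy as np
from scipy.spatial import ConvexHull

rng = np.random.default_rng(3)
masses, volumes = [], []
for scale in [0.5, 1.0, 1.5, 2.0, 3.0]:          # five "specimens"
    pts = rng.normal(size=(200, 3)) * scale       # digitized landmark cloud
    volumes.append(ConvexHull(pts).volume)
    masses.append(900.0 * scale ** 3)             # mass ~ density * size^3

b, log_a = np.polyfit(np.log(volumes), np.log(masses), 1)
print(f"scaling exponent b = {b:.2f} (b = 1 would indicate isometry)")
```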
Massively parallel quantum computer simulator
NASA Astrophysics Data System (ADS)
De Raedt, K.; Michielsen, K.; De Raedt, H.; Trieu, B.; Arnold, G.; Richter, M.; Lippert, Th.; Watanabe, H.; Ito, N.
2007-01-01
We describe portable software to simulate universal quantum computers on massively parallel computers. We illustrate the use of the simulation software by running various quantum algorithms on different computer architectures, such as an IBM BlueGene/L, an IBM Regatta p690+, a Hitachi SR11000/J1, a Cray X1E, an SGI Altix 3700, and clusters of PCs running Windows XP. We study the performance of the software by simulating quantum computers containing up to 36 qubits, using up to 4096 processors and up to 1 TB of memory. Our results demonstrate that the simulator exhibits nearly ideal scaling as a function of the number of processors and suggest that the simulation software described in this paper may also serve as a benchmark for testing high-end parallel computers.
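The underlying arithmetic of such a simulator is compact: the state of n qubits is a complex vector of length 2^n, and a single-qubit gate is a contraction over one axis of the reshaped state. A toy serial sketch follows (nothing like the massively parallel implementation, but the same math):

```python
# Minimal state-vector simulator: apply Hadamard to every qubit of |000>.
import numpy as np

n = 3
state = np.zeros(2 ** n, dtype=complex)
state[0] = 1.0                                  # |000>

H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)

def apply_1q(state, gate, target, n):
    psi = state.reshape([2] * n)
    psi = np.tensordot(gate, psi, axes=([1], [target]))  # contract target axis
    psi = np.moveaxis(psi, 0, target)                    # restore axis order
    return psi.reshape(-1)

for q in range(n):
    state = apply_1q(state, H, q, n)

print(np.round(np.abs(state) ** 2, 3))  # uniform over all 8 basis states
```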
JAX Colony Management System (JCMS): an extensible colony and phenotype data management system.
Donnelly, Chuck J; McFarland, Mike; Ames, Abigail; Sundberg, Beth; Springer, Dave; Blauth, Peter; Bult, Carol J
2010-04-01
The Jackson Laboratory Colony Management System (JCMS) is a software application for managing data and information related to research mouse colonies, associated biospecimens, and experimental protocols. JCMS runs directly on computers that run one of the PC Windows operating systems, but can be accessed via web browser interfaces from any computer running a Windows, Macintosh, or Linux operating system. JCMS can be configured for a single user or multiple users in small- to medium-size work groups. The target audience for JCMS includes laboratory technicians, animal colony managers, and principal investigators. The application provides operational support for colony management and experimental workflows, sample and data tracking through transaction-based data entry forms, and date-driven work reports. Flexible query forms allow researchers to retrieve database records based on user-defined criteria. Recent advances in handheld computers with integrated barcode readers, middleware technologies, web browsers, and wireless networks add to the utility of JCMS by allowing real-time access to the database from any networked computer.
Interpolation Error Estimates for Mean Value Coordinates over Convex Polygons
Rand, Alexander; Gillette, Andrew; Bajaj, Chandrajit
2012-01-01
In a similar fashion to estimates shown for Harmonic, Wachspress, and Sibson coordinates in [Gillette et al., AiCM, to appear], we prove interpolation error estimates for the mean value coordinates on convex polygons suitable for standard finite element analysis. Our analysis is based on providing a uniform bound on the gradient of the mean value functions for all convex polygons of diameter one satisfying certain simple geometric restrictions. This work makes rigorous an observed practical advantage of the mean value coordinates: unlike Wachspress coordinates, the gradient of the mean value coordinates does not become large as interior angles of the polygon approach π. PMID:24027379
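For reference, evaluating the mean value coordinates at an interior point of a convex polygon uses the half-angle tangent formula w_i = (tan(α_{i−1}/2) + tan(α_i/2))/|v_i − x|; a sketch with an illustrative square:

```python
# Mean value coordinates at an interior point x of a convex polygon.
import numpy as np

def mean_value_coords(verts, x):
    d = verts - x                       # vectors from x to the vertices
    r = np.linalg.norm(d, axis=1)
    n = len(verts)
    ang = np.empty(n)
    for i in range(n):
        j = (i + 1) % n
        cos_a = np.dot(d[i], d[j]) / (r[i] * r[j])
        ang[i] = np.arccos(np.clip(cos_a, -1.0, 1.0))   # angle v_i - x - v_{i+1}
    t = np.tan(ang / 2.0)
    w = (t[np.arange(n) - 1] + t) / r   # t[i-1] + t[i], indices wrap around
    return w / w.sum()

square = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], float)
lam = mean_value_coords(square, np.array([0.25, 0.5]))
print(lam, lam @ square)                # linear precision: reproduces the point
```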
Interpolation Error Estimates for Mean Value Coordinates over Convex Polygons.
Rand, Alexander; Gillette, Andrew; Bajaj, Chandrajit
2013-08-01
In a similar fashion to estimates shown for Harmonic, Wachspress, and Sibson coordinates in [Gillette et al., AiCM, to appear], we prove interpolation error estimates for the mean value coordinates on convex polygons suitable for standard finite element analysis. Our analysis is based on providing a uniform bound on the gradient of the mean value functions for all convex polygons of diameter one satisfying certain simple geometric restrictions. This work makes rigorous an observed practical advantage of the mean value coordinates: unlike Wachspress coordinates, the gradient of the mean value coordinates does not become large as interior angles of the polygon approach π.
Impact of trailing edge shape on the wake and propulsive performance of pitching panels
NASA Astrophysics Data System (ADS)
Van Buren, T.; Floryan, D.; Brunner, D.; Senturk, U.; Smits, A. J.
2017-01-01
The effects of changing the trailing edge shape on the wake and propulsive performance of a pitching rigid panel are examined experimentally. The panel aspect ratio is AR=1 , and the trailing edges are symmetric chevron shapes with convex and concave orientations of varying degree. Concave trailing edges delay the natural vortex bending and compression of the wake, and the mean streamwise velocity field contains a single jet. Conversely, convex trailing edges promote wake compression and produce a quadfurcated wake with four jets. As the trailing edge shape changes from the most concave to the most convex, the thrust and efficiency increase significantly.
A Convex Approach to Fault Tolerant Control
NASA Technical Reports Server (NTRS)
Maghami, Peiman G.; Cox, David E.; Bauer, Frank (Technical Monitor)
2002-01-01
The design of control laws for dynamic systems with the potential for actuator failures is considered in this work. The use of Linear Matrix Inequalities allows more freedom in controller design criteria than is typically available with robust control. This work proposes an extension of fault-scheduled control design techniques that can find a fixed controller with provable performance over a set of plants. Through convexity of the objective function, performance bounds on this set of plants imply performance bounds on a range of systems defined by a convex hull. This is used to incorporate performance bounds for a variety of soft and hard failures into the control design problem.
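The flavor of the LMI computation can be sketched with cvxpy: certify quadratic stability of a family of plants (say, nominal and post-failure dynamics) with a single common Lyapunov matrix P. The two system matrices here are toy stand-ins, not from the paper.

```python
# LMI feasibility: find one P > 0 with A_i' P + P A_i < 0 for all plants.
import cvxpy as cp
import numpy as np

A1 = np.array([[-1.0, 0.5], [0.0, -2.0]])   # nominal plant
A2 = np.array([[-1.0, 0.0], [0.0, -2.0]])   # plant after an actuator failure

P = cp.Variable((2, 2), symmetric=True)
eps = 1e-3
constraints = [P >> eps * np.eye(2)]
for A in (A1, A2):
    constraints.append(A.T @ P + P @ A << -eps * np.eye(2))

prob = cp.Problem(cp.Minimize(0), constraints)
prob.solve(solver=cp.SCS)
print(prob.status, P.value)   # "optimal" means a common certificate exists
```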
Rapid Generation of Optimal Asteroid Powered Descent Trajectories Via Convex Optimization
NASA Technical Reports Server (NTRS)
Pinson, Robin; Lu, Ping
2015-01-01
This paper investigates a convex optimization based method that can rapidly generate the fuel optimal asteroid powered descent trajectory. The ultimate goal is to autonomously design the optimal powered descent trajectory on-board the spacecraft immediately prior to the descent burn. Compared to a planetary powered landing problem, the major difficulty is the complex gravity field near the surface of an asteroid that cannot be approximated by a constant gravity field. This paper uses relaxation techniques and a successive solution process that seeks the solution to the original nonlinear, nonconvex problem through the solutions to a sequence of convex optimal control problems.
Relaxation in control systems of subdifferential type
NASA Astrophysics Data System (ADS)
Tolstonogov, A. A.
2006-02-01
In a separable Hilbert space we consider a control system with evolution operators that are subdifferentials of a proper convex lower semicontinuous function depending on time. The constraint on the control is given by a multivalued function with non-convex values that is lower semicontinuous with respect to the state variable. Along with the original system we consider the system in which the constraint on the control is the upper semicontinuous convex-valued regularization of the original constraint. We study relations between the solution sets of these systems. As an application we consider a control variational inequality. We give an example of a control system of parabolic type with an obstacle.
Density of convex intersections and applications
Rautenberg, C. N.; Rösel, S.
2017-01-01
In this paper, we address density properties of intersections of convex sets in several function spaces. Using the concept of Γ-convergence, it is shown in a general framework, how these density issues naturally arise from the regularization, discretization or dualization of constrained optimization problems and from perturbed variational inequalities. A variety of density results (and counterexamples) for pointwise constraints in Sobolev spaces are presented and the corresponding regularity requirements on the upper bound are identified. The results are further discussed in the context of finite-element discretizations of sets associated with convex constraints. Finally, two applications are provided, which include elasto-plasticity and image restoration problems. PMID:28989301
Reducing the duality gap in partially convex programming
DOE Office of Scientific and Technical Information (OSTI.GOV)
Correa, R.
1994-12-31
We consider the non-linear minimization program α = min_{z ∈ D, x ∈ C} {f_0(z, x) : f_i(z, x) ≤ 0, i ∈ {1, ..., m}}, where the f_i(z, ·) are convex functions, C is convex, and D is compact. Following Ben-Tal, Eiger and Gershowitz we prove the existence of a partial dual program whose optimum is arbitrarily close to α. The idea corresponds to the branching principle in Branch and Bound methods. We describe such an algorithm for obtaining the desired partial dual.
On the polarizability dyadics of electrically small, convex objects
NASA Astrophysics Data System (ADS)
Lakhtakia, Akhlesh
1993-11-01
This communication on the polarizability dyadics of electrically small objects of convex shapes has been prompted by a recent paper published by Sihvola and Lindell on the polarizability dyadic of an electrically gyrotropic sphere. A mini-review of recent work on polarizability dyadics is appended.
NASA Astrophysics Data System (ADS)
Set, Erhan; Özdemir, M. Emin; Alan, E. Aykan
2017-04-01
In this article, by using Hölder's inequality and the power mean inequality, the authors establish several inequalities of Hermite-Hadamard type for n-times differentiable quasi-convex functions and P-functions involving Riemann-Liouville fractional integrals.
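For reference, the classical Hermite-Hadamard inequality that results of this type generalize states that for a convex function f on [a, b],

\[
  f\!\left(\frac{a+b}{2}\right) \;\le\; \frac{1}{b-a}\int_{a}^{b} f(x)\,dx \;\le\; \frac{f(a)+f(b)}{2}.
\]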
The NEST Dry-Run Mode: Efficient Dynamic Analysis of Neuronal Network Simulation Code.
Kunkel, Susanne; Schenck, Wolfram
2017-01-01
NEST is a simulator for spiking neuronal networks that commits to a general purpose approach: It allows for high flexibility in the design of network models, and its applications range from small-scale simulations on laptops to brain-scale simulations on supercomputers. Hence, developers need to test their code for various use cases and ensure that changes to code do not impair scalability. However, running a full set of benchmarks on a supercomputer takes up precious compute-time resources and can entail long queuing times. Here, we present the NEST dry-run mode, which enables comprehensive dynamic code analysis without requiring access to high-performance computing facilities. A dry-run simulation is carried out by a single process, which performs all simulation steps except communication as if it was part of a parallel environment with many processes. We show that measurements of memory usage and runtime of neuronal network simulations closely match the corresponding dry-run data. Furthermore, we demonstrate the successful application of the dry-run mode in the areas of profiling and performance modeling.
The NEST Dry-Run Mode: Efficient Dynamic Analysis of Neuronal Network Simulation Code
Kunkel, Susanne; Schenck, Wolfram
2017-01-01
NEST is a simulator for spiking neuronal networks that commits to a general purpose approach: It allows for high flexibility in the design of network models, and its applications range from small-scale simulations on laptops to brain-scale simulations on supercomputers. Hence, developers need to test their code for various use cases and ensure that changes to code do not impair scalability. However, running a full set of benchmarks on a supercomputer takes up precious compute-time resources and can entail long queuing times. Here, we present the NEST dry-run mode, which enables comprehensive dynamic code analysis without requiring access to high-performance computing facilities. A dry-run simulation is carried out by a single process, which performs all simulation steps except communication as if it was part of a parallel environment with many processes. We show that measurements of memory usage and runtime of neuronal network simulations closely match the corresponding dry-run data. Furthermore, we demonstrate the successful application of the dry-run mode in the areas of profiling and performance modeling. PMID:28701946
ATLAS@Home: Harnessing Volunteer Computing for HEP
NASA Astrophysics Data System (ADS)
Adam-Bourdarios, C.; Cameron, D.; Filipčič, A.; Lancon, E.; Wu, W.; ATLAS Collaboration
2015-12-01
A recent common theme in HEP computing is the exploitation of opportunistic resources to provide the maximum possible statistics for Monte Carlo simulation. Volunteer computing has been used over the last few years in many other scientific fields, and by CERN itself to run simulations of the LHC beams. The ATLAS@Home project was started to allow volunteers to run simulations of collisions in the ATLAS detector. So far, many thousands of members of the public have signed up to contribute their spare CPU cycles for ATLAS, and there is potential for volunteer computing to provide a significant fraction of ATLAS computing resources. Here we describe the design of the project, the lessons learned so far, and future plans.
Convexity of Energy-Like Functions: Theoretical Results and Applications to Power System Operations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dvijotham, Krishnamurthy; Low, Steven; Chertkov, Michael
2015-01-12
Power systems are undergoing unprecedented transformations with the increased adoption of renewables and distributed generation, as well as the adoption of demand response programs. All of these changes, while making the grid more responsive and potentially more efficient, pose significant challenges for power systems operators. Conventional operational paradigms are no longer sufficient, as the power system may no longer have big dispatchable generators with sufficient positive and negative reserves. This increases the need for tools and algorithms that can efficiently predict safe regions of operation of the power system. In this paper, we study energy functions as a tool to design algorithms for various operational problems in power systems. These have a long history in power systems and have been primarily applied to transient stability problems. In this paper, we take a new look at power systems, focusing on an aspect that has previously received little attention: convexity. We characterize the domain of voltage magnitudes and phases within which the energy function is convex in these variables. We show that this corresponds naturally with standard operational constraints imposed in power systems. We show that the power flow equations can be solved using this approach, as long as the solution lies within the convexity domain. We outline various desirable properties of solutions in the convexity domain and present simple numerical illustrations supporting our results.
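As a concrete illustration of the kind of convexity domain involved (a standard lossless-network special case worked out from first principles here, not the paper's full generality): with fixed voltage magnitudes V_i, the classical energy function

    E(θ) = − Σ_k P_k θ_k − Σ_{(i,j)} B_ij V_i V_j cos(θ_i − θ_j)

has a Hessian equal to a weighted graph Laplacian whose edge weights are B_ij V_i V_j cos(θ_i − θ_j). The Hessian is therefore positive semidefinite, and E convex, on the region where every line satisfies |θ_i − θ_j| ≤ π/2, which is exactly the kind of small-angle operational constraint routinely imposed in practice.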
L1-2 minimization for exact and stable seismic attenuation compensation
NASA Astrophysics Data System (ADS)
Wang, Yufeng; Ma, Xiong; Zhou, Hui; Chen, Yangkang
2018-06-01
Frequency-dependent amplitude absorption and phase velocity dispersion are typically linked by the causality-imposed Kramers-Kronig relations, which inevitably degrade the quality of seismic data. Seismic attenuation compensation is an important processing approach for enhancing signal resolution and fidelity; it can be performed on either pre-stack or post-stack data to mitigate the amplitude absorption and phase dispersion effects resulting from the intrinsic anelasticity of subsurface media. Inversion-based compensation with an L1-norm constraint, motivated by the sparsity of the reflectivity series, enjoys better stability than traditional inverse Q filtering. However, constrained L1 minimization, serving as the convex relaxation of the literal L0 sparsity count, may not give the sparsest solution when the kernel matrix is severely ill-conditioned. Recently, non-convex metrics for compressed sensing have attracted considerable research interest. In this paper, we propose a nearly unbiased approximation of the vector sparsity, denoted L1-2 minimization, for exact and stable seismic attenuation compensation. The non-convex L1-2 penalty can be handled by the difference-of-convex algorithm (DCA), which splits it into two convex parts, and each resulting convex subproblem can be solved efficiently by the alternating direction method of multipliers (ADMM). The superior performance of the proposed compensation scheme based on the L1-2 metric over the conventional L1 penalty is further demonstrated by both synthetic and field examples.
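A minimal numerical sketch of the L1-2 idea (generic NumPy, not the authors' code; for brevity the convex subproblems are solved by plain ISTA rather than ADMM, and all problem sizes are toy assumptions):

    import numpy as np

    def soft_threshold(v, t):
        return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

    def l1_minus_l2(A, b, lam=0.1, outer=20, inner=200):
        """min_x 0.5*||Ax - b||^2 + lam*(||x||_1 - ||x||_2) via the
        difference-of-convex algorithm: linearize -lam*||x||_2 at x_k."""
        n = A.shape[1]
        x = np.zeros(n)
        L = np.linalg.norm(A, 2) ** 2               # Lipschitz constant of the smooth part
        for _ in range(outer):
            nx = np.linalg.norm(x)
            v = x / nx if nx > 0 else np.zeros(n)   # subgradient of ||x||_2
            for _ in range(inner):                  # ISTA on the convex subproblem
                grad = A.T @ (A @ x - b) - lam * v
                x = soft_threshold(x - grad / L, lam / L)
        return x

    # Toy usage: recover a 3-sparse reflectivity-like vector from noisy data.
    rng = np.random.default_rng(0)
    A = rng.standard_normal((40, 100))
    x_true = np.zeros(100); x_true[[3, 40, 77]] = [1.5, -2.0, 1.0]
    b = A @ x_true + 0.01 * rng.standard_normal(40)
    print(np.flatnonzero(np.abs(l1_minus_l2(A, b)) > 0.1))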
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Li; Gao, Yaozong; Shi, Feng
Purpose: Cone-beam computed tomography (CBCT) is an increasingly utilized imaging modality for the diagnosis and treatment planning of patients with craniomaxillofacial (CMF) deformities. Accurate segmentation of CBCT images is an essential step in generating three-dimensional (3D) models for the diagnosis and treatment planning of patients with CMF deformities. However, due to poor image quality, including a very low signal-to-noise ratio and widespread image artifacts such as noise, beam hardening, and inhomogeneity, it is challenging to segment CBCT images. In this paper, the authors present a new automatic segmentation method to address these problems. Methods: The authors propose a method for fully automated CBCT segmentation that uses patch-based sparse representation to (1) segment bony structures from the soft tissues and (2) further separate the mandible from the maxilla. Specifically, a region-specific registration strategy is first proposed to warp all the atlases to the current testing subject, and then a sparse-based label propagation strategy is employed to estimate a patient-specific atlas from all aligned atlases. Finally, the patient-specific atlas is integrated into a maximum a posteriori probability-based convex segmentation framework for accurate segmentation. Results: The proposed method has been evaluated on a dataset with 15 CBCT images. The effectiveness of the proposed region-specific registration strategy and patient-specific atlas has been validated by comparison with the traditional registration strategy and a population-based atlas. The experimental results show that the proposed method achieves the best segmentation accuracy in comparison with other state-of-the-art segmentation methods. Conclusions: The authors have proposed a new CBCT segmentation method using patch-based sparse representation and convex optimization, which achieves highly accurate segmentation results on 15 patients.
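A toy sketch of patch-based sparse-representation label propagation in the spirit described (all names and sizes here are illustrative assumptions, not the authors' pipeline): each target patch is coded over a dictionary of aligned-atlas patches with an L1 penalty, and the atlas labels are fused with the resulting sparse weights.

    import numpy as np
    from sklearn.linear_model import Lasso

    def sparse_label_fusion(target_patch, atlas_patches, atlas_labels, alpha=0.01):
        """Estimate a soft label for one voxel's patch by sparse coding over
        the atlas-patch dictionary and propagating the atlas labels."""
        # Columns of the dictionary are the flattened atlas patches.
        D = np.stack([p.ravel() for p in atlas_patches], axis=1)
        coder = Lasso(alpha=alpha, positive=True, max_iter=5000)
        coder.fit(D, target_patch.ravel())
        w = coder.coef_
        if w.sum() == 0:
            return 0.0                                   # no atlas support: background
        return float(np.dot(w, atlas_labels) / w.sum())  # soft label in [0, 1]

    # Toy usage with random 5x5 patches and binary bone/soft-tissue labels.
    rng = np.random.default_rng(1)
    atlas_patches = [rng.random((5, 5)) for _ in range(30)]
    atlas_labels = rng.integers(0, 2, size=30)
    print(sparse_label_fusion(atlas_patches[0] + 0.01, atlas_patches, atlas_labels))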
Understanding the Performance and Potential of Cloud Computing for Scientific Applications
Sadooghi, Iman; Martin, Jesus Hernandez; Li, Tonglin; ...
2015-02-19
Commercial clouds bring a great opportunity to the scientific computing area. Scientific applications usually require significant resources, yet not all scientists have access to sufficiently high-end computing systems, many of which can be found in the Top500 list. Cloud computing has gained the attention of scientists as a competitive resource for running HPC applications at a potentially lower cost. But as a different kind of infrastructure, it is unclear whether clouds are capable of running scientific applications with reasonable performance per money spent. This work studies the performance of public clouds and places this performance in the context of price. We evaluate the raw performance of different services of the AWS cloud in terms of basic resources such as compute, memory, network, and I/O, and we also evaluate the performance of scientific applications running in the cloud. This paper aims to assess the ability of the cloud to perform well, as well as to evaluate the cost of running scientific applications in the cloud. We developed a full set of metrics and conducted a comprehensive performance evaluation of the Amazon cloud. We evaluated EC2, S3, EBS, and DynamoDB among the many Amazon AWS services. We evaluated the memory subsystem performance with CacheBench, the network performance with iperf, processor and network performance with the HPL benchmark application, and shared storage with NFS and PVFS in addition to S3. We also evaluated a real scientific computing application through the Swift parallel scripting system at scale. Armed with both detailed benchmarks to gauge expected performance and a detailed monetary cost analysis, we expect this paper will be a recipe cookbook for scientists to help them decide where to deploy and run their scientific applications: public clouds, private clouds, or hybrid clouds.