For comprehensive and current results, perform a real-time search at Science.gov.

1

Parallel solver for trajectory optimization search directions

NASA Technical Reports Server (NTRS)

A key algorithmic element of a real-time trajectory optimization hardware/software implementation is presented, the search step solver. This is one piece of an algorithm whose overall goal is to make nonlinear trajectory optimization fast enough to provide real-time commands during guidance of a vehicle such as an aeromaneuvering orbiter or the National Aerospace Plane. Many methods of nonlinear programming require the solution of a quadratic program (QP) at each iteration to determine the search step. In the trajectory optimization case, the QP has a special dynamic programming structure. The algorithm exploits this special structure with a divide-and-conquer type of parallel implementation. The algorithm solves a (p·N)-stage problem on N processors in O(p + log2 N) operations. The algorithm yields a factor of 8 speed-up over the fastest known serial algorithm when solving a 1024-stage test problem on 32 processors.

Psiaki, M. L.; Park, K. H.

1992-01-01
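The divide-and-conquer structure this abstract describes, merging adjacent stages pairwise so that an N-stage problem collapses in O(log2 N) combine rounds, can be sketched generically. This is an illustrative sketch only: `combine` is a hypothetical stand-in (here simple addition) for condensing two adjacent QP stages, not the paper's solver.

```python
from concurrent.futures import ThreadPoolExecutor

def combine(a, b):
    # Hypothetical stand-in for merging two adjacent stage sub-problems;
    # in the paper this would condense two QP stages into one.
    return a + b

def tree_reduce(stages, pool):
    """Reduce a list of per-stage results in O(log2 N) parallel rounds."""
    while len(stages) > 1:
        pairs = [(stages[i], stages[i + 1]) for i in range(0, len(stages) - 1, 2)]
        merged = list(pool.map(lambda p: combine(*p), pairs))
        if len(stages) % 2:          # odd element carries over to the next round
            merged.append(stages[-1])
        stages = merged
    return stages[0]

with ThreadPoolExecutor(max_workers=8) as pool:
    result = tree_reduce(list(range(1024)), pool)
```

With `combine` as addition the reduction just computes a sum, but the same tree shape applies to any associative merge of stage sub-problems.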

2

We present a survey of parallel local search algorithms in which we review the concepts that can be used to incorporate parallelism into local search. For this purpose we distinguish between single-walk and multiple-walk parallel local search and between asynchronous and synchronous parallelism. Within the class of single-walk algorithms we differentiate between multiple-step and single-step parallelism. To describe parallel local

M. G. A. Verhoeven; E. H. L. Aarts

1995-01-01

3

Efficiency of parallel direct optimization

NASA Technical Reports Server (NTRS)

Tremendous progress has been made at the level of sequential computation in phylogenetics. However, little attention has been paid to parallel computation. Parallel computing is particularly suited to phylogenetics because of the many ways large computational problems can be broken into parts that can be analyzed concurrently. In this paper, we investigate the scaling factors and efficiency of random addition and tree refinement strategies using the direct optimization software, POY, on a small (10 slave processors) and a large (256 slave processors) cluster of networked PCs running LINUX. These algorithms were tested on several data sets composed of DNA and morphology ranging from 40 to 500 taxa. Various algorithms in POY show fundamentally different properties within and between clusters. All algorithms are efficient on the small cluster for the 40-taxon data set. On the large cluster, multibuilding exhibits excellent parallel efficiency, whereas parallel building is inefficient. These results are independent of data set size. Branch swapping in parallel shows excellent speed-up for 16 slave processors on the large cluster. However, there is no appreciable speed-up for branch swapping with the further addition of slave processors (>16). This result is independent of data set size. Ratcheting in parallel is efficient with the addition of up to 32 processors in the large cluster. This result is independent of data set size. © 2001 The Willi Hennig Society.

Janies, D. A.; Wheeler, W. C.

2001-01-01

4

Parallel Search Algorithm for Geometric Constraints Solving

In this paper, we propose a hybrid algorithm (Parallel Search Algorithm) to solve geometric constraint problems. First, particle swarm optimization is employed to gain parallelization while solution diversity is maintained. Second, the simplex method reduces the number of infeasible solutions while solution quality is improved with an operation order search. Performance results on geometric constraint problems show that the parallel search algorithm

Kong Zhao; Hua Yuan; Wenhui Li; Rongqin Yi

2007-01-01

5

ASYNCHRONOUS PARALLEL PATTERN SEARCH FOR NONLINEAR OPTIMIZATION

PATRICIA D. HOUGH, TAMARA G. KOLDA. Vol. 23, No. 1, pp. 134-156. Abstract. We introduce a new asynchronous parallel pattern search (APPS). Parallel pattern search can be quite useful for engineering optimization problems characterized by a small number

Kolda, Tamara G.

6

ASYNCHRONOUS PARALLEL PATTERN SEARCH FOR NONLINEAR OPTIMIZATION

PATRICIA D. HOUGH, TAMARA G. KOLDA. Vol. 23, No. 1, pp. 134-156. Abstract. We introduce a new asynchronous parallel pattern search (APPS). Parallel pattern search can be quite useful for engineering optimization problems characterized

Kolda, Tamara G.

7

Directed Graphs digraph search

Directed graphs: digraph search, transitive closure, topological sort, strong components. A digraph is a set of objects with oriented pairwise connections. Digraph applications (digraph; vertex; edge): financial (stock, currency; transaction), transportation (street

Sedgewick, Robert

8

Parallel Search Algorithm for Geometric Constraints Solving

We propose a hybrid algorithm – (Parallel Search Algorithm) between PSO and simplex methods to approximate optimal solution\\u000a for the Geometric Constraint problems. Locally, simplex is extended to reduce the number of infeasible solutions while solution\\u000a quality is improved with an operation order search. Globally, PSO is employed to gain parallelization while solution diversity\\u000a is maintained. Performance results on Geometric

Hua Yuan; Wenhui Li; Kong Zhao; Rongqin Yi

2007-01-01

9

Asynchronous parallel pattern search for nonlinear optimization

Parallel pattern search (PPS) can be quite useful for engineering optimization problems characterized by a small number of variables (say 10-50) and by expensive objective function evaluations such as complex simulations that take from minutes to hours to run. However, PPS, which was originally designed for execution on homogeneous and tightly-coupled parallel machines, is not well suited to the more heterogeneous, loosely-coupled, and even fault-prone parallel systems available today. Specifically, PPS is hindered by synchronization penalties and cannot recover in the event of a failure. The authors introduce a new asynchronous and fault tolerant parallel pattern search (APPS) method and demonstrate its effectiveness on both simple test problems as well as some engineering optimization problems.

P. D. Hough; T. G. Kolda; V. J. Torczon

2000-01-01
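The contrast between synchronous pattern search and the asynchronous variant can be illustrated with a small sketch. This is not the authors' APPS code: the objective `f` is a made-up quadratic, thread workers stand in for processors, and the first improving trial point to complete is accepted without waiting for the rest.

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

def f(x):
    # Made-up objective; APPS targets simulations taking minutes to hours.
    return (x[0] - 1.0) ** 2 + (x[1] + 2.0) ** 2

def pattern_search(x, step=1.0, tol=1e-3, workers=4):
    fx = f(x)
    with ThreadPoolExecutor(max_workers=workers) as pool:
        while step > tol:
            # One trial point per search direction (+/- each coordinate).
            trials = [tuple(x[j] + step * d * (j == i) for j in range(len(x)))
                      for i in range(len(x)) for d in (+1, -1)]
            futures = {pool.submit(f, t): t for t in trials}
            improved = False
            for fut in as_completed(futures):   # handle results as they finish
                if fut.result() < fx:
                    x, fx = futures[fut], fut.result()
                    improved = True
                    break                       # accept the first improvement
            if not improved:
                step /= 2.0                     # contract the pattern
    return x, fx

x_best, f_best = pattern_search((0.0, 0.0))
```

Breaking out of `as_completed` as soon as an improvement arrives, rather than waiting on a barrier for all trial points, is the asynchronous idea in miniature; the stragglers simply finish in the background.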

10

HOPSPACK: Hybrid Optimization Parallel Search Package.

In this paper, we describe the technical details of HOPSPACK (Hybrid Optimization Parallel Search Package), a new software platform which facilitates combining multiple optimization routines into a single, tightly-coupled, hybrid algorithm that supports parallel function evaluations. The framework is designed such that existing optimization source code can be easily incorporated with minimal code modification. By maintaining the integrity of each individual solver, the strengths and code sophistication of the original optimization package are retained and exploited.

Gray, Genetha A.; Kolda, Tamara G.; Griffin, Joshua; Taddy, Matt; Martinez-Canales, Monica

2008-12-01

11

PARALLEL GREEDY RANDOMIZED ADAPTIVE SEARCH ...

Dec 6, 2004 ... but sub-linear for the job shop scheduling problem. .... superimposed variability information is used to analyze the straightness of the Q-Q plots. Aiex and Resende .... of version 1.2 of the Message-Passing Interface (MPI) [65] specification. ..... a description of the implementation of the local search procedure.

2004-12-06

12

Parallel Mechanisms for Visual Search in Zebrafish

Parallel visual search mechanisms have been reported previously only in mammals and birds, and not animals lacking an expanded telencephalon such as bees. Here we report the first evidence for parallel visual search in fish using a choice task where the fish had to find a target amongst an increasing number of distractors. Following two-choice discrimination training, zebrafish were presented with the original stimulus within an increasing array of distractor stimuli. We found that zebrafish exhibit no significant change in accuracy and approach latency as the number of distractors increased, providing evidence of parallel processing. This evidence challenges theories of vertebrate neural architecture and the importance of an expanded telencephalon for the evolution of executive function. PMID:25353168

Proulx, Michael J.; Parker, Matthew O.; Tahir, Yasser; Brennan, Caroline H.

2014-01-01

13

Parallel Alternating-Direction Access Machine

This paper presents a theoretical study of a model of parallel computations called the Parallel Alternating-Direction Access Machine (PADAM). PADAM is an abstraction of the multiprocessor computers ADENA/ADENART and a prototype architecture USC/OMP. The main feature of PADAM is the organization of access to the global memory: (1) the memory modules are arranged as a 2-dimensional array, (2) each processor is assigned to a row

Bogdan S. Chlebus; Artur Czumaj; Leszek Gasieniec; Miroslaw Kowaluk; Wojciech Plandowski

1996-01-01

14

Parallel depth first search. Part I. Implementation

This paper presents a parallel formulation of depth-first search which retains the storage efficiency of sequential depth-first search and can be mapped on to any MIMD architecture. To study its effectiveness it has been implemented to solve the 15-puzzle problem on three commercially available multiprocessors: the Sequent Balance 21000, the Intel Hypercube and the BBN Butterfly. The authors have been able to achieve fairly linear speedup on the Sequent up to 30 processors (the maximum configuration available) and on the Intel Hypercube and BBN Butterfly up to 128 processors (the maximum configurations available). Many researchers considered the ring architecture to be quite suitable for parallel depth-first search. Their experimental results show that hypercube and shared-memory architectures are significantly better. At the heart of their parallel formulation is a dynamic work distribution scheme that divides the work between different processors. The effectiveness of the parallel formulation is strongly influenced by the work distribution scheme and architectural features such as presence/absence of shared memory, the diameter of the network, relative speed of the communication network, etc. In a companion paper, they analyze the effectiveness of different load-balancing schemes and architectures, and also present new improved work distribution schemes.

Rao, V.N.; Kumar, V.

1987-12-01
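The dynamic work distribution idea, each processor searching depth-first on a local stack while donating surplus subtrees so idle processors stay busy, can be sketched with threads and a shared queue. This is a minimal illustrative scheme, not the authors' formulation; the names and the example tree are invented.

```python
import threading, queue

def parallel_dfs(root, children, workers=4):
    """Count the leaves of a tree by DFS. Each worker keeps a local stack
    (storage-efficient, as in sequential DFS) and donates surplus subtrees
    to a shared queue: a simple stand-in for dynamic work distribution."""
    work = queue.Queue()
    work.put(root)
    leaves = [0] * workers             # one slot per worker, no contention
    pending = [1]                      # subtrees handed out, not yet finished
    done = threading.Event()
    lock = threading.Lock()

    def run(wid):
        while not done.is_set():
            try:
                node = work.get(timeout=0.05)
            except queue.Empty:
                continue
            stack = [node]
            while stack:
                n = stack.pop()
                kids = children(n)
                if not kids:
                    leaves[wid] += 1
                else:
                    stack.append(kids[0])      # keep one child locally
                    with lock:
                        pending[0] += len(kids) - 1
                    for k in kids[1:]:         # donate the rest
                        work.put(k)
            with lock:
                pending[0] -= 1
                if pending[0] == 0:
                    done.set()                 # all subtrees finished

    threads = [threading.Thread(target=run, args=(i,)) for i in range(workers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return sum(leaves)

# Complete binary tree of depth 10: nodes carry their depth, leaves at depth 10.
total = parallel_dfs(0, lambda d: [d + 1, d + 1] if d < 10 else [])
```

The `pending` counter is the termination detector: the search is over only when every handed-out subtree, including donated ones, has been fully expanded.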

15

A Parallel VLSI Direction Finding Algorithm

NASA Astrophysics Data System (ADS)

In this paper, we present a parallel VLSI architecture that is matched to a class of direction (frequency, pole) finding algorithms of type ESPRIT. The problem is modeled in such a way that it allows an easy to partition full parallel VLSI implementation, using unitary transformations only. The hard problem, the generalized Schur decomposition of a matrix pencil, is tackled using a modified Stewart Jacobi approach that improves convergence and simplifies parameter computations. The proposed architecture is a fixed size, 2-layer Jacobi iteration array that is matched to all sub-problems of the main problem: 2 QR-factorizations, 2 SVD's and a single GSD-problem. The arithmetic used is (pipelined) Cordic.

van der Veen, Alle-Jan; Deprettere, Ed F.

1988-02-01

16

Massively Parallel Direct Simulation of Multiphase Flow

The authors' understanding of multiphase physics and the associated predictive capability for multi-phase systems are severely limited by current continuum modeling methods and experimental approaches. This research will deliver an unprecedented modeling capability to directly simulate three-dimensional multi-phase systems at the particle scale. The model solves the fully coupled equations of motion governing the fluid phase and the individual particles comprising the solid phase using a newly discovered, highly efficient coupled numerical method based on the discrete-element method and the Lattice-Boltzmann method. A massively parallel implementation will enable the solution of large, physically realistic systems.

COOK,BENJAMIN K.; PREECE,DALE S.; WILLIAMS,J.R.

2000-08-10

17

Toward a Taxonomy of Parallel Tabu Search Heuristics

In this paper we present a classification of parallel tabu search metaheuristics based, on the one hand, on the control and communication strategies used in the design of the parallel tabu search procedures and, on the other hand, on how the search space is partitioned. These criteria are then used to review the parallel tabu search implementations described in the literature. The taxonomy is

Teodor Gabriel Crainic; Michel Toulouse; Michel Gendreau

1997-01-01

18

Algorithm 856: APPSPACK 4.0: Asynchronous Parallel Pattern Search

Algorithm 856: APPSPACK 4.0: Asynchronous Parallel Pattern Search for Derivative-Free Optimization. APPSPACK is software for solving unconstrained and bound-constrained optimization problems. It implements an asynchronous parallel pattern search method for derivative-free optimization.

Kolda, Tamara G.

19

ON THE CONVERGENCE OF ASYNCHRONOUS PARALLEL PATTERN SEARCH

TAMARA G. KOLDA AND VIRGINIA J. TORCZON. Abstract. In this paper we prove global convergence for asynchronous parallel pattern search. In standard pattern search, decisions regarding the update of the iterate and the step-length control parameter

Kolda, Tamara G.

20

A Library Hierarchy for Implementing Scalable Parallel Search Algorithms

T. K. Ralphs, L. Lad... libraries forming a hierarchy built on top of ALPS. The first is the Branch, Constrain, and Price Software for performing large-scale parallel search in distributed-memory computing environments. To support the devel

Ralphs, Ted

21

Generalized quantum search with parallelism. Robert M. Gingrich

... computing. In tandem with these hardware developments, there has been a parallel development of new quantum ... a hybrid use of quantum computing and classical computing techniques can yield a performance that is better ... connects the degree of parallelism with the expected computation time for k-parallel quantum search

Cerf, Nicolas

22

Asynchronous parallel generating set search for linearly-constrained optimization.

We describe an asynchronous parallel derivative-free algorithm for linearly-constrained optimization. Generating set search (GSS) is the basis of our method. At each iteration, a GSS algorithm computes a set of search directions and corresponding trial points and then evaluates the objective function value at each trial point. Asynchronous versions of the algorithm have been developed in the unconstrained and bound-constrained cases which allow the iterations to continue (and new trial points to be generated and evaluated) as soon as any other trial point completes. This enables better utilization of parallel resources and a reduction in overall runtime, especially for problems where the objective function takes minutes or hours to compute. For linearly-constrained GSS, the convergence theory requires that the set of search directions conform to the nearby boundary. The complexity of developing the asynchronous algorithm for the linearly-constrained case has to do with maintaining a suitable set of search directions as the search progresses and is the focus of this research. We describe our implementation in detail, including how to avoid function evaluations by caching function values and using approximate look-ups. We test our implementation on every CUTEr test problem with general linear constraints and up to 1000 variables. Without tuning to individual problems, our implementation was able to solve 95% of the test problems with 10 or fewer variables, 75% of the problems with 11-100 variables, and nearly half of the problems with 100-1000 variables. To the best of our knowledge, these are the best results that have ever been achieved with a derivative-free method. Our asynchronous parallel implementation is freely available as part of the APPSPACK software.

Kolda, Tamara G.; Griffin, Joshua; Lewis, Robert Michael

2007-04-01
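The function-value caching with approximate look-ups mentioned above can be sketched simply: round each coordinate to a tolerance and use the rounded tuple as the cache key, so nearby trial points share one expensive evaluation. The keying scheme here is our guess at the idea, not APPSPACK's actual mechanism, and `expensive_f` is a made-up stand-in.

```python
calls = {"n": 0}

def expensive_f(x):
    """Stand-in for a simulation that takes minutes or hours to run."""
    calls["n"] += 1
    return sum((xi - 1.0) ** 2 for xi in x)

class CachingEvaluator:
    """Avoid repeated evaluations: points whose coordinates agree after
    rounding to a tolerance share one cached value (an 'approximate
    look-up'; the rounding-based key is an illustrative choice)."""
    def __init__(self, f, tol=1e-8):
        self.f = f
        self.tol = tol
        self.cache = {}

    def __call__(self, x):
        key = tuple(round(xi / self.tol) for xi in x)
        if key not in self.cache:
            self.cache[key] = self.f(x)    # only genuinely new points pay
        return self.cache[key]

ev = CachingEvaluator(expensive_f)
a = ev((0.5, 0.25))
b = ev((0.5 + 1e-12, 0.25))   # within tolerance: served from the cache
```

The second call never reaches `expensive_f`; for objectives costing minutes per evaluation, even a modest cache hit rate repays the bookkeeping many times over.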

23

Performance Analysis of Two Parallel Game-Tree Search Applications

Game-tree search plays an important role in the field of artificial intelligence. In this paper we analyze scalability performance of two parallel game-tree search applications in chess on two shared-memory multiprocessor systems. One is a recently-proposed Parallel Randomized Best-First Minimax search algorithm (PRBFM) in a chess-playing program, and the other is Crafty, a state-of-the-art alpha-beta-based chess-playing program. The analysis shows

Yurong Chen; Ying Tan; Yimin Zhang; Carole Dulong

2006-01-01

24

Dark matter is hypothetical matter which does not interact with electromagnetic radiation. The existence of dark matter is inferred only from gravitational effects in astrophysical observations, invoked to explain the missing mass component of the Universe. Weakly Interacting Massive Particles are currently the most popular candidate to explain the missing mass component. I review the current status of experimental searches for dark matter through direct detection using terrestrial detectors.

Yoo, Jonghee; /Fermilab

2009-12-01

25

Parallel Best-First Search: Optimal and Suboptimal Solutions. Ethan Burns, Seth Lemons, Wheeler Ruml, Rong Zhou.

Ruml, Wheeler

26

Parallel/distributed direct method for solving linear systems

NASA Technical Reports Server (NTRS)

A new family of parallel schemes for directly solving linear systems is presented and analyzed. It is shown that these schemes exhibit a near optimal performance and enjoy several important features: (1) For large enough linear systems, the design of the appropriate parallel algorithm is insensitive to the number of processors as its performance grows monotonically with them; (2) It is especially good for large matrices, with dimensions large relative to the number of processors in the system; (3) It can be used in both distributed parallel computing environments and tightly coupled parallel computing systems; and (4) This set of algorithms can be mapped onto any parallel architecture without any major programming difficulties or algorithmic changes.

Lin, Avi

1990-01-01

27

Series-parallel method of direct solar array regulation

NASA Technical Reports Server (NTRS)

A 40 watt experimental solar array was directly regulated by shorting out appropriate combinations of series and parallel segments of a solar array. Regulation switches were employed to control the array at various set-point voltages between 25 and 40 volts. Regulation to within + or - 0.5 volt was obtained over a range of solar array temperatures and illumination levels as an active load was varied from open circuit to maximum available power. A fourfold reduction in regulation switch power dissipation was achieved with series-parallel regulation as compared to the usual series-only switching for direct solar array regulation.

Gooder, S. T.

1976-01-01

28

A parallelization of the row-searching algorithm

NASA Astrophysics Data System (ADS)

The problem dealt with in this paper concerns the parallelization of the row-searching algorithm, which allows the search for linearly dependent rows of a given matrix, and its implementation in an MPI (Message Passing Interface) environment. This algorithm is largely used in control theory and more specifically in solving the famous diophantine equation. An introduction to the diophantine equation is presented, then two parallelization approaches of the algorithm are detailed. The first distributes a set of rows over processes (processors) and the second makes a distribution per blocks. The sequential algorithm and its two parallel forms are implemented using MPI routines, then modelled using UML (Unified Modelling Language) and finally evaluated using algorithmic complexity.

Yaici, Malika; Khaled, Hayet; Khaled, Zakia; Bentahar, Athmane

2012-11-01
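The row-searching step itself, finding rows that are linear combinations of earlier rows, can be sketched sequentially; the paper's two parallel variants distribute exactly these per-row eliminations over MPI processes, by rows or by blocks. A minimal sketch (not the authors' code):

```python
def dependent_rows(mat, eps=1e-10):
    """Return indices of rows that are linear combinations of earlier rows.
    Each incoming row is eliminated against the basis of independent rows
    found so far; a row that reduces to (numerically) zero is dependent."""
    basis, dependent = [], []
    for i, row in enumerate(mat):
        r = list(row)
        for b in basis:
            # Pivot: first nonzero entry of this basis row.
            pivot = next(j for j, v in enumerate(b) if abs(v) > eps)
            factor = r[pivot] / b[pivot]
            r = [rv - factor * bv for rv, bv in zip(r, b)]
        if all(abs(v) <= eps for v in r):
            dependent.append(i)       # fully eliminated: dependent row
        else:
            basis.append(r)           # survives: new independent direction
    return dependent

# Row 2 equals row 0 plus row 1, so it should be flagged as dependent.
deps = dependent_rows([[1.0, 0.0, 0.0],
                       [0.0, 1.0, 0.0],
                       [1.0, 1.0, 0.0],
                       [0.0, 0.0, 1.0]])
```

The inner elimination over `zip(r, b)` is the unit of work the parallel versions split up: per-row distribution hands whole rows to different processes, while block distribution splits each row's columns.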

29

Parallel Performance Optimization of the Direct Simulation Monte Carlo Method

NASA Astrophysics Data System (ADS)

Although the direct simulation Monte Carlo (DSMC) particle method is more computationally intensive compared to continuum methods, it is accurate for conditions ranging from continuum to free-molecular, accurate in highly non-equilibrium flow regions, and holds potential for incorporating advanced molecular-based models for gas-phase and gas-surface interactions. As available computer resources continue their rapid growth, the DSMC method is continually being applied to increasingly complex flow problems. Although processor clock speed continues to increase, a trend of increasing multi-core-per-node parallel architectures is emerging. To effectively utilize such current and future parallel computing systems, a combined shared/distributed memory parallel implementation (using both Open Multi-Processing (OpenMP) and Message Passing Interface (MPI)) of the DSMC method is under development. The parallel implementation of a new state-of-the-art 3D DSMC code employing an embedded 3-level Cartesian mesh will be outlined. The presentation will focus on performance optimization strategies for DSMC, which includes, but is not limited to, modified algorithm designs, practical code-tuning techniques, and parallel performance optimization. Specifically, key issues important to the DSMC shared memory (OpenMP) parallel performance are identified as (1) granularity (2) load balancing (3) locality and (4) synchronization. Challenges and solutions associated with these issues as they pertain to the DSMC method will be discussed.

Gao, Da; Zhang, Chonglin; Schwartzentruber, Thomas

2009-11-01

30

Multi-directional local search

This paper introduces multi-directional local search, a metaheuristic for multi-objective optimization. We first motivate the method and present an algorithmic framework for it. We then apply it to several known multi-objective problems such as the multi-objective multi-dimensional knapsack problem, the bi-objective set packing problem and the bi-objective orienteering problem. Experimental results show that our method systematically provides solution sets of comparable quality with state-of-the-art methods applied to benchmark instances of these problems, within reasonable CPU effort. We conclude that the proposed algorithmic framework is a viable option when solving multi-objective optimization problems. PMID:25140071

Tricoire, Fabien

2012-01-01

31

AGLSDC: A Genetic Local Search Suitable for Parallel Computation

NASA Astrophysics Data System (ADS)

Because evolutionary algorithms (EAs) generally require many repeated evaluations of objective functions, it often takes considerable time to solve optimization problems. Parallel computation is one means to shorten the required computation time. In earlier works, the authors proposed an EA suitable for coarse-grained parallel computers, a genetic local search with distance independent diversity control (GLSDC). Though GLSDC has been applied successfully to several practical problems, its parallel efficiency abruptly drops off as the number of CPUs for computation increases. To achieve a higher parallel efficiency, the authors now propose a new EA, an asynchronous GLSDC (AGLSDC), constructed by reworking the algorithm of GLSDC. This paper introduces the proposed method and reports verification of the method through numerical experiments on several benchmark problems and a practical problem.

Kimura, Shuhei; Nakakuki, Takashi; Kirita, Seiji; Okada, Mariko

32

Nonsymmetric Search Directions for Semidefinite Programming

Two nonsymmetric search directions for semidefinite programming, the XZ and ZX search directions, are proposed. They are derived from a nonsymmetric formulation of the semidefinite programming problem. The XZ direction corresponds to the direct linearization of the central path equation XZ = I; while the ZX direction corresponds to ZX = I. The XZ and ZX directions are well defined if both

Florian A. Potra; Rongqin Sheng

1997-01-01

33

Searching for an Axis-Parallel Shoreline

NASA Astrophysics Data System (ADS)

We are searching for an unknown horizontal or vertical line in the plane under the competitive framework. We design a framework for lower bounds on all cyclic and monotone strategies that result in two-sequence functionals. For optimizing such functionals we apply a method that combines two main paradigms. The given solution shows that the combination method is of general interest. Finally, we obtain the current best strategy and can prove that this is the best strategy among all cyclic and monotone strategies which is a main step toward a lower bound construction.

Langetepe, Elmar

34

PARALLEL AND CONCURRENT SEARCH FOR FAST AND/OR TREE SEARCH ON MULTICORE PROCESSORS

This paper proposes a fast AND/OR tree search algorithm using a multiple paths parallel and concurrent search scheme for embedded multicore processors. Currently, not only PCs or supercomputers but also information appliances such as game consoles, mobile devices and car navigation systems are equipped with multicore processors for better cost performance and lower power consumption. However, the number

Fumiyo Takano; Yoshitaka Maekawa; Hironori Kasahara

2009-01-01

35

Scalable parallel word search in multicore/multiprocessor systems

This paper presents a parallel algorithm for fast word search to determine the set of biological words of an input DNA sequence. The algorithm is designed to scale well on state-of-the-art multiprocessor/multicore systems for large inputs and large maximum word sizes. The pattern exhibited by many sequential solutions to this problem is a repetitive execution over a large input DNA

Frank Drews; Jens Lichtenberg; Lonnie R. Welch

2010-01-01

36

Scatter Search Algorithms for Identical Parallel Machine Scheduling Problems

We address the Identical Parallel Machine Scheduling Problem, one of the most important basic problems in scheduling theory, and some generalizations of it arising from real world situations. We survey the current state of the art for the most performing meta-heuristic algorithms for this class of problems, with special emphasis on recent results obtained through Scatter Search. We present insights

Manuel Iori; Silvano Martello

2008-01-01

37

Parallel Breadth-First Search on Distributed Memory Systems

Data-intensive, graph-based computations are pervasive in several scientific applications, and are known to be quite challenging to implement on distributed memory systems. In this work, we explore the design space of parallel algorithms for Breadth-First Search (BFS), a key subroutine in several graph algorithms. We present two highly-tuned parallel approaches for BFS on large parallel systems: a level-synchronous strategy that relies on a simple vertex-based partitioning of the graph, and a two-dimensional sparse matrix-partitioning-based approach that mitigates parallel communication overhead. For both approaches, we also present hybrid versions with intra-node multithreading. Our novel hybrid two-dimensional algorithm reduces communication times by up to a factor of 3.5, relative to a common vertex based approach. Our experimental study identifies execution regimes in which these approaches will be competitive, and we demonstrate extremely high performance on leading distributed-memory parallel systems. For instance, for a 40,000-core parallel execution on Hopper, an AMD Magny-Cours based system, we achieve a BFS performance rate of 17.8 billion edge visits per second on an undirected graph of 4.3 billion vertices and 68.7 billion edges with skewed degree distribution.

Computational Research Division; Buluc, Aydin; Madduri, Kamesh

2011-04-15
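A level-synchronous BFS, the first of the two strategies above, expands the entire frontier one level at a time. The serial sketch below marks where the distributed version parallelizes and synchronizes; the graph and names are illustrative, not from the paper.

```python
def bfs_levels(adj, source):
    """Level-synchronous BFS: the whole current frontier is expanded before
    the next level starts. In the distributed-memory version each process
    owns a slice of the vertices, the inner loop runs in parallel, and the
    frontier hand-off is the communication (and synchronization) step."""
    level = {source: 0}
    frontier = [source]
    depth = 0
    while frontier:
        depth += 1
        next_frontier = []
        for u in frontier:                  # parallel region in a real BFS
            for v in adj[u]:
                if v not in level:          # first visit assigns the level
                    level[v] = depth
                    next_frontier.append(v)
        frontier = next_frontier            # implicit barrier between levels
    return level

# Small example graph given as adjacency lists.
levels = bfs_levels({0: [1, 2], 1: [3], 2: [3], 3: []}, 0)
```

The per-level barrier is what makes this strategy simple but communication-bound; the paper's 2D matrix-partitioning approach restructures exactly this frontier exchange.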

38

Direct drive digital servo press with high parallel control

NASA Astrophysics Data System (ADS)

The direct drive digital servo press has been developed through university-industry joint research and development since 1998. On the basis of this result, a 4-axis direct drive digital servo press was developed and put on the market in April 2002. This servo press is composed of one slide supported by 4 ball screws, and each axis has a linear scale measuring its position with high accuracy, at the micrometer level or better. Each axis is controlled independently by a servo motor and feedback system. This system can keep high-level parallelism and high accuracy even under a highly eccentric load. Furthermore, 'full stroke full power' is obtained by using ball screws. Using these features, various new types of press forming and stamping have been obtained through development and production. The new stamping and forming methods are introduced, along with a strategy for press forming with high added value to meet manufacturing needs and the future direction of press forming.

Murata, Chikara; Yabe, Jun; Endou, Junichi; Hasegawa, Kiyoshi

2013-12-01

39

Fencing direct memory access (`DMA`) data transfers in a parallel active messaging interface (`PAMI`) of a parallel computer, the PAMI including data communications endpoints, each endpoint including specifications of a client, a context, and a task, the endpoints coupled for data communications through the PAMI and through DMA controllers operatively coupled to segments of shared random access memory through which the DMA controllers deliver data communications deterministically, including initiating execution through the PAMI of an ordered sequence of active DMA instructions for DMA data transfers between two endpoints, effecting deterministic DMA data transfers through a DMA controller and a segment of shared memory; and executing through the PAMI, with no FENCE accounting for DMA data transfers, an active FENCE instruction, the FENCE instruction completing execution only after completion of all DMA instructions initiated prior to execution of the FENCE instruction for DMA data transfers between the two endpoints.

Blocksome, Michael A; Mamidala, Amith R

2014-02-11

40

Fencing direct memory access (`DMA`) data transfers in a parallel active messaging interface (`PAMI`) of a parallel computer, the PAMI including data communications endpoints, each endpoint including specifications of a client, a context, and a task, the endpoints coupled for data communications through the PAMI and through DMA controllers operatively coupled to segments of shared random access memory through which the DMA controllers deliver data communications deterministically, including initiating execution through the PAMI of an ordered sequence of active DMA instructions for DMA data transfers between two endpoints, effecting deterministic DMA data transfers through a DMA controller and a segment of shared memory; and executing through the PAMI, with no FENCE accounting for DMA data transfers, an active FENCE instruction, the FENCE instruction completing execution only after completion of all DMA instructions initiated prior to execution of the FENCE instruction for DMA data transfers between the two endpoints.

Blocksome, Michael A.; Mamidala, Amith R.

2013-09-03

41

New Directions in Direct Dark Matter Searches

I present the status of direct dark matter detection with specific attention to the experimental results and their phenomenological interpretation in terms of dark matter interactions. In particular I review a new and more general approach to study signals in this field based on non-relativistic operators which parametrize more efficiently the dark matter-nucleus interactions in terms of a very limited number of relevant degrees of freedom. Then I list the major experimental results, pointing out the main uncertainties that affect the theoretical interpretation of the data. Finally, since the underlying theory that describes both the dark matter and the standard model fields is unknown, I address the uncertainties coming from the nature of the interaction. In particular, the phenomenology of a class of models in which the interaction between dark matter particles and target nuclei is of a long-range type is discussed.

Panci, Paolo

2014-01-01

42

Visual Motion-Detection Circuits in Flies: Parallel Direction- and Non-Direction-Sensitive Pathways

…into parallel retinotopic pathways that subsequently are reunited at higher levels. In insects, achromatic… to the lobula. Further parallel subdivisions of the retinotopic pathways to the lobula plate have been suggested…

Bermingham, Eldredge

43

Fast parallel tandem mass spectral library searching using GPU hardware acceleration

Mass spectrometry-based proteomics is a maturing discipline of biological research that is experiencing substantial growth. Instrumentation has steadily improved over time with the advent of faster and more sensitive instruments collecting ever larger data files. Consequently, the computational process of matching a peptide fragmentation pattern to its sequence, traditionally accomplished by sequence database searching and more recently also by spectral library searching, has become a bottleneck in many mass spectrometry experiments. In both of these methods, the main rate-limiting step is the comparison of an acquired spectrum with all potential matches from a spectral library or sequence database. This is a highly parallelizable process because the core computational element can be represented as a simple but arithmetically intense multiplication of two vectors. In this paper we present a proof-of-concept project taking advantage of the massively parallel computing available on graphics processing units (GPUs) to distribute and accelerate the process of spectral assignment using spectral library searching. This program, which we have named FastPaSS (for Fast Parallelized Spectral Searching), is implemented in CUDA (Compute Unified Device Architecture) from NVIDIA, which allows direct access to the processors in an NVIDIA GPU. Our efforts demonstrate the feasibility of GPU computing for spectral assignment, through implementation of the validated spectral searching algorithm SpectraST in the CUDA environment. PMID:21545112
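The rate-limiting comparison described above reduces to a dot product between the acquired spectrum and every candidate library spectrum. A CPU sketch with NumPy standing in for the CUDA kernel (the binning and all names here are illustrative, not the FastPaSS API):

```python
import numpy as np

def score_spectra(query, library):
    """Score a query spectrum against every library spectrum.

    query:   1-D array of binned peak intensities.
    library: 2-D array, one row per library spectrum (same binning).
    Returns the dot-product score of the query against each row --
    the arithmetically intense step a GPU implementation offloads.
    """
    return library @ query

rng = np.random.default_rng(0)
library = rng.random((1000, 512))             # 1000 library spectra, 512 m/z bins
query = library[42] + 0.01 * rng.random(512)  # noisy copy of library entry 42

scores = score_spectra(query, library)
best = int(np.argmax(scores))                 # index of the best-matching entry
```

Because every row is scored independently, the same computation maps naturally onto one GPU thread block per library spectrum.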

Baumgardner, Lydia Ashleigh; Shanmugam, Avinash Kumar; Lam, Henry; Eng, Jimmy K.; Martin, Daniel B.

2011-01-01

44

USING SIMPLEX GRADIENTS OF NONSMOOTH FUNCTIONS IN DIRECT SEARCH METHODS

…in the context of direct search methods like the Generalized Pattern Search (GPS) and the Mesh Adaptive Direct… gradients known for smooth functions. Secondly, we test the use of simplex gradients when pattern search… Keywords: pattern search methods, mesh adaptive direct search. AMS subject classifications: 65D05, 90C30, 90C56.

Vicente, LuÃs Nunes

45

Exploiting Parallelism to Accelerate Keyword Search On Deep-web Sources

Tantan Liu, Fan Wang, Gagan Agrawal. …Increasingly, biological data is being shared over the deep web. Many biological queries can only… that exploits parallelization for accelerating search over multiple deep web data sources. An interactive, two…

Agrawal, Gagan

46

Exact Quantum Search by Parallel Unitary Discrimination Schemes

We study the unsorted database search problem with $N$ items from the viewpoint of unitary discrimination. Instead of considering the famous $O(\sqrt{N})$ bounded-error Grover algorithm for the original problem, we seek results about exact algorithms, i.e. the ones that succeed with certainty. Under the standard oracle model $\sum_j (-1)^{\delta_{\tau j}}|j\rangle\langle j|$, we demonstrate a tight lower bound ${2/3}N+o(N)$ on the number of queries for any parallel scheme with unentangled input states. With the assistance of entanglement, we obtain a general lower bound ${1/2}(N-\sqrt{N})$. We provide concrete examples to illustrate our results. In particular, we show that the case of $N=6$ can be solved exactly with only two queries by using a bipartite entangled input state. Our results indicate that in the standard oracle model the complexity of exact quantum search with one unique solution can be strictly less than that of the calculation of the OR function.

Xiaodi Wu; Runyao Duan

2008-06-09

47

DIRECT EMULATION OF CONTROL STRUCTURES BY A PARALLEL MICRO-COMPUTER

SLAC-127, UC-32. This is a preliminary investigation of the organization of a parallel micro-computer designed to emulate a wide variety of sequential and parallel computers. This micro-computer allows tailoring of the control structure…

Massachusetts at Amherst, University of

48

Neural networks and tabu search are two very significant techniques which have emerged recently for the solution of discrete optimization problems. Neural networks possess the desirable quality of implementability in massively parallel hardware, while the tabu search metaheuristic shows great promise as a powerful global search method. Tabu Neural Network (TANN) integrates an analog version of the short-term memory…

Shivakumar Vaithyanathan; Laura I. Burke; Michael A. Magent

1996-01-01

49

Directions in parallel programming: HPF, shared virtual memory and object parallelism in pC++

NASA Technical Reports Server (NTRS)

Fortran and C++ are the dominant programming languages used in scientific computation. Consequently, extensions to these languages are the most popular for programming massively parallel computers. We discuss two such approaches to parallel Fortran and one approach to C++. The High Performance Fortran Forum has designed HPF with the intent of supporting data parallelism on Fortran 90 applications. HPF works by asking the user to help the compiler distribute and align the data structures with the distributed memory modules in the system. Fortran-S takes a different approach in which the data distribution is managed by the operating system and the user provides annotations to indicate parallel control regions. In the case of C++, we look at pC++ which is based on a concurrent aggregate parallel model.
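As an illustration of the data-distribution idea behind HPF: under a BLOCK distribution each processor owns one contiguous chunk of an array, so the compiler can compute any index's owner in O(1). A hypothetical sketch of that mapping (not HPF syntax; function names are mine):

```python
import math

def block_owner(i, n, p):
    """Owner of global index i (0-based) for an n-element array
    BLOCK-distributed over p processors, HPF-style: each processor
    gets one contiguous chunk of ceil(n/p) elements."""
    return i // math.ceil(n / p)

def block_range(rank, n, p):
    """Half-open [lo, hi) range of global indices owned by processor rank."""
    chunk = math.ceil(n / p)
    return rank * chunk, min((rank + 1) * chunk, n)
```

For example, a 10-element array over 4 processors gives chunks of 3, so the owners of indices 0..9 are 0,0,0,1,1,1,2,2,2,3. An aligned second array would reuse the same mapping, which is what HPF's alignment directives let the user express.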

Bodin, Francois; Priol, Thierry; Mehrotra, Piyush; Gannon, Dennis

1994-01-01

50

The perfect search engine is not enough: a study of orienteering behavior in directed search

This paper presents a modified diary study that investigated how people performed personally motivated searches in their email, in their files, and on the Web. Although earlier studies of directed search focused on keyword search, most of the search behavior we observed did not involve keyword search. Instead of jumping directly to their information target using keywords, our participants navigated…

Jaime Teevan; Christine Alvarado; Mark S. Ackerman; David R. Karger

2004-01-01

51

J. Parallel Distrib. Comput. 70 (2010) 270–281

…King Abdullah University of Science and Technology (KAUST), Saudi Arabia… In this paper we present a new parallel multi-frontal direct solver, dedicated for the hp…

Torres-VerdÃn, Carlos

52

Oblio: A Sparse Direct Solver Library for Serial and Parallel Computations

Florin Dobrian et al. We present Oblio, a sparse direct solver library running in both serial and parallel… the solution of sparse linear systems of equations represents the key computation in several critical industry…

Pothen, Alex

53

Bs ->mu+ mu- versus Direct Higgs Searches

We investigate the prospects for the discovery of neutral Higgs bosons with muons by direct searches at the CERN Large Hadron Collider (LHC) as well as by indirect searches in the rare decay Bs -> mu+ mu- at the Fermilab Tevatron and the LHC. Promising results have been found for the minimal supersymmetric standard model, the minimal supergravity (mSUGRA) model, and supergravity models with non-universal Higgs masses (NUHM SUGRA). For $\tan\beta \simeq 50$, we find that (i) the contours for a branching fraction of B(Bs -> mu+ mu-) = 1x10^{-8} in the parameter space are very close to the $5\sigma$ contours for pp -> b $\phi^0$ -> b mu+ mu- + X, $\phi^0$ = h^0, H^0, A^0 at the LHC with an integrated luminosity (L) of 30 fb^{-1}; (ii) the regions covered by B(Bs -> mu+ mu-) $\ge$ 5x10^{-9} and the discovery region for b$\phi^0$ -> b mu+ mu- with 300 fb^{-1} are complementary in the mSUGRA parameter space; (iii) in NUHM SUGRA models, a discovery of B(Bs -> mu+ mu-) $\simeq$ 5x10^{-9} at the LHC will cover regions of the parameter space beyond the direct search for pp -> b$\phi^0$ -> b mu+ mu- with L = 300 fb^{-1}.

Chung Kao; Yili Wang

2006-10-31

54

Improving the efficiency of parallel alternating directions algorithm for time dependent problems

NASA Astrophysics Data System (ADS)

We consider the time dependent Stokes equation on a finite time interval and on a uniform rectangular mesh, written in terms of velocity and pressure. A parallel algorithm based on a direction splitting approach is implemented. Our work is motivated by the need to improve the parallel efficiency of our supercomputer implementation of the parallel algorithm. We are targeting the IBM Blue Gene/P massively parallel computer, which features a 3D torus interconnect. We study the impact of the domain partitioning on the performance of the considered parallel algorithm for solving the time dependent Stokes equation. Here, different parallel partitioning strategies are given special attention. The implementation is tested on the IBM Blue Gene/P, and the presented results from numerical tests confirm that by decreasing the communication time, better parallel properties of the algorithm are obtained.

Ganzha, Maria; Kosturski, Nikola; Lirkov, Ivan

2012-10-01

55

A directed search for extraterrestrial laser signals

NASA Technical Reports Server (NTRS)

The focus of NASA's Search for Extraterrestrial Intelligence (SETI) Program is on microwave frequencies, where receivers have the best sensitivities for the detection of narrowband signals. Such receivers, when coupled to existing radio telescopes, form an optimal system for broad area searches over the sky. For a directed search, however, such as toward specific stars, calculations show that infrared wavelengths can be equally as effective as radio wavelengths for establishing an interstellar communication link. This is true because infrared telescopes have higher directivities (gains) that effectively compensate for the lower sensitivities of infrared receivers. The result is that, for a given level of transmitted power, the signal to noise ratio for communications is equally as good at infrared and radio wavelengths. It should also be noted that the overall sensitivities of both receiver systems are quite close to their respective fundamental limits: background thermal noise for the radio frequency system and quantum noise for the infrared receiver. Consequently, the choice of an optimum communication frequency may well be determined more by the achievable power levels of transmitters rather than the ultimate sensitivities of receivers at any specific frequency. In the infrared, CO2 laser transmitters with power levels greater than 1 MW can already be built on Earth. For a slightly more advanced civilization, a similar but enormously more powerful laser may be possible using a planetary atmosphere rich in CO2. Because of these possibilities and our own ignorance of what is really the optimum search frequency, a search for narrowband signals at infrared frequencies should be a part of a balanced SETI Program. Detection of narrowband infrared signals is best done with a heterodyne receiver functionally identical to a microwave spectral line receiver. We have built such a receiver for the detection of CO2 laser radiation at wavelengths near 10 microns. 
The spectrometer uses a high-speed HgCdTe diode as the photomixer and a small CO2 laser as the local oscillator. Output signals in the intermediate frequency range 0.1-2.6 GHz are processed by a 1000-channel acousto-optic signal processor. The receiver is being used on a 1.5-m telescope on Mt. Wilson to survey a selected sample of 150 nearby stars. The current status of the work is discussed along with future project plans.

Betz, A.

1991-01-01

56

Design and Implementation of a Scalable Parallel Direct Solver for Sparse Symmetric Positive Definite Systems

Solving large sparse systems of linear equations is at the core of many problems in engineering and scientific computing. It has long been a challenge to develop parallel formulations of sparse…

Karypis, George

57

The APHID Parallel Search Algorithm Mark G. Brockington and Jonathan Schaeffer

This paper introduces the APHID (Asynchronous Parallel Hierarchical Iterative Deepening) game-tree search algorithm. APHID represents a departure from the approaches used in practice. Instead…

Schaeffer, Jonathan

58

Direct binary search (DBS) algorithm with constraints

NASA Astrophysics Data System (ADS)

In this paper, we describe adding constraints to the Direct Binary Search (DBS) algorithm. An example of a useful constraint, illustrated in this paper, is having only one dot per column and row. DBS with such constraints requires greater than two toggles during each trial operation. Implementations of the DBS algorithm traditionally limit operations to either one toggle or swap during each trial. The example case in this paper produces a wrap-around pattern with uniformly distributed ON pixels which will have a pleasing appearance with precisely one ON pixel per each column and row. The algorithm starts with an initial continuous tone image and an initial pattern having only one ON pixel per column and row. The autocorrelation function of the Human Visual System (HVS) model is determined along with an initial perceived error. Multiple-operation pixel error processing during each iteration is used to enforce the one ON pixel per column and row constraint. The constraint of a single ON pixel per column and row is used as an example in this paper. Further modification of the DBS algorithm for other constraints is possible, based on the details given in the paper. A mathematical framework to extend the algorithm to the more general case of Direct Multi-bit Search (DMS) is presented.
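The constraint handling described above can be illustrated with a toy version of the search loop: the ON pixels form a permutation (exactly one per row and column), and each trial operation swaps the column assignments of two rows, accepted only if the error drops. Here a simple cost matrix stands in for the HVS-filtered perceived error of the paper; everything below is an illustrative sketch, not the authors' algorithm:

```python
import numpy as np

def constrained_search(cost, iters=200, seed=0):
    """Toy constrained direct search: ON pixels form a permutation
    (exactly one per row and column). Each trial swaps the column
    assignments of two rows and keeps the swap only if the total
    cost drops. cost[r, c] stands in for the perceived error of
    placing the ON pixel of row r in column c.
    """
    rng = np.random.default_rng(seed)
    n = cost.shape[0]
    perm = rng.permutation(n)            # perm[r] = ON column of row r
    total = cost[np.arange(n), perm].sum()
    for _ in range(iters):
        r1, r2 = rng.choice(n, size=2, replace=False)
        old = cost[r1, perm[r1]] + cost[r2, perm[r2]]
        new = cost[r1, perm[r2]] + cost[r2, perm[r1]]
        if new < old:                    # accept only improving swaps
            perm[r1], perm[r2] = perm[r2], perm[r1]
            total += new - old
    return perm, total

rng = np.random.default_rng(1)
cost = rng.random((8, 8))
perm, total = constrained_search(cost)
```

Because every trial operation is a swap within a permutation, the one-ON-pixel-per-row-and-column constraint is preserved by construction, which is the essential point of the constrained trial operations in the abstract.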

Chandu, Kartheek; Stanich, Mikel; Wu, Chai Wah; Trager, Barry

2013-02-01

59

Nonlinearly-constrained optimization using asynchronous parallel generating set search.

Many optimization problems in computational science and engineering (CS&E) are characterized by expensive objective and/or constraint function evaluations paired with a lack of derivative information. Direct search methods such as generating set search (GSS) are well understood and efficient for derivative-free optimization of unconstrained and linearly-constrained problems. This paper addresses the more difficult problem of general nonlinear programming where derivatives for objective or constraint functions are unavailable, which is the case for many CS&E applications. We focus on penalty methods that use GSS to solve the linearly-constrained problems, comparing different penalty functions. A classical choice for penalizing constraint violations is $\ell_2^2$, the squared $\ell_2$ norm, which has advantages for derivative-based optimization methods. In our numerical tests, however, we show that exact penalty functions based on the $\ell_1$, $\ell_2$, and $\ell_\infty$ norms converge to good approximate solutions more quickly and thus are attractive alternatives. Unfortunately, exact penalty functions are discontinuous and consequently introduce theoretical problems that degrade the final solution accuracy, so we also consider smoothed variants. Smoothed-exact penalty functions are theoretically attractive because they retain the differentiability of the original problem. Numerically, they are a compromise between exact and $\ell_2^2$, i.e., they converge to a good solution somewhat quickly without sacrificing much solution accuracy. Moreover, the smoothing is parameterized and can potentially be adjusted to balance the two considerations. Since many CS&E optimization problems are characterized by expensive function evaluations, reducing the number of function evaluations is paramount, and the results of this paper show that exact and smoothed-exact penalty functions are well-suited to this task.
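The trade-off described — an exact $\ell_1$-type penalty recovers the constrained solution while the squared $\ell_2$ penalty leaves a bias — can be seen on a one-variable toy problem, using a scalar compass search as a crude stand-in for GSS (everything here is illustrative, not the paper's method):

```python
def compass_min(f, x0=0.0, step=1.0, tol=1e-8):
    """1-D compass search: try x +/- step; halve the step when
    neither move improves. A scalar stand-in for generating set search."""
    x = x0
    while step > tol:
        if f(x + step) < f(x):
            x += step
        elif f(x - step) < f(x):
            x -= step
        else:
            step /= 2
    return x

# Toy problem: minimize (x - 2)^2 subject to x <= 1.
obj = lambda x: (x - 2.0) ** 2
viol = lambda x: max(0.0, x - 1.0)          # constraint violation

rho = 4.0
x_l1 = compass_min(lambda x: obj(x) + rho * viol(x))       # exact l1 penalty
x_sq = compass_min(lambda x: obj(x) + rho * viol(x) ** 2)  # squared l2 penalty
```

With $\rho = 4$ the exact penalty's minimizer sits on the constraint boundary $x = 1$, while the squared penalty's minimizer is the biased point $x = (2+\rho)/(1+\rho) = 1.2$; driving that bias to zero requires letting $\rho \to \infty$, which is exactly why exact penalties are attractive when function evaluations are expensive.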

Griffin, Joshua D.; Kolda, Tamara Gibson

2007-05-01

60

We consider a family of primal/primal-dual/dual search directions for the monotone LCP over the space of $n \times n$ symmetric block-diagonal matrices. We consider two infeasible predictor-corrector path-following methods using these search directions, with the predictor and corrector steps used either in series (similar to the Mizuno-Todd-Ye method) or in parallel (similar to Mizuno et al./McShane's method). The methods attain global linear convergence with a convergence…

Paul Tseng

1996-01-01

61

Background Protein structure prediction (PSP), which is usually modeled as a computational optimization problem, remains one of the biggest challenges in computational biology. PSP encounters two difficult obstacles: the inaccurate energy function problem and the searching problem. Even if the lowest energy has been luckily found by the searching procedure, the correct protein structures are not guaranteed to be obtained. Results A general parallel metaheuristic approach is presented to tackle the above two problems. Multiple energy functions are employed to simultaneously guide the parallel searching threads. Searching trajectories are in fact controlled by the parameters of heuristic algorithms. The parallel approach allows the parameters to be perturbed while the searching threads are running in parallel, with each thread searching for the lowest energy value determined by an individual energy function. By hybridizing the intelligences of parallel ant colonies and Monte Carlo Metropolis search, this paper demonstrates an implementation of our parallel approach for PSP. 16 classical instances were tested to show that the parallel approach is competitive for solving the PSP problem. Conclusions This parallel approach combines various sources of both searching intelligence and energy functions, and thus predicts protein conformations with good quality jointly determined by all the parallel searching threads and energy functions. It provides a framework to combine different searching intelligence embedded in heuristic algorithms. It also constructs a container to hybridize different not-so-accurate objective functions which are usually derived from domain expertise. PMID:23028708

Lü, Qiang; Xia, Xiao-Yan; Chen, Rong; Miao, Da-Jun; Chen, Sha-Sha; Quan, Li-Jun; Li, Hai-Ou

2012-01-01

62

Direct Dark Matter Search with XENON100

The XENON100 experiment is the second phase of the XENON program for the direct detection of the dark matter in the universe. The XENON100 detector is a two-phase Time Projection Chamber filled with 161 kg of ultra pure liquid xenon. The results from 224.6 live days of dark matter search with XENON100 are presented. No evidence for dark matter in the form of WIMPs is found, excluding spin-independent WIMP-nucleon scattering cross sections above 2 $\\times$ 10$^{-45}$ cm$^2$ for a 55 GeV/c$^2$ WIMP at 90% confidence level (C.L.). The most stringent limit is established on the spin-dependent WIMP-neutron interaction for WIMP masses above 6 GeV/c$^2$, with a minimum cross section of 3.5 $\\times$ 10$^{-40}$ cm$^2$ (90% C.L.) for a 45 GeV/c$^2$ WIMP. The same dataset is used to search for axions and axion-like-particles. The best limits to date are set on the axion-electron coupling constant for solar axions, $g_{Ae}$ < 7.7 $\\times$ 10$^{-12}$ (90% C.L.), and for axion-like-particles, $g_{Ae}$ < 1 $\\times$ 10$^{-12}$ (90% C.L.) for masses between 5 and 10 keV/c$^2$.

S. E. A. Orrigo; for the XENON Collaboration

2015-01-14

63

Direct Dark Matter Search with XENON100

The XENON100 experiment is the second phase of the XENON program for the direct detection of the dark matter in the universe. The XENON100 detector is a two-phase Time Projection Chamber filled with 161 kg of ultra pure liquid xenon. The results from 224.6 live days of dark matter search with XENON100 are presented. No evidence for dark matter in the form of WIMPs is found, excluding spin-independent WIMP-nucleon scattering cross sections above 2 $\\times$ 10$^{-45}$ cm$^2$ for a 55 GeV/c$^2$ WIMP at 90% confidence level (C.L.). The most stringent limit is established on the spin-dependent WIMP-neutron interaction for WIMP masses above 6 GeV/c$^2$, with a minimum cross section of 3.5 $\\times$ 10$^{-40}$ cm$^2$ (90% C.L.) for a 45 GeV/c$^2$ WIMP. The same dataset is used to search for axions and axion-like-particles. The best limits to date are set on the axion-electron coupling constant for solar axions, $g_{Ae}$ < 7.7 $\\times$ 10$^{-12}$ (90% C.L.), and for axion-like-particles, $g_{Ae}$ < 1 $\\times$ 10...

Orrigo, S E A

2015-01-01

64

Searching for an axis-parallel shoreline

Slide fragment (COCOA '10, Dec. 19th 2010, © Elmar Langetepe): for a search path ending at the shoreline point $p_l$, with $|Ol|$ the length of the shortest path, the competitive ratio is the worst-case performance $C := \sup_l |p_l O| / |Ol|$.

Eckmiller, Rolf

65

High-performance parallel sparse-direct triangular solves (Invited)

NASA Astrophysics Data System (ADS)

Geophysical inverse problems are increasingly posed in the frequency domain in a manner which requires solving many challenging heterogeneous 3D Helmholtz or linear elastic wave equations at each iteration. One effective means of solving such problems, at least when there is no large-scale internal resonance, is to use moving-PML "sweeping preconditioners". Each application of the sweeping preconditioner involves performing many modest-sized sparse-direct triangular solves -- unfortunately, one at a time. While P. et al. have shown that, with a careful implementation of a distributed sparse-direct solver [1,2], challenging 3D problems approaching a billion degrees of freedom can be solved in a few minutes using less than 10,000 cores, this talk discusses how to leverage the existence of many right-hand sides in order to increase the performance of the preconditioner applications by orders of magnitude. [1] http://github.com/poulson/Clique [2] http://github.com/poulson/PSP
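The performance lever named above — amortizing a triangular solve over many right-hand sides — can be sketched directly: substituting against a block of right-hand sides turns each step into a vectorized rank-1 update instead of scalar work repeated per vector. A minimal illustration (not the cited solver's implementation):

```python
import numpy as np

def forward_solve(L, B):
    """Solve L X = B for lower-triangular L, with many right-hand
    sides stored as the columns of B. Each elimination step updates
    all right-hand sides at once (a rank-1, BLAS-like operation),
    which is far more efficient than solving the columns one by one."""
    n = L.shape[0]
    X = B.astype(float).copy()
    for i in range(n):
        X[i] /= L[i, i]                       # scale pivot row for every RHS
        X[i + 1:] -= np.outer(L[i + 1:, i], X[i])  # eliminate below, all RHS
    return X

rng = np.random.default_rng(0)
L = np.tril(rng.random((5, 5))) + 5 * np.eye(5)  # well-conditioned lower-tri
B = rng.random((5, 3))                           # three right-hand sides
X = forward_solve(L, B)
```

The same idea, applied to the sparse triangular factors inside the sweeping preconditioner, is what lets batching right-hand sides raise arithmetic intensity by orders of magnitude.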

Poulson, J.; Ying, L.

2013-12-01

66

iPRIDE: a parallel integrated circuit simulator using direct method

A parallel circuit simulator, iPRIDE, which uses a direct solution method and runs on a shared-memory multiprocessor, is described. The simulator is based on a multilevel node-tearing approach which produces a nested bordered-block-diagonal (BBD) form of the circuit equation matrix. The parallel solution of the nested BBD matrix is described. Its efficiency is shown to depend on how the…

Mi-Chang Chang; I. N. Hajj

1988-01-01

67

APPSPACK 4.0 : asynchronous parallel pattern search for derivative-free optimization.

APPSPACK is software for solving unconstrained and bound constrained optimization problems. It implements an asynchronous parallel pattern search method that has been specifically designed for problems characterized by expensive function evaluations. Using APPSPACK to solve optimization problems has several advantages: No derivative information is needed; the procedure for evaluating the objective function can be executed via a separate program or script; the code can be run in serial or parallel, regardless of whether or not the function evaluation itself is parallel; and the software is freely available. We describe the underlying algorithm, data structures, and features of APPSPACK version 4.0 as well as how to use and customize the software.

Gray, Genetha Anne; Kolda, Tamara Gibson

2004-12-01

68

A direct method for string to deterministic finite automaton conversion for fast text searching

This paper describes a simple technique for generating a minimum-state deterministic finite automaton (DFA) directly from a restricted set of regular expressions. The resulting DFA is used for string searches that do not alter the target text and require only a single pass through the input. The technique is used for very fast, mixed- or same-case, single or multiple string searches. The technique is also capable of directly converting multiple strings with wild-card character specifiers by constructing parallel DFAs. Construction of the automaton is performed in time proportional to the length of the regular expression. Algorithms are given for construction of the automata and recognizers. Although the regular expression to DFA parser does not support all classes of regular expressions, it supports a sufficient subset to make it useful for the most commonly encountered text searching functions.
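The single-pass construction the abstract describes can be sketched for the simplest case, a single literal pattern, as the classic KMP-style minimum-state DFA (the paper's version additionally handles case folding, multiple strings, and wildcards; this sketch does not):

```python
def build_dfa(pat, alphabet):
    """Minimum-state DFA for a non-empty literal pattern: state q means
    'the last q characters read equal the first q characters of pat'."""
    m = len(pat)
    dfa = [{c: 0 for c in alphabet} for _ in range(m + 1)]
    dfa[0][pat[0]] = 1
    x = 0  # state the search would restart in after a mismatch
    for q in range(1, m + 1):
        for c in alphabet:
            dfa[q][c] = dfa[x][c]      # mismatch: copy the restart state's row
        if q < m:
            dfa[q][pat[q]] = q + 1     # match: advance to the next state
            x = dfa[x][pat[q]]
    return dfa

def find_all(text, pat, alphabet="abcdefghijklmnopqrstuvwxyz"):
    """Single left-to-right pass; the target text is never modified.
    Returns the start index of every (possibly overlapping) match."""
    dfa, m, q, hits = build_dfa(pat, alphabet), len(pat), 0, []
    for i, c in enumerate(text):
        q = dfa[q].get(c, 0)           # chars outside the alphabet reset to 0
        if q == m:
            hits.append(i - m + 1)
    return hits
```

Construction time is proportional to pattern length times alphabet size, and the search itself does exactly one table lookup per input character, which is the property the paper exploits for fast text searching.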

Berlin, G.J.

1991-12-31

69

A direct method for string to deterministic finite automaton conversion for fast text searching

This paper describes a simple technique for generating a minimum-state deterministic finite automaton (DFA) directly from a restricted set of regular expressions. The resulting DFA is used for string searches that do not alter the target text and require only a single pass through the input. The technique is used for very fast, mixed- or same-case, single or multiple string searches. The technique is also capable of directly converting multiple strings with wild-card character specifiers by constructing parallel DFAs. Construction of the automaton is performed in time proportional to the length of the regular expression. Algorithms are given for construction of the automata and recognizers. Although the regular expression to DFA parser does not support all classes of regular expressions, it supports a sufficient subset to make it useful for the most commonly encountered text searching functions.

Berlin, G.J.

1991-01-01

70

A comparison of some parallel gametree search algorithms (Revised version)

…multiprocessor. The application program must specify the root of the problem tree, how to generate children… combining its children's values, and how to spread information either globally or locally throughout the tree… pruning unnecessary parts of the tree while keeping many processors fruitfully busy. The algorithms…

Finkel, Raphael

71

Cunning Ant System for Quadratic Assignment Problem with Local Search and Parallelization

The previously proposed cunning ant system (cAS), a variant of the ACO algorithm, worked well on the TSP… We have proposed a variant of the ACO algorithm called the cunning Ant System (cAS) and evaluated it using…

Tsutsui, Shigeyoshi

72

Searching Uncertain Data Represented by Non-axis Parallel Gaussian Mixture Models

Efficient similarity search in uncertain data is a central problem in many modern applications such as biometric identification, stock market analysis, sensor networks, medical imaging, etc. In such applications, the feature vector of an object is not exactly known but is rather defined by a probability density function like a Gaussian Mixture Model (GMM). Previous work is limited to axis-parallel…

Katrin Haegler; Frank Fiedler; Christian Bohm

2012-01-01

73

Attentional Control via Parallel Target-Templates in Dual-Target Search

Simultaneous search for two targets has been shown to be slower and less accurate than independent searches for the same two targets. Recent research suggests this ‘dual-target cost’ may be attributable to a limit in the number of target-templates that can guide search at any one time. The current study investigated this possibility by comparing behavioural responses during single- and dual-target searches for targets defined by their orientation. The results revealed an increase in reaction times for dual- compared to single-target searches that was largely independent of the number of items in the display. Response accuracy also decreased on dual- compared to single-target searches: dual-target accuracy was higher than predicted by a model restricting search guidance to a single target-template and lower than predicted by a model simulating two independent single-target searches. These results are consistent with a parallel model of dual-target search in which attentional control is exerted by more than one target-template at a time. The requirement to maintain two target-templates simultaneously, however, appears to impose a reduction in the specificity of the memory representation that guides search for each target. PMID:24489793

Barrett, Doug J. K.; Zobay, Oliver

2014-01-01

74

Attentional control via parallel target-templates in dual-target search.

Simultaneous search for two targets has been shown to be slower and less accurate than independent searches for the same two targets. Recent research suggests this 'dual-target cost' may be attributable to a limit in the number of target-templates that can guide search at any one time. The current study investigated this possibility by comparing behavioural responses during single- and dual-target searches for targets defined by their orientation. The results revealed an increase in reaction times for dual- compared to single-target searches that was largely independent of the number of items in the display. Response accuracy also decreased on dual- compared to single-target searches: dual-target accuracy was higher than predicted by a model restricting search guidance to a single target-template and lower than predicted by a model simulating two independent single-target searches. These results are consistent with a parallel model of dual-target search in which attentional control is exerted by more than one target-template at a time. The requirement to maintain two target-templates simultaneously, however, appears to impose a reduction in the specificity of the memory representation that guides search for each target. PMID:24489793

Barrett, Doug J K; Zobay, Oliver

2014-01-01

75

Performance analysis of parallel branch and bound search with the hypercube architecture

NASA Technical Reports Server (NTRS)

With the availability of commercial parallel computers, researchers are examining new classes of problems which might benefit from parallel computing. This paper presents results of an investigation of the class of search-intensive problems. The specific problem discussed is the Least-Cost Branch and Bound search method of deadline job scheduling. The object-oriented design methodology was used to map the problem into a parallel solution. While the initial design was good for a prototype, the best performance resulted from fine-tuning the algorithm for a specific computer. The experiments analyze the computation time, the speed-up over a VAX 11/785, and the load balance of the problem when using a loosely coupled multiprocessor system based on the hypercube architecture.
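The Least-Cost Branch and Bound strategy named above always expands the live node with the smallest cost lower bound, pruning nodes that cannot beat the incumbent. A generic serial skeleton (illustrative only; the paper's parallel version distributes this work across hypercube nodes):

```python
import heapq

def least_cost_bb(root, children, bound, cost, is_answer):
    """Generic least-cost branch and bound.

    root      -- initial partial solution
    children  -- children(node) -> iterable of child nodes
    bound     -- bound(node) -> lower bound on any completion's cost
    cost      -- cost(node) -> actual cost of a complete solution
    is_answer -- is_answer(node) -> True if node is complete

    Always expands the live node with the least lower bound, pruning
    nodes whose bound meets or exceeds the best complete cost found.
    """
    best_cost, best = float("inf"), None
    counter = 0                      # tie-breaker so heapq never compares nodes
    live = [(bound(root), counter, root)]
    while live:
        b, _, node = heapq.heappop(live)
        if b >= best_cost:
            continue                 # pruned: cannot beat the incumbent
        if is_answer(node):
            c = cost(node)
            if c < best_cost:
                best_cost, best = c, node
            continue
        for child in children(node):
            cb = bound(child)
            if cb < best_cost:
                counter += 1
                heapq.heappush(live, (cb, counter, child))
    return best, best_cost

# Toy use: least-cost path A -> D in a small DAG, with bound = cost so far.
graph = {"A": [("B", 1), ("C", 4)], "B": [("C", 1), ("D", 5)],
         "C": [("D", 1)], "D": []}
best, best_cost = least_cost_bb(
    root=("A", 0),
    children=lambda n: [(v, n[1] + w) for v, w in graph[n[0]]],
    bound=lambda n: n[1],
    cost=lambda n: n[1],
    is_answer=lambda n: n[0] == "D",
)
```

Parallelizing this amounts to sharing the `live` priority queue (or partitions of it) among processors, which is where the load-balance questions studied in the paper arise.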

Mraz, Richard T.

1987-01-01

76

The parallelization, design and scalability of the all-sky search code for periodic gravitational waves from rotating neutron stars are discussed. The code is based on an efficient implementation of the F-statistic using the Fast Fourier Transform algorithm. To perform an analysis of data from the advanced LIGO and Virgo gravitational wave detectors' network, which will start operating in 2015, hundreds of millions of CPU hours will be required; a code utilizing the potential of massively parallel supercomputers is therefore mandatory. We have parallelized the code using the Message Passing Interface standard and implemented a mechanism for combining the searches at different sky positions and frequency bands into one extremely scalable program. The parallel I/O interface is used to escape bottlenecks when writing the generated data into the file system. This allowed us to develop a highly scalable computation code, which enables data analysis at large scales on acceptable time scales. Benchmarking of the code on a Cray XE6 system was performed to show the efficiency of our parallelization concept and to demonstrate scaling up to 50 thousand cores in parallel.

Gevorg Poghosyan; Sanchit Matta; Achim Streit; Michał Bejger; Andrzej Królak

2014-10-14

77

A Parallel Framework for Multipoint Spiral Search in ab Initio Protein Structure Prediction

Protein structure prediction is computationally a very challenging problem. A large number of existing search algorithms attempt to solve the problem by exploring possible structures and finding the one with the minimum free energy. However, these algorithms perform poorly on large sized proteins due to an astronomically wide search space. In this paper, we present a multipoint spiral search framework that uses parallel processing techniques to expedite exploration by starting from different points. In our approach, a set of random initial solutions are generated and distributed to different threads. We allow each thread to run for a predefined period of time. The improved solutions are stored threadwise. When the threads finish, the solutions are merged together and the duplicates are removed. A selected distinct set of solutions are then split to different threads again. In our ab initio protein structure prediction method, we use the three-dimensional face-centred-cubic lattice for structure-backbone mapping. We use both the low resolution hydrophobic-polar energy model and the high-resolution 20 × 20 energy model for search guiding. The experimental results show that our new parallel framework significantly improves the results obtained by the state-of-the-art single-point search approaches for both energy models on three-dimensional face-centred-cubic lattice. We also experimentally show the effectiveness of mixing energy models within parallel threads. PMID:24744779
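The framework the abstract describes, random starts improved concurrently for a bounded effort, then merged, deduplicated, and redistributed, can be sketched on a single machine. The 1-D objective, step sizes, and thread counts below are illustrative assumptions; the paper operates on lattice protein conformations:

```python
import random
from concurrent.futures import ThreadPoolExecutor

def local_search(x, f, iters=300, step=0.5):
    """Greedy local improvement from a single start point."""
    fx = f(x)
    for _ in range(iters):
        y = x + random.uniform(-step, step)
        fy = f(y)
        if fy < fx:
            x, fx = y, fy
    return x, fx

def multipoint_search(f, n_starts=8, rounds=3, keep=4):
    """Multipoint parallel search: improve many starts concurrently,
    merge the results, keep the best distinct ones, redistribute."""
    pool = [random.uniform(-10, 10) for _ in range(n_starts)]
    with ThreadPoolExecutor() as ex:
        for _ in range(rounds):
            results = list(ex.map(lambda x: local_search(x, f), pool))
            results.sort(key=lambda r: r[1])
            seen, survivors = set(), []
            for x, fx in results:                  # dedupe near-identical solutions
                key = round(x, 3)
                if key not in seen:
                    seen.add(key)
                    survivors.append(x)
            # split the best distinct solutions across the threads again
            pool = (survivors[:keep] * n_starts)[:n_starts]
    return min((f(x), x) for x in pool)
```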

Rashid, Mahmood A.; Newton, M. A. Hakim; Hoque, Md Tamjidul; Sattar, Abdul

2014-01-01

78

Target intersection probabilities for parallel-line and continuous-grid types of search

The expressions for calculating the probability of intersection of hidden targets of different sizes and shapes for parallel-line and continuous-grid types of search can be formulated by using the concept of conditional probability. When the prior probability of the orientation of a hidden target is represented by a uniform distribution, the calculated posterior probabilities are identical with the results obtained by the classic methods of probability. For hidden targets of different sizes and shapes, the following generalizations about the probability of intersection can be made: (1) to a first approximation, the probability of intersection of a hidden target is proportional to the ratio of the greatest dimension of the target (viewed in plane projection) to the minimum line spacing of the search pattern; (2) the shape of the hidden target does not greatly affect the probability of intersection when the largest dimension of the target is small relative to the minimum spacing of the search pattern; (3) the probability of intersecting a target twice for a particular type of search can be used as a lower bound if there is an element of uncertainty of detection for a particular type of tool; (4) the geometry of the search pattern becomes more critical when the largest dimension of the target equals or exceeds the minimum spacing of the search pattern; (5) for elongate targets, the probability of intersection is greater for parallel-line search than for an equivalent continuous square-grid search when the largest dimension of the target is less than the minimum spacing of the search pattern, whereas the opposite is true when the largest dimension exceeds the minimum spacing; (6) the probability of intersection for nonorthogonal continuous-grid search patterns is not greatly different from the probability of intersection for the equivalent orthogonal continuous-grid pattern when the orientation of the target is unknown.
The probability of intersection for an elliptically shaped target can be approximated by treating the ellipse as intermediate between a circle and a line. A search conducted along a continuous rectangular grid can be represented as intermediate between a search along parallel lines and along a continuous square grid. On this basis, an upper and lower bound for the probability of intersection of an elliptically shaped target for a continuous rectangular grid can be calculated. Charts have been constructed that permit the values for these probabilities to be obtained graphically. The use of conditional probability allows the explorationist greater flexibility in considering alternate search strategies for locating hidden targets. © 1977 Plenum Publishing Corp.
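Generalization (1) above can be checked numerically. The sketch below uses Monte Carlo to estimate the intersection probability for a line-segment target of random position and orientation under a parallel-line search; for a segment shorter than the line spacing the exact value is the Buffon-needle result 2L/(πs), proportional to L/s as stated. Function names are illustrative:

```python
import math
import random

def intersection_probability(length, spacing, trials=200_000, rng=None):
    """Monte Carlo estimate of the chance that a randomly placed and
    randomly oriented line-segment target is crossed by at least one
    line of a parallel-line search pattern with the given spacing."""
    rng = rng or random.Random(42)
    hits = 0
    for _ in range(trials):
        y = rng.uniform(0, spacing)           # lower endpoint's offset from a line
        theta = rng.uniform(0, math.pi)       # target orientation
        dy = abs(length * math.sin(theta))    # extent perpendicular to the lines
        if y + dy >= spacing:                 # segment spans the next search line
            hits += 1
    return hits / trials
```

For length 1 and spacing 2 the estimate should approach 2·1/(π·2) = 1/π ≈ 0.318.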

McCammon, R.B.

1977-01-01

79

Scalar and Parallel Optimized Implementation of the Direct Simulation Monte Carlo Method

This paper describes a new concept for the implementation of the direct simulation Monte Carlo (DSMC) method. It uses a localized data structure based on a computational cell to achieve high performance, especially on workstation processors, which can also be used in parallel. Since the data structure makes it possible to freely assign any cell to any processor, a domain

Stefan Dietrich; Iain D. Boyd

1996-01-01

80

Direct Simulation Based Model-Predictive Control of Flow Maldistribution in Parallel Microchannels

sinks and heat exchangers that employ parallel microchannels for heat transfer, to improve heat transfer effectiveness and cooling efficiency. In this work, direct numerical simulations of fluid flow

Apte, Sourabh V.

81

J. Parallel Distrib. Comput. 69 (2009) 725–736, Contents lists available at ScienceDirect

Xianbing Wang (Computer Center, Wuhan University, Wuhan 430072, China); Yong Meng Teo (Department of Computer Science, National University of Singapore, Singapore 117417, Singapore)

Teo, Yong-Meng

82

Presents a general solution for the direct kinematics of planar three-degree-of-freedom parallel manipulators. It has been shown elsewhere, using geometric considerations, that this problem can lead to a maximum of six real solutions. The formulation developed leads to a polynomial of the sixth order which is hence minimal. This is illustrated with an example, taken from the literature, for which

Clément M. Gosselin; Jaouad Sefrioui

1991-01-01

83

Implementation of unsteady sampling procedures for the parallel direct simulation Monte Carlo method

An unsteady sampling routine for a general parallel direct simulation Monte Carlo method called PDSC is introduced, allowing the simulation of time-dependent flow problems in the near continuum range. A post-processing procedure called DSMC rapid ensemble averaging method (DREAM) is developed to improve the statistical scatter in the results while minimising both memory and simulation time. This method builds an

H. M. Cave; K.-C. Tseng; J.-S. Wu; M. C. Jermy; J.-C. Huang; S. P. Krumdieck

2008-01-01

84

Design and Implementation of a Scalable Parallel Direct Solver for Sparse Symmetric Positive Definite Systems

Solving large sparse systems of linear equations is at the core of many engineering and scientific computing applications. There are two methods

Karypis, George; Kumar, Vipin

85

Parallel electric fields in the upward current region of the aurora: Indirect and direct observations

An account of the electric fields in the upward current region of the aurora as observed by the Fast Auroral SnapshoT (FAST) satellite, focusing on the structure of electric fields at the boundary between

California at Berkeley, University of

86

Phase and chemical equilibrium calculations by direct search optimization

Direct search optimization is applied to Gibbs free energy minimization to determine phase compositions at equilibrium. The method selected is the random search optimization procedure of Luus and Jaakola, which has been shown to be successful for solving difficult global optimization problems. It is implemented in a multipass fashion where the region size for a variable at the beginning of
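The Luus-Jaakola random search procedure the abstract refers to can be sketched as a generic box-constrained minimizer. The contraction factor and pass counts below are illustrative; the paper's multipass region-size handling is more elaborate:

```python
import random

def luus_jaakola(f, lo, hi, n_outer=50, n_inner=100, shrink=0.95, seed=1):
    """Luus-Jaakola random search: sample uniformly inside a region
    around the incumbent, keep improvements, contract the region each pass."""
    rng = random.Random(seed)
    dim = len(lo)
    x = [rng.uniform(lo[i], hi[i]) for i in range(dim)]
    fx = f(x)
    size = [hi[i] - lo[i] for i in range(dim)]
    for _ in range(n_outer):
        for _ in range(n_inner):
            # candidate point inside the current region, clipped to the box
            y = [min(hi[i], max(lo[i], x[i] + rng.uniform(-0.5, 0.5) * size[i]))
                 for i in range(dim)]
            fy = f(y)
            if fy < fx:
                x, fx = y, fy
        size = [s * shrink for s in size]      # contract the search region
    return x, fx
```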

Yeow Peng Lee; Gade Pandu Rangaiah; Rein Luus

1999-01-01

87

Rapid parallel attentional target selection in single-color and multiple-color visual search.

Previous work has demonstrated that when targets are defined by a constant feature, attention can be directed rapidly and in parallel to sequentially presented target objects at different locations. We assessed how fast attention is allocated to multiple objects when this process cannot be controlled by a unique color-specific attentional template. N2pc components were measured as temporal markers of the attentional selection of 2 color-defined targets that were presented in rapid succession. Both targets either had the same color (one color task) or differed in color (two color task). Although there were small but systematic delays of target selection in the two color task relative to the one color task, attention was allocated extremely rapidly to both target objects in the two color task, which is inconsistent with the hypothesis that their selection was based on a slow switch between different color templates. Two follow-up experiments demonstrated that these delays did not reflect template switch costs, but were the result of competitive interactions between simultaneously active attentional templates. These results show that the control of focal attention during multiple-feature search operates much faster and more flexibly than is usually assumed. PMID:25485665

Grubert, Anna; Eimer, Martin

2015-02-01

88

A GPU based implementation of direct multi-bit search (DMS) screen algorithm

NASA Astrophysics Data System (ADS)

In this paper, we study the feasibility of using programmable Graphics Processing Unit (GPU) technology for image halftoning, in particular implementing the computationally intense Direct Multi-bit Search (DMS) screen algorithm. Multi-bit screening is an extension of binary screening in which every pixel in a continuous-tone image can be rendered to one among multiple output states. For example, a 2-bit printer is capable of printing with four different drop sizes. In our previous work, we extended Direct Binary Search (DBS) to the multi-bit case using Direct Multi-bit Search (DMS), where at every pixel the algorithm chooses the best drop output state to create a visually pleasing halftone pattern without any user-defined guidance. This process is repeated throughout the entire range of gray levels, while satisfying the stacking constraint, to create a high-quality multi-bit screen (dither mask). In this paper, we illustrate how employing Graphics Processing Units (GPUs) can speed up intensive DMS image processing operations. In particular, we illustrate how different modules can be parallelized. The main goal of many previous articles regarding DBS is to decrease the execution time of the algorithm, and one of the most common approaches is to decrease the neighborhood or filter size. The proposed parallel approach allows us to use a large neighborhood and filter size, to achieve the highest halftone quality, while having minimal impact on performance. In addition, we demonstrate processing several non-overlapping neighborhoods in parallel, utilizing the GPU's parallel architecture, to further improve computational efficiency.
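A 1-D toy of the underlying DBS toggle loop may help fix ideas. The 3-tap kernel stands in for the human-visual-system filter, and this sketch omits the pixel swaps, 2-D filtering, and multi-level (DMS) generalization of the real algorithm:

```python
def dbs_1d(signal, iters=20):
    """Toy 1-D direct binary search: start from simple thresholding,
    then greedily toggle samples whenever the toggle lowers the filtered
    error between the halftone and the continuous-tone signal."""
    n = len(signal)
    half = [1 if v >= 0.5 else 0 for v in signal]
    kernel = [0.25, 0.5, 0.25]                 # stand-in for the HVS filter

    def error(h):
        tot = 0.0
        for i in range(n):
            hv = sv = 0.0
            for k, w in zip((-1, 0, 1), kernel):
                j = min(max(i + k, 0), n - 1)  # clamp at the boundaries
                hv += w * h[j]
                sv += w * signal[j]
            tot += (hv - sv) ** 2
        return tot

    for _ in range(iters):
        improved = False
        for i in range(n):
            trial = half[:]
            trial[i] = 1 - trial[i]            # tentative toggle of sample i
            if error(trial) < error(half):
                half, improved = trial, True
        if not improved:
            break                              # converged: no toggle helps
    return half
```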

Trager, Barry; Chandu, Kartheek; Wu, Chai Wah; Stanich, Mikel

2013-02-01

89

Directed search for continuous gravitational waves from the Galactic center

We present the results of a directed search for continuous gravitational waves from unknown, isolated neutron stars in the Galactic center region, performed on two years of data from LIGO’s fifth science run from two LIGO ...

Aggarwal, Nancy

90

PARALIGN: rapid and sensitive sequence similarity searches powered by parallel computing technology

PARALIGN is a rapid and sensitive similarity search tool for the identification of distantly related sequences in both nucleotide and amino acid sequence databases. Two algorithms are implemented, accelerated Smith–Waterman and ParAlign. The ParAlign algorithm is similar to Smith–Waterman in sensitivity, while as quick as BLAST for protein searches. A form of parallel computing technology known as multimedia technology that is available in modern processors, but rarely used by other bioinformatics software, has been exploited to achieve the high speed. The software is also designed to run efficiently on computer clusters using the message-passing interface standard. A public search service powered by a large computer cluster has been set up and is freely available at , where the major public databases can be searched. The software can also be downloaded free of charge for academic use. PMID:15980529

Sæbø, Per Eystein; Andersen, Sten Morten; Myrseth, Jon; Laerdahl, Jon K.; Rognes, Torbjørn

2005-01-01

91

to transfer a part from a working area to another one (e.g. conveyors). A lot of 2 dof parallel mechanisms to be guaranteed in the direction perpendicular to the plane to limit vibrations; being stiff in the direction

Boyer, Edmond

92

Efficient parallel and out of core algorithms for constructing large bi-directed de Bruijn graphs

Background Assembling genomic sequences from a set of overlapping reads is one of the most fundamental problems in computational biology. Algorithms addressing the assembly problem fall into two broad categories based on the data structures which they employ. The first class uses an overlap/string graph and the second type uses a de Bruijn graph. However, with the recent advances in short read sequencing technology, de Bruijn graph based algorithms seem to play a vital role in practice. Efficient algorithms for building these massive de Bruijn graphs are very essential in large sequencing projects based on short reads. In an earlier work, an O(n/p) time parallel algorithm has been given for this problem. Here n is the size of the input and p is the number of processors. This algorithm enumerates all possible bi-directed edges which can overlap with a node and ends up generating Θ(nΣ) messages (Σ being the size of the alphabet). Results In this paper we present a Θ(n/p) time parallel algorithm with a communication complexity that is equal to that of parallel sorting and is not sensitive to Σ. The generality of our algorithm makes it very easy to extend it even to the out-of-core model, and in this case it has an optimal I/O complexity of Θ((n log(n/B))/(B log(M/B))) (M being the main memory size and B being the size of the disk block). We demonstrate the scalability of our parallel algorithm on an SGI/Altix computer. A comparison of our algorithm with the previous approaches reveals that our algorithm is faster, both asymptotically and practically. We demonstrate the scalability of our sequential out-of-core algorithm by comparing it with the algorithm used by VELVET to build the bi-directed de Bruijn graph. Our experiments reveal that our algorithm can build the graph with a constant amount of memory, which clearly outperforms VELVET. We also provide efficient algorithms for the bi-directed chain compaction problem.
Conclusions The bi-directed de Bruijn graph is a fundamental data structure for any sequence assembly program based on Eulerian approach. Our algorithms for constructing Bi-directed de Bruijn graphs are efficient in parallel and out of core settings. These algorithms can be used in building large scale bi-directed de Bruijn graphs. Furthermore, our algorithms do not employ any all-to-all communications in a parallel setting and perform better than the prior algorithms. Finally our out-of-core algorithm is extremely memory efficient and can replace the existing graph construction algorithm in VELVET. PMID:21078174
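The core bi-directed construction step, identifying each k-mer with its reverse complement via a canonical form, can be sketched serially. The paper's contribution is doing this at scale in parallel and out of core; this illustrative version simply enumerates edges:

```python
def revcomp(s):
    """Reverse complement of a DNA string."""
    comp = {"A": "T", "C": "G", "G": "C", "T": "A"}
    return "".join(comp[c] for c in reversed(s))

def canonical(kmer):
    """A bi-directed graph identifies a k-mer with its reverse
    complement; the canonical form is the lexicographic minimum."""
    return min(kmer, revcomp(kmer))

def build_bidirected_dbg(reads, k):
    """Collect the canonical (k-1)-mer -> canonical (k-1)-mer edges
    induced by every k-mer of every read."""
    edges = set()
    for read in reads:
        for i in range(len(read) - k + 1):
            kmer = read[i:i + k]
            edges.add((canonical(kmer[:-1]), canonical(kmer[1:])))
    return edges
```

In the parallel setting, these edge records are what get sorted across processors instead of being broadcast, which is what removes the Σ factor from the communication cost.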

2010-01-01

93

Parallel Directionally Split Solver Based on Reformulation of Pipelined Thomas Algorithm

NASA Technical Reports Server (NTRS)

In this research an efficient parallel algorithm for 3-D directionally split problems is developed. The proposed algorithm is based on a reformulated version of the pipelined Thomas algorithm that starts the backward-step computations immediately after the completion of the forward-step computations for the first portion of lines. This algorithm has data available for other computational tasks while processors are idle from the Thomas algorithm. The proposed 3-D directionally split solver is based on static scheduling of processors, in which local and non-local, data-dependent and data-independent computations are scheduled while processors are idle. A theoretical model of parallelization efficiency is used to define optimal parameters of the algorithm, to show an asymptotic parallelization penalty, and to obtain an optimal cover of a global domain with subdomains. It is shown by computational experiments and by the theoretical model that the proposed algorithm reduces the parallelization penalty by about a factor of two relative to the basic algorithm for the range of the number of processors (subdomains) considered and the number of grid nodes per subdomain.
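The serial kernel being pipelined here is the classical Thomas algorithm; its forward and backward sweeps are what the reformulated version overlaps across processors. A minimal serial version for reference:

```python
def thomas_solve(a, b, c, d):
    """Serial Thomas algorithm for a tridiagonal system.
    a: sub-diagonal (a[0] unused), b: main diagonal,
    c: super-diagonal (c[-1] unused), d: right-hand side.
    Forward elimination followed by backward substitution, the two
    sweeps that the pipelined variant overlaps across processors."""
    n = len(b)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    for i in range(1, n):                      # forward sweep
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):             # backward sweep
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x
```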

Povitsky, A.

1998-01-01

94

GLOBAL AND LOCAL OPTIMIZATION USING DIRECT SEARCH 1 ...

OE/MAT/UI0297/2011 (CMA) and the grant PTDC/MAT/116736/2010. … center will be conducted by testing the directions belonging to a positive spanning … Adaptive Direct Search (MADS) [4]) or by imposing a sufficient decrease condition.

2013-10-23

95

Parallel Ear Decomposition Search (EDS) and st-Numbering in Graphs

The [LEC-67] linear time serial algorithm for testing planarity of graphs uses the linear time serial algorithm of [ET-76] for st-numbering. This st-numbering algorithm is based on depth-first search (DFS). A known conjecture states that DFS, which is a key technique in designing serial algorithms, is not amenable to poly-log time parallelism using "around linearly" (or even polynomially) many processors. The first contribution of this paper

Yael Maon; Baruch Schieber; Uzi Vishkin

1986-01-01

96

ParAlign: a parallel sequence alignment algorithm for rapid and sensitive database searches

There is a need for faster and more sensitive algorithms for sequence similarity searching in view of the rapidly increasing amounts of genomic sequence data available. Parallel processing capabilities in the form of the single instruction, multiple data (SIMD) technology are now available in common microprocessors and enable a single microprocessor to perform many operations in parallel. The ParAlign algorithm has been specifically designed to take advantage of this technology. The new algorithm initially exploits parallelism to perform a very rapid computation of the exact optimal ungapped alignment score for all diagonals in the alignment matrix. Then, a novel heuristic is employed to compute an approximate score of a gapped alignment by combining the scores of several diagonals. This approximate score is used to select the most interesting database sequences for a subsequent Smith–Waterman alignment, which is also parallelised. The resulting method represents a substantial improvement compared to existing heuristics. The sensitivity and specificity of ParAlign was found to be as good as Smith–Waterman implementations when the same method for computing the statistical significance of the matches was used. In terms of speed, only the significantly less sensitive NCBI BLAST 2 program was found to outperform the new approach. Online searches are available at http://dna.uio.no/search/ PMID:11266569
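The first ParAlign stage, exact optimal ungapped local scores for every diagonal of the alignment matrix, can be sketched scalar-wise. The real implementation evaluates many diagonals per SIMD instruction and uses a substitution matrix; the match/mismatch scores here are illustrative:

```python
def best_ungapped_diagonal_scores(query, db_seq, match=2, mismatch=-1):
    """For every diagonal of the alignment matrix, compute the optimal
    ungapped local alignment score via a 1-D Smith-Waterman-style
    recurrence along the diagonal."""
    m, n = len(query), len(db_seq)
    scores = {}
    for diag in range(-(m - 1), n):            # diagonal index = j - i
        run, best = 0, 0
        i = max(0, -diag)                      # first query index on this diagonal
        while i < m and i + diag < n:
            s = match if query[i] == db_seq[i + diag] else mismatch
            run = max(0, run + s)              # restart the run when it goes negative
            best = max(best, run)
            i += 1
        scores[diag] = best
    return scores
```

ParAlign then combines a few of the best diagonal scores into an approximate gapped score to rank database sequences before the final Smith–Waterman pass.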

Rognes, Torbjørn

2001-01-01

97

Evaluation of a Simple, Scalable, Parallel Best-First Search Strategy

Large-scale, parallel clusters composed of commodity processors are increasingly available, enabling the use of vast processing capabilities and distributed RAM to solve hard search problems. We investigate Hash-Distributed A* (HDA*), a simple approach to parallel best-first search that asynchronously distributes and schedules work among processors based on a hash function of the search state. We use this approach to parallelize the A* algorithm in an optimal sequential version of the Fast Downward planner, as well as a 24-puzzle solver. The scaling behavior of HDA* is evaluated experimentally on a shared-memory, multicore machine with 8 cores, a cluster of commodity machines using up to 64 cores, and a large-scale high-performance cluster using up to 1024 processors. We show that this approach scales well, allowing the effective utilization of large amounts of distributed memory to optimally solve problems which require more than a terabyte of RAM. We also compare HDA* to Transposition-table Driven Scheduli...
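The HDA* idea, each state owned by the processor its hash value selects, can be sketched with a sequential round-robin simulation. Per-worker lists stand in for asynchronous message passing, and the toy graph and worker count are illustrative assumptions:

```python
import heapq

def hda_star(start, goal, neighbors, h, n_workers=4):
    """Hash-Distributed A*, simulated sequentially: each state is owned
    by worker hash(state) % n_workers, which keeps its own open list;
    expansions are round-robined over the workers."""
    owner = lambda s: hash(s) % n_workers
    opens = [[] for _ in range(n_workers)]
    g = {start: 0}
    heapq.heappush(opens[owner(start)], (h(start), start))
    best = float("inf")
    while any(opens):
        for w in range(n_workers):             # one expansion per worker per round
            if not opens[w]:
                continue
            f, s = heapq.heappop(opens[w])
            if f >= best:
                continue                       # pruned: cannot improve the incumbent
            if s == goal:
                best = min(best, g[s])
                continue
            for t, cost in neighbors(s):
                gt = g[s] + cost
                if gt < g.get(t, float("inf")):
                    g[t] = gt
                    # "send" t to its owning worker's open list
                    heapq.heappush(opens[owner(t)], (gt + h(t), t))
    return best
```

Hashing spreads both the work and the closed/open lists across workers, which is how HDA* exploits distributed RAM.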

Kishimoto, Akihiro; Botea, Adi

2012-01-01

98

NASA Technical Reports Server (NTRS)

A Very Large Scale Integration (VLSI) architecture for robot direct kinematic computation suitable for industrial robot manipulators was investigated. The Denavit-Hartenberg transformations are reviewed to exploit a proper processing element, namely an augmented CORDIC. Specifically, two distinct implementations are elaborated on: bit-serial and parallel. Performance of each scheme is analyzed with respect to the time to compute one location of the end-effector of a 6-link manipulator, and the number of transistors required.

Lee, J.; Kim, K.

1991-01-01

99

NASA Astrophysics Data System (ADS)

A Very Large Scale Integration (VLSI) architecture for robot direct kinematic computation suitable for industrial robot manipulators was investigated. The Denavit-Hartenberg transformations are reviewed to exploit a proper processing element, namely an augmented CORDIC. Specifically, two distinct implementations are elaborated on: bit-serial and parallel. Performance of each scheme is analyzed with respect to the time to compute one location of the end-effector of a 6-link manipulator, and the number of transistors required.

Lee, J.; Kim, K.

100

NASA Technical Reports Server (NTRS)

An AFRL/NRL team has recently been selected to develop a scalable, parallel, reacting, multidimensional (SUPREM) Direct Simulation Monte Carlo (DSMC) code for the DoD user community under the High Performance Computing Modernization Office (HPCMO) Common High Performance Computing Software Support Initiative (CHSSI). This paper will introduce the JANNAF Exhaust Plume community to this three-year development effort and present the overall goals, schedule, and current status of this new code.

Campbell, David; Wysong, Ingrid; Kaplan, Carolyn; Mott, David; Wadsworth, Dean; VanGilder, Douglas

2000-01-01

101

Parallel spatial direct numerical simulations on the Intel iPSC/860 hypercube

NASA Technical Reports Server (NTRS)

The implementation and performance of a parallel spatial direct numerical simulation (PSDNS) approach on the Intel iPSC/860 hypercube is documented. The direct numerical simulation approach is used to compute spatially evolving disturbances associated with the laminar-to-turbulent transition in boundary-layer flows. The feasibility of using the PSDNS on the hypercube to perform transition studies is examined. The results indicate that the direct numerical simulation approach can effectively be parallelized on a distributed-memory parallel machine. By increasing the number of processors, nearly ideal linear speedups are achieved with nonoptimized routines; slower-than-linear speedups are achieved with optimized (machine-dependent library) routines. This slower-than-linear speedup results because the Fast Fourier Transform (FFT) routine dominates the computational cost and exhibits less than ideal speedups. However, with the machine-dependent routines the total computational cost decreases by a factor of 4 to 5 compared with standard FORTRAN routines. The computational cost increases linearly with spanwise, wall-normal, and streamwise grid refinements. The hypercube with 32 processors was estimated to require approximately twice the amount of Cray supercomputer single-processor time to complete a comparable simulation; however, it is estimated that a subgrid-scale model, which reduces the required number of grid points and becomes a large-eddy simulation (PSLES), would reduce the computational cost and memory requirements by a factor of 10 over the PSDNS. This PSLES implementation would enable transition simulations on the hypercube at a reasonable computational cost.

Joslin, Ronald D.; Zubair, Mohammad

1993-01-01

102

Two Quantum Direct Communication Protocols Based on Quantum Search Algorithm

NASA Astrophysics Data System (ADS)

Based on the properties of the two-qubit Grover quantum search algorithm, we propose two quantum direct communication protocols: a deterministic secure quantum communication protocol and a quantum secure direct communication protocol. Secret messages can be sent directly from the sender to the receiver by using two-qubit unitary operations and single-photon measurement with one of the proposed protocols. Theoretical analysis shows that the security of the proposed protocols can be highly ensured.
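The two-qubit Grover iteration these protocols build on can be verified with a few lines of arithmetic: for N = 4 states, a single oracle-plus-diffusion step moves all amplitude onto the marked basis state, which is why one iteration finds it with certainty:

```python
def grover_two_qubit(marked):
    """One Grover iteration on 2 qubits (N = 4 basis states).
    Returns the measurement probabilities after the iteration."""
    N = 4
    amps = [0.5] * N                     # uniform superposition H|00>
    amps[marked] *= -1                   # oracle: phase-flip the marked state
    mean = sum(amps) / N
    amps = [2 * mean - a for a in amps]  # diffusion: inversion about the mean
    return [a * a for a in amps]         # Born-rule probabilities
```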

Xu, Shu-Jiang; Chen, Xiu-Bo; Wang, Lian-Hai; Niu, Xin-Xin; Yang, Yi-Xian

2014-12-01

103

Direct detection searches for axion dark matter

NASA Astrophysics Data System (ADS)

The axion is both a compelling dark matter candidate and provides an elegant solution to the strong CP problem. The axion haloscope technique has the potential to detect dark matter axions. ADMX (the Axion Dark Matter eXperiment) is an implementation of the axion haloscope technique, and has undergone a series of sensitivity-improving upgrades. With the impending addition of a dilution refrigerator, ADMX is poised to search a large region of plausible dark matter axion masses. Meanwhile, a number of other axion experimental techniques are being considered to explore other axion masses relevant to dark matter.

Rybka, Gray

2014-09-01

104

A direct search for Dirac magnetic monopoles

Magnetic monopoles are highly ionizing and curve in the direction of the magnetic field. A new dedicated magnetic monopole trigger at CDF, which requires large light pulses in the scintillators of the time-of-flight system, ...

Mulhearn, Michael James

2005-01-01

105

Direct Search for Low Mass Dark Matter Particles with CCDs

A direct dark matter search is performed using fully-depleted high-resistivity CCD detectors. Due to their low electronic readout noise (RMS ~7 eV) these devices operate with a very low detection threshold of 40 eV, making the search for dark matter particles with low masses (~5 GeV) possible. The results of an engineering run performed in a shallow underground site are presented, demonstrating the potential of this technology in the low mass region.

Barreto, J. [Rio de Janeiro Federal U.]; Cease, H.; Diehl, H.T.; Estrada, J.; Flaugher, B.; Harrison, N.; Jones, J.; Kilminster, B. [Fermilab]; Molina, J. [Asuncion Natl. U.]; Smith, J.; Sonnenschein, A. [Fermilab]

2012-05-15

106

Direction selectivity in V1 of alert monkeys: evidence for parallel pathways for motion processing

In primary visual cortex (V1) of macaque monkeys, motion selective cells form three parallel pathways. Two sets of direction selective cells, one in layer 4B, and the other in layer 6, send parallel direct outputs to area MT in the dorsal cortical stream. We show that these two outputs carry different types of spatial information. Direction selective cells in layer 4B have smaller receptive fields than those in layer 6, and layer 4B cells are more selective for orientation. We present evidence for a third direction selective pathway that flows through V1 layers 4Cm (the middle tier of layer 4C) to layer 3. Cells in layer 3 are very selective for orientation, have the smallest receptive fields in V1, and send direct outputs to area V2. Layer 3 neurons are well suited to contribute to detection and recognition of small objects by the ventral cortical stream, as well as to sense subtle motions within objects, such as changes in facial expressions. PMID:17962332

Gur, Moshe; Snodderly, D Max

2007-01-01

107

NASA Technical Reports Server (NTRS)

This final report contains reports of research related to the tasks "Scalable High Performance Computing: Direct and Large-Eddy Turbulent Flow Simulations Using Massively Parallel Computers" and "Develop High-Performance Time-Domain Computational Electromagnetics Capability for RCS Prediction, Wave Propagation in Dispersive Media, and Dual-Use Applications". The discussion of Scalable High Performance Computing reports on three objectives: validate, assess the scalability of, and apply two parallel flow solvers for three-dimensional Navier-Stokes flows; develop and validate a high-order parallel solver for Direct Numerical Simulation (DNS) and Large Eddy Simulation (LES) problems; and investigate and develop a high-order Reynolds-averaged Navier-Stokes turbulence model. The discussion of High-Performance Time-Domain Computational Electromagnetics reports on five objectives: enhancement of an electromagnetics code (CHARGE) to effectively model antenna problems; application of lessons learned in the high-order/spectral solution of swirling 3D jets to the electromagnetics project; transition of a high-order fluids code, FDL3DI, to solve Maxwell's equations using compact differencing; development and demonstration of improved radiation-absorbing boundary conditions for high-order CEM; and extension of the high-order CEM solver to address variable material properties. The report also contains a review of work done by the systems engineer.

Morgan, Philip E.

2004-01-01

108

Direct Non-baryonic Dark Matter Search - An experimental Review

This review will present the latest advances in the search for non-baryonic dark matter from an experimental point of view, focusing more particularly on the direct detection approach. After a brief reminder of the main motivations for this search, we will expose the physical basis of WIMP detection, its advantages and limitations. The current techniques having achieved the most competitive results in terms of sensitivity will then be discussed. We will conclude with a rapid overview of the future of direct detection experiments, the techniques considered and their sensitivity goals.

S. Fiorucci

2004-06-11

109

In Search of Self-Directed Learning.

ERIC Educational Resources Information Center

If the learning organization is ever to become a reality and if employees are to be continuous learners, the notion of self-directed learning has to move beyond the buzzword phase and become a major force in employee training. (Author)

Zemke, Ron

1998-01-01

110

Co-ordination of directional overcurrent protection with load current for parallel feeders

Directional phase overcurrent relays are commonly applied at the receiving ends of parallel feeders or transformer feeders. Their purpose is to ensure full discrimination of main or back-up power system overcurrent protection for a fault near the receiving end of one feeder. This paper reviews this type of relay application and highlights load current setting constraints for directional protection. Such constraints have not previously been publicized in well-known textbooks. A directional relay current setting constraint that is suggested in some textbooks is based purely on thermal rating considerations for older technology relays. This constraint may not exist with modern numerical relays. In the absence of any apparent constraint, there is a temptation to adopt lower current settings with modern directional relays in relation to reverse load current at the receiving ends of parallel feeders. This paper identifies the danger of adopting very low current settings without any special relay feature to ensure protection security with load current during power system faults. A system incident recorded by numerical relays is also offered to highlight this danger. In cases where there is a need to infringe the identified constraints, an implemented and tested relaying technique is proposed.

Wright, J.W.; Lloyd, G.; Hindle, P.J. [Alstom, Inc., Stafford (United Kingdom). T and D Protection and Control]

1999-11-01

111

Dark Matter Direct Detection and LHC Searches

NASA Astrophysics Data System (ADS)

Direct detection experiments are reporting intriguing indications of a possible dark matter signal, the most noticeable case being the annual modulation effect observed by the DAMA experiments. A relevant interpretation of these results is in terms of light neutralino dark matter, arising in supersymmetric models where gaugino universality is broken. These supersymmetric models possess specific features that differentiate them from more typical supersymmetric scenarios and that can be tested at the LHC.

Fornengo, N.

2015-01-01

112

Direct and Inverse Kinematics of a Novel Tip-Tilt-Piston Parallel Manipulator

NASA Technical Reports Server (NTRS)

Closed-form direct and inverse kinematics of a new three degree-of-freedom (DOF) parallel manipulator with inextensible limbs and base-mounted actuators are presented. The manipulator has higher resolution and precision than the existing three DOF mechanisms with extensible limbs. Since all of the manipulator actuators are base-mounted, higher payload capacity, smaller actuator sizes, and lower power dissipation can be obtained. The manipulator is suitable for alignment applications where only tip, tilt, and piston motions are significant. The direct kinematics of the manipulator is reduced to solving an eighth-degree polynomial in the square of the tangent of the half-angle between one of the limbs and the base plane. Hence, there are at most 16 assembly configurations for the manipulator. In addition, it is shown that the 16 solutions are eight pairs of reflected configurations with respect to the base plane. Numerical examples for the direct and inverse kinematics of the manipulator are also presented.
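The reduction described above lends itself to a quick numeric check: each positive real root s of the eighth-degree polynomial in s = tan²(θ/2) yields a reflected pair tan(θ/2) = ±√s, which is why at most 16 assembly configurations arise. A minimal sketch, assuming hypothetical polynomial roots (the true coefficients depend on the manipulator geometry and are not given in the abstract):

```python
import numpy as np

# Hypothetical eighth-degree polynomial in s = tan^2(theta/2). The real
# coefficients depend on the manipulator geometry; here the polynomial is
# built from made-up roots purely for illustration.
coeffs = np.poly([0.25, 1.0, 4.0, -2.0,
                  0.5 + 1j, 0.5 - 1j, -0.3 + 0.7j, -0.3 - 0.7j])

roots = np.roots(coeffs)                      # the 8 roots in s
real_s = sorted(r.real for r in roots if abs(r.imag) < 1e-8 and r.real > 0)

# Each admissible root s gives tan(theta/2) = +/- sqrt(s): the two signs are
# configurations reflected about the base plane, so 8 roots in s correspond
# to at most 16 assembly configurations.
configs = []
for s in real_s:
    half_angle = np.arctan(np.sqrt(s))
    configs.extend([+2 * np.degrees(half_angle),
                    -2 * np.degrees(half_angle)])
```

With these made-up roots, three admissible values of s survive, giving six candidate limb angles in reflected pairs.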

Tahmasebi, Farhad

2004-01-01

113

The JCSG MR Pipeline: Optimized Alignments, Multiple Models And Parallel Searches

The success rate of molecular replacement (MR) falls considerably when search models share less than 35% sequence identity with their templates, but can be improved significantly by using fold-recognition methods combined with exhaustive MR searches. Models based on alignments calculated with fold-recognition algorithms are more accurate than models based on conventional alignment methods such as FASTA or BLAST, which are still widely used for MR. In addition, by designing MR pipelines that integrate phasing and automated refinement and allow parallel processing of such calculations, one can effectively increase the success rate of MR. Here, updated results from the JCSG MR pipeline are presented, which to date has solved 33 MR structures with less than 35% sequence identity to the closest homologue of known structure. By using difficult MR problems as examples, it is demonstrated that successful MR phasing is possible even in cases where the similarity between the model and the template can only be detected with fold-recognition algorithms. In the first step, several search models are built based on all homologues found in the PDB by fold-recognition algorithms. The models resulting from this process are used in parallel MR searches with different combinations of input parameters of the MR phasing algorithm. The putative solutions are subjected to rigid-body and restrained crystallographic refinement and ranked based on the final values of free R factor, figure of merit and deviations from ideal geometry. Finally, crystal packing and electron-density maps are checked to identify the correct solution. If this procedure does not yield a solution with interpretable electron-density maps, then even more alternative models are prepared. The structurally variable regions of a protein family are identified based on alignments of sequences and known structures from that family and appropriate trimmings of the models are proposed. 
All combinations of these trimmings are applied to the search models and the resulting set of models is used in the MR pipeline. It is estimated that with the improvements in model building and exhaustive parallel searches with existing phasing algorithms, MR can be successful for more than 50% of recognizable homologues of known structures below the threshold of 35% sequence identity. This implies that about one-third of the proteins in a typical bacterial proteome are potential MR targets.
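The exhaustive strategy described above, many search models crossed with many phasing-parameter combinations and ranked by free R factor, is at its core an embarrassingly parallel grid search. A hedged sketch: the job function, parameter names, and scores below are hypothetical stand-ins for real MR phasing and refinement runs.

```python
from itertools import product

# Hypothetical stand-in for one phasing-plus-refinement run: in the real
# pipeline this would launch MR phasing and restrained refinement, then
# return the free R factor of the putative solution. The toy score below
# only makes the sketch runnable; all parameter names are made up.
def run_mr_and_refine(model, resolution_cutoff, rotation_peaks):
    return 0.30 + 0.01 * rotation_peaks - 0.02 * resolution_cutoff + 0.05 * model

models = [0, 1, 2]            # alternative search models from fold recognition
cutoffs = [2.0, 2.5, 3.0]     # high-resolution cutoffs (Angstrom)
peaks = [1, 3, 5]             # rotation-function peaks to pursue

# Every combination is an independent job, so the grid parallelizes trivially.
results = [((m, c, p), run_mr_and_refine(m, c, p))
           for m, c, p in product(models, cutoffs, peaks)]
ranked = sorted(results, key=lambda item: item[1])   # lowest free R first
best_params, best_free_r = ranked[0]
```

In the pipeline itself the ranking additionally weighs figure of merit and geometry, and crystal packing and density maps are inspected before a solution is accepted.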

Schwarzenbacher, R.; Godzik, A.; Jaroszewski, L.

2009-05-27

114

In order to acquire their native languages, children must learn richly structured systems with regularities at multiple levels. While structure at different levels could be learned serially, e.g., speech segmentation coming before word-object mapping, redundancies across levels make parallel learning more efficient. For instance, a series of syllables is likely to be a word not only because of high transitional probabilities, but also because of a consistently co-occurring object. But additional statistics require additional processing, and thus might not be useful to cognitively constrained learners. We show that the structure of child-directed speech makes simultaneous speech segmentation and word learning tractable for human learners. First, a corpus of child-directed speech was recorded from parents and children engaged in a naturalistic free-play task. Analyses revealed two consistent regularities in the sentence structure of naming events. These regularities were subsequently encoded in an artificial language to which adult participants were exposed in the context of simultaneous statistical speech segmentation and word learning. Either regularity was independently sufficient to support successful learning, but no learning occurred in the absence of both regularities. Thus, the structure of child-directed speech plays an important role in scaffolding speech segmentation and word learning in parallel. PMID:23162487
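The transitional-probability cue mentioned above is easy to make concrete: the forward probability P(next syllable | current syllable) is high inside a word and lower across word boundaries. A minimal sketch with a made-up syllable stream (the syllables and "words" are invented for illustration):

```python
from collections import Counter

# Toy stream: two made-up "words" (go-la and tu-pi) concatenated without
# pauses, as in statistical speech-segmentation experiments.
stream = "go la tu pi go la go la tu pi tu pi go la".split()

pair_counts = Counter(zip(stream, stream[1:]))
first_counts = Counter(stream[:-1])

def transitional_probability(a, b):
    """Forward transitional probability P(b | a)."""
    return pair_counts[(a, b)] / first_counts[a]

# Word-internal transitions are perfectly predictive in this toy stream,
# while cross-boundary transitions are weaker -- the segmentation cue.
tp_within = transitional_probability("go", "la")    # inside "gola"
tp_across = transitional_probability("la", "tu")    # across a word boundary
```

A consistently co-occurring object would add a second, redundant statistic over the same stream, which is the redundancy the study exploits.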

Yurovsky, Daniel; Yu, Chen; Smith, Linda B.

2012-01-01

115

Differential Client Satisfaction with Holland's Self-Directed Search.

ERIC Educational Resources Information Center

Satisfaction with Holland's Self-Directed Search (SDS) was measured using a sample of college freshmen, dichotomized on Rotter's construct of locus of control and Holland's construct of differentiation. Results support the prediction that internally controlled individuals would be more satisfied with the SDS than externally controlled students.…

Byrne, Thomas P.; And Others

1979-01-01

116

Nonlinear programming by mesh adaptive direct searches

This paper is intended not as a survey, but as an introduction to some ideas behind the class of mesh adaptive direct search (MADS) methods. Space limitations dictate that only a brief description of various key topics be provided, along with several references, which themselves provide further references. The convergence theory for the methods presented here makes a case for clos-

Mark A. Abramson; J. E. Dennis Jr

117

A Direct Search for Dirac Magnetic Monopoles

Magnetic monopoles are highly ionizing and curve in the direction of the magnetic field. A new dedicated magnetic monopole trigger at CDF, which requires large light pulses in the scintillators of the time-of-flight system, remains highly efficient to monopoles while consuming a tiny fraction of the available trigger bandwidth. A specialized offline reconstruction checks the central drift chamber for large dE/dx tracks which do not curve in the plane perpendicular to the magnetic field. We observed zero monopole candidate events in 35.7 pb^{-1} of proton-antiproton collisions at √s = 1.96 TeV. This implies a monopole production cross section limit σ < 0.2 pb for monopoles with mass between 100 and 700 GeV, and, for a Drell-Yan like pair production mechanism, a mass limit m > 360 GeV.

Mulhearn, Michael James; /MIT

2004-10-01

118

Carbon Nanotubes Potentialities in Directional Dark Matter Searches

We propose a new solution to the problem of dark matter directional detection based on large parallel arrays of carbon nanotubes. The phenomenon of ion channeling in single wall nanotubes is simulated to calculate the expected number of recoiling carbon ions, set in motion by hypothetical scattering with dark matter particles, that are subsequently driven along the nanotubes' longitudinal extension. As shown by explicit calculation, the relative orientation of the carbon nanotube array with respect to the direction of motion of the Sun has an appreciable effect on the channeling probability of the struck ion, and this provides the required anisotropic detector response.

L. M. Capparelli; G. Cavoto; D. Mazzilli; A. D. Polosa

2014-12-28

119

Direct searches for dark matter: Recent results

There is abundant evidence for large amounts of unseen matter in the universe. This dark matter, by its very nature, couples feebly to ordinary matter and is correspondingly difficult to detect. Nonetheless, several experiments are now underway with the sensitivity required to detect directly galactic halo dark matter through their interactions with matter and radiation. These experiments divide into two broad classes: searches for weakly interacting massive particles (WIMPs) and searches for axions. There exists a very strong theoretical bias for supposing that supersymmetry (SUSY) is a correct description of nature. WIMPs are predicted by this SUSY theory and have the required properties to be dark matter. These WIMPs are detected from the byproducts of their occasional recoil against nucleons. There are efforts around the world to detect these rare recoils. The WIMP part of this overview focuses on the cryogenic dark matter search (CDMS) underway in California. Axions, another favored dark matter candidate, are predicted to arise from a minimal extension of the standard model that explains the absence of the expected large CP violating effects in strong interactions. Axions can, in the presence of a large magnetic field, turn into microwave photons. It is the slight excess of photons above noise that signals the axion. Axion searches are underway in California and Japan. The axion part of this overview focuses on the California effort. Brevity does not allow me to discuss other WIMP and axion searches, likewise for accelerator and satellite based searches; I apologize for their omission. PMID:9419325

Rosenberg, Leslie J.

1998-01-01

120

The parallel mean free path of solar energetic particles (SEPs), which is determined by physical properties of SEPs as well as those of solar wind, is a very important parameter in space physics to study the transport of charged energetic particles in the heliosphere, especially for space weather forecasting. In space weather practice, it is necessary to find a quick approach to obtain the parallel mean free path of SEPs for a solar event. In addition, the adiabatic focusing effect caused by a spatially varying mean magnetic field in the solar system is important to the transport processes of SEPs. Recently, Shalchi presented an analytical description of the parallel diffusion coefficient with adiabatic focusing. Based on Shalchi's results, in this paper we provide a direct analytical formula as a function of parameters concerning the physical properties of SEPs and solar wind to directly and quickly determine the parallel mean free path of SEPs with adiabatic focusing. Since all of the quantities in the analytical formula can be directly observed by spacecraft, this direct method would be a very useful tool in space weather research. As applications of the direct method, we investigate the inherent relations between the parallel mean free path and various parameters concerning physical properties of SEPs and solar wind. Comparisons of parallel mean free paths with and without adiabatic focusing are also presented.

He, H.-Q.; Wan, W., E-mail: hqhe@mail.iggcas.ac.cn, E-mail: wanw@mail.iggcas.ac.cn [Beijing National Observatory of Space Environment, Institute of Geology and Geophysics, Chinese Academy of Sciences, Beijing 100029 (China)]

2012-03-01

121

Retrieval comparison of EndNote to search MEDLINE (Ovid and PubMed) versus searching them directly.

Using EndNote version 7.0, the authors tested the search capabilities of the EndNote search engine for retrieving citations from MEDLINE for importation into EndNote, a citation management software package. Ovid MEDLINE and PubMed were selected for the comparison. Several searches were performed on Ovid MEDLINE and PubMed using EndNote as the search engine, and the same searches were run on both Ovid and PubMed directly. Findings indicate that it is preferable to search MEDLINE directly rather than using EndNote. The publishers of EndNote do warn its users about the limitations of their product as a search engine when searching external databases. In this article, the limitations of EndNote as a search engine for searching MEDLINE were explored as related to MeSH, non-MeSH, citation verification, and author searching. PMID:15364649

Gall, Carole; Brahmi, Frances A

2004-01-01

122

Directed search for continuous gravitational waves from the Galactic center

NASA Astrophysics Data System (ADS)

We present the results of a directed search for continuous gravitational waves from unknown, isolated neutron stars in the Galactic center region, performed on two years of data from LIGO’s fifth science run from two LIGO detectors. The search uses a semicoherent approach, analyzing coherently 630 segments, each spanning 11.5 hours, and then incoherently combining the results of the single segments. It covers gravitational wave frequencies in a range from 78 to 496 Hz and a frequency-dependent range of first-order spindown values down to -7.86×10^{-8} Hz/s at the highest frequency. No gravitational waves were detected. The 90% confidence upper limits on the gravitational wave amplitude of sources at the Galactic center are ~3.35×10^{-25} for frequencies near 150 Hz. These upper limits are the most constraining to date for a large-parameter-space search for continuous gravitational wave signals.
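The semicoherent strategy, coherent analysis within each segment followed by incoherent combination across segments, can be illustrated with a toy stand-in: a plain periodogram per segment instead of the actual coherent statistic, and illustrative segment counts rather than the search's 630 segments.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy semicoherent search: analyze each segment coherently (here a plain
# periodogram stands in for the coherent statistic), then sum segment powers
# incoherently. Segment counts and lengths are illustrative only.
n_segments, seg_len = 63, 1024
true_bin = 200                               # signal frequency, in FFT bins
t = np.arange(n_segments * seg_len)
data = 0.5 * np.sin(2 * np.pi * true_bin * t / seg_len)
data += rng.standard_normal(t.size)          # unit-variance detector noise

stat = np.zeros(seg_len // 2 + 1)
for k in range(n_segments):
    seg = data[k * seg_len:(k + 1) * seg_len]
    stat += np.abs(np.fft.rfft(seg)) ** 2    # coherent power within segment

recovered_bin = int(np.argmax(stat))         # incoherent sum peaks at the signal
```

Summing powers rather than amplitudes sacrifices some sensitivity relative to a fully coherent search, but keeps the cost manageable and tolerates phase evolution between segments, which is the point of the semicoherent design.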

Aasi, J.; Abadie, J.; Abbott, B. P.; Abbott, R.; Abbott, T.; Abernathy, M. R.; Accadia, T.; Acernese, F.; Adams, C.; Adams, T.; Adhikari, R. X.; Affeldt, C.; Agathos, M.; Aggarwal, N.; Aguiar, O. D.; Ajith, P.; Allen, B.; Allocca, A.; Amador Ceron, E.; Amariutei, D.; Anderson, R. A.; Anderson, S. B.; Anderson, W. G.; Arai, K.; Araya, M. C.; Arceneaux, C.; Areeda, J.; Ast, S.; Aston, S. M.; Astone, P.; Aufmuth, P.; Aulbert, C.; Austin, L.; Aylott, B. E.; Babak, S.; Baker, P. T.; Ballardin, G.; Ballmer, S. W.; Barayoga, J. C.; Barker, D.; Barnum, S. H.; Barone, F.; Barr, B.; Barsotti, L.; Barsuglia, M.; Barton, M. A.; Bartos, I.; Bassiri, R.; Basti, A.; Batch, J.; Bauchrowitz, J.; Bauer, Th. S.; Bebronne, M.; Behnke, B.; Bejger, M.; Beker, M. G.; Bell, A. S.; Bell, C.; Belopolski, I.; Bergmann, G.; Berliner, J. M.; Bertolini, A.; Bessis, D.; Betzwieser, J.; Beyersdorf, P. T.; Bhadbhade, T.; Bilenko, I. A.; Billingsley, G.; Birch, J.; Bitossi, M.; Bizouard, M. A.; Black, E.; Blackburn, J. K.; Blackburn, L.; Blair, D.; Blom, M.; Bock, O.; Bodiya, T. P.; Boer, M.; Bogan, C.; Bond, C.; Bondu, F.; Bonelli, L.; Bonnand, R.; Bork, R.; Born, M.; Bose, S.; Bosi, L.; Bowers, J.; Bradaschia, C.; Brady, P. R.; Braginsky, V. B.; Branchesi, M.; Brannen, C. A.; Brau, J. E.; Breyer, J.; Briant, T.; Bridges, D. O.; Brillet, A.; Brinkmann, M.; Brisson, V.; Britzger, M.; Brooks, A. F.; Brown, D. A.; Brown, D. D.; Brückner, F.; Bulik, T.; Bulten, H. J.; Buonanno, A.; Buskulic, D.; Buy, C.; Byer, R. L.; Cadonati, L.; Cagnoli, G.; Calderón Bustillo, J.; Calloni, E.; Camp, J. B.; Campsie, P.; Cannon, K. C.; Canuel, B.; Cao, J.; Capano, C. D.; Carbognani, F.; Carbone, L.; Caride, S.; Castiglia, A.; Caudill, S.; Cavaglià, M.; Cavalier, F.; Cavalieri, R.; Cella, G.; Cepeda, C.; Cesarini, E.; Chakraborty, R.; Chalermsongsak, T.; Chao, S.; Charlton, P.; Chassande-Mottin, E.; Chen, X.; Chen, Y.; Chincarini, A.; Chiummo, A.; Cho, H. S.; Chow, J.; Christensen, N.; Chu, Q.; Chua, S. S. 
Y.; Chung, S.; Ciani, G.; Clara, F.; Clark, D. E.; Clark, J. A.; Cleva, F.; Coccia, E.; Cohadon, P.-F.; Colla, A.; Colombini, M.; Constancio, M., Jr.; Conte, A.; Conte, R.; Cook, D.; Corbitt, T. R.; Cordier, M.; Cornish, N.; Corsi, A.; Costa, C. A.; Coughlin, M. W.; Coulon, J.-P.; Countryman, S.; Couvares, P.; Coward, D. M.; Cowart, M.; Coyne, D. C.; Craig, K.; Creighton, J. D. E.; Creighton, T. D.; Crowder, S. G.; Cumming, A.; Cunningham, L.; Cuoco, E.; Dahl, K.; Dal Canton, T.; Damjanic, M.; Danilishin, S. L.; D'Antonio, S.; Danzmann, K.; Dattilo, V.; Daudert, B.; Daveloza, H.; Davier, M.; Davies, G. S.; Daw, E. J.; Day, R.; Dayanga, T.; De Rosa, R.; Debreczeni, G.; Degallaix, J.; Del Pozzo, W.; Deleeuw, E.; Deléglise, S.; Denker, T.; Dent, T.; Dereli, H.; Dergachev, V.; DeRosa, R.; DeSalvo, R.; Dhurandhar, S.; Di Fiore, L.; Di Lieto, A.; Di Palma, I.; Di Virgilio, A.; Díaz, M.; Dietz, A.; Dmitry, K.; Donovan, F.; Dooley, K. L.; Doravari, S.; Drago, M.; Drever, R. W. P.; Driggers, J. C.; Du, Z.; Dumas, J.-C.; Dwyer, S.; Eberle, T.; Edwards, M.; Effler, A.; Ehrens, P.; Eichholz, J.; Eikenberry, S. S.; Endr?czi, G.; Essick, R.; Etzel, T.; Evans, K.; Evans, M.; Evans, T.; Factourovich, M.; Fafone, V.; Fairhurst, S.; Fang, Q.; Farr, B.; Farr, W.; Favata, M.; Fazi, D.; Fehrmann, H.; Feldbaum, D.; Ferrante, I.; Ferrini, F.; Fidecaro, F.; Finn, L. S.; Fiori, I.; Fisher, R.; Flaminio, R.; Foley, E.; Foley, S.; Forsi, E.; Forte, L. A.; Fotopoulos, N.; Fournier, J.-D.; Franco, S.; Frasca, S.; Frasconi, F.; Frede, M.; Frei, M.; Frei, Z.; Freise, A.; Frey, R.; Fricke, T. T.; Fritschel, P.; Frolov, V. V.; Fujimoto, M.-K.; Fulda, P.; Fyffe, M.; Gair, J.; Gammaitoni, L.; Garcia, J.; Garufi, F.; Gehrels, N.; Gemme, G.; Genin, E.; Gennai, A.; Gergely, L.; Ghosh, S.; Giaime, J. A.; Giampanis, S.; Giardina, K. D.; Giazotto, A.; Gil-Casanova, S.; Gill, C.; Gleason, J.; Goetz, E.; Goetz, R.; Gondan, L.; González, G.; Gordon, N.; Gorodetsky, M. 
L.; Gossan, S.; Goßler, S.; Gouaty, R.; Graef, C.; Graff, P. B.; Granata, M.; Grant, A.; Gras, S.; Gray, C.; Greenhalgh, R. J. S.; Gretarsson, A. M.; Griffo, C.; Grote, H.; Grover, K.; Grunewald, S.; Guidi, G. M.; Guido, C.; Gushwa, K. E.; Gustafson, E. K.; Gustafson, R.; Hall, B.; Hall, E.; Hammer, D.; Hammond, G.; Hanke, M.; Hanks, J.; Hanna, C.; Hanson, J.; Harms, J.; Harry, G. M.; Harry, I. W.; Harstad, E. D.; Hartman, M. T.; Haughian, K.; Hayama, K.; Heefner, J.; Heidmann, A.; Heintze, M.; Heitmann, H.; Hello, P.; Hemming, G.; Hendry, M.; Heng, I. S.; Heptonstall, A. W.; Heurs, M.; Hild, S.; Hoak, D.; Hodge, K. A.; Holt, K.; Holtrop, M.; Hong, T.; Hooper, S.; Horrom, T.; Hosken, D. J.; Hough, J.; Howell, E. J.; Hu, Y.; Hua, Z.; Huang, V.; Huerta, E. A.; Hughey, B.; Husa, S.; Huttner, S. H.; Huynh, M.; Huynh-Dinh, T.; Iafrate, J.; Ingram, D. R.

2013-11-01

123

Directing Web Search Engines using Knowledge Amplification by Structured Expert Randomization

The World Wide Web continues to play an important role in storing information. Search engines for the general web typically do not search the World Wide Web directly…

Chen, Shu-Ching

124

Direct imaging searches for planets around white dwarf stars

NASA Astrophysics Data System (ADS)

White dwarfs are excellent targets for direct imaging searches for extra-solar planets, since they are up to 10^4 times fainter than their main sequence progenitors, providing a huge gain in the contrast problem. In addition, the orbits of planetary companions that lie beyond the maximum extent of the Red Giant envelope are expected to widen considerably, improving resolution and further encouraging direct detection. We discuss current searches for planetary companions to white dwarfs, including our own “DODO” programme. At the time of writing, no planetary companion to a white dwarf has been detected. The most sensitive searches have been capable of detecting companions ≥5 M_{Jup}, and their non-detection is consistent with the conclusions of McCarthy & Zuckerman (2004), that no more than 3% of stars harbour 5-10 M_{Jup} planets at orbits between 75-300 AU. Extremely Large Telescopes are required to enable deeper searches sensitive to lower mass planets, and to provide larger target samples including more distant and older white dwarfs. ELTs will also enable spectroscopic follow-up for any resolved planets, and follow-up of any planetary companions discovered astrometrically by GAIA and SIM.

Burleigh, Matt; Hogan, Emma; Clarke, Fraser

125

A survey of search directions in interior point methods for linear programming

A basic characteristic of an interior point algorithm for linear programming is the search direction. Many papers on interior point algorithms only give an implicit description of the search direction. In this report we derive explicit expressions for the search directions used in many well-known algorithms. Comparing these explicit expressions gives a good insight into the similarities and differences between

Dick Den Hertog; Cees Roos

1991-01-01

126

NASA Astrophysics Data System (ADS)

The Amaldi 10 Parallel Session C2 on gravitational wave (GW) search results, data analysis and parameter estimation included three lively sessions of lectures by 13 presenters, and 34 posters. The talks and posters covered a huge range of material, including results and analysis techniques for ground-based GW detectors, targeting anticipated signals from different astrophysical sources: compact binary inspiral, merger and ringdown; GW bursts from intermediate mass binary black hole mergers, cosmic string cusps, core-collapse supernovae, and other unmodeled sources; continuous waves from spinning neutron stars; and a stochastic GW background. There was considerable emphasis on Bayesian techniques for estimating the parameters of coalescing compact binary systems from the gravitational waveforms extracted from the data from the advanced detector network. This included methods to distinguish deviations of the signals from what is expected in the context of General Relativity.

Astone, Pia; Weinstein, Alan; Agathos, Michalis; Bejger, Michał; Christensen, Nelson; Dent, Thomas; Graff, Philip; Klimenko, Sergey; Mazzolo, Giulio; Nishizawa, Atsushi; Robinet, Florent; Schmidt, Patricia; Smith, Rory; Veitch, John; Wade, Madeline; Aoudia, Sofiane; Bose, Sukanta; Calderon Bustillo, Juan; Canizares, Priscilla; Capano, Colin; Clark, James; Colla, Alberto; Cuoco, Elena; Da Silva Costa, Carlos; Dal Canton, Tito; Evangelista, Edgar; Goetz, Evan; Gupta, Anuradha; Hannam, Mark; Keitel, David; Lackey, Benjamin; Logue, Joshua; Mohapatra, Satyanarayan; Piergiovanni, Francesco; Privitera, Stephen; Prix, Reinhard; Pürrer, Michael; Re, Virginia; Serafinelli, Roberto; Wade, Leslie; Wen, Linqing; Wette, Karl; Whelan, John; Palomba, C.; Prodi, G.

2015-02-01

127

An efficient location-based query algorithm for protecting user privacy in distributed networks is given. This algorithm utilizes the users' location indexes and multiple parallel threads to quickly search for and select candidate anonymity sets containing more users and more uniformly distributed location information, accelerating the execution of the temporal-spatial anonymization operations, and it allows users to configure customized privacy-preserving location query requests. Simulation results show that the proposed algorithm can simultaneously offer location query services to more users, improve the performance of the anonymity server, and satisfy users' anonymous location requests. PMID:24790579

Liu, Lei; Zhao, Jing

2014-01-01

128

Study of genetic direct search algorithms for function optimization

NASA Technical Reports Server (NTRS)

The results are presented of a study to determine the performance of genetic direct search algorithms in solving function optimization problems arising in the optimal and adaptive control areas. The findings indicate that: (1) genetic algorithms can outperform standard algorithms in multimodal and/or noisy optimization situations, but suffer from lack of gradient exploitation facilities when gradient information can be utilized to guide the search. (2) For large populations, or low dimensional function spaces, mutation is a sufficient operator. However for small populations or high dimensional functions, crossover applied in about equal frequency with mutation is an optimum combination. (3) Complexity, in terms of storage space and running time, is significantly increased when population size is increased or the inversion operator, or the second level adaptation routine is added to the basic structure.
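Finding (2) above, crossover and mutation applied in roughly equal frequency, can be sketched with a minimal genetic search on a multimodal test function. The operators, population sizes, and test function here are illustrative, not those of the study.

```python
import math
import random

random.seed(1)

# Minimal genetic search on a multimodal test function; operators and
# parameters are illustrative, not those of the study above.
def fitness(x):
    # 2-D Rastrigin-style function: many local minima, global minimum 0 at origin
    return sum(xi * xi - 10 * math.cos(2 * math.pi * xi) + 10 for xi in x)

def mutate(x, sigma=0.3):
    return [xi + random.gauss(0, sigma) for xi in x]

def crossover(a, b):
    return [random.choice(pair) for pair in zip(a, b)]

pop = [[random.uniform(-5, 5) for _ in range(2)] for _ in range(40)]
for _ in range(200):
    pop.sort(key=fitness)
    survivors = pop[:20]                      # truncation selection
    children = []
    while len(children) < 20:
        a, b = random.sample(survivors, 2)
        # crossover and mutation applied with about equal frequency,
        # echoing finding (2) of the abstract
        child = crossover(a, b) if random.random() < 0.5 else mutate(a)
        children.append(child)
    pop = survivors + children

best = min(pop, key=fitness)
```

The multimodal landscape is exactly the regime where finding (1) says such population-based search can beat gradient methods; on a smooth unimodal function a gradient step would exploit information this sketch ignores.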

Zeigler, B. P.

1974-01-01

129

A parallel direct numerical simulation of dust particles in a turbulent flow

NASA Astrophysics Data System (ADS)

Due to their effects on radiation transport, aerosols play an important role in the global climate. Mineral dust aerosol is a predominant natural aerosol in the desert and semi-desert regions of the Middle East and North Africa (MENA). The Arabian Peninsula is one of the three predominant source regions on the planet "exporting" dust to almost the entire world. Mineral dust aerosols make up about 50% of the tropospheric aerosol mass and therefore produce a significant impact on the Earth's climate and the atmospheric environment, especially in the MENA region that is characterized by frequent dust storms and large aerosol generation. Understanding the mechanisms of dust emission, transport and deposition is therefore essential for correctly representing dust in numerical climate prediction. In this study we present results of numerical simulations of dust particles in a turbulent flow to study the interaction between dust and the atmosphere. Homogeneous and passive dust particles in the boundary layers are entrained and advected under the influence of a turbulent flow. Currently no interactions between particles are included. Turbulence is resolved through direct numerical simulation using a parallel incompressible Navier-Stokes flow solver. Model output provides information on particle trajectories, turbulent transport of dust and effects of gravity on dust motion, which will be compared with wind tunnel experiments at the University of Texas at Austin. Results of parallel efficiency and scalability testing are provided. Future versions of the model will include air-particle momentum exchanges, varying particle sizes and saltation effects.
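The particle side of such a simulation, passive non-interacting tracers advected by a resolved velocity field, can be sketched kinematically. Here a steady 2-D Taylor-Green vortex stands in for the turbulent DNS field; that substitution, and all numbers below, are assumptions for illustration only.

```python
import numpy as np

# Kinematic sketch only: passive, non-interacting tracers advected in a fixed
# 2-D Taylor-Green vortex that stands in for a resolved turbulent field; the
# study itself couples particles to a parallel 3-D incompressible DNS.
def velocity(p):
    x, y = p[:, 0], p[:, 1]
    u = np.sin(x) * np.cos(y)
    v = -np.cos(x) * np.sin(y)
    return np.stack([u, v], axis=1)

rng = np.random.default_rng(42)
particles = rng.uniform(0.0, 2 * np.pi, size=(100, 2))   # initial positions
dt, n_steps = 0.01, 500

for _ in range(n_steps):                     # midpoint (RK2) time stepping
    k1 = velocity(particles)
    k2 = velocity(particles + 0.5 * dt * k1)
    particles = (particles + dt * k2) % (2 * np.pi)      # periodic domain
```

Because each particle update is independent, this loop is the part that distributes naturally over processors in a parallel solver; gravity and air-particle momentum exchange would enter as extra terms in the update.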

Nguyen, H. V.; Yokota, R.; Stenchikov, G.; Kocurek, G.

2012-04-01

130

Direct numerical simulation of instabilities in parallel flow with spherical roughness elements

NASA Technical Reports Server (NTRS)

Results from a direct numerical simulation of laminar flow over a flat surface with spherical roughness elements using a spectral-element method are given. The numerical simulation approximates roughness as a cellular pattern of identical spheres protruding from a smooth wall. Periodic boundary conditions on the domain's horizontal faces simulate an infinite array of roughness elements extending in the streamwise and spanwise directions, which implies the parallel-flow assumption, and results in a closed domain. A body force, designed to yield the horizontal Blasius velocity in the absence of roughness, sustains the flow. Instabilities above a critical Reynolds number reveal negligible oscillations in the recirculation regions behind each sphere and in the free stream, high-amplitude oscillations in the layer directly above the spheres, and a mean profile with an inflection point near the sphere's crest. The inflection point yields an unstable layer above the roughness (where U''(y) is less than 0) and a stable region within the roughness (where U''(y) is greater than 0). Evidently, the instability begins when the low-momentum or wake region behind an element, being the region most affected by disturbances (purely numerical in this case), goes unstable and moves. In compressible flow with periodic boundaries, this motion sends disturbances to all regions of the domain. In the unstable layer just above the inflection point, the disturbances grow while being carried downstream with a propagation speed equal to the local mean velocity; they do not grow amid the low energy region near the roughness patch. The most amplified disturbance eventually arrives at the next roughness element downstream, perturbing its wake and inducing a global response at a frequency governed by the streamwise spacing between spheres and the mean velocity of the most amplified layer.

Deanna, R. G.

1992-01-01

131

Three-dimensional parallel distributed inversion of CSEM data using a direct forward solver

NASA Astrophysics Data System (ADS)

For 3-D inversion of controlled-source electromagnetic (CSEM) data, increasing availability of high-performance computers enables us to apply inversion techniques that are theoretically favourable, yet have previously been considered to be computationally too demanding. We present a newly developed parallel distributed 3-D inversion algorithm for interpreting CSEM data in the frequency domain. Our scheme is based on a direct forward solver and uses Gauss-Newton minimization with explicit formation of the Jacobian. This combination is advantageous, because Gauss-Newton minimization converges rapidly, limiting the number of expensive forward modelling cycles. Explicit calculation of the Jacobian allows us to (i) precondition the Gauss-Newton system, which further accelerates convergence, (ii) determine suitable regularization parameters by comparing matrix norms of data- and model-dependent terms in the objective function and (iii) thoroughly analyse data sensitivities and interdependencies. We show that explicit Jacobian formation in combination with direct solvers is likely to require less memory than combinations of direct solvers and implicit Jacobian usage for many moderate-scale CSEM surveys. We demonstrate the excellent convergence properties of the new inversion scheme for several synthetic models. We compare model updates determined by solving either a system of normal equations or, alternatively, a linear least-squares system. We assess the behaviour of three different stabilizing functionals in the framework of our inversion scheme, and demonstrate that implicit regularization resulting from incomplete iterative solution of the model update equations helps stabilize the inversion. We show inversions of models with up to two million unknowns in the forward solution, which clearly demonstrates applicability of our approach to real-world problems.
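The core update of a Gauss-Newton scheme with an explicitly formed Jacobian can be sketched on a toy forward model. The exponential below stands in for the CSEM simulator, and the damping weight `lam` is a hypothetical regularization term; nothing here reproduces the paper's actual solver.

```python
import numpy as np

# Sketch of damped Gauss-Newton with an explicit Jacobian. The forward model
# is a toy exponential, not a CSEM simulator; `lam` is a hypothetical
# regularization weight.
def forward(m, x):
    return m[0] * np.exp(-m[1] * x)

def jacobian(m, x):
    J = np.empty((x.size, 2))
    J[:, 0] = np.exp(-m[1] * x)              # d f / d m0
    J[:, 1] = -m[0] * x * np.exp(-m[1] * x)  # d f / d m1
    return J

x = np.linspace(0.0, 4.0, 50)
m_true = np.array([2.0, 0.7])
d_obs = forward(m_true, x)                   # noise-free synthetic data

m = np.array([1.0, 1.0])                     # starting model
lam = 1e-6                                   # damping weight
for _ in range(20):
    r = d_obs - forward(m, x)                # data residual
    J = jacobian(m, x)
    # normal equations: (J^T J + lam * I) dm = J^T r
    dm = np.linalg.solve(J.T @ J + lam * np.eye(2), J.T @ r)
    m = m + dm
```

The explicit J is what makes preconditioning and sensitivity analysis cheap, at the price of storing an (n_data × n_model) matrix, which is the memory trade-off the abstract discusses.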

Grayver, A. V.; Streich, R.; Ritter, O.

2013-06-01

132

We have investigated the magnetoconductance of semiconducting carbon nanotubes (CNTs) in pulsed, parallel magnetic fields up to 60 T, and report the direct observation of the predicted band-gap closure and the reopening of the gap under variation of the applied magnetic field. We also highlight the important influence of mechanical strain on the magnetoconductance of the CNTs. PMID:21405643

Jhang, S H; Margańska, M; Skourski, Y; Preusche, D; Grifoni, M; Wosnitza, J; Strunk, C

2011-03-01

133

No-search algorithm for direction of arrival estimation

NASA Astrophysics Data System (ADS)

Direction of arrival estimation (DOA) is an important problem in ionospheric research and electromagnetics as well as many other fields. When superresolution techniques are used, a computationally expensive search should be performed in general. In this paper, a no-search algorithm is presented. The idea is to separate the source signals in the time-frequency plane by using the Short-Time Fourier Transform. The direction vector for each source is found by coherent summation over the instantaneous frequency (IF) tracks of the individual sources which are found automatically by employing morphological image processing. Both overlapping and nonoverlapping source IF tracks can be processed and identified by the proposed approach. The CLEAN algorithm is adopted in order to isolate the IF tracks of the overlapping sources with different powers. The proposed method is very effective in finding the IF tracks and can be applied for signals with arbitrary IF characteristics. While the proposed method can be applied to any sensor geometry, planar uniform circular arrays (UCA) bring additional advantages. Different properties of the UCA are presented, and it is shown that the DOA angles can be found as the mean-square error optimum solution of a linear matrix equation. Several simulations are done, and it is shown that the proposed approach performs significantly better than the conventional methods.
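The first stage of the approach, short-time Fourier analysis followed by tracking a source's instantaneous-frequency ridge, can be sketched for a single clean chirp. The source separation, CLEAN, and array-processing stages are omitted, and all signal parameters below are made up.

```python
import numpy as np

# Toy version of the front end: short-time Fourier analysis of a single clean
# chirp, with the instantaneous-frequency (IF) track taken as the peak bin of
# each frame. Separation of overlapping sources and DOA steps are omitted.
fs = 1000.0
t = np.arange(int(fs)) / fs                         # 1 s of samples
x = np.sin(2 * np.pi * (50 * t + 100 * t ** 2))     # IF sweeps 50 -> 250 Hz

win, hop = 128, 64
window = np.hanning(win)
frames = [x[i:i + win] * window
          for i in range(0, x.size - win + 1, hop)]
spectra = np.abs(np.fft.rfft(frames, axis=1))       # magnitude STFT

freqs = np.fft.rfftfreq(win, d=1 / fs)
if_track = freqs[np.argmax(spectra, axis=1)]        # peak-bin IF per frame
```

In the full method each source's IF track, isolated in the time-frequency plane, anchors a coherent summation across the array, which is what removes the need for a parameter-space search.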

Tuncer, T. Engin; Özgen, M. Tankut

2009-10-01

134

Direct Dark Matter Searches with CDMS and XENON

The Cryogenic Dark Matter Search (CDMS) and XENON experiments aim to directly detect dark matter in the form of weakly interacting massive particles (WIMPs) via their elastic scattering on the target nuclei. The experiments use different techniques to suppress background event rates to the minimum and, at the same time, to achieve a high WIMP detection rate. The operation of cryogenic Ge and Si crystals of the CDMS-II experiment in the Soudan mine yielded the most stringent spin-independent WIMP-nucleon cross-section limit (~10^{-43} cm^2) at a WIMP mass of 60 GeV/c^2. The two-phase xenon detector of the XENON10 experiment is currently taking data in the Gran Sasso underground lab, and promising preliminary results were recently reported. Both experiments are expected to increase their WIMP sensitivity by one order of magnitude in the science runs scheduled for 2007.

Kaixuan Ni; Laura Baudis

2006-11-09

135

A Solver for Massively Parallel Direct Numerical Simulation of Three-Dimensional Multiphase Flows

We present a new solver for massively parallel simulations of fully three-dimensional multiphase flows. The solver runs on a variety of computer architectures from laptops to supercomputers and on 65536 threads or more (limited only by the availability to us of more threads). The code is wholly written by the authors in Fortran 2003 and uses a domain decomposition strategy for parallelization with MPI. The fluid interface solver is based on a parallel implementation of the LCRM hybrid Front Tracking/Level Set method designed to handle highly deforming interfaces with complex topology changes. We discuss the implementation of this interface method and its particular suitability to distributed processing where all operations are carried out locally on distributed subdomains. We have developed parallel GMRES and Multigrid iterative solvers suited to the linear systems arising from the implicit solution of the fluid velocities and pressure in the presence of strong density and viscosity discontinuities across flu...

Shin, S; Juric, D

2014-01-01

136

Direct Spatial Search on Pictorial Databases Using Packed R-Trees

Pictorial databases require efficient and direct spatial search based on the analog form of spatial objects and relationships instead of search based on some cumbersome alphanumeric encodings of the pictures. R-trees (two- dimensional B-trees) are excellent devices for indexing spatial objects and relationships found on pictures. Their most important feature is that they provide high level object oriented search rather

Nick Roussopoulos; Daniel Leifker

1985-01-01

137

We calculated the Faraday rotation of one-dimensional (1-D) magnetic photonic crystals (MPCs), which are based on the dielectric Ti2O3 and Al2O3, and the magnetic Bi:YIG, by employing 4 x 4 transfer-matrix method for the general case that the linearly polarized incident beam is parallel to their periodic direction, as mostly studied for the 1-D MPCs. Furthermore, even for a special

Y. H. Lu; M. D. Huang; S. Y. Park; P. J. Kim; Y. P. Lee; J. Y. Rhee

2006-01-01

138

We have developed a parallel algorithm for microdigital-holographic particle-tracking velocimetry. The algorithm is used in (1) numerical reconstruction of a particle image from a digital hologram, and (2) searching for particles. The numerical reconstruction from the digital hologram makes use of the Fresnel diffraction equation and the FFT (fast Fourier transform), whereas the particle search algorithm looks for local maxima of gradation in a reconstruction field represented by a 3D matrix. To achieve high-performance computing for both calculations (reconstruction and particle search), two memory partitions are allocated to the 3D matrix. In this matrix, the reconstruction part consists of horizontally placed 2D memory partitions on the x-y plane for the FFT, whereas the particle search part consists of vertically placed 2D memory partitions set along the z axis. Consequently, scalability is obtained in proportion to the number of processor elements, where the benchmarks are carried out for parallel computation on an SGI Altix machine.
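The particle-search step above amounts to finding local intensity maxima in a reconstructed 3D field. A minimal serial sketch (the paper partitions the volume across processors; the voxel test itself is the same) with illustrative names:

```python
def local_maxima_3d(vol, threshold=0.0):
    """Find strict local maxima in a 3-D intensity field stored as nested
    lists: keep voxels brighter than the threshold and all 26 neighbours."""
    nz, ny, nx = len(vol), len(vol[0]), len(vol[0][0])
    peaks = []
    for z in range(1, nz - 1):
        for y in range(1, ny - 1):
            for x in range(1, nx - 1):
                v = vol[z][y][x]
                if v <= threshold:
                    continue
                neighbours = (vol[z + dz][y + dy][x + dx]
                              for dz in (-1, 0, 1) for dy in (-1, 0, 1)
                              for dx in (-1, 0, 1)
                              if (dz, dy, dx) != (0, 0, 0))
                if all(v > n for n in neighbours):
                    peaks.append((z, y, x))
    return peaks

# A 5x5x5 volume with a single bright voxel at its centre.
vol = [[[0.0] * 5 for _ in range(5)] for _ in range(5)]
vol[2][2][2] = 1.0
print(local_maxima_3d(vol))  # -> [(2, 2, 2)]
```

Parallelizing this scan is what the vertically placed z-axis partitions in the abstract enable: each processor searches its own slab of the matrix.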

Satake, Shin-ichi; Kanamori, Hiroyuki; Kunugi, Tomoaki; Sato, Kazuho; Ito, Tomoyoshi; Yamamoto, Keisuke

2007-02-01

139

NASA Astrophysics Data System (ADS)

The modern distributed hydrological models allow the representation of the different surface and subsurface phenomena with great accuracy and high spatial and temporal resolution. Such complexity requires, in general, an equally accurate parametrization. A number of approaches have been followed in this respect, from simple local search methods (like the Nelder-Mead algorithm) that minimize a cost function representing some distance between the model's output and the available measures, to more complex approaches like dynamic filters (such as the Ensemble Kalman Filter) that carry out an assimilation of the observations. In this work the first approach was followed in order to compare the performances of three different direct search algorithms on the calibration of a distributed hydrological balance model. The direct search family can be defined as the category of algorithms that make no use of derivatives of the cost function (which is, in general, a black box) and comprises a large number of possible approaches. The main benefit of this class of methods is that they do not require changes in the implementation of the numerical codes to be calibrated. The first algorithm is the classical Nelder-Mead, often used in many applications and utilized here as a reference. The second algorithm is a GSS (Generating Set Search) algorithm, built in order to guarantee the conditions of global convergence and suitable for the parallel and multi-start implementation presented here. The third one is the EGO algorithm (Efficient Global Optimization), which is particularly suitable for calibrating black-box cost functions that require expensive computational resources (like a hydrological simulation). EGO minimizes the number of evaluations of the cost function by balancing the need to minimize a response surface that approximates the problem against the need to improve the approximation by sampling where the prediction error may be high.
The hydrological model to be calibrated was MOBIDIC, a complete distributed balance model developed at the Department of Civil and Environmental Engineering of the University of Florence. A discussion of the comparative effectiveness of the different algorithms on different case studies of Central Italy basins is provided.
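A derivative-free direct search of the kind compared above can be sketched with a compass (GSS-style) search. This is a toy stand-in, not MOBIDIC's calibration code: the quadratic cost plays the role of the model-vs-measurements distance, and every name here is hypothetical.

```python
def compass_search(cost, x0, step=1.0, tol=1e-6, max_iter=10000):
    """Minimal derivative-free direct search: poll the cost along +/- each
    coordinate direction, move on improvement, otherwise halve the step,
    and stop once the step drops below tol."""
    x = list(x0)
    fx = cost(x)
    for _ in range(max_iter):
        if step < tol:
            break
        improved = False
        for i in range(len(x)):
            for sign in (1.0, -1.0):
                trial = list(x)
                trial[i] += sign * step
                ft = cost(trial)
                if ft < fx:
                    x, fx, improved = trial, ft, True
        if not improved:
            step *= 0.5
    return x, fx

# Toy "distance between model output and measurements" with minimum at (3, -1).
cost = lambda p: (p[0] - 3.0) ** 2 + (p[1] + 1.0) ** 2
x, fx = compass_search(cost, [0.0, 0.0])
```

The key property matching the abstract: the cost function is treated as a black box, so the simulated model needs no code changes, and the independent polls along each direction are what make a parallel, multi-start implementation natural.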

Campo, Lorenzo; Castelli, Fabio; Caparrini, Francesca

2010-05-01

140

Scalability of Parallel Spatial Direct Numerical Simulations on Intel Hypercube and IBM SP1 and SP2

NASA Technical Reports Server (NTRS)

The implementation and performance of a parallel spatial direct numerical simulation (PSDNS) approach on the Intel iPSC/860 hypercube and IBM SP1 and SP2 parallel computers is documented. Spatially evolving disturbances associated with the laminar-to-turbulent transition in boundary-layer flows are computed with the PSDNS code. The feasibility of using the PSDNS to perform transition studies on these computers is examined. The results indicate that the PSDNS approach can effectively be parallelized on a distributed-memory parallel machine by remapping the distributed data structure during the course of the calculation. Scalability information is provided to estimate computational costs as the number of grid points changes. By increasing the number of processors, slower-than-linear speedups are achieved with optimized (machine-dependent library) routines. This slower-than-linear speedup results because the computational cost is dominated by the FFT routine, which yields less than ideal speedups. By using appropriate compile options and optimized library routines on the SP1, the serial code achieves 52-56 Mflops on a single node of the SP1 (45 percent of theoretical peak performance). The actual performance of the PSDNS code on the SP1 is evaluated with a "real world" simulation that consists of 1.7 million grid points. One time step of this simulation is calculated on eight nodes of the SP1 in the same time as required by a Cray Y/MP supercomputer. For the same simulation, 32 nodes of the SP1 and SP2 are required to reach the performance of a Cray C-90. A 32-node SP1 (SP2) configuration is 2.9 (4.6) times faster than a Cray Y/MP for this simulation, while the hypercube is roughly 2 times slower than the Y/MP for this application. KEY WORDS: Spatial direct numerical simulations; incompressible viscous flows; spectral methods; finite differences; parallel computing.
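The scalability figures quoted above follow from the standard definitions of speedup and parallel efficiency; the timings in this sketch are hypothetical, chosen only to illustrate a slower-than-linear case like the FFT-dominated one described.

```python
def speedup(t_serial, t_parallel):
    """Classic speedup: serial runtime divided by parallel runtime."""
    return t_serial / t_parallel

def parallel_efficiency(t_serial, t_parallel, n_procs):
    """Fraction of ideal linear speedup actually achieved on n_procs."""
    return speedup(t_serial, t_parallel) / n_procs

# Hypothetical timings: a job taking 32 s serially and 2 s on 32 processors
# achieves a 16x speedup, i.e. 50% parallel efficiency (slower than linear).
s = speedup(32.0, 2.0)
e = parallel_efficiency(32.0, 2.0, 32)
```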

Joslin, Ronald D.; Hanebutte, Ulf R.; Zubair, Mohammad

1995-01-01

141

A Demand System for a Dynamic Auction Market with Directed Search

A Demand System for a Dynamic Auction Market with Directed Search Matthew Backus Cornell University system for a dynamic auction market with directed search. In each period, heterogeneous goods that the state of the market -- which includes active bidders' types and information sets -- evolves

Chen, Yiling

142

Search Task Performance Using Subtle Gaze Direction with the presence of Distractions

modulation technique called Subtle Gaze Direction (SGD) for guiding the user in a simple searching task. SGD interrupting their visual experience. The goal of SGD is to direct a viewer's gaze to certain regions are returned on a simple search task using SGD, as compared to results returned when no modulation at all

Grimm, Cindy

143

Search Task Performance Using Subtle Gaze Direction with the Presence of Distractions

modulation technique called subtle gaze direction (SGD) for guiding the user in a simple searching task. SGD interrupting their visual experience. The goal of SGD is to direct a viewer's gaze to certain regions are returned on a simple search task using SGD, as compared to results returned when no modulation at all

Bailey, Reynold J.

144

Technical Report CSTN-162, Centre for Parallel Computing Research (CPC), Computer Science, Massey University, Albany, North Shore 102-904, Auckland, New Zealand, 2012.

Hawick, Ken

145

J. Parallel Distrib. Comput. 68 (2008) 1389-1401. Contents lists available at ScienceDirect

, however, single-chip, massively parallel systems such as the NVIDIA GeForce 8 Series GPUs. Contemporary many-core processors such as the GeForce 8800 GTX enable application developers on the GeForce 8 Series is not a trivial task. At first glance, it appears to be a multi

Hwu, Wen-mei W.

146

J. Parallel Distrib. Comput. ( ) Contents lists available at ScienceDirect

platforms with application to the Poisson equation. Jing Wu, Joseph JaJa, Department of Electrical and Computer Engineering and Institute for Advanced Computer Studies, University of Maryland, College Park, MD. Keywords: Parallel and vector implementations; CUDA; GPU; Poisson equations. We develop optimized multi

JaJa, Joseph F.

147

Direct Imaging Searches with the Apodizing Phase Plate Coronagraph

NASA Astrophysics Data System (ADS)

The sensitivity of direct imaging searches for extrasolar planets is limited by the presence of diffraction rings from the primary star. Coronagraphs are angular filters that minimise these diffraction structures whilst allowing light from faint companions to shine through. The Apodizing Phase Plate (APP; Kenworthy 2007) coronagraph is a simple pupil plane optic that suppresses diffraction over a 180 degree region around each star simultaneously, providing easy beam switching observations and requiring no time consuming optical alignment at the telescope. We will present our results on using the APP at the Very Large Telescope in surveys for extrasolar planets around A/F and debris disk hosting stars in the L' band (3.8 microns) in the Southern Hemisphere, where we reach a contrast of 12 magnitudes at 0.5 arcseconds (Meshkat 2013). In Leiden, we are also developing the next generation of broadband achromatic coronagraphs that can simultaneously image both sides of the star using Vector APPs (Snik 2012, Otten 2012). Recent laboratory results showing the potential of this technology for future ELTs will also be presented.

Kenworthy, M.; Meshkat, T.; Otten, G.; Codona, J.

2014-03-01

148

Direct Searches for Scalar Leptoquarks at the Run II Tevatron

This dissertation sets new limits on the mass of the scalar leptoquark from direct searches carried out at the Run II CDF detector using data from March 2001 to October 2003. The data analyzed has a total time-integrated measured luminosity of 198 pb$^{-1}$ of $p\bar{p}$ collisions with $\sqrt{s} = 1.96$ TeV. Leptoquarks are assumed to be pair-produced and to decay into a lepton and a quark of the same generation. They consider two possible leptoquark decays: (1) $\beta$ = BR(LQ $\to \mu q$) = 1.0, and (2) $\beta$ = BR(LQ $\to \mu q$) = 0.5. For the $\beta = 1$ channel, they focus on the signature represented by two isolated high-$p_T$ muons and two isolated high-$p_T$ jets. For the $\beta = 1/2$ channel, they focus on the signature represented by one isolated high-$p_T$ muon, large missing transverse energy, and two isolated high-$p_T$ jets. No leptoquark signal is experimentally detected for either signature. Using the next-to-leading-order theoretical cross section for scalar leptoquark production in $p\bar{p}$ collisions [1], they set new mass limits on second generation scalar leptoquarks. They exclude the existence of second generation scalar leptoquarks with masses below 221 (175) GeV/$c^2$ for the $\beta = 1$ (1/2) channels.

Ryan, Daniel E.; /Tufts U.

2004-11-01

149

A Ka-band direct oscillation HBT VCO MMIC with a parallel negative resistor circuit

This paper describes a low phase noise Ka-band VCO MMIC employing InGaP/GaAs HBT processes. The VCO has the following two features: a novel circuit comprising negative resistors arranged in parallel that achieves a steep phase slope, and a tuning circuit with two resonators that offers a wide tuning range and steep phase slope. Measurement results of the developed VCO show

Kenichiro Choumei; Takayuki Matsuzuka; Satoshi Suzuki; Satoshi Hamano; Kenji Kawakami; Nobuyuki Ogawa; Makio Komaru; Yoshio Matsuda

2005-01-01

150

A highly scalable simulation code for turbulent flows which solves the fully compressible Navier-Stokes equations is presented. The code, which supports one, two and three dimensional domain decompositions is shown to scale well on up to 262,144 cores. Introducing multiple levels of parallelism based on distributed message passing and shared-memory paradigms results in a reduction of up to 33% of

Shriram Jagannathan; Diego A. Donzis

2012-01-01

151

Characterising dark matter searches at colliders and direct detection experiments: vector mediators

NASA Astrophysics Data System (ADS)

We introduce a Minimal Simplified Dark Matter (MSDM) framework to quantitatively characterise dark matter (DM) searches at the LHC. We study two MSDM models where the DM is a Dirac fermion which interacts with a vector and axial-vector mediator. The models are characterised by four parameters: $m_{DM}$, $M_{med}$, $g_{DM}$ and $g_q$, the DM and mediator masses, and the mediator couplings to DM and quarks respectively. The MSDM models accurately capture the full event kinematics, and the dependence on all masses and couplings can be systematically studied. The interpretation of mono-jet searches in this framework can be used to establish an equal-footing comparison with direct detection experiments. For theories with a vector mediator, LHC mono-jet searches possess better sensitivity than direct detection searches for light DM masses (≲5 GeV). For axial-vector mediators, LHC and direct detection searches generally probe orthogonal directions in the parameter space. We explore the projected limits of these searches from the ultimate reach of the LHC and multi-ton xenon direct detection experiments, and find that the complementarity of the searches remains. Finally, we provide a comparison of limits in the MSDM and effective field theory (EFT) frameworks to highlight the deficiencies of the EFT framework, particularly when exploring the complementarity of mono-jet and direct detection searches.

Buchmueller, Oliver; Dolan, Matthew J.; Malik, Sarah A.; McCabe, Christopher

2015-01-01

152

In this article, we explore the interplay between searches for supersymmetric particles and Higgs bosons at hadron colliders (the Tevatron and the LHC) and direct dark matter searches (such as CDMS, ZEPLIN, XENON, EDELWEISS, CRESST, WARP and others). We focus on collider searches for heavy MSSM Higgs bosons ($A$, $H$, $H^{\pm}$) and how the prospects for these searches are impacted by direct dark matter limits and vice versa. We find that the prospects of these two experimental programs are highly interrelated. A positive detection of $A$, $H$ or $H^{\pm}$ at the Tevatron would dramatically enhance the prospects for a near future direct discovery of neutralino dark matter. Similarly, a positive direct detection of neutralino dark matter would enhance the prospects of discovering heavy MSSM Higgs bosons at the Tevatron or the LHC. Combining the information obtained from both types of experimental searches will enable us to learn more about the nature of supersymmetry.

Marcela Carena; Dan Hooper; Alberto Vallinotto

2006-11-06

153

on the Polar satellite provides direct observations of electric field components parallel and perpendicular satellite. This is because uncertainties in the measured parallel field, arising primarily from angular layer or electrostatic shock theories [Block, 1972; Kan, 1975; Swift, 1975; Swift, 1979] predict

California at Berkeley, University of

154

Job Search as Goal-Directed Behavior: Objectives and Methods

ERIC Educational Resources Information Center

This study investigated the relationship between job search objectives (finding a new job/turnover, staying aware of job alternatives, developing a professional network, and obtaining leverage against an employer) and job search methods (looking at job ads, visiting job sites, networking, contacting employment agencies, contacting employers, and…

Van Hoye, Greet; Saks, Alan M.

2008-01-01

155

Obstacles may facilitate and direct DNA search by proteins.

DNA recognition by DNA-binding proteins (DBPs), which is a pivotal event in most gene regulatory processes, is often preceded by an extensive search for the correct site. A facilitated diffusion process in which a DBP combines three-dimensional diffusion in solution with one-dimensional sliding along DNA has been suggested to explain how proteins can locate their target sites on DNA much faster than predicted by three-dimensional diffusion alone. Although experimental and theoretical studies have recently advanced understanding of the biophysical principles underlying the search mechanism, the process under in vivo cellular conditions is poorly understood. In this study, we used various computational approaches to explore how the presence of obstacle proteins on the DNA influences search efficiency. At a low obstacle occupancy (i.e., when few obstacles occupy sites on the DNA), sliding by the searching DBP may be confined, which may impair search efficiency. The obstacles, however, can be bypassed during hopping events, and the number of bypasses is larger for higher obstacle occupancies. Dynamism on the part of the obstacles may even further facilitate search kinetics. Our study shows that the nature and efficiency of the search process may be governed not only by the intrinsic properties of the DBP and the salt concentration of the medium, but also by the in vivo association of DNA with other macromolecular obstacles, their location, and occupancy. PMID:23663847
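The sliding-plus-hopping picture above can be illustrated with a toy 1-D Monte Carlo. This is not the paper's model: the hop is a crude stand-in for a 3-D excursion, and all parameters and names are hypothetical.

```python
import random

def search_time(dna_len, target, obstacles, hop_prob=0.1, seed=1, max_steps=10**6):
    """Toy facilitated-diffusion search on a 1-D lattice with obstacles.

    The searcher slides +/-1 along the DNA; sliding into an obstacle-occupied
    site is blocked. With probability hop_prob it instead hops to a uniformly
    random site, which is the mechanism by which obstacles are bypassed."""
    rng = random.Random(seed)
    pos = 0
    for step in range(1, max_steps + 1):
        if rng.random() < hop_prob:
            pos = rng.randrange(dna_len)      # hop: may land past an obstacle
        else:
            nxt = pos + rng.choice((-1, 1))   # slide: blocked by obstacles
            if 0 <= nxt < dna_len and nxt not in obstacles:
                pos = nxt
        if pos == target:
            return step
    return None

# One obstacle sits between the start (site 0) and the target (site 150):
# pure sliding could never pass it, but hopping carries the searcher through.
steps = search_time(dna_len=200, target=150, obstacles={100})
```

Raising the obstacle occupancy in this sketch confines the sliding segments, mirroring the confinement effect the study reports at low hop rates.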

Marcovitz, Amir; Levy, Yaakov

2013-05-01

156

Obstacles May Facilitate and Direct DNA Search by Proteins

DNA recognition by DNA-binding proteins (DBPs), which is a pivotal event in most gene regulatory processes, is often preceded by an extensive search for the correct site. A facilitated diffusion process in which a DBP combines three-dimensional diffusion in solution with one-dimensional sliding along DNA has been suggested to explain how proteins can locate their target sites on DNA much faster than predicted by three-dimensional diffusion alone. Although experimental and theoretical studies have recently advanced understanding of the biophysical principles underlying the search mechanism, the process under in vivo cellular conditions is poorly understood. In this study, we used various computational approaches to explore how the presence of obstacle proteins on the DNA influences search efficiency. At a low obstacle occupancy (i.e., when few obstacles occupy sites on the DNA), sliding by the searching DBP may be confined, which may impair search efficiency. The obstacles, however, can be bypassed during hopping events, and the number of bypasses is larger for higher obstacle occupancies. Dynamism on the part of the obstacles may even further facilitate search kinetics. Our study shows that the nature and efficiency of the search process may be governed not only by the intrinsic properties of the DBP and the salt concentration of the medium, but also by the in vivo association of DNA with other macromolecular obstacles, their location, and occupancy. PMID:23663847

Marcovitz, Amir; Levy, Yaakov

2013-01-01

157

Pinning down neutralino properties from a possible modulation signal in WIMP direct search

We analyze the properties of the neutralino under the hypothesis that some preliminary experimental results of the DAMA/NaI Collaboration may be indicative of a yearly modulation effect. We examine which supersymmetric configurations would be singled out by the DAMA/NaI data. We also discuss the possibility of investigating these configurations by means of experimental searches for relic neutralinos other than direct searches. We finally discuss the possibility of probing these configurations by accelerator searches.

Bottino, A; Fornengo, N; Scopel, S

1998-01-01

158

Pinning down neutralino properties from a possible modulation signal in WIMP direct search

We analyze the properties of the neutralino under the hypothesis that some preliminary experimental results of the DAMA/NaI Collaboration may be indicative of a yearly modulation effect. We examine which supersymmetric configurations would be singled out by the DAMA/NaI data. We also discuss the possibility of investigating these configurations by means of experimental searches for relic neutralinos other than direct searches. We finally discuss the possibility of probing these configurations by accelerator searches.

A. Bottino; F. Donato; N. Fornengo; S. Scopel

1997-09-09

159

GOAL-DIRECTED ASR IN A MULTIMEDIA INDEXING AND SEARCHING ENVIRONMENT (MUMIS)

GOAL-DIRECTED ASR IN A MULTIMEDIA INDEXING AND SEARCHING ENVIRONMENT (MUMIS) Mirjam Wester, Judith speech recognition (ASR) within the framework of MUMIS (Multimedia Indexing and Searching Environment present in the material. 1. INTRODUCTION This paper reports on the automatic speech recognition research

Edinburgh, University of

160

Exploiting Multi-level Parallelism for Homology Search using General Purpose Processors (Xiandong Meng)

for sequence homology database searches. The results show that the classic Smith Waterman sequence alignment method such as the Needleman-Wunsch [16] and Smith-Waterman algorithms [20], provide optimal solutions level of sensitivity for similarity searching at high speed. OSEARCH and SSEARCH [18] are two Smith

Chaudhary, Vipin

161

APHID: Asynchronous Parallel Game-Tree Search (Mark G. Brockington and Jonathan Schaeffer)

. APHID yields better speedups than synchronous search methods for an Othello and a checkers program-sum games with perfect information, such as chess, Othello1 and checkers, are programmed using the same the search depth and the relative strength of chess, Othello and checkers programs [8]. Thus, programs

Schaeffer, Jonathan

162

Chaining direct memory access data transfer operations for compute nodes in a parallel computer

Methods, systems, and products are disclosed for chaining DMA data transfer operations for compute nodes in a parallel computer that include: receiving, by an origin DMA engine on an origin node in an origin injection FIFO buffer for the origin DMA engine, an RGET data descriptor specifying a DMA transfer operation data descriptor on the origin node and a second RGET data descriptor on the origin node, the second RGET data descriptor specifying a target RGET data descriptor on the target node, the target RGET data descriptor specifying an additional DMA transfer operation data descriptor on the origin node; creating, by the origin DMA engine, an RGET packet in dependence upon the RGET data descriptor, the RGET packet containing the DMA transfer operation data descriptor and the second RGET data descriptor; and transferring, by the origin DMA engine to a target DMA engine on the target node, the RGET packet.
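The chaining idea in the patent abstract is essentially a linked list of descriptors walked by the DMA engine. A toy model (names and layout are illustrative, not the patent's actual data structures):

```python
class Descriptor:
    """Toy model of a chained DMA descriptor: an RGET node points at the next
    descriptor to process, so one injection drives a whole chain of transfers."""
    def __init__(self, kind, payload=None, next_desc=None):
        self.kind = kind          # 'RGET' (chain link) or 'XFER' (data transfer)
        self.payload = payload    # data moved by a transfer descriptor
        self.next_desc = next_desc

def process_chain(head):
    """Walk the chain in order, performing each transfer it points at."""
    transferred, node = [], head
    while node is not None:
        if node.kind == 'XFER':
            transferred.append(node.payload)
        node = node.next_desc
    return transferred

# RGET -> transfer A -> RGET -> transfer B: one injected descriptor
# ultimately triggers both DMA transfer operations.
chain = Descriptor('RGET', next_desc=Descriptor('XFER', 'A',
        Descriptor('RGET', next_desc=Descriptor('XFER', 'B'))))
print(process_chain(chain))  # -> ['A', 'B']
```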

Archer, Charles J. (Rochester, MN); Blocksome, Michael A. (Rochester, MN)

2010-09-28

163

Oscillation modes of direct current microdischarges with parallel-plate geometry

Two different oscillation modes in a microdischarge with parallel-plate geometry have been observed: relaxation oscillations with a frequency range between 1.23 and 2.1 kHz, and free-running oscillations with 7 kHz frequency. The oscillation modes are induced by increasing the power supply voltage or discharge current. For a given power supply voltage, there is a spontaneous transition from one oscillation mode to the other and vice versa. Before the transition from relaxation to free-running oscillations, a spontaneous increase of the oscillation frequency of the relaxation oscillations from 1.3 kHz to 2.1 kHz is measured. Fourier transform spectra of the relaxation oscillations reveal chaotic behavior of the microdischarges. The volt-ampere (V-A) characteristics associated with relaxation oscillations describe a periodic transition between a low-current diffuse discharge and a normal glow. Free-running oscillations, however, appear in the subnormal glow only.

Stefanovic, Ilija; Kuschel, Thomas; Winter, Joerg [Institut fuer Experimentalphysik II, Ruhr-Universitaet Bochum, 44781 Bochum (Germany); Skoro, Nikola; Maric, Dragana; Petrovic, Zoran Lj [Institute of Physics, University of Belgrade, POB 68, 11080 Belgrade (Serbia)

2011-10-15

164

NASA Astrophysics Data System (ADS)

We present two sequential and one parallel global optimization codes, which belong to the stochastic class, and an interface routine that enables the use of the Merlin/MCL environment as a non-interactive local optimizer. This interface proved extremely important, since it provides flexibility, effectiveness and robustness to the local search task that is in turn employed by the global procedures. We demonstrate the use of the parallel code on a molecular conformation problem. Program summary. Title of program: PANMIN Catalogue identifier: ADSU Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADSU Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland Computer for which the program is designed and others on which it has been tested: PANMIN is designed for UNIX machines. The parallel code runs on either shared memory architectures or on a distributed system. The code has been tested on a SUN Microsystems ENTERPRISE 450 with four CPUs, and on a 48-node cluster under Linux, with both the GNU g77 and the Portland group compilers. The parallel implementation is based on MPI and has been tested with LAM MPI and MPICH Installation: University of Ioannina, Greece Programming language used: Fortran-77 Memory required to execute with typical data: Approximately O(n^2) words, where n is the number of variables No. of bits in a word: 64 No. of processors used: 1 or many Has the code been vectorised or parallelized?: Parallelized using MPI No. of bytes in distributed program, including test data, etc.: 147163 No. of lines in distributed program, including the test data, etc.: 14366 Distribution format: gzipped tar file Nature of physical problem: A multitude of problems in science and engineering are often reduced to minimizing a function of many variables. There are instances where a local optimum does not correspond to the desired physical solution and hence the search for a better solution is required.
Local optimization techniques can be trapped in any local minimum. Global Optimization is then the appropriate tool. For example, solving a non-linear system of equations via optimization, one may encounter many local minima that do not correspond to solutions, i.e. they are far from zero Method of solution: PANMIN is a suite of programs for Global Optimization that take advantage of the Merlin/MCL optimization environment [1,2]. We offer implementations of two algorithms that belong to the stochastic class and use local searches either as intermediate steps or as solution refinement Restrictions on the complexity of the problem: The only restriction is set by the available memory of the hardware configuration. The software can handle bound constrained problems. The Merlin Optimization environment must be installed. Availability of an MPI installation is necessary for executing the parallel code Typical running time: Depending on the objective function References: [1] D.G. Papageorgiou, I.N. Demetropoulos, I.E. Lagaris, Merlin-3.0. A multidimensional optimization environment, Comput. Phys. Commun. 109 (1998) 227-249. [2] D.G. Papageorgiou, I.N. Demetropoulos, I.E. Lagaris, The Merlin Control Language for strategic optimization, Comput. Phys. Commun. 109 (1998) 250-275.
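The stochastic-global-plus-local-refinement strategy PANMIN implements can be sketched as a multistart loop. This toy is not PANMIN: the compass-style refinement below stands in for a real Merlin local solver, and the objective and all parameters are illustrative.

```python
import math
import random

def local_refine(f, x, step=0.5, tol=1e-6, max_iter=10000):
    """Crude derivative-free 1-D local search (a stand-in for a Merlin local
    optimizer): poll x +/- step, move on improvement, else halve the step."""
    fx = f(x)
    for _ in range(max_iter):
        if step < tol:
            break
        moved = False
        for trial in (x - step, x + step):
            ft = f(trial)
            if ft < fx:
                x, fx, moved = trial, ft, True
        if not moved:
            step *= 0.5
    return x, fx

def multistart(f, lo, hi, n_starts=200, seed=7):
    """Stochastic global strategy: random starting points, each refined
    locally, keeping the best refined point seen so far."""
    rng = random.Random(seed)
    best_x, best_f = None, float("inf")
    for _ in range(n_starts):
        x, fx = local_refine(f, rng.uniform(lo, hi))
        if fx < best_f:
            best_x, best_f = x, fx
    return best_x, best_f

# Multimodal objective: many local minima, global minimum f(0) = -1.
f = lambda x: math.sin(3.0 * x) ** 2 - math.exp(-x * x)
x_best, f_best = multistart(f, -5.0, 5.0)
```

This illustrates the abstract's point: a purely local search started in the wrong basin gets trapped, while the stochastic global wrapper escapes by sampling many basins.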

Theos, F. V.; Lagaris, I. E.; Papageorgiou, D. G.

2004-05-01

165

NASA Astrophysics Data System (ADS)

This anomaly detection approach seeks the directions that maximize the projection index, so as to gain information about the anomaly structure. Using a genetic algorithm in this approach can find accurate optimal projection directions, but it is a computation-intensive task. So, a parallel algorithm for a distributed memory system was presented. The projection directions were searched efficiently by a parallel genetic algorithm model, and the projection directions' precision was guaranteed by using a strengthened termination criterion. Then, the detected anomaly components were removed by projecting the data onto the subspace orthogonal to the previous projection directions, and the remaining anomalies were searched for in the residual space. The final task of projection and object segmentation was also completed in parallel. Using OMIS hyperspectral data to test the parallel algorithm's performance on an eight-node cluster, the processing time was reduced from 15 minutes to 2.8 minutes. The results show the validity and the comparatively good parallel efficiency of the approach.
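The two key steps above (maximize a projection index over directions, then deflate by projecting onto the orthogonal subspace) can be sketched in 2-D. This is illustrative only: an exhaustive angular scan replaces the genetic algorithm, and variance stands in for whatever projection index the paper uses.

```python
import math

def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def best_direction(points, n_angles=360):
    """Scan unit directions for the one maximizing the projection index
    (here: variance of the projected data)."""
    best_d, best_idx = None, -1.0
    for k in range(n_angles):
        a = math.pi * k / n_angles
        d = (math.cos(a), math.sin(a))
        idx = variance([p[0] * d[0] + p[1] * d[1] for p in points])
        if idx > best_idx:
            best_d, best_idx = d, idx
    return best_d

def deflate(points, d):
    """Remove the found component: project each point onto the subspace
    orthogonal to d, so later searches work in the residual space."""
    out = []
    for p in points:
        s = p[0] * d[0] + p[1] * d[1]
        out.append((p[0] - s * d[0], p[1] - s * d[1]))
    return out

pts = [(x, 0.1 * x) for x in range(-5, 6)]   # spread almost entirely along one direction
d = best_direction(pts)
residual = deflate(pts, d)
```

After deflation the residual data carries essentially no variance along the found direction, which is exactly why the next search iteration looks elsewhere.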

Wu, Ziyan; Sun, Junhua; Liu, Qianzhe; Zhang, Guangjun

2008-10-01

166

Fast String Search on Multicore Processors: Mapping fundamental algorithms onto parallel hardware

String searching is one of these basic algorithms. It has a host of applications, including search engines, network intrusion detection, virus scanners, spam filters, and DNA analysis, among others. The Cell processor, with its multiple cores, promises to speed up string searching considerably. In this article, we show how we mapped string searching efficiently onto the Cell. We present two implementations:
• The fast implementation supports a small dictionary size (approximately 100 patterns) and provides a throughput of 40 Gbps, which is 100 times faster than reference implementations on x86 architectures.
• The heavy-duty implementation is slower (3.3-4.3 Gbps), but supports dictionaries with tens of thousands of strings.
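The problem both implementations solve is dictionary matching: report every occurrence of every pattern in a text. A naive reference sketch (the Cell versions use optimized automata; this only pins down the expected output):

```python
def multi_search(text, patterns):
    """Naive multi-pattern scan: at each text offset, test every dictionary
    pattern and report (offset, pattern) for each match."""
    hits = []
    for i in range(len(text)):
        for p in patterns:
            if text.startswith(p, i):
                hits.append((i, p))
    return hits

print(multi_search("abcabd", ["ab", "abd"]))  # -> [(0, 'ab'), (3, 'ab'), (3, 'abd')]
```

Note the overlapping matches at offset 3: a production matcher (e.g. an Aho-Corasick automaton) produces the same hit set in a single pass over the text.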

Scarpazza, Daniele P.; Villa, Oreste; Petrini, Fabrizio

2008-04-01

167

Design and analysis of a nondeterministic parallel breadth-first search algorithm

I have developed a multithreaded implementation of breadth-first search (BFS) of a sparse graph using the Cilk++ extensions to C++. My PBFS program on a single processor runs as quickly as a standard C++ breadth-first ...
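Parallel BFS implementations, including Cilk-style ones, are typically organized level-synchronously: the current frontier is expanded in parallel to produce the next one. The following is a sequential Python sketch of that formulation (an assumed structure for illustration; the thesis's Cilk++ code is not reproduced).

```python
def bfs_levels(adj, source):
    """Level-synchronous BFS: returns {vertex: distance} from `source`.

    Each expansion of `frontier` is the unit that a parallel runtime
    would split across workers; this sketch processes it sequentially.
    A real parallel version must claim `v not in dist` atomically.
    """
    dist = {source: 0}
    frontier = [source]
    level = 0
    while frontier:
        level += 1
        next_frontier = []
        for u in frontier:                 # parallelizable loop over frontier
            for v in adj.get(u, ()):       # edge relaxations are independent
                if v not in dist:
                    dist[v] = level
                    next_frontier.append(v)
        frontier = next_frontier
    return dist

# Small diamond graph: 0 -> 1,2 -> 3
g = {0: [1, 2], 1: [3], 2: [3]}
# bfs_levels(g, 0) == {0: 0, 1: 1, 2: 1, 3: 2}
```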

Schardl, Tao Benjamin

2010-01-01

168

A study of search directions in primal-dual interior-point methods for semidefinite programming

We discuss several different search directions which can be used in primal-dual interior-point methods for semidefinite programming problems and investigate their theoretical properties, including scale invariance, primal-dual symmetry, and whether they always generate well-defined directions. Among the directions satisfying all but at most two of these desirable properties are the Alizadeh-Haeberly-Overton, Helmberg-Rendl-Vanderbei-Wolkowicz/Kojima-Shindoh-Hara/Monteiro, Nesterov-Todd, Gu, and Toh directions, as well as
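The directions surveyed all arise from symmetrized Newton steps on the perturbed optimality conditions of the primal-dual SDP pair. The following summary is standard semidefinite-programming background (the Monteiro-Zhang symmetrization family), not text from the paper itself:

```latex
% Central-path conditions for the pair
%   min <C, X>  s.t.  A(X) = b, X >= 0   and   max b^T y  s.t.  A*(y) + Z = C, Z >= 0:
\mathcal{A}(X) = b, \qquad \mathcal{A}^{*}(y) + Z = C, \qquad
X Z = \mu I, \qquad X \succeq 0, \; Z \succeq 0.
% The product XZ is not symmetric in the matrix variables, so a Newton step
% requires symmetrizing the last equation, e.g. via
H_{P}(M) = \tfrac{1}{2}\bigl(P M P^{-1} + (P M P^{-1})^{\mathsf{T}}\bigr),
\qquad H_{P}(XZ) = \mu I,
% where different choices of the scaling matrix P yield the AHO, HKM and NT
% directions discussed in the survey.
```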

Todd, M. J.

1999-01-01

169

ERIC Educational Resources Information Center

Grounded on object filtering, automatic indexing, and co-occurrence analysis, an experiment was performed using a parallel supercomputer to analyze over 400,000 abstracts in an INSPEC computer engineering collection. A user evaluation revealed that system-generated thesauri were better than the human-generated INSPEC subject thesaurus in concept…

Chen, Hsinchun; Martinez, Joanne; Kirchhoff, Amy; Ng, Tobun D.; Schatz, Bruce R.

1998-01-01

170

Some Comments on Possible Preferred Directions for the SETI Search

The search for extraterrestrial intelligence by looking for signals from advanced technological civilizations has been ongoing for some decades. We suggest that it could possibly be made more efficient by focusing on stars from which the solar system can be observed via mini-eclipsings of the Sun by transiting planets.

Nussinov, Shmuel

2009-01-01

171

Searching mixed DNA profiles directly against profile databases.

DNA databases have revolutionised forensic science. They are a powerful investigative tool as they have the potential to identify persons of interest in criminal investigations. Routinely, a DNA profile generated from a crime sample could only be searched for in a database of individuals if the stain was from a single contributor (single source) or if a contributor could unambiguously be determined from a mixed DNA profile. This meant that a significant number of samples were unsuitable for database searching. The advent of continuous methods for the interpretation of DNA profiles offers an advanced way to draw inferential power from the considerable investment made in DNA databases. Using these methods, each profile on the database may be considered a possible contributor to a mixture and a likelihood ratio (LR) can be formed. Those profiles which produce a sufficiently large LR can serve as an investigative lead. In this paper empirical studies are described to determine what constitutes a large LR. We investigate the effect on a database search of complex mixed DNA profiles with contributors in equal proportions with dropout as a consideration, and also the effect of an incorrect assignment of the number of contributors to a profile. In addition, we give, as a demonstration of the method, the results using two crime samples that were previously unsuitable for database comparison. We show that effective management of the selection of samples for searching and the interpretation of the output can be highly informative. PMID:24528588
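The screening procedure described, forming an LR for every database profile and keeping those above a threshold, can be illustrated with a deliberately toy likelihood model. The allele frequencies, genotypes, and `toy_lr` function below are hypothetical and stand in for the continuous interpretation models the paper actually uses.

```python
def screen_database(profiles, lr_fn, threshold):
    """Return (name, LR), best first, for every profile whose likelihood
    ratio against the crime-scene mixture meets `threshold`."""
    leads = [(name, lr_fn(geno)) for name, geno in profiles.items()]
    return sorted([t for t in leads if t[1] >= threshold], key=lambda t: -t[1])

# Toy single-locus model (hypothetical numbers, for illustration only):
# alleles observed in the mixture and their population frequencies.
mixture = {"A", "B", "C"}
freq = {"A": 0.1, "B": 0.2, "C": 0.3, "D": 0.4}

def toy_lr(genotype):
    # H1: profile is a contributor; H2: an unrelated random person.
    # If both alleles appear in the mixture, LR = 1 / P(random genotype);
    # otherwise the profile is excluded (LR = 0). Grossly simplified:
    # real models weigh peak heights, dropout, mixture proportions, etc.
    a, b = genotype
    if a in mixture and b in mixture:
        p = freq[a] * freq[b] * (2 if a != b else 1)
        return 1.0 / p
    return 0.0

db = {"s1": ("A", "B"), "s2": ("A", "D"), "s3": ("C", "C")}
leads = screen_database(db, toy_lr, threshold=10.0)
# "s1" (LR 25.0) and "s3" (LR ~11.1) survive the screen; "s2" is excluded.
```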

Bright, Jo-Anne; Taylor, Duncan; Curran, James; Buckleton, John

2014-03-01

172

NASA Technical Reports Server (NTRS)

Rectenna conversion efficiencies (RF to dc) approximating 85 percent were demonstrated on a small scale, clearly indicating the feasibility and potential of efficient conversion of microwave power to dc. Overall cost estimates of the solar power satellite indicate that the baseline rectenna subsystem will be between 25 and 40 percent of the system cost. The directional receiving elements and element extensions were studied, along with power-combining evaluation and evaluation extensions.

Gutmann, R. J.; Borrego, J. M.

1978-01-01

173

Some New Search Directions for Primal-Dual Interior Point Methods in Semidefinite Programming

Search directions for primal-dual path-following methods for semidefinite programming (SDP) are proposed. These directions have the properties that (1) under certain nondegeneracy and strict complementarity assumptions, the Jacobian matrix of the associated symmetrized Newton equation has bounded condition number along the central path in the limit as the barrier parameter tends to zero; (2) the Schur complement matrix of the

Kim-Chuan Toh

2000-01-01

174

Earthquake Location, Direct, Global-Search Methods

Article outline: Glossary; Definition of the Subject; Introduction; The Earthquake Location Problem. The location may represent the spatial or temporal average of some characteristic of an earthquake, such as surface shaking intensity or moment.

175

Astrophysical motivation for directed searches for a stochastic gravitational wave background

NASA Astrophysics Data System (ADS)

The nearby Universe is expected to create an anisotropic stochastic gravitational-wave background (SGWB). Different algorithms have been developed and implemented to search for isotropic and anisotropic SGWBs. The aim of this paper is to quantify the advantage of an optimal anisotropic search, specifically comparing a point source with an isotropic background. Clusters of galaxies appear as point sources to a network of ground-based laser-interferometric detectors. The optimal search strategy for these sources is a "directed radiometer search." We show that the flux of SGWBs created by the millisecond pulsars in the Virgo cluster produces a significantly stronger signal than the nearly isotropic background of unresolved sources of the same kind. We compute their strain power spectra for different cosmologies and the distribution of populations over redshifts. We conclude that a localized source, like the Virgo cluster, can be resolved from the isotropic background with very high significance using the directed-search algorithm. For backgrounds dominated by nearby sources, up to a redshift of about 3, we show that the directed search for a localized source can have a signal-to-noise ratio that is greater than that for the all-sky integrated isotropic search.

Mazumder, Nairwita; Mitra, Sanjit; Dhurandhar, Sanjeev

2014-04-01

176

Astrophysical motivation for directed searches for a stochastic gravitational wave background

The nearby universe is expected to create an anisotropic stochastic gravitational wave background (SGWB). Different algorithms have been developed and implemented to search for isotropic and anisotropic SGWBs. The aim of this paper is to quantify the advantage of an optimal anisotropic search, specifically comparing a point source with an isotropic background. Clusters of galaxies appear as point sources to a network of ground-based laser interferometric detectors. The optimal search strategy for these sources is a "directed radiometer search". We show that the flux of SGWB created by the millisecond pulsars in the Virgo cluster produces a significantly stronger signal than the nearly isotropic background of unresolved sources of the same kind. We compute their strain power spectra for different cosmologies and the distribution of populations over redshifts. We conclude that a localised source, like the Virgo cluster, can be resolved from the isotropic background with very high significance using the directed search algorithm. For backgrounds dominated by nearby sources, up to a redshift of about 3, we show that the directed search for a localised source can have a signal-to-noise ratio greater than that for the all-sky integrated isotropic search.

Nairwita Mazumder; Sanjit Mitra; Sanjeev Dhurandhar

2014-04-30

177

Computational search for direct band gap silicon crystals

NASA Astrophysics Data System (ADS)

Due to its abundance, silicon is the preferred solar-cell material despite the fact that current silicon materials have indirect band gaps. Although the band gap properties of silicon have been studied intensively, until now, no direct band gap silicon-based material has been found or suggested. We report here the discovery of direct band gap silicon crystals. By using conformational space annealing, we optimize various crystal structures containing multiple (10 to 20) silicon atoms per unit cell so that their electronic structures become direct band gap. Through first-principles calculations, we identify many direct and quasidirect band gap crystal structures, which exhibit excellent photovoltaic efficiency.

Lee, In-Ho; Lee, Jooyoung; Oh, Young Jun; Kim, Sunghyun; Chang, K. J.

2014-09-01

178

Searches for direct stop production within the ATLAS experiment

NASA Astrophysics Data System (ADS)

The ATLAS experiment at the LHC, following the discovery of the Higgs boson, is looking for signs of physics beyond the Standard Model of electroweak interactions. Among possible theories for physics beyond the Standard Model, Supersymmetry seems to be the most promising: it addresses the Standard Model naturalness problem and offers a perfect candidate for dark matter. Within this scenario the search for a supersymmetric partner of the top quark, called the stop, plays a key role. The ATLAS experiment has developed a dedicated strategy for the discovery of this particle, aiming at complete coverage of the available parameter space through a combined search for all of its possible decay modes. The results obtained using the complete ATLAS 2012 statistics will be presented, targeting different decay modes and explaining the procedure used to obtain exclusion limits on the existence of a supersymmetric partner of the top quark at the electroweak scale.

Dondero, Paolo; Atlas Collaboration

2014-12-01

179

Flexible job-shop scheduling with parallel variable neighborhood search algorithm

Flexible job-shop scheduling problem (FJSP) is an extension of the classical job-shop scheduling problem. FJSP is NP-hard and mainly presents two difficulties. The first one is to assign each operation to a machine out of a set of capable machines, and the second one deals with sequencing the assigned operations on the machines. This paper proposes a parallel variable neighborhood
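The generic variable neighborhood search loop, shake in the k-th neighborhood, run a local search, then move or enlarge k, can be sketched as follows. This is the textbook VNS skeleton on a toy integer objective, not the paper's FJSP encoding or its parallelization; all names here are mine.

```python
import random

def vns(f, x0, neighborhoods, iters=200, seed=0):
    """Basic variable neighborhood search skeleton.

    `neighborhoods[k](x, rng)` returns a random point in the k-th
    (increasingly distant) neighborhood of x. Shake in neighborhood k,
    do a simple local search, move on improvement, otherwise grow k.
    """
    rng = random.Random(seed)
    best = x0
    for _ in range(iters):
        k = 0
        while k < len(neighborhoods):
            x = neighborhoods[k](best, rng)           # shaking
            improved = True                           # crude first-improvement
            while improved:                           # local search using the
                improved = False                      # smallest neighborhood
                cand = neighborhoods[0](x, rng)
                if f(cand) < f(x):
                    x, improved = cand, True
            if f(x) < f(best):
                best, k = x, 0                        # move and restart at k=0
            else:
                k += 1                                # try a larger neighborhood
    return best

# Toy objective: minimize (x - 17)^2 over the integers, starting far away.
f = lambda x: (x - 17) ** 2
hoods = [lambda x, r, s=s: x + r.randint(-s, s) for s in (1, 5, 25)]
best = vns(f, x0=100, neighborhoods=hoods)
```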

M. Yazdani; M. Amiri; M. Zandieh

2010-01-01

180

We describe a deterministic parallel algorithm for linear programming in fixed dimension d that takes poly(log log n) time in the common concurrent-read concurrent-write (CRCW) PRAM model and does optimal O(n) work. In the exclusive-read exclusive-write (EREW) model, the algorithm runs in O(log n · log log^(d-1) n) time. Our algorithm is based on multidimensional

Martin E. Dyer; Sandeep Sen

2000-01-01

181

Dynamic Data Structures for a Direct Search Algorithm

The DIRECT (DIviding RECTangles) algorithm of Jones, Perttunen, and Stuckman (Journal of Optimization Theory and Applications, vol. 79, no. 1, pp. 157-181, 1993), a variant of Lipschitzian methods for bound constrained global optimization, has proved effective even in higher dimensions. However, the performance of a DIRECT implementation in real applications depends on the characteristics of the objective function, the
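For intuition about the data a DIRECT implementation maintains, here is a heavily simplified one-dimensional sketch: boxes are grouped by width, and, in place of DIRECT's convex-hull test over (width, best-f) pairs, the best box of each width is trisected. The names and simplifications are mine; this is an illustration of the bookkeeping, not the published algorithm.

```python
def direct_1d(f, lo, hi, iterations=12):
    """Toy 1-D DIRECT-style global search on [lo, hi].

    Boxes are (center, width, f(center)). Each iteration selects the
    best box of every distinct width and splits it into thirds,
    balancing global exploration (wide boxes) with local refinement.
    """
    mid = (lo + hi) / 2
    boxes = [(mid, hi - lo, f(mid))]
    for _ in range(iterations):
        best_by_width = {}                         # width -> best box
        for b in boxes:
            w = b[1]
            if w not in best_by_width or b[2] < best_by_width[w][2]:
                best_by_width[w] = b
        selected = {id(b) for b in best_by_width.values()}
        new_boxes = []
        for b in boxes:
            if id(b) in selected:                  # trisect the selected box
                c, w, fc = b
                third = w / 3
                new_boxes.append((c - third, third, f(c - third)))
                new_boxes.append((c, third, fc))   # center child reuses f(c)
                new_boxes.append((c + third, third, f(c + third)))
            else:
                new_boxes.append(b)
        boxes = new_boxes
    return min(boxes, key=lambda b: b[2])[0]       # center of the best box

best_x = direct_1d(lambda t: (t - 0.7) ** 2, 0.0, 1.0)
# best_x converges toward the true minimizer 0.7.
```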

Jian He; Layne T. Watson; Naren Ramakrishnan; Clifford A. Shaffer; Alex Verstak; Jing Jiang; Kyung Bae; William H. Tranter

2002-01-01

182

Dynamic Data Structures for a Direct Search Algorithm

The DIRECT (DIviding RECTangles) algorithm of Jones, Perttunen, and Stuckman (Journal of Optimization Theory and Applications, vol. 79, no. 1, pp. 157–181, 1993), a variant of Lipschitzian methods for bound constrained global optimization, has proved effective even in higher dimensions. However, the performance of a DIRECT implementation in real applications depends on the characteristics of the objective function, the problem

Jian He; Layne T. Watson; Naren Ramakrishnan; Clifford A. Shaffer; Alex Verstak; Jing Jiang; Kyung Bae; William H. Tranter

2002-01-01

183


Torczon, Virginia

184

Parallel effects of memory set activation and search on timing and working memory capacity

Accurately estimating a time interval is required in everyday activities such as driving or cooking. Estimating time is relatively easy, provided a person attends to it. But a brief shift of attention to another task usually interferes with timing. Most processes carried out concurrently with timing interfere with it. Curiously, some do not. Literature on a few processes suggests a general proposition, the Timing and Complex-Span Hypothesis: A process interferes with concurrent timing if and only if process performance is related to complex span. Complex-span is the number of items correctly recalled in order, when each item presented for study is followed by a brief activity. Literature on task switching, visual search, memory search, word generation and mental time travel supports the hypothesis. Previous work found that another process, activation of a memory set in long term memory, is not related to complex-span. If the Timing and Complex-Span Hypothesis is true, activation should not interfere with concurrent timing in dual-task conditions. We tested such activation in single-task memory search task conditions and in dual-task conditions where memory search was executed with concurrent timing. In Experiment 1, activating a memory set increased reaction time, with no significant effect on time production. In Experiment 2, set size and memory set activation were manipulated. Activation and set size had a puzzling interaction for time productions, perhaps due to difficult conditions, leading us to use a related but easier task in Experiment 3. In Experiment 3 increasing set size lengthened time production, but memory activation had no significant effect. Results here and in previous literature on the whole support the Timing and Complex-Span Hypotheses. Results also support a sequential organization of activation and search of memory. This organization predicts activation and set size have additive effects on reaction time and multiplicative effects on percent correct, which was found. PMID:25120502

Schweickert, Richard; Fortin, Claudette; Xi, Zhuangzhuang; Viau-Quesnel, Charles

2014-01-01

185

Parallel effects of memory set activation and search on timing and working memory capacity.

Accurately estimating a time interval is required in everyday activities such as driving or cooking. Estimating time is relatively easy, provided a person attends to it. But a brief shift of attention to another task usually interferes with timing. Most processes carried out concurrently with timing interfere with it. Curiously, some do not. Literature on a few processes suggests a general proposition, the Timing and Complex-Span Hypothesis: A process interferes with concurrent timing if and only if process performance is related to complex span. Complex-span is the number of items correctly recalled in order, when each item presented for study is followed by a brief activity. Literature on task switching, visual search, memory search, word generation and mental time travel supports the hypothesis. Previous work found that another process, activation of a memory set in long term memory, is not related to complex-span. If the Timing and Complex-Span Hypothesis is true, activation should not interfere with concurrent timing in dual-task conditions. We tested such activation in single-task memory search task conditions and in dual-task conditions where memory search was executed with concurrent timing. In Experiment 1, activating a memory set increased reaction time, with no significant effect on time production. In Experiment 2, set size and memory set activation were manipulated. Activation and set size had a puzzling interaction for time productions, perhaps due to difficult conditions, leading us to use a related but easier task in Experiment 3. In Experiment 3 increasing set size lengthened time production, but memory activation had no significant effect. Results here and in previous literature on the whole support the Timing and Complex-Span Hypotheses. Results also support a sequential organization of activation and search of memory. This organization predicts activation and set size have additive effects on reaction time and multiplicative effects on percent correct, which was found. PMID:25120502

Schweickert, Richard; Fortin, Claudette; Xi, Zhuangzhuang; Viau-Quesnel, Charles

2014-01-01

186

A study of search directions in primal-dual interior-point methods for semidefinite programming

We discuss several different search directions which can be used in primal-dual interior-point methods for semidefinite programming problems and investigate their theoretical properties, including scale invariance, primal-dual symmetry, and whether they always generate well-defined directions. Among the directions satisfying all but at most two of these desirable properties are the Alizadeh-Haeberly-Overton, Helmberg-Rendl-Vanderbei-Wolkowicz/Kojima-Shindoh-Hara/Monteiro, Nesterov-Todd, Gu, and...

M. J. Todd

1998-01-01

187

Combustion of fossil fuels is likely to continue for the near future due to the growing trends in energy consumption worldwide. The increase in efficiency and the reduction of pollutant emissions from combustion devices are pivotal to achieving meaningful levels of carbon abatement as part of the ongoing climate change efforts. Computational fluid dynamics featuring adequate combustion models will play an increasingly important role in the design of more efficient and cleaner industrial burners, internal combustion engines, and combustors for stationary power generation and aircraft propulsion. Today, turbulent combustion modelling is hindered severely by the lack of data that are accurate and sufficiently complete to assess and remedy model deficiencies effectively. In particular, the formation of pollutants is a complex, nonlinear and multi-scale process characterized by the interaction of molecular and turbulent mixing with a multitude of chemical reactions with disparate time scales. The use of direct numerical simulation (DNS) featuring a state of the art description of the underlying chemistry and physical processes has contributed greatly to combustion model development in recent years. In this paper, the analysis of the intricate evolution of soot formation in turbulent flames demonstrates how DNS databases are used to illuminate relevant physico-chemical mechanisms and to identify modelling needs. PMID:25024412

Bisetti, Fabrizio; Attili, Antonio; Pitsch, Heinz

2014-08-13

188

Human serum albumin (HSA) or anti-human serum albumin (anti-HSA) yields a catalytic hydrogen wave at about -1.85 V (vs Ag/AgCl) in 0.25 M NH(3).H(2)O-NH(4)Cl (pH 8.58) buffer. When 1.0 x 10(-2) M K(2)S(2)O(8) is present, the catalytic hydrogen wave is further catalyzed, producing what is termed the parallel catalytic hydrogen wave. The sensitivity of the parallel catalytic hydrogen wave is higher by two orders of magnitude than that of the catalytic hydrogen wave. Using the parallel catalytic hydrogen wave of anti-HSA or HSA in the presence of K(2)S(2)O(8), two sensitive methods for the determination of anti-HSA were developed. One is a direct determination based on the parallel catalytic hydrogen wave of anti-HSA itself, and the other is a homogeneous immunoassay based on measuring the decrease of the peak current of the parallel catalytic hydrogen wave of HSA after homogeneous immunoreaction of HSA with anti-HSA. In the direct determination, the second-order derivative peak current of the parallel catalytic hydrogen wave of anti-HSA itself is rectilinear with its titer in the range from 1:1.0 x 10(7) to 1:8.4 x 10(6). In the homogeneous immunoassay, the decrease in the second-order derivative peak current of the parallel catalytic hydrogen wave of HSA is linearly related to the added anti-HSA in the titer range from 1:3.0 x 10(7) to 1:6.0 x 10(6). These assays are highly sensitive and rapid in operation and can be used to evaluate such antigens and their antibodies as those that yield the parallel catalytic hydrogen wave. PMID:12654307

Song, Jun-Feng; Liu, Yang-Qin; Guo, Wei

2003-03-15

189

Dynamic scaling in the Mesh Adaptive Direct Search algorithm for ...

Mar 31, 2014 … discusses future research directions. … Priority in future research will be given to adapting the Progressive Barrier [7] for … Journal of Optimization Theory and Applications, 107(2):261-274, 2000. … Classics in Applied Mathematics.

Audet; Le Digabel; Tribes

2014-03-31

190

NASA Astrophysics Data System (ADS)

We present the results of a highly parallel Kepler equation solver using the Graphics Processing Unit (GPU) on a commercial nVidia GeForce GTX 280 and the "Compute Unified Device Architecture" (CUDA) programming environment. We apply this to evaluate a goodness-of-fit statistic (e.g., χ²) for Doppler observations of stars potentially harboring multiple planetary companions (assuming negligible planet-planet interactions). Given the high dimensionality of the model parameter space (at least five dimensions per planet), a global search is extremely computationally demanding. We expect that the underlying Kepler solver and model evaluator will be combined with a wide variety of more sophisticated algorithms to provide efficient global search, parameter estimation, model comparison, and adaptive experimental design for radial velocity and/or astrometric planet searches. We tested multiple implementations using single precision, double precision, pairs of single precision, and mixed precision arithmetic. We find that the vast majority of computations can be performed using single precision arithmetic, with selective use of compensated summation for increased precision. However, standard single precision is not adequate for calculating the mean anomaly from the time of observation and orbital period when evaluating the goodness-of-fit for real planetary systems and observational data sets. Using all double precision, our GPU code outperforms a similar code using a modern CPU by a factor of over 60. Using mixed precision, our GPU code provides a speed-up factor of over 600 when evaluating over 1024 model planetary systems, each containing 4 planets, with 256 observations per system. We conclude that modern GPUs offer a powerful tool for repeatedly evaluating Kepler's equation and a goodness-of-fit statistic for orbital models when presented with a large parameter space.
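The kernel being accelerated here is the Newton iteration for Kepler's equation, M = E - e sin E. Below is a scalar CPU sketch of that iteration; the starting guess and tolerance are conventional textbook choices, not taken from the paper, which evaluates this kernel for huge batches of (M, e) pairs on the GPU.

```python
import math

def solve_kepler(M, e, tol=1e-12, max_iter=50):
    """Solve Kepler's equation M = E - e*sin(E) for the eccentric
    anomaly E via Newton's method (0 <= e < 1)."""
    M = math.fmod(M, 2 * math.pi)
    E = M if e < 0.8 else math.pi          # common starting guess
    for _ in range(max_iter):
        # Newton step on g(E) = E - e*sin(E) - M, with g'(E) = 1 - e*cos(E)
        dE = (E - e * math.sin(E) - M) / (1.0 - e * math.cos(E))
        E -= dE
        if abs(dE) < tol:
            break
    return E

E = solve_kepler(M=1.0, e=0.3)
# Substituting back, E - e*sin(E) recovers M to machine precision.
```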

Ford, Eric B.

2009-05-01

191

Experiments on `quantum' search and directed transport in microwave artificial graphene

A series of quantum search algorithms has been proposed recently, providing an algebraic speed-up over classical search algorithms from N to √N, where N is the number of items in the search space. In particular, devising searches on regular lattices has become popular, extending Grover's original algorithm to spatial searching. Working in a tight-binding setup, it could be demonstrated theoretically that a search is possible in the physically relevant dimensions 2 and 3 if the lattice spectrum possesses Dirac points. We present here a proof-of-principle experiment implementing wave search algorithms and directed wave transport in a graphene lattice arrangement. The idea is based on bringing localized search states into resonance with an extended lattice state in an energy region of low spectral density, namely at or near the Dirac point. The experiment is implemented using classical waves in a microwave setup containing weakly coupled dielectric resonators placed in a honeycomb arrangement, i.e. artificial graphene. We furthermore investigate the scaling behavior experimentally using linear chains.

Julian Boehm; Matthieu Bellec; Fabrice Mortessagne; Ulrich Kuhl; Sonja Barkhofen; Stefan Gehler; Hans-Juergen Stoeckmann; Iain Foulger; Sven Gnutzman; Gregor Tanner

2014-09-08

192

Microwave Experiments Simulating Quantum Search and Directed Transport in Artificial Graphene

NASA Astrophysics Data System (ADS)

A series of quantum search algorithms has been proposed recently, providing an algebraic speedup compared to classical search algorithms from N to √N, where N is the number of items in the search space. In particular, devising searches on regular lattices has become popular in extending Grover's original algorithm to spatial searching. Working in a tight-binding setup, it could be demonstrated, theoretically, that a search is possible in the physically relevant dimensions 2 and 3 if the lattice spectrum possesses Dirac points. We present here a proof of principle experiment implementing wave search algorithms and directed wave transport in a graphene lattice arrangement. The idea is based on bringing localized search states into resonance with an extended lattice state in an energy region of low spectral density, namely at or near the Dirac point. The experiment is implemented using classical waves in a microwave setup containing weakly coupled dielectric resonators placed in a honeycomb arrangement, i.e., artificial graphene. Furthermore, we investigate the scaling behavior experimentally using linear chains.

Böhm, Julian; Bellec, Matthieu; Mortessagne, Fabrice; Kuhl, Ulrich; Barkhofen, Sonja; Gehler, Stefan; Stöckmann, Hans-Jürgen; Foulger, Iain; Gnutzmann, Sven; Tanner, Gregor

2015-03-01

193

Higgs Coupling Measurements and Direct Searches as Complementary Probes of the pMSSM

The parameter space of the MSSM can be probed via many avenues, such as precision measurements of the couplings of the ~126 GeV Higgs boson, as well as direct searches for SUSY partners. We examine the connection between these two collider observables at the LHC and ILC in the 19/20-parameter p(henomenological)MSSM. Within this scenario, we address two questions: (i) How will potentially null direct searches for SUSY at the LHC influence the predicted properties of the lightest SUSY Higgs boson? (ii) What can be learned about the properties of the superpartners from precision measurements of the Higgs boson couplings? In this paper, we examine these questions by employing three different large sets of pMSSM models with either the neutralino or gravitino being the LSP. We make use of the ATLAS direct SUSY searches at the 7/8 TeV LHC as well as expected results from 14 TeV operations, and the anticipated precision measurements of the Higgs boson couplings at the 14 TeV LHC and at the ILC. We demonstrate that future Higgs coupling determinations can deeply probe the pMSSM parameter space and, in particular, can observe the effects of models that are projected to evade the direct searches at the 14 TeV LHC with 3 ab^-1 of integrated luminosity. In addition, we compare the reach of the Higgs coupling determinations to the direct heavy Higgs searches in the MA-tan beta plane and show that they cover orthogonal regions. This analysis demonstrates the complementarity of the direct and indirect approaches in searching for Supersymmetry, and the importance of precision studies of the properties of the Higgs boson.

M. Cahill-Rowley; J. Hewett; A. Ismail; T. Rizzo

2014-07-25

194

Lick Observatory Optical SETI: targeted search and new directions.

Lick Observatory's Optical SETI (search for extraterrestrial intelligence) program has been in regular operation for 4.5 years. We have observed 4,605 stars of spectral types F-M within 200 light-years of Earth. Occasionally, we have appended objects of special interest, such as stars with known planetary systems. We have observed 14 candidate signals ("triple coincidences"), all but one of which are explained by transient local difficulties. Additional observations of the remaining candidate have failed to confirm arriving pulse events. We now plan to proceed in a more economical manner by operating in an unattended drift scan mode. Between operational and equipment modifications, efficiency will more than double. PMID:16225433

Stone, R P S; Wright, S A; Drake, F; Muñoz, M; Treffers, R; Werthimer, D

2005-10-01

195

Possible constraints on SUSY-model parameters from direct dark matter search

NASA Astrophysics Data System (ADS)

We consider the SUSY-model neutralino as a dominant Dark Matter (DM) particle in the galactic halo and investigate some general issues of direct DM searches via elastic neutralino-nucleus scattering. On the basis of conventional assumptions about the nuclear and nucleon structure, without referring to a specific SUSY model, we prove that it is impossible in principle to extract more than three constraints on fundamental supersymmetry (SUSY) model parameters from direct Dark Matter searches. Three types of Dark Matter detectors, probing different groups of parameters, are recognized.

Bednyakov, V. A.; Kovalenko, S. G.; Klapdor-Kleingrothaus, H. V.

196

Possible constraints on SUSY-model parameters from direct dark matter search

NASA Astrophysics Data System (ADS)

We consider the SUSY-model neutralino as a dominant Dark Matter particle in the galactic halo and investigate some general issues of direct DM searches via elastic neutralino-nucleus scattering. On the basis of conventional assumptions about the nuclear and nucleon structure, without referring to a specific SUSY model, we prove that it is impossible in principle to extract more than three constraints on fundamental SUSY-model parameters from direct Dark Matter searches. Three types of Dark Matter detectors probing different groups of parameters are recognized.

Bednyakov, V. A.; Klapdor-Kleingrothaus, H. V.; Kovalenko, S. G.

1994-06-01

197

Precision measurements, dark matter direct detection and LHC Higgs searches in a constrained NMSSM

We reexamine the constrained version of the Next-to-Minimal Supersymmetric Standard Model with semi-universal parameters at the GUT scale (CNMSSM). We include constraints from collider searches for Higgs and SUSY particles, the upper bound on the relic density of dark matter, measurements of the muon anomalous magnetic moment and of B-physics observables, as well as direct searches for dark matter. We then study the prospects for direct detection of dark matter in large-scale detectors and comment on the prospects for discovery of heavy Higgs states at the LHC.

G. Belanger; C. Hugonie; A. Pukhov

2008-11-28

198

Directed path-width and monotonicity in digraph searching

Directed path-width was defined by Reed, Thomas and Seymour around 1995. The author and P. Hajnal defined a cops-and-robber game on digraphs in 2000. We prove that the two notions are closely related and for any digraph D, the corresponding graph parameters differ by at most one. The result is

János Barát

2006-01-01

199

An efficient search direction for linear programming problems

In this paper, we present an auxiliary algorithm that is effective, in terms of the speed of obtaining the optimal solution, in helping the simplex method commence from a better initial basic feasible solution. The idea of choosing a direction towards an optimal point presented in this paper is new and easily implemented. From our experiments, the algorithm will release

Hsing Luh; Ray Tsaih

2002-01-01

200

Massively parallel computing and the search for jets and black holes at the LHC

NASA Astrophysics Data System (ADS)

Massively parallel computing at the LHC could be the next leap necessary to reach an era of new discoveries at the LHC after the Higgs discovery. Scientific computing is a critical component of the LHC experiment, including operation, trigger, LHC computing GRID, simulation, and analysis. One way to improve the physics reach of the LHC is to take advantage of the flexibility of the trigger system by integrating coprocessors based on Graphics Processing Units (GPUs) or the Many Integrated Core (MIC) architecture into its server farm. This cutting-edge technology provides not only the means to accelerate existing algorithms, but also the opportunity to develop new algorithms that select events in the trigger that previously would have evaded detection. In this paper we describe new algorithms that would allow us to select in the trigger new topological signatures that include non-prompt jet and black hole-like objects in the silicon tracker.

Halyo, V.; LeGresley, P.; Lujan, P.

2014-04-01

201

ERIC Educational Resources Information Center

In this study, the relationship between expressed occupational daydreams and scores on the Self-Directed Search (SDS) was examined. Results were consistent with Holland's theory of careers. Implications for career counselors are discussed. Students were asked to provide specific biographical data (i.e., age, gender, race) and to write down their…

Miller, Mark J.; Springer, Thomas P.; Tobacyk, Jerome; Wells, Don

2004-01-01

202

The Relationships among Constructs in the Career Thoughts Inventory and the Self-Directed Search.

ERIC Educational Resources Information Center

Scores of 81 adults on the Career Thoughts Inventory and the Self-Directed Search (SDS) showed a canonical correlation between the typology structure measured by the SDS and dysfunctional career thoughts. Depending on their dominant SDS score, some people may be more anxious and confused about career decision making than others. (SK)

Wright, Laura K.; Reardon, Robert C.; Peterson, Gary W.; Osborn, Debra S.

2000-01-01

203

Neutralizing Sexist Titles in Holland's Self-Directed Search: What Difference Does It Make?

ERIC Educational Resources Information Center

Sex-role stereotyping in the linguistic structure of Holland's Self-Directed Search (SDS) was examined. A revised SDS was constructed involving the removal of all masculine-toned terminology. The subjects did perceive the two inventories differently, with subjects completing the standard SDS viewing it as slightly less equitable. (Author)

Boyd, Vivian S.

1976-01-01

204

A Factor-Analytic Study of the Construct Validity of Holland's Self-Directed Search Test.

ERIC Educational Resources Information Center

A confirmatory factor analysis provided support for the result that Holland's Self-Directed Search measures six factors: realistic, investigative, artistic, social-enterprising, conventional, and a sixth general interest factor. Generally, the psychological relationship among types confirms the hexagon model proposed by Holland and others.…

Rachman, D.; And Others

1981-01-01

205

Psychometric Properties of the Chinese Self-Directed Search (1994 Edition)

ERIC Educational Resources Information Center

In this study, we (a) examined the measurement equivalence/invariance (ME/I) of the Chinese Self-Directed Search (SDS; 1994 edition) across gender and geographic regions (Mainland China vs. Hong Kong); (b) assessed the construct validity of the Chinese SDS using Widaman's (1985, 1992) MTMM framework; and (c) determined whether vocational interests…

Yang, Weiwei; Lance, Charles E.; Hui, Harry C.

2006-01-01

206

The Influence of Item Response Indecision on the Self-Directed Search

ERIC Educational Resources Information Center

Students (N = 247) responded to Self-Directed Search (SDS) per the standard response format and were also instructed to record a question mark (?) for items about which they were uncertain (item response indecision [IRI]). The initial responses of the 114 participants with a (?) were then reversed and a second SDS summary code was obtained and…

Sampson, James P., Jr.; Shy, Jonathan D.; Hartley, Sarah Lucas; Reardon, Robert C.; Peterson, Gary W.

2009-01-01

207

Combining F-Race and Mesh Adaptive Direct Search for Automatic Algorithm Configuration

Within the framework of automatic algorithm configuration, we adopted F-Race to adaptively allocate the evaluation budgets among a population of candidate configurations. We compare the hybrid of MADS and F-Race (MADS/F-Race) to MADS with certain fixed numbers

Université Libre de Bruxelles

208

MADS/F-Race: Mesh Adaptive Direct Search Meets F-Race

MADS (Mesh Adaptive Direct Search) is combined with F-Race, a racing method that adaptively allocates an appropriate number of evaluations to each candidate configuration. We present the hybrid of MADS and F-Race (MADS/F-Race) and compare it to other ways of defining the number of evaluations

Yuan, Zhi; Stützle, Thomas; Birattari, Mauro

209

Using the Self-Directed Search: Career Explorer with High-Risk Middle School Students

ERIC Educational Resources Information Center

The Self-Directed Search: Career Explorer was used with 98 (95% African American) high-risk middle school students as part of 14 structured career groups based on Cognitive Information Processing theory. Results and implications are presented on the outcomes of this program.

Osborn, Debra S.; Reardon, Robert C.

2006-01-01

210

Twin Similarities in Holland Types as Shown by Scores on the Self-Directed Search

ERIC Educational Resources Information Center

This study examined the degree of similarity between scores on the Self-Directed Search from one set of identical twins. Predictably, a high congruence score was found. Results from a biographical sheet are discussed as well as implications of the results for career counselors.

Chauvin, Ida; McDaniel, Janelle R.; Miller, Mark J.; King, James M.; Eddlemon, Ondie L. M.

2012-01-01

211

NASA Astrophysics Data System (ADS)

Praise for The Great Beyond "A marvelous book-very clear, very readable. A brilliant introduction to the math and physics of higher dimensions, from Flatland to superstrings. Its greatest strength is a wealth of fascinating historical narrative and anecdote. I enjoyed it enormously." -Ian Stewart, author of Flatterland "A remarkable journey from Plato's cave to the farthest reaches of human thought and scientific knowledge. This mind-boggling book allows readers to dream strange visions of hyperspace, chase light waves, explore Klein's quantum odyssey and Kaluza's cocoon, leap through parallel universes, and grasp the very essence of conscience and cosmos. Buy this book and feed your head." -Clifford Pickover, author of A Passion for Mathematics "Halpern looks with a bemused eye at the wildest ideas currently afoot in physics. He takes us into the personal world of those who relish and explore seemingly outlandish notions, and does it with a light, engaging style." -Gregory Benford, author of Foundation's Fear "An informative, stimulating, and thoughtful presentation at the very frontiers of contemporary physics. It is quite on a par with Brian Greene's The Elegant Universe or his more recent The Fabric of the Cosmos, and as such, deserves to receive wide non-specialist coverage among an intelligent, curious, thinking public." -Professor E. Sheldon, Contemporary Physics

Halpern, Paul

2005-08-01

212

Bayes and present dark matter direct search status

Recently there has been a huge activity in the dark matter direct detection field, with the report of an excess from CoGeNT and CRESST along with the annually modulated signal of DAMA/Libra and the strong exclusion bound from XENON100. We analyse these results within the framework of Bayesian inference and evidence. Indeed, Bayesian methods are well suited for marginalizing over experimental systematics and background. We present the results for spin-independent interactions on nuclei, with particular attention to the low dark matter mass region and the compatibility between experiments. In the same vein we also investigate the impact of astrophysical uncertainties on the WIMP preferred parameter space within the class of isotropic dark matter velocity distributions.

Chiara Arina

2011-10-03

213

WIMP Dark Matter Direct-Detection Searches in Noble Gases

Cosmological observations and the dynamics of the Milky Way provide ample evidence for an invisible and dominant mass component. This so-called dark matter could be made of new, colour and charge neutral particles, which were non-relativistic when they decoupled from ordinary matter in the early universe. Such weakly interacting massive particles (WIMPs) are predicted to have a non-zero coupling to baryons and could be detected via their collisions with atomic nuclei in ultra-low background, deep underground detectors. Among these, detectors based on liquefied noble gases have demonstrated tremendous discovery potential over the last decade. After briefly introducing the phenomenology of direct dark matter detection, I will review the main properties of liquefied argon and xenon as WIMP targets and discuss sources of background. I will then describe existing and planned argon and xenon detectors that employ the so-called single- and dual-phase detection techniques, addressing their complementarity and science reach.

Laura Baudis

2014-08-19

214

Taming astrophysical bias in direct dark matter searches

We explore systematic biases in the identification of dark matter in future direct detection experiments and compare the reconstructed dark matter properties when assuming a self-consistent dark matter distribution function and the standard Maxwellian velocity distribution. We find that the systematic bias on the dark matter mass and cross-section determination arising from wrong assumptions for its distribution function is of order ? 1?. A much larger systematic bias can arise if wrong assumptions are made on the underlying Milky Way mass model. However, in both cases the bias is substantially mitigated by marginalizing over galactic model parameters. We additionally show that the velocity distribution can be reconstructed in an unbiased manner for typical dark matter parameters. Our results highlight both the robustness of the dark matter mass and cross-section determination using the standard Maxwellian velocity distribution and the importance of accounting for astrophysical uncertainties in a statistically consistent fashion.

Pato, Miguel; Strigari, Louis E.; Trotta, Roberto; Bertone, Gianfranco

2013-02-01

215

WIMP dark matter direct-detection searches in noble gases

NASA Astrophysics Data System (ADS)

Cosmological observations and the dynamics of the Milky Way provide ample evidence for an invisible and dominant mass component. This so-called dark matter could be made of new, colour and charge neutral particles, which were non-relativistic when they decoupled from ordinary matter in the early universe. Such weakly interacting massive particles (WIMPs) are predicted to have a non-zero coupling to baryons and could be detected via their collisions with atomic nuclei in ultra-low background, deep underground detectors. Among these, detectors based on liquefied noble gases have demonstrated tremendous discovery potential over the last decade. After briefly introducing the phenomenology of direct dark matter detection, I will review the main properties of liquefied argon and xenon as WIMP targets and discuss sources of background. I will then describe existing and planned argon and xenon detectors that employ the so-called single- and dual-phase detection techniques, addressing their complementarity and science reach.

Baudis, Laura

2014-09-01

216

Direction-sensitive dark matter search results in a surface laboratory

We developed a three-dimensional gaseous tracking device and performed a direction-sensitive dark matter search in a surface laboratory. By using 150 Torr carbon tetrafluoride (CF4) gas, we obtained a sky map drawn with the recoil directions of the carbon and fluorine nuclei, and set the first limit on the spin-dependent WIMP (Weakly Interacting Massive Particle)-proton cross section obtained by a direction-sensitive method. Thus, we showed that a WIMP-search experiment with a gaseous tracking device can actually set limits. Furthermore, we demonstrated that this method will potentially play a certain role in revealing the nature of dark matter once a low-background large-volume detector is developed.

Miuchi, Kentaro; Kabuki, Shigeto; Kubo, Hidetoshi; Kurosawa, Shunsuke; Nishimura, Hironobu; Okada, Yoko; Takada, Atsushi; Tanimori, Toru; Tsuchiya, Ken'ichi; Ueno, Kazuki; Sekiya, Hiroyuki; Takeda, Atsushi

2007-01-01

217

Direction-sensitive dark matter search results in a surface laboratory

We developed a three-dimensional gaseous tracking device and performed a direction-sensitive dark matter search in a surface laboratory. By using 150 Torr carbon tetrafluoride (CF4) gas, we obtained a sky map drawn with the recoil directions of the carbon and fluorine nuclei, and set the first limit on the spin-dependent WIMP (Weakly Interacting Massive Particle)-proton cross section obtained by a direction-sensitive method. Thus, we showed that a WIMP-search experiment with a gaseous tracking device can actually set limits. Furthermore, we demonstrated that this method will potentially play a certain role in revealing the nature of dark matter once a low-background large-volume detector is developed.

Kentaro Miuchi; Kaori Hattori; Shigeto Kabuki; Hidetoshi Kubo; Shunsuke Kurosawa; Hironobu Nishimura; Yoko Okada; Atsushi Takada; Toru Tanimori; Ken'ichi Tsuchiya; Kazuki Ueno; Hiroyuki Sekiya; Atsushi Takeda

2007-08-20

218

A search for energetic ion directivity in large solar flares

NASA Technical Reports Server (NTRS)

One of the key observational questions for solar flare physics is: what are the number, the energy spectrum, and the angular distribution of flare-accelerated ions? The standard method for deriving the ion spectral shape employs the ratio of the fluence observed in the 4-7 MeV band to that in the narrow neutron capture line at 2.223 MeV. The 4-7 MeV band is dominated by the principal nuclear de-excitation lines from C-12 and O-16, which are generated in the low chromosphere by the direct excitation or spallation of nuclei by energetic ions. In contrast, the narrow 2.223 MeV line is produced by the capture of thermal neutrons on protons in the photosphere. These neutrons are generated by energetic ion interactions and thermalized by scattering in the solar atmosphere. In a series of papers, Ramaty, Lingenfelter, and their collaborators have calculated the expected ratio of fluence in the 4-7 MeV band to the 2.223 MeV line for a wide range of energetic ion spectral shapes (see, e.g., Hua and Lingenfelter 1987). Another technique for deriving ion spectral shapes and angular distributions uses the relative strength of the Compton tail associated with the 2.223 MeV neutron capture line (Vestrand 1988, 1990). This technique can independently constrain both the angular and the energy distribution of the energetic parent ions. The combination of this tail/line strength diagnostic with the line/(4-7 MeV) fluence ratio allows one to constrain both properties of the energetic ion distributions. The primary objective of our Solar Maximum Mission (SMM) guest investigator program was to study measurements of neutron capture line emission and prompt nuclear de-excitation for large flares detected by the SMM Gamma-Ray Spectrometer (SMM/GRS) and to use these established line diagnostics to study the properties of flare-accelerated ions.

Vestrand, W. Thomas

1993-01-01

219

We show that starting with the addition law of parallel speeds derived as a consequence of the invariance of the speed of light, the Lorentz transformations for the space-time coordinates can be derived.

Bernhard Rothenstein

2007-09-14

220

The perfectly ordered parallel arrays of periodic Ce silicide nanowires can self-organize with atomic precision on single-domain Si(110)-16×2 surfaces. The growth evolution of self-ordered parallel Ce silicide nanowire arrays is investigated over a broad range of Ce coverages on single-domain Si(110)-16×2 surfaces by scanning tunneling microscopy (STM). Three different types of well-ordered parallel arrays, consisting of uniformly spaced and atomically identical Ce silicide nanowires, are self-organized through the heteroepitaxial growth of Ce silicides on a long-range grating-like 16×2 reconstruction at various Ce coverages. Each atomically precise Ce silicide nanowire consists of a bundle of chains and rows with different atomic structures. The atomic-resolution dual-polarity STM images reveal that the interchain coupling leads to the formation of registry-aligned chain bundles within individual Ce silicide nanowires. The nanowire width and the interchain coupling can be adjusted systematically by varying the Ce coverage on the Si(110) surface. This natural template-directed self-organization of perfectly regular parallel nanowire arrays allows for precise control of the feature size and positions within ±0.2 nm over a large area. Thus, it is a promising route to produce parallel nanowire arrays in a straightforward, low-cost, high-throughput process. PMID:24188092

2013-01-01

221

Directional Search for Isospin-Violating Dark Matter with Nuclear Emulsion

Some direct dark matter searches have reported not only positive signals but also an annual modulation of the signal. However, the corresponding parameter spaces have been excluded by other experiments. Isospin-violating dark matter resolves this contradiction by supposing different couplings to protons and neutrons. We study the possibility of testing the parameter region favored by the isospin-violating dark matter model with a future dark matter detector based on nuclear emulsion. Since the nuclear emulsion detector has directional sensitivity, it is expected to examine whether the annual modulations observed in other experiments are caused by dark matter or by background signals.

Keiko I. Nagao; Tatsuhiro Naka

2012-05-22

222

Neutrinoless double beta decay and direct searches for neutrino mass (arXiv:hep-ph/0412300)

Dated: December 22, 2004. Study of the neutrinoless double beta decay and searches for the manifestation and recommendations on them. Observation of the neutrinoless double-beta decay (0νββ) would prove that the total lepton

Piepke, Andreas G.

223

Density matrix search using direct inversion in the iterative subspace as a linear scaling alternative to diagonalization in electronic structure calculations

In electronic structure calculations, diagonalization is one of the key bottlenecks. Density matrix search methods provide an efficient linear scaling approach

Li, Xiaosong; Schlegel, H. Bernhard

224

GPU-enabled parallel processing for image halftoning applications

The programmable Graphics Processing Unit (GPU) has emerged as a powerful parallel processing architecture for various applications requiring a large amount of CPU cycles. In this paper, we study the feasibility of using this architecture for image halftoning, in particular implementing computationally intensive neighborhood halftoning algorithms such as error diffusion and Direct Binary Search (DBS). We show that it is possible
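Error diffusion is a neighborhood algorithm: each pixel's quantization error is pushed onto not-yet-processed neighbors, which is what makes it computationally intensive and hard to parallelize naively. A minimal serial sketch (illustrative only, not the paper's GPU implementation; the classic Floyd-Steinberg weights are assumed):

```python
def floyd_steinberg(img, width, height):
    """Binarize a grayscale image (values 0..255, row-major list) by
    diffusing each pixel's quantization error to unprocessed neighbors."""
    img = [float(v) for v in img]  # work on a float copy
    out = [0] * (width * height)
    for y in range(height):
        for x in range(width):
            i = y * width + x
            old = img[i]
            new = 255.0 if old >= 128 else 0.0
            out[i] = int(new)
            err = old - new
            # Distribute error with the classic 7/16, 3/16, 5/16, 1/16 weights.
            if x + 1 < width:
                img[i + 1] += err * 7 / 16
            if y + 1 < height:
                if x > 0:
                    img[i + width - 1] += err * 3 / 16
                img[i + width] += err * 5 / 16
                if x + 1 < width:
                    img[i + width + 1] += err * 1 / 16
    return out
```

The data dependency on previously processed pixels is the serial bottleneck that GPU implementations must restructure around.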

Barry Trager; Chai Wah Wu; Mikel Stanich; Kartheek Chandu

2011-01-01

225

Parallel acquisition in mobile DS-CDMA systems

This paper presents the performance of a direct sequence spread spectrum acquisition scheme in a mobile terrestrial communications system. The effects of fading, multipath, power control, shadowing, multiple access interference, out-of-cell interference, vehicle speed, voice activity, and sectorization are examined. The acquisition scheme uses noncoherent detection and a parallel search strategy. The analysis is done for the reverse link of
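The parallel search strategy evaluates a noncoherent decision statistic at all candidate code phases at once, rather than stepping through them serially. A toy sketch of that statistic (the random ±1 code, delay, and noise level below are invented for illustration; the paper's channel model is far richer):

```python
import cmath
import random

def acquire(rx, code):
    """Return the code phase whose noncoherent energy |corr|^2 is largest."""
    n = len(code)
    energies = []
    for phase in range(n):  # these trials are independent -> run in parallel
        corr = sum(rx[(phase + k) % n] * code[k] for k in range(n))
        energies.append(abs(corr) ** 2)
    return max(range(n), key=lambda p: energies[p])

# Toy demo: a random +/-1 spreading code, delayed and carrier-rotated.
random.seed(1)
code = [random.choice([-1, 1]) for _ in range(64)]
true_phase = 17
rot = cmath.exp(0.7j)  # unknown carrier phase, absorbed by |.|^2
rx = [rot * code[(k - true_phase) % 64] + 0.1 * random.gauss(0, 1)
      for k in range(64)]
found = acquire(rx, code)
```

Squaring the correlation magnitude is what makes the detection noncoherent: no carrier phase estimate is needed before deciding.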

Roland R. Rick; Laurence B. Milstein

1997-01-01

226

Search for direct CP violation in singly Cabibbo-suppressed D±→K+K-π± decays

NASA Astrophysics Data System (ADS)

We report on a search for direct CP violation in the singly Cabibbo-suppressed decay D+→K+K-π+ using a data sample of 476 fb^-1 of e+e- annihilation data accumulated with the BABAR detector at the SLAC PEP-II electron-positron collider, running at and just below the energy of the Υ(4S) resonance. The integrated CP-violating decay rate asymmetry A_CP is determined to be (0.37±0.30±0.15)%. Model-independent and model-dependent Dalitz plot analysis techniques are used to search for CP-violating asymmetries in the various intermediate states. We find no evidence for a CP-violating asymmetry.

Lees, J. P.; Poireau, V.; Tisserand, V.; Garra Tico, J.; Grauges, E.; Palano, A.; Eigen, G.; Stugu, B.; Brown, D. N.; Kerth, L. T.; Kolomensky, Yu. G.; Lynch, G.; Koch, H.; Schroeder, T.; Asgeirsson, D. J.; Hearty, C.; Mattison, T. S.; McKenna, J. A.; So, R. Y.; Khan, A.; Blinov, V. E.; Buzykaev, A. R.; Druzhinin, V. P.; Golubev, V. B.; Kravchenko, E. A.; Onuchin, A. P.; Serednyakov, S. I.; Skovpen, Yu. I.; Solodov, E. P.; Todyshev, K. Yu.; Yushkov, A. N.; Bondioli, M.; Kirkby, D.; Lankford, A. J.; Mandelkern, M.; Atmacan, H.; Gary, J. W.; Liu, F.; Long, O.; Vitug, G. M.; Campagnari, C.; Hong, T. M.; Kovalskyi, D.; Richman, J. D.; West, C. A.; Eisner, A. M.; Kroseberg, J.; Lockman, W. S.; Martinez, A. J.; Schumm, B. A.; Seiden, A.; Chao, D. S.; Cheng, C. H.; Echenard, B.; Flood, K. T.; Hitlin, D. G.; Ongmongkolkul, P.; Porter, F. C.; Rakitin, A. Y.; Andreassen, R.; Huard, Z.; Meadows, B. T.; Sokoloff, M. D.; Sun, L.; Bloom, P. C.; Ford, W. T.; Gaz, A.; Nauenberg, U.; Smith, J. G.; Wagner, S. R.; Ayad, R.; Toki, W. H.; Spaan, B.; Schubert, K. R.; Schwierz, R.; Bernard, D.; Verderi, M.; Clark, P. J.; Playfer, S.; Bettoni, D.; Bozzi, C.; Calabrese, R.; Cibinetto, G.; Fioravanti, E.; Garzia, I.; Luppi, E.; Piemontese, L.; Santoro, V.; Baldini-Ferroli, R.; Calcaterra, A.; de Sangro, R.; Finocchiaro, G.; Patteri, P.; Peruzzi, I. M.; Piccolo, M.; Rama, M.; Zallo, A.; Contri, R.; Guido, E.; Lo Vetere, M.; Monge, M. R.; Passaggio, S.; Patrignani, C.; Robutti, E.; Bhuyan, B.; Prasad, V.; Lee, C. L.; Morii, M.; Edwards, A. J.; Adametz, A.; Uwer, U.; Lacker, H. M.; Lueck, T.; Dauncey, P. D.; Mallik, U.; Chen, C.; Cochran, J.; Meyer, W. T.; Prell, S.; Rubin, A. E.; Gritsan, A. V.; Guo, Z. J.; Arnaud, N.; Davier, M.; Derkach, D.; Grosdidier, G.; Le Diberder, F.; Lutz, A. M.; Malaescu, B.; Roudeau, P.; Schune, M. H.; Stocchi, A.; Wormser, G.; Lange, D. J.; Wright, D. M.; Chavez, C. A.; Coleman, J. P.; Fry, J. R.; Gabathuler, E.; Hutchcroft, D. E.; Payne, D. 
J.; Touramanis, C.; Bevan, A. J.; Di Lodovico, F.; Sacco, R.; Sigamani, M.; Cowan, G.; Brown, D. N.; Davis, C. L.; Denig, A. G.; Fritsch, M.; Gradl, W.; Griessinger, K.; Hafner, A.; Prencipe, E.; Barlow, R. J.; Jackson, G.; Lafferty, G. D.; Behn, E.; Cenci, R.; Hamilton, B.; Jawahery, A.; Roberts, D. A.; Dallapiccola, C.; Cowan, R.; Dujmic, D.; Sciolla, G.; Cheaib, R.; Lindemann, D.; Patel, P. M.; Robertson, S. H.; Biassoni, P.; Neri, N.; Palombo, F.; Stracka, S.; Cremaldi, L.; Godang, R.; Kroeger, R.; Sonnek, P.; Summers, D. J.; Nguyen, X.; Simard, M.; Taras, P.; De Nardo, G.; Monorchio, D.; Onorato, G.; Sciacca, C.; Martinelli, M.; Raven, G.; Jessop, C. P.; LoSecco, J. M.; Wang, W. F.; Honscheid, K.; Kass, R.; Brau, J.; Frey, R.; Sinev, N. B.; Strom, D.; Torrence, E.; Feltresi, E.; Gagliardi, N.; Margoni, M.; Morandin, M.; Posocco, M.; Rotondo, M.; Simi, G.; Simonetto, F.; Stroili, R.; Akar, S.; Ben-Haim, E.; Bomben, M.; Bonneaud, G. R.; Briand, H.; Calderini, G.; Chauveau, J.; Hamon, O.; Leruste, Ph.; Marchiori, G.; Ocariz, J.; Sitt, S.; Biasini, M.; Manoni, E.; Pacetti, S.; Rossi, A.; Angelini, C.; Batignani, G.; Bettarini, S.; Carpinelli, M.; Casarosa, G.; Cervelli, A.; Forti, F.; Giorgi, M. A.; Lusiani, A.; Oberhof, B.; Paoloni, E.; Perez, A.; Rizzo, G.; Walsh, J. J.; Lopes Pegna, D.; Olsen, J.; Smith, A. J. S.; Telnov, A. V.; Anulli, F.; Faccini, R.; Ferrarotto, F.; Ferroni, F.; Gaspero, M.; Li Gioi, L.; Mazzoni, M. A.; Piredda, G.; Bünger, C.; Grünberg, O.; Hartmann, T.; Leddig, T.; Voß, C.; Waldi, R.; Adye, T.; Olaiya, E. O.; Wilson, F. F.; Emery, S.; Hamel de Monchenault, G.; Vasseur, G.; Yèche, Ch.; Aston, D.; Bard, D. J.; Bartoldus, R.; Benitez, J. F.; Cartaro, C.; Convery, M. R.; Dorfan, J.; Dubois-Felsmann, G. P.; Dunwoodie, W.; Ebert, M.; Field, R. C.; Franco Sevilla, M.; Fulsom, B. G.; Gabareen, A. M.; Graham, M. T.; Grenier, P.; Hast, C.; Innes, W. R.; Kelsey, M. H.; Kim, P.; Kocian, M. L.; Leith, D. W. G. 
S.; Lewis, P.; Lindquist, B.; Luitz, S.; Luth, V.; Lynch, H. L.; MacFarlane, D. B.; Muller, D. R.; Neal, H.; Nelson, S.; Perl, M.; Pulliam, T.; Ratcliff, B. N.; Roodman, A.; Salnikov, A. A.; Schindler, R. H.; Snyder, A.; Su, D.; Sullivan, M. K.; Va'vra, J.; Wagner, A. P.; Wisniewski, W. J.; Wittgen, M.; Wright, D. H.; Wulsin, H. W.; Young, C. C.; Ziegler, V.; Park, W.; Purohit, M. V.; White, R. M.; Wilson, J. R.; Randle-Conde, A.; Sekula, S. J.; Bellis, M.; Burchat, P. R.; Miyashita, T. S.; Puccio, E. M. T.; Alam, M. S.; Ernst, J. A.; Gorodeisky, R.; Guttman, N.; Peimer, D. R.; Soffer, A.; Lund, P.; Spanier, S. M.; Ritchie, J. L.; Ruland, A. M.; Schwitters, R. F.; Wray, B. C.; Izen, J. M.; Lou, X. C.; Bianchi, F.; Gamba, D.; Zambito, S.; Lanceri, L.; Vitale, L.; Martinez-Vidal, F.; Oyanguren, A.; Villanueva-Perez, P.; Ahmed, H.

2013-03-01

227

Majorana dark matter with a coloured mediator: collider vs direct and indirect searches

NASA Astrophysics Data System (ADS)

We investigate the signatures at the Large Hadron Collider of a minimal model where the dark matter particle is a Majorana fermion that couples to the Standard Model via one or several coloured mediators. We emphasize the importance of the production channel of coloured scalars through the exchange of a dark matter particle in the t-channel, and perform a dedicated analysis of searches for jets and missing energy for this model. We find that the collider constraints are highly competitive compared to direct detection, and can even be considerably stronger over a wide range of parameters. We also discuss the complementarity with searches for spectral features at gamma-ray telescopes and comment on the possibility of several coloured mediators, which is further constrained by flavour observables.

Garny, Mathias; Ibarra, Alejandro; Rydbeck, Sara; Vogl, Stefan

2014-06-01

228

Towards Direct Detection of WIMPs with the Cryogenic Dark Matter Search

NASA Astrophysics Data System (ADS)

The Cryogenic Dark Matter Search (CDMS) is carrying out a direct detection search for Weakly Interacting Massive Particles (WIMPs), one of the favored candidates for dark matter. Our latest data have placed some of the most stringent limits on the WIMP-nucleon spin-independent cross section: 6.6×10^-44 cm^2 for 60 GeV WIMPs at the 90% confidence level. This paper describes our experiment and our latest results; the status of SuperCDMS Soudan, a new experiment at the Soudan mine in Minnesota that will achieve a sensitivity of 5×10^-45 cm^2; our plans for SuperCDMS SNOLAB, a 100 kg experiment with a projected sensitivity of 3×10^-46 cm^2; and GEODM, a ton-scale experiment at DUSEL with a projected sensitivity of 2×10^-47 cm^2.

Figueroa-Feliciano, E.

2010-02-01

229

Light neutralino dark matter: direct/indirect detection and collider searches

NASA Astrophysics Data System (ADS)

We study the neutralino being the Lightest Supersymmetric Particle (LSP) as a cold Dark Matter (DM) candidate with a mass less than 40 GeV in the framework of the Next-to-Minimal-Supersymmetric-Standard-Model (NMSSM). We find that with the current collider constraints from LEP, the Tevatron and the LHC, there are three types of light DM solutions consistent with the direct/indirect searches as well as the relic abundance considerations: (i) A1, H1-funnels, (ii) stau coannihilation and (iii) sbottom coannihilation. Type (i) may take place in any theory with a light scalar (or pseudo-scalar) near the LSP pair threshold, while Types (ii) and (iii) could occur in the framework of the Minimal-Supersymmetric-Standard-Model (MSSM) as well. We present a comprehensive study on the properties of these solutions and point out their immediate relevance to the experiments of underground direct detection such as SuperCDMS and LUX/LZ, and astrophysical indirect searches such as Fermi-LAT. We also find that the decays of the SM-like Higgs boson may be modified appreciably and the new decay channels to the light SUSY particles may be sizable. The new light CP-even and CP-odd Higgs bosons will decay to a pair of LSPs as well as other observable final states, leading to interesting new Higgs phenomenology at colliders. For the light sfermion searches, the signals would be very challenging to observe at the LHC given the current bounds. However, a high energy and high luminosity lepton collider, such as the ILC, would be able to fully cover these scenarios by searching for events with large missing energy plus charged tracks or displaced vertices.

Han, Tao; Liu, Zhen; Su, Shufang

2014-08-01

230

DPL : Data Parallel Library Manual

In [PP93] we described a transformational approach to realizing architecture-independent parallel execution of a high-level parallel language. Detailed in that document is a series of steps that, when applied to high-level, data-parallel programs written in the Proteus programming language, yield parallel execution on a variety of different parallel architectures. The Data Parallel Library (DPL) directly supports Proteus by supplying a vital link in

Daniel W. Palmer

1994-01-01

231

Comparison of a constraint directed search to a genetic algorithm in a scheduling application

Scheduling plutonium containers for blending is a time-intensive operation. Several constraints must be taken into account, including the number of containers in a dissolver run, the size of each dissolver run, and the size and target purity of the blended mixture formed from these runs. Two types of algorithms have been used to solve this problem: a constraint-directed search and a genetic algorithm. This paper discusses the implementation of these two different approaches to the problem and the strengths and weaknesses of each algorithm.
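To make the genetic-algorithm side concrete, here is a minimal sketch applied to a toy version of the blending task: choose a subset of containers whose mean purity is close to a target. The purities, target, and GA settings below are invented for illustration; the paper's real constraints (run counts, run sizes) are richer.

```python
import random

def fitness(mask, purities, target):
    """Negative distance between the blend's mean purity and the target."""
    chosen = [p for p, m in zip(purities, mask) if m]
    if not chosen:
        return float("-inf")  # an empty blend is invalid
    return -abs(sum(chosen) / len(chosen) - target)

def ga(purities, target, pop_size=40, gens=60, seed=0):
    """Evolve bit-masks selecting containers; return the fittest mask."""
    rng = random.Random(seed)
    n = len(purities)
    pop = [[rng.randint(0, 1) for _ in range(n)] for _ in range(pop_size)]
    for _ in range(gens):
        # Elitist selection: keep the better half, breed the rest.
        pop.sort(key=lambda m: fitness(m, purities, target), reverse=True)
        survivors = pop[: pop_size // 2]
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, n)
            child = a[:cut] + b[cut:]      # one-point crossover
            child[rng.randrange(n)] ^= 1   # single-bit mutation
            children.append(child)
        pop = survivors + children
    return max(pop, key=lambda m: fitness(m, purities, target))
```

A constraint-directed search would instead prune candidate subsets directly from the constraints rather than evolving a population.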

Abbott, L.

1993-04-01

232

A directed search for continuous Gravitational Waves from the Galactic Center

We present the results of a directed search for continuous gravitational waves from unknown, isolated neutron stars in the Galactic Center region, performed on two years of data from LIGO's fifth science run from two LIGO detectors. The search uses a semi-coherent approach, analyzing coherently 630 segments, each spanning 11.5 hours, and then incoherently combining the results of the single segments. It covers gravitational wave frequencies in a range from 78 to 496 Hz and a frequency-dependent range of first order spindown values down to -7.86 × 10^-8 Hz/s at the highest frequency. No gravitational waves were detected. Placing 90% confidence upper limits on the gravitational wave amplitude of sources at the Galactic Center, we reach ~3.35×10^-25 for frequencies near 150 Hz. These upper limits are the most constraining to date for a large-parameter-space search for continuous gravitational wave signals.
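The semi-coherent scheme can be caricatured in a few lines: compute a coherent power statistic per segment per frequency template, then add those powers incoherently across segments, so that an unknown segment-to-segment phase drift does not destroy the accumulated signal. The toy sinusoid and frequency grid below are invented; the actual search uses far more elaborate demodulation.

```python
import cmath
import math

def coherent_power(samples, freq, dt):
    """Coherent step: normalized |single-bin DFT|^2 of one segment."""
    z = sum(x * cmath.exp(-2j * math.pi * freq * k * dt)
            for k, x in enumerate(samples))
    return abs(z) ** 2 / len(samples)

def semi_coherent(segments, freqs, dt):
    """Incoherent step: sum per-segment coherent powers for each template."""
    return [sum(coherent_power(seg, f, dt) for seg in segments)
            for f in freqs]
```

Because powers (not complex amplitudes) are summed, a signal whose phase jumps between segments still stacks up at its template frequency.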

The LIGO Scientific Collaboration; The Virgo Collaboration; J. Aasi; J. Abadie; B. P. Abbott; R. Abbott; T. Abbott; M. R. Abernathy; T. Accadia; F. Acernese; C. Adams; T. Adams; R. X. Adhikari; C. Affeldt; M. Agathos; N. Aggarwal; O. D. Aguiar; P. Ajith; B. Allen; A. Allocca; E. Amador Ceron; D. Amariutei; R. A. Anderson; S. B. Anderson; W. G. Anderson; K. Arai; M. C. Araya; C. Arceneaux; J. Areeda; S. Ast; S. M. Aston; P. Astone; P. Aufmuth; C. Aulbert; L. Austin; B. E. Aylott; S. Babak; P. T. Baker; G. Ballardin; S. W. Ballmer; J. C. Barayoga; D. Barker; S. H. Barnum; F. Barone; B. Barr; L. Barsotti; M. Barsuglia; M. A. Barton; I. Bartos; R. Bassiri; A. Basti; J. Batch; J. Bauchrowitz; Th. S. Bauer; M. Bebronne; B. Behnke; M. Bejger; M. G. Beker; A. S. Bell; C. Bell; I. Belopolski; G. Bergmann; J. M. Berliner; A. Bertolini; D. Bessis; J. Betzwieser; P. T. Beyersdorf; T. Bhadbhade; I. A. Bilenko; G. Billingsley; J. Birch; M. Bitossi; M. A. Bizouard; E. Black; J. K. Blackburn; L. Blackburn; D. Blair; M. Blom; O. Bock; T. P. Bodiya; M. Boer; C. Bogan; C. Bond; F. Bondu; L. Bonelli; R. Bonnand; R. Bork; M. Born; S. Bose; L. Bosi; J. Bowers; C. Bradaschia; P. R. Brady; V. B. Braginsky; M. Branchesi; C. A. Brannen; J. E. Brau; J. Breyer; T. Briant; D. O. Bridges; A. Brillet; M. Brinkmann; V. Brisson; M. Britzger; A. F. Brooks; D. A. Brown; D. D. Brown; F. Brückner; T. Bulik; H. J. Bulten; A. Buonanno; D. Buskulic; C. Buy; R. L. Byer; L. Cadonati; G. Cagnoli; J. Calderón Bustillo; E. Calloni; J. B. Camp; P. Campsie; K. C. Cannon; B. Canuel; J. Cao; C. D. Capano; F. Carbognani; L. Carbone; S. Caride; A. Castiglia; S. Caudill; M. Cavagliá; F. Cavalier; R. Cavalieri; G. Cella; C. Cepeda; E. Cesarini; R. Chakraborty; T. Chalermsongsak; S. Chao; P. Charlton; E. Chassande-Mottin; X. Chen; Y. Chen; A. Chincarini; A. Chiummo; H. S. Cho; J. Chow; N. Christensen; Q. Chu; S. S. Y. Chua; S. Chung; G. Ciani; F. Clara; D. E. Clark; J. A. Clark; F. Cleva; E. Coccia; P. -F. Cohadon; A. 
Colla; M. Colombini; M. Constancio Jr; A. Conte; R. Conte; D. Cook; T. R. Corbitt; M. Cordier; N. Cornish; A. Corsi; C. A. Costa; M. W. Coughlin; J. -P. Coulon; S. Countryman; P. Couvares; D. M. Coward; M. Cowart; D. C. Coyne; K. Craig; J. D. E. Creighton; T. D. Creighton; S. G. Crowder; A. Cumming; L. Cunningham; E. Cuoco; K. Dahl; T. Dal Canton; M. Damjanic; S. L. Danilishin; S. D'Antonio; K. Danzmann; V. Dattilo; B. Daudert; H. Daveloza; M. Davier; G. S. Davies; E. J. Daw; R. Day; T. Dayanga; R. De Rosa; G. Debreczeni; J. Degallaix; W. Del Pozzo; E. Deleeuw; S. Deléglise; T. Denker; T. Dent; H. Dereli; V. Dergachev; R. DeRosa; R. DeSalvo; S. Dhurandhar; L. Di Fiore; A. Di Lieto; I. Di Palma; A. Di Virgilio; M. Díaz; A. Dietz; K. Dmitry; F. Donovan; K. L. Dooley; S. Doravari; M. Drago; R. W. P. Drever; J. C. Driggers; Z. Du; J. -C. Dumas; S. Dwyer; T. Eberle; M. Edwards; A. Effler; P. Ehrens; J. Eichholz; S. S. Eikenberry; G. Endröczi; R. Essick; T. Etzel; K. Evans; M. Evans; T. Evans; M. Factourovich; V. Fafone; S. Fairhurst; Q. Fang; B. Farr; W. Farr; M. Favata; D. Fazi; H. Fehrmann; D. Feldbaum; I. Ferrante; F. Ferrini; F. Fidecaro; L. S. Finn; I. Fiori; R. Fisher; R. Flaminio; E. Foley; S. Foley; E. Forsi; L. A. Forte; N. Fotopoulos; J. -D. Fournier; S. Franco; S. Frasca; F. Frasconi; M. Frede; M. Frei; Z. Frei; A. Freise; R. Frey; T. T. Fricke; P. Fritschel; V. V. Frolov; M. -K. Fujimoto; P. Fulda; M. Fyffe; J. Gair; L. Gammaitoni; J. Garcia; F. Garufi; N. Gehrels; G. Gemme; E. Genin; A. Gennai; L. Gergely; S. Ghosh; J. A. Giaime; S. Giampanis; K. D. Giardina; A. Giazotto; S. Gil-Casanova; C. Gill; J. Gleason; E. Goetz; R. Goetz; L. Gondan; G. González; N. Gordon; M. L. Gorodetsky; S. Gossan; S. Goßler; R. Gouaty; C. Graef; P. B. Graff; M. Granata; A. Grant; S. Gras; C. Gray; R. J. S. Greenhalgh; A. M. Gretarsson; C. Griffo; H. Grote; K. Grover; S. Grunewald; G. M. Guidi; C. Guido; K. E. Gushwa; E. K. Gustafson; R. Gustafson; B. Hall; E. Hall; D. Hammer; G. 
Hammond; M. Hanke; J. Hanks; C. Hanna; J. Hanson; J. Harms; G. M. Harry; I. W. Harry; E. D. Harstad; M. T. Hartman; K. Haughian; K. Hayama; J. Heefner; A. Heidmann; M. Heintze; H. Heitmann; P. Hello; G. Hemming; M. Hendry; I. S. Heng; A. W. Heptonstall; M. Heurs; S. Hild; D. Hoak; K. A. Hodge; K. Holt; M. Holtrop; T. Hong; S. Hooper; T. Horrom; D. J. Hosken; J. Hough; E. J. Howell; Y. Hu; Z. Hua; V. Huang; E. A. Huerta; B. Hughey; S. Husa; S. H. Huttner; M. Huynh; T. Huynh-Dinh; J. Iafrate; D. R. Ingram; R. Inta; T. Isogai; A. Ivanov; B. R. Iyer; K. Izumi; M. Jacobson; E. James; H. Jang; Y. J. Jang; P. Jaranowski; F. Jiménez-Forteza; W. W. Johnson; D. Jones; D. I. Jones; R. Jones; R. J. G. Jonker; L. Ju; Haris K; P. Kalmus; V. Kalogera; S. Kandhasamy; G. Kang; J. B. Kanner; M. Kasprzack; R. Kasturi; E. Katsavounidis; W. Katzman; H. Kaufer; K. Kaufman; K. Kawabe

2013-09-27

233

A directed search for gravitational waves from Scorpius X-1 with initial LIGO

We present results of a search for continuously emitted gravitational radiation, directed at the brightest low-mass X-ray binary, Scorpius X-1. Our semi-coherent analysis covers 10 days of LIGO S5 data ranging from 50-550 Hz, and performs an incoherent sum of coherent $\mathcal{F}$-statistic power distributed amongst frequency-modulated orbital sidebands. All candidates not removed at the veto stage were found to be consistent with noise at a 1% false alarm rate. We present Bayesian 95% confidence upper limits on gravitational-wave strain amplitude using two different prior distributions: a standard one, with no a priori assumptions about the orientation of Scorpius X-1; and an angle-restricted one, using a prior derived from electromagnetic observations. Median strain upper limits of 1.3e-24 and 8e-25 are reported at 150 Hz for the standard and angle-restricted searches respectively. This proof-of-principle analysis was limited to a short observation time by unknown effects of accretion on the intrinsic spin frequency of the neutron star, but improves upon previous upper limits by factors of ~1.4 for the standard, and 2.3 for the angle-restricted search at the sensitive region of the detector.

The LIGO Scientific Collaboration; the Virgo Collaboration; J. Aasi; B. P. Abbott; R. Abbott; T. Abbott; M. R. Abernathy; F. Acernese; K. Ackley; C. Adams; T. Adams; T. Adams; P. Addesso; R. X. Adhikari; V. Adya; C. Affeldt; M. Agathos; K. Agatsuma; N. Aggarwal; O. D. Aguiar; A. Ain; P. Ajith; A. Alemic; B. Allen; A. Allocca; D. Amariutei; S. B. Anderson; W. G. Anderson; K. Arai; M. C. Araya; C. Arceneaux; J. S. Areeda; G. Ashton; S. Ast; S. M. Aston; P. Astone; P. Aufmuth; C. Aulbert; B. E. Aylott; S. Babak; P. T. Baker; F. Baldaccini; G. Ballardin; S. W. Ballmer; J. C. Barayoga; M. Barbet; S. Barclay; B. C. Barish; D. Barker; F. Barone; B. Barr; L. Barsotti; M. Barsuglia; J. Bartlett; M. A. Barton; I. Bartos; R. Bassiri; A. Basti; J. C. Batch; Th. S. Bauer; C. Baune; V. Bavigadda; B. Behnke; M. Bejger; C. Belczynski; A. S. Bell; C. Bell; M. Benacquista; J. Bergman; G. Bergmann; C. P. L. Berry; D. Bersanetti; A. Bertolini; J. Betzwieser; S. Bhagwat; R. Bhandare; I. A. Bilenko; G. Billingsley; J. Birch; S. Biscans; M. Bitossi; C. Biwer; M. A. Bizouard; J. K. Blackburn; L. Blackburn; C. D. Blair; D. Blair; S. Bloemen; O. Bock; T. P. Bodiya; M. Boer; G. Bogaert; P. Bojtos; C. Bond; F. Bondu; L. Bonelli; R. Bonnand; R. Bork; M. Born; V. Boschi; Sukanta Bose; C. Bradaschia; P. R. Brady; V. B. Braginsky; M. Branchesi; J. E. Brau; T. Briant; D. O. Bridges; A. Brillet; M. Brinkmann; V. Brisson; A. F. Brooks; D. A. Brown; D. D. Brown; N. M. Brown; S. Buchman; A. Buikema; T. Bulik; H. J. Bulten; A. Buonanno; D. Buskulic; C. Buy; L. Cadonati; G. Cagnoli; J. Calderón Bustillo; E. Calloni; J. B. Camp; K. C. Cannon; J. Cao; C. D. Capano; F. Carbognani; S. Caride; S. Caudill; M. Cavaglià; F. Cavalier; R. Cavalieri; G. Cella; C. Cepeda; E. Cesarini; R. Chakraborty; T. Chalermsongsak; S. J. Chamberlin; S. Chao; P. Charlton; E. Chassande-Mottin; Y. Chen; A. Chincarini; A. Chiummo; H. S. Cho; M. Cho; J. H. Chow; N. Christensen; Q. Chu; S. Chua; S. Chung; G. Ciani; F. Clara; J. A. 
Clark; F. Cleva; E. Coccia; P. -F. Cohadon; A. Colla; C. Collette; M. Colombini; L. Cominsky; M. Constancio, Jr.; A. Conte; D. Cook; T. R. Corbitt; N. Cornish; A. Corsi; C. A. Costa; M. W. Coughlin; J. -P. Coulon; S. Countryman; P. Couvares; D. M. Coward; M. J. Cowart; D. C. Coyne; R. Coyne; K. Craig; J. D. E. Creighton; T. D. Creighton; J. Cripe; S. G. Crowder; A. Cumming; L. Cunningham; E. Cuoco; C. Cutler; K. Dahl; T. Dal Canton; M. Damjanic; S. L. Danilishin; S. D'Antonio; K. Danzmann; L. Dartez; V. Dattilo; I. Dave; H. Daveloza; M. Davier; G. S. Davies; E. J. Daw; R. Day; D. DeBra; G. Debreczeni; J. Degallaix; M. De Laurentis; S. Deléglise; W. Del Pozzo; T. Denker; T. Dent; H. Dereli; V. Dergachev; R. De Rosa; R. T. DeRosa; R. DeSalvo; S. Dhurandhar; M. Díaz; L. Di Fiore; A. Di Lieto; I. Di Palma; A. Di Virgilio; G. Dojcinoski; V. Dolique; E. Dominguez; F. Donovan; K. L. Dooley; S. Doravari; R. Douglas; T. P. Downes; M. Drago; J. C. Driggers; Z. Du; M. Ducrot; S. Dwyer; T. Eberle; T. Edo; M. Edwards; M. Edwards; A. Effler; H. -B. Eggenstein; P. Ehrens; J. Eichholz; S. S. Eikenberry; R. Essick; T. Etzel; M. Evans; T. Evans; M. Factourovich; V. Fafone; S. Fairhurst; X. Fan; Q. Fang; S. Farinon; B. Farr; W. M. Farr; M. Favata; M. Fays; H. Fehrmann; M. M. Fejer; D. Feldbaum; I. Ferrante; E. C. Ferreira; F. Ferrini; F. Fidecaro; I. Fiori; R. P. Fisher; R. Flaminio; J. -D. Fournier; S. Franco; S. Frasca; F. Frasconi; Z. Frei; A. Freise; R. Frey; T. T. Fricke; P. Fritschel; V. V. Frolov; S. Fuentes-Tapia; P. Fulda; M. Fyffe; J. R. Gair; L. Gammaitoni; S. Gaonkar; F. Garufi; A. Gatto; N. Gehrels; G. Gemme; B. Gendre; E. Genin; A. Gennai; L. Á. Gergely; S. Ghosh; J. A. Giaime; K. D. Giardina; A. Giazotto; J. Gleason; E. Goetz; R. Goetz; L. Gondan; G. González; N. Gordon; M. L. Gorodetsky; S. Gossan; S. Goßler; R. Gouaty; C. Gräf; P. B. Graff; M. Granata; A. Grant; S. Gras; C. Gray; R. J. S. Greenhalgh; A. M. Gretarsson; P. Groot; H. Grote; S. Grunewald; G. M. Guidi; C. 
J. Guido; X. Guo; K. Gushwa; E. K. Gustafson; R. Gustafson; J. Hacker; E. D. Hall; G. Hammond; M. Hanke; J. Hanks; C. Hanna; M. D. Hannam; J. Hanson; T. Hardwick; J. Harms; G. M. Harry; I. W. Harry; M. Hart; M. T. Hartman; C. -J. Haster; K. Haughian; S. Hee; A. Heidmann; M. Heintze; G. Heinzel; H. Heitmann; P. Hello; G. Hemming; M. Hendry; I. S. Heng; A. W. Heptonstall; M. Heurs; M. Hewitson; S. Hild; D. Hoak; K. A. Hodge; D. Hofman; S. E. Hollitt; K. Holt; P. Hopkins; D. J. Hosken; J. Hough; E. Houston; E. J. Howell; Y. M. Hu; E. Huerta; B. Hughey; S. Husa; S. H. Huttner; M. Huynh; T. Huynh-Dinh; A. Idrisy; N. Indik; D. R. Ingram; R. Inta; G. Islas; J. C. Isler; T. Isogai; B. R. Iyer; K. Izumi; M. Jacobson; H. Jang; P. Jaranowski; S. Jawahar; Y. Ji; F. Jiménez-Forteza; W. W. Johnson; D. I. Jones; R. Jones; R. J. G. Jonker; L. Ju; Haris K; V. Kalogera

2014-12-01

234

Directed search for gravitational waves from Scorpius X-1 with initial LIGO data

NASA Astrophysics Data System (ADS)

We present results of a search for continuously emitted gravitational radiation, directed at the brightest low-mass X-ray binary, Scorpius X-1. Our semicoherent analysis covers 10 days of LIGO S5 data ranging from 50-550 Hz, and performs an incoherent sum of coherent F-statistic power distributed amongst frequency-modulated orbital sidebands. All candidates not removed at the veto stage were found to be consistent with noise at a 1% false alarm rate. We present Bayesian 95% confidence upper limits on gravitational-wave strain amplitude using two different prior distributions: a standard one, with no a priori assumptions about the orientation of Scorpius X-1; and an angle-restricted one, using a prior derived from electromagnetic observations. Median strain upper limits of 1.3×10^-24 and 8×10^-25 are reported at 150 Hz for the standard and angle-restricted searches respectively. This proof-of-principle analysis was limited to a short observation time by unknown effects of accretion on the intrinsic spin frequency of the neutron star, but improves upon previous upper limits by factors of ~1.4 for the standard, and 2.3 for the angle-restricted search at the sensitive region of the detector.

Aasi, J.; Abbott, B. P.; Abbott, R.; Abbott, T.; Abernathy, M. R.; Acernese, F.; Ackley, K.; Adams, C.; Adams, T.; Addesso, P.; Adhikari, R. X.; Adya, V.; Affeldt, C.; Agathos, M.; Agatsuma, K.; Aggarwal, N.; Aguiar, O. D.; Ain, A.; Ajith, P.; Alemic, A.; Allen, B.; Allocca, A.; Amariutei, D.; Anderson, S. B.; Anderson, W. G.; Arai, K.; Araya, M. C.; Arceneaux, C.; Areeda, J. S.; Arnaud, N.; Ashton, G.; Ast, S.; Aston, S. M.; Astone, P.; Aufmuth, P.; Aulbert, C.; Aylott, B. E.; Babak, S.; Baker, P. T.; Baldaccini, F.; Ballardin, G.; Ballmer, S. W.; Barayoga, J. C.; Barbet, M.; Barclay, S.; Barish, B. C.; Barker, D.; Barone, F.; Barr, B.; Barsotti, L.; Barsuglia, M.; Bartlett, J.; Barton, M. A.; Bartos, I.; Bassiri, R.; Basti, A.; Batch, J. C.; Bauer, Th. S.; Baune, C.; Bavigadda, V.; Behnke, B.; Bejger, M.; Belczynski, C.; Bell, A. S.; Bell, C.; Benacquista, M.; Bergman, J.; Bergmann, G.; Berry, C. P. L.; Bersanetti, D.; Bertolini, A.; Betzwieser, J.; Bhagwat, S.; Bhandare, R.; Bilenko, I. A.; Billingsley, G.; Birch, J.; Biscans, S.; Bitossi, M.; Biwer, C.; Bizouard, M. A.; Blackburn, J. K.; Blackburn, L.; Blair, C. D.; Blair, D.; Bloemen, S.; Bock, O.; Bodiya, T. P.; Boer, M.; Bogaert, G.; Bojtos, P.; Bond, C.; Bondu, F.; Bonelli, L.; Bonnand, R.; Bork, R.; Born, M.; Boschi, V.; Bose, Sukanta; Bradaschia, C.; Brady, P. R.; Braginsky, V. B.; Branchesi, M.; Brau, J. E.; Briant, T.; Bridges, D. O.; Brillet, A.; Brinkmann, M.; Brisson, V.; Brooks, A. F.; Brown, D. A.; Brown, D. D.; Brown, N. M.; Buchman, S.; Buikema, A.; Bulik, T.; Bulten, H. J.; Buonanno, A.; Buskulic, D.; Buy, C.; Cadonati, L.; Cagnoli, G.; Calderón Bustillo, J.; Calloni, E.; Camp, J. B.; Cannon, K. C.; Cao, J.; Capano, C. D.; Capocasa, E.; Carbognani, F.; Caride, S.; Diaz, J. Casanueva; Caudill, S.; Cavaglià, M.; Cavalier, F.; Cavalieri, R.; Cella, G.; Cepeda, C.; Cesarini, E.; Chakraborty, R.; Chalermsongsak, T.; Chamberlin, S. 
J.; Chao, S.; Charlton, P.; Chassande-Mottin, E.; Chen, Y.; Chincarini, A.; Chiummo, A.; Cho, H. S.; Cho, M.; Chow, J. H.; Christensen, N.; Chu, Q.; Chua, S.; Chung, S.; Ciani, G.; Clara, F.; Clark, J. A.; Cleva, F.; Coccia, E.; Cohadon, P.-F.; Colla, A.; Collette, C.; Colombini, M.; Cominsky, L.; Constancio, M.; Conte, A.; Cook, D.; Corbitt, T. R.; Cornish, N.; Corsi, A.; Costa, C. A.; Coughlin, M. W.; Coulon, J.-P.; Countryman, S.; Couvares, P.; Coward, D. M.; Cowart, M. J.; Coyne, D. C.; Coyne, R.; Craig, K.; Creighton, J. D. E.; Creighton, T. D.; Cripe, J.; Crowder, S. G.; Cumming, A.; Cunningham, L.; Cuoco, E.; Cutler, C.; Dahl, K.; Canton, T. Dal; Damjanic, M.; Danilishin, S. L.; D'Antonio, S.; Danzmann, K.; Dartez, L.; Dattilo, V.; Dave, I.; Daveloza, H.; Davier, M.; Davies, G. S.; Daw, E. J.; Day, R.; DeBra, D.; Debreczeni, G.; Degallaix, J.; De Laurentis, M.; Deléglise, S.; Del Pozzo, W.; Denker, T.; Dent, T.; Dereli, H.; Dergachev, V.; De Rosa, R.; DeRosa, R. T.; DeSalvo, R.; Dhurandhar, S.; Díaz, M.; Di Fiore, L.; Di Lieto, A.; Di Palma, I.; Di Virgilio, A.; Dojcinoski, G.; Dolique, V.; Dominguez, E.; Donovan, F.; Dooley, K. L.; Doravari, S.; Douglas, R.; Downes, T. P.; Drago, M.; Driggers, J. C.; Du, Z.; Ducrot, M.; Dwyer, S.; Eberle, T.; Edo, T.; Edwards, M.; Edwards, M.; Effler, A.; Eggenstein, H.-B.; Ehrens, P.; Eichholz, J.; Eikenberry, S. S.; Essick, R.; Etzel, T.; Evans, M.; Evans, T.; Factourovich, M.; Fafone, V.; Fairhurst, S.; Fan, X.; Fang, Q.; Farinon, S.; Farr, B.; Farr, W. M.; Favata, M.; Fays, M.; Fehrmann, H.; Fejer, M. M.; Feldbaum, D.; Ferrante, I.; Ferreira, E. C.; Ferrini, F.; Fidecaro, F.; Fiori, I.; Fisher, R. P.; Flaminio, R.; Fournier, J.-D.; Franco, S.; Frasca, S.; Frasconi, F.; Frei, Z.; Freise, A.; Frey, R.; Fricke, T. T.; Fritschel, P.; Frolov, V. V.; Fuentes-Tapia, S.; Fulda, P.; Fyffe, M.; Gair, J. R.; Gammaitoni, L.; Gaonkar, S.; Garufi, F.; Gatto, A.; Gehrels, N.; Gemme, G.; Gendre, B.; Genin, E.; Gennai, A.; Gergely, L. 
Á.; Germain, V.; Ghosh, S.; Giaime, J. A.; Giardina, K. D.; Giazotto, A.; Gleason, J.; Goetz, E.; Goetz, R.; Gondan, L.; González, G.; Gordon, N.; Gorodetsky, M. L.; Gossan, S.; Goßler, S.; Gouaty, R.; Gräf, C.; Graff, P. B.; Granata, M.; Grant, A.; Gras, S.; Gray, C.; Greenhalgh, R. J. S.; Gretarsson, A. M.; Groot, P.; Grote, H.; Grunewald, S.; Guidi, G. M.; Guido, C. J.; Guo, X.; Gushwa, K.; Gustafson, E. K.; Gustafson, R.; Hacker, J.; Hall, E. D.; Hammond, G.; Hanke, M.; Hanks, J.; Hanna, C.; Hannam, M. D.; Hanson, J.; Hardwick, T.; Harms, J.; Harry, G. M.; Harry, I. W.; Hart, M.; Hartman, M. T.; Haster, C.-J.; Haughian, K.; Heidmann, A.; Heintze, M.; Heinzel, G.; Heitmann, H.; Hello, P.; Hemming, G.; Hendry, M.; Heng, I. S.; Heptonstall, A. W.; Heurs, M.; Hewitson, M.; Hild, S.; Hoak, D.; Hodge, K. A.; Hofman, D.; Hollitt, S. E.; Holt, K.; Hopkins, P.

2015-03-01

235

NASA Astrophysics Data System (ADS)

The direct numerical simulation (DNS) offers the most accurate approach to modeling the behavior of a physical system, but carries an enormous computation cost. There exists a need for an accurate DNS to model the coupled solid-fluid system seen in targeted drug delivery (TDD), nanofluid thermal energy storage (TES), as well as other fields where experiments are necessary, but experiment design may be costly. A parallel DNS can greatly reduce the large computation times required, while providing the same results and functionality of the serial counterpart. A D2Q9 lattice Boltzmann method approach was implemented to solve the fluid phase. The use of domain decomposition with message passing interface (MPI) parallelism resulted in an algorithm that exhibits super-linear scaling in testing, which may be attributed to the caching effect. Decreased performance on a per-node basis for a fixed number of processes confirms this observation. A multiscale approach was implemented to model the behavior of nanoparticles submerged in a viscous fluid, and used to examine the mechanisms that promote or inhibit clustering. Parallelization of this model using a master-worker algorithm with MPI gives less-than-linear speedup for a fixed number of particles and varying number of processes. This is due to the inherent inefficiency of the master-worker approach.
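The master-worker pattern blamed above for the sub-linear speedup can be sketched with Python's `multiprocessing` standing in for MPI; the `drag_force` task is a hypothetical stand-in for the dissertation's nanoparticle model, not its actual physics:

```python
import math
from multiprocessing import Pool

def drag_force(velocity, radius=1e-7, viscosity=1e-3):
    """Stokes drag on a single particle -- an illustrative per-particle task."""
    return 6.0 * math.pi * viscosity * radius * velocity

def master(velocities, workers=4):
    # The master scatters one task per particle and gathers the results.
    # Every task crosses a process boundary, so communication overhead grows
    # with the particle count -- the inherent inefficiency noted above.
    with Pool(workers) as pool:
        return pool.map(drag_force, velocities)

if __name__ == "__main__":
    forces = master([0.1 * i for i in range(8)])
```

Because the master serializes the scatter and gather, adding processes past the point where communication dominates yields diminishing returns for a fixed particle count.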

Sloan, Gregory James

236

beta(2)-microglobulin (beta(2)m) is a 99-residue protein with an immunoglobulin fold that forms beta-sheet-rich amyloid fibrils in dialysis-related amyloidosis. Here the environment and accessibility of side chains within amyloid fibrils formed in vitro from beta(2)m with a long straight morphology are probed by site-directed spin labeling and accessibility to modification with N-ethyl maleimide using 19 site-specific cysteine variants. Continuous wave electron paramagnetic resonance spectroscopy of these fibrils reveals a core predominantly organized in a parallel, in-register arrangement, by contrast with other beta(2)m aggregates. A continuous array of parallel, in-register beta-strands involving most of the polypeptide sequence is inconsistent with the cryoelectron microscopy structure, which reveals an architecture based on subunit repeats. To reconcile these data, the number of spins in close proximity required to give rise to spin exchange was determined. Systematic studies of a model protein system indicated that juxtaposition of four spin labels is sufficient to generate exchange narrowing. Combined with information about side-chain mobility and accessibility, we propose that the amyloid fibrils of beta(2)m consist of about six beta(2)m monomers organized in stacks with a parallel, in-register array. The results suggest an organization more complex than the accordion-like beta-sandwich structure commonly proposed for amyloid fibrils. PMID:20335170

Ladner, Carol L; Chen, Min; Smith, David P; Platt, Geoffrey W; Radford, Sheena E; Langen, Ralf

2010-05-28

237

A parallel trajectory optimization tool for aerospace plane guidance

NASA Technical Reports Server (NTRS)

A parallel trajectory optimization algorithm is being developed. One possible mission is to provide real-time, on-line guidance for the National Aerospace Plane. The algorithm solves a discrete-time problem via the augmented Lagrangian nonlinear programming algorithm. The algorithm exploits the dynamic programming structure of the problem to achieve parallelism in calculating cost functions, gradients, constraints, Jacobians, Hessian approximations, search directions, and merit functions. Special additions to the augmented Lagrangian algorithm achieve robust convergence, achieve (almost) superlinear local convergence, and deal with constraint curvature efficiently. The algorithm can handle control and state inequality constraints such as angle-of-attack and dynamic pressure constraints. Portions of the algorithm have been tested. The nonlinear programming core algorithm performs well on a variety of static test problems and on an orbit transfer problem. The parallel search direction algorithm can reduce wall clock time by a factor of 10 for this part of the computation task.
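The augmented Lagrangian iteration at the core of such an algorithm can be sketched on a toy equality-constrained problem; the objective, constraint, penalty weight, and step sizes below are illustrative, not the trajectory-optimization formulation:

```python
def f_grad(x):
    """Gradient of the toy objective f(x) = (x1 - 1)^2 + (x2 - 2)^2."""
    return [2.0 * (x[0] - 1.0), 2.0 * (x[1] - 2.0)]

def c(x):
    """Equality constraint c(x) = x1 + x2 - 2 = 0."""
    return x[0] + x[1] - 2.0

def aug_lagrangian_solve(rho=10.0, outer=20, inner=200, step=0.02):
    x, lam = [0.0, 0.0], 0.0
    for _ in range(outer):
        # Inner loop: approximately minimize the augmented Lagrangian
        # L_A(x) = f(x) + lam*c(x) + (rho/2)*c(x)^2 by gradient descent.
        for _ in range(inner):
            g, cx = f_grad(x), c(x)
            x = [x[i] - step * (g[i] + lam + rho * cx) for i in range(2)]
        lam += rho * c(x)   # outer loop: first-order multiplier update
    return x

solution = aug_lagrangian_solve()   # converges near the optimum (0.5, 1.5)
```

In the trajectory setting, the inner minimization is the expensive step, and it is there that the dynamic programming structure admits the parallelism described above.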

Psiaki, Mark L.; Park, Kihong

1991-01-01

238

ERIC Educational Resources Information Center

Tested whether differences in the responses of gifted female adolescents (N=284) on Holland's Self Directed Search (SDS) occur as a function of gender schema. Results indicated that SDS scores varied as a function of gender group, but the direction of group differences did not support gender schema theory. (LLL)

Hollinger, Constance L.

1984-01-01

239

We present the results of an analysis of data recorded at the Pierre Auger Observatory in which we search for groups of directionally aligned events (or ‘multiplets’) which exhibit a correlation between arrival direction and the inverse of the energy. These signatures are expected from sets of events coming from the same source after having been deflected by intervening coherent magnetic fields.

P. Abreu; M. Aglietta; E. J. Ahn; I. F. M. Albuquerque; D. Allard; I. Allekotte; J. Allen; P. Allison; J. Alvarez Castillo; M. Ambrosio; A. Aminaei; L. Anchordoqui; S. Andringa; T. Antičić; A. Anzalone; C. Aramo; E. Arganda; F. Arqueros; H. Asorey; P. Assis; J. Aublin; M. Avenier; G. Avila; T. Bäcker; M. Balzer; K. B. Barber; A. F. Barbosa; R. Bardenet; S. L. C. Barroso; B. Baughman; J. Bäuml; J. J. Beatty; B. R. Becker; K. H. Becker; A. Bellétoile; J. A. Bellido; S. BenZvi; C. Berat; X. Bertou; P. L. Biermann; P. Billoir; F. Blanco; M. Blanco; H. Blümer; M. Boháčová; D. Boncioli; C. Bonifazi; R. Bonino; N. Borodai; J. Brack; P. Brogueira; W. C. Brown; R. Bruijn; P. Buchholz; A. Bueno; R. E. Burton; K. S. Caballero-Mora; L. Caramete; R. Caruso; A. Castellina; O. Catalano; G. Cataldi; L. Cazon; R. Cester; J. Chauvin; S. H. Cheng; A. Chiavassa; J. A. Chinellato; J. Chudoba; R. W. Clay; M. R. Coluccia; F. Contreras; H. Cook; M. J. Cooper; A. Cordier; S. Coutu; C. E. Covault; A. Creusot; J. Cronin; A. Curutiu; S. Dagoret-Campagne; R. Dallier; S. Dasso; K. Daumiller; B. R. Dawson; R. M. de Almeida; C. De Donato; S. J. de Jong; G. De La Vega; I. De Mitri; V. de Souza; K. D. de Vries; G. Decerprit; L. del Peral; M. del Río; O. Deligny; H. Dembinski; N. Dhital; C. Di Giulio; J. C. Diaz; M. L. Díaz Castro; P. N. Diep; C. Dobrigkeit; W. Docters; J. C. D’Olivo; P. N. Dong; A. Dorofeev; J. C. dos Anjos; M. T. Dova; D. D’Urso; I. Dutan; J. Ebr; R. Engel; M. Erdmann; C. O. Escobar; J. Espadanal; A. Etchegoyen; P. Facal San Luis; I. Fajardo Tapia; H. Falcke; G. Farrar; A. C. Fauth; N. Fazzini; A. P. Ferguson; A. Ferrero; B. Fick; A. Filevich; S. Fliescher; C. E. Fracchiolla; U. Fröhlich; B. Fuchs; R. Gaior; R. F. Gamarra; S. Gambetta; B. García; D. García Gámez; A. Gascon; H. Gemmeke; K. Gesterling; P. L. Ghia; U. Giaccari; M. Giller; H. Glass; M. S. Gold; G. Golup; F. Gomez Albarracin; M. Gómez Berisso; P. Gonçalves; D. Gonzalez; J. G. Gonzalez; B. Gookin; D. Góra; A. Gorgi; P. 
Gouffon; S. R. Gozzini; E. Grashorn; S. Grebe; N. Griffith; M. Grigat; A. F. Grillo; Y. Guardincerri; F. Guarino; G. P. Guedes; A. Guzman; J. D. Hague; P. Hansen; D. Harari; S. Harmsma; J. L. Harton; A. Haungs; T. Hebbeker; D. Heck; A. E. Herve; C. Hojvat; N. Hollon; V. C. Holmes; P. Homola; J. R. Hörandel; A. Horneffer; M. Hrabovský; T. Huege; A. Insolia; F. Ionita; A. Italiano; C. Jarne; S. Jiraskova; M. Josebachuili; K. Kadija; K. H. Kampert; P. Karhan; B. Kégl; B. Keilhauer; A. Keivani; J. L. Kelley; E. Kemp; R. M. Kieckhafer; H. O. Klages; M. Kleifges; J. Kleinfeller; D.-H. Koang; K. Kotera; N. Krohm; O. Krömer; D. Kruppke-Hansen; F. Kuehn; D. Kuempel; J. K. Kulbartz; N. Kunka; G. La Rosa; C. Lachaud; P. Lautridou; M. S. A. B. Leão; D. Lebrun; P. Lebrun; M. A. Leigui de Oliveira; A. Letessier-Selvon; I. Lhenry-Yvon; K. Link; R. López; A. Lopez Agüera; K. Louedec; J. Lozano Bahilo; L. Lu; A. Lucero; M. Ludwig; H. Lyberis; M. C. Maccarone; S. Maldera; D. Mandat; P. Mantsch; A. G. Mariazzi; J. Marin; V. Marin; I. C. Maris; H. R. Marquez Falcon; G. Marsella; D. Martello; L. Martin; H. Martinez; O. Martínez Bravo; H. J. Mathes; J. A. J. Matthews; G. Matthiae; D. Maurizio; P. O. Mazur; G. Medina-Tanco; M. Melissas; D. Melo; E. Menichetti; A. Menshikov; P. Mertsch; C. Meurer; S. Mićanović; M. I. Micheletti; W. Miller; L. Miramonti; L. Molina-Bueno; S. Mollerach; M. Monasor; D. Monnier Ragaigne; F. Montanet; B. Morales; C. Morello; E. Moreno; J. C. Moreno; M. Mostafá; C. A. Moura; S. Mueller; M. A. Muller; G. Müller; M. Münchmeyer; R. Mussa; G. Navarra; J. L. Navarro; S. Navas; P. Necesal; L. Nellen; A. Nelles; J. Neuser; P. T. Nhung; L. Niemietz; N. Nierstenhoefer; D. Nitz; D. Nosek; L. Nožka; M. Nyklicek; J. Oehlschläger; A. Olinto; P. Oliva; V. M. Olmos-Gilbaja; N. Pacheco; D. Pakk Selmi-Dei; M. Palatka; J. Pallotta; N. Palmieri; G. Parente; E. Parizot; A. Parra; R. D. Parsons; S. Pastor; T. Paul; M. Pech; R. Pelayo; I. M. Pepe; L. Perrone; R. Pesce; E. 
Petermann; S. Petrera; P. Petrinca; A. Petrolini; Y. Petrov; J. Petrovic; C. Pfendner; N. Phan; R. Piegaia; T. Pierog; P. Pieroni; M. Pimenta; V. Pirronello; M. Platino; V. H. Ponce; M. Pontz; P. Privitera; M. Prouza; E. J. Quel; S. Querchfeld; J. Rautenberg; O. Ravel; D. Ravignani; B. Revenu; J. Ridky; S. Riggi; M. Risse; P. Ristori; H. Rivera; V. Rizi; J. Roberts; C. Robledo; W. Rodrigues de Carvalho; G. Rodriguez; J. Rodriguez Martino; I. Rodriguez-Cabo; M. D. Rodríguez-Frías; G. Ros; J. Rosado; T. Rossler; M. Roth; B. Rouillé-d’Orfeuil; E. Roulet; A. C. Rovero; F. Salamida; H. Salazar; G. Salina; F. Sánchez; C. E. Santo; E. M. Santos; F. Sarazin; B. Sarkar; S. Sarkar; R. Sato; N. Scharf; V. Scherini; H. Schieler; P. Schiffer; A. Schmidt; F. Schmidt; O. Scholten; H. Schoorlemmer

2011-01-01

240

Parallel algorithm development

Rapid changes in parallel computing technology are causing significant changes in the strategies being used for parallel algorithm development. One approach is simply to write computer code in a standard language like FORTRAN 77 with the expectation that the compiler will produce executable code that will run in parallel. The alternatives are: (1) to build explicit message passing directly into the source code; or (2) to write source code without explicit reference to message passing or parallelism, but use a general communications library to provide efficient parallel execution. Application of these strategies is illustrated with examples of codes currently under development.
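Strategy (1), building explicit message passing directly into the source code, can be sketched with Python `multiprocessing` queues standing in for a message-passing library such as MPI; the scatter/gather sum is an illustrative task, not one of the codes under development:

```python
from multiprocessing import Process, Queue

def worker(rank, inbox, outbox):
    """Each rank explicitly receives its slice, computes, and sends back."""
    data = inbox.get()                 # explicit receive
    outbox.put((rank, sum(data)))      # explicit send

def parallel_sum(values, nprocs=2):
    inboxes = [Queue() for _ in range(nprocs)]
    outbox = Queue()
    procs = [Process(target=worker, args=(r, inboxes[r], outbox))
             for r in range(nprocs)]
    for p in procs:
        p.start()
    chunk = (len(values) + nprocs - 1) // nprocs
    for r in range(nprocs):            # scatter the data
        inboxes[r].put(values[r * chunk:(r + 1) * chunk])
    total = sum(outbox.get()[1] for _ in range(nprocs))   # gather partial sums
    for p in procs:
        p.join()
    return total

if __name__ == "__main__":
    print(parallel_sum(list(range(10))))   # prints 45
```

Strategy (2) would hide these queues behind a communications library, leaving the source code free of explicit sends and receives.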

Adams, T.F.

1996-06-01

241

A Direct Dark Matter Search with the MAJORANA Low-Background Broad Energy Germanium Detector

NASA Astrophysics Data System (ADS)

It is well established that a significant portion of our Universe is composed of invisible, non-luminous matter, commonly referred to as dark matter. The detection and characterization of this missing matter is an active area of research in cosmology and particle astrophysics. A general class of candidates for non-baryonic particle dark matter is weakly interacting massive particles (WIMPs). WIMPs emerge naturally from supersymmetry with predicted masses between 1 and 1000 GeV. There are many current and near-future experiments that may shed light on the nature of dark matter by directly detecting WIMP-nucleus scattering events. The MAJORANA experiment will use p-type point contact (PPC) germanium detectors as both the source and detector to search for neutrinoless double-beta decay in 76Ge. These detectors have both exceptional energy resolution and low-energy thresholds. The low-energy performance of PPC detectors, due to their low-capacitance point-contact design, makes them suitable for direct dark matter searches. As a part of the research and development efforts for the MAJORANA experiment, a custom Canberra PPC detector has been deployed at the Kimballton Underground Research Facility in Ripplemead, Virginia. This detector has been used to perform a search for low-mass (< 10 GeV) WIMP-induced nuclear recoils using a 221.49 live-day exposure. It was found that events originating near the surface of the detector plague the signal region, even after all cuts. For this reason, only an upper limit on WIMP-induced nuclear recoils was placed. This limit is inconsistent with several recent claims to have observed light WIMP-based dark matter.

Finnerty, Padraic Seamus

242

NASA Astrophysics Data System (ADS)

In the South Tibetan Himalaya, two major detachment systems are exposed in the Ama Drime and Mount Everest Massifs. These structures represent a fundamental shift in the dynamics of the Himalayan orogen, recording a progression from south-directed to orogen-parallel mid-crustal flow and exhumation. The South Tibetan detachment system (STDS) accommodated exhumation of the Greater Himalayan series (GHS) until the Middle Miocene. A relatively narrow mylonite zone that progressed into a brittle detachment accommodated exhumation of the GHS. Northward, in the down-dip direction (Dzakaa Chu and Doya La), a 1-km-wide distributed zone of deformation that lacks a foliation-parallel brittle detachment characterizes the STDS. Leucogranites in the footwall of the STDS range from 17 to 18 Ma. Previously published 40Ar/39Ar ages suggest that movement on the STDS ended by ~ 16 Ma in Rongbuk Valley and ~ 13 Ma near Dinggye. This once continuous section of the STDS is displaced by the trans-Himalayan Ama Drime Massif and Xainza-Dinggye graben. Two oppositely dipping normal faults and shear zones that bound the Ama Drime Massif record orogen-parallel extension. During exhumation, deformation was partitioned into relatively narrow (100-300-m-thick) mylonite zones that progressed into brittle faults/detachments, which offset Quaternary deposits. U(-Th-)Pb geochronology of mafic lenses suggests that the core of the ADM reached granulite facies at ~ 15 Ma. Leucogranites in the footwall of the detachment faults range from 12 to 11 Ma: significantly younger than those related to movement on the STDS. Previously published 40Ar/39Ar ages from the eastern limb of the Ama Drime Massif suggest that exhumation progressed into the footwall of the Nyüonno detachment between ~ 13-10 Ma. (U-Th)/He apatite ages record a minimum exhumation rate of ~1 mm/yr between 1.5 and 3.0 Ma that was enhanced by focused denudation in the trans-Himalayan Arun River gorge. 
Together these bracket the timing (~ 12 Ma) of a transition from south-directed to orogen-parallel mid-crustal flow and associated graben formation and exhumation along the southern margin of the Tibetan Plateau.

Jessup, M. J.; Cottle, J. M.; Newell, D. L.; Berger, A. L.; Spotila, J. A.

2008-12-01

243

The bibliography contains citations concerning a concept in computers called Massively Parallel Processing. The processing power of a computer may be increased by using numerous processors in parallel and feeding data through a number of different computational paths at the same time. The citations explore these computers and their practical uses, and include case studies, specific problems solved, theory, and future possibilities and needs. Applications of neural network modeling, pattern recognition, image processing, local area routing, and genetic sequence comparison are discussed. (Contains 250 citations and includes a subject term index and title list.)

Not Available

1993-10-01

244

The bibliography contains citations concerning a concept in computers called Massively Parallel Processing. The processing power of a computer may be increased by using numerous processors in parallel and feeding data through a number of different computational paths at the same time. The citations explore these computers and their practical uses, and include case studies, specific problems solved, theory, and future possibilities and needs. Applications of neural network modeling, pattern recognition, image processing, local area routing, and genetic sequence comparison are discussed. (Contains 250 citations and includes a subject term index and title list.)

Not Available

1993-06-01

245

ERIC Educational Resources Information Center

In 2004, a professional delegation of multicultural educators visited the People's Republic of China to explore how diversity issues are addressed and how students are prepared for entry into the international workforce. The delegation, sponsored by the People to People Ambassador Programs, observed numerous parallels to the American system of…

Carjuzaa, Jioanna; Fenimore-Smith, J. Kay; Fuller, Ethlyn Davis; Howe, William A.; Kugler, Eileen; London, Arcenia P.; Ruiz, Ivette; Shin, Barbara

2008-01-01

246

Search for a Direct Large-Cluster-Transfer Process in the 12,13C(20Ne,α) Reaction

PHYSICAL REVIEW C, VOLUME 32, NUMBER 6, DECEMBER 1985. Search for a direct large-cluster-transfer process in the 12,13C(20Ne,α) reaction. T. Murakami, N. Takahashi, Y.-W. Lui, E. Takada, D. M. Tanner, R. E. Tribble, E. Ungricht, and K. Nagatani, Cyclotron Institute, Texas A&M University, College Station, Texas 77843 (Received 7 August 1985). The 12,13C(20Ne,α) reactions were measured at E = 140.2 MeV in order to search for a direct 16O-cluster-transfer process. In the reaction with the 12C target...

Murakami, T.; Takahashi, N.; Lui, YW; Takada, E.; Tanner, D. M.; Tribble, Robert E.; Ungricht, E.; Nagatani, K.

1985-01-01

247

NASA Astrophysics Data System (ADS)

The characteristics of a prototype massively parallel electron beam direct writing (MPEBDW) system are demonstrated. The electron optics consist of an emitter array, a micro-electro-mechanical system (MEMS) condenser lens array, auxiliary lenses, a stigmator, three-stage deflectors to align and scan the parallel beams, and an objective lens acting as a reduction lens. The emitter array produces 10000 programmable 10 μm square beams. The electron emitter is a nanocrystalline silicon (nc-Si) ballistic electron emitter array integrated with an active matrix driver LSI for high-speed emission current control. Because the LSI also has a field curvature correction function, the system can use a large electron emitter array. In this system, beams that are incident on the outside of the paraxial region of the reduction lens can also be used through use of the optical aberration correction functions. The exposure pattern is stored in the active matrix LSI's memory. Alignment between the emitter array and the condenser lens array is performed by moving the emitter stage, which slides along the x- and y-axes and rotates around the z-axis (θ). The electrons of all beams are accelerated and pass through the anode array. The stigmator and the two-stage deflectors perform fine adjustments to the beam positions. The other deflector simultaneously scans all parallel beams to synchronize with the moving target stage. Exposure is carried out by moving the target stage that holds the wafer. The reduction lens focuses all beams on the target wafer surface, and the electron optics of the column reduce the electron image to 0.1% of its original size.

Kojima, A.; Ikegami, N.; Yoshida, T.; Miyaguchi, H.; Muroyama, M.; Nishino, H.; Yoshida, S.; Sugata, M.; Ohyi, H.; Koshida, N.; Esashi, M.

2014-03-01

248

Hybrid halftoning using direct multi-bit search (DMS) screen algorithm

NASA Astrophysics Data System (ADS)

In this paper we propose a mathematical framework for multi-bit aperiodic clustered-dot halftoning based on the Direct Multi-bit Search (DMS) algorithm. A pixel validation map is provided to the DMS algorithm to guide the formation of homogeneous clusters. Beyond this map, the DMS algorithm operates without any user-defined guidance, iteratively choosing the best drop absorptance level. An array of valid pixels is computed after each iteration that restricts the selection of pixels available to the DMS algorithm, improving the dot clustering. This process is repeated throughout the entire range of gray levels to create a visually pleasing multi-bit halftone screen. The resulting mask exhibits a smoother appearance and improved detail rendering compared to conventional clustered-dot halftoning. Much of the improvement originates from the improved sampling of the aperiodic hybrid screen designs.
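The iterative level-selection idea can be illustrated with a toy single-pass direct multi-bit search in Python. This is a sketch only: a box blur stands in for an HVS filter, the four absorptance levels are an assumed example, and the paper's pixel validation map is omitted.

```python
import numpy as np

def box_blur(img, k=3):
    """Crude low-pass 'HVS' stand-in: k x k box blur with wrap-around."""
    out = np.zeros_like(img, dtype=float)
    r = k // 2
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            out += np.roll(img, (dy, dx), axis=(0, 1))
    return out / (k * k)

def dms_pass(halftone, target, levels=(0.0, 1/3, 2/3, 1.0)):
    """One sweep of a toy Direct Multi-bit Search: visit each pixel, try
    every drop absorptance level, and keep whichever level most reduces
    the perceived (blurred) squared error versus the target tone."""
    for y in range(halftone.shape[0]):
        for x in range(halftone.shape[1]):
            def err(level):
                halftone[y, x] = level
                return float(((box_blur(halftone) - target) ** 2).sum())
            halftone[y, x] = min(levels, key=err)  # greedy best level
    return halftone
```

Because each pixel update is greedy on a single global objective, a full sweep can never increase the perceived error, which is the monotonicity that makes iterative DMS-style refinement converge.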

Chandu, Kartheek; Stanich, Mikel; Wu, Chai Wah; Trager, Barry

2014-01-01

249

Prospects of dark matter direct search under deep sea water in India

NASA Astrophysics Data System (ADS)

There is compelling evidence from cosmological and astrophysical observations that about one quarter of the energy density of the universe can be attributed to cold dark matter (CDM), whose nature and properties are still unknown. Around the world, a large number of experiments are using different techniques for direct and indirect dark matter detection. Depending on their experimental requirements, these experiments are located underground, under ice, or under sea water. In a country like India, digging underground caverns and long tunnels is not very convenient. The authors therefore look for an alternative solution to this problem, preferring to use deep sea water. In this article, we discuss the pros and cons of using deep sea water in the dark matter search.

Singh, V.; Subrahmanyam, V. S.; Singh, L.; Singh, M. K.; Sharma, V.; Chouhan, N. S.; Jaiswal, M. K.; Soma, A. K.

2013-04-01

250

Assuming the lightest neutralino solely composes the cosmic dark matter, we examine the constraints of the CDMS-II and XENON100 dark matter direct searches on the parameter space of the MSSM Higgs sector. We find that the current CDMS-II/XENON100 limits can exclude some of the parameter space that survives the constraints from the dark matter relic density and various collider experiments. We also find that in the currently allowed parameter space, the charged Higgs boson is hardly accessible at the LHC for an integrated luminosity of 30 fb^{-1}, while the neutral non-SM Higgs bosons (H, A) may be accessible in some allowed region characterized by a large \mu. The future XENON100 (6000 kg-days exposure) will significantly tighten the parameter space in the case of nonobservation of dark matter, further shrinking the likelihood of discovering the non-SM Higgs bosons at the LHC.

Junjie Cao; Ken-ichi Hikasa; Wenyu Wang; Jin Min Yang; Li-Xin Yu

2010-08-14

251

NASA Technical Reports Server (NTRS)

A number of current research directions in the fields of digital signal processing and modern control and estimation theory were studied. Topics such as stability theory, linear prediction and parameter identification, system analysis and implementation, two-dimensional filtering, decentralized control and estimation, image processing, and nonlinear system theory were examined in order to uncover some of the basic similarities and differences in the goals, techniques, and philosophy of the two disciplines. An extensive bibliography is included.

Willsky, A. S.

1976-01-01

252

Icarus: A 2D direct simulation Monte Carlo (DSMC) code for parallel computers. User's manual - V.3.0

Icarus is a 2D Direct Simulation Monte Carlo (DSMC) code which has been optimized for the parallel computing environment. The code is based on the DSMC method of Bird and models flowfields from the free-molecular to the continuum regime in either Cartesian (x, y) or axisymmetric (z, r) coordinates. Computational particles, representing a given number of molecules or atoms, are tracked as they have collisions with other particles or surfaces. Multiple species, internal energy modes (rotation and vibration), chemistry, and ion transport are modelled. A new trace species methodology for collisions and chemistry is used to obtain statistics for small species concentrations. Gas phase chemistry is modelled using steric factors derived from Arrhenius reaction rates. Surface chemistry is modelled with surface reaction probabilities. The electron number density is either a fixed, externally generated field or is determined using a local charge neutrality assumption. Ion chemistry is modelled with electron impact chemistry rates and charge exchange reactions. Coulomb collision cross-sections are used instead of Variable Hard Sphere values for ion-ion interactions. The electrostatic fields can either be externally input or internally generated using a Langmuir-Tonks model. The Icarus software package includes the grid generation, parallel processor decomposition, postprocessing, and restart software. The commercial graphics package, Tecplot, is used for graphics display. The majority of the software packages are written in standard Fortran.
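The collision mechanics underlying DSMC codes of this kind can be sketched with a minimal acceptance-rejection collision step. This is an illustrative Python toy, not Icarus itself: single species, hard-sphere scattering, no spatial grid, chemistry, or surfaces.

```python
import numpy as np

rng = np.random.default_rng(0)

def dsmc_collision_step(v, n_pairs, v_rel_max):
    """One Bird-style collision step on particle velocities v (N, 3):
    pick random pairs, accept each with probability |v_rel| / v_rel_max,
    and scatter accepted pairs isotropically (hard-sphere model),
    conserving momentum and energy exactly."""
    N = len(v)
    for _ in range(n_pairs):
        i, j = rng.choice(N, size=2, replace=False)
        g_vec = v[i] - v[j]
        g = np.linalg.norm(g_vec)
        if rng.random() < g / v_rel_max:          # acceptance-rejection
            cos_t = 2.0 * rng.random() - 1.0      # isotropic direction
            sin_t = np.sqrt(1.0 - cos_t**2)
            phi = 2.0 * np.pi * rng.random()
            g_new = g * np.array([sin_t * np.cos(phi),
                                  sin_t * np.sin(phi), cos_t])
            v_cm = 0.5 * (v[i] + v[j])            # equal masses assumed
            v[i] = v_cm + 0.5 * g_new
            v[j] = v_cm - 0.5 * g_new
    return v
```

Because the post-collision relative speed equals the pre-collision one and the pair's center-of-mass velocity is untouched, total momentum and kinetic energy are invariants of the step.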

Bartel, T.; Plimpton, S.; Johannes, J.; Payne, J.

1996-10-01

253

Directed searches for continuous gravitational waves from spinning neutron stars in binary systems

NASA Astrophysics Data System (ADS)

Gravitational wave detectors such as the Laser Interferometer Gravitational-wave Observatory (LIGO) seek to observe ripples in space predicted by General Relativity. Black holes, neutron stars, supernovae, the Big Bang and other sources can radiate gravitational waves. Original contributions to the LIGO effort are presented in this thesis: feedforward filtering, directed binary neutron star searches for continuous waves, and scientific outreach and education, as well as advances in quantum optical squeezing. Feedforward filtering removes extraneous noise from servo-controlled instruments. Filtering of the last science run, S6, improves LIGO's astrophysical range (+4.14% H1, +3.60% L1: +12% volume) after subtracting noise from auxiliary length control channels. This thesis shows how filtering enhances the scientific sensitivity of LIGO's data set during and after S6. Techniques for non-stationarity and verifying calibration and integrity may apply to Advanced LIGO. Squeezing is planned for future interferometers to exceed the standard quantum limit on noise from electromagnetic vacuum fluctuations; this thesis discusses the integration of a prototype squeezer at LIGO Hanford Observatory and impact on astrophysical sensitivity. Continuous gravitational waves may be emitted by neutron stars in low-mass X-ray binary systems such as Scorpius X-1. The TwoSpect directed binary search is designed to detect these waves. TwoSpect is the most sensitive of 4 methods in simulated data, projecting an upper limit of 4.23e-25 in strain, given a year-long data set at an Advanced LIGO design sensitivity of 4e-24 Hz^(-1/2). TwoSpect is also used on real S6 data to set 95% confidence upper limits (40 Hz to 2040 Hz) on strain from Scorpius X-1. A millisecond pulsar, X-ray transient J1751-305, is similarly considered. Search enhancements for Advanced LIGO are proposed. Advanced LIGO and fellow interferometers should detect gravitational waves in the coming decade.
Methods in this thesis will benefit both the instrumental and analytical sides of observation.

Meadors, Grant David

254

We present the results of a highly parallel Kepler equation solver using the Graphics Processing Unit (GPU) on a commercial nVidia GeForce 280GTX and the “Compute Unified Device Architecture” (CUDA) programming environment. We apply this to evaluate a goodness-of-fit statistic (e.g., χ2) for Doppler observations of stars potentially harboring multiple planetary companions (assuming negligible planet–planet interactions). Given the high-dimensionality of
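Kepler's equation E - e sin E = M has no closed-form solution, so solving it independently for every observation is the natural kernel to parallelize. A vectorized NumPy sketch of the Newton iteration (illustrative only; the paper's CUDA implementation is not reproduced here):

```python
import numpy as np

def solve_kepler(M, e, tol=1e-12, max_iter=50):
    """Vectorized Newton solver for Kepler's equation E - e*sin(E) = M.

    M : mean anomalies (array, radians); e : eccentricity (0 <= e < 1).
    Every element is iterated in lockstep, mirroring the data-parallel
    structure a GPU kernel would exploit."""
    M = np.atleast_1d(np.asarray(M, dtype=float))
    E = M + e * np.sin(M)                  # standard starting guess
    for _ in range(max_iter):
        f = E - e * np.sin(E) - M          # residual of Kepler's equation
        fp = 1.0 - e * np.cos(E)           # derivative, >= 1 - e > 0
        dE = f / fp
        E -= dE
        if np.max(np.abs(dE)) < tol:       # all elements converged
            break
    return E
```

Since the iterations for different anomalies are fully independent, the same loop maps one-to-one onto one GPU thread per observation.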

Eric B. Ford

2009-01-01

255

PARALLEL IMPLEMENTATION OF VLSI HED CIRCUIT SIMULATION

PARALLEL IMPLEMENTATION OF VLSI HED CIRCUIT SIMULATION, INFORMATICA 2/91. Keywords: circuit simulation, direct method, waveform relaxation, parallel algorithm, parallel computer architecture. Jurij Silc, Marjan Spegel, Jozef Stefan Institute, Ljubljana, Slovenia. The importance of circuit

Silc, Jurij

256

NASA Astrophysics Data System (ADS)

An efficient algorithm, termed modified directional gradient descent search, is presented to enhance the directional gradient descent search (DGDS) algorithm and reduce computation. A modified search pattern with an adaptive threshold for early termination is applied to DGDS to avoid meaningless calculation once the search point is good enough. The distribution of best motion vectors is analyzed statistically to determine the modified search pattern. A statistical model based on the characteristics of the block distortion information of the previously coded frame then guides the selection of the early-termination parameters, allowing a trade-off between video quality and computational complexity. Simulation results show the proposed algorithm significantly reduces the motion estimation (ME) cost, with 17.81% fewer average search points and a 20% ME time saving compared to the fast DGDS algorithm implemented in the H.264/AVC JM 18.2 reference software, across different types of sequences, while maintaining a similar bit rate without losing picture quality.
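The early-termination idea can be sketched with a toy block-matching search that stops once the block distortion (sum of absolute differences, SAD) drops below a threshold. This is an illustrative Python sketch with a plain square search window; the actual DGDS search pattern and adaptive threshold model are not reproduced.

```python
import numpy as np

def block_match(ref, cur, bx, by, bs=8, radius=4, threshold=64):
    """Find the motion vector for the bs x bs block of `cur` at (bx, by)
    by scanning displacements within `radius` in `ref`. Early termination:
    return as soon as the best SAD is at or below `threshold`
    (a stand-in for the adaptive 'good enough' test described above)."""
    block = cur[by:by + bs, bx:bx + bs].astype(int)
    best_sad, best_mv = None, (0, 0)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            y, x = by + dy, bx + dx
            if y < 0 or x < 0 or y + bs > ref.shape[0] or x + bs > ref.shape[1]:
                continue  # candidate block falls outside the frame
            sad = int(np.abs(ref[y:y + bs, x:x + bs].astype(int) - block).sum())
            if best_sad is None or sad < best_sad:
                best_sad, best_mv = sad, (dx, dy)
            if best_sad <= threshold:        # early termination
                return best_mv, best_sad
    return best_mv, best_sad
```

The threshold trades quality for speed exactly as in the abstract: a looser threshold skips more candidate positions at the risk of accepting a slightly worse motion vector.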

Chen, Hung-Ming; Chen, Po-Hung; Lin, Cheng-Tso; Liu, Ching-Chung

2012-11-01

257

NASA Astrophysics Data System (ADS)

A microtubule (MT) is a hollow tube of approximately 25 nm diameter. The two ends of the tube are dissimilar and are designated as ‘plus’ and ‘minus’ ends. Motivated by the collective push and pull exerted by a bundle of MTs during chromosome segregation in a living cell, we have developed here a much simplified theoretical model of a bundle of parallel dynamic MTs. The plus-end of all the MTs in the bundle is permanently attached to a movable ‘wall’ by a device whose detailed structure is not treated explicitly in our model. The only requirement is that the device allows polymerization and depolymerization of each MT at the plus-end. In spite of the absence of external force and direct lateral interactions between the MTs, the group of polymerizing MTs attached to the wall create a load force against the group of depolymerizing MTs and vice versa; the load against a group is shared equally by the members of that group. Such indirect interactions among the MTs give rise to the rich variety of possible states of collective dynamics that we have identified by computer simulations of the model in different parameter regimes. The bi-directional motion of the cargo, caused by the load-dependence of the polymerization kinetics, is a ‘proof-of-principle’ that the bi-directional motion of chromosomes before cell division does not necessarily need active participation of motor proteins.

Ghanti, Dipanwita; Chowdhury, Debashish

2015-01-01

258

Stable computation of search directions for near-degenerate linear programming problems

In this paper, we examine stability issues that arise when computing search directions ({delta}x, {delta}y, {delta}s) for a primal-dual path-following interior point method for linear programming. The dual step {delta}y can be obtained by solving a weighted least-squares problem for which the weight matrix becomes extremely ill-conditioned near the boundary of the feasible region. Hough and Vavasis proposed using a type of complete orthogonal decomposition (the COD algorithm) to solve such a problem and presented stability results. The work presented here addresses the stable computation of the primal step {delta}x and the change in the dual slacks {delta}s. These directions can be obtained in a straightforward manner, but near-degeneracy in the linear programming instance introduces ill-conditioning which can cause numerical problems in this approach. Therefore, we propose a new method of computing {delta}x and {delta}s. More specifically, this paper describes an orthogonal projection algorithm that extends the COD method. Unlike other algorithms, this method is stable for interior point methods without assuming nondegeneracy in the linear programming instance. Thus, it is more general than other algorithms on near-degenerate problems.
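For context, the directions in question solve the Newton system of the perturbed KKT conditions. A minimal normal-equations sketch (illustrative Python, assuming primal and dual feasibility; not the paper's stabilized COD-based method) makes the source of the ill-conditioning visible: the scaling D² = XS⁻¹ blows up as iterates approach the boundary.

```python
import numpy as np

def search_directions(A, x, s, sigma=0.1):
    """Primal-dual path-following Newton directions for an LP in standard
    form (min c'x s.t. Ax = b, x >= 0), via the normal equations
    A D^2 A' dy = rhs with D^2 = X S^{-1}. Sketch only: this is exactly
    the straightforward formulation whose conditioning degrades near
    degeneracy, motivating the orthogonal-decomposition approach above."""
    n = len(x)
    mu = (x @ s) / n                      # duality measure
    r_c = x * s - sigma * mu              # perturbed complementarity residual
    d2 = x / s                            # diagonal of D^2; huge near boundary
    dy = np.linalg.solve((A * d2) @ A.T, A @ (r_c / s))
    ds = -A.T @ dy                        # from A' dy + ds = 0 (dual feasible)
    dx = -(r_c + x * ds) / s              # from S dx + X ds = -r_c
    return dx, dy, ds
```

The three returned directions satisfy A dx = 0, Aᵀ dy + ds = 0, and S dx + X ds = -(XSe - σμe), i.e. the feasible-iterate Newton system for the central path.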

Hough, P.D.

1997-03-01

259

We report the results of a direct search for the $^{229}$Th ($I^{p} = 3/2^+\\leftarrow 5/2^+$) nuclear isomeric transition, performed by exposing $^{229}$Th-doped LiSrAlF$_6$ crystals to tunable vacuum-ultraviolet synchrotron radiation and observing any resulting fluorescence. We also use existing nuclear physics data to establish a range of possible transition strengths for the isomeric transition. We find no evidence for the thorium nuclear transition between $7.3 {eV}$ and $8.8 {eV}$ with transition lifetime $(1-2){s} \\lesssim \\tau \\lesssim (2000-5600){s}$. This measurement excludes roughly half of the favored transition search area and can be used to direct future searches.

Justin Jeet; Christian Schneider; Scott T. Sullivan; Wade G. Rellergert; Saed Mirzadeh; A. Cassanho; H. P. Jenssen; Eugene V. Tkalya; Eric R. Hudson

2015-02-18

260

ERIC Educational Resources Information Center

Using Item Response Theory (IRT) and Confirmatory Factor Analysis (CFA), the goal of this study was to select a reduced pool of items from the French Canadian version of the Self-Directed Search--Activities Section (Holland, Fritzsche, & Powell, 1994). Two studies were conducted. Results of Study 1, involving 727 French Canadian students, showed…

Poitras, Sarah-Caroline; Guay, Frederic; Ratelle, Catherine F.

2012-01-01

261

GOAL DIRECTED VISUAL SEARCH BASED ON COLOR CUES: CO-OPERATIVE EFFECTS OF TOP-DOWN & BOTTOM-UP

Addresses goal-directed visual search based on color cues as a co-operative effect of top-down (cognitive cue) and bottom-up (low-level feature conspicuity) processes, building on saliency-map models in which a saliency map reflects the relative saliency of objects from their surroundings.

Canosa, Roxanne

262

The application of the Luus-Jaakola direct search method to the optimization of stand-alone hybrid energy systems consisting of wind turbine generators (WTGs), photovoltaic (PV) modules, batteries, and an auxiliary generator was examined. The loads for these systems were for agricultural applications, with the optimization conducted on the basis of minimum capital, operating, and maintenance costs. Five systems were considered: two

Bernhard Michael Jatzeck

2000-01-01

263

Comparing the Chinese Career Key and the Self-Directed Search with High School Girls in Hong Kong

ERIC Educational Resources Information Center

A career interest inventory, the Chinese Career Key (CCK) adapted from the Career Key based on Holland's theory of vocational choice, was studied. The purpose of the study was to further examine the psychometric qualities and user satisfaction of the CCK by comparing it to the Self-Directed Search. Students at a girls' public high school (N = 130)…

Ting, Siu-Man Raymond

2007-01-01

264

The first search for sub-eV scalar fields via four-wave mixing at a quasi-parallel laser collider

A search for sub-eV scalar fields coupling to two photons has been performed via four-wave mixing at a quasi-parallel laser collider for the first time. The experiment demonstrates the novel approach to search for resonantly produced sub-eV scalar fields by combining two-color laser fields in the vacuum. The aim of this paper is to provide the concrete experimental setup and the analysis method based on specific combinations of polarization states between incoming and outgoing photons, which is extendable to higher intensity laser systems operated at high repetition rates. No significant signal of four-wave mixing was observed by combining a $0.2\\mu$J/0.75ns pulse laser and a 2mW CW laser on the same optical axis. Based on the prescription developed for this particular experimental approach, we obtained the upper limit at a confidence level of 95% on the coupling-mass relation.

Kensuke Homma; Takashi Hasebe; Kazuki Kume

2014-05-16

265

Dark matter direct search rates in simulations of the Milky Way and Sagittarius stream

We analyze self-consistent N-body simulations of the Milky Way disk and the ongoing disruption of the Sagittarius dwarf satellite to study the effect of Sagittarius tidal debris on dark matter detection experiments. In agreement with significant previous work, we reiterate that the standard halo model is insufficient to describe the non-Maxwellian velocity distribution of the Milky Way halo in our equilibrium halo-only and halo/galaxy models, and offer suggestions for correcting for this discrepancy. More importantly, we emphasize that the dark matter component of the leading tidal arm of the Sagittarius dwarf is significantly more extended than the stellar component of the arm, and also that the dark matter and stellar streams are not necessarily coaxial and may be offset by several kpc at the point at which they impact the Galactic disk. This suggests that the dark matter component of the Sagittarius debris is likely to have a non-negligible influence on dark matter detection experiments even when the stellar debris is centered several kpc from the solar neighborhood. Relative to models without an infalling Sagittarius dwarf, the Sagittarius dark matter debris in our models induces an energy-dependent enhancement of direct search event rates of as much as {approx} 20-45%, an energy-dependent reduction in the amplitude of the annual modulation of the event rate by as much as a factor of two, a shift in the phase of the annual modulation by as much as {approx} 20 days, and a shift in the recoil energy at which the modulation reverses phase. These influences of Sagittarius are of general interest in the interpretation of dark matter searches, but may be particularly important in the case of relatively light (m{sub {chi}} {approx}< 20 GeV/c{sup 2}) dark matter because the Sagittarius stream impacts the solar system at high speed compared to the primary halo dark matter.

Purcell, Chris W.; Zentner, Andrew R.; Wang, Mei-Yu, E-mail: cpurcell@pitt.edu, E-mail: zentner@pitt.edu, E-mail: mew56@pitt.edu [Department of Physics and Astronomy and Pittsburgh Particle physics, Astrophysics and Cosmology Center (PITT PACC), University of Pittsburgh, Pittsburgh 15260 (United States)

2012-08-01

266

NASA Astrophysics Data System (ADS)

The problem at hand is developing a controller design methodology that is generally applicable to autonomous systems with fairly accurate models. The controller design process has two parts: synthesis and analysis. Over the years, many synthesis and analysis methods have been proposed. An optimal method for all applications has not yet been found. Recent advances in computer technology have made computational methods more attractive and practical. The proposed method is an iterative computational method that automatically generates non-linear controllers with specified global performance. This dissertation describes this method which consists of using an analysis tool, continued propagation cell mapping (CPCM), as feedback to the synthesis tool, best estimate directed search (BEDS). Optimality in the design can be achieved with respect to time, energy, and/or robustness depending on the performance measure used. BEDS is based on a novel search concept: globally directing a random search. BEDS has the best of two approaches: gradient (or directed) search and random search. It possesses the convergence speed of a gradient search and the convergence robustness of a random search. The coefficients of the best controller at the time direct the search process until either a better controller is found or the search is terminated. CPCM is a modification of simple cell mapping (SCM). CPCM maintains the simplicity of SCM but provides accuracy near that of a point map (PM). CPCM evaluates the controller's complete and global performance efficiently and with easily tunable accuracy. This CPCM evaluation guarantees monotonic progress in the synthesis process. The method is successfully applied to the design of a TSK-type fuzzy logic (FL) controller and a Sliding Mode-type controller for the uncertain non-linear system of an inverted pendulum on a cart for large pole angles (+/-86 degrees). 
The resulting controller's performance compares favorably to other established methods designed with dynamic programming (DP) and genetic algorithms (GA). When CPCM is used as feedback to BEDS, the resulting design method quickly and automatically generates non-linear controllers with good global performance and without much a priori information about the desired control actions.

Rizk, Charbel George

1997-11-01

267

Search for Direct CP Violation in Decays of Hyperons. Y. C. Chen, R. A. Burnstein, et al. The E871 (HyperCP) experiment at FNAL is searching for direct CP violation in decays of Ξ⁻ hyperons ... more data which will improve the sensitivity to ~1 × 10⁻⁴. 1 Introduction. CP violation has

Fermilab Experiment E871

268

$H \\to \\gamma\\gamma$ search and direct photon pair production differential cross section

At a hadron collider, diphoton ({gamma}{gamma}) production allows detailed studies of the Standard Model (SM), as well as searches for new phenomena, such as new heavy resonances, extra spatial dimensions or cascade decays of heavy new particles. Within the SM, continuum {gamma}{gamma}+X production is characterized by a steeply-falling {gamma}{gamma} mass spectrum, on top of which a heavy resonance decaying into {gamma}{gamma} can potentially be observed. In particular, this is considered one of the most promising discovery channels for a SM Higgs boson at the LHC, despite the small branching ratio of BR (H {yields} {gamma}{gamma}) {approx} 0.2% for 110 < M{sub Higgs} < 140 GeV. At the Tevatron, the dominant SM Higgs boson production mechanism is gluon fusion, followed by associated production with a W or Z boson, and vector boson fusion. While the SM Higgs production rate at the Tevatron is not sufficient to observe it in the {gamma}{gamma} mode, the Hgg and H{gamma}{gamma} couplings, being loop-mediated, are particularly sensitive to new physics effects. Furthermore, in some models beyond the SM, for instance, a fermiophobic Higgs with no couplings to fermions, the BR (H {yields} {gamma}{gamma}) can be enhanced significantly relative to the SM prediction, while it has SM-like production cross sections, except that gluon fusion is absent. In this thesis, we present a search for a light Higgs boson in the diphoton final state using 4.2 {+-} 0.3 fb{sup -1} of the D0 Run II data, collected at the Fermilab Tevatron collider from April 2002 to December 2008. Good agreement between the data and the SM background prediction is observed. Since there is no evidence for new physics, we set 95% C.L. limits on the production cross section times the branching ratio ({sigma} x BR(H {yields} {gamma}{gamma})) relative to the SM-like Higgs prediction for different assumed Higgs masses.
The observed limits ({sigma}(limit)/{sigma}(SM)) range from 11.9 to 35.2 for Higgs masses from 100 to 150 GeV, while the expected limits range from 17.5 to 32.0. This search is also interpreted in the context of the particular fermiophobic Higgs model. The corresponding results have reached the same sensitivity as a single LEP experiment, setting a lower limit on the fermiophobic Higgs of M{sub h{sub f}} > 102.5 GeV (M{sub h{sub f}} > 107.5 GeV expected). We are slightly below the combined LEP limit (M{sub h{sub f}} > 109.7 GeV). We also provide access to the M{sub h{sub f}} > 125 GeV region which was inaccessible at LEP. During the study, we found the major and irreducible background direct {gamma}{gamma} (DPP) production is not well modelled by the current theoretical predictions: RESBOS, DIPHOX or PYTHIA. There is {approx}20% theoretical uncertainty for the predicted values. Thus, for our Higgs search, we use the side-band fitting method to estimate the DPP contribution directly from the data events. Furthermore, DPP production is also a significant background in searches for new phenomena, such as new heavy resonances, extra spatial dimensions, or cascade decays of heavy new particles. Thus, precise measurements of the DPP cross sections for various kinematic variables and their theoretical understanding are extremely important for future Higgs and new phenomena searches. In this thesis, we also present a precise measurement of the DPP single differential cross sections as a function of the diphoton mass, the transverse momentum of the diphoton system, the azimuthal angle between the photons, and the polar scattering angle of the photons, as well as the double differential cross sections considering the last three kinematic variables in three diphoton mass bins, using 4.2 fb{sup -1} data. These results are the first of their kind at D0 Run II, and in fact the double differential measurements are the first of their kind at the Tevatron.
The results are compared with different perturbative QCD predictions and event generators.

Bu, Xuebing; /Hefei, CUST

2010-06-01

269

A parallel gradient distribution algorithm for large-scale optimization

We present a Parallel Gradient Distribution method for the solution of the unconstrained optimization problem min f(x), x {element_of} R{sup n}, where f : R{sup n} {yields} R has continuous first and second partial derivatives and n is typically very large (order of thousands). Given p processors of a parallel computing system, the proposed algorithm is characterized by a parallel phase which produces p points, exploiting the portions of the gradient of the objective function assigned to each processor. Then a coordination phase follows, which determines a new iterate by solving a minimization problem in a p + 1 dimensional space, on the basis of the previous iterate and the p points generated by the parallel phase. The parallel and coordination phases are implemented using a limited memory BFGS approach for determining the search direction, and a line search procedure based on the Wolfe sufficient decrease conditions. Global and superlinear convergence results are established in the case of uniformly convex problems. The proposed parallel algorithm is compared, in terms of numerical performance, with the partitioned Quasi-Newton method of Griewank and Toint, the Block Truncated Newton method of Nash and Sofer, and the Conjugate Gradient method of Shanno and Phua, using a set of large-scale structured optimization problems. Furthermore, the influence of the number of available processors is investigated through extensive computational experiments carried out on distributed memory systems (NCUBE and FUJITSU) and on a network of workstations (DEC Alpha) using PVM.
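A much-simplified sketch of the parallel/coordination structure follows (illustrative Python; the paper's limited-memory BFGS directions and Wolfe line search are replaced here by fixed-step partial-gradient moves and a best-of-candidates coordination rule):

```python
import numpy as np

def pgd_step(f, grad, x, p, alpha=0.1):
    """One iteration of a toy Parallel Gradient Distribution step.

    Parallel phase: each of p 'processors' produces a candidate point
    using only its assigned slice of the gradient. Coordination phase
    (simplified): choose the best candidate, with a full-gradient step
    as an extra candidate standing in for the (p+1)-dimensional
    coordination subproblem of the actual method."""
    n = len(x)
    g = grad(x)
    blocks = np.array_split(np.arange(n), p)
    candidates = []
    for idx in blocks:              # would run concurrently on p processors
        d = np.zeros(n)
        d[idx] = -g[idx]            # step along this processor's gradient slice
        candidates.append(x + alpha * d)
    candidates.append(x - alpha * g)  # coordination fallback: full step
    return min(candidates, key=f)     # keep the lowest-objective point
```

On a uniformly convex quadratic this monotonically decreases f, echoing (in toy form) the descent guarantee that the real method obtains from its Wolfe line search.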

Conforti, M.; Musmanno, R.

1994-12-31

270

Direct Search for Right-handed Neutrinos and Neutrinoless Double Beta Decay

We consider an extension of the Standard Model by two right-handed neutrinos, especially with masses lighter than the charged $K$ meson. This simple model can realize the seesaw mechanism for neutrino masses and also the baryogenesis by flavor oscillations of right-handed neutrinos. We summarize the constraints on right-handed neutrinos from direct searches as well as the big bang nucleosynthesis. It is then found that the possible range for the quasi-degenerate mass of right-handed neutrinos is $M_N \geq 163 \MeV$ for normal hierarchy of neutrino masses, while $M_N = 188 \text{--} 269 \MeV$ and $M_N \geq 285 \MeV$ for the inverted hierarchy case. Furthermore, we find in the latter case that the possible value of the Majorana phase is restricted for $M_N = 188 \text{--} 350 \MeV$, which leads to the fact that the rate of neutrinoless double beta decay is also limited.

Takehiko Asaka; Shintaro Eijima

2013-08-16

271

A Bayesian view of the current status of dark matter direct searches

Bayesian statistical methods offer a simple and consistent framework for incorporating uncertainties into a multi-parameter inference problem. In this work we apply these methods to a selection of current direct dark matter searches. We consider the simplest scenario of spin-independent elastic WIMP scattering, and infer the WIMP mass and cross-section from the experimental data with the essential systematic uncertainties folded into the analysis. We find that when uncertainties in the scintillation efficiency of XENON100 have been accounted for, the resulting exclusion limit is not sufficiently constraining to rule out the CoGeNT preferred parameter region, contrary to previous claims. In the same vein, we also investigate the impact of astrophysical uncertainties on the preferred WIMP parameters. We find that within the class of smooth and isotropic WIMP velocity distributions, it is difficult to reconcile the DAMA and the CoGeNT preferred regions by tweaking the astrophysics parameters alone. If we demand compatibility between these experiments, then the inference process naturally concludes that a high value for the sodium quenching factor for DAMA is preferred.

Arina, Chiara; Wong, Yvonne Y.Y. [Institut für Theoretische Teilchenphysik und Kosmologie, RWTH Aachen, 52056 Aachen (Germany); Hamann, Jan, E-mail: chiara.arina@physik.rwth-aachen.de, E-mail: hamann@phys.au.dk, E-mail: yvonne.wong@physik.rwth-aachen.de [Department of Physics and Astronomy, University of Aarhus, 8000 Aarhus C (Denmark)

2011-09-01

272

Banks of templates for directed searches of gravitational waves from spinning neutron stars

We construct efficient banks of templates suitable for directed searches of almost monochromatic gravitational waves originating from spinning neutron stars in our Galaxy in data being collected by currently operating interferometric detectors. We thus assume that the position of the gravitational-wave source in the sky is known, but we do not assume that the wave's frequency and its derivatives are a priori known. In the construction we employ a simplified model of the signal with constant amplitude and a phase which is a polynomial function of time. All our template banks enable usage of the fast Fourier transform algorithm in the computation of the maximum-likelihood F-statistic for nodes of the grids defining the bank. We study and employ the dependence of the grid's construction on the choice of the position of the observational interval with respect to the origin of the time axis. We also study the usage of fast Fourier transform algorithms with nonstandard frequency resolutions achieved by zero padding or folding the data. In the case of the gravitational-wave signal with one spin-down parameter included, we have found grids with covering thicknesses which are only 0.1-16% larger than the thickness of the optimal 2-dimensional hexagonal covering.
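The zero-padding trick mentioned above can be illustrated on a toy signal: padding a time series before the transform interpolates the spectrum onto a finer frequency grid. This is a generic illustration, not the paper's pipeline; it uses a plain O(n^2) DFT and made-up numbers, where a real search would use an FFT.

```python
# Toy illustration of nonstandard frequency resolution via zero padding:
# a plain DFT of a cosine whose frequency falls between bins.
import cmath, math

def dft(x):
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * math.pi * k * t / n) for t in range(n))
            for k in range(n)]

n = 32
# True frequency is 5.3 cycles over the record -- between bins 5 and 6.
sig = [math.cos(2 * math.pi * 5.3 * t / n) for t in range(n)]
coarse = [abs(c) for c in dft(sig)]
fine = [abs(c) for c in dft(sig + [0.0] * (3 * n))]   # 4x zero padding
peak_coarse = max(range(n // 2), key=coarse.__getitem__)   # coarse grid bin
peak_fine = max(range(2 * n), key=fine.__getitem__)        # 4x finer grid bin
```

With no padding the peak lands on bin 5 (frequency estimate 5.0); with 4x padding it lands on bin 21 of the finer grid (estimate 21/4 = 5.25), closer to the true 5.3.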

Pisarski, Andrzej; Jaranowski, Piotr; Pietka, Maciej [Faculty of Physics, University of Bialystok, Lipowa 41, 15-424 Bialystok (Poland)

2011-02-15

273

Banks of templates for directed searches of gravitational waves from spinning neutron stars

We construct efficient banks of templates suitable for directed searches of almost monochromatic gravitational waves originating from spinning neutron stars in our Galaxy in data being collected by currently operating interferometric detectors. We thus assume that the position of the gravitational-wave source in the sky is known, but we do not assume that the wave's frequency and its derivatives are a priori known. In the construction we employ a simplified model of the signal with constant amplitude and a phase which is a polynomial function of time. All our template banks enable usage of the fast Fourier transform algorithm in the computation of the maximum-likelihood F-statistic for nodes of the grids defining the bank. We study and employ the dependence of the grid's construction on the choice of the position of the observational interval with respect to the origin of the time axis. We also study the usage of fast Fourier transform algorithms with non-standard frequency resolutions achieved by zero padding or folding the data. In the case of the gravitational-wave signal with one spin-down parameter included, we have found grids with covering thicknesses which are only 0.1%--16% larger than the thickness of the optimal two-dimensional hexagonal covering.

Andrzej Pisarski; Piotr Jaranowski; Maciej Pietka

2010-10-14

274

Direct dark matter searches—Test of the Big Bounce Cosmology

NASA Astrophysics Data System (ADS)

We consider the possibility of using the dark matter particle's mass and its interaction cross section as a smoking-gun signal of the existence of a Big Bounce at the early stage in the evolution of our currently observed universe. A study of dark matter production in the pre-bounce contraction and the post-bounce expansion epochs of this universe reveals a new venue for achieving the observed relic abundance of our present universe. Specifically, it predicts a characteristic relation between the dark matter mass and interaction cross section: a factor of 1/2 in the thermally averaged cross section, as compared to non-thermal production in standard cosmology, is needed to create enough dark matter particles to satisfy the currently observed relic abundance, because dark matter is created during the pre-bounce contraction in addition to the post-bounce expansion. As the production rate is lower than the Hubble expansion rate, information about the evolution of the bounce universe is preserved. Therefore, once the values of the dark matter mass and interaction cross section are obtained by direct detection in laboratories, this alternative route becomes a signature prediction of the bounce universe scenario. This leads us to consider a scalar dark matter candidate which, if light, has important implications for dark matter searches.

Cheung, Yeuk-Kwan E.; Vergados, J. D.

2015-02-01

275

Despite decades of research, the exact pathogenic mechanisms underlying acute mountain sickness (AMS) are still poorly understood. This fact frustrates the search for novel pharmacological prophylaxis for AMS. The prevailing view is that AMS results from an insufficient physiological response to hypoxia and that prophylaxis should aim at stimulating the response. Starting off from the opposite hypothesis that AMS may be caused by an initial excessive response to hypoxia, we suggest that directly or indirectly blunting specific parts of the response might provide promising research alternatives. This reasoning is based on the observations that (i) humans, once acclimatized, can climb Mt Everest experiencing arterial partial oxygen pressures (PaO2) as low as 25 mmHg without AMS symptoms; (ii) paradoxically, AMS usually develops at much higher PaO2 levels; and (iii) several biomarkers, suggesting initial activation of specific pathways at such PaO2, are correlated with AMS. Apart from looking for substances that stimulate certain hypoxia-triggered effects, such as the ventilatory response to hypoxia, we suggest also investigating pharmacological means aimed at blunting certain other specific hypoxia-activated pathways, or stimulating their agonists, in the quest for better pharmacological prophylaxis for AMS. PMID:25778288

Lu, H; Wang, R; Xiong, J; Xie, H; Kayser, B; Jia, Z P

2015-05-01

276

We consider the solution of nonlinear programs in the case where derivatives of the objective function and nonlinear constraints are unavailable. To solve such problems, we propose an adaptation of a method due to Conn, Gould, Sartenaer, and Toint that proceeds by approximately minimizing a succession of linearly constrained augmented Lagrangians. Our modification is to use a derivative-free generating set direct search algorithm to solve the linearly constrained subproblems. The stopping criterion proposed by Conn, Gould, Sartenaer and Toint for the approximate solution of the subproblems requires explicit knowledge of derivatives. Such information is presumed absent in the generating set search method we employ. Instead, we show that stationarity results for linearly constrained generating set search methods provide a derivative-free stopping criterion, based on a step-length control parameter, that is sufficient to preserve the convergence properties of the original augmented Lagrangian algorithm.
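The key idea above — that the step-length control parameter of a generating set search can serve as a derivative-free stationarity test — can be sketched with the simplest member of that family, compass search. This is a hedged illustration under assumptions: unconstrained, coordinate directions only, and a hypothetical function name; the paper's method handles linear constraints and a richer generator set.

```python
# Hedged sketch of a compass (generating set) search in which the
# step-length parameter doubles as the stopping criterion: a small step
# that admits no descent signals approximate stationarity without any
# derivative information.

def compass_search(f, x, step=1.0, tol=1e-6):
    n = len(x)
    fx = f(x)
    while step > tol:                  # step-length control as stopping test
        improved = False
        for i in range(n):
            for s in (+1.0, -1.0):     # generator directions +/- e_i
                y = list(x)
                y[i] += s * step
                fy = f(y)
                if fy < fx:
                    x, fx, improved = y, fy, True
                    break
            if improved:
                break
        if not improved:
            step *= 0.5                # contract when no direction descends
    return x, fx

# Toy usage: a smooth convex function with minimizer (1, -2).
x, fx = compass_search(lambda z: (z[0] - 1.0) ** 2 + (z[1] + 2.0) ** 2,
                       [0.0, 0.0])
```

In the augmented Lagrangian framework of the entry above, an inner solve would be declared "approximately stationary" once `step` falls below a tolerance tied to the outer iteration, replacing the gradient-based test of Conn, Gould, Sartenaer, and Toint.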

Lewis, Robert Michael (College of William and Mary, Williamsburg, VA); Torczon, Virginia Joanne (College of William and Mary, Williamsburg, VA); Kolda, Tamara Gibson

2006-08-01

277

NASA Astrophysics Data System (ADS)

In a recent paper, we have published a new algorithm, designated ‘iCycle’, for fully automated multi-criterial optimization of beam angles and intensity profiles. In this study, we have used this algorithm to investigate the relationship between plan quality and the extent of the beam direction search space, i.e. the set of candidate beam directions that may be selected for generating an optimal plan. For a group of ten prostate cancer patients, optimal IMRT plans were made for stereotactic body radiation therapy (SBRT), mimicking high dose rate brachytherapy dosimetry. Plans were generated for five different beam direction input sets: a coplanar (CP) set and four non-coplanar (NCP) sets. For CP treatments, the search space consisted of 72 orientations (5° separations). The NCP CyberKnife (CK) space contained all directions available in the robotic CK treatment unit. The fully non-coplanar (F-NCP) set facilitated the highest possible degree of freedom in selecting optimal directions. CK+ and CK++ were subsets of F-NCP to investigate some aspects of the CK space. For each input set, plans were generated with up to 30 selected beam directions. Generated plans were clinically acceptable, according to an assessment of our clinicians. Convergence in plan quality occurred only after around 20 included beams. For individual patients, variations in PTV dose delivery between the five generated plans were minimal, as aimed for (average spread in V95: 0.4%). This allowed plan comparisons based on organ at risk (OAR) doses, with the rectum considered most important. Plans generated with the NCP search spaces had improved OAR sparing compared to the CP search space, especially for the rectum. OAR sparing was best with the F-NCP, with reductions in rectum DMean, V40Gy, V60Gy and D2% compared to CP of 25%, 35%, 37% and 8%, respectively. 
Reduced rectum sparing with the CK search space compared to F-NCP could be largely compensated by expanding CK with beams with relatively large direction components along the superior-inferior axis (CK++). Addition of posterior beams (CK++ → F-NCP) did not lead to further improvements in OAR sparing. Plans with 25 beams clearly performed better than 11-beam plans. For CP plans, an increase from 11 to 25 beams resulted in reductions in rectum DMean, V40Gy, V60Gy and D2% of 39%, 57%, 64% and 13%, respectively.

Rossi, Linda; Breedveld, Sebastiaan; Heijmen, Ben J. M.; Voet, Peter W. J.; Lanconelli, Nico; Aluwini, Shafak

2012-09-01

278

Totally parallel multilevel algorithms

NASA Technical Reports Server (NTRS)

Four totally parallel algorithms for the solution of a sparse linear system have common characteristics which become quite apparent when they are implemented on a highly parallel hypercube such as the CM2. These four algorithms are Parallel Superconvergent Multigrid (PSMG) of Frederickson and McBryan, Robust Multigrid (RMG) of Hackbusch, the FFT-based Spectral Algorithm, and Parallel Cyclic Reduction. In fact, all four can be formulated as particular cases of the same totally parallel multilevel algorithm, which is referred to as TPMA. In certain cases the spectral radius of TPMA is zero, and it is recognized to be a direct algorithm. In many other cases the spectral radius, although not zero, is small enough that a single iteration per timestep keeps the local error within the required tolerance.

Frederickson, Paul O.

1988-01-01

279

Introduction Coping skills training interventions have been found to be efficacious in helping both patients and their partners manage the physical and emotional challenges they face following a cancer diagnosis. However, many of these interventions are costly and not sustainable. To overcome these issues, a self-directed format is increasingly used. The efficacy of self-directed interventions for patients has been supported; however, no study has reported on the outcomes for their partners. This study will test the efficacy of Coping-Together—a multimedia, self-directed, coping skills training intervention for patients with cancer and their partners. Methods and analysis The proposed three-group, parallel, randomised controlled trial will recruit patients diagnosed in the past 4 months with breast, prostate, colorectal cancer or melanoma through their treating clinician. Patients and their partners will be randomised to (1) a minimal ethical care (MEC) condition—selected Cancer Council New South Wales booklets and a brochure for the Cancer Council Helpline, (2) Coping-Together generic—MEC materials, the six Coping-Together booklets and DVD, the Cancer Council Queensland relaxation audio CD and login to the Coping-Together website or (3) Coping-Together tailored—MEC materials, the Coping-Together DVD, the login to the website and only those Coping-Together booklet sections that pertain to their direct concerns. Anxiety (primary outcome), distress, depression, dyadic adjustment, quality of life, illness or caregiving appraisal, self-efficacy and dyadic and individual coping will be assessed before receiving the study material (ie, baseline) and again at 3, 6 and 12 months postbaseline. Intention-to-treat and per protocol analysis will be conducted. Ethics and dissemination This study has been approved by the relevant local area health and University ethics committees. 
Study findings will be disseminated not only through peer-reviewed publications and conference presentations but also through educational outreach visits, publication of lay research summaries in consumer newsletters and publications targeting clinicians. Trial registration Australian New Zealand Clinical Trials Registry ACTRN12613000491763 (03/05/2013) PMID:23883890

Lambert, Sylvie D; Girgis, Afaf; McElduff, Patrick; Turner, Jane; Levesque, Janelle V; Kayser, Karen; Mihalopoulos, Cathrine; Shih, Sophy T F; Barker, Daniel

2013-01-01

280

The usual nuclear recoil energy reconstruction employed by liquid xenon dark matter search experiments relies only on the primary scintillation photon signal. Energy reconstruction based on both the photon and electron signals yields a more accurate representation of search results. For a dark matter particle mass m~10 GeV, a nuclear recoil from a scattering event is more likely to be observed in the lower left corner of the typical search box, rather than near the nuclear recoil calibration centroid. In this region of the search box, the actual nuclear recoil energies are smaller than the usual energy scale suggests, by about a factor of 2. Recent search results from the XENON100 experiment are discussed in light of these considerations.

Peter Sorensen

2012-10-18

281

NASA Technical Reports Server (NTRS)

This article provides a broad introduction to the subject of parallel rendering, encompassing both hardware and software systems. The focus is on the underlying concepts and the issues which arise in the design of parallel rendering algorithms and systems. We examine the different types of parallelism and how they can be applied in rendering applications. Concepts from parallel computing, such as data decomposition, task granularity, scalability, and load balancing, are considered in relation to the rendering problem. We also explore concepts from computer graphics, such as coherence and projection, which have a significant impact on the structure of parallel rendering algorithms. Our survey covers a number of practical considerations as well, including the choice of architectural platform, communication and memory requirements, and the problem of image assembly and display. We illustrate the discussion with numerous examples from the parallel rendering literature, representing most of the principal rendering methods currently used in computer graphics.

Crockett, Thomas W.

1995-01-01

282

Complementarity of direct and indirect searches in the pMSSM

We explore the pMSSM parameter space in view of the constraints from SUSY and monojet searches at the LHC, from Higgs data and flavour physics observables, as well as from dark matter searches. We show that whilst the simplest SUSY scenarios are already ruled out, there are still many possibilities left over in the pMSSM. We discuss the complementarity between different searches and consistency checks which are essential in probing the pMSSM and will be even more important in the near future with the next round of data becoming available.

F. Mahmoudi; A. Arbey

2014-11-08

283

NSDL National Science Digital Library

Content prepared for the Supercomputing 2002 session on "Using Clustering Technologies in the Classroom". Contains a series of exercises for teaching parallel computing concepts through kinesthetic activities.

Paul Gray

284

Global steering of single gimballed control moment gyroscopes using a directed search

A guided depth-first search that manages null motion about torque-producing trajectories calculated with a singularity-robust inverse is proposed as a practical feedforward steering law that can globally avoid (or minimize the impact of) singular states in minimally-redundant systems of single gimballed control moment gyroscopes. Cost and heuristic functions are defined to guide the search procedure in improving gimbal trajectories. On-orbit

Joseph A. Paradiso

1992-01-01

285

Searches for the Minimal Supersymmetric Standard Model (MSSM) Higgs bosons are among the most promising channels for exploring new physics at the Tevatron. In particular, interesting regions of large $\tan \beta$ and small $m_A$ are probed by searches for heavy neutral Higgs bosons, A and H, when they decay to $\tau^+ \tau^-$ and $b\bar{b}$. At the same time, direct searches for dark matter, such as CDMS, attempt to observe neutralino dark matter particles scattering elastically off nuclei. This can occur through t-channel Higgs exchange, which has a large cross section in the case of large $\tan \beta$ and small $m_A$. As a result, there is a natural interplay between the heavy neutral Higgs searches at the Tevatron and the region of parameter space explored by CDMS. We show that if the lightest neutralino makes up the dark matter of our universe, current limits from CDMS strongly constrain the prospects of heavy neutral MSSM Higgs discovery at the Tevatron (at 3 sigma with 4 fb^-1 per experiment) unless $|\mu| \gtrsim 400$ GeV. The limits of CDMS projected for 2007 will increase this constraint to $|\mu| \gtrsim 800$ GeV. On the other hand, if CDMS does observe neutralino dark matter in the near future, it will make the discovery of heavy, neutral MSSM Higgs bosons far more likely at the Tevatron.

Marcela Carena; Dan Hooper; Peter Skands

2006-08-22

286

NSDL National Science Digital Library

An introduction to optimisation techniques that may improve parallel performance and scaling on HECToR. It assumes that the reader has some experience of parallel programming, including basic MPI and OpenMP. Scaling is a measurement of the ability of a parallel code to use increasing numbers of cores efficiently. A scalable application is one that, when the number of processors is increased, performs better by a factor which justifies the additional resource employed. Making a parallel application scale to many thousands of processes requires not only careful attention to the communication, data and work distribution, but also to the choice of algorithms. Since the choice of algorithm is too broad a subject, and too particular to the application domain, to cover in this brief guide, we concentrate on general good practices towards parallel optimisation on HECToR.
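The scaling measurement described above is conventionally quantified as speedup S(p) = T1/Tp and parallel efficiency E(p) = S(p)/p. A minimal illustration, with made-up timings (the function names and numbers are assumptions, not from the guide):

```python
# Strong-scaling speedup and efficiency for a set of hypothetical timings.

def speedup(t1, tp):
    """Ratio of serial wall-clock time to parallel wall-clock time."""
    return t1 / tp

def efficiency(t1, tp, p):
    """Speedup per core; 1.0 would be ideal (linear) scaling."""
    return speedup(t1, tp) / p

t1 = 100.0                               # assumed time on 1 core, seconds
timings = {2: 52.0, 4: 28.0, 8: 16.0}    # assumed times on p cores
for p, tp in sorted(timings.items()):
    print(p, round(speedup(t1, tp), 2), round(efficiency(t1, tp, p), 2))
```

An efficiency that drops steadily as cores are added (here from ~0.96 at 2 cores to ~0.78 at 8) is the usual signal that communication or load imbalance, rather than compute, is starting to dominate.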

287

Multithreading and Parallel Microprocessors

Multithreading and Parallel Microprocessors. Stephen Jenks, Electrical Engineering and Computer Science, Scalable Parallel and Distributed Systems Lab. Outline: parallelism in microprocessors; multicore processor parallelism; parallel programming for shared memory (OpenMP, POSIX Threads, Java Threads).

Shinozuka, Masanobu

288

We present first evidence for the so-called head-tail asymmetry signature of neutron-induced nuclear recoil tracks at energies down to 1.5 keV/amu using the 1 m^3 DRIFT-IIc dark matter detector. This regime is appropriate for recoils induced by Weakly Interacting Massive Particles (WIMPs) but one where the differential ionization is poorly understood. We show that the distribution of recoil energies and directions induced here by Cf-252 neutrons matches well that expected from massive WIMPs. The results open a powerful new means of searching for a galactic signature from WIMPs.

S. Burgos; E. Daw; J. Forbes; C. Ghag; M. Gold; C. Hagemann; V. A. Kudryavtsev; T. B. Lawson; D. Loomba; P. Majewski; D. Muna; A. StJ. Murphy; G. G. Nicklin; S. M. Paling; A. Petkov; S. J. S. Plank; M. Robinson; N. Sanghi; D. P. Snowden-Ifft; N. J. C. Spooner; J. Turk; E. Tziaferi

2008-09-10

289

In this paper, we present a three-dimensional holographic imaging system. The proposed approach records a complex hologram of a real object using optical scanning holography, converts the complex form to binary data, and then reconstructs the recorded hologram using a spatial light modulator (SLM). The conversion from the recorded hologram to a binary hologram is achieved using a direct binary search algorithm. We present experimental results that verify the efficacy of our approach. To the best of our knowledge, this is the first time that a hologram of a real object has been reconstructed using a binary SLM. PMID:25836197
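The direct binary search idea used above to binarize the recorded hologram can be sketched generically: flip one binary element at a time and keep only flips that reduce an error metric against a target. This is a hedged stand-in under assumptions — a plain least-squares error on a 1-D array, with a hypothetical function name — whereas the paper's metric involves the optical reconstruction model.

```python
# Hedged sketch of direct binary search: greedy single-element flips,
# keeping each flip only if it lowers the error to the target.

def direct_binary_search(target, rounds=5):
    n = len(target)
    b = [0.0] * n                      # initial all-zero binary pattern
    def err(pattern):
        return sum((p - t) ** 2 for p, t in zip(pattern, target))
    e = err(b)
    for _ in range(rounds):
        changed = False
        for i in range(n):
            b[i] = 1.0 - b[i]          # trial flip of element i
            e2 = err(b)
            if e2 < e:
                e = e2                 # flip helped: keep it
                changed = True
            else:
                b[i] = 1.0 - b[i]      # flip hurt: revert it
        if not changed:
            break                      # converged: no single flip helps
    return b, e

# Toy usage: grey-level target quantized to the nearest binary pattern.
b, e = direct_binary_search([0.9, 0.1, 0.8, 0.2])
```

The same greedy flip-and-test loop applies when the error is computed through a propagation model, which is what makes the method attractive for converting a complex hologram to a pattern a binary SLM can display.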

Leportier, Thibault; Park, Min Chul; Kim, You Seok; Kim, Taegeun

2015-02-01

290

A Search for Institutional Distinctiveness. New Directions for Community Colleges, Number 65.

ERIC Educational Resources Information Center

The essays in this collection argue that community colleges have much to gain by seeking out and maintaining positive recognition of the features that distinguish them from other colleges in the region and state. In addition, the sourcebook contains articles discussing the process of conducting a search for institutional distinctiveness and ways…

Townsend, Barbara K., Ed.

1989-01-01

291

A Parallel Tempering algorithm for probabilistic sampling and multimodal optimization

NASA Astrophysics Data System (ADS)

Non-linear inverse problems in the geosciences often involve probabilistic sampling of multimodal density functions or global optimization, and sometimes both. Efficient algorithmic tools for carrying out sampling or optimization in challenging cases are of major interest. Here results are presented of some numerical experiments with a technique, known as Parallel Tempering, which originated in the field of computational statistics but is finding increasing numbers of applications in fields ranging from chemical physics to astronomy. To date, experience in the use of Parallel Tempering within earth science problems is very limited. In this paper, we describe Parallel Tempering and compare it to the related methods of Simulated Annealing and Simulated Tempering for optimization and sampling, respectively. A key feature of Parallel Tempering is that it satisfies the detailed balance condition required for convergence of Markov chain Monte Carlo (McMC) algorithms while improving the efficiency of probabilistic sampling. Numerical results are presented on the use of Parallel Tempering for trans-dimensional inversion of synthetic seismic receiver functions and also the simultaneous fitting of multiple receiver functions using global optimization. These suggest that its use can significantly accelerate sampling algorithms and improve exploration of parameter space in optimization. Parallel Tempering is a meta-algorithm which may be used together with many existing McMC sampling and direct search optimization techniques. Its generality and demonstrated performance suggest that there is significant potential for applications to both sampling and optimization problems in the geosciences.
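The mechanics of Parallel Tempering — Metropolis updates on several chains at different temperatures, plus detailed-balance-preserving swaps between adjacent temperatures — can be shown on a toy bimodal density. This is a hedged sketch under assumptions: the target, temperature ladder, and tuning numbers are all made up, and the paper applies the method to receiver-function inversion, not this 1-D example.

```python
# Hedged sketch of Parallel Tempering on a bimodal 1-D density.
import math, random

def log_target(x):
    # Bimodal target: equal-weight Gaussians centred at -3 and +3.
    return math.log(math.exp(-0.5 * (x + 3) ** 2) + math.exp(-0.5 * (x - 3) ** 2))

def parallel_tempering(n_steps=20000, temps=(1.0, 2.0, 4.0, 8.0), seed=1):
    rng = random.Random(seed)
    chains = [0.0] * len(temps)
    samples = []
    for _ in range(n_steps):
        # Within-chain Metropolis updates on the tempered target pi(x)^(1/T).
        for k, T in enumerate(temps):
            prop = chains[k] + rng.gauss(0.0, 1.0)
            if math.log(rng.random()) < (log_target(prop) - log_target(chains[k])) / T:
                chains[k] = prop
        # Swap move between a random adjacent temperature pair; this
        # acceptance rule preserves detailed balance across the ladder.
        k = rng.randrange(len(temps) - 1)
        a = (1.0 / temps[k] - 1.0 / temps[k + 1]) * (
            log_target(chains[k + 1]) - log_target(chains[k]))
        if math.log(rng.random()) < a:
            chains[k], chains[k + 1] = chains[k + 1], chains[k]
        samples.append(chains[0])      # retain only the T = 1 chain
    return samples

samples = parallel_tempering()
```

The hot chains move almost freely between the two modes, and swaps pass those crossings down to the cold chain, so the T = 1 samples visit both modes — exactly the mode-hopping that a single Metropolis chain at T = 1 struggles with.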

Sambridge, Malcolm

2014-01-01

292

DOA (Direction of Arrival) estimation is a major problem in array signal processing applications. Recently, compressive sensing algorithms, including convex relaxation algorithms and greedy algorithms, have been recognized as a kind of novel DOA estimation algorithm. However, the success of these algorithms is limited by the RIP (Restricted Isometry Property) condition or the mutual coherence of measurement matrix. In the DOA estimation problem, the columns of measurement matrix are steering vectors corresponding to different DOAs. Thus, it violates the mutual coherence condition. The situation gets worse when there are two sources from two adjacent DOAs. In this paper, an algorithm based on OMP (Orthogonal Matching Pursuit), called ILS-OMP (Iterative Local Searching-Orthogonal Matching Pursuit), is proposed to improve DOA resolution by Iterative Local Searching. Firstly, the conventional OMP algorithm is used to obtain initial estimated DOAs. Then, in each iteration, a local searching process for every estimated DOA is utilized to find a new DOA in a given DOA set to further decrease the residual. Additionally, the estimated DOAs are updated by substituting the initial DOA with the new one. The simulation results demonstrate the advantages of the proposed algorithm. PMID:23974150

Wang, Wenyi; Wu, Renbiao

2013-01-01

293

Parallel Arc Consistency Algorithms for Preprocessing Constraint Satisfaction

Constraint satisfaction problems (CSPs) are prevalent in artificial intelligence applications. Keywords: arc consistency, artificial intelligence, constraint satisfaction, parallel system architectures, parallel programming, search.
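The arc consistency preprocessing referred to above is usually presented via the serial AC-3 algorithm: repeatedly prune domain values that have no support under a binary constraint, re-examining affected arcs until a fixed point. A minimal sketch of that serial formulation (not the entry's parallel version; the data layout is an assumption):

```python
# Hedged sketch of serial AC-3 arc consistency for binary CSPs.
from collections import deque

def ac3(domains, constraints):
    """domains: {var: set(values)};
    constraints: {(x, y): relation(vx, vy) -> bool} for each directed arc."""
    queue = deque(constraints.keys())
    while queue:
        x, y = queue.popleft()
        rel = constraints[(x, y)]
        # Remove values of x that have no supporting value in y's domain.
        removed = {vx for vx in domains[x]
                   if not any(rel(vx, vy) for vy in domains[y])}
        if removed:
            domains[x] -= removed
            # x's domain shrank: re-examine every arc that depends on x.
            for (a, b) in constraints:
                if b == x:
                    queue.append((a, b))
    return domains

# Toy usage: constraint x < y over domains {1, 2, 3};
# arc consistency prunes x = 3 and y = 1.
doms = {"x": {1, 2, 3}, "y": {1, 2, 3}}
cons = {("x", "y"): lambda a, b: a < b,
        ("y", "x"): lambda a, b: b < a}
ac3(doms, cons)
```

Parallel variants like the entry above distribute the arc-revision work in the while-loop across processors; the pruning logic per arc is the same.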

Conrad, James M.

294

Search for direct CP violation in D0 → h-h+ modes using semileptonic B decays

NASA Astrophysics Data System (ADS)

A search for direct CP violation in D0 → h-h+ (where h = K or π) is presented using data corresponding to an integrated luminosity of 1.0 fb^-1 collected in 2011 by LHCb in pp collisions at a centre-of-mass energy of 7 TeV. The analysis uses D0 mesons produced in inclusive semileptonic b-hadron decays to the D0μX final state, where the charge of the accompanying muon is used to tag the flavour of the D0 meson. The difference in the CP-violating asymmetries between the two decay channels is measured to be ΔACP = ACP(K-K+) - ACP(π-π+) = (0.49 ± 0.30 (stat) ± 0.14 (syst))%. This result does not confirm the evidence for direct CP violation in the charm sector reported in other analyses.
Difference in b-hadron mixture. Due to the momentum requirements in the trigger and selection, the relative contribution from B0 and B+ decays (the contribution from b-baryon and Bs0 decays can be neglected) can be different between the D0 → K-K+ and D0 → π-π+ modes. In combination with a different effective production asymmetry for candidates from B0 and B+ mesons (the production asymmetry from B0 mesons is diluted due to B0 mixing), this could lead to a non-vanishing bias in ΔACP. Assuming isospin symmetry, the production cross-sections for B0 and B+ mesons are expected to be equal. Therefore, the ratio between B0 and B+ decays is primarily determined by their branching fractions to the D0μX final state. Using the inclusive branching fractions [24], B → DX, the B0 fraction is expected to be f(B0) = (37.5 ± 2.9)%. From the simulation, the difference in the B0 fraction due to the difference in selection efficiencies is found to be at most 1%. Further assuming a B+ production asymmetry of 1.0% [25] and no B0 production asymmetry, the difference in the effective production asymmetry between the two modes is ~0.02%.
Difference in B decay time acceptance. A difference between the D0 → K-K+ and D0 → π-π+ modes in the B decay time acceptance, in combination with B0 mixing, changes the effective B production asymmetry. Its effect is estimated by integrating the expected B decay time distributions at different starting values, such that the mean lifetime ratio corresponds to the observed B decay length difference (~5%) in the two modes. Using the estimated B0 fraction and assuming a 1.0% production asymmetry, the effect on ΔACP is found to be 0.02%.
Effect of the weighting procedure. After weighting the D0 distributions in pT and η, only small differences remain in the muon kinematic distributions. In order to estimate the systematic uncertainty from the B production and detection asymmetry due to residual differences in the muon kinematic distributions, an additional weight is applied according to the muon (pT, η) and the azimuthal angle φ. The value of ΔACP changes by 0.05%.
Difference in mistag asymmetry. The difference in the mistag rate between positive and negative tags contributes to the measured raw asymmetry. The mistag difference measured using D0 → K-π+ decays is (0.006 ± 0.021)% (see Section 5.2). In case this difference is not the same for D0 → K-K+ and D0 → π-π+, there can be a small effect from the mistag asymmetry. A systematic uncertainty of 0.02% is assigned, coming from the uncertainty on the measured mistag difference.
Effect of different fit models. A possible asymmetry in the background from false D0 combinations is accounted for in the fit to the D0 mass distribution. Different models can change the fraction between signal and background and therefore change the observed asymmetry. The baseline model is modified by either using a single Gaussian function for the signal, a single Gaussian plus a Crystal Ball function for the signal, a first- or second-order polynomial for the background, by leaving the asymmetry in the reflection free, or by modifying the fit range for D0 → π-π+ to exclude the reflection peak. The largest variation changes the value of ΔACP by 0.035%. As another check, the asymmetry is determined without any fit by counting the number of positively- and negatively-tagged events in the signal window and subtracting the corres

Aaij, R.; Abellan Beteta, C.; Adeva, B.; Adinolfi, M.; Adrover, C.; Affolder, A.; Ajaltouni, Z.; Albrecht, J.; Alessio, F.; Alexander, M.; Ali, S.; Alkhazov, G.; Alvarez Cartelle, P.; Alves, A. A.; Amato, S.; Amerio, S.; Amhis, Y.; Anderlini, L.; Anderson, J.; Andreassen, R.; Appleby, R. B.; Aquines Gutierrez, O.; Archilli, F.; Artamonov, A.; Artuso, M.; Aslanides, E.; Auriemma, G.; Bachmann, S.; Back, J. J.; Baesso, C.; Balagura, V.; Baldini, W.; Barlow, R. J.; Barschel, C.; Barsuk, S.; Barter, W.; Bauer, Th.; Bay, A.; Beddow, J.; Bedeschi, F.; Bediaga, I.; Belogurov, S.; Belous, K.; Belyaev, I.; Ben-Haim, E.; Benayoun, M.; Bencivenni, G.; Benson, S.; Benton, J.; Berezhnoy, A.; Bernet, R.; Bettler, M.-O.; van Beuzekom, M.; Bien, A.; Bifani, S.; Bird, T.; Bizzeti, A.; Bjørnstad, P. M.; Blake, T.; Blanc, F.; Blouw, J.; Blusk, S.; Bocci, V.; Bondar, A.; Bondar, N.; Bonivento, W.; Borghi, S.; Borgia, A.; Bowcock, T. J. V.; Bowen, E.; Bozzi, C.; Brambach, T.; van den Brand, J.; Bressieux, J.; Brett, D.; Britsch, M.; Britton, T.; Brook, N. H.; Brown, H.; Burducea, I.; Bursche, A.; Busetto, G.; Buytaert, J.; Cadeddu, S.; Callot, O.; Calvi, M.; Calvo Gomez, M.; Camboni, A.; Campana, P.; Campora Perez, D.; Carbone, A.; Carboni, G.; Cardinale, R.; Cardini, A.; Carranza-Mejia, H.; Carson, L.; Carvalho Akiba, K.; Casse, G.; Cattaneo, M.; Cauet, Ch.; Charles, M.; Charpentier, Ph.; Chen, P.; Chiapolini, N.; Chrzaszcz, M.; Ciba, K.; Cid Vidal, X.; Ciezarek, G.; Clarke, P. E. L.; Clemencic, M.; Cliff, H. V.; Closier, J.; Coca, C.; Coco, V.; Cogan, J.; Cogneras, E.; Collins, P.; Comerma-Montells, A.; Contu, A.; Cook, A.; Coombes, M.; Coquereau, S.; Corti, G.; Couturier, B.; Cowan, G. A.; Craik, D.; Cunliffe, S.; Currie, R.; D'Ambrosio, C.; David, P.; David, P. N. Y.; De Bonis, I.; De Bruyn, K.; De Capua, S.; De Cian, M.; De Miranda, J. 
M.; De Paula, L.; De Silva, W.; De Simone, P.; Decamp, D.; Deckenhoff, M.; Del Buono, L.; Derkach, D.; Deschamps, O.; Dettori, F.; Dijkstra, H.; Dogaru, M.; Donleavy, S.; Dordei, F.; Dosil Suárez, A.; Dossett, D.; Dovbnya, A.; Dupertuis, F.; Dzhelyadin, R.; Dziurda, A.; Dzyuba, A.; Easo, S.; Egede, U.; Egorychev, V.; Eidelman, S.; van Eijk, D.; Eisenhardt, S.; Eitschberger, U.; Ekelhof, R.; Eklund, L.; El Rifai, I.; Elsasser, Ch.; Elsby, D.; Falabella, A.; Färber, C.; Fardell, G.; Farinelli, C.; Farry, S.; Fave, V.; Ferguson, D.; Fernandez Albor, V.; Ferreira Rodrigues, F.; Ferro-Luzzi, M.; Filippov, S.; Fiore, M.; Fitzpatrick, C.; Fontana, M.; Fontanelli, F.; Forty, R.; Francisco, O.; Frank, M.; Frei, C.; Frosini, M.; Furcas, S.; Furfaro, E.; Gallas Torreira, A.; Galli, D.; Gandelman, M.; Gandini, P.; Gao, Y.; Garofoli, J.; Garosi, P.; Garra Tico, J.; Garrido, L.; Gaspar, C.; Gauld, R.; Gersabeck, E.; Gersabeck, M.; Gershon, T.; Ghez, Ph.; Gibson, V.; Gligorov, V. V.; Göbel, C.; Golubkov, D.; Golutvin, A.; Gomes, A.; Gordon, H.; Grabalosa Gándara, M.; Graciani Diaz, R.; Granado Cardoso, L. A.; Graugés, E.; Graziani, G.; Grecu, A.; Greening, E.; Gregson, S.; Grünberg, O.; Gui, B.; Gushchin, E.; Guz, Yu.; Gys, T.; Hadjivasiliou, C.; Haefeli, G.; Haen, C.; Haines, S. C.; Hall, S.; Hampson, T.; Hansmann-Menzemer, S.; Harnew, N.; Harnew, S. T.; Harrison, J.; Hartmann, T.; He, J.; Heijne, V.; Hennessy, K.; Henrard, P.; Hernando Morata, J. A.; van Herwijnen, E.; Hicks, E.; Hill, D.; Hoballah, M.; Hombach, C.; Hopchev, P.; Hulsbergen, W.; Hunt, P.; Huse, T.; Hussain, N.; Hutchcroft, D.; Hynds, D.; Iakovenko, V.; Idzik, M.; Ilten, P.; Jacobsson, R.; Jaeger, A.; Jans, E.; Jaton, P.; Jing, F.; John, M.; Johnson, D.; Jones, C. R.; Jost, B.; Kaballo, M.; Kandybei, S.; Karacson, M.; Karbach, T. M.; Kenyon, I. R.; Kerzel, U.; Ketel, T.; Keune, A.; Khanji, B.; Kochebina, O.; Komarov, I.; Koopman, R. 
F.; Koppenburg, P.; Korolev, M.; Kozlinskiy, A.; Kravchuk, L.; Kreplin, K.; Kreps, M.; Krocker, G.; Krokovny, P.; Kruse, F.; Kucharczyk, M.; Kudryavtsev, V.; Kvaratskheliya, T.; La Thi, V. N.; Lacarrere, D.; Lafferty, G.; Lai, A.; Lambert, D.; Lambert, R. W.; Lanciotti, E.; Lanfranchi, G.; Langenbruch, C.; Latham, T.; Lazzeroni, C.; Le Gac, R.; van Leerdam, J.; Lees, J.-P.; Lefèvre, R.; Leflat, A.; Lefrançois, J.; Leo, S.; Leroy, O.; Leverington, B.; Li, Y.; Li Gioi, L.; Liles, M.; Lindner, R.; Linn, C.; Liu, B.; Liu, G.; Lohn, S.; Longstaff, I.; Lopes, J. H.; Lopez Asamar, E.; Lopez-March, N.; Lu, H.; Lucchesi, D.; Luisier, J.; Luo, H.; Machefert, F.; Machikhiliyan, I. V.; Maciuc, F.; Maev, O.; Malde, S.; Manca, G.; Mancinelli, G.; Marconi, U.; Märki, R.; Marks, J.; Martellotti, G.; Martens, A.; Martin, L.; Martín Sánchez, A.; Martinelli, M.; Martinez Santos, D.; Martins Tostes, D.; Massafferri, A.; Matev, R.; Mathe, Z.; Matteuzzi, C.; Maurice, E.; Mazurov, A.; McCarthy, J.; McNulty, R.; Mcnab, A.

2013-06-01

295

We present the results of an analysis of data recorded at the Pierre Auger Observatory in which we search for groups of directionally-aligned events (or "multiplets") which exhibit a correlation between arrival direction and the inverse of the energy. These signatures are expected from sets of events coming from the same source after having been deflected by intervening coherent magnetic fields. The observation of several events from the same source would open the possibility to accurately reconstruct the position of the source and also measure the integral of the component of the magnetic field orthogonal to the trajectory of the cosmic rays. We describe the largest multiplets found and compute the probability that they appeared by chance from an isotropic distribution. We find no statistically significant evidence for the presence of multiplets arising from magnetic deflections in the present data.
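The deflection signature being searched for can be illustrated numerically: for events from one source bent by a coherent field, the angular offset from the source position scales as the inverse of the energy, so offset and 1/E are strongly correlated for a genuine multiplet. The energies, deflection strength, and noise level below are invented for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical multiplet: event energies E (in EeV) and angular offsets
# (in degrees) from the source position.  For deflection in a coherent
# magnetic field the offset scales as D / E, plus measurement noise.
E = rng.uniform(20, 80, size=10)            # event energies (invented)
D = 50.0                                    # deflection power, deg * EeV (invented)
offset = D / E + rng.normal(0.0, 0.2, 10)   # arrival-direction offsets

# Correlation between offset and 1/E: close to 1 for a genuine multiplet,
# near 0 for events drawn from an isotropic background.
corr = np.corrcoef(offset, 1.0 / E)[0, 1]
print(f"correlation(offset, 1/E) = {corr:.3f}")
```

The chance probability quoted in the abstract would then be estimated by repeating the same correlation measurement on many simulated isotropic skies.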

Abreu, P.; /Lisbon, IST; Aglietta, M.; /Turin U. /INFN, Turin; Ahn, E.J.; /Fermilab; Albuquerque, I.F.M.; /Sao Paulo U.; Allard, D.; /APC, Paris; Allekotte, I.; /Buenos Aires, CONICET; Allen, J.; /New York U.; Allison, P.; /Ohio State U.; Alvarez Castillo, J.; /Mexico U., ICN; Alvarez-Muniz, J.; /Santiago de Compostela U.; Ambrosio, M.; /Naples U. /INFN, Naples /Nijmegen U., IMAPP

2011-11-01

296

Structure prediction of nanoclusters; a direct or a pre-screened search on the DFT energy landscape?

The atomic structure of inorganic nanoclusters obtained via a search for low lying minima on energy landscapes, or hypersurfaces, is reported for inorganic binary compounds: zinc oxide (ZnO)n, magnesium oxide (MgO)n, cadmium selenide (CdSe)n, and potassium fluoride (KF)n, where n = 1-12 formula units. The computational cost of each search is dominated by the effort to evaluate each sample point on the energy landscape and the number of required sample points. The effect of changing the balance between these two factors on the success of the search is investigated. The choice of sample points will also affect the number of required data points and therefore the efficiency of the search. Monte Carlo based global optimisation routines (evolutionary and stochastic quenching algorithms) within a new software package, viz. Knowledge Led Master Code (KLMC), are employed to search both directly and after pre-screening on the DFT energy landscape. Pre-screening includes structural relaxation to minimise a cheaper energy function - based on interatomic potentials - and is found to improve significantly the search efficiency, typically reducing the number of DFT calculations required to locate the local minima by more than an order of magnitude. Although the choice of functional form is important, the approach is robust to small changes to the interatomic potential parameters. The computational cost of initial DFT calculations of each structure is reduced by employing Gaussian smearing of the electronic energy levels. Larger (KF)n nanoclusters are predicted to form cuboid cuts from the rock-salt phase, but also share many structural motifs with (MgO)n for smaller clusters. The transition from 2D rings to 3D (bubble, or fullerene-like) structures occurs at a larger cluster size for (ZnO)n and (CdSe)n.
Differences between the HOMO and LUMO energies, for all the compounds apart from KF, are in the visible region of the optical spectrum (2-3 eV); KF lies deep in the UV region at 5 eV and shows little variation. Extrapolating the electron affinities found for the clusters with respect to size results in the qualitatively correct work functions for the respective bulk materials. PMID:25017305
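The pre-screening strategy above can be sketched generically, assuming only that the expensive energy function dominates the cost: relax candidates on a cheap surrogate first, deduplicate the resulting minima, and spend expensive evaluations only on those. The toy one-dimensional energies below stand in for the DFT landscape and the interatomic potential; all names and parameters are illustrative.

```python
calls = {"expensive": 0}

def expensive_energy(x):
    """Stand-in for a DFT evaluation (counted, because it dominates cost)."""
    calls["expensive"] += 1
    return (x**2 - 1.0)**2 + 0.1 * x        # double well, minima near +-1

def cheap_energy(x):
    """Stand-in for an interatomic-potential surrogate: similar landscape."""
    return (x**2 - 1.05)**2 + 0.12 * x

def local_minimize(f, x, step=0.05, iters=200):
    """Crude 1D pattern-search descent; enough for a sketch."""
    for _ in range(iters):
        best = min((x - step, x, x + step), key=f)
        if best == x:
            step *= 0.5
        x = best
    return x

candidates = [-2.0, -1.5, -0.5, 0.3, 0.9, 1.8]

# Direct search: every candidate is relaxed on the expensive landscape.
calls["expensive"] = 0
direct = [local_minimize(expensive_energy, x) for x in candidates]
direct_cost = calls["expensive"]

# Pre-screened search: relax on the cheap surrogate first, deduplicate the
# resulting minima, then refine only those on the expensive landscape.
calls["expensive"] = 0
screened = sorted({round(local_minimize(cheap_energy, x), 2) for x in candidates})
refined = [local_minimize(expensive_energy, x) for x in screened]
screened_cost = calls["expensive"]

print(f"expensive calls: direct={direct_cost}, pre-screened={screened_cost}")
```

Because the surrogate funnels many starting points into the same few basins, the expensive function is evaluated far less often, mirroring the order-of-magnitude saving reported above.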

Farrow, M R; Chow, Y; Woodley, S M

2014-10-21

297

Footprint of Triplet Scalar Dark Matter in Direct, Indirect Search and Invisible Higgs Decay

In this talk, we will review the Inert Triplet Model (ITM), which provides a candidate for dark matter (DM) particles. We then study possible decays of the Higgs boson to the DM candidate and apply current experimental data on invisible Higgs decay to constrain the parameter space of the ITM. We also consider indirect searches for DM and use Fermi-LAT data to put constraints on the parameter space. Ultimately, we compare this limit with the constraints provided by the LUX experiment for low-mass DM and invisible Higgs decay.

Ayazi, Seyed Yaser

2015-01-01

298

Footprint of Triplet Scalar Dark Matter in Direct, Indirect Search and Invisible Higgs Decay

In this talk, we will review the Inert Triplet Model (ITM), which provides a candidate for dark matter (DM) particles. We then study possible decays of the Higgs boson to the DM candidate and apply current experimental data on invisible Higgs decay to constrain the parameter space of the ITM. We also consider indirect searches for DM and use Fermi-LAT data to put constraints on the parameter space. Ultimately, we compare this limit with the constraints provided by the LUX experiment for low-mass DM and invisible Higgs decay.

Seyed Yaser Ayazi; S. Mahdi Firouzabadi

2015-01-25

299

A parallelized binary search tree

Technology Transfer Automated Retrieval System (TEKTRAN)

PTTRNFNDR is an unsupervised statistical learning algorithm that detects patterns in DNA sequences, protein sequences, or any natural language texts that can be decomposed into letters of a finite alphabet. PTTRNFNDR performs complex mathematical computations and its processing time increases when i...

300

In this paper, we search for the regions of the phenomenological minimal supersymmetric standard model (pMSSM) parameter space where one can expect to have a moderate Higgs mixing angle ($\alpha$) with relatively light (up to 600 GeV) additional Higgses after satisfying the current LHC data. We perform a global fit analysis using the most updated data (till December 2014) from the LHC and Tevatron experiments. The constraints coming from the precision measurements of the rare b-decays $B_s \to \mu^+ \mu^-$ and $b \to s \gamma$ are also considered. We find that the low $M_{A}$ $(\lesssim 350)$ and high $\tan\beta$ $(\gtrsim 25)$ regions are disfavoured by the combined effect of the global analysis and flavour data. However, regions with Higgs mixing angle $\alpha \sim$ 0.1 - 0.8 are still allowed by the current data. We then study the existing direct search bounds on the heavy scalar/pseudoscalar ($\rm H/A$) and charged Higgs ($\rm H^\pm$) masses and branchings at the LHC. It has been found that regions with low to moderate values of $\tan\beta$ with light additional Higgses (mass $\le$ 600 GeV) are unconstrained by the data, while the regions with $\tan\beta >$ 20 are excluded by the direct search bounds from the LHC-8 data. The possibility of probing the low $\tan\beta$ ($\le$ 10) region at the high-luminosity run of the LHC is also discussed, and it has been found that even the high-luminosity (3000 $\rm fb^{-1}$) run of the LHC may not have enough sensitivity to probe the entire region of parameter space.

Biplob Bhattacherjee; Amit Chakraborty; Arghya Choudhury

2015-04-16

301

the Order Graph Method (OGM) as an effective technique for parallel task scheduling and implement OGM in a scheduling tool to highlight the method's effectiveness. OGM is developed as a methodology to automatically interprocessor communication and network topology. Using OGM, Digital Signal Processing (DSP) algorithms

Reeves, Douglas S.

302

the Order Graph Method (OGM) as an effective technique for parallel task scheduling and implement OGM in a scheduling tool to highlight the method's effectiveness. OGM is developed as a methodology to automatically interprocessor communication and network topology. Using OGM, Digital Signal Processing (DSP) algorithms

Reeves, Douglas S.

303

ERIC Educational Resources Information Center

This book contains 35 papers about planning and holding future search conferences, as well as their benefits and likely future directions. The following papers are included: "Applied Common Sense" (Weisbord); "Inventing the Search Conference" (Weisbord); "Building Collaborative Communities" (Schindler-Rainman, Lippitt); "Parallel Paths to…

Weisbord, Marvin R.; And Others

304

Applied Parallel Metadata Indexing

The GPFS archive is a parallel archive used by hundreds of users in the Turquoise collaboration network. It houses 4+ petabytes of data in more than 170 million files. Currently, users must navigate the file system to retrieve their data, requiring them to remember file paths and names. A better solution might allow users to tag data with meaningful labels and search the archive using standard and user-defined metadata, while maintaining security. Last summer, I developed the backend to a tool that adheres to these design goals. The backend works by importing GPFS metadata into a MongoDB cluster, which is then indexed on each attribute. This summer, the author implemented security and developed the user interface for the search tool. To meet security requirements, each database table is associated with a single user, stores only the records that the user may read, and requires a set of credentials to access. The interface to the search tool is implemented using FUSE (Filesystem in USErspace). FUSE is an intermediate layer that intercepts file system calls and allows the developer to redefine how those calls behave. In the case of this tool, FUSE interfaces with MongoDB to issue queries and populate output. A FUSE implementation is desirable because it allows users to interact with the search tool using commands they are already familiar with. These security and interface additions are essential for a usable product.
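The security design described above — one table per user, holding only the records that user may read — can be sketched with in-memory dictionaries standing in for MongoDB collections. All record fields and names here are hypothetical.

```python
# Schematic sketch of the per-user metadata-table design; plain Python
# structures stand in for MongoDB collections (field names hypothetical).
all_metadata = [
    {"path": "/archive/a.dat", "owner": "alice", "readers": {"alice"}, "size": 10},
    {"path": "/archive/b.dat", "owner": "bob", "readers": {"alice", "bob"}, "size": 20},
    {"path": "/archive/c.dat", "owner": "bob", "readers": {"bob"}, "size": 30},
]

def build_user_table(user):
    """Each user's table holds only the records that user may read."""
    return [rec for rec in all_metadata if user in rec["readers"]]

def query(table, **criteria):
    """Attribute query against a user's table (stands in for a MongoDB find)."""
    return [rec for rec in table
            if all(rec.get(k) == v for k, v in criteria.items())]

alice_table = build_user_table("alice")
print([rec["path"] for rec in query(alice_table, owner="bob")])
```

Because access control is baked into which records each table contains, any query a user issues — here, "files owned by bob" — can only ever return files that user is allowed to see.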

Jacobi, Michael R [Los Alamos National Laboratory

2012-08-01

305

Demonstrating Forces between Parallel Wires.

ERIC Educational Resources Information Center

Describes a physics demonstration that dramatically illustrates the mutual repulsion (attraction) between parallel conductors using insulated copper wire, wooden dowels, a high direct current power supply, electrical tape, and an overhead projector. (WRM)

Baker, Blane

2000-01-01

306

[Extraction artifact: only fragments of this thesis's front matter survive — a table of contents covering the MasPar system architecture, MasPar FORTRAN, the MasPar programming environment, and MLOCFES (the parallel version of LOCFES, including potential regions of parallelism and FORTRAN adaptations), along with table captions comparing analytical and computational MLOCFES solutions for I = 1, K = 1, and L = 1.]

Shah, Ronak C.

1991-01-01

307

Parallel Magnetic Resonance Imaging

The main disadvantages of Magnetic Resonance Imaging (MRI) are its long scan times and, in consequence, its sensitivity to motion. Exploiting the complementary information from multiple receive coils, parallel imaging is able to recover images from under-sampled k-space data and to accelerate the measurement. Because parallel magnetic resonance imaging can be used to accelerate basically any imaging sequence, it has many important applications. Parallel imaging brought a fundamental shift in image reconstruction: image reconstruction changed from a simple direct Fourier transform to the solution of an ill-conditioned inverse problem. This work gives an overview of image reconstruction from the perspective of inverse problems. After introducing basic concepts such as regularization, discretization, and iterative reconstruction, advanced topics are discussed, including algorithms for auto-calibration, the connection to approximation theory, and the combination with compressed sensing.

Uecker, Martin

2015-01-01

308

I survey physics theories involving parallel universes, which form a natural four-level hierarchy of multiverses allowing progressively greater diversity. Level I: A generic prediction of inflation is an infinite ergodic universe, which contains Hubble volumes realizing all initial conditions - including an identical copy of you about 10^{10^29} meters away. Level II: In chaotic inflation, other thermalized regions may have

Max Tegmark

2003-01-01

309

Research on parallel algorithm for sequential pattern mining

NASA Astrophysics Data System (ADS)

Sequential pattern mining is the mining of frequent sequences related to time or other orders from a sequence database. Its initial motivation was to discover the laws of customer purchasing within a time window by finding the frequent sequences. In recent years, sequential pattern mining has become an important direction of data mining, and its application field is no longer confined to business databases, extending to new data sources such as the Web and advanced scientific fields such as DNA analysis. The data of sequential pattern mining have the following characteristics: massive data volume and distributed storage. Most existing sequential pattern mining algorithms do not take these characteristics into account together. Based on these traits and on parallel theory, this paper puts forward a new distributed parallel algorithm, SPP (Sequential Pattern Parallel). The algorithm follows the principle of pattern reduction and uses the divide-and-conquer strategy for parallelization. The first parallel task is to construct the frequent item sets, applying the frequent-pattern concept and search space partition theory, and the second task is to build the frequent sequences using depth-first search at each processor. The algorithm needs to access the database only twice and does not generate candidate sequences, which reduces the access time and improves the mining efficiency. Based on a random data generation procedure and the different information structures designed, this paper simulates the SPP algorithm in a concrete parallel environment and implements the AprioriAll algorithm for comparison. The experiments demonstrate that, compared with AprioriAll, the SPP algorithm has an excellent speedup factor and efficiency.
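The two parallel phases can be sketched on a toy database, with threads standing in for the processors: partitions are counted in parallel to find the frequent items, then each frequent item's patterns are grown by an independent depth-first extension restricted to frequent items. This illustrates the structure only, not SPP's two-scan bookkeeping; all data and helper names are invented.

```python
from collections import Counter
from concurrent.futures import ThreadPoolExecutor

# Toy sequence database: each entry is one customer's ordered purchases.
db = [
    ["a", "b", "c"],
    ["a", "c"],
    ["a", "b", "c"],
    ["b", "c"],
]
min_support = 2

def count_items(partition):
    """Phase 1 task: count item occurrences in one database partition."""
    c = Counter()
    for seq in partition:
        c.update(set(seq))          # at most one count per sequence
    return c

# Phase 1: frequent items, counted in parallel over database partitions.
partitions = [db[:2], db[2:]]
with ThreadPoolExecutor() as pool:
    total = sum(pool.map(count_items, partitions), Counter())
frequent_items = {i for i, n in total.items() if n >= min_support}

def support(pattern):
    """Number of sequences containing `pattern` as a subsequence."""
    def contains(seq):
        it = iter(seq)
        return all(any(x == p for x in it) for p in pattern)
    return sum(contains(seq) for seq in db)

def extend(prefix):
    """Phase 2 task: depth-first extension of one frequent prefix,
    trying only frequent items (the pattern-reduction idea)."""
    found = []
    for item in sorted(frequent_items):
        cand = prefix + [item]
        if support(cand) >= min_support:
            found.append(cand)
            found.extend(extend(cand))
    return found

# Phase 2: each "processor" grows the patterns rooted at one frequent item.
roots = [[i] for i in sorted(frequent_items)]
with ThreadPoolExecutor() as pool:
    patterns = roots + [p for found in pool.map(extend, roots) for p in found]
print(patterns)
```

On this database the mined patterns are [a], [b], [c], [a, b], [a, b, c], [a, c], and [b, c]; each root's subtree is independent, which is what makes the depth-first phase embarrassingly parallel.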

Zhou, Lijuan; Qin, Bai; Wang, Yu; Hao, Zhongxiao

2008-03-01

310

Parallel algorithms for machine intelligence and vision

Recent research results in the area of parallel algorithms for problem solving, search, natural language parsing, and computer vision, are brought together in this book. The research reported demonstrates that substantial parallelism can be exploited in various machine intelligence and vision problems. Extensive experimental studies are presented that will help the reader in assessing the usefulness of an approach to a specific problem.

Kumar, V. (Minnesota Univ., Minneapolis, MN (USA)); Gopalakrishnan, P.S.; Kanal, L.N.

1990-01-01

311

Galactic Cosmic-ray (CR) transport parameters are usually constrained by the boron-to-carbon ratio. This procedure is generically plagued with degeneracies between the diffusion coefficient and the vertical extent of the Galactic magnetic halo. The latter is of paramount importance for indirect dark matter (DM) searches, because it fixes the amount of DM annihilation or decay that contributes to the local antimatter CR flux. These degeneracies could be broken by using secondary radioactive species, but the current data still have large error bars, and this method is extremely sensitive to the very local interstellar medium (ISM) properties. Here, we propose to use the low-energy CR positrons in the GeV range as another direct constraint on diffusion models. We show that the PAMELA data disfavor small diffusion halo ($L \lesssim 3$ kpc) and large diffusion slope models, and exclude the minimal ({\em min}) configuration (Maurin et al. 2001, Donato et al. 2004) widely used in the literature to bracket the uncertainties in the DM signal predictions. This is complementary to indirect constraints (diffuse radio and gamma-ray emissions) and has strong impact on DM searches. Indeed this makes the antiproton constraints more robust while enhancing the discovery/exclusion potential of current and future experiments, like AMS-02 and GAPS, especially in the antiproton and antideuteron channels.

Julien Lavalle; David Maurin; Antje Putze

2015-01-22

312

In this paper, we search for the regions of the phenomenological minimal supersymmetric standard model (pMSSM) parameter space where one can expect to have a moderate Higgs mixing angle ($\alpha$) with relatively light (up to 600 GeV) additional Higgses after satisfying the current LHC data. We perform a global fit analysis using the most updated data (till December 2014) from the LHC and Tevatron experiments. The constraints coming from the precision measurements of the rare b-decays $B_s \to \mu^+ \mu^-$ and $b \to s \gamma$ are also considered. We find that the low $M_{A}$ $(\lesssim 350)$ and high $\tan\beta$ $(\gtrsim 25)$ regions are disfavoured by the combined effect of the global analysis and flavour data. However, regions with Higgs mixing angle $\alpha \sim$ 0.1 - 0.8 are still allowed by the current data. We then study the existing direct search bounds on the heavy scalar/pseudoscalar ($\rm H/A$) and charged Higgs ($\rm H^\pm$) masses and branchings at the LHC. It has been found that regions with low to modera...

Bhattacherjee, Biplob; Choudhury, Arghya

2015-01-01

313

Due to a very low event rate expected in direct dark matter search experiments, a good understanding of every background component is crucial. Muon-induced neutrons constitute a prominent background, since neutrons lead to nuclear recoils and thus can mimic a potential dark matter signal. EDELWEISS is a Ge-bolometer experiment searching for WIMP dark matter. It is located in the Laboratoire Souterrain de Modane (LSM, France). We have measured muon-induced neutrons by means of a neutron counter based on Gd-loaded liquid scintillator. Studies of muon-induced neutrons are presented and include development of the appropriate MC model based on Geant4 and analysis of a 1000-days measurement campaign in LSM. We find a good agreement between measured rates of muon-induced neutrons and those predicted by the developed model with full event topology. The impact of the neutron background on current EDELWEISS data-taking as well as for next generation experiments such as EURECA is briefly discussed.

Kozlov, Valentin [Karlsruhe Institute of Technology, Institut für Kernphysik, Postfach 3640, 76021 Karlsruhe (Germany)] [Karlsruhe Institute of Technology, Institut für Kernphysik, Postfach 3640, 76021 Karlsruhe (Germany); Collaboration: EDELWEISS Collaboration

2013-08-08

314

NASA Astrophysics Data System (ADS)

Galactic cosmic-ray (CR) transport parameters are usually constrained by the boron-to-carbon ratio. This procedure is generically plagued with degeneracies between the diffusion coefficient and the vertical extent of the Galactic magnetic halo. The latter is of paramount importance for indirect dark matter (DM) searches because it fixes the amount of DM annihilation or decay that contributes to the local antimatter CR flux. These degeneracies could be broken by using secondary radioactive species, but the current data still have large error bars, and this method is extremely sensitive to the very local interstellar medium properties. Here, we propose to use the low-energy CR positrons in the GeV range as another direct constraint on diffusion models. We show that the PAMELA data disfavor small diffusion halo (L ≲ 3 kpc) and large diffusion slope models and exclude the minimal configuration [Maurin et al. Astrophys. J. 555, 585 (2001); Donato et al. Phys. Rev. D 69, 063501 (2004)] widely used in the literature to bracket the uncertainties in the DM signal predictions. This is complementary to indirect constraints (diffuse radio and gamma-ray emissions) and has a strong impact on DM searches. Indeed, this makes the antiproton constraints more robust while enhancing the discovery/exclusion potential of current and future experiments, like AMS-02 and GAPS, especially in the antiproton and antideuteron channels.

Lavalle, Julien; Maurin, David; Putze, Antje

2014-10-01

315

We derive simple analytic expressions for the (coherent and semi-coherent) phase metrics of continuous-wave sources in low-eccentricity binary systems, both for the long-segment and short-segment regimes (compared to the orbital period). The resulting expressions correct and extend previous results found in the literature. We present results of extensive Monte-Carlo studies comparing metric mismatch predictions against the measured loss of detection statistic for binary parameter offsets. The agreement is generally found to be within ~ 10%-30%. As an application of the metric template expressions, we estimate the optimal achievable sensitivity of an Einstein@Home directed search for Scorpius X-1, under the assumption of sufficiently small spin wandering. We find that such a search, using data from the upcoming advanced detectors, would be able to beat the torque-balance level [1,2] up to a frequency of ~ 500 - 600 Hz, if orbital eccentricity is well-constrained, and up to a frequency of ~ 160 - 200 Hz for m...

Leaci, Paola

2015-01-01

316

We derive simple analytic expressions for the (coherent and semi-coherent) phase metrics of continuous-wave sources in low-eccentricity binary systems, both for the long-segment and short-segment regimes (compared to the orbital period). The resulting expressions correct and extend previous results found in the literature. We present results of extensive Monte-Carlo studies comparing metric mismatch predictions against the measured loss of detection statistic for binary parameter offsets. The agreement is generally found to be within ~ 10%-30%. As an application of the metric template expressions, we estimate the optimal achievable sensitivity of an Einstein@Home directed search for Scorpius X-1, under the assumption of sufficiently small spin wandering. We find that such a search, using data from the upcoming advanced detectors, would be able to beat the torque-balance level [1,2] up to a frequency of ~ 500 - 600 Hz, if orbital eccentricity is well-constrained, and up to a frequency of ~ 160 - 200 Hz for more conservative assumptions about the uncertainty on orbital eccentricity.

Paola Leaci; Reinhard Prix

2015-02-03

317

In regions of large tan β and small m_A, searches for heavy neutral minimal supersymmetric standard model (MSSM) Higgs bosons at the Tevatron are promising. At the same time, rates in direct dark matter experiments, such as CDMS, are enhanced in the case of large tan β and small m_A. As a result, there is a natural interplay between the heavy neutral Higgs searches at the Tevatron and the region of parameter space explored by CDMS. We show that if the lightest neutralino makes up the dark matter of our universe, current limits from CDMS strongly constrain the prospects of heavy neutral MSSM Higgs discovery at the Tevatron unless |μ| ≳ 400 GeV. The limits of CDMS projected for 2007 will increase this constraint to |μ| ≳ 800 GeV. If CDMS does observe neutralinos in the near future, however, it will make the discovery of Higgs bosons at the Tevatron far more likely. PMID:17026093

Carena, Marcela; Hooper, Dan; Skands, Peter

2006-08-01

318

We discuss how the Zee-Babu model can be tested combining information from neutrino data, low-energy experiments and direct searches at the LHC. We update previous analysis in the light of the recent measurement of the neutrino mixing angle $\theta_{13}$, the new MEG limits on $\mu \rightarrow e \gamma$, the lower bounds on doubly-charged scalars coming from LHC data, and, of course, the discovery of a 125 GeV Higgs boson by ATLAS and CMS. In particular, we find that the new singly- and doubly-charged scalars are accessible at the second run of the LHC, yielding different signatures depending on the neutrino hierarchy and on the values of the phases. We also discuss in detail the stability of the potential.

Juan Herrero-Garcia; Miguel Nebot; Nuria Rius; Arcadi Santamaria

2014-10-08

319

Searches for Direct CP Violation in D+ Decays And for D0 Anti-D0 Mixing

The authors present preliminary results of a search for direct CP violation in D{sup +} {yields} K{sup +}K{sup -} {pi}{sup +} decays using 87 fb{sup -1} of data acquired by the Babar experiment running on and near the {Upsilon}(4S) from 1999-2002. The authors report the asymmetries in the signal mode and in the main resonant subchannels. Based on the same dataset, they also report a new 90% CL upper limit of 0.0042 on the rate of D{sup 0}-{bar D}{sup 0} mixing using the decay modes D*{sup +} {yields} D{sup 0}{pi}{sup +}, D{sup 0} {yields} [K/K*]ev (+c.c.).

Purohit, M.V.; /South Carolina U.

2005-10-11

320

Assuming the lightest neutralino solely composes the cosmic dark matter, we examine the constraints of the CDMS-II and XENON100 dark matter direct searches on the parameter space of the minimal supersymmetric standard model (MSSM) Higgs sector. We find that the current CDMS-II/XENON100 limits can exclude some of the parameter space which survive the constraints from the dark matter relic density and various collider experiments. We also find that in the currently allowed parameter space, the charged Higgs boson is hardly accessible at the LHC for an integrated luminosity of 30 fb{sup -1}, while the neutral non-SM (standard model) Higgs bosons (H,A) may be accessible in some allowed region characterized by a large {mu}. The future XENON100 (6000 kg-days exposure) will significantly tighten the parameter space in case of nonobservation of dark matter.

Cao, Junjie [Department of Physics, Henan Normal University, Xinxiang 453007 (China); Hikasa, Ken-ichi [Department of Physics, Tohoku University, Sendai 980-8578 (Japan); Wang, Wenyu [Institute of Theoretical Physics, College of Applied Science, Beijing University of Technology, Beijing 100124 (China); Yang, Jin Min; Yu, Li-Xin [Institute of Theoretical Physics, Academia Sinica, Beijing 100190 (China)

2010-09-01

321

Parallel hierarchical global illumination

Solving the global illumination problem is equivalent to determining the intensity of every wavelength of light in all directions at every point in a given scene. The complexity of the problem has led researchers to use approximation methods for solving the problem on serial computers. Rather than using an approximation method, such as backward ray tracing or radiosity, the authors have chosen to solve the Rendering Equation by direct simulation of light transport from the light sources. This paper presents an algorithm that solves the Rendering Equation to any desired accuracy, and can be run in parallel on distributed memory or shared memory computer systems with excellent scaling properties. It appears superior in both speed and physical correctness to recent published methods involving bidirectional ray tracing or hybrid treatments of diffuse and specular surfaces. Like progressive radiosity methods, it dynamically refines the geometry decomposition where required, but does so without the excessive storage requirements for ray histories. The algorithm, called Photon, produces a scene which converges to the global illumination solution. This amounts to a huge task for a 1997-vintage serial computer, but using the power of a parallel supercomputer significantly reduces the time required to generate a solution. Currently, Photon can be run on most parallel environments from a shared memory multiprocessor to a parallel supercomputer, as well as on clusters of heterogeneous workstations.
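Direct simulation of light transport from the light sources can be illustrated with a toy 2D analogue (not the Photon algorithm itself): photons leave an isotropic point source, deposit a fraction of their energy at each diffuse wall hit, and the per-wall tallies converge toward the wall irradiances as photon count grows. The scene, albedo, and photon budget are invented for the sketch.

```python
import math
import random

random.seed(42)

# Forward photon transport in a toy 2D "scene": an isotropic point light at
# the centre of a unit square; each photon travels to a wall, where part of
# its energy is absorbed (tallied) and the rest is reflected diffusely.
bins = [0.0] * 4              # absorbed energy per wall: left, right, bottom, top
albedo = 0.5                  # diffuse reflectance of every wall (invented)
n_photons = 2000

for _ in range(n_photons):
    x, y = 0.5, 0.5
    theta = random.uniform(0, 2 * math.pi)
    energy = 1.0
    while energy > 1e-3:      # terminate once the carried energy is negligible
        dx, dy = math.cos(theta), math.sin(theta)
        # distance along the ray to each wall it can still hit
        ts = []
        if dx < 0: ts.append((-x / dx, 0))
        if dx > 0: ts.append(((1 - x) / dx, 1))
        if dy < 0: ts.append((-y / dy, 2))
        if dy > 0: ts.append(((1 - y) / dy, 3))
        t, wall = min(ts)
        x, y = x + t * dx, y + t * dy
        bins[wall] += (1 - albedo) * energy   # absorbed fraction is tallied
        energy *= albedo                      # remainder is reflected
        # diffuse bounce: uniform new direction away from the wall
        theta = random.uniform(0, math.pi)
        if wall == 0: theta -= math.pi / 2    # left wall: into the +x half
        elif wall == 1: theta += math.pi / 2  # right wall: into the -x half
        elif wall == 3: theta += math.pi      # top wall: into the -y half
        # wall == 2 (bottom): theta in (0, pi) already points into +y

print([round(b / n_photons, 3) for b in bins])
```

By symmetry each wall should absorb about a quarter of the emitted energy, and the total tallied energy equals the emitted energy up to the termination threshold; that energy bookkeeping is what makes forward simulation physically faithful, at the cost of needing many photons, hence the appeal of parallel execution.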

Snell, Q.O.

1997-10-08

322

An information-theoretic approach to text searching in direct access systems

Using direct access computer files of bibliographic information, an attempt is made to overcome one of the problems often associated with information retrieval, namely, the maintenance and use of large dictionaries, the greater part of which is used only infrequently. A novel method is presented, which maps the hyperbolic frequency distribution of text characteristics onto a rectangular distribution. This is

Ian J. Barton; Susan E. Creasey; Michael F. Lynch; Michael J. Snell

1974-01-01

323

Tools for model-independent bounds in direct dark matter searches

We discuss a framework (based on non-relativistic operators) and a self-contained set of numerical tools to derive the bounds from some current direct detection experiments on virtually any arbitrary model of Dark Matter elastically scattering on nuclei.

Cirelli, Marco [Institut de Physique Théorique, CNRS, URA 2306 and CEA/Saclay, F-91191 Gif-sur-Yvette (France); Nobile, Eugenio Del; Panci, Paolo, E-mail: marco.cirelli@cea.fr, E-mail: delnobile@physics.ucla.edu, E-mail: panci@cp3-origins.net [CP3-Origins and DIAS, University of Southern Denmark, Campusvej 55, DK-5230 Odense M (Denmark)

2013-10-01

324

NASA Astrophysics Data System (ADS)

Low-mass stars between 0.1 and 0.6 M☉ are the most abundant members of our galaxy and may be the most common sites of planet formation, but little is known about the outer architecture of their planetary systems. We have carried out a high-contrast adaptive optics imaging search for gas giant planets between 1 and 13 MJup around 122 newly identified young M dwarfs in the solar neighborhood (≲ 35 pc). Half of our targets are younger than 145 Myr, and 90% are younger than 580 Myr. After removing 39 resolved stellar binaries, our homogeneous sample of 83 single young M dwarfs constitutes the largest imaging search for planets around low-mass stars to date. Our H- and K-band coronagraphic observations with Subaru/HiCIAO and Keck/NIRC2 achieve typical contrasts of 9–13 mag and 12–14 mag at 1″, respectively, which corresponds to limiting masses of ~1–10 MJup at 10–30 AU for most of our sample. We discovered four brown dwarfs with masses between 25 and 60 MJup at projected separations of 4–190 AU. Over 100 candidate planets were discovered, nearly all of which were found to be background stars from follow-up second-epoch imaging. Our null detection of planets nevertheless provides strong statistical constraints on the occurrence rate of giant planets around M dwarfs. Assuming circular orbits and a logarithmically flat power-law distribution in planet mass and semi-major axis of the form d²N/(d log a d log m) ∝ m⁰a⁰, we measure an upper limit (at the 95% confidence level) of 8.8% and 12.6% for 1–13 MJup companions between 10–100 AU for hot-start and cold-start evolutionary models, respectively. For massive gas giant planets in the 5–13 MJup range like those orbiting HR 8799, GJ 504, and beta Pictoris, we find that fewer than 5.3% (7.8%) of M dwarfs harbor these planets between 10–100 AU for a hot-start (cold-start) formation scenario.
Our best constraints are for brown dwarf companions; the frequency of 13–75 MJup companions between (de-projected) physical separations of 10–100 AU is 2.1 (+2.1/−1.2)%. Altogether, our results show that gas giant planets, especially massive ones, are rare in the outskirts of M dwarf planetary systems. If disk instability is a viable way to form planets, our constraints for the most common type of star imply that overall it is an inefficient mechanism.
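
The statistical logic of such a null-result limit can be sketched: with zero detections, the 95% Poisson upper limit on the expected number of detections is −ln(0.05) ≈ 3.0, and dividing by the survey's summed per-star sensitivity (the probability that each star's planet, if present, would have been detected) gives the occurrence-rate limit. The sensitivity values below are illustrative, not the survey's actual completeness:

```python
# Illustrative null-detection upper limit: with 0 planets found, the 95%
# Poisson upper limit on the expected number of detections is -ln(0.05),
# and the occurrence-rate limit follows from the survey's summed per-star
# detection probabilities. The completeness values here are made up.
import math

def occurrence_upper_limit(per_star_completeness, cl=0.95):
    # expected detections if every star hosted one such planet
    n_eff = sum(per_star_completeness)
    return -math.log(1.0 - cl) / n_eff

# e.g. 83 stars, each ~40% sensitive to the planets considered (illustrative)
sens = [0.4] * 83
limit = occurrence_upper_limit(sens)   # fraction of stars hosting such a planet
```

With these made-up numbers the limit comes out near 9%, the same order as the quoted constraints.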

Bowler, Brendan Peter

325

Parallel Anisotropic Tetrahedral Adaptation

NASA Technical Reports Server (NTRS)

An adaptive method that robustly produces high-aspect-ratio tetrahedra to a general 3D metric specification without introducing hybrid semi-structured regions is presented. The elemental operators and higher-level logic are described with their respective domain-decomposed parallelizations. An anisotropic tetrahedral grid adaptation scheme is demonstrated for 1000:1 stretching for a simple cube geometry. This form of adaptation is applicable to more complex domain boundaries via a cut-cell approach, as demonstrated by a parallel 3D supersonic simulation of a complex fighter aircraft. To avoid the assumptions and approximations required to form a metric to specify adaptation, an approach is introduced that directly evaluates interpolation error. The grid is adapted to reduce and equidistribute this interpolation error without the use of an intervening anisotropic metric. Direct interpolation error adaptation is illustrated for 1D and 3D domains.
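
The equidistribution principle mentioned above can be sketched in one dimension: place grid nodes so that every interval carries an equal share of an integrated error indicator. A minimal illustration (the indicator function, sample counts, and node counts are all made up, not the paper's method):

```python
# 1D sketch of error equidistribution: nodes are placed at equal increments
# of the cumulative integral of an error indicator, so intervals shrink
# where the indicator (a stand-in for interpolation error) is large.
import math

def equidistribute(indicator, a, b, n_nodes, samples=2000):
    xs = [a + (b - a) * i / samples for i in range(samples + 1)]
    # cumulative integral of the indicator (trapezoid rule)
    cum = [0.0]
    for x0, x1 in zip(xs, xs[1:]):
        cum.append(cum[-1] + 0.5 * (indicator(x0) + indicator(x1)) * (x1 - x0))
    total = cum[-1]
    nodes, j = [], 0
    for k in range(n_nodes):
        target = total * k / (n_nodes - 1)
        while j < samples and cum[j + 1] < target:
            j += 1
        nodes.append(xs[j])
    return nodes

# an indicator sharply peaked at x = 0 pulls nodes toward the origin
grid = equidistribute(lambda x: 1.0 / math.cosh(20 * x), -1.0, 1.0, 21)
```

The resulting grid clusters most of its 21 nodes in the narrow region where the indicator is large, which is the 1D analogue of the anisotropic refinement described in the abstract.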

Park, Michael A.; Darmofal, David L.

2008-01-01

326

The detection of the theoretically expected dark matter is central to particle physics and cosmology. Currently fashionable supersymmetric models provide a natural dark matter candidate, the lightest supersymmetric particle (LSP). The allowed parameter space of such models, combined with fairly well understood physics (the quark substructure of the nucleon and nuclear structure), permits the evaluation of the event rate for LSP-nucleus elastic scattering. The event rates obtained this way, which depend sensitively on the parameters of the allowed space, are usually very low or even undetectable. So, for background reduction, one would like to exploit two characteristic signatures of the reaction: the directional rate, which depends on the Sun's direction of motion, and the modulation effect, i.e. the dependence of the event rate on the Earth's annual motion. In the present paper we study these phenomena in a specific class of non-isothermal models, which take into account the late in-fall of dark matter into our galaxy, producing flows of caustic rings. We find that the modulation effect arising from such models is smaller than that found previously with isothermal symmetric velocity distributions, and much smaller compared to that obtained using a realistic asymmetric distribution with enhanced dispersion in the galactocentric direction.

J. D. Vergados

2001-01-02

327

Future directions in the microwave cavity search for dark matter axions

NASA Astrophysics Data System (ADS)

The axion is a light pseudoscalar particle which suppresses CP-violating effects in strong interactions and also happens to be an excellent dark matter candidate. Axions constituting the dark matter halo of our galaxy may be detected by their resonant conversion to photons in a microwave cavity permeated by a magnetic field. The current generation of the microwave cavity experiment has demonstrated sensitivity to plausible axion models, and upgrades in progress should achieve the sensitivity required for a definitive search, at least for low-mass axions. However, a comprehensive strategy for scanning the entire mass range, from 1-1000 μeV, will require significant technological advances to maintain the needed sensitivity at higher frequencies. Such advances could include sub-quantum-limited amplifiers based on squeezed vacuum states, bolometers, and/or superconducting microwave cavities. The Axion Dark Matter eXperiment at High Frequencies (ADMX-HF) represents both a pathfinder for first data in the 20-100 μeV range (5-25 GHz) and an innovation test-bed for these concepts.
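
The mass-to-frequency mapping quoted above follows directly from the resonance condition ν = m_a c²/h; a quick numerical sketch (CODATA value of Planck's constant) reproduces it:

```python
# Convert axion mass (micro-eV) to resonant microwave cavity frequency (GHz)
# via nu = m_a c^2 / h. With the mass expressed in eV, c^2 is already folded in.
H_EV_S = 4.135667696e-15  # Planck constant in eV*s

def axion_freq_ghz(mass_ueV: float) -> float:
    """Photon frequency corresponding to an axion of the given mass."""
    return mass_ueV * 1e-6 / H_EV_S / 1e9

# 20 ueV -> ~4.8 GHz and 100 ueV -> ~24.2 GHz, matching the quoted 5-25 GHz band
freqs = {m: axion_freq_ghz(m) for m in (1, 20, 100, 1000)}
```

So each μeV of axion mass corresponds to roughly 0.24 GHz of cavity frequency, which is why the 1-1000 μeV range spans such a wide instrumental bandwidth.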

Shokair, T. M.; Root, J.; van Bibber, K. A.; Brubaker, B.; Gurevich, Y. V.; Cahn, S. B.; Lamoreaux, S. K.; Anil, M. A.; Lehnert, K. W.; Mitchell, B. K.; Reed, A.; Carosi, G.

2014-07-01

328

Parallel multi-computers and artificial intelligence

This book examines the present state and future direction of multicomputer parallel architectures for artificial intelligence research and the development of artificial intelligence applications. The book provides a survey of the large variety of parallel architectures, describing the current state of the art and suggesting promising architectures for producing artificial intelligence systems such as intelligent robots. This book integrates the artificial intelligence and parallel processing research areas and discusses parallel processing from the viewpoint of artificial intelligence.

Uhr, L.

1986-01-01

329

Searching for Supersymmetric Dark Matter- The Directional Rate for Caustic Rings

The detection of the theoretically expected dark matter is central to particle physics and cosmology. Current fashionable supersymmetric models provide a natural dark matter candidate which is the lightest supersymmetric particle (LSP). The theoretically obtained event rates are usually very low or even undetectable. So the experimentalists would like to exploit special signatures like the directional rates and the modulation effect. We study these signatures in the present paper focusing on a specific class of non-isothermal models involving flows of caustic rings.

J. D. Vergados

2000-10-22

330

A search for strong-field direct two electron ionization using coincidence spectroscopy

We report on our program in detecting two-electron ionization using electron-electron and electron-ion coincidence measurements. The coincidence techniques have been applied to the multiphoton ionization (MPI) of xenon atoms with 0.527 μm excitation. The results show that direct two-electron ionization is not occurring, which is at variance with an earlier report. We also present a polarization study on the MPI of helium at 0.62 μm and discuss these results in the context of existing models.

Agostini, P.; Mevel, E.; Breger, P. (CEA Centre d'Etudes Nucleaires de Saclay, 91 - Gif-sur-Yvette (France). Service de Recherches sur les Surfaces et de l'Irradiation de la Matiere); Walker, B.; Yang, B.; DiMauro, L.F. (Brookhaven National Lab., Upton, NY (United States))

1993-01-01

331

A search for strong-field direct two electron ionization using coincidence spectroscopy

We report on our program in detecting two-electron ionization using electron-electron and electron-ion coincidence measurements. The coincidence techniques have been applied to the multiphoton ionization (MPI) of xenon atoms with 0.527 μm excitation. The results show that direct two-electron ionization is not occurring, which is at variance with an earlier report. We also present a polarization study on the MPI of helium at 0.62 μm and discuss these results in the context of existing models.

Agostini, P.; Mevel, E.; Breger, P. [CEA Centre d'Etudes Nucleaires de Saclay, 91 - Gif-sur-Yvette (France). Service de Recherches sur les Surfaces et de l'Irradiation de la Matiere]; Walker, B.; Yang, B.; DiMauro, L.F. [Brookhaven National Lab., Upton, NY (United States)]

1993-05-01

332

A direct imaging search for close stellar and sub-stellar companions to young nearby stars

NASA Astrophysics Data System (ADS)

A total of 28 young nearby stars (ages ≲ 60 Myr) have been observed in the K_s-band with the adaptive optics imager Naos-Conica of the Very Large Telescope at the Paranal Observatory in Chile. Among the targets are ten visual binaries and one triple system at distances between 10 and 130 pc, all previously known. During a first observing epoch a total of 20 faint stellar or sub-stellar companion-candidates were detected around seven of the targets. These fields, as well as most of the stellar binaries, were re-observed with the same instrument during a second epoch, about one year later. We present the astrometric observations of all binaries. Their analysis revealed that all stellar binaries are co-moving. In two cases (HD 119022 AB and FG Aqr B/C) indications for significant orbital motions were found. However, all sub-stellar companion candidates turned out to be non-moving background objects, except PZ Tel, which is part of this project but whose results were published elsewhere. Detection limits were determined for all targets, and limiting masses were derived adopting three different age values; they turn out to be less than 10 Jupiter masses in most cases, well below the brown dwarf mass range. The fraction of stellar multiplicity and of the sub-stellar companion occurrence in the star forming regions in Chamaeleon are compared to the statistics of our search, and possible reasons for the observed differences are discussed. Based on observations made with ESO telescopes at Paranal Observatory under programme IDs 083.C-0150(B), 084.C-0364(A), 084.C-0364(B), 084.C-0364(C), 086.C-0600(A) and 086.C-0600(B).

Vogt, N.; Mugrauer, M.; Neuhäuser, R.; Schmidt, T. O. B.; Contreras-Quijada, A.; Schmidt, J. G.

2015-01-01

333

The bibliography contains citations concerning a concept in computers called Massively Parallel Processing. The processing power of a computer may be increased by using numerous processors in parallel and feeding data through a number of different computational paths at the same time. The citations explore these computers and their practical uses, and include case studies, specific problems solved, theory, and future possibilities and needs. Applications of neural network modeling, pattern recognition, image processing, local area routing, and genetic sequence comparison are discussed. (Contains 250 citations and includes a subject term index and title list.)

Not Available

1992-08-01

334

Direct search for dark matter—striking the balance—and the future

NASA Astrophysics Data System (ADS)

Weakly Interacting Massive Particles (WIMPs) are among the main candidates for the relic dark matter (DM). The idea of direct DM detection relies on elastic spin-dependent (SD) and spin-independent (SI) interaction of WIMPs with target nuclei. In this review paper the relevant formulae for WIMP event rate calculations are collected. For estimations of the WIMP-proton and WIMP-neutron SD and SI cross sections the effective low-energy minimal supersymmetric standard model is used. The traditional one-coupling-dominance approach for evaluation of the exclusion curves is described. Further, the mixed spin-scalar coupling approach is discussed. It is demonstrated, taking the high-spin 73Ge dark matter experiment HDMS as an example, how one can drastically improve the sensitivity of the exclusion curves within the mixed spin-scalar coupling approach, as well as through a new procedure of background subtraction from the measured spectrum. A general discussion of the information obtained from exclusion curves is given. The necessity of clear WIMP direct detection signatures for a solution of the dark matter problem is pointed out.

Bednyakov, V. A.; Klapdor-Kleingrothaus, H. V.

2009-09-01

335

Mixture diffusion of two dyes (C.I. Direct Blue 15 (DB15) and C.I. Direct Yellow 12 (DY12)) with different affinity onto the substrate into cellulose membrane from the binary solution was studied at 55°C. Uptake curves and concentration–distance profiles were measured experimentally in the ratios (DB15:DY12) 1:0.5, 1:1 and 1:2. It was examined whether the diffusion of the dyes could be

Masako Maekawa; Haruko Nagai; Kayoko Magara

2000-01-01

336

Implementing a parallel C++ runtime system for scalable parallel systems

pC++ is a language extension to C++ designed to allow programmers to compose "concurrent aggregate" collection classes which can be aligned and distributed over the memory hierarchy of a parallel machine in a manner modeled on the High Performance Fortran Forum (HPFF) directives for Fortran 90. pC++ allows the user to write portable and efficient code which will run on a wide range of scalable parallel computer systems.

A. Malony; B. Mohr; P. Beckman; D. Gannon; S. Yang; F. Bodin; S. Kesavan

1993-01-01

337

The workshop ‘Spatial models in animal ecology, management and conservation’ held at Silwood Park (UK), 9–11 March 2010, aimed to synthesize recent progress in modelling the spatial dynamics of individuals, populations and species ranges and to provide directions for research. It brought together marine and terrestrial researchers working on spatial models at different levels of organization, using empirical as well as theory-driven approaches. Different approaches, temporal and spatial scales, and practical constraints predominate at different levels of organization and in different environments. However, there are theoretical concepts and specific methods that can fruitfully be transferred across levels and systems, including: habitat suitability characterization, movement rules, and ways of estimating uncertainty. PMID:20484232

Struve, Juliane; Lorenzen, Kai; Blanchard, Julia; Börger, Luca; Bunnefeld, Nils; Edwards, Charles; Hortal, Joaquín; MacCall, Alec; Matthiopoulos, Jason; Van Moorter, Bram; Ozgul, Arpat; Royer, François; Singh, Navinder; Yesson, Chris; Bernard, Rodolphe

2010-01-01

338

SEEK: A FORTRAN optimization program using a feasible directions gradient search

NASA Technical Reports Server (NTRS)

This report describes the use of the computer program SEEK, which works in conjunction with two user-written subroutines and an input data file to perform an optimization procedure on a user's problem. The optimization method uses a modified feasible directions gradient technique. SEEK is written in ANSI standard Fortran 77, has an object size of about 46K bytes, and can be used on a personal computer running DOS. This report describes the use of the program and discusses the optimizing method. The program's use is illustrated with four example problems: a bushing design, a helical coil spring design, a gear mesh design, and a two-parameter Weibull life-reliability curve fit.
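
The flavor of a feasible-directions gradient step can be sketched in a few lines: descend along the negative objective gradient, but when a constraint becomes active, project the step onto the constraint so the iterate stays (approximately) feasible. This is a generic projected-gradient variant for illustration, not SEEK's actual Fortran algorithm; all names and numbers are made up:

```python
# Illustrative feasible-directions-style descent for min f(x) s.t. g(x) <= 0
# (single constraint). Gradients by forward finite differences; when the
# constraint is active, the descent direction is projected onto its tangent.
def grad(fun, x, h=1e-6):
    fx = fun(x)
    return [(fun(x[:i] + [xi + h] + x[i + 1:]) - fx) / h for i, xi in enumerate(x)]

def feasible_descent(f, g, x, step=0.1, iters=200):
    """Minimize f subject to g(x) <= 0, starting from a feasible point x."""
    for _ in range(iters):
        d = [-c for c in grad(f, x)]
        if g(x) > -1e-8:                       # constraint active: project d
            n = grad(g, x)
            coef = sum(di * ni for di, ni in zip(d, n)) / sum(ni * ni for ni in n)
            if coef > 0:                       # only if d points out of the region
                d = [di - coef * ni for di, ni in zip(d, n)]
        x = [xi + step * di for xi, di in zip(x, d)]
        if g(x) > 0:                           # safeguard: halve the net step
            x = [xi - 0.5 * step * di for xi, di in zip(x, d)]
    return x

# minimize (x-2)^2 + (y-2)^2 subject to x + y <= 2: constrained optimum near (1, 1)
x_opt = feasible_descent(lambda x: (x[0] - 2) ** 2 + (x[1] - 2) ** 2,
                         lambda x: x[0] + x[1] - 2, [0.0, 0.0])
```

A production method like SEEK's adds usable directions that balance descent against constraint violation, line searches, and multiple constraints; the sketch only shows the projection idea.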

Savage, M.

1995-01-01

339

Global interpretation of direct Dark Matter searches after CDMS-II results

We perform a global fit to data from Dark Matter (DM) direct detection experiments, including the recent CDMS-II results. We discuss possible interpretations of the DAMA annual modulation signal in terms of spin-independent and spin-dependent DM-nucleus interactions, both for elastic and inelastic scattering. We find that for the spin-dependent inelastic scattering off protons a good fit to all data is obtained. We present a simple toy model realizing such a scenario. In all the remaining cases the DAMA allowed regions are disfavored by other experiments or suffer from severe fine tuning of DM parameters with respect to the galactic escape velocity. Finally, we also entertain the possibility that the two events observed in CDMS-II are an actual signal of elastic DM scattering, and we compare the resulting CDMS-II allowed regions to the exclusion limits from other experiments.

Kopp, Joachim; Schwetz, Thomas; Zupan, Jure

2009-12-01

340

Computer-Aided Parallelizer and Optimizer

NASA Technical Reports Server (NTRS)

The Computer-Aided Parallelizer and Optimizer (CAPO) automates the insertion of compiler directives (see figure) to facilitate parallel processing on Shared Memory Parallel (SMP) machines. While CAPO currently is integrated seamlessly into CAPTools (developed at the University of Greenwich, now marketed as ParaWise), CAPO was independently developed at Ames Research Center as one of the components for the Legacy Code Modernization (LCM) project. The current version takes serial FORTRAN programs, performs interprocedural data dependence analysis, and generates OpenMP directives. Due to the widely supported OpenMP standard, the generated OpenMP codes have the potential to run on a wide range of SMP machines. CAPO relies on accurate interprocedural data dependence information currently provided by CAPTools. Compiler directives are generated through identification of parallel loops in the outermost level, construction of parallel regions around parallel loops and optimization of parallel regions, and insertion of directives with automatic identification of private, reduction, induction, and shared variables. Attempts also have been made to identify potential pipeline parallelism (implemented with point-to-point synchronization). Although directives are generated automatically, user interaction with the tool is still important for producing good parallel codes. A comprehensive graphical user interface is included for users to interact with the parallelization process.

Jin, Haoqiang

2011-01-01

341

NSDL National Science Digital Library

Several tutorials on parallel computing. Overview of parallel computing. Porting and code parallelization. Scalar, cache, and parallel code tuning. Timing, profiling and performance analysis. Overview of IBM Regatta P690.

NCSA

342

Direct search for a ferromagnetic phase in a heavily overdoped nonsuperconducting copper oxide

The doping of charge carriers into the CuO2 planes of copper oxide Mott insulators causes a gradual destruction of antiferromagnetism and the emergence of high-temperature superconductivity. Optimal superconductivity is achieved at a doping concentration p beyond which further increases in doping cause a weakening and eventual disappearance of superconductivity. A potential explanation for this demise is that ferromagnetic fluctuations compete with superconductivity in the overdoped regime. In this case, a ferromagnetic phase at very low temperatures is predicted to exist beyond the doping concentration at which superconductivity disappears. Here we report on a direct examination of this scenario in overdoped La2-xSrxCuO4 using the technique of muon spin relaxation. We detect the onset of static magnetic moments of electronic origin at low temperature in the heavily overdoped nonsuperconducting region. However, the magnetism does not exist in a commensurate long-range ordered state. Instead it appears as a dilute concentration of static magnetic moments. This finding places severe restrictions on the form of ferromagnetism that may exist in the overdoped regime. Although an extrinsic impurity cannot be absolutely ruled out as the source of the magnetism that does occur, the results presented here lend support to electronic band calculations that predict the occurrence of weak localized ferromagnetism at high doping. PMID:20855579

Sonier, J. E.; Kaiser, C. V.; Pacradouni, V.; Sabok-Sayr, S. A.; Cochrane, C.; MacLaughlin, D. E.; Komiya, S.; Hussey, N. E.

2010-01-01

343

Making sense of the local Galactic escape speed estimates in direct dark matter searches

Direct detection (DD) of dark matter (DM) candidates in the ≲10 GeV mass range is very sensitive to the tail of their velocity distribution. The important quantity is the maximum WIMP speed in the observer's rest frame, i.e. on average the sum of the local Galactic escape speed v_esc and of the circular velocity of the Sun v_c. While the latter has been receiving continuous attention, the former is more difficult to constrain. The RAVE Collaboration has just released a new estimate of v_esc (Piffl et al. 2014, hereafter P14) that supersedes the previous one (Smith et al. 2007), which is of interest in the perspective of reducing the astrophysical uncertainties in DD. Nevertheless, these new estimates cannot be used blindly as they rely on assumptions in the dark halo modeling which induce tight correlations between the escape speed and other local astrophysical parameters. We make a self-consistent study of the implications of the RAVE results on DD assuming isotropic DM velocity distributions, both Maxwellian and ergodic. Taking as references the experimental sensitivities currently achieved by LUX, CRESST-II, and SuperCDMS, we show that: (i) the exclusion curves associated with the best-fit points of P14 may be more constraining by up to ~40% with respect to standard limits, because the underlying astrophysical correlations induce a larger local DM density; (ii) the corresponding relative uncertainties inferred in the low WIMP mass region may be moderate, down to 10-15% below 10 GeV. We finally discuss the level of consistency of these results with other independent astrophysical constraints. This analysis is complementary to others based on rotation curves.

Julien Lavalle; Stefano Magni

2015-01-22

344

Making sense of the local Galactic escape speed estimates in direct dark matter searches

NASA Astrophysics Data System (ADS)

Direct detection (DD) of dark matter (DM) candidates in the ≲10 GeV mass range is very sensitive to the tail of their velocity distribution. The important quantity is the maximum weakly interacting massive particle speed in the observer's rest frame, i.e. on average the sum of the local Galactic escape speed vesc and of the circular velocity of the Sun vc. While the latter has been receiving continuous attention, the former is more difficult to constrain. The RAVE Collaboration has just released a new estimate of vesc [T. Piffl et al., Astron. Astrophys. 562, A91 (2014)] that supersedes the previous one [M. C. Smith, et al. Mon. Not. R. Astron. Soc. 379, 755 (2007)], which is of interest in the perspective of reducing the astrophysical uncertainties in DD. Nevertheless, these new estimates cannot be used blindly as they rely on assumptions in the dark halo modeling which induce tight correlations between the escape speed and other local astrophysical parameters. We make a self-consistent study of the implications of the RAVE results on DD assuming isotropic DM velocity distributions, both Maxwellian and ergodic. Taking as references the experimental sensitivities currently achieved by LUX, CRESST-II, and SuperCDMS, we show that (i) the exclusion curves associated with the best-fit points of P14 may be more constraining by up to ~40% with respect to standard limits, because the underlying astrophysical correlations induce a larger local DM density, and (ii) the corresponding relative uncertainties inferred in the low weakly interacting massive particle mass region may be moderate, down to 10-15% below 10 GeV. We finally discuss the level of consistency of these results with other independent astrophysical constraints. This analysis is complementary to others based on rotation curves.
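
Why the tail matters can be shown numerically: the scattering rate for light candidates is proportional to the mean inverse speed η(v_min) over halo particles faster than v_min, and the escape speed truncates exactly that tail. A toy sketch with a sharply cut Maxwellian (all values illustrative; the Earth's motion and the halo-model correlations discussed in the abstract are ignored):

```python
# Toy mean-inverse-speed integral eta(vmin) = <1/v> over halo WIMPs with
# v > vmin, for a Maxwellian speed distribution f(v) ~ v^2 exp(-(v/v0)^2)
# sharply truncated at vesc. For light WIMPs, vmin lies in the tail, so
# the result is very sensitive to the adopted escape speed.
import math

def eta(vmin, v0=220.0, vesc=550.0, n=20000):
    num = den = 0.0
    dv = vesc / n
    for i in range(n):
        v = (i + 0.5) * dv
        f = v * v * math.exp(-(v / v0) ** 2)   # unnormalized speed distribution
        den += f * dv
        if v > vmin:
            num += (f / v) * dv
    return num / den                            # units: 1/(km/s)

low = eta(500.0, vesc=533.0)   # lower escape-speed estimate
high = eta(500.0, vesc=580.0)  # higher escape-speed estimate
```

With v_min = 500 km/s the two escape-speed choices change η by a large factor, while for small v_min the dependence is negligible, which is the effect driving the low-mass exclusion-curve uncertainties.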

Lavalle, Julien; Magni, Stefano

2015-01-01

345

Analytic Guided-Search Model of Human Performance Accuracy in Target-Localization Search Tasks

NASA Technical Reports Server (NTRS)

Current models of human visual search have extended the traditional serial/parallel search dichotomy. Two successful models for predicting human visual search are the Guided Search model and the Signal Detection Theory model. Although these models are inherently different, it has been difficult to compare them because the Guided Search model is designed to predict response time, while Signal Detection Theory models are designed to predict performance accuracy. Moreover, current implementations of the Guided Search model require the use of Monte-Carlo simulations, a method that makes fitting the model's performance quantitatively to human data more computationally time consuming. We have extended the Guided Search model to predict human accuracy in target-localization search tasks. We have also developed analytic expressions that simplify simulation of the model to the evaluation of a small set of equations using only three free parameters. This new implementation and extension of the Guided Search model will enable direct quantitative comparisons with human performance in target-localization search experiments and with the predictions of Signal Detection Theory and other search accuracy models.
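
A standard signal-detection expression of the kind such an analytic model reduces to: with M candidate locations and target signal strength d′, the probability of correctly localizing the target is the chance that the target location's noisy response exceeds all M−1 distractor responses, P = ∫ φ(x − d′) Φ(x)^(M−1) dx. The sketch below evaluates this textbook M-alternative formula numerically; it is not necessarily the authors' exact set of equations:

```python
# Numeric evaluation of the M-alternative localization-accuracy integral
# P(correct) = integral of phi(x - dprime) * Phi(x)^(M-1) dx, where phi/Phi
# are the standard normal pdf/cdf. dprime = 0 gives chance performance 1/M.
import math

def phi(x):
    return math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)

def Phi(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def p_correct(d_prime, m, lo=-8.0, hi=12.0, n=4000):
    dx = (hi - lo) / n
    return sum(phi(x - d_prime) * Phi(x) ** (m - 1) * dx
               for x in (lo + (i + 0.5) * dx for i in range(n)))
```

Accuracy rises smoothly from 1/M at d′ = 0 toward 1 as the target signal strengthens, which is the quantity such a model fits against human localization data.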

Eckstein, Miguel P.; Beutter, Brent R.; Stone, Leland S.

2000-01-01

346

Parallel asynchronous particle swarm optimization

The high computational cost of complex engineering optimization problems has motivated the development of parallel optimization algorithms. A recent example is the parallel particle swarm optimization (PSO) algorithm, which is valuable due to its global search capabilities. Unfortunately, because existing parallel implementations are synchronous (PSPSO), they do not make efficient use of computational resources when a load imbalance exists. In this study, we introduce a parallel asynchronous PSO (PAPSO) algorithm to enhance computational efficiency. The performance of the PAPSO algorithm was compared to that of a PSPSO algorithm in homogeneous and heterogeneous computing environments for small- to medium-scale analytical test problems and a medium-scale biomechanical test problem. For all problems, the robustness and convergence rate of PAPSO were comparable to those of PSPSO. However, the parallel performance of PAPSO was significantly better than that of PSPSO for heterogeneous computing environments or heterogeneous computational tasks. For example, PAPSO was 3.5 times faster than was PSPSO for the biomechanical test problem executed on a heterogeneous cluster with 20 processors. Overall, PAPSO exhibits excellent parallel performance when a large number of processors (more than about 15) is utilized and either (1) heterogeneity exists in the computational task or environment, or (2) the computation-to-communication time ratio is relatively small. PMID:17224972
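
For readers unfamiliar with the baseline, a minimal serial PSO sketch shows the update loop at issue: a synchronous parallel version (PSPSO) evaluates all particle fitnesses as one batch per iteration, while the asynchronous version (PAPSO) updates each particle as soon as its evaluation returns. Parameter values below are common defaults, not those of the paper:

```python
# Minimal particle swarm optimizer (serial). The inner per-particle loop is
# what PSPSO farms out as a synchronous batch each iteration and PAPSO
# dispatches asynchronously as workers become free.
import random

def pso(f, dim, bounds, n_particles=20, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    rng = random.Random(seed)
    lo, hi = bounds
    x = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    v = [[0.0] * dim for _ in range(n_particles)]
    pbest = [xi[:] for xi in x]                # personal bests
    pval = [f(xi) for xi in x]
    g = min(range(n_particles), key=lambda i: pval[i])
    gbest, gval = pbest[g][:], pval[g]         # global best
    for _ in range(iters):
        for i in range(n_particles):           # one fitness batch in PSPSO
            for d in range(dim):
                v[i][d] = (w * v[i][d]
                           + c1 * rng.random() * (pbest[i][d] - x[i][d])
                           + c2 * rng.random() * (gbest[d] - x[i][d]))
                x[i][d] = min(hi, max(lo, x[i][d] + v[i][d]))
            val = f(x[i])
            if val < pval[i]:
                pval[i], pbest[i] = val, x[i][:]
                if val < gval:
                    gval, gbest = val, x[i][:]
    return gbest, gval

best, val = pso(lambda p: sum(u * u for u in p), dim=3, bounds=(-5.0, 5.0))
```

The load-imbalance problem arises when f is expensive and its cost varies per particle: a synchronous batch waits for its slowest evaluation, while an asynchronous scheme keeps every worker busy.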

Koh, Byung-Il; George, Alan D.; Haftka, Raphael T.; Fregly, Benjamin J.

2006-01-01

347

Observations of cosmic ray arrival directions made with the Pierre Auger Observatory have previously provided evidence of anisotropy at the 99% CL using the correlation of ultra high energy cosmic rays (UHECRs) with objects drawn from the Veron-Cetty Veron catalog. In this paper we report on the use of three catalog independent methods to search for anisotropy. The 2pt-L, 2pt+ and 3pt methods, each giving a different measure of self-clustering in arrival directions, were tested on mock cosmic ray data sets to study the impacts of sample size and magnetic smearing on their results, accounting for both angular and energy resolutions. If the sources of UHECRs follow the same large scale structure as ordinary galaxies in the local Universe and if UHECRs are deflected no more than a few degrees, a study of mock maps suggests that these three methods can efficiently respond to the resulting anisotropy with a P-value = 1.0% or smaller with data sets as few as 100 events. Using data taken from January 1, 2004 to July 31, 2010 we examined the 20, 30, ..., 110 highest energy events with a corresponding minimum energy threshold of about 51 EeV. The minimum P-values found were 13.5% using the 2pt-L method, 1.0% using the 2pt+ method and 1.1% using the 3pt method for the highest 100 energy events. In view of the multiple (correlated) scans performed on the data set, these catalog-independent methods do not yield strong evidence of anisotropy in the highest energy cosmic rays.
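
A minimal version of a 2-point statistic of this kind simply counts pairs of arrival directions separated by less than a given angle; the P-value would come from comparing that count against isotropic mock sets, as the paper does with its mock data studies. Sketch (directions as unit vectors; all values illustrative):

```python
# Minimal 2-point autocorrelation statistic: count pairs of arrival
# directions closer than max_deg. Significance is assessed by comparing
# the observed count to the distribution over isotropic mock catalogs.
import math

def ang_sep(u, v):
    dot = max(-1.0, min(1.0, sum(a * b for a, b in zip(u, v))))
    return math.degrees(math.acos(dot))

def pair_count(directions, max_deg):
    n = len(directions)
    return sum(1 for i in range(n) for j in range(i + 1, n)
               if ang_sep(directions[i], directions[j]) < max_deg)
```

The 2pt-L, 2pt+ and 3pt methods of the paper refine this idea by scanning over angular scales and adding shape information, but the underlying measure of self-clustering is the same.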

Abreu, P.; ,

2012-01-01

348

NASA Astrophysics Data System (ADS)

The optimal operation of water resources systems is a wide and challenging problem due to non-linearities in the model and the objectives, a high-dimensional state-control space, and strong uncertainties in the hydroclimatic regimes. The application of classical optimization techniques (e.g., SDP, Q-learning, gradient descent-based algorithms) is strongly limited by the dimensionality of the system and by the presence of multiple, conflicting objectives. This study presents a novel approach which combines Direct Policy Search (DPS) and Multi-Objective Evolutionary Algorithms (MOEAs) to solve high-dimensional state and control space problems involving multiple objectives. DPS, also known as parameterization-simulation-optimization in the water resources literature, is a simulation-based approach where the reservoir operating policy is first parameterized within a given family of functions and, then, the parameters are optimized with respect to the objectives of the management problem. The selection of a suitable class of functions to which the operating policy belongs is a key step, as it might restrict the search for the optimal policy to a subspace of the decision space that does not include the optimal solution. In the water reservoir literature, a number of classes have been proposed. However, many of these rules are based largely on empirical or experimental successes, and they were designed mostly via simulation and for single-purpose reservoirs. In a multi-objective context similar rules cannot easily be inferred from experience, and the use of universal function approximators is generally preferred. In this work, we comparatively analyze two of the most common universal approximators: artificial neural networks (ANN) and radial basis functions (RBF), under different problem settings to estimate their scalability and flexibility in dealing with more and more complex problems.
The multi-purpose HoaBinh water reservoir in Vietnam, accounting for hydropower production and flood control, is used as a case study. Preliminary results show that the RBF policy parametrization is more effective than the ANN one. In particular, the approximated Pareto front obtained with RBF control policies successfully explores the full tradeoff space between the two conflicting objectives, while most of the ANN solutions turn out to be Pareto-dominated by the RBF ones.
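
An RBF policy parameterization of the kind compared here can be sketched directly: the control (e.g. a release decision) is a weighted sum of Gaussian radial basis functions of the normalized system state, and the centers, widths, and weights form the parameter vector the MOEA searches over. All names and values below are illustrative, not the case-study policy:

```python
# Sketch of a radial-basis-function policy for direct policy search:
# u(s) = sum_i w_i * exp(-||s - c_i||^2 / b_i^2). The tuple list `theta`
# (centers, widths, weights) is the decision vector an MOEA would evolve.
import math

def rbf_policy(theta, state):
    """theta: list of (center, width, weight); state: list of floats."""
    u = 0.0
    for center, width, weight in theta:
        sq = sum((s - c) ** 2 for s, c in zip(state, center))
        u += weight * math.exp(-sq / width ** 2)
    return max(0.0, u)   # releases are non-negative

# illustrative 2-basis policy over a 2D state (e.g. storage, season index)
theta = [((0.2, 0.1), 0.5, 40.0),
         ((0.8, 0.6), 0.3, 90.0)]
release = rbf_policy(theta, [0.75, 0.55])
```

Because the output varies smoothly with both the state and the parameters, small MOEA mutations produce small policy changes, one reason RBF parameterizations explore a multi-objective tradeoff space well.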

Giuliani, Matteo; Mason, Emanuele; Castelletti, Andrea; Pianosi, Francesca

2014-05-01

349

Template based parallel checkpointing in a massively parallel computer system

A method and apparatus for a template-based parallel checkpoint save for a massively parallel supercomputer system using a parallel variation of the rsync protocol and network broadcast. In preferred embodiments, the checkpoint data for each node is compared to a template checkpoint file that resides in the storage and that was previously produced. Embodiments herein greatly decrease the amount of data that must be transmitted and stored, for faster checkpointing and increased efficiency of the computer system. Embodiments are directed to a parallel computer system with nodes arranged in a cluster with a high-speed interconnect that can perform broadcast communication. The checkpoint contains a set of actual small data blocks with their corresponding checksums from all nodes in the system. The data blocks may be compressed using conventional non-lossy data compression algorithms to further reduce the overall checkpoint size.
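
The template-comparison idea can be sketched on a single node: split the checkpoint into blocks, checksum each block against the template, and store (compressed) only the blocks that differ. Block size, hash, and compressor below are assumptions for illustration, not the patented implementation:

```python
# Toy template-based checkpoint delta: keep only blocks whose checksums
# differ from the shared template, losslessly compressed. Restoring applies
# the stored blocks on top of the template.
import hashlib
import zlib

BLOCK = 4096  # assumed block size

def blocks(data: bytes):
    return [data[i:i + BLOCK] for i in range(0, len(data), BLOCK)]

def checksum(block: bytes) -> str:
    return hashlib.md5(block).hexdigest()

def delta_checkpoint(template: bytes, state: bytes):
    """Return {block_index: compressed_block} for blocks differing from the template."""
    tmpl = [checksum(b) for b in blocks(template)]
    delta = {}
    for i, b in enumerate(blocks(state)):
        if i >= len(tmpl) or checksum(b) != tmpl[i]:
            delta[i] = zlib.compress(b)
    return delta

def restore(template: bytes, delta):
    out = blocks(template)
    for i, comp in delta.items():
        blk = zlib.decompress(comp)
        if i < len(out):
            out[i] = blk
        else:
            out.append(blk)
    return b"".join(out)
```

Since most node states stay close to the template between checkpoints, only a small fraction of blocks is transmitted and stored, which is the source of the claimed savings; the rsync-style protocol and broadcast distribution of the template are the parallel pieces the sketch omits.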

Archer, Charles Jens (Rochester, MN); Inglett, Todd Alan (Rochester, MN)

2009-01-13

350

Toward Parallel Document Clustering

A key challenge to automated clustering of documents in large text corpora is the high cost of comparing documents in a multimillion-dimensional document space. The Anchors Hierarchy is a fast data structure and algorithm for localizing data based on a triangle-inequality-obeying distance metric; the algorithm strives to minimize the number of distance calculations needed to cluster the documents into “anchors” around reference documents called “pivots”. We extend the original algorithm to increase the amount of available parallelism and consider two implementations: a complex data structure which affords efficient searching, and a simple data structure which requires repeated sorting. The sorting implementation is integrated with a text corpora “Bag of Words” program, and initial performance results of an end-to-end document-processing workflow are reported.
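
The savings in distance calculations rest on the triangle inequality: if |d(q, p) − d(x, p)| > r for some pivot p, then d(q, x) > r without ever computing it. A minimal sketch of that pruning (a radius query against one pivot, not the full Anchors Hierarchy):

```python
# Triangle-inequality pruning as used in anchors-style clustering: each
# point's distance to a pivot is precomputed once; for a radius-r query
# around q, any point x with |d(q,pivot) - d(x,pivot)| > r cannot lie
# within r of q, so its exact distance is skipped.
import math

def dist(a, b):
    return math.sqrt(sum((u - v) ** 2 for u, v in zip(a, b)))

def radius_query(points, pivot, q, r):
    d_xp = [dist(x, pivot) for x in points]   # one pass, reusable across queries
    d_qp = dist(q, pivot)
    hits, exact_calls = [], 0
    for x, dxp in zip(points, d_xp):
        if abs(d_qp - dxp) > r:               # pruned by the triangle inequality
            continue
        exact_calls += 1
        if dist(q, x) <= r:
            hits.append(x)
    return hits, exact_calls
```

In a document space, `dist` would be a metric over high-dimensional term vectors, so every skipped exact call avoids a multimillion-dimensional comparison, which is where the speed-up comes from.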

Mogill, Jace A.; Haglin, David J.

2011-09-01

351

Parallelization of the SIR code

NASA Astrophysics Data System (ADS)

A high-resolution 3-dimensional model of the photospheric magnetic field is essential for the investigation of small-scale solar magnetic phenomena. The SIR code is an advanced Stokes-inversion code that deduces physical quantities, e.g. magnetic field vector, temperature, and LOS velocity, from spectropolarimetric data. We extended this code by the capability of directly using large data sets and inverting the pixels in parallel. Due to this parallelization it is now feasible to apply the code directly on extensive data sets. Besides, we included the possibility to use different initial model atmospheres for the inversion, which enhances the quality of the results.

Thonhofer, S.; Bellot Rubio, L. R.; Utz, D.; Jurčák, J.; Hanslmeier, A.; Piantschitsch, I.; Pauritsch, J.; Lemmerer, B.; Guttenbrunner, S.

352

Parallel pivoting combined with parallel reduction

NASA Technical Reports Server (NTRS)

Parallel algorithms for triangularization of large, sparse, and unsymmetric matrices are presented. The method combines parallel reduction with a new parallel pivoting technique, control over the generation of fill-ins, and a check for numerical stability, all done in parallel with the work distributed over the active processes. The pivoting technique uses the compatibility relation between pivots to identify parallel pivot candidates and uses the Markowitz number of pivots to minimize fill-in. This technique is not a preordering of the sparse matrix and is applied dynamically as the decomposition proceeds.
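A rough sketch of how pivot compatibility and Markowitz counts interact: candidates are tried in order of increasing Markowitz number, and a pivot joins the parallel set only if it is compatible with every pivot already chosen. This is my own simplified dense-array illustration, not Alaghband's algorithm; a real implementation works on sparse structures and updates counts as elimination proceeds.

```python
def markowitz(a, i, j):
    """Markowitz count (r_i - 1)(c_j - 1): an estimate of the fill-in
    caused by choosing a[i][j] as pivot."""
    r = sum(1 for v in a[i] if v != 0) - 1
    c = sum(1 for row in a if row[j] != 0) - 1
    return r * c

def compatible(a, p, q):
    """Pivots p=(i,j) and q=(k,l) are compatible (usable in the same
    parallel elimination step) when a[i][l] == 0 and a[k][j] == 0."""
    (i, j), (k, l) = p, q
    return a[i][l] == 0 and a[k][j] == 0

def parallel_pivot_set(a):
    """Greedily grow a set of mutually compatible pivots, trying candidates
    in order of increasing Markowitz number to limit fill-in."""
    cands = sorted(((i, j) for i, row in enumerate(a)
                    for j, v in enumerate(row) if v != 0),
                   key=lambda p: markowitz(a, *p))
    chosen = []
    for p in cands:
        if all(p[0] != q[0] and p[1] != q[1] and compatible(a, p, q)
               for q in chosen):
            chosen.append(p)
    return chosen
```

On a diagonal matrix every diagonal entry is compatible with the others, so all can be eliminated in one parallel step; on a fully dense matrix no two pivots are compatible and the set degenerates to a single pivot.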

Alaghband, Gita

1987-01-01

353

Special parallel processing workshop

This report contains viewgraphs from the Special Parallel Processing Workshop. These viewgraphs deal with topics such as parallel processing performance, message passing, queue structure, and other basic concepts dealing with parallel processing.

NONE

1994-12-01

354

NASA Technical Reports Server (NTRS)

Several parallel-vector computational improvements to the unconstrained optimization procedure are described which speed up the structural analysis-synthesis process. A fast parallel-vector Choleski-based equation solver, pvsolve, is incorporated into the well-known SAP-4 general-purpose finite-element code. The new code, denoted PV-SAP, is tested for static structural analysis. Initial results on a four processor CRAY 2 show that using pvsolve reduces the equation solution time by a factor of 14-16 over the original SAP-4 code. In addition, parallel-vector procedures for the Golden Block Search technique and the BFGS method are developed and tested for nonlinear unconstrained optimization. A parallel version of an iterative solver and the pvsolve direct solver are incorporated into the BFGS method. Preliminary results on nonlinear unconstrained optimization test problems, using pvsolve in the analysis, show excellent parallel-vector performance indicating that these parallel-vector algorithms can be used in a new generation of finite-element based structural design/analysis-synthesis codes.

Nguyen, D. T.; Al-Nasra, M.; Zhang, Y.; Baddourah, M. A.; Agarwal, T. K.; Storaasli, O. O.; Carmona, E. A.

1991-01-01

355

Parallel-Access Alinement Network Using Barrel Switches

NASA Technical Reports Server (NTRS)

Practical version of parallel-access alinement network utilizes two barrel switches for interfacing N parallel memory modules with N parallel processing elements. Switches are interconnected so that 17 memory ports (MP's) are connected to 17 processor ports (PP's). Network uses two electronic barrel switches to direct data flow in parallel data-processing system. Each switch can shift a multibit parallel input a predetermined number of places to left or right, end off, or end around, in one clock pulse.
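The barrel-switch operation described, shifting a multibit parallel word left or right, end-off or end-around, in a single clock pulse, can be modeled functionally. A small sketch (names and interface are mine; in hardware the constant-time behavior comes from log2(N) stages of multiplexers, not from a loop):

```python
def barrel_shift(bits, places, mode="end_around", direction="left"):
    """One clock-pulse barrel-switch operation on a multibit parallel word.

    mode 'end_around' rotates the word; 'end_off' shifts it and fills the
    vacated positions with zeros.
    """
    n = len(bits)
    places %= n
    if mode == "end_around":
        if direction == "right":
            places = (n - places) % n
        return bits[places:] + bits[:places]
    if direction == "left":  # end-off: shifted-out bits are lost
        return bits[places:] + [0] * places
    return [0] * places + bits[:n - places]
```

Because any shift amount maps to one pass through the multiplexer stages, a hardware barrel switch takes the same time for a 1-place shift as for an (N-1)-place shift.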

Barnes, George H.

1982-01-01

356

NASA Astrophysics Data System (ADS)

A method for finding nodal surfaces on the fly in importance-sampled, fixed-node diffusion Monte Carlo calculations is described. The procedure relies on minimizing the difference between the nodal functions of the guiding wave function, ψT, and ĤψT, where Ĥ is the Hamiltonian. This is done by allowing the trial function to depend on a set of parameters whose values are then optimized using a parallel genetic algorithm (e.g., the Pikaia code developed in astrophysics). Application is made to the calculation of several excited states of a non-integrable two-dimensional quartic oscillator and to excited states of the He-C2H2 complex.
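The abstract describes optimizing trial-function parameters with a parallel genetic algorithm. As a rough illustration of the idea, here is a minimal real-coded GA minimizing a stand-in objective; the function names and operator choices (truncation selection, blend crossover, Gaussian mutation) are my own simplifications, not Pikaia's actual scheme, and the fitness evaluations `f(p)` are exactly what a parallel version would farm out to separate processors.

```python
import random

def genetic_minimize(f, bounds, pop_size=40, gens=60, seed=1):
    """Tiny real-coded genetic algorithm over a single parameter.

    Each generation: rank by fitness, keep the best half (elitism), and
    refill the population with mutated blends of elite pairs. In a parallel
    GA, the pop_size fitness evaluations per generation are independent and
    can run concurrently."""
    rng = random.Random(seed)
    lo, hi = bounds
    pop = [[rng.uniform(lo, hi)] for _ in range(pop_size)]
    for _ in range(gens):
        scored = sorted(pop, key=lambda p: f(p[0]))
        elite = scored[:pop_size // 2]           # truncation selection
        children = []
        while len(children) < pop_size - len(elite):
            a, b = rng.sample(elite, 2)
            x = 0.5 * (a[0] + b[0])              # blend crossover
            x += rng.gauss(0, 0.05 * (hi - lo))  # Gaussian mutation
            children.append([min(hi, max(lo, x))])
        pop = elite + children
    return min(pop, key=lambda p: f(p[0]))[0]
```

Minimizing `(x - 2)**2` on `[-10, 10]` converges near `x = 2`; in the paper's setting the objective would instead measure the mismatch between the nodes of ψT and ĤψT.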

Wairegi, Angeline; Farrelly, David

2015-01-01

357

Filtered beam search in scheduling

Beam search is a technique for searching decision trees, particularly where the solution space is vast. The technique involves systematically developing a small number of solutions in parallel so as to attempt to maximize the probability of finding a good solution with minimal search effort. In this paper, we systematically study the performance behaviour of beam search with other heuristic
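Beam search as described, developing a small number of partial solutions in parallel and pruning to the most promising at each level, can be sketched generically. This is a schematic version with hypothetical callback names, not the filtered beam search studied in the paper:

```python
def beam_search(initial, expand, score, width, is_complete):
    """Generic beam search over a decision tree: keep at most `width`
    partial solutions at each level, always expanding the most promising."""
    beam = [initial]
    while beam and not all(is_complete(s) for s in beam):
        candidates = []
        for state in beam:
            candidates.extend([state] if is_complete(state) else expand(state))
        # Retain the `width` best candidates (higher score = more promising).
        beam = sorted(candidates, key=score, reverse=True)[:width]
    return max(beam, key=score)
```

As a toy scheduling use, sequencing jobs with durations `{a: 3, b: 1, c: 2}` under a total-completion-time score and a beam width of 2 recovers the shortest-processing-time order `(b, c, a)`, while only ever keeping two partial schedules alive.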

PENG SI OW; THOMAS E. MORTON

1988-01-01

358

A search is presented for direct top squark pair production using events with at least two leptons including a same-flavour opposite-sign pair with invariant mass consistent with the Z boson mass, jets tagged as originating ...

Taylor, Frank E.

359

A search is presented for direct top squark pair production in final states with one isolated electron or muon, jets, and missing transverse momentum in proton-proton collisions at √s = 7 TeV. The measurement is based on ...

Taylor, Frank E.

360

The existence of a network of brain regions which are activated when one undertakes a difficult visual search task is well established. Two primary nodes on this network are right posterior parietal cortex (rPPC) and right frontal eye fields. Both have been shown to be involved in the orientation of attention, but the contingency that the activity of one of these areas has on the other is less clear. We sought to investigate this question by using transcranial direct current stimulation (tDCS) to selectively decrease activity in rPPC and then asking participants to perform a visual search task whilst undergoing functional magnetic resonance imaging. Comparison with a condition in which sham tDCS was applied revealed that cathodal tDCS over rPPC causes a selective bilateral decrease in frontal activity when performing a visual search task. This result demonstrates for the first time that premotor regions within the frontal lobe and rPPC are not only necessary to carry out a visual search task, but that they work together to bring about normal function. PMID:24705681

Ellison, Amanda; Ball, Keira L.; Moseley, Peter; Dowsett, James; Smith, Daniel T.; Weis, Susanne; Lane, Alison R.

2014-01-01

361

Computing Flow Transition On Parallel Processors

NASA Technical Reports Server (NTRS)

Parallel algorithm developed on multiple-microprocessor computer. Program initiated to develop computer codes capable of directly simulating and mathematically modeling the transition process at Mach numbers ranging from subsonic to hypersonic. Parallel computers potentially offer reduction of processing time; processing time inversely proportional to number of available processors.

Bokhari, S.; Erlebacher, G.; Hussaini, M. Y.

1993-01-01

362

2-D motion estimation using two parallel receive beams.

We describe a method for estimating 2-D target motion using ultrasound. The method is based on previous ensemble tracking techniques, which required at least four parallel receive beams and 2-D pattern matching. In contrast, the method described requires only two parallel receive beams and 1-D pattern matching. Two 1-D searches are performed, one in each lateral direction. The direction yielding the best match indicates the lateral direction of motion. Interpolation provides sub-pixel magnitude resolution. We compared the two beam method with the four beam method using a translating speckle target at three different parallel beam steering angles and transducer angles of 0, 45, and 90 degrees. The largest differences were found at 90 degrees, where the two beam method was generally more accurate and precise than the four beam method and also less prone to directional errors at small translations. We also examined the performance of both methods in a laminar flow phantom. Results indicated that the two beam method was more accurate in measuring the flow angle when the flow velocity was small. Computer simulations supported the experimental findings. The poorer performance of the four beam method was attributed to differences in correlation among the parallel beams. Specifically, center beams 2 and 3 correlated better with each other than with the outer beams. Because the four beam method used a comparison of a kernel region in beam pair 2-3 with two different beam pairs 1-2 and 3-4, the 2-to-1 and 3-to-4 components of this comparison increased the incidence of directional errors, especially at small translations. The two beam method used a comparison between only two beams and so was not subject to this source of error. Finally, the two beam method did not require amplitude normalization, as was necessary for the four beam method, when the two beams were chosen symmetric to the transmit axis. 
We conclude that two beam ensemble tracking can accurately estimate motion using only two parallel receive beams. PMID:11370353
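The two-beam matching idea, two 1-D searches with the better match deciding the lateral direction of motion, might be sketched as below. This uses a simple sum-of-absolute-differences match on toy signals; the actual method uses correlation-based pattern matching on ultrasound beam data, and all names here are illustrative:

```python
def best_match(kernel, signal):
    """Best 1-D sum-of-absolute-differences match of `kernel` within
    `signal`. Returns (lag, error); a lower error means a better match."""
    return min(
        ((lag, sum(abs(k - signal[lag + i]) for i, k in enumerate(kernel)))
         for lag in range(len(signal) - len(kernel) + 1)),
        key=lambda t: t[1])

def lateral_direction(kernel, beam_left, beam_right):
    """Two-beam idea: run one 1-D search toward each lateral side and let
    the better (lower-error) match decide the direction of motion."""
    _, err_l = best_match(kernel, beam_left)
    _, err_r = best_match(kernel, beam_right)
    return "left" if err_l < err_r else "right"
```

A kernel taken from the reference beam matches the receive beam on the side the target actually moved toward, so only two beams and two 1-D searches are needed, rather than four beams and 2-D pattern matching.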

Bohs, L N; Gebhart, S C; Anderson, M E; Geiman, B J; Trahey, G E

2001-03-01

363

SearchPad: explicit capture of search context to support Web search

Experienced users who query search engines have a complex behavior. They explore many topics in parallel, experiment with query variations, consult multiple search engines, and gather information over many sessions. In the process they need to keep track of search context — namely useful queries and promising result links, which can be hard. We present an extension to search engines

Krishna Bharat

2000-01-01

364

NASA Astrophysics Data System (ADS)

The application of the Luus-Jaakola direct search method to the optimization of stand-alone hybrid energy systems consisting of wind turbine generators (WTG's), photovoltaic (PV) modules, batteries, and an auxiliary generator was examined. The loads for these systems were for agricultural applications, with the optimization conducted on the basis of minimum capital, operating, and maintenance costs. Five systems were considered: two near Edmonton, Alberta, and one each near Lethbridge, Alberta, Victoria, British Columbia, and Delta, British Columbia. The optimization algorithm used hourly data for the load demand, WTG output power/area, and PV module output power. These hourly data were in two sets: seasonal (summer and winter values separated) and total (summer and winter values combined). The costs for the WTG's, PV modules, batteries, and auxiliary generator fuel were full market values. To examine the effects of price discounts or tax incentives, these values were lowered to 25% of the full costs for the energy sources and two-thirds of the full cost for agricultural fuel. Annual costs for a renewable energy system depended upon the load, location, component costs, and which data set (seasonal or total) was used. For one Edmonton load, the cost for a renewable energy system consisting of 27.01 m2 of WTG area, 14 PV modules, and 18 batteries (full price, total data set) was $6873/year. For Lethbridge, a system with 22.85 m2 of WTG area, 47 PV modules, and 5 batteries (reduced prices, seasonal data set) cost $2913/year. The performance of renewable energy systems based on the obtained results was tested in a simulation using load and weather data for selected days. Test results for one Edmonton load showed that the simulations for most of the systems examined ran for at least 17 hours per day before failing due to either an excessive load on the auxiliary generator or a battery constraint being violated. Additional testing indicated that increasing the generator capacity and reducing the maximum allowed battery charge current during the time of day at which these failures occurred allowed the simulation to operate successfully.

Jatzeck, Bernhard Michael

2000-10-01

365

Integrated Task and Data Parallel Programming

NASA Technical Reports Server (NTRS)

This research investigates the combination of task and data parallel language constructs within a single programming language. There are a number of applications that exhibit properties which would be well served by such an integrated language. Examples include global climate models, aircraft design problems, and multidisciplinary design optimization problems. Our approach incorporates data parallel language constructs into an existing, object-oriented, task parallel language. The language will support creation and manipulation of parallel classes and objects of both types (task parallel and data parallel). Ultimately, the language will allow data parallel and task parallel classes to be used either as building blocks or managers of parallel objects of either type, thus allowing the development of single- and multi-paradigm parallel applications.

1995 Research Accomplishments: In February I presented a paper at Frontiers 1995 describing the design of the data parallel language subset. During the spring I wrote and defended my dissertation proposal. Since that time I have developed a runtime model for the language subset. I have begun implementing the model and hand-coding simple examples which demonstrate the language subset. I have identified an astrophysical fluid flow application which will validate the data parallel language subset.

1996 Research Agenda: Milestones for the coming year include implementing a significant portion of the data parallel language subset over the Legion system. Using simple hand-coded methods, I plan to demonstrate (1) concurrent task and data parallel objects and (2) task parallel objects managing both task and data parallel objects. My next steps will focus on constructing a compiler and implementing the fluid flow application with the language. Concurrently, I will conduct a search for a real-world application exhibiting both task and data parallelism within the same program.

Additional 1995 Activities: During the fall I collaborated with Andrew Grimshaw and Adam Ferrari to write a book chapter which will be included in Parallel Processing in C++, edited by Gregory Wilson. I also finished two courses, Compilers and Advanced Compilers, in 1995. These courses complete my class requirements at the University of Virginia. I have only my dissertation research and defense to complete.

Grimshaw, A. S.

1998-01-01

366

Parallel computers with tens of thousands of processors are typically programmed in a data parallel style, as opposed to the control parallel style used in multiprocessing. The success of data parallel algorithms—even on problems that at first glance seem inherently serial—suggests that this style of programming has much wider applicability than was previously thought.
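A classic example of a seemingly serial problem with a data parallel solution is the prefix sum. The sketch below simulates the Hillis-Steele scan, in which every element is updated simultaneously at each of O(log n) steps (here the "simultaneous" update is modeled by building the whole new array at once):

```python
def data_parallel_scan(xs):
    """Hillis-Steele inclusive prefix sum: what looks inherently serial
    (each output depends on all earlier inputs) runs in O(log n) parallel
    steps, each step applying the same operation to all elements at once."""
    step = 1
    while step < len(xs):
        # In one parallel step, every element i >= step adds xs[i - step].
        xs = [x + (xs[i - step] if i >= step else 0)
              for i, x in enumerate(xs)]
        step *= 2
    return xs
```

With tens of thousands of processors, each element lives on its own processor and a step is a single shifted communication plus an add, which is why the data parallel style scales where a control parallel decomposition of the same loop would not.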

W. Daniel Hillis; Guy L. Steele Jr.

1986-01-01

367

Real-time trajectory optimization on parallel processors

NASA Technical Reports Server (NTRS)

A parallel algorithm has been developed for rapidly solving trajectory optimization problems. The goal of the work has been to develop an algorithm that is suitable for real-time, on-line optimal guidance through repeated solution of a trajectory optimization problem. The algorithm has been developed on an INTEL iPSC/860 message passing parallel processor. It uses a zero-order-hold discretization of a continuous-time problem and solves the resulting nonlinear programming problem using a custom-designed augmented Lagrangian nonlinear programming algorithm. The algorithm achieves parallelism of function, derivative, and search direction calculations through the principle of domain decomposition applied along the time axis. It has been encoded and tested on three example problems: the Goddard problem, the acceleration-limited planar minimum-time-to-the-origin problem, and a National Aerospace Plane minimum-fuel ascent guidance problem. Execution times as fast as 118 sec of wall clock time have been achieved for a 128-stage Goddard problem solved on 32 processors. A 32-stage minimum-time problem has been solved in 151 sec on 32 processors. A 32-stage National Aerospace Plane problem required 2 hours when solved on 32 processors. A speed-up factor of 7.2 has been achieved by using 32 nodes instead of 1 node to solve a 64-stage Goddard problem.

Psiaki, Mark L.

1993-01-01

368

Characterizing the parallelism in rule-based expert systems

A brief review of two classes of rule-based expert systems is presented, followed by a detailed analysis of potential sources of parallelism at the production or rule level, the subrule level (including match, select, and act parallelism), and at the search level (including AND, OR, and stream parallelism). The potential amount of parallelism from each source is discussed and characterized in terms of its granularity, inherent serial constraints, efficiency, speedup, dynamic behavior, and communication volume, frequency, and topology. Subrule parallelism will yield, at best, two- to tenfold speedup, and rule level parallelism will yield a modest speedup on the order of 5 to 10 times. Rule level can be combined with OR, AND, and stream parallelism in many instances to yield further parallel speedups.

Douglass, R.J.

1984-01-01

369

The quest for efficient parallel algorithms for graph-related problems necessitates not only fast computational schemes but also requires insights into their inherent structures that lend themselves to elegant problem-solving methods. Towards this objective, efficient parallel algorithms on a class of hypergraphs called acyclic hypergraphs and directed hypergraphs are developed in this thesis. First, the author presents efficient parallel algorithms for the following problems on graphs: (1) determining whether a graph is strongly chordal, ptolemaic, or a block graph, and, if the graph is strongly chordal, determining the strongly perfect vertex elimination ordering; (2) determining the minimal set of edges needed to make an arbitrary graph strongly chordal, ptolemaic, or a block graph; (3) determining the minimum cardinality dominating set, connected dominating set, total dominating set, and the domatic number of a strongly chordal graph. Secondly, he shows that the query implication problem (Q1 → Q2) on two queries, which is to determine whether the data retrieved by query Q1 is always a subset of the data retrieved by query Q2, is not even in NP and is in fact complete in Π2^p. Thirdly, he develops efficient parallel algorithms for manipulating directed hypergraphs H, such as finding a directed path in H, the closure of H, and the minimum equivalent hypergraph of H. Finally, he also presents an efficient parallel algorithm for multidimensional range search.

Radhakrishnan, S.

1990-01-01

370

Message passing with parallel queue traversal

In message passing implementations, associative matching structures are used to permit list entries to be searched in parallel fashion, thereby avoiding the delay of linear list traversal. List management capabilities are provided to support list entry turnover semantics and priority ordering semantics.

Underwood, Keith D. (Albuquerque, NM); Brightwell, Ronald B. (Albuquerque, NM); Hemmert, K. Scott (Albuquerque, NM)

2012-05-01

371

Parallel Global Aircraft Configuration Design Space Exploration

The preliminary design space exploration for large, interdisciplinary engineering problems is often a difficult and time-consuming task. General techniques are needed that efficiently and methodically search the design space. This work focuses on the use of parallel load balancing techniques integrated with a global optimizer to reduce the computational time of the design space exploration. The method is applied to

CHUCK A. BAKER; LAYNE T. WATSON; BERNARD GROSSMAN; WILLIAM H. MASON; RAPHAEL T. HAFTKA

1999-01-01

372

Parallel Approaches for the Artificial Bee Colony Algorithm

This work investigates the parallelization of the Artificial Bee Colony Algorithm. Besides a sequential version enhanced with local search, we compare three parallel models: master-slave, multi-hive with migrations, and hybrid hierarchical. Extensive experiments were done using three numerical benchmark functions with a high number of variables. Statistical results indicate that intensive local search improves the quality of solutions found and,

Rafael Stubs Parpinelli; César Manuel Vargas Benitez; Heitor Silvério Lopes

373

Parallel Shortest Lattice Vector Enumeration on Graphics Cards

Lattice-based cryptosystems are assumed to be secure against quantum computers. In this paper we consider the parallelization and special-hardware implementation of exhaustive-search (enumeration) algorithms for finding shortest lattice vectors on parallel computing systems, with particular attention to implementation aspects. To illustrate the algorithm, it was implemented on graphics cards using CUDA.

374

Parallelizing Timed Petri Net simulations

NASA Technical Reports Server (NTRS)

The possibility of using parallel processing to accelerate the simulation of Timed Petri Nets (TPN's) was studied. It was recognized that complex system development tools often transform system descriptions into TPN's or TPN-like models, which are then simulated to obtain information about system behavior. Viewed this way, it was important that the parallelization of TPN's be as automatic as possible, to admit the possibility of the parallelization being embedded in the system design tool. Later years of the grant were devoted to examining the problem of joint performance and reliability analysis, to explore whether both types of analysis could be accomplished within a single framework. In this final report, the results of our studies are summarized. We believe that the problem of parallelizing TPN's automatically for MIMD architectures has been almost completely solved for a large and important class of problems. Our initial investigations into joint performance/reliability analysis are two-fold; it was shown that Monte Carlo simulation, with importance sampling, offers promise of joint analysis in the context of a single tool, and methods for the parallel simulation of general Continuous Time Markov Chains, a model framework within which joint performance/reliability models can be cast, were developed. However, very much more work is needed to determine the scope and generality of these approaches. The results obtained in our two studies, future directions for this type of work, and a list of publications are included.

Nicol, David M.

1993-01-01

375

A Parallel Algorithm for the Vehicle Routing Problem

The vehicle routing problem (VRP) is a difficult and well-studied combinatorial optimization problem. We develop a parallel algorithm for the VRP that combines a heuristic local search improvement procedure with integer programming. We run our parallel algorithm with as many as 129 processors and are able to quickly find high-quality solutions to standard benchmark problems. We assess the impact of parallelism by analyzing our procedure's performance under a number of different scenarios.
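The multiple-walk flavor of parallelism used here can be sketched with independent local-search walks run concurrently, keeping the best local optimum found. As a stand-in for the paper's heuristic improvement procedure, this toy uses single-route 2-opt on a small TSP instance (no capacities, multiple vehicles, or integer programming), with all names my own; threads stand in for the processors.

```python
import math
import random
from concurrent.futures import ThreadPoolExecutor

def tour_length(tour, pts):
    """Length of the closed tour (tour[-1] wraps back to tour[0])."""
    return sum(math.dist(pts[tour[i - 1]], pts[tour[i]])
               for i in range(len(tour)))

def two_opt(tour, pts):
    """Simple 2-opt local search: reverse segments while that shortens
    the tour, stopping at a local optimum."""
    improved = True
    while improved:
        improved = False
        for i in range(1, len(tour) - 1):
            for j in range(i + 1, len(tour)):
                cand = tour[:i] + tour[i:j][::-1] + tour[j:]
                if tour_length(cand, pts) < tour_length(tour, pts):
                    tour, improved = cand, True
    return tour

def parallel_local_search(pts, workers=4, seed=0):
    """Multiple-walk parallelism: independent 2-opt walks from random
    starting tours run concurrently; return the best local optimum."""
    rng = random.Random(seed)
    starts = []
    for _ in range(workers):
        t = list(range(len(pts)))
        rng.shuffle(t)
        starts.append(t)
    with ThreadPoolExecutor(max_workers=workers) as ex:
        results = list(ex.map(lambda t: two_opt(t, pts), starts))
    return min(results, key=lambda t: tour_length(t, pts))
```

Because the walks share nothing until the final reduction, this scheme scales to many processors with essentially no communication, which is the appeal of multiple-walk parallel local search.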

Groer, Christopher S. (ORNL); Golden, Bruce (University of Maryland); Wasil, Edward (American University)

2011-01-01

376

NASA Astrophysics Data System (ADS)

A search is presented for direct top squark pair production using events with at least two leptons including a same-flavour opposite-sign pair with invariant mass consistent with the Z boson mass, jets tagged as originating from b-quarks, and missing transverse momentum. The analysis is performed with proton-proton collision data at √s = 8 TeV collected with the ATLAS detector at the LHC in 2012, corresponding to an integrated luminosity of 20.3 fb⁻¹. No excess beyond the Standard Model expectation is observed. Interpretations of the results are provided in models based on the direct pair production of the heavier top squark state followed by its decay to the lighter top squark state via a Z boson, and for light top squark pair production in natural gauge-mediated supersymmetry breaking scenarios where the neutralino is the next-to-lightest supersymmetric particle and decays producing a Z boson and a gravitino.

Aad, G.; Abbott, B.; Abdallah, J.; et al. (ATLAS Collaboration)

2014-06-01

377

Automatic Management of Parallel and Distributed System Resources

NASA Technical Reports Server (NTRS)

Viewgraphs on automatic management of parallel and distributed system resources are presented. Topics covered include: parallel applications; intelligent management of multiprocessing systems; performance evaluation of parallel architecture; dynamic concurrent programs; compiler-directed system approach; lattice gas cellular automata; and sparse matrix Cholesky factorization.

Yan, Jerry; Ngai, Tin Fook; Lundstrom, Stephen F.

1990-01-01

378

Search for direct CP violation in Λ and Ξ hyperon decays. C. G. White, R. A. Burnstein, et al. A search for direct CP violation in Ξ⁻ (Ξ̄⁺) and Λ (Λ̄) decays is underway at FNAL. Experiment E871 (HyperCP) intends to perform a precision measurement of the angular distribution of protons

Fermilab Experiment E871

379

DC Circuits: Parallel Resistances

NSDL National Science Digital Library

In this interactive learning activity, students will learn about parallel circuits. They will measure and calculate the resistance of parallel circuits and answer several questions about the example circuit shown.

380

Parallel flow diffusion battery

A parallel flow diffusion battery for determining the mass distribution of an aerosol has a plurality of diffusion cells mounted in parallel to an aerosol stream, each diffusion cell including a stack of mesh wire screens of different density.

Yeh, H.C.; Cheng, Y.S.

1984-01-01

381

Parallel flow diffusion battery

A parallel flow diffusion battery for determining the mass distribution of an aerosol has a plurality of diffusion cells mounted in parallel to an aerosol stream, each diffusion cell including a stack of mesh wire screens of different density.

Yeh, Hsu-Chi (Albuquerque, NM); Cheng, Yung-Sung (Albuquerque, NM)

1984-08-07

382

NSDL National Science Digital Library

* Redundant disk array architectures
* Fault tolerance issues in parallel I/O systems
* Caching and prefetching
* Parallel file systems
* Parallel I/O systems
* Parallel I/O programming paradigms
* Parallel I/O applications and environments
* Parallel programming with parallel I/O

Amy Apon

383

This report provides a users' guide for parallel processing ITS on a UNIX workstation network, a shared-memory multiprocessor, or a massively-parallel processor. The parallelized version of ITS is based on a master/slave model with message passing. Parallel issues such as random number generation, load balancing, and communication software are briefly discussed. Timing results for example problems are presented for demonstration purposes.

Fan, W.C.; Halbleib, J.A. Sr.

1996-09-01
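
The master/slave pattern this report describes can be illustrated with a toy thread-based analogue in Python. The real ITS code uses message passing across processes or network nodes; the queues, the π-estimation workload, and all names here are invented for illustration. The per-task seeds show one simple answer to the parallel random-number-generation issue the abstract mentions:

```python
import queue
import random
import threading

def slave(tasks, results):
    """Each slave pulls (seed, n) messages; a distinct seed per task keeps
    the parallel random-number streams independent of one another."""
    while True:
        msg = tasks.get()
        if msg is None:                       # poison pill: shut down
            break
        seed, n = msg
        rng = random.Random(seed)
        # Toy "history": count samples landing inside the unit quarter-circle.
        hits = sum(1 for _ in range(n)
                   if rng.random() ** 2 + rng.random() ** 2 < 1.0)
        results.put(hits)

def master(n_slaves=4, n_tasks=8, histories_per_task=10_000):
    tasks, results = queue.Queue(), queue.Queue()
    workers = [threading.Thread(target=slave, args=(tasks, results))
               for _ in range(n_slaves)]
    for w in workers:
        w.start()
    for seed in range(n_tasks):               # scatter work messages
        tasks.put((seed, histories_per_task))
    for _ in workers:                         # one poison pill per slave
        tasks.put(None)
    hits = sum(results.get() for _ in range(n_tasks))   # gather results
    for w in workers:
        w.join()
    return 4.0 * hits / (n_tasks * histories_per_task)  # estimate of pi
```

Because the work is fixed by the seeds, the result is deterministic regardless of how the scheduler interleaves the slaves.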

384

NASA Astrophysics Data System (ADS)

CFD or Computational Fluid Dynamics is one of the scientific disciplines that has always posed new challenges to the capabilities of the modern, ultra-fast supercomputers, and now to the even faster parallel computers. For applications where number crunching is of primary importance, there is perhaps no escaping parallel computers, since sequential computers can only be (as projected) as fast as a few gigaflops and no more, unless, of course, some altogether new technology appears in the future. For parallel computers, on the other hand, there is no such limit, since any number of processors can be made to work in parallel. Computationally demanding CFD codes and parallel computers are therefore soul-mates, and will remain so for the foreseeable future. So much so that there is a separate and fast-emerging discipline that tackles problems specific to CFD as applied to parallel computers, and for some years now there has been an international conference on parallel CFD. So one can indeed say that parallel CFD has arrived. To understand how CFD codes are parallelized, one must understand a little about how parallel computers function. Therefore, in what follows we will first deal with parallel computers, then with what a typical CFD code (if there is any such thing) looks like, and finally with the strategies of parallelization.

Basu, A. J.

1994-10-01

385

Adaptive Parallelism and Piranha

Under "adaptive parallelism," the set of processors executing a parallel program may grow or shrink as the program runs. Potential gains include the capacity to run a parallel program on the idle workstations in a conventional LAN---processors join the computation when they become idle, and withdraw when their owners need them---and to manage the nodes of a dedicated multiprocessor efficiently. Experience to date

Nicholas Carriero; Eric Freeman; David Gelernter; David Kaminsky

1995-01-01

386

NASA Technical Reports Server (NTRS)

This paper surveys topics that presently define the state of the art in parallel simulation. Included in the tutorial are discussions on new protocols, mathematical performance analysis, time parallelism, hardware support for parallel simulation, load balancing algorithms, and dynamic memory management for optimistic synchronization.

Nicol, David; Fujimoto, Richard

1992-01-01

387

Decomposing the Potentially Parallel

NSDL National Science Digital Library

This course provides an introduction to the issues involved in decomposing problems onto parallel machines, and to the types of architectures and programming styles commonly found in parallel computers. The list of topics discussed includes types of decomposition, task farming, regular domain decomposition, unbalanced grids, and parallel molecular dynamics.

Elspeth Minty, Robert Davey, Alan Simpson, David Henty

388

Coarse grain parallel finite element simulations for incompressible flows

Parallel simulation of incompressible fluid flows is considered on networks of homogeneous workstations. Coarse-grain parallelization of a Taylor-Galerkin/pressure-correction finite element algorithm is discussed, taking into account network communication costs. The main issues include the parallelization of system assembly, and of iterative and direct solvers, which are of common interest to finite element and general numerical computation. The parallelization strategies are implemented

P. W. Grant; M. F. Webster; X. Zhang

1998-01-01

389

Direct Search for Dirac Magnetic Monopoles in pp̄ Collisions at √s = 1.96 TeV

We search for pair-produced Dirac magnetic monopoles in 35.7 pb⁻¹ of proton-antiproton collisions at √s = 1.96 TeV with the Collider Detector at Fermilab (CDF). We find no monopole candidates, corresponding to a 95% confidence-level cross-section limit σ < 0.2 pb for a monopole with mass between 200 and 700 GeV/c². Assuming a Drell-Yan pair-production mechanism, we set a mass limit m > 360 GeV/c².

A. Abulencia; D. Acosta; J. Adelman; T. Affolder; T. Akimoto; M. G. Albrow; D. Ambrose; S. Amerio; D. Amidei; A. Anastassov; K. Anikeev; A. Annovi; J. Antos; M. Aoki; G. Apollinari; J.-F. Arguin; T. Arisawa; A. Artikov; W. Ashmanskas; A. Attal; F. Azfar; P. Azzi-Bacchetta; P. Azzurri; N. Bacchetta; H. Bachacou; W. Badgett; A. Barbaro-Galtieri; V. E. Barnes; B. A. Barnett; S. Baroiant; V. Bartsch; G. Bauer; F. Bedeschi; S. Behari; S. Belforte; G. Bellettini; J. Bellinger; A. Belloni; E. Ben-Haim; D. Benjamin; A. Beretvas; J. Beringer; T. Berry; A. Bhatti; M. Binkley; D. Bisello; M. Bishai; R. E. Blair; C. Blocker; K. Bloom; B. Blumenfeld; A. Bocci; A. Bodek; V. Boisvert; G. Bolla; A. Bolshov; D. Bortoletto; J. Boudreau; S. Bourov; A. Boveia; B. Brau; C. Bromberg; E. Brubaker; J. Budagov; H. S. Budd; S. Budd; K. Burkett; G. Busetto; P. Bussey; K. L. Byrum; S. Cabrera; M. Campanelli; M. Campbell; F. Canelli; A. Canepa; D. Carlsmith; R. Carosi; S. Carron; A. Carter; M. Casarsa; A. Castro; P. Catastini; D. Cauz; M. Cavalli-Sforza; A. Cerri; L. Cerrito; S. H. Chang; J. Chapman; Y. C. Chen; M. Chertok; G. Chiarelli; G. Chlachidze; F. Chlebana; I. Cho; K. Cho; D. Chokheli; J. P. Chou; P. H. Chu; S. H. Chuang; K. Chung; W. H. Chung; Y. S. Chung; M. Ciljak; C. I. Ciobanu; M. A. Ciocci; A. Clark; D. Clark; M. Coca; A. Connolly; M. E. Convery; J. Conway; B. Cooper; K. Copic; M. Cordelli; G. Cortiana; A. Cruz; J. Cuevas; R. Culbertson; D. Cyr; S. Daronco; S. D'Auria; M. D'Onofrio; D. Dagenhart; P. de Barbaro; S. de Cecco; A. Deisher; G. de Lentdecker; M. Dell'Orso; S. Demers; L. Demortier; J. Deng; M. Deninno; D. de Pedis; P. F. Derwent; C. Dionisi; J. Dittmann; P. Dituro; C. Dörr; A. Dominguez; S. Donati; M. Donega; P. Dong; J. Donini; T. Dorigo; S. Dube; K. Ebina; J. Efron; J. Ehlers; R. Erbacher; D. Errede; S. Errede; R. Eusebi; H. C. Fang; S. Farrington; I. Fedorko; W. T. Fedorko; R. G. Feild; M. Feindt; J. P. Fernandez; R. Field; G. Flanagan; L. R. Flores-Castillo; A. 
Foland; S. Forrester; G. W. Foster; M. Franklin; J. C. Freeman; Y. Fujii; I. Furic; A. Gajjar; M. Gallinaro; J. Galyardt; J. E. Garcia; M. Garcia Sciverez; A. F. Garfinkel; C. Gay; H. Gerberich; E. Gerchtein; D. Gerdes; S. Giagu; P. Giannetti; A. Gibson; K. Gibson; C. Ginsburg; K. Giolo; M. Giordani; M. Giunta; G. Giurgiu; V. Glagolev; D. Glenzinski; M. Gold; N. Goldschmidt; J. Goldstein; G. Gomez; G. Gomez-Ceballos; M. Goncharov; O. González; I. Gorelov; A. T. Goshaw; Y. Gotra; K. Goulianos; A. Gresele; M. Griffiths; S. Grinstein; C. Grosso-Pilcher; U. Grundler; J. Guimaraes da Costa; C. Haber; S. R. Hahn; K. Hahn; E. Halkiadakis; B.-Y. Han; R. Handler; F. Happacher; K. Hara; M. Hare; S. Harper; R. F. Harr; R. M. Harris; K. Hatakeyama; J. Hauser; C. Hays; H. Hayward; A. Heijboer; B. Heinemann; J. Heinrich; M. Hennecke; M. Herndon; J. Heuser; D. Hidas; C. S. Hill; D. Hirschbuehl; A. Hocker; A. Holloway; S. Hou; M. Houlden; S.-C. Hsu; B. T. Huffman; R. E. Hughes; J. Huston; K. Ikado; J. Incandela; G. Introzzi; M. Iori; Y. Ishizawa; A. Ivanov; B. Iyutin; E. James; D. Jang; B. Jayatilaka; D. Jeans; H. Jensen; E. J. Jeon; M. Jones; K. K. Joo; S. Y. Jun; T. R. Junk; T. Kamon; J. Kang; M. Karagoz-Unel; P. E. Karchin; Y. Kato; Y. Kemp; R. Kephart; U. Kerzel; V. Khotilovich; B. Kilminster; D. H. Kim; H. S. Kim; J. E. Kim; M. J. Kim; S. B. Kim; S. H. Kim; Y. K. Kim; M. Kirby; L. Kirsch; S. Klimenko; M. Klute; B. Knuteson; B. R. Ko; H. Kobayashi; K. Kondo; D. J. Kong; J. Konigsberg; K. Kordas; A. Korytov; A. V. Kotwal; A. Kovalev; J. Kraus; I. Kravchenko; M. Kreps; A. Kreymer; J. Kroll; N. Krumnack; M. Kruse; V. Krutelyov; S. E. Kuhlmann; Y. Kusakabe; S. Kwang; A. T. Laasanen; S. Lai; S. Lami; S. Lammel; M. Lancaster; R. L. Lander; K. Lannon; A. Lath; G. Latino; I. Lazzizzera; C. Lecci; T. Lecompte; J. Lee; S. W. Lee; R. Lefèvre; N. Leonardo; S. Leone; S. Levy; J. D. Lewis; K. Li; C. Lin; M. Lindgren; E. Lipeles; T. M. Liss; A. Lister; D. O. Litvintsev; T. Liu; Y. Liu; N. S. 
Lockyer; A. Loginov; M. Loreti; P. Loverre; R.-S. Lu; D. Lucchesi; P. Lujan; P. Lukens; G. Lungu; L. Lyons; J. Lys; R. Lysak; E. Lytken; P. Mack; D. MacQueen; R. Madrak; K. Maeshima; P. Maksimovic; G. Manca; F. Margaroli; R. Marginean; C. Marino; A. Martin; M. Martin; V. Martin; M. Martínez; T. Maruyama; H. Matsunaga; M. E. Mattson; R. Mazini; P. Mazzanti; K. S. McFarland; D. McGivern; P. McIntyre; P. McNamara; R. McNulty; A. Mehta; S. Menzemer; A. Menzione; P. Merkel; C. Mesropian; A. Messina; M. von der Mey; T. Miao; N. Miladinovic; J. Miles; R. Miller; J. S. Miller; C. Mills; M. Milnik; R. Miquel; S. Miscetti; G. Mitselmakher; A. Miyamoto; N. Moggi; B. Mohr; R. Moore; M. Morello; P. Movilla Fernandez; J. Mülmenstädt; A. Mukherjee; M. Mulhearn; Th. Muller; R. Mumford; P. Murat; J. Nachtman; S. Nahn; I. Nakano; A. Napier; D. Naumov; V. Necula; C. Neu; M. S. Neubauer; J. Nielsen

2006-01-01

390

Parallel Atomistic Simulations

Algorithms developed to enable the use of atomistic molecular simulation methods with parallel computers are reviewed. Methods appropriate for bonded as well as non-bonded (and charged) interactions are included. While strategies for obtaining parallel molecular simulations have been developed for the full variety of atomistic simulation methods, molecular dynamics and Monte Carlo have received the most attention. Three main types of parallel molecular dynamics simulations have been developed: the replicated-data decomposition, the spatial decomposition, and the force decomposition. For Monte Carlo simulations, parallel algorithms have been developed which can be divided into two categories: those which require a modified Markov chain and those which do not. Parallel algorithms developed for other simulation methods such as Gibbs ensemble Monte Carlo, grand canonical molecular dynamics, and Monte Carlo methods for protein structure determination are also reviewed, and issues such as how to measure parallel efficiency, especially in the case of parallel Monte Carlo algorithms with modified Markov chains, are discussed.

Heffelfinger, Grant S.

2000-01-18
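
The force-decomposition idea, one of the three decompositions named above, can be sketched as follows. The pair "force" and the sequential loop over chunks are stand-ins for a real interaction model and a real parallel harness; only the partitioning and reduction pattern is the point:

```python
import itertools

def pair_force(xi, xj):
    """Toy 1-D spring force between two particles (illustrative only)."""
    return xj - xi

def forces_force_decomposition(x, n_workers):
    """Split the pair list among workers; each worker computes partial
    forces over its chunk, and the partial arrays are summed (reduction)."""
    pairs = list(itertools.combinations(range(len(x)), 2))
    chunks = [pairs[w::n_workers] for w in range(n_workers)]
    total = [0.0] * len(x)
    for chunk in chunks:          # each chunk could run on its own processor
        partial = [0.0] * len(x)
        for i, j in chunk:
            f = pair_force(x[i], x[j])
            partial[i] += f       # Newton's third law: equal and opposite
            partial[j] -= f
        total = [a + b for a, b in zip(total, partial)]
    return total
```

Because every pair is assigned to exactly one chunk, the reduced result is independent of the number of workers.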

391

Background: Circulating tumour cells (CTCs) can provide information on patient prognosis and treatment efficacy. However, no universal method for CTC detection is currently available. Here, we compared the performance of two CTC detection systems based on the expression of the EpCAM antigen (CellSearch assay) or on cell size (ISET assay). Methods: Circulating tumour cells were enumerated in 60 patients with metastatic carcinomas of breast, prostate and lung origins using CellSearch according to the manufacturer's protocol and ISET by studying cytomorphology and immunolabelling with anti-cytokeratin or lineage-specific antibodies. Results: Concordant results were obtained in 55% (11 out of 20) of the patients with breast cancer, in 60% (12 out of 20) of the patients with prostate cancer and in only 20% (4 out of 20) of lung cancer patients. Conclusion: Our results highlight important discrepancies between the numbers of CTC enumerated by both techniques. These differences depend mostly on the tumour type. These results suggest that technologies limiting CTC capture to EpCAM-positive cells may present important limitations, especially in patients with metastatic lung carcinoma. PMID:21829190

Farace, F; Massard, C; Vimond, N; Drusch, F; Jacques, N; Billiot, F; Laplanche, A; Chauchereau, A; Lacroix, L; Planchard, D; Le Moulec, S; André, F; Fizazi, K; Soria, J C; Vielh, P

2011-01-01

392

Parallel and Serial Processes in Motion Detection

Apparent motion was used to explore humans' ability to perceive the direction of motion in the visual field. A marked qualitative difference in this ability was found between short- and long-range motion. For short-range motion, the detection of the direction of motion is characterized by parallel operation over a wide visual field (that is, detection performance is independent of the

Miri Dick; Shimon Ullman; Dov Sagi

1987-01-01

393

Multiband Phase Constrained Parallel MRI

Purpose Parallel MRI methods are typically associated with a degradation of the signal-to-noise ratio (SNR). High scan time reduction factors are therefore restricted to applications with high intrinsic SNR. One possibility to increase the intrinsic SNR is to simultaneously excite several slices by means of multiband radio-frequency (RF) pulses and subsequently separate the slices by parallel MRI reconstruction algorithms. However, the separation of closely spaced slices may suffer from severe noise amplification when there is insufficient coil sensitivity variation along the slice direction. The purpose of this work is to apply a phase-constrained reconstruction for multiband experiments in order to minimize the noise amplification. Methods Pre-defined phase differences between neighboring slices are induced and slice separation is performed by a phase-constrained parallel MRI reconstruction. Phase differences between neighboring slices are tailored to achieve optimal slice separation with minimized noise amplification. The potential of the method is demonstrated through multiband in-vivo experiments. Results Noise amplification in multiband phase-constrained reconstructions is significantly reduced in comparison to standard multiband reconstruction when the phase difference between neighboring slices (distance = 12 mm) is 90°. Conclusions Multiband phase constrained parallel MRI has the potential for accelerated multi-slice imaging with an improved SNR performance. PMID:23440994

Blaimer, Martin; Choli, Morwan; Jakob, Peter M.; Griswold, Mark A.; Breuer, Felix A.

2013-01-01

394

PARASPICE: A Parallel Circuit Simulator for Shared-Memory Multiprocessors

This paper presents a general approach to parallelizing direct method circuit simulation. The approach extracts parallel tasks at the algorithmic level for each compute-intensive module and therefore is suitable for a wide range of shared-memory multiprocessors. The implementation of the approach in SPICE2 resulted in a portable parallel direct circuit simulator, PARASPICE. The superior performance of PARASPICE is demonstrated on

Gung-chung Yang

1990-01-01

395

Parallel-Coupled Micro-Macro Actuators

This paper presents a new actuator system consisting of a micro-actuator and a macro-actuator coupled in parallel via a compliant transmission. The system is called the parallel-coupled micro-macro actuator, or PaCMMA. In this system, the micro-actuator is capable of high-bandwidth force control owing to its low mass and direct-drive connection to the output shaft. The compliant transmission of the macro-actuator

John B. Morrell; J. Kenneth Salisbury

1998-01-01

396

The Dark Matter Time Projection Chamber (DMTPC) is a low-pressure (75 Torr CF4) 10-liter detector capable of measuring the vector direction of nuclear recoils, with the goal of directional dark matter detection. In this paper we present the first dark matter limit from DMTPC. In an analysis window of 80-200 keV recoil energy, based on a 35.7 g-day exposure, we set a 90% C.L. upper limit on the spin-dependent WIMP-proton cross section of 2.0 × 10⁻³³ cm² for a dark matter particle mass of 115 GeV/c².

S. Ahlen; J. B. R. Battat; T. Caldwell; C. Deaconu; D. Dujmic; W. Fedus; P. Fisher; F. Golub; S. Henderson; A. Inglis; A. Kaboth; G. Kohse; R. Lanza; A. Lee; J. Lopez; J. Monroe; T. Sahin; G. Sciolla; N. Skvorodnev; H. Tomita; H. Wellenstein; I. Wolfe; R. Yamamoto; H. Yegoryan

2010-12-09

397

Electron diffusion in a liquid xenon time projection chamber has recently been used to infer the z coordinate of a particle interaction, from the width of the electron signal. The goal of this technique is to reduce the background event rate by discriminating edge events from bulk events. Analyses of dark matter search data which employ it would benefit from increased longitudinal electron diffusion. We show that a significant increase is expected if the applied electric field is decreased. This observation is trivial to implement but runs contrary to conventional wisdom and practice. We also extract a first measurement of the longitudinal diffusion coefficient, and confirm the expectation that electron diffusion in liquid xenon is highly anisotropic under typical operating conditions.

Sorensen, P

2011-02-14

398

NASA Astrophysics Data System (ADS)

We report the results of CCD photometric observations in the direction of the Coma Berenices and Upgren 1 open clusters with the aim of searching for new short-period variable stars. A total of 35 stars were checked for variability. As a result of this search the star designated in the SIMBAD database as Melotte 111 AV 1224 was found to be a new eclipsing binary star. Follow-up Strömgren photometric and spectroscopic observations allowed us to derive the spectral type, distance, reddening and effective temperature of the star. A preliminary analysis of the binary light curve was performed and the parameters of the orbital system were derived. From the derived physical parameters we conclude that Melotte 111 AV 1224 is most likely a W UMa eclipsing binary that is not a member of the Coma Berenices open cluster. On the other hand, we did not find evidence of brightness variations in the stars NSV 5612 and NSV 5615, previously catalogued as variable stars in the Coma Berenices open cluster.

Fox Machado, L.; Michel, R.; Alvarez, M.; Peña, J. H.

2015-01-01

399

Parallel processing of numerical transport algorithms

The multigroup, discrete ordinates representation for the linear transport equation enjoys widespread computational use and popularity. Serial solution schemes and numerical algorithms developed over the years provide a timely framework for parallel extension. On the Denelcor HEP, we investigate the parallel structure and extension of a number of standard Sₙ approaches. Concurrent inner sweeps, coupled acceleration techniques, synchronized inner-outer loops, and chaotic iteration are described, and results of computations are contrasted. The multigroup representation and serial iteration methods are also detailed. The basic iterative Sₙ method lends itself to parallel tasking, portably affording an effective medium for performing transport calculations on future architectures. This analysis represents a first attempt to extend serial Sₙ algorithms to parallel environments and provides good baseline estimates on ease of parallel implementation, relative algorithm efficiency, comparative speedup, and some future directions. We find basic inner-outer and chaotic iteration strategies both easily support comparably high degrees of parallelism. Both accommodate parallel rebalance and diffusion acceleration and appear as robust and viable parallel techniques for Sₙ production work.

Wienke, B.R.; Hiromoto, R.E.

1984-01-01
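
The concurrent-inner-sweeps idea can be illustrated with a toy 1-D S₂ (two-ordinate) source iteration: within one iteration the sweep for each ordinate depends only on the previous scalar flux, so the sweeps are independent and could run on separate processors. This is not the paper's Denelcor HEP code; the cross sections, grid, and function names are invented:

```python
def sweep(mu, source, sigma_t, h):
    """Diamond-difference transport sweep for one ordinate over a 1-D slab.
    Sweeps for different ordinates are independent, hence parallelizable."""
    n = len(source)
    psi = [0.0] * n
    psi_in = 0.0                                   # vacuum boundary condition
    cells = range(n) if mu > 0 else range(n - 1, -1, -1)
    a = abs(mu) / h
    for i in cells:
        psi_out = (source[i] + (a - sigma_t / 2.0) * psi_in) / (a + sigma_t / 2.0)
        psi[i] = 0.5 * (psi_in + psi_out)          # cell-average angular flux
        psi_in = psi_out
    return psi

def source_iteration(n=20, width=2.0, sigma_t=1.0, sigma_s=0.5, q=1.0, tol=1e-8):
    """Outer (source) iteration; the inner loop over ordinates is the part
    that concurrent-inner-sweep strategies distribute across processors."""
    h = width / n
    mus, wts = [-0.5773502691896257, 0.5773502691896257], [1.0, 1.0]  # S2 Gauss
    phi = [0.0] * n
    for _ in range(500):
        src = [(sigma_s * p + q) / 2.0 for p in phi]   # isotropic emission
        new_phi = [0.0] * n
        for mu, w in zip(mus, wts):                    # independent inner sweeps
            psi = sweep(mu, src, sigma_t, h)
            new_phi = [f + w * p for f, p in zip(new_phi, psi)]
        if max(abs(a - b) for a, b in zip(new_phi, phi)) < tol:
            return new_phi
        phi = new_phi
    return phi
```

With scattering ratio σs/σt = 0.5 the iteration converges quickly; the resulting scalar flux is positive, symmetric about the slab midplane, and peaked at the center, as physics requires.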

400

Directly produced dark matter is searched for in proton-proton collision data collected by the CMS experiment at a center-of-mass energy √s = 8 TeV. The dataset corresponds to an integrated luminosity of 18.8 fb⁻¹. Events with at least two jets and no isolated leptons are studied. The razor variables are used to quantify the transverse balance of the jet momenta. The study is performed separately for events with and without jets originating from b quarks. The observed data yields are consistent with the expected background predictions. Exclusion limits at 90% confidence level on dark matter production are derived for different assumptions on the production mechanism.

CMS Collaboration

2015-01-01

401

Pendlebury et al. [Phys. Rev. A 70, 032102 (2004)] were the first to investigate the role of geometric phases in searches for an electric dipole moment (EDM) of elementary particles based on Ramsey-separated oscillatory field magnetic resonance with trapped ultracold neutrons and comagnetometer atoms. Their work was based on the Bloch equation, and later work using the density matrix corroborated the results and extended the scope to describe the dynamics of spins in general fields and in bounded geometries. We solve the Schrödinger equation directly for a cylindrical trap geometry and obtain a full description of EDM-relevant spin behavior in general fields, including the short-time transients and vertical spin oscillation, in the entire range of particle velocities. We apply this method to general macroscopic fields and to the field of a microscopic magnetic dipole.

A. Steyerl; C. Kaufman; G. Müller; S. S. Malik; A. M. Desai; R. Golub

2014-05-23

402

In this paper we study a class of supersymmetric models with non-universal gaugino masses that arise from a mixture of SU(5) singlet and non-singlet representations, i.e. a combination of the 1, 24, 75 and 200. Based on these models we calculate the expected dark matter signatures within the linear combination 1 ⊕ 24 ⊕ 75 ⊕ 200. We confront the model predictions with the detected Higgs boson as well as with current experimental limits from selected indirect dark matter search experiments (ANTARES, IceCube) and the direct search experiment XENON. We comment on the detection/exclusion capability of the future XENON 1t project. For the investigated parameter span we could not find an SU(5) singlet model that fulfils both the Higgs mass and the relic density constraint. On the contrary, allowing a mixture of 1 ⊕ 24 ⊕ 75 ⊕ 200 yields a number of models fulfilling these constraints.

Spies, A.; Anton, G., E-mail: andreas.spies@physik.uni-erlangen.de, E-mail: gisela.anton@physik.uni-erlangen.de [Erlangen Centre for Astroparticle Physics, Department of Physics, University of Erlangen-Nuremberg, Erwin-Rommel-Str. 1, 91058 Erlangen (Germany)

2013-06-01

403

Background Underserved children, particularly girls and those in urban communities, do not meet the recommended physical activity guidelines (>60 min of daily physical activity), and this behavior can lead to obesity. The school years are known to be a critical period in the life course for shaping attitudes and behaviors. Children look to schools for much of their access to physical activity. Thus, through the provision of appropriate physical activity programs, schools have the power to influence apt physical activity choices, especially for underserved children where disparities in obesity-related outcomes exist. Objectives To evaluate the impact of a nurse directed, coordinated, culturally sensitive, school-based, family-centered lifestyle program on activity behaviors and body mass index. Design, settings and participants: This was a parallel group, randomized controlled trial utilizing a community-based participatory research approach, through a partnership with a University and 5 community schools. Participants included 251 children ages 8–12 from elementary schools in urban, low-income neighborhoods in Los Angeles, USA. Methods The intervention included Kids N Fitness©, a 6-week program which met weekly to provide 45 min of structured physical activity and a 45 min nutrition education class for parents and children. Intervention sites also participated in school-wide wellness activities, including health and counseling services, staff professional development in health promotion, parental education newsletters, and wellness policies for the provision of healthy foods at the school. The Child and Adolescent Trial for Cardiovascular Health School Physical Activity and Nutrition Student Questionnaire measured physical activity behavior, including: daily physical activity, participation in team sports, attending physical education class, and TV viewing/computer game playing. 
Anthropometric measures included height, weight, body mass index, resting blood pressure, and waist circumference. Measures were collected at baseline, at completion of the intervention phase (4 months), and 12 months post-intervention. Results Significant results for students in the intervention included, for boys, decreases in TV viewing and, for girls, increases in daily physical activity and physical education class attendance and decreases in body mass index z-scores from baseline to the 12-month follow-up. Conclusions Our study shows the value of utilizing nurses to implement a culturally sensitive, coordinated intervention to decrease disparities in activity and TV viewing among underserved girls and boys. PMID:23021318

Wright, Kynna; Giger, Joyce Newman; Norris, Keth; Suro, Zulma

2013-01-01

404

Many countries strive to attract foreign direct investment (FDI) in the hope that knowledge brought by multinationals will spill over to domestic industries and increase their productivity. In contrast with earlier literature that failed to find positive intra-industry spillovers from FDI, this study focuses on effects operating across industries. The analysis, based on a firm-level panel data set from

Beata Smarzynska Javorcik

405

Many countries strive to attract foreign direct investment (FDI) hoping that knowledge brought by multinationals will spill over to domestic industries and increase their productivity. In contrast with earlier literature that failed to find positive intraindustry spillovers from FDI, this study focuses on effects operating across industries. The analysis, based on firm-level data from Lithuania, produces evidence consistent with positive

Beata Smarzynska Javorcik

2004-01-01

406

Parallel digital forensics infrastructure.

This report documents the architecture and implementation of a parallel digital forensics (PDF) infrastructure. This infrastructure is necessary for supporting the design, implementation, and testing of new classes of parallel digital forensics tools. Digital forensics has become extremely difficult with data sets of one terabyte and larger. The only way to overcome the processing time of these large sets is to identify and develop new parallel algorithms for performing the analysis. To support algorithm research, a flexible base infrastructure is required. A candidate architecture for this base infrastructure was designed, instantiated, and tested by this project, in collaboration with New Mexico Tech. Previous infrastructures were not designed and built specifically for the development and testing of parallel algorithms. With the size of forensics data sets only expected to increase significantly, this type of infrastructure support is necessary for continued research in parallel digital forensics.

Liebrock, Lorie M. (New Mexico Tech, Socorro, NM); Duggan, David Patrick

2009-10-01
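
A core pattern in parallel forensics of large data sets is to hash fixed-size chunks concurrently and then reduce the ordered chunk digests. The sketch below illustrates that Merkle-style pattern; it is not taken from the report, and the resulting digest is deliberately not equal to hashing the whole input at once:

```python
import hashlib
from concurrent.futures import ThreadPoolExecutor

def parallel_sha256(data: bytes, chunk_size: int = 1 << 20, workers: int = 4) -> str:
    """Hash fixed-size chunks in parallel, then hash the ordered chunk
    digests. Chunks are independent, so they can be processed concurrently."""
    chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        # pool.map preserves input order, so the reduction is deterministic.
        digests = list(pool.map(lambda c: hashlib.sha256(c).digest(), chunks))
    top = hashlib.sha256()
    for d in digests:                  # order-preserving reduction step
        top.update(d)
    return top.hexdigest()
```

Because the reduction respects chunk order, the result is independent of the worker count, which makes parallel and serial runs directly comparable.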

407

Adaptive Parallelism with Piranha

"Adaptive parallelism" refers to parallel computations on a dynamically changing set of processors: processors may join or withdraw from the computation as it proceeds. Networks of fast workstations are the most important setting for adaptive parallelism at present. Workstations at most sites are typically idle for significant fractions of the day, and those idle cycles may constitute in the aggregate a powerful computing resource. For

Nicholas Carriero; David Gelernter; David Kaminsky; Jeffery Westbrook

408

Circuit Optimization Using Efficient Parallel Pattern Search

the simulation of clock meshes is already very time consuming; tuning such networks under tight performance constraints is an even more daunting task. The same is the case with the phase-locked loop. Being composed of multiple individual analog blocks, it is an extremely...

Narasimhan, Srinath S.

2011-08-08

409

Parallelizing an Index Generator for Desktop Search

, smart phones, or similar. In its simplest form, it returns a list of files that contain a given [...], an effective process for parallel software design might emerge. To add to this growing set, we conducted

Paris-Sud XI, Université de

410

NASA Technical Reports Server (NTRS)

A parallel version of CLIPS 5.1 has been developed to run on Intel Hypercubes. The user interface is the same as that for CLIPS with some added commands to allow for parallel calls. A complete version of CLIPS runs on each node of the hypercube. The system has been instrumented to display the time spent in the match, recognize, and act cycles on each node. Only rule-level parallelism is supported. Parallel commands enable the assertion and retraction of facts to/from remote nodes working memory. Parallel CLIPS was used to implement a knowledge-based command, control, communications, and intelligence (C(sup 3)I) system to demonstrate the fusion of high-level, disparate sources. We discuss the nature of the information fusion problem, our approach, and implementation. Parallel CLIPS has also be used to run several benchmark parallel knowledge bases such as one to set up a cafeteria. Results show from running Parallel CLIPS with parallel knowledge base partitions indicate that significant speed increases, including superlinear in some cases, are possible.

Hall, Lawrence O.; Bennett, Bonnie H.; Tello, Ivan

1994-01-01

411

I investigate search models in which firms wish to employ multiple workers. I first focus on efficiency. One important approach to modeling frictional labor markets is competitive search equilibrium, in which workers direct ...

Hawkins, William Blake

2006-01-01

412

DPL : Data Parallel Library Manual Technical Report: UNC-TR93-064

DPL: Data Parallel Library Manual. Technical Report: UNC-TR93-064. Original version: November 2. [...] parallel architectures. The Data Parallel Library (DPL) directly supports Proteus by supplying a vital link in the transformational execution system. 1. DPL - Description and Requirements. The Data Parallel Library is a collection

North Carolina at Chapel Hill, University of

413

The Paradyn parallel performance measurement tools

Abstract: Paradyn is a performance measurement tool for parallel and distributed programs. Paradyn uses several novel technologies so that it scales to long-running programs and large systems, and automates much of the search for performance bottlenecks. Paradyn is based on a dynamic notion of performance instrumentation and measurement. Application programs are placed into execution, and then performance instrumentation is inserted into the running programs and modified during execution.

Barton P. Miller; Mark D. Callaghan; Jonathan M. Cargille; Jeffrey K. Hollingsworth; R. Bruce Irvin; Karen L. Karavanic; Krishna Kunchithapadam; Tia Newhall

1994-01-01

414

Parallel Plate System for Collecting Data Used to Determine Viscosity

NASA Technical Reports Server (NTRS)

A parallel-plate system collects data used to determine viscosity. A first plate is coupled to a translator so that the first plate can be moved along a first direction. A second plate has a pendulum device coupled thereto such that the second plate is suspended above and parallel to the first plate. The pendulum device constrains movement of the second plate to a second direction that is aligned with the first direction and is substantially parallel thereto. A force measuring device is coupled to the second plate for measuring force along the second direction caused by movement of the second plate.

Kaukler, William (Inventor); Ethridge, Edwin C. (Inventor)

2013-01-01

415

Compositional C++: Compositional Parallel Programming

A compositional parallel program is a program constructed by composing component programs in parallel, where the composed program inherits properties of its components. In this paper, we describe a small extension of C++ called Compositional C++ or CC++ which is an object-oriented notation that supports compositional parallel programming. CC++ integrates different paradigms of parallel programming: data-parallel, task-parallel and object-parallel paradigms;

K. Mani Chandy; Carl Kesselman

1992-01-01

416

Extracting task-level parallelism

Automatic detection of task-level parallelism (also referred to as functional, DAG, unstructured, or thread parallelism) at various levels of program granularity is becoming increasingly important for parallelizing and back-end compilers. Parallelizing compilers detect iteration-level or coarser-granularity parallelism, which is suitable for parallel computers; detection of parallelism at the statement or operation level is essential for most modern microprocessors, including superscalar and

Milind Girkar; Constantine D. Polychronopoulos

1995-01-01

417

Parallel FFT & Isoefficiency: The Fast Fourier Transform in Parallel

Fragment of lecture slides from "Introduction to Supercomputing" (MCS 572), lecture L-14, 14 February 2014 (25 slides), covering the fast Fourier transform in parallel and isoefficiency.

Verschelde, Jan

418

CSIM is a simulator for parallel Lisp, based on a continuation passing interpreter. It models a shared-memory multiprocessor executing programs written in Common Lisp, extended with several primitives for creating and controlling processes. This paper describes the structure of the simulator, measures its performance, and gives an example of its use with a parallel Lisp program.

Weening, J.S.

1988-05-01

419

NSDL National Science Digital Library

This is an online course for parallel programming. Topics include MPI basics, point-to-point communication, derived datatypes, virtual topologies, collective communication, parallel I/O, and performance analysis and profiling. Other languages will be discussed such as OpenMP and High Performance Fortran (HPF). A Computational Fluid Dynamics section includes flux functions, Riemann solver, Euler equations, and Navier-Stokes equations.
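
The point-to-point communication topic in the course above can be sketched with standard-library queues and threads standing in for MPI ranks (mpi4py and a real MPI installation are not assumed; the rank names and message values are invented for the example).

```python
# Hedged sketch of MPI-style point-to-point communication: rank 0 sends a
# message to rank 1, which replies. Queues play the role of MPI_Send/MPI_Recv.
import queue
import threading

to_rank1 = queue.Queue()   # mailbox for messages addressed to rank 1
to_rank0 = queue.Queue()   # mailbox for messages addressed to rank 0

def rank1():
    msg = to_rank1.get()       # blocking receive, like MPI_Recv
    to_rank0.put(msg * 2)      # send a reply back to rank 0

t = threading.Thread(target=rank1)
t.start()
to_rank1.put(21)               # rank 0 sends, like MPI_Send
reply = to_rank0.get()         # rank 0 blocks until the reply arrives
t.join()
print(reply)  # 42
```

In real MPI the two ranks would be separate processes addressed by rank number and communicator, but the blocking send/receive pattern is the same.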

420

Grid Aware Parallelizing Algorithms

Running tightly coupled parallel MPI applications in a real grid environment using distributed MPI implementations (1, 2) can, in principle, make better and more flexible use of computational resources, but for most parallel applications it has a major downside: the performance of such codes tends to be very poor. Most often, the natural characteristics of real-world grids are responsible

Thomas Dramlitsch; Gabrielle Allen; Ed Seidel

421

Parallelizing quantum circuits

We present a novel automated technique for parallelizing quantum circuits via forward and backward translation to measurement-based quantum computing patterns, and analyze the trade-off in terms of depth and space complexity. As a result we distinguish a class of polynomial-depth circuits that can be parallelized to logarithmic depth while adding only polynomially many auxiliary qubits. In particular, we

Anne Broadbent; Elham Kashefi

2009-01-01

422

An account of the Caltech Concurrent Computation Program (C{sup 3}P), a five-year project that focused on answering the question: Can parallel computers be used to do large-scale scientific computations? As the title indicates, the question is answered in the affirmative, by implementing numerous scientific applications on real parallel computers and doing computations that produced new scientific results. In the process of doing so, C{sup 3}P helped design and build several new computers, designed and implemented basic system software, developed algorithms for frequently used mathematical computations on massively parallel machines, devised performance models and measured the performance of many computers, and created a high performance computing facility based exclusively on parallel computers. While the initial focus of C{sup 3}P was the hypercube architecture developed by C. Seitz, many of the methods developed and lessons learned have been applied successfully on other massively parallel architectures.

Not Available

1991-10-23

423

A search for the direct production of charginos and neutralinos in final states with three electrons or muons and missing transverse momentum is presented. The analysis is based on 4.7 fb⁻¹ of √s = 7 TeV ...

Taylor, Frank E.

424

Code Parallelization with CAPO: A User Manual

NASA Technical Reports Server (NTRS)

A software tool has been developed to assist the parallelization of scientific codes. This tool, CAPO, extends an existing parallelization toolkit, CAPTools, developed at the University of Greenwich, to generate OpenMP parallel codes for shared memory architectures. It is an interactive toolkit that transforms a serial Fortran application code into an equivalent parallel version of the software in a small fraction of the time normally required for a manual parallelization. We first discuss the way in which loop types are categorized and how efficient OpenMP directives can be defined and inserted into the existing code using the in-depth interprocedural analysis. The use of the toolkit on a number of codes ranging from benchmarks to real-world applications is presented. This demonstrates the great potential of using the toolkit to quickly parallelize serial programs as well as the good performance achievable on a large number of processors. The second part of the document gives references to the parameters and the graphical user interface implemented in the toolkit. Finally, a set of tutorials is included for hands-on experience with this toolkit.

Jin, Hao-Qiang; Frumkin, Michael; Yan, Jerry; Biegel, Bryan (Technical Monitor)

2001-01-01

425

Performance Evaluation in Network-Based Parallel Computing

NASA Technical Reports Server (NTRS)

Network-based parallel computing is emerging as a cost-effective alternative for solving many problems which require the use of supercomputers or massively parallel computers. The primary objective of this project has been to conduct experimental research on performance evaluation for clustered parallel computing. First, a testbed was established by augmenting our existing network of Sun SPARCs with PVM (Parallel Virtual Machine), a software system for linking clusters of machines. Second, a set of three basic applications was selected: a parallel search, a parallel sort, and a parallel matrix multiplication. These application programs were implemented in the C programming language under PVM. Third, we conducted performance evaluation under various configurations and problem sizes. Alternative parallel computing models and workload allocations for application programs were explored. The performance metric was limited to elapsed time or response time, which in the context of parallel computing can be expressed in terms of speedup. The results reveal that the overhead of communication latency between processes is in many cases the restricting factor for performance. That is, coarse-grain parallelism, which requires less frequent communication between processes, will result in higher performance in network-based computing. Finally, we are in the final stages of installing an Asynchronous Transfer Mode (ATM) switch and four ATM interfaces (each 155 Mbps) which will allow us to extend our study to newer applications, performance metrics, and configurations.
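
The coarse-grain pattern described above (communicate rarely, compute on large chunks) can be sketched in a few lines; PVM itself is not used here, and the data, chunking, and worker count are invented for the illustration.

```python
# Hedged sketch of a coarse-grain parallel search: the data is split into
# large chunks, one per worker, so communication happens only at the start
# (distributing chunks) and the end (collecting counts).
from concurrent.futures import ThreadPoolExecutor

def count_matches(chunk, target=7):
    # each worker scans its whole chunk with no mid-search communication
    return sum(1 for x in chunk if x == target)

data = list(range(10)) * 1000                  # 10,000 items; 1,000 sevens
n_workers = 4
step = len(data) // n_workers
chunks = [data[i * step:(i + 1) * step] for i in range(n_workers)]

with ThreadPoolExecutor(n_workers) as pool:
    total = sum(pool.map(count_matches, chunks))
print(total)  # 1000
```

In the cluster setting of the project the workers would be separate machines linked by PVM, which is exactly why the infrequent-communication (coarse-grain) decomposition pays off.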

Dezhgosha, Kamyar

1996-01-01

426

Parallelism in integrated fluidic circuits

NASA Astrophysics Data System (ADS)

Many research groups around the world are working on integrated microfluidics. The goal of these projects is to automate and integrate the handling of liquid samples and reagents for measurement and assay procedures in chemistry and biology. Ultimately, it is hoped that this will lead to a revolution in chemical and biological procedures similar to that caused in electronics by the invention of the integrated circuit. The optimal size scale of channels for liquid flow is determined by basic constraints to be somewhere between 10 and 100 micrometers. In larger channels, mixing by diffusion takes too long; in smaller channels, the number of molecules present is so low it makes detection difficult. At Caliper, we are making fluidic systems in glass chips with channels in this size range, based on electroosmotic flow and fluorescence detection. One application of this technology is rapid assays for drug screening, such as enzyme assays and binding assays. A further challenge in this area is to perform multiple functions on a chip in parallel, without a large increase in the number of inputs and outputs. A first step in this direction is a fluidic serial-to-parallel converter. Fluidic circuits will be shown with the ability to distribute an incoming serial sample stream to multiple parallel channels.

Bousse, Luc J.; Kopf-Sill, Anne R.; Parce, J. W.

1998-04-01

427

Mutual inhibition and capacity sharing during parallel preparation of serial eye movements.

Many common activities, like reading, scanning scenes, or searching for an inconspicuous item in a cluttered environment, entail serial movements of the eyes that shift the gaze from one object to another. Previous studies have shown that the primate brain is capable of programming sequential saccadic eye movements in parallel. However, given that saccade onsets are unpredictable in individual trials, it remains unclear what prevents a saccade programmed in parallel toward the second target from being executed before the saccade toward the first target. Using a computational model, here we demonstrate that sequential saccades inhibit each other and share the brain's limited processing resources (capacity) so that the planning of a saccade toward the first target always finishes first. In this framework, the latency of a saccade increases linearly with the fraction of capacity allocated to the other saccade in the sequence, and exponentially with the duration of capacity sharing. Our study establishes a link between the dual-task paradigm and the ramp-to-threshold model of response time to identify a physiologically viable mechanism that preserves the serial order of saccades without compromising the speed of performance. PMID:22434620
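
One simple functional form consistent with the two qualitative dependencies stated above (linear in the capacity fraction, exponential in the sharing duration) could be written as below; this is an illustrative reading, not the paper's fitted model, and the constants are hypothetical.

```latex
% Illustrative only: RT_0 (baseline latency), alpha, beta, tau are
% hypothetical constants; f is the fraction of capacity allocated to the
% other saccade and d is the duration of capacity sharing.
\mathrm{RT} \;=\; \mathrm{RT}_0 \;+\; \alpha f \;+\; \beta\!\left(e^{d/\tau} - 1\right)
```

Setting f = 0 and d = 0 recovers the baseline latency, matching the stated behavior when no capacity is shared.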

Ray, Supriya; Bhutani, Neha; Murthy, Aditya

2012-01-01

428

Bilingual parallel programming

Numerous experiments have demonstrated that computationally intensive algorithms support adequate parallelism to exploit the potential of large parallel machines. Yet successful parallel implementations of serious applications are rare. The limiting factor is clearly programming technology. None of the approaches to parallel programming that have been proposed to date -- whether parallelizing compilers, language extensions, or new concurrent languages -- seem to adequately address the central problems of portability, expressiveness, efficiency, and compatibility with existing software. In this paper, we advocate an alternative approach to parallel programming based on what we call bilingual programming. We present evidence that this approach provides an effective solution to parallel programming problems. The key idea in bilingual programming is to construct the upper levels of applications in a high-level language while coding selected low-level components in low-level languages. This approach permits the advantages of a high-level notation (expressiveness, elegance, conciseness) to be obtained without the cost in performance normally associated with high-level approaches. In addition, it provides a natural framework for reusing existing code.

Foster, I.; Overbeek, R.

1990-01-01

429

The lack of an adequate therapy for Alzheimer's Disease (AD) contributes greatly to the continuously growing number of papers and reviews, reflecting the important efforts made by scientists in this field. It is well known that AD is the most common cause of dementia, and to date there is no preventive therapy and no cure for the disease, which contrasts with the enormous efforts put into the task. On the other hand, many aspects of AD are currently debated or even unknown. This review offers a view of the current state of knowledge about AD, including the most relevant findings and processes that take part in the disease; it also surveys relevant past, present and future research on therapeutic drugs in light of the new paradigm of "Multi-Target-Directed Ligands" (MTDLs). In our opinion, this paradigm will from now on lead research toward the discovery of better therapeutic solutions, not only for AD but also for other complex diseases. This review highlights the strategies followed so far and focuses on other emerging targets that should be taken into account in the future development of new MTDLs. Thus, the path followed in this review goes from the pathology and the processes involved in AD to the strategies to consider in ongoing and future research. PMID:24533013

Agis-Torres, Angel; Sölhuber, Monica; Fernandez, Maria; Sanchez-Montero, J M

2014-01-01

431

Simulating Billion-Task Parallel Programs

In simulating large parallel systems, bottom-up approaches exercise detailed hardware models with effects from simplified software models or traces, whereas top-down approaches evaluate the timing and functionality of detailed software models over coarse hardware models. Here, we focus on the top-down approach and significantly advance the scale of the simulated parallel programs. Via the direct execution technique combined with parallel discrete event simulation, we stretch the limits of the top-down approach by simulating message passing interface (MPI) programs with millions of tasks. Using a timing-validated benchmark application, a proof-of-concept scaling level is achieved to over 0.22 billion virtual MPI processes on 216,000 cores of a Cray XT5 supercomputer, representing one of the largest direct execution simulations to date, combined with a multiplexing ratio of 1024 simulated tasks per real task.
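
The multiplexing idea above (many virtual MPI tasks time-sliced onto each real process) can be sketched with generators driven by a round-robin scheduler; the actual simulator's event-driven mechanism differs, and the task counts here are invented for the illustration.

```python
# Hedged sketch: each "virtual task" is a generator that yields control back
# to a scheduler after every simulated step, so one real process can host
# many virtual tasks (the multiplexing described above).
from collections import deque

def virtual_task(tid, steps):
    for s in range(steps):
        yield (tid, s)          # yield control back to the scheduler

def run(tasks):
    ready = deque(tasks)
    events = 0
    while ready:
        task = ready.popleft()
        try:
            next(task)          # advance the task by one virtual step
            events += 1
            ready.append(task)  # reschedule at the back of the queue
        except StopIteration:
            pass                # task finished; drop it
    return events

# 1024 virtual tasks on one real process, echoing the paper's 1024:1 ratio
total_events = run([virtual_task(i, 3) for i in range(1024)])
print(total_events)  # 1024 tasks x 3 steps = 3072
```

A parallel discrete event simulator additionally orders events by virtual timestamp across real processes; this sketch shows only the per-process multiplexing.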

Perumalla, Kalyan S. [ORNL]; Park, Alfred J. [ORNL]

2014-01-01

432

PCN is a system for developing and executing parallel programs. It comprises a high-level programming language, tools for developing and debugging programs in this language, and interfaces to Fortran and C that allow the reuse of existing code in multilingual parallel programs. Programs developed using PCN are portable across many different workstations, networks, and parallel computers. This document provides all the information required to develop parallel programs with the PCN programming system. It includes both tutorial and reference material. It also presents the basic concepts that underlie PCN, particularly where these are likely to be unfamiliar to the reader, and provides pointers to other documentation on the PCN language, programming techniques, and tools. PCN is in the public domain. The latest version of both the software and this manual can be obtained by anonymous FTP from Argonne National Laboratory in the directory pub/pcn at info.mcs.anl.gov (cf. Appendix A).

Foster, I.; Tuecke, S.

1991-12-01

433

NASA Astrophysics Data System (ADS)

Adam Nelson and Stuart Warriner, from the University of Leeds, talk with Nature Chemistry about their work to develop viable synthetic strategies for preparing new chemical structures in parallel with the identification of desirable biological activity.

2014-10-01

434

NSDL National Science Digital Library

It is important for students to understand how resistors, capacitors, and batteries combine in series and parallel. The combination of batteries has a lot of practical applications in science competitions. This lab also reinforces how to use a voltmeter t

Michael Horton

2009-05-30

435

Methodological Approach: Parallel Computation

Fragment of lecture slides by Chris Paciorek on parallel computation for the statistics of extreme events (hurricanes, tornados, storm surges, etc.); notes that a 100-year flood is the size of flood expected to occur once every 100 years on average.

Paciorek, Chris

436

MINARET: Towards a time-dependent neutron transport parallel solver

NASA Astrophysics Data System (ADS)

We present the newly developed time-dependent 3D multigroup discrete ordinates neutron transport solver that has recently been implemented in the MINARET code. The solver serves as the basis for a study of acceleration techniques that involve parallel architectures. In this work, we focus on the parallelization of two of the variables involved in our equation: the angular directions and the time. The latter has been parallelized by a time-domain decomposition method, the parareal-in-time algorithm.
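
The parareal-in-time idea named above can be sketched on a scalar toy problem, y' = -y, rather than the transport equations: a cheap coarse propagator sweeps serially while the expensive fine propagator is applied to all time slices independently (hence in parallel), and a correction combines the two. The propagators and problem here are invented for the illustration.

```python
# Hedged sketch of the parareal-in-time algorithm on y' = -y, y(0) = 1.
import math

lam, T, N = 1.0, 1.0, 8           # decay rate, time horizon, time slices
dT = T / N

def G(y):                          # coarse propagator: one Euler step per slice
    return y * (1.0 - lam * dT)

def F(y):                          # fine propagator: exact solution over a slice
    return y * math.exp(-lam * dT)

ref = [math.exp(-lam * n * dT) for n in range(N + 1)]   # serial fine reference

U = [1.0] * (N + 1)
for n in range(N):                 # iteration 0: serial coarse sweep
    U[n + 1] = G(U[n])

for k in range(N):                 # parareal corrections
    Fk = [F(U[n]) for n in range(N)]   # all N fine solves are independent:
    Gk = [G(U[n]) for n in range(N)]   # this is the parallelizable work
    for n in range(N):                 # cheap serial correction sweep:
        U[n + 1] = G(U[n]) + Fk[n] - Gk[n]

err = max(abs(u - r) for u, r in zip(U, ref))
print(err)  # after N iterations, parareal matches the serial fine solution
```

After at most N iterations the corrected solution coincides with the serial fine solve; the speed-up comes from converging in far fewer iterations while the fine solves run concurrently.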

Baudron, A.-M.; Lautard, J. J.; Maday, Y.; Mula, O.

2014-06-01

437

Parallel transistor level circuit simulation using domain decomposition methods

This paper presents an efficient parallel transistor-level full-chip circuit simulation tool with SPICE accuracy. The new approach partitions the circuit into a linear domain and several nonlinear domains based on circuit nonlinearity and connectivity. The linear domain is solved by a parallel fast linear solver, while the nonlinear domains are distributed across processors and solved by a direct solver. Parallel domain

He Peng; Chung-kuan Cheng

2009-01-01

438

NASA Technical Reports Server (NTRS)

A direct current transformer is described in which the primary consists of an elongated strip of superconductive material, across the ends of which a direct current potential is applied. Parallel and closely spaced to the primary is positioned a transformer secondary consisting of a thin strip of magnetoresistive material.

Khanna, S. M.; Urban, E. W. (inventors)

1979-01-01

439

Scalable parallel communications

NASA Technical Reports Server (NTRS)

Coarse-grain parallelism in networking (that is, the use of multiple protocol processors running replicated software sending over several physical channels) can be used to provide gigabit communications for a single application. Since parallel network performance is highly dependent on real issues such as hardware properties (e.g., memory speeds and cache hit rates), operating system overhead (e.g., interrupt handling), and protocol performance (e.g., effect of timeouts), we have performed detailed simulation studies of both a bus-based multiprocessor workstation node (based on the Sun Galaxy MP multiprocessor) and a distributed-memory parallel computer node (based on the Touchstone DELTA) to evaluate the behavior of coarse-grain parallelism. Our results indicate: (1) coarse-grain parallelism can deliver multiple 100 Mbps with currently available hardware platforms and existing networking protocols (such as Transmission Control Protocol/Internet Protocol (TCP/IP) and parallel Fiber Distributed Data Interface (FDDI) rings); (2) scale-up is near linear in n, the number of protocol processors and channels (for small n and up to a few hundred Mbps); and (3) since these results are based on existing hardware without specialized devices (except perhaps for some simple modifications of the FDDI boards), this is a low-cost solution to providing multiple 100 Mbps on current machines. 
In addition, from both the performance analysis and the properties of these architectures, we conclude: (1) multiple processors providing identical services and the use of space division multiplexing for the physical channels can provide better reliability than monolithic approaches (it also provides graceful degradation and low-cost load balancing); (2) coarse-grain parallelism supports running several transport protocols in parallel to provide different types of service (for example, one TCP handles small messages for many users, while other TCPs running in parallel provide high-bandwidth service to a single application); and (3) coarse-grain parallelism will be able to incorporate many future improvements from related work (e.g., reduced data movement, fast TCP, fine-grain parallelism), also with near-linear speed-ups.
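
The space-division-multiplexing idea above can be sketched by striping one message across several parallel "channels", transferring the stripes concurrently, and reassembling; the channel function and message are invented stand-ins, not the paper's protocol-processor implementation.

```python
# Hedged sketch: stripe a message over n parallel channels and reassemble.
from concurrent.futures import ThreadPoolExecutor

def send_over_channel(stripe):
    # stand-in for a per-channel protocol processor; simply delivers the bytes
    return stripe

def striped_transfer(message, n_channels=4):
    # stripe i holds bytes i, i+n, i+2n, ... of the message
    stripes = [message[i::n_channels] for i in range(n_channels)]
    with ThreadPoolExecutor(n_channels) as pool:
        received = list(pool.map(send_over_channel, stripes))
    out = bytearray(len(message))
    for i, stripe in enumerate(received):   # interleave stripes back in order
        out[i::n_channels] = stripe
    return bytes(out)

msg = b"gigabit communications via coarse-grain parallelism"
assert striped_transfer(msg) == msg
print("reassembled OK")
```

Because each stripe travels independently, the loss of one channel degrades rather than destroys the transfer, echoing the graceful-degradation point in the conclusions.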

Maly, K.; Khanna, S.; Overstreet, C. M.; Mukkamala, R.; Zubair, M.; Sekhar, Y. S.; Foudriat, E. C.

1992-01-01

441

Control of parallel manipulators using force feedback

NASA Technical Reports Server (NTRS)

Two control schemes are compared for parallel robotic mechanisms actuated by hydraulic cylinders. One scheme, the 'rate based scheme', feeds back position and rate information only. The second scheme, the 'force based scheme', also feeds back force information. The force control scheme is shown to improve the response over the rate control scheme; it is a simple constant-gain control scheme better suited to parallel mechanisms, and it can easily be modified for the dynamic forces on the end effector. This paper presents the results of a computer simulation of both the rate and force control schemes. The gains in the force based scheme can be individually adjusted in all three directions, whereas an adjustment in just one direction of the rate based scheme directly affects the other two directions.

Nanua, Prabjot

1994-01-01

442

Parallelization: Binary Tree Traversal

NSDL National Science Digital Library

This module teaches the use of binary trees to sort through large data sets, different traversal methods for binary trees, including parallel methods, and how to scale a binary tree traversal on multiple compute cores. Upon completion of this module, students should be able to recognize the structure of a binary tree, employ different methods for traversing a binary tree, understand how to parallelize a binary tree traversal, and scale that traversal over multiple compute cores.
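
A minimal sketch of the parallel traversal the module describes: the root's left and right subtrees are handed to separate workers and the partial results are combined. The tree shape, node values, and two-worker split are invented for the example.

```python
# Hedged sketch: parallelize a binary tree traversal by traversing the two
# subtrees of the root concurrently, then combining their results.
from concurrent.futures import ThreadPoolExecutor

class Node:
    def __init__(self, val, left=None, right=None):
        self.val, self.left, self.right = val, left, right

def subtree_sum(node):
    # ordinary serial traversal of one subtree
    if node is None:
        return 0
    return subtree_sum(node.left) + node.val + subtree_sum(node.right)

def parallel_tree_sum(root):
    if root is None:
        return 0
    with ThreadPoolExecutor(max_workers=2) as pool:
        left = pool.submit(subtree_sum, root.left)    # one worker per subtree
        right = pool.submit(subtree_sum, root.right)
        return left.result() + root.val + right.result()

# perfect tree over 1..7: parallel and serial traversals must agree
root = Node(4, Node(2, Node(1), Node(3)), Node(6, Node(5), Node(7)))
print(parallel_tree_sum(root))  # 28
```

Scaling beyond two cores follows the same idea recursively: split at deeper levels until there are as many independent subtrees as workers.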

Aaron Weeden

443

Artificial intelligence in parallel

The current rage in the Artificial Intelligence (AI) community is parallelism: the idea is to build machines with many independent processors doing many things at once. The upshot is that about a dozen parallel machines are now under development for AI alone. As might be expected, the approaches are diverse yet there are a number of fundamental issues in common: granularity, topology, control, and algorithms.

Waldrop, M.M.

1984-08-10

444

A Parallel VLSI Circuit Layout Methodology

We propose a parallel computation layout technique that solves the layout problem directly rather than decomposing it into the traditional distinct steps of placement and routing. The method combines a superior geometric partitioning algorithm with extensive use of pre-computed minimum-length Steiner trees to produce layouts.

S. Bapat; James P. Cohoon

1993-01-01

445

NASA Astrophysics Data System (ADS)

A search is presented for direct top-squark pair production in final states with two leptons (electrons or muons) of opposite charge using 20.3 fb⁻¹ of pp collision data at √s = 8 TeV, collected by the ATLAS experiment at the Large Hadron Collider in 2012. No excess over the Standard Model expectation is found. The results are interpreted under the separate assumptions (i) that the top squark decays to a b-quar