#### Sample records for parallel direct search

1. Multi-directional search: A direct search algorithm for parallel machines

SciTech Connect

Torczon, V.J.

1989-01-01

In recent years there has been a great deal of interest in the development of optimization algorithms which exploit the computational power of parallel computer architectures. The author has developed a new direct search algorithm, which he calls multi-directional search, that is ideally suited for parallel computation. His algorithm belongs to the class of direct search methods, a class of optimization algorithms which neither compute nor approximate any derivatives of the objective function. His work, in fact, was inspired by the simplex method of Spendley, Hext, and Himsworth, and the simplex method of Nelder and Mead. The multi-directional search algorithm is inherently parallel. The basic idea of the algorithm is to perform concurrent searches in multiple directions. These searches are free of any interdependencies, so the information required can be computed in parallel. A central result of his work is the convergence analysis for his algorithm. By requiring only that the function be continuously differentiable over a bounded level set, he can prove that a subsequence of the points generated by the multi-directional search algorithm converges to a stationary point of the objective function. This is of great interest since he knows of few convergence results for practical direct search algorithms. He also presents numerical results indicating that the multi-directional search algorithm is robust, even in the presence of noise. His results include comparisons with the Nelder-Mead simplex algorithm, the method of steepest descent, and a quasi-Newton method. One surprising conclusion of his numerical tests is that the Nelder-Mead simplex algorithm is not robust. He closes with some comments about future directions of research.
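To make the "concurrent searches in multiple directions" concrete, here is a minimal serial sketch of one multi-directional search iteration in the spirit of Torczon's algorithm: rotate the simplex through its best vertex, try an expansion if the rotation improved, otherwise contract. The function names and the test function are illustrative, not from the thesis; in a parallel implementation each trial-point evaluation would run on its own processor, since the trial points are independent.

```python
def mds_step(f, simplex):
    """One multi-directional search iteration on a simplex given as a
    list of points (tuples). Returns the updated simplex, best vertex first.
    All trial-point evaluations within a step are independent, which is
    what the parallel version exploits."""
    simplex = sorted(simplex, key=f)          # best vertex v0 first
    v0 = simplex[0]
    # Rotation: reflect every other vertex through v0.
    rot = [tuple(2 * a - b for a, b in zip(v0, vi)) for vi in simplex[1:]]
    if min(map(f, rot)) < f(v0):
        # Expansion: push twice as far; keep whichever trial set is better.
        exp = [tuple(3 * a - 2 * b for a, b in zip(v0, vi)) for vi in simplex[1:]]
        trial = exp if min(map(f, exp)) < min(map(f, rot)) else rot
    else:
        # Contraction: pull the other vertices halfway toward v0.
        trial = [tuple((a + b) / 2 for a, b in zip(v0, vi)) for vi in simplex[1:]]
    return sorted([v0] + trial, key=f)

def sphere(x):                                # simple smooth test function
    return sum(t * t for t in x)

simplex = [(2.0, 2.0), (3.0, 2.0), (2.0, 3.0)]
for _ in range(30):
    simplex = mds_step(sphere, simplex)
print(sphere(simplex[0]))                     # approaches 0
```

Note that, unlike Nelder-Mead, the simplex shape never degenerates here: every operation is a uniform scaling or reflection of the whole simplex, which is what makes the convergence analysis tractable.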

2. A two-level parallel direct search implementation for arbitrarily sized objective functions

SciTech Connect

Hutchinson, S.A.; Shadid, N.; Moffat, H.K.

1994-12-31

In the past, many optimization schemes for massively parallel computers have attempted to achieve parallel efficiency using one of two methods. In the case of large and expensive objective function calculations, the optimization itself may be run in serial and the objective function calculations parallelized. In contrast, if the objective function calculations are relatively inexpensive and can be performed on a single processor, then the actual optimization routine itself may be parallelized. In this paper, a scheme based upon the Parallel Direct Search (PDS) technique is presented which allows the objective function calculations to be done on an arbitrarily large number (p_2) of processors. If p, the number of available processors, is greater than or equal to 2p_2, then the optimization may be parallelized as well. This allows for efficient use of computational resources since the objective function calculations can be performed on the number of processors that allows for peak parallel efficiency, and further speedup may be achieved by parallelizing the optimization. Results are presented for an optimization problem which involves the solution of a PDE using a finite-element algorithm as part of the objective function calculation. The optimum number of processors for the finite-element calculations is less than p/2. Thus, the PDS method is also parallelized. Performance comparisons are given for an nCUBE 2 implementation.
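The two-level resource split described above reduces to simple arithmetic: each objective evaluation occupies p_2 processors, and any leftover capacity is used to run evaluations concurrently. A small sketch (the function name and return convention are my own, not from the paper):

```python
def two_level_split(p, p2):
    """Return (concurrent_evals, idle) for p total processors when each
    objective evaluation needs p2 processors. The optimization layer can
    itself be parallelized only when at least two evaluations fit at
    once, i.e. when p >= 2 * p2."""
    if p < p2:
        raise ValueError("not enough processors for a single evaluation")
    concurrent = p // p2
    return concurrent, p - concurrent * p2

print(two_level_split(16, 4))   # (4, 0): four concurrent evaluations
print(two_level_split(7, 4))    # (1, 3): optimization stays serial
```

The point of the paper's p >= 2p_2 condition is visible in the second call: with only one evaluation running at a time, the outer search gains nothing from extra processors.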

3. Parallel Processing in Visual Search Asymmetry

ERIC Educational Resources Information Center

Dosher, Barbara Anne; Han, Songmei; Lu, Zhong-Lin

2004-01-01

The difficulty of visual search may depend on the assignment of the same visual elements as targets and distractors: search asymmetry. Easy C-in-O searches and difficult O-in-C searches are often associated with parallel and serial search, respectively. Here, the time course of visual search was measured for both tasks with speed-accuracy methods. The…

4. Hybrid Optimization Parallel Search PACKage

Energy Science and Technology Software Center (ESTSC)

2009-11-10

HOPSPACK is open source software for solving optimization problems without derivatives. Application problems may have a fully nonlinear objective function, bound constraints, and linear and nonlinear constraints. Problem variables may be continuous, integer-valued, or a mixture of both. The software provides a framework that supports any derivative-free type of solver algorithm. Through the framework, solvers request parallel function evaluation, which may use MPI (multiple machines) or multithreading (multiple processors/cores on one machine). The framework provides a Cache and Pending Cache of saved evaluations that reduces execution time and facilitates restarts. Solvers can dynamically create other algorithms to solve subproblems, a useful technique for handling multiple start points and integer-valued variables. HOPSPACK ships with the Generating Set Search (GSS) algorithm, developed at Sandia as part of the APPSPACK open source software project.
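The evaluation cache mentioned above is worth illustrating: in derivative-free search, the same point is often requested more than once, and caching keyed on (rounded) coordinates avoids repeating an expensive objective call. This is a generic sketch of the idea, not HOPSPACK's actual API; the class name and rounding tolerance are assumptions.

```python
class EvalCache:
    """Memoize an expensive objective, keyed on rounded coordinates.
    A sketch of the caching idea, not HOPSPACK's implementation."""

    def __init__(self, f, decimals=8):
        self.f, self.decimals = f, decimals
        self.store = {}
        self.hits = 0

    def __call__(self, x):
        key = tuple(round(t, self.decimals) for t in x)
        if key in self.store:
            self.hits += 1                    # served without re-evaluating
        else:
            self.store[key] = self.f(key)     # expensive call happens once
        return self.store[key]

cached = EvalCache(lambda x: sum(t * t for t in x))
for pt in [(1.0, 2.0), (0.0, 0.0), (1.0, 2.0)]:   # one repeated point
    cached(pt)
print(cached.hits)   # 1 evaluation served from the cache
```

Persisting `store` to disk is what makes restarts cheap: a resumed run replays its history from the cache instead of re-running the simulations.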

5. Efficiency of parallel direct optimization

NASA Technical Reports Server (NTRS)

Janies, D. A.; Wheeler, W. C.

2001-01-01

Tremendous progress has been made at the level of sequential computation in phylogenetics. However, little attention has been paid to parallel computation. Parallel computing is particularly suited to phylogenetics because of the many ways large computational problems can be broken into parts that can be analyzed concurrently. In this paper, we investigate the scaling factors and efficiency of random addition and tree refinement strategies using the direct optimization software, POY, on a small (10 slave processors) and a large (256 slave processors) cluster of networked PCs running LINUX. These algorithms were tested on several data sets composed of DNA and morphology ranging from 40 to 500 taxa. Various algorithms in POY show fundamentally different properties within and between clusters. All algorithms are efficient on the small cluster for the 40-taxon data set. On the large cluster, multibuilding exhibits excellent parallel efficiency, whereas parallel building is inefficient. These results are independent of data set size. Branch swapping in parallel shows excellent speed-up for 16 slave processors on the large cluster. However, there is no appreciable speed-up for branch swapping with the further addition of slave processors (>16). This result is independent of data set size. Ratcheting in parallel is efficient with the addition of up to 32 processors in the large cluster. This result is independent of data set size. © 2001 The Willi Hennig Society.

6. Efficiency of parallel direct optimization.

PubMed

Janies, D A; Wheeler, W C

2001-03-01

Tremendous progress has been made at the level of sequential computation in phylogenetics. However, little attention has been paid to parallel computation. Parallel computing is particularly suited to phylogenetics because of the many ways large computational problems can be broken into parts that can be analyzed concurrently. In this paper, we investigate the scaling factors and efficiency of random addition and tree refinement strategies using the direct optimization software, POY, on a small (10 slave processors) and a large (256 slave processors) cluster of networked PCs running LINUX. These algorithms were tested on several data sets composed of DNA and morphology ranging from 40 to 500 taxa. Various algorithms in POY show fundamentally different properties within and between clusters. All algorithms are efficient on the small cluster for the 40-taxon data set. On the large cluster, multibuilding exhibits excellent parallel efficiency, whereas parallel building is inefficient. These results are independent of data set size. Branch swapping in parallel shows excellent speed-up for 16 slave processors on the large cluster. However, there is no appreciable speed-up for branch swapping with the further addition of slave processors (>16). This result is independent of data set size. Ratcheting in parallel is efficient with the addition of up to 32 processors in the large cluster. This result is independent of data set size. PMID:12240679

7. Design and implementation of a massively parallel version of DIRECT

SciTech Connect

He, J.; Verstak, A.; Watson, L.; Sosonkina, M.

2007-10-24

This paper describes several massively parallel implementations of the global search algorithm DIRECT. Two parallel schemes take different approaches to addressing the design challenges imposed by DIRECT's memory requirements and data dependencies. Three design aspects (topology, data structures, and task allocation) are compared in detail. The goal is to analytically investigate the strengths and weaknesses of these parallel schemes, identify several key sources of inefficiency, and experimentally evaluate a number of improvements in the latest parallel DIRECT implementation. The performance studies demonstrate improved data structure efficiency and load balancing on a 2200-processor cluster.

8. Single-agent parallel window search

NASA Technical Reports Server (NTRS)

Powley, Curt; Korf, Richard E.

1991-01-01

Parallel window search is applied to single-agent problems by having different processes simultaneously perform iterations of Iterative-Deepening-A* (IDA*) on the same problem but with different cost thresholds. This approach is limited by the time to perform the goal iteration. To overcome this disadvantage, the authors consider node ordering. They discuss how global node ordering by minimum h among nodes with equal f = g + h values can reduce the time complexity of serial IDA* by reducing the time to perform the iterations prior to the goal iteration. Finally, the two ideas of parallel window search and node ordering are combined to eliminate the weaknesses of each approach while retaining the strengths. The resulting approach, called simply parallel window search, can be used to find a near-optimal solution quickly, improve the solution until it is optimal, and then finally guarantee optimality, depending on the amount of time available.
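The cost-bounded iterations that parallel window search distributes are easy to show in miniature: each process would run one depth-first iteration of IDA* with its own threshold, and the first to reach the goal wins. Here the iterations run serially on a toy weighted graph; the graph, heuristic, and names are invented for illustration.

```python
GRAPH = {                        # edge costs of a small toy graph
    "S": {"A": 1, "B": 4},
    "A": {"C": 2, "G": 12},
    "B": {"C": 2, "G": 5},
    "C": {"G": 3},
    "G": {},
}
H = {"S": 4, "A": 3, "B": 3, "C": 2, "G": 0}   # admissible heuristic

def bounded_dfs(node, g, threshold, path):
    """Depth-first search pruned where f = g + h exceeds the threshold.
    Returns (solution_cost_or_None, smallest pruned f-value)."""
    f = g + H[node]
    if f > threshold:
        return None, f
    if node == "G":
        return g, f
    best_pruned = float("inf")
    for nxt, cost in GRAPH[node].items():
        if nxt in path:
            continue
        found, pruned = bounded_dfs(nxt, g + cost, threshold, path | {nxt})
        if found is not None:
            return found, pruned
        best_pruned = min(best_pruned, pruned)
    return None, best_pruned

# Serial IDA*: successive iterations with growing thresholds. The parallel
# window variant hands each threshold to a different process instead.
threshold, cost = H["S"], None
while cost is None:
    cost, threshold = bounded_dfs("S", 0, threshold, {"S"})
print(cost)   # optimal path S-A-C-G, cost 6
```

The serial bottleneck the abstract mentions is visible here: every iteration before the goal iteration is pure overhead, which is exactly the work the parallel windows overlap.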

9. Asynchronous parallel pattern search for nonlinear optimization

SciTech Connect

P. D. Hough; T. G. Kolda; V. J. Torczon

2000-01-01

Parallel pattern search (PPS) can be quite useful for engineering optimization problems characterized by a small number of variables (say 10--50) and by expensive objective function evaluations, such as complex simulations that take from minutes to hours to run. However, PPS, which was originally designed for execution on homogeneous and tightly-coupled parallel machines, is not well suited to the more heterogeneous, loosely-coupled, and even fault-prone parallel systems available today. Specifically, PPS is hindered by synchronization penalties and cannot recover in the event of a failure. The authors introduce a new asynchronous and fault-tolerant parallel pattern search (APPS) method and demonstrate its effectiveness on both simple test problems and some engineering optimization problems.
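A minimal synchronous compass pattern-search sketch makes the starting point concrete: poll the 2n coordinate directions, move to an improving point, otherwise halve the step. The 2n poll evaluations are independent, which is what PPS runs in parallel; the asynchronous variant described above additionally lets each evaluation return in its own time instead of waiting for the whole poll. Names and tolerances here are illustrative.

```python
def pattern_search(f, x, step=1.0, tol=1e-6):
    """Compass (pattern) search: poll +/- each coordinate direction,
    accept improvements, halve the step when the poll fails."""
    x = list(x)
    fx = f(x)
    while step > tol:
        improved = False
        for i in range(len(x)):          # these 2n evaluations are
            for s in (+step, -step):     # independent: PPS evaluates
                y = list(x)              # them on separate processors
                y[i] += s
                fy = f(y)
                if fy < fx:
                    x, fx, improved = y, fy, True
        if not improved:
            step /= 2                    # refine the mesh
    return x, fx

x, fx = pattern_search(lambda v: (v[0] - 1) ** 2 + (v[1] + 2) ** 2, [0.0, 0.0])
print(round(x[0], 3), round(x[1], 3))    # near the minimizer (1, -2)
```

The synchronization penalty is also visible: the step update waits for the slowest of the 2n evaluations, which is the barrier the asynchronous method removes.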

10. HOPSPACK: Hybrid Optimization Parallel Search Package.

SciTech Connect

Gray, Genetha A.; Kolda, Tamara G.; Griffin, Joshua; Taddy, Matt; Martinez-Canales, Monica

2008-12-01

In this paper, we describe the technical details of HOPSPACK (Hybrid Optimization Parallel Search Package), a new software platform which facilitates combining multiple optimization routines into a single, tightly-coupled, hybrid algorithm that supports parallel function evaluations. The framework is designed such that existing optimization source code can be easily incorporated with minimal code modification. By maintaining the integrity of each individual solver, the strengths and code sophistication of the original optimization package are retained and exploited.

11. A parallel algorithm for random searches

Wosniack, M. E.; Raposo, E. P.; Viswanathan, G. M.; da Luz, M. G. E.

2015-11-01

We discuss a parallelization procedure for a two-dimensional random search by a single individual, a typically sequential process. To ensure that the parallel version retains the features of the sequential random search, we analyze the spatial patterns of the encountered targets for different search strategies and densities of homogeneously distributed targets. We identify a lognormal tendency for the distribution of distances between consecutively detected targets. Then, by assigning the distinct mean and standard deviation of this distribution to each corresponding configuration in the parallel simulations (constituted by parallel random walkers), we are able to recover important statistical properties, e.g., the target detection efficiency, of the original problem. The proposed parallel approach presents a speedup of nearly one order of magnitude compared with the sequential implementation. The algorithm can easily be adapted to different instances, such as searches in three dimensions. Its possible range of applicability covers problems in areas as diverse as automated searches in high-capacity databases and animal foraging.
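The core of the parallelization idea above is that each independent walker draws its inter-detection distances from a lognormal fitted to the sequential runs, rather than tracing the full sequential trajectory. A toy sketch, where the mu/sigma values are made-up placeholders rather than parameters fitted in the paper:

```python
import random

def parallel_walker_distances(mu, sigma, n_targets, seed):
    """Distances between consecutive target detections for one simulated
    walker, drawn from the fitted lognormal (a stand-in for replaying
    the sequential search)."""
    rng = random.Random(seed)
    return [rng.lognormvariate(mu, sigma) for _ in range(n_targets)]

# Four independent walkers; each loop could run on its own processor.
walkers = [parallel_walker_distances(mu=0.0, sigma=0.5, n_targets=1000, seed=s)
           for s in range(4)]
mean_dist = sum(sum(w) for w in walkers) / sum(len(w) for w in walkers)
print(round(mean_dist, 2))   # near the lognormal mean exp(mu + sigma**2 / 2)
```

Because the walkers never communicate, the speedup is essentially linear in the number of walkers, which is consistent with the order-of-magnitude speedup the abstract reports.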

12. Parallelized direct execution simulation of message-passing parallel programs

NASA Technical Reports Server (NTRS)

Dickens, Phillip M.; Heidelberger, Philip; Nicol, David M.

1994-01-01

As massively parallel computers proliferate, there is growing interest in finding ways by which the performance of massively parallel codes can be efficiently predicted. This problem arises in diverse contexts such as parallelizing compilers, parallel performance monitoring, and parallel algorithm development. In this paper we describe one solution in which one directly executes the application code but uses a discrete-event simulator to model details of the presumed parallel machine, such as operating system and communication network behavior. Because this approach is computationally expensive, we are interested in its own parallelization, specifically the parallelization of the discrete-event simulator. We describe methods suitable for parallelized direct execution simulation of message-passing parallel programs, and report on the performance of such a system, the Large Application Parallel Simulation Environment (LAPSE), which we have built on the Intel Paragon. On all codes measured to date, LAPSE predicts performance well, typically within 10 percent relative error. Depending on the nature of the application code, we have observed low slowdowns (relative to natively executing code) and high relative speedups using up to 64 processors.

13. Parallelizing alternating direction implicit solver on GPUs

Technology Transfer Automated Retrieval System (TEKTRAN)

We present a parallel Alternating Direction Implicit (ADI) solver on GPUs. Our implementation significantly improves existing implementations in two aspects. First, we address the scalability issue of existing Parallel Cyclic Reduction (PCR) implementations by eliminating their hardware resource con...

14. Theory and practice of parallel direct optimization.

PubMed

Janies, Daniel A; Wheeler, Ward C

2002-01-01

Our ability to collect and distribute genomic and other biological data is growing at a staggering rate (Pagel, 1999). However, the synthesis of these data into knowledge of evolution is incomplete. Phylogenetic systematics provides a unifying intellectual approach to understanding evolution but presents formidable computational challenges. A fundamental goal of systematics, the generation of evolutionary trees, is typically approached as two distinct NP-complete problems: multiple sequence alignment and phylogenetic tree search. The number of cells in a multiple alignment matrix is exponentially related to sequence length. In addition, the number of evolutionary trees expands combinatorially with respect to the number of organisms or sequences to be examined. Biologically interesting datasets are currently composed of hundreds of taxa and thousands of nucleotides and morphological characters. This standard will continue to grow with the advent of highly automated sequencing and development of character databases. Three areas of innovation are changing how evolutionary computation can be addressed: (1) novel concepts for determination of sequence homology, (2) heuristics and shortcuts in tree-search algorithms, and (3) parallel computing. In this paper and the online software documentation we describe the basic usage of parallel direct optimization as implemented in the software POY (ftp://ftp.amnh.org/pub/molecular/poy). PMID:11924490

15. Parallel Mechanisms for Visual Search in Zebrafish

PubMed Central

Proulx, Michael J.; Parker, Matthew O.; Tahir, Yasser; Brennan, Caroline H.

2014-01-01

Parallel visual search mechanisms have previously been reported only in mammals and birds, and not in animals lacking an expanded telencephalon, such as bees. Here we report the first evidence for parallel visual search in fish, using a choice task where the fish had to find a target amongst an increasing number of distractors. Following two-choice discrimination training, zebrafish were presented with the original stimulus within an increasing array of distractor stimuli. We found that zebrafish exhibit no significant change in accuracy and approach latency as the number of distractors increased, providing evidence of parallel processing. This evidence challenges theories of vertebrate neural architecture and the importance of an expanded telencephalon for the evolution of executive function. PMID:25353168

16. Parallel search of strongly ordered game trees

SciTech Connect

Marsland, T.A.; Campbell, M.

1982-12-01

The alpha-beta algorithm forms the basis of many programs that search game trees. A number of methods have been designed to improve the utility of the sequential version of this algorithm, especially for use in game-playing programs. These enhancements are based on the observation that alpha-beta is most effective when the best move in each position is considered early in the search. Trees that have this so-called strong ordering property are not only of practical importance but possess characteristics that can be exploited in both sequential and parallel environments. This paper draws upon experiences gained during the development of programs which search chess game trees. Over the past decade major enhancements of the alpha-beta algorithm have been developed by people building game-playing programs, and many of these methods will be surveyed and compared here. The balance of the paper contains a study of contemporary methods for searching chess game trees in parallel, using an arbitrary number of independent processors. To make efficient use of these processors, one must have a clear understanding of the basic properties of the trees actually traversed when alpha-beta cutoffs occur. This paper provides such insights and concludes with a brief description of a refinement to a standard parallel search algorithm for this problem. 33 references.
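The observation that alpha-beta works best when the strongest move is considered first is easy to demonstrate on a tiny tree. The sketch below (tree values and names invented for illustration) counts leaf evaluations for the same depth-2 game with good and bad move ordering:

```python
def alphabeta(node, alpha, beta, maximizing, counter):
    """Minimax with alpha-beta pruning over nested lists; leaves are ints.
    counter[0] tallies the leaves actually evaluated."""
    if isinstance(node, int):
        counter[0] += 1
        return node
    if maximizing:
        value = -float("inf")
        for child in node:
            value = max(value, alphabeta(child, alpha, beta, False, counter))
            alpha = max(alpha, value)
            if alpha >= beta:
                break               # cutoff: remaining siblings unsearched
        return value
    value = float("inf")
    for child in node:
        value = min(value, alphabeta(child, alpha, beta, True, counter))
        beta = min(beta, value)
        if alpha >= beta:
            break
    return value

# The same position with the strongest reply ordered first ("good")
# versus last ("bad").
trees = {"good": [[7, 8, 9], [2, 9, 9], [1, 9, 9]],
         "bad":  [[1, 9, 9], [2, 9, 9], [9, 8, 7]]}
counts = {}
for name, tree in trees.items():
    n = [0]
    value = alphabeta(tree, -float("inf"), float("inf"), True, n)
    counts[name] = n[0]
    print(name, value, n[0])        # both orderings find value 7
```

Both orderings return the same game value, but the well-ordered tree evaluates 5 leaves against 9 for the badly ordered one; on strongly ordered chess trees this gap grows exponentially with depth, which is what the parallel methods surveyed in the paper try to preserve.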

17. A Parallel VLSI Direction Finding Algorithm

van der Veen, Alle-Jan; Deprettere, Ed F.

1988-02-01

In this paper, we present a parallel VLSI architecture that is matched to a class of direction (frequency, pole) finding algorithms of type ESPRIT. The problem is modeled in such a way that it allows an easy-to-partition, fully parallel VLSI implementation, using unitary transformations only. The hard problem, the generalized Schur decomposition of a matrix pencil, is tackled using a modified Stewart-Jacobi approach that improves convergence and simplifies parameter computations. The proposed architecture is a fixed-size, two-layer Jacobi iteration array that is matched to all sub-problems of the main problem: two QR factorizations, two SVDs, and a single GSD problem. The arithmetic used is (pipelined) CORDIC.

18. Massively Parallel Direct Simulation of Multiphase Flow

SciTech Connect

COOK,BENJAMIN K.; PREECE,DALE S.; WILLIAMS,J.R.

2000-08-10

The authors' understanding of multiphase physics and the associated predictive capability for multiphase systems are severely limited by current continuum modeling methods and experimental approaches. This research will deliver an unprecedented modeling capability to directly simulate three-dimensional multiphase systems at the particle scale. The model solves the fully coupled equations of motion governing the fluid phase and the individual particles comprising the solid phase using a newly discovered, highly efficient coupled numerical method based on the discrete-element method and the Lattice-Boltzmann method. A massively parallel implementation will enable the solution of large, physically realistic systems.

19. Parallel and Serial Processes in Visual Search

ERIC Educational Resources Information Center

Thornton, Thomas L.; Gilden, David L.

2007-01-01

A long-standing issue in the study of how people acquire visual information centers around the scheduling and deployment of attentional resources: Is the process serial, or is it parallel? A substantial empirical effort has been dedicated to resolving this issue. However, the results remain largely inconclusive because the methodologies that have…

20. Learning and Parallelization Boost Constraint Search

ERIC Educational Resources Information Center

Yun, Xi

2013-01-01

Constraint satisfaction problems are a powerful way to abstract and represent academic and real-world problems from both artificial intelligence and operations research. A constraint satisfaction problem is typically addressed by a sequential constraint solver running on a single processor. Rather than construct a new, parallel solver, this work…

1. Direct search for dark matter

SciTech Connect

Yoo, Jonghee; /Fermilab

2009-12-01

Dark matter is hypothetical matter which does not interact with electromagnetic radiation. Its existence is inferred only from the gravitational effects seen in astrophysical observations, invoked to explain the missing mass component of the Universe. Weakly Interacting Massive Particles are currently the most popular candidate to explain this missing mass. I review the current status of experimental searches for dark matter through direct detection using terrestrial detectors.

2. An Analysis of Performance and Cost Factors in Searching Large Text Databases Using Parallel Search Systems.

ERIC Educational Resources Information Center

Couvreur, T. R.; And Others

1994-01-01

Discusses the results of modeling the performance of searching large text databases via various parallel hardware architectures and search algorithms. The performance under load and the cost of each configuration are compared, and a common search workload used in the modeling is described. (Contains 26 references.) (LRW)

3. Multi-directional local search

PubMed Central

Tricoire, Fabien

2012-01-01

This paper introduces multi-directional local search, a metaheuristic for multi-objective optimization. We first motivate the method and present an algorithmic framework for it. We then apply it to several known multi-objective problems such as the multi-objective multi-dimensional knapsack problem, the bi-objective set packing problem and the bi-objective orienteering problem. Experimental results show that our method systematically provides solution sets of comparable quality with state-of-the-art methods applied to benchmark instances of these problems, within reasonable CPU effort. We conclude that the proposed algorithmic framework is a viable option when solving multi-objective optimization problems. PMID:25140071

4. A parallelization of the row-searching algorithm

Yaici, Malika; Khaled, Hayet; Khaled, Zakia; Bentahar, Athmane

2012-11-01

This paper concerns the parallelization of the row-searching algorithm, which searches for linearly dependent rows of a given matrix, and its implementation in an MPI (Message Passing Interface) environment. The algorithm is widely used in control theory, and more specifically in solving the famous Diophantine equation. An introduction to the Diophantine equation is presented, then two parallelization approaches for the algorithm are detailed. The first distributes a set of rows over processes (processors) and the second makes a distribution per blocks. The sequential algorithm and its two parallel forms are implemented using MPI routines, then modelled using UML (Unified Modelling Language) and finally evaluated using algorithmic complexity.

5. Self-Directed Job Search: An Introduction.

ERIC Educational Resources Information Center

Employment and Training Administration (DOL), Washington, DC.

This document provides an introduction to a job search training activity--self-directed job search--which can be implemented by Private Industry Councils (PICs) or Comprehensive Employment and Training Act (CETA) Prime Sponsors. The first section introduces self-directed job search for the economically disadvantaged. The next section describes…

6. Automatic Generation of Directive-Based Parallel Programs for Shared Memory Parallel Systems

NASA Technical Reports Server (NTRS)

Jin, Hao-Qiang; Yan, Jerry; Frumkin, Michael

2000-01-01

The shared-memory programming model is a very effective way to achieve parallelism on shared-memory parallel computers. As great progress has been made in hardware and software technologies, the performance of parallel programs with compiler directives has demonstrated large improvements. The introduction of OpenMP directives, the industry standard for shared-memory programming, has minimized the issue of portability. Due to its ease of programming and its good performance, the technique has become very popular. In this study, we have extended CAPTools, a computer-aided parallelization toolkit, to automatically generate directive-based, OpenMP, parallel programs. We outline techniques used in the implementation of the tool and present test results on the NAS parallel benchmarks and ARC3D, a CFD application. This work demonstrates the great potential of using computer-aided tools to quickly port parallel programs and also achieve good performance.

7. PLAST: parallel local alignment search tool for database comparison

PubMed Central

Nguyen, Van Hoa; Lavenier, Dominique

2009-01-01

Background Sequence similarity searching is an important and challenging task in molecular biology and next-generation sequencing should further strengthen the need for faster algorithms to process such vast amounts of data. At the same time, the internal architecture of current microprocessors is tending towards more parallelism, leading to the use of chips with two, four and more cores integrated on the same die. The main purpose of this work was to design an effective algorithm to fit with the parallel capabilities of modern microprocessors. Results A parallel algorithm for comparing large genomic banks and targeting middle-range computers has been developed and implemented in PLAST software. The algorithm exploits two key parallel features of existing and future microprocessors: the SIMD programming model (SSE instruction set) and the multithreading concept (multicore). Compared to multithreaded BLAST software, tests performed on an 8-processor server have shown speedup ranging from 3 to 6 with a similar level of accuracy. Conclusion A parallel algorithmic approach driven by the knowledge of the internal microprocessor architecture allows significant speedup to be obtained while preserving standard sensitivity for similarity search problems. PMID:19821978

8. Parallel Harmony Search Based Distributed Energy Resource Optimization

SciTech Connect

Ceylan, Oguzhan; Liu, Guodong; Tomsovic, Kevin

2015-01-01

This paper presents a harmony search based parallel optimization algorithm to minimize voltage deviations in three-phase unbalanced electrical distribution systems and to maximize the active power outputs of distributed energy resources (DRs). The main contribution is to reduce the adverse impacts on the voltage profile as photovoltaic (PV) output or electric vehicle (EV) charging changes throughout a day. The IEEE 123-bus distribution test system is modified by adding DRs and EVs under different load profiles. The simulation results show that by using parallel computing techniques, heuristic methods may be used as an alternative optimization tool in electrical power distribution system operation.

9. Optimised fine and coarse parallelism for sequence homology search.

PubMed

Meng, Xiandong; Chaudhary, Vipin

2006-01-01

New biological experimental techniques are continuing to generate large amounts of DNA, RNA, human genome, and protein sequence data. The quantity and quality of data from these experiments make analyses of the results very time-consuming, expensive, and impractical. Searching DNA and protein databases using sequence comparison algorithms has become one of the most powerful techniques to better understand the functionality of a particular DNA, RNA, genome, or protein sequence. This paper presents a technique to effectively combine fine- and coarse-grain parallelism using general-purpose processors for sequence homology database searches. The results show that the classic Smith-Waterman sequence alignment algorithm achieves superlinear performance with proper scheduling and multi-level parallel computing at no additional cost. PMID:18048183
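For reference, the Smith-Waterman local alignment score mentioned above reduces to a small dynamic program. In this sketch the match/mismatch/gap scores are common textbook defaults, not values from the paper; coarse-grain parallelism would distribute database sequences across processors, while fine-grain (SIMD) parallelism vectorizes the independent cells along each anti-diagonal of this matrix.

```python
def smith_waterman_score(a, b, match=2, mismatch=-1, gap=-1):
    """Best local alignment score between sequences a and b
    (score-only Smith-Waterman, no traceback)."""
    rows = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    best = 0
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            diag = rows[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            # Local alignment: a cell never goes below zero.
            rows[i][j] = max(0, diag, rows[i - 1][j] + gap, rows[i][j - 1] + gap)
            best = max(best, rows[i][j])
    return best

print(smith_waterman_score("ACGT", "ACGT"))   # perfect match: 4 * 2 = 8
print(smith_waterman_score("ACGT", "CG"))     # local hit "CG": 4
```

Each cell depends only on its left, upper, and upper-left neighbors, which is why cells on the same anti-diagonal are mutually independent and can be computed in one SIMD pass.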

10. Parallel Breadth-First Search on Distributed Memory Systems

SciTech Connect

Computational Research Division; Buluc, Aydin; Madduri, Kamesh

2011-04-15

Data-intensive, graph-based computations are pervasive in several scientific applications, and are known to be quite challenging to implement on distributed memory systems. In this work, we explore the design space of parallel algorithms for Breadth-First Search (BFS), a key subroutine in several graph algorithms. We present two highly-tuned parallel approaches for BFS on large parallel systems: a level-synchronous strategy that relies on a simple vertex-based partitioning of the graph, and a two-dimensional sparse matrix-partitioning-based approach that mitigates parallel communication overhead. For both approaches, we also present hybrid versions with intra-node multithreading. Our novel hybrid two-dimensional algorithm reduces communication times by up to a factor of 3.5, relative to a common vertex-based approach. Our experimental study identifies execution regimes in which these approaches will be competitive, and we demonstrate extremely high performance on leading distributed-memory parallel systems. For instance, for a 40,000-core parallel execution on Hopper, an AMD Magny-Cours based system, we achieve a BFS performance rate of 17.8 billion edge visits per second on an undirected graph of 4.3 billion vertices and 68.7 billion edges with skewed degree distribution.
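The level-synchronous strategy described above has a simple serial skeleton: expand the frontier one level at a time, with a synchronization point between levels. In the distributed version each rank expands its slice of the frontier and then exchanges newly discovered vertices; this sketch shows only the level structure, with an illustrative toy graph.

```python
from collections import defaultdict

def bfs_levels(adj, source):
    """Return {vertex: level} by expanding the frontier level by level.
    In a distributed run, the per-level loop is partitioned across ranks
    and the frontier exchange is the synchronization point."""
    level = {source: 0}
    frontier = [source]
    depth = 0
    while frontier:
        depth += 1
        next_frontier = []
        for u in frontier:                 # partitioned across ranks
            for v in adj[u]:
                if v not in level:         # first discovery wins
                    level[v] = depth
                    next_frontier.append(v)
        frontier = next_frontier
    return level

adj = defaultdict(list)
for u, v in [(0, 1), (0, 2), (1, 3), (2, 3), (3, 4)]:
    adj[u].append(v)                       # undirected toy graph
    adj[v].append(u)
print(bfs_levels(adj, 0))                  # {0: 0, 1: 1, 2: 1, 3: 2, 4: 3}
```

The paper's 2D alternative reformulates the frontier expansion as a sparse matrix-vector product over a 2D-partitioned adjacency matrix, trading this vertex-centric loop for collectives with lower communication volume.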

11. Asynchronous parallel generating set search for linearly-constrained optimization.

SciTech Connect

Lewis, Robert Michael; Griffin, Joshua D.; Kolda, Tamara Gibson

2006-08-01

Generating set search (GSS) is a family of direct search methods that encompasses generalized pattern search and related methods. We describe an algorithm for asynchronous linearly-constrained GSS, which has some complexities that make it different from both the asynchronous bound-constrained case as well as the synchronous linearly-constrained case. The algorithm has been implemented in the APPSPACK software framework and we present results from an extensive numerical study using CUTEr test problems. We discuss the results, both positive and negative, and conclude that GSS is a reliable method for solving small- to medium-sized linearly-constrained optimization problems without derivatives.

12. Parallel/distributed direct method for solving linear systems

NASA Technical Reports Server (NTRS)

Lin, Avi

1990-01-01

A new family of parallel schemes for directly solving linear systems is presented and analyzed. It is shown that these schemes exhibit near-optimal performance and enjoy several important features: (1) For large enough linear systems, the design of the appropriate parallel algorithm is insensitive to the number of processors, as its performance grows monotonically with them; (2) It is especially good for large matrices, with dimensions large relative to the number of processors in the system; (3) It can be used in both distributed parallel computing environments and tightly coupled parallel computing systems; and (4) This set of algorithms can be mapped onto any parallel architecture without any major programming difficulties or algorithmic changes.

13. Information-Limited Parallel Processing in Difficult Heterogeneous Covert Visual Search

ERIC Educational Resources Information Center

Dosher, Barbara Anne; Han, Songmei; Lu, Zhong-Lin

2010-01-01

Difficult visual search is often attributed to time-limited serial attention operations, although neural computations in the early visual system are parallel. Using probabilistic search models (Dosher, Han, & Lu, 2004) and a full time-course analysis of the dynamics of covert visual search, we distinguish unlimited capacity parallel versus serial…

14. Series-parallel method of direct solar array regulation

NASA Technical Reports Server (NTRS)

Gooder, S. T.

1976-01-01

A 40 watt experimental solar array was directly regulated by shorting out appropriate combinations of series and parallel segments of a solar array. Regulation switches were employed to control the array at various set-point voltages between 25 and 40 volts. Regulation to within + or - 0.5 volt was obtained over a range of solar array temperatures and illumination levels as an active load was varied from open circuit to maximum available power. A fourfold reduction in regulation switch power dissipation was achieved with series-parallel regulation as compared to the usual series-only switching for direct solar array regulation.

15. Parallel Performance Optimization of the Direct Simulation Monte Carlo Method

Gao, Da; Zhang, Chonglin; Schwartzentruber, Thomas

2009-11-01

Although the direct simulation Monte Carlo (DSMC) particle method is more computationally intensive compared to continuum methods, it is accurate for conditions ranging from continuum to free-molecular, accurate in highly non-equilibrium flow regions, and holds potential for incorporating advanced molecular-based models for gas-phase and gas-surface interactions. As available computer resources continue their rapid growth, the DSMC method is continually being applied to increasingly complex flow problems. Although processor clock speed continues to increase, a trend of increasing multi-core-per-node parallel architectures is emerging. To effectively utilize such current and future parallel computing systems, a combined shared/distributed memory parallel implementation (using both Open Multi-Processing (OpenMP) and Message Passing Interface (MPI)) of the DSMC method is under development. The parallel implementation of a new state-of-the-art 3D DSMC code employing an embedded 3-level Cartesian mesh will be outlined. The presentation will focus on performance optimization strategies for DSMC, which include, but are not limited to, modified algorithm designs, practical code-tuning techniques, and parallel performance optimization. Specifically, key issues important to DSMC shared memory (OpenMP) parallel performance are identified as (1) granularity, (2) load balancing, (3) locality, and (4) synchronization. Challenges and solutions associated with these issues as they pertain to the DSMC method will be discussed.

16. Optimal directed searches for continuous gravitational waves

Ming, Jing; Krishnan, Badri; Papa, Maria Alessandra; Aulbert, Carsten; Fehrmann, Henning

2016-03-01

Wide parameter space searches for long-lived continuous gravitational wave signals are computationally limited. It is therefore critically important that the available computational resources are used rationally. In this paper we consider directed searches, i.e., targets for which the sky position is known accurately but the frequency and spin-down parameters are completely unknown. Given a list of such potential astrophysical targets, we therefore need to prioritize. On which target(s) should we spend scarce computing resources? What parameter space region in frequency and spin-down should we search through? Finally, what is the optimal search setup that we should use? In this paper we present a general framework that allows us to solve all three of these problems. This framework is based on maximizing the probability of making a detection subject to a constraint on the maximum available computational cost. We illustrate the method for a simplified problem.

17. Interpreting Ellenore Flood's Self-Directed Search.

ERIC Educational Resources Information Center

Rayman, Jack R.

1998-01-01

Presents and responds to questions the author would ask himself before meeting with a client whose Self-Directed Search he has reviewed. The client in the case is a 29-year-old female high school teacher faced with four occupational opportunities from which she is trying to make a choice. (MKA)

18. Parallel algorithms for unconstrained optimization by multisplitting with inexact subspace search - the abstract

SciTech Connect

Renaut, R.; He, Q.

1994-12-31

A new parallel iterative algorithm for unconstrained optimization by multisplitting is proposed. In this algorithm the original problem is split into a set of small optimization subproblems which are solved using well known sequential algorithms. These algorithms are iterative in nature, e.g. the DFP variable metric method. Here the authors use sequential algorithms based on an inexact subspace search, which is an extension of the usual idea of an inexact line search. Essentially the idea of the inexact line search for nonlinear minimization is that at each iteration the authors only find an approximate minimum in the line search direction. Hence by inexact subspace search, they mean that, instead of finding the minimum of the subproblem at each iteration, they do an incomplete downhill search to give an approximate minimum. Some convergence and numerical results for this algorithm will be presented. Further, the original theory will be generalized to the situation with a singular Hessian. Applications for nonlinear least squares problems will be presented. Experimental results will be presented for implementations on an Intel iPSC/860 Hypercube with 64 nodes as well as on the Intel Paragon.
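
The inexact line search that the subspace idea extends can be illustrated with the classical Armijo backtracking rule (a standard textbook formulation, not the authors' exact scheme):

```python
def armijo_backtracking(f, grad, x, d, alpha0=1.0, c=1e-4, rho=0.5):
    # Inexact line search: rather than minimizing f along direction d exactly,
    # accept the first step length giving sufficient decrease (Armijo rule):
    #   f(x + alpha*d) <= f(x) + c * alpha * grad(x).d
    fx = f(x)
    slope = sum(g * di for g, di in zip(grad(x), d))  # directional derivative
    alpha = alpha0
    while f([xi + alpha * di for xi, di in zip(x, d)]) > fx + c * alpha * slope:
        alpha *= rho                                  # backtrack
    return alpha
```

The subspace version replaces the one-dimensional direction `d` with a small subproblem, but keeps the same "stop at an approximate minimum" philosophy.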

19. Fencing direct memory access data transfers in a parallel active messaging interface of a parallel computer

DOEpatents

Blocksome, Michael A.; Mamidala, Amith R.

2013-09-03

Fencing direct memory access (DMA) data transfers in a parallel active messaging interface (PAMI) of a parallel computer, the PAMI including data communications endpoints, each endpoint including specifications of a client, a context, and a task, the endpoints coupled for data communications through the PAMI and through DMA controllers operatively coupled to segments of shared random access memory through which the DMA controllers deliver data communications deterministically, including initiating execution through the PAMI of an ordered sequence of active DMA instructions for DMA data transfers between two endpoints, effecting deterministic DMA data transfers through a DMA controller and a segment of shared memory; and executing through the PAMI, with no FENCE accounting for DMA data transfers, an active FENCE instruction, the FENCE instruction completing execution only after completion of all DMA instructions initiated prior to execution of the FENCE instruction for DMA data transfers between the two endpoints.

20. Fencing direct memory access data transfers in a parallel active messaging interface of a parallel computer

DOEpatents

Blocksome, Michael A; Mamidala, Amith R

2014-02-11

Fencing direct memory access (DMA) data transfers in a parallel active messaging interface (PAMI) of a parallel computer, the PAMI including data communications endpoints, each endpoint including specifications of a client, a context, and a task, the endpoints coupled for data communications through the PAMI and through DMA controllers operatively coupled to segments of shared random access memory through which the DMA controllers deliver data communications deterministically, including initiating execution through the PAMI of an ordered sequence of active DMA instructions for DMA data transfers between two endpoints, effecting deterministic DMA data transfers through a DMA controller and a segment of shared memory; and executing through the PAMI, with no FENCE accounting for DMA data transfers, an active FENCE instruction, the FENCE instruction completing execution only after completion of all DMA instructions initiated prior to execution of the FENCE instruction for DMA data transfers between the two endpoints.

1. Panel on future directions in parallel computer architecture

SciTech Connect

VanTilborg, A.M. )

1989-06-01

One of the program highlights of the 15th Annual International Symposium on Computer Architecture, held May 30 - June 2, 1988 in Honolulu, was a panel session on future directions in parallel computer architecture. The panel was organized and chaired by the author, and was comprised of Prof. Jack Dennis (NASA Ames Research Institute for Advanced Computer Science), Prof. H.T. Kung (Carnegie Mellon), and Dr. Burton Smith (Tera Computer Company). The objective of the panel was to identify the likely trajectory of future parallel computer system progress, particularly from the standpoint of marketplace acceptance. Approximately 250 attendees participated in the session, in which each panelist began with a ten minute viewgraph explanation of his views, followed by an open and sometimes lively exchange with the audience and fellow panelists. The session ran for ninety minutes.

2. The LUX direct dark matter search

Murphy, A. St. J.

2016-06-01

As evidenced by the numerous contributions on the topic at this meeting, the IX International Conference on Interconnections between Particle Physics and Cosmology (PPC2015), the direct detection of dark matter remains as one of the highest priorities in both particle physics and cosmology. In 2013 the LUX direct dark matter search collaboration reported the most stringent constraints to-date on the spin-independent WIMP-nucleon interaction cross section. Here we present a summary of that work, describe recent technical improvements, and results from new calibrations. Prospects for the future of the LUX scientific program are reported, together with the outlook for its successor project, LZ.

3. Armentum: a hybrid direct search optimization methodology

Briones, Francisco Zorrilla

2016-07-01

Design of experiments (DOE) offers a great deal of benefits to any manufacturing organization, such as characterization of variables, and sets the path for the optimization of the levels of these variables (settings) through response surface methodology, leading to process capability improvement, efficiency increase, and cost reduction. Unfortunately, the use of these methodologies is very limited due to various situations. Some of these situations involve the investment of production time, materials, personnel, and equipment; most organizations are not willing to invest these resources, or are not able to because of production demands, besides the fact that they will produce non-conformant product (scrap) during the process of experimentation. Other methodologies, in the form of algorithms, may be used to optimize a process. Known as direct search methods, these algorithms search for an optimum of an unknown function through the search for the best combination of the levels of the variables considered in the analysis. These methods have a very different application strategy: they search for the best combination of parameters during the normal production run, calculating the change in the input variables and evaluating the results in small steps until an optimum is reached. These algorithms are very sensitive to internal noise (variation of the input variables), among other disadvantages. In this paper a comparison is made between classical experimental design and one of these direct search methods, developed by Nelder and Mead (1965) and known as the Nelder-Mead simplex (NMS), trying to overcome the disadvantages and maximize the advantages of both approaches through a proposed combination of the two methodologies.
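
For reference, the Nelder-Mead simplex that this work builds on can be sketched in a few lines. This is a simplified textbook variant (reflection, expansion, inside contraction, shrink, with the standard coefficients 1, 2, 1/2), not the paper's Armentum hybrid:

```python
def nelder_mead(f, x0, step=0.5, tol=1e-8, max_iter=500):
    # Minimal Nelder-Mead simplex method for unconstrained minimization.
    n = len(x0)
    simplex = [list(x0)]
    for i in range(n):                       # initial simplex around x0
        p = list(x0)
        p[i] += step
        simplex.append(p)
    for _ in range(max_iter):
        simplex.sort(key=f)                  # best vertex first, worst last
        best, worst = simplex[0], simplex[-1]
        if abs(f(worst) - f(best)) < tol:
            break
        centroid = [sum(p[i] for p in simplex[:-1]) / n for i in range(n)]
        xr = [2 * centroid[i] - worst[i] for i in range(n)]          # reflect
        if f(best) <= f(xr) < f(simplex[-2]):
            simplex[-1] = xr
        elif f(xr) < f(best):
            xe = [3 * centroid[i] - 2 * worst[i] for i in range(n)]  # expand
            simplex[-1] = xe if f(xe) < f(xr) else xr
        else:
            xc = [(centroid[i] + worst[i]) / 2 for i in range(n)]    # contract
            if f(xc) < f(worst):
                simplex[-1] = xc
            else:                            # shrink toward the best vertex
                simplex = [best] + [[(p[i] + best[i]) / 2 for i in range(n)]
                                    for p in simplex[1:]]
    return min(simplex, key=f)
```

Note that nothing here uses derivatives, which is what makes the method usable on a running production process.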

4. Improving Data Transfer Throughput with Direct Search Optimization

SciTech Connect

Balaprakash, Prasanna; Morozov, Vitali; Kettimuthu, Rajkumar; Kumaran, Kalyan; Foster, Ian

2016-01-01

Improving data transfer throughput over high-speed long-distance networks has become increasingly difficult. Numerous factors such as nondeterministic congestion, dynamics of the transfer protocol, and multiuser and multitask source and destination endpoints, as well as interactions among these factors, contribute to this difficulty. A promising approach to improving throughput consists in using parallel streams at the application layer. We formulate and solve the problem of choosing the number of such streams from a mathematical optimization perspective. We propose the use of direct search methods, a class of easy-to-implement and lightweight mathematical optimization algorithms, to improve the performance of data transfers by dynamically adapting the number of parallel streams in a manner that does not require domain expertise, instrumentation, analytical models, or historic data. We apply our method to transfers performed with the GridFTP protocol, and illustrate the effectiveness of the proposed algorithm when used within Globus, a state-of-the-art data transfer tool, on production WAN links and servers. We show that when compared to user default settings our direct search methods can achieve up to 10x performance improvement under certain conditions. We also show that our method can overcome performance degradation due to external compute and network load on source endpoints, a common scenario at high performance computing facilities.
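
The dynamic-adaptation idea can be illustrated with a toy integer hill-climb over the stream count. The `measure` callback standing in for a timed transfer is an assumption for illustration, not part of the Globus or GridFTP API:

```python
def tune_stream_count(measure, lo=1, hi=64, start=1, step=4):
    # Direct search over the (integer) number of parallel streams.
    # measure(p) is assumed to return the observed throughput with p
    # streams; no model, instrumentation, or historical data is needed.
    p, best = start, measure(start)
    while step >= 1:
        moved = False
        for q in (p + step, p - step):     # probe both directions
            if lo <= q <= hi:
                t = measure(q)
                if t > best:
                    p, best, moved = q, t, True
                    break
        if not moved:
            step //= 2                     # contract the probe distance
    return p, best
```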

5. Direct drive digital servo press with high parallel control

Murata, Chikara; Yabe, Jun; Endou, Junichi; Hasegawa, Kiyoshi

2013-12-01

Direct drive digital servo press has been developed as university-industry joint research and development since 1998. On the basis of this result, a 4-axis direct drive digital servo press was developed and put on the market in April 2002. This servo press is composed of one slide supported by 4 ball screws, and each axis has a linear scale measuring its position with accuracy better than 1 μm. Each axis is controlled independently by a servo motor and feedback system. This system can maintain a high level of parallelism and high accuracy even under a highly eccentric load. Furthermore, 'full stroke full power' is obtained by using ball screws. Using these features, various new types of press forming and stamping have been developed and put into production. The new stamping and forming methods are introduced, together with a strategy for high-added-value press forming to meet manufacturing needs and the future direction of press forming.

6. Nonlinearly-constrained optimization using asynchronous parallel generating set search.

SciTech Connect

Griffin, Joshua D.; Kolda, Tamara Gibson

2007-05-01

Many optimization problems in computational science and engineering (CS&E) are characterized by expensive objective and/or constraint function evaluations paired with a lack of derivative information. Direct search methods such as generating set search (GSS) are well understood and efficient for derivative-free optimization of unconstrained and linearly-constrained problems. This paper addresses the more difficult problem of general nonlinear programming where derivatives for objective or constraint functions are unavailable, which is the case for many CS&E applications. We focus on penalty methods that use GSS to solve the linearly-constrained problems, comparing different penalty functions. A classical choice for penalizing constraint violations is ℓ₂², the squared ℓ₂ norm, which has advantages for derivative-based optimization methods. In our numerical tests, however, we show that exact penalty functions based on the ℓ₁, ℓ₂, and ℓ∞ norms converge to good approximate solutions more quickly and thus are attractive alternatives. Unfortunately, exact penalty functions are discontinuous and consequently introduce theoretical problems that degrade the final solution accuracy, so we also consider smoothed variants. Smoothed-exact penalty functions are theoretically attractive because they retain the differentiability of the original problem. Numerically, they are a compromise between exact and ℓ₂², i.e., they converge to a good solution somewhat quickly without sacrificing much solution accuracy. Moreover, the smoothing is parameterized and can potentially be adjusted to balance the two considerations. Since many CS&E optimization problems are characterized by expensive function evaluations, reducing the number of function evaluations is paramount, and the results of this paper show that exact and smoothed-exact penalty functions are well-suited to this task.
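
The competing penalty terms are easy to state concretely. A minimal sketch for constraints cᵢ(x) ≤ 0, where positive values are violations:

```python
import math

def penalty_terms(violations):
    # Penalty terms for constraint values c_i(x); c_i(x) <= 0 is feasible,
    # so only the positive part of each value is penalized.
    v = [max(0.0, ci) for ci in violations]   # magnitude of each violation
    l1    = sum(v)                    # exact l1 penalty, nonsmooth
    l2_sq = sum(vi * vi for vi in v)  # classical smooth (squared l2) penalty
    l2    = math.sqrt(l2_sq)          # exact l2 penalty, nonsmooth
    linf  = max(v) if v else 0.0      # exact l-infinity penalty, nonsmooth
    return l1, l2, l2_sq, linf
```

The exact penalties (ℓ₁, ℓ₂, ℓ∞) are nondifferentiable exactly where a constraint becomes active, which is the source of the accuracy degradation the smoothed variants address.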

7. Indirect and direct search for dark matter

Klasen, M.; Pohl, M.; Sigl, G.

2015-11-01

The majority of the matter in the universe is still unidentified and under investigation by both direct and indirect means. Many experiments searching for the recoil of dark-matter particles off target nuclei in underground laboratories have established increasingly strong constraints on the mass and scattering cross sections of weakly interacting particles, and some have even seen hints at a possible signal. Other experiments search for a possible mixing of photons with light scalar or pseudo-scalar particles that could also constitute dark matter. Furthermore, annihilation or decay of dark matter can contribute to charged cosmic rays, photons at all energies, and neutrinos. Many existing and future ground-based and satellite experiments are sensitive to such signals. Finally, data from the Large Hadron Collider at CERN are scrutinized for missing energy as a signature of new weakly interacting particles that may be related to dark matter. In this review article we summarize the status of the field with an emphasis on the complementarity between direct detection in dedicated laboratory experiments, indirect detection in the cosmic radiation, and searches at particle accelerators.

8. Dark matter: an overview of direct searches.

Gerbier, G.

1991-11-01

The purpose of this paper is to give a flavour of the experimental challenges raised by the detection of dark matter. It summarizes the detection methods for MACHOs, celestial-body candidates for baryonic dark matter, and for WIMPs, particle candidates for non-baryonic dark matter. Current status and hopes are given. Two side aspects not directly related to the experimental search are evoked to illustrate that the dark matter puzzle is indeed at the common frontier of various fields of physics.

9. EDELWEISS experiment: Direct search for dark matter

SciTech Connect

Lubashevskiy, A. V. Yakushev, E. A.

2008-07-15

The EDELWEISS experiment is aimed at direct searches for nonbaryonic cold dark matter by means of cryogenic germanium detectors. It is deployed at the LSM underground laboratory in the Frejus tunnel, which connects France and Italy. The results of the experiment made it possible to set a limit on the spin-independent cross section for the scattering of weakly interacting massive particles (WIMPs) at a level of 10⁻⁶ pb. Data from 21 detectors of total mass about 7 kg are being accumulated at the present time.

10. Dark Matter: Collider vs. direct searches

Jacques, T.

2016-07-01

Effective Field Theories (EFTs) are a useful tool across a wide range of DM searches, including LHC searches and direct detection. Given the current lack of indications about the nature of the DM particle and its interactions, a model independent interpretation of the collider bounds appears mandatory, especially in complementarity with the reinterpretation of the exclusion limits within a choice of simplified models, which cannot exhaust the set of possible completions of an effective Lagrangian. However EFTs must be used with caution at LHC energies, where the energy scale of the interaction is at a scale where the EFT approximation can no longer be assumed to be valid. Here we introduce some tools that allow the validity of the EFT approximation to be quantified, and provide case studies for two operators. We also show a technique that allows EFT constraints from collider searches to be made substantially more robust, even at large center-of-mass energies. This allows EFT constraints from different classes of experiment to be compared in a much more robust manner.

11. A directed search for extraterrestrial laser signals

NASA Technical Reports Server (NTRS)

Betz, A.

1991-01-01

The focus of NASA's Search for Extraterrestrial Intelligence (SETI) Program is on microwave frequencies, where receivers have the best sensitivities for the detection of narrowband signals. Such receivers, when coupled to existing radio telescopes, form an optimal system for broad area searches over the sky. For a directed search, however, such as toward specific stars, calculations show that infrared wavelengths can be equally as effective as radio wavelengths for establishing an interstellar communication link. This is true because infrared telescopes have higher directivities (gains) that effectively compensate for the lower sensitivities of infrared receivers. The result is that, for a given level of transmitted power, the signal-to-noise ratio for communications is equally as good at infrared and radio wavelengths. It should also be noted that the overall sensitivities of both receiver systems are quite close to their respective fundamental limits: background thermal noise for the radio frequency system and quantum noise for the infrared receiver. Consequently, the choice of an optimum communication frequency may well be determined more by the achievable power levels of transmitters rather than the ultimate sensitivities of receivers at any specific frequency. In the infrared, CO2 laser transmitters with power levels greater than 1 MW can already be built on Earth. For a slightly more advanced civilization, a similar but enormously more powerful laser may be possible using a planetary atmosphere rich in CO2. Because of these possibilities and our own ignorance of what is really the optimum search frequency, a search for narrowband signals at infrared frequencies should be a part of a balanced SETI Program. Detection of narrowband infrared signals is best done with a heterodyne receiver functionally identical to a microwave spectral line receiver. We have built such a receiver for the detection of CO2 laser radiation at wavelengths near 10 microns. The…

12. Scalability study of parallel spatial direct numerical simulation code on IBM SP1 parallel supercomputer

NASA Technical Reports Server (NTRS)

Hanebutte, Ulf R.; Joslin, Ronald D.; Zubair, Mohammad

1994-01-01

The implementation and the performance of a parallel spatial direct numerical simulation (PSDNS) code are reported for the IBM SP1 supercomputer. The spatially evolving disturbances that are associated with laminar-to-turbulent transition in three-dimensional boundary-layer flows are computed with the PSDNS code. By remapping the distributed data structure during the course of the calculation, optimized serial library routines can be utilized that substantially increase the computational performance. Although the remapping incurs a high communication penalty, the parallel efficiency of the code remains above 40% for all performed calculations. By using appropriate compile options and optimized library routines, the serial code achieves 52-56 Mflops on a single node of the SP1 (45% of theoretical peak performance). The actual performance of the PSDNS code on the SP1 is evaluated with a 'real world' simulation that consists of 1.7 million grid points. One time step of this simulation is calculated on eight nodes of the SP1 in the same time as required by a Cray Y/MP for the same simulation. The scalability information provides estimated computational costs that match the actual costs relative to changes in the number of grid points.

13. Serial and Parallel Attentive Visual Searches: Evidence from Cumulative Distribution Functions of Response Times

ERIC Educational Resources Information Center

Sung, Kyongje

2008-01-01

Participants searched a visual display for a target among distractors. Each of 3 experiments tested a condition proposed to require attention and for which certain models propose a serial search. Serial versus parallel processing was tested by examining effects on response time means and cumulative distribution functions. In 2 conditions, the…

14. Direct Dark Matter search with XENON100

Orrigo, S. E. A.

2016-07-01

The XENON100 experiment is the second phase of the XENON program for the direct detection of the dark matter in the universe. The XENON100 detector is a two-phase Time Projection Chamber filled with 161 kg of ultra pure liquid xenon. The results from 224.6 live days of dark matter search with XENON100 are presented. No evidence for dark matter in the form of WIMPs is found, excluding spin-independent WIMP-nucleon scattering cross sections above 2 × 10-45 cm2 for a 55 GeV/c2 WIMP at 90% confidence level (C.L.). The most stringent limit is established on the spin-dependent WIMP-neutron interaction for WIMP masses above 6 GeV/c2, with a minimum cross section of 3.5 × 10-40 cm2 (90% C.L.) for a 45 GeV/c2 WIMP. The same dataset is used to search for axions and axion-like-particles. The best limits to date are set on the axion-electron coupling constant for solar axions, gAe < 7.7 × 10-12 (90% C.L.), and for axion-like-particles, gAe < 1 × 10-12 (90% C.L.) for masses between 5 and 10 keV/c2.

15. GRAPES: A Software for Parallel Searching on Biological Graphs Targeting Multi-Core Architectures

PubMed Central

Bombieri, Nicola; Pulvirenti, Alfredo; Ferro, Alfredo; Shasha, Dennis

2013-01-01

Biological applications, from genomics to ecology, deal with graphs that represent the structure of interactions. Analyzing such data requires searching for subgraphs in collections of graphs. This task is computationally expensive. Even though multicore architectures, from commodity computers to more advanced symmetric multiprocessing (SMP), offer scalable computing power, currently published software implementations for indexing and graph matching are fundamentally sequential. As a consequence, such software implementations (i) do not fully exploit available parallel computing power and (ii) do not scale with respect to the size of graphs in the database. We present GRAPES, software for parallel searching on databases of large biological graphs. GRAPES implements a parallel version of well-established graph searching algorithms, and introduces new strategies which naturally lead to a faster parallel searching system, especially for large graphs. GRAPES decomposes graphs into subcomponents that can be efficiently searched in parallel. We show the performance of GRAPES on representative biological datasets containing antiviral chemical compounds, DNA, RNA, proteins, protein contact maps and protein interaction networks. PMID:24167551

16. GRAPES: a software for parallel searching on biological graphs targeting multi-core architectures.

PubMed

Giugno, Rosalba; Bonnici, Vincenzo; Bombieri, Nicola; Pulvirenti, Alfredo; Ferro, Alfredo; Shasha, Dennis

2013-01-01

Biological applications, from genomics to ecology, deal with graphs that represent the structure of interactions. Analyzing such data requires searching for subgraphs in collections of graphs. This task is computationally expensive. Even though multicore architectures, from commodity computers to more advanced symmetric multiprocessing (SMP), offer scalable computing power, currently published software implementations for indexing and graph matching are fundamentally sequential. As a consequence, such software implementations (i) do not fully exploit available parallel computing power and (ii) do not scale with respect to the size of graphs in the database. We present GRAPES, software for parallel searching on databases of large biological graphs. GRAPES implements a parallel version of well-established graph searching algorithms, and introduces new strategies which naturally lead to a faster parallel searching system, especially for large graphs. GRAPES decomposes graphs into subcomponents that can be efficiently searched in parallel. We show the performance of GRAPES on representative biological datasets containing antiviral chemical compounds, DNA, RNA, proteins, protein contact maps and protein interaction networks. PMID:24167551

17. Directions in parallel programming: HPF, shared virtual memory and object parallelism in pC++

NASA Technical Reports Server (NTRS)

Bodin, Francois; Priol, Thierry; Mehrotra, Piyush; Gannon, Dennis

1994-01-01

Fortran and C++ are the dominant programming languages used in scientific computation. Consequently, extensions to these languages are the most popular for programming massively parallel computers. We discuss two such approaches to parallel Fortran and one approach to C++. The High Performance Fortran Forum has designed HPF with the intent of supporting data parallelism on Fortran 90 applications. HPF works by asking the user to help the compiler distribute and align the data structures with the distributed memory modules in the system. Fortran-S takes a different approach in which the data distribution is managed by the operating system and the user provides annotations to indicate parallel control regions. In the case of C++, we look at pC++ which is based on a concurrent aggregate parallel model.

18. A proposed experimental search for chameleons using asymmetric parallel plates

Burrage, Clare; Copeland, Edmund J.; Stevenson, James A.

2016-08-01

Light scalar fields coupled to matter are a common consequence of theories of dark energy and attempts to solve the cosmological constant problem. The chameleon screening mechanism is commonly invoked in order to suppress the fifth forces mediated by these scalars, sufficiently to avoid current experimental constraints, without fine tuning. The force is suppressed dynamically by allowing the mass of the scalar to vary with the local density. Recently it has been shown that near future cold atoms experiments using atom-interferometry have the ability to access a large proportion of the chameleon parameter space. In this work we demonstrate how experiments utilising asymmetric parallel plates can push deeper into the remaining parameter space available to the chameleon.

19. When the Lowest Energy Does Not Induce Native Structures: Parallel Minimization of Multi-Energy Values by Hybridizing Searching Intelligences

PubMed Central

Lü, Qiang; Xia, Xiao-Yan; Chen, Rong; Miao, Da-Jun; Chen, Sha-Sha; Quan, Li-Jun; Li, Hai-Ou

2012-01-01

Background Protein structure prediction (PSP), which is usually modeled as a computational optimization problem, remains one of the biggest challenges in computational biology. PSP encounters two difficult obstacles: the inaccurate energy function problem and the searching problem. Even if the lowest energy has been luckily found by the searching procedure, the correct protein structures are not guaranteed to be obtained. Results A general parallel metaheuristic approach is presented to tackle the above two problems. Multiple energy functions are employed to simultaneously guide the parallel searching threads. Searching trajectories are in fact controlled by the parameters of heuristic algorithms. The parallel approach allows the parameters to be perturbed while the searching threads are running in parallel, with each thread searching for the lowest energy value determined by an individual energy function. By hybridizing the intelligences of parallel ant colonies and Monte Carlo Metropolis search, this paper demonstrates an implementation of our parallel approach for PSP. 16 classical instances were tested to show that the parallel approach is competitive for solving the PSP problem. Conclusions This parallel approach combines various sources of both searching intelligences and energy functions, and thus predicts protein conformations with good quality jointly determined by all the parallel searching threads and energy functions. It provides a framework to combine different searching intelligences embedded in heuristic algorithms. It also constructs a container to hybridize different not-so-accurate objective functions which are usually derived from domain expertise. PMID:23028708

20. Parallel graph search: application to intraretinal layer segmentation of 3D macular OCT scans

Lee, Kyungmoo; Abràmoff, Michael D.; Garvin, Mona K.; Sonka, Milan

2012-02-01

Image segmentation is of paramount importance for quantitative analysis of medical image data. Recently, a 3-D graph search method which can detect globally optimal interacting surfaces with respect to the cost function of volumetric images has been introduced, and its utility demonstrated in several application areas. Although the method provides excellent segmentation accuracy, its limitation is a slow processing speed when many surfaces are simultaneously segmented in large volumetric datasets. Here, we propose a novel method of parallel graph search, which overcomes this limitation and allows the quick detection of multiple surfaces. To demonstrate the obtained performance with respect to segmentation accuracy and processing speedup, the new approach was applied to retinal optical coherence tomography (OCT) image data and compared with the performance of the former non-parallel method. Our parallel graph search methods for single and double surface detection are approximately 267 and 181 times faster than the original graph search approach in 5 macular OCT volumes (200 × 5 × 1024 voxels) acquired from the right eyes of 5 normal subjects. The resulting segmentation differences were small, as demonstrated by mean unsigned differences between the non-parallel and parallel methods of 0.0 +/- 0.0 voxels (0.0 +/- 0.0 μm) and 0.27 +/- 0.34 voxels (0.53 +/- 0.66 μm) for the single- and dual-surface approaches, respectively.

1. Parallel simulations of Grover's algorithm for closest match search in neutron monitor data

Kussainov, Arman; White, Yelena

We are studying parallel implementations of Grover's closest match search algorithm for neutron monitor data analysis. This includes data formatting and matching quantum parameters to a conventional structure of a chosen programming language and selected experimental data type. We have employed several workload distribution models based on acquired data and search parameters. As a result of these simulations, we have an understanding of potential problems that may arise during configuration of real quantum computational devices and the way they could run tasks in parallel. The work was supported by the Science Committee of the Ministry of Science and Education of the Republic of Kazakhstan Grant #2532/GF3.
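The core amplitude dynamics behind such simulations can be reproduced classically. The sketch below is a generic illustration of Grover iteration on a conventional machine, not the authors' neutron-monitor code; it applies the oracle phase flip and inversion-about-the-mean diffusion for the standard ~(π/4)√N iterations:

```python
import math

def grover_search(n_items, marked):
    """Classical simulation of Grover amplitude amplification over an
    unstructured list of n_items entries (illustrative sketch only)."""
    amps = [1.0 / math.sqrt(n_items)] * n_items       # uniform superposition
    iterations = round(math.pi / 4 * math.sqrt(n_items))
    for _ in range(iterations):
        amps[marked] *= -1.0                          # oracle: phase-flip the match
        mean = sum(amps) / n_items
        amps = [2.0 * mean - a for a in amps]         # diffusion: invert about the mean
    probs = [a * a for a in amps]
    return probs.index(max(probs))                    # most probable measurement outcome

# With 16 entries, 3 iterations concentrate roughly 96% of the probability on the match.
```

Running independent searches of this kind over separate data segments is one natural workload-distribution model of the sort the abstract mentions.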

2. Parallel state-space search for a first solution with consistent linear speedups

SciTech Connect

Kale, L.V.; Saletore, V.A. )

1989-01-01

Consider the problem of exploring a large state-space for a goal state. Although many such states may exist in the state-space, finding any one state satisfying the requirements is sufficient. All the methods known until now for conducting such search in parallel using multiprocessors fail to provide consistent linear speedups over sequential execution. The speedups vary from sub-linear to super-linear, giving rise to the speedup anomalies reported in the literature. The authors present a prioritizing strategy which yields consistent speedups that are close to P with P processors, and that monotonically increase with the addition of processors. It achieves this by keeping the total number of nodes expanded during parallel search very close to that in a sequential search. In addition, the strategy requires substantially less memory than other methods. The performance of this strategy is demonstrated on a multiprocessor with several state-space search problems.
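The prioritizing idea can be sketched in a few lines: give each node a priority encoding its sequential (depth-first) expansion order, and let each parallel step expand the P best-priority nodes, so the total number of expansions stays close to the sequential count. The following single-process sketch is a hypothetical illustration of that strategy, not the authors' multiprocessor implementation:

```python
import heapq

def parallel_priority_search(root, expand, is_goal, n_workers=4):
    """Prioritized state-space search for a first solution.  Priorities are
    tuples of branch indices, so lexicographic order mimics sequential
    depth-first order; each loop iteration simulates one parallel step in
    which every worker expands the best remaining node."""
    frontier = [((), root)]
    expanded = 0
    while frontier:
        batch = [heapq.heappop(frontier)
                 for _ in range(min(n_workers, len(frontier)))]
        for prio, state in batch:
            expanded += 1
            if is_goal(state):
                return state, expanded
            for i, child in enumerate(expand(state)):
                heapq.heappush(frontier, (prio + (i,), child))
    return None, expanded

# Toy state space: binary strings of length <= 6; one of them is the goal.
expand = lambda s: [s + "0", s + "1"] if len(s) < 6 else []
state, n = parallel_priority_search("", expand, lambda s: s == "010110")
```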

3. Parallel direct numerical simulation of three-dimensional spray formation

Chergui, Jalel; Juric, Damir; Shin, Seungwon; Kahouadji, Lyes; Matar, Omar

2015-11-01

We present numerical results for the breakup mechanism of a liquid jet surrounded by a fast coaxial flow of air with density ratio (water/air) ~ 1000 and kinematic viscosity ratio ~ 60. We use code BLUE, a three-dimensional, two-phase, high performance, parallel numerical code based on a hybrid Front-Tracking/Level Set algorithm for Lagrangian tracking of arbitrarily deformable phase interfaces and a precise treatment of surface tension forces. The parallelization of the code is based on the technique of domain decomposition where the velocity field is solved by a parallel GMRes method for the viscous terms and the pressure by a parallel multigrid/GMRes method. Communication is handled by MPI message passing procedures. The interface method is also parallelized and defines the interface both by a discontinuous density field as well as by a triangular Lagrangian mesh and allows the interface to undergo large deformations including the rupture and/or coalescence of interfaces. EPSRC Programme Grant, MEMPHIS, EP/K0039761/1.

4. Attentional Control via Parallel Target-Templates in Dual-Target Search

PubMed Central

Barrett, Doug J. K.; Zobay, Oliver

2014-01-01

Simultaneous search for two targets has been shown to be slower and less accurate than independent searches for the same two targets. Recent research suggests this ‘dual-target cost’ may be attributable to a limit in the number of target-templates that can guide search at any one time. The current study investigated this possibility by comparing behavioural responses during single- and dual-target searches for targets defined by their orientation. The results revealed an increase in reaction times for dual- compared to single-target searches that was largely independent of the number of items in the display. Response accuracy also decreased on dual- compared to single-target searches: dual-target accuracy was higher than predicted by a model restricting search guidance to a single target-template and lower than predicted by a model simulating two independent single-target searches. These results are consistent with a parallel model of dual-target search in which attentional control is exerted by more than one target-template at a time. The requirement to maintain two target-templates simultaneously, however, appears to impose a reduction in the specificity of the memory representation that guides search for each target. PMID:24489793

5. Performance analysis of parallel branch and bound search with the hypercube architecture

NASA Technical Reports Server (NTRS)

Mraz, Richard T.

1987-01-01

With the availability of commercial parallel computers, researchers are examining new classes of problems which might benefit from parallel computing. This paper presents results of an investigation of the class of search intensive problems. The specific problem discussed is the Least-Cost Branch and Bound search method of deadline job scheduling. The object-oriented design methodology was used to map the problem into a parallel solution. While the initial design was good for a prototype, the best performance resulted from fine-tuning the algorithm for a specific computer. The experiments analyze the computation time, the speedup over a VAX 11/785, and the load balance of the problem when using a loosely coupled multiprocessor system based on the hypercube architecture.

6. Architecture, implementation and parallelization of the software to search for periodic gravitational wave signals

Poghosyan, G.; Matta, S.; Streit, A.; Bejger, M.; Królak, A.

2015-03-01

The parallelization, design and scalability of the PolGrawAllSky code to search for periodic gravitational waves from rotating neutron stars are discussed. The code is based on an efficient implementation of the F-statistic using the Fast Fourier Transform algorithm. To perform an analysis of data from the advanced LIGO and Virgo gravitational wave detectors' network, which will start operating in 2015, hundreds of millions of CPU hours will be required; a code that utilizes the potential of massively parallel supercomputers is therefore mandatory. We have parallelized the code using the Message Passing Interface standard and implemented a mechanism for combining the searches at different sky positions and frequency bands into one extremely scalable program. A parallel I/O interface is used to avoid bottlenecks when writing the generated data to the file system. This allowed us to develop a highly scalable computation code that enables data analysis at large scales on acceptable time scales. Benchmarking of the code on a Cray XE6 system was performed to show the efficiency of our parallelization concept and to demonstrate scaling up to 50,000 cores in parallel.

7. Target intersection probabilities for parallel-line and continuous-grid types of search

USGS Publications Warehouse

McCammon, R.B.

1977-01-01

The expressions for calculating the probability of intersection of hidden targets of different sizes and shapes for parallel-line and continuous-grid types of search can be formulated by using the concept of conditional probability. When the prior probability of the orientation of a hidden target is represented by a uniform distribution, the calculated posterior probabilities are identical with the results obtained by the classic methods of probability. For hidden targets of different sizes and shapes, the following generalizations about the probability of intersection can be made: (1) to a first approximation, the probability of intersection of a hidden target is proportional to the ratio of the greatest dimension of the target (viewed in plane projection) to the minimum line spacing of the search pattern; (2) the shape of the hidden target does not greatly affect the probability of intersection when the largest dimension of the target is small relative to the minimum spacing of the search pattern; (3) the probability of intersecting a target twice for a particular type of search can be used as a lower bound if there is an element of uncertainty of detection for a particular type of tool; (4) the geometry of the search pattern becomes more critical when the largest dimension of the target equals or exceeds the minimum spacing of the search pattern; (5) for elongate targets, the probability of intersection is greater for parallel-line search than for an equivalent continuous square-grid search when the largest dimension of the target is less than the minimum spacing of the search pattern, whereas the opposite is true when the largest dimension exceeds the minimum spacing; (6) the probability of intersection for nonorthogonal continuous-grid search patterns is not greatly different from the probability of intersection for the equivalent orthogonal continuous-grid pattern when the orientation of the target is unknown. The probability of intersection for an
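Generalization (1) is easy to check numerically in the classical Buffon-needle setting: for a line-segment target shorter than the line spacing, the intersection probability is 2L/(πD), i.e. proportional to the ratio of target length to line spacing. The Monte Carlo sketch below is an illustrative check of that limiting case, not the paper's closed-form derivation:

```python
import math
import random

def intersection_probability(target_len, line_spacing, trials=200_000, seed=1):
    """Monte Carlo estimate of the chance that a randomly placed and
    randomly oriented line-segment target crosses one of a set of
    parallel search lines (classical Buffon-needle geometry)."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        x = rng.uniform(0.0, line_spacing / 2.0)   # midpoint to nearest line
        theta = rng.uniform(0.0, math.pi / 2.0)    # segment orientation
        if x <= (target_len / 2.0) * math.sin(theta):
            hits += 1
    return hits / trials

# For L = 1, D = 2 the exact value is 2L/(pi*D) = 1/pi, about 0.318.
p = intersection_probability(1.0, 2.0)
```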

8. Parallel database search and prime factorization with magnonic holographic memory devices

SciTech Connect

Khitun, Alexander

2015-12-28

In this work, we describe the capabilities of Magnonic Holographic Memory (MHM) for parallel database search and prime factorization. MHM is a type of holographic device which utilizes spin waves for data transfer and processing. Its operation is based on the correlation between the phases and the amplitudes of the input spin waves and the output inductive voltage. The input of MHM is provided by a phased array of spin wave generating elements, allowing the production of phase patterns of arbitrary form. The latter makes it possible to code logic states into the phases of propagating waves and exploit wave superposition for parallel data processing. We present the results of numerical modeling illustrating parallel database search and prime factorization. The results of numerical simulations on the database search are in agreement with the available experimental data. The use of classical wave interference may result in a significant speedup over conventional digital logic circuits in special-task data processing (e.g., √n in database search). Potentially, magnonic holographic devices can be implemented as complementary logic units to digital processors. Physical limitations and technological constraints of the spin wave approach are also discussed.

9. Parallel database search and prime factorization with magnonic holographic memory devices

Khitun, Alexander

2015-12-01

In this work, we describe the capabilities of Magnonic Holographic Memory (MHM) for parallel database search and prime factorization. MHM is a type of holographic device which utilizes spin waves for data transfer and processing. Its operation is based on the correlation between the phases and the amplitudes of the input spin waves and the output inductive voltage. The input of MHM is provided by a phased array of spin wave generating elements, allowing the production of phase patterns of arbitrary form. The latter makes it possible to code logic states into the phases of propagating waves and exploit wave superposition for parallel data processing. We present the results of numerical modeling illustrating parallel database search and prime factorization. The results of numerical simulations on the database search are in agreement with the available experimental data. The use of classical wave interference may result in a significant speedup over conventional digital logic circuits in special-task data processing (e.g., √n in database search). Potentially, magnonic holographic devices can be implemented as complementary logic units to digital processors. Physical limitations and technological constraints of the spin wave approach are also discussed.

10. Directional dark matter searches with carbon nanotubes

Capparelli, L. M.; Cavoto, G.; Mazzilli, D.; Polosa, A. D.

2015-09-01

A new solution to the problem of dark matter directional detection might come from the use of large arrays of aligned carbon nanotubes. We calculate the expected rate of carbon ions channeled in single-wall nanotubes once extracted by the scattering with a massive dark matter particle. Depending on its initial kinematic conditions, the ejected carbon ion may be channeled in the nanotube array or stop in the bulk. The orientation of the array with respect to the direction of motion of the Sun has an appreciable effect on the channeling probability. This provides the required anisotropic response for a directional detector.

11. Integrating structure- and ligand-based virtual screening: comparison of individual, parallel, and fused molecular docking and similarity search calculations on multiple targets.

PubMed

Tan, Lu; Geppert, Hanna; Sisay, Mihiret T; Gütschow, Michael; Bajorath, Jürgen

2008-10-01

Similarity searching is often used to preselect compounds for docking, thereby decreasing the size of screening databases. However, integrated structure- and ligand-based screening schemes are rare at present. Docking and similarity search calculations using 2D fingerprints were carried out in a comparative manner on nine target enzymes, for which significant numbers of diverse inhibitors could be obtained. In the absence of knowledge-based docking constraints and target-directed parameter optimisation, fingerprint searching clearly outperformed docking calculations. Alternative combinations of docking and similarity search results were investigated and found to further increase the compound recall of individual methods in a number of instances. When the results of similarity searching and docking were combined, parallel selection of candidate compounds from individual rankings was generally superior to rank fusion. We suggest that complementary results from docking and similarity searching can be captured by integrated compound selection schemes. PMID:18651695
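The two combination schemes compared in the abstract can be illustrated with a small sketch (compound names and scoring details are hypothetical placeholders, not the paper's protocol): parallel selection alternates picks from the two individual rankings, while rank fusion merges them into a single score before selecting.

```python
def parallel_selection(rank_a, rank_b, k):
    """Alternate picks from two independent rankings (e.g. docking and
    2D-fingerprint similarity), deduplicating, until k compounds are chosen."""
    picked, seen = [], set()
    for a, b in zip(rank_a, rank_b):
        for c in (a, b):
            if c not in seen:
                seen.add(c)
                picked.append(c)
            if len(picked) == k:
                return picked
    return picked

def rank_fusion(rank_a, rank_b, k):
    """Borda-style fusion: sum of rank positions across methods, best first.
    (Compounds missing from one ranking are simply not penalized here.)"""
    score = {}
    for ranking in (rank_a, rank_b):
        for pos, c in enumerate(ranking):
            score[c] = score.get(c, 0) + pos
    return sorted(score, key=score.get)[:k]

docking = ["c1", "c2", "c3", "c4"]      # hypothetical docking ranking
similarity = ["c3", "c5", "c1", "c6"]   # hypothetical similarity ranking
```

On these toy lists the two schemes already select different candidate sets, which is why the comparison in the abstract is meaningful.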

12. A Direct Search for Dirac Magnetic Monopoles

SciTech Connect

Mulhearn, Michael James

2004-10-01

Magnetic monopoles are highly ionizing and curve in the direction of the magnetic field. A new dedicated magnetic monopole trigger at CDF, which requires large light pulses in the scintillators of the time-of-flight system, remains highly efficient to monopoles while consuming a tiny fraction of the available trigger bandwidth. A specialized offline reconstruction checks the central drift chamber for large dE/dx tracks which do not curve in the plane perpendicular to the magnetic field. We observed zero monopole candidate events in 35.7 pb⁻¹ of proton-antiproton collisions at √s = 1.96 TeV. This implies a monopole production cross section limit σ < 0.2 pb for monopoles with mass between 100 and 700 GeV, and, for a Drell-Yan like pair production mechanism, a mass limit m > 360 GeV.

13. Fencing network direct memory access data transfers in a parallel active messaging interface of a parallel computer

DOEpatents

Blocksome, Michael A.; Mamidala, Amith R.

2015-07-07

Fencing direct memory access (DMA) data transfers in a parallel active messaging interface (PAMI) of a parallel computer, the PAMI including data communications endpoints, each endpoint including specifications of a client, a context, and a task, the endpoints coupled for data communications through the PAMI and through DMA controllers operatively coupled to a deterministic data communications network through which the DMA controllers deliver data communications deterministically, including initiating execution through the PAMI of an ordered sequence of active DMA instructions for DMA data transfers between two endpoints, effecting deterministic DMA data transfers through a DMA controller and the deterministic data communications network; and executing through the PAMI, with no FENCE accounting for DMA data transfers, an active FENCE instruction, the FENCE instruction completing execution only after completion of all DMA instructions initiated prior to execution of the FENCE instruction for DMA data transfers between the two endpoints.

14. Fencing network direct memory access data transfers in a parallel active messaging interface of a parallel computer

DOEpatents

Blocksome, Michael A.; Mamidala, Amith R.

2015-07-14

Fencing direct memory access (DMA) data transfers in a parallel active messaging interface (PAMI) of a parallel computer, the PAMI including data communications endpoints, each endpoint including specifications of a client, a context, and a task, the endpoints coupled for data communications through the PAMI and through DMA controllers operatively coupled to a deterministic data communications network through which the DMA controllers deliver data communications deterministically, including initiating execution through the PAMI of an ordered sequence of active DMA instructions for DMA data transfers between two endpoints, effecting deterministic DMA data transfers through a DMA controller and the deterministic data communications network; and executing through the PAMI, with no FENCE accounting for DMA data transfers, an active FENCE instruction, the FENCE instruction completing execution only after completion of all DMA instructions initiated prior to execution of the FENCE instruction for DMA data transfers between the two endpoints.

15. Simulating a Direction-Finder Search for an ELT

NASA Technical Reports Server (NTRS)

Bream, Bruce

2005-01-01

A computer program simulates the operation of direction-finding equipment engaged in a search for an emergency locator transmitter (ELT) aboard an aircraft that has crashed. The simulated equipment is patterned after the equipment used by the Civil Air Patrol to search for missing aircraft. The program is designed to be used for training in radio direction-finding and/or searching for missing aircraft without incurring the expense and risk of using real aircraft and ground search resources. The program places a hidden ELT on a map and enables the user to search for the location of the ELT by moving a small aircraft image around the map while observing signal-strength and direction readings on a simulated direction-finding locator instrument. As the simulated aircraft is turned and moved on the map, the program updates the readings on the direction-finding instrument to reflect the current position and heading of the aircraft relative to the location of the ELT. The software is distributed in a zip file that contains an installation program. The software runs on the Microsoft Windows 9x, NT, and XP operating systems.

16. Direct Search for Low Mass Dark Matter Particles with CCDs

DOE PAGESBeta

Barreto, J.; Cease, H.; Diehl, H. T.; Estrada, J.; Flaugher, B.; Harrison, N.; Jones, J.; Kilminster, B.; Molina, J.; Smith, J.; et al

2012-05-15

A direct dark matter search is performed using fully-depleted high-resistivity CCD detectors. Due to their low electronic readout noise (RMS ~7 eV) these devices operate with a very low detection threshold of 40 eV, making the search for dark matter particles with low masses (~5 GeV) possible. The results of an engineering run performed in a shallow underground site are presented, demonstrating the potential of this technology in the low mass region.

17. Topology search of 3-DOF translational parallel manipulators with three identical limbs for leg mechanisms

Wang, Mingfeng; Ceccarelli, Marco

2015-07-01

Three-degree-of-freedom (3-DOF) translational parallel manipulators (TPMs) have been widely studied both in industry and academia in the past decades. However, most architectures of 3-DOF TPMs are created mainly on designers' intuition, empirical knowledge, or associative reasoning, and research on the topology synthesis of 3-DOF TPMs is still limited. In order to find an atlas of designs for 3-DOF TPMs, a topology search is presented for enumeration of 3-DOF TPMs whose limbs can be modeled as 5-DOF serial chains. The proposed topology search of 3-DOF TPMs is aimed to overcome the sensitivities of the design solution of a 3-DOF TPM for a LARM leg mechanism in a biped robot. The topology search, which is based on the concept of generalization and specialization in graph theory, is reported as a step-by-step procedure with desired specifications, principles and rules of generalization, design requirements and constraints, and an algorithm of number synthesis. In order to obtain new feasible designs for a chosen example and to limit the search domain under general considerations, one topological generalized kinematic chain is chosen to be specialized. An atlas of new feasible designs is obtained and analyzed for a specific solution as leg mechanisms. The proposed methodology provides a topology search for 3-DOF TPMs for leg mechanisms, but it can also be expanded to other applications and tasks.

18. The JCSG MR Pipeline: Optimized Alignments, Multiple Models And Parallel Searches

SciTech Connect

Schwarzenbacher, R.; Godzik, A.; Jaroszewski, L.

2009-05-27

The success rate of molecular replacement (MR) falls considerably when search models share less than 35% sequence identity with their templates, but can be improved significantly by using fold-recognition methods combined with exhaustive MR searches. Models based on alignments calculated with fold-recognition algorithms are more accurate than models based on conventional alignment methods such as FASTA or BLAST, which are still widely used for MR. In addition, by designing MR pipelines that integrate phasing and automated refinement and allow parallel processing of such calculations, one can effectively increase the success rate of MR. Here, updated results from the JCSG MR pipeline are presented, which to date has solved 33 MR structures with less than 35% sequence identity to the closest homologue of known structure. By using difficult MR problems as examples, it is demonstrated that successful MR phasing is possible even in cases where the similarity between the model and the template can only be detected with fold-recognition algorithms. In the first step, several search models are built based on all homologues found in the PDB by fold-recognition algorithms. The models resulting from this process are used in parallel MR searches with different combinations of input parameters of the MR phasing algorithm. The putative solutions are subjected to rigid-body and restrained crystallographic refinement and ranked based on the final values of free R factor, figure of merit and deviations from ideal geometry. Finally, crystal packing and electron-density maps are checked to identify the correct solution. If this procedure does not yield a solution with interpretable electron-density maps, then even more alternative models are prepared. The structurally variable regions of a protein family are identified based on alignments of sequences and known structures from that family and appropriate trimmings of the models are proposed. All combinations of these trimmings are

19. A hybrid dynamic harmony search algorithm for identical parallel machines scheduling

Chen, Jing; Pan, Quan-Ke; Wang, Ling; Li, Jun-Qing

2012-02-01

In this article, a dynamic harmony search (DHS) algorithm is proposed for the identical parallel machines scheduling problem with the objective to minimize makespan. First, an encoding scheme based on a list scheduling rule is developed to convert the continuous harmony vectors to discrete job assignments. Second, the whole harmony memory (HM) is divided into multiple small-sized sub-HMs, and each sub-HM performs evolution independently and exchanges information with others periodically by using a regrouping schedule. Third, a novel improvisation process is applied to generate a new harmony by making use of the information of harmony vectors in each sub-HM. Moreover, a local search strategy is presented and incorporated into the DHS algorithm to find promising solutions. Simulation results show that the hybrid DHS (DHS_LS) is very competitive in comparison to its competitors in terms of mean performance and average computational time.
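The encoding step can be sketched as follows: a continuous harmony vector (one value per job) is decoded into a discrete assignment by ordering the jobs by their harmony values and list-scheduling each onto the currently least-loaded machine. This is a generic list-scheduling sketch of the kind of rule the abstract describes, not the authors' exact encoding:

```python
def decode_harmony(harmony, proc_times, n_machines):
    """Decode a continuous harmony vector into an identical-parallel-machine
    schedule: jobs are ordered by harmony value (smallest first) and each is
    assigned to the least-loaded machine; makespan is the largest load."""
    order = sorted(range(len(harmony)), key=lambda j: harmony[j])
    loads = [0.0] * n_machines
    assignment = {}
    for j in order:
        m = min(range(n_machines), key=loads.__getitem__)  # least-loaded machine
        assignment[j] = m
        loads[m] += proc_times[j]
    return assignment, max(loads)

# Four jobs with processing times 3, 1, 2, 2 scheduled on two machines.
assignment, makespan = decode_harmony([0.1, 0.9, 0.5, 0.3], [3, 1, 2, 2], 2)
```

Any perturbation of the continuous vector changes only the job ordering, which is what lets a continuous metaheuristic like harmony search explore a discrete scheduling space.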

20. Parallel implementation of 3D protein structure similarity searches using a GPU and the CUDA.

PubMed

Mrozek, Dariusz; Brożek, Miłosz; Małysiak-Mrozek, Bożena

2014-02-01

Searching for similar 3D protein structures is one of the primary processes employed in the field of structural bioinformatics. However, the computational complexity of this process means that it is constantly necessary to search for new methods that can perform such a process faster and more efficiently. Finding molecular substructures that complex protein structures have in common is still a challenging task, especially when entire databases containing tens or even hundreds of thousands of protein structures must be scanned. Graphics processing units (GPUs) and general purpose graphics processing units (GPGPUs) can perform many time-consuming and computationally demanding processes much more quickly than a classical CPU can. In this paper, we describe the GPU-based implementation of the CASSERT algorithm for 3D protein structure similarity searching. This algorithm is based on the two-phase alignment of protein structures when matching fragments of the compared proteins. The GPU (GeForce GTX 560Ti: 384 cores, 2GB RAM) implementation of CASSERT ("GPU-CASSERT") parallelizes both alignment phases and yields an average 180-fold increase in speed over its CPU-based, single-core implementation on an Intel Xeon E5620 (2.40GHz, 4 cores). In this paper, we show that massive parallelization of the 3D structure similarity search process on many-core GPU devices can reduce the execution time of the process, allowing it to be performed in real time. GPU-CASSERT is available at: http://zti.polsl.pl/dmrozek/science/gpucassert/cassert.htm. PMID:24481593

1. High-throughput mass-directed parallel purification incorporating a multiplexed single quadrupole mass spectrometer.

PubMed

Xu, Rongda; Wang, Tao; Isbell, John; Cai, Zhe; Sykes, Christopher; Brailsford, Andrew; Kassel, Daniel B

2002-07-01

We report on the development of a parallel HPLC/MS purification system incorporating an indexed (i.e., multiplexed) ion source. In the method described, each of the flow streams from a parallel array of HPLC columns is directed toward the multiplexed (MUX) ion source and sampled in a time-dependent, parallel manner. A Visual Basic application has been developed that monitors in real time the extracted ion current from each sprayer channel. Mass-directed fraction collection is initiated into a parallel array of fraction collectors specific to each of the spray channels. In the first embodiment of this technique, we report on a four-column semipreparative parallel LC/MS system incorporating MUX detection. In this parallel LC/MS application (in which sample loads between 1 and 10 mg on-column are typically made), no cross talk was observed. Ion signals from each of the channels were found to be reproducible over 192 injections, with interchannel signal variations between 11 and 17%. The Visual Basic fraction collection application permits preset individual start-collection and end-collection thresholds for each channel, thereby compensating for the slight variation in signal between sprayers. By incorporating post-fraction-collector UV detection, we have been able to optimize the valve-triggering delay time with precut transfer tubing between the mass spectrometer and fraction collectors and achieve recoveries greater than 80%. Examples of the MUX-guided, mass-directed fraction purification of both standards and real library reaction mixtures are presented within. PMID:12141664

2. Direct searches for dark matter: Recent results

PubMed Central

Rosenberg, Leslie J.

1998-01-01

There is abundant evidence for large amounts of unseen matter in the universe. This dark matter, by its very nature, couples feebly to ordinary matter and is correspondingly difficult to detect. Nonetheless, several experiments are now underway with the sensitivity required to directly detect galactic halo dark matter through its interactions with matter and radiation. These experiments divide into two broad classes: searches for weakly interacting massive particles (WIMPs) and searches for axions. There exists a very strong theoretical bias for supposing that supersymmetry (SUSY) is a correct description of nature. WIMPs are predicted by this SUSY theory and have the required properties to be dark matter. These WIMPs are detected from the byproducts of their occasional recoil against nucleons. There are efforts around the world to detect these rare recoils. The WIMP part of this overview focuses on the cryogenic dark matter search (CDMS) underway in California. Axions, another favored dark matter candidate, are predicted to arise from a minimal extension of the standard model that explains the absence of the expected large CP violating effects in strong interactions. Axions can, in the presence of a large magnetic field, turn into microwave photons. It is the slight excess of photons above noise that signals the axion. Axion searches are underway in California and Japan. The axion part of this overview focuses on the California effort. Brevity does not allow me to discuss other WIMP and axion searches, likewise for accelerator and satellite based searches; I apologize for their omission. PMID:9419325

3. Internal bremsstrahlung signatures in light of direct dark matter searches

SciTech Connect

Garny, Mathias; Ibarra, Alejandro; Pato, Miguel; Vogl, Stefan E-mail: ibarra@tum.de E-mail: stefan.vogl@tum.de

2013-12-01

Although proposed long ago, the search for internal bremsstrahlung signatures has only recently been made possible by the excellent energy resolution of ground-based and satellite-borne gamma-ray instruments. Here, we investigate thoroughly the current status of internal bremsstrahlung searches in light of the results of direct dark matter searches and in the framework of a minimal mass-degenerate scenario consisting of a Majorana dark matter particle that couples to a fermion and a scalar via a Yukawa coupling. The upper limits on the annihilation cross section set by Fermi-LAT and H.E.S.S. extend uninterrupted from tens of GeV up to tens of TeV and are rather insensitive to the mass degeneracy in the particle physics model. In contrast, direct searches are best in the moderate to low mass splitting regime, where XENON100 limits overshadow Fermi-LAT and H.E.S.S. up to TeV masses if dark matter couples to one of the light quarks. In our minimal scenario we examine carefully the prospects for GAMMA-400, CTA and XENON1T, all planned to come online in the near future, and find that: (a) CTA and XENON1T are fully complementary, with CTA most sensitive to multi-TeV masses and mass splittings around 10%, and XENON1T probing best small mass splittings up to TeV masses; and (b) current constraints from XENON100 already preclude the observation of any spectral feature with GAMMA-400 in spite of its impressive energy resolution, unless dark matter does not couple predominantly to light quarks. Finally, we point out that, unlike for direct searches, the possibility of detecting thermal relics in upcoming internal bremsstrahlung searches requires, depending on the concrete scenario, boost factors larger than 5–10.

4. An efficient parallel algorithm for O(N²) direct summation method and its variations on distributed-memory parallel machines

Makino, Junichiro

2002-10-01

We present a novel, highly efficient algorithm to parallelize the O(N²) direct summation method for N-body problems with individual timesteps on distributed-memory parallel machines such as Beowulf clusters. Previously known algorithms, in which all processors hold complete copies of the N-body system, have the serious problem that the communication-to-computation ratio grows as the number of processors increases, since the communication cost is independent of the number of processors. In the new algorithm, p² processors are organized as a p×p two-dimensional array. Each processor holds N/p particles, with the data distributed in such a way that the complete system is represented in any row or column of p processors. In this algorithm the communication cost scales as N/p, while the calculation cost scales as N²/p². Thus, we can use a much larger number of processors without losing efficiency compared with what was practical with previously known algorithms.
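
The row/column data layout can be illustrated with a serial emulation (our sketch, not Makino's code): a p×p grid of virtual processors in which cell (i, j) computes the partial forces of particle block j on particle block i, and the sum over j plays the role of the row-wise reduction.

```python
import numpy as np

def block_forces(xi, xj):
    """Softened gravitational accelerations exerted by particles xj on particles xi
    (unit masses, G = 1); the softening also zeroes out self-interaction terms."""
    d = xj[None, :, :] - xi[:, None, :]            # (ni, nj, 3) separations
    r2 = (d ** 2).sum(axis=-1) + 1e-12             # softened squared distances
    return (d / r2[..., None] ** 1.5).sum(axis=1)  # (ni, 3) summed accelerations

def emulated_2d_summation(x, p):
    """Serial emulation of the p x p grid of virtual processors: cell (i, j)
    holds row block i and column block j (each N/p particles) and computes
    the partial forces of block j on block i; the sum over j emulates the
    reduction along each processor row."""
    blocks = np.array_split(np.arange(len(x)), p)
    acc = np.zeros_like(x)
    for i in range(p):
        for j in range(p):
            acc[blocks[i]] += block_forces(x[blocks[i]], x[blocks[j]])
    return acc
```

Each virtual processor only ever needs the coordinates of two blocks of N/p particles, which is the source of the N/p communication scaling.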

5. Directed search for continuous gravitational waves from the Galactic center

Aasi, J.; Abadie, J.; Abbott, B. P.; Abbott, R.; Abbott, T.; Abernathy, M. R.; Accadia, T.; Acernese, F.; Adams, C.; Adams, T.; Adhikari, R. X.; Affeldt, C.; Agathos, M.; Aggarwal, N.; Aguiar, O. D.; Ajith, P.; Allen, B.; Allocca, A.; Amador Ceron, E.; Amariutei, D.; Anderson, R. A.; Anderson, S. B.; Anderson, W. G.; Arai, K.; Araya, M. C.; Arceneaux, C.; Areeda, J.; Ast, S.; Aston, S. M.; Astone, P.; Aufmuth, P.; Aulbert, C.; Austin, L.; Aylott, B. E.; Babak, S.; Baker, P. T.; Ballardin, G.; Ballmer, S. W.; Barayoga, J. C.; Barker, D.; Barnum, S. H.; Barone, F.; Barr, B.; Barsotti, L.; Barsuglia, M.; Barton, M. A.; Bartos, I.; Bassiri, R.; Basti, A.; Batch, J.; Bauchrowitz, J.; Bauer, Th. S.; Bebronne, M.; Behnke, B.; Bejger, M.; Beker, M. G.; Bell, A. S.; Bell, C.; Belopolski, I.; Bergmann, G.; Berliner, J. M.; Bertolini, A.; Bessis, D.; Betzwieser, J.; Beyersdorf, P. T.; Bhadbhade, T.; Bilenko, I. A.; Billingsley, G.; Birch, J.; Bitossi, M.; Bizouard, M. A.; Black, E.; Blackburn, J. K.; Blackburn, L.; Blair, D.; Blom, M.; Bock, O.; Bodiya, T. P.; Boer, M.; Bogan, C.; Bond, C.; Bondu, F.; Bonelli, L.; Bonnand, R.; Bork, R.; Born, M.; Bose, S.; Bosi, L.; Bowers, J.; Bradaschia, C.; Brady, P. R.; Braginsky, V. B.; Branchesi, M.; Brannen, C. A.; Brau, J. E.; Breyer, J.; Briant, T.; Bridges, D. O.; Brillet, A.; Brinkmann, M.; Brisson, V.; Britzger, M.; Brooks, A. F.; Brown, D. A.; Brown, D. D.; Brückner, F.; Bulik, T.; Bulten, H. J.; Buonanno, A.; Buskulic, D.; Buy, C.; Byer, R. L.; Cadonati, L.; Cagnoli, G.; Calderón Bustillo, J.; Calloni, E.; Camp, J. B.; Campsie, P.; Cannon, K. C.; Canuel, B.; Cao, J.; Capano, C. D.; Carbognani, F.; Carbone, L.; Caride, S.; Castiglia, A.; Caudill, S.; Cavaglià, M.; Cavalier, F.; Cavalieri, R.; Cella, G.; Cepeda, C.; Cesarini, E.; Chakraborty, R.; Chalermsongsak, T.; Chao, S.; Charlton, P.; Chassande-Mottin, E.; Chen, X.; Chen, Y.; Chincarini, A.; Chiummo, A.; Cho, H. S.; Chow, J.; Christensen, N.; Chu, Q.; Chua, S. S. 
Y.; Chung, S.; Ciani, G.; Clara, F.; Clark, D. E.; Clark, J. A.; Cleva, F.; Coccia, E.; Cohadon, P.-F.; Colla, A.; Colombini, M.; Constancio, M., Jr.; Conte, A.; Conte, R.; Cook, D.; Corbitt, T. R.; Cordier, M.; Cornish, N.; Corsi, A.; Costa, C. A.; Coughlin, M. W.; Coulon, J.-P.; Countryman, S.; Couvares, P.; Coward, D. M.; Cowart, M.; Coyne, D. C.; Craig, K.; Creighton, J. D. E.; Creighton, T. D.; Crowder, S. G.; Cumming, A.; Cunningham, L.; Cuoco, E.; Dahl, K.; Dal Canton, T.; Damjanic, M.; Danilishin, S. L.; D'Antonio, S.; Danzmann, K.; Dattilo, V.; Daudert, B.; Daveloza, H.; Davier, M.; Davies, G. S.; Daw, E. J.; Day, R.; Dayanga, T.; De Rosa, R.; Debreczeni, G.; Degallaix, J.; Del Pozzo, W.; Deleeuw, E.; Deléglise, S.; Denker, T.; Dent, T.; Dereli, H.; Dergachev, V.; DeRosa, R.; DeSalvo, R.; Dhurandhar, S.; Di Fiore, L.; Di Lieto, A.; Di Palma, I.; Di Virgilio, A.; Díaz, M.; Dietz, A.; Dmitry, K.; Donovan, F.; Dooley, K. L.; Doravari, S.; Drago, M.; Drever, R. W. P.; Driggers, J. C.; Du, Z.; Dumas, J.-C.; Dwyer, S.; Eberle, T.; Edwards, M.; Effler, A.; Ehrens, P.; Eichholz, J.; Eikenberry, S. S.; Endrőczi, G.; Essick, R.; Etzel, T.; Evans, K.; Evans, M.; Evans, T.; Factourovich, M.; Fafone, V.; Fairhurst, S.; Fang, Q.; Farr, B.; Farr, W.; Favata, M.; Fazi, D.; Fehrmann, H.; Feldbaum, D.; Ferrante, I.; Ferrini, F.; Fidecaro, F.; Finn, L. S.; Fiori, I.; Fisher, R.; Flaminio, R.; Foley, E.; Foley, S.; Forsi, E.; Forte, L. A.; Fotopoulos, N.; Fournier, J.-D.; Franco, S.; Frasca, S.; Frasconi, F.; Frede, M.; Frei, M.; Frei, Z.; Freise, A.; Frey, R.; Fricke, T. T.; Fritschel, P.; Frolov, V. V.; Fujimoto, M.-K.; Fulda, P.; Fyffe, M.; Gair, J.; Gammaitoni, L.; Garcia, J.; Garufi, F.; Gehrels, N.; Gemme, G.; Genin, E.; Gennai, A.; Gergely, L.; Ghosh, S.; Giaime, J. A.; Giampanis, S.; Giardina, K. D.; Giazotto, A.; Gil-Casanova, S.; Gill, C.; Gleason, J.; Goetz, E.; Goetz, R.; Gondan, L.; González, G.; Gordon, N.; Gorodetsky, M. 
L.; Gossan, S.; Goßler, S.; Gouaty, R.; Graef, C.; Graff, P. B.; Granata, M.; Grant, A.; Gras, S.; Gray, C.; Greenhalgh, R. J. S.; Gretarsson, A. M.; Griffo, C.; Grote, H.; Grover, K.; Grunewald, S.; Guidi, G. M.; Guido, C.; Gushwa, K. E.; Gustafson, E. K.; Gustafson, R.; Hall, B.; Hall, E.; Hammer, D.; Hammond, G.; Hanke, M.; Hanks, J.; Hanna, C.; Hanson, J.; Harms, J.; Harry, G. M.; Harry, I. W.; Harstad, E. D.; Hartman, M. T.; Haughian, K.; Hayama, K.; Heefner, J.; Heidmann, A.; Heintze, M.; Heitmann, H.; Hello, P.; Hemming, G.; Hendry, M.; Heng, I. S.; Heptonstall, A. W.; Heurs, M.; Hild, S.; Hoak, D.; Hodge, K. A.; Holt, K.; Holtrop, M.; Hong, T.; Hooper, S.; Horrom, T.; Hosken, D. J.; Hough, J.; Howell, E. J.; Hu, Y.; Hua, Z.; Huang, V.; Huerta, E. A.; Hughey, B.; Husa, S.; Huttner, S. H.; Huynh, M.; Huynh-Dinh, T.; Iafrate, J.; Ingram, D. R.

2013-11-01

We present the results of a directed search for continuous gravitational waves from unknown, isolated neutron stars in the Galactic center region, performed on two years of data from two detectors during LIGO's fifth science run. The search uses a semicoherent approach, coherently analyzing 630 segments, each spanning 11.5 hours, and then incoherently combining the results of the single segments. It covers gravitational wave frequencies from 78 to 496 Hz and a frequency-dependent range of first-order spindown values down to −7.86×10⁻⁸ Hz/s at the highest frequency. No gravitational waves were detected. The 90% confidence upper limits on the gravitational wave amplitude of sources at the Galactic center are ≈3.35×10⁻²⁵ for frequencies near 150 Hz. These upper limits are the most constraining to date for a large-parameter-space search for continuous gravitational wave signals.
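
The semicoherent principle (coherent analysis within each segment, incoherent summation of power across segments) can be sketched on a toy sinusoid; the values below (16 segments, a 10 Hz signal) are illustrative and unrelated to the actual search configuration.

```python
import numpy as np

def semicoherent_power(data, seg_len):
    """Split the time series into segments, Fourier-analyze each coherently,
    then sum the per-segment power spectra incoherently."""
    n_seg = len(data) // seg_len
    segs = data[:n_seg * seg_len].reshape(n_seg, seg_len)
    return (np.abs(np.fft.rfft(segs, axis=1)) ** 2).sum(axis=0)

fs, seg_t, f0 = 64.0, 4.0, 10.0          # sample rate, segment span, signal frequency (toy values)
t = np.arange(0, 16 * seg_t, 1.0 / fs)   # 16 segments of data
rng = np.random.default_rng(1)
data = 0.5 * np.sin(2 * np.pi * f0 * t) + rng.normal(scale=2.0, size=t.size)

seg_len = int(seg_t * fs)                # 256 samples per segment
power = semicoherent_power(data, seg_len)
freqs = np.fft.rfftfreq(seg_len, 1.0 / fs)
recovered = freqs[np.argmax(power)]      # frequency bin with the most stacked power
```

The signal is buried in noise within any single segment, but the stacked power spectrum recovers its frequency; the price of the incoherent combination is a coarser sensitivity scaling than a fully coherent search.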

6. Retrieval comparison of EndNote to search MEDLINE (Ovid and PubMed) versus searching them directly.

PubMed

Gall, Carole; Brahmi, Frances A

2004-01-01

Using EndNote version 7.0, the authors tested the EndNote search engine's ability to retrieve citations from MEDLINE for importation into EndNote, a citation management software package. Ovid MEDLINE and PubMed were selected for the comparison. Several searches were performed on Ovid MEDLINE and PubMed using EndNote as the search engine, and the same searches were run on both Ovid and PubMed directly. Findings indicate that it is preferable to search MEDLINE directly rather than through EndNote. The publishers of EndNote do warn users about the limitations of their product as a search engine for external databases. In this article, the limitations of EndNote as a search engine for MEDLINE are explored as they relate to MeSH, non-MeSH, citation verification, and author searching. PMID:15364649

7. Future directions in searching for eta-mesic nuclei

Haider, Quamrul; Liu, Lon-Chang

2016-03-01

An eta-mesic nucleus, the quasibound nuclear state of an eta (η) meson in a nucleus, is bound by the strong-interaction force alone. This new type of nuclear species, which extends the landscape of nuclear physics, has been extensively studied since its prediction in 1986. Experimental searches for η-mesic nuclei have frequently employed transfer reactions; one such reaction led to the observation of the η-mesic nucleus ²⁵Mg_η. However, searches for quasibound η-nucleus states in lighter nuclei such as ³He, ⁴He, and ¹¹B have not yet yielded positive results. Searching for η-mesic nuclei in medium-mass nuclear systems other than ²⁵Mg is highly valuable. In view of the aforementioned experimental results, we suggest searching for more η-mesic nuclei in target nuclei with mass number A ≥ 12.

8. Gravitational Waves: Search Results, Data Analysis and Parameter Estimation. Amaldi 10 Parallel Session C2

NASA Technical Reports Server (NTRS)

Astone, Pia; Weinstein, Alan; Agathos, Michalis; Bejger, Michal; Christensen, Nelson; Dent, Thomas; Graff, Philip; Klimenko, Sergey; Mazzolo, Giulio; Nishizawa, Atsushi

2015-01-01

The Amaldi 10 Parallel Session C2 on gravitational wave (GW) search results, data analysis and parameter estimation included three lively sessions of lectures by 13 presenters, and 34 posters. The talks and posters covered a huge range of material, including results and analysis techniques for ground-based GW detectors, targeting anticipated signals from different astrophysical sources: compact binary inspiral, merger and ringdown; GW bursts from intermediate mass binary black hole mergers, cosmic string cusps, core-collapse supernovae, and other unmodeled sources; continuous waves from spinning neutron stars; and a stochastic GW background. There was considerable emphasis on Bayesian techniques for estimating the parameters of coalescing compact binary systems from the gravitational waveforms extracted from the data from the advanced detector network. This included methods to distinguish deviations of the signals from what is expected in the context of General Relativity.

9. Scalable simulations for directed self-assembly patterning with the use of GPU parallel computing

Yoshimoto, Kenji; Peters, Brandon L.; Khaira, Gurdaman S.; de Pablo, Juan J.

2012-03-01

Directed self-assembly (DSA) patterning has been increasingly investigated as an alternative lithographic process for future technology nodes. One of the critical specifications for DSA patterning is the density of defects generated during the annealing process or by the roughness of the pre-patterned structure. Due to their high sensitivity to process and wafer conditions, however, characterization of those defects remains challenging. DSA simulations can be a powerful tool for predicting the formation of DSA defects. In this work, we propose a new method to perform parallel computing of DSA Monte Carlo (MC) simulations. A consumer graphics card was used to access its hundreds of processing units for parallel computing. By partitioning the simulation system into non-interacting domains, we were able to run MC trial moves in parallel on multiple graphics processing units (GPUs). Our results show a significant improvement in computational performance.
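
The non-interacting-domain idea can be illustrated with a checkerboard Metropolis update on a 2D Ising lattice (a simple stand-in for the polymer Monte Carlo actually used in DSA simulation): sites of one checkerboard color share no nearest-neighbour bonds, so their trial moves are mutually independent and can be executed simultaneously, here vectorized with NumPy in place of GPU threads.

```python
import numpy as np

def checkerboard_sweep(spins, beta, rng):
    """One Metropolis sweep of a 2D Ising model using the checkerboard
    (non-interacting domain) partition: same-color sites share no
    nearest-neighbor bonds, so all their trial moves are independent."""
    ii, jj = np.indices(spins.shape)
    for color in (0, 1):
        mask = (ii + jj) % 2 == color
        nb = (np.roll(spins, 1, 0) + np.roll(spins, -1, 0) +
              np.roll(spins, 1, 1) + np.roll(spins, -1, 1))
        dE = 2.0 * spins * nb                           # flip energy at every site
        accept = rng.random(spins.shape) < np.exp(-beta * np.clip(dE, 0, None))
        spins[accept & mask] *= -1                      # flip accepted sites of this color
    return spins

rng = np.random.default_rng(2)
spins = rng.choice([-1, 1], size=(32, 32))
for _ in range(50):                                     # quench at low temperature
    checkerboard_sweep(spins, beta=1.0, rng=rng)
```

On a GPU each same-color site would be handled by its own thread; the two-color loop is the serial remnant of the domain partition.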

10. Versatile directional searches for gravitational waves with Pulsar Timing Arrays

Madison, D. R.; Zhu, X.-J.; Hobbs, G.; Coles, W.; Shannon, R. M.; Wang, J. B.; Tiburzi, C.; Manchester, R. N.; Bailes, M.; Bhat, N. D. R.; Burke-Spolaor, S.; Dai, S.; Dempsey, J.; Keith, M.; Kerr, M.; Lasky, P.; Levin, Y.; Osłowski, S.; Ravi, V.; Reardon, D.; Rosado, P.; Spiewak, R.; van Straten, W.; Toomey, L.; Wen, L.; You, X.

2016-02-01

By regularly monitoring the most stable millisecond pulsars over many years, pulsar timing arrays (PTAs) are positioned to detect and study correlations in the timing behaviour of those pulsars. Gravitational waves (GWs) from supermassive black hole binaries (SMBHBs) are an exciting, potentially detectable source of such correlations. We describe a straightforward technique by which a PTA can be 'phased up' to form time series of the two polarization modes of GWs coming from a particular direction of the sky. Our technique requires no assumptions regarding the time-domain behaviour of a GW signal. This method has already been used to place stringent bounds on GWs from individual SMBHBs in circular orbits. Here, we describe the methodology and demonstrate the versatility of the technique in searches for a wide variety of GW signals including bursts with unmodelled waveforms. Using the first six years of data from the Parkes Pulsar Timing Array, we conduct an all-sky search for a detectable excess of GW power from any direction. For the lines of sight to several nearby massive galaxy clusters, we carry out a more detailed search for GW bursts with memory, which are distinct signatures of SMBHB mergers. In all cases, we find that the data are consistent with noise.
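
The essence of phasing up is a direction-dependent linear combination of the pulsar residuals. A minimal linear-algebra sketch (ours, with a random stand-in for the true antenna-pattern responses): given each pulsar's response coefficients to the two polarizations for an assumed sky direction, least squares recovers the two polarization time series from the residuals.

```python
import numpy as np

rng = np.random.default_rng(3)
n_pulsars, n_times = 20, 100

# Hypothetical response matrix: row p holds pulsar p's sensitivity (F+, Fx)
# to the two GW polarizations for one assumed source direction.
R = rng.normal(size=(n_pulsars, 2))

# Two unknown polarization time series a(t) = (h+(t), hx(t)).
a_true = np.vstack([np.sin(np.linspace(0, 4 * np.pi, n_times)),
                    np.cos(np.linspace(0, 4 * np.pi, n_times))])

residuals = R @ a_true         # noiseless timing residuals, shape (pulsars, times)

# "Phase up": least-squares projection onto the two polarization modes,
# a_hat(t) = (R^T R)^{-1} R^T r(t), applied to all epochs at once.
a_hat = np.linalg.lstsq(R, residuals, rcond=None)[0]
```

No time-domain model of a(t) is assumed anywhere, which mirrors the paper's point that the technique is agnostic about the GW waveform.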

11. A direct search algorithm for optimization with noisy function evaluations

SciTech Connect

Anderson, E.; Ferris, M.

1994-12-31

In this paper we describe a new direct search algorithm, reminiscent of the Nelder-Mead method, and related to a more recent pattern search algorithm proposed by Torczon. We believe that this method has applications in situations in which each function evaluation is noisy, but in which repeated function evaluations at the same point can be used to progressively reduce the error. For example, this will occur if the objective function value is given as a result of a simulation experiment. We investigate the convergence behaviour of the new algorithm for problems in which each function evaluation returns the true value of the function plus a random error drawn from a Normal distribution.
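
A minimal sketch of the repeated-evaluation idea, using a simple compass/pattern search in the spirit of (but not identical to) the authors' algorithm: averaging k noisy samples shrinks the evaluation error like 1/√k, making comparisons between poll points reliable. The objective and noise level are illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)

def noisy_f(x, k):
    """True objective plus N(0, 0.1) noise, averaged over k repeated evaluations."""
    true = (x[0] - 1.0) ** 2 + (x[1] + 2.0) ** 2
    return true + rng.normal(scale=0.1, size=k).mean()

def pattern_search(x0, step=1.0, shrink=0.5, iters=60, k=25):
    x = np.asarray(x0, dtype=float)
    fx = noisy_f(x, k)
    dirs = np.array([[1, 0], [-1, 0], [0, 1], [0, -1]], dtype=float)
    for _ in range(iters):
        trials = [x + step * d for d in dirs]
        vals = [noisy_f(t, k) for t in trials]
        best = int(np.argmin(vals))
        if vals[best] < fx:          # accept the best improving poll point
            x, fx = trials[best], vals[best]
        else:                        # no improvement: refine the mesh
            step *= shrink
    return x

x_opt = pattern_search([0.0, 0.0])   # true minimizer is (1, -2)
```

With k = 25 the comparison noise is ~0.02, small against the early function differences; more sophisticated schemes increase k adaptively as the mesh shrinks.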

12. Study of genetic direct search algorithms for function optimization

NASA Technical Reports Server (NTRS)

Zeigler, B. P.

1974-01-01

The results are presented of a study to determine the performance of genetic direct search algorithms in solving function optimization problems arising in the optimal and adaptive control areas. The findings indicate that: (1) genetic algorithms can outperform standard algorithms in multimodal and/or noisy optimization situations, but suffer from a lack of gradient-exploitation facilities when gradient information could be used to guide the search; (2) for large populations or low-dimensional function spaces, mutation is a sufficient operator, but for small populations or high-dimensional functions, crossover applied at about equal frequency with mutation is the optimum combination; and (3) complexity, in terms of storage space and running time, is significantly increased when the population size is increased or when the inversion operator or the second-level adaptation routine is added to the basic structure.
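
The mutation/crossover interplay can be sketched with a minimal real-coded genetic algorithm on a standard multimodal test function (our toy, not the study's code): truncation selection with retained parents, arithmetic crossover, and Gaussian mutation.

```python
import numpy as np

rng = np.random.default_rng(5)

def rastrigin(x):
    """Standard multimodal test function; global minimum 0 at the origin."""
    return 10 * x.shape[-1] + (x ** 2 - 10 * np.cos(2 * np.pi * x)).sum(axis=-1)

def ga(dim=2, pop_size=60, gens=200, sigma=0.3):
    pop = rng.uniform(-5.12, 5.12, size=(pop_size, dim))
    for _ in range(gens):
        fit = rastrigin(pop)
        order = np.argsort(fit)
        parents = pop[order[:pop_size // 2]]                 # truncation selection (elitist)
        mates = parents[rng.permutation(len(parents))]
        alpha = rng.random((len(parents), 1))
        children = alpha * parents + (1 - alpha) * mates     # arithmetic crossover
        children += rng.normal(scale=sigma, size=children.shape)  # Gaussian mutation
        pop = np.vstack([parents, children])
    return pop[np.argmin(rastrigin(pop))]

best = ga()
```

Because parents are carried over, the best individual never worsens; mutation supplies the basin-hopping that a gradient method lacks, at the cost of ignoring gradient information entirely, consistent with finding (1).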

13. Rotating parallel ray omni-directional integration for instantaneous pressure reconstruction from measured pressure gradient

Liu, Xiaofeng; Siddle-Mitchell, Seth

2015-11-01

This paper presents a novel pressure reconstruction method featuring rotating-parallel-ray omni-directional integration, an improvement over the circular virtual boundary integration method introduced by Liu and Katz (2003, 2006, 2008 and 2013) for non-intrusive instantaneous pressure measurement in incompressible flow fields. Unlike the virtual boundary omni-directional integration, where the integration paths originate from a virtual circular boundary at a finite distance from the real boundary of the integration domain, the new method uses parallel rays, which can be viewed as originating at infinity, to guide the integration paths. By rotating the parallel rays, omni-directional paths with equal weights from all directions toward the point of interest at any location within the computational domain are generated. In this way, the location dependence of the integration weight inherent in the previous algorithm is eliminated. With this new algorithm, the r.m.s. error of the reconstructed pressure for a synthetic rotational flow, relative to theoretical values, is reduced from 1.03% to 0.30%. Improvement is further demonstrated by comparing the reconstructed pressure with that from the Johns Hopkins University isotropic turbulence database (JHTDB). This project is funded by San Diego State University.

14. Parallel Directionally Split Solver Based on Reformulation of Pipelined Thomas Algorithm

NASA Technical Reports Server (NTRS)

Povitsky, A.

1998-01-01

In this research an efficient parallel algorithm for 3-D directionally split problems is developed. The proposed algorithm is based on a reformulated version of the pipelined Thomas algorithm that starts the backward-step computations immediately after the completion of the forward-step computations for the first portion of lines. This algorithm makes data available for other computational tasks while processors would otherwise sit idle within the Thomas algorithm. The proposed 3-D directionally split solver is based on static scheduling of processors, in which local and non-local, data-dependent and data-independent computations are scheduled while processors are idle. A theoretical model of parallelization efficiency is used to define optimal parameters of the algorithm, to show an asymptotic parallelization penalty, and to obtain an optimal cover of a global domain with subdomains. It is shown by computational experiments and by the theoretical model that the proposed algorithm reduces the parallelization penalty by about a factor of two over the basic algorithm for the range of processor (subdomain) counts considered and the number of grid nodes per subdomain.
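
For reference, this is the serial Thomas algorithm that the pipelined variant reorganizes: a forward elimination sweep followed by a backward substitution sweep. The paper's reformulation interleaves these sweeps across subdomains, which this serial sketch does not show.

```python
import numpy as np

def thomas(a, b, c, d):
    """Solve a tridiagonal system: a = sub-diagonal (a[0] unused), b = diagonal,
    c = super-diagonal (c[-1] unused), d = right-hand side."""
    n = len(b)
    cp, dp = np.empty(n), np.empty(n)
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):                       # forward elimination sweep
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = np.empty(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):              # backward substitution sweep
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x
```

The data dependence running the length of each line (cp[i] needs cp[i-1], x[i] needs x[i+1]) is exactly what forces pipelining when lines are split across processors.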

15. A direct-execution parallel architecture for the Advanced Continuous Simulation Language (ACSL)

NASA Technical Reports Server (NTRS)

Carroll, Chester C.; Owen, Jeffrey E.

1988-01-01

A direct-execution parallel architecture for the Advanced Continuous Simulation Language (ACSL) is presented which overcomes the traditional disadvantages of simulations executed on a digital computer. The incorporation of parallel processing allows the mapping of simulations onto a digital computer to be done in the same inherently parallel manner as they are currently mapped onto an analog computer. The direct-execution format maximizes the efficiency of the executed code, since the need for a high-level language compiler is eliminated. Resolution is greatly increased over that which is available with an analog computer, without the sacrifice in execution speed normally expected of digital computer simulations. Although this report covers all aspects of the new architecture, key emphasis is placed on the processing element configuration and the microprogramming of the ACSL constructs. The execution times for all ACSL constructs are computed using a model of a processing element based on the AMD 29000 CPU and the AMD 29027 FPU. The increase in execution speed provided by parallel processing is exemplified by comparing the derived execution times of two ACSL programs with the execution times for the same programs executed on a similar sequential architecture.

16. Direct search implications for a custodially-embedded composite top

Chivukula, R. Sekhar; Foadi, Roshan; Foren, Dennis; Simmons, Elizabeth H.

2016-07-01

We assess current experimental constraints on the bidoublet+singlet model of top compositeness previously proposed in the literature. This model extends the Standard Model's spectrum by adding a custodially embedded vectorlike electroweak bidoublet of quarks and a vectorlike electroweak singlet quark. While either of those states alone would produce a model in tension with constraints from precision electroweak data, in combination they can produce a viable model. We show that current precision electroweak data, in the wake of the Higgs discovery, accommodate the model and we explore the impact of direct collider searches for the partners of the top quark.

17. Direct kinematics solution architectures for industrial robot manipulators: Bit-serial versus parallel

NASA Technical Reports Server (NTRS)

Lee, J.; Kim, K.

1991-01-01

A Very Large Scale Integration (VLSI) architecture for robot direct kinematic computation suitable for industrial robot manipulators was investigated. The Denavit-Hartenberg transformations are reviewed to identify a suitable processing element, namely an augmented CORDIC. Two distinct implementations are elaborated: bit-serial and parallel. The performance of each scheme is analyzed with respect to the time to compute one location of the end-effector of a six-link manipulator and the number of transistors required.
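
As a hedged illustration of the rotation-mode CORDIC iteration such a processing element builds on (the paper's augmented CORDIC adds capabilities not shown here), each step uses only shifts by powers of two and additions, plus one constant scale-factor correction:

```python
import math

def cordic_rotate(theta, n=40):
    """Rotation-mode CORDIC: returns (cos(theta), sin(theta)) for |theta| < pi/2.
    Each iteration rotates by +/- atan(2^-i) using shift-and-add arithmetic."""
    angles = [math.atan(2.0 ** -i) for i in range(n)]
    gain = 1.0
    for i in range(n):
        gain *= math.sqrt(1.0 + 2.0 ** (-2 * i))   # accumulated CORDIC gain
    x, y, z = 1.0, 0.0, theta
    for i in range(n):
        d = 1.0 if z >= 0.0 else -1.0              # steer the residual angle toward zero
        x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
        z -= d * angles[i]
    return x / gain, y / gain
```

In hardware the multiplications by 2^-i are wire shifts, which is why CORDIC suits both bit-serial and parallel VLSI datapaths.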

19. Parallel machine scheduling with step-deteriorating jobs and setup times by a hybrid discrete cuckoo search algorithm

Guo, Peng; Cheng, Wenming; Wang, Yi

2015-11-01

This article considers the parallel machine scheduling problem with step-deteriorating jobs and sequence-dependent setup times. The objective is to minimize the total tardiness by determining the allocation and sequence of jobs on identical parallel machines. In this problem, the processing time of each job is a step function dependent upon its starting time. An individual extended time is penalized when the starting time of a job is later than a specific deterioration date. The possibility of deterioration of a job makes the parallel machine scheduling problem more challenging than ordinary ones. A mixed integer programming model for the optimal solution is derived. Due to its NP-hard nature, a hybrid discrete cuckoo search algorithm is proposed to solve this problem. In order to generate a good initial swarm, a modified Biskup-Hermann-Gupta (BHG) heuristic called MBHG is incorporated into the population initialization. Several discrete operators are proposed in the random walk of Lévy flights and the crossover search. Moreover, a local search procedure based on variable neighbourhood descent is integrated into the algorithm as a hybrid strategy in order to improve the quality of elite solutions. Computational experiments are executed on two sets of randomly generated test instances. The results show that the proposed hybrid algorithm can yield better solutions in comparison with the commercial solver CPLEX® with a one hour time limit, the discrete cuckoo search algorithm and the existing variable neighbourhood search algorithm.
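
The core evaluation inside any such metaheuristic is a decoder that turns a job sequence into machine assignments and a total-tardiness value. A minimal sketch with illustrative data (not the paper's instances), ignoring the sequence-dependent setup times for brevity:

```python
def total_tardiness(sequence, jobs, n_machines):
    """Assign jobs in the given order to the earliest-available identical machine.
    Each job is (base_time, penalty, deterioration_date, due_date): if a job
    starts after its deterioration date, its processing time steps up by penalty."""
    free = [0.0] * n_machines
    tardiness = 0.0
    for j in sequence:
        base, penalty, h, due = jobs[j]
        m = min(range(n_machines), key=lambda k: free[k])  # earliest-available machine
        start = free[m]
        p = base if start <= h else base + penalty         # step-deteriorating time
        free[m] = start + p
        tardiness += max(0.0, free[m] - due)
    return tardiness

# Toy instance: (base_time, penalty, deterioration_date, due_date) per job.
jobs = [(3, 2, 2, 5), (4, 1, 0, 4), (2, 3, 5, 4), (1, 5, 3, 5)]
```

The cuckoo search, crossover, and neighbourhood moves of the hybrid algorithm all operate on the sequence; this decoder is what scores each candidate.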

20. Parallel spatial direct numerical simulations on the Intel iPSC/860 hypercube

NASA Technical Reports Server (NTRS)

1993-01-01

The implementation and performance of a parallel spatial direct numerical simulation (PSDNS) approach on the Intel iPSC/860 hypercube is documented. The direct numerical simulation approach is used to compute spatially evolving disturbances associated with laminar-to-turbulent transition in boundary-layer flows. The feasibility of using the PSDNS on the hypercube to perform transition studies is examined. The results indicate that the direct numerical simulation approach can be effectively parallelized on a distributed-memory parallel machine. As the number of processors increases, nearly ideal linear speedups are achieved with nonoptimized routines; slower-than-linear speedups are achieved with optimized (machine-dependent library) routines. The slower-than-linear speedup arises because the Fast Fourier Transform (FFT) routine dominates the computational cost and itself exhibits less-than-ideal speedup. However, with the machine-dependent routines the total computational cost decreases by a factor of 4 to 5 compared with standard FORTRAN routines. The computational cost increases linearly with spanwise, wall-normal, and streamwise grid refinements. The hypercube with 32 processors was estimated to require approximately twice the single-processor time of a Cray supercomputer to complete a comparable simulation; however, it is estimated that a subgrid-scale model, which reduces the required number of grid points and turns the computation into a large-eddy simulation (PSLES), would reduce the computational cost and memory requirements by a factor of 10 over the PSDNS. This PSLES implementation would enable transition simulations on the hypercube at a reasonable computational cost.

1. Bi-directional series-parallel elastic actuator and overlap of the actuation layers.

PubMed

Furnémont, Raphaël; Mathijssen, Glenn; Verstraten, Tom; Lefeber, Dirk; Vanderborght, Bram

2016-02-01

Several robotics applications require actuators with a high torque-to-weight ratio and high energy efficiency. Progress in that direction was made by introducing compliant elements into the actuation. A large variety of actuators were developed, such as series elastic actuators (SEAs), variable stiffness actuators and parallel elastic actuators (PEAs). SEAs can reduce the peak power, while PEAs can reduce the torque requirement on the motor. Nonetheless, these actuators still cannot match the performance of human muscle. To combine both advantages, the series parallel elastic actuator (SPEA) was developed. The principle is inspired by biological muscles. Muscles are composed of motor units, placed in parallel, which are variably recruited as the required effort increases. This biological principle is exploited in the SPEA, where springs (layers), placed in parallel, can be recruited one by one. This recruitment is performed by an intermittent mechanism. This paper presents the development of an SPEA using the MACCEPA principle with a self-closing mechanism. This actuator can deliver a bi-directional output torque, variable stiffness and reduced friction. The load on the motor can also be reduced, leading to lower power consumption. The variable recruitment of the parallel springs can also be tuned in order to further decrease the consumption of the actuator for a given task. First, an explanation of the concept and a brief description of prior work will be given. Next, the design and the model of one of the layers will be presented. The working principle of the full actuator will then be given. At the end of this paper, experiments measuring the electric consumption of the actuator will demonstrate the advantage of the SPEA over an equivalent stiff actuator. PMID:26813145

2. An Automated Directed Spectral Search Methodology for Small Target Detection

Grossman, Stanley I.

Much of the current effort in remote sensing tackles macro-level problems such as determining the extent of wheat in a field, the general health of vegetation, or the extent of mineral deposits in an area. However, for many of the remaining remote sensing challenges currently being studied, such as border protection, drug smuggling, treaty verification, and the war on terror, most targets are very small in nature: a vehicle or even a person. While in typical macro-level problems the object of interest is known to be in the scene, for small target detection problems it is usually not even known whether the desired small target exists in the scene, never mind finding it in abundance. The ability to find specific small targets, such as vehicles, typifies this problem. Complicating the analyst's life, the growing number of available sensors is generating mountains of imagery, outstripping analysts' ability to visually peruse them. This work presents the important factors influencing spectral exploitation using multispectral data and suggests a different approach to small target detection. The methodology of directed search is presented, including the use of scene-modeled spectral libraries, various search algorithms, and traditional statistical and ROC curve analysis. The work suggests a new metric to calibrate analysis, labeled the analytic sweet spot, as well as an estimation method for identifying the sweet-spot threshold for an image. It also suggests a new visualization aid, called nearest neighbor inflation (NNI), for highlighting the target in its entirety. It brings these together to propose that these additions to the target detection arena allow the construction of a fully automated target detection scheme. This dissertation next details experiments to support the hypothesis that the optimum detection threshold is the analytic sweet spot and that the estimation method adequately predicts it. Experimental results and analysis are presented for the proposed directed
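
A hedged sketch of the generic threshold-sweep/ROC machinery such a methodology rests on, using a simple spectral-angle detector on synthetic spectra (the analytic sweet spot metric itself is the dissertation's and is not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(6)

def spectral_angle(pixels, target):
    """Angle (radians) between each pixel spectrum and the target spectrum."""
    cos = pixels @ target / (np.linalg.norm(pixels, axis=1) * np.linalg.norm(target))
    return np.arccos(np.clip(cos, -1.0, 1.0))

# Synthetic scene: background pixels plus a handful of target pixels, all noisy.
bands = 8
target = rng.random(bands) + 0.5
background = rng.random(bands) + 0.5
n_bg, n_tgt = 500, 50
pixels = np.vstack([background + rng.normal(scale=0.2, size=(n_bg, bands)),
                    target + rng.normal(scale=0.2, size=(n_tgt, bands))])
labels = np.r_[np.zeros(n_bg), np.ones(n_tgt)]

scores = -spectral_angle(pixels, target)      # higher score = more target-like
order = np.argsort(scores)[::-1]              # sweep threshold from strict to loose
tpr = np.cumsum(labels[order]) / n_tgt
fpr = np.cumsum(1 - labels[order]) / n_bg

# Area under the ROC curve by the trapezoid rule.
fpr_full, tpr_full = np.r_[0.0, fpr], np.r_[0.0, tpr]
auc = float(np.sum(np.diff(fpr_full) * (tpr_full[1:] + tpr_full[:-1]) / 2))
```

Picking one operating point on this curve is exactly the threshold-selection problem the sweet-spot estimation addresses.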

3. A direct search for energetic electrons produced by laboratory sparks

Carlson, B. E.; Kochkin, P.; van Deursen, A. P. J.; Hansen, R.; Gjesteland, T.; Ostgaard, N.

2012-04-01

High-voltage sparks in the lab unexpectedly emit x-rays with energies up to several hundred keV. These x-rays have been observed repeatedly and can only be produced by bremsstrahlung, implying the presence of a population of energetic electrons. Such energetic electron and x-ray production may be important for the physics of streamers, spark discharges, and lightning, and has been suggested as directly related to the production of terrestrial gamma-ray flashes. We present the results of the first direct search for energetic electrons produced by a lab spark. Small electrically isolated scintillators are placed at various locations near the spark gap of a 2 MV Marx generator and the resulting signals are recorded. We present results on the spatial, temporal, and statistical variability of signals produced by energetic electrons and compare our results to predictions of energetic electron production from the literature.

4. Scalable High Performance Computing: Direct and Large-Eddy Turbulent Flow Simulations Using Massively Parallel Computers

NASA Technical Reports Server (NTRS)

Morgan, Philip E.

2004-01-01

This final report contains reports of research related to the tasks "Scalable High Performance Computing: Direct and Large-Eddy Turbulent Flow Simulations Using Massively Parallel Computers" and "Develop High-Performance Time-Domain Computational Electromagnetics Capability for RCS Prediction, Wave Propagation in Dispersive Media, and Dual-Use Applications." The discussion of Scalable High Performance Computing reports on three objectives: validate, assess the scalability of, and apply two parallel flow solvers for three-dimensional Navier-Stokes flows; develop and validate a high-order parallel solver for Direct Numerical Simulation (DNS) and Large Eddy Simulation (LES) problems; and investigate and develop a high-order Reynolds-averaged Navier-Stokes turbulence model. The discussion of High-Performance Time-Domain Computational Electromagnetics reports on five objectives: enhance an electromagnetics code (CHARGE) to effectively model antenna problems; apply lessons learned in the high-order/spectral solution of swirling 3D jets to the electromagnetics project; transition a high-order fluids code, FDL3DI, to solve Maxwell's equations using compact differencing; develop and demonstrate improved radiation-absorbing boundary conditions for high-order CEM; and extend the high-order CEM solver to address variable material properties. The report also contains a review of work done by the systems engineer.

5. Co-ordination of directional overcurrent protection with load current for parallel feeders

SciTech Connect

Wright, J.W.; Lloyd, G.; Hindle, P.J.

1999-11-01

Directional phase overcurrent relays are commonly applied at the receiving ends of parallel feeders or transformer feeders. Their purpose is to ensure full discrimination of main or back-up power system overcurrent protection for a fault near the receiving end of one feeder. This paper reviews this type of relay application and highlights load current setting constraints for directional protection. Such constraints have not previously been publicized in well-known textbooks. A directional relay current setting constraint that is suggested in some textbooks is based purely on thermal rating considerations for older technology relays. This constraint may not exist with modern numerical relays. In the absence of any apparent constraint, there is a temptation to adopt lower current settings with modern directional relays in relation to reverse load current at the receiving ends of parallel feeders. This paper identifies the danger of adopting very low current settings without any special relay feature to ensure protection security with load current during power system faults. A system incident recorded by numerical relays is also offered to highlight this danger. In cases where the identified constraints must be infringed, an implemented and tested relaying technique is proposed.

6. Evaluation of parallel direct sparse linear solvers in electromagnetic geophysical problems

Puzyrev, Vladimir; Koric, Seid; Wilkin, Scott

2016-04-01

High performance computing is absolutely necessary for large-scale geophysical simulations. In order to obtain a realistic image of a geologically complex area, industrial surveys collect vast amounts of data, making the computational cost extremely high for the subsequent simulations. A major computational bottleneck of modeling and inversion algorithms is solving the large sparse ill-conditioned systems of linear equations in complex domains with multiple right-hand sides. Recently, parallel direct solvers have been successfully applied to multi-source seismic and electromagnetic problems. These methods are robust and exhibit good performance, but often require large amounts of memory and have limited scalability. In this paper, we evaluate modern direct solvers on large-scale modeling examples that previously were considered unachievable with these methods. Performance and scalability tests utilizing up to 65,536 cores on the Blue Waters supercomputer clearly illustrate the robustness, efficiency and competitiveness of direct solvers compared to iterative techniques. Wide use of direct methods utilizing modern parallel architectures will allow modeling tools to accurately support multi-source surveys and 3D data acquisition geometries, thus promoting a more efficient use of the electromagnetic methods in geophysics.

7. Direct and Inverse Kinematics of a Novel Tip-Tilt-Piston Parallel Manipulator

NASA Technical Reports Server (NTRS)

2004-01-01

Closed-form direct and inverse kinematics of a new three degree-of-freedom (DOF) parallel manipulator with inextensible limbs and base-mounted actuators are presented. The manipulator has higher resolution and precision than the existing three-DOF mechanisms with extensible limbs. Since all of the manipulator actuators are base-mounted, higher payload capacity, smaller actuator sizes, and lower power dissipation can be obtained. The manipulator is suitable for alignment applications where only tip, tilt, and piston motions are significant. The direct kinematics of the manipulator is reduced to solving an eighth-degree polynomial in the square of the tangent of the half-angle between one of the limbs and the base plane. Hence, there are at most 16 assembly configurations for the manipulator. In addition, it is shown that the 16 solutions are eight pairs of reflected configurations with respect to the base plane. Numerical examples for the direct and inverse kinematics of the manipulator are also presented.
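As a sketch of the numerical side of this reduction, the admissible configurations can be enumerated from the degree-8 polynomial in t = tan²(θ/2). The coefficients below are hypothetical placeholders; the paper's actual polynomial depends on the manipulator geometry.

```python
import numpy as np

# Hypothetical degree-8 coefficients (highest power first); the real ones
# come from the manipulator's geometry.
coeffs = [1.0, -3.0, 2.0, 0.0, 0.0, 0.0, 0.0, 0.0, -1.0]

t = np.roots(coeffs)                   # all 8 complex roots
t = t[np.abs(t.imag) < 1e-9].real      # keep the real roots
t = t[t >= 0.0]                        # t = tan^2(theta/2) must be >= 0

# Each admissible t gives a +/- half-angle pair: at most 16 configurations.
thetas = np.concatenate([2 * np.arctan(np.sqrt(t)),
                         -2 * np.arctan(np.sqrt(t))])
```

Complex or negative roots are discarded because t is the square of a tangent, which is how the count of at most 16 real assembly configurations arises.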

8. Direct Search for Dark Matter with DarkSide

Agnes, P.; Alexander, T.; Alton, A.; Arisaka, K.; Back, H. O.; Baldin, B.; Biery, K.; Bonfini, G.; Bossa, M.; Brigatti, A.; Brodsky, J.; Budano, F.; Cadonati, L.; Calaprice, F.; Canci, N.; Candela, A.; Cao, H.; Cariello, M.; Cavalcante, P.; Chavarria, A.; Chepurnov, A.; Cocco, A. G.; Crippa, L.; D'Angelo, D.; D'Incecco, M.; Davini, S.; De Deo, M.; Derbin, A.; Devoto, A.; Di Eusanio, F.; Di Pietro, G.; Edkins, E.; Empl, A.; Fan, A.; Fiorillo, G.; Fomenko, K.; Forster, G.; Franco, D.; Gabriele, F.; Galbiati, C.; Goretti, A.; Grandi, L.; Gromov, M.; Guan, M. Y.; Guardincerri, Y.; Hackett, B.; Herner, K.; Hungerford, E. V.; Ianni, Al; Ianni, An; Jollet, C.; Keeter, K.; Kendziora, C.; Kidner, S.; Kobychev, V.; Koh, G.; Korablev, D.; Korga, G.; Kurlej, A.; Li, P. X.; Loer, B.; Lombardi, P.; Love, C.; Ludhova, L.; Luitz, S.; Ma, Y. Q.; Machulin, I.; Mandarano, A.; Mari, S.; Maricic, J.; Marini, L.; Martoff, C. J.; Meregaglia, A.; Meroni, E.; Meyers, P. D.; Milincic, R.; Montanari, D.; Montuschi, M.; Monzani, M. E.; Mosteiro, P.; Mount, B.; Muratova, V.; Musico, P.; Nelson, A.; Odrowski, S.; Okounkova, M.; Orsini, M.; Ortica, F.; Pagani, L.; Pallavicini, M.; Pantic, E.; Papp, L.; Parmeggiano, S.; Parsells, R.; Pelczar, K.; Pelliccia, N.; Perasso, S.; Pocar, A.; Pordes, S.; Pugachev, D.; Qian, H.; Randle, K.; Ranucci, G.; Razeto, A.; Reinhold, B.; Renshaw, A.; Romani, A.; Rossi, B.; Rossi, N.; Rountree, S. D.; Sablone, D.; Saggese, P.; Saldanha, R.; Sands, W.; Sangiorgio, S.; Segreto, E.; Semenov, D.; Shields, E.; Skorokhvatov, M.; Smirnov, O.; Sotnikov, A.; Stanford, C.; Suvorov, Y.; Tartaglia, R.; Tatarowicz, J.; Testera, G.; Tonazzo, A.; Unzhakov, E.; Vogelaar, R. B.; Wada, M.; Walker, S.; Wang, H.; Wang, Y.; Watson, A.; Westerdale, S.; Wojcik, M.; Wright, A.; Xiang, X.; Xu, J.; Yang, C. G.; Yoo, J.; Zavatarelli, S.; Zec, A.; Zhu, C.; Zuzel, G.

2015-11-01

The DarkSide experiment is designed for the direct detection of Dark Matter with a double phase liquid argon TPC operating underground at Laboratori Nazionali del Gran Sasso. The TPC is placed inside a 30 ton liquid organic scintillator sphere, acting as a neutron veto, which is in turn installed inside a 1 kt water Cherenkov detector. The current detector has been running since November 2013 with a 50 kg atmospheric argon fill, and we report here the first null results of a Dark Matter search for a (1422 ± 67) kg·d exposure. This result corresponds to a 90% CL upper limit on the WIMP-nucleon cross section of 6.1 × 10⁻⁴⁴ cm² (for a WIMP mass of 100 GeV/c²) and is currently the most sensitive limit obtained with an argon target.

9. Light magnetic dark matter in direct detection searches

Del Nobile, Eugenio; Kouvaris, Chris; Panci, Paolo; Sannino, Francesco; Virkajärvi, Jussi

2012-08-01

We study a fermionic Dark Matter particle carrying magnetic dipole moment and analyze its impact on direct detection experiments. In particular we show that it can accommodate the DAMA, CoGeNT and CRESST experimental results. Assuming conservative bounds, this candidate is shown not to be ruled out by the CDMS, XENON and PICASSO experiments. We offer an analytic understanding of how the long-range interaction modifies the experimental allowed regions, in the cross section versus Dark Matter mass parameter space, with respect to the typically assumed contact interaction. Finally, in the context of a symmetric Dark Matter sector, we determine the associated thermal relic density, and further provide relevant constraints imposed by indirect searches and colliders.

10. Direct search for dark matter with DarkSide

SciTech Connect

Agnes, P.

2015-11-16

Here, the DarkSide experiment is designed for the direct detection of Dark Matter with a double phase liquid argon TPC operating underground at Laboratori Nazionali del Gran Sasso. The TPC is placed inside a 30 ton liquid organic scintillator sphere, acting as a neutron veto, which is in turn installed inside a 1 kt water Cherenkov detector. The current detector has been running since November 2013 with a 50 kg atmospheric argon fill, and we report here the first null results of a Dark Matter search for a (1422 ± 67) kg·d exposure. This result corresponds to a 90% CL upper limit on the WIMP-nucleon cross section of 6.1 × 10⁻⁴⁴ cm² (for a WIMP mass of 100 GeV/c²) and is currently the most sensitive limit obtained with an argon target.

11. Direct search for dark matter with DarkSide

DOE PAGESBeta

Agnes, P.

2015-11-16

Here, the DarkSide experiment is designed for the direct detection of Dark Matter with a double phase liquid argon TPC operating underground at Laboratori Nazionali del Gran Sasso. The TPC is placed inside a 30 ton liquid organic scintillator sphere, acting as a neutron veto, which is in turn installed inside a 1 kt water Cherenkov detector. The current detector has been running since November 2013 with a 50 kg atmospheric argon fill, and we report here the first null results of a Dark Matter search for a (1422 ± 67) kg·d exposure. This result corresponds to a 90% CL upper limit on the WIMP-nucleon cross section of 6.1 × 10⁻⁴⁴ cm² (for a WIMP mass of 100 GeV/c²) and is currently the most sensitive limit obtained with an argon target.

12. Direct WIMP searches with XENON100 and XENON1T

Alfredo Davide, Ferella

2015-05-01

The XENON100 experiment is the second phase of the XENON direct Dark Matter search program. It consists of an ultra-low background double phase (liquid-gas) xenon-filled time projection chamber with a total mass of 161 kg (62 kg in the target region and 99 kg in the active shield), installed at the Laboratori Nazionali del Gran Sasso (LNGS). Here the results from 224.6 live days of data taken between March 2011 and April 2012 are reported. The experiment set one of the most stringent limits to date on the WIMP-nucleon spin-independent cross section (2 × 10⁻⁴⁵ cm² for a 55 GeV/c² WIMP mass at 90% confidence level) and the most stringent on the spin-dependent WIMP-neutron interaction (3.5 × 10⁻⁴⁰ for a 45 GeV/c² WIMP mass). With the same dataset, XENON100 also excludes solar axion couplings to electrons of g_Ae > 7.7 × 10⁻¹² for masses m_axion < 1 keV/c², and galactic axion couplings of g_Ae > 1 × 10⁻¹² in the mass range m_axion = 5-10 keV/c² (both at 90% C.L.). Moreover, an absolute spectral comparison between simulated and measured nuclear recoil distributions of light and charge signals from a ²⁴¹AmBe source demonstrates a high level of understanding of the detector and systematics. Finally, the third generation of the XENON experiments, XENON1T, is the first tonne-scale direct WIMP search experiment, currently under construction. The commissioning phase of XENON1T is expected to start in early 2015, followed a few months later by the first science run. The experiment will reach sensitivities on the WIMP-nucleon spin-independent cross section down to 2 × 10⁻⁴⁷ cm² after two years of data taking.

13. Direct methods for banded linear systems on massively parallel processor computers

SciTech Connect

Arbenz, P.; Gander, W.

1995-12-01

The authors discuss direct methods for solving systems of linear equations Ax = b, A ∈ ℝⁿˣⁿ, on massively parallel processor (MPP) computers. Here, A is a real banded n × n matrix with lower and upper half-bandwidths r and s, respectively. We assume that the matrix A has a narrow band, meaning r + s ≪ n. Only in this case is it worthwhile to take the zero structure of A into account, i.e., to store the matrix by diagonals and modify the algorithms accordingly.
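As a concrete illustration of storing a banded matrix by its diagonals, here is a minimal sketch of the serial tridiagonal case (r = s = 1, the Thomas algorithm). This is the sequential building block that parallel banded solvers partition across processors, not the authors' MPP algorithm itself.

```python
def solve_tridiagonal(lower, diag, upper, b):
    """Thomas algorithm: solve Ax = b with A stored by its three diagonals."""
    n = len(diag)
    c, d = list(upper), list(b)            # work on copies
    c[0] = c[0] / diag[0]
    d[0] = d[0] / diag[0]
    for i in range(1, n):                  # forward elimination
        m = diag[i] - lower[i - 1] * c[i - 1]
        if i < n - 1:
            c[i] = c[i] / m
        d[i] = (d[i] - lower[i - 1] * d[i - 1]) / m
    for i in range(n - 2, -1, -1):         # back substitution
        d[i] -= c[i] * d[i + 1]
    return d

# A = tridiag(-1, 4, -1), b chosen so the exact solution is all ones.
x = solve_tridiagonal([-1.0] * 3, [4.0] * 4, [-1.0] * 3, [3.0, 2.0, 2.0, 3.0])
```

The three diagonals occupy O(n) storage instead of O(n²), which is exactly the payoff of exploiting the zero structure when r + s ≪ n.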

14. Design and fabrication of diffractive microlens arrays with continuous relief for parallel laser direct writing.

PubMed

Tan, Jiubin; Shan, Mingguang; Zhao, Chenguang; Liu, Jian

2008-04-01

Diffractive microlens arrays with continuous relief are designed, fabricated, and characterized by using Fermat's principle to create an array of spots on the photoresist-coated surface of a substrate for parallel laser direct writing. Experimental results indicate that a diffraction efficiency of 71.4% and a spot size of 1.97 μm (FWHM) can be achieved at normal incidence and a writing laser wavelength of 441.6 nm with an F/4 array fabricated on fused silica, and the developed array can be used to improve the utilization ratio of writing laser energy. PMID:18382568

15. Information processing in parallel through directionally resolved molecular polarization components in coherent multidimensional spectroscopy

Yan, Tian-Min; Fresch, Barbara; Levine, R. D.; Remacle, F.

2015-08-01

We propose that information processing can be implemented by measuring the directional components of the macroscopic polarization of an ensemble of molecules subject to a sequence of laser pulses. We describe the logic operation theoretically and demonstrate it by simulations. The measurement of integrated stimulated emission in different phase matching spatial directions provides a logic decomposition of a function that is the discrete analog of an integral transform. The logic operation is reversible and all the possible outputs are computed in parallel for all sets of possible multivalued inputs. The number of logic variables of the function is the number of laser pulses used in sequence. The logic function that is computed depends on the chosen chromophoric molecular complex and on its interactions with the solvent and on the two time intervals between the three pulses and the pulse strengths and polarizations. The outputs are the homodyne-detected values of the polarization components that are measured in the allowed phase matching macroscopic directions k_l = ∑_i l_i k_i, where k_i is the propagation direction of the ith pulse and {l_i} is a set of integers that encodes the multivalued inputs. Parallelism is inherently implemented because all the partial polarizations that define the outputs are processed simultaneously. The outputs, which are read directly on the macroscopic level, can be multivalued because the high dynamical range of partial polarization measurements by nonlinear coherent spectroscopy allows for fine binning of the signals. The outputs are uniquely related to the inputs so that the logic is reversible.
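The output-direction formula can be sketched numerically. The pulse directions below are hypothetical placeholders; the point is only that a single pulse sequence yields one output direction per integer input tuple, all in parallel.

```python
import itertools
import numpy as np

# Hypothetical unit propagation directions k_i of the three pulses.
k = np.array([[0.1, 0.0, 1.0],
              [0.0, 0.1, 1.0],
              [0.1, 0.1, 1.0]])
k /= np.linalg.norm(k, axis=1, keepdims=True)

# One output direction k_l = sum_i l_i k_i per integer input tuple {l_i};
# all 27 combinations for l_i in {-1, 0, 1} are realized simultaneously.
outputs = {l: np.asarray(l) @ k
           for l in itertools.product((-1, 0, 1), repeat=3)}
```

Each key of `outputs` is one multivalued input word and its value is the macroscopic phase-matching direction in which the corresponding result is read out.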

16. Interchromosomal Homology Searches Drive Directional ALT Telomere Movement and Synapsis

PubMed Central

Cho, Nam Woo; Dilley, Robert L.; Lampson, Michael A.; Greenberg, Roger A.

2014-01-01

Telomere length maintenance is a requisite feature of cellular immortalization and a hallmark of human cancer. While most human cancers express telomerase activity, approximately 10-15% employ a recombination-dependent telomere maintenance pathway known as Alternative Lengthening of Telomeres (ALT) that is characterized by multi-telomere clusters and associated promyelocytic leukemia protein bodies. Here, we show that a DNA double-strand break (DSB) response at ALT telomeres triggers long-range movement and clustering between chromosome termini, resulting in homology-directed telomere synthesis. Damaged telomeres initiate increased random surveillance of nuclear space before displaying rapid directional movement and association with recipient telomeres over micron-range distances. This phenomenon required Rad51 and the Hop2-Mnd1 heterodimer, which are essential for homologous chromosome synapsis during meiosis. These findings implicate a specialized homology searching mechanism in ALT-dependent telomere maintenance and provide a molecular basis underlying the preference for recombination between non-sister telomeres during ALT. PMID:25259924

17. Accelerating patch-based directional wavelets with multicore parallel computing in compressed sensing MRI.

PubMed

Li, Qiyue; Qu, Xiaobo; Liu, Yunsong; Guo, Di; Lai, Zongying; Ye, Jing; Chen, Zhong

2015-06-01

Compressed sensing MRI (CS-MRI) is a promising technology to accelerate magnetic resonance imaging. Both improving the image quality and reducing the computation time are important for this technology. Recently, a patch-based directional wavelet (PBDW) has been applied in CS-MRI to improve edge reconstruction. However, this method is time consuming since it involves extensive computations, including geometric direction estimation and numerous iterations of wavelet transform. To accelerate computations of PBDW, we propose a general parallelization of patch-based processing by taking the advantage of multicore processors. Additionally, two pertinent optimizations, excluding smooth patches and pre-arranged insertion sort, that make use of sparsity in MR images are also proposed. Simulation results demonstrate that the acceleration factor with the parallel architecture of PBDW approaches the number of central processing unit cores, and that pertinent optimizations are also effective to make further accelerations. The proposed approaches allow compressed sensing MRI reconstruction to be accomplished within several seconds. PMID:25620521
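A schematic of the patch-level decomposition and the "exclude smooth patches" optimization described above. The names, threshold, and per-patch operation are illustrative stand-ins, not the PBDW code.

```python
import numpy as np

def transform_patch(patch, smooth_tol=1e-3):
    """Stand-in for the per-patch directional-wavelet step (illustrative)."""
    if patch.std() < smooth_tol:        # optimization 1: skip smooth patches
        return patch                    # nothing to reconstruct here
    return patch - patch.mean()         # placeholder per-patch computation

def process_image(img, size=8):
    patches = [img[i:i + size, j:j + size]
               for i in range(0, img.shape[0], size)
               for j in range(0, img.shape[1], size)]
    # Optimization 2: patches are independent, so this loop is embarrassingly
    # parallel; replacing it with multiprocessing.Pool.map distributes the
    # work across CPU cores, which is why the speedup approaches the core count.
    return [transform_patch(p) for p in patches]
```

Because each patch carries no dependency on its neighbours, the acceleration factor is bounded mainly by the number of cores and the fraction of non-smooth patches.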

18. Effects of rotation on turbulent convection: Direct numerical simulation using parallel processors

Chan, Daniel Chiu-Leung

A new parallel implicit adaptive mesh refinement (AMR) algorithm is developed for the prediction of unsteady behaviour of laminar flames. The scheme is applied to the solution of the system of partial-differential equations governing time-dependent, two- and three-dimensional, compressible laminar flows for reactive thermally perfect gaseous mixtures. A high-resolution finite-volume spatial discretization procedure is used to solve the conservation form of these equations on body-fitted multi-block hexahedral meshes. A local preconditioning technique is used to remove numerical stiffness and maintain solution accuracy for low-Mach-number, nearly incompressible flows. A flexible block-based octree data structure has been developed and is used to facilitate automatic solution-directed mesh adaptation according to physics-based refinement criteria. The data structure also enables an efficient and scalable parallel implementation via domain decomposition. The parallel implicit formulation makes use of a dual-time-stepping like approach with an implicit second-order backward discretization of the physical time, in which a Jacobian-free inexact Newton method with a preconditioned generalized minimal residual (GMRES) algorithm is used to solve the system of nonlinear algebraic equations arising from the temporal and spatial discretization procedures. An additive Schwarz global preconditioner is used in conjunction with block incomplete LU type local preconditioners for each sub-domain. The Schwarz preconditioning and block-based data structure readily allow efficient and scalable parallel implementations of the implicit AMR approach on distributed-memory multi-processor architectures. The scheme was applied to solutions of steady and unsteady laminar diffusion and premixed methane-air combustion and was found to accurately predict key flame characteristics. For a premixed flame under terrestrial gravity, the scheme accurately predicted the frequency of the natural

19. Composite dark matter and direct-search experiments

Wallemacq, Quentin

2015-11-01

The results of the direct searches for dark matter are reinterpreted in the framework of composite dark matter, i.e. dark matter particles that form neutral bound states, generically called “dark atoms”. Two different scenarios are presented: milli-interacting dark matter and dark anti-atoms. In both of them, dark matter interacts sufficiently strongly with terrestrial matter to be stopped in it before reaching underground detectors, which are typically located at a depth of 1 km. As they drift towards the center of the Earth because of gravity, these thermal dark atoms are radiatively captured by the atoms of the active medium of underground detectors, which causes the emission of photons that produce the signals through their interactions with the electrons of the medium. This provides a way of reinterpreting the results in terms of electron recoils instead of nuclear recoils. The two models involve milli-charges and are able to reconcile the most contradictory experiments. We determine, for each model, the regions in the parameter space that reproduce the experiments with positive results in consistency with the constraints of the experiments with negative results.

20. Direct Imaging Searches with the Apodizing Phase Plate Coronagraph

Kenworthy, M.; Meshkat, T.; Otten, , G.; Codona, J.

2014-03-01

The sensitivity of direct imaging searches for extrasolar planets is limited by the presence of diffraction rings from the primary star. Coronagraphs are angular filters that minimise these diffraction structures whilst allowing light from faint companions to shine through. The Apodizing Phase Plate (APP; Kenworthy 2007) coronagraph is a simple pupil plane optic that suppresses diffraction over a 180 degree region around each star simultaneously, providing easy beam switching observations and requiring no time consuming optical alignment at the telescope. We will present our results on using the APP at the Very Large Telescope in surveys for extrasolar planets around A/F and debris disk hosting stars in the L' band (3.8 microns) in the Southern Hemisphere, where we reach a contrast of 12 magnitudes at 0.5 arcseconds (Meshkat 2013). In Leiden, we are also developing the next generation of broadband achromatic coronagraphs that can simultaneously image both sides of the star using Vector APPs (Snik 2012, Otten 2012). Recent laboratory results showing the potential of this technology for future ELTs will also be presented.

1. Direct Searches for Scalar Leptoquarks at the Run II Tevatron

SciTech Connect

Ryan, Daniel E

2004-11-01

This dissertation sets new limits on the mass of the scalar leptoquark from direct searches carried out at the Run II CDF detector using data from March 2001 to October 2003. The data analyzed has a total time-integrated measured luminosity of 198 pb⁻¹ of pp̄ collisions with √s = 1.96 TeV. Leptoquarks are assumed to be pair-produced and to decay into a lepton and a quark of the same generation. They consider two possible leptoquark decays: (1) β = BR(LQ → μq) = 1.0, and (2) β = BR(LQ → μq) = 0.5. For the β = 1 channel, they focus on the signature represented by two isolated high-p_T muons and two isolated high-p_T jets. For the β = 1/2 channel, they focus on the signature represented by one isolated high-p_T muon, large missing transverse energy, and two isolated high-p_T jets. No leptoquark signal is experimentally detected for either signature. Using the next-to-leading-order theoretical cross section for scalar leptoquark production in pp̄ collisions [1], they set new mass limits on second generation scalar leptoquarks. They exclude the existence of second generation scalar leptoquarks with masses below 221 (175) GeV/c² for the β = 1 (1/2) channels.

2. A DIRECT METHOD TO DETERMINE THE PARALLEL MEAN FREE PATH OF SOLAR ENERGETIC PARTICLES WITH ADIABATIC FOCUSING

SciTech Connect

He, H.-Q.; Wan, W.

2012-03-01

The parallel mean free path of solar energetic particles (SEPs), which is determined by physical properties of SEPs as well as those of solar wind, is a very important parameter in space physics to study the transport of charged energetic particles in the heliosphere, especially for space weather forecasting. In space weather practice, it is necessary to find a quick approach to obtain the parallel mean free path of SEPs for a solar event. In addition, the adiabatic focusing effect caused by a spatially varying mean magnetic field in the solar system is important to the transport processes of SEPs. Recently, Shalchi presented an analytical description of the parallel diffusion coefficient with adiabatic focusing. Based on Shalchi's results, in this paper we provide a direct analytical formula as a function of parameters concerning the physical properties of SEPs and solar wind to directly and quickly determine the parallel mean free path of SEPs with adiabatic focusing. Since all of the quantities in the analytical formula can be directly observed by spacecraft, this direct method would be a very useful tool in space weather research. As applications of the direct method, we investigate the inherent relations between the parallel mean free path and various parameters concerning physical properties of SEPs and solar wind. Comparisons of parallel mean free paths with and without adiabatic focusing are also presented.

3. Short-term gas dispersion in idealised urban canopy in street parallel with flow direction

Chaloupecká, Hana; Jaňour, Zbyněk; Nosek, Štěpán

2016-03-01

Chemical attacks (e.g. Syria 2014-15 chlorine, 2013 sarin; Iraq 2006-7 chlorine) as well as chemical plant disasters (e.g. Spain 2015 nitric oxide, ferric chloride; Texas 2014 methyl mercaptan) threaten mankind. In these crisis situations, gas clouds are released. Dispersion of gas clouds is the issue of interest investigated in this paper. The paper describes wind tunnel experiments of dispersion from a ground-level point gas source. The source is situated in a model of an idealised urban canopy. The short duration releases of the passive contaminant ethane are created by an electromagnetic valve. The gas cloud concentrations are measured at individual places at the height of the human breathing zone within a street parallel with the flow direction by a fast-response ionisation detector. The simulations of the gas release for each measurement position are repeated many times under the same experimental set-up to obtain representative datasets. These datasets are analysed to compute puff characteristics (arrival time, leaving time and duration). The results indicate that the mean value of the dimensionless arrival time can be described as a growing linear function of the dimensionless coordinate in the street parallel with the flow direction where the gas source is situated. The same might be stated about the dimensionless leaving time as well as the dimensionless duration, although these fits are worse. Utilising a linear function, we might also estimate statistical characteristics of the datasets other than their means (medians, trimeans). The datasets of the dimensionless arrival time, the dimensionless leaving time and the dimensionless duration can be fitted by the generalized extreme value distribution (GEV) in all sampling positions except one.
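The reported linear dependence of the mean dimensionless arrival time on the along-street coordinate can be recovered from the ensemble datasets with an ordinary least-squares fit. A minimal sketch on synthetic placeholder values (not the wind-tunnel data):

```python
def fit_line(x, y):
    """Ordinary least-squares slope and intercept for y ~ a*x + b."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    slope = sxy / sxx
    return slope, my - slope * mx

# Synthetic placeholder data: mean arrival times at four along-street positions.
slope, intercept = fit_line([0.0, 1.0, 2.0, 3.0], [1.0, 3.0, 5.0, 7.0])
```

The same fit applied to robust location statistics (medians, trimeans) of each position's dataset gives the alternative estimates mentioned in the abstract.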

4. Job Search as Goal-Directed Behavior: Objectives and Methods

ERIC Educational Resources Information Center

Van Hoye, Greet; Saks, Alan M.

2008-01-01

This study investigated the relationship between job search objectives (finding a new job/turnover, staying aware of job alternatives, developing a professional network, and obtaining leverage against an employer) and job search methods (looking at job ads, visiting job sites, networking, contacting employment agencies, contacting employers, and…

5. Experimental Studies of the Interaction Between a Parallel Shear Flow and a Directionally-Solidifying Front

NASA Technical Reports Server (NTRS)

Zhang, Meng; Maxworthy, Tony

1999-01-01

It has long been recognized that flow in the melt can have a profound influence on the dynamics of a solidifying interface and hence the quality of the solid material. In particular, flow affects the heat and mass transfer, and causes spatial and temporal variations in the flow and melt composition. This results in a crystal with nonuniform physical properties. Flow can be generated by buoyancy, expansion or contraction upon phase change, and thermo-soluto capillary effects. In general, these flows can not be avoided and can have an adverse effect on the stability of the crystal structures. This motivates crystal growth experiments in a microgravity environment, where buoyancy-driven convection is significantly suppressed. However, transient accelerations (g-jitter) caused by the acceleration of the spacecraft can affect the melt, while convection generated from the effects other than buoyancy remain important. Rather than bemoan the presence of convection as a source of interfacial instability, Hurle in the 1960s suggested that flow in the melt, either forced or natural convection, might be used to stabilize the interface. Delves considered the imposition of both a parabolic velocity profile and a Blasius boundary layer flow over the interface. He concluded that fast stirring could stabilize the interface to perturbations whose wave vector is in the direction of the fluid velocity. Forth and Wheeler considered the effect of the asymptotic suction boundary layer profile. They showed that the effect of the shear flow was to generate travelling waves parallel to the flow with a speed proportional to the Reynolds number. There have been few quantitative, experimental works reporting on the coupling effect of fluid flow and morphological instabilities. Huang studied plane Couette flow over cells and dendrites. It was found that this flow could greatly enhance the planar stability and even induce the cell-planar transition. A rotating impeller was buried inside the

6. Direct Dark Matter Search with the XENON100 Experiment

Mei, Yuan

Dark matter, a non-luminous, non-baryonic matter, is thought to constitute 23% of the matter-energy content of the universe today. Except for its gravitational effects, the existence of dark matter has never been confirmed by any other means and its nature remains unknown. If a hypothetical Weakly Interacting Massive Particle (WIMP) were in thermal equilibrium in the early universe, it could have a relic abundance close to that of dark matter today, which provides a promising particle candidate for dark matter. Minimal Super-Symmetric extensions to the Standard Model predict a stable particle with mass in the range 10 GeV/c² to 1000 GeV/c², and spin-independent cross section with ordinary matter nucleons σ_χ < 1 × 10⁻⁴³ cm². The XENON100 experiment deploys a Dual Phase Liquid Xenon Time Projection Chamber (LXeTPC) with 62 kg of liquid xenon as its sensitive volume, to detect scintillation (S1) and ionization (S2) signals from WIMP dark matter particles directly scattering off xenon nuclei. The detector is located underground at Laboratori Nazionali del Gran Sasso (LNGS) in central Italy. 1.4 km of rock (3.7 km water equivalent) reduces the cosmic muon background by a factor of 10⁶. The event-by-event 3D positioning capability of the TPC allows volume fiducialization. With the self-shielding power of liquid xenon, as well as a 99 kg liquid xenon active veto, the electromagnetic radiation background is greatly suppressed. By utilizing the difference in S2/S1 between electronic recoils and nuclear recoils, the expected WIMP signature, a small nuclear recoil energy deposition, can be discriminated from electronic recoil background with high efficiency. XENON100 achieved the lowest background rate (< 2.2 × 10⁻² events/kg/day/keV) in the dark matter search region (< 40 keV) among all direct dark matter detectors. With 11.2 days of data, XENON100 already sets the world's best spin-independent WIMP-nucleon cross-section limit of 2.7 × 10⁻⁴⁴ cm² at a WIMP mass of 50 GeV/c².

7. The Direct Imaging Search of Exoplanets from Ground and Space

Dou, Jiangpei; Ren, Deqing; Zhu, Yongtian

2015-08-01

The search for exoplanets is one of the hottest topics in both modern astronomy and the public domain. Until now over 1990 exoplanets have been confirmed, mostly by the indirect radial velocity and transit approaches, yielding important physical parameters such as mass and radius. The study of the physics of planet formation and evolution will focus on giant planets through direct imaging. However, the direct imaging of exoplanets remains challenging, due to the large flux ratio difference and the small angular separation. In recent years, extreme adaptive optics (Ex-AO) coronagraphic instrumentation, optimized for high-contrast imaging observation of giant exoplanets and other faint stellar companions from the ground, has been proposed and developed on 8-meter-class telescopes. The Gemini Planet Imager (GPI) has recently seen first light, after a development period of over 10 years, and has pushed the contrast level to 10⁻⁶. Due to space limitations and other reasons, no professional adaptive optics system is available on most current 3-4-meter-class telescopes, which limits their observing power to some extent, especially for high-contrast imaging of exoplanets. In this presentation, we report the latest observational results obtained using our extreme adaptive optics (Ex-AO) system as a visiting instrument for high-contrast imaging on ESO's 3.58-meter NTT telescope at LSO and on the 3.5-meter ARC telescope at Apache Point Observatory. These demonstrate that the Ex-AO system can be used for scientific research on exoplanets and brown dwarfs. With an update of the current configuration with critical hardware, the dedicated instrument, called EDICT, for imaging research of young giant exoplanets will be presented. Meanwhile, we have fully demonstrated in the lab a contrast on the order of 10⁻⁹ over a large detection area, which is a critical technique for future Earth-like exoplanet imaging space missions. And a space

8. Comparison between different direct search optimization algorithms in the calibration of a distributed hydrological model

Campo, Lorenzo; Castelli, Fabio; Caparrini, Francesca

2010-05-01

Modern distributed hydrological models allow the representation of different surface and subsurface phenomena with great accuracy and high spatial and temporal resolution. Such complexity requires, in general, an equally accurate parametrization. A number of approaches have been followed in this respect, from simple local search methods (like the Nelder-Mead algorithm), which minimize a cost function representing some distance between the model's output and the available measurements, to more complex approaches such as dynamic filters (e.g., the Ensemble Kalman Filter) that assimilate the observations. In this work the first approach was followed in order to compare the performance of three different direct search algorithms on the calibration of a distributed hydrological balance model. The direct search family can be defined as the category of algorithms that make no use of derivatives of the cost function (which is, in general, a black box) and comprises a large number of possible approaches. The main benefit of this class of methods is that they don't require changes in the implementation of the numerical codes to be calibrated. The first algorithm is the classical Nelder-Mead, often used in many applications and taken here as the reference. The second is a GSS (Generating Set Search) algorithm, built to guarantee global convergence conditions and suitable for the parallel, multi-start implementation presented here. The third is the EGO algorithm (Efficient Global Optimization), which is particularly suitable for calibrating black-box cost functions that require expensive computational resources (like a hydrological simulation). EGO minimizes the number of evaluations of the cost function by balancing the need to minimize a response surface that approximates the problem against the need to improve the approximation by sampling where the prediction error may be high. The hydrological model to be calibrated was MOBIDIC, a complete balance
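
As a concrete illustration of the derivative-free family discussed above, a minimal generating-set (compass) search can be sketched in a few lines. This is a generic textbook sketch under simplifying assumptions (fixed coordinate poll directions, step halving), not the GSS implementation used in the paper:

```python
def compass_search(f, x0, step=1.0, tol=1e-6, max_iter=10000):
    """Minimal generating-set (compass) search: poll f along the +/-
    coordinate directions, accept the first improving point, and halve
    the step when no poll point improves. No derivatives are needed."""
    x = list(x0)
    fx = f(x)
    for _ in range(max_iter):
        if step <= tol:
            break
        improved = False
        for i in range(len(x)):
            for s in (step, -step):
                trial = list(x)
                trial[i] += s
                ft = f(trial)
                if ft < fx:
                    x, fx = trial, ft
                    improved = True
                    break
            if improved:
                break
        if not improved:
            step *= 0.5  # refine the mesh and poll again
    return x, fx
```

Because the 2n poll evaluations in each sweep are independent, they can be evaluated concurrently, which is the structural property that makes GSS methods attractive for parallel and multi-start implementations.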

9. Fast String Search on Multicore Processors: Mapping fundamental algorithms onto parallel hardware

SciTech Connect

Scarpazza, Daniele P.; Villa, Oreste; Petrini, Fabrizio

2008-04-01

String searching is one of the most fundamental algorithms in computing. It has a host of applications, including search engines, network intrusion detection, virus scanners, spam filters, and DNA analysis, among others. The Cell processor, with its multiple cores, promises substantial speed-ups for string searching. In this article, we show how we mapped string searching efficiently onto the Cell. We present two implementations: • The fast implementation supports a small dictionary size (approximately 100 patterns) and provides a throughput of 40 Gbps, which is 100 times faster than reference implementations on x86 architectures. • The heavy-duty implementation is slower (3.3-4.3 Gbps), but supports dictionaries with tens of thousands of strings.
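
For reference, dictionary (multi-pattern) string search of the kind described here is classically done with an Aho-Corasick automaton, which scans the text once regardless of dictionary size. The sketch below is a generic single-threaded textbook version, not the Cell-specific SIMD implementation from the article:

```python
from collections import deque

def build_automaton(patterns):
    """Build an Aho-Corasick automaton: a trie (goto), failure links
    (fail) and per-state sets of matched patterns (out)."""
    goto, fail, out = [{}], [0], [set()]
    for pat in patterns:
        s = 0
        for ch in pat:
            if ch not in goto[s]:
                goto.append({}); fail.append(0); out.append(set())
                goto[s][ch] = len(goto) - 1
            s = goto[s][ch]
        out[s].add(pat)
    q = deque(goto[0].values())  # depth-1 states fail to the root
    while q:
        s = q.popleft()
        for ch, t in goto[s].items():
            q.append(t)
            f = fail[s]
            while f and ch not in goto[f]:
                f = fail[f]
            fail[t] = goto[f].get(ch, 0)
            out[t] |= out[fail[t]]  # inherit matches ending at this state
    return goto, fail, out

def search(text, goto, fail, out):
    """Scan text once, reporting (start_index, pattern) for every match."""
    hits, s = [], 0
    for i, ch in enumerate(text):
        while s and ch not in goto[s]:
            s = fail[s]
        s = goto[s].get(ch, 0)
        for pat in out[s]:
            hits.append((i - len(pat) + 1, pat))
    return hits
```

The state-machine transitions here are exactly the per-byte work that a multicore or SIMD implementation distributes across cores or vector lanes.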

10. SIMPLE-icity in Direct Dark Matter Searches

SciTech Connect

Giuliani, F.; Morlat, T.; Ramos, A. R.; Girard, T. A.; Felizardo da Costa, M.; Marques, J. G.; Martins, R. C.; Miley, Harry S.; Limagne, D.; Waysand, G.

2007-11-01

SIMPLE is the European WIMP search based on Superheated Droplet Detectors (SDDs). An SDD consists of an emulsion of metastable liquid droplets in an organic gel, each of which operates on the same principle as the bubble chamber.

11. Direct numerical simulation of instabilities in parallel flow with spherical roughness elements

NASA Technical Reports Server (NTRS)

Deanna, R. G.

1992-01-01

Results from a direct numerical simulation of laminar flow over a flat surface with spherical roughness elements using a spectral-element method are given. The numerical simulation approximates roughness as a cellular pattern of identical spheres protruding from a smooth wall. Periodic boundary conditions on the domain's horizontal faces simulate an infinite array of roughness elements extending in the streamwise and spanwise directions, which implies the parallel-flow assumption, and results in a closed domain. A body force, designed to yield the horizontal Blasius velocity in the absence of roughness, sustains the flow. Instabilities above a critical Reynolds number reveal negligible oscillations in the recirculation regions behind each sphere and in the free stream, high-amplitude oscillations in the layer directly above the spheres, and a mean profile with an inflection point near the sphere's crest. The inflection point yields an unstable layer above the roughness (where U''(y) is less than 0) and a stable region within the roughness (where U''(y) is greater than 0). Evidently, the instability begins when the low-momentum or wake region behind an element, being the region most affected by disturbances (purely numerical in this case), goes unstable and moves. In compressible flow with periodic boundaries, this motion sends disturbances to all regions of the domain. In the unstable layer just above the inflection point, the disturbances grow while being carried downstream with a propagation speed equal to the local mean velocity; they do not grow amid the low energy region near the roughness patch. The most amplified disturbance eventually arrives at the next roughness element downstream, perturbing its wake and inducing a global response at a frequency governed by the streamwise spacing between spheres and the mean velocity of the most amplified layer.

12. Alleviating Search Uncertainty through Concept Associations: Automatic Indexing, Co-Occurrence Analysis, and Parallel Computing.

ERIC Educational Resources Information Center

Chen, Hsinchun; Martinez, Joanne; Kirchhoff, Amy; Ng, Tobun D.; Schatz, Bruce R.

1998-01-01

Grounded on object filtering, automatic indexing, and co-occurrence analysis, an experiment was performed using a parallel supercomputer to analyze over 400,000 abstracts in an INSPEC computer engineering collection. A user evaluation revealed that system-generated thesauri were better than the human-generated INSPEC subject thesaurus in concept…

13. Targeted parallel sequencing of the Musa species: searching for an alternative model system for polyploidy studies

Technology Transfer Automated Retrieval System (TEKTRAN)

Modern day genomics holds the promise of solving the complexities of basic plant sciences, and of catalyzing practical advances in plant breeding. While contiguous, "base perfect" deep sequencing is a key module of any genome project, recent advances in parallel next generation sequencing technologi...

14. Direct tabu search algorithm for the fiber Bragg grating distributed strain sensing

Karim, F.; Seddiki, O.

2010-09-01

A direct tabu search (DTS) algorithm for determining the strain profile along a fiber Bragg grating (FBG) from its reflection spectrum is demonstrated. By combining the transfer matrix method (TMM) for calculating the reflection spectrum of an FBG with the DTS method, we obtain a new method for distributed sensing. Direct-search-based strategies are used to guide the tabu search; these strategies are based on a new pattern search procedure called adaptive pattern search (APS). In addition, the well-known Nelder-Mead (NME) algorithm is used as a local search method in the final stage of the optimization process. The numerical simulations show good agreement between the original and the reconstructed strain profiles.
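
The core tabu-search idea, keeping recently visited points forbidden so the search can move through and escape local minima, can be sketched generically. This toy discrete version is illustrative only; it omits the APS pattern search and Nelder-Mead refinement stages of the DTS algorithm described above, and the neighborhood function is an assumption:

```python
from collections import deque

def tabu_search(f, x0, neighbors, tenure=5, iters=100):
    """Generic tabu search: always move to the best non-tabu neighbor,
    even uphill, keeping the last `tenure` visited points tabu so the
    search can climb out of local minima; track the best point seen."""
    x = x0
    best, fbest = x0, f(x0)
    tabu = deque([x0], maxlen=tenure)
    for _ in range(iters):
        candidates = [c for c in neighbors(x) if c not in tabu]
        if not candidates:
            break  # neighborhood exhausted
        x = min(candidates, key=f)
        tabu.append(x)
        if f(x) < fbest:
            best, fbest = x, f(x)
    return best, fbest
```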

15. A direct search for neutralino production at LEP

Akrawy, M. Z.; Alexander, G.; Allison, J.; Allport, P. P.; Anderson, K. J.; Armitage, J. C.; Arnison, G. T. J.; Ashton, P.; Azuelos, G.; Baines, J. T. M.; Ball, A. H.; Banks, J.; Barker, G. J.; Barlow, R. J.; Batley, J. R.; Becker, J.; Behnke, T.; Bell, K. W.; Bella, G.; Bethke, S.; Biebel, O.; Binder, U.; Bloodworth, I. J.; Bock, P.; Breuker, H.; Brown, R. M.; Brun, R.; Buijs, A.; Burckhart, H. J.; Capiluppi, P.; Carnegie, R. K.; Carter, A. A.; Carter, J. R.; Chang, C. Y.; Charlton, D. G.; Chrin, J. T. M.; Clarke, P. E. L.; Cohen, I.; Collins, W. J.; Conboy, J. E.; Couch, M.; Coupland, M.; Cuffiani, M.; Dado, S.; Dallavalle, G. M.; Debu, P.; Deninno, M. M.; Dieckmann, A.; Dittmar, M.; Dixit, M. S.; Duchovni, E.; Duerdoth, I. P.; Dumas, D. J. P.; El Mamouni, H.; Elcombe, P. A.; Estabrooks, P. G.; Etzion, E.; Fabbri, F.; Farthouat, P.; Fischer, H. M.; Fong, D. G.; French, M. T.; Fukunaga, C.; Gaidot, A.; Ganel, O.; Gary, J. W.; Gascon, J.; Geddes, N. I.; Gee, C. N. P.; Geich-Gimbel, C.; Gensler, S. W.; Gentit, F. X.; Giacomelli, G.; Gibson, V.; Gibson, W. R.; Gillies, J. D.; Goldberg, J.; Goodrick, M. J.; Gorn, W.; Granite, D.; Gross, E.; Grunhaus, J.; Hagedorn, H.; Hagemann, J.; Hansroul, M.; Hargrove, C. K.; Harrus, I.; Hart, J.; Hattersley, P. M.; Hauschild, M.; Hawkes, C. M.; Heflin, E.; Hemingway, R. J.; Heuer, R. D.; Hill, J. C.; Hillier, S. J.; Ho, C.; Hobbs, J. D.; Hobson, P. R.; Hochman, D.; Holl, B.; Homer, R. J.; Hou, S. R.; Howarth, C. P.; Humbert, R.; Hughes-Jones, R. E.; Igo-Kemenes, P.; Ihssen, H.; Imrie, D. C.; Jawahery, A.; Jeffreys, P. W.; Jeremie, H.; Jimack, M.; Jobes, M.; Jones, R. W. L.; Jovanovic, P.; Karlen, D.; Kawagoe, K.; Kawamoto, T.; Kellogg, R. G.; Kennedy, B. W.; Kleinwort, C.; Klem, D. E.; Knop, G.; Kobayashi, T.; Kokott, T. P.; Köpke, L.; Kowalewski, R.; Kreutzmann, H.; Kroll, J.; Kuwano, M.; Kyberd, P.; Lafferty, G. D.; Lamarche, F.; Larson, W. J.; Layter, J. G.; Le Du, P.; Leblanc, P.; Lee, A. M.; Lehto, M. 
H.; Lellouch, D.; Lennert, P.; Lessard, L.; Levinson, L.; Lloyd, S. L.; Loebinger, F. K.; Lorah, J. M.; Lorazo, B.; Losty, M. J.; Ludwig, J.; Ma, J.; Macbeth, A. A.; Mannelli, M.; Marcellini, S.; Maringer, G.; Martin, A. J.; Martin, J. P.; Mashimo, T.; Mättig, P.; Maur, U.; McMahon, T. J.; McNutt, J. R.; McPherson, A. C.; Meijers, F.; Menszner, D.; Merritt, F. S.; Mes, H.; Michelini, A.; Middleton, R. P.; Mikenberg, G.; Miller, D. J.; Milstene, C.; Minowa, M.; Mohr, W.; Montanari, A.; Mori, T.; Moss, M. W.; Murphy, P. G.; Murray, W. J.; Nellen, B.; Nguyen, H. H.; Nozaki, M.; O'Dowd, A. J. P.; O'Neale, S. W.; O'Neill, B. P.; Oakham, F. G.; Odorici, F.; Ogg, M.; Oh, H.; Oreglia, M. J.; Orito, S.; Pansart, J. P.; Patrick, G. N.; Pawley, S. J.; Pfister, P.; Pilcher, J. E.; Pinfold, J. L.; Plane, D. E.; Poli, B.; Pouladdej, A.; Pritchard, T. W.; Quast, G.; Raab, J.; Redmond, M. W.; Rees, D. L.; Regimbald, M.; Riles, K.; Roach, C. M.; Robins, S. A.; Rollnik, A.; Roney, J. M.; Rossberg, S.; Rossi, A. M.; Routenburg, P.; Runge, K.; Runolfsson, O.; Sanghera, S.; Sansum, R. A.; Sasaki, M.; Saunders, B. J.; Schaile, A. D.; Schaile, O.; Schappert, W.; Scharff-Hansen, P.; Schreiber, S.; Schwarz, J.; Shapira, A.; Shen, B. C.; Sherwood, P.; Simon, A.; Singh, P.; Siroli, G. P.; Skuja, A.; Smith, A. M.; Smith, T. J.; Snow, G. A.; Springer, R. W.; Sproston, M.; Stephens, K.; Stier, H. E.; Ströhmer, R.; Strom, D.; Takeda, H.; Takeshita, T.; Tsukamoto, T.; Turner, M. F.; Tysarczyk-Niemeyer, G.; Van den plas, D.; Van Dalen, G. J.; Vasseur, G.; Virtue, C. J.; von der Schmitt, H.; von Krogh, J.; Wagner, A.; Wahl, C.; Ward, C. P.; Ward, D. R.; Waterhouse, J.; Watkins, P. M.; Watson, A. T.; Watson, N. K.; Weber, M.; Weisz, S.; Wells, P. S.; Wermes, N.; Weymann, M.; Wilson, G. W.; Wilson, J. A.; Wingerter, I.; Winterer, V.-H.; Wood, N. C.; Wotton, S.; Wuensch, B.; Wyatt, T. R.; Yaari, R.; Yang, Y.; Yekutieli, G.; Toshida, T.; Zeuner, W.; Zorn, G. T.; OPAL Collaboration

1990-09-01

A search has been performed for the production of neutralinos (χ, χ′) in e+e- annihilation at energies near the Z0 pole. No evidence for these particles was found either in searches for events with two acoplanar jets, low visible energy, and missing pT (sensitive to Z0 → χχ′ → χχ f f̄) or in searches for single-photon events (sensitive to Z0 → χχ′ → χχγ). Model-independent upper limits (at the 95% CL) on the branching ratio for the decay mode Z0 → χχ′ of a few 10^-4 are obtained for most of the range of neutralino masses that is kinematically accessible at LEP energies. Upper limits on the mixing factor of neutralinos are also placed as a function of the neutralino masses.

16. A direct search for new charged heavy leptons at LEP

Akrawy, M. Z.; Alexander, G.; Allison, J.; Allport, P. P.; Anderson, K. J.; Armitage, J. C.; Arnison, G. T. J.; Ashton, P.; Azuelos, G.; Baines, J. T. M.; Ball, A. H.; Banks, J.; Barker, G. J.; Barlow, R. J.; Batley, J. R.; Bavaria, G.; Beck, F.; Bell, K. W.; Bella, G.; Bethke, S.; Biebel, O.; Bloddworth, I. J.; Bock, P.; Breuker, H.; Brown, R. M.; Brun, R.; Buijs, A.; Burckhart, H. J.; Capiluppi, P.; Carnegie, R. K.; Carter, A. A.; Carter, J. R.; Chang, C. Y.; Charlton, D. G.; Chrin, J. T. M.; Cohen, I.; Conboy, J. E.; Couch, M.; Coupland, M.; Cuffiani, M.; Dado, S.; Dallavalle, G. M.; Davies, O. W.; Deninno, M. M.; Dieckmann, A.; Dittmar, M.; Dixit, M. S.; Duchesneau, D.; Duchovni, E.; Duerdoth, I. P.; Dumas, D.; El Mamouni, H.; Elcombe, P. A.; Estabrooks, P. G.; Etzion, E.; Fabbri, F.; Farthouat, P.; Fischer, H. M.; Fong, D. G.; French, M. T.; Fukunaga, C.; Gandois, B.; Ganel, O.; Gary, J. W.; Geddes, N. I.; Gee, C. N. P.; Geich-Gimbel, C.; Gensler, S. W.; Gentit, F. X.; Giacomelli, G.; Gibson, W. R.; Gillies, J. D.; Goldberg, J.; Goodrick, M. J.; Gorn, W.; Granite, D.; Gross, E.; Grosse-Wiesmann, P.; Grunhaus, J.; Hagedorn, H.; Hagemann, J.; Hansroul, M.; Hargrove, C. K.; Hart, J.; Hattersley, P. M.; Hatzifotiadou, D.; Hauschild, M.; Hawkes, C. M.; Heflin, E.; Heintze, J.; Hemingway, R. J.; Heuer, R. D.; Hill, J. C.; Hillier, S. J.; Hinde, P. S.; Ho, C.; Hobbs, J. D.; Hobson, P. R.; Hochman, D.; Holl, B.; Homer, R. J.; Hou, S. R.; Howarth, C. P.; Hughes-Jones, R. E.; Igo-Kemenes, P.; Imori, M.; Imrie, D. C.; Jawahery, A.; Jeffreys, P. W.; Jeremie, H.; Jimack, M.; Jin, E.; Jobes, M.; Jones, R. W. L.; Jovanovic, P.; Karlen, D.; Kawagoe, K.; Kawamoto, T.; Kellogg, R. G.; Kennedy, B. W.; Kleinwort, C.; Klem, D. E.; Knop, G.; Kobayashi, T.; Köpke, L.; Kokott, T. P.; Koshiba, M.; Kowalewski, R.; Kreutzmann, H.; von Krogh, J.; Kroll, J.; Kyberd, P.; Lafferty, G. D.; Lamarche, F.; Larson, W. J.; Lasota, M. M. B.; Layter, J. 
G.; Le Du, P.; Leblanc, P.; Lellouch, D.; Lennert, P.; Lessard, L.; Levinson, L.; Lloyd, S. L.; Loebinger, F. K.; Lorah, J. M.; Lorazo, B.; Losty, M. J.; Ludwig, J.; Ma, J.; MacBeth, A. A.; Mannelli, M.; Marcellini, S.; Maringer, G.; Martin, J. P.; Mashimo, T.; Mättig, P.; Maur, U.; McMahon, T. J.; McPherson, A. C.; Meijers, F.; Menszner, D.; Merritt, F. S.; Mes, H.; Michellini, A.; Middleton, R. P.; Mikenberg, G.; Miller, D. J.; Milstene, C.; Minowa, M.; Mohr, W.; Montanari, A.; Mori, T.; Moss, M. W.; Muller, A.; Murphy, P. G.; Murray, W. J.; Nellen, B.; Nguyen, H. H.; Nozaki, M.; O'Dowd, A. J. P.; O'Neale, S. W.; O'Neill, B.; Oakham, F. G.; Odorici, F.; Ogg, M.; Oh, H.; Oreglia, M. J.; Orito, S.; Patrick, G. N.; Pawley, S. J.; Pilcher, J. E.; Pinfold, J. L.; Plane, D. E.; Poli, B.; Possoz, A.; Pouladdej, A.; Pritchard, T. W.; Quast, G.; Raab, J.; Redmond, M. W.; Rees, D. L.; Regimbald, M.; Riles, K.; Roach, C. M.; Roehner, F.; Rollnik, A.; Roney, J. M.; Rossi, A. M.; Routenburg, P.; Runge, K.; Runolfsson, O.; Sanghera, S.; Sansum, R. A.; Sasaki, M.; Saunders, B. J.; Schaile, A. D.; Schaile, O.; Schappert, W.; Scharff-Hansen, P.; von der Schmitt, H.; Schreiber, S.; Schwarz, J.; Shapira, A.; Shen, B. C.; Sherwood, P.; Simon, A.; Siroli, G. P.; Skuja, A.; Smith, A. M.; Smith, T. J.; Snow, G. A.; Spreadbury, E. J.; Springer, R. W.; Sproston, M.; Stephens, K.; Steuerer, J.; Stier, H. E.; Ströhmer, R.; Strom, D.; Takeda, H.; Takeshita, T.; Tsukamoto, T.; Turner, M. F.; Tysarczyk, G.; van den Plas, D.; Vandalen, G. J.; Virtue, C. J.; Wagner, A.; Wahl, C.; Wang, H.; Ward, C. P.; Ward, D. R.; Waterhouse, J.; Watkins, P. M.; Watson, A. T.; Watson, N. K.; Weber, M.; Weisz, S.; Wermes, N.; Weymann, M.; Wilson, G. W.; Wilson, J. A.; Wingerter, I.; Winterer, V.-H.; Wood, N. C.; Wotton, S.; Wuensch, B.; Wyatt, T. R.; Yaari, R.; Yamashita, H.; Yang, Y.; Yekutieli, G.; Zeuner, W.; Zorn, G. T.; Zylberajch, S.

1990-04-01

Results are presented from a search for a new charged heavy lepton in e+e- annihilation. The data were taken with the OPAL detector at LEP during a scan of the Z0 resonance. Two independent search techniques were used, one looking for events with large missing energy and missing momentum transverse to the beam, and the other for events with isolated energetic leptons. Two candidate events, consistent with expected background, were found in the first search; none was found in the second. These results allow the exclusion at the 95% confidence level of a charged heavy lepton of mass less than 44.3 GeV/c2 if it is assumed to have a massless neutrino partner. Limits are also presented for the case of a massive neutrino.

17. Direct visualization of a DNA glycosylase searching for damage.

PubMed

Chen, Liwei; Haushalter, Karl A; Lieber, Charles M; Verdine, Gregory L

2002-03-01

DNA glycosylases preserve the integrity of genetic information by recognizing damaged bases in the genome and catalyzing their excision. It is unknown how DNA glycosylases locate covalently modified bases hidden in the DNA helix amongst vast numbers of normal bases. Here we employ atomic-force microscopy (AFM) with carbon nanotube probes to image search intermediates of human 8-oxoguanine DNA glycosylase (hOGG1) scanning DNA. We show that hOGG1 interrogates DNA at undamaged sites by inducing drastic kinks. The sharp DNA bending angle of these non-lesion-specific search intermediates closely matches that observed in the specific complex of 8-oxoguanine-containing DNA bound to hOGG1. These findings indicate that hOGG1 actively distorts DNA while searching for damaged bases. PMID:11927259

18. Direct dark matter search with XMASS: modulation analysis

Kobayashi, Kazuyoshi; XMASS Collaboration

2016-05-01

A dark matter search by means of annual modulation was performed using the large single-phase liquid-xenon detector XMASS. With data from November 2013 to March 2015, a model-independent analysis showed a weak modulation effect; however, the result can be explained by a fluctuation of the background at the level of 7-17%. If we assume standard weakly interacting massive particle (WIMP) dark matter, we exclude almost all of the allowed region claimed by the DAMA/LIBRA experiment. This is the first extensive search over their allowed region exploiting annual modulation with high-statistics data.

19. Enhancements, Parallelization and Future Directions of the V3FIT 3-D Equilibrium Reconstruction Code

Cianciosa, M. R.; Hanson, J. D.; Maurer, D. A.; Hartwell, G. J.; Archmiller, M. C.; Ma, X.; Herfindal, J.

2014-10-01

Three-dimensional equilibrium reconstruction is spreading beyond its original application to stellarators. Three-dimensional effects in nominally axisymmetric systems, including quasi-helical states in reversed field pinches and error fields in tokamaks, are becoming increasingly important. V3FIT is a fully three-dimensional equilibrium reconstruction code in widespread use throughout the fusion community. The code has recently undergone extensive revision to prepare for the next generation of equilibrium reconstruction problems. The most notable changes are the abstraction of the equilibrium model, the propagation of experimental errors to the reconstructed results, support for multicolor soft x-ray emissivity cameras, and recent efforts to add parallelization for efficient computation on multi-processor systems. We discuss these new capabilities, compare probability distributions of reconstructed parameters with results from whole-shot reconstructions, show benchmarking and profiling results of initial performance improvements through the addition of OpenMP and MPI support, and discuss future directions of the V3FIT code, including steps taken to support the W7-X stellarator. Work supported by U.S. Department of Energy Grant No. DEFG-0203-ER-54692B.

20. Implementation of unsteady sampling procedures for the parallel direct simulation Monte Carlo method

Cave, H. M.; Tseng, K.-C.; Wu, J.-S.; Jermy, M. C.; Huang, J.-C.; Krumdieck, S. P.

2008-06-01

An unsteady sampling routine for a general parallel direct simulation Monte Carlo method called PDSC is introduced, allowing the simulation of time-dependent flow problems in the near-continuum range. A post-processing procedure called the DSMC rapid ensemble averaging method (DREAM) is developed to improve the statistical scatter in the results while minimising both memory and simulation time. This method builds an ensemble average of repeated runs over a small number of sampling intervals prior to the sampling point of interest by restarting the flow using either a Maxwellian distribution based on macroscopic properties for near-equilibrium flows (DREAM-I) or the instantaneous particle data output by the original unsteady sampling of PDSC for strongly non-equilibrium flows (DREAM-II). The method is validated by simulating shock-tube flow and the development of simple Couette flow. Unsteady PDSC is found to accurately predict the flow field in both cases with significantly reduced run-times over single-processor code, and DREAM greatly reduces the statistical scatter in the results while maintaining accurate particle velocity distributions. Simulations are then conducted of two applications involving the interaction of shocks over wedges, and the results are compared to experimental data and simulations from the literature where these are available. In general, it was found that 10 ensembled runs of DREAM processing could reduce the statistical uncertainty in the raw PDSC data by 2.5-3.3 times, based on the limited number of cases in the present study.
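
The statistical benefit of DREAM-style ensemble averaging can be illustrated with a toy model in which each sampling interval returns the true macroscopic value plus Gaussian scatter; averaging R repeated runs then shrinks the scatter roughly as 1/sqrt(R). The functions below are purely illustrative stand-ins, not the PDSC/DREAM code:

```python
import random

def noisy_sample(true_value, noise, rng):
    """Stand-in for one unsteady sampling interval: the estimate is the
    true macroscopic value plus Gaussian statistical scatter."""
    return true_value + rng.gauss(0.0, noise)

def ensemble_average(true_value, noise, runs, rng):
    """DREAM-style estimate: average `runs` repeated runs of the same
    sampling interval; the scatter shrinks roughly as 1/sqrt(runs)."""
    return sum(noisy_sample(true_value, noise, rng) for _ in range(runs)) / runs
```

With 10 ensembled runs the toy model predicts a scatter reduction of about sqrt(10) ≈ 3.2, consistent with the 2.5-3.3x range reported in the abstract.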

1. PANMIN: sequential and parallel global optimization procedures with a variety of options for the local search strategy

Theos, F. V.; Lagaris, I. E.; Papageorgiou, D. G.

2004-05-01

We present two sequential and one parallel global optimization codes, belonging to the stochastic class, and an interface routine that enables the use of the Merlin/MCL environment as a non-interactive local optimizer. This interface proved extremely important, since it provides flexibility, effectiveness and robustness to the local search task that is in turn employed by the global procedures. We demonstrate the use of the parallel code on a molecular conformation problem.
Program summary
Title of program: PANMIN
Catalogue identifier: ADSU
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADSU
Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland
Computer for which the program is designed and others on which it has been tested: PANMIN is designed for UNIX machines. The parallel code runs on either shared-memory architectures or on a distributed system. The code has been tested on a SUN Microsystems ENTERPRISE 450 with four CPUs, and on a 48-node cluster under Linux, with both the GNU g77 and the Portland Group compilers. The parallel implementation is based on MPI and has been tested with LAM MPI and MPICH
Installation: University of Ioannina, Greece
Programming language used: Fortran-77
Memory required to execute with typical data: Approximately O(n^2) words, where n is the number of variables
No. of bits in a word: 64
No. of processors used: 1 or many
Has the code been vectorised or parallelized?: Parallelized using MPI
No. of bytes in distributed program, including test data, etc.: 147163
No. of lines in distributed program, including the test data, etc.: 14366
Distribution format: gzipped tar file
Nature of physical problem: A multitude of problems in science and engineering are often reduced to minimizing a function of many variables. There are instances where a local optimum does not correspond to the desired physical solution and hence the search for a better solution is required. Local optimization techniques can be

2. Characterising dark matter searches at colliders and direct detection experiments: Vector mediators

SciTech Connect

Buchmueller, Oliver; Dolan, Matthew J.; Malik, Sarah A.; McCabe, Christopher

2015-01-09

We introduce a Minimal Simplified Dark Matter (MSDM) framework to quantitatively characterise dark matter (DM) searches at the LHC. We study two MSDM models where the DM is a Dirac fermion which interacts with a vector and axial-vector mediator. The models are characterised by four parameters: mDM, Mmed , gDM and gq, the DM and mediator masses, and the mediator couplings to DM and quarks respectively. The MSDM models accurately capture the full event kinematics, and the dependence on all masses and couplings can be systematically studied. The interpretation of mono-jet searches in this framework can be used to establish an equal-footing comparison with direct detection experiments. For theories with a vector mediator, LHC mono-jet searches possess better sensitivity than direct detection searches for light DM masses (≲5 GeV). For axial-vector mediators, LHC and direct detection searches generally probe orthogonal directions in the parameter space. We explore the projected limits of these searches from the ultimate reach of the LHC and multi-ton xenon direct detection experiments, and find that the complementarity of the searches remains. In conclusion, we provide a comparison of limits in the MSDM and effective field theory (EFT) frameworks to highlight the deficiencies of the EFT framework, particularly when exploring the complementarity of mono-jet and direct detection searches.

3. Characterising dark matter searches at colliders and direct detection experiments: Vector mediators

DOE PAGESBeta

Buchmueller, Oliver; Dolan, Matthew J.; Malik, Sarah A.; McCabe, Christopher

2015-01-09

We introduce a Minimal Simplified Dark Matter (MSDM) framework to quantitatively characterise dark matter (DM) searches at the LHC. We study two MSDM models where the DM is a Dirac fermion which interacts with a vector and axial-vector mediator. The models are characterised by four parameters: mDM, Mmed, gDM and gq, the DM and mediator masses, and the mediator couplings to DM and quarks respectively. The MSDM models accurately capture the full event kinematics, and the dependence on all masses and couplings can be systematically studied. The interpretation of mono-jet searches in this framework can be used to establish an equal-footing comparison with direct detection experiments. For theories with a vector mediator, LHC mono-jet searches possess better sensitivity than direct detection searches for light DM masses (≲5 GeV). For axial-vector mediators, LHC and direct detection searches generally probe orthogonal directions in the parameter space. We explore the projected limits of these searches from the ultimate reach of the LHC and multi-ton xenon direct detection experiments, and find that the complementarity of the searches remains. In conclusion, we provide a comparison of limits in the MSDM and effective field theory (EFT) frameworks to highlight the deficiencies of the EFT framework, particularly when exploring the complementarity of mono-jet and direct detection searches.

4. Cache-Oblivious parallel SIMD Viterbi decoding for sequence search in HMMER

PubMed Central

2014-01-01

Background HMMER is a commonly used bioinformatics tool based on Hidden Markov Models (HMMs) to analyze and process biological sequences. One of its main homology engines is based on the Viterbi decoding algorithm, which was already highly parallelized and optimized using Farrar's striped processing pattern with the Intel SSE2 instruction set extension. Results A new SIMD vectorization of the Viterbi decoding algorithm is proposed, based on an SSE2 inter-task parallelization approach similar to the DNA alignment algorithm proposed by Rognes. Besides this alternative vectorization scheme, the proposed implementation also introduces a new partitioning of the Markov model that allows a significantly more efficient exploitation of cache locality. This optimization, together with an improved loading of the emission scores, achieves a constant processing throughput, regardless of the innermost-cache size and of the dimension of the considered model. Conclusions The proposed optimized vectorization of the Viterbi decoding algorithm was extensively evaluated and compared with the HMMER3 decoder on DNA and protein datasets, proving to be a competitive alternative implementation. Always faster than the already highly optimized ViterbiFilter implementation of HMMER3, the proposed Cache-Oblivious Parallel SIMD Viterbi (COPS) implementation provides a constant throughput and offers a processing speedup as high as two times, depending on the model's size. PMID:24884826
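
The recurrence being vectorized is standard Viterbi decoding. A plain, unvectorized log-space sketch over a generic toy HMM (not HMMER's profile HMMs, whose states and transitions are more elaborate) looks like this:

```python
import math

def viterbi(obs, states, start_p, trans_p, emit_p):
    """Textbook log-space Viterbi decoding: dynamic programming for the
    most probable hidden-state path given an observation sequence."""
    V = [{s: math.log(start_p[s]) + math.log(emit_p[s][obs[0]])
          for s in states}]
    back = [{}]
    for t in range(1, len(obs)):
        V.append({}); back.append({})
        for s in states:
            prev = max(states, key=lambda p: V[t - 1][p] + math.log(trans_p[p][s]))
            V[t][s] = (V[t - 1][prev] + math.log(trans_p[prev][s])
                       + math.log(emit_p[s][obs[t]]))
            back[t][s] = prev
    last = max(states, key=lambda s: V[-1][s])
    path = [last]
    for t in range(len(obs) - 1, 0, -1):
        path.append(back[t][path[-1]])
    return path[::-1], V[-1][last]
```

The inner max over predecessor states is the hot loop that SIMD schemes such as Farrar's striped pattern, or the inter-task approach proposed here, evaluate for many cells or sequences in parallel.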

5. The search for high level parallelism for the iterative solution of large sparse linear systems

SciTech Connect

Young, D.M.

1988-07-01

In this paper the author is concerned with the numerical solution, based on iterative methods, of large sparse systems of linear algebraic equations of the type which arise in the numerical solution of elliptic and parabolic partial differential equations by finite difference or finite element methods. He considers linear systems of the form Au = b, where A is a given N x N matrix which is large and sparse and where b is a given N x 1 column vector. He assumes that A is symmetric and positive definite (SPD). He considers iterative algorithms which consist of a basic iterative method, such as the Richardson, Jacobi, SSOR or incomplete Cholesky method, combined with an acceleration procedure such as Chebyshev acceleration or conjugate gradient acceleration. The object of this paper is, however, to examine some high-level methods for achieving parallelism. Such techniques involve only matrix/vector operations and do not involve working with blocks of the matrix, subdividing the region, or using different meshes. It is expected that if effective high-level methods could be developed, they could be combined with block and domain decomposition methods, and related methods, to obtain even greater speedups. It is also expected that by working at a higher level it will eventually be possible to develop general-purpose software for parallel machines similar to the ITPACK software packages which have already been developed for sequential and vector machines. The discussion here is primarily devoted to describing various techniques which the author and others have considered for obtaining high-level parallelism. The author plans to continue research on these techniques and eventually to develop algorithms and programs for multiprocessors based on them.
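
The high-level matrix/vector operations referred to above are visible in a plain conjugate gradient iteration: each step is one matrix-vector product plus a few dot products and vector updates, all of which parallelize without touching matrix blocks or the mesh. This textbook sketch uses dense lists for clarity and is illustrative, not the author's code:

```python
def conjugate_gradient(A, b, tol=1e-10, max_iter=1000):
    """Plain conjugate gradient for a symmetric positive definite A.
    Each iteration: one mat-vec, two dot products, three vector updates,
    the kernels a high-level parallel implementation would distribute."""
    n = len(b)
    x = [0.0] * n
    r = list(b)                      # residual r = b - A x, with x = 0
    p = list(r)                      # initial search direction
    rs = sum(ri * ri for ri in r)
    for _ in range(max_iter):
        Ap = [sum(A[i][j] * p[j] for j in range(n)) for i in range(n)]
        alpha = rs / sum(p[i] * Ap[i] for i in range(n))
        x = [x[i] + alpha * p[i] for i in range(n)]
        r = [r[i] - alpha * Ap[i] for i in range(n)]
        rs_new = sum(ri * ri for ri in r)
        if rs_new < tol:
            break
        p = [r[i] + (rs_new / rs) * p[i] for i in range(n)]
        rs = rs_new
    return x
```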

6. High-performance hardware implementation of a parallel database search engine for real-time peptide mass fingerprinting

PubMed Central

Bogdán, István A.; Rivers, Jenny; Beynon, Robert J.; Coca, Daniel

2008-01-01

Motivation: Peptide mass fingerprinting (PMF) is a method for protein identification in which a protein is fragmented by a defined cleavage protocol (usually proteolysis with trypsin), and the masses of these products constitute a ‘fingerprint’ that can be searched against theoretical fingerprints of all known proteins. In the first stage of PMF, the raw mass spectrometric data are processed to generate a peptide mass list. In the second stage this protein fingerprint is used to search a database of known proteins for the best protein match. Although current software solutions can typically deliver a match in a relatively short time, a system that can find a match in real time could change the way in which PMF is deployed and presented. In an earlier paper we presented a hardware design of a raw mass spectra processor that, when implemented in Field Programmable Gate Array (FPGA) hardware, achieves an almost 170-fold speed gain relative to a conventional software implementation running on a dual-processor server. In this article we present a complementary hardware realization of a parallel database search engine that, when running on a Xilinx Virtex 2 FPGA at 100 MHz, delivers a 1800-fold speed-up compared with an equivalent C software routine running on a 3.06 GHz Xeon workstation. The inherent scalability of the design means that processing speed can be multiplied by deploying the design on multiple FPGAs. The database search processor and the mass spectra processor, running on a reconfigurable computing platform, provide a complete real-time PMF protein identification solution. Contact: d.coca@sheffield.ac.uk PMID:18453553
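
The second-stage search amounts to scoring each database protein by how many of its theoretical peptide masses match the observed mass list within a tolerance. A toy software version conveys the idea; the scoring rule, names and masses here are illustrative assumptions, not the FPGA design or a real scoring scheme such as MOWSE:

```python
def match_fingerprint(observed, database, tol=0.2):
    """Rank proteins by how many observed peptide masses (Da) match one
    of the protein's theoretical peptide masses within `tol` Da.
    `database` maps protein name -> list of theoretical masses."""
    def score(theoretical):
        return sum(any(abs(o - t) <= tol for t in theoretical)
                   for o in observed)
    return sorted(database, key=lambda name: score(database[name]),
                  reverse=True)
```

The per-protein scores are independent, which is why the comparison pipeline replicates so naturally across FPGA processing elements.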

7. Influence of equilibrium shear flow in the parallel magnetic direction on edge localized mode crash

Luo, Y.; Chen, S. Y.; Huang, J.; Xiong, Y. Y.; Tang, C. J.

2016-04-01

The influence of parallel shear flow on the evolution of peeling-ballooning (P-B) modes is studied with the BOUT++ four-field code in this paper. The parallel shear flow has different effects in the linear and nonlinear simulations. In the linear simulations, the growth rate of the edge localized mode (ELM) is increased by the Kelvin-Helmholtz term, which is driven by the parallel shear flow. In the nonlinear simulations, the results accord with the linear simulations during the linear phase. However, the ELM size is reduced by the parallel shear flow at the beginning of the turbulence phase, which is marked by the P-B filament structure, and it continues to decrease under the shear flow as the turbulence phase proceeds.

8. Status of the DAMIC Direct Dark Matter Search Experiment

SciTech Connect

Aguilar-Arevalo, A.; et al.

2015-09-30

The DAMIC experiment uses fully depleted, high resistivity CCDs to search for dark matter particles. With an energy threshold $\sim$50 eV$_{ee}$, and excellent energy and spatial resolutions, the DAMIC CCDs are well-suited to identify and suppress radioactive backgrounds, having an unrivaled sensitivity to WIMPs with masses $<$6 GeV/$c^2$. Early results motivated the construction of a 100 g detector, DAMIC100, currently being installed at SNOLAB. This contribution discusses the installation progress, new calibration efforts near the threshold, a preliminary result with 2014 data, and the prospects for physics results after one year of data taking.

9. A Scalable Distributed Parallel Breadth-First Search Algorithm on BlueGene/L

SciTech Connect

Yoo, A; Chow, E; Henderson, K; McLendon, W; Hendrickson, B; Catalyurek, U

2005-07-19

Many emerging large-scale data science applications require searching large graphs distributed across multiple memories and processors. This paper presents a distributed breadth-first search (BFS) scheme that scales for random graphs with up to three billion vertices and 30 billion edges. Scalability was tested on IBM BlueGene/L with 32,768 nodes at the Lawrence Livermore National Laboratory. Scalability was obtained through a series of optimizations, in particular, those that ensure scalable use of memory. We use 2D (edge) partitioning of the graph instead of conventional 1D (vertex) partitioning to reduce communication overhead. For Poisson random graphs, we show that the expected size of the messages is scalable for both 2D and 1D partitionings. Finally, we have developed efficient collective communication functions for the 3D torus architecture of BlueGene/L that also take advantage of the structure in the problem. The performance and characteristics of the algorithm are measured and reported.
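The level-synchronous kernel underlying such a distributed BFS can be sketched serially. In the 2D scheme the adjacency matrix is partitioned into blocks, so expanding the frontier becomes a block-local operation followed by row/column communication; this sketch, including the adjacency-list representation, is illustrative rather than the BlueGene/L code:

```python
def bfs_levels(adj, source):
    """Level-synchronous BFS: expand the whole frontier each round.
    adj maps vertex -> iterable of neighbours."""
    level = {source: 0}
    frontier = [source]
    depth = 0
    while frontier:
        depth += 1
        next_frontier = []
        for u in frontier:            # distributed: frontier split across owners
            for v in adj[u]:
                if v not in level:    # distributed: dedup requires communication
                    level[v] = depth
                    next_frontier.append(v)
        frontier = next_frontier
    return level
```

The per-round structure is what makes the memory and message sizes analyzable: each round touches only the current frontier and its out-edges, which the paper shows remain scalable under both 1D and 2D partitionings.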

10. Direct and indirect searches for anomalous beta decay

Nistor, Jonathan M.

We present a treatment of time-varying nuclear transition rates intended to guide future experimental searches, focusing primarily on the concept of "self-induced decay." This investigation stems from a series of recent reports that suggest that the decay rates of several isotopes may have been influenced by solar activity (perhaps by solar neutrinos). A mechanism in which (anti)neutrinos can influence the decay process suggests that a sample of decaying nuclei emitting neutrinos could affect its own rate of decay. Past experiments have searched for this "self-induced decay" (SID) effect by measuring deviations from the expected decay rate for highly active samples of varying geometries. Here, we further develop a SID formalism which takes into account the activation process. In the course of the treatment, the observation is made that the SID behavior closely resembles the behavior of rate-related losses due to dead-time, and hence that standard dead-time corrections can result in the removal of possible SID-related behavior. Additionally, we discuss a long-running dark matter (DM) experiment which observes an annual signal predicted by standard DM models. Here, we consider the possibility that the annual signal seen by the DAMA collaboration, and interpreted by them as evidence for dark matter, may in fact be due to the radioactive contaminant 40K, which is known to be present in their detector. We also consider the possibility that part of the DAMA signal may arise from relic big-bang neutrinos.

11. Current Results and Future Directions of the Pulsar Search Collaboratory

Heatherly, Sue Ann; Rosen, R.; McLaughlin, M.; Lorimer, D.

2011-01-01

The Pulsar Search Collaboratory (PSC) is a joint partnership between the National Radio Astronomy Observatory (NRAO) and West Virginia University (WVU). The ultimate goal of the PSC is to interest students in science, technology, engineering, and mathematics (STEM) fields by engaging them in conducting authentic scientific research, specifically the search for new pulsars. Of the 33 schools in the original PSC program, 13 come from rural school districts; one third of these are from schools where over 50% participate in the Free/Reduced School Lunch program. We are reaching first-generation college-goers. For students, the program succeeds in building confidence, rapport with the scientists involved in the project, and greater comfort with teamwork. We see additional gains in girls, as they see themselves more as scientists after participating in the PSC program, which is an important predictor of success in STEM fields. The PSC has had several scientific successes as well. To date, PSC students have made two astronomical discoveries: a 4.8-s pulsar and a bright radio burst of astrophysical origin, most likely from a sporadic neutron star. We will report on the status of the project including new evaluation data. We will also describe PSC-West, an experiment to involve schools in Illinois and Wisconsin using primarily online tools for professional development of teachers and coaching of students. Knowledge gained through our efforts with PSC-West will assist the PSC team in scaling up the project.

12. Lick Observatory Optical SETI: targeted search and new directions.

PubMed

Stone, R P S; Wright, S A; Drake, F; Muñoz, M; Treffers, R; Werthimer, D

2005-10-01

Lick Observatory's Optical SETI (search for extraterrestrial intelligence) program has been in regular operation for 4.5 years. We have observed 4,605 stars of spectral types F-M within 200 light-years of Earth. Occasionally, we have appended objects of special interest, such as stars with known planetary systems. We have observed 14 candidate signals ("triple coincidences"), all but one of which are explained by transient local difficulties. Additional observations of the remaining candidate have failed to confirm arriving pulse events. We now plan to proceed in a more economical manner by operating in an unattended drift scan mode. Between operational and equipment modifications, efficiency will more than double. PMID:16225433

13. Simulated Milky Way analogues: implications for dark matter direct searches

Bozorgnia, Nassim; Calore, Francesca; Schaller, Matthieu; Lovell, Mark; Bertone, Gianfranco; Frenk, Carlos S.; Crain, Robert A.; Navarro, Julio F.; Schaye, Joop; Theuns, Tom

2016-05-01

We study the implications of galaxy formation on dark matter direct detection using high resolution hydrodynamic simulations of Milky Way-like galaxies simulated within the EAGLE and APOSTLE projects. We identify Milky Way analogues that satisfy observational constraints on the Milky Way rotation curve and total stellar mass. We then extract the dark matter density and velocity distribution in the Solar neighbourhood for this set of Milky Way analogues, and use them to analyse the results of current direct detection experiments. For most Milky Way analogues, the event rates in direct detection experiments obtained from the best fit Maxwellian distribution (with peak speed of 223–289 km/s) are similar to those obtained directly from the simulations. As a consequence, the allowed regions and exclusion limits set by direct detection experiments in the dark matter mass and spin-independent cross section plane shift by a few GeV compared to the Standard Halo Model, at low dark matter masses. For each dark matter mass, the halo-to-halo variation of the local dark matter density results in an overall shift of the allowed regions and exclusion limits for the cross section. However, the compatibility of the possible hints for a dark matter signal from DAMA and CDMS-Si and null results from LUX and SuperCDMS is not improved.

14. Direct observation of TALE protein dynamics reveals a two-state search mechanism

PubMed Central

Cuculis, Luke; Abil, Zhanar; Zhao, Huimin; Schroeder, Charles M.

2015-01-01

Transcription activator-like effector (TALE) proteins are a class of programmable DNA-binding proteins for which the fundamental mechanisms governing the search process are not fully understood. Here we use single-molecule techniques to directly observe TALE search dynamics along DNA templates. We find that TALE proteins are capable of rapid diffusion along DNA using a combination of sliding and hopping behaviour, which suggests that the TALE search process is governed in part by facilitated diffusion. We also observe that TALE proteins exhibit two distinct modes of action during the search process—a search state and a recognition state—facilitated by different subdomains in monomeric TALE proteins. Using TALE truncation mutants, we further demonstrate that the N-terminal region of TALEs is required for the initial non-specific binding and subsequent rapid search along DNA, whereas the central repeat domain is required for transitioning into the site-specific recognition state. PMID:26027871

15. Direct observation of TALE protein dynamics reveals a two-state search mechanism

Cuculis, Luke; Abil, Zhanar; Zhao, Huimin; Schroeder, Charles M.

2015-06-01

Transcription activator-like effector (TALE) proteins are a class of programmable DNA-binding proteins for which the fundamental mechanisms governing the search process are not fully understood. Here we use single-molecule techniques to directly observe TALE search dynamics along DNA templates. We find that TALE proteins are capable of rapid diffusion along DNA using a combination of sliding and hopping behaviour, which suggests that the TALE search process is governed in part by facilitated diffusion. We also observe that TALE proteins exhibit two distinct modes of action during the search process--a search state and a recognition state--facilitated by different subdomains in monomeric TALE proteins. Using TALE truncation mutants, we further demonstrate that the N-terminal region of TALEs is required for the initial non-specific binding and subsequent rapid search along DNA, whereas the central repeat domain is required for transitioning into the site-specific recognition state.

16. 3D frequency modeling of elastic seismic wave propagation via a structured massively parallel direct Helmholtz solver

Wang, S.; De Hoop, M. V.; Xia, J.; Li, X.

2011-12-01

We consider the modeling of elastic seismic wave propagation on a rectangular domain via the discretization and solution of the inhomogeneous coupled Helmholtz equation in 3D, by exploiting a parallel multifrontal sparse direct solver equipped with Hierarchically Semi-Separable (HSS) structure to reduce the computational complexity and storage. In particular, we are concerned with solving this equation on a large domain, for a large number of different forcing terms in the context of seismic problems in general, and modeling in particular. We resort to a parsimonious mixed-grid finite-difference scheme for discretizing the Helmholtz operator and Perfectly Matched Layer (PML) boundaries, resulting in a non-Hermitian matrix. We make use of a nested dissection based domain decomposition, and introduce an approximate direct solver by developing a parallel HSS matrix compression, factorization, and solution approach. We cast our massive parallelization in the framework of the multifrontal method. The assembly tree is partitioned into local trees and a global tree. The local trees are eliminated independently in each processor, while the global tree is eliminated through massive communication. The solver for the inhomogeneous equation is a parallel hybrid between multifrontal and HSS structure. The computational complexity associated with the factorization is almost linear with the size of the Helmholtz matrix. Our numerical approach can be compared with the spectral element method in 3D seismic applications.
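The economics of a direct solver in this setting: factoring the Helmholtz matrix is the expensive step, but the factorization is then reused for every forcing term. A toy 1D stand-in using an off-the-shelf sparse LU (SciPy's `splu`; the real solver is a parallel multifrontal code with HSS compression, and the 1D operator here is only an illustrative substitute for the 3D coupled elastic operator) shows the pattern:

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import splu

def helmholtz_1d(n, k, h):
    """Second-order finite-difference -u'' - k^2 u on a 1D grid
    (toy stand-in; coefficients chosen so the matrix is nonsingular)."""
    main = (2.0 / h**2 - k**2) * np.ones(n)
    off = (-1.0 / h**2) * np.ones(n - 1)
    return sp.diags([off, main, off], [-1, 0, 1], format="csc")

A = helmholtz_1d(200, k=2.0, h=0.01)
lu = splu(A)                        # factor once: the costly (multifrontal) step
sources = np.random.rand(200, 5)    # many forcing terms
solutions = lu.solve(sources)       # cheap repeated triangular solves
```

The HSS compression in the paper replaces the dense frontal matrices inside this factorization with rank-structured approximations, which is what brings the cost down to almost linear in the matrix size.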

17. Search for Exoplanets around Young Stellar Objects by Direct Imaging

Uyama, Taichi; Tamura, Motohide; Hashimoto, Jun; Kuzuhara, Masayuki

2015-12-01

The SEEDS project, exploring exoplanets and protoplanetary disks with Subaru/HiCIAO, observed about 500 stars by direct imaging from December 2009 to April 2015. Among these targets we focus on Young Stellar Objects (YSOs; age ≤ 10 Myr), which often host the protoplanetary disks where planets are being formed, in order to detect young exoplanets and to understand the formation process. We analyzed 66 YSOs (about 100 datasets in total) with LOCI data reduction. We will report the results (companion candidates and detection limits) of our exploration.

18. Energy partition and distribution of excited species in direction-sensitive detectors for WIMP searches

Hitachi, A.

2013-12-01

The Bragg-like curve for compounds is introduced for directional detection of galactic dark matter. Slow ion collisions are discussed in relation to direct dark matter searches. The Coulomb effect and the threshold effect in stopping power theory are examined. Ionization via molecular orbitals (MO) is suggested as an additional contribution to the electronic stopping power at very low energies.

19. Scalability of Parallel Spatial Direct Numerical Simulations on Intel Hypercube and IBM SP1 and SP2

NASA Technical Reports Server (NTRS)

Joslin, Ronald D.; Hanebutte, Ulf R.; Zubair, Mohammad

1995-01-01

The implementation and performance of a parallel spatial direct numerical simulation (PSDNS) approach on the Intel iPSC/860 hypercube and IBM SP1 and SP2 parallel computers is documented. Spatially evolving disturbances associated with the laminar-to-turbulent transition in boundary-layer flows are computed with the PSDNS code. The feasibility of using the PSDNS to perform transition studies on these computers is examined. The results indicate that the PSDNS approach can effectively be parallelized on a distributed-memory parallel machine by remapping the distributed data structure during the course of the calculation. Scalability information is provided to estimate computational costs to match the actual costs relative to changes in the number of grid points. By increasing the number of processors, slower-than-linear speedups are achieved with optimized (machine-dependent library) routines, because the computational cost is dominated by the FFT routine, which yields less than ideal speedups. By using appropriate compile options and optimized library routines on the SP1, the serial code achieves 52-56 Mflops on a single node of the SP1 (45 percent of theoretical peak performance). The actual performance of the PSDNS code on the SP1 is evaluated with a "real world" simulation that consists of 1.7 million grid points. One time step of this simulation is calculated on eight nodes of the SP1 in the same time as required by a Cray Y/MP supercomputer. For the same simulation, 32 nodes of the SP1 and SP2 are required to reach the performance of a Cray C-90. A 32-node SP1 (SP2) configuration is 2.9 (4.6) times faster than a Cray Y/MP for this simulation, while the hypercube is roughly 2 times slower than the Y/MP for this application. KEY WORDS: Spatial direct numerical simulations; incompressible viscous flows; spectral methods; finite differences; parallel computing.

20. Taming astrophysical bias in direct dark matter searches

SciTech Connect

Pato, Miguel; Strigari, Louis E.; Trotta, Roberto; Bertone, Gianfranco E-mail: strigari@stanford.edu E-mail: gf.bertone@gmail.com

2013-02-01

We explore systematic biases in the identification of dark matter in future direct detection experiments and compare the reconstructed dark matter properties when assuming a self-consistent dark matter distribution function and the standard Maxwellian velocity distribution. We find that the systematic bias on the dark matter mass and cross-section determination arising from wrong assumptions for its distribution function is of order ∼ 1σ. A much larger systematic bias can arise if wrong assumptions are made on the underlying Milky Way mass model. However, in both cases the bias is substantially mitigated by marginalizing over galactic model parameters. We additionally show that the velocity distribution can be reconstructed in an unbiased manner for typical dark matter parameters. Our results highlight both the robustness of the dark matter mass and cross-section determination using the standard Maxwellian velocity distribution and the importance of accounting for astrophysical uncertainties in a statistically consistent fashion.

1. Current Trends in Numerical Simulation for Parallel Engineering Environments New Directions and Work-in-Progress

SciTech Connect

Trinitis, C; Schulz, M

2006-06-29

In today's world, the use of parallel programming and architectures is essential for simulating practical problems in engineering and related disciplines. Remarkable progress in CPU architecture, system scalability, and interconnect technology continues to provide new opportunities, as well as new challenges for both system architects and software developers. These trends are paralleled by progress in parallel algorithms, simulation techniques, and software integration from multiple disciplines. ParSim brings together researchers from both application disciplines and computer science and aims at fostering closer cooperation between these fields. Since its successful introduction in 2002, ParSim has established itself as an integral part of the EuroPVM/MPI conference series. In contrast to traditional conferences, emphasis is put on the presentation of up-to-date results with a short turn-around time. This offers a unique opportunity to present new aspects in this dynamic field and discuss them with a wide, interdisciplinary audience. The EuroPVM/MPI conference series, as one of the prime events in parallel computation, serves as an ideal surrounding for ParSim. This combination enables the participants to present and discuss their work within the scope of both the session and the host conference. This year, eleven papers from authors in nine countries were submitted to ParSim, and we selected five of them. They cover a wide range of different application fields including gas flow simulations, thermo-mechanical processes in nuclear waste storage, and cosmological simulations. At the same time, the selected contributions also address the computer science side of their codes and discuss different parallelization strategies, programming models and languages, as well as the use of nonblocking collective operations in MPI. We are confident that this provides an attractive program and that ParSim will be an informal setting for lively discussions and for fostering new

2. Directed Searches for Broadband Extended Gravitational Wave Emission in Nearby Energetic Core-collapse Supernovae

van Putten, Maurice H. P. M.

2016-03-01

Core-collapse supernovae (CC-SNe) are factories of neutron stars and stellar-mass black holes. SNe Ib/c stand out as potentially originating in relatively compact stellar binaries and they have a branching ratio of about 1% into long gamma-ray bursts. The most energetic events probably derive from central engines harboring rapidly rotating black holes, wherein the accretion of fall-back matter down to the innermost stable circular orbit (ISCO) offers a window into broadband extended gravitational wave emission (BEGE). To search for BEGE, we introduce a butterfly filter in time-frequency space by time-sliced matched filtering. To analyze long epochs of data, we propose using coarse-grained searches followed by high-resolution searches on events of interest. We illustrate our proposed coarse-grained search on two weeks of LIGO S6 data prior to SN 2010br (z = 0.002339) using a bank of up to 64,000 templates of one-second duration covering a broad range in chirp frequencies and bandwidth. Correlating events with signal-to-noise ratios > 6 from the LIGO L1 and H1 detectors reduces the total to a few events of interest. Lacking any further properties reflecting a common excitation by broadband gravitational radiation, we disregarded these as spurious. This new pipeline may be used to systematically search for long-duration chirps in nearby CC-SNe from robotic optical transient surveys using embarrassingly parallel computing.

3. Parallel direct numerical simulation of wake vortex detection using monostatic and bistatic radio acoustic sounding systems

Boluriaan Esfahaani, Said

A parallel two-dimensional code is developed in this thesis to numerically simulate wake vortex detection using a Radio Acoustic Sounding System (RASS). The Maxwell equations for media with non-uniform permittivity and the linearized Euler equations for media with non-uniform mean flow are the main framework for the simulations. The code is written in Fortran 90 with the Message Passing Interface (MPI) for parallel implementation. The main difficulty encountered with a time accurate simulation of a RASS is the number of samples required to resolve the Doppler shift in the scattered electromagnetic signal. Even for a 1D simulation with a typical scatterer size, the CPU time required to run the code is far beyond currently available computer resources. Two solutions that overcome this problem are described. In the first the actual electromagnetic wave propagation speed is replaced with a much lower value. This allows an explicit, time accurate numerical scheme to be used. In the second the governing differential equations are recast in order to remove the carrier frequency and solve only for the frequency shift using an implicit scheme with large time steps. The numerical stability characteristics of the resulting discretized equation with complex coefficients are examined. A number of cases for both the monostatic and bistatic configurations are considered. First, a uniform mean flow is considered and the RASS simulation is performed for two different types of incident acoustic field, namely a short single frequency acoustic pulse and a continuous broadband acoustic source. Both the explicit and implicit schemes are examined and the mean flow velocity is determined from the spectrum of the backscattered electromagnetic signal with very good accuracy. Second, the Taylor and Oseen vortex models are considered and their velocity field along the incident electromagnetic beam is retrieved. The Abel transform is then applied to the velocity profiles determined by both

4. Chaining direct memory access data transfer operations for compute nodes in a parallel computer

DOEpatents

Archer, Charles J.; Blocksome, Michael A.

2010-09-28

Methods, systems, and products are disclosed for chaining DMA data transfer operations for compute nodes in a parallel computer that include: receiving, by an origin DMA engine on an origin node in an origin injection FIFO buffer for the origin DMA engine, a RGET data descriptor specifying a DMA transfer operation data descriptor on the origin node and a second RGET data descriptor on the origin node, the second RGET data descriptor specifying a target RGET data descriptor on the target node, the target RGET data descriptor specifying an additional DMA transfer operation data descriptor on the origin node; creating, by the origin DMA engine, an RGET packet in dependence upon the RGET data descriptor, the RGET packet containing the DMA transfer operation data descriptor and the second RGET data descriptor; and transferring, by the origin DMA engine to a target DMA engine on the target node, the RGET packet.

5. Electro-optic directed XOR logic circuits based on parallel-cascaded micro-ring resonators.

PubMed

Tian, Yonghui; Zhao, Yongpeng; Chen, Wenjie; Guo, Anqi; Li, Dezhao; Zhao, Guolin; Liu, Zilong; Xiao, Huifu; Liu, Guipeng; Yang, Jianhong

2015-10-01

We report an electro-optic photonic integrated circuit which can perform the exclusive-OR (XOR) logic operation based on two silicon parallel-cascaded microring resonators (MRRs) fabricated on the silicon-on-insulator (SOI) platform. PIN diodes embedded around the MRRs are employed to achieve carrier-injection modulation. Two electrical pulse sequences, regarded as the two operands of the operation, are applied to the PIN diodes to modulate the two MRRs through the free-carrier dispersion effect. The operation result of the two operands is output at the Output port in the form of light. The scattering matrix method is employed to establish a numerical model of the device, and the numerical simulator SG-framework is used to simulate the electrical characteristics of the PIN diodes. XOR operation at a speed of 100 Mbps is demonstrated successfully. PMID:26480148

6. Self-pacing direct memory access data transfer operations for compute nodes in a parallel computer

SciTech Connect

Blocksome, Michael A

2015-02-17

Methods, apparatus, and products are disclosed for self-pacing DMA data transfer operations for nodes in a parallel computer that include: transferring, by an origin DMA on an origin node, a RTS message to a target node, the RTS message specifying a message on the origin node for transfer to the target node; receiving, in an origin injection FIFO for the origin DMA from a target DMA on the target node in response to transferring the RTS message, a target RGET descriptor followed by a DMA transfer operation descriptor, the DMA descriptor for transmitting a message portion to the target node, the target RGET descriptor specifying an origin RGET descriptor on the origin node that specifies an additional DMA descriptor for transmitting an additional message portion to the target node; processing, by the origin DMA, the target RGET descriptor; and processing, by the origin DMA, the DMA transfer operation descriptor.

7. Oscillation modes of direct current microdischarges with parallel-plate geometry

SciTech Connect

Stefanovic, Ilija; Kuschel, Thomas; Winter, Joerg; Skoro, Nikola; Maric, Dragana; Petrovic, Zoran Lj

2011-10-15

Two different oscillation modes in microdischarges with parallel-plate geometry have been observed: relaxation oscillations in the frequency range between 1.23 and 2.1 kHz, and free-running oscillations at 7 kHz. The oscillation modes are induced by increasing the power supply voltage or the discharge current. For a given power supply voltage, there is a spontaneous transition from one oscillation mode to the other and vice versa. Before the transition from relaxation to free-running oscillations, a spontaneous increase of the relaxation-oscillation frequency from 1.3 kHz to 2.1 kHz is measured. Fourier transform spectra of the relaxation oscillations reveal chaotic behavior of the microdischarges. The volt-ampere (V-A) characteristic associated with the relaxation oscillations describes a periodic transition between a low-current diffuse discharge and normal glow. The free-running oscillations, however, appear in the subnormal glow only.

8. Formalizing dependency directed backtracking and explanation based learning in refinement search

SciTech Connect

Kambhampati, S.

1996-12-31

The ideas of dependency directed backtracking (DDB) and explanation based learning (EBL) have developed independently in constraint satisfaction, planning and problem solving communities. In this paper, I formalize and unify these ideas under the task-independent framework of refinement search, which can model the search strategies used in both planning and constraint satisfaction. I show that both DDB and EBL depend upon the common theory of explaining search failures, and regressing them to higher levels of the search tree. The relevant issues of importance include (a) how the failures are explained and (b) how many failure explanations are remembered. This task-independent understanding of DDB and EBL helps support cross-fertilization of ideas among Constraint Satisfaction, Planning and Explanation-Based Learning communities.
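The core DDB idea described here, explaining a failure as a set of earlier decisions and regressing that explanation up the search tree, can be sketched as conflict-directed backjumping. This is a generic CSP illustration under assumed interfaces, not the paper's refinement-search formalism:

```python
def cbj(level, assignment, variables, domains, conflict):
    """Conflict-directed backjumping.
    conflict(assignment, var, val) returns the set of earlier *levels*
    whose assignments rule out var=val (empty set if consistent).
    Returns (solution, None) on success or (None, failure_explanation)."""
    if level == len(variables):
        return dict(assignment), None
    var = variables[level]
    explanation = set()               # levels responsible for failure here
    for val in domains[var]:
        culprits = conflict(assignment, var, val)
        if culprits:
            explanation |= culprits   # explain why this value fails
            continue
        assignment[var] = val
        solution, deeper = cbj(level + 1, assignment, variables, domains, conflict)
        del assignment[var]
        if solution is not None:
            return solution, None
        if level not in deeper:
            return None, deeper       # this level is irrelevant: jump over it
        explanation |= deeper - {level}
    return None, explanation          # regress the explanation upward
```

Remembering such failure explanations across the search, rather than discarding them after each backjump, is the step that turns DDB into EBL in the paper's unified view.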

9. Gravitational focusing and substructure effects on the rate modulation in direct dark matter searches

SciTech Connect

Nobile, Eugenio Del; Gelmini, Graciela B.; Witte, Samuel J.

2015-08-21

We study how gravitational focusing (GF) of dark matter by the Sun affects the annual and biannual modulation of the expected signal in non-directional direct dark matter searches, in the presence of dark matter substructure in the local dark halo. We consider the Sagittarius stream and a possible dark disk, and show that GF suppresses some, but not all, of the distinguishing features that would characterize substructure of the dark halo were GF neglected.

10. Using the Self-Directed Search: Career Explorer with High-Risk Middle School Students

ERIC Educational Resources Information Center

Osborn, Debra S.; Reardon, Robert C.

2006-01-01

The Self-Directed Search: Career Explorer was used with 98 (95% African American) high-risk middle school students as part of 14 structured career groups based on Cognitive Information Processing theory. Results and implications are presented on the outcomes of this program.

11. Diagnostic Use of Holland's Self-Directed Search with University Students.

ERIC Educational Resources Information Center

Christensen, Kathleen C.; Sedlacek, William E.

This study explores the use of a self-counseling device, Holland's Self-Directed Search (SDS), as a diagnostic tool in identifying students who have encountered difficulties in college but persist in their attendance when they may have been better suited to vocational training programs. Thirty-seven students in the University of Maryland Office of…

12. The Influence of Item Response Indecision on the Self-Directed Search

ERIC Educational Resources Information Center

Sampson, James P., Jr.; Shy, Jonathan D.; Hartley, Sarah Lucas; Reardon, Robert C.; Peterson, Gary W.

2009-01-01

Students (N = 247) responded to Self-Directed Search (SDS) per the standard response format and were also instructed to record a question mark (?) for items about which they were uncertain (item response indecision [IRI]). The initial responses of the 114 participants with a (?) were then reversed and a second SDS summary code was obtained and…

13. Psychometric Properties of the Chinese Self-Directed Search (1994 Edition)

ERIC Educational Resources Information Center

Yang, Weiwei; Lance, Charles E.; Hui, Harry C.

2006-01-01

In this study, we (a) examined the measurement equivalence/invariance (ME/I) of the Chinese Self-Directed Search (SDS; 1994 edition) across gender and geographic regions (Mainland China vs. Hong Kong); (b) assessed the construct validity of the Chinese SDS using Widaman's (1985, 1992) MTMM framework; and (c) determined whether vocational interests…

14. Congruency between Occupational Daydreams and Self Directed Search (SDS) Scores among College Students

ERIC Educational Resources Information Center

Miller, Mark J.; Springer, Thomas P.; Tobacyk, Jerome; Wells, Don

2004-01-01

In this study, the relationship of expressed occupational daydreams and scores on the Self-Directed Search (SDS) were examined. Results were consistent with Holland's theory of careers. Implications for career counselors are discussed. Students were asked to provide specific biographical data (i. e., age, gender, race) and to write down their…

15. Twin Similarities in Holland Types as Shown by Scores on the Self-Directed Search

ERIC Educational Resources Information Center

Chauvin, Ida; McDaniel, Janelle R.; Miller, Mark J.; King, James M.; Eddlemon, Ondie L. M.

2012-01-01

This study examined the degree of similarity between scores on the Self-Directed Search from one set of identical twins. Predictably, a high congruence score was found. Results from a biographical sheet are discussed as well as implications of the results for career counselors.

16. Status and Prospects of the EDELWEISS-III Direct WIMP Search Experiment

Juillard, A.

2016-08-01

EDELWEISS-III is a direct dark matter search experiment, running 800 g heat-and-ionization cryogenic germanium detectors equipped with Full InterDigitized electrodes (FID) for the rejection of near-surface events. We report a preliminary analysis for a subset of the data (35 kg·days) as well as future prospects for the low-mass WIMP search.

17. Using Two Different Self-Directed Search (SDS) Interpretive Materials: Implications for Career Assessment

ERIC Educational Resources Information Center

Dozier, V. Casey; Sampson, James P.; Reardon, Robert C.

2013-01-01

John Holland's Self-Directed Search (SDS) is a career assessment that consists of several booklets designed to be self-scored and self-administered. It simulates what a practitioner and an individual might do together in a career counseling session (e.g., review preferred activities and occupations; review competencies, abilities and possible…

18. Effective Five Directional Partial Derivatives-Based Image Smoothing and a Parallel Structure Design.

PubMed

Choongsang Cho; Sangkeun Lee

2016-04-01

Image smoothing has been used for image segmentation, image reconstruction, object classification, and 3D content generation. Several smoothing approaches have been used at the pre-processing step to retain critical edges while removing noise and small details. However, they have limited performance, especially in removing small details and smoothing discrete regions. Therefore, to provide fast and accurate smoothing, we propose an effective scheme that uses a weighted combination of the gradient, Laplacian, and diagonal derivatives of a smoothed image. In addition, to reduce computational complexity, we designed and implemented a parallel processing structure for the proposed scheme on a graphics processing unit (GPU). For an objective evaluation of the smoothing performance, the images were linearly quantized into several layers to generate experimental images, and the quantized images were smoothed using several methods for reconstructing the smoothly changed shape and intensity of the original image. Experimental results showed that the proposed scheme has higher objective scores and better successful smoothing performance than similar schemes, while preserving and removing critical and trivial details, respectively. For computational complexity, the proposed smoothing scheme running on a GPU provided 18 and 16 times lower complexity than the proposed smoothing scheme running on a CPU and the L0-based smoothing scheme, respectively. In addition, a simple noise reduction test was conducted to show the characteristics of the proposed approach; the results showed that the presented algorithm outperforms the state-of-the-art algorithms by more than 5.4 dB. Therefore, we believe that the proposed scheme can be a useful tool for efficient image smoothing. PMID:26886985
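As a concrete illustration of the weighted-derivative idea in the abstract above, here is a minimal NumPy sketch. The finite-difference kernels, periodic boundaries, and default weights are hypothetical (the abstract does not specify them), and the GPU parallel structure is omitted entirely.

```python
import numpy as np

def five_direction_energy(img, w_grad=1.0, w_lap=0.5, w_diag=0.5):
    """Weighted combination of the gradient, Laplacian, and diagonal
    derivatives of a 2-D image, evaluated with central finite differences
    and periodic boundaries (np.roll). Weights are illustrative only."""
    def sh(dy, dx):                       # shifted copy of the image
        return np.roll(np.roll(img, dy, axis=0), dx, axis=1)
    gx = (sh(0, -1) - sh(0, 1)) / 2       # horizontal gradient
    gy = (sh(-1, 0) - sh(1, 0)) / 2       # vertical gradient
    lap = sh(0, -1) + sh(0, 1) + sh(-1, 0) + sh(1, 0) - 4 * img  # Laplacian
    d1 = (sh(-1, -1) - sh(1, 1)) / 2      # diagonal derivative
    d2 = (sh(-1, 1) - sh(1, -1)) / 2      # anti-diagonal derivative
    return (w_grad * (gx ** 2 + gy ** 2) + w_lap * lap ** 2
            + w_diag * (d1 ** 2 + d2 ** 2))
```

A smoothing scheme would penalize this per-pixel energy; on a constant image every derivative term vanishes, which is the sanity check one would expect of such a combination.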

19. Approximate calculation of multispar cantilever and semicantilever wings with parallel ribs under direct and indirect loading

NASA Technical Reports Server (NTRS)

Sanger, Eugen

1932-01-01

A method is presented for approximate static calculation, which is based on the customary assumption of rigid ribs, while taking into account the systematic errors in the calculation results due to this arbitrary assumption. The procedure is given in greater detail for semicantilever and cantilever wings with polygonal spar plan form and for wings under direct loading only. The last example illustrates the advantages of the use of influence lines for such wing structures and their practical interpretation.

20. Solar power satellite rectenna design study: Directional receiving elements and parallel-series combining analysis

NASA Technical Reports Server (NTRS)

Gutmann, R. J.; Borrego, J. M.

1978-01-01

Rectenna conversion efficiencies (RF to dc) approximating 85 percent were demonstrated on a small scale, clearly indicating the feasibility and potential of efficient microwave-to-dc power conversion. The overall cost estimates of the solar power satellite indicate that the baseline rectenna subsystem will account for between 25 and 40 percent of the system cost. The directional receiving elements and element extensions were studied, along with power combining evaluation and evaluation extensions.

1. Periodic Acid-Schiff Staining Parallels the Immunoreactivity Seen By Direct Immunofluorescence in Autoimmune Skin Diseases

PubMed Central

Abreu Velez, Ana Maria; Upegui Zapata, Yulieth Alexandra; Howard, Michael S

2016-01-01

Background: In many countries and laboratories, techniques such as direct immunofluorescence (DIF) are not available for the diagnosis of skin diseases. Thus, these laboratories are limited in the full diagnoses of autoimmune skin diseases, vasculitis, and rheumatologic diseases. In our experience with these diseases and the patient's skin biopsies, we have noted a positive correlation between periodic acid-Schiff (PAS) staining and immunofluorescence patterns; however, these were just empiric observations. In the current study, we aim to confirm these observations, given the concept that the majority of autoantibodies are glycoproteins and should thus be recognized by PAS staining. Aims: To compare direct immunofluorescent and PAS staining, in multiple autoimmune diseases that are known to exhibit specific direct immunofluorescent patterns. Materials and Methods: We studied multiple autoimmune skin diseases: Five cases of bullous pemphigoid, five cases of pemphigus vulgaris, ten cases of cutaneous lupus, ten cases of autoimmune vasculitis, ten cases of lichen planus (LP), and five cases of cutaneous drug reactions (including one case of erythema multiforme). In addition, we utilized 45 normal skin control specimens from plastic surgery reductions. Results: We found a 98% positive correlation between DIF and PAS staining patterns over all the disease samples. Conclusion: We recommend that laboratories without access to DIF always perform PAS staining in addition to hematoxylin and eosin (H&E) staining, for a review of the reactivity pattern. PMID:27114972

2. Multiparty controlled quantum secure direct communication based on quantum search algorithm

Kao, Shih-Hung; Hwang, Tzonelih

2013-12-01

In this study, a new controlled quantum secure direct communication (CQSDC) protocol using the quantum search algorithm as the encoding function is proposed. The proposed protocol is based on the multi-particle Greenberger-Horne-Zeilinger entangled state and the one-step quantum transmission strategy. Due to the one-step transmission of qubits, the proposed protocol can be easily extended to a multi-controller environment, and is also free from the Trojan horse attacks. The analysis shows that the use of quantum search algorithm in the construction of CQSDC appears very promising.

3. Highly flexible nearest-neighbor-search associative memory with integrated k nearest neighbor classifier, configurable parallelism and dual-storage space

An, Fengwei; Mihara, Keisuke; Yamasaki, Shogo; Chen, Lei; Jürgen Mattausch, Hans

2016-04-01

VLSI implementations are often applied to overcome the high computational cost of pattern matching but usually have low flexibility for satisfying different target applications. In this paper, a digital word-parallel associative memory architecture for k-nearest-neighbor (KNN) search, one of the most basic algorithms in pattern recognition, is reported, applying the squared Euclidean distance measure. The reported architecture features reconfigurable parallelism, dual-storage space to achieve a flexible number of reference vectors, and a dedicated majority vote circuit. Programmable switching circuits, located between vector components, enable scalability of the searching parallelism by configuring the reference feature-vector dimensionality. A pipelined storage with dual static-random-access-memory (SRAM) cells for each unit and an intermediate winner control circuit are designed to extend the applicability by improving the flexibility of the reference storage. A test chip in 180 nm CMOS technology, which has 32 rows, 4 elements in each row and 2-parallel 8-bit dual-components in each element, consumes altogether 61.4 mW and in particular only 11.9 mW during the reconfigurable KNN classification (at 45.58 MHz and 1.8 V).
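The computation this associative memory accelerates can be stated compactly in software: find the k reference vectors closest to the query under the squared Euclidean distance, then take a majority vote over their labels. A minimal Python sketch follows; the chip's reconfigurable parallelism and dual SRAM storage are, of course, not modeled.

```python
import numpy as np

def knn_classify(refs, labels, query, k=3):
    """k-nearest-neighbor classification with the squared Euclidean
    distance measure: rank reference vectors by distance to the query,
    then majority-vote over the labels of the k nearest."""
    d2 = ((refs - query) ** 2).sum(axis=1)   # squared Euclidean distances
    nearest = np.argsort(d2)[:k]             # indices of the k smallest
    votes = np.bincount(labels[nearest])     # the majority-vote circuit, in software
    return votes.argmax()
```

For example, with reference vectors clustered around (0, 0) (class 0) and (5, 5) (class 1), a query near the origin is assigned class 0.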

4. Results of a direct search for the thorium-229 nuclear isomeric transition

Schneider, Christian; Jeet, Justin; Sullivan, Scott T.; Rellergert, Wade G.; Mirzadeh, Saed; Cassanho, A.; Jenssen, H. P.; Tkalya, Eugene V.; Hudson, Eric R.

2015-05-01

The nucleus of thorium-229 has an exceptionally low-energy isomeric transition in the vacuum-ultraviolet spectrum around 7.8 ± 0.5 eV. The prospects of a laser-accessible nuclear transition are manifold but require spectroscopically resolving the transition. Our approach is a direct search using thorium-doped crystals as samples and exciting the isomeric state with vacuum-ultraviolet synchrotron radiation. In a recent experiment, we were able to search for the transition at the Advanced Light Source synchrotron, LBNL, between 7.3 eV and 8.8 eV. We found no evidence for the transition within a lifetime range of 1-2 s to 2000-5600 s. This result excludes large parts of the theoretically expected region. We conclude by reporting on our efforts of a search using laser-generated vacuum-ultraviolet light.

5. Advancing predictive models for particulate formation in turbulent flames via massively parallel direct numerical simulations

PubMed Central

Bisetti, Fabrizio; Attili, Antonio; Pitsch, Heinz

2014-01-01

Combustion of fossil fuels is likely to continue for the near future due to the growing trends in energy consumption worldwide. The increase in efficiency and the reduction of pollutant emissions from combustion devices are pivotal to achieving meaningful levels of carbon abatement as part of the ongoing climate change efforts. Computational fluid dynamics featuring adequate combustion models will play an increasingly important role in the design of more efficient and cleaner industrial burners, internal combustion engines, and combustors for stationary power generation and aircraft propulsion. Today, turbulent combustion modelling is hindered severely by the lack of data that are accurate and sufficiently complete to assess and remedy model deficiencies effectively. In particular, the formation of pollutants is a complex, nonlinear and multi-scale process characterized by the interaction of molecular and turbulent mixing with a multitude of chemical reactions with disparate time scales. The use of direct numerical simulation (DNS) featuring a state of the art description of the underlying chemistry and physical processes has contributed greatly to combustion model development in recent years. In this paper, the analysis of the intricate evolution of soot formation in turbulent flames demonstrates how DNS databases are used to illuminate relevant physico-chemical mechanisms and to identify modelling needs. PMID:25024412

7. Development of a super-resolution optical microscope for directional dark matter search experiment

Alexandrov, A.; Asada, T.; Consiglio, L.; D'Ambrosio, N.; De Lellis, G.; Di Crescenzo, A.; Di Marco, N.; Furuya, S.; Hakamata, K.; Ishikawa, M.; Katsuragawa, T.; Kuwabara, K.; Machii, S.; Naka, T.; Pupilli, F.; Sirignano, C.; Tawara, Y.; Tioukov, V.; Umemoto, A.; Yoshimoto, M.

2016-07-01

Nuclear emulsion is a perfect choice of detector for directional DM searches because of its high density and excellent position accuracy. The minimal detectable track length of a recoil nucleus in emulsion is required to be at least 100 nm, making the resolution of conventional optical microscopes insufficient to resolve such tracks. Here we report on the R&D on a super-resolution optical microscope to be used in future directional DM search experiments with nuclear emulsion as the detector medium. The microscope will be fully automatic, will use novel image acquisition and analysis techniques, will achieve a spatial resolution of the order of a few tens of nm, and will be capable of reconstructing recoil tracks with a length of at least 100 nm with high angular resolution.

8. Search for Coincidences in Time and Arrival Direction of Auger Data with Astrophysical Transients

SciTech Connect

Anchordoqui, Luis; Collaboration, for the Pierre Auger

2007-06-01

The data collected by the Pierre Auger Observatory are analyzed to search for coincidences between the arrival directions of high-energy cosmic rays and the positions in the sky of astrophysical transients. Special attention is directed towards gamma ray observations recorded by NASA's Swift mission, which have an angular resolution similar to that of the Auger surface detectors. In particular, we check our data for evidence of a signal associated with the giant flare that came from the soft gamma repeater 1806-20 on December 27, 2004.

9. Parallel algorithms and architectures

SciTech Connect

Albrecht, A.; Jung, H.; Mehlhorn, K.

1987-01-01

Contents of this book are the following: Preparata: Deterministic simulation of idealized parallel computers on more realistic ones; Convex hull of randomly chosen points from a polytope; Dataflow computing; Parallel in sequence; Towards the architecture of an elementary cortical processor; Parallel algorithms and static analysis of parallel programs; Parallel processing of combinatorial search; Communications; An O(n log n) cost parallel algorithm for the single function coarsest partition problem; Systolic algorithms for computing the visibility polygon and triangulation of a polygonal region; RELACS - a recursive layout computing system; and Parallel linear conflict-free subtree access.

10. Intrinsic neutron background of nuclear emulsions for directional Dark Matter searches

Alexandrov, A.; Asada, T.; Buonaura, A.; Consiglio, L.; D'Ambrosio, N.; De Lellis, G.; Di Crescenzo, A.; Di Marco, N.; Di Vacri, M. L.; Furuya, S.; Galati, G.; Gentile, V.; Katsuragawa, T.; Laubenstein, M.; Lauria, A.; Loverre, P. F.; Machii, S.; Monacelli, P.; Montesi, M. C.; Naka, T.; Pupilli, F.; Rosa, G.; Sato, O.; Strolin, P.; Tioukov, V.; Umemoto, A.; Yoshimoto, M.

2016-07-01

Recent developments of the nuclear emulsion technology led to the production of films with nanometric silver halide grains suitable to track low energy nuclear recoils with submicrometric length. This improvement opens the way to a directional Dark Matter detection, thus providing an innovative and complementary approach to the on-going WIMP searches. An important background source for these searches is represented by neutron-induced nuclear recoils that can mimic the WIMP signal. In this paper we provide an estimation of the contribution to this background from the intrinsic radioactive contamination of nuclear emulsions. We also report the neutron-induced background as a function of the read-out threshold, by using a GEANT4 simulation of the nuclear emulsion, showing that it amounts to about 0.06 per year per kilogram, fully compatible with the design of a 10 kg × year exposure.

12. Adaptive particle-based pore-level modeling of incompressible fluid flow in porous media: a direct and parallel approach

Ovaysi, S.; Piri, M.

2009-12-01

We present a three-dimensional fully dynamic parallel particle-based model for direct pore-level simulation of incompressible viscous fluid flow in disordered porous media. The model was developed from scratch and is capable of simulating flow directly in three-dimensional high-resolution microtomography images of naturally occurring or man-made porous systems. It reads the images as input, where the positions of the solid walls are given. The entire medium, i.e., solid and fluid, is then discretized using particles. The model is based on the Moving Particle Semi-implicit (MPS) technique. We modify this technique in order to improve its stability. The model handles highly irregular fluid-solid boundaries effectively. It takes into account viscous pressure drop in addition to the gravity forces. It conserves mass and can automatically detect any false connectivity with fluid particles in the neighboring pores and throats. It includes a sophisticated algorithm to automatically split and merge particles to maintain hydraulic connectivity of extremely narrow conduits. Furthermore, it uses novel methods to handle particle inconsistencies and open boundaries. To handle the computational load, we present a fully parallel version of the model that runs on distributed memory computer clusters and exhibits excellent scalability. The model is used to simulate unsteady-state flow problems under different conditions, starting from straight noncircular capillary tubes with different cross-sectional shapes, i.e., circular/elliptical, square/rectangular and triangular cross-sections. We compare the predicted dimensionless hydraulic conductances with the data available in the literature and observe an excellent agreement. We then test the scalability of our parallel model with two samples of an artificial sandstone, samples A and B, with different volumes and different distributions (non-uniform and uniform) of solid particles among the processors. An excellent linear scalability is observed.

13. Neutralino dark matter in minimal supergravity: Direct detection versus collider searches

SciTech Connect

Baer, H.; Brhlik, M.

1998-01-01

We calculate expected event rates for direct detection of relic neutralinos as a function of parameter space of the minimal supergravity model. Numerical results are presented for the specific case of a ⁷³Ge detector. We find significant detection rates (R > 0.01 events/kg/day) in regions of parameter space most favored by constraints from B → X_s γ and the cosmological relic density of neutralinos. The detection rates are especially large in regions of large tan β, where many conventional signals for supersymmetry at collider experiments are difficult to detect. If the parameter tan β is large, then there is a significant probability that the first direct evidence for supersymmetry could come from direct detection experiments, rather than from collider searches for sparticles. © 1997 The American Physical Society

14. The Impact of Transiting Planet Science on the Next Generation of Direct-Imaging Planet Searches

Carson, Joseph C.

2009-02-01

Within the next five years, a number of direct-imaging planet search instruments, like the VLT SPHERE instrument, will be coming online. To successfully carry out their programs, these instruments will rely heavily on a priori information on planet composition, atmosphere, and evolution. Transiting planet surveys, while covering a different semi-major axis regime, have the potential to provide critical foundations for these next-generation surveys. For example, improved information on planetary evolutionary tracks may significantly impact the insights that can be drawn from direct-imaging statistical data. Other high-impact results from transiting planet science include information on mass-to-radius relationships as well as atmospheric absorption bands. The marriage of transiting planet and direct-imaging results may eventually give us the first complete picture of planet migration, multiplicity, and general evolution.

15. Applicability of preparative overpressured layer chromatography and direct bioautography in search of antibacterial chamomile compounds.

PubMed

Móricz, Agnes M; Ott, Péter G; Alberti, Agnes; Böszörményi, Andrea; Lemberkovics, Eva; Szoke, Eva; Kéry, Agnes; Mincsovics, Emil

2013-01-01

In situ sample preparation and preparative overpressured layer chromatography (OPLC) fractionation on a 0.5 mm thick adsorbent layer of chamomile flower methanol extract prepurified by conventional gravitation accelerated column chromatography were applied in searching for bioactive components. Sample cleanup in situ on the adsorbent layer subsequent to sample application was performed using mobile phase flow in the opposite direction (the input and output of the eluent was exchanged). The antibacterial effect of the fractions obtained from the stepwise gradient OPLC separation with the flow in the normal direction was evaluated by direct bioautography against two Gram-negative bacteria: the luminescence gene tagged plant pathogenic Pseudomonas syringae pv. maculicola, and the naturally luminescent marine bacterium Vibrio fischeri. The fractions having strong activity were analyzed by SPME-GC/MS and HPLC/MS/MS. Mainly essential oil components, coumarins, flavonoids, phenolic acids, and fatty acids were tentatively identified in the fractions. PMID:24645496

16. Light neutralino dark matter: direct/indirect detection and collider searches

Han, Tao; Liu, Zhen; Su, Shufang

2014-08-01

We study the neutralino being the Lightest Supersymmetric Particle (LSP) as a cold Dark Matter (DM) candidate with a mass less than 40 GeV in the framework of the Next-to-Minimal-Supersymmetric-Standard-Model (NMSSM). We find that with the current collider constraints from LEP, the Tevatron and the LHC, there are three types of light DM solutions consistent with the direct/indirect searches as well as the relic abundance considerations: (i) A1, H1-funnels, (ii) stau coannihilation and (iii) sbottom coannihilation. Type-(i) may take place in any theory with a light scalar (or pseudo-scalar) near the LSP pair threshold; while Type-(ii) and (iii) could occur in the framework of the Minimal-Supersymmetric-Standard-Model (MSSM) as well. We present a comprehensive study on the properties of these solutions and point out their immediate relevance to the experiments of the underground direct detection such as superCDMS and LUX/LZ, and the astro-physical indirect search such as Fermi-LAT. We also find that the decays of the SM-like Higgs boson may be modified appreciably and the new decay channels to the light SUSY particles may be sizable. The new light CP-even and CP-odd Higgs bosons will decay to a pair of LSPs as well as other observable final states, leading to interesting new Higgs phenomenology at colliders. For the light sfermion searches, the signals would be very challenging to observe at the LHC given the current bounds. However, a high energy and high luminosity lepton collider, such as the ILC, would be able to fully cover these scenarios by searching for events with large missing energy plus charged tracks or displaced vertices.

17. A comparison of directed search target detection versus in-scene target detection in Worldview-2 datasets

Grossman, S.

2015-05-01

Since the events of September 11, 2001, the intelligence focus has moved from large order-of-battle targets to small targets of opportunity. Additionally, the business community has discovered the use of remotely sensed data to anticipate demand and derive data on their competition. This requires the finer spectral and spatial fidelity now available to recognize those targets. This work hypothesizes that directed searches using calibrated data perform at least as well as in-scene, manually intensive target detection searches. It uses calibrated Worldview-2 multispectral images with NEF-generated signatures and standard detection algorithms to compare bespoke directed search capabilities against ENVI™ in-scene search capabilities. Multiple execution runs are performed at increasing thresholds to generate detection rates. These rates are plotted and statistically analyzed. While individual head-to-head comparison results vary, 88% of the directed searches performed at least as well as in-scene searches, with 50% clearly outperforming in-scene methods. The results strongly support the premise that directed searches perform at least as well as comparable in-scene searches.

18. Drawing Parallels in Search of Educational Equity: A Multicultural Education Delegation to China Looks Outside to See Within

ERIC Educational Resources Information Center

Carjuzaa, Jioanna; Fenimore-Smith, J. Kay; Fuller, Ethlyn Davis; Howe, William A.; Kugler, Eileen; London, Arcenia P.; Ruiz, Ivette; Shin, Barbara

2008-01-01

In 2004, a professional delegation of multicultural educators visited the People's Republic of China to explore how diversity issues are addressed and how students are prepared for entry into the international workforce. The delegation, sponsored by the People to People Ambassador Programs, observed numerous parallels to the American system of…

19. Steganography in clustered-dot halftones using orientation modulation and modification of direct binary search

Chen, Yung-Yao; Hong, Sheng-Yi; Chen, Kai-Wen

2015-03-01

This paper proposes a novel message-embedded halftoning scheme that is based on orientation modulation (OM) encoding. To achieve high image quality, we employ a human visual system (HVS)-based error metric between the continuous-tone image and a data-embedded halftone, and integrate a modified direct binary search (DBS) framework into the proposed message-embedded halftoning method. The modified DBS framework ensures that the resulting data-embedded halftones have optimal image quality from the viewpoint of the HVS.

20. A Generalized Radiation Model for Human Mobility: Spatial Scale, Searching Direction and Trip Constraint

PubMed Central

Kang, Chaogui; Liu, Yu; Guo, Diansheng; Qin, Kun

2015-01-01

We generalized the recently introduced “radiation model”, as an analog to the generalization of the classic “gravity model”, to consolidate its universality for modeling diverse mobility systems. By imposing the appropriate scaling exponent λ, normalization factor κ and system constraints, including searching direction and trip OD constraint, the generalized radiation model accurately captures real human movements in various scenarios and spatial scales, including two different countries and four different cities. Our analytical results also indicated that the generalized radiation model outperformed alternative mobility models in various empirical analyses. PMID:26600153
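For reference, the classic (ungeneralized) radiation model predicts the trip flux from location i to j as T_ij = T_i · m_i n_j / ((m_i + s_ij)(m_i + n_j + s_ij)), where m_i and n_j are origin and destination populations and s_ij is the population within a circle of radius r_ij around i, excluding i and j. A minimal sketch of this baseline follows; the generalized model's exponent λ, factor κ, and direction/OD constraints modify this formula in ways not reproduced here.

```python
def radiation_flux(T_i, m_i, n_j, s_ij):
    """Classic radiation-model flux from i to j:
    T_i * m_i * n_j / ((m_i + s_ij) * (m_i + n_j + s_ij)).
    T_i: total trips leaving i; m_i, n_j: populations of i and j;
    s_ij: intervening population within radius r_ij of i."""
    return T_i * m_i * n_j / ((m_i + s_ij) * (m_i + n_j + s_ij))
```

Note the model is parameter-free in its classic form: with no intervening population (s_ij = 0) and equal populations m_i = n_j = 10, a hundred departing trips yield a flux of 50, and the flux decays monotonically as s_ij grows.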

1. Efficient design of direct-binary-search computer-generated holograms

SciTech Connect

Jennison, B.K.; Allebach, J.P.; Sweeney, D.W.

1991-04-01

Computer-generated holograms (CGHs) synthesized by the iterative direct-binary-search (DBS) algorithm yield lower reconstruction error and higher diffraction efficiency than do CGHs designed by conventional methods, but the DBS algorithm is computationally intensive. A fast algorithm for DBS is developed that recursively computes the error measure to be minimized. For complex amplitude-based error, the required computation for an L-point CGH is analyzed, and modifications are considered in order to make the algorithm more efficient. An acceleration technique that attempts to increase the rate of convergence of the DBS algorithm is also investigated.
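A naive DBS loop, without the paper's recursive error update, can be sketched as follows: toggle one binary hologram pixel at a time, recompute the reconstruction, and keep the toggle only if the error against the target decreases. The FFT-based reconstruction and amplitude error metric here are illustrative assumptions; the paper's contribution is precisely avoiding the full recomputation performed below on every trial toggle.

```python
import numpy as np

def dbs_cgh(target, iters=5, rng=None):
    """Naive direct binary search for a binary CGH: greedy per-pixel
    toggles accepted only when the reconstruction error decreases.
    Recomputes the error from scratch each trial, for clarity."""
    if rng is None:
        rng = np.random.default_rng(0)
    h = rng.integers(0, 2, size=target.shape)   # random binary start

    def err(holo):
        recon = np.abs(np.fft.fft2(holo))       # illustrative reconstruction
        return ((recon - target) ** 2).sum()    # amplitude-based error

    best = err(h)
    for _ in range(iters):
        for idx in np.ndindex(h.shape):         # visit every pixel in turn
            h[idx] ^= 1                         # trial toggle
            e = err(h)
            if e < best:
                best = e                        # accept: error decreased
            else:
                h[idx] ^= 1                     # reject: revert the toggle
    return h, best
```

With a zero target the greedy loop drives every pixel to zero in a single pass, which makes the accept/reject logic easy to verify before swapping in a realistic target spectrum.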

2. Fuel cell and lithium iron phosphate battery hybrid powertrain with an ultracapacitor bank using direct parallel structure

Xie, Changjun; Xu, Xinyi; Bujlo, Piotr; Shen, Di; Zhao, Hengbing; Quan, Shuhai

2015-04-01

In this study, a novel fuel cell-Li-ion battery hybrid powertrain using a direct parallel structure with an ultracapacitor bank is presented. In addition, a fuzzy logic controller is designed for the energy management of the hybrid powertrain, aimed at adjusting and stabilizing the DC bus voltage via a bidirectional DC/DC converter. To validate the fuel cell-Li-ion battery-ultracapacitor (FC-LIB-UC) hybrid powertrain and the energy management strategies developed in this study, a test station powered by a 1 kW fuel cell system, a 2.8 kWh Li-ion battery pack and a 330 F/48.6 V ultracapacitor bank is designed and constructed on the basis of stand-alone modules. Finally, an Urban Dynamometer Driving Schedule cycle is performed on this station and the experimental results show that (i) the power distribution of the FC system is narrowest and the power distribution of the UC bank is widest during a cycle, and (ii) the FC system is controlled to satisfy the slow dynamic variation in this hybrid powertrain and the output of the LIB pack and UC bank is adjusted to meet fast dynamic load requirements. As a result, the proposed FC-LIB-UC hybrid powertrain can take full advantage of the three kinds of energy sources.

3. Non-CAR resists and advanced materials for Massively Parallel E-Beam Direct Write process integration

Pourteau, Marie-Line; Servin, Isabelle; Lepinay, Kévin; Essomba, Cyrille; Dal'Zotto, Bernard; Pradelles, Jonathan; Lattard, Ludovic; Brandt, Pieter; Wieland, Marco

2016-03-01

The emerging Massively Parallel-Electron Beam Direct Write (MP-EBDW) is an attractive high-resolution, high-throughput lithography technology. As previously shown, Chemically Amplified Resists (CARs) meet process/integration specifications in terms of dose-to-size, resolution, contrast, and energy latitude. However, they are still limited by their line width roughness. To overcome this issue, we tested an alternative advanced non-CAR and showed that it brings a substantial gain in sensitivity compared to CARs. We also implemented and assessed in-line post-lithographic treatments for roughness mitigation. For outgassing-reduction purposes, a top-coat layer is added to the total process stack. A new-generation top-coat was tested and showed improved printing performance compared to the previous product, especially by avoiding dark erosion: SEM cross-sections showed a straight pattern profile. A spin-coatable charge dissipation layer based on conductive polyaniline was also tested for conductivity and lithographic performance, and compatibility experiments revealed that the underlying resist type has to be carefully chosen when using this product. Finally, the Process Of Reference (POR) trilayer stack defined for 5 kV multi-e-beam lithography was successfully etched with well-opened, straight patterns and no lithography-etch bias.

4. Unravelling how βCaMKII controls the direction of plasticity at parallel fibre-Purkinje cell synapses

Pinto, Thiago M.; Schilstra, Maria J.; Steuber, Volker; Roque, Antonio C.

2015-12-01

Long-term plasticity at parallel fibre (PF)-Purkinje cell (PC) synapses is thought to mediate cerebellar motor learning. It is known that calcium-calmodulin dependent protein kinase II (CaMKII) is essential for plasticity in the cerebellum. Recently, Van Woerden et al. demonstrated that the β isoform of CaMKII regulates the bidirectional inversion of PF-PC plasticity. Because the cellular events that underlie these experimental findings are still poorly understood, our work aims at unravelling how βCaMKII controls the direction of plasticity at PF-PC synapses. We developed a bidirectional plasticity model that replicates the experimental observations by Van Woerden et al. Simulation results obtained from this model indicate the mechanisms that underlie the bidirectional inversion of cerebellar plasticity. As suggested by Van Woerden et al., filamentous actin binding enables βCaMKII to regulate the bidirectional plasticity at PF-PC synapses. Our model suggests that the reversal of long-term plasticity in PCs is based on a combination of mechanisms that occur at different calcium concentrations.

5. Direction of Auditory Pitch-Change Influences Visual Search for Slope From Graphs.

PubMed

Parrott, Stacey; Guzman-Martinez, Emmanuel; Orte, Laura; Grabowecky, Marcia; Huntington, Mark D; Suzuki, Satoru

2015-01-01

Linear trend (slope) is important information conveyed by graphs. We investigated how sounds influenced slope detection in a visual search paradigm. Four bar graphs or scatter plots were presented on each trial. Participants looked for a positive-slope or a negative-slope target (in blocked trials), and responded to targets in a go or no-go fashion. For example, in a positive-slope-target block, the target graph displayed a positive slope while other graphs displayed negative slopes (a go trial), or all graphs displayed negative slopes (a no-go trial). When an ascending or descending sound was presented concurrently, ascending sounds slowed detection of negative-slope targets whereas descending sounds slowed detection of positive-slope targets. The sounds had no effect when they immediately preceded the visual search displays, suggesting that the results were due to crossmodal interaction rather than priming. The sounds also had no effect when targets were words describing slopes, such as "positive," "negative," "increasing," or "decreasing," suggesting that the results were unlikely due to semantic-level interactions. Manipulations of spatiotemporal similarity between sounds and graphs had little effect. These results suggest that ascending and descending sounds influence visual search for slope based on a general association between the direction of auditory pitch-change and visual linear trend. PMID:26541054

6. Directed search for gravitational waves from Scorpius X-1 with initial LIGO data

2015-03-01

We present results of a search for continuously emitted gravitational radiation, directed at the brightest low-mass x-ray binary, Scorpius X-1. Our semicoherent analysis covers 10 days of LIGO S5 data ranging from 50-550 Hz, and performs an incoherent sum of coherent F-statistic power distributed amongst frequency-modulated orbital sidebands. All candidates not removed at the veto stage were found to be consistent with noise at a 1% false alarm rate. We present Bayesian 95% confidence upper limits on gravitational-wave strain amplitude using two different prior distributions: a standard one, with no a priori assumptions about the orientation of Scorpius X-1; and an angle-restricted one, using a prior derived from electromagnetic observations. Median strain upper limits of 1.3 × 10^-24 and 8 × 10^-25 are reported at 150 Hz for the standard and angle-restricted searches respectively. This proof-of-principle analysis was limited to a short observation time by unknown effects of accretion on the intrinsic spin frequency of the neutron star, but improves upon previous upper limits by factors of ~1.4 for the standard, and 2.3 for the angle-restricted search at the sensitive region of the detector.

7. 3-dimensional magnetotelluric inversion including topography using deformed hexahedral edge finite elements and direct solvers parallelized on symmetric multiprocessor computers - Part II: direct data-space inverse solution

Kordy, M.; Wannamaker, P.; Maris, V.; Cherkaev, E.; Hill, G.

2016-01-01

Following the creation described in Part I of a deformable edge finite-element simulator for 3-D magnetotelluric (MT) responses using direct solvers, in Part II we develop an algorithm named HexMT for 3-D regularized inversion of MT data including topography. Direct solvers parallelized on large-RAM, symmetric multiprocessor (SMP) workstations are used also for the Gauss-Newton model update. By exploiting the data-space approach, the computational cost of the model update becomes much less in both time and computer memory than the cost of the forward simulation. In order to regularize using the second norm of the gradient, we factor the matrix related to the regularization term and apply its inverse to the Jacobian, which is done using the MKL PARDISO library. For dense matrix multiplication and factorization related to the model update, we use the PLASMA library which shows very good scalability across processor cores. A synthetic test inversion using a simple hill model shows that including topography can be important; in this case depression of the electric field by the hill can cause false conductors at depth or mask the presence of resistive structure. With a simple model of two buried bricks, a uniform spatial weighting for the norm of model smoothing recovered more accurate locations for the tomographic images compared to weightings which were a function of parameter Jacobians. We implement joint inversion for static distortion matrices tested using the Dublin secret model 2, for which we are able to reduce nRMS to ˜1.1 while avoiding oscillatory convergence. Finally we test the code on field data by inverting full impedance and tipper MT responses collected around Mount St Helens in the Cascade volcanic chain. Among several prominent structures, the north-south trending, eruption-controlling shear zone is clearly imaged in the inversion.

8. Searches for direct pair production of third generation squarks with the ATLAS detector

Pagacova, Martina

2015-05-01

Naturalness arguments for weak-scale supersymmetry favour supersymmetric partners of the third generation quarks with masses not too far from those of their Standard Model counterparts. If the masses of top and bottom squarks are below 1 TeV, the direct pair production cross-section is sufficient to produce observable signatures at the ATLAS detector and to probe various theoretical scenarios with the Large Hadron Collider (LHC) data at √s = 8 TeV. The most recent ATLAS results from searches for direct stop and sbottom pair production are presented in these proceedings. No evidence of deviations from the Standard Model expectation has been observed, and the limits have been set on the masses of the top and bottom squarks.

9. A search for parallel electric fields by observing secondary electrons and photoelectrons in the low-altitude auroral zone

NASA Technical Reports Server (NTRS)

Fung, Shing F.; Hoffman, R. A.

1991-01-01

Model calculations are performed demonstrating the effect of weak parallel electric fields on the differential spectra of the low-energy electrons observed in the inverted-V electron precipitation events in the topside ionosphere. A comparison of the altitude dependence of the observed spectra with the model calculations shows that there can be, on average, no more than a 2-V potential drop between the altitudes of 400 and 900 km, corresponding to a distributed parallel dc electric field of less than 4 microV/m under the inverted-V electron precipitation regions. Statistical results are presented on the spectral dependence of secondary electrons on the inverted-V primary beam parameters.

10. A Direct Dark Matter Search with the MAJORANA Low-Background Broad Energy Germanium Detector

It is well established that a significant portion of our Universe is comprised of invisible, non-luminous matter, commonly referred to as dark matter. The detection and characterization of this missing matter is an active area of research in cosmology and particle astrophysics. A general class of candidates for non-baryonic particle dark matter is weakly interacting massive particles (WIMPs). WIMPs emerge naturally from supersymmetry with predicted masses between 1 and 1000 GeV. There are many current and near-future experiments that may shed light on the nature of dark matter by directly detecting WIMP-nucleus scattering events. The MAJORANA experiment will use p-type point contact (PPC) germanium detectors as both the source and detector to search for neutrinoless double-beta decay in 76Ge. These detectors have both exceptional energy resolution and low energy thresholds. The low-energy performance of PPC detectors, due to their low-capacitance point-contact design, makes them suitable for direct dark matter searches. As a part of the research and development efforts for the MAJORANA experiment, a custom Canberra PPC detector has been deployed at the Kimballton Underground Research Facility in Ripplemead, Virginia. This detector has been used to perform a search for low-mass (< 10 GeV) WIMP-induced nuclear recoils using a 221.49 live-day exposure. It was found that events originating near the surface of the detector plague the signal region, even after all cuts. For this reason, only an upper limit on WIMP-induced nuclear recoils was placed. This limit is inconsistent with several recent claims to have observed light-WIMP dark matter.

11. Quasi-steady state reduction of molecular motor-based models of directed intermittent search.

PubMed

Newby, Jay M; Bressloff, Paul C

2010-10-01

We present a quasi-steady state reduction of a linear reaction-hyperbolic master equation describing the directed intermittent search for a hidden target by a motor-driven particle moving on a one-dimensional filament track. The particle is injected at one end of the track and randomly switches between stationary search phases and mobile nonsearch phases that are biased in the anterograde direction. There is a finite possibility that the particle fails to find the target due to an absorbing boundary at the other end of the track. Such a scenario is exemplified by the motor-driven transport of vesicular cargo to synaptic targets located on the axon or dendrites of a neuron. The reduced model is described by a scalar Fokker-Planck (FP) equation, which has an additional inhomogeneous decay term that takes into account absorption by the target. The FP equation is used to compute the probability of finding the hidden target (hitting probability) and the corresponding conditional mean first passage time (MFPT) in terms of the effective drift velocity V, diffusivity D, and target absorption rate λ of the random search. The quasi-steady state reduction determines V, D, and λ in terms of the various biophysical parameters of the underlying motor transport model. We first apply our analysis to a simple 3-state model and show that our quasi-steady state reduction yields results that are in excellent agreement with Monte Carlo simulations of the full system under physiologically reasonable conditions. We then consider a more complex multiple motor model of bidirectional transport, in which opposing motors compete in a "tug-of-war", and use this to explore how ATP concentration might regulate the delivery of cargo to synaptic targets. PMID:20169417
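Based on the quantities named in the abstract (effective drift velocity V, diffusivity D, and target absorption rate λ), the reduced scalar Fokker-Planck equation with its inhomogeneous decay term plausibly takes a form like the following; the indicator function χ marking the target region is an assumption of this sketch, not notation taken from the paper:

```latex
\frac{\partial p}{\partial t}
  = -V\,\frac{\partial p}{\partial x}
  + D\,\frac{\partial^{2} p}{\partial x^{2}}
  - \lambda\,\chi(x)\,p,
\qquad
\chi(x)=
\begin{cases}
1, & x \in \text{target region},\\
0, & \text{otherwise.}
\end{cases}
```

The hitting probability and the conditional mean first passage time then follow from this equation by standard first-passage techniques.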

12. Activity in V4 reflects the direction, but not the latency, of saccades during visual search.

PubMed

Gee, Angela L; Ipata, Anna E; Goldberg, Michael E

2010-10-01

We constantly make eye movements to bring objects of interest onto the fovea for more detailed processing. Activity in area V4, a prestriate visual area, is enhanced at the location corresponding to the target of an eye movement. However, the precise role of activity in V4 in relation to these saccades and the modulation of other cortical areas in the oculomotor system remains unknown. V4 could be a source of visual feature information used to select the eye movement, or alternatively, it could reflect the locus of spatial attention. To test these hypotheses, we trained monkeys on a visual search task in which they were free to move their eyes. We found that activity in area V4 reflected the direction of the upcoming saccade but did not predict the latency of the saccade, in contrast to activity in the lateral intraparietal area (LIP). We suggest that the signals in V4, unlike those in LIP, are not directly involved in the generation of the saccade itself but rather are more closely linked to visual perception and attention. Although V4 and LIP have different roles in spatial attention and preparing eye movements, they likely perform complementary processes during visual search. PMID:20610790

13. Modelling Peripheral Pre-Attention And Foveal Fixation For Search Directed Machine Vision Systems

Luckman, Adrian J.; Allinson, Nigel M.

1990-02-01

The human visual system has evolved towards a close integration of visual information processing and visual data acquisition. Fast, peripheral, pre-attentive vision uses low-resolution input to direct the fixation of the fovea to features of importance in an efficient visual search pattern. Here we describe a system which emulates the multi-resolution aspect of human visual processing to provide computational efficiency in data analysis. The visual task used is the location of specific features in human faces for use in videotelephony. The feature location technique uses a Kohonen-based neural network architecture to permit learning by example. Input data is in the form of a resolution pyramid to emulate the differing modes of human vision. The system is implemented on a RISC-based microcomputer workstation with purpose-built real-time image acquisition hardware. It performs well with both familiar and unseen image data and, with refinement, could form the basis of a usable system.

14. Low-energy recoils and energy scale in liquid xenon detector for direct dark matter searches

Wang, Lu; Mei, Dongming; Cubed Collaboration

2015-04-01

Liquid xenon has been proven to be a great detector medium for the direct search for dark matter. However, in the energy region below 10 keV, the light yield and charge production are not fully understood due to the convolution of excitation, recombination and quenching. We have already studied a recombination model to explain the physics processes involved in liquid xenon. Work is continued on the average energy expended per electron-ion pair as a function of energy, based on the cross sections for different types of scattering processes. In this paper, the results will be discussed in comparison with available experimental data using Birks' law to understand how scintillation quenching contributes to the non-linear light yield for electron recoils with energy below 10 keV in liquid xenon. This work is supported by DOE Grant DE-FG02-10ER46709 and the state of South Dakota.

15. Simulated annealing and metaheuristic for randomized priority search algorithms for the aerial refuelling parallel machine scheduling problem with due date-to-deadline windows and release times

2013-01-01

This article addresses the aerial refuelling scheduling problem (ARSP), where a set of fighter jets (jobs) with certain ready times must be refuelled from tankers (machines) by their due dates; otherwise, they reach a low fuel level (deadline), incurring a high cost. ARSP is an identical parallel machine scheduling problem with release times and due date-to-deadline windows to minimize the total weighted tardiness. A simulated annealing (SA) algorithm and a metaheuristic for randomized priority search (Meta-RaPS), with the newly introduced composite dispatching rule, apparent piecewise tardiness cost with ready times (APTCR), are applied to the problem. Computational experiments compared the algorithms' solutions to optimal solutions for small problems and to each other for larger problems. To obtain optimal solutions, a mixed integer program with a piecewise weighted tardiness objective function was solved for up to 12 jobs. The results show that Meta-RaPS performs better in terms of average relative error but SA is more efficient.
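The SA side of the comparison can be illustrated with a minimal sketch for identical parallel machines with release times and weighted tardiness. The neighbourhood move (relocating one job to a random machine and position) and the list-order dispatching within each machine are illustrative assumptions, not the article's exact algorithm.

```python
import math
import random

def cost(schedule, jobs):
    """Total weighted tardiness: jobs on each machine run in list order,
    each starting no earlier than its release time r."""
    total = 0.0
    for machine in schedule:
        t = 0.0
        for j in machine:
            r, p, d, w = jobs[j]       # release, processing, due date, weight
            t = max(t, r) + p          # wait for release, then process
            total += w * max(0.0, t - d)
    return total

def anneal(jobs, m, iters=2000, t0=10.0, alpha=0.999, seed=0):
    """Simulated annealing sketch: relocate one random job and accept
    worsening moves with probability exp(-delta/T)."""
    rng = random.Random(seed)
    sched = [[] for _ in range(m)]
    for j in range(len(jobs)):         # initial round-robin assignment
        sched[j % m].append(j)
    best = cur = cost(sched, jobs)
    best_sched = [list(s) for s in sched]
    T = t0
    for _ in range(iters):
        src = rng.randrange(m)
        if not sched[src]:
            continue
        pos = rng.randrange(len(sched[src]))
        j = sched[src].pop(pos)
        dst = rng.randrange(m)
        sched[dst].insert(rng.randrange(len(sched[dst]) + 1), j)
        new = cost(sched, jobs)
        if new <= cur or rng.random() < math.exp((cur - new) / T):
            cur = new
            if cur < best:
                best, best_sched = cur, [list(s) for s in sched]
        else:                          # undo the rejected move exactly
            sched[dst].remove(j)
            sched[src].insert(pos, j)
        T *= alpha
    return best, best_sched
```

A geometric cooling schedule (T *= alpha) is used here for simplicity; the article's SA parameters are not specified in the abstract.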

16. Search for Direct Stop Production Using the Razor Variables with the CMS Experiment at the CERN LHC

Gauthier, Lucie

A search for supersymmetry in the context of direct stop production is presented using the full 19/fb dataset collected in 2012 by the Compact Muon Solenoid experiment at the Large Hadron Collider with a center of mass energy of 8 TeV. This analysis makes use of the razor kinematic variables, aimed at formulating searches for new physics as a resonance search, despite the lack of constraints from missing momentum due to new physics particles escaping the detector unseen. In the absence of a signal, upper limits on allowed cross sections are derived, resulting in excluded masses for stops and neutralinos (assumed to be the lightest supersymmetric particle).

17. The first search for sub-eV scalar fields via four-wave mixing at a quasi-parallel laser collider

Homma, Kensuke; Hasebe, Takashi; Kume, Kazuki

2014-08-01

A search for sub-eV scalar fields coupling to two photons has been performed via four-wave mixing at a quasi-parallel laser collider for the first time. The experiment demonstrates the novel approach of searching for resonantly produced sub-eV scalar fields by combining two-color laser fields in vacuum. The aim of this paper is to provide the concrete experimental setup and the analysis method based on specific combinations of polarization states between incoming and outgoing photons, which is extendable to higher-intensity laser systems operated at high repetition rates. No significant signal of four-wave mixing was observed by combining a 0.2 μJ/0.75 ns pulsed laser and a 2 mW CW laser on the same optical axis. Based on the prescription developed for this particular experimental approach, we obtained the upper limit at a confidence level of 95% on the coupling-mass relation.

18. Stable computation of search directions for near-degenerate linear programming problems

SciTech Connect

Hough, P.D.

1997-03-01

In this paper, we examine stability issues that arise when computing search directions (Δx, Δy, Δs) for a primal-dual path-following interior point method for linear programming. The dual step Δy can be obtained by solving a weighted least-squares problem for which the weight matrix becomes extremely ill-conditioned near the boundary of the feasible region. Hough and Vavasis proposed using a type of complete orthogonal decomposition (the COD algorithm) to solve such a problem and presented stability results. The work presented here addresses the stable computation of the primal step Δx and the change in the dual slacks Δs. These directions can be obtained in a straightforward manner, but near-degeneracy in the linear programming instance introduces ill-conditioning which can cause numerical problems in this approach. Therefore, we propose a new method of computing Δx and Δs. More specifically, this paper describes an orthogonal projection algorithm that extends the COD method. Unlike other algorithms, this method is stable for interior point methods without assuming nondegeneracy in the linear programming instance. Thus, it is more general than other algorithms on near-degenerate problems.

19. Multipath Separation-Direction of Arrival (MS-DOA) with Genetic Search Algorithm for HF channels

Arikan, Feza; Koroglu, Ozan; Fidan, Serdar; Arikan, Orhan; Guldogan, Mehmet B.

2009-09-01

Direction-of-Arrival (DOA) estimation is the estimation of the arrival angles of an electromagnetic wave impinging on a set of sensors. For dispersive and time-varying HF channels, where the propagating wave also suffers from multipath phenomena, estimation of DOA is a very challenging problem. Multipath Separation-Direction of Arrival (MS-DOA), which was developed to estimate both the arrival angles in elevation and azimuth and the incoming signals at the output of the reference antenna with very high accuracy, proves itself a strong alternative for DOA estimation on HF channels. In MS-DOA, a linear system of equations is formed using the coefficients of the basis vector for the array output vector, the incoming signal vector and the array manifold. The angles of arrival in elevation and azimuth are obtained as the maximizers of the sum of the magnitude squares of the projection of the signal coefficients on the column space of the array manifold. In this study, alternative Genetic Search Algorithms (GA) for the maximizers of the projection sum are investigated using simulated and experimental ionospheric channel data. It is observed that GA combined with MS-DOA is a powerful alternative in online DOA estimation and can be further developed according to the channel characteristics of a specific HF link.
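A minimal real-coded genetic search of the kind that could drive the maximization over elevation and azimuth can be sketched as follows. The operators chosen here (tournament selection, blend crossover, Gaussian mutation, elitism) and the use of a generic black-box objective standing in for the MS-DOA projection sum are assumptions of this sketch.

```python
import random

def genetic_search(f, bounds, pop=30, gens=60, mut=0.1, seed=0):
    """Minimal real-coded GA: maximize f over the box `bounds`
    using tournament selection, blend crossover, and Gaussian
    mutation, keeping the two best individuals each generation."""
    rng = random.Random(seed)
    def rand_ind():
        return [rng.uniform(lo, hi) for lo, hi in bounds]
    def clip(x):
        return [min(max(v, lo), hi) for v, (lo, hi) in zip(x, bounds)]
    P = [rand_ind() for _ in range(pop)]
    for _ in range(gens):
        scored = sorted(P, key=f, reverse=True)
        nxt = scored[:2]                       # elitism: keep the two best
        while len(nxt) < pop:
            a = max(rng.sample(P, 3), key=f)   # tournament selection
            b = max(rng.sample(P, 3), key=f)
            w = rng.random()                   # blend crossover
            child = [w * x + (1 - w) * y for x, y in zip(a, b)]
            child = [v + rng.gauss(0, mut) for v in child]
            nxt.append(clip(child))
        P = nxt
    return max(P, key=f)
```

For an actual MS-DOA application, f would evaluate the projection of the signal coefficients onto the column space of the array manifold at candidate (elevation, azimuth) pairs.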

20. SCIENCE PARAMETRICS FOR MISSIONS TO SEARCH FOR EARTH-LIKE EXOPLANETS BY DIRECT IMAGING

SciTech Connect

Brown, Robert A.

2015-01-20

We use N_t, the number of exoplanets observed in time t, as a science metric to study direct-search missions like Terrestrial Planet Finder. In our model, N has 27 parameters, divided into three categories: 2 astronomical, 7 instrumental, and 18 science-operational. For various "27-vectors" of those parameters chosen to explore parameter space, we compute design reference missions to estimate N_t. Our treatment includes the recovery of completeness c after a search observation, for revisits, solar and antisolar avoidance, observational overhead, and follow-on spectroscopy. Our baseline 27-vector has aperture D = 16 m, inner working angle IWA = 0.039'', mission time t = 0-5 yr, occurrence probability for Earth-like exoplanets η = 0.2, and typical values for the remaining 23 parameters. For the baseline case, a typical five-year design reference mission has an input catalog of ~4700 stars with nonzero completeness, ~1300 unique stars observed in ~2600 observations, of which ~1300 are revisits, and it produces N_1 ~ 50 exoplanets after one year and N_5 ~ 130 after five years. We explore offsets from the baseline for 10 parameters. We find that N depends strongly on IWA and only weakly on D. It also depends only weakly on zodiacal light for Z < 50 zodis, end-to-end efficiency for h > 0.2, and scattered starlight for ζ < 10^-10. We find that observational overheads, completeness recovery and revisits, solar and antisolar avoidance, and follow-on spectroscopy are all important factors in estimating N.

2. Development of a MEMS electrostatic condenser lens array for nc-Si surface electron emitters of the Massive Parallel Electron Beam Direct-Write system

Kojima, A.; Ikegami, N.; Yoshida, T.; Miyaguchi, H.; Muroyama, M.; Yoshida, S.; Totsu, K.; Koshida, N.; Esashi, M.

2016-03-01

The development of a Micro Electro-Mechanical System (MEMS) electrostatic Condenser Lens Array (CLA) for a Massively Parallel Electron Beam Direct Write (MPEBDW) lithography system is described. The CLA converges parallel electron beams for fine patterning. The structure of the CLA was designed on the basis of a finite element method (FEM) analysis. The lens was fabricated by precise machining and assembled with a nanocrystalline silicon (nc-Si) electron emitter array as the electron source of the MPEBDW system. The nc-Si electron emitter has the advantage that a vertically emitted surface electron beam can be obtained without any extractor electrodes. FEM simulation of the electron-optical characteristics showed that the size of the electron beam emitted from the emitter was reduced to 15% in the radial direction, and the divergence angle was reduced to 1/18.

3. $H \\to \\gamma\\gamma$ search and direct photon pair production differential cross section

SciTech Connect

Bu, Xuebing

2010-06-01

context of the particular fermiophobic Higgs model. The corresponding results have reached the same sensitivity as a single LEP experiment, setting a lower limit on the fermiophobic Higgs of Mhf > 102.5 GeV (Mhf > 107.5 GeV expected). We are slightly below the combined LEP limit (Mhf > 109.7 GeV). We also provide access to the Mhf > 125 GeV region, which was inaccessible at LEP. During the study, we found that the major, irreducible background, direct γγ production (DPP), is not well modelled by the current theoretical predictions: RESBOS, DIPHOX or PYTHIA. There is a ~20% theoretical uncertainty in the predicted values. Thus, for our Higgs search, we use the side-band fitting method to estimate the DPP contribution directly from the data events. Furthermore, DPP production is also a significant background in searches for new phenomena, such as new heavy resonances, extra spatial dimensions, or cascade decays of heavy new particles. Thus, precise measurements of the DPP cross sections for various kinematic variables and their theoretical understanding are extremely important for future Higgs and new phenomena searches. In this thesis, we also present a precise measurement of the DPP single differential cross sections as a function of the diphoton mass, the transverse momentum of the diphoton system, the azimuthal angle between the photons, and the polar scattering angle of the photons, as well as the double differential cross sections considering the last three kinematic variables in three diphoton mass bins, using 4.2 fb-1 of data. These results are the first of their kind at D0 Run II, and in fact the double differential measurements are the first of their kind at the Tevatron. The results are compared with different perturbative QCD predictions and event generators.

4. Theoretical and implementational aspects of parallel-link resolution in connection graphs

SciTech Connect

Loganantharaj, R.

1985-01-01

Resolution theorem provers are relatively slow and can generally be sped up by using parallelism and by directing the search towards an empty clause. This research focuses on the application of parallelism to connection graph refutation. The presence of the complete search space during connection-graph refutation suggests the opportunity to use parallel evaluation strategies to improve the efficiency of a generally very slow process. Pseudo-links are not considered for parallel link resolution because they stand for different copies of a clause. The kinds of parallelism identified in connection graph refutations are OR parallelism, AND parallelism, and dc parallelism. Conditions for the correctness of dc-parallel connection graph refutations are shown, resulting in dcdp parallelism, in which links that are incident to distinct clauses and form edge-disjoint pairs are resolved in parallel. The problem of optimally selecting the potential parallel links is equivalent to the optimal graph coloring problem. Fortunately, optimal solutions to this NP-hard problem are not crucial. The author describes a parallel implementation of a suboptimal graph coloring algorithm, and provides a complete set of algorithms to implement dcdp parallel link resolution on a shared-memory MIMD architecture.
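The suboptimal coloring step can be sketched with the classic greedy heuristic. The encoding assumed here (vertices for resolvable links, edges joining links that share a clause so they cannot be resolved in the same step) is an illustrative reading of the abstract, and the sequential greedy pass stands in for the parallel implementation.

```python
def greedy_coloring(n, edges):
    """Suboptimal greedy graph coloring: vertices are resolvable links,
    and an edge joins two links that conflict (share a clause). Links
    with the same color form one batch of independent links that could
    be resolved in parallel."""
    adj = {v: set() for v in range(n)}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    color = {}
    for v in range(n):                 # visit links in a fixed order
        used = {color[u] for u in adj[v] if u in color}
        c = 0
        while c in used:               # smallest color unused by neighbors
            c += 1
        color[v] = c
    return color
```

Greedy coloring uses at most Δ+1 colors for maximum degree Δ, which is generally suboptimal but, as the abstract notes, optimality is not crucial here.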

5. Results of a Direct Search Using Synchrotron Radiation for the Low-Energy (229)Th Nuclear Isomeric Transition.

PubMed

Jeet, Justin; Schneider, Christian; Sullivan, Scott T; Rellergert, Wade G; Mirzadeh, Saed; Cassanho, A; Jenssen, H P; Tkalya, Eugene V; Hudson, Eric R

2015-06-26

We report the results of a direct search for the (229)Th (I(π)=3/2(+)←5/2(+)) nuclear isomeric transition, performed by exposing (229)Th-doped LiSrAlF(6) crystals to tunable vacuum-ultraviolet synchrotron radiation and observing any resulting fluorescence. We also use existing nuclear physics data to establish a range of possible transition strengths for the isomeric transition. We find no evidence for the thorium nuclear transition between 7.3 eV and 8.8 eV with transition lifetime (1-2) s≲τ≲(2000-5600)  s. This measurement excludes roughly half of the favored transition search area and can be used to direct future searches. PMID:26197124

6. Banks of templates for directed searches of gravitational waves from spinning neutron stars

SciTech Connect

Pisarski, Andrzej; Jaranowski, Piotr; Pietka, Maciej

2011-02-15

We construct efficient banks of templates suitable for directed searches of almost monochromatic gravitational waves originating from spinning neutron stars in our Galaxy in data being collected by currently operating interferometric detectors. We thus assume that the position of the gravitational-wave source in the sky is known, but we do not assume that the wave's frequency and its derivatives are a priori known. In the construction we employ a simplified model of the signal with constant amplitude and a phase which is a polynomial function of time. All our template banks enable usage of the fast Fourier transform algorithm in the computation of the maximum-likelihood F-statistic for nodes of the grids defining the bank. We study and employ the dependence of the grid's construction on the choice of the position of the observational interval with respect to the origin of the time axis. We also study the usage of fast Fourier transform algorithms with nonstandard frequency resolutions achieved by zero padding or folding the data. In the case of the gravitational-wave signal with one spin-down parameter included we have found grids with covering thicknesses which are only 0.1-16% larger than the thickness of the optimal 2-dimensional hexagonal covering.
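Zero padding, used above to obtain nonstandard frequency resolutions, refines the discrete Fourier transform's frequency grid without adding information: an N-point signal padded to M > N samples is evaluated on M frequency bins instead of N. A toy pure-Python illustration (naive DFT, not the search pipeline's FFT code):

```python
import cmath

def dft(x):
    """Naive O(N^2) discrete Fourier transform."""
    n = len(x)
    return [sum(x[k] * cmath.exp(-2j * cmath.pi * j * k / n) for k in range(n))
            for j in range(n)]

signal = [1.0, 0.0, -1.0, 0.0]      # 4 samples of a cosine at bin 1
padded = signal + [0.0] * 4         # zero pad to 8 points

coarse = dft(signal)                # bin spacing 1/(4*dt)
fine = dft(padded)                  # bin spacing 1/(8*dt): twice as fine
# Every other fine bin coincides exactly with a coarse bin; the new
# bins interpolate the same underlying spectrum.
```

The padded transform is simply the same spectrum sampled twice as densely, which is why padding (or, conversely, folding) lets an FFT-based F-statistic search match a grid whose frequency spacing differs from 1/T.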

7. Dark matter production from Goldstone boson interactions and implications for direct searches and dark radiation

SciTech Connect

Garcia-Cely, Camilo; Ibarra, Alejandro; Molinaro, Emiliano E-mail: alejandro.ibarra@ph.tum.de

2013-11-01

The stability of the dark matter particle could be attributed to the remnant Z{sub 2} symmetry that arises from the spontaneous breaking of a global U(1) symmetry. This plausible scenario contains a Goldstone boson which, as recently shown by Weinberg, is a strong candidate for dark radiation. We show in this paper that this Goldstone boson, together with the CP-even scalar associated to the spontaneous breaking of the global U(1) symmetry, plays a central role in the dark matter production. Moreover, the mixing of the CP-even scalar with the Standard Model Higgs boson leads to novel Higgs decay channels and to interactions with nucleons, thus opening the possibility of probing this scenario at the LHC and in direct dark matter search experiments. We carefully analyze the latter possibility and show that there are good prospects to observe a signal at the future experiments LUX and XENON1T, provided the dark matter particle was produced thermally and has a mass larger than ∼ 25 GeV.

8. Radiopurity of CaWO4 crystals for direct dark matter search with CRESST and EURECA

Münster, A.; Sivers, M. v.; Angloher, G.; Bento, A.; Bucci, C.; Canonica, L.; Erb, A.; Feilitzsch, F. v.; Gorla, P.; Gütlein, A.; Hauff, D.; Jochum, J.; Kraus, H.; Lanfranchi, J.-C.; Laubenstein, M.; Loebell, J.; Ortigoza, Y.; Petricca, F.; Potzel, W.; Pröbst, F.; Puimedon, J.; Reindl, F.; Roth, S.; Rottler, K.; Sailer, C.; Schäffner, K.; Schieck, J.; Scholl, S.; Schönert, S.; Seidel, W.; Stodolsky, L.; Strandhagen, C.; Strauss, R.; Tanzke, A.; Uffinger, M.; Ulrich, A.; Usherov, I.; Wawoczny, S.; Willers, M.; Wüstrich, M.; Zöller, A.

2014-05-01

The direct dark matter search experiment CRESST uses scintillating CaWO4 single crystals as targets for possible WIMP scatterings. An intrinsic radioactive contamination of the crystals as low as possible is crucial for the sensitivity of the detectors. In the past CaWO4 crystals operated in CRESST were produced by institutes in Russia and the Ukraine. Since 2011 CaWO4 crystals have also been grown at the crystal laboratory of the Technische Universität München (TUM) to better meet the requirements of CRESST and of the future tonne-scale multi-material experiment EURECA. The radiopurity of the raw materials and of first TUM-grown crystals was measured by ultra-low background γ-spectrometry. Two TUM-grown crystals were also operated as low-temperature detectors at a test setup in the Gran Sasso underground laboratory. These measurements were used to determine the crystals' intrinsic α-activities which were compared to those of crystals produced at other institutes. The total α-activities of TUM-grown crystals as low as 1.23±0.06 mBq/kg were found to be significantly smaller than the activities of crystals grown at other institutes typically ranging between ~ 15 mBq/kg and ~ 35 mBq/kg.

9. In search for better pharmacological prophylaxis for acute mountain sickness: looking in other directions.

PubMed

Lu, H; Wang, R; Xiong, J; Xie, H; Kayser, B; Jia, Z P

2015-05-01

Despite decades of research, the exact pathogenic mechanisms underlying acute mountain sickness (AMS) are still poorly understood. This fact frustrates the search for novel pharmacological prophylaxis for AMS. The prevailing view is that AMS results from an insufficient physiological response to hypoxia and that prophylaxis should aim at stimulating the response. Starting off from the opposite hypothesis that AMS may be caused by an initial excessive response to hypoxia, we suggest that directly or indirectly blunting specific parts of the response might provide promising research alternatives. This reasoning is based on the observations that (i) humans, once acclimatized, can climb Mt Everest experiencing arterial partial oxygen pressures (PaO2) as low as 25 mmHg without AMS symptoms; (ii) paradoxically, AMS usually develops at much higher PaO2 levels; and (iii) several biomarkers, suggesting initial activation of specific pathways at such PaO2, are correlated with AMS. Apart from looking for substances that stimulate certain hypoxia-triggered effects, such as the ventilatory response to hypoxia, we suggest also investigating pharmacological means aimed at blunting certain other specific hypoxia-activated pathways, or stimulating their agonists, in the quest for better pharmacological prophylaxis for AMS. PMID:25778288

10. Synthesis of chirped apodized fiber Bragg grating parameters using Direct Tabu Search algorithm: Application to the determination of thermo-optic and thermal expansion coefficients

Karim, Fethallah; Seddiki, Omar

2010-05-01

In this paper, Direct Tabu Search (DTS) is proposed to synthesize the physical parameters of a fiber Bragg grating (FBG) numerically from its reflection response. The reflected spectrum is calculated using the Transfer Matrix Method (TMM). Direct-search-based strategies are used to direct the tabu search. These strategies are based on a new pattern search procedure called Adaptive Pattern Search (APS). In addition, the well-known Nelder-Mead (NME) algorithm is used as a local search method at the final stage of the optimization process. Direct Tabu Search is applied to the reconstruction of a raised-cosine chirped fiber Bragg grating (CFBG) and a Gaussian multi-channel fiber grating. The method is then used to synthesize a CFBG from its reflectivity taken at different temperatures. It gives a good estimate of the thermal expansion coefficient and the thermo-optic coefficient of the fiber.

11. A Comparison Study of the Paper-and-Pencil, Personal Computer, and Internet Versions of Holland's Self-Directed Search

ERIC Educational Resources Information Center

Lumsden, Jill A.; Sampson, James P., Jr.; Reardon, Robert C.; Lenz, Janet G.; Peterson, Gary W.

2004-01-01

The authors examined the extent to which the Realistic, Investigative, Artistic, Social, Enterprising, and Conventional scales and 3-point codes of the Self-Directed Search may be considered statistically and practically equivalent across 3 different modes of administration: paper-and-pencil, personal computer, and Internet. Student preferences…

12. Interest Profile Elevation, Big Five Personality Traits, and Secondary Constructs on the Self-Directed Search: A Replication and Extension

ERIC Educational Resources Information Center

Bullock, Emily E.; Reardon, Robert C.

2008-01-01

The study used the Self-Directed Search (SDS) and the NEO-FFI to explore profile elevation, four secondary constructs, and the Big Five personality factors in a sample of college students in a career course. Regression model results showed that openness, conscientiousness, differentiation high-low, differentiation Iachan, and consistency accounted…

13. The Influence of Career Indecision on the Strong Interest Inventory and the Self-Directed Search: A Pilot Study.

ERIC Educational Resources Information Center

Rowell, R. Kevin

A pilot study was conducted with 48 adults to determine if career indecision/dissatisfaction as indicated by flat Strong Interest Inventory (SII) (L. Harmon, J. Hansen, F. Borgen, and A. Hammer, 1994) profiles corresponded with flat profiles on the Self-Directed Search (SDS) and to determine if indecision affected scores on SII Personal Style…

14. Using the Self-Directed Search in Research: Selecting a Representative Pool of Items to Measure Vocational Interests

ERIC Educational Resources Information Center

Poitras, Sarah-Caroline; Guay, Frederic; Ratelle, Catherine F.

2012-01-01

Using Item Response Theory (IRT) and Confirmatory Factor Analysis (CFA), the goal of this study was to select a reduced pool of items from the French Canadian version of the Self-Directed Search--Activities Section (Holland, Fritzsche, & Powell, 1994). Two studies were conducted. Results of Study 1, involving 727 French Canadian students, showed…

15. Comparison of Self-Scoring Error Rate for SDS (Self Directed Search) (1970) and the Revised SDS (1977).

ERIC Educational Resources Information Center

Price, Gary E.; And Others

A comparison of Self-Scoring Error Rate for Self Directed Search (SDS) and the revised SDS is presented. The subjects were college freshmen and sophomores who participated in career planning as a part of their orientation program, and a career workshop. Subjects, N=190 on first study and N=84 on second study, were then randomly assigned to the SDS…

16. WEIRD : Wide orbit Exoplanet search with InfraRed Direct imaging

Baron, Frédérique; Artigau, Etienne; Rameau, Julien; Lafrenière, David; Albert, Loic; Naud, Marie-Eve; Gagné, Jonathan; Malo, Lison; Doyon, Rene; Beichman, Charles; Delorme, Philippe; Janson, Markus

2015-12-01

We currently do not know what the emission spectrum of a young 1 Jupiter-mass planet looks like, as no such object has yet been directly imaged. Arguably, the most useful Jupiter-mass planet would be one that is bound to a star of known age, distance, and metallicity but whose orbit is large enough (100-5000 AU) that it can be studied as an "isolated" object. We are therefore searching for the most extreme planetary systems. We are currently gathering a large dataset to identify such objects through deep [3.6] and [4.5] imaging from SPITZER and deep seeing-limited J (with Flamingos 2 and WIRCam) and z imaging (with GMOS-S and MegaCam) of all 181 known confirmed members of young associations (<120 Myr) within 70 pc of the Sun. Our study will reveal distant planetary-mass companions, over a separation range extending up to 5000 AU, through their distinctively red z-J and [4.5]-[3.6] colors. The sensitivity limits of our combined Spitzer+ground-based program will allow detection of planets with masses as low as 1 Mjup with very low contamination rates. Here we present some preliminary results of our survey. This approach is unique in the community and will give us an overview of the architecture of the outer parts of planetary systems that have never been probed before. Our survey will provide benchmark young Saturns and Jupiters for imaging and spectroscopy with the JWST.

17. Diagnostic Assessment of the Difficulty Using Direct Policy Search in Many-Objective Reservoir Control

Zatarain-Salazar, J.; Reed, P. M.; Herman, J. D.; Giuliani, M.; Castelletti, A.

2014-12-01

Globally, reservoir operations provide fundamental services to water supply, energy generation, recreation, and ecosystems. The pressures of expanding populations, climate change, and increased energy demands are motivating a significant investment in re-operationalizing existing reservoirs or defining operations for new reservoirs. Recent work has highlighted the potential benefits of exploiting advances in many-objective optimization and direct policy search (DPS) to aid in addressing these systems' multi-sector demand tradeoffs. This study contributes a comprehensive diagnostic assessment of the efficiency, effectiveness, reliability, and controllability of multi-objective evolutionary algorithms (MOEAs) when supporting DPS for the Conowingo dam in the Lower Susquehanna River Basin. The Lower Susquehanna River is an interstate water body that has been subject to intensive water management efforts due to the system's competing demands from urban water supply, atomic power plant cooling, hydropower production, and federally regulated environmental flows. Seven benchmark and state-of-the-art MOEAs are tested on deterministic and stochastic instances of the Susquehanna test case. In the deterministic formulation, the operating objectives are evaluated over the historical realization of the hydroclimatic variables (i.e., inflows and evaporation rates). In the stochastic formulation, the same objectives are instead evaluated over an ensemble of stochastic inflow and evaporation rate realizations. The algorithms are evaluated in their ability to support DPS in discovering reservoir operations that compose the tradeoffs for six multi-sector performance objectives with thirty-two decision variables. Our diagnostic results highlight that many-objective DPS is very challenging for modern MOEAs and that epsilon dominance is critical for attaining high levels of performance. The epsilon dominance algorithms epsilon-MOEA, epsilon-NSGAII and the auto adaptive Borg
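The epsilon dominance the abstract credits coarsens the usual Pareto-dominance test onto a grid of side ε, so an archive keeps at most one solution per grid box and stays bounded and diverse. A minimal sketch for minimization, with hypothetical objective vectors (not the study's actual solver):

```python
import math

def eps_box(obj, eps):
    """Grid-box index of an objective vector (minimization)."""
    return tuple(math.floor(v / eps) for v in obj)

def eps_dominates(a, b, eps):
    """True if a's box is no worse than b's in every objective and
    strictly better in at least one (i.e., the boxes differ)."""
    box_a, box_b = eps_box(a, eps), eps_box(b, eps)
    return all(x <= y for x, y in zip(box_a, box_b)) and box_a != box_b

# Two nearly identical solutions share a box: neither dominates,
# so the archive keeps only one and avoids clutter.
assert not eps_dominates((1.01, 2.02), (1.03, 2.04), eps=0.25)
# A clearly better solution dominates across boxes.
assert eps_dominates((0.5, 1.0), (1.5, 2.0), eps=0.25)
```

The box test is what gives algorithms such as epsilon-MOEA and Borg their guaranteed archive resolution: ε sets the finest distinction between solutions the search is asked to maintain.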

18. Relative scintillation efficiency of liquid xenon in the XENON10 direct dark matter search

Manzur, Angel

There is almost universal agreement that most of the mass in the Universe consists of dark matter. Many lines of reasoning suggest that the dark matter consists of a weakly interacting massive particle (WIMP) with mass ranging from 10 GeV/c^2 to a few TeV/c^2. Today, numerous experiments aim for direct or indirect dark matter detection. XENON10 is a direct detection experiment using a xenon dual-phase time projection chamber. Particles interacting with xenon will create a scintillation signal (S1) and ionization. The charge produced is extracted into the gas phase and converted into proportional scintillation light (S2) with an external electric field. The dominant backgrounds, β particles and γ rays, undergo electron recoil (ER) interactions, while WIMPs and neutrons undergo nuclear recoil (NR) interactions. Event-by-event discrimination of background signals is based on log10(S2/S1)_NR < log10(S2/S1)_ER and on 3-D position reconstruction. In 2006 the XENON10 detector started underground operations at the Laboratori Nazionali del Gran Sasso in Italy. After 6 months of operation, totaling 58.6 live days and 5.4 kg of fiducial mass, XENON10 set the best upper limits at the time, finding a spin-independent WIMP-nucleon cross-section σ = 8.8 × 10^-44 cm^2 and a spin-dependent WIMP-neutron cross-section σ = 1.0 × 10^-38 cm^2 for a WIMP mass of 100 GeV/c^2 (90% C.L.). In this work I give an overview of the dark matter evidence and review the requirements for a dark matter search. In particular I discuss the XENON10 detector, deployment, operation, calibrations, analysis, and WIMP-nucleon cross-section limits. Finally, I present our latest results for the relative scintillation efficiency for nuclear recoils in liquid xenon, which was the biggest source of uncertainty in the XENON10 limit. This quantity is essential to determine the nuclear energy scale and to determine the WIMP-nucleon cross

19. High Efficiency Bi-Directional DC-DC Converter With ZVS-ZCS Applied For Parallel Active Filtering

Romero, V.; Soto, A.

2011-10-01

In space missions, it is becoming more and more common to have strict EMC requirements to be met. Coping with this is a challenge for all those instruments and subsystems implementing AC loads. In particular, the driving of motors is one of the highest challenges due to the low frequency and high amplitude of the emissions. Driving these motors without exceeding typical EMC levels implies adding an active filter at their input. A passive filtering approach is not useful due to the bulky components required to filter such low frequencies. The aim of this paper is to show a parallel active filtering solution that offers significant advantages over other classical approaches in terms of mass and efficiency.

20. Poiseuille flow and thermal transpiration of a rarefied gas between parallel plates II: effect of nonuniform surface properties in the longitudinal direction

Doi, Toshiyuki

2015-12-01

Poiseuille flow and thermal transpiration of a rarefied gas between two parallel plates are studied for the situation that one of the walls is a Maxwell-type boundary with a periodic distribution of the accommodation coefficient in the longitudinal direction. The flow behavior is studied numerically based on the Bhatnagar-Gross-Krook-Welander model of the Boltzmann equation. The solution is sought as a superposition of a linear and a periodic function in the longitudinal coordinate. The numerical solution is provided over a wide range of the mean free path and of the parameters characterizing the distribution of the accommodation coefficient. Due to the nonuniform surface properties in the longitudinal direction, the flow is nonparallel, and a deviation in the pressure and the temperature of the gas from those of the conventional parallel flow is observed. An energy transfer between the gas and the walls arises. The mass flow rate of the gas is approximated by a formula consisting of the data of one-dimensional flows; however, a non-negligible disagreement is observed in Poiseuille flow when the amplitude of the variation of the accommodation coefficient is sufficiently large. The validity of the present approach is confirmed by a direct numerical analysis of a flow through a long channel.

1. Search for patterns by combining cosmic-ray energy and arrival directions at the Pierre Auger Observatory

DOE PAGESBeta

Aab, Alexander

2015-06-20

Energy-dependent patterns in the arrival directions of cosmic rays are searched for using data of the Pierre Auger Observatory. We investigate local regions around the highest-energy cosmic rays with E ≥ 6×1019 eV by analyzing cosmic rays with energies above E ≥ 5×1018 eV arriving within an angular separation of approximately 15°. We characterize the energy distributions inside these regions by two independent methods, one searching for angular dependence of energy-energy correlations and one searching for collimation of energy along the local system of principal axes of the energy distribution. No significant patterns are found with this analysis. As a result, the comparison of these measurements with astrophysical scenarios can therefore be used to obtain constraints on related model parameters such as strength of cosmic-ray deflection and density of point sources.

2. Search for patterns by combining cosmic-ray energy and arrival directions at the Pierre Auger Observatory

SciTech Connect

Aab, Alexander

2015-06-20

Energy-dependent patterns in the arrival directions of cosmic rays are searched for using data of the Pierre Auger Observatory. We investigate local regions around the highest-energy cosmic rays with E ≥ 6×1019 eV by analyzing cosmic rays with energies above E ≥ 5×1018 eV arriving within an angular separation of approximately 15°. We characterize the energy distributions inside these regions by two independent methods, one searching for angular dependence of energy-energy correlations and one searching for collimation of energy along the local system of principal axes of the energy distribution. No significant patterns are found with this analysis. As a result, the comparison of these measurements with astrophysical scenarios can therefore be used to obtain constraints on related model parameters such as strength of cosmic-ray deflection and density of point sources.

3. A generating set direct search augmented Lagrangian algorithm for optimization with a combination of general and linear constraints.

SciTech Connect

Lewis, Robert Michael (College of William and Mary, Williamsburg, VA); Torczon, Virginia Joanne (College of William and Mary, Williamsburg, VA); Kolda, Tamara Gibson

2006-08-01

We consider the solution of nonlinear programs in the case where derivatives of the objective function and nonlinear constraints are unavailable. To solve such problems, we propose an adaptation of a method due to Conn, Gould, Sartenaer, and Toint that proceeds by approximately minimizing a succession of linearly constrained augmented Lagrangians. Our modification is to use a derivative-free generating set direct search algorithm to solve the linearly constrained subproblems. The stopping criterion proposed by Conn, Gould, Sartenaer and Toint for the approximate solution of the subproblems requires explicit knowledge of derivatives. Such information is presumed absent in the generating set search method we employ. Instead, we show that stationarity results for linearly constrained generating set search methods provide a derivative-free stopping criterion, based on a step-length control parameter, that is sufficient to preserve the convergence properties of the original augmented Lagrangian algorithm.
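Generating set search, in its simplest "compass" form, polls the objective along the positive and negative coordinate directions and halves the step length whenever no poll point improves; that step-length control parameter is exactly the derivative-free quantity the abstract proposes as a stopping criterion. A minimal sketch on a hypothetical smooth, unconstrained objective (the paper itself treats linearly constrained subproblems):

```python
def compass_search(f, x, step=1.0, tol=1e-6, max_iter=10_000):
    """Compass search: a basic generating set direct search.

    Polls f at x +/- step*e_i for each coordinate i; accepts any
    improving point, otherwise halves the step. The step length
    doubles as the derivative-free stopping test.
    """
    fx = f(x)
    n = len(x)
    for _ in range(max_iter):
        if step < tol:                      # stationarity surrogate
            break
        improved = False
        for i in range(n):
            for s in (step, -step):
                y = list(x)
                y[i] += s
                fy = f(y)
                if fy < fx:
                    x, fx, improved = y, fy, True
        if not improved:
            step *= 0.5
    return x, fx

# Hypothetical objective: shifted quadratic with minimum at (1, -2).
xmin, fmin = compass_search(lambda v: (v[0] - 1) ** 2 + (v[1] + 2) ** 2,
                            [0.0, 0.0])
```

The theory for linearly constrained generating set search relates the step length to a measure of stationarity, which is what lets the augmented Lagrangian framework replace the derivative-based subproblem stopping test without losing its convergence guarantees.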

4. A parallelized binary search tree

Technology Transfer Automated Retrieval System (TEKTRAN)

PTTRNFNDR is an unsupervised statistical learning algorithm that detects patterns in DNA sequences, protein sequences, or any natural language texts that can be decomposed into letters of a finite alphabet. PTTRNFNDR performs complex mathematical computations and its processing time increases when i...

5. On the beam direction search space in computerized non-coplanar beam angle optimization for IMRT—prostate SBRT

Rossi, Linda; Breedveld, Sebastiaan; Heijmen, Ben J. M.; Voet, Peter W. J.; Lanconelli, Nico; Aluwini, Shafak

2012-09-01

In a recent paper, we have published a new algorithm, designated ‘iCycle’, for fully automated multi-criterial optimization of beam angles and intensity profiles. In this study, we have used this algorithm to investigate the relationship between plan quality and the extent of the beam direction search space, i.e. the set of candidate beam directions that may be selected for generating an optimal plan. For a group of ten prostate cancer patients, optimal IMRT plans were made for stereotactic body radiation therapy (SBRT), mimicking high dose rate brachytherapy dosimetry. Plans were generated for five different beam direction input sets: a coplanar (CP) set and four non-coplanar (NCP) sets. For CP treatments, the search space consisted of 72 orientations (5° separations). The NCP CyberKnife (CK) space contained all directions available in the robotic CK treatment unit. The fully non-coplanar (F-NCP) set facilitated the highest possible degree of freedom in selecting optimal directions. CK+ and CK++ were subsets of F-NCP to investigate some aspects of the CK space. For each input set, plans were generated with up to 30 selected beam directions. Generated plans were clinically acceptable, according to an assessment of our clinicians. Convergence in plan quality occurred only after around 20 included beams. For individual patients, variations in PTV dose delivery between the five generated plans were minimal, as aimed for (average spread in V95: 0.4%). This allowed plan comparisons based on organ at risk (OAR) doses, with the rectum considered most important. Plans generated with the NCP search spaces had improved OAR sparing compared to the CP search space, especially for the rectum. OAR sparing was best with the F-NCP, with reductions in rectum DMean, V40Gy, V60Gy and D2% compared to CP of 25%, 35%, 37% and 8%, respectively. Reduced rectum sparing with the CK search space compared to F-NCP could be largely compensated by expanding CK with beams with relatively

6. Progression from South-Directed to Orogen-Parallel Mid-Crustal Flow on the Southern Margin of the Tibetan Plateau, Ama Drime Massif, Tibet

Jessup, M. J.; Cottle, J. M.; Newell, D. L.; Berger, A. L.; Spotila, J. A.

2008-12-01

In the South Tibetan Himalaya, two major detachment systems are exposed in the Ama Drime and Mount Everest Massifs. These structures represent a fundamental shift in the dynamics of the Himalayan orogen, recording a progression from south-directed to orogen-parallel mid-crustal flow and exhumation. The South Tibetan detachment system (STDS) accommodated exhumation of the Greater Himalayan series (GHS) until the Middle Miocene. A relatively narrow mylonite zone that progressed into a brittle detachment accommodated exhumation of the GHS. Northward, in the down-dip direction (Dzakaa Chu and Doya La), a 1-km-wide distributed zone of deformation that lacks a foliation-parallel brittle detachment characterizes the STDS. Leucogranites in the footwall of the STDS range between 17-18 Ma. Previously published 40Ar/39Ar ages suggest that movement on the STDS ended by ~ 16 Ma in Rongbuk Valley and ~ 13 Ma near Dinggye. This once continuous section of the STDS is displaced by the trans- Himalayan Ama Drime Massif and Xainza-Dinggye graben. Two oppositely dipping normal faults and shear zones that bound the Ama Drime Massif record orogen-parallel extension. During exhumation, deformation was partitioned into relatively narrow (100-300-m-thick) mylonite zones that progressed into brittle faults/detachments, which offset Quaternary deposits. U(-Th-)Pb geochronology of mafic lenses suggests that the core of the ADM reached granulite facies at ~ 15 Ma. Leucogranites in the footwall of the detachment faults range between 12-11 Ma: significantly younger than those related to movement on the STDS. Previously published 40Ar/39Ar ages from the eastern limb of the Ama Drime Massif suggest that exhumation progressed into the footwall of the Nyüonno detachment between ~ 13-10 Ma. (U-Th)/He apatite ages record a minimum exhumation rate of ~ 1mm/yr between 1.5-3.0 Ma that was enhanced by focused denudation in the trans-Himalayan Arun River gorge. Together these bracket the timing (~ 12 Ma

7. THE DEPENDENCE OF VISUAL SCANNING PERFORMANCE ON SEARCH DIRECTION AND DIFFICULTY

PubMed Central

Phillips, Matthew H.; Edelman, Jay A.

2009-01-01

Phillips & Edelman (2008) presented evidence that performance variability in a visual scanning task depended on oculomotor variables related to saccade amplitude rather than fixation duration, and that saccade-related metrics reflected perceptual span. Here, we extend these results by showing that even for extremely difficult searches trial-to-trial performance variability still depends on saccade-related metrics and not fixation duration. We also show that scanning speed is faster for horizontal than for vertical searches, and that these differences derive again from differences in saccade-based metrics and not from differences in fixation duration. We find perceptual span to be larger for horizontal than vertical searches, and approximately symmetric about the line of gaze. PMID:18640144

8. Neutron beam tests of CsI(Na) and CaF2(Eu) crystals for dark matter direct search

Guo, C.; Ma, X. H.; Wang, Z. M.; Bao, J.; Dai, C. J.; Guan, M. Y.; Liu, J. C.; Li, Z. H.; Ren, J.; Ruan, X. C.; Yang, C. G.; Yu, Z. Y.; Zhong, W. L.; Huerta, C.

2016-05-01

In recent decades, inorganic crystals have been widely used in dark matter direct search experiments. To contribute to the understanding of the capabilities of CsI(Na) and CaF2(Eu) crystals, a mono-energetic neutron beam is utilized to study the properties of nuclear recoils, which are expected to be similar to signals of dark matter direct detection. The quenching factor of nuclear recoils in CsI(Na) and CaF2(Eu), as well as an improved discrimination factor between nuclear recoils and γ backgrounds in CsI(Na), are reported.

9. Digital signal processing and control and estimation theory -- Points of tangency, area of intersection, and parallel directions

NASA Technical Reports Server (NTRS)

Willsky, A. S.

1976-01-01

A number of current research directions in the fields of digital signal processing and modern control and estimation theory were studied. Topics such as stability theory, linear prediction and parameter identification, system analysis and implementation, two-dimensional filtering, decentralized control and estimation, image processing, and nonlinear system theory were examined in order to uncover some of the basic similarities and differences in the goals, techniques, and philosophy of the two disciplines. An extensive bibliography is included.

10. A Search for Institutional Distinctiveness. New Directions for Community Colleges, Number 65.

ERIC Educational Resources Information Center

Townsend, Barbara K., Ed.

1989-01-01

The essays in this collection argue that community colleges have much to gain by seeking out and maintaining positive recognition of the features that distinguish them from other colleges in the region and state. In addition, the sourcebook contains articles discussing the process of conducting a search for institutional distinctiveness and ways…

11. Evaluation of fault-normal/fault-parallel directions rotated ground motions for response history analysis of an instrumented six-story building

USGS Publications Warehouse

Kalkan, Erol; Kwong, Neal S.

2012-01-01

According to regulatory building codes in the United States (for example, the 2010 California Building Code), at least two horizontal ground-motion components are required for three-dimensional (3D) response history analysis (RHA) of buildings. For sites within 5 km of an active fault, these records should be rotated to fault-normal/fault-parallel (FN/FP) directions, and two RHAs should be performed separately (with FN and then FP aligned with the transverse direction of the structural axes). It is assumed that this approach will lead to two sets of responses that envelope the range of possible responses over all nonredundant rotation angles. This assumption is examined here using a 3D computer model of a six-story reinforced-concrete instrumented building subjected to an ensemble of bidirectional near-fault ground motions. Peak responses of engineering demand parameters (EDPs) were obtained for rotation angles ranging from 0° through 180° for evaluating the FN/FP directions. It is demonstrated that rotating ground motions to FN/FP directions (1) does not always lead to the maximum responses over all angles, (2) does not always envelope the range of possible responses, and (3) does not provide maximum responses for all EDPs simultaneously even if it provides a maximum response for a specific EDP.

12. Adolescents' use of sexually explicit Internet material and their sexual attitudes and behavior: Parallel development and directional effects.

PubMed

Doornwaard, Suzan M; Bickham, David S; Rich, Michael; ter Bogt, Tom F M; van den Eijnden, Regina J J M

2015-10-01

Although research has repeatedly demonstrated that adolescents' use of sexually explicit Internet material (SEIM) is related to their endorsement of permissive sexual attitudes and their experience with sexual behavior, it is not clear how linkages between these constructs unfold over time. This study combined 2 types of longitudinal modeling, mean-level development and cross-lagged panel modeling, to examine (a) developmental patterns in adolescents' SEIM use, permissive sexual attitudes, and experience with sexual behavior, as well as whether these developments are related; and (b) longitudinal directionality of associations between SEIM use on the one hand and permissive sexual attitudes and sexual behavior on the other hand. We used 4-wave longitudinal data from 1,132 7th through 10th grade Dutch adolescents (M(age) T1 = 13.95; 52.7% boys) and estimated multigroup models to test for moderation by gender. Mean-level developmental trajectories showed that boys occasionally and increasingly used SEIM over the 18-month study period, which co-occurred with increases in their permissive attitudes and their experience with sexual behavior. Cross-lagged panel models revealed unidirectional effects from boys' SEIM use on their subsequent endorsement of permissive attitudes, but no consistent directional effects between their SEIM use and sexual behavior. Girls showed a similar pattern of increases in experience with sexual behavior, but their SEIM use was consistently low and their endorsement of permissive sexual attitudes decreased over the 18-month study period. In contrast to boys, girls' SEIM use was not longitudinally related to their sexual attitudes and behavior. Theoretical and practical implications of these gender-specific findings are discussed. (PsycINFO Database Record) PMID:26376287

13. Icarus: A 2D direct simulation Monte Carlo (DSMC) code for parallel computers. Users manual - V.3.0

SciTech Connect

Bartel, T.; Plimpton, S.; Johannes, J.; Payne, J.

1996-10-01

Icarus is a 2D Direct Simulation Monte Carlo (DSMC) code which has been optimized for the parallel computing environment. The code is based on the DSMC method of Bird and models flowfields from the free-molecular to the continuum regime in either cartesian (x, y) or axisymmetric (z, r) coordinates. Computational particles, representing a given number of molecules or atoms, are tracked as they collide with other particles or surfaces. Multiple species, internal energy modes (rotation and vibration), chemistry, and ion transport are modelled. A new trace species methodology for collisions and chemistry is used to obtain statistics for small species concentrations. Gas phase chemistry is modelled using steric factors derived from Arrhenius reaction rates. Surface chemistry is modelled with surface reaction probabilities. The electron number density is either a fixed, externally generated field or is determined using a local charge neutrality assumption. Ion chemistry is modelled with electron impact chemistry rates and charge exchange reactions. Coulomb collision cross-sections are used instead of Variable Hard Sphere values for ion-ion interactions. The electrostatic fields can either be externally input or internally generated using a Langmuir-Tonks model. The Icarus software package includes the grid generation, parallel processor decomposition, postprocessing, and restart software. The commercial graphics package Tecplot is used for graphics display. The majority of the software packages are written in standard Fortran.
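
The particle-collision bookkeeping described above can be illustrated with the standard no-time-counter (NTC) estimate from Bird's DSMC method, which sets how many candidate collision pairs a cell must test per timestep. This is a generic sketch of the NTC formula, not code from Icarus; the function and parameter names are ours.

```python
def ntc_candidate_pairs(n, f_num, sigma_cr_max, dt, cell_volume):
    """Bird's no-time-counter (NTC) estimate of candidate collision
    pairs in one cell per timestep.

    n            -- computational particles currently in the cell
    f_num        -- real molecules represented by each particle
    sigma_cr_max -- running max of (cross section * relative speed)
    dt           -- timestep [s]
    cell_volume  -- cell volume [m^3]
    """
    return 0.5 * n * (n - 1) * f_num * sigma_cr_max * dt / cell_volume

# Illustrative numbers only; a DSMC code would accept the integer part
# and carry the fractional remainder to the next step.
pairs = ntc_candidate_pairs(100, 1e12, 1e-16, 1e-6, 1e-9)
```

Each candidate pair is then accepted for an actual collision with probability proportional to its own sigma*c_r over sigma_cr_max, which is what makes the scheme statistically exact despite the coarse pair count.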

14. Direct search for charged higgs bosons in decays of top quarks.

PubMed

Abazov, V M; Abbott, B; Abdesselam, A; Abolins, M; Abramov, V; Acharya, B S; Adams, D L; Adams, M; Ahmed, S N; Alexeev, G D; Alves, G A; Amos, N; Anderson, E W; Baarmand, M M; Babintsev, V V; Babukhadia, L; Bacon, T C; Baden, A; Baldin, B; Balm, P W; Banerjee, S; Barberis, E; Baringer, P; Barreto, J; Bartlett, J F; Bassler, U; Bauer, D; Bean, A; Begel, M; Belyaev, A; Beri, S B; Bernardi, G; Bertram, I; Besson, A; Beuselinck, R; Bezzubov, V A; Bhat, P C; Bhatnagar, V; Bhattacharjee, M; Blazey, G; Blessing, S; Boehnlein, A; Bojko, N I; Borcherding, F; Bos, K; Brandt, A; Breedon, R; Briskin, G; Brock, R; Brooijmans, G; Bross, A; Buchholz, D; Buehler, M; Buescher, V; Burtovoi, V S; Butler, J M; Canelli, F; Carvalho, W; Casey, D; Casilum, Z; Castilla-Valdez, H; Chakraborty, D; Chan, K M; Chekulaev, S V; Cho, D K; Choi, S; Chopra, S; Christenson, J H; Chung, M; Claes, D; Clark, A R; Cochran, J; Coney, L; Connolly, B; Cooper, W E; Coppage, D; Cummings, M A C; Cutts, D; Davis, G A; Davis, K; De, K; de Jong, S J; Del Signore, K; Demarteau, M; Demina, R; Demine, P; Denisov, D; Denisov, S P; Desai, S; Diehl, H T; Diesburg, M; Di Loreto, G; Doulas, S; Draper, P; Ducros, Y; Dudko, L V; Duensing, S; Duflot, L; Dugad, S R; Dyshkant, A; Edmunds, D; Ellison, J; Elvira, V D; Engelmann, R; Eno, S; Eppley, G; Ermolov, P; Eroshin, O V; Estrada, J; Evans, H; Evdokimov, V N; Fahland, T; Feher, S; Fein, D; Ferbel, T; Filthaut, F; Fisk, H E; Fisyak, Y; Flattum, E; Fleuret, F; Fortner, M; Frame, K C; Fuess, S; Gallas, E; Galyaev, A N; Gao, M; Gavrilov, V; Genik, R J; Genser, K; Gerber, C E; Gershtein, Y; Gilmartin, R; Ginther, G; Gómez, B; Gómez, G; Goncharov, P I; González Solís, J L; Gordon, H; Goss, L T; Gounder, K; Goussiou, A; Graf, N; Graham, G; Grannis, P D; Green, J A; Greenlee, H; Grinstein, S; Groer, L; Grünendahl, S; Gupta, A; Gurzhiev, S N; Gutierrez, G; Gutierrez, P; Hadley, N J; Haggerty, H; Hagopian, S; Hagopian, V; Hall, R E; Hanlet, P; Hansen, S; Hauptman, J M; Hays, C; 
Hebert, C; Hedin, D; Heinson, A P; Heintz, U; Heuring, T; Hildreth, M D; Hirosky, R; Hobbs, J D; Hoeneisen, B; Huang, Y; Illingworth, R; Ito, A S; Jaffré, M; Jain, S; Jesik, R; Johns, K; Johnson, M; Jonckheere, A; Jones, M; Jöstlein, H; Juste, A; Kahn, S; Kajfasz, E; Kalinin, A M; Karmanov, D; Karmgard, D; Kehoe, R; Kharchilava, A; Kim, S K; Klima, B; Knuteson, B; Ko, W; Kohli, J M; Kostritskiy, A V; Kotcher, J; Kotwal, A V; Kozelov, A V; Kozlovsky, E A; Krane, J; Krishnaswamy, M R; Krivkova, P; Krzywdzinski, S; Kubantsev, M; Kuleshov, S; Kulik, Y; Kunori, S; Kupco, A; Kuznetsov, V E; Landsberg, G; Leflat, A; Leggett, C; Lehner, F; Li, J; Li, Q Z; Lima, J G R; Lincoln, D; Linn, S L; Linnemann, J; Lipton, R; Lucotte, A; Lueking, L; Lundstedt, C; Luo, C; Maciel, A K A; Madaras, R J; Malyshev, V L; Manankov, V; Mao, H S; Marshall, T; Martin, M I; Martin, R D; Mauritz, K M; May, B; Mayorov, A A; McCarthy, R; McDonald, J; McMahon, T; Melanson, H L; Merkin, M; Merritt, K W; Miao, C; Miettinen, H; Mihalcea, D; Mishra, C S; Mokhov, N; Mondal, N K; Montgomery, H E; Moore, R W; Mostafa, M; da Motta, H; Nagy, E; Nang, F; Narain, M; Narasimham, V S; Neal, H A; Negret, J P; Negroni, S; Nunnemann, T; O'Neil, D; Oguri, V; Olivier, B; Oshima, N; Padley, P; Pan, L J; Papageorgiou, K; Para, A; Parashar, N; Partridge, R; Parua, N; Paterno, M; Patwa, A; Pawlik, B; Perkins, J; Peters, M; Peters, O; Pétroff, P; Piegaia, R; Piekarz, H; Pope, B G; Popkov, E; Prosper, H B; Protopopescu, S; Qian, J; Raja, R; Rajagopalan, S; Ramberg, E; Rapidis, P A; Reay, N W; Reucroft, S; Rha, J; Ridel, M; Rijssenbeek, M; Rockwell, T; Roco, M; Rubinov, P; Ruchti, R; Rutherfoord, J; Sabirov, B M; Santoro, A; Sawyer, L; Schamberger, R D; Schellman, H; Schwartzman, A; Sen, N; Shabalina, E; Shivpuri, R K; Shpakov, D; Shupe, M; Sidwell, R A; Simak, V; Singh, H; Singh, J B; Sirotenko, V; Slattery, P; Smith, E; Smith, R P; Snihur, R; Snow, G R; Snow, J; Snyder, S; Solomon, J; Sorín, V; Sosebee, M; Sotnikova, N; 
Soustruznik, K; Souza, M; Stanton, N R; Steinbrück, G; Stephens, R W; Stichelbaut, F; Stoker, D; Stolin, V; Stoyanova, D A; Strauss, M; Strovink, M; Stutte, L; Sznajder, A; Taylor, W; Tentindo-Repond, S; Tripathi, S M; Trippe, T G; Turcot, A S; Tuts, P M; van Gemmeren, P; Vaniev, V; Van Kooten, R; Varelas, N; Vertogradov, L S; Volkov, A A; Vorobiev, A P; Wahl, H D; Wang, H; Wang, Z-M; Warchol, J; Watts, G; Wayne, M; Weerts, H; White, A; White, J T; Whiteson, D; Wightman, J A; Wijngaarden, D A; Willis, S; Wimpenny, S J; Womersley, J; Wood, D R; Yamada, R; Yamin, P; Yasuda, T; Yatsunenko, Y A; Yip, K; Youssef, S; Yu, J; Yu, Z; Zanabria, M; Zheng, H; Zhou, Z; Zielinski, M; Zieminska, D; Zieminski, A; Zutshi, V; Zverev, E G; Zylberstejn, A

2002-04-15

We present a search for charged Higgs bosons in decays of pair-produced top quarks in p p̄ collisions at √s = 1.8 TeV recorded by the D0 detector at the Fermilab Tevatron collider. With no evidence for signal, we exclude most regions of the (M(H±), tan β) parameter space where the decay t → H±b has a branching fraction > 0.36 and B(H± → τν_τ) is large. PMID:11955191

15. Traveling front solutions to directed diffusion-limited aggregation, digital search trees, and the Lempel-Ziv data compression algorithm

Majumdar, Satya N.

2003-08-01

We use the traveling front approach to derive exact asymptotic results for the statistics of the number of particles in a class of directed diffusion-limited aggregation models on a Cayley tree. We point out that some aspects of these models are closely connected to two different problems in computer science, namely, the digital search tree problem in data structures and the Lempel-Ziv algorithm for data compression. The statistics of the number of particles studied here is related to the statistics of height in digital search trees which, in turn, is related to the statistics of the length of the longest word formed by the Lempel-Ziv algorithm. Implications of our results for these computer science problems are pointed out.
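
The Lempel-Ziv connection can be made concrete with a minimal LZ78-style parse: each new phrase extends a previously seen phrase by one symbol, i.e. it adds one node to a digital search tree, so the longest phrase length plays the role of the tree height discussed in the abstract. This is an illustrative sketch, not the author's code.

```python
def lz78_phrases(s):
    """LZ78-style parse of s: each phrase is the shortest prefix of
    the remaining input not seen before (a previously inserted phrase
    extended by one symbol, i.e. one new digital-search-tree node)."""
    seen = {""}
    phrases = []
    current = ""
    for ch in s:
        if current + ch in seen:
            current += ch           # keep descending the tree
        else:
            seen.add(current + ch)  # insert a new node / phrase
            phrases.append(current + ch)
            current = ""
    if current:                     # trailing (possibly repeated) phrase
        phrases.append(current)
    return phrases

phrases = lz78_phrases("ababababababab")
longest = max(len(p) for p in phrases)   # analogue of the tree height
```

On highly repetitive input the phrases grow roughly linearly in number but logarithmically slowly in count relative to the input, which is the regime the traveling-front analysis quantifies.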

16. LiveWire interactive boundary extraction algorithm based on Haar wavelet transform and control point set direction search

Cheng, Jun; Zhang, Jun; Tian, Jinwen

2015-12-01

Based on a detailed analysis of the LiveWire interactive boundary extraction algorithm, a new algorithm focused on improving the speed of LiveWire is proposed in this paper. First, the Haar wavelet transform is applied to the input image, and the boundary is extracted on the resulting low-resolution image. Second, the LiveWire shortest path is computed using a direction search over the control point set, exploiting the spatial relationship between the two control points the user provides in real time. Third, the search order of the points adjacent to the starting node is fixed in advance, and an ordinary queue rather than a priority queue is used as the storage pool when optimizing shortest-path values, reducing the complexity of the algorithm from O(n²) to O(n). Finally, a region-based iterative backward projection method using neighborhood pixel polling converts the dual-pixel boundary of the reconstructed image to a single-pixel boundary after the inverse Haar wavelet transform. The proposed algorithm combines the advantages of the Haar wavelet transform, which decomposes and reconstructs images quickly and is consistent with the texture features of the image, with those of the optimal path search based on control point set direction search, which reduces the time complexity of the original algorithm. The algorithm therefore speeds up interactive boundary extraction while reflecting the boundary information of the image more comprehensively, improving both execution efficiency and robustness.
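
The priority-queue-to-ordinary-queue substitution can be illustrated with a generic label-correcting shortest-path sketch: nodes whose tentative distance improves are appended to a plain FIFO queue and re-relaxed (the SPFA-style variant of Bellman-Ford). This is a sketch of the general idea, not the paper's implementation; the example graph and edge costs are hypothetical stand-ins for LiveWire's local costs between neighbouring pixels.

```python
from collections import deque

def queue_relax_shortest_paths(neighbors, cost, start):
    """Label-correcting shortest paths with a plain FIFO queue
    instead of a priority queue (SPFA-style relaxation)."""
    dist = {start: 0.0}
    q = deque([start])
    in_q = {start}
    while q:
        u = q.popleft()
        in_q.discard(u)
        for v in neighbors(u):
            nd = dist[u] + cost(u, v)
            if nd < dist.get(v, float("inf")):
                dist[v] = nd            # improved label: re-relax later
                if v not in in_q:
                    q.append(v)
                    in_q.add(v)
    return dist

# Hypothetical 3-node example standing in for a pixel graph.
graph = {"a": ["b", "c"], "b": ["c"], "c": []}
w = {("a", "b"): 1.0, ("a", "c"): 4.0, ("b", "c"): 1.0}
dist = queue_relax_shortest_paths(graph.__getitem__,
                                  lambda u, v: w[(u, v)], "a")
```

With small, bounded local costs (as in LiveWire's gradient-based cost maps) each node is re-queued only a few times, which is what makes the plain queue competitive with a heap in practice.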

17. A Massively Parallel Hybrid Dusty-Gasdynamics and Kinetic Direct Simulation Monte Carlo Model for Planetary Applications

NASA Technical Reports Server (NTRS)

Combi, Michael R.

2004-01-01

In order to understand the global structure, dynamics, and physical and chemical processes occurring in the upper atmospheres, exospheres, and ionospheres of the Earth, the other planets, comets and planetary satellites and their interactions with their outer particles and fields environs, it is often necessary to address the fundamentally non-equilibrium aspects of the physical environment. These are regions where complex chemistry, energetics, and electromagnetic field influences are important. Traditional approaches are based largely on hydrodynamic or magnetohydrodynamic (MHD) formulations and are very important and highly useful. However, these methods often have limitations in rarefied physical regimes where the molecular collision rates and ion gyrofrequencies are small and where interactions with ionospheres and upper neutral atmospheres are important. At the University of Michigan we have an established base of experience and expertise in numerical simulations based on particle codes which address these physical regimes. The Principal Investigator, Dr. Michael Combi, has over 20 years of experience in the development of particle-kinetic and hybrid kinetic-hydrodynamic models and their direct use in data analysis. He has also worked in ground-based and space-based remote observational work and on spacecraft instrument teams. His research has involved studies of cometary atmospheres and ionospheres and their interaction with the solar wind, the neutral gas clouds escaping from Jupiter's moon Io, the interaction of the atmospheres/ionospheres of Io and Europa with Jupiter's corotating magnetosphere, as well as Earth's ionosphere. This report describes our progress during the year. The material contained in section 2 of this report will serve as the basis of a paper describing the method and its application to the cometary coma that will be continued under a research and analysis grant that supports various applications of theoretical comet models to understanding the

18. An adaptable parallel algorithm for the direct numerical simulation of incompressible turbulent flows using a Fourier spectral/hp element method and MPI virtual topologies

Bolis, A.; Cantwell, C. D.; Moxey, D.; Serson, D.; Sherwin, S. J.

2016-09-01

A hybrid parallelisation technique for distributed memory systems is investigated for a coupled Fourier-spectral/hp element discretisation of domains characterised by geometric homogeneity in one or more directions. The performance of the approach is mathematically modelled in terms of operation count and communication costs for identifying the most efficient parameter choices. The model is calibrated to target a specific hardware platform after which it is shown to accurately predict the performance in the hybrid regime. The method is applied to modelling turbulent flow using the incompressible Navier-Stokes equations in an axisymmetric pipe and square channel. The hybrid method extends the practical limitations of the discretisation, allowing greater parallelism and reduced wall times. Performance is shown to continue to scale when both parallelisation strategies are used.
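
The idea of using a calibrated cost model to pick the most efficient parameter choice can be sketched as a search over factorisations of the process count into the two parallelisation groups. The cost model below is a toy with made-up coefficients, not the paper's calibrated model; only the enumerate-and-minimise structure is the point.

```python
def best_split(n_procs, model):
    """Enumerate factorisations n_procs = p_modal * p_spatial and
    return the pair minimising a user-supplied cost model."""
    splits = [(a, n_procs // a)
              for a in range(1, n_procs + 1) if n_procs % a == 0]
    return min(splits, key=lambda s: model(*s))

# Toy model (hypothetical coefficients): perfectly divisible work plus
# communication terms that penalise each group size differently.
model = lambda p_m, p_s: 1.0 / (p_m * p_s) + 0.01 * p_m + 0.03 * p_s
split = best_split(64, model)   # -> (16, 4) for these coefficients
```

In the paper's setting the model terms would come from measured operation counts and per-word communication costs on the target machine, calibrated once and then reused to predict the hybrid regime.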

19. Feasibility study of online tuning of the luminosity in a circular collider with the robust conjugate direction search method

Ji, Hong-Fei; Jiao, Yi; Wang, Sheng; Ji, Da-Heng; Yu, Cheng-Hui; Zhang, Yuan; Huang, Xiao-Biao

2015-12-01

The robust conjugate direction search (RCDS) method has high tolerance to noise in beam experiments. It has been demonstrated that this method can be used to optimize the machine performance of a light source online. In our study, taking BEPCII as an example, the feasibility of online tuning of the luminosity in a circular collider is explored through numerical simulation and preliminary online experiments. It is shown that luminosity that has been artificially decreased by deviating the beam orbital offset from the optimal trajectory can be recovered with this method. Supported by the National Natural Science Foundation of China (11475202, 11405187) and the Youth Innovation Promotion Association of the Chinese Academy of Sciences (2015009).
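
RCDS owes much of its noise tolerance to fitting the sampled objective values along each search direction rather than trusting any single evaluation. A minimal sketch of such a noise-tolerant line scan is shown below; this is our simplification for illustration, not the published RCDS code.

```python
import numpy as np

def parabolic_line_scan(f, x0, d, span=1.0, n=9):
    """Sample f along x0 + t*d and fit a parabola through the (possibly
    noisy) samples; return the point at the fitted minimiser, clipped
    to the scanned interval. A robust stand-in for an exact line
    minimisation inside a conjugate direction loop."""
    ts = np.linspace(-span, span, n)
    ys = np.array([f(x0 + t * d) for t in ts])
    a, b, _ = np.polyfit(ts, ys, 2)          # least-squares parabola
    if a > 0:
        t_best = -b / (2.0 * a)              # fitted vertex
    else:
        t_best = ts[int(np.argmin(ys))]      # fit unusable: best sample
    return x0 + float(np.clip(t_best, -span, span)) * d

# Noise-free quadratic demo: the scan should land on the minimiser
# along the scanned direction.
x_next = parabolic_line_scan(lambda x: float(((x - 1.0) ** 2).sum()),
                             np.zeros(2), np.array([1.0, 0.0]))
```

Because the parabola is fit to all samples, a single noisy evaluation shifts the estimated minimiser only slightly, which is the property that lets such scans run on live machine measurements.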

20. Direct dark matter search by annual modulation in XMASS-I

Abe, K.; Hiraide, K.; Ichimura, K.; Kishimoto, Y.; Kobayashi, K.; Kobayashi, M.; Moriyama, S.; Nakahata, M.; Norita, T.; Ogawa, H.; Sekiya, H.; Takachio, O.; Takeda, A.; Yamashita, M.; Yang, B. S.; Kim, N. Y.; Kim, Y. D.; Tasaka, S.; Fushimi, K.; Liu, J.; Martens, K.; Suzuki, Y.; Xu, B. D.; Fujita, R.; Hosokawa, K.; Miuchi, K.; Onishi, Y.; Oka, N.; Takeuchi, Y.; Kim, Y. H.; Lee, J. S.; Lee, K. B.; Lee, M. K.; Fukuda, Y.; Itow, Y.; Kegasa, R.; Kobayashi, K.; Masuda, K.; Takiya, H.; Nishijima, K.; Nakamura, S.

2016-08-01

A search for dark matter was conducted by looking for an annual modulation signal due to the Earth's revolution around the Sun using XMASS, a single-phase liquid xenon detector. The data used for this analysis comprised 359.2 live days times 832 kg of exposure accumulated between November 2013 and March 2015. Assuming Weakly Interacting Massive Particle (WIMP) dark matter elastically scattering on the target nuclei, an exclusion upper limit on the WIMP-nucleon cross section of 4.3 × 10⁻⁴¹ cm² at 8 GeV/c² was obtained, and we exclude almost all of the DAMA/LIBRA allowed region in the 6 to 16 GeV/c² range at ∼10⁻⁴⁰ cm². The result of a simple modulation analysis, without assuming any specific dark matter model but including electron/γ events, showed a slight negative amplitude. The p-values obtained with two independent analyses are 0.014 and 0.068 for the null hypothesis, respectively. We obtained 90% C.L. upper bounds that can be used to test various models. This is the first extensive annual modulation search probing this region with an exposure comparable to DAMA/LIBRA.
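
The modulation amplitude such analyses extract is conventionally obtained by fitting a sinusoid with the known one-year period to the binned event rate. A minimal least-squares sketch on synthetic data follows; it is illustrative only, not the XMASS analysis code.

```python
import numpy as np

def fit_modulation(t_days, rate, period=365.25):
    """Linear least-squares fit of rate(t) ~ c0 + a*cos(wt) + b*sin(wt);
    the modulation amplitude is sqrt(a^2 + b^2), with phase atan2(b, a)."""
    w = 2.0 * np.pi / period
    X = np.column_stack([np.ones_like(t_days),
                         np.cos(w * t_days),
                         np.sin(w * t_days)])
    (c0, a, b), *_ = np.linalg.lstsq(X, rate, rcond=None)
    return float(c0), float(np.hypot(a, b))

# Synthetic two-year rate with a known 5% modulation (no noise, so the
# fit should recover the inputs exactly).
t = np.arange(0.0, 730.0, 5.0)
w = 2.0 * np.pi / 365.25
rate = 10.0 + 0.5 * np.cos(w * (t - 152.5))
c0, amp = fit_modulation(t, rate)
```

Writing the cosine with a free phase as a cos(wt) + b sin(wt) keeps the fit linear, so no iterative minimiser is needed and the amplitude's sign ambiguity is avoided.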

1. High Resolution Direction of Arrival (DOA) Estimation Based on Improved Orthogonal Matching Pursuit (OMP) Algorithm by Iterative Local Searching

PubMed Central

Wang, Wenyi; Wu, Renbiao

2013-01-01

DOA (Direction of Arrival) estimation is a major problem in array signal processing applications. Recently, compressive sensing algorithms, including convex relaxation algorithms and greedy algorithms, have been recognized as a kind of novel DOA estimation algorithm. However, the success of these algorithms is limited by the RIP (Restricted Isometry Property) condition or the mutual coherence of measurement matrix. In the DOA estimation problem, the columns of measurement matrix are steering vectors corresponding to different DOAs. Thus, it violates the mutual coherence condition. The situation gets worse when there are two sources from two adjacent DOAs. In this paper, an algorithm based on OMP (Orthogonal Matching Pursuit), called ILS-OMP (Iterative Local Searching-Orthogonal Matching Pursuit), is proposed to improve DOA resolution by Iterative Local Searching. Firstly, the conventional OMP algorithm is used to obtain initial estimated DOAs. Then, in each iteration, a local searching process for every estimated DOA is utilized to find a new DOA in a given DOA set to further decrease the residual. Additionally, the estimated DOAs are updated by substituting the initial DOA with the new one. The simulation results demonstrate the advantages of the proposed algorithm. PMID:23974150
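
The OMP stage that produces the initial DOA estimates can be sketched as follows. The iterative local searching refinement the abstract describes is omitted, and a random dictionary stands in for the steering-vector matrix; this setup is our illustration, not the paper's.

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal Matching Pursuit: greedily add the dictionary column
    most correlated with the residual, then re-fit all selected columns
    by least squares and update the residual."""
    support = []
    x_s = np.zeros(0)
    residual = y.astype(float)
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ residual)))   # best-matching atom
        if j not in support:
            support.append(j)
        x_s, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ x_s           # orthogonal re-fit
    x = np.zeros(A.shape[1])
    x[support] = x_s
    return x, sorted(support)

# Synthetic demo: 64 measurements of a 2-sparse vector in a random
# unit-norm dictionary (a stand-in for steering vectors on a DOA grid).
rng = np.random.default_rng(0)
A = rng.standard_normal((64, 32))
A /= np.linalg.norm(A, axis=0)
x_true = np.zeros(32)
x_true[5], x_true[17] = 1.0, -2.0
x_hat, support = omp(A, A @ x_true, k=2)
```

When two sources sit on adjacent grid points the dictionary columns become highly coherent and this greedy selection can pick the wrong atom, which is exactly the failure mode the paper's local searching step is designed to repair.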

3. Statistical learning modulates the direction of the first head movement in a large-scale search task.

PubMed

Won, Bo-Yeong; Lee, Hyejin J; Jiang, Yuhong V

2015-10-01

Foraging and search tasks in everyday activities are often performed in large, open spaces, necessitating head and body movements. Such activities are rarely studied in the laboratory, leaving important questions unanswered regarding the role of attention in large-scale tasks. Here we examined the guidance of visual attention by statistical learning in a large-scale, outdoor environment. We used the orientation of the first head movement as a proxy for spatial attention and examined its correspondence with reaction time (RT). Participants wore a lightweight camera on a baseball cap while searching for a coin on the concrete floor of a 64 m² outdoor space. We coded the direction of the first head movement at the start of a trial. The results showed that the first head movement was highly sensitive to the location probability of the coin and demonstrated more rapid adjustment to changes in environmental statistics than RTs did. Because the first head movement occurred ten times faster than the search RT, these results show that visual statistical learning affected attentional orienting early in large-scale tasks. PMID:26160317

4. Search for signatures of magnetically-induced alignment in the arrival directions measured by the Pierre Auger Observatory

SciTech Connect

Abreu, P.; Aglietta, M.; Ahn, E.J.; Albuquerque, I.F.M.; Allard, D.; Allekotte, I.; Allen, J.; Allison, P.; Alvarez Castillo, J.; Alvarez-Muniz, J.; Ambrosio, M.; /Naples U. /INFN, Naples /Nijmegen U., IMAPP

2011-11-01

We present the results of an analysis of data recorded at the Pierre Auger Observatory in which we search for groups of directionally-aligned events (or "multiplets") which exhibit a correlation between arrival direction and the inverse of the energy. These signatures are expected from sets of events coming from the same source after having been deflected by intervening coherent magnetic fields. The observation of several events from the same source would open the possibility to accurately reconstruct the position of the source and also to measure the integral of the component of the magnetic field orthogonal to the trajectory of the cosmic rays. We describe the largest multiplets found and compute the probability that they appeared by chance from an isotropic distribution. We find no statistically significant evidence for the presence of multiplets arising from magnetic deflections in the present data.

5. Development of ballistic hot electron emitter and its applications to parallel processing: active-matrix massive direct-write lithography in vacuum and thin films deposition in solutions

Koshida, N.; Kojima, A.; Ikegami, N.; Suda, R.; Yagi, M.; Shirakashi, J.; Yoshida, T.; Miyaguchi, H.; Muroyama, M.; Nishino, H.; Yoshida, S.; Sugata, M.; Totsu, K.; Esashi, M.

2015-03-01

Making the best use of the characteristic features of the nanocrystalline Si (nc-Si) ballistic hot electron source, an alternative lithographic technology is presented based on two approaches: physical excitation in vacuum and chemical reduction in solutions. The nc-Si cold cathode is a kind of metal-insulator-semiconductor (MIS) diode, composed of a thin metal film, an nc-Si layer, an n+-Si substrate, and an ohmic back contact. Under a biased condition, energetic electrons are uniformly and directionally emitted through the thin surface electrodes. In vacuum, this emitter is available for active-matrix drive massive parallel lithography. Arrayed 100×100 emitters (each size: 10×10 μm², pitch: 100 μm) are fabricated on a silicon substrate by a conventional planar process, and then every emitter is bonded with an integrated complementary metal-oxide-semiconductor (CMOS) driver using through-silicon-via (TSV) interconnect technology. Electron multi-beams emitted from selected devices are focused by a micro-electro-mechanical system (MEMS) condenser lens array and introduced into an accelerating system with a demagnification factor of 100. The electron accelerating voltage is 5 kV. The designed size of each beam landing on the target is 10×10 nm² square. Here we discuss the fabrication process of the emitter array with TSV holes, implementation of the integrated active-matrix driver circuit, the bonding of these components, the construction of the electron optics, and the overall operation in the exposure system including the correction of possible aberrations. The experimental results of this mask-less parallel pattern transfer are shown in terms of simple 1:1 projection and parallel lithography under an active-matrix drive scheme. Another application is the use of this emitter as an active electrode supplying highly reducing electrons into solutions. A very small amount of metal-salt solution is dripped onto the nc-Si emitter surface, and the emitter is driven without

6. The OPERA experiment: a direct search of the νμ → ντ oscillations

Marteau, J.

2010-04-01

The aim of the OPERA experiment is to search for the appearance of the tau neutrino in the quasi-pure muon neutrino beam produced at CERN (CNGS). The detector, installed in the Gran Sasso underground laboratory 730 km away from CERN, consists of a lead/emulsion target complemented with electronic detectors. After a short pilot run in 2007, a first physics run took place from June to November 2008. The second physics run started in June 2009. At present a total (2008+2009) of 4.2 × 10¹⁹ protons on target has been delivered by the CNGS, producing more than 25,000 events in time coincidence in the OPERA detector. Among them 4000 events occurred in the target of the detector. In this paper the detector and the analysis strategy are described and the status of the analysis of the 2008 and 2009 runs is discussed.

7. Experiments on Direct Dark Matter Search with Two-phase Emission Detectors

Bolozdynya, A. I.

Emission detectors, invented 45 years ago at MEPhI, have found a unique application in modern experiments searching for cold dark matter in the form of weakly ionizing massive particles (WIMPs). The current best limits for the interaction cross sections of supersymmetric WIMPs having a mass of 100 GeV/c² with nucleons were measured with the emission detector LUX, containing 360 kg of liquid xenon as the detector medium, installed in the Davis cavern at the Homestake mine in South Dakota. Emission detectors of the next generation, G2, with an active detector mass of about 10 tons, will either unambiguously detect WIMPs or rule out all current theoretical predictions for WIMP existence. Detectors of the G3 generation will be used for multiple purposes, including detection of neutrinoless double beta decay and low-energy neutrinos.

8. 3-D magnetotelluric inversion including topography using deformed hexahedral edge finite elements and direct solvers parallelized on SMP computers - Part I: forward problem and parameter Jacobians

Kordy, M.; Wannamaker, P.; Maris, V.; Cherkaev, E.; Hill, G.

2016-01-01

We have developed an algorithm, which we call HexMT, for 3-D simulation and inversion of magnetotelluric (MT) responses using deformable hexahedral finite elements that permit incorporation of topography. Direct solvers parallelized on symmetric multiprocessor (SMP), single-chassis workstations with large RAM are used throughout, including the forward solution, parameter Jacobians and model parameter update. In Part I, the forward simulator and Jacobian calculations are presented. We use first-order edge elements to represent the secondary electric field (E), yielding accuracy O(h) for E and its curl (magnetic field). For very low frequencies or small material admittivities, the E-field requires divergence correction. With the help of Hodge decomposition, the correction may be applied in one step after the forward solution is calculated. This allows accurate E-field solutions in dielectric air. The system matrix factorization and source vector solutions are computed using the MKL PARDISO library, which shows good scalability through 24 processor cores. The factorized matrix is used to calculate the forward response as well as the Jacobians of electromagnetic (EM) field and MT responses using the reciprocity theorem. Comparison with other codes demonstrates the accuracy of our forward calculations. We consider a popular conductive/resistive double brick structure, several synthetic topographic models and the natural topography of Mount Erebus in Antarctica. In particular, the ability of finite elements to represent smooth topographic slopes permits accurate simulation of refraction of EM waves normal to the slopes at high frequencies. Run-time tests of the parallelized algorithm indicate that for meshes as large as 176 × 176 × 70 elements, MT forward responses and Jacobians can be calculated in ~1.5 hr per frequency. Together with an efficient inversion parameter step described in Part II, MT inversion problems of 200-300 stations are computable with total run times

9. Ab initio materials design using conformational space annealing and its application to searching for direct band gap silicon crystals

Lee, In-Ho; Oh, Young Jun; Kim, Sunghyun; Lee, Jooyoung; Chang, K. J.

2016-06-01

Lately, the so-called inverse method of materials design has drawn much attention, where specific material properties are initially assigned and target materials are subsequently searched for. Although this method has been successful for some problems, the success of designing complex crystal structures containing many atoms is often limited by the efficiency of the search method utilized. Here we combine the global optimization method of conformational space annealing (CSA) with first-principles quantum calculations and report a new scheme named AMADEUS (Ab initio MAterials DEsign Using cSa). We demonstrate the utility of AMADEUS through the discovery of direct band gap Si crystals. The newly-designed direct gap Si allotropes show excellent optical properties and the spectroscopic limited maximum efficiencies comparable to those of best-known non-silicon photovoltaic materials. Our scheme not only provides a new perspective for the inverse problem of materials design but also may serve as a new tool for the computational design of a wide range of materials.

10. Egocentric search for disappearing objects in domestic dogs: evidence for a geometric hypothesis of direction.

PubMed

Fiset, Sylvain; Landry, France; Ouellette, Manon

2006-01-01

In several species, the ability to locate a disappearing object is an adaptive component of predatory and social behaviour. In domestic dogs, spatial memory for hidden objects is primarily based on an egocentric frame of reference. We investigated the geometric components of egocentric spatial information used by domestic dogs to locate an object they saw move and disappear. In experiment 1, the distance and the direction between the position of the animal and the hiding location were put in conflict. Results showed that the dogs primarily used the directional information between their own spatial coordinates and the target position. In experiment 2, the accuracy of the dogs in finding a hidden object by using directional information was estimated by manipulating the angular deviation between adjacent hiding locations and the position of the animal. Four angular deviations were tested: 5, 7.5, 10 and 15 degrees. Results showed that the performance of the dogs decreased as a function of the angular deviation but clearly remained well above chance, revealing that the dogs' representation of direction is precise. In the discussion, we examine how and why domestic dogs determine the direction in which they saw an object disappear. PMID:15750805

11. Directional resolution of dish antenna experiments to search for WISPy dark matter

Jaeckel, Joerg; Knirck, Stefan

2016-01-01

Dark matter consisting of very light and very weakly interacting particles such as axions, axion-like particles and hidden photons could be detected using reflective surfaces. On such reflectors some of the dark matter particles are converted into photons and, given a suitable geometry, concentrated on the detector. This technique offers sensitivity to the direction of the velocity of the dark matter particles. In this note we investigate how far spherical mirrors can concentrate the generated photons and what this implies for the resolution in directional detection as well as the sensitivity of discovery experiments not aiming for directional resolution. Finally we discuss an improved setup using a combination of a reflecting plane with focussing optics.

12. Search for direct CP violation in baryonic b-hadron decays

Geng, C. Q.; Hsiao, Y. K.

2016-07-01

We first review direct CP violation in the three-body baryonic B decays B± → p p̄ M_i± (i = P, V) with M_P = π, K and M_V = ρ, K*. We then present our recent results for the direct CP violating asymmetries in the two-body Λb decays Λb → p M_i as well as the three-body Λb → J/ψ p M_P. In particular, we emphasize that the large direct CP violating asymmetries in B± → p p̄ K*± and Λb → p K*− (Λ̄b → p̄ K*+) are both around 20%, which is accessible to the b-hadron experiments.
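
For reference, the direct CP asymmetry quoted at around 20% is defined in the standard way as a rate difference between a decay and its CP conjugate; for example, for the B± mode discussed above:

```latex
\mathcal{A}_{CP}\bigl(B^{\pm} \to p\bar{p}K^{*\pm}\bigr)
  = \frac{\Gamma\bigl(B^{-} \to p\bar{p}K^{*-}\bigr)
        - \Gamma\bigl(B^{+} \to p\bar{p}K^{*+}\bigr)}
         {\Gamma\bigl(B^{-} \to p\bar{p}K^{*-}\bigr)
        + \Gamma\bigl(B^{+} \to p\bar{p}K^{*+}\bigr)}
```

A nonzero value requires both a weak (CKM) phase difference and a strong phase difference between interfering amplitudes, which is why sizeable asymmetries in these baryonic modes are experimentally interesting.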

13. Search for a Correlation between ANTARES Neutrinos and Pierre Auger Observatory UHECRs Arrival Directions

Adrián-Martínez, S.; Samarai, I. Al; Albert, A.; André, M.; Anghinolfi, M.; Anton, G.; Anvar, S.; Ardid, M.; Astraatmadja, T.; Aubert, J.-J.; Baret, B.; Basa, S.; Beemster, L. J.; Bertin, V.; Biagi, S.; Bigongiari, C.; Bogazzi, C.; Bou-Cabo, M.; Bouhou, B.; Bouwhuis, M. C.; Brunner, J.; Busto, J.; Camarena, F.; Capone, A.; Cârloganu, C.; Carminati, G.; Carr, J.; Cecchini, S.; Charif, Z.; Charvis, Ph.; Chiarusi, T.; Circella, M.; Coniglione, R.; Core, L.; Costantini, H.; Coyle, P.; Creusot, A.; Curtil, C.; De Bonis, G.; Decowski, M. P.; Dekeyser, I.; Deschamps, A.; Distefano, C.; Donzaud, C.; Dornic, D.; Dorosti, Q.; Drouhin, D.; Eberl, T.; Emanuele, U.; Enzenhöfer, A.; Ernenwein, J.-P.; Escoffier, S.; Fehn, K.; Fermani, P.; Ferri, M.; Ferry, S.; Flaminio, V.; Folger, F.; Fritsch, U.; Fuda, J.-L.; Galatà, S.; Gay, P.; Geyer, K.; Giacomelli, G.; Giordano, V.; Gómez-González, J. P.; Graf, K.; Guillard, G.; Halladjian, G.; Hallewell, G.; van Haren, H.; Hartman, J.; Heijboer, A. J.; Hello, Y.; Hernández-Rey, J. J.; Herold, B.; Hößl, J.; Hsu, C. C.; de Jong, M.; Kadler, M.; Kalekin, O.; Kappes, A.; Katz, U.; Kavatsyuk, O.; Kooijman, P.; Kopper, C.; Kouchner, A.; Kreykenbohm, I.; Kulikovskiy, V.; Lahmann, R.; Lambard, G.; Larosa, G.; Lattuada, D.; Lefèvre, D.; Lim, G.; Lo Presti, D.; Loehner, H.; Loucatos, S.; Louis, F.; Mangano, S.; Marcelin, M.; Margiotta, A.; Martínez-Mora, J. A.; Meli, A.; Montaruli, T.; Morganti, N.; Moscoso, L.; Motz, H.; Neff, M.; Nezri, E.; Palioselitis, D.; Păvălaş, G. E.; Payet, K.; Payre, P.; Petrovic, J.; Picot-Clemente, N.; Popa, V.; Pradier, T.; Presani, E.; Racca, C.; Reed, C.; Riccobene, G.; Richardt, C.; Richter, R.; Rivière, C.; Robert, A.; Roensch, K.; Rostovtsev, A.; Ruiz-Rivas, J.; Rujoiu, M.; Russo, G. V.; Salesa, F.; Samtleben, D. F. E.; Sánchez-Losa, A.; Sapienza, P.; Schöck, F.; Schuller, J.-P.; Schüssler, F.; Seitz, T.; Shanidze, R.; Simeone, F.; Spies, A.; Spurio, M.; Steijger, J. J. 
M.; Stolarczyk, Th.; Taiuti, M.; Tamburini, C.; Toscano, S.; Vallage, B.; Vallée, C.; Van Elewyck, V.; Vannoni, G.; Vecchi, M.; Vernin, P.; Visser, E.; Wagner, S.; Wijnker, G.; Wilms, J.; de Wolf, E.; Yepes, H.; Zaborov, D.; Zornoza, J. D.; Zúñiga, J.

2013-09-01

A multimessenger analysis optimized for a correlation of arrival directions of ultra-high energy cosmic rays (UHECRs) and neutrinos is presented and applied to 2190 neutrino candidate events detected in 2007-2008 by the ANTARES telescope and 69 UHECRs observed by the Pierre Auger Observatory between 2004 January 1 and 2009 December 31. No significant correlation is observed. Assuming an equal neutrino flux (E⁻² energy spectrum) from all UHECR directions, a 90% CL upper limit on the neutrino flux of 5.0 × 10⁻⁸ GeV cm⁻² s⁻¹ per source is derived.

14. Should ground-motion records be rotated to fault-normal/parallel or maximum direction for response history analysis of buildings?

USGS Publications Warehouse

Reyes, Juan C.; Kalkan, Erol

2012-01-01

In the United States, regulatory seismic codes (for example, California Building Code) require at least two sets of horizontal ground-motion components for three-dimensional (3D) response history analysis (RHA) of building structures. For sites within 5 kilometers (3.1 miles) of an active fault, these records should be rotated to fault-normal and fault-parallel (FN/FP) directions, and two RHAs should be performed separately, first with the FN and then the FP direction aligned with the transverse axis of the building. This approach is assumed to lead to two sets of responses that envelop the range of possible responses over all nonredundant rotation angles. The validity of this assumption is examined here using 3D computer models of single-story structures having symmetric (torsionally stiff) and asymmetric (torsionally flexible) layouts subjected to an ensemble of near-fault ground motions with and without apparent velocity pulses. In this parametric study, the elastic vibration period is varied from 0.2 to 5 seconds, and yield-strength reduction factors, R, are varied from a value that leads to linear-elastic design to 3 and 5. Further validations are performed using 3D computer models of 9-story structures having symmetric and asymmetric layouts subjected to the same ground-motion set. The influence of the ground-motion rotation angle on several engineering demand parameters (EDPs) is examined in both linear-elastic and nonlinear-inelastic domains to form benchmarks for evaluating the use of the FN/FP directions and also the maximum direction (MD). The MD ground motion is a new definition for horizontal ground motions for use in site-specific ground-motion procedures for seismic design according to provisions of the American Society of Civil Engineers/Seismic Engineering Institute (ASCE/SEI) 7-10. The results of this study have important implications for current practice, suggesting that ground motions rotated to MD or FN/FP directions do not necessarily provide

15. Parallel rendering

NASA Technical Reports Server (NTRS)

Crockett, Thomas W.

1995-01-01

This article provides a broad introduction to the subject of parallel rendering, encompassing both hardware and software systems. The focus is on the underlying concepts and the issues which arise in the design of parallel rendering algorithms and systems. We examine the different types of parallelism and how they can be applied in rendering applications. Concepts from parallel computing, such as data decomposition, task granularity, scalability, and load balancing, are considered in relation to the rendering problem. We also explore concepts from computer graphics, such as coherence and projection, which have a significant impact on the structure of parallel rendering algorithms. Our survey covers a number of practical considerations as well, including the choice of architectural platform, communication and memory requirements, and the problem of image assembly and display. We illustrate the discussion with numerous examples from the parallel rendering literature, representing most of the principal rendering methods currently used in computer graphics.
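The decomposition, load-distribution, and image-assembly concepts surveyed above can be illustrated with a toy image-space (tile-parallel) renderer. This is a minimal sketch, not anything from the article: the frame size, tile size, and "shading" function are arbitrary placeholders, and a thread pool stands in for the parallel platform.

```python
# Toy image-space parallel rendering: decompose the frame into tiles,
# render each tile as an independent task, then assemble the result.
from concurrent.futures import ThreadPoolExecutor

WIDTH, HEIGHT, TILE = 64, 64, 16

def render_tile(origin):
    """Render one TILE x TILE block; returns (origin, rows of pixels)."""
    x0, y0 = origin
    # Placeholder shading: brightness varies with pixel position.
    rows = [[(x + y) % 256 for x in range(x0, x0 + TILE)]
            for y in range(y0, y0 + TILE)]
    return origin, rows

def render_frame():
    # Data decomposition: the frame becomes a pool of independent tile tasks.
    tiles = [(x, y) for y in range(0, HEIGHT, TILE)
                    for x in range(0, WIDTH, TILE)]
    image = [[0] * WIDTH for _ in range(HEIGHT)]
    with ThreadPoolExecutor() as pool:
        # Image assembly: copy each finished tile into the frame buffer.
        for (x0, y0), rows in pool.map(render_tile, tiles):
            for dy, row in enumerate(rows):
                image[y0 + dy][x0:x0 + TILE] = row
    return image
```

Because the tiles share no state, task granularity and load balance can be tuned simply by changing the tile size, which is one of the trade-offs the survey discusses.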

16. Tools for model-independent bounds in direct dark matter searches

SciTech Connect

Cirelli, Marco

2013-10-01

We discuss a framework (based on non-relativistic operators) and a self-contained set of numerical tools to derive the bounds from some current direct detection experiments on virtually any arbitrary model of Dark Matter elastically scattering on nuclei.

17. A Search for New Directions in the War Against Poverty. Staff Paper.

ERIC Educational Resources Information Center

Sheppard, Harold L.

Demographic surveys and data could be used to assess programs and policies directly and indirectly concerned with the reduction of poverty, and, through the use of such survey data, to point to a number of population subgroupings which are or are not moving out of poverty. Annually collected Census Bureau facts are the basis of much of the analysis…

18. Direct imaging search for planets around low-mass stars and spectroscopic characterization of young exoplanets

Bowler, Brendan Peter

Low-mass stars between 0.1 and 0.6 M⊙ are the most abundant members of our galaxy and may be the most common sites of planet formation, but little is known about the outer architecture of their planetary systems. We have carried out a high-contrast adaptive optics imaging search for gas giant planets between 1 and 13 MJup around 122 newly identified young M dwarfs in the solar neighborhood (≲35 pc). Half of our targets are younger than 145 Myr, and 90% are younger than 580 Myr. After removing 39 resolved stellar binaries, our homogeneous sample of 83 single young M dwarfs makes this the largest imaging search for planets around low-mass stars to date. Our H- and K-band coronagraphic observations with Subaru/HiCIAO and Keck/NIRC2 achieve typical contrasts of 9-13 mag and 12-14 mag at 1″, respectively, which correspond to limiting masses of ~1-10 MJup at 10-30 AU for most of our sample. We discovered four brown dwarfs with masses between 25 and 60 MJup at projected separations of 4-190 AU. Over 100 candidate planets were discovered, nearly all of which were found to be background stars from follow-up second-epoch imaging. Our null detection of planets nevertheless provides strong statistical constraints on the occurrence rate of giant planets around M dwarfs. Assuming circular orbits and a logarithmically flat power-law distribution in planet mass and semi-major axis of the form d²N/(d log a d log m) ∝ m⁰a⁰, we measure upper limits (at the 95% confidence level) of 8.8% and 12.6% for 1-13 MJup companions between 10 and 100 AU for hot-start and cold-start evolutionary models, respectively. For massive gas giant planets in the 5-13 MJup range like those orbiting HR 8799, GJ 504, and beta Pictoris, we find that fewer than 5.3% (7.8%) of M dwarfs harbor these planets between 10 and 100 AU for a hot-start (cold-start) formation scenario. Our best constraints are for brown dwarf companions; the frequency of 13-75 MJup companions between (de-projected) physical
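The statistical logic behind such a null-detection constraint can be sketched in a simplified form: if each of N surveyed stars independently hosted a detectable planet with frequency f, the probability of zero detections is (1 − f)^N, so the 95% CL upper limit solves (1 − f)^N = 0.05. The survey's actual analysis additionally weights each star by its detection completeness; this toy version omits that refinement.

```python
# Simplified null-detection upper limit on an occurrence rate f:
# zero detections among n_stars independent trials happens with
# probability (1 - f)**n_stars, so the limit at confidence level cl
# solves (1 - f)**n_stars = 1 - cl.  Completeness weighting omitted.
def null_detection_upper_limit(n_stars, cl=0.95):
    """Upper limit on occurrence rate f after zero detections."""
    return 1.0 - (1.0 - cl) ** (1.0 / n_stars)
```

For the 83-star sample above this idealized formula gives roughly 3.5%; the published 8.8%-12.6% limits are larger precisely because real per-star completeness is below 100%.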

19. A direct imaging search for close stellar and sub-stellar companions to young nearby stars

Vogt, N.; Mugrauer, M.; Neuhäuser, R.; Schmidt, T. O. B.; Contreras-Quijada, A.; Schmidt, J. G.

2015-01-01

A total of 28 young nearby stars (ages ≤ 60 Myr) have been observed in the Ks-band with the adaptive optics imager Naos-Conica of the Very Large Telescope at the Paranal Observatory in Chile. Among the targets are ten visual binaries and one triple system at distances between 10 and 130 pc, all previously known. During a first observing epoch a total of 20 faint stellar or sub-stellar companion-candidates were detected around seven of the targets. These fields, as well as most of the stellar binaries, were re-observed with the same instrument during a second epoch, about one year later. We present the astrometric observations of all binaries. Their analysis revealed that all stellar binaries are co-moving. In two cases (HD 119022 AB and FG Aqr B/C) indications for significant orbital motions were found. However, all sub-stellar companion candidates turned out to be non-moving background objects, except PZ Tel, which is part of this project but whose results were published elsewhere. Detection limits were determined for all targets, and limiting masses were derived adopting three different age values; they turn out to be less than 10 Jupiter masses in most cases, well below the brown dwarf mass range. The fraction of stellar multiplicity and of the sub-stellar companion occurrence in the star forming regions in Chamaeleon are compared to the statistics of our search, and possible reasons for the observed differences are discussed. Based on observations made with ESO telescopes at Paranal Observatory under programme IDs 083.C-0150(B), 084.C-0364(A), 084.C-0364(B), 084.C-0364(C), 086.C-0600(A) and 086.C-0600(B).

20. A novel method of wide searching scope and fast searching speed for image block matching

Yu, Fei; Li, Chao; Mei, Qiang; Lin, Zhe

2015-10-01

When image matching is used for motion estimation, performance parameters such as search scope, search speed, accuracy, and robustness typically need improvement. In this paper, a novel block-matching method is presented, combining a wide-range image block-matching strategy with a multi-start-point parallel search strategy. In the wide-range matching strategy, the template block and the search block are the same size, and the per-pixel average of the accumulated matching results is taken so that the matching parameters accurately represent the degree of match. In the multi-start-point parallel search strategy, start points are chosen evenly based on the characteristics of block-matching search, and adaptive conditions and an adaptive schedule are established based on the search region. During iteration, the new strategy accepts not only solutions that lead the objective in the correct direction but also solutions slightly offset from the objective, so the multi-start-point parallel search algorithm can effectively avoid being trapped in local minima. An image processing system based on the TMS320C6415 DSP chip was used to run video image stabilization experiments. The results show that applying the two methods improves the range of motion estimation and reduces the search computation.
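A minimal sketch of multi-start block matching with a per-pixel average score can make the idea concrete. This is an illustrative assumption, not the authors' DSP implementation: the start points are placed evenly along the diagonal of the search region for brevity, and each start runs a greedy 8-neighbour descent.

```python
def mad(frame, block, x, y):
    """Mean absolute difference per pixel between block and frame at (x, y)."""
    h, w = len(block), len(block[0])
    total = sum(abs(frame[y + j][x + i] - block[j][i])
                for j in range(h) for i in range(w))
    return total / (h * w)   # per-pixel average keeps scores comparable

def match_block(frame, block, n_starts=4):
    """Locate block in frame via evenly spaced start points + local descent."""
    max_x = len(frame[0]) - len(block[0])
    max_y = len(frame) - len(block)
    # Evenly spaced start points (here along the search-region diagonal).
    starts = [(max_x * i // max(1, n_starts - 1),
               max_y * i // max(1, n_starts - 1)) for i in range(n_starts)]
    best = None
    for x, y in starts:
        score = mad(frame, block, x, y)
        improved = True
        while improved:          # greedy descent over the 8-neighbourhood
            improved = False
            for dx in (-1, 0, 1):
                for dy in (-1, 0, 1):
                    nx, ny = x + dx, y + dy
                    if 0 <= nx <= max_x and 0 <= ny <= max_y:
                        s = mad(frame, block, nx, ny)
                        if s < score:
                            score, x, y, improved = s, nx, ny, True
        if best is None or score < best[0]:
            best = (score, x, y)
    return best[1], best[2]
```

Each start's descent is independent of the others, which is what makes the multi-start scheme naturally parallel while reducing the chance that every search lands in the same local minimum.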

1. A search for strong-field direct two electron ionization using coincidence spectroscopy

SciTech Connect

Agostini, P.; Mevel, E.; Breger, P.; Walker, B.; Yang, B.; DiMauro, L.F.

1993-05-01

We report on our program in detecting two-electron ionization using electron-electron and electron-ion coincidence measurements. The coincidence techniques have been applied to the multiphoton ionization (MPI) of xenon atoms with 0.527 µm excitation. The results show that direct two-electron ionization is not occurring, which is at variance with an earlier report. We also present a polarization study on the MPI of helium at 0.62 µm and discuss these results in the context of existing models.

3. Measurement and modeling of muon-induced neutrons in LSM in application for direct dark matter searches

SciTech Connect

Kozlov, Valentin; Collaboration: EDELWEISS Collaboration

2013-08-08

Due to the very low event rate expected in direct dark matter search experiments, a good understanding of every background component is crucial. Muon-induced neutrons constitute a prominent background, since neutrons lead to nuclear recoils and thus can mimic a potential dark matter signal. EDELWEISS is a Ge-bolometer experiment searching for WIMP dark matter. It is located in the Laboratoire Souterrain de Modane (LSM, France). We have measured muon-induced neutrons by means of a neutron counter based on Gd-loaded liquid scintillator. Studies of muon-induced neutrons are presented, including development of an appropriate MC model based on Geant4 and analysis of a 1000-day measurement campaign in LSM. We find good agreement between the measured rates of muon-induced neutrons and those predicted by the developed model with full event topology. The impact of the neutron background on current EDELWEISS data-taking, as well as on next-generation experiments such as EURECA, is briefly discussed.

4. Directed searches for continuous gravitational waves from binary systems: Parameter-space metrics and optimal Scorpius X-1 sensitivity

Leaci, Paola; Prix, Reinhard

2015-05-01

We derive simple analytic expressions for the (coherent and semicoherent) phase metrics of continuous-wave sources in low-eccentricity binary systems for the two regimes of long and short segments compared to the orbital period. The resulting expressions correct and extend previous results found in the literature. We present results of extensive Monte Carlo studies comparing metric mismatch predictions against the measured loss of detection statistics for binary parameter offsets. The agreement is generally found to be within ~10%-30%. As an application of the metric template expressions, we estimate the optimal achievable sensitivity of an Einstein@Home directed search for Scorpius X-1, under the assumption of sufficiently small spin wandering. We find that such a search, using data from the upcoming advanced detectors, would be able to beat the torque-balance level [R. V. Wagoner, Astrophys. J. 278, 345 (1984); L. Bildsten, Astrophys. J. 501, L89 (1998)] up to a frequency of ~500-600 Hz, if orbital eccentricity is well constrained, and up to a frequency of ~160-200 Hz for more conservative assumptions about the uncertainty on orbital eccentricity.

5. Searches for anisotropies in the arrival directions of the highest energy cosmic rays detected by the Pierre Auger Observatory

DOE PAGESBeta

Aab, Alexander

2015-05-01

We analyze the distribution of arrival directions of ultra-high-energy cosmic rays recorded at the Pierre Auger Observatory in 10 years of operation. The data set, about three times larger than that used in earlier studies, includes arrival directions with zenith angles up to 80°, thus covering from -90° to +45° in declination. After updating the fraction of events correlating with the active galactic nuclei (AGNs) in the Véron-Cetty and Véron catalog, we subject the arrival directions of the data with energies in excess of 40 EeV to different tests for anisotropy. We search for localized excess fluxes, self-clustering of event directions at angular scales up to 30°, and different threshold energies between 40 and 80 EeV. We then look for correlations of cosmic rays with celestial structures both in the Galaxy (the Galactic Center and Galactic Plane) and in the local universe (the Super-Galactic Plane). We also examine their correlation with different populations of nearby extragalactic objects: galaxies in the 2MRS catalog, AGNs detected by Swift-BAT, radio galaxies with jets, and the Centaurus A (Cen A) galaxy. None of the tests show statistically significant evidence of anisotropy. The strongest departures from isotropy (post-trial probability ~1.4%) are obtained for cosmic rays with E > 58 EeV in rather large windows around Swift AGNs closer than 130 Mpc and brighter than 10⁴⁴ erg s⁻¹ (18° radius), and around the direction of Cen A (15° radius).

7. Searches for Anisotropies in the Arrival Directions of the Highest Energy Cosmic Rays Detected by the Pierre Auger Observatory

Aab, A.; Abreu, P.; Aglietta, M.; Ahn, E. J.; Samarai, I. Al; Albuquerque, I. F. M.; Allekotte, I.; Allen, J.; Allison, P.; Almela, A.; Alvarez Castillo, J.; Alvarez-Muñiz, J.; Alves Batista, R.; Ambrosio, M.; Aminaei, A.; Anchordoqui, L.; Andringa, S.; Aramo, C.; Aranda, V. M.; Arqueros, F.; Asorey, H.; Assis, P.; Aublin, J.; Ave, M.; Avenier, M.; Avila, G.; Awal, N.; Badescu, A. M.; Barber, K. B.; Bäuml, J.; Baus, C.; Beatty, J. J.; Becker, K. H.; Bellido, J. A.; Berat, C.; Bertaina, M. E.; Bertou, X.; Biermann, P. L.; Billoir, P.; Blaess, S. G.; Blanco, M.; Bleve, C.; Blümer, H.; Boháčová, M.; Boncioli, D.; Bonifazi, C.; Bonino, R.; Borodai, N.; Brack, J.; Brancus, I.; Bridgeman, A.; Brogueira, P.; Brown, W. C.; Buchholz, P.; Bueno, A.; Buitink, S.; Buscemi, M.; Caballero-Mora, K. S.; Caccianiga, B.; Caccianiga, L.; Candusso, M.; Caramete, L.; Caruso, R.; Castellina, A.; Cataldi, G.; Cazon, L.; Cester, R.; Chavez, A. G.; Chiavassa, A.; Chinellato, J. A.; Chudoba, J.; Cilmo, M.; Clay, R. W.; Cocciolo, G.; Colalillo, R.; Coleman, A.; Collica, L.; Coluccia, M. R.; Conceição, R.; Contreras, F.; Cooper, M. J.; Cordier, A.; Coutu, S.; Covault, C. E.; Cronin, J.; Curutiu, A.; Dallier, R.; Daniel, B.; Dasso, S.; Daumiller, K.; Dawson, B. R.; de Almeida, R. M.; De Domenico, M.; de Jong, S. J.; de Mello Neto, J. R. T.; De Mitri, I.; de Oliveira, J.; de Souza, V.; del Peral, L.; Deligny, O.; Dembinski, H.; Dhital, N.; Di Giulio, C.; Di Matteo, A.; Diaz, J. C.; Díaz Castro, M. L.; Diogo, F.; Dobrigkeit, C.; Docters, W.; D'Olivo, J. C.; Dorofeev, A.; Dorosti Hasankiadeh, Q.; Dova, M. T.; Ebr, J.; Engel, R.; Erdmann, M.; Erfani, M.; Escobar, C. O.; Espadanal, J.; Etchegoyen, A.; Facal San Luis, P.; Falcke, H.; Fang, K.; Farrar, G.; Fauth, A. C.; Fazzini, N.; Ferguson, A. P.; Fernandes, M.; Fick, B.; Figueira, J. M.; Filevich, A.; Filipčič, A.; Fox, B. D.; Fratu, O.; Freire, M. 
M.; Fröhlich, U.; Fuchs, B.; Fujii, T.; Gaior, R.; García, B.; Garcia-Gamez, D.; Garcia-Pinto, D.; Garilli, G.; Gascon Bravo, A.; Gate, F.; Gemmeke, H.; Ghia, P. L.; Giaccari, U.; Giammarchi, M.; Giller, M.; Glaser, C.; Glass, H.; Gómez Berisso, M.; Gómez Vitale, P. F.; Gonçalves, P.; Gonzalez, J. G.; González, N.; Gookin, B.; Gordon, J.; Gorgi, A.; Gorham, P.; Gouffon, P.; Grebe, S.; Griffith, N.; Grillo, A. F.; Grubb, T. D.; Guarino, F.; Guedes, G. P.; Hampel, M. R.; Hansen, P.; Harari, D.; Harrison, T. A.; Hartmann, S.; Harton, J. L.; Haungs, A.; Hebbeker, T.; Heck, D.; Heimann, P.; Herve, A. E.; Hill, G. C.; Hojvat, C.; Hollon, N.; Holt, E.; Homola, P.; Hörandel, J. R.; Horvath, P.; Hrabovský, M.; Huber, D.; Huege, T.; Insolia, A.; Isar, P. G.; Jandt, I.; Jansen, S.; Jarne, C.; Josebachuili, M.; Kääpä, A.; Kambeitz, O.; Kampert, K. H.; Kasper, P.; Katkov, I.; Kégl, B.; Keilhauer, B.; Keivani, A.; Kemp, E.; Kieckhafer, R. M.; Klages, H. O.; Kleifges, M.; Kleinfeller, J.; Krause, R.; Krohm, N.; Krömer, O.; Kruppke-Hansen, D.; Kuempel, D.; Kunka, N.; LaHurd, D.; Latronico, L.; Lauer, R.; Lauscher, M.; Lautridou, P.; Le Coz, S.; Leão, M. S. A. B.; Lebrun, D.; Lebrun, P.; Leigui de Oliveira, M. A.; Letessier-Selvon, A.; Lhenry-Yvon, I.; Link, K.; López, R.; Louedec, K.; Lozano Bahilo, J.; Lu, L.; Lucero, A.; Ludwig, M.; Malacari, M.; Maldera, S.; Mallamaci, M.; Maller, J.; Mandat, D.; Mantsch, P.; Mariazzi, A. G.; Marin, V.; Mariş, I. C.; Marsella, G.; Martello, D.; Martin, L.; Martinez, H.; Martínez Bravo, O.; Martraire, D.; Masías Meza, J. J.; Mathes, H. J.; Mathys, S.; Matthews, J.; Matthews, J. A. J.; Matthiae, G.; Maurel, D.; Maurizio, D.; Mayotte, E.; Mazur, P. O.; Medina, C.; Medina-Tanco, G.; Meissner, R.; Melissas, M.; Melo, D.; Menshikov, A.; Messina, S.; Meyhandan, R.; Mićanović, S.; Micheletti, M. I.; Middendorf, L.; Minaya, I. 
A.; Miramonti, L.; Mitrica, B.; Molina-Bueno, L.; Mollerach, S.; Monasor, M.; Monnier Ragaigne, D.; Montanet, F.; Morello, C.; Mostafá, M.; Moura, C. A.; Muller, M. A.; Müller, G.; Müller, S.; Münchmeyer, M.; Mussa, R.; Navarra, G.; Navas, S.; Necesal, P.; Nellen, L.; Nelles, A.; Neuser, J.; Nguyen, P. H.; Niechciol, M.; Niemietz, L.; Niggemann, T.; Nitz, D.; Nosek, D.; Novotny, V.; Nožka, L.; Ochilo, L.; Oikonomou, F.; Olinto, A.; Oliveira, M.; Pacheco, N.; Pakk Selmi-Dei, D.; Palatka, M.; Pallotta, J.; Palmieri, N.; Papenbreer, P.; Parente, G.; Parra, A.; Paul, T.; Pech, M.; Pȩkala, J.; Pelayo, R.; Pepe, I. M.; Perrone, L.; Petermann, E.; Peters, C.; Petrera, S.; Petrov, Y.; Phuntsok, J.; Piegaia, R.; Pierog, T.; Pieroni, P.; Pimenta, M.; Pirronello, V.; Platino, M.; Plum, M.; Porcelli, A.; Porowski, C.; Prado, R. R.; Privitera, P.; Prouza, M.; Purrello, V.; Quel, E. J.; Querchfeld, S.; Quinn, S.; Rautenberg, J.; Ravel, O.; Ravignani, D.; Revenu, B.; Ridky, J.; Riggi, S.; Risse, M.; Ristori, P.; Rizi, V.; Rodrigues de Carvalho, W.; Rodriguez Fernandez, G.; Rodriguez Rojo, J.; Rodríguez-Frías, M. D.; Rogozin, D.; Ros, G.; Rosado, J.; Rossler, T.; Roth, M.; Roulet, E.; Rovero, A. C.; Saffi, S. J.; Saftoiu, A.; Salamida, F.; Salazar, H.; Saleh, A.; Salesa Greus, F.; Salina, G.; Sánchez, F.; Sanchez-Lucas, P.; Santo, C. E.; Santos, E.; Santos, E. M.; Sarazin, F.; Sarkar, B.; Sarmento, R.; Sato, R.; Scharf, N.; Scherini, V.; Schieler, H.; Schiffer, P.; Schmidt, D.; Scholten, O.; Schoorlemmer, H.; Schovánek, P.; Schröder, F. G.; Schulz, A.; Schulz, J.; Schumacher, J.; Sciutto, S. J.; Segreto, A.; Settimo, M.; Shadkam, A.; Shellard, R. C.; Sidelnik, I.; Sigl, G.; Sima, O.; Śmiałkowski, A.; Šmída, R.; Snow, G. R.; Sommers, P.; Sorokin, J.; Squartini, R.; Srivastava, Y. N.; Stanič, S.; Stapleton, J.; Stasielak, J.; Stephan, M.; Stutz, A.; Suarez, F.; Suomijärvi, T.; Supanitsky, A. D.; Sutherland, M. S.; Swain, J.; Szadkowski, Z.; Szuba, M.; Taborda, O. 
A.; Tapia, A.; Tepe, A.; Theodoro, V. M.; Timmermans, C.; Todero Peixoto, C. J.; Toma, G.; Tomankova, L.; Tomé, B.; Tonachini, A.; Torralba Elipe, G.; Torres Machado, D.; Travnicek, P.; Trovato, E.; Ulrich, R.; Unger, M.; Urban, M.; Valdés Galicia, J. F.; Valiño, I.; Valore, L.; van Aar, G.; van Bodegom, P.; van den Berg, A. M.; van Velzen, S.; van Vliet, A.; Varela, E.; Vargas Cárdenas, B.; Varner, G.; Vázquez, J. R.; Vázquez, R. A.; Veberič, D.; Verzi, V.; Vicha, J.; Videla, M.; Villase ñor, L.; Vlcek, B.; Vorobiov, S.; Wahlberg, H.; Wainberg, O.; Walz, D.; Watson, A. A.; Weber, M.; Weidenhaupt, K.; Weindl, A.; Werner, F.; Widom, A.; Wiencke, L.; Wilczyńska, B.; Wilczyński, H.; Williams, C.; Winchen, T.; Wittkowski, D.; Wundheiler, B.; Wykes, S.; Yamamoto, T.; Yapici, T.; Yuan, G.; Yushkov, A.; Zamorano, B.; Zas, E.; Zavrtanik, D.; Zavrtanik, M.; Zepeda, A.; Zhou, J.; Zhu, Y.; Zimbres Silva, M.; Ziolkowski, M.; Zuccarello, F.; Pierre Auger Collaboration

2015-05-01

We analyze the distribution of arrival directions of ultra-high-energy cosmic rays recorded at the Pierre Auger Observatory in 10 years of operation. The data set, about three times larger than that used in earlier studies, includes arrival directions with zenith angles up to 80°, thus covering from -90° to +45° in declination. After updating the fraction of events correlating with the active galactic nuclei (AGNs) in the Véron-Cetty and Véron catalog, we subject the arrival directions of the data with energies in excess of 40 EeV to different tests for anisotropy. We search for localized excess fluxes, self-clustering of event directions at angular scales up to 30°, and different threshold energies between 40 and 80 EeV. We then look for correlations of cosmic rays with celestial structures both in the Galaxy (the Galactic Center and Galactic Plane) and in the local universe (the Super-Galactic Plane). We also examine their correlation with different populations of nearby extragalactic objects: galaxies in the 2MRS catalog, AGNs detected by Swift-BAT, radio galaxies with jets, and the Centaurus A (Cen A) galaxy. None of the tests show statistically significant evidence of anisotropy. The strongest departures from isotropy (post-trial probability ~1.4%) are obtained for cosmic rays with E > 58 EeV in rather large windows around Swift AGNs closer than 130 Mpc and brighter than 10⁴⁴ erg s⁻¹ (18° radius), and around the direction of Cen A (15° radius).

8. Dark matter direct detection constraints on the minimal supersymmetric standard model and implications for LHC Higgs boson searches

SciTech Connect

Cao, Junjie; Hikasa, Ken-ichi; Wang, Wenyu; Yang, Jin Min; Yu, Li-Xin

2010-09-01

Assuming the lightest neutralino solely composes the cosmic dark matter, we examine the constraints of the CDMS-II and XENON100 dark matter direct searches on the parameter space of the minimal supersymmetric standard model (MSSM) Higgs sector. We find that the current CDMS-II/XENON100 limits can exclude some of the parameter space which survives the constraints from the dark matter relic density and various collider experiments. We also find that in the currently allowed parameter space, the charged Higgs boson is hardly accessible at the LHC for an integrated luminosity of 30 fb⁻¹, while the neutral non-SM (standard model) Higgs bosons (H, A) may be accessible in some allowed region characterized by a large µ. The future XENON100 (6000 kg-days exposure) will significantly tighten the parameter space in case of nonobservation of dark matter.

9. Search for the Galactic Disk and Halo Components in the Arrival Directions of High-Energy Astrophysical Neutrinos

Troitsky, S. V.

2015-12-01

The arrival directions of 40 neutrino events with energies ≳100 TeV, observed by the IceCube experiment, are studied. Their distributions in Galactic latitude and in angular distance to the Galactic Center allow searching for Milky Way disk and halo-related components, respectively. No statistically significant evidence for the disk component is found, though even 100% disk origin of the flux is allowed at the 90% confidence level. In contrast, the Galactic Center-Anticenter dipole anisotropy, specific for dark-matter decays (annihilation) or for interactions of cosmic rays with the extended halo of the circumgalactic gas, is clearly favored over the isotropic distribution (the probability of a fluctuation of the isotropic signal is ~2%).

10. Searches for Direct CP Violation in D+ Decays And for D0 Anti-D0 Mixing

SciTech Connect

Purohit, M. V. (South Carolina U.)

2005-10-11

The authors present preliminary results of a search for direct CP violation in D⁺ → K⁺K⁻π⁺ decays using 87 fb⁻¹ of data acquired by the BaBar experiment running on and near the ϒ(4S) from 1999-2002. The authors report the asymmetries in the signal mode and in the main resonant subchannels. Based on the same dataset, they also report a new 90% CL upper limit of 0.0042 on the rate of D⁰-D̄⁰ mixing using the decay modes D*⁺ → D⁰π⁺, D⁰ → [K/K*]eν (+c.c.).

11. Identification of Unknown Interface Locations in a Source/Shield System Using the Mesh Adaptive Direct Search Method

SciTech Connect

Armstrong, Jerawan C.; Favorite, Jeffrey A.

2012-06-20

The Levenberg-Marquardt (or simply Marquardt) and differential evolution (DE) optimization methods were recently applied to solve inverse transport problems. The Marquardt method is fast, but its convergence depends on the initial guess. The DE method, while it has been shown to work extremely well at finding an optimum independent of the initial guess, does not provide a globally optimal solution in some problems. In this paper, we apply the Mesh Adaptive Direct Search (MADS) algorithm to solve the inverse problem of material interface location identification in one-dimensional spherical radiation source/shield systems, and we compare the results obtained by MADS to those obtained by Levenberg-Marquardt and DE.
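The flavor of a mesh-based direct search can be sketched with a simplified GPS-style pattern search: poll the coordinate directions on the current mesh, move to any improving point, and refine the mesh when no poll point improves. This is not the full MADS algorithm (which additionally adapts the poll directions so they become asymptotically dense); it is a minimal derivative-free sketch on an assumed objective.

```python
# Simplified pattern search in the spirit of mesh-based direct search:
# derivative-free, polls +/- step along each coordinate, halves the
# mesh size on a failed poll.
def pattern_search(f, x0, step=1.0, tol=1e-6, max_iter=10_000):
    """Minimize f starting from x0; returns (best point, best value)."""
    x, fx = list(x0), f(list(x0))
    for _ in range(max_iter):
        if step < tol:
            break                # mesh is fine enough; stop
        improved = False
        for i in range(len(x)):
            for s in (step, -step):
                cand = list(x)
                cand[i] += s
                fc = f(cand)
                if fc < fx:      # accept any improving poll point
                    x, fx, improved = cand, fc, True
        if not improved:
            step *= 0.5          # refine the mesh around the incumbent
    return x, fx
```

In the paper's setting, f would be a misfit between measured and computed radiation signatures as a function of candidate interface radii; here any smooth test function illustrates the mechanics.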

12. Global interpretation of direct Dark Matter searches after CDMS-II results

SciTech Connect

Kopp, Joachim; Schwetz, Thomas; Zupan, Jure

2009-12-01

We perform a global fit to data from Dark Matter (DM) direct detection experiments, including the recent CDMS-II results. We discuss possible interpretations of the DAMA annual modulation signal in terms of spin-independent and spin-dependent DM-nucleus interactions, both for elastic and inelastic scattering. We find that for the spin-dependent inelastic scattering off protons a good fit to all data is obtained. We present a simple toy model realizing such a scenario. In all the remaining cases the DAMA allowed regions are disfavored by other experiments or suffer from severe fine tuning of DM parameters with respect to the galactic escape velocity. Finally, we also entertain the possibility that the two events observed in CDMS-II are an actual signal of elastic DM scattering, and we compare the resulting CDMS-II allowed regions to the exclusion limits from other experiments.

13. Searching for a Mate: Pheromone-Directed Movement of the Benthic Diatom Seminavis robusta.

PubMed

Bondoc, Karen Grace V; Lembke, Christine; Vyverman, Wim; Pohnert, Georg

2016-08-01

Diatoms are species-rich microalgae that often have a unique life cycle with vegetative cell size reduction followed by size restoration through sexual reproduction of two mating types (MT(+) and MT(-)). In the marine benthic diatom Seminavis robusta, mate-finding is mediated by an L-proline-derived diketopiperazine, a pheromone produced by the attracting mating type (MT(-)). Here, we investigate the movement patterns of cells of the opposite mating type (MT(+)) exposed to a pheromone gradient, using video monitoring and statistical modeling. We report that cells of the migrating mating type (MT(+)) respond to pheromone gradients by simultaneous chemotaxis and chemokinesis. Changes in movement behavior enable MT(+) cells to locate the direction of the pheromone source and to maximize their encounter rate towards it. PMID:27260155

14. SEEK: A FORTRAN optimization program using a feasible directions gradient search

NASA Technical Reports Server (NTRS)

Savage, M.

1995-01-01

This report describes the use of computer program 'SEEK' which works in conjunction with two user-written subroutines and an input data file to perform an optimization procedure on a user's problem. The optimization method uses a modified feasible directions gradient technique. SEEK is written in ANSI standard Fortran 77, has an object size of about 46K bytes, and can be used on a personal computer running DOS. This report describes the use of the program and discusses the optimizing method. The program use is illustrated with four example problems: a bushing design, a helical coil spring design, a gear mesh design, and a two-parameter Weibull life-reliability curve fit.

15. Massively parallel visualization: Parallel rendering

SciTech Connect

Hansen, C.D.; Krogh, M.; White, W.

1995-12-01

This paper presents rendering algorithms, developed for massively parallel processors (MPPs), for polygon, sphere, and volumetric data. The polygon algorithm uses a data-parallel approach, whereas the sphere and volume renderers use a MIMD approach. Implementations of these algorithms are presented for the Thinking Machines Corporation CM-5 MPP.

16. ZEPLIN-III direct dark matter search : final results and measurements in support of next generation instruments

Reichhart, Lea

2013-12-01

Astrophysical observations give convincing evidence for a vast non-baryonic component, the so-called dark matter, accounting for over 20% of the overall content of our Universe. Direct dark matter search experiments explore the possibility of interactions of these dark matter particles with ordinary baryonic matter via elastic scattering resulting in single nuclear recoils. The ZEPLIN-III detector operated on the basis of a dual-phase (liquid/gas) xenon target, recording events in two separate response channels, scintillation and ionisation. These allow discrimination between electron recoils (from background radiation) and the signal expected from Weakly Interacting Massive Particle (WIMP) elastic scatters. Following a productive first exposure, the detector was upgraded with a new array of ultra-low-background photomultiplier tubes, reducing the electron recoil background by over an order of magnitude. A second major upgrade to the detector was the incorporation of a tonne-scale active veto detector system surrounding the WIMP target. Calibration and science data taken in coincidence with ZEPLIN-III showed rejection of up to 30% of the dominant electron recoil background and over 60% of neutron-induced nuclear recoils. Data taking for the second science run finished in May 2011 with a total accrued raw fiducial exposure of 1,344 kg days. With this extensive data set, from over 300 days of run time, a limit on the spin-independent WIMP-nucleon cross-section of 4.8 × 10⁻⁸ pb near 50 GeV/c² WIMP mass was set at 90% confidence. This result, combined with the first science run of ZEPLIN-III, excludes the scalar cross-section above 3.9 × 10⁻⁸ pb. Studying the background data taken by the veto detector allowed a calculation of the neutron yield induced by high-energy cosmic-ray muons in lead of (5.8 ± 0.2) × 10⁻³ neutrons/muon/(g/cm²) for a mean muon energy of 260 GeV. Measurements of this kind are of great importance for large scale direct dark matter search experiments and

17. Direct search for a ferromagnetic phase in a heavily overdoped nonsuperconducting copper oxide.

PubMed

Sonier, J E; Kaiser, C V; Pacradouni, V; Sabok-Sayr, S A; Cochrane, C; MacLaughlin, D E; Komiya, S; Hussey, N E

2010-10-01

The doping of charge carriers into the CuO(2) planes of copper oxide Mott insulators causes a gradual destruction of antiferromagnetism and the emergence of high-temperature superconductivity. Optimal superconductivity is achieved at a doping concentration p beyond which further increases in doping cause a weakening and eventual disappearance of superconductivity. A potential explanation for this demise is that ferromagnetic fluctuations compete with superconductivity in the overdoped regime. In this case, a ferromagnetic phase at very low temperatures is predicted to exist beyond the doping concentration at which superconductivity disappears. Here we report on a direct examination of this scenario in overdoped La(2-x)Sr(x)CuO(4) using the technique of muon spin relaxation. We detect the onset of static magnetic moments of electronic origin at low temperature in the heavily overdoped nonsuperconducting region. However, the magnetism does not exist in a commensurate long-range ordered state. Instead it appears as a dilute concentration of static magnetic moments. This finding places severe restrictions on the form of ferromagnetism that may exist in the overdoped regime. Although an extrinsic impurity cannot be absolutely ruled out as the source of the magnetism that does occur, the results presented here lend support to electronic band calculations that predict the occurrence of weak localized ferromagnetism at high doping. PMID:20855579

18. Direct search for a ferromagnetic phase in a heavily overdoped nonsuperconducting copper oxide

PubMed Central

Sonier, J. E.; Kaiser, C. V.; Pacradouni, V.; Sabok-Sayr, S. A.; Cochrane, C.; MacLaughlin, D. E.; Komiya, S.; Hussey, N. E.

2010-01-01

The doping of charge carriers into the CuO2 planes of copper oxide Mott insulators causes a gradual destruction of antiferromagnetism and the emergence of high-temperature superconductivity. Optimal superconductivity is achieved at a doping concentration p beyond which further increases in doping cause a weakening and eventual disappearance of superconductivity. A potential explanation for this demise is that ferromagnetic fluctuations compete with superconductivity in the overdoped regime. In this case, a ferromagnetic phase at very low temperatures is predicted to exist beyond the doping concentration at which superconductivity disappears. Here we report on a direct examination of this scenario in overdoped La2-xSrxCuO4 using the technique of muon spin relaxation. We detect the onset of static magnetic moments of electronic origin at low temperature in the heavily overdoped nonsuperconducting region. However, the magnetism does not exist in a commensurate long-range ordered state. Instead it appears as a dilute concentration of static magnetic moments. This finding places severe restrictions on the form of ferromagnetism that may exist in the overdoped regime. Although an extrinsic impurity cannot be absolutely ruled out as the source of the magnetism that does occur, the results presented here lend support to electronic band calculations that predict the occurrence of weak localized ferromagnetism at high doping. PMID:20855579

19. Prospects for detection of target-dependent annual modulation in direct dark matter searches

Del Nobile, Eugenio; Gelmini, Graciela B.; Witte, Samuel J.

2016-02-01

Earth's rotation about the Sun produces an annual modulation in the expected scattering rate at direct dark matter detection experiments. The annual modulation as a function of the recoil energy ER imparted by the dark matter particle to a target nucleus is expected to vary depending on the detector material. However, for most interactions a change of variables from ER to vmin, the minimum speed a dark matter particle must have to impart a fixed ER to a target nucleus, produces an annual modulation independent of the target element. We recently showed that if the dark matter-nucleus cross section contains a non-factorizable target and dark matter velocity dependence, the annual modulation as a function of vmin can be target dependent. Here we examine more extensively the necessary conditions for target-dependent modulation, its observability in present-day experiments, and the extent to which putative signals could identify a dark matter-nucleus differential cross section with a non-factorizable dependence on the dark matter velocity.
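The change of variables from ER to vmin mentioned above is, for standard elastic scattering, the usual two-body kinematic relation (here mχ is the dark matter mass, mN the nuclear mass, and μN the dark matter-nucleus reduced mass):

```latex
% Minimum DM speed required to impart recoil energy E_R to a nucleus of
% mass m_N in elastic scattering (standard kinematics, not specific to
% the cross sections analyzed in this record):
v_{\min}(E_R) = \sqrt{\frac{m_N E_R}{2\mu_N^2}},
\qquad
\mu_N = \frac{m_\chi m_N}{m_\chi + m_N}.
```

Because μN depends on the target, a fixed vmin maps to different recoil energies in different detector materials, which is what makes the vmin variable a convenient target-independent meeting ground for most (factorizable) interactions.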

20. Making sense of the local Galactic escape speed estimates in direct dark matter searches

Lavalle, Julien; Magni, Stefano

2015-01-01

Direct detection (DD) of dark matter (DM) candidates in the ≲10 GeV mass range is very sensitive to the tail of their velocity distribution. The important quantity is the maximum weakly interacting massive particle speed in the observer's rest frame, i.e., on average, the sum of the local Galactic escape speed vesc and of the circular velocity of the Sun vc. While the latter has been receiving continuous attention, the former is more difficult to constrain. The RAVE Collaboration has just released a new estimate of vesc [T. Piffl et al., Astron. Astrophys. 562, A91 (2014), hereafter P14] that supersedes the previous one [M. C. Smith et al., Mon. Not. R. Astron. Soc. 379, 755 (2007)], which is of interest in the perspective of reducing the astrophysical uncertainties in DD. Nevertheless, these new estimates cannot be used blindly, as they rely on assumptions in the dark halo modeling which induce tight correlations between the escape speed and other local astrophysical parameters. We make a self-consistent study of the implications of the RAVE results on DD assuming isotropic DM velocity distributions, both Maxwellian and ergodic. Taking as references the experimental sensitivities currently achieved by LUX, CRESST-II, and SuperCDMS, we show that (i) the exclusion curves associated with the best-fit points of P14 may be more constraining by up to ~40% with respect to standard limits, because the underlying astrophysical correlations induce a larger local DM density, and (ii) the corresponding relative uncertainties inferred in the low weakly interacting massive particle mass region may be moderate, down to 10-15% below 10 GeV. We finally discuss the level of consistency of these results with other independent astrophysical constraints. This analysis is complementary to others based on rotation curves.

1. Parallel machines: Parallel machine languages

SciTech Connect

Iannucci, R.A.

1990-01-01

This book presents a framework for understanding the tradeoffs between the conventional view and the dataflow view, with the objective of discovering the critical hardware structures which must be present in any scalable, general-purpose parallel computer to effectively tolerate latency and synchronization costs. The author presents an approach to scalable, general-purpose parallel computation. Linguistic concerns, compiling issues, intermediate language issues, and hardware/technological constraints are presented as a combined approach to architectural development. This book presents the notion of a parallel machine language.

2. Simplified Parallel Domain Traversal

SciTech Connect

Erickson III, David J

2011-01-01

Many data-intensive scientific analysis techniques require global domain traversal, which over the years has been a bottleneck for efficient parallelization across distributed-memory architectures. Inspired by MapReduce and other simplified parallel programming approaches, we have designed DStep, a flexible system that greatly simplifies efficient parallelization of domain traversal techniques at scale. In order to deliver both simplicity to users as well as scalability on HPC platforms, we introduce a novel two-tiered communication architecture for managing and exploiting asynchronous communication loads. We also integrate our design with advanced parallel I/O techniques that operate directly on native simulation output. We demonstrate DStep by performing teleconnection analysis across ensemble runs of terascale atmospheric CO₂ and climate data, and we show scalability results on up to 65,536 IBM BlueGene/P cores.

3. Parallel pipelining

SciTech Connect

Joseph, D.D.; Bai, R.; Liao, T.Y.; Huang, A.; Hu, H.H.

1995-09-01

In this paper the authors introduce the idea of parallel pipelining for water lubricated transportation of oil (or other viscous material). A parallel system can have major advantages over a single pipe with respect to the cost of maintenance and continuous operation of the system, to the pressure gradients required to restart a stopped system and to the reduction and even elimination of the fouling of pipe walls in continuous operation. The authors show that the action of capillarity in small pipes is more favorable for restart than in large pipes. In a parallel pipeline system, they estimate the number of small pipes needed to deliver the same oil flux as in one larger pipe as N = (R/r)^α, where r and R are the radii of the small and large pipes, respectively, and α = 4 or 19/7 when the lubricating water flow is laminar or turbulent.
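The pipe-count scaling quoted above is easy to evaluate directly; a small sketch (the radii chosen are illustrative, not from the paper):

```python
# Number of small pipes needed to match one large pipe's oil flux,
# N = (R/r)**alpha, with alpha = 4 for laminar lubricating water flow
# and alpha = 19/7 for turbulent flow, per the scaling in the abstract.

def pipes_needed(R, r, regime="laminar"):
    """Estimate N small pipes of radius r replacing one pipe of radius R."""
    alpha = 4.0 if regime == "laminar" else 19.0 / 7.0
    return (R / r) ** alpha

# Example: replacing one 0.5 m radius pipe with 0.05 m radius pipes.
n_laminar = pipes_needed(0.5, 0.05, "laminar")      # (10)**4 pipes
n_turbulent = pipes_needed(0.5, 0.05, "turbulent")  # (10)**(19/7) pipes
```

Note how strongly the laminar exponent penalizes small pipes: a factor-of-ten radius reduction costs 10⁴ laminar pipes but only about 10^2.7 ≈ 520 turbulent ones.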

4. Data parallelism

SciTech Connect

Gorda, B.C.

1992-09-01

Data locality is fundamental to performance on distributed memory parallel architectures. Application programmers know this well and go to great pains to arrange data for optimal performance. Data Parallelism, a model from the Single Instruction Multiple Data (SIMD) architecture, is finding a new home on the Multiple Instruction Multiple Data (MIMD) architectures. This style of programming, distinguished by taking the computation to the data, is what programmers have been doing by hand for a long time. Recent work in this area holds the promise of making the programmer's task easier.
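As a toy illustration of "taking the computation to the data" (the partitioning scheme and workload here are hypothetical, not from the report), one can split the data into per-worker partitions and apply the same operation to each partition concurrently:

```python
# A minimal data-parallel sketch: the same operation runs over each
# partition of the data, and only small per-partition results are
# combined, mirroring the SIMD-style model described in the abstract.
from multiprocessing import Pool

def partial_sum_of_squares(chunk):
    # Each worker touches only its own partition of the data.
    return sum(x * x for x in chunk)

def parallel_sum_of_squares(data, nworkers=4):
    chunks = [data[i::nworkers] for i in range(nworkers)]
    with Pool(nworkers) as pool:
        return sum(pool.map(partial_sum_of_squares, chunks))

if __name__ == "__main__":
    total = parallel_sum_of_squares(list(range(1000)))
```

The point of the model is that the decomposition of the data, not of the control flow, determines the parallelism, which is exactly the arrangement programmers on distributed-memory machines have long done by hand.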

6. Direct Neutrino Mass Searches

VanDevender, B. A.

2009-12-01

Neutrino flavor oscillation experiments have demonstrated that the three Standard Model neutrino flavor eigenstates are mixed with three mass eigenstates whose mass eigenvalues are nondegenerate. The oscillation experiments measure the differences between the squares of the mass eigenvalues but tell us nothing about their absolute values. The unknown absolute neutrino mass scale has important implications in particle physics and cosmology. Beta decay endpoint measurements are presented as a model-independent method to measure the absolute neutrino mass. The Karlsruhe Tritium Neutrino Experiment (KATRIN) is explored in detail.

7. Three-dimensional magnetotelluric inversion including topography using deformed hexahedral edge finite elements, direct solvers and data space Gauss-Newton, parallelized on SMP computers

Kordy, M. A.; Wannamaker, P. E.; Maris, V.; Cherkaev, E.; Hill, G. J.

2014-12-01

We have developed an algorithm for 3D simulation and inversion of magnetotelluric (MT) responses using deformable hexahedral finite elements that permits incorporation of topography. Direct solvers parallelized on symmetric multiprocessor (SMP), single-chassis workstations with large RAM are used for the forward solution, parameter Jacobians, and model update. The forward simulator and Jacobian calculations, as well as synthetic and real data inversions, are presented. We use first-order edge elements to represent the secondary electric field (E), yielding accuracy O(h) for E and its curl (magnetic field). For very low frequency or small material admittivity, the E-field requires divergence correction. Using Hodge decomposition, the correction may be applied after the forward solution is calculated, which allows accurate E-field solutions in dielectric air. The system matrix factorization is computed using the MUMPS library, which shows moderately good scalability through 12 processor cores but limited gains beyond that. The factored matrix is used to calculate the forward response as well as the Jacobians of field and MT responses using the reciprocity theorem. Comparison with other codes demonstrates the accuracy of our forward calculations. We consider a popular conductive/resistive double-brick structure and several topographic models. In particular, the ability of finite elements to represent smooth topographic slopes permits accurate simulation of refraction of electromagnetic waves normal to the slopes at high frequencies. Run time tests indicate that for meshes as large as 150×150×60 elements, the MT forward response and Jacobians can be calculated in ~2.5 hours per frequency. For inversion, we implemented a data-space Gauss-Newton method, which offers a reduction in memory requirements and a significant speedup of the parameter step versus the model-space approach. For dense matrix operations we use the tiling approach of the PLASMA library, which shows very good scalability. In synthetic

8. Characterization of the non-uniqueness of used nuclear fuel burnup signatures through a Mesh-Adaptive Direct Search

Skutnik, Steven E.; Davis, David R.

2016-05-01

The use of passive gamma and neutron signatures from fission indicators is a common means of estimating used fuel burnup, enrichment, and cooling time. However, while characteristic fission product signatures such as 134Cs, 137Cs, 154Eu, and others are generally reliable estimators for used fuel burnup within the context where the assembly initial enrichment and the discharge time are known, in the absence of initial enrichment and/or cooling time information (such as when applying NDA measurements in a safeguards/verification context), these fission product indicators no longer yield a unique solution for assembly enrichment, burnup, and cooling time after discharge. Through the use of a new Mesh-Adaptive Direct Search (MADS) algorithm, it is possible to directly probe the shape of this "degeneracy space" characteristic of individual nuclides (and combinations thereof), both as a function of constrained parameters (such as the assembly irradiation history) and unconstrained parameters (e.g., the cooling time before measurement and the measurement precision for particular indicator nuclides). In doing so, this affords the identification of potential means of narrowing the uncertainty space of potential assembly enrichment, burnup, and cooling time combinations, thereby bounding estimates of assembly plutonium content. In particular, combinations of gamma-emitting nuclides with distinct half-lives (e.g., 134Cs with 137Cs and 154Eu) in conjunction with gross neutron counting (via 244Cm) are able to reasonably constrain the degeneracy space of possible solutions to a space small enough to perform useful discrimination and verification of fuel assemblies based on their irradiation history.

9. A search for anisotropy in the arrival directions of ultra high energy cosmic rays recorded at the Pierre Auger Observatory

SciTech Connect

Abreu, P.

2012-01-01

Observations of cosmic ray arrival directions made with the Pierre Auger Observatory have previously provided evidence of anisotropy at the 99% CL using the correlation of ultra high energy cosmic rays (UHECRs) with objects drawn from the Veron-Cetty Veron catalog. In this paper we report on the use of three catalog-independent methods to search for anisotropy. The 2pt-L, 2pt+, and 3pt methods, each giving a different measure of self-clustering in arrival directions, were tested on mock cosmic ray data sets to study the impacts of sample size and magnetic smearing on their results, accounting for both angular and energy resolutions. If the sources of UHECRs follow the same large scale structure as ordinary galaxies in the local Universe, and if UHECRs are deflected no more than a few degrees, a study of mock maps suggests that these three methods can efficiently respond to the resulting anisotropy with a P-value of 1.0% or smaller with data sets of as few as 100 events. Using data taken from January 1, 2004 to July 31, 2010, we examined the 20, 30, ..., 110 highest-energy events with a corresponding minimum energy threshold of about 51 EeV. The minimum P-values found were 13.5% using the 2pt-L method, 1.0% using the 2pt+ method, and 1.1% using the 3pt method, for the 100 highest-energy events. In view of the multiple (correlated) scans performed on the data set, these catalog-independent methods do not yield strong evidence of anisotropy in the highest energy cosmic rays.
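The idea behind such self-clustering statistics can be shown with a bare-bones 2-point pair count: for a chosen angular scale, count event pairs separated by less than that angle and compare with the isotropic expectation. (The actual 2pt-L, 2pt+, and 3pt statistics are more elaborate; this sketch, with toy unit-vector data, only illustrates the underlying quantity.)

```python
# A minimal 2-point autocorrelation count over arrival directions
# represented as unit vectors on the sphere. Illustrative sketch only.
import math

def angular_separation(a, b):
    """Angle in radians between two unit vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return math.acos(max(-1.0, min(1.0, dot)))  # clamp for float safety

def pair_count(events, psi):
    """Number of event pairs with separation smaller than psi (radians)."""
    n = len(events)
    return sum(1
               for i in range(n)
               for j in range(i + 1, n)
               if angular_separation(events[i], events[j]) < psi)

# Toy data: two coincident events along +x plus one outlier along +y,
# i.e. exactly one tightly clustered pair.
events = [(1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (1.0, 0.0, 0.0)]
n_close = pair_count(events, math.radians(5.0))
```

An anisotropy search then compares such counts, over a range of scales, against the distribution obtained from isotropic mock data sets to assign a P-value.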

10. Process Simulation and Control Optimization of a Blast Furnace Using Classical Thermodynamics Combined to a Direct Search Algorithm

Harvey, Jean-Philippe; Gheribi, Aïmen E.

2013-12-01

Several numerical approaches have been proposed in the literature to simulate the behavior of modern blast furnaces: finite volume methods, data-mining models, heat and mass balance models, and classical thermodynamic simulations. Despite this, there is at present no efficient method for quickly evaluating the optimal operating parameters of a blast furnace as a function of the iron ore composition that takes into account all potential chemical reactions that could occur in the system. In the current study, we propose a global simulation strategy for a blast furnace, the 5-unit process simulation. It is based on classical thermodynamic calculations coupled to a direct search algorithm to optimize process parameters. These parameters include the minimum required metallurgical coke consumption as well as the optimal blast chemical composition and the total charge that simultaneously satisfy the overall heat and mass balances of the system. Moreover, a Gibbs free energy function for metallurgical coke is parameterized in the current study and used to fine-tune the simulation of the blast furnace. Optimal operating conditions and predicted output stream properties calculated by the proposed thermodynamic simulation strategy are compared with reference data found in the literature, demonstrating the validity and high precision of the simulation.

11. Radiopurity of CaWO₄ crystals for direct dark matter search with CRESST and EURECA

SciTech Connect

Münster, A.; Sivers, M. v.; Erb, A.; Feilitzsch, F. v.; Gütlein, A.; Lanfranchi, J.-C.; Potzel, W.; Angloher, G.; Bento, A.; Hauff, D.; Petricca, F.; Pröbst, F.; Bucci, C.; Canonica, L.; Gorla, P.; Laubenstein, M.; Jochum, J.; Loebell, J.; Kraus, H.; Ortigoza, Y.; and others

2014-05-01

The direct dark matter search experiment CRESST uses scintillating CaWO₄ single crystals as targets for possible WIMP scatterings. An intrinsic radioactive contamination of the crystals as low as possible is crucial for the sensitivity of the detectors. In the past CaWO₄ crystals operated in CRESST were produced by institutes in Russia and the Ukraine. Since 2011 CaWO₄ crystals have also been grown at the crystal laboratory of the Technische Universität München (TUM) to better meet the requirements of CRESST and of the future tonne-scale multi-material experiment EURECA. The radiopurity of the raw materials and of first TUM-grown crystals was measured by ultra-low background γ-spectrometry. Two TUM-grown crystals were also operated as low-temperature detectors at a test setup in the Gran Sasso underground laboratory. These measurements were used to determine the crystals' intrinsic α-activities which were compared to those of crystals produced at other institutes. The total α-activities of TUM-grown crystals as low as 1.23±0.06 mBq/kg were found to be significantly smaller than the activities of crystals grown at other institutes typically ranging between ∼ 15 mBq/kg and ∼ 35 mBq/kg.

12. Discussing direct search of dark matter particles in the minimal supersymmetric extension of the standard model with light neutralinos

SciTech Connect

Fornengo, N.; Scopel, S.; Bottino, A.

2011-01-01

We examine the status of light neutralinos in an effective minimal supersymmetric extension of the standard model at the electroweak scale which was considered in the past and discussed in terms of the available data of direct searches for dark matter particles. Our reanalysis is prompted by new measurements at the Tevatron and B factories which might potentially provide significant constraints on the minimal supersymmetric extension of the standard model. Here we examine in detail all these new data and show that the present published results from the Tevatron and B factories have only a mild effect on the original light-neutralino population. This population, which fits quite well the DAMA/LIBRA annual modulation data, would also agree with the preliminary results of CDMS, CoGeNT, and CRESST, should these data, which are at present only hints of excesses of events over the expected backgrounds, be interpreted as authentic signals of dark matter. For the neutralino mass we find a lower bound of 7-8 GeV. Our results differ from some recent conclusions by other authors because of a few crucial points which we try to single out and elucidate.

13. Searching for the Hydrogen Reionization Edge of the Universe at 5&lt;z&lt;15 Using Parallels

Windhorst, Rogier

1999-07-01

We propose 36 parallel orbits {4-5 fields of 5-8 orbits each} to constrain the H Lyman-edge in emission that marks the transition from a neutral to a fully ionized IGM at a predicted z_ion ≈ 5-15. This edge is due to recombination from the H Lyman series and Lyman continuum, and can be used to constrain z_ion, one of the most important unknown quantities in large scale structure and cosmology. Baltz et al. {1998} have shown that there is a rapid change in absorption around z_ion, which can leave a sharp edge signal in the spectrum of recombining Hydrogen with amplitude Jν(Ly) ≈ few × 10⁻²³ erg/cm²/s/Hz/sr on top of an extragalactic background of Jν(EBL) ≈ 5 × 10⁻²¹, 3-4 dex below the zodiacal background. The model amplitude uncertainty is ≳0.5 dex. HST can constrain this signal in the sky background for λ ≲ 1 μm, but only with the STIS CCD, its 52×2'' long slit and G750L grating -- covering λ ≈ 6000-10,300 Å and z ≈ 4-7.5. We expect to set upper limits to various models for Jν(Ly), and possibly make a detection of Jν(EBL), either of which will constrain z_ion. This is a difficult project that must use contemporaneous STIS calibrations at the bright side of each parallel orbit to reduce systematic errors to an absolute minimum. We must develop this technique properly to find out how to optimally develop instruments for NGST, to make this measurement with greater sensitivity and at higher redshifts {z ≈ 7.5-15}.

15. Automatic Multilevel Parallelization Using OpenMP

NASA Technical Reports Server (NTRS)

Jin, Hao-Qiang; Jost, Gabriele; Yan, Jerry; Ayguade, Eduard; Gonzalez, Marc; Martorell, Xavier; Biegel, Bryan (Technical Monitor)

2002-01-01

In this paper we describe the extension of the CAPO (CAPtools (Computer Aided Parallelization Toolkit) OpenMP) parallelization support tool to support multilevel parallelism based on OpenMP directives. CAPO generates OpenMP directives with extensions supported by the NanosCompiler to allow for directive nesting and definition of thread groups. We report some results for several benchmark codes and one full application that have been parallelized using our system.

16. Universal approximators for multi-objective direct policy search in water reservoir management problems: a comparative analysis

Giuliani, Matteo; Mason, Emanuele; Castelletti, Andrea; Pianosi, Francesca

2014-05-01

The optimal operation of water resources systems is a wide and challenging problem due to non-linearities in the model and the objectives, a high-dimensional state-control space, and strong uncertainties in the hydroclimatic regimes. The application of classical optimization techniques (e.g., SDP, Q-learning, gradient descent-based algorithms) is strongly limited by the dimensionality of the system and by the presence of multiple, conflicting objectives. This study presents a novel approach which combines Direct Policy Search (DPS) and Multi-Objective Evolutionary Algorithms (MOEAs) to solve high-dimensional state and control space problems involving multiple objectives. DPS, also known as parameterization-simulation-optimization in the water resources literature, is a simulation-based approach where the reservoir operating policy is first parameterized within a given family of functions and, then, the parameters are optimized with respect to the objectives of the management problem. The selection of a suitable class of functions to which the operating policy belongs is a key step, as it might restrict the search for the optimal policy to a subspace of the decision space that does not include the optimal solution. In the water reservoir literature, a number of classes have been proposed. However, many of these rules are based largely on empirical or experimental successes, and they were designed mostly via simulation and for single-purpose reservoirs. In a multi-objective context, similar rules cannot easily be inferred from experience, and the use of universal function approximators is generally preferred. In this work, we comparatively analyze two of the most common universal approximators, artificial neural networks (ANN) and radial basis functions (RBF), under different problem settings to estimate their scalability and flexibility in dealing with more and more complex problems. The multi-purpose HoaBinh water reservoir in Vietnam, accounting for hydropower
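The parameterize-simulate-optimize loop described above can be sketched in miniature. This is a single-objective toy, with plain random search standing in for the MOEA; the RBF policy form follows the abstract, but the reservoir dynamics, inflow series, and all names are illustrative assumptions, not the HoaBinh case study:

```python
# Minimal Direct Policy Search sketch: an RBF release policy is
# evaluated by simulating reservoir operation, and its parameters are
# tuned by random search (a stand-in for the MOEA used in the study).
import math
import random

def rbf_policy(params, storage):
    """Release as a sum of Gaussian RBFs of the current storage."""
    release = 0.0
    for w, c, r in params:                     # weight, center, radius
        release += w * math.exp(-((storage - c) / r) ** 2)
    return max(0.0, release)

def simulate_cost(params, inflows, demand=5.0, s0=50.0, smax=100.0):
    """Sum of squared supply deficits under the policy (lower is better)."""
    s, cost = s0, 0.0
    for q in inflows:
        u = min(rbf_policy(params, s), s + q)  # cannot release more than held
        s = min(smax, s + q - u)               # mass balance with spill cap
        cost += (demand - u) ** 2
    return cost

random.seed(0)
inflows = [4.0 + 2.0 * math.sin(t / 5.0) for t in range(100)]
best_p, best_c = None, float("inf")
for _ in range(500):                           # crude random search
    p = [(random.uniform(0, 10), random.uniform(0, 100), random.uniform(5, 50))
         for _ in range(3)]
    c = simulate_cost(p, inflows)
    if c < best_c:
        best_p, best_c = p, c
```

The key DPS design choice is visible even at this scale: the optimizer only ever sees policy parameters and simulated objective values, so the policy class (here, three Gaussian RBFs) bounds what operating behavior can be found.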

SciTech Connect

Jacobi, Michael R

2012-08-01

The GPFS archive is a parallel archive used by hundreds of users in the Turquoise collaboration network. It houses 4+ petabytes of data in more than 170 million files. Currently, users must navigate the file system to retrieve their data, requiring them to remember file paths and names. A better solution might allow users to tag data with meaningful labels and search the archive using standard and user-defined metadata, while maintaining security. Last summer, I developed the backend of a tool that adheres to these design goals. The backend works by importing GPFS metadata into a MongoDB cluster, which is then indexed on each attribute. This summer, I implemented security and developed the user interface for the search tool. To meet security requirements, each database table is associated with a single user, stores only records that the user may read, and requires a set of credentials to access. The interface to the search tool is implemented using FUSE (Filesystem in USErspace). FUSE is an intermediate layer that intercepts file system calls and allows the developer to redefine how those calls behave. In the case of this tool, FUSE interfaces with MongoDB to issue queries and populate output. A FUSE implementation is desirable because it allows users to interact with the search tool using commands they are already familiar with. These security and interface additions are essential for a usable product.
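
The security model above (one per-user collection holding only readable records, gated by credentials) can be sketched compactly. This is a minimal stand-in, not the report's implementation: MongoDB and FUSE are replaced by an in-memory store, and all names and record fields are illustrative.

```python
# Minimal sketch of the per-user metadata-search model described above.
# MongoDB and FUSE are replaced by an in-memory store so the example is
# self-contained; collection names and record fields are illustrative only.

class MetadataStore:
    """One 'collection' per user, holding only records that user may read."""

    def __init__(self):
        self._collections = {}   # user -> list of metadata records
        self._credentials = {}   # user -> secret

    def register(self, user, secret):
        self._credentials[user] = secret
        self._collections[user] = []

    def ingest(self, user, record):
        # In the real tool this would be a bulk import of GPFS metadata.
        self._collections[user].append(record)

    def search(self, user, secret, **query):
        # Credentials gate access, and only the user's own collection is visible.
        if self._credentials.get(user) != secret:
            raise PermissionError("bad credentials")
        return [r for r in self._collections[user]
                if all(r.get(k) == v for k, v in query.items())]

store = MetadataStore()
store.register("alice", "s3cret")
store.ingest("alice", {"path": "/arc/run1.dat", "tag": "climate", "size": 42})
store.ingest("alice", {"path": "/arc/run2.dat", "tag": "fusion", "size": 7})
hits = store.search("alice", "s3cret", tag="climate")
print([r["path"] for r in hits])
```

In the real tool, `search` would become a MongoDB query issued from a FUSE callback, so results appear as ordinary directory listings.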

18. Is the Unemployment Rate of Women Too Low? A Direct Test of the Economic Theory of Job Search.

ERIC Educational Resources Information Center

Sandell, Steven H.

To test the economic theory of job search and the rationality of job search behavior by unemployed married women, the importance of reservation wages (or wages requested for employment) was studied for its effect on the duration of unemployment and its relationship to the subsequent rate of pay upon reemployment. Models were established to explain…

19. Discovering Common Ground: How Future Search Conferences Bring People Together To Achieve Breakthrough Innovation, Empowerment, Shared Vision, and Collaborative Action.

ERIC Educational Resources Information Center

Weisbord, Marvin R.; And Others

This book contains 35 papers about planning and holding future search conferences, as well as their benefits and likely future directions. The following papers are included: "Applied Common Sense" (Weisbord); "Inventing the Search Conference" (Weisbord); "Building Collaborative Communities" (Schindler-Rainman, Lippitt); "Parallel Paths to…

20. Analytic Guided-Search Model of Human Performance Accuracy in Target- Localization Search Tasks

NASA Technical Reports Server (NTRS)

Eckstein, Miguel P.; Beutter, Brent R.; Stone, Leland S.

2000-01-01

Current models of human visual search have extended the traditional serial/parallel search dichotomy. Two successful models for predicting human visual search are the Guided Search model and the Signal Detection Theory model. Although these models are inherently different, it has been difficult to compare them because the Guided Search model is designed to predict response time, while Signal Detection Theory models are designed to predict performance accuracy. Moreover, current implementations of the Guided Search model require the use of Monte-Carlo simulations, a method that makes quantitative fitting of the model to human data computationally time-consuming. We have extended the Guided Search model to predict human accuracy in target-localization search tasks. We have also developed analytic expressions that reduce simulation of the model to the evaluation of a small set of equations using only three free parameters. This new implementation and extension of the Guided Search model will enable direct quantitative comparisons with human performance in target-localization search experiments and with the predictions of Signal Detection Theory and other search accuracy models.
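
The Signal Detection Theory side of such a comparison has a standard closed form: with the target at one of m locations and independent unit-variance Gaussian responses, localization accuracy under a maximum rule is P(correct) = ∫ φ(x − d′) Φ(x)^(m−1) dx. The sketch below evaluates that integral numerically; it illustrates the SDT accuracy benchmark, not the paper's three-parameter Guided Search equations, which are not reproduced in the abstract.

```python
import math

def phi(x):
    """Standard normal probability density."""
    return math.exp(-0.5 * x * x) / math.sqrt(2 * math.pi)

def Phi(x):
    """Standard normal cumulative distribution."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2)))

def p_correct(d_prime, m, lo=-8.0, hi=8.0, n=4000):
    """P(target response exceeds all m-1 distractor responses):
    trapezoidal evaluation of the max-rule localization integral."""
    h = (hi - lo) / n
    total = 0.0
    for i in range(n + 1):
        x = lo + i * h
        w = 0.5 if i in (0, n) else 1.0
        total += w * phi(x - d_prime) * Phi(x) ** (m - 1)
    return h * total
```

At d′ = 0 with m = 2 the observer is at chance (0.5), and accuracy falls as the number of locations m grows, matching the usual set-size effect in accuracy-based search models.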

1. Demonstrating Forces between Parallel Wires.

ERIC Educational Resources Information Center

Baker, Blane

2000-01-01

Describes a physics demonstration that dramatically illustrates the mutual repulsion (attraction) between parallel conductors using insulated copper wire, wooden dowels, a high direct current power supply, electrical tape, and an overhead projector. (WRM)

2. Parallel hierarchical global illumination

SciTech Connect

Snell, Q.O.

1997-10-08

Solving the global illumination problem is equivalent to determining the intensity of every wavelength of light in all directions at every point in a given scene. The complexity of the problem has led researchers to use approximation methods for solving the problem on serial computers. Rather than using an approximation method, such as backward ray tracing or radiosity, the authors have chosen to solve the Rendering Equation by direct simulation of light transport from the light sources. This paper presents an algorithm that solves the Rendering Equation to any desired accuracy, and can be run in parallel on distributed memory or shared memory computer systems with excellent scaling properties. It appears superior in both speed and physical correctness to recent published methods involving bidirectional ray tracing or hybrid treatments of diffuse and specular surfaces. Like progressive radiosity methods, it dynamically refines the geometry decomposition where required, but does so without the excessive storage requirements for ray histories. The algorithm, called Photon, produces a scene which converges to the global illumination solution. This amounts to a huge task for a 1997-vintage serial computer, but using the power of a parallel supercomputer significantly reduces the time required to generate a solution. Currently, Photon can be run on most parallel environments from a shared memory multiprocessor to a parallel supercomputer, as well as on clusters of heterogeneous workstations.

3. The application of the Luus-Jaakola direct search method to the optimization of a hybrid renewable energy system

Jatzeck, Bernhard Michael

2000-10-01

The application of the Luus-Jaakola direct search method to the optimization of stand-alone hybrid energy systems consisting of wind turbine generators (WTGs), photovoltaic (PV) modules, batteries, and an auxiliary generator was examined. The loads for these systems were for agricultural applications, with the optimization conducted on the basis of minimum capital, operating, and maintenance costs. Five systems were considered: two near Edmonton, Alberta, and one each near Lethbridge, Alberta; Victoria, British Columbia; and Delta, British Columbia. The optimization algorithm used hourly data for the load demand, WTG output power per unit area, and PV module output power. These hourly data were in two sets: seasonal (summer and winter values separated) and total (summer and winter values combined). The costs for the WTGs, PV modules, batteries, and auxiliary generator fuel were full market values. To examine the effects of price discounts or tax incentives, these values were lowered to 25% of the full costs for the energy sources and two-thirds of the full cost for agricultural fuel. Annual costs for a renewable energy system depended upon the load, location, component costs, and which data set (seasonal or total) was used. For one Edmonton load, the cost for a renewable energy system consisting of 27.01 m2 of WTG area, 14 PV modules, and 18 batteries (full price, total data set) was $6873/year. For Lethbridge, a system with 22.85 m2 of WTG area, 47 PV modules, and 5 batteries (reduced prices, seasonal data set) cost $2913/year. The performance of renewable energy systems based on the obtained results was tested in a simulation using load and weather data for selected days. Test results for one Edmonton load showed that the simulations for most of the systems examined ran for at least 17 hours per day before failing due to either an excessive load on the auxiliary generator or a battery constraint being violated. Additional testing indicated that increasing the generator
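
The Luus-Jaakola method itself is simple to state: sample candidate points uniformly in a box around the current best point, keep any improvement, and contract the box each iteration. A minimal sketch on a toy cost function follows; the thesis's actual objective (capital, operating, and maintenance costs under hourly load and weather data) is far richer and is stood in for here by a shifted quadratic.

```python
import random

def luus_jaakola(f, x0, radius, iters=200, samples=20, contraction=0.95, seed=1):
    """Minimize f by random sampling in a shrinking box around the best point
    (the basic Luus-Jaakola direct search, no derivatives required)."""
    rng = random.Random(seed)
    best_x, best_f = list(x0), f(x0)
    r = list(radius)
    for _ in range(iters):
        for _ in range(samples):
            cand = [best_x[i] + rng.uniform(-r[i], r[i]) for i in range(len(best_x))]
            fc = f(cand)
            if fc < best_f:
                best_x, best_f = cand, fc
        r = [ri * contraction for ri in r]   # contract the search region
    return best_x, best_f

# Toy stand-in for the hybrid-system cost model: minimum at (3, -1).
cost = lambda x: (x[0] - 3.0) ** 2 + (x[1] + 1.0) ** 2
x, fx = luus_jaakola(cost, [0.0, 0.0], [5.0, 5.0])
print(round(x[0], 2), round(x[1], 2))
```

Because the method only evaluates f, it handles the mixed discrete/continuous, simulation-defined costs of a hybrid energy system without needing gradients.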

4. Extracting constraints from direct detection searches of supersymmetric dark matter in the light of null results from the LHC in the squark sector

Riffard, Q.; Mayet, F.; Bélanger, G.; Genest, M.-H.; Santos, D.

2016-02-01

The comparison of the results of direct detection of dark matter, obtained with various target nuclei, requires model-dependent, or even arbitrary, assumptions. Indeed, to draw conclusions either the spin-dependent (SD) or the spin-independent (SI) interaction has to be neglected. In the light of the null results from supersymmetry searches at the LHC, the squark sector is pushed to high masses. We show that for a squark sector at the TeV scale, the framework used to extract constraints from direct detection searches can be redefined as the number of free parameters is reduced. Moreover, the correlation observed between SI and SD proton cross sections constitutes a key issue for the development of the next generation of dark matter detectors.

5. Parallel Information Processing.

ERIC Educational Resources Information Center

Rasmussen, Edie M.

1992-01-01

Examines parallel computer architecture and the use of parallel processors for text. Topics discussed include parallel algorithms; performance evaluation; parallel information processing; parallel access methods for text; parallel and distributed information retrieval systems; parallel hardware for text; and network models for information…

NASA Technical Reports Server (NTRS)

Park, Michael A.; Darmofal, David L.

2008-01-01

An adaptive method that robustly produces high aspect ratio tetrahedra to a general 3D metric specification without introducing hybrid semi-structured regions is presented. The elemental operators and the higher-level logic are described along with their respective domain-decomposed parallelizations. An anisotropic tetrahedral grid adaptation scheme is demonstrated for 1000:1 stretching on a simple cube geometry. This form of adaptation is applicable to more complex domain boundaries via a cut-cell approach, as demonstrated by a parallel 3D supersonic simulation of a complex fighter aircraft. To avoid the assumptions and approximations required to form a metric to specify adaptation, an approach is introduced that directly evaluates interpolation error. The grid is adapted to reduce and equidistribute this interpolation error without the use of an intervening anisotropic metric. Direct interpolation error adaptation is illustrated for 1D and 3D domains.
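
A 1-D caricature of the direct interpolation-error idea: instead of building a metric, measure the linear-interpolation error in each cell directly and split the worst cell. This sketch is an assumption-laden simplification of the paper's 3D tetrahedral scheme, but it shows why refinement clusters where the solution varies sharply.

```python
import math

def adapt_1d(f, a, b, n_refine=30):
    """Greedy 1-D adaptation: repeatedly split the cell whose midpoint
    linear-interpolation error is largest (no metric is ever formed)."""
    xs = [a, b]
    for _ in range(n_refine):
        errs = []
        for i in range(len(xs) - 1):
            mid = 0.5 * (xs[i] + xs[i + 1])
            lin = 0.5 * (f(xs[i]) + f(xs[i + 1]))   # linear interpolant at mid
            errs.append((abs(f(mid) - lin), i, mid))
        err, i, mid = max(errs)                      # worst cell wins
        xs.insert(i + 1, mid)                        # split it at its midpoint
    return xs

# Steep tanh layer at x = 0: grid points should cluster there.
grid = adapt_1d(lambda x: math.tanh(20 * x), -1.0, 1.0)
near = sum(1 for x in grid if abs(x) < 0.25)
print(near, len(grid))
```

Away from the layer the function is nearly linear, so those cells report negligible error and are never touched; the error is equidistributed toward the layer automatically.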

7. Functional Interaction between Right Parietal and Bilateral Frontal Cortices during Visual Search Tasks Revealed Using Functional Magnetic Imaging and Transcranial Direct Current Stimulation

PubMed Central

Ellison, Amanda; Ball, Keira L.; Moseley, Peter; Dowsett, James; Smith, Daniel T.; Weis, Susanne; Lane, Alison R.

2014-01-01

The existence of a network of brain regions which are activated when one undertakes a difficult visual search task is well established. Two primary nodes on this network are right posterior parietal cortex (rPPC) and right frontal eye fields. Both have been shown to be involved in the orientation of attention, but the contingency that the activity of one of these areas has on the other is less clear. We sought to investigate this question by using transcranial direct current stimulation (tDCS) to selectively decrease activity in rPPC and then asking participants to perform a visual search task whilst undergoing functional magnetic resonance imaging. Comparison with a condition in which sham tDCS was applied revealed that cathodal tDCS over rPPC causes a selective bilateral decrease in frontal activity when performing a visual search task. This result demonstrates for the first time that premotor regions within the frontal lobe and rPPC are not only necessary to carry out a visual search task, but that they work together to bring about normal function. PMID:24705681

8. Automatic Multilevel Parallelization Using OpenMP

NASA Technical Reports Server (NTRS)

Jin, Hao-Qiang; Jost, Gabriele; Yan, Jerry; Ayguade, Eduard; Gonzalez, Marc; Martorell, Xavier; Biegel, Bryan (Technical Monitor)

2002-01-01

In this paper we describe the extension of the CAPO parallelization support tool to support multilevel parallelism based on OpenMP directives. CAPO generates OpenMP directives with extensions supported by the NanosCompiler to allow for directive nesting and definition of thread groups. We report first results for several benchmark codes and one full application that have been parallelized using our system.

9. Research on parallel algorithm for sequential pattern mining

Zhou, Lijuan; Qin, Bai; Wang, Yu; Hao, Zhongxiao

2008-03-01

Sequential pattern mining is the mining of frequent sequences, related to time or other orders, from a sequence database. Its initial motivation was to discover the laws of customer purchasing over a time span by finding the frequent sequences. In recent years, sequential pattern mining has become an important direction in data mining, and its application field is no longer confined to business databases; it has extended to new data sources such as the Web and advanced scientific fields such as DNA analysis. The data for sequential pattern mining are characterized by massive volume and distributed storage, and most existing sequential pattern mining algorithms do not consider these characteristics together. Based on these traits and on parallel theory, this paper puts forward a new distributed parallel algorithm, SPP (Sequential Pattern Parallel). The algorithm abides by the principle of pattern reduction and utilizes a divide-and-conquer strategy for parallelization. The first parallel task is to construct frequent item sets by applying the frequent concept and search-space partition theory; the second task is to build frequent sequences using depth-first search at each processor. The algorithm needs to access the database only twice and does not generate candidate sequences, which reduces access time and improves mining efficiency. Based on a random data-generation procedure and several designed information structures, this paper simulated the SPP algorithm in a concrete parallel environment and implemented the AprioriAll algorithm for comparison. The experiments demonstrate that, compared with AprioriAll, the SPP algorithm has an excellent speedup factor and efficiency.
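
The depth-first search over frequent prefixes that SPP distributes across processors can be sketched in a single-process form. This is not the authors' SPP implementation; it is a minimal prefix-growth miner showing the search each processor would run on its partition of the prefix space.

```python
def frequent_sequences(db, min_support):
    """Depth-first prefix growth over item sequences: a simplified,
    single-process sketch of the search that SPP partitions by prefix."""
    items = {i for seq in db for i in seq}
    results = {}

    def support(prefix):
        # A sequence supports the prefix if it contains it as a subsequence.
        def contains(seq):
            pos = 0
            for item in seq:
                if pos < len(prefix) and item == prefix[pos]:
                    pos += 1
            return pos == len(prefix)
        return sum(1 for seq in db if contains(seq))

    def grow(prefix):
        for item in sorted(items):
            cand = prefix + (item,)
            s = support(cand)
            if s >= min_support:
                results[cand] = s
                grow(cand)   # depth-first extension of the frequent prefix

    grow(())
    return results

db = [("a", "b", "c"), ("a", "c"), ("a", "b"), ("b", "c")]
pats = frequent_sequences(db, min_support=2)
print(sorted(pats))
```

Partitioning the top-level loop over first items gives the embarrassingly parallel decomposition the abstract describes: each processor grows its own disjoint subtree of prefixes.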

10. Parallel fast gauss transform

SciTech Connect

Sampath, Rahul S; Sundar, Hari; Veerapaneni, Shravan

2010-01-01

We present fast adaptive parallel algorithms to compute the sum of N Gaussians at N points. Direct sequential computation of this sum would take O(N^2) time. The parallel time complexity estimates for our algorithms are O(N/n_p) for uniform point distributions and O((N/n_p) log(N/n_p) + n_p log n_p) for non-uniform distributions using n_p CPUs. We incorporate a plane-wave representation of the Gaussian kernel which permits 'diagonal translation'. We use parallel octrees and a new scheme for translating the plane-waves to efficiently handle non-uniform distributions. Computing the transform to six-digit accuracy at 120 billion points took approximately 140 seconds using 4096 cores on the Jaguar supercomputer. Our implementation is 'kernel-independent' and can handle other 'Gaussian-type' kernels even when an explicit analytic expression for the kernel is not known. These algorithms form a new class of core computational machinery for solving parabolic PDEs on massively parallel architectures.
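
For reference, the O(N^2) direct sum that the fast transform accelerates is just a double loop. A 1-D sketch with hypothetical bandwidth parameter `delta`:

```python
import math, random

def direct_gauss_transform(sources, targets, weights, delta):
    """Direct O(N^2) evaluation of G(t) = sum_j w_j * exp(-|t - s_j|^2 / delta),
    the baseline cost that the fast parallel algorithm above avoids."""
    out = []
    for t in targets:
        acc = 0.0
        for s, w in zip(sources, weights):
            acc += w * math.exp(-((t - s) ** 2) / delta)
        out.append(acc)
    return out

rng = random.Random(0)
pts = [rng.random() for _ in range(200)]          # sources == targets here
w = [1.0 / len(pts)] * len(pts)                   # normalized weights
g = direct_gauss_transform(pts, pts, w, delta=0.1)
print(round(min(g), 3), round(max(g), 3))
```

Every fast Gauss transform is validated against exactly this kind of direct sum on small N, since the fast result must agree with it to the requested accuracy.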

11. The Higgs boson in the Standard Model theoretical constraints and a direct search in the wh channel at the Tevatron

SciTech Connect

Huske, Nils Kristian

2010-09-10

We have presented results in two different yet strongly linked aspects of Higgs boson physics. We have learned about the importance of the Higgs boson for the fate of the Standard Model: either it is only a theory limited to explaining phenomena at the electroweak scale or, if the Higgs boson lies within a mass range of 130 < mH < 160 GeV, the SM would remain a self-consistent theory up to the highest energy scales O(mPl). This could have direct implications for theories of cosmological inflation that use the Higgs boson as the particle giving rise to inflation in the very early Universe, if it couples non-minimally to gravity, an effect that would only become significant at very high energies. After understanding the immense importance of proving whether the Higgs boson exists and, if so, at which mass, we have presented a direct search for a Higgs boson in associated production with a W boson in a mass range 100 < mH < 150 GeV. A light Higgs boson is favored by constraints from electroweak precision measurements. As a single analysis is not yet sensitive enough for an observation of the Higgs boson using 5.3 fb-1 of Tevatron data, we set limits on the production cross section times branching ratio. At the Tevatron, however, we are able to combine the sensitivity of our analyses not only across channels or analyses at a single experiment but also across both experiments, namely CDF and D0. This yields the so-called Tevatron Higgs combination, which in total combines 129 analyses from both experiments with luminosities of up to 6.7 fb-1. The results of a previous Tevatron combination led to the first exclusion of possible Higgs boson masses since the LEP exclusion in 2001. The latest Tevatron combination, from July 2010, can be seen in Fig. 111, and limits compared to the Standard Model expectation are listed in Table 23. It excludes a SM Higgs boson in the regions of 100 < mH < 109 GeV as well as 158 < m

12. A high-order time-parallel scheme for solving wave propagation problems via the direct construction of an approximate time-evolution operator

DOE PAGESBeta

Haut, T. S.; Babb, T.; Martinsson, P. G.; Wingate, B. A.

2015-06-16

Our manuscript demonstrates a technique for efficiently solving the classical wave equation, the shallow water equations, and, more generally, equations of the form ∂u/∂t = Lu, where L is a skew-Hermitian differential operator. The idea is to explicitly construct an approximation to the time-evolution operator exp(τL) for a relatively large time-step τ. Recently developed techniques for approximating oscillatory scalar functions by rational functions, and accelerated algorithms for computing functions of discretized differential operators, are exploited. Principal advantages of the proposed method include: stability even for large time-steps, the possibility to parallelize in time over many characteristic wavelengths, and large speed-ups over existing methods in situations where simulation over long times is required. Numerical examples involving the 2D rotating shallow water equations and the 2D wave equation in an inhomogeneous medium are presented, and the method is compared to the 4th order Runge-Kutta (RK4) method and to the use of Chebyshev polynomials. The new method achieved high accuracy over long-time intervals, at speeds that are orders of magnitude faster than both RK4 and the use of Chebyshev polynomials.
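
The stability claim is easy to see on the smallest possible skew-Hermitian system. For u' = Lu with L = [[0, -w], [w, 0]], exp(τL) is a rotation, so applying a precomputed evolution operator preserves the norm for any time-step, while RK4 with the same large step amplifies it. This toy uses the exact exp(τL) in place of the paper's rational-function construction, purely to illustrate the contrast.

```python
import math

# u' = L u with skew-symmetric L = [[0, -w], [w, 0]]; exp(tau*L) is a
# rotation, so a precomputed evolution operator is stable for ANY tau.

def evolve(u, w, tau, steps):
    """Repeatedly apply the exact time-evolution operator exp(tau*L)."""
    c, s = math.cos(w * tau), math.sin(w * tau)
    x, y = u
    for _ in range(steps):
        x, y = c * x - s * y, s * x + c * y
    return x, y

def rk4(u, w, tau, steps):
    """Classical RK4 with the same (deliberately too large) time-step."""
    f = lambda x, y: (-w * y, w * x)   # right-hand side L u
    x, y = u
    for _ in range(steps):
        k1 = f(x, y)
        k2 = f(x + 0.5 * tau * k1[0], y + 0.5 * tau * k1[1])
        k3 = f(x + 0.5 * tau * k2[0], y + 0.5 * tau * k2[1])
        k4 = f(x + tau * k3[0], y + tau * k3[1])
        x += tau / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0])
        y += tau / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1])
    return x, y

norm = lambda u: math.hypot(u[0], u[1])
u0 = (1.0, 0.0)
exact = norm(evolve(u0, w=5.0, tau=1.0, steps=50))   # norm stays 1
approx = norm(rk4(u0, w=5.0, tau=1.0, steps=50))     # w*tau = 5 exceeds
print(exact, approx)                                 # RK4's stability limit
```

Here w·τ = 5 lies outside RK4's stability interval on the imaginary axis (about ±2.83), so the RK4 norm grows by a large factor each step, whereas the evolution operator is exactly norm-preserving.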

13. Sequential/parallel production of potential Malaria vaccines--A direct way from single batch to quasi-continuous integrated production.

PubMed

Luttmann, Reiner; Borchert, Sven-Oliver; Mueller, Christian; Loegering, Kai; Aupert, Florian; Weyand, Stephan; Kober, Christian; Faber, Bart; Cornelissen, Gesine

2015-11-10

An intensification of pharmaceutical protein production processes can be achieved by the integration of unit operations and the application of recurring sequences of all biochemical process steps. Within optimization procedures, each individual step as well as the overall process has to be in the focus of scientific interest. This paper describes the development of a fully automated production plant, starting with a two-step upstream followed by a four-step downstream line that includes cell clarification, broth cleaning with microfiltration, product concentration with ultrafiltration, and purification with column chromatography. Recursive production strategies are developed in which cell breeding, protein production, and the whole downstream are operated in series but also in parallel, with each main operation shifted by one day. The quality and reproducibility of the recursive protein expression are monitored on-line against a Golden Batch reference and controlled by Model Predictive Multivariate Control (MPMC). As a demonstration process, the production of potential Malaria vaccines with Pichia pastoris is under investigation. PMID:25736485

14. A high-order time-parallel scheme for solving wave propagation problems via the direct construction of an approximate time-evolution operator

SciTech Connect

Haut, T. S.; Babb, T.; Martinsson, P. G.; Wingate, B. A.

2015-06-16

Our manuscript demonstrates a technique for efficiently solving the classical wave equation, the shallow water equations, and, more generally, equations of the form ∂u/∂t = Lu, where L is a skew-Hermitian differential operator. The idea is to explicitly construct an approximation to the time-evolution operator exp(τL) for a relatively large time-step τ. Recently developed techniques for approximating oscillatory scalar functions by rational functions, and accelerated algorithms for computing functions of discretized differential operators, are exploited. Principal advantages of the proposed method include: stability even for large time-steps, the possibility to parallelize in time over many characteristic wavelengths, and large speed-ups over existing methods in situations where simulation over long times is required. Numerical examples involving the 2D rotating shallow water equations and the 2D wave equation in an inhomogeneous medium are presented, and the method is compared to the 4th order Runge-Kutta (RK4) method and to the use of Chebyshev polynomials. The new method achieved high accuracy over long-time intervals, at speeds that are orders of magnitude faster than both RK4 and the use of Chebyshev polynomials.

15. Parallel execution model for Prolog

SciTech Connect

Fagin, B.S.

1987-01-01

One candidate language for parallel symbolic computing is Prolog. Numerous ways for executing Prolog in parallel have been proposed, but current efforts suffer from several deficiencies. Many cannot support fundamental types of concurrency in Prolog. Other models are of purely theoretical interest, ignoring implementation costs. Detailed simulation studies of execution models are scarce; at present little is known about the costs and benefits of executing Prolog in parallel. In this thesis, a new parallel execution model for Prolog is presented: the PPP model or Parallel Prolog Processor. The PPP supports AND-parallelism, OR-parallelism, and intelligent backtracking. An implementation of the PPP is described, through the extension of an existing Prolog abstract machine architecture. Several examples of PPP execution are presented, and compilation to the PPP abstract instruction set is discussed. The performance effects of this model are reported, based on a simulation of a large benchmark set. The implications of these results for parallel Prolog systems are discussed, and directions for future work are indicated.

16. A direct comparison of CellSearch and ISET for circulating tumour-cell detection in patients with metastatic carcinomas

PubMed Central

Farace, F; Massard, C; Vimond, N; Drusch, F; Jacques, N; Billiot, F; Laplanche, A; Chauchereau, A; Lacroix, L; Planchard, D; Le Moulec, S; André, F; Fizazi, K; Soria, J C; Vielh, P

2011-01-01

Background: Circulating tumour cells (CTCs) can provide information on patient prognosis and treatment efficacy. However, no universal method for detecting CTCs is currently available. Here, we compared the performance of two CTC detection systems based on the expression of the EpCAM antigen (CellSearch assay) or on cell size (ISET assay). Methods: Circulating tumour cells were enumerated in 60 patients with metastatic carcinomas of breast, prostate and lung origins using CellSearch according to the manufacturer's protocol and ISET by studying cytomorphology and immunolabelling with anti-cytokeratin or lineage-specific antibodies. Results: Concordant results were obtained in 55% (11 out of 20) of the patients with breast cancer, in 60% (12 out of 20) of the patients with prostate cancer and in only 20% (4 out of 20) of lung cancer patients. Conclusion: Our results highlight important discrepancies between the numbers of CTC enumerated by both techniques. These differences depend mostly on the tumour type. These results suggest that technologies limiting CTC capture to EpCAM-positive cells may present important limitations, especially in patients with metastatic lung carcinoma. PMID:21829190

17. A scalable 2-D parallel sparse solver

SciTech Connect

Kothari, S.C.; Mitra, S.

1995-12-01

Scalability beyond a small number of processors, typically 32 or fewer, is known to be a problem for existing parallel general sparse (PGS) direct solvers. This paper presents a PGS direct solver for general sparse linear systems on distributed memory machines. The algorithm is based on the well-known sequential sparse algorithm Y12M. To achieve efficient parallelization, a 2-D scattered decomposition of the sparse matrix is used. The proposed algorithm is more scalable than existing parallel sparse direct solvers. Its scalability is evaluated on a 256-processor nCUBE2s machine using Boeing/Harwell benchmark matrices.
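
A 2-D scattered (cyclic) decomposition assigns matrix entry (i, j) to processor (i mod pr, j mod pc) on a pr × pc grid, so every processor touches every block row and column. A minimal sketch of that mapping (the solver's actual data structures are, of course, sparse):

```python
def owner(i, j, pr, pc):
    """2-D scattered (cyclic) decomposition: entry (i, j) is owned by
    processor (i mod pr, j mod pc) on a pr x pc processor grid."""
    return (i % pr, j % pc)

def local_entries(rank, pr, pc, n):
    """Entries of an n x n matrix assigned to a given processor rank."""
    r, c = rank
    return [(i, j) for i in range(r, n, pr) for j in range(c, n, pc)]

# On a 2 x 2 grid, a 4 x 4 matrix splits into four interleaved sets of
# equal size, which is what gives the scheme its load balance.
parts = [local_entries((r, c), 2, 2, 4) for r in range(2) for c in range(2)]
print([len(p) for p in parts])
```

Because ownership interleaves rather than blocks, the work of eliminating any pivot row or column is spread over a whole processor row/column instead of landing on one processor, which is the source of the improved scalability.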

18. Computer-Aided Parallelizer and Optimizer

NASA Technical Reports Server (NTRS)

Jin, Haoqiang

2011-01-01

The Computer-Aided Parallelizer and Optimizer (CAPO) automates the insertion of compiler directives (see figure) to facilitate parallel processing on Shared Memory Parallel (SMP) machines. While CAPO currently is integrated seamlessly into CAPTools (developed at the University of Greenwich, now marketed as ParaWise), CAPO was independently developed at Ames Research Center as one of the components for the Legacy Code Modernization (LCM) project. The current version takes serial FORTRAN programs, performs interprocedural data dependence analysis, and generates OpenMP directives. Due to the widely supported OpenMP standard, the generated OpenMP codes have the potential to run on a wide range of SMP machines. CAPO relies on accurate interprocedural data dependence information currently provided by CAPTools. Compiler directives are generated through identification of parallel loops in the outermost level, construction of parallel regions around parallel loops and optimization of parallel regions, and insertion of directives with automatic identification of private, reduction, induction, and shared variables. Attempts also have been made to identify potential pipeline parallelism (implemented with point-to-point synchronization). Although directives are generated automatically, user interaction with the tool is still important for producing good parallel codes. A comprehensive graphical user interface is included for users to interact with the parallelization process.

19. Fully passive-alignment pluggable compact parallel optical interconnection modules based on a direct-butt-coupling structure for fiber-optic applications

Lim, Kwon-Seob; Park, Hyoung-Jun; Kang, Hyun Seo; Kim, Young Sun; Jang, Jae-Hyung

2016-02-01

A low-cost packaging method utilizing fully passive optical alignment and surface mounting is demonstrated for pluggable, compact, and slim multichannel optical interconnection modules using a VCSEL/PIN-PD chip array. The modules are based on a nonplanar bent right-angle electrical signal path on a silicon platform and direct-butt optical coupling without a bulky and expensive microlens array. The measured optical direct-butt-coupling efficiencies of each channel without any bulky optics are as high as 33% and 95% for the transmitter and receiver, respectively. Excellent lateral optical alignment tolerance, larger than 60 μm for both the transmitter and receiver modules, significantly reduces the manufacturing and material costs as well as the packaging time. Clear eye diagrams, extinction ratios higher than 8 dB at 10.3 Gbps for the transmitter module, and receiver sensitivity better than -13.1 dBm at 10.3 Gbps with a bit error rate of 10^-12 for all channels are demonstrated. Considering that the optical output power of the transmitter is greater than 0 dBm, the module has a sufficient power margin of about 13 dB for 10.3 Gbps operation on all channels.

20. Search for direct top squark pair production in events with a boson, -jets and missing transverse momentum in TeV collisions with the ATLAS detector

2014-06-01

A search is presented for direct top squark pair production using events with at least two leptons including a same-flavour opposite-sign pair with invariant mass consistent with the boson mass, jets tagged as originating from -quarks and missing transverse momentum. The analysis is performed with proton-proton collision data at collected with the ATLAS detector at the LHC in 2012 corresponding to an integrated luminosity of 20.3 fb. No excess beyond the Standard Model expectation is observed. Interpretations of the results are provided in models based on the direct pair production of the heavier top squark state () followed by the decay to the lighter top squark state () via , and for pair production in natural gauge-mediated supersymmetry breaking scenarios where the neutralino () is the next-to-lightest supersymmetric particle and decays producing a boson and a gravitino () via the process.

1. Anisotropic diffusion of electrons in liquid xenon with application to improving the sensitivity of direct dark matter searches

SciTech Connect

Sorensen, P

2011-02-14

Electron diffusion in a liquid xenon time projection chamber has recently been used to infer the z coordinate of a particle interaction, from the width of the electron signal. The goal of this technique is to reduce the background event rate by discriminating edge events from bulk events. Analyses of dark matter search data which employ it would benefit from increased longitudinal electron diffusion. We show that a significant increase is expected if the applied electric field is decreased. This observation is trivial to implement but runs contrary to conventional wisdom and practice. We also extract a first measurement of the longitudinal diffusion coefficient, and confirm the expectation that electron diffusion in liquid xenon is highly anisotropic under typical operating conditions.

2. Status of the MSSM Higgs sector using global analysis and direct search bounds, and future prospects at the High Luminosity LHC

Bhattacherjee, Biplob; Chakraborty, Amit; Choudhury, Arghya

2015-11-01

In this paper, we search for the regions of the phenomenological minimal supersymmetric standard model (pMSSM) parameter space where one can expect a moderate Higgs mixing angle (α) with relatively light (up to 600 GeV) additional Higgses after satisfying the current LHC data. We perform a global fit analysis using the most up-to-date data (till December 2014) from the LHC and Tevatron experiments. The constraints coming from the precision measurements of the rare b-decays Bs → μ+μ- and b → sγ are also considered. We find that low MA (≲350 GeV) and high tan β (≳25) regions are disfavored by the combined effect of the global analysis and flavor data. However, regions with Higgs mixing angle α ~ 0.1-0.8 are still allowed by the current data. We then study the existing direct search bounds on the heavy scalar/pseudoscalar (H/A) and charged Higgs boson (H±) masses and branchings at the LHC. It has been found that regions with low to moderate values of tan β with light additional Higgses (mass ≤ 600 GeV) are unconstrained by the data, while regions with tan β > 20 are excluded by the direct search bounds from the LHC-8 data. The possibility of probing the region with tan β ≤ 20 at the high-luminosity run of the LHC is also discussed, giving special attention to the H → hh, H/A → tt̄, and H/A → τ+τ- decay modes.

3. Template based parallel checkpointing in a massively parallel computer system

DOEpatents

Archer, Charles Jens; Inglett, Todd Alan

2009-01-13

A method and apparatus for a template-based parallel checkpoint save for a massively parallel supercomputer system, using a parallel variation of the rsync protocol and network broadcast. In preferred embodiments, the checkpoint data for each node is compared to a previously produced template checkpoint file that resides in storage. Embodiments herein greatly decrease the amount of data that must be transmitted and stored, yielding faster checkpointing and increased efficiency of the computer system. Embodiments are directed to a parallel computer system with nodes arranged in a cluster with a high-speed interconnect that can perform broadcast communication. The checkpoint contains a set of actual small data blocks with their corresponding checksums from all nodes in the system. The data blocks may be compressed using conventional lossless data compression algorithms to further reduce the overall checkpoint size.
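The template comparison at the heart of this scheme can be sketched as follows; the block size, MD5 checksums, and function names are illustrative assumptions, not details taken from the patent.

```python
import hashlib

BLOCK_SIZE = 4096  # illustrative block size, not the patent's value

def block_checksums(data, block_size=BLOCK_SIZE):
    """Checksum each fixed-size block of a checkpoint image."""
    return [hashlib.md5(data[i:i + block_size]).digest()
            for i in range(0, len(data), block_size)]

def delta_against_template(node_data, template_sums, block_size=BLOCK_SIZE):
    """Return only the blocks that differ from the broadcast template.

    Each entry is (block_index, block_bytes); blocks whose checksums match
    the template are omitted, which is what shrinks the per-node
    checkpoint traffic in an rsync-style scheme.
    """
    delta = []
    for idx, checksum in enumerate(block_checksums(node_data, block_size)):
        if idx >= len(template_sums) or checksum != template_sums[idx]:
            delta.append((idx, node_data[idx * block_size:(idx + 1) * block_size]))
    return delta
```

In this sketch each node would compute its delta independently, so only the differing blocks (plus their indices) cross the network, mirroring how rsync transmits changed regions of a file.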

4. Special parallel processing workshop

SciTech Connect

1994-12-01

This report contains viewgraphs from the Special Parallel Processing Workshop. These viewgraphs deal with topics such as parallel processing performance, message passing, queue structure, and other basic concepts of parallel processing.

5. EGO-1, a Putative RNA-Directed RNA Polymerase, Promotes Germline Proliferation in Parallel With GLP-1/Notch Signaling and Regulates the Spatial Organization of Nuclear Pore Complexes and Germline P Granules in Caenorhabditis elegans

PubMed Central

Vought, Valarie E.; Ohmachi, Mitsue; Lee, Min-Ho; Maine, Eleanor M.

2005-01-01

Caenorhabditis elegans EGO-1, a putative cellular RNA-directed RNA polymerase, promotes several aspects of germline development, including proliferation, meiosis, and gametogenesis, and ensures a robust response to RNA interference. In C. elegans, GLP-1/Notch signaling from the somatic gonad maintains a population of proliferating germ cells, while entry of germ cells into meiosis is triggered by the GLD-1 and GLD-2 pathways. GLP-1 signaling prevents germ cells from entering meiosis by inhibiting GLD-1 and GLD-2 activity. We originally identified the ego-1 gene on the basis of a genetic interaction with glp-1. Here, we investigate the role of ego-1 in germline proliferation. Our data indicate that EGO-1 does not positively regulate GLP-1 protein levels or GLP-1 signaling activity. Moreover, GLP-1 signaling does not positively regulate EGO-1 activity. EGO-1 does not inhibit expression of GLD-1 protein in the distal germline. Instead, EGO-1 acts in parallel with GLP-1 signaling to influence the proliferation vs. meiosis fate choice. Moreover, EGO-1 and GLD-1 act in parallel to ensure germline health. Finally, the size and distribution of nuclear pore complexes and perinuclear P granules are altered in the absence of EGO-1, effects that disrupt germ cell biology per se and probably limit germline growth. PMID:15911573

6. EGO-1, a putative RNA-directed RNA polymerase, promotes germline proliferation in parallel with GLP-1/notch signaling and regulates the spatial organization of nuclear pore complexes and germline P granules in Caenorhabditis elegans.

PubMed

Vought, Valarie E; Ohmachi, Mitsue; Lee, Min-Ho; Maine, Eleanor M

2005-07-01

Caenorhabditis elegans EGO-1, a putative cellular RNA-directed RNA polymerase, promotes several aspects of germline development, including proliferation, meiosis, and gametogenesis, and ensures a robust response to RNA interference. In C. elegans, GLP-1/Notch signaling from the somatic gonad maintains a population of proliferating germ cells, while entry of germ cells into meiosis is triggered by the GLD-1 and GLD-2 pathways. GLP-1 signaling prevents germ cells from entering meiosis by inhibiting GLD-1 and GLD-2 activity. We originally identified the ego-1 gene on the basis of a genetic interaction with glp-1. Here, we investigate the role of ego-1 in germline proliferation. Our data indicate that EGO-1 does not positively regulate GLP-1 protein levels or GLP-1 signaling activity. Moreover, GLP-1 signaling does not positively regulate EGO-1 activity. EGO-1 does not inhibit expression of GLD-1 protein in the distal germline. Instead, EGO-1 acts in parallel with GLP-1 signaling to influence the proliferation vs. meiosis fate choice. Moreover, EGO-1 and GLD-1 act in parallel to ensure germline health. Finally, the size and distribution of nuclear pore complexes and perinuclear P granules are altered in the absence of EGO-1, effects that disrupt germ cell biology per se and probably limit germline growth. PMID:15911573

7. Parallelization of the SIR code

Thonhofer, S.; Bellot Rubio, L. R.; Utz, D.; Jurčak, J.; Hanslmeier, A.; Piantschitsch, I.; Pauritsch, J.; Lemmerer, B.; Guttenbrunner, S.

A high-resolution 3-dimensional model of the photospheric magnetic field is essential for the investigation of small-scale solar magnetic phenomena. The SIR code is an advanced Stokes-inversion code that deduces physical quantities, e.g. the magnetic field vector, temperature, and LOS velocity, from spectropolarimetric data. We extended this code with the capability to handle large data sets directly and to invert the pixels in parallel. This parallelization makes it feasible to apply the code directly to extensive data sets. In addition, we included the possibility of using different initial model atmospheres for the inversion, which enhances the quality of the results.

8. Utilizing parallel optimization in computational fluid dynamics

Kokkolaras, Michael

1998-12-01

General problems of interest in computational fluid dynamics are investigated by means of optimization. Specifically, in the first part of the dissertation, a method of optimal incremental function approximation is developed for the adaptive solution of differential equations. Various concepts and ideas utilized by numerical techniques employed in computational mechanics and artificial neural networks (e.g. function approximation and error minimization, variational principles and weighted residuals, and adaptive grid optimization) are combined to formulate the proposed method. The basis functions and associated coefficients of a series expansion, representing the solution, are optimally selected by a parallel direct search technique at each step of the algorithm according to appropriate criteria; the solution is built sequentially. In this manner, the proposed method is adaptive in nature, although a grid is neither built nor adapted in the traditional sense using a posteriori error estimates. Variational principles are utilized for the definition of the objective function to be extremized in the associated optimization problems, ensuring that the problem is well-posed. Complicated data structures and expensive remeshing algorithms and system solvers are avoided. Computational efficiency is increased by using low-order basis functions and concurrent computing. Numerical results and convergence rates are reported for a range of steady-state problems, including linear and nonlinear differential equations associated with general boundary conditions, and illustrate the potential of the proposed method. Fluid dynamics applications are emphasized. Conclusions are drawn by discussing the method's limitations, advantages, and possible extensions. The second part of the dissertation is concerned with the optimization of the viscous-inviscid-interaction (VII) mechanism in an airfoil flow analysis code. The VII mechanism is based on the concept of a transpiration velocity.
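The parallel direct search idea referenced above can be illustrated with a minimal compass search in which the derivative-free trial evaluations of each iteration run concurrently; this is a generic sketch under assumed names (thread-based evaluation stands in for the parallel hardware), not the dissertation's implementation.

```python
from concurrent.futures import ThreadPoolExecutor

def parallel_compass_search(f, x0, step=1.0, tol=1e-6, max_iter=200):
    """Minimize f without derivatives: at each iteration, evaluate the 2n
    compass trial points (one step along +/- each coordinate) concurrently,
    accept the best improving point, and contract the step on failure."""
    x, fx = list(x0), f(x0)
    n = len(x)
    with ThreadPoolExecutor() as pool:
        it = 0
        while step > tol and it < max_iter:
            it += 1
            trials = []
            for i in range(n):
                for sign in (+1.0, -1.0):
                    t = list(x)
                    t[i] += sign * step
                    trials.append(t)
            # The 2n evaluations are independent, hence trivially parallel.
            values = list(pool.map(f, trials))
            best = min(range(len(trials)), key=lambda j: values[j])
            if values[best] < fx:
                x, fx = trials[best], values[best]  # success: accept the move
            else:
                step *= 0.5                         # failure: contract the step
    return x, fx
```

Because no gradients are needed, only function values, the same loop applies to noisy or black-box objectives, which is the appeal of direct search methods noted throughout this collection.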

9. Predicting performance of parallel computations

NASA Technical Reports Server (NTRS)

Mak, Victor W.; Lundstrom, Stephen F.

1990-01-01

An accurate and computationally efficient method for predicting the performance of a class of parallel computations running on concurrent systems is described. A parallel computation is modeled as a task system with precedence relationships expressed as a series-parallel directed acyclic graph. Resources in a concurrent system are modeled as service centers in a queuing network model. Using these two models as inputs, the method outputs predictions of expected execution time of the parallel computation and the concurrent system utilization. The method is validated against both detailed simulation and actual execution on a commercial multiprocessor. Using 100 test cases, the average error of the prediction when compared to simulation statistics is 1.7 percent, with a standard deviation of 1.5 percent; the maximum error is about 10 percent.
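The series-parallel task model lends itself to a simple recursive evaluation. The sketch below (representation and names assumed) computes an expected execution time from task demands alone, without the queuing-network contention that the paper's full method does model.

```python
def expected_time(node):
    """Expected execution time of a series-parallel task tree.

    A node is either a number (a task's service demand), ('seq', children),
    or ('par', children).  Series stages add; parallel branches are bounded
    by the slowest branch.  Queueing delays at shared service centers are
    deliberately ignored in this simplified sketch.
    """
    if isinstance(node, (int, float)):
        return float(node)
    kind, children = node
    times = [expected_time(c) for c in children]
    return sum(times) if kind == 'seq' else max(times)
```

For example, a 1-unit setup, two parallel branches of 2 and 3 units, and a 1-unit merge yield 1 + max(2, 3) + 1 = 5 units.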

10. A parallel execution model for Prolog

SciTech Connect

Fagin, B.

1987-01-01

In this thesis a new parallel execution model for Prolog is presented: the PPP model, or Parallel Prolog Processor. The PPP supports AND-parallelism, OR-parallelism, and intelligent backtracking. An implementation of the PPP is described, through the extension of an existing Prolog abstract machine architecture. Several examples of PPP execution are presented and compilation to the PPP abstract instruction set is discussed. The performance effects of this model are reported, based on a simulation of a large benchmark set. The implications of these results for parallel Prolog systems are discussed, and directions for future work are indicated.

11. Fits to the Fermi-LAT GeV excess with right-handed sneutrino dark matter: Implications for direct and indirect dark matter searches and the LHC

Cerdeño, D. G.; Peiró, M.; Robles, S.

2015-06-01

We show that the right-handed (RH) sneutrino in the next-to-minimal supersymmetric standard model can account for the observed excess in the Fermi-LAT spectrum of gamma rays from the Galactic center, while fulfilling all the current experimental constraints from the LHC as well as from direct and indirect dark matter searches. We have explored the parameter space of this scenario, computed the gamma-ray spectrum for each phenomenologically viable solution and then performed a χ² fit to the excess. Unlike previous studies based on model-independent interpretations, we have taken into account the full annihilation spectrum, without assuming pure annihilation channels. Furthermore, we have incorporated limits from direct detection experiments, LHC bounds and also the constraints from Fermi-LAT on dwarf spheroidal galaxies and gamma-ray spectral lines. In addition, we have estimated the effect of the most recent Fermi-LAT reprocessed data (pass 8). In general, we obtain good fits to the Galactic center excess (GCE) when the RH sneutrino annihilates mainly into pairs of light singletlike scalar or pseudoscalar Higgs bosons that subsequently decay in flight, producing four-body final states and spectral features that improve the goodness of the fit at large energies. The best fit (χ² = 20.8) corresponds to a RH sneutrino with a mass of 64 GeV which annihilates preferentially into a pair of light singletlike pseudoscalar Higgs bosons (with masses of order 60 GeV). We have also analyzed other channels that provide good fits to the excess. Finally, we discuss the implications for direct and indirect detection searches, paying special attention to the possible appearance of gamma-ray spectral features in near-future Fermi-LAT analyses, as well as deviations from the Standard Model-like Higgs properties at the LHC. Remarkably, many of the scenarios that fit the GCE can also be probed by these other complementary techniques.

12. A join algorithm for combining AND parallel solutions in AND/OR parallel systems

SciTech Connect

Ramkumar, B. ); Kale, L.V. )

1992-02-01

When two or more literals in the body of a Prolog clause are solved in (AND) parallel, their solutions need to be joined to compute solutions for the clause. This is often a difficult problem in parallel Prolog systems that exploit OR and independent AND parallelism in Prolog programs. In several AND/OR parallel systems proposed recently, this problem is side-stepped at the cost of unexploited OR parallelism in the program, in part due to the complexity of the backtracking algorithm beneath AND parallel branches. In some cases, the data dependency graphs used by these systems cannot represent all the exploitable independent AND parallelism known at compile time. In this paper, we describe the compile-time analysis for an optimized join algorithm that supports independent AND parallelism in logic programs efficiently without leaving any OR parallelism unexploited. We then discuss how this analysis can be used to yield very efficient runtime behavior. We also discuss problems associated with a tree representation of the search space when arbitrarily complex data dependency graphs are permitted. We describe how these problems can be resolved by mapping the search space onto the data dependency graphs themselves. The algorithm has been implemented in a compiler for parallel Prolog based on the reduce-OR process model. The algorithm is suitable for the implementation of AND/OR systems on both shared and nonshared memory machines. Performance results on benchmark programs are also presented.
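The core consistency check of such a join phase can be sketched as follows, assuming solutions are represented as variable-binding dictionaries; this is a simplified illustration of joining independently computed AND-branch solutions, not the reduce-OR implementation described above.

```python
def join_solutions(left, right):
    """Join solution sets from two independently solved AND goals.

    Each solution is a dict of variable bindings.  Two solutions combine
    only if they agree on every shared variable -- the consistency check
    a join phase must perform when AND branches run without communicating.
    """
    joined = []
    for s1 in left:
        for s2 in right:
            if all(s1[v] == s2[v] for v in s1.keys() & s2.keys()):
                merged = dict(s1)
                merged.update(s2)
                joined.append(merged)
    return joined
```

When the branches share no variables this degenerates to a Cartesian product, which is why compile-time dependency analysis (as in the paper) pays off: it identifies which joins need the consistency filter at all.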

13. Numerical analysis of electrical defibrillation. The parallel approach.

PubMed

Ng, K T; Hutchinson, S A; Gao, S

1995-01-01

Numerical modeling offers a viable tool for studying electrical defibrillation, allowing the behavior of field quantities to be observed easily as the different system parameters are varied. One numerical technique, namely the finite-element method, has been found particularly effective for modeling complex thoracic anatomies. However, an accurate finite-element model of the thorax often requires a large number of elements and nodes, leading to a large set of equations that cannot be solved effectively with the computational power of conventional computers. This is especially true if many finite-element solutions need to be achieved within a reasonable time period (e.g., electrode configuration optimization). In this study, the use of massively parallel computers to provide the memory and the reduction in solution time needed to solve these large finite-element problems is discussed. Both the uniform and unstructured grid approaches are considered. Algorithms that allow efficient mapping of uniform and unstructured grids to data-parallel and message-passing parallel computers are discussed. An automatic iterative procedure for electrode configuration optimization is presented. The procedure is based on the minimization of an objective function using the parallel direct search technique. Computational performance results are presented together with simulation results. PMID:8656104

14. Low-background balloon-borne direct search for ionizing massive particles as a component of the dark galactic halo matter

McGuire, Patrick Charles

A dark matter (DM) search experiment was flown on the IMAX balloon payload to test the hypothesis that a minor component of the dark matter in the Galactic halo is composed of ionizing (dE/dx > 1 MeV/(g/cm²) or σ > 2 × 10⁻²⁰ cm²) supermassive particles (m_x ∈ (10⁴, 10¹²) GeV/c²) that cannot penetrate the atmosphere due to their low velocities (β ∈ (0.0003, 0.00025)). The DM search experiment consisted of a delayed coincidence between four approximately 2400 cm² plastic scintillation detectors, with a total acceptance of approximately 100 cm² sr. In order to search for ultra-slow particles which do not slow down in the IMAX telescope, the experiment contained TDCs which measured the time delays T_{i,i+1} ∈ (0.3, 14.0) μs between hits in successive counters to approximately 1 percent precision. Using the first 5 hours of data at float altitude (5 g/cm² residual atmosphere), we observed approximately 5 candidate non-slowing dark matter events, consistent with the expected background of 4 events from accidental coincidences. This implies that the DM flux is less than 6.5 × 10⁻⁶ cm⁻² s⁻¹ sr⁻¹ (95 percent C.L.). Similar results were also obtained for particles which slow down in the counter telescope. This experiment effectively closes much of a previously unconstrained 'window' in the joint mass/cross-section parameter space for massive particles as the dominant halo DM, and implies that for certain regions of this parameter space massive particles cannot constitute more than one part in 10⁵ by mass of all the DM. These results also directly constrain 'light' magnetic monopoles and neutral CHAMPs in a previously unconstrained mass region m_x ∈ (10⁶, 10⁹) GeV.

15. Parallel rendering techniques for massively parallel visualization

SciTech Connect

Hansen, C.; Krogh, M.; Painter, J.

1995-07-01

As the resolution of simulation models increases, scientific visualization algorithms which take advantage of the large memory and parallelism of Massively Parallel Processors (MPPs) are becoming increasingly important. For large applications, rendering on the MPP tends to be preferable to rendering on a graphics workstation due to the MPP's abundant resources: memory, disk, and numerous processors. The challenge becomes developing algorithms that can exploit these resources while minimizing overhead, typically communication costs. This paper describes recent efforts in parallel rendering for polygonal primitives as well as parallel volumetric techniques. It presents rendering algorithms, developed for massively parallel processors (MPPs), for polygonal, sphere, and volumetric data. The polygon algorithm uses a data-parallel approach, whereas the sphere and volume renderers use a MIMD approach. Implementations of these algorithms are presented for the Thinking Machines Corporation CM-5 MPP.

16. Calculation of geometric phases in electric dipole searches with trapped spin-1/2 particles based on direct solution of the Schrödinger equation

Steyerl, A.; Kaufman, C.; Müller, G.; Malik, S. S.; Desai, A. M.; Golub, R.

2014-05-01

Pendlebury et al. [Phys. Rev. A 70, 032102 (2004)] were the first to investigate the role of geometric phases in searches for an electric dipole moment (EDM) of elementary particles based on Ramsey-separated oscillatory field magnetic resonance with trapped ultracold neutrons and comagnetometer atoms. Their work was based on the Bloch equation, and later work using the density matrix corroborated the results and extended the scope to describe the dynamics of spins in general fields and in bounded geometries. We solve the Schrödinger equation directly for cylindrical trap geometry and obtain a full description of EDM-relevant spin behavior in general fields, including the short-time transients and vertical spin oscillation in the entire range of particle velocities. We apply this method to general macroscopic fields and to the field of a microscopic magnetic dipole.

17. Searching for hard X-ray directivity during the rise, peak, and decay phases of solar flares

NASA Technical Reports Server (NTRS)

Li, Peng

1994-01-01

We have identified 72 large solar flares (peak counting rates of more than 1000 counts/s) observed by the Hard X-Ray Burst Spectrometer (HXRBS) on board the Solar Maximum Mission (SMM). Using a database of these flares, we have studied hard X-ray (50-850 keV) spectral center-to-limb variation and its evolution with time. The major results are the following: (1) During the rise phase, the center-to-limb spectral variation is small, with a hardening of Δδ = 0.02 ± 0.25 and a statistical significance of 0.1 σ. (2) During the peak phase, the center-to-limb variation is Δδ = 0.13 ± 0.13, with a statistical significance of 1 σ. (3) During the decay phase, the center-to-limb variation changes to a softening. The softening is relatively large, with Δδ = -0.25 ± 0.21 and a statistical significance of 1.2 σ. (4) The linear least-squares fits to the spectral center-to-limb variations do not have slopes significantly different from zero during any of these three phases. (5) The spectral distributions of center events and limb events are shown not to differ by a Kolmogorov-Smirnov two-sample test. (6) The fraction of events detected near the limb is marginally consistent with that expected from isotropically emitting flares. (7) On average, flares evolve as soft-hard-soft. These results suggest that there is no statistically significant evidence for hard X-ray directivity during the rise, peak, and decay phases of solar flares; the hard X-ray radiation pattern at these energies is almost isotropic during all phases. This lack of directivity (or anisotropy) is not in agreement with the results of Vestrand et al. (1987), who found the energetic photon source to be anisotropic using SMM Gamma-Ray Spectrometer (GRS) data in a much higher energy band of 0.3-1 MeV. If we want to interpret the results of Vestrand et al. (1987) and our present results in a self-consistent way, we must conclude that at

18. Multi-Target-Directed Ligands and other Therapeutic Strategies in the Search of a Real Solution for Alzheimer's Disease

PubMed Central

Agis-Torres, Angel; Sölhuber, Monica; Fernandez, Maria; Sanchez-Montero, J.M.

2014-01-01

The lack of an adequate therapy for Alzheimer's Disease (AD) contributes greatly to the continuously growing number of papers and reviews, reflecting the important efforts made by scientists in this field. It is well known that AD is the most common cause of dementia, and to date there is no preventive therapy and no cure for the disease, which contrasts with the enormous efforts put into the task. On the other hand, many aspects of AD are currently debated or even unknown. This review offers a view of the current state of knowledge about AD, including the most relevant findings and processes that take part in the disease; it also surveys relevant past, present and future research on therapeutic drugs in light of the new paradigm of "Multi-Target-Directed Ligands" (MTDLs). In our opinion, this paradigm will henceforth lead research toward the discovery of better therapeutic solutions, not only for AD but also for other complex diseases. This review highlights the strategies followed so far, and focuses on other emerging targets that should be taken into account for the future development of new MTDLs. Thus, the path followed in this review goes from the pathology and the processes involved in AD to the strategies to consider in ongoing and future research. PMID:24533013

19. B physics: Measurement of partial widths and search for direct CP violation in D0 meson decays

SciTech Connect

Acosta, D.; The CDF Collaboration

2005-04-04

We present a measurement of relative partial widths and decay-rate CP asymmetries in K⁻K⁺ and π⁻π⁺ decays of D⁰ mesons produced in p p̄ collisions at √s = 1.96 TeV. We use a sample of 2 × 10⁵ D*⁺ → D⁰π⁺ (and charge conjugate) decays with the D⁰ decaying to K⁻π⁺, K⁻K⁺, and π⁻π⁺, corresponding to 123 pb⁻¹ of data collected by the Collider Detector at Fermilab II experiment at the Fermilab Tevatron collider. No significant direct CP violation is observed. We measure Γ(D⁰ → K⁻K⁺)/Γ(D⁰ → K⁻π⁺) = 0.0992 ± 0.0011 ± 0.0012, Γ(D⁰ → π⁻π⁺)/Γ(D⁰ → K⁻π⁺) = 0.03594 ± 0.00054 ± 0.00040, A_CP(K⁻K⁺) = (2.0 ± 1.2 ± 0.6)%, and A_CP(π⁻π⁺) = (1.0 ± 1.3 ± 0.6)%, where, in all cases, the first uncertainty is statistical and the second is systematic.

20. Improved task scheduling for parallel simulations. Master's thesis

SciTech Connect

McNear, A.E.

1991-12-01

The objective of this investigation is to design, analyze, and validate the generation of optimal schedules for simulation systems. Improved performance in simulation execution times can greatly improve the return rate of information provided by such simulations, resulting in reduced development costs of future computer/electronic systems. Optimal schedule generation of precedence-constrained task systems, including iterative feedback systems such as VHDL or war gaming simulations, for execution on a parallel computer is known to be NP-hard. Efficiently parallelizing such problems takes full advantage of present computer technology to achieve a significant reduction in the search times required. Unfortunately, the extreme combinatorial 'explosion' of possible task assignments to processors creates an exponential search space prohibitive on any computer for search algorithms which maintain more than one branch of the search graph at any one time. This work develops various parallel modified backtracking (MBT) search algorithms for execution on an iPSC/2 hypercube that bound the space requirements and produce an optimal minimum-length schedule with linear speedup. The parallel MBT search algorithm is validated using various feedback task simulation systems which are scheduled for execution on an iPSC/2 hypercube. The search time, size of the enumerated search space, and communications overhead required to ensure efficient utilization during the parallel search process are analyzed. The various applications indicated appreciable improvement in performance using this method.
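The flavor of a space-bounded backtracking search with pruning can be conveyed by a sketch for the simpler problem of independent tasks (precedence constraints and the parallel iPSC/2 decomposition are omitted); all names are hypothetical.

```python
def min_makespan(durations, n_procs):
    """Depth-first backtracking search for an optimal task-to-processor
    assignment, pruning any branch whose partial makespan already meets
    or exceeds the best complete schedule found so far.  Only the current
    branch is kept in memory, bounding space to O(tasks + processors)."""
    best = [float(sum(durations))]  # trivial upper bound: one processor
    loads = [0.0] * n_procs

    def assign(k):
        if k == len(durations):
            best[0] = min(best[0], max(loads))
            return
        tried = set()
        for p in range(n_procs):
            if loads[p] in tried:        # symmetric branch already explored
                continue
            tried.add(loads[p])
            loads[p] += durations[k]
            if max(loads) < best[0]:     # bound: prune dominated branches
                assign(k + 1)
            loads[p] -= durations[k]

    assign(0)
    return best[0]
```

A parallel MBT variant would distribute subtrees of this search across hypercube nodes and broadcast improved bounds, which is where the communications overhead analyzed in the thesis arises.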

1. Integrated Task and Data Parallel Programming

NASA Technical Reports Server (NTRS)

Grimshaw, A. S.

1998-01-01

This research investigates the combination of task and data parallel language constructs within a single programming language. There are a number of applications that exhibit properties which would be well served by such an integrated language. Examples include global climate models, aircraft design problems, and multidisciplinary design optimization problems. Our approach incorporates data parallel language constructs into an existing, object-oriented, task parallel language. The language will support creation and manipulation of parallel classes and objects of both types (task parallel and data parallel). Ultimately, the language will allow data parallel and task parallel classes to be used either as building blocks or managers of parallel objects of either type, thus allowing the development of single- and multi-paradigm parallel applications. 1995 Research Accomplishments: In February I presented a paper at Frontiers 1995 describing the design of the data parallel language subset. During the spring I wrote and defended my dissertation proposal. Since that time I have developed a runtime model for the language subset. I have begun implementing the model and hand-coding simple examples which demonstrate the language subset. I have identified an astrophysical fluid flow application which will validate the data parallel language subset. 1996 Research Agenda: Milestones for the coming year include implementing a significant portion of the data parallel language subset over the Legion system. Using simple hand-coded methods, I plan to demonstrate (1) concurrent task and data parallel objects and (2) task parallel objects managing both task and data parallel objects. My next steps will focus on constructing a compiler and implementing the fluid flow application with the language. Concurrently, I will conduct a search for a real-world application exhibiting both task and data parallelism within the same program. Additional 1995 Activities: During the fall I collaborated

Fanti, V.; Marzeddu, R.; Randaccio, P.

2003-08-01

A fast parallel readout system based on a PCI board has been developed in the framework of the Medipix collaboration. The readout electronics consists of two boards: the motherboard directly interfacing the Medipix2 chip, and the PCI board with digital I/O ports 32 bits wide. The device driver and readout software have been developed at a low level in assembler to allow fast data transfer and image reconstruction. The parallel readout permits a transfer rate of up to 64 Mbytes/s. http://medipix.web.cern.ch/MEDIPIX/

3. Parallel tridiagonal equation solvers

NASA Technical Reports Server (NTRS)

Stone, H. S.

1974-01-01

Three parallel algorithms were compared for the direct solution of tridiagonal linear systems of equations. The algorithms are suitable for computers such as ILLIAC 4 and CDC STAR. For array computers similar to ILLIAC 4, cyclic odd-even reduction has the least operation count for highly structured sets of equations, and recursive doubling has the least count for relatively unstructured sets of equations. Since the difference in operation counts for these two algorithms is not substantial, their relative running times may be more related to overhead operations, which are not measured in this paper. The third algorithm, based on Buneman's Poisson solver, has more arithmetic operations than the others, and appears to be the least favorable. For pipeline computers similar to CDC STAR, cyclic odd-even reduction appears to be the most preferable algorithm for all cases.
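A serial sketch of the cyclic odd-even reduction discussed above, written for systems of size n = 2^k − 1 with the usual convention a[0] = c[n−1] = 0; on an array computer each inner loop would run across processors in lockstep. The function name and layout are illustrative, not taken from the paper.

```python
def cyclic_reduction(a, b, c, d):
    """Solve the tridiagonal system a[i]x[i-1] + b[i]x[i] + c[i]x[i+1] = d[i].

    Forward phase: each pass eliminates the odd-indexed unknowns of the
    current level, halving the active system; every combination within a
    pass is independent, which is what makes the method parallel.
    Back substitution then fills in the eliminated unknowns level by level.
    Requires n = 2**k - 1.
    """
    n = len(b)
    assert n & (n + 1) == 0, "size must be 2**k - 1"
    a, b, c, d = list(a), list(b), list(c), list(d)
    stride = 1
    while 2 * stride - 1 < n:
        for i in range(2 * stride - 1, n, 2 * stride):  # independent in i
            im, ip = i - stride, i + stride
            alpha = -a[i] / b[im]
            beta = -c[i] / b[ip]
            b[i] += alpha * c[im] + beta * a[ip]
            d[i] += alpha * d[im] + beta * d[ip]
            a[i] = alpha * a[im]
            c[i] = beta * c[ip]
        stride *= 2
    x = [0.0] * n
    while stride >= 1:
        for i in range(stride - 1, n, 2 * stride):       # independent in i
            xm = x[i - stride] if i - stride >= 0 else 0.0
            xp = x[i + stride] if i + stride < n else 0.0
            x[i] = (d[i] - a[i] * xm - c[i] * xp) / b[i]
        stride //= 2
    return x
```

The operation count is O(n) total over O(log n) parallel steps, which is the trade-off weighed against recursive doubling in the comparison above.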

4. MPP parallel forth

NASA Technical Reports Server (NTRS)

Dorband, John E.

1987-01-01

Massively Parallel Processor (MPP) Parallel FORTH is a derivative of FORTH-83 and Unified Software Systems' Uni-FORTH. The extension of FORTH into the realm of parallel processing on the MPP is described. With few exceptions, Parallel FORTH was made to follow the description of Uni-FORTH as closely as possible. Likewise, the Parallel FORTH extensions were designed to be as philosophically similar to serial FORTH as possible. The MPP hardware characteristics, as viewed by the FORTH programmer, are discussed. Then a description is presented of how Parallel FORTH is implemented on the MPP.

5. Direct glycan structure determination of intact N-linked glycopeptides by low-energy collision-induced dissociation tandem mass spectrometry and predicted spectral library searching.

PubMed

Pai, Pei-Jing; Hu, Yingwei; Lam, Henry

2016-08-31

Intact glycopeptide MS analysis to reveal site-specific protein glycosylation is an important frontier of proteomics. However, computational tools for analyzing MS/MS spectra of intact glycopeptides are still limited and not well-integrated into existing workflows. In this work, a new computational tool which combines the spectral library building/searching tool, SpectraST (Lam et al., Nat. Methods 2008, 5, 873-875), and the glycopeptide fragmentation prediction tool, MassAnalyzer (Zhang et al., Anal. Chem. 2010, 82, 10194-10202), for intact glycopeptide analysis has been developed. Specifically, this tool enables the determination of the glycan structure directly from low-energy collision-induced dissociation (CID) spectra of intact glycopeptides. Given a list of possible glycopeptide sequences as input, a sample-specific spectral library of MassAnalyzer-predicted spectra is built using SpectraST. Glycan identification from CID spectra is achieved by spectral library searching against this library, in which both m/z and intensity information of the possible fragmentation ions are taken into consideration for improved accuracy. We validated our method using a standard glycoprotein, human transferrin, and evaluated its potential to be used in site-specific glycosylation profiling of glycoprotein datasets from LC-MS/MS. In addition, we further applied our method to reveal, for the first time, the site-specific N-glycosylation profile of recombinant human acetylcholinesterase expressed in HEK293 cells. For maximum usability, SpectraST is developed as part of the Trans-Proteomic Pipeline (TPP), a freely available and open-source software suite for MS data analysis. PMID:27506355
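The use of both m/z and intensity in library matching can be sketched with a normalized dot-product score; this is a simplified stand-in (names and tolerance are assumptions), not SpectraST's actual scoring function.

```python
import math

def spectral_dot(observed, predicted, tol=0.5):
    """Normalized dot product between an observed and a predicted spectrum,
    each a list of (m/z, intensity) peaks.  Peaks match when their m/z
    values fall within `tol`, and intensities weight the score -- the key
    difference from naive peak counting."""
    score = 0.0
    for mz_o, int_o in observed:
        best = 0.0
        for mz_p, int_p in predicted:
            if abs(mz_o - mz_p) <= tol:
                best = max(best, int_o * int_p)
        score += best
    norm_o = math.sqrt(sum(i * i for _, i in observed))
    norm_p = math.sqrt(sum(i * i for _, i in predicted))
    return score / (norm_o * norm_p) if norm_o and norm_p else 0.0
```

Identical spectra score 1.0 and disjoint spectra score 0.0; a search over a library of predicted spectra would report the candidate glycopeptide with the highest such score.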

6. Parallel flow diffusion battery

DOEpatents

Yeh, H.C.; Cheng, Y.S.

1984-01-01

A parallel flow diffusion battery for determining the mass distribution of an aerosol has a plurality of diffusion cells mounted in parallel to an aerosol stream, each diffusion cell including a stack of mesh wire screens of different density.

7. Parallel flow diffusion battery

DOEpatents

Yeh, Hsu-Chi; Cheng, Yung-Sung

1984-08-07

A parallel flow diffusion battery for determining the mass distribution of an aerosol has a plurality of diffusion cells mounted in parallel to an aerosol stream, each diffusion cell including a stack of mesh wire screens of different density.

8. Fast data parallel polygon rendering

SciTech Connect

Ortega, F.A.; Hansen, C.D.

1993-09-01

This paper describes a parallel method for polygonal rendering on a massively parallel SIMD machine. This method, based on a simple shading model, is targeted for applications which require very fast polygon rendering for extremely large sets of polygons such as is found in many scientific visualization applications. The algorithms described in this paper are incorporated into a library of 3D graphics routines written for the Connection Machine. The routines are implemented on both the CM-200 and the CM-5. This library enables scientists to display 3D shaded polygons directly from a parallel machine without the need to transmit huge amounts of data to a post-processing rendering system.

9. Message passing with parallel queue traversal

DOEpatents

Underwood, Keith D.; Brightwell, Ronald B.; Hemmert, K. Scott

2012-05-01

In message passing implementations, associative matching structures are used to permit list entries to be searched in parallel fashion, thereby avoiding the delay of linear list traversal. List management capabilities are provided to support list entry turnover semantics and priority ordering semantics.
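
In software terms, the parallel-traversal idea resembles scanning partitions of a match list concurrently while still honoring posting (priority) order by taking the lowest-index match. A toy sketch with a thread pool; the entry format and matching rule are invented for illustration and do not reflect the patented hardware structures:

```python
from concurrent.futures import ThreadPoolExecutor

def match_entry(entry, tag):
    """Toy matching rule: an entry matches if its tag equals the incoming tag."""
    return entry["tag"] == tag

def parallel_find(entries, tag, workers=4):
    """Scan fixed-size partitions of the list concurrently, then pick the
    match with the lowest index so posting (priority) order is preserved."""
    chunk = max(1, len(entries) // workers)
    parts = [entries[i:i + chunk] for i in range(0, len(entries), chunk)]
    offsets = range(0, len(entries), chunk)

    def scan(args):
        off, part = args
        for i, e in enumerate(part):
            if match_entry(e, tag):
                return off + i
        return None

    with ThreadPoolExecutor(max_workers=workers) as pool:
        hits = [h for h in pool.map(scan, zip(offsets, parts)) if h is not None]
    return min(hits) if hits else None

entries = [{"tag": t} for t in (7, 3, 9, 3, 5, 1)]
print(parallel_find(entries, 3))  # 1  (first posted entry with tag 3)
```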

10. Development of ballistic hot electron emitter and its applications to parallel processing: active-matrix massive direct-write lithography in vacuum and thin-film deposition in solutions

Koshida, Nobuyoshi; Kojima, Akira; Ikegami, Naokatsu; Suda, Ryutaro; Yagi, Mamiko; Shirakashi, Junichi; Miyaguchi, Hiroshi; Muroyama, Masanori; Yoshida, Shinya; Totsu, Kentaro; Esashi, Masayoshi

2015-07-01

Making the best use of the characteristic features in nanocrystalline Si (nc-Si) ballistic hot electron source, an alternative lithographic technology is presented based on two approaches: physical excitation in vacuum and chemical reduction in solutions. The nc-Si cold cathode is composed of a thin metal film, an nc-Si layer, an n+-Si substrate, and an ohmic back contact. Under a biased condition, energetic electrons are uniformly and directionally emitted through the thin surface electrodes. In vacuum, this emitter is available for active-matrix drive massive parallel lithography. Arrayed 100×100 emitters (each emitting area: 10×10 μm2) are fabricated on a silicon substrate by a conventional planar process, and then every emitter is bonded with the integrated driver using through-silicon-via interconnect technology. Another application is the use of this emitter as an active electrode supplying highly reducing electrons into solutions. A very small amount of metal-salt solutions is dripped onto the nc-Si emitter surface, and the emitter is driven without using any counter electrodes. After the emitter operation, thin metal and elemental semiconductors (Si and Ge) films are uniformly deposited on the emitting surface. Spectroscopic surface and compositional analyses indicate that there are no significant contaminations in deposited thin films.

11. Parallel simulation today

NASA Technical Reports Server (NTRS)

Nicol, David; Fujimoto, Richard

1992-01-01

This paper surveys topics that presently define the state of the art in parallel simulation. Included in the tutorial are discussions on new protocols, mathematical performance analysis, time parallelism, hardware support for parallel simulation, load balancing algorithms, and dynamic memory management for optimistic synchronization.

12. Parallel-vector computation for linear structural analysis and non-linear unconstrained optimization problems

NASA Technical Reports Server (NTRS)

Nguyen, D. T.; Al-Nasra, M.; Zhang, Y.; Baddourah, M. A.; Agarwal, T. K.; Storaasli, O. O.; Carmona, E. A.

1991-01-01

Several parallel-vector computational improvements to the unconstrained optimization procedure are described which speed up the structural analysis-synthesis process. A fast parallel-vector Choleski-based equation solver, pvsolve, is incorporated into the well-known SAP-4 general-purpose finite-element code. The new code, denoted PV-SAP, is tested for static structural analysis. Initial results on a four processor CRAY 2 show that using pvsolve reduces the equation solution time by a factor of 14-16 over the original SAP-4 code. In addition, parallel-vector procedures for the Golden Block Search technique and the BFGS method are developed and tested for nonlinear unconstrained optimization. A parallel version of an iterative solver and the pvsolve direct solver are incorporated into the BFGS method. Preliminary results on nonlinear unconstrained optimization test problems, using pvsolve in the analysis, show excellent parallel-vector performance indicating that these parallel-vector algorithms can be used in a new generation of finite-element based structural design/analysis-synthesis codes.
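
The abstract mentions a parallel-vector "Golden Block Search" technique; as a point of reference, the classic serial golden-section line search on which such methods are based can be sketched as follows (the test function is illustrative):

```python
import math

def golden_section_search(f, a, b, tol=1e-8):
    """Minimize a unimodal function f on [a, b] by golden-section search:
    shrink the bracket by the inverse golden ratio each iteration."""
    invphi = (math.sqrt(5.0) - 1.0) / 2.0  # 1/phi ~= 0.618
    c = b - invphi * (b - a)
    d = a + invphi * (b - a)
    while abs(b - a) > tol:
        if f(c) < f(d):
            b, d = d, c            # minimum lies in [a, d]
            c = b - invphi * (b - a)
        else:
            a, c = c, d            # minimum lies in [c, b]
            d = a + invphi * (b - a)
    return 0.5 * (a + b)

x_min = golden_section_search(lambda x: (x - 2.0) ** 2 + 1.0, 0.0, 5.0)
print(round(x_min, 6))  # 2.0
```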

13. Parallelizing Timed Petri Net simulations

NASA Technical Reports Server (NTRS)

Nicol, David M.

1993-01-01

The possibility of using parallel processing to accelerate the simulation of Timed Petri Nets (TPN's) was studied. It was recognized that complex system development tools often transform system descriptions into TPN's or TPN-like models, which are then simulated to obtain information about system behavior. Viewed this way, it was important that the parallelization of TPN's be as automatic as possible, to admit the possibility of the parallelization being embedded in the system design tool. Later years of the grant were devoted to examining the problem of joint performance and reliability analysis, to explore whether both types of analysis could be accomplished within a single framework. In this final report, the results of our studies are summarized. We believe that the problem of parallelizing TPN's automatically for MIMD architectures has been almost completely solved for a large and important class of problems. Our initial investigations into joint performance/reliability analysis are two-fold; it was shown that Monte Carlo simulation, with importance sampling, offers promise of joint analysis in the context of a single tool, and methods for the parallel simulation of general Continuous Time Markov Chains, a model framework within which joint performance/reliability models can be cast, were developed. However, very much more work is needed to determine the scope and generality of these approaches. The results obtained in our two studies, future directions for this type of work, and a list of publications are included.

14. Eclipse Parallel Tools Platform

SciTech Connect

Watson, Gregory; DeBardeleben, Nathan; Rasmussen, Craig

2005-02-18

Designing and developing parallel programs is an inherently complex task. Developers must choose from the many parallel architectures and programming paradigms that are available, and face a plethora of tools that are required to execute, debug, and analyze parallel programs in these environments. Few, if any, of these tools provide any degree of integration, or indeed any commonality in their user interfaces at all. This further complicates the parallel developer's task, hampering software engineering practices, and ultimately reducing productivity. One consequence of this complexity is that best practice in parallel application development has not advanced to the same degree as more traditional programming methodologies. The result is that there is currently no open-source, industry-strength platform that provides a highly integrated environment specifically designed for parallel application development. Eclipse is a universal tool-hosting platform that is designed to provide a robust, full-featured, commercial-quality, industry platform for the development of highly integrated tools. It provides a wide range of core services for tool integration that allow tool producers to concentrate on their tool technology rather than on platform-specific issues. The Eclipse Integrated Development Environment is an open-source project that is supported by over 70 organizations, including IBM, Intel and HP. The Eclipse Parallel Tools Platform (PTP) plug-in extends the Eclipse framework by providing support for a rich set of parallel programming languages and paradigms, and a core infrastructure for the integration of a wide variety of parallel tools. The first version of the PTP is a prototype that provides only minimal functionality for parallel tool integration and support for a small number of parallel architectures.

15. Parallel Atomistic Simulations

SciTech Connect

HEFFELFINGER,GRANT S.

2000-01-18

Algorithms developed to enable the use of atomistic molecular simulation methods with parallel computers are reviewed. Methods appropriate for bonded as well as non-bonded (and charged) interactions are included. While strategies for obtaining parallel molecular simulations have been developed for the full variety of atomistic simulation methods, molecular dynamics and Monte Carlo have received the most attention. Three main types of parallel molecular dynamics simulations have been developed: the replicated data decomposition, the spatial decomposition, and the force decomposition. For Monte Carlo simulations, parallel algorithms have been developed which can be divided into two categories: those which require a modified Markov chain and those which do not. Parallel algorithms developed for other simulation methods, such as Gibbs ensemble Monte Carlo, grand canonical molecular dynamics, and Monte Carlo methods for protein structure determination, are also reviewed, and issues such as how to measure parallel efficiency, especially in the case of parallel Monte Carlo algorithms with modified Markov chains, are discussed.
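
Of the three decompositions mentioned, spatial decomposition assigns particles to geometric cells so that short-range interactions only involve a cell and its neighbors. A 2-D cell-list sketch (box size, cutoff, and coordinates are illustrative; distances here are computed without the periodic wrap that the cell indexing implies):

```python
import math

def build_cells(positions, box, rc):
    """Spatial decomposition: bin particles into cells of side >= the force
    cutoff rc, so interactions only need a cell and its neighbors."""
    n = max(1, int(box / rc))            # cells per side
    side = box / n
    cells = {}
    for idx, (x, y) in enumerate(positions):
        key = (int(x / side) % n, int(y / side) % n)
        cells.setdefault(key, []).append(idx)
    return cells, n

def neighbor_pairs(positions, box, rc):
    """Enumerate unique pairs within cutoff rc by scanning adjacent cells only."""
    cells, n = build_cells(positions, box, rc)
    pairs = set()
    for (cx, cy), members in cells.items():
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                for j in cells.get(((cx + dx) % n, (cy + dy) % n), []):
                    for i in members:
                        if i < j and math.dist(positions[i], positions[j]) < rc:
                            pairs.add((i, j))
    return pairs

pts = [(0.5, 0.5), (0.9, 0.5), (5.0, 5.0)]
print(sorted(neighbor_pairs(pts, box=10.0, rc=1.0)))  # [(0, 1)]
```

In a parallel code, each processor owns a block of cells and exchanges only boundary-cell particles with neighboring processors.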

16. Parallel Imaging Microfluidic Cytometer

PubMed Central

Ehrlich, Daniel J.; McKenna, Brian K.; Evans, James G.; Belkina, Anna C.; Denis, Gerald V.; Sherr, David; Cheung, Man Ching

2011-01-01

By adding an additional degree of freedom from multichannel flow, the parallel microfluidic cytometer (PMC) combines some of the best features of flow cytometry (FACS) and microscope-based high-content screening (HCS). The PMC (i) lends itself to fast processing of large numbers of samples, (ii) adds a 1-D imaging capability for intracellular localization assays (HCS), (iii) has a high rare-cell sensitivity, and (iv) has an unusual capability for time-synchronized sampling. An inability to practically handle large sample numbers has restricted applications of conventional flow cytometers and microscopes in combinatorial cell assays, network biology, and drug discovery. The PMC promises to relieve a bottleneck in these previously constrained applications. The PMC may also be a powerful tool for finding rare primary cells in the clinic. The multichannel architecture of current PMC prototypes allows 384 unique samples for a cell-based screen to be read out in approximately 6–10 minutes, about 30 times the speed of most current FACS systems. In 1-D intracellular imaging, the PMC can obtain protein localization using HCS marker strategies at many times the sample throughput of CCD-based microscopes or CCD-based single-channel flow cytometers. The PMC also permits the signal integration time to be varied over a larger range than is practical in conventional flow cytometers. The signal-to-noise advantages are useful, for example, in counting rare positive cells in the most difficult early stages of genome-wide screening. We review the status of parallel microfluidic cytometry and discuss some of the directions the new technology may take. PMID:21704835

17. Real-time trajectory optimization on parallel processors

NASA Technical Reports Server (NTRS)

Psiaki, Mark L.

1993-01-01

A parallel algorithm has been developed for rapidly solving trajectory optimization problems. The goal of the work has been to develop an algorithm that is suitable for real-time, on-line optimal guidance through repeated solution of a trajectory optimization problem. The algorithm has been developed on an INTEL iPSC/860 message-passing parallel processor. It uses a zero-order-hold discretization of a continuous-time problem and solves the resulting nonlinear programming problem using a custom-designed augmented Lagrangian nonlinear programming algorithm. The algorithm achieves parallelism of function, derivative, and search direction calculations through the principle of domain decomposition applied along the time axis. It has been encoded and tested on three example problems: the Goddard problem; the acceleration-limited, planar minimum-time-to-the-origin problem; and a National Aerospace Plane minimum-fuel ascent guidance problem. Execution times as fast as 118 sec of wall clock time have been achieved for a 128-stage Goddard problem solved on 32 processors. A 32-stage minimum-time problem has been solved in 151 sec on 32 processors. A 32-stage National Aerospace Plane problem required 2 hours when solved on 32 processors. A speed-up factor of 7.2 has been achieved by using 32 nodes instead of 1 node to solve a 64-stage Goddard problem.
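
The custom augmented-Lagrangian solver itself is not described in detail in the abstract; the basic outer/inner structure of an augmented-Lagrangian method can be illustrated on a toy equality-constrained problem (the objective, constraint, step sizes, and iteration counts here are invented for illustration):

```python
def augmented_lagrangian_min():
    """Minimize f(x,y) = x^2 + y^2 subject to h(x,y) = x + y - 1 = 0 with a
    basic augmented-Lagrangian outer loop and a gradient-descent inner loop
    (a toy stand-in for the custom NLP solver described in the abstract)."""
    x, y = 0.0, 0.0        # initial guess
    lam, rho = 0.0, 10.0   # multiplier estimate and penalty weight

    for _ in range(50):            # outer loop: multiplier updates
        for _ in range(2000):      # inner loop: minimize L_A for fixed lam
            h = x + y - 1.0
            gx = 2.0 * x + lam + rho * h   # d/dx of L_A = f + lam*h + (rho/2)*h^2
            gy = 2.0 * y + lam + rho * h   # d/dy of L_A
            x -= 0.01 * gx
            y -= 0.01 * gy
        lam += rho * (x + y - 1.0)  # first-order multiplier update
    return x, y, lam

x, y, lam = augmented_lagrangian_min()
print(round(x, 4), round(y, 4), round(lam, 4))  # 0.5 0.5 -1.0
```

The solution satisfies the stationarity condition 2x + λ = 0 with λ = -1 at x = y = 0.5, which is the constrained minimum.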

18. ZEN2: a narrow J-band search for z ~ 9 Lyα emitting galaxies directed towards three lensing clusters

Willis, J. P.; Courbin, F.; Kneib, J.-P.; Minniti, D.

2008-03-01

We present the results of a continuing survey to detect Lyα emitting galaxies at redshifts z ~ 9: the 'z equals nine' (ZEN) survey. We have obtained deep VLT Infrared Spectrometer and Array Camera observations in the narrow J-band filter NB119 directed towards three massive lensing clusters: Abell clusters 1689, 1835 and 114. The foreground clusters provide a magnified view of the distant Universe and permit a sensitive test for the presence of very high redshift galaxies. We search for z ~ 9 Lyα emitting galaxies displaying a significant narrow-band excess relative to accompanying J-band observations that remain undetected in Hubble Space Telescope (HST)/Advanced Camera for Surveys (ACS) optical images of each field. No sources consistent with this criterion are detected above the unlensed 90 per cent point-source flux limit of the narrow-band image, F_NB = 3.7 × 10⁻¹⁸ erg s⁻¹ cm⁻². To date, the total coverage of the ZEN survey has sampled a volume at z ~ 9 of approximately 1700 comoving Mpc³ to a Lyα emission luminosity of 10⁴³ erg s⁻¹. We conclude by considering the prospects for detecting z ~ 9 Lyα emitting galaxies in light of both observed galaxy properties at z < 7 and simulated populations at z > 7.

19. Quantum Search in Hilbert Space

NASA Technical Reports Server (NTRS)

Zak, Michail

2003-01-01

A proposed quantum-computing algorithm would perform a search for an item of information in a database stored in a Hilbert-space memory structure. The algorithm is intended to make it possible to search relatively quickly through a large database under conditions in which available computing resources would otherwise be considered inadequate to perform such a task. The algorithm would apply, more specifically, to a relational database in which information would be stored in a set of N complex orthonormal vectors, each of N dimensions (where N can be exponentially large). Each vector would constitute one row of a unitary matrix, from which one would derive the Hamiltonian operator (and hence the evolutionary operator) of a quantum system. In other words, all the stored information would be mapped onto a unitary operator acting on a quantum state that would represent the item of information to be retrieved. Then one could exploit quantum parallelism: one could pose all search queries simultaneously by performing a quantum measurement on the system. In so doing, one would effectively solve the search problem in one computational step. One could exploit the direct- and inner-product decomposability of the unitary matrix to make the dimensionality of the memory space exponentially large by use of only linear resources. However, inasmuch as the necessary preprocessing (the mapping of the stored information into a Hilbert space) could be exponentially expensive, the proposed algorithm would likely be most beneficial in applications in which the resources available for preprocessing were much greater than those available for searching.

20. A Parallel Algorithm for the Vehicle Routing Problem

SciTech Connect

Groer, Christopher S; Golden, Bruce; Edward, Wasil

2011-01-01

The vehicle routing problem (VRP) is a difficult and well-studied combinatorial optimization problem. We develop a parallel algorithm for the VRP that combines a heuristic local search improvement procedure with integer programming. We run our parallel algorithm with as many as 129 processors and are able to quickly find high-quality solutions to standard benchmark problems. We assess the impact of parallelism by analyzing our procedure's performance under a number of different scenarios.

1. On the Scalability of Parallel UCT

Segal, Richard B.

The parallelization of MCTS across multiple machines has proven surprisingly difficult. The limitations of existing algorithms were evident in the 2009 Computer Olympiad, where Zen, using a single four-core machine, defeated both Fuego with ten eight-core machines and Mogo with twenty thirty-two-core machines. This paper investigates the limits of parallel MCTS in order to understand why distributed parallelism has proven so difficult and to pave the way towards future distributed algorithms with better scaling. We first analyze the single-threaded scaling of Fuego and find that there is an upper bound on the play-quality improvements which can come from additional search. We then analyze the scaling of an idealized N-core shared-memory machine to determine the maximum amount of parallelism supported by MCTS. We show that parallel speedup depends critically on how much time is given to each player. We use this relationship to predict parallel scaling for time scales beyond what can be empirically evaluated due to the immense computation required. Our results show that MCTS can scale nearly perfectly to at least 64 threads when combined with virtual loss, but without virtual loss scaling is limited to just eight threads. We also find that for competition time controls, scaling to thousands of threads is impossible, not necessarily because MCTS fails to scale, but because high levels of parallelism start to bump up against the upper performance bound of Fuego itself.
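
Virtual loss, which the paper identifies as critical for scaling to 64 threads, temporarily treats a node as having extra lost playouts while a thread is still exploring it, steering other threads toward different children. A minimal sketch of UCB1 selection with virtual loss (the node fields and exploration constant are illustrative):

```python
import math

def uct_select(children, c=1.4):
    """Pick the child maximizing UCB1; virtual losses added by concurrent
    threads temporarily depress a node's value so threads spread out."""
    total = sum(ch["visits"] for ch in children)
    def ucb(ch):
        n = ch["visits"] + ch["virtual_loss"]
        if n == 0:
            return float("inf")
        # virtual losses count as extra visits that scored 0
        return ch["wins"] / n + c * math.sqrt(math.log(total) / n)
    return max(children, key=ucb)

children = [
    {"name": "a", "wins": 30, "visits": 50, "virtual_loss": 0},
    {"name": "b", "wins": 28, "visits": 50, "virtual_loss": 0},
]
first = uct_select(children)["name"]   # 'a': best mean value
children[0]["virtual_loss"] += 8       # another thread is exploring 'a'
second = uct_select(children)["name"]  # 'b': virtual loss diverts the search
print(first, second)
```

When the exploring thread finishes its playout, it removes its virtual loss and records the real result.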

2. Parallel digital forensics infrastructure.

SciTech Connect

Liebrock, Lorie M.; Duggan, David Patrick

2009-10-01

This report documents the architecture and implementation of a parallel digital forensics (PDF) infrastructure. This infrastructure is necessary for supporting the design, implementation, and testing of new classes of parallel digital forensics tools. Digital forensics has become extremely difficult with data sets of one terabyte and larger. The only way to overcome the processing time of these large sets is to identify and develop new parallel algorithms for performing the analysis. To support algorithm research, a flexible base infrastructure is required. A candidate architecture for this base infrastructure was designed, instantiated, and tested by this project, in collaboration with New Mexico Tech. Previous infrastructures were not designed and built specifically for the development and testing of parallel algorithms. With the size of forensics data sets only expected to increase significantly, this type of infrastructure support is necessary for continued research in parallel digital forensics.

3. Parallel MR Imaging

PubMed Central

Deshmane, Anagha; Gulani, Vikas; Griswold, Mark A.; Seiberlich, Nicole

2015-01-01

Parallel imaging is a robust method for accelerating the acquisition of magnetic resonance imaging (MRI) data, and has made possible many new applications of MR imaging. Parallel imaging works by acquiring a reduced amount of k-space data with an array of receiver coils. These undersampled data can be acquired more quickly, but the undersampling leads to aliased images. One of several parallel imaging algorithms can then be used to reconstruct artifact-free images from either the aliased images (SENSE-type reconstruction) or from the under-sampled data (GRAPPA-type reconstruction). The advantages of parallel imaging in a clinical setting include faster image acquisition, which can be used, for instance, to shorten breath-hold times resulting in fewer motion-corrupted examinations. In this article the basic concepts behind parallel imaging are introduced. The relationship between undersampling and aliasing is discussed and two commonly used parallel imaging methods, SENSE and GRAPPA, are explained in detail. Examples of artifacts arising from parallel imaging are shown and ways to detect and mitigate these artifacts are described. Finally, several current applications of parallel imaging are presented and recent advancements and promising research in parallel imaging are briefly reviewed. PMID:22696125
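
SENSE-type reconstruction unfolds aliased pixels by solving a small linear system per pixel location using the coil sensitivity maps. For acceleration factor R = 2 with two coils, each aliased pixel value is a sensitivity-weighted sum of two true pixels, giving a 2x2 system; a toy sketch (the sensitivities and intensities are invented for illustration, and real MRI data are complex-valued):

```python
def sense_unfold(aliased, sens):
    """Unfold one aliased pixel pair (acceleration R=2, two coils) by solving
    the 2x2 linear system  aliased[c] = sens[c][0]*x0 + sens[c][1]*x1
    via Cramer's rule."""
    a0, a1 = aliased
    (s00, s01), (s10, s11) = sens        # rows: coils, cols: true pixel locations
    det = s00 * s11 - s01 * s10          # must be nonzero: coils must differ
    x0 = (a0 * s11 - s01 * a1) / det
    x1 = (s00 * a1 - a0 * s10) / det
    return x0, x1

# toy coil sensitivities and true pixel intensities
sens = [(1.0, 0.3), (0.2, 0.9)]          # coil 0 and coil 1
x_true = (2.0, 5.0)
aliased = tuple(s[0] * x_true[0] + s[1] * x_true[1] for s in sens)
x0, x1 = sense_unfold(aliased, sens)
print(round(x0, 6), round(x1, 6))        # 2.0 5.0
```

The need for a well-conditioned determinant is exactly why distinct coil sensitivity profiles are required, and why poor sensitivity variation produces the noise-amplification artifacts the article describes.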

4. PCLIPS: Parallel CLIPS

NASA Technical Reports Server (NTRS)

Hall, Lawrence O.; Bennett, Bonnie H.; Tello, Ivan

1994-01-01

A parallel version of CLIPS 5.1 has been developed to run on Intel Hypercubes. The user interface is the same as that for CLIPS, with some added commands to allow for parallel calls. A complete version of CLIPS runs on each node of the hypercube. The system has been instrumented to display the time spent in the match, recognize, and act cycles on each node. Only rule-level parallelism is supported. Parallel commands enable the assertion and retraction of facts to and from remote nodes' working memory. Parallel CLIPS was used to implement a knowledge-based command, control, communications, and intelligence (C(sup 3)I) system to demonstrate the fusion of high-level, disparate sources. We discuss the nature of the information fusion problem, our approach, and implementation. Parallel CLIPS has also been used to run several benchmark parallel knowledge bases, such as one to set up a cafeteria. Results from running Parallel CLIPS with parallel knowledge-base partitions indicate that significant speed increases, including superlinear speedup in some cases, are possible.

5. MMS Observations of Parallel Electric Fields

Ergun, R.; Goodrich, K.; Wilder, F. D.; Sturner, A. P.; Holmes, J.; Stawarz, J. E.; Malaspina, D.; Usanova, M.; Torbert, R. B.; Lindqvist, P. A.; Khotyaintsev, Y. V.; Burch, J. L.; Strangeway, R. J.; Russell, C. T.; Pollock, C. J.; Giles, B. L.; Hesse, M.; Goldman, M. V.; Drake, J. F.; Phan, T.; Nakamura, R.

2015-12-01

Parallel electric fields are a necessary condition for magnetic reconnection with non-zero guide field and are ultimately responsible for topological reconfiguration of a magnetic field. Parallel electric fields also play a strong role in charged particle acceleration and turbulence. The Magnetospheric Multiscale (MMS) mission targets these three universal plasma processes. The MMS satellites have an accurate three-dimensional electric field measurement, which can identify parallel electric fields as low as 1 mV/m at four adjacent locations. We present preliminary observations of parallel electric fields from MMS and provide an early interpretation of their impact on magnetic reconnection, in particular, where the topological change occurs. We also examine the role of parallel electric fields in particle acceleration. Direct particle acceleration by parallel electric fields is well established in the auroral region. Observations of double layers by the Van Allen Probes suggest that acceleration by parallel electric fields may be significant in energizing some populations of the radiation belts. THEMIS observations also indicate that some of the largest parallel electric fields are found in regions of strong field-aligned currents associated with turbulence, suggesting a highly non-linear dissipation mechanism. We discuss how the MMS observations extend our understanding of the role of parallel electric fields in some of the most critical processes in the magnetosphere.

6. Search for correlations between the arrival directions of IceCube neutrino events and ultrahigh-energy cosmic rays detected by the Pierre Auger Observatory and the Telescope Array

IceCube Collaboration; Pierre Auger Collaboration; Telescope Array Collaboration

2016-01-01

This paper presents the results of different searches for correlations between very high-energy neutrino candidates detected by IceCube and the highest-energy cosmic rays measured by the Pierre Auger Observatory and the Telescope Array. We first consider samples of cascade neutrino events and of high-energy neutrino-induced muon tracks, which provided evidence for a neutrino flux of astrophysical origin, and study their cross-correlation with the ultrahigh-energy cosmic ray (UHECR) samples as a function of angular separation. We also study their possible directional correlations using a likelihood method stacking the neutrino arrival directions and adopting different assumptions on the size of the UHECR magnetic deflections. Finally, we perform another likelihood analysis stacking the UHECR directions and using a sample of through-going muon tracks optimized for neutrino point-source searches with sub-degree angular resolution. No indications of correlations at discovery level are obtained for any of the searches performed. The smallest of the p-values comes from the search for correlation between UHECRs and IceCube high-energy cascades, a result that should continue to be monitored.
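
The cross-correlation as a function of angular separation reduces to computing great-circle angles between neutrino and UHECR arrival directions and counting pairs below trial separations. A minimal sketch (the coordinates and the 3-degree threshold are illustrative, not the analysis's actual values):

```python
import math

def angular_separation(ra1, dec1, ra2, dec2):
    """Great-circle angle (degrees) between two sky directions given in
    right ascension / declination (degrees), via the spherical law of cosines."""
    r1, d1, r2, d2 = map(math.radians, (ra1, dec1, ra2, dec2))
    cosang = (math.sin(d1) * math.sin(d2)
              + math.cos(d1) * math.cos(d2) * math.cos(r1 - r2))
    # clamp against floating-point overshoot before acos
    return math.degrees(math.acos(max(-1.0, min(1.0, cosang))))

# count neutrino-UHECR pairs closer than a trial deflection angle
neutrinos = [(10.0, -5.0), (120.0, 30.0)]   # (RA, Dec) in degrees
uhecrs = [(12.0, -4.0), (200.0, -60.0)]
pairs = sum(1 for nu in neutrinos for cr in uhecrs
            if angular_separation(*nu, *cr) < 3.0)
print(pairs)  # 1
```

The real analyses compare such pair counts against scrambled-sky realizations to obtain the quoted p-values.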

7. Search for correlations between the arrival directions of IceCube neutrino events and ultrahigh-energy cosmic rays detected by the Pierre Auger Observatory and the Telescope Array

DOE PAGES Beta

Aartsen, M. G.

2016-01-20

This study presents the results of different searches for correlations between very high-energy neutrino candidates detected by IceCube and the highest-energy cosmic rays measured by the Pierre Auger Observatory and the Telescope Array. We first consider samples of cascade neutrino events and of high-energy neutrino-induced muon tracks, which provided evidence for a neutrino flux of astrophysical origin, and study their cross-correlation with the ultrahigh-energy cosmic ray (UHECR) samples as a function of angular separation. We also study their possible directional correlations using a likelihood method stacking the neutrino arrival directions and adopting different assumptions on the size of the UHECR magnetic deflections. Finally, we perform another likelihood analysis stacking the UHECR directions and using a sample of through-going muon tracks optimized for neutrino point-source searches with sub-degree angular resolution. No indications of correlations at discovery level are obtained for any of the searches performed. The smallest of the p-values comes from the search for correlation between UHECRs and IceCube high-energy cascades, a result that should continue to be monitored.

8. Direct Antiglobulin Test

MedlinePlus

9. Search Engine for Antimicrobial Resistance: A Cloud Compatible Pipeline and Web Interface for Rapidly Detecting Antimicrobial Resistance Genes Directly from Sequence Data

PubMed Central

Rowe, Will; Baker, Kate S.; Verner-Jeffreys, David; Baker-Austin, Craig; Ryan, Jim J.; Maskell, Duncan; Pearce, Gareth

2015-01-01

Background: Antimicrobial resistance remains a growing and significant concern in human and veterinary medicine. Current laboratory methods for the detection and surveillance of antimicrobial resistant bacteria are limited in their effectiveness and scope. With the rapidly developing field of whole genome sequencing beginning to be utilised in clinical practice, the ability to interrogate sequencing data quickly and easily for the presence of antimicrobial resistance genes will become increasingly important and useful for informing clinical decisions. Additionally, use of such tools will provide insight into the dynamics of antimicrobial resistance genes in metagenomic samples such as those used in environmental monitoring. Results: Here we present the Search Engine for Antimicrobial Resistance (SEAR), a pipeline and web interface for detection of horizontally acquired antimicrobial resistance genes in raw sequencing data. The pipeline provides gene information, abundance estimation and the reconstructed sequence of antimicrobial resistance genes; it also provides web links to additional information on each gene. The pipeline utilises clustering and read mapping to annotate full-length genes relative to a user-defined database. It also uses local alignment of annotated genes to a range of online databases to provide additional information. We demonstrate SEAR's application in the detection and abundance estimation of antimicrobial resistance genes in two novel environmental metagenomes, 32 human faecal microbiome datasets and 126 clinical isolates of Shigella sonnei. Conclusions: We have developed a pipeline that contributes to the improved capacity for antimicrobial resistance detection afforded by next generation sequencing technologies, allowing for rapid detection of antimicrobial resistance genes directly from sequencing data. SEAR uses raw sequencing data via an intuitive interface so can be run rapidly without requiring advanced bioinformatic skills or

10. Eclipse Parallel Tools Platform

Energy Science and Technology Software Center (ESTSC)

2005-02-18

Designing and developing parallel programs is an inherently complex task. Developers must choose from the many parallel architectures and programming paradigms that are available, and face a plethora of tools that are required to execute, debug, and analyze parallel programs in these environments. Few, if any, of these tools provide any degree of integration, or indeed any commonality in their user interfaces at all. This further complicates the parallel developer's task, hampering software engineering practices, and ultimately reducing productivity. One consequence of this complexity is that best practice in parallel application development has not advanced to the same degree as more traditional programming methodologies. The result is that there is currently no open-source, industry-strength platform that provides a highly integrated environment specifically designed for parallel application development. Eclipse is a universal tool-hosting platform that is designed to provide a robust, full-featured, commercial-quality, industry platform for the development of highly integrated tools. It provides a wide range of core services for tool integration that allow tool producers to concentrate on their tool technology rather than on platform-specific issues. The Eclipse Integrated Development Environment is an open-source project that is supported by over 70 organizations, including IBM, Intel and HP. The Eclipse Parallel Tools Platform (PTP) plug-in extends the Eclipse framework by providing support for a rich set of parallel programming languages and paradigms, and a core infrastructure for the integration of a wide variety of parallel tools. The first version of the PTP is a prototype that provides only minimal functionality for parallel tool integration and support for a small number of parallel architectures.

11. Cube search, revisited.

PubMed

Zhang, Xuetao; Huang, Jie; Yigit-Elliott, Serap; Rosenholtz, Ruth

2015-01-01

Observers can quickly search among shaded cubes for one lit from a unique direction. However, replace the cubes with similar 2-D patterns that do not appear to have a 3-D shape, and search difficulty increases. These results have challenged models of visual search and attention. We demonstrate that cube search displays differ from those with "equivalent" 2-D search items in terms of the informativeness of fairly low-level image statistics. This informativeness predicts peripheral discriminability of target-present from target-absent patches, which in turn predicts visual search performance, across a wide range of conditions. Comparing model performance on a number of classic search tasks, cube search does not appear unexpectedly easy. Easy cube search, per se, does not provide evidence for preattentive computation of 3-D scene properties. However, search asymmetries derived from rotating and/or flipping the cube search displays cannot be explained by the information in our current set of image statistics. This may merely suggest a need to modify the model's set of 2-D image statistics. Alternatively, it may be difficult cube search that provides evidence for preattentive computation of 3-D scene properties. By attributing 2-D luminance variations to a shaded 3-D shape, 3-D scene understanding may slow search for 2-D features of the target. PMID:25780063

12. Cube search, revisited

PubMed Central

Zhang, Xuetao; Huang, Jie; Yigit-Elliott, Serap; Rosenholtz, Ruth

2015-01-01

Observers can quickly search among shaded cubes for one lit from a unique direction. However, replace the cubes with similar 2-D patterns that do not appear to have a 3-D shape, and search difficulty increases. These results have challenged models of visual search and attention. We demonstrate that cube search displays differ from those with “equivalent” 2-D search items in terms of the informativeness of fairly low-level image statistics. This informativeness predicts peripheral discriminability of target-present from target-absent patches, which in turn predicts visual search performance, across a wide range of conditions. Comparing model performance on a number of classic search tasks, cube search does not appear unexpectedly easy. Easy cube search, per se, does not provide evidence for preattentive computation of 3-D scene properties. However, search asymmetries derived from rotating and/or flipping the cube search displays cannot be explained by the information in our current set of image statistics. This may merely suggest a need to modify the model's set of 2-D image statistics. Alternatively, it may be difficult cube search that provides evidence for preattentive computation of 3-D scene properties. By attributing 2-D luminance variations to a shaded 3-D shape, 3-D scene understanding may slow search for 2-D features of the target. PMID:25780063

13. Automatic Management of Parallel and Distributed System Resources

NASA Technical Reports Server (NTRS)

Yan, Jerry; Ngai, Tin Fook; Lundstrom, Stephen F.

1990-01-01

Viewgraphs on automatic management of parallel and distributed system resources are presented. Topics covered include: parallel applications; intelligent management of multiprocessing systems; performance evaluation of parallel architecture; dynamic concurrent programs; compiler-directed system approach; lattice gaseous cellular automata; and sparse matrix Cholesky factorization.

14. Parallel scheduling algorithms

SciTech Connect

Dekel, E.; Sahni, S.

1983-01-01

Parallel algorithms are given for scheduling problems such as scheduling to minimize the number of tardy jobs, job sequencing with deadlines, scheduling to minimize earliness and tardiness penalties, channel assignment, and minimizing the mean finish time. The shared memory model of parallel computers is used to obtain fast algorithms. 26 references.

15. Parallel algorithm for dominant points correspondences in robot binocular stereo vision

NASA Technical Reports Server (NTRS)

Al-Tammami, A.; Singh, B.

1993-01-01

This paper presents an algorithm to find the correspondences of points representing dominant features in robot stereo vision. The algorithm consists of two main steps: dominant point extraction and dominant point matching. In the feature extraction phase, the algorithm utilizes the widely used Moravec Interest Operator and two other operators: the Prewitt Operator and a new operator called the Gradient Angle Variance Operator. The Interest Operator in the Moravec algorithm was used to exclude featureless areas and simple edges oriented along the vertical, horizontal, and two diagonal directions. It incorrectly detected points on edges that do not lie along these four main directions. The new algorithm uses the Prewitt operator to exclude featureless areas, so that the Interest Operator is applied only on the edges to exclude simple edges and to leave interesting points. This modification speeds up the extraction process by approximately 5 times. The Gradient Angle Variance (GAV), an operator which calculates the variance of the gradient angle in a window around the point under concern, is then applied on the interesting points to exclude the redundant ones and leave the actual dominant ones. The matching phase is performed after the extraction of the dominant points in both stereo images. The matching starts with dominant points in the left image and does a local search, looking for corresponding dominant points in the right image. The search is geometrically constrained by the epipolar line of the parallel-axes stereo geometry and the maximum disparity of the application environment. If one dominant point in the right image lies in the search area, then it is the corresponding point of the reference dominant point in the left image. A parameter provided by the GAV is thresholded and used as a rough similarity measure to select the corresponding dominant point if there is more than one point in the search area. The correlation is used as
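
The GAV computation described in this record reduces to a few lines. The sketch below is an illustrative reconstruction under assumed details (window size, central-difference gradients via `np.gradient`); the function and variable names are hypothetical, not from the paper:

```python
import numpy as np

def gradient_angle_variance(img, y, x, half=2):
    """Variance of the gradient angle in a window centered on (y, x).

    Sketch of the GAV operator: a high variance means the gradient
    direction changes inside the window (a dominant point such as a
    corner); a low variance means a straight edge or a flat region.
    """
    # Take one extra ring of pixels so border gradients can be discarded.
    win = img[y - half - 1:y + half + 2, x - half - 1:x + half + 2].astype(float)
    gy, gx = np.gradient(win)                # central-difference gradients
    angles = np.arctan2(gy, gx)[1:-1, 1:-1]  # gradient angles inside the window
    return float(np.var(angles))

# A straight edge has a single gradient direction (variance ~0), while a
# radial pattern has many (large variance), so thresholding GAV keeps
# corner-like dominant points and discards simple edges.
edge = np.tile(np.arange(8, dtype=float), (8, 1))
corner = np.hypot(*np.meshgrid(np.arange(8.0) - 4, np.arange(8.0) - 4))
```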

16. A parallel variable metric optimization algorithm

NASA Technical Reports Server (NTRS)

Straeter, T. A.

1973-01-01

An algorithm designed to exploit the parallel computing or vector streaming (pipeline) capabilities of computers is presented. When p is the degree of parallelism, one cycle of the parallel variable metric algorithm is defined as follows: first, the function and its gradient are computed in parallel at p different values of the independent variable; then the metric is modified by p rank-one corrections; and finally, a single univariate minimization is carried out in the Newton-like direction. Several properties of this algorithm are established. The convergence of the iterates to the solution is proved for a quadratic functional on a real separable Hilbert space. For a finite-dimensional space the convergence is in one cycle when p equals the dimension of the space. Results of numerical experiments indicate that the new algorithm will exploit parallel or pipeline computing capabilities to effect faster convergence than serial techniques.
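
One cycle as summarized above (parallel gradient evaluations, p rank-one metric corrections, one univariate minimization) can be sketched as follows. The SR1-type update, the random sampling of the p points, and the grid line search are illustrative assumptions, not the paper's exact prescriptions:

```python
import numpy as np

def pvm_cycle(f, grad, x, H, p, rng):
    """One cycle of a parallel-variable-metric-style iteration (sketch)."""
    g = grad(x)
    # Step 1: p extra gradient evaluations; in a parallel implementation
    # these would run concurrently on p processors.
    for _ in range(p):
        s = rng.standard_normal(x.size) * 1e-3
        y = grad(x + s) - g
        # Step 2: rank-one (SR1-type) correction of the metric H.
        r = s - H @ y
        denom = r @ y
        if abs(denom) > 1e-12:          # standard safeguard against blow-up
            H = H + np.outer(r, r) / denom
    # Step 3: a single univariate minimization along the Newton-like direction.
    d = -H @ g
    ts = np.linspace(0.0, 2.0, 201)
    t = ts[np.argmin([f(x + t * d) for t in ts])]
    return x + t * d, H

# Quadratic test function: with p equal to the dimension, the metric
# recovers the inverse Hessian and the iteration converges very quickly.
A = np.diag([1.0, 4.0, 9.0])
f = lambda x: 0.5 * x @ A @ x
grad = lambda x: A @ x
rng = np.random.default_rng(0)
x, H = np.array([1.0, 1.0, 1.0]), np.eye(3)
for _ in range(3):
    x, H = pvm_cycle(f, grad, x, H, p=3, rng=rng)
```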

17. Massively parallel mathematical sieves

SciTech Connect

Montry, G.R.

1989-01-01

The Sieve of Eratosthenes is a well-known algorithm for finding all prime numbers in a given subset of integers. A parallel version of the Sieve is described that produces computational speedups over 800 on a hypercube with 1,024 processing elements for problems of fixed size. Computational speedups as high as 980 are achieved when the problem size per processor is fixed. The method of parallelization generalizes to other sieves and will be efficient on any ensemble architecture. We investigate two highly parallel sieves using scattered decomposition and compare their performance on a hypercube multiprocessor. A comparison of different parallelization techniques for the sieve illustrates the trade-offs necessary in the design and implementation of massively parallel algorithms for large ensemble computers.
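
The decomposition idea behind such parallel sieves can be illustrated with a block-decomposed Sieve of Eratosthenes: each processor sieves its own interval using a shared list of base primes, so the segments are independent. This sketch loops over the segments serially; mapping them onto processors (e.g. hypercube nodes) is the assumed parallel step:

```python
import math

def sieve_segment(lo, hi, base_primes):
    """Sieve the interval [lo, hi) using the shared base primes.

    Each segment depends only on base_primes, so segments can be
    processed independently on separate processors.
    """
    flags = [True] * (hi - lo)
    for p in base_primes:
        # First multiple of p inside the segment, but never p itself.
        start = max(p * p, ((lo + p - 1) // p) * p)
        for m in range(start, hi, p):
            flags[m - lo] = False
    return [n for n in range(lo, hi) if n >= 2 and flags[n - lo]]

def block_sieve(n, segments=4):
    """Block-decomposed Sieve of Eratosthenes up to n (inclusive)."""
    limit = math.isqrt(n) + 1
    # Base primes up to sqrt(n), found by trial division.
    base = [p for p in range(2, limit + 1)
            if all(p % q for q in range(2, math.isqrt(p) + 1))]
    size = (n + segments) // segments
    primes = []
    for lo in range(0, n + 1, size):        # each iteration = one "processor"
        primes.extend(sieve_segment(lo, min(lo + size, n + 1), base))
    return primes
```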

18. Parallel computing works

SciTech Connect

Not Available

1991-10-23

An account of the Caltech Concurrent Computation Program (C³P), a five-year project that focused on answering the question: Can parallel computers be used to do large-scale scientific computations? As the title indicates, the question is answered in the affirmative, by implementing numerous scientific applications on real parallel computers and doing computations that produced new scientific results. In the process of doing so, C³P helped design and build several new computers, designed and implemented basic system software, developed algorithms for frequently used mathematical computations on massively parallel machines, devised performance models and measured the performance of many computers, and created a high performance computing facility based exclusively on parallel computers. While the initial focus of C³P was the hypercube architecture developed by C. Seitz, many of the methods developed and lessons learned have been applied successfully on other massively parallel architectures.

19. Multi-fidelity global design optimization including parallelization potential

Cox, Steven Edward

The DIRECT global optimization algorithm is a relatively new space partitioning algorithm designed to determine the globally optimal design within a designated design space. This dissertation examines the applicability of the DIRECT algorithm to two classes of design problems: unimodal functions where small amplitude, high frequency fluctuations in the objective function make optimization difficult; and multimodal functions where multiple local optima are formed by the underlying physics of the problem (as opposed to minor fluctuations in the analysis code). DIRECT is compared with two other multistart local optimization techniques on two polynomial test problems and one engineering conceptual design problem. Three modifications to the DIRECT algorithm are proposed to increase the effectiveness of the algorithm. The DIRECT-BP algorithm is presented, which alters the way DIRECT searches the neighborhood of the current best point as optimization progresses. The algorithm reprioritizes which points to analyze at each iteration. This encourages analysis of points that surround the best point but that are farther away than the points selected by the DIRECT algorithm. This increases the robustness of the DIRECT search and provides more information on the characteristics of the neighborhood of the point selected as the global optimum. A multifidelity version of the DIRECT algorithm is proposed to reduce the cost of optimization using DIRECT. By augmenting expensive high-fidelity analysis with cheap low-fidelity analysis, the optimization can be performed with fewer high-fidelity analyses. Two correction schemes are examined using high- and low-fidelity results at one point to correct the low-fidelity result at a nearby point. This corrected value is then used in place of a high-fidelity analysis by the DIRECT algorithm. In this way the number of high-fidelity analyses required is reduced and the optimization becomes less expensive. Finally the DIRECT algorithm is
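
The correction idea described for the multifidelity DIRECT variant is simple to illustrate. The additive scheme below (with a hypothetical pair of models) uses the high- and low-fidelity values at one point to shift the cheap model; a multiplicative variant would scale by the ratio f_hi(x0)/f_lo(x0) instead. Whether these match the dissertation's two schemes exactly is an assumption:

```python
def additive_correction(f_hi, f_lo, x0):
    """Return a corrected low-fidelity model anchored at x0.

    beta captures the hi/lo discrepancy at x0; nearby points are then
    evaluated with the cheap model plus this correction, standing in
    for expensive high-fidelity analyses.
    """
    beta = f_hi(x0) - f_lo(x0)
    return lambda x: f_lo(x) + beta

# Hypothetical models: the low-fidelity one carries a constant bias.
f_hi = lambda x: (x - 1.0) ** 2
f_lo = lambda x: (x - 1.0) ** 2 + 0.5
f_corr = additive_correction(f_hi, f_lo, x0=0.8)
```

Because the bias here is constant, the corrected model matches the expensive one everywhere; with a nonconstant discrepancy the match degrades away from x0, which is why the correction is re-anchored at nearby analyzed points.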

20. Evidence-based medicine meets goal-directed health care.

PubMed

Mold, James W; Hamm, Robert; Scheid, Dewey

2003-05-01

Evidence-based medicine and goal-directed, patient-centered health care seem, at times, like parallel universes, though, at a conceptual level, they are perfectly compatible. Part of the problem is that many of the kinds of information required for decision making in primary care are often unavailable or difficult to find. Several case examples are used to illustrate this problem, and reasons and solutions are suggested. The goal-directed health care model could be helpful for directing the search for evidence that is relevant to the decisions that patients and their primary care physicians must make on a regular basis. PMID:12772939

1. Parallel processing for scientific computations

NASA Technical Reports Server (NTRS)

Alkhatib, Hasan S.

1991-01-01

The main contribution of the effort in the last two years is the introduction of the MOPPS system. After an extensive literature search, we introduced the system described next. MOPPS employs a new solution to the problem of managing programs which solve scientific and engineering applications on a distributed processing environment. Autonomous computers cooperate efficiently in solving large scientific problems with this solution. MOPPS has the advantage of not assuming the presence of any particular network topology or configuration, computer architecture, or operating system. It imposes little overhead on network and processor resources while efficiently managing programs concurrently. The core of MOPPS is an intelligent program manager that builds a knowledge base of the execution performance of the parallel programs it is managing under various conditions. The manager applies this knowledge to improve the performance of future runs. The program manager learns from experience.

2. Parallel nearest neighbor calculations

Trease, Harold

We are just starting to parallelize the nearest neighbor portion of our free-Lagrange code. Our implementation of the nearest neighbor reconnection algorithm has not been parallelizable (i.e., we just flip one connection at a time). In this paper we consider what sort of nearest neighbor algorithms lend themselves to being parallelized. For example, the construction of the Voronoi mesh can be parallelized, but the construction of the Delaunay mesh (dual to the Voronoi mesh) cannot because of degenerate connections. We will show our most recent attempt to tessellate space with triangles or tetrahedrons with a new nearest neighbor construction algorithm called DAM (Dial-A-Mesh). This method has the characteristics of a parallel algorithm and produces a better tessellation of space than the Delaunay mesh. Parallel processing is becoming an everyday reality for us at Los Alamos. Our current production machines are Cray YMPs with 8 processors that can run independently or combined to work on one job. We are also exploring massive parallelism through the use of two 64K processor Connection Machines (CM2), where all the processors run in lock step mode. The effective application of 3-D computer models requires the use of parallel processing to achieve reasonable "turn around" times for our calculations.

3. Bilingual parallel programming

SciTech Connect

Foster, I.; Overbeek, R.

1990-01-01

Numerous experiments have demonstrated that computationally intensive algorithms support adequate parallelism to exploit the potential of large parallel machines. Yet successful parallel implementations of serious applications are rare. The limiting factor is clearly programming technology. None of the approaches to parallel programming that have been proposed to date -- whether parallelizing compilers, language extensions, or new concurrent languages -- seem to adequately address the central problems of portability, expressiveness, efficiency, and compatibility with existing software. In this paper, we advocate an alternative approach to parallel programming based on what we call bilingual programming. We present evidence that this approach provides an effective solution to parallel programming problems. The key idea in bilingual programming is to construct the upper levels of applications in a high-level language while coding selected low-level components in low-level languages. This approach permits the advantages of a high-level notation (expressiveness, elegance, conciseness) to be obtained without the cost in performance normally associated with high-level approaches. In addition, it provides a natural framework for reusing existing code.

4. Parallel system simulation

SciTech Connect

Tai, H.M.; Saeks, R.

1984-03-01

A relaxation algorithm for solving large-scale system simulation problems in parallel is proposed. The algorithm, which is composed of both a time-step parallel algorithm and a component-wise parallel algorithm, is described. The interconnected nature of the system, which is characterized by the component connection model, is fully exploited by this approach. A technique for finding an optimal number of the time steps is also described. Finally, this algorithm is illustrated via several examples in which the possible trade-offs between the speed-up ratio, efficiency, and waiting time are analyzed.

5. The NAS parallel benchmarks

NASA Technical Reports Server (NTRS)

Bailey, David (Editor); Barton, John (Editor); Lasinski, Thomas (Editor); Simon, Horst (Editor)

1993-01-01

A new set of benchmarks was developed for the performance evaluation of highly parallel supercomputers. These benchmarks consist of a set of kernels, the 'Parallel Kernels,' and a simulated application benchmark. Together they mimic the computation and data movement characteristics of large scale computational fluid dynamics (CFD) applications. The principal distinguishing feature of these benchmarks is their 'pencil and paper' specification - all details of these benchmarks are specified only algorithmically. In this way many of the difficulties associated with conventional benchmarking approaches on highly parallel systems are avoided.

6. Search for direct production of charginos and neutralinos in events with three leptons and missing transverse momentum in 21 fb-1 of pp collisions at √(s) = 8 TeV with the ATLAS detector

Schneider, Basil

2013-11-01

A search for direct chargino and neutralino production processes is presented in a final state with exactly 3 leptons (electrons or muons). No excess over the Standard Model has been observed. The analysis presented is based on 20.7 fb-1 of proton-proton collision data delivered by the LHC at √(s) = 8 TeV and recorded by the ATLAS detector in 2012.

7. Parallel programming with PCN

SciTech Connect

Foster, I.; Tuecke, S.

1991-12-01

PCN is a system for developing and executing parallel programs. It comprises a high-level programming language, tools for developing and debugging programs in this language, and interfaces to Fortran and C that allow the reuse of existing code in multilingual parallel programs. Programs developed using PCN are portable across many different workstations, networks, and parallel computers. This document provides all the information required to develop parallel programs with the PCN programming system. It includes both tutorial and reference material. It also presents the basic concepts that underlie PCN, particularly where these are likely to be unfamiliar to the reader, and provides pointers to other documentation on the PCN language, programming techniques, and tools. PCN is in the public domain. The latest version of both the software and this manual can be obtained by anonymous FTP from Argonne National Laboratory in the directory pub/pcn at info.mcs.anl.gov (cf. Appendix A).

8. Parallels with nature

2014-10-01

Adam Nelson and Stuart Warriner, from the University of Leeds, talk with Nature Chemistry about their work to develop viable synthetic strategies for preparing new chemical structures in parallel with the identification of desirable biological activity.

9. The Parallel Axiom

ERIC Educational Resources Information Center

Rogers, Pat

1972-01-01

Criteria for a reasonable axiomatic system are discussed. A discussion of the historical attempts to prove the independence of Euclid's parallel postulate introduces non-Euclidean geometries. Poincare's model for a non-Euclidean geometry is defined and analyzed. (LS)

Merzouk, S.; Winkler, C.; Paul, J. C.

1996-03-01

This paper proposes a theoretical framework, based on domain subdivision, for parallel radiosity. Moreover, three implementation approaches, taking advantage of partitioning algorithms and a global shared memory architecture, are presented.

11. Database Searching by Managers.

ERIC Educational Resources Information Center

Arnold, Stephen E.

Managers and executives need the easy and quick access to business and management information that online databases can provide, but many have difficulty articulating their search needs to an intermediary. One possible solution would be to encourage managers and their immediate support staff members to search textual databases directly as they now…

12. Scalable parallel communications

NASA Technical Reports Server (NTRS)

Maly, K.; Khanna, S.; Overstreet, C. M.; Mukkamala, R.; Zubair, M.; Sekhar, Y. S.; Foudriat, E. C.

1992-01-01

Coarse-grain parallelism in networking (that is, the use of multiple protocol processors running replicated software sending over several physical channels) can be used to provide gigabit communications for a single application. Since parallel network performance is highly dependent on real issues such as hardware properties (e.g., memory speeds and cache hit rates), operating system overhead (e.g., interrupt handling), and protocol performance (e.g., effect of timeouts), we have performed detailed simulations studies of both a bus-based multiprocessor workstation node (based on the Sun Galaxy MP multiprocessor) and a distributed-memory parallel computer node (based on the Touchstone DELTA) to evaluate the behavior of coarse-grain parallelism. Our results indicate: (1) coarse-grain parallelism can deliver multiple 100 Mbps with currently available hardware platforms and existing networking protocols (such as Transmission Control Protocol/Internet Protocol (TCP/IP) and parallel Fiber Distributed Data Interface (FDDI) rings); (2) scale-up is near linear in n, the number of protocol processors, and channels (for small n and up to a few hundred Mbps); and (3) since these results are based on existing hardware without specialized devices (except perhaps for some simple modifications of the FDDI boards), this is a low cost solution to providing multiple 100 Mbps on current machines. In addition, from both the performance analysis and the properties of these architectures, we conclude: (1) multiple processors providing identical services and the use of space division multiplexing for the physical channels can provide better reliability than monolithic approaches (it also provides graceful degradation and low-cost load balancing); (2) coarse-grain parallelism supports running several transport protocols in parallel to provide different types of service (for example, one TCP handles small messages for many users, other TCP's running in parallel provide high bandwidth

13. Parallel image compression

NASA Technical Reports Server (NTRS)

Reif, John H.

1987-01-01

A parallel compression algorithm for the 16,384 processor MPP machine was developed. The serial version of the algorithm can be viewed as a combination of on-line dynamic lossless text compression techniques (which employ simple learning strategies) and vector quantization. These concepts are described. How these concepts are combined to form a new strategy for performing dynamic on-line lossy compression is discussed. Finally, the implementation of this algorithm in a massively parallel fashion on the MPP is discussed.

14. Continuous parallel coordinates.

PubMed

Heinrich, Julian; Weiskopf, Daniel

2009-01-01

Typical scientific data is represented on a grid with appropriate interpolation or approximation schemes, defined on a continuous domain. The visualization of such data in parallel coordinates may reveal patterns latently contained in the data and thus can improve the understanding of multidimensional relations. In this paper, we adopt the concept of continuous scatterplots for the visualization of spatially continuous input data to derive a density model for parallel coordinates. Based on the point-line duality between scatterplots and parallel coordinates, we propose a mathematical model that maps density from a continuous scatterplot to parallel coordinates and present different algorithms for both numerical and analytical computation of the resulting density field. In addition, we show how the 2-D model can be used to successively construct continuous parallel coordinates with an arbitrary number of dimensions. Since continuous parallel coordinates interpolate data values within grid cells, a scalable and dense visualization is achieved, which will be demonstrated for typical multi-variate scientific data. PMID:19834230
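
The point-line duality the model builds on is easy to state concretely: a 2-D data point maps to a line segment between the two parallel axes, and all points on one scatterplot line map to segments through a single dual point. A minimal sketch (axis positions 0 and 1 are a common convention, assumed here):

```python
def to_parallel_coords(point):
    """Map a 2-D data point to its line segment in parallel coordinates,
    with axis 1 at horizontal position 0 and axis 2 at position 1."""
    x1, x2 = point
    return (0.0, x1), (1.0, x2)

def dual_point(a, b):
    """Dual of the scatterplot line through points a and b.

    All points on the line x2 = m*x1 + c map to parallel-coordinate
    segments that pass through one common point, (1/(1-m), c/(1-m)),
    for m != 1; this is the point-line duality the density model uses.
    """
    (x1a, x2a), (x1b, x2b) = a, b
    m = (x2b - x2a) / (x1b - x1a)   # slope of the scatterplot line
    c = x2a - m * x1a               # intercept
    return 1.0 / (1.0 - m), c / (1.0 - m)
```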

15. Web Search Engines.

ERIC Educational Resources Information Center

Schwartz, Candy

1998-01-01

Looks briefly at the history of World Wide Web search engine development and considers the current state of affairs. Reflects on future directions in terms of personalization, summarization, query expansion, coverage, and metadata. (Author/AEF)

16. A Programmable Preprocessor for Parallelizing Fortran-90

SciTech Connect

Rosing, Matthew; Yabusaki, Steven B.

1999-07-01

A programmable preprocessor that generates portable and efficient parallel Fortran-90 code has been successfully used in the development of a variety of environmental transport simulators for the Department of Energy. The tool provides the basic functionality of a traditional preprocessor where directives are embedded in a serial Fortran program and interpreted by the preprocessor to produce parallel Fortran code with MPI calls. The unique aspect of this work is that the user can make additions to, or modify, these directives. The directives reside in a preprocessor library and changes to this library can range from small changes to customize an existing library, to larger changes for porting a library, to completely replacing the library. The preprocessor is programmed with a library of directives written in a C-like language, called DL, that has added support for manipulating Fortran code fragments. The primary benefits to the user are twofold: It is fairly easy for any user to generate efficient, parallel code from Fortran-90 with embedded directives, and the long term viability of the user's software is guaranteed. This is because the source code will always run on a serial machine (the directives are transparent to standard Fortran compilers), and the preprocessor library can be modified to work with different hardware and software environments. A 4000 line preprocessor library has been written and used to parallelize roughly 50,000 lines of groundwater modeling code. The programs have been ported to a wide range of parallel architectures. Performance of these programs is similar to programs explicitly written for a parallel machine. Binaries of the preprocessor core, as well as the preprocessor library source code used in our groundwater modeling codes are currently available.

17. Search Cloud

MedlinePlus

... this page: https://medlineplus.gov/cloud.html Search Cloud To use the sharing features on this page, ... Top 110 zoster vaccine Share the MedlinePlus search cloud with your users by embedding our search cloud ...

18. Search Cloud

MedlinePlus

... www.nlm.nih.gov/medlineplus/cloud.html Search Cloud To use the sharing features on this page, please enable JavaScript. Share the MedlinePlus search cloud with your users by embedding our search cloud ...

19. Balloon-borne direct search for ionizing massive particles as a component of the galactic halo dark matter (The Arizona-IMAX Collaboration)

McGuire, P. C.; Bowen, T.; Barker, D. L.; Halverson, P. G.; Kendall, K. R.; Metcalfe, T. S.; Norton, R. S.; Pifer, A. E.; Barbier, L. M.; Christian, E. R.; Krombel, K. E.; Mitchell, J. W.; Ormes, J. F.; Streitmatter, R. E.; Davis, A. J.; Labrador, A. W.; Mewaldt, R. A.; Schindler, S. M.; Golden, R. L.; Stochaj, S. J.; Webber, W. R.; Arizona-IMAX Collaboration

1995-07-01

A dark matter (DM) search experiment was flown on the IMAX balloon payload to search for a possible minor component of the dark matter in the Galactic halo: ionizing massive particles (IMPs) (m_x ≳ 10^4 GeV/c^2) that cannot penetrate the atmosphere due to their low velocities and high energy loss. The DM search experiment consisted of a delayed coincidence between four large plastic scintillation detectors arranged in a vertical stack. In order to search for ultra-slow particles which do not slow down in the IMAX telescope, the experiment contained TDCs which measured the time delay T_{i,i+1} ∈ (0.3, 14.0) μs between hits in successive counters to ~2% precision. We present IMP flux limits for non-slowing IMPs and also for IMPs which slow down significantly within the IMAX telescope. This experiment effectively closes much of a previously unconstrained "window" in the mass/cross-section joint parameter spaces for massive particles as the dominant halo DM.

20. Code Parallelization with CAPO: A User Manual

NASA Technical Reports Server (NTRS)

Jin, Hao-Qiang; Frumkin, Michael; Yan, Jerry; Biegel, Bryan (Technical Monitor)

2001-01-01

A software tool has been developed to assist the parallelization of scientific codes. This tool, CAPO, extends an existing parallelization toolkit, CAPTools developed at the University of Greenwich, to generate OpenMP parallel codes for shared memory architectures. This is an interactive toolkit to transform a serial Fortran application code to an equivalent parallel version of the software - in a small fraction of the time normally required for a manual parallelization. We first discuss the way in which loop types are categorized and how efficient OpenMP directives can be defined and inserted into the existing code using the in-depth interprocedural analysis. The use of the toolkit on a number of application codes ranging from benchmark to real-world application codes is presented. This will demonstrate the great potential of using the toolkit to quickly parallelize serial programs as well as the good performance achievable on a large number of processors. The second part of the document gives references to the parameters and the graphic user interface implemented in the toolkit. Finally a set of tutorials is included for hands-on experiences with this toolkit.

1. Parallel time integration software

Energy Science and Technology Software Center (ESTSC)

2014-07-01

This package implements an optimal-scaling multigrid solver for the (non)linear systems that arise from the discretization of problems with evolutionary behavior. Typically, solution algorithms for evolution equations are based on a time-marching approach, solving sequentially for one time step after the other. Parallelism in these traditional time-integration techniques is limited to spatial parallelism. However, current trends in computer architectures are leading towards systems with more, but not faster, processors. Therefore, faster compute speeds must come from greater parallelism. One approach to achieve parallelism in time is with multigrid, but extending classical multigrid methods for elliptic operators to this setting is a significant achievement. In this software, we implement a non-intrusive, optimal-scaling time-parallel method based on multigrid reduction techniques. The examples in the package demonstrate optimality of our multigrid-reduction-in-time algorithm (MGRIT) for solving a variety of parabolic equations in two and three spatial dimensions. These examples can also be used to show that MGRIT can achieve significant speedup in comparison to sequential time marching on modern architectures.

2. Parallel time integration software

SciTech Connect

2014-07-01

This package implements an optimal-scaling multigrid solver for the (non)linear systems that arise from the discretization of problems with evolutionary behavior. Typically, solution algorithms for evolution equations are based on a time-marching approach, solving sequentially for one time step after the other. Parallelism in these traditional time-integration techniques is limited to spatial parallelism. However, current trends in computer architectures are leading towards systems with more, but not faster, processors. Therefore, faster compute speeds must come from greater parallelism. One approach to achieve parallelism in time is with multigrid, but extending classical multigrid methods for elliptic operators to this setting is a significant achievement. In this software, we implement a non-intrusive, optimal-scaling time-parallel method based on multigrid reduction techniques. The examples in the package demonstrate optimality of our multigrid-reduction-in-time algorithm (MGRIT) for solving a variety of parabolic equations in two and three spatial dimensions. These examples can also be used to show that MGRIT can achieve significant speedup in comparison to sequential time marching on modern architectures.

3. Parallel optical sampler

SciTech Connect

Tauke-Pedretti, Anna; Skogen, Erik J; Vawter, Gregory A

2014-05-20

An optical sampler includes first and second 1×n optical beam splitters splitting an input optical sampling signal and an optical analog input signal into n parallel channels, respectively, a plurality of optical delay elements providing n parallel delayed input optical sampling signals, n photodiodes converting the n parallel optical analog input signals into n respective electrical output signals, and n optical modulators modulating the input optical sampling signal or the optical analog input signal by the respective electrical output signals and providing n successive optical samples of the optical analog input signal. A plurality of output photodiodes and eADCs convert the n successive optical samples to n successive digital samples. The optical modulator may be a photodiode-interconnected Mach-Zehnder modulator. A method of sampling the optical analog input signal is disclosed.

4. Distributed game-tree searching

SciTech Connect

Schaeffer, J.

1989-02-01

Conventional parallelizations of the alpha-beta (αβ) algorithm have met with limited success. Implementations suffer primarily from the synchronization and search overheads of parallelization. This paper describes a parallel αβ searching program that achieves high performance through the use of four different types of processes: Controllers, Searchers, Table Managers, and Scouts. Synchronization is reduced by having Controller processes reassign idle processes to help out busy ones. Search overhead is reduced by two types of parallel table management: global Table Managers and the periodic merging and redistribution of local tables. Experiments show that nine processors can achieve 5.67-fold speedups, but beyond that, additional processors provide diminishing returns. Given that additional resources are of little benefit, speculative computing is introduced as a means of extending the effective number of processors that can be utilized. Scout processes speculatively search ahead in the tree looking for interesting features and communicate this information back to the αβ program. In this way, the effective search depth is extended. These ideas have been tested experimentally as part of the chess program ParaPhoenix.
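For reference, the sequential αβ algorithm being parallelized can be stated compactly. The following is a minimal textbook sketch over an explicit game tree, not the ParaPhoenix implementation; the tree encoding (leaves as numbers, interior nodes as lists of children) is an assumption made here purely for illustration.

```python
def alphabeta(node, alpha=float("-inf"), beta=float("inf"), maximizing=True):
    """Return the minimax value of `node`, pruning with the (alpha, beta) window.

    A node is either a number (a leaf score) or a list of child nodes.
    """
    if not isinstance(node, list):          # leaf: its score is its value
        return node
    if maximizing:
        value = float("-inf")
        for child in node:
            value = max(value, alphabeta(child, alpha, beta, False))
            alpha = max(alpha, value)
            if alpha >= beta:               # beta cutoff: opponent avoids this line
                break
        return value
    else:
        value = float("inf")
        for child in node:
            value = min(value, alphabeta(child, alpha, beta, True))
            beta = min(beta, value)
            if beta <= alpha:               # alpha cutoff: we avoid this line
                break
        return value

# 3-ply example (max / min / max); the minimax value of this tree is 5.
tree = [[[3, 5], [6, 9]], [[1, 2], [0, -1]]]
print(alphabeta(tree))
```

The synchronization problem the record describes arises exactly because the pruning window (alpha, beta) is sequential state: parallel workers searching sibling subtrees do not yet know the bounds their siblings would have established.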

5. Performance Evaluation in Network-Based Parallel Computing

NASA Technical Reports Server (NTRS)

Dezhgosha, Kamyar

1996-01-01

Network-based parallel computing is emerging as a cost-effective alternative for solving many problems which require the use of supercomputers or massively parallel computers. The primary objective of this project has been to conduct experimental research on performance evaluation for clustered parallel computing. First, a testbed was established by augmenting our existing network of SUN SPARCs with PVM (Parallel Virtual Machine), a software system for linking clusters of machines. Second, a set of three basic applications was selected: a parallel search, a parallel sort, and a parallel matrix multiplication. These application programs were implemented in the C programming language under PVM. Third, we conducted performance evaluation under various configurations and problem sizes. Alternative parallel computing models and workload allocations for application programs were explored. The performance metric was limited to elapsed time or response time, which in the context of parallel computing can be expressed in terms of speedup. The results reveal that the overhead of communication latency between processes is in many cases the restricting factor to performance. That is, coarse-grain parallelism, which requires less frequent communication between processes, will result in higher performance in network-based computing. Finally, we are in the final stages of installing an Asynchronous Transfer Mode (ATM) switch and four ATM interfaces (each 155 Mbps), which will allow us to extend our study to newer applications, performance metrics, and configurations.
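The speedup metric mentioned above is simply the ratio of elapsed times, with efficiency measuring how much of the ideal linear speedup was achieved. A minimal sketch; the timings below are made up for illustration, not data from the study:

```python
def speedup(t_serial, t_parallel):
    """Classical speedup: serial elapsed time divided by parallel elapsed time."""
    return t_serial / t_parallel

def efficiency(t_serial, t_parallel, n_workers):
    """Fraction of ideal linear speedup actually achieved on n_workers machines."""
    return speedup(t_serial, t_parallel) / n_workers

# Example: a parallel sort that ran in 30 s on 4 workstations vs. 100 s serially.
s = speedup(100.0, 30.0)
e = efficiency(100.0, 30.0, 4)
print(f"speedup = {s:.2f}, efficiency = {e:.2f}")
```

An efficiency well below 1.0, as in this example, is the signature of the communication-latency overhead the record identifies as the restricting factor.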

6. The NAS Parallel Benchmarks

SciTech Connect

Bailey, David H.

2009-11-15

The NAS Parallel Benchmarks (NPB) are a suite of parallel computer performance benchmarks. They were originally developed at the NASA Ames Research Center in 1991 to assess high-end parallel supercomputers. Although they are no longer used as widely as they once were for comparing high-end system performance, they continue to be studied and analyzed a great deal in the high-performance computing community. The acronym 'NAS' originally stood for the Numerical Aerodynamic Simulation Program at NASA Ames. The name of this organization was subsequently changed to the Numerical Aerospace Simulation Program, and more recently to the NASA Advanced Supercomputing Center, although the acronym remains 'NAS.' The developers of the original NPB suite were David H. Bailey, Eric Barszcz, John Barton, David Browning, Russell Carter, Leo Dagum, Rod Fatoohi, Samuel Fineberg, Paul Frederickson, Thomas Lasinski, Rob Schreiber, Horst Simon, V. Venkatakrishnan and Sisira Weeratunga. The original NAS Parallel Benchmarks consisted of eight individual benchmark problems, each of which focused on some aspect of scientific computing. The principal focus was on computational aerophysics, although most of these benchmarks have much broader relevance, since in a much larger sense they are typical of many real-world scientific computing applications. The NPB suite grew out of the need for a more rational procedure to select new supercomputers for acquisition by NASA. The emergence of commercially available highly parallel computer systems in the late 1980s offered an attractive alternative to the parallel vector supercomputers that had been the mainstay of high-end scientific computing. However, the introduction of highly parallel systems was accompanied by a regrettable level of hype, not only on the part of the commercial vendors but even, in some cases, by scientists using the systems. As a result, it was difficult to discern whether the new systems offered any fundamental performance advantage.

NASA Technical Reports Server (NTRS)

Martinez, Tony R.; Vidal, Jacques J.

1988-01-01

Adaptive, self-organizing concurrent systems (ASOCS) that combine self-organization with massive parallelism for such applications as adaptive logic devices, robotics, process control, and system malfunction management, are presently discussed. In ASOCS, an adaptive network composed of many simple computing elements operating in combinational and asynchronous fashion is used and problems are specified by presenting if-then rules to the system in the form of Boolean conjunctions. During data processing, which is a different operational phase from adaptation, the network acts as a parallel hardware circuit.

8. Speeding up parallel processing

NASA Technical Reports Server (NTRS)

Denning, Peter J.

1988-01-01

In 1967 Amdahl expressed doubts about the ultimate utility of multiprocessors. The formulation, now called Amdahl's law, became part of the computing folklore and has inspired much skepticism about the ability of the current generation of massively parallel processors to efficiently deliver all their computing power to programs. The widely publicized recent results of a group at Sandia National Laboratory, which showed speedup on a 1024 node hypercube of over 500 for three fixed size problems and over 1000 for three scalable problems, have convincingly challenged this bit of folklore and have given new impetus to parallel scientific computing.
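The contrast between Amdahl's fixed-size law and the Sandia scaled-speedup results can be seen numerically via Gustafson's reformulation. This is a sketch of the two standard formulas; the serial fractions chosen below are illustrative, not figures from the Sandia experiments:

```python
def amdahl_speedup(serial_fraction, n):
    """Fixed-size speedup on n processors; bounded above by 1/serial_fraction."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n)

def gustafson_speedup(serial_fraction, n):
    """Scaled speedup: the problem grows with n, so speedup stays near-linear."""
    return n - (n - 1) * serial_fraction

n = 1024
for s in (0.01, 0.001):
    print(f"serial fraction {s}: Amdahl {amdahl_speedup(s, n):6.1f}, "
          f"Gustafson {gustafson_speedup(s, n):7.1f}")
```

With a serial fraction of 0.001 on 1024 nodes, Amdahl's law gives a speedup of roughly 500 while Gustafson's scaled formula gives roughly 1023, which is consistent in spirit with the "over 500 for fixed size" and "over 1000 for scalable" figures quoted above.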

9. Programming parallel vision algorithms

SciTech Connect

Shapiro, L.G.

1988-01-01

Computer vision requires the processing of large volumes of data and requires parallel architectures and algorithms to be useful in real-time, industrial applications. The INSIGHT dataflow language was designed to allow encoding of vision algorithms at all levels of the computer vision paradigm. INSIGHT programs, which are relational in nature, can be translated into a graph structure that represents an architecture for solving a particular vision problem or a configuration of a reconfigurable computational network. The authors consider here INSIGHT programs that produce a parallel net architecture for solving low-, mid-, and high-level vision tasks.

10. Highly parallel computation

NASA Technical Reports Server (NTRS)

Denning, Peter J.; Tichy, Walter F.

1990-01-01

Among the highly parallel computing architectures required for advanced scientific computation, those designated 'MIMD' and 'SIMD' have yielded the best results to date. An evaluation of the present development status of such architectures shows that neither has attained a decisive advantage in the treatment of most near-homogeneous problems; for problems involving numerous dissimilar parts, however, such currently speculative architectures as 'neural networks' or 'data flow' machines may be required. Data flow computers are the most practical form of MIMD fine-grained parallel computers yet conceived; they automatically solve the problem of assigning virtual processors to the real processors in the machine.

11. Coarrays for Parallel Processing

NASA Technical Reports Server (NTRS)

Snyder, W. Van

2011-01-01

The design of the Coarray feature of Fortran 2008 was guided by answering the question "What is the smallest change required to convert Fortran to a robust and efficient parallel language." Two fundamental issues that any parallel programming model must address are work distribution and data distribution. In order to coordinate work distribution and data distribution, methods for communication and synchronization must be provided. Although originally designed for Fortran, the Coarray paradigm has stimulated development in other languages. X10, Chapel, UPC, Titanium, and class libraries being developed for C++ have the same conceptual framework.
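A concrete instance of the work- and data-distribution question raised above is deciding which slice of a global array each image owns. The following is a sketch of the usual block distribution, written in Python as an analogy only (a Fortran coarray program would derive the same bounds from the `this_image()` and `num_images()` intrinsics):

```python
def block_range(n_global, num_images, this_image):
    """Return the half-open (start, stop) slice of a global array of n_global
    elements owned by `this_image` (1-based, as in Fortran) out of num_images.
    Remainder elements go one apiece to the lowest-numbered images."""
    base, extra = divmod(n_global, num_images)
    i = this_image - 1
    start = i * base + min(i, extra)
    stop = start + base + (1 if i < extra else 0)
    return start, stop

# 10 elements over 4 images: local sizes 3, 3, 2, 2 covering [0, 10) exactly.
print([block_range(10, 4, img) for img in (1, 2, 3, 4)])
```

Once each image knows its slice, communication and synchronization are needed only at the slice boundaries, which is the coordination problem the coarray model makes explicit.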

12. Search for direct pair production of scalar top quarks in the single- and dilepton channels in proton-proton collisions at √{s}=8 TeV

Khachatryan, V.; Sirunyan, A. M.; Tumasyan, A.; Adam, W.; Asilar, E.; Bergauer, T.; Brandstetter, J.; Brondolin, E.; Dragicevic, M.; Erö, J.; Flechl, M.; Friedl, M.; Frühwirth, R.; Ghete, V. M.; Hartl, C.; Hörmann, N.; Hrubec, J.; Jeitler, M.; Knünz, V.; König, A.; Krammer, M.; Krätschmer, I.; Liko, D.; Matsushita, T.; Mikulec, I.; Rabady, D.; Rad, N.; Rahbaran, B.; Rohringer, H.; Schieck, J.; Schöfbeck, R.; Strauss, J.; Treberer-Treberspurg, W.; Waltenberger, W.; Wulz, C.-E.; Mossolov, V.; Shumeiko, N.; Suarez Gonzalez, J.; Alderweireldt, S.; Cornelis, T.; de Wolf, E. A.; Janssen, X.; Knutsson, A.; Lauwers, J.; Luyckx, S.; van de Klundert, M.; van Haevermaet, H.; van Mechelen, P.; van Remortel, N.; van Spilbeeck, A.; Abu Zeid, S.; Blekman, F.; D'Hondt, J.; Daci, N.; de Bruyn, I.; Deroover, K.; Heracleous, N.; Keaveney, J.; Lowette, S.; Moreels, L.; Olbrechts, A.; Python, Q.; Strom, D.; Tavernier, S.; van Doninck, W.; van Mulders, P.; van Onsem, G. P.; van Parijs, I.; Barria, P.; Brun, H.; Caillol, C.; Clerbaux, B.; de Lentdecker, G.; Fang, W.; Fasanella, G.; Favart, L.; Goldouzian, R.; Grebenyuk, A.; Karapostoli, G.; Lenzi, T.; Léonard, A.; Maerschalk, T.; Marinov, A.; Perniè, L.; Randle-Conde, A.; Seva, T.; Vander Velde, C.; Vanlaer, P.; Yonamine, R.; Zenoni, F.; Zhang, F.; Beernaert, K.; Benucci, L.; Cimmino, A.; Crucy, S.; Dobur, D.; Fagot, A.; Garcia, G.; Gul, M.; McCartin, J.; Ocampo Rios, A. A.; Poyraz, D.; Ryckbosch, D.; Salva, S.; Sigamani, M.; Tytgat, M.; van Driessche, W.; Yazgan, E.; Zaganidis, N.; Basegmez, S.; Beluffi, C.; Bondu, O.; Brochet, S.; Bruno, G.; Caudron, A.; Ceard, L.; Delaere, C.; Favart, D.; Forthomme, L.; Giammanco, A.; Jafari, A.; Jez, P.; Komm, M.; Lemaitre, V.; Mertens, A.; Musich, M.; Nuttens, C.; Perrini, L.; Piotrzkowski, K.; Popov, A.; Quertenmont, L.; Selvaggi, M.; Vidal Marono, M.; Beliy, N.; Hammad, G. H.; Aldá Júnior, W. L.; Alves, F. L.; Alves, G. 
A.; Brito, L.; Correa Martins Junior, M.; Hamer, M.; Hensel, C.; Moraes, A.; Pol, M. E.; Rebello Teles, P.; Belchior Batista Das Chagas, E.; Carvalho, W.; Chinellato, J.; Custódio, A.; da Costa, E. M.; de Jesus Damiao, D.; de Oliveira Martins, C.; Fonseca de Souza, S.; Huertas Guativa, L. M.; Malbouisson, H.; Matos Figueiredo, D.; Mora Herrera, C.; Mundim, L.; Nogima, H.; Prado da Silva, W. L.; Santoro, A.; Sznajder, A.; Tonelli Manganote, E. J.; Vilela Pereira, A.; Ahuja, S.; Bernardes, C. A.; de Souza Santos, A.; Dogra, S.; Fernandez Perez Tomei, T. R.; Gregores, E. M.; Mercadante, P. G.; Moon, C. S.; Novaes, S. F.; Padula, Sandra S.; Romero Abad, D.; Ruiz Vargas, J. C.; Aleksandrov, A.; Hadjiiska, R.; Iaydjiev, P.; Rodozov, M.; Stoykova, S.; Sultanov, G.; Vutova, M.; Dimitrov, A.; Glushkov, I.; Litov, L.; Pavlov, B.; Petkov, P.; Ahmad, M.; Bian, J. G.; Chen, G. M.; Chen, H. S.; Chen, M.; Cheng, T.; Du, R.; Jiang, C. H.; Leggat, D.; Plestina, R.; Romeo, F.; Shaheen, S. M.; Spiezia, A.; Tao, J.; Wang, C.; Wang, Z.; Zhang, H.; Asawatangtrakuldee, C.; Ban, Y.; Li, Q.; Liu, S.; Mao, Y.; Qian, S. J.; Wang, D.; Xu, Z.; Avila, C.; Cabrera, A.; Chaparro Sierra, L. F.; Florez, C.; Gomez, J. P.; Gomez Moreno, B.; Sanabria, J. C.; Godinovic, N.; Lelas, D.; Puljak, I.; Ribeiro Cipriano, P. M.; Antunovic, Z.; Kovac, M.; Brigljevic, V.; Kadija, K.; Luetic, J.; Micanovic, S.; Sudic, L.; Attikis, A.; Mavromanolakis, G.; Mousa, J.; Nicolaou, C.; Ptochos, F.; Razis, P. A.; Rykaczewski, H.; Bodlak, M.; Finger, M.; Finger, M.; Assran, Y.; Elgammal, S.; Ellithi Kamel, A.; Mahmoud, M. A.; Calpas, B.; Kadastik, M.; Murumaa, M.; Raidal, M.; Tiko, A.; Veelken, C.; Eerola, P.; Pekkanen, J.; Voutilainen, M.; Härkönen, J.; Karimäki, V.; Kinnunen, R.; Lampén, T.; Lassila-Perini, K.; Lehti, S.; Lindén, T.; Luukka, P.; Peltola, T.; Tuominiemi, J.; Tuovinen, E.; Wendland, L.; Talvitie, J.; Tuuva, T.; Besancon, M.; Couderc, F.; Dejardin, M.; Denegri, D.; Fabbro, B.; Faure, J. 
L.; Favaro, C.; Ferri, F.; Ganjour, S.; Givernaud, A.; Gras, P.; Hamel de Monchenault, G.; Jarry, P.; Locci, E.; Machet, M.; Malcles, J.; Rander, J.; Rosowsky, A.; Titov, M.; Zghiche, A.; Antropov, I.; Baffioni, S.; Beaudette, F.; Busson, P.; Cadamuro, L.; Chapon, E.; Charlot, C.; Davignon, O.; Filipovic, N.; Granier de Cassagnac, R.; Jo, M.; Lisniak, S.; Mastrolorenzo, L.; Miné, P.; Naranjo, I. N.; Nguyen, M.; Ochando, C.; Ortona, G.; Paganini, P.; Pigard, P.; Regnard, S.; Salerno, R.; Sauvan, J. B.; Sirois, Y.; Strebler, T.; Yilmaz, Y.; Zabi, A.; Agram, J.-L.; Andrea, J.; Aubin, A.; Bloch, D.; Brom, J.-M.; Buttignol, M.; Chabert, E. C.; Chanon, N.; Collard, C.; Conte, E.; Coubez, X.; Fontaine, J.-C.; Gelé, D.; Goerlach, U.; Goetzmann, C.; Le Bihan, A.-C.; Merlin, J. A.; Skovpen, K.; van Hove, P.; Gadrat, S.; Beauceron, S.; Bernet, C.

2016-07-01

Results are reported from a search for the top squark, the lighter of the two supersymmetric partners of the top quark. The data sample corresponds to 19.7 inverse femtobarns of proton-proton collisions at sqrt(s) = 8 TeV collected with the CMS detector at the LHC. The search targets top squark to b chi+/- and top squark to t(*) chi0 decay modes, where chi+/- and chi0 are the lightest chargino and neutralino, respectively. The reconstructed final state consists of jets, b jets, missing transverse energy, and either one or two leptons. Leading backgrounds are determined from data. No significant excess in data is observed above the expectation from standard model processes. The results exclude a region of the two-dimensional plane of possible top squark and chi0 masses. The highest excluded top squark and chi0 masses are about 700 GeV and 250 GeV, respectively.

13. Trust in online prescription drug information among internet users: the impact on information search behavior after exposure to direct-to-consumer advertising.

PubMed

Menon, Ajit M; Deshpande, Aparna D; Perri, Matthew; Zinkhan, George M

2002-01-01

The proliferation of both manufacturer-controlled and independent medication-related websites has aroused concern among consumers and policy-makers concerning the trustworthiness of Web-based drug information. The authors examine consumers' trust in online prescription drug information and its influence on information search behavior. The study design involves a retrospective analysis of data from a 1998 national survey. The findings reveal that trust in drug information from traditional media sources such as television and newspapers transfers to the domain of the Internet. Furthermore, a greater trust in online prescription drug information stimulates utilization of the Internet for information search after exposure to prescription drug advertising. PMID:12749596

14. Search for direct pair production of a chargino and a neutralino decaying to the 125 GeV Higgs boson in √s = 8 TeV pp collisions with the ATLAS detector

2015-05-01

A search is presented for the direct pair production of a chargino and a neutralino, where the chargino decays to the lightest neutralino and the W boson, while the neutralino decays to the lightest neutralino and the 125 GeV Higgs boson. The final states considered for the search have large missing transverse momentum, an isolated electron or muon, and one of the following: either two jets identified as originating from bottom quarks, or two photons, or a second electron or muon with the same electric charge. The analysis is based on 20.3 fb⁻¹ of proton-proton collision data delivered by the Large Hadron Collider and recorded with the ATLAS detector. Observations are consistent with the Standard Model expectations, and limits are set in the context of a simplified supersymmetric model.

15. Search for direct top squark pair production in final states with two tau leptons in pp collisions at √{s}=8 TeV with the ATLAS detector

2016-02-01

A search for direct pair production of the supersymmetric partner of the top quark, decaying via a scalar tau to a nearly massless gravitino, has been performed using 20 fb^{-1} of proton-proton collision data at √{s}=8 TeV. The data were collected by the ATLAS experiment at the LHC in 2012. Top squark candidates are searched for in events with either two hadronically decaying tau leptons, one hadronically decaying tau and one light lepton, or two light leptons. No significant excess over the Standard Model expectation is found. Exclusion limits at 95 % confidence level are set as a function of the top squark and scalar tau masses. Depending on the scalar tau mass, ranging from the 87 GeV LEP limit to the top squark mass, lower limits between 490 and 650 GeV are placed on the top squark mass within the model considered.

16. Search for direct chargino production in anomaly-mediated supersymmetry breaking models based on a disappearing-track signature in pp collisions at √s = 7 TeV with the ATLAS detector

2013-01-01

A search for direct chargino production in anomaly-mediated supersymmetry breaking scenarios is performed in pp collisions at √s = 7 TeV using 4.7 fb⁻¹ of data collected with the ATLAS experiment at the LHC. In these models, the lightest chargino is predicted to have a lifetime long enough to be detected in the tracking detectors of collider experiments. This analysis explores such models by searching for chargino decays that result in tracks with few associated hits in the outer region of the tracking system. The transverse-momentum spectrum of candidate tracks is found to be consistent with the expectation from the Standard Model background processes, and constraints on chargino properties are obtained.

17. Parallel Plate System for Collecting Data Used to Determine Viscosity

NASA Technical Reports Server (NTRS)

Kaukler, William (Inventor); Ethridge, Edwin C. (Inventor)

2013-01-01

A parallel-plate system collects data used to determine viscosity. A first plate is coupled to a translator so that the first plate can be moved along a first direction. A second plate has a pendulum device coupled thereto such that the second plate is suspended above and parallel to the first plate. The pendulum device constrains movement of the second plate to a second direction that is aligned with the first direction and is substantially parallel thereto. A force measuring device is coupled to the second plate for measuring force along the second direction caused by movement of the second plate.
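For context, data from such a parallel-plate (planar Couette) arrangement are typically reduced to a viscosity via Newton's law of viscosity: shear stress is the measured force over the plate area, shear rate is the relative plate speed over the gap, and their ratio is the dynamic viscosity. This sketch assumes a Newtonian fluid and uses illustrative numbers; the patented system's actual data reduction may differ:

```python
def viscosity(force_N, area_m2, gap_m, speed_m_s):
    """Dynamic viscosity mu = (F/A) / (v/h), in Pa*s, for planar Couette flow."""
    shear_stress = force_N / area_m2      # Pa
    shear_rate = speed_m_s / gap_m        # 1/s
    return shear_stress / shear_rate

# 0.5 N of drag on a 0.01 m^2 plate, a 1 mm gap, plate moving at 0.05 m/s:
mu = viscosity(0.5, 0.01, 0.001, 0.05)
print(f"{mu:.2f} Pa*s")
```

The force measured along the second direction plays the role of F here, which is why constraining the second plate's motion to be parallel to the first is essential: any misalignment contaminates the shear-stress measurement.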

18. Search for direct pair production of supersymmetric top quarks decaying to all-hadronic final states in pp collisions at √s = 8 TeV

DOE PAGES Beta

Khachatryan, Vardan

2016-08-16

Here, results are reported from a search for the pair production of top squarks, the supersymmetric partners of top quarks, in final states with jets and missing transverse momentum. The data sample used in this search was collected by the CMS detector and corresponds to an integrated luminosity of 18.9 fb⁻¹ of proton-proton collisions at a centre-of-mass energy of 8 TeV produced by the LHC. The search features novel background suppression and prediction methods, including a dedicated top quark pair reconstruction algorithm. The data are found to be in agreement with the predicted backgrounds. Exclusion limits are set in simplified supersymmetry models with the top squark decaying to jets and an undetected neutralino, either via a top quark or through a bottom quark and chargino. Models with the top squark decaying via a top quark are excluded for top squark masses up to 755 GeV in the case of neutralino masses below 200 GeV. For decays via a chargino, top squark masses up to 620 GeV are excluded, depending on the masses of the chargino and neutralino.

19. Search for direct pair production of supersymmetric top quarks decaying to all-hadronic final states in pp collisions at sqrt(s) = 8 TeV

DOE PAGES Beta

Khachatryan, Vardan; et al.

2016-03-02

Results are reported from a search for the pair production of top squarks, the supersymmetric partners of top quarks, in final states with jets and missing transverse momentum. The data sample used in this search was collected by the CMS detector and corresponds to an integrated luminosity of 18.9 inverse femtobarns of proton-proton collisions at a centre-of-mass energy of 8 TeV produced by the LHC. The search features novel background suppression and prediction methods, including a dedicated top quark pair reconstruction algorithm. The data are found to be in agreement with the predicted backgrounds. Exclusion limits are set in simplified supersymmetry models with the top squark decaying to jets and an undetected neutralino, either via a top quark or through a bottom quark and chargino. Models with the top squark decaying via a top quark are excluded for top squark masses up to 755 GeV in the case of neutralino masses below 200 GeV. For decays via a chargino, top squark masses up to 620 GeV are excluded, depending on the masses of the chargino and neutralino.

20. Voltage and Reactive Power Control by Parallel Calculation Processing

Michihata, Masashi; Aoki, Hidenori; Mizutani, Yoshibumi

This paper presents a new approach to optimal voltage and reactive power control based on a genetic algorithm (GA) and a tabu search (TS). To reduce the time needed to compute the control procedure, parallel computation under Linux is employed. In addition, the TS and GA are executed by the master and each slave using a parallel programming language. The effectiveness of the proposed method is demonstrated on a practical 118-bus system.

SciTech Connect

Perumalla, Kalyan S; Park, Alfred J

2014-01-01

In simulating large parallel systems, bottom-up approaches exercise detailed hardware models with effects from simplified software models or traces, whereas top-down approaches evaluate the timing and functionality of detailed software models over coarse hardware models. Here, we focus on the top-down approach and significantly advance the scale of the simulated parallel programs. Via the direct execution technique combined with parallel discrete event simulation, we stretch the limits of the top-down approach by simulating message passing interface (MPI) programs with millions of tasks. Using a timing-validated benchmark application, a proof-of-concept scaling level is achieved to over 0.22 billion virtual MPI processes on 216,000 cores of a Cray XT5 supercomputer, representing one of the largest direct execution simulations to date, combined with a multiplexing ratio of 1024 simulated tasks per real task.
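The quoted scaling figures can be sanity-checked directly: 216,000 real cores times a multiplexing ratio of 1024 simulated tasks per real task yields the roughly 0.22 billion virtual MPI processes reported.

```python
# Cross-check of the figures quoted in the abstract above.
real_cores = 216_000        # Cray XT5 cores used
multiplexing = 1024         # simulated MPI tasks per real task
virtual_tasks = real_cores * multiplexing
print(virtual_tasks, f"~= {virtual_tasks / 1e9:.2f} billion virtual MPI processes")
```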

2. Parallel Total Energy

Energy Science and Technology Software Center (ESTSC)

2004-10-21

This is a total-energy electronic structure code using the Local Density Approximation (LDA) of density functional theory. It uses plane waves as the wave-function basis set. It can use both norm-conserving pseudopotentials and ultrasoft pseudopotentials. It can relax the atomic positions according to the total energy. It is a parallel code using MPI.

3. NAS Parallel Benchmarks Results

NASA Technical Reports Server (NTRS)

Subhash, Saini; Bailey, David H.; Lasinski, T. A. (Technical Monitor)

1995-01-01

The NAS Parallel Benchmarks (NPB) were developed in 1991 at NASA Ames Research Center to study the performance of parallel supercomputers. The eight benchmark problems are specified in a pencil-and-paper fashion, i.e., the complete details of the problem to be solved are given in a technical document, and except for a few restrictions, benchmarkers are free to select the language constructs and implementation techniques best suited for a particular system. In this paper, we present new NPB performance results for the following systems: (a) Parallel-Vector Processors: Cray C90, Cray T90, and Fujitsu VPP500; (b) Highly Parallel Processors: Cray T3D, IBM SP2, and IBM SP-TN2 (Thin Nodes 2); (c) Symmetric Multiprocessing Processors: Convex Exemplar SPP1000, Cray J90, DEC Alpha Server 8400 5/300, and SGI Power Challenge XL. We also present sustained performance per dollar for the Class B LU, SP, and BT benchmarks, and outline NAS's future plans for the NPB.

4. High performance parallel architectures

SciTech Connect

Anderson, R.E.

1989-09-01

In this paper the author describes current high-performance parallel computer architectures. A taxonomy is presented to show computer architecture from the user-programmer's point of view. The effects of the taxonomy upon the programming model are described. Some current architectures are described with respect to the taxonomy. Finally, some predictions about future systems are presented. 5 refs., 1 fig.

5. Parallel programming with PCN

SciTech Connect

Foster, I.; Tuecke, S.

1993-01-01

PCN is a system for developing and executing parallel programs. It comprises a high-level programming language, tools for developing and debugging programs in this language, and interfaces to Fortran and C that allow the reuse of existing code in multilingual parallel programs. Programs developed using PCN are portable across many different workstations, networks, and parallel computers. This document provides all the information required to develop parallel programs with the PCN programming system. It includes both tutorial and reference material. It also presents the basic concepts that underlie PCN, particularly where these are likely to be unfamiliar to the reader, and provides pointers to other documentation on the PCN language, programming techniques, and tools. PCN is in the public domain. The latest version of both the software and this manual can be obtained by anonymous ftp from Argonne National Laboratory in the directory pub/pcn at info.mcs.anl.gov (cf. Appendix A). This version of this document describes PCN version 2.0, a major revision of the PCN programming system. It supersedes earlier versions of this report.

6. Parallel Multigrid Equation Solver

Energy Science and Technology Software Center (ESTSC)

2001-09-07

Prometheus is a fully parallel multigrid equation solver for matrices that arise in unstructured-grid finite element applications. It includes a geometric and an algebraic multigrid method and has solved problems of up to 76 million degrees of freedom, including problems in linear elasticity, on the ASCI Blue Pacific and ASCI Red machines.