Science.gov

Sample records for partially parallel acquisitions

  1. Functional MRI Using Regularized Parallel Imaging Acquisition

    PubMed Central

    Lin, Fa-Hsuan; Huang, Teng-Yi; Chen, Nan-Kuei; Wang, Fu-Nien; Stufflebeam, Steven M.; Belliveau, John W.; Wald, Lawrence L.; Kwong, Kenneth K.

    2013-01-01

    Parallel MRI techniques reconstruct full-FOV images from undersampled k-space data by using the uncorrelated information from RF array coil elements. One disadvantage of parallel MRI is that the image signal-to-noise ratio (SNR) is degraded because of the reduced number of data samples and the spatially correlated nature of multiple RF receivers. Regularization has been proposed to mitigate the SNR loss arising from the latter cause. Since regularization requires a static image prior, the dynamic contrast-to-noise ratio (CNR) in parallel MRI will be affected. In this paper we investigate the CNR of regularized sensitivity encoding (SENSE) acquisitions. We propose to implement regularized parallel MRI acquisitions in functional MRI (fMRI) experiments by incorporating the prior from a combined segmented echo-planar imaging (EPI) acquisition into SENSE reconstructions. We investigated the impact of regularization on the CNR by performing parametric simulations at various BOLD contrasts, acceleration rates, and sizes of the active brain areas. As quantified by receiver operating characteristic (ROC) analysis, the simulations suggest that the detection power of SENSE fMRI can be improved by regularized reconstructions compared to unregularized reconstructions. Human motor and visual fMRI data acquired at different field strengths and with different array coils also demonstrate that regularized SENSE improves the detection of functionally active brain regions. PMID:16032694
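    The regularized SENSE unfolding at the heart of this record can be sketched in a few lines. The following minimal numpy example is illustrative only (the Tikhonov formulation and all names are assumptions, not the paper's exact implementation): it unfolds one set of R aliased pixels while biasing the solution toward a static prior.

```python
import numpy as np

def regularized_sense_unfold(S, y, x0, lam):
    """Tikhonov-regularized SENSE unfolding for one set of R aliased pixels.

    S   : (n_coils, R) coil sensitivities at the R aliased locations
    y   : (n_coils,)   measured (aliased) coil signals
    x0  : (R,)         static prior values (e.g. from a reference scan)
    lam : regularization weight; lam = 0 gives unregularized SENSE.
    Solves min_x ||S x - y||^2 + lam ||x - x0||^2 in closed form.
    """
    A = S.conj().T @ S + lam * np.eye(S.shape[1])
    b = S.conj().T @ y + lam * x0
    return np.linalg.solve(A, b)

# toy example: 4 coils, acceleration R = 2, small measurement noise
rng = np.random.default_rng(0)
S = rng.standard_normal((4, 2)) + 1j * rng.standard_normal((4, 2))
x_true = np.array([1.0 + 0j, 0.5 + 0j])
y = S @ x_true + 0.01 * rng.standard_normal(4)
x_hat = regularized_sense_unfold(S, y, x0=x_true, lam=0.1)
```

    With lam = 0 this reduces to the pseudoinverse (plain SENSE); increasing lam trades unbiasedness for noise suppression, which is the CNR trade-off the abstract studies.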

  2. Parallel Spectral Acquisition with an Ion Cyclotron Resonance Cell Array.

    PubMed

    Park, Sung-Gun; Anderson, Gordon A; Navare, Arti T; Bruce, James E

    2016-01-19

    Mass measurement accuracy is a critical analytical figure-of-merit in most areas of mass spectrometry application. However, the time required for acquisition of high-resolution, high mass accuracy data limits many applications and is an aspect under continual pressure for development. Current efforts target implementation of higher electrostatic and magnetic fields because ion oscillatory frequencies increase linearly with field strength. As such, the time required for spectral acquisition of a given resolving power and mass accuracy decreases linearly with increasing fields. Mass spectrometer developments to include multiple high-resolution detectors that can be operated in parallel could further decrease the acquisition time by a factor of n, the number of detectors. Efforts described here resulted in development of an instrument with a set of Fourier transform ion cyclotron resonance (ICR) cells as detectors that constitute the first MS array capable of parallel high-resolution spectral acquisition. ICR cell array systems consisting of three or five cells were constructed with printed circuit boards and installed within a single superconducting magnet and vacuum system. Independent ion populations were injected and trapped within each cell in the array. Upon filling the array, all ions in all cells were simultaneously excited and ICR signals from each cell were independently amplified and recorded in parallel. Presented here are the initial results of successful parallel spectral acquisition, parallel mass spectrometry (MS) and MS/MS measurements, and parallel high-resolution acquisition with the MS array system. PMID:26669509

  3. Highly accelerated cardiac cine parallel MRI using low-rank matrix completion and partial separability model

    NASA Astrophysics Data System (ADS)

    Lyu, Jingyuan; Nakarmi, Ukash; Zhang, Chaoyi; Ying, Leslie

    2016-05-01

    This paper presents a new approach to highly accelerated dynamic parallel MRI using low-rank matrix completion and the partial separability (PS) model. In data acquisition, k-space data are moderately randomly undersampled at the central k-space navigator locations, but highly undersampled in the outer k-space for each temporal frame. In reconstruction, the navigator data are reconstructed from undersampled data using structured low-rank matrix completion. After all the unacquired navigator data are estimated, the partial separability model is used to obtain partial k-t data. Then a parallel imaging method is used to reconstruct the entire dynamic image series from the highly undersampled data. The proposed method is shown to achieve high-quality reconstructions with reduction factors up to 31 and a temporal resolution of 29 ms, when the conventional PS method fails.
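    The core PS-model step described above — estimate a temporal subspace from fully sampled navigator lines, then fit per-row spatial coefficients from undersampled samples — can be illustrated on synthetic rank-L data. This numpy sketch omits the structured matrix completion and parallel-imaging stages and assumes noiseless data; all sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
n_k, n_t, L = 64, 40, 3
# ground truth: a rank-L (partially separable) k-t Casorati matrix
C = rng.standard_normal((n_k, L)) @ rng.standard_normal((L, n_t))

# navigators: a few fully sampled central k-space rows at every frame
nav = C[:8, :]
# temporal subspace = top-L right singular vectors of the navigator block
_, _, vt = np.linalg.svd(nav, full_matrices=False)
V = vt[:L]                                   # (L, n_t) temporal basis

# outer k-space: each row sampled at only a few random frames
X = np.zeros((n_k, n_t))
X[:8] = nav
for k in range(8, n_k):
    t_idx = rng.choice(n_t, size=12, replace=False)
    # least-squares fit of the row's L spatial coefficients
    coef, *_ = np.linalg.lstsq(V[:, t_idx].T, C[k, t_idx], rcond=None)
    X[k] = coef @ V                          # synthesize the full row
```

    Because only L coefficients per row are unknown, far fewer than n_t samples per row suffice, which is where the high reduction factors come from.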

  4. New architecture of fast parallel multiplier using fast parallel counter with FPA (first partial product addition)

    NASA Astrophysics Data System (ADS)

    Lee, Mike M.; Cho, Byung Lok

    2001-11-01

    In this paper, we propose a new First Partial product Addition (FPA) architecture with a new compressor (parallel counter) added to the carry-save adder (CSA) tree that accumulates the partial products, improving the speed of partial-product summation in a fast parallel multiplier by about 20% compared with an existing parallel counter built from full adders. The new circuit reduces the number of carry-lookahead adder (CLA) bits needed to form the final sum by N/2 using the novel FPA architecture. A multiplication time of 5.14 ns is obtained for a 16x16 multiplier in 0.25 um CMOS technology. The architecture is easily adapted to pipelined designs and demonstrates high-speed performance.
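    The division of labor between the compressor/CSA tree and the final carry-propagate adder can be illustrated at the bit level. This Python sketch is a generic carry-save reduction (not the proposed FPA circuit): (3,2) counters compress three operands to two without propagating carries, and one final addition plays the role of the CLA.

```python
def csa(a, b, c):
    """Carry-save (3,2) counter applied bitwise to whole words:
    compresses three operands into a sum word and a carry word
    with no carry propagation (a + b + c == s + cy)."""
    s = a ^ b ^ c
    cy = ((a & b) | (a & c) | (b & c)) << 1
    return s, cy

def multiply(x, y, width=16):
    """Multiply via partial products reduced by a CSA tree."""
    # one shifted copy of x per set bit of y
    pps = [x << i for i in range(width) if (y >> i) & 1]
    if not pps:
        return 0
    # reduce 3 -> 2 repeatedly until two operands remain
    while len(pps) > 2:
        s, cy = csa(pps[0], pps[1], pps[2])
        pps = pps[3:] + [s, cy]
    # final carry-propagate addition (the CLA stage in hardware)
    return sum(pps)
```

    Every `csa` call is independent of carry chains, which is why the reduction tree dominates neither delay nor the critical path; only the last addition propagates carries.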

  5. Solution of partial differential equations on vector and parallel computers

    NASA Technical Reports Server (NTRS)

    Ortega, J. M.; Voigt, R. G.

    1985-01-01

    The present status of numerical methods for partial differential equations on vector and parallel computers was reviewed. The relevant aspects of these computers are discussed and a brief review of their development is included, with particular attention paid to those characteristics that influence algorithm selection. Both direct and iterative methods are given for elliptic equations as well as explicit and implicit methods for initial boundary value problems. The intent is to point out attractive methods as well as areas where this class of computer architecture cannot be fully utilized because of either hardware restrictions or the lack of adequate algorithms. Application areas utilizing these computers are briefly discussed.

  6. The Force Singularity for Partially Immersed Parallel Plates

    NASA Astrophysics Data System (ADS)

    Bhatnagar, Rajat; Finn, Robert

    2016-05-01

    In earlier work, we provided a general description of the forces of attraction and repulsion, encountered by two parallel vertical plates of infinite extent and of possibly differing materials, when partially immersed in an infinite liquid bath and subject to surface tension forces. In the present study, we examine some unusual details of the exotic behavior that can occur at the singular configuration separating infinite rise from infinite descent of the fluid between the plates, as the plates approach each other. In connection with this singular behavior, we present also some new estimates on meniscus height details.

  7. A comparison of five standard methods for evaluating image intensity uniformity in partially parallel imaging MRI

    PubMed Central

    Goerner, Frank L.; Duong, Timothy; Stafford, R. Jason; Clarke, Geoffrey D.

    2013-01-01

    Purpose: To investigate the utility of five different standard measurement methods for determining image uniformity for partially parallel imaging (PPI) acquisitions in terms of consistency across a variety of pulse sequences and reconstruction strategies. Methods: Images were produced with a phantom using a 12-channel head matrix coil in a 3T MRI system (TIM TRIO, Siemens Medical Solutions, Erlangen, Germany). Images produced using echo-planar, fast spin echo, gradient echo, and balanced steady state free precession pulse sequences were evaluated. Two different PPI reconstruction methods were investigated, generalized autocalibrating partially parallel acquisition algorithm (GRAPPA) and modified sensitivity-encoding (mSENSE), with acceleration factors (R) of 2, 3, and 4. Additionally, images were acquired with conventional, two-dimensional Fourier imaging methods (R = 1). Five measurement methods of uniformity, recommended by the American College of Radiology (ACR) and the National Electrical Manufacturers Association (NEMA), were considered. The methods investigated were (1) an ACR method and (2) a NEMA method for calculating the peak deviation nonuniformity, (3) a modification of a NEMA method used to produce a gray scale uniformity map, (4) determining the normalized absolute average deviation uniformity, and (5) a NEMA method that focused on 17 areas of the image to measure uniformity. Changes in uniformity as a function of reconstruction method at the same R-value were also investigated. Two-way analysis of variance (ANOVA) was used to determine whether R-value or reconstruction method had a greater influence on signal intensity uniformity measurements for partially parallel MRI. Results: Two of the methods studied had consistently negative slopes when signal intensity uniformity was plotted against R-value. The results obtained comparing mSENSE against GRAPPA found no consistent difference between GRAPPA and mSENSE with regard to signal intensity uniformity.
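    Two of the five metrics have simple closed forms. The sketch below shows the peak-deviation nonuniformity and the normalized absolute average deviation uniformity on a toy phantom ROI; ROI selection is simplified, and the exact ACR/NEMA ROI definitions differ from this illustration.

```python
import numpy as np

def peak_deviation_nonuniformity(roi):
    """Peak deviation: 100 * (Smax - Smin) / (Smax + Smin)."""
    smax, smin = roi.max(), roi.min()
    return 100.0 * (smax - smin) / (smax + smin)

def naad_uniformity(roi):
    """Normalized absolute average deviation uniformity:
    100 * (1 - mean(|S - mean(S)|) / mean(S))."""
    m = roi.mean()
    return 100.0 * (1.0 - np.abs(roi - m).mean() / m)

# toy phantom ROI: uniform signal with a single dark pixel
phantom = np.full((64, 64), 1000.0)
phantom[32, 32] = 900.0
pdn = peak_deviation_nonuniformity(phantom)   # ~5.26: sensitive to extremes
u = naad_uniformity(phantom)                  # close to 100: averages it out
```

    The toy case shows why the metrics can disagree as R increases: peak-based measures react strongly to a single noisy pixel, while the average-deviation measure barely moves.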

  8. Applicability of Parallel Computing to Partial Wave Analysis

    NASA Astrophysics Data System (ADS)

    Ruger, Justin; Gilfoyle, Gerard; Weygand, Dennis; CLAS Collaboration

    2013-10-01

    Bound states of Quantum Chromodynamics (QCD) give insights into the nature of confinement, a key element of the strong interaction. States may be identified from weak signals extracted from the analysis of high statistics data from reactions with many final state particles. One of the best tools for the analysis of these reactions is Partial Wave Analysis (PWA). PWA transforms an ensemble of experimental data from a large acceptance detector from free particle eigenstates to angular momentum eigenstates. The PWA program must be fast enough to deal with the large amounts of data available currently, as processing time scales with the number of events. The scope of this research is to study the applicability and scalability of Intel's Xeon Phi using the Many Integrated Core (MIC) architecture when applied to the existing PWA code at Jefferson Laboratory. An algorithm was developed for the Xeon Phi and scaled across 240 available threads, giving parallel functionality to the PWA which was originally written serially. This scaling can make the fitting process fifteen times faster. Supported by the US Department of Energy.

  9. Dynamic grid refinement for partial differential equations on parallel computers

    NASA Technical Reports Server (NTRS)

    Mccormick, S.; Quinlan, D.

    1989-01-01

    The fast adaptive composite grid method (FAC) is an algorithm that uses various levels of uniform grids to provide adaptive resolution and fast solution of PDEs. An asynchronous version of FAC, called AFAC, that completely eliminates the bottleneck to parallelism is presented. This paper describes the advantage that this algorithm has in adaptive refinement for moving singularities on multiprocessor computers. This work is applicable to the parallel solution of two- and three-dimensional shock tracking problems.

  10. Time Parallel Solution of Linear Partial Differential Equations on the Intel Touchstone Delta Supercomputer

    NASA Technical Reports Server (NTRS)

    Toomarian, N.; Fijany, A.; Barhen, J.

    1993-01-01

    Evolutionary partial differential equations are usually solved by discretization in time and space, and by applying a marching-in-time procedure to data and algorithms potentially parallelized in the spatial domain.

  11. Learning in Parallel: Using Parallel Corpora to Enhance Written Language Acquisition at the Beginning Level

    ERIC Educational Resources Information Center

    Bluemel, Brody

    2014-01-01

    This article illustrates the pedagogical value of incorporating parallel corpora in foreign language education. It explores the development of a Chinese/English parallel corpus designed specifically for pedagogical application. The corpus tool was created to aid language learners in reading comprehension and writing development by making foreign…

  12. DAPHNE: a parallel multiprocessor data acquisition system for nuclear physics. [Data Acquisition by Parallel Histogramming and NEtworking

    SciTech Connect

    Welch, L.C.

    1984-01-01

    This paper describes a project to meet these data acquisition needs for a new accelerator, ATLAS, being built at Argonne National Laboratory. ATLAS is a heavy-ion linear superconducting accelerator providing beam energies up to 25 MeV/A with a relative spread in beam energy as good as 0.0001 and a time spread of less than 100 psec. Details about the hardware front end, command language, data structure, and the flow of event treatment are covered.

  13. Performance of a VME-based parallel processing LIDAR data acquisition system (summary)

    SciTech Connect

    Moore, K.; Buttler, B.; Caffrey, M.; Soriano, C.

    1995-05-01

    It may be possible to make accurate real time, autonomous, 2- and 3-dimensional wind measurements remotely with an elastic backscatter Light Detection and Ranging (LIDAR) system by incorporating digital parallel processing hardware into the data acquisition system. In this paper, we report the performance of a commercially available digital parallel processing system in implementing the maximum correlation technique for wind sensing using actual LIDAR data. Timing and numerical accuracy are benchmarked against a standard microprocessor implementation.

  14. Note on parallel processing techniques for algebraic equations, ordinary differential equations and partial differential equations

    SciTech Connect

    Allidina, A.Y.; Malinowski, K.; Singh, M.G.

    1982-12-01

    The possibilities were explored for enhancing parallelism in the simulation of systems described by algebraic equations, ordinary differential equations and partial differential equations. These techniques, using multiprocessors, were developed to speed up simulations, e.g. for nuclear accidents. Issues involved in their design included suitable approximations to bring the problem into a numerically manageable form and a numerical procedure to perform the computations necessary to solve the problem accurately. Parallel processing techniques used as simulation procedures, and a design of a simulation scheme and simulation procedure employing parallel computer facilities, were both considered.

  15. Partial Overhaul and Initial Parallel Optimization of KINETICS, a Coupled Dynamics and Chemistry Atmosphere Model

    NASA Technical Reports Server (NTRS)

    Nguyen, Howard; Willacy, Karen; Allen, Mark

    2012-01-01

    KINETICS is a coupled dynamics and chemistry atmosphere model that is data intensive and computationally demanding. The potential performance gain from using a supercomputer motivates the adaptation from a serial version to a parallelized one. Although the initial parallelization had been done, bottlenecks caused by an abundance of communication calls between processors led to an unfavorable drop in performance. Before starting on the parallel optimization process, a partial overhaul was required because a large emphasis was placed on streamlining the code for user convenience and revising the program to accommodate the new supercomputers at Caltech and JPL. After the first round of optimizations, the partial runtime was reduced by a factor of 23; however, performance gains are dependent on the size of the data, the number of processors requested, and the computer used.

  16. Parent-Implemented Mand Training: Acquisition of Framed Manding in a Young Boy with Partial Hemispherectomy

    ERIC Educational Resources Information Center

    Ingvarsson, Einar T.

    2011-01-01

    This study examined the effects of parent-implemented mand training on the acquisition of framed manding in a 4-year-old boy who had undergone partial hemispherectomy. Framed manding became the predominant mand form when and only when the intervention was implemented with each preferred toy, but minimal generalization to untrained toys …

  17. Parallel pulse processing and data acquisition for high speed, low error flow cytometry

    SciTech Connect

    van den Engh, Gerrit J.; Stokdijk, Willem

    1992-01-01

    A digitally synchronized parallel pulse processing and data acquisition system for a flow cytometer has multiple parallel input channels with independent pulse digitization and FIFO storage buffer. A trigger circuit controls the pulse digitization on all channels. After an event has been stored in each FIFO, a bus controller moves the oldest entry from each FIFO buffer onto a common data bus. The trigger circuit generates an ID number for each FIFO entry, which is checked by an error detection circuit. The system has high speed and low error rate.
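    The event flow described above — a common trigger, per-channel FIFOs whose entries carry an event ID, and a bus controller that pops the oldest entry from each FIFO and checks the IDs — can be modeled in a few lines. This is a behavioral sketch only, not the actual hardware; all class and method names are invented for illustration.

```python
from collections import deque

class FlowCytometerDAQ:
    """Behavioral model: per-channel FIFOs, a shared trigger,
    and ID-based error detection at the bus controller."""

    def __init__(self, n_channels):
        self.fifos = [deque() for _ in range(n_channels)]
        self.event_id = 0

    def trigger(self, samples):
        # trigger circuit: digitize all channels for one event and
        # tag every FIFO entry with the same event ID
        self.event_id += 1
        for fifo, s in zip(self.fifos, samples):
            fifo.append((self.event_id, s))

    def read_event(self):
        # bus controller: move the oldest entry from each FIFO onto
        # the common bus; mismatched IDs indicate channel skew
        entries = [fifo.popleft() for fifo in self.fifos]
        ids = {eid for eid, _ in entries}
        if len(ids) != 1:
            raise RuntimeError("channel skew: mismatched event IDs")
        return ids.pop(), [s for _, s in entries]

daq = FlowCytometerDAQ(n_channels=3)
daq.trigger([1.2, 0.8, 2.5])
daq.trigger([0.9, 1.1, 1.7])
eid, pulses = daq.read_event()   # oldest event comes off first
```

    Decoupling digitization (push) from readout (pop) through the FIFOs is what lets acquisition run at high event rates without losing synchronization across channels.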

  18. Parallel pulse processing and data acquisition for high speed, low error flow cytometry

    DOEpatents

    Engh, G.J. van den; Stokdijk, W.

    1992-09-22

    A digitally synchronized parallel pulse processing and data acquisition system for a flow cytometer has multiple parallel input channels with independent pulse digitization and FIFO storage buffer. A trigger circuit controls the pulse digitization on all channels. After an event has been stored in each FIFO, a bus controller moves the oldest entry from each FIFO buffer onto a common data bus. The trigger circuit generates an ID number for each FIFO entry, which is checked by an error detection circuit. The system has high speed and low error rate. 17 figs.

  19. Adaptive methods and parallel computation for partial differential equations. Final report

    SciTech Connect

    Biswas, R.; Benantar, M.; Flaherty, J.E.

    1992-05-01

    Consider the adaptive solution of two-dimensional vector systems of hyperbolic and elliptic partial differential equations on shared-memory parallel computers. Hyperbolic systems are approximated by an explicit finite volume technique and solved by a recursive local mesh refinement procedure on a tree-structured grid. Local refinement of the time steps and spatial cells of a coarse base mesh is performed in regions where a refinement indicator exceeds a prescribed tolerance. Computational procedures that sequentially traverse the tree while processing solutions on each grid in parallel, that process solutions at the same tree level in parallel, and that dynamically assign processors to nodes of the tree have been developed and applied to an example. Computational results comparing a variety of heuristic processor load balancing techniques and refinement strategies are presented.

  20. Fast parallel algorithms and enumeration techniques for partial k-trees

    SciTech Connect

    Narayanan, C.

    1989-01-01

    Recent research by several authors has resulted in a systematic way of developing linear-time sequential algorithms for a host of problems on a fairly general class of graphs variously known as bounded decomposable graphs, graphs of bounded treewidth, partial k-trees, etc. Partial k-trees arise in a variety of real-life applications such as network reliability, VLSI design, and database systems, and hence fast sequential algorithms on these graphs have been found to be desirable. The linear-time methodologies were independently developed by Bern, Lawler, and Wong ((10)), Arnborg and Proskurowski ((6)), Bodlaender ((14)), and Courcelle ((25)). Wimer ((89)) significantly extended the work of Bern, Lawler, and Wong. All of these approaches share the common thread of using dynamic programming on a tree structure; in particular, the methodology of Wimer uses a parse tree as the data structure. The methodologies yield linear-time algorithms on partial k-trees, for fixed k, for a number of combinatorial optimization problems, given the tree structure as input. It is known that obtaining the tree structure is NP-hard. This dissertation investigates three important classes of problems: (1) developing parallel algorithms for constructing a k-tree embedding, finding a tree decomposition, and, most notably, obtaining a parse tree for a partial k-tree; (2) developing parallel algorithms for parse-tree computations, testing isomorphism of k-trees, and finding a 2-tree embedding of a cactus; (3) obtaining techniques for counting vertex/edge subsets satisfying a certain property in some classes of partial k-trees. The parallel algorithms the author has developed are in class NC and are either new or improve upon the existing results of Bodlaender ((13)). The difference equations he has obtained for counting certain subgraphs are not known in the literature so far.

  1. Parallel gene loss and acquisition among strains of different Brucella species and biovars.

    PubMed

    Zhong, Zhijun; Wang, Yufei; Xu, Jie; Chen, Yanfen; Ke, Yuehua; Zhou, Xiaoyan; Yuan, Xitong; Zhou, Dongsheng; Yang, Yi; Yang, Ruifu; Peng, Guangneng; Jiang, Hai; Yuan, Jing; Song, Hongbin; Cui, Buyun; Huang, Liuyu; Chen, Zeliang

    2012-08-01

    The genus Brucella is divided into six species; of these, B. melitensis and B. abortus are pathogenic to humans, and B. ovis and B. neotomae are nonpathogenic to humans. The definition of gene loss and acquisition is essential for understanding Brucella's ecology, evolutionary history, and host relationships. A DNA microarray containing unique genes of B. melitensis Type strain 16MT and B. abortus 9-941 was constructed and used to determine the gene contents of the representative strains of Brucella. Phylogenetic relationships were inferred from sequences of housekeeping genes. Gene loss and acquisition of different Brucella species were inferred. A total of 214 genes were found to be differentially distributed, and 173 of them were clustered into 15 genomic islands (GIs). Evidence of horizontal gene transfer was observed for 10 GIs. Phylogenetic analysis indicated that the 19 strains formed five clades, and some of the GIs had been lost or acquired independently among the different lineages. The derivation of Brucella lineages is concomitant with the parallel loss or acquisition of GIs, indicating a complex interaction between various Brucella species and hosts. PMID:22923103

  2. Neural Changes Associated with Nonspeech Auditory Category Learning Parallel Those of Speech Category Acquisition

    PubMed Central

    Liu, Ran; Holt, Lori L.

    2010-01-01

    Native language experience plays a critical role in shaping speech categorization, but the exact mechanisms by which it does so are not well understood. Investigating category learning of nonspeech sounds with which listeners have no prior experience allows their experience to be systematically controlled in a way that is impossible to achieve by studying natural speech acquisition, and it provides a means of probing the boundaries and constraints that general auditory perception and cognition bring to the task of speech category learning. In this study, we used a multimodal, video-game-based implicit learning paradigm to train participants to categorize acoustically complex, nonlinguistic sounds. Mismatch negativity responses to the nonspeech stimuli were collected before and after training to investigate the degree to which neural changes supporting the learning of these nonspeech categories parallel those typically observed for speech category acquisition. Results indicate that changes in mismatch negativity resulting from the nonspeech category learning closely resemble patterns of change typically observed during speech category learning. This suggests that the often-observed “specialized” neural responses to speech sounds may result, at least in part, from the expertise we develop with speech categories through experience rather than from properties unique to speech (e.g., linguistic or vocal tract gestural information). Furthermore, particular characteristics of the training paradigm may inform our understanding of mechanisms that support natural speech acquisition. PMID:19929331

  3. Analysis and Modeling of Parallel Photovoltaic Systems under Partial Shading Conditions

    NASA Astrophysics Data System (ADS)

    Buddala, Santhoshi Snigdha

    Since the industrial revolution, fossil fuels like petroleum, coal, oil, natural gas, and other non-renewable energy sources have been used as the primary energy source. The consumption of fossil fuels releases various harmful gases into the atmosphere as byproducts, which are hazardous in nature, tend to deplete the protective layers of the atmosphere, and affect the overall environmental balance. Fossil fuels are also finite energy resources, and their rapid depletion has prompted the need to investigate alternative sources of energy, called renewable energy. One such promising source of renewable energy is solar/photovoltaic energy. This work focuses on investigating a new solar array architecture with solar cells connected in a parallel configuration. While retaining the structural simplicity of the parallel architecture, a theoretical small-signal model of the solar cell is proposed and used to analyze the variations in the module parameters when subjected to partial shading conditions. Simulations were run in SPICE to validate the model implemented in Matlab. The voltage limitations of the proposed architecture are addressed by adopting a simple dc-dc boost converter and evaluating the performance of the architecture in terms of efficiency by comparing it with traditional architectures. SPICE simulations are used to compare the architectures and identify the best one in terms of power conversion efficiency under partial shading conditions.
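    The defining property of the parallel configuration — all cells share one voltage, so their currents simply add — can be illustrated with an ideal single-diode cell model. This is a sketch under strong assumptions (series and shunt resistances neglected, illustrative parameter values), not the thesis's small-signal model.

```python
import numpy as np

def cell_current(v, i_ph, i0=1e-9, n=1.3, vt=0.02585):
    """Ideal single-diode cell model: I = Iph - I0*(exp(V/(n*Vt)) - 1).
    Series/shunt resistances are neglected for simplicity."""
    return i_ph - i0 * (np.exp(v / (n * vt)) - 1.0)

def parallel_array_current(v, photocurrents):
    """Parallel-connected cells share one voltage; currents add."""
    return sum(cell_current(v, i_ph) for i_ph in photocurrents)

v = np.linspace(0.0, 0.75, 400)
full_sun = parallel_array_current(v, [3.0, 3.0, 3.0])
shaded = parallel_array_current(v, [3.0, 3.0, 0.9])  # one cell at 30% sun
p_full = (v * full_sun).max()      # maximum power, unshaded
p_shaded = (v * shaded).max()      # maximum power, partially shaded
```

    In this idealized model, shading one of three cells reduces the array's maximum power roughly in proportion to the lost photocurrent, rather than collapsing the whole string as can happen with series connections.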

  4. K-t sparse GROWL: sequential combination of partially parallel imaging and compressed sensing in k-t space using flexible virtual coil.

    PubMed

    Huang, Feng; Lin, Wei; Duensing, George R; Reykowski, Arne

    2012-09-01

    Because dynamic MR images are often sparse in x-f domain, k-t space compressed sensing (k-t CS) has been proposed for highly accelerated dynamic MRI. When a multichannel coil is used for acquisition, the combination of partially parallel imaging and k-t CS can improve the accuracy of reconstruction. In this work, an efficient combination method is presented, which is called k-t sparse Generalized GRAPPA fOr Wider readout Line. One fundamental aspect of this work is to apply partially parallel imaging and k-t CS sequentially. A partially parallel imaging technique using a Generalized GRAPPA fOr Wider readout Line operator is adopted before k-t CS reconstruction to decrease the reduction factor in a computationally efficient way while preserving temporal resolution. Channel combination and relative sensitivity maps are used in the flexible virtual coil scheme to alleviate the k-t CS computational load with increasing number of channels. Using k-t FOCUSS as a specific example of k-t CS, the experiments with Cartesian and radial data sets demonstrate that k-t sparse Generalized GRAPPA fOr Wider readout Line can produce results with two times lower root-mean-square error than conventional channel-by-channel k-t CS while consuming up to seven times less computational cost. PMID:22162191

  5. The design, creation, and performance of the parallel multiprocessor nuclear physics data acquisition system, DAPHNE

    SciTech Connect

    Welch, L.C.; Moog, T.H.; Daly, R.T.; Videbaek, F.

    1986-01-01

    The ever increasing complexity of nuclear physics experiments places severe demands on computerized data acquisition systems. A natural evolution of these systems, taking advantage of the independent nature of ''events'', is to use identical parallel microcomputers in a front end to simultaneously analyze separate events. Such a system has been developed at Argonne to serve the needs of the experimental program of ATLAS, a new superconducting heavy-ion accelerator, and other on-going research. Using microcomputers based on the National Semiconductor 32016 microprocessor housed in a Multibus I cage, multi-VAX CPU power is obtained at a fraction of the cost of one VAX. The front end interfaces to a VAX 750 on which an extensive user-friendly command language based on DCL resides. The whole system, known as DAPHNE, also provides the means to replay data using the same command language. Design concepts, data structures, performance, and experience to date are discussed. 5 refs., 2 figs.

  6. The design and performance of the parallel multiprocessor nuclear physics data acquisition system, DAPHNE

    SciTech Connect

    Welch, L.C.; Moog, T.H.; Daly, R.T.; Videbaek, F.

    1987-05-01

    The ever increasing complexity of nuclear physics experiments places severe demands on computerized data acquisition systems. A natural evolution of these systems, taking advantage of the independent nature of ''events,'' is to use identical parallel microcomputers in a front end to simultaneously analyze separate events. Such a system has been developed at Argonne to serve the needs of the experimental program of ATLAS, a new superconducting heavy-ion accelerator, and other on-going research. Using microcomputers based on the National Semiconductor 32016 microprocessor housed in a Multibus I cage, CPU power equivalent to several VAXs is obtained at a fraction of the cost of one VAX. The front end interfaces to a VAX 11/750 on which an extensive user-friendly command language based on DCL resides. The whole system, known as DAPHNE, also provides the means to replay data using the same command language. Design concepts, data structures, performance, and experience to date are discussed.

  7. Parallels between control PDE's (Partial Differential Equations) and systems of ODE's (Ordinary Differential Equations)

    NASA Technical Reports Server (NTRS)

    Hunt, L. R.; Villarreal, Ramiro

    1987-01-01

    System theorists understand that the same mathematical objects which determine controllability for nonlinear control systems of ordinary differential equations (ODEs) also determine hypoellipticity for linear partial differential equations (PDEs). Moreover, almost any study of ODE systems begins with linear systems. It is remarkable that Hormander's paper on hypoellipticity of second-order linear PDEs starts with equations due to Kolmogorov, which are shown to be analogous to the linear PDEs. Eigenvalue placement by state feedback for a controllable linear system can be paralleled for a Kolmogorov equation if an appropriate type of feedback is introduced. Results concerning transformations of nonlinear systems to linear systems are similar to results for transforming a linear PDE to a Kolmogorov equation.

  8. Parallelizing across time when solving time-dependent partial differential equations

    SciTech Connect

    Worley, P.H.

    1991-09-01

    The standard numerical algorithms for solving time-dependent partial differential equations (PDEs) are inherently sequential in the time direction. This paper describes algorithms for the time-accurate solution of certain classes of linear hyperbolic and parabolic PDEs that can be parallelized in both time and space and have serial complexities that are proportional to the serial complexities of the best known algorithms. The algorithms for parabolic PDEs are variants of the waveform relaxation multigrid method (WFMG) of Lubich and Ostermann where the scalar ordinary differential equations (ODEs) that make up the kernel of WFMG are solved using a cyclic reduction type algorithm. The algorithms for hyperbolic PDEs use the cyclic reduction algorithm to solve ODEs along characteristics. 43 refs.
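    The cyclic reduction kernel mentioned above exposes parallelism by eliminating every other unknown at each level: all odd rows are eliminated independently, the half-size system is solved, and the even rows are back-substituted independently. A numpy sketch for tridiagonal systems of size 2^k - 1 (a generic textbook version, not the paper's ODE-along-characteristics solver):

```python
import numpy as np

def cyclic_reduction(a, b, c, d):
    """Solve a tridiagonal system by cyclic reduction.
    a: sub-diagonal, b: diagonal, c: super-diagonal, d: right-hand side.
    Length must be 2**k - 1, and a[0] == c[-1] == 0 (boundary rows).
    Each level's eliminations and back-substitutions are independent,
    so every vectorized step below could run in parallel."""
    n = len(b)
    if n == 1:
        return np.array([d[0] / b[0]])
    # eliminate the odd rows (indices 1, 3, 5, ...) simultaneously
    i = np.arange(1, n, 2)
    alpha = a[i] / b[i - 1]
    gamma = c[i] / b[i + 1]
    a2 = -alpha * a[i - 1]
    b2 = b[i] - alpha * c[i - 1] - gamma * a[i + 1]
    c2 = -gamma * c[i + 1]
    d2 = d[i] - alpha * d[i - 1] - gamma * d[i + 1]
    x = np.empty(n)
    x[1::2] = cyclic_reduction(a2, b2, c2, d2)
    # back-substitute the even rows, again simultaneously
    xl = np.concatenate(([0.0], x[1::2]))   # left neighbours of even rows
    xr = np.concatenate((x[1::2], [0.0]))   # right neighbours of even rows
    j = np.arange(0, n, 2)
    x[0::2] = (d[j] - a[j] * xl - c[j] * xr) / b[j]
    return x

# example: 1-D Poisson system of size 7, built from a known solution
n = 7
a = np.full(n, -1.0); a[0] = 0.0
c = np.full(n, -1.0); c[-1] = 0.0
b = np.full(n, 2.0)
x_true = np.arange(1.0, n + 1)
d = b * x_true
d[1:] += a[1:] * x_true[:-1]
d[:-1] += c[:-1] * x_true[1:]
x = cyclic_reduction(a, b, c, d)
```

    The serial recursion depth is only log2(n+1) levels, which is why cyclic reduction keeps the serial complexity proportional to that of the best sequential tridiagonal solvers while parallelizing each level.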

  9. Parallel Proportion Fair Scheduling in DAS with Partial Channel State Information

    NASA Astrophysics Data System (ADS)

    Jiang, Zhanjun; Wu, Jiang; Wang, Dongming; You, Xiaohu

A parallel multiplexing scheduling (PMS) scheme is proposed for distributed antenna systems (DAS), which greatly improves average system throughput owing to multi-user diversity and multi-user multiplexing. However, PMS has poor fairness because of the use of the “best channel selection” criterion in the scheduler. We therefore present a parallel proportional fair scheduling (PPFS) scheme, which combines PMS with proportional fair scheduling (PFS) to achieve a tradeoff between average throughput and fairness. In PPFS, the “relative signal to noise ratio (SNR)” is employed as the metric to select users instead of the “relative throughput” of the original PFS, and only partial channel state information (CSI) is fed back to the base station (BS). Moreover, multiple users are selected to transmit simultaneously in each slot in PPFS, whereas only one user occupies all channel resources in each slot in PFS. Consequently, PPFS greatly improves the fairness of PMS with a relatively small loss of average throughput compared to PFS.
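The per-slot selection rule described above — relative SNR rather than relative throughput, with one user chosen per parallel channel — can be sketched as follows. The exponential averaging window and the crediting rule are assumptions for illustration, not details from the paper:

```python
import numpy as np

def ppfs_select(inst_snr, avg_snr):
    # inst_snr: (n_users, n_channels) partial CSI fed back to the BS
    # avg_snr:  (n_users,) long-term average SNR per user
    # Per channel, pick the user with the largest *relative* SNR, so that
    # multiple users transmit simultaneously in the same slot.
    rel = inst_snr / avg_snr[:, None]
    return np.argmax(rel, axis=0)

def update_avg(avg_snr, inst_snr, chosen, tc=100.0):
    # Exponentially weighted update of each user's average SNR; served
    # users are credited with the SNR of the channel they won.
    new = (1.0 - 1.0 / tc) * avg_snr
    for ch, u in enumerate(chosen):
        new[u] += inst_snr[u, ch] / tc
    return new
```

Dividing by each user's own average is what restores fairness: a user with a weak but momentarily favorable channel can win a slot over an absolutely stronger user.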

  10. L2 and Deaf Learners' Knowledge of Numerically Quantified English Sentences: Acquisitional Parallels at the Semantics/Discourse-Pragmatics Interface

    ERIC Educational Resources Information Center

    Berent, Gerald P.; Kelly, Ronald R.; Schueler-Choukairi, Tanya

    2012-01-01

    This study assessed knowledge of numerically quantified English sentences in two learner populations--second language (L2) learners and deaf learners--whose acquisition of English occurs under conditions of restricted access to the target language input. Under the experimental test conditions, interlanguage parallels were predicted to arise from…

  11. Cascade connection serial parallel hybrid acquisition synchronization method for DS-FHSS in air-ground data link

    NASA Astrophysics Data System (ADS)

    Wang, Feng; Zhou, Desuo

    2007-11-01

In air-ground tactical data link systems, a primary anti-jamming technology is direct-sequence frequency-hopping spread spectrum (DS-FHSS). However, rapid synchronization of DS-FHSS is a key technical problem that affects the communication capability of the whole system. Motivated by the practical demands of anti-jamming applications, a cascaded serial-parallel hybrid acquisition synchronization method is presented for DS-FHSS systems. The synchronization proceeds in two stages: FH synchronization is performed at the first stage, and a serial-parallel hybrid structure is used for DS PN-code synchronization at the second stage. By calculating the detection probability of FH synchronization acquisition and the acquisition time of DS code-chip synchronization, the contribution of this method to the synchronization performance of the system is analyzed. Finally, computer simulations provide a performance evaluation of the proposed cascaded serial-parallel hybrid acquisition synchronization method.
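The benefit of the hybrid structure at the PN-code stage — examining several code-phase cells in parallel per dwell instead of one — can be shown with a toy Monte Carlo model. The cell count, detection probability, and the omission of false alarms are all simplifying assumptions, not parameters from the paper:

```python
import random

def hybrid_acq_dwells(q=256, m=8, pd=0.9, trials=2000, seed=1):
    # Monte Carlo model: the uncertainty region of q code-phase cells is
    # examined m cells per dwell (m = 1 is pure serial search). A dwell
    # covering the correct cell detects it with probability pd; missed
    # detections force another sweep. False alarms are ignored here.
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        true_cell = rng.randrange(q)
        dwells, found = 0, False
        while not found:
            for start in range(0, q, m):
                dwells += 1
                if start <= true_cell < start + m and rng.random() < pd:
                    found = True
                    break
        total += dwells
    return total / trials
```

With m parallel correlators the mean number of dwells drops roughly by a factor of m, which is the motivation for the serial-parallel compromise.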

  12. Parallel data acquisition of in-source fragmented glycopeptides to sequence the glycosylation sites of proteins.

    PubMed

    Zhao, Jingfu; Song, Ehwang; Zhu, Rui; Mechref, Yehia

    2016-06-01

Glycosylation plays important roles in maintaining protein stability and controlling biological processes. In recent years, the correlation between aberrant glycoproteins and many diseases has been reported. Hence, qualitative and quantitative analyses of glycoproteins are necessary to understand physiological processes. LC-MS/MS analysis of glycopeptides is hampered by low glycopeptide signal intensities and low peptide sequence identification rates. In our study, in-source fragmentation (ISF) was used in conjunction with LC-MS/MS to facilitate the parallel acquisition of peptide backbone sequence and glycan composition information. In the ISF method, the identification of glycosylation sites depends on the detection of the Y1 ion (the ion of the peptide backbone with one N-acetylglucosamine attached). To attain dominant Y1 ions, a range of source fragmentation voltages was studied using fetuin. A 45 V ISF voltage was found to be the most efficient for the analysis of glycoproteins. ISF was employed to study the glycosylation sites of three model glycoproteins: fetuin, α1-acid glycoprotein and porcine thyroglobulin. The approach was then used to analyze blood serum samples. Y1 ions of glycopeptides in tryptic digests of the samples were detected. Y1 ions of glycopeptides with different sialic acid groups are observed at different retention times, representing the various numbers of sialic acid moieties associated with the same peptide backbone sequence. With ISF facilitating the peptide backbone sequencing of glycopeptides, identified peptide sequence coverage was increased. For example, the identified fetuin sequence percentage improved from 39 to 80% in MASCOT database searching compared to the conventional CID method. The formation of Y1 ions and oxonium ions in ISF facilitates glycopeptide sequencing and glycan composition identification. PMID:26957414
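The Y1 ion the method relies on has a predictable mass — the peptide backbone plus one N-acetylglucosamine (HexNAc) residue — so candidate Y1 m/z values can be computed directly. This is a generic sketch using standard monoisotopic masses, not the authors' software; the example sequence 'PEPTIDE' is hypothetical:

```python
# Standard monoisotopic residue masses (Da)
AA = {'G': 57.02146, 'A': 71.03711, 'S': 87.03203, 'P': 97.05276,
      'V': 99.06841, 'T': 101.04768, 'C': 103.00919, 'L': 113.08406,
      'I': 113.08406, 'N': 114.04293, 'D': 115.02694, 'Q': 128.05858,
      'K': 128.09496, 'E': 129.04259, 'M': 131.04049, 'H': 137.05891,
      'F': 147.06841, 'R': 156.10111, 'Y': 163.06333, 'W': 186.07931}
WATER, PROTON, HEXNAC = 18.010565, 1.007276, 203.079373

def y1_mz(peptide, charge=1):
    # Predicted m/z of the Y1 ion: peptide backbone + one GlcNAc residue
    m = sum(AA[a] for a in peptide) + WATER + HEXNAC
    return (m + charge * PROTON) / charge
```

Scanning an MS/MS spectrum for these values is one way to flag the glycosylation-site-bearing peptide; pass `charge=2` for the doubly protonated Y1 ion.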

  13. Fast Time and Space Parallel Algorithms for Solution of Parabolic Partial Differential Equations

    NASA Technical Reports Server (NTRS)

    Fijany, Amir

    1993-01-01

In this paper, fast time- and space-parallel algorithms for the solution of linear parabolic PDEs are developed. It is shown that the seemingly strictly serial iterations of the time-stepping procedure for the solution of the problem can be completely decoupled.

  14. Parallel Bimodal Bilingual Acquisition: A Hearing Child Mediated in a Deaf Family

    ERIC Educational Resources Information Center

    Cramér-Wolrath, Emelie

    2013-01-01

    The aim of this longitudinal case study was to describe bimodal and bilingual acquisition in a hearing child, Hugo, especially the role his Deaf family played in his linguistic education. Video observations of the family interactions were conducted from the time Hugo was 10 months of age until he was 40 months old. The family language was Swedish…

  15. A Partial Order Reduction Technique for Parallel Timed Automaton Model Checking

    NASA Astrophysics Data System (ADS)

    Jianhua, Zhao; Linzhang, Wang; Xuandong, Li

We propose a partial order reduction technique for timed automaton model checking in this paper. We first show that the symbolic successors w.r.t. partial order paths can be computed using DBMs. An algorithm is presented to compute such successors incrementally. This algorithm can avoid splitting the symbolic states because of the enumeration order of independent transitions. A reachability analysis algorithm based on this successor computation algorithm is presented. Our technique can be combined with some static analysis techniques in the literature. Furthermore, we present a rule to avoid exploring all enabled transitions, so the space requirements of model checking are further reduced.

  16. A Parallel Distributed-Memory Particle Method Enables Acquisition-Rate Segmentation of Large Fluorescence Microscopy Images

    PubMed Central

    Afshar, Yaser; Sbalzarini, Ivo F.

    2016-01-01

Modern fluorescence microscopy modalities, such as light-sheet microscopy, are capable of acquiring large three-dimensional images at high data rate. This creates a bottleneck in computational processing and analysis of the acquired images, as the rate of acquisition outpaces the speed of processing. Moreover, images can be so large that they do not fit the main memory of a single computer. We address both issues by developing a distributed parallel algorithm for segmentation of large fluorescence microscopy images. The method is based on the versatile Discrete Region Competition algorithm, which has previously proven useful in microscopy image segmentation. The present distributed implementation decomposes the input image into smaller sub-images that are distributed across multiple computers. Using network communication, the computers collectively solve the global segmentation problem. This not only enables segmentation of large images (we test images of up to 10^10 pixels), but also accelerates segmentation to match the time scale of image acquisition. Such acquisition-rate image segmentation is a prerequisite for the smart microscopes of the future and enables online data compression and interactive experiments. PMID:27046144
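The decomposition step — splitting the input image into sub-images, each padded with a ghost (halo) layer so that local processing can proceed and only boundary information needs network communication — can be sketched as follows. The tile counts and halo width are illustrative assumptions:

```python
import numpy as np

def split_with_halo(img, ty, tx, halo=1):
    # Decompose a 2-D image into ty x tx tiles, each extended by a halo of
    # ghost pixels (clipped at the image border). Each tile could be sent to
    # a different worker; halos are what the workers would exchange over the
    # network to reconcile segments that cross tile boundaries.
    H, W = img.shape
    rows = np.array_split(np.arange(H), ty)
    cols = np.array_split(np.arange(W), tx)
    tiles = []
    for r in rows:
        for c in cols:
            y0, y1 = max(r[0] - halo, 0), min(r[-1] + 1 + halo, H)
            x0, x1 = max(c[0] - halo, 0), min(c[-1] + 1 + halo, W)
            tiles.append(((r[0], c[0]), img[y0:y1, x0:x1]))
    return tiles
```

Each entry carries the tile's interior origin so results can be stitched back into global coordinates after the distributed computation.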

  18. Avoidance prone individuals self reporting behavioral inhibition exhibit facilitated acquisition and altered extinction of conditioned eyeblinks with partial reinforcement schedules

    PubMed Central

    Allen, Michael Todd; Myers, Catherine E.; Servatius, Richard J.

    2014-01-01

    Avoidance in the face of novel situations or uncertainty is a prime feature of behavioral inhibition which has been put forth as a risk factor for the development of anxiety disorders. Recent work has found that behaviorally inhibited (BI) individuals acquire conditioned eyeblinks faster than non-inhibited (NI) individuals in omission and yoked paradigms in which the predictive relationship between the conditioned stimulus (CS) and unconditional stimulus (US) is less than optimal as compared to standard training with CS-US paired trials (Holloway et al., 2014). In the current study, we tested explicitly partial schedules in which half the trials were CS alone or US alone trials in addition to the standard CS-US paired trials. One hundred and forty nine college-aged undergraduates participated in the study. All participants completed the Adult Measure of Behavioral Inhibition (i.e., AMBI) which was used to group participants as BI and NI. Eyeblink conditioning consisted of three US alone trials, 60 acquisition trials, and 20 CS-alone extinction trials presented in one session. Conditioning stimuli were a 500 ms tone CS and a 50-ms air puff US. Behaviorally inhibited individuals receiving 50% partial reinforcement with CS alone or US alone trials produced facilitated acquisition as compared to NI individuals. A partial reinforcement extinction effect (PREE) was evident with CS alone trials in BI but not NI individuals. These current findings indicate that avoidance prone individuals self-reporting behavioral inhibition over-learn an association and are slow to extinguish conditioned responses (CRs) when there is some level of uncertainty between paired trials and CS or US alone presentations. PMID:25339877

  19. Effective Five Directional Partial Derivatives-Based Image Smoothing and a Parallel Structure Design.

    PubMed

    Choongsang Cho; Sangkeun Lee

    2016-04-01

Image smoothing has been used for image segmentation, image reconstruction, object classification, and 3D content generation. Several smoothing approaches have been used at the pre-processing step to retain the critical edge, while removing noise and small details. However, they have limited performance, especially in removing small details and smoothing discrete regions. Therefore, to provide fast and accurate smoothing, we propose an effective scheme that uses a weighted combination of the gradient, Laplacian, and diagonal derivatives of a smoothed image. In addition, to reduce computational complexity, we designed and implemented a parallel processing structure for the proposed scheme on a graphics processing unit (GPU). For an objective evaluation of the smoothing performance, the images were linearly quantized into several layers to generate experimental images, and the quantized images were smoothed using several methods for reconstructing the smoothly changed shape and intensity of the original image. Experimental results showed that the proposed scheme has higher objective scores and better smoothing performance than similar schemes, while preserving critical details and removing trivial ones. In terms of computational complexity, the proposed smoothing scheme running on a GPU showed 18 and 16 times lower complexity than the same scheme running on a CPU and the L0-based smoothing scheme, respectively. In addition, a simple noise reduction test was conducted to show the characteristics of the proposed approach; it showed that the presented algorithm outperforms state-of-the-art algorithms by more than 5.4 dB. Therefore, we believe that the proposed scheme can be a useful tool for efficient image smoothing. PMID:26886985
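The core idea — driving smoothing with a weighted combination of derivatives along the axes and the diagonals — can be sketched as an iterative diffusion over second differences in several directions. The weights, step size, and periodic boundaries below are illustrative assumptions, not the paper's exact five-directional filter:

```python
import numpy as np

def second_diff(u, dy, dx):
    # Second difference of u along direction (dy, dx), periodic boundaries
    fwd = np.roll(np.roll(u, -dy, 0), -dx, 1)
    bwd = np.roll(np.roll(u,  dy, 0),  dx, 1)
    return fwd + bwd - 2.0 * u

def smooth(img, iters=100, lam=0.1, w=(1.0, 1.0, 0.5, 0.5)):
    # Diffusion driven by a weighted sum of second differences along the
    # two axes and both diagonals (a generalized Laplacian). lam and w are
    # chosen small enough for the explicit update to be stable.
    u = img.astype(float).copy()
    dirs = [(0, 1), (1, 0), (1, 1), (1, -1)]
    for _ in range(iters):
        u += lam * sum(wi * second_diff(u, dy, dx)
                       for wi, (dy, dx) in zip(w, dirs))
    return u
```

Because every directional term is local and independent per pixel, the update maps directly onto a GPU thread grid, which is what makes the parallel structure in the paper attractive.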

  20. A new parallel solver suited for arbitrary semilinear parabolic partial differential equations based on generalized random trees

    NASA Astrophysics Data System (ADS)

    Acebrón, Juan A.; Rodríguez-Rozas, Ángel

    2011-09-01

A probabilistic representation for initial value semilinear parabolic problems based on generalized random trees has been derived. Two different strategies have been proposed, both requiring the generation of suitable random trees combined with a Padé approximant for accurately approximating a given divergent series. Such series are obtained by summing the partial contributions to the solution coming from trees with an arbitrary number of branches. The new representation greatly expands the class of problems amenable to being solved probabilistically, and was used successfully to develop a generalized probabilistic domain decomposition method. Such a method has been shown to be well suited for massively parallel computers, enjoying full scalability and fault tolerance. Finally, a few numerical examples are given to illustrate the remarkable performance of the algorithm, comparing the results with those obtained with a classical method.
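In the linear special case, the probabilistic representation reduces to the classical Feynman-Kac formula, which already shows why such solvers are embarrassingly parallel: each sample path is independent. A minimal sketch for the 1-D heat equation u_t = u_xx (the generalized random trees handle the semilinear terms, which are omitted here):

```python
import numpy as np

def heat_mc(u0, x, t, n=200_000, seed=0):
    # Feynman-Kac for u_t = u_xx: u(t, x) = E[u0(x + sqrt(2 t) Z)],
    # Z ~ N(0, 1). Every sample is independent, so the n draws could be
    # distributed across processors with no communication.
    rng = np.random.default_rng(seed)
    z = rng.standard_normal(n)
    return float(np.mean(u0(x + np.sqrt(2.0 * t) * z)))
```

For u0 = sin, the exact solution is exp(-t) sin(x), which gives a quick correctness check for the estimator.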

  1. Microparticle drug sequestration provides a parallel pathway in the acquisition of cancer drug resistance.

    PubMed

    Gong, Joyce; Luk, Frederick; Jaiswal, Ritu; George, Anthony M; Grau, Georges Emile Raymond; Bebawy, Mary

    2013-12-01

    Expanding on our previous findings demonstrating that microparticles (MPs) spread cancer multidrug resistance, we now show that MPs sequester drugs, reducing the free drug concentration available to cells. MPs were isolated from drug-sensitive and drug-resistant sub-clones of a human breast adenocarcinoma cell line and from human acute lymphoblastic leukemia cells. MPs were assessed for size, mitochondria, RNA and phospholipid content, P-glycoprotein (P-gp) expression and orientation and ATPase activity relative to drug sequestration capacity. Of the drug classes examined, MPs sequestered the anthracycline class to a significant degree. The degree of sequestration was likely due to the size of MPs and thus the amount of cargo they contain, to which the anthracyclines bind. Moreover, a proportion of the P-gp present on MPs was inside-out in orientation, enabling it to influx drugs rather than its typical efflux function. This was confirmed by surface immunofluorescence and by assessment of drug-stimulated ATPase activity following MP permeabilization. Thus we determined that breast cancer MPs carried a proportion of their P-gp oriented inside-out, providing active sequestration within the microvesicular compartment. These results demonstrate a capacity for MPs to sequester chemotherapeutic drugs, which has a predominantly active sequestration component for MPs derived from drug-resistant cells and a predominantly passive component for MPs derived from drug-sensitive cells. This reduction in available drug concentration has potential to contribute to a parallel pathway and complements that of the intercellular transfer of P-gp. These findings lend further support to the role of MPs in limiting the successful management of cancer. PMID:24095666

  2. An adaptive undersampling scheme of wavelet-encoded parallel MR imaging for more efficient MR data acquisition

    NASA Astrophysics Data System (ADS)

    Xie, Hua; Bosshard, John C.; Hill, Jason E.; Wright, Steven M.; Mitra, Sunanda

    2016-03-01

Magnetic Resonance Imaging (MRI) offers noninvasive, high-resolution, high-contrast cross-sectional anatomic images of the body. Conventional MRI data are collected in the spatial frequency (Fourier) domain, also known as k-space. Because there is still a great need to improve the temporal resolution of MRI, Compressed Sensing (CS) has been proposed to exploit the sparsity of MR images, showing great potential to reduce scan time significantly; however, it poses its own unique problems. This paper revisits wavelet-encoded MR imaging, which replaces phase encoding in conventional MRI data acquisition with wavelet encoding by applying wavelet-shaped spatially selective radiofrequency (RF) excitation, while keeping the readout direction as frequency encoding. The practicality of wavelet-encoded MRI by itself is limited due to SNR penalties and poor time resolution compared to conventional Fourier-based MRI. To compensate for these disadvantages, this paper first introduces an undersampling scheme, named the significance map, for sparse wavelet-encoded k-space to speed up data acquisition and allow for various adaptive imaging strategies. The proposed adaptive wavelet-encoded undersampling scheme does not require prior knowledge of the subject to be scanned. Multiband (MB) parallel imaging is also incorporated with wavelet-encoded MRI by exciting multiple regions simultaneously for further reduction in scan time, desirable for medical applications. Simulation and experimental results are presented showing the feasibility of the proposed approach in further reducing the redundancy of the wavelet k-space data while maintaining relatively high image quality.
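The significance-map idea — ranking wavelet-domain coefficients by magnitude and acquiring only the significant ones — can be sketched with a plain 1-D Haar transform. The transform choice and keep-fraction are assumptions for illustration, not the paper's encoding:

```python
import numpy as np

def haar_1d(x):
    # Full 1-D Haar decomposition (length must be a power of two):
    # returns [approximation, coarse details, ..., finest details]
    x = x.astype(float)
    n = len(x)
    details = []
    while n > 1:
        a = (x[:n:2] + x[1:n:2]) / np.sqrt(2.0)
        d = (x[:n:2] - x[1:n:2]) / np.sqrt(2.0)
        details.append(d)
        x[:n // 2] = a
        n //= 2
    return np.concatenate([x[:1]] + details[::-1])

def significance_map(coeffs, keep=0.25):
    # Keep the largest-magnitude fraction of wavelet encodes; the rest
    # would simply be skipped during acquisition (adaptive undersampling).
    k = max(1, int(keep * len(coeffs)))
    thresh = np.sort(np.abs(coeffs))[-k]
    return np.abs(coeffs) >= thresh
```

A reference scan (or a coarse pre-scan) would supply the coefficients from which the map is built; subsequent excitations then target only the significant encodes.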

  3. Novel iterative reconstruction method with optimal dose usage for partially redundant CT-acquisition

    NASA Astrophysics Data System (ADS)

    Bruder, H.; Raupach, R.; Sunnegardh, J.; Allmendinger, T.; Klotz, E.; Stierstorfer, K.; Flohr, T.

    2015-11-01

    In CT imaging, a variety of applications exist which are strongly SNR limited. However, in some cases redundant data of the same body region provide additional quanta. Examples: in dual energy CT, the spatial resolution has to be compromised to provide good SNR for material decomposition. However, the respective spectral dataset of the same body region provides additional quanta which might be utilized to improve SNR of each spectral component. Perfusion CT is a high dose application, and dose reduction is highly desirable. However, a meaningful evaluation of perfusion parameters might be impaired by noisy time frames. On the other hand, the SNR of the average of all time frames is extremely high. In redundant CT acquisitions, multiple image datasets can be reconstructed and averaged to composite image data. These composite image data, however, might be compromised with respect to contrast resolution and/or spatial resolution and/or temporal resolution. These observations bring us to the idea of transferring high SNR of composite image data to low SNR ‘source’ image data, while maintaining their resolution. It has been shown that the noise characteristics of CT image data can be improved by iterative reconstruction (Popescu et al 2012 Book of Abstracts, 2nd CT Meeting (Salt Lake City, UT) p 148). In case of data dependent Gaussian noise it can be modelled with image-based iterative reconstruction at least in an approximate manner (Bruder et al 2011 Proc. SPIE 7961 79610J). We present a generalized update equation in image space, consisting of a linear combination of the previous update, a correction term which is constrained by the source image data, and a regularization prior, which is initialized by the composite image data. This iterative reconstruction approach we call bimodal reconstruction (BMR). Based on simulation data it is shown that BMR can improve low contrast detectability, substantially reduces the noise power and has the potential to recover
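The generalized update equation described above — a linear combination of the previous update, a correction term constrained by the source data, and a regularization prior initialized by the composite image — can be given a schematic form. The coefficients and the simple residual shapes of the terms are assumptions, not the authors' exact operators:

```python
import numpy as np

def bmr_update(x, prev_update, source, composite,
               alpha=0.5, beta=0.3, gamma=0.2):
    # One image-space iteration in the spirit of bimodal reconstruction:
    #   alpha * prev_update          -> momentum on the previous update
    #   gamma * (source - x)         -> correction constrained by the
    #                                   low-SNR source image data
    #   beta  * (composite - x)      -> regularization pull toward the
    #                                   high-SNR composite prior
    update = (alpha * prev_update
              + gamma * (source - x)
              + beta * (composite - x))
    return x + update, update
```

With these coefficients the iteration is contractive, so the reconstruction settles between the noisy source and the smooth composite rather than diverging.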

  5. Parallel acquisition of Raman spectra from a 2D multifocal array using a modulated multifocal detection scheme

    NASA Astrophysics Data System (ADS)

    Kong, Lingbo; Chan, James W.

    2015-03-01

A major limitation of spontaneous Raman scattering is its intrinsically weak signals, which makes Raman analysis or imaging of biological specimens slow and impractical for many applications. To address this, we report the development of a novel modulated multifocal detection scheme for simultaneous acquisition of full Raman spectra from a 2-D m × n multifocal array. A spatial light modulator (SLM), or a pair of galvo-mirrors, is used to generate m × n laser foci. Raman signals generated within each focus are projected simultaneously into a spectrometer and detected by a CCD camera. The system can resolve the Raman spectra with no crosstalk along the vertical pixels of the CCD camera, i.e., along the entrance slit of the spectrometer. However, there is significant overlap of the spectra in the horizontal pixel direction, i.e., along the dispersion direction. By modulating the excitation multifocal array (illumination modulation) or the emitted Raman signal array (detection modulation), the superimposed Raman spectra of different multifocal patterns are collected. The individual Raman spectrum from each focus is then retrieved from the superimposed spectra using a post-acquisition data processing algorithm. This development leads to a significant improvement in the speed of acquiring Raman spectra. We discuss the application of this detection scheme for parallel analysis of individual cells with multifocus laser tweezers Raman spectroscopy (M-LTRS) and for rapid confocal hyperspectral Raman imaging.
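At its core, retrieving the per-focus spectra from the superimposed readouts amounts to inverting the known modulation matrix. This is a linear-algebra sketch of the post-acquisition processing idea, not the authors' algorithm; the pattern matrix below is hypothetical:

```python
import numpy as np

def retrieve_spectra(superimposed, patterns):
    # superimposed: (n_patterns, n_wavelengths) CCD readouts, one per pattern
    # patterns:     (n_patterns, n_foci) 0/1 modulation matrix (which foci
    #               were illuminated/detected in each pattern)
    # Each readout is a known linear mixture of the per-focus spectra, so
    # solving patterns @ spectra = superimposed recovers them, provided the
    # modulation patterns are linearly independent.
    return np.linalg.solve(patterns, superimposed)
```

In practice one would choose well-conditioned patterns (e.g. Hadamard-like) and use a least-squares solve when more patterns than foci are recorded.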

  6. A scalable parallel open architecture data acquisition system for low to high rate experiments, test beams and all SSC (Superconducting Super Collider) detectors

    SciTech Connect

    Barsotti, E.; Booth, A.; Bowden, M.; Swoboda, C. ); Lockyer, N.; VanBerg, R. )

    1989-12-01

A new era of high-energy physics research is beginning, requiring accelerators with much higher luminosities and interaction rates in order to discover new elementary particles. As a consequence, both orders-of-magnitude higher data rates from the detector and online processing power well beyond the capabilities of current high energy physics data acquisition systems are required. This paper describes a new data acquisition system architecture which draws heavily from the communications industry, is totally parallel (i.e., without any bottlenecks), is capable of data rates of hundreds of gigabytes per second from the detector and into an array of online processors (i.e., processor farm), and uses an open systems architecture to guarantee compatibility with future commercially available online processor farms. The main features of the system architecture are standard interface ICs to detector subsystems wherever possible, fiber optic digital data transmission from the near-detector electronics, a self-routing parallel event builder, and the use of industry-supported and high-level language programmable processors in the proposed BCD system for both triggers and online filters. A brief status report of an ongoing project at Fermilab to build the self-routing parallel event builder is also given in the paper. 3 figs., 1 tab.

  7. Comparative Analysis on the Performance of a Short String of Series-Connected and Parallel-Connected Photovoltaic Array Under Partial Shading

    NASA Astrophysics Data System (ADS)

    Vijayalekshmy, S.; Rama Iyer, S.; Beevi, Bisharathu

    2015-09-01

The output power from the photovoltaic (PV) array decreases and the array exhibits multiple peaks when it is subjected to partial shading (PS). The power loss in the PV array varies with the array configuration, physical location and the shading pattern. This paper compares the relative performance of a PV array consisting of a short string of three PV modules in two different configurations. The mismatch loss, shading loss, fill factor and the power loss due to failure in tracking the global maximum power point of a series string with bypass diodes and a short parallel string are analysed using a MATLAB/Simulink model. The performance of the system is investigated for three different conditions of solar insolation under the same shading pattern. Results indicate that there is considerably greater power loss due to shading in a series string during PS than in a parallel string with the same number of modules.
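Why the series string loses more under partial shading can be seen with a deliberately idealized model: module photo-current proportional to irradiance, a fixed module voltage, and ideal bypass diodes. All of these are simplifying assumptions, not the Simulink model from the paper:

```python
def series_with_bypass_power(g, vm=30.0):
    # Operating the series string at current i_op bypasses every module
    # whose photo-current (taken as equal to its irradiance level g) is
    # below i_op; only the conducting modules contribute voltage vm.
    # Sweeping candidate currents reproduces the multiple power peaks.
    best = 0.0
    for i_op in g:
        conducting = sum(1 for gi in g if gi >= i_op)
        best = max(best, i_op * vm * conducting)
    return best

def parallel_power(g, vm=30.0):
    # In parallel, each module runs at its own photo-current and the
    # currents simply add at the common voltage.
    return vm * sum(g)
```

With one module of three shaded, the series string must either bypass the shaded module or drag the whole string down to its current, while the parallel string only loses the shaded module's own deficit.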

  8. Application of Chang's attenuation correction technique for single-photon emission computed tomography partial angle acquisition of Jaszczak phantom

    PubMed Central

    Saha, Krishnendu; Hoyt, Sean C.; Murray, Bryon M.

    2016-01-01

    The acquisition and processing of the Jaszczak phantom is a recommended test by the American College of Radiology for evaluation of gamma camera system performance. To produce the reconstructed phantom image for quality evaluation, attenuation correction is applied. The attenuation of counts originating from the center of the phantom is greater than that originating from the periphery of the phantom causing an artifactual appearance of inhomogeneity in the reconstructed image and complicating phantom evaluation. Chang's mathematical formulation is a common method of attenuation correction applied on most gamma cameras that do not require an external transmission source such as computed tomography, radionuclide sources installed within the gantry of the camera or a flood source. Tomographic acquisition can be obtained in two different acquisition modes for dual-detector gamma camera; one where the two detectors are at 180° configuration and acquire projection images for a full 360°, and the other where the two detectors are positioned at a 90° configuration and acquire projections for only 180°. Though Chang's attenuation correction method has been used for 360° angle acquisition, its applicability for 180° angle acquisition remains a question with one vendor's camera software producing artifacts in the images. This work investigates whether Chang's attenuation correction technique can be applied to both acquisition modes by the development of a Chang's formulation-based algorithm that is applicable to both modes. Assessment of attenuation correction performance by phantom uniformity analysis illustrates improved uniformity with the proposed algorithm (22.6%) compared to the camera software (57.6%). PMID:27051167
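Chang's first-order correction multiplies each reconstructed pixel by the reciprocal of its average attenuation factor over all projection angles. A minimal sketch for a uniform circular attenuator follows; the attenuation coefficient, phantom radius, and pixel size are illustrative assumptions:

```python
import numpy as np

def chang_correction(shape, mu=0.15, n_angles=64, pixel_cm=0.4):
    # First-order Chang correction factor for a uniform circular attenuator:
    #   C(x, y) = 1 / mean_over_theta( exp(-mu * l(x, y, theta)) )
    # where l is the path length from the pixel to the boundary at angle
    # theta. mu is in cm^-1; the phantom radius is taken from the image.
    H, W = shape
    cy, cx = (H - 1) / 2.0, (W - 1) / 2.0
    R = min(cy, cx) * pixel_cm                 # assumed circular boundary
    ys, xs = np.mgrid[0:H, 0:W]
    px, py = (xs - cx) * pixel_cm, (ys - cy) * pixel_cm
    acc = np.zeros(shape)
    for t in np.linspace(0.0, 2 * np.pi, n_angles, endpoint=False):
        dx, dy = np.cos(t), np.sin(t)
        # distance from (px, py) along (dx, dy) to the circle of radius R
        b = px * dx + py * dy
        c = px ** 2 + py ** 2 - R ** 2
        l = -b + np.sqrt(np.maximum(b ** 2 - c, 0.0))
        acc += np.exp(-mu * l)
    return n_angles / acc      # correction factor, largest at the center
```

The map is largest at the phantom center, which is exactly the artifactual "cold center" the record describes; the same formula applies whether the projections came from a 360° or a 180° orbit, since C depends only on geometry.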

  10. Morphological Awareness in Vocabulary Acquisition among Chinese-Speaking Children: Testing Partial Mediation via Lexical Inference Ability

    ERIC Educational Resources Information Center

    Zhang, Haomin

    2015-01-01

    The goal of this study was to investigate the effect of Chinese-specific morphological awareness on vocabulary acquisition among young Chinese-speaking students. The participants were 288 Chinese-speaking second graders from three different cities in China. Multiple regression analysis and mediation analysis were used to uncover the mediated and…

  11. 16 CFR 802.42 - Partial exemption for acquisitions in connection with the formation of certain joint ventures or...

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... acquisitions in connection with the formation of certain joint ventures or other corporations. (a) Whenever one or more of the contributors in the formation of a joint venture or other corporation which otherwise... connection with the formation of certain joint ventures or other corporations. 802.42 Section...

  12. Two parallel pathways for ferric and ferrous iron acquisition support growth and virulence of the intracellular pathogen Francisella tularensis Schu S4.

    PubMed

    Pérez, Natalie; Johnson, Richard; Sen, Bhaswati; Ramakrishnan, Girija

    2016-06-01

    Iron acquisition mechanisms in Francisella tularensis, the causative agent of tularemia, include the Francisella siderophore locus (fsl) siderophore operon and a ferrous iron-transport system comprising outer-membrane protein FupA and inner-membrane transporter FeoB. To characterize these mechanisms and to identify any additional iron uptake systems in the virulent subspecies tularensis, single and double deletions were generated in the fsl and feo iron acquisition systems of the strain Schu S4. Deletion of the entire fsl operon caused loss of siderophore production that could be restored by complementation with the biosynthetic genes fslA and fslC and Major Facilitator Superfamily (MFS) transporter gene fslB. (55)Fe-transport assays demonstrated that siderophore-iron uptake required the receptor FslE and MFS transporter FslD. A ΔfeoB' mutation resulted in loss of the ability to transport ferrous iron ((55)Fe(2+)). A ΔfeoB' ΔfslA mutant that required added exogenous siderophore for growth in vitro was unable to grow within tissue culture cells and was avirulent in mice, indicating that no compensatory cryptic iron uptake systems were induced in vivo. These studies demonstrate that the fsl and feo pathways function independently and operate in parallel to effectively support virulence of F. tularensis. PMID:26918301

  13. A Performance Comparison of the Parallel Preconditioners for Iterative Methods for Large Sparse Linear Systems Arising from Partial Differential Equations on Structured Grids

    NASA Astrophysics Data System (ADS)

    Ma, Sangback

    In this paper we compare various parallel preconditioners such as Point-SSOR (Symmetric Successive OverRelaxation), ILU(0) (Incomplete LU) in the Wavefront ordering, ILU(0) in the Multi-color ordering, Multi-Color Block SOR (Successive OverRelaxation), SPAI (SParse Approximate Inverse) and pARMS (Parallel Algebraic Recursive Multilevel Solver) for solving large sparse linear systems arising from two-dimensional PDEs (Partial Differential Equations) on structured grids. Point-SSOR is well known, and ILU(0) is one of the most popular preconditioners, but it is inherently serial. ILU(0) in the Wavefront ordering maximizes the parallelism in the natural order, but the lengths of the wavefronts are often nonuniform. ILU(0) in the Multi-color ordering is a simple way of achieving parallelism of order N, where N is the order of the matrix, but its convergence rate often deteriorates compared to that of the natural ordering. We have chosen the Multi-Color Block SOR preconditioner combined with a direct sparse matrix solver, since for the Laplacian matrix the SOR method is known to have a nondeteriorating rate of convergence when used with the Multi-Color ordering. By using the block version we expect to minimize interprocessor communications. SPAI computes the sparse approximate inverse directly by the least-squares method. Finally, ARMS is a preconditioner recursively exploiting the concept of independent sets, and pARMS is the parallel version of ARMS. Experiments were conducted for Finite Difference and Finite Element discretizations of five two-dimensional PDEs with large mesh sizes up to a million on an IBM p595 machine with distributed memory. Our matrices are real positive, i.e., the real parts of their eigenvalues are positive. We have used GMRES(m) as our outer iterative method, so that the convergence of GMRES(m) for our test matrices is mathematically guaranteed. Interprocessor communications were done using MPI (Message Passing Interface) primitives. The…
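
    The attraction of the Multi-color ordering is easiest to see on the 5-point Laplacian, where two colors suffice and every update within one color is independent. A small illustrative red-black SOR sweep (plain Python with serial loops standing in for the parallel per-color updates; not the authors' block variant):

```python
import numpy as np

def redblack_sor(u, f, h, omega=1.7, sweeps=200):
    """Red-black SOR for the 2D Poisson equation -Lap(u) = f.

    All grid points of one color can be updated simultaneously, which is
    the parallelism the multi-color ordering exposes (two colors suffice
    for the 5-point stencil).
    """
    for _ in range(sweeps):
        for color in (0, 1):  # 0 = red, 1 = black
            for i in range(1, u.shape[0] - 1):
                for j in range(1, u.shape[1] - 1):
                    if (i + j) % 2 == color:
                        gs = 0.25 * (u[i - 1, j] + u[i + 1, j]
                                     + u[i, j - 1] + u[i, j + 1]
                                     + h * h * f[i, j])
                        u[i, j] += omega * (gs - u[i, j])
    return u
```

With f = 0 and zero boundary values, any interior start decays to zero; in a distributed implementation each processor would own a subgrid and exchange a one-cell halo per color per sweep.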

  14. Solitary Sound Play during Acquisition of English Vocalizations by an African Grey Parrot (Psittacus Erithacus): Possible Parallels with Children's Monologue Speech.

    ERIC Educational Resources Information Center

    Pepperberg, Irene M.; And Others

    1991-01-01

    Examines one component of an African Grey parrot's monologue behavior, private speech, while he was being taught new vocalizations. The data are discussed in terms of the possible functions of monologues during the parrot's acquisition of novel vocalizations. (85 references) (GLR)

  15. Parallel MR Imaging

    PubMed Central

    Deshmane, Anagha; Gulani, Vikas; Griswold, Mark A.; Seiberlich, Nicole

    2015-01-01

    Parallel imaging is a robust method for accelerating the acquisition of magnetic resonance imaging (MRI) data, and has made possible many new applications of MR imaging. Parallel imaging works by acquiring a reduced amount of k-space data with an array of receiver coils. These undersampled data can be acquired more quickly, but the undersampling leads to aliased images. One of several parallel imaging algorithms can then be used to reconstruct artifact-free images from either the aliased images (SENSE-type reconstruction) or from the under-sampled data (GRAPPA-type reconstruction). The advantages of parallel imaging in a clinical setting include faster image acquisition, which can be used, for instance, to shorten breath-hold times resulting in fewer motion-corrupted examinations. In this article the basic concepts behind parallel imaging are introduced. The relationship between undersampling and aliasing is discussed and two commonly used parallel imaging methods, SENSE and GRAPPA, are explained in detail. Examples of artifacts arising from parallel imaging are shown and ways to detect and mitigate these artifacts are described. Finally, several current applications of parallel imaging are presented and recent advancements and promising research in parallel imaging are briefly reviewed. PMID:22696125
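
    The SENSE-type reconstruction described above solves, for each pixel of the aliased image, a small linear system built from the coil sensitivities of the R superimposed locations. A schematic numpy sketch (our simplification: a 1-D fold, noiseless data, identity noise covariance):

```python
import numpy as np

def sense_unfold(aliased, sens, R):
    """Unfold R-fold aliased coil images using sensitivity maps.

    aliased: (ncoil, ny // R, nx) aliased coil images
    sens:    (ncoil, ny, nx) coil sensitivity maps
    Per aliased pixel, least squares separates the R superimposed
    true pixel values.
    """
    ncoil, ny, nx = sens.shape
    yfold = ny // R
    out = np.zeros((ny, nx), dtype=complex)
    for y in range(yfold):
        for x in range(nx):
            S = sens[:, y::yfold, x]   # (ncoil, R) sensitivities of folded pixels
            a = aliased[:, y, x]       # (ncoil,) measured aliased value
            rho, *_ = np.linalg.lstsq(S, a, rcond=None)
            out[y::yfold, x] = rho
    return out
```

With ncoil ≥ R and well-conditioned sensitivities the R folded pixels separate exactly; in practice the noise covariance and regularization control the SNR penalty (the g-factor).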

  16. Characterization of high resolution MR images reconstructed by a GRAPPA based parallel technique

    NASA Astrophysics Data System (ADS)

    Banerjee, Suchandrima; Majumdar, Sharmila

    2006-03-01

    This work implemented an auto-calibrating parallel imaging technique and applied it to in vivo magnetic resonance imaging (MRI) of trabecular bone micro-architecture. A generalized autocalibrating partially parallel acquisition (GRAPPA) based reconstruction technique using modified robust data fitting was developed. The MR data were acquired with an eight-channel phased-array receiver from three normal volunteers on a General Electric 3 Tesla scanner. Microstructures comprising the trabecular bone architecture are of the order of 100 microns, and hence their depiction requires very high imaging resolution. This work examined the effects of GRAPPA based parallel imaging on signal and noise characteristics and effective spatial resolution in high resolution (HR) images, for the range of undersampling or reduction factors 2-4. Additionally, quantitative analysis was performed to obtain structural measures of trabecular bone from the images. Image quality in terms of contrast and depiction of structures was maintained in parallel images for reduction factors up to 3. Comparison between regular and parallel images suggested similar spatial resolution for both. However, differences in noise characteristics in parallel images compared to regular images affected the thresholding-based quantification. This suggested that GRAPPA based parallel images might require different analysis techniques. In conclusion, the study showed the feasibility of using parallel imaging techniques in HR-MRI of trabecular bone, although quantification strategies will have to be further investigated. Reduction of acquisition time using parallel techniques can improve the clinical feasibility of MRI of trabecular bone for prognosis and staging of the skeletal disorder osteoporosis.
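
    In contrast to image-domain SENSE, the GRAPPA step works in k-space: weights are fitted on the fully sampled autocalibration (ACS) lines and then reused to synthesize the skipped lines. A toy fit for R = 2 with a 2x3 kernel (our simplification; the paper's "modified robust data fitting" is not reproduced):

```python
import numpy as np

def grappa_weights(acs, kx=3):
    """Fit GRAPPA weights on ACS k-space data (ncoil, ny, nx) for R = 2.

    Each target point acs[:, y, x] is modeled as a linear combination of
    the acquired neighbors on the lines directly above and below (kx
    readout columns, all coils), fitted by least squares over all ACS
    locations where that geometry fits.
    """
    ncoil, ny, nx = acs.shape
    pad = kx // 2
    src, tgt = [], []
    for y in range(1, ny - 1):
        for x in range(pad, nx - pad):
            src.append(acs[:, [y - 1, y + 1], x - pad:x + pad + 1].ravel())
            tgt.append(acs[:, y, x])
    w, *_ = np.linalg.lstsq(np.asarray(src), np.asarray(tgt), rcond=None)
    return w  # shape (ncoil * 2 * kx, ncoil)
```

In an actual reconstruction the same weights are applied at every missing k-space line outside the ACS block, one source neighborhood at a time.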

  17. Acquired resistance to zoledronic acid and the parallel acquisition of an aggressive phenotype are mediated by p38-MAP kinase activation in prostate cancer cells

    PubMed Central

    Milone, M R; Pucci, B; Bruzzese, F; Carbone, C; Piro, G; Costantini, S; Capone, F; Leone, A; Di Gennaro, E; Caraglia, M; Budillon, A

    2013-01-01

    …resistance, as well as in the acquisition of a more aggressive and invasive phenotype. PMID:23703386

  18. Parallel sphere rendering

    SciTech Connect

    Krogh, M.; Painter, J.; Hansen, C.

    1996-10-01

    Sphere rendering is an important method for visualizing molecular dynamics data. This paper presents a parallel algorithm that is almost 90 times faster than current graphics workstations. To render extremely large data sets and large images, the algorithm uses the MIMD features of the supercomputers to divide up the data, render independent partial images, and then finally composite the multiple partial images using an optimal method. The algorithm and performance results are presented for the CM-5 and the T3D.

  19. Parallel rendering

    NASA Technical Reports Server (NTRS)

    Crockett, Thomas W.

    1995-01-01

    This article provides a broad introduction to the subject of parallel rendering, encompassing both hardware and software systems. The focus is on the underlying concepts and the issues which arise in the design of parallel rendering algorithms and systems. We examine the different types of parallelism and how they can be applied in rendering applications. Concepts from parallel computing, such as data decomposition, task granularity, scalability, and load balancing, are considered in relation to the rendering problem. We also explore concepts from computer graphics, such as coherence and projection, which have a significant impact on the structure of parallel rendering algorithms. Our survey covers a number of practical considerations as well, including the choice of architectural platform, communication and memory requirements, and the problem of image assembly and display. We illustrate the discussion with numerous examples from the parallel rendering literature, representing most of the principal rendering methods currently used in computer graphics.

  20. HYPERCP data acquisition system

    SciTech Connect

    Kaplan, D.M.; Luebke, W.R.; Chakravorty, A.

    1997-12-31

    For the HyperCP experiment at Fermilab, we have assembled a data acquisition system that records on up to 45 Exabyte 8505 tape drives in parallel at up to 17 MB/s. During the beam spill, data are acquired from the front-end digitization systems at ≈60 MB/s via five parallel data paths. The front-end systems achieve a typical readout deadtime of ≈1 μs per event, allowing operation at a 75-kHz trigger rate with ≲30% deadtime. Event building and tapewriting are handled by 15 Motorola MVME167 processors in 5 VME crates.
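
    As a back-of-the-envelope check on numbers like those quoted, the standard non-paralyzable deadtime model gives the fraction of triggers lost to readout (our illustration, not the experiment's accounting; the quoted ≲30% evidently includes more than the ≈1 μs front-end readout, which alone costs only about 7% at 75 kHz):

```python
def nonparalyzable_deadtime(trigger_rate_hz, dead_time_s):
    """Fraction of events lost when each accepted event blocks the
    system for dead_time_s (non-paralyzable deadtime model)."""
    x = trigger_rate_hz * dead_time_s
    return x / (1.0 + x)
```

At 75 kHz and 1 μs this evaluates to 0.075/1.075, roughly 7.0%.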

  1. Parallel sphere rendering

    SciTech Connect

    Krogh, M.; Hansen, C.; Painter, J.; de Verdiere, G.C.

    1995-05-01

    Sphere rendering is an important method for visualizing molecular dynamics data. This paper presents a parallel divide-and-conquer algorithm that is almost 90 times faster than current graphics workstations. To render extremely large data sets and large images, the algorithm uses the MIMD features of the supercomputers to divide up the data, render independent partial images, and then finally composite the multiple partial images using an optimal method. The algorithm and performance results are presented for the CM-5 and the T3D.

  2. Parallel grid population

    DOEpatents

    Wald, Ingo; Ize, Santiago

    2015-07-28

    Parallel population of a grid with a plurality of objects using a plurality of processors. One example embodiment is a method for parallel population of a grid with a plurality of objects using a plurality of processors. The method includes a first act of dividing a grid into n distinct grid portions, where n is the number of processors available for populating the grid. The method also includes acts of dividing a plurality of objects into n distinct sets of objects, assigning a distinct set of objects to each processor such that each processor determines by which distinct grid portion(s) each object in its distinct set of objects is at least partially bounded, and assigning a distinct grid portion to each processor such that each processor populates its distinct grid portion with any objects that were previously determined to be at least partially bounded by its distinct grid portion.
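
    The two phases claimed above can be sketched concretely. In this toy 1-D version (our construction, not the patent's code), the per-processor loops are written serially; each iteration corresponds to the work one of the n processors would do:

```python
def parallel_grid_population(objects, n, grid_min, grid_max):
    """Two-phase grid population, sketched serially.

    Objects are (lo, hi) intervals on a 1-D grid split into n equal
    portions. Phase 1: each processor takes a distinct set of objects
    and records which portions each object at least partially overlaps.
    Phase 2: each processor populates its own portion from those records.
    """
    width = (grid_max - grid_min) / n
    obj_sets = [objects[i::n] for i in range(n)]     # n distinct object sets
    overlaps = [[] for _ in range(n)]                # portion -> overlapping objects
    for s in obj_sets:                               # phase 1 (one loop per processor)
        for lo, hi in s:
            first = max(0, int((lo - grid_min) // width))
            last = min(n - 1, int((hi - grid_min) // width))
            for p in range(first, last + 1):
                overlaps[p].append((lo, hi))
    # phase 2 (one iteration per processor): populate each portion
    return [sorted(overlaps[p]) for p in range(n)]
```

Note that an object spanning a portion boundary is recorded for every portion it touches, matching the patent's "at least partially bounded" criterion.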

  3. Super-resolved parallel MRI by spatiotemporal encoding.

    PubMed

    Schmidt, Rita; Baishya, Bikash; Ben-Eliezer, Noam; Seginer, Amir; Frydman, Lucio

    2014-01-01

    Recent studies described an "ultrafast" scanning method based on spatiotemporal encoding (SPEN) principles. SPEN demonstrates numerous potential advantages over EPI-based alternatives, at no additional expense in experimental complexity. An important aspect that SPEN still needs to achieve to provide a competitive ultrafast MRI acquisition alternative entails exploiting parallel imaging algorithms without compromising its proven capabilities. The present work introduces a combination of multi-band frequency-swept pulses simultaneously encoding multiple, partial fields-of-view, together with a new algorithm merging a Super-Resolved SPEN image reconstruction and SENSE multiple-receiving methods. This approach enables one to reduce both the excitation and acquisition times of sub-second SPEN acquisitions by the customary acceleration factor R, without compromises in either the method's spatial resolution, SAR deposition, or capability to operate in multi-slice mode. The performance of these new single-shot imaging sequences and their ancillary algorithms was explored and corroborated on phantoms and human volunteers at 3 T. The gains of the parallelized approach were particularly evident when dealing with heterogeneous systems subject to major T2/T2* effects, as is the case upon single-scan imaging near tissue/air interfaces. PMID:24120293

  4. Massively parallel visualization: Parallel rendering

    SciTech Connect

    Hansen, C.D.; Krogh, M.; White, W.

    1995-12-01

    This paper presents rendering algorithms, developed for massively parallel processors (MPPs), for polygonal, spheres, and volumetric data. The polygon algorithm uses a data parallel approach whereas the sphere and volume renderer use a MIMD approach. Implementations for these algorithms are presented for the Thinking Machines Corporation CM-5 MPP.

  5. Active catheter tracking using parallel MRI and real-time image reconstruction.

    PubMed

    Bock, Michael; Müller, Sven; Zuehlsdorff, Sven; Speier, Peter; Fink, Christian; Hallscheidt, Peter; Umathum, Reiner; Semmler, Wolfhard

    2006-06-01

    In this work active MR catheter tracking with automatic slice alignment was combined with an autocalibrated parallel imaging technique. Using an optimized generalized autocalibrating partially parallel acquisitions (GRAPPA) algorithm with an acceleration factor of 2, we were able to reduce the acquisition time per image by 34%. To accelerate real-time GRAPPA image reconstruction, the coil sensitivities were updated only after slice reorientation. For a 2D trueFISP acquisition (160 x 256 matrix, 80% phase matrix, half Fourier acquisition, TR = 3.7 ms, GRAPPA factor = 2) real-time image reconstruction was achieved with up to six imaging coils. In a single animal experiment the method was used to steer a catheter from the vena cava through the beating heart into the pulmonary vasculature at an image update rate of about five images per second. Under all slice orientations, parallel image reconstruction was accomplished with only minor image artifacts, and the increased temporal resolution provided a sharp delineation of intracardial structures, such as the papillary muscle. PMID:16683261

  6. Parallel machines: Parallel machine languages

    SciTech Connect

    Iannucci, R.A. )

    1990-01-01

    This book presents a framework for understanding the tradeoffs between the conventional view and the dataflow view, with the objective of discovering the critical hardware structures which must be present in any scalable, general-purpose parallel computer to effectively tolerate latency and synchronization costs. The author presents an approach to scalable general-purpose parallel computation. Linguistic concerns, compiling issues, intermediate-language issues, and hardware/technological constraints are presented as a combined approach to architectural development. This book presents the notion of a parallel machine language.

  7. Second Language Acquisition: Possible Insights from Studies on How Birds Acquire Song.

    ERIC Educational Resources Information Center

    Neapolitan, Denise M.; And Others

    1988-01-01

    Reviews research that demonstrates parallels between general linguistic and cognitive processes in human language acquisition and avian acquisition of song and discusses how such research may provide new insights into the processes of second-language acquisition. (Author/CB)

  8. Parallel pipelining

    SciTech Connect

    Joseph, D.D.; Bai, R.; Liao, T.Y.; Huang, A.; Hu, H.H.

    1995-09-01

    In this paper the authors introduce the idea of parallel pipelining for water-lubricated transportation of oil (or other viscous material). A parallel system can have major advantages over a single pipe with respect to the cost of maintenance and continuous operation of the system, to the pressure gradients required to restart a stopped system, and to the reduction and even elimination of the fouling of pipe walls in continuous operation. The authors show that the action of capillarity in small pipes is more favorable for restart than in large pipes. In a parallel pipeline system, they estimate the number of small pipes needed to deliver the same oil flux as in one larger pipe as N = (R/r)^α, where r and R are the radii of the small and large pipes, respectively, and α = 4 or 19/7 when the lubricating water flow is laminar or turbulent.
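
    The authors' estimate is a one-liner; a small helper (the function name is ours) makes the two regimes explicit:

```python
def pipes_needed(R, r, turbulent=False):
    """Number of small pipes of radius r delivering the same oil flux as
    one pipe of radius R, per the authors' estimate N = (R/r)**alpha,
    with alpha = 4 for laminar and 19/7 for turbulent lubricating
    water flow."""
    alpha = 19.0 / 7.0 if turbulent else 4.0
    return (R / r) ** alpha
```

In the laminar case, halving the pipe radius thus requires sixteen small pipes; the turbulent exponent 19/7 is considerably less punishing.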

  9. Data parallelism

    SciTech Connect

    Gorda, B.C.

    1992-09-01

    Data locality is fundamental to performance on distributed memory parallel architectures. Application programmers know this well and go to great pains to arrange data for optimal performance. Data Parallelism, a model from the Single Instruction Multiple Data (SIMD) architecture, is finding a new home on the Multiple Instruction Multiple Data (MIMD) architectures. This style of programming, distinguished by taking the computation to the data, is what programmers have been doing by hand for a long time. Recent work in this area holds the promise of making the programmer's task easier.

  11. Nuclear norm-regularized k-space-based parallel imaging reconstruction

    NASA Astrophysics Data System (ADS)

    Xu, Lin; Liu, Xiaoyun

    2014-04-01

    Parallel imaging reconstruction suffers from serious noise amplification at high accelerations, which can be alleviated with regularization by imposing prior information or constraints on the image. Nevertheless, point-wise interpolation of missing k-space data restricts the use of prior information in k-space-based parallel imaging reconstructions like generalized autocalibrating partially parallel acquisitions (GRAPPA). In this study, a regularized k-space-based parallel imaging reconstruction is presented. We first formulate the reconstruction of missing data within a patch as a linear inverse problem. Instead of exploiting prior information on the image or its transform domain, the proposed method exploits the rank deficiency of a structured matrix consisting of vectorized patches from the entire k-space, which leads to a nuclear norm-regularized problem solved iteratively by numerical algorithms. Brain imaging studies are performed, demonstrating that the proposed method is capable of mitigating noise at high accelerations in GRAPPA reconstruction.
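
    The workhorse of such iterative nuclear-norm solvers is singular-value thresholding, the proximal operator of the nuclear norm. A compact sketch, with a naive completion loop as a toy stand-in (the paper's structured matrix is built from vectorized k-space patches, which is not reproduced here):

```python
import numpy as np

def svt(M, tau):
    """Singular-value thresholding: shrink every singular value by tau,
    clipping at zero (the proximal operator of the nuclear norm)."""
    U, s, Vh = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vh

def complete_lowrank(M, mask, tau=1.0, iters=100):
    """Toy nuclear-norm-regularized completion: alternate singular-value
    shrinkage with re-insertion of the acquired entries."""
    X = np.where(mask, M, 0.0)
    for _ in range(iters):
        X = np.where(mask, M, svt(X, tau))
    return X
```

The re-insertion step plays the role of the data-consistency constraint: acquired k-space samples are never altered, and only the missing entries are estimated from the low-rank prior.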

  12. The NAS Parallel Benchmarks

    SciTech Connect

    Bailey, David H.

    2009-11-15

    The NAS Parallel Benchmarks (NPB) are a suite of parallel computer performance benchmarks. They were originally developed at the NASA Ames Research Center in 1991 to assess high-end parallel supercomputers. Although they are no longer used as widely as they once were for comparing high-end system performance, they continue to be studied and analyzed a great deal in the high-performance computing community. The acronym 'NAS' originally stood for the Numerical Aeronautical Simulation Program at NASA Ames. The name of this organization was subsequently changed to the Numerical Aerospace Simulation Program, and more recently to the NASA Advanced Supercomputing Center, although the acronym remains 'NAS.' The developers of the original NPB suite were David H. Bailey, Eric Barszcz, John Barton, David Browning, Russell Carter, Leo Dagum, Rod Fatoohi, Samuel Fineberg, Paul Frederickson, Thomas Lasinski, Rob Schreiber, Horst Simon, V. Venkatakrishnan and Sisira Weeratunga. The original NAS Parallel Benchmarks consisted of eight individual benchmark problems, each of which focused on some aspect of scientific computing. The principal focus was in computational aerophysics, although most of these benchmarks have much broader relevance, since in a much larger sense they are typical of many real-world scientific computing applications. The NPB suite grew out of the need for a more rational procedure to select new supercomputers for acquisition by NASA. The emergence of commercially available highly parallel computer systems in the late 1980s offered an attractive alternative to the parallel vector supercomputers that had been the mainstay of high-end scientific computing. However, the introduction of highly parallel systems was accompanied by a regrettable level of hype, not only on the part of the commercial vendors but even, in some cases, by scientists using the systems. As a result, it was difficult to discern whether the new systems offered any fundamental performance advantage…

  13. Seeing in parallel

    SciTech Connect

    Little, J.J.; Poggio, T.; Gamble, E.B. Jr.

    1988-01-01

    Computer algorithms have been developed for early vision processes that give separate cues to the distance from the viewer of three-dimensional surfaces, their shape, and their material properties. The MIT Vision Machine is a computer system that integrates several early vision modules to achieve high-performance recognition and navigation in unstructured environments. It is also an experimental environment for theoretical progress in early vision algorithms, their parallel implementation, and their integration. The Vision Machine consists of a movable, two-camera Eye-Head input device and an 8K Connection Machine. The authors have developed and implemented several parallel early vision algorithms that compute edge detection, stereopsis, motion, texture, and surface color in close to real time. The integration stage, based on coupled Markov random field models, leads to a cartoon-like map of the discontinuities in the scene, with partial labeling of the brightness edges in terms of their physical origin.

  14. The Chateau de Cristal data acquisition system

    SciTech Connect

    Villard, M.M.

    1987-08-01

    This data acquisition system is built on several dedicated data-transfer buses: ADC data readout through the FERA bus, and parallel data processing in two VME crates. High data rates and selectivity are achieved via this acquisition structure and newly developed processing units. The system's modularity allows various experiments with additional detectors.

  15. SSC/BCD data acquisition system proposal

    SciTech Connect

    Barsotti, E.; Bowden, M.; Swoboda, C.

    1989-04-01

    The proposed new data acquisition system architecture carries event fragments from the detector over fiber optics to a parallel event-building switch. The parallel event-building switch concept, taken from the telephone communications industry, along with expected technology improvements in fiber-optic data transmission speeds over the next few years, should allow data acquisition system rates to increase dramatically and exceed the rates needed for the SSC. This report briefly describes the switch architecture and fiber optics for an SSC data acquisition system.

  16. Single echo acquisition MRI using RF encoding.

    PubMed

    Wright, Steven M; McDougall, Mary Preston

    2009-11-01

    Encoding of spatial information in magnetic resonance imaging is conventionally accomplished by using magnetic field gradients. During gradient encoding, the position in k-space is determined by a time-integral of the gradient field, resulting in a limitation in imaging speed due to either gradient power or secondary effects such as peripheral nerve stimulation. Partial encoding of spatial information through the sensitivity patterns of an array of coils, known as parallel imaging, is widely used to accelerate the imaging, and is complementary to gradient encoding. This paper describes the one-dimensional limit of parallel imaging in which all spatial localization in one dimension is performed through encoding by the radiofrequency (RF) coil. Using a one-dimensional array of long and narrow parallel elements to localize the image information in one direction, an entire image is obtained from a single line of k-space, avoiding rapid or repeated manipulation of gradients. The technique, called single echo acquisition (SEA) imaging, is described, along with the need for a phase compensation gradient pulse to counteract the phase variation contained in the RF coil pattern which would otherwise cause signal cancellation in each imaging voxel. Image reconstruction and resolution enhancement methods compatible with the speed of the technique are discussed. MR movies at frame rates of 125 frames per second are demonstrated, illustrating the ability to monitor the evolution of transverse magnetization to steady state during an MR experiment as well as demonstrating the ability to image rapid motion. Because this technique, like all RF encoding approaches, relies on the inherent spatially varying pattern of the coil and is not a time-integral, it should enable new applications for MRI that were previously inaccessible due to speed constraints, and should be of interest as an approach to extending the limits of detection in MR imaging. PMID:19441080

  17. Acquisition strategies

    SciTech Connect

    Zimmer, M.J.; Lynch, P.W. )

    1993-11-01

    Acquiring projects takes careful planning, research and consideration. Picking the right opportunities and avoiding the pitfalls will lead to a more valuable portfolio. This article describes the steps to take in evaluating an acquisition and what items need to be considered in an evaluation.

  18. Parallel Information Processing.

    ERIC Educational Resources Information Center

    Rasmussen, Edie M.

    1992-01-01

    Examines parallel computer architecture and the use of parallel processors for text. Topics discussed include parallel algorithms; performance evaluation; parallel information processing; parallel access methods for text; parallel and distributed information retrieval systems; parallel hardware for text; and network models for information…

  19. Epilepsy (partial)

    PubMed Central

    2011-01-01

    Introduction About 3% of people will be diagnosed with epilepsy during their lifetime, but about 70% of people with epilepsy eventually go into remission. Methods and outcomes We conducted a systematic review and aimed to answer the following clinical questions: What are the effects of starting antiepileptic drug treatment following a single seizure? What are the effects of drug monotherapy in people with partial epilepsy? What are the effects of additional drug treatments in people with drug-resistant partial epilepsy? What is the risk of relapse in people in remission when withdrawing antiepileptic drugs? What are the effects of behavioural and psychological treatments for people with epilepsy? What are the effects of surgery in people with drug-resistant temporal lobe epilepsy? We searched: Medline, Embase, The Cochrane Library, and other important databases up to July 2009 (Clinical Evidence reviews are updated periodically; please check our website for the most up-to-date version of this review). We included harms alerts from relevant organisations such as the US Food and Drug Administration (FDA) and the UK Medicines and Healthcare products Regulatory Agency (MHRA). Results We found 83 systematic reviews, RCTs, or observational studies that met our inclusion criteria. We performed a GRADE evaluation of the quality of evidence for interventions. 
    Conclusions In this systematic review we present information relating to the effectiveness and safety of the following interventions: antiepileptic drugs after a single seizure; monotherapy for partial epilepsy using carbamazepine, gabapentin, lamotrigine, levetiracetam, phenobarbital, phenytoin, sodium valproate, or topiramate; addition of second-line drugs for drug-resistant partial epilepsy (allopurinol, eslicarbazepine, gabapentin, lacosamide, lamotrigine, levetiracetam, losigamone, oxcarbazepine, retigabine, tiagabine, topiramate, vigabatrin, or zonisamide); antiepileptic drug withdrawal for people with partial or…

  20. The myth of data acquisition rate.

    PubMed

    Felinger, Attila; Kilár, Anikó; Boros, Borbála

    2015-01-01

    With the need for high-frequency data acquisition, the influence of the data acquisition rate on the quality of the digitized signal is often discussed and also misinterpreted. In this study we show that undersampling of the signal, i.e., a low data acquisition rate, will not cause band broadening. Users of modern instrumentation and authors are frequently misled by hidden features of the data-handling software they use. Very often users are unaware of the noise-filtering algorithms that run in parallel with data acquisition, and that lack of information misleads them. We also demonstrate that undersampled signals can be restored by a proper trigonometric interpolation. PMID:25479882
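
    The restoration the authors mention is, in its textbook form, trigonometric interpolation: zero-pad the DFT of the sparsely but adequately sampled signal and inverse-transform. A sketch under our simplifying assumption of an odd-length periodic record (not the authors' exact procedure):

```python
import numpy as np

def trig_interp(samples, factor):
    """Trigonometric interpolation by zero-padding the DFT; exact for
    periodic signals band-limited below the sampling Nyquist rate."""
    n = len(samples)
    assert n % 2 == 1, "odd length keeps the spectrum split unambiguous"
    F = np.fft.fft(samples)
    m = n * factor
    G = np.zeros(m, dtype=complex)
    h = (n + 1) // 2
    G[:h] = F[:h]            # non-negative frequencies
    G[m - (n - h):] = F[h:]  # negative frequencies
    return np.fft.ifft(G).real * factor
```

This is exact when the signal's bandwidth lies below the Nyquist frequency of the sparse sampling; a truly aliased peak cannot be recovered this way, which is why "undersampling" in the abstract means sparse-but-sufficient sampling, not sub-Nyquist sampling.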

  1. 48 CFR 19.502-3 - Partial set-asides.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 48 Federal Acquisition Regulations System 1 2010-10-01 2010-10-01 false Partial set-asides. 19.502... SOCIOECONOMIC PROGRAMS SMALL BUSINESS PROGRAMS Set-Asides for Small Business 19.502-3 Partial set-asides. (a) The contracting officer shall set aside a portion of an acquisition, except for construction,...

  2. A survey of parallel programming tools

    NASA Technical Reports Server (NTRS)

    Cheng, Doreen Y.

    1991-01-01

    This survey examines 39 parallel programming tools. Focus is placed on those tool capabilities needed for parallel scientific programming rather than for general computer science. The tools are classified with current and future needs of the Numerical Aerodynamic Simulator (NAS) in mind: existing and anticipated NAS supercomputers and workstations; operating systems; programming languages; and applications. They are divided into four categories: suggested acquisitions; tools already brought in; tools worth tracking; and tools eliminated from further consideration at this time.

  3. Tolerant (parallel) Programming

    NASA Technical Reports Server (NTRS)

    DiNucci, David C.; Bailey, David H. (Technical Monitor)

    1997-01-01

    In order to be truly portable, a program must be tolerant of a wide range of development and execution environments, and a parallel program is just one which must be tolerant of a very wide range. This paper first defines the term "tolerant programming", then describes many layers of tools to accomplish it. The primary focus is on F-Nets, a formal model for expressing computation as a folded partial-ordering of operations, thereby providing an architecture-independent expression of tolerant parallel algorithms. For implementing F-Nets, Cooperative Data Sharing (CDS) is a subroutine package for implementing communication efficiently in a large number of environments (e.g. shared memory and message passing). Software Cabling (SC), a very-high-level graphical programming language for building large F-Nets, possesses many of the features normally expected from today's computer languages (e.g. data abstraction, array operations). Finally, L2(sup 3) is a CASE tool which facilitates the construction, compilation, execution, and debugging of SC programs.

  4. 48 CFR 819.502-3 - Partial set-asides.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 48 Federal Acquisition Regulations System 5 2010-10-01 2010-10-01 false Partial set-asides. 819... SOCIOECONOMIC PROGRAMS SMALL BUSINESS PROGRAMS Set-Asides for Small Business 819.502-3 Partial set-asides. When... particular procurement will be partially set aside for small business participation, the solicitation...

  5. 48 CFR 1319.502-3 - Partial set-asides.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 48 Federal Acquisition Regulations System 5 2010-10-01 2010-10-01 false Partial set-asides. 1319... PROGRAMS SMALL BUSINESS PROGRAMS Set-Asides for Small Business 1319.502-3 Partial set-asides. A partial set... and one small) will respond with offers unless the set-aside is authorized by the designee set...

  6. Special parallel processing workshop

    SciTech Connect

    1994-12-01

    This report contains viewgraphs from the Special Parallel Processing Workshop. These viewgraphs deal with topics such as parallel processing performance, message passing, queue structure, and other basic concepts relating to parallel processing.

  7. Microcomputer data acquisition and control.

    PubMed

    East, T D

    1986-01-01

    In medicine and biology there are many tasks that involve routine, well-defined procedures. These tasks are ideal candidates for computerized data acquisition and control. As the performance of microcomputers rapidly increases and cost continues to go down, the temptation to automate the laboratory becomes great. To the novice computer user the choices of hardware and software are overwhelming, and sadly most computer salespeople are not at all familiar with real-time applications. If you want to bill your patients you have hundreds of packaged systems to choose from; however, if you want to do real-time data acquisition the choices are very limited and confusing. The purpose of this chapter is to provide the novice computer user with the basics needed to set up a real-time data acquisition system with common microcomputers. This chapter will cover the following issues necessary to establish a real-time data acquisition and control system: Analysis of the research problem: Definition of the problem; Description of data and sampling requirements; Cost/benefit analysis. Choice of microcomputer hardware and software: Choice of microprocessor and bus structure; Choice of operating system; Choice of layered software. Digital Data Acquisition: Parallel data transmission; Serial data transmission; Hardware and software available. Analog Data Acquisition: Description of amplitude and frequency characteristics of the input signals; Sampling theorem; Specification of the analog-to-digital converter; Hardware and software available; Interface to the microcomputer. Microcomputer Control: Analog output; Digital output; Closed-loop control. Microcomputer data acquisition and control in the 21st century: What is in the future? High-speed digital medical equipment networks; Medical decision making and artificial intelligence. PMID:3805859

  8. Iterative algorithms for large sparse linear systems on parallel computers

    NASA Technical Reports Server (NTRS)

    Adams, L. M.

    1982-01-01

    Algorithms for assembling in parallel the sparse system of linear equations that result from finite difference or finite element discretizations of elliptic partial differential equations, such as those that arise in structural engineering are developed. Parallel linear stationary iterative algorithms and parallel preconditioned conjugate gradient algorithms are developed for solving these systems. In addition, a model for comparing parallel algorithms on array architectures is developed and results of this model for the algorithms are given.
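As an illustrative sketch in the same spirit (not the paper's algorithms; the problem size and iteration count are arbitrary demo choices), the following runs a Jacobi iteration on the sparse tridiagonal system from a 1-D Poisson discretization. Each sweep updates every unknown from the previous iterate only, which is exactly the property that makes such stationary iterations parallelizable on array architectures:

```python
import numpy as np

# 1-D model problem: -u'' = 1 on (0,1), u(0) = u(1) = 0, 3-point stencil,
# so the assembled system is sparse (tridiagonal)
n = 50
h = 1.0 / (n + 1)
f = np.ones(n)
u = np.zeros(n)

# Jacobi sweep: u_i <- (u_{i-1} + u_{i+1} + h^2 f_i) / 2.  Every update reads
# only the previous iterate, so all n updates can proceed in parallel.
for _ in range(6000):
    u = 0.5 * (np.r_[0.0, u[:-1]] + np.r_[u[1:], 0.0] + h * h * f)

# the discrete solution coincides with u(x) = x(1-x)/2 at the grid nodes
xg = h * np.arange(1, n + 1)
err = np.max(np.abs(u - xg * (1.0 - xg) / 2.0))
```

On a real parallel machine the vectorized sweep would be split across processors, with only the subdomain boundary values exchanged between sweeps.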

  9. Parallel rendering techniques for massively parallel visualization

    SciTech Connect

    Hansen, C.; Krogh, M.; Painter, J.

    1995-07-01

    As the resolution of simulation models increases, scientific visualization algorithms which take advantage of the large memory and parallelism of Massively Parallel Processors (MPPs) are becoming increasingly important. For large applications rendering on the MPP tends to be preferable to rendering on a graphics workstation due to the MPP's abundant resources: memory, disk, and numerous processors. The challenge becomes developing algorithms that can exploit these resources while minimizing overhead, typically communication costs. This paper describes recent efforts in parallel rendering for polygonal primitives as well as parallel volumetric techniques, presenting rendering algorithms, developed for massively parallel processors (MPPs), for polygons, spheres, and volumetric data. The polygon algorithm uses a data-parallel approach, whereas the sphere and volume renderers use a MIMD approach. Implementations for these algorithms are presented for the Thinking Machines Corporation CM-5 MPP.

  10. Syntax acquisition.

    PubMed

    Crain, Stephen; Thornton, Rosalind

    2012-03-01

    Every normal child acquires a language in just a few years. By 3 or 4 years old, children have effectively become adults in their abilities to produce and understand endlessly many sentences in a variety of conversational contexts. There are two alternative accounts of the course of children's language development. These different perspectives can be traced back to the nature versus nurture debate about how knowledge is acquired in any cognitive domain. One perspective dates back to Plato's dialog 'The Meno'. In this dialog, the protagonist, Socrates, demonstrates to Meno, an aristocrat in Ancient Greece, that a young slave knows more about geometry than he could have learned from experience. By extension, Plato's Problem refers to any gap between experience and knowledge. How children fill in the gap in the case of language continues to be the subject of much controversy in cognitive science. Any model of language acquisition must address three factors, inter alia: 1. The knowledge children accrue; 2. The input children receive (often called the primary linguistic data); 3. The nonlinguistic capacities of children to form and test generalizations based on the input. According to the famous linguist Noam Chomsky, the main task of linguistics is to explain how children bridge the gap (Chomsky calls it a 'chasm') between what they come to know about language, and what they could have learned from experience, even given optimistic assumptions about their cognitive abilities. Proponents of the alternative 'nurture' approach accuse nativists like Chomsky of overestimating the complexity of what children learn, underestimating the data children have to work with, and manifesting undue pessimism about children's abilities to extract information based on the input. The modern 'nurture' approach is often referred to as the usage-based account. We discuss the usage-based account first, and then the nativist account. After that, we report and discuss the findings of several

  11. Parallel algorithms and architectures

    SciTech Connect

    Albrecht, A.; Jung, H.; Mehlhorn, K.

    1987-01-01

    Contents of this book are the following: Preparata: Deterministic simulation of idealized parallel computers on more realistic ones; Convex hull of randomly chosen points from a polytope; Dataflow computing; Parallel in sequence; Towards the architecture of an elementary cortical processor; Parallel algorithms and static analysis of parallel programs; Parallel processing of combinatorial search; Communications; An O(n log n) cost parallel algorithm for the single function coarsest partition problem; Systolic algorithms for computing the visibility polygon and triangulation of a polygonal region; RELACS - A recursive layout computing system; and Parallel linear conflict-free subtree access.

  12. On the parallel solution of parabolic equations

    NASA Technical Reports Server (NTRS)

    Gallopoulos, E.; Saad, Youcef

    1989-01-01

    Parallel algorithms for the solution of linear parabolic problems are proposed. The first of these methods is based on using polynomial approximation to the exponential. It does not require solving any linear systems and is highly parallelizable. The two other methods proposed are based on Pade and Chebyshev approximations to the matrix exponential. The parallelization of these methods is achieved by using partial fraction decomposition techniques to solve the resulting systems and thus offers the potential for increased time parallelism in time dependent problems. Experimental results from the Alliant FX/8 and the Cray Y-MP/832 vector multiprocessors are also presented.
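The partial-fraction trick behind that parallelization can be shown on a toy rational function (a generic sketch with hypothetical poles, not the paper's Padé or Chebyshev coefficients): a product of two resolvents is rewritten as a weighted sum of single resolvents, turning one sequential pair of solves into two independent, parallelizable ones.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((6, 6))          # stand-in for a discretized operator
v = rng.standard_normal(6)
th1, th2 = 5.0 + 2.0j, 5.0 - 2.0j        # hypothetical poles of a rational approximant
I = np.eye(6)

# sequential form: two nested resolvent solves, one after the other
direct = np.linalg.solve(A - th1 * I, np.linalg.solve(A - th2 * I, v))

# partial-fraction form: 1/((z-th1)(z-th2)) = c1/(z-th1) + c2/(z-th2),
# so the two solves are independent and can run on different processors
c1, c2 = 1.0 / (th1 - th2), 1.0 / (th2 - th1)
pf = c1 * np.linalg.solve(A - th1 * I, v) + c2 * np.linalg.solve(A - th2 * I, v)
```

Both forms give the same vector; only the partial-fraction form exposes the solves as concurrent tasks.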

  13. The Spirituality of Second Language Acquisition

    ERIC Educational Resources Information Center

    Jackson, Baxter

    2006-01-01

    Parallels between the reconstruction of self in Alcoholics Anonymous and the reconstruction of self in second language acquisition are drawn out and examined in three areas: ego deflation, identification at depth, and mutual assistance. These spiritual principles are shown to be theoretically and empirically supported in SLA literature and…

  14. The Nexus task-parallel runtime system

    SciTech Connect

    Foster, I.; Tuecke, S.; Kesselman, C.

    1994-12-31

    A runtime system provides a parallel language compiler with an interface to the low-level facilities required to support interaction between concurrently executing program components. Nexus is a portable runtime system for task-parallel programming languages. Distinguishing features of Nexus include its support for multiple threads of control, dynamic processor acquisition, dynamic address space creation, a global memory model via interprocessor references, and asynchronous events. In addition, it supports heterogeneity at multiple levels, allowing a single computation to utilize different programming languages, executables, processors, and network protocols. Nexus is currently being used as a compiler target for two task-parallel languages: Fortran M and Compositional C++. In this paper, we present the Nexus design, outline techniques used to implement Nexus on parallel computers, show how it is used in compilers, and compare its performance with that of another runtime system.

  15. Data acquisition system for SLD

    SciTech Connect

    Sherden, D.J.

    1985-05-01

    This paper describes the data acquisition system planned for the SLD detector which is being constructed for use with the SLAC Linear Collider (SLC). An exclusively FASTBUS front-end system is used together with a VAX-based host system. While the volume of data transferred does not challenge the band-width capabilities of FASTBUS, extensive use is made of the parallel processing capabilities allowed by FASTBUS to reduce the data to a size which can be handled by the host system. The low repetition rate of the SLC allows a relatively simple software-based trigger. The principal components and overall architecture of the hardware and software are described.

  16. Investigating Second Language Acquisition.

    ERIC Educational Resources Information Center

    Jordens, Peter, Ed.; Lalleman, Josine, Ed.

    Essays in second language acquisition include: "The State of the Art in Second Language Acquisition Research" (Josine Lalleman); "Crosslinguistic Influence with Special Reference to the Acquisition of Grammar" (Michael Sharwood Smith); "Second Language Acquisition by Adult Immigrants: A Multiple Case Study of Turkish and Moroccan Learners of…

  17. Parallel solution of partial differential equations by extrapolation methods

    SciTech Connect

    Leland, Robert W.; Rollett, J. S.

    2015-02-01

    We have found, in the ROGE algorithm, an extrapolation process which is robust, effective and practically simple to implement. It removes the difficulty of needing to make a precise estimate of the over-relaxation parameter for Successive Over-Relaxation (SOR) type methods.

  18. MPP parallel forth

    NASA Technical Reports Server (NTRS)

    Dorband, John E.

    1987-01-01

    Massively Parallel Processor (MPP) Parallel FORTH is a derivative of FORTH-83 and Unified Software Systems' Uni-FORTH. The extension of FORTH into the realm of parallel processing on the MPP is described. With few exceptions, Parallel FORTH was made to follow the description of Uni-FORTH as closely as possible. Likewise, the parallel FORTH extensions were designed to be as philosophically similar to serial FORTH as possible. The MPP hardware characteristics, as viewed by the FORTH programmer, are discussed. Then a description is presented of how parallel FORTH is implemented on the MPP.

  19. Reducing acquisition time in clinical MRI by data undersampling and compressed sensing reconstruction

    NASA Astrophysics Data System (ADS)

    Hollingsworth, Kieren Grant

    2015-11-01

    MRI is often the most sensitive or appropriate technique for important measurements in clinical diagnosis and research, but lengthy acquisition times limit its use due to cost and considerations of patient comfort and compliance. Once an image field of view and resolution is chosen, the minimum scan acquisition time is normally fixed by the amount of raw data that must be acquired to meet the Nyquist criteria. Recently, there has been research interest in using the theory of compressed sensing (CS) in MR imaging to reduce scan acquisition times. The theory argues that if our target MR image is sparse, having signal information in only a small proportion of pixels (like an angiogram), or if the image can be mathematically transformed to be sparse then it is possible to use that sparsity to recover a high definition image from substantially less acquired data. This review starts by considering methods of k-space undersampling which have already been incorporated into routine clinical imaging (partial Fourier imaging and parallel imaging), and then explains the basis of using compressed sensing in MRI. The practical considerations of applying CS to MRI acquisitions are discussed, such as designing k-space undersampling schemes, optimizing adjustable parameters in reconstructions and exploiting the power of combined compressed sensing and parallel imaging (CS-PI). A selection of clinical applications that have used CS and CS-PI prospectively are considered. The review concludes by signposting other imaging acceleration techniques under present development before concluding with a consideration of the potential impact and obstacles to bringing compressed sensing into routine use in clinical MRI.

  20. Reducing acquisition time in clinical MRI by data undersampling and compressed sensing reconstruction.

    PubMed

    Hollingsworth, Kieren Grant

    2015-11-01

    MRI is often the most sensitive or appropriate technique for important measurements in clinical diagnosis and research, but lengthy acquisition times limit its use due to cost and considerations of patient comfort and compliance. Once an image field of view and resolution is chosen, the minimum scan acquisition time is normally fixed by the amount of raw data that must be acquired to meet the Nyquist criteria. Recently, there has been research interest in using the theory of compressed sensing (CS) in MR imaging to reduce scan acquisition times. The theory argues that if our target MR image is sparse, having signal information in only a small proportion of pixels (like an angiogram), or if the image can be mathematically transformed to be sparse then it is possible to use that sparsity to recover a high definition image from substantially less acquired data. This review starts by considering methods of k-space undersampling which have already been incorporated into routine clinical imaging (partial Fourier imaging and parallel imaging), and then explains the basis of using compressed sensing in MRI. The practical considerations of applying CS to MRI acquisitions are discussed, such as designing k-space undersampling schemes, optimizing adjustable parameters in reconstructions and exploiting the power of combined compressed sensing and parallel imaging (CS-PI). A selection of clinical applications that have used CS and CS-PI prospectively are considered. The review concludes by signposting other imaging acceleration techniques under present development before concluding with a consideration of the potential impact and obstacles to bringing compressed sensing into routine use in clinical MRI. PMID:26448064
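As a hedged toy of the compressed sensing idea described in the two records above (nothing like a clinical MRI pipeline; the sensing matrix, sparsity level, and penalty are all invented for the demo), the sketch below recovers a sparse signal from far fewer random measurements than unknowns using iterative soft thresholding (ISTA), the simplest sparsity-promoting reconstruction:

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, k = 200, 80, 5                     # unknowns, measurements, nonzeros
x_true = np.zeros(n)
x_true[rng.choice(n, size=k, replace=False)] = rng.standard_normal(k)

Phi = rng.standard_normal((m, n)) / np.sqrt(m)   # random sensing matrix
y = Phi @ x_true                                  # undersampled data (m < n)

# ISTA: gradient step on ||y - Phi x||^2, then soft thresholding (the L1
# prior that encodes sparsity of the target)
step = 1.0 / np.linalg.norm(Phi, 2) ** 2
lam = 0.01
x = np.zeros(n)
for _ in range(3000):
    x = x + step * (Phi.T @ (y - Phi @ x))
    x = np.sign(x) * np.maximum(np.abs(x) - step * lam, 0.0)

rel_err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
```

With only 80 of 200 Nyquist samples, the sparse signal is still recovered to small relative error, which is the leverage CS offers over acquiring fully sampled data.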

  1. Parallel Acquisition of Awareness and Differential Delay Eyeblink Conditioning

    ERIC Educational Resources Information Center

    Weidemann, Gabrielle; Antees, Cassandra

    2012-01-01

    There is considerable debate about whether differential delay eyeblink conditioning can be acquired without awareness of the stimulus contingencies. Previous investigations of the relationship between differential-delay eyeblink conditioning and awareness of the stimulus contingencies have assessed awareness after the conditioning session was…

  2. Parallel flow diffusion battery

    DOEpatents

    Yeh, Hsu-Chi; Cheng, Yung-Sung

    1984-08-07

    A parallel flow diffusion battery for determining the mass distribution of an aerosol has a plurality of diffusion cells mounted in parallel to an aerosol stream, each diffusion cell including a stack of mesh wire screens of different density.

  3. Parallel flow diffusion battery

    DOEpatents

    Yeh, H.C.; Cheng, Y.S.

    1984-01-01

    A parallel flow diffusion battery for determining the mass distribution of an aerosol has a plurality of diffusion cells mounted in parallel to an aerosol stream, each diffusion cell including a stack of mesh wire screens of different density.

  4. Color Vision Deficits and Literacy Acquisition.

    ERIC Educational Resources Information Center

    Hurley, Sandra Rollins

    1994-01-01

    Shows that color blindness, whether partial or total, inhibits literacy acquisition. Offers a case study of a third grader with impaired color vision. Presents a review of literature on the topic. Notes that people with color vision deficits are often unaware of the handicap. (RS)

  5. Parallel simulation today

    NASA Technical Reports Server (NTRS)

    Nicol, David; Fujimoto, Richard

    1992-01-01

    This paper surveys topics that presently define the state of the art in parallel simulation. Included in the tutorial are discussions on new protocols, mathematical performance analysis, time parallelism, hardware support for parallel simulation, load balancing algorithms, and dynamic memory management for optimistic synchronization.

  6. Acquisition of Three Word Knowledge Aspects through Reading

    ERIC Educational Resources Information Center

    Daskalovska, Nina

    2016-01-01

    A number of studies have shown that second or foreign language learners can acquire vocabulary through reading. The aim of the study was to investigate (a) the effects of reading an authentic novel on the acquisition of 3 aspects of word knowledge: spelling, meaning, and collocation; (b) the influence of reading on the acquisition of partial and…

  7. Parallel genotypic adaptation: when evolution repeats itself

    PubMed Central

    Wood, Troy E.; Burke, John M.; Rieseberg, Loren H.

    2008-01-01

    Until recently, parallel genotypic adaptation was considered unlikely because phenotypic differences were thought to be controlled by many genes. There is increasing evidence, however, that phenotypic variation sometimes has a simple genetic basis and that parallel adaptation at the genotypic level may be more frequent than previously believed. Here, we review evidence for parallel genotypic adaptation derived from a survey of the experimental evolution, phylogenetic, and quantitative genetic literature. The most convincing evidence of parallel genotypic adaptation comes from artificial selection experiments involving microbial populations. In some experiments, up to half of the nucleotide substitutions found in independent lineages under uniform selection are the same. Phylogenetic studies provide a means for studying parallel genotypic adaptation in non-experimental systems, but conclusive evidence may be difficult to obtain because homoplasy can arise for other reasons. Nonetheless, phylogenetic approaches have provided evidence of parallel genotypic adaptation across all taxonomic levels, not just microbes. Quantitative genetic approaches also suggest parallel genotypic evolution across both closely and distantly related taxa, but it is important to note that this approach cannot distinguish between parallel changes at homologous loci versus convergent changes at closely linked non-homologous loci. The finding that parallel genotypic adaptation appears to be frequent and occurs at all taxonomic levels has important implications for phylogenetic and evolutionary studies. With respect to phylogenetic analyses, parallel genotypic changes, if common, may result in faulty estimates of phylogenetic relationships. From an evolutionary perspective, the occurrence of parallel genotypic adaptation provides increasing support for determinism in evolution and may provide a partial explanation for how species with low levels of gene flow are held together. PMID:15881688

  8. Photon detection with parallel asynchronous processing

    NASA Technical Reports Server (NTRS)

    Coon, D. D.; Perera, A. G. U.

    1990-01-01

    An approach to photon detection with a parallel asynchronous signal processor is described. The visible or IR photon-detection capability of the silicon p(+)-n-n(+) detectors and the parallel asynchronous processing are addressed separately. This approach would permit an independent analog processing channel to be dedicated to every pixel. A laminar architecture consisting of a stack of planar arrays of the devices would form a 2D array processor with a 2D array of inputs located directly behind a focal-plane detector array. A 2D image data stream would propagate in neuronlike asynchronous pulse-coded form through the laminar processor. Such systems can integrate image acquisition and image processing. Acquisition and processing would be performed concurrently as in natural vision systems. The possibility of multispectral image processing is addressed.

  9. Eclipse Parallel Tools Platform

    SciTech Connect

    Watson, Gregory; DeBardeleben, Nathan; Rasmussen, Craig

    2005-02-18

    Designing and developing parallel programs is an inherently complex task. Developers must choose from the many parallel architectures and programming paradigms that are available, and face a plethora of tools that are required to execute, debug, and analyze parallel programs in these environments. Few, if any, of these tools provide any degree of integration, or indeed any commonality in their user interfaces at all. This further complicates the parallel developer's task, hampering software engineering practices, and ultimately reducing productivity. One consequence of this complexity is that best practice in parallel application development has not advanced to the same degree as more traditional programming methodologies. The result is that there is currently no open-source, industry-strength platform that provides a highly integrated environment specifically designed for parallel application development. Eclipse is a universal tool-hosting platform that is designed to provide a robust, full-featured, commercial-quality, industry platform for the development of highly integrated tools. It provides a wide range of core services for tool integration that allow tool producers to concentrate on their tool technology rather than on platform-specific issues. The Eclipse Integrated Development Environment is an open-source project that is supported by over 70 organizations, including IBM, Intel and HP. The Eclipse Parallel Tools Platform (PTP) plug-in extends the Eclipse framework by providing support for a rich set of parallel programming languages and paradigms, and a core infrastructure for the integration of a wide variety of parallel tools. The first version of the PTP is a prototype that only provides minimal functionality for parallel tool integration, support for a small number of parallel architectures, and basis

  10. Surface acquisition through virtual milling

    NASA Technical Reports Server (NTRS)

    Merriam, Marshal L.

    1993-01-01

    Surface acquisition deals with the reconstruction of three-dimensional objects from a set of data points. The most straightforward techniques require human intervention, a time-consuming proposition. It is desirable to develop a fully automated alternative. Such a method is proposed in this paper. It makes use of surface measurements obtained from a 3-D laser digitizer, an instrument which provides the (x,y,z) coordinates of surface data points from various viewpoints. These points are assembled into several partial surfaces using a visibility constraint and a 2-D triangulation technique. Reconstruction of the final object requires merging these partial surfaces. This is accomplished through a procedure that emulates milling, a standard machining operation. From a geometrical standpoint the problem reduces to constructing the intersection of two or more non-convex polyhedra.

  11. Language Acquisition without an Acquisition Device

    ERIC Educational Resources Information Center

    O'Grady, William

    2012-01-01

    Most explanatory work on first and second language learning assumes the primacy of the acquisition phenomenon itself, and a good deal of work has been devoted to the search for an "acquisition device" that is specific to humans, and perhaps even to language. I will consider the possibility that this strategy is misguided and that language…

  12. Clonidine reverses spatial learning deficits and reinstates theta frequencies in rats with partial fornix section.

    PubMed

    Ammassari-Teule, M; Maho, C; Sara, S J

    1991-10-25

    Rats received knife-cuts to the dorsal fornix or sham-operations. Half of the animals from each group were injected with clonidine (0.01 mg/kg) and the others with saline before each daily trial of a 10-trial radial 8-arm maze task. The number of choices before the first repetition and the run time were used as performance indices. Lesioned rats were significantly impaired in the acquisition of this task. Clonidine-treated rats, lesioned or not, had an acquisition profile indistinguishable from that of sham-operated saline-injected rats, in spite of their increased run time. When tested one week after the last learning trial in a no-drug condition, lesioned rats treated with clonidine throughout learning maintained a high level of performance during the 5-day retraining phase. A parallel analysis of theta rhythms recorded in an independent group of rats placed in equivalent treatment and/or lesion conditions was then performed. Preoperatively, clonidine injections decreased theta frequency during both alert immobility and movement. Partial fornix lesions produced an increase in theta frequency. Finally, clonidine in fornix-damaged rats decreased theta frequency, thus reinstating the postoperative values at a level statistically no different from that recorded preoperatively. The role of clonidine in restoring the function of the septo-hippocampal input in partially fornix-damaged rats through a noradrenergic modulation of hippocampal acetylcholine release is discussed. PMID:1662515

  13. Parallel Adaptive Mesh Refinement

    SciTech Connect

    Diachin, L; Hornung, R; Plassmann, P; WIssink, A

    2005-03-04

    As large-scale, parallel computers have become more widely available and numerical models and algorithms have advanced, the range of physical phenomena that can be simulated has expanded dramatically. Many important science and engineering problems exhibit solutions with localized behavior where highly-detailed salient features or large gradients appear in certain regions which are separated by much larger regions where the solution is smooth. Examples include chemically-reacting flows with radiative heat transfer, high Reynolds number flows interacting with solid objects, and combustion problems where the flame front is essentially a two-dimensional sheet occupying a small part of a three-dimensional domain. Modeling such problems numerically requires approximating the governing partial differential equations on a discrete domain, or grid. Grid spacing is an important factor in determining the accuracy and cost of a computation. A fine grid may be needed to resolve key local features while a much coarser grid may suffice elsewhere. Employing a fine grid everywhere may be inefficient at best and, at worst, may make an adequately resolved simulation impractical. Moreover, the location and resolution of fine grid required for an accurate solution is a dynamic property of a problem's transient features and may not be known a priori. Adaptive mesh refinement (AMR) is a technique that can be used with both structured and unstructured meshes to adjust local grid spacing dynamically to capture solution features with an appropriate degree of resolution. Thus, computational resources can be focused where and when they are needed most to efficiently achieve an accurate solution without incurring the cost of a globally-fine grid. Figure 1.1 shows two example computations using AMR; on the left is a structured mesh calculation of a impulsively-sheared contact surface and on the right is the fuselage and volume discretization of an RAH-66 Comanche helicopter [35]. Note the
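A minimal sketch of the gradient-based flagging such AMR codes apply (a toy 1-D criterion with an arbitrarily chosen threshold, not the logic of any particular AMR library):

```python
import numpy as np

# Toy 1-D AMR flagging: refine only cells where the solution gradient is
# steep.  The tanh front and the threshold of 5.0 are invented for the demo.
x = np.linspace(0.0, 1.0, 65)            # coarse grid (64 cells)
u = np.tanh(50.0 * (x - 0.5))            # solution with a sharp front at 0.5
grad = np.abs(np.diff(u)) / np.diff(x)   # per-cell gradient estimate
flags = grad > 5.0                       # cells to refine

# refinement touches only a small fraction of the domain
refined_fraction = flags.mean()
```

Only the handful of cells straddling the front are flagged, so fine resolution is spent where the solution varies rapidly rather than over the whole domain.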

  14. Parallel Atomistic Simulations

    SciTech Connect

    HEFFELFINGER,GRANT S.

    2000-01-18

    Algorithms developed to enable the use of atomistic molecular simulation methods with parallel computers are reviewed. Methods appropriate for bonded as well as non-bonded (and charged) interactions are included. While strategies for obtaining parallel molecular simulations have been developed for the full variety of atomistic simulation methods, molecular dynamics and Monte Carlo have received the most attention. Three main types of parallel molecular dynamics simulations have been developed: the replicated data decomposition, the spatial decomposition, and the force decomposition. For Monte Carlo simulations, parallel algorithms have been developed which can be divided into two categories: those which require a modified Markov chain and those which do not. Parallel algorithms developed for other simulation methods, such as Gibbs ensemble Monte Carlo, grand canonical molecular dynamics, and Monte Carlo methods for protein structure determination, are also reviewed, and issues such as how to measure parallel efficiency, especially in the case of parallel Monte Carlo algorithms with modified Markov chains, are discussed.

  15. Visualization and Tracking of Parallel CFD Simulations

    NASA Technical Reports Server (NTRS)

    Vaziri, Arsi; Kremenetsky, Mark

    1995-01-01

    We describe a system for interactive visualization and tracking of a 3-D unsteady computational fluid dynamics (CFD) simulation on a parallel computer. CM/AVS, a distributed, parallel implementation of a visualization environment (AVS), runs on the CM-5 parallel supercomputer. A CFD solver is run as a CM/AVS module on the CM-5. Data communication between the solver, other parallel visualization modules, and a graphics workstation, which is running AVS, is handled by CM/AVS. Partitioning of the visualization task, between the CM-5 and the workstation, can be done interactively in the visual programming environment provided by AVS. Flow solver parameters can also be altered by programmable interactive widgets. This system partially removes the requirement of storing large solution files at frequent time steps, a characteristic of the traditional 'simulate → store → visualize' post-processing approach.

  16. A high speed buffer for LV data acquisition

    NASA Technical Reports Server (NTRS)

    Cavone, Angelo A.; Sterlina, Patrick S.; Clemmons, James I., Jr.; Meyers, James F.

    1987-01-01

    The laser velocimeter (autocovariance) buffer interface is a data acquisition subsystem designed specifically for the acquisition of data from a laser velocimeter. The subsystem acquires data from up to six laser velocimeter components in parallel, measures the times between successive data points for each of the components, establishes and maintains a coincident condition between any two or three components, and acquires data from other instrumentation systems simultaneously with the laser velocimeter data points. The subsystem is designed to control the entire data acquisition process based on initial setup parameters obtained from a host computer and to be independent of the computer during the acquisition. On completion of the acquisition cycle, the interface transfers the contents of its memory to the host under direction of the host via a single 16-bit parallel DMA channel.
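    The coincidence condition between components can be illustrated with a small sketch. The following is our own toy model (not the actual buffer-interface hardware logic; function and variable names are ours): timestamps from two channels are paired only when they fall within a common time window, a two-pointer sweep over sorted arrival times.

```python
def coincident_pairs(times_a, times_b, window):
    """Pair measurements from two components whose arrival times differ
    by at most `window` (illustrative of the coincidence condition,
    not the interface's actual implementation)."""
    pairs = []
    i = j = 0
    while i < len(times_a) and j < len(times_b):
        dt = times_b[j] - times_a[i]
        if abs(dt) <= window:
            pairs.append((times_a[i], times_b[j]))
            i += 1
            j += 1
        elif dt < 0:
            j += 1   # channel B is behind; advance it
        else:
            i += 1   # channel A is behind; advance it
    return pairs

# Two channels with microsecond timestamps; only near-simultaneous
# events on both channels count as a coincident measurement.
a = [10.0, 55.0, 120.0, 300.0]
b = [11.5, 119.0, 250.0, 301.0]
print(coincident_pairs(a, b, window=2.0))
# → [(10.0, 11.5), (120.0, 119.0), (300.0, 301.0)]
```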

  17. Survival of the Partial Reinforcement Extinction Effect after Contextual Shifts

    ERIC Educational Resources Information Center

    Boughner, Robert L.; Papini, Mauricio R.

    2006-01-01

    The effects of contextual shifts on the partial reinforcement extinction effect (PREE) were studied in autoshaping with rats. Experiment 1 established that the two contexts used subsequently were easily discriminable and equally salient. In Experiment 2, independent groups of rats received acquisition training under partial reinforcement (PRF) or…

  18. 48 CFR 219.502-3 - Partial set-asides.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 48 Federal Acquisition Regulations System 3 2010-10-01 2010-10-01 false Partial set-asides. 219..., DEPARTMENT OF DEFENSE SOCIOECONOMIC PROGRAMS SMALL BUSINESS PROGRAMS Set-Asides for Small Business 219.502-3 Partial set-asides. (c)(1) If the North American Industry Classification System Industry Subsector of...

  19. Correction for Eddy Current-Induced Echo-Shifting Effect in Partial-Fourier Diffusion Tensor Imaging

    PubMed Central

    Truong, Trong-Kha; Song, Allen W.; Chen, Nan-kuei

    2015-01-01

    In most diffusion tensor imaging (DTI) studies, images are acquired with either a partial-Fourier or a parallel partial-Fourier echo-planar imaging (EPI) sequence, in order to shorten the echo time and increase the signal-to-noise ratio (SNR). However, eddy currents induced by the diffusion-sensitizing gradients can often lead to a shift of the echo in k-space, resulting in three distinct types of artifacts in partial-Fourier DTI. Here, we present an improved DTI acquisition and reconstruction scheme, capable of generating high-quality and high-SNR DTI data without eddy current-induced artifacts. This new scheme consists of three components, respectively, addressing the three distinct types of artifacts. First, a k-space energy-anchored DTI sequence is designed to recover eddy current-induced signal loss (i.e., Type 1 artifact). Second, a multischeme partial-Fourier reconstruction is used to eliminate artificial signal elevation (i.e., Type 2 artifact) associated with the conventional partial-Fourier reconstruction. Third, a signal intensity correction is applied to remove artificial signal modulations due to eddy current-induced erroneous T2∗-weighting (i.e., Type 3 artifact). These systematic improvements will greatly increase the consistency and accuracy of DTI measurements, expanding the utility of DTI in translational applications where quantitative robustness is much needed. PMID:26413505

  20. EARLY SYNTACTIC ACQUISITION.

    ERIC Educational Resources Information Center

    KELLEY, K.L.

    THIS PAPER IS A STUDY OF A CHILD'S EARLIEST PRETRANSFORMATIONAL LANGUAGE ACQUISITION PROCESSES. A MODEL IS CONSTRUCTED BASED ON THE ASSUMPTIONS (1) THAT SYNTACTIC ACQUISITION OCCURS THROUGH THE TESTING OF HYPOTHESES REFLECTING THE INITIAL STRUCTURE OF THE ACQUISITION MECHANISM AND THE LANGUAGE DATA TO WHICH THE CHILD IS EXPOSED, AND (2) THAT…

  1. The HyperCP data acquisition system

    SciTech Connect

    Kaplan, D.M.; E871 Collaboration

    1997-06-01

    For the HyperCP experiment at Fermilab, we have assembled a data acquisition system that records on up to 45 Exabyte 8505 tape drives in parallel at up to 17 MB/s. During the beam spill, data are acquired from the front-end digitization systems at ~60 MB/s via five parallel data paths. The front-end systems achieve typical readout deadtime of ~1 µs per event, allowing operation at 75-kHz trigger rate with ≲30% deadtime. Event building and tapewriting are handled by 15 Motorola MVME167 processors in 5 VME crates.

  2. Parallel digital forensics infrastructure.

    SciTech Connect

    Liebrock, Lorie M.; Duggan, David Patrick

    2009-10-01

    This report documents the architecture and implementation of a Parallel Digital Forensics infrastructure. This infrastructure is necessary for supporting the design, implementation, and testing of new classes of parallel digital forensics tools. Digital forensics has become extremely difficult with data sets of one terabyte and larger. The only way to overcome the processing time of these large sets is to identify and develop new parallel algorithms for performing the analysis. To support algorithm research, a flexible base infrastructure is required. A candidate architecture for this base infrastructure was designed, instantiated, and tested by this project, in collaboration with New Mexico Tech. Previous infrastructures were not designed and built specifically for the development and testing of parallel algorithms. With the size of forensics data sets only expected to increase significantly, this type of infrastructure support is necessary for continued research in parallel digital forensics. This report thus serves as a record of both the design and the implementation of the parallel digital forensics (PDF) infrastructure.

  3. 48 CFR 52.219-7 - Notice of Partial Small Business Set-Aside.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 48 Federal Acquisition Regulations System 2 2010-10-01 2010-10-01 false Notice of Partial Small Business Set-Aside. 52.219-7 Section 52.219-7 Federal Acquisition Regulations System FEDERAL ACQUISITION... Federal Prison Industries, Inc., will be solicited and considered for both the set-aside and...

  4. PCLIPS: Parallel CLIPS

    NASA Technical Reports Server (NTRS)

    Hall, Lawrence O.; Bennett, Bonnie H.; Tello, Ivan

    1994-01-01

    A parallel version of CLIPS 5.1 has been developed to run on Intel Hypercubes. The user interface is the same as that for CLIPS, with some added commands to allow for parallel calls. A complete version of CLIPS runs on each node of the hypercube. The system has been instrumented to display the time spent in the match, recognize, and act cycles on each node. Only rule-level parallelism is supported. Parallel commands enable the assertion and retraction of facts to/from remote nodes' working memory. Parallel CLIPS was used to implement a knowledge-based command, control, communications, and intelligence (C³I) system to demonstrate the fusion of high-level, disparate sources. We discuss the nature of the information fusion problem, our approach, and implementation. Parallel CLIPS has also been used to run several benchmark parallel knowledge bases, such as one to set up a cafeteria. Results from running Parallel CLIPS with parallel knowledge base partitions indicate that significant speed increases, including superlinear in some cases, are possible.

  5. Parallel preconditioning techniques for sparse CG solvers

    SciTech Connect

    Basermann, A.; Reichel, B.; Schelthoff, C.

    1996-12-31

    Conjugate gradient (CG) methods for solving sparse systems of linear equations play an important role in numerical methods for solving discretized partial differential equations. The large size and poor conditioning of the systems arising in many technical and physical applications in this area result in the need for efficient parallelization and preconditioning techniques for the CG method. In particular, for very ill-conditioned matrices, sophisticated preconditioners are necessary to obtain both acceptable convergence and accuracy from CG. Here, we investigate variants of polynomial and incomplete Cholesky preconditioners that markedly reduce the iteration counts of the simply diagonally scaled CG and are shown to be well suited for massively parallel machines.
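    For reference, the baseline mentioned in the abstract, diagonally scaled (Jacobi-preconditioned) CG, can be sketched as follows. This is a generic textbook formulation of our own, not the paper's parallel implementation; the point is that each preconditioner application is an element-wise divide, which is trivially parallel, whereas polynomial or incomplete-Cholesky variants replace that step with a stronger but costlier solve.

```python
import numpy as np

def preconditioned_cg(A, b, tol=1e-10, max_iter=1000):
    """Conjugate gradient with Jacobi (diagonal) preconditioning."""
    M_inv = 1.0 / np.diag(A)          # inverse of the diagonal preconditioner
    x = np.zeros_like(b)
    r = b - A @ x                     # initial residual
    z = M_inv * r                     # preconditioned residual
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = M_inv * r                 # element-wise: embarrassingly parallel
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

# SPD test system: 1-D discrete Laplacian (a discretized Poisson problem).
n = 50
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
x = preconditioned_cg(A, b)
```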

  6. Eclipse Parallel Tools Platform

    Energy Science and Technology Software Center (ESTSC)

    2005-02-18

    Designing and developing parallel programs is an inherently complex task. Developers must choose from the many parallel architectures and programming paradigms that are available, and face a plethora of tools that are required to execute, debug, and analyze parallel programs in these environments. Few, if any, of these tools provide any degree of integration, or indeed any commonality in their user interfaces at all. This further complicates the parallel developer's task, hampering software engineering practices, and ultimately reducing productivity. One consequence of this complexity is that best practice in parallel application development has not advanced to the same degree as more traditional programming methodologies. The result is that there is currently no open-source, industry-strength platform that provides a highly integrated environment specifically designed for parallel application development. Eclipse is a universal tool-hosting platform that is designed to provide a robust, full-featured, commercial-quality, industry platform for the development of highly integrated tools. It provides a wide range of core services for tool integration that allow tool producers to concentrate on their tool technology rather than on platform-specific issues. The Eclipse Integrated Development Environment is an open-source project that is supported by over 70 organizations, including IBM, Intel and HP. The Eclipse Parallel Tools Platform (PTP) plug-in extends the Eclipse framework by providing support for a rich set of parallel programming languages and paradigms, and a core infrastructure for the integration of a wide variety of parallel tools. The first version of the PTP is a prototype that provides only minimal functionality for parallel tool integration, support for a small number of parallel architectures

  7. Parallel scheduling algorithms

    SciTech Connect

    Dekel, E.; Sahni, S.

    1983-01-01

    Parallel algorithms are given for scheduling problems such as scheduling to minimize the number of tardy jobs, job sequencing with deadlines, scheduling to minimize earliness and tardiness penalties, channel assignment, and minimizing the mean finish time. The shared memory model of parallel computers is used to obtain fast algorithms. 26 references.

  8. Massively parallel mathematical sieves

    SciTech Connect

    Montry, G.R.

    1989-01-01

    The Sieve of Eratosthenes is a well-known algorithm for finding all prime numbers in a given subset of integers. A parallel version of the Sieve is described that produces computational speedups over 800 on a hypercube with 1,024 processing elements for problems of fixed size. Computational speedups as high as 980 are achieved when the problem size per processor is fixed. The method of parallelization generalizes to other sieves and will be efficient on any ensemble architecture. We investigate two highly parallel sieves using scattered decomposition and compare their performance on a hypercube multiprocessor. A comparison of different parallelization techniques for the sieve illustrates the trade-offs necessary in the design and implementation of massively parallel algorithms for large ensemble computers.
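    The block-decomposition idea behind a parallel sieve can be sketched in a few lines. This is our own illustrative version, not the paper's hypercube implementation: a thread pool stands in for the ensemble of processors (real speedup requires distributed processes), and each worker independently marks composites in its own segment using a shared set of base primes up to √n.

```python
import math
from concurrent.futures import ThreadPoolExecutor

def base_primes(limit):
    """Serial Sieve of Eratosthenes up to and including limit."""
    if limit < 2:
        return []
    is_prime = [True] * (limit + 1)
    is_prime[0] = is_prime[1] = False
    for p in range(2, math.isqrt(limit) + 1):
        if is_prime[p]:
            for m in range(p * p, limit + 1, p):
                is_prime[m] = False
    return [i for i, v in enumerate(is_prime) if v]

def sieve_block(lo, hi, primes):
    """Mark composites in the block [lo, hi) using shared base primes."""
    flags = [True] * (hi - lo)
    for p in primes:
        # First multiple of p in the block, but never p itself.
        start = max(p * p, ((lo + p - 1) // p) * p)
        for m in range(start, hi, p):
            flags[m - lo] = False
    return [lo + i for i, v in enumerate(flags) if v]

def parallel_sieve(n, workers=4):
    """Block decomposition: each worker sieves its own segment
    independently; no communication is needed after the base primes
    are broadcast."""
    primes = base_primes(math.isqrt(n))
    step = max(1, (n - 1) // workers + 1)
    blocks = [(lo, min(lo + step, n + 1)) for lo in range(2, n + 1, step)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        parts = pool.map(lambda blk: sieve_block(blk[0], blk[1], primes), blocks)
        return [p for part in parts for p in part]
```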

  9. Parallel computing works

    SciTech Connect

    Not Available

    1991-10-23

    An account of the Caltech Concurrent Computation Program (C³P), a five year project that focused on answering the question: Can parallel computers be used to do large-scale scientific computations? As the title indicates, the question is answered in the affirmative, by implementing numerous scientific applications on real parallel computers and doing computations that produced new scientific results. In the process of doing so, C³P helped design and build several new computers, designed and implemented basic system software, developed algorithms for frequently used mathematical computations on massively parallel machines, devised performance models and measured the performance of many computers, and created a high performance computing facility based exclusively on parallel computers. While the initial focus of C³P was the hypercube architecture developed by C. Seitz, many of the methods developed and lessons learned have been applied successfully on other massively parallel architectures.

  10. Partial (focal) seizure

    MedlinePlus

    ... Jacksonian seizure; Seizure - partial (focal); Temporal lobe seizure; Epilepsy - partial seizures ... Abou-Khalil BW, Gallagher MJ, Macdonald RL. Epilepsies. In: Daroff ... Practice . 7th ed. Philadelphia, PA: Elsevier; 2016:chap 101. ...

  11. Partial (focal) seizure

    MedlinePlus

    ... Jacksonian seizure; Seizure - partial (focal); Temporal lobe seizure; Epilepsy - partial seizures ... Abou-Khalil BW, Gallagher MJ, Macdonald RL. Epilepsies. In: Daroff RB, ... 6th ed. Philadelphia, PA: Elsevier Saunders; 2012:chap 67. ...

  12. Partial tooth gear bearings

    NASA Technical Reports Server (NTRS)

    Vranish, John M. (Inventor)

    2010-01-01

    A partial gear bearing including an upper half, comprising peak partial teeth, and a lower, or bottom, half, comprising valley partial teeth. The upper half also has an integrated roller section between each of the peak partial teeth with a radius equal to the gear pitch radius of the radially outwardly extending peak partial teeth. Conversely, the lower half has an integrated roller section between each of the valley partial teeth with a radius also equal to the gear pitch radius of the peak partial teeth. The valley partial teeth extend radially inwardly from their roller sections. The peak and valley partial teeth are exactly out of phase with each other, as are the roller sections of the upper and lower halves. Essentially, the end roller bearing of the typical gear bearing has been integrated into the normal gear tooth pattern.

  13. Parallel adaptive wavelet collocation method for PDEs

    SciTech Connect

    Nejadmalayeri, Alireza; Vezolainen, Alexei; Brown-Dymkoski, Eric; Vasilyev, Oleg V.

    2015-10-01

    A parallel adaptive wavelet collocation method for solving a large class of Partial Differential Equations is presented. The parallelization is achieved by developing an asynchronous parallel wavelet transform, which allows one to perform parallel wavelet transform and derivative calculations with only one data synchronization at the highest level of resolution. The data are stored using a tree-like structure with tree roots starting at an a priori defined level of resolution. Both static and dynamic domain partitioning approaches are developed. For the dynamic domain partitioning, trees are considered to be the minimum quanta of data to be migrated between the processes. This allows fully automated and efficient handling of non-simply connected partitioning of a computational domain. Dynamic load balancing is achieved via domain repartitioning during the grid adaptation step and reassigning trees to the appropriate processes to ensure approximately the same number of grid points on each process. The parallel efficiency of the approach is discussed based on parallel adaptive wavelet-based Coherent Vortex Simulations of homogeneous turbulence with linear forcing at effective non-adaptive resolutions up to 2048³ using as many as 2048 CPU cores.

  14. Excessive acquisition in hoarding.

    PubMed

    Frost, Randy O; Tolin, David F; Steketee, Gail; Fitch, Kristin E; Selbo-Bruns, Alexandra

    2009-06-01

    Compulsive hoarding (the acquisition of and failure to discard large numbers of possessions) is associated with substantial health risk, impairment, and economic burden. However, little research has examined separate components of this definition, particularly excessive acquisition. The present study examined acquisition in hoarding. Participants, 878 self-identified with hoarding and 665 family informants (not matched to hoarding participants), completed an Internet survey. Among hoarding participants who met criteria for clinically significant hoarding, 61% met criteria for a diagnosis of compulsive buying and approximately 85% reported excessive acquisition. Family informants indicated that nearly 95% exhibited excessive acquisition. Those who acquired excessively had more severe hoarding; their hoarding had an earlier onset and resulted in more psychiatric work impairment days; and they experienced more symptoms of obsessive-compulsive disorder, depression, and anxiety. Two forms of excessive acquisition (buying and free things) each contributed independent variance in the prediction of hoarding severity and related symptoms. PMID:19261435

  15. Excessive Acquisition in Hoarding

    PubMed Central

    Frost, Randy O.; Tolin, David F.; Steketee, Gail; Fitch, Kristin E.; Selbo-Bruns, Alexandra

    2009-01-01

    Compulsive hoarding (the acquisition of and failure to discard large numbers of possessions) is associated with substantial health risk, impairment, and economic burden. However, little research has examined separate components of this definition, particularly excessive acquisition. The present study examined acquisition in hoarding. Participants, 878 self-identified with hoarding and 665 family informants (not matched to hoarding participants), completed an internet survey. Among hoarding participants who met criteria for clinically significant hoarding, 61% met criteria for a diagnosis of compulsive buying and approximately 85% reported excessive acquisition. Family informants indicated that nearly 95% exhibited excessive acquisition. Those who acquired excessively had more severe hoarding; their hoarding had an earlier onset and resulted in more psychiatric work impairment days; and they experienced more symptoms of obsessive-compulsive disorder, depression, and anxiety. Two forms of excessive acquisition (buying and free things) each contributed independent variance in the prediction of hoarding severity and related symptoms. PMID:19261435

  16. Parallel nearest neighbor calculations

    NASA Astrophysics Data System (ADS)

    Trease, Harold

    We are just starting to parallelize the nearest neighbor portion of our free-Lagrange code. Our implementation of the nearest neighbor reconnection algorithm has not been parallelizable (i.e., we just flip one connection at a time). In this paper we consider what sort of nearest neighbor algorithms lend themselves to being parallelized. For example, the construction of the Voronoi mesh can be parallelized, but the construction of the Delaunay mesh (dual to the Voronoi mesh) cannot because of degenerate connections. We will show our most recent attempt to tessellate space with triangles or tetrahedrons with a new nearest neighbor construction algorithm called DAM (Dial-A-Mesh). This method has the characteristics of a parallel algorithm and produces a better tessellation of space than the Delaunay mesh. Parallel processing is becoming an everyday reality for us at Los Alamos. Our current production machines are Cray YMPs with 8 processors that can run independently or combined to work on one job. We are also exploring massive parallelism through the use of two 64K processor Connection Machines (CM2), where all the processors run in lock step mode. The effective application of 3-D computer models requires the use of parallel processing to achieve reasonable "turn around" times for our calculations.

  17. Bilingual parallel programming

    SciTech Connect

    Foster, I.; Overbeek, R.

    1990-01-01

    Numerous experiments have demonstrated that computationally intensive algorithms support adequate parallelism to exploit the potential of large parallel machines. Yet successful parallel implementations of serious applications are rare. The limiting factor is clearly programming technology. None of the approaches to parallel programming that have been proposed to date -- whether parallelizing compilers, language extensions, or new concurrent languages -- seem to adequately address the central problems of portability, expressiveness, efficiency, and compatibility with existing software. In this paper, we advocate an alternative approach to parallel programming based on what we call bilingual programming. We present evidence that this approach provides an effective solution to parallel programming problems. The key idea in bilingual programming is to construct the upper levels of applications in a high-level language while coding selected low-level components in low-level languages. This approach permits the advantages of a high-level notation (expressiveness, elegance, conciseness) to be obtained without the cost in performance normally associated with high-level approaches. In addition, it provides a natural framework for reusing existing code.

  18. Streamlined acquisition handbook

    NASA Technical Reports Server (NTRS)

    1990-01-01

    NASA has always placed great emphasis on the acquisition process, recognizing it as among its most important activities. This handbook is intended to facilitate the application of streamlined acquisition procedures. The development of these procedures reflects the efforts of an action group composed of NASA Headquarters and center acquisition professionals. It is the intent to accomplish the real change in the acquisition process as a result of this effort. An important part of streamlining the acquisition process is a commitment by the people involved in the process to accomplishing acquisition activities quickly and with high quality. Too often we continue to accomplish work in 'the same old way' without considering available alternatives which would require no changes to regulations, approvals from Headquarters, or waivers of required practice. Similarly, we must be sensitive to schedule opportunities throughout the acquisition cycle, not just once the purchase request arrives at the procurement office. Techniques that have been identified as ways of reducing acquisition lead time while maintaining high quality in our acquisition process are presented.

  19. Parallel system simulation

    SciTech Connect

    Tai, H.M.; Saeks, R.

    1984-03-01

    A relaxation algorithm for solving large-scale system simulation problems in parallel is proposed. The algorithm, which is composed of both a time-step parallel algorithm and a component-wise parallel algorithm, is described. The interconnected nature of the system, which is characterized by the component connection model, is fully exploited by this approach. A technique for finding an optimal number of the time steps is also described. Finally, this algorithm is illustrated via several examples in which the possible trade-offs between the speed-up ratio, efficiency, and waiting time are analyzed.

  20. The NAS parallel benchmarks

    NASA Technical Reports Server (NTRS)

    Bailey, David (Editor); Barton, John (Editor); Lasinski, Thomas (Editor); Simon, Horst (Editor)

    1993-01-01

    A new set of benchmarks was developed for the performance evaluation of highly parallel supercomputers. These benchmarks consist of a set of kernels, the 'Parallel Kernels,' and a simulated application benchmark. Together they mimic the computation and data movement characteristics of large scale computational fluid dynamics (CFD) applications. The principal distinguishing feature of these benchmarks is their 'pencil and paper' specification - all details of these benchmarks are specified only algorithmically. In this way many of the difficulties associated with conventional benchmarking approaches on highly parallel systems are avoided.

  1. Parallels with nature

    NASA Astrophysics Data System (ADS)

    2014-10-01

    Adam Nelson and Stuart Warriner, from the University of Leeds, talk with Nature Chemistry about their work to develop viable synthetic strategies for preparing new chemical structures in parallel with the identification of desirable biological activity.

  2. The Parallel Axiom

    ERIC Educational Resources Information Center

    Rogers, Pat

    1972-01-01

    Criteria for a reasonable axiomatic system are discussed. A discussion of the historical attempts to prove the independence of Euclid's parallel postulate introduces non-Euclidean geometries. Poincare's model for a non-Euclidean geometry is defined and analyzed. (LS)

  3. Parallel programming with PCN

    SciTech Connect

    Foster, I.; Tuecke, S.

    1991-12-01

    PCN is a system for developing and executing parallel programs. It comprises a high-level programming language, tools for developing and debugging programs in this language, and interfaces to Fortran and C that allow the reuse of existing code in multilingual parallel programs. Programs developed using PCN are portable across many different workstations, networks, and parallel computers. This document provides all the information required to develop parallel programs with the PCN programming system. It includes both tutorial and reference material. It also presents the basic concepts that underlie PCN, particularly where these are likely to be unfamiliar to the reader, and provides pointers to other documentation on the PCN language, programming techniques, and tools. PCN is in the public domain. The latest version of both the software and this manual can be obtained by anonymous FTP from Argonne National Laboratory in the directory pub/pcn at info.mcs.anl.gov (c.f. Appendix A).

  4. Partitioning and parallel radiosity

    NASA Astrophysics Data System (ADS)

    Merzouk, S.; Winkler, C.; Paul, J. C.

    1996-03-01

    This paper proposes a theoretical framework, based on domain subdivision, for parallel radiosity. Moreover, three implementation approaches, taking advantage of partitioning algorithms and a global shared memory architecture, are presented.

  5. Simplified Parallel Domain Traversal

    SciTech Connect

    Erickson III, David J

    2011-01-01

    Many data-intensive scientific analysis techniques require global domain traversal, which over the years has been a bottleneck for efficient parallelization across distributed-memory architectures. Inspired by MapReduce and other simplified parallel programming approaches, we have designed DStep, a flexible system that greatly simplifies efficient parallelization of domain traversal techniques at scale. In order to deliver both simplicity to users as well as scalability on HPC platforms, we introduce a novel two-tiered communication architecture for managing and exploiting asynchronous communication loads. We also integrate our design with advanced parallel I/O techniques that operate directly on native simulation output. We demonstrate DStep by performing teleconnection analysis across ensemble runs of terascale atmospheric CO₂ and climate data, and we show scalability results on up to 65,536 IBM BlueGene/P cores.
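    The MapReduce-style pattern that such systems simplify can be sketched generically. The function below is our own illustration, not DStep's API (all names are hypothetical): each worker traverses one slab of the domain, and the partial results are combined in a final reduction.

```python
from concurrent.futures import ThreadPoolExecutor

def mapreduce_traversal(domain, n_workers, map_fn, reduce_fn, init):
    """Split the domain into slabs, let each worker traverse its own
    slab (map), then combine the partial results (reduce). Threads
    stand in here for distributed-memory workers."""
    step = max(1, len(domain) // n_workers)
    slabs = [domain[i:i + step] for i in range(0, len(domain), step)]
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        partials = list(pool.map(map_fn, slabs))   # parallel map phase
    result = init
    for part in partials:                          # serial reduce phase
        result = reduce_fn(result, part)
    return result

# Example: global maximum over a 1-D "field" split across 4 workers.
field = [3.1, 9.7, 0.4, 8.8, 5.5, 9.9, 1.2, 7.0]
print(mapreduce_traversal(field, 4, max, max, float("-inf")))
# → 9.9
```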

  6. 48 CFR 1852.228-81 - Insurance-Partial Immunity From Tort Liability.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 48 Federal Acquisition Regulations System 6 2012-10-01 2012-10-01 false Insurance-Partial Immunity... Provisions and Clauses 1852.228-81 Insurance—Partial Immunity From Tort Liability. As prescribed in 1828.311-270(c), insert the following clause: Insurance—Partial Immunity From Tort Liability (SEP 2000)...

  7. 48 CFR 1852.228-81 - Insurance-Partial Immunity From Tort Liability.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 48 Federal Acquisition Regulations System 6 2011-10-01 2011-10-01 false Insurance-Partial Immunity... Provisions and Clauses 1852.228-81 Insurance—Partial Immunity From Tort Liability. As prescribed in 1828.311-270(c), insert the following clause: Insurance—Partial Immunity From Tort Liability (SEP 2000)...

  8. 48 CFR 1852.228-81 - Insurance-Partial Immunity From Tort Liability.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 48 Federal Acquisition Regulations System 6 2010-10-01 2010-10-01 true Insurance-Partial Immunity... Provisions and Clauses 1852.228-81 Insurance—Partial Immunity From Tort Liability. As prescribed in 1828.311-270(c), insert the following clause: Insurance—Partial Immunity From Tort Liability (SEP 2000)...

  9. 48 CFR 1852.228-81 - Insurance-Partial Immunity From Tort Liability.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 48 Federal Acquisition Regulations System 6 2013-10-01 2013-10-01 false Insurance-Partial Immunity... Provisions and Clauses 1852.228-81 Insurance—Partial Immunity From Tort Liability. As prescribed in 1828.311-270(c), insert the following clause: Insurance—Partial Immunity From Tort Liability (SEP 2000)...

  10. 48 CFR 1852.228-81 - Insurance-Partial Immunity From Tort Liability.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 48 Federal Acquisition Regulations System 6 2014-10-01 2014-10-01 false Insurance-Partial Immunity... Provisions and Clauses 1852.228-81 Insurance—Partial Immunity From Tort Liability. As prescribed in 1828.311-270(c), insert the following clause: Insurance—Partial Immunity From Tort Liability (SEP 2000)...

  11. Scalable parallel communications

    NASA Technical Reports Server (NTRS)

    Maly, K.; Khanna, S.; Overstreet, C. M.; Mukkamala, R.; Zubair, M.; Sekhar, Y. S.; Foudriat, E. C.

    1992-01-01

    Coarse-grain parallelism in networking (that is, the use of multiple protocol processors running replicated software sending over several physical channels) can be used to provide gigabit communications for a single application. Since parallel network performance is highly dependent on real issues such as hardware properties (e.g., memory speeds and cache hit rates), operating system overhead (e.g., interrupt handling), and protocol performance (e.g., effect of timeouts), we have performed detailed simulation studies of both a bus-based multiprocessor workstation node (based on the Sun Galaxy MP multiprocessor) and a distributed-memory parallel computer node (based on the Touchstone DELTA) to evaluate the behavior of coarse-grain parallelism. Our results indicate: (1) coarse-grain parallelism can deliver multiple 100 Mbps with currently available hardware platforms and existing networking protocols (such as Transmission Control Protocol/Internet Protocol (TCP/IP) and parallel Fiber Distributed Data Interface (FDDI) rings); (2) scale-up is near linear in n, the number of protocol processors, and channels (for small n and up to a few hundred Mbps); and (3) since these results are based on existing hardware without specialized devices (except perhaps for some simple modifications of the FDDI boards), this is a low-cost solution to providing multiple 100 Mbps on current machines. In addition, from both the performance analysis and the properties of these architectures, we conclude: (1) multiple processors providing identical services and the use of space division multiplexing for the physical channels can provide better reliability than monolithic approaches (it also provides graceful degradation and low-cost load balancing); (2) coarse-grain parallelism supports running several transport protocols in parallel to provide different types of service (for example, one TCP handles small messages for many users, other TCP's running in parallel provide high bandwidth

  12. Multiple channel data acquisition system

    DOEpatents

    Crawley, H. Bert; Rosenberg, Eli I.; Meyer, W. Thomas; Gorbics, Mark S.; Thomas, William D.; McKay, Roy L.; Homer, Jr., John F.

    1990-05-22

    A multiple channel data acquisition system for the transfer of large amounts of data from a multiplicity of data channels has a plurality of modules which operate in parallel to convert analog signals to digital data and transfer that data to a communications host via a FASTBUS. Each module has a plurality of submodules which include a front end buffer (FEB) connected to input circuitry having an analog to digital converter with cache memory for each of a plurality of channels. The submodules are interfaced with the FASTBUS via a FASTBUS coupler which controls a module bus and a module memory. The system is triggered to effect rapid parallel data samplings which are stored to the cache memories. The cache memories are uploaded to the FEBs during which zero suppression occurs. The data in the FEBs is reformatted and compressed by a local processor during transfer to the module memory. The FASTBUS coupler is used by the communications host to upload the compressed and formatted data from the module memory. The local processor executes programs which are downloaded to the module memory through the FASTBUS coupler.
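    The zero-suppression step described above (dropping channels with no signal as cache memories are uploaded to the FEBs) can be illustrated with a minimal sketch. This is a hypothetical illustration, not code from the patent; the function name and data layout are assumptions.

    ```python
    def zero_suppress(samples, threshold):
        """Keep only (channel_index, value) pairs at or above threshold,
        discarding empty channels -- the kind of zero suppression applied
        when cache memories are uploaded to the front-end buffers.
        Illustrative sketch; names and layout are not from the patent."""
        return [(i, v) for i, v in enumerate(samples) if abs(v) >= threshold]

    # Six channels, two with real hits: only those survive, tagged by channel.
    print(zero_suppress([0, 0, 7, 0, 3, 0], 2))  # -> [(2, 7), (4, 3)]
    ```

    Keeping the channel index alongside the value is what lets the local processor later reformat and compress the sparse data without losing channel identity.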

  13. Multiple channel data acquisition system

    DOEpatents

    Crawley, H.B.; Rosenberg, E.I.; Meyer, W.T.; Gorbics, M.S.; Thomas, W.D.; McKay, R.L.; Homer, J.F. Jr.

    1990-05-22

    A multiple channel data acquisition system for the transfer of large amounts of data from a multiplicity of data channels has a plurality of modules which operate in parallel to convert analog signals to digital data and transfer that data to a communications host via a FASTBUS. Each module has a plurality of submodules which include a front end buffer (FEB) connected to input circuitry having an analog to digital converter with cache memory for each of a plurality of channels. The submodules are interfaced with the FASTBUS via a FASTBUS coupler which controls a module bus and a module memory. The system is triggered to effect rapid parallel data samplings which are stored to the cache memories. The cache memories are uploaded to the FEBs during which zero suppression occurs. The data in the FEBs is reformatted and compressed by a local processor during transfer to the module memory. The FASTBUS coupler is used by the communications host to upload the compressed and formatted data from the module memory. The local processor executes programs which are downloaded to the module memory through the FASTBUS coupler. 25 figs.

  14. Parallel image compression

    NASA Technical Reports Server (NTRS)

    Reif, John H.

    1987-01-01

    A parallel compression algorithm for the 16,384 processor MPP machine was developed. The serial version of the algorithm can be viewed as a combination of on-line dynamic lossless text compression techniques (which employ simple learning strategies) and vector quantization. These concepts are described. How these concepts are combined to form a new strategy for performing dynamic on-line lossy compression is discussed. Finally, the implementation of this algorithm in a massively parallel fashion on the MPP is discussed.

  15. Continuous parallel coordinates.

    PubMed

    Heinrich, Julian; Weiskopf, Daniel

    2009-01-01

    Typical scientific data is represented on a grid with appropriate interpolation or approximation schemes, defined on a continuous domain. The visualization of such data in parallel coordinates may reveal patterns latently contained in the data and thus can improve the understanding of multidimensional relations. In this paper, we adopt the concept of continuous scatterplots for the visualization of spatially continuous input data to derive a density model for parallel coordinates. Based on the point-line duality between scatterplots and parallel coordinates, we propose a mathematical model that maps density from a continuous scatterplot to parallel coordinates and present different algorithms for both numerical and analytical computation of the resulting density field. In addition, we show how the 2-D model can be used to successively construct continuous parallel coordinates with an arbitrary number of dimensions. Since continuous parallel coordinates interpolate data values within grid cells, a scalable and dense visualization is achieved, which will be demonstrated for typical multi-variate scientific data. PMID:19834230
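    The point-line duality the abstract builds on can be made concrete with a small sketch. A data point becomes a polyline across the axes, and (for two axes at u = 0 and u = 1) every point on a 2-D line y = m*x + b maps to a segment passing through a single dual point. The function names below are assumptions for illustration, not the paper's API.

    ```python
    def to_polyline(sample):
        """Map an n-dimensional data point to its parallel-coordinates
        polyline: axis i is the vertical line u = i, and the vertex height
        on that axis is the point's i-th value."""
        return [(float(i), float(v)) for i, v in enumerate(sample)]

    def dual_point(m, b):
        """Point-line duality for axes at u = 0 and u = 1: all segments
        induced by points on the 2-D line y = m*x + b pass through the
        point (1/(1-m), b/(1-m)); undefined for m = 1."""
        return (1.0 / (1.0 - m), b / (1.0 - m))

    # The line y = -x + 2: its dual point sits midway between the axes.
    print(dual_point(-1.0, 2.0))  # -> (0.5, 1.0)
    ```

    For example, the points (0, 2) and (2, 0) on that line yield segments (0,0)-(1,2) and (0,2)-(1,0), which indeed cross at (0.5, 1.0); negatively correlated data characteristically crosses between the axes in this way.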

  16. Acquisition of teleological descriptions

    NASA Astrophysics Data System (ADS)

    Franke, David W.

    1992-03-01

    Teleological descriptions capture the purpose of an entity, mechanism, or activity with which they are associated. These descriptions can be used in explanation, diagnosis, and design reuse. We describe a technique for acquiring teleological descriptions expressed in the teleology language TeD. Acquisition occurs during design by observing design modifications and design verification. We demonstrate the acquisition technique in an electronic circuit design.

  17. Coring Sample Acquisition Tool

    NASA Technical Reports Server (NTRS)

    Haddad, Nicolas E.; Murray, Saben D.; Walkemeyer, Phillip E.; Badescu, Mircea; Sherrit, Stewart; Bao, Xiaoqi; Kriechbaum, Kristopher L.; Richardson, Megan; Klein, Kerry J.

    2012-01-01

    A sample acquisition tool (SAT) has been developed that can be used autonomously to drill and capture rock core samples. The tool is designed to accommodate core transfer using a sample tube to the IMSAH (integrated Mars sample acquisition and handling) SHEC (sample handling, encapsulation, and containerization) without ever touching the pristine core sample in the transfer process.

  18. Comparison of Parallel MRI Reconstruction Methods for Accelerated 3D Fast Spin-Echo Imaging

    PubMed Central

    Xiao, Zhikui; Hoge, W. Scott; Mulkern, R.V.; Zhao, Lei; Hu, Guangshu; Kyriakos, Walid E.

    2014-01-01

    Parallel MRI (pMRI) achieves imaging acceleration by partially substituting gradient-encoding steps with spatial information contained in the component coils of the acquisition array. Variable-density subsampling in pMRI was previously shown to yield improved two-dimensional (2D) imaging in comparison to uniform subsampling, but has yet to be used routinely in clinical practice. In an effort to reduce acquisition time for 3D fast spin-echo (3D-FSE) sequences, this work explores a specific nonuniform sampling scheme for 3D imaging, subsampling along two phase-encoding (PE) directions on a rectilinear grid. We use two reconstruction methods—2D-GRAPPA-Operator and 2D-SPACE RIP—and present a comparison between them. We show that high-quality images can be reconstructed using both techniques. To evaluate the proposed sampling method and reconstruction schemes, results via simulation, phantom study, and in vivo 3D human data are shown. We find that fewer artifacts can be seen in the 2D-SPACE RIP reconstructions than in 2D-GRAPPA-Operator reconstructions, with comparable reconstruction times. PMID:18727083

  19. Detecting opportunities for parallel observations on the Hubble Space Telescope

    NASA Technical Reports Server (NTRS)

    Lucks, Michael

    1992-01-01

    The presence of multiple scientific instruments aboard the Hubble Space Telescope provides opportunities for parallel science, i.e., the simultaneous use of different instruments for different observations. Determining whether candidate observations are suitable for parallel execution depends on numerous criteria (some involving quantitative tradeoffs) that may change frequently. A knowledge based approach is presented for constructing a scoring function to rank candidate pairs of observations for parallel science. In the Parallel Observation Matching System (POMS), spacecraft knowledge and schedulers' preferences are represented using a uniform set of mappings, or knowledge functions. Assessment of parallel science opportunities is achieved via composition of the knowledge functions in a prescribed manner. The knowledge acquisition and explanation facilities of the system are presented. The methodology is applicable to many other multiple criteria assessment problems.

  20. Virtual environment application with partial gravity simulation

    NASA Technical Reports Server (NTRS)

    Ray, David M.; Vanchau, Michael N.

    1994-01-01

    To support manned missions to the surface of Mars and missions requiring manipulation of payloads and locomotion in space, a training facility is required to simulate the conditions of both partial and microgravity. A partial gravity simulator (Pogo) which uses pneumatic suspension is being studied for use in virtual reality training. Pogo maintains a constant partial gravity simulation with a variation of simulated body force between 2.2 and 10 percent, depending on the type of locomotion inputs. This paper is based on the concept and application of a virtual environment system with Pogo including a head-mounted display and glove. The reality engine consists of a high-end SGI workstation and PCs which drive Pogo's sensors and data acquisition hardware used for tracking and control. The tracking system is a hybrid of magnetic and optical trackers integrated for this application.

  1. Parallel algorithms for the spectral transform method

    SciTech Connect

    Foster, I.T.; Worley, P.H.

    1994-04-01

    The spectral transform method is a standard numerical technique for solving partial differential equations on a sphere and is widely used in atmospheric circulation models. Recent research has identified several promising algorithms for implementing this method on massively parallel computers; however, no detailed comparison of the different algorithms has previously been attempted. In this paper, we describe these different parallel algorithms and report on computational experiments that we have conducted to evaluate their efficiency on parallel computers. The experiments used a testbed code that solves the nonlinear shallow water equations on a sphere; considerable care was taken to ensure that the experiments provide a fair comparison of the different algorithms and that the results are relevant to global models. We focus on hypercube- and mesh-connected multicomputers with cut-through routing, such as the Intel iPSC/860, DELTA, and Paragon, and the nCUBE/2, but also indicate how the results extend to other parallel computer architectures. The results of this study are relevant not only to the spectral transform method but also to multidimensional FFTs and other parallel transforms.

  2. Pattern recognition with parallel associative memory

    NASA Technical Reports Server (NTRS)

    Toth, Charles K.; Schenk, Toni

    1990-01-01

    An examination is conducted of the feasibility of searching targets in aerial photographs by means of a parallel associative memory (PAM) that is based on the nearest-neighbor algorithm; the Hamming distance is used as a measure of closeness, in order to discriminate patterns. Attention has been given to targets typically used for ground-control points. The method developed sorts out approximate target positions where precise localizations are needed, in the course of the data-acquisition process. The majority of control points in different images were correctly identified.

  3. Parallel State Estimation Assessment with Practical Data

    SciTech Connect

    Chen, Yousu; Jin, Shuangshuang; Rice, Mark J.; Huang, Zhenyu

    2014-10-31

    This paper presents a full-cycle parallel state estimation (PSE) implementation using a preconditioned conjugate gradient algorithm. The developed code is able to solve large-size power system state estimation within 5 seconds using real-world data, comparable to the Supervisory Control And Data Acquisition (SCADA) rate. This achievement allows the operators to know the system status much faster to help improve grid reliability. Case study results of the Bonneville Power Administration (BPA) system with real measurements are presented. The benefits of fast state estimation are also discussed.
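    The core numerical kernel named above, a preconditioned conjugate gradient solve, can be sketched in a few lines. This is a generic Jacobi-preconditioned CG for a symmetric positive-definite system; the paper's actual preconditioner and the weighted-least-squares formulation of power system state estimation are not reproduced here.

    ```python
    import numpy as np

    def pcg(A, b, tol=1e-10, maxiter=500):
        """Jacobi-preconditioned conjugate gradient for SPD systems A x = b.
        Illustrative sketch only; not the paper's implementation."""
        M_inv = 1.0 / np.diag(A)      # Jacobi (diagonal) preconditioner
        x = np.zeros_like(b)
        r = b - A @ x                 # initial residual
        z = M_inv * r                 # preconditioned residual
        p = z.copy()
        rz = r @ z
        for _ in range(maxiter):
            Ap = A @ p
            alpha = rz / (p @ Ap)
            x += alpha * p
            r -= alpha * Ap
            if np.linalg.norm(r) < tol:
                break
            z = M_inv * r
            rz_new = r @ z
            p = z + (rz_new / rz) * p
            rz = rz_new
        return x

    A = np.array([[4.0, 1.0], [1.0, 3.0]])
    b = np.array([1.0, 2.0])
    print(np.allclose(pcg(A, b), np.linalg.solve(A, b)))  # -> True
    ```

    In the parallel setting, the matrix-vector product `A @ p` and the dot products are what get distributed across processors; the preconditioner keeps the iteration count low enough to meet the SCADA-rate deadline.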

  4. Parallel architectures for iterative methods on adaptive, block structured grids

    NASA Technical Reports Server (NTRS)

    Gannon, D.; Vanrosendale, J.

    1983-01-01

    A parallel computer architecture well suited to the solution of partial differential equations in complicated geometries is proposed. Algorithms for partial differential equations contain a great deal of parallelism. But this parallelism can be difficult to exploit, particularly on complex problems. One approach to extraction of this parallelism is the use of special purpose architectures tuned to a given problem class. The architecture proposed here is tuned to boundary value problems on complex domains. An adaptive elliptic algorithm which maps effectively onto the proposed architecture is considered in detail. Two levels of parallelism are exploited by the proposed architecture. First, by making use of the freedom one has in grid generation, one can construct grids which are locally regular, permitting a one to one mapping of grids to systolic style processor arrays, at least over small regions. All local parallelism can be extracted by this approach. Second, though there may be no regular global structure to the grids constructed, there will still be parallelism at this level. One approach to finding and exploiting this parallelism is to use an architecture having a number of processor clusters connected by a switching network. The use of such a network creates a highly flexible architecture which automatically configures to the problem being solved.

  5. Parallel time integration software

    SciTech Connect

    2014-07-01

    This package implements an optimal-scaling multigrid solver for the (non)linear systems that arise from the discretization of problems with evolutionary behavior. Typically, solution algorithms for evolution equations are based on a time-marching approach, solving sequentially for one time step after the other. Parallelism in these traditional time-integration techniques is limited to spatial parallelism. However, current trends in computer architectures are leading toward systems with more, but not faster, processors. Therefore, faster compute speeds must come from greater parallelism. One approach to achieve parallelism in time is with multigrid, but extending classical multigrid methods for elliptic operators to this setting is a significant achievement. In this software, we implement a non-intrusive, optimal-scaling time-parallel method based on multigrid reduction techniques. The examples in the package demonstrate optimality of our multigrid-reduction-in-time algorithm (MGRIT) for solving a variety of parabolic equations in two and three spatial dimensions. These examples can also be used to show that MGRIT can achieve significant speedup in comparison to sequential time marching on modern architectures.

  6. Parallel time integration software

    Energy Science and Technology Software Center (ESTSC)

    2014-07-01

    This package implements an optimal-scaling multigrid solver for the (non)linear systems that arise from the discretization of problems with evolutionary behavior. Typically, solution algorithms for evolution equations are based on a time-marching approach, solving sequentially for one time step after the other. Parallelism in these traditional time-integration techniques is limited to spatial parallelism. However, current trends in computer architectures are leading toward systems with more, but not faster, processors. Therefore, faster compute speeds must come from greater parallelism. One approach to achieve parallelism in time is with multigrid, but extending classical multigrid methods for elliptic operators to this setting is a significant achievement. In this software, we implement a non-intrusive, optimal-scaling time-parallel method based on multigrid reduction techniques. The examples in the package demonstrate optimality of our multigrid-reduction-in-time algorithm (MGRIT) for solving a variety of parabolic equations in two and three spatial dimensions. These examples can also be used to show that MGRIT can achieve significant speedup in comparison to sequential time marching on modern architectures.

  7. Research in Parallel Algorithms and Software for Computational Aerosciences

    NASA Technical Reports Server (NTRS)

    Domel, Neal D.

    1996-01-01

    Phase 1 is complete for the development of a computational fluid dynamics (CFD) parallel code with automatic grid generation and adaptation for the Euler analysis of flow over complex geometries. SPLITFLOW, an unstructured Cartesian grid code developed at Lockheed Martin Tactical Aircraft Systems, has been modified for a distributed memory/massively parallel computing environment. The parallel code is operational on an SGI network, Cray J90 and C90 vector machines, SGI Power Challenge, and Cray T3D and IBM SP2 massively parallel machines. Parallel Virtual Machine (PVM) is the message passing protocol for portability to various architectures. A domain decomposition technique was developed which enforces dynamic load balancing to improve solution speed and memory requirements. A host/node algorithm distributes the tasks. The solver parallelizes very well, and scales with the number of processors. Partially parallelized and non-parallelized tasks consume most of the wall clock time in a very fine grain environment. Timing comparisons on a Cray C90 demonstrate that Parallel SPLITFLOW runs 2.4 times faster on 8 processors than its non-parallel counterpart autotasked over 8 processors.

  8. Research in Parallel Algorithms and Software for Computational Aerosciences

    NASA Technical Reports Server (NTRS)

    Domel, Neal D.

    1996-01-01

    Phase I is complete for the development of a Computational Fluid Dynamics parallel code with automatic grid generation and adaptation for the Euler analysis of flow over complex geometries. SPLITFLOW, an unstructured Cartesian grid code developed at Lockheed Martin Tactical Aircraft Systems, has been modified for a distributed memory/massively parallel computing environment. The parallel code is operational on an SGI network, Cray J90 and C90 vector machines, SGI Power Challenge, and Cray T3D and IBM SP2 massively parallel machines. Parallel Virtual Machine (PVM) is the message passing protocol for portability to various architectures. A domain decomposition technique was developed which enforces dynamic load balancing to improve solution speed and memory requirements. A host/node algorithm distributes the tasks. The solver parallelizes very well, and scales with the number of processors. Partially parallelized and non-parallelized tasks consume most of the wall clock time in a very fine grain environment. Timing comparisons on a Cray C90 demonstrate that Parallel SPLITFLOW runs 2.4 times faster on 8 processors than its non-parallel counterpart autotasked over 8 processors.

  9. Interactive knowledge acquisition tools

    NASA Technical Reports Server (NTRS)

    Dudziak, Martin J.; Feinstein, Jerald L.

    1987-01-01

    The problems of designing practical tools to aid the knowledge engineer and general applications used in performing knowledge acquisition tasks are discussed. A particular approach was developed for the class of knowledge acquisition problem characterized by situations where acquisition and transformation of domain expertise are often bottlenecks in systems development. An explanation is given on how the tool and underlying software engineering principles can be extended to provide a flexible set of tools that allow the application specialist to build highly customized knowledge-based applications.

  10. Human target acquisition performance

    NASA Astrophysics Data System (ADS)

    Teaney, Brian P.; Du Bosq, Todd W.; Reynolds, Joseph P.; Thompson, Roger; Aghera, Sameer; Moyer, Steven K.; Flug, Eric; Espinola, Richard; Hixson, Jonathan

    2012-06-01

    The battlefield has shifted from armored vehicles to armed insurgents. Target acquisition (identification, recognition, and detection) range performance involving humans as targets is vital for modern warfare. The acquisition and neutralization of armed insurgents while at the same time minimizing fratricide and civilian casualties is a mounting concern. U.S. Army RDECOM CERDEC NVESD has conducted many experiments involving human targets for infrared and reflective band sensors. The target sets include human activities, hand-held objects, uniforms & armament, and other tactically relevant targets. This paper will define a set of standard task difficulty values for identification and recognition associated with human target acquisition performance.

  11. Parallel optical sampler

    SciTech Connect

    Tauke-Pedretti, Anna; Skogen, Erik J; Vawter, Gregory A

    2014-05-20

    An optical sampler includes first and second 1×n optical beam splitters splitting an input optical sampling signal and an optical analog input signal into n parallel channels, respectively, a plurality of optical delay elements providing n parallel delayed input optical sampling signals, n photodiodes converting the n parallel optical analog input signals into n respective electrical output signals, and n optical modulators modulating the input optical sampling signal or the optical analog input signal by the respective electrical output signals, and providing n successive optical samples of the optical analog input signal. A plurality of output photodiodes and eADCs convert the n successive optical samples to n successive digital samples. The optical modulator may be a photodiode interconnected Mach-Zehnder Modulator. A method of sampling the optical analog input signal is disclosed.

  12. Coarrays for Parallel Processing

    NASA Technical Reports Server (NTRS)

    Snyder, W. Van

    2011-01-01

    The design of the Coarray feature of Fortran 2008 was guided by answering the question "What is the smallest change required to convert Fortran to a robust and efficient parallel language." Two fundamental issues that any parallel programming model must address are work distribution and data distribution. In order to coordinate work distribution and data distribution, methods for communication and synchronization must be provided. Although originally designed for Fortran, the Coarray paradigm has stimulated development in other languages. X10, Chapel, UPC, Titanium, and class libraries being developed for C++ have the same conceptual framework.

  13. Speeding up parallel processing

    NASA Technical Reports Server (NTRS)

    Denning, Peter J.

    1988-01-01

    In 1967 Amdahl expressed doubts about the ultimate utility of multiprocessors. The formulation, now called Amdahl's law, became part of the computing folklore and has inspired much skepticism about the ability of the current generation of massively parallel processors to efficiently deliver all their computing power to programs. The widely publicized recent results of a group at Sandia National Laboratory, which showed speedup on a 1024 node hypercube of over 500 for three fixed size problems and over 1000 for three scalable problems, have convincingly challenged this bit of folklore and have given new impetus to parallel scientific computing.
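    The tension between Amdahl's fixed-size bound and the Sandia scaled-size results can be made concrete with the two standard speedup formulas; the scaled-speedup formula below is Gustafson's law, the analysis the Sandia group used, stated here as a sketch rather than a quote from either source.

    ```python
    def amdahl_speedup(p, n):
        """Fixed-size speedup on n processors for parallel fraction p:
        the serial fraction (1 - p) bounds the achievable gain."""
        return 1.0 / ((1.0 - p) + p / n)

    def gustafson_speedup(p, n):
        """Scaled speedup: grow the problem with n and the bound relaxes."""
        return (1.0 - p) + p * n

    # A 99%-parallel workload on a 1024-node machine:
    n = 1024
    print(round(amdahl_speedup(0.99, n), 1))     # -> 91.2
    print(round(gustafson_speedup(0.99, n), 1))  # -> 1013.8
    ```

    The fixed-size formula caps speedup near 1/(1-p) no matter how many processors are added, while the scaled formula grows almost linearly in n, consistent with the hypercube results of over 1000 on scalable problems.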

  14. Programming parallel vision algorithms

    SciTech Connect

    Shapiro, L.G.

    1988-01-01

    Computer vision requires the processing of large volumes of data and requires parallel architectures and algorithms to be useful in real-time, industrial applications. The INSIGHT dataflow language was designed to allow encoding of vision algorithms at all levels of the computer vision paradigm. INSIGHT programs, which are relational in nature, can be translated into a graph structure that represents an architecture for solving a particular vision problem or a configuration of a reconfigurable computational network. The authors consider here INSIGHT programs that produce a parallel net architecture for solving low-, mid-, and high-level vision tasks.

  15. Adaptive parallel logic networks

    NASA Technical Reports Server (NTRS)

    Martinez, Tony R.; Vidal, Jacques J.

    1988-01-01

    Adaptive, self-organizing concurrent systems (ASOCS) that combine self-organization with massive parallelism for such applications as adaptive logic devices, robotics, process control, and system malfunction management, are presently discussed. In ASOCS, an adaptive network composed of many simple computing elements operating in combinational and asynchronous fashion is used and problems are specified by presenting if-then rules to the system in the form of Boolean conjunctions. During data processing, which is a different operational phase from adaptation, the network acts as a parallel hardware circuit.

  16. Highly parallel computation

    NASA Technical Reports Server (NTRS)

    Denning, Peter J.; Tichy, Walter F.

    1990-01-01

    Among the highly parallel computing architectures required for advanced scientific computation, those designated 'MIMD' and 'SIMD' have yielded the best results to date. A current evaluation of their development status shows that neither has attained a decisive advantage in the treatment of most near-homogeneous problems; for problems involving numerous dissimilar parts, however, such currently speculative architectures as 'neural networks' or 'data flow' machines may be required. Data flow computers are the most practical form of MIMD fine-grained parallel computers yet conceived; they automatically solve the problem of assigning virtual processors to the real processors in the machine.

  17. Physics of Partially Ionized Plasmas

    NASA Astrophysics Data System (ADS)

    Krishan, Vinod

    2016-05-01

    Figures; Preface; 1. Partially ionized plasmas here and everywhere; 2. Multifluid description of partially ionized plasmas; 3. Equilibrium of partially ionized plasmas; 4. Waves in partially ionized plasmas; 5. Advanced topics in partially ionized plasmas; 6. Research problems in partially ionized plasmas; Supplementary matter; Index.

  18. Rx for Acquisitions Hangups

    ERIC Educational Resources Information Center

    Huleatt, Richard S.

    1973-01-01

    A system of ordering library materials efficiently, quickly and at low cost is presented. The procedure bypasses purchasing departments and helps reduce acquisitions time by authorizing direct ordering by the library. Forms and procedures used are discussed. (1 reference) (DH)

  19. Acquisition signal transmitter

    NASA Technical Reports Server (NTRS)

    Friedman, Morton L. (Inventor)

    1989-01-01

    An encoded information transmitter which transmits a radio frequency carrier that is amplitude modulated by a constant frequency waveform and thereafter amplitude modulated by a predetermined encoded waveform, the constant frequency waveform modulated carrier constituting an acquisition signal and the encoded waveform modulated carrier constituting an information bearing signal, the acquisition signal providing enhanced signal acquisition and interference rejection favoring the information bearing signal. One specific application for this transmitter is as a distress transmitter where a conventional, legislated audio tone modulated signal is transmitted followed first by the acquisition signal and then the information bearing signal, the information bearing signal being encoded with, among other things, vehicle identification data. The acquisition signal enables a receiver to acquire the information bearing signal where the received signal is low and/or where the received signal has a low signal-to-noise ratio in an environment where there are multiple signals in the same frequency band as the information bearing signal.

  20. High Speed data acquisition

    SciTech Connect

    Cooper, Peter S.

    1998-02-01

    A general introduction to high-speed data acquisition system techniques in modern particle physics experiments is given. Examples are drawn from the SELEX (E781) high statistics charmed baryon production and decay experiment now taking data at Fermilab.

  1. Documentation and knowledge acquisition

    NASA Technical Reports Server (NTRS)

    Rochowiak, Daniel; Moseley, Warren

    1990-01-01

    Traditional approaches to knowledge acquisition have focused on interviews. An alternative focuses on the documentation associated with a domain. Adopting a documentation approach provides some advantages during familiarization. A knowledge management tool was constructed to gain these advantages.

  2. Data acquisition system

    DOEpatents

    Shapiro, Stephen L.; Mani, Sudhindra; Atlas, Eugene L.; Cords, Dieter H. W.; Holbrook, Britt

    1997-01-01

    A data acquisition circuit for a particle detection system that allows for time tagging of particles detected by the system. The particle detection system screens out background noise and discriminates between hits from scattered and unscattered particles. The detection system can also be adapted to detect a wide variety of particle types. The detection system utilizes a particle detection pixel array, each pixel containing a back-biased PIN diode, and a data acquisition pixel array. Each pixel in the particle detection pixel array is in electrical contact with a pixel in the data acquisition pixel array. In response to a particle hit, the affected PIN diodes generate a current, which is detected by the corresponding data acquisition pixels. This current is integrated to produce a voltage across a capacitor, the voltage being related to the amount of energy deposited in the pixel by the particle. The current is also used to trigger a read of the pixel hit by the particle.

  3. FOS Target Acquisition Test

    NASA Astrophysics Data System (ADS)

    Koratkar, Anuradha

    1994-01-01

    This test will verify the FOS onboard target acquisition software capabilities: point source binary, point source firmware, point source peak-up, WFPC2-assisted realtime, point source peak-down, taled assisted binary, taled assisted firmware, and nth star binary modes. The primary modes are tested 3 times to determine repeatability. This is the only test that will verify mode-to-mode acquisition offsets. It has to be conducted for both the RED and BLUE detectors.

  4. Parallel fast gauss transform

    SciTech Connect

    Sampath, Rahul S; Sundar, Hari; Veerapaneni, Shravan

    2010-01-01

    We present fast adaptive parallel algorithms to compute the sum of N Gaussians at N points. Direct sequential computation of this sum would take O(N^2) time. The parallel time complexity estimates for our algorithms are O(N/n_p) for uniform point distributions and O((N/n_p) log(N/n_p) + n_p log n_p) for non-uniform distributions using n_p CPUs. We incorporate a plane-wave representation of the Gaussian kernel which permits 'diagonal translation'. We use parallel octrees and a new scheme for translating the plane-waves to efficiently handle non-uniform distributions. Computing the transform to six-digit accuracy at 120 billion points took approximately 140 seconds using 4096 cores on the Jaguar supercomputer. Our implementation is 'kernel-independent' and can handle other 'Gaussian-type' kernels even when explicit analytic expression for the kernel is not known. These algorithms form a new class of core computational machinery for solving parabolic PDEs on massively parallel architectures.
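For reference, the direct O(N^2) sum that the fast transform accelerates fits in a few lines of NumPy. This is a naive sketch; the variable names and the Gaussian bandwidth `delta` are assumptions for illustration, not taken from the paper:

```python
# Direct O(N^2) discrete Gauss transform:
# f(x_j) = sum_k q_k * exp(-|x_j - y_k|^2 / delta)
import numpy as np

def direct_gauss_transform(targets, sources, weights, delta):
    # Pairwise squared distances, shape (n_targets, n_sources): O(N^2) work.
    d2 = np.sum((targets[:, None, :] - sources[None, :, :]) ** 2, axis=-1)
    return np.exp(-d2 / delta) @ weights

rng = np.random.default_rng(0)
y = rng.random((200, 3))          # source points
x = rng.random((150, 3))          # target points
q = rng.random(200)               # source weights
f = direct_gauss_transform(x, y, q, delta=0.5)
print(f.shape)  # → (150,)
```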

  5. Parallel programming with PCN

    SciTech Connect

    Foster, I.; Tuecke, S.

    1993-01-01

    PCN is a system for developing and executing parallel programs. It comprises a high-level programming language, tools for developing and debugging programs in this language, and interfaces to Fortran and C that allow the reuse of existing code in multilingual parallel programs. Programs developed using PCN are portable across many different workstations, networks, and parallel computers. This document provides all the information required to develop parallel programs with the PCN programming system. It includes both tutorial and reference material. It also presents the basic concepts that underlie PCN, particularly where these are likely to be unfamiliar to the reader, and provides pointers to other documentation on the PCN language, programming techniques, and tools. PCN is in the public domain. The latest version of both the software and this manual can be obtained by anonymous ftp from Argonne National Laboratory in the directory pub/pcn at info.mcs.anl.gov (cf. Appendix A). This version of this document describes PCN version 2.0, a major revision of the PCN programming system. It supersedes earlier versions of this report.

  6. Parallel Multigrid Equation Solver

    Energy Science and Technology Software Center (ESTSC)

    2001-09-07

    Prometheus is a fully parallel multigrid equation solver for matrices that arise in unstructured grid finite element applications. It includes a geometric and an algebraic multigrid method and has solved problems of up to 76 million degrees of freedom, problems in linear elasticity on the ASCI Blue Pacific and ASCI Red machines.

  7. Parallel Dislocation Simulator

    Energy Science and Technology Software Center (ESTSC)

    2006-10-30

    ParaDiS is software capable of simulating the motion, evolution, and interaction of dislocation networks in single crystals using massively parallel computer architectures. The software is capable of outputting the stress-strain response of a single crystal whose plastic deformation is controlled by the dislocation processes.

  8. Parallel Total Energy

    Energy Science and Technology Software Center (ESTSC)

    2004-10-21

    This is a total energy electronic structure code using the Local Density Approximation (LDA) of density functional theory. It uses plane waves as the wave function basis set. It can use both norm-conserving pseudopotentials and ultrasoft pseudopotentials. It can relax the atomic positions according to the total energy. It is a parallel code using MPI.

  9. NAS Parallel Benchmarks Results

    NASA Technical Reports Server (NTRS)

    Subhash, Saini; Bailey, David H.; Lasinski, T. A. (Technical Monitor)

    1995-01-01

    The NAS Parallel Benchmarks (NPB) were developed in 1991 at NASA Ames Research Center to study the performance of parallel supercomputers. The eight benchmark problems are specified in a pencil-and-paper fashion, i.e., the complete details of the problem to be solved are given in a technical document, and except for a few restrictions, benchmarkers are free to select the language constructs and implementation techniques best suited for a particular system. In this paper, we present new NPB performance results for the following systems: (a) Parallel-Vector Processors: Cray C90, Cray T90, and Fujitsu VPP500; (b) Highly Parallel Processors: Cray T3D, IBM SP2, and IBM SP-TN2 (Thin Nodes 2); (c) Symmetric Multiprocessing Processors: Convex Exemplar SPP1000, Cray J90, DEC Alpha Server 8400 5/300, and SGI Power Challenge XL. We also present sustained performance per dollar for the Class B LU, SP, and BT benchmarks, and outline NAS's future plans for the NPB.

  10. High performance parallel architectures

    SciTech Connect

    Anderson, R.E.

    1989-09-01

    In this paper the author describes current high performance parallel computer architectures. A taxonomy is presented to show computer architecture from the user programmer's point-of-view. The effects of the taxonomy upon the programming model are described. Some current architectures are described with respect to the taxonomy. Finally, some predictions about future systems are presented. 5 refs., 1 fig.

  11. Parallel hierarchical global illumination

    SciTech Connect

    Snell, Q.O.

    1997-10-08

    Solving the global illumination problem is equivalent to determining the intensity of every wavelength of light in all directions at every point in a given scene. The complexity of the problem has led researchers to use approximation methods for solving the problem on serial computers. Rather than using an approximation method, such as backward ray tracing or radiosity, the authors have chosen to solve the Rendering Equation by direct simulation of light transport from the light sources. This paper presents an algorithm that solves the Rendering Equation to any desired accuracy, and can be run in parallel on distributed memory or shared memory computer systems with excellent scaling properties. It appears superior in both speed and physical correctness to recent published methods involving bidirectional ray tracing or hybrid treatments of diffuse and specular surfaces. Like progressive radiosity methods, it dynamically refines the geometry decomposition where required, but does so without the excessive storage requirements for ray histories. The algorithm, called Photon, produces a scene which converges to the global illumination solution. This amounts to a huge task for a 1997-vintage serial computer, but using the power of a parallel supercomputer significantly reduces the time required to generate a solution. Currently, Photon can be run on most parallel environments from a shared memory multiprocessor to a parallel supercomputer, as well as on clusters of heterogeneous workstations.

  12. Optical parallel selectionist systems

    NASA Astrophysics Data System (ADS)

    Caulfield, H. John

    1993-01-01

    There are at least two major classes of computers in nature and technology: connectionist and selectionist. A subset of connectionist systems (Turing Machines) dominates modern computing, although another subset (Neural Networks) is growing rapidly. Selectionist machines have unique capabilities which should allow them to do truly creative operations. It is possible to make a parallel optical selectionist system using methods described in this paper.

  13. Parallel hierarchical radiosity rendering

    SciTech Connect

    Carter, M.

    1993-07-01

    In this dissertation, the step-by-step development of a scalable parallel hierarchical radiosity renderer is documented. First, a new look is taken at the traditional radiosity equation, and a new form is presented in which the matrix of linear system coefficients is transformed into a symmetric matrix, thereby simplifying the problem and enabling a new solution technique to be applied. Next, the state-of-the-art hierarchical radiosity methods are examined for their suitability to parallel implementation, and scalability. Significant enhancements are also discovered which both improve their theoretical foundations and improve the images they generate. The resultant hierarchical radiosity algorithm is then examined for sources of parallelism, and for an architectural mapping. Several architectural mappings are discussed. A few key algorithmic changes are suggested during the process of making the algorithm parallel. Next, the performance, efficiency, and scalability of the algorithm are analyzed. The dissertation closes with a discussion of several ideas which have the potential to further enhance the hierarchical radiosity method, or provide an entirely new forum for the application of hierarchical methods.

  14. Switch for serial or parallel communication networks

    DOEpatents

    Crosette, D.B.

    1994-07-19

    A communication switch apparatus and a method for use in a geographically extensive serial, parallel or hybrid communication network linking a multi-processor or parallel processing system has a very low software processing overhead in order to accommodate random bursts of high-density data. Associated with each processor is a communication switch. A data source and a data destination, a sensor suite or robot for example, may also be associated with a switch. The configuration of the switches in the network is coordinated through a master processor node and depends on the operational phase of the multi-processor network: data acquisition, data processing, and data exchange. The master processor node passes information on the state to be assumed by each switch to the processor node associated with the switch. The processor node then operates a series of multi-state switches internal to each communication switch. The communication switch does not parse and interpret communication protocol and message routing information. During a data acquisition phase, the communication switch couples sensors producing data to the processor node associated with the switch, to a downlink destination on the communications network, or to both. It also may couple an uplink data source to its processor node. During the data exchange phase, the switch couples its processor node or an uplink data source to a downlink destination (which may include a processor node or a robot), or couples an uplink source to its processor node and its processor node to a downlink destination. 9 figs.

  15. Switch for serial or parallel communication networks

    DOEpatents

    Crosette, Dario B.

    1994-01-01

    A communication switch apparatus and a method for use in a geographically extensive serial, parallel or hybrid communication network linking a multi-processor or parallel processing system has a very low software processing overhead in order to accommodate random bursts of high-density data. Associated with each processor is a communication switch. A data source and a data destination, a sensor suite or robot for example, may also be associated with a switch. The configuration of the switches in the network is coordinated through a master processor node and depends on the operational phase of the multi-processor network: data acquisition, data processing, and data exchange. The master processor node passes information on the state to be assumed by each switch to the processor node associated with the switch. The processor node then operates a series of multi-state switches internal to each communication switch. The communication switch does not parse and interpret communication protocol and message routing information. During a data acquisition phase, the communication switch couples sensors producing data to the processor node associated with the switch, to a downlink destination on the communications network, or to both. It also may couple an uplink data source to its processor node. During the data exchange phase, the switch couples its processor node or an uplink data source to a downlink destination (which may include a processor node or a robot), or couples an uplink source to its processor node and its processor node to a downlink destination.

  16. Why arthroscopic partial meniscectomy?

    PubMed

    Lyu, Shaw-Ruey

    2015-09-01

    "Arthroscopic Partial Meniscectomy versus Sham Surgery for a Degenerative Meniscal Tear" published in the New England Journal of Medicine on December 26, 2013 draws the conclusion that arthroscopic partial medial meniscectomy provides no significant benefit over sham surgery in patients with a degenerative meniscal tear and no knee osteoarthritis. This result argues against the current practice of performing arthroscopic partial meniscectomy (APM) in patients with a degenerative meniscal tear. Since the number of APM performed has been increasing, the information provided by this study should lead to a change in clinical care of patients with a degenerative meniscus tear. PMID:26488013

  17. Generation of and Retraction from Cross-Linguistically Motivated Structures in Bilingual First Language Acquisition.

    ERIC Educational Resources Information Center

    Dopke, Susanne

    2000-01-01

    Focuses on unusual developmental structures during the simultaneous acquisition of German and English in early childhood, which were evident parallel to a majority of target structures. Explains the cognitive motivation for unusual acquisition structures as well as the eventual retraction from them. (Author/VWL)

  18. Adapting implicit methods to parallel processors

    SciTech Connect

    Reeves, L.; McMillin, B.; Okunbor, D.; Riggins, D.

    1994-12-31

    When numerically solving many types of partial differential equations, it is advantageous to use implicit methods because of their better stability and more flexible parameter choice (e.g., larger time steps). However, since implicit methods usually require simultaneous knowledge of the entire computational domain, these methods are difficult to implement directly on distributed memory parallel processors. This leads to infrequent use of implicit methods on parallel/distributed systems. The usual implementation of implicit methods is inefficient due to the nature of parallel systems, where it is common to take the computational domain and distribute the grid points over the processors so as to maintain a relatively even workload per processor. This creates a problem at the locations in the domain where adjacent points are not on the same processor. In order for the values at these points to be calculated, messages have to be exchanged between the corresponding processors. Without special adaptation, this will result in idle processors during part of the computation, and as the number of idle processors increases, the effective speed improvement from using a parallel processor decreases.

  19. Twisted partially pure spinors

    NASA Astrophysics Data System (ADS)

    Herrera, Rafael; Tellez, Ivan

    2016-08-01

    Motivated by the relationship between orthogonal complex structures and pure spinors, we define twisted partially pure spinors in order to give a spinorial characterization of subspaces of Euclidean space endowed with a complex structure.

  20. Partial knee replacement - slideshow

    MedlinePlus

    Partial knee replacement - series: //medlineplus.gov/ency/presentations/100225.htm

  1. Partial knee replacement

    MedlinePlus

    Most people recover quickly and have much less pain than they did before surgery. People who have a partial knee replacement recover faster than those who have a total knee replacement. Many people are able to walk ...

  2. Extendability of parallel sections in vector bundles

    NASA Astrophysics Data System (ADS)

    Kirschner, Tim

    2016-01-01

    I address the following question: Given a differentiable manifold M, what are the open subsets U of M such that, for all vector bundles E over M and all linear connections ∇ on E, any ∇-parallel section in E defined on U extends to a ∇-parallel section in E defined on M? For simply connected manifolds M (among others) I describe the entirety of all such sets U which are, in addition, the complement of a C1 submanifold, boundary allowed, of M. This delivers a partial positive answer to a problem posed by Antonio J. Di Scala and Gianni Manno (2014). Furthermore, in case M is an open submanifold of Rn, n ≥ 2, I prove that the complement of U in M, not required to be a submanifold now, can have arbitrarily large n-dimensional Lebesgue measure.

  3. Multiprocessor data acquisition for NordBall

    NASA Astrophysics Data System (ADS)

    Jerrestam, Dan; Forycki, A.; Holm, A.; Høy-Christensen, P.; Jian Shen, T.

    1989-12-01

    For the NordBall multidetector system, a versatile data acquisition system has been developed around the VME bus utilizing 68010 processors. The readout of the instrument is based on a generalized READER concept: READERs are CPU boards reading hardware in parallel for each event. The final fast logical decision that an event is present for readout is made in the FERA bus. Synchronization with the readout trigger coming from the FERA bus system is performed by a special hardware unit. Event-level synchronization between the READERs is done by the same hardware unit, monitored by a master CPU.

  4. PARTIAL TORUS INSTABILITY

    SciTech Connect

    Olmedo, Oscar; Zhang Jie

    2010-07-20

    Flux ropes are now generally accepted to be the magnetic configuration of coronal mass ejections (CMEs), which may be formed prior to or during solar eruptions. In this study, we model the flux rope as a current-carrying partial torus loop with its two footpoints anchored in the photosphere, and investigate its stability in the context of the torus instability (TI). Previous studies on TI have focused on the configuration of a circular torus and revealed the existence of a critical decay index of the overlying constraining magnetic field. Our study reveals that the critical index is a function of the fractional number of the partial torus, defined by the ratio between the arc length of the partial torus above the photosphere and the circumference of a circular torus of equal radius. We refer to this finding as the partial torus instability (PTI). It is found that a partial torus with a smaller fractional number has a smaller critical index, thus requiring a more gradually decreasing magnetic field to stabilize the flux rope. On the other hand, a partial torus with a larger fractional number has a larger critical index. In the limit of a circular torus when the fractional number approaches 1, the critical index goes to a maximum value. We demonstrate that the PTI helps us to understand the confinement, growth, and eventual eruption of a flux-rope CME.
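In symbols (notation assumed for illustration, not taken verbatim from the paper), with s the arc length of the partial torus above the photosphere and R the radius of the equivalent circular torus:

```latex
% Fractional number of the partial torus:
F = \frac{s}{2\pi R}, \qquad 0 < F \le 1 .
% With the overlying field decaying with height h as
% B_{\mathrm{ext}} \propto h^{-n}, instability sets in for
% n > n_{\mathrm{cr}}(F), where n_{\mathrm{cr}} increases with F
% and attains its maximum in the circular-torus limit F \to 1.
```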

  5. Parallel Subconvolution Filtering Architectures

    NASA Technical Reports Server (NTRS)

    Gray, Andrew A.

    2003-01-01

    These architectures are based on methods of vector processing and the discrete-Fourier-transform/inverse-discrete-Fourier-transform (DFT-IDFT) overlap-and-save method, combined with time-block separation of digital filters into frequency-domain subfilters implemented by use of subconvolutions. The parallel-processing method implemented in these architectures enables the use of relatively small DFT-IDFT pairs, while filter tap lengths are theoretically unlimited. The size of a DFT-IDFT pair is determined by the desired reduction in processing rate, rather than by the order of the filter that one seeks to implement. The emphasis in this report is on those aspects of the underlying theory and design rules that promote computational efficiency, parallel processing at reduced data rates, and simplification of the designs of very-large-scale integrated (VLSI) circuits needed to implement high-order filters and correlators.
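The generic DFT-IDFT overlap-and-save method named above can be sketched in software (a minimal NumPy sketch of the textbook algorithm, not the report's VLSI architecture; block size `nfft` and filter `h` are arbitrary examples):

```python
# Overlap-save FFT filtering: process the input in overlapping blocks of
# length nfft, multiply in the frequency domain, and discard the first
# m-1 samples of each block (corrupted by circular wrap-around).
import numpy as np

def overlap_save(x, h, nfft):
    m = len(h)                       # filter length; nfft must exceed m - 1
    hop = nfft - (m - 1)             # new samples consumed per block
    H = np.fft.fft(h, nfft)
    x_padded = np.concatenate([np.zeros(m - 1), x, np.zeros(hop)])
    out = []
    for start in range(0, len(x), hop):
        block = x_padded[start:start + nfft]
        y = np.fft.ifft(np.fft.fft(block) * H).real
        out.append(y[m - 1:])        # keep only the uncorrupted samples
    return np.concatenate(out)[:len(x)]

x = np.random.default_rng(1).standard_normal(1000)
h = np.array([0.25, 0.5, 0.25])
y = overlap_save(x, h, nfft=64)
# Matches direct convolution (causal part, same length as the input):
assert np.allclose(y, np.convolve(x, h)[:len(x)])
```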

  6. Parallel Anisotropic Tetrahedral Adaptation

    NASA Technical Reports Server (NTRS)

    Park, Michael A.; Darmofal, David L.

    2008-01-01

    An adaptive method that robustly produces high aspect ratio tetrahedra to a general 3D metric specification without introducing hybrid semi-structured regions is presented. The elemental operators and higher-level logic are described with their respective domain-decomposed parallelizations. An anisotropic tetrahedral grid adaptation scheme is demonstrated for 1000:1 stretching for a simple cube geometry. This form of adaptation is applicable to more complex domain boundaries via a cut-cell approach as demonstrated by a parallel 3D supersonic simulation of a complex fighter aircraft. To avoid the assumptions and approximations required to form a metric to specify adaptation, an approach is introduced that directly evaluates interpolation error. The grid is adapted to reduce and equidistribute this interpolation error calculation without the use of an intervening anisotropic metric. Direct interpolation error adaptation is illustrated for 1D and 3D domains.

  7. Homology, convergence and parallelism.

    PubMed

    Ghiselin, Michael T

    2016-01-01

    Homology is a relation of correspondence between parts of parts of larger wholes. It is used when tracking objects of interest through space and time and in the context of explanatory historical narratives. Homologues can be traced through a genealogical nexus back to a common ancestral precursor. Homology being a transitive relation, homologues remain homologous however much they may come to differ. Analogy is a relationship of correspondence between parts of members of classes having no relationship of common ancestry. Although homology is often treated as an alternative to convergence, the latter is not a kind of correspondence: rather, it is one of a class of processes that also includes divergence and parallelism. These often give rise to misleading appearances (homoplasies). Parallelism can be particularly hard to detect, especially when not accompanied by divergences in some parts of the body. PMID:26598721

  8. PCLIPS: Parallel CLIPS

    NASA Technical Reports Server (NTRS)

    Gryphon, Coranth D.; Miller, Mark D.

    1991-01-01

    PCLIPS (Parallel CLIPS) is a set of extensions to the C Language Integrated Production System (CLIPS) expert system language. PCLIPS is intended to provide an environment for the development of more complex, extensive expert systems. Multiple CLIPS expert systems are now capable of running simultaneously on separate processors, or separate machines, thus dramatically increasing the scope of solvable tasks within the expert systems. As a tool for parallel processing, PCLIPS allows for an expert system to add to its fact-base information generated by other expert systems, thus allowing systems to assist each other in solving a complex problem. This allows individual expert systems to be more compact and efficient, and thus run faster or on smaller machines.

  9. Ultrascalable petaflop parallel supercomputer

    DOEpatents

    Blumrich, Matthias A.; Chen, Dong; Chiu, George; Cipolla, Thomas M.; Coteus, Paul W.; Gara, Alan G.; Giampapa, Mark E.; Hall, Shawn; Haring, Rudolf A.; Heidelberger, Philip; Kopcsay, Gerard V.; Ohmacht, Martin; Salapura, Valentina; Sugavanam, Krishnan; Takken, Todd

    2010-07-20

    A massively parallel supercomputer of petaOPS-scale includes node architectures based upon System-On-a-Chip technology, where each processing node comprises a single Application Specific Integrated Circuit (ASIC) having up to four processing elements. The ASIC nodes are interconnected by multiple independent networks that optimally maximize the throughput of packet communications between nodes with minimal latency. The multiple networks may include three high-speed networks for parallel algorithm message passing including a Torus, collective network, and a Global Asynchronous network that provides global barrier and notification functions. These multiple independent networks may be collaboratively or independently utilized according to the needs or phases of an algorithm for optimizing algorithm processing performance. The use of a DMA engine is provided to facilitate message passing among the nodes without the expenditure of processing resources at the node.

  10. Parallel multilevel preconditioners

    SciTech Connect

    Bramble, J.H.; Pasciak, J.E.; Xu, Jinchao.

    1989-01-01

    In this paper, we shall report on some techniques for the development of preconditioners for the discrete systems which arise in the approximation of solutions to elliptic boundary value problems. Here we shall only state the resulting theorems. It has been demonstrated that preconditioned iteration techniques often lead to the most computationally effective algorithms for the solution of the large algebraic systems corresponding to boundary value problems in two and three dimensional Euclidean space. The use of preconditioned iteration will become even more important on computers with parallel architecture. This paper discusses an approach for developing completely parallel multilevel preconditioners. In order to illustrate the resulting algorithms, we shall describe the simplest application of the technique to a model elliptic problem.
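Preconditioned iteration in general can be illustrated with a much simpler preconditioner than the multilevel one developed in the paper. The sketch below runs preconditioned conjugate gradients with a plain Jacobi (diagonal) preconditioner on a 1D Laplacian system; it shows the mechanics only, not the paper's method:

```python
# Preconditioned conjugate gradients with a Jacobi (diagonal) preconditioner
# (illustrative only; NOT the parallel multilevel preconditioner of the paper).
import numpy as np

def pcg(A, b, M_inv, tol=1e-10, maxiter=500):
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_inv * r                    # apply the diagonal preconditioner
    p = z.copy()
    rz = r @ z
    for it in range(maxiter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            return x, it + 1
        z = M_inv * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x, maxiter

n = 100
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)   # 1D Laplacian stencil
b = np.ones(n)
M_inv = 1.0 / np.diag(A)                               # Jacobi preconditioner
x, iters = pcg(A, b, M_inv)
assert np.linalg.norm(A @ x - b) < 1e-8
```

A multilevel preconditioner would replace the single diagonal scaling `M_inv * r` with a sweep over a hierarchy of coarser problems, which is what makes the method effective for large elliptic systems.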

  11. Xyce parallel electronic simulator.

    SciTech Connect

    Keiter, Eric Richard; Mei, Ting; Russo, Thomas V.; Rankin, Eric Lamont; Schiek, Richard Louis; Thornquist, Heidi K.; Fixel, Deborah A.; Coffey, Todd Stirling; Pawlowski, Roger Patrick; Santarelli, Keith R.

    2010-05-01

    This document is a reference guide to the Xyce Parallel Electronic Simulator, and is a companion document to the Xyce Users' Guide. The focus of this document is to list, as exhaustively as possible, the device parameters, solver options, parser options, and other usage details of Xyce. This document is not intended to be a tutorial. Users who are new to circuit simulation are better served by the Xyce Users' Guide.

  12. ASSEMBLY OF PARALLEL PLATES

    DOEpatents

    Groh, E.F.; Lennox, D.H.

    1963-04-23

    This invention is concerned with a rigid assembly of parallel plates in which keyways are stamped out along the edges of the plates and a self-retaining key is inserted into aligned keyways. Spacers having similar keyways are included between adjacent plates. The entire assembly is locked into a rigid structure by fastening only the outermost plates to the ends of the keys. (AEC)

  13. Adaptive parallel logic networks

    SciTech Connect

    Martinez, T.R.; Vidal, J.J.

    1988-02-01

    This paper presents a novel class of special purpose processors referred to as ASOCS (adaptive self-organizing concurrent systems). Intended applications include adaptive logic devices, robotics, process control, system malfunction management, and in general, applications of logic reasoning. ASOCS combines massive parallelism with self-organization to attain a distributed mechanism for adaptation. The ASOCS approach is based on an adaptive network composed of many simple computing elements (nodes) which operate in a combinational and asynchronous fashion. Problem specification (programming) is obtained by presenting to the system if-then rules expressed as Boolean conjunctions. New rules are added incrementally. In the current model, when conflicts occur, precedence is given to the most recent inputs. With each rule, desired network response is simply presented to the system, following which the network adjusts itself to maintain consistency and parsimony of representation. Data processing and adaptation form two separate phases of operation. During processing, the network acts as a parallel hardware circuit. Control of the adaptive process is distributed among the network nodes and efficiently exploits parallelism.
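The rule mechanism described above (if-then rules as Boolean conjunctions, added incrementally, with precedence to the most recent inputs on conflict) can be mimicked in a toy software sketch. Everything here is hypothetical illustration, not the ASOCS node hardware:

```python
# Toy rule base: each rule is a Boolean conjunction over named inputs plus an
# output value. Rules are added incrementally; on conflict, the most recently
# added matching rule wins, mirroring the precedence scheme described above.
rules = []  # list of (conditions, output) pairs, oldest first

def add_rule(conditions, output):
    """conditions: dict like {'a': True, 'b': False}; output: bool."""
    rules.append((conditions, output))

def evaluate(inputs, default=False):
    # Scan newest-first so the most recently added matching rule wins.
    for conditions, output in reversed(rules):
        if all(inputs.get(k) == v for k, v in conditions.items()):
            return output
    return default

add_rule({'a': True, 'b': True}, True)    # a AND b -> True
add_rule({'a': True, 'c': False}, False)  # newer rule takes precedence
print(evaluate({'a': True, 'b': True, 'c': False}))  # → False
print(evaluate({'a': True, 'b': True, 'c': True}))   # → True
```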

  14. 76 FR 6003 - Defense Federal Acquisition Regulation Supplement; Marking of Government-Furnished Property

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-02-02

    ... administrator or other responsible Government official during a property management system analysis or audit... partial, advance, progress, or performance-based payments; (G) Intellectual property or software; or (H... 252 Defense Federal Acquisition Regulation Supplements; Marking of Government-Furnished...

  15. Reconfigurable Embedded System for Electrocardiogram Acquisition.

    PubMed

    Kay, Marcel Seiji; Iaione, Fábio

    2015-01-01

    Smartphones include features that offer the chance to develop mobile systems in the medical field, resulting in an area called mobile health. One of the most common medical examinations is the electrocardiogram (ECG), which allows the diagnosis of various heart diseases, leading to preventative measures and preventing more serious problems. The objective of this study was to develop a wireless reconfigurable embedded system using an FPAA (Field Programmable Analog Array) for the acquisition of ECG signals, and an application showing and storing these signals on Android smartphones. The application also performs partial FPAA reconfiguration in real time (adjustable gain). Previous studies using FPAAs usually rely on the development boards provided by the manufacturer (high cost), do not allow reconfiguration in real time, do not use a smartphone, and communicate via cables. The parameters tested in the acquisition circuit and the quality of the ECGs registered in an individual were satisfactory. PMID:26262018

  16. Trajectory optimization using parallel shooting method on parallel computer

    SciTech Connect

    Wirthman, D.J.; Park, S.Y.; Vadali, S.R.

    1995-03-01

    The efficiency of a parallel shooting method on a parallel computer for solving a variety of optimal control guidance problems is studied. Several examples are considered to demonstrate that a speedup of nearly 7 to 1 is achieved with the use of 16 processors. It is suggested that further improvements in performance can be achieved by parallelizing in the state domain. 10 refs.
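The figures quoted above (speedup of nearly 7 with 16 processors) correspond to a parallel efficiency below 50%; a quick check of the standard definition:

```python
# Parallel efficiency = speedup / number of processors.
def efficiency(speedup, n_procs):
    return speedup / n_procs

e = efficiency(7.0, 16)  # reported speedup on 16 processors
print(f"parallel efficiency: {e:.1%}")  # → parallel efficiency: 43.8%
```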

  17. On Shaft Data Acquisition System (OSDAS)

    NASA Technical Reports Server (NTRS)

    Pedings, Marc; DeHart, Shawn; Formby, Jason; Naumann, Charles

    2012-01-01

    On Shaft Data Acquisition System (OSDAS) is a rugged, compact, multiple-channel data acquisition computer system that is designed to record data from instrumentation while operating under extreme rotational centrifugal or gravitational acceleration forces. This system, which was developed for the Heritage Fuel Air Turbine Test (HFATT) program, addresses the problem of recording multiple channels of high-sample-rate data on almost any rotating test article by mounting the entire acquisition computer onboard with the turbine test article. With the limited availability of slip ring wires for power and communication, OSDAS utilizes its own resources to provide independent power and amplification for each instrument. Since OSDAS utilizes standard PC technology as well as shared code interfaces with the next-generation, real-time health monitoring system (SPARTAA: Scalable Parallel Architecture for Real Time Analysis and Acquisition), this system could be expanded beyond its current capabilities, such as providing advanced health monitoring capabilities for the test article. High-conductor-count slip rings are expensive to purchase and maintain, yet only provide a limited number of conductors for routing instrumentation off the article and to a stationary data acquisition system. In addition to being limited to a small number of instruments, slip rings are prone to wear quickly, and introduce noise and other undesirable characteristics to the signal data. This led to the development of a system capable of recording high-density instrumentation, at high sample rates, on the test article itself, all while under extreme rotational stress. OSDAS is a fully functional PC-based system with 48 phase-synchronized, 24-bit, high-sample-rate input channels and an onboard storage capacity of over 1/2-terabyte of solid-state storage. This recording system takes a novel approach to the problem of recording multiple channels of instrumentation, integrated with the test

  18. The Galley Parallel File System

    NASA Technical Reports Server (NTRS)

    Nieuwejaar, Nils; Kotz, David

    1996-01-01

    As the I/O needs of parallel scientific applications increase, file systems for multiprocessors are being designed to provide applications with parallel access to multiple disks. Many parallel file systems present applications with a conventional Unix-like interface that allows the application to access multiple disks transparently. The interface conceals the parallelism within the file system, which increases the ease of programmability, but makes it difficult or impossible for sophisticated programmers and libraries to use knowledge about their I/O needs to exploit that parallelism. Furthermore, most current parallel file systems are optimized for a different workload than they are being asked to support. We introduce Galley, a new parallel file system that is intended to efficiently support realistic parallel workloads. We discuss Galley's file structure and application interface, as well as an application that has been implemented using that interface.

  19. Resistor Combinations for Parallel Circuits.

    ERIC Educational Resources Information Center

    McTernan, James P.

    1978-01-01

    To help simplify both teaching and learning of parallel circuits, a high school electricity/electronics teacher presents and illustrates the use of tables of values for parallel resistive circuits in which total resistances are whole numbers. (MF)
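    The tables of whole-number parallel combinations described above are easy to regenerate. A minimal sketch (the function name and resistance range are illustrative, not from the article):

    ```python
    def parallel_resistance(r1, r2):
        """Total resistance of two resistors in parallel: Rt = R1*R2 / (R1 + R2)."""
        return r1 * r2 / (r1 + r2)

    # Tabulate pairs (R1 <= R2) up to 20 ohms whose parallel total is a whole number.
    whole_number_pairs = [
        (r1, r2, int(parallel_resistance(r1, r2)))
        for r1 in range(1, 21)
        for r2 in range(r1, 21)
        if (r1 * r2) % (r1 + r2) == 0
    ]

    for r1, r2, rt in whole_number_pairs:
        print(f"{r1} ohm || {r2} ohm = {rt} ohm")
    ```

    The divisibility test `(r1 * r2) % (r1 + r2) == 0` keeps the arithmetic in integers, so no floating-point rounding can misclassify a pair.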

  20. Partial spread OFDM

    NASA Astrophysics Data System (ADS)

    Elghariani, Ali; Zoltowski, Michael D.

    2012-05-01

    In this paper, a partial spread OFDM system is presented and its performance is studied when different detection techniques are employed, such as minimum mean square error (MMSE), grouped maximum likelihood (ML), and approximated integer quadratic programming (IQP). The performance study also includes two different spreading matrices, Hadamard and Vandermonde. Extensive computer simulations show that the partial spread OFDM system improves the BER performance and the frequency diversity of OFDM compared to both non-spread and fully spread systems. The results also show that partial spreading combined with a suboptimal detector can be a good solution for applications that require low receiver complexity and high information detectability.
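    The partial-spreading idea, applying a spreading matrix to small groups of subcarriers rather than to the full OFDM symbol, can be sketched as follows. The group size, symbol count, and BPSK data are illustrative, and this shows only the Hadamard spreading/despreading step, not the detectors studied in the paper:

    ```python
    import numpy as np

    def hadamard(n):
        """Sylvester construction of an n x n Hadamard matrix (n a power of 2)."""
        H = np.array([[1.0]])
        while H.shape[0] < n:
            H = np.block([[H, H], [H, -H]])
        return H

    G = 4          # spreading group size (partial spread: G < number of subcarriers)
    N = 16         # total subcarriers
    H = hadamard(G)

    symbols = np.random.choice([-1.0, 1.0], size=N)   # BPSK data symbols

    # Partial spreading: each group of G symbols is spread independently.
    spread = np.concatenate([H @ symbols[i:i + G] for i in range(0, N, G)])

    # OFDM modulation would follow, e.g. time_signal = np.fft.ifft(spread).

    # Despreading at the receiver uses orthogonality: H @ H.T = G * I.
    recovered = np.concatenate([H.T @ spread[i:i + G] / G for i in range(0, N, G)])
    ```

    Full spreading corresponds to `G = N` and no spreading to `G = 1`; intermediate group sizes trade detector complexity against frequency diversity.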

  1. Methanol partial oxidation reformer

    DOEpatents

    Ahmed, S.; Kumar, R.; Krumpelt, M.

    1999-08-17

    A partial oxidation reformer is described comprising a longitudinally extending chamber having a methanol, water and an air inlet and an outlet. An igniter mechanism is near the inlets for igniting a mixture of methanol and air, while a partial oxidation catalyst in the chamber is spaced from the inlets and converts methanol and oxygen to carbon dioxide and hydrogen. Controlling the oxygen to methanol mole ratio provides continuous slightly exothermic partial oxidation reactions of methanol and air producing hydrogen gas. The liquid is preferably injected in droplets having diameters less than 100 micrometers. The reformer is useful in a propulsion system for a vehicle which supplies a hydrogen-containing gas to the negative electrode of a fuel cell. 7 figs.

  2. Methanol partial oxidation reformer

    DOEpatents

    Ahmed, Shabbir; Kumar, Romesh; Krumpelt, Michael

    1999-01-01

    A partial oxidation reformer comprising a longitudinally extending chamber having a methanol, water and an air inlet and an outlet. An igniter mechanism is near the inlets for igniting a mixture of methanol and air, while a partial oxidation catalyst in the chamber is spaced from the inlets and converts methanol and oxygen to carbon dioxide and hydrogen. Controlling the oxygen to methanol mole ratio provides continuous slightly exothermic partial oxidation reactions of methanol and air producing hydrogen gas. The liquid is preferably injected in droplets having diameters less than 100 micrometers. The reformer is useful in a propulsion system for a vehicle which supplies a hydrogen-containing gas to the negative electrode of a fuel cell.

  3. Methanol partial oxidation reformer

    DOEpatents

    Ahmed, Shabbir; Kumar, Romesh; Krumpelt, Michael

    2001-01-01

    A partial oxidation reformer comprising a longitudinally extending chamber having a methanol, water and an air inlet and an outlet. An igniter mechanism is near the inlets for igniting a mixture of methanol and air, while a partial oxidation catalyst in the chamber is spaced from the inlets and converts methanol and oxygen to carbon dioxide and hydrogen. Controlling the oxygen to methanol mole ratio provides continuous slightly exothermic partial oxidation reactions of methanol and air producing hydrogen gas. The liquid is preferably injected in droplets having diameters less than 100 micrometers. The reformer is useful in a propulsion system for a vehicle which supplies a hydrogen-containing gas to the negative electrode of a fuel cell.

  4. Methanol partial oxidation reformer

    DOEpatents

    Ahmed, S.; Kumar, R.; Krumpelt, M.

    1999-08-24

    A partial oxidation reformer is described comprising a longitudinally extending chamber having a methanol, water and an air inlet and an outlet. An igniter mechanism is near the inlets for igniting a mixture of methanol and air, while a partial oxidation catalyst in the chamber is spaced from the inlets and converts methanol and oxygen to carbon dioxide and hydrogen. Controlling the oxygen to methanol mole ratio provides continuous slightly exothermic partial oxidation reactions of methanol and air producing hydrogen gas. The liquid is preferably injected in droplets having diameters less than 100 micrometers. The reformer is useful in a propulsion system for a vehicle which supplies a hydrogen-containing gas to the negative electrode of a fuel cell. 7 figs.

  5. Oxygen partial pressure sensor

    DOEpatents

    Dees, D.W.

    1994-09-06

    A method for detecting oxygen partial pressure and an oxygen partial pressure sensor are provided. The method for measuring oxygen partial pressure includes contacting oxygen to a solid oxide electrolyte and measuring the subsequent change in electrical conductivity of the solid oxide electrolyte. A solid oxide electrolyte is utilized that contacts both a porous electrode and a nonporous electrode. The electrical conductivity of the solid oxide electrolyte is affected when oxygen from an exhaust stream permeates through the porous electrode to establish an equilibrium of oxygen anions in the electrolyte, thereby displacing electrons throughout the electrolyte to form an electron gradient. By adapting the two electrodes to sense a voltage potential between them, the change in electrolyte conductivity due to oxygen presence can be measured. 1 fig.

  6. Oxygen partial pressure sensor

    DOEpatents

    Dees, Dennis W.

    1994-01-01

    A method for detecting oxygen partial pressure and an oxygen partial pressure sensor are provided. The method for measuring oxygen partial pressure includes contacting oxygen to a solid oxide electrolyte and measuring the subsequent change in electrical conductivity of the solid oxide electrolyte. A solid oxide electrolyte is utilized that contacts both a porous electrode and a nonporous electrode. The electrical conductivity of the solid oxide electrolyte is affected when oxygen from an exhaust stream permeates through the porous electrode to establish an equilibrium of oxygen anions in the electrolyte, thereby displacing electrons throughout the electrolyte to form an electron gradient. By adapting the two electrodes to sense a voltage potential between them, the change in electrolyte conductivity due to oxygen presence can be measured.

  7. Data Acquisition Backend

    SciTech Connect

    Britton Jr., Charles L.; Ezell, N. Dianne Bull; Roberts, Michael

    2013-10-01

    This document is intended to summarize the development and testing of the data acquisition module portion of the Johnson Noise Thermometry (JNT) system developed at ORNL. The proposed system has been presented in an earlier report [1]. A more extensive project background including the project rationale is available in the initial project report [2].

  8. [Acquisition of arithmetic knowledge].

    PubMed

    Fayol, Michel

    2008-01-01

    The focus of this paper is on contemporary research on the counting and arithmetical competencies that emerge during infancy, the preschool years, and elementary school. I provide a brief overview of the evolution of children's conceptual knowledge of arithmetic, the acquisition and use of counting, and how they solve simple arithmetic problems (e.g., 4 + 3). PMID:18198117

  9. Acquisition of Comparison Constructions

    ERIC Educational Resources Information Center

    Hohaus, Vera; Tiemann, Sonja; Beck, Sigrid

    2014-01-01

    This article presents a study on the time course of the acquisition of comparison constructions. The order in which comparison constructions (comparatives, measure phrases, superlatives, degree questions, etc.) show up in English- and German-learning children's spontaneous speech is quite fixed. It is shown to be insufficiently determined by…

  10. High Speed data acquisition

    SciTech Connect

    Cooper, P.S.

    1998-02-01

    A general introduction to high-speed data acquisition system techniques in modern particle physics experiments is given. Examples are drawn from the SELEX (E781) high-statistics charmed baryon production and decay experiment now taking data at Fermilab. © 1998 American Institute of Physics.

  11. Following Native Language Acquisition.

    ERIC Educational Resources Information Center

    Neiburg, Michael S.

    Native language acquisition is a natural and non-natural stage-by-stage process. The natural first stage is development of speech and listening skills. In this stage, competency is gained in the home environment. The next, non-natural stage is development of literacy, a cultural skill taught in school. Since oral-aural native language development…

  12. Telecommunications and data acquisition

    NASA Technical Reports Server (NTRS)

    Renzetti, N. A. (Editor)

    1981-01-01

    Deep Space Network progress in flight project support, tracking and data acquisition research and technology, network engineering, hardware and software implementation, and operations is reported. In addition, developments in Earth based radio technology as applied to geodynamics, astrophysics, and the radio search for extraterrestrial intelligence are reported.

  13. Image Acquisition Context

    PubMed Central

    Bidgood, W. Dean; Bray, Bruce; Brown, Nicolas; Mori, Angelo Rossi; Spackman, Kent A.; Golichowski, Alan; Jones, Robert H.; Korman, Louis; Dove, Brent; Hildebrand, Lloyd; Berg, Michael

    1999-01-01

    Objective: To support clinically relevant indexing of biomedical images and image-related information based on the attributes of image acquisition procedures and the judgments (observations) expressed by observers in the process of image interpretation. Design: The authors introduce the notion of “image acquisition context,” the set of attributes that describe image acquisition procedures, and present a standards-based strategy for utilizing the attributes of image acquisition context as indexing and retrieval keys for digital image libraries. Methods: The authors' indexing strategy is based on an interdependent message/terminology architecture that combines the Digital Imaging and Communication in Medicine (DICOM) standard, the SNOMED (Systematized Nomenclature of Human and Veterinary Medicine) vocabulary, and the SNOMED DICOM microglossary. The SNOMED DICOM microglossary provides context-dependent mapping of terminology to DICOM data elements. Results: The capability of embedding standard coded descriptors in DICOM image headers and image-interpretation reports improves the potential for selective retrieval of image-related information. This favorably affects information management in digital libraries. PMID:9925229

  14. Acquisitions List No. 42.

    ERIC Educational Resources Information Center

    Planned Parenthood--World Population, New York, NY. Katherine Dexter McCormick Library.

    The "Acquisitions List" of demographic books and articles is issued every two months by the Katharine Dexter McCormick Library. Divided into two parts, the first contains a list of books most recently acquired by the Library, each one annotated and also marked with the Library call number. The second part consists of a list of annotated articles,…

  15. Acquisitions List No. 43.

    ERIC Educational Resources Information Center

    Planned Parenthood--World Population, New York, NY. Katherine Dexter McCormick Library.

    The "Acquisitions List" of demographic books and articles is issued every two months by the Katharine Dexter McCormick Library. Divided into two parts, the first contains a list of books most recently acquired by the Library, each one annotated and also marked with the Library call number. The second part consists of a list of annotated articles,…

  16. 75 FR 25844 - Class Deviation From FAR 52.219-7, Notice of Partial Small Business Set-Aside

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-05-10

    ... of the Secretary Class Deviation From FAR 52.219-7, Notice of Partial Small Business Set-Aside AGENCY... class deviation to the Federal Acquisition Regulation (FAR) regarding partial small business set-asides... Clause 52.219-7, Notice of Partial Small Business Set-Aside. DESC intends to use the clause in...

  17. Parallel Pascal - An extended Pascal for parallel computers

    NASA Technical Reports Server (NTRS)

    Reeves, A. P.

    1984-01-01

    Parallel Pascal is an extended version of the conventional serial Pascal programming language which includes a convenient syntax for specifying array operations. It is upward compatible with standard Pascal and involves only a small number of carefully chosen new features. Parallel Pascal was developed to reduce the semantic gap between standard Pascal and a large range of highly parallel computers. Two important design goals of Parallel Pascal were efficiency and portability. Portability is particularly difficult to achieve since different parallel computers frequently have very different capabilities.

  18. Highly parallel computation

    NASA Technical Reports Server (NTRS)

    Denning, Peter J.; Tichy, Walter F.

    1990-01-01

    Highly parallel computing architectures are the only means to achieve the computation rates demanded by advanced scientific problems. A decade of research has demonstrated the feasibility of such machines, and current research focuses on which architectures are best suited to particular classes of problems. The architectures designated as multiple instruction multiple datastream (MIMD) and single instruction multiple datastream (SIMD) have produced the best results to date; neither shows a decisive advantage for most near-homogeneous scientific problems. For scientific problems with many dissimilar parts, more speculative architectures such as neural networks or data flow may be needed.

  19. Parallel Eclipse Project Checkout

    NASA Technical Reports Server (NTRS)

    Crockett, Thomas M.; Joswig, Joseph C.; Shams, Khawaja S.; Powell, Mark W.; Bachmann, Andrew G.

    2011-01-01

    Parallel Eclipse Project Checkout (PEPC) is a program written to leverage parallelism and to automate the checkout process of plug-ins created in Eclipse RCP (Rich Client Platform). Eclipse plug-ins can be aggregated in a feature project. This innovation digests a feature description (xml file) and automatically checks out all of the plug-ins listed in the feature. This resolves the issue of manually checking out each plug-in required to work on the project. To minimize the amount of time necessary to checkout the plug-ins, this program makes the plug-in checkouts parallel. After parsing the feature, a checkout request is issued for each plug-in in the feature. These requests are handled by a thread pool with a configurable number of threads. By checking out the plug-ins in parallel, the checkout process is streamlined before getting started on the project. For instance, projects that took 30 minutes to checkout now take less than 5 minutes. The effect is especially clear on a Mac, which has a network monitor displaying the bandwidth use. When running the client from a developer's home, the checkout process now saturates the bandwidth in order to get all the plug-ins checked out as fast as possible. For comparison, a checkout process that ranged from 8-200 Kbps from a developer's home is now able to saturate a pipe of 1.3 Mbps, resulting in significantly faster checkouts. Eclipse IDE (integrated development environment) tries to build a project as soon as it is downloaded. As part of another optimization, this innovation programmatically tells Eclipse to stop building while checkouts are happening, which dramatically reduces lock contention and enables plug-ins to continue downloading until all of them finish. Furthermore, the software re-enables automatic building, and forces Eclipse to do a clean build once it finishes checking out all of the plug-ins. This software is fully generic and does not contain any NASA-specific code. It can be applied to any
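    The thread-pool checkout pattern described above can be sketched in a few lines. The plug-in names and the `checkout` function are placeholders for illustration, not PEPC's actual API:

    ```python
    from concurrent.futures import ThreadPoolExecutor

    def checkout(plugin):
        """Placeholder for a version-control checkout of one Eclipse plug-in."""
        # A real implementation would invoke the SCM client here.
        return f"checked out {plugin}"

    # Plug-in list as it might be parsed from a feature.xml (names are hypothetical).
    plugins = ["org.example.core", "org.example.ui", "org.example.net"]

    # A pool with a configurable number of threads issues all checkout
    # requests concurrently; map preserves the order of the input list.
    with ThreadPoolExecutor(max_workers=8) as pool:
        results = list(pool.map(checkout, plugins))
    ```

    Because checkouts are I/O-bound, a thread pool suffices to saturate the network link even under Python's global interpreter lock.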

  20. Fastpath Speculative Parallelization

    NASA Astrophysics Data System (ADS)

    Spear, Michael F.; Kelsey, Kirk; Bai, Tongxin; Dalessandro, Luke; Scott, Michael L.; Ding, Chen; Wu, Peng

    We describe Fastpath, a system for speculative parallelization of sequential programs on conventional multicore processors. Our system distinguishes between the lead thread, which executes at almost-native speed, and speculative threads, which execute somewhat slower. This allows us to achieve nontrivial speedup, even on two-core machines. We present a mathematical model of potential speedup, parameterized by application characteristics and implementation constants. We also present preliminary results gleaned from two different Fastpath implementations, each derived from an implementation of software transactional memory.

  1. Synchronous Parallel Kinetic Monte Carlo

    SciTech Connect

    Martínez, E; Marian, J; Kalos, M H

    2006-12-14

    A novel parallel kinetic Monte Carlo (kMC) algorithm formulated on the basis of perfect time synchronicity is presented. The algorithm provides an exact generalization of any standard serial kMC model and is trivially implemented in parallel architectures. We demonstrate the mathematical validity and parallel performance of the method by solving several well-understood problems in diffusion.
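    One common way to realize "perfect time synchronicity" in parallel kMC is with null events: every domain advances on a shared clock driven by the maximum domain rate, executing a real event with probability r_i/R_max and a null event otherwise. The toy sketch below illustrates that generic scheme (rates and step count are invented); it is not necessarily the exact algorithm of this paper:

    ```python
    import math
    import random

    rates = [2.0, 0.5, 1.0]          # per-domain total event rates (illustrative)
    r_max = max(rates)
    clocks = [0.0] * len(rates)      # per-domain simulation time
    events = [0] * len(rates)        # count of real (non-null) events

    random.seed(0)
    for _ in range(1000):
        # A common exponential time step drawn from the fastest rate keeps
        # every domain clock exactly synchronized after each step.
        dt = -math.log(random.random()) / r_max
        for i, r in enumerate(rates):
            clocks[i] += dt
            if random.random() < r / r_max:   # real event; otherwise a null event
                events[i] += 1
    ```

    Slower domains pay for synchronicity with wasted null events, which is why the scheme reproduces serial kMC statistics exactly while remaining trivially parallel.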

  2. CSM parallel structural methods research

    NASA Technical Reports Server (NTRS)

    Storaasli, Olaf O.

    1989-01-01

    Parallel structural methods, research team activities, advanced architecture computers for parallel computational structural mechanics (CSM) research, the FLEX/32 multicomputer, a parallel structural analyses testbed, blade-stiffened aluminum panel with a circular cutout and the dynamic characteristics of a 60 meter, 54-bay, 3-longeron deployable truss beam are among the topics discussed.

  3. Roo: A parallel theorem prover

    SciTech Connect

    Lusk, E.L.; McCune, W.W.; Slaney, J.K.

    1991-11-01

    We describe a parallel theorem prover based on the Argonne theorem-proving system OTTER. The parallel system, called Roo, runs on shared-memory multiprocessors such as the Sequent Symmetry. We explain the parallel algorithm used and give performance results that demonstrate near-linear speedups on large problems.

  4. Parallelized direct execution simulation of message-passing parallel programs

    NASA Technical Reports Server (NTRS)

    Dickens, Phillip M.; Heidelberger, Philip; Nicol, David M.

    1994-01-01

    As massively parallel computers proliferate, there is growing interest in finding ways by which the performance of massively parallel codes can be efficiently predicted. This problem arises in diverse contexts such as parallelizing compilers, parallel performance monitoring, and parallel algorithm development. In this paper we describe one solution in which one directly executes the application code but uses a discrete-event simulator to model details of the presumed parallel machine, such as operating system and communication network behavior. Because this approach is computationally expensive, we are interested in its own parallelization, specifically the parallelization of the discrete-event simulator. We describe methods suitable for parallelized direct execution simulation of message-passing parallel programs, and report on the performance of such a system, the Large Application Parallel Simulation Environment (LAPSE), which we have built on the Intel Paragon. On all codes measured to date, LAPSE predicts performance well, typically within 10 percent relative error. Depending on the nature of the application code, we have observed low slowdowns (relative to natively executing code) and high relative speedups using up to 64 processors.
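    The direct-execution idea, application code runs natively while a discrete-event simulator accounts only for virtual time such as modeled network latency, can be caricatured with an event queue. The latency value and two-process setup are invented for illustration and do not reflect LAPSE's internals:

    ```python
    import heapq

    LATENCY = 5.0                    # modeled network latency (virtual time units)
    virtual_time = {0: 0.0, 1: 0.0}  # per-process virtual clocks
    queue = []                       # min-heap of (arrival_time, dest, payload)

    def send(src, dst, payload, compute_cost):
        """Advance the sender's clock by its (measured) compute cost, then
        schedule delivery on the modeled network."""
        virtual_time[src] += compute_cost
        heapq.heappush(queue, (virtual_time[src] + LATENCY, dst, payload))

    send(0, 1, "work", compute_cost=3.0)   # process 0 computes, then sends
    while queue:
        arrival, dst, payload = heapq.heappop(queue)
        # The receiver blocks (in virtual time) until the message arrives.
        virtual_time[dst] = max(virtual_time[dst], arrival)
    ```

    Compute costs come from timing the natively executing code; only events that cross process boundaries need enter the simulator, which is what makes parallelizing the simulator itself the bottleneck worth attacking.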

  5. Coordinating Council. Seventh Meeting: Acquisitions

    NASA Technical Reports Server (NTRS)

    1992-01-01

    The theme for this NASA Scientific and Technical Information Program Coordinating Council meeting was Acquisitions. In addition to NASA and the NASA Center for AeroSpace Information (CASI) presentations, the report contains fairly lengthy visuals about acquisitions at the Defense Technical Information Center. CASI's acquisitions program and CASI's proactive acquisitions activity were described. There was a presentation on the document evaluation process at CASI. A talk about open literature scope and coverage at the American Institute of Aeronautics and Astronautics was also given. An overview of the STI Program's Acquisitions Experts Committee was given next. Finally, acquisitions initiatives of the NASA STI program were presented.

  6. Partial Arc Curvilinear Direct Drive Servomotor

    NASA Technical Reports Server (NTRS)

    Sun, Xiuhong (Inventor)

    2014-01-01

    A partial arc servomotor assembly having a curvilinear U-channel with two parallel rare earth permanent magnet plates facing each other and a pivoted ironless three phase coil armature winding moves between the plates. An encoder read head is fixed to a mounting plate above the coil armature winding and a curvilinear encoder scale is curved to be coaxial with the curvilinear U-channel permanent magnet track formed by the permanent magnet plates. Driven by a set of miniaturized power electronics devices closely looped with a positioning feedback encoder, the angular position and velocity of the pivoted payload is programmable and precisely controlled.

  7. Making parallel lines meet

    PubMed Central

    Baskin, Tobias I.; Gu, Ying

    2012-01-01

    The extracellular matrix is constructed beyond the plasma membrane, challenging mechanisms for its control by the cell. In plants, the cell wall is highly ordered, with cellulose microfibrils aligned coherently over a scale spanning hundreds of cells. To a considerable extent, deploying aligned microfibrils determines mechanical properties of the cell wall, including strength and compliance. Cellulose microfibrils have long been seen to be aligned in parallel with an array of microtubules in the cell cortex. How do these cortical microtubules affect the cellulose synthase complex? This question has stood for as many years as the parallelism between the elements has been observed, but now an answer is emerging. Here, we review recent work establishing that the link between microtubules and microfibrils is mediated by a protein named cellulose synthase-interacting protein 1 (CSI1). The protein binds both microtubules and components of the cellulose synthase complex. In the absence of CSI1, microfibrils are synthesized but their alignment becomes uncoupled from the microtubules, an effect that is phenocopied in the wild type by depolymerizing the microtubules. The characterization of CSI1 significantly enhances knowledge of how cellulose is aligned, a process that serves as a paradigmatic example of how cells dictate the construction of their extracellular environment. PMID:22902763

  8. Applied Parallel Metadata Indexing

    SciTech Connect

    Jacobi, Michael R

    2012-08-01

    The GPFS Archive is a parallel archive used by hundreds of users in the Turquoise collaboration network. It houses 4+ petabytes of data in more than 170 million files. Currently, users must navigate the file system to retrieve their data, requiring them to remember file paths and names. A better solution might allow users to tag data with meaningful labels and search the archive using standard and user-defined metadata, while maintaining security. Last summer, I developed the backend to a tool that adheres to these design goals. The backend works by importing GPFS metadata into a MongoDB cluster, which is then indexed on each attribute. This summer, the author implemented security and developed the user interface for the search tool. To meet security requirements, each database table is associated with a single user, stores only records that the user may read, and requires a set of credentials to access. The interface to the search tool is implemented using FUSE (Filesystem in USErspace). FUSE is an intermediate layer that intercepts file system calls and allows the developer to redefine how those calls behave. In the case of this tool, FUSE interfaces with MongoDB to issue queries and populate output. A FUSE implementation is desirable because it allows users to interact with the search tool using commands they are already familiar with. These security and interface additions are essential for a usable product.
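    The core idea, tag files with attributes and query by attribute rather than by path, can be sketched with an in-memory inverted index standing in for the MongoDB cluster. The record fields, paths, and per-user filtering below are invented to illustrate the design, not the tool's actual schema:

    ```python
    from collections import defaultdict

    # Toy stand-in for imported GPFS metadata records (paths/fields invented).
    records = [
        {"path": "/archive/run1/data.h5", "owner": "alice", "project": "turquoise"},
        {"path": "/archive/run2/data.h5", "owner": "bob",   "project": "turquoise"},
        {"path": "/archive/notes.txt",    "owner": "alice", "project": "docs"},
    ]

    # Index every attribute, mirroring "indexed on each attribute".
    index = defaultdict(set)
    for rec in records:
        for field, value in rec.items():
            if field != "path":
                index[(field, value)].add(rec["path"])

    def search(user, **query):
        """Intersect per-attribute hit sets, then show only the caller's own
        records, mirroring the one-collection-per-user security model."""
        hits = set.intersection(*(index[item] for item in query.items()))
        return sorted(p for p in hits if p in index[("owner", user)])
    ```

    In the real tool these queries would be issued through FUSE, so a search looks like an ordinary directory listing to the user.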

  9. Massively Parallel QCD

    SciTech Connect

    Soltz, R; Vranas, P; Blumrich, M; Chen, D; Gara, A; Giampap, M; Heidelberger, P; Salapura, V; Sexton, J; Bhanot, G

    2007-04-11

    The theory of the strong nuclear force, Quantum Chromodynamics (QCD), can be numerically simulated from first principles on massively-parallel supercomputers using the method of Lattice Gauge Theory. We describe the special programming requirements of lattice QCD (LQCD) as well as the optimal supercomputer hardware architectures that it suggests. We demonstrate these methods on the BlueGene massively-parallel supercomputer and argue that LQCD and the BlueGene architecture are a natural match. This can be traced to the simple fact that LQCD is a regular lattice discretization of space into lattice sites while the BlueGene supercomputer is a discretization of space into compute nodes, and that both are constrained by requirements of locality. This simple relation is both technologically important and theoretically intriguing. The main result of this paper is the speedup of LQCD using up to 131,072 CPUs on the largest BlueGene/L supercomputer. The speedup is perfect with sustained performance of about 20% of peak. This corresponds to a maximum of 70.5 sustained TFlop/s. At these speeds LQCD and BlueGene are poised to produce the next generation of strong interaction physics theoretical results.
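    The locality argument above, lattice sites map onto compute nodes and communication touches only nearest neighbors, is the essence of halo (ghost-cell) exchange. The 1-D periodic decomposition below is a schematic of that pattern, not BlueGene or LQCD code:

    ```python
    import numpy as np

    N_NODES = 4
    LOCAL = 8                      # lattice sites owned by each node
    lattice = np.arange(N_NODES * LOCAL, dtype=float)

    # Each "node" stores its slab plus one ghost (halo) site on each side.
    slabs = [np.empty(LOCAL + 2) for _ in range(N_NODES)]
    for n in range(N_NODES):
        slabs[n][1:-1] = lattice[n * LOCAL:(n + 1) * LOCAL]

    # Halo exchange: each node copies boundary sites from its periodic neighbors.
    for n in range(N_NODES):
        slabs[n][0]  = lattice[(n * LOCAL - 1) % lattice.size]      # left ghost
        slabs[n][-1] = lattice[((n + 1) * LOCAL) % lattice.size]    # right ghost

    # A nearest-neighbor stencil (e.g. a 1-D lattice Laplacian) now completes
    # with no further communication within the step.
    laplacian = [s[:-2] - 2 * s[1:-1] + s[2:] for s in slabs]
    ```

    Because only the surface of each slab is communicated while the volume is computed locally, the communication-to-computation ratio shrinks as slabs grow, which is the regularity-and-locality match the abstract describes.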

  10. Parallel ptychographic reconstruction

    PubMed Central

    Nashed, Youssef S. G.; Vine, David J.; Peterka, Tom; Deng, Junjing; Ross, Rob; Jacobsen, Chris

    2014-01-01

    Ptychography is an imaging method whereby a coherent beam is scanned across an object, and an image is obtained by iterative phasing of the set of diffraction patterns. It is able to be used to image extended objects at a resolution limited by scattering strength of the object and detector geometry, rather than at an optics-imposed limit. As technical advances allow larger fields to be imaged, computational challenges arise for reconstructing the correspondingly larger data volumes, yet at the same time there is also a need to deliver reconstructed images immediately so that one can evaluate the next steps to take in an experiment. Here we present a parallel method for real-time ptychographic phase retrieval. It uses a hybrid parallel strategy to divide the computation between multiple graphics processing units (GPUs) and then employs novel techniques to merge sub-datasets into a single complex phase and amplitude image. Results are shown on a simulated specimen and a real dataset from an X-ray experiment conducted at a synchrotron light source. PMID:25607174
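    Merging per-GPU sub-datasets into one image typically reduces to weighted averaging in the overlap regions. A 1-D sketch of that step (tile positions, sizes, and values are invented, and the paper's actual merging techniques may differ):

    ```python
    import numpy as np

    FULL = 12                  # size of the full reconstruction
    TILE = 8                   # size of each sub-reconstruction
    accum = np.zeros(FULL, dtype=complex)
    weight = np.zeros(FULL)

    # Two overlapping tiles, as if reconstructed on separate GPUs:
    # start index -> complex (phase and amplitude) tile.
    tiles = {0: np.full(TILE, 1 + 1j), 4: np.full(TILE, 1 + 1j)}

    for start, tile in tiles.items():
        accum[start:start + TILE] += tile
        weight[start:start + TILE] += 1.0

    # Average where tiles overlap; clamp avoids divide-by-zero off-support.
    merged = accum / np.maximum(weight, 1.0)
    ```

    In practice the weights would taper toward tile edges to hide seam artifacts, but the accumulate-and-normalize structure is the same.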