Science.gov

Sample records for partially parallel acquisitions

  1. Parallel Spectral Acquisition with an Ion Cyclotron Resonance Cell Array.

    PubMed

    Park, Sung-Gun; Anderson, Gordon A; Navare, Arti T; Bruce, James E

    2016-01-19

    Mass measurement accuracy is a critical analytical figure-of-merit in most areas of mass spectrometry application. However, the time required for acquisition of high-resolution, high mass accuracy data limits many applications and is an aspect under continual pressure for development. Current efforts target implementation of higher electrostatic and magnetic fields because ion oscillatory frequencies increase linearly with field strength. As such, the time required for spectral acquisition of a given resolving power and mass accuracy decreases linearly with increasing fields. Mass spectrometer developments to include multiple high-resolution detectors that can be operated in parallel could further decrease the acquisition time by a factor of n, the number of detectors. Efforts described here resulted in development of an instrument with a set of Fourier transform ion cyclotron resonance (ICR) cells as detectors that constitute the first MS array capable of parallel high-resolution spectral acquisition. ICR cell array systems consisting of three or five cells were constructed with printed circuit boards and installed within a single superconducting magnet and vacuum system. Independent ion populations were injected and trapped within each cell in the array. Upon filling the array, all ions in all cells were simultaneously excited and ICR signals from each cell were independently amplified and recorded in parallel. Presented here are the initial results of successful parallel spectral acquisition, parallel mass spectrometry (MS) and MS/MS measurements, and parallel high-resolution acquisition with the MS array system. PMID:26669509
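
    For orientation, the ICR frequency behind the abstract's linear-field argument is f_c = qB/(2*pi*m), and an n-cell array multiplies spectral throughput by n. A minimal sketch of both relations (the field strengths and the m/z 1000 ion are illustrative values, not from the paper):

    ```python
    import math

    Q_E = 1.602176634e-19   # elementary charge (C)
    DA = 1.66053906660e-27  # unified atomic mass unit (kg)

    def cyclotron_freq_hz(mz: float, b_tesla: float, charge: int = 1) -> float:
        """Unperturbed ICR frequency f_c = qB / (2*pi*m)."""
        q = charge * Q_E
        m = mz * charge * DA
        return q * b_tesla / (2 * math.pi * m)

    # Illustrative: a singly charged m/z 1000 ion; frequency scales linearly with B.
    for b in (7.0, 21.0):
        print(f"B = {b:4.1f} T -> f_c ~ {cyclotron_freq_hz(1000, b) / 1e3:.0f} kHz")

    # Throughput of an n-cell array: n transients are recorded simultaneously,
    # so the time to collect a fixed number of spectra drops by a factor of n.
    n_cells, n_spectra, t_transient = 5, 100, 1.0  # assumed values
    print(f"serial: {n_spectra * t_transient:.0f} s, "
          f"{n_cells}-cell array: {n_spectra * t_transient / n_cells:.0f} s")
    ```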

  2. Computational acceleration for MR image reconstruction in partially parallel imaging.

    PubMed

    Ye, Xiaojing; Chen, Yunmei; Huang, Feng

    2011-05-01

    In this paper, we present a fast numerical algorithm for solving total variation and L1 (TVL1) based image reconstruction with application in partially parallel magnetic resonance imaging. Our algorithm uses a variable splitting method to reduce computational cost. Moreover, the Barzilai-Borwein step size selection method is adopted in our algorithm for much faster convergence. Experimental results on clinical partially parallel imaging data demonstrate that the proposed algorithm requires far fewer iterations and/or less computational cost than recently developed operator splitting and Bregman operator splitting methods, which can deal with a general sensing matrix in the reconstruction framework, to achieve similar or even better quality of reconstructed images. PMID:20833599
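
    The Barzilai-Borwein rule mentioned above chooses each step size from the last two iterates: with s = x_k - x_{k-1} and y = grad_k - grad_{k-1}, it sets alpha = (s.s)/(s.y). A minimal sketch on a toy smooth quadratic (the paper applies the rule inside a TVL1 splitting scheme, which is not reproduced here):

    ```python
    import numpy as np

    def bb_gradient_descent(grad, x0, iters=50, alpha0=1e-3):
        """Gradient descent with the Barzilai-Borwein (BB1) step size."""
        x, g, alpha = x0.copy(), grad(x0), alpha0
        for _ in range(iters):
            x_new = x - alpha * g
            g_new = grad(x_new)
            s, y = x_new - x, g_new - g
            denom = s @ y
            # BB1 step alpha = <s,s>/<s,y>; fall back if the denominator vanishes.
            alpha = (s @ s) / denom if abs(denom) > 1e-12 else alpha0
            x, g = x_new, g_new
        return x

    # Toy quadratic f(x) = 0.5 x^T A x - b^T x, with gradient A x - b.
    rng = np.random.default_rng(0)
    M = rng.standard_normal((20, 20))
    A = M.T @ M + 20 * np.eye(20)  # well-conditioned SPD matrix
    b = rng.standard_normal(20)
    x = bb_gradient_descent(lambda v: A @ v - b, np.zeros(20))
    print("residual:", np.linalg.norm(A @ x - b))
    ```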

  3. Highly accelerated cardiac cine parallel MRI using low-rank matrix completion and partial separability model

    NASA Astrophysics Data System (ADS)

    Lyu, Jingyuan; Nakarmi, Ukash; Zhang, Chaoyi; Ying, Leslie

    2016-05-01

    This paper presents a new approach to highly accelerated dynamic parallel MRI using low-rank matrix completion and the partial separability (PS) model. In data acquisition, k-space data are randomly undersampled, moderately at the central k-space navigator locations but highly in the outer k-space, for each temporal frame. In reconstruction, the navigator data are first recovered from the undersampled data using structured low-rank matrix completion. After all the unacquired navigator data are estimated, the partial separability model is used to obtain partial k-t data. Then parallel imaging is used to reconstruct the entire dynamic image series from the highly undersampled data. The proposed method has been shown to achieve high-quality reconstructions with reduction factors up to 31 and a temporal resolution of 29 ms, where the conventional PS method fails.
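
    The partial separability model writes the space-time (Casorati) matrix as a low-rank product X ~ U V: fully sampled navigators determine the temporal basis V, and the spatial coefficients U are then fit to whatever samples exist elsewhere. A toy numpy sketch of that two-step recovery on synthetic data (not the authors' implementation):

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n_x, n_t, rank = 64, 40, 3

    # Synthetic rank-3 space-time matrix X = U_true @ V_true.
    X = rng.standard_normal((n_x, rank)) @ rng.standard_normal((rank, n_t))

    # "Navigator" rows are fully sampled in time; SVD gives the temporal basis.
    nav_rows = np.arange(4)  # assumed navigator locations
    V = np.linalg.svd(X[nav_rows], full_matrices=False)[2][:rank]

    # Every other row keeps only a few random time points.
    U_est = np.zeros((n_x, rank))
    for i in range(n_x):
        keep = rng.choice(n_t, size=10, replace=False)  # sampled time points
        # Least-squares fit of row i's coefficients to its sampled entries.
        U_est[i] = np.linalg.lstsq(V[:, keep].T, X[i, keep], rcond=None)[0]

    print("relative error:", np.linalg.norm(U_est @ V - X) / np.linalg.norm(X))
    ```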

  4. The Force Singularity for Partially Immersed Parallel Plates

    NASA Astrophysics Data System (ADS)

    Bhatnagar, Rajat; Finn, Robert

    2016-05-01

    In earlier work, we provided a general description of the forces of attraction and repulsion encountered by two parallel vertical plates of infinite extent, and of possibly differing materials, when partially immersed in an infinite liquid bath and subject to surface tension forces. In the present study, we examine some unusual details of the exotic behavior that can occur at the singular configuration separating infinite rise from infinite descent of the fluid between the plates, as the plates approach each other. In connection with this singular behavior, we also present some new estimates on meniscus height details.

  5. Solution of partial differential equations on vector and parallel computers

    NASA Technical Reports Server (NTRS)

    Ortega, J. M.; Voigt, R. G.

    1985-01-01

    The present status of numerical methods for partial differential equations on vector and parallel computers is reviewed. The relevant aspects of these computers are discussed, and a brief review of their development is included, with particular attention paid to those characteristics that influence algorithm selection. Both direct and iterative methods are given for elliptic equations, as well as explicit and implicit methods for initial boundary value problems. The intent is to point out attractive methods, as well as areas where this class of computer architecture cannot be fully utilized because of either hardware restrictions or the lack of adequate algorithms. Application areas utilizing these computers are briefly discussed.

  6. Artifact reduction in moving-table acquisitions using parallel imaging and multiple averages.

    PubMed

    Fautz, H P; Honal, M; Saueressig, U; Schäfer, O; Kannengiesser, S A R

    2007-01-01

    Two-dimensional (2D) axial continuously-moving-table imaging has to deal with artifacts due to gradient nonlinearity and breathing motion, and has to provide the highest scan efficiency. Parallel imaging techniques (e.g., generalized autocalibrating partially parallel acquisition (GRAPPA)) are used to reduce such artifacts and avoid ghosting artifacts. The latter occur in T2-weighted multi-spin-echo (SE) acquisitions that omit an additional excitation prior to imaging scans for presaturation purposes. Multiple images are reconstructed from subdivisions of a fully sampled k-space data set, each of which is acquired in a single SE train. These images are then averaged. GRAPPA coil weights are estimated without additional measurements. Compared to conventional image reconstruction, inconsistencies between different subsets of k-space induce fewer artifacts when each k-space part is reconstructed separately and the multiple images are averaged afterwards. These inconsistencies may lead to inaccurate GRAPPA coil weights using the proposed intrinsic GRAPPA calibration. It is shown that aliasing artifacts in single images are canceled out after averaging. Phantom and in vivo studies demonstrate the benefit of the proposed reconstruction scheme for free-breathing axial continuously-moving-table imaging using fast multi-SE sequences. PMID:17191244

  7. Dynamic grid refinement for partial differential equations on parallel computers

    NASA Technical Reports Server (NTRS)

    Mccormick, S.; Quinlan, D.

    1989-01-01

    The fast adaptive composite grid method (FAC) is an algorithm that uses various levels of uniform grids to provide adaptive resolution and fast solution of PDEs. An asynchronous version of FAC, called AFAC, that completely eliminates the bottleneck to parallelism is presented. This paper describes the advantage that this algorithm has in adaptive refinement for moving singularities on multiprocessor computers. This work is applicable to the parallel solution of two- and three-dimensional shock tracking problems.

  8. Time Parallel Solution of Linear Partial Differential Equations on the Intel Touchstone Delta Supercomputer

    NASA Technical Reports Server (NTRS)

    Toomarian, N.; Fijany, A.; Barhen, J.

    1993-01-01

    Evolutionary partial differential equations are usually solved by discretization in time and space, and by applying a marching-in-time procedure to data and algorithms potentially parallelized in the spatial domain.
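
    To make the structure concrete: for the 1-D heat equation u_t = nu*u_xx, an explicit marching scheme updates every spatial point from the previous time level, so the spatial update parallelizes while the time loop stays serial. A minimal sketch of that standard structure (illustrative only; the paper's contribution is precisely to break the serial time loop):

    ```python
    import numpy as np

    nu, nx, nt = 0.1, 100, 500
    dx, dt = 1.0 / nx, 4e-5
    assert nu * dt / dx**2 <= 0.5  # explicit stability limit

    x = np.linspace(0.0, 1.0, nx)
    u = np.exp(-100 * (x - 0.5) ** 2)  # initial condition
    for _ in range(nt):  # marching in time: inherently serial
        # Pointwise stencil update: trivially parallel across space.
        u[1:-1] += nu * dt / dx**2 * (u[2:] - 2 * u[1:-1] + u[:-2])
    print("max u after marching:", u.max())
    ```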

  9. Modeling non-stationarity of kernel weights for k-space reconstruction in partially parallel imaging

    PubMed Central

    Miao, Jun; Wong, Wilbur C. K.; Narayan, Sreenath; Huo, Donglai; Wilson, David L.

    2011-01-01

    Purpose: In partially parallel imaging, most k-space-based reconstruction algorithms such as GRAPPA adopt a single finite-size kernel to approximate the true relationship between sampled and nonsampled signals. However, the estimation of this kernel based on k-space signals is imperfect, and the authors are investigating methods dealing with local variation of k-space signals. Methods: To model nonstationarity of kernel weights, similar to performing a spatially adaptive regularization, the authors fit a set of linear functions using concepts from geographically weighted regression, a methodology used in geophysical analysis. Instead of a reconstruction with a single set of kernel weights, the authors use multiple sets. A missing signal is reconstructed with its kernel weight set determined by k-space clustering. Simulated and acquired MR data with several different image contents and acquisition schemes, including MR tagging, were tested. A perceptual difference model (Case-PDM) was used to quantitatively evaluate the quality of over 1000 test images and to optimize the parameters of our algorithm. Results: A MOdeling Non-stationarity of KErnel wEightS (“MONKEES”) reconstruction with two sets of kernel weights gave reconstructions with significantly better image quality than the original GRAPPA in all test images. Using more sets produced improved image quality but with diminishing returns. As a rule of thumb, at least two sets of kernel weights, one from low- and the other from high-frequency k-space, should be used. Conclusions: The authors conclude that MONKEES can significantly and robustly improve the image quality in parallel MR imaging, particularly cardiac imaging. PMID:21928649
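
    The core idea is replacing one global interpolation kernel with several, each fit to a cluster of k-space points (e.g., low- versus high-frequency). A deliberately tiny 1-D toy of that idea, predicting each k-space sample from its two neighbors with one global versus two radius-clustered weight sets (hypothetical data and thresholds, not the authors' code):

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    n = 256
    kx = np.arange(n) - n // 2                                  # k-space coordinate
    sig = np.fft.fftshift(np.fft.fft(rng.standard_normal(n)))  # toy k-space line

    # Predict each sample from its two neighbors: s[k] ~ w1*s[k-1] + w2*s[k+1].
    src = np.stack([sig[:-2], sig[2:]], axis=1)
    tgt = sig[1:-1]
    radius = np.abs(kx[1:-1])

    # One global weight set versus two sets split at an assumed radius threshold.
    w = np.linalg.lstsq(src, tgt, rcond=None)[0]
    err_global = np.abs(src @ w - tgt)

    err_split = np.empty_like(err_global)
    for mask in (radius < n // 8, radius >= n // 8):  # low- vs high-frequency set
        w = np.linalg.lstsq(src[mask], tgt[mask], rcond=None)[0]
        err_split[mask] = np.abs(src[mask] @ w - tgt[mask])

    print("mean |error| global:", err_global.mean(), " split:", err_split.mean())
    ```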

  10. DAPHNE: a parallel multiprocessor data acquisition system for nuclear physics [Data Acquisition by Parallel Histogramming and NEtworking].

    SciTech Connect

    Welch, L.C.

    1984-01-01

    This paper describes a project to meet the data acquisition needs of a new accelerator, ATLAS, being built at Argonne National Laboratory. ATLAS is a heavy-ion linear superconducting accelerator providing beam energies up to 25 MeV/A with a relative spread in beam energy as good as 0.0001 and a time spread of less than 100 psec. Details about the hardware front end, command language, data structure, and the flow of event treatment are covered.

  11. Combined Parallel and Partial Fourier MR Reconstruction for Accelerated 8-Channel Hyperpolarized Carbon-13 In Vivo Magnetic Resonance Spectroscopic Imaging (MRSI)

    PubMed Central

    Ohliger, Michael A.; Larson, Peder E.Z.; Bok, Robert A.; Shin, Peter; Hu, Simon; Tropp, James; Robb, Fraser; Carvajal, Lucas; Nelson, Sarah J.; Kurhanewicz, John; Vigneron, Daniel B.

    2013-01-01

    Purpose: To implement and evaluate combined parallel magnetic resonance imaging (MRI) and partial Fourier acquisition and reconstruction for rapid hyperpolarized carbon-13 (13C) spectroscopic imaging. Short acquisition times mitigate hyperpolarized signal losses that occur due to T1 decay, metabolism, and radiofrequency (RF) saturation. Human applications additionally require rapid imaging to permit breath-holding and to minimize the effects of physiologic motion. Materials and Methods: Numerical simulations were employed to validate and characterize the reconstruction. In vivo MR spectroscopic images were obtained from a rat following injection of hyperpolarized 13C pyruvate using an 8-channel array of carbon-tuned receive elements. Results: For small spectroscopic matrix sizes, combined parallel imaging and partial Fourier undersampling resulted primarily in decreased spatial resolution, with relatively less visible spatial aliasing. Parallel reconstruction qualitatively restored lost image detail, although some pixel spectra had persistent numerical error. With this technique, a 30 × 10 × 16 matrix of 4800 3D MR spectroscopic imaging voxels from a whole rat with isotropic 8 mm³ resolution was acquired within 11 seconds. Conclusion: Parallel MRI and partial Fourier acquisitions can provide the shorter imaging times and wider spatial coverage that will be necessary as hyperpolarized 13C techniques move toward human clinical applications. PMID:23293097

  12. Performance of a VME-based parallel processing LIDAR data acquisition system (summary)

    SciTech Connect

    Moore, K.; Buttler, B.; Caffrey, M.; Soriano, C.

    1995-05-01

    It may be possible to make accurate, real-time, autonomous, 2- and 3-dimensional wind measurements remotely with an elastic backscatter Light Detection and Ranging (LIDAR) system by incorporating digital parallel processing hardware into the data acquisition system. In this paper, we report the performance of a commercially available digital parallel processing system in implementing the maximum correlation technique for wind sensing using actual LIDAR data. Timing and numerical accuracy are benchmarked against a standard microprocessor implementation.

  13. A parallel performance study of the Cartesian method for partial differential equations on a sphere

    SciTech Connect

    Drake, J.B.; Coddington, M.P.

    1997-04-01

    A 3-D Cartesian method for integration of partial differential equations on a spherical surface is developed for parallel computation. The target computer architectures are distributed-memory, message-passing computers such as the Intel Paragon. The parallel algorithms are described along with mesh partitioning strategies. Performance of the algorithms is considered for a standard test case of the shallow water equations on the sphere. The authors find the computation time scales well with increasing numbers of processors.

  14. Modeling Parallelization and Flexibility Improvements in Skill Acquisition: From Dual Tasks to Complex Dynamic Skills

    ERIC Educational Resources Information Center

    Taatgen, Niels

    2005-01-01

    Emerging parallel processing and increased flexibility during the acquisition of cognitive skills form a combination that is hard to reconcile with rule-based models that often produce brittle behavior. Rule-based models can exhibit these properties by adhering to 2 principles: that the model gradually learns task-specific rules from instructions…

  15. Note on parallel processing techniques for algebraic equations, ordinary differential equations and partial differential equations

    SciTech Connect

    Allidina, A.Y.; Malinowski, K.; Singh, M.G.

    1982-12-01

    The possibilities were explored for enhancing parallelism in the simulation of systems described by algebraic equations, ordinary differential equations and partial differential equations. These techniques, using multiprocessors, were developed to speed up simulations, e.g. for nuclear accidents. Issues involved in their design included suitable approximations to bring the problem into a numerically manageable form and a numerical procedure to perform the computations necessary to solve the problem accurately. Parallel processing techniques used as simulation procedures, and a design of a simulation scheme and simulation procedure employing parallel computer facilities, were both considered.

  16. Effect of continuous and partial reinforcement on the acquisition and extinction of human conditioned fear.

    PubMed

    Grady, Ashley K; Bowen, Kenton H; Hyde, Andrew T; Totsch, Stacie K; Knight, David C

    2016-02-01

    Extinction of Pavlovian conditioned fear in humans is a popular paradigm often used to study learning and memory processes that mediate anxiety-related disorders. Fear extinction studies often only pair the conditioned stimulus (CS) and unconditioned stimulus (UCS) on a subset of acquisition trials (i.e., partial reinforcement/pairing) to prolong extinction (i.e., partial reinforcement extinction effect; PREE) and provide more time to study the process. However, there is limited evidence that the partial pairing procedures typically used during fear conditioning actually extend the extinction process, while there is strong evidence these procedures weaken conditioned response (CR) acquisition. Therefore, determining conditioning procedures that support strong CR acquisition and that also prolong the extinction process would benefit the field. The present study investigated 4 separate CS-UCS pairing procedures to determine methods that support strong conditioning and that also exhibit a PREE. One group (C-C) of participants received continuous CS-UCS pairings; a second group (C-P) received continuous followed by partial CS-UCS pairings; a third group (P-C) received partial followed by continuous CS-UCS pairings; and a fourth group (P-P) received partial CS-UCS pairings during acquisition. A strong skin conductance CR was expressed by C-C and P-C groups but not by C-P and P-P groups at the end of the acquisition phase. The P-C group maintained the CR during extinction. In contrast, the CR extinguished quickly within the C-C group. These findings suggest that partial followed by continuous CS-UCS pairings elicit strong CRs and prolong the extinction process following human fear conditioning. PMID:26692449

  17. Partial Overhaul and Initial Parallel Optimization of KINETICS, a Coupled Dynamics and Chemistry Atmosphere Model

    NASA Technical Reports Server (NTRS)

    Nguyen, Howard; Willacy, Karen; Allen, Mark

    2012-01-01

    KINETICS is a coupled dynamics and chemistry atmosphere model that is data intensive and computationally demanding. The potential performance gain from using a supercomputer motivates the adaptation from a serial version to a parallelized one. Although the initial parallelization had been done, bottlenecks caused by an abundance of communication calls between processors led to an unfavorable drop in performance. Before starting on the parallel optimization process, a partial overhaul was required because a large emphasis was placed on streamlining the code for user convenience and revising the program to accommodate the new supercomputers at Caltech and JPL. After the first round of optimizations, the partial runtime was reduced by a factor of 23; however, performance gains are dependent on the size of the data, the number of processors requested, and the computer used.

  1. Parallel pulse processing and data acquisition for high speed, low error flow cytometry

    DOEpatents

    Engh, G.J. van den; Stokdijk, W.

    1992-09-22

    A digitally synchronized parallel pulse processing and data acquisition system for a flow cytometer has multiple parallel input channels with independent pulse digitization and FIFO storage buffers. A trigger circuit controls the pulse digitization on all channels. After an event has been stored in each FIFO, a bus controller moves the oldest entry from each FIFO buffer onto a common data bus. The trigger circuit generates an ID number for each FIFO entry, which is checked by an error detection circuit. The system has high speed and low error rate. 17 figs.
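
    The patent's dataflow maps naturally onto a queue-per-channel picture: a common trigger stamps each digitized pulse with an event ID, a bus controller drains the oldest entry from every FIFO, and an error check verifies the IDs agree. A schematic Python sketch of that protocol (field names and sizes are invented for illustration):

    ```python
    from collections import deque
    import random

    N_CHANNELS = 4
    fifos = [deque() for _ in range(N_CHANNELS)]  # one FIFO buffer per channel

    def trigger(event_id: int) -> None:
        """Common trigger: digitize a pulse on every channel, tagged with an ID."""
        for ch, fifo in enumerate(fifos):
            amplitude = random.random()  # stand-in for the digitized pulse height
            fifo.append((event_id, ch, amplitude))

    def bus_controller() -> list:
        """Move the oldest entry from each FIFO onto the 'bus' and check IDs."""
        entries = [fifo.popleft() for fifo in fifos]
        ids = {event_id for event_id, _, _ in entries}
        if len(ids) != 1:  # the error-detection circuit
            raise RuntimeError(f"event ID mismatch on bus: {ids}")
        return entries

    for eid in range(3):  # acquire three events
        trigger(eid)
    while fifos[0]:       # drain: oldest event first
        print(bus_controller())
    ```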

  2. Parallel pulse processing and data acquisition for high speed, low error flow cytometry

    DOEpatents

    van den Engh, Gerrit J.; Stokdijk, Willem

    1992-01-01

    A digitally synchronized parallel pulse processing and data acquisition system for a flow cytometer has multiple parallel input channels with independent pulse digitization and FIFO storage buffers. A trigger circuit controls the pulse digitization on all channels. After an event has been stored in each FIFO, a bus controller moves the oldest entry from each FIFO buffer onto a common data bus. The trigger circuit generates an ID number for each FIFO entry, which is checked by an error detection circuit. The system has high speed and low error rate.

  3. Parent-Implemented Mand Training: Acquisition of Framed Manding in a Young Boy with Partial Hemispherectomy

    ERIC Educational Resources Information Center

    Ingvarsson, Einar T.

    2011-01-01

    This study examined the effects of parent-implemented mand training on the acquisition of framed manding in a 4-year-old boy who had undergone partial hemispherectomy. Framed manding became the predominant mand form when and only when the intervention was implemented with each preferred toy, but minimal generalization to untrained toys …

  4. Multislice perfusion of the kidneys using parallel imaging: image acquisition and analysis strategies.

    PubMed

    Gardener, Alexander G; Francis, Susan T

    2010-06-01

    Flow-sensitive alternating inversion recovery arterial spin labeling with parallel imaging acquisition is used to acquire single-shot, multislice perfusion maps of the kidney. A considerable problem for arterial spin labeling methods, which are based on sequential subtraction, is the movement of the kidneys due to respiratory motion between acquisitions. The effects of breathing strategy (free, respiratory-triggered and breath hold) are studied and the use of background suppression is investigated. The application of movement correction by image registration is assessed and perfusion rates are measured. Postacquisition image realignment is shown to improve visual quality and subsequent perfusion quantification. Using such correction, data can be collected from free breathing alone, without the need for a good respiratory trace and in the shortest overall acquisition time, advantageous for patient comfort. The addition of background suppression to arterial spin labeling data is shown to reduce the perfusion signal-to-noise ratio and underestimate perfusion.

  5. Fast parallel algorithms and enumeration techniques for partial k-trees

    SciTech Connect

    Narayanan, C.

    1989-01-01

    Recent research by several authors has resulted in a systematic way of developing linear-time sequential algorithms for a host of problems on a fairly general class of graphs variously known as bounded decomposable graphs, graphs of bounded treewidth, partial k-trees, etc. Partial k-trees arise in a variety of real-life applications such as network reliability, VLSI design, and database systems, and hence fast sequential algorithms on these graphs have been found to be desirable. The linear-time methodologies were independently developed by Bern, Lawler, and Wong ((10)), Arnborg and Proskurowski ((6)), Bodlaender ((14)), and Courcelle ((25)). Wimer ((89)) significantly extended the work of Bern, Lawler, and Wong. All of these approaches share the common thread of using dynamic programming on a tree structure. In particular, the methodology of Wimer uses a parse tree as the data structure. The methodologies claim linear-time algorithms on partial k-trees for fixed k, for a number of combinatorial optimization problems, given the tree structure as input. It is known that obtaining the tree structure is NP-hard. This dissertation investigates three important classes of problems: (1) developing parallel algorithms for constructing a k-tree embedding, finding a tree decomposition, and most notably obtaining a parse tree for a partial k-tree; (2) developing parallel algorithms for parse-tree computations, testing isomorphism of k-trees, and finding a 2-tree embedding of a cactus; (3) obtaining techniques for counting vertex/edge subsets satisfying a certain property in some classes of partial k-trees. The parallel algorithms the author has developed are in class NC and are either new or improve upon the existing results of Bodlaender ((13)). The difference equations he has obtained for counting certain subgraphs are not known in the literature so far.
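
    The "dynamic programming on a tree structure" that these methodologies share is easiest to see in the base case k = 1, where a partial 1-tree is just a forest: for example, a maximum independent set is computable in linear time by a post-order DP. A small sketch of that base case (illustrative only; the cited frameworks handle general fixed k via tree decompositions):

    ```python
    def max_independent_set(tree: dict, root: int) -> int:
        """Linear-time DP on a tree: each vertex returns (best excluding it,
        best including it), combined over its children in post-order."""
        def solve(v: int, parent: int) -> tuple:
            exclude, include = 0, 1  # 'include' counts v itself
            for child in tree.get(v, []):
                if child == parent:
                    continue
                c_ex, c_in = solve(child, v)
                exclude += max(c_ex, c_in)  # child is free to be in or out
                include += c_ex             # child must be excluded
            return exclude, include

        return max(solve(root, -1))

    # A small tree given as adjacency lists (a partial 1-tree).
    tree = {0: [1, 2], 1: [0, 3, 4], 2: [0], 3: [1], 4: [1]}
    print(max_independent_set(tree, 0))  # -> 3, e.g. vertices {2, 3, 4}
    ```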

  6. Parent-implemented mand training: acquisition of framed manding in a young boy with partial hemispherectomy.

    PubMed

    Ingvarsson, Einar T

    2011-01-01

    This study examined the effects of parent-implemented mand training on the acquisition of framed manding in a 4-year-old boy who had undergone partial hemispherectomy. Framed manding became the predominant mand form when and only when the intervention was implemented with each preferred toy, but minimal generalization to untrained toys nevertheless occurred. A pure mand test suggested that manding was controlled by the relevant motivating operation. PMID:21541111

  7. Contrast-enhanced MR angiography utilizing parallel acquisition techniques in renal artery stenosis detection.

    PubMed

    Slanina, Martin; Zizka, Jan; Klzo, Ludovít; Lojík, Miroslav

    2010-07-01

    Significant renal artery stenosis (RAS) is a potentially curable cause of renovascular hypertension and/or renal impairment. It is caused by either atherosclerosis or fibromuscular dysplasia. Correct and timely diagnosis remains a diagnostic challenge. MR angiography (MRA), as a minimally invasive method, seems suitable for RAS detection; however, its diagnostic value differs widely in the literature (sensitivity 62-100% and specificity 75-100%). The aim of our prospective study was to compare the diagnostic value of contrast-enhanced MRA utilizing parallel acquisition techniques with digital subtraction angiography (DSA) in the detection of significant RAS. A total of 78 hypertensive subjects with suspected renal artery stenosis were examined on a 1.5 Tesla MR system using a body array coil. Bolus tracking was used to monitor the arrival of contrast agent in the abdominal aorta. The MRA sequence parameters were as follows: TR 3.7 ms; TE 1.2 ms; flip angle 25 degrees; acquisition time 18 s; voxel size 1.1 mm × 1.0 mm × 1.1 mm; centric k-space sampling; parallel acquisition technique with an acceleration factor of 2 (GRAPPA). Renal artery stenosis of 60% or more was considered hemodynamically significant. The results of MRA were compared to digital subtraction angiography serving as the standard of reference. Sensitivity and specificity of MRA in the detection of hemodynamically significant renal artery stenosis were 90% and 96%, respectively. The prevalence of RAS was 39% in our study population. Contrast-enhanced MRA with high spatial resolution offers sufficient sensitivity and specificity for screening of RAS. PMID:19671492

  8. Neural Changes Associated with Nonspeech Auditory Category Learning Parallel Those of Speech Category Acquisition

    PubMed Central

    Liu, Ran; Holt, Lori L.

    2010-01-01

    Native language experience plays a critical role in shaping speech categorization, but the exact mechanisms by which it does so are not well understood. Investigating category learning of nonspeech sounds with which listeners have no prior experience allows their experience to be systematically controlled in a way that is impossible to achieve by studying natural speech acquisition, and it provides a means of probing the boundaries and constraints that general auditory perception and cognition bring to the task of speech category learning. In this study, we used a multimodal, video-game-based implicit learning paradigm to train participants to categorize acoustically complex, nonlinguistic sounds. Mismatch negativity responses to the nonspeech stimuli were collected before and after training to investigate the degree to which neural changes supporting the learning of these nonspeech categories parallel those typically observed for speech category acquisition. Results indicate that changes in mismatch negativity resulting from the nonspeech category learning closely resemble patterns of change typically observed during speech category learning. This suggests that the often-observed “specialized” neural responses to speech sounds may result, at least in part, from the expertise we develop with speech categories through experience rather than from properties unique to speech (e.g., linguistic or vocal tract gestural information). Furthermore, particular characteristics of the training paradigm may inform our understanding of mechanisms that support natural speech acquisition. PMID:19929331

  9. Analysis and Modeling of Parallel Photovoltaic Systems under Partial Shading Conditions

    NASA Astrophysics Data System (ADS)

    Buddala, Santhoshi Snigdha

    Since the industrial revolution, fossil fuels like petroleum, coal, oil, and natural gas, along with other non-renewable energy sources, have been used as the primary energy source. The consumption of fossil fuels releases various harmful gases into the atmosphere as byproducts, which are hazardous in nature and tend to deplete the protective layers and affect the overall environmental balance. Fossil fuels are also bounded energy resources, and the rapid depletion of these sources has prompted the need to investigate alternate sources of energy, called renewable energy. One such promising source of renewable energy is solar/photovoltaic energy. This work focuses on investigating a new solar array architecture with solar cells connected in a parallel configuration. While retaining the structural simplicity of the parallel architecture, a theoretical small-signal model of the solar cell is proposed and used to analyze the variations in the module parameters when subjected to partial shading conditions. Simulations were run in SPICE to validate the model implemented in Matlab. The voltage limitations of the proposed architecture are addressed by adopting a simple dc-dc boost converter, and the performance of the architecture is evaluated in terms of efficiency by comparing it with the traditional architectures. SPICE simulations are used to compare the architectures and identify the best one in terms of power conversion efficiency under partial shading conditions.
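
    The qualitative effect is reproducible with the textbook single-diode cell model I(V) = I_ph - I_0*(exp(V/(n*V_T)) - 1): units wired in parallel share one terminal voltage, their currents add, and shading simply scales a unit's photocurrent. A toy sketch (all parameter values are assumptions for illustration, not taken from the thesis):

    ```python
    import numpy as np

    def unit_current(v, i_ph, i_0=1e-8, n=1.3, v_t=0.025, n_series=36):
        """Single-diode model of a PV unit built from n_series cells in series."""
        return i_ph - i_0 * (np.exp(v / (n_series * n * v_t)) - 1.0)

    v = np.linspace(0.0, 26.0, 2000)      # common terminal voltage sweep
    full_sun = unit_current(v, i_ph=5.0)  # unshaded unit
    shaded = unit_current(v, i_ph=2.0)    # partially shaded unit (40% irradiance)

    # Parallel connection: currents add at the shared voltage; clipping at zero
    # models ideal blocking diodes that prevent reverse current into a unit.
    i_total = np.clip(full_sun, 0.0, None) + np.clip(shaded, 0.0, None)
    p = v * i_total
    k = p.argmax()
    print(f"max power ~ {p[k]:.1f} W at V ~ {v[k]:.1f} V, I ~ {i_total[k]:.2f} A")
    ```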

  10. The design and performance of the parallel multiprocessor nuclear physics data acquisition system, DAPHNE

    SciTech Connect

    Welch, L.C.; Moog, T.H.; Daly, R.T.; Videbaek, F.

    1987-05-01

    The ever-increasing complexity of nuclear physics experiments places severe demands on computerized data acquisition systems. A natural evolution of these systems, taking advantage of the independent nature of ''events,'' is to use identical parallel microcomputers in a front end to simultaneously analyze separate events. Such a system has been developed at Argonne to serve the needs of the experimental program of ATLAS, a new superconducting heavy-ion accelerator, and other on-going research. Using microcomputers based on the National Semiconductor 32016 microprocessor housed in a Multibus I cage, CPU power equivalent to several VAXs is obtained at a fraction of the cost of one VAX. The front end interfaces to a VAX 11/750 on which an extensive user-friendly command language based on DCL resides. The whole system, known as DAPHNE, also provides the means to replay data using the same command language. Design concepts, data structures, performance, and experience to date are discussed.

  11. The design, creation, and performance of the parallel multiprocessor nuclear physics data acquisition system, DAPHNE

    SciTech Connect

    Welch, L.C.; Moog, T.H.; Daly, R.T.; Videbaek, F.

    1986-01-01

    The ever-increasing complexity of nuclear physics experiments places severe demands on computerized data acquisition systems. A natural evolution of these systems, taking advantage of the independent nature of ''events,'' is to use identical parallel microcomputers in a front end to simultaneously analyze separate events. Such a system has been developed at Argonne to serve the needs of the experimental program of ATLAS, a new superconducting heavy-ion accelerator, and other on-going research. Using microcomputers based on the National Semiconductor 32016 microprocessor housed in a Multibus I cage, multi-VAX CPU power is obtained at a fraction of the cost of one VAX. The front end interfaces to a VAX 750 on which an extensive user-friendly command language based on DCL resides. The whole system, known as DAPHNE, also provides the means to replay data using the same command language. Design concepts, data structures, performance, and experience to date are discussed. 5 refs., 2 figs.

  12. HASTE sequence with parallel acquisition and T2 decay compensation: application to carotid artery imaging.

    PubMed

    Zhang, Ling; Kholmovski, Eugene G; Guo, Junyu; Choi, Seong-Eun Kim; Morrell, Glen R; Parker, Dennis L

    2009-01-01

    T2-weighted carotid artery images acquired using the turbo spin-echo (TSE) sequence frequently suffer from motion artifacts due to respiration and blood pulsation. The possibility of using the HASTE sequence to achieve motion-free carotid images was investigated. The HASTE sequence suffers from severe blurring artifacts caused by signal loss in later echoes due to T2 decay. Combining HASTE with parallel acquisition (PHASTE) decreases the number of echoes acquired and thus effectively reduces the blurring artifact caused by T2 relaxation. Further improvement in image sharpness can be achieved by performing T2 decay compensation before reconstructing the PHASTE data. Preliminary results have shown successful suppression of motion artifacts with PHASTE imaging. The image quality was enhanced relative to the original HASTE image, but was still less sharp than a non-motion-corrupted TSE image.

  13. Experience with the parallel solution of partial differential equations on a distributed computing system

    SciTech Connect

    Gelenbe, E.; Lichnewsky, A.; Staphylopatis, A.

    1982-12-01

    It is of interest to determine whether loosely coupled multiprocessors can be profitably used for the solution of larger numerical problems. The authors present a performance evaluation of the gain obtained by solving partial differential equation systems on such an architecture. The experimental setting is an LSI 11 based multiprocessor system using a fiber-optics local area network designed and implemented at the Laboratoire de Recherche en Informatique, Université Paris-Sud. The paper includes a discussion of the numerical methods and of their implementation, a performance model of the parallel processing system, and measurements taken on the experimental system. The experimentally validated theoretical results confirm the interest of the authors' approach based on performance models. 11 references.

  14. Parallelizing across time when solving time-dependent partial differential equations

    SciTech Connect

    Worley, P.H.

    1991-09-01

    The standard numerical algorithms for solving time-dependent partial differential equations (PDEs) are inherently sequential in the time direction. This paper describes algorithms for the time-accurate solution of certain classes of linear hyperbolic and parabolic PDEs that can be parallelized in both time and space and have serial complexities that are proportional to the serial complexities of the best known algorithms. The algorithms for parabolic PDEs are variants of the waveform relaxation multigrid method (WFMG) of Lubich and Ostermann where the scalar ordinary differential equations (ODEs) that make up the kernel of WFMG are solved using a cyclic reduction type algorithm. The algorithms for hyperbolic PDEs use the cyclic reduction algorithm to solve ODEs along characteristics. 43 refs.
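
    The scalar ODE kernel is, after time discretization, a first-order linear recurrence x[k+1] = a[k]*x[k] + b[k]; it looks serial, but affine maps compose associatively, which is the structural fact that cyclic-reduction-type (and parallel-prefix) algorithms exploit to evaluate it in logarithmic parallel depth. A sketch of the associative-scan view, written serially for clarity (illustrative, not the WFMG code):

    ```python
    import numpy as np

    def affine_scan(a, b, x0):
        """Evaluate x[k+1] = a[k]*x[k] + b[k] via prefix-composed affine maps.

        Composition (a2, b2) o (a1, b1) = (a2*a1, a2*b1 + b2) is associative,
        so a parallel machine can form all prefixes in O(log N) depth; here
        the prefixes are accumulated serially for clarity.
        """
        comp = np.empty((len(a), 2))
        acc = (1.0, 0.0)  # identity affine map
        for i, (ai, bi) in enumerate(zip(a, b)):
            acc = (ai * acc[0], ai * acc[1] + bi)  # prefix composition
            comp[i] = acc
        return comp[:, 0] * x0 + comp[:, 1]  # x[1..N] from the prefixes

    # Check against the closed form for dx/dt = -x with explicit Euler steps.
    n, dt = 1000, 0.01
    xs = affine_scan(np.full(n, 1.0 - dt), np.zeros(n), x0=1.0)
    print(xs[-1], (1.0 - dt) ** n)  # both ~ exp(-10)
    ```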

  15. Parallels between control PDE's (Partial Differential Equations) and systems of ODE's (Ordinary Differential Equations)

    NASA Technical Reports Server (NTRS)

    Hunt, L. R.; Villarreal, Ramiro

    1987-01-01

    System theorists understand that the same mathematical objects which determine controllability for nonlinear control systems of ordinary differential equations (ODEs) also determine hypoellipticity for linear partial differential equations (PDEs). Moreover, almost any study of ODE systems begins with linear systems. It is remarkable that Hörmander's paper on hypoellipticity of second-order linear p.d.e.'s starts with equations due to Kolmogorov, which are shown to be analogous to the linear PDEs. Eigenvalue placement by state feedback for a controllable linear system can be paralleled for a Kolmogorov equation if an appropriate type of feedback is introduced. Results concerning transformations of nonlinear systems to linear systems are similar to results for transforming a linear PDE to a Kolmogorov equation.

  16. Improved Sensitivity of Spin Echo and Parallel Acquisitions Using SENSE Compared to Gradient Echo Sequences in fMRI

    NASA Astrophysics Data System (ADS)

    El Mrini, Sanaa; Hamri, Mohammed

    2012-03-01

    This work aims to validate the performance of spin-echo parallel acquisition using "SENSitivity Encoding (SENSE)" by comparing it with other imaging techniques, including the commonly used gradient-echo echo-planar imaging with parallel acquisition and SENSE, gradient-echo sequences, and spin-echo echo-planar imaging. It compares the performance of the sequences and their sensitivity to motor activity, reflected by activation within the motor part of the brain. Images of volunteers were processed individually. Image analysis techniques, such as normalization and smoothing, were used. Analyses were carried out using `Statistical Parametric Mapping' operating under Matlab.

  17. L2 and Deaf Learners' Knowledge of Numerically Quantified English Sentences: Acquisitional Parallels at the Semantics/Discourse-Pragmatics Interface

    ERIC Educational Resources Information Center

    Berent, Gerald P.; Kelly, Ronald R.; Schueler-Choukairi, Tanya

    2012-01-01

    This study assessed knowledge of numerically quantified English sentences in two learner populations--second language (L2) learners and deaf learners--whose acquisition of English occurs under conditions of restricted access to the target language input. Under the experimental test conditions, interlanguage parallels were predicted to arise from…

  18. Magnetic flux density reconstruction using interleaved partial Fourier acquisitions in MREIT.

    PubMed

    Park, Hee Myung; Nam, Hyun Soo; Kwon, Oh In

    2011-04-01

    Magnetic resonance electrical impedance tomography (MREIT) has been introduced as a non-invasive modality to visualize the internal conductivity and/or current density of an electrically conductive object by the injection of current. In order to measure a magnetic flux density signal in MREIT, the phase difference approach in an interleaved encoding scheme cancels the systematic artifacts accumulated in phase signals and also reduces the random noise effect. However, it is important to reduce scan duration while maintaining spatial resolution and sufficient contrast, in order to allow for practical in vivo implementation of MREIT. The purpose of this paper is to develop a coupled partial Fourier strategy in the interleaved sampling in order to reduce the total imaging time for an MREIT acquisition, whilst maintaining an SNR of the measured magnetic flux density comparable to what is achieved with complete k-space data. The proposed method uses two key steps: one is to update the magnetic flux density by updating the complex densities using the partially interleaved k-space data, and the other is to fill in the missing k-space data iteratively using the updated background field inhomogeneity and magnetic flux density data. Results from numerical simulations and animal experiments demonstrate that the proposed method considerably reduces the scanning time and provides resolution of the recovered Bz comparable to what is obtained from complete k-space data.

  19. Cascade connection serial parallel hybrid acquisition synchronization method for DS-FHSS in air-ground data link

    NASA Astrophysics Data System (ADS)

    Wang, Feng; Zhou, Desuo

    2007-11-01

    In air-ground tactical data link systems, a primary anti-jamming technology is direct sequence - frequency hopping spread spectrum (DS-FHSS). However, achieving quick synchronization of DS-FHSS is an important technical problem that can influence the communication capability of the whole system. Motivated by the practical demands of anti-jamming applications, a cascaded serial-parallel hybrid acquisition synchronization method is given for DS-FHSS systems. The synchronization consists of two stages: synchronization of the FH communication is performed in the first stage, and a serial-parallel hybrid structure is adopted for DS PN-code synchronization in the second stage. The contribution of this method to the synchronization capability of the system is analyzed by calculating the detection probability of the FH synchronization acquisition and the acquisition time of the DS chip synchronization. Finally, a performance estimate of this cascaded serial-parallel hybrid acquisition synchronization method is given through computer simulation.
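
    At the heart of any such scheme, candidate code phases are tested by correlating the received signal with a locally generated PN code: a serial search steps through offsets one at a time, a parallel search evaluates a whole bank at once (here, all offsets via FFT circular correlation), and hybrid schemes test blocks in parallel while stepping serially between blocks. A toy sketch of the detection step (illustrative parameters, not from the paper):

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    n, true_offset = 1023, 217

    pn = rng.choice([-1.0, 1.0], size=n)                          # stand-in PN code
    rx = np.roll(pn, true_offset) + 0.7 * rng.standard_normal(n)  # noisy receive

    # "Fully parallel" branch: all circular correlations at once via the FFT.
    corr = np.fft.ifft(np.fft.fft(rx) * np.conj(np.fft.fft(pn))).real
    est = int(np.argmax(corr))
    print(f"detected offset {est} (true {true_offset}), "
          f"peak/mean ratio {corr[est] / np.abs(corr).mean():.1f}")
    ```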

  20. High-performance partially aligned semiconductive single-walled carbon nanotube transistors achieved with a parallel technique.

    PubMed

    Wang, Yilei; Pillai, Suresh Kumar Raman; Chan-Park, Mary B

    2013-09-01

    Single-walled carbon nanotubes (SWNTs) are widely thought to be a strong contender for next-generation printed electronic transistor materials. However, large-scale solution-based parallel assembly of SWNTs to obtain high-performance transistor devices is challenging. SWNTs have anisotropic properties and, although partial alignment of the nanotubes has been theoretically predicted to achieve optimum transistor device performance, thus far no parallel solution-based technique can achieve this. Herein a novel solution-based technique, the immersion-cum-shake method, is reported to achieve partially aligned SWNT networks using semiconductive (99% enriched) SWNTs (s-SWNTs). By immersing an aminosilane-treated wafer into a solution of nanotubes placed on a rotary shaker, the repetitive flow of the nanotube solution over the wafer surface during the deposition process orients the nanotubes toward the fluid flow direction. By adjusting the nanotube concentration in the solution, the nanotube density of the partially aligned network can be controlled; linear densities ranging from 5 to 45 SWNTs/μm are observed. Through control of the linear SWNT density and channel length, the optimum SWNT-based field-effect transistor devices achieve outstanding performance metrics (with an on/off ratio of ~3.2 × 10⁴ and mobility 46.5 cm²/Vs). Atomic force microscopy shows that the partial alignment is uniform over an area of 20 × 20 mm² and confirms that the orientation of the nanotubes is mostly along the fluid flow direction, with a narrow orientation scatter characterized by a full width at half maximum (FWHM) of <15° for all but the densest film, which is 35°. This parallel process is large-scale applicable and exploits the anisotropic properties of the SWNTs, presenting a viable path forward for industrial adoption of SWNTs in printed, flexible, and large-area electronics.

  1. Fast Time and Space Parallel Algorithms for Solution of Parabolic Partial Differential Equations

    NASA Technical Reports Server (NTRS)

    Fijany, Amir

    1993-01-01

    In this paper, fast time- and space-parallel algorithms for the solution of linear parabolic PDEs are developed. It is shown that the seemingly strictly serial iterations of the time-stepping procedure for the solution of the problem can be completely decoupled.

  2. Parallel Bimodal Bilingual Acquisition: A Hearing Child Mediated in a Deaf Family

    ERIC Educational Resources Information Center

    Cramér-Wolrath, Emelie

    2013-01-01

    The aim of this longitudinal case study was to describe bimodal and bilingual acquisition in a hearing child, Hugo, especially the role his Deaf family played in his linguistic education. Video observations of the family interactions were conducted from the time Hugo was 10 months of age until he was 40 months old. The family language was Swedish…

  3. The Effects of Partial Reinforcement in the Acquisition and Extinction of Recurrent Serial Patterns.

    ERIC Educational Resources Information Center

    Dockstader, Steven L.

    The purpose of these 2 experiments was to determine whether sequential response pattern behavior is affected by partial reinforcement in the same way as other behavior systems. The first experiment investigated the partial reinforcement extinction effects (PREE) in a sequential concept learning task where subjects were required to learn a…

  4. Parallel image-acquisition in continuous-wave electron paramagnetic resonance imaging with a surface coil array: Proof-of-concept experiments.

    PubMed

    Enomoto, Ayano; Hirata, Hiroshi

    2014-02-01

    This article describes a feasibility study of parallel image-acquisition using a two-channel surface coil array in continuous-wave electron paramagnetic resonance (CW-EPR) imaging. Parallel EPR imaging was performed by multiplexing of EPR detection in the frequency domain. The parallel acquisition system consists of two surface coil resonators and radiofrequency (RF) bridges for EPR detection. To demonstrate the feasibility of this method of parallel image-acquisition with a surface coil array, three-dimensional EPR imaging was carried out using a tube phantom. Technical issues in the multiplexing method of EPR detection were also clarified. We found that degradation in the signal-to-noise ratio due to the interference of RF carriers is a key problem to be solved.

  5. Partial dopaminergic denervation-induced impairment in stimulus discrimination acquisition in parkinsonian rats: a model for early Parkinson's disease.

    PubMed

    Eagle, Andrew L; Olumolade, Oluyemi O; Otani, Hajime

    2015-03-01

    Parkinson's disease (PD) produces progressive nigrostriatal dopamine (DA) denervation resulting in cognitive and motor impairment. However, it is unknown whether cognitive impairments, such as instrumental learning deficits, are associated with the early-stage PD-induced mild DA denervation. The current study sought to model early PD-induced instrumental learning impairments by assessing the effects of low-dose (5.5 μg), bilateral 6OHDA-induced striatal DA denervation on acquisition of instrumental stimulus discrimination in rats. 6OHDA (n=20) or sham (n=10) lesioned rats were tested for stimulus discrimination acquisition either 1 or 2 weeks post surgical lesion. Stimulus discrimination acquisition across 10 daily sessions was used to assess discriminative accuracy, or a probability measure of the shift toward reinforced responding under one stimulus condition (SD) and away from extinction, when reinforcement was withheld, under another (SΔ phase). Striatal DA denervation was assayed by tyrosine hydroxylase (TH) staining intensity. Results indicated that 6OHDA lesions produced significant loss of dorsal striatal TH staining intensity and marked impairment in discrimination acquisition, without inducing akinetic motor deficits. Rather, 6OHDA-induced impairment was associated with perseveration during extinction (SΔ phase). These findings suggest that partial, bilateral striatal DA denervation produces instrumental learning deficits prior to the onset of gross motor impairment, and suggest that the current model is useful for investigating mild nigrostriatal DA denervation associated with early-stage clinical PD.

  6. A Parallel Distributed-Memory Particle Method Enables Acquisition-Rate Segmentation of Large Fluorescence Microscopy Images

    PubMed Central

    Afshar, Yaser; Sbalzarini, Ivo F.

    2016-01-01

    Modern fluorescence microscopy modalities, such as light-sheet microscopy, are capable of acquiring large three-dimensional images at high data rate. This creates a bottleneck in computational processing and analysis of the acquired images, as the rate of acquisition outpaces the speed of processing. Moreover, images can be so large that they do not fit the main memory of a single computer. We address both issues by developing a distributed parallel algorithm for segmentation of large fluorescence microscopy images. The method is based on the versatile Discrete Region Competition algorithm, which has previously proven useful in microscopy image segmentation. The present distributed implementation decomposes the input image into smaller sub-images that are distributed across multiple computers. Using network communication, the computers orchestrate the collective solving of the global segmentation problem. This not only enables segmentation of large images (we test images of up to 10¹⁰ pixels), but also accelerates segmentation to match the time scale of image acquisition. Such acquisition-rate image segmentation is a prerequisite for the smart microscopes of the future and enables online data compression and interactive experiments. PMID:27046144
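
    The enabling step is the decomposition itself: each sub-image is extended by a halo (ghost) layer so that local operations near a tile border still see their neighbors, and tiles are then handed to different processes. A minimal single-process numpy sketch of strip decomposition with halos (illustrative; the paper distributes the tiles with network communication):

    ```python
    import numpy as np

    def split_with_halo(img, tiles, halo):
        """Split a 2-D image into horizontal strips, each padded by halo rows."""
        bounds = np.linspace(0, img.shape[0], tiles + 1, dtype=int)
        out = []
        for lo, hi in zip(bounds[:-1], bounds[1:]):
            a, b = max(0, lo - halo), min(img.shape[0], hi + halo)
            out.append((img[a:b].copy(), lo - a, hi - lo))  # strip, offset, core
        return out

    img = np.arange(20 * 6).reshape(20, 6).astype(float)
    for k, (sub, off, core) in enumerate(split_with_halo(img, tiles=4, halo=1)):
        # Each worker would process `sub` and write back only its interior
        # rows, sub[off:off + core]; halo rows are refreshed by neighbors.
        print(f"tile {k}: shape {sub.shape}, interior rows {off}..{off + core - 1}")
    ```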

  9. Single-shot magnetic resonance spectroscopic imaging with partial parallel imaging.

    PubMed

    Posse, Stefan; Otazo, Ricardo; Tsai, Shang-Yueh; Yoshimoto, Akio Ernesto; Lin, Fa-Hsuan

    2009-03-01

    A magnetic resonance spectroscopic imaging (MRSI) pulse sequence based on proton-echo-planar-spectroscopic-imaging (PEPSI) is introduced that measures two-dimensional metabolite maps in a single excitation. Echo-planar spatial-spectral encoding was combined with interleaved phase encoding and parallel imaging using SENSE to reconstruct absorption-mode spectra. The symmetrical k-space trajectory compensates for phase errors due to convolution of spatial and spectral encoding. Single-shot MRSI at short TE was evaluated in phantoms and in vivo on a 3-T whole-body scanner equipped with a 12-channel array coil. Four-step interleaved phase encoding and fourfold SENSE acceleration were used to encode a 16 × 16 spatial matrix with a 390-Hz spectral width. Comparison with conventional PEPSI and PEPSI with fourfold SENSE acceleration demonstrated comparable sensitivity per unit time when taking into account g-factor-related noise increases and differences in sampling efficiency. LCModel fitting enabled quantification of inositol, choline, creatine, and N-acetyl-aspartate (NAA) in vivo with concentration values in the ranges measured with conventional PEPSI and SENSE-accelerated PEPSI. Cramer-Rao lower bounds were comparable to those obtained with conventional SENSE-accelerated PEPSI at the same voxel size and measurement time. This single-shot MRSI method is therefore suitable for applications that require high temporal resolution to monitor temporal dynamics or to reduce sensitivity to tissue movement. PMID:19097245
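
    The SENSE unaliasing used above reduces, per aliased pixel, to a small least-squares problem once the coil sensitivities are known. A toy one-dimensional sketch for an acceleration factor of 2, assuming numpy; all sizes and names are illustrative, not the authors' reconstruction code.

      # Toy 1-D SENSE unfolding at R = 2, assuming numpy.
      import numpy as np

      n_coils, fov = 4, 128
      half = fov // 2
      rng = np.random.default_rng(0)
      truth = rng.random(fov)                    # 1-D "image"
      sens = rng.random((n_coils, fov)) + 0.1    # coil sensitivity profiles

      # R = 2 folds pixel p together with pixel p + fov/2 in every coil.
      aliased = sens[:, :half] * truth[:half] + sens[:, half:] * truth[half:]

      recon = np.zeros(fov)
      for p in range(half):
          A = np.column_stack([sens[:, p], sens[:, p + half]])  # n_coils x 2
          x, *_ = np.linalg.lstsq(A, aliased[:, p], rcond=None)
          recon[p], recon[p + half] = x

      print("max unfolding error:", np.abs(recon - truth).max())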

  11. Avoidance prone individuals self reporting behavioral inhibition exhibit facilitated acquisition and altered extinction of conditioned eyeblinks with partial reinforcement schedules.

    PubMed

    Allen, Michael Todd; Myers, Catherine E; Servatius, Richard J

    2014-01-01

    Avoidance in the face of novel situations or uncertainty is a prime feature of behavioral inhibition, which has been put forth as a risk factor for the development of anxiety disorders. Recent work has found that behaviorally inhibited (BI) individuals acquire conditioned eyeblinks faster than non-inhibited (NI) individuals in omission and yoked paradigms in which the predictive relationship between the conditioned stimulus (CS) and unconditional stimulus (US) is less than optimal as compared to standard training with CS-US paired trials (Holloway et al., 2014). In the current study, we tested explicitly partial schedules in which half the trials were CS-alone or US-alone trials in addition to the standard CS-US paired trials. One hundred and forty-nine college-aged undergraduates participated in the study. All participants completed the Adult Measure of Behavioral Inhibition (i.e., AMBI), which was used to group participants as BI and NI. Eyeblink conditioning consisted of three US-alone trials, 60 acquisition trials, and 20 CS-alone extinction trials presented in one session. Conditioning stimuli were a 500-ms tone CS and a 50-ms air puff US. Behaviorally inhibited individuals receiving 50% partial reinforcement with CS-alone or US-alone trials produced facilitated acquisition as compared to NI individuals. A partial reinforcement extinction effect (PREE) was evident with CS-alone trials in BI but not NI individuals. These current findings indicate that avoidance prone individuals self-reporting behavioral inhibition over-learn an association and are slow to extinguish conditioned responses (CRs) when there is some level of uncertainty between paired trials and CS- or US-alone presentations. PMID:25339877

  13. An adaptive undersampling scheme of wavelet-encoded parallel MR imaging for more efficient MR data acquisition

    NASA Astrophysics Data System (ADS)

    Xie, Hua; Bosshard, John C.; Hill, Jason E.; Wright, Steven M.; Mitra, Sunanda

    2016-03-01

    Magnetic Resonance Imaging (MRI) offers noninvasive high-resolution, high-contrast cross-sectional anatomic images through the body. Conventional MRI data are collected in the spatial-frequency (Fourier) domain, also known as k-space. Because there is still a great need to improve the temporal resolution of MRI, Compressed Sensing (CS) has been proposed for MR imaging to exploit the sparsity of MR images, showing great potential to reduce scan time significantly; however, it poses its own unique problems. This paper revisits wavelet-encoded MR imaging, which replaces phase encoding in conventional MRI data acquisition with wavelet encoding by applying wavelet-shaped spatially selective radiofrequency (RF) excitation, and keeps the readout direction as frequency encoding. The practicality of wavelet-encoded MRI by itself is limited due to SNR penalties and poor time resolution compared to conventional Fourier-based MRI. To compensate for those disadvantages, this paper first introduces an undersampling scheme named the significance map for sparse wavelet-encoded k-space to speed up data acquisition as well as allow for various adaptive imaging strategies. The proposed adaptive wavelet-encoded undersampling scheme does not require prior knowledge of the subject to be scanned. Multiband (MB) parallel imaging is also incorporated with wavelet-encoded MRI by exciting multiple regions simultaneously for a further reduction in scan time desirable for medical applications. Simulation and experimental results are presented, showing the feasibility of the proposed approach in further reducing the redundancy of the wavelet k-space data while maintaining relatively high image quality.
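
    The significance-map idea, keeping only the most significant wavelet coefficients and recording their locations, can be imitated in a few lines. A rough sketch assuming the PyWavelets (pywt) package; the paper's actual map construction and MR acquisition physics are not reproduced here.

      # Significance-map sketch: keep the largest-magnitude wavelet
      # coefficients and record their locations. Assumes PyWavelets (pywt).
      import numpy as np
      import pywt

      image = np.random.rand(256, 256)           # stand-in for an MR image
      coeffs = pywt.wavedec2(image, 'db2', level=3)
      arr, slices = pywt.coeffs_to_array(coeffs)

      keep = 0.25                                # fraction of coefficients kept
      thresh = np.quantile(np.abs(arr), 1.0 - keep)
      significance_map = np.abs(arr) >= thresh   # locations worth sampling

      sparse = np.where(significance_map, arr, 0.0)
      recon = pywt.waverec2(
          pywt.array_to_coeffs(sparse, slices, output_format='wavedec2'), 'db2')
      print("kept fraction:", significance_map.mean(),
            "max error:", np.abs(recon - image).max())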

  14. Novel iterative reconstruction method with optimal dose usage for partially redundant CT-acquisition

    NASA Astrophysics Data System (ADS)

    Bruder, H.; Raupach, R.; Sunnegardh, J.; Allmendinger, T.; Klotz, E.; Stierstorfer, K.; Flohr, T.

    2015-11-01

    In CT imaging, a variety of applications exist which are strongly SNR limited. However, in some cases redundant data of the same body region provide additional quanta. Examples: in dual-energy CT, the spatial resolution has to be compromised to provide good SNR for material decomposition. However, the respective spectral dataset of the same body region provides additional quanta which might be utilized to improve the SNR of each spectral component. Perfusion CT is a high-dose application, and dose reduction is highly desirable. However, a meaningful evaluation of perfusion parameters might be impaired by noisy time frames. On the other hand, the SNR of the average of all time frames is extremely high. In redundant CT acquisitions, multiple image datasets can be reconstructed and averaged to composite image data. These composite image data, however, might be compromised with respect to contrast resolution and/or spatial resolution and/or temporal resolution. These observations bring us to the idea of transferring the high SNR of composite image data to low-SNR ‘source’ image data, while maintaining their resolution. It has been shown that the noise characteristics of CT image data can be improved by iterative reconstruction (Popescu et al 2012 Book of Abstracts, 2nd CT Meeting (Salt Lake City, UT) p 148). In the case of data-dependent Gaussian noise, it can be modelled with image-based iterative reconstruction, at least in an approximate manner (Bruder et al 2011 Proc. SPIE 7961 79610J). We present a generalized update equation in image space, consisting of a linear combination of the previous update, a correction term which is constrained by the source image data, and a regularization prior, which is initialized by the composite image data. This iterative reconstruction approach we call bimodal reconstruction (BMR). Based on simulation data it is shown that BMR can improve low contrast detectability, substantially reduces the noise power and has the potential to recover
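
    The generalized update equation is described only qualitatively above. The following loose sketch, assuming numpy and scipy, shows the general shape of such an update; the weights and the crude Gaussian smoothing prior are illustrative assumptions, not the authors' operators.

      # Loose sketch of a bimodal-style update, assuming numpy and scipy;
      # lam, mu, and the Gaussian prior are illustrative, not the authors'.
      import numpy as np
      from scipy.ndimage import gaussian_filter

      def bimodal_reconstruct(source, composite, n_iter=50, lam=0.5, mu=0.3):
          f = composite.copy()              # prior initialized by composite data
          for _ in range(n_iter):
              data_term = source - f        # correction constrained by the source
              reg_term = gaussian_filter(f, sigma=1.0) - f  # crude smoothness prior
              f = f + lam * data_term + mu * reg_term       # combined update
          return f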

  15. Parallel acquisition of Raman spectra from a 2D multifocal array using a modulated multifocal detection scheme

    NASA Astrophysics Data System (ADS)

    Kong, Lingbo; Chan, James W.

    2015-03-01

    A major limitation of spontaneous Raman scattering is its intrinsically weak signals, which makes Raman analysis or imaging of biological specimens slow and impractical for many applications. To address this, we report the development of a novel modulated multifocal detection scheme for simultaneous acquisition of full Raman spectra from a 2-D m × n multifocal array. A spatial light modulator (SLM), or a pair of galvo-mirrors, is used to generate m × n laser foci. Raman signals generated within each focus are projected simultaneously into a spectrometer and detected by a CCD camera. The system can resolve the Raman spectra with no crosstalk along the vertical pixels of the CCD camera, i.e., along the entrance slit of the spectrometer. However, there is significant overlap of the spectra in the horizontal pixel direction, i.e., along the dispersion direction. By modulating the excitation multifocal array (illumination modulation) or the emitted Raman signal array (detection modulation), the superimposed Raman spectra of different multifocal patterns are collected. The individual Raman spectrum from each focus is then retrieved from the superimposed spectra using a post-acquisition data processing algorithm. This development leads to a significant improvement in the speed of acquiring Raman spectra. We discuss the application of this detection scheme for parallel analysis of individual cells with multifocus laser tweezers Raman spectroscopy (M-LTRS) and for rapid confocal hyperspectral Raman imaging.
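
    If each camera exposure records the sum of the spectra from the foci switched on by a known modulation pattern, the per-focus spectra follow from a linear inversion of the superimposed measurements. A toy sketch assuming numpy, with an illustrative 0/1 modulation matrix; it shows the retrieval step only, not the paper's optics.

      # Demultiplexing sketch, assuming numpy; the 0/1 rows of M are
      # illustrative modulation patterns (which foci are on per exposure).
      import numpy as np

      n_foci, n_pix = 4, 512
      rng = np.random.default_rng(1)
      true_spectra = rng.random((n_foci, n_pix))

      M = np.array([[1, 1, 1, 1],
                    [1, 0, 1, 0],
                    [1, 1, 0, 0],
                    [1, 0, 0, 1]], dtype=float)

      measured = M @ true_spectra                 # superimposed spectra
      recovered, *_ = np.linalg.lstsq(M, measured, rcond=None)
      print("max retrieval error:", np.abs(recovered - true_spectra).max())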

  16. Optimization of magnetic flux density for fast MREIT conductivity imaging using multi-echo interleaved partial fourier acquisitions

    PubMed Central

    2013-01-01

    Background Magnetic resonance electrical impedance tomography (MREIT) has been introduced as a non-invasive method for visualizing the internal conductivity and/or current density of an electrically conductive object by externally injected currents. The current injected through a pair of surface electrodes induces a magnetic flux density distribution inside the imaging object, which superimposes an additional magnetic flux density on the main field. To measure the magnetic flux density signal in MREIT, the phase difference approach in an interleaved encoding scheme cancels out the systematic artifacts accumulated in phase signals and also reduces the random noise effect by doubling the measured magnetic flux density signal. For practical applications of in vivo MREIT, it is essential to reduce the scan duration while maintaining spatial resolution and sufficient contrast. In this paper, we optimize the magnetic flux density measurement by using a fast gradient multi-echo MR pulse sequence. To recover the one component of magnetic flux density Bz, we use coupled partial Fourier acquisitions in the interleaved sense. Methods To validate the proposed algorithm, we performed numerical simulations using a two-dimensional finite-element model. For a real experiment, we designed a phantom filled with a calibrated saline solution and located a rubber balloon inside the phantom. The rubber balloon was inflated by injecting the same saline solution during the MREIT imaging. We used the multi-echo fast low angle shot (FLASH) MR pulse sequence for the MRI scan, which allows a reduction of measurement time without a substantial loss in image quality. Results Under the assumption of an a priori phase artifact map from a reference scan, we rigorously investigated the convergence ratio of the proposed method, which was closely related to the number of measured phase-encode sets and the frequency range of the background field inhomogeneity. In the phantom experiment with a partial Fourier acquisition, the total scan time was
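
    The phase-difference recovery of Bz from interleaved +I/−I current injections follows the standard MREIT relation Bz = Δφ/(2γTc). A minimal sketch assuming numpy; Tc is an assumed current injection time and the function name is illustrative.

      # Bz from an MREIT phase difference, assuming numpy.
      import numpy as np

      GAMMA = 2.675e8        # proton gyromagnetic ratio, rad s^-1 T^-1
      Tc = 30e-3             # s, illustrative current injection time

      def bz_from_phase(phase_pos, phase_neg):
          # Subtracting +I and -I phases cancels systematic phase artifacts
          # and doubles the current-induced phase.
          dphi = np.angle(np.exp(1j * (phase_pos - phase_neg)))  # wrap to (-pi, pi]
          return dphi / (2.0 * GAMMA * Tc)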

  17. A scalable parallel open architecture data acquisition system for low to high rate experiments, test beams and all SSC (Superconducting Super Collider) detectors

    SciTech Connect

    Barsotti, E.; Booth, A.; Bowden, M.; Swoboda, C. ); Lockyer, N.; VanBerg, R. )

    1989-12-01

    A new era of high-energy physics research is beginning, requiring accelerators with much higher luminosities and interaction rates in order to discover new elementary particles. As a consequence, detector data rates and online processing power orders of magnitude beyond the capabilities of current high-energy physics data acquisition systems are required. This paper describes a new data acquisition system architecture which draws heavily from the communications industry, is totally parallel (i.e., without any bottlenecks), is capable of data rates of hundreds of gigabytes per second from the detector and into an array of online processors (i.e., a processor farm), and uses an open systems architecture to guarantee compatibility with future commercially available online processor farms. The main features of the system architecture are standard interface ICs to detector subsystems wherever possible, fiber-optic digital data transmission from the near-detector electronics, a self-routing parallel event builder, and the use of industry-supported and high-level language programmable processors in the proposed BCD system for both triggers and online filters. A brief status report of an ongoing project at Fermilab to build the self-routing parallel event builder will also be given in the paper. 3 figs., 1 tab.

  18. Application of Chang's attenuation correction technique for single-photon emission computed tomography partial angle acquisition of Jaszczak phantom.

    PubMed

    Saha, Krishnendu; Hoyt, Sean C; Murray, Bryon M

    2016-01-01

    The acquisition and processing of the Jaszczak phantom is a test recommended by the American College of Radiology for evaluation of gamma camera system performance. To produce the reconstructed phantom image for quality evaluation, attenuation correction is applied. The attenuation of counts originating from the center of the phantom is greater than that originating from the periphery, causing an artifactual appearance of inhomogeneity in the reconstructed image and complicating phantom evaluation. Chang's mathematical formulation is a common method of attenuation correction applied on most gamma cameras; it does not require an external transmission source such as computed tomography, radionuclide sources installed within the gantry of the camera, or a flood source. Tomographic acquisition can be obtained in two different modes for a dual-detector gamma camera: one where the two detectors are at a 180° configuration and acquire projection images over a full 360°, and the other where the two detectors are positioned at a 90° configuration and acquire projections over only 180°. Though Chang's attenuation correction method has been used for 360° acquisition, its applicability to 180° acquisition remains a question, with one vendor's camera software producing artifacts in the images. This work investigates whether Chang's attenuation correction technique can be applied to both acquisition modes through the development of a Chang's-formulation-based algorithm applicable to both modes. Assessment of attenuation correction performance by phantom uniformity analysis illustrates improved uniformity with the proposed algorithm (22.6%) compared to the camera software (57.6%). PMID:27051167
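
    Chang's first-order correction divides the reconstruction by the attenuation factor averaged over projection angles, where each factor uses the path length from a pixel to the object boundary. A rough sketch for a circular uniform phantom, assuming numpy; the geometry, μ value, and angular range are illustrative (for a 180° acquisition one would restrict the angles to the measured arc).

      # First-order Chang correction for a circular uniform object,
      # assuming numpy; mu is per pixel and illustrative.
      import numpy as np

      def chang_correction(recon, mu=0.015, n_angles=64):
          n = recon.shape[0]
          radius = n / 2.0
          ys, xs = np.mgrid[0:n, 0:n]
          x, y = xs - radius + 0.5, ys - radius + 0.5
          mean_att = np.zeros_like(recon, dtype=float)
          for theta in np.linspace(0.0, 2 * np.pi, n_angles, endpoint=False):
              b = x * np.cos(theta) + y * np.sin(theta)
              # Path length from each pixel to the circle along direction theta.
              l = -b + np.sqrt(np.maximum(radius**2 - (x**2 + y**2 - b**2), 0.0))
              mean_att += np.exp(-mu * l)
          mean_att /= n_angles
          inside = x**2 + y**2 <= radius**2
          return np.where(inside, recon / np.maximum(mean_att, 1e-6), recon)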

  19. Tetrahydroaminoacridine, a cholinesterase inhibitor, and D-cycloserine, a partial NMDA receptor-associated glycine site agonist, enhances acquisition of spatial navigation.

    PubMed

    Riekkinen, P; Ikonen, S; Riekkinen, M

    1998-05-11

    The present study examines the efficacy of single and combined treatments with an anticholinesterase, tetrahydroaminoacridine (THA, i.p.), and a glycine-B site partial agonist, D-cycloserine (DCS, i.p.), in alleviating the water maze (WM) spatial navigation defect induced by medial septal (MS) lesion. THA at 3 mg/kg and DCS at 3 or 10 mg/kg improved acquisition of the WM test, but only DCS improved spatial bias. These drugs had no effect on consolidation. A combination of THA (3 mg/kg) and DCS (10 mg/kg) enhanced WM acquisition more effectively than either of the treatments on its own. This suggests that combined modulation of acetylcholine and NMDA mechanisms may have a greater therapeutic effect on cognitive dysfunction.

  20. Morphological Awareness in Vocabulary Acquisition among Chinese-Speaking Children: Testing Partial Mediation via Lexical Inference Ability

    ERIC Educational Resources Information Center

    Zhang, Haomin

    2015-01-01

    The goal of this study was to investigate the effect of Chinese-specific morphological awareness on vocabulary acquisition among young Chinese-speaking students. The participants were 288 Chinese-speaking second graders from three different cities in China. Multiple regression analysis and mediation analysis were used to uncover the mediated and…

  1. 16 CFR 802.42 - Partial exemption for acquisitions in connection with the formation of certain joint ventures or...

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... connection with the formation of certain joint ventures or other corporations. 802.42 Section 802.42... acquisitions in connection with the formation of certain joint ventures or other corporations. (a) Whenever one or more of the contributors in the formation of a joint venture or other corporation which...

  2. Simultaneous parallel inclined readout image technique.

    PubMed

    Paley, Martyn N J; Lee, Kuan J; Wild, James M; Griffiths, Paul D; Whitby, Elspeth H

    2006-06-01

    Sensitivity-encoded phase undersampling has been combined with simultaneous slice excitation to produce a parallel MRI method with a high volumetric acquisition acceleration factor without the need for auxiliary stepped field coils. Dual-slice excitation was produced by modulating both spin and gradient echo sequences at ±6 kHz. Frequency aliasing of simultaneously excited slices was prevented by using an additional gradient applied along the slice axis during data acquisition. Data were acquired using a four-channel receiver array and ×4 sensitivity encoding on a 1.5 T MR system. The simultaneous parallel inclined readout image technique has been successfully demonstrated in both phantoms and volunteers. A multiplicative image acquisition acceleration factor of up to ×8 was achieved. Image SNR and resolution were dependent on the ratio of the readout gradient to the additional slice gradient. A ratio of approximately 2:1 produced acceptable image quality. Use of RF pulses with additional excitation bands should enable the technique to be extended to volumetric acquisition acceleration factors in the range of ×16-24 without the SNR limitations of pure partially parallel phase reduction methods.
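
    Modulating a slice-selective pulse by a cosine splits its excitation band into two bands at plus and minus the modulation frequency, which is the dual-slice mechanism used above. A small-tip-angle sketch assuming numpy; the pulse parameters are illustrative.

      # Dual-band RF sketch, assuming numpy: a cosine-modulated sinc excites
      # two bands at +/-6 kHz (small-tip approximation).
      import numpy as np

      t = np.linspace(-2e-3, 2e-3, 1024)          # 4-ms pulse
      base = np.sinc(2e3 * t)                     # single-band pulse, ~2 kHz wide
      dual = base * np.cos(2 * np.pi * 6e3 * t)   # bands at -6 kHz and +6 kHz

      # In the small-tip regime the excitation profile is roughly the
      # Fourier transform of the pulse: two bands, one per slice.
      profile = np.abs(np.fft.fftshift(np.fft.fft(dual)))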

  3. Solitary Sound Play during Acquisition of English Vocalizations by an African Grey Parrot (Psittacus Erithacus): Possible Parallels with Children's Monologue Speech.

    ERIC Educational Resources Information Center

    Pepperberg, Irene M.; And Others

    1991-01-01

    Examines one component of an African Grey parrot's monologue behavior, private speech, while he was being taught new vocalizations. The data are discussed in terms of the possible functions of monologues during the parrot's acquisition of novel vocalizations. (85 references) (GLR)

  4. Parallel MR Imaging

    PubMed Central

    Deshmane, Anagha; Gulani, Vikas; Griswold, Mark A.; Seiberlich, Nicole

    2015-01-01

    Parallel imaging is a robust method for accelerating the acquisition of magnetic resonance imaging (MRI) data, and has made possible many new applications of MR imaging. Parallel imaging works by acquiring a reduced amount of k-space data with an array of receiver coils. These undersampled data can be acquired more quickly, but the undersampling leads to aliased images. One of several parallel imaging algorithms can then be used to reconstruct artifact-free images from either the aliased images (SENSE-type reconstruction) or from the under-sampled data (GRAPPA-type reconstruction). The advantages of parallel imaging in a clinical setting include faster image acquisition, which can be used, for instance, to shorten breath-hold times resulting in fewer motion-corrupted examinations. In this article the basic concepts behind parallel imaging are introduced. The relationship between undersampling and aliasing is discussed and two commonly used parallel imaging methods, SENSE and GRAPPA, are explained in detail. Examples of artifacts arising from parallel imaging are shown and ways to detect and mitigate these artifacts are described. Finally, several current applications of parallel imaging are presented and recent advancements and promising research in parallel imaging are briefly reviewed. PMID:22696125
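
    The GRAPPA idea sketched above, fitting weights on fully sampled autocalibration (ACS) lines and using them to synthesize the skipped lines, can be shown with a deliberately simplified one-dimensional toy. This sketch assumes numpy and random stand-in data (real data would carry the coil-to-coil correlations the fit exploits); real GRAPPA uses 2-D kernels and many more details.

      # Deliberately simplified 1-D GRAPPA at R = 2, assuming numpy.
      import numpy as np

      rng = np.random.default_rng(2)
      n_coils, n_lines = 4, 64
      k = rng.normal(size=(n_coils, n_lines)) + 1j * rng.normal(size=(n_coils, n_lines))

      # Fit weights on the ACS block: predict line j from lines j-1 and j+1.
      acs = range(24, 40)
      src = np.stack([np.concatenate([k[:, j - 1], k[:, j + 1]]) for j in acs])
      tgt = np.stack([k[:, j] for j in acs])
      weights, *_ = np.linalg.lstsq(src, tgt, rcond=None)  # (2*n_coils, n_coils)

      # Apply the weights to synthesize the skipped (odd) lines.
      recon = k.copy()
      for j in range(1, n_lines - 1, 2):
          recon[:, j] = np.concatenate([k[:, j - 1], k[:, j + 1]]) @ weights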

  5. Sequencing the hypervariable regions of human mitochondrial DNA using massively parallel sequencing: Enhanced data acquisition for DNA samples encountered in forensic testing.

    PubMed

    Davis, Carey; Peters, Dixie; Warshauer, David; King, Jonathan; Budowle, Bruce

    2015-03-01

    Mitochondrial DNA testing is a useful tool in the analysis of forensic biological evidence. In cases where nuclear DNA is damaged or limited in quantity, the higher copy number of mitochondrial genomes available in a sample can provide information about the source of a sample. Currently, Sanger-type sequencing (STS) is the primary method to develop mitochondrial DNA profiles. This method is laborious and time consuming. Massively parallel sequencing (MPS) can increase the amount of information obtained from mitochondrial DNA samples while improving turnaround time by decreasing the numbers of manipulations and, more so, by exploiting high-throughput analyses to obtain interpretable results. In this study, 18 buccal swabs, three different tissue samples from five individuals, and four bone samples from casework were sequenced at hypervariable regions I and II using STS and MPS. Sample enrichment for STS and MPS was PCR-based. Library preparation for MPS was performed using the Nextera® XT DNA Sample Preparation Kit and sequencing was performed on the MiSeq™ (Illumina, Inc.). MPS yielded full concordance of base calls with STS results, and the newer methodology was able to resolve length heteroplasmy in homopolymeric regions. This study demonstrates that short-amplicon MPS of mitochondrial DNA is feasible, can provide information not possible with STS, and lays the groundwork for development of a whole genome sequencing strategy for degraded samples.

  6. Acquired resistance to zoledronic acid and the parallel acquisition of an aggressive phenotype are mediated by p38-MAP kinase activation in prostate cancer cells

    PubMed Central

    Milone, M R; Pucci, B; Bruzzese, F; Carbone, C; Piro, G; Costantini, S; Capone, F; Leone, A; Di Gennaro, E; Caraglia, M; Budillon, A

    2013-01-01

    resistance, as well as in the acquisition of a more aggressive and invasive phenotype. PMID:23703386

  7. Parallel computers

    SciTech Connect

    Treveaven, P.

    1989-01-01

    This book presents an introduction to object-oriented, functional, and logic parallel computing on which the fifth generation of computer systems will be based. Coverage includes concepts for parallel computing languages, a parallel object-oriented system (DOOM) and its language (POOL), an object-oriented multilevel VLSI simulator using POOL, and implementation of lazy functional languages on parallel architectures.

  8. Parallel sphere rendering

    SciTech Connect

    Krogh, M.; Painter, J.; Hansen, C.

    1996-10-01

    Sphere rendering is an important method for visualizing molecular dynamics data. This paper presents a parallel algorithm that is almost 90 times faster than current graphics workstations. To render extremely large data sets and large images, the algorithm uses the MIMD features of the supercomputers to divide up the data, render independent partial images, and then finally composite the multiple partial images using an optimal method. The algorithm and performance results are presented for the CM-5 and the M.

  9. Parallel rendering

    NASA Technical Reports Server (NTRS)

    Crockett, Thomas W.

    1995-01-01

    This article provides a broad introduction to the subject of parallel rendering, encompassing both hardware and software systems. The focus is on the underlying concepts and the issues which arise in the design of parallel rendering algorithms and systems. We examine the different types of parallelism and how they can be applied in rendering applications. Concepts from parallel computing, such as data decomposition, task granularity, scalability, and load balancing, are considered in relation to the rendering problem. We also explore concepts from computer graphics, such as coherence and projection, which have a significant impact on the structure of parallel rendering algorithms. Our survey covers a number of practical considerations as well, including the choice of architectural platform, communication and memory requirements, and the problem of image assembly and display. We illustrate the discussion with numerous examples from the parallel rendering literature, representing most of the principal rendering methods currently used in computer graphics.

  10. Three-way analysis of the UPLC-PDA dataset for the multicomponent quantitation of hydrochlorothiazide and olmesartan medoxomil in tablets by parallel factor analysis and three-way partial least squares.

    PubMed

    Dinç, Erdal; Ertekin, Zehra Ceren

    2016-01-01

    An application of parallel factor analysis (PARAFAC) and three-way partial least squares (3W-PLS1) regression models to ultra-performance liquid chromatography-photodiode array detection (UPLC-PDA) data with co-eluted peaks in the same wavelength and time regions was described for the multicomponent quantitation of hydrochlorothiazide (HCT) and olmesartan medoxomil (OLM) in tablets. A three-way dataset of HCT and OLM in their binary mixtures containing telmisartan (IS) as an internal standard was recorded with a UPLC-PDA instrument. Firstly, the PARAFAC algorithm was applied to decompose the three-way UPLC-PDA data into chromatographic, spectral and concentration profiles to quantify the compounds of interest. Secondly, the 3W-PLS1 approach was applied to decompose a tensor consisting of the three-way UPLC-PDA data into a set of triads, building a 3W-PLS1 regression for the analysis of the same compounds in samples. For the proposed three-way analysis methods, in the regression and prediction steps, the applicability and validity of the PARAFAC and 3W-PLS1 models were checked by analyzing synthetic mixture samples, inter-day and intra-day samples, and standard addition samples containing HCT and OLM. The two three-way analysis methods, PARAFAC and 3W-PLS1, were successfully applied to the quantitative estimation of the solid dosage form containing HCT and OLM. Regression and prediction results from the three-way analysis were compared with those obtained by a traditional UPLC method.
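
    A PARAFAC decomposition of a three-way array into per-mode factor matrices can be sketched with the tensorly package; the API shown follows recent tensorly versions, and the rank and array sizes are illustrative rather than taken from the paper.

      # PARAFAC sketch, assuming the tensorly package.
      import numpy as np
      import tensorly as tl
      from tensorly.decomposition import parafac

      data = np.random.rand(10, 300, 120)     # samples x time x wavelength
      weights, factors = parafac(tl.tensor(data), rank=3)
      concentration, elution, spectra = factors   # one factor matrix per mode
      print([f.shape for f in factors])       # [(10, 3), (300, 3), (120, 3)]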

  11. HYPERCP data acquisition system

    SciTech Connect

    Kaplan, D.M.; Luebke, W.R.; Chakravorty, A.

    1997-12-31

    For the HyperCP experiment at Fermilab, we have assembled a data acquisition system that records on up to 45 Exabyte 8505 tape drives in parallel at up to 17 MB/s. During the beam spill, data are acquired from the front-end digitization systems at ~60 MB/s via five parallel data paths. The front-end systems achieve a typical readout deadtime of ~1 μs per event, allowing operation at a 75-kHz trigger rate with ≲30% deadtime. Event building and tape writing are handled by 15 Motorola MVME167 processors in 5 VME crates.

  12. Parallel sphere rendering

    SciTech Connect

    Krogh, M.; Hansen, C.; Painter, J.; de Verdiere, G.C.

    1995-05-01

    Sphere rendering is an important method for visualizing molecular dynamics data. This paper presents a parallel divide-and-conquer algorithm that is almost 90 times faster than current graphics workstations. To render extremely large data sets and large images, the algorithm uses the MIMD features of the supercomputers to divide up the data, render independent partial images, and then finally composite the multiple partial images using an optimal method. The algorithm and performance results are presented for the CM-5 and the T3D.

  13. Parallel grid population

    SciTech Connect

    Wald, Ingo; Ize, Santiago

    2015-07-28

    Parallel population of a grid with a plurality of objects using a plurality of processors. One example embodiment is a method for parallel population of a grid with a plurality of objects using a plurality of processors. The method includes a first act of dividing a grid into n distinct grid portions, where n is the number of processors available for populating the grid. The method also includes acts of dividing a plurality of objects into n distinct sets of objects, assigning a distinct set of objects to each processor such that each processor determines by which distinct grid portion(s) each object in its distinct set of objects is at least partially bounded, and assigning a distinct grid portion to each processor such that each processor populates its distinct grid portion with any objects that were previously determined to be at least partially bounded by its distinct grid portion.
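
    The two-phase scheme, first mapping objects to the grid portions that at least partially bound them, then populating each portion in parallel, can be sketched with Python's multiprocessing; objects are reduced to axis-aligned intervals here, and all names and sizes are illustrative.

      # Two-phase grid population sketch with multiprocessing.
      from collections import defaultdict
      from multiprocessing import Pool

      N = 4                                               # workers == portions
      BOUNDS = [(lo, lo + 25.0) for lo in (0.0, 25.0, 50.0, 75.0)]

      def classify(chunk):
          # Phase 1: find the grid portions each object at least partially
          # overlaps.
          out = []
          for oid, xmin, xmax in chunk:
              out.extend((p, oid) for p, (lo, hi) in enumerate(BOUNDS)
                         if xmax >= lo and xmin < hi)
          return out

      def populate(item):
          # Phase 2: one worker populates one grid portion.
          portion, ids = item
          return portion, sorted(ids)

      if __name__ == "__main__":
          objects = [(i, 10.0 * i, 10.0 * i + 12.0) for i in range(8)]
          chunks = [objects[i::N] for i in range(N)]      # one subset per worker
          with Pool(N) as pool:
              pairs = [p for sub in pool.map(classify, chunks) for p in sub]
              grid = defaultdict(list)
              for portion, oid in pairs:
                  grid[portion].append(oid)
              print(dict(pool.map(populate, sorted(grid.items()))))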

  14. Super-resolved Parallel MRI by Spatiotemporal Encoding

    PubMed Central

    Schmidt, Rita; Baishya, Bikash; Ben-Eliezer, Noam; Seginer, Amir; Frydman, Lucio

    2016-01-01

    Recent studies described an alternative “ultrafast” scanning method based on spatiotemporal (SPEN) principles. SPEN demonstrates numerous potential advantages over EPI-based alternatives, at no additional expense in experimental complexity. An important aspect that SPEN still needs to achieve for providing a competitive acquisition alternative entails exploiting parallel imaging algorithms, without compromising its proven capabilities. The present work introduces a combination of multi-band frequency-swept pulses simultaneously encoding multiple partial fields-of-view, together with a new algorithm merging a Super-Resolved SPEN image reconstruction and SENSE multiple-receiving methods. The ensuing approach enables one to reduce both the excitation and acquisition times of ultrafast SPEN acquisitions by the customary acceleration factor R, without compromises in either the ensuing spatial resolution, SAR deposition, or the capability to operate in multi-slice mode. The performance of these new single-shot imaging sequences and their ancillary algorithms was explored on phantoms and human volunteers at 3T. The gains of the parallelized approach were particularly evident when dealing with heterogeneous systems subject to major T2/T2* effects, as is the case upon single-scan imaging near tissue/air interfaces. PMID:24120293

  15. Parallel imaging for first-pass myocardial perfusion.

    PubMed

    Irwan, Roy; Lubbers, Daniël D; van der Vleuten, Pieter A; Kappert, Peter; Götte, Marco J W; Sijens, Paul E

    2007-06-01

    Two parallel imaging methods used for first-pass myocardial perfusion imaging were compared in terms of signal-to-noise ratio (SNR), contrast-to-noise ratio (CNR) and image artifacts: Time-adaptive SENSitivity Encoding (TSENSE) and GeneRalized Autocalibrating Partially Parallel Acquisition (GRAPPA), both applied to a gradient-echo sequence. Both methods were tested on 12 patients with coronary artery disease. The order of the perfusion sequences was inverted in every other patient. Image acquisition was started during the administration of a contrast bolus followed by a 20-ml saline flush (3 ml/s), and the next perfusion acquisition was started at least 15 min thereafter using an identical bolus. An acceleration rate of 2 was used in both methods, and acquisition was performed during breath-holding. Significantly higher SNR, CNR and image quality were obtained with GRAPPA images than with TSENSE images. GRAPPA, however, did not yield a higher CNR when applied after the second bolus. GRAPPA perfusion imaging produced larger differences between subjects than did TSENSE. Compared to TSENSE, GRAPPA produced significantly better CNR on the first bolus. More consistent SNR and CNR were obtained from TSENSE images than from GRAPPA images, indicating that the diagnostic value of TSENSE may be better.

  16. Massively parallel visualization: Parallel rendering

    SciTech Connect

    Hansen, C.D.; Krogh, M.; White, W.

    1995-12-01

    This paper presents rendering algorithms, developed for massively parallel processors (MPPs), for polygon, sphere, and volumetric data. The polygon algorithm uses a data-parallel approach whereas the sphere and volume renderers use a MIMD approach. Implementations of these algorithms are presented for the Thinking Machines Corporation CM-5 MPP.

  17. Multi-echo acquisition

    PubMed Central

    Posse, Stefan

    2011-01-01

    The rapid development of fMRI was paralleled early on by the adaptation of MR spectroscopic imaging (MRSI) methods to quantify water relaxation changes during brain activation. This review describes the evolution of multi-echo acquisition from high-speed MRSI to multi-echo EPI and beyond. It highlights milestones in the development of multi-echo acquisition methods, such as the discovery of considerable gains in fMRI sensitivity when combining echo images, advances in quantification of the BOLD effect using analytical biophysical modeling and interleaved multi-region shimming. The review conveys the insight gained from combining fMRI and MRSI methods and concludes with recent trends in ultra-fast fMRI, which will significantly increase temporal resolution of multi-echo acquisition. PMID:22056458

  18. 48 CFR 49.112-1 - Partial payments.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 48 Federal Acquisition Regulations System 1 2010-10-01 2010-10-01 false Partial payments. 49.112-1 Section 49.112-1 Federal Acquisition Regulations System FEDERAL ACQUISITION REGULATION CONTRACT MANAGEMENT TERMINATION OF CONTRACTS General Principles 49.112-1 Partial payments. (a) General. If the contract...

  19. 48 CFR 49.109-5 - Partial settlements.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 48 Federal Acquisition Regulations System 1 2010-10-01 2010-10-01 false Partial settlements. 49.109-5 Section 49.109-5 Federal Acquisition Regulations System FEDERAL ACQUISITION REGULATION CONTRACT MANAGEMENT TERMINATION OF CONTRACTS General Principles 49.109-5 Partial settlements. The TCO should...

  20. Parallel pipelining

    SciTech Connect

    Joseph, D.D.; Bai, R.; Liao, T.Y.; Huang, A.; Hu, H.H.

    1995-09-01

    In this paper the authors introduce the idea of parallel pipelining for water-lubricated transportation of oil (or other viscous material). A parallel system can have major advantages over a single pipe with respect to the cost of maintenance and continuous operation of the system, to the pressure gradients required to restart a stopped system and to the reduction and even elimination of the fouling of pipe walls in continuous operation. The authors show that the action of capillarity in small pipes is more favorable for restart than in large pipes. In a parallel pipeline system, they estimate the number of small pipes needed to deliver the same oil flux as in one larger pipe as N = (R/r)^α, where r and R are the radii of the small and large pipes, respectively, and α = 4 or 19/7 when the lubricating water flow is laminar or turbulent.
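
    The pipe-count estimate is easy to evaluate. For example, with illustrative radii giving R/r = 10, it yields N = 10^4 small pipes in the laminar case and roughly 5×10^2 in the turbulent case:

      # Worked example of N = (R/r)^alpha with illustrative radii.
      R, r = 0.5, 0.05                       # large and small pipe radii, m
      for label, alpha in (("laminar", 4.0), ("turbulent", 19.0 / 7.0)):
          print(label, "N ~", round((R / r) ** alpha))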

  1. The NAS Parallel Benchmarks

    SciTech Connect

    Bailey, David H.

    2009-11-15

    The NAS Parallel Benchmarks (NPB) are a suite of parallel computer performance benchmarks. They were originally developed at the NASA Ames Research Center in 1991 to assess high-end parallel supercomputers. Although they are no longer used as widely as they once were for comparing high-end system performance, they continue to be studied and analyzed a great deal in the high-performance computing community. The acronym 'NAS' originally stood for the Numerical Aerodynamic Simulation Program at NASA Ames. The name of this organization was subsequently changed to the Numerical Aerospace Simulation Program, and more recently to the NASA Advanced Supercomputing Center, although the acronym remains 'NAS.' The developers of the original NPB suite were David H. Bailey, Eric Barszcz, John Barton, David Browning, Russell Carter, Leo Dagum, Rod Fatoohi, Samuel Fineberg, Paul Frederickson, Thomas Lasinski, Rob Schreiber, Horst Simon, V. Venkatakrishnan and Sisira Weeratunga. The original NAS Parallel Benchmarks consisted of eight individual benchmark problems, each of which focused on some aspect of scientific computing. The principal focus was in computational aerophysics, although most of these benchmarks have much broader relevance, since in a much larger sense they are typical of many real-world scientific computing applications. The NPB suite grew out of the need for a more rational procedure to select new supercomputers for acquisition by NASA. The emergence of commercially available highly parallel computer systems in the late 1980s offered an attractive alternative to parallel vector supercomputers that had been the mainstay of high-end scientific computing. However, the introduction of highly parallel systems was accompanied by a regrettable level of hype, not only on the part of the commercial vendors but even, in some cases, by scientists using the systems. As a result, it was difficult to discern whether the new systems offered any fundamental performance advantage

  2. Assessing Partial Knowledge in Vocabulary.

    ERIC Educational Resources Information Center

    Smith, Richard M.

    1987-01-01

    Partial knowledge was assessed in a multiple-choice vocabulary test. Test reliability and concurrent validity were compared using Rasch-based dichotomous and polychotomous scoring models. Results supported the polychotomous scoring model, and moderately supported J. O'Connor's theory of vocabulary acquisition. (Author/GDC)

  3. SSC/BCD data acquisition system proposal

    SciTech Connect

    Barsotti, E.; Bowden, M.; Swoboda, C.

    1989-04-01

    The proposed new data acquisition system architecture carries event fragments off a detector over fiber optics to a parallel event-building switch. The parallel event-building switch concept, taken from the telephone communications industry, along with expected technology improvements in fiber-optic data transmission speeds over the next few years, should allow data acquisition system rates to increase dramatically and exceed those rates needed for the SSC. This report briefly describes the switch architecture and fiber optics for an SSC data acquisition system.

  4. Parallel Information Processing.

    ERIC Educational Resources Information Center

    Rasmussen, Edie M.

    1992-01-01

    Examines parallel computer architecture and the use of parallel processors for text. Topics discussed include parallel algorithms; performance evaluation; parallel information processing; parallel access methods for text; parallel and distributed information retrieval systems; parallel hardware for text; and network models for information…

  5. Acquisition strategies

    SciTech Connect

    Zimmer, M.J.; Lynch, P.W. )

    1993-11-01

    Acquiring projects takes careful planning, research and consideration. Picking the right opportunities and avoiding the pitfalls will lead to a more valuable portfolio. This article describes the steps to take in evaluating an acquisition and what items need to be considered in an evaluation.

  6. Epilepsy (partial)

    PubMed Central

    2011-01-01

    Introduction About 3% of people will be diagnosed with epilepsy during their lifetime, but about 70% of people with epilepsy eventually go into remission. Methods and outcomes We conducted a systematic review and aimed to answer the following clinical questions: What are the effects of starting antiepileptic drug treatment following a single seizure? What are the effects of drug monotherapy in people with partial epilepsy? What are the effects of additional drug treatments in people with drug-resistant partial epilepsy? What is the risk of relapse in people in remission when withdrawing antiepileptic drugs? What are the effects of behavioural and psychological treatments for people with epilepsy? What are the effects of surgery in people with drug-resistant temporal lobe epilepsy? We searched: Medline, Embase, The Cochrane Library, and other important databases up to July 2009 (Clinical Evidence reviews are updated periodically; please check our website for the most up-to-date version of this review). We included harms alerts from relevant organisations such as the US Food and Drug Administration (FDA) and the UK Medicines and Healthcare products Regulatory Agency (MHRA). Results We found 83 systematic reviews, RCTs, or observational studies that met our inclusion criteria. We performed a GRADE evaluation of the quality of evidence for interventions. Conclusions In this systematic review we present information relating to the effectiveness and safety of the following interventions: antiepileptic drugs after a single seizure; monotherapy for partial epilepsy using carbamazepine, gabapentin, lamotrigine, levetiracetam, phenobarbital, phenytoin, sodium valproate, or topiramate; addition of second-line drugs for drug-resistant partial epilepsy (allopurinol, eslicarbazepine, gabapentin, lacosamide, lamotrigine, levetiracetam, losigamone, oxcarbazepine, retigabine, tiagabine, topiramate, vigabatrin, or zonisamide); antiepileptic drug withdrawal for people with partial or

  7. 48 CFR 19.502-3 - Partial set-asides.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 48 Federal Acquisition Regulations System 1 2010-10-01 2010-10-01 false Partial set-asides. 19.502... SOCIOECONOMIC PROGRAMS SMALL BUSINESS PROGRAMS Set-Asides for Small Business 19.502-3 Partial set-asides. (a) The contracting officer shall set aside a portion of an acquisition, except for construction,...

  8. 48 CFR 49.208 - Equitable adjustment after partial termination.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 48 Federal Acquisition Regulations System 1 2010-10-01 2010-10-01 false Equitable adjustment after partial termination. 49.208 Section 49.208 Federal Acquisition Regulations System FEDERAL ACQUISITION REGULATION CONTRACT MANAGEMENT TERMINATION OF CONTRACTS Additional Principles for Fixed-Price...

  9. 48 CFR 49.304 - Procedure for partial termination.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 48 Federal Acquisition Regulations System 1 2010-10-01 2010-10-01 false Procedure for partial termination. 49.304 Section 49.304 Federal Acquisition Regulations System FEDERAL ACQUISITION REGULATION CONTRACT MANAGEMENT TERMINATION OF CONTRACTS Additional Principles for Cost-Reimbursement...

  10. A survey of parallel programming tools

    NASA Technical Reports Server (NTRS)

    Cheng, Doreen Y.

    1991-01-01

    This survey examines 39 parallel programming tools. Focus is placed on those tool capabilities needed for parallel scientific programming rather than for general computer science. The tools are classified with the current and future needs of the Numerical Aerodynamic Simulator (NAS) in mind: existing and anticipated NAS supercomputers and workstations; operating systems; programming languages; and applications. They are divided into four categories: suggested acquisitions; tools already brought in; tools worth tracking; and tools eliminated from further consideration at this time.

  11. Tolerant (parallel) Programming

    NASA Technical Reports Server (NTRS)

    DiNucci, David C.; Bailey, David H. (Technical Monitor)

    1997-01-01

    In order to be truly portable, a program must be tolerant of a wide range of development and execution environments, and a parallel program is just one which must be tolerant of a very wide range. This paper first defines the term "tolerant programming", then describes many layers of tools to accomplish it. The primary focus is on F-Nets, a formal model for expressing computation as a folded partial-ordering of operations, thereby providing an architecture-independent expression of tolerant parallel algorithms. For implementing F-Nets, Cooperative Data Sharing (CDS) is a subroutine package for implementing communication efficiently in a large number of environments (e.g. shared memory and message passing). Software Cabling (SC), a very-high-level graphical programming language for building large F-Nets, possesses many of the features normally expected from today's computer languages (e.g. data abstraction, array operations). Finally, L2³ is a CASE tool which facilitates the construction, compilation, execution, and debugging of SC programs.

  12. Fast 3D coronary artery contrast-enhanced magnetic resonance angiography with magnetization transfer contrast, fat suppression and parallel imaging as applied on an anthropomorphic moving heart phantom.

    PubMed

    Irwan, Roy; Rüssel, Iris K; Sijens, Paul E

    2006-09-01

    A magnetic resonance sequence for high-resolution imaging of coronary arteries in a very short acquisition time is presented. The technique is based on fast low-angle shot and uses fat saturation and magnetization transfer contrast prepulses to improve image contrast. GeneRalized Autocalibrating Partially Parallel Acquisitions (GRAPPA) is implemented to shorten acquisition time. The sequence was tested on a moving anthropomorphic silicone heart phantom where the coronary arteries were filled with a gadolinium contrast agent solution, and imaging was performed at varying heart rates using GRAPPA. The clinical relevance of the phantom was validated by comparing the myocardial relaxation times of the phantom's homogeneous silicone cardiac wall to those of humans. Signal-to-noise ratio and contrast-to-noise ratio were higher when parallel imaging was used, possibly benefiting from the acquisition of one partition per heartbeat. Another advantage of parallel imaging for visualizing the coronary arteries is that the entire heart can be imaged within a few breath-holds.

  13. 48 CFR 819.502-3 - Partial set-asides.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 48 Federal Acquisition Regulations System 5 2010-10-01 2010-10-01 false Partial set-asides. 819... SOCIOECONOMIC PROGRAMS SMALL BUSINESS PROGRAMS Set-Asides for Small Business 819.502-3 Partial set-asides. When... particular procurement will be partially set aside for small business participation, the solicitation...

  14. 48 CFR 1319.502-3 - Partial set-asides.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 48 Federal Acquisition Regulations System 5 2010-10-01 2010-10-01 false Partial set-asides. 1319... PROGRAMS SMALL BUSINESS PROGRAMS Set-Asides for Small Business 1319.502-3 Partial set-asides. A partial set... and one small) will respond with offers unless the set-aside is authorized by the designee set...

  15. Special parallel processing workshop

    SciTech Connect

    1994-12-01

    This report contains viewgraphs from the Special Parallel Processing Workshop. These viewgraphs deal with topics such as parallel processing performance, message passing, queue structure, and other basic concepts dealing with parallel processing.

  16. A parallel imaging technique using mutual calibration for split-blade diffusion-weighted PROPELLER.

    PubMed

    Li, Zhiqiang; Pipe, James G; Aboussouan, Eric; Karis, John P; Huo, Donglai

    2011-03-01

    Split-blade diffusion-weighted periodically rotated overlapping parallel lines with enhanced reconstruction (DW-PROPELLER) was proposed to address the issues associated with diffusion-weighted echo planar imaging such as geometric distortion and difficulty in high-resolution imaging. The major drawbacks of DW-PROPELLER are its high SAR (especially at 3T) and its violation of the Carr-Purcell-Meiboom-Gill condition, which lead to a long scan time and a narrow blade. Parallel imaging can reduce scan time and increase blade width; however, it is very challenging to apply standard k-space-based techniques such as GeneRalized Autocalibrating Partially Parallel Acquisitions (GRAPPA) to split-blade DW-PROPELLER due to its narrow blade. In this work, a new calibration scheme is proposed for a k-space-based parallel imaging method without the need for additional calibration data, which results in a wider, more stable blade. The in vivo results show that this technique is very promising.

  17. Sparse-CAPR: Highly-Accelerated 4D CE-MRA with Parallel Imaging and Nonconvex Compressive Sensing

    PubMed Central

    Trzasko, Joshua D.; Haider, Clifton R.; Borisch, Eric A.; Campeau, Norbert G.; Glockner, James F.; Riederer, Stephen J.; Manduca, Armando

    2012-01-01

    CAPR is a SENSE-type parallel 3DFT acquisition paradigm for 4D contrast-enhanced magnetic resonance angiography (CE-MRA) that has been demonstrated capable of providing high spatial and temporal resolution, diagnostic-quality images at very high acceleration rates. However, CAPR images are typically reconstructed online using Tikhonov regularization and partial Fourier methods, which are prone to exhibit noise amplification and undersampling artifacts when operating at very high acceleration rates. In this work, a sparsity-driven offline reconstruction framework for CAPR is developed and demonstrated to consistently provide improvements over the currently employed reconstruction strategy against these ill effects. Moreover, the proposed reconstruction strategy requires no changes to the existing CAPR acquisition protocol, and an efficient numerical optimization and hardware system are described that allow a 256×160×80 CE-MRA volume to be reconstructed from an 8-channel data set in less than two minutes. PMID:21608028
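
    A generic sparsity-driven reconstruction of undersampled k-space can be sketched with iterative soft thresholding; the paper's nonconvex prior, CAPR sampling pattern, and hardware pipeline are not reproduced here. A minimal sketch assuming numpy, with y holding zeros at unsampled points and mask a boolean sampling pattern:

      # Iterative soft-thresholding sketch for undersampled Fourier data.
      import numpy as np

      def ista_recon(y, mask, n_iter=200, tau=0.01):
          x = np.zeros_like(y)
          for _ in range(n_iter):
              # Gradient step on ||mask*FFT(x) - y||^2 (step 1/N matches the
              # unnormalized numpy FFT pair).
              x = x - np.fft.ifft2(mask * np.fft.fft2(x) - y)
              # Complex soft threshold promotes image-domain sparsity.
              mag = np.abs(x)
              x = x * np.maximum(1.0 - tau / np.maximum(mag, 1e-12), 0.0)
          return x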

  18. Parallel rendering techniques for massively parallel visualization

    SciTech Connect

    Hansen, C.; Krogh, M.; Painter, J.

    1995-07-01

    As the resolution of simulation models increases, scientific visualization algorithms which take advantage of the large memory and parallelism of Massively Parallel Processors (MPPs) are becoming increasingly important. For large applications, rendering on the MPP tends to be preferable to rendering on a graphics workstation due to the MPP's abundant resources: memory, disk, and numerous processors. The challenge becomes developing algorithms that can exploit these resources while minimizing overhead, typically communication costs. This paper will describe recent efforts in parallel rendering for polygonal primitives as well as parallel volumetric techniques. It presents rendering algorithms, developed for massively parallel processors (MPPs), for polygon, sphere, and volumetric data. The polygon algorithm uses a data-parallel approach whereas the sphere and volume renderers use a MIMD approach. Implementations of these algorithms are presented for the Thinking Machines Corporation CM-5 MPP.
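
    The render-then-composite pattern used for spheres can be sketched as follows: each worker rasterizes its share of spheres into a partial depth/color image, and partial images are merged by keeping the nearest depth per pixel. A serial stand-in for the MIMD version, assuming numpy, orthographic projection, and flat shading; all sizes are illustrative.

      # Render-then-composite sketch, assuming numpy.
      import numpy as np

      W = H = 200

      def render_partial(spheres):
          depth = np.full((H, W), np.inf)
          color = np.zeros((H, W))
          ys, xs = np.mgrid[0:H, 0:W]
          for cx, cy, cz, rad, shade in spheres:
              d2 = (xs - cx) ** 2 + (ys - cy) ** 2
              z = cz - np.sqrt(np.maximum(rad ** 2 - d2, 0.0))  # front surface
              closer = (d2 <= rad ** 2) & (z < depth)
              depth[closer], color[closer] = z[closer], shade
          return depth, color

      def composite(partials):
          # Merge partial images by keeping the nearest depth per pixel.
          depth = np.full((H, W), np.inf)
          color = np.zeros((H, W))
          for d, c in partials:
              closer = d < depth
              depth[closer], color[closer] = d[closer], c[closer]
          return color

      spheres = [(50, 60, 10, 30, 0.9), (120, 80, 5, 40, 0.5),
                 (90, 150, 8, 35, 0.7), (160, 40, 12, 25, 0.3)]
      parts = [render_partial(spheres[i::2]) for i in range(2)]  # two "workers"
      image = composite(parts)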

  19. On the parallel solution of parabolic equations

    NASA Technical Reports Server (NTRS)

    Gallopoulos, E.; Saad, Youcef

    1989-01-01

    Parallel algorithms for the solution of linear parabolic problems are proposed. The first of these methods is based on using polynomial approximation to the exponential. It does not require solving any linear systems and is highly parallelizable. The two other methods proposed are based on Padé and Chebyshev approximations to the matrix exponential. The parallelization of these methods is achieved by using partial fraction decomposition techniques to solve the resulting systems, and thus offers the potential for increased time parallelism in time-dependent problems. Experimental results from the Alliant FX/8 and the Cray Y-MP/832 vector multiprocessors are also presented.
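
    The partial-fraction mechanism can be seen with the diagonal (1,1) Padé approximant, e^z ≈ (1 + z/2)/(1 − z/2) = −1 − 4/(z − 2): each pole contributes one independent shifted linear solve, and higher-order approximants yield several such solves that can run in parallel. A minimal numpy sketch, with an illustrative stiff diagonal example:

      # Partial-fraction sketch with the (1,1) Pade approximant:
      # exp(tA) u0 ~ -u0 - 4 (tA - 2I)^{-1} u0. Each pole of a higher-order
      # approximant gives one independent, parallelizable solve.
      import numpy as np

      def expm_pade11(A, u0, t):
          n = A.shape[0]
          return -u0 - 4.0 * np.linalg.solve(t * A - 2.0 * np.eye(n), u0)

      A = np.diag([-1.0, -5.0, -10.0])       # parabolic-like (stiff) spectrum
      u0 = np.ones(3)
      print(expm_pade11(A, u0, 0.01))        # compare with exp(0.01 * diag(A))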

  20. The Spirituality of Second Language Acquisition

    ERIC Educational Resources Information Center

    Jackson, Baxter

    2006-01-01

    Parallels between the reconstruction of self in Alcoholics Anonymous and the reconstruction of self in second language acquisition are drawn out and examined in three areas: ego deflation, identification at depth, and mutual assistance. These spiritual principles are shown to be theoretically and empirically supported in SLA literature and…

  1. Syntax acquisition.

    PubMed

    Crain, Stephen; Thornton, Rosalind

    2012-03-01

    Every normal child acquires a language in just a few years. By 3- or 4-years-old, children have effectively become adults in their abilities to produce and understand endlessly many sentences in a variety of conversational contexts. There are two alternative accounts of the course of children's language development. These different perspectives can be traced back to the nature versus nurture debate about how knowledge is acquired in any cognitive domain. One perspective dates back to Plato's dialog 'The Meno'. In this dialog, the protagonist, Socrates, demonstrates to Meno, an aristocrat in Ancient Greece, that a young slave knows more about geometry than he could have learned from experience. By extension, Plato's Problem refers to any gap between experience and knowledge. How children fill in the gap in the case of language continues to be the subject of much controversy in cognitive science. Any model of language acquisition must address three factors, inter alia: 1. The knowledge children accrue; 2. The input children receive (often called the primary linguistic data); 3. The nonlinguistic capacities of children to form and test generalizations based on the input. According to the famous linguist Noam Chomsky, the main task of linguistics is to explain how children bridge the gap-Chomsky calls it a 'chasm'-between what they come to know about language, and what they could have learned from experience, even given optimistic assumptions about their cognitive abilities. Proponents of the alternative 'nurture' approach accuse nativists like Chomsky of overestimating the complexity of what children learn, underestimating the data children have to work with, and manifesting undue pessimism about children's abilities to extract information based on the input. The modern 'nurture' approach is often referred to as the usage-based account. We discuss the usage-based account first, and then the nativist account. After that, we report and discuss the findings of several

  3. The Nexus task-parallel runtime system

    SciTech Connect

    Foster, I.; Tuecke, S.; Kesselman, C.

    1994-12-31

    A runtime system provides a parallel language compiler with an interface to the low-level facilities required to support interaction between concurrently executing program components. Nexus is a portable runtime system for task-parallel programming languages. Distinguishing features of Nexus include its support for multiple threads of control, dynamic processor acquisition, dynamic address space creation, a global memory model via interprocessor references, and asynchronous events. In addition, it supports heterogeneity at multiple levels, allowing a single computation to utilize different programming languages, executables, processors, and network protocols. Nexus is currently being used as a compiler target for two task-parallel languages: Fortran M and Compositional C++. In this paper, we present the Nexus design, outline techniques used to implement Nexus on parallel computers, show how it is used in compilers, and compare its performance with that of another runtime system.

  4. MPP Parallel FORTH

    NASA Technical Reports Server (NTRS)

    Dorband, John E.

    1987-01-01

    Massively Parallel Processor (MPP) Parallel FORTH is a derivative of FORTH-83 and Unified Software Systems' Uni-FORTH. The extension of FORTH into the realm of parallel processing on the MPP is described. With few exceptions, Parallel FORTH was made to follow the description of Uni-FORTH as closely as possible. Likewise, the parallel FORTH extensions were designed to be as philosophically similar to serial FORTH as possible. The MPP hardware characteristics, as viewed by the FORTH programmer, are discussed. A description is then presented of how Parallel FORTH is implemented on the MPP.

  5. Investigating Second Language Acquisition.

    ERIC Educational Resources Information Center

    Jordens, Peter, Ed.; Lalleman, Josine, Ed.

    Essays in second language acquisition include: "The State of the Art in Second Language Acquisition Research" (Josine Lalleman); "Crosslinguistic Influence with Special Reference to the Acquisition of Grammar" (Michael Sharwood Smith); "Second Language Acquisition by Adult Immigrants: A Multiple Case Study of Turkish and Moroccan Learners of…

  6. Parallel chemistry in the 21st century.

    PubMed

    Long, Alan

    2012-09-01

    The tool chest of techniques, methodologies, and equipment for conducting parallel chemistry is larger than ever before. Improvements in the laboratory and developments in computational chemistry have enabled compound library design at the desks of medicinal chemists. This unit includes a brief background in combinatorial/parallel synthesis chemistry, along with a discussion of evolving technologies for both solid- and solution-phase chemistry. In addition, there are discussions on designing compound libraries, acquisition/procurement of compounds and/or reagents, the chemistry and equipment used for chemical production, purification, sample handling, and data analysis.

  7. Reducing acquisition time in clinical MRI by data undersampling and compressed sensing reconstruction

    NASA Astrophysics Data System (ADS)

    Hollingsworth, Kieren Grant

    2015-11-01

    MRI is often the most sensitive or appropriate technique for important measurements in clinical diagnosis and research, but lengthy acquisition times limit its use due to cost and considerations of patient comfort and compliance. Once an image field of view and resolution are chosen, the minimum scan acquisition time is normally fixed by the amount of raw data that must be acquired to meet the Nyquist criterion. Recently, there has been research interest in using the theory of compressed sensing (CS) in MR imaging to reduce scan acquisition times. The theory argues that if our target MR image is sparse, having signal information in only a small proportion of pixels (like an angiogram), or if the image can be mathematically transformed to be sparse, then it is possible to use that sparsity to recover a high definition image from substantially less acquired data. This review starts by considering methods of k-space undersampling which have already been incorporated into routine clinical imaging (partial Fourier imaging and parallel imaging), and then explains the basis of using compressed sensing in MRI. The practical considerations of applying CS to MRI acquisitions are discussed, such as designing k-space undersampling schemes, optimizing adjustable parameters in reconstructions and exploiting the power of combined compressed sensing and parallel imaging (CS-PI). A selection of clinical applications that have used CS and CS-PI prospectively are considered. The review ends by signposting other imaging acceleration techniques under present development before concluding with a consideration of the potential impact and obstacles to bringing compressed sensing into routine use in clinical MRI.
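
    As a minimal sketch of the reconstruction idea, the following NumPy code solves the simplest CS problem, assuming the image itself is sparse (as in an angiogram) so no sparsifying transform is needed; practical CS-MRI would substitute a wavelet or total-variation term, a variable-density sampling mask, and coil sensitivities for CS-PI:

        import numpy as np

        def soft(z, t):
            # complex soft thresholding: the proximal step for the l1 penalty
            mag = np.abs(z)
            return np.where(mag > t, (1.0 - t / np.maximum(mag, 1e-12)) * z, 0.0)

        def cs_recon(y, mask, lam=0.01, iters=200):
            # ISTA for  min_x  lam*||x||_1 + 0.5*||mask * F(x) - y||^2,
            # with F the orthonormal 2-D FFT and mask the sampled k-space locations
            x = np.fft.ifft2(y, norm="ortho")               # zero-filled start
            for _ in range(iters):
                resid = mask * np.fft.fft2(x, norm="ortho") - y
                x = soft(x - np.fft.ifft2(resid, norm="ortho"), lam)
            return x

    Because the undersampled orthonormal Fourier operator has unit spectral norm, the fixed step size of one is safe; the iteration count and lam are the adjustable reconstruction parameters the review refers to.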

  8. Parallel flow diffusion battery

    DOEpatents

    Yeh, H.C.; Cheng, Y.S.

    1984-01-01

    A parallel flow diffusion battery for determining the mass distribution of an aerosol has a plurality of diffusion cells mounted in parallel to an aerosol stream, each diffusion cell including a stack of mesh wire screens of different density.

  9. Parallel flow diffusion battery

    DOEpatents

    Yeh, Hsu-Chi; Cheng, Yung-Sung

    1984-08-07

    A parallel flow diffusion battery for determining the mass distribution of an aerosol has a plurality of diffusion cells mounted in parallel to an aerosol stream, each diffusion cell including a stack of mesh wire screens of different density.

  10. Parallel simulation today

    NASA Technical Reports Server (NTRS)

    Nicol, David; Fujimoto, Richard

    1992-01-01

    This paper surveys topics that presently define the state of the art in parallel simulation. Included in the tutorial are discussions on new protocols, mathematical performance analysis, time parallelism, hardware support for parallel simulation, load balancing algorithms, and dynamic memory management for optimistic synchronization.

  11. Verbal and Visual Parallelism

    ERIC Educational Resources Information Center

    Fahnestock, Jeanne

    2003-01-01

    This study investigates the practice of presenting multiple supporting examples in parallel form. The elements of parallelism and its use in argument were first illustrated by Aristotle. Although real texts may depart from the ideal form for presenting multiple examples, rhetorical theory offers a rationale for minimal, parallel presentation. The…

  12. Color Vision Deficits and Literacy Acquisition.

    ERIC Educational Resources Information Center

    Hurley, Sandra Rollins

    1994-01-01

    Shows that color blindness, whether partial or total, inhibits literacy acquisition. Offers a case study of a third grader with impaired color vision. Presents a review of literature on the topic. Notes that people with color vision deficits are often unaware of the handicap. (RS)

  13. Acquisition of Three Word Knowledge Aspects through Reading

    ERIC Educational Resources Information Center

    Daskalovska, Nina

    2016-01-01

    A number of studies have shown that second or foreign language learners can acquire vocabulary through reading. The aim of the study was to investigate (a) the effects of reading an authentic novel on the acquisition of 3 aspects of word knowledge: spelling, meaning, and collocation; (b) the influence of reading on the acquisition of partial and…

  14. Parallel algorithm development

    SciTech Connect

    Adams, T.F.

    1996-06-01

    Rapid changes in parallel computing technology are causing significant changes in the strategies being used for parallel algorithm development. One approach is simply to write computer code in a standard language like FORTRAN 77 with the expectation that the compiler will produce executable code that will run in parallel. The alternatives are: (1) to build explicit message passing directly into the source code; or (2) to write source code without explicit reference to message passing or parallelism, but use a general communications library to provide efficient parallel execution. Application of these strategies is illustrated with examples of codes currently under development.
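
    A minimal sketch of the first alternative, explicit message passing written directly into the source, here using the mpi4py bindings (the partial-sum workload is a stand-in for a real computation):

        from mpi4py import MPI

        comm = MPI.COMM_WORLD
        rank = comm.Get_rank()
        size = comm.Get_size()

        # each process computes its share of the work...
        local = sum(range(rank, 1000, size))
        # ...and an explicit communication call combines the partial results
        total = comm.reduce(local, op=MPI.SUM, root=0)
        if rank == 0:
            print("total =", total)

    The second alternative hides exactly this kind of reduce call behind a general communications library, leaving the source code free of explicit message passing.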

  15. Numerical experiments with a parallel conjugate gradient method

    SciTech Connect

    Oppe, T.C.; Kincaid, D.R.

    1987-04-01

    A parallel version of the conjugate gradient method introduced by Seager is implemented using various Cray multitasking tools. The parallel algorithm is used to solve a model partial differential equation on the unit square for various mesh sizes. Speed-up factors are given, and the effects of bank conflicts are noted. 8 refs., 10 figs.
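
    For reference, a serial NumPy sketch of the conjugate gradient iteration being parallelized (a dense matrix is used only for brevity; the parallel version distributes the matrix-vector product and the two inner products across processors):

        import numpy as np

        def cg(A, b, tol=1e-8, maxit=1000):
            # conjugate gradients for a symmetric positive definite system A x = b
            x = np.zeros_like(b)
            r = b - A @ x                      # residual
            p = r.copy()                       # search direction
            rs = r @ r
            for _ in range(maxit):
                Ap = A @ p
                alpha = rs / (p @ Ap)          # step length along p
                x += alpha * p
                r -= alpha * Ap
                rs_new = r @ r
                if np.sqrt(rs_new) < tol:
                    break
                p = r + (rs_new / rs) * p      # A-conjugate update of the direction
                rs = rs_new
            return x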

  16. Parallel Adaptive Mesh Refinement

    SciTech Connect

    Diachin, L; Hornung, R; Plassmann, P; WIssink, A

    2005-03-04

    As large-scale, parallel computers have become more widely available and numerical models and algorithms have advanced, the range of physical phenomena that can be simulated has expanded dramatically. Many important science and engineering problems exhibit solutions with localized behavior where highly-detailed salient features or large gradients appear in certain regions which are separated by much larger regions where the solution is smooth. Examples include chemically-reacting flows with radiative heat transfer, high Reynolds number flows interacting with solid objects, and combustion problems where the flame front is essentially a two-dimensional sheet occupying a small part of a three-dimensional domain. Modeling such problems numerically requires approximating the governing partial differential equations on a discrete domain, or grid. Grid spacing is an important factor in determining the accuracy and cost of a computation. A fine grid may be needed to resolve key local features while a much coarser grid may suffice elsewhere. Employing a fine grid everywhere may be inefficient at best and, at worst, may make an adequately resolved simulation impractical. Moreover, the location and resolution of fine grid required for an accurate solution is a dynamic property of a problem's transient features and may not be known a priori. Adaptive mesh refinement (AMR) is a technique that can be used with both structured and unstructured meshes to adjust local grid spacing dynamically to capture solution features with an appropriate degree of resolution. Thus, computational resources can be focused where and when they are needed most to efficiently achieve an accurate solution without incurring the cost of a globally-fine grid. Figure 1.1 shows two example computations using AMR; on the left is a structured mesh calculation of an impulsively-sheared contact surface and on the right is the fuselage and volume discretization of an RAH-66 Comanche helicopter [35].
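
    A toy sketch of the flagging step that drives refinement, with a plain gradient threshold standing in for the error estimators and clustering algorithms that production AMR frameworks use:

        import numpy as np

        def flag_for_refinement(u, dx, tol):
            # boolean mask of cells whose gradient magnitude exceeds tol;
            # flagged cells would be covered by a finer patch at the next level
            gy, gx = np.gradient(u, dx)
            return np.hypot(gx, gy) > tol

        # a steep front (a stand-in for, e.g., a flame sheet) flags only the
        # cells near the front, leaving the smooth regions on the coarse grid
        n = 64
        u = np.fromfunction(lambda i, j: np.tanh(20.0 * (i - j) / n), (n, n))
        flags = flag_for_refinement(u, dx=1.0 / n, tol=10.0)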

  17. Visualization and Tracking of Parallel CFD Simulations

    NASA Technical Reports Server (NTRS)

    Vaziri, Arsi; Kremenetsky, Mark

    1995-01-01

    We describe a system for interactive visualization and tracking of a 3-D unsteady computational fluid dynamics (CFD) simulation on a parallel computer. CM/AVS, a distributed, parallel implementation of a visualization environment (AVS) runs on the CM-5 parallel supercomputer. A CFD solver is run as a CM/AVS module on the CM-5. Data communication between the solver, other parallel visualization modules, and a graphics workstation, which is running AVS, is handled by CM/AVS. Partitioning of the visualization task, between CM-5 and the workstation, can be done interactively in the visual programming environment provided by AVS. Flow solver parameters can also be altered by programmable interactive widgets. This system partially removes the requirement of storing large solution files at frequent time steps, a characteristic of the traditional 'simulate → store → visualize' post-processing approach.

  18. Surface acquisition through virtual milling

    NASA Technical Reports Server (NTRS)

    Merriam, Marshal L.

    1993-01-01

    Surface acquisition deals with the reconstruction of three dimensional objects from a set of data points. The most straightforward techniques require human intervention, a time consuming proposition. It is desirable to develop a fully automated alternative. Such a method is proposed in this paper. It makes use of surface measurements obtained from a 3-D laser digitizer - an instrument which provides the (x,y,z) coordinates of surface data points from various viewpoints. These points are assembled into several partial surfaces using a visibility constraint and a 2-D triangulation technique. Reconstruction of the final object requires merging these partial surfaces. This is accomplished through a procedure that emulates milling, a standard machining operation. From a geometrical standpoint the problem reduces to constructing the intersection of two or more non-convex polyhedra.

  19. A high speed buffer for LV data acquisition

    NASA Technical Reports Server (NTRS)

    Cavone, Angelo A.; Sterlina, Patrick S.; Clemmons, James I., Jr.; Meyers, James F.

    1987-01-01

    The laser velocimeter (autocovariance) buffer interface is a data acquisition subsystem designed specifically for the acquisition of data from a laser velocimeter. The subsystem acquires data from up to six laser velocimeter components in parallel, measures the times between successive data points for each of the components, establishes and maintains a coincident condition between any two or three components, and acquires data from other instrumentation systems simultaneously with the laser velocimeter data points. The subsystem is designed to control the entire data acquisition process based on initial setup parameters obtained from a host computer and to be independent of the computer during the acquisition. On completion of the acquisition cycle, the interface transfers the contents of its memory to the host under direction of the host via a single 16-bit parallel DMA channel.

  20. Language Acquisition without an Acquisition Device

    ERIC Educational Resources Information Center

    O'Grady, William

    2012-01-01

    Most explanatory work on first and second language learning assumes the primacy of the acquisition phenomenon itself, and a good deal of work has been devoted to the search for an "acquisition device" that is specific to humans, and perhaps even to language. I will consider the possibility that this strategy is misguided and that language…

  1. Survival of the Partial Reinforcement Extinction Effect after Contextual Shifts

    ERIC Educational Resources Information Center

    Boughner, Robert L.; Papini, Mauricio R.

    2006-01-01

    The effects of contextual shifts on the partial reinforcement extinction effect (PREE) were studied in autoshaping with rats. Experiment 1 established that the two contexts used subsequently were easily discriminable and equally salient. In Experiment 2, independent groups of rats received acquisition training under partial reinforcement (PRF) or…

  2. 48 CFR 219.502-3 - Partial set-asides.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 48 Federal Acquisition Regulations System 3 2010-10-01 2010-10-01 false Partial set-asides. 219..., DEPARTMENT OF DEFENSE SOCIOECONOMIC PROGRAMS SMALL BUSINESS PROGRAMS Set-Asides for Small Business 219.502-3 Partial set-asides. (c)(1) If the North American Industry Classification System Industry Subsector of...

  3. Correction for Eddy Current-Induced Echo-Shifting Effect in Partial-Fourier Diffusion Tensor Imaging.

    PubMed

    Truong, Trong-Kha; Song, Allen W; Chen, Nan-Kuei

    2015-01-01

    In most diffusion tensor imaging (DTI) studies, images are acquired with either a partial-Fourier or a parallel partial-Fourier echo-planar imaging (EPI) sequence, in order to shorten the echo time and increase the signal-to-noise ratio (SNR). However, eddy currents induced by the diffusion-sensitizing gradients can often lead to a shift of the echo in k-space, resulting in three distinct types of artifacts in partial-Fourier DTI. Here, we present an improved DTI acquisition and reconstruction scheme, capable of generating high-quality and high-SNR DTI data without eddy current-induced artifacts. This new scheme consists of three components, respectively, addressing the three distinct types of artifacts. First, a k-space energy-anchored DTI sequence is designed to recover eddy current-induced signal loss (i.e., Type 1 artifact). Second, a multischeme partial-Fourier reconstruction is used to eliminate artificial signal elevation (i.e., Type 2 artifact) associated with the conventional partial-Fourier reconstruction. Third, a signal intensity correction is applied to remove artificial signal modulations due to eddy current-induced erroneous T2*-weighting (i.e., Type 3 artifact). These systematic improvements will greatly increase the consistency and accuracy of DTI measurements, expanding the utility of DTI in translational applications where quantitative robustness is much needed.
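
    A 1-D toy illustration (not the authors' scheme) of the mechanism behind the Type 1 artifact: once an eddy-current shift pushes the echo peak outside the acquired partial-Fourier window, its energy is never sampled and a zero-filled reconstruction loses signal:

        import numpy as np

        def partial_fourier_zerofill(k_acq, n_full):
            # zero-filled reconstruction from the first len(k_acq) of n_full samples
            k = np.zeros(n_full, dtype=complex)
            k[: len(k_acq)] = k_acq
            return np.fft.ifft(np.fft.ifftshift(k))

        n, n_acq = 256, 160                        # 5/8 partial-Fourier coverage
        obj = np.zeros(n); obj[100:156] = 1.0      # simple 1-D object
        k_full = np.fft.fftshift(np.fft.fft(obj))  # echo centered at sample 128
        k_shifted = np.roll(k_full, 40)            # hypothetical eddy-current shift
        img = partial_fourier_zerofill(k_shifted[:n_acq], n)
        # the echo center now sits at sample 168, beyond the 160-sample window,
        # so np.abs(img) is strongly attenuated relative to obj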

  5. What is the possible contribution of Ca2+-stimulated adenylate cyclase to acquisition, consolidation and retention of an associative olfactory memory in Drosophila.

    PubMed

    Dudai, Y; Corfas, G; Hazvi, S

    1988-01-01

    We have quantitatively analyzed the effect of the mutation rut, which lesions a Ca2+-stimulated subpopulation (or functional state) of adenylate cyclase, on acquisition, consolidation and retention of an olfactory associative memory in Drosophila. The classical conditioning paradigm developed by Tully and Quinn (1985) was employed. Our data indicate that rut reduces acquisition and short-term memory in this paradigm, yet does not abolish consolidation of residual memory into an anesthesia-resistant form. Assuming that the rut behavioral defect is not due to altered neuroanatomy, the data also suggest that the adenylate cyclase activity lesioned by rut is only one of the molecular processes required for acquisition and short-term memory. These different postulated processes seem to act in parallel but are probably recruited sequentially; the mechanism involving the rut+ gene product is necessary for response prior to other mechanisms which do not require rut+. It is also suggested, on the basis of the present results combined with previous data, that processes which do not require Ca2+-activated cyclase cannot fulfill the partial role of this enzyme during acquisition but can partially compensate for its absence in later phases of memory formation. PMID:3127581

  6. 48 CFR 52.219-7 - Notice of Partial Small Business Set-Aside.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... Business Set-Aside. 52.219-7 Section 52.219-7 Federal Acquisition Regulations System FEDERAL ACQUISITION... Clauses 52.219-7 Notice of Partial Small Business Set-Aside. As prescribed in 19.508(d), insert the following clause: Notice of Partial Small Business Set-Aside (JUN 2003) (a) Definitions. Small...

  7. Parallel digital forensics infrastructure.

    SciTech Connect

    Liebrock, Lorie M.; Duggan, David Patrick

    2009-10-01

    This report documents the architecture and implementation of a Parallel Digital Forensics (PDF) infrastructure. This infrastructure is necessary for supporting the design, implementation, and testing of new classes of parallel digital forensics tools. Digital forensics has become extremely difficult with data sets of one terabyte and larger. The only way to overcome the processing time of these large sets is to identify and develop new parallel algorithms for performing the analysis. To support algorithm research, a flexible base infrastructure is required. A candidate architecture for this base infrastructure was designed, instantiated, and tested by this project, in collaboration with New Mexico Tech. Previous infrastructures were not designed and built specifically for the development and testing of parallel algorithms. With the size of forensics data sets only expected to increase significantly, this type of infrastructure support is necessary for continued research in parallel digital forensics.

  8. 48 CFR 49.603-2 - Fixed-price contracts-partial termination.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 48 Federal Acquisition Regulations System 1 2013-10-01 2013-10-01 false Fixed-price contracts-partial termination. 49.603-2 Section 49.603-2 Federal Acquisition Regulations System FEDERAL ACQUISITION REGULATION CONTRACT MANAGEMENT TERMINATION OF CONTRACTS Contract Termination Forms and Formats 49.603-2...

  9. 48 CFR 49.603-2 - Fixed-price contracts-partial termination.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 48 Federal Acquisition Regulations System 1 2011-10-01 2011-10-01 false Fixed-price contracts-partial termination. 49.603-2 Section 49.603-2 Federal Acquisition Regulations System FEDERAL ACQUISITION REGULATION CONTRACT MANAGEMENT TERMINATION OF CONTRACTS Contract Termination Forms and Formats 49.603-2...

  10. The HyperCP data acquisition system

    SciTech Connect

    Kaplan, D.M.; E871 Collaboration

    1997-06-01

    For the HyperCP experiment at Fermilab, we have assembled a data acquisition system that records on up to 45 Exabyte 8505 tape drives in parallel at up to 17 MB/s. During the beam spill, data are acquired from the front-end digitization systems at ≈60 MB/s via five parallel data paths. The front-end systems achieve typical readout deadtime of ≈1 µs per event, allowing operation at 75-kHz trigger rate with ≲30% deadtime. Event building and tapewriting are handled by 15 Motorola MVME167 processors in 5 VME crates.

  11. PCLIPS: Parallel CLIPS

    NASA Technical Reports Server (NTRS)

    Hall, Lawrence O.; Bennett, Bonnie H.; Tello, Ivan

    1994-01-01

    A parallel version of CLIPS 5.1 has been developed to run on Intel Hypercubes. The user interface is the same as that for CLIPS with some added commands to allow for parallel calls. A complete version of CLIPS runs on each node of the hypercube. The system has been instrumented to display the time spent in the match, recognize, and act cycles on each node. Only rule-level parallelism is supported. Parallel commands enable the assertion and retraction of facts to/from remote nodes' working memory. Parallel CLIPS was used to implement a knowledge-based command, control, communications, and intelligence (C³I) system to demonstrate the fusion of high-level, disparate sources. We discuss the nature of the information fusion problem, our approach, and implementation. Parallel CLIPS has also been used to run several benchmark parallel knowledge bases such as one to set up a cafeteria. Results from running Parallel CLIPS with parallel knowledge base partitions indicate that significant speed increases, including superlinear in some cases, are possible.

  12. Parallel preconditioning techniques for sparse CG solvers

    SciTech Connect

    Basermann, A.; Reichel, B.; Schelthoff, C.

    1996-12-31

    Conjugate gradient (CG) methods to solve sparse systems of linear equations play an important role in numerical methods for solving discretized partial differential equations. The large size and the condition of many technical or physical applications in this area result in the need for efficient parallelization and preconditioning techniques of the CG method. In particular for very ill-conditioned matrices, sophisticated preconditioners are necessary to obtain both acceptable convergence and accuracy of CG. Here, we investigate variants of polynomial and incomplete Cholesky preconditioners that markedly reduce the iterations of the simply diagonally scaled CG and are shown to be well suited for massively parallel machines.
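
    A NumPy sketch of the diagonally scaled baseline: conjugate gradients with a Jacobi preconditioner, where the two z = d_inv * r lines are the preconditioner application that a polynomial or incomplete Cholesky variant would replace with a polynomial in A or a pair of sparse triangular solves:

        import numpy as np

        def jacobi_pcg(A, b, tol=1e-8, maxit=1000):
            # preconditioned CG with M = diag(A); dense A only for brevity
            d_inv = 1.0 / np.diag(A)
            x = np.zeros_like(b)
            r = b - A @ x
            z = d_inv * r                      # preconditioner application
            p = z.copy()
            rz = r @ z
            for _ in range(maxit):
                Ap = A @ p
                alpha = rz / (p @ Ap)
                x += alpha * p
                r -= alpha * Ap
                if np.linalg.norm(r) < tol:
                    break
                z = d_inv * r                  # preconditioner application
                rz_new = r @ z
                p = z + (rz_new / rz) * p
                rz = rz_new
            return x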

  13. Parallel MRI at microtesla fields.

    PubMed

    Zotev, Vadim S; Volegov, Petr L; Matlashov, Andrei N; Espy, Michelle A; Mosher, John C; Kraus, Robert H

    2008-06-01

    Parallel imaging techniques have been widely used in high-field magnetic resonance imaging (MRI). Multiple receiver coils have been shown to improve image quality and allow accelerated image acquisition. Magnetic resonance imaging at ultra-low fields (ULF MRI) is a new imaging approach that uses SQUID (superconducting quantum interference device) sensors to measure the spatially encoded precession of pre-polarized nuclear spin populations at microtesla-range measurement fields. In this work, parallel imaging at microtesla fields is systematically studied for the first time. A seven-channel SQUID system, designed for both ULF MRI and magnetoencephalography (MEG), is used to acquire 3D images of a human hand, as well as 2D images of a large water phantom. The imaging is performed at 46 μT measurement field with pre-polarization at 40 mT. It is shown how the use of seven channels increases imaging field of view and improves signal-to-noise ratio for the hand images. A simple procedure for approximate correction of concomitant gradient artifacts is described. Noise propagation is analyzed experimentally, and the main source of correlated noise is identified. Accelerated imaging based on one-dimensional undersampling and 1D SENSE (sensitivity encoding) image reconstruction is studied in the case of the 2D phantom. Actual threefold imaging acceleration in comparison to single-average fully encoded Fourier imaging is demonstrated. These results show that parallel imaging methods are efficient in ULF MRI, and that imaging performance of SQUID-based instruments improves substantially as the number of channels is increased.

  14. Parallel MRI at microtesla fields

    NASA Astrophysics Data System (ADS)

    Zotev, Vadim S.; Volegov, Petr L.; Matlashov, Andrei N.; Espy, Michelle A.; Mosher, John C.; Kraus, Robert H.

    2008-06-01

    Parallel imaging techniques have been widely used in high-field magnetic resonance imaging (MRI). Multiple receiver coils have been shown to improve image quality and allow accelerated image acquisition. Magnetic resonance imaging at ultra-low fields (ULF MRI) is a new imaging approach that uses SQUID (superconducting quantum interference device) sensors to measure the spatially encoded precession of pre-polarized nuclear spin populations at microtesla-range measurement fields. In this work, parallel imaging at microtesla fields is systematically studied for the first time. A seven-channel SQUID system, designed for both ULF MRI and magnetoencephalography (MEG), is used to acquire 3D images of a human hand, as well as 2D images of a large water phantom. The imaging is performed at 46 μT measurement field with pre-polarization at 40 mT. It is shown how the use of seven channels increases imaging field of view and improves signal-to-noise ratio for the hand images. A simple procedure for approximate correction of concomitant gradient artifacts is described. Noise propagation is analyzed experimentally, and the main source of correlated noise is identified. Accelerated imaging based on one-dimensional undersampling and 1D SENSE (sensitivity encoding) image reconstruction is studied in the case of the 2D phantom. Actual threefold imaging acceleration in comparison to single-average fully encoded Fourier imaging is demonstrated. These results show that parallel imaging methods are efficient in ULF MRI, and that imaging performance of SQUID-based instruments improves substantially as the number of channels is increased.
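
    A sketch of the unfolding step in 1D SENSE for twofold undersampling: halving the phase-encode lines folds the field of view, and each aliased pixel is a sensitivity-weighted sum of two true pixels, recovered by a small per-pixel least-squares solve (noise-covariance weighting and regularization used in practice are omitted, and the sensitivity maps are assumed given):

        import numpy as np

        def sense_unfold_r2(folded, sens):
            # folded: (ncoils, ny//2, nx) aliased coil images
            # sens:   (ncoils, ny,    nx) coil sensitivity maps
            ncoils, nyh, nx = folded.shape
            out = np.zeros((2 * nyh, nx), dtype=complex)
            for y in range(nyh):
                for x in range(nx):
                    # each coil sees a weighted sum of the two fold-over locations
                    S = np.stack([sens[:, y, x], sens[:, y + nyh, x]], axis=1)
                    rho, *_ = np.linalg.lstsq(S, folded[:, y, x], rcond=None)
                    out[y, x], out[y + nyh, x] = rho
            return out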

  15. Eclipse Parallel Tools Platform

    2005-02-18

    Designing and developing parallel programs is an inherently complex task. Developers must choose from the many parallel architectures and programming paradigms that are available, and face a plethora of tools that are required to execute, debug, and analyze parallel programs in these environments. Few, if any, of these tools provide any degree of integration, or indeed any commonality in their user interfaces at all. This further complicates the parallel developer's task, hampering software engineering practices, and ultimately reducing productivity. One consequence of this complexity is that best practice in parallel application development has not advanced to the same degree as more traditional programming methodologies. The result is that there is currently no open-source, industry-strength platform that provides a highly integrated environment specifically designed for parallel application development. Eclipse is a universal tool-hosting platform that is designed to provide a robust, full-featured, commercial-quality, industry platform for the development of highly integrated tools. It provides a wide range of core services for tool integration that allow tool producers to concentrate on their tool technology rather than on platform-specific issues. The Eclipse Integrated Development Environment is an open-source project that is supported by over 70 organizations, including IBM, Intel and HP. The Eclipse Parallel Tools Platform (PTP) plug-in extends the Eclipse framework by providing support for a rich set of parallel programming languages and paradigms, and a core infrastructure for the integration of a wide variety of parallel tools. The first version of the PTP is a prototype that provides only minimal functionality for parallel tool integration and support for a small number of parallel architectures.

  16. Parallel Lisp simulator

    SciTech Connect

    Weening, J.S.

    1988-05-01

    CSIM is a simulator for parallel Lisp, based on a continuation passing interpreter. It models a shared-memory multiprocessor executing programs written in Common Lisp, extended with several primitives for creating and controlling processes. This paper describes the structure of the simulator, measures its performance, and gives an example of its use with a parallel Lisp program.

  17. Parallel computing works

    SciTech Connect

    Not Available

    1991-10-23

    An account of the Caltech Concurrent Computation Program (C³P), a five year project that focused on answering the question: "Can parallel computers be used to do large-scale scientific computations?" As the title indicates, the question is answered in the affirmative, by implementing numerous scientific applications on real parallel computers and doing computations that produced new scientific results. In the process of doing so, C³P helped design and build several new computers, designed and implemented basic system software, developed algorithms for frequently used mathematical computations on massively parallel machines, devised performance models and measured the performance of many computers, and created a high performance computing facility based exclusively on parallel computers. While the initial focus of C³P was the hypercube architecture developed by C. Seitz, many of the methods developed and lessons learned have been applied successfully on other massively parallel architectures.

  18. Totally parallel multilevel algorithms

    NASA Technical Reports Server (NTRS)

    Frederickson, Paul O.

    1988-01-01

    Four totally parallel algorithms for the solution of a sparse linear system have common characteristics which become quite apparent when they are implemented on a highly parallel hypercube such as the CM2. These four algorithms are Parallel Superconvergent Multigrid (PSMG) of Frederickson and McBryan, Robust Multigrid (RMG) of Hackbusch, the FFT-based Spectral Algorithm, and Parallel Cyclic Reduction. In fact, all four can be formulated as particular cases of the same totally parallel multilevel algorithm, which is referred to as TPMA. In certain cases the spectral radius of TPMA is zero, and it is recognized to be a direct algorithm. In many other cases the spectral radius, although not zero, is small enough that a single iteration per timestep keeps the local error within the required tolerance.

  19. Massively parallel mathematical sieves

    SciTech Connect

    Montry, G.R.

    1989-01-01

    The Sieve of Eratosthenes is a well-known algorithm for finding all prime numbers in a given subset of integers. A parallel version of the Sieve is described that produces computational speedups over 800 on a hypercube with 1,024 processing elements for problems of fixed size. Computational speedups as high as 980 are achieved when the problem size per processor is fixed. The method of parallelization generalizes to other sieves and will be efficient on any ensemble architecture. We investigate two highly parallel sieves using scattered decomposition and compare their performance on a hypercube multiprocessor. A comparison of different parallelization techniques for the sieve illustrates the trade-offs necessary in the design and implementation of massively parallel algorithms for large ensemble computers.
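
    A serial sketch of the decomposition idea: the primes up to sqrt(n) are found once, after which each block of the range is marked independently, so blocks map one-to-one onto processing elements (the scattered decomposition studied in the paper distributes strided elements rather than contiguous blocks):

        import numpy as np

        def base_primes(m):
            # ordinary serial sieve for the small primes up to m
            flags = np.ones(m + 1, dtype=bool); flags[:2] = False
            for p in range(2, int(m ** 0.5) + 1):
                if flags[p]:
                    flags[p * p :: p] = False
            return np.nonzero(flags)[0]

        def sieve_block(lo, hi, base):
            # mark composites in [lo, hi); independent of every other block
            flags = np.ones(hi - lo, dtype=bool)
            for p in base:
                start = max(p * p, ((lo + p - 1) // p) * p)
                flags[start - lo : hi - lo : p] = False
            return (lo + np.nonzero(flags)[0]).tolist()

        n, blk = 10**6, 125000
        base = base_primes(int(n ** 0.5))
        primes = [q for lo in range(2, n + 1, blk)
                    for q in sieve_block(lo, min(lo + blk, n + 1), base)]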

  20. Some fast elliptic solvers on parallel architectures and their complexities

    NASA Technical Reports Server (NTRS)

    Gallopoulos, E.; Saad, Y.

    1989-01-01

    The discretization of separable elliptic partial differential equations leads to linear systems with special block tridiagonal matrices. Several methods are known to solve these systems, the most general of which is the Block Cyclic Reduction (BCR) algorithm which handles equations with nonconstant coefficients. A method was recently proposed to parallelize and vectorize BCR. In this paper, the mapping of BCR on distributed memory architectures is discussed, and its complexity is compared with that of other approaches including the Alternating-Direction method. A fast parallel solver is also described, based on an explicit formula for the solution, which has parallel computational complexity lower than that of parallel BCR.

  1. Some fast elliptic solvers on parallel architectures and their complexities

    NASA Technical Reports Server (NTRS)

    Gallopoulos, E.; Saad, Youcef

    1989-01-01

    The discretization of separable elliptic partial differential equations leads to linear systems with special block tridiagonal matrices. Several methods are known to solve these systems, the most general of which is the Block Cyclic Reduction (BCR) algorithm which handles equations with nonconstant coefficients. A method was recently proposed to parallelize and vectorize BCR. Here, the mapping of BCR on distributed memory architectures is discussed, and its complexity is compared with that of other approaches, including the Alternating-Direction method. A fast parallel solver is also described, based on an explicit formula for the solution, which has parallel computational complexity lower than that of parallel BCR.
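
    A serial NumPy sketch of the scalar analogue of BCR, cyclic reduction for a single tridiagonal system with n = 2^k - 1 unknowns; the block algorithm replaces the scalar divisions below with block solves, and all eliminations within a level are mutually independent, which is where the parallelism comes from:

        import numpy as np

        def cyclic_reduction(a, b, c, d):
            # a: subdiagonal, b: diagonal, c: superdiagonal (a[0] = c[-1] = 0);
            # requires len(b) = 2**k - 1
            n = len(b)
            if n == 1:
                return d / b
            i = np.arange(1, n, 2)                  # unknowns kept at the next level
            alpha, beta = a[i] / b[i - 1], c[i] / b[i + 1]
            x = np.zeros(n)
            x[i] = cyclic_reduction(-alpha * a[i - 1],
                                    b[i] - alpha * c[i - 1] - beta * a[i + 1],
                                    -beta * c[i + 1],
                                    d[i] - alpha * d[i - 1] - beta * d[i + 1])
            j = np.arange(0, n, 2)                  # back-substitute eliminated rows
            xl = np.concatenate(([0.0], x))[j]      # x[j-1], zero past the boundary
            xr = np.concatenate((x, [0.0]))[j + 1]  # x[j+1], zero past the boundary
            x[j] = (d[j] - a[j] * xl - c[j] * xr) / b[j]
            return x

        # e.g. the 1-D Poisson matrix tridiag(-1, 2, -1)
        n = 2**8 - 1
        a = -np.ones(n); b = 2.0 * np.ones(n); c = -np.ones(n)
        a[0] = c[-1] = 0.0
        x = cyclic_reduction(a, b, c, np.ones(n))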

  2. Parallel adaptive wavelet collocation method for PDEs

    SciTech Connect

    Nejadmalayeri, Alireza; Vezolainen, Alexei; Brown-Dymkoski, Eric; Vasilyev, Oleg V.

    2015-10-01

    A parallel adaptive wavelet collocation method for solving a large class of Partial Differential Equations is presented. The parallelization is achieved by developing an asynchronous parallel wavelet transform, which allows one to perform parallel wavelet transform and derivative calculations with only one data synchronization at the highest level of resolution. The data are stored using a tree-like structure with tree roots starting at an a priori defined level of resolution. Both static and dynamic domain partitioning approaches are developed. For the dynamic domain partitioning, trees are considered to be the minimum quanta of data to be migrated between the processes. This allows fully automated and efficient handling of non-simply connected partitioning of a computational domain. Dynamic load balancing is achieved via domain repartitioning during the grid adaptation step and reassigning trees to the appropriate processes to ensure approximately the same number of grid points on each process. The parallel efficiency of the approach is discussed based on parallel adaptive wavelet-based Coherent Vortex Simulations of homogeneous turbulence with linear forcing at effective non-adaptive resolutions up to 2048³ using as many as 2048 CPU cores.

  3. Partial (focal) seizure

    MedlinePlus

    Alternative names: Jacksonian seizure; Seizure - partial (focal); Temporal lobe seizure; Epilepsy - partial seizures. Reference: Abou-Khalil BW, Gallagher MJ, Macdonald RL. Epilepsies. In: Daroff ... Practice. 7th ed. Philadelphia, PA: Elsevier; 2016: chap 101.

  4. Partial tooth gear bearings

    NASA Technical Reports Server (NTRS)

    Vranish, John M. (Inventor)

    2010-01-01

    A partial gear bearing including an upper half, comprising peak partial teeth, and a lower, or bottom, half, comprising valley partial teeth. The upper half also has an integrated roller section between each of the peak partial teeth with a radius equal to the gear pitch radius of the radially outwardly extending peak partial teeth. Conversely, the lower half has an integrated roller section between each of the valley partial teeth with a radius also equal to the gear pitch radius of the peak partial teeth. The valley partial teeth extend radially inwardly from its roller section. The peak and valley partial teeth are exactly out of phase with each other, as are the roller sections of the upper and lower halves. Essentially, the end roller bearing of the typical gear bearing has been integrated into the normal gear tooth pattern.

  5. Bilingual parallel programming

    SciTech Connect

    Foster, I.; Overbeek, R.

    1990-01-01

    Numerous experiments have demonstrated that computationally intensive algorithms support adequate parallelism to exploit the potential of large parallel machines. Yet successful parallel implementations of serious applications are rare. The limiting factor is clearly programming technology. None of the approaches to parallel programming that have been proposed to date -- whether parallelizing compilers, language extensions, or new concurrent languages -- seems to adequately address the central problems of portability, expressiveness, efficiency, and compatibility with existing software. In this paper, we advocate an alternative approach to parallel programming based on what we call bilingual programming. We present evidence that this approach provides an effective solution to parallel programming problems. The key idea in bilingual programming is to construct the upper levels of applications in a high-level language while coding selected low-level components in low-level languages. This approach permits the advantages of a high-level notation (expressiveness, elegance, conciseness) to be obtained without the cost in performance normally associated with high-level approaches. In addition, it provides a natural framework for reusing existing code.

  6. The NAS parallel benchmarks

    NASA Technical Reports Server (NTRS)

    Bailey, David (Editor); Barton, John (Editor); Lasinski, Thomas (Editor); Simon, Horst (Editor)

    1993-01-01

    A new set of benchmarks was developed for the performance evaluation of highly parallel supercomputers. These benchmarks consist of a set of kernels, the 'Parallel Kernels,' and a simulated application benchmark. Together they mimic the computation and data movement characteristics of large scale computational fluid dynamics (CFD) applications. The principal distinguishing feature of these benchmarks is their 'pencil and paper' specification - all details of these benchmarks are specified only algorithmically. In this way many of the difficulties associated with conventional benchmarking approaches on highly parallel systems are avoided.

  7. FTMP data acquisition environment

    NASA Technical Reports Server (NTRS)

    Padilla, Peter A.

    1988-01-01

    The Fault-Tolerant Multi-Processing (FTMP) test-bed data acquisition environment is described. The performance of two data acquisition devices available in the test environment is estimated and compared. These estimated data rates are used as measures of the devices' capabilities. A new data acquisition device was developed and added to the FTMP environment. This path increases the data rate available by approximately a factor of 8, to 379 KW/s, while simplifying the experiment development process.

  8. Streamlined acquisition handbook

    NASA Technical Reports Server (NTRS)

    1990-01-01

    NASA has always placed great emphasis on the acquisition process, recognizing it as among its most important activities. This handbook is intended to facilitate the application of streamlined acquisition procedures. The development of these procedures reflects the efforts of an action group composed of NASA Headquarters and center acquisition professionals. The intent is to accomplish real change in the acquisition process as a result of this effort. An important part of streamlining the acquisition process is a commitment by the people involved in the process to accomplishing acquisition activities quickly and with high quality. Too often we continue to accomplish work in 'the same old way' without considering available alternatives which would require no changes to regulations, approvals from Headquarters, or waivers of required practice. Similarly, we must be sensitive to schedule opportunities throughout the acquisition cycle, not just once the purchase request arrives at the procurement office. Techniques that have been identified as ways of reducing acquisition lead time while maintaining high quality in our acquisition process are presented.

  9. The Parallel Axiom

    ERIC Educational Resources Information Center

    Rogers, Pat

    1972-01-01

    Criteria for a reasonable axiomatic system are discussed. A discussion of the historical attempts to prove the independence of Euclid's parallel postulate introduces non-Euclidean geometries. Poincare's model for a non-Euclidean geometry is defined and analyzed. (LS)

  10. Parallel programming with PCN

    SciTech Connect

    Foster, I.; Tuecke, S.

    1991-12-01

    PCN is a system for developing and executing parallel programs. It comprises a high-level programming language, tools for developing and debugging programs in this language, and interfaces to Fortran and C that allow the reuse of existing code in multilingual parallel programs. Programs developed using PCN are portable across many different workstations, networks, and parallel computers. This document provides all the information required to develop parallel programs with the PCN programming system. It includes both tutorial and reference material. It also presents the basic concepts that underlie PCN, particularly where these are likely to be unfamiliar to the reader, and provides pointers to other documentation on the PCN language, programming techniques, and tools. PCN is in the public domain. The latest version of both the software and this manual can be obtained by anonymous FTP from Argonne National Laboratory in the directory pub/pcn at info.mcs.anl.gov (cf. Appendix A).

  11. Scalable parallel communications

    NASA Technical Reports Server (NTRS)

    Maly, K.; Khanna, S.; Overstreet, C. M.; Mukkamala, R.; Zubair, M.; Sekhar, Y. S.; Foudriat, E. C.

    1992-01-01

    Coarse-grain parallelism in networking (that is, the use of multiple protocol processors running replicated software sending over several physical channels) can be used to provide gigabit communications for a single application. Since parallel network performance is highly dependent on real issues such as hardware properties (e.g., memory speeds and cache hit rates), operating system overhead (e.g., interrupt handling), and protocol performance (e.g., effect of timeouts), we have performed detailed simulation studies of both a bus-based multiprocessor workstation node (based on the Sun Galaxy MP multiprocessor) and a distributed-memory parallel computer node (based on the Touchstone DELTA) to evaluate the behavior of coarse-grain parallelism. Our results indicate: (1) coarse-grain parallelism can deliver multiple 100 Mbps with currently available hardware platforms and existing networking protocols (such as Transmission Control Protocol/Internet Protocol (TCP/IP) and parallel Fiber Distributed Data Interface (FDDI) rings); (2) scale-up is near linear in n, the number of protocol processors, and channels (for small n and up to a few hundred Mbps); and (3) since these results are based on existing hardware without specialized devices (except perhaps for some simple modifications of the FDDI boards), this is a low cost solution to providing multiple 100 Mbps on current machines. In addition, from both the performance analysis and the properties of these architectures, we conclude: (1) multiple processors providing identical services and the use of space division multiplexing for the physical channels can provide better reliability than monolithic approaches (it also provides graceful degradation and low-cost load balancing); (2) coarse-grain parallelism supports running several transport protocols in parallel to provide different types of service (for example, one TCP handles small messages for many users, other TCPs running in parallel provide high bandwidth

  12. Parallel image compression

    NASA Technical Reports Server (NTRS)

    Reif, John H.

    1987-01-01

    A parallel compression algorithm for the 16,384 processor MPP machine was developed. The serial version of the algorithm can be viewed as a combination of on-line dynamic lossless text compression techniques (which employ simple learning strategies) and vector quantization. These concepts are described. How these concepts are combined to form a new strategy for performing dynamic on-line lossy compression is discussed. Finally, the implementation of this algorithm in a massively parallel fashion on the MPP is discussed.

  13. Artificial intelligence in parallel

    SciTech Connect

    Waldrop, M.M.

    1984-08-10

    The current rage in the Artificial Intelligence (AI) community is parallelism: the idea is to build machines with many independent processors doing many things at once. The upshot is that about a dozen parallel machines are now under development for AI alone. As might be expected, the approaches are diverse yet there are a number of fundamental issues in common: granularity, topology, control, and algorithms.

  14. 48 CFR 1852.228-81 - Insurance-Partial Immunity From Tort Liability.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 48 Federal Acquisition Regulations System 6 2012-10-01 2012-10-01 false Insurance-Partial Immunity... Provisions and Clauses 1852.228-81 Insurance—Partial Immunity From Tort Liability. As prescribed in 1828.311-270(c), insert the following clause: Insurance—Partial Immunity From Tort Liability (SEP 2000)...

  15. 48 CFR 1852.228-81 - Insurance-Partial Immunity From Tort Liability.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 48 Federal Acquisition Regulations System 6 2011-10-01 2011-10-01 false Insurance-Partial Immunity... Provisions and Clauses 1852.228-81 Insurance—Partial Immunity From Tort Liability. As prescribed in 1828.311-270(c), insert the following clause: Insurance—Partial Immunity From Tort Liability (SEP 2000)...

  16. 48 CFR 1852.228-81 - Insurance-Partial Immunity From Tort Liability.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 48 Federal Acquisition Regulations System 6 2010-10-01 2010-10-01 true Insurance-Partial Immunity... Provisions and Clauses 1852.228-81 Insurance—Partial Immunity From Tort Liability. As prescribed in 1828.311-270(c), insert the following clause: Insurance—Partial Immunity From Tort Liability (SEP 2000)...

  17. 48 CFR 1852.228-81 - Insurance-Partial Immunity From Tort Liability.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 48 Federal Acquisition Regulations System 6 2014-10-01 2014-10-01 false Insurance-Partial Immunity... Provisions and Clauses 1852.228-81 Insurance—Partial Immunity From Tort Liability. As prescribed in 1828.311-270(c), insert the following clause: Insurance—Partial Immunity From Tort Liability (SEP 2000)...

  18. 48 CFR 1852.228-81 - Insurance-Partial Immunity From Tort Liability.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 48 Federal Acquisition Regulations System 6 2013-10-01 2013-10-01 false Insurance-Partial Immunity... Provisions and Clauses 1852.228-81 Insurance—Partial Immunity From Tort Liability. As prescribed in 1828.311-270(c), insert the following clause: Insurance—Partial Immunity From Tort Liability (SEP 2000)...

  19. Continuous parallel coordinates.

    PubMed

    Heinrich, Julian; Weiskopf, Daniel

    2009-01-01

    Typical scientific data is represented on a grid with appropriate interpolation or approximation schemes, defined on a continuous domain. The visualization of such data in parallel coordinates may reveal patterns latently contained in the data and thus can improve the understanding of multidimensional relations. In this paper, we adopt the concept of continuous scatterplots for the visualization of spatially continuous input data to derive a density model for parallel coordinates. Based on the point-line duality between scatterplots and parallel coordinates, we propose a mathematical model that maps density from a continuous scatterplot to parallel coordinates and present different algorithms for both numerical and analytical computation of the resulting density field. In addition, we show how the 2-D model can be used to successively construct continuous parallel coordinates with an arbitrary number of dimensions. Since continuous parallel coordinates interpolate data values within grid cells, a scalable and dense visualization is achieved, which will be demonstrated for typical multi-variate scientific data.
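
    A small sketch of the point-line duality underlying the density model: an n-D point becomes a polyline across the parallel axes, and 2-D points lying on a common line map to segments through a common dual point (the slope m and intercept t below are arbitrary example values):

        import numpy as np

        def to_parallel_coords(points, gap=1.0):
            # point (x_1, ..., x_n) becomes the polyline through (0, x_1),
            # (gap, x_2), ..., ((n-1)*gap, x_n)
            points = np.asarray(points)
            xs = np.arange(points.shape[1]) * gap
            return [np.column_stack((xs, p)) for p in points]

        # duality check: points on the 2-D line x2 = m*x1 + t map to parallel-
        # coordinate lines that all cross at (1/(1-m), t/(1-m)) when m != 1
        m, t = -0.5, 2.0
        pts = [(x1, m * x1 + t) for x1 in np.linspace(0.0, 3.0, 4)]
        polylines = to_parallel_coords(pts)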

  20. Multiple channel data acquisition system

    DOEpatents

    Crawley, H. Bert; Rosenberg, Eli I.; Meyer, W. Thomas; Gorbics, Mark S.; Thomas, William D.; McKay, Roy L.; Homer, Jr., John F.

    1990-05-22

    A multiple channel data acquisition system for the transfer of large amounts of data from a multiplicity of data channels has a plurality of modules which operate in parallel to convert analog signals to digital data and transfer that data to a communications host via a FASTBUS. Each module has a plurality of submodules which include a front end buffer (FEB) connected to input circuitry having an analog to digital converter with cache memory for each of a plurality of channels. The submodules are interfaced with the FASTBUS via a FASTBUS coupler which controls a module bus and a module memory. The system is triggered to effect rapid parallel data samplings which are stored to the cache memories. The cache memories are uploaded to the FEBs during which zero suppression occurs. The data in the FEBs is reformatted and compressed by a local processor during transfer to the module memory. The FASTBUS coupler is used by the communications host to upload the compressed and formatted data from the module memory. The local processor executes programs which are downloaded to the module memory through the FASTBUS coupler.

  1. Multiple channel data acquisition system

    DOEpatents

    Crawley, H.B.; Rosenberg, E.I.; Meyer, W.T.; Gorbics, M.S.; Thomas, W.D.; McKay, R.L.; Homer, J.F. Jr.

    1990-05-22

    A multiple channel data acquisition system for the transfer of large amounts of data from a multiplicity of data channels has a plurality of modules which operate in parallel to convert analog signals to digital data and transfer that data to a communications host via a FASTBUS. Each module has a plurality of submodules which include a front end buffer (FEB) connected to input circuitry having an analog to digital converter with cache memory for each of a plurality of channels. The submodules are interfaced with the FASTBUS via a FASTBUS coupler which controls a module bus and a module memory. The system is triggered to effect rapid parallel data samplings which are stored to the cache memories. The cache memories are uploaded to the FEBs during which zero suppression occurs. The data in the FEBs is reformatted and compressed by a local processor during transfer to the module memory. The FASTBUS coupler is used by the communications host to upload the compressed and formatted data from the module memory. The local processor executes programs which are downloaded to the module memory through the FASTBUS coupler. 25 figs.
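
    A sketch of the zero-suppression step performed while the cache memories are uploaded to the front end buffers, reduced to its essentials (the channel count, pedestal, and threshold below are made-up values):

        import numpy as np

        def zero_suppress(samples, pedestal, threshold):
            # keep only (channel, amplitude) pairs above threshold after pedestal
            # subtraction, compacting a mostly-empty readout before transfer
            amps = samples - pedestal
            hits = np.nonzero(amps > threshold)[0]
            return list(zip(hits.tolist(), amps[hits].tolist()))

        rng = np.random.default_rng(1)
        raw = rng.normal(100.0, 2.0, 1024)   # 1024 channels, pedestal near 100 counts
        raw[[5, 40, 777]] += 50.0            # a few injected hits
        packed = zero_suppress(raw, pedestal=100.0, threshold=10.0)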

  2. Acquisition of teleological descriptions

    NASA Astrophysics Data System (ADS)

    Franke, David W.

    1992-03-01

    Teleological descriptions capture the purpose of an entity, mechanism, or activity with which they are associated. These descriptions can be used in explanation, diagnosis, and design reuse. We describe a technique for acquiring teleological descriptions expressed in the teleology language TeD. Acquisition occurs during design by observing design modifications and design verification. We demonstrate the acquisition technique in an electronic circuit design.

  3. Quick Ride: Acquisition Overview

    NASA Technical Reports Server (NTRS)

    Adams, W. James

    1999-01-01

    Quick Ride is an outgrowth of rapid spacecraft acquisition. It provides a variety of low-cost, short lead time satellite rides for science instruments. Task order contracts with commercial firms will permit placing an order within 30 days. Secondary objectives include a demonstration of a FAR Part 12 commercial acquisition and the exploration of the use of on-ramps.

  4. Virtual environment application with partial gravity simulation

    NASA Technical Reports Server (NTRS)

    Ray, David M.; Vanchau, Michael N.

    1994-01-01

    To support manned missions to the surface of Mars and missions requiring manipulation of payloads and locomotion in space, a training facility is required to simulate the conditions of both partial and microgravity. A partial gravity simulator (Pogo) which uses pneumatic suspension is being studied for use in virtual reality training. Pogo maintains a constant partial gravity simulation with a variation of simulated body force between 2.2 and 10 percent, depending on the type of locomotion inputs. This paper is based on the concept and application of a virtual environment system with Pogo, including a head-mounted display and glove. The reality engine consists of a high-end SGI workstation and PCs which drive Pogo's sensors and the data acquisition hardware used for tracking and control. The tracking system is a hybrid of magnetic and optical trackers integrated for this application.

  5. Parallel State Estimation Assessment with Practical Data

    SciTech Connect

    Chen, Yousu; Jin, Shuangshuang; Rice, Mark J.; Huang, Zhenyu

    2014-10-31

    This paper presents a full-cycle parallel state estimation (PSE) implementation using a preconditioned conjugate gradient algorithm. The developed code is able to solve large-size power system state estimation within 5 seconds using real-world data, comparable to the Supervisory Control And Data Acquisition (SCADA) rate. This achievement allows the operators to know the system status much faster to help improve grid reliability. Case study results of the Bonneville Power Administration (BPA) system with real measurements are presented. The benefits of fast state estimation are also discussed.
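
    A minimal sketch of the preconditioned conjugate gradient iteration at the core of such a solver, here with a simple Jacobi (diagonal) preconditioner and a small toy system; the paper's actual preconditioner and the gain matrix of the state estimator are not reproduced here.

        import numpy as np

        def pcg(A, b, M_inv, tol=1e-8, max_iter=200):
            """Solve A x = b by conjugate gradients with a diagonal preconditioner."""
            x = np.zeros_like(b)
            r = b - A @ x
            z = M_inv * r                  # apply the diagonal preconditioner
            p = z.copy()
            rz = r @ z
            for _ in range(max_iter):
                Ap = A @ p
                alpha = rz / (p @ Ap)
                x += alpha * p
                r -= alpha * Ap
                if np.linalg.norm(r) < tol:
                    break
                z = M_inv * r
                rz_new = r @ z
                p = z + (rz_new / rz) * p
                rz = rz_new
            return x

        # Toy SPD tridiagonal system standing in for the estimator's gain matrix.
        n = 100
        A = (np.diag(2.0 * np.ones(n)) + np.diag(-np.ones(n - 1), 1)
             + np.diag(-np.ones(n - 1), -1))
        b = np.ones(n)
        x = pcg(A, b, M_inv=1.0 / np.diag(A))
        print(np.linalg.norm(A @ x - b))   # residual near zero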

  6. Parallel algorithms for the spectral transform method

    SciTech Connect

    Foster, I.T.; Worley, P.H.

    1994-04-01

    The spectral transform method is a standard numerical technique for solving partial differential equations on a sphere and is widely used in atmospheric circulation models. Recent research has identified several promising algorithms for implementing this method on massively parallel computers; however, no detailed comparison of the different algorithms has previously been attempted. In this paper, we describe these different parallel algorithms and report on computational experiments that we have conducted to evaluate their efficiency on parallel computers. The experiments used a testbed code that solves the nonlinear shallow water equations on a sphere; considerable care was taken to ensure that the experiments provide a fair comparison of the different algorithms and that the results are relevant to global models. We focus on hypercube- and mesh-connected multicomputers with cut-through routing, such as the Intel iPSC/860, DELTA, and Paragon, and the nCUBE/2, but also indicate how the results extend to other parallel computer architectures. The results of this study are relevant not only to the spectral transform method but also to multidimensional FFTs and other parallel transforms.

  7. Parallel algorithms for the spectral transform method

    SciTech Connect

    Foster, I.T.; Worley, P.H.

    1997-05-01

    The spectral transform method is a standard numerical technique for solving partial differential equations on a sphere and is widely used in atmospheric circulation models. Recent research has identified several promising algorithms for implementing this method on massively parallel computers; however, no detailed comparison of the different algorithms has previously been attempted. In this paper, the authors describe these different parallel algorithms and report on computational experiments that they have conducted to evaluate their efficiency on parallel computers. The experiments used a testbed code that solves the nonlinear shallow water equations on a sphere; considerable care was taken to ensure that the experiments provide a fair comparison of the different algorithms and that the results are relevant to global models. The authors focus on hypercube- and mesh-connected multicomputers with cut-through routing, such as the Intel iPSC/860, DELTA, and Paragon, and the nCUBE/2, but they also indicate how the results extend to other parallel computer architectures. The results of this study are relevant not only to the spectral transform method but also to multidimensional fast Fourier transforms (FFTs) and other parallel transforms.
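
    One of the standard algorithm families compared in such studies parallelizes the multidimensional FFT by transposing the data between rounds of purely local 1-D transforms. Below is a serial emulation of that transpose algorithm with arbitrary array sizes; the transpose stands in for the all-to-all communication step between processors.

        import numpy as np

        def transpose_fft2(a):
            a = np.fft.fft(a, axis=1)   # each "processor" transforms its local rows
            a = a.T                     # all-to-all exchange: columns become local rows
            a = np.fft.fft(a, axis=1)   # second round of purely local 1-D FFTs
            return a.T

        x = np.random.default_rng(1).standard_normal((8, 8))
        print(np.allclose(transpose_fft2(x), np.fft.fft2(x)))   # True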

  8. Parallel architectures for iterative methods on adaptive, block structured grids

    NASA Technical Reports Server (NTRS)

    Gannon, D.; Vanrosendale, J.

    1983-01-01

    A parallel computer architecture well suited to the solution of partial differential equations in complicated geometries is proposed. Algorithms for partial differential equations contain a great deal of parallelism, but this parallelism can be difficult to exploit, particularly on complex problems. One approach to extraction of this parallelism is the use of special purpose architectures tuned to a given problem class. The architecture proposed here is tuned to boundary value problems on complex domains. An adaptive elliptic algorithm which maps effectively onto the proposed architecture is considered in detail. Two levels of parallelism are exploited by the proposed architecture. First, by making use of the freedom one has in grid generation, one can construct grids which are locally regular, permitting a one-to-one mapping of grids to systolic-style processor arrays, at least over small regions. All local parallelism can be extracted by this approach. Second, though there may be no regular global structure to the grids constructed, there will still be parallelism at this level. One approach to finding and exploiting this parallelism is to use an architecture having a number of processor clusters connected by a switching network. The use of such a network creates a highly flexible architecture which automatically configures to the problem being solved.

  9. Parallel time integration software

    SciTech Connect

    2014-07-01

    This package implements an optimal-scaling multigrid solver for the (non)linear systems that arise from the discretization of problems with evolutionary behavior. Typically, solution algorithms for evolution equations are based on a time-marching approach, solving sequentially for one time step after the other. Parallelism in these traditional time-integration techniques is limited to spatial parallelism. However, current trends in computer architectures are leading towards systems with more, but not faster, processors. Therefore, faster compute speeds must come from greater parallelism. One approach to achieving parallelism in time is with multigrid, but extending classical multigrid methods for elliptic operators to this setting is a significant achievement. In this software, we implement a non-intrusive, optimal-scaling time-parallel method based on multigrid reduction techniques. The examples in the package demonstrate optimality of our multigrid-reduction-in-time algorithm (MGRIT) for solving a variety of parabolic equations in two and three spatial dimensions. These examples can also be used to show that MGRIT can achieve significant speedup in comparison to sequential time marching on modern architectures.

  10. Parallel time integration software

    2014-07-01

    This package implements an optimal-scaling multigrid solver for the (non)linear systems that arise from the discretization of problems with evolutionary behavior. Typically, solution algorithms for evolution equations are based on a time-marching approach, solving sequentially for one time step after the other. Parallelism in these traditional time-integration techniques is limited to spatial parallelism. However, current trends in computer architectures are leading towards systems with more, but not faster, processors. Therefore, faster compute speeds must come from greater parallelism. One approach to achieving parallelism in time is with multigrid, but extending classical multigrid methods for elliptic operators to this setting is a significant achievement. In this software, we implement a non-intrusive, optimal-scaling time-parallel method based on multigrid reduction techniques. The examples in the package demonstrate optimality of our multigrid-reduction-in-time algorithm (MGRIT) for solving a variety of parabolic equations in two and three spatial dimensions. These examples can also be used to show that MGRIT can achieve significant speedup in comparison to sequential time marching on modern architectures.
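
    A minimal sketch of the two-level idea behind such solvers (the parareal iteration, which MGRIT generalizes to true multilevel multigrid reduction), applied to the test equation y' = -y. The propagators, step counts, and iteration counts are arbitrary illustrative choices, not the package's defaults.

        import numpy as np

        lam, T, N = -1.0, 4.0, 32
        dt = T / N

        def coarse(y, dt):              # one backward-Euler step (cheap propagator)
            return y / (1.0 - lam * dt)

        def fine(y, dt, substeps=16):   # many smaller implicit steps (expensive propagator)
            h = dt / substeps
            for _ in range(substeps):
                y = y / (1.0 - lam * h)
            return y

        u = np.zeros(N + 1)
        u[0] = 1.0
        for n in range(N):              # initial guess from the coarse propagator alone
            u[n + 1] = coarse(u[n], dt)

        for _ in range(5):              # parareal/MGRIT-style correction iterations
            f = np.array([fine(u[n], dt) for n in range(N)])  # independent; parallel in time
            new = u.copy()
            for n in range(N):          # sequential coarse sweep with fine correction
                new[n + 1] = coarse(new[n], dt) + f[n] - coarse(u[n], dt)
            u = new

        print(abs(u[-1] - np.exp(lam * T)))   # error vs. the exact solution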

  11. Research in Parallel Algorithms and Software for Computational Aerosciences

    NASA Technical Reports Server (NTRS)

    Domel, Neal D.

    1996-01-01

    Phase I is complete for the development of a Computational Fluid Dynamics parallel code with automatic grid generation and adaptation for the Euler analysis of flow over complex geometries. SPLITFLOW, an unstructured Cartesian grid code developed at Lockheed Martin Tactical Aircraft Systems, has been modified for a distributed memory/massively parallel computing environment. The parallel code is operational on an SGI network, Cray J90 and C90 vector machines, SGI Power Challenge, and Cray T3D and IBM SP2 massively parallel machines. Parallel Virtual Machine (PVM) is the message passing protocol for portability to various architectures. A domain decomposition technique was developed which enforces dynamic load balancing to improve solution speed and memory requirements. A host/node algorithm distributes the tasks. The solver parallelizes very well, and scales with the number of processors. Partially parallelized and non-parallelized tasks consume most of the wall clock time in a very fine grain environment. Timing comparisons on a Cray C90 demonstrate that Parallel SPLITFLOW runs 2.4 times faster on 8 processors than its non-parallel counterpart autotasked over 8 processors.

  12. Research in Parallel Algorithms and Software for Computational Aerosciences

    NASA Technical Reports Server (NTRS)

    Domel, Neal D.

    1996-01-01

    Phase 1 is complete for the development of a computational fluid dynamics (CFD) parallel code with automatic grid generation and adaptation for the Euler analysis of flow over complex geometries. SPLITFLOW, an unstructured Cartesian grid code developed at Lockheed Martin Tactical Aircraft Systems, has been modified for a distributed memory/massively parallel computing environment. The parallel code is operational on an SGI network, Cray J90 and C90 vector machines, SGI Power Challenge, and Cray T3D and IBM SP2 massively parallel machines. Parallel Virtual Machine (PVM) is the message passing protocol for portability to various architectures. A domain decomposition technique was developed which enforces dynamic load balancing to improve solution speed and memory requirements. A host/node algorithm distributes the tasks. The solver parallelizes very well, and scales with the number of processors. Partially parallelized and non-parallelized tasks consume most of the wall clock time in a very fine grain environment. Timing comparisons on a Cray C90 demonstrate that Parallel SPLITFLOW runs 2.4 times faster on 8 processors than its non-parallel counterpart autotasked over 8 processors.

  13. Parallelism in System Tools

    SciTech Connect

    Matney, Sr., Kenneth D; Shipman, Galen M

    2010-01-01

    The Cray XT, when employed in conjunction with the Lustre filesystem, has provided the ability to generate huge amounts of data in the form of many files. Typically, this is accommodated by satisfying the requests of large numbers of Lustre clients in parallel. In contrast, a single service node (Lustre client) cannot adequately service such datasets. This means that the use of traditional UNIX tools like cp, tar, et al. (which have no parallel capability) can result in substantial impact to user productivity. For example, to copy a 10 TB dataset from the service node using cp would take about 24 hours, under more or less ideal conditions. During production operation, this could easily extend to 36 hours. In this paper, we introduce the Lustre User Toolkit for Cray XT, developed at the Oak Ridge Leadership Computing Facility (OLCF). We will show that Linux commands, implementing highly parallel I/O algorithms, provide orders of magnitude greater performance, greatly reducing impact to productivity.
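
    A rough sketch of the general approach (not the OLCF toolkit itself): split a file into byte ranges and let independent worker processes copy the ranges concurrently. The file names, chunk sizes, and worker counts below are placeholders.

        import os
        from concurrent.futures import ProcessPoolExecutor

        def copy_range(src, dst, offset, length, bufsize=1 << 20):
            """Copy `length` bytes starting at `offset` from src to dst."""
            with open(src, "rb") as fin, open(dst, "r+b") as fout:
                fin.seek(offset)
                fout.seek(offset)
                remaining = length
                while remaining > 0:
                    buf = fin.read(min(bufsize, remaining))
                    if not buf:          # past end of file on the last chunk
                        break
                    fout.write(buf)
                    remaining -= len(buf)

        def parallel_copy(src, dst, workers=8):
            size = os.path.getsize(src)
            with open(dst, "wb") as f:   # pre-allocate the destination file
                f.truncate(size)
            chunk = (size + workers - 1) // workers
            with ProcessPoolExecutor(max_workers=workers) as pool:
                for i in range(workers):
                    pool.submit(copy_range, src, dst, i * chunk, chunk)

        # parallel_copy("big_dataset.bin", "copy.bin")  # hypothetical file names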

  14. Parallel optical sampler

    DOEpatents

    Tauke-Pedretti, Anna; Skogen, Erik J; Vawter, Gregory A

    2014-05-20

    An optical sampler includes first and second 1×n optical beam splitters splitting an input optical sampling signal and an optical analog input signal into n parallel channels, respectively, a plurality of optical delay elements providing n parallel delayed input optical sampling signals, n photodiodes converting the n parallel optical analog input signals into n respective electrical output signals, and n optical modulators modulating the input optical sampling signal or the optical analog input signal by the respective electrical output signals, and providing n successive optical samples of the optical analog input signal. A plurality of output photodiodes and eADCs convert the n successive optical samples to n successive digital samples. The optical modulator may be a photodiode interconnected Mach-Zehnder Modulator. A method of sampling the optical analog input signal is disclosed.

  15. Resistance to extinction after schedules of partial delay or partial reinforcement in rats with hippocampal lesions.

    PubMed

    Rawlins, J N; Feldon, J; Ursin, H; Gray, J A

    1985-01-01

    Two experimental procedures were employed to establish the reason why hippocampal lesions apparently block the development of tolerance for aversive events in partial reinforcement experiments, but do not do so in partial punishment experiments. Rats were trained to run in a straight alley following hippocampal lesions (HC), cortical control lesions (CC) or sham operations (SO), and resistance to extinction was assessed following differing acquisition conditions. In Experiment 1 a 4-8 min inter-trial interval (ITI) was used. Either every acquisition trial was rewarded immediately (Continuous Reinforcement, CR), or only a randomly selected half of the trials were immediately rewarded, the reward being delayed for thirty seconds on the other trials (Partial Delay, PD). This delay procedure produced increased resistance to extinction in rats in all lesion groups. In Experiment 2 the ITI was reduced to a few seconds, and rats were trained either on a CR schedule, or on a schedule in which only half the trials were rewarded (Partial Reinforcement, PR). This form of partial reinforcement procedure also produced increased resistance to extinction in rats in all lesion groups. It thus appears that hippocampal lesions only prevent the development of resistance to aversive events when the interval between aversive and subsequent appetitive events exceeds some minimum value. PMID:4029302

  16. Parallel programming with Ada

    SciTech Connect

    Kok, J.

    1988-01-01

    To the human programmer, the ease of coding distributed computations depends strongly on the suitability of the programming language employed. With a particular language, it is also important whether the capabilities of one or more parallel architectures can be addressed efficiently by the available language constructs. This paper discusses the possibilities of the high-level language Ada, and in particular of its tasking concept, as a descriptive tool for the design and implementation of numerical and other algorithms that allow parts to execute in parallel. Language tools are explained and their use for common applications is shown. Conclusions are drawn about the usefulness of several Ada concepts.

  17. SPINning parallel systems software.

    SciTech Connect

    Matlin, O.S.; Lusk, E.; McCune, W.

    2002-03-15

    We describe our experiences in using Spin to verify parts of the Multi Purpose Daemon (MPD) parallel process management system. MPD is a distributed collection of processes connected by Unix network sockets. MPD is dynamic: processes and connections among them are created and destroyed as MPD is initialized, runs user processes, recovers from faults, and terminates. This dynamic nature is easily expressible in the Spin/Promela framework but poses performance and scalability challenges. We present here the results of expressing some of the parallel algorithms of MPD and executing both simulation and verification runs with Spin.

  18. Adaptive parallel logic networks

    NASA Technical Reports Server (NTRS)

    Martinez, Tony R.; Vidal, Jacques J.

    1988-01-01

    Adaptive, self-organizing concurrent systems (ASOCS) that combine self-organization with massive parallelism for such applications as adaptive logic devices, robotics, process control, and system malfunction management, are presently discussed. In ASOCS, an adaptive network composed of many simple computing elements operating in combinational and asynchronous fashion is used and problems are specified by presenting if-then rules to the system in the form of Boolean conjunctions. During data processing, which is a different operational phase from adaptation, the network acts as a parallel hardware circuit.

  19. STIS target acquisition

    NASA Technical Reports Server (NTRS)

    Kraemer, Steve; Downes, Ron; Katsanis, Rocio; Crenshaw, Mike; McGrath, Melissa; Robinson, Rich

    1997-01-01

    We describe the STIS autonomous target acquisition capabilities. We also present the results of dedicated tests executed as part of Cycle 7 calibration, following post-launch improvements to the Space Telescope Imaging Spectrograph (STIS) flight software. The residual pointing error from the acquisitions is < 0.5 CCD pixels, which is better than preflight estimates. Execution of peakups shows clear improvement of target centering for slits of width 0.1 arcsec or smaller. These results may be used by Guest Observers in planning target acquisitions for their STIS programs.

  20. Interactive knowledge acquisition tools

    NASA Technical Reports Server (NTRS)

    Dudziak, Martin J.; Feinstein, Jerald L.

    1987-01-01

    The problems of designing practical tools to aid the knowledge engineer and general applications used in performing knowledge acquisition tasks are discussed. A particular approach was developed for the class of knowledge acquisition problem characterized by situations where acquisition and transformation of domain expertise are often bottlenecks in systems development. An explanation is given on how the tool and underlying software engineering principles can be extended to provide a flexible set of tools that allow the application specialist to build highly customized knowledge-based applications.

  1. Acquisition signal transmitter

    NASA Technical Reports Server (NTRS)

    Friedman, Morton L. (Inventor)

    1989-01-01

    An encoded information transmitter which transmits a radio frequency carrier that is amplitude modulated by a constant frequency waveform and thereafter amplitude modulated by a predetermined encoded waveform, the constant frequency waveform modulated carrier constituting an acquisition signal and the encoded waveform modulated carrier constituting an information bearing signal, the acquisition signal providing enhanced signal acquisition and interference rejection favoring the information bearing signal. One specific application for this transmitter is as a distress transmitter where a conventional, legislated audio tone modulated signal is transmitted followed first by the acquisition signal and then the information bearing signal, the information bearing signal being encoded with, among other things, vehicle identification data. The acquisition signal enables a receiver to acquire the information bearing signal where the received signal is low and/or where the received signal has a low signal-to-noise ratio in an environment where there are multiple signals in the same frequency band as the information bearing signal.

  2. Defense acquisition programs

    SciTech Connect

    Not Available

    1990-06-01

    The continuing instability in the overall defense budget and the recent changes in Eastern Europe are forcing DOD and the military services to reexamine the need, priority, and annual funding levels for many weapon system acquisition programs. GAO reviewed six weapon system acquisition programs that DOD was scheduled to make an acquisition milestone decision on during fiscal year 1991. Under milestone authorization, up to five years funding can be approved to cover the entire acquisition phase for either full-scale development or full-rate production. This report examines the Non-Line-of-Sight Missile, the Light Helicopter, the MK-50 Torpedo, the Sensor Fuzed Weapon, the Advanced Tactical Fighter, and the Joint Tactical Information Distribution System Class 2 Terminals.

  3. Documentation and knowledge acquisition

    NASA Technical Reports Server (NTRS)

    Rochowiak, Daniel; Moseley, Warren

    1990-01-01

    Traditional approaches to knowledge acquisition have focused on interviews. An alternative focuses on the documentation associated with a domain. Adopting a documentation approach provides some advantages during familiarization. A knowledge management tool was constructed to gain these advantages.

  4. Airborne data acquisition techniques

    SciTech Connect

    Arro, A.A.

    1980-01-01

    The introduction of standards on acceptable procedures for assessing building heat loss has created a dilemma for the contractor performing airborne thermographic surveys. These standards impose specifications on instrumentation, data acquisition, recording, interpretation, and presentation. Under the standard, the contractor has both the obligation of compliance and the requirement of offering his services at a reasonable price. This paper discusses the various aspects of data acquisition for airborne thermographic surveys and techniques to reduce the costs of this operation. These techniques include the calculation of flight parameters for economical data acquisition, the selection and use of maps for mission planning, and the use of meteorological forecasts for flight scheduling and the actual execution of the mission. The proper consideration of these factors will result in cost-effective data acquisition and will place the contractor in a very competitive position in offering airborne thermographic survey services.

  5. Data acquisition system

    DOEpatents

    Shapiro, Stephen L.; Mani, Sudhindra; Atlas, Eugene L.; Cords, Dieter H. W.; Holbrook, Britt

    1997-01-01

    A data acquisition circuit for a particle detection system that allows for time tagging of particles detected by the system. The particle detection system screens out background noise and discriminates between hits from scattered and unscattered particles. The detection system can also be adapted to detect a wide variety of particle types. The detection system utilizes a particle detection pixel array, each pixel containing a back-biased PIN diode, and a data acquisition pixel array. Each pixel in the particle detection pixel array is in electrical contact with a pixel in the data acquisition pixel array. In response to a particle hit, the affected PIN diodes generate a current, which is detected by the corresponding data acquisition pixels. This current is integrated to produce a voltage across a capacitor, the voltage being related to the amount of energy deposited in the pixel by the particle. The current is also used to trigger a read of the pixel hit by the particle.

  6. Parallel Total Energy

    2004-10-21

    This is a total energy electronic structure code using the Local Density Approximation (LDA) of density functional theory. It uses plane waves as the wave function basis set. It can use both norm-conserving pseudopotentials and ultrasoft pseudopotentials. It can relax the atomic positions according to the total energy. It is a parallel code using MPI.

  7. High performance parallel architectures

    SciTech Connect

    Anderson, R.E.

    1989-09-01

    In this paper the author describes current high performance parallel computer architectures. A taxonomy is presented to show computer architecture from the user programmer's point-of-view. The effects of the taxonomy upon the programming model are described. Some current architectures are described with respect to the taxonomy. Finally, some predictions about future systems are presented. 5 refs., 1 fig.

  8. Parallel Multigrid Equation Solver

    2001-09-07

    Prometheus is a fully parallel multigrid equation solver for matrices that arise in unstructured grid finite element applications. It includes a geometric and an algebraic multigrid method and has solved problems of up to 76 million degrees of freedom, problems in linear elasticity on the ASCI Blue Pacific and ASCI Red machines.

  9. Optical parallel selectionist systems

    NASA Astrophysics Data System (ADS)

    Caulfield, H. John

    1993-01-01

    There are at least two major classes of computers in nature and technology: connectionist and selectionist. A subset of connectionist systems (Turing Machines) dominates modern computing, although another subset (Neural Networks) is growing rapidly. Selectionist machines have unique capabilities which should allow them to do truly creative operations. It is possible to make a parallel optical selectionist system using methods described in this paper.

  10. Optimizing parallel reduction operations

    SciTech Connect

    Denton, S.M.

    1995-06-01

    A parallel program consists of sets of concurrent and sequential tasks. Often, a reduction (such as array sum) sequentially combines values produced by a parallel computation. Because reductions occur so frequently in otherwise parallel programs, they are good candidates for optimization. Since reductions may introduce dependencies, most languages separate computation and reduction. The Sisal functional language is unique in that reduction is a natural consequence of loop expressions; the parallelism is implicit in the language. Unfortunately, the original language supports only seven reduction operations. To generalize these expressions, the Sisal 90 definition adds user-defined reductions at the language level. Applicable optimizations depend upon the mathematical properties of the reduction. Compilation and execution speed, synchronization overhead, memory use and maximum size influence the final implementation. This paper (1) Defines reduction syntax and compares with traditional concurrent methods; (2) Defines classes of reduction operations; (3) Develops analysis of classes for optimized concurrency; (4) Incorporates reductions into Sisal 1.2 and Sisal 90; (5) Evaluates performance and size of the implementations.
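
    A minimal sketch of the idea behind user-defined parallel reductions: provided the operation is associative, per-worker partial results can be computed concurrently and combined in any grouping. This Python illustration is not Sisal, and the chunking scheme is an arbitrary choice.

        from concurrent.futures import ProcessPoolExecutor
        from functools import reduce

        def reduce_parallel(op, values, workers=4):
            """Reduce `values` with associative `op`, one chunk per worker."""
            chunk = (len(values) + workers - 1) // workers
            chunks = [values[i:i + chunk] for i in range(0, len(values), chunk)]
            with ProcessPoolExecutor(max_workers=workers) as pool:
                partials = list(pool.map(reduce, [op] * len(chunks), chunks))
            return reduce(op, partials)          # combine the per-worker results

        if __name__ == "__main__":
            import operator
            print(reduce_parallel(operator.add, list(range(1_000_001))))  # 500000500000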

  11. Parallel fast gauss transform

    SciTech Connect

    Sampath, Rahul S; Sundar, Hari; Veerapaneni, Shravan

    2010-01-01

    We present fast adaptive parallel algorithms to compute the sum of N Gaussians at N points. Direct sequential computation of this sum would take O(N^2) time. The parallel time complexity estimates for our algorithms are O(N/n_p) for uniform point distributions and O((N/n_p) log(N/n_p) + n_p log n_p) for non-uniform distributions using n_p CPUs. We incorporate a plane-wave representation of the Gaussian kernel which permits 'diagonal translation'. We use parallel octrees and a new scheme for translating the plane-waves to efficiently handle non-uniform distributions. Computing the transform to six-digit accuracy at 120 billion points took approximately 140 seconds using 4096 cores on the Jaguar supercomputer. Our implementation is 'kernel-independent' and can handle other 'Gaussian-type' kernels even when an explicit analytic expression for the kernel is not known. These algorithms form a new class of core computational machinery for solving parabolic PDEs on massively parallel architectures.
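
    For reference, here is the brute-force O(N^2) sum that the fast Gauss transform accelerates, g(y_j) = sum_i q_i exp(-|y_j - x_i|^2 / h^2); the bandwidth h and the point sets below are arbitrary.

        import numpy as np

        def direct_gauss_transform(x, y, q, h):
            """Evaluate the Gaussian sum at targets y from sources x with weights q."""
            d2 = ((y[:, None, :] - x[None, :, :]) ** 2).sum(-1)  # pairwise squared distances
            return np.exp(-d2 / h**2) @ q

        rng = np.random.default_rng(2)
        x = rng.random((500, 3))
        y = rng.random((400, 3))
        q = rng.random(500)
        print(direct_gauss_transform(x, y, q, h=0.2).shape)      # (400,)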

  12. Parallel programming with PCN

    SciTech Connect

    Foster, I.; Tuecke, S.

    1993-01-01

    PCN is a system for developing and executing parallel programs. It comprises a high-level programming language, tools for developing and debugging programs in this language, and interfaces to Fortran and C that allow the reuse of existing code in multilingual parallel programs. Programs developed using PCN are portable across many different workstations, networks, and parallel computers. This document provides all the information required to develop parallel programs with the PCN programming system. It includes both tutorial and reference material. It also presents the basic concepts that underlie PCN, particularly where these are likely to be unfamiliar to the reader, and provides pointers to other documentation on the PCN language, programming techniques, and tools. PCN is in the public domain. The latest version of both the software and this manual can be obtained by anonymous ftp from Argonne National Laboratory in the directory pub/pcn at info.mcs.anl.gov (cf. Appendix A). This version of this document describes PCN version 2.0, a major revision of the PCN programming system. It supersedes earlier versions of this report.

  13. Massively parallel processor computer

    NASA Technical Reports Server (NTRS)

    Fung, L. W. (Inventor)

    1983-01-01

    An apparatus for processing multidimensional data with strong spatial characteristics, such as raw image data, characterized by a large number of parallel data streams in an ordered array is described. It comprises a large number (e.g., 16,384 in a 128 x 128 array) of parallel processing elements operating simultaneously and independently on single bit slices of a corresponding array of incoming data streams under control of a single set of instructions. Each of the processing elements comprises a bidirectional data bus in communication with a register for storing single bit slices together with a random access memory unit and associated circuitry, including a binary counter/shift register device, for performing logical and arithmetical computations on the bit slices, and an I/O unit for interfacing the bidirectional data bus with the data stream source. The massively parallel processor architecture enables very high speed processing of large amounts of ordered parallel data, including spatial translation by shifting or sliding of bits vertically or horizontally to neighboring processing elements.

  14. Parallel hierarchical global illumination

    SciTech Connect

    Snell, Q.O.

    1997-10-08

    Solving the global illumination problem is equivalent to determining the intensity of every wavelength of light in all directions at every point in a given scene. The complexity of the problem has led researchers to use approximation methods for solving the problem on serial computers. Rather than using an approximation method, such as backward ray tracing or radiosity, the authors have chosen to solve the Rendering Equation by direct simulation of light transport from the light sources. This paper presents an algorithm that solves the Rendering Equation to any desired accuracy, and can be run in parallel on distributed memory or shared memory computer systems with excellent scaling properties. It appears superior in both speed and physical correctness to recent published methods involving bidirectional ray tracing or hybrid treatments of diffuse and specular surfaces. Like progressive radiosity methods, it dynamically refines the geometry decomposition where required, but does so without the excessive storage requirements for ray histories. The algorithm, called Photon, produces a scene which converges to the global illumination solution. This amounts to a huge task for a 1997-vintage serial computer, but using the power of a parallel supercomputer significantly reduces the time required to generate a solution. Currently, Photon can be run on most parallel environments from a shared memory multiprocessor to a parallel supercomputer, as well as on clusters of heterogeneous workstations.

  15. Parallel hierarchical radiosity rendering

    SciTech Connect

    Carter, M.

    1993-07-01

    In this dissertation, the step-by-step development of a scalable parallel hierarchical radiosity renderer is documented. First, a new look is taken at the traditional radiosity equation, and a new form is presented in which the matrix of linear system coefficients is transformed into a symmetric matrix, thereby simplifying the problem and enabling a new solution technique to be applied. Next, the state-of-the-art hierarchical radiosity methods are examined for their suitability to parallel implementation, and scalability. Significant enhancements are also discovered which both improve their theoretical foundations and improve the images they generate. The resultant hierarchical radiosity algorithm is then examined for sources of parallelism, and for an architectural mapping. Several architectural mappings are discussed. A few key algorithmic changes are suggested during the process of making the algorithm parallel. Next, the performance, efficiency, and scalability of the algorithm are analyzed. The dissertation closes with a discussion of several ideas which have the potential to further enhance the hierarchical radiosity method, or provide an entirely new forum for the application of hierarchical methods.

  16. Parallel Dislocation Simulator

    2006-10-30

    ParaDiS is software capable of simulating the motion, evolution, and interaction of dislocation networks in single crystals using massively parallel computer architectures. The software is capable of outputting the stress-strain response of a single crystal whose plastic deformation is controlled by the dislocation processes.

  17. Switch for serial or parallel communication networks

    DOEpatents

    Crosette, Dario B.

    1994-01-01

    A communication switch apparatus and a method for use in a geographically extensive serial, parallel or hybrid communication network linking a multi-processor or parallel processing system has a very low software processing overhead in order to accommodate random bursts of high density data. Associated with each processor is a communication switch. A data source and a data destination, a sensor suite or robot for example, may also be associated with a switch. The configuration of the switches in the network is coordinated through a master processor node and depends on the operational phase of the multi-processor network: data acquisition, data processing, and data exchange. The master processor node passes information on the state to be assumed by each switch to the processor node associated with the switch. The processor node then operates a series of multi-state switches internal to each communication switch. The communication switch does not parse and interpret communication protocol and message routing information. During a data acquisition phase, the communication switch couples sensors producing data to the processor node associated with the switch, to a downlink destination on the communications network, or to both. It also may couple an uplink data source to its processor node. During the data exchange phase, the switch couples its processor node or an uplink data source to a downlink destination (which may include a processor node or a robot), or couples an uplink source to its processor node and its processor node to a downlink destination.

  18. Parallel asynchronous systems and image processing algorithms

    NASA Technical Reports Server (NTRS)

    Coon, D. D.; Perera, A. G. U.

    1989-01-01

    A new hardware approach to implementation of image processing algorithms is described. The approach is based on silicon devices which would permit an independent analog processing channel to be dedicated to every pixel. A laminar architecture consisting of a stack of planar arrays of the device would form a two-dimensional array processor with a 2-D array of inputs located directly behind a focal plane detector array. A 2-D image data stream would propagate in neuronlike asynchronous pulse coded form through the laminar processor. Such systems would integrate image acquisition and image processing. Acquisition and processing would be performed concurrently as in natural vision systems. The research is aimed at implementation of algorithms, such as the intensity dependent summation algorithm and pyramid processing structures, which are motivated by the operation of natural vision systems. Implementation of natural vision algorithms would benefit from the use of neuronlike information coding and the laminar, 2-D parallel, vision system type architecture. Besides providing a neural network framework for implementation of natural vision algorithms, a 2-D parallel approach could eliminate the serial bottleneck of conventional processing systems. Conversion to serial format would occur only after raw intensity data has been substantially processed. An interesting challenge arises from the fact that the mathematical formulation of natural vision algorithms does not specify the means of implementation, so that hardware implementation poses intriguing questions involving vision science.

  19. Switch for serial or parallel communication networks

    DOEpatents

    Crosette, D.B.

    1994-07-19

    A communication switch apparatus and a method for use in a geographically extensive serial, parallel or hybrid communication network linking a multi-processor or parallel processing system has a very low software processing overhead in order to accommodate random bursts of high density data. Associated with each processor is a communication switch. A data source and a data destination, a sensor suite or robot for example, may also be associated with a switch. The configuration of the switches in the network is coordinated through a master processor node and depends on the operational phase of the multi-processor network: data acquisition, data processing, and data exchange. The master processor node passes information on the state to be assumed by each switch to the processor node associated with the switch. The processor node then operates a series of multi-state switches internal to each communication switch. The communication switch does not parse and interpret communication protocol and message routing information. During a data acquisition phase, the communication switch couples sensors producing data to the processor node associated with the switch, to a downlink destination on the communications network, or to both. It also may couple an uplink data source to its processor node. During the data exchange phase, the switch couples its processor node or an uplink data source to a downlink destination (which may include a processor node or a robot), or couples an uplink source to its processor node and its processor node to a downlink destination. 9 figs.

  20. Adapting implicit methods to parallel processors

    SciTech Connect

    Reeves, L.; McMillin, B.; Okunbor, D.; Riggins, D.

    1994-12-31

    When numerically solving many types of partial differential equations, it is advantageous to use implicit methods because of their better stability and more flexible parameter choice (e.g., larger time steps). However, since implicit methods usually require simultaneous knowledge of the entire computational domain, these methods are difficult to implement directly on distributed memory parallel processors. This leads to infrequent use of implicit methods on parallel/distributed systems. The usual implementation of implicit methods is inefficient due to the nature of parallel systems, where it is common to take the computational domain and distribute the grid points over the processors so as to maintain a relatively even workload per processor. This creates a problem at the locations in the domain where adjacent points are not on the same processor. In order for the values at these points to be calculated, messages have to be exchanged between the corresponding processors. Without special adaptation, this will result in idle processors during part of the computation, and as the number of idle processors increases, the effective speed improvement gained by using a parallel processor decreases.
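
    A serial emulation of the neighbor exchange described above: each subdomain must obtain ghost values from adjacent "processors" before a purely local update can proceed. The 1-D decomposition and the Jacobi-style averaging update are illustrative assumptions, not taken from the paper.

        import numpy as np

        def jacobi_step_distributed(subdomains):
            """One update sweep over a 1-D grid split across several owners."""
            # Exchange phase: each subdomain gathers one ghost cell per neighbor.
            ghosts = []
            for i in range(len(subdomains)):
                left = subdomains[i - 1][-1] if i > 0 else 0.0                   # boundary
                right = subdomains[i + 1][0] if i < len(subdomains) - 1 else 0.0  # boundary
                ghosts.append((left, right))
            # Update phase: purely local once the ghost cells have arrived.
            out = []
            for (left, right), u in zip(ghosts, subdomains):
                padded = np.concatenate(([left], u, [right]))
                out.append(0.5 * (padded[:-2] + padded[2:]))
            return out

        parts = [np.ones(4), np.ones(4), np.ones(4)]
        parts = jacobi_step_distributed(parts)
        print([p.round(2) for p in parts])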

  1. Parallel computers and parallel algorithms for CFD: An introduction

    NASA Astrophysics Data System (ADS)

    Roose, Dirk; Vandriessche, Rafael

    1995-10-01

    This text presents a tutorial on those aspects of parallel computing that are important for the development of efficient parallel algorithms and software for computational fluid dynamics. We first review the main architectural features of parallel computers and we briefly describe some parallel systems on the market today. We introduce some important concepts concerning the development and the performance evaluation of parallel algorithms. We discuss how work load imbalance and communication costs on distributed memory parallel computers can be minimized. We present performance results for some CFD test cases. We focus on applications using structured and block structured grids, but the concepts and techniques are also valid for unstructured grids.

  2. Twisted partially pure spinors

    NASA Astrophysics Data System (ADS)

    Herrera, Rafael; Tellez, Ivan

    2016-08-01

    Motivated by the relationship between orthogonal complex structures and pure spinors, we define twisted partially pure spinors in order to characterize spinorially subspaces of Euclidean space endowed with a complex structure.

  3. Parallel Consensual Neural Networks

    NASA Technical Reports Server (NTRS)

    Benediktsson, J. A.; Sveinsson, J. R.; Ersoy, O. K.; Swain, P. H.

    1993-01-01

    A new neural network architecture is proposed and applied in classification of remote sensing/geographic data from multiple sources. The new architecture is called the parallel consensual neural network and its relation to hierarchical and ensemble neural networks is discussed. The parallel consensual neural network architecture is based on statistical consensus theory. The input data are transformed several times and the different transformed data are applied as if they were independent inputs and are classified using stage neural networks. Finally, the outputs from the stage networks are then weighted and combined to make a decision. Experimental results based on remote sensing data and geographic data are given. The performance of the consensual neural network architecture is compared to that of a two-layer (one hidden layer) conjugate-gradient backpropagation neural network. The results with the proposed neural network architecture compare favorably in terms of classification accuracy to the backpropagation method.

  4. Parallel Subconvolution Filtering Architectures

    NASA Technical Reports Server (NTRS)

    Gray, Andrew A.

    2003-01-01

    These architectures are based on methods of vector processing and the discrete-Fourier-transform/inverse-discrete- Fourier-transform (DFT-IDFT) overlap-and-save method, combined with time-block separation of digital filters into frequency-domain subfilters implemented by use of sub-convolutions. The parallel-processing method implemented in these architectures enables the use of relatively small DFT-IDFT pairs, while filter tap lengths are theoretically unlimited. The size of a DFT-IDFT pair is determined by the desired reduction in processing rate, rather than on the order of the filter that one seeks to implement. The emphasis in this report is on those aspects of the underlying theory and design rules that promote computational efficiency, parallel processing at reduced data rates, and simplification of the designs of very-large-scale integrated (VLSI) circuits needed to implement high-order filters and correlators.
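
    A minimal single-channel implementation of the DFT-IDFT overlap-and-save method that these architectures decompose further into subfilters; the block length and the example filter below are arbitrary choices.

        import numpy as np

        def overlap_save(x, h, nfft=64):
            """FIR-filter x with h using FFT blocks of length nfft (overlap-save)."""
            m = len(h)
            hop = nfft - m + 1                      # valid output samples per block
            H = np.fft.fft(h, nfft)
            x_pad = np.concatenate((np.zeros(m - 1), x, np.zeros(hop)))
            y = []
            for start in range(0, len(x), hop):
                block = x_pad[start:start + nfft]
                if len(block) < nfft:
                    block = np.pad(block, (0, nfft - len(block)))
                yblock = np.fft.ifft(np.fft.fft(block) * H).real
                y.append(yblock[m - 1:])            # discard the circularly wrapped part
            return np.concatenate(y)[:len(x)]

        rng = np.random.default_rng(3)
        x = rng.standard_normal(1000)
        h = np.ones(8) / 8.0                         # simple moving-average filter
        print(np.allclose(overlap_save(x, h), np.convolve(x, h)[:len(x)]))  # True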

  5. Parallel Anisotropic Tetrahedral Adaptation

    NASA Technical Reports Server (NTRS)

    Park, Michael A.; Darmofal, David L.

    2008-01-01

    An adaptive method that robustly produces high aspect ratio tetrahedra to a general 3D metric specification without introducing hybrid semi-structured regions is presented. The elemental operators and higher-level logic are described with their respective domain-decomposed parallelizations. An anisotropic tetrahedral grid adaptation scheme is demonstrated for 1000:1 stretching for a simple cube geometry. This form of adaptation is applicable to more complex domain boundaries via a cut-cell approach as demonstrated by a parallel 3D supersonic simulation of a complex fighter aircraft. To avoid the assumptions and approximations required to form a metric to specify adaptation, an approach is introduced that directly evaluates interpolation error. The grid is adapted to reduce and equidistribute this interpolation error calculation without the use of an intervening anisotropic metric. Direct interpolation error adaptation is illustrated for 1D and 3D domains.

  6. Parallel multilevel preconditioners

    SciTech Connect

    Bramble, J.H.; Pasciak, J.E.; Xu, Jinchao.

    1989-01-01

    In this paper, we shall report on some techniques for the development of preconditioners for the discrete systems which arise in the approximation of solutions to elliptic boundary value problems. Here we shall only state the resulting theorems. It has been demonstrated that preconditioned iteration techniques often lead to the most computationally effective algorithms for the solution of the large algebraic systems corresponding to boundary value problems in two and three dimensional Euclidean space. The use of preconditioned iteration will become even more important on computers with parallel architecture. This paper discusses an approach for developing completely parallel multilevel preconditioners. In order to illustrate the resulting algorithms, we shall describe the simplest application of the technique to a model elliptic problem.

  7. Ultrascalable petaflop parallel supercomputer

    DOEpatents

    Blumrich, Matthias A.; Chen, Dong; Chiu, George; Cipolla, Thomas M.; Coteus, Paul W.; Gara, Alan G.; Giampapa, Mark E.; Hall, Shawn; Haring, Rudolf A.; Heidelberger, Philip; Kopcsay, Gerard V.; Ohmacht, Martin; Salapura, Valentina; Sugavanam, Krishnan; Takken, Todd

    2010-07-20

    A massively parallel supercomputer of petaOPS-scale includes node architectures based upon System-On-a-Chip technology, where each processing node comprises a single Application Specific Integrated Circuit (ASIC) having up to four processing elements. The ASIC nodes are interconnected by multiple independent networks that optimally maximize the throughput of packet communications between nodes with minimal latency. The multiple networks may include three high-speed networks for parallel algorithm message passing including a Torus, collective network, and a Global Asynchronous network that provides global barrier and notification functions. These multiple independent networks may be collaboratively or independently utilized according to the needs or phases of an algorithm for optimizing algorithm processing performance. The use of a DMA engine is provided to facilitate message passing among the nodes without the expenditure of processing resources at the node.

  8. PCLIPS: Parallel CLIPS

    NASA Technical Reports Server (NTRS)

    Gryphon, Coranth D.; Miller, Mark D.

    1991-01-01

    PCLIPS (Parallel CLIPS) is a set of extensions to the C Language Integrated Production System (CLIPS) expert system language. PCLIPS is intended to provide an environment for the development of more complex, extensive expert systems. Multiple CLIPS expert systems are now capable of running simultaneously on separate processors, or separate machines, thus dramatically increasing the scope of solvable tasks within the expert systems. As a tool for parallel processing, PCLIPS allows for an expert system to add to its fact-base information generated by other expert systems, thus allowing systems to assist each other in solving a complex problem. This allows individual expert systems to be more compact and efficient, and thus run faster or on smaller machines.

  9. Homology, convergence and parallelism.

    PubMed

    Ghiselin, Michael T

    2016-01-01

    Homology is a relation of correspondence between parts of parts of larger wholes. It is used when tracking objects of interest through space and time and in the context of explanatory historical narratives. Homologues can be traced through a genealogical nexus back to a common ancestral precursor. Homology being a transitive relation, homologues remain homologous however much they may come to differ. Analogy is a relationship of correspondence between parts of members of classes having no relationship of common ancestry. Although homology is often treated as an alternative to convergence, the latter is not a kind of correspondence: rather, it is one of a class of processes that also includes divergence and parallelism. These often give rise to misleading appearances (homoplasies). Parallelism can be particularly hard to detect, especially when not accompanied by divergences in some parts of the body. PMID:26598721

  10. Collisionless parallel shocks

    NASA Technical Reports Server (NTRS)

    Khabibrakhmanov, I. KH.; Galeev, A. A.; Galinskii, V. L.

    1993-01-01

    Consideration is given to a collisionless parallel shock based on solitary-type solutions of the modified derivative nonlinear Schroedinger equation (MDNLS) for parallel Alfven waves. The standard derivative nonlinear Schroedinger equation is generalized in order to include the possible anisotropy of the plasma distribution and higher-order Korteweg-de Vries-type dispersion. Stationary solutions of MDNLS are discussed. The anisotropic nature of 'adiabatic' reflections leads to the asymmetric particle distribution in the upstream as well as in the downstream regions of the shock. As a result, nonzero heat flux appears near the front of the shock. It is shown that this causes the stochastic behavior of the nonlinear waves, which can significantly contribute to the shock thermalization.

  11. ASSEMBLY OF PARALLEL PLATES

    DOEpatents

    Groh, E.F.; Lennox, D.H.

    1963-04-23

    This invention is concerned with a rigid assembly of parallel plates in which keyways are stamped out along the edges of the plates and a self-retaining key is inserted into aligned keyways. Spacers having similar keyways are included between adjacent plates. The entire assembly is locked into a rigid structure by fastening only the outermost plates to the ends of the keys. (AEC)

  12. Xyce parallel electronic simulator.

    SciTech Connect

    Keiter, Eric R; Mei, Ting; Russo, Thomas V.; Rankin, Eric Lamont; Schiek, Richard Louis; Thornquist, Heidi K.; Fixel, Deborah A.; Coffey, Todd S; Pawlowski, Roger P; Santarelli, Keith R.

    2010-05-01

    This document is a reference guide to the Xyce Parallel Electronic Simulator, and is a companion document to the Xyce Users Guide. The focus of this document is (to the extent possible) exhaustively list device parameters, solver options, parser options, and other usage details of Xyce. This document is not intended to be a tutorial. Users who are new to circuit simulation are better served by the Xyce Users Guide.

  13. 48 CFR 352.234-4 - Partial earned value management system.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 48 Federal Acquisition Regulations System 4 2011-10-01 2011-10-01 false Partial earned value management system. 352.234-4 Section 352.234-4 Federal Acquisition Regulations System HEALTH AND HUMAN SERVICES CLAUSES AND FORMS SOLICITATION PROVISIONS AND CONTRACT CLAUSES Texts of Provisions and Clauses...

  14. Trajectory optimization using parallel shooting method on parallel computer

    SciTech Connect

    Wirthman, D.J.; Park, S.Y.; Vadali, S.R.

    1995-03-01

    The efficiency of a parallel shooting method on a parallel computer for solving a variety of optimal control guidance problems is studied. Several examples are considered to demonstrate that a speedup of nearly 7 to 1 is achieved with the use of 16 processors. It is suggested that further improvements in performance can be achieved by parallelizing in the state domain. 10 refs.

  15. Resistor Combinations for Parallel Circuits.

    ERIC Educational Resources Information Center

    McTernan, James P.

    1978-01-01

    To help simplify both teaching and learning of parallel circuits, a high school electricity/electronics teacher presents and illustrates the use of tables of values for parallel resistive circuits in which total resistances are whole numbers. (MF)
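
    In the spirit of the article's tables, here is a short search for pairs of resistor values whose parallel combination, 1/R_t = 1/R_1 + 1/R_2, is a whole number of ohms; the candidate value list is an arbitrary illustration.

        def parallel_resistance(*rs):
            """Total resistance of resistors connected in parallel."""
            return 1.0 / sum(1.0 / r for r in rs)

        values = [10, 12, 15, 20, 30, 40, 60]
        for i, r1 in enumerate(values):
            for r2 in values[i:]:
                rt = parallel_resistance(r1, r2)
                if abs(rt - round(rt)) < 1e-9:       # keep whole-number totals only
                    print(f"{r1} || {r2} = {round(rt)} ohms")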

  16. The Galley Parallel File System

    NASA Technical Reports Server (NTRS)

    Nieuwejaar, Nils; Kotz, David

    1996-01-01

    As the I/O needs of parallel scientific applications increase, file systems for multiprocessors are being designed to provide applications with parallel access to multiple disks. Many parallel file systems present applications with a conventional Unix-like interface that allows the application to access multiple disks transparently. The interface conceals the parallelism within the file system, which increases the ease of programmability, but makes it difficult or impossible for sophisticated programmers and libraries to use knowledge about their I/O needs to exploit that parallelism. Furthermore, most current parallel file systems are optimized for a different workload than they are being asked to support. We introduce Galley, a new parallel file system that is intended to efficiently support realistic parallel workloads. We discuss Galley's file structure and application interface, as well as an application that has been implemented using that interface.

  17. Asynchronous interpretation of parallel microprograms

    SciTech Connect

    Bandman, O.L.

    1984-03-01

    In this article, the authors demonstrate how to pass from a given synchronous interpretation of a parallel microprogram to an equivalent asynchronous interpretation, and investigate the cost associated with the rejection of external synchronization in parallel microprogram structures.

  18. On Shaft Data Acquisition System (OSDAS)

    NASA Technical Reports Server (NTRS)

    Pedings, Marc; DeHart, Shawn; Formby, Jason; Naumann, Charles

    2012-01-01

    On Shaft Data Acquisition System (OSDAS) is a rugged, compact, multiple-channel data acquisition computer system that is designed to record data from instrumentation while operating under extreme rotational centrifugal or gravitational acceleration forces. This system, which was developed for the Heritage Fuel Air Turbine Test (HFATT) program, addresses the problem of recording multiple channels of high-sample-rate data on most any rotating test article by mounting the entire acquisition computer onboard with the turbine test article. With the limited availability of slip ring wires for power and communication, OSDAS utilizes its own resources to provide independent power and amplification for each instrument. Since OSDAS utilizes standard PC technology as well as shared code interfaces with the next-generation, real-time health monitoring system (SPARTAA Scalable Parallel Architecture for Real Time Analysis and Acquisition), this system could be expanded beyond its current capabilities, such as providing advanced health monitoring capabilities for the test article. High-conductor-count slip rings are expensive to purchase and maintain, yet only provide a limited number of conductors for routing instrumentation off the article and to a stationary data acquisition system. In addition to being limited to a small number of instruments, slip rings are prone to wear quickly, and introduce noise and other undesirable characteristics to the signal data. This led to the development of a system capable of recording high-density instrumentation, at high sample rates, on the test article itself, all while under extreme rotational stress. OSDAS is a fully functional PC-based system with 48 channels of 24-bit, high-sample-rate input channels, phase synchronized, with an onboard storage capacity of over 1/2-terabyte of solid-state storage. This recording system takes a novel approach to the problem of recording multiple channels of instrumentation, integrated with the test article.

  19. Variance Components: Partialled vs. Common.

    ERIC Educational Resources Information Center

    Curtis, Ervin W.

    1985-01-01

    A new approach to partialling components is used. Like conventional partialling, this approach orthogonalizes variables by partitioning the scores or observations. Unlike conventional partialling, it yields a common component and two unique components. (Author/GDC)

  20. Parallel Pascal - An extended Pascal for parallel computers

    NASA Technical Reports Server (NTRS)

    Reeves, A. P.

    1984-01-01

    Parallel Pascal is an extended version of the conventional serial Pascal programming language which includes a convenient syntax for specifying array operations. It is upward compatible with standard Pascal and involves only a small number of carefully chosen new features. Parallel Pascal was developed to reduce the semantic gap between standard Pascal and a large range of highly parallel computers. Two important design goals of Parallel Pascal were efficiency and portability. Portability is particularly difficult to achieve since different parallel computers frequently have very different capabilities.

  1. Parallelized nested sampling

    NASA Astrophysics Data System (ADS)

    Henderson, R. Wesley; Goggans, Paul M.

    2014-12-01

    One of the important advantages of nested sampling as an MCMC technique is its ability to draw representative samples from multimodal distributions and distributions with other degeneracies. This coverage is accomplished by maintaining a number of so-called live samples within a likelihood constraint. In usual practice, at each step, only the sample with the least likelihood is discarded from this set of live samples and replaced. In [1], Skilling shows that for a given number of live samples, discarding only one sample yields the highest precision in estimation of the log-evidence. However, if we increase the number of live samples, more samples can be discarded at once while still maintaining the same precision. For computer code running only serially, this modification would considerably increase the wall clock time necessary to reach convergence. However, if we use a computer with parallel processing capabilities, and we write our code to take advantage of this parallelism to replace multiple samples concurrently, the performance penalty can be eliminated entirely and possibly reversed. In this case, we must use the more general equation in [1] for computing the expectation of the shrinkage distribution: $E[-\log t] = (N_r - r + 1)^{-1} + (N_r - r + 2)^{-1} + \cdots + N_r^{-1}$, for shrinkage $t$ with $N_r$ live samples and $r$ samples discarded at each iteration. The equation for the variance, $\mathrm{Var}(-\log t) = (N_r - r + 1)^{-2} + (N_r - r + 2)^{-2} + \cdots + N_r^{-2}$, is used to find the appropriate number of live samples $N_r$ to use with $r > 1$ to match the variance achieved with $N_1$ live samples and $r = 1$. In this paper, we show that by replacing multiple discarded samples in parallel, we are able to achieve a more thorough sampling of the constrained prior distribution, reduce runtime, and increase precision.
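
    As a worked illustration of these shrinkage formulas, the Python sketch below finds the smallest N_r whose variance does not exceed the serial value Var(-log t) = 1/N_1^2 for several choices of r. The baseline of N_1 = 100 live samples and the linear search are illustrative assumptions, not values from the paper.

    ```python
    # Match the log-evidence variance of the serial case (N_1 live samples,
    # r = 1) when r samples are discarded per iteration, using the shrinkage
    # formulas quoted in the abstract. N_1 = 100 is an arbitrary example.

    def shrinkage_mean(n_live, r):
        """E[-log t] = 1/(N_r - r + 1) + 1/(N_r - r + 2) + ... + 1/N_r."""
        return sum(1.0 / k for k in range(n_live - r + 1, n_live + 1))

    def shrinkage_var(n_live, r):
        """Var(-log t) = 1/(N_r - r + 1)^2 + ... + 1/N_r^2."""
        return sum(1.0 / k ** 2 for k in range(n_live - r + 1, n_live + 1))

    def matching_live_samples(n1, r):
        """Smallest N_r whose variance is no larger than the serial 1/n1^2."""
        target = shrinkage_var(n1, 1)  # = 1 / n1**2
        n_live = r                     # need at least r live samples
        while shrinkage_var(n_live, r) > target:
            n_live += 1
        return n_live

    if __name__ == "__main__":
        n1 = 100
        for r in (1, 2, 4, 8):
            nr = matching_live_samples(n1, r)
            print(f"r = {r}: N_r = {nr}, E[-log t] = {shrinkage_mean(nr, r):.4f}")
    ```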

  2. Methanol partial oxidation reformer

    DOEpatents

    Ahmed, S.; Kumar, R.; Krumpelt, M.

    1999-08-17

    A partial oxidation reformer is described comprising a longitudinally extending chamber having a methanol, water and an air inlet and an outlet. An igniter mechanism is near the inlets for igniting a mixture of methanol and air, while a partial oxidation catalyst in the chamber is spaced from the inlets and converts methanol and oxygen to carbon dioxide and hydrogen. Controlling the oxygen to methanol mole ratio provides continuous slightly exothermic partial oxidation reactions of methanol and air producing hydrogen gas. The liquid is preferably injected in droplets having diameters less than 100 micrometers. The reformer is useful in a propulsion system for a vehicle which supplies a hydrogen-containing gas to the negative electrode of a fuel cell. 7 figs.

  3. Methanol partial oxidation reformer

    DOEpatents

    Ahmed, Shabbir; Kumar, Romesh; Krumpelt, Michael

    1999-01-01

    A partial oxidation reformer comprising a longitudinally extending chamber having a methanol, water and an air inlet and an outlet. An igniter mechanism is near the inlets for igniting a mixture of methanol and air, while a partial oxidation catalyst in the chamber is spaced from the inlets and converts methanol and oxygen to carbon dioxide and hydrogen. Controlling the oxygen to methanol mole ratio provides continuous slightly exothermic partial oxidation reactions of methanol and air producing hydrogen gas. The liquid is preferably injected in droplets having diameters less than 100 micrometers. The reformer is useful in a propulsion system for a vehicle which supplies a hydrogen-containing gas to the negative electrode of a fuel cell.

  4. Methanol partial oxidation reformer

    DOEpatents

    Ahmed, Shabbir; Kumar, Romesh; Krumpelt, Michael

    2001-01-01

    A partial oxidation reformer comprising a longitudinally extending chamber having a methanol, water and an air inlet and an outlet. An igniter mechanism is near the inlets for igniting a mixture of methanol and air, while a partial oxidation catalyst in the chamber is spaced from the inlets and converts methanol and oxygen to carbon dioxide and hydrogen. Controlling the oxygen to methanol mole ratio provides continuous slightly exothermic partial oxidation reactions of methanol and air producing hydrogen gas. The liquid is preferably injected in droplets having diameters less than 100 micrometers. The reformer is useful in a propulsion system for a vehicle which supplies a hydrogen-containing gas to the negative electrode of a fuel cell.

  5. Methanol partial oxidation reformer

    DOEpatents

    Ahmed, S.; Kumar, R.; Krumpelt, M.

    1999-08-24

    A partial oxidation reformer is described comprising a longitudinally extending chamber having a methanol, water and an air inlet and an outlet. An igniter mechanism is near the inlets for igniting a mixture of methanol and air, while a partial oxidation catalyst in the chamber is spaced from the inlets and converts methanol and oxygen to carbon dioxide and hydrogen. Controlling the oxygen to methanol mole ratio provides continuous slightly exothermic partial oxidation reactions of methanol and air producing hydrogen gas. The liquid is preferably injected in droplets having diameters less than 100 micrometers. The reformer is useful in a propulsion system for a vehicle which supplies a hydrogen-containing gas to the negative electrode of a fuel cell. 7 figs.

  6. Oxygen partial pressure sensor

    DOEpatents

    Dees, D.W.

    1994-09-06

    A method for detecting oxygen partial pressure and an oxygen partial pressure sensor are provided. The method for measuring oxygen partial pressure includes contacting oxygen to a solid oxide electrolyte and measuring the subsequent change in electrical conductivity of the solid oxide electrolyte. A solid oxide electrolyte is utilized that contacts both a porous electrode and a nonporous electrode. The electrical conductivity of the solid oxide electrolyte is affected when oxygen from an exhaust stream permeates through the porous electrode to establish an equilibrium of oxygen anions in the electrolyte, thereby displacing electrons throughout the electrolyte to form an electron gradient. By adapting the two electrodes to sense a voltage potential between them, the change in electrolyte conductivity due to oxygen presence can be measured. 1 fig.

  7. Oxygen partial pressure sensor

    DOEpatents

    Dees, Dennis W.

    1994-01-01

    A method for detecting oxygen partial pressure and an oxygen partial pressure sensor are provided. The method for measuring oxygen partial pressure includes contacting oxygen to a solid oxide electrolyte and measuring the subsequent change in electrical conductivity of the solid oxide electrolyte. A solid oxide electrolyte is utilized that contacts both a porous electrode and a nonporous electrode. The electrical conductivity of the solid oxide electrolyte is affected when oxygen from an exhaust stream permeates through the porous electrode to establish an equilibrium of oxygen anions in the electrolyte, thereby displacing electrons throughout the electrolyte to form an electron gradient. By adapting the two electrodes to sense a voltage potential between them, the change in electrolyte conductivity due to oxygen presence can be measured.

  8. 75 FR 25844 - Class Deviation From FAR 52.219-7, Notice of Partial Small Business Set-Aside

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-05-10

    ... of the Secretary Class Deviation From FAR 52.219-7, Notice of Partial Small Business Set-Aside AGENCY... class deviation to the Federal Acquisition Regulation (FAR) regarding partial small business set-asides... Clause 52.219-7, Notice of Partial Small Business Set-Aside. DESC intends to use the clause in...

  9. Parallel Eclipse Project Checkout

    NASA Technical Reports Server (NTRS)

    Crockett, Thomas M.; Joswig, Joseph C.; Shams, Khawaja S.; Powell, Mark W.; Bachmann, Andrew G.

    2011-01-01

    Parallel Eclipse Project Checkout (PEPC) is a program written to leverage parallelism and to automate the checkout process of plug-ins created in Eclipse RCP (Rich Client Platform). Eclipse plug-ins can be aggregated in a feature project. This innovation digests a feature description (xml file) and automatically checks out all of the plug-ins listed in the feature. This resolves the issue of manually checking out each plug-in required to work on the project. To minimize the amount of time necessary to checkout the plug-ins, this program makes the plug-in checkouts parallel. After parsing the feature, a request to checkout for each plug-in in the feature has been inserted. These requests are handled by a thread pool with a configurable number of threads. By checking out the plug-ins in parallel, the checkout process is streamlined before getting started on the project. For instance, projects that took 30 minutes to checkout now take less than 5 minutes. The effect is especially clear on a Mac, which has a network monitor displaying the bandwidth use. When running the client from a developer's home, the checkout process now saturates the bandwidth in order to get all the plug-ins checked out as fast as possible. For comparison, a checkout process that ranged from 8-200 Kbps from a developer's home is now able to saturate a pipe of 1.3 Mbps, resulting in significantly faster checkouts. Eclipse IDE (integrated development environment) tries to build a project as soon as it is downloaded. As part of another optimization, this innovation programmatically tells Eclipse to stop building while checkouts are happening, which dramatically reduces lock contention and enables plug-ins to continue downloading until all of them finish. Furthermore, the software re-enables automatic building, and forces Eclipse to do a clean build once it finishes checking out all of the plug-ins. This software is fully generic and does not contain any NASA-specific code. It can be applied to any
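
    A minimal sketch of the parallel-checkout idea described above, assuming a hypothetical feature-file layout, repository URL, and svn-style checkout command (PEPC's actual file formats and version-control commands are not given here):

    ```python
    # Sketch of the parallel checkout: parse a feature description for its
    # plug-in list, then check the plug-ins out through a fixed-size thread
    # pool. The feature-file layout, repository URL, and svn-style command
    # are hypothetical stand-ins for PEPC's actual formats.
    import subprocess
    import xml.etree.ElementTree as ET
    from concurrent.futures import ThreadPoolExecutor

    REPO_ROOT = "https://example.org/repo"   # hypothetical repository URL

    def plugins_in_feature(feature_xml):
        """Collect plug-in ids from a feature description file."""
        tree = ET.parse(feature_xml)
        return [p.get("id") for p in tree.getroot().iter("plugin")]

    def checkout(plugin_id):
        """Check a single plug-in out of version control."""
        subprocess.run(["svn", "checkout", f"{REPO_ROOT}/{plugin_id}", plugin_id],
                       check=True)

    def parallel_checkout(feature_xml, workers=8):
        # A configurable thread pool issues the checkouts concurrently, so
        # total time is bounded by bandwidth rather than per-request latency.
        with ThreadPoolExecutor(max_workers=workers) as pool:
            list(pool.map(checkout, plugins_in_feature(feature_xml)))

    if __name__ == "__main__":
        parallel_checkout("feature.xml")
    ```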

  10. Highly parallel computation

    NASA Technical Reports Server (NTRS)

    Denning, Peter J.; Tichy, Walter F.

    1990-01-01

    Highly parallel computing architectures are the only means to achieve the computation rates demanded by advanced scientific problems. A decade of research has demonstrated the feasibility of such machines, and current research focuses on which architectures are best suited. Machines designated as multiple instruction multiple datastream (MIMD) and single instruction multiple datastream (SIMD) have produced the best results to date; neither shows a decisive advantage for most near-homogeneous scientific problems. For scientific problems with many dissimilar parts, more speculative architectures such as neural networks or data flow may be needed.

  11. Parallel Kinematic Machines (PKM)

    SciTech Connect

    Henry, R.S.

    2000-03-17

    The purpose of this 3-year cooperative research project was to develop a parallel kinematic machining (PKM) capability for complex parts that normally require expensive multiple setups on conventional orthogonal machine tools. This non-conventional, non-orthogonal machining approach is based on a 6-axis positioning system commonly referred to as a hexapod. Sandia National Laboratories/New Mexico (SNL/NM) was the lead site responsible for a multitude of projects that defined the machining parameters and detailed the metrology of the hexapod. The role of the Kansas City Plant (KCP) in this project was limited to evaluating the application of this unique technology to production applications.

  12. CSM parallel structural methods research

    NASA Technical Reports Server (NTRS)

    Storaasli, Olaf O.

    1989-01-01

    Parallel structural methods, research team activities, advanced architecture computers for parallel computational structural mechanics (CSM) research, the FLEX/32 multicomputer, a parallel structural analyses testbed, blade-stiffened aluminum panel with a circular cutout and the dynamic characteristics of a 60 meter, 54-bay, 3-longeron deployable truss beam are among the topics discussed.

  13. Roo: A parallel theorem prover

    SciTech Connect

    Lusk, E.L.; McCune, W.W.; Slaney, J.K.

    1991-11-01

    We describe a parallel theorem prover based on the Argonne theorem-proving system OTTER. The parallel system, called Roo, runs on shared-memory multiprocessors such as the Sequent Symmetry. We explain the parallel algorithm used and give performance results that demonstrate near-linear speedups on large problems.

  14. Parallelized direct execution simulation of message-passing parallel programs

    NASA Technical Reports Server (NTRS)

    Dickens, Phillip M.; Heidelberger, Philip; Nicol, David M.

    1994-01-01

    As massively parallel computers proliferate, there is growing interest in finding ways by which the performance of massively parallel codes can be efficiently predicted. This problem arises in diverse contexts such as parallelizing compilers, parallel performance monitoring, and parallel algorithm development. In this paper we describe one solution where one directly executes the application code, but uses a discrete-event simulator to model details of the presumed parallel machine, such as operating system and communication network behavior. Because this approach is computationally expensive, we are interested in its own parallelization, specifically the parallelization of the discrete-event simulator. We describe methods suitable for parallelized direct execution simulation of message-passing parallel programs, and report on the performance of such a system, the Large Application Parallel Simulation Environment (LAPSE), which we have built on the Intel Paragon. On all codes measured to date, LAPSE predicts performance well, typically within 10 percent relative error. Depending on the nature of the application code, we have observed low slowdowns (relative to natively executing code) and high relative speedups using up to 64 processors.

  15. Dynamics of partial control.

    PubMed

    Sabuco, Juan; Sanjuán, Miguel A F; Yorke, James A

    2012-12-01

    Safe sets are a basic ingredient in the strategy of partial control of chaotic systems. Recently we have found an algorithm, the sculpting algorithm, which allows us to construct them, when they exist. Here we define another type of set, an asymptotic safe set, to which trajectories are attracted asymptotically when the partial control strategy is applied. We apply all these ideas to a specific example of a Duffing oscillator showing the geometry of these sets in phase space. The software for creating all the figures appearing in this paper is available as supplementary material. PMID:23278093

  16. Merger and acquisition medicine.

    PubMed

    Powell, G S

    1997-01-01

    This discussion of the ramifications of corporate mergers and acquisitions for employees recognizes that employee adaptation to the change can be a long and complex process. The author describes a role the occupational physician can take in helping to minimize the potential adverse health impact of major organizational change.

  17. Acquisitions List No. 42.

    ERIC Educational Resources Information Center

    Planned Parenthood--World Population, New York, NY. Katherine Dexter McCormick Library.

    The "Acquisitions List" of demographic books and articles is issued every two months by the Katharine Dexter McCormick Library. Divided into two parts, the first contains a list of books most recently acquired by the Library, each one annotated and also marked with the Library call number. The second part consists of a list of annotated articles,…

  18. Acquisitions List No. 43.

    ERIC Educational Resources Information Center

    Planned Parenthood--World Population, New York, NY. Katherine Dexter McCormick Library.

    The "Acquisitions List" of demographic books and articles is issued every two months by the Katharine Dexter McCormick Library. Divided into two parts, the first contains a list of books most recently acquired by the Library, each one annotated and also marked with the Library call number. The second part consists of a list of annotated articles,…

  19. First Language Acquisition.

    ERIC Educational Resources Information Center

    Clark, Eve V.

    This book examines children's acquisition of a first language, the stages they go through, and how they use language as they learn. There are 16 chapters in 4 parts. After chapter 1, "Acquiring Languages: Issues and Questions," Part 1, "Getting Started," offers (2) "In Conversation with Children," (3) "Starting on Language: Perception," (4) "Early…

  20. Telecommunications and data acquisition

    NASA Technical Reports Server (NTRS)

    Renzetti, N. A. (Editor)

    1981-01-01

    Deep Space Network progress in flight project support, tracking and data acquisition research and technology, network engineering, hardware and software implementation, and operations is reported. In addition, developments in Earth based radio technology as applied to geodynamics, astrophysics, and the radio search for extraterrestrial intelligence are reported.

  1. [Acquisition of arithmetic knowledge].

    PubMed

    Fayol, Michel

    2008-01-01

    The focus of this paper is on contemporary research on the number, counting, and arithmetic competencies that emerge during infancy, the preschool years, and elementary school. I provide a brief overview of the evolution of children's conceptual knowledge of arithmetic, the acquisition and use of counting, and how they solve simple arithmetic problems (e.g. 4 + 3). PMID:18198117

  2. Acquisition of Comparison Constructions

    ERIC Educational Resources Information Center

    Hohaus, Vera; Tiemann, Sonja; Beck, Sigrid

    2014-01-01

    This article presents a study on the time course of the acquisition of comparison constructions. The order in which comparison constructions (comparatives, measure phrases, superlatives, degree questions, etc.) show up in English- and German-learning children's spontaneous speech is quite fixed. It is shown to be insufficiently determined by…

  3. Data Acquisition Backend

    SciTech Connect

    Britton Jr., Charles L.; Ezell, N. Dianne Bull; Roberts, Michael

    2013-10-01

    This document is intended to summarize the development and testing of the data acquisition module portion of the Johnson Noise Thermometry (JNT) system developed at ORNL. The proposed system has been presented in an earlier report [1]. A more extensive project background including the project rationale is available in the initial project report [2].

  4. Image Acquisition Context

    PubMed Central

    Bidgood, W. Dean; Bray, Bruce; Brown, Nicolas; Mori, Angelo Rossi; Spackman, Kent A.; Golichowski, Alan; Jones, Robert H.; Korman, Louis; Dove, Brent; Hildebrand, Lloyd; Berg, Michael

    1999-01-01

    Objective: To support clinically relevant indexing of biomedical images and image-related information based on the attributes of image acquisition procedures and the judgments (observations) expressed by observers in the process of image interpretation. Design: The authors introduce the notion of “image acquisition context,” the set of attributes that describe image acquisition procedures, and present a standards-based strategy for utilizing the attributes of image acquisition context as indexing and retrieval keys for digital image libraries. Methods: The authors' indexing strategy is based on an interdependent message/terminology architecture that combines the Digital Imaging and Communication in Medicine (DICOM) standard, the SNOMED (Systematized Nomenclature of Human and Veterinary Medicine) vocabulary, and the SNOMED DICOM microglossary. The SNOMED DICOM microglossary provides context-dependent mapping of terminology to DICOM data elements. Results: The capability of embedding standard coded descriptors in DICOM image headers and image-interpretation reports improves the potential for selective retrieval of image-related information. This favorably affects information management in digital libraries. PMID:9925229
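
    As a rough sketch of how acquisition attributes and coded descriptors might be read out of a DICOM header as indexing keys, the Python fragment below uses the pydicom library; the input file name and the particular attributes chosen are illustrative assumptions, not part of the original work:

    ```python
    # Sketch: pulling acquisition attributes and coded concepts out of a
    # DICOM header for use as indexing keys. Uses pydicom; the file name and
    # the attributes selected here are illustrative, not prescribed above.
    import pydicom

    ds = pydicom.dcmread("image.dcm")        # hypothetical input file

    # Plain acquisition attributes usable directly as retrieval keys.
    keys = {
        "Modality": ds.get("Modality"),
        "BodyPartExamined": ds.get("BodyPartExamined"),
        "StudyDate": ds.get("StudyDate"),
    }

    # Coded concepts (e.g. SNOMED codes) from the Acquisition Context
    # Sequence, when the header carries one.
    for item in ds.get("AcquisitionContextSequence", []):
        concept = item.ConceptNameCodeSequence[0]
        keys[concept.CodeMeaning] = concept.CodeValue

    print(keys)
    ```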

  5. Second Language Acquisition.

    ERIC Educational Resources Information Center

    McLaughlin, Barry; Harrington, Michael

    1989-01-01

    A distinction is drawn between representational and processing models of second-language acquisition. The first approach is derived primarily from linguistics, the second from psychology. Both fields, it is argued, need to collaborate more fully, overcoming disciplinary narrowness in order to achieve more fruitful research. (GLR)

  6. Partial Arc Curvilinear Direct Drive Servomotor

    NASA Technical Reports Server (NTRS)

    Sun, Xiuhong (Inventor)

    2014-01-01

    A partial arc servomotor assembly having a curvilinear U-channel with two parallel rare earth permanent magnet plates facing each other and a pivoted ironless three-phase coil armature winding that moves between the plates. An encoder read head is fixed to a mounting plate above the coil armature winding, and a curvilinear encoder scale is curved to be co-axial with the curvilinear U-channel permanent magnet track formed by the permanent magnet plates. Driven by a set of miniaturized power electronics devices closely looped with a positioning feedback encoder, the angular position and velocity of the pivoted payload are programmable and precisely controlled.

  7. Massively Parallel QCD

    SciTech Connect

    Soltz, R; Vranas, P; Blumrich, M; Chen, D; Gara, A; Giampapa, M; Heidelberger, P; Salapura, V; Sexton, J; Bhanot, G

    2007-04-11

    The theory of the strong nuclear force, Quantum Chromodynamics (QCD), can be numerically simulated from first principles on massively-parallel supercomputers using the method of Lattice Gauge Theory. We describe the special programming requirements of lattice QCD (LQCD) as well as the optimal supercomputer hardware architectures that it suggests. We demonstrate these methods on the BlueGene massively-parallel supercomputer and argue that LQCD and the BlueGene architecture are a natural match. This can be traced to the simple fact that LQCD is a regular lattice discretization of space into lattice sites while the BlueGene supercomputer is a discretization of space into compute nodes, and that both are constrained by requirements of locality. This simple relation is both technologically important and theoretically intriguing. The main result of this paper is the speedup of LQCD using up to 131,072 CPUs on the largest BlueGene/L supercomputer. The speedup is perfect with sustained performance of about 20% of peak. This corresponds to a maximum of 70.5 sustained TFlop/s. At these speeds LQCD and BlueGene are poised to produce the next generation of strong interaction physics theoretical results.

  8. Parallel ptychographic reconstruction

    PubMed Central

    Nashed, Youssef S. G.; Vine, David J.; Peterka, Tom; Deng, Junjing; Ross, Rob; Jacobsen, Chris

    2014-01-01

    Ptychography is an imaging method whereby a coherent beam is scanned across an object, and an image is obtained by iterative phasing of the set of diffraction patterns. It can be used to image extended objects at a resolution limited by the scattering strength of the object and the detector geometry, rather than at an optics-imposed limit. As technical advances allow larger fields to be imaged, computational challenges arise for reconstructing the correspondingly larger data volumes, yet at the same time there is also a need to deliver reconstructed images immediately so that one can evaluate the next steps to take in an experiment. Here we present a parallel method for real-time ptychographic phase retrieval. It uses a hybrid parallel strategy to divide the computation between multiple graphics processing units (GPUs) and then employs novel techniques to merge sub-datasets into a single complex phase and amplitude image. Results are shown on a simulated specimen and a real dataset from an X-ray experiment conducted at a synchrotron light source. PMID:25607174

  9. Making parallel lines meet

    PubMed Central

    Baskin, Tobias I.; Gu, Ying

    2012-01-01

    The extracellular matrix is constructed beyond the plasma membrane, challenging mechanisms for its control by the cell. In plants, the cell wall is highly ordered, with cellulose microfibrils aligned coherently over a scale spanning hundreds of cells. To a considerable extent, deploying aligned microfibrils determines mechanical properties of the cell wall, including strength and compliance. Cellulose microfibrils have long been seen to be aligned in parallel with an array of microtubules in the cell cortex. How do these cortical microtubules affect the cellulose synthase complex? This question has stood for as many years as the parallelism between the elements has been observed, but now an answer is emerging. Here, we review recent work establishing that the link between microtubules and microfibrils is mediated by a protein named cellulose synthase-interacting protein 1 (CSI1). The protein binds both microtubules and components of the cellulose synthase complex. In the absence of CSI1, microfibrils are synthesized but their alignment becomes uncoupled from the microtubules, an effect that is phenocopied in the wild type by depolymerizing the microtubules. The characterization of CSI1 significantly enhances knowledge of how cellulose is aligned, a process that serves as a paradigmatic example of how cells dictate the construction of their extracellular environment. PMID:22902763

  10. Applied Parallel Metadata Indexing

    SciTech Connect

    Jacobi, Michael R

    2012-08-01

    The GPFS Archive is a parallel archive used by hundreds of users in the Turquoise collaboration network. It houses 4+ petabytes of data in more than 170 million files. Currently, users must navigate the file system to retrieve their data, requiring them to remember file paths and names. A better solution might allow users to tag data with meaningful labels and search the archive using standard and user-defined metadata, while maintaining security. Last summer, I developed the backend to a tool that adheres to these design goals. The backend works by importing GPFS metadata into a MongoDB cluster, which is then indexed on each attribute. This summer, the author implemented security and developed the user interface for the search tool. To meet security requirements, each database table is associated with a single user, only stores records that the user may read, and requires a set of credentials to access. The interface to the search tool is implemented using FUSE (Filesystem in USErspace). FUSE is an intermediate layer that intercepts file system calls and allows the developer to redefine how those calls behave. In the case of this tool, FUSE interfaces with MongoDB to issue queries and populate output. A FUSE implementation is desirable because it allows users to interact with the search tool using commands they are already familiar with. These security and interface additions are essential for a usable product.
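
    A minimal sketch of the FUSE-to-MongoDB interception described above, using the fusepy and pymongo libraries; the per-user collection layout, the key=value query convention in the path, and the mount point are hypothetical stand-ins, not the tool's actual interface:

    ```python
    # Sketch: listing a "/key=value" directory runs a metadata query whose
    # matching file names populate the directory. Collection layout, query
    # path convention, and mount point are hypothetical. Requires fusepy
    # and pymongo.
    import errno
    from fuse import FUSE, FuseOSError, Operations   # fusepy
    from pymongo import MongoClient

    class MetadataSearchFS(Operations):
        def __init__(self, user):
            # One collection per user, holding only records that user may read.
            self.coll = MongoClient()["gpfs_metadata"][user]

        def getattr(self, path, fh=None):
            if path == "/" or "=" in path:
                return {"st_mode": 0o040555, "st_nlink": 2}  # read-only dir
            raise FuseOSError(errno.ENOENT)

        def readdir(self, path, fh):
            entries = [".", ".."]
            if "=" in path:                  # e.g. "ls /mnt/search/owner=alice"
                key, value = path.strip("/").split("=", 1)
                entries += [doc["name"] for doc in
                            self.coll.find({key: value}, {"name": 1})]
            return entries

    if __name__ == "__main__":
        FUSE(MetadataSearchFS("alice"), "/mnt/search", foreground=True)
    ```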

  11. Coordinating Council. Seventh Meeting: Acquisitions

    NASA Technical Reports Server (NTRS)

    1992-01-01

    The theme for this NASA Scientific and Technical Information Program Coordinating Council meeting was Acquisitions. In addition to NASA and the NASA Center for AeroSpace Information (CASI) presentations, the report contains fairly lengthy visuals about acquisitions at the Defense Technical Information Center. CASI's acquisitions program and CASI's proactive acquisitions activity were described. There was a presentation on the document evaluation process at CASI. A talk about open literature scope and coverage at the American Institute of Aeronautics and Astronautics was also given. An overview of the STI Program's Acquisitions Experts Committee was given next. Finally, acquisitions initiatives of the NASA STI program were presented.

  12. 78 FR 37164 - Land Acquisitions: Appeals of Land Acquisition Decisions

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-06-20

    ... the full address. In proposed rule FR Doc. 2013-12708, published in the issue of May 29, 2013, make...; Docket ID: BIA-2013-0005] RIN 1076-AF15 Land Acquisitions: Appeals of Land Acquisition Decisions...

  13. Rapid code acquisition algorithms employing PN matched filters

    NASA Technical Reports Server (NTRS)

    Su, Yu T.

    1988-01-01

    The performance of four algorithms using pseudonoise matched filters (PNMFs), for direct-sequence spread-spectrum systems, is analyzed. They are: parallel search with fixed-dwell detector (PL-FDD), parallel search with sequential detector (PL-SD), parallel-serial search with fixed-dwell detector (PS-FDD), and parallel-serial search with sequential detector (PS-SD). The operating characteristic for each detector and the mean acquisition time for each algorithm are derived. All the algorithms are studied in conjunction with the noncoherent integration technique, which enables the system to operate in the presence of data modulation. Several previous proposals using PNMF are seen as special cases of the present algorithms.
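
    For orientation, the sketch below illustrates generic PN-matched-filter code acquisition with a noncoherent (squared) decision statistic, which is insensitive to data-modulation sign flips; it is not one of the four detectors analyzed in the paper, and all signal parameters are invented for the demo:

    ```python
    # Generic PN matched-filter acquisition demo: correlate the received
    # chips with the local code at every phase and pick the peak of the
    # squared (noncoherent) statistic. All parameters invented for the demo.
    import numpy as np

    rng = np.random.default_rng(0)
    N = 127                                   # code length in chips
    pn = rng.choice([-1.0, 1.0], size=N)      # stand-in for a real PN sequence

    true_phase = 37
    rx = np.roll(pn, true_phase) + 0.5 * rng.standard_normal(N)  # noisy input

    # Matched filter over all N code phases; squaring removes the data-symbol
    # sign so the detector still works under data modulation.
    stat = np.array([np.dot(rx, np.roll(pn, k)) for k in range(N)]) ** 2

    est = int(np.argmax(stat))
    print(f"estimated code phase: {est} (true: {true_phase})")
    ```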

  14. Partial hue-matching.

    PubMed

    Logvinenko, Alexander D; Beattie, Lesley L

    2011-01-01

    It is widely believed that color can be decomposed into a small number of component colors. Particularly, each hue can be described as a combination of a restricted set of component hues. Methods, such as color naming and hue scaling, aim at describing color in terms of the relative amount of the component hues. However, there is no consensus on the nomenclature of component hues. Moreover, the very notion of hue (not to mention component hue) is usually defined verbally rather than perceptually. In this paper, we make an attempt to operationalize such a fundamental attribute of color as hue without the use of verbal terms. Specifically, we put forth a new method--partial hue-matching--that is based on judgments of whether two colors have some hue in common. It allows a set of component hues to be established objectively, without resorting to verbal definitions. Specifically, the largest sets of color stimuli, all of which partially match each other (referred to as chromaticity classes), can be derived from the observer's partial hue-matches. A chromaticity class proves to consist of all color stimuli that contain a particular component hue. Thus, the chromaticity classes fully define the set of component hues. Using samples of Munsell papers, a few experiments on partial hue-matching were carried out with twelve inexperienced normal trichromatic observers. The results reinforce the classical notion of four component hues (yellow, blue, red, and green). Black and white (but not gray) were also found to be component colors. PMID:21742961

  15. Partial knee replacement

    MedlinePlus

    ... You will need to understand what surgery and recovery will be like. Partial knee arthroplasty may be a good choice if you have arthritis in only one side or part of the knee and: You are older, thin, and not very active. You do not ...

  16. A systolic array parallelizing compiler

    SciTech Connect

    Tseng, P.S. )

    1990-01-01

    This book presents a completely new approach to the problem of the systolic array parallelizing compiler. It describes the AL parallelizing compiler for the Warp systolic array, the first working systolic array parallelizing compiler that can generate efficient parallel code for complete LINPACK routines. The book begins by analyzing the architectural strengths of the Warp systolic array. It proposes a model for mapping programs onto the machine and introduces the notion of data relations for optimizing the program mapping. Also presented are successful applications of the AL compiler in matrix computation and image processing. A complete listing of the source program and the compiler-generated parallel code is given to clarify the overall picture of the compiler. The book concludes that a systolic array parallelizing compiler can produce efficient parallel code, almost identical to what the user would have written by hand.

  17. 48 CFR 352.234-4 - Partial earned value management system.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... management system. 352.234-4 Section 352.234-4 Federal Acquisition Regulations System HEALTH AND HUMAN....234-4 Partial earned value management system. As prescribed in 334.203-70(d), the Contracting Officer shall insert the following clause: Partial Earned Value Management System (October 2008) (a)...

  18. On the structure of parallelism in a highly concurrent PDE solver

    NASA Technical Reports Server (NTRS)

    Gannon, D.; Van Rosendale, J.

    1986-01-01

    A parallel multigrid algorithm for solving elliptic partial differential equations is developed and evaluated. A V-cycle multigrid method is altered to increase the degree of parallelism. A numerical analysis of the resulting concurrent-iteration multigrid algorithm is performed; its architectural implications are considered; highly parallel systems without shared memory are examined (including mesh-connected arrays, mesh-shuffle-connected systems, permutation networks, and direct VLSI embeddings); and the results of numerical experiments are presented in tables and graphs.
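
    To make concrete the structure that the concurrent-iteration variant parallelizes, here is a textbook V-cycle for the 1-D Poisson problem -u'' = f; this baseline sketch is not the paper's algorithm, and the grid size and smoother constants are arbitrary:

    ```python
    # Textbook V-cycle for -u'' = f on [0, 1] with zero boundary values.
    # Weighted-Jacobi smoothing, full-weighting restriction, linear prolongation.
    import numpy as np

    def smooth(u, f, h, sweeps=2):
        for _ in range(sweeps):  # weighted Jacobi on interior points
            u[1:-1] += 0.67 * (0.5 * (u[:-2] + u[2:] + h * h * f[1:-1]) - u[1:-1])
        return u

    def v_cycle(u, f, h):
        u = smooth(u, f, h)
        if len(u) <= 3:                          # coarsest grid
            return smooth(u, f, h, sweeps=10)
        r = np.zeros_like(u)                     # residual r = f + u''
        r[1:-1] = f[1:-1] + (u[:-2] - 2.0 * u[1:-1] + u[2:]) / (h * h)
        rc = r[::2].copy()                       # full-weighting restriction
        rc[1:-1] = 0.25 * r[1:-2:2] + 0.5 * r[2:-1:2] + 0.25 * r[3::2]
        ec = v_cycle(np.zeros_like(rc), rc, 2.0 * h)
        e = np.zeros_like(u)                     # linear prolongation
        e[::2] = ec
        e[1:-1:2] = 0.5 * (ec[:-1] + ec[1:])
        return smooth(u + e, f, h)

    n = 129                                      # 2**7 + 1 grid points
    x = np.linspace(0.0, 1.0, n)
    f = np.pi ** 2 * np.sin(np.pi * x)           # exact solution: sin(pi x)
    u = np.zeros(n)
    for _ in range(10):
        u = v_cycle(u, f, 1.0 / (n - 1))
    print("max error:", np.abs(u - np.sin(np.pi * x)).max())
    ```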

  19. Parallel node placement method by bubble simulation

    NASA Astrophysics Data System (ADS)

    Nie, Yufeng; Zhang, Weiwei; Qi, Nan; Li, Yiqiang

    2014-03-01

    An efficient Parallel Node Placement method by Bubble Simulation (PNPBS), employing METIS-based domain decomposition (DD) for an arbitrary number of processors, is introduced. In accordance with the desired nodal density and Newton's Second Law of Motion, automatic generation of node sets by bubble simulation has been demonstrated in previous work. Since the interaction force between nodes is short-range, the positions and velocities of two distant nodes can be updated simultaneously and independently during dynamic simulation; this inherent parallelism makes the method well suited to parallel computing. In this PNPBS method, the METIS-based DD scheme has been investigated for uniform and non-uniform node sets, and dynamic load balancing is obtained by evenly distributing work among the processors. For the nodes near the common interface of two neighboring subdomains, there is no need for special treatment after dynamic simulation. These nodes have good geometrical properties and a smooth density distribution, which is desirable in the numerical solution of partial differential equations (PDEs). The results of numerical examples show that quasi-linear speedup in the number of processors and high efficiency are achieved.
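
    A toy sketch of the bubble-simulation update follows: nodes interact through a short-range force and move according to Newton's second law, so distant nodes can be updated independently, which is the property the METIS decomposition exploits. The force law, cutoff, and constants are invented, and the O(N^2) pairwise evaluation stands in for the neighbor lists and domain decomposition of the real method:

    ```python
    # Toy bubble simulation: a short-range spring-like force pushes nodes
    # toward a desired spacing d0; the force vanishes beyond the cutoff, so
    # far-apart nodes never influence each other within a time step.
    import numpy as np

    rng = np.random.default_rng(1)
    pos = rng.uniform(0.0, 1.0, size=(200, 2))    # nodes in the unit square
    vel = np.zeros_like(pos)
    d0, cutoff, dt, damping = 0.07, 0.14, 0.01, 0.9

    for step in range(500):
        diff = pos[:, None, :] - pos[None, :, :]  # pairwise displacements
        dist = np.linalg.norm(diff, axis=2)
        np.fill_diagonal(dist, np.inf)
        # Linear "spring" toward spacing d0, zero beyond the cutoff radius.
        mag = np.where(dist < cutoff, (d0 - dist) / d0, 0.0)
        force = (mag / dist)[:, :, None] * diff
        vel = damping * (vel + dt * force.sum(axis=1))
        pos = np.clip(pos + dt * vel, 0.0, 1.0)

    d = np.linalg.norm(pos[:, None] - pos[None, :], axis=2)
    np.fill_diagonal(d, np.inf)
    print("mean nearest-neighbor spacing:", d.min(axis=1).mean())
    ```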

  20. 33. Perimeter acquisition radar building room #320, perimeter acquisition radar ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    33. Perimeter acquisition radar building room #320, perimeter acquisition radar operations center (PAROC), contains the tactical command and control group equipment required to control the PAR site. Showing spacetrack monitor console - Stanley R. Mickelsen Safeguard Complex, Perimeter Acquisition Radar Building, Limited Access Area, between Limited Access Patrol Road & Service Road A, Nekoma, Cavalier County, ND

  1. Parallel Computing in SCALE

    SciTech Connect

    DeHart, Mark D; Williams, Mark L; Bowman, Stephen M

    2010-01-01

    The SCALE computational architecture has remained basically the same since its inception 30 years ago, although constituent modules and capabilities have changed significantly. This SCALE concept was intended to provide a framework whereby independent codes can be linked to provide a more comprehensive capability than possible with the individual programs - allowing flexibility to address a wide variety of applications. However, the current system was designed originally for mainframe computers with a single CPU and with significantly less memory than today's personal computers. It has been recognized that the present SCALE computation system could be restructured to take advantage of modern hardware and software capabilities, while retaining many of the modular features of the present system. Preliminary work is being done to define specifications and capabilities for a more advanced computational architecture. This paper describes the state of current SCALE development activities and plans for future development. With the release of SCALE 6.1 in 2010, a new phase of evolutionary development will be available to SCALE users within the TRITON and NEWT modules. The SCALE (Standardized Computer Analyses for Licensing Evaluation) code system developed by Oak Ridge National Laboratory (ORNL) provides a comprehensive and integrated package of codes and nuclear data for a wide range of applications in criticality safety, reactor physics, shielding, isotopic depletion and decay, and sensitivity/uncertainty (S/U) analysis. Over the last three years, since the release of version 5.1 in 2006, several important new codes have been introduced within SCALE, and significant advances applied to existing codes. Many of these new features became available with the release of SCALE 6.0 in early 2009. However, beginning with SCALE 6.1, a first generation of parallel computing is being introduced. In addition to near-term improvements, a plan for longer term SCALE enhancement

  2. Parallel Polarization State Generation

    PubMed Central

    She, Alan; Capasso, Federico

    2016-01-01

    The control of polarization, an essential property of light, is of wide scientific and technological interest. The general problem of generating arbitrary time-varying states of polarization (SOP) has always been mathematically formulated by a series of linear transformations, i.e. a product of matrices, imposing a serial architecture. Here we show a parallel architecture described by a sum of matrices. The theory is experimentally demonstrated by modulating spatially-separated polarization components of a laser using a digital micromirror device that are subsequently beam combined. This method greatly expands the parameter space for engineering devices that control polarization. Consequently, performance characteristics, such as speed, stability, and spectral range, are entirely dictated by the technologies of optical intensity modulation, including absorption, reflection, emission, and scattering. This opens up important prospects for polarization state generation (PSG) with unique performance characteristics with applications in spectroscopic ellipsometry, spectropolarimetry, communications, imaging, and security. PMID:27184813
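
    The serial-versus-parallel contrast can be made concrete with Jones calculus: a serial generator applies a product of matrices to one beam, while the parallel scheme intensity-modulates fixed, spatially separated polarization components and coherently recombines them, i.e. applies a weighted sum of matrices. The basis optics and weights in this numpy sketch are arbitrary choices for the demo:

    ```python
    # Jones-calculus toy: the serial generator applies a *product* of
    # matrices to one beam; the parallel generator applies a *sum*, one
    # intensity-weighted fixed optic per arm, recombined coherently.
    import numpy as np

    E_in = np.array([1.0, 0.0])              # horizontally polarized input

    def waveplate(delta, theta):
        """Jones matrix of a retarder with retardance delta at angle theta."""
        c, s = np.cos(theta), np.sin(theta)
        R = np.array([[c, -s], [s, c]])
        return R @ np.diag([1.0, np.exp(1j * delta)]) @ R.T

    # Serial architecture: a cascade, i.e. a matrix product.
    serial = waveplate(np.pi / 2, 0.3) @ waveplate(np.pi, 0.1) @ E_in

    # Parallel architecture: E_out = sum_k w_k M_k E_in with non-negative
    # modulator weights w_k and fixed per-arm optics M_k.
    H = np.array([[1.0, 0.0], [0.0, 0.0]])        # horizontal polarizer
    V = np.array([[0.0, 0.0], [0.0, 1.0]])        # vertical polarizer
    D = 0.5 * np.array([[1.0, 1.0], [1.0, 1.0]])  # diagonal polarizer
    arms = [H, V @ waveplate(np.pi / 2, np.pi / 4), D]

    w = np.array([0.8, 0.5, 0.3])            # intensity-modulator settings
    parallel = sum(wk * (Mk @ E_in) for wk, Mk in zip(w, arms))

    print("serial output Jones vector:  ", np.round(serial, 3))
    print("parallel output Jones vector:", np.round(parallel, 3))
    ```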

  3. Parallel tridiagonal equation solvers

    NASA Technical Reports Server (NTRS)

    Stone, H. S.

    1974-01-01

    Three parallel algorithms were compared for the direct solution of tridiagonal linear systems of equations. The algorithms are suitable for computers such as ILLIAC 4 and CDC STAR. For array computers similar to ILLIAC 4, cyclic odd-even reduction has the least operation count for highly structured sets of equations, and recursive doubling has the least count for relatively unstructured sets of equations. Since the difference in operation counts for these two algorithms is not substantial, their relative running times may be more related to overhead operations, which are not measured in this paper. The third algorithm, based on Buneman's Poisson solver, has more arithmetic operations than the others, and appears to be the least favorable. For pipeline computers similar to CDC STAR, cyclic odd-even reduction appears to be the most preferable algorithm for all cases.
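
    For reference, here is a compact implementation of cyclic odd-even reduction for a tridiagonal system of n = 2^k - 1 unknowns; all eliminations at a given level are mutually independent, which is what maps well onto array and pipeline machines. This is a generic textbook formulation in Python, not code from the study:

    ```python
    import numpy as np

    def cyclic_reduction(a, b, c, d):
        """Solve a tridiagonal system by cyclic odd-even reduction.
        a: sub-diagonal (a[0] unused), b: diagonal, c: super-diagonal
        (c[-1] unused), d: right-hand side. Requires n = 2**k - 1."""
        a, b, c, d = (np.asarray(v, dtype=float).copy() for v in (a, b, c, d))
        n = len(b)
        if n == 1:
            return d / b
        x = np.zeros(n)
        s = 1
        while s < n:  # forward phase: eliminate odd-level unknowns
            for i in range(2 * s - 1, n, 2 * s):   # independent -> parallel
                al = -a[i] / b[i - s]
                be = -c[i] / b[i + s] if i + s < n else 0.0
                b[i] += al * c[i - s] + (be * a[i + s] if i + s < n else 0.0)
                d[i] += al * d[i - s] + (be * d[i + s] if i + s < n else 0.0)
                a[i] = al * a[i - s]
                c[i] = be * c[i + s] if i + s < n else 0.0
            s *= 2
        s //= 2
        while s >= 1:  # back-substitution, coarsest level first
            for i in range(s - 1, n, 2 * s):       # independent -> parallel
                left = a[i] * x[i - s] if i - s >= 0 else 0.0
                right = c[i] * x[i + s] if i + s < n else 0.0
                x[i] = (d[i] - left - right) / b[i]
            s //= 2
        return x

    # Quick check on a random diagonally dominant system with n = 127.
    n = 127
    rng = np.random.default_rng(2)
    b = 2.0 + rng.random(n)
    a, c = -rng.random(n), -rng.random(n)
    a[0] = c[-1] = 0.0
    x_true = rng.standard_normal(n)
    d = b * x_true
    d[1:] += a[1:] * x_true[:-1]
    d[:-1] += c[:-1] * x_true[1:]
    print("max error:", np.abs(cyclic_reduction(a, b, c, d) - x_true).max())
    ```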

  4. Parallel Polarization State Generation

    NASA Astrophysics Data System (ADS)

    She, Alan; Capasso, Federico

    2016-05-01

    The control of polarization, an essential property of light, is of wide scientific and technological interest. The general problem of generating arbitrary time-varying states of polarization (SOP) has always been mathematically formulated by a series of linear transformations, i.e. a product of matrices, imposing a serial architecture. Here we show a parallel architecture described by a sum of matrices. The theory is experimentally demonstrated by modulating spatially-separated polarization components of a laser using a digital micromirror device that are subsequently beam combined. This method greatly expands the parameter space for engineering devices that control polarization. Consequently, performance characteristics, such as speed, stability, and spectral range, are entirely dictated by the technologies of optical intensity modulation, including absorption, reflection, emission, and scattering. This opens up important prospects for polarization state generation (PSG) with unique performance characteristics with applications in spectroscopic ellipsometry, spectropolarimetry, communications, imaging, and security.

  5. Toward Parallel Document Clustering

    SciTech Connect

    Mogill, Jace A.; Haglin, David J.

    2011-09-01

    A key challenge to automated clustering of documents in large text corpora is the high cost of comparing documents in a multimillion-dimensional document space. The Anchors Hierarchy is a fast data structure and algorithm for localizing data based on a distance metric that obeys the triangle inequality; the algorithm strives to minimize the number of distance calculations needed to cluster the documents into “anchors” around reference documents called “pivots”. We extend the original algorithm to increase the amount of available parallelism and consider two implementations: a complex data structure which affords efficient searching, and a simple data structure which requires repeated sorting. The sorting implementation is integrated with a text corpora “Bag of Words” program, and initial performance results of an end-to-end document-processing workflow are reported.
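
    A rough sketch of the pivot construction behind the Anchors Hierarchy: each new pivot is the point farthest from its current anchor, and the triangle inequality supplies a lower bound on distances to the new pivot so that many distance computations can be skipped. The random vectors and Euclidean metric here stand in for documents and a document distance:

    ```python
    # Anchor construction with triangle-inequality pruning: the bound
    # |d(x, p_old) - d(p_old, p_new)| <= d(x, p_new) lets many distances
    # be skipped. Toy data, not the paper's implementation.
    import numpy as np

    rng = np.random.default_rng(3)
    docs = rng.random((2000, 50))

    def dist(u, v):
        return float(np.linalg.norm(u - v))  # any triangle-inequality metric

    n_anchors = 20
    pivots = [0]                             # arbitrary first pivot
    owner = np.zeros(len(docs), dtype=int)   # index of each point's pivot
    radius = np.array([dist(docs[0], v) for v in docs])
    pruned = 0

    while len(pivots) < n_anchors:
        new_p = int(np.argmax(radius))       # farthest point becomes a pivot
        p_to_new = {p: dist(docs[p], docs[new_p]) for p in set(owner)}
        for i in range(len(docs)):
            # Lower-bound d(i, new_p) without computing it.
            if abs(radius[i] - p_to_new[owner[i]]) >= radius[i]:
                pruned += 1
                continue                     # current anchor provably closer
            d_new = dist(docs[i], docs[new_p])
            if d_new < radius[i]:
                owner[i], radius[i] = new_p, d_new
        pivots.append(new_p)

    print(f"{len(pivots)} anchors; {pruned} distance computations pruned")
    ```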

  6. Unified Parallel Software

    SciTech Connect

    McKay, Mike

    2003-12-01

    UPS (Unified Parallel Software) is a collection of software tools (libraries, scripts, executables) that assist in parallel programming. It consists of: o libups.a C/Fortran callable routines for message passing (utilities written on top of MPI) and file IO (utilities written on top of HDF). o libuserd-HDF.so EnSight user-defined reader for visualizing data files written with UPS File IO. o ups_libuserd_query, ups_libuserd_prep.pl, ups_libuserd_script.pl Executables/scripts to get information from data files and to simplify the use of EnSight on those data files. o ups_io_rm/ups_io_cp Manipulate data files written with UPS File IO. These tools are portable to a wide variety of Unix platforms.

  7. Unified Parallel Software

    2003-12-01

    UPS (Unified Parallel Software) is a collection of software tools (libraries, scripts, executables) that assist in parallel programming. It consists of: o libups.a C/Fortran callable routines for message passing (utilities written on top of MPI) and file IO (utilities written on top of HDF). o libuserd-HDF.so EnSight user-defined reader for visualizing data files written with UPS File IO. o ups_libuserd_query, ups_libuserd_prep.pl, ups_libuserd_script.pl Executables/scripts to get information from data files and to simplify the use of EnSight on those data files. o ups_io_rm/ups_io_cp Manipulate data files written with UPS File IO. These tools are portable to a wide variety of Unix platforms.

  8. Parallel Imaging Microfluidic Cytometer

    PubMed Central

    Ehrlich, Daniel J.; McKenna, Brian K.; Evans, James G.; Belkina, Anna C.; Denis, Gerald V.; Sherr, David; Cheung, Man Ching

    2011-01-01

    By adding an additional degree of freedom from multichannel flow, the parallel microfluidic cytometer (PMC) combines some of the best features of flow cytometry (FACS) and microscope-based high-content screening (HCS). The PMC (i) lends itself to fast processing of large numbers of samples, (ii) adds a 1-D imaging capability for intracellular localization assays (HCS), (iii) has a high rare-cell sensitivity, and (iv) has an unusual capability for time-synchronized sampling. An inability to practically handle large sample numbers has restricted applications of conventional flow cytometers and microscopes in combinatorial cell assays, network biology, and drug discovery. The PMC promises to relieve a bottleneck in these previously constrained applications. The PMC may also be a powerful tool for finding rare primary cells in the clinic. The multichannel architecture of current PMC prototypes allows 384 unique samples for a cell-based screen to be read out in approximately 6–10 minutes, about 30 times the speed of most current FACS systems. In 1-D intracellular imaging, the PMC can obtain protein localization using HCS marker strategies at many times the sample throughput of CCD-based microscopes or CCD-based single-channel flow cytometers. The PMC also permits the signal integration time to be varied over a larger range than is practical in conventional flow cytometers. The signal-to-noise advantages are useful, for example, in counting rare positive cells in the most difficult early stages of genome-wide screening. We review the status of parallel microfluidic cytometry and discuss some of the directions the new technology may take. PMID:21704835

  9. Data acquisition instruments: Psychopharmacology

    SciTech Connect

    Hartley, D.S. III

    1998-01-01

    This report contains the results of a Direct Assistance Project performed by Lockheed Martin Energy Systems, Inc., for Dr. K. O. Jobson. The purpose of the project was to perform preliminary analysis of the data acquisition instruments used in the field of psychiatry, with the goal of identifying commonalities of data and strategies for handling and using the data in the most advantageous fashion. Data acquisition instruments from 12 sources were provided by Dr. Jobson. Several commonalities were identified and a potentially useful data strategy is reported here. Analysis of the information collected for utility in performing diagnoses is recommended. In addition, further work is recommended to refine the commonalities into a directly useful computer systems structure.

  10. Parallelizing OVERFLOW: Experiences, Lessons, Results

    NASA Technical Reports Server (NTRS)

    Jespersen, Dennis C.

    1999-01-01

    The computer code OVERFLOW is widely used in the aerodynamic community for the numerical solution of the Navier-Stokes equations. Current trends in computer systems and architectures are toward multiple processors and parallelism, including distributed memory. This report describes work that has been carried out by the author and others at Ames Research Center with the goal of parallelizing OVERFLOW using a variety of parallel architectures and parallelization strategies. This paper begins with a brief description of the OVERFLOW code. This description includes the basic numerical algorithm and some software engineering considerations. Next comes a description of a parallel version of OVERFLOW, OVERFLOW/PVM, using PVM (Parallel Virtual Machine). This parallel version of OVERFLOW uses the manager/worker style and is part of the standard OVERFLOW distribution. Then comes a description of a parallel version of OVERFLOW, OVERFLOW/MPI, using MPI (Message Passing Interface). This parallel version of OVERFLOW uses the SPMD (Single Program Multiple Data) style. Finally comes a discussion of alternatives to explicit message-passing in the context of parallelizing OVERFLOW.
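
    To illustrate the SPMD style (every rank runs the same program on its own block of the domain), here is a minimal halo-exchange sketch in Python with mpi4py; it is a generic pattern, not OVERFLOW's actual decomposition or numerics:

    ```python
    # Minimal SPMD halo-exchange sketch with mpi4py: every rank runs this
    # same program, owns one block of a 1-D domain, and trades edge values
    # with its neighbors each iteration.
    import numpy as np
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank, size = comm.Get_rank(), comm.Get_size()

    n_local = 100                          # interior points owned by this rank
    u = np.zeros(n_local + 2)              # plus one halo cell on each side
    u[1:-1] = float(rank)                  # arbitrary initial data

    left = rank - 1 if rank > 0 else MPI.PROC_NULL
    right = rank + 1 if rank < size - 1 else MPI.PROC_NULL

    for _ in range(50):
        # Halo exchange: send my edge values, receive the neighbors' edges.
        comm.Sendrecv(u[1:2], dest=left, recvbuf=u[-1:], source=right)
        comm.Sendrecv(u[-2:-1], dest=right, recvbuf=u[:1], source=left)
        u[1:-1] = 0.5 * (u[:-2] + u[2:])   # simple smoothing update

    print(f"rank {rank}: block mean = {u[1:-1].mean():.4f}")
    # Run with, e.g.: mpiexec -n 4 python spmd_sketch.py
    ```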

  11. Advanced Data Acquisition Systems

    NASA Technical Reports Server (NTRS)

    Perotti, J.

    2003-01-01

    Current and future requirements of the aerospace sensors and transducers field make it necessary to design and develop new data acquisition devices and instrumentation systems. New designs are sought to incorporate self-health, self-calibrating, and self-repair capabilities, allowing greater measurement reliability and extended calibration cycles. With the addition of power management schemes, state-of-the-art data acquisition systems allow data to be processed and presented to the users with increased efficiency and accuracy. The design architecture presented in this paper displays an innovative approach to data acquisition systems. The design incorporates electronic health self-check, device/system self-calibration, electronics and function self-repair, failure detection and prediction, and power management (reduced power consumption). These requirements are driven by the aerospace industry's need to reduce operations and maintenance costs, to accelerate processing time, and to provide reliable hardware with minimum costs. The project's design architecture incorporates some commercially available components identified during the market research investigation, such as Field Programmable Gate Arrays (FPGA), Programmable Analog Integrated Circuits (PAC IC), and Field Programmable Analog Arrays (FPAA) for Digital Signal Processing (DSP) electronic/system control, along with investigation of specific characteristics found in technologies such as Electronic Component Mean Time Between Failure (MTBF) and Radiation Hardened Component Availability. Three main sections are discussed in the design architecture presented in this document: (a) the Analog Signal Module Section, (b) the Digital Signal/Control Module Section, and (c) the Power Management Module Section. These sections are discussed in detail in the following pages. This approach to data acquisition systems has resulted in the assignment of patent rights to Kennedy Space Center under U.S. patent # 6

  12. Data Acquisition Systems

    NASA Technical Reports Server (NTRS)

    1989-01-01

    Technology developed during a joint research program with Langley and Kinetic Systems Corporation led to Kinetic Systems' production of a high speed Computer Automated Measurement and Control (CAMAC) data acquisition system. The study, which involved the use of CAMAC equipment applied to flight simulation, significantly improved the company's technical capability and produced new applications. With Digital Equipment Corporation, Kinetic Systems is marketing the system to government and private companies for flight simulation, fusion research, turbine testing, steelmaking, etc.

  13. First Language Acquisition and Teaching

    ERIC Educational Resources Information Center

    Cruz-Ferreira, Madalena

    2011-01-01

    "First language acquisition" commonly means the acquisition of a single language in childhood, regardless of the number of languages in a child's natural environment. Language acquisition is variously viewed as predetermined, wondrous, a source of concern, and as developing through formal processes. "First language teaching" concerns schooling in…

  14. Complexity in language acquisition.

    PubMed

    Clark, Alexander; Lappin, Shalom

    2013-01-01

    Learning theory has frequently been applied to language acquisition, but discussion has largely focused on information-theoretic problems, in particular on the absence of direct negative evidence. Such arguments typically neglect the probabilistic nature of cognition and learning in general. We argue first that these arguments, and analyses based on them, suffer from a major flaw: they systematically conflate the hypothesis class and the learnable concept class. As a result, they do not allow one to draw significant conclusions about the learner. Second, we claim that the real problem for language learning is the computational complexity of constructing a hypothesis from input data. Studying this problem allows for a more direct approach to the object of study, the language acquisition device, rather than the learnable class of languages, which is epiphenomenal and possibly hard to characterize. The learnability results informed by complexity studies are much more insightful. They strongly suggest that target grammars need to be objective, in the sense that the primitive elements of these grammars are based on objectively definable properties of the language itself. These considerations support the view that language acquisition proceeds primarily through data-driven learning of some form.

  15. Second language acquisition.

    PubMed

    Juffs, Alan

    2011-05-01

    Second language acquisition (SLA) is a field that investigates child and adult SLA from a variety of theoretical perspectives. This article provides a survey of some key areas of concern, including formal generative theory and emergentist theory in the areas of morpho-syntax and phonology. The review details the theoretical stance of the two different approaches to the nature of language: generative linguistics and general cognitive approaches. Some results of key acquisition studies from the two theoretical frameworks are discussed. From a generative perspective, constraints on wh-movement, feature geometry and syllable structure, and morphological development are highlighted. From a general cognitive point of view, the emergence of tense and aspect marking from a prototype account of inherent lexical aspect is reviewed. Reference is made to general cognitive learning theories and to sociocultural theory. The article also reviews individual differences research, specifically the debate on the critical period in adult language acquisition, motivation, and memory. Finally, the article discusses the relationship between SLA research and second language pedagogy. Suggestions for further reading from recent handbooks on SLA are provided. WIREs Cogn Sci 2011, 2, 277-286. DOI: 10.1002/wcs.106. For further resources related to this article, please visit the WIREs website.

  16. Partially integrated exhaust manifold

    SciTech Connect

    Hayman, Alan W; Baker, Rodney E

    2015-01-20

    A partially integrated manifold assembly is disclosed which improves performance, reduces cost and provides efficient packaging of engine components. The partially integrated manifold assembly includes a first leg extending from a first port and terminating at a mounting flange for an exhaust gas control valve. Multiple additional legs (depending on the total number of cylinders) are integrally formed with the cylinder head assembly and extend from the ports of the associated cylinder and terminate at an exit port flange. These additional legs are longer than the first leg such that the exit port flange is spaced apart from the mounting flange. This configuration provides increased packaging space adjacent the first leg for any valving that may be required to control the direction and destination of exhaust flow in recirculation to an EGR valve or downstream to a catalytic converter.

  17. Partially coherent ultrafast spectrography

    PubMed Central

    Bourassin-Bouchet, C.; Couprie, M.-E.

    2015-01-01

    Modern ultrafast metrology relies on the postulate that the pulse to be measured is fully coherent, that is, that it can be completely described by its spectrum and spectral phase. However, synthesizing fully coherent pulses is not always possible in practice, especially in the domain of emerging ultrashort X-ray sources where temporal metrology is strongly needed. Here we demonstrate how frequency-resolved optical gating (FROG), the first and one of the most widespread techniques for pulse characterization, can be adapted to measure partially coherent pulses even down to the attosecond timescale. No modification of experimental apparatuses is required; only the processing of the measurement changes. To do so, we take our inspiration from other branches of physics where partial coherence is routinely dealt with, such as quantum optics and coherent diffractive imaging. This will have important and immediate applications, such as enabling the measurement of X-ray free-electron laser pulses despite timing jitter. PMID:25744080

  18. Laparoscopic partial splenic resection.

    PubMed

    Uranüs, S; Pfeifer, J; Schauer, C; Kronberger, L; Rabl, H; Ranftl, G; Hauser, H; Bahadori, K

    1995-04-01

    Twenty domestic pigs with an average weight of 30 kg were subjected to laparoscopic partial splenic resection with the aim of determining the feasibility, reliability, and safety of this procedure. Unlike the human spleen, the pig spleen is perpendicular to the body's long axis, and it is long and slender. The parenchyma was severed through the middle third, where the organ is thickest. An 18-mm trocar with a 60-mm Endopath linear cutter was used for the resection. The tissue was removed with a 33-mm trocar. The operation was successfully concluded in all animals. No capsule tears occurred as a result of applying the stapler. Optimal hemostasis was achieved on the resected edges in all animals. Although these findings cannot be extended to human surgery without reservations, we suggest that diagnostic partial resection and minor cyst resections are ideal initial indications for this minimally invasive approach.

  19. Partially coherent ultrafast spectrography

    NASA Astrophysics Data System (ADS)

    Bourassin-Bouchet, C.; Couprie, M.-E.

    2015-03-01

    Modern ultrafast metrology relies on the postulate that the pulse to be measured is fully coherent, that is, that it can be completely described by its spectrum and spectral phase. However, synthesizing fully coherent pulses is not always possible in practice, especially in the domain of emerging ultrashort X-ray sources where temporal metrology is strongly needed. Here we demonstrate how frequency-resolved optical gating (FROG), the first and one of the most widespread techniques for pulse characterization, can be adapted to measure partially coherent pulses even down to the attosecond timescale. No modification of experimental apparatuses is required; only the processing of the measurement changes. To do so, we take our inspiration from other branches of physics where partial coherence is routinely dealt with, such as quantum optics and coherent diffractive imaging. This will have important and immediate applications, such as enabling the measurement of X-ray free-electron laser pulses despite timing jitter.

  20. 2000-fold parallelized dual-color STED fluorescence nanoscopy.

    PubMed

    Bergermann, Fabian; Alber, Lucas; Sahl, Steffen J; Engelhardt, Johann; Hell, Stefan W

    2015-01-12

    Stimulated Emission Depletion (STED) nanoscopy enables multi-color fluorescence imaging at the nanometer scale. Its typical single-point scanning implementation can lead to long acquisition times. In order to unleash the full spatiotemporal resolution potential of STED nanoscopy, parallelized scanning is mandatory. Here we present a dual-color STED nanoscope utilizing two orthogonally crossed standing light waves as a fluorescence switch-off pattern, and providing a resolving power down to 30 nm. We demonstrate the imaging capabilities in a biological context for immunostained vimentin fibers in a circular field of view of 20 µm diameter at 2000-fold parallelization (i.e. 2000 "intensity minima"). The technical feasibility of massively parallelizing STED without significant compromises in resolution heralds video-rate STED nanoscopy of large fields of view, pending the availability of suitable high-speed detectors.

  1. The role of partial knowledge in statistical word learning.

    PubMed

    Yurovsky, Daniel; Fricker, Damian C; Yu, Chen; Smith, Linda B

    2014-02-01

    A critical question about the nature of human learning is whether it is an all-or-none or a gradual, accumulative process. Associative and statistical theories of word learning rely critically on the latter assumption: that the process of learning a word's meaning unfolds over time. That is, learning the correct referent for a word involves the accumulation of partial knowledge across multiple instances. Some theories also make an even stronger claim: partial knowledge of one word-object mapping can speed up the acquisition of other word-object mappings. We present three experiments that test and verify these claims by exposing learners to two consecutive blocks of cross-situational learning, in which half of the words and objects in the second block were those that participants failed to learn in Block 1. In line with an accumulative account, re-exposure to these mis-mapped items accelerated the acquisition of both previously experienced mappings and wholly new word-object mappings. But how does partial knowledge of some words speed the acquisition of others? We consider two hypotheses. First, partial knowledge of a word could reduce the amount of information required for it to reach threshold, and the supra-threshold mapping could subsequently aid in the acquisition of new mappings. Alternatively, partial knowledge of a word's meaning could be useful for disambiguating the meanings of other words even before the threshold of learning is reached. We construct and compare computational models embodying each of these hypotheses and show that the latter provides a better explanation of the empirical data.
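
    A toy sketch of the contrast between the two hypotheses may help (illustrative only; the authors' computational models are more elaborate). Co-occurrence counts accumulate across ambiguous trials, and under the partial-knowledge account even a sub-threshold mapping for one word narrows the candidate referents for the others:

        from collections import defaultdict

        counts = defaultdict(int)          # accumulated word-object co-occurrences

        def observe(words, objects):
            # One ambiguous trial: every word co-occurs with every object shown.
            for w in words:
                for o in objects:
                    counts[(w, o)] += 1

        def guess(words, objects, use_partial=True):
            # Pick a referent for each word. With use_partial=True, weak
            # (sub-threshold) knowledge of one word already removes its likely
            # referent from the candidate set for the remaining words.
            remaining, mapping = list(objects), {}
            for w in sorted(words, key=lambda w: -max(counts[(w, o)] for o in objects)):
                best = max(remaining, key=lambda o: counts[(w, o)])
                mapping[w] = best
                if use_partial:
                    remaining.remove(best)
            return mapping

        observe(["dax", "blick"], ["cup", "ball"])
        observe(["dax"], ["cup"])          # partial evidence for dax -> cup
        print(guess(["dax", "blick"], ["cup", "ball"]))   # "blick" resolved to "ball"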

  2. The role of partial knowledge in statistical word learning

    PubMed Central

    Fricker, Damian C.; Yu, Chen; Smith, Linda B.

    2013-01-01

    A critical question about the nature of human learning is whether it is an all-or-none or a gradual, accumulative process. Associative and statistical theories of word learning rely critically on the latter assumption: that the process of learning a word's meaning unfolds over time. That is, learning the correct referent for a word involves the accumulation of partial knowledge across multiple instances. Some theories also make an even stronger claim: Partial knowledge of one word–object mapping can speed up the acquisition of other word–object mappings. We present three experiments that test and verify these claims by exposing learners to two consecutive blocks of cross-situational learning, in which half of the words and objects in the second block were those that participants failed to learn in Block 1. In line with an accumulative account, re-exposure to these mis-mapped items accelerated the acquisition of both previously experienced mappings and wholly new word–object mappings. But how does partial knowledge of some words speed the acquisition of others? We consider two hypotheses. First, partial knowledge of a word could reduce the amount of information required for it to reach threshold, and the supra-threshold mapping could subsequently aid in the acquisition of new mappings. Alternatively, partial knowledge of a word's meaning could be useful for disambiguating the meanings of other words even before the threshold of learning is reached. We construct and compare computational models embodying each of these hypotheses and show that the latter provides a better explanation of the empirical data. PMID:23702980

  3. The role of partial knowledge in statistical word learning.

    PubMed

    Yurovsky, Daniel; Fricker, Damian C; Yu, Chen; Smith, Linda B

    2014-02-01

    A critical question about the nature of human learning is whether it is an all-or-none or a gradual, accumulative process. Associative and statistical theories of word learning rely critically on the latter assumption: that the process of learning a word's meaning unfolds over time. That is, learning the correct referent for a word involves the accumulation of partial knowledge across multiple instances. Some theories also make an even stronger claim: partial knowledge of one word-object mapping can speed up the acquisition of other word-object mappings. We present three experiments that test and verify these claims by exposing learners to two consecutive blocks of cross-situational learning, in which half of the words and objects in the second block were those that participants failed to learn in Block 1. In line with an accumulative account, re-exposure to these mis-mapped items accelerated the acquisition of both previously experienced mappings and wholly new word-object mappings. But how does partial knowledge of some words speed the acquisition of others? We consider two hypotheses. First, partial knowledge of a word could reduce the amount of information required for it to reach threshold, and the supra-threshold mapping could subsequently aid in the acquisition of new mappings. Alternatively, partial knowledge of a word's meaning could be useful for disambiguating the meanings of other words even before the threshold of learning is reached. We construct and compare computational models embodying each of these hypotheses and show that the latter provides a better explanation of the empirical data. PMID:23702980

  4. Frames of reference in spatial language acquisition.

    PubMed

    Shusterman, Anna; Li, Peggy

    2016-08-01

    Languages differ in how they encode spatial frames of reference. It is unknown how children acquire the particular frame-of-reference terms in their language (e.g., left/right, north/south). The present paper uses a word-learning paradigm to investigate 4-year-old English-speaking children's acquisition of such terms. In Part I, with five experiments, we contrasted children's acquisition of novel word pairs meaning left-right and north-south to examine their initial hypotheses and the relative ease of learning the meanings of these terms. Children interpreted ambiguous spatial terms as having environment-based meanings akin to north and south, and they readily learned and generalized north-south meanings. These studies provide the first direct evidence that children invoke geocentric representations in spatial language acquisition. However, the studies leave unanswered how children ultimately acquire "left" and "right." In Part II, with three more experiments, we investigated why children struggle to master body-based frame-of-reference words. Children successfully learned "left" and "right" when the novel words were systematically introduced on their own bodies and extended these words to novel (intrinsic and relative) uses; however, they had difficulty learning to talk about the left and right sides of a doll. This difficulty was paralleled in identifying the left and right sides of the doll in a non-linguistic memory task. In contrast, children had no difficulties learning to label the front and back sides of a doll. These studies begin to paint a detailed account of the acquisition of spatial terms in English, and provide insights into the origins of diverse spatial reference frames in the world's languages. PMID:27423134

  5. Frames of reference in spatial language acquisition.

    PubMed

    Shusterman, Anna; Li, Peggy

    2016-08-01

    Languages differ in how they encode spatial frames of reference. It is unknown how children acquire the particular frame-of-reference terms in their language (e.g., left/right, north/south). The present paper uses a word-learning paradigm to investigate 4-year-old English-speaking children's acquisition of such terms. In Part I, with five experiments, we contrasted children's acquisition of novel word pairs meaning left-right and north-south to examine their initial hypotheses and the relative ease of learning the meanings of these terms. Children interpreted ambiguous spatial terms as having environment-based meanings akin to north and south, and they readily learned and generalized north-south meanings. These studies provide the first direct evidence that children invoke geocentric representations in spatial language acquisition. However, the studies leave unanswered how children ultimately acquire "left" and "right." In Part II, with three more experiments, we investigated why children struggle to master body-based frame-of-reference words. Children successfully learned "left" and "right" when the novel words were systematically introduced on their own bodies and extended these words to novel (intrinsic and relative) uses; however, they had difficulty learning to talk about the left and right sides of a doll. This difficulty was paralleled in identifying the left and right sides of the doll in a non-linguistic memory task. In contrast, children had no difficulties learning to label the front and back sides of a doll. These studies begin to paint a detailed account of the acquisition of spatial terms in English, and provide insights into the origins of diverse spatial reference frames in the world's languages.

  6. Parallel computation with the force

    NASA Technical Reports Server (NTRS)

    Jordan, H. F.

    1985-01-01

    A methodology, called the force, supports the construction of programs to be executed in parallel by a force of processes. The number of processes in the force is unspecified, but potentially very large. The force idea is embodied in a set of macros which produce multiprocessor FORTRAN code and has been studied on two shared memory multiprocessors of fairly different character. The method has simplified the writing of highly parallel programs within a limited class of parallel algorithms and is being extended to cover a broader class. The individual parallel constructs which comprise the force methodology are discussed. Of central concern are their semantics, implementation on different architectures and performance implications.
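
    The force idea can be sketched in miniature (here in Python with shared memory standing in for the original FORTRAN macros; the construct names are illustrative): the same program body is executed by every process in the force, loop iterations are shared out by process index, and a barrier marks a force-wide synchronization point.

        import multiprocessing as mp

        def body(me, nproc, barrier, result):
            # "Prescheduled DO": process `me` takes iterations me, me+nproc, ...
            for i in range(me, 100, nproc):
                result[i] = i * i
            barrier.wait()                      # force-wide synchronization point
            if me == 0:                         # a section executed by one process
                print("sum of squares:", sum(result[:]))

        if __name__ == "__main__":
            nproc = 4                           # the force size is a free parameter
            result = mp.Array("l", 100)         # shared memory, as in the original model
            barrier = mp.Barrier(nproc)
            procs = [mp.Process(target=body, args=(me, nproc, barrier, result))
                     for me in range(nproc)]
            for p in procs: p.start()
            for p in procs: p.join()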

  7. Parallel processing and expert systems

    NASA Technical Reports Server (NTRS)

    Yan, Jerry C.; Lau, Sonie

    1991-01-01

    Whether it be monitoring the thermal subsystem of Space Station Freedom, or controlling the navigation of the autonomous rover on Mars, NASA missions in the 90's cannot enjoy an increased level of autonomy without the efficient use of expert systems. Merely increasing the computational speed of uniprocessors may not be able to guarantee that real-time demands are met for large expert systems. Speed-up via parallel processing must be pursued alongside the optimization of sequential implementations. Prototypes of parallel expert systems have been built at universities and industrial labs in the U.S. and Japan. The state-of-the-art research in progress related to parallel execution of expert systems was surveyed. The survey is divided into three major sections: (1) multiprocessors for parallel expert systems; (2) parallel languages for symbolic computations; and (3) measurements of the parallelism of expert systems. Results to date indicate that the parallelism achieved for these systems is small. In order to obtain greater speed-ups, data parallelism and application parallelism must be exploited.

  8. Parallel Programming in the Age of Ubiquitous Parallelism

    NASA Astrophysics Data System (ADS)

    Pingali, Keshav

    2014-04-01

    Multicore and manycore processors are now ubiquitous, but parallel programming remains as difficult as it was 30-40 years ago. During this time, our community has explored many promising approaches including functional and dataflow languages, logic programming, and automatic parallelization using program analysis and restructuring, but none of these approaches has succeeded except in a few niche application areas. In this talk, I will argue that these problems arise largely from the computation-centric foundations and abstractions that we currently use to think about parallelism. In their place, I will propose a novel data-centric foundation for parallel programming called the operator formulation in which algorithms are described in terms of actions on data. The operator formulation shows that a generalized form of data-parallelism called amorphous data-parallelism is ubiquitous even in complex, irregular graph applications such as mesh generation/refinement/partitioning and SAT solvers. Regular algorithms emerge as a special case of irregular ones, and many application-specific optimization techniques can be generalized to a broader context. The operator formulation also leads to a structural analysis of algorithms called TAO-analysis that provides implementation guidelines for exploiting parallelism efficiently. Finally, I will describe a system called Galois based on these ideas for exploiting amorphous data-parallelism on multicores and GPUs.
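
    A minimal worklist sketch conveys the operator formulation (illustrative; in systems such as Galois, activities whose neighborhoods do not overlap can execute in parallel, whereas this sketch is sequential). The algorithm is expressed as an operator applied to active nodes; applying it updates a neighborhood and may create new active nodes:

        import heapq

        def run_operator(graph, source):
            # The "algorithm" here is edge relaxation (shortest paths); the point
            # is the control structure: a worklist of active elements, not a loop
            # over the whole data set.
            dist = {v: float("inf") for v in graph}
            dist[source] = 0
            worklist = [(0, source)]                  # initially active elements
            while worklist:
                d, u = heapq.heappop(worklist)
                if d > dist[u]:
                    continue                          # stale activity, skip it
                for v, w in graph[u]:                 # operator: relax the neighborhood
                    if d + w < dist[v]:
                        dist[v] = d + w
                        heapq.heappush(worklist, (dist[v], v))   # new active node
            return dist

        g = {"a": [("b", 1), ("c", 4)], "b": [("c", 1)], "c": []}
        print(run_operator(g, "a"))   # {'a': 0, 'b': 1, 'c': 2}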

  9. High Performance Parallel Architectures

    NASA Technical Reports Server (NTRS)

    El-Ghazawi, Tarek; Kaewpijit, Sinthop

    1998-01-01

    Traditional remote sensing instruments are multispectral, where observations are collected at a few different spectral bands. Recently, many hyperspectral instruments, that can collect observations at hundreds of bands, have been operational. Furthermore, there have been ongoing research efforts on ultraspectral instruments that can produce observations at thousands of spectral bands. While these remote sensing technology developments hold great promise for new findings in the area of Earth and space science, they present many challenges. These include the need for faster processing of such increased data volumes, and methods for data reduction. Dimension Reduction is a spectral transformation, aimed at concentrating the vital information and discarding redundant data. One such transformation, which is widely used in remote sensing, is the Principal Components Analysis (PCA). This report summarizes our progress on the development of a parallel PCA and its implementation on two Beowulf cluster configurations: one with a fast Ethernet switch and the other with a Myrinet interconnection. Details of the implementation and performance results, for typical sets of multispectral and hyperspectral NASA remote sensing data, are presented and analyzed based on the algorithm requirements and the underlying machine configuration. It will be shown that the PCA application is quite challenging and hard to scale on Ethernet-based clusters. However, the measurements also show that a high-performance interconnection network, such as Myrinet, better matches the high communication demand of PCA and can lead to a more efficient PCA execution.
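
    The data-parallel core of PCA can be sketched as follows (a minimal stand-in using Python multiprocessing rather than the report's MPI/Beowulf setup): each worker reduces its slab of the data matrix to small partial sums, and only those accumulators are combined, which is why the communication pattern rather than the arithmetic dominates scalability.

        import numpy as np
        from multiprocessing import Pool

        def partial_sums(block):
            # block: an (n_i, bands) slab of the full data matrix
            return block.shape[0], block.sum(axis=0), block.T @ block

        def parallel_pca(data, nworkers=4):
            blocks = np.array_split(data, nworkers, axis=0)
            with Pool(nworkers) as pool:
                parts = pool.map(partial_sums, blocks)
            n = sum(p[0] for p in parts)              # reduce the small accumulators
            s = sum(p[1] for p in parts)
            ss = sum(p[2] for p in parts)
            mean = s / n
            cov = (ss - n * np.outer(mean, mean)) / (n - 1)   # bands x bands covariance
            evals, evecs = np.linalg.eigh(cov)                # principal components
            return evals[::-1], evecs[:, ::-1]

        if __name__ == "__main__":
            evals, evecs = parallel_pca(np.random.rand(10000, 32))
            print(evals[:3])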

  10. Trajectories in parallel optics.

    PubMed

    Klapp, Iftach; Sochen, Nir; Mendlovic, David

    2011-10-01

    In our previous work we showed the ability to improve the optical system's matrix condition by optical design, thereby improving its robustness to noise. It was shown that by using singular value decomposition, a target point-spread function (PSF) matrix can be defined for an auxiliary optical system, which works parallel to the original system to achieve such an improvement. In this paper, after briefly introducing the all-optics implementation of the auxiliary system, we show a method to decompose the target PSF matrix. This is done through a series of shifted responses of auxiliary optics (named trajectories), where a complicated hardware filter is replaced by postprocessing. This process manipulates the pixel confined PSF response of simple auxiliary optics, which in turn creates an auxiliary system with the required PSF matrix. This method is simulated on two space variant systems and reduces their system condition number from 18,598 to 197 and from 87,640 to 5.75, respectively. We perform a study of the latter result and show significant improvement in image restoration performance, in comparison to a system without auxiliary optics and to other previously suggested hybrid solutions. Image restoration results show that in a range of low signal-to-noise ratio values, the trajectories method gives a significant advantage over alternative approaches. A third space invariant study case is explored only briefly, and we present a significant improvement in the matrix condition number from 1.9160e+013 to 34,526.
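
    The linear-algebra core of the idea can be sketched numerically (illustrative only, not the optical implementation): given the original system matrix H with SVD H = U S V^T, choose the auxiliary target PSF matrix so that the combined system's singular values are lifted to a floor, which bounds the condition number of the sum.

        import numpy as np

        rng = np.random.default_rng(0)
        H = rng.normal(size=(64, 64))         # stand-in for a poorly conditioned PSF matrix
        U, s, Vt = np.linalg.svd(H)

        floor = 0.1 * s.max()                 # lift weak singular values to a floor
        s_aux = np.maximum(floor - s, 0.0)    # the auxiliary system supplies the deficit
        H_aux = (U * s_aux) @ Vt              # target PSF matrix for the auxiliary optics

        print(np.linalg.cond(H))              # original condition number
        print(np.linalg.cond(H + H_aux))      # combined system: at most s.max()/floor = 10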

  11. A transputer-based list mode parallel system for digital radiography with 2D silicon detectors

    SciTech Connect

    Conti, M.; Russo, P.; Scarlatella, A. (Dipt. di Scienze Fisiche and INFN); Del Guerra, A. (Dipt. di Fisica and INFN); Mazzeo, A.; Mazzocca, N.; Russo, S. (Dipt. di Informatica e Sistemistica)

    1993-08-01

    The authors believe that a dedicated parallel computer system can represent an effective and flexible approach to the problem of list mode acquisition and reconstruction of digital radiographic images obtained with a double-sided silicon microstrip detector. They present a Transputer-based implementation of a parallel system for the data acquisition and image reconstruction from a silicon crystal with 200 µm read-out pitch. They are currently developing a prototype of the system connected to a detector with a 10 mm² sensitive area.

  12. 75 FR 51416 - Defense Federal Acquisition Regulation Supplement; Acquisition of Commercial Items

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-08-20

    ... Defense Acquisition Regulations System 48 CFR Parts 202, 212, and 234 Defense Federal Acquisition Regulation Supplement; Acquisition of Commercial Items AGENCY: Defense Acquisition Regulations System... interim rule that amended the Defense Federal Acquisition Regulation Supplement (DFARS) to...

  13. Melancholia and partial insanity.

    PubMed

    Jackson, S W

    1983-04-01

    In the medical literature of the eighteenth century melancholia came to be defined as partial insanity. Seventeenth-century English law introduced the term and influenced later forensic concerns about the concept. But the history of melancholia reveals a gradual development of such a concept of limited derangement associated with the delusions usually cited in accounts of this disease. In the early nineteenth century the relationship of melancholia and this concept weakened and was gradually abandoned, the content of the syndrome of melancholia was reduced, and out of this complex process emerged the notion of monomania.

  14. Esthetic removable partial dentures.

    PubMed

    Ancowitz, Stephen

    2004-01-01

    This article provides information regarding the many ways that removable partial dentures (RPDs) may be used to solve restorative problems in the esthetic zone without displaying metal components or conspicuous acrylic resin flanges. The esthetic zone is defined and described, as are methods for recording it. Six dental categories are presented that assist the dentist in choosing a variety of RPD design concepts that may be used to avoid metal display while still satisfying basic principles of RPDs. New materials that may be utilized for optimal esthetics are presented and techniques for contouring acrylic resin bases and tinting denture bases are described.

  15. Smart Acquisition EELS

    NASA Astrophysics Data System (ADS)

    Sader, K.; Schaffer, B.; Vaughan, G.; Wang, P.; Bleloch, A. L.; Brydson, R.; Brown, A.

    2010-07-01

    Electron energy loss (EEL) spectroscopy and high angle annular dark field (HAADF) imaging in aberration-corrected electron microscopes are powerful techniques to determine the chemical composition and structure of materials at atomic resolution. We have implemented Smart Acquisition, a flexible system of scanning transmission electron microscopy (STEM) beam position control and EELS collection, on two aberration-corrected dedicated cold field emission gun (FEG) STEMs located at SuperSTEM, Daresbury Laboratory. This allows the collection of EEL spectra from spatially defined areas with a much lower electron dose than is possible with existing techniques such as spectrum imaging.

  16. Late Mitochondrial Acquisition, Really?

    PubMed Central

    Degli Esposti, Mauro

    2016-01-01

    This article provides a timely critique of a recent Nature paper by Pittis and Gabaldón that has suggested a late origin of mitochondria in eukaryote evolution. It shows that the inferred ancestry of many mitochondrial proteins has been incorrectly assigned by Pittis and Gabaldón to bacteria other than the aerobic proteobacteria from which the ancestor of mitochondria originates, thereby questioning the validity of their suggestion that mitochondrial acquisition may be a late event in eukaryote evolution. The analysis and approach presented here may guide future studies to resolve the true ancestry of mitochondria. PMID:27289097

  17. Data acquisition system

    DOEpatents

    Phillips, David T.

    1979-01-01

    A data acquisition system capable of resolving transient pulses in the subnanosecond range. A pulse in an information carrying medium such as light is transmitted through means which disperse the pulse, such as a fiber optic light guide which time-stretches optical pulses by chromatic dispersion. This time-stretched pulse is used as a sampling pulse and is modulated by the signal to be recorded. The modulated pulse may be further time-stretched prior to being recorded. The recorded modulated pulse is unfolded to derive the transient signal by utilizing the relationship of the time-stretching that occurred in the original pulse.

  18. Acquisition-Management Program

    NASA Technical Reports Server (NTRS)

    Avery, Don E.; Vann, A. Vernon; Jones, Richard H.; Rew, William E.

    1987-01-01

    NASA Acquisition Management Subsystem (AMS) is an integrated, NASA-wide, standard automated procurement program developed in 1985. Designed to provide each NASA installation with a procurement data-base concept with on-line terminals for managing, tracking, reporting, and controlling contractual actions and associated procurement data. Subsystem provides control, status, and reporting for various procurement areas. Purpose of standardization is to decrease the costs of procurement and of automatic-data-processing operation, increase procurement productivity, furnish accurate on-line management information, and improve customer support. Written in ADABAS NATURAL.

  19. First language acquisition.

    PubMed

    Goodluck, Helen

    2011-01-01

    This article reviews current approaches to first language acquisition, arguing in favor of the theory that attributes to the child an innate knowledge of universal grammar. Such knowledge can accommodate the systematic nature of children's non-adult linguistic behaviors. The relationships between performance devices (mechanisms for comprehension and production of speech), non-linguistic aspects of cognition, and child grammars are also discussed. WIREs Cogn Sci 2011 2 47-54 DOI: 10.1002/wcs.95 For further resources related to this article, please visit the WIREs website.

  20. Experts' Understanding of Partial Derivatives Using the Partial Derivative Machine

    ERIC Educational Resources Information Center

    Roundy, David; Weber, Eric; Dray, Tevian; Bajracharya, Rabindra R.; Dorko, Allison; Smith, Emily M.; Manogue, Corinne A.

    2015-01-01

    Partial derivatives are used in a variety of different ways within physics. Thermodynamics, in particular, uses partial derivatives in ways that students often find especially confusing. We are at the beginning of a study of the teaching of partial derivatives, with a goal of better aligning the teaching of multivariable calculus with the needs of…

  1. 48 CFR 18.113 - Interagency acquisitions.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 48 Federal Acquisition Regulations System 1 2013-10-01 2013-10-01 false Interagency acquisitions. 18.113 Section 18.113 Federal Acquisition Regulations System FEDERAL ACQUISITION REGULATION CONTRACTING METHODS AND CONTRACT TYPES EMERGENCY ACQUISITIONS Available Acquisition Flexibilities...

  2. 48 CFR 18.113 - Interagency acquisitions.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 48 Federal Acquisition Regulations System 1 2012-10-01 2012-10-01 false Interagency acquisitions. 18.113 Section 18.113 Federal Acquisition Regulations System FEDERAL ACQUISITION REGULATION CONTRACTING METHODS AND CONTRACT TYPES EMERGENCY ACQUISITIONS Available Acquisition Flexibilities...

  3. 48 CFR 18.113 - Interagency acquisitions.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 48 Federal Acquisition Regulations System 1 2014-10-01 2014-10-01 false Interagency acquisitions. 18.113 Section 18.113 Federal Acquisition Regulations System FEDERAL ACQUISITION REGULATION CONTRACTING METHODS AND CONTRACT TYPES EMERGENCY ACQUISITIONS Available Acquisition Flexibilities...

  4. 48 CFR 18.113 - Interagency acquisitions.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 48 Federal Acquisition Regulations System 1 2011-10-01 2011-10-01 false Interagency acquisitions. 18.113 Section 18.113 Federal Acquisition Regulations System FEDERAL ACQUISITION REGULATION CONTRACTING METHODS AND CONTRACT TYPES EMERGENCY ACQUISITIONS Available Acquisition Flexibilities...

  5. 48 CFR 1034.004 - Acquisition strategy.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... “lock in”. (b) The acquisition strategy shall be approved by a chartered interdisciplinary acquisition... 48 Federal Acquisition Regulations System 5 2013-10-01 2013-10-01 false Acquisition strategy. 1034... CATEGORIES OF CONTRACTING MAJOR SYSTEM ACQUISITION General 1034.004 Acquisition strategy. (a) A...

  6. 48 CFR 1034.004 - Acquisition strategy.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... “lock in”. (b) The acquisition strategy shall be approved by a chartered interdisciplinary acquisition... 48 Federal Acquisition Regulations System 5 2014-10-01 2014-10-01 false Acquisition strategy. 1034... CATEGORIES OF CONTRACTING MAJOR SYSTEM ACQUISITION General 1034.004 Acquisition strategy. (a) A...

  7. Is Titan Partially Differentiated?

    NASA Astrophysics Data System (ADS)

    Mitri, G.; Pappalardo, R. T.; Stevenson, D. J.

    2009-12-01

    The recent measurement of the gravity coefficients from the Radio Doppler data of the Cassini spacecraft has improved our knowledge of the interior structure of Titan (Rappaport et al. 2008 AGU, P21A-1343). The measured gravity field of Titan is dominated by near hydrostatic quadrupole components. We have used the measured gravitational coefficients, thermal models and the hydrostatic equilibrium theory to derive Titan's interior structure. The axial moment of inertia gives an indication of the degree of interior differentiation. The inferred axial moment of inertia, calculated using the quadrupole gravitational coefficients and the Radau-Darwin approximation, indicates that Titan is partially differentiated. If Titan is partially differentiated, then the interior must have avoided melting of the ice during its evolution. This suggests a relatively late formation of Titan to avoid the presence of short-lived radioisotopes (Al-26). This also suggests the onset of convection after accretion to efficiently remove heat from the interior. The outer layer is likely composed mainly of water in the solid phase. Thermal modeling indicates that water could also be present in the liquid phase, forming a subsurface ocean between an outer ice I shell and a high-pressure ice layer. Acknowledgments: This work was conducted at the Jet Propulsion Laboratory, California Institute of Technology, under contract with the National Aeronautics and Space Administration.
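
    The standard route from the quadrupole field to the degree of differentiation can be sketched as follows (the general rotational form is shown; the exact hydrostatic treatment for a synchronous satellite like Titan involves additional factors):

        % Sketch of the Radau-Darwin route from gravity to structure (the
        % paper's exact procedure is assumed, not quoted). With spin rate
        % \omega, radius R and mass M, the fluid Love number follows from the
        % quadrupole coefficient J_2 and the rotation parameter q:
        \[
          q = \frac{\omega^2 R^3}{G M}, \qquad k_f = \frac{3 J_2}{q},
        \]
        % and the Radau-Darwin approximation then gives the normalized axial
        % moment of inertia:
        \[
          \frac{C}{M R^2} = \frac{2}{3}\left[\,1 - \frac{2}{5}\sqrt{\frac{4 - k_f}{1 + k_f}}\,\right].
        \]
        % A homogeneous body gives C/MR^2 = 0.4; values moderately below 0.4,
        % as inferred for Titan, indicate only partial concentration of mass
        % toward the center, i.e. partial differentiation.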

  8. Partial Triceps Disruption

    PubMed Central

    Foulk, David M.; Galloway, Marc T.

    2011-01-01

    Partial triceps tendon disruptions are a rare injury that can lead to debilitating outcomes if misdiagnosed or managed inappropriately. The clinician should have a high index of suspicion when the mechanism involves a fall onto an outstretched arm and there is resultant elbow extension weakness along with pain and swelling. The most common location of rupture is at the tendon-osseous junction. This case report illustrates a partial triceps tendon disruption with involvement of, primarily, the medial head and the superficial expansion. Physical examination displayed weakness with resisted elbow extension in a flexed position over 90°. Radiographs revealed a tiny fleck of bone proximal to the olecranon, but this drastically underestimated the extent of injury upon surgical exploration. Magnetic resonance imaging is essential to ascertain the percentage involvement of the tendon; it can be used for patient education and subsequently to determine treatment recommendations. Although excellent at finding associated pathology, it may misjudge the size of the tear. As such, physicians must consider associated comorbidities and patient characteristics when formulating treatment plans. PMID:23016005

  9. The Galley Parallel File System

    NASA Technical Reports Server (NTRS)

    Nieuwejaar, Nils; Kotz, David

    1996-01-01

    Most current multiprocessor file systems are designed to use multiple disks in parallel, using the high aggregate bandwidth to meet the growing I/O requirements of parallel scientific applications. Many multiprocessor file systems provide applications with a conventional Unix-like interface, allowing the application to access multiple disks transparently. This interface conceals the parallelism within the file system, increasing the ease of programmability, but making it difficult or impossible for sophisticated programmers and libraries to use knowledge about their I/O needs to exploit that parallelism. In addition to providing an insufficient interface, most current multiprocessor file systems are optimized for a different workload than they are being asked to support. We introduce Galley, a new parallel file system that is intended to efficiently support realistic scientific multiprocessor workloads. We discuss Galley's file structure and application interface, as well as the performance advantages offered by that interface.

  10. Parallel contingency statistics with Titan.

    SciTech Connect

    Thompson, David C.; Pebay, Philippe Pierre

    2009-09-01

    This report summarizes existing statistical engines in VTK/Titan and presents the recently parallelized contingency statistics engine. It is a sequel to [PT08] and [BPRT09], which studied the parallel descriptive, correlative, multi-correlative, and principal component analysis engines. The ease of use of this new parallel engine is illustrated by means of C++ code snippets. Furthermore, this report justifies the design of these engines with parallel scalability in mind; however, the very nature of contingency tables prevents this new engine from exhibiting the optimal parallel speed-up that the aforementioned engines do. This report therefore discusses the design trade-offs we made and studies performance with up to 200 processors.
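
    A map-reduce sketch of a parallel contingency engine (illustrative, not the VTK/Titan code) shows where the scalability limit comes from: the per-process tables are cheap to build, but the reduction step must merge tables whose size grows with the number of distinct category pairs.

        from collections import Counter
        from multiprocessing import Pool

        def count_pairs(rows):
            # Map step: each process tabulates the (x, y) category pairs in its slab.
            return Counter((x, y) for x, y in rows)

        def contingency_table(rows, nworkers=4):
            slabs = [rows[i::nworkers] for i in range(nworkers)]
            with Pool(nworkers) as pool:
                tables = pool.map(count_pairs, slabs)
            total = Counter()
            for t in tables:            # reduce step: the table merge is the part
                total.update(t)         # whose cost grows with table size
            return total

        if __name__ == "__main__":
            data = [("a", 0), ("a", 1), ("b", 0)] * 1000
            print(contingency_table(data).most_common(3))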

  11. Data-acquisition systems

    SciTech Connect

    Cyborski, D.R.; Teh, K.M.

    1995-08-01

    Up to now, DAPHNE, the data-acquisition system developed for ATLAS, has been used routinely for experiments at ATLAS and the Dynamitron. More recently, the Division implemented 2 MSU/DAPHNE systems. The MSU/DAPHNE system is a hybrid data-acquisition system which combines the front-end of the Michigan State University (MSU) DA system with the traditional DAPHNE back-end. The MSU front-end is based on commercially available modules. This alleviates the problems encountered with the DAPHNE front-end, which is based on custom designed electronics. The first MSU system was obtained for the APEX experiment and was used there successfully. A second MSU front-end, purchased as a backup for the APEX experiment, was installed as a fully-independent second MSU/DAPHNE system with the procurement of a DEC 3000 Alpha host computer, and was used successfully for data-taking in an experiment at ATLAS. Additional hardware for a third system was bought and will be installed. With the availability of 2 MSU/DAPHNE systems in addition to the existing APEX setup, it is planned that the existing DAPHNE front-end will be decommissioned.

  12. Unsupervised Language Acquisition

    NASA Astrophysics Data System (ADS)

    de Marcken, Carl

    1996-11-01

    This thesis presents a computational theory of unsupervised language acquisition, precisely defining procedures for learning language from ordinary spoken or written utterances, with no explicit help from a teacher. The theory is based heavily on concepts borrowed from machine learning and statistical estimation. In particular, learning takes place by fitting a stochastic, generative model of language to the evidence. Much of the thesis is devoted to explaining conditions that must hold for this general learning strategy to arrive at linguistically desirable grammars. The thesis introduces a variety of technical innovations, among them a common representation for evidence and grammars, and a learning strategy that separates the "content" of linguistic parameters from their representation. Algorithms based on it suffer from few of the search problems that have plagued other computational approaches to language acquisition. The theory has been tested on problems of learning vocabularies and grammars from unsegmented text and continuous speech, and mappings between sound and representations of meaning. It performs extremely well on various objective criteria, acquiring knowledge that causes it to assign almost exactly the same structure to utterances as humans do. This work has application to data compression, language modeling, speech recognition, machine translation, information retrieval, and other tasks that rely on either structural or stochastic descriptions of language.

  13. Problem size, parallel architecture and optimal speedup

    NASA Technical Reports Server (NTRS)

    Nicol, David M.; Willard, Frank H.

    1987-01-01

    The communication and synchronization overhead inherent in parallel processing can lead to situations where adding processors to the solution method actually increases execution time. Problem type, problem size, and architecture type all affect the optimal number of processors to employ. The numerical solution of an elliptic partial differential equation is examined in order to study the relationship between problem size and architecture. The equation's domain is discretized into n^2 grid points which are divided into partitions and mapped onto the individual processor memories. The relationships between grid size, stencil type, partitioning strategy, processor execution time, and communication network type are analytically quantified. In so doing, the optimal number of processors to assign to the solution was determined, and the analysis identified (1) the smallest grid size which fully benefits from using all available processors, (2) the leverage on performance given by increasing processor speed or communication network speed, and (3) the suitability of various architectures for large numerical problems.
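
    A toy numerical version of this trade-off (with assumed cost constants) makes the behavior concrete: compute time falls as 1/p while communication grows with p, so each grid size n has a finite optimal processor count, beyond which adding processors slows the solution down.

        def T(p, n, t_calc=1.0, t_msg=500.0):
            # Total time for an n-by-n grid split across p processors.
            compute = (n * n / p) * t_calc        # perfectly divided grid-point work
            communicate = t_msg * (p ** 0.5)      # e.g. boundary exchange on a mesh network
            return compute + communicate

        def optimal_p(n, pmax=4096):
            # Brute-force search for the processor count minimizing T.
            return min(range(1, pmax + 1), key=lambda p: T(p, n))

        for n in (32, 128, 512):
            print(n, optimal_p(n))        # optimal p grows with problem size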

  14. Problem size, parallel architecture, and optimal speedup

    NASA Technical Reports Server (NTRS)

    Nicol, David M.; Willard, Frank H.

    1988-01-01

    The communication and synchronization overhead inherent in parallel processing can lead to situations where adding processors to the solution method actually increases execution time. Problem type, problem size, and architecture type all affect the optimal number of processors to employ. The numerical solution of an elliptic partial differential equation is examined in order to study the relationship between problem size and architecture. The equation's domain is discretized into n^2 grid points which are divided into partitions and mapped onto the individual processor memories. The relationships between grid size, stencil type, partitioning strategy, processor execution time, and communication network type are analytically quantified. In so doing, the optimal number of processors to assign to the solution was determined, and the analysis identified (1) the smallest grid size which fully benefits from using all available processors, (2) the leverage on performance given by increasing processor speed or communication network speed, and (3) the suitability of various architectures for large numerical problems.

  15. Cloud parallel processing of tandem mass spectrometry based proteomics data.

    PubMed

    Mohammed, Yassene; Mostovenko, Ekaterina; Henneman, Alex A; Marissen, Rob J; Deelder, André M; Palmblad, Magnus

    2012-10-01

    Data analysis in mass spectrometry based proteomics struggles to keep pace with the advances in instrumentation and the increasing rate of data acquisition. Analyzing this data involves multiple steps requiring diverse software, using different algorithms and data formats. Speed and performance of the mass spectral search engines are continuously improving, though not always quickly enough to meet the challenges posed by the volume of acquired data. Improving and parallelizing the search algorithms is one possibility; data decomposition presents another, simpler strategy for introducing parallelism. We describe a general method for parallelizing identification of tandem mass spectra using data decomposition that keeps the search engine intact and wraps the parallelization around it. We introduce two algorithms for decomposing mzXML files and recomposing resulting pepXML files. This makes the approach applicable to different search engines, including those relying on sequence databases and those searching spectral libraries. We use cloud computing to deliver the computational power and scientific workflow engines to interface and automate the different processing steps. We show how to leverage these technologies to achieve faster data analysis in proteomics and present three scientific workflows for parallel database as well as spectral library search using our data decomposition programs, X!Tandem and SpectraST.
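
    The decomposition strategy can be sketched as follows (illustrative; the search function below is a placeholder for an unmodified engine such as X!Tandem or SpectraST, and the in-memory chunking stands in for the paper's mzXML/pepXML decomposition and recomposition programs):

        from concurrent.futures import ProcessPoolExecutor

        def search(spectra):
            # Placeholder for a black-box engine invoked on one chunk of spectra;
            # here it just tags every spectrum with a dummy identification.
            return [(s["scan"], "PEPTIDE") for s in spectra]

        def parallel_search(spectra, nchunks=8):
            chunks = [spectra[i::nchunks] for i in range(nchunks)]    # decomposition
            with ProcessPoolExecutor(max_workers=nchunks) as pool:
                partial = pool.map(search, chunks)                    # engines run unchanged
            return [hit for hits in partial for hit in hits]          # recomposition

        if __name__ == "__main__":
            spectra = [{"scan": i, "peaks": []} for i in range(1000)]
            print(len(parallel_search(spectra)))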

  16. Parallel Element Agglomeration Algebraic Multigrid and Upscaling Library

    SciTech Connect

    2015-02-19

    ParFELAG is a parallel distributed memory C++ library for numerical upscaling of finite element discretizations. It provides optimal-complexity algorithms to build multilevel hierarchies and solvers that can be used for solving a wide class of partial differential equations (elliptic, hyperbolic, saddle point problems) on general unstructured meshes (under the assumption that the topology of the agglomerated entities is correct). Additionally, a novel multilevel solver for saddle point problems with divergence constraint is implemented.

  17. A 3D-elastography-guided system for laparoscopic partial nephrectomies

    NASA Astrophysics Data System (ADS)

    Stolka, Philipp J.; Keil, Matthias; Sakas, Georgios; McVeigh, Elliot; Allaf, Mohamad E.; Taylor, Russell H.; Boctor, Emad M.

    2010-02-01

    We present an image-guided intervention system based on tracked 3D elasticity imaging (EI) to provide a novel interventional modality for registration with pre-operative CT. The system can be integrated in both laparoscopic and robotic partial nephrectomy scenarios, where this new use of EI makes exact intra-operative execution of pre-operative planning possible. Quick acquisition and registration of 3D-B-Mode and 3D-EI volume data allows intra-operative registration with CT and thus with pre-defined target and critical regions (e.g. tumors and vasculature). Their real-time location information is then overlaid onto a tracked endoscopic video stream to help the surgeon avoid vessel damage and still completely resect tumors including safety boundaries. The presented system promises to increase the success rate for partial nephrectomies and potentially for a wide range of other laparoscopic and robotic soft tissue interventions. This is enabled by the three components of robust real-time elastography, fast 3D-EI/CT registration, and intra-operative tracking. With high quality, robust strain imaging (through a combination of parallelized 2D-EI, optimal frame pair selection, and optimized palpation motions), kidney tumors that were previously unregistrable or sometimes even considered isoechoic with conventional B-mode ultrasound can now be imaged reliably in interventional settings. Furthermore, this allows the transformation of planning CT data of kidney ROIs to the intra-operative setting with a markerless mutual-information-based registration, using EM sensors for intraoperative motion tracking. Overall, we present a complete procedure and its development, including new phantom models - both ex vivo and synthetic - to validate image-guided technology and training, tracked elasticity imaging, real-time EI frame selection, registration of CT with EI, and finally a real-time, distributed software architecture. Together, the system allows the surgeon to concentrate

  18. Partially segmented deformable mirror

    DOEpatents

    Bliss, E.S.; Smith, J.R.; Salmon, J.T.; Monjes, J.A.

    1991-05-21

    A partially segmented deformable mirror is formed with a mirror plate having a smooth and continuous front surface and a plurality of actuators attached to its back surface. The back surface is divided into triangular areas which are mutually separated by grooves. The grooves are deep enough to make the plate deformable and the actuators for displacing the mirror plate in the direction normal to its surface are inserted in the grooves at the vertices of the triangular areas. Each actuator includes a transducer supported by a receptacle with outer shells having outer surfaces. The vertices have inner walls which are approximately perpendicular to the mirror surface and make planar contacts with the outer surfaces of the outer shells. The adhesive which is used on these contact surfaces tends to contract when it dries but the outer shells can bend and serve to minimize the tendency of the mirror to warp. 5 figures.

  19. Partially segmented deformable mirror

    DOEpatents

    Bliss, Erlan S.; Smith, James R.; Salmon, J. Thaddeus; Monjes, Julio A.

    1991-01-01

    A partially segmented deformable mirror is formed with a mirror plate having a smooth and continuous front surface and a plurality of actuators attached to its back surface. The back surface is divided into triangular areas which are mutually separated by grooves. The grooves are deep enough to make the plate deformable and the actuators for displacing the mirror plate in the direction normal to its surface are inserted in the grooves at the vertices of the triangular areas. Each actuator includes a transducer supported by a receptacle with outer shells having outer surfaces. The vertices have inner walls which are approximately perpendicular to the mirror surface and make planar contacts with the outer surfaces of the outer shells. The adhesive which is used on these contact surfaces tends to contract when it dries but the outer shells can bend and serve to minimize the tendency of the mirror to warp.

  20. Partial oxidation catalyst

    DOEpatents

    Krumpelt, Michael; Ahmed, Shabbir; Kumar, Romesh; Doshi, Rajiv

    2000-01-01

    A two-part catalyst comprising a dehydrogenation portion and an oxide-ion conducting portion. The dehydrogenation portion is a group VIII metal and the oxide-ion conducting portion is selected from a ceramic oxide crystallizing in the fluorite or perovskite structure. There is also disclosed a method of forming a hydrogen rich gas from a source of hydrocarbon fuel in which the hydrocarbon fuel contacts a two-part catalyst comprising a dehydrogenation portion and an oxide-ion conducting portion at a temperature not less than about 400.degree. C. for a time sufficient to generate the hydrogen rich gas while maintaining CO content less than about 5 volume percent. There is also disclosed a method of forming partially oxidized hydrocarbons from ethane, in which ethane gas contacts a two-part catalyst comprising a dehydrogenation portion and an oxide-ion conducting portion for a time and at a temperature sufficient to form an oxide.

  1. Parallel processing and expert systems

    NASA Technical Reports Server (NTRS)

    Lau, Sonie; Yan, Jerry C.

    1991-01-01

    Whether it be monitoring the thermal subsystem of Space Station Freedom, or controlling the navigation of the autonomous rover on Mars, NASA missions in the 1990s cannot enjoy an increased level of autonomy without the efficient implementation of expert systems. Merely increasing the computational speed of uniprocessors may not be able to guarantee that real-time demands are met for larger systems. Speedup via parallel processing must be pursued alongside the optimization of sequential implementations. Prototypes of parallel expert systems have been built at universities and industrial laboratories in the U.S. and Japan. The state-of-the-art research in progress related to parallel execution of expert systems is surveyed. The survey discusses multiprocessors for expert systems, parallel languages for symbolic computations, and mapping expert systems to multiprocessors. Results to date indicate that the parallelism achieved for these systems is small. The main reasons are (1) the body of knowledge applicable in any given situation and the amount of computation executed by each rule firing are small, (2) dividing the problem solving process into relatively independent partitions is difficult, and (3) implementation decisions that enable expert systems to be incrementally refined hamper compile-time optimization. In order to obtain greater speedups, data parallelism and application parallelism must be exploited.

  2. Parallel incremental compilation. Doctoral thesis

    SciTech Connect

    Gafter, N.M.

    1990-06-01

    The time it takes to compile a large program has been a bottleneck in the software development process. When an interactive programming environment with an incremental compiler is used, compilation speed becomes even more important, but existing incremental compilers are very slow for some types of program changes. We describe a set of techniques that enable incremental compilation to exploit fine-grained concurrency in a shared-memory multi-processor and achieve asymptotic improvement over sequential algorithms. Because parallel non-incremental compilation is a special case of parallel incremental compilation, the design of a parallel compiler is a corollary of our result. Instead of running the individual phases concurrently, our design specifies compiler phases that are mutually sequential. However, each phase is designed to exploit fine-grained parallelism. By allowing each phase to present its output as a complete structure rather than as a stream of data, we can apply techniques such as parallel prefix and parallel divide-and-conquer, and we can construct applicative data structures to achieve sublinear execution time. Parallel algorithms for each phase of a compiler are presented to demonstrate that a complete incremental compiler can achieve execution time that is asymptotically less than sequential algorithms.

  3. Template based parallel checkpointing in a massively parallel computer system

    DOEpatents

    Archer, Charles Jens; Inglett, Todd Alan

    2009-01-13

    A method and apparatus for a template based parallel checkpoint save for a massively parallel super computer system using a parallel variation of the rsync protocol, and network broadcast. In preferred embodiments, the checkpoint data for each node is compared to a template checkpoint file that resides in the storage and that was previously produced. Embodiments herein greatly decrease the amount of data that must be transmitted and stored for faster checkpointing and increased efficiency of the computer system. Embodiments are directed to a parallel computer system with nodes arranged in a cluster with a high speed interconnect that can perform broadcast communication. The checkpoint contains a set of actual small data blocks with their corresponding checksums from all nodes in the system. The data blocks may be compressed using conventional non-lossy data compression algorithms to further reduce the overall checkpoint size.
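
    A single-node sketch of the rsync-like comparison (assumed details) conveys the saving: checkpoint state is cut into fixed-size blocks, each block's checksum is compared against the template checkpoint, and only the blocks that differ need to be transmitted and stored.

        import hashlib

        BLOCK = 4096

        def checksums(data: bytes):
            # Per-block checksums of a previously stored template checkpoint.
            return [hashlib.md5(data[i:i + BLOCK]).digest()
                    for i in range(0, len(data), BLOCK)]

        def delta_against_template(state: bytes, template_sums):
            # Return (index, block) pairs for blocks that changed since the template.
            delta = []
            for i in range(0, len(state), BLOCK):
                block = state[i:i + BLOCK]
                j = i // BLOCK
                if j >= len(template_sums) or hashlib.md5(block).digest() != template_sums[j]:
                    delta.append((j, block))
            return delta

        template = bytes(64 * BLOCK)                  # the stored template checkpoint
        state = bytearray(template); state[5 * BLOCK] = 1
        print(len(delta_against_template(bytes(state), checksums(template))))   # -> 1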

  4. EFFICIENT SCHEDULING OF PARALLEL JOBS ON MASSIVELY PARALLEL SYSTEMS

    SciTech Connect

    F. PETRINI; W. FENG

    1999-09-01

    We present buffered coscheduling, a new methodology to multitask parallel jobs in a message-passing environment and to develop parallel programs that can pave the way to the efficient implementation of a distributed operating system. Buffered coscheduling is based on three innovative techniques: communication buffering, strobing, and non-blocking communication. By leveraging these techniques, we can perform effective optimizations based on the global status of the parallel machine rather than on the limited knowledge available locally to each processor. The advantages of buffered coscheduling include higher resource utilization, reduced communication overhead, efficient implementation of flow-control strategies and fault-tolerant protocols, accurate performance modeling, and a simplified yet still expressive parallel programming model. Preliminary experimental results show that buffered coscheduling is very effective in increasing the overall performance in the presence of load imbalance and communication-intensive workloads.
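
    The buffering-plus-strobing idea can be sketched as follows (assumed semantics, sequential toy): sends issued during a time slice are only buffered, and at each strobe all buffered traffic is exchanged at once, so scheduling decisions can draw on the global communication pattern of the whole slice rather than on local knowledge.

        class BufferedComm:
            def __init__(self, nprocs):
                self.buffers = [[] for _ in range(nprocs)]   # outgoing, per process
                self.inboxes = [[] for _ in range(nprocs)]

            def send(self, src, dst, msg):
                # Non-blocking send: the message is only buffered during the slice.
                self.buffers[src].append((dst, msg))

            def strobe(self):
                # Global exchange point: deliver everything buffered in this slice.
                for src, buf in enumerate(self.buffers):
                    for dst, msg in buf:
                        self.inboxes[dst].append((src, msg))
                    buf.clear()

        comm = BufferedComm(3)
        comm.send(0, 2, "x"); comm.send(1, 2, "y")
        comm.strobe()
        print(comm.inboxes[2])        # [(0, 'x'), (1, 'y')] delivered together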

  5. SNAP: Simulating New Acquisition Processes

    NASA Technical Reports Server (NTRS)

    Alfeld, Louis E.

    1997-01-01

    Simulation models of acquisition processes range in scope from isolated applications to the 'Big Picture' captured by SNAP technology. SNAP integrates a family of models to portray the full scope of acquisition planning and management activities, including budgeting, scheduling, testing and risk analysis. SNAP replicates the dynamic management processes that underlie design, production and life-cycle support. SNAP provides the unique 'Big Picture' capability needed to simulate the entire acquisition process and explore the 'what-if' tradeoffs and consequences of alternative policies and decisions. Comparison of cost, schedule, and performance tradeoffs helps managers choose the lowest-risk, highest-payoff option at each step in the acquisition process.

  6. Solving unstructured grid problems on massively parallel computers

    NASA Technical Reports Server (NTRS)

    Hammond, Steven W.; Schreiber, Robert

    1990-01-01

    A highly parallel graph mapping technique that enables one to efficiently solve unstructured grid problems on massively parallel computers is presented. Many implicit and explicit methods for solving discretized partial differential equations require each point in the discretization to exchange data with its neighboring points every time step or iteration. The cost of this communication can negate the high performance promised by massively parallel computing. To eliminate this bottleneck, the graph of the irregular problem is mapped into the graph representing the interconnection topology of the computer such that the sum of the distances that the messages travel is minimized. It is shown that using the heuristic mapping algorithm significantly reduces the communication time compared to a naive assignment of processes to processors.
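
    A greedy sketch of the mapping step (not the paper's heuristic) illustrates the objective: place each partition of the irregular problem onto a free processor of a 2D mesh so that the total hop distance to its already-placed neighbors is minimized.

        def hops(a, b):
            # Manhattan distance between two processors on a 2D mesh
            return abs(a[0] - b[0]) + abs(a[1] - b[1])

        def total_cost(assign, edges):
            # Sum of hop distances travelled by messages between neighbors
            return sum(hops(assign[u], assign[v]) for u, v in edges)

        def greedy_map(nodes, edges, mesh):
            nbrs = {u: set() for u in nodes}
            for a, b in edges:
                nbrs[a].add(b); nbrs[b].add(a)
            slots, assign = list(mesh), {}
            for u in nodes:
                placed = [v for v in nbrs[u] if v in assign]
                cost = lambda s: sum(hops(s, assign[v]) for v in placed)
                best = min(slots, key=cost)       # cheapest free processor for u
                assign[u] = best
                slots.remove(best)
            return assign

        mesh = [(x, y) for x in range(2) for y in range(2)]
        nodes = ["A", "B", "C", "D"]
        edges = [("A", "B"), ("B", "C"), ("C", "D")]
        assign = greedy_map(nodes, edges, mesh)
        print(assign, total_cost(assign, edges))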

  7. Automating the parallel processing of fluid and structural dynamics calculations

    NASA Technical Reports Server (NTRS)

    Arpasi, Dale J.; Cole, Gary L.

    1987-01-01

    The NASA Lewis Research Center is actively involved in the development of expert system technology to assist users in applying parallel processing to computational fluid and structural dynamic analysis. The goal of this effort is to eliminate the necessity for the physical scientist to become a computer scientist in order to effectively use the computer as a research tool. Programming and operating software utilities have previously been developed to solve systems of ordinary nonlinear differential equations on parallel scalar processors. Current efforts are aimed at extending these capabilities to systems of partial differential equations that describe the complex behavior of fluids and structures within aerospace propulsion systems. This paper presents some important considerations in the redesign, in particular, the need for algorithms and software utilities that can automatically identify data flow patterns in the application program and partition and allocate calculations to the parallel processors. A library-oriented multiprocessing concept for integrating the hardware and software functions is described.

  8. Automating the parallel processing of fluid and structural dynamics calculations

    NASA Technical Reports Server (NTRS)

    Arpasi, Dale J.; Cole, Gary L.

    1987-01-01

    The NASA Lewis Research Center is actively involved in the development of expert system technology to assist users in applying parallel processing to computational fluid and structural dynamic analysis. The goal of this effort is to eliminate the necessity for the physical scientist to become a computer scientist in order to effectively use the computer as a research tool. Programming and operating software utilities have previously been developed to solve systems of ordinary nonlinear differential equations on parallel scalar processors. Current efforts are aimed at extending these capabilities to systems of partial differential equations that describe the complex behavior of fluids and structures within aerospace propulsion systems. This paper presents some important considerations in the redesign, in particular, the need for algorithms and software utilities that can automatically identify data flow patterns in the application program and partition and allocate calculations to the parallel processors. A library-oriented multiprocessing concept for integrating the hardware and software functions is described.

  9. Multithreaded Model for Dynamic Load Balancing Parallel Adaptive PDE Computations

    NASA Technical Reports Server (NTRS)

    Chrisochoides, Nikos

    1995-01-01

    We present a multithreaded model for the dynamic load-balancing of numerical, adaptive computations required for the solution of Partial Differential Equations (PDE's) on multiprocessors. Multithreading is used as a means of exploiting concurrency at the processor level in order to tolerate synchronization costs inherent to traditional (non-threaded) parallel adaptive PDE solvers. Our preliminary analysis for parallel, adaptive PDE solvers indicates that multithreading can be used as a mechanism to mask overheads required for the dynamic balancing of processor workloads with computations required for the actual numerical solution of the PDE's. Also, multithreading can simplify the implementation of dynamic load-balancing algorithms, a task that is very difficult for traditional data parallel adaptive PDE computations. Unfortunately, multithreading does not always simplify program structure; it often makes code reuse difficult and increases software complexity.
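
    A cartoon of the masking idea in Python (a sketch with assumed toy workloads, not a PDE solver): worker threads keep computing while a balancer thread migrates queued work from the longer queue to the shorter one, so balancing cost overlaps computation instead of serializing with it.

      import threading, queue, time

      queues = [queue.Queue(), queue.Queue()]
      done = threading.Event()

      def worker(q):
          # Compute until told to stop and the local queue is drained.
          while not (done.is_set() and q.empty()):
              try:
                  task = q.get(timeout=0.05)
              except queue.Empty:
                  continue
              time.sleep(task)                 # stand-in for refining one mesh patch

      def balancer():
          # Runs concurrently with the workers, so migration cost is masked.
          while not done.is_set():
              src, dst = (0, 1) if queues[0].qsize() > queues[1].qsize() else (1, 0)
              if queues[src].qsize() > queues[dst].qsize() + 1:
                  try:
                      queues[dst].put(queues[src].get_nowait())
                  except queue.Empty:
                      pass
              time.sleep(0.01)

      for _ in range(40):
          queues[0].put(0.01)                  # deliberately imbalanced initial load
      threads = [threading.Thread(target=worker, args=(q,)) for q in queues]
      threads.append(threading.Thread(target=balancer))
      for t in threads: t.start()
      time.sleep(2.0); done.set()
      for t in threads: t.join()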

  10. Plasmid acquisition in microgravity

    NASA Technical Reports Server (NTRS)

    Juergensmeyer, Margaret A.; Juergensmeyer, Elizabeth A.; Guikema, James A.

    1995-01-01

    In microgravity, bacteria often show an increased resistance to antibiotics. Bacteria can develop resistance to an antibiotic after transformation, the acquisition of DNA, usually in the form of a plasmid containing a gene for resistance to one or more antibiotics. In order to study the capacity of bacteria to become resistant to antibiotics in microgravity, we have modified the standard protocol for transformation of Escherichia coli for use in the NASA-flight-certified hardware package, The Fluid Processing Apparatus (FPA). Here we report on the ability of E. coli to remain competent for long periods of time at temperatures that are readily available on the Space Shuttle, and present some preliminary flight results.

  11. Silicon tracker data acquisition

    SciTech Connect

    Haynes, W.J.

    1997-12-31

    Large particle physics experiments are making increasing technological demands on the design and implementation of real-time data acquisition systems. The LHC will have bunch crossing intervals of 25 nanoseconds and detectors, such as CMS, will contain over 10 million electronic channels. Readout systems will need to cope with 100 kHz rates of 1 MByte-sized events. Over 70% of this voluminous flow will stem from silicon tracker and MSGC devices. This paper describes the techniques currently being harnessed from ASIC devices through to modular microprocessor-based architectures around standards such as VMEbus and PCI. In particular, the experiences gained at the HERA H1 experiment are highlighted where many of the key technological concepts have already been implemented.

  12. DSPS in data acquisition

    SciTech Connect

    Kirsch, M.; Haeupke, T.; Oelschlaeger, R.; Struck, B.

    1997-12-31

    Off-the-shelf and customized DSP boards in different bus architectures are perfectly suited to act as building blocks for flexible and high performance data acquisition systems. Due to their architecture they can be used to enhance the performance of existing equipment as add-ons, as state-of-the-art readout controllers, event builders, on-the-fly data formatters and higher level trigger processors. Applications covering the above mentioned fields with Motorola's 96002 HARC DSP in the DESY HERMES and H1 experiments, at the focal plane polarimeter at KVI and the NIST high flux neutron backscattering spectrometer will be presented. Future possibilities with VME, PCI and PMC boards based on the Analog Devices SHARC DSP will be discussed. Systems based on the Texas Instruments TMS320C6X promise unprecedented performance.

  13. AIROscope stellar acquisition

    NASA Technical Reports Server (NTRS)

    Deboo, G. J.; Parra, G. T.; Hedlund, R. C.

    1974-01-01

    The acquisition system, which operates in conjunction with a balloon-borne TV system boresighted to a telescope, is described. It has two main functions: a star field monitor and an offset star tracker. The design of the system was strongly influenced by the TV camera, which uses the same interlaced scanning system as is employed in commercial television broadcasting. To reduce power and bandwidth requirements, the star field information transmitted in our system consists only of the horizontal and vertical coordinates of each star and its brightness. As a star field monitor, the system provides video thresholding, camera blemish suppression, coordinate digitization in 3 axes, circuitry to recognize as a single star the dispersed video signals resulting from one star overlapping adjacent scanning lines, and storage of all signals for readout by the telemetry at appropriate times.

  14. Experimental Parallel-Processing Computer

    NASA Technical Reports Server (NTRS)

    Mcgregor, J. W.; Salama, M. A.

    1986-01-01

    Master processor supervises slave processors, each with its own memory. Computer with parallel processing serves as inexpensive tool for experimentation with parallel mathematical algorithms. Speed enhancement obtained depends on both nature of problem and structure of algorithm used. In parallel-processing architecture, "bank select" and control signals determine which one, if any, of N slave-processor memories is accessible to the master processor at any given moment. When so selected, slave memory operates as part of master computer memory. When not selected, slave memory operates independently of main memory. Slave processors communicate with each other via input/output bus.

  15. Theories of language acquisition.

    PubMed

    Vetter, H J; Howell, R W

    1971-03-01

    Prior to the advent of generative grammar, theoretical approaches to language development relied heavily upon the concepts of differential reinforcement and imitation. Current studies of linguistic acquisition are largely dominated by the hypothesis that the child constructs his language on the basis of a primitive grammar which gradually evolves into a more complex grammar. This approach presupposes that the investigator does not impose his own grammatical rules on the utterances of the child; that the sound system of the child and the rules he employs to form sentences are to be described in their own terms, independently of the model provided by the adult linguistic community; and that there is a series of steps or stages through which the child passes on his way toward mastery of the adult grammar in his linguistic environment. This paper attempts to trace the development of human vocalization through prelinguistic stages to the development of what can be clearly recognized as language behavior, and then progresses to transitional phases in which the language of the child begins to approximate that of the adult model. In the view of the authors, the most challenging problems which confront theories of linguistic acquisition arise in seeking to account for structure of sound sequences, in the rules that enable the speaker to go from meaning to sound and which enable the listener to go from sound to meaning. The principal area of concern for the investigator, according to the authors, is the discovery of those rules at various stages of the learning process. The paper concludes with a return to the question of what constitutes an adequate theory of language ontogenesis. It is suggested that such a theory will have to be keyed to theories of cognitive development and will have to include and go beyond a theory which accounts for adult language competence and performance, since these represent only the terminal stage of linguistic ontogenesis.

  16. Robot-assisted partial nephrectomy: Superiority over laparoscopic partial nephrectomy.

    PubMed

    Shiroki, Ryoichi; Fukami, Naohiko; Fukaya, Kosuke; Kusaka, Mamoru; Natsume, Takahiro; Ichihara, Takashi; Toyama, Hiroshi

    2016-02-01

    Nephron-sparing surgery has been proven to positively impact postoperative quality of life in the treatment of small renal tumors, possibly leading to functional improvements. Laparoscopic partial nephrectomy is still one of the most demanding procedures in urological surgery. Laparoscopic partial nephrectomy sometimes results in extended warm ischemic time and severe complications, such as open conversion, postoperative hemorrhage and urine leakage. Robot-assisted partial nephrectomy brings the advantages offered by the da Vinci Surgical System to laparoscopic partial nephrectomy, equipped with 3-D vision and a greater degree of freedom for the surgical instruments. The introduction of the da Vinci Surgical System made nephron-sparing surgery, specifically robot-assisted partial nephrectomy, safe with promising results, leading to the shortening of warm ischemic time and a reduction in perioperative complications. Even for complex and challenging tumors, robotic assistance is expected to provide the benefit of minimally invasive surgery with safe and satisfactory renal function. Warm ischemic time is the modifiable factor during robot-assisted partial nephrectomy that affects postoperative kidney function. We analyzed the predictive factors for extended warm ischemic time from our robot-assisted partial nephrectomy series. The surface area of the tumor attached to the kidney parenchyma was shown to significantly affect the extended warm ischemic time during robot-assisted partial nephrectomy. In cases with a tumor-attached surface area of more than 15 cm², we should consider switching robot-assisted partial nephrectomy to open partial nephrectomy under cold ischemia if it is imperative. In Japan, a nationwide prospective study has been carried out to show the superiority of robot-assisted partial nephrectomy to laparoscopic partial nephrectomy in improving warm ischemic time and complications. By facilitating robotic technology, robot-assisted partial nephrectomy…

  17. 78 FR 49990 - Land Acquisitions: Appeals of Land Acquisition Decisions

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-08-16

    ...: Comments on the proposed rule published May 29, 2013 (78 FR 32214) must be received by September 3, 2013... 25 CFR 151.12 (78 FR 32214). The proposed rule would remove procedural requirements that are no...; Docket ID: BIA-2013-0005] RIN 1076-AF15 Land Acquisitions: Appeals of Land Acquisition Decisions...

  18. Lexical Acquisition and Acquisition of Initial Voiceless Stops.

    ERIC Educational Resources Information Center

    Tyler, Ann A.; Edwards, Mary Louise

    1993-01-01

    Interaction between lexical acquisition and acquisition of initial voiceless stops (VSs) was studied in two normally developing children by acoustically examining token-by-token accuracy of initial VS targets in different lexical items. Tokens representing the emergence of accurate VS production were restricted to certain words, largely old words…

  19. A Data Parallel Algorithm for XML DOM Parsing

    NASA Astrophysics Data System (ADS)

    Shah, Bhavik; Rao, Praveen R.; Moon, Bongki; Rajagopalan, Mohan

    The extensible markup language XML has become the de facto standard for information representation and interchange on the Internet. XML parsing is a core operation performed on an XML document for it to be accessed and manipulated. This operation is known to cause performance bottlenecks in applications and systems that process large volumes of XML data. We believe that parallelism is a natural way to boost performance. Leveraging multicore processors can offer a cost-effective solution, because future multicore processors will support hundreds of cores, and will offer a high degree of parallelism in hardware. We propose a data parallel algorithm called ParDOM for XML DOM parsing that builds an in-memory tree structure for an XML document. ParDOM has two phases. In the first phase, an XML document is partitioned into chunks and parsed in parallel. In the second phase, partial DOM node tree structures created during the first phase are linked together (in parallel) to build a complete DOM node tree. ParDOM offers fine-grained parallelism by adopting a flexible chunking scheme - each chunk can contain an arbitrary number of start and end XML tags that are not necessarily matched. ParDOM can be conveniently implemented using a data parallel programming model that supports map and sort operations. Through empirical evaluation, we show that ParDOM yields better scalability than PXP [23] - a recently proposed parallel DOM parsing algorithm - on commodity multicore processors. Furthermore, ParDOM can process a wide variety of XML datasets with complex structures which PXP fails to parse.
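
    The two-phase structure is easy to see in a toy Python version (our sketch, far simpler than ParDOM: no attributes, sequential linking, and a thread pool standing in for per-core workers):

      import re
      from concurrent.futures import ThreadPoolExecutor  # stands in for per-core workers

      TOKEN = re.compile(r'</(\w+)>|<(\w+)>|([^<>]+)')

      def tokenize(chunk):
          # Phase 1 (parallel): turn one chunk into start/end/text events; the
          # tags in a chunk need not be balanced, as in flexible chunking.
          events = []
          for end, start, text in TOKEN.findall(chunk):
              if start:
                  events.append(('start', start))
              elif end:
                  events.append(('end', end))
              elif text.strip():
                  events.append(('text', text.strip()))
          return events

      def parse(xml, nchunks=4):
          # Cut only right after a '>' so no tag straddles two chunks.
          cuts = [0]
          step = max(1, len(xml) // nchunks)
          for i in range(1, nchunks):
              j = xml.find('>', max(i * step, cuts[-1]))
              cuts.append(len(xml) if j < 0 else j + 1)
          cuts.append(len(xml))
          chunks = [xml[a:b] for a, b in zip(cuts, cuts[1:]) if a < b]
          with ThreadPoolExecutor() as pool:
              streams = list(pool.map(tokenize, chunks))
          # Phase 2 (sequential here; parallel in ParDOM): link the partial
          # event streams into one nested (tag, children) tree with a stack.
          root = ('root', [])
          stack = [root]
          for events in streams:
              for kind, value in events:
                  if kind == 'start':
                      node = (value, [])
                      stack[-1][1].append(node)
                      stack.append(node)
                  elif kind == 'end':
                      stack.pop()
                  else:
                      stack[-1][1].append(value)
          return root

      # parse('<a><b>hi</b><c>bye</c></a>') ->
      # ('root', [('a', [('b', ['hi']), ('c', ['bye'])])])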

  20. Modeling the distinct phases of skill acquisition.

    PubMed

    Tenison, Caitlin; Anderson, John R

    2016-05-01

    A focus of early mathematics education is to build fluency through practice. Several models of skill acquisition have sought to explain the increase in fluency because of practice by modeling both the learning mechanisms driving this speedup and the changes in cognitive processes involved in executing the skill (such as transitioning from calculation to retrieval). In the current study, we use hidden Markov modeling to identify transitions in the learning process. This method accounts for the gradual speedup in problem solving and also uncovers abrupt changes in reaction time, which reflect changes in the cognitive processes that participants are using to solve math problems. We find that as participants practice solving math problems they transition through 3 distinct learning states. Each learning state shows some speedup with practice, but the major speedups are produced by transitions between learning states. In examining and comparing the behavioral and neurological profiles of each of these states, we find parallels with the 3 phases of skill acquisition proposed by Fitts and Posner (1967): a cognitive, an associative, and an autonomous phase. (PsycINFO Database Record) PMID:26551626
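
    A hedged sketch of the modeling idea in Python (hand-rolled Viterbi decoding, not the authors' fitted model): given log reaction times, recover the most probable left-to-right sequence of three latent learning states. The means, spread, and transition probabilities below are illustrative assumptions.

      import math

      MEANS = [2.5, 1.8, 1.2]    # assumed mean log-RT per state
      SIGMA = 0.4                # assumed shared emission spread
      STAY, ADVANCE = math.log(0.9), math.log(0.1)   # assumed transitions

      def log_emit(state, log_rt):
          return -0.5 * ((log_rt - MEANS[state]) / SIGMA) ** 2

      def viterbi(log_rts):
          # delta[s] = best log-score of any state path ending in state s.
          delta = [log_emit(0, log_rts[0]), float('-inf'), float('-inf')]
          back = []
          for x in log_rts[1:]:
              prev, ptr, delta = delta, [0, 0, 0], []
              for s in range(3):
                  stay = prev[s] + STAY
                  come = prev[s - 1] + ADVANCE if s > 0 else float('-inf')
                  ptr[s] = s if stay >= come else s - 1
                  delta.append(max(stay, come) + log_emit(s, x))
              back.append(ptr)
          s = max(range(3), key=lambda k: delta[k])
          path = [s]
          for ptr in reversed(back):    # trace the most probable sequence back
              s = ptr[s]
              path.append(s)
          return path[::-1]

      # Example: log-RTs drifting down with practice decode to the three phases.
      # viterbi([2.6, 2.4, 2.5, 1.9, 1.7, 1.8, 1.2, 1.1]) -> [0, 0, 0, 1, 1, 1, 2, 2]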

  2. The DISTO data acquisition system at SATURNE

    SciTech Connect

    Balestra, F.; Bedfer, Y.; Bertini, R.

    1998-06-01

    The DISTO collaboration has built a large-acceptance magnetic spectrometer designed to provide broad kinematic coverage of multiparticle final states produced in pp scattering. The spectrometer has been installed in the polarized proton beam of the Saturne accelerator in Saclay to study polarization observables in the p⃗p → pK⁺Y⃗ (Y = Λ, Σ⁰ or Y*) reaction and vector meson production (ψ, ω and ρ) in pp collisions. The data acquisition system is based on a VME 68030 CPU running the OS/9 operating system, housed in a single VME crate together with the CAMAC interface, the triple-port ECL memories, and four RISC R3000 CPUs. The digitization of signals from the detectors is made by PCOS III and FERA front-end electronics. Data from several events belonging to a single Saturne extraction are stored in VME triple-port ECL memories using a hardwired fast sequencer. The buffer, optionally filtered by the RISC R3000 CPUs, is recorded on a DLT cassette by the DAQ CPU using the on-board SCSI interface during the acceleration cycle. Two UNIX workstations are connected to the VME CPUs through a fast parallel bus and the Local Area Network. They analyze a subset of events for on-line monitoring. The data acquisition system is able to read and record 3,500 events/burst in the present configuration with a dead time of 15%.

  3. Updated NGNP Fuel Acquisition Strategy

    SciTech Connect

    David Petti; Tim Abram; Richard Hobbins; Jim Kendall

    2010-12-01

    A Next Generation Nuclear Plant (NGNP) fuel acquisition strategy was first established in 2007. In that report, a detailed technical assessment of potential fuel vendors for the first core of NGNP was conducted by an independent group of international experts based on input from the three major reactor vendor teams. Part of the assessment included an evaluation of the credibility of each option, along with a cost and schedule to implement each strategy compared with the schedule and throughput needs of the NGNP project. While credible options were identified based on the conditions in place at the time, many changes in the assumptions underlying the strategy and in externalities have occurred in the interim, requiring that the options be re-evaluated. This document presents an update to that strategy based on current capabilities for fuel fabrication as well as fuel performance and qualification testing worldwide. In light of the recent Pebble Bed Modular Reactor (PBMR) project closure, the Advanced Gas Reactor (AGR) fuel development and qualification program needs to support both pebble and prismatic options under the NGNP project. A number of assumptions were established that formed a context for the evaluation. Of these, the most important are: • Based on logistics associated with the on-going engineering design activities, vendor teams would start preliminary design in October 2012 and complete in May 2014. A decision on reactor type will be made following preliminary design, with the decision process assumed to be completed in January 2015. Thus, no fuel decision (pebble or prismatic) will be made in the near term. • Activities necessary for both pebble and prismatic fuel qualification will be conducted in parallel until a fuel form selection is made. As such, process development, fuel fabrication, irradiation, and testing for pebble and prismatic options should not negatively influence each other during the period prior to a decision on reactor type.

  4. Parallel algorithms for matrix computations

    SciTech Connect

    Plemmons, R.J.

    1990-01-01

    The present conference on parallel algorithms for matrix computations encompasses both shared-memory systems and distributed-memory systems, as well as combinations of the two, to provide an overall perspective on parallel algorithms for both dense and sparse matrix computations in solving systems of linear equations, dense or structured problems related to least-squares computations, eigenvalue computations, singular-value computations, and rapid elliptic solvers. Specific issues addressed include the influence of parallel and vector architectures on algorithm design, computations for distributed-memory architectures such as hypercubes, solutions for sparse symmetric positive definite linear systems, symbolic and numeric factorizations, and triangular solutions. Also addressed are reference sources for parallel and vector numerical algorithms, sources for machine architectures, and sources for programming languages.

  5. Parallel architectures and neural networks

    SciTech Connect

    Calianiello, E.R.

    1989-01-01

    This book covers parallel computer architectures and neural networks. Topics include: neural modeling, use of ADA to simulate neural networks, VLSI technology, implementation of Boltzmann machines, and analysis of neural nets.

  6. "Feeling" Series and Parallel Resistances.

    ERIC Educational Resources Information Center

    Morse, Robert A.

    1993-01-01

    Equipped with drinking straws and stirring straws, a teacher can help students understand how resistances in electric circuits combine in series and in parallel. Follow-up suggestions are provided. (ZWH)

  7. Demonstrating Forces between Parallel Wires.

    ERIC Educational Resources Information Center

    Baker, Blane

    2000-01-01

    Describes a physics demonstration that dramatically illustrates the mutual repulsion (attraction) between parallel conductors using insulated copper wire, wooden dowels, a high direct current power supply, electrical tape, and an overhead projector. (WRM)

  8. Metal structures with parallel pores

    NASA Technical Reports Server (NTRS)

    Sherfey, J. M.

    1976-01-01

    Four methods of fabricating metal plates having uniformly sized parallel pores are studied: elongate bundle, wind and sinter, extrude and sinter, and corrugate stack. Such plates are suitable for electrodes for electrochemical and fuel cells.

  9. Parallel computation using limited resources

    SciTech Connect

    Sugla, B.

    1985-01-01

    This thesis addresses the task of designing and analyzing parallel algorithms when the resources of processors, communication, and time are limited. The two parts of this thesis deal with multiprocessor systems and VLSI - the two important parallel processing environments that are prevalent today. In the first part, a time-processor-communication tradeoff analysis is conducted for two kinds of problems - N input, 1 output, and N input, N output computations. Within the second kind, the problem of prefix computation is studied, an important problem because of the number of naturally occurring computations it can model. Finally, a general methodology is given for the design of parallel algorithms that can be used to adapt a given design to a wide set of architectural variations. The second part of the thesis considers the design of parallel algorithms for the VLSI model of computation when the resource of time is severely restricted.
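
    The prefix-computation problem studied here has a classic parallel solution; a minimal Python sketch of the Hillis-Steele scan pattern (our illustration) is below: it computes all prefixes of an associative operation in O(log n) passes, where each pass's updates are mutually independent and could be spread over P processors.

      import operator

      def parallel_prefix(xs, op=operator.add):
          # Hillis-Steele scan: log2(n) passes; within a pass, every update is
          # independent of the others, so one pass maps onto P processors.
          xs = list(xs)
          step = 1
          while step < len(xs):
              xs = [op(xs[i - step], xs[i]) if i >= step else xs[i]
                    for i in range(len(xs))]
              step *= 2
          return xs

      # parallel_prefix([3, 1, 4, 1, 5]) -> [3, 4, 8, 9, 14]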

  10. Parallel algorithms for message decomposition

    SciTech Connect

    Teng, S.H.; Wang, B.

    1987-06-01

    The authors consider the deterministic and random parallel complexity (time and processor) of message decoding: an essential problem in communications systems and translation systems. They present an optimal parallel algorithm to decompose prefix-coded messages and uniquely decipherable-coded messages in O(n/P) time, using O(P) processors (for all P: 1 ≤ P ≤ n/log n), deterministically as well as randomly, on the weakest version of parallel random access machines, in which concurrent read and concurrent write to a cell in the common memory are not allowed. This is done by reducing decoding to parallel finite-state automata simulation and prefix sums.
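
    The reduction itself can be sketched in a few lines of Python (our toy, with an assumed three-word prefix code): each input bit induces a transition function on decoder states, and because function composition is associative, the per-bit functions can be combined by a prefix scan. The scan is written sequentially here.

      CODE = {'0': 'a', '10': 'b', '11': 'c'}   # toy prefix code (our assumption)
      STATES = ['', '1']                        # all proper prefixes of codewords

      def step(bit):
          # Transition table for one input bit: a total function state -> state.
          return {s: '' if s + bit in CODE else s + bit for s in STATES}

      def compose(f, g):
          # Apply f, then g; composition is associative, hence scannable.
          return {s: g[f[s]] for s in STATES}

      def decode(bits):
          funcs = [step(b) for b in bits]
          scan, acc = [], {s: s for s in STATES}    # prefix "sums" of functions
          for f in funcs:                           # sequential here; a parallel
              acc = compose(acc, f)                 # scan splits this over P CPUs
              scan.append(acc)
          out, start = [], 0
          for i, f in enumerate(scan):
              if f[''] == '':                       # a codeword ends at bit i
                  out.append(CODE[bits[start:i + 1]])
                  start = i + 1
          return ''.join(out)

      # decode('010110') -> 'abca'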

  11. Turbomachinery CFD on parallel computers

    NASA Technical Reports Server (NTRS)

    Blech, Richard A.; Milner, Edward J.; Quealy, Angela; Townsend, Scott E.

    1992-01-01

    The role of multistage turbomachinery simulation in the development of propulsion system models is discussed. Particularly, the need for simulations with higher fidelity and faster turnaround time is highlighted. It is shown how such fast simulations can be used in engineering-oriented environments. The use of parallel processing to achieve the required turnaround times is discussed. Current work by several researchers in this area is summarized. Parallel turbomachinery CFD research at the NASA Lewis Research Center is then highlighted. These efforts are focused on implementing the average-passage turbomachinery model on MIMD, distributed memory parallel computers. Performance results are given for inviscid, single blade row and viscous, multistage applications on several parallel computers, including networked workstations.

  12. Partially supervised speaker clustering.

    PubMed

    Tang, Hao; Chu, Stephen Mingyu; Hasegawa-Johnson, Mark; Huang, Thomas S

    2012-05-01

    Content-based multimedia indexing, retrieval, and processing as well as multimedia databases demand the structuring of the media content (image, audio, video, text, etc.), one significant goal being to associate the identity of the content to the individual segments of the signals. In this paper, we specifically address the problem of speaker clustering, the task of assigning every speech utterance in an audio stream to its speaker. We offer a complete treatment of the idea of partially supervised speaker clustering, which refers to the use of our prior knowledge of speakers in general to assist the unsupervised speaker clustering process. By means of an independent training data set, we encode the prior knowledge at the various stages of the speaker clustering pipeline via 1) learning a speaker-discriminative acoustic feature transformation, 2) learning a universal speaker prior model, and 3) learning a discriminative speaker subspace, or equivalently, a speaker-discriminative distance metric. We study the directional scattering property of the Gaussian mixture model (GMM) mean supervector representation of utterances in the high-dimensional space, and advocate exploiting this property by using the cosine distance metric instead of the Euclidean distance metric for speaker clustering in the GMM mean supervector space. We propose to perform discriminant analysis based on the cosine distance metric, which leads to a novel distance metric learning algorithm, linear spherical discriminant analysis (LSDA). We show that the proposed LSDA formulation can be systematically solved within the elegant graph embedding general dimensionality reduction framework. Our speaker clustering experiments on the GALE database clearly indicate that 1) our speaker clustering methods based on the GMM mean supervector representation and vector-based distance metrics outperform traditional speaker clustering methods based on the "bag of acoustic features" representation and statistical…
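
    The paper's central preference, cosine rather than Euclidean distance on mean supervectors, can be illustrated with a tiny spherical k-means in Python (a sketch with random stand-in vectors, not the GALE pipeline or LSDA itself):

      import numpy as np

      def spherical_kmeans(X, k, iters=50, seed=0):
          # Length-normalize so that a dot product equals cosine similarity.
          X = X / np.linalg.norm(X, axis=1, keepdims=True)
          rng = np.random.default_rng(seed)
          C = X[rng.choice(len(X), size=k, replace=False)]
          for _ in range(iters):
              labels = np.argmax(X @ C.T, axis=1)   # nearest centroid by cosine
              for j in range(k):
                  members = X[labels == j]
                  if len(members):
                      m = members.sum(axis=0)
                      C[j] = m / np.linalg.norm(m)  # keep centroids on the sphere
          return labels, C

      # Example: 20 random 64-dim "supervectors" grouped into 3 clusters.
      # labels, _ = spherical_kmeans(np.random.default_rng(1).normal(size=(20, 64)), 3)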

  13. Graphics applications utilizing parallel processing

    NASA Technical Reports Server (NTRS)

    Rice, John R.

    1990-01-01

    The results are presented of research conducted to develop a parallel graphic application algorithm to depict the numerical solution of the 1-D wave equation, the vibrating string. The research was conducted on a Flexible Flex/32 multiprocessor and a Sequent Balance 21000 multiprocessor. The wave equation is implemented using the finite difference method. The synchronization issues that arose from the parallel implementation and the strategies used to alleviate the effects of the synchronization overhead are discussed.
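
    For reference, a compact serial Python version of the computation being parallelized (a sketch; grid size, time step, and initial pluck are illustrative): explicit finite differences for the 1-D wave equation u_tt = c^2 u_xx with fixed ends. In the parallel versions, the spatial loop is split across processors, with neighboring points exchanged each step.

      import numpy as np

      def vibrating_string(n=101, steps=500, c=1.0, dt=0.004):
          dx = 1.0 / (n - 1)
          r2 = (c * dt / dx) ** 2           # stability requires c*dt/dx <= 1
          x = np.linspace(0.0, 1.0, n)
          u_prev = np.sin(np.pi * x)        # initial pluck, zero initial velocity
          u = u_prev.copy()
          for _ in range(steps):
              u_next = np.empty_like(u)
              u_next[1:-1] = (2 * u[1:-1] - u_prev[1:-1]
                              + r2 * (u[2:] - 2 * u[1:-1] + u[:-2]))
              u_next[0] = u_next[-1] = 0.0  # fixed endpoints
              u_prev, u = u, u_next
          return u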

  14. HEATR project: ATR algorithm parallelization

    NASA Astrophysics Data System (ADS)

    Deardorf, Catherine E.

    1998-09-01

    High Performance Computing (HPC) Embedded Application for Target Recognition (HEATR) is a project funded by the High Performance Computing Modernization Office through the Common HPC Software Support Initiative (CHSSI). The goal of CHSSI is to produce portable, parallel, multi-purpose, freely distributable, support software to exploit emerging parallel computing technologies and enable application of scalable HPCs for various critical DoD applications. Specifically, the CHSSI goal for HEATR is to provide portable, parallel versions of several existing ATR detection and classification algorithms to the ATR-user community to achieve near real-time capability. The HEATR project will create parallel versions of existing automatic target recognition (ATR) detection and classification algorithms and generate reusable code that will support the porting and software development process for ATR HPC software. The HEATR Team has selected detection/classification algorithms from both the model-based and training-based (template-based) arenas in order to consider the parallelization requirements for detection/classification algorithms across ATR technology. This would allow the Team to assess the impact that parallelization would have on detection/classification performance across ATR technology. A field demo is included in this project. Finally, any parallel tools produced to support the project will be refined and returned to the ATR user community along with the parallel ATR algorithms. This paper will review: (1) HPCMP structure as it relates to HEATR, (2) Overall structure of the HEATR project, (3) Preliminary results for the first algorithm Alpha Test, (4) CHSSI requirements for HEATR, and (5) Project management issues and lessons learned.

  15. New online signature acquisition system

    NASA Astrophysics Data System (ADS)

    Oulefki, Adel; Mostefai, Messaoud; Abbadi, Belkacem; Djebrani, Samira; Bouziane, Abderraouf; Chahir, Youssef

    2013-01-01

    We present a nonconstraining and low-cost online signature acquisition system that has been developed to enhance the performance of an existing multimodal biometric authentication system (based initially on both voice and image modalities). A laboratory prototype has been developed and validated for online signature acquisition.

  16. Language Acquisition, Pidgins and Creoles.

    ERIC Educational Resources Information Center

    Wode, Henning

    1981-01-01

    Suggests that structural universals between different-based pidgins result from universal linguo-cognitive processing strategies which are employed in learning languages. Some of the strategies occur in all types of acquisition, and others are more applicable to L2 type acquisition. Past research is discussed, and some exemplary data are given.…

  17. Handbook of Child Language Acquisition.

    ERIC Educational Resources Information Center

    Ritchie, William C., Ed.; Bhatia, Tej K., Ed.

    This volume provides a comprehensive overview of the major areas of research in the field of child language acquisition. It is divided into seven parts and 19 chapters. Part I is an introduction and overview. Part II covers central issues in the study of child language acquisition, focusing on syntax, including those of innateness, maturation, and…

  18. Knowledge Acquisition in Observational Astronomy.

    ERIC Educational Resources Information Center

    Vosniadou, Stella

    This paper presents findings from research on knowledge acquisition in observational astronomy to demonstrate the kinds of intuitive models children form and to show how these models influence the acquisition of science knowledge. Sixty children of approximate ages 6, 9, and 12 were given a questionnaire to investigate their knowledge of the size,…

  19. Efficiency of parallel direct optimization

    NASA Technical Reports Server (NTRS)

    Janies, D. A.; Wheeler, W. C.

    2001-01-01

    Tremendous progress has been made at the level of sequential computation in phylogenetics. However, little attention has been paid to parallel computation. Parallel computing is particularly suited to phylogenetics because of the many ways large computational problems can be broken into parts that can be analyzed concurrently. In this paper, we investigate the scaling factors and efficiency of random addition and tree refinement strategies using the direct optimization software, POY, on a small (10 slave processors) and a large (256 slave processors) cluster of networked PCs running LINUX. These algorithms were tested on several data sets composed of DNA and morphology ranging from 40 to 500 taxa. Various algorithms in POY show fundamentally different properties within and between clusters. All algorithms are efficient on the small cluster for the 40-taxon data set. On the large cluster, multibuilding exhibits excellent parallel efficiency, whereas parallel building is inefficient. These results are independent of data set size. Branch swapping in parallel shows excellent speed-up for 16 slave processors on the large cluster. However, there is no appreciable speed-up for branch swapping with the further addition of slave processors (>16). This result is independent of data set size. Ratcheting in parallel is efficient with the addition of up to 32 processors in the large cluster. This result is independent of data set size. © 2001 The Willi Hennig Society.

  20. Symbiont acquisition strategy drives host-symbiont associations in the southern Great Barrier Reef

    NASA Astrophysics Data System (ADS)

    Stat, M.; Loh, W. K. W.; Hoegh-Guldberg, O.; Carter, D. A.

    2008-12-01

    Coral larvae acquire populations of the symbiotic dinoflagellate Symbiodinium from the external environment (horizontal acquisition) or inherit their symbionts from the parent colony (maternal or vertical acquisition). The effect of the symbiont acquisition strategy on Symbiodinium-host associations has not been fully resolved. Previous studies have provided mixed results, probably due to factors such as low sample replication of Symbiodinium from a single coral host, biogeographic differences in Symbiodinium diversity, and the presence of some apparently host-specific symbiont lineages in corals with either symbiont acquisition strategy. This study set out to assess the effect of the symbiont acquisition strategy by sampling Symbiodinium from 10 coral species (five with a horizontal and five with a vertical symbiont acquisition strategy) across two adjacent reefs in the southern Great Barrier Reef. Symbiodinium diversity was assessed using single-stranded conformational polymorphism of partial nuclear large subunit rDNA and denaturing gradient gel electrophoresis of the internal transcribed spacer 2 region. The Symbiodinium population in hosts with a vertical symbiont acquisition strategy partitioned according to coral species, while hosts with a horizontal symbiont acquisition strategy shared a common symbiont type across the two reef environments. Comparative analysis of existing data from the southern Great Barrier Reef found that the majority of corals with a vertical symbiont acquisition strategy associated with distinct species- or genus-specific Symbiodinium lineages, but some could also associate with symbiont types that were more commonly found in hosts with a horizontal symbiont acquisition strategy.

  1. Fast-Acquisition/Weak-Signal-Tracking GPS Receiver for HEO

    NASA Technical Reports Server (NTRS)

    Winternitz, Luke; Boegner, Greg; Sirotzky, Steve

    2004-01-01

    A report discusses the technical background and design of the Navigator Global Positioning System (GPS) receiver, a radiation-hardened receiver intended for use aboard spacecraft. Navigator is capable of weak signal acquisition and tracking as well as much faster acquisition of strong or weak signals with no a priori knowledge or external aiding. Weak-signal acquisition and tracking enables GPS use in high Earth orbits (HEO), and fast acquisition allows for the receiver to remain without power until needed in any orbit. Signal acquisition and signal tracking are, respectively, the processes of finding and demodulating a signal. Acquisition is the more computationally difficult process. Previous GPS receivers employ the method of sequentially searching the two-dimensional signal parameter space (code phase and Doppler). Navigator exploits properties of the Fourier transform in a massively parallel search for the GPS signal. This method results in far faster acquisition times [in the lab, 12 GPS satellites have been acquired with no a priori knowledge in a Low-Earth-Orbit (LEO) scenario in less than one second]. Modeling has shown that Navigator will be capable of acquiring signals down to 25 dB-Hz, appropriate for HEO missions. Navigator is built using the radiation-hardened ColdFire microprocessor and housing the most computationally intense functions in dedicated field-programmable gate arrays. The high performance of the algorithm and of the receiver as a whole are made possible by optimizing computational efficiency and carefully weighing tradeoffs among the sampling rate, data format, and data-path bit width.
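
    The FFT trick credited for the fast acquisition can be sketched in Python as follows (our illustration of the standard technique, not Navigator's flight code): one FFT-based circular correlation tests every code phase at once, repeated per Doppler bin.

      import numpy as np

      def acquire(rx, prn, doppler_bins, fs):
          # rx: complex baseband samples; prn: +/-1 replica code, same length.
          n = len(rx)
          t = np.arange(n) / fs
          prn_fft = np.conj(np.fft.fft(prn))
          best = (0.0, None, None)                       # (peak, code phase, Doppler)
          for fd in doppler_bins:
              wiped = rx * np.exp(-2j * np.pi * fd * t)  # remove trial Doppler
              corr = np.abs(np.fft.ifft(np.fft.fft(wiped) * prn_fft))
              k = int(np.argmax(corr))
              if corr[k] > best[0]:
                  best = (corr[k], k, fd)
          return best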

  2. 48 CFR 304.7001 - Numbering acquisitions.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 48 Federal Acquisition Regulations System 4 2010-10-01 2010-10-01 false Numbering acquisitions. 304.7001 Section 304.7001 Federal Acquisition Regulations System HEALTH AND HUMAN SERVICES GENERAL ADMINISTRATIVE MATTERS Acquisition Instrument Identification Numbering System 304.7001 Numbering acquisitions....

  3. 48 CFR 434.004 - Acquisition strategy.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 48 Federal Acquisition Regulations System 4 2010-10-01 2010-10-01 false Acquisition strategy. 434.004 Section 434.004 Federal Acquisition Regulations System DEPARTMENT OF AGRICULTURE SPECIAL CATEGORIES OF CONTRACTING MAJOR SYSTEM ACQUISITION General 434.004 Acquisition strategy. (a) The...

  4. 48 CFR 1034.004 - Acquisition strategy.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 48 Federal Acquisition Regulations System 5 2011-10-01 2011-10-01 false Acquisition strategy. 1034... CATEGORIES OF CONTRACTING MAJOR SYSTEM ACQUISITION General 1034.004 Acquisition strategy. (a) A program manager's acquisition strategy written at the system or investment level in accordance with FAR...

  5. 48 CFR 3034.004 - Acquisition strategy.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... Acquisition strategy. See (HSAR) 48 CFR 3009.570 for policy applicable to acquisition strategies that consider... 48 Federal Acquisition Regulations System 7 2011-10-01 2011-10-01 false Acquisition strategy. 3034.004 Section 3034.004 Federal Acquisition Regulations System DEPARTMENT OF HOMELAND SECURITY,...

  6. 48 CFR 234.004 - Acquisition strategy.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 48 Federal Acquisition Regulations System 3 2011-10-01 2011-10-01 false Acquisition strategy. 234..., DEPARTMENT OF DEFENSE SPECIAL CATEGORIES OF CONTRACTING MAJOR SYSTEM ACQUISITION 234.004 Acquisition strategy. (1) See 209.570 for policy applicable to acquisition strategies that consider the use of lead...

  7. 48 CFR 234.004 - Acquisition strategy.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 48 Federal Acquisition Regulations System 3 2010-10-01 2010-10-01 false Acquisition strategy. 234..., DEPARTMENT OF DEFENSE SPECIAL CATEGORIES OF CONTRACTING MAJOR SYSTEM ACQUISITION 234.004 Acquisition strategy. (1) See 209.570 for policy applicable to acquisition strategies that consider the use of lead...

  8. 48 CFR 34.004 - Acquisition strategy.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 48 Federal Acquisition Regulations System 1 2011-10-01 2011-10-01 false Acquisition strategy. 34... CATEGORIES OF CONTRACTING MAJOR SYSTEM ACQUISITION General 34.004 Acquisition strategy. The program manager, as specified in agency procedures, shall develop an acquisition strategy tailored to the...

  9. 48 CFR 3034.004 - Acquisition strategy.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... Acquisition strategy. See (HSAR) 48 CFR 3009.570 for policy applicable to acquisition strategies that consider... 48 Federal Acquisition Regulations System 7 2010-10-01 2010-10-01 false Acquisition strategy. 3034.004 Section 3034.004 Federal Acquisition Regulations System DEPARTMENT OF HOMELAND SECURITY,...

  10. 48 CFR 34.004 - Acquisition strategy.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 48 Federal Acquisition Regulations System 1 2010-10-01 2010-10-01 false Acquisition strategy. 34... CATEGORIES OF CONTRACTING MAJOR SYSTEM ACQUISITION General 34.004 Acquisition strategy. The program manager, as specified in agency procedures, shall develop an acquisition strategy tailored to the...

  11. Endpoint-based parallel data processing in a parallel active messaging interface of a parallel computer

    DOEpatents

    Archer, Charles J.; Blocksome, Michael A.; Ratterman, Joseph D.; Smith, Brian E.

    2014-08-12

    Endpoint-based parallel data processing in a parallel active messaging interface ('PAMI') of a parallel computer, the PAMI composed of data communications endpoints, each endpoint including a specification of data communications parameters for a thread of execution on a compute node, including specifications of a client, a context, and a task, the compute nodes coupled for data communications through the PAMI, including establishing a data communications geometry, the geometry specifying, for tasks representing processes of execution of the parallel application, a set of endpoints that are used in collective operations of the PAMI including a plurality of endpoints for one of the tasks; receiving in endpoints of the geometry an instruction for a collective operation; and executing the instruction for a collective operation through the endpoints in dependence upon the geometry, including dividing data communications operations among the plurality of endpoints for one of the tasks.

  12. Endpoint-based parallel data processing in a parallel active messaging interface of a parallel computer

    DOEpatents

    Archer, Charles J; Blocksome, Michael E; Ratterman, Joseph D; Smith, Brian E

    2014-02-11

    Endpoint-based parallel data processing in a parallel active messaging interface ('PAMI') of a parallel computer, the PAMI composed of data communications endpoints, each endpoint including a specification of data communications parameters for a thread of execution on a compute node, including specifications of a client, a context, and a task, the compute nodes coupled for data communications through the PAMI, including establishing a data communications geometry, the geometry specifying, for tasks representing processes of execution of the parallel application, a set of endpoints that are used in collective operations of the PAMI including a plurality of endpoints for one of the tasks; receiving in endpoints of the geometry an instruction for a collective operation; and executing the instruction for a collective operation through the endpoints in dependence upon the geometry, including dividing data communications operations among the plurality of endpoints for one of the tasks.

  13. Ocean Data Acquisition System

    NASA Technical Reports Server (NTRS)

    Johnson, B.; Cavanaugh, J.; Smith, J.; Esaias, W.

    1988-01-01

    The Ocean Data Acquisition System (ODAS) is a low cost instrument with potential commercial application. It is easily mounted on a small aircraft and flown over the coastal zone ocean to remotely measure sea surface temperature and three channels of ocean color information. From this data, chlorophyll levels can be derived for use by ocean scientists, fisheries, and environmental offices. Data can be transmitted to shipboard for real-time use with sea truth measurements, ocean productivity estimates and fishing fleet direction. The aircraft portion of the system has two primary instruments: an IR radiometer to measure sea surface temperature and a three channel visible spectro-radiometer for 460, 490, and 520 nm wavelength measurements from which chlorophyll concentration can be derived. The aircraft package contains a LORAN-C unit for aircraft location information, clock, on-board data processor and formatter, digital data storage, packet radio terminal controller, and radio transceiver for data transmission to a ship. The shipboard package contains a transceiver, packet terminal controller, data processing and storage capability, and printer. Both raw data and chlorophyll concentrations are available for real-time analysis.

  14. The acquisition of polysynthesis.

    PubMed

    Mithun, M

    1989-06-01

    Polysynthetic languages can present special extraction puzzles to children, due to the length of their words. A number of hypotheses concerning children's strategies for acquiring morphology, originally proposed on the basis of their approaches to somewhat simpler systems, are confirmed by observations of five children acquiring Mohawk. Among the Mohawk children, the earliest segmentation of words was phonological rather than morphological: stressed syllables, usually penultimate or antepenultimate, were extracted first. Ultimate syllables were then added, confirming the salience of the ends of words. During this time, distinctions expressed by adults in affixes were either omitted or expressed analytically. Acquisition then moved leftward by syllables. When most utterances were long enough to include pronominal prefixes as well as roots, morphological structure was apparently discovered. It is not surprising that the pronouns should trigger this awareness, since they are frequent, appearing with every verb and most nouns, they are functional, and they are semantically transparent. From this point on, the children acquired affixes primarily according to their utility and semantic transparency rather than their phonological shape or position. PMID:2760128

  15. Parallel Implicit Algorithms for CFD

    NASA Technical Reports Server (NTRS)

    Keyes, David E.

    1998-01-01

    The main goal of this project was efficient distributed parallel and workstation cluster implementations of Newton-Krylov-Schwarz (NKS) solvers for implicit Computational Fluid Dynamics (CFD). "Newton" refers to a quadratically convergent nonlinear iteration using gradient information based on the true residual, "Krylov" to an inner linear iteration that accesses the Jacobian matrix only through highly parallelizable sparse matrix-vector products, and "Schwarz" to a domain decomposition form of preconditioning the inner Krylov iterations with primarily neighbor-only exchange of data between the processors. Prior experience has established that Newton-Krylov methods are competitive solvers in the CFD context and that Krylov-Schwarz methods port well to distributed memory computers. The combination of the techniques into Newton-Krylov-Schwarz was implemented on 2D and 3D unstructured Euler codes on the parallel testbeds that used to be at LaRC and on several other parallel computers operated by other agencies or made available by the vendors. Early implementations were made directly in the Message Passing Interface (MPI) with parallel solvers we adapted from legacy NASA codes and enhanced for full NKS functionality. Later implementations were made in the framework of the PETSC library from Argonne National Laboratory, which now includes pseudo-transient continuation Newton-Krylov-Schwarz solver capability (as a result of demands we made upon PETSC during our early porting experiences). A secondary project pursued with funding from this contract was parallel implicit solvers in acoustics, specifically in the Helmholtz formulation. A 2D acoustic inverse problem has been solved in parallel within the PETSC framework.
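
    The Newton-Krylov core of the NKS stack can be exercised in a few lines of Python with SciPy (a sketch; the Schwarz preconditioning and the MPI distribution are exactly the parts omitted here). The test problem, a 1-D nonlinear reaction-diffusion residual, is our own choice:

      import numpy as np
      from scipy.optimize import newton_krylov

      def residual(u):
          # Discrete u'' - u**3 + 1 = 0 on (0, 1) with zero Dirichlet ends.
          h = 1.0 / (len(u) + 1)
          upad = np.concatenate(([0.0], u, [0.0]))
          return (upad[2:] - 2 * upad[1:-1] + upad[:-2]) / h**2 - u**3 + 1.0

      u = newton_krylov(residual, np.zeros(50), f_tol=1e-8)  # Jacobian-free GMRES inside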

  16. Trigonometric Integrals via Partial Fractions

    ERIC Educational Resources Information Center

    Chen, H.; Fulford, M.

    2005-01-01

    Parametric differentiation is used to derive the partial fractions decompositions of certain rational functions. Those decompositions enable us to integrate some new combinations of trigonometric functions.
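
    A one-line instance of the method (our illustration in LaTeX, not necessarily the authors' example): start from the elementary decomposition

      \frac{1}{(x+a)(x+b)} = \frac{1}{b-a}\left(\frac{1}{x+a} - \frac{1}{x+b}\right)

    Differentiating both sides with respect to the parameter a and negating gives a new decomposition for free:

      \frac{1}{(x+a)^2(x+b)} = \frac{1}{b-a}\cdot\frac{1}{(x+a)^2}
                             - \frac{1}{(b-a)^2}\left(\frac{1}{x+a} - \frac{1}{x+b}\right)

    which then feeds integrals of rational trigonometric integrands after a substitution such as x = tan t.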

  17. Experts' understanding of partial derivatives using the partial derivative machine

    NASA Astrophysics Data System (ADS)

    Roundy, David; Weber, Eric; Dray, Tevian; Bajracharya, Rabindra R.; Dorko, Allison; Smith, Emily M.; Manogue, Corinne A.

    2015-12-01

    [This paper is part of the Focused Collection on Upper Division Physics Courses.] Partial derivatives are used in a variety of different ways within physics. Thermodynamics, in particular, uses partial derivatives in ways that students often find especially confusing. We are at the beginning of a study of the teaching of partial derivatives, with a goal of better aligning the teaching of multivariable calculus with the needs of students in STEM disciplines. In this paper, we report on an initial study of expert understanding of partial derivatives across three disciplines: physics, engineering, and mathematics. We report on the central research question of how disciplinary experts understand partial derivatives, and how their concept images of partial derivatives differ, with a focus on experimentally measured quantities. Using the partial derivative machine (PDM), we probed expert understanding of partial derivatives in an experimental context without a known functional form. In particular, we investigated which representations were cued by the experts' interactions with the PDM. Whereas the physicists and engineers were quick to use measurements to find a numeric approximation for a derivative, the mathematicians repeatedly returned to speculation as to the functional form; although they were comfortable drawing qualitative conclusions about the system from measurements, they were reluctant to approximate the derivative through measurement. On a theoretical front, we found ways in which existing frameworks for the concept of derivative could be expanded to include numerical approximation.
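
    The experts' "measure, then difference" move translates directly into code; a minimal Python sketch (the apparatus is mocked by a function, and the names are ours):

      def partial_x(f_measured, x, y, dx=1e-3):
          # f_measured stands in for reading the machine: one measurement f(x, y).
          # Central difference at (x, y), holding y fixed; no functional form needed.
          return (f_measured(x + dx, y) - f_measured(x - dx, y)) / (2 * dx)

      # Stand-in apparatus: f(x, y) = x**2 * y, so df/dx at (2, 3) is 12.
      # partial_x(lambda x, y: x**2 * y, x=2.0, y=3.0) -> approx 12.0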

  18. Fast magnetic resonance spectroscopic imaging (MRSI) using wavelet encoding and parallel imaging: In vitro results

    NASA Astrophysics Data System (ADS)

    Fu, Yao; Serrai, Hacene

    2011-07-01

    In previous work we have shown that wavelet encoding spectroscopic imaging (WE-SI) reduces acquisition time and voxel contamination compared to standard Chemical Shift Imaging (CSI), also known as phase encoding (PE). In this paper, we combine the wavelet encoding method with the parallel imaging (PI) technique (WE-PI) to further reduce the acquisition time by the acceleration factor R while preserving the spatial metabolite distribution. Wavelet encoding provides results with a lower signal-to-noise ratio (SNR) than the phase encoding method. Their combination with parallel imaging introduces an intrinsic SNR reduction. The rate of SNR reduction is slower in wavelet encoding with PI than in PE with parallel imaging (PE-PI). This is because in WE-PI the SNR reduction is a function of the acceleration factor R and the voxel number N, whereas in PE-PI it is a function of the acceleration factor R only.
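
    For reference, the standard parallel-imaging SNR penalty that both comparisons build on is, in LaTeX (the additional dependence on the voxel number N in WE-PI is the paper's own result and is not reproduced here):

      \mathrm{SNR}_{\mathrm{PI}} = \frac{\mathrm{SNR}_{\mathrm{full}}}{g\,\sqrt{R}}

    where R is the acceleration factor and g ≥ 1 is the coil-geometry factor.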

  20. Parallel computation and computers for artificial intelligence

    SciTech Connect

    Kowalik, J.S.

    1988-01-01

    This book discusses Parallel Processing in Artificial Intelligence; Parallel Computing using Multilisp; Execution of Common Lisp in a Parallel Environment; Qlisp; Restricted AND-Parallel Execution of Logic Programs; PARLOG: Parallel Programming in Logic; and Data-driven Processing of Semantic Nets. Attention is also given to: Application of the Butterfly Parallel Processor in Artificial Intelligence; On the Range of Applicability of an Artificial Intelligence Machine; Low-level Vision on Warp and the Apply Programming Model; AHR: A Parallel Computer for Pure Lisp; FAIM-1: An Architecture for Symbolic Multi-processing; and Overview of AI Application Oriented Parallel Processing Research in Japan.

  1. A parallel Jacobson-Oksman optimization algorithm. [parallel processing (computers)

    NASA Technical Reports Server (NTRS)

    Straeter, T. A.; Markos, A. T.

    1975-01-01

    A gradient-dependent optimization technique which exploits the vector-streaming or parallel-computing capabilities of some modern computers is presented. The algorithm, derived by assuming that the function to be minimized is homogeneous, is a modification of the Jacobson-Oksman serial minimization method. In addition to describing the algorithm, conditions ensuring the convergence of the iterates of the algorithm and the results of numerical experiments on a group of sample test functions are presented. The results of these experiments indicate that this algorithm will solve optimization problems in less computing time than conventional serial methods on machines having vector-streaming or parallel-computing capabilities.

  2. Regularization of parallel MRI reconstruction using in vivo coil sensitivities

    NASA Astrophysics Data System (ADS)

    Duan, Qi; Otazo, Ricardo; Xu, Jian; Sodickson, Daniel K.

    2009-02-01

    Parallel MRI can achieve increased spatiotemporal resolution in MRI by simultaneously sampling reduced k-space data with multiple receiver coils. One requirement that different parallel MRI techniques have in common is the need to determine spatial sensitivity information for the coil array. This is often done by smoothing the raw sensitivities obtained from low-resolution calibration images, for example via polynomial fitting. However, this sensitivity post-processing can be both time-consuming and error-prone. Another important factor in parallel MRI is noise amplification in the reconstruction, which is due to non-unity transformations in the image reconstruction associated with spatially correlated coil sensitivity profiles. Generally, regularization approaches, such as Tikhonov and SVD-based methods, are applied to reduce SNR loss, at the price of introducing residual aliasing. In this work, we present a regularization approach using in vivo coil sensitivities in parallel MRI to avoid introducing these potential errors into the reconstruction. The mathematical background of the proposed method is explained, and the technique is demonstrated with phantom images. The effectiveness of the proposed method is then illustrated clinically in a whole-heart 3D cardiac MR acquisition within a single breath-hold. The proposed method can not only overcome the sensitivity calibration problem, but also suppress a substantial portion of reconstruction-related noise without noticeable introduction of residual aliasing artifacts.
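
    The regularized unfolding step at the heart of such reconstructions can be sketched per aliased pixel group in Python (a Tikhonov-flavored illustration, not the authors' exact in vivo calibration method):

      import numpy as np

      def sense_unfold(y, S, lam=0.01):
          # y: (ncoils,) aliased measurements for one pixel group
          # S: (ncoils, R) coil sensitivities of the R pixels folded together
          # lam damps the noise amplification caused by correlated sensitivities.
          A = S.conj().T @ S + lam * np.eye(S.shape[1])
          return np.linalg.solve(A, S.conj().T @ y)

      # Example shapes: 8 coils, acceleration R = 4 -> S is (8, 4), y is (8,).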

  3. Parallelizing Timed Petri Net simulations

    NASA Technical Reports Server (NTRS)

    Nicol, David M.

    1993-01-01

    The possibility of using parallel processing to accelerate the simulation of Timed Petri Nets (TPN's) was studied. It was recognized that complex system development tools often transform system descriptions into TPN's or TPN-like models, which are then simulated to obtain information about system behavior. Viewed this way, it was important that the parallelization of TPN's be as automatic as possible, to admit the possibility of the parallelization being embedded in the system design tool. Later years of the grant were devoted to examining the problem of joint performance and reliability analysis, to explore whether both types of analysis could be accomplished within a single framework. In this final report, the results of our studies are summarized. We believe that the problem of parallelizing TPN's automatically for MIMD architectures has been almost completely solved for a large and important class of problems. Our initial investigations into joint performance/reliability analysis are two-fold; it was shown that Monte Carlo simulation, with importance sampling, offers promise of joint analysis in the context of a single tool, and methods for the parallel simulation of general Continuous Time Markov Chains, a model framework within which joint performance/reliability models can be cast, were developed. However, very much more work is needed to determine the scope and generality of these approaches. The results obtained in our two studies, future directions for this type of work, and a list of publications are included.

  4. Computing contingency statistics in parallel.

    SciTech Connect

    Bennett, Janine Camille; Thompson, David; Pebay, Philippe Pierre

    2010-09-01

    Statistical analysis is typically used to reduce the dimensionality of and infer meaning from data. A key challenge of any statistical analysis package aimed at large-scale, distributed data is to address the orthogonal issues of parallel scalability and numerical stability. Many statistical techniques, e.g., descriptive statistics or principal component analysis, are based on moments and co-moments and, using robust online update formulas, can be computed in an embarrassingly parallel manner, amenable to a map-reduce style implementation. In this paper we focus on contingency tables, through which numerous derived statistics such as joint and marginal probability, pointwise mutual information, information entropy, and χ² independence statistics can be directly obtained. However, contingency tables can become large as data size increases, requiring a correspondingly large amount of communication between processors. This potential increase in communication prevents optimal parallel speedup and is the main difference from moment-based statistics, where the amount of inter-processor communication is independent of data size. Here we present the design trade-offs which we made to implement the computation of contingency tables in parallel. We also study the parallel speedup and scalability properties of our open source implementation. In particular, we observe optimal speed-up and scalability when the contingency statistics are used in their appropriate context, namely, when the data input is not quasi-diffuse.
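
    The map-reduce structure described above can be sketched in a few lines of Python (a single-node stand-in for the paper's distributed implementation): each worker tabulates a local contingency table, the partial tables are merged, and derived statistics such as pointwise mutual information follow from the merged counts. Note how the reduce step transmits one count per distinct (x, y) pair, which is exactly the communication cost that grows with quasi-diffuse data.

        import math
        from collections import Counter
        from multiprocessing import Pool

        def local_table(chunk):
            # Map step: tabulate one slice of (x, y) observations.
            return Counter(chunk)

        if __name__ == "__main__":
            data = [("a", 0), ("b", 1), ("a", 0), ("a", 1)] * 1000
            chunks = [data[i::4] for i in range(4)]
            with Pool(4) as pool:
                partials = pool.map(local_table, chunks)
            table = sum(partials, Counter())      # reduce: merge the tables
            n = sum(table.values())
            px, py = Counter(), Counter()         # marginal counts
            for (x, y), cnt in table.items():
                px[x] += cnt
                py[y] += cnt
            pmi = {(x, y): math.log(cnt * n / (px[x] * py[y]))
                   for (x, y), cnt in table.items()}
            print(pmi)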

  5. 48 CFR 873.105 - Acquisition planning.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... equipment or space, where the acquisition is expected to exceed the simplified acquisition threshold (SAT... particular acquisition expected to exceed the SAT. The team should consist of a mix of staff, appropriate...

  6. 48 CFR 873.105 - Acquisition planning.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... equipment or space, where the acquisition is expected to exceed the simplified acquisition threshold (SAT... particular acquisition expected to exceed the SAT. The team should consist of a mix of staff, appropriate...

  7. 32 CFR 644.88 - Other acquisition.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... District Engineer, pursuant to 43 CFR part 295, as soon as a real estate directive is issued. (i) If use... HANDBOOK Acquisition Acquisition by Purchase, Donation, and Transfer § 644.88 Other acquisition....

  8. Automated ship image acquisition

    NASA Astrophysics Data System (ADS)

    Hammond, T. R.

    2008-04-01

    The experimental Automated Ship Image Acquisition System (ASIA) collects high-resolution ship photographs at a shore-based laboratory, with minimal human intervention. The system uses Automatic Identification System (AIS) data to direct a high-resolution SLR digital camera to ship targets and to identify the ships in the resulting photographs. The photo database is then searchable using the rich data fields from AIS, which include the name, type, call sign and various vessel identification numbers. The high-resolution images from ASIA are intended to provide information that can corroborate AIS reports (e.g., extract identification from the name on the hull) or provide information that has been omitted from the AIS reports (e.g., missing or incorrect hull dimensions, cargo, etc). Once assembled into a searchable image database, the images can be used for a wide variety of marine safety and security applications. This paper documents the author's experience with the practicality of composing photographs based on AIS reports alone, describing a number of ways in which this can go wrong, from errors in the AIS reports, to fixed and mobile obstructions and multiple ships in the shot. The frequency with which various errors occurred in automatically composed photographs collected in Halifax harbour in wintertime was determined by manual examination of the images. 45% of the images examined were considered of a quality sufficient to read identification markings, numbers and text off the entire ship. One of the main technical challenges for ASIA lies in automatically differentiating good and bad photographs, so that few bad ones would be shown to human users. Initial attempts at automatic photo rating showed 75% agreement with manual assessments.

  9. Forces and pressures in adsorbing partially directed walks

    NASA Astrophysics Data System (ADS)

    Janse van Rensburg, E. J.; Prellberg, T.

    2016-05-01

    Polymers in confined spaces lose conformational entropy. This induces a net repulsive entropic force on the walls of the confining space. A model for this phenomenon is a lattice walk between confining walls, and in this paper a model of an adsorbing partially directed walk is used. The walk is placed in the half square lattice $\mathbb{L}^2_+$ with boundary $\partial\mathbb{L}^2_+$, and confined between two vertical parallel walls, which are vertical lines in the lattice, a distance w apart. The free energy of the walk is determined, as a function of w, for walks with endpoints in the confining walls and adsorbing in $\partial\mathbb{L}^2_+$. This gives the entropic force on the confining walls as a function of w. It is shown that there are zero force points in this model and the locations of these points are determined, in some cases exactly, and in other cases asymptotically.

  10. Parallel Adaptive Mesh Refinement Library

    NASA Technical Reports Server (NTRS)

    Mac-Neice, Peter; Olson, Kevin

    2005-01-01

    Parallel Adaptive Mesh Refinement Library (PARAMESH) is a package of Fortran 90 subroutines designed to provide a computer programmer with an easy route to extension of (1) a previously written serial code that uses a logically Cartesian structured mesh into (2) a parallel code with adaptive mesh refinement (AMR). Alternatively, in its simplest use, and with minimal effort, PARAMESH can operate as a domain-decomposition tool for users who want to parallelize their serial codes but who do not wish to utilize adaptivity. The package builds a hierarchy of sub-grids to cover the computational domain of a given application program, with spatial resolution varying to satisfy the demands of the application. The sub-grid blocks form the nodes of a tree data structure (a quad-tree in two or an oct-tree in three dimensions). Each grid block has a logically Cartesian mesh. The package supports one-, two- and three-dimensional models.

  11. PARAVT: Parallel Voronoi tessellation code

    NASA Astrophysics Data System (ADS)

    González, R. E.

    2016-10-01

    In this study, we present a new open source code for massively parallel computation of Voronoi tessellations (VT hereafter) in large data sets. The code is focused on astrophysical applications, where VT densities and neighbors are widely used. There are several serial Voronoi tessellation codes; however, no open source, parallel implementations are available to handle the large number of particles/galaxies in current N-body simulations and sky surveys. Parallelization is implemented under MPI, and the VT is computed using the Qhull library. Domain decomposition takes into account consistent boundary computation between tasks, and includes periodic conditions. In addition, the code computes the neighbor list, Voronoi density, Voronoi cell volume, and density gradient for each particle, as well as densities on a regular grid. Code implementation and user guide are publicly available at https://github.com/regonzar/paravt.
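
    At scales where a single node suffices, the quantities PARAVT computes can be illustrated with SciPy, whose Voronoi routine wraps the same Qhull library. This is a serial toy analogue only, without PARAVT's MPI domain decomposition or periodic boundary handling.

        import numpy as np
        from scipy.spatial import ConvexHull, Voronoi

        points = np.random.rand(1000, 3)
        vor = Voronoi(points)                  # Qhull under the hood

        # Neighbor lists: particles whose cells share a ridge (face).
        neighbors = {i: set() for i in range(len(points))}
        for p, q in vor.ridge_points:
            neighbors[p].add(q)
            neighbors[q].add(p)

        def cell_volume(i):
            region = vor.regions[vor.point_region[i]]
            if -1 in region or not region:
                return np.inf                  # unbounded cell at the edge
            return ConvexHull(vor.vertices[region]).volume

        # Voronoi density: inverse cell volume (zero for edge cells here).
        density = np.array([1.0 / cell_volume(i) for i in range(len(points))])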

  12. Visualizing Parallel Computer System Performance

    NASA Technical Reports Server (NTRS)

    Malony, Allen D.; Reed, Daniel A.

    1988-01-01

    Parallel computer systems are among the most complex of man's creations, making satisfactory performance characterization difficult. Despite this complexity, there are strong, indeed almost irresistible, incentives to quantify parallel system performance using a single metric. The fallacy lies in succumbing to such temptations. A complete performance characterization requires not only an analysis of the system's constituent levels, it also requires both static and dynamic characterizations. Static or average behavior analysis may mask transients that dramatically alter system performance. Although the human visual system is remarkably adept at interpreting and identifying anomalies in false color data, the importance of dynamic, visual scientific data presentation has only recently been recognized. Large, complex parallel systems pose equally vexing performance interpretation problems. Data from hardware and software performance monitors must be presented in ways that emphasize important events while eliding irrelevant details. Design approaches and tools for performance visualization are the subject of this paper.

  13. Fast data parallel polygon rendering

    SciTech Connect

    Ortega, F.A.; Hansen, C.D.

    1993-09-01

    This paper describes a parallel method for polygonal rendering on a massively parallel SIMD machine. This method, based on a simple shading model, is targeted for applications which require very fast polygon rendering for extremely large sets of polygons such as is found in many scientific visualization applications. The algorithms described in this paper are incorporated into a library of 3D graphics routines written for the Connection Machine. The routines are implemented on both the CM-200 and the CM-5. This library enables scientists to display 3D shaded polygons directly from a parallel machine without the need to transmit huge amounts of data to a post-processing rendering system.

  14. Parallel integrated frame synchronizer chip

    NASA Technical Reports Server (NTRS)

    Ghuman, Parminder Singh (Inventor); Solomon, Jeffrey Michael (Inventor); Bennett, Toby Dennis (Inventor)

    2000-01-01

    A parallel integrated frame synchronizer which implements a sequential pipeline process wherein serial data in the form of telemetry data or weather satellite data enters the synchronizer by means of a front-end subsystem and passes to a parallel correlator subsystem or a weather satellite data processing subsystem. When in a CCSDS mode, data from the parallel correlator subsystem passes through a window subsystem, then to a data alignment subsystem and then to a bit transition density (BTD)/cyclical redundancy check (CRC) decoding subsystem. Data from the BTD/CRC decoding subsystem or data from the weather satellite data processing subsystem is then fed to an output subsystem where it is output from a data output port.

  15. Massively Parallel MRI Detector Arrays

    PubMed Central

    Keil, Boris; Wald, Lawrence L

    2013-01-01

    Originally proposed as a method to increase sensitivity by extending the locally high-sensitivity of small surface coil elements to larger areas, the term parallel imaging now includes the use of array coils to perform image encoding. This methodology has impacted clinical imaging to the point where many examinations are performed with an array comprising multiple smaller surface coil elements as the detector of the MR signal. This article reviews the theoretical and experimental basis for the trend towards higher channel counts relying on insights gained from modeling and experimental studies as well as the theoretical analysis of the so-called “ultimate” SNR and g-factor. We also review the methods for optimally combining array data and changes in RF methodology needed to construct massively parallel MRI detector arrays and show some examples of state-of-the-art for highly accelerated imaging with the resulting highly parallel arrays. PMID:23453758

  16. Parallel algorithms for mapping pipelined and parallel computations

    NASA Technical Reports Server (NTRS)

    Nicol, David M.

    1988-01-01

    Many computational problems in image processing, signal processing, and scientific computing are naturally structured for either pipelined or parallel computation. When mapping such problems onto a parallel architecture it is often necessary to aggregate an obvious problem decomposition. Even in this context the general mapping problem is known to be computationally intractable, but recent advances have been made in identifying classes of problems and architectures for which optimal solutions can be found in polynomial time. Among these, the mapping of pipelined or parallel computations onto linear array, shared memory, and host-satellite systems figures prominently. This paper extends that work first by showing how to improve existing serial mapping algorithms. These improvements have significantly lower time and space complexities: in one case a published O(nm³) time algorithm for mapping m modules onto n processors is reduced to an O(nm log m) time complexity, and its space requirements reduced from O(nm²) to O(m). Run time complexity is further reduced with parallel mapping algorithms based on these improvements, which run on the architecture for which they create the mappings.
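
    The flavor of the underlying mapping problem: assign m modules, in pipeline order, to n processors so that the maximum per-processor load (the pipeline bottleneck) is minimized. The dynamic program below is a baseline sketch for contrast, running in O(n·m²) time, and is not the paper's faster O(nm log m) algorithm.

        def min_bottleneck(w, n):
            """Contiguously assign modules with weights w to n processors,
            minimizing the maximum per-processor load."""
            m = len(w)
            prefix = [0]
            for x in w:
                prefix.append(prefix[-1] + x)
            INF = float("inf")
            # best[k][j]: minimum bottleneck, first j modules on k processors
            best = [[INF] * (m + 1) for _ in range(n + 1)]
            best[0][0] = 0
            for k in range(1, n + 1):
                for j in range(1, m + 1):
                    for i in range(k - 1, j):
                        load = prefix[j] - prefix[i]
                        best[k][j] = min(best[k][j],
                                         max(best[k - 1][i], load))
            return best[n][m]

        print(min_bottleneck([4, 7, 2, 5, 9, 1], 3))   # -> 11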

  17. Hybrid parallel programming with MPI and Unified Parallel C.

    SciTech Connect

    Dinan, J.; Balaji, P.; Lusk, E.; Sadayappan, P.; Thakur, R.; Mathematics and Computer Science; The Ohio State Univ.

    2010-01-01

    The Message Passing Interface (MPI) is one of the most widely used programming models for parallel computing. However, the amount of memory available to an MPI process is limited by the amount of local memory within a compute node. Partitioned Global Address Space (PGAS) models such as Unified Parallel C (UPC) are growing in popularity because of their ability to provide a shared global address space that spans the memories of multiple compute nodes. However, taking advantage of UPC can require a large recoding effort for existing parallel applications. In this paper, we explore a new hybrid parallel programming model that combines MPI and UPC. This model allows MPI programmers incremental access to a greater amount of memory, enabling memory-constrained MPI codes to process larger data sets. In addition, the hybrid model offers UPC programmers an opportunity to create static UPC groups that are connected over MPI. As we demonstrate, the use of such groups can significantly improve the scalability of locality-constrained UPC codes. This paper presents a detailed description of the hybrid model and demonstrates its effectiveness in two applications: a random access benchmark and the Barnes-Hut cosmological simulation. Experimental results indicate that the hybrid model can greatly enhance performance; using hybrid UPC groups that span two cluster nodes, RA performance increases by a factor of 1.33 and using groups that span four cluster nodes, Barnes-Hut experiences a twofold speedup at the expense of a 2% increase in code size.

  18. Gang scheduling a parallel machine

    SciTech Connect

    Gorda, B.C.; Brooks, E.D. III.

    1991-03-01

    Program development on parallel machines can be a nightmare of scheduling headaches. We have developed a portable time sharing mechanism to handle the problem of scheduling gangs of processors. User program and their gangs of processors are put to sleep and awakened by the gang scheduler to provide a time sharing environment. Time quantums are adjusted according to priority queues and a system of fair share accounting. The initial platform for this software is the 128 processor BBN TC2000 in use in the Massively Parallel Computing Initiative at the Lawrence Livermore National Laboratory. 2 refs., 1 fig.

  19. Medipix2 parallel readout system

    NASA Astrophysics Data System (ADS)

    Fanti, V.; Marzeddu, R.; Randaccio, P.

    2003-08-01

    A fast parallel readout system based on a PCI board has been developed in the framework of the Medipix collaboration. The readout electronics consists of two boards: the motherboard directly interfacing the Medipix2 chip, and the PCI board with digital I/O ports 32 bits wide. The device driver and readout software have been developed at low level in Assembler to allow fast data transfer and image reconstruction. The parallel readout permits a transfer rate up to 64 Mbytes/s. http://medipix.web.cern.ch/MEDIPIX/

  20. Neurophysiological preconditions of syntax acquisition.

    PubMed

    Friederici, Angela D; Oberecker, Regine; Brauer, Jens

    2012-03-01

    Although the neural network for language processing in the adult brain is well specified, the neural underpinning of language acquisition is still underdetermined. Here, we define the milestones of syntax acquisition and discuss the possible neurophysiological preconditions thereof. Early language learning seems to be based on the bilateral temporal cortices. Subsequent syntax acquisition apparently primarily recruits a neural network involving the left frontal cortex and the temporal cortex connected by a ventrally located fiber system. The late developing ability to comprehend syntactically complex sentences appears to require a neural network that connects Broca's area to the left posterior temporal cortex via a dorsally located fiber pathway. Thus, acquisition of syntax requires the maturation of fiber bundles connecting the classical language-relevant brain regions. PMID:21706312

  1. Microcomputer Acquisition Standards and Controls.

    ERIC Educational Resources Information Center

    Wold, Geoffrey H.

    1987-01-01

    Increased use of microcomputers in schools can be implemented more effectively when management develops acquisitions standards and controls. Technical standards as well as operational and documentation standards are outlined. (MLF)

  2. STIS Target Acquisitions During SMOV

    NASA Astrophysics Data System (ADS)

    Katsanis, Rocio M.; Downes, Ron; Hartig, George; Kraemer, Steve

    1997-07-01

    We summarize the first results on the analysis of in-flight STIS target acquisition (ACQ and ACQ/PEAK). These results show that the STIS target acquisition (ACQ) is working very accurately for point sources (within 0.5 pixels = 0.025 arcseconds), about 4 times better than specified in the Instrument Handbook. As a result of the accuracy of the ACQ algorithm, we no longer recommend performing ACQ/PEAKs for the 0.2 arcsecond wide slits. For diffuse acquisitions the accuracy varies with target size. Although analysis of ACQ/PEAK data is hampered by a flight software problem, we anticipate that peakups will be accurate to roughly ±5% of the slit width (instead of the ±15% previously advertised). We are implementing several enhancements to the flight software that will take effect by mid-August to improve the quality of the acquisitions.

  3. Parallel multiscale simulations of a brain aneurysm

    NASA Astrophysics Data System (ADS)

    Grinberg, Leopold; Fedosov, Dmitry A.; Karniadakis, George Em

    2013-07-01

    Cardiovascular pathologies, such as a brain aneurysm, are affected by the global blood circulation as well as by the local microrheology. Hence, developing computational models for such cases requires the coupling of disparate spatial and temporal scales often governed by diverse mathematical descriptions, e.g., by partial differential equations (continuum) and ordinary differential equations for discrete particles (atomistic). However, interfacing atomistic-based with continuum-based domain discretizations is a challenging problem that requires both mathematical and computational advances. We present here a hybrid methodology that enabled us to perform the first multiscale simulations of platelet depositions on the wall of a brain aneurysm. The large scale flow features in the intracranial network are accurately resolved by using the high-order spectral element Navier-Stokes solver NɛκTαr. The blood rheology inside the aneurysm is modeled using a coarse-grained stochastic molecular dynamics approach (the dissipative particle dynamics method) implemented in the parallel code LAMMPS. The continuum and atomistic domains overlap with interface conditions provided by effective forces computed adaptively to ensure continuity of states across the interface boundary. A two-way interaction is allowed with the time-evolving boundary of the (deposited) platelet clusters tracked by an immersed boundary method. The corresponding heterogeneous solvers (NɛκTαr and LAMMPS) are linked together by a computational multilevel message passing interface that facilitates modularity and high parallel efficiency. Results of multiscale simulations of clot formation inside the aneurysm in a patient-specific arterial tree are presented. We also discuss the computational challenges involved and present scalability results of our coupled solver on up to 300 K computer processors. Validation of such coupled atomistic-continuum models is a main open issue that has to be addressed in future work.

  4. Parallel multiscale simulations of a brain aneurysm

    SciTech Connect

    Grinberg, Leopold; Fedosov, Dmitry A.; Karniadakis, George Em

    2013-07-01

    Cardiovascular pathologies, such as a brain aneurysm, are affected by the global blood circulation as well as by the local microrheology. Hence, developing computational models for such cases requires the coupling of disparate spatial and temporal scales often governed by diverse mathematical descriptions, e.g., by partial differential equations (continuum) and ordinary differential equations for discrete particles (atomistic). However, interfacing atomistic-based with continuum-based domain discretizations is a challenging problem that requires both mathematical and computational advances. We present here a hybrid methodology that enabled us to perform the first multiscale simulations of platelet depositions on the wall of a brain aneurysm. The large scale flow features in the intracranial network are accurately resolved by using the high-order spectral element Navier–Stokes solver NεκTαr. The blood rheology inside the aneurysm is modeled using a coarse-grained stochastic molecular dynamics approach (the dissipative particle dynamics method) implemented in the parallel code LAMMPS. The continuum and atomistic domains overlap with interface conditions provided by effective forces computed adaptively to ensure continuity of states across the interface boundary. A two-way interaction is allowed with the time-evolving boundary of the (deposited) platelet clusters tracked by an immersed boundary method. The corresponding heterogeneous solvers (NεκTαr and LAMMPS) are linked together by a computational multilevel message passing interface that facilitates modularity and high parallel efficiency. Results of multiscale simulations of clot formation inside the aneurysm in a patient-specific arterial tree are presented. We also discuss the computational challenges involved and present scalability results of our coupled solver on up to 300 K computer processors. Validation of such coupled atomistic-continuum models is a main open issue that has to be addressed in future work.

  5. File concepts for parallel I/O

    NASA Technical Reports Server (NTRS)

    Crockett, Thomas W.

    1989-01-01

    The subject of input/output (I/O) was often neglected in the design of parallel computer systems, although for many problems I/O rates will limit the speedup attainable. The I/O problem is addressed by considering the role of files in parallel systems. The notion of parallel files is introduced. Parallel files provide for concurrent access by multiple processes, and utilize parallelism in the I/O system to improve performance. Parallel files can also be used conventionally by sequential programs. A set of standard parallel file organizations using multiple storage devices is proposed. Problem areas are also identified and discussed.

  6. Matpar: Parallel Extensions for MATLAB

    NASA Technical Reports Server (NTRS)

    Springer, P. L.

    1998-01-01

    Matpar is a set of client/server software that allows a MATLAB user to take advantage of a parallel computer for very large problems. The user can replace calls to certain built-in MATLAB functions with calls to Matpar functions.

  7. The AIS-5000 parallel processor

    SciTech Connect

    Schmitt, L.A.; Wilson, S.S.

    1988-05-01

    The AIS-5000 is a commercially available massively parallel processor which has been designed to operate in an industrial environment. It has fine-grained parallelism with up to 1024 processing elements arranged in a single-instruction multiple-data (SIMD) architecture. The processing elements are arranged in a one-dimensional chain that, for computer vision applications, can be as wide as the image itself. This architecture has superior cost/performance characteristics compared with two-dimensional mesh-connected systems. The design of the processing elements and their interconnections as well as the software used to program the system allow a wide variety of algorithms and applications to be implemented. In this paper, the overall architecture of the system is described. Various components of the system are discussed, including details of the processing elements, data I/O pathways and parallel memory organization. A virtual two-dimensional model for programming image-based algorithms for the system is presented. This model is supported by the AIS-5000 hardware and software and allows the system to be treated as a full-image-size, two-dimensional, mesh-connected parallel processor. Performance benchmarks are given for certain simple and complex functions.

  8. Parallel, Distributed Scripting with Python

    SciTech Connect

    Miller, P J

    2002-05-24

    Parallel computers used to be, for the most part, one-of-a-kind systems which were extremely difficult to program portably. With SMP architectures, the advent of the POSIX thread API and OpenMP gave developers ways to portably exploit on-the-box shared memory parallelism. Since these architectures didn't scale cost-effectively, distributed memory clusters were developed. The associated MPI message passing libraries gave these systems a portable paradigm too. Having programmers effectively use this paradigm is a somewhat different question. Distributed data has to be explicitly transported via the messaging system in order for it to be useful. In high level languages, the MPI library gives access to data distribution routines in C, C++, and FORTRAN. But we need more than that. Many reasonable and common tasks are best done in (or as extensions to) scripting languages. Consider sysadmin tools such as password crackers, file purgers, etc. These are simple to write in a scripting language such as Python (an open source, portable, and freely available interpreter). But these tasks beg to be done in parallel. Consider a password checker that checks an encrypted password against a 25,000 word dictionary. This can take around 10 seconds in Python (6 seconds in C). It is trivial to parallelize if you can distribute the information and coordinate the work.
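
    The password-checker example is embarrassingly parallel: split the dictionary, let each worker test its share, and gather any hits. Below is a minimal single-node sketch with Python's multiprocessing module; a SHA-256 hash stands in for the crypt(3)-style comparison, and words.txt is an assumed dictionary file (the paper's setting was MPI-style distribution across a cluster).

        import hashlib
        from multiprocessing import Pool

        TARGET = hashlib.sha256(b"sunshine").hexdigest()  # stand-in hash

        def check_chunk(words):
            # Each worker independently tests its share of the dictionary.
            return [w for w in words
                    if hashlib.sha256(w.encode()).hexdigest() == TARGET]

        if __name__ == "__main__":
            with open("words.txt") as fh:        # assumed 25,000-word list
                words = fh.read().split()
            chunks = [words[i::8] for i in range(8)]
            with Pool(8) as pool:
                hits = [w for part in pool.map(check_chunk, chunks)
                        for w in part]
            print(hits)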

  9. Parallel distributed computing using Python

    NASA Astrophysics Data System (ADS)

    Dalcin, Lisandro D.; Paz, Rodrigo R.; Kler, Pablo A.; Cosimo, Alejandro

    2011-09-01

    This work presents two software components aimed to relieve the costs of accessing high-performance parallel computing resources within a Python programming environment: MPI for Python and PETSc for Python. MPI for Python is a general-purpose Python package that provides bindings for the Message Passing Interface (MPI) standard using any back-end MPI implementation. Its facilities allow parallel Python programs to easily exploit multiple processors using the message passing paradigm. PETSc for Python provides access to the Portable, Extensible Toolkit for Scientific Computation (PETSc) libraries. Its facilities allow sequential and parallel Python applications to exploit state-of-the-art algorithms and data structures readily available in PETSc for the solution of large-scale problems in science and engineering. MPI for Python and PETSc for Python are fully integrated with PETSc-FEM, an MPI and PETSc based parallel, multiphysics, finite elements code developed at CIMEC laboratory. This software infrastructure supports research activities related to simulation of fluid flows with applications ranging from the design of microfluidic devices for biochemical analysis to modeling of large-scale stream/aquifer interactions.
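
    A minimal mpi4py example of the message-passing paradigm the package exposes: the root rank scatters equal slices of an array, each rank reduces its slice locally, and a global reduction collects the result. The sketch assumes the array length divides evenly by the number of ranks.

        import numpy as np
        from mpi4py import MPI

        comm = MPI.COMM_WORLD
        rank, size = comm.Get_rank(), comm.Get_size()

        n = 1_000_000                       # assumed divisible by size
        data = np.arange(n, dtype="d") if rank == 0 else None
        chunk = np.empty(n // size, dtype="d")
        comm.Scatter(data, chunk, root=0)   # distribute slices to all ranks

        total = comm.reduce(chunk.sum(), op=MPI.SUM, root=0)
        if rank == 0:
            print("sum =", total)           # run: mpiexec -n 4 python sum.py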

  10. Parallel deterioration to language processing in a bilingual speaker.

    PubMed

    Druks, Judit; Weekes, Brendan Stuart

    2013-01-01

    The convergence hypothesis [Green, D. W. (2003). The neural basis of the lexicon and the grammar in L2 acquisition: The convergence hypothesis. In R. van Hout, A. Hulk, F. Kuiken, & R. Towell (Eds.), The interface between syntax and the lexicon in second language acquisition (pp. 197-218). Amsterdam: John Benjamins] assumes that the neural substrates of language representations are shared between the languages of a bilingual speaker. One prediction of this hypothesis is that neurodegenerative disease should produce parallel deterioration to lexical and grammatical processing in bilingual aphasia. We tested this prediction with a late bilingual Hungarian (first language, L1)-English (second language, L2) speaker J.B. who had nonfluent progressive aphasia (NFPA). J.B. had acquired L2 in adolescence but was premorbidly proficient and used English as his dominant language throughout adult life. Our investigations showed comparable deterioration to lexical and grammatical knowledge in both languages during a one-year period. Parallel deterioration to language processing in a bilingual speaker with NFPA challenges the assumption that L1 and L2 rely on different brain mechanisms as assumed in some theories of bilingual language processing [Ullman, M. T. (2001). The neural basis of lexicon and grammar in first and second language: The declarative/procedural model. Bilingualism: Language and Cognition, 4(1), 105-122]. PMID:24527801

  11. Parallel deterioration to language processing in a bilingual speaker.

    PubMed

    Druks, Judit; Weekes, Brendan Stuart

    2013-01-01

    The convergence hypothesis [Green, D. W. (2003). The neural basis of the lexicon and the grammar in L2 acquisition: The convergence hypothesis. In R. van Hout, A. Hulk, F. Kuiken, & R. Towell (Eds.), The interface between syntax and the lexicon in second language acquisition (pp. 197-218). Amsterdam: John Benjamins] assumes that the neural substrates of language representations are shared between the languages of a bilingual speaker. One prediction of this hypothesis is that neurodegenerative disease should produce parallel deterioration to lexical and grammatical processing in bilingual aphasia. We tested this prediction with a late bilingual Hungarian (first language, L1)-English (second language, L2) speaker J.B. who had nonfluent progressive aphasia (NFPA). J.B. had acquired L2 in adolescence but was premorbidly proficient and used English as his dominant language throughout adult life. Our investigations showed comparable deterioration to lexical and grammatical knowledge in both languages during a one-year period. Parallel deterioration to language processing in a bilingual speaker with NFPA challenges the assumption that L1 and L2 rely on different brain mechanisms as assumed in some theories of bilingual language processing [Ullman, M. T. (2001). The neural basis of lexicon and grammar in first and second language: The declarative/procedural model. Bilingualism: Language and Cognition, 4(1), 105-122].

  12. Time-parallel iterative methods for parabolic PDES: Multigrid waveform relaxation and time-parallel multigrid

    SciTech Connect

    Vandewalle, S.

    1994-12-31

    Time-stepping methods for parabolic partial differential equations are essentially sequential. This prohibits the use of massively parallel computers unless the problem on each time-level is very large. This observation has led to the development of algorithms that operate on more than one time-level simultaneously; that is to say, on grids extending in space and in time. The so-called parabolic multigrid methods solve the time-dependent parabolic PDE as if it were a stationary PDE discretized on a space-time grid. The author has investigated the use of multigrid waveform relaxation, an algorithm developed by Lubich and Ostermann. The algorithm is based on a multigrid acceleration of waveform relaxation, a highly concurrent technique for solving large systems of ordinary differential equations. Another method of this class is the time-parallel multigrid method. This method was developed by Hackbusch and was recently the subject of further study by Horton. It extends the elliptic multigrid idea to the set of equations that is derived by discretizing a parabolic problem in space and in time.

  13. A role for the developing lexicon in phonetic category acquisition

    PubMed Central

    Feldman, Naomi H.; Griffiths, Thomas L.; Goldwater, Sharon; Morgan, James L.

    2013-01-01

    Infants segment words from fluent speech during the same period when they are learning phonetic categories, yet accounts of phonetic category acquisition typically ignore information about the words in which sounds appear. We use a Bayesian model to illustrate how feedback from segmented words might constrain phonetic category learning by providing information about which sounds occur together in words. Simulations demonstrate that word-level information can successfully disambiguate overlapping English vowel categories. Learning patterns in the model are shown to parallel human behavior from artificial language learning tasks. These findings point to a central role for the developing lexicon in phonetic category acquisition and provide a framework for incorporating top-down constraints into models of category learning. PMID:24219848

  14. Parallel execution of LISP programs

    SciTech Connect

    Weening, J.S.

    1989-01-01

    This dissertation considers several issues in the execution of Lisp programs on shared-memory multiprocessors. An overview of constructs for explicit parallelism in Lisp is first presented. The problems of partitioning a program into processes and scheduling these processes are then described, and a number of methods for performing these are proposed. These include cutting off process creation based on properties of the computation tree of the program, and basing partitioning decisions on the state of the system at runtime instead of the program. An experimental study of these methods has been performed using a simulator for parallel Lisp. The simulator, written in Common Lisp using a continuation-passing style, is described in detail. This is followed by a description of the experiments that were performed and an analysis of the results. Two programs are used as illustrations: a Fast Fourier Transform, which has an abundance of parallelism, and the Cocke-Younger-Kasami parsing algorithm, for which good speedup is not as easy to obtain. The difficulty of using cutoff-based partitioning methods, and the differences between various scheduling methods, are shown. A combination of partitioning and scheduling methods which the author calls dynamic partitioning is analyzed in more detail. This method is based on examining the machine's runtime state; it requires that the programmer only identify parallelism in the program, without deciding which potential parallelism is actually useful. Several theorems are proved providing upper bounds on the amount of overhead produced by this method. The author concludes that for programs whose computation trees have small height relative to their total size, dynamic partitioning can achieve asymptotically minimal overhead in the cost of process creation.

  15. Partial confinement photonic crystal waveguides

    SciTech Connect

    Saini, S.; Hong, C.-Y.; Pfaff, N.; Kimerling, L. C.; Michel, J.

    2008-12-29

    One-dimensional photonic crystal waveguides with an incomplete photonic band gap are modeled and proposed for an integration application that exploits their property of partial angular confinement. Planar apodized photonic crystal structures are deposited by plasma enhanced chemical vapor deposition and characterized by reflectivity as a function of angle and polarization, validating a partial confinement design for light at 850 nm wavelength. Partial confinement identifies an approach for tailoring waveguide properties by the exploitation of conformal film deposition over a substrate with angularly dependent topology. An application for an optoelectronic transceiver is demonstrated.

  16. A numerical differentiation library exploiting parallel architectures

    NASA Astrophysics Data System (ADS)

    Voglis, C.; Hadjidoukas, P. E.; Lagaris, I. E.; Papageorgiou, D. G.

    2009-08-01

    We present a software library for numerically estimating first and second order partial derivatives of a function by finite differencing. Various truncation schemes are offered resulting in corresponding formulas that are accurate to order O(h), O(h²), and O(h⁴), h being the differencing step. The derivatives are calculated via forward, backward and central differences. Care has been taken that only feasible points are used in the case where bound constraints are imposed on the variables. The Hessian may be approximated either from function or from gradient values. There are three versions of the software: a sequential version, an OpenMP version for shared memory architectures and an MPI version for distributed systems (clusters). The parallel versions exploit the multiprocessing capability offered by computer clusters, as well as modern multi-core systems, and due to the independent character of the derivative computation, the speedup scales almost linearly with the number of available processors/cores. Program summary. Program title: NDL (Numerical Differentiation Library) Catalogue identifier: AEDG_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEDG_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 73 030 No. of bytes in distributed program, including test data, etc.: 630 876 Distribution format: tar.gz Programming language: ANSI FORTRAN-77, ANSI C, MPI, OPENMP Computer: Distributed systems (clusters), shared memory systems Operating system: Linux, Solaris Has the code been vectorised or parallelized?: Yes RAM: The library uses O(N) internal storage, N being the dimension of the problem Classification: 4.9, 4.14, 6.5 Nature of problem: The numerical estimation of derivatives at several accuracy levels is a common requirement in many computational tasks, such
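
    The heart of such a library is a handful of finite-difference formulas whose function evaluations are mutually independent. Below is a minimal sketch of the O(h²) central-difference gradient, our illustration of the technique rather than NDL's FORTRAN interface; the 2n evaluations in the loop are exactly the independent work items that the OpenMP and MPI versions distribute.

        import numpy as np

        def gradient_central(f, x, h=1e-6):
            """O(h^2) central-difference estimate of the gradient of f at x;
            the 2n function evaluations are independent of one another."""
            x = np.asarray(x, dtype=float)
            g = np.empty(x.size)
            for i in range(x.size):
                e = np.zeros(x.size)
                e[i] = h
                g[i] = (f(x + e) - f(x - e)) / (2.0 * h)
            return g

        rosen = lambda x: 100 * (x[1] - x[0] ** 2) ** 2 + (1 - x[0]) ** 2
        print(gradient_central(rosen, [1.2, 1.0]))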

  17. Influence of resonant charge exchange on the viscosity of partially ionized plasma in a magnetic field

    SciTech Connect

    Zhdanov, V. M. Stepanenko, A. A.

    2013-12-15

    The influence of resonant charge exchange in ion-atom interaction on the viscosity of partially ionized plasma embedded in the magnetic field is investigated. The general system of equations used to derive the viscosity coefficients for an arbitrary plasma component in the 21-moment approximation of Grad's method is presented. The expressions for the coefficients of total and partial viscosities of a multicomponent partially ionized plasma in the magnetic field are obtained. As an example, the coefficients of the parallel and transverse viscosities for the ionic and neutral components of the partially ionized hydrogen plasma are calculated. It is shown that accounting for resonant charge exchange can lead to a substantial change in the parallel and transverse viscosities of the plasma components in the region of low degrees of ionization, on the order of 0.1.

  18. Partial-Payload Support Structure

    NASA Technical Reports Server (NTRS)

    Mitchell, R.; Freeman, M.

    1984-01-01

    Partial-payload support structure (PPSS) is a modular, bridge-like structure supporting experiments weighing up to 2 tons. PPSS handles such experiments more economically than the standard Spacelab pallet system.

  19. 48 CFR 7.402 - Acquisition methods.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 48 Federal Acquisition Regulations System 1 2011-10-01 2011-10-01 false Acquisition methods. 7.402... ACQUISITION PLANNING Equipment Lease or Purchase 7.402 Acquisition methods. (a) Purchase method. (1) Generally, the purchase method is appropriate if the equipment will be used beyond the point in time...

  20. 48 CFR 7.402 - Acquisition methods.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 48 Federal Acquisition Regulations System 1 2010-10-01 2010-10-01 false Acquisition methods. 7.402... ACQUISITION PLANNING Equipment Lease or Purchase 7.402 Acquisition methods. (a) Purchase method. (1) Generally, the purchase method is appropriate if the equipment will be used beyond the point in time...

  1. Toward an automated parallel computing environment for geosciences

    NASA Astrophysics Data System (ADS)

    Zhang, Huai; Liu, Mian; Shi, Yaolin; Yuen, David A.; Yan, Zhenzhen; Liang, Guoping

    2007-08-01

    Software for geodynamic modeling has not kept up with the fast-growing computing hardware and network resources. In the past decade supercomputing power has become available to most researchers in the form of affordable Beowulf clusters and other parallel computer platforms. However, to take full advantage of such computing power requires developing parallel algorithms and associated software, a task that is often too daunting for geoscience modelers whose main expertise is in geosciences. We introduce here an automated parallel computing environment built on open-source algorithms and libraries. Users interact with this computing environment by specifying the partial differential equations, solvers, and model-specific properties using an English-like modeling language in the input files. The system then automatically generates the finite element codes that can be run on distributed or shared memory parallel machines. This system is dynamic and flexible, allowing users to address different problems in geosciences. It is capable of providing web-based services, enabling users to generate source codes online. This unique feature will facilitate high-performance computing to be integrated with distributed data grids in the emerging cyber-infrastructures for geosciences. In this paper we discuss the principles of this automated modeling environment and provide examples to demonstrate its versatility.

  2. Numerical computation on massively parallel hypercubes. [Connection machine

    SciTech Connect

    McBryan, O.A.

    1986-01-01

    We describe numerical computations on the Connection Machine, a massively parallel hypercube architecture with 65,536 single-bit processors and 32 Mbytes of memory. A parallel extension of COMMON LISP provides access to the processors and network. The rich software environment is further enhanced by a powerful virtual processor capability, which extends the degree of fine-grained parallelism beyond 1,000,000. We briefly describe the hardware and indicate the principal features of the parallel programming environment. We then present implementations of SOR, multigrid and pre-conditioned conjugate gradient algorithms for solving partial differential equations on the Connection Machine. Despite the lack of floating point hardware, computation rates above 100 megaflops have been achieved in PDE solution. Virtual processors prove to be a real advantage, easing the effort of software development while improving system performance significantly. The software development effort is also facilitated by the fact that hypercube communications prove to be fast and essentially independent of distance. 29 refs., 4 figs.

  3. Accuracy of different impression materials in parallel and nonparallel implants

    PubMed Central

    Vojdani, Mahroo; Torabi, Kianoosh; Ansarifard, Elham

    2015-01-01

    Background: A precise impression is mandatory to obtain passive fit in implant-supported prostheses. The aim of this study was to compare the accuracy of three impression materials in both parallel and nonparallel implant positions. Materials and Methods: In this experimental study, two partially dentate maxillary acrylic models with four implant analogues in canines and lateral incisors areas were used. One model was simulating the parallel condition and the other the nonparallel one, in which implants were tilted 30° bucally and 20° in either mesial or distal directions. Thirty stone casts were made from each model using polyether (Impregum), additional silicone (Monopren) and vinyl siloxanether (Identium), with open tray technique. The distortion values in three dimensions (X, Y, and Z axes) were measured by coordinate measuring machine. Two-way analysis of variance (ANOVA), one-way ANOVA and Tukey tests were used for data analysis (α = 0.05). Results: Under parallel condition, all the materials showed comparable, accurate casts (P = 0.74). In the presence of angulated implants, while Monopren showed more accurate results compared to Impregum (P = 0.01), Identium yielded almost similar results to those produced by Impregum (P = 0.27) and Monopren (P = 0.26). Conclusion: Within the limitations of this study, in parallel conditions, the type of impression material cannot affect the accuracy of the implant impressions; however, in nonparallel conditions, polyvinyl siloxane is shown to be a better choice, followed by vinyl siloxanether and polyether respectively. PMID:26288620

  4. Dynamic Load Balancing Strategies for Parallel Reacting Flow Simulations

    NASA Astrophysics Data System (ADS)

    Pisciuneri, Patrick; Meneses, Esteban; Givi, Peyman

    2014-11-01

    Load balancing in parallel computing aims at distributing the work as evenly as possible among the processors. This is a critical issue in the performance of parallel, time accurate, flow simulators. The constraint of time accuracy requires that all processes must be finished with their calculation for a given time step before any process can begin calculation of the next time step. Thus, an irregularly balanced compute load will result in idle time for many processes for each iteration and thus increased walltimes for calculations. Two existing, dynamic load balancing approaches are applied to the simplified case of a partially stirred reactor for methane combustion. The first is Zoltan, a parallel partitioning, load balancing, and data management library developed at the Sandia National Laboratories. The second is Charm++, which is its own machine independent parallel programming system developed at the University of Illinois at Urbana-Champaign. The performance of these two approaches is compared, and the prospects for their application to full 3D, reacting flow solvers is assessed.
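
    The idea behind dynamic balancing can be sketched on a single node: handing out work in small chunks on demand keeps fast processes busy, where a static even split would leave them idle while the process holding the most expensive cells finishes. The snippet below is a toy stand-in for per-cell chemistry cost; Zoltan and Charm++ additionally migrate data and objects across distributed memory.

        import random
        import time
        from multiprocessing import Pool

        def cell_work(cell):
            # Stand-in for a reactor cell whose chemistry cost varies widely.
            cost = random.uniform(0.001, 0.010)
            time.sleep(cost)
            return cell, cost

        if __name__ == "__main__":
            with Pool(8) as pool:
                # Small chunks are claimed on demand by idle workers,
                # approximating a dynamically balanced load.
                results = list(pool.imap_unordered(cell_work, range(1000),
                                                   chunksize=4))
            print(len(results), "cells processed")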

  5. Complex partial status and schizophrenia.

    PubMed

    Ardila, A; Gómez, J

    1988-04-01

    Three cases of complex partial status which were diagnosed as psychotic episodes are presented. The scans of two of these cases show structural abnormalities in the left temporal lobe. It is proposed that there are similar neurophysiological mechanisms in primary schizophrenia and in the perceptual, affective and cognitive phenomena apparent in some complex and psychic partial seizures. The hippocampal-amygdaline system plays a central role in both cases.

  6. Ultra-fast parallel magnetic resonance imaging of granular systems

    NASA Astrophysics Data System (ADS)

    Penn, Alexander; Pruessmann, Klaas P.; Müller, Christoph

    2015-03-01

    Several non-intrusive techniques have been applied to probe the dynamics of two-phase granular systems, with the most prominent examples being X-ray tomography, positron emission particle tracking (PEPT), electrical capacitance tomography and magnetic resonance imaging (MRI). MRI comes with the particular advantage that by implementing suitable pulse sequences not only spin densities (i.e. voidage), but also velocity, acceleration, diffusion and chemical reactions can be measured. However, so far the investigation of two-phase granular systems has been performed on relatively small-bore systems (max. diameter 60 mm), which are heavily influenced by wall effects. Furthermore, largely only single-coil detection has been employed, severely limiting the temporal resolution of the data acquisition. Here, we report the acquisition of ultra-fast MRI measurements in large volume vessels using medical MRI scanners. Specifically, parallel MRI, i.e. the simultaneous use of multiple receiver coils, has been exploited to speed up the data acquisition. In combination with advanced pulse sequences, we were able to probe the rapid dynamics (voidage and velocity measurements) of gas-solid systems.

  7. [Acrylic resin removable partial dentures].

    PubMed

    de Baat, C; Witter, D J; Creugers, N H J

    2011-01-01

    An acrylic resin removable partial denture is distinguished from other types of removable partial dentures by an all-acrylic resin base which is, in principle, solely supported by the edentulous regions of the tooth arch and in the maxilla also by the hard palate. When compared to the other types of removable partial dentures, the acrylic resin removable partial denture has 3 favourable aspects: the economic aspect, its aesthetic quality and the ease with which it can be extended and adjusted. Disadvantages are an increased risk of caries developing, gingivitis, periodontal disease, denture stomatitis, alveolar bone reduction, tooth migration, triggering of the gag reflex and damage to the acrylic resin base. Present-day indications are of a temporary or palliative nature or are motivated by economic factors. Special varieties of the acrylic resin removable partial denture are the spoon denture, the flexible denture fabricated of non-rigid acrylic resin, and the two-piece sectional denture. Furthermore, acrylic resin removable partial dentures can be supplied with clasps or reinforced by fibers or metal wires.

  8. 48 CFR 52.247-19 - Stopping in Transit for Partial Unloading.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 48 Federal Acquisition Regulations System 2 2013-10-01 2013-10-01 false Stopping in Transit for... Clauses 52.247-19 Stopping in Transit for Partial Unloading. As prescribed in 47.207-6(c)(5)(ii), insert... origin to two or more consignees along the route between origin and last destination: Stopping in...

  9. 48 CFR 52.247-19 - Stopping in Transit for Partial Unloading.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 48 Federal Acquisition Regulations System 2 2010-10-01 2010-10-01 false Stopping in Transit for... Clauses 52.247-19 Stopping in Transit for Partial Unloading. As prescribed in 47.207-6(c)(5)(ii), insert... origin to two or more consignees along the route between origin and last destination: Stopping in...

  10. 48 CFR 52.247-19 - Stopping in Transit for Partial Unloading.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 48 Federal Acquisition Regulations System 2 2014-10-01 2014-10-01 false Stopping in Transit for... Clauses 52.247-19 Stopping in Transit for Partial Unloading. As prescribed in 47.207-6(c)(5)(ii), insert... origin to two or more consignees along the route between origin and last destination: Stopping in...

  11. 48 CFR 52.247-19 - Stopping in Transit for Partial Unloading.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 48 Federal Acquisition Regulations System 2 2011-10-01 2011-10-01 false Stopping in Transit for... Clauses 52.247-19 Stopping in Transit for Partial Unloading. As prescribed in 47.207-6(c)(5)(ii), insert... origin to two or more consignees along the route between origin and last destination: Stopping in...

  12. 48 CFR 52.247-19 - Stopping in Transit for Partial Unloading.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 48 Federal Acquisition Regulations System 2 2012-10-01 2012-10-01 false Stopping in Transit for... Clauses 52.247-19 Stopping in Transit for Partial Unloading. As prescribed in 47.207-6(c)(5)(ii), insert... origin to two or more consignees along the route between origin and last destination: Stopping in...

  13. Merlin - Massively parallel heterogeneous computing

    NASA Technical Reports Server (NTRS)

    Wittie, Larry; Maples, Creve

    1989-01-01

    Hardware and software for Merlin, a new kind of massively parallel computing system, are described. Eight computers are linked as a 300-MIPS prototype to develop system software for a larger Merlin network with 16 to 64 nodes, totaling 600 to 3000 MIPS. These working prototypes help refine a mapped reflective memory technique that offers a new, very general way of linking many types of computer to form supercomputers. Processors share data selectively and rapidly on a word-by-word basis. Fast firmware virtual circuits are reconfigured to match topological needs of individual application programs. Merlin's low-latency memory-sharing interfaces solve many problems in the design of high-performance computing systems. The Merlin prototypes are intended to run parallel programs for scientific applications and to determine hardware and software needs for a future Teraflops Merlin network.

  14. A generalized parallel replica dynamics

    SciTech Connect

    Binder, Andrew; Lelièvre, Tony; Simpson, Gideon

    2015-03-01

    Metastability is a common obstacle to performing long molecular dynamics simulations. Many numerical methods have been proposed to overcome it. One method is parallel replica dynamics, which relies on the rapid convergence of the underlying stochastic process to a quasi-stationary distribution. Two requirements for applying parallel replica dynamics are knowledge of the time scale on which the process converges to the quasi-stationary distribution and a mechanism for generating samples from this distribution. By combining a Fleming–Viot particle system with convergence diagnostics to simultaneously identify when the process converges while also generating samples, we can address both points. This variation on the algorithm is illustrated with various numerical examples, including those with entropic barriers and the 2D Lennard-Jones cluster of seven atoms.

  15. Scans as primitive parallel operations

    SciTech Connect

    Blelloch, G.E. . Dept. of Computer Science)

    1989-11-01

    In most parallel random access machine (PRAM) models, memory references are assumed to take unit time. In practice, and in theory, certain scan operations, also known as prefix computations, can execute in no more time than these parallel memory references. This paper outlines an extensive study of the effect of including, in the PRAM models, such scan operations as unit-time primitives. The study concludes that the primitives improve the asymptotic running time of many algorithms by an O(log n) factor, greatly simplify the description of many algorithms, and are significantly easier to implement than memory references. The author argues that the algorithm designer should feel free to use these operations as if they were as cheap as a memory reference. This paper describes five algorithms that clearly illustrate how the scan primitives can be used in algorithm design. These all run on an EREW PRAM with the addition of two scan primitives.
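
    For concreteness, here is an inclusive +-scan in the Hillis-Steele style, written with NumPy so that each of the O(log n) steps is a single vectorized shift-and-add. This is an illustrative sketch of the primitive, not code from the paper.

        import numpy as np

        def plus_scan(a):
            """Inclusive prefix sum in O(log n) data-parallel steps."""
            a = np.asarray(a).copy()
            d = 1
            while d < len(a):
                shifted = np.concatenate([np.zeros(d, a.dtype), a[:-d]])
                a = a + shifted            # one vectorized step
                d *= 2
            return a

        print(plus_scan([1, 2, 3, 4, 5]))  # [ 1  3  6 10 15]

    A classic use is stream compaction: scanning a 0/1 flag vector yields each selected element's position in the compacted output.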

  16. Two Level Parallel Grammatical Evolution

    NASA Astrophysics Data System (ADS)

    Ošmera, Pavel

    This paper describes a Two Level Parallel Grammatical Evolution (TLPGE) that can evolve complete programs using a variable length linear genome to govern the mapping of a Backus Naur Form grammar definition. To increase the efficiency of Grammatical Evolution (GE) the influence of backward processing was tested and a second level with differential evolution was added. The significance of backward coding (BC) and the comparison with standard coding of GEs is presented. The new method is based on parallel grammatical evolution (PGE) with a backward processing algorithm, which is further extended with a differential evolution algorithm. Thus a two-level optimization method was formed in an attempt to take advantage of the benefits of both original methods and avoid their difficulties. Both methods used are discussed and the architecture of their combination is described. An application is also discussed and results on a real-world application are described.
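
    The standard GE genotype-to-phenotype mapping at the heart of such systems: each codon, reduced modulo the number of productions for the leftmost nonterminal, selects the rule to expand. The sketch below uses an illustrative grammar of our own; the backward-coding variant studied in the paper consumes the codons from the end of the genome instead.

        GRAMMAR = {
            "<expr>": [["<expr>", "<op>", "<expr>"], ["x"], ["1"]],
            "<op>":   [["+"], ["*"]],
        }

        def map_genome(genome, start="<expr>", max_wraps=2):
            seq, out, i = [start], [], 0
            budget = len(genome) * (max_wraps + 1)   # limit codon reuse
            while seq and budget > 0:
                sym = seq.pop(0)
                if sym in GRAMMAR:
                    rules = GRAMMAR[sym]
                    choice = genome[i % len(genome)] % len(rules)  # wrapping
                    i += 1
                    budget -= 1
                    seq = rules[choice] + seq        # expand leftmost symbol
                else:
                    out.append(sym)                  # emit terminal symbol
            return " ".join(out) if not seq else None  # None: mapping failed

        print(map_genome([3, 7, 0, 4, 1, 2]))        # -> "x + x"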

  17. Parallel multiplex laser feedback interferometry

    SciTech Connect

    Zhang, Song; Tan, Yidong; Zhang, Shulian

    2013-12-15

    We present a parallel multiplex laser feedback interferometer based on spatial multiplexing, which avoids the signal crosstalk of earlier feedback interferometers. The interferometer outputs two closely spaced parallel laser beams, whose frequencies are simultaneously shifted by 2Ω by two acousto-optic modulators. A static reference mirror is inserted into one of the optical paths as the reference optical path; the other beam impinges on the target as the measurement optical path. Phase variations of the two feedback laser beams are measured simultaneously through heterodyne demodulation with two different detectors, and their subtraction accurately reflects the target displacement. Under typical room conditions, experimental results show a resolution of 1.6 nm and an accuracy of 7.8 nm within the range of 100 μm.

  18. Parallel processing spacecraft communication system

    NASA Technical Reports Server (NTRS)

    Bolotin, Gary S. (Inventor); Donaldson, James A. (Inventor); Luong, Huy H. (Inventor); Wood, Steven H. (Inventor)

    1998-01-01

    An uplink controlling assembly speeds data processing using a special parallel codeblock technique. A correct start sequence initiates processing of a frame. Two possible start sequences can be used, and the one used determines whether data polarity is inverted or non-inverted. Processing continues until uncorrectable errors are found; the frame ends by intentionally sending a block with an uncorrectable error. Each of the codeblocks in the frame has a channel ID, and each channel ID can be processed separately in parallel. This obviates the problem of waiting for error-correction processing. If the channel number is zero, however, it indicates that the frame of data represents a critical command only; that data is handled in a special way, independent of the software. Otherwise, the processed data is further handled using special double-buffering techniques to avoid problems from overrun. When overrun does occur, the system takes action to lose only the oldest data.
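
    The per-channel parallelism in the claim can be caricatured in Python: codeblocks carrying a channel ID fan out to independent workers, so one channel's error correction never stalls another (a toy sketch, not the patented system; the frame layout and decoder are hypothetical):

      # Toy fan-out of codeblocks by channel ID: each channel's block stream is
      # decoded by its own worker, so no channel waits behind another's error
      # correction. The frame contents and 'correct' stand-in are invented.
      from collections import defaultdict
      from concurrent.futures import ThreadPoolExecutor

      frame = [(1, b"aa"), (2, b"bb"), (1, b"cc"), (3, b"dd"), (2, b"ee")]

      def correct(blocks):
          return b"".join(blocks)        # stand-in for error-correction decoding

      by_channel = defaultdict(list)
      for channel, block in frame:
          by_channel[channel].append(block)

      with ThreadPoolExecutor() as pool:
          futures = {ch: pool.submit(correct, blocks)
                     for ch, blocks in by_channel.items()}
      for ch in sorted(futures):
          print("channel", ch, "->", futures[ch].result())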

  19. Parallelizing the XSTAR Photoionization Code

    NASA Astrophysics Data System (ADS)

    Noble, M. S.; Ji, L.; Young, A.; Lee, J. C.

    2009-09-01

    We describe two means by which XSTAR, a code which computes physical conditions and emission spectra of photoionized gases, has been parallelized. The first is pvmxstar, a wrapper which can be used in place of the serial xstar2xspec script to foster concurrent execution of the XSTAR command line application on independent sets of parameters. The second is pmodel, a plugin for the Interactive Spectral Interpretation System (ISIS) which allows arbitrary components of a broad range of astrophysical models to be distributed across processors during fitting and confidence limits calculations, by scientists with little training in parallel programming. Plugging the XSTAR family of analytic models into pmodel enables multiple ionization states (e.g., of a complex absorber/emitter) to be computed simultaneously, alleviating the often prohibitive expense of the traditional serial approach. Initial performance results indicate that these methods substantially enlarge the problem space to which XSTAR may be applied within practical timeframes.
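
    The pvmxstar pattern, farming independent parameter sets out to concurrent workers, reduces to a generic sweep harness. In the Python sketch below the command line is a stand-in (echo), not the real xstar interface, and the parameter names are invented for illustration:

      # Generic parameter-sweep harness in the spirit of pvmxstar: one serial job
      # per parameter set, run concurrently. 'echo' stands in for the real xstar
      # executable, and the parameter names are hypothetical.
      import subprocess
      from concurrent.futures import ProcessPoolExecutor

      param_grid = [{"logxi": xi, "column": col}
                    for xi in (1.0, 2.0, 3.0)
                    for col in (1e21, 1e22)]

      def run_job(params):
          cmd = ["echo", "xstar"] + [f"{k}={v}" for k, v in params.items()]
          result = subprocess.run(cmd, capture_output=True, text=True)
          return params, result.returncode

      if __name__ == "__main__":
          with ProcessPoolExecutor(max_workers=4) as pool:
              for params, rc in pool.map(run_job, param_grid):
                  print(params, "-> exit", rc)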

  20. Parallel strategies for SAR processing

    NASA Astrophysics Data System (ADS)

    Segoviano, Jesus A.

    2004-12-01

    This article proposes a series of strategies for improving the computational processing of Synthetic Aperture Radar (SAR) signals, following the three usual lines of action for speeding up the execution of any computer program: optimizing the data structures, optimizing the application architecture, and improving the hardware. For the first two, the data structures usually employed in SAR processing are examined and parallel alternatives are proposed, together with the way parallelization of the processing algorithms is implemented. In addition, the parallel application architecture classifies processes as fine- or coarse-grained; these are assigned to individual processors or divided among several processors, each in its corresponding architecture. For the hardware, the platforms on which parallel SAR processing is implemented are studied, including shared-memory multiprocessors and distributed-memory multicomputers. A comparison between them yields guidelines for achieving maximum throughput with minimum latency and maximum effectiveness with minimum cost, all with limited complexity. It is concluded that processing the algorithms in a GNU/Linux environment on a Beowulf cluster platform offers, under certain conditions, the best compromise between performance and cost, and promises the greatest development in coming years for computationally demanding SAR applications.

  1. Parallel Power Grid Simulation Toolkit

    SciTech Connect

    Smith, Steve; Kelley, Brian; Banks, Lawrence; Top, Philip; Woodward, Carol

    2015-09-14

    ParGrid is a 'wrapper' that integrates a coupled power grid simulation toolkit, consisting of a library to manage the synchronization and communication of independent simulations. The library included in ParGrid, named FSKIT, is intended to support the coupling of multiple continuous and discrete-event parallel simulations. The code is designed using modern object-oriented C++ methods, utilizing C++11 and current Boost libraries to ensure compatibility with multiple operating systems and environments.

  2. Parallel fabrication of nanogap electrodes.

    PubMed

    Johnston, Danvers E; Strachan, Douglas R; Johnson, A T Charlie

    2007-09-01

    We have developed a technique for simultaneously fabricating large numbers of nanogaps in a single processing step using feedback-controlled electromigration. Parallel nanogap formation is achieved by a balanced simultaneous process that uses a novel arrangement of nanoscale shorts between narrow constrictions where the nanogaps form. Because of this balancing, the fabrication of multiple nanoelectrodes is similar to that of a single nanogap junction. The technique should be useful for constructing complex circuits of molecular-scale electronic devices.

  3. High temporal resolution functional MRI with partial separability model.

    PubMed

    Ngo, Giang-Chau; Holtrop, Joseph L; Fu, Maojing; Lam, Fan; Sutton, Bradley P

    2015-01-01

    Even though the hemodynamic response is a slow phenomenon, high temporal resolution in functional MRI (fMRI) can enable better differentiation between the signal of interest and physiological noise and can increase the statistical power of functional studies. To increase the temporal resolution, several methods have been developed to decrease the repetition time, TR, such as simultaneous multi-slice imaging and MR encephalography approaches. In this work, a method using a fast acquisition and a partial separability model is presented to achieve a multi-slice fMRI protocol at a temporal resolution of 75 ms. The method is demonstrated on a visual block task. PMID:26738022

  4. Hierarchically parallelized constrained nonlinear solvers with automated substructuring

    NASA Technical Reports Server (NTRS)

    Padovan, J.; Kwang, A.

    1991-01-01

    This paper develops a parallelizable multilevel constrained nonlinear equation solver. The substructuring process is automated to yield appropriately balanced partitioning of each succeeding level. Due to the generality of the procedure, sequential, partially parallel, and fully parallel environments can all be handled. This includes both single and multiprocessor assignment per individual partition. Several benchmark examples are presented. These illustrate the robustness of the procedure as well as its capacity to yield significant reductions in memory utilization and calculational effort, due both to updating and to inversion.

  5. Hierarchically Parallelized Constrained Nonlinear Solvers with Automated Substructuring

    NASA Technical Reports Server (NTRS)

    Padovan, Joe; Kwang, Abel

    1994-01-01

    This paper develops a parallelizable multilevel, multiple-constrained nonlinear equation solver. The substructuring process is automated to yield appropriately balanced partitioning of each succeeding level. Due to the generality of the procedure, sequential, partially parallel, and fully parallel environments can all be handled. This includes both single and multiprocessor assignment per individual partition. Several benchmark examples are presented. These illustrate the robustness of the procedure as well as its capability to yield significant reductions in memory utilization and calculational effort, due both to updating and to inversion.

  6. Massively parallel femtosecond laser processing.

    PubMed

    Hasegawa, Satoshi; Ito, Haruyasu; Toyoda, Haruyoshi; Hayasaki, Yoshio

    2016-08-01

    Massively parallel femtosecond laser processing with more than 1000 beams was demonstrated. Parallel beams were generated by a computer-generated hologram (CGH) displayed on a spatial light modulator (SLM). The key to this technique is to optimize the CGH in the laser processing system using a scheme called in-system optimization. It was analytically demonstrated that the number of beams is determined by N_SLM, the horizontal number of pixels in the SLM that is imaged at the pupil plane of an objective lens, and a distance parameter p_d obtained by dividing the distance between adjacent beams by the diffraction-limited beam diameter. A performance limitation of parallel laser processing in our system was estimated at N_SLM = 250 and p_d = 7.0. Based on these parameters, the maximum number of beams in a hexagonal close-packed structure was calculated to be 1189 by using an analytical equation. PMID:27505815

  7. Highly parallel sparse Cholesky factorization

    NASA Technical Reports Server (NTRS)

    Gilbert, John R.; Schreiber, Robert

    1990-01-01

    Several fine grained parallel algorithms were developed and compared to compute the Cholesky factorization of a sparse matrix. The experimental implementations are on the Connection Machine, a distributed memory SIMD machine whose programming model conceptually supplies one processor per data element. In contrast to special purpose algorithms in which the matrix structure conforms to the connection structure of the machine, the focus is on matrices with arbitrary sparsity structure. The most promising algorithm is one whose inner loop performs several dense factorizations simultaneously on a 2-D grid of processors. Virtually any massively parallel dense factorization algorithm can be used as the key subroutine. The sparse code attains execution rates comparable to those of the dense subroutine. Although at present architectural limitations prevent the dense factorization from realizing its potential efficiency, it is concluded that a regular data parallel architecture can be used efficiently to solve arbitrarily structured sparse problems. A performance model is also presented and it is used to analyze the algorithms.

  8. Computational models of syntactic acquisition.

    PubMed

    Yang, Charles

    2012-03-01

    The computational approach to syntactic acquisition can be fruitfully pursued by integrating results and perspectives from computer science, linguistics, and developmental psychology. In this article, we first review some key results in computational learning theory and their implications for language acquisition. We then turn to examine specific learning models, some of which exploit distributional information in the input while others rely on a constrained space of hypotheses, yet both approaches share a common set of characteristics to overcome the learning problem. We conclude with a discussion of how computational models connect with the empirical study of child grammar, making the case for computationally tractable, psychologically plausible, and developmentally realistic models of acquisition. WIREs Cogn Sci 2012, 3:205-213. doi: 10.1002/wcs.1154

  9. Distributed parallel computing in stochastic modeling of groundwater systems.

    PubMed

    Dong, Yanhui; Li, Guomin; Xu, Haizhen

    2013-03-01

    Stochastic modeling is a rapidly evolving, popular approach to the study of the uncertainty and heterogeneity of groundwater systems. However, the use of Monte Carlo-type simulations to solve practical groundwater problems often encounters computational bottlenecks that hinder the acquisition of meaningful results. To improve the computational efficiency, a system that combines stochastic model generation with MODFLOW-related programs and distributed parallel processing is investigated. The distributed computing framework, called the Java Parallel Processing Framework, is integrated into the system to allow the batch processing of stochastic models in distributed and parallel systems. As an example, the system is applied to the stochastic delineation of well capture zones in the Pinggu Basin in Beijing. Through the use of 50 processing threads on a cluster with 10 multicore nodes, the execution times of 500 realizations are reduced to 3% compared with those of a serial execution. Through this application, the system demonstrates its potential in solving difficult computational problems in practical stochastic modeling.
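
    The batch pattern described here, independent realizations distributed across workers, shrinks to a few lines of Python: map seeds over a pool and aggregate the results. The sketch below is a toy stand-in that estimates a probability field without calling MODFLOW; everything in it is illustrative:

      # Toy Monte Carlo batch: each worker builds one stochastic realization and
      # the master aggregates a probability map. A real run would generate a
      # conductivity field and run a MODFLOW model here instead.
      import numpy as np
      from multiprocessing import Pool

      GRID, N_REAL = (50, 50), 500

      def one_realization(seed):
          rng = np.random.default_rng(seed)
          field = rng.normal(size=GRID)     # stand-in for a stochastic K-field
          return field > 1.0                # stand-in for a computed capture zone

      if __name__ == "__main__":
          with Pool() as pool:
              masks = pool.map(one_realization, range(N_REAL))
          capture_prob = np.mean(masks, axis=0)   # per-cell capture probability
          print("max capture probability:", capture_prob.max())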

  10. 48 CFR 1318.270 - Emergency acquisition flexibilities.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... flexibilities. 1318.270 Section 1318.270 Federal Acquisition Regulations System DEPARTMENT OF COMMERCE CONTRACTING METHODS AND CONTRACT TYPES EMERGENCY ACQUISITIONS Emergency Acquisition Flexibilities 1318.270 Emergency acquisition flexibilities. (a) Authorizing emergency acquisition flexibilities. The process...

  11. 48 CFR 1318.270 - Emergency acquisition flexibilities.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... flexibilities. 1318.270 Section 1318.270 Federal Acquisition Regulations System DEPARTMENT OF COMMERCE CONTRACTING METHODS AND CONTRACT TYPES EMERGENCY ACQUISITIONS Emergency Acquisition Flexibilities 1318.270 Emergency acquisition flexibilities. (a) Authorizing emergency acquisition flexibilities. The process...

  12. 48 CFR 1318.270 - Emergency acquisition flexibilities.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... flexibilities. 1318.270 Section 1318.270 Federal Acquisition Regulations System DEPARTMENT OF COMMERCE CONTRACTING METHODS AND CONTRACT TYPES EMERGENCY ACQUISITIONS Emergency Acquisition Flexibilities 1318.270 Emergency acquisition flexibilities. (a) Authorizing emergency acquisition flexibilities. The process...

  13. 48 CFR 1318.270 - Emergency acquisition flexibilities.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... flexibilities. 1318.270 Section 1318.270 Federal Acquisition Regulations System DEPARTMENT OF COMMERCE CONTRACTING METHODS AND CONTRACT TYPES EMERGENCY ACQUISITIONS Emergency Acquisition Flexibilities 1318.270 Emergency acquisition flexibilities. (a) Authorizing emergency acquisition flexibilities. The process...

  14. 48 CFR 1318.270 - Emergency acquisition flexibilities.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... flexibilities. 1318.270 Section 1318.270 Federal Acquisition Regulations System DEPARTMENT OF COMMERCE CONTRACTING METHODS AND CONTRACT TYPES EMERGENCY ACQUISITIONS Emergency Acquisition Flexibilities 1318.270 Emergency acquisition flexibilities. (a) Authorizing emergency acquisition flexibilities. The process...

  15. Conceptual Knowledge Acquisition in Biomedicine: A Methodological Review

    PubMed Central

    Payne, Philip R.O.; Mendonça, Eneida A.; Johnson, Stephen B.; Starren, Justin B.

    2007-01-01

    The use of conceptual knowledge collections or structures within the biomedical domain is pervasive, spanning a variety of applications including controlled terminologies, semantic networks, ontologies, and database schemas. A number of theoretical constructs and practical methods or techniques support the development and evaluation of conceptual knowledge collections. This review will provide an overview of the current state of knowledge concerning conceptual knowledge acquisition, drawing from multiple contributing academic disciplines such as biomedicine, computer science, cognitive science, education, linguistics, semiotics, and psychology. In addition, multiple taxonomic approaches to the description and selection of conceptual knowledge acquisition and evaluation techniques will be proposed in order to partially address the apparent fragmentation of the current literature concerning this domain. PMID:17482521

  16. Conceptual knowledge acquisition in biomedicine: A methodological review.

    PubMed

    Payne, Philip R O; Mendonça, Eneida A; Johnson, Stephen B; Starren, Justin B

    2007-10-01

    The use of conceptual knowledge collections or structures within the biomedical domain is pervasive, spanning a variety of applications including controlled terminologies, semantic networks, ontologies, and database schemas. A number of theoretical constructs and practical methods or techniques support the development and evaluation of conceptual knowledge collections. This review will provide an overview of the current state of knowledge concerning conceptual knowledge acquisition, drawing from multiple contributing academic disciplines such as biomedicine, computer science, cognitive science, education, linguistics, semiotics, and psychology. In addition, multiple taxonomic approaches to the description and selection of conceptual knowledge acquisition and evaluation techniques will be proposed in order to partially address the apparent fragmentation of the current literature concerning this domain.

  17. The new BNL partial wave analysis programs

    SciTech Connect

    Cummings, J.P.; Weygand, D.P.

    1997-07-29

    Experiment E852 at Brookhaven National Laboratory is a meson spectroscopy experiment which took data at the Multi-Particle Spectrometer facility of the Alternating Gradient Synchrotron. Upgrades to the spectrometer's data acquisition and trigger electronics allowed over 900 million data events, of numerous topologies, to be recorded to tape in the 1995 run alone. One of the primary goals of E852 is the identification of states beyond the quark model, i.e., states with gluonic degrees of freedom. Identification of such states involves the measurement of a system's spin-parity, which is usually done using Partial Wave Analysis. Programs to perform such analyses exist; in fact, one was written at BNL and used in previous experiments by some of this group. That program, however, was optimized for a particular final state, and modifying it to allow analysis of the broad range of final states in E852 would have been difficult. The authors therefore decided to write a new program with an eye towards generality, one that would allow analysis of a large class of reactions.

  18. 78 FR 67928 - Land Acquisitions: Appeals of Land Acquisition Decisions

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-11-13

    ... for issuing trust acquisition decisions. 78 FR 32214. BIA then extended the original comment deadline... because it ensures consistency in the decision-making across BIA regions and addresses any procedural... interest without a judicial remedy. Response: The decision-making process set forth at part 151 requires...

  19. Parallel micromanipulation method for microassembly

    NASA Astrophysics Data System (ADS)

    Sin, Jeongsik; Stephanou, Harry E.

    2001-09-01

    Microassembly deals with micron- or millimeter-scale objects where the tolerance requirements are in the micron range. Typical applications include electronics components (silicon-fabricated circuits), optoelectronics components (photo detectors, emitters, amplifiers, optical fibers, microlenses, etc.), and MEMS (Micro-Electro-Mechanical-System) dies. The assembly processes generally require not only high precision but also high throughput at low manufacturing cost. While conventional macroscale assembly methods have been utilized in scaled-down versions for microassembly applications, they exhibit limitations on throughput and cost due to the inherently serialized process. Since the assembly process depends heavily on the manipulation performance, an efficient manipulation method for small parts will have a significant impact on the manufacturing of miniaturized products. The objective of this study on 'parallel micromanipulation' is to achieve these three requirements through the handling of multiple small parts simultaneously (in parallel) with high precision (micromanipulation). As a step toward this objective, a new manipulation method is introduced. The method uses a distributed actuation array for gripper-free, parallel manipulation, and a centralized, shared actuator for simplified controls. The method has been implemented on a testbed 'Piezo Active Surface (PAS)' in which an actively generated friction force field is the driving force for part manipulation. Basic motion primitives, such as translation and rotation of objects, are made possible with the proposed method. This study discusses the design of the proposed manipulation method PAS, and the corresponding manipulation mechanism. The PAS consists of two piezoelectric actuators for X and Y motion, two linear motion guides, two sets of nozzle arrays, and solenoid valves to switch the pneumatic suction force on and off in individual nozzles. One array of nozzles is fixed relative to the surface on

  20. Second Language Acquisition of Reflexive Verbs in Russian by L1 Speakers of English

    ERIC Educational Resources Information Center

    Alexieva, Petia Dimitrova

    2012-01-01

    This dissertation examines the process of acquisition of semantic classes of reflexive verbs (RVs) in Russian by L2 learners whose native language is English. The purpose of this study is to bridge the gap between current linguistic knowledge and the pedagogical literature existing in English on reflexives in Russian. RVs are taught partially and…

  1. Computationally efficient implementation of combustion chemistry in parallel PDF calculations

    NASA Astrophysics Data System (ADS)

    Lu, Liuyan; Lantz, Steven R.; Ren, Zhuyin; Pope, Stephen B.

    2009-08-01

    In parallel calculations of combustion processes with realistic chemistry, the serial in situ adaptive tabulation (ISAT) algorithm [S.B. Pope, Computationally efficient implementation of combustion chemistry using in situ adaptive tabulation, Combustion Theory and Modelling, 1 (1997) 41-63; L. Lu, S.B. Pope, An improved algorithm for in situ adaptive tabulation, Journal of Computational Physics 228 (2009) 361-386] substantially speeds up the chemistry calculations on each processor. To improve the parallel efficiency of large ensembles of such calculations in parallel computations, in this work, the ISAT algorithm is extended to the multi-processor environment, with the aim of minimizing the wall clock time required for the whole ensemble. Parallel ISAT strategies are developed by combining the existing serial ISAT algorithm with different distribution strategies, namely purely local processing (PLP), uniformly random distribution (URAN), and preferential distribution (PREF). The distribution strategies enable the queued load redistribution of chemistry calculations among processors using message passing. They are implemented in the software x2f_mpi, which is a Fortran 95 library for facilitating many parallel evaluations of a general vector function. The relative performance of the parallel ISAT strategies is investigated in different computational regimes via the PDF calculations of multiple partially stirred reactors burning methane/air mixtures. The results show that the performance of ISAT with a fixed distribution strategy strongly depends on certain computational regimes, based on how much memory is available and how much overlap exists between tabulated information on different processors. No one fixed strategy consistently achieves good performance in all the regimes. Therefore, an adaptive distribution strategy, which blends PLP, URAN and PREF, is devised and implemented. It yields consistently good performance in all regimes. In the adaptive parallel
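
    The distribution strategies can be caricatured without any chemistry: PLP keeps each particle on its home processor, while URAN scatters particles uniformly at random (PREF is omitted for brevity). The Python toy below uses a fake cost model, cheap if a particle's chemistry 'region' is already tabulated on the evaluating rank and expensive otherwise; all numbers are illustrative:

      # Toy comparison of two distribution strategies: a particle is cheap to
      # evaluate if its chemistry 'region' is already tabulated on the rank that
      # evaluates it, expensive otherwise. Numbers and the cost model are made up.
      import random

      random.seed(0)
      N_RANKS, N_PART, CHEAP, COSTLY = 4, 4000, 1, 50

      # Rank r's particles are drawn mostly from its own region r.
      particles = [(r, r if random.random() < 0.9 else random.randrange(N_RANKS))
                   for r in range(N_RANKS) for _ in range(N_PART // N_RANKS)]

      def wall_clock(assign):
          tables = [set() for _ in range(N_RANKS)]
          cost = [0] * N_RANKS
          for home, region in particles:
              rank = assign(home)
              if region in tables[rank]:
                  cost[rank] += CHEAP
              else:                        # first encounter: tabulate, pay full price
                  tables[rank].add(region)
                  cost[rank] += COSTLY
          return max(cost)                 # the slowest rank sets the wall clock

      plp = wall_clock(lambda home: home)                        # purely local (PLP)
      uran = wall_clock(lambda home: random.randrange(N_RANKS))  # uniform random (URAN)
      print("PLP:", plp, "URAN:", uran)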

  2. Reasonable partiality in professional relationships.

    PubMed

    Almond, Brenda

    2005-04-01

    First, two aspects of the partiality issue are identified: (1) Is it right/reasonable for professionals to favour their clients' interests over either those of other individuals or those of society in general? (2) Are special non-universalisable obligations attached to certain professional roles? Second, some comments are made on the notions of partiality and reasonableness. On partiality, the assumption that only two positions are possible--a detached universalism or a partialist egoism--is challenged and it is suggested that partiality, e.g. to family members, lies between these two positions, being neither a form of egoism, nor of impersonal detachment. On reasonableness, it is pointed out that 'reasonable' is an ambiguous concept, eliding the notions of the 'morally right' and the 'rational.' Third, a series of practical examples are taken from counselling, medicine, law, education and religious practice and some common principles are abstracted from the cases and discussed. These include truth-telling, confidentiality, conflicts of interest between clients and particular others and between clients and society. It is concluded that while partiality can be justified as a useful tool in standard cases, particular circumstances can affect the final verdict.

  3. Neural Changes Associated with Nonspeech Auditory Category Learning Parallel Those of Speech Category Acquisition

    ERIC Educational Resources Information Center

    Liu, Ran; Holt, Lori L.

    2011-01-01

    Native language experience plays a critical role in shaping speech categorization, but the exact mechanisms by which it does so are not well understood. Investigating category learning of nonspeech sounds with which listeners have no prior experience allows their experience to be systematically controlled in a way that is impossible to achieve by…

  4. The Acquisition of Pronouns by French Children: A Parallel Study of Production and Comprehension

    ERIC Educational Resources Information Center

    Zesiger, Pascal; Zesiger, Laurence Chillier; Arabatzi, Marina; Baranzini, Lara; Cronel-Ohayon, Stephany; Franck, Julie; Frauenfelder, Ulrich Hans; Hamann, Cornelia; Rizzi, Luigi

    2010-01-01

    This study examines syntactic and morphological aspects of the production and comprehension of pronouns by 99 typically developing French-speaking children aged 3 years, 5 months to 6 years, 5 months. A fine structural analysis of subject, object, and reflexive clitics suggests that whereas the object clitic chain crosses the subject chain, the…

  5. Parallelizing alternating direction implicit solver on GPUs

    Technology Transfer Automated Retrieval System (TEKTRAN)

    We present a parallel Alternating Direction Implicit (ADI) solver on GPUs. Our implementation significantly improves existing implementations in two aspects. First, we address the scalability issue of existing Parallel Cyclic Reduction (PCR) implementations by eliminating their hardware resource con...
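
    Each ADI sweep reduces to many independent tridiagonal solves. The serial Thomas algorithm below (a sketch, not the authors' code) is the recurrence that PCR reorganizes into roughly log2(n) data-parallel steps on the GPU:

      # Thomas algorithm: O(n) serial solve of one tridiagonal system. An ADI
      # sweep performs one such solve per grid line; PCR restructures this
      # sequential recurrence into about log2(n) data-parallel steps on a GPU.
      import numpy as np

      def thomas(a, b, c, d):
          """Solve T x = d, where T has sub/main/super-diagonals a, b, c."""
          n = len(d)
          cp, dp = np.empty(n), np.empty(n)
          cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
          for i in range(1, n):
              m = b[i] - a[i] * cp[i - 1]
              cp[i] = c[i] / m
              dp[i] = (d[i] - a[i] * dp[i - 1]) / m
          x = np.empty(n)
          x[-1] = dp[-1]
          for i in range(n - 2, -1, -1):
              x[i] = dp[i] - cp[i] * x[i + 1]
          return x

      # quick self-check against a dense solve
      n = 6
      a = np.r_[0.0, -np.ones(n - 1)]          # subdiagonal (a[0] unused)
      b = 2.0 * np.ones(n)                     # main diagonal
      c = np.r_[-np.ones(n - 1), 0.0]          # superdiagonal (c[-1] unused)
      T = np.diag(b) + np.diag(a[1:], -1) + np.diag(c[:-1], 1)
      d = np.arange(1.0, n + 1)
      assert np.allclose(thomas(a, b, c, d), np.linalg.solve(T, d))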

  6. Implementing clips on a parallel computer

    NASA Technical Reports Server (NTRS)

    Riley, Gary

    1987-01-01

    The C Language Integrated Production System (CLIPS) is a forward-chaining, rule-based language that provides training and delivery for expert systems. Conceptually, rule-based languages have great potential for benefiting from the inherent parallelism of the algorithms that they employ. During each cycle of execution, a knowledge base of information is compared against a set of rules to determine if any rules are applicable. Parallelism can also be employed with multiple cooperating expert systems. To investigate the potential benefits of using a parallel computer to speed up the comparison of facts to rules in expert systems, a parallel version of CLIPS was developed for the FLEX/32, a large-grain parallel computer. The FLEX implementation takes a macroscopic approach in achieving parallelism by splitting whole sets of rules among several processors rather than by splitting the components of an individual rule among processors. The parallel CLIPS prototype demonstrates the potential advantages of integrating expert system tools with parallel computers.
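
    The macroscopic, rule-level parallelism described above can be sketched with a tiny match loop: the rule set is split among workers, and each worker matches its share of rules against the whole fact base (a Python toy, not CLIPS internals; the facts and rules are invented):

      # Rule-level parallelism: the rule set is split among workers, and each
      # worker matches its whole rules against the shared fact base. Facts and
      # rules here are hypothetical examples.
      from concurrent.futures import ProcessPoolExecutor

      FACTS = {("temp", "high"), ("pressure", "low"), ("valve", "open")}
      RULES = [("shutdown", {("temp", "high"), ("pressure", "low")}),
               ("alarm",    {("temp", "high"), ("valve", "open")}),
               ("vent",     {("pressure", "high")})]

      def match(rules):
          # a rule fires when all of its condition facts are present
          return [name for name, conds in rules if conds <= FACTS]

      if __name__ == "__main__":
          halves = [RULES[:2], RULES[2:]]      # split whole rules, not rule parts
          with ProcessPoolExecutor(max_workers=2) as pool:
              fired = [r for part in pool.map(match, halves) for r in part]
          print("fired:", fired)               # -> ['shutdown', 'alarm']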

  7. Landsliding in partially saturated materials

    NASA Astrophysics Data System (ADS)

    Godt, Jonathan W.; Baum, Rex L.; Lu, Ning

    2009-01-01

    Rainfall-induced landslides are pervasive in hillslope environments around the world and among the most costly and deadly natural hazards. However, capturing their occurrence with scientific instrumentation in a natural setting is extremely rare. The prevailing thinking on landslide initiation, particularly for those landslides that occur under intense precipitation, is that the failure surface is saturated and has positive pore-water pressures acting on it. Most analytic methods used for landslide hazard assessment are based on the above perception and assume that the failure surface is located beneath a water table. By monitoring the pore water and soil suction response to rainfall, we observed shallow landslide occurrence under partially saturated conditions for the first time in a natural setting. We show that the partially saturated shallow landslide at this site is predictable using measured soil suction and water content and a novel unified effective stress concept for partially saturated earth materials.

  8. Landsliding in partially saturated materials

    USGS Publications Warehouse

    Godt, J.W.; Baum, R.L.; Lu, N.

    2009-01-01

    Rainfall-induced landslides are pervasive in hillslope environments around the world and among the most costly and deadly natural hazards. However, capturing their occurrence with scientific instrumentation in a natural setting is extremely rare. The prevailing thinking on landslide initiation, particularly for those landslides that occur under intense precipitation, is that the failure surface is saturated and has positive pore-water pressures acting on it. Most analytic methods used for landslide hazard assessment are based on the above perception and assume that the failure surface is located beneath a water table. By monitoring the pore water and soil suction response to rainfall, we observed shallow landslide occurrence under partially saturated conditions for the first time in a natural setting. We show that the partially saturated shallow landslide at this site is predictable using measured soil suction and water content and a novel unified effective stress concept for partially saturated earth materials.

  9. Functionalism in Second Language Acquisition.

    ERIC Educational Resources Information Center

    Tomlin, Russell S.

    1990-01-01

    Examines the role of functional approaches to linguistics in understanding second-language acquisition (SLA), focusing on central premises, tenets, and theoretical problems. It is concluded that functional universals are insufficiently grounded, theoretically and empirically, to contribute more than heuristic guidance to SLA theory. (141…

  10. Analog Input Data Acquisition Software

    NASA Technical Reports Server (NTRS)

    Arens, Ellen

    2009-01-01

    DAQ Master Software allows users to easily set up a system to monitor up to five analog input channels and save the data after acquisition. This program was written in LabVIEW 8.0, and requires the LabVIEW runtime engine 8.0 to run the executable.

  11. Language Acquisition and Language Revitalization

    ERIC Educational Resources Information Center

    O'Grady, William; Hattori, Ryoko

    2016-01-01

    Intergenerational transmission, the ultimate goal of language revitalization efforts, can only be achieved by (re)establishing the conditions under which an imperiled language can be acquired by the community's children. This paper presents a tutorial survey of several key points relating to language acquisition and maintenance in children,…

  12. Acquisition streamlining: A cultural change

    NASA Technical Reports Server (NTRS)

    Stewart, Jesse

    1992-01-01

    The topics are presented in viewgraph form and include the following: the Defense Systems Management College, educational philosophy, the defense acquisition environment, streamlining initiatives, organizational streamlining types, defense law review, law review purpose, law review objectives, the Public Law Pilot Program, and cultural change.

  13. Intensive Input in Language Acquisition.

    ERIC Educational Resources Information Center

    Trimino, Andy; Ferguson, Nancy

    This paper discusses the role of input as one of the universals in second language acquisition theory. Considerations include how language instructors can best organize and present input and when certain kinds of input are more important. A self-administered program evaluation exercise using relevant theoretical and methodological contributions…

  14. Bilingualism and Third Language Acquisition.

    ERIC Educational Resources Information Center

    Garate, Jose Valencia; Iragui, Jasone Cenoz

    A study investigated the role of bilingualism (Basque/Spanish) and motivation in third (English) language acquisition in Spain's Basque country. Subjects were 321 secondary school students in two programs, one with instruction primarily in Spanish and one with instruction primarily in Basque. The following independent variables were analyzed in…

  15. Automatic Multilevel Parallelization Using OpenMP

    NASA Technical Reports Server (NTRS)

    Jin, Hao-Qiang; Jost, Gabriele; Yan, Jerry; Ayguade, Eduard; Gonzalez, Marc; Martorell, Xavier; Biegel, Bryan (Technical Monitor)

    2002-01-01

    In this paper we describe the extension of the CAPO (CAPtools (Computer Aided Parallelization Toolkit) OpenMP) parallelization support tool to support multilevel parallelism based on OpenMP directives. CAPO generates OpenMP directives with extensions supported by the NanosCompiler to allow for directive nesting and definition of thread groups. We report some results for several benchmark codes and one full application that have been parallelized using our system.

  16. Force user's manual: A portable, parallel FORTRAN

    NASA Technical Reports Server (NTRS)

    Jordan, Harry F.; Benten, Muhammad S.; Arenstorf, Norbert S.; Ramanan, Aruna V.

    1990-01-01

    The use of Force, a parallel, portable FORTRAN for shared memory parallel computers, is described. Force simplifies writing code for parallel computers and, once the parallel code is written, it is easily ported to computers on which Force is installed. Although Force is nearly the same for all computers, specific details are included for the Cray-2, Cray Y-MP, Convex 220, Flex/32, Encore, Sequent, and Alliant computers on which it is installed.

  17. Parallel machine architecture and compiler design facilities

    NASA Technical Reports Server (NTRS)

    Kuck, David J.; Yew, Pen-Chung; Padua, David; Sameh, Ahmed; Veidenbaum, Alex

    1990-01-01

    The objective is to provide an integrated simulation environment for studying and evaluating various issues in designing parallel systems, including machine architectures, parallelizing compiler techniques, and parallel algorithms. The status of the Delta project, whose objective is to provide a facility for rapid prototyping of parallelizing compilers that can target different machine architectures, is summarized. Included are surveys of the program manipulation tools developed, the environmental software supporting Delta, and the compiler research projects in which Delta has played a role.

  18. Global Arrays Parallel Programming Toolkit

    SciTech Connect

    Nieplocha, Jaroslaw; Krishnan, Manoj Kumar; Palmer, Bruce J.; Tipparaju, Vinod; Harrison, Robert J.; Chavarría-Miranda, Daniel

    2011-01-01

    The two predominant classes of programming models for parallel computing are distributed memory and shared memory. Both shared memory and distributed memory models have advantages and shortcomings. Shared memory model is much easier to use but it ignores data locality/placement. Given the hierarchical nature of the memory subsystems in modern computers this characteristic can have a negative impact on performance and scalability. Careful code restructuring to increase data reuse and replacing fine grain load/stores with block access to shared data can address the problem and yield performance for shared memory that is competitive with message-passing. However, this performance comes at the cost of compromising the ease of use that the shared memory model advertises. Distributed memory models, such as message-passing or one-sided communication, offer performance and scalability but they are difficult to program. The Global Arrays toolkit attempts to offer the best features of both models. It implements a shared-memory programming model in which data locality is managed by the programmer. This management is achieved by calls to functions that transfer data between a global address space (a distributed array) and local storage. In this respect, the GA model has similarities to the distributed shared-memory models that provide an explicit acquire/release protocol. However, the GA model acknowledges that remote data is slower to access than local data and allows data locality to be specified by the programmer and hence managed. GA is related to the global address space languages such as UPC, Titanium, and, to a lesser extent, Co-Array Fortran. In addition, by providing a set of data-parallel operations, GA is also related to data-parallel languages such as HPF, ZPL, and Data Parallel C. However, the Global Array programming model is implemented as a library that works with most languages used for technical computing and does not rely on compiler technology for achieving
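
    The GA idea, a global address space with explicit, programmer-visible locality, can be approximated in Python: workers put rectangular patches into one logically shared array. The sketch below uses the standard library's shared memory and is only an analogy to the real toolkit, not its API:

      # A toy 'global array': one logically shared 2-D array into which workers
      # put rectangular patches, GA-style. Analogy only; this is not the GA API.
      import numpy as np
      from multiprocessing import Process
      from multiprocessing.shared_memory import SharedMemory

      SHAPE = (8, 8)

      def put_patch(shm_name, row0, value):
          shm = SharedMemory(name=shm_name)
          ga = np.ndarray(SHAPE, dtype=np.float64, buffer=shm.buf)
          ga[row0:row0 + 4, :] = value       # one-sided 'put' into the global array
          shm.close()

      if __name__ == "__main__":
          shm = SharedMemory(create=True, size=SHAPE[0] * SHAPE[1] * 8)
          ga = np.ndarray(SHAPE, dtype=np.float64, buffer=shm.buf)
          ga[:] = 0.0
          workers = [Process(target=put_patch, args=(shm.name, r0, v))
                     for r0, v in ((0, 1.0), (4, 2.0))]
          for w in workers: w.start()
          for w in workers: w.join()
          print(ga.sum())                    # 32*1.0 + 32*2.0 = 96.0
          shm.close(); shm.unlink()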

  19. Partial pressure analysis of plasmas

    SciTech Connect

    Dylla, H.F.

    1984-11-01

    The application of partial pressure analysis for plasma diagnostic measurements is reviewed. A comparison is made between the techniques of plasma flux analysis and partial pressure analysis for mass spectrometry of plasmas. Emphasis is given to the application of quadrupole mass spectrometers (QMS). The interface problems associated with the coupling of a QMS to a plasma device are discussed including: differential-pumping requirements, electromagnetic interferences from the plasma environment, the detection of surface-active species, ion source interactions, and calibration procedures. Example measurements are presented from process monitoring of glow discharge plasmas which are useful for cleaning and conditioning vacuum vessels.

  20. Full and partial gauge fixing

    SciTech Connect

    Shirzad, A.

    2007-08-15

    Gauge fixing may be done in different ways. We show that using the chain structure to describe a constrained system enables us to use either a full gauge, in which all gauged degrees of freedom are determined, or a partial gauge, in which some first class constraints remain as subsidiary conditions to be imposed on the solutions of the equations of motion. We also show that the number of constants of motion depends on the level in a constraint chain in which the gauge fixing condition is imposed. The relativistic point particle, electromagnetism, and the Polyakov string are discussed as examples and full or partial gauges are distinguished.

  1. High Performance Parallel Computational Nanotechnology

    NASA Technical Reports Server (NTRS)

    Saini, Subhash; Craw, James M. (Technical Monitor)

    1995-01-01

    At a recent press conference, NASA Administrator Dan Goldin encouraged NASA Ames Research Center to take a lead role in promoting research and development of advanced, high-performance computer technology, including nanotechnology. Manufacturers of leading-edge microprocessors currently perform large-scale simulations in the design and verification of semiconductor devices and microprocessors. Recently, the need for this intensive simulation and modeling analysis has greatly increased, due in part to the ever-increasing complexity of these devices, as well as the lessons of experiences such as the Pentium fiasco. Simulation, modeling, testing, and validation will be even more important for designing molecular computers because of the complex specification of millions of atoms, thousands of assembly steps, as well as the simulation and modeling needed to ensure reliable, robust, and efficient fabrication of the molecular devices. The software for this capacity does not exist today, but it can be extrapolated from the software currently used in molecular modeling for other applications: semi-empirical methods, ab initio methods, self-consistent field methods, Hartree-Fock methods, molecular mechanics, and simulation methods for diamondoid structures. Inasmuch as it seems clear that the application of such methods in nanotechnology will require highly powerful systems, this talk will discuss techniques and issues for performing these types of computations on parallel systems. We will describe system design issues (memory, I/O, mass storage, operating system requirements, special user interface issues, interconnects, bandwidths, and programming languages) involved in parallel methods for scalable classical, semiclassical, quantum, molecular mechanics, and continuum models; molecular nanotechnology computer-aided design (NanoCAD) techniques; visualization using virtual reality techniques of structural models and assembly sequences; software required to

  2. Apparatus for generating partially coherent radiation

    DOEpatents

    Naulleau, Patrick P.

    2005-02-22

    Techniques for generating partially coherent radiation and particularly for converting effectively coherent radiation from a synchrotron to partially coherent EUV radiation suitable for projection lithography.

  3. Buffer Gas Acquisition and Storage

    NASA Technical Reports Server (NTRS)

    Parrish, Clyde F.; Lueck, Dale E.; Jennings, Paul A.; Callahan, Richard A.; Delgado, H. (Technical Monitor)

    2001-01-01

    The acquisition and storage of buffer gases (primarily argon and nitrogen) from the Mars atmosphere provides a valuable resource for blanketing and pressurizing fuel tanks and as a buffer gas for breathing air for manned missions. During the acquisition of carbon dioxide (CO2), whether by sorption bed or cryo-freezer, the accompanying buffer gases build up in the carbon dioxide acquisition system, reduce the flow of CO2 to the bed, and lower system efficiency. It is this buildup of buffer gases, which must be removed for efficient capture of CO2, that provides a convenient source. Removal of this buffer-gas barrier greatly improves the charging rate of the CO2 acquisition bed and thereby maintains the fuel production rates required for a successful mission. Consequently, the acquisition, purification, and storage of these buffer gases are important goals of ISRU plans. Purity of the buffer gases is a concern; e.g., if the CO2 freezer operates at 140 K, the composition of the inert gas would be approximately 21 percent CO2, 50 percent nitrogen, and 29 percent argon. Although there are several approaches that could be used, this effort focused on a hollow-fiber membrane (HFM) separation method. This study measured the permeation rates of CO2, nitrogen (N2), and argon (Ar) through a multiple-membrane system and the individual membranes from room temperature to 193 K and 10 kPa to 300 kPa. Concentrations were measured with a gas chromatograph that used a thermal conductivity detector (TCD) with helium (He) as the carrier gas. The general trend as the temperature was lowered was for the membranes to become more selective. In addition, the relative permeation rates of the three gases changed with temperature. The end result was to provide design parameters that could be used to separate CO2 from N2 and Ar.

  4. The PARTY parallel runtime system

    NASA Technical Reports Server (NTRS)

    Saltz, J. H.; Mirchandaney, Ravi; Smith, R. M.; Crowley, Kay; Nicol, D. M.

    1989-01-01

    The PARTY system automates the organization of the data and computational operations entailed by parallel problems in ways that optimize multiprocessor performance. General heuristics for partitioning program data and control are implemented by capturing and manipulating representations of a computation at run time. These heuristics are directed toward the dynamic identification and allocation of concurrent work in computations with irregular computational patterns. An optimized static workload partitioning is computed for problems with repetitive computation patterns, such as the iterative methods employed in scientific computation.

  5. Heart Fibrillation and Parallel Supercomputers

    NASA Technical Reports Server (NTRS)

    Kogan, B. Y.; Karplus, W. J.; Chudin, E. E.

    1997-01-01

    The Luo and Rudy 3 cardiac cell mathematical model is implemented on the parallel supercomputer CRAY T3D. The splitting algorithm, combined with a variable time step and an explicit method of integration, provides reasonable solution times and almost perfect scaling for rectilinear wave propagation. The computer simulation makes it possible to observe new phenomena: the break-up of spiral waves caused by intracellular calcium dynamics, and the non-uniformity of the calcium distribution in space during the onset of the spiral wave.

  6. Parallel Assembly of LIGA Components

    SciTech Connect

    Christenson, T.R.; Feddema, J.T.

    1999-03-04

    In this paper, a prototype robotic workcell for the parallel assembly of LIGA components is described. A Cartesian robot is used to press 386 and 485 micron diameter pins into a LIGA substrate and then place a 3-inch diameter wafer with LIGA gears onto the pins. Upward and downward looking microscopes are used to locate holes in the LIGA substrate, pins to be pressed in the holes, and gears to be placed on the pins. This vision system can locate parts within 3 microns, while the Cartesian manipulator can place the parts within 0.4 microns.

  7. PKDGRAV3: Parallel gravity code

    NASA Astrophysics Data System (ADS)

    Potter, Douglas; Stadel, Joachim

    2016-09-01

    Pkdgrav3 is an 𝒪(N) gravity calculation method; it uses a binary tree algorithm with fifth-order fast multipole expansion of the gravitational potential, using cell-cell interactions. Periodic boundary conditions require very little data movement and allow a high degree of parallelism; the code includes GPU acceleration for all force calculations, leading to a significant speed-up with respect to previous versions (ascl:1305.005). Pkdgrav3 also has a sophisticated time-stepping criterion based on an estimation of the local dynamical time.

  8. True Shear Parallel Plate Viscometer

    NASA Technical Reports Server (NTRS)

    Ethridge, Edwin; Kaukler, William

    2010-01-01

    This viscometer (which can also be used as a rheometer) is designed for use with liquids over a large temperature range. The device consists of horizontally disposed, similarly sized, parallel plates with a precisely known gap. The lower plate is driven laterally with a motor to apply shear to the liquid in the gap. The upper plate is freely suspended from a double-arm pendulum with a sufficiently long radius to reduce height variations during the swing to negligible levels. A sensitive load cell measures the shear force applied by the liquid to the upper plate. Viscosity is measured by taking the ratio of shear stress to shear rate.
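
    The measurement principle reduces to Newton's law of viscosity: with gap h, wetted plate area A, drive speed v, and measured shear force F, the viscosity is eta = (F/A)/(v/h). A worked example with illustrative numbers (not instrument specifications):

      # Newton's law of viscosity for the parallel-plate geometry: shear stress
      # tau = F/A, shear rate gamma = v/h, viscosity eta = tau/gamma.
      # All numbers below are illustrative.
      F = 0.020      # measured shear force on the upper plate, N
      A = 1.0e-3     # wetted plate area, m^2
      v = 0.010      # lower-plate drive speed, m/s
      h = 0.50e-3    # plate gap, m

      tau = F / A          # 20.0 Pa
      gamma = v / h        # 20.0 1/s
      eta = tau / gamma    # 1.0 Pa*s
      print(f"shear stress {tau:.1f} Pa, shear rate {gamma:.1f} 1/s, eta {eta:.3f} Pa*s")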

  9. Scalable Parallel Algebraic Multigrid Solvers

    SciTech Connect

    Bank, R; Lu, S; Tong, C; Vassilevski, P

    2005-03-23

    The authors propose a parallel algebraic multilevel algorithm (AMG), which has the novel feature that the subproblem residing in each processor is defined over the entire partition domain, although the vast majority of unknowns for each subproblem are associated with the partition owned by the corresponding processor. This feature ensures that a global coarse description of the problem is contained within each of the subproblems. The advantages of this approach are that interprocessor communication is minimized in the solution process while an optimal order of convergence rate is preserved; and the speed of local subproblem solvers can be maximized using the best existing sequential algebraic solvers.

  10. Identifying, Quantifying, Extracting and Enhancing Implicit Parallelism

    ERIC Educational Resources Information Center

    Agarwal, Mayank

    2009-01-01

    The shift of the microprocessor industry towards multicore architectures has placed a huge burden on the programmers by requiring explicit parallelization for performance. Implicit Parallelization is an alternative that could ease the burden on programmers by parallelizing applications "under the covers" while maintaining sequential semantics…

  11. Parallel Computing Using Web Servers and "Servlets".

    ERIC Educational Resources Information Center

    Lo, Alfred; Bloor, Chris; Choi, Y. K.

    2000-01-01

    Describes parallel computing and presents inexpensive ways to implement a virtual parallel computer with multiple Web servers. Highlights include performance measurement of parallel systems; models for using Java and intranet technology including single server, multiple clients and multiple servers, single client; and a comparison of CGI (common…

  12. Exploring Parallel Concordancing in English and Chinese.

    ERIC Educational Resources Information Center

    Lixun, Wang

    2001-01-01

    Investigates the value of computer technology as a medium for the delivery of parallel texts in English and Chinese for language learning. An English-Chinese parallel corpus was created for use in parallel concordancing--a technique that has been developed to respond to the desire to study language in its natural contexts of use. (Author/VWL)

  13. Parallel Processing at the High School Level.

    ERIC Educational Resources Information Center

    Sheary, Kathryn Anne

    This study investigated the ability of high school students to cognitively understand and implement parallel processing. Data indicates that most parallel processing is being taught at the university level. Instructional modules on C, Linux, and the parallel processing language, P4, were designed to show that high school students are highly…

  14. Reservoir Thermal Recovery Simulation on Parallel Computers

    NASA Astrophysics Data System (ADS)

    Li, Baoyan; Ma, Yuanle

    The rapid development of parallel computers has provided a hardware background for massive, refined reservoir simulation. However, the lack of parallel reservoir simulation software has blocked the application of parallel computers to reservoir simulation. Although a variety of parallel methods have been studied and applied to black oil, compositional, and chemical model numerical simulations, there has been limited parallel software available for reservoir simulation. In particular, the parallelization of reservoir thermal recovery simulation has not been fully carried out, because of the complexity of its models and algorithms. The authors make use of the message passing interface (MPI) standard communication library, the domain decomposition method, the block Jacobi iteration algorithm, and the dynamic memory allocation technique to parallelize their serial thermal recovery simulation software NUMSIP, which is being used in the petroleum industry in China. The parallel software PNUMSIP was tested on both IBM SP2 and Dawn 1000A distributed-memory parallel computers. The experimental results show that the parallelization of I/O has a great effect on the efficiency of the parallel software PNUMSIP; the data communication bandwidth is also an important factor influencing software efficiency. Keywords: domain decomposition method, block Jacobi iteration algorithm, reservoir thermal recovery simulation, distributed-memory parallel computer
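
    The block Jacobi iteration named in the abstract has a simple structure: partition the unknowns into blocks, and at each sweep every block solves its own diagonal subsystem using off-block values from the previous iterate, which is exactly what allows one block per processor. A serial Python toy (illustrative only):

      # Block Jacobi sketch: each of P blocks solves its diagonal subsystem using
      # off-block values from the previous iterate, so all blocks can be solved
      # in parallel. Serial toy version on a diagonally dominant random system.
      import numpy as np

      rng = np.random.default_rng(1)
      n, P = 40, 4
      A = np.eye(n) * (2 * n) + rng.normal(size=(n, n))   # dominant diagonal
      b = rng.normal(size=n)
      blocks = np.array_split(np.arange(n), P)

      x = np.zeros(n)
      for sweep in range(50):
          x_new = x.copy()
          for idx in blocks:                 # independent: one block per processor
              others = np.setdiff1d(np.arange(n), idx)
              r = b[idx] - A[np.ix_(idx, others)] @ x[others]
              x_new[idx] = np.linalg.solve(A[np.ix_(idx, idx)], r)
          x = x_new
      print("residual:", np.linalg.norm(A @ x - b))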

  15. Data Acquisition for Modular Biometric Monitoring System

    NASA Technical Reports Server (NTRS)

    Chmiel, Alan J. (Inventor); Humphreys, Bradley T. (Inventor); Grodsinsky, Carlos M. (Inventor)

    2014-01-01

    A modular system for acquiring biometric data includes a plurality of data acquisition modules configured to sample biometric data from at least one respective input channel at a data acquisition rate. A representation of the sampled biometric data is stored in memory of each of the plurality of data acquisition modules. A central control system is in communication with each of the plurality of data acquisition modules through a bus. The central control system is configured to collect data asynchronously, via the bus, from the memory of the plurality of data acquisition modules according to a relative fullness of the memory of the plurality of data acquisition modules.
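
    The fullness-driven collection scheme can be mocked up with threads and queues: acquisition modules sample into local buffers at different rates, and a central collector repeatedly drains whichever buffer is fullest relative to its capacity (an illustrative sketch, not the patented design; module names and rates are invented):

      # Fullness-ordered collection: acquisition threads fill local buffers at
      # different rates; the collector repeatedly drains whichever buffer is
      # fullest relative to its capacity. All names and rates are hypothetical.
      import random, threading, time
      from collections import deque

      class Module:
          def __init__(self, name, rate_hz, capacity=64):
              self.name, self.buf, self.capacity = name, deque(), capacity
              self.lock = threading.Lock()
              threading.Thread(target=self._sample, args=(rate_hz,), daemon=True).start()

          def _sample(self, rate_hz):
              while True:
                  with self.lock:
                      if len(self.buf) < self.capacity:
                          self.buf.append(random.random())
                  time.sleep(1.0 / rate_hz)

          def fullness(self):
              with self.lock:
                  return len(self.buf) / self.capacity

          def drain(self):
              with self.lock:
                  data, self.buf = list(self.buf), deque()
                  return data

      modules = [Module("ecg", 200), Module("resp", 30), Module("temp", 2)]
      for _ in range(5):                             # central collector loop
          time.sleep(0.2)
          m = max(modules, key=Module.fullness)      # fullest buffer first
          print(m.name, "drained", len(m.drain()), "samples")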

  16. High-throughput single-molecule fluorescence spectroscopy using parallel detection

    PubMed Central

    Michalet, X.; Colyer, R. A.; Scalia, G.; Kim, T.; Levi, Moran; Aharoni, Daniel; Cheng, Adrian; Guerrieri, F.; Arisaka, Katsushi; Millaud, Jacques; Rech, I.; Resnati, D.; Marangoni, S.; Gulinatti, A.; Ghioni, M.; Tisa, S.; Zappa, F.; Cova, S.; Weiss, S.

    2011-01-01

    Solution-based single-molecule fluorescence spectroscopy is a powerful new experimental approach with applications in all fields of natural sciences. The basic concept of this technique is to excite and collect light from a very small volume (typically femtoliter) and work in a concentration regime resulting in rare burst-like events corresponding to the transit of a single-molecule. Those events are accumulated over time to achieve proper statistical accuracy. Therefore the advantage of extreme sensitivity is somewhat counterbalanced by a very long acquisition time. One way to speed up data acquisition is parallelization. Here we will discuss a general approach to address this issue, using a multispot excitation and detection geometry that can accommodate different types of novel highly-parallel detector arrays. We will illustrate the potential of this approach with fluorescence correlation spectroscopy (FCS) and single-molecule fluorescence measurements obtained with different novel multipixel single-photon counting detectors. PMID:21625288

  17. Eder Acquisition 2007 Habitat Evaluation Procedures Report.

    SciTech Connect

    Ashley, Paul R.

    2008-01-01

    A habitat evaluation procedures (HEP) analysis was conducted on the Eder acquisition in July 2007 to determine how many protection habitat units to credit Bonneville Power Administration (BPA) for providing funds to acquire the project site as partial mitigation for habitat losses associated with construction of Grand Coulee and Chief Joseph Dams. Baseline HEP surveys generated 3,857.64 habitat units, or 1.16 HUs per acre. HEP surveys also served to document general habitat conditions. Survey results indicated that the herbaceous plant community lacked forb species, which may be due to both livestock grazing and the late timing of the surveys. Moreover, the herbaceous plant community lacked structure, based on lower-than-expected visual obstruction readings (VOR), likely a direct result of livestock impacts. In addition, introduced herbaceous vegetation, including cultivated pasture grasses (e.g., crested wheatgrass) and/or invader species such as cheatgrass and mustard, was present in most areas surveyed. The shrub element within the shrubsteppe cover type was generally a mosaic of moderate-to-dense shrubby areas interspersed with open grassland communities, while the 'steppe' component was almost entirely devoid of shrubs. Riparian shrub and forest areas were somewhat stressed by livestock. Moreover, shrub and tree communities along the lower reaches of Nine Mile Creek suffered from lack of water due to the previous landowner's 'piping' of water out of the stream channel.

  18. Partially Opened Oven on Phoenix

    NASA Technical Reports Server (NTRS)

    2008-01-01

    This view from the Robotic Arm Camera on NASA's Phoenix Mars Lander shows partial opening of doors to one of the tiny ovens of the Thermal and Evolved-Gas Analyzer.

    Each oven has a pair of spring-loaded doors. Near the center of the image, the partial opening of a pair of doors reveals screen over the opening where a soil sample will be delivered. The door to the right is fully opened and the one to the left is partially deployed. The doors are 10 centimeters (4 inches) long. The opening is 4 centimeters (1.5 inches) wide.

    Tests on the Phoenix testbed at the University of Arizona, Tucson, indicate that a soil sample could be delivered into the oven through the partially opened doors. Engineers are also exploring possibilities for opening the doors more completely. This image was taken during Phoenix's eighth Martian day, or sol (June 2, 2008).

    The Phoenix Mission is led by the University of Arizona, Tucson, on behalf of NASA. Project management of the mission is by NASA's Jet Propulsion Laboratory, Pasadena, Calif. Spacecraft development is by Lockheed Martin Space Systems, Denver.

  19. Covert Reinforcement: A Partial Replication.

    ERIC Educational Resources Information Center

    Ripstra, Constance C.; And Others

    A partial replication of an investigation of the effect of covert reinforcement on a perceptual estimation task is described. The study was extended to include an extinction phase. There were five treatment groups: covert reinforcement, neutral scene reinforcement, noncontingent covert reinforcement, and two control groups. Each subject estimated…

  20. Partially molten magma ocean model

    SciTech Connect

    Shirley, D.N.

    1983-02-15

    The properties of the lunar crust and upper mantle can be explained if the outer 300-400 km of the moon was initially only partially molten rather than fully molten. The top of the partially molten region contained about 20% melt, decreasing to 0% at 300-400 km depth. Nuclei of anorthositic crust formed over localized bodies of magma segregated from the partial melt, then grew peripherally until they covered the moon. Throughout most of its growth period the anorthosite crust floated on a layer of magma a few km thick. The thickness of this layer is regulated by the opposing forces of loss of material by fractional crystallization and addition of magma from the partial melt below. Concentrations of Sr, Eu, and Sm in pristine ferroan anorthosites are found to be consistent with this model, as are trends for the ferroan anorthosites and Mg-rich suites on a diagram of An in plagioclase vs. mg# in mafics. Clustering of the Eu, Sr, and mg# values found among pristine ferroan anorthosites is predicted by this model.